Digit. Commun. Networks
# An Analysis of Energy Consumption and Carbon Footprints of Cryptocurrencies and Possible Solutions
#### Varun Kohli [a], Sombuddha Chakravarty [b], Vinay Chamola [∗,b], Kuldip Singh Sangwan [c], Sherali Zeadally [d]
**_Abstract—There is an urgent need to control global warming caused by humans to achieve a sustainable future. CO2 levels are rising steadily, and while countries worldwide are actively moving toward the sustainability goals proposed under the 2015 Paris Agreement, we are still a long way from achieving a sustainable mode of global operation. The increased popularity of cryptocurrencies since the introduction of Bitcoin in 2009 has been accompanied by an increasing trend in greenhouse gas emissions and high electrical energy consumption. Popular energy tracking studies (e.g., Digiconomist and the Cambridge Bitcoin Energy Consumption Index (CBECI)) have estimated energy consumption ranges of 29.96 TWh to 135.12 TWh and 26.41 TWh to 176.98 TWh respectively for Bitcoin as of July 2021, which are equivalent to the energy consumption of countries such as Sweden and Thailand. The latest estimate by Digiconomist on carbon footprints shows a 64.18 MtCO2 emission by Bitcoin as of July 2021, close to the emissions of Greece and Oman. This review compiles estimates made by various studies from 2018 to 2021. We compare the energy consumption and carbon footprints of these cryptocurrencies with those of countries around the world and of centralized transaction methods such as Visa. We identify the problems associated with cryptocurrencies, and propose solutions that can help reduce their energy usage and carbon footprints. Finally, we present case studies on two cryptocurrency networks, namely Ethereum 2.0 and Pi Network, with a discussion on how they solve some of the challenges we have identified._**

**_Index Terms—Blockchain, Carbon footprint, Climate change, Cryptocurrency, Sustainability_**
I. INTRODUCTION
The past century has witnessed a steady rise in atmospheric Green House Gas (GHG) levels, with nearly 584 Gt CO2 from fossil fuels, land use change and industrial activity contributing to a 0.9°C global temperature increase since 1960 [1]. CO2 levels have increased from 250 ppm in 1960 to 400 ppm in 2020, and current trends show a rise in natural disasters caused by high temperatures and droughts [2]. Day and night temperatures have increased worldwide, and average global temperatures are expected to rise by 3-5°C by 2100 according to the Intergovernmental Panel on Climate Change (IPCC) [3]. Evidence suggests a change in the lengths of seasons across the globe due to global warming: summers in the mid-high latitudes have lengthened while winters have shortened, along with shorter spring and autumn periods [4]. It has been predicted that even if GHG levels do not increase beyond current levels, summers will last nearly half a year while winters will be less than two months long by 2100.

_a Department of Electrical and Computer Engineering, National University of Singapore, Singapore (email: varun.kohli@u.nus.edu)_
_b Department of Electrical and Electronics Engineering & APPCAIR, BITS-Pilani, Pilani Campus, 333031, India (email: f2016165p@alumni.bits-pilani.ac.in, vinay.chamola@pilani.bits-pilani.ac.in)_
_c Department of Mechanical Engineering, BITS-Pilani, Pilani Campus, 333031, India (email: kss@pilani.bits-pilani.ac.in)_
_d College of Communication and Information, University of Kentucky, Lexington, KY 40506-0224, USA (email: szeadally@uky.edu)_
In 2015, leaders from 197 countries settled upon the Paris Agreement, with the aim of keeping human-caused global warming under 2°C [5]. This is already a difficult task given the increase in population and energy consumption, and the lack of environment-friendly policies by governments worldwide. The USA, China, Japan, Germany, and India, which have been the main ecological footprint hotspots since 2019, also correspond to the top GHG-emitting nations in the world [6]. Since 2009, various cryptocurrencies have emerged, starting with Bitcoin, which was the first well-known application of Satoshi Nakamoto's blockchain technology introduced in 2008 [7]. It soon became the biggest cryptocurrency in the world, with a market capitalization of USD 614.9 billion as of July 2021 among the 5,655 known cryptocurrencies [8]: Ethereum, Tether, Binance, Cardano, and Dogecoin, to name a few, together account for a total market capitalization of USD 1.39 trillion. Millions of transactions are made every single day to exchange these currencies, and their stock markets and operations run 24/7 [9]. The electrical energy consumption of cryptocurrencies is over-proportionate compared to their technical performance [10], and despite their promising applications, cryptocurrencies have also been contributors to global warming due to their high carbon footprint [11]. It has been predicted that Bitcoin alone could raise global temperatures by 2°C within the next three decades [1].
Due to the distributed nature of cryptocurrency networks, obtaining close estimates of electrical energy consumption and carbon footprints is a difficult task. The main sources of uncertainty are the mining equipment used [12] and the source of energy [13]. The minimum and maximum power demand estimates for the Bitcoin network according to various studies conducted between 2014 and 2018 have been compiled in [14]. Estimates in the range of 2.5 GW to 7.67 GW [15], 1.3 GW to 14.8 GW and 15.47 TWh to 50.24 TWh [16], and 22 TWh to 105 TWh [17] were made for Bitcoin in 2018. The power consumption was later estimated to be 4.3 GW in March 2020, nearly a 68% share of the 6.5 GW drawn in total by the top 20 cryptocurrencies [18]. This estimate did not consider auxiliary losses caused by the cooling and mining equipment, and with that premise, the true power draw is expected to be higher. Moreover, since only 20 cryptocurrencies were covered in that study, the actual power consumption of the 5,654 cryptocurrency networks [8] would be much higher still. Among the latest data on consumption, the University of Cambridge Bitcoin Energy Consumption Index (CBECI) [19] shows theoretical minimum and maximum consumptions of 26.09 TWh and 174.82 TWh respectively, with an estimate of 69.63 TWh. A study from early 2021 [10] showed a range of 60 TWh to 125 TWh per year for Bitcoin, 15 TWh for Ethereum and 100 TWh for Bitcoin Cash. A sensitivity-based method used by Alex de Vries in early 2021 [11] factored in the Bitcoin market price, the electricity cost and the percentage of miners' income spent on electricity. The results showed the Bitcoin network energy consumption to be up to 184 TWh. His widely followed blog Digiconomist, founded in 2014 [20], estimates the energy consumption of Bitcoin and Ethereum to be 135.12 TWh and 55.01 TWh respectively as of July 2021.

Fig. 1: The flow of this review.
As a consequence of their high electrical energy consumption, cryptocurrencies have also been found to have high carbon footprints. The carbon footprint of Bitcoin alone was estimated to be 63 MtCO2 in 2018 [21] and 55 MtCO2 in 2019 [9]. Another study in 2018 [22] stated a footprint of 38.73 MtCO2, equivalent to that of Denmark, over 700,000 Visa transactions or nearly 49,000 hours of YouTube viewing. Alex de Vries showed the footprint to be up to 90.2 MtCO2 in early 2021 [11], with an estimate of 64.18 MtCO2 [23]. Along similar lines, Digiconomist also calculated a 26.13 MtCO2 footprint for Ethereum in July 2021. The 3rd Global Cryptoasset Benchmarking Study (GCBS) conducted by the University of Cambridge in 2020 [24] found an average renewable energy share of 39% in Proof of Work (PoW) mining, while a contesting result of a 78% renewable share was found in a 2018 study [25]. But considering the high carbon footprints of these cryptocurrencies, we can infer that there is still a considerable load on non-renewable sources of energy such as fossil fuels.
From the discussion so far, we found that the energy consumption and carbon footprints of cryptocurrencies are very high. We show later in this work that these metrics are close to, if not more than, those of several countries, and that much of this high energy consumption stems from mechanisms used by many of the cryptocurrency implementations. Figure-1 presents the organization of this review paper. We summarize the main research contributions of this review as follows:

_•_ We present a global perspective on the energy consumption and carbon footprints of the two most popular cryptocurrencies, namely Bitcoin and Ethereum. We also present a comparison of the energy consumption and carbon emissions of Bitcoin, Ethereum, and the card payment system Visa.

_•_ We identify four underlying factors responsible for the high energy consumption and carbon emissions of Bitcoin and Ethereum.

_•_ We discuss possible solutions to address the factors that result in high energy consumption and carbon emissions for cryptocurrencies such as Bitcoin and Ethereum. Additionally, we discuss two case studies on work-in-progress solutions.
II. BACKGROUND
This section presents a brief overview of blockchain technology and cryptocurrencies. It provides a global perspective on the energy consumption and carbon emissions of the two biggest cryptocurrencies, namely Bitcoin and Ethereum, and compares them to the centralized transaction system Visa.
_A. Blockchain, Bitcoin and Cryptocurrencies_

Blockchain is a disruptive technology of distributed ledgers which was created by Satoshi Nakamoto in 2008 [7]. A blockchain is a database that chronologically stores information in "blocks". Each block consists of the stored information, a timestamp, the hash value of the previous block and a unique
Fig. 2: A block diagram depicting the structure of a blockchain (each block stores the previous block's hash, a nonce and transactions Txn1, Txn2, Txn3, ...).
TABLE I: Ranking Bitcoin and Ethereum among countries based on annual electrical energy consumption as of July 2021
[23, 26–29] (Note: N.A. stands for Not Available).
|Rank|Country|Population (Millions) [26]|Energy (TWh) [23, 27–29]|Share (%)|
|---|---|---|---|---|
|0|World|7,878.2|23,398.00|100.00|
|1|China|1,444.9|7,500.00|32.05|
|2|U.S.A|332.9|3,989.60|17.05|
|3|India|1,366.4|1,547.00|6.61|
|20|Taiwan|23.8|237.55|1.01|
|21|Vietnam|98.2|216.99|0.92|
|22|South Africa|60.1|210.30|0.89|
|23|Bitcoin + Ethereum|N.A.|190.13|0.81|
|24|Thailand|69.9|185.85|0.79|
|25|Poland|37.80|153.00|0.65|
|26|Egypt|104.3|150.57|0.64|
|27|Malaysia|3.1|147.21|0.62|
|28|Bitcoin|N.A.|135.12|0.57|
|29|Sweden|10.2|131.79|0.56|
|49|Switzerland|8.7|56.35|0.24|
|50|Ethereum|N.A.|55.01|0.24|
|51|Romania|19.1|55.00|0.23|
identification number called the nonce. Once a block has been filled, it is added or "chained" onto the previously filled block, thereby creating a "blockchain", as Figure-2 shows. In addition, any change to a block is detected through that block's hash value, making fraud easy to identify [31]. Blockchain offers many benefits. First, it stores data chronologically and securely, with a copy of the ledger stored on every node in the cryptocurrency network. Second, the functionality of the network is maintained even if a few participating nodes are removed or malfunction. Third, peer-to-peer trust is maintained through the consensus mechanism, which removes the need for intermediaries that may not be trustworthy. Blockchain finds applications in various areas such as logistics and supply chain [32, 33], e-commerce [34], education [35], healthcare [36], governance [37] and others [38]. It can also be used in telecommunication technology [39], stock exchange [40], industrial IoT [41], smart city development [42, 43], energy management [44], Unmanned Aerial Vehicles (UAV) [45], and smart grids [46]. But the most successful application has been in the banking sector [47], with the rise of over 5,000 cryptocurrencies as of July 2021 [8].
Bitcoin, as described by Satoshi Nakamoto, is a peer-to-peer electronic cash system in which the double-spending prevention process is decentralized across various nodes through a consensus protocol. All Bitcoin transactions are time-stamped, and any double-spending attempts are rejected. "Bitcoin miners" play a major role in maintaining consensus over the ledger's state through the PoW (discussed in depth in Section-III), in which they compete with others on the cryptocurrency network to solve resource-intensive cryptographic problems to earn the right to add their proposed block onto the chain. The difficulty of the puzzle changes over time to keep the time to mine a block at nearly 10 minutes [48]. Miners invest in higher computational power in order to not be left behind in the race to push their blocks onto the ledger. Successful attempts are awarded a certain quantity of Bitcoin (BTC) for each block solved. The reward is halved every 210,000 blocks, in order to maintain a steady synthetic inflation until all 21 million possible BTC are in circulation [1, 49]. The reward per block has been 6.25 BTC since the most recent halving, which occurred on May 11, 2020 [50]. With nearly 140,000 blocks left to mine, the next halving is expected
TABLE II: Ranking of Bitcoin and Ethereum among countries based on annual carbon footprint as of July 2021 [23, 26, 27, 30].
|Rank|Country|Population (Millions) [26]|Emission (MtCO2)|Share (%)|
|---|---|---|---|---|
|0|World|7,878.2|37,077.40|100.00|
|1|China|1,444.9|10,060.00|27.13|
|2|U.S.A|332.9|5,410.00|14.59|
|3|India|1,336.4|2,300.00|6.2|
|38|Nigeria|211.3|104.30|0.28|
|39|Czech Republic|10.7|100.80|0.27|
|40|Belgium|11.6|91.20|0.24|
|41|Bitcoin + Ethereum|N.A.|90.31|0.24|
|42|Kuwait|4.3|87.80|0.23|
|43|Qatar|2.9|87.00|0.23|
|49|Oman|5.2|68.80|0.18|
|50|Bitcoin|N.A.|64.18|0.17|
|51|Greece|10.3|61.60|0.16|
|76|Tunisia|11.94|26.20|0.07|
|77|Ethereum|N.A.|26.13|0.07|
|78|SAR|17.9|25.80|0.06|
to occur on March 26, 2024.
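The halving schedule just described (50 BTC per block at launch in 2009, halved every 210,000 blocks) can be checked with a short calculation; the same geometric series also explains the 21 million BTC cap:

```python
HALVING_INTERVAL = 210_000   # blocks between reward halvings
INITIAL_REWARD = 50.0        # BTC block subsidy at launch in 2009

def block_reward(height: int) -> float:
    """Block subsidy in BTC at a given block height."""
    return INITIAL_REWARD / 2 ** (height // HALVING_INTERVAL)

assert block_reward(0) == 50.0          # launch era
assert block_reward(650_000) == 6.25    # after the May 2020 (third) halving
assert block_reward(840_000) == 3.125   # after the next (fourth) halving

# Summing the subsidy over all halving eras approaches the 21 million cap:
total = sum(HALVING_INTERVAL * INITIAL_REWARD / 2 ** era for era in range(64))
assert abs(total - 21_000_000) < 1
```

(The real protocol rounds each subsidy down to whole satoshis, so the actual supply ends slightly below 21 million BTC.)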
Another popular blockchain network, Ethereum, introduced the concept of a programmable network. Ethereum supports the cryptocurrency Ether (ETH), which has the second-highest market capitalization [8]. With the development of the Ethereum Virtual Machine (EVM), the concept of smart contracts (i.e., the automatic execution of contracts when certain conditions are met) was proposed. However, as is the case with Bitcoin, Ethereum is also based on the PoW consensus algorithm, and it is therefore associated with the same issues of electrical energy consumption and carbon footprints. Ethereum has proposed Ethereum 2.0 in order to address most of these issues with BTC and ETH, which we discuss in more detail in Section-V.
_B. A Global Perspective: Energy Consumption and CO2 Emissions_

Table-I shows the comparison of the electrical energy consumption of Bitcoin and Ethereum obtained from Digiconomist [23, 27]. We obtained the country-wise consumption and population data from the U.S. Energy Information Administration database [28] and Worldometer [26] respectively. We calculated the percentage Share of energy consumption as follows:

Share = (Energy_i / Energy_w) × 100 (1)

where Energy_i is the energy consumption of the country at rank i and Energy_w corresponds to the total energy consumption of the world as the table shows. Accordingly, the estimated total electrical energy consumption shares of Bitcoin and Ethereum are 0.58% and 0.23%. They rank 28th and 50th, with 135.12 TWh and 55.01 TWh of consumption respectively. The University of Cambridge has also arrived at a close estimate of 0.6% for Bitcoin [19], which supports these calculations. The consumption of Bitcoin is comparable to that of Sweden (131.79 TWh, 0.56%), while that of Ethereum is nearly the same as Romania's (55 TWh, 0.23%). Considering the high rated-power share of 79.85% held by these two cryptocurrencies among all in circulation as of March 2020 [18], the data for the two cryptocurrencies as a single entity has also been considered to obtain a holistic representation. It is worth noting that together they rank 23rd in the world and consume a total of 190.13 TWh of energy annually with a share of 0.81%, which is equivalent to Thailand (185.85 TWh, 0.79%).
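Equation-1 applies directly to the figures in Table-I; a quick calculation reproduces the shares quoted above (small rounding differences account for the 0.57% vs 0.58% and 0.23% vs 0.24% figures that appear in the table and text):

```python
WORLD_ENERGY_TWH = 23_398.00   # world total from Table-I

def share(energy_twh: float) -> float:
    """Percentage share of world electrical energy consumption (Equation-1)."""
    return energy_twh / WORLD_ENERGY_TWH * 100

print(round(share(135.12), 2))           # Bitcoin
print(round(share(55.01), 2))            # Ethereum
print(round(share(135.12 + 55.01), 2))   # Bitcoin + Ethereum combined
```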
Table-II presents a similar ranking, but this time based on annual CO2 emissions. Data on the emissions of various countries was obtained from the International Energy Agency database [30]. The percentage Share has been calculated in the same manner as for energy consumption. It can be observed from the table that Bitcoin ranks 50th in emissions among the 143 countries in this database, with 64.18 MtCO2 of emissions and a share of 0.17%. These values are close to those of Oman (68.8 MtCO2, 0.18%) and Greece (61.6 MtCO2, 0.16%). The statistics for Ethereum are also significant, with a rank of 77, emissions of 26.13 MtCO2 and a 0.07% global share, which is comparable to Tunisia (26.2 MtCO2, 0.07%). When the two cryptocurrencies are considered together, they rank 41st in the world, with emissions of 90.31 MtCO2 and a global share of 0.24%, which is nearly the same as Belgium (91.2 MtCO2, 0.24%).
_C. Comparison with Visa_

Table-III and Table-IV present the data available on the energy consumption and CO2 emissions of Bitcoin [23], Ethereum [27] and Visa [27, 51]. Table-III shows the annual energy consumption and emission values for the three transaction methods, considering all
TABLE III: Energy consumption and carbon footprints of Bitcoin, Ethereum and Visa (total) as of July 2021 [23, 27, 51].

|Transaction method|Market cap ($ Billion)|Transactions/day (Million)|Emission (MtCO2)|Energy consumption (TWh)|
|---|---|---|---|---|
|Bitcoin [23]|617.05|0.4|64.18|135.12|
|Ethereum [27]|247.8|1.23|26.13|55.01|
|Visa [51]|520.62|500|62,400|197.57|

TABLE IV: Comparison of energy consumption and carbon footprints per transaction for Bitcoin, Ethereum and Visa as of July 2021 [23, 27].

|Transaction method|Emission (KgCO2)|Energy consumption (kWh)|
|---|---|---|
|Bitcoin [23]|844.13|1777.11|
|Ethereum [27]|59.55|125.36|
|Visa [27]|0.00045|0.0015|
sources of consumption in Visa. While at first glance it may seem that the total CO2 emissions and energy consumption are comparatively high for Visa, it is worth pointing out that the numbers of daily transactions occurring in the Bitcoin and Ethereum networks are 0.4 million and 1.25 million, i.e. 0.08% and 0.25% respectively of the 500 million daily Visa transactions. This highlights the over-proportionate consumption of cryptocurrencies, which are relatively nascent transaction methods. In addition, the total metrics for Visa have been calculated considering all requirements to run the corporation's offices, such as office and server electricity, and employee commutes.
Table-IV shows the per-transaction estimates for the three transaction methods, considering only the computational costs. From the table we observe that the energy consumption and CO2 emissions per transaction are very high for Bitcoin and Ethereum. Figure-3 presents a visual comparison of these per-transaction metrics; the energy consumption and CO2 emissions for Visa have been plotted after raising their values by a factor of 10^5. Accordingly, Table-V shows the Break Even (BE) values that correspond to the number of Visa transactions required to equal the total energy consumption and CO2 emission of a single transaction of these cryptocurrencies. We calculate BE as follows:

BE^M_Visa/i = M_i / M_Visa (2)

where BE^M_Visa/i is the BE value for Visa with cryptocurrency i, which is either Bitcoin or Ethereum, and M corresponds to the metric in consideration, energy consumption or CO2 emissions. As Table-V shows, it takes 1,195,657 Visa transactions to use the same amount of electrical energy as one transaction of Bitcoin, and 1,870,875 Visa transactions to generate the same carbon footprint as a single transaction of Bitcoin. Similarly, the BE counts of Visa to Ethereum are 83,574 for energy consumption and 132,334 for carbon footprint.

Fig. 3: Electrical energy consumption and CO2 emissions per transaction for Bitcoin, Ethereum and Visa [23, 27].

TABLE V: Break Even (BE) count for the number of Visa transactions per Bitcoin and Ethereum transaction as of July 2021, obtained from Equation-2.

|Category|BE (Energy consumption)|BE (CO2 emission)|
|---|---|---|
|Visa/Bitcoin|1,195,657|1,870,875|
|Visa/Ethereum|83,574|132,334|

III. PROBLEMS

Based on a review of past studies, we have identified four major factors responsible for the high energy consumption and CO2 emissions of cryptocurrencies, namely: the Proof of Work consensus mechanism, redundancy in operation and traffic, mining devices, and the energy sources. This section discusses these issues so that future development in cryptocurrencies can take them into consideration.

_A. Consensus Mechanism: Proof of Work_

PoW was the first consensus mechanism proposed for blockchain networks [7]. Paul Hauner, a contributor to Ethereum, acknowledged the high energy requirements of PoW [52] as the reason for the development of Ethereum 2.0, which we discuss in more detail in Section-V. While redundancy in the operation and traffic of cryptocurrency networks is also a contributor to energy consumption (as we discuss in the next subsection), the transactions themselves do
not consume as much energy as the PoW process does. It has been shown that PoW mining has high computational needs and thus imposes major limitations on the continuous use and scalability of cryptocurrencies [53, 54]. Recent research estimates that PoW mining in Bitcoin consumes nearly 18 GW of power for 100 million transactions a week [53], making the practical use of Bitcoin questionable. Based on current trends, a study from 2021 predicted that, because of the rapid growth of cryptocurrencies, PoW mining processes in China alone will consume nearly 300 TWh of electrical energy and generate 130 MtCO2 by 2024 [55]. To understand why PoW is such an important energy issue, we first need to understand its operation.

Figure-4 shows the mining process in Bitcoin using the PoW. Each new block proposed every T minutes is given a hash that is computed from the 256-bit hash of the previous block, the Nonce and the Merkle root using the equation:

SHA256(H_prev + M_B + Nonce) ≤ Target (3)
TABLE VI: Performance metrics of different mining devices (Sources: [9, 14, 59]).

|Hardware type|Mining rate (GH/s)|Efficiency (J/GH)|mEC (TWh)|
|---|---|---|---|
|CPU|0.01|9000|11,000|
|GPU|0.2 – 2|1500 – 400|3,000|
|FPGA|0.1 – 25|100 – 45|250|
|ASIC|44,000|0.05|1.46|
where SHA256 is the hash function, H_prev refers to the 256-bit hash of the previous block, Nonce is a one-time-use positive number and M_B is the Merkle root. Once the hash has been calculated, it is compared with the target hash value. This target value is set to increase the difficulty of mining so as to maintain a constant time for a block to be added to the chain; this time is set to 10 minutes for Bitcoin. If the hash is higher than the target, the Merkle root is changed, the nonce is re-calculated and another hash is generated. This process is repeated until the miner reaches a hash value below the set target.
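The target adjustment itself is not spelled out above. As a hedged sketch of how Bitcoin does it: the target is rescaled every 2,016 blocks by the ratio of actual to expected mining time, clamped to a factor of four in either direction (the constants below reflect the protocol; the function itself is a simplification):

```python
BLOCK_TIME_MIN = 10        # desired minutes per block
RETARGET_BLOCKS = 2016     # blocks between difficulty adjustments

def retarget(old_target: int, actual_minutes: int) -> int:
    """Scale the PoW target by actual/expected time, clamped to 4x either way."""
    expected = BLOCK_TIME_MIN * RETARGET_BLOCKS
    # Clamp to the protocol's 4x adjustment limit; integer arithmetic throughout.
    actual = max(expected // 4, min(expected * 4, actual_minutes))
    return old_target * actual // expected

target = 2 ** 220
# Blocks found twice as fast as intended: target halves (mining gets harder).
assert retarget(target, RETARGET_BLOCKS * 5) == target // 2
# Blocks found twice as slow: target doubles (mining gets easier).
assert retarget(target, RETARGET_BLOCKS * 20) == target * 2
```

A lower target means fewer acceptable hash values, which is exactly the "increase the difficulty" behavior described above.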
It is computationally expensive to find the nonce, and finding it therefore provides proof of the amount of computational power expended by the miner, giving this consensus mechanism the name PoW. Since the solution-searching process cannot be sped up by parallelization or alternative algorithms [56], a miner's share of the reward can be equated to the share of computational power it owns in the cryptocurrency network [11]. As mining becomes harder over time, PoW becomes an arms race of computational power and resources, because miners with more powerful devices compute more hashes per second.
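The search loop behind Equation-3 can be sketched as follows. This is a toy illustration with an artificially easy target; real Bitcoin mining hashes an 80-byte block header with double SHA-256 rather than a concatenated string:

```python
import hashlib

def mine(prev_hash: str, merkle_root: str, target: int) -> tuple[int, int]:
    """Increment the nonce until SHA256(H_prev + M_B + Nonce) <= Target (Eq. 3)."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{prev_hash}{merkle_root}{nonce}".encode()).digest()
        value = int.from_bytes(digest, "big")
        if value <= target:
            return nonce, value
        nonce += 1

# Easy target: the top 8 bits of the hash must be zero,
# so roughly 1 in 256 nonces succeeds.
target = 2 ** 248
nonce, value = mine("00" * 32, "ab" * 32, target)
assert value <= target
```

Halving the target doubles the expected number of iterations, which is the arms-race dynamic described above.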
_B. Redundancy in Traffic and Operation_

While PoW blockchains have energy problems which stem mainly from the consensus mechanism, energy consumption due to redundant operations and network traffic becomes more relevant in non-PoW blockchains. It arises from the system storing the complete ledger on all nodes in the network [57]. In addition, each node performs the operations associated with transactions independently, based on the available transaction information. Redundant network traffic is another contributor to this problem [58]. Redundancy reduces the efficacy of the system [58] while also increasing the total electrical energy consumption [10].

As stated in [10], redundancy in the network arises from the number of nodes and the workload on each node. In [58], simulation results obtained for network traffic redundancy showed its impact with network size, number of peers, and routing length. A linear relation was found between the total number of peers and the traffic redundancy in the Bitcoin network, with over 98% of network traffic being redundant, showing the inefficiency of the current Bitcoin broadcasting algorithm. Every 1,000 nodes were shown to increase the effective traffic by 0.3 GB, while the total traffic increased by 24 GB, a redundancy of 23.7 GB. In addition, the study found a positive correlation between the routing path length and traffic redundancy in the network, demonstrating that denser networks with shorter routing lengths have less redundant traffic.

Fig. 5: Logarithmic plot of electrical energy versus hashes calculated for Application-Specific Integrated Circuit (ASIC), Field-Programmable Gate Array (FPGA), Graphics Processing Unit (GPU) and Central Processing Unit (CPU) devices [12].
_C. Mining Devices_

In [59], the authors argued that if all mining facilities utilized highly efficient ASIC-based mining devices, as done in the KnCMiner facility in Sweden, the overall Bitcoin mining process would consume nearly 1.46 TWh worldwide, which is much lower than the current estimates of 184 TWh [11], 135.12 TWh [23] and 69.63 TWh [19] for 2021. This discrepancy demonstrates that inefficient mining devices are being used worldwide. Thus, a major contributor to energy consumption is the use of inefficient mining devices [12, 14]. Due to the increasing difficulty of mining, several generations of devices have been used since the introduction of Bitcoin, starting from CPUs in 2009, to GPUs in 2010, FPGAs in 2011 and ASICs since 2013 [9]. Table-VI presents these devices along with their hash rates, efficiencies [12], and minimum total energy consumption [14, 59]. The total energy consumption corresponds to the
Fig. 6: Energy consumption per transaction for various cryptocurrencies and their consensus mechanisms: Bitcoin (PoW), Ethereum (PoW), Dogecoin (PoW), CHIA (PoSpace), XRP (XRP), Eth 2.0 (PoS), IOTA (FPC) and Hedera (Hashgraph).
amount of energy used when only that type of device is used for Bitcoin mining worldwide. From the table, we note that CPU provides the least computational power, measured in giga-hashes (GH), at 0.01 GH/s while consuming the most energy per GH at 9,000 J/GH, whereas the ASIC-based devices used in the study provide the highest computational power, 44,000 GH/s, at an efficiency of 0.05 J/GH. Figure-5 is a logarithmic plot of energy consumption against the number of GH computed. The plot depicts the high energy consumption of CPU devices, followed by GPU and FPGA, with ASIC being the most efficient among them. It is important to note that ASIC-based devices provide 40,000 to 200,000 times the computational power of GPUs, as can be seen in Table-VI. However, they create a problem of centralization of computational power [60]. ASIC-resistant algorithms remove the benefit of using ASIC-based devices, because reaching a solution for these algorithms with ASIC devices is either impossible or comparable to GPUs. Such algorithms, for instance, X16Rv2 in Ravencoin and Ethash in Ethereum, force miners to use general-purpose and cheaper devices such as GPUs, which causes an over-proportionate amount of energy consumption [18].
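To make the efficiency gap concrete, the following Python sketch compares the energy and time needed for a fixed hashing workload per device class. The CPU and ASIC figures (0.01 GH/s at 9,000 J/GH and 44,000 GH/s at 0.05 J/GH) are the ones quoted above; the GPU and FPGA entries are hypothetical placeholder values for illustration only, not figures from Table-VI.

```python
# Energy and time to compute a fixed hashing workload per device class.
# CPU and ASIC figures are the ones quoted in the text; the GPU and FPGA
# entries are hypothetical placeholders for illustration only.
devices = {
    # name: (hash rate in GH/s, efficiency in J/GH)
    "CPU":  (0.01, 9000.0),
    "GPU":  (0.5, 200.0),      # hypothetical
    "FPGA": (10.0, 20.0),      # hypothetical
    "ASIC": (44000.0, 0.05),
}

WORKLOAD_GH = 1_000_000  # one million giga-hashes

for name, (rate_ghs, j_per_gh) in devices.items():
    energy_kwh = WORKLOAD_GH * j_per_gh / 3.6e6  # joules -> kWh
    hours = WORKLOAD_GH / rate_ghs / 3600        # time at the device's rate
    print(f"{name}: {energy_kwh:,.2f} kWh in {hours:,.1f} h")
```

With the quoted figures, a CPU needs 9,000/0.05 = 180,000 times the energy of an ASIC for the same workload, which is why rational miners abandon general-purpose hardware.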
In 2019, the authors of [14] investigated the power demand
of Bitcoin mining by considering the performance of 269
mining hardware devices (111 CPU, 111 GPU, 4 FPGA and
43 ASIC) in a 160 GB Bitcoin network. They used data
published by the manufacturer in whitepapers corresponding
to the device, and also the user-benchmark [61] and passmark [62] websites for manufacturer reliability. The study also
considered mining pools [63] to make estimates using the
regional electricity costs. Two metrics were defined, namely the Minimum Energy Consumption (mEC) and the Maximum Energy Consumption (MEC), corresponding to the energy consumption of the most efficient and the least efficient devices respectively. Calculations of the mEC showed that, in comparison with the global energy demand of 23,000 TWh, continued use of CPU devices alone would consume a minimum of 11,000 TWh of electrical energy. GPU and FPGA devices were shown to consume a minimum of nearly 3,000 TWh and 250 TWh respectively, while ASIC devices consumed the least among all devices. The minimum and
maximum power demands for all devices were shown to be 2
GW and 6 GW respectively.
_D. Sources of Energy_
The annual carbon footprint of Aluminium mining has
been estimated at 90 MtCO2 [11] and that of Oman is 68.8
MtCO2 as Table-II shows. From our earlier discussion on
global comparisons of carbon footprints of cryptocurrencies
in Section-II, considering the latest emission estimates of
up to 90 MtCO2 [11] and 64.18 MtCO2 [23] for Bitcoin and
26.13 MtCO2 for Ethereum [27] respectively, it is alarming
to see nation-level and industry-level carbon emissions from
relatively nascent transaction systems.
The earliest research on the impact of energy consumption
on ecological footprints [64] found the negative impacts of
fossil fuels on the environment. Subsequently, research indicated a negative impact on the ecological footprint due to the
excessive use of fossil fuels by the pulp production industry
in the Canadian Prairies [65]. It is important to note that
the results of these studies can also extend to non-renewable
energy based cryptocurrency mining. While the sources of
energy themselves do not cause the over-proportionate electrical energy consumption in PoW blockchains, the use of non-renewable energy sources leads to high carbon footprints [66].
Due to the cryptocurrency networks being distributed, it is
difficult to obtain an accurate share of renewable and non-renewable sources of energy during mining [13]. Additionally,
there is also uncertainty in estimations based on the mining
devices used [12]. While some studies [25, 67] argue that
the main source of energy for cryptocurrencies is renewable
with a share of nearly 80%, the 3rd GCBS [24] shows a
61% reliance on non-renewable sources of energy. Statistics
of cryptocurrency mining in China show a 58% and 42% split
of hydro-energy and coal-heavy power generation respectively
according to a recent study [9]. The research has estimated
an adjustment emission factor of 550 g/kWh for China by
considering a weighted average of the hydro-rich and coal-heavy provinces of Sichuan and Inner Mongolia. Considering
the mining pool share based on hashrate (number of hashes
computed per second) of 46% in China [19] and the prediction
of 130 MtCO2 of the Bitcoin network by 2021 in China alone
[55], along with the continued use of fossil fuels, there is a real threat to the environment, all of which makes the expected rise of 2°C contributed by Bitcoin within the next few decades a real possibility [1]. It is therefore necessary to find green
solutions to minimize the CO2 emissions of cryptocurrencies.
IV. SOLUTIONS
As we have discussed in the previous section, present-day cryptocurrencies pose problems of high energy consumption and CO2 emissions for various reasons, such as the
PoW, redundancy, device efficiency and sources of energy.
This section explores and recommends solutions to address
these issues by providing examples from past works in individual areas of alternative consensus mechanisms and redundancy
reduction. It also discusses some of the most popular and
effective mining devices as of July 2021, and conducts an in-depth analysis of renewable energy sources in top mining areas
as alternatives to fossil fuels to reduce the carbon footprints
of cryptocurrencies.
_A. Alternative Consensus Mechanisms_
Out of the four issues we have discussed above, the consensus mechanism is the biggest contributor to the energy
consumption in current cryptocurrencies that use PoW. Thus,
one viable option would be to explore other consensus mechanisms which are more energy-efficient than PoW. Figure-6
provides the Electrical Energy Consumption per transaction
(EEC/trans) for various cryptocurrencies compiled from the
studies [23, 27, 69, 70].
One of the most promising substitutes for the PoW is
the Proof of Stake (PoS) consensus mechanism which was
first used in Peercoin [71] as an energy-saving alternative to PoW. It has also been proposed for Ethereum 2.0, which
is discussed in Section-V. In PoS, the proof is derived from
stakes, i.e. contributions of miners to the blockchain, instead
of computational power. This removes the computational race
involved in the PoW thereby reducing energy consumption
and CO2 emissions during mining [72]. In a study from
2014 [73], the authors extended the PoW using PoS and
proposed the Proof of Activity (PoA) which provided reduced
network communication and storage requirements without
compromising on security. Proof of Burn (PoB) is another low
energy-consuming consensus mechanism [74]. Miners reach a consensus by "burning" coins, permanently removing them from circulation. This process is initiated by miners on virtual mining rigs instead of physical mining devices. A miner's mining power increases with the number of coins burned, not with computational power. PoB has been proven
to be sustainable and highly decentralized, and is implemented
in cryptocurrencies such as SlimCoin.
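To make the contrast with PoW concrete, the following toy Python sketch captures the core idea behind PoS: a block proposer is selected with probability proportional to stake rather than through a computational race. The validator names and stake values below are invented purely for illustration, and real PoS protocols add slashing, randomness beacons and other safeguards.

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible

# Hypothetical validators and their stakes (arbitrary units).
stakes = {"alice": 50, "bob": 30, "carol": 20}

def pick_proposer(stakes):
    """Select a validator with probability proportional to its stake."""
    total = sum(stakes.values())
    point = random.uniform(0, total)
    cumulative = 0.0
    for validator, stake in stakes.items():
        cumulative += stake
        if point < cumulative:
            return validator
    return validator  # guard against floating-point edge cases

# Over many rounds the selection frequency approaches the stake shares,
# with no energy spent on a hashing race.
counts = {v: 0 for v in stakes}
for _ in range(10_000):
    counts[pick_proposer(stakes)] += 1
```

Here the chance of proposing a block depends only on holdings, so there is no incentive to burn electricity on ever-faster hardware.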
Hedera is an eco-friendly cryptocurrency with a highly efficient consensus mechanism called Hashgraph [75], based on the gossip protocol. Participants in the blockchain relay novel information (called gossip), and the collaborative gossip history is stored as a hashgraph, which each member in the network uses to come to a consensus based on their knowledge of what another node might know. The authors of [76] proposed a probabilistic mechanism called the Fast Probabilistic Consensus (FPC), which is used in the cryptocurrency
IOTA. It is a highly efficient and secure binary voting protocol
wherein a set of nodes can come to consensus on the value of
an individual bit instead of consensus through computation.
A trust-based mechanism called the XRP Consensus [77]
has also been proposed, in which the participants reach an
agreement without complete consensus among all members
of the network. Hashgraph, FPC and XRP do not require
high computational power and therefore consume substantially
lower energy than PoW, which can also be seen in Figure-6.
The authors of [78] proposed the Stellar Consensus Protocol
(SCP) based on the Byzantine Agreement [79]. It removes
time-limitations for the processing of blocks by enabling flexibility in the PoW-difficulty parameters and processes several
blocks in parallel. Increased computational power therefore
increases the throughput of the system, thereby increasing the
scalability and sustainability because there are more blocks
processed in the same amount of time, making the energy
consumption proportionate to the outcome obtained. SCP is
used in the Pi Network [80], which we discuss in more detail
in Section-V.
Several storage-based consensus mechanisms have been
proposed. The authors of [81] proposed a consensus mechanism based on distributed storage called the Proof of Retrievability (PoR). However, because the proposed scheme
lacks a leader node election method, a similar PoR-based
approach was proposed in [82], called the Proof of Space-Time
(PoST). PoST proves that useful data was stored for a certain
amount of time, and it is thus a storage power-based consensus
mechanism. PoST consumes less energy because the difficulty
of the proof can be changed by extending the time-period of
data stored instead of computational capacity. Another storage-share-based consensus mechanism, called the Proof of Space (PoSpace), was adopted in SpaceCoin [83] and CHIA [84]. PoSpace requires little computational power and can be run on any computer with free disk space and an Internet connection.
TABLE VII: Proposed ASIC-based mining devices for cryptocurrency mining as of March 2021 [68].

|Device|Cost ($)|Hashrate (TH/s)|Power (W)|Efficiency (J/TH)|MAC (GWh)|
|---|---|---|---|---|---|
|Whatsminer M32-70|6,200|70|3,360|48|29.43|
|Antminer S7|Variable|4.73|1,293|273.36|11.32|
|AvalonMiner 1246|Variable|90|3,420|38|29.95|
|WhatsMiner M32-62T|1,075|62|3,348|54|29.32|
|AvalonMiner A1166 Pro|2,199|81|3,400|41.97|29.78|

Fig. 7: Energy efficiency (energy consumption per TH) of various ASIC devices.

The authors of [85] recommend the adoption of useful Proofs of Work (uPoW) based on the Orthogonal Vectors
(OV) problem. They explain usefulness as the allocation of
computational tasks to the miners such that the solutions for
the tasks can be reconstructed verifiably and quickly from the
miners’ response. uPoW converts the amount of wasteful work
in PoW into useful work without compromising on hardness.
Research on Resource Efficient Mining (REM) [86] for Bitcoin
proposed the REM framework using trusted hardware (Intel
SGX) and developed the first complete implementation of
SGX-blockchain with a computational overhead of 5-15%.
This mechanism is similar to the uPoW [56]. Clients supply their workloads as tasks to the SGX-protected enclave. The attestation service in SGX, which guarantees truthfulness, verifies and measures the software running in the enclave.
The enclave randomly decides which computational task leads
to a valid proof for the block.
_B. Redundancy Reduction Techniques_
Among the methods proposed in the literature for reducing
storage redundancy in blockchain networks, a promising one relies on "sharding", i.e., breaking the network into sub-parts called "shards" based on the consensus mechanism and updating the transactions within the bounds of each shard [10].
In [87], the authors conducted research on scaling blockchain
via sharding, and proposed a stable sharding technique with
a low failure rate. The concept of sharding has also been
proposed for Ethereum 2.0, which we discuss in Section-V. While the division of blockchain networks into shards
is difficult because of the decentralization of computational
power in the PoW, it can be done based on the proportions of
stakes and storage in the case of PoS and PoSpace respectively
[56, 72]. In [57], the authors proposed another method (called
ElasticChain) to reduce redundancy. In ElasticChain, the nodes
of the chain store a part of the complete ledger based on
a duplicate ratio regulation algorithm. The research shows
stability, security and fault tolerance at the same level as the
current blockchain design, while improving its storage scalability. The authors of [88] proposed a different approach with
Semantic Differential Transaction (SDT) to reduce redundancy
in the integration of Building Information Modeling (BIM)
and blockchain. SDT captures local changes in an information
model as BIM Change Contracts (BCC) at 0.02% of the size of Industry Foundation Classes (IFC, the standard for ensuring interoperability across BIM platforms), safeguarding them in a blockchain and restoring them when needed. SDT thus
reduces redundancy in BIM-blockchain systems. A study on
network traffic redundancy [58] recommends reducing the
average routing path lengths between two nodes in order to
reduce traffic redundancy in the Bitcoin network.
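The routing idea behind sharding can be sketched in a few lines of Python: each transaction is deterministically mapped to one shard, so a node in that shard stores and validates only its own slice of the ledger. Real designs add cross-shard communication and security machinery; the shard count and transaction IDs below are illustrative assumptions.

```python
import hashlib

NUM_SHARDS = 4  # illustrative; real proposals use e.g. 64 shards

def shard_of(tx_id: str) -> int:
    """Deterministically assign a transaction ID to a shard."""
    digest = hashlib.sha256(tx_id.encode()).digest()
    return digest[0] % NUM_SHARDS

# Each shard accumulates only its own transactions, so per-node storage
# and validation work shrink roughly by a factor of NUM_SHARDS.
shards = {i: [] for i in range(NUM_SHARDS)}
for tx in ("tx-001", "tx-002", "tx-003", "tx-004", "tx-005", "tx-006"):
    shards[shard_of(tx)].append(tx)
```

Because the mapping is deterministic, any node can locate the shard responsible for a transaction without consulting the full network.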
Another category of methods proposed to reduce operational
redundancy in blockchains lies in the use of Zero Knowledge
Proofs (ZKP) such as SNARKS [89, 90]. ZKP does not
require complex encryption. It increases privacy of users by
avoiding the disclosure of personal information as is the case
in public blockchains such as Bitcoin. Additionally, it provides
security while increasing the scalability and throughput of
the cryptocurrency network, thereby making it more energy-efficient. The methodology proposed by the authors of [91]
uses ZKP to reduce the time needed to prove and verify large
sequential computations in comparison to other current ZKP
implementations [92].
_C. Choice of Mining Device_
While efficient devices will help reduce energy costs regardless of the consensus mechanism, if the PoW continues to be
in use, it becomes imperative to use highly efficient devices
such as ASIC [12]. As Table-VI and Figure-5 show, ASIC-based devices consume the least amount of energy per hash,
and provide the highest computational power with a hash rate
of 40,000 GH/s at 0.05 J/GH. Studies have shown that the use
of ASIC devices, as done in the KnCMiner facility in Boden, Sweden, can reduce the worldwide annual energy consumption from mining to 1.46 TWh [14, 59].
In [68], the author discusses the top five ASIC-based mining
devices as of March 2021 which include Whatsminer M32-70,
Antminer S7, AvalonMiner 1246, WhatsMiner M32-62T, and
AvalonMiner A1166 Pro. Table-VII presents the cost, hashrate,
power consumption, efficiency and Maximum Annual Consumption (MAC) for each of these devices. Figure-7 shows
that among the most popular available options, Antminer S7
is the least efficient device, with an energy consumption per tera-hash (TH) of 273.36 J/TH. The other four devices have comparable efficiencies, with AvalonMiner 1246 being the most
energy-efficient at 38 J/TH. In addition to the efficiencies, we
have also calculated the MAC (in GWh) of these devices as
follows:
MAC = (P × 24 × 365) / 10^6    (4)

wherein P (in W) is the power consumption of the device, which is multiplied by the total number of hours in a year (24 × 365) to obtain the annual energy consumption equivalent. The table shows that Antminer S7 consumed the
least amount of energy, while providing the least efficiency
among the five options. The other four ASIC-based mining
devices have comparable MAC ranging from 29.32 GWh to
29.95 GWh, thereby further demonstrating that the best choice is AvalonMiner 1246 based on its efficiency.
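The MAC column of Table-VII can be reproduced directly from Equation (4), as the short Python sketch below shows; the power figures (in watts) are taken from the table.

```python
# Maximum Annual Consumption per Eq. (4): MAC = P * 24 * 365 / 10^6,
# with P the device's power draw in watts (values from Table VII).
devices_w = {
    "Whatsminer M32-70": 3360,
    "Antminer S7": 1293,
    "AvalonMiner 1246": 3420,
    "WhatsMiner M32-62T": 3348,
    "AvalonMiner A1166 Pro": 3400,
}

def mac(power_w: float) -> float:
    """Annual energy equivalent of running the device continuously."""
    return power_w * 24 * 365 / 1e6

# Print devices ordered by annual consumption.
for name, p in sorted(devices_w.items(), key=lambda kv: mac(kv[1])):
    print(f"{name}: MAC = {mac(p):.2f}")
```

Running this yields roughly 11.33 for Antminer S7 and 29.43 for Whatsminer M32-70, matching the MAC column of Table-VII up to rounding.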
_D. Renewable Sources of Energy_
Considering the high electrical energy consumption in current PoW blockchains, we need to address the impact of
their emissions and the deterioration of the ecological footprint. Research shows a reduction in CO2 emission by using
renewable sources of energy [94]. Sustainable Development
Goals (SDG) for economic growth and trade provided by a
study on renewable and non-renewable energy and their impact
[95] recommends the transition from fossil fuels to renewable energy sources, implementation of environmentally friendly
production processes, enforcement of green trade, education,
and creating awareness. While these recommendations have
been provided for sustainable economic growth and trade in
general, they are also applicable to cryptocurrencies. In [66],
the authors show that legal criteria, and the continuity and
cost of electrical energy supply are the most important factors
considered to decide the location of cryptocurrency mining
operations. The study concluded that wind and solar energy are
the best energy alternatives for blockchain networks. The use
of these renewable energy sources will make the high energy
consumption in PoW cryptocurrencies more environmentally friendly. Consequently, it is highly recommended that countries
with high cryptocurrency mining activity should invest in the
use of renewable energy.
Figure-8 shows the distribution of mining shares based on
hashrates as of July 2021. We obtain the data from the Cambridge Bitcoin Energy Consumption Index [19]. The major
Bitcoin mining countries of the world are China (46%), U.S.A
(16.8%), Kazakhstan (8.2%), Russia (6.8%), Iran (4.6%),
Malaysia (3.4%), Canada (3%), Germany (2.8%) and Ireland
(2.3%). Considering the Digiconomist [23] estimate of 135.12
TWh for Bitcoin, and assuming the energy shares of these countries to be equal to their hashrate shares, we can calculate their Estimated Energy
Consumption (EEC) as follows:
EEC = (135.12 × Share(%)) / 100    (5)
From Table-VIII, we note that China alone consumed 62.15
TWh of electrical energy, which is comparable to the electrical energy consumption of Switzerland (56.35 TWh). It is
therefore important for these major mining regions to focus
on measures to minimize the environmental degradation and
global warming caused by the PoW mining processes.
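Equation (5) is straightforward to apply to the hashrate shares of Figure-8. The sketch below assumes, as in the text, that each region's energy share equals its hashrate share and that the network total is the 135.12 TWh Digiconomist estimate.

```python
TOTAL_TWH = 135.12  # Digiconomist estimate for the Bitcoin network [23]

# Hashrate shares (%) of the major mining regions [19].
shares = {
    "China": 46.0, "U.S.A": 16.8, "Kazakhstan": 8.2, "Russia": 6.8,
    "Iran": 4.6, "Malaysia": 3.4, "Canada": 3.0, "Germany": 2.8,
    "Ireland": 2.3,
}

def eec(share_pct: float) -> float:
    """Estimated Energy Consumption (TWh) per Eq. (5)."""
    return TOTAL_TWH * share_pct / 100

for region, share in shares.items():
    print(f"{region}: {eec(share):.2f} TWh")
```

China's 46% share gives roughly 62.2 TWh, the figure compared with Switzerland's national consumption above.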
Table-VIII presents the data, provided by the International
Renewable Energy Agency (IREA) [93], on various infrastructures based on renewable energy sources as of 2020. The table
also presents the Maximum Energy Generation (MEG) from
renewable sources based on installed renewable capacities for
each country. The MEG (in TWh) is calculated using the
following equation:
Fig. 8: Mining share based on hashrates [19].
TABLE VIII: Mining shares, renewable energy capacities installed and evaluation metrics: Estimated Energy Consumptions
(EEC), Maximum Energy Generation (MEG), Renewable Capacity Ratio (RCR) for major Bitcoin mining regions.
|Region|Share (%) [19]|EEC (TWh) [23]|Total capacity (MW) [93]|Hydropower (MW)|Wind (MW)|Solar (MW)|Bioenergy (MW)|Geothermal (MW)|Marine (MW)|MEG (TWh)|RCR|Relative RCR|
|---|---|---|---|---|---|---|---|---|---|---|---|---|
|World|100|135.12|2,799,094|1,331,889|733,267|713,970|126,557|14,050|527|24520.06|-|-|
|China|46|62.15|894,879|370,160|281,993|254,335|18,687|0|5|7839.14|126.12|0.4134|
|U.S.A|16.8|22.70|292,065|103,058.00|117,744|75,572|12,372|2,587|0|2558.48|112.70|0.3694|
|Kazakhstan|8.2|11.07|4,997|2,785|486|1,719|8|-|-|43.77|3.95|0.0129|
|Russia|6.8|9.18|54,274|51,811|945|1,428|1,370|74|2|475.44|51.74|0.1696|
|Iran|4.6|6.21|12,922|13,233|303|414|12|-|-|113.19|18.21|0.0597|
|Malaysia|3.4|4.59|8,699|6,275||1,493|931|-|-|76.20|16.58|0.0543|
|Canada|3|4.05|101,188|81,058|13,577|3,325|3,383|-|20|886.40|218.67|0.7168|
|Germany|2.8|3.78|131,739|10,720|62,184|53,783|10,364|40|-|1154.03|305.02|1.0000|
|Ireland|2.3|3.10|4,685|529|4,300|40|107|-|-|41.04|13.20|0.0432|
|Other|6|8.24|1,293,646|692,260|251,735|321,861|79,323|11,349|500|11332.33|-|-|
MEG = (Total Capacity × 24 × 365) / 10^6    (6)
It is worth noting that the MEG is calculated as the highest
possible energy generation per annum using the installed
capacity. Additionally, the Renewable Capacity Ratio (RCR)
for each country is calculated as the ratio of the MEG to the
EEC following the equation below:
RCR = MEG / EEC    (7)
The RCR provides a proportion of renewable energy available per TWh of energy consumption in Bitcoin mining.
High RCR values indicate a higher capacity to allocate
renewable energy toward the mining process. From Figure-9,
we deduce that countries such as Germany, Canada, China, and
U.S.A have high renewable energy capacities relative to their
mining energy consumption in comparison with those such as Kazakhstan, Ireland, Malaysia, Iran and Russia, which do not, making it imperative for the latter countries to further invest in renewable energy.
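Equations (6) and (7) can be combined in a few lines of Python. The installed-capacity and share figures below are the Table-VIII entries for Germany, Canada and Kazakhstan, and the 135.12 TWh network total is again assumed.

```python
TOTAL_TWH = 135.12  # network-wide consumption estimate [23]

def meg(capacity_mw: float) -> float:
    """Maximum Energy Generation (TWh/yr) per Eq. (6)."""
    return capacity_mw * 24 * 365 / 1e6

def rcr(capacity_mw: float, share_pct: float) -> float:
    """Renewable Capacity Ratio per Eq. (7), with EEC from Eq. (5)."""
    eec = TOTAL_TWH * share_pct / 100
    return meg(capacity_mw) / eec

# Table VIII entries: (installed renewable capacity in MW, hashrate share %)
regions = {
    "Germany": (131_739, 2.8),
    "Canada": (101_188, 3.0),
    "Kazakhstan": (4_997, 8.2),
}

for name, (cap, share) in regions.items():
    print(f"{name}: MEG = {meg(cap):.2f} TWh, RCR = {rcr(cap, share):.2f}")
```

Germany's large installed capacity against a small mining share yields an RCR near 305, while Kazakhstan's heavy mining share against minimal renewables yields an RCR below 4, reproducing the contrast drawn in Figure-9.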
V. CASE STUDIES
Sections-III and IV have discussed implementation factors
that cause high energy consumption and carbon footprints of
cryptocurrencies, and proposed possible solutions respectively.
In this section, we explore a few cryptocurrency networks that
aim to solve some of the practical limitations of cryptocurrencies such as Bitcoin and Ethereum. As we have discussed
earlier, several alternative consensus mechanisms such as the
PoS, and redundancy reduction techniques such as sharding,
have been proposed to reduce the energy consumption of
cryptocurrencies. In this section, we discuss how some recently
developed cryptocurrency networks such as Ethereum 2.0
and the Pi Network have adapted these solutions to solve
the cryptocurrency energy consumption and carbon footprint
problems in the real world. These case studies will provide more insight into the ongoing active research and development, and will shed light on future research directions in this area.

Fig. 9: Renewable Capacity Ratio for major Bitcoin mining regions.
_A. Ethereum 2.0_
We briefly described Ethereum in Section-II. Several alternate cryptocurrencies have been introduced over time, but
none of them have gained as much traction as Bitcoin,
with the exception of the PoW cryptocurrency, Ethereum.
However, since it also suffers from energy and scalability
issues, Ethereum has come up with a major upgrade, called
Ethereum 2.0 [96]. This version aims to resolve issues related
to sustainability, scalability, and security. The security aspects
are beyond the scope of this paper, hence we focus our
discussion on sustainability and scalability:
i. Sustainability: Ethereum 2.0 addresses the energy problem by shifting from the PoW consensus mechanism to the PoS. PoS consumes a significantly lower amount of energy because it involves far fewer mathematical calculations and hence has lower computational requirements. It also provides security against attacks such as the 51% attack, and prevents over-centralization of miners, as ownership of coins is considered for reward payouts as opposed to share of computational power. This change in consensus algorithm is expected to reduce energy consumption by about 99% relative to the current PoW algorithm [52].
ii. Scalability: The current version of Ethereum is not very
scalable due to the increase in network congestion and data
redundancy with the addition of nodes and transactions. This
increases the energy consumption of the cryptocurrency network in addition to slowing down the speed of the transaction
process. With Ethereum 2.0, Ethereum plans to introduce the
"Beacon Chain", which implements the concept of sharding.
Sharding is a concept where the load on a network is distributed amongst nodes or groups of nodes to reduce network
congestion and increase throughput. The release will also
include the introduction of 64 new chains, with each chain
consisting of a fraction of the nodes validating the transactions.
Hence more transactions can be processed in parallel, with the
requirement to share the transaction details with only a fraction
of the nodes. This reduces redundancy, congestion and energy
consumption.
_B. Pi Network_
In [80], the authors present an introduction to the Pi
Network which addresses the two issues that the Bitcoin
network suffers from namely, high energy consumption and
centralization of miners.
i. Energy efficiency: The Pi Network uses a modified
version of the Stellar Consensus Protocol (SCP) [97] instead of
the highly energy-intensive PoW consensus mechanism. While
such networks need multiple exchanges among the nodes to
reach consensus and can lead to network congestion, they have
significantly lower energy requirements.
ii. Decentralization: While the original goal of Bitcoin was
to provide a decentralized transaction method, the increase in price and better payoffs have made the network extremely centralized, to the extent that around 87% of the BTCs are
owned by 1% of the nodes. The Pi network allows any user
with a mobile phone to mine coins without any need for
expensive ASIC devices. Hence it makes mining inexpensive
and more widely accessible.
VI. CONCLUSION
This review has shown the alarmingly high electrical energy
consumption and carbon footprints of PoW cryptocurrencies
such as Bitcoin and Ethereum. When compared with the energy consumption of countries around the world, we found that
Bitcoin and Ethereum consumed nearly as much energy as
countries such as Sweden and Romania respectively. We also
found that their CO2 emissions were close to those of Greece
and Tunisia respectively. Our analysis of centralized transaction methods has revealed that Visa is much more energy-efficient and has a lower carbon footprint per transaction
compared to the cryptocurrencies discussed in this review.
The review identified four underlying issues causing these
problems, namely, the PoW consensus mechanism, network
redundancy, mining devices and sources of energy. We found
that, among other possible solutions such as PoSpace, PoST,
PoA, uPoW and REM, PoS proves to be the most promising alternative to PoW. We discussed redundancy reduction
methods and popular ASIC devices for efficient mining. We
compiled a list of popular mining devices available on the
market that would be useful to various stakeholders working in
the cryptocurrency area. We calculated the maximum possible
energy consumption using MAC. Additionally, we presented
renewable energy capacities for major Bitcoin mining areas,
and the RCR we defined has shown that it would be easier for
major mining countries such as China, U.S.A, Germany and
Canada to allocate renewable energy compared to countries
such as Russia, Iran, Malaysia, Ireland and Kazakhstan. Finally, we presented two case studies on Ethereum 2.0 and the Pi Network, which plan to use consensus algorithms such as PoS and SCP, and concepts such as sharding, to distribute the load and reduce redundancy in the cryptocurrency network, thereby reducing the overall energy consumption and carbon footprints. While these networks are still under development, they
demonstrate that considerable efforts are being made in this
direction to address the real-world energy consumption and CO2 issues associated with cryptocurrencies to make them more sustainable and widely acceptable.
VII. ACKNOWLEDGMENT
This work was supported by the SERB ASEAN project
CRD/2020/000369 received by Dr. Vinay Chamola. Sherali
Zeadally was supported by a 2021-2022 Fulbright U.S. scholar
grant award administered by the U.S. Department of State
Bureau of Educational and Cultural Affairs, and through its
cooperating agency the Institute of International Education
(“IIE”). Further, we thank the anonymous reviewers for their
valuable comments which helped us improve the quality and
presentation of this work.
REFERENCES
[1] C. Mora, R. L. Rollins, K. Taladay, M. B. Kantar, M. K.
Chock, M. Shimada, E. C. Franklin, Bitcoin emissions
alone could push global warming above 2°C, Nature
Climate Change 8 (11) (2018) 931–933.
[2] S. I. Zandalinas, F. B. Fritschi, R. Mittler, Global warming, climate change, and environmental pollution: Recipe
for a multifactorial stress combination disaster, Trends in
Plant Science.
[3] IPCC, Ipcc.
URL https://www.ipcc.ch/
[4] J. Wang, Y. Guan, L. Wu, X. Guan, W. Cai, J. Huang,
W. Dong, B. Zhang, Changing lengths of the four seasons
by global warming, Geophysical Research Letters 48 (6)
(2021) e2020GL091753.
[5] J. Rogelj, M. Den Elzen, N. Höhne, T. Fransen, H. Fekete, H. Winkler, R. Schaeffer, F. Sha, K. Riahi, M. Meinshausen, Paris agreement climate proposals need a boost to keep warming well below 2°C, Nature 534 (7609) (2016) 631–639.
[6] S. A. Sarkodie, Environmental performance, biocapacity,
carbon & ecological footprint of nations: drivers, trends
and mitigation options, Science of the Total Environment
751 (2021) 141912.
[7] S. Nakamoto, Bitcoin: A peer-to-peer electronic cash
system, Decentralized Business Review (2008) 21260.
[8] CoinMarketCap, Today’s cryptocurrency prices by market cap.
URL https://coinmarketcap.com/
[9] C. Stoll, L. Klaaßen, U. Gallersdörfer, The carbon footprint of bitcoin, Joule 3 (7) (2019) 1647–1661.
[10] J. Sedlmeir, H. U. Buhl, G. Fridgen, R. Keller, The
energy consumption of blockchain technology: beyond
myth, Business & Information Systems Engineering
62 (6) (2020) 599–608.
[11] A. de Vries, Bitcoin boom: What rising prices mean for
the network’s energy consumption, Joule 5 (3) (2021)
509–513.
[12] N. Houy, Rational mining limits bitcoin emissions, Nature Climate Change 9 (9) (2019) 655–655.
[13] J. Koomey, Estimating bitcoin electricity use:
A beginner’s guide, May, Coin Center Report,
https://www.coincenter.org/app/uploads/2020/05/estimatingbitcoinelectricity-use.pdf.
[14] S. Küfeoğlu, M. Özkuran, Bitcoin mining: A global review of energy and power demand, Energy Research & Social Science 58 (2019) 101273.
[15] A. De Vries, Bitcoin’s growing energy problem, Joule
2 (5) (2018) 801–805.
[16] S. Küfeoğlu, M. Özkuran, Energy consumption of bitcoin mining.
[17] M. Zade, J. Myklebost, P. Tzscheutschler, U. Wagner, Is
bitcoin the only problem? a scenario model for the power
demand of blockchains, Frontiers in Energy Research 7
(2019) 21.
[18] U. Gallersdörfer, L. Klaaßen, C. Stoll, Energy consumption of cryptocurrencies beyond bitcoin, Joule 4 (9) (2020) 1843–1846.
[19] U. of Cambridge, Cambridge bitcoin electricity consumption index.
URL https://cbeci.org/
[20] Digiconomist, Digiconomist.
URL https://digiconomist.net/
[21] S. Kohler, M. Pizzol, Life cycle assessment of bitcoin
mining, Environmental science & technology 53 (23)
(2019) 13598–13606.
[22] B. Sriman, S. G. Kumar, P. Shamili, Blockchain technology: Consensus protocol proof of work and proof
of stake, in: Intelligent Computing and Applications,
Springer, 2021, pp. 395–406.
[23] Digiconomist, Bitcoin energy consumption index.
URL https://digiconomist.net/bitcoin-energy-consumption/
[24] A. Blandin, G. C. Pieters, Y. Wu, A. Dek, T. Eisermann,
D. Njoki, S. Taylor, 3rd global cryptoasset benchmarking
study, Available at SSRN 3700822.
[25] C. Bendiksen, S. Gibbons, E. Lim, The bitcoin mining
network-trends, marginal creation cost, electricity consumption & sources, CoinShares Research 21 (2018) 3–
19.
[26] worldometer, Countries in the world by population
(2021).
URL https://www.worldometers.info/world-population/population-by-country/
[27] Digiconomist, Ethereum energy consumption index.
URL https://digiconomist.net/ethereum-energy-consumption/
[28] U. E. I. Administration, International.
URL https://www.eia.gov/international/overview/world
[29] Wikipedia, List of countries by electricity consumption.
URL https://en.wikipedia.org/wiki/List of countries
by electricity consumption
[30] I. E. Agency, Co2 emissions from fuel combustion.
URL http://energyatlas.iea.org/#!/tellmap/1378539487
[31] M. Nofer, P. Gomber, O. Hinz, D. Schiereck, Blockchain,
Business & Information Systems Engineering 59 (3)
(2017) 183–187.
[32] S. Zeadally, J. B. Abdo, Blockchain: Trends and future
opportunities, Internet Technology Letters 2 (6) (2019)
e130.
[33] V. Hassija, V. Chamola, V. Gupta, S. Jain, N. Guizani,
A survey on supply chain security: Application areas,
security threats, and solution architectures, IEEE Internet
of Things Journal 8 (8) (2020) 6222–6246.
[34] A. Ometov, Y. Bardinova, A. Afanasyeva, P. Masek,
K. Zhidanov, S. Vanurin, M. Sayfullin, V. Shubina,
M. Komarov, S. Bezzateev, An overview on blockchain
for smartphones: State-of-the-art, consensus, implementation, challenges and future trends, IEEE Access 8
(2020) 103994–104015.
[35] W. Gräther, S. Kolvenbach, R. Ruland, J. Schütte,
C. Torres, F. Wendland, Blockchain for education: lifelong learning passport, in: Proceedings of 1st ERCIM
Blockchain workshop 2018, European Society for Socially Embedded Technologies (EUSSET), 2018.
[36] L. Ismail, H. Materwala, S. Zeadally, Lightweight
blockchain for healthcare, IEEE Access 7 (2019)
149935–149951.
[37] S. Ølnes, J. Ubacht, M. Janssen, Blockchain in government: Benefits and implications of distributed ledger
technology for information sharing (2017).
[38] M. Pilkington, Blockchain technology: principles and
applications, in: Research handbook on digital transformations, Edward Elgar Publishing, 2016.
[39] G. Praveen, V. Chamola, V. Hassija, N. Kumar,
Blockchain for 5g: A prelude to future telecommunication, IEEE Network 34 (6) (2020) 106–113.
[40] G. Bansal, V. Hasija, V. Chamola, N. Kumar, M. Guizani,
Smart stock exchange market: a secure predictive decentralized model, in: 2019 IEEE Global Communications
Conference (GLOBECOM), IEEE, 2019, pp. 1–6.
[41] T. Alladi, V. Chamola, R. M. Parizi, K.-K. R. Choo,
Blockchain applications for industry 4.0 and industrial
iot: A review, IEEE Access 7 (2019) 176935–176951.
[42] V. Hassija, V. Gupta, S. Garg, V. Chamola, Traffic jam
probability estimation based on blockchain and deep
neural networks, IEEE Transactions on Intelligent Transportation Systems.
[43] V. Hassija, V. Saxena, V. Chamola, F. R. Yu, A parking
slot allocation framework based on virtual voting and
adaptive pricing algorithm, IEEE Transactions on Vehicular Technology 69 (6) (2020) 5945–5957.
[44] A. Miglani, N. Kumar, V. Chamola, S. Zeadally,
Blockchain for internet of energy management: Review,
solutions, and challenges, Computer Communications
151 (2020) 395–418.
[45] T. Alladi, V. Chamola, N. Sahu, M. Guizani, Applications
of blockchain in unmanned aerial vehicles: A review,
Vehicular Communications 23 (2020) 100249.
[46] T. Alladi, V. Chamola, J. J. Rodrigues, S. A. Kozlov,
Blockchain in smart grids: A review on different use
cases, Sensors 19 (22) (2019) 4862.
[47] G. Hileman, M. Rauchs, Global cryptocurrency benchmarking study, Cambridge Centre for Alternative Finance
33 (2017) 33–113.
[48] A. M. Antonopoulos, Mastering Bitcoin: unlocking digital cryptocurrencies, ” O’Reilly Media, Inc.”, 2014.
[49] C. Berg, S. Davidson, J. Potts, Proof of work as a threesided market, Frontiers in Blockchain 3 (2020) 2.
[50] BuyBitcoinWorldwide, Bitcoin clock.
URL https://www.buybitcoinworldwide.com/bitcoinclock/
[51] IMPAKTER, Visa.
URL https://index.impakter.com/visa/
[52] I. Spectrum, Ethereum plans to cut its absurd energy
consumption by 99 percent.
URL https://spectrum.ieee.org/ethereum-plans-to-cutits-absurd-energy-consumption-by-99-percent
[53] S. P. Mishra, V. Jacob, S. Radhakrishnan, Energy
consumption–bitcoin’s achilles heel, Available at SSRN
3076734.
[54] V. Hassija, S. Zeadally, I. Jain, A. Tahiliani, V. Chamola,
S Gupta Framework for determining the suitability of
-----
blockchain: Criteria and issues to consider, Transactions
on Emerging Telecommunications Technologies e4334.
[55] S. Jiang, Y. Li, Q. Lu, Y. Hong, D. Guan, Y. Xiong,
S. Wang, Policy assessments for the carbon emission
flows and sustainability of bitcoin blockchain operation
in china, Nature communications 12 (1) (2021) 1–10.
[56] W. Wang, D. T. Hoang, P. Hu, Z. Xiong, D. Niyato,
P. Wang, Y. Wen, D. I. Kim, A survey on consensus mechanisms and mining strategy management in
blockchain networks, IEEE Access 7 (2019) 22328–
22370.
[57] D. Jia, J. Xin, Z. Wang, W. Guo, G. Wang, Elasticchain: support very large blockchain by reducing data
redundancy, in: Asia-Pacific Web (APWeb) and Web-Age
Information Management (WAIM) Joint International
Conference on Web and Big Data, Springer, 2018, pp.
440–454.
[58] Y.-H. Zhang, X. F. Liu, Traffic redundancy in blockchain
systems: The impact of logical and physical network
structures, in: 2021 IEEE International Symposium on
Circuits and Systems (ISCAS), IEEE, 2021, pp. 1–5.
[59] T. Economist, The magic of mining.
URL https://www.economist.com/business/2015/01/08/
the-magic-of-mining
[60] S. B. Mariem, P. Casas, M. Romiti, B. Donnet, R. St¨utz,
B. Haslhofer, All that glitters is not bitcoin–unveiling
the centralized nature of the btc (ip) network, in: NOMS
2020-2020 IEEE/IFIP Network Operations and Management Symposium, IEEE, 2020, pp. 1–9.
[61] UserBenchmark, Userbenchmark.
URL http://www.userbenchmark.com/
[62] P. Software, Passmark.
URL https://www.passmark.com/index.html
[63] Blockchain.com, Blockchain.com.
URL https://www.blockchain.com/pools
[64] B. Chen, G. Chen, Z. Yang, M. Jiang, Ecological footprint accounting for energy and resource in china, Energy
Policy 35 (3) (2007) 1599–1609.
[65] M. Kissinger, J. Fix, W. E. Rees, Wood and non-wood
pulp production: Comparative ecological footprinting on
the canadian prairies, Ecological economics 62 (3-4)
(2007) 552–558.
[66] J. Liu, J. Lv, H. Dinc¸er, S. Y¨uksel, H. Karakus¸, Selection
of renewable energy alternatives for green blockchain investments: A hybrid it2-based fuzzy modelling, Archives
of Computational Methods in Engineering (2021) 1–15.
[67] X. Li, K. J. Chalvatzis, D. Pappas, Life cycle greenhouse
gas emissions from power generation in china’s provinces
in 2020, Applied Energy 223 (2018) 93–102.
[68] techradar.pro, Best asic devices for mining cryptocurrency in 2021.
URL https://www.techradar.com/in/best/asic-devices
[69] TGR, Most environment friendly cryptocurrencies.
URL https://www.trgdatacenters.com/most-environmentfriendly-cryptocurrencies/
[70] M. Platt, J. Sedlmeir, D. Platt, P. Tasca, J. Xu,
N. Vadgama, J. I. Iba˜nez, Energy footprint of blockchain
consensus mechanisms beyond proof of work arXiv
preprint arXiv:2109.03667.
[71] S. King, S. Nadal, Ppcoin: Peer-to-peer cryptocurrency
with proof-of-stake, aug. 2012, URL https://peercoin.
net/assets/paper/peercoinpaper. pdf.
[72] C. T. Nguyen, D. T. Hoang, D. N. Nguyen, D. Niyato,
H. T. Nguyen, E. Dutkiewicz, Proof-of-stake consensus mechanisms for future blockchain networks: fundamentals, applications and opportunities, IEEE Access 7
(2019) 85727–85745.
[73] I. Bentov, C. Lee, A. Mizrahi, M. Rosenfeld, Proof of
activity: Extending bitcoin’s proof of work via proof of
stake [extended abstract] y, ACM SIGMETRICS Performance Evaluation Review 42 (3) (2014) 34–37.
[74] K. Karantias, A. Kiayias, D. Zindros, Proof-of-burn, in:
International Conference on Financial Cryptography and
Data Security, Springer, 2020, pp. 523–540.
[75] L. Baird, The swirlds hashgraph consensus algorithm:
Fair, fast, byzantine fault tolerance, Swirlds Tech Reports
SWIRLDS-TR-2016-01, Tech. Rep.
[76] S. Popov, W. J. Buchanan, Fpc-bi: Fast probabilistic
consensus within byzantine infrastructures, Journal of
Parallel and Distributed Computing 147 (2021) 77–86.
[77] B. Chase, E. MacBrough, Analysis of the xrp ledger
consensus protocol, arXiv preprint arXiv:1802.07242.
[78] L. Luu, V. Narayanan, K. Baweja, C. Zheng, S. Gilbert,
P. Saxena, Scp: A computationally-scalable byzantine
consensus protocol for blockchains, See https://www.
weusecoins. com/assets/pdf/library/SCP 20 (20) (2015)
2016.
[79] D. Dolev, H. R. Strong, Authenticated algorithms for
byzantine agreement, SIAM Journal on Computing 12 (4)
(1983) 656–666.
[80] P. Network, Pi white paper.
URL https://minepi.com/white-paper.Pi
[81] A. Miller, A. Juels, E. Shi, B. Parno, J. Katz, Permacoin:
Repurposing bitcoin work for data preservation, in: 2014
IEEE Symposium on Security and Privacy, IEEE, 2014,
pp. 475–490.
[82] T. Moran, I. Orlov, Simple proofs of space-time and rational proofs of storage, in: Annual International Cryptology
Conference, Springer, 2019, pp. 381–409.
[83] S. Park, K. Pietrzak, J. Alwen, G. Fuchsbauer, P. Gazi,
Spacecoin: A cryptocurrency based on proofs of space,
IACR Cryptology ePrint Archive 2015 (2015) 528.
[84] B. Cohen, K. Pietrzak, The chia network blockchain
(2019).
[85] M. Ball, A. Rosen, M. Sabin, P. N. Vasudevan, Proofs
of useful work., IACR Cryptol. ePrint Arch. 2017 (2017)
203.
[86] F. Zhang, I. Eyal, R. Escriva, A. Juels, R. Van Renesse,
REM : Resource-efficient mining for blockchains, in:
_{_ _}_
26th USENIX Security Symposium ( USENIX Se_{_ _}_ _{_ _}_
curity 17), 2017, pp. 1427–1444.
[87] M. Zamani, M. Movahedi, M. Raykova, Rapidchain:
Scaling blockchain via full sharding, in: Proceedings of
the 2018 ACM SIGSAC Conference on Computer and
Communications Security, 2018, pp. 931–948.
[88] F Xue W Lu A semantic differential transaction ap
-----
proach to minimizing information redundancy for bim
and blockchain integration, Automation in construction
118 (2020) 103270.
[89] X. Yang, W. Li, A zero-knowledge-proof-based digital
identity management scheme in blockchain, Computers
& Security 99 (2020) 102050.
[90] A. M. Pinto, An introduction to the use of zk-snarks in
blockchains, in: Mathematical Research for Blockchain
Economy, Springer, 2020, pp. 233–249.
[91] E. Ben-Sasson, I. Bentov, Y. Horesh, M. Riabzev, Scalable zero knowledge with no trusted setup, in: Annual
international cryptology conference, Springer, 2019, pp.
701–732.
[92] Y. Ishai, M. Mahmoody, A. Sahai, D. Xiao, On zeroknowledge pcps: Limitations, simplifications, and applications (2015).
[93] IRENA, Country rankings.
URL https://www.irena.org/Statistics/View-Data-byTopic/Capacity-and-Generation/Country-Rankings
[94] K. Dong, X. Dong, Q. Jiang, How renewable energy
consumption lower global co2 emissions? evidence from
countries with different income levels, The World Economy 43 (6) (2020) 1665–1698.
[95] M. A. Destek, A. Sinha, Renewable, non-renewable
energy consumption, economic growth, trade openness
and ecological footprint: Evidence from organisation
for economic co-operation and development countries,
Journal of Cleaner Production 242 (2020) 118537.
[96] Ethereum, Upgrading ethereum to radical new heights.
URL https://ethereum.org/en/eth2/
[97] D. MAZIERES, The stellar consensus protocol: A
federated model for internet-level consensus.
URL https://www.stellar.org/papers/stellar-consensusprotocol?locale=en
-----

[End of record: "An Analysis of Energy Consumption and Carbon Footprints of Cryptocurrencies and Possible Solutions", Digit. Commun. Networks, 2022; open-access PDF: http://arxiv.org/pdf/2203.03717]

[Record: "Distributed Social-based Overlay Adaptation for Unstructured P2P Networks", 2007 IEEE Global Internet Symposium; https://www.semanticscholar.org/paper/ffafec4e849b25b41a496e388c93c4339d4970b0]
# Distributed Social-based Overlay Adaptation for Unstructured P2P Networks
## Ching-Ju Lin
Institute of Networking and Multimedia
National Taiwan University, Taipei, Taiwan
cjlin@cmlab.csie.ntu.edu.tw
## Yi-Ting Chang, Shuo-Chan Tsai, Cheng-Fu Chou
Dept. of Computer Science and Information Engineering
National Taiwan University, Taipei, Taiwan
{seashell, r92069, ccf}@cmlab.csie.ntu.edu.tw
**_Abstract—_ The widespread use of Peer-to-Peer (P2P) systems has made multimedia content sharing more efficient. Users in a P2P network can query and download objects based on their preference for specific types of multimedia content. However, most P2P systems only construct the overlay architecture according to physical network constraints and do not take user preferences into account. In this paper, we investigate a social-based overlay that can cluster peers that have similar preferences. To construct a semantic social-based overlay, we model a quantifiable measure of similarity between peers so that those with a higher degree of similarity can be connected by shorter paths. Hence, peers can locate objects of interest from their overlay neighbors, i.e., peers who have common interests. In addition, we propose an overlay adaptation algorithm that allows the overlay to adapt to P2P churn and preference changes in a distributed manner. We use simulations and a real database called Audioscrobbler, which tracks users' listening habits, to evaluate the proposed social-based overlay. The results show that social-based overlay adaptation enables users to locate content of interest with a higher success ratio and with less message overhead.**
I. INTRODUCTION
The widespread use of P2P systems has made sharing multimedia content, such as music and video files, more efficient.
In a social network, people with similar tastes in multimedia
content (e.g., people who like jazz music) form a community
to share their experience and knowledge. Like people who
form social networks, some users of P2P networks have a
preference for various types of multimedia content, which may
affect the way they query and download content. Although P2P
users normally exchange multimedia content with other users
who have similar tastes, the architecture of most well-known
P2P systems is based on physical network constraints only
and does not take user preferences into account. To remedy
the situation, in this paper, we propose a social-based P2P
overlay that can leverage social phenomena to improve the
efficiency of content sharing in P2P systems.
In sociology, a social network [1] is composed of a set of actors (nodes) that may have relationships (ties) with one another. Sociologists normally use graphs to represent information about relationship patterns between social actors; such graphs are also called "socio-grams". The design of social-based P2P overlay networks is motivated by the concept of socio-grams. More specifically, the objective of the proposed social-based P2P network is to build a socio-gram as an overlay topology for the P2P network.

¹This work was partially supported by the National Science Council and the Ministry of Education of ROC under contracts No. NSC95-2221-E-002-103-MY2 and NSC95-2622-E-002-018.
In a socio-gram, an edge between a pair of nodes indicates that a tie exists between the two adjacent nodes; for example, if we are interested in which nodes each node nominates as friends, an edge can be used to represent a friendship tie. In a P2P
network, a user hopes to obtain objects of interest from peers
who have similar tastes and can provide the requested objects.
The key to efficient and scalable searches in unstructured P2P
systems is to cover nodes holding the requested objects as
quickly as possible and with as little overhead as possible.
In practice, the only way to find objects of interest is to continue
visiting peers until one that holds the requested object is found.
In this paper, instead of building a friendship socio-gram, we
build a similarity-based socio-gram as a P2P overlay topology,
where a similarity tie between two peers exists if they have
common interests in specific types of multimedia content.
Hence, in the proposed social-based overlay, peers sharing
similar interests can be connected by shorter paths so that
they can exchange multimedia content efficiently. Specifically,
whenever a peer requests an object of interest, it can locate the
object among its neighboring peers, i.e., the peers who have
similar tastes and are more likely to hold the requested object.
The following factors determine the efficiency of a social-based overlay. (1) Similar peer selection: in decentralized P2P systems, it is challenging to define user preferences and identify peers who have similar tastes. (2) Distributed overlay adaptation: a system could collect information about all users to estimate the similarity between peers; however, such a centralized method is not scalable, and it cannot cope with changes in users' preferences or with network dynamics, i.e., churn (defined as the dynamics of peers joining or leaving [2][3]). Therefore, a distributed adaptation algorithm is required so that each peer can discover similar peers and maintain its overlay links in a distributed and dynamic manner.
The goal of this paper is to model a distance measure that quantifies the similarity between peers, so that peers can form an effective social-based overlay based on a proper similarity measure. In addition, we propose an overlay adaptation algorithm that uses a random walk technique to sample the population and discover similar peers from the randomly selected samples, instead of collecting detailed information about all P2P users. Because the random walk technique reduces the overlay update overhead significantly, each peer can exploit this method to handle dynamic churn and adapt to changes in users' tastes efficiently and in a distributed manner. Finally, we use a database called Audioscrobbler, which tracks users' listening habits, to evaluate the performance of the proposed social-based P2P network.
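A quantifiable similarity measure of this kind can, for illustration, be built as cosine similarity over per-genre play-count vectors derived from listening habits (in the spirit of Audioscrobbler data). The genre list and play counts below are invented for the example; this is not the exact measure proposed in the paper.

```python
import math

def preference_vector(play_counts, genres):
    """Summarize a user's listening habits as per-genre play counts."""
    return [play_counts.get(g, 0) for g in genres]

def cosine_similarity(u, v):
    """Similarity in [0, 1] for non-negative preference vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Hypothetical users and genres, purely for illustration.
genres = ["jazz", "rock", "classical", "pop"]
alice = preference_vector({"jazz": 40, "classical": 10}, genres)
bob   = preference_vector({"jazz": 25, "classical": 5, "pop": 2}, genres)
carol = preference_vector({"rock": 30, "pop": 50}, genres)

# Alice and Bob share a taste for jazz, so their similarity is high;
# Carol's tastes barely overlap with Alice's.
assert cosine_similarity(alice, bob) > cosine_similarity(alice, carol)
```

Under such a measure, "connected by shorter paths" simply means that pairs with high cosine similarity should end up as overlay neighbors or near-neighbors.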
The remainder of the paper is organized as follows. Section II presents related works on random-walk-based P2P systems and social-based P2P systems. Section III describes the proposed social-based overlay construction algorithm in detail. Section IV evaluates the performance of the social-based overlay via simulations. Finally, Section V presents our conclusions.
II. RELATED WORKS
Decentralized P2P systems are typically classified into
two categories: structured P2P systems and unstructured P2P
systems. In structured P2P systems, i.e., Distributed Hash
Table (DHT) systems, both data placement and the overlay topology are tightly controlled. Although DHT systems balance the workload and improve query efficiency, most of them must repair the architecture after each node failure; hence, they cannot handle churn efficiently.
In unstructured P2P systems, such as Gnutella [4], each
node incurs a reasonable overhead to build overlay links and
repair link failures dynamically according to some loose rules.
In addition, querying multimedia content by keywords has
become increasingly popular in P2P systems. Unlike DHT
systems, which incur extra overlay maintenance costs for
providing a keyword search service [5][6][7][8], users of an
unstructured system can forward query messages as a sequence
of keywords by flooding to find objects that partially match
the query keywords. Because of the robustness and flexibility
of unstructured systems, we adapt the Gnutella system to
social-based unstructured P2P networks, in which the overlay
topology is based on the social relationships between peers.
In decentralized unstructured P2P systems, the use of a
flooding scheme for overlay construction or content queries
induces a scalability problem. Hence, some approaches
[9][10][11] use a random walk technique, rather than flooding, to reduce the message overhead. However, the lack of
flow control and topology control is one of the weaknesses
of random-walk-based Gnutella-like systems. A number of
works [2][12][13] balance the load among peers by controlling
the number of outlinks and inlinks explicitly based on the
bandwidth capability of each peer. Consequently, in graphs
that control the node degree, a node with higher capacity will
have more overlay links so that there is a higher probability
that it will donate its bandwidth resources.
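As a minimal sketch of the random walk technique referred to here, a query or sampling probe is forwarded hop by hop to a uniformly chosen neighbor instead of being flooded. The adjacency-list overlay below is a made-up example, not any particular system's topology.

```python
import random

def random_walk_sample(overlay, start, steps, rng=random):
    """Forward a probe for `steps` hops over the overlay, choosing a
    uniformly random neighbor at each hop, and return the peer where
    the walk ends. Only one message is in flight per hop, unlike
    flooding, which contacts every neighbor at every hop."""
    node = start
    for _ in range(steps):
        neighbors = overlay[node]
        if not neighbors:  # dead end: stop the walk early
            break
        node = rng.choice(neighbors)
    return node

# Hypothetical four-peer overlay as adjacency lists.
overlay = {
    "A": ["B", "C"],
    "B": ["A", "C", "D"],
    "C": ["A", "B"],
    "D": ["B"],
}
assert random_walk_sample(overlay, "A", steps=10) in overlay
```

Note that on an undirected overlay a plain walk's stationary distribution is proportional to node degree, so it naturally visits high-degree peers more often, which is consistent with the capacity-proportional degree control in the works cited above.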
However, random-walk-based Gnutella-like systems cannot guarantee that queries will be handled efficiently. Since overlay construction based on random walks does not take user
preferences into account, a node may not be able to locate
objects of interest from its overlay neighbors. Thus, it may
need to visit more peers to locate the requested objects, and
thereby generate more message overhead. To address this
problem, some works [14][15] have proposed social-based P2P
systems, which build an overlay topology that mimics social
phenomena. The objective is to connect peers based on their
social relationships so that a peer can obtain content efficiently
from its neighboring overlay nodes.
In [14], each peer establishes overlay links with peers who
have similar preferences. The similarity of peers is measured
by comparing their preference lists, which record a number of
the most recently downloaded objects. However, this method
causes a new user problem; that is, a new user who has only
made a few downloads can not get an accurate similarity
measure. In [15], a central server collects the description
vectors of all users, and establishes overlay links based on
the distance between each pair of users. One limitation of
the centralized methods is that they can not handle churn in
P2P systems efficiently, since they generate a heavy traffic
load when exchanging information in a large-scale network. In
addition, [15] does not explicitly define the description vector,
which has a significant effect on the accuracy of the similarity
measure.
In this work, we propose a novel social-based overlay for
unstructured P2P networks. Our contribution is twofold: (1)
we define a quantifiable measure of the similarity between
each pair of peers; and (2) we propose an overlay adaptation
algorithm that enables each node to establish ties with similar
nodes in a distributed and dynamic manner based on a random
walk technique. The proposed method uses social relationships
to improve the performance of content search, and exploits the
advantages of the random walk method to reduce the overlay
construction overhead.
III. DISTRIBUTED SOCIAL-BASED OVERLAY
CONSTRUCTION
In this section, we present an overview of the social-based
overlay topology, and then describe in detail how to construct
a distributed social-based overlay network based on a random
walk technique.
_A. Overview of a Social-based Overlay_
A social-based overlay for P2P networks clusters users
who have similar preferences for multimedia content. Thus,
we build a similarity-based socio-gram, denoted as Gs, in
which a tie between two peers exists if they have a common
interest in specific types of multimedia content. To determine whether two nodes should be connected by a similarity
tie, the system needs to compile user profiles containing
information about users’ preferences, and then measure the
degree of similarity between the profiles. From a real-world
perspective, the objects held by a peer typically reflect the
characteristics of that peer, since a peer has limited storage capability and may not keep objects that are not of
interest. Therefore, users can be distinguished by the objects
they hold. Many works focus on techniques that extract
low level or semantic metadata from multimedia objects. An
object can be described by multiple attributes, which can be associated with the extracted metadata. For example, a music file can be associated with several keywords (i.e., metadata), such as genre=“Jazz”, artist=“Pat Metheny”, title=“Bright Size Life”. Thus, objects can be categorized based on the tagged keywords. Preferences for different
categories of objects can be used to distinguish the characteristics of each peer. Specifically, let Profile($c_i$) be the profile of user $c_i$, defined as a vector of weights $\vec{w}_i = (w_{i,k_1}, w_{i,k_2}, \cdots, w_{i,k_n}, \cdots)$, where the weight $w_{i,k_n}$ denotes user $c_i$'s preference for the objects described by the keyword $k_n$, as shown by

$$w_{i,k_n} = \frac{|O_{i,k_n}|}{|O_i|}, \tag{1}$$

where $O_i$ is the set of objects held by user $c_i$ and $O_{i,k_n}$ is the subset of $O_i$ containing the objects tagged by the keyword $k_n$. We then use the cosine similarity measure [16][17] to quantify the similarity $sim(c_i, c_j)$ between two peers, $c_i$ and $c_j$, as follows:

$$sim(c_i, c_j) = \cos(\vec{w}_i, \vec{w}_j) = \frac{\vec{w}_i \cdot \vec{w}_j}{\|\vec{w}_i\|_2 \times \|\vec{w}_j\|_2} = \frac{\sum_{k=1}^{K} w_{i,k}\, w_{j,k}}{\sqrt{\sum_{k=1}^{K} w_{i,k}^2}\, \sqrt{\sum_{k=1}^{K} w_{j,k}^2}}, \tag{2}$$

where $K$ is the total number of keywords. If $c_i$ and $c_j$ have similar tastes in certain styles of multimedia content, then $sim(c_i, c_j)$ returns a larger value.
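To make the profiling and similarity computation concrete, here is a minimal Python sketch of Eqs. (1) and (2); the object/tag representation and the function names are our own illustration, not the paper's implementation:

```python
from collections import Counter
from math import sqrt

def profile(objects):
    """Eq. (1): w_{i,k} = |O_{i,k}| / |O_i| for every tagged keyword k."""
    counts = Counter(tag for obj in objects for tag in obj["tags"])
    total = len(objects)
    return {k: c / total for k, c in counts.items()}

def cosine_sim(wi, wj):
    """Eq. (2): cosine similarity of two weight vectors (dicts)."""
    dot = sum(wi.get(k, 0.0) * wj.get(k, 0.0) for k in set(wi) | set(wj))
    ni = sqrt(sum(v * v for v in wi.values()))
    nj = sqrt(sum(v * v for v in wj.values()))
    return dot / (ni * nj) if ni and nj else 0.0
```

Two peers holding mostly jazz-tagged objects score close to 1, while peers with disjoint tag sets score 0.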
In the proposed social-based P2P network, each peer finds d similar peers (so-called buddies) distributedly, and establishes overlay links with them. However, constructing a similarity graph Gs does not guarantee the connectivity of a P2P overlay network. Hence, in the proposed social-based overlay topology, we merge Gs with a weak graph, denoted as Gw, which connects peers that have consecutive identifiers. In other words, all peers are connected as a ring topology in Gw to avoid partitioning the overlay topology. Thus, in the social-based overlay, each node builds (d + 1) overlay outlinks: d for Gs and one for Gw.
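The (d + 1)-outlink rule can be sketched as follows; `overlay_outlinks` is a hypothetical helper that pairs a node's d similarity links (Gs) with one ring successor on the identifier space (Gw):

```python
def overlay_outlinks(node_id, buddies, all_ids):
    """Return (d + 1) outlinks: d similarity links for Gs plus one link
    to the node with the next-larger identifier (wrapping around),
    which forms the ring Gw that keeps the overlay connected."""
    ring = sorted(all_ids)
    successor = ring[(ring.index(node_id) + 1) % len(ring)]
    return list(buddies) + [successor]
```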
The proposed similarity measure can resolve the new user
problem because a new user can also provide his/her multimedia content in the buffer space. Hence, the profile for
a new user can be created based on the objects stored in
the buffer. The other unexpected advantage of the proposed
user profiling method is that it discourages freeriders in P2P
systems. If a peer does not offer content in its public storage
space for other users, its preference (i.e., user profile) can not
be compiled precisely, so it can not find buddies based on its
user profile. Therefore, the proposed similarity measure can
inherently provide incentive for users to share their resources.
_B. Distributed Overlay Adaptation_
The overlay topology is the component that connects all
peers in an unstructured P2P network. The overlay topology
must be updated efficiently so that it can react to dynamic
churn. Hence, we propose an overlay adaptation algorithm
that allows each peer to determine its buddies in a distributed
manner. When a new user joins a P2P network, it uses
bootstrapping mechanisms, similar to those used in Gnutella,
to locate other peers in the overlay topology. It then builds
temporary overlay links with those peers to connect to the P2P
network, exchanges information with neighbors, and compiles
its buddy list distributedly.
Given a set of peers, a new peer can use certain strategies
to collect information about the peers to determine their
relationships and compile a buddy list. The strategies can
be categorized into two types [1]: full network methods and
_snowball methods. Full network methods collect the user_
profiles of all peers in a central server, and rank sim(ci, cj) for
any pair of peers, ci and cj, in the system. The method allows
the central server to analyze the social structure explicitly and
cluster peers who have similar preferences; however, it can be
very expensive to collect full information as the network scales
up. In contrast, the snowball method collects information via
epidemic protocols, i.e., a peer can know friends-of-friends
through its friends. Because the snowball method only samples
the target population, the information exchange overhead is
much lower, which resolves the scalability problem. Hence, we
propose a distributed overlay adaptation algorithm that enables
each peer to compile its buddy list distributedly based on the
concept of snowball sampling.
In the following, we present the proposed distributed overlay
adaptation algorithm, which involves two phases: distributed
_buddy selection and buddy list update._
_1) Distributed Buddy Selection: To reduce the message_
exchange overhead, each node can locate buddies with similar
tastes from a subset of overlay nodes (called candidates hereafter). Each peer, ci, can find M candidates from the overlay
distributedly and randomly, and calculate the cosine similarity
measure, sim(ci, cj), for any candidate cj. Therefore, each
peer can maintain a list of the d most similar buddies, i.e., the candidates that yield larger values of sim(ci, cj), and establish the overlay links with the peers in the buddy list.
In this method, the effectiveness of the buddy list depends
on the efficiency of the candidate selection mechanism. The
most efficient way (i.e., the method that generates the lowest
message overhead) is to select the d most similar peers from
the M neighboring overlay nodes. However, when locating
neighbors, the bootstrapping procedure does not consider the
characteristic of peers, so a new peer may not be able to
find any peers who have similar tastes or interests to itself
under this procedure. To resolve this problem, we need an
unbiased sampling mechanism that can randomly select a set
of candidates from the overlay topology.
Random walk is a typical unbiased sampling technique that
forwards a request to a randomly selected neighbor with a
probability p at each step, or stops in a visited node with a probability (1 − p). The technique reduces the message overhead significantly, since each request takes its own random
walk and generates only as many messages as the length of
the path it traverses. In contrast to the flooding method, in
the random walk method, the number of messages does not
Fig. 1. TTL vs. Success Ratio
increase exponentially with the number of outlinks of each
traversed node. In addition, assuming the location of a peer in
an overlay is independent of the tastes of that peer, we can exploit an advantage of the random walk mechanism whereby M
candidates can be selected randomly and unbiasedly from the
overlay network. Hence, to reduce the information collection
overhead and avoid biased candidate selection, each node can
use the random walk method to select candidates distributedly.
However, to strike a balance between the message overhead and unbiased candidate selection, we let each peer designate its $\lfloor M/2 \rfloor$ nearest neighbors as candidates and also start $\lceil M/2 \rceil$ random walks to discover $\lceil M/2 \rceil$ candidates randomly. Then, a peer can select d buddies distributedly from the M candidates.
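Under these assumptions (a walk-continuation probability p = 0.5 and a `sim` function as in Eq. (2)), the candidate gathering and buddy selection might look like the sketch below; the graph representation and the names are illustrative only:

```python
import random

def random_walk(start, neighbors, p=0.5, rng=random):
    """Forward to a random neighbor with probability p; otherwise stop
    and report the currently visited node as the sampled candidate."""
    node = start
    while neighbors(node) and rng.random() < p:
        node = rng.choice(neighbors(node))
    return node

def select_buddies(me, my_neighbors, neighbors, sim, M, d, rng=random):
    """Designate floor(M/2) nearest neighbors plus ceil(M/2) random-walk
    endpoints as candidates, then keep the d most similar ones."""
    local = my_neighbors[:M // 2]
    walks = [random_walk(me, neighbors, rng=rng) for _ in range(M - M // 2)]
    candidates = {c for c in local + walks if c != me}
    return sorted(candidates, key=lambda c: sim(me, c), reverse=True)[:d]
```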
_2) Buddy List Update: A peer may lose its outlinks if its_
buddies fail or leave the network. Besides, since peers’ tastes
may change over time, a peer’s tastes may no longer be similar
to those of the peers on the buddy list. Hence, a peer must
update its buddy list in the following cases: (1) its outlinks are
lost, or (2) its user profile is changed. In the second case, user
profile modification only occurs in the following situations:
(1) when a peer retrieves new objects from other peers, or
(2) objects cached in the buffer are deleted by the user or
dropped because of buffer overflow. Based on the concept of
social phenomena, each peer can exploit the snowball method
to locate more buddies through friends-of-friends, since users
with similar preferences are usually clustered in a community.
When a peer decides to update its buddy list, it locates 2d
candidates: d friends-of-friends and d peers chosen by random
walk. Then, it ranks all candidates and the original buddies in
order of their similarity measure sim(ci, cj), and updates its
buddy list with the d most similar peers.
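A sketch of this update rule (all names are ours): 2d fresh candidates, d gathered through friends-of-friends and d through random walks, are merged with the current buddies and re-ranked by similarity.

```python
def update_buddy_list(me, buddies, friends_of_friends, walk_picks, sim, d):
    """Rank the current buddies together with 2d new candidates and
    keep the d peers most similar to `me`."""
    pool = set(buddies) | set(friends_of_friends[:d]) | set(walk_picks[:d])
    pool.discard(me)  # a node never links to itself
    return sorted(pool, key=lambda c: sim(me, c), reverse=True)[:d]
```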
IV. PERFORMANCE EVALUATION AND DISCUSSION
In this section, we use simulations to evaluate the performance of the proposed distributed social-based overlay
construction algorithm. To validate the proposed algorithm, we
use log-based user profiles collected from Audioscrobbler[2], a
database that tracks listening habits by collecting the play-lists
of users’ media players (for instance, Winamp, iTunes, and
XMMS). The profiles are used in our simulations to mimic social relationships in the real world.

Fig. 2. TTL vs. Precision Rate

We collect profiles for
1,355 fans who have listened to five popular styles of music (i.e., rock, metal, pop, punk, and jazz) the most. The number of fans selected for a specific music style is proportional to the popularity of that style. For each fan, the data set records the 50 songs that he/she listens to the most. Thus, there are 31,005 objects in our simulations. To simulate a
P2P overlay network, we use Brite [18] to generate the physical network, in which 1,355 nodes are distributed in a topology of Autonomous Systems (ASes). Then, 677 nodes (fans) are randomly selected from the physical network to join the P2P overlay network. Each overlay node establishes four outlinks: three for the similarity graph and one for the weak graph.
We classify the 50 music files held by each node into five
groups according to genre, and the user profile is defined
as $\vec{w} = (w_{\mathrm{rock}}, w_{\mathrm{metal}}, w_{\mathrm{pop}}, w_{\mathrm{punk}}, w_{\mathrm{jazz}})$. Each overlay
node has a buffer that can cache 45 music files. If the buffer
is overloaded, cache replacement is based on a popularity-driven algorithm, i.e., the song listened to the least is dropped
first. For cross-validation, we randomly divide each node’s 50
favorite songs into a training set (40 songs) and a test set
(10 songs). Let the training set of songs be cached in each
node’s buffer. Each node then requests songs in the test set to
evaluate the performance of the content query service in the
proposed social-based overlay. To evaluate the performance
of the keyword search service in the social-based overlay, we
let each peer query an object by the tags associated with that
object.
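The popularity-driven replacement policy described above can be sketched as a hypothetical helper (`plays` maps each cached song to its listen count; both names are ours):

```python
def evict_least_popular(cache, plays, capacity=45):
    """Drop the least-listened-to songs until the buffer fits its capacity."""
    while len(cache) > capacity:
        victim = min(cache, key=lambda song: plays.get(song, 0))
        cache.remove(victim)
    return cache
```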
We compare three variations of social-based overlay construction methods and two non-social-based methods as follows:
(1) Social-based full network (SFN): each peer collects the profiles of all other nodes, and selects d buddies.
(2) Social-based random walk (SRW): each peer collects 2d candidates (d selected by random walk and d selected from local neighbors) to compile a list of d buddies. Each walk visits a randomly selected neighbor with a probability of 0.5, or stops in a visited node with a probability of 0.5.
(3) Social-based local network (SLN): each peer selects d buddies from the 2d local neighbors.
(4) Non-social-based random walk (NRW): each node starts d random walks, and establishes overlay links with d destination nodes.
(5) Non-social-based local network
2http://www.audioscrobbler.net/
Fig. 3. TTL vs. Recall Rate
(NLN): each peer establishes d overlay links with the peers
who have consecutive identifiers.
_A. Performance Comparison in Static Environments_
In this simulation, we compare the performance of the above five schemes in terms of the following performance metrics:
(a) the success ratio: the ratio of the number of successful searches to the total number of requests;
(b) the precision rate: the number of target objects on the returned list divided by the total number of objects on the returned list;
(c) the recall rate: the number of target objects on the returned list over the number of replicas of the target object in the system; and
(d) the overlay adaptation overhead: the number of messages used to construct and update the overlay topology.
To verify
the impact of overlay construction on the performance of the
content query service, overlay nodes are not allowed to leave
the P2P system during this simulation.
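Metrics (a)-(c) can be computed per query and averaged, as in this sketch; the query record layout is our own, and we read the precision rate as a per-query average (queries returning nothing contribute zero):

```python
def query_metrics(queries):
    """Each query is a dict: 'returned' (objects on the returned list),
    'targets' (requested objects), 'replicas' (copies of the target in
    the system). Returns (success ratio, precision rate, recall rate)."""
    n = len(queries)
    hits = [len(set(q["returned"]) & set(q["targets"])) for q in queries]
    success = sum(1 for h in hits if h > 0) / n
    precision = sum(h / len(q["returned"])
                    for h, q in zip(hits, queries) if q["returned"]) / n
    recall = sum(h / q["replicas"] for h, q in zip(hits, queries)) / n
    return success, precision, recall
```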
Gnutella-like systems use TTL (Time-To-Live) to control
the number of hops over which a query is flooded. This simulation evaluates the performance of all five schemes for various TTL values. Generally, if the TTL is low, peers may
not be able to locate the requested objects, even though a
copy exists in the overlay network. Conversely, if the TTL is
high, peers can discover more overlay nodes and locate the
requested objects in the overlay. Figures 1, 2, and 3 show that all schemes achieve better performance in terms of the success ratio, precision rate, and recall rate as the TTL increases. However, the message overhead caused by flooding also increases with the TTL. To reduce the overhead, a good overlay topology should enable a node to locate the requested object with a limited TTL. The figures show that the social-based overlay construction methods, i.e., SFN, SRW, and SLN, outperform the non-social-based methods when the TTL is limited to 5. This is because the former methods take user behavior into account and connect peers who have similar tastes. In a social-based overlay, since two buddies can be connected by a shorter path, they can obtain objects of interest with a limited TTL from peers with similar interests. The NLN method performs worst because peers who have consecutive user identifiers are clustered together; hence, queries can not be forwarded to other peers.
Fig. 4. TTL vs. Overhead of Overlay Adaptation
The figures also show that the SFN scheme is the best
of the three social-based methods. This is because it enables
each node to obtain complete information about other peers
by collecting all users’ profiles to compile a precise buddy
list. However, as shown in Figure 4, collecting all user
profiles by flooding generates a large message overhead while
constructing or updating the overlay topology. The other two
social-based methods, SRW and SLN, can perform as well as
the full network method, but only incur a small amount of
overhead to maintain the overlay links. Because the random
walk and local network methods only collect 2d candidates’
user profiles, they reduce the message overhead of overlay
adaptation significantly.
_B. Performance Comparison in Dynamic Environments_
This simulation evaluates the performance of the distributed
overlay adaptation algorithm in dynamic environments, similar
to the simulation scenarios in [13], as follows:
1) Churn: Initially, an N/2-node (677-node) overlay is built. There are N churn-events during the simulation period. A churn-event is either a single node joining with a probability of 0.5 or a single node leaving with a probability of 0.5. The expected network size after a sequence of events is N/2.
2) Shrink: Initially, an N-node (1,355-node) overlay is built. Then, 30% of the nodes leave the system during the simulation period.
To simulate the dynamic of churn over time, we distribute all
events uniformly over the simulation period, i.e., 40 minutes.[3]
The query arrival pattern of each peer follows a Poisson distribution. Specifically, a random variable, X, is used to represent the interarrival time of two queries, and the probability distribution function of X is an exponential distribution with a mean of 1 minute. When a peer fails to locate an object of interest, it re-issues the query after 1 minute. Each query event is not deleted until the request is matched. Because some peers may join or leave the P2P system, a peer that fails to locate an object in the current step may be able to find it in subsequent steps if new users holding the requested object
3We use the minute as the time unit. However, we believe that the trend of
simulation results will be consistent as the time scale varies.
Fig. 5. Number of Successful Matches in Churn Scenario

Fig. 6. Number of Successful Matches in Shrink Scenario
join the system. In this simulation, we set the TTL to 5, and
evaluate the performance of the overlay adaptation algorithms
in terms of the cumulative number of successful queries over
time.
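The workload described above can be generated as in this sketch (assumptions: events spread uniformly over the 40-minute period, joins and leaves equally likely, and exponential interarrival times with a mean of 1 minute):

```python
import random

def churn_events(n_events, duration=40.0, rng=random):
    """N churn events uniformly spread over the simulation period; each
    is a single join or a single leave, each with probability 0.5."""
    times = sorted(rng.uniform(0, duration) for _ in range(n_events))
    return [(t, "join" if rng.random() < 0.5 else "leave") for t in times]

def query_times(duration=40.0, mean=1.0, rng=random):
    """Poisson arrivals: cumulative sums of exponential interarrival times."""
    t, arrivals = 0.0, []
    while True:
        t += rng.expovariate(1.0 / mean)
        if t > duration:
            return arrivals
        arrivals.append(t)
```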
Figures 5 and 6 show the number of successful matches in the churn and shrink scenarios respectively. In the churn scenario, there are at most about 2,000 successful matches within 10 minutes under the non-social-based schemes, whereas the proposed distributed social-based overlay adaptation method generates 3,000 successful matches within 10 minutes. This is because dynamic social-based adaptation allows each peer to update its buddy list, i.e., overlay links, if it or its original buddies change their listening habits. On the other hand, the non-social-based schemes can successfully match at most 2,700 queries within the simulation period. Because the non-social-based overlay construction algorithms only update the overlay links based on random mechanisms, users can not locate objects of interest from their neighboring overlay nodes. In other words, the TTL must be increased so that users can locate objects of interest by visiting more peers. Clearly, the social-based method also performs better than the non-social-based methods in the shrink scenario.
V. CONCLUSION
We have proposed a social-based overlay construction algorithm. We have also defined a user profiling method based on the characteristics of the objects held by each user, and proposed a similarity measure to quantify the closeness between peers. The results show that a social-based overlay built according to the proposed similarity measure can improve the performance of the content query service in terms of the success ratio, precision rate, and recall rate. We have also proposed a random-walk-based sampling method that selects buddies from an unbiased sample of candidates. Because the random walk method reduces the overhead of buddy selection significantly, each peer can maintain its overlay links distributedly and dynamically if overlay links fail or user preferences change. The simulation results also illustrate that, even in dynamic environments, the proposed social-based overlay adaptation algorithm can update the overlay topology dynamically and, thus, improve the efficiency of the content query service.
REFERENCES
[1] R. A. Hanneman and M. Riddle, Introduction to Social Network Methods. http://www.faculty.ucr.edu/~hanneman/nettext/, 2005.
[2] Y. Chawathe, S. Ratnasamy, L. Breslau, N. Lanham, and S. Shenker, “Making Gnutella-like P2P systems scalable,” in SIGCOMM ’03: Proceedings of the 2003 Conference on Applications, Technologies, Architectures, and Protocols for Computer Communications, 2003, pp. 407–418.
[3] D. Stutzbach and R. Rejaie, “Understanding churn in peer-to-peer
networks,” in IMC ’06: Proceedings of the 6th ACM SIGCOMM on
_Internet measurement, 2006, pp. 189–202._
[4] Gnutella: http://www.gnutella.com.
[5] O. Gnawali, “A keyword set search system for peer-to-peer networks,”
June 2002, master’s thesis, Massachusetts Institute of Technology.
[6] P. Reynolds and A. Vahdat, “Efficient peer-to-peer keyword searching,”
in Proceedings of International Middleware Conference, Jun 2003.
[7] L. Liu and K.-W. Lee, “Keyword fusion to support efficient keyword-based search in peer-to-peer file sharing,” in CCGRID ’04: Proceedings of the 2004 IEEE International Symposium on Cluster Computing and the Grid, 2004, pp. 269–276.
[8] Y.-J. Joung, C.-T. Fang, and L.-W. Yang, “Keyword search in DHT-based peer-to-peer networks,” in ICDCS ’05: Proceedings of the 25th IEEE International Conference on Distributed Computing Systems (ICDCS’05), 2005, pp. 339–348.
[9] L. A. Adamic, R. M. Lukose, A. R. Puniyani, and B. A. Bhuberman,
“Search in power-law networks,” Physical Review E, vol. 64 46135,
2001.
[10] I. Clarke, O. Sandberg, B. Wiley, and T. W. Hong, “Freenet: A distributed anonymous information storage and retrieval system,” Lecture Notes in Computer Science, vol. 2009, pp. 46–66, 2001.
[11] Q. Lv, P. Cao, E. Cohen, K. Li, and S. Shenker, “Search and replication in unstructured peer-to-peer networks,” in SIGMETRICS ’02:
_Proceedings of the 2002 ACM SIGMETRICS international conference_
_on Measurement and modeling of computer systems, 2002, pp. 258–259._
[12] C. Law and K.-Y. Siu, “Distributed construction of random expander
networks.” in INFOCOM 2003, 2003.
[13] V. Vishnumurthy and P. Francis, “On heterogeneous overlay construction
and random node selection in unstructured p2p networks,” in INFOCOM,
April 2006.
[14] J. A. Pouwelse, P. Garbacki, J. Wang, A. Bakker, J. Yang, A. Iosup, D. Epema, M. Reinders, M. R. van Steen, and H. J. Sips, “Tribler: A social-based peer-to-peer system,” in 5th Int’l Workshop on Peer-to-Peer Systems (IPTPS), February 2006.
[15] P. Androutsos, D. Androutsos, and A. N. Venetsanopoulos, “Small world distributed access of multimedia data: An indexing system that mimics social acquaintance networks,” IEEE Signal Processing Magazine, vol. 23, no. 2, pp. 142–153, Mar. 2006.
[16] R. Baeza-Yates and B. Ribeiro-Neto, Modern Information Retrieval. Harlow, England: Addison-Wesley, 1999.
[17] G. Salton, Automatic Text Processing: The Transformation, Analysis, and Retrieval of Information by Computer. Boston, MA: Addison-Wesley Longman, 1989.
[18] Brite: http://www.cs.bu.edu/brite/.
An Anonymous IoT-Based E-Health Monitoring System Using Blockchain Technology
IEEE Systems Journal
### This is an electronic reprint of the original article. This reprint may differ from the original in pagination and typographic detail.
## Samuel, Omaji; Omojo, Akogwu Blessing; Mohsin, Syed Muhammad; Tiwari, Prayag; Gupta, Deepak; Band, Shahab S. An Anonymous IoT-Based E-Health Monitoring System Using Blockchain Technology
_Published in:_
IEEE Systems Journal
_DOI:_
[10.1109/JSYST.2022.3170406](https://doi.org/10.1109/JSYST.2022.3170406)
Published: 01/06/2023
_Document Version_
Peer reviewed version
_Please cite the original version:_
Samuel, O., Omojo, A. B., Mohsin, S. M., Tiwari, P., Gupta, D., & Band, S. S. (2023). An Anonymous IoT-Based
E-Health Monitoring System Using Blockchain Technology. IEEE Systems Journal, 17(2), 2422-2433. Advance
[online publication. https://doi.org/10.1109/JSYST.2022.3170406](https://doi.org/10.1109/JSYST.2022.3170406)
This material is protected by copyright and other intellectual property rights, and duplication or sale of all or
part of any of the repository collections is not permitted, except that material may be duplicated by you for
your research use or educational purposes in electronic or print form. You must obtain permission for any
other use. Electronic or print copies may not be offered, whether for sale or otherwise to anyone who is not
an authorised user.
# An Anonymous IoT based e-Health Monitoring System using Blockchain Technology
### Omaji Samuel, Akogwu Blessing Omojo, Syed Muhammad Mohsin, Prayag Tiwari, Deepak Gupta, and Shahab S. Band
**_Abstract—The Internet of things (IoT) has made it possible for health institutions to have remote diagnosis, reliable, preventive and real-time decision making. However, the anonymity and privacy of patients are not considered in IoT. Therefore, this paper proposes a blockchain-based anonymous system, known as GarliMediChain, for providing anonymity and privacy during COVID-19 information sharing. In GarliMediChain, garlic routing and blockchain are integrated to provide low-latency communication, privacy, anonymity, trust and security. Also, COVID-19 information is encrypted multiple times before transmitting to a series of nodes in the network. To ensure that COVID-19 information is successfully shared, a blockchain-based coalition system is proposed. The coalition system enables health institutions to share information while maximizing their payoffs. In addition, each institution uses the proposed fictitious play to study the strategies of others in order to update its belief by selecting the best responses from them. Furthermore, simulation results show that the proposed system is resistant to security-related attacks and is robust, efficient, and adaptive. From the results, the proposed proof-of-epidemiology-of-interest (PoEoI) consensus protocol has 15.93% less computational cost than 26.30% of proof-of-work (PoW) and 57.77% proof-of-authority (PoA) consensus protocol, respectively. Nonetheless, the proposed GarliMediChain system promotes global collaborations by combining existing anonymity and trust solutions with the support of blockchain technology._**

**_Index Terms—Blockchain, e-health, Fictitious Play, Healthcare, Internet of Things (IoT), IoT data_**
I. INTRODUCTION
Today, the Internet of things (IoT) is a new technological
way to bring together different sensors via the Internet [1].
Besides, the concept of IoT was initiated in 1999 to connect
all electronic items via the Internet using radio frequency
identification (RFID) [2]. Also, IoT allows other information
from sensors to be collected for management and intelligence
O. Samuel is with the Department of Computer Science, Confluence
University of Science and Technology (CUSTECH), Osara, 264103, Kogi
State, and Edo State University, Uzairue, 300281, Nigeria; Email: omajis@custech.edu.ng.
A. B. Omojo is with the Applied Mathematics and Simulation, Advanced Research Centre, SHESTCO, Kwali, Abuja 186 Nigeria; Email:
omojo@shestco.gov.ng.
S. M. Mohsin is with the Department of Computer Science, COMSATS
University Islamabad, 45550 Pakistan; Email: syedmmohsin9@yahoo.com;
FA17-PCS-008@isbstudent.comsats.edu.pk
P. Tiwari is with the Department of Computer Science, Aalto University,
02150, Espoo, Finland; Email: prayag.tiwari@aalto.fi
D. Gupta is with the Maharaja Agrasen Institute of Technology, Delhi,
India; Email: deepakgupta@mait.ac.in
S. S. Band is with Future Technology Research Center, National Yunlin
University of Science and Technology, 123 University Road, Section 3,
Douliou, Yunlin 64002, Taiwan; Email: shamshirbands@yuntech.edu.tw.
Corresponding authors: Prayag Tiwari and Shahab S. Band.
gathering. IoT may also connect other input-output
devices, such as smartphones, medical sensors, fitness trackers, cameras, Bluetooth devices, and near-field communication devices [2]. The technological advancement in IoT facilitates the
emergence of the Internet of medical things (IoMT). The IoMT
allows the remote management and monitoring of patients’
data. It is also utilized to solve a variety of health information
technology infrastructure problems [3]. In this study, the
IoT devices are resource-constrained, which means that they
cannot be used for activities that require large computations
and memory storage. To resolve this challenge, the IoT devices
are connected to edge nodes, which have more memory storage
and high computational capabilities. Additionally, the privacy
and anonymity of users are not fully explored in IoT, which
are the main focus of this study.
_A. Anonymity Protection of COVID-19 Patients using Garlic_
_Routing_
The invisible Internet project (I2P) provides an efficient
network that enables users to communicate in an encrypted
and anonymous manner [4]. I2P uses the onion routing concept
for providing anonymity to users that deployed the network.
Moreover, onion routing provides low-latency Internet connections that prevent traffic analysis and other network attacks. It
also uses public-key encryption for encrypting messages in an
onion-like structure to be decrypted by the intended recipients.
For example, the work in [5] deployed onion routing for
enabling users to anonymously access the Internet.
The improvement over onion routing is garlic routing. Garlic
routing is a technique that establishes a path or tunnel through
a series of peers. The sender in garlic routing successively
encrypts messages, which are decrypted by every hop as they
are transmitted via the tunnel. During the establishment phase,
the path for routing messages is known to each peer. These
peers form the intermediate nodes of the garlic routing tunnel.
Unlike onion routing, garlic routing encapsulates all messages
relayed by the intermediate nodes in encrypted form and sends
the ciphertexts to the concerned nodes [4].
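As a toy illustration of this wrap-and-peel idea, the sketch below bundles several cloves into one garlic and adds one encryption layer per hop. The XOR keystream cipher, the `|` separator, and the key values are stand-ins chosen for brevity; real I2P garlic routing uses proper asymmetric and symmetric cryptography.

```python
import hashlib

def _xor_layer(key: bytes, data: bytes) -> bytes:
    """Toy symmetric layer: XOR with a SHA-256-derived keystream.
    A stand-in for the real cryptography used by I2P; XOR is its
    own inverse, so the same call encrypts and decrypts a layer."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, stream))

def wrap_garlic(cloves, hop_keys):
    """Bundle several messages ('cloves') into one 'garlic' and add one
    encryption layer per hop, innermost layer for the final recipient."""
    garlic = b"|".join(cloves)       # encapsulate all relayed messages
    for key in reversed(hop_keys):   # last hop's layer is applied first
        garlic = _xor_layer(key, garlic)
    return garlic

def peel_layer(key: bytes, garlic: bytes) -> bytes:
    """Each hop removes exactly one layer before relaying onward."""
    return _xor_layer(key, garlic)
```

Peeling one layer per hop with the keys in path order recovers the original bundle only at the final recipient.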
The authors in [6] developed a sidechain system, which is a
hybrid of garlic routing and onion routing. The objective of
the sidechain is to enhance the privacy of transactions within
the network. However, the trust concerns among nodes in the
blockchain are not considered. The authors in [7] designed an
approach that is based on garlic routing for enhancing secure
information sharing among users. The proposed approach
provides anonymity in the context of information security. The
proposed approach, on the other hand, does not solve the issue
of a single point of failure or user trust problems during the
manufacturing process.
To improve the anonymity and privacy of users’ transactions, blockchain has been combined with garlic routing.
In [8], the authors presented an anonymous technique for
ensuring users’ privacy during energy trading that is built
on blockchain and garlic routing. For the selection of miners
and the construction of blocks, the proposed technique used
a proof-of-authority (PoA) consensus process. However, the
technique does not resolve the issues of trust concerns and
miners’ centralization problems. In [9], the authors developed
a solution based on blockchain and garlic routing to protect
the privacy and secrecy of bill of lading users. However,
the system does not solve the problem of trust among users.
In the literature, none of the works in [4]–[9] considers
how to trace nodes that have committed errors.
Additionally, coalitions among nodes for
ensuring trustworthy data sharing are not considered.
_B. Privacy and Anonymity of COVID-19 Patients using_
_Blockchain Technology_
Currently, different technologies and approaches have been
deployed to reduce and minimize the danger and transmission
of the pandemic coronavirus, known as COVID-19, since its
outbreak in 2019. These technologies range from artificial
intelligence [10]–[13] to epidemiological models [14], [15].
Besides, different open research areas, such as integrative
medicine, vaccine development, drug discovery and public
communication are essential to finding lasting solutions to
COVID-19 [16]. Interestingly, public communication is vital
in the fight against COVID-19 through media propagation
and public awareness. However, inappropriate COVID-19 information exchange among health institutions can result in
excessive coronavirus transmission. Also, because of a lack
of trust and unauthentic media propagation of COVID-19
information from several unregulated news items, patients
infected by COVID-19 cannot get proper guidance on the
prevention and mitigation of the spread of the virus. Therefore,
an efficient technology to track and minimize the spread of the
COVID-19 virus is essential. Furthermore, researchers are not
limited to just discovering a cure for the pandemic; they are
also building theoretical and practical technologies to aid in
the effective exchange of information in the fight against the
pandemic. The proposed system helps in mitigating the spread
of the COVID-19 virus through authentic public information
dissemination. In this paper, before any information about
COVID-19 is shared, it must be validated and authenticated
by a trusted entity (see Section II-C4 for the credibility of the
trusted entity). Furthermore, rumour mongering is eliminated
while unnecessary news items are scrutinized before adopting
them as a means of information dissemination.
Nowadays, unlike several other emerging technologies,
blockchain provides a secure and decentralized means of
data storage in which untrusted parties are allowed to participate
for the global wellbeing of the system. The authors in [17]
proposed a blockchain-based system to track critical COVID-19 data during data sharing; however, the technology does
not ensure privacy or anonymity for the health institutions.
The authors in [18] identified methods of blockchain that
are addressing the problems, which may arise from the
COVID-19 pandemic. These methods include disease control,
supply chain control of medical items, treatment transparency
control, tracking control of health instruments, etc. However,
privacy concerns and scalability issues of blockchain are
not considered. Similar work in [19] presented the roles of
blockchain in detecting COVID-19, such as contact tracing,
e-government, online education, supply chain management,
automated surveillance, manufacturing management, etc.
However, salient features of blockchain such as security,
scalability, throughput, and resource management require further
improvement. The authors in [20] presented a system, known
as Beeptrace, which is based on blockchain for providing
an efficient contact tracing. However, the Beeptrace solution
does not consider the anonymity of users. The authors in
[21] proposed a framework that is based on blockchain to
preserve the privacy of patients using their smartphones.
However, anonymity depends on pseudonyms, which makes
it difficult to trace defaulters during record auditing. The
authors in [22] presented a k-anonymity method along with
hyper-ledger system to preserve the privacy of patients.
However, k-anonymity method is prone to temporal attack,
complementary release attack and unsorted matching attack.
The authors in [23] proposed a system that is based on
blockchain for preserving the privacy of COVID-19 patients.
In the proposed system, an identity-based broadcast group
signcryption was used. However, they do not address the
elucidation key escrow problem.
Moreover, there is existing work similar to our proposed system.
The work in [8] considers the anonymity and privacy preservation
of energy users. However, the system in [8] incurs high computation costs since the energy users are resource-constrained,
i.e., energy users are smart meters. In addition, no information
regarding the robustness, efficiency and adaptability of the
system were discussed. To solve the problems, this study
introduces edge computing to solve the problem of resource
constraints of medical devices. Furthermore, the efficiency,
robustness and adaptability of the system are presented. Table I
compares the proposed GarliMediChain system with existing
systems in terms of year, techniques, limitations, consensus
protocol, robustness, efficiency and adaptability.
_C. Motivation_
Motivated by the drawbacks of existing schemes [17], [18],
[20], [21] regarding the lack of anonymity and privacy concerns of patients’ health information, our proposed research is
conceived. For example, because of the societal stigmatization
of those who are infected by the COVID-19 virus, there is a
need to develop a system that provides both anonymity and
privacy for the patient during data sharing. The concerns of
privacy and anonymity for COVID-19 data sharing in public
health scenarios are addressed in this study. It is important
to note that anonymity refers to the concealment of patients’
identities, whereas privacy refers to the protection of patients'
private information from other patients. As the risk of infection and transmission of ongoing pandemics increases, the
TABLE I: The proposed system is compared to other systems
Ref. 1 2 3 4 5 6 7
[4] 2019 I2P Communication link falsification and fault-tolerance issue ✗ ✗
[5] 2018 I2P Communication link falsification and fault-tolerance issue ✗ ✗
[6] 2019 Sidechain Trust concern ✗ ✗
[7] 2019 Garlic routing Trust concern ✗ ✗
[8] 2021 Garlic routing and blockchain Problem with tracing of nodes when errors have been committed ✓ ✗
[9] 2021 Garlic routing and blockchain Trust concern ✗ ✗
[17] 2020 Blockchain System does not provide users’ privacy and anonymity ✗ ✗
[18] 2020 Blockchain Privacy concern and scalability issue ✗ ✗
[19] 2020 Blockchain Privacy concern and scalability issue ✗ ✗
[20] 2020 Blockchain Anonymity issue ✗ ✗
[21] 2021 Blockchain Anonymity issue ✗ ✗
[22] 2021 _k-anonymity system_ Prone to temporal, complementary release and unsorted matching attacks ✗ ✗
[23] 2021 Blockchain Elucidation key escrow problem ✗ ✗
Our 2022 GarliMediChain The overall computational cost of the proposed system model is not considered ✓ ✓
1: Years, 2: Techniques, 3: Limitations, 4: Consensus Protocols, 5: Robustness, 6: Efficiency, 7: Adaptability, ✓: Considered, ✗: Not considered
technology for implementing medical public communication
is also improving. As more researchers, academics and health
practitioners are expected to be involved, these problems become
more vital to the development of such technology in order to
alleviate the risk of transmission via public health awareness.
In this regard, we offer solutions to the issues mentioned above
and make the following contributions in this work:
1) We propose a privacy- and anonymity-preserving health
system for COVID-19 data sharing using garlic routing and
blockchain technology, known as GarliMediChain.
2) Trust within each coalition group is enforced using fictitious play, which enables users to update their
beliefs by selecting from the best responses of their
opponents' play.
3) A consensus mechanism, proof of epidemiology of interest
(PoEoI), is proposed for the generation of blocks and the
selection of miners.
4) The proposed system’s performance is analyzed, which
reveals that it is robust, efficient, and adaptive in the
presence of security-related threats.
The remaining part of the paper is organized as follows.
Section II presents the proposed system model while Section
III provides the security analysis of the system. Finally,
Section IV presents the conclusion with future work.
II. THE PROPOSED SYSTEM MODEL
In centralized solutions [2]–[5], control and utilization of
resources are possible. However, the problem of a single
point of failure and the high cost of computation may make
centralized solutions impractical in a real-world scenario,
especially when the number of IoMT devices increases. Also,
solutions based on centralization do not solve the problem of
decision making, especially when the patients involved have
divergent opinions. Furthermore, in consolidated solutions the
centralized system manages each patient's transaction records,
and patients are subjected to additional judicial oversight. With
our proposed solution, each patient has a copy of and control
over their transactions, which is not achievable with a centralized
system. Therefore, the scenario considered in this study solves
the above-mentioned problems of centralized solutions. The
proposed system model is depicted in Fig. 1. From the figure,
the proposed system model consists of five key components:
IoT devices, edge nodes, garlic routing, a consortium blockchain
system, and coalition groups. These components are discussed as follows.
[Figure: health data centres, a TrustNode, IoT gateway networks, garlic routing, a consortium blockchain, and coalitions 1, . . ., N of edge nodes with attached IoT devices, connected by bi-directional communication.]
Fig. 1: The anonymous IoT-based e-health monitoring system
_A. Edge Nodes_
Edge computing was introduced to intelligently connect
several IoT devices and remote servers, including data centres
[24]. It allows the efficient management and processing of
load, and the data storage handled by edge nodes. This
makes edge nodes increasingly sophisticated and
smart. In the existing literature [24], the cloud system plays a central
role in data analysis and management of edge nodes. Besides,
edge nodes are just meant to relay and filter remote data to
the cloud system, not to undertake in-depth data analysis.
Furthermore, edge nodes provide content caching, persistent
storage and service delivery. However, distributing edge nodes
across different networks brings problems of security, privacy,
anonymity and single points of failure. To address these
problems, we introduce blockchain technology, which is
discussed in Section II-C.
-----
_B. Garlic Routing_
The layered encryption process of the proposed anonymous IoT healthcare system, shown in Fig. 2, comprises a set of source
nodes (senders), a set of intermediate nodes and a set of
destination nodes (receivers). Any node in the source nodes
can communicate with a node in the destination nodes via the
intermediate nodes. Before communication is established, a
trusted node, known as TrustNode, is selected based on its
credibility among other nodes. TrustNode is responsible for
setting up the system credentials, which include a pair of keys
(i.e., private and public keys), blind certificates, pseudonyms
and a path selection model. The system credentials are initialized before any nodes can communicate with each other, to
mitigate fraudulent dealings in the proposed system. The
pair of keys is used for encrypting and decrypting multiple
messages before and after transmission, the blind certificates
are used to ensure the authenticity of transmitted messages,
and the pseudonyms are used to provide anonymity of entities
during communication. A path is randomly chosen for each
transmission to prevent the same path from being used repeatedly. This prevents
network traffic analysis attacks [25] and also ensures the
anonymity of the entities involved during data sharing.
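The random path selection described above can be sketched as follows; the function name, the node names, and the two-hop tunnel length are illustrative assumptions, not the paper's actual protocol.

```python
import random

def select_path(intermediate_nodes, hops=2, rng=random):
    """Draw a fresh, ordered tunnel of `hops` distinct intermediate nodes
    for every transmission, so that no fixed path can be profiled by a
    traffic analyst."""
    if hops > len(intermediate_nodes):
        raise ValueError("not enough intermediate nodes for the tunnel")
    return rng.sample(intermediate_nodes, hops)

# Each new message gets an independently drawn path, e.g. one of the
# CDC A -> ... -> CDC F tunnels in Fig. 2.
nodes = ["CDC-B", "CDC-C", "CDC-D", "CDC-E"]
path = select_path(nodes, hops=2)
```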
[Figure: CDC A (sender) wraps a full clove and forwards it via IoT gateways, edge devices and the blockchain network through CDCs B–E; clove #1 carries the request message and clove #2 the response message, unwrapped only at CDC F (receiver) over bi-directional inbound tunnels.]
Fig. 2: The anonymous IoT-based e-health system layered encryption process
A method called garlic routing, as defined by I2P [26], is
used in the proposed GarliMediChain system. Garlic routing is
a private network that hides senders’ and recipients’ identities.
Within a garlic routing network, numerous messages are
encased in layers of encryption structure. The GarliMediChain
system employs the onion routing concept, allowing the recipient to decode a packet by unfolding one layer of the encryption
structure across a one-way tunnel [8]. In garlic routing, each
sender encodes the packets, referred to as “cloves.” Before
being sent between nodes, the encoded cloves are encased in
a structure of predetermined size termed a “garlic.” The destination node is
the only node that decodes each clove, making it undetectable
to the other nodes, which simply relay the clove to the next
hop in the network. In this paper, nodes and centres for disease
control (CDCs) are used interchangeably.
In Fig. 2, CDC A can select multiple paths, CDC B −→ CDC C
and CDC D −→ CDC E, for forwarding packets to CDC F.
Identity-based encryption is used to safeguard the identities of
nodes in this paper, inspired by the work in [8]. Let the set of
source nodes be defined as SN ≜ {sn = 1, 2, 3, . . ., SN}, the
set of intermediate nodes as IMN ≜ {imn = 1, 2, 3, . . ., IMN},
and the set of destination nodes as DN ≜ {dn = 1, 2, 3, . . ., DN}.
To avoid verbosity, the
proposed GarliMediChain system has a similar architecture
with the work presented in [8]. Fig. 3 shows the processes and
relationships between the protocols and analyses. From the figure,
each IoT device requests login credentials from the TrustNode
through the registration protocol in step (1).
In step (2), the TrustNode requests the session, private and public
keys of all nodes from the layered encryption protocol. The
keys generated by the layered encryption protocol are sent to the
IoT users via the TrustNode in steps (3) and (4). The IoT user
obtains a list of path sets from the path selection protocol in steps (5)
and (6). Steps (7), (8) and (9) enable IoT users to encrypt the
message and route it via the intermediate nodes to the destination
node, while the destination node decrypts the message using
its private key.
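The credential issuance of steps (1)–(4) might be sketched as below. The `TrustNode` class, the pseudonym format, and the random placeholder key material are assumptions for illustration only; they do not reproduce the paper's identity-based encryption scheme.

```python
import secrets
from dataclasses import dataclass

@dataclass
class Credentials:
    pseudonym: str      # hides the real device identity during routing
    session_key: bytes  # symmetric key for this session
    public_key: bytes   # placeholder bytes standing in for an IBE key pair
    private_key: bytes

class TrustNode:
    """Issues system credentials before any node may communicate
    (steps (1)-(4)); all key material here is random placeholder bytes."""

    def __init__(self):
        self._registry = {}  # pseudonym -> real id, known only to TrustNode

    def register(self, device_id: str) -> Credentials:
        creds = Credentials(
            pseudonym=f"anon-{secrets.token_hex(4)}",
            session_key=secrets.token_bytes(16),
            public_key=secrets.token_bytes(32),
            private_key=secrets.token_bytes(32),
        )
        self._registry[creds.pseudonym] = device_id
        return creds

tn = TrustNode()
creds = tn.register("iot-device-7")  # step (1): request login credentials
```

Only the TrustNode can map a pseudonym back to a real device, which is what later allows defaulters to be traced during auditing.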
[Figure: message flow between an IoT device, the registration, layered encryption and path selection protocols, and the destination IoT device, covering steps (1)–(9) from the credential request to message decryption.]
Fig. 3: A sequence diagram showing the processes and relationships between the different protocols of the proposed system model
_C. Consortium Blockchain System_
In medical edge computing, data sharing from controllers
to patients may cause problems such as insecurity and a lack of both
privacy and trust. Blockchain is one of the plausible solutions
to efficiently address the above-mentioned problems. In the
blockchain, all messages are broadcasted and communicated
in a distributed and decentralized fashion. These messages are
written onto the blockchain in an immutable manner and can
be audited and verified by entities in the network. In this
study, we aim to combine the advantages of edge computing, garlic routing and fictitious play with blockchain. Also,
all calculations are performed within the proposed network
and off-chain. It means that the computations are performed
distributively by using edge computing, which minimizes the
overall computing cost of the proposed system model. Note
that the validation of transactions, selection of miners and
consensus protocol are discussed as follows.
_1) Validator Selection Process: Inspired by [27], two types_
of blockchain nodes are considered in this paper: evaluator
and validator. Hospitals who take and transmit ledger data
are represented by evaluator nodes, and every CDC in the
blockchain network is a node. All nodes have a greater
probability of becoming validator nodes, allowing them to
be part of the consensus process. Validators are nodes on
the blockchain that send block confirmation messages to the
rest of the nodes in the network. They are chosen from
a list of highly credible nodes. Any validator with high
credibility is qualified to write a block onto the blockchain,
and is referred to as a TrustNode. TrustNode digitally
signs and hashes a hospital’s record before submitting it to
the blockchain. The signed record is stored in the blockchain
as a candidate block transaction. Hospitals rate CDCs based
on their current performance, and each TrustNode saves a
copy of its network’s credibility scores. A node becomes a
validator node in the PoEoI consensus protocol only when
its credibility score exceeds the defined credibility threshold
value, which lies between “0” and “1”. The defined threshold
value in this paper is assumed to be 0.6. We are not
constrained to this threshold value; it can be chosen
dynamically. The validator nodes do not include nodes with
credibility scores less than the defined threshold value.
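The threshold rule above can be sketched as a simple filter; the function name and the example scores are hypothetical, and only the 0.6 threshold comes from the text.

```python
def select_validators(credibility, threshold=0.6):
    """Nodes whose credibility score (in [0, 1]) strictly exceeds the
    threshold become validators; the most credible one acts as the
    TrustNode."""
    validators = {node: s for node, s in credibility.items() if s > threshold}
    trust_node = max(validators, key=validators.get) if validators else None
    return validators, trust_node

scores = {"CDC-A": 0.82, "CDC-B": 0.55, "CDC-C": 0.91, "CDC-D": 0.60}
validators, trust_node = select_validators(scores)
# CDC-B and CDC-D do not strictly exceed 0.6, so they are excluded.
```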
_2) Processes for Creating and Validating Blocks: With the_
assistance of some validators, the TrustNode validates the
candidate block. As soon as the candidate block arrives from
the TrustNode, each validator compares its signature to the
signature of the preceding block that was remotely stored.
After successful verification, validators on the blockchain network broadcast confirmation messages. The TrustNode then
provides the necessary epidemiological data sharing service
to the hospitals and opens a new transaction for it. When the
TrustNode receives (N − NCN) confirmation messages with
all validators' signatures attached, a new block is created. The
number of malicious nodes is denoted by NCN. If (N − NCN)
confirmation messages are received, the block is published;
otherwise, it is not. In chronological order, a new
block is added to the blockchain. The system is considered
attacked if the TrustNode does not properly store the data.
Nodes can provide puzzle solutions on the blockchain, which
are random nonces that solve the cryptographic hash puzzles
of the proof-of-work (PoW) consensus protocol [28]. The
difficulty of PoW is unrelated to the network nodes' credibility.
By solving the puzzle [28], a node on the blockchain can
create a new block.
H(nonce || H(bh)) ≤ f(CS(n)) · target, (1)

where || denotes "append", H(·) signifies a cryptographic hash
function, bh denotes a block header, and f(·) denotes
a function that produces the puzzle difficulty. During each
consensus process, target is the system’s difficulty target for
all validators. The TrustNode, the quickest validator on the
blockchain to answer the cryptographic puzzle, broadcasts
the candidate block across the network, while
the other validators evaluate the correctness of the nonce
that generates the candidate block. If the validation procedure
succeeds, the validators are in agreement. In a
linear order, the newly produced block in the blockchain is
linked to the preceding block. After that, each blockchain node
updates its record in order to keep track of the information of
the newly created block [28].
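A minimal sketch of the credibility-scaled puzzle in Eq. (1) might look as follows; taking f(·) as a linear easing (more credible validators face an easier target) and SHA-256 as H(·) are assumptions, since the paper fixes neither. The sketch also returns the observed hash rate as successful attempts over elapsed time.

```python
import hashlib
import time

def mine(block_header: bytes, credibility: float, base_target: int,
         max_tries: int = 200_000):
    """Search for a nonce satisfying H(nonce || H(bh)) <= f(CS(n)) * target
    (Eq. (1)), with a hypothetical linear f and SHA-256 standing in for H."""
    target = int(credibility * base_target)        # f(CS(n)) * target
    inner = hashlib.sha256(block_header).digest()  # H(bh)
    start = time.perf_counter()
    for nonce in range(max_tries):
        digest = hashlib.sha256(nonce.to_bytes(8, "big") + inner).digest()
        if int.from_bytes(digest, "big") <= target:
            elapsed = max(time.perf_counter() - start, 1e-9)
            # hash rate = attempts made over elapsed time
            return nonce, (nonce + 1) / elapsed
    raise RuntimeError("no nonce found within max_tries")

nonce, hash_rate = mine(b"candidate-block-header", credibility=0.8,
                        base_target=2**252)
```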
_3) Properties of the Proposed System: In this study, the_
computational power of the proposed system is measured
based on its hash rate. The hash rate of the system is calculated
as the ratio of successful nonces to the total elapsed
time. Other salient properties of the proposed system
are discussed as follows.
1) Security: The security of the proposed system is determined based on blockchain and garlic routing. The
blockchain used in this study is a consortium system
where access control is used to limit the number of unauthorized users. Here, only users with valid credentials
can authenticate and have access to the system. In addition, identity-based encryption mechanism is adopted
for the encryption of session keys and messages before
they are transmitted over the network. Note that only
the intended users can decrypt the messages even if they
are sent to the intermediate nodes for routing to the next
hop.
2) Scalability: The scalability of the proposed system is
determined by the number of coalitions created. This means
that more nodes can be added to the system without necessarily increasing its computing cost.
3) Throughput: The throughput of the system depends
on the system’s efficiency and to avoid verbosity, see
discussion in Section II-C5.
4) Resource Management: The proposed system uses an application-intensive consensus mechanism, which requires
minimal energy resources compared to the PoW
consensus mechanism, which is CPU-intensive [18].
This means that the proposed system does not require
high computational power for mining and adding
blocks to the blockchain. Moreover, in the future, we intend
to consider the overall computational cost by proposing
an efficient optimization method.
The benefit of employing blockchain for the anonymity and
privacy of patients’ information is discussed as follows. The
traditional anonymity method [22] does not guarantee trust
in information. Also, it may violate the privacy of the data
owners. Furthermore, it may lead to homogeneity and background knowledge attacks. Whereas, the traditional privacy
method may create the problem of data accuracy. Therefore,
to solve these problems, blockchain is employed in this study
to ensure the trust of information and privacy while garlic
routing provides anonymity to patients.
_4) Credibility of the Trusted Node: In this paper, it is_
assumed that TrustNode can either behave honestly or maliciously. To prevent the malicious behavior of TrustNode,
a credibility method is adopted. Here, every node in the
network is allowed to participate in the evaluation process
of TrustNode. The evaluation process considered in this
work includes direct and indirect evaluations. In the direct
evaluation process, a rating score between [0, 1] is awarded
to TrustNode while for the indirect evaluation process, the
historical honest behavior of TrustNode is used for assessing
its credibility. Direct evaluation is prone to feedback
sparseness and misjudgment [29]; however, in this study, time
relevance is incorporated in the evaluation process to prevent
misjudgment. If TrustNode receives a rating score between
0 and 5, it means that TrustNode is involved in malicious
activity; otherwise, a rating score between 5 and 10 is awarded
to TrustNode, which means that it has honest behavior.
For the indirect evaluation, trust recommendation from other
nodes is used to determine the honest behavior of TrustNode.
The historical honest behavior of TrustNode is measured on
the basis of two consecutive high rating scores that are above
5.
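A sketch of the two evaluations, with time relevance weighting the direct ratings, is given below. The exponential decay rate and the function names are assumptions; the 0–10 rating scale and the two-consecutive-scores-above-5 rule follow the text.

```python
import math

def direct_credibility(ratings_with_time, now, decay=0.1):
    """Time-weighted mean of direct ratings on the 0-10 scale; recent
    ratings count more, mitigating misjudgment from stale feedback.
    The exponential `decay` rate is an assumed parameter."""
    num = den = 0.0
    for score, t in ratings_with_time:
        weight = math.exp(-decay * (now - t))
        num += weight * score
        den += weight
    return num / den if den else 0.0

def historically_honest(history):
    """Indirect evaluation: two consecutive rating scores above 5
    indicate historical honest behaviour, as in the text."""
    return any(a > 5 and b > 5 for a, b in zip(history, history[1:]))
```

With this weighting, a recent low rating pulls the credibility of a TrustNode down faster than an old one would.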
_5) The Proposed Protocol for Proof of Epidemiology of_
_Interest: The proposed PoEoI protocol is based on the addition_
number game, as shown in Fig. 4. The steps for playing the
[Figure: a number board from 22 to 115 and a sample game: Player 1: (11) + (22) = 33; Player 2: (12) + (33) = 45; Player 1: (13) + (45) = 58; Player 2: (14) + (58) = 72; Player 1: (14) + (72) = 86; Player 2: (14) + (86) = 100.]
Fig. 4: The proposed addition number game
game are described as follows.
• Step 1: To start the game, a binary number, "1" or "0",
is generated by the system. If a player draws "1", that
player can start the game; if "0" is drawn, the player
cannot start the game.
• Step 2: The winner of the draw begins the game by choosing
x ←R X, meaning that a number x is chosen at random from
the set of numbers X contained in the number cards, where
X = {11, 12, 13, 14, 15}. The player adds the number x to
any number on the number board and then returns x to the
other four numbers in the number cards.
• Step 3: The second player picks x ←R X and adds it to
the sum obtained by the first player.
• Step 4: The players continue to alternately add x ←R X
to the sum obtained by the opponent.
• Step 5: The game continues until one player obtains an
overall total of 100 or beyond; that player
is declared the winner of the game.
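The five steps can be simulated directly; the random seed and the function name are illustrative choices.

```python
import random

CARDS = [11, 12, 13, 14, 15]  # the set X of number cards

def play_addition_game(rng):
    """Simulate the PoEoI leader-selection game: a binary draw picks the
    starter, who adds a card value to a board number; players then
    alternately add random card values until one reaches 100 or more."""
    player = rng.choice([0, 1])            # step 1: binary draw
    total = rng.choice(range(22, 116))     # opening number from the board
    while True:
        total += rng.choice(CARDS)         # steps 2-4: add x drawn from X
        if total >= 100:                   # step 5: first to 100+ wins
            return player, total
        player = 1 - player

winner, final_total = play_addition_game(random.Random(7))
```

Seeding the generator makes a round reproducible, which is convenient when the outcome must be checked by other CDCs.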
The strategies of this game are: (1) every player has an
equal opportunity to pick the random binary number
generated by the system at the start of the game; and (2) a
winner is declared once a player reaches a total of 100 or above against its
opponent. In this study, the proposed GarliMediChain system
is resilient because a fault in any CDC does not affect the
overall operation of the network. The algorithm of the proposed
GarliMediChain system is given in Algorithm 1. Moreover,
the properties of the proposed PoEoI consensus protocol are
given as follows.
_Efficiency: The efficiency of the proposed GarliMediChain_
system is evaluated in this research based on the time it takes
each CDC to respond to or request EoI either in the same
coalition group or different coalition groups (see Section II-D).
We consider the communication time CT, delivery time DT
and the total cost for requesting EoI, which is defined as
_EoIC = CT + DT ._ (2)
Let the system’s throughput be represented as Rp and suppose
that the request of EoI from any CDC is greater than Rp,
then the system is said to be overloaded with requests (i.e,
_Rp <_ _[D]N[T]_ ). The actual time taken ACT, also known as the
elapsed time, for any CDC to provide an authentic EoI is
defined as
_ACT = DT + FT,_ (3)
where FT is the function of CDC request for EoI and the total
number of CDCs.
FT = N / DT, (4)
where N is the number of CDCs. If Rp < FT, the system is
saturated and ACT will grow infinitely.
_Robustness:_ The estimated cost incurred if the proposed
GarliMediChain system fails, multiplied by the probability
of the failure, is referred to as the robustness of the GarliMediChain
system. Let PrCDC denote the probability that a CDC may
fail to respond or supply EoI, CCDC represents the cost of
reassigning the request of EoI from another CDC and CL is
the cost of losing the request for EoI from a single CDC. Thus,
the weakness of the system, denoted as Wsys, is defined as
_Wsys = PrCDCCCDC + PrCDCCL._ (5)
_Adaptability:_ In the GarliMediChain system, the ability to keep
records of transactions up to date in such a way that any fault can
be detected easily in real time is referred to as adaptability.
The capacity Ccap of the proposed GarliMediChain
system to keep records is defined as

Ccap = N / DT + 1 / Rp. (6)
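Eqs. (2)–(6) can be evaluated together as below; all input numbers are illustrative, not measurements from the paper.

```python
def performance_metrics(CT, DT, N, Rp, Pr_cdc, C_cdc, C_l):
    """Evaluate the efficiency, robustness and adaptability expressions
    of Eqs. (2)-(6); every argument is an illustrative number."""
    EoI_cost = CT + DT                       # Eq. (2): EoIC = CT + DT
    FT = N / DT                              # Eq. (4): request load factor
    ACT = DT + FT                            # Eq. (3): actual elapsed time
    saturated = Rp < FT                      # past this point ACT grows unboundedly
    W_sys = Pr_cdc * C_cdc + Pr_cdc * C_l    # Eq. (5): weakness of the system
    C_cap = N / DT + 1 / Rp                  # Eq. (6): record-keeping capacity
    return {"EoI_cost": EoI_cost, "ACT": ACT, "saturated": saturated,
            "W_sys": W_sys, "C_cap": C_cap}

m = performance_metrics(CT=0.4, DT=2.0, N=10, Rp=8.0,
                        Pr_cdc=0.05, C_cdc=3.0, C_l=1.0)
```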
In this paper, the proposed PoEoI consensus protocol is
explained in the following phases, as shown in Fig. 5.
_Request Phase: Each hospital initially requests EoI to the_
winner CDC. The winner CDC checks the authenticity of the
request before processing it. Afterwards, it sends the prepare
EoI message to other CDCs for validation.
_Prepare Phase: The CDCs broadcast a prepared EoI mes-_
sage to each other while checking for its validity. When a
CDC receives 2n valid EoI messages from different CDCs, the “prepare
phase” is completed, where n ∈ N.
_Commit Phase: Each CDC broadcasts a commit EoI mes-_
sage to one another for validation. Once the number of commit
EoI messages is greater than 2n +1, the EoI message is added
to the blockchain.
Fig. 5: Phases of proposed PoEoI consensus protocol
_Response Phase: In this phase, when the hospital receives_
2n + 1 of the same reply of the EoI messages, the consensus
is completed.
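The message-count thresholds of the four phases can be sketched as a simple check; reading n as the fault-tolerance parameter is our interpretation of the text's "n ∈ N".

```python
def poeoi_round(valid_prepares, commits, matching_replies, n):
    """Check the PoEoI phase thresholds: 2n valid prepare messages,
    strictly more than 2n + 1 commit messages, and 2n + 1 matching
    replies at the hospital, following the phase descriptions above."""
    prepared = valid_prepares >= 2 * n
    committed = prepared and commits > 2 * n + 1
    responded = committed and matching_replies >= 2 * n + 1
    return prepared, committed, responded

# With n = 1: 2 prepares, 4 commits (> 3) and 3 matching replies suffice.
result = poeoi_round(valid_prepares=2, commits=4, matching_replies=3, n=1)
```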
_D. The Coalition Group_
In this paper, each CDC can accumulate a larger quantity of
epidemiological information EI, which comprises a set of
actions, represented by ACDC, denoting the information-sharing
actions that a CDC is willing to perform. Moreover,
ACDC = {P, EDavail} defines the available epidemiological
data EDavail at negotiation prices P. The values of P depend
on the total amount of epidemiological data collected by each
CDC after calculating Rev = F(EI), where F(EI) is the
function of the shared data and EI = Σi∈N ei such that
ei ∈ EI is the i-th piece of shared information. Moreover, Rev is the
revenue of the CDC. At any given time slot t, the utility of a CDC
as a function of ei is calculated as:
_Ui(ei) = q ln(ei),_ (7)
where q is the payment negotiation parameter. When a requester CDC's maximum quantity of epidemiological information EI is not met, it receives EI from other CDCs that are ready to contribute; the requester CDC is satisfied once the EI is received.
Note that Ui(ei) is the utility of a CDC, which is expected to be a concave non-decreasing function of ei, i.e., ∂U(ei)/∂ei ≥ 0 and ∂²U(ei)/∂ei² < 0.
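These two properties can be verified numerically for the logarithmic utility of Eq. (7). The sketch below uses finite differences; the value q = 0.6 follows the simulation setup later in the paper, and all identifiers are ours.

```python
# Sketch of the CDC utility in Eq. (7), U_i(e_i) = q * ln(e_i), with a
# finite-difference check of the stated properties: non-decreasing
# (first difference >= 0) and concave (second difference < 0).
import math

def utility(e: float, q: float = 0.6) -> float:
    return q * math.log(e)

e, h = 5.0, 1e-4
first = (utility(e + h) - utility(e)) / h                            # ~ q / e > 0
second = (utility(e + h) - 2 * utility(e) + utility(e - h)) / h**2   # ~ -q / e**2 < 0
print(first > 0 and second < 0)  # True
```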
_Definition 2.1: Each CDC depends on the EoI that is_
equivalent to its negotiation power.
The explanation of Definition 2.1 is that a CDC must collect more EoIs from infectious disease experts or the World Health Organization (WHO). The cost of collecting EoI depends on P and the investment cost. Each CDC wishes to maximize its profit by learning from or imitating other CDCs with the best strategies. Moreover, every CDC is expected to acquire the requisite knowledge on how to collect EoI through drug discovery, integrative medicine and vaccine development [16]. Otherwise, it has to get the required EoI alone or through negotiation with other CDCs.
_Proposition 2.1 (Optimal Response): Every CDC maximizes_
its utility by adopting and improving on the optimal strategies
of other CDCs.
_Proof 2.1: The proof of Proposition 2.1 is given as follows._
Each CDC initially believes that the other CDCs have the best strategies. This means that a CDC will fulfill its EoI by learning the opponents' play and strategy. Besides, it may deviate from existing strategies in order to optimize its payoff.
_Definition 2.2: Let P(S) = {a, S} be the joint policy that assigns all CDCs' joint state S = [s1, s2, . . ., si] to the i actions A = [a1, a2, . . ., ai]._
_E. Fictitious Play_
In game theory, fictitious play is a type of learning paradigm in which CDCs are confronted with an uncertain distribution of their opponents' strategies. For example, even when a CDC is fully engaged in coalition activities, it is conceivable for the CDC to depart from those activities to maximize its utility. As a result, each CDC monitors the opponents' play techniques and updates its belief by selecting the optimum response to their play. In terms of action a, the total utility of the CDCs for engaging in coalition S is expressed as TU.
TU(S) = max_{i ≤ N} { Σ_{a_i ∈ a_S} U_i(a_i) }. (8)

TC = Σ_{a∈S} TU(S) is used to calculate the total coalition value.

Using the fictitious play, a CDC can monitor the behavior of other CDCs by learning the collection of random probability distributions pr_1, pr_2, pr_3, . . ., pr_i. Each distribution pr_i is drawn at random from [0, 1] and defines the probability law for the random variables, so that Σ_{i=1}^{N} pr_i = 1. According to the fictitious play, a CDC must calculate pr_i by taking into account a count c_i for each action that corresponds to EI. As a result, it is defined as

F_p = c_i / Σ_{i=1}^{N} c_i. (9)

Note that the demand for EoI by any CDC is uncertain and must conform to the supply of EoI by other CDCs. Similarly, a requester CDC may negotiate with other CDCs by developing a probability pr_i for each negotiation. Thus, Eq. (8) is redefined as:

TU(S) = max_{i ≤ N} { Σ_{a_i ∈ a_S} pr_i U_i(a_i) }. (10)

-----

**Algorithm 1: The proposed GarliMediChain Algorithm**
**Input: Number of CDCs**
**Output: CDC's strategies**
**1 set i = 1**
**2 if ∃(n_i ∈ N == 0) then**
**3** Return CDC_i
**4 else**
**5** Return CDC such that
**6** TU(S) = max_{i ≤ N} { Σ_{a_i ∈ a_S} U_i(a_i) },
**7** when fictitious play ends;
**8** **foreach Coalition group do**
**9** Set the negotiation price;
**10** Create a list of CDCs who are willing to share EoI;
**11** Get a list of CDCs that require EoI;
**12** Get the leader of the coalition group using the proposed addition number game;
**13** Implement the PoEoI consensus protocol to add a block to the blockchain;
**14** Evaluate the system's performance based on robustness, adaptability and efficiency;
**15** Update TU(S) as TU(S) = max_{i ≤ N} { Σ_{a_i ∈ a_S} pr_i U_i(a_i) }.

III. SECURITY ANALYSIS

The proposed GarliMediChain system is subjected to a security assessment in this section. The analysis is based on threats to information systems, which include Sybil attacks and double-spending attacks. Besides, there are other attacks, such as distributed denial-of-service (DDoS) and man-in-the-middle attacks; these attacks are prevented by the proposed system model. A DDoS attack occurs when the network is overwhelmed with bogus traffic (e.g., a centralized system is most vulnerable to this type of attack), thereby making the system malfunction [31]. The proposed model is a distributed system, which means that the failure of any node does not affect the system. The advantage of the proposed system is that every node has the same copy of the ledger. A man-in-the-middle attack happens when an intruder intercepts the communication for the purpose of exploiting a vulnerability of the system [32]. This type of attack occurs when the intruder has knowledge of the proposed system. In this study, it is impossible for an intruder to intercept the network because of the architectural design of the system. Also, the consensus mechanism makes it difficult for an intruder to modify the information, because all information, in the form of transactions, must be validated and authenticated by the majority of the network.
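The fictitious-play mechanism described in Section II — empirical action frequencies as beliefs (Eq. (9)) and a best response to the expected utility under those beliefs (Eq. (10)) — can be sketched as follows. All identifiers, the observed action history, and the toy utility function are illustrative assumptions, not the authors' implementation.

```python
# Sketch of fictitious play: beliefs pr_i are empirical frequencies of
# observed opponent actions (Eq. 9), and a CDC best-responds to the
# expected utility under those beliefs (Eq. 10). Illustrative only.
from collections import Counter

def beliefs(observed_actions):
    """Eq. (9): F_p = c_i / sum of counts over all observed actions."""
    counts = Counter(observed_actions)
    total = sum(counts.values())
    return {a: c / total for a, c in counts.items()}

def best_response(pr, utility):
    """Pick the action maximizing pr_i * U_i(a_i), as in Eq. (10)."""
    return max(pr, key=lambda a: pr[a] * utility(a))

pr = beliefs(["share", "share", "withhold", "share"])
print(pr["share"])                                        # 0.75
print(best_response(pr, lambda a: 1.0 if a == "share" else 2.0))  # share
```

Even though "withhold" has the higher raw utility in this toy example, the belief-weighted expected utility favors "share" (0.75 × 1.0 > 0.25 × 2.0), illustrating how the observed frequencies shape the best response.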
Before performing the security analysis, a threat model is
designed for the proposed system.
_A. Threat Model_
A threat model enables us to assess the security design and makes it easier to perform a risk assessment on the system.
However, there are no universal established principles for
designing a threat model [30]. In this research, we assume that the proposed GarliMediChain system is vulnerable to identity-based attacks and honest-but-curious adversaries. Furthermore, some CDCs in the proposed system may be honest, in the sense that they provide EoI voluntarily, while others may be malicious, in the sense that they purposefully exploit the system's vulnerabilities to cause harm. Moreover, some CDCs may intentionally fail to respond or may provide an incorrect EoI. The proposed PoEoI consensus protocol aims to safeguard against system failure by using coalition decision making (i.e., it involves data from both correct and incorrect CDCs), which reduces the number of defaulting CDCs. Note that the GarliMediChain system is resilient to both Sybil and double-spending attacks because of the PoEoI consensus protocol. The protocol ensures that the identity of each CDC is verified, which prevents the creation of fake identities. Besides, before any transaction is written onto the blockchain, it must be verified and authenticated by validators, which prevents double-spending attacks.
In this study, we categorize the security assessment of the
proposed system based on authentication attack, availability
attack, confidentiality attack, and controllability of the system,
as shown in Table II. Motivated by [33], the security assessment of the proposed GarliMediChain system is performed. To
prevent certain attacks on the proposed system, it is paramount
to give the security features of the blockchain nodes. Two
cases of attacks can be possible in this scenario: internal
and external attacks. The latter has no significant impact on
the system since blockchain and garlic routing is secured.
Moreover, our focus is on the former case, which occurs
when a malicious user gains entry into the system. The impact
of the attack may be degradation of patients’ information or
complete interruption of the system. Blockchain nodes' authentication is a vital part of the security architecture, as the CDCs form the nodes in the blockchain. Furthermore, because
they are real network users, CDCs may intentionally attack
the system by compromising its security. The availability
attack occurs when the blockchain nodes are not available
for negotiation and interaction (i.e., coalition formation). The
availability attack affects the performance and process of the
system, such as delays in communication. Typically, DDoS is
a kind of availability attack. To address this type of attack, a
request threshold, denoted Request_Threshold, is defined along with the maximum number of requests, Max_Request, as given in Algorithm 2. A confidentiality attack enables the
**Algorithm 2: DDoS mechanism**
**1 if Max Request > Request Threshold then**
**2** Alert the system for possible DDoS attack
**3 else**
**4** Allow communication to happen
malicious user to gain access to both patients and system
information when access right is not granted.
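The threshold check of Algorithm 2 amounts to a simple rate-limit test. A minimal sketch is given below; the function and parameter names are ours, and the surrounding windowing or counting policy is left out since the paper does not specify it.

```python
# Sketch of Algorithm 2: flag a possible DDoS attack when the observed
# number of requests exceeds a configured threshold; otherwise allow
# the communication to proceed. Identifiers are illustrative.

def check_ddos(max_request: int, request_threshold: int) -> str:
    if max_request > request_threshold:
        return "alert: possible DDoS attack"
    return "allow communication"

print(check_ddos(max_request=500, request_threshold=100))  # alert branch
print(check_ddos(max_request=50, request_threshold=100))   # allow branch
```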
_Definition 3.1: Considering that the malicious user gains access to the proposed system; then, it can exploit the system to determine its security, which is defined as follows:_

(1/N) Ψ⁻¹ ≤ θ, (11)

where Ψ is the number of unavailable nodes and θ is the degree of the availability attack.
-----
TABLE II: Security assessment of the proposed system model
_Theorem 3.1: The proposed system prevents availability_
attacks.
_Proof 3.1: The proof of Theorem 3.1 is presented as follows. Suppose that Eq. (11) does not hold. Then, the malicious user can conveniently exploit the vulnerability of the proposed system. However, a single point of failure is not possible in the proposed system, which means a failure in any node does not affect the entire system. Therefore, Theorem 3.1 is proven, which implies that the proposed system prevents availability attacks._
_Theorem 3.2: The proposed system prevents confidentiality_
attacks.
_Proof 3.2: The proof of Theorem 3.2 is presented as follows._
Suppose that Eq. 11 is true. Then, the malicious user has
access to the proposed system. Besides, it is difficult for the
malicious user to modify the private information of a patient
because it requires the login credential of that patient. Therefore, the proposed system prevents confidentiality attacks.
IV. SIMULATION RESULTS
_A. Evaluation of the proposed GarliMediChain System_
We implement the proposed system using Python 3.6.1. To create keys for the identity-based encryption, the Charm library is utilized, as well as a Crypto library for encryption and hashing [34]. The performance parameters considered in this paper to evaluate the proposed system are efficiency, robustness and adaptability; the evaluation is not limited to these parameters. The simulation results of the proposed GarliMediChain are provided in this section. The parameters used for the simulations are given in Table III, while the implementation can be found on GitHub¹.

TABLE III: The parameter values utilized in this paper

In Fig. 6, we consider 1000 CDCs for the analysis. The efficiency of the proposed system is the amount of time (in seconds) needed by a CDC to request EoI. From the figure, as the number of CDCs grows, the total cost of the system grows as well. This means that CT and DT are inversely proportional to each other, which has an impact on the total cost. Here, efficiency means that if 1000 CDCs decide to request EoI, their total cost is approximately 10³ seconds.

Fig. 6: Efficiency of the proposed GarliMediChain
In Fig. 7, the proposed system's adaptability is analyzed. Adaptability means that the system can store records and keep them up to date in real time, in a manner that allows any fault to be detected. From
the results in Fig. 7, it is observed that as the number of
CDCs increases, the total cost reduces, which means that the
system can store more records and keep them up-to-date in a
reasonable amount of time. Also, it means that the system has
a higher capacity to detect a fault in real-time.
In Fig. 8, the robustness of the proposed system is evaluated. We assume PrCDC = 0.6, CCDC = 30 seconds and CL = 60 seconds. This means that if a 60% probability of failure is encountered, the total cost increases along with the number of CDCs. Moreover, as the adversary tries to compromise more CDCs, its total cost increases proportionally.
In Fig. 9, the time taken by the system to respond to or request EoI is analyzed. According to the results, the elapsed time increases as the number of CDCs grows, which means that FT and DT are inversely proportional. Moreover, there is a tradeoff between elapsed time and communication time: as more CDCs wish to share EoI, communication time increases while elapsed time decreases. Besides, elapsed time and delivery time are proportional.
¹GitHub implementation of the proposed model: [https://github.com/omajiman/An-Anonymous-System-for-COVID-19-Information-Sharing-using-Blockchain-Technology](https://github.com/omajiman/An-Anonymous-System-for-COVID-19-Information-Sharing-using-Blockchain-Technology)
-----
Fig. 7: Adaptability of the proposed GarliMediChain

Fig. 8: Robustness of the proposed GarliMediChain

Fig. 9: Elapsed Time of the proposed GarliMediChain

_B. Evaluation of the Fictitious Play for CDCs_

In this section, the evaluation of the fictitious play for CDCs is provided. For the analysis, two CDCs are considered. Using Eq. (7), the total utility is calculated, and its value is shown in Fig. 10. The value of the total utility lies within 0 and 1, and the payment negotiation parameter is q = 0.6. The value of q is arbitrarily selected, which implies that there is a 60% probability of achieving a fair negotiation. From Fig. 10, it is observed that as the number of iterations increases, the total utility converges to a stable value after 200 iterations. This implies that both CDC1 and CDC2 consider the same strategy to update their beliefs by selecting the best responses of the play.

Fig. 10: Total utility versus number of iterations

_C. Evaluation of Security Analysis_

The results of the security analysis are given in this section. According to Eq. (11), we consider the degree of the availability attack θ = 0.6; in this study, if the probability is more than 0.6, an availability attack is highly possible. In Fig. 11, it is observed that as the number of unavailable nodes increases, the probability of attack reduces, which means that the degree of the availability attack reduces as well. Also, with a lower probability, it is difficult for a malicious user to compromise the proposed system.

Fig. 11: Probability versus number of unavailable nodes
-----
_D. Evaluation of the Proposed PoEoI Consensus Protocol_
In this paper, we compare our proposed PoEoI consensus protocol with the PoW consensus protocol [28] and the PoA consensus protocol [8]. As already discussed in Section II-C3, the hash rate is used to determine the computational cost per second of the proposed system. In Fig. 12, it is observed that the proposed PoEoI has a computational cost of 15.93%, which is lower than the 26.30% of the PoW and the 57.77% of the PoA consensus protocols, respectively. The reason for the high computational cost of the PoA consensus protocol is that the PageRank algorithm adds to the overall cost of the system. In Fig. 13, the number of nonces versus the elapsed time is given. It is observed that as the elapsed time increases, the nonce increases as well; hence, there is a direct relationship between nonce and elapsed time. Besides, the nonce determines the level of difficulty for mining a block in the blockchain. Thus, our proposed PoEoI consensus protocol generates the least number of nonces, which means that it is more efficient than the other existing protocols.
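The nonce-versus-difficulty relationship in Figs. 12–13 follows the usual hash-puzzle mining loop. The sketch below is the standard PoW-style technique, not the authors' implementation; the data and difficulty values are arbitrary.

```python
# Minimal hash-puzzle mining loop: increment a nonce until the SHA-256
# digest of (data + nonce) starts with `bits` zero hex digits. A harder
# target requires at least as many nonce trials, matching the
# nonce-vs-elapsed-time trend discussed above. Illustrative sketch only.
import hashlib

def mine(data: bytes, bits: int) -> int:
    nonce = 0
    target = "0" * bits
    while True:
        digest = hashlib.sha256(data + str(nonce).encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

easy = mine(b"block", 1)
harder = mine(b"block", 2)
# Any nonce meeting the harder target also meets the easier one, so the
# first nonce found for the easier target can never come later.
print(easy <= harder)  # True
```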
V. CONCLUSION
This study proposes a blockchain-based anonymous system
that provides anonymity and privacy of COVID-19 patients’
information in IoT. Garlic routing and blockchain have been
combined in the system to provide low-latency communication, privacy, anonymity, trust, and security. Additionally, COVID-19 data is encrypted numerous times before being sent to a series of network nodes. To facilitate secure COVID-19 information exchange, a blockchain-based coalition system is developed. The coalition method enables healthcare institutions to exchange data while simultaneously improving profitability. Furthermore, each institution uses the proposed fictitious play to examine other institutions' strategies and update its beliefs by choosing the best responses from them. The simulation findings demonstrate that the proposed system is robust, adaptive, and efficient, preventing an honest-but-curious health institution from attacking it. From the results, the PoEoI consensus protocol has a computational cost of 15.93%, compared to 26.30% for the PoW and 57.77% for the PoA consensus protocols, respectively.
In the future, we intend to analyze the overall cost of the proposed system, and its scalability will be investigated for real-time implementation. Furthermore,
we want to improve the proposed system in collaboration
with other health institutions, practitioners, and government
organizations.
ACKNOWLEDGMENT
We are thankful to Prof. Chung-Chian Hsu for his valuable
feedback in revision.
REFERENCES
Fig. 12: Computation cost versus number of bits
[1] Ray, P. P., Dash, D., Salah, K., & Kumar, N. (2020). Blockchain for IoT-based healthcare: background, consensus, platforms, and use cases. IEEE Systems Journal, 15(1), 85-94.
[2] Hu, F., Xie, D., & Shen, S. (2013, August). On the application of the
internet of things in the field of medical and health care. In 2013 IEEE
international conference on green computing and communications and
IEEE Internet of Things and IEEE cyber, physical and social computing,
2053-2058.
[3] Qian, Y., Shen, J., Vijayakumar, P., & Sharma, P. K. (2021). Profile
Matching for IoMT: A Verifiable Private Set Intersection Scheme. IEEE
Journal of Biomedical and Health Informatics, 25(10), 3794-3803.
[4] De Boer, T., & Breider, V. (2019). Invisible Internet Project (I2P), System
and Network Engineering, 1-16.
[5] Naik, A., Saksena, A., Mudliar, K., Kazi, A., Sukhija, P., & Pawar, R.
(2018, March). Secure Complaint bot using Onion Routing Algorithm
Concealing identities to increase effectiveness of complain bot. In 2018
Second International Conference on Electronics, Communication and
Aerospace Technology (ICECA), Coimbatore, India, 1777-1780.
[6] Parizi, R. M., Homayoun, S., Yazdinejad, A., Dehghantanha, A., & Choo,
K. K. R. (2019, May). Integrating privacy enhancing techniques into
blockchains using sidechains. In 2019 IEEE Canadian Conference of
Electrical and Computer Engineering (CCECE), Edmonton, AB, Canada,
1-4.
[7] Dakhnovich, A., Moskvin, D., & Zeghzda, D. (2019, March). An approach for providing industrial control system sustainability in the age of
digital transformation. In IOP Conference Series: Materials Science and
Engineering, 497(1), 1-10.
[8] Samuel, O., & Javaid, N. (2021). GarliChain: A privacy preserving
system for smart grid consumers using blockchain. International Journal
of Energy Research, 1-17.
[9] Precht, H., & Marx Gómez, J. (2021, October). Usage of Multiple Independent Blockchains for Enhancing Privacy Using the Example of the Bill of Lading. In International Congress on Blockchain and Applications, Springer, Cham, 300-309.
Fig. 13: Nonce versus elapsed time
-----
[10] Vaishya, R., Javaid, M., Khan, I. H., & Haleem, A. (2020). Artificial
Intelligence (AI) applications for COVID-19 pandemic. Diabetes &
Metabolic Syndrome: Clinical Research & Reviews, 14(4), 337-339.
[11] Jelodar, H., Wang, Y., Orji, R., & Huang, S. (2020). Deep sentiment
classification and topic discovery on novel coronavirus or covid-19 online
discussions: Nlp using lstm recurrent neural network approach. IEEE
Journal of Biomedical and Health Informatics, 24(10), 2733-2742.
[12] Gomez-Exposito, A., Rosendo-Macias, J. A., & Gonzalez-Cagigal, M.
A. (2021). Monitoring and tracking the evolution of a viral epidemic
through nonlinear kalman filtering: Application to the covid-19 case.
IEEE Journal of Biomedical and Health Informatics.
[13] Samuel, O., Omojo, A.B., Onuja, A.M., Sunday, Y., Tiwari, P., Gupta,
D., Hafeez, G., Yahaya, A.S., Fatoba, O.J., & Shamshirband, S. (2022).
IoMT: A COVID-19 Healthcare System driven by Federated Learning and
Blockchain. IEEE Journal of Biomedical and Health Informatics 1-12.
[14] Chen, Y. C., Lu, P. E., Chang, C. S., & Liu, T. H. (2020). A time-dependent SIR model for COVID-19 with undetectable infected persons. IEEE Transactions on Network Science and Engineering, 7(4), 3279-3294.
[15] Chayu, Y., & Jin, W. (2020). A Mathematical Model for the Novel
Coronavirus Epidemic in Wuhan, China. Mathematical Biosciences and
Engineering, 17(3), 2708-2724.
[16] Abhimanyu, S. A., Vineet P. R., Oge M. (2020). Artificial intelligence
and COVID-19: A multidisciplinary approach. Integrative Medicine Research, 9(1), 1-3.
[17] Marbouh, D., Abbasi, T., Maasmi, F., Omar, I. A., Debe, M. S.,
Salah, K., & Ellahham, S. (2020). Blockchain for COVID-19: review,
opportunities, and a trusted tracking system. Arabian Journal for Science
and Engineering, 1-17.
[18] Sharma, A., Bahl, S., Bagha, A. K., Javaid, M., Shukla, D. K., &
Haleem, A. (2020). Blockchain technology and its applications to combat
COVID-19 pandemic. Research on Biomedical Engineering, 1-8.
[19] Kalla, A., Hewa, T., Mishra, R. A., Ylianttila, M., & Liyanage, M.
(2020). The role of blockchain to fight against COVID-19. IEEE Engineering Management Review, 48(3), 85-96.
[20] Xu, H., Zhang, L., Onireti, O., Fang, Y., Buchanan, W. J., & Imran,
M. A. (2020). Beeptrace: Blockchain-enabled privacy-preserving contact
tracing for covid-19 pandemic and beyond. IEEE Internet of Things
Journal, 8(5), 3915-3929.
[21] Choudhury, H., Goswami, B., & Gurung, S. K. (2021). Covidchain: An anonymity preserving blockchain based framework for protection against covid-19. Information Security Journal: A Global Perspective, 30(5), 257-280.
[22] Sowmiya, B., & Poovammal, E. (2021). A Heuristic K-Anonymity
Based Privacy Preserving for Student Management Hyperledger Fabric
blockchain. Wireless Personal Communications, 1-18.
[23] Kumar, M., & Chand, S. (2021). MedHypChain: A patient-centered
interoperability hyperledger-based medical healthcare system: Regulation
in COVID-19 pandemic. Journal of Network and Computer Applications,
179, 102975.
[24] Jutila, M. (2016). An adaptive edge router enabling internet of things.
IEEE Internet of Things Journal, 3(6), 1061-1069.
[25] Al-Naami K, El Ghamry A, Islam MS, Khan L, Thuraisingham BM,
Hamlen KW, Alrahmawy M, Rashad M. BiMorphing: A bi-directional
bursting defense against website fingerprinting attacks. IEEE Transactions
on Dependable and Secure Computing. 2019 1-15.
[26] Ye L, Yu X, Zhao J, Zhan D, Du X, Guizani M. (2018). Deciding your
own anonymity: user-oriented node selection in I2P. IEEE Access. 2018
Nov 16;6:71350-9.
[27] Samuel, O., Javaid, N., Almogren, A., Javed, M. U., Qasim, U., &
Radwan, A. (2022). A Secure Energy Trading System for Electric
Vehicles in Smart Communities using Blockchain. Sustainable Cities and
Society, 79, 1-21.
[28] Wang, Y., Su, Z., & Zhang, N. (2019). BSIS: Blockchain-based secure
incentive scheme for energy delivery in vehicular energy network. IEEE
Transactions on Industrial Informatics, 15(6), 3620-3631.
[29] Kong, W., Li, X., Hou, L., Yuan, J., Gao, Y., & Yu, S. (2022). A Reliable
and Efficient Task Offloading Strategy Based on Multi-feedback Trust
Mechanism for IoT Edge Computing. IEEE Internet of Things Journal.
[30] Egoshin, N. S., Konev, A. A.& Shelupanov, A. A. (2020). A Model of
Threats to Confidentiality of Information Processed in Cyberspace based
on Information Flows Model. Symmetry, 1-18.
[31] Mohapatro, M., & Snigdh, I. (2021). An Experimental Study of Distributed Denial of Service and Sink Hole Attacks on IoT based Healthcare
Applications. Wireless Personal Communications, 121(1), 707-724.
[32] Salem, O., Alsubhi, K., Shaafi, A., Gheryani, M., Mehaoua, A., & Boutaba, R. (2021). Man-in-the-Middle Attack Mitigation in Internet of Medical Things. IEEE Transactions on Industrial Informatics, 18(3), 2053-2062.
[33] Khalid, A., Kirisci, P., Khan, Z. H., Ghrairi, Z., Thoben, K. D., &
Pannek, J. (2018). Security framework for industrial collaborative robotic
cyber-physical systems. Computers in Industry, 97, 132-145.
[34] Akinyele, J. A., Garman, C., Miers, I., Pagano, M. W., Rushanan, M.,
Green, M., & Rubin, A. D. (2013). Charm: a framework for rapidly
prototyping cryptosystems. Journal of Cryptographic Engineering, 3(2),
111-128.
-----
|
{
"disclaimer": "Notice: The following paper fields have been elided by the publisher: {'references'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1109/JSYST.2022.3170406?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1109/JSYST.2022.3170406, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": null,
"status": "GREEN",
"url": "https://research.aalto.fi/files/83948601/Tiwari_An_Anonymous_IoT_Based_E_Health_Monitoring_System_Using_Blockchain_Technology.pdf"
}
| 2,023
|
[
"JournalArticle"
] | true
| 2023-06-01T00:00:00
|
[] | 16,646
|
en
|
[
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Economics",
"source": "s2-fos-model"
},
{
"category": "Business",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/ffb5f95ec374f6923df89c76f7c82eca8d24fb0d
|
[] | 0.886857
|
CryptoAR: scrutinizing the trend and market of cryptocurrency using machine learning approach on time series data
|
ffb5f95ec374f6923df89c76f7c82eca8d24fb0d
|
Indonesian Journal of Electrical Engineering and Computer Science
|
[
{
"authorId": "2138525416",
"name": "Abu Kowshir Bitto"
},
{
"authorId": "2138907078",
"name": "Imran Mahmud"
},
{
"authorId": "1999666337",
"name": "Md. Hasan Imam Bijoy"
},
{
"authorId": "96401792",
"name": "F. Jannat"
},
{
"authorId": "114586633",
"name": "M. Arman"
},
{
"authorId": "2187547572",
"name": "Md. Mahfuj Hasan Shohug"
},
{
"authorId": "2187544006",
"name": "Hasnur Jahan"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"Indones J Electr Eng Comput Sci"
],
"alternate_urls": null,
"id": "bb21160f-0f31-4e34-abab-715b95a870a2",
"issn": "2502-4752",
"name": "Indonesian Journal of Electrical Engineering and Computer Science",
"type": "journal",
"url": "http://www.iaescore.com/journals/index.php/IJEECS"
}
|
Cryptocurrencies are encrypted digital or virtual money used to avoid counterfeiting and double spending. The scope of this study is to evaluate cryptocurrencies and forecast their price in the context of the currency rate trends. A public survey was conducted to determine which cryptocurrency is the most well-known among Bangladeshi people. According to the survey respondents, Bitcoin is the most famous cryptocurrency among the eight digital currencies. After that, we'll explore the four most well-known cryptocurrencies: Bitcoin, Ethereum, Litecoin, and Tether token. The 'YFinance' python package collects our cryptocurrency dataset, and the relative strength index (RSI) is employed to investigate these cryptocurrencies. Autoregressive (AR), moving average (MA), and autoregressive moving average (ARMA) models are applied to our time-series data from 2015-1-1 to 2021-6-1. Using the 'closing' price and a simple moving average (SMA) graph, bitcoin and tether are identified as oversold or overbought cryptocurrencies. We employ the seasonal decomposed technique into the dataset before implementing the model, and the augmented dickey-fuller test (ADF) indicates too much seasonality in the dataset. The autoregressive (AR) model is the most accurate in predicting the price of Bitcoin, Ethereum, Litecoin, and Tether-token, with 97.21%, 96.04%, 95.8%, and 99.91% accuracy, consecutively.
|
**Indonesian Journal of Electrical Engineering and Computer Science**
Vol. 28, No. 3, December 2022, pp. 1684~1696
ISSN: 2502-4752, DOI: 10.11591/ijeecs.v28.i3.pp1684-1696 1684
# CryptoAR: scrutinizing the trend and market of cryptocurrency
using machine learning approach on time series data
**Abu Kowshir Bitto[1], Imran Mahmud[1,2], Md. Hasan Imam Bijoy[3], Fatema Tuj Jannat[1],**
**Md. Shohel Arman[1], Md. Mahfuj Hasan Shohug[1], Hasnur Jahan[1 ]**
1Department of Software Engineering, Daffodil International University, Dhaka, Bangladesh
2Graduate School of Business, Universiti Sains Malaysia, Penang, Malaysia
3Department Computer Science and Engineering, Daffodil International University, Dhaka, Bangladesh
**Article Info** **ABSTRACT**
**_Article history:_**
Received Mar 19, 2022
Revised Aug 22, 2022
Accepted Sep 10, 2022
**_Keywords:_**
Autoregressive
Bitcoin
Blockchain
Cryptocurrency
Etherum
**_Corresponding Author:_**
Cryptocurrencies are encrypted digital or virtual money used to avoid
counterfeiting and double spending. The scope of this study is to evaluate
cryptocurrencies and forecast their price in the context of the currency rate
trends. A public survey was conducted to determine which cryptocurrency is
the most well-known among Bangladeshi people. According to the survey
respondents, Bitcoin is the most famous cryptocurrency among the eight
digital currencies. After that, we'll explore the four most well-known
cryptocurrencies: Bitcoin, Ethereum, Litecoin, and Tether token. The
'YFinance' python package collects our cryptocurrency dataset, and the
relative strength index (RSI) is employed to investigate these
cryptocurrencies. Autoregressive (AR), moving average (MA), and
autoregressive moving average (ARMA) models are applied to our time-series data from 2015-1-1 to 2021-6-1. Using the 'closing' price and a simple
moving average (SMA) graph, bitcoin and tether are identified as oversold
or overbought cryptocurrencies. We employ the seasonal decomposed
technique into the dataset before implementing the model, and the
augmented dickey-fuller test (ADF) indicates too much seasonality in the
dataset. The autoregressive (AR) model is the most accurate in predicting
the price of Bitcoin, Ethereum, Litecoin, and Tether-token, with 97.21%,
96.04%, 95.8%, and 99.91% accuracy, consecutively.
_[This is an open access article under the CC BY-SA license.](https://creativecommons.org/licenses/by-sa/4.0/)_
**_Corresponding Author:_**
Abu Kowshir Bitto
Department of Software Engineering, Daffodil International University
Dhanmondi, Dhaka-1207, Bangladesh
Email: abu.kowshir777@gmail.com
**1.** **INTRODUCTION**
Cryptocurrencies such as Bitcoin operate over peer-to-peer connections. They have no physical existence or visible presence in the real world, and no government has authority over them. Cryptocurrencies function on a technology called the blockchain, which was created to relieve the double-spending problem and to remove centralized parties' control over asset transactions. It is Bitcoin's most significant invention. The blockchain keeps track of all economic and financial transactions using a cluster of computers. Put simply, this technology is robust enough to keep permanent records of business transactions, assets, financial data, contract conversions, and intellectual property [1]. Because of increasing interest in blockchain and the continued adoption of FinTech by private equity companies and traditional financial institutions, cryptocurrency asset markets have seen huge capital inflows in recent years.
**_Journal homepage: http://ijeecs.iaescore.com_**
The number of cryptocurrencies available for investment climbed to approximately 2,000 in this period [2]. Every cryptocurrency has a public address and a private key, which the owner uses to receive and send coins. The public address is used to locate where a coin will be deposited, but without the private key the owner cannot access it. Cryptocurrency is a form of digital cash that may be used to hide one's identity, and for these reasons its popularity has grown in recent years. Many cryptocurrencies have been founded since 2009; as of February 2017, 720 were in existence [3], [4].
There are now over 1,500 active currencies. This research looks at five cryptocurrencies over 2015-2021: Bitcoin, Ethereum, Dogecoin, Litecoin, and Stellar. Competition among these currencies has been shifting, and the network also affects currency exchange: a positive shift occurs as more people use a currency, because the more money is utilized, the more precious it becomes [5], [6]. When an exchange is larger and more popular, it becomes more attractive to buyers and sellers. The bitcoin.org domain name was first registered on August 18, 2008. Bitcoin is the most decentralized and valued cryptocurrency; it was introduced on January 3, 2009 under the pseudonym Satoshi Nakamoto [7]-[9]. Some businesses began to accept it as traditional currency from mid-2010, when it accounted for approximately 35% of overall market capitalization. Bitcoin is the most popular and largest cryptocurrency, accounting for approximately 81% of the global cryptocurrency market. Although the cryptocurrency idea was introduced in 2009, public interest grew in 2012. Ethereum is the second most widely used cryptocurrency. It was first presented in 2013 by the programmer Vitalik Buterin, and the network went online on July 30, 2015 with an initial 72 million coins. Elon Musk believes that Dogecoin will be the future of cryptocurrency. Dogecoin was launched on December 6, 2013, and swiftly established itself in the internet community, reaching a market value of US$85,314,347,523 on May 5, 2021.
Litecoin went live on the internet on October 13, 2011. Stellar was launched in 2014 by Jed McCaleb, the Mt. Gox founder and Ripple co-founder, together with former lawyer Joyce Kim. While market players are usually well aware of downside limits, investors in cryptocurrency markets appear to set aside their aversion to high risk when significant unfavourable market moves occur; caught up in the speculative frenzy of cryptocurrency markets, they also appear unaware of the danger they are taking [10]. Despite all these efforts to examine the predictive performance of cryptocurrencies, understanding the links between cryptocurrencies is critical both for investors who hold them in their portfolios and for regulators whose job it is to keep financial markets stable [11]. Cryptocurrency has been around for years and has grown in popularity, acceptance, and controversy because of inventive advances. Cryptocurrencies, as opposed to conventional money, are based on cryptography [12].
We reviewed many publications to analyze prior studies. Sifat et al. [2] used the vector error correction model (VECM), Granger causality, autoregressive moving average (ARMA), autoregressive distributed lag (ARDL), and wavelet coherence models on a total of 9,008 observations. They found that crypto traders could not use premium pricing in Bitcoin (BTC) or Ethereum (ETH) to scalp or make a decent profit using hourly and daily statistics, and that there were discrepancies in the price discovery processes of BTC and ETH. Chowdhury et al. [1] utilized a gradient boosted trees approach to analyse seven features, splitting the data into testing and training groups. Their models appear competitive: the ensemble learning approach had a 92.4% accuracy rate with smaller gaps than other models, while the k-nearest neighbor (k-NN) model did not prove very successful. In their research, Stosic et al. [13] used random matrix theory and the minimum spanning tree approach; based on the daily closing values of the cryptocurrencies studied, the cross-correlation matrix demonstrates non-trivial hierarchical patterns and groupings of cryptocurrency pairs that are not visible in partial cross-correlations. Using tweets, retweets, and cryptocurrency prices, Li et al. [14] model price variations of the ZClassic coin and the alternative cryptocurrency market, employing a natural language processing classification pipeline, XGBoost, gradient boosted trees, and 10-fold cross-validation throughout; a limitation of their work is that the positively labeled data were under-represented in training.
Abraham et al. [15] predict cryptocurrency prices by applying sentiment analysis to collected tweets to determine whether they are typically positive or negative about cryptocurrencies; this Twitter sentiment has an effect on future increases and decreases in cryptocurrency prices. Songmuang [5] used the market prices of five cryptocurrencies (BTC, ETH, XRP, ADA, XEM) to find the correlation between currencies and forecast future prices, although the relationship between ETH and the other cryptocurrencies is not studied there. Farell [16] showed a breakdown of 21 coins, the evolution of the network security mechanism, and the market capitalization of cryptocurrencies, and argued that the industry will be indebted to Bitcoin for pioneering anarchic coins. Bouri et al. [17] investigate return equicorrelation and find that it is time-varying and unstable. Alessandretti et al. [18] forecast daily currency prices between January 1, 2016 and April 24, 2018 using XGBoost, several regression models, and LSTMs, restricted to cryptocurrencies older than 50 days and valued above 100,000, but they ignored intra-day price fluctuations. Cocco et al. [19] used heterogeneous agent and hypothesis models, analyzing simulated Bitcoin prices against actual prices to detect the presence of different trader populations; they did not consider the dependency of various traders on particular companies. Gandal and Halaburda [4] showed that Litecoin was the second strongest cryptocurrency after Bitcoin, and that Bitcoin accounted for 90% of all digital currencies at the end of February 2014. Several researchers also work with non-ML techniques such as structural equation modelling [20]-[23] and hybrid models such as artificial neural networks (ANN) [24] to analyze this type of data.
Many countries allow this money to be used, such as Japan (called the hub of cryptocurrency), the United States, Nigeria, Germany, Canada, the Philippines, France, and Australia. Other countries, including Bangladesh, Algeria, Bolivia, Morocco, Nepal, Pakistan, and Vietnam, refuse to authorize its use, because it is one of the safest ways to exchange currency on the illegal market. Based on people's interest in cryptocurrencies, we conducted a poll to choose the coins studied here.
**2.** **METHOD**
This section describes our systematic workflow. The methodology is discussed in three segments: the data used in this study, the three implemented models, and the model implementation procedure together with the performance metrics used to predict cryptocurrency prices. The workflow diagram is presented in Figure 1.
Figure 1. Systematic workflow diagram for predicting cryptocurrency prices using AR, MA, and ARMA
**2.1. Public survey and selection of cryptocurrency**
First and foremost, we conducted a public survey of Daffodil International University's software engineering students. A two-question online survey was conducted with the questions: i) Do you know about cryptocurrency? and ii) Which cryptocurrency are you interested in? Figure 2 shows the popularity and interest: in Figure 2(a), 150 individuals signed up to fill out the survey. Only 25 of the 150 are unfamiliar with cryptocurrencies in general, yet all of them are familiar with Bitcoin. One hundred twenty-five persons are aware of cryptocurrencies, the majority of them interested in Bitcoin; others are interested in Ethereum, Litecoin, Dogecoin, Neo, Stellar, Tether, and IOTA. We selected cryptocurrencies for analysis based on this public interest: from Figure 2(b), we picked Bitcoin, Ethereum, Litecoin, and Tether for study and prediction.
**2.2. Dataset and preprocessing**
After our selection from Figure 2, we collected data on the four chosen cryptocurrencies (Bitcoin, Ethereum, Litecoin, and Tether). We gathered time-series data from Yahoo Finance by employing the 'yfinance' Python package. Each time series has six columns, and all currency data are sorted from 2015-01-01 through 2021-06-01; the collected dataset has 2,121 rows and 6 columns. We looked for null values during preprocessing but found none, so we opted to use the dataset as-is. As examples, screenshots of two of the four cryptocurrency datasets are given in Figure 3: Figures 3(a) and (b) show sample data for Bitcoin and Ethereum, respectively.
Indonesian J Elec Eng & Comp Sci, Vol. 28, No. 3, December 2022: 1684-1696
Figure 2. Public (a) familiarity with cryptocurrency and (b) interest in cryptocurrency

Figure 3. Sample (a) Bitcoin and (b) Ethereum price data (partial)
**2.3. Relative strength index (RSI)**
The RSI is a technical indicator that assesses the size of recent price fluctuations to identify whether a share or other investment is overbought or oversold. Its goal is to depict a company's or market's present and historical strengths and weaknesses using closing prices from past trading periods. A broad trend may also be seen using the RSI. The RSI is given by (1):

$$RSI = 100 - \frac{100}{1 + \dfrac{\text{Average Upward Price Change}}{\text{Average Downward Price Change}}} \quad (1)$$
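As an illustration, Eq. (1) can be computed over a closing-price series with pandas. This is a minimal sketch on synthetic data; the function name `rsi` and the simple rolling-mean smoothing over a 14-day window are our assumptions, not the paper's code:

```python
import numpy as np
import pandas as pd

def rsi(close: pd.Series, period: int = 14) -> pd.Series:
    """RSI per Eq. (1): 100 - 100 / (1 + avg_gain / avg_loss)."""
    delta = close.diff()
    avg_gain = delta.clip(lower=0).rolling(period).mean()     # average upward price change
    avg_loss = (-delta.clip(upper=0)).rolling(period).mean()  # average downward price change
    return 100 - 100 / (1 + avg_gain / avg_loss)

# Synthetic closing prices stand in for the real yfinance 'Close' column
close = pd.Series(100 + np.random.default_rng(0).normal(size=120).cumsum())
print(rsi(close).dropna().tail())  # all values lie between 0 and 100
```

Readings above 70 are conventionally treated as overbought and below 30 as oversold.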
**2.4. Model implementation (train and test model)**
During the model implementation phase, we split our dataset, using 80% of the data to train the models and 20% to evaluate their performance. The training data were used to fit the AR, MA, and ARMA models. AR stands for the autoregressive model; its order is specified as 'p', and AR(p) denotes an autoregressive model of order p. The AR(p) model is described as:

$$Y_t = \varphi_0 + \varphi_1 y_{t-1} + \varphi_2 y_{t-2} + \varphi_3 y_{t-3} + \dots + \varphi_m y_{t-m} \quad (2)$$

where $t = 1, 2, 3, \dots$; $y_t$ signifies $Y$ as a function of time $t$; and $\varphi_1, \dots, \varphi_m$ are the autoregression coefficients.
The moving average (MA) model is a time series model that adjusts for severe short-run autocorrelation, modeling the next observation from an average of the preceding error terms. The order 'q' of the moving average model may usually be determined by looking at the ACF plot of the time series; MA(q) denotes a moving average model of order q. The MA(q) model is described as:

$$Y_t = \theta_0 + \theta_1 \varepsilon_{t-1} + \theta_2 \varepsilon_{t-2} + \theta_3 \varepsilon_{t-3} + \dots + \theta_k \varepsilon_{t-k} \quad (3)$$

where $\theta_0$ is the mean of the series, $\theta_1, \theta_2, \dots, \theta_k$ are the parameters of the model, and $\varepsilon_{t-1}, \varepsilon_{t-2}, \dots, \varepsilon_{t-k}$ are the white noise error terms.
ARMA is used to describe weakly stationary stochastic time series as the sum of two polynomials: the first represents the autoregression and the second the moving average. The order of the autoregressive polynomial is denoted by p, and the order of the moving average polynomial by q:

$$X_t = c + \varepsilon_t + \sum_{i=1}^{p} \varphi_i X_{t-i} + \sum_{i=1}^{q} \theta_i \varepsilon_{t-i} \quad (4)$$

where $\varphi_i$ are the autoregressive model's parameters, $\theta_i$ the moving average model's parameters, $c$ a constant, and $\varepsilon$ the white noise error terms.
**2.5. Performance measure (error/accuracy)**
The estimated models are evaluated and compared based on their forecasting accuracy and error. The mean absolute error (MAE) [25] measures the magnitude of the error between paired observations expressing the same event, i.e., the differences between the actual and measured values (for example, predicted values Y against observed values X). It is the arithmetic average of the absolute errors. The equation for the mean absolute error is (5):

$$MAE = \frac{1}{n} \sum_{i=1}^{n} |x_i - x| \quad (5)$$

where $n$ is the number of errors, $|x_i - x|$ the absolute errors, and $\sum$ denotes summation over all of them.
In statistics, the mean square error (MSE) of an estimator calculates the average of the squared errors, i.e., the average squared difference between the actual and estimated values. It shows how close a regression line is to a set of points: it takes the distances from the points to the regression line and squares them, which makes all negative values positive. The forecast is better when the MSE is low. The equation for the mean square error is (6):

$$MSE = \frac{1}{n} \sum_{i=1}^{n} (\text{Actual}_i - \text{Forecast}_i)^2 \quad (6)$$

where $n$ is the number of items, Actual the original y-value, and Forecast the regression y-value.
The root mean square error (RMSE) is the standard deviation of the prediction errors. Residuals (prediction errors) tell us how far the data points lie from the regression line; RMSE measures how spread out these prediction errors are. It is commonly used in climatology, regression analysis, and forecasting to verify experimental results. MSE is a good accuracy measure, but because it is scale dependent it only compares the prediction errors of different models for a specific variable, not across variables. The equation for the root mean square error is (7):

$$RMSE = \sqrt{\frac{1}{n} \sum_{i=1}^{n} (s_i - o_i)^2} \quad (7)$$

where $o_i$ are the observations, $s_i$ the predicted values, and $n$ the number of observations available for analysis.
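Equations (5)-(7) translate directly into code. A small self-contained sketch (the function names are ours, not the paper's):

```python
import numpy as np

def mae(actual, forecast):
    """Mean absolute error, Eq. (5)."""
    return np.mean(np.abs(np.asarray(actual) - np.asarray(forecast)))

def mse(actual, forecast):
    """Mean square error, Eq. (6)."""
    return np.mean((np.asarray(actual) - np.asarray(forecast)) ** 2)

def rmse(actual, forecast):
    """Root mean square error, Eq. (7): the square root of MSE."""
    return np.sqrt(mse(actual, forecast))

actual = [10.0, 12.0, 11.0, 13.0]
forecast = [11.0, 11.0, 11.0, 12.0]
print(mae(actual, forecast))   # 0.75
print(rmse(actual, forecast))  # ~0.866
```

Because RMSE is the square root of MSE, it is reported in the same units as the price series itself, which makes it easier to interpret than MSE.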
**3.** **RESULTS AND DISCUSSION**
We first analyze Bitcoin and Tether based on current trends. Bitcoin, often known as a cryptocurrency or virtual money, is a virtual form of currency maintained by a peer-to-peer (P2P) computer network. Figure 4 provides the analysis of Bitcoin and Tether: from Figure 4(a), we can observe when the Bitcoin price is low or high by looking at the close and open prices. Stripe, an online payment company, stated on January 24 that it would phase out Bitcoin payments by late April 2018, citing falling demand, higher fees, and lengthier transaction times as causes. PayPal and several stock market companies enabled Bitcoin in 2020, and from then until 2021 its price rose steadily above that of other currencies, putting it at the top of the heap. Tether tokens are the Tether network's native tokens; to decrease the friction of transferring actual money around the cryptocurrency ecosystem, each token is priced at $1.00. We can observe in Figure 4(b) that the price of a token ranges from 0.8 to 1.2.
Figures 5 and 6 show when Bitcoin and Tether are oversold or overbought using the close price and a simple moving average (SMA) graph. The SMA is an arithmetic moving average produced by summing recent prices, generally closing prices, and dividing that figure by the number of periods in the computation. We use a 14-day period, where everything below 0 is down and anything over 0 is up.
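The close-minus-SMA signal described above can be computed with pandas; a minimal sketch on synthetic prices (the 14-day window follows the text, the variable names are ours):

```python
import numpy as np
import pandas as pd

# Synthetic closing prices stand in for the real 'Close' column
close = pd.Series(100 + np.sin(np.arange(60) / 5.0) * 10)

sma14 = close.rolling(window=14).mean()  # 14-day simple moving average
signal = close - sma14                   # > 0: price above its SMA (up); < 0: below (down)
print(signal.dropna().head())
```

The first 13 entries of the signal are undefined (NaN), since a full 14-day window is not yet available.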
Figure 4. Analysis of (a) Bitcoin and (b) Tether
Figure 5. Bitcoin RSI
Figure 6. Tether RSI
In this section, we analyze our findings for these cryptocurrencies using multiple time series models. We use three models, AR, MA, and ARMA, explore them for each individual currency, and predict the future. First, we need to preprocess the data for these models: we must check stationarity for all cryptocurrencies, which is decided by the p-value for the time-series data. We then analyzed the 'Close' price for Bitcoin, Ethereum, Litecoin, and the Tether token. Figure 7 plots the predicted future 'Close' price based on the historical data, where Figures 7(a)-(d) show Bitcoin, Ethereum, Litecoin, and the Tether token individually.

To build the time series models for this dataset, the autocorrelation plots in Figure 8 show observations of a single variable across a specified time horizon, for Bitcoin in Figure 8(a), Ethereum in Figure 8(b), Litecoin in Figure 8(c), and the Tether token in Figure 8(d). As Figure 9 (in Appendix) shows using the seasonal decomposition method in Figures 9(a)-(c), this dataset has considerable seasonality; we therefore apply the augmented Dickey-Fuller (ADF) test after finding the rolling mean and standard deviation for each series, obtaining p-values below 0.05.
Figure 7. Predicting future close price for (a) Bitcoin, (b) Ethereum, (c) Litecoin, and (d) Tether token

Figure 8. Autocorrelation for (a) Bitcoin, (b) Ethereum, (c) Litecoin, and (d) Tether token
As Figure 10 shows, for Bitcoin we applied the AR(1) model, the MA(10) model, and the ARMA(1, 10) model, whose arguments are p and q; the corresponding residual sum of squares (RSS) values are presented in Figures 10(a) and (b), respectively. In these findings, whose model evaluation is shown in Table 1, p = 1 and q = 10; we chose these values according to the RSS, which measures the amount of unexplained variation in a dataset. As Figure 11 shows, for the Ethereum coin we use the AR(1), MA(5), and ARMA(1, 5) models, where p = 1 and q = 5 give the minimum RSS: for MA(5) the minimum RSS is 62.32 in Figure 11(a), and for ARMA(1, 5) it is 61.68 in Figure 11(b). As Figure 12 shows, for the Litecoin dataset we build the AR(1) model, the MA(4) model with a minimum RSS of 52.03, and the ARMA(1, 2) model with a minimum RSS of 52.02, then select the model that best fits the application; here p = 1 and q = 2.

Figure 10. RSS value with (a) ARMA(1, 10) and (b) MA(10) for Bitcoin

Figure 11. RSS value with (a) MA(5) and (b) ARMA(1, 5) for Ethereum

Figure 12. RSS value with (a) MA(4) and (b) ARMA(1, 2) for Litecoin
As Figure 13 shows, for the Tether token we build the AR(1) time series model, with a minimum RSS of 0.91 for MA(2) in Figure 13(a) and 0.92 for ARMA(1, 2) in Figure 13(b). The model evaluation after model implementation and performance computation is shown in Table 1, with the mean value, MAE, RMSE, and model accuracy.
Figure 13. RSS value with (a) ARMA and (b) MA for Tether token
Table 1. Applied model performance evaluation for each cryptocurrency

| Cryptocurrency | Model | Mean value | MAE value | RMSE value | Model accuracy (%) |
|---|---|---|---|---|---|
| Bitcoin | AR | 9248.93 | 924.74 | 1480.58 | 97.21 |
| Bitcoin | MA | 9248.93 | 13140.41 | 14841.25 | 67.97 |
| Bitcoin | ARMA | 9248.93 | 2084.47 | 3570.15 | 81.32 |
| Ethereum | AR | 489.53 | 61.29 | 111.59 | 96.04 |
| Ethereum | MA | 489.53 | 524.19 | 698.15 | 80.70 |
| Ethereum | ARMA | 489.53 | 539.66 | 717.48 | 80.25 |
| Litecoin | AR | 72.95 | 6.01 | 11.25 | 95.8 |
| Litecoin | MA | 72.95 | 33.29 | 36.26 | 92.88 |
| Litecoin | ARMA | 72.95 | 60.93 | 72.75 | 74.25 |
| Tether token | AR | 1.001067 | 0.000939 | 0.001832 | 99.91 |
| Tether token | MA | 1.001067 | 0.001354 | 0.001640 | 99.86 |
| Tether token | ARMA | 1.001067 | 0.001442 | 0.001739 | 99.87 |
From Table 1, we can choose the AR model for predicting the Bitcoin 'Close' price, and we also favored the ARMA model; however, we did not utilize the MA model to forecast the Bitcoin 'Close' price because it performs worse. For Ethereum and Litecoin, the AR and MA models outperformed the ARMA model. Finally, with the Tether token, all models worked well and correctly predicted the price, as previously noted. As a result, any of the time series models may be applied to forecast the future with this dataset.
**4.** **CONCLUSION**
In this study, we use AR, MA, and ARMA models to forecast cryptocurrency prices. Among the eight cryptocurrencies considered, Bangladeshis are most familiar with Bitcoin, Ethereum, Litecoin, and the Tether token, according to a public survey. The relative strength index (RSI) determines whether Bitcoin and Tether are overbought or oversold by measuring the magnitude of recent price movements, depicting a coin's present and historical strengths and weaknesses from the closing prices of prior periods. The p-value for the time-series data determines whether each cryptocurrency series is stationary: since the p-value is less than 0.05, the null hypothesis (H0) is rejected, the data do not have a unit root, and the series is stationary. On our testing data, the models give high accuracy in predicting cryptocurrency prices. This research examines which cryptocurrency is most familiar to Bangladeshis and the potential for the cryptocurrency sector to grow, and the applied models display the anticipated closing prices for the chosen coins. In the future, an optimization method to fine-tune the closing price to the most acceptable value may be helpful in this line of study, and alternative response functions can be used to investigate how the market reacts to additional data.
**APPENDIX**
Figure 9. The seasonality and ADF test results for (a) Bitcoin, (b) Ethereum, and (c) Litecoin
**REFERENCES**
[1] R. Chowdhury, M. A. Rahman, M. S. Rahman, and M. R. C. Mahdy, "An approach to predict and forecast the price of constituents and index of cryptocurrency using machine learning," Physica A: Statistical Mechanics and its Applications, vol. 551, p. 124569, 2020, doi: 10.1016/j.physa.2020.124569.
[2] I. M. Sifat, A. Mohamad, and M. S. B. M. Shariff, "Lead-lag relationship between bitcoin and ethereum: Evidence from hourly
and daily data," Research in International Business and Finance, vol. 50, pp. 306-321, 2019, doi: 10.1016/j.ribaf.2019.06.012.
[3] S. Chan, J. Chu, S. Nadarajah, and J. Osterrieder, "A statistical analysis of cryptocurrencies," Journal of Risk and Financial Management, vol. 10, no. 2, p. 12, 2017, doi: 10.3390/jrfm10020012.
[4] N. Gandal and H. Halaburda, "Competition in the cryptocurrency market," 2014, doi: 10.2139/ssrn.2506577.
[5] K. Songmuang, "The forecasting of cryptocurrency price by correlation and regression analysis," Kasem Bundit Journal, vol. 19,
no. June, pp. 287-296, 2018.
[6] Q. Ji, E. Bouri, R. Gupta, and D. Roubaud, "Network causality structures among Bitcoin and other financial assets: A directed
acyclic graph approach," The Quarterly Review of Economics and Finance, vol. 70, pp. 203-213, 2018, doi:
10.1016/j.qref.2018.05.016.
[7] D. Shah and K. Zhang, "Bayesian regression and Bitcoin," In 2014 52nd annual Allerton conference on communication, control,
_and computing (Allerton), pp. 409-414. IEEE, 2014, doi:_ 10.1109/ALLERTON.2014.7028484.
[8] C.-H., Wu, C.-C. Lu, Y.-F. Ma, and R.-S. Lu, "A new forecasting framework for bitcoin price with LSTM," In 2018 IEEE
_International Conference on Data Mining Workshops (ICDMW), IEEE, 2018, pp. 168-175, doi: 10.1109/ICDMW.2018.00032._
[9] S. McNally, J. Roche, and S. Caton, "Predicting the price of bitcoin using machine learning," In 2018 26th Euromicro
_International Conference on Parallel, Distributed and Network-Based Processing (PDP), IEEE, 2018, pp. 339-343, doi:_
10.1109/PDP2018.2018.00060.
[10] A. Meyer and L. Ante, "Effects of initial coin offering characteristics on cross-listing returns," Digital Finance, vol. 2, no. 3,
pp. 259-283, 2020, doi: 10.1007/s42521-020-00025-z.
[11] S. Hyun, J. Lee, J.-M. Kim, and C. Jun, "What coins lead in the cryptocurrency market: using Copula and neural networks
models," Journal of Risk and Financial Management, vol. 12, no. 3, p. 132, 2019, doi: 10.3390/jrfm12030132.
[12] Ferdiansyah, S. H. Othman, R. Z. R. M. Radzi, D. Stiawan, Y. Sazaki, and U. Ependi, "A lstm-method for bitcoin price
prediction: A case study yahoo finance stock market," In 2019 International Conference on Electrical Engineering and Computer
_Science (ICECOS), IEEE, 2019, pp. 206-210, doi:_ 10.1109/ICECOS47637.2019.8984499.
[13] D. Stosic, D. Stosic, T. B. Ludermir, and T. Stosic, "Collective behavior of cryptocurrency price changes," Physica A: Statistical
_Mechanics and its Applications, vol. 507, pp. 499-509, 2018, doi: 10.1016/j.physa.2018.05.050._
[14] T. R. Li, A. S. Chamrajnagar, X. R. Fong, N. R. Rizik, and F. Fu, "Sentiment-based prediction of alternative cryptocurrency price fluctuations using gradient boosting tree model," Frontiers in Physics, vol. 7, 2019, doi: 10.3389/fphy.2019.00098.
[15] J. Abraham, D. Higdon, J. Nelson, and J. Ibarra, "Cryptocurrency price prediction using tweet volumes and sentiment
analysis," SMU Data Science Review, vol. 1, no. 3, 2018.
[16] R. Farell, "An analysis of cryptocurrency industry," _Wharton_ _Research_ _Scholars,_ 2015,
https://repository.upenn.edu/wharton_research_scholars/130
[17] E. Bouri, X. V. Vo, and T. Saeed, "Return equicorrelation in the cryptocurrency market: analysis and determinants," Finance
_Research Letters, vol. 38, p. 101497, 2021, doi: 10.1016/j.frl.2020.101497._
[18] L. Alessandretti, A. ElBahrawy, L. M. Aiello, and A. Baronchelli, "Machine learning the cryptocurrency market," Available at
_SSRN 3183792, 2018, doi: 10.2139/ssrn.3183792._
[19] L. Cocco, G. Concas, and M. Marchesi, "Using an artificial financial market for studying a cryptocurrency market," Journal of
_Economic Interaction and Coordination, vol. 12, no. 2, pp. 345-365, 2017, doi: 10.1007/s11403-015-0168-2._
[20] I. Mahmud, S. Sultana, A. Rahman, T. Ramayah, and T. C. Ling, "E-waste recycling intention paradigm of small and medium electronics store managers in Bangladesh: An S–O–R perspective," Waste Management & Research, vol. 38, no. 12, pp. 1438-1449, 2020, doi: 10.1177/0734242X20914753.
[21] A. Alzahrani, I. Mahmud, R. Thurasamy, O. Alfarraj, and A. Alwadain, "End users' resistance behaviour paradigm in pre
deployment stage of ERP systems: evidence from Bangladeshi manufacturing industry," Business Process Management Journal,
2021, doi: 10.1108/BPMJ-08-2019-0350.
[22] A. Z. Satter, A. Mahmud, A. Rahman, I. Mahmud, and R. Akter, “Civic engagement through restaurant review page in Facebook:
a structural equation modelling approach,” International Journal of Ethics and Systems, 2021, doi: 10.1108/IJOES-06-2020-0078.
[23] E. U. Rahaman, I. Mahmud, R. Himel, A. Begum, and N. Jahan, "Mathematical modelling of teachers' intention to participate in online training during COVID-19 lockdown: evidence from emerging economy," International Journal of Emerging Technologies in Learning, vol. 17, no. 12, 2022, doi: 10.3991/ijet.v17i12.30465.
[24] A. Rahman, T. A. Ping, S. K. Mubeen, I. Mahmud, and G. A. Abbasi, “What influences home gardeners’ food waste composting
intention in high-rise buildings in dhaka megacity, Bangladesh? An integrated model of TPB and DMP,” Sustainability, vol. 14,
no. 15, p. 9400, 2022, doi: 10.3390/su14159400.
[25] M. A. Rubi, H. I. Bijoy, and A. K. Bitto, "Life expectancy prediction based on GDP and population size of Bangladesh using
multiple linear regression and ANN model," 2021 12th International Conference on Computing Communication and Networking
_Technologies (ICCCNT), 2021, pp. 1-6, doi: 10.1109/ICCCNT51525.2021.9579594._
**BIOGRAPHIES OF AUTHORS**
**Abu Kowshir Bitto** received his undergraduate degree in Software Engineering, major in Data Science, at Daffodil International University (DIU), Dhaka, Bangladesh. He currently works at MediprospectsAI as a Research and Development Engineer. He is a member of the International Association of Engineers and a Chief Human Resource Executive (CHRE) at the Virtual Multidisciplinary Research Lab. He previously worked as a Research Assistant at the Data Science Lab, DIU. He has been an energetic, focused and hard-working person since his student life. His research experience and interests are now in Computer Vision, Data Science, and Natural Language Processing. He can be contacted at email: abu.kowshir777@gmail.com.
**Dr. Imran Mahmud** is an Associate Professor and Head of the Department of Software Engineering (SWE) at Daffodil International University. He is also an adjunct professor at the Graduate School of Business, Universiti Sains Malaysia. Dr. Imran is an expert in Business Analytics, Technology Management, and Structural Equation Modeling. He can be contacted at email: imranmahmud@daffodilvarsity.edu.bd.
**Md. Hasan Imam Bijoy** pursued his bachelor's degree (B.Sc.) in Computer Science and Engineering (CSE) at Daffodil International University (DIU), Dhaka, Bangladesh. He is currently working as a Lecturer in the CSE department at DIU and is a Convener of the Virtual Multidisciplinary Research Lab. He is a research zealot, having published over 15 conference papers, 4 journal publications, and one programming book [A Handbook of C Programming with Example]. He is presently acting as a reviewer at MLIS 2022, and performed the role of reviewer at ICECET 2022, ICECET 2021, ICECCME 2022, and ICECCME 2021. His areas of interest include Machine Learning, Deep Learning, Computer Vision, Natural Language Processing, Image Processing, the Internet of Things, and many other fields. He can be contacted at email: hasan15-11743@diu.edu.bd.
_CryptoAR: scrutinizing the trend and market of cryptocurrency using machine … (Abu Kowshir Bitto)_
-----
1696
ISSN: 2502-4752
**Fatema Tuj Jannat** received her undergraduate degree in Software Engineering at Daffodil International University (DIU), Dhaka, Bangladesh. She is a professional graphic and UI/UX designer and also works on several contractual UI/UX design projects. She is a quick learner and a hard-working person. Her research experience and interest lie in Machine Learning. She can be contacted at email: jannat.fatema7940@gmail.com.
**Md. Shohel Arman** is an Assistant Professor and alumnus of the Department of Software Engineering, Faculty of Science & Information Technology, at Daffodil International University, Dhaka, Bangladesh. He has been energetic and focused since his student life. His research interests are distributed database systems, machine learning, data mining, the internet of things (IoT), software security, and management information systems (MIS). He can be contacted at email: arman.swe@diu.edu.bd.
**Md. Mahfuj Hasan Shohug** completed his graduation in Software Engineering, majoring in Data Science, from Daffodil International University, Dhaka, Bangladesh. He is currently a Web Designer at Bardown Sports Inc. He is an ambitious, hard-working, and very punctual person. His research interests are Machine Learning and Data Science. He can be contacted at email: mahfuj.shohug@gmail.com.
**Hasnur Jahan** is a Teaching Apprentice Fellow (TAF) in the Department of Software Engineering at Daffodil International University, Dhaka, Bangladesh, where she also completed her Bachelor of Science degree in software engineering. Her research interests are machine learning, image processing, and natural language processing. She can be contacted at email: hasnur35-2297@diu.edu.bd.
Indonesian J Elec Eng & Comp Sci, Vol. 28, No. 3, December 2022: 1684-1696
-----
|
{
"disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.11591/ijeecs.v28.i3.pp1684-1696?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.11591/ijeecs.v28.i3.pp1684-1696, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "CCBYNC",
"status": "GOLD",
"url": "https://ijeecs.iaescore.com/index.php/IJEECS/article/download/28057/16895"
}
| 2,022
|
[
"Review"
] | true
| 2022-10-07T00:00:00
|
[] | 10,586
|
en
|
[
{
"category": "History",
"source": "external"
},
{
"category": "Engineering",
"source": "s2-fos-model"
},
{
"category": "Business",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/ffb850b6077c90c8592a04f3a7aa8293b82e830e
|
[
"History"
] | 0.950418
|
Aviation of the Future: What Needs to Change to Get Aviation Fit for the Twenty-First Century
|
ffb850b6077c90c8592a04f3a7aa8293b82e830e
|
Aviation and Its Management - Global Challenges and Opportunities
|
[
{
"authorId": "121307485",
"name": "Ursula Silling"
}
] |
{
"alternate_issns": null,
"alternate_names": null,
"alternate_urls": null,
"id": null,
"issn": null,
"name": null,
"type": null,
"url": null
}
|
The world around us has changed dramatically, particularly since the beginning of the twenty-first century, mainly due to the broad availability of the Internet. Inventions such as smart phones, apps, virtual face to face conversations, coupled with the rise of Facebook, Google, Amazon & Co. added a lot of speed to this development. The digital revolution empowers the consumer and determines ever increasing expectations. At the same time, latest tech developments such as artificial intelligence (AI), machine learning (ML), blockchain, voice and more create opportunities never seen before. However, the aviation industry to a large extent has remained stuck in legacy processes and their decades old technology. It also suffers from low profit margins. With a few exceptions, aviation management overall struggles on how to adapt to the real-time and agile environment. Digital transformation activities have started both in operational and commercial areas, but fundamental underlying platforms and culture change in most cases have not yet been addressed. This chapter explains reasons behind key pain points of the industry, what activities are ongoing and the main areas that need to change to get into shape for the current dynamic environment.
|
-----
###### Chapter
### Aviation of the Future: What Needs to Change to Get Aviation Fit for the Twenty-First Century
##### Ursula Silling
###### Abstract
The world around us has changed dramatically, particularly since the beginning
of the twenty-first century, mainly due to the broad availability of the Internet.
Inventions such as smart phones, apps, virtual face to face conversations, coupled
with the rise of Facebook, Google, Amazon & Co. added a lot of speed to this
development. The digital revolution empowers the consumer and determines
ever increasing expectations. At the same time, latest tech developments such as
artificial intelligence (AI), machine learning (ML), blockchain, voice and more
create opportunities never seen before. However, the aviation industry to a large
extent has remained stuck in legacy processes and their decades old technology. It
also suffers from low profit margins. With a few exceptions, aviation management
overall struggles on how to adapt to the real-time and agile environment. Digital
transformation activities have started both in operational and commercial areas,
but fundamental underlying platforms and culture change in most cases have not
yet been addressed. This chapter explains reasons behind key pain points of the
industry, what activities are ongoing and the main areas that need to change to get
into shape for the current dynamic environment.
**Keywords:** digital transformation, change management, legacy processes, technology, agile, leadership, artificial intelligence, blockchain, customer experience, aviation, airlines, airports, travel agencies, tour operators, modern management, multi-speed IT, distribution, digitisation, sales, travel retail, machine learning, data, digital cockpit, digital airport, digital airline, amazon of the air, business model, strategy, aircraft on demand, travel tech
###### 1. Introduction
The target of this chapter is to provide a glimpse behind the curtains, some results of empirical and cross-industry research, as well as my personal observations and experiences over time. I will focus on why the aviation industry has been slow to adopt change, give more background about the underlying problems, and outline what activities are already happening and the four key opportunities that absolutely need to be tackled. This is not meant to be a complete list of what is happening in the industry, but rather a look at some of the game changers and critical success factors for bringing about change, based on our extensive experience and insight over the years as well as ongoing market research. I am not tackling
-----
sustainability, even though I think it is a key problem that needs to be addressed
separately, not just in terms of the impact of fuel consumption but also in terms of
the amount of plastic created during each flight, the airport operation and impact
on the environment, and the problems of over-tourism.
Let me start with an illustration of the as is situation by pointing out some of my
recent travel experiences. In June 2018 I travelled from Switzerland to the US as I
was a speaker and judge at a big travel tech event in San Francisco. During the flight
I had to use the internet as I still needed to send an urgent email. When I asked the flight attendant why the internet was not working, she shrugged her shoulders and said she did not know. Two hours later I tried again and finally managed to send my email. On arrival
in San Francisco the queues for passport control were so long that people could not
get off the running walkways. It took more than 2 h to get out of the airport. I had
to continue my trip to Asia before going back home. I tried several times to book a
ticket directly with a large Asian carrier, but I could not complete the process as the
payment options did not foresee any European credit cards. I was forced to book with
an online agency instead, and their booking process did not allow me to book a seat.
Lost seat revenue and higher ticket cost because of agency commission are what this
meant for the airline. For me it meant a lot of wasted time and frustration. On the
last part of my flight back to Switzerland, a woman from Chicago sitting next to me
was crying as she had lost her previous connection and had been running so hard to
get on this flight—as missing it would have meant an overnight—that she had left her
laptop bag in the aircraft. The airline crew at the gate was very unfriendly with her,
and she felt completely helpless. She was visiting her boyfriend in Switzerland for
only a couple of days, and while the super friendly flight attendant had already been
able to tell her that her luggage had been found, it only finally arrived at her boyfriend’s address several days after she had already been back home in the US. It took
her several phone calls and being stuck in waiting lines to contact centres to get there.
These experiences contrast sharply with a world where I write invoices with my
mobile phone, buy products and services at Amazon and Alibaba with one click,
switch off the light at my home by talking to Alexa, answer my doorbell even when
travelling thanks to the smart doorbell Ding, order a present for my Mum online and
let it be delivered to my car’s boot, get flight status updates by just entering a flight
number in google, order my dinner for my late flight via app, for pick up at the airport
restaurants or even gate delivery. Where do these visible problems come from in an
industry which in its early days had so much pride in customer service and innovation?
###### 2. The state of the industry: and why flying can be so painful
The aviation market has always been quite volatile. Even going back to regulated environments, airlines have gone from waves of positive results to huge losses. They have been extremely exposed to external factors, from new legal restrictions to fuel price changes and the political and economic impacts on demand for air travel. Airports, being even more capital intensive, have seen their performance shaped by the consequences of airline decisions. The rise of the low cost carriers was not taken seriously initially by the full service traditional network carriers before they reached significant market share and started to enter the lucrative long haul sector as well.
**2.1 Airline profitability**
For the aviation industry dependence on external factors such as fuel, labour
cost, the political environment and economic growth factors has always been
-----
extremely high. The Gulf War illustrated this very clearly, as did the rise of low cost carriers in the 1990s, the attacks of 11 September 2001 and the global economic crisis starting in 2008.
These events led airlines to rethink their aircraft ownership or lease strategies as
well as increased focus on their cost structures. Ryanair as a game changer for the
European and global airline market had turned to the low cost model when facing
huge losses and realising that they could only survive with drastic change. They
questioned everything they did, aligned processes and product proposition and
seized the opportunities which the broad availability of internet provided in terms
of efficiency and customer reach without the necessity of large investments into
sales infrastructure. They started to reinvent themselves again a couple of years ago
with the introduction of significant customer service improvements “… and begin to
manage those customers and deliver individually tailored service for them to meet
their needs” [1], when realising the limits their model had reached.
The subsequent global growth of low cost carriers can be attributed to extreme cost focus and the resulting large price differentials to traditional carriers, frequency of service, and the flexibility to abandon routes if they do not perform. It was further driven by the rise in economic activity, increased internet penetration and e-literacy, growing purchasing power of middle class households particularly in developing regions, ease of travel, urbanisation, and changes in lifestyle and consumer preferences with the widespread availability of the smart phone and the control that the internet gave to consumers. While many attempts at long haul low cost operations had
failed, there has been some radical change in recent years, with Norwegian Airlines
being one of the key drivers, attacking the main profit makers of the traditional
network carriers.
The latter had already started in the 1990s to found their own low cost carriers. Yet as they did not let these develop completely independently, they often failed and incurred extremely high losses, because their cost structures and behaviour were too closely aligned with those of the airline group. Lufthansa's subsidiary Eurowings is
one example. Go by British Airways was sold to Easy Jet and latest attempts include
long haul low cost with their subsidiary Level as a reaction to Norwegian Airlines’
growth in the lucrative long haul market. Emirates is moving to an alignment of
network and customer proposition such as their frequent flyer program with their
low cost subsidiary flydubai after they had originally been independent. There are
still more recent low cost carrier start ups by network carriers, for example Swoop,
West Jet’s new ultra-low cost carrier and flyadeal, Saudia Airlines’ new low cost
subsidiary.
In recent years, traditional airlines started to unbundle their service offering
and followed what low cost carriers had been doing as part of their strategy: they
added price tags for luggage, early boarding, hold fees and more. The interesting
thing is that this happened in a period when the low cost carriers reached more
maturity and started to enhance their customer proposition and to target business
travellers with tailored services. This leads to the somewhat paradoxical situation that network carriers still claim to offer more service, yet in fact customers can choose their way of flying with low cost carriers for much lower fares and, not rarely, better service.
Low fuel rates, relatively high growth in demand for air travel (7–8% versus a
20-year average of 5.5%), growing seat load factors and the adoption of more and
more ancillary services for sale helped to achieve a positive performance again for
airlines in recent years. In some regions, such as the US, intensive consolidation has also helped to increase average fares and thus total revenue. The International Air Transport Association (IATA) announced in June 2018 that it expects airlines to
achieve a collective net profit of $33.8 billion, with a net margin of 4.1% in 2018 [2].
-----
This result is driven to a large extent by North American airlines, followed by Asia-Pacific and European ones.
However, this is a downward revision from the previous forecast, and compares to US$38 billion in 2017, mainly driven by increases in the cost of fuel, labour and interest rates.
According to IATA [2], airfares keep going down. The 2018 average return
airfare (before surcharges and tax) is expected to be US$380 (2018 dollars), which
is 59% below 1998 levels after adjusting for inflation. Average air freight rates for
2018 are expected to be US$1.80/kg, which is 63% below 1998 levels.
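As a quick sanity check on these percentages, the implied 1998 baselines can be backed out with one line of arithmetic. The sketch below (Python, purely illustrative; the derived 1998 values are arithmetic consequences of the stated figures, not IATA numbers) makes the calculation explicit:

```python
# Illustrative arithmetic only: back out what "X% below 1998 levels"
# implies for the 1998 baseline, expressed in 2018 dollars. The 2018
# figures come from the text above; the derived 1998 values are just
# the arithmetic consequence of the stated drops, not IATA numbers.

def implied_base_level(current_value: float, percent_below: float) -> float:
    """Return the earlier level that `current_value` sits `percent_below`% under."""
    return current_value / (1 - percent_below / 100)

fare_2018 = 380.0    # average return airfare, 2018 US$ (before surcharges/tax)
freight_2018 = 1.80  # average air freight rate, US$/kg

fare_1998 = implied_base_level(fare_2018, 59)        # ~927 in 2018 dollars
freight_1998 = implied_base_level(freight_2018, 63)  # ~4.86 US$/kg in 2018 dollars

print(f"Implied 1998 return fare (2018 US$): {fare_1998:.0f}")
print(f"Implied 1998 freight rate (2018 US$/kg): {freight_1998:.2f}")
```

In other words, the stated drops imply that an average return fare which cost roughly US$927 (in 2018 dollars) in 1998 cost US$380 twenty years later.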
An analysis of the Forbes Global 2000 list [3] gives some interesting insights from a financial perspective, particularly market capitalisation. Looking at the top 10, there is not a single airline or airport in it. Yet for the first time since
2015, China and the US split the top 10 evenly this year. On the inaugural list in
2003, there were just 43 companies from the Greater Chinese Area. Meanwhile,
Japan, the United Kingdom and South Korea also broke into the top five countries
with the most companies.
In comparison to Ctrip (which also owns Skyscanner) and Expedia, most airlines' market capitalisation is at best close, and usually much lower. In comparison to tech companies the gap is simply enormous. This is illustrated clearly in
**Figure 1**. The one airline which does stand out is Delta, which, at US$37.1 billion, is in a much better position than the other airlines, with the next best being American Airlines followed by IAG. Delta's CEO Ed Bastian [4] has realised the role of technology as a competitive advantage, next to the people in the airline, and invests heavily. When adding airports to the list, it is interesting that Aeropuertos
Españoles y Navegación Aérea (AENA) seems to come close to Amadeus’ market
capitalisation, while all the others are significantly lower.
If you compare airline value with some of its IT providers, then you realise that
Amadeus as a key IT provider to airlines is worth more in terms of market capitalisation than the airlines Lufthansa, IAG/British Airways, Air France-KLM and SAS
**Figure 1.**
_Market capitalisation (US$ billion), extract 1—market capitalisation aviation and travel companies versus_
_travel tech and technology companies overall. Data from Forbes Global 2000 [3], illustration by the author._
-----
**Figure 2.**
_Market capitalisation (US$ billion), extract 2—Amadeus versus its founding airlines._
that originally founded it 30 years ago (Figure 2). In fact, decades ago airlines had
been very innovative and developed their own IT to be able to handle reservations
and the underlying operational requirements. American Airlines had founded
Sabre, Delta had founded Worldspan, Lufthansa had founded Lufthansa Systems.
Many more airlines globally had developed their own IT systems in the 1950s and
1960s predominantly.
Sabre, the equivalent provider of airline solutions to Amadeus that was founded
by American Airlines in 1960 is estimated to have a market capitalisation between
US$7 and US$8 billion. This compares to an estimated US$19.9 billion for American
Airlines [3]; thus, in this case, the IT provider represents less than half of the value of the airline which had founded it.
For the complete picture it should be mentioned that the big traditional airline
solutions providers Amadeus, Sabre and Travelsky have also a vested interest in the
travel agency market by providing the Global Distribution System (GDS). They
incentivise travel agencies to use their systems while they charge airlines for those
distribution services [5].
**2.2 Airline technology and processes**
Given low profit margins and focus on operational issues and safety first, airlines
in most cases simply have not had the money to invest in state of the art technology.
But it is also, if not even more, a question of the low priority that top management gives to technology. Most aviation leadership teams have been staffed with more traditional managers, for whom digital and customer centricity have been underestimated and misrepresented. It takes a long time to change this mindset, even when bringing in additional individual talent.
Airlines are used to iterative and process thinking, influenced to a great degree by legal frameworks to ensure a safe operation, but also by the decades-old systems in place and a very inward-looking culture. Top management had not realised the importance of digital. E-commerce was evolving in a separate department with some specialists but had not really become part of the overall strategy until recently. The mindset of the workforce is significantly influenced by this process-thinking approach, by traditional leadership, and by the complexity and barriers of the current systems landscape.
-----
Airline and airport staff often do not know why they do things. They just do it because it has always been done that way, and because their environment does not encourage questions. This leads to a number of pain points which look completely absurd in the current environment. Let me just give a few obvious questions as examples:
- Why do I need to check in? If I buy a cinema ticket or goods in the store, I pay
and I get what I paid for without further validation
- Price levels for flights are restricted by numbers of letters of the alphabet
instead of true commercial requirements
- Why can I not dynamically adjust change fees, e.g., by period ahead of
booking, colour of shoes you are wearing, day on which you are making the
changes
- Why can I not book luggage for me just for the return flight, a meal for my
husband and priority boarding and a seat free next to her for my Mum
- Why do I get offered seats at check in even though I have already booked
them
- I paid much for my seat, yet short term aircraft changes might mean I cannot
get the seat anymore which I had reserved
- Why should airlines still spend time and money to load prices via the Airline
Tariff Publishing Company (ATPCO)
- Why can I not book add-ons/ancillaries if I had booked the flight with tour
operators
- Why are the additional services I had bought (seat, luggage, car) not changed
as well when I make flight changes
- Why do I still receive these tickets with long text and lots of abbreviations
- Why can codeshare partners offer lower fares on the operating carrier flights
than the operating carrier itself
- Why do airlines need codeshares when I could connect directly with the other
airlines, which is also more transparent for customers
- Why do I not get offered more services by my airline for the airport &
destination
- Why can I not start my booking on one device and continue on the other
- Why do I not just get the possibility to use the next available low cost flight if a
network carrier cancels a flight ad hoc
- Why are there still cabins in the plane: one customer might look after the best
seat to sleep, the other might want a good meal, etc.
- Why do I need to wait at the luggage carousel and the queue at the lost luggage desk when it is already known that my luggage was left at the departure
location
- Why are data all over the place and not easily accessible nor comparable, making it very difficult for airline staff to really help to solve issues but results in
fragmented processes
- Why do I not have one view of the customer but only data referring to specific
flights
- Why do accounting systems have a different truth to other systems
- Why do revenue management systems still focus mainly on historic data and
do not include real time information
- Why is it so costly and takes so long to make system changes, often inhibiting
both certain commercial activities as well as realisation of service improvements and innovations
This list of pain points is just an extract. The pain points cover all parts of the
customer journey, from trip planning to booking, experiencing and sharing. They
are a result of continuing with processes and systems which had been created for a different environment, where the internet did not exist and in which the technological possibilities were more limited.
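To make one of these pain points concrete: a change fee that varies with the booking-to-departure window, of the kind the list asks for, is only a few lines of logic once the surrounding systems allow it. The sketch below is hypothetical; the tiers and amounts are invented for illustration and do not reflect any airline's actual policy:

```python
# Hypothetical sketch of a dynamically adjusted change fee, priced by
# how far ahead of departure the change is made. Tiers and amounts are
# invented for illustration; no real airline policy is implied.
from datetime import date

# (minimum days before departure, fee in EUR) - checked in order
FEE_TIERS = [
    (60, 0.0),    # 60+ days out: free change
    (30, 25.0),   # 30-59 days out
    (7, 60.0),    # 7-29 days out
    (0, 120.0),   # under 7 days
]

def change_fee(departure: date, change_date: date) -> float:
    """Fee for rebooking on `change_date` a flight departing on `departure`."""
    days_ahead = (departure - change_date).days
    if days_ahead < 0:
        raise ValueError("cannot change a flight after departure")
    for min_days, fee in FEE_TIERS:
        if days_ahead >= min_days:
            return fee
    return FEE_TIERS[-1][1]

print(change_fee(date(2018, 9, 1), date(2018, 7, 1)))   # 62 days ahead -> 0.0
print(change_fee(date(2018, 9, 1), date(2018, 8, 28)))  # 4 days ahead -> 120.0
```

The barrier, as the chapter argues, is not the logic itself but the legacy systems and processes around it.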
The traditional systems landscape is extremely fragmented and complex, and
many of the new elements such as the online channel, optional services for sale,
mobile, self service for customers and staff, reporting, customer notifications had
to be added on top of it as workarounds (Figure 3). And the traditional processes around this are still to a great degree manual and broken, and based on specialist silos instead of a holistic approach. They were focused on transactions and had not put the customer in the centre, nor did they target a seamless experience or have foreseen the commercial and competitive pressures that we encounter today [6].
**Figure 3.**
_Typical airline IT architecture. Abbreviations used: GDS global distribution system, FQTV fare quote system, schedule distribution, fare distribution, RES reservations, Sched scheduling, Reacc. reaccommodation, Inv. inventory, Anc's ancillary services, Pay't payment, W&B weight and balance, Rev.Mgmt. revenue management, RevInt'y revenue integrity, Rev. Acc'g revenue accounting, Flt ops flight operations, HR human resources, API application programming interface, AI artificial intelligence, ML machine learning, IOT internet of things, AR/VR augmented reality and virtual reality._
-----
There were a number of computer failures and outages in recent years and
months, from Delta, Southwest Airlines, United to British Airways [7]. Part of
the underlying reason is the complexity of both systems and processes, with a large degree of legacy technology and subsequent difficulty in finding the error. The impact is even higher as manual or alternative processes are often not in place, leading to huge disruption for customers and the airline as a result. The underlying principles
and processes had been standardised via IATA initiatives, in order to make cooperation between airlines and airports and travel globally easier. IATA in recent years
took a number of initiatives to adjust them to better fit with the current age. Yet it is
difficult to turn around a tanker, and these are small steps in comparison to what we
would expect as normal in the current digital environment.
Technology spend by airlines and airports is estimated to have reached nearly
US$33 billion in 2017 [8]. This is almost exactly the total market capitalisation of
Amadeus IT Systems alone. According to reference [8], top of the agenda for both
airports and airlines are cyber security, cloud services and passenger self-service.
Airlines’ expenditure as a percentage of revenue was about 3.3% in 2017. For
airports, the figure was about 5% for this year or US$8.43 billion. For 2018, it is
expected that at least the same levels are being maintained, if not increased. These
investment figures do not seem huge given the digital agenda but rather look like
maintaining status quo. While new technology makes it possible to take a smarter
approach with much less money than aviation is used to, it first requires the investment in the change. Hidden in the average figures there are airlines such as Delta
and Ryanair, which are investing heavily, while a large part just work on maintaining status quo and do the most urgent adjustments. Given the high amount of
investment over time and the amount of people employed coupled with resistance
to change, there are lots of economic interests by providers and some other stakeholders to maintain the status quo as long as possible. In a time when the only thing
which is clear about the future is that flexibility is required, providers are still trying
to achieve 10- or 15-year contracts and even to restrict some commercial flexibility
with regard to distribution policies. There are first signs that some big airlines do
not accept this anymore, with a particular breakthrough by Lufthansa and their
introduction of a distribution fee (see also Section 3.3) (Figures 1–4).
###### 3. What is being done: a selection of initiatives
We had some vivid discussions and lots of examples of current activities
during our last annual global think tank “think future - Hamburg Aviation
Conference”, bringing together top leaders, innovators and thought leaders from
airlines, airports, rail, hospitality, other travel stakeholders, innovative travel tech
and universities to discuss solutions how to succeed in the current dynamic environment. The live stream for this year’s event can be watched on YouTube [6]. We
particularly recommend the opening panel discussion between top leaders from
airports, airlines and tech providers and the panel about the future for airports
as additional insight to the following sections, directly from aviation thought
leaders.
I think the good news is that in the meantime even the most traditional airlines’
and airports’ boards and executive teams have realised that change is not a choice
anymore. But they struggle with the how, and what to focus on. I have summarised
-----
a few activities by airlines and airports and selected other travel stakeholders,
which I think give a good impression in terms of what initiatives are prioritised out
there at the moment.
**3.1 Structural technology changes**
Delta Airlines, having declared technology a key focus, has brought in-house two key technology platforms: its reservations and passenger services system and its flight operations systems. These are old systems they have already
been using, so there was no migration required. They bought the rights from
their provider Travelport. Delta had owned Worldspan—which then became part
of Travelport—back in the early days of the airline. By controlling these systems
Delta hopes not only to be able to act faster but also to develop one view of the customer. Virgin announced in August 2018 that they will launch
a new loyalty scheme with Delta in an attempt to offer a joint scheme for their
customers [9].
A few low cost airlines developed their own distribution and passenger services
systems (PSS) to be able to achieve best possible flexibility, Easy Jet and Jet2 in the
UK being key examples.
Some airlines have decided to choose one of the more recent players in the area
of underlying reservations and operations systems—to note that “recent” is relative to the majority of the systems in use today, it still means systems which were
founded more than 20 years ago—such as Radixx (founded 1993), Bravo Passenger
Solutions (founded 1993) and the most recent one IBS Software Services (founded
1997). ITA software, which had started to develop a completely new Passenger
Services System (PSS), was bought by Google for US$ 700 million in 2010 as a
vehicle for Google to further develop their travel capabilities. Since then, Google
developed many features including Google flight searcher, directly linking to the
relative airlines.
A number of older airlines which still own their own PSS systems—for example
Aer Lingus, Iberia and Air New Zealand—are evaluating change to an external
provider. The fact that this has not happened is a good indication that some of them do not think that just moving to one of the existing external providers will solve their issues. The IAG group is illustrative of this: British Airways uses Amadeus,
but Aer Lingus and Iberia both still own their own internal system.
A number of airlines have also started to think about what some of the more modular systems and add-on processes of the future, such as revenue management and network planning as well as group management and operations planning, should ideally look like given the changed environment.
**3.2 Customer experience improvements and revenue increases**
Customer self service activities have been a priority for a couple of years. This is now increasingly being extended to other areas such as self-connecting and additional servicing via chatbots. All types of airlines have started to offer additional optional
services, and also charge for them, particularly for seats and luggage and other
ancillary services. Yet often this has been more of a panic activity to recover poor
revenue results, and the experience is often not completely thought through,
with failures in terms of luggage and seat delivery by the traditional airlines in
particular as they own a diversity of aircraft types. The bundling of services is an
attempt to facilitate the sales process, often determined by technology restrictions as well.
The latest attempts focus on data analysis and one view of the customer in order to
sell more personalised products and services. In addition, beyond pre-departure
and inflight services there is more focus on the airport and destination
experience. The following selection is a result of our ongoing research.
Delta tackled the luggage delivery issues in 2016 and invested US$50 million in
technology so that travellers will be able to track their luggage via an app, from the
moment they check their bags to the minute the bags arrive at their destination. For
2018, they focus on re-organising all their customer related data to achieve one view
of the customer [10].
JetBlue has invested in Gladly through its venture arm, JetBlue Technology
Ventures. Gladly is the maker of a customer service platform for various companies,
including airlines, helping to achieve a customer centric service with one view of
the customer.
Ryanair has started a project with the declared aim of becoming the Amazon of the
air, as part of their "always getting better" campaign. As part of this initiative they
have created a customer login, which has been in place at Easy Jet and other airlines
for many years already, and keep adding optional service offerings related to
travel [11].
IATA has initiated a number of projects to support the airline industry, particularly
New Distribution Capability (NDC) and One Order, to achieve a better view of
the customer and enable sales of ancillary products regardless of which distribution
channel is used.
KLM focuses on social media as a way to enhance customer service, and even as
a sales channel. This initiative came about during the 2010 ash cloud crisis, when
they realised the difficulties of communicating with their customers via the limited
contact centre channels, as a result of which many customers approached them via
social media. They have a strong social media proposition both in Europe and in
their key regions, adapting to local preferences such as WeChat in Asia. However,
they have also realised that the actual operational delivery is lagging behind, and
recently announced that they have launched a project and released significant
budget to focus on this [12].
Lufthansa and United Airlines recently announced the development of a new
digital services platform (DSP) [13] that will further align the Star Alliance carriers.
So far, the travel experience for customers is still fragmented, in particular in terms
of additional services such as seat reservation and luggage bookings. For example,
they launched a seat selection feature in June 2018 which allows a United Airlines
customer to select a seat on Singapore Airlines flights booked via united.com or the
United App. It means that a customer can now select a complimentary seat for the
entire journey at the time of reservation, regardless of which Star Alliance carrier is
involved. Previously this was only possible at check in.
Airlines have started to introduce digital concierge services using multilingual
chatbot technology. Finnair and Sun Express are just a few of the airlines
embracing this as a way to provide better customer service around the clock and
increased efficiency. So far it focuses mainly on information related to bookings, but
booking services and voice capabilities are in the making. It requires process
alignment first, however, in order to add real value.
Seat resale and upgrade products, which airlines such as LATAM have started to
introduce, are further examples of how airlines can solve some operational
problems due to overbooking, improve the customer experience and gain
additional revenues.
Moscow Domodedovo airport turned itself into a shopping mall, thus
attracting additional visitors and revenues. Many airports had traditionally focused
only on B2B customers, but in the meantime they have realised that they need to get
better customer insight and keep up with customer expectations. Airports such as
Copenhagen, Heathrow and Dublin have introduced customer programs in an
attempt to allow for sales of additional services, customer insight and direct
communication with the customer. Many airports have introduced services such as
fast track and airport parking for sale online or via an app.
Geneva Airport and most UK and Italian airports are examples. Pre-ordering duty
free products for pick up on arrival has also become a common feature. Yet
it is still difficult to find out exactly what retail offering is available at the airport
ahead of your trip. More recently, however, this is being extended to include all the
retailers at the airport, and even in town, with pick up at the airport, via an online
sales offering for customers. The German company AOE have started to offer these
services via their digital platform at Auckland and Frankfurt. Heathrow Airport
have just announced that they will join.
Grab is an innovative company which allows travellers to pre-order food at the
airport and grab it on the way to the gate [14]. Their solution is already integrated
in a number of airport apps and websites; London Gatwick and Heathrow Airport,
for example, have adopted this offering. As airport food and beverage offerings
have improved significantly, this could become a solution to the poor quality and
yet high cost of offering food during flights. American Airlines and a couple
of other airlines have already decided to include this offer in their customer
proposition; airlines just need the open mind to test it as a complete solution
for food on board. Hamburg Airport has just introduced a test for pre-order and
delivery of breakfast at the gate, thus saving valuable time in the morning for their
customers.
Amsterdam Airport and Hamburg Airport tested improvements to the customer
experience in 2018 through the PASSME [15] project, which uses technology
and some airport design elements to reduce unwanted travel time and help
passengers spend their time according to their preferences. Tampa airport
introduced a program to get more customer insight and build an action plan for
higher customer satisfaction, making use of technology to support the process.
Incheon/Seoul Airport has extremely efficient biometric identification at security
control, which speaks to customers in the language of their passport.
Other travel stakeholders have also done an enormous amount of customer
experience improvements. Transport for London created a unified API to allow
a more seamless travel experience for customers [16]. The German rail operator
Deutsche Bahn improved their customers’ experience by turning the DB navigator
into a travel concierge, allowing clients’ time to be spent effectively and according
to their priorities instead of wasting it with travel planning [17].
Expedia have adopted a completely agile approach in terms of testing which
websites and customer propositions work best. They also experiment with voice by
developing a number of solutions for Alexa by Amazon [18].
Kayak and Expedia have both started using chatbots that can learn what consumers
like and deliver appropriate suggestions for travel products to buy. American
Express just bought Mezi, a personalised travel assistant based on AI that supports
business travel agencies in offering multiple services to their customers,
including "please just buy the same flowers as every week".
**3.3 Efficiency increases**
Low cost or hybrid carriers such as Virgin Express and later Brussels Airlines
had already worked with surcharges for more expensive channels more than
10 years ago. These were relatively small carriers in the global context and therefore
did not create much awareness or subsequent change.
In 2015, Lufthansa announced a 16-euro surcharge [19] on each booking made
through global distribution systems (GDSs) like Amadeus and Sabre. Other
carriers such as British Airways and Air France followed. They want customers to
book directly through their websites to get a better customer understanding,
control the experience, offer ancillary services for sale and introduce more flexible
pricing as well as ad hoc offers at the airport, such as lounge access. They also aim
to control the high direct and indirect costs created through GDS bookings.
Airlines and airports are increasing the focus on self service. This has led to the
increased availability and push of self service luggage check in, which Air New
Zealand and Lufthansa have had in place at their home airports for a couple of
years. Self connecting services to simplify connecting traffic and enable connections
with low cost carriers have started to gain ground since Easy Jet announced
cooperation with long haul carriers such as Norwegian and West Jet [20], and
Air Asia introduced a special product for this.
In Japan, airports are testing robots to carry heavy luggage and to clean airport
premises. Munich Airport, in cooperation with Lufthansa, is also running a pilot to
test Pepper, the humanoid robot, answering customer questions at the airport [21].
Fraport introduced the “Smart Data Lab”, in an attempt to gain useful knowledge
and insights and be able to take action from the data in the organisation.
**3.4 Organisation design to incorporate digital, retail and innovation**
Some changes reflecting the realisation of the importance of digital and innovation
have become visible in the organisational setup, both in terms of new functions and
an increased presence in the top leadership. Titles such as Customer Experience
Director, Digital Transformation Officer [22], Digital Officer and Innovation
Officer or Director have become quite common. Depending on the stage of the
organisation, digital is often still seen as an add-on, which becomes visible in titles
such as "digital customer experience" and/or separate functions for ancillary
services and loyalty instead of a holistic approach. "Retail" has become part
of the nomenclature in some airline organisations and is already very common
in airport organisations. Some organisations, in an attempt to stress the customer
focus, have also renamed operational areas, for example "airport customer
delivery" instead of "ground handling".
However, the main base of the organisation is still very similar to what it used
to be, even though the functions and activities should change, as they are no longer
really aligned with the current world. Revenue Management & Pricing, for
example, is becoming increasingly mingled with digital channel pricing and sales;
ancillaries and loyalty services overlap; digital channel experience and overall
customer experience overlap; and so on.
Throughout my career I have noticed that aviation companies often prefer
reorganisation to tackling the key problems: revising processes to be fit for the
future, assigning and building the right talent, and departments working in silos.
**3.5 The rise of innovation labs**
Both airports and airlines have started to take initiatives to foster innovation via
innovation labs.
To name a few real life examples from the airline world:
- Easy Jet puts disruptive thinking at the heart of its digital strategy and invested
in Founders Factory [23].
- Ryanair established Ryanair Labs as an internal solution as part of its “always
getting better” campaign.
- Lufthansa created the Lufthansa Innovation Hub as a separate subsidiary.
- IAG, in partnership with L. Marks, launched the Hangar 51 program in 2016
to help improve airport processes, digitise business processes, improve data
driven decision making to enhance customer satisfaction and to develop
completely new innovative ideas that can make a difference to customers.
- Jet Blue created a venture arm to foster innovation, Jet Blue Technology
Ventures.
- Malaysia Airlines launched its first in-house innovation lab, called iSpace, last
year. The airline claims that the opening marks the third phase of its digital
transformation. Tata Consultancy Services, IBM Bluemix, Amadeus, Telekom
Malaysia and the University of Malaya are partnering with the airline in the
initiative.
Airports, too, are attempting to innovate and support digital transformation. There
is a lot of potential through digitisation to speed up processes, increase their
efficiency and develop new experiences:
- Manchester Airport Group have launched its own technology and e-commerce
business to respond to technology-driven changes in the way passengers travel.
They want to move the airport experience into the digital age.
- Group ADP (Paris Airports) launched the “Smart Airport” innovation hub
initiative to design the airport of tomorrow.
- Munich Airport has recently announced the development of a future focused
innovation campus.
- San Diego International Airport’s Innovation Lab is a collaborative environment where companies, innovators and airport executives work together to
create and test new ideas. The aim is to drive airport innovation and improve
the customer experience. Successful ideas have the opportunity to be implemented at San Diego, other airports, and even in other relevant industries like
malls, hotels, convention centres, etc.
Made by Many, an innovative digital agency in London, has done research on
innovation labs, with a broad collection of best practice knowledge ([24], see also
**Figure 4).**
They look at four main experiments related to innovation labs: the impact of
proper design, the impact of actual competition, the impact of hard targets and the
impact of tranquillity. The report reveals plenty of valuable insights and data about
where the blockers to innovation are, what innovation lab talent looks like (and how
to manage it), how to integrate with the sponsor organisation, and why innovation
labs are to business what science fiction is to literature. Above all, and perhaps most
valuably, Made by Many defined the key reasons why innovation labs fail and what
the critical success factors are. Figure 4 summarises the key learnings from the
report.
**Figure 4.**
_Made by many, Kevin Braddock: Innovation labs—best practice, main conclusions; madebymany.com_
IATA have started to support aviation by running hackathons to develop innovative solutions based on the IATA standards such as New Distribution Capability
(NDC). These hackathons help to show what can be done to achieve the culture
change so much needed in the industry. Unfortunately airlines are not yet making
enough use of these possibilities.
**3.6 Innovative things in the making: newcomers, innovating and disrupting**
New technology and fresh thinking can help significantly to challenge and
improve the current way of working, current profitability models, customer and
staff experience, operational and commercial areas. It would be beyond the scope
of this chapter to go further into details, but just showing some of the revolutionary
developments in the market gives an idea of the possibilities.
A number of solutions help to overcome the silos within organisations and also
foster more open thinking with external partners. Airports and airlines in particular
have missed a lot of opportunities by building walls around themselves and not
cooperating closely.
We outline a number of innovations from travel tech start ups and enabling
technologies that reflect new thinking—not only new technology for old ways of
doing things.
_3.6.1 Augmenting customer experience and making travel planning easier_
- Group Travel digitises the process of group bookings, reducing manual work,
allowing the inclusion of many additional services and facilitating cooperation
within the organisation and between tour operators and airlines
- Trvl Porter: a style concierge recommending a wardrobe for travellers to rent,
made available at their destination so there is no need to carry luggage any
more
- AirPortr offers the service in London to pick up/deliver your luggage from/to
your home or hotel and check it in for your flight
- Bounce is a start up allowing travellers to store their luggage with hotels and
retailers whenever needed
- kiwi.com—helps to find all kinds of flights and develop a journey including low
cost and full service airlines as well as other means of transport; they operate a
contact centre as well to support customers in case of any operational disruptions
- TrustaBit uses blockchain technology to allow airlines to automate the compensation process, including the possibility to distribute vouchers during disruptions at the airport
- A number of inter-modular solutions such as Rome2Rio are evolving; for airlines,
these solutions and new technologies make it much easier, less complex and less
costly than today to partner with other airlines, local taxi companies, and even
boat taxis or bicycle rentals in order to get travellers exactly where they want to
go and how they want to go there
- Boni Loud Steps developed indoor navigation for the visually impaired
- Interes is an innovative retail engine which helps airlines to develop and control
dynamic product and promotional approaches adapted to their target groups,
with pricing free of the traditional systems' limits related to fare filing or
booking class letters of the alphabet
- Hopper predicts future price evolution and advises customers when best to book;
they also offer alternatives to the chosen destination in line with customer
preferences and budgets
- Grab allows mobile (pre)ordering of retail products and services at airports
_3.6.2 Faster, more efficient, more revenue_
- Automated aircraft checks conducted by robots and AI will speed up the
turnaround process considerably, helping airlines to plan more efficiently
- New technology, such as 3D printing, offers new aircraft and engine design
opportunities
- Data can be used to anticipate customer numbers in order to reduce crew
requirements and engine maintenance, allocating the most suitable aircraft,
or the most suitable gate at the airport even in case of delays. This allows more
efficient staff planning. Beontra is one of the companies which developed
models for integrated capacity, traffic and revenue planning to already achieve
this in terms of airport planning
- Winding Tree is a start up allowing safe direct transactions with third parties
by using blockchain technology; this can also help to foster airport-airline
cooperation
- YieldIn is a revenue management solution making it possible to align business priorities and revenue management practices, thus overcoming silos and
ensuring engagement by top management
_3.6.3 Safer and/or more sustainable and eco-friendly_
- Helmets are being developed that include an augmented reality (AR) display.
Pilots will be able to track all of the controls, alerts, signals, etc. more easily.
Training will become more immersive as a result.
- The solution via Trvl Porter to “rent” your clothes at the destination saves fuel
and thus is a more sustainable solution than carrying luggage.
- Further enhancements for “self flying” using AI and Machine Learning are in
the making.
_3.6.4 Substitutes on the horizon for current aviation models and processes_
- What if Google, Amazon, Alibaba do move forward even more into travel and
re-invent the whole model?
- Amazon made some advances into travel some years ago [25] and stopped
the initiative, yet technology has advanced even further now and they might
give it another go, given their expertise in frictionless online retail and their
300 million customer base [26].
- Alibaba has already shown significant muscle to play a major role in the
Chinese travel market in spite of a strong player such as Ctrip. With their
investment in the new brand Fliggy, based on their Alitrip infrastructure, they
target the younger digital generation and have created a kind of travel
marketplace, allowing travel players to create their own shops while providing
marketing and data analytics support for airlines and other travel players. If
they combine this even more with their retail expertise and innovation
activities, this could potentially become a game changer.
- Google keeps adding elements of the travel journey, linking already to a
number of airlines directly via Google flight search and adding travel partners
to Google Maps; could they become the GDS of the future?
- Could there be completely new players in the market? What if there was just
a market place for retail services and modular web based services to resolve
inventory, wiping out a lot of the current processes?
- What if the principles of easy flying - which we still tend to call low cost
services - become the norm for both long-haul and short-haul travel?
- If check in were eliminated, what would the large check in areas in airport
terminals be used for? Could stores become mobile and move around the
airport to where the customers are, instead of directing customers to the stores?
Will food & beverage ordering turn into delivery at gate services via robotics?
- Waves, a start up offering "flying on demand", already operates in the UK.
- Electric and hybrid engines and models will support new concepts such as
"flying cars" and revived Concordes.
- Hyperloop could become an alternative for longer distance travel.
###### 4. What still needs to be done for the industry to survive
As the selection in Section 3 shows, there are a number of activities ongoing in
aviation to adjust to the digital age. But are they really the right things? Are they
enough?
From an external view, a lot of these activities seem to be little things just to get
to the "normal" standard of today, and it is hard to understand why they take so
much effort. And the real structural issues seem to be missing. If your house is
damp, just adding some high quality paint on top of the damp walls will not help.
If you drive a vintage car, you will not normally use it on the motorway, unless
there is an emergency and you know you will be driving far too slowly.
Digital innovation by Google, Amazon, Facebook, Apple, Samsung, Alibaba
and other tech players, but also other travel players such as online travel agents and
meta search companies Expedia, Skyscanner/Ctrip and various new start ups, has
been outpacing the rate of change in aviation for several years, and the speed is
accelerating, putting airlines and airports at a disadvantage to other industries and
even to other travel stakeholders. The examples from the Forbes 2000 list [3] in
Section 2, and the profitability and market capitalisation figures, are a clear result
of this (Figure 1).
Potential substitutes as described in Section 3.6.4 could become a real threat,
or simply a driver for faster and more drastic change. Coming back to the house
example, it is like avoiding going down to the basement because you know it is full
of water, old wiring and fragile walls, while restoring the house above and ignoring
this, hoping to be able to continue as long as possible.
More drastic change is needed than copying current business models such as
ancillary revenues or putting more focus on the customer and adding technology
workarounds to make this happen. But only a few airlines and airports are really
serious about it, starting to go down into the basement.
Sir Tim Clark, Emirates Airline president, expressed a warning recently in an
interview with Business Insider. "Guys, there's a storm coming, and if you don't
get on it and deal with it, you will perish," Clark said. "The company of the 2050s
will bear no resemblance to the company of 2018."
"It's not a question about using advanced technology to increase the way you
do your business, like ancillary revenue streams, because that's a given," Clark said
emphatically. "It's not a question of not knocking your companies down internally
and rebuilding them on digital platforms. That's a given for us. It's not the case
for a lot [of other airlines]." [27] Tim Clark made a major change by hiring a high
calibre Chief Digital Innovation and Transformation Officer into his team at the
end of 2016.
I believe there are 4 key areas which need to be tackled more seriously to really
create a sustainable future for aviation. The 4 Bs model that we created is not a
linear, one-off exercise, but is meant to be re-applied on an ongoing basis,
referring back and forth between the different stages and continuously
evolving (Figure 5).
**4.1 Big vision**
The activities that airlines and airports currently perform are in most cases not
part of a holistic strategy. They add certain capabilities without sufficiently
questioning the current processes and setup. Given the tremendous amount of
change happening outside the industry, it is certain that consumer expectations
will increase even more significantly.
**Figure 5.**
_The 4 Bs model, XXL Solutions._
Digitisation and technology based on digital platforms are a must, no longer part
of any vision.
A big and bold vision starts with "greenfield" thinking: how would you set up an
airline or airport without considering current processes? Only subsequently would
you decide which of the current processes to eliminate completely, which to
improve, and which factors to build on and enhance in order to get closer to your
vision.
Some airlines such as Ryanair have claimed they want to become the Amazon of
the air. But Amazon has been continuously re-inventing itself, and is doing so again
now with its Amazon Go stores, moving into the food supply chain and into the
internet of things (IoT) with Amazon Echo.
For airlines, retailing ambitions so far are mainly based on adding ancillary
services, optimising revenue and using more data analysis. And it seems each one is
just following the others. Yet decades-old technology and manual processes in
distribution, revenue management and even operational areas will no longer
provide the flexibility needed to adapt. Unfortunately there is no single technology
solution out there which delivers all the possibilities and flexibility needed today.
But all the technological opportunities exist to implement a vision, without the
complexities and large investments needed in the past.
**4.2 Behaviour and mindset**
A complete makeover is needed, including sorting out the basement of the
house, or building a completely new house.
Developing the big vision and the subsequent makeover strategy requires above
all the right leadership and mindset. And a lot of energy and care. When Ryanair
re-invented the way of doing business in the 1990s, their biggest risk was having to
close down. They questioned everything and used the opportunities of technology.
When Willie Walsh turned ailing Aer Lingus into a low cost carrier at the beginning
of this century, he set very bold targets in order to achieve change and the thinking
of what was needed to get there, even though it seemed far away and impossible at
the time. West Jet, one of the global carriers with double digit profit margins for
years, has a very charismatic leader at the helm.
**Figure 6.**
_XXL solutions, what aviation can learn from technology companies._
The big opportunity is that technology today allows us to do everything we want
to; it just requires a smart approach and a big vision to get there, which in turn
requires the right behaviour and mindset.
Looking at how successful technology companies operate, it helps to step back and
think about how they work and what aviation can learn from this. In Figure 6,
we have pointed out some of the main relevant differences between the two types of
businesses. Even though they are not always completely evolved, the tendencies in
terms of behaviour are very relevant.
We believe that a change of behaviour and mindset is crucial for airlines and
airports to achieve any change. The currently prevailing iterative and process
oriented style is counterproductive in the dynamic and agile digital environment.
Developing an agile approach, collaborating, and using the best talent for a
project rather than whoever should be there according to hierarchical thinking
is a key element of success for tech companies, but not yet for aviation. Trust and
personal responsibility are at the heart of this behaviour.
The following elements can help to create this behaviour:
A. **People and talent are key.** A full review is required of the talent needed to meet
the digital and innovation requirements. You can only think out of the box if you
have different boxes. Bringing in some younger people (for example by working
with universities, for recruitment into new jobs or for ad hoc activities) and creating
more diversity to ensure more out of the box thinking, coupled with training
and support for existing staff, can help to speed up the digital transformation
process. It is important to ensure that these people are really involved and can add
value. I have often seen some really good talent left aside because of organisational
dynamics. Travel brings together people with all kinds of different lifestyles and
cultural backgrounds, yet airports and airlines still employ very much national/local
staff, apart from the flying staff. It also became very evident from the picture of the
airline CEOs at the last IATA Annual General Meeting (AGM) in Sydney that there
was only one woman present.
B. **Visible changes** such as collaboration tools like Slack and Facebook for Work,
and the introduction of methodologies such as design thinking and Kanban, can
help to support the cultural change and break down silos. Creating time for
innovation, for example on Fridays or 3 h per week, and making use of co-working
office space can help companies grow at lower cost and foster an innovation and
change culture and open minds. Even innovation awards, with simple rewards such
as lunch with the CEO, could make the focus very visible.
C. **Events and thought leadership** from external sources, as well as training that
looks beyond current boundaries and comfort zones, can help to develop the open
minded behaviour needed. Aviation tends to shut down even more in case of high
result pressure. Many airlines have initiated a stop on travel activities in recent
years because of result pressure. In my opinion this is exactly the opposite of what
should be done in the current environment. Disruptive events such as hackathons,
travel tech start up events, tech and retail trend events and innovative think tanks
should be used strategically to help board members, executive team members and
other staff think out of the box and develop their agenda for success. Even inviting
thought leaders to present on trends in the marketplace and what it will look like in
the future can add to out of the box thinking for the whole company. Coursera,
iTunes U, Udemy and edX also offer opportunities for staff development and
training which did not exist before, and which had traditionally been extremely
expensive and not personalised.
D. **Define standards for how to work.** These should include the principles of
collaboration, agile behaviour, fast results and allowing trial and error, and should
ensure that state of the art suppliers are chosen rather than excluded just because
they do not yet have enough customers. We are in an environment now where there
is no fixed roadmap but a lot of possibilities, and the direction needs to be
determined by the aviation stakeholders. Truly innovative suppliers and partners
can help to foster innovation and open mindedness.
E. **Put yourselves into your customers' shoes.** I have seen too often that executives had no clue what happens at the airport or on the website, because they booked in a different way and were never asked to talk to customers or to observe the different touch points of the customer journey.
Obviously, being close to customers and understanding their mindset is one of the keys to ensuring the right behaviour. If customer satisfaction and feedback become part of the board meeting and executive team meeting agendas, and technology, which is already available, is used to analyse the feedback and immediately direct it to the right people to take action, this goes a long way towards recognising current and future needs and satisfying customers. Tesco, the UK retailer, has also created household panels and feedback opportunities at specific touch points to get a true 360-degree view of customers and spot trends early.
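The idea of immediately directing feedback to the right people can be illustrated with a minimal sketch. The team names and keyword lists below are purely hypothetical, not taken from any airline system:

```python
# Illustrative sketch (hypothetical teams and keywords): route a customer
# comment to the team responsible for acting on it.
TEAM_KEYWORDS = {
    "baggage": ["bag", "luggage", "suitcase"],
    "booking": ["website", "booking", "payment"],
    "airport": ["check-in", "security", "gate", "lounge"],
}

def route_feedback(comment: str) -> str:
    """Return the team a feedback comment should be directed to."""
    text = comment.lower()
    for team, keywords in TEAM_KEYWORDS.items():
        # First matching team wins; dicts preserve insertion order.
        if any(word in text for word in keywords):
            return team
    return "customer-care"  # default owner for unmatched feedback

print(route_feedback("My luggage arrived late"))            # baggage
print(route_feedback("The website crashed during payment")) # booking
```

In practice such routing would use sentiment analysis or a trained classifier rather than keyword lists, but the principle of closing the loop quickly is the same.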
**4.3 Branding and selling**
Branding and selling serves both internal and external purposes: to drive change and help staff engage and fully understand their role, and to retain existing customers and create awareness among new ones. It is what companies often forget; they start building the new house first and create fears as staff see the preparatory works. Ryanair's "Always Getting Better" campaign and the vision of becoming the Amazon of the air are a good example of initiating a major change process to reinvent the company and establish a new market positioning.
Their fast activities and visible results are the "moments of truth" and significantly help to make people believe in the change.
Change is always linked to fears, particularly in this fast-changing environment. Fears of losing jobs because of functional changes or the introduction of AI should be anticipated and thought about ahead of time. Addressing those fears, defining where human intervention can add value and anticipating these changes early can make a significant difference. If staff are taken seriously and engaged, they can play a key role in turning the company around and establishing new value- and revenue-adding functions while other functions are digitised and optimised.
I also believe that leadership should put themselves more often into the shoes of staff, in addition to customers, to best understand the sentiment and act accordingly.
**4.4 Building**
The aviation industry has typically been extremely process-oriented and risk-averse, with heavy governance structures, also when it comes to running projects. A focus on results and agile behaviour in how projects are run is another thing aviation can learn from tech companies. This should be coupled with supporting new ideas and taking bold risks as part of the ecosystem, while abandoning initiatives once it becomes clear they will not work out as predicted.
Methodologies such as design thinking [28] and Kanban help to design and run projects and achieve fast solutions in agile environments, in contrast to long periods of discussion without decisions and cumbersome governance structures that take empowerment away from people. They can also help to ensure that processes are drafted with future needs and flexibility in mind, rather than simply reproducing today's approaches with new technology.
It is important that this approach is understood and clearly brought to life. It also involves taking some risks and creating a culture of trial and error. There are solutions in the market now which help to overcome some of the shortcomings of the old technology. If those solutions are adopted in a modular way, the unnecessary elements of the old systems can gradually be phased out until a truly state-of-the-art proposition is in place, with well-managed risk.
At the moment those innovative solutions are often ignored by airline people because they cannot yet imagine this new world. It is crucial for leadership to take a leading role in guiding the organisation to do things differently. The biggest risk in the current environment is not to move at all.
**4.5 Limitations and further considerations**
Aviation is at a turning point. Changing consumer behaviour and customer expectations, the rise of the middle classes in developing economies, the global political landscape, environmental concerns and technological development create a dynamic environment and challenges never seen before.
Digitisation leads to large-scale transformation across multiple aspects of business. It creates enormous opportunities but also carries risks if not managed properly. The strategic implications for organisations, industry ecosystems and society have not yet been fully grasped by business leaders or governments.
Digitisation creates new challenges that are not yet fully understood. They include an unprecedented pace of change, cultural change, the impact on society and the identification of the skills needed, outdated regulations, how to overcome legacy systems, and the need for funding of both digital and physical infrastructure. Industry and government leaders need to take up these challenges to ensure that the potential value for society and industry can be leveraged. The value of digitisation for aviation, travel and tourism is estimated to reach up to $305 billion between 2016 and 2025 through increased profitability, driven by higher productivity, increased demand for products and services due to personalisation, sharing models and a further improved perception of security. $100 billion of value is expected to migrate from traditional to new players in the industry (for example from traditional travel booking intermediaries to OTAs). $700 billion is the expected value for customers and wider society from a reduced environmental footprint, cost and time savings for travellers, and safety and security improvements [29].
_4.5.1 Customer experience_
Travellers will expect a seamless experience tailored to their habits and preferences. Companies in the travel ecosystem along the customer journey will exchange data via secure technologies and continuously create insights. Travel will become frictionless and gradually blend with other daily activities. Digital technologies will augment both the customer experience and the aviation workforce. Artificial intelligence (AI) and machine learning (ML) will help to turn data into insights and improve the customer experience, in the form of personalisation and chatbots, as well as take over specialist tasks from staff and transform the workforce. In addition, digital platforms, connected devices (the Internet of Things, IoT), virtual and augmented reality (VR/AR) and other technologies will allow for innovation, better customer experiences and increased efficiencies, and lead to a complete revision, or removal, of old legacy processes. With the digitisation of identity, increased collaborative efforts are needed to ensure cybersecurity. The hacker attack on British Airways customer data in August 2018 is a good reminder of how real this threat is. Closely linked are fake news and fake reviews and evaluations of services on social media platforms.
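As a toy illustration of the personalisation such data-driven insights enable, the sketch below maps a traveller's booking history to a tailored ancillary offer. The trip categories and offers are hypothetical; a real system would use ML models rather than a lookup table:

```python
# Illustrative sketch with hypothetical data: derive a tailored
# ancillary offer from a traveller's most frequent trip type.
from collections import Counter

OFFERS = {
    "business": "lounge access upgrade",
    "leisure": "extra-baggage bundle",
    "family": "seat-together package",
}

def suggest_offer(booking_history):
    """Suggest an offer based on the most frequent trip type in the history."""
    if not booking_history:
        return "welcome discount"  # no history yet: generic incentive
    most_common_type, _ = Counter(booking_history).most_common(1)[0]
    return OFFERS.get(most_common_type, "welcome discount")

print(suggest_offer(["leisure", "business", "business"]))  # lounge access upgrade
print(suggest_offer([]))                                   # welcome discount
```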
_4.5.2 Jobs and skills_
The greatest societal impact of digitisation is probably on the workforce. The aviation and travel industry is estimated to represent 1 in 11 jobs worldwide according to the World Economic Forum study referenced above [31], with potentially 780,000 traditional job losses in the aviation and travel industry. Digitisation and new technologies will also mean the displacement of current jobs in the industry, expected to be partially offset by next-generation skilled jobs inside and outside aviation at the high and low ends of the economy (for example in robotics, the Internet of Things (IoT) and data analytics). All of this raises questions about the future workforce which need to be addressed by industry and governments alike. New thinking is needed with regard to society's views on employment, and on concepts for next-generation jobs, occupations and pastimes. Middle-level jobs that require routine manual and cognitive skills are the ones most at risk in terms of labour displacement and productivity effects [30]. Big legacy companies in particular struggle with the challenges of identifying new functions and redesigning the organisation to integrate new and current functions in a way which suits the current dynamic environment [31]. Most departments have been run in silos, and staff fear losing recognition and their jobs. Training programmes working with new technologies and helping to update relevant skills are required.
Top executives and board members have often been far removed from digital and technological developments, and these areas have been separate entities in the organisation. It is a big challenge for these leaders to open up and learn quickly about the relevant technologies they need to consider, what their set-up should look like, and the strategic options and tactics for moving away from their legacy systems and processes. I have heard from many personal discussions with people within these organisations that many change activities do not go ahead as they should, because leaders lack the insight, and thus the courage, to decide on radical changes when they meet opposition from some people within the organisation.
_4.5.3 Legacy systems_
Airlines in particular, but also other aviation and travel stakeholders, face limitations in their activities and in the speed of transformation, as they need to keep legacy systems running while developing new technology. They are afraid of the risk of changing the underlying legacy technology. Yet there are new technologies available now which could help to develop an environment for the "new world" on specific routes only, as a test case, building confidence while keeping the legacy systems running. Such a multi-speed approach to information technology (IT) requires strong leadership to move ahead successfully. Another limitation often encountered is the fact that technology, and the knowledge that goes with it, was outsourced by the main players for many years. It is essential to develop some in-house knowledge and skills, if only to be able to understand and manage IT suppliers better. Technological innovation in the aviation and travel world often happens much more with smaller suppliers, which raises the question of small versus large suppliers in the ecosystem. Aviation stakeholders have often feared exposure to smaller suppliers, and bigger one-stop suppliers have fostered this fear, yet the current environment calls for new approaches and a critical review of supplier choice in terms of innovation potential.
_4.5.4 Regulation and legislation_
The regulatory framework has a significant influence on transformation and can encourage or discourage the introduction of new technology. Innovation moves much faster than regulation and policy making, which means that governments are forced to introduce regulations for nascent technologies. Concerted action by industry leaders, regulators and policy makers is needed in order to maximise the value of digitisation in aviation, travel and tourism. The problem of fake news on social media reflects the risk of not embracing new digital trends and not addressing the related opportunities and challenges. A series of actions for all participants in the ecosystem can be identified. According to the World Economic Forum study cited above [29], they include the following:
- Empower educational institutions to design curricula that help to prepare the
next generation for the digital economy.
- Support the transition of the workforce by reskilling current employees through training.
- A framework of rules for the operation of machines and AI systems is needed. Yet frameworks should remain flexible enough not to kill the innovative spirit, and should instead foster development through guidelines and proactive measures addressing the liability, safety, security and privacy of these new technologies.
- Transforming legacy systems into agile platforms with interoperability,
enabling plug-and-play interactions between the partners in the ecosystem.
- Establish a regulatory framework that defines the appropriate use of data, involving private, public and civil-society organisations.
_4.5.5 Global political trends and economic evolution_
International departures more than doubled between 1996 and 2016, from 650 million to 1.45 billion, according to the World Bank [32], and it appears that growth will continue. According to the World Economic Forum report on digital transformation for aviation, travel and tourism [29], emerging markets will account for a forecast 70% share of global airline travel by 2034. Demographic developments play a key role in growth and in how fast new technology will be adopted. Regions in Asia, Africa and Latin America will drive a main part of this growth thanks to a rising middle class, although technology adoption may be speedier in developed countries. Businesses will also face the challenge of managing experiences for travellers who are less used to technology.
Growth means that aviation stakeholders need to adapt faster. But it also creates other problems in terms of overtourism and sustainability, further increased by additional cruise tourism. A number of places have started to tackle excessive visitor numbers. The authorities of the Philippines and Thailand have introduced forced breaks for Boracay Island (Philippines) and Maya Bay (Thailand). Cinque Terre in Italy is trying an app with which tourists can see the number of people on the routes in real time. Machu Picchu in Peru has turned to time slots. Jeju Island in South Korea faced almost 180 daily flights in 2017 and 15 million visitors, yet relief came not from the authorities but from a Chinese travel ban unrelated to the underlying problem. Colombia's Caño Cristales site faces the challenge of balancing a delicate ecosystem with an unprecedented number of visitors. In a quite exceptional approach for a developing country, it tackled this quickly and introduced a set of rules: no plastic bottles, no sunscreen or insect repellent in the water, no swimming in certain areas, no cigarettes, no feeding the fish. On arrival, visitors attend a briefing to make this completely clear. Local tour guides and hosts are also being trained [33].
Political tendencies towards protectionism rather than continued globalisation, as well as rising fuel prices, could potentially have an impact on the growth forecast [34]. Other key considerations about the future evolution include:
- How can stakeholders in the aviation and travel ecosystem ensure data security and comply with new data protection laws while incentivising customers to share personal data in exchange for tangible benefits, such as hyper-personalised travel experiences? To what degree can personal data be securely and ethically used, and made interoperable across public and private stakeholders, to boost safety and security?
- The world of the hyper-connected consumer is moving from physical to digital assets. Examples such as Uber, Amazon, Google, Apple, Expedia, Tesla and WhatsApp illustrate that the enterprise value of the future lies in how well an organisation develops its digital assets for the benefit of customers and employees [35]. Is there a model for aviation to foster global collaboration and facilitate the sharing of company assets, to unleash the full potential of digital transformation while preserving each company's relevance in the battle for consumer mindshare? How will this impact future investments in both physical infrastructure and digital technologies?
- How will the operating models of travel organisations change in a smart and
connected world where the lines between online and offline are blurring,
and physical assets turn to digital ones? How will this change the behaviour
and expectations of individuals?
- Will it take completely new players in the market to finally push aviation and travel stakeholders towards more radical change, just as the low-cost model gradually forced airlines and airports to change? Google now operates a large number of its own services, all branded accordingly, including Google Flights, Google Destinations and Google Hotels. Such improvements are already proving fruitful as more travellers turn to the Mountain View, California-based search company. According to last year's annual Portrait of American Travellers study from MMGY, 40% of travellers cite Google as their first source when booking trips, up 8 percentage points from the 2016 study [36].
###### 5. Key conclusions
Airlines' small profit margins, poor market capitalisation compared with technology companies and other industries, and increasing customer expectations are clear indicators that substantial change is needed for aviation to get fit for the twenty-first century. Airlines and airports have started the change process slowly, but many digital transformation activities are now under way.
The main focus of activities is on customer experience improvements, cost efficiencies, better analytics and revenue optimisation, as well as operational excellence. Internal and external innovation labs have been created to support the process, with varying degrees of success so far. The most advanced companies have in-sourced or created at least some key parts of their software development activities.
Yet more drastic changes are still the exception; most activities focus on creating workarounds based on decades-old processes and systems. Many industry players either find it difficult to navigate these stormy waters, or prefer to stay ashore in waters they know well and avoid any marks that indicate new ways, because they cannot imagine that they will work.
It is critical for all board members and whole leadership teams to have a deep understanding of the digital agenda, to ask the right questions and to drive the vision and strategy. A big vision of the destination, and the right behaviour as prerequisites for branding and selling the trip, getting the whole team to work through the stormy waters and test new ways to build the new world, and even starting to build and show fast results: these are the main areas that in many cases still need to be fulfilled.
There are many innovative start-ups in the market, and plenty of opportunities to start drastic change. Disruption and faster change will mean that the storm gets even stronger. Political changes and regulations, particularly the increasing protectionist agenda of some countries, are a risk to expected growth for the foreseeable future.
Cost pressures, above all from increased labour and fuel costs but also in the area of aircraft costs, are other main risks to be aware of. The latter could grow given the deals between Airbus and Bombardier and between Boeing and Embraer, which will restore the duopoly the two giant manufacturers have held for many years. Both Bombardier with its C-series and Embraer with its E-series had started to compete directly with the smaller versions of Boeing and Airbus jets.
Technology will remain a key disruptor, but also a key enabler. If the big vision and behaviour come alive and are followed by branding and selling as well as building activities based on solution orientation, agile principles and the will to move forward rather than remain in the past, then digitisation and current technological opportunities can open doors to do things previously thought impossible, creating seamless customer and staff experiences and endless new revenue and cost-saving opportunities at the same time. Digitisation offers unprecedented opportunities to shape the future, but industry leaders need to seize this chance and introduce the radical changes needed to create the potential value. Only the players who do this best will have a chance to survive and compete successfully in the light of these dynamic technological changes and ever-increasing customer demands. And competition is likely to increase, with strong players from other ecosystems, such as Google, Amazon, Alibaba or others not yet seen, continuing to move into the aviation and travel sphere.
###### Acknowledgements
I would like to thank IATA for having invited me as a jury member for their
last hackathon in Kochi. Their hackathons contribute significantly to a change of
mindset in the industry.
Thank you to the XXL Solutions team for research and empirical insight, and to Hamburg Airport as the main sponsor of the Think Future event, which we have developed into the reference event for innovation and transformation in the aviation and travel industry.
###### Conflict of interest
There is no conflict of interest to declare. Our strength is being an independent consultancy which is very active in the digital transformation, innovation and start-up arena of travel and aviation.
###### Author details
Ursula Silling
XXL Solutions - Do Things Differently, Geneva, Switzerland
*Address all correspondence to: u@xxlsolutions.us
© 2019 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms
of the Creative Commons Attribution License (http://creativecommons.org/licenses/
by/3.0), which permits unrestricted use, distribution, and reproduction in any medium,
provided the original work is properly cited.
###### References
[1] Elyatt H. CNBC. Ryanair Turns
Customer-Friendly on Easy Jet Threat.
22/11/2013
[2] IATA Press Release No. 31: Corporate
Communications (Sydney), Solid
Profits Despite Rising Costs. 4/6/2018
[3] Forbes Global 2000 Publication. The
World’s Largest Public Companies, Edited
by Halah Touryalai and Kristin Stoller
with data by Andrea Murphy. 6/6/2018
[4] Business Insider. Zhang B. Delta’s
CEO Explains Why Airline Computers
Fail and How Tech Will Change Flying.
17/12/2017. Available from: https://
www.businessinsider.com/delta-ceoexplains-how-tech-will-change-flying2017-12?IR=T
[5] IATA, Atmosphere Research Group.
The Future of Airline Distribution.
2016-2021. Available from: https://
www.iata.org/whatwedo/airlinedistribution/ndc/Documents/ndcfuture-airline-distribution-report.pdf.
Airline Distribution Fundamentals Current Problems, Disruptors and
Future Perspectives
[6] Video 1: Live Stream Think Future Hamburg Aviation Conference,
YouTube Channel. 8-9 February
2018. Available from: https://www.
youtube.com/playlist?list=PL5IfEpU_
v0VCu5534ZWouJqSHU43VYHPS
[7] BBC Article. British Airways
Boss Apologises for ‘Malicious’
Data Breach. 7/9/2018. Available
from: http://www.bbc.co.uk/news/
uk-england-london-45440850
[8] Journal Article. International Airport
Review: Airlines and Airports to Invest
US$33 Billion in 2017. 5/9/2017
[9] Business Travel Article. Jarvis H.
Virgin to Launch New Loyalty Scheme
with Delta. 21/8/2018. Available from:
https://standbynordic.com/virginto-launch-new-loyalty-scheme-withdelta/
[10] Future Travel Experience. Delta
Invests $50m in RFID Baggage Tracking
Technology. May 2016. Available from:
https://www.futuretravelexperience.
com/2016/05/delta-invests-50m-rfidbaggage-tracking-technology/
[11] Irish Times Article. Ryanair Wants
to be “Amazon of Air Travel” With New
Booking Option. 9/6/2016
[12] Article in SMBP Social Media for
Business Performance, KLM: Using
Social Media to Leverage “Service,
Brand and Commerce. 2/4/2017.
Available from: https://smbp.uwaterloo.
ca/2017/04/klm-using-social-media-toleverage-service-brand-and-commerce/
[13] staralliance.com: Star Alliance
Creates Digital Service Platform with
Accenture. 8/2/2018. Available from:
https://www.staralliance.com/en/
news-article?newsArticleId=DSP&grou
pId=20184
[14] Future Travel Experience Article.
Heathrow Partners with Grab to Offer
App-Based F&B Pre-order Service.
September 2017. Available from:
https://www.futuretravelexperience.
com/2017/09/heathrow-partners-withgrab-to-offer-app-based-fb-pre-orderservice/
[15] European Commission Website.
Personalised Airport Systems for
Seamless Mobility and Experience. 20152018. Available from: https://ec.europa.
eu/inea/en/horizon-2020/projects/
h2020-transport/aviation/passme
[16] Website Transport for London.
Unified API - Transport for London.
Available from: https://tfl.gov.uk/
info-for/open-data-users/unified-api
[17] Business Architecture and
Consultancy, Blog, Deutsche Bahn as
a Digital Role Model. 2016. Available
from: http://www.digitalsocialstrategy.
org/bac/2016/12/09/deutsche-bahn-asa-digital-role-model/
[18] Ad Age Article. Pasquarelli A.
Overbooked: Expedia and Priceline
Battle the Digital Duopoly. 19/3/2018.
Available from: http://adage.com/
article/cmo-strategy/expediapriceline-battle-digital-duopolyairbnb/312769/
[19] Tnooz Article. Lufthansa to Add
Surcharge to Every Booking Made via
the GDS. 2/6/2015. Available from:
https://www.tnooz.com/article/
lufthansa-to-add-surcharge-to-everybooking-made-via-the-gds/
[20] Passenger Self-service Article.
Easy Jet Launches Connecting Flights
Platform. 13/9//2017. Available from:
https://www.passengerselfservice.
com/2017/09/easyjet-launchesconnecting-flights-platform/
[21] Munich Airport Website. A
Humanoid Robot with Artificial
Intelligence. February 2018. Available
from: https://www.munich-airport.
com/hi-i-m-josie-pepper-3613413
[22] Board of Innovation Blog. Khayati
Y. Jobs in Innovation: Our Field
Guide. 23/9/2015. Available from:
https://www.boardofinnovation.
com/blog/2015/09/23/
jobs-in-innovation-our-field-guide/
[23] easyJet Website - media centre.
easyJet Signs Deal with Founders
Factory to Create from Scratch and
Accelerate Start ups to Innovate the
Travel Sector. 16/10/2016. Available
from: https://mediacentre.easyjet.
com/en/stories/11200-easyjet-signsdeal-with-founders-factory-to-createfrom-scratch-and-accelerate-startupsto-innovate-the-travel-sector
[24] Made by Many Blog. Braddock
K. Innovation Labs - Best Practice,
16/11/2016
[25] GeekWire. Wong K. How Amazon
Could Succeed in Travel: Researchers
Issue a Warning to the Industry.
11/7/2018. Available from: https://www.
geekwire.com/2018/amazon-succeedtravel-researchers-issue-warningindustry/
[26] CNBC. Kim T. Amazon Could
Disrupt Online Travel Industry Next,
Morgan Stanley Says. 9/3/ 2018.
Available from: https://www.cnbc.
com/2018/03/09/amazon-coulddisrupt-online-travel-industry-nextmorgan-stanley-says.html
[27] Business Insider Article. Zhang B.
‘There’s a Storm Coming’, Emirates Boss
Warns Airlines of a Looming Seismic
Shift in Technology. 8/2/18 4:06 pm
[28] Article “Thisislarry”. Reimagining
Flight with People at the Center:
How Design Thinking Can
Change Air Travel. 2017. Available
from: https://flytranspose.com/
reimagining-flight-with-people-atthe-center-how-design-thinking-canchange-air-travel-c9d6e2bb0d7d
[29] White Paper: World Economic
Forum in Collaboration with Accenture.
Digital Transformation Initiative.
Aviation, Travel and Tourism Industry.
January 2017
[30] Neufeind M, O’Reilly J, Ranft F.
Work in the Digital Age. Challenges of
the Fourth Industrial Revolution, Policy
Network 2018
[31] M&S should look at Amazon
tie-up, Says Marcus East Available
from: http://www.bbc.co.uk/news/
business-44551664
[32] The World Bank. International
Tourism, Number of Departures.
1996-2016. Available from: https://data.
worldbank.org/indicator/st.int.dprt
[33] BBC News. Baker V. Tourism
Pressures: Five Places Tackling Too
Many Visitors. 16/4/2018. Available
from: https://www.bbc.com/news/
world-43700833
[34] Annual Economic Report, World
Travel and Tourism Council (WTTC).
Travel and Tourism, Global Economic
Impact and Issues. 2017. Available
from: https://www.wttc.org/-/media/
files/reports/economic-impactresearch/2017-documents/globaleconomic-impact-and-issues-2017.pdf
[35] Keynote Presentation at Think
Future 18. Ghosh B. Leveraging
Innovation Through Insight Into
Other Industries, Think Future.
2018. Available from: https://www.
hamburgaviationconference.com/
publications/
[36] MMGY Study. Blount A.
Portrait of American Travellers.
28/6/2017. Available from: https://
www.mmgyglobal.com/news/news2017%E2%80%932018-portrait-ofamerican-travelers
-----
|
{
"disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.5772/INTECHOPEN.81660?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.5772/INTECHOPEN.81660, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "CCBY",
"status": "HYBRID",
"url": "https://www.intechopen.com/citation-pdf-url/65435"
}
| 2,019
|
[] | true
| 2019-04-25T00:00:00
|
[
{
"paperId": "38eba30f2d060913f2566ef43d17bb0581935191",
"title": "Aviation and Its Management - Global Challenges and Opportunities"
},
{
"paperId": "c212235b11368c766c92fb2b181a1b084e9c0639",
"title": "The future of airline distribution"
},
{
"paperId": null,
"title": "Keynote Presentation at Think Future 18. Ghosh B. Leveraging Innovation Through Insight Into Other Industries, Think Future"
},
{
"paperId": null,
"title": "Solid Profits Despite Rising Costs"
},
{
"paperId": null,
"title": "Work in the Digital Age"
},
{
"paperId": null,
"title": "British Airways Boss Apologises for 'Malicious' Data Breach"
},
{
"paperId": null,
"title": "Travel and Tourism, Global Economic Impact and Issues"
},
{
"paperId": null,
"title": "Portrait of American Travellers"
},
{
"paperId": null,
"title": "Reimagining Flight with People at the Center: How Design Thinking Can Change Air Travel"
},
{
"paperId": null,
"title": "Amazon of Air Travel\" With New Booking Option"
},
{
"paperId": null,
"title": "Deutsche Bahn as a Digital Role Model"
},
{
"paperId": null,
"title": "Jobs in Innovation: Our Field Guide"
},
{
"paperId": null,
"title": "Personalised Airport Systems for Seamless Mobility and Experience"
},
{
"paperId": "ce9a81c0df80685c755b7d145006c8ea20150808",
"title": "Celebrating nurses: Sargeʼs healing powers"
},
{
"paperId": null,
"title": "Publication. The World's Largest Public Companies"
},
{
"paperId": null,
"title": "International Tourism, Number of Departures"
},
{
"paperId": null,
"title": "How Amazon Could Succeed in Travel: Researchers Issue a Warning to the Industry"
},
{
"paperId": null,
"title": "There's a Storm Coming' , Emirates Boss Warns Airlines of a Looming Seismic"
},
{
"paperId": null,
"title": "Tourism Pressures: Five Places Tackling Too Many Visitors. 16/4/2018"
},
{
"paperId": null,
"title": "Business Insider. Zhang B. Delta's CEO Explains Why Airline Computers Fail and How Tech Will Change Flying"
},
{
"paperId": null,
"title": "M&S should look at Amazon tie-up"
},
{
"paperId": null,
"title": "Amazon Could Disrupt Online Travel Industry Next"
},
{
"paperId": null,
"title": "Ryanair Turns Customer-Friendly on Easy Jet Threat"
},
{
"paperId": null,
"title": "Unified API -Transport for London"
},
{
"paperId": null,
"title": "A Humanoid Robot with Artificial Intelligence"
},
{
"paperId": null,
"title": "11200-easyjet-signsdeal-with-founders-factory-to-createfrom-scratch-and-accelerate-startupsto-innovate-the-travel-sector [24] Made by Many Blog. Braddock K. Innovation Labs -Best Practice"
},
{
"paperId": null,
"title": "Virgin to Launch New Loyalty Scheme with Delta. 21/8/2018"
},
{
"paperId": null,
"title": "Heathrow Partners with Grab to Offer App-Based F&B Pre-order Service"
}
] | 19,247
|
en
|
[
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Agricultural and Food Sciences",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/ffbc9c64191867d3623b54da8f7fc0a96f2ad18e
|
[] | 0.883468
|
Blockchain'e Dayalı Tarım ve Gıda Tedarik Zinciri Kaynağının Kurulması: Literatür İncelemesi
|
ffbc9c64191867d3623b54da8f7fc0a96f2ad18e
|
European Journal of Science and Technology
|
[
{
"authorId": "2087649060",
"name": "Ufuk Cebeci"
},
{
"authorId": "2174424926",
"name": "Ergün Arat"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"Eur J Sci Technol"
],
"alternate_urls": null,
"id": "68571925-b158-4563-8047-3f483854eae6",
"issn": "2148-2683",
"name": "European Journal of Science and Technology",
"type": "journal",
"url": null
}
|
In recent years, technological research and studies have accelerated in the agriculture and food industry to protect and improve the trust of consumers. In 2008, with the publication of the white paper on “Bitcoin: Peer-to-peer Electronic Cash Payment System” by Satoshi Nakamoto, the world met with blockchain technology, where there are no middlemen and transfers are made securely. In the following years, with the development of Ethereum by Vitalik Buterin and the interpretation of the concept of Smart Contracts with blockchain technology, blockchain technology has begun to influence all sectors, thanks to its benefits such as increasing transparency and reliability in contracts between parties. Blockchain technology, in addition to providing solutions to financial systems that have become dysfunctional, also brings alternatives to supply chain management, where data needs to be transferred securely and quickly. Blockchain applications used in FSC emerge as a technology that will enable us to solve problems such as food security, food integrity, food fraud, etc.
In this paper, It has been studied on how to use blockchain technology in the food supply chain, how to choose the suitable blockchain platform, and how It will be facilitating for solutions such as tracking from field to fork, back-tracking are examined the data saved in the blocks and the working mechanism will be discussed in the background.
|
#### Copyright © 2022 EJOSAT | www.ejosat.com | ISSN: 2148-2683
## Review Article
# Establishing Agri and Food Supply Chain Provenance Based on Blockchain: Literature Review
### Ergün Arat[1+], Ufuk Cebeci[2*]
**[1* Istanbul Technical University, Faculty of Management, Engineering Management Department, Istanbul, Turkey (ORCID: 0000-0001-6270-5792), arate21@itu.edu.tr](mailto:arater21@itu.edu.tr)**
**[2 Istanbul Technical University, Faculty of Management, Engineering Management Department, Istanbul, Turkey (ORCID: 0000-0003-4367-6206), cebeciu@itu.edu.tr](mailto:cebeciu@itu.edu.tr)**
(5th International Symposium on Innovative Approaches in Smart Technologies, 28-29 May 2022)
(DOI: 10.31590/ejosat.1131779)
**ATIF/REFERENCE:** Arat, E. & Cebeci, U. (2022). Establishing Agri and Food Supply Chain Provenance Based on Blockchain:
Literature Review. Avrupa Bilim ve Teknoloji Dergisi, (37), 59-64.
**Abstract**
The demand for food, which is the indispensable basic need of people, has turned to healthier and safer alternatives with
increasing awareness all over the world, especially in developing countries. At the same time, food safety management in accordance
with society's health goals, customer demands, and international food standards is increasing its importance day by day in a period of
increasing food-borne diseases. As a result of this, the maximum risk level that manufacturers can take in the production of products
has decreased. The food supply chain, which consists of production, collecting, packaging, warehousing, processing, distribution, and
transfer processes, is so sensitive and complex process and has high risks. Traditional methods are insufficient for food supply chain
risk management due to the increasing demands of consumers for transparent information and food safety concerns.
In recent years, technological research and studies have accelerated in the agriculture and food industry to protect and improve the
trust of consumers. In 2008, with the publication of the white paper on “Bitcoin: Peer-to-peer Electronic Cash Payment System” by
Satoshi Nakamoto, the world met with blockchain technology, where there are no middlemen and transfers are made securely. In the
following years, with the development of Ethereum by Vitalik Buterin and the interpretation of the concept of Smart Contracts with
blockchain technology, blockchain technology has begun to influence all sectors, thanks to its benefits such as increasing transparency
and reliability in contracts between parties. Blockchain technology, in addition to providing solutions to financial systems that have
become dysfunctional, also brings alternatives to supply chain management, where data needs to be transferred securely and quickly.
Blockchain applications used in FSC emerge as a technology that will enable us to solve problems such as food security, food
integrity, food fraud, etc.
In this paper, It has been studied on how to use blockchain technology in the food supply chain, how to choose the suitable blockchain
platform, and how It will be facilitating for solutions such as tracking from field to fork, back-tracking are examined the data saved in
the blocks and the working mechanism will be discussed in the background.
**Keywords:** Agriculture and food supply chain, Food safety, Risk reduction, Blockchain, Smart contracts, Trust, Information transparency.
# Blockchain Tabanlı Tarım ve Gıda Tedarik Zinciri Kaynağı
Oluşturma: Literatür İncelemesi
**Öz**
Tüm dünyada, özellikle gelişmekte olan ülkelerde artan bilinçlenmeyle, insanların olmazsa olmaz temel ihtiyacı olan gıdalara yönelik
talebi, daha sağlıklı ve daha güvenli alternatíflere yönelmistir. Aynı zamanda toplumsal sağlık amaçlarına, müşteri ihtiyaçlarına,
uluslararası gıda güvenliği standartlarına uygun gıda güvenliği yönetimi, gıda kaynaklı hastalıkların arttığı bir dönemde önemini
günden güne artırmaktadır. Bunun etkisi sonucu üreticilerin üretimde göze alabileceği maksimum risk düzeyi düşmüştür. Üretim,
toplama, paketleme, depolama, işleme, dağıtım ve taşıma süreçlerinden oluşan gıda tedarik zinciri, en hassas ve kompleks işlemlerden
bir tanesidir ve riskleri yüksektir. Tüketicilerin artan şeffaf bilgi talepleri ve gıda güvenliği endişelerinden dolayı geleneksel
yöntemler gıda tedarik zinciri risk yönetimi için yetersiz kalmaktadır. Bu sebeple son yillarda tarim ve gida sektöründe, tüketicilerin
güvenini korumak ve iyileştirmek için teknolojik araştırmalar ve çalışmalar hızlanmıştır. 2008 ’de Satoshi Nakamoto tarafından
“Bitcoin: Eşten-eşe Elektronik Nakit Ödeme Sistemi” konulu teknik dökümanın yayınlanmasıyla birlikte, dünya, aracıların olmadığı
ve transferlerin güvenli bir şekilde gerçekleştiği blokzincir teknolojisiyle tanıştı. İlerleyen yıllarda Vitalik Buterin tarafından
Ethereum’un geliştirilmesi ve Akıllı Şözlesmeler kavramının blokzincir teknolojisi ile birlikte yorumlanmasıyla, taraflar arası
sözleşmelerde şeffaflığın ve güvenilirliğin artması, aracıların ortadan kaldırılması gibi faydaları sayesinde blokzincir teknolojisi tüm
sektörleri etkisi altına almaya başlamıştır. Blockzincir teknolojisi, başta işlevsiz kalmış finansal sistemlere çözüm getirmenin yanı sıra
verilerin, güvenli ve hızlı şekilde aktarılmasına ihtiyaç duyulan tedarik zinciri yönetimine de alternatifler getirmektedir. Tarım ve gıda
tedarik zincirinde kullanılan blockzincir uygulamaları, küresel açlik, gıda güvenliği, gıda bütünlüğü, gıda kaçakçılığı gibi sorunları
çözmemizi sağlayacak bir teknoloji olarak karşımıza çıkmaktadır. İşte bu makale de blockzincir teknolojisinin tarım ve gıda tedarik
zincirinde nasıl kullanılabileceği, artı yönleri, entegre edilmesi ve gelecekteki etkisi üzerine çalışılmıştır.
Bu bildiride, blok zinciri teknolojisinin tedarik zincirinde nasıl kullanılabileceği, uygun blok zinciri platformunun nasıl seçileceği ve
tarladan çatala takip, geri izleme gibi çözümlerin nasıl sağlanacağı incelenmiştir. Bloklara kaydedilen veriler incelenecek ve arka
planda çalışma mekanizması tartışılacaktır.
**Anahtar Kelimeler:** Tedarik zinciri, Tarım ve gıda tedarik zinciri, Gıda güvenliği, Risk azaltma, Blokzincir, Akıllı sözleşmeler,
Güven, Bilgi Şeffaflığı.
## 1. Introduction
In the 21st century, food safety has grown in importance as a
result of globalization, economic growth, rising populations, and
changing living standards and consumption habits. Food safety
refers to the measures taken to eliminate physical, chemical,
biological, and all other kinds of damage that may occur in food;
safe (healthy) food can be defined as clean food whose nutritional
value has not been lost and that is free of physical, chemical, and
biological hazards (Erkmen, 2010). In past years, several serious
food safety incidents occurred, such as "Sudan red", "clenbuterol",
and "Sanlu toxic milk powder". It is worth noting that scandals of
this kind have broken out worldwide over the past 20 years,
including Escherichia coli in hamburgers, Salmonella in eggs,
poultry, and pork, Listeria in pâtés and cheeses, and the
"horsemeat scandal" in 2013 (Tian, 2016). According to the
World Health Organization, contaminated food causes 600
million cases of foodborne disease and 420,000 deaths per year
around the world. Children under the age of five account for
30% of all foodborne deaths. Each year, the World Health
Organization estimates that 33 million years of healthy life are
lost owing to eating unsafe food, and this figure is likely
underestimated. These and similar food problems have not only
worried people day by day, but also damaged their trust in
companies and institutions.

In addition to these problems, in the agriculture and food
supply chain, the food goes through a dynamic operation in the
process from the manufacturer, producer, wholesaler, distributor,
and retailer, in short, from farm to fork. Food quality can be
affected by uncertain conditions such as weather, heat, humidity,
and cold. The limited shelf life, delivery delays, and volatile
demand structure of food products increase uncertainty and risk.
These events also reminded people of the many problems and the
inadequacy of traditional methods in the already complex food
production, supply chain, and processing environment. The
process of the supply chain is summarized in Figure 1 (Awan et
al., 2021).

Figure 1 - Agricultural food supply chain process (Awan et al., 2021).

With the acceleration of technological developments and their
integration into many industries, technological infrastructures
have begun to be created against problems in supply chain
management and the food and agriculture sectors. Quality and
assurance in the food chain process can be monitored with
modern technologies, and all information can be transmitted to
the consumer without changing it. When there is a threat to
health, it is necessary to trace the process backward to find the
source of the problem, and to establish an information system for
crisis management by following it forward. There may be
different definitions between businesses in the food chain,
incompatibility problems may arise between administrative and
physical units, or food-related information may not be verified.
In order to follow the food, all members should be connected to
a transparent information network, and information about the
features and location of the product should be shared instantly.
For this purpose, tracking technologies such as paper tracking,
product labeling, barcodes, and temperature, light, and humidity
sensors embedded with RFID (radio frequency identification)
can be used. As a result of the adoption of Internet of Things
(IoT) technologies and their usage in many sectors of daily life,
they have started to be used in agriculture and food production
and distribution processes, and studies on reliable, traceable, and
auditable systems have increased. Current IoT-based traceability
and provenance systems for Agri-Food supply chains are built on
top of centralized infrastructures, and this leaves room for
unsolved issues and major concerns, including data integrity,
tampering, and single points of failure (Caro et al., 2018).
However, the majority of the current IoT solutions still rely on
heavily-centralized cloud infrastructures, where there is usually
a lack of transparency, and by nature presents security threats
including availability, data lock-in, confidentiality, and
auditability (Armbrust et al., 2010). IoT includes a system of
devices that can collect, transfer and store data over a wireless
network. Using blockchain with IoT devices enables smart
devices to exchange data and other financial transactions in a
scalable, private and reliable way (ReportLinker, 2022). Where
IoT falls short, blockchain can serve as a solution through its
decentralized structure, auditability, immutability, and
encryption.
## 2. Material and Method
### 2.1. Blockchain Technology
Blockchain is the basic infrastructure of digital currencies,
known as crypto money, which everyone is familiar with.
Although cryptocurrencies are the most well-known application
area of Blockchain technology, Blockchain is a strong and
general subject that is not limited to the financial sector.
Blockchain is completely decentralized: every transaction and
every piece of data is recorded in units called blocks. Each block
contains all transaction data for a given time period, and these act
as digital IDs that can be used for verification. Blocks are linked
linearly and sequentially in time, with each block containing the
hash value of the previous one.
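This chaining of blocks by previous-block hashes can be sketched in a few lines (illustrative Python only, not any particular blockchain implementation):

```python
import hashlib
import json

def make_block(transactions, prev_hash):
    """Build a block whose hash covers both its data and its predecessor's hash."""
    block = {"transactions": transactions, "prev_hash": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()
    ).hexdigest()
    return block

# A three-block chain: each block commits to the one before it.
genesis = make_block(["genesis"], prev_hash="0" * 64)
b1 = make_block(["farm -> packer: lot 42"], prev_hash=genesis["hash"])
b2 = make_block(["packer -> retailer: lot 42"], prev_hash=b1["hash"])

# Changing any earlier transaction changes its hash and breaks every later link.
assert b1["prev_hash"] == genesis["hash"] and b2["prev_hash"] == b1["hash"]
```

Because each hash depends on the previous one, altering a past block silently invalidates every block after it, which is what makes the ledger tamper-evident.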
Especially with the emergence of Ethereum, the concept we call
'Smart Contracts' has taken on new meaning. Smart contracts and
blockchain technologies offer a solution where classical methods
fall short in almost every field and subject, providing benefits
such as storing documents and transactions in a secure
environment, sharing, traceability, control, immutability, and the
automation of processes that are still manual.
Blockchain technology can be visualized as a general term for
technical schemes which are similar to NoSQL (Not Only
Structured Query Language), and it can be realized by many
kinds of programming languages (Tian, 2016). The key
characteristics of blockchain are shown in Figure 2 (Puthal et al.,
2018).
Figure 2 - Key characteristics of Blockchain Technology (Puthal
et al., 2018).
By integrating the blockchain into the supply chain and
saving every piece of information on a block, the whole process
is tracked and reviewed. It provides the consumer with all
information about the product they buy. Product owner, logistic
business, and purchaser are the three key entities involved in the
trade and delivery system. A product owner is someone who
sells a product in the supply chain; a logistic firm is a
corporation that transports products; and a consumer, as the
name suggests, is someone who wishes to spend ethers on a
product. As previously stated, the logistic firm is a system-registered entity. Arbitrators are in charge of off-chain
resolution in the event of a transactional dispute. Figure 3
depicts the trading and delivery process (Shahid et al., 2020).
Figure 3 - Blockchain-based end-to-end solution for agri-food
supply chain (Shahid et al., 2020).
#### 2.1.1. Consensus Mechanism
In the applications of blockchain, we need to solve two
problems: double-spending and the Byzantine Generals Problem
(Lamport et al., 1982). Using a digital asset more than once at
the same time is called double-spending. Since blockchain
networks work with a distributed ledger system, every
transaction is verified. Transactions performed on networks such
as Bitcoin are processed on the blockchain with the approval of
the miners. If the same transaction is attempted a second time,
full nodes indicate that the transaction is fraudulent. This
protects users against the possibility of double-spending. The
Byzantine Generals Problem; deals with the stalemate that
generals, who can only send messages to each other via
messenger, reach consensus on the move to attack or retreat. It is
a consensus problem about coordination and integration
problems in software technologies, especially in distributed
systems. Data would be transmitted between nodes via peers.
Some nodes may be attacked, which may cause the relevant
content to change. Normal nodes need to distinguish the
information that has been tampered with and obtain consistent
results with other normal nodes (Mingxiao et al., 2017). This
requires the design of the consensus mechanism needed.
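The double-spend rejection described above can be illustrated with a simple spent-output set (a sketch only; real full nodes validate against the complete transaction history and consensus rules):

```python
spent = set()  # identifiers of outputs a full node has already seen spent

def validate(tx_id, spends):
    """Reject any transaction that reuses a previously spent output."""
    if any(out in spent for out in spends):
        return False  # fraudulent: double-spend attempt
    spent.update(spends)
    return True

assert validate("tx1", ["coin-A"]) is True   # first spend accepted
assert validate("tx2", ["coin-A"]) is False  # same coin spent again: rejected
```

In a real network this check only works because all nodes agree on a single history of spends, which is exactly what the consensus mechanism provides.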
Consensus (mechanism) algorithms are the decision-making
process for a group, in which members form and support the
decision that is best for the group as a whole. At its core, such an
algorithm is a set of conditional rules: if this happens, then do
that, and so on. The consensus algorithm for blockchain allows a
group of participants to make sure that all transactions are
authentic and real. There are several methods for achieving this,
such as POW (Proof of Work), POS (Proof of Stake), DPOS
(Delegated Proof of Stake), and PBFT (Practical Byzantine Fault
Tolerance).
POW (Proof Of Work)
Its core idea is to distribute accounting rights and rewards
through hash power competition between nodes. Hashing is the
name given to the process of creating a fixed-size output from
different-sized inputs. This is done using mathematical formulas
(implemented as hashing algorithms) known as hash functions.
Based on the information from the previous block, the different
nodes calculate the specific solution to a mathematical problem
(Mingxiao et al., 2017).
The proof of work mechanism works on the principle that
adding transactions to the network is difficult but easy to verify.
It is very easy to tell whether a transaction is valid, as all
previous transactions are stored transparently on the network. If
a malicious user attempts to commit fraudulently, their
transaction will be rejected by the rest of the network. However,
this is a very expensive method and poses big problems in terms
of energy consumption. In addition to these, there are long
processing times and certain security problems.
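The "difficult to add, easy to verify" asymmetry can be sketched as a toy hash puzzle: find a nonce whose hash has a given number of leading zeros (the difficulty here is tiny; real networks use vastly larger targets):

```python
import hashlib

def proof_of_work(prev_block_hash, transactions, difficulty=4):
    """Search nonces until the block hash meets the difficulty target."""
    target = "0" * difficulty
    nonce = 0
    while True:
        data = f"{prev_block_hash}{transactions}{nonce}".encode()
        digest = hashlib.sha256(data).hexdigest()
        if digest.startswith(target):
            return nonce, digest  # expensive to find...
        nonce += 1

def verify(prev_block_hash, transactions, nonce, difficulty=4):
    """...but a single hash suffices to check the claimed solution."""
    data = f"{prev_block_hash}{transactions}{nonce}".encode()
    return hashlib.sha256(data).hexdigest().startswith("0" * difficulty)

nonce, digest = proof_of_work("abc123", "lot 42 shipped")
assert verify("abc123", "lot 42 shipped", nonce)
```

Raising `difficulty` by one roughly multiplies the expected search work by 16 (one more hex digit), while verification cost stays constant, which is why mining is energy-intensive but auditing is cheap.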
POS (Proof Of Stake)
The core idea of PoS evolves around the concept that the
nodes who would like to participate in the block creation process
must prove that they own a certain number of coins at first
(Ferdous et al., 2020). Proof of stake is a consensus mechanism
that has become popular in recent years, using different
variations of some cryptocurrencies. Proof-of-stake architecture
does not require huge amounts of processing power and devices
as in proof-of-work. Instead of miners, nodes called "validators"
do the work of adding blocks on the network. In the
proof-of-stake architecture, a new block is added
every 10 seconds. This provides a much faster transaction
processing time than the bitcoin blockchain.
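Selecting a validator in proportion to stake can be sketched as a weighted random choice (illustrative only; real PoS protocols add randomness beacons, slashing penalties, and finality rules on top of this idea):

```python
import random

stakes = {"alice": 60, "bob": 30, "carol": 10}  # coins locked by each validator

def pick_validator(stakes, rng=random):
    """Choose the next block proposer with probability proportional to stake."""
    validators = list(stakes)
    weights = [stakes[v] for v in validators]
    return rng.choices(validators, weights=weights, k=1)[0]

# Over many draws, alice (60% of total stake) proposes about 60% of blocks.
rng = random.Random(0)
draws = [pick_validator(stakes, rng) for _ in range(10_000)]
share = draws.count("alice") / len(draws)
assert 0.55 < share < 0.65
```

The security argument mirrors PoW: instead of buying hash power, an attacker would have to acquire a majority of the staked coins.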
DPOS (Delegated Proof Of Stake)
In the Proof of Stake protocol based on cryptocurrency
ownership, a user has the right to verify transactions and
generate blocks by keeping their crypto assets in their wallet
connected to the relevant blockchain. dPoS, on the other hand,
comes with some additional features and leverages the power of
stakeholders to resolve consensus through fair voting, using a
social reputation system to drive consensus across the network.
Often described as the least decentralized of these protocols,
DPOS aims to
give cryptocurrency holders a say in the management of the
network. Unlike the Proof-of-Stake system, users delegate their
crypto assets in their wallets to another user. Cryptocurrency
asset is not transferred from the wallet but is considered as the
asset of the delegated user, increasing the delegated user's voice
in the network. The person who receives the right to delegate
from other users receives a larger share of the revenues in the
network and shares the revenue with the delegates in proportion
to their shares.
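The delegation mechanism can be sketched as vote-weight aggregation: holders delegate their balance to a candidate, and the top vote-getters become the block producers (a simplification of how real DPOS chains run their elections):

```python
balances = {"u1": 50, "u2": 30, "u3": 15, "u4": 5}        # coin holdings
delegations = {"u1": "alice", "u2": "bob", "u3": "alice", "u4": "carol"}

def elect_producers(balances, delegations, seats=2):
    """Tally delegated stake per candidate and seat the top vote-getters."""
    votes = {}
    for holder, candidate in delegations.items():
        votes[candidate] = votes.get(candidate, 0) + balances[holder]
    return sorted(votes, key=votes.get, reverse=True)[:seats]

# alice receives 50 + 15 = 65 delegated stake, bob 30, carol 5.
assert elect_producers(balances, delegations) == ["alice", "bob"]
```

Because only the elected producers create blocks, DPOS trades some decentralization for much higher throughput than one-validator-per-stakeholder schemes.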
PBFT (Practical Byzantine Fault Tolerance)
When we evaluate it through the blockchain structure, the
generals represent the nodes in the network. Nodes in the
network must reach a consensus for the transaction to occur.
Thus, proven data is transferred to the blocks. In simpler terms, a
consensus is needed by the majority of network participants,
given that erroneous or incomplete information may occur.
The algorithm is designed to work in asynchronous systems.
It is optimized to provide high performance and fast execution
time. All nodes in the pBFT model are ordered in a sequence:
one of them is the primary node (the leader), and the others are
called backup nodes. All nodes in the system interact with each
other.
purpose of all honest nodes is to agree on the state of the system
based on the majority opinion. It is important not only to prove
that messages came from a particular peer-to-peer node but also
to make sure that the message did not change during
transmission.
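The majority-agreement requirement can be sketched with the standard PBFT quorum rule: with n = 3f + 1 nodes tolerating f faulty ones, a value is accepted once at least 2f + 1 matching votes arrive (message authentication and the multi-phase protocol are omitted here):

```python
def pbft_commit(votes, f):
    """Accept the value backed by at least 2f + 1 matching votes, else None."""
    quorum = 2 * f + 1
    tally = {}
    for value in votes.values():
        tally[value] = tally.get(value, 0) + 1
    for value, count in tally.items():
        if count >= quorum:
            return value
    return None

# n = 4 nodes, f = 1: three honest nodes agree despite one faulty node.
votes = {"primary": "commit-tx9", "b1": "commit-tx9",
         "b2": "commit-tx9", "b3": "garbage"}
assert pbft_commit(votes, f=1) == "commit-tx9"

# A 2-2 split never reaches the 2f + 1 quorum, so nothing is committed.
assert pbft_commit({"p": "x", "b1": "y", "b2": "x", "b3": "y"}, f=1) is None
```

The 2f + 1 threshold guarantees that any two quorums overlap in at least one honest node, which prevents two conflicting values from both being committed.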
### 2.2. Blockchain Platforms
Two of the most suitable blockchain platforms for use in the
supply chain will be examined and compared according to their
purpose, operating logic, privacy level, programming languages,
and consensus mechanism.
**Ethereum and Hyperledger**
Ethereum is an open-source distributed public blockchain
network that uses Smart Contract technology to allow
decentralized applications to be built on top of it.
Hyperledger Fabric, an open-source project like Ethereum,
is a widely accepted platform for enterprise blockchain
platforms with its modular structure. Designed to develop
enterprise-grade applications and professional solutions, the
convenient, modular architecture uses "plug and play"
components to adapt to many use cases. The most important
point of the project is to create intersectoral cooperation by
enabling blockchain-based projects to interact with each other.
Hyperledger hosts several enterprise-grade blockchain-based
software projects. Projects are designed by the developer
community for vendors, organizations, service providers, and
academics to build and deploy blockchain networks or
commercial solutions.
Each peer in Ethereum has a role, which means that
whenever a transaction occurs, numerous nodes must participate
in order for it to be completed, which causes scalability, privacy,
and efficiency difficulties. Hyperledger, on the other hand, is a
distributed ledger technology (DLT) that does not require each
peer in the network to be informed in order to complete a
transaction.
The anonymity of users within the system is one of the most
emphasized issues in crypto money projects. However, this is
not always required. Keeping data on a public network and
making it accessible to everyone can cause issues in some
projects. Hyperledger is a permissioned blockchain that uses an
identity management module to authenticate participants.
For this reason, thanks to its private structure, Hyperledger can
store information that is specific to a certain user group.
Figure 4 shows the differences between Ethereum and
Hyperledger.
Figure 4 – Difference between Ethereum and Hyperledger
Since the blockchain to be integrated into the supply chain
will only provide information flow between the stakeholders, in
short, it will be a B2B application, Hyperledger is a more
suitable platform for this.
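The B2B traceability ledger described above can be sketched as an append-only event log keyed by product (plain Python as a conceptual model only; actual Hyperledger Fabric smart contracts are written against the Fabric contract API, typically in Go, Java, or Node.js):

```python
from datetime import datetime, timezone

class TraceabilityLedger:
    """Append-only provenance log: one event list per product ID."""

    def __init__(self):
        self._events = {}

    def record(self, product_id, stage, actor, data):
        """Append one supply-chain event; existing events are never modified."""
        event = {
            "stage": stage,   # e.g. harvest, packaging, transport, retail
            "actor": actor,   # a registered (permissioned) supply-chain member
            "data": data,
            "ts": datetime.now(timezone.utc).isoformat(),
        }
        self._events.setdefault(product_id, []).append(event)

    def provenance(self, product_id):
        """Return the full farm-to-fork history for one product."""
        return list(self._events.get(product_id, []))

ledger = TraceabilityLedger()
ledger.record("lot-42", "harvest", "farm-A", {"crop": "tomato"})
ledger.record("lot-42", "transport", "logistics-B", {"temp_c": 4.0})
assert [e["stage"] for e in ledger.provenance("lot-42")] == ["harvest", "transport"]
```

On a permissioned platform, only authenticated members may call `record`, while `provenance` gives consumers and auditors the complete back-tracking view.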
### 2.3. Blockchain-driven IoT Technology
IoT aims to provide food identification and monitoring,
collecting product-related information such as heat, humidity,
and cold-chain protection across the agricultural supply chain.
Agricultural production personnel can
analyze environmental big data by monitoring pests and diseases
and various risk factors so that targeted agricultural production
materials can be put in place; various execution equipment can
be mobilized as required to perform temperature control,
dimming and ventilation, as well as other actions to achieve
intelligent control for the growing environment of agriculture
(Lin et al., 2018).
Wireless communication technologies (such as Bluetooth
and Wi-Fi) are used in the connection layer to transmit data
between sensor nodes and relay nodes, while machine-to-machine (M2M) communication technologies are used to
transmit data between relay nodes and specified IoT platforms.
IoT development platforms are used to develop and manage
applications at the application layer, and application
programming interfaces (APIs) are used to connect external
systems and databases. It should also be integrated with ERP
for things like managing and controlling internal resources and
expenses. In terms of decentralized control, data transparency,
auditability, distributed information, decentralized consensus,
and high security, blockchain may currently bridge the gap in
IoT systems.
Figure 5- Blockchain-driven IoT Technology (Awan et al., 2021)
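The data-integrity gap blockchain fills for IoT can be sketched as a hash-chained sensor log: each reading commits to the previous entry's hash, so any later tampering with stored readings becomes detectable (illustrative; a fielded system would replicate this log across many nodes):

```python
import hashlib
import json

def append_reading(log, reading):
    """Chain each sensor reading to the hash of the previous log entry."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {"reading": reading, "prev_hash": prev_hash}
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)

def chain_is_valid(log):
    """Recompute every link; any edited reading breaks the chain."""
    prev_hash = "0" * 64
    for entry in log:
        body = {"reading": entry["reading"], "prev_hash": entry["prev_hash"]}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev_hash"] != prev_hash or entry["hash"] != digest:
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_reading(log, {"sensor": "cold-truck-7", "temp_c": 3.9})
append_reading(log, {"sensor": "cold-truck-7", "temp_c": 4.1})
assert chain_is_valid(log)
log[0]["reading"]["temp_c"] = 9.9  # tamper with stored cold-chain history
assert not chain_is_valid(log)
```

A centralized cloud database offers no such guarantee: an operator with write access can silently rewrite a temperature excursion out of existence, which is exactly the single-point-of-failure concern raised above.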
## 3. Results and Discussion
Blockchain technology addresses many problems in the FSC
through the visibility and traceability it provides. We examined
the benefits and possible consequences
of the integration of blockchain with IoT. The disadvantage of
the IoT system being centralized can be overcome by using
blockchain technology. The blockchain is a powerful technology
that is able to decentralize computation and management
processes that can solve many IoT issues, especially security
(Lin et al., 2018).
Data stored on the Hyperledger platform can be retrieved
later, a benefit of its performance and its restriction of access to
members only. It is also possible to write smart contracts and
include them in the system so that data generated automatically
by sensors triggers certain predefined conditions.
Both platforms are suitable for writing complex smart contracts;
however, Hyperledger additionally allows a custom transaction
structure to be defined.
The suggested blockchain-based paradigm has numerous
advantages and benefits, including increased trust, efficiency,
quality, durability, and stability. In terms of efficiency, it reduces
overall traceability process handling and, as a result, relevant
traceability-related operating expenses, and eliminates hidden
costs and paper burden from the FSC traceability process. The
self-fulfillment provided by the creation and inclusion of smart
contracts also serves as a cost-reduction mechanism and ensures
the authenticity and real-time synchronization of incoming
information.
## 4. Conclusions and Recommendations
The combined use of blockchain and IoT can enable the
creation of a self-governing, intelligent agriculture and supply
chain management system that connects all parties transparently
from the beginning of the FSC processes, in which information
is transmitted through the flow without being altered. This
proposition minimizes the human factor in traditional tracking
and improves the security of information.
In conventional practice, insufficient information on the
delivery and traceability of processes is inefficient and
unreliable. By using IoT, all collected data is stored and
managed in a remote database, with the addition of the
blockchain, this information is recorded in blocks and cannot be
changed, forming the basis of reliable information flow. All this
information can be used in the analysis of food process
management and predictions can be made about food life. As a
result, consumers can access information such as the way food is
grown, and the time of collection and distribution, rather than
just learning about the shelf life of the product they buy. Thanks
to this data, companies can implement different strategies in the
production and distribution process, making improvements both
operationally and in terms of cost.
The use of blockchain will provide benefits such as creating
a completely transparent and reliable system in all processes,
and self-disclosure, thanks to its features such as its distributed
and decentralized structure, being closed to outside interference,
and creation of smart contracts. Blockchain applications
currently used in agriculture and the food supply chain are
limited to supply chain management, apart from tracking food
products back to their source. IoT
technologies are currently limited to monitoring the agricultural
environment or being used in processes such as the cold chain,
and the manufacturers of the first product cannot communicate
with the buyers. In this article, we developed a complete
approach by integrating IoT and blockchain into the whole
process. This approach can provide the first producer with
information about the environmental conditions necessary to
produce products with high efficiency and quality, and with the
know-how to create suitable conditions or improve the
production process. One of the most important features of this
model is that collaborators can exchange information with each
other in real time while that information cannot be accessed
from outside, protecting information security. The
smart model will greatly boost the efficiency and reliability of
the food supply chain, which will inevitably increase food safety
and regain customer trust in the food industry (Awan et al.,
2021).
This paper presents a blockchain and IoT-based framework
for farm-to-fork traceability of the food and agricultural supply
chain. Organizations, processes, functions, and their interaction
with each other are explained. Through smart contracts, the
benefits of establishing and maintaining a standard of product
definition throughout the process, enabling processes to be
carried out without the need for parties to trust each other, and
providing an improved supply chain management are discussed.
## References
Armbrust, M., Fox, A., Griffith, R., Joseph, A. D., Katz, R., Konwinski, A., Lee, G., Patterson, D., Rabkin, A., Stoica, I., & Zaharia, M. (2010). A view of cloud computing. Communications of the ACM, 53(4), 50–58. https://doi.org/10.1145/1721654.1721672

Awan, S., Ahmed, S., Ullah, F., Nawaz, A., Khan, A., Uddin, M. I., Alharbi, A., Alosaimi, W., & Alyami, H. (2021). IoT with BlockChain: A Futuristic Approach in Agriculture and Food Supply Chain. Wireless Communications and Mobile Computing, 2021, 1–14. https://doi.org/10.1155/2021/5580179

Caro, M. P., Ali, M. S., Vecchio, M., & Giaffreda, R. (2018). Blockchain-based traceability in Agri-Food supply chain management: A practical implementation. 2018 IoT Vertical and Topical Summit on Agriculture - Tuscany (IoT Tuscany). https://doi.org/10.1109/iot-tuscany.2018.8373021

Erkmen, O. (2010). Gıda kaynaklı tehlikeler ve güvenli gıda üretimi. Çocuk Sağlığı ve Hastalıkları Dergisi, 53(3), 220–235.

Tian, F. (2016). An agri-food supply chain traceability system for China based on RFID & blockchain technology. 2016 13th International Conference on Service Systems and Service Management (ICSSSM). https://doi.org/10.1109/icsssm.2016.7538424

Ferdous, M. S., Chowdhury, M. J. M., & Hoque, M. A. (2021). A survey of consensus algorithms in public blockchain systems for crypto-currencies. Journal of Network and Computer Applications, 182, 103035. https://doi.org/10.1016/j.jnca.2021.103035

Lamport, L., Shostak, R., & Pease, M. (1982). The Byzantine Generals Problem. ACM Transactions on Programming Languages and Systems, 4(3), 382–401. https://doi.org/10.1145/357172.357176

Lin, J., Shen, Z., Zhang, A., & Chai, Y. (2018). Blockchain and IoT based Food Traceability for Smart Agriculture. Proceedings of the 3rd International Conference on Crowd Science and Engineering - ICCSE'18. https://doi.org/10.1145/3265689.3265692

Mingxiao, D., Xiaofeng, M., Zhe, Z., Xiangwei, W., & Qijun, C. (2017). A review on consensus algorithm of blockchain. 2017 IEEE International Conference on Systems, Man, and Cybernetics (SMC). https://doi.org/10.1109/smc.2017.8123011

Puthal, D., Malik, N., Mohanty, S. P., Kougianos, E., & Yang, C. (2018). The Blockchain as a Decentralized Security Framework [Future Directions]. IEEE Consumer Electronics Magazine, 7(2), 18–21. https://doi.org/10.1109/mce.2017.2776459

ReportLinker. (2022). Blockchain In Agriculture And Food Supply Chain Global Market Report 2022. https://www.reportlinker.com/p06246504/Blockchain-In-Agriculture-And-Food-Supply-Chain-Global-Market-Report.html?utm_source=GNW

World Health Organization. Estimating the burden of foodborne diseases. https://www.who.int/activities/estimating-the-burden-of-foodborne-diseases
www.geosci-model-dev.net/7/267/2014/
doi:10.5194/gmd-7-267-2014
© Author(s) 2014. CC Attribution 3.0 License.
## Geoscientific Model Development
# A distributed computing approach to improve the performance of the Parallel Ocean Program (v2.1)
**B. van Werkhoven[1], J. Maassen[2], M. Kliphuis[3], H. A. Dijkstra[3], S. E. Brunnabend[3], M. van Meersbergen[2],**
**F. J. Seinstra[2], and H. E. Bal[1]**
1VU University Amsterdam, Amsterdam, the Netherlands
2Netherlands eScience Center, Amsterdam, the Netherlands
3Institute for Marine and Atmospheric research Utrecht, Utrecht, the Netherlands
_Correspondence to: B. van Werkhoven (ben@cs.vu.nl)_
Received: 5 July 2013 – Published in Geosci. Model Dev. Discuss.: 12 September 2013
Revised: 17 December 2013 – Accepted: 2 January 2014 – Published: 7 February 2014
**Abstract. The Parallel Ocean Program (POP) is used in many strongly eddying ocean circulation simulations. Ideally it would be desirable to be able to do thousand-year-long simulations, but the current performance of POP prohibits these types of simulations. In this work, using a new distributed computing approach, two methods to improve the performance of POP are presented. The first is a block-partitioning scheme for the optimization of the load balancing of POP such that it can be run efficiently in a multi-platform setting. The second is the implementation of part of the POP model code on graphics processing units (GPUs). We show that the combination of both innovations also leads to a substantial performance increase when running POP simultaneously over multiple computational platforms.**
**1** **Introduction**
Physical oceanography is currently undergoing a paradigm
shift in the understanding of the processes controlling the
global ocean circulation. Two factors have contributed to this
shift: (i) the now about 20 yr long record of satellite data
and (ii) the possibility to simulate the ocean circulation using
models which include processes on the Rossby deformation
radius (10–50 km). Resolving this scale captures the instability processes that lead to ocean eddies which subsequently
interact and affect the large-scale ocean flow (Vallis, 2006).
The level of realism (in relation to available observations) in simulating the ocean with high-resolution, strongly eddying models substantially increases compared to the low-resolution models in which the effects of eddies are parametrized. For example, it leads to a much better simulation of the different oceanic boundary currents, in particular the separation of the Gulf Stream in the Atlantic. The accuracy with which the surface kinetic energy distribution is simulated, which can be compared with satellite data, also markedly improves (Smith et al., 2000; Maltrud et al., 2010).
The use of the strongly eddying models is, even on the
supercomputing platforms currently available, still computationally expensive, and simulations have a long turn-around
time. Typical performances are from one to a few model
years per 24 h using thousands of cores (Dennis, 2007). Considering the fact that it takes at least 1000 yr to reach a near-statistical-equilibrium state, innovations to increase the performance of these models and to efficiently analyse the data
from the simulations have a high priority.
Today many traditional cluster systems are equipped with
graphics processing units (GPUs) because of their ability
to process computationally intensive workloads at unprecedented throughput and power efficiency rates. Existing software requires modifications, such as the expression of fine-grained parallelism, before it may benefit from the added processing power that GPUs offer.
GPUs have been used to successfully accelerate numerical simulations before. For example, Michalakes and Vachharajani (2008) used GPUs to improve the performance of
the Weather Research and Forecast (WRF) model. Similarly,
Bleichrodt et al. (2012) implemented a numerical solver for
the barotropic vorticity equation for a GPU.
-----
However, it is currently not well known which specific
parts of ocean models can benefit the most from execution
on GPUs, how the existing software should be revised to efficiently use GPUs, and what impact the use of GPUs will
have on performance. In this paper, we aim to answer these
questions.
We present two innovations to improve the performance
of the Parallel Ocean Program (POP). POP is also used as
the ocean component of the widely used Community Earth
System Model (CESM). We have applied our modifications
to a standalone version of POP (v2.1). However, we have
confirmed through source code inspection that all of our
changes are also applicable to and fully compatible with
the latest release of CESM (v1.2.0). The main issue is how
to adapt POP such that it can run simultaneously (and efficiently) on multiple GPU clusters. First, we address alternative domain decomposition schemes and hierarchical
load-balancing strategies which enable multi-platform simulations such that further scaling can be achieved. Second,
we show how POP can be adapted to run on GPUs and
study the effect of GPU usage on its performance. The source
code of our modified version of POP can be obtained from
[https://github.com/NLeSC/eSalsa-POP/.](https://github.com/NLeSC/eSalsa-POP/)
**2** **Load balancing**
The model considered here is the global version of POP
(Dukowicz and Smith, 1994) developed at Los Alamos National Laboratory. We consider the strongly eddying configuration, indicated by R0.1, as used in recent high-resolution
ocean model simulations (Maltrud et al., 2010; Weijer et al.,
2012). This version has a nominal horizontal resolution of 0.1° using a 3600 × 2400 horizontal grid with a tripolar grid layout, having poles in Canada and Russia. The model has 42
non-equidistant z levels, increasing in thickness from 10 m
just below the upper boundary to 250 m just above the lower
boundary at 6000 m depth. In addition, bottom topography is
discretized using partial bottom cells, creating a more accurate and smoother representation of topographic slopes.
**2.1** **Domain decompositions and block distributions**
POP supports parallelism on distributed memory computers
through the message passing interface (MPI). To distribute
the computation over the processors, POP uses a threedimensional mesh, sketched in Fig. 1a. The domain is decomposed into equal-sized rectangular blocks in the horizontal direction. Each block also contains several layers in
the vertical direction (depth). The blocks are then distributed
over the available MPI tasks, where each task receives one
or more blocks. Blocks consisting only of land points may
be discarded from the computation. Below we will assume
that a single MPI task is assigned to a processor core (unless
stated otherwise).
Each block is surrounded by a halo region (Fig. 1b)
that contains a copy of the information of the neighbouring blocks. These halos allow the calculations on each block
to be performed relatively independently of its neighbour
blocks, thereby improving parallel performance. Nevertheless, the data in the halo regions need to be updated regularly. This requires a data exchange between the blocks,
which leads to communication between the MPI tasks, the
amount of data depending on the width of the halo, the size
of the blocks, and the block distribution over the MPI tasks.
In POP, the halo width is typically set to 2. For an example block size of 60 × 60, the number of elements that need to be exchanged per block in every halo exchange is 4 × (60 × 2) + 4 × 4 = 496. This number may need to be multiplied by the number of vertical levels, depending on the data
structure on which the halo exchange is performed. Some
data structures, like the horizontal velocity, store a value for
every grid point at every depth level. As a result, a 3-D halo
exchange is required that exchanges elements from every
depth level. Others data structures, such as surface pressure,
only consist of a single level. There, a 2-D halo exchange is
sufficient.
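The per-block exchange volume described above can be computed directly. The function below is a small illustrative sketch of that arithmetic (the function name and signature are ours, not part of POP): four edge strips plus four corner patches, multiplied by the number of vertical levels for 3-D fields.

```python
def halo_elements(block_size: int, halo_width: int = 2, levels: int = 1) -> int:
    """Grid elements exchanged per block in one halo update.

    Four edge strips of block_size * halo_width elements plus four
    halo_width x halo_width corner patches; a 3-D field exchanges this
    amount on every vertical level.
    """
    edges = 4 * block_size * halo_width
    corners = 4 * halo_width * halo_width
    return (edges + corners) * levels

# 2-D field (e.g. surface pressure) on a 60 x 60 block with halo width 2:
print(halo_elements(60))             # 496, as in the text
# 3-D field (e.g. horizontal velocity) on the 42-level R0.1 grid:
print(halo_elements(60, levels=42))  # 20832
```

This makes explicit why large blocks are favourable for communication: the halo volume grows linearly with the block edge, while the work inside a block grows quadratically.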
For neighbouring blocks that are assigned to the same MPI
task, the data exchange is implemented by an internal copy
and no MPI communication is required. Also, no data need
to be exchanged with (or between) land elements. Therefore,
the amount of data that needs to be communicated between
MPI tasks depends heavily on the way the blocks are distributed over the MPI tasks.
**2.2** **Existing block-partitioning schemes**
POP currently supports three algorithms for distributing the blocks over the available MPI tasks: Cartesian, rake (Marquet and Dekeyser, 1998), and space-filling curve (Dennis, 2007). The Cartesian algorithm starts by organizing the tasks
in a two-dimensional grid. Next, the blocks are assigned to
these tasks according to their position in the domain. If the
number of MPI tasks does not divide the number of blocks
evenly in either dimension, some tasks may receive more
blocks than others. In addition, some tasks may be left with less work (or even no work) if one or more blocks assigned to them contain only land. As shown in Dennis (2007), load imbalance between tasks can significantly degrade the performance of high-resolution ocean simulations.
The rake algorithm attempts to improve the load balance
by redistributing the blocks over the tasks. Note that this
requires that the number of blocks is significantly larger
than the number of MPI tasks. The rake algorithm starts with a Cartesian distribution and the corresponding two-dimensional MPI task grid. First, the average number of blocks per task is computed. Then, for each row in the task grid, the algorithm takes the first task in the row and determines whether the number of blocks exceeds the average. If so, the excess blocks are passed on to the next task.
-----
**Fig. 1.** (a) Sketch of the block-wise subdivision of the domain in POP. (b) The halo regions of a block; image from Smith et al. (2010).

This process is repeated for all tasks in the row, and then for all columns of the task grid. As described in Smith et al. (2010), the algorithm "can be visualized as a rake passing over each node and dragging excess work into the next available hole". In an attempt to keep neighbouring blocks close together, constraints are placed on block movements that prevent blocks from moving too far from their direct neighbours. Unfortunately, there are instances where the rake algorithm actually results in a worse load balance, where blocks get raked into a corner. As a result, Dennis (2007) states that "we do not consider the current implementation of the rake algorithm...sufficiently robust."

The space-filling-curve algorithm described in Dennis (2007) uses a combination of Hilbert, meandering Peano, and Cinco curves to partition the blocks (Fig. 2). Conceptually, it draws a single line that visits each of the blocks exactly once. It then splits this line into equal-sized segments, each segment visiting the same number of blocks. Due to the way the line is drawn, the blocks in each segment are also continuous in the two-dimensional domain. This solution degrades slightly when the land-only blocks are discarded, which introduces "cuts" in the curve. Nevertheless, the space-filling-curve algorithm significantly improves the load balance between MPI tasks. A limitation of this approach is that each of the space-filling curves can only partition domains of a specific size. For example, a domain P × P can be partitioned by a Hilbert curve if P = 2^n, or by a meandering Peano curve if P = 3^m, where n and m are integers. By using combinations of different curves, the set of supported problem sizes can be extended.

**Fig. 2.** Examples of the space-filling-curve load-balancing algorithm, with the Hilbert (left panel), meandering Peano (middle panel), and Cinco (right panel) curves; image from Dennis (2007).

**2.3** **Hierarchical block partitioning**

None of the load-balancing algorithms described in the previous section takes into account the inherent hierarchical nature of modern computing hardware. This typically consists of multiple cores per processor, multiple processors per node, multiple nodes per cluster, and even the availability of multiple clusters for a numerical simulation. The communication performance drops as we go up in the hierarchy. The cores in a processor share cache memory and can therefore communicate almost instantaneously, while communication between processors has to go through main memory, which is much slower. Communication between processors on different nodes must go through an external network, which is orders of magnitude slower, and communication between clusters in different locations is again orders of magnitude slower. Therefore, simply balancing the load for the individual processors (or cores) is not sufficient. Instead, a hierarchical load-balancing scheme must be used that takes both processor load and the communication hierarchy of the target machine into account. We suggest using a similar approach to the one used in Zoltan (Zoltan User Guide, 2013; Teresco et al., 2005). However, where Zoltan supports dynamic load balancing (where the work distribution may change during the application's lifetime), we compute a single static solution before the application is started.

**Fig. 3.** Example subdivisions of a square into 4, 6, 8, and 10 rectangular sections.

Our hierarchical load-balancing scheme, like the rake and space-filling-curve algorithms described earlier, assumes that the number of blocks is significantly larger than the number of processors. Instead of simply specifying the number of MPI tasks for which to create a partitioning, the user must now specify a sequence of partitionings. For example, a sequence 2 : 16 : 8 indicates that the blocks must first be partitioned into 2 sets (preferably of equal size), each of which is
then partitioned into 16 pieces, which are further divided into 8 pieces. The sequence of partitionings relates directly to the hierarchy that is present in the computational platform. For example, the 2 : 16 : 8 partitioning can be used for an experiment on two clusters, each containing 16 nodes of 8 cores.
Once the user has specified the desired partitioning, the algorithm proceeds by repeatedly splitting the available blocks
into N (preferably equal-sized) subsets. We try to partition
the domain in such a way that the shape of each of the subsets is as close to a square as possible. This will reduce the
amount of communication out of each subset in relation to
the amount of work inside each subset.
When splitting a domain, multiple solutions may be available which are equivalent from a load-balancing perspective.
However, the amount of communication required between
subsets may vary between these solutions due to assignment
of blocks to MPI tasks and the location of land-only blocks.
Our algorithm therefore compares these solutions and selects
the one which generates the least communication between
subsets.
To explain our algorithm in more detail, we use the simplified example domain shown in the upper left panel (a1) of Fig. 4. This example domain contains 1200 × 1000 grid elements. It is divided into blocks of 100 × 100, resulting in 12 × 10 blocks, of which 20 are land-only blocks. To divide this domain into 10 subsets, the algorithm starts by computing the required number of blocks per subset. The 100 non-land blocks must be divided into 10 subsets, resulting in 10 blocks per subset. Next, the algorithm tries to arrange the desired number of subsets in a (roughly) rectangular grid. The dimensions of this grid, consisting of N subsets, are determined as follows:
```
f := floor(sqrt(N))
c := ceiling(sqrt(N))
if (f = c)   we have found a square grid of [f x f]
if (f*c = N) we have found a rectangular grid of [f x c]
if (N < f*c) we have found a rectangular grid of [f x c] - (f*c - N)
if (N > f*c) we have found a square grid of [c x c] - (c*c - N)
```
In the first two cases of the algorithm shown above, a square or rectangular decomposition is available containing exactly N subsets. In the last two cases, the decomposition contains (f·c − N) or (c·c − N) subsets too many, respectively. To correct this, we repeatedly remove a single subset from each row until the desired number of subsets is reached. Figure 3 shows four example subdivisions, for values of N = 4, 6, 8, and 10, that correspond to each of these four cases. For our example domain we will use the rightmost subdivision in Fig. 3 for N = 10, named [3, 3, 2, 2], which represents the number of blocks in each column.
Next, we compute the required number of blocks per column using the average number of blocks per subset and
the selected subdivision. For our example, we will use the
[3, 3, 2, 2] subdivision as in Fig. 3 and the 10 blocks per
subset average, which will result in columns containing
[30, 30, 20, 20] blocks. We then split the domain into subsets by traversing the blocks in a vertical zigzag fashion and
selecting all non-land blocks until the desired number of
blocks for that column in reached. It should be noted that
the partitioning scheme is not a flood-fill type of algorithm,
which may skip over isolated points; instead, our partitioning scheme simply skips over any land points encountered
while scanning in a certain direction, and continues scanning
in a zigzag fashion until the required number of ocean (i.e.
non-land) points have been selected.
The panels (a2–a6) in Fig. 4 show how the example domain is split into the four columns. We subsequently split
each of the columns in a horizontal zigzag fashion into the
desired number of subsets for that column. Panels b1–b5 of
Fig. 4 show an example for the first column, which needs to
be split into 3 subsets of 10 blocks. A similar subdivision is
applied to the other columns. The final block distribution for
the example domain is shown in Fig. 4c.
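The zigzag selection of Step 1 can be sketched as follows. This is an illustrative reimplementation, not POP code; the mask layout and all names are ours.

```python
def zigzag_column_split(ocean, targets):
    """Assign each non-land block to a column subset by scanning the
    block grid column by column in a vertical zigzag, skipping land
    blocks, until each subset holds its target number of blocks."""
    rows, cols = len(ocean), len(ocean[0])
    assignment = {}
    subset, remaining = 0, targets[0]
    for c in range(cols):
        # alternate the scan direction to keep selections contiguous
        order = range(rows) if c % 2 == 0 else range(rows - 1, -1, -1)
        for r in order:
            if not ocean[r][c]:
                continue                    # skip land-only blocks
            assignment[(r, c)] = subset
            remaining -= 1
            if remaining == 0:
                subset += 1
                if subset == len(targets):
                    return assignment
                remaining = targets[subset]
    return assignment
```

For the 12 × 10 example, `targets` would be [30, 30, 20, 20]; applying the same routine row-wise to each column selection implements Step 2.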
As explained above, the subdivision shown in panel (c)
of Fig. 4 is only one out of a series of options. Several permutations of the [3, 3, 2, 2] subdivision can be created that
are equivalent from a load-balancing perspective but require
a different amount of communication. In addition, the subdivision can also be rotated, thereby initially dividing the
domain row-wise instead of column-wise. Finally, when selecting the blocks in a zigzag fashion (as shown in Fig. 4),
a choice can be made as to which position to start the selection from: top or bottom, or left or right. In our algorithm we
simply compute all unique permutations of the subdivision
in all possible rotations, with all possible starting points. We
then select the solution with the lowest average communication per subset. If multiple equivalent solutions exist, we
select the one with the lowest maximum communication per
subset. Table 1 shows the best scoring results for all permutations of the [3, 3, 2, 2] subdivision. All solutions use the same
number of blocks per task, but the amount of communication
varies per solution. Once a domain has been split into the desired number of subsets, the algorithm is repeated for each of
these subsets for the next split.
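The search over permutations and rotations described above amounts to scoring each candidate and taking the minimum. The sketch below is a hedged illustration (starting-point choices are omitted for brevity, and `comm_cost` is a stand-in for the block-assignment and communication count described in the text):

```python
from itertools import permutations

def best_layout(subdivision, comm_cost):
    """Enumerate unique permutations of the subdivision in both
    orientations and return the candidate with the lowest average
    communication per subset, breaking ties on the lowest maximum.
    `comm_cost(perm, rotated)` must return the per-subset
    communication volumes of one candidate layout."""
    candidates = [(perm, rotated)
                  for perm in sorted(set(permutations(subdivision)))
                  for rotated in (False, True)]

    def score(candidate):
        costs = comm_cost(*candidate)
        return (sum(costs) / len(costs), max(costs))

    return min(candidates, key=score)

# Using the average communication per task from Table 1 as a stand-in
# cost (one value per candidate; rotation adds a dummy penalty here):
table1_avg = {(3, 3, 2, 2): 2186, (2, 3, 3, 2): 2187, (2, 2, 3, 3): 2188,
              (2, 3, 2, 3): 2188, (3, 2, 2, 3): 2229, (3, 2, 3, 2): 2265}
cost = lambda perm, rotated: [table1_avg[perm] + (1 if rotated else 0)]
print(best_layout((3, 3, 2, 2), cost))  # ((3, 3, 2, 2), False)
```

With these Table 1 values, the search selects the (3, 3, 2, 2) permutation, matching the topmost (best) entry of the table.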
**2.4** **Hierarchical partitioning of tripole grids**
In the application of the hierarchical load-balancing scheme
to POP, the tripolar grid layout, where the North Pole is
-----
**Fig. 4.** Description of the hierarchical load-balancing scheme for an example of 12 × 10 blocks, of which 20 are land-only blocks, as shown in panel (a1). Step 1 splits the domain column-wise, selecting 30, 30, 20, and 20 blocks for the four columns (panels a2–a6); Step 2 splits each column selection row-wise into sets of 10 blocks (panels b1–b5, shown for the first column only); the final result is shown in panel (c).
replaced with two poles located (on land) in Canada and Russia, needs special attention. Note that tripolar grids are frequently used in ocean models because the grid spacing in the
Arctic is much more uniform and the cell aspect ratios are
closer to 1 when compared to traditional latitude–longitude
(dipole) grids (Smith et al., 2010). In this case, additional
communication is required for the blocks located on the line
between these poles, as explained in Smith et al. (2010).
These blocks are located on the upper boundary of the grid,
as shown in Fig. 5a. To support a tripolar grid layout in our
hierarchical load-balancing scheme, we add the additional
tripole communication to the communication requirements
of the subset whenever a subset contains a tripole block. The
extra communication will then be taken into account in the
search phase of the algorithm. Although this approach will
improve the partitioning, the result will not be optimal. As
shown in Fig. 5a, two communicating tripole blocks may
be located on opposite sides of the grid. This makes it difficult for our partitioning scheme to put these two blocks into
the same subset. We overcome this problem by remapping
-----
**Fig. 5. (a) A subdivision of the topography into 60×40 blocks. The two tripoles are depicted by the red dots on the upper boundary. Note that**
the leftmost and rightmost dots represent the same tripole; the tripole communication is (partially) shown by the arrows. (b) A remapping of
the grid that moves an area of 30 × 7 blocks. The original tripole boundary is shown as a red line.
**Fig. 6. An example of POP running without the MPI wrapper on a single cluster (left panel) and with the MPI wrapper on a multi-cluster**
(right panel).
**Table 1.** Permutations of the [3, 3, 2, 2] example distribution showing the number of assigned blocks and the communication per task in grid points per level. The entries are sorted by average communication per task. The topmost entry provides the best solution.

permutation    blocks per task    communication per task (min/avg/max)
(3, 3, 2, 2) 10 1440/2186/2888
(2, 3, 3, 2) 10 1244/2187/2888
(2, 2, 3, 3) 10 1240/2188/3100
(2, 3, 2, 3) 10 1240/2188/3300
(3, 2, 2, 3) 10 1240/2229/3720
(3, 2, 3, 2) 10 1440/2265/2876
the grid before we start the partitioning (Fig. 5b). By simply moving blocks from one side of the grid to the other,
we enable our partitioning algorithm to optimize the tripole
communication. Note that this remapping is only performed
on the grid used in our partitioning algorithm. No change to
POP is required, as POP only uses the result of the partitioning in which the original block numbering is maintained.
**3** **Results: load balancing**
In this section we will compare the performance of our hierarchical algorithm to the Cartesian, rake, and space-fillingcurve block-partitioning schemes. In our experiments we
carry out a 10-day simulation with the R0.1 version of POP, as
described at the beginning of Sect. 2, and show performance
measures averaged over these 10 days.
-----
**3.1** **Hardware**
[The Huygens (http://www.surfsara.nl) is an IBM pSeries](http://www.surfsara.nl)
575, a clustered SMP (symmetric multiprocessing) system.
Each node contains 16 dual-core IBM Power 6 processors
running at 4.7 GHz, resulting in 32 cores per node. As the
cores support simultaneous multi-threading (SMT), every
node appears to have 64 CPUs. Most applications will perform better by using 64 MPI tasks per node (two MPI tasks
per processor core). Per node, 128 GB of memory is available (4 GB per core). The nodes are connected using 8
×
(4 DDR) InfiniBand, resulting in a 160 Gbit s[−][1] inter-node
×
bandwidth.
[The DAS-4 (http://www.cs.vu.nl/das4)](http://www.cs.vu.nl/das4) is a six-cluster, wide-area distributed system. DAS-4 is heterogeneous in design, but in this experiment we will use dual quad-core compute nodes containing Intel E5620 CPUs running at 2.4 GHz, resulting in eight cores per node. The nodes contain 24 GB of memory (3 GB per core). Nodes are connected using QDR InfiniBand, resulting in a 20 Gbit s⁻¹ bandwidth. We use DAS-4 in a single-cluster and a two-cluster experiment. In the two-cluster experiment, the clusters are connected using an internet link with a maximum bandwidth of 1 Gbit s⁻¹. The average round-trip time between clusters is 2.6 ms. As the link is shared with other users, the available bandwidth and round-trip latency may vary over time.
**Fig. 7.** Performance comparison of POP using Cartesian (225 × 150), rake (60 × 60), space-filling curve (60 × 60), and hierarchical (60 × 60) block-partitioning schemes on three different hardware configurations, each using 256 MPI tasks.
**3.2** **Using MPI for multiple clusters**
For POP to run on multiple clusters, an MPI implementation
is required that is capable of communicating both within and
between clusters. This is far from trivial, as clusters are often
protected by a firewall that disallows any incoming communication into the cluster. Also, it is common for the compute
nodes to be configured such that they can only communicate
with the cluster frontend, but not directly with the outside
world, as explained in Maassen and Bal (2007). To solve this
problem, we created wrapper code that is capable of intercepting the MPI calls in POP. For each intercepted call, the
MPI wrapper decides whether it should be forwarded to the
local MPI implementation or whether it should be sent to
another cluster. To use the MPI wrapper code, POP needs
to be recompiled using a different MPI library; however, no
changes to the POP code itself are required.
To communicate between clusters, one or more support
processes, so-called hubs, are used. Each hub typically runs
on the cluster frontend, and serves as a gateway to the other
clusters. If necessary, multiple hubs can be connected together to circumvent communication restrictions caused by
firewalls. In Fig. 6, the left panel shows a traditional POP
run on a single machine, while the right image illustrates
how a hub is used in DAS-4 to connect two clusters together.
Only a single hub is needed, as all compute nodes in DAS-4 can communicate with all head nodes, even those of other
clusters. However, compute nodes cannot directly communicate with compute nodes in other clusters.
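The forwarding decision the wrapper makes for each intercepted call can be illustrated with a minimal sketch (the rank-to-cluster mapping, function name, and return values below are hypothetical; the real wrapper operates on intercepted MPI calls and relays inter-cluster traffic through a hub on the frontend):

```python
# Sketch of the MPI wrapper's routing decision for an intercepted
# message. The mapping and return strings are hypothetical.

RANK_TO_CLUSTER = {0: "A", 1: "A", 2: "A", 3: "B", 4: "B"}

def route_send(src_rank, dst_rank):
    """Deliver locally when both ranks share a cluster, else via hub."""
    if RANK_TO_CLUSTER[src_rank] == RANK_TO_CLUSTER[dst_rank]:
        return "local-mpi"   # fast intra-cluster network (InfiniBand)
    return "hub"             # inter-cluster link through the frontend

print(route_send(0, 2))  # local-mpi
print(route_send(0, 4))  # hub
```

Because the decision is made per call, POP itself needs no source changes, only relinking against the wrapper library.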
**3.3** **Performance**

Table 2 shows the configurations of the partitioning schemes.
For each experiment we use 256 MPI tasks. The Cartesian
distribution uses a 225 × 150 block size, resulting in exactly
one block per MPI task (no land blocks are discarded). Both
rake and the space-filling curve use a block size of 60 × 60
and discard 628 of 2400 blocks (i.e. 26 %). The table also
shows the minimum, average, and maximum communication per MPI task, as well as the amount of traffic generated between the clusters for the two-cluster experiment. We
will discuss these below. As can be seen from Table 2, the
hierarchical domain distribution significantly decreases the
amount of traffic between the clusters compared to rake and
the space-filling curve. As a result, the performance overhead
of using two clusters is limited.
The performance results of POP are shown in Fig. 7 in
model days day⁻¹. On Huygens and single-cluster DAS-4, the
rake and space-filling curve block distributions clearly improve the performance over the Cartesian distribution. On
Huygens, the performance improvement of the space-filling
curve is close to the amount of work discarded (23 % vs.
26 %). On DAS-4 the improvement is much greater (54 %
vs. 26 %) due to the better cache behaviour of smaller blocks.
The space-filling curve distribution outperforms the rake distribution in all cases, due to its better load-balancing characteristics, as shown in Table 2. Figure 7 also shows that the
performance degrades in the two-cluster DAS-4 experiments.
Interestingly, the performance reduction for Cartesian is only
10 %, while the space-filling curve (41 %) and rake (44 %)
are much more affected. This difference is explained by the increased inter-cluster communication generated by these two block distributions, as shown in Table 2.
**Table 2. Configuration of the Cartesian, rake, space-filling curve, and hierarchical distributions.**

| algorithm | block size | blocks per core (min/max) | blocks discarded | communication per task (min/avg/max) | communication between clusters (messages/volume) |
|---|---|---|---|---|---|
| Cartesian | 225 × 150 | 1/1 | 0 (of 256) | 0/1267.4/2408 | 22.3 M/99.0 GB |
| rake | 60 × 60 | 5/8 | 628 (of 2400) | 748/1940.5/3936 | 77.9 M/337.4 GB |
| space-filling curve | 60 × 60 | 6/7 | 628 (of 2400) | 1007/1707.7/2960 | 41.0 M/212.7 GB |
| hierarchical | 60 × 60 | 6/7 | 628 (of 2400) | 504/1394.9/2584 | 20.0 M/82.5 GB |
**Table 3. Speed-up on DAS-4 for one- and two-cluster configurations using a hierarchical domain distribution.**

| configuration | performance (model days day⁻¹) | speed-up |
|---|---|---|
| 1 cluster, 16 nodes | 82 | 1.0 |
| 1 cluster, 32 nodes | 155 | 1.9 |
| 2 clusters, 16 nodes each | 142 | 1.7 |
Although rake and the space-filling curve both decrease
the amount of work per MPI task, they also significantly
increase the amount of communication between tasks. On
supercomputers, where POP is traditionally run, this problem is mitigated by high-speed network interconnects, but in
a multi-cluster environment, the internet link between clusters becomes a bottleneck. In Table 2, the column “communication between clusters” clearly shows that compared to
Cartesian, rake causes an increase of 3.4 times in the communication between clusters. The increase caused by the space-filling curve is smaller, a factor of 2.1, but still significant.
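These factors can be recomputed directly from the "communication between clusters" volumes reported in Table 2; a small sketch:

```python
# Inter-cluster traffic volumes (GB) from Table 2, normalized to
# the Cartesian distribution.
volume_gb = {
    "Cartesian": 99.0,
    "rake": 337.4,
    "space-filling curve": 212.7,
    "hierarchical": 82.5,
}

for scheme, gb in volume_gb.items():
    print(f"{scheme:20s} {gb:6.1f} GB  {gb / volume_gb['Cartesian']:.1f}x")
```

The rake and space-filling curve distributions generate roughly 3.4 and 2.1 times the Cartesian inter-cluster volume, while the hierarchical scheme stays below it.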
The hierarchical scheme performs slightly better than the
space-filling curve scheme on Huygens and single-cluster
DAS-4 (Fig. 7). This is to be expected, as the communication overhead is small on these systems due to the fast local
network interconnects. On two-cluster DAS-4, however, the
hierarchical domain distribution provides a significant performance improvement over the existing algorithms. When
running on two clusters, the performance drop compared to
a single-cluster run is only 8 % for the hierarchical domain
distribution, compared to 10 % for Cartesian, 41 % for the
space-filling curve, and 44 % for rake.
Table 3 shows the speed-up on DAS-4 compared to a 16-node run on a single cluster. The speed-up on 32 nodes on
a single cluster is, with a factor of about 1.9, almost perfect.
Although the speed-up on two clusters (of 16 nodes each)
is slightly lower, about a factor of 1.7, the performance gain
compared to a single cluster is still significant. These results
clearly demonstrate that using multiple clusters can be beneficial, especially to increase the number of machines beyond
the size of a single cluster.
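Expressed as parallel efficiency (speed-up divided by the increase in node count), the same numbers give roughly 95 % within one cluster and 87 % across two. A minimal sketch using the Table 3 figures:

```python
# Parallel efficiency implied by Table 3: speed-up divided by the
# factor of additional nodes relative to the 16-node baseline.

def efficiency(perf, baseline_perf, node_factor):
    return (perf / baseline_perf) / node_factor

baseline = 82  # model days/day, 1 cluster, 16 nodes

one_cluster_32 = efficiency(155, baseline, 2)   # 32 nodes, one cluster
two_clusters_32 = efficiency(142, baseline, 2)  # 2 clusters of 16 nodes

print(f"1 cluster, 32 nodes : {one_cluster_32:.2f}")   # ~0.95
print(f"2 clusters, 16+16   : {two_clusters_32:.2f}")  # ~0.87
```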
**4** **Execution on GPUs**
This section discusses the main challenges that exist when
moving parts of the computation in POP to a GPU. We
use the CUDA programming model (Nvidia, 2013) in order to have fine-grained control over our GPU implementation and to be able to explain and improve performance
results. Many different software tools, libraries, (directive-based) parallelization tools, and compilers aim to assist in
the development of GPU code. However, it is our goal to gain
a deep understanding of the performance behaviour of POP,
which requires more control over the implementation and in
particular how data are transferred between the host memory and GPU device memory. We are currently not aware of
the capability to implement GPU kernels that overlap GPU
computation with CPU–GPU communication in any of the
existing directive-based parallelization tools for GPUs. However, if this were possible, it would require a collection of
directives similar to the collection of calls to the CUDA runtime that are currently responsible for achieving this overlapping behaviour. While directive-based parallelization tools
do leave the kernel code in the same language as the original,
understanding the underlying architecture is still required in
order to modify that parallelized code and assess its correctness. In the following sections, we use CUDA terminology
(Nvidia, 2013), although our methods could just as easily apply to OpenCL (Khronos Group, 2013).
POP consists of a large Fortran 90 codebase, and in this
paper we therefore limit ourselves to the most compute-intensive parts of the program and only offload those computations to the GPU. The main challenge with this approach
is to overcome the PCIe bus bottleneck. Whenever computations are to be performed on the GPU, the input and output data have to be transferred from host memory through
the PCIe bus to GPU device memory and vice versa. The
achieved bandwidth to GPUs connected through the PCIe
2.0 bus is approximately 5.7 GB s⁻¹ from host to device and
6.3 GB s⁻¹ from device to host. This is significantly lower
than the bandwidth between host memory and a CPU and
the bandwidth between GPU device memory and the GPU.
Therefore, it is crucial that we maximize the overlap of data
transfers to the GPU with computation and with transfers
from the GPU back to the host.
**Table 4. List of the most compute-intensive functions in POP, covering 76.48 % of the total computation time. The reported time does not include time spent in functions called by this function.**

| % time | function | module | # calls | computes |
|---|---|---|---|---|
| 15.09 | state | state_mod | 29562112 | density of water and derivatives |
| 6.69 | hdiffu_del4 | hmix_del4 | 4865280 | horizontal diffusion of momentum |
| 5.79 | advu | advection | 4865280 | advection of momentum |
| 5.33 | bldepth | vmix_kpp | 115840 | ocean boundary layer depth |
| 5.25 | hdifft_del4 | hmix_del4 | 4865280 | horizontal diffusion of tracers |
| 4.62 | chrongear | pop_solversmod | 115840 | preconditioned conjugate-gradient solver |
| 4.07 | ri_iwmix | vmix_kpp | 115840 | viscosity and diffusivity coefficients |
| 3.83 | vmix_coeffs_kpp | vmix_kpp | 115840 | vertical mixing coefficients |
| 3.66 | impvmixt_correct | vertical_mix | 115840 | implicit vertical mixing corrector step |
| 3.34 | blmix | vmix_kpp | 115840 | mixing coefficients within boundary layer |
| 3.27 | impvmixt | vertical_mix | 231680 | implicit vertical mixing of tracers |
| 3.27 | clinic | baroclinic | 4865280 | forcing terms of baroclinic momentum |
| 3.17 | advt_centered | advection | 4865280 | tracer advection using centred differencing |
| 3.12 | btropoperator | pop_solversmod | 14705152 | applies operator for the barotropic solver |
| 3.10 | baroclinic_driver | baroclinic | 115840 | integration of velocities and tracers |
| 2.88 | ddmix | vmix_kpp | 115840 | add double-diffusion diffusivities |
To overlap GPU communication and computation we need
fine-grained control over how data are transferred to the
GPU. There are several alternative techniques for moving
data between host and device using the CUDA programming
model. The most commonly used approach is to simply use
_explicit memory copy statements to transfer large blocks of_
memory to and from the GPU.
Alternatively, CUDA streams may be used to separate the
computation into distinct streams that may execute in parallel. This way, communication from one stream can be
overlapped with computation and communication in other
streams. GPUs with 2 copy engines, such as Nvidia’s Tesla
K20, can use the PCIe bus in full duplex with explicit memory copies in different streams. This way, communication
and computation from different streams can be fully overlapped.
Finally, the mapped memory approach uses no explicit
copies, but maps part of the host memory into device memory space. Whether this approach is feasible depends on
the memory access pattern of the kernel. Typically, mapped
memory can only be used efficiently if each input and output
element is read or written only once by the GPU function,
called a kernel. Although this approach results in very clean
host code, requiring no explicit copy statements, it requires
complex kernel implementations with intricate memory access patterns to ensure high performance.
**4.1** **Targets for GPU implementation**
To determine which part of POP to port to the GPU, we must
first get an impression of where the most time is spent. It
is well known that the three-dimensional baroclinic solver is
the most computationally intensive part of POP (Kerbyson
and Jones, 2005; Worley and Levesque, 2003). We therefore
limit ourselves to analysing the performance of the baroclinic
solver.
Table 4 gives an overview of the most time-consuming
functions in POP. These profiling results are obtained from
one month of simulation using the R0.1 version (see beginning of Sect. 2) on the DAS-4 cluster (described in Sect. 3.1).
For this experiment we have used a Cartesian distribution
with blocks of size 255 × 300 and 8 processes per node on
16 nodes.
Table 4 lists the percentage of the total execution time
spent in this function, not including subfunctions. All
functions in Table 4, except those from the module pop_solversmod, belong to the baroclinic solver. Our profiling results indicate that the baroclinic solver does not contain any
true computational hotspots; that is, no individual function
consumes a major part of the computation time.
However, the density computations from the equation of
state are requested by several different parts both within the
baroclinic solver and at the end of each time step. The computation of water densities is required so frequently by the
model that their computation time consumes 15.09 % of the
total execution time on average.
The functions from the vmix_kpp module in Table 4 are
part of the computation of the vertical mixing coefficients
for the KPP mixing scheme (Large et al., 1994), which in
total consumes about 35.3 % of the total execution time. We
therefore focus on obtaining a GPU implementation for the
equation of state and for the computation of vertical mixing coefficients, in particular the three functions state(), buoydiff() (the computation of buoyancy differences), and ddmix().
We focus on buoydiff() and ddmix() since they are among
the most compute-intensive functions and are responsible for
64.9 % of the calls to state().
It is well known that kernel-level optimizations focused on
increasing computation throughput are generally not worthwhile when memory bandwidth is the primary factor limiting performance (Ryoo et al., 2008). A frequently used metric
for performance analysis on multi- and many-core hardware with the Roofline model (Williams et al., 2009) is
the arithmetic intensity. For example, the Nvidia Tesla K20
GPU has a theoretical peak performance of 1173 GFLOP s⁻¹
for double precision and a theoretical peak global memory bandwidth of 208 GB s⁻¹. However, in practice the
achieved memory bandwidth is (roughly) 160 GB s⁻¹, as reported by the bandwidthTest tool in the Nvidia CUDA SDK.
A rough estimation tells us that an arithmetic intensity of
at least 7.3 FLOP byte⁻¹ is required for the kernel to become compute-bound. Thus, if the arithmetic intensity is less
than 7.3 FLOP byte⁻¹, then we know the kernel is memory-bandwidth-bound when executed on the K20.
The arithmetic intensity of the state() function is computed
as follows. Although POP supports various implementations
for the equation of state, we focus on the 25-term equation
of state (McDougall et al., 2003) because it is the most commonly used implementation. The state() function requires the
temperature and salinity tracers as inputs as well as 25 coefficients, of which 6 depend on the water pressure and the rest
are constant. The state() function outputs the density of water and optionally also outputs the derivatives of the water
density with respect to temperature and salinity. When only
the density of water is computed, state() performs 40 floating point operations per grid point with an arithmetic intensity of 2.5 FLOP byte⁻¹, assuming that all 25 coefficients can
be stored in on-chip caches and can be fully reused. When
all outputs are requested, 89 floating point operations are
executed per grid point, resulting in an arithmetic intensity
of 5.56 FLOP byte⁻¹. With an arithmetic intensity of either
2.5 or 5.56, the state() kernel is memory-bandwidth-bound.
Therefore, we focus on optimizing the time spent on communication between host and device rather than kernel-level
optimizations.
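The classification above follows directly from the Roofline model: a kernel is memory-bandwidth-bound whenever its arithmetic intensity lies below the ridge point (peak FLOP rate divided by achieved bandwidth). A sketch using the K20 numbers from the text:

```python
# Roofline-style classification of the state() kernel on a Tesla K20.
PEAK_GFLOPS = 1173.0   # theoretical double-precision peak, GFLOP/s
ACHIEVED_BW = 160.0    # measured global memory bandwidth, GB/s

ridge = PEAK_GFLOPS / ACHIEVED_BW  # FLOP/byte needed to be compute-bound

def classify(arithmetic_intensity):
    return "compute-bound" if arithmetic_intensity >= ridge else "memory-bound"

print(f"ridge point: {ridge:.1f} FLOP/byte")  # ~7.3
print(classify(2.5))    # state(), density only
print(classify(5.56))   # state(), all outputs
```

Since both variants of state() fall well below the ridge point, kernel-level optimizations would not pay off; reducing and overlapping PCIe traffic is what matters.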
**4.2** **Efficient integration of GPU code**
**Fig. 8.** Schematic of the three different implementations, _Explicit_ (uses explicit copy statements), _Implicit_ (uses device-mapped host memory), and _Streams_ (uses CUDA streams and explicit copy statements), showing the potential overlap between GPU computation and CPU–GPU communication.
**Fig. 9.** Performance results for the three POP functions with three different implementations, as obtained on the Tesla K20 GPU with a 229 × 304 block size.
We now describe how POP should be revised to efficiently
use GPUs. For our discussion, we focus on three functions
in POP: state(), buoydiff(), and ddmix(). Due to a lack of
GPU performance models that consider asynchronous PCIe
transfers, it is currently impossible to predict what kind of
implementation will be the most efficient. For each function
we have therefore implemented three different versions that
we call Explicit, Implicit, and Streams. We first describe the
three versions in general and then discuss the specific implementations for state(), buoydiff(), and ddmix() in detail.
Figure 8 provides a schematic overview of the three different
implementations with regard to the way GPU computation
(shown in green) and CPU–GPU communication (shown in
blue) could be overlapped.
_Explicit_ is a bulk-synchronous implementation that uses
explicit memory copy statements to copy all the required
input data to the GPU and from the GPU for the entire three-dimensional grid. The kernel used in _Explicit_ creates a two-dimensional array of threads, i.e. one thread for each horizontal grid point, which iterates over the grid points in the vertical
dimension. _Implicit_ uses mapped memory and therefore requires no explicit memory copy statements. Instead, data are
requested by the GPU directly from the host memory and
sent over the PCIe bus. The performance of accessing the
memory in this way is very sensitive to the order in which
data are requested, and care must be taken not to create gaps
or misalignments in the mapping between threads and
data. Therefore, _Implicit_ uses a kernel implementation that
creates a one-dimensional array of threads with size equal
to the number of grid points in the three-dimensional grid.
Each thread then computes its three-dimensional index from
its one-dimensional thread ID to direct itself to the correct
part of the computation. The _Streams_ implementation creates
one stream for each vertical level and uses explicit copy statements to copy the corresponding vertical level of the input
and output variables to and from the GPU. If the computation of one vertical level requires input from multiple vertical levels, CUDA events are used to delay the computation
until all inputs have been moved to the device and vice versa.
The kernel used in Streams is similar to the kernel used in
_Explicit except for the fact that the kernel only computes the_
grid points of one vertical level.
The three different implementations are very different in
terms of code and the effort to create them. All three implementations use very distinctive host codes as well as modified GPU kernels. For example, the Implicit implementation barely requires any host code, whereas the Streams implementation requires multiple loops of memory copy operations and kernel invocations with advancing offsets. Note
that, except for the differences described here, the kernels do
not contain any architecture-specific optimizations.
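The index recovery used by the _Implicit_ kernel, in which each thread derives its three-dimensional grid position from a one-dimensional thread ID, can be sketched language-neutrally (the function and dimension names are hypothetical; the real kernel is CUDA code):

```python
# How a flat thread ID maps to a 3-D grid index (i, j, k),
# with i varying fastest: tid = i + nx * (j + ny * k).

def flatten(i, j, k, nx, ny):
    return i + nx * (j + ny * k)

def unflatten(tid, nx, ny):
    i = tid % nx
    j = (tid // nx) % ny
    k = tid // (nx * ny)
    return i, j, k

# Round-trip over a tiny 4 x 3 x 2 grid.
nx, ny, nz = 4, 3, 2
assert all(
    unflatten(flatten(i, j, k, nx, ny), nx, ny) == (i, j, k)
    for k in range(nz) for j in range(ny) for i in range(nx)
)
print(unflatten(11, nx, ny))  # (3, 2, 0)
```

Keeping i fastest preserves coalesced, gap-free accesses to the mapped host memory, which is exactly the sensitivity to access order noted above.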
While the state() function computes the density of water
at a certain vertical level k, the function is mostly called inside a loop over all vertical levels. These
code blocks can safely be replaced by a call to a single function that directly computes the water densities for all vertical levels. Our Explicit implementation uses explicit copies
to move the three-dimensional grid of tracer values between
host and device and creates one thread for each horizontal
grid point, which computes all outputs in the vertical direction. However, this approach is unable to overlap communication to and from the device with GPU computation. It is
possible to also parallelize the computation of different vertical levels using CUDA streams. Our Streams implementation ensures that GPU computation can be overlapped with
GPU communication of different vertical levels and thus alleviates the PCIe bus bottleneck to a large extent. Because
of the simple access pattern in state(), where each input and
output element is read or written only once, it is also a good
candidate for the highly parallel Implicit implementation.
More complex uses of the equation of state are found
within the computation of the vertical mixing coefficients for
the KPP mixing scheme (Large et al., 1994), in particular
in the computation of buoyancy differences (buoydiff) and
double-diffusion diffusivities (ddmix). In POP the vertical
mixing coefficients are sequentially computed for all vertical levels. The computation of buoyancy differences at level
_k requires the density of both the surface level and level k_ −1
displaced to level k, as well as the water density at level k.
These values can be computed for each level in parallel as
long as all the data are present on the GPU. Overlapping data
movement from the host to the GPU with GPU computation
and data movement from the GPU to host becomes significantly more difficult, because the tracers for levels 1, k − 1,
and k need to be present on the GPU to compute the buoyancy differences at level k. The Streams implementation first
schedules memory copies to the GPU for all vertical levels
in concurrent streams and then invokes GPU kernel launches
for all levels. However, before the execution of the kernel
in stream k can start, the memory copies in stream 1, k − 1,
and k need to be complete. The kernel executing in stream
_k outputs to different vertical levels for different variables._
Therefore, some of the memory copies from device to host
in stream k have to wait for the kernel in stream k − 1 to
complete. We use the CUDA event management functions
to guarantee that no computations or memory transfers start
prematurely.
In the ddmix function, the computation of diffusivities at
level k requires the derivatives of density with respect to temperature and salinity at level k and k − 1; that is, the computation of level k reuses the derivatives that were used to
compute level k − 1. At a first glance, it would seem that
the computation of all vertical levels cannot be parallelized.
The sequential approach prevents these values having to be
recomputed, but inhibits the ability to overlap communication and computation of different vertical levels. Therefore,
our implementation also parallelizes the computation in the
vertical dimension by introducing double work. The cost of
computing the derivatives twice is significantly less than the
inability to overlap computation and communication. Similarly to the buoyancy differences computation, the kernel executing in stream k requires the memory copies of stream k
and k − 1 to be complete. Again, CUDA event management
functions are used to guarantee that no data are copied from
the GPU back to the host before GPU computations have finished.
**5** **Performance of POP on GPUs**
In this section, we will describe the performance of the R0.1
version of POP on a single cluster and on multiple GPU
clusters. In the first subsection below, we focus on the performance impact on individual POP subroutines when using
a GPU. In the second subsection, we address the performance
of the whole POP code on a single GPU and on multiple GPU
clusters.
**5.1** **Performance impact of GPU usage:**
**individual routines**
First we evaluate the performance of single functions that
were taken out of POP for individual benchmarking. We test
our three implementations (Explicit, Implicit, and Streams)
for each discussed function of POP on a single node equipped
with a Nvidia Tesla K20 GPU in the DAS-4 cluster. The
Tesla K20 has 2496 CUDA cores running at 705 MHz, providing a theoretical peak double-precision performance of
1173 GFLOP s⁻¹. The K20 has 5 GB of device memory and
a theoretical peak memory bandwidth of 208 GB s⁻¹. The
K20 is connected through a PCIe 2.0 bus and has two copy
engines which enable full duplex use of the PCIe bus for concurrent explicit memory transfers. The grid dimensions used
for the experiments discussed here are 229 × 304 × 42. This
is the same block size as used to obtain our profiling results,
with two ghost cells in both horizontal dimensions. The performance results presented here are averaged execution times
of five distinct runs. The execution times of these individual
routines on the tested GPUs show minimal variance.
For all three implementations, most of the execution time
is spent on transferring the data to and from the GPU. For example, for the Streams implementation of state() only 10.3 %
of the execution time is spent on GPU computation, and only
19.4 and 13.3 % for buoydiff() and ddmix(), respectively.
Note that the reported times for buoydiff() and ddmix() include the time spent within state() when called as a subfunction. In fact, calls to state() from the GPU kernels of buoydiff() and ddmix() are inlined to optimize the data access pattern of these kernels.
Figure 9 shows the performance results for all three functions with three different GPU implementations. For the
state() function the Implicit implementation provides the best
performance. Although the kernel implementation used by
_Implicit_ is slightly less efficient than the kernel used by _Explicit_, the total execution time is significantly less because
a large part of the memory transfers between host and device and computation is overlapped. While Streams achieves
overlapping behaviour similar to _Implicit_, it is more coarse-grained, with one vertical level at a time rather than individual grid points. That explains why _Implicit_ outperforms the
_Streams_ implementation for the state() function.
The buoydiff() function has a very low arithmetic intensity and therefore the computation again accounts for only
a small part of the total execution time. The Implicit implementation is slower than Explicit because the access pattern
in buoydiff() requires several input elements multiple times.
As a result, the Implicit approach transfers more data than
necessary over the PCIe bus. Although these transfers can be
overlapped with computation and with transfers in the opposite direction, the performance penalty for transferring data
multiple times reduces the overall performance. The Streams
approach again benefits from the fact that data transfers and
computation can be overlapped, but without the restrictions
that come with the Implicit approach. The data access pattern
in buoydiff() requires that operations in some streams may
have to wait for operations in another stream to complete before they can start. The overhead of these synchronizations
accounts for on average 3.26 % of the total execution time of
the Streams implementation.
To parallelize the computation of ddmix() in the vertical dimension, the Implicit and Streams implementations
do some double work; that is, some values are computed
twice by different threads operating at different vertical levels, whereas a thread in the Explicit approach may reuse
that value from the computation of a previous vertical level.
Therefore, the time spent in computation for Implicit and
_Streams_ is higher than that of _Explicit_. However, due to the
overlap of computation and PCIe transfers in both directions,
both Streams and Implicit do outperform the Explicit implementation in terms of total execution time. The Implicit implementation again suffers from the fact that, although overlapped with communication and computation, data have to
be transferred multiple times through the PCIe bus.
In the GPU implementation of POP used in the next
subsection, the _Implicit_ implementation for state() and the
_Streams_ implementation for buoydiff() and ddmix() are used.
As buoydiff() is executed before ddmix() as part of the computation of vertical mixing coefficients, ddmix() reuses the
tracers that have been copied to the GPU by buoydiff(). Additionally, for all three functions, the execution on the GPU
as well as all data transfers are overlapped with the computation of other functions on the CPU. Therefore, the CPU never
has to wait for the results of GPU computations.
**5.2** **Performance of POP on multiple (GPU) clusters**
In this section, we evaluate the performance of the combination of the two approaches presented in this paper. The goal
of this evaluation is to assess whether the addition of a GPU
is at all beneficial for performance on the application level.
This is certainly not trivial, considering that large amounts
of data have to be moved back and forth between the different memories over a relatively slow PCIe link. Additionally,
only a small number of functions are executed on the GPU
and a single GPU is shared between the various CPU cores.
As such, we compare the performance of two versions of the
program: one that only uses CPUs and one that uses the available CPUs as well as the GPU.
We recognize that a truly fair comparison between the different experimental setups is very hard to achieve. We take
the achieved performance in terms of the number of model
days per day of simulation as a measure for comparison. We
have chosen not to normalize these results using additional
metrics such as hardware costs or power consumption to keep
the experimental setup as simple as possible. Hardware costs
of both CPUs and GPUs are influenced by different factors in
addition to their performance capabilities. Power consumption is an important factor in the operational costs for modern supercomputers. However, as only a small fraction of the
code currently executes on the GPU, it is clear that with the
current state of the software, the GPU will be idle for a large
fraction of the execution. Whether a complete GPU implementation of POP is more efficient than a CPU-only implementation in terms of power consumption is an interesting
issue, but it is outside the scope of this paper.
For this evaluation we use the DAS-4 cluster (described
earlier in Sect. 3.1). First, eight compute nodes each containing two quad-core Intel E5620 CPUs (eight cores per node
total) running at 2.4 GHz, 24 GB of memory, and a Nvidia
GTX480 GPU are used. In addition, we also use 8 compute
nodes each containing two six-core Intel E5-2620 CPUs (12
cores per node total) running at 2.0 GHz, 64 GB of memory,
and a Nvidia Tesla K20 GPU each. As a reference for the
**Fig. 10.** Performance of POP using eight compute nodes of the DAS-4 cluster, with and without GPUs, using hierarchical partitioning with 60 × 60 block size.

**Fig. 11.** Performance of POP using 16 compute nodes of the DAS-4 cluster, on one or two clusters, using hierarchical partitioning with 60 × 60 block size.
CPU-only version of POP we use the original POP code with
the hierarchical partitioning scheme described in Sect. 2.3.
Comparisons against other load-balancing schemes can be
derived from Fig. 7. All configurations in this section use
a block size of 60 × 60.
Figure 10 shows the performance of POP using 4, 8, and
12 MPI tasks per node, with and without GPU. Note that only
a single GPU is available in each node. Therefore, the GPU
is shared between the multiple MPI tasks on a single node.
For the eight-core DAS-4 nodes, the performance gained by
using the GPU is approximately 20 %, both when using four
or eight MPI tasks. This directly corresponds with the execution time consumed by POP code that has been ported to the
GPU. The figure also shows that the scalability of POP itself
is far from perfect: running eight MPI tasks per node provides
a speed-up of only 1.4 compared to four MPI tasks per
node, both for the CPU-only and GPU versions.
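The observation that the overall gain matches the fraction of execution time ported to the GPU is just Amdahl's law. A quick back-of-the-envelope sketch (the 20 % figure is from the text; the inferred fraction and the assumption of a near-instant GPU kernel are our own):

```python
# Amdahl's law: if a fraction f of the runtime is ported to a GPU that
# executes it with speedup s, the overall speedup is 1 / ((1-f) + f/s).

def overall_speedup(f, s):
    """Overall speedup for ported fraction f with per-kernel speedup s."""
    return 1.0 / ((1.0 - f) + f / s)

# Observed ~20% overall gain (speedup 1.2). Assuming the ported code runs
# much faster on the GPU (s -> infinity), the ported fraction is roughly
# f = 1 - 1/1.2, i.e. about a sixth of the total execution time.
f_est = 1.0 - 1.0 / 1.2
print(f"estimated ported fraction: {f_est:.2f}")
print(f"check: {overall_speedup(f_est, 1e9):.2f}")
```

With a finite per-kernel speedup the inferred fraction would be slightly larger, but the conclusion is the same: the overall gain is bounded by the ported share of the runtime.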
For the 12-core DAS-4 nodes, the performance gained by
using the GPU is approximately 15 % when using 4 MPI
tasks per node, and 13 % when using 8 or 12 MPI tasks per
node. Although this relative performance gain is lower than
for the eight-core nodes, the absolute performance gain is
much higher due to the better performance offered by the
(newer) six-core CPUs and K20 GPUs. In addition, the scalability of POP on the 12-core nodes is also much better,
achieving a speed-up of 1.9 on 8 cores and 2.6 on 12 cores
(both relative to the 4-core experiment).
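These speed-ups can be restated as parallel efficiencies relative to the four-task baseline; a quick check (the speed-up figures are taken from the text):

```python
# Parallel efficiency relative to the 4-task-per-node baseline:
# efficiency = speed-up / (tasks / 4).
reported = {8: 1.9, 12: 2.6}  # speed-ups on the 12-core DAS-4 nodes
effs = {tasks: speedup / (tasks / 4) for tasks, speedup in reported.items()}
for tasks, eff in effs.items():
    print(f"{tasks} MPI tasks: efficiency {eff:.0%}")
# 8 tasks run at 95% efficiency and 12 tasks at roughly 87%
```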
The results show that it is possible to combine the hierarchical partitioning scheme with GPU execution and still
obtain a performance increase. This is a remarkable result,
as the hierarchical partitioning scheme prefers small block
sizes, such as 60 × 60, to eliminate as many land-only blocks
as possible and distribute the load evenly among MPI tasks,
while the GPU code would prefer larger-sized blocks to increase GPU utilization. However, GPU utilization is already
increased by the fact that all MPI tasks running on a single
node share a single GPU for all their GPU computations. It is
important to understand that this would not have been possible with larger block sizes because of the limited size of the
GPU memory. As such, the two approaches presented in this
paper work in concert to improve the performance of POP.
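The interplay described above (small blocks to drop land-only work, with the remaining tiles dealt out over the MPI tasks) can be illustrated with a toy partitioner. This is a simplified sketch, not the real POP scheme; the grid, the mask, and the round-robin assignment are illustrative only:

```python
def partition(ocean_mask, block, n_tasks):
    """Split the grid into block x block tiles, drop land-only tiles,
    and deal the remaining tiles round-robin over the MPI tasks."""
    ny, nx = len(ocean_mask), len(ocean_mask[0])
    tiles = []
    for j in range(0, ny, block):
        for i in range(0, nx, block):
            rows = ocean_mask[j:j + block]
            if any(any(r[i:i + block]) for r in rows):  # keep ocean tiles
                tiles.append((j, i))
    # Round-robin assignment keeps the per-task tile counts nearly equal.
    return {t: tiles[t::n_tasks] for t in range(n_tasks)}

# Toy 240 x 240 grid with one land-only quadrant; 60 x 60 tiles as in
# the paper. True marks an ocean point, False a land point.
mask = [[not (j < 120 and i < 120) for i in range(240)] for j in range(240)]
assignment = partition(mask, 60, 8)
n_tiles = sum(len(v) for v in assignment.values())
print(n_tiles)  # 12 of the 16 tiles remain after dropping land-only ones
```

Smaller blocks discard land more precisely but shrink the work per GPU kernel launch, which is exactly the tension the shared-GPU setup resolves.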
As a final experiment, we study the performance of POP
on multiple platforms including GPUs. For this experiment,
we use eight-core DAS-4 compute nodes with an Nvidia
GTX480 GPU (described in Sects. 3.1 and 5.2).
Figure 11 compares the performance of a 16-node single-cluster
run with a 2 × 8-node two-cluster run. Results are
shown for CPU-only and CPU + GPU experiments. The results
show a performance increase of 15 % on one cluster
and 13 % on two clusters when using the GPUs. The performance loss when changing from one to two clusters is 5 %
for the CPU-only version and 6 % for the CPU + GPU version. These results clearly indicate that running POP on multiple GPU clusters is feasible and also beneficial in terms of
performance. Moreover, it allows users with access to multiple smaller GPU clusters to scale up to well beyond the size
of a single GPU cluster.
**6** **Summary, discussion, and conclusions**
High-resolution ocean and climate models are becoming
a very important tool in climate research. It is crucially important that multi-century simulations with these models can
be performed efficiently. In this paper, we presented a new
distributed computing approach to increase the performance
of the POP model.
First of all, we have shown that it is possible to optimize
the load balancing of POP such that it can run successfully
in a multi-platform setting. The hierarchical load-balancing
scheme was shown to perform much better than the existing
load-balancing schemes (Cartesian, rake, and space-filling
curve), mainly due to the reduction in communication between the MPI tasks. In the future, we plan to take advantage
of the Zoltan library in order to extend our load-balancing
scheme so as to also take performance differences between
machines into account. Secondly, it was demonstrated that it
is advantageous to port part of POP to GPUs (and get a performance increase), even though POP itself does not contain
any real hotspots and is therefore not an obvious candidate
for using GPUs.
In the experiments shown, only three functions in POP
were implemented on a GPU. Another substantial portion of
the execution time is spent computing the advection of momentum and the horizontal diffusion of momentum and tracers. Obtaining a GPU implementation for these functions is
deferred to future work. The advection of tracers also uses
the equation of state to compute the potential density referenced to the surface layer, which is used to compute a variety
of time-averaged fields. Currently, most of the execution time
is spent on PCIe transfers. When more computation is moved
to the GPU, more data can be reused, and some intermediate
data structures that result from computation may even never
have to leave the GPU. In that case, some PCIe transfers can
be eliminated completely. In future work we hope to produce
a complete GPU implementation of the vertical mixing part
of POP.
The software presented in this paper has the same portability properties as the original POP. The GPU code is written in
CUDA, which is a widely used language for GPU computing
applications. To increase portability across different GPU architectures, no architecture-specific optimizations have been
included. OpenCL is a well-known alternative to CUDA that
aims at a wider set of many-core compute devices and different compilers are available for different platforms. However,
there are no real linguistic differences between CUDA and
OpenCL, and porting the code will be a simple engineering
effort; furthermore, automated source-to-source translation
tools are also available. The use of both extensions (domain
decomposition or GPU functions) can be enabled, disabled,
and controlled individually through the well-known pop_in
namelist file.
Finally, we have shown that the combination of these
two approaches also improves performance. Although we
demonstrated this only for the DAS-4 cluster, it opens up
the possibility to submit a POP job in the near future over
multiple supercomputing platforms (with or without GPUs).
The new hierarchical load-balancing scheme and the MPI
wrapper methodology are crucial elements for maintaining
the performance of POP. Future work is to port more of POP
to GPUs and to scale up the multi-cluster experiments to production size hardware.
_Acknowledgements. This publication is part of the eSALSA_
project (An eScience Approach to determine future Local Sea-level
chAnges) of the Netherlands eScience Center (NLeSC), Institute
for Marine and Atmospheric research Utrecht (IMAU) at Utrecht
University, and VU University Amsterdam. This publication was
supported by the Dutch national program COMMIT. Part of the
computations were done on the Huygens IBM Power6 at SURFsara
in Amsterdam (www.surfsara.nl). Use of these computing facilities
was sponsored by the Netherlands Organisation for Scientific
Research (NWO) under the project SH244-13. Support from NWO
to cover the costs of this open access publication is also gratefully
acknowledged.
Edited by: R. Redler
**References**
Bleichrodt, F., Bisseling, R., and Dijkstra, H. A.: Accelerating a
barotropic ocean model using a GPU, Ocean Model., 41, 16–21,
[doi:10.1016/j.ocemod.2011.10.001, 2012.](http://dx.doi.org/10.1016/j.ocemod.2011.10.001)
Dennis, J. M.: Inverse space-filling curve partitioning of a
global ocean model, IPDPS 2007, IEEE International, 1, 1–10,
[doi:10.1109/IPDPS.2007.370215, 2007.](http://dx.doi.org/10.1109/IPDPS.2007.370215)
Dukowicz, J. K. and Smith, R. D.: Implicit free-surface method
for the Bryan-Cox-Semtner ocean model, J. Geophys. Res., 99,
[7991–8014, doi:10.1029/93JC03455, 1994.](http://dx.doi.org/10.1029/93JC03455)
Kerbyson, D. J. and Jones, P. W.: A performance model of the
parallel ocean program, Int. J. High Perform. C., 19, 261–276,
[doi:10.1177/1094342005056114, 2005.](http://dx.doi.org/10.1177/1094342005056114)
[Khronos Group: OpenCL, available at: http://www.khronos.org/](http://www.khronos.org/opencl/)
[opencl/ (last access: August 2013), 2013.](http://www.khronos.org/opencl/)
Large, W. G., McWilliams, J. C., and Doney, S. C.: Oceanic
vertical mixing: a review and a model with a nonlocal
boundary layer parameterization, Rev. Geophys., 32, 363–403,
[doi:10.1029/94RG01872, 1994.](http://dx.doi.org/10.1029/94RG01872)
Maassen, J. and Bal, H. E.: Smartsockets: solving the connectivity problems in grid computing, in: Proceedings of the
16th IEEE International Symposium on High-Performance
Distributed Computing (HPDC), Monterey, CA, USA, 1–10,
[doi:10.1145/1272366.1272368, 2007.](http://dx.doi.org/10.1145/1272366.1272368)
Maltrud, M., Bryan, F., and Peacock, S.: Boundary impulse response functions in a century-long eddying global ocean simulation, Environ. Fluid Mech., 10, 275–295, [doi:10.1007/s10652-009-9154-3](http://dx.doi.org/10.1007/s10652-009-9154-3), 2010.
Marquet, C. P. and Dekeyser, J. L.: Data-parallel load balancing
[strategies, Parallel Comput., 24, 1665–1684, doi:10.1016/S0167-](http://dx.doi.org/10.1016/S0167-8191(98)00049-0)
[8191(98)00049-0, 1998.](http://dx.doi.org/10.1016/S0167-8191(98)00049-0)
McDougall, T. J., Jackett, D. R., Wright, D. G., and Feistel, R.: Accurate and computationally efficient algorithms for potential temperature and density of seawater, J. Atmos. Ocean. Tech., 20, 730–741, [doi:10.1175/1520-0426(2003)20<730:AAACEAF>2.0.CO;2](http://dx.doi.org/10.1175/1520-0426(2003)20%3C730:AAACEAF%3E2.0.CO;B2), 2003.
Michalakes, J and Vachharajani, M: GPU acceleration of numerical
weather prediction, in: Proceedings of the International Symposium on Parallel and Distributed Processing (IPDPS), IEEE, 1–7,
2008.
[Nvidia: CUDA Programming Guide, available at: http://docs.](http://docs.nvidia.com/cuda/)
[nvidia.com/cuda/ (last access: August 2013), 2013.](http://docs.nvidia.com/cuda/)
Ryoo, S., Rodrigues, C. I., Stone, S. S., Baghsorkhi, S. S., Ueng, S.-Z., Stratton, J. A., and Hwu, W.-M. W.: Program optimization space pruning for a multithreaded GPU, in: Proceedings of the 6th Annual IEEE/ACM International Symposium on Code Generation and Optimization, ACM, [doi:10.1145/1356058.1356084](http://dx.doi.org/10.1145/1356058.1356084), 195–204, 2008.
Smith, R. D., Maltrud, M. E., Bryan, F. O., and Hecht, M. W.: Numerical simulation of the North Atlantic Ocean at 1/10°, J. Phys. Oceanogr., 30, 1532–1561, 2000.
Smith, R., Jones, P., Briegleb, B., Bryan, F., Danabasoglu, G.,
Dennis, J., Dukowicz, J., Eden, C., Fox-Kemper, B., Gent, P.,
Hecht, M., Jayne, S., Jochum, M., Large, W., Lindsay, K., Maltrud, M., Norton, M., Peacock, S., Vertenstein, M., and Yeager, S.: The Parallel Ocean Program (POP) Reference Manual:
Ocean Component of the Community Climate System Model
(CCSM), 2010.
Teresco, J. D., Faik, J., and Flaherty, J. E.: Resource-aware scientific
computation on a heterogeneous cluster, Comput. Sci. Eng., 7,
[40–50, doi:10.1109/MCSE.2005.38, 2005.](http://dx.doi.org/10.1109/MCSE.2005.38)
Vallis, G. K.: Atmospheric and Oceanic Fluid Dynamics: Fundamentals and Large-Scale Circulation, Cambridge University
Press, Cambridge, UK, 2006.
Weijer, W., Maltrud, M. E., Hecht, M. W., Dijkstra, H. A., and
Kliphuis, M. A.: Response of the Atlantic Ocean circulation to
Greenland Ice Sheet melting in a strongly-eddying ocean model,
[Geophys. Res. Lett., 39, L09606, doi:10.1029/2012GL051611,](http://dx.doi.org/10.1029/2012GL051611)
2012.
Williams, S., Waterman, A., and Patterson, D.: Roofline: an insightful visual performance model for multicore architectures, Commun. ACM, 52, 65–76, [doi:10.1145/1498765.1498785](http://dx.doi.org/10.1145/1498765.1498785), 2009.
Worley, P. and Levesque, J.: The performance evolution of the parallel ocean program on the Cray X1, in: Proceedings of the 46th
Cray User Group Conference, 17–21, 2003.
[Zoltan User Guide: Hierarchical Partitioning, available at: http:](http://www.cs.sandia.gov/Zoltan/ug_html/ug_alg_hier.html)
[//www.cs.sandia.gov/Zoltan/ug_html/ug_alg_hier.html (last ac-](http://www.cs.sandia.gov/Zoltan/ug_html/ug_alg_hier.html)
cess: December 2013), 2013.
|
{
"disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.5194/GMD-7-267-2014?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.5194/GMD-7-267-2014, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "CCBY",
"status": "GOLD",
"url": "https://gmd.copernicus.org/articles/7/267/2014/gmd-7-267-2014.pdf"
}
| 2013
|
[] | true
| 2013-09-12T00:00:00
|
[
{
"paperId": "5aa322ea3005a6375eb59ab5cf8c985ca7124508",
"title": "Response of the Atlantic Ocean circulation to Greenland Ice Sheet melting in a strongly‐eddying ocean model"
},
{
"paperId": "3da5e2cc49ead1cac639ef48635f696fdfab9ca0",
"title": "Boundary impulse response functions in a century-long eddying global ocean simulation"
},
{
"paperId": "e116175719926103d41c62bb0069e85ecbe6fd8a",
"title": "GPU acceleration of numerical weather prediction"
},
{
"paperId": "41bff2e236e73c5c9b21f8660856f58bba46aaf5",
"title": "Program optimization space pruning for a multithreaded gpu"
},
{
"paperId": "abf9f1c14d9f1f41a7fa7ce7571d399d009fb503",
"title": "Smartsockets: solving the connectivity problems in grid computing"
},
{
"paperId": "fab23065b5e0bbf13e3c01261eef18600208b985",
"title": "Inverse Space-Filling Curve Partitioning of a Global Ocean Model"
},
{
"paperId": "037942ae049e835f432229d05abe0c9430125ec8",
"title": "A Performance Model of the Parallel Ocean Program"
},
{
"paperId": "0218c4e79ee229398db899f72f0a2847b10f8056",
"title": "Resource-aware scientific computation on a heterogeneous cluster"
},
{
"paperId": "90b356a9b01f7707350693aee63f17eff9f2a0bc",
"title": "Accurate and Computationally Efficient Algorithms for Potential Temperature and Density of Seawater"
},
{
"paperId": "36449a6cfe9f03a7093c3194af423223aa8213fe",
"title": "Numerical simulation of the North Atlantic Ocean at 1/10 degrees"
},
{
"paperId": "f44e6328d20b5693e6a37ffff142ce6cef2ba1b8",
"title": "Data-Parallel Load Balancing Strategies"
},
{
"paperId": "af3cb0d768e912bd661806df0a206fb46202e261",
"title": "Oceanic vertical mixing: a review and a model with a nonlocal boundary layer parameterization"
},
{
"paperId": "95e0b53b26aa66ae412cb8dd044d75ca3f1da375",
"title": "Implicit free‐surface method for the Bryan‐Cox‐Semtner ocean model"
},
{
"paperId": null,
"title": "Geosci. Model Dev"
},
{
"paperId": null,
"title": "Khronos Group: OpenCL, available at: http://www.khronos.org/ opencl/ (last access"
},
{
"paperId": null,
"title": "Nvidia: CUDA Programming Guide, available at: http://docs. nvidia.com/cuda/ (last access"
},
{
"paperId": null,
"title": "Hierarchical Partitioning, available at: http: //www.cs.sandia.gov/Zoltan/ug_html/ug_alg_hier.html (last access"
},
{
"paperId": "5c9fc316320392e4d081e3a1c3b4f9131325f060",
"title": "Accelerating a barotropic ocean model using a GPU"
},
{
"paperId": "0b6b529386e5a97a9a2860c994b54d29e7dee6da",
"title": "The Parallel Ocean Program (POP) reference manual: Ocean component of the Community Climate System Model (CCSM)"
},
{
"paperId": null,
"title": "ketch of the blockwise subdivision of the domain in POP. (b) The hal regions of a block; image"
},
{
"paperId": "f8b1b43f284f1246ca015cc002ac949bb67c5645",
"title": "Roofline: An Insightful Visual Performance Model for Floating-Point Programs and Multicore Architectures"
},
{
"paperId": "7fefb84fb89fca1c3c9aaa92f9f579a6050f873a",
"title": "The Performance Evolution of the Parallel Ocean Program on the Cray X1 (paper)"
},
{
"paperId": null,
"title": "Oceanogr"
}
] | 18,249
|
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Business",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/ffbebdc5a84c7b0a9ab7990346ce995134972210
|
[
"Computer Science"
] | 0.89604
|
A Systematic Literature Review on Smart Contracts Security
|
ffbebdc5a84c7b0a9ab7990346ce995134972210
|
arXiv.org
|
[
{
"authorId": "2196170134",
"name": "Harry Virani"
},
{
"authorId": "2196140277",
"name": "Manthan Kyada"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"ArXiv"
],
"alternate_urls": null,
"id": "1901e811-ee72-4b20-8f7e-de08cd395a10",
"issn": "2331-8422",
"name": "arXiv.org",
"type": null,
"url": "https://arxiv.org"
}
|
Smart contracts are blockchain-based algorithms that execute when specific criteria are satisfied. They are often used to automate the implementation of an agreement so that all parties may be confident of the conclusion right away, without the need for an intermediary or additional delay. They can also automate a process so that the following action is executed when circumstances are satisfied. This study seeks to pinpoint the most significant weaknesses in smart contracts from the viewpoints of their internal workings and software security flaws. These are then addressed using various techniques and tools used across the industry. Additionally, we looked into the limitations of the tools or analytical techniques about the found security flaws in the smart contracts.
|
# A Systematic Literature Review on Smart Contracts Security
### Harry Virani
_Department of Engineering_
_University of Guelph_
Guelph, Canada
hvirani@uoguelph.ca
**_Abstract—Smart contracts are blockchain-based algorithms_**
**that execute when specific criteria are satisfied. They are often**
**used to automate the implementation of an agreement so that all**
**parties may be confident of the conclusion right away, without**
**the need for an intermediary or additional delay. They can also**
**automate a process so that the following action is executed when**
**circumstances are satisfied. This study seeks to pinpoint the most**
**significant weaknesses in smart contracts from the viewpoints of**
**their internal workings and software security flaws. These are**
**then addressed using various techniques and tools used across**
**the industry. Additionally, we looked into the limitations of the**
**tools or analytical techniques about the found security flaws in**
**the smart contracts.**
**_Index Terms_—Smart Contracts, Blockchain Technology, Ethereum, Cyber Security, Cryptocurrencies, Crypto-transactions, Systematic Literature Reviews, Distributed Ledgers, Internet of Things**
### Manthan Kyada
_Department of Engineering_
_University of Guelph_
Guelph, Canada
mkyada@uoguelph.ca
1 INTRODUCTION

With the use of distributed ledger technology (DLT), individuals with little to no confidence in one another may trade any kind of digitized information peer-to-peer (P2P) using few to no middlemen [9]. In this sense, it replaces conventional middlemen or reliable third parties, at a minimum. The data transferred could represent any transaction or asset that can be converted into electronic form, such as currency transactions or storage, health records, birth, marriage, and insurance certificates, the purchase and sale of products and services, and insurance contracts.

A subclass of DLTs called blockchain uses ”blocks” of data to record data transactions over a distributed network of many nodes or computers. Party A asks for a transaction with party B, such as a money transfer, a contract, or the exchange of documents. This transaction is broadcast to a dispersed network of ”nodes,” or computers, which validate it in accordance with a set of predetermined guidelines known as a ”consensus” method. An additional ”block” will be added to the blockchain once the transaction has been verified. A pointer to the previous block in the chain is supplied, the transaction data is submitted, and the new block is timestamped when it is added to the blockchain.

Then, cryptographic technology is used to process the data: a hash is produced from the new block's data contents combined with the hash of the preceding block, and the final result becomes the new block's hash. Through this procedure, each block is connected to the one before it, forming a chain of blocks (thus the term ”blockchain”). Each node or computer in the network contributes a unique record to the blockchain and is always synchronised and updated. Blockchain thus maintains the records as a database or ledger of every transaction carried out across the network.
_1.1 Prior Research_
The study in [1] focuses on the Document of Understanding (DOU) contract, which is the foundation of the partnership between a consumer service and its supplier, and is directed at supply chain activities. Because the approval process for supply chain activities currently takes too long, there is an opportunity to use blockchain technology as a solution [2]. The authors utilised regional resources for their project, creating a localised blockchain ledger using agile methods and design thinking. As a consequence, they
created a proof-of-concept Blockchain prototype that promotes
secrecy and preserves participant private information while
having the whole history of the agreement, including immutable transactions. With this demonstration, they measured
the time required to obtain the DOU contract’s approval from
all parties involved, and it was significantly reduced. The
project’s original contribution is implementing blockchain in the company’s operations, which enhances business processes and gives staff a real-time view of all the data. As a consequence, their business operations have significantly improved since they combined their work processes with cutting-edge technology. Now that the program has had a successful test run, they can confidently implement smart contracts in regular Smart City operations. It may also be applied to other professions that deal with financial reporting and private data.
Apart from that paper, we also found a study [3] that provides an automated deep learning strategy to learn structural code embeddings of smart contracts in Solidity, which is important for contract validation, clone identification, and bug detection. The authors apply their methodology to more than 22K Solidity contracts obtained from the Ethereum blockchain, and the findings demonstrate that Solidity code has a substantially higher clone ratio (about 90%) than conventional software. As their bug database, they compile a list of 52 recognised flawed smart contracts that fall under 10 categories of widespread vulnerabilities. Using this bug database, the method can effectively and precisely identify more than 1000 clone-related problems. To make the solution easier for Solidity developers to use, they have incorporated it as a web-based application called SmartEmbed in response to developers’ comments. The tool may allow Solidity developers to quickly find recurring smart contracts on the live Ethereum blockchain and check their contracts against a known set of defects, which can increase users’ trust in a contract’s dependability. They continue to improve the SmartEmbed implementation so that it can help developers in real time in practical applications. The study has implications for both the Ethereum ecosystem and individual Solidity developers.
Moreover, the paper in [5] describes blockchain security and privacy in great depth. To facilitate the discussion, the authors first describe the concept of blockchains and their utility in the context of online transactions akin to Bitcoin. After outlining the core security attributes that serve as the essential requirements and building blocks for cryptocurrency systems like Bitcoin, they explore the additional security and privacy qualities that are sought after in many blockchain applications [4], [8]. The techniques employed in blockchain-based systems to achieve these security attributes are covered at the end, including representative consensus algorithms, hash-chained storage, mixing protocols, anonymous signatures, non-interactive zero-knowledge proofs, and others. They contend that the survey provides readers with a comprehensive understanding of privacy and blockchain security in terms of ideas, attributes, approaches, and systems [12], [15].
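The hash-chained storage mentioned above is the mechanism that makes recorded transactions tamper-evident; a minimal sketch (the field names and JSON encoding are our own choices, not those of any particular blockchain):

```python
import hashlib
import json

def block_hash(block):
    """Hash of a block's contents, including the previous block's hash."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, transactions):
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"prev_hash": prev, "transactions": transactions}
    block["hash"] = block_hash({"prev_hash": prev,
                                "transactions": transactions})
    chain.append(block)

chain = []
append_block(chain, ["A pays B 5"])
append_block(chain, ["B pays C 2"])

# Tampering with an earlier block breaks every later link in the chain:
# the stored prev_hash no longer matches the recomputed hash.
chain[0]["transactions"] = ["A pays B 500"]
valid = chain[1]["prev_hash"] == block_hash(
    {"prev_hash": chain[0]["prev_hash"],
     "transactions": chain[0]["transactions"]})
print(valid)  # False
```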
In order to identify further research possibilities, a paper [6] examined the trend of studies conducted to date and discussed blockchain technology and its fundamental underlying technologies. Before blockchain can be used in the cloud computing environment, several existing concerns must be addressed. Even today, blockchain faces numerous challenges, including the security of transactions, wallets, and software, and various studies have been conducted to address these problems. When blockchain is used in a cloud-based computing environment, user data must be kept confidential and completely deleted when the operation ends; if personal information is retained rather than deleted, it may be inferred from the data that remains accessible.
_1.2 Research Goals_
Analysis of previous research and its conclusions, as well
as a summary of research efforts into blockchain applications
for cyber security [18], [21], are the goals of this study. An
overview of the questions pursued with a little discussion can
be seen in Table I.
_1.3 Contribution and layout_
The contributions provided by this systematic literature review, which combine past research with some ongoing tasks, are as follows:
TABLE I: Research Questions

| Research Question (RQ) | Discussion |
|---|---|
| RQ1: What are the most recent studies on platforms and consensus protocols for blockchain-enabled smart contracts? | A number of studies are evaluated to determine the major protocols used in smart contracts and how smart contracts help to build blockchains. |
| RQ2: What are the main use cases for smart contracts, and what are the conditions for using them? | Different use cases are evaluated to measure the extent of blockchain-enabled smart contracts. |
| RQ3: What factors aid organizations in selecting a blockchain platform? | Scalability, ledger type, consensus mechanism, programming language, and smart contract support are evaluated for different blockchain-based smart contracts to make a selection. |
_• IEEE was the top publisher among the top 10 publishers,_
per an examination of 475 recently released publications.
_• We looked at 743 publications between 2014 and 2022._
Through citation networks created utilising the data gathered from WoS, we have determined the acceptance and
authenticity of these research papers.
_• In order to represent the research, concepts, and con-_
siderations in the disciplines of blockchain and smart
contracts, we undertake an extensive evaluation of the
information available in the group of 21 papers and offer
the data.
The format of this review paper is as follows: The techniques used to choose the primary studies for analysis in a
methodical manner are described in Section 2. The results of
all the primary research chosen are presented in Section 3. The
findings in relation to the earlier-presented study questions are
discussed in Section 4. The research is concluded in Section 5,
which also makes some recommendations for additional study.
2 RESEARCH METHODOLOGY
We performed the SLR in accordance with the guidelines outlined by Kitchenham and Charters [27] in order to answer the research questions. To enable a comprehensive assessment, we progressed through the study's planning, conducting, and reporting steps in cycles.
_2.1 Selection of primary studies_
By supplying keywords to the search function of a particular publication or search engine, primary studies were identified. The keywords were chosen to encourage the appearance of study findings that would help answer the research questions. The query terms were: (”smart contracts” OR
_”smart-contracts” OR ”blockchain” OR ”block-chain”) AND_
_”security”_
We searched on platforms such as:
1) Google Scholar
2) ACM Digital Library
3) ScienceDirect
4) IEEE Xplore Digital Library
Depending on the search platform, the searches used the title, keywords, or abstract. We conducted the searches on Nov 7, 2022, and processed all papers published up to that point. The results were filtered using the inclusion/exclusion criteria, which are presented in Section 2.2. The criteria enabled us to generate a collection of findings that could subsequently be subjected to Wohlin's [10] snowballing procedure. Snowballing iterations were performed both forward and backward until no further publications that met the inclusion criteria could be found.
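Forward and backward snowballing amount to a breadth-first traversal of the citation graph until no new papers satisfy the inclusion criteria; a schematic sketch (the citation data and the inclusion test are placeholders, not the review's actual corpus):

```python
from collections import deque

def snowball(seeds, cites, cited_by, include):
    """Iterate backward (reference lists) and forward (citing papers)
    snowballing until no new papers meet the inclusion criteria."""
    selected = {p for p in seeds if include(p)}
    frontier = deque(selected)
    while frontier:
        paper = frontier.popleft()
        # backward: papers this one cites; forward: papers citing this one
        for candidate in cites.get(paper, []) + cited_by.get(paper, []):
            if candidate not in selected and include(candidate):
                selected.add(candidate)
                frontier.append(candidate)
    return selected

# Placeholder citation graph and inclusion criterion.
cites = {"p1": ["p2"], "p2": ["p3"], "p4": ["p1"]}
cited_by = {"p1": ["p4"], "p2": ["p1"], "p3": ["p2"]}
include = lambda p: p != "p3"   # e.g. p3 fails the inclusion criteria
print(sorted(snowball({"p1"}, cites, cited_by, include)))
```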
_2.2 Inclusion and exclusion criteria_
Using a broad definition of smart contracts and security, we were able to include articles on blockchain technology, Ethereum, cyber security, cryptocurrencies, crypto-transactions, systematic literature reviews, distributed ledgers, the Internet of Things, and related topics. Article titles, keywords, and abstracts were examined to decide whether they should be included, and the articles' main texts were also carefully examined as needed. More attention was paid to articles that outlined specific parts of the smart contracts that underpin blockchain activities or technology, along with their security applications.

Papers providing factual information about the implementation of the discussed technology, peer-reviewed, and published in a journal or conference proceeding were accepted, whereas papers focusing on financial, commercial, or other off-topic matters were dismissed. Only papers in English were included. Table II summarizes these criteria.
TABLE II: Inclusion and exclusion criteria for the primary studies.

| Criteria for inclusion | Criteria for exclusion |
|---|---|
| The paper must provide actual facts about the execution and application of smart contract security. | Papers that concentrate on the financial, commercial, or legal implications of blockchain applications. |
| The paper must include data about blockchain or comparable distributed ledger systems. | Irrelevant papers, such as websites and government papers. |
| The article must be a peer-reviewed article published in a conference proceeding or journal. | Papers in languages other than English. |

_2.3 Selection results_
Figure 1 displays the general screening procedure and the order in which pertinent material was selected. A total of 742 records were discovered in the initial phase (98 from Google Scholar using the advanced search approach, 69 from ScienceDirect, and 575 from IEEE Xplore). After the removal of grey literature, extended abstracts, presentations, keynotes, book chapters, non-English papers, and inaccessible publications, 47 articles were preserved for further title reading. Following that, only 27 articles met the requirements for additional abstract reading, and after the abstracts were reviewed, only 15 articles were left to be read in full. After snowballing, 19 articles that evaluated smart contracts remained, and those articles were downloaded for the additional screening procedures.
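The screening funnel can be tallied directly from the numbers above (snowballing adds papers back, so the final count exceeds the full-text stage):

```python
# Screening funnel as reported: initial search results per platform,
# then the successive filtering stages.
per_platform = {"Google Scholar": 98, "ScienceDirect": 69, "IEEE Xplore": 575}
initial = sum(per_platform.values())
funnel = [("initial records", initial),
          ("retained for title reading", 47),
          ("retained for abstract reading", 27),
          ("retained for full-text reading", 15),
          ("after snowballing", 19)]   # snowballing adds papers back
for stage, n in funnel:
    print(f"{stage}: {n}")
```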
Fig. 1: Selection Process
_2.4 Quality assessment_
In accordance with the recommendations provided by Kitchenham and Charters [7], an evaluation of the primary studies' quality was conducted. This made it possible to evaluate the articles' relevance to the research questions while taking any evidence of selection bias and the reliability of reported measurements into account. The evaluation procedure was modelled after the one employed by Hosseini et al. To evaluate their efficacy, four articles were chosen at random and put through the following staged assessments:
Stage 1: Smart Contracts. The article should be based on
the implementation of smart contracts or a well-commented deployment to a particular issue.
Stage 2: Background. The aims and results of the study
must be adequately contextualised. This will make
it possible to evaluate the research correctly.
Stage 3: Application of Smart Contract. The report must
have sufficient information to accurately depict how
the solution has been implemented to a particular
issue, which will help to address research questions.
Stage 4: Security and Privacy context. In order to help in
responding to RQ2, the document must explain the
security issue.
Stage 5: Data acquisition. To assess accuracy, specifics on
the data’s collection, measurement, and reporting
must be provided.
Papers excluded on the basis of this checklist are listed in
Table III.
TABLE III: Excluded Studies
**Stages of the Criteria Checklist**       **Excluded Studies**
Stage 1: Smart Contracts                   [29], [32]
Stage 2: Background                        [26], [30], [33]
Stage 3: Application of smart contract     [24], [31]
Stage 4: Security and Privacy context      [25]
Stage 5: Data acquisition                  [27], [28], [34], [35]
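The five-stage checklist operates as a conjunctive filter: a study is retained only if it passes every stage. A minimal sketch of this logic (the study records and pass/fail flags below are hypothetical, not the actual assessments behind Table III):

```python
# The five quality-assessment stages, applied in order.
STAGES = [
    "smart_contracts",    # Stage 1
    "background",         # Stage 2
    "application",        # Stage 3
    "security_privacy",   # Stage 4
    "data_acquisition",   # Stage 5
]

def passes_checklist(study: dict) -> bool:
    """A study is retained only if it satisfies all five stages."""
    return all(study.get(stage, False) for stage in STAGES)

# Illustrative records (invented for this sketch).
studies = {
    "S1": dict.fromkeys(STAGES, True),
    "S2": {**dict.fromkeys(STAGES, True), "data_acquisition": False},
}

retained = [sid for sid, s in studies.items() if passes_checklist(s)]
excluded = [sid for sid, s in studies.items() if not passes_checklist(s)]
print(retained, excluded)  # S2 fails at Stage 5 and is excluded
```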
_2.5 Data extraction_
Data was then extracted from all papers that had passed the
quality evaluation in order to assess the completeness of the
data and verify the accuracy of the information contained in
the articles. The data extraction technique was first tested on a sample
of five studies before being applied to the full set of studies
that passed the quality evaluation phase. Each study's data was extracted,
categorized, and entered into an Excel sheet under the following
groups:
_• Context data: data regarding the study's objectives._
_• Qualitative data: the writers' findings and judgments._
_• Quantitative data: information gathered through experimentation and_
research and used in the study.
_2.6 Data analysis_
We gathered the information contained in the qualitative
and quantitative data categories in order to answer the study
questions. We also performed a meta-analysis on the studies
that reached the final step of data extraction.
_2.6.1 Publication over time: The term "smart contract" was_
coined by Nick Szabo in 1994, and an exponential increase
in publications can be seen from 2015 through 2022. The highest
publication volumes appear in 2017 and 2018, when
Bitcoin took a hit in the cryptocurrency market.
_2.6.2 Significant keyword counts: The most significant_
keywords used to search and conduct the literature review
are "smart contract, blockchain, network, transaction, attacks".
Other related query terms are "distributed ledgers" and "Internet
of Things".
Additionally, publications that explicitly addressed the uses of smart
contract technology were chosen during the identification process. Articles that did not have smart contract
technology as their main subject were excluded, such as
those that used the blockchain to explain Bitcoin without
mentioning smart contracts. Our collection of references includes papers from 2006 to 2021.
3 FINDINGS
Each primary research paper was read in full, and relevant
qualitative and quantitative data were extracted and summarized
in Table V. All the primary studies had a focus or theme
relating to how blockchain addresses a particular
problem. The focus of each paper is also recorded in
Table V. The categories found in the primary research show
that nearly half (47%) of the papers on blockchain and smart
contracts are concerned with IoT device security. At 18%,
transportation and systems is the second most popular
subject, and the remaining categories each contribute a small share of
the primary studies. This information is shown in Figure 2.
TABLE IV: Keyword counts in the primary studies

**Keywords**        **Counts**
smart contracts     1283
blockchain          978
security            623
transaction         455
system              447
vulnerable          318
network             311
IoT                 294
device              266
ethereum            248
attack              175
distribute          151
privacy             108
internet            89
encrypt             30
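Keyword frequencies like those in Table IV can be tallied by tokenizing each paper and accumulating counts across the corpus. A minimal sketch (the two-sentence corpus below is a toy stand-in for the actual study texts):

```python
from collections import Counter

# Tally keyword frequencies across a (toy) corpus of study texts.
corpus = [
    "smart contracts on the blockchain improve transaction security",
    "a vulnerable smart contract exposes the network to attack",
]

counts = Counter()
for doc in corpus:
    counts.update(doc.lower().split())

# Report the counts for a few of the keywords from Table IV.
for kw in ["smart", "blockchain", "security", "transaction", "attack"]:
    print(kw, counts[kw])
```

A production version would also stem words (so that "contract" and "contracts" are counted together) and strip stop words, which simple whitespace splitting does not do.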
4 DISCUSSION
Smart contract usability is impacted by a number of
variables, including data transmission rate, information
TABLE V: The key studies' main results and topics (continued as Tables VI and VII). Recovered entries include: IoT (specifically for smart homes); and [19] Blockchain (for education), in which a blockchain-based application-processing system utilizes the inherent security mechanisms of the blockchain and suggests using smart contracts to automate the many procedures needed in the validation and verification of applications.
Fig. 2: Theme of primary studies
update rate, and domain-specific needs. Clarifying the
application environment for smart contracts is crucial for their
development and planning. Preliminary keyword counts reveal that
there are a large number of studies on smart contracts. Smart
contracts and genuinely distributed, decentralised systems
have existed for only about ten years and are
clearly still in development. A significant number of
the selected primary studies are experimental recommendations or
concepts for solving today's challenges, with little quantitative
data and few actual implementations. Gateway flaws, secret-key
security issues, blockchain integration problems, the absence
of full-scale testing, a lack of rules and regulations, unproven
code, and smart contract flaws are among the most prevalent
issues. Both illegal miners and consumers can take advantage
of certain kinds of vulnerabilities, claim the authors of [11].
Several researchers have concentrated on studying the most
frequent mistakes in smart contracts and attempted to fix
them in order to make the creation of smart contracts more
secure [13], [14]. Recent publications [16] present techniques
for vulnerability detection via static code analysis. Quantstamp requires all verified
smart contracts to adhere to a set of guidelines, and the decentralised security mechanism it
built enhances the blockchain architecture.
**RQ1: What are the most recent studies on platforms and consensus protocols for blockchain-enabled smart contracts?**
**Current Research on Smart Contracts.** In January 2009,
Satoshi Nakamoto launched the Bitcoin blockchain. His work presented both
a practical demonstration of smart-contract-like scripting and the decentralized
peer-to-peer digital currency Bitcoin [17]. These two essential ideas provide the basis for the
majority of the SLR results that follow and have substantially
influenced the development of blockchain technology. Since
then, the emphasis has migrated to fields other than economics,
since the technology may help firms ensure integrity, boost efficiency, and
cut down on redundancies [19], [20]. Implementing smart
contracts may be highly difficult, particularly for non-experts
[20]. It is therefore essential to understand the speed and
scalability constraints of smart contract functionalities.
**Platforms for Smart Contracts.** Various blockchain systems allow for the development and execution of smart
contracts, differing on a number of factors and traits [21]. In
this part, we identify several crucial technical characteristics
of the five systems that received the most citations across
the 30 publications we analysed. We highlight the significant distinctions between these platforms with respect to the kind of enterprise, database, smart contract capability, transaction costs,
supported languages, consensus process, and administration.
1) Bitcoin: Bitcoin is a decentralised digital money network. It makes use of a permissionless blockchain
network to provide an open and permanent record of all
monetary transactions. To create 256-bit hashes of
documents that can be used to confirm their validity, Bitcoin
utilises the cryptographic hash algorithm SHA-256 [22].
The use of Bitcoin is severely constrained by the proof-of-work consensus process on which it depends: nodes in the Bitcoin
network produce the chain's next block by solving an algorithmic puzzle in parallel.
2) Ethereum: Created in July 2015, Ethereum is a decentralised online platform for financial transactions as well
as other uses. Unlike numerous other blockchains, Ethereum is a programmable platform that
allows payment systems to be compiled and deployed in a variety of languages [21]. In fact, Ethereum offers the
Ethereum Virtual Machine (EVM), a Turing-complete
runtime that allows the execution of numerous programming languages. The best-known ones are
Solidity and Vyper, which are mostly utilised in the creation of complicated smart contracts [21]. Following the lead set by Bitcoin, Ethereum
adopted the Proof-of-Work consensus technique to
verify its computations (it has since transitioned to Proof-of-Stake in September 2022).
3) Hyperledger Fabric: The Linux Foundation created Hyperledger Fabric,
an open-source, decentralized distributed ledger. Extensive customization of the
consensus process and programming language makes
Fabric one of the most modular and flexible systems
[24]. Hyperledger Fabric was the first blockchain platform
to support general-purpose programming languages such
as Python, Go, Java, JavaScript, and Node.js, using a
pluggable consensus framework that can be customized for specific
use cases. Fabric is also known to address scalability and performance issues.
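The SHA-256 primitive that Bitcoin (platform 1 above) relies on maps any document to a fixed 256-bit digest, so recomputing the hash confirms validity and any tampering is detectable. A short sketch (the message contents are illustrative):

```python
import hashlib

# SHA-256 maps any document to a fixed 256-bit digest.
document = b"transfer 1 BTC from Alice to Bob"
digest = hashlib.sha256(document).hexdigest()

assert len(digest) == 64  # 64 hex characters = 256 bits
# Hashing is deterministic: the same input always yields the same digest.
assert hashlib.sha256(document).hexdigest() == digest

# Any tampering, however small, produces a completely different digest.
tampered = b"transfer 9 BTC from Alice to Bob"
assert hashlib.sha256(tampered).hexdigest() != digest
print(digest)
```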
**Programming Languages for Smart Contracts**
The development of smart contracts on the blockchain is
still in its infancy. As a result, new programming languages are
being developed in accordance with the architecture of each
platform. This article highlights the most popular programming languages for
smart contracts, since it is essential to know which languages are supported by which blockchain
platform before starting any project. Due to the intricacy of their
contracts, we concentrate on four major languages, namely Solidity, Vyper, Rholang,
and Kotlin, two of which are explained here:
1) Vyper: The programming language Vyper was developed
to fend off errors and attacks [22]. It is closely related to
the Serpent language and is descended from Python. Due to
its Python-like high-level syntax, Vyper aims for more auditable
and trustworthy outcomes compared to Solidity.
2) Rholang: Rholang is a concurrent programming language with behavioural typing, formally patterned after
the rho-calculus. The RChain blockchain [22] was the first platform to use this language.
**Consensus Tools in Smart Contracts Powered by Blockchain.** Consensus protocols support the proper and
effective implementation and execution of a smart contract. In
practice, all of a network's transactions should be recorded, and
any relevant smart contracts should be executed; the nodes
of the network perform these two activities in a unified
and predictable manner. To achieve this state, the nodes must first come to consensus.
Recently, several consensus protocols were introduced.
However, Proof-of-Work (PoW) and Proof-of-Stake (PoS) are
the most popular ones.
1) Proof-of-Work (PoW): Every block of a blockchain contains information that has been firmly recorded, and cryptography is the method used to create trust. For the network's members to produce
and validate a block, miners must complete a proof-of-work by solving a mathematical
puzzle. Figure 3 depicts the PoW protocol's flow.
Fig. 3: Proof of work [25]
2) Proof-of-Stake (PoS): A network can employ the PoS
method to reach distributed consensus without the energy cost of PoW. In contrast to PoW's reward mechanism for miners,
which is grounded in solving challenging puzzles, PoS picks the participant that will
build the next block in proportion to the stake they hold. Figure 4 depicts the PoS protocol's flow.
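The two protocols can be contrasted with a minimal sketch: PoW (Fig. 3) searches for a nonce whose hash meets a difficulty target, while PoS (Fig. 4) simply draws the next proposer with probability proportional to stake. The difficulty, block data, and stake values below are toy assumptions for illustration:

```python
import hashlib
import random

# --- Proof-of-Work: find a nonce such that SHA-256(data + nonce)
# starts with `difficulty` zero hex digits (toy difficulty). ---
def mine(block_data: bytes, difficulty: int = 4) -> tuple[int, str]:
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + str(nonce).encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest  # anyone can verify with one hash
        nonce += 1

nonce, digest = mine(b"block #42: Alice->Bob 1 BTC")

# --- Proof-of-Stake: draw the next block proposer with probability
# proportional to stake; no puzzle, hence no mining energy cost. ---
stakes = {"validator_a": 50, "validator_b": 30, "validator_c": 20}

def pick_proposer(stakes: dict, rng: random.Random) -> str:
    validators = list(stakes)
    weights = [stakes[v] for v in validators]
    return rng.choices(validators, weights=weights, k=1)[0]

rng = random.Random(0)  # seeded for reproducibility
picks = [pick_proposer(stakes, rng) for _ in range(10_000)]
# validator_a holds 50% of the stake, so it should win about half the draws.
print(digest[:12], picks.count("validator_a") / len(picks))
```

The asymmetry visible here is the crux: PoW costs many hash evaluations to produce a block but one to verify it, whereas PoS replaces that expenditure with a stake-weighted lottery.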
**RQ2: What Are the Main Use Cases for Smart Contracts and What Are the Conditions for Using Them?**
We concentrated our research and analysis on the use
cases and goals of smart contracts in order to quantify and
assess the level of business value provided by blockchain-enabled smart contracts.
Fig. 4: Proof of stake [25]
Use of smart contracts powered by blockchain has spread to
a variety of industries. The three key drivers behind the adoption of this technology are data protection, trust, and accountability
[21], [23], [27], though it is also used for other purposes in
some domains.
To secure not only data confidentiality and trust but
also transparency and protection against counterfeit material, major application
domains including healthcare, voting, pharmaceuticals, and
education have implemented blockchain-based
smart contracts. The same goals
are shared by IoT and data security applications [24].
The deployment of blockchain-enabled smart contracts
in smart-city applications [27], the management of business
processes, and land registration and administration is a consequence of the requirement for trust-based transactions. Data
relevance is another desired feature that the market forecast
[27] identified in blockchain-enabled payment systems.
Other application domains require efficiency, security, and speed. Relevant examples of these sectors include industrial
production, energy supply [24], supply chain management,
and finance [24], [27], [29].
Based on the findings in Table V, we provide some pertinent examples of platforms for each
application area. Each platform was selected after
comparing the key features of the public blockchain with the
needs of the domain. Although
Ethereum continues to be the most popular platform owing to
its strong data immutability, it still suffers from major
performance and scalability issues, making it increasingly
probable that alternative platforms will take its place.
NXT [23], for instance, intends to integrate security and
provide efficiency in order to prevent end-to-end delays
in the financial sphere. WAVES, which also increases scalability and speed, can be used by application
sectors that want to achieve great results in terms of cost and
time savings.
Cross-industry platforms like R3 Corda and EOS [23]
promote confidence and transparency among a network's
many participants. Their characteristics make them suitable platforms for the supply
chain application domain.
The secrecy and security of records are a key emphasis
of the Quorum and Hyperledger Fabric platforms. They suit
applications that demand swift private transactions, which are
crucial for patients and other users of the healthcare system.
**RQ3: What Factors Aid Organizations in Selecting a Blockchain Platform?**
Practically speaking, we provide a grid of criteria that businesses may use to select the best public blockchain for their
operations. Based on the literature, we defined five key technological characteristics
and requirements that the platform should support in order
to meet an organisation's needs.
1) Scalability: The application of smart contracts faces major
challenges in terms of scalability [20]. In fact, due to
their transaction-intensive nature, several application areas, such as IoT, demand high resilience and scalability [23].
Data storage on the blockchain might lead to serious
scalability problems [sl06]. An organisation must therefore select
a public blockchain that can expand to accommodate growth.
2) Ledger type: Blockchain, a young technology, offers
three types of ledgers: consortium, private, and public. The
network scope determines which ledger category to use.
For instance, anybody can be a node on a public network.
In consortium networks, nodes are assigned and permissions are regulated. Permissions are more tightly
controlled in private networks, which leads to very little
decentralisation. Since there are several variations of
blockchain, not all systems offer completely open
networks; some are less decentralised ledgers, like R3 Corda,
which is exclusively permissioned.
3) The consensus process: Some platforms' usability is
constrained by a non-adaptable consensus protocol [20],
and an appropriate consensus procedure must provide
security and fault tolerance. It is well
recognised that PoW uses a lot of energy and has a
very low throughput of only 3-7 transactions per second.
Various protocols and approaches may
be used to reduce this method's limitations, such
as the Merkle tree [28], to address the scalability problem
for systems that only allow PoW, such as Ethereum.
Platforms based on PoS and DPoS may also be a useful
substitute.
4) Programming language: The advent of the blockchain has
led to the introduction of several programming languages [26]. The best known of them is Solidity,
a language created expressly for the blockchain and
strongly influenced by JavaScript. A company therefore needs to find out which programming languages
a blockchain platform supports. In addition to the four
languages mentioned above, functional, procedural, declarative, and object-oriented languages (such
as C++ and Python) were also identified.
5) Smart contract support: Some blockchain platforms
might not support smart contracts, which are in charge
of executing, within a blockchain network, actions written in ordinary programming languages. Quorum [19]
is an illustration of this type of platform.
This list of requirements is not exhaustive. Ease
of use, toolchain maturity, and human resources
and capabilities are just a few other variables that might
influence the selection of the best distributed platform. The
five criteria stated above, however, are the only ones on which
this article focuses.
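The five criteria lend themselves to a simple scoring grid: rate each candidate platform against each criterion and rank by total. A hypothetical sketch (the 0-2 scores below are illustrative assumptions, not measured data from this review):

```python
# The five selection criteria from RQ3.
CRITERIA = ["scalability", "ledger_type_fit", "consensus",
            "language_support", "smart_contract_support"]

# Illustrative 0-2 scores per criterion, in CRITERIA order.
platforms = {
    "Ethereum":           [1, 2, 1, 2, 2],
    "Hyperledger Fabric": [2, 1, 2, 2, 2],
    "Bitcoin":            [0, 2, 0, 0, 0],
}

def rank(platforms: dict) -> list[tuple[str, int]]:
    """Rank platforms by their total score across all criteria."""
    totals = {name: sum(scores) for name, scores in platforms.items()}
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

for name, total in rank(platforms):
    print(f"{name}: {total}/{2 * len(CRITERIA)}")
```

In practice an organisation would weight the criteria by its own priorities (e.g., a healthcare deployment weighting privacy-related criteria more heavily) rather than summing them uniformly.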
5 FUTURE RESEARCH DIRECTIONS OF SMART CONTRACT
SECURITY
Considering all the options, it is important to keep in
mind that blockchain technology remains in its infancy, so it
will take time and development before it reaches the general public.
Given that a smart contract is really a "contract" that is subject
to strict restrictions, the regulatory components of the contract
also need to be taken care of.
Some nations still rely on legal frameworks that are well over a century old. Since there
is no single body gathering information from the blockchain, it is
possible that data security regulations and the associated consequences for non-compliance may not be effectively
enforced.
Decentralization may be wonderful, but some purists may
overlook the concern of having no centralised authority to hold
responsible. There is a potential that someone with superior
technological understanding might introduce flaws in the shared
ledger directly, which could lead to the loss of information,
revenue, confidence, and ethics.
Nevertheless, people are becoming more knowledgeable
about blockchain and its potential. Smart contracts are
evolving to reach a delicate balance between conventional
ideals and contemporary technologies. Once the two eventually merge, we may anticipate
smart contracts influencing, if not controlling, every aspect of
our lives that is tied to the word "contract".
6 CONCLUSION
This research undertook a methodical content analysis that
outlined the key characteristics of blockchain consensus mechanisms and the current state of the art in their
many uses. We analysed a wide range of scientific details
and standards, including the supported programming languages and
consensus processes, in order to showcase a large number of network
platforms.
We believe that such research will assist
corporations in understanding their demands and specifications for the creation of their smart contract applications. Indeed,
not all blockchain platforms are appropriate for all networks.
We concluded that among the most important
things an organisation should understand about its execution
environment are: (i) whether the system deals with
blockchain networks; (ii) which consensus protocols are supported
by the system; (iii) which programming languages and Software
Development Kits (SDKs) the runtime environment supports; and
(iv) exactly what sort of scalability the solution would require.
This early diagnosis will enable the firm to select the best blockchain-based
platform and help prevent the serious technical problems
in terms of speed and scale that can arise once an agreement is implemented.
The small practical example in the discussion section highlighted
the value of our study of smart contract systems.
In order to enhance the criteria grid and stay consistent with the
application areas' changing needs, future studies can broaden
the original focus and investigate additional determining
criteria.
**Declarations of interest**
There is no conflict of interest.
**Acknowledgement**
None.
REFERENCES
[1] J. M. Montes, C. E. Ramirez, M. C. Gutierrez, and V. M. Larios, ‘Smart
Contracts for supply chain applicable to Smart Cities daily operations’,
in 2019 IEEE International Smart Cities Conference (ISC2), 2019, pp.
565–570.
[2] Yazdinejad, Abbas, et al. ”An energy-efficient SDN controller architecture for IoT networks with blockchain-based security.” IEEE Transactions on Services Computing 13.4 (2020): 625-638.
[3] Z. Gao, ‘When Deep Learning Meets Smart Contracts’, in Proceedings
of the 35th IEEE/ACM International Conference on Automated Software
Engineering, Virtual Event, Australia, 2020, pp. 1400–1402.
[4] Yazdinejad, Abbas, et al. ”Decentralized authentication of distributed
patients in hospital networks using blockchain.” IEEE journal of biomedical and health informatics 24.8 (2020): 2146-2156.
[5] R. Zhang, R. Xue, and L. Liu, ‘Security and Privacy on Blockchain’,
ACM Comput. Surv., vol. 52, no. 3, Jul. 2019.
[6] J. H. Park and J. H. Park, ‘Blockchain Security in Cloud Computing:
Use Cases, Challenges, and Solutions’, Symmetry, vol. 9, no. 8, 2017.
[7] B. Kitchenham and S. Charters, ‘Guidelines for performing systematic
literature reviews in software engineering’, 2007.
[8] Yazdinejad, Abbas, et al. ”Blockchain-enabled authentication handover
with efficient privacy protection in SDN-based 5G networks.” IEEE
Transactions on Network Science and Engineering 8.2 (2019): 1120–1132.
[9] E. Insurance and O. P. Authority, ‘Discussion Paper on blockchain and
smart contracts in insurance’. 2021.
[10] C. Wohlin, ‘Guidelines for snowballing in systematic literature studies
and a replication in software engineering’, in Proceedings of the 18th
international conference on evaluation and assessment in software engineering, 2014, pp. 1–10.
[11] L. Loi, C. Duc-Hiep, O. Hrishi, P., S. and A. Hobor, ”Making Smart
Contracts Smarter”, ACM SIGSAC Conferece on Computer and Communications Security (CCS ’16), pp. 254-269, 2016.
[12] Yazdinejad, Abbas, et al. ”Enabling drones in the internet of things
with decentralized blockchain-based security.” IEEE Internet of Things
Journal 8.8 (2020): 6406-6415.
[13] K. Bhargavan, A. Delignat-Lavdaoud, C. Fournet, A. Gollamudi, G.
Gonthier, N. Kobeissi, et al., ”Formal verification of smart contracts:
Short Paper”, ACM Workshop on Programming Languages and Analysis
for Security (PLAS 16), 2016.
[14] J. Pettersson and R. Edstorm, Safer smart contracts through type-driven development: using dependent and polymorphic types for safer development of smart contracts, 2016.
[15] Yazdinejad, Abbas, et al. ”Block Hunter: Federated Learning for Cyber
Threat Hunting in Blockchain-based IIoT Networks.” IEEE Transactions
on Industrial Informatics (2022).
[16] N. Atzei, M. Bartoletti and T. Cimoli, ”A survey of attacks on Ethereum
smart contracts”, IACR Cryptology ePrint Archive, pp. 99-110, 2016.
[17] Buterin, V.: A next-generation smart contract and decentralized application platform. Ethereum White Paper, 2014.
[18] Yazdinejad, A., Dehghantanha, A., Parizi, R. M., Srivastava, G., &
Karimipour, H. (2023). Secure Intelligent Fuzzy Blockchain Framework:
Effective Threat Detection in IoT Networks. Computers in Industry, 144,
103801.
[19] Rouhani, S., Deters, R.: Security, Performance, and Applications of
Smart Contracts: A Systematic Survey. IEEE Access. 7, 50759–50779
(2019).
[20] Udokwu, C., Kormiltsyn, A., Thangalimodzi, K., Norta, A.: The State
of the Art for Blockchain-Enabled Smart-Contract Applications in the
Organization. In: 2018 Ivannikov Ispras Open Conference (ISPRAS).
137–144 (2018).
[21] Rabieinejad, Elnaz, Abbas Yazdinejad, and Reza M. Parizi. ”A deep
learning model for threat hunting in ethereum blockchain.” 2021 IEEE
20th International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom). IEEE, 2021.
[22] Stefanović, M., Ristić, S., Stefanović, D., Bojkić, M., Pržulj, D.: Possible
Applications of Smart Contracts in Land Administration. In: 2018 26th
Telecommunications Forum. 420–425 (2018).
[23] Yazdinejad, Abbas, et al. ”Energy efficient decentralized authentication
in internet of underwater things using blockchain.” 2019 IEEE Globecom
Workshops (GC Wkshps). IEEE, 2019.
[24] Wang, S., Ouyang, L., Yuan, Y., Ni, X., Han, X., Wang, F.-Y.:
Blockchain-Enabled Smart Contracts: Architecture, Applications, and
Future Trends. IEEE Transactions on Systems, Man, and Cybernetics:
Systems. 49, 2266–2277 (2019).
[25] Zhang and Lee, 2019 (Zhang, S., Lee, J.-H.: Analysis of the main
consensus protocols of blockchain. ICT Express. Online first (2019).
https://doi.org/10.1016/j.icte.2019.08.001)
[26] Yazdinejad, Abbas, et al. ”P4-to-blockchain: A secure blockchainenabled packet parser for software defined networking.” Computers &
Security 88 (2020): 101629.
[27] Tinu, N. S.: Ethereum: A Blockchain Platform with smart contract
support for Distributed Application Development, International Journal
of Information and Computing Science, 6(7), 142–145,
[available at http://ijics.com/ (2019)](http://ijics.com/)
[28] Kim, S., Kwon, Y., Cho, S.: A survey of scalability solutions on
blockchain. In: 2018 International Conference on Information and
Communication Technology Convergence (ICTC), pp. 1204–1207. IEEE
(2018). https://doi.org/10.1109/ICTC.2018.8539529
[29] Kazemi, Mostafa, and Abbas Yazdinejad. ”Towards automated
benchmark support for multi-blockchain interoperability-facilitating
[platforms.” arXiv preprint arXiv:2103.03866 (2021).](http://arxiv.org/abs/2103.03866)
**Primary Studies**
REFERENCES
[1] I. Singh and S.-W. Lee, ‘Self-Adaptive Security for SLA Based Smart
Contract’, in 2021 IEEE 29th International Requirements Engineering
Conference Workshops (REW), 2021, pp. 388–393.
[2] J. Wickström, M. Westerlund, and G. Pulkkis, ‘Smart Contract based
Distributed IoT Security: A Protocol for Autonomous Device Management’, in 2021 IEEE/ACM 21st International Symposium on Cluster,
Cloud and Internet Computing (CCGrid), 2021, pp. 776–781.
[3] J. Dongfang and L. Wang, ‘Research on smart contract technology
based on block chain’, in 2022 International Conference on Artificial
Intelligence in Everything (AIE), 2022, pp. 664–668.
[4] S. Fujimoto and K. Omote, ‘Proposal of a smart contract-based security
token management system’, in 2022 IEEE International Conference on
Blockchain (Blockchain), 2022, pp. 419–426.
[5] A. Dika and M. Nowostawski, ‘Security Vulnerabilities in Ethereum
Smart Contracts’, in 2018 IEEE International Conference on Internet
of Things (iThings) and IEEE Green Computing and Communications
(GreenCom) and IEEE Cyber, Physical and Social Computing (CPSCom) and IEEE Smart Data (SmartData), 2018, pp. 955–962.
[6] S. Tang, Z. Wang, J. Dong, and Y. Ma, ‘Blockchain-Enabled Social
Security Services Using Smart Contracts’, IEEE Access, vol. 10, pp.
73857–73870, 2022.
[7] A. Qashlan, P. Nanda, and X. He, ‘Security and Privacy Implementation
in Smart Home: Attributes Based Access Control and Smart Contracts’,
in 2020 IEEE 19th International Conference on Trust, Security and
Privacy in Computing and Communications (TrustCom), 2020, pp.
951–958.
[8] M. Fang, Z. Zhang, C. Jin, and A. Zhou, ‘High-Performance Smart
Contracts Concurrent Execution for Permissioned Blockchain Using
SGX’, in 2021 IEEE 37th International Conference on Data Engineering
(ICDE), 2021, pp. 1907–1912.
[9] P. Khandelwal, R. Johari, V. Gaur, and D. Vashisth, ‘BlockChain Technology based Smart Contract Agreement on REMIX IDE’, in 2021 8th
International Conference on Signal Processing and Integrated Networks
(SPIN), 2021, pp. 938–942.
[10] S. J. Pee, E. S. Kang, J. G. Song, and J. W. Jang, ‘Blockchain
based smart energy trading platform using smart contract’, in 2019
International Conference on Artificial Intelligence in Information and
Communication (ICAIIC), 2019, pp. 322–325.
[11] M. Nazari, S. Khorsandi, and J. Babaki, ‘Security and Privacy Smart
Contract Architecture for Energy Trading based on Blockchains’, in
2021 29th Iranian Conference on Electrical Engineering (ICEE), 2021,
pp. 596–600.
[12] M. Shurman, A. A.-R. Obeidat, and S. A.-D. Al-Shurman, ‘Blockchain
and Smart Contract for IoT’, in 2020 11th International Conference on
Information and Communication Systems (ICICS), 2020, pp. 361–366.
[13] M. Abubakar, Z. Jaroucheh, A. Al Dubai, and X. Liu, ‘A Lightweight
and User-centric Two-factor Authentication Mechanism for IoT Based
on Blockchain and Smart Contract’, in 2022 2nd International Conference of Smart Systems and Emerging Technologies (SMARTTECH),
2022, pp. 91–96.
[14] I. Popchev, I. Radeva, and V. Velichkova, ‘Auditing blockchain smart
contracts’, in 2022 International Conference Automatics and Informatics
(ICAI), 2022, pp. 276–281.
[15] M. Almakhour, A. Wehby, L. Sliman, A. E. Samhat, and A. Mellouk,
‘Smart Contract Based Solution for Secure Distributed SDN’, in 2021
11th IFIP International Conference on New Technologies, Mobility and
Security (NTMS), 2021, pp. 1–6.
[16] J.-W. Liao, T.-T. Tsai, C.-K. He, and C.-W. Tien, ‘SoliAudit: Smart
Contract Vulnerability Assessment Based on Machine Learning and Fuzz
Testing’, in 2019 Sixth International Conference on Internet of Things:
Systems, Management and Security (IOTSMS), 2019, pp. 458–465.
[17] E. Zhou et al., ‘Security Assurance for Smart Contract’, in 2018 9th IFIP
International Conference on New Technologies, Mobility and Security
(NTMS), 2018, pp. 1–5.
[18] H. Zhao, Y. Liu, Y. Wang, and Y. Huang, ‘Hiding Data into Blockchain-based Digital Video for Security Protection’, in 2020 3rd International
Conference on Smart BlockChain (SmartBlock), 2020, pp. 23–28.
[19] R. J. Kutty and N. Javed, ‘Secure Blockchain for Admission Processing in Educational Institutions’, in 2021 International Conference on
Computer Communication and Informatics (ICCCI), 2021, pp. 1–4.
[20] Weingaertner, T., Rao, R., Ettlin, J., Suter, P., Dublanc, P.: Smart
Contracts Using Blockly: Representing a Purchase Agreement Using a
Graphical Programming Language. In: 2018 Crypto Valley Conference
on Blockchain Technology (CVCBT). 55–64 (2018).
[21] Wang, S., Yuan, Y., Wang, X., Li, J., Qin, R., Wang, F.-Y.: An Overview
of Smart Contract: Architecture, Applications, and Future Trends. In:
2018 IEEE Intelligent Vehicles Symposium (IV). 108–113 (2018).
[ps22] Zheng, Z., Xie, S., Dai, H.-N., Chen, W., Chen, X., Weng, J., Imran, M.:
An overview on smart contracts: Challenges, advances and platforms.
Future Generation Computer Systems. 105, 475–491 (2020)
[22] Knecht, M.: Mandala: A Smart Contract Programming Language.
[arXiv:1911.11376 [cs]. (2019).](http://arxiv.org/abs/1911.11376)
[23] Rashid, A., Siddique, M.J.: Smart Contracts Integration between
Blockchain and Internet of Things: Opportunities and Challenges. In:
2019 2nd International Conference on Advancements in Computational
Sciences (ICACS). 1–9 (2019).
[24] J. Liu and Z. Liu, ‘A Survey on Security Verification of Blockchain
Smart Contracts’, IEEE Access, vol. 7, pp. 77894–77904, 2019.
[25] Z. Wan, X. Xia, D. Lo, J. Chen, X. Luo, and X. Yang, ‘Smart
Contract Security: A Practitioners’ Perspective’, in 2021 IEEE/ACM
43rd International Conference on Software Engineering (ICSE), 2021,
pp. 1410–1422.
[26] M. Demir, M. Alalfi, O. Turetken, and A. Ferworn, ‘Security Smells
in Smart Contracts’, in 2019 IEEE 19th International Conference on
Software Quality, Reliability and Security Companion (QRS-C), 2019,
pp. 442–449.
[27] R. Pise and S. Patil, ‘A Deep Dive into Blockchain-based Smart
Contract-specific Security Vulnerabilities’, in 2022 IEEE International
Conference on Blockchain and Distributed Systems Security (ICBDS),
2022, pp. 1–6.
[28] J. Chen, ‘Finding Ethereum Smart Contracts Security Issues by Comparing History Versions’, in 2020 35th IEEE/ACM International Conference
on Automated Software Engineering (ASE), 2020, pp. 1382–1384.
[29] F. D. Giraldo, B. Milton C., and C. E. Gamboa, ‘Electronic Voting
Using Blockchain And Smart Contracts: Proof Of Concept’, IEEE Latin
America Transactions, vol. 18, no. 10, pp. 1743–1751, 2020.
[30] E. M. Sifra, ‘Security Vulnerabilities and Countermeasures of Smart
Contracts: A Survey’, in 2022 IEEE International Conference on
Blockchain (Blockchain), 2022, pp. 512–515.
[31] S. S. Kushwaha, S. Joshi, D. Singh, M. Kaur, and H.-N. Lee, ‘Systematic Review of Security Vulnerabilities in Ethereum Blockchain Smart
Contract’, IEEE Access, vol. 10, pp. 6605–6621, 2022.
-----
[32] B. Thuraisingham, ‘Blockchain Technologies and Their Applications in
Data Science and Cyber Security’, in 2020 3rd International Conference
on Smart BlockChain (SmartBlock), 2020, pp. 1–4.
[33] M. Maffei, ‘Formal Methods for the Security Analysis of Smart Contracts’, in 2021 Formal Methods in Computer Aided Design (FMCAD),
2021, pp. 1–2.
[34] S. Sayeed, H. Marco-Gisbert, and T. Caira, ‘Smart Contract: Attacks
and Protections’, IEEE Access, vol. 8, pp. 24416–24427, 2020.
[35] K. B. Kim and J. Lee, ‘Automated Generation of Test Cases for Smart
Contract Security Analyzers’, IEEE Access, vol. 8, pp. 209377–209392,
2020.
-----
_Digital Object Identifier 10.1109/ACCESS.2023.0322000_
## Joint Reputation Based Grouping and Hierarchical Byzantine Fault Tolerance Consensus Protocol
HAO QIN[1], YEPENG GUAN[1,2,3*]
1School of Communication and Information Engineering, Shanghai University, Shanghai 200444, China
2Key Laboratory of Advanced Display and System Application, Ministry of Education, Shanghai 200072, China
3Key Laboratory of Silicate Cultural Relics Conservation (Shanghai University), Ministry of Education, Shanghai 200444, China
Corresponding author: Yepeng Guan (e-mail: ypguan@shu.edu.cn).
This work is supported in part by National Key R&D Program of China (Grant no. 2020YFC1523004)
**ABSTRACT** Consensus protocols face challenges of low consensus efficiency, centralization, and poor fault tolerance. A joint reputation based grouping and hierarchical Byzantine fault tolerance consensus protocol has been proposed. It is composed of both grouping and hierarchical models. The grouping model uses joint reputation values to balance node grouping and to minimize overall differences in joint reputation values among groups. Nodes with a joint reputation value ranking in the top 50% of their group are randomly selected as function nodes, which improves the degree of decentralization compared with some leader-node election strategies. In the hierarchical model, nodes in each group are layered and the supervision nodes are mapped to the upper layer, which improves consensus network security and efficiency. In addition, a new communication structure has been designed to improve fault tolerance and reduce communication complexity by improving the three phases of Practical Byzantine Fault Tolerance (PBFT). Comparative experiments have shown the superiority of the developed protocol over other existing protocols.
**INDEX TERMS Blockchain, Fault tolerance, Consensus protocol, Reputation model, Distributed network.**
**I. INTRODUCTION**
In recent years, blockchain technology has been widely studied with the rapid development of Bitcoin [1] [2].
Blockchain is a distributed network system, in which each
node maintains an append-only ledger [3] [4]. The system has
the characteristics of decentralization, tamper-proof, traceability, and programmability, and holds huge promise for
the Internet of Things (IoT), finance, logistics, and healthcare [5] [6]. Blockchain is generally classified into three
types based on the degree of decentralization, namely public
blockchain, consortium blockchain, and private blockchain.
The public blockchain [7] is a highly decentralized consensus
network where any node can join the network at any time.
The chain is typically used to accept untrusted and high latency nodes. Consensus protocols for the public chain mainly
include Proof-of-Work (PoW) [8], Proof-of-Stake (PoS) [9],
and Delegated Proof-of-Stake (DPoS) [10]. The consortium
chain [11] [12] is a consensus network with a partial degree
of decentralization. Nodes joining the network are usually
managed and shared by several institutions. The private chain
[13] [14] is a highly centralized consensus network where
nodes are usually controlled by one institution or individual.
Consensus protocol serves as the core technology for
blockchain networks [15] [16], which largely determines
the performance of the network system, such as throughput,
fault tolerance, efficiency, scalability, and so on. Practical
Byzantine Fault Tolerance (PBFT) consensus protocol was
proposed to protect consortium blockchain systems in [17].
The protocol is an improved and practical protocol based
on the original Byzantine Fault Tolerance (BFT) [18]. PBFT
[17] reduces the complexity of BFT [18] from exponential
to polynomial. However, PBFT [17] has problems with poor
scalability, high communication complexity and low fault
tolerance. Therefore, PBFT [17] is generally only applicable
to the consensus network with less than 100 network nodes,
which is challenging to use in a wider network [19] [20].
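These limits can be made concrete with two standard PBFT facts: a network of N replicas tolerates at most f = ⌊(N−1)/3⌋ Byzantine nodes, and the normal-case prepare and commit phases use all-to-all broadcasts, so the message count grows quadratically in N. The sketch below is our own illustration (the function names are ours, not from the paper):

```python
def pbft_fault_tolerance(n: int) -> int:
    """Maximum number of Byzantine replicas tolerated by PBFT with n replicas."""
    return (n - 1) // 3


def pbft_message_count(n: int) -> int:
    """Approximate normal-case message count for one PBFT round.

    pre-prepare: the primary sends to the n - 1 backups
    prepare:     each of the n - 1 backups broadcasts to the other n - 1 nodes
    commit:      all n nodes broadcast to the other n - 1 nodes
    """
    return (n - 1) + (n - 1) * (n - 1) + n * (n - 1)


# Quadratic growth explains why PBFT is rarely deployed beyond ~100 nodes.
for n in (4, 16, 100):
    print(n, pbft_fault_tolerance(n), pbft_message_count(n))
```

With 100 nodes, a single round already needs roughly 2 x 10^4 messages, which motivates the grouping and hierarchy introduced later.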
In recent years, some BFT-based consensus protocols have
been proposed to address PBFT [17] scalability issues. A
multi-layer consensus protocol [21] was developed to reduce
communication complexity. It assumes that fault nodes only
exist at the bottom layer, while nodes at other layers are normal. However, the protocol cannot be applied to a real consortium blockchain [22]. A Delegate Byzantine Fault Tolerance
consensus protocol (DBFT) [23] splits nodes into multiple
clusters. Each cluster selects a delegated node to represent
the cluster, which can reduce communication complexity by
exchanging confirmed information between representative
nodes. However, the protocol cannot tolerate that the delegate
node is a Byzantine node [24]. A hash-ring based consensus protocol (HC-PBFT) [25] was designed to reduce communication complexity; it uses hierarchical technology to avoid direct communication among large numbers of nodes, but its upper-layer nodes still communicate directly, which makes it difficult to achieve high fault tolerance.
Some grouping methods have been proposed to improve consensus efficiency. A K-medoids-based approach was proposed to
reach consensus within groups [26]. A network was designed
to group nodes according to their communication capabilities
[27]. Nodes are grouped according to their geographical locations [28]. NBFT [29] was proposed to avoid excessive communication between nodes by using a consistent hash algorithm
to group consensus nodes. A score-based consensus protocol
(SG-PBFT) [30] was designed to group nodes. However,
these grouping methods [26] [27] [28] [29] [30] do not consider the distribution of different performance or behavioral
reputation nodes in a comprehensive manner, which results
in significant differences between groups. These differences
can reduce the effectiveness of grouping strategies and leave nodes in a particular group without the expected consensus speed and security.
Some methods based on reputation have been proposed to
improve node reliability. T-PBFT [31] evaluates node trust
by the transactions between nodes. A reputation based PBFT
was introduced in [32], which could enhance node reliability
through penalty mechanisms. A protocol [33] was designed
to obtain a node reliability rating by dynamically evaluating
the real-time performance of the service. These works [31]
[32] [33] only consider node performance reputation or node
behavior reputation, so they perform poorly in evaluating
node reliability.
Digital signature technology was utilized to aggregate vote
results from the nodes into the primary node [34]. The SBFT
[35] sets up a collector that assembles and forwards voting
data from each node throughout the consensus process. However, these protocols [34] [35] typically spend more time on encryption and decryption, and their security has yet to be formally assessed.
A Joint Reputation based Grouping and Hierarchical
Byzantine Fault Tolerance consensus protocol (JR-GHBFT)
has been proposed to address the above issues in this paper.
A joint reputation model has been devised to solve the problem
of poor node reliability evaluation in some methods [31]
[32] [33]. A grouping model is developed to minimize the
overall differences in node distribution between groups, in contrast to certain methods [26] [27] [28] [29] [30]. An election strategy
has been proposed to improve the degree of decentralization
compared to some election strategies [31] [32] [33] of leader
nodes. A hierarchical model is built to improve consensus
network security in contrast to HC-PBFT [25]. Moreover, a
new data transmission structure has been designed to reduce
PBFT [17] communication complexity and improve HC-PBFT [25] fault tolerance. This new structure can be applied
to improve SBFT [35] scalability. Some main contributions
are as follows.
1) The joint reputation model has been proposed to improve node reliability. The credibility of each node
based on node behavior and performance can be evaluated in this model. The model can also be applied to
differentiate between normal and Byzantine nodes.
2) The joint reputation based grouping and hierarchical
model has been proposed to improve consensus efficiency. It is composed of both grouping and hierarchical models. The grouping model uses joint reputation
values to balance node grouping, and minimize overall
differences in joint reputation values among groups.
Some nodes with a joint reputation ranking in the top
50% of this group are randomly selected as function
nodes to improve the degree of decentralization. In the
hierarchical model, nodes in each group are layered
and the supervision nodes are mapped to the upper
layer, which improves consensus network security and
efficiency.
3) The new data transmission structure has been designed
to improve fault tolerance and reduce communication
complexity.
Comparative experiments have shown the superiority of the
developed protocol over other existing protocols.
The remainder of the paper is arranged as follows. A joint
reputation model is illustrated in Section II. Joint reputation
based grouping and hierarchical model is illustrated in Section III. JR-GHBFT consensus protocol is described in Section IV. Section V discusses performance analysis. Section
VI describes the experimental results and discussions, while
Section VII provides some conclusions.
**II. JOINT REPUTATION MODEL**
In a consensus network, nodes are typically made up of both
normal and Byzantine nodes. Some consensus algorithms,
including PBFT [17] and DBFT [23], are unable to differentiate between normal nodes and Byzantine nodes before and
after the consensus process. Some reputation-based methods
[31] [32] [33] have been proposed to differentiate between
normal and Byzantine nodes. These works [31] [32] [33]
only consider node performance reputation or node behavior
reputation, so they perform poorly in evaluating node reliability.
A joint reputation model is designed to overcome the limitations mentioned above. It is composed of a node performance reputation model and a node behavior reputation model. Firstly,
a node performance reputation model is described in part
A of this section, which is used to measure the network
performance of nodes. In part B of this section, a node behavior reputation model is proposed to evaluate the behavior
of nodes throughout the consensus process. Some details are
described as follows.
_A. NODE PERFORMANCE REPUTATION MODEL_
Nodes can usually be considered service providers in the blockchain network. Response delay is an indicator that measures the network performance of a node: a node with high data throughput and low response delay has high performance and reliability. The response delay is calculated as follows.
For the case where node n serves node k, the response delay
of node n can be defined as the difference between the start
time of node k message transmission and the end time when
node n completes processing node k information. When node
_n serves multiple nodes, the response delay of node n can_
be taken as the average of the response delays of all nodes
it serves.
To get the average response delay of all nodes, the average network response delay is defined in a cycle as follows:

$$T_{aveN} = \frac{1}{N}\sum_{n=1}^{N} T_n \quad (1)$$

where $N$ denotes the total number of nodes in the network and $T_n$ represents the average response delay of node $n$ serving other nodes in a round.

To determine the superiority or inferiority of the average response delay of node $j$ relative to other nodes, the relative average response delay of node $j$ serving other nodes is then defined in a cycle as follows:

$$T_{reAve} = \frac{1}{X}\sum_{j=1}^{X}\left(T_{aveN} - T_j\right) \quad (2)$$

where $X$ is the total number of cycles in which node $j$ serves other nodes and $T_j$ represents the average response delay of node $j$ serving other nodes in a cycle.

The relative average response delay can effectively reflect node performance relative to other nodes: the larger it is, the better the reliability and performance of the network node. To normalize the relative average response delay and better reward and punish nodes, we define a reward and punishment function as follows:

$$R_{pRP} = \frac{2}{\pi}\tan^{-1}\left(T_{reAve} \times w\right) \quad (3)$$

where $\times$ is the multiplication operator and $w$ is a weight used to balance the normalized reward and punishment function, set to 1000 in the experiments. The relative average response delay of the node can be normalized more effectively by the $\tan^{-1}(\cdot)$ function.

The node performance reputation value is updated after this cycle as follows:

$$P^{n} = P^{n-1} + R_{pRP} \times P^{n-1} \quad (4)$$

where $P^{n}$ denotes the node performance reputation value in the $n$-th round, and the initial performance reputation value $P^{0}$, which denotes the node performance reputation value in the initial state, is zero.

The developed reward and punishment strategy differs from previous strategies. Firstly, the performance reputation value of a node is dynamically calculated based on the average network response delay. As the performance of network nodes improves, the average network response delay decreases, which urges nodes to provide better services in order to obtain a higher performance reputation. Secondly, the performance reputation of nodes is easy to calculate, thereby reducing computational cost.

_B. NODE BEHAVIOR REPUTATION MODEL_

Performance reputation can ensure the reliability of a node to a certain extent. However, it cannot capture malicious behavior during consensus by nodes that hold high performance reputation values. A node behavior reputation model has therefore been proposed to evaluate node behavior during consensus, as follows.

The valid response is introduced as a quantitative index to evaluate node behavior. Whether the response of node $n$ is valid is judged by the other nodes, and $(N_n - M_n)$ is used to measure whether node $n$ has a valid network response. The judgment rules involve detecting whether the received message has been tampered with and whether the message has timed out. When one of the other nodes confirms that the response of node $n$ is normal, the value of $N_n$ (the number of normal responses) for node $n$ increases by 1. Conversely, if the response is considered abnormal, the value of $M_n$ (the number of abnormal responses) for node $n$ increases by 1.

To get the average number of valid responses of all nodes, the average number of valid network responses is defined in a cycle as follows:

$$C_{aveN} = \frac{1}{N}\sum_{n=1}^{N}\sum_{l=1}^{X}\left(N_n^{l} - M_n^{l}\right) \quad (5)$$

where $N_n^{l}$ denotes whether node $n$ gives a normal response in the $l$-th consensus: its value is 0 or 1, and $N_n^{l} = 1$ denotes a normal response. $M_n^{l}$ denotes whether node $n$ gives a mistaken response in the $l$-th consensus: its value is 0 or 1, and $M_n^{l} = 1$ denotes a mistaken response.

To determine the superiority or inferiority of the response of node $j$ compared to other nodes, the relative average number of valid responses is then defined in a cycle as follows:

$$C_{reAve} = \sum_{l=1}^{X}\left(N_j^{l} - M_j^{l}\right) - C_{aveN} \quad (6)$$

where $N_j^{l}$ denotes whether node $j$ gives a normal response in the $l$-th consensus (0 or 1, with $N_j^{l} = 1$ denoting a normal response) and $M_j^{l}$ denotes whether node $j$ gives a mistaken response in the $l$-th consensus (0 or 1, with $M_j^{l} = 1$ denoting a mistaken response).

The relative average number of valid responses can effectively reflect the degree of superiority or inferiority of node behavior relative to other nodes: the larger it is, the better the reliability of the network node. To normalize the relative average number of valid responses and better reward and punish nodes, we define a reward and punishment function as follows:

$$R_{bRP} = \frac{2}{\pi}\tan^{-1}\left(C_{reAve}\right) \quad (7)$$
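As a concrete reading of the reputation formulas, the sketch below computes the two reward-and-punishment terms and a joint reputation update for one node from raw per-cycle observations. It is our own illustration: the function names are ours, w = 1000 follows the paper's experimental setting, and the weight µ = 0.5 is only a placeholder (the paper discusses its choice later).

```python
import math

W = 1000.0  # weight balancing the normalized performance reward, Eq. (3)


def performance_reward(t_ave_per_cycle, t_node_per_cycle):
    """R_pRP of Eq. (3), via the relative average response delay of Eqs. (1)-(2)."""
    x = len(t_node_per_cycle)
    t_re_ave = sum(t_ave - t_j
                   for t_ave, t_j in zip(t_ave_per_cycle, t_node_per_cycle)) / x
    return (2.0 / math.pi) * math.atan(t_re_ave * W)


def behavior_reward(response_flags, c_ave_n):
    """R_bRP of Eq. (7), via the relative number of valid responses of Eqs. (5)-(6).

    response_flags holds (N_l, M_l) pairs for one node, one pair per consensus.
    """
    c_re_ave = sum(n_l - m_l for n_l, m_l in response_flags) - c_ave_n
    return (2.0 / math.pi) * math.atan(c_re_ave)


def joint_reputation(p_prev, b_prev, r_p, r_b, mu=0.5):
    """Eqs. (4), (8) and (11): update P and B, then combine them into J."""
    p = p_prev + r_p * p_prev
    b = b_prev + r_b * b_prev
    return mu * p + (1.0 - mu) * b
```

A node that is consistently faster than the network average gets a performance reward close to +1, while a slower node is punished with a negative value; both reward terms stay within (−1, 1) because of the arctangent normalization.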
The node behavior reputation value is updated after this cycle as follows:

$$B^{n} = B^{n-1} + R_{bRP} \times B^{n-1} \quad (8)$$

where $B^{n}$ denotes the node behavior reputation value in the $n$-th round, and the initial behavior reputation value $B^{0}$, which denotes the node behavior reputation value in the initial state, is zero.

When nodes with high performance reputation values engage in malicious behavior, the above performance reputation update mechanism is not sufficient to effectively restrict them. Therefore, when the relative average number of valid responses is less than a predetermined threshold $Th$, the performance reputation update function is redefined as follows:

$$P^{n} = R_{pRP} \times P^{n-1} \quad (9)$$

The threshold $Th$ is defined as follows:

$$Th = \frac{C_{aveN} - C_{pAveN}}{2} \quad (10)$$

where $C_{pAveN}$ denotes the average number of valid network responses for the primary node. The joint reputation value of a node can be calculated by combining performance reputation and behavior reputation, as shown below:

$$J = \mu \times P^{n} + (1 - \mu) \times B^{n} \quad (11)$$

where $\mu$ is the weight of the joint reputation value, which will be discussed later.

Each node in the network possesses a trusted container that is not controlled by the node itself. The joint reputation value of a node is automatically computed and disseminated through these trusted containers.

To better illustrate the relationship between performance reputation and behavior reputation, Algorithm 1 outlines the process of updating joint reputation values in a more intuitive way.

**Algorithm 1 Updating joint reputation value.**
**Input:** consensus cycles $X$; average network response delay $T_{aveN}$; average number of valid network responses $C_{aveN}$; weight of joint reputation value $\mu$.
**Output:** $R$.
1: for each $j \in [1, X]$ do
2:   $T_{reAve} = T_{reAve} + (T_{aveN} - T_j)$;
3:   $C_{reAve} = C_{reAve} + (N_j - M_j)$;
4: end for
5: $T_{reAve} = T_{reAve} / X$;
6: $C_{reAve} = C_{reAve} - C_{aveN}$;
7: $R_{pRP} = \frac{2}{\pi}\tan^{-1}\left(T_{reAve} \times w\right)$;
8: $R_{bRP} = \frac{2}{\pi}\tan^{-1}\left(C_{reAve}\right)$;
9: if $C_{reAve} \geq Th$ then
10:   $P^{n} = P^{n-1} + R_{pRP} \times P^{n-1}$;
11: else
12:   $P^{n} = R_{pRP} \times P^{n-1}$;
13: end if
14: $B^{n} = B^{n-1} + R_{bRP} \times B^{n-1}$;
15: $R = \mu \times P^{n} + (1 - \mu) \times B^{n}$;
16: return $R$;

_A. GROUPING MODEL_

The scalability of PBFT [17] is limited in large-scale networks as the number of nodes increases, because PBFT [17] does not group nodes. Several grouping strategies [26] [27] [28] [29] [30] have been proposed to enhance PBFT [17] scalability in large networks. However, these methods cannot effectively capture the distribution of performance and behavior reputation among nodes, which reduces the effectiveness of the grouping policy and causes certain nodes within a group to fail to achieve the expected consensus speed and security. A grouping model based on joint reputation is proposed to ensure consistency of overall performance and behavior reputation among groups.

We divide consensus nodes into function nodes and ordinary nodes. Function nodes are divided into four types with different responsibilities: verification node, collection node, supervision node, and primary node. The primary node authorizes client requests and collects votes from each group. The verification node verifies authorized client messages. The collection node collects votes for its group. The supervision node supervises the other function nodes in its group. An ordinary node is honest and has the opportunity to be selected as a function node.

There is only one collection node or primary node in each group, and the consensus network has only one primary node in a cycle. Nodes with a joint reputation ranking in the top $p$ of their group have the opportunity to become function nodes, where $p$ is the proportion of the number of candidate nodes to the total number of nodes, which will be discussed later. The grouping process is shown in Algorithm 2.
**III. JOINT REPUTATION BASED GROUPING AND**
**HIERARCHICAL MODEL**
In order to ensure consistency and improve the security and
efficiency of consensus networks, a joint reputation based
grouping and hierarchical model has been developed. It is
composed of both grouping and hierarchical models. Grouping model based on joint reputation is described for grouping
nodes in part A of this section. Hierarchical model is proposed
in part B of this section, which maps different types of nodes
in each group to one of the two layers.
**Algorithm 2 Grouping process.**
**Input:**
Node set, Nodes;
Joint reputation model, R;
Number of groups, x;
1: for each node i ∈ Nodes do
2:   Allocate node i to a group in a balanced manner based on its joint reputation value;
3: end for
4: for each j ∈ [1, x] do
5:   Sort the nodes in group j by joint reputation value;
6:   Randomly select function nodes in group j from the nodes with the top 50% joint reputation values;
7: end for
8: Select the primary node from the collection nodes;
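Algorithm 2 leaves the "balanced manner" of allocation unspecified; the sketch below assumes a snake (boustrophedon) deal over the reputation-sorted node list, which keeps per-group reputation totals close, and then draws function nodes from each group's top-p candidates:

```python
import random

def group_nodes(reputations, x, p=0.5, seed=0):
    """Sketch of Algorithm 2.  reputations maps node id -> joint reputation R.
    The balanced allocation is assumed here to be a snake deal over the
    reputation-sorted list; the paper does not fix the exact scheme."""
    rng = random.Random(seed)
    ranked = sorted(reputations, key=reputations.get, reverse=True)
    groups = [[] for _ in range(x)]
    for i, node in enumerate(ranked):
        lap, pos = divmod(i, x)
        # Reverse direction on every other lap so strong and weak nodes
        # spread evenly across the x groups.
        groups[pos if lap % 2 == 0 else x - 1 - pos].append(node)
    function_nodes = []
    for g in groups:
        g.sort(key=reputations.get, reverse=True)
        candidates = g[:max(1, int(len(g) * p))]      # top-p of the group
        # Up to three function nodes per group (verification, collection,
        # supervision); the primary node is later chosen among collectors.
        function_nodes.append(rng.sample(candidates, min(3, len(candidates))))
    return groups, function_nodes
```

With twelve nodes and three groups, the snake deal yields identical group reputation sums, matching the model's goal of consistent reputation across groups.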
**FIGURE 1. The hierarchical structure of JR-GHBFT.**
_B. HIERARCHICAL MODEL_
HC-PBFT [25] maps different types of nodes in each group
to one of two layers. However, it has only one node per group
mapped to the upper layer, which makes it difficult to ensure
the security and efficiency of consensus networks.
A hierarchical model based on joint reputation has been designed to address the above issues. It places three function nodes of each group on the upper layer, while the remaining nodes are placed on the lower layer. A supervision node monitors the other function nodes in the group, and the other two upper layer nodes divide between them the labor that a single upper layer node performs in the original HC-PBFT [25]. The hierarchical structure is shown in Fig. 1.
The number of nodes in each group is approximately the
same, which ensures consistency of overall performance and
behavior reputation among groups. The relationship between
the number of groups and the total number of network nodes
will be discussed and analyzed later.
**IV. JR-GHBFT CONSENSUS PROTOCOL**
PBFT [17] suffers from high communication complexity and low fault tolerance due to direct communication between nodes. To improve fault tolerance, SBFT [35] collects votes from each node using aggregate signature technology, but this places it under a high workload because it receives a large number of messages from other nodes at the same time. HC-PBFT [25] uses layering to avoid direct communication among a large number of nodes and thus reduces communication complexity; however, its upper layer nodes still communicate directly with each other, so it is difficult to reach high fault tolerance.

A consensus protocol has been designed to address the above issues, as shown in Fig. 2. The specific process of JR-GHBFT data transmission is as follows:
_Request phase_: a client initiates a transaction signed with its private key. It then sends the transaction to the primary node.
_Pre_prepare phase_: the primary node determines whether the transaction is legal. If so, it generates a block and then sends the hash input code to the client and the Pre_prepare1 message to the other nodes. The hash input code is composed of the client IP, a random number, and the transaction. The Pre_prepare1 message format is:

< Pre_prepare1, < h, v, S(p), bs, block >, ho >  (12)

where h is the current block height, v is the current view number, S(p) is the signature of the primary node, bs is the summary of the block, and ho is the hash output code generated by the primary node.

The hash output code is the result of applying the hash function to the hash input code. After receiving the hash input code from the primary node, the client sends Pre_prepare2 messages to the verification nodes. The Pre_prepare2 message format is:

< Pre_prepare2, hi, m >  (13)

where hi is the hash input code generated by the primary node and m is the transaction message encrypted with the client's private key.

After receiving the Pre_prepare2 message from the client, each verification node determines whether the block contains m. If so, it sends the Pre_prepare3 message to the other nodes of its group. The Pre_prepare3 message format is:

< Pre_prepare3, < h, v, S(p), bs, block >, hi >  (14)
_Prepare phase_: the other nodes determine whether the output of hi through the hash function equals ho. If so, they send a Supported message to the collection node of their group; otherwise they send an Unsupported message. In the primary node's group, the primary node acts as the collection node. The Supported message format is:

< Supported, < h, v, S(p), bs, block >, S >  (15)

where S is the node's signature with its private key. The Unsupported message format is:

< Unsupported, < h, v, S(p), bs, block >, S >  (16)

The collection node collects the Supported and Unsupported messages sent by the nodes of its group within a period of time. It then sends the Prepare message to the primary node and the other nodes in the group. The Prepare message format is:

< Prepare, < h, v, S(p), bs, block >, Ss >  (17)

where Ss is the sum of the signatures S in this group.
_Commit phase_: the primary node obtains the result by counting the votes. If there are at least f + 1 supported votes, where f is the number of unsupported votes, the primary node sends the Commit message to the other nodes. The Commit message format is:

< Commit, < h, v, S(p), bs, block >, Sss >  (18)

where Sss is the sum of the Ss values from the different collection nodes.
_Reply phase_: the other nodes verify the message after receiving the Commit message from the primary node. If it carries at least f + 1 supported votes, they write the new block into the local ledger and then reply to the client.
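The hi/ho linkage that drives the Prepare-phase vote can be sketched as follows (SHA-256 and the field separator are assumptions; the paper does not name a specific hash function):

```python
import hashlib
import os

def make_hash_codes(client_ip, transaction):
    """Primary node side: hi combines the client IP, a random number and
    the transaction; ho = H(hi).  SHA-256 and the '|' separator are
    illustrative choices, not the paper's specification."""
    nonce = os.urandom(16).hex()
    hi = f"{client_ip}|{nonce}|{transaction}"
    ho = hashlib.sha256(hi.encode()).hexdigest()
    return hi, ho

def vote(hi, ho):
    """Prepare-phase check: support the block iff H(hi) equals ho."""
    ok = hashlib.sha256(hi.encode()).hexdigest() == ho
    return "Supported" if ok else "Unsupported"
```

Because hi travels through the client while ho travels through the primary node, a node can only produce a Supported vote when both paths delivered consistent data.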
**V. PERFORMANCE ANALYSIS**
_A. SAFETY_
During the JR-GHBFT Pre_prepare phase, if the client is a Byzantine node, it sends an error message to the verification nodes. Each verification node then checks the message for correctness. If the received message does not match the message sent by the primary node, the verification node broadcasts an error signal to the other verification nodes. The client request is not processed when all verification nodes receive x − 1 error messages, where x is the number of groups.
In the JR-GHBFT Prepare phase, the collection nodes collect votes and send the voting results to the primary node and the nodes in their group. In the JR-GHBFT Commit phase, the primary node sends the voting results of all groups to the other nodes. After receiving the voting results, each node verifies whether its vote has been tampered with. If tampering is detected, the node sends an error signal to the supervision node of its group. The faulty function nodes are replaced when the supervision node receives more than half of the error signals. Regardless of whether the function nodes or clients are malicious, the consensus protocol can guarantee the security of the entire blockchain network.
_B. FAULT TOLERANCE_
Suppose N is the total number of nodes in the consensus network. The Byzantine fault tolerance of HC-PBFT [25] is (N/2 − 2x/3). The JR-GHBFT consensus protocol improves fault tolerance; the fault tolerance of the developed consensus network depends on the number of votes cast by the consensus nodes. The analysis of the developed protocol is as follows.
In the Prepare phase, the primary node collects the voting results of each group. The total number of nodes participating in the voting is SN, the total number of supported-vote nodes is ST, and the total number of unsupported-vote nodes is SF. Their relationship is given in (19). To reach consensus, the number of supported votes must exceed the number of unsupported votes, as stated in inequality (20). Inequality (21) is derived from (19) and (20):

SN = ST + SF + 1  (19)

ST ≥ SF + 1  (20)

SF ≤ (SN − 2) / 2  (21)
We consider that downtime nodes cannot participate in voting in the JR-GHBFT Prepare phase. The total number of network nodes is N and the total number of non-voting nodes is O. Their relationship to SN is as follows:

N = SN + O  (22)

In a large network, the total number of nodes participating in voting is far greater than the total number of nodes not participating, so N is approximately equal to SN. As a result, the maximum fault tolerance of JR-GHBFT is (N − 2)/2. We can conclude that JR-GHBFT is superior to HC-PBFT [25] in fault tolerance when x is greater than 2.
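The fault tolerance bounds can be compared numerically; a small sketch (function names are ours):

```python
def pbft_tolerance(N):
    """Maximum number of Byzantine nodes PBFT [17] tolerates: (N - 1) / 3."""
    return (N - 1) // 3

def jr_ghbft_tolerance(N):
    """Maximum tolerated by JR-GHBFT per the derivation above: (N - 2) / 2."""
    return (N - 2) // 2
```

For N = 100 this gives 33 versus 49, which is also why the experiment in Section VI uses 49 Byzantine nodes as the worst case.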
_C. EFFICIENCY_
Communication complexity is a key indicator of consensus efficiency, and the number of communications can be used to assess it. The total communication count of PBFT [17] for reaching a consensus is:

C_PBFT = 2N² − 2N  (23)

The total communication count of JR-GHBFT for reaching a consensus is:

C_JR−GHBFT = 2N + 3xy − 3x − 2  (24)

where y is the number of nodes in each group. Since the relationship between N and x is x = N/y, the total communication count of JR-GHBFT is 5N − 3x − 2. One can see from (24) that JR-GHBFT reduces the communication complexity of PBFT [17] from O(N²) to O(N).
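Equations (23) and (24) can be evaluated directly; a small sketch (function names are ours):

```python
def c_pbft(N):
    """Total communication count of PBFT, eq. (23): 2N^2 - 2N."""
    return 2 * N**2 - 2 * N

def c_jr_ghbft(N, x):
    """Total communication count of JR-GHBFT, eq. (24), with y = N / x."""
    y = N / x
    return 2 * N + 3 * x * y - 3 * x - 2  # simplifies to 5N - 3x - 2
```

At N = 100 and x = 10 the counts are 19 800 versus 468, illustrating the drop from quadratic to linear growth.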
_D. WORKLOAD_
When nodes have the same load capacity, a consensus structure is better if each node processes fewer requests from the same number of nodes at the same stage. In blockchain systems, the primary node often handles a substantial number of requests during the Prepare phase, so we focus primarily on the workload of this node in that phase.
In the Prepare phase, the primary node workload of SBFT [35] is W_SBFT = N − 1, while that of JR-GHBFT is W_JR−GHBFT = x + y − 3. The workload ratio of the two methods is:

PWR = W_JR−GHBFT / W_SBFT = (x + y − 3) / (N − 1)  (25)

Given that the relationship between N and y is y = N/x, the PWR is:

PWR = (x + N/x − 3) / (N − 1)  (26)
**FIGURE 2. JR-GHBFT consensus protocol.**
With N held constant, the PWR value is minimized when x = N/x. The relationship between the number of groups and the total number of network nodes is therefore:

x = √N  (27)

We call the relationship in (27) the adaptive grouping strategy.

It can be seen from (26) that PWR decreases gradually and its value is always less than 1 as N increases. The reason is that the JR-GHBFT collection nodes carry most of the workload of the primary node.
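The minimization behind the adaptive grouping strategy (27) can be verified by scanning integer group counts; a small sketch:

```python
def pwr(N, x):
    """Primary-node workload ratio of JR-GHBFT to SBFT, eq. (26)."""
    return (x + N / x - 3) / (N - 1)

N = 100
# Scan all integer group counts: the minimum lands at x = sqrt(N),
# i.e. the adaptive grouping strategy of eq. (27).
best_x = min(range(1, N + 1), key=lambda x: pwr(N, x))
```

For N = 100 the scan returns best_x = 10 = √100, with PWR = 17/99 ≈ 0.17, well below 1.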
**VI. EXPERIMENTAL RESULTS AND DISCUSSIONS**
To evaluate the performance of the developed JR-GHBFT, we constructed an experimental network in which all nodes run on one machine. Each node is equipped with a reputation value ledger and a ledger for recording transactions in the consensus process. This network and the related models are deployed on a Linux system with an 8-core Intel i7-9200U CPU clocked at 3.6 GHz and 32 GB of RAM.
_A. REPUTATION MODEL PARAMETERS_
1) Parameter µ
Joint reputation is a key metric for assessing node reliability, composed of performance reputation and behavior reputation. The higher a node's joint reputation value, the better its reliability. To obtain the best joint reputation value, the parameter µ in (11) is adjusted from 0.1 to 0.9 in increments of 0.2. Fig. 3 shows experimental results for the different µ values.
**FIGURE 3. The proportion of normal nodes with different µ in (11).**
It can be seen in Fig. 3 that the proportion of normal nodes is highest when µ is 0.3. Therefore, µ is set to 0.3 and kept the same in the subsequent experiments.
2) Parameter p
Equation (28) has been introduced as an evaluation index to assess the impact of selecting function nodes from different proportions of candidate nodes on the consensus network system. It is represented as follows:

C = Σ_{i=1}^{N} (α × R_i + (1 − α) × F_i)  (28)

where R_i represents the number of candidate nodes reselected as function nodes in the i-th consistency process among the p × N candidate nodes, and F_i represents the number of Byzantine nodes selected as function nodes in the i-th consistency process among the p × N candidate nodes. C serves as a quantitative indicator measuring the centrality or security of the consensus network system, while α functions as a selector with a value of 0 or 1.

**FIGURE 4. C value in (28) with different p values.**

When α is 1, a larger value of C indicates a higher degree of centralization in the consensus network system. When α is 0, a larger value of C indicates a lower level of security in the consensus network system.
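Equation (28) amounts to a weighted per-round sum; a minimal sketch (names are ours):

```python
def centrality_or_security(R, F, alpha):
    """Eq. (28): R[i] is the count of candidates reselected as function
    nodes in round i, F[i] the count of Byzantine nodes selected; the
    selector alpha = 1 measures centralization, alpha = 0 security."""
    return sum(alpha * r + (1 - alpha) * f for r, f in zip(R, F))
```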
In the experiment, the value of N is set to 100, with 49 Byzantine nodes and 51 normal nodes. It is worth noting that half of the total network nodes are characterized by superior performance. The number of Byzantine nodes is set to 49 based on the fault tolerance analysis in Part B of Section V, which shows that the consensus network system can tolerate up to 49 Byzantine nodes in this case. In addition, we apply a reward and punishment mechanism to all network nodes after each round of consensus.
To obtain a reasonable p value, we varied it from 0.1 to 1 with an interval of 0.1. For each p, 100 rounds of consensus experiments were run. Some results are shown in Fig. 4 to illustrate the influence of different p values on the consensus network system.
One can see from Fig. 4 that when α is 1, the value of C decreases as p increases for p less than 0.9 and fluctuates for p greater than 0.9. When α is 0, the value of C shows an upward trend for p less than 0.5 and fluctuates as p increases beyond 0.5. Centralization and security thus have roughly the same impact on the system when p is 0.5, so p is set to 0.5 in the experiments and kept constant.
_B. MODEL ANALYSES_
1) Probability Analysis
In a consensus network, if both the primary node and the verification nodes are Byzantine nodes, consensus fails, so we perform a probabilistic analysis of this situation. Supposing that the nodes are independent of each other, the probability of consensus failure is:

P = C_F^(x+1) / C_N^(x+1)  (29)
**FIGURE 5. P value in (29) with different grouping strategies.**
**FIGURE 6. Data throughput with and without joint reputation.**
where F indicates the total number of Byzantine nodes in the network, C_F^(x+1) is the number of ways the x + 1 selected nodes (the primary node and the verification nodes) can all be drawn from the F Byzantine nodes, and C_N^(x+1) is the number of ways they can be drawn from the N consensus nodes.
Since the fault tolerance of JR-GHBFT is (N − 2)/2, as discussed in Part B of Section V, the range of F is [0, (N − 2)/2]. When x is held constant, an increase in F results in a corresponding increase in P. The reason is that as the number of malicious nodes in a blockchain network increases, the probability of malicious nodes being selected as function nodes increases.

We take F = (N − 2)/2 and set the fixed grouping number to 7 for the sake of discussion. Fig. 5 shows experimental results with different grouping strategies. It can be seen that the performance of adaptive grouping is better than that of fixed grouping: the larger the number of groups, the lower the probability of malicious nodes gathering under the adaptive grouping strategy.
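The failure probability (29) can be evaluated with binomial coefficients; the sketch below (names are ours) compares the fixed grouping number x = 7 against the adaptive x = √N at N = 100 and the worst case F = (N − 2)/2:

```python
from math import comb, isqrt

def p_fail(N, F, x):
    """Eq. (29): probability that the x + 1 randomly selected nodes
    (primary node plus verification nodes) are all Byzantine."""
    if F < x + 1:
        return 0.0
    return comb(F, x + 1) / comb(N, x + 1)

N = 100
F = (N - 2) // 2                   # maximum tolerated Byzantine count
fixed = p_fail(N, F, 7)            # fixed grouping number, x = 7
adaptive = p_fail(N, F, isqrt(N))  # adaptive strategy, x = sqrt(N) = 10
```

More groups mean more independent random draws must all hit Byzantine nodes, so the adaptive strategy yields a strictly lower failure probability here.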
2) Ablation Experiment
Data throughput is an important indicator for measuring the performance of consensus protocols: the higher the data throughput, the better the protocol performs. Throughput refers to the amount of data processed within a given time interval, typically measured in units such as bytes per second or packets per second. It is defined in this paper as:

TPS = transactions / Δt  (30)
**FIGURE 7. Comparisons of latency and data throughput.**
**TABLE 1. Comparisons with different consensus protocols**

| Consensus protocol | PBFT [17] | HC-PBFT [25] | T-PBFT [31] | JR-GHBFT |
|---|---|---|---|---|
| Communication complexity | O(N²) | O(N) | O(N²) | **O(N)** |
| Fault tolerance | (N − 1)/3 | ≥ (N − 1)/3 | ≥ (N − 1)/3 | **(N − 2)/2** |
| Scalability | Low | Medium | Medium | **High** |
| Degree of decentralization | High | High | Low | **High** |
To test JR-GHBFT, the number of network nodes is set to 50: 39 normal nodes, 10 Byzantine nodes, and one client. Fig. 6 shows experimental results with and without joint reputation. It can be seen in Fig. 6 that JR-GHBFT outperforms GHBFT, the variant without joint reputation, in data throughput. The reason is that JR-GHBFT chooses nodes with high joint reputation as the upper layer nodes.
_C. COMPARISON WITH OTHER CONSENSUS PROTOCOLS_
Some protocols, including PBFT [17], HC-PBFT [25], and T-PBFT [31], are chosen to further evaluate the data throughput and latency of the developed JR-GHBFT. For a fair comparison, we set the number of transactions to 2500. In the experiment, the number of network nodes is increased from 4 to 40 in increments of 3. Fig. 7 shows the results for data throughput and latency from left to right, respectively.
It can be seen in Fig. 7 that the developed JR-GHBFT performs best among the investigated protocols [17] [25] [31] in both latency and data throughput. Some of the reasons are as follows. Direct communication between a large number of nodes in PBFT [17] results in high communication complexity and low data throughput. HC-PBFT [25] reduces direct communication between a large number of nodes through grouping and hierarchical technology, which lowers communication complexity. However, since the primary node of each group is randomly selected, malicious nodes may be selected as primary nodes in HC-PBFT [25], which can reduce consensus efficiency to some extent. Although T-PBFT [31] selects nodes based on reputation values, its communication complexity is still O(N²). In the developed JR-GHBFT, reliable nodes can be selected as function nodes, which improves consensus efficiency, and the improved data transmission process reduces the communication complexity to O(N).
To achieve greater fairness in the results, certain additional indicators are considered. The results depend on the optimal parameters specified by the authors of each established protocol [17] [25] [31]. Table 1 shows a selection of comparison results.
It can be seen in Table 1 that JR-GHBFT performs best among the investigated protocols [17] [25] [31]. Some of the reasons are as follows. Due to its O(N²) communication complexity, PBFT [17] is inefficient in large networks. In HC-PBFT [25], it is difficult for lower layer nodes to verify the authenticity of messages delivered by upper layer nodes. In T-PBFT [31], the consensus group consists only of nodes with high trust values. JR-GHBFT reduces both communication complexity and centralization; in addition, the developed protocol tolerates more malicious nodes and improves scalability.
**VII. CONCLUSIONS**
A joint reputation model has been proposed to improve node reliability. The model assesses node credibility based on behavior and performance, and can also be used to differentiate between normal and Byzantine nodes. A joint reputation based grouping and hierarchical model has been proposed to improve consensus efficiency. It is composed of both a grouping model and a hierarchical model. The grouping model uses joint reputation values to balance node grouping so as to minimize the overall differences in joint reputation values among groups. Nodes with a joint reputation ranking in the top 50% of their group are randomly selected as function nodes, which improves the degree of decentralization. In the hierarchical model, the nodes in each group are layered and the supervision nodes are mapped to the upper layer, which improves consensus network security and efficiency. Moreover, a new communication structure is designed to improve fault tolerance and reduce communication complexity by improving the three phases of PBFT [17]. Comparative experiments have shown the superiority of the developed protocol over the other existing protocols.

We will further refine the consensus protocol in future research. The Pre_prepare phase can be streamlined to speed up the consensus process for the blockchain network. In addition, we plan to explore the application of blockchain technology to the protection of cultural relics, which facilitates their digitization.
**REFERENCES**
[1] M. Gimenez-Aguilar, J. M. De Fuentes, L. Gonzalez-Manzano, and D. Arroyo, ‘‘Achieving cybersecurity in blockchain-based systems: A survey,’’
_Future Generation Computer Systems, vol. 124, pp. 91–118, 2021._
[2] M. Mostafa, ‘‘Bitcoin’s blockchain peer-to-peer network security attacks
and countermeasures,’’ Indian Journal of Science and Technology, vol. 13,
no. 07, pp. 767–786, 2020.
[3] Y. Lu, X. Huang, Y. Dai, S. Maharjan, and Y. Zhang, ‘‘Blockchain and
federated learning for privacy-preserved data sharing in industrial iot,’’
_IEEE Transactions on Industrial Informatics, vol. 16, no. 6, pp. 4177–4186,_
2019.
[4] S. Kaur, S. Chaturvedi, A. Sharma, and J. Kar, ‘‘A research survey on
applications of consensus protocols in blockchain,’’ Security and Commu_nication Networks, vol. 2021, pp. 1–22, 2021._
[5] D. Berdik, S. Otoum, N. Schmidt, D. Porter, and Y. Jararweh, ‘‘A survey
on blockchain for information systems management and security,’’ Infor_mation Processing & Management, vol. 58, no. 1, p. 102397, 2021._
[6] T. Meng, Y. Zhao, K. Wolter, and C.-Z. Xu, ‘‘On consortium blockchain
consistency: A queueing network model approach,’’ IEEE Transactions on
_Parallel and Distributed Systems, vol. 32, no. 6, pp. 1369–1382, 2021._
[7] C. Tang, L. Wu, G. Wen, and Z. Zheng, ‘‘Incentivizing honest mining
in blockchain networks: a reputation approach,’’ IEEE Transactions on
_Circuits and Systems II: Express Briefs, vol. 67, no. 1, pp. 117–121, 2019._
[8] M. Jakobsson and A. Juels, ‘‘Proofs of work and bread pudding protocols,’’
in Secure Information Networks: Communications and Multimedia Secu_rity IFIP TC6/TC11 Joint Working Conference on Communications and_
_Multimedia Security (CMS’99) September 20–21, 1999, Leuven, Belgium._
Springer, 1999, pp. 258–272.
[9] B. David, P. Gaži, A. Kiayias, and A. Russell, ‘‘Ouroboros praos: An
adaptively-secure, semi-synchronous proof-of-stake blockchain,’’ in Ad_vances in Cryptology–EUROCRYPT 2018: 37th Annual International Con-_
_ference on the Theory and Applications of Cryptographic Techniques, Tel_
_Aviv, Israel, April 29-May 3, 2018 Proceedings, Part II 37. Springer, 2018,_
pp. 66–98.
[10] B. Wang, Z. Li, and H. Li, ‘‘Hybrid consensus algorithm based on modified
proof-of-probability and dpos,’’ Future Internet, vol. 12, no. 8, p. 122,
2020.
[11] G. Sun, M. Dai, J. Sun, and H. Yu, ‘‘Voting-based decentralized consensus design for improving the efficiency and security of consortium
blockchain,’’ IEEE Internet of Things Journal, vol. 8, no. 8, pp. 6257–6272,
2020.
[12] R. Qiao, X.-Y. Luo, S.-F. Zhu, A.-D. Liu, X.-Q. Yan, and Q.-X. Wang, ‘‘Dynamic autonomous cross consortium chain mechanism in e-healthcare,’’
_IEEE journal of biomedical and health informatics, vol. 24, no. 8, pp._
2157–2168, 2020.
[13] S. Banerjee, B. Bera, A. K. Das, S. Chattopadhyay, M. K. Khan, and J. J.
Rodrigues, ‘‘Private blockchain-envisioned multi-authority cp-abe-based
user access control scheme in iiot,’’ Computer Communications, vol. 169,
pp. 99–113, 2021.
[14] S. Pahlajani, A. Kshirsagar, and V. Pachghare, ‘‘Survey on private
blockchain consensus algorithms,’’ in 2019 1st International Conference
_on Innovations in Information and Communication Technology (ICIICT)._
IEEE, 2019, pp. 1–6.
[15] Y. Xiao, N. Zhang, W. Lou, and Y. T. Hou, ‘‘A survey of distributed
consensus protocols for blockchain networks,’’ IEEE Communications
_Surveys & Tutorials, vol. 22, no. 2, pp. 1432–1465, 2020._
[16] L. S. Sankar, M. Sindhu, and M. Sethumadhavan, ‘‘Survey of consensus
protocols on blockchain applications,’’ in 2017 4th international con_ference on advanced computing and communication systems (ICACCS)._
IEEE, 2017, pp. 1–5.
[17] M. Castro and B. Liskov, ‘‘Practical byzantine fault tolerance and proactive
recovery,’’ ACM Transactions on Computer Systems (TOCS), vol. 20, no. 4,
pp. 398–461, 2002.
[18] Z. Zheng, S. Xie, H.-N. Dai, X. Chen, and H. Wang, ‘‘Blockchain challenges and opportunities: A survey,’’ International journal of web and grid
_services, vol. 14, no. 4, pp. 352–375, 2018._
[19] H. Sukhwani, J. M. Martínez, X. Chang, K. S. Trivedi, and A. Rindos, ‘‘Performance modeling of pbft consensus process for permissioned blockchain
network (hyperledger fabric),’’ in 2017 IEEE 36th symposium on reliable
_distributed systems (SRDS)._ IEEE, 2017, pp. 253–255.
[20] O. Onireti, L. Zhang, and M. A. Imran, ‘‘On the viable area of wireless
practical byzantine fault tolerance (pbft) blockchain networks,’’ in 2019
_IEEE Global Communications Conference (GLOBECOM)._ IEEE, 2019,
pp. 1–6.
[21] W. Lv, X. Zhou, and Z. Yuan, ‘‘Design of tree topology based byzantine
fault tolerance system,’’ J. Commun, vol. 38, no. Z2, pp. 143–150, 2017.
[22] W. Li, C. Feng, L. Zhang, H. Xu, B. Cao, and M. A. Imran, ‘‘A scalable
multi-layer pbft consensus for blockchain,’’ IEEE Transactions on Parallel
_and Distributed Systems, vol. 32, no. 5, pp. 1146–1160, 2020._
[23] T. Crain, V. Gramoli, M. Larrea, and M. Raynal, ‘‘Dbft: Efficient leaderless
byzantine consensus and its application to blockchains,’’ in 2018 IEEE 17th
_International Symposium on Network Computing and Applications (NCA)._
IEEE, 2018, pp. 1–8.
[24] I. M. Coelho, V. N. Coelho, R. P. Araujo, W. Yong Qiang, and B. D. Rhodes,
‘‘Challenges of pbft-inspired consensus for blockchain and enhancements
over neo dbft,’’ Future Internet, vol. 12, no. 8, p. 129, 2020.
[25] W. Zhong, X. Zheng, W. Feng, M. Huang, and S. Feng, ‘‘Improve pbft
based on hash ring,’’ Wireless Communications and Mobile Computing,
vol. 2021, pp. 1–9, 2021.
[26] L. Chen, ‘‘Improved PBFT consensus mechanism based on K-medoids,’’
_Computer Science, vol. 46, no. 12, pp. 101–107, 2019._
[27] H. Yoo, J. Yim, and S. Kim, ‘‘The blockchain for domain based static sharding,’’ in 2018 17th IEEE International Conference On Trust, Security And
_Privacy In Computing And Communications/12th IEEE International Con-_
_ference On Big Data Science And Engineering (TrustCom/BigDataSE)._
IEEE, 2018, pp. 1689–1692.
[28] N. Gao, Z. Chuangming, C. Yang, S. Lina, and W. He, ‘‘Improvement of
pbft algorithm based on network self-clustering,’’ Computer Application
_Research, vol. 38, no. 11, pp. 1–8, 2021._
[29] J. Yang, Z. Jia, R. Su, X. Wu, and J. Qin, ‘‘Improved fault-tolerant consensus based on the pbft algorithm,’’ IEEE Access, vol. 10, pp. 30 274–30 283,
2022.
[30] G. Xu, H. Bai, J. Xing, T. Luo, N. N. Xiong, X. Cheng, S. Liu, and
X. Zheng, ‘‘Sg-pbft: A secure and highly efficient distributed blockchain
pbft consensus algorithm for intelligent internet of vehicles,’’ Journal of
_Parallel and Distributed Computing, vol. 164, pp. 1–11, 2022._
[31] S. Gao, T. Yu, J. Zhu, and W. Cai, ‘‘T-pbft: An eigentrust-based practical
byzantine fault tolerance consensus algorithm,’’ China Communications,
vol. 16, no. 12, pp. 111–123, 2019.
[32] X. Yuan, F. Luo, M. Z. Haider, Z. Chen, and Y. Li, ‘‘Efficient byzantine
consensus mechanism based on reputation in iot blockchain,’’ Wireless
_Communications and Mobile Computing, vol. 2021, pp. 1–14, 2021._
[33] W. Liu, X. Zhang, W. Feng, M. Huang, and Y. Xu, ‘‘Optimization of pbft
algorithm based on qos-aware trust service evaluation,’’ Sensors, vol. 22,
no. 12, p. 4590, 2022.
[34] P. Boldi, F. Bonchi, C. Castillo, and S. Vigna, ‘‘Voting in social networks,’’
in Proceedings of the 18th ACM conference on Information and knowledge
_management, 2009, pp. 777–786._
[35] G. Golan Gueta, I. Abraham, S. Grossman, D. Malkhi, B. Pinkas, M. K.
Reiter, D.-A. Seredinschi, O. Tamir, and A. Tomescu, ‘‘Sbft: A scalable and
decentralized trust infrastructure,’’ arXiv e-prints, pp. arXiv–1804, 2018.
HAO QIN received his M.S. degree in signal and information processing from the School of Communication and Information Engineering, Shanghai University, Shanghai, China. His research interests include security and privacy in blockchain technology.
YEPENG GUAN is currently a full professor at the School of Communication and Information Engineering, Shanghai University, China. He received the B.S. and M.S. degrees in physical geography from Central South University, Changsha, China, in 1990 and 1996, respectively, and the Ph.D. degree in geodetection and information technology from Central South University, Changsha, China, in 2000. His research interests include machine learning, cloud computing, and blockchain.
},
{
"paperId": "20d82e2cbf460df9fd7d1b461511e729d0e54f90",
"title": "A Survey of Distributed Consensus Protocols for Blockchain Networks"
},
{
"paperId": "29af628bfc1cc4e0a48559b6b63dbd71a9029eaf",
"title": "DBFT: Efficient Leaderless Byzantine Consensus and its Application to Blockchains"
},
{
"paperId": "305edd92f237f8e0c583a809504dcec7e204d632",
"title": "Blockchain challenges and opportunities: a survey"
},
{
"paperId": "1542e178b03c6a61478dcf228fc13e5c4ff2df63",
"title": "The Blockchain for Domain Based Static Sharding"
},
{
"paperId": "3b3aec1dcaa40298f58b60f03dd038536346bf58",
"title": "Ouroboros Praos: An Adaptively-Secure, Semi-synchronous Proof-of-Stake Blockchain"
},
{
"paperId": "ded90bba862c2629a737cd508c3532807acbbf86",
"title": "SBFT: A Scalable and Decentralized Trust Infrastructure"
},
{
"paperId": "187215803f494171887a012cb0116157046c292a",
"title": "Performance Modeling of PBFT Consensus Process for Permissioned Blockchain Network (Hyperledger Fabric)"
},
{
"paperId": "b2db5ed9b073e4d9393e456a24ec4ee6ac29d13f",
"title": "Voting in social networks"
},
{
"paperId": "48326c5da8fd277cc32e1440b544793c397e41d6",
"title": "Practical byzantine fault tolerance and proactive recovery"
},
{
"paperId": "1745e5dbdeb4575c6f8376c9e75e70650a7e2e29",
"title": "Proofs of Work and Bread Pudding Protocols"
},
{
"paperId": "14032d98a8eda78d6209bda9f117d1b01dcb791a",
"title": "Improved Fault-Tolerant Consensus Based on the PBFT Algorithm"
},
{
"paperId": "324b2f6d6c45187602657268a87d04d45df392ec",
"title": "Achieving cybersecurity in blockchain-based systems: A survey"
},
{
"paperId": "3323812162b3b9d051d1dba4b76a9bec78966710",
"title": "A Survey on Blockchain for Information Systems Management and Security"
},
{
"paperId": "453eef0ffedfc4779f038282d8e3ca69af32882b",
"title": "Private blockchain-envisioned multi-authority CP-ABE-based user access control scheme in IIoT"
},
{
"paperId": "d7863f72bff7150e18cecb4bb158487914dd8fb1",
"title": "A Research Survey on Applications of Consensus Protocols in Blockchain"
},
{
"paperId": null,
"title": "Improved PBFT consensus mechanism based on kg medoids"
},
{
"paperId": "00113e81ef3a179d74d988d72329d306eae78525",
"title": "Survey of consensus protocols on blockchain applications"
},
{
"paperId": null,
"title": "Design of tree topology based Byzantine fault tolerance system"
},
{
"paperId": null,
"title": "degrees in physical geography and the Ph.D. degree in geo-detection and information technology from"
},
{
"paperId": null,
"title": "His current research interests include security and privacy in blockchain technology"
}
] | 13,126
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Medicine",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/ffc05f9506580f7987bc16293405a39c44d0e9b5
|
[
"Computer Science"
] | 0.897801
|
Secure Multi-Party Delegated Authorisation For Access and Sharing of Electronic Health Records
|
ffc05f9506580f7987bc16293405a39c44d0e9b5
|
arXiv.org
|
[
{
"authorId": "2155492367",
"name": "Kheng-Leong Tan"
},
{
"authorId": "36452710",
"name": "Chi-Hung Chi"
},
{
"authorId": "49535103",
"name": "Kwok-Yan Lam"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"ArXiv"
],
"alternate_urls": null,
"id": "1901e811-ee72-4b20-8f7e-de08cd395a10",
"issn": "2331-8422",
"name": "arXiv.org",
"type": null,
"url": "https://arxiv.org"
}
|
— Timely sharing of electronic health records (EHR) across providers is essential and significance in facilitating medical researches and prompt patients’ care. With sharing, it is crucial that patients can control who can access their data and when, and guarantee the security and privacy of their data. In current literature, various system models, cryptographic techniques and access control mechanisms are proposed which requires patient’s consent before sharing. However, they mostly focus on patient is available to authorize the access of the EHR upon requested. This is impractical given that the patient may not always be in a good state to provide this authorization, eg, being unconscious and requires immediate medical attention. To address this gap, this paper proposes an efficient and secure protocol for the pre-delegation of authorization to multi-party for the access of the EHR when patient is unavailable to do so. The solution adopts a novel approach to combine self-sovereign identity concepts and framework with secure multi-party computation to enable secure identity and authorization verification. Theoretical analysis showed that it increased the efficiency of the protocol and verification processes to ensure the security and privacy of patient’s data.
|
# Secure Multi-Party Delegated Authorisation For Access and Sharing of Electronic Health Records
Kheng Leong Tan
_School of Computer Science and Engineering, Nanyang Technological University_
_Strategic Centre for Research in Privacy-Preserving Technologies, Nanyang Technological University_
Singapore
khengleong@ntu.edu.sg
Chi-Hung Chi
_Strategic Centre for Research in Privacy-Preserving Technologies, Nanyang Technological University_
Singapore
chihung.chi@ntu.edu.sg
Kwok-Yan Lam
_School of Computer Science and Engineering, Nanyang Technological University_
_Strategic Centre for Research in Privacy-Preserving Technologies, Nanyang Technological University_
Singapore
kwokyan.lam@ntu.edu.sg
**_Abstract— Timely sharing of electronic health records (EHR) across providers is essential for facilitating medical research and prompt patient care. With sharing, it is crucial that patients can control who can access their data and when, and that the security and privacy of their data are guaranteed. In the current literature, various system models, cryptographic techniques and access control mechanisms have been proposed that require the patient's consent before sharing. However, they mostly assume that the patient is available to authorize access to the EHR when it is requested. This is impractical, given that the patient may not always be in a state to provide this authorization, e.g., being unconscious and requiring immediate medical attention. To address this gap, this paper proposes an efficient and secure protocol for the pre-delegation of authorization to multiple parties for access to the EHR when the patient is unavailable to grant it. The solution adopts a novel approach that combines self-sovereign identity concepts and framework with secure multi-party computation to enable secure identity and authorization verification. Theoretical analysis showed that the protocol and its verification processes are efficient while ensuring the security and privacy of the patient's data._**
**Keywords— Data privacy, Information security, Digital**
**preservation, Identity management systems, Distributed**
**computing.**
I. INTRODUCTION
Healthcare service providers and professionals operate
various healthcare services at different locations. Usually, a
user visits more than one healthcare professional, e.g.,
general practitioner, specialists, clinics, pharmacies, etc. for
different needs. In a usual scenario, where users’ health
records that are issued by a healthcare provider are stored
locally at the provider’s data system as electronic health
records (EHRs); all the management and maintenance of the
data are on the provider side; only this provider is eligible to
edit these records. On the aspect of availability and access to
data, patients currently do not have a full, complete, let alone,
comprehensive view of his or her medical history.
Accessibility of health records by other healthcare service
providers cannot be provided on a sufficiently prompt basis
for patients and doctors to make correct and informed
decisions on a timely basis. Insurance agents are unable to
fully verify a client’s (or claimant patient) full medical
records before approving insurance claims or to facilitate
checking and reviewing of complete medical information
declarations made by clients during the purchase of insurance
policies. Incorrect, incomplete and delayed access to medical
information could lead to patients paying incorrect premiums
or making insurance claims that exceed or fall short of the
actual reimbursable amounts.
For privacy and transparency issues, patients currently do
not know how and to what extent have their medical records
been utilised and the identities of parties that have access to
their records. The level of appropriate privacy and
transparency to patients’ medical records may not have been
adequately guaranteed or established. On the fundamental
matter of determining data ownership, medical records (i.e.,
data) are created by the healthcare service provider when a
person registers as a patient with a clinical doctor or health
service provider ("HSP", e.g., a hospital). As such, the patient
and the medical practitioner or HSP should have joint
ownership of the data. But authorisation to access should
always be conferred on the patient, since s/he is the rightful
data owner who decides how the data will be utilized.
Regarding the entrusting of medical records between the
patient and the HSP, the HSP is the data custodian: it must
keep the data safe and secure, and share it only upon the
patient's approval. Between or among HSPs, the data can be
shared only after authorisation is obtained from the patient,
only on a need-to-know basis, and based on a mutually
endorsed data sharing agreement. Additionally, access to the
data is to be allowed only over a specified time frame to
ensure its usage is on a need-to-know basis during the period
of diagnosis or consultation. For health authorities to access
data during emergency situations, e.g., when the patient is
unconscious, this is only to be done after the patient has
delegated the authorization or has previously endorsed
consent to the data sharing agreement.
The use of blockchain technology ("BCT") has been
advocated by research communities [1] in an attempt to
overcome the challenges mentioned above and to address the
gaps inherent in the healthcare industry. BCT is a
decentralized database and its properties of immutability,
transparency and auditability, data provenance and
availability can address some of the security concerns of the
EHR sharing. However, it is unable to address adequately the
security requirements pertaining to data confidentiality,
privacy as well as access authorization of EHR.
In the current literature, various system models,
cryptographic schemes and techniques have been proposed.
The majority of the reviewed literature requires the data
owner's (e.g., the patient's) approval before sharing the EHR,
but has not taken into consideration the case whereby the
patient is unavailable to give that approval. There are often
scenarios, such as when the patient's health suddenly
deteriorates, that require records to be made available to an
HSP (e.g., specialists who could be remote) or other
caregivers who might not have initial access to the patient's
health records. Existing authorization models follow a
patient-centric approach where access to the EHR data must
be approved by the patient when required. This is not
practical in some scenarios, and moreover the patient may not
be in a state to provide this authorization when required.
Hence there is a need to develop an authorization delegation
mechanism whereby the patient can pre-authorize the
providers' access to his/her EHR in the event that s/he is
certified as medically unfit to do so.
Regarding the issue of control over data ownership,
the notion of self-sovereign identity (SSI) has emerged in the
past few years. SSI is a new paradigm of online identity
management [2], whereby individuals and entities can
manage their digital identity and identity-related information
(i.e., identifiers, attributes and credentials, or other personal
data) by storing them locally on their own devices (or
remotely on a distributed network) and selectively grant
access to this information to authorized third parties, without
the need to refer to any trusted authority or intermediary
operator to provide or validate these claims. SSI is a
promising concept that could be a means of confronting the
challenge of sharing and securing sensitive medical
information among healthcare parties, as well as ensuring
patients maintain sovereignty over their data.
Thus, the focus of this paper is to propose an efficient and
secure protocol for the pre-delegation of authorization [3, 4]
to multiple parties for access to the EHR. The protocol
facilitates the execution of the patient’s pre-defined
authorization to authorized parties; e.g., a panel of doctors
can access the EHR when the patient is unconscious.
This paper’s contributions are summarized as follows:
1) Proposes and designs an authorization security protocol
that enables patients, as data owners, to pre-grant
selected data requesters (e.g., healthcare providers)
access to and sharing of their EHR.
2) Adopts a novel approach to combine self-sovereign
identity (SSI) concepts and framework with secure
multi-party computation (SMPC) to enable secure
identity and authorisation verification in a decentralized
setup. To the best of knowledge, this is the first research
work that utilizes SSI, particularly Verifiable Credentials
and Decentralized Identifier, for the purpose of granting
authorisation using SMPC for verification and access to
EHR.
3) Conducts a detailed security and privacy analysis of the
security protocol using STRIDE [36] and LINDDUN
[37].
The structure of this paper is organized as follows. Section
II looks at related work and Section III provides the
background for the components of the proposed solution.
Section IV elaborates and discusses the system overview and
design. Finally, Section V sets out the directions of future
works and concludes the paper.
II. RELATED WORK
Currently, a number of studies have been conducted on
the sharing of EHR using blockchain, and on different
cryptographic schemes and access control mechanisms for
the secure sharing and access of EHR on blockchain and
cloud platforms. [5-10] proposed utilizing the blockchain platform
as storage system for access-control model, protocols for
authentication and sharing of healthcare data and access
control for shared medical records in cloud repositories.
[11-16] proposed secure medical record sharing systems using
attribute-based encryption and (multi-)signature schemes.
[17, 18] proposed blockchain-based secure and privacy-preserving
EHR sharing protocols using searchable encryption
and conditional proxy re-encryption cryptographic schemes.
[19] also uses searchable encryption but partitions patient’s
record into a hierarchical structure, each portion of which is
encrypted with a corresponding key, thus enables patient to
selectively distribute subkeys for decryption of various
portions of the record. And [20] proposed MedChain, which
combines blockchain, digest chain, and structured P2P
network techniques to provide a session-based healthcare
data-sharing scheme.
The majority of the above solutions require the data
owner (e.g., the patient) to be available to give approval
before the EHR is shared, but only a few have taken into
consideration the case whereby the patient is unavailable to
do so, e.g., when he/she is unconscious in an emergency
situation or mentally unfit to perform any task.
[21] mentioned the use of an 'allowed list' for clinicians to
access a patient's data under emergency situations via prior
one-time authentication from the patient. But as it sits under
an umbrella account of the HSP that links all clinicians (i.e.,
a shared account), the data security and privacy of the patient
can be a major concern, especially with respect to clinicians
not involved in the patient's medical consultations.
using organisational structure roles to define entity-to-entity
relationships and access rights based on functional roles and
duties. This structure is used for authorisation management
as well as access control. However, this way of access control
is specific to a pre-defined organization structure and may not
be aligned with the intention of the patient, the rightful data
owner. [23] proposes a distributed system for delegation
management using their eTRON enterprise security
architecture that enables a patient to securely delegate access
rights to his/her health records to someone s/he trusts. eTRON
functions much like the SSI framework, which has an issuer
that issues an authorization token; this token is used for
access to the EHR. The solution requires a hardware-specific
eTRON card with a chip that stores the holder's identity.
Unlike SSI, whose building-block components and standards
are defined by the W3C, eTRON is more proprietary, which
may raise interoperability issues for wide deployment.
[24] uses Attribute-Based Encryption (ABE) and allows for
delegated secure access to patient records. It similarly
requires an organizational structure of the entities or
stakeholders of a medical organization and its patients to map
out access control rights based on the entities' attributes.
Self-Sovereign Identity (SSI), a decentralised technology
for digital identity management, is a promising concept for
handling health data. It could represent a step forward in
empowering users, granting them control over their data [25].
[26] conducts a systematic literature review to investigate
state-of-the-art measures based on SSI and Blockchain
technologies for dealing with electronic health records
(EHRs). It concludes SSI is still a novel subject and, even
though adopting the principles of SSI could make patientcentric solutions more accurate, current healthcare research
has neither adequately defined nor employed it in the health
context.
The solution proposed by this paper combines an SMPC
scheme to delegate the authorization with the SSI principles
and framework to ensure the validity and verification of
identities, credentials and claims. To the best of our
knowledge, there is currently no related work on this
approach.
III. BACKGROUND
_A. Self-sovereign Identity_
Self-sovereign identity (SSI) is a digital identity
framework where an entity (an individual or an organization)
owns its identity and controls the way it is shared in a
decentralized setup, thus empowering the entity, granting it
control over its data. Decentralized Identifier (DID) and
Verifiable Claims /Credential (VC) are the essential building
blocks of the SSI framework [2]. A DID is a new type of
identifier for verifiable, self-sovereign digital identity that is
universally discoverable and interoperable across a range of
systems; it is a standard defined by the World Wide Web
Consortium (W3C) [27], analogous to a digital certificate
issued by a certificate authority [33]. It is a URL (i.e., a unique
web address) associated with at least one pair of
cryptographic keys: a public key and a private key. Together,
the DID and public keys are published on the blockchain, and
this "package" is called a DID document. A DID document
provides information on how to use that specific DID. For
example, a DID document can specify that a particular
verification method (such as a cryptographic public key or
pseudonymous biometric protocol) can be used for the
purpose of authentication. Fig. 1 shows the DID data model.
A DID by itself is only useful for the purpose of
authentication. It becomes particularly useful when used in
combination with verifiable claims or credentials (VC),
another W3C standard, that can be used to make any number
of attestations about a DID subject [28]. These attestations
include credentials and certifications that grant the DID
subject access rights or privileges. A verifiable claim contains
the DID of its subject (e.g., a HSP), the attestation (access
approval), and must be signed by the person or entity making
the claim using the private keys associated with the claim
issuer's DID (e.g., the patient). Verifiable claims are thus
methods for trusted authorities (parties) to provably issue a
certified credential associated with a particular DID to grant
consent. They also guarantee privacy by enabling methods
such as minimum/selective disclosure. Fig. 2 shows a typical
structure of a VC.
**Fig. 1. An example DID data model. The method indicates where to**
fetch the DID, and the method-specific identifier provides the DID's
unique identifier within that method.
**Fig. 2. A typical VC structure: Credential Metadata provides properties or**
attributes of the credential, Claims provide statements about a subject, and
Proofs provide cryptographic signatures tied to private keys that prove the
user sharing the VC is the subject of the information.
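The DID document and VC structures above can be sketched as plain data. The field names below follow the W3C DID Core and Verifiable Credentials data models, while all identifiers, key values, and claim wording are invented placeholders.

```python
# Hypothetical DID document and verifiable credential, mirroring the W3C
# data models; every identifier and value here is a made-up placeholder.
did_document = {
    "@context": "https://www.w3.org/ns/did/v1",
    "id": "did:example:patient123",               # method + method-specific id
    "verificationMethod": [{
        "id": "did:example:patient123#key-1",
        "type": "Ed25519VerificationKey2020",
        "controller": "did:example:patient123",
        "publicKeyMultibase": "z6Mk...",          # public key (elided)
    }],
    "authentication": ["did:example:patient123#key-1"],
}

verifiable_credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential"],
    "issuer": "did:example:patient123",           # the DO making the claim
    "expirationDate": "2026-01-01T00:00:00Z",
    "credentialSubject": {                        # the Claims of Fig. 2
        "id": "did:example:hsp456",               # the DID subject (an HSP)
        "attestation": "authorised to access EHR ehr-id-0001",
    },
    "proof": {                                    # the Proofs of Fig. 2
        "type": "Ed25519Signature2020",
        "verificationMethod": "did:example:patient123#key-1",
        "proofValue": "zQeVb...",                 # signature (elided)
    },
}
```

The top-level fields (type, issuer, expirationDate) correspond to the Credential Metadata in Fig. 2, credentialSubject to the Claims, and proof to the Proofs.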
_B. Secure Multi-Party Computation (SMPC)_
Secure multi-party computation (SMPC) protocols, such as
oblivious transfer [29], homomorphic encryption (HE) [30]
and the secret sharing scheme (SSS) [31], provide enhanced
privacy, correctness and independence of inputs, and
guarantee output delivery. SMPC suits a distributed network
like blockchain, as it deals with security and trust issues in
distributed environments. It is helpful in scenarios where
confidential data are to be shared across several organizations
or sources in order to run some kind of joint aggregation,
analysis or processing. Only specifically crafted shards of the
data are exchanged; each shard reveals nothing about the
original data and alone cannot be used to restore the original.
However, joint processing of the shards still makes it possible
to analyze the data. SSS is a form of multi-party computation
whereby a secret is divided into parts, giving each participant
its own unique part. To reconstruct the original secret, a
minimum number of parts, known as the threshold, is
required. In a threshold scheme, this number (t) is less than
the total number of parts (n); otherwise all participants are
needed to reconstruct the original secret. The secret sharing
scheme is defined as follows:
Let P be a set {P1, …, Pn} of n entities, called participants,
who take part in sending and receiving communications. An
access structure on P is a collection A of subsets of P. A
subset A ∈ A is called an authorised set (of participants);
thus any set of participants that contains an authorised subset
is authorised. In a monotone access structure, a minimal
authorised subset is a subset A ∈ A such that A \ {a} ∉ A for
all a ∈ A, and a maximal unauthorised subset is a subset
A ∉ A such that A ∪ {a} ∈ A for all a ∈ P \ A.
For i = 1, …, n, let Si denote the set of shares corresponding
to participant Pi ∈ P. A secret sharing scheme for the key set
K on the set of participants P is a subset D of
K × S1 × … × Sn together with a probability distribution
defined on D. If (k, s1, …, sn) ∈ D then we say that key k is
shared among the participants P1, …, Pn, who hold shares
s1, …, sn respectively. The probability distribution on D
induces a probability distribution on K and on each Si,
i = 1, …, n. The set D is a secret sharing scheme for K with
respect to the access structure A on P if
H(K | Si1, …, Sit) = 0 iff {Pi1, …, Pit} ∈ A,
H(K | Si1, …, Sit) > 0 iff {Pi1, …, Pit} ∉ A,
for all subsets {Pi1, …, Pit} ⊆ P. In particular, for shares
s'i1, …, s'it with s'ij ∈ Sij for j = 1, …, t, there is a k' ∈ K
such that k = k' for every (k, s1, …, sn) ∈ D with sij = s'ij for
j = 1, …, t, if and only if {Pi1, …, Pit} ∈ A. We say that such
an authorised subset of participants Pi1, …, Pit pool their
shares s'i1, …, s'it to get the key k'.
Secret sharing schemes defined on n participants whose
access structure consists of all sets of size at least t are
referred to as (t, n)-threshold schemes.
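A (t, n)-threshold scheme of this kind can be sketched with Shamir's polynomial construction. This is an illustrative stand-in (the paper does not fix a particular SSS construction), and the prime modulus and parameter choices below are ours.

```python
import secrets

P = 2**127 - 1  # a Mersenne prime, large enough for a 16-byte secret

def split(secret: int, n: int, t: int):
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    # The share for participant i is the degree-(t-1) polynomial at x = i.
    return [(i, sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    total = 0
    for xj, yj in shares:
        num, den = 1, 1
        for xm, _ in shares:
            if xm != xj:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        total = (total + yj * num * pow(den, -1, P)) % P
    return total

secret = secrets.randbelow(P)
shares = split(secret, n=5, t=3)
assert reconstruct(shares[:3]) == secret   # any t = 3 shares suffice
assert reconstruct(shares[2:]) == secret
```

Fewer than t shares determine nothing about the secret, matching the entropy condition above.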
[32] proposes a solution that provides shared encryption
(and decryption) by applying secret sharing techniques to the
sharing of a block cipher: either the encryption or the
decryption of a message sent using that block cipher is a
process distributed amongst a group of entities. They
proposed two techniques, cascading and XOR, for the
composition of block ciphers. When an authorised group
wishes to encrypt a message or decrypt some ciphertext, they
cooperate by taking part in a protocol that enables them to
perform the distributed computation of the cipher. This is the
approach this paper adapts.
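The XOR composition idea can be illustrated as follows. A SHA-256-derived keystream stands in for a real block cipher in a stream mode, and the keys and message are arbitrary; this is a sketch, not production cryptography.

```python
import hashlib
import secrets
from functools import reduce

def stream(key: bytes, n: int) -> bytes:
    """Toy keystream (first n bytes of SHA-256 of the key, n <= 32),
    standing in for a block cipher; for illustration only."""
    return hashlib.sha256(key).digest()[:n]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

keys = [secrets.token_bytes(32) for _ in range(3)]   # one key per entity
msg = b"shared-protected message!"                   # <= 32 bytes for brevity

# XOR composition: the group ciphertext is the message XORed with every
# member's keystream, so no single entity can decrypt on its own.
ct = reduce(xor, (stream(k, len(msg)) for k in keys), msg)

# Cooperative decryption: each entity strips its own layer in turn.
pt = ct
for k in keys:
    pt = xor(pt, stream(k, len(pt)))
assert pt == msg
```

Because XOR is commutative, the entities can contribute their layers in any order, which is what makes the distributed computation practical.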
IV. SYSTEM OVERVIEW AND DESIGN
_A. Solution Overview_
The focus of the proposed solution is on delegation of
authorization to HSPs or data requesters to access the
patient’s own EHR in the event that the patient is unconscious
or mentally unfit to grant the approval in order for immediate
medical care to proceed. Each of the participants in this
ecosystem is issued with a DID, whether it is an individual
(e.g., a patient) or an entity (e.g., an organization such as an
HSP). The DID is also recorded on the blockchain (BC). An
HSP typically has a
copy of the patient’s EHR in their local system. Patient, as
the data owner (DO) of the EHR, can request for his/her EHR
to be accessible externally, eg via a cloud storage provider
(CSP). HSP encrypts the EHR and stores it in CSP, the data
custodian (DC). HSP provides DO with the secret key, ehr-id
and DC identity. The secret key is required to decrypt and
access the EHR and to ensure multi-party validation before
the secret is revealed, multi-party computation is required.
DO will generate the set of keys according to the number of
authorized parties (n) and the minimum parties (t) needed to
reveal the secret key. DO will encrypt the secret key with the
set of keys. This set of keys is then split partially to the _n_
parties whereby _t parties will have all the set of keys to_
decrypt and derive the secret key.
In order to ensure the validity of the authorization process,
one or more Notaries are identified as witnesses. A Notary
can be a lawyer or a trusted independent party. This is
analogous to the Power of Attorney (Lasting Power of
Attorney) process [38]. The set of keys is split among the t
parties (Notaries and DC) and encrypted with their respective
public keys which are recorded in their DIDs. For
transparency and accessibility, the ehr-id and the encrypted
key sets are recorded on the blockchain using a DO-generated
_pseudoID. DO can now issue a signed verifiable credential_
(VC) proving who are authorised to access to his/her EHR,
with details of Notaries (witness/lawyer), DC, ehr-id, DO’s
_pseudoID, expiry date and the encrypted secret key. By using_
a new pseudoID every time, DO’s privacy can be protected.
The VCs are cryptographically signed by the DO and issued
to authorised parties, the data requesters (DRs). The VC
provides DRs with the claim that DRs is authorized by DO
for the access to the EHR identified by ehr-id and only VC
has the link between DO’s DID and _pseudoID. When DR,_
holder of the VC, needs access to DO’s EHR, he/she uses the
VC and discloses the essential details to one of the Notaries to
verify the authorisation. The Notary will decide depending on
validity and expiry of the VC, and a check of the revocation
list. Once the Notary has validated it, DR will need to work
together with DC and the Notary to decrypt and retrieve the encrypted
secret key in the VC. Since only DC knows the storage
location of DO’s EHR linked to the ehr-id, it will provide a
link for DR to access the encrypted EHR. DC can impose a
time period for the access, e.g., the availability of the link. DR
can then download the encrypted EHR and decrypt it with the
secret key to access the EHR content.
In the SSI model, DO is the issuer, Notary and DC are the
verifiers, and DR is the holder of the VC. The VCs are
presented via a Verifiable Presentation (VP); with VPs, the
holders (in our case, DR) can freely choose which
information (from the underlying VCs) they include in a
Verifiable Presentation and thus share with a relying party.
This is the selective disclosure feature of the SSI solution.
Additional access rights and attributes can be defined in the
VC to provide more fine-grained access control of the EHR
content.
_B. Functional Flow_
The functional flows are broken down into three parts,
namely: Secure storing of EHR, Delegation of authorization
to DRs, and Secure Access to EHR. It is assumed that all
parties’ DIDs are recorded and verifiable on blockchain (BC).
_1)_ **_Secure storing of EHR_**
The patient has consulted a medical practitioner from an HSP
and his/her EHR is recorded in the HSP's private data store.
The patient requests the EHR to be made available to him/her.
a) HSP retrieves patient’s (DO) EHR from its private data
store and encrypts it with sk before uploading to a public
cloud storage provider [34] (DC). HSP is provided with
**_ehr-id which is used to locate the encrypted EHR_**
(EHRsk) in DC’s data store. The DC is assumed to be
semi-honest.
b) HSP encrypts sk and **_ehr-id_** with DO's public key (which
is within DO's **_DID_**) and sends them to DO through a secure
channel, e.g., Transport Layer Security (TLS).
c) DO decrypts with its private key (from DO's DID in its
personal wallet) and stores the sk and ehr-id.
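The steps above can be sketched end to end. A SHA-256 counter keystream stands in for a real cipher such as an AEAD (a toy, not production cryptography), and the record contents and locator derivation are invented for the example.

```python
import hashlib
import secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher (SHA-256 counter keystream); applying it twice
    with the same key decrypts. Stand-in for a real cipher, illustration
    only."""
    out = bytearray()
    for off in range(0, len(data), 32):
        ks = hashlib.sha256(key + off.to_bytes(8, "big")).digest()
        chunk = data[off:off + 32]
        out += bytes(a ^ b for a, b in zip(chunk, ks))
    return bytes(out)

# a) HSP encrypts DO's EHR with a fresh secret key sk and uploads it to
#    the DC, which returns an ehr-id locating the encrypted record EHRsk.
sk = secrets.token_bytes(32)
ehr = b'{"patient": "DO", "notes": "example record"}'
ehr_sk = keystream_xor(sk, ehr)                    # EHRsk, stored at DC
ehr_id = hashlib.sha256(ehr_sk).hexdigest()[:16]   # locator (made up here)

# b)-c) HSP sends sk and ehr-id to DO over a secure channel (e.g. TLS);
#       DO stores them and can later decrypt the record.
assert keystream_xor(sk, ehr_sk) == ehr
```

In the actual protocol, sk and ehr-id would additionally be encrypted under DO's public key from its DID before transmission.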
_2)_ **_Delegation of authorization to DRs_**
DO wishes to pre-assign parties with the authorization to
access her/his EHR in the event s/he is unfit to do so.
a) DO identifies the parties (DRs) which it would like to
grant the authorisation to access its EHR.
b) DO generates a set of keys (depending on n: number of
Notaries + DC, and t: minimum parties needed, e.g., [n=3,
t=2]) and a nonce, r, and encrypts **_r_** with the keys and
XORs the result with sk to derive **_cipherKey_**.
c) DO splits the keys each for DC and Notaries and
encrypts them using their public keys to derive,
**_encryptedKeysi, i is the party index._**
d) DO generates a pseudoID for BC and records
**_encryptedKeysi, its pseudoID and ehr-id on BC._**
e) DO generates for each DR a verifiable credential (VC)
and input the **_DIDs of the DR, Notaries and DC,_**
**_cipherKey, pseudoID,_** **_ehr-id and_** **_VC expiry date. DO_**
digitally signs each **_VC using its private key, and_**
encrypts the data using each DR’s public key.
f) DO issues the VCs to the DRs through secure channel.
g) DRs decrypts the VCs with their private keys and store
the VCs in their repository.
_3)_ **_Secure Access to EHR_**
In the event that the DO is unconscious and unable to authorize
access to his/her EHR:
a) A DR needs access to the DO's EHR.
b) The DR retrieves the **_VC_** from its repository and reads the DO's
**_pseudoID_**, **_cipherKey_**, and the Notaries' and DC's **_DIDs_**.
c) The DR extracts the nonce r from **_cipherKey_**.
d) The DR contacts a Notary and presents the **_VC_** as a
verifiable presentation, disclosing only the relevant
details – the DO's signature, r, pseudoID and ehr-id.
e) The Notary verifies the DO's state and availability – an offline
process.
f) If the DO is available, the Notary will seek the DO's approval;
otherwise access is granted based on the authenticity and expiry of the VC
as well as a check against the revocation list available on
the BC.
g) If access is granted, the Notary reads its keys
(**_encryptedKeysᵢ_**) from the BC based on the DO's **_pseudoID_** and **_ehr-id_**.
The Notary decrypts its keys with its private
key and encrypts r with them to derive **_partialCipherᵢ_**.
**_partialCipherᵢ_** is returned to the DR.
h) The DR similarly presents the VC to the DC with details of the DO's
signature, the Notary's DID, pseudoID, r, and ehr-id.
i) The DC verifies the VC, optionally with the Notary.
j) The DC searches the BC based on pseudoID and **_ehr-id_**, reads
and decrypts its keys (**_encryptedKeysᵢ_**) with its private
key and encrypts r with them to derive **_partialCipherᵢ_**.
**_partialCipherᵢ_** is returned to the DR together with the link
to download the encrypted EHR, **_EHRsk_**.
k) The DR XORs all received **_partialCipherᵢ_** with
**_cipherKey_** to derive sk, which is used to decrypt **_EHRsk_**
and retrieve the EHR records.
In summary, the secret key that encrypts the EHR is derived via SMPC:
the DR extracts the nonce from **_cipherKey_** in the VC, has the DC and
the Notary each encrypt the nonce with the keys they hold, and XORs the
results together with **_cipherKey_** to derive the secret key that
decrypts the encrypted EHR.
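This SMPC derivation can be sketched in a few lines, assuming one Notary plus the DC and that all designated parties contribute (the t-of-n case in step 2b additionally splits keys following the distributed block-cipher approach of [32]). A keyed hash stands in for each party's encryption of the nonce, and all function and variable names below are ours rather than the paper's.

```python
import hashlib
import os
from functools import reduce

def party_encrypt(key: bytes, nonce: bytes) -> bytes:
    """Stand-in for a party encrypting the nonce r under its key (keyed hash)."""
    return hashlib.sha256(key + nonce).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Delegation (step 2b): the DO derives cipherKey from sk, the nonce r,
# and one key per contributing party (here: one Notary and the DC).
sk = os.urandom(32)                               # secret key protecting the EHR
r = os.urandom(16)                                # nonce carried in the VC
party_keys = [os.urandom(32), os.urandom(32)]     # Notary's key, DC's key
cipher_key = reduce(xor, (party_encrypt(k, r) for k in party_keys), sk)

# Access (steps 3g and 3j): each party encrypts r under its own key and
# returns partialCipher_i to the DR.
partial_ciphers = [party_encrypt(k, r) for k in party_keys]

# Step 3k: the DR XORs all partialCipher_i with cipherKey to recover sk.
recovered_sk = reduce(xor, partial_ciphers, cipher_key)
assert recovered_sk == sk

# A single party's contribution alone leaves sk masked by the other party's term.
assert xor(cipher_key, partial_ciphers[0]) != sk
```

Because every contributing party's term enters the XOR, no proper subset of the parties (nor the DR alone) learns sk, which is what defeats the pairwise collusion cases discussed in the threat analysis.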
_C. Threat Modelling_
Blockchain only addresses a portion of the desired
security requirements, namely transparency, integrity and
availability, and thus provides a certain level of trust.
However, the remaining security requirements also need to be
addressed, namely confidentiality, privacy and an improved level of trust.
To put the security and privacy threats of the proposed solution
into better perspective, threat modeling is performed
using a data flow diagram (Fig. 3) together with a security
analysis (TABLE I) and a privacy analysis (TABLE II) to
illustrate the threats exhibited by the system functions. The
data flow diagram in Fig. 3 illustrates the data flow as
elaborated in the functional flow in the earlier section. The tables
show the likely threats faced by each of the data flow elements. A
discussion of the threats and how the proposed solution
addresses them follows:
**Fig. 3. Data Flow Diagram for a typical system for storing and sharing EHR using blockchain**
TABLE I. SECURITY ANALYSIS USING STRIDE
_1)_ _Security Analysis_
A security analysis is performed using STRIDE. STRIDE
[36] is a threat model developed by Praerit Garg and Loren
Kohnfelder at Microsoft for identifying security threats. It
provides a mnemonic for security threats in six categories,
namely: Spoofing, Tampering, Repudiation, Information
disclosure, Denial of Service and Elevation of privileges. The
security threats present in the system according to these six
categories are discussed below. In addition, the collusion and key
management threats are also discussed:
- **Spoofing:**
(i) The identities of the participants are recorded on the BC
in the form of DIDs, which store the public keys
that can verify a participant's identity and
signature.
(ii) The corresponding private key is safely stored in the
participant's wallet or repository, from which it can be
retrieved for cryptographic operations and
generating digital signatures.
_Threat:_ A likely spoofing-of-identity threat lies in the
validation and verification of the DID prior to recording
it onto the BC, which relates to the implementation of the
consensus protocol.
- **Tampering:**
(i) The VC is digitally signed by the issuer, which
ensures the integrity and authenticity of the signed
content, since signing requires the issuer's private key.
(ii) The DO's EHR is encrypted with a secret key known only
to the DO and the HSP. One additional measure is
for the DO to digitally sign the secret key before
encrypting it with the set of keys.
(iii) The DC is unable to tamper with the EHR since it
is encrypted, and there is no benefit to the DC in
providing a tampered link to the DR for downloading the
encrypted EHR.
- **Repudiation:**
(i) The interactions between the participants are direct
and verification of identities is immediate; thus the
participants cannot deny the interactions and actions
taken.
(ii) The VC is digitally signed and issued by the DO to the DRs.
(iii) Only the DR knows how to derive the secret key to
decrypt the data downloaded from the link provided
by the DC.
- **Information Disclosure:**
(i) Only the pseudoID and ehr-id are recorded on the BC.
(ii) The DR can selectively disclose only the need-to-know
information when requesting verification.
(iii) The DC cannot disclose any content of the DO's encrypted
EHR other than its storage location.
_Threat:_ The only threat is the HSP's or DR's revelation of EHR
content after retrieving the cleartext. This can be
mitigated through legally binding policies such as NDAs or codes
of ethics.
- **Denial of Service:**
(i) Except for the DC, which holds the location of the EHR, all
contents are on the BC, which is decentralized and
available. However, this threat should already be
mitigated by the DC, as with most cloud storage
providers.
TABLE II. PRIVACY ANALYSIS USING LINDDUN
- **Elevation of Privileges:**
(i) The VC is only issued to the authorized DRs, with their details
also recorded on the VC. An unauthorized DR cannot
use the VC as its own or download the encrypted
EHR.
(ii) The DC would have mitigated this risk as part of its
security posture.
(iii) Multiple parties are involved: the Notary starts off
the verification, and the DC finally
grants access to the EHR. A possible threat is collusion.
- **Collusion:**
(i) Notary with DR – they do not know the other partial
keys held by the DC or the location of the EHR.
(ii) DR with DC – they do not know the other partial
keys held by the Notary.
_Threat:_ Notary with DR and DC – this is the only likely
collusion threat, and it requires all three parties.
- **Key management:**
(i) The set of keys used for SMPC is encrypted and
stored on the BC, linked to the DO's pseudoID and ehr-id.
This removes the need for the participants (Notaries and DC)
to store them in a secure location; they can read them
from the BC and extract the keys using their private
keys.
(ii) The DO need only store the secret key to the encrypted
EHR and, optionally, the nonce encrypted using the
set of keys.
_2)_ _Privacy analysis_
A privacy analysis is performed using LINDDUN.
LINDDUN [37] is a privacy threat modeling methodology
that supports analysts in systematically eliciting and
mitigating privacy threats in software architectures. It
provides a mnemonic for privacy threats in seven categories,
namely: Linkability, Identifiability, Non-repudiation,
Detectability, Disclosure of information, Unawareness, and
Non-compliance. The privacy threats present in the system
according to these seven categories are:
- **Linkability** (able to link items of interest to learn the
identity of the data subject(s) involved):
(i) Only the DO's pseudoID is used on the BC and there is no
link between it and the DO's identity (DID). The only
link is found in the VC, which is stored internally;
this link is required to identify the DO's ehr-id block on
the BC.
(ii) The EHR is not publicly available and only the DC can
grant access to it. In addition, the DC only knows the
ehr-id and not the content of the encrypted EHR.
- **Identifiability** (being able to identify a data subject from a
set of data subjects through an item of interest):
(i) The pseudoID on the BC cannot identify who the DO is.
A new pseudoID is generated for every VC.
(ii) There is no content on the DC that can identify who the
DO is.
- **Non-repudiation** (from the data owner's perspective, being able
to deny a claim):
(i) The VC stores all the details required to identify the DO,
together with his/her signature authorizing the DRs.
(ii) Only the DR knows how to derive the secret key to
decrypt the data downloaded from the link provided
by the DC.
(iii) The interactions of the DR with the Notary and the DC are also
recorded on the BC for reference.
- **Detectability** (being able to distinguish whether an item of
interest about a data subject exists, regardless of being
able to read the contents itself):
(i) With only the pseudoID and ehr-id recorded on the BC,
there is no trace to detect that it is the DO's EHR, only
that an EHR is stored. The pseudoID will not be the same
for the same DO.
(ii) The EHR is not publicly available and only the DC can
grant access to it. In addition, the DC only knows the
ehr-id and not the content of the encrypted EHR.
- **Disclosure of Information:**
(i) The BC only stores the pseudoID, ehr-id and encrypted keys,
with no other details.
_Threat:_ The only risk is the DR's disclosure of information after
decrypting the EHR; this is beyond any control since the EHR
is already in the clear.
- **Unawareness** (the data subject being unaware of actions
performed on his/her personal data):
(i) The DRs need to request the Notary in order to access the EHR.
The Notary will notify the DO of any access to his/her data.
_Threat:_ The Notary may choose not to notify the DO. As the
solution assumes the DO is unavailable to grant
access, the DO can detect the access only via the transaction
recorded on the BC.
- **Non-compliance** (an action performed on personal data that is
not compliant with legislation, regulation, and/or
policy):
(i) As participants of this healthcare ecosystem, all
parties will have accepted the terms and
conditions of use and agreed to abide by the medical codes of
ethics, policies, laws and regulations specific to their
country.
To further analyse the disclosure of information and the
information available to each participant and non-participant,
a 'who knows what' table is shown in TABLE III. This provides
further analysis of the privacy protection of the proposed solution;
any information available to any participant is on a need-to-know
basis. Utilising the selective disclosure of the VC, the DR need not
provide the encrypted secret key, **_cipherKey_**, to the Notary or
the DC when requesting them to verify the VC. The DO's pseudoID used
on the BC is not linkable to the DO's DID unless the parties are part
of the process, since the partial keys are recorded on the BC and the
DO's DID needs to be verified.
Interactions between parties are also on a need-to-know basis.
The DO is required to interact with the HSP to locate its EHR in the DC,
and with the DRs when issuing the VCs to them. A DR will need to
interact with the DO to receive the VC, and with the Notary and the DC
to verify the VC and receive the **_partialCipherᵢ_**.
V. CONCLUSION AND FUTURE WORK
Timely sharing of electronic health records (EHR) across
providers is essential and is of great positive significance in
facilitating medical research on diseases and doctors'
diagnoses for prompt patient care. It is also important for the
patient, as the rightful data owner, to have full control of
his/her EHR and to grant access to the EHR accordingly.
Current research has looked into different
cryptographic techniques and access control to ensure the
security and privacy of the shared EHR. However, it is also
essential to address the authorization concerns when the patient
is unavailable, e.g., unconscious in an emergency, to grant
consent for access to the EHR for immediate medical
attention. This paper proposed and designed an authorization
security protocol that enables patients, as data owners, to pre-grant
selected data requesters access to their EHR and to share it.
The design adopted Self-Sovereign Identity (SSI)
concepts and frameworks, particularly the Decentralized
Identifier (DID) and the Verifiable Claim/Credential for
authentication and authorization respectively, combined
with secure multi-party computation (SMPC), to enable
secure identity and authorization verification in the sharing of the
EHR and to protect the patient's privacy through selective
disclosure. Security and privacy analyses of the protocol were
conducted and discussed.
An implementation of the protocol is on-going. A suitable
SSI framework [2] will be adopted and an SMPC
implementation assessing the XOR and cascade approaches
[32] will be conducted. The SMPC implementation will be
integrated into the eventual SSI framework and blockchain
platform with sample medical data [35] for testing. In
addition, access rights and attributes can be further defined in
the VC to provide more fine-grained access control of the
EHR content. The proposed solution and implementation can
also be explored and adapted for other domains, e.g., the
processing and execution of a Lasting Power of Attorney
(LPA) and wills.
TABLE III. WHO KNOWS WHAT
REFERENCES
[1] S. Arsheen, and K. Ahmad, "SLR: A Systematic Literature Review on
Blockchain Applications in Healthcare." In Proceedings of
International Conference on Information Science and Communications
Technologies (ICISCT), pp. 1-6. IEEE, 2021.
[2] K.-L. Tan, C.-H. Chi, and K.-Y. Lam, “Analysis of Digital Sovereignty
and Identity: From Digitization to Digitalization,” arXiv preprint
arXiv:2202.10069, 2022.
[3] X.-B. Zhao, K.-Y. Lam, S.-L. Chung, M. Gu, and J.-G. Sun,
"Authorization mechanisms for virtual organizations in distributed
computing systems." In Proceedings of Australasian Conference on
Information Security and Privacy, pp. 414-426. Springer, Berlin,
Heidelberg, 2004.
[4] J.-P. Yong, K.-Y. Lam, S.-L. Chung, M. Gu, and J.-G. Sun, "Enhancing
the scalability of the community authorization service for virtual
organizations." In Proceedings of Advanced Workshop on Content
Computing, pp. 182-193. Springer, Berlin, Heidelberg, 2004.
[5] X. Yue, H. Wang, D. Jin, M. Li, and W. Jiang, “Healthcare data
gateways: found healthcare intelligence on blockchain with novel
privacy risk control,” Journal of medical systems, vol. 40, no. 10, pp.
1-8, 2016.
[6] J. Zhang, N. Xue, and X. Huang, “A secure system for pervasive social
network-based healthcare,” IEEE Access, vol. 4, pp. 9239-9250, 2016.
[7] Q. Xia, E. B. Sifah, K. O. Asamoah, J. Gao, X. Du, and M. Guizani,
“MeDShare: Trust-less medical data sharing among cloud service
providers via blockchain,” IEEE access, vol. 5, pp. 14757-14767, 2017.
[8] X. Liang, J. Zhao, S. Shetty, J. Liu, and D. Li, "Integrating blockchain
for data sharing and collaboration in mobile healthcare applications."
In Proceedings of IEEE 28th annual international symposium on
personal, indoor, and mobile radio communications (PIMRC), pp. 1-5.
IEEE, 2017.
[9] H. Guo, W. Li, M. Nejad, and C.-C. Shen, "Access control for
electronic health records with hybrid blockchain-edge architecture." In
Proceedings of IEEE International Conference on Blockchain
(Blockchain), pp. 44-51. IEEE, 2019.
[10] K. Rabieh, K. Akkaya, U. Karabiyik, and J. Qamruddin, "A secure and
cloud-based medical records access scheme for on-road emergencies."
In Proceedings of 15th IEEE Annual Consumer Communications &
Networking Conference (CCNC), pp. 1-8. IEEE, 2018.
[11] H. Wang, and Y. Song, “Secure cloud-based EHR system using
attribute-based cryptosystem and blockchain,” Journal of Medical
Systems, vol. 42, no. 8, pp. 1-9, 2018.
[12] R. Guo, H. Shi, Q. Zhao, and D. Zheng, “Secure attribute-based
signature scheme with multiple authorities for blockchain in electronic
health records systems,” IEEE access, vol. 6, pp. 11676-11686, 2018.
[13] Y. Sun, R. Zhang, X. Wang, K. Gao, and L. Liu, "A decentralizing
attribute-based signature for healthcare blockchain." In Proceedings of
27th International Conference on Computer Communication and
Networks (ICCCN), pp. 1-9. IEEE, 2018.
[14] J. Vora, P. Italiya, S. Tanwar, S. Tyagi, N. Kumar, M. S. Obaidat, and
K.-F. Hsiao, "Ensuring privacy and security in e-health records." In
Proceedings of International Conference on Computer, Information
and Telecommunication Systems (CITS), pp. 1-5. IEEE, 2018.
[15] H. Guo, W. Li, E. Meamari, C.-C. Shen, and M. Nejad, "Attribute-based multi-signature and encryption for EHR management: A
blockchain-based solution." In Proceedings of IEEE International
Conference on Blockchain and Cryptocurrency (ICBC), pp. 1-5. IEEE,
2020.
[16] S. Ghaffaripour, and A. Miri, "Application of blockchain to patient-centric access control in medical data management systems." In
Proceedings of IEEE 10th Annual Information Technology,
Electronics and Mobile Communication Conference (IEMCON), pp.
0190-0196. IEEE, 2019.
[17] L. Chen, W.-K. Lee, C.-C. Chang, K.-K. R. Choo, and N. Zhang,
“Blockchain based searchable encryption for electronic health record
sharing,” Future generation computer systems, vol. 95, pp. 420-429,
2019.
[18] Y. Wang, A. Zhang, P. Zhang, and H. Wang, “Cloud-assisted EHR
sharing with security and privacy preservation via consortium
blockchain,” IEEE Access, vol. 7, pp. 136704-136719, 2019.
[19] J. Benaloh, M. Chase, E. Horvitz, and K. Lauter, "Patient controlled
encryption: ensuring privacy of electronic medical records." In
Proceedings of the 2009 ACM Workshop on Cloud Computing
Security, pp. 103-114. 2009.
[20] B. Shen, J. Guo, and Y. Yang, “MedChain: Efficient healthcare data
sharing via blockchain,” Applied sciences, vol. 9, no. 6, pp. 1207, 2019.
[21] Y. Zhuang, L. R. Sheets, Y.-W. Chen, Z.-Y. Shae, J. J. Tsai, and C.-R.
Shyu, “A patient-centric health information exchange framework using
blockchain technology,” IEEE Journal of Biomedical and Health
Informatics, vol. 24, no. 8, pp. 2169-2176, 2020.
[22] B. Blobel, “Authorisation and access control for electronic health
record systems,” International Journal of Medical Informatics, vol. 73,
no. 3, pp. 251-257, 2004.
[23] M. F. F. Khan, and K. Sakamura, "A Distributed Approach to
Delegation of Access Rights for Electronic Health Records." In 2020
International Conference on Electronics, Information, and
Communication (ICEIC), pp. 1-6. IEEE, 2020.
[24] M. Joshi, K. P. Joshi, and T. Finin, “Delegated authorization
framework for EHR services using attribute based encryption,” IEEE
Transactions on Services Computing, 2019.
[25] X. Liang, S. Shetty, J. Zhao, D. Bowden, D. Li, and J. Liu, "Towards
decentralized accountability and self-sovereignty in healthcare
systems." In Proceedings of International Conference on Information
and Communications Security, pp. 387-398. Springer, Cham, 2017.
[26] A. Siqueira, A. F. Da Conceição, and V. Rocha, "Blockchains and Self-Sovereign Identities Applied to Healthcare Solutions: A Systematic
Review,” arXiv preprint arXiv:2104.12298, 2021.
[27] World Wide Web Consortium, "W3C DID Primer for Introduction,"
Accessed: January 2022, available: https://github.com/w3c-ccg/did-primer.
[28] P. Dunphy, and F. A. Petitcolas, “A first look at identity management
schemes on the blockchain,” IEEE Security & Privacy, vol. 16, no. 4,
pp. 20-29, 2018.
[29] C. Crépeau, J. v. d. Graaf, and A. Tapp, "Committed oblivious transfer
and private multi-party computation." In Proceedings of Annual
International Cryptology Conference, pp. 110-123. Springer, Berlin,
Heidelberg, 1995.
[30] X. Yi, R. Paulet, and E. Bertino, "Homomorphic encryption,"
Homomorphic encryption and applications, pp. 27-46: Springer, 2014.
[31] A. Shamir, “How to share a secret,” Communications of the ACM, vol.
22, no. 11, pp. 612-613, 1979.
[32] K. M. Martin, R. Safavi-Naini, H. Wang, and P. R. Wild, “Distributing
the encryption and decryption of a block cipher,” Designs, Codes and
Cryptography, vol. 36, no. 3, pp. 263-287, 2005.
[33] M. Ge, and K.-Y. Lam, "Self-initialized distributed certificate authority
for mobile ad hoc network." In Proceedings of International
Conference on Information Security and Assurance, pp. 392-401.
Springer, Berlin, Heidelberg, 2009.
[34] J. Guo, W. Yang, K.-Y. Lam, and X. Yi, "Using blockchain to control
access to cloud data." In Proceedings of International Conference on
Information Security and Cryptology, pp. 274-288. Springer, Cham,
2018.
[35] HL7 International, "Welcome to FHIR," Accessed: January 2022,
available: https://www.hl7.org/fhir/
[36] Microsoft, “The STRIDE Threat Model,” Accessed: January 2022,
available: https://docs.microsoft.com/en-us/previous-versions/commerce-server/ee823878(v=cs.20)?redirectedfrom=MSDN
[37] DistriNet Research Group,“LINDDUN privacy engineering,”
Accessed: January 2022, available: https://www.linddun.org/
[38] Wikipedia, “Power Of Attorney” Accessed: January 2022, available:
https://en.wikipedia.org/wiki/Power_of_attorney
Paper available at: https://arxiv.org/abs/2203.12837 (published 2022-03-24).
Theoretical and Applied Cybersecurity
UDC 004.75
# Number of Confirmation Blocks for Bitcoin and GHOST Consensus Protocols on Networks with Delayed Message Delivery
## L. V. Kovalchuk¹,³,ᵃ, D. S. Kaidalov³, A. O. Nastenko³, O. V. Shevtsov²,³, M. Yu. Rodinko²,³, R. V. Oliynykov²,³,ᵇ
1National Technical University of Ukraine "Igor Sikorsky Kyiv Polytechnic Institute"
2V. N. Karazin Kharkiv National University
3Input Output HK
## Abstract
A specific number of transaction confirmation blocks determines average time of receiving and accepting payments at
cryptocurrencies, and the shortest confirmation time for the same level of blockchain security provides the best user
properties. Existing papers on transaction confirmation blocks for Bitcoin use implicit assumption of prompt spreading of
Bitcoin blocks over the network (that is not always the case for the real world conditions). The newer publications with
rigorous analysis and proofs of Bitcoin blockchain properties that take into account network delays provide asymptotic
estimates, with no specific numbers for transaction confirmation blocks.
We propose three methods for determination of the required number of confirmation blocks for Bitcoin and
GHOST on networks with delayed message delivery, with different models that take into account the possibility of faster
adversarial node synchronization. For GHOST we propose the first (to our knowledge) strict theoretical method that allows
one to obtain the required number of confirmation blocks for a given attacker's hashrate and attack success probability.
_Keywords: Bitcoin, GHOST, consensus protocol, Proof-of-Work_
## Introduction
Bitcoin and many other altcoins provide decentralized payment services with no trusted parties. Modern
cryptocurrencies are based on public transaction ledgers
(blockchains) that are maintained by each participant (a
full node) of a distributed peer-to-peer network. A consistent transaction ledger is built using a consensus protocol
that must be robust to arbitrary behavior of an attacker with bounded resources, as well as to honest
nodes' failures or network outages. The latter leads to
the possibility of several unintentional alternative histories of the blockchain being concurrently run by honest
nodes, and to the need for the consensus protocol to select the
single "correct" version of the blockchain among the
available branches on discovering them.
These properties of cryptocurrency distributed consensus protocols also allow intentional adversarial creation of a blockchain branch for a double-spend attack,
in which a transaction is reverted or cancelled (e.g., after
a merchant has sent goods or provided services), so the attacker gets the goods or services and ultimately keeps his coins.
To prevent this type of attack (i.e., to decrease its
success probability below an acceptably small threshold), it is
necessary to wait for a certain number of blocks following
the one containing the transaction of interest, after which it
is accepted by the merchant.
The exact number of such confirmation blocks is
important for the application properties of a cryptocurrency
and closely related to the average time of receiving and
accepting payments. The shortest confirmation time
for the same level of transaction security provides the
best user properties for a cryptocurrency.
ᵃ lusi.kovalchuk@gmail.com
ᵇ roman.oliynykov@iohk.io
**Previous work.** The first model that shows exponential decrease of the attack success probability with the
number of confirmation blocks was given in the original Bitcoin paper [1]. It uses a random walk process
with a single random variable that follows a binomial distribution (with a Poisson approximation). There is also
an implicit assumption of prompt spreading of Bitcoin
blocks over a peer-to-peer network. Though the paper mentions that several honest chains may be visible to
nodes, the model takes into account only
an intentionally built alternative adversarial chain.
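For concreteness, the model of [1] gives a closed-form attack success probability from which the number of confirmation blocks can be computed; the sketch below reproduces it (function names are ours). It inherits the prompt-propagation assumption discussed above, which is exactly the assumption relaxed later in this paper.

```python
import math

def attack_success_probability(q: float, z: int) -> float:
    """Probability that a double-spend attacker controlling a fraction q of
    the hashrate catches up after z confirmation blocks (model of [1])."""
    p = 1.0 - q
    if q >= p:
        return 1.0  # a majority attacker eventually succeeds
    lam = z * q / p  # expected attacker progress while z honest blocks appear
    prob = 1.0
    for k in range(z + 1):
        poisson = math.exp(-lam) * lam ** k / math.factorial(k)
        prob -= poisson * (1.0 - (q / p) ** (z - k))
    return prob

def confirmations_needed(q: float, threshold: float) -> int:
    """Smallest z with attack success probability below the given threshold."""
    if q >= 0.5:
        raise ValueError("no finite z suffices against a majority attacker")
    z = 0
    while attack_success_probability(q, z) >= threshold:
        z += 1
    return z

# Reproduces the table in [1] for P < 0.1%:
# a 10% attacker requires 5 confirmations, a 30% attacker requires 24.
print(confirmations_needed(0.10, 0.001), confirmations_needed(0.30, 0.001))
```

Under this model, the attacker's progress while the honest chain gains z blocks is approximated as Poisson with mean zq/p, and catching up from a deficit of z−k blocks succeeds with probability (q/p)^(z−k).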
The paper of M. Rosenfeld [2] uses an assumption of
a random variable that follows a negative binomial distribution for defining the difference in the number of
blocks generated by honest miners and by an adversary.
The later paper [3] by C. Grunspan and R. Perez-Marco
provides proofs on the selection of the negative binomial
distribution for the analyzed random variable, and gives
strict estimates of the number of confirmation blocks.
The paper of C. Pinzon and C. Rocha [4] generalizes the
approach from [2] and incorporates generation time into
the model of the double-spend attack.
All the papers mentioned above use models with an implicit assumption of prompt spreading of Bitcoin blocks over the network, which leads to the following consequences:
- network synchronization is done promptly, and each block is visible to all nodes immediately at the very moment it is mined and published;
- two or more honest miners cannot generate blocks simultaneously (the probability of this event is zero), and an unintentional fork cannot be created;
- the probability that two different chains of the same length are mined by honest miners is also zero;
- the speed of growth of the main chain is equal to the honest miners' block generation speed.
These statements do not always hold under the real-world conditions in which cryptocurrencies operate, so a different model should be used, one that takes into account the delays introduced by peer-to-peer network message delivery.
The paper [5] introduces a formal definition and analysis of the Bitcoin backbone protocol when the participants operate in a synchronous and partially synchronous communication network (one with an upper bound on message delivery delays). An approach for formal analysis in asynchronous networks was presented in [6]. Further development of [5] is presented in [7], which allows strict formalization of the target recalculation function in Bitcoin.

These papers provide generalized analysis with proofs of asymptotic estimates on the achievement of the main blockchain properties (persistence and liveness), but do not give any method for computing the required number of confirmation blocks for practical cryptocurrency application.
In [8] and [9] a tradeoff between transaction throughput and security of blockchains was studied, and the GHOST rule was proposed, which allows achieving higher transaction rates by adopting a tree data structure for keeping blocks. A discussion of options for some proofs was presented. E.g., in Proposition 11 of [8]: from inequality 1 in the proof it follows that the rate $\beta(\lambda_h)$ of block addition to the main chain by honest miners only is higher than the rate of block addition when the main chain is extended both by honest users' blocks and a fraction $f$ of the attacker's blocks: $\beta(\lambda_h) \ge \beta(\lambda_h + f \cdot q \cdot \lambda_h)$; the monotonically decreasing property of the function $\beta(\lambda)$ in its argument follows from the same inequality (i.e., as the block generation speed $\lambda$ increases, the rate of block addition to the main chain decreases). These papers also provide upper and lower bounds on the rate of block addition to the main chain, but there is no published strict theoretical method (to our knowledge) for computing the required number of confirmation blocks in cryptocurrencies that utilize GHOST.
**Our results.** Within a model of a synchronous communication network with limited message delivery delays [5, 10], we develop several methods for determining the required number of confirmation blocks for Bitcoin and GHOST. The first model considers equal delays for message delivery on the Bitcoin peer-to-peer network for both honest and malicious miners. The second model for Bitcoin assumes that an attacker may create his own centralized network with faster synchronization, thus optimizing attack efficiency. The last model is for GHOST and takes into account its tree data structure for organizing blocks, the longest chain selection rule, and the much shorter time between blocks. For each model we develop a method for determining the required number of confirmation blocks for a given attacker's hashrate and attack success probability.
## 1. Notations and auxiliary statements
We define a timeslot (TS) as the period of synchronization, i.e. the amount of time needed to share a block between independent miners. We introduce the value $s_H = \frac{t_1}{t_2}$, where $t_1$ is the period of network synchronization for honest miners and $t_2$ is the time needed for one attempt of block generation (roughly speaking, the processing time of one random-oracle hash function request). It means that each honest miner (HM) can make approximately $s_H$ attempts to generate a block before he can see a block generated by some other HM in this TS. For a malicious miner (MM), we assume $s_M = s_H$ for the first model and $s_M = \frac{s_H}{2}$ for the second one. For the third model, we assume $s_M = s_H = s$.
We also use the following notations and assumptions:

- $p$ is the probability of generating a block by one miner in one attempt; roughly speaking, this is the probability of generating an appropriate pre-image of some hash function (we assume $p = \frac{1}{k \cdot n \cdot s_H}$, where $k$ is the ratio of block generation time to network block propagation time);
- $n$ is the number of HMs;
- $m$ is the number of MMs (we assume that $m < n$, so honest miners have the majority).
We also emphasize once more that in Model 1, both HMs and MMs can extend the blockchain by not more than one block during one TS. In Model 2, HMs can extend the blockchain by not more than one block during one TS, but MMs, using their advantage in synchronization time, can extend it by one or two blocks during one TS. In Model 3, HMs can extend the blockchain by not more than three blocks during one TS and MMs by not more than two blocks during one TS.
Now we need to define and to calculate some probabilities that we will use in further statements.
In Models 1 and 2, for HMs the probability to generate nothing during one TS is

$$p_0 = (1 - p)^{n \cdot s_H},$$

and the probability to extend the blockchain by exactly one block is

$$p_1 = 1 - p_0.$$

For MMs, the probability to generate nothing during one TS is

$$q_0 = (1 - p)^{m \cdot s_H},$$

the probability to extend the blockchain by exactly two blocks is

$$q_2 = \left(1 - (1 - p)^{m \cdot s_M}\right)^2,$$

and the probability to extend the blockchain by exactly one block is

$$q_1 = 1 - q_0 - q_2.$$

Note that for Model 1: $q_2 = 0$.
Also, for Model 3 we introduce the corresponding probabilities:

$$p_i = C_{ns}^{i}\, p^i (1 - p)^{ns - i}, \quad i = 0, 1, 2; \qquad p_3 = 1 - p_0 - p_1 - p_2; \qquad (1)$$

and

$$q_i = C_{ms}^{i}\, p^i (1 - p)^{ms - i}, \quad i = 0, 1; \qquad q_2 = 1 - q_0 - q_1, \qquad (2)$$

where $s$ is the number of attempts in one TS (for Model 3, the parameter $s$ is the same as $s_H$ for Models 1 and 2).
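As a quick numeric illustration, the per-timeslot probabilities above can be computed directly. The sketch below is my own; the parameter values ($n$, $m$, $k$, the attempt counts) are illustrative assumptions, not values fixed by the paper.

```python
# Illustrative parameters (assumptions of this sketch, not the paper's choices):
n, s_H = 1000, 1000          # number of honest miners, attempts per timeslot
m = 300                      # number of malicious miners, m < n
k = 47.6                     # block generation time / block propagation time
p = 1.0 / (k * n * s_H)      # per-attempt block generation probability

# Models 1 and 2: honest miners add at most one block per TS
p0 = (1 - p) ** (n * s_H)    # HMs generate nothing during one TS
p1 = 1 - p0                  # HMs extend the chain by exactly one block

# Model 2: the attacker synchronizes twice as fast (s_M = s_H / 2),
# so malicious miners may add up to two blocks per TS
s_M = s_H / 2
q0 = (1 - p) ** (m * s_H)              # MMs generate nothing
q2 = (1 - (1 - p) ** (m * s_M)) ** 2   # MMs add two blocks
q1 = 1 - q0 - q2                       # MMs add exactly one block
```

For Model 1 one simply sets $q_2 = 0$ and $q_1 = 1 - q_0$.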
To prove our main results, we need several auxiliary lemmas. The first and the second are generalizations of the classical ruin problem; we formulate them in this section. The others will be formulated in the following sections. To formulate the lemmas, we introduce some additional notation.
Let $\{\xi_i, i \ge 1\}$ and $\{\eta_i, i \ge 1\}$ be mutually independent random variables (RVs), where for all $i \ge 1$

$$\xi_i = \begin{cases} 0, & \text{with probability } p_0; \\ 1, & \text{with probability } p_1; \end{cases} \qquad (3)$$

$$\eta_i = \begin{cases} 0, & \text{with probability } q_0; \\ 1, & \text{with probability } q_1; \\ 2, & \text{with probability } q_2; \end{cases} \qquad (4)$$

and define the RVs $\{\zeta_i, i \ge 1\}$ as $\zeta_i = \xi_i - \eta_i$. It is easy to calculate the probability distribution of $\zeta_i$, $i \ge 1$:

$$P_0 := P(\zeta_i = 0) = p_0 q_0 + p_1 q_1; \qquad P_1 := P(\zeta_i = 1) = p_1 q_0;$$

$$P_{-1} := P(\zeta_i = -1) = p_0 q_1 + p_1 q_2; \qquad P_{-2} := P(\zeta_i = -2) = p_0 q_2.$$

Also let us define the RVs

$$S_n = \sum_{i=1}^{n} \xi_i, \; n \ge 1; \quad S_0 = 0; \qquad \Sigma_n = \sum_{i=1}^{n} \eta_i - k, \; n \ge 1; \quad \Sigma_0 = -k \text{ for some } k \in \mathbb{N},$$

and

$$L_n = S_n - \Sigma_n, \; n \ge 1; \quad L_0 = k.$$

We can also write $L_n$ as $L_n = \sum_{i=1}^{n} \zeta_i + k$. From the probability distribution of $\zeta_i$ we get the following equalities:

$$L_{n+1} = \begin{cases} L_n - 2, & \text{with prob. } P_{-2}; \\ L_n - 1, & \text{with prob. } P_{-1}; \\ L_n, & \text{with prob. } P_0; \\ L_n + 1, & \text{with prob. } P_1. \end{cases} \qquad (5)$$

Now we are ready to formulate the first lemma.

**Lemma 1.** *Define the event $A_k$ as $A_k = \{\exists\, n \ge 1 : L_n \le 0\}$ and let $q^{(k)} = P(A_k)$. Then if the condition*

$$P_{-1} + 2P_{-2} < P_1 \qquad (6)$$

*holds, then*

$$q^{(k)} = \frac{(1 - \lambda_2)\,\lambda_1^{k+1} - (1 - \lambda_1)\,\lambda_2^{k+1}}{\lambda_1 - \lambda_2}, \qquad (7)$$

*where*

$$\lambda_1 = \frac{P_{-1} + P_{-2} - \sqrt{(P_{-1} + P_{-2})^2 + 4P_1 P_{-2}}}{2P_1}, \qquad \lambda_2 = \frac{P_{-1} + P_{-2} + \sqrt{(P_{-1} + P_{-2})^2 + 4P_1 P_{-2}}}{2P_1}.$$

*Proof.* To prove the Lemma, we will derive a difference equation for $q^{(k)}$ using (5) and solve it. According to the compound probability formula,

$$q^{(k)} = P(A_k) = P(A_k / \zeta_1 = -2)\,P_{-2} + P(A_k / \zeta_1 = -1)\,P_{-1} + P(A_k / \zeta_1 = 0)\,P_0 + P(A_k / \zeta_1 = 1)\,P_1 =$$

$$= q^{(k-2)} P_{-2} + q^{(k-1)} P_{-1} + q^{(k)} P_0 + q^{(k+1)} P_1,$$

where the second equality is due to (5). We can rewrite it as

$$q^{(k-2)} P_{-2} + q^{(k-1)} P_{-1} + q^{(k)} (P_0 - 1) + q^{(k+1)} P_1 = 0. \qquad (8)$$

The corresponding characteristic equation is

$$\lambda^3 P_1 + \lambda^2 (P_0 - 1) + \lambda P_{-1} + P_{-2} = 0$$

with one obvious root $\lambda = 1$. After division by $\lambda - 1$ we obtain a new equation:

$$\lambda^2 P_1 - \lambda (P_{-1} + P_{-2}) - P_{-2} = 0.$$

Its discriminant $(P_{-1} + P_{-2})^2 + 4P_1 P_{-2}$ is positive, so the equation has the two real roots $\lambda_1$ and $\lambda_2$ given above. We can also see that $\lambda_1 \le 0$, because $\sqrt{(P_{-1}+P_{-2})^2 + 4P_1 P_{-2}} \ge P_{-1}+P_{-2}$, and that $\lambda_1 > -1$, because the left-hand side of the quadratic equation at $\lambda = -1$ equals $P_1 + P_{-1} > 0$.

The general solution of (8) is

$$q^{(k)} = a_1 \lambda_1^{k} + a_2 \lambda_2^{k},$$

where $a_1$ and $a_2$ can be found from the boundary conditions

$$q^{(0)} = q^{(-1)} = 1. \qquad (9)$$

The boundary conditions (9) lead to

$$\begin{cases} a_1 + a_2 = 1; \\ a_1 \lambda_2 + a_2 \lambda_1 = \lambda_1 \lambda_2, \end{cases}$$

whence we obtain

$$a_1 = \frac{\lambda_1 (1 - \lambda_2)}{\lambda_1 - \lambda_2}; \qquad a_2 = \frac{\lambda_2 (1 - \lambda_1)}{\lambda_2 - \lambda_1},$$

and, finally,

$$q^{(k)} = \frac{(1 - \lambda_2)\,\lambda_1^{k+1} - (1 - \lambda_1)\,\lambda_2^{k+1}}{\lambda_1 - \lambda_2}.$$

But $q^{(k)}$ is the probability of some event, so we should guarantee that it is not smaller than 0 and not larger than 1.

The inequality $q^{(k)} > 0$ follows from the facts that $1 - \lambda_2 < 1 - \lambda_1$, $\lambda_1^{k+1} < \lambda_2^{k+1}$ (because $|\lambda_1| < |\lambda_2|$, $\lambda_1$ is non-positive and $\lambda_2$ is positive) and $\lambda_1 - \lambda_2 < 0$.

Now we will prove that the inequality $q^{(k)} < 1$ follows from the condition $P_{-1} + 2P_{-2} < P_1$ of this lemma. Note that the condition $\lambda_2 < 1$ is sufficient for $q^{(k)} < 1$. Indeed, if $\lambda_2 < 1$, then, using $\lambda_1^{k+1} \ge \lambda_1 \lambda_2^{k}$ (which holds since $\lambda_1 \le 0$ and $\lambda_1^{k} \le \lambda_2^{k}$), we obtain

$$q^{(k)} = \frac{(1-\lambda_2)\lambda_1^{k+1} - (1-\lambda_1)\lambda_2^{k+1}}{\lambda_1 - \lambda_2} \le \frac{(1-\lambda_2)\lambda_1\lambda_2^{k} - (1-\lambda_1)\lambda_2^{k+1}}{\lambda_1 - \lambda_2} = \frac{(\lambda_1 - \lambda_2)\,\lambda_2^{k}}{\lambda_1 - \lambda_2} = \lambda_2^{k} < 1.$$

It remains to prove that the condition $P_{-1} + 2P_{-2} < P_1$ implies $\lambda_2 < 1$. The latter inequality holds iff

$$P_{-1} + P_{-2} + \sqrt{(P_{-1}+P_{-2})^2 + 4P_1 P_{-2}} < 2P_1,$$

or iff

$$\sqrt{(P_{-1}+P_{-2})^2 + 4P_1 P_{-2}} < 2P_1 - P_{-1} - P_{-2},$$

or iff

$$\begin{cases} P_{-1} + P_{-2} < 2P_1; \\ (P_{-1}+P_{-2})^2 + 4P_1 P_{-2} < (2P_1 - P_{-1} - P_{-2})^2. \end{cases}$$

Direct calculations show that the latter system is equivalent to the inequality $P_{-1} + 2P_{-2} < P_1$. The Lemma is proved.

**Corollary 1.** *In the particular case when $q_2 = 0$ we obtain*

$$q^{(k)} = \left( \frac{p_0 q_1}{p_1 q_0} \right)^{k}.$$

*Proof.* In the case of $q_2 = 0$ we get the following equalities:

$$P_{-2} = 0; \qquad \lambda_1 = 0; \qquad \lambda_2 = \frac{p_0 q_1}{p_1 q_0}; \qquad a_2 = 1.$$

Then $q^{(k)} = \lambda_2^{k} = \left( \frac{p_0 q_1}{p_1 q_0} \right)^{k}$.

We also need a more complicated lemma that will be proved using Lemma 1. Let $\{\nu_i, i \ge 1\}$ be independent identically distributed RVs, which are also mutually independent with $\{\eta_i, i \ge 1\}$ introduced in (4). Their probability distribution is

$$\nu_i = \begin{cases} 0, & \text{with probability } r_0; \\ 1, & \text{with probability } r_1; \\ 2, & \text{with probability } r_2; \\ 3, & \text{with probability } r_3. \end{cases} \qquad (10)$$

We are going to formulate a statement for the RVs (4) and (10) which is more general than Lemma 1, formulated for the RVs (3) and (4). Let us define the RVs $\{\gamma_i, i \ge 1\}$ as

$$\gamma_i = \nu_i - \eta_i.$$
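The closed form (7) is easy to evaluate numerically. The helper below is an illustrative sketch of mine (the toy probability values in the checks are not taken from the paper); it respects condition (6), the boundary condition $q^{(0)} = 1$, and the special case of Corollary 1.

```python
import math

def catchup_prob(P1, Pm1, Pm2, k):
    """q^(k) from Lemma 1: probability that the walk L_n, started at k,
    ever reaches a value <= 0, given condition (6): P_-1 + 2*P_-2 < P_1."""
    assert Pm1 + 2 * Pm2 < P1, "condition (6) must hold"
    d = math.sqrt((Pm1 + Pm2) ** 2 + 4 * P1 * Pm2)
    lam1 = (Pm1 + Pm2 - d) / (2 * P1)
    lam2 = (Pm1 + Pm2 + d) / (2 * P1)
    return (((1 - lam2) * lam1 ** (k + 1) - (1 - lam1) * lam2 ** (k + 1))
            / (lam1 - lam2))
```

With $P_{-2} = 0$ the roots collapse to $\lambda_1 = 0$ and $\lambda_2 = P_{-1}/P_1$, reproducing Corollary 1.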
It is easy to prove that for all $i \ge 1$:

$$R_0 := P(\gamma_i = 0) = r_0 q_0 + r_1 q_1 + r_2 q_2; \qquad R_1 := P(\gamma_i = 1) = r_1 q_0 + r_2 q_1 + r_3 q_2;$$

$$R_2 := P(\gamma_i = 2) = r_2 q_0 + r_3 q_1; \qquad R_3 := P(\gamma_i = 3) = r_3 q_0;$$

$$R_{-1} := P(\gamma_i = -1) = r_0 q_1 + r_1 q_2; \qquad R_{-2} := P(\gamma_i = -2) = r_0 q_2.$$

Also define the RVs $U_n = \sum_{i=1}^{n} \nu_i$, $n \ge 1$, $U_0 = 0$, and

$$T_n = U_n - \Sigma_n, \; n \ge 1, \quad T_0 = k.$$

Note that $T_n = \sum_{i=1}^{n} \gamma_i + k$, $n \ge 1$. From the distribution of $\gamma_i$ we obtain that

$$T_n = \begin{cases} T_{n-1} - 2, & \text{with probability } R_{-2}; \\ T_{n-1} - 1, & \text{with probability } R_{-1}; \\ T_{n-1}, & \text{with probability } R_0; \\ T_{n-1} + 1, & \text{with probability } R_1; \\ T_{n-1} + 2, & \text{with probability } R_2; \\ T_{n-1} + 3, & \text{with probability } R_3. \end{cases} \qquad (11)$$

**Lemma 2.** *Let us define the event $B_k = \{\exists\, n \ge 1 : T_n \le 0\}$. Also, define $Q_1 = R_1 + R_2 + R_3$. Then if the condition*

$$R_{-1} + 2R_{-2} < Q_1 \qquad (12)$$

*holds, then $P(B_k) \le Q^{(k)}$, where*

$$Q^{(k)} = \frac{(1 - \lambda_2)\lambda_1^{k+1} - (1 - \lambda_1)\lambda_2^{k+1}}{\lambda_1 - \lambda_2},$$

$$\lambda_1 = \frac{R_{-1} + R_{-2} - \sqrt{(R_{-1} + R_{-2})^2 + 4Q_1 R_{-2}}}{2Q_1}, \qquad \lambda_2 = \frac{R_{-1} + R_{-2} + \sqrt{(R_{-1} + R_{-2})^2 + 4Q_1 R_{-2}}}{2Q_1}.$$

*Proof.* Let us introduce new RVs $\{\delta_i, i \ge 1\}$ that are obtained from $\nu_i$ in the following way:

$$\delta_i = \begin{cases} \nu_i, & \text{if } \nu_i \in \{0, 1\}; \\ 1, & \text{if } \nu_i \in \{2, 3\}. \end{cases} \qquad (13)$$

It is easy to see that $\forall i \ge 1 : \delta_i \le \nu_i$, and therefore

$$Z_n = \sum_{i=1}^{n} \delta_i \le U_n, \; n \ge 1; \qquad Y_n = Z_n - \Sigma_n \le T_n, \; n \ge 1. \qquad (14)$$

Let us introduce the event

$$C_k = \{\exists\, n \ge 1 : Y_n \le 0\}.$$

From the definition of $B_k$ and (14) we get that $B_k \subset C_k$ and

$$P(B_k) \le P(C_k). \qquad (15)$$

Next, from (13) we get that

$$\delta_i = \begin{cases} 0, & \text{with probability } r_0; \\ 1, & \text{with probability } r_1 + r_2 + r_3. \end{cases} \qquad (16)$$

Then we can apply Lemma 1 to the RVs (4) and (13), obtain the probability $P(C_k) = Q^{(k)}$, and then use inequality (15) to complete the proof of this Lemma.

## 2. Model 1. Fork probability for an adversary with ordinary synchronization

Let us fix some $N \in \mathbb{N}$ and consider a part of the blockchain from TS number $t_0 = 1$ to TS number $N$. We define the event:

$F(l, N)$ = { the fork occurred that started at $t_0 = 1$ and reached the length $l$ before the TS number $N$, under the condition that HMs generated $l$ confirmation blocks starting at $t_0$ }.

**Theorem 1.** *For the event $F(l, N)$, the following upper bound holds:*

$$P(F(l, N)) \le \sum_{l_0=0}^{N-l} \left[ C_{l+l_0-1}^{l-1}\, p_1^{l} (1 - p_1)^{l_0} \cdot \left( \left( 1 - \sum_{k=0}^{l-1} C_{l+l_0}^{k}\, q_1^{k} (1 - q_1)^{l+l_0-k} \right) + \sum_{k=0}^{l-1} \left\{ C_{l+l_0}^{k}\, q_1^{k} (1 - q_1)^{l+l_0-k} \cdot \left( \frac{q_1 (1 - p_1)}{p_1 (1 - q_1)} \right)^{l-k} \right\} \right) \right]. \qquad (17)$$
*Proof.* It is obvious that $F(l, N) \subset \bigcup_{l_0=0}^{N-l} F_{l, l_0}$, where $F_{l, l_0}$ is the event

$F_{l, l_0}$ = { the fork occurred after HMs generated $l$ confirmation blocks, and they generated these blocks exactly during $l + l_0$ TSs starting from $t_0 = 1$ }.

Also, for some fixed $l, l_0 \in \mathbb{N}$, we introduce the following events:

$H_{l, l_0}$ = { HMs generated $l$ confirmation blocks during exactly $l + l_0$ TSs, starting at $t_0 = 1$ };

$M$ = { MMs generated not less than $l$ (i.e. $l$ or more) blocks during exactly $l + l_0$ TSs, starting at $t_0$ };

$M_k$ = { MMs generated exactly $k$ ($0 \le k \le l - 1$) blocks during $l + l_0$ TSs, starting at $t_0$ };

$M_{l-k}^{\infty}$ = { MMs ever catch up with the honest chain under the condition that in TS $l + l_0$ they are exactly $l - k$ blocks behind }.

From the definition of $F_{l, l_0}$, we get

$$F_{l, l_0} \subset H_{l, l_0} \cap \left( M \cup \left( \cup_{k=0}^{l-1} \left( M_k \cap M_{l-k}^{\infty} \right) \right) \right).$$

It is easy to calculate that

$$P(H_{l, l_0}) = C_{l+l_0-1}^{l-1}\, p_1^{l} (1 - p_1)^{l_0};$$

$$P(M) = 1 - P(\bar{M}) = 1 - \sum_{k=0}^{l-1} C_{l+l_0}^{k}\, q_1^{k} (1 - q_1)^{l+l_0-k};$$

$$P(M_k \cap M_{l-k}^{\infty}) = P(M_k) \cdot P(M_{l-k}^{\infty}) = C_{l+l_0}^{k}\, q_1^{k} (1 - q_1)^{l+l_0-k} \cdot \left( \frac{q_1 (1 - p_1)}{p_1 (1 - q_1)} \right)^{l-k},$$

where the first equality in the latter expression is due to the independence of $M_k$ and $M_{l-k}^{\infty}$, and the second one is due to Corollary 1.

So,

$$P(F_{l, l_0}) \le C_{l+l_0-1}^{l-1}\, p_1^{l} (1 - p_1)^{l_0} \times \left( \left( 1 - \sum_{k=0}^{l-1} C_{l+l_0}^{k}\, q_1^{k} (1 - q_1)^{l+l_0-k} \right) + \sum_{k=0}^{l-1} \left\{ C_{l+l_0}^{k}\, q_1^{k} (1 - q_1)^{l+l_0-k} \cdot \left( \frac{q_1(1-p_1)}{p_1(1-q_1)} \right)^{l-k} \right\} \right),$$

and

$$P(F(l, N)) \le \sum_{l_0=0}^{N-l} P(F_{l, l_0}),$$

so the theorem is proved.

Note that formula (17) contains binomial coefficients with large parameters $l$ and $l_0$, which may take values $10^3$ and more. For such values it is computationally hard to calculate the coefficients directly. But the Moivre–Laplace local and integral theorems give a rather good approximation in our case, so we will use them to approximate the sums. Here $\phi(x) = \frac{1}{\sqrt{2\pi}} e^{-\frac{x^2}{2}}$ is the normal density, $\phi(-x) = \phi(x)$, and $\Phi$ is the Laplace function, $\Phi(x) = \int_0^x \phi(t)\,dt = \int_{-\infty}^x \phi(t)\,dt - \frac{1}{2}$ for $x \ge 0$, and $\Phi(-x) = -\Phi(x)$.

Using the Moivre–Laplace local theorem we obtain:

$$C_{l+l_0-1}^{l-1}\, p_1^{l} (1 - p_1)^{l_0} \approx p_1 \cdot \frac{\phi\!\left( \frac{l_0 p_1 - (l-1)(1-p_1)}{\sqrt{(l+l_0-1)\, p_1 (1-p_1)}} \right)}{\sqrt{(l+l_0-1)\, p_1 (1-p_1)}};$$

$$C_{l+l_0}^{k}\, q_1^{k} (1 - q_1)^{l+l_0-k} \approx \frac{\phi\!\left( \frac{k - (l+l_0)\, q_1}{\sqrt{(l+l_0)\, q_1 (1-q_1)}} \right)}{\sqrt{(l+l_0)\, q_1 (1-q_1)}}.$$

And using the Moivre–Laplace integral theorem we obtain:

$$1 - \sum_{k=0}^{l-1} C_{l+l_0}^{k}\, q_1^{k} (1 - q_1)^{l+l_0-k} = \sum_{k=l}^{l+l_0} C_{l+l_0}^{k}\, q_1^{k} (1 - q_1)^{l+l_0-k} \approx \frac{1}{2} - \Phi\!\left( \frac{l - (l+l_0)\, q_1}{\sqrt{(l+l_0)\, q_1 (1-q_1)}} \right) = \frac{1}{2} + \Phi\!\left( \frac{(l+l_0)\, q_1 - l}{\sqrt{(l+l_0)\, q_1 (1-q_1)}} \right).$$

Using these approximations, we can give another formulation of Theorem 1.

**Theorem 2.** *For the event $F(l, N)$, the following upper bound holds:*

$$P(F(l, N)) \le \sum_{l_0=0}^{N-l} \left[ p_1 \cdot \frac{\phi\!\left( \frac{l_0 p_1 - (l-1)(1-p_1)}{\sqrt{(l+l_0-1)\, p_1 (1-p_1)}} \right)}{\sqrt{(l+l_0-1)\, p_1 (1-p_1)}} \times \left( \left( \frac{1}{2} + \Phi\!\left( \frac{(l+l_0)\, q_1 - l}{\sqrt{(l+l_0)\, q_1 (1-q_1)}} \right) \right) + \sum_{k=0}^{l-1} \left\{ \frac{\phi\!\left( \frac{k - (l+l_0)\, q_1}{\sqrt{(l+l_0)\, q_1 (1-q_1)}} \right)}{\sqrt{(l+l_0)\, q_1 (1-q_1)}} \cdot \left( \frac{q_1(1-p_1)}{p_1(1-q_1)} \right)^{l-k} \right\} \right) \right]. \qquad (18)$$
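For moderate parameters the bound (17) can also be evaluated with exact binomial coefficients, which is a useful cross-check on the normal approximation of Theorem 2. The sketch below is my own illustration; the values of $p_1$ and $q_1$ used in the checks are toy inputs, not the paper's.

```python
import math

def fork_bound(l, N, p1, q1):
    """Exact evaluation of the Theorem 1 upper bound (17) on the probability
    that a hidden adversarial fork beats l honest confirmation blocks."""
    r = (q1 * (1 - p1)) / (p1 * (1 - q1))   # catch-up ratio from Corollary 1
    total = 0.0
    for l0 in range(N - l + 1):
        # P(HMs need exactly l + l0 TSs to get l confirmation blocks)
        w = math.comb(l + l0 - 1, l - 1) * p1 ** l * (1 - p1) ** l0
        behind = 0.0   # P(MMs generated exactly k < l blocks), accumulated
        caught = 0.0   # same terms weighted by the catch-up probability
        for k in range(l):
            b = math.comb(l + l0, k) * q1 ** k * (1 - q1) ** (l + l0 - k)
            behind += b
            caught += b * r ** (l - k)
        total += w * ((1 - behind) + caught)
    return total
```

Increasing the number of confirmations $l$ drives the bound down, which is exactly the tradeoff tabulated in Section 5.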
## 3. Model 2: Fork probability for an adversary with fast synchronization.
In this section we consider an advanced model of an adversary. We allow malicious miners (MMs) to be corrupted in such a way that they can be synchronized about twice as fast as the honest ones (HMs).

For some $T, k \in \mathbb{N}$, let us define the event $M_{T,k}$ as “during exactly $T$ TSs, MMs generate exactly $k$ blocks”.

**Lemma 3.** *In our notations,*

$$P(M_{T,k}) = \sum_{k_2=0}^{\left[\frac{k}{2}\right]} C_{T}^{k_2}\, C_{T-k_2}^{k-2k_2}\, q_2^{k_2}\, q_1^{k-2k_2}\, q_0^{T-k+k_2}. \qquad (19)$$

*Proof.* Let $k_2$ be the number of TSs in which MMs extend their branch by two blocks. Note that if $k_2$ is fixed, the event $M_{T,k}$ is just the intersection of the following events:

- MMs extend their branch by two blocks in exactly $k_2$ TSs;
- MMs extend their branch by one block in exactly $k - 2k_2$ TSs;
- MMs generate no blocks in exactly $T - k_2 - (k - 2k_2) = T - k + k_2$ TSs.

The probability of such an event is

$$C_{T}^{k_2}\, C_{T-k_2}^{k-2k_2}\, q_2^{k_2}\, q_1^{k-2k_2}\, q_0^{T-k+k_2}.$$

Then the event $M_{T,k}$ is the union of such events over all possible values of $k_2$ (note that any two of these events have an empty intersection), so its probability is the sum of the corresponding probabilities. Finally, it is easy to see that $k_2$ can take values from $0$ to $\left[\frac{k}{2}\right]$.

The Lemma is proved.
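Formula (19) is straightforward to implement; the helper below is an illustrative sketch (the name and the test values are my own, not the paper's). Summed over all $k$, the probabilities must add to one, which makes a convenient sanity check.

```python
import math

def p_mm_blocks(T, k, q0, q1, q2):
    """Formula (19): P(MMs generate exactly k blocks in T timeslots),
    where each TS adds 0, 1 or 2 blocks with probabilities q0, q1, q2."""
    total = 0.0
    for k2 in range(k // 2 + 1):          # number of TSs contributing 2 blocks
        total += (math.comb(T, k2)
                  * math.comb(T - k2, k - 2 * k2)
                  * q2 ** k2
                  * q1 ** (k - 2 * k2)
                  * q0 ** (T - k + k2))
    return total
```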
Now we are ready to formulate the main theorem about the fork probability for Model 2.

Let us fix some $N \in \mathbb{N}$ and consider the part of the blockchain from TS number $t_0 = 1$ to TS number $N$. For some $l \le N$, let us define the event $F(l, N)$ as “the fork occurred that started in TS $t_0 = 1$ and achieved the length $l$ before TS number $N$, under the condition that HMs generated $l$ confirmation blocks starting at $t_0 = 1$ and the fork was hidden until HMs generated these $l$ confirmation blocks”.
**Theorem 3.** *In our notations, the following upper estimate holds:*

$$P(F(l, N)) \le \sum_{l_0=0}^{N-l} \left[ C_{l+l_0-1}^{l-1}\, p_1^{l}\, p_0^{l_0} \left( \left( 1 - \sum_{k=0}^{l-1} P(M_{l+l_0, k}) \right) + \sum_{k=0}^{l-1} P(M_{l+l_0, k})\, q^{(l-k)} \right) \right], \qquad (20)$$

*where the value $q^{(l-k)}$ is defined according to (7), and the value $P(M_{l+l_0, k})$ is defined according to (19).*

*Proof.* For some $l_0 \le N - l$, let us define the event $F_{l, l_0}$ as “the fork with length at least $l$ occurred that started in TS $t_0 = 1$ and was hidden until HMs generated $l$ confirmation blocks, and these blocks were generated during exactly $l + l_0$ TSs starting at $t_0 = 1$”.

Then

$$F(l, N) \subset \bigcup_{l_0=0}^{N-l} F_{l, l_0} \quad \text{and} \quad P(F(l, N)) \le \sum_{l_0=0}^{N-l} P(F_{l, l_0}). \qquad (21)$$

Also let us introduce the following events:

- $H_{l, l_0}$ is “HMs generated $l$ confirmation blocks during exactly $l + l_0$ TSs starting at $t_0 = 1$”;
- $M_{l+l_0, \ge l}$ is “MMs generated not less than $l$ (i.e. $l$ or more) blocks during $l + l_0$ TSs starting at $t_0 = 1$”;
- $M_{l+l_0, k}$ is “MMs generated exactly $k$ ($0 \le k \le l - 1$) blocks during $l + l_0$ TSs starting at $t_0 = 1$”;
- $M_{l-k}^{\infty}$ is “MMs ever catch up with the honest chain under the condition that in TS number $l + l_0$ they are exactly $l - k$ blocks behind”.

From the definition of $F_{l, l_0}$, we see that

$$F_{l, l_0} \subset H_{l, l_0} \cap \left( M_{l+l_0, \ge l} \cup \left( \bigcup_{k=0}^{l-1} \left( M_{l+l_0, k} \cap M_{l-k}^{\infty} \right) \right) \right).$$

Next,

$$P(H_{l, l_0}) = C_{l+l_0-1}^{l-1}\, p_1^{l}\, p_0^{l_0},$$

$$P(M_{l+l_0, \ge l}) = 1 - P(\bar{M}_{l+l_0, \ge l}) = 1 - \sum_{k=0}^{l-1} P(M_{l+l_0, k}),$$

where $P(M_{l+l_0, k})$ is defined according to (19), and

$$P(M_{l+l_0, k} \cap M_{l-k}^{\infty}) = P(M_{l+l_0, k})\, P(M_{l-k}^{\infty}) = P(M_{l+l_0, k})\, q^{(l-k)},$$

where $q^{(l-k)}$ is defined according to (7). Then

$$P(F_{l, l_0}) \le C_{l+l_0-1}^{l-1}\, p_1^{l}\, p_0^{l_0} \left( 1 - \sum_{k=0}^{l-1} P(M_{l+l_0, k}) + \sum_{k=0}^{l-1} P(M_{l+l_0, k}) \cdot q^{(l-k)} \right). \qquad (22)$$

Substituting (22) into (21), we obtain (20) and finish the proof of the theorem.

**Note:** we can also rewrite the inequality (20) as

$$P(F(l, N)) \le \sum_{l_0=0}^{N-l} \left[ C_{l+l_0-1}^{l-1}\, p_1^{l}\, p_0^{l_0} \cdot \left( 1 - \sum_{k=0}^{l-1} P(M_{l+l_0, k}) \left( 1 - q^{(l-k)} \right) \right) \right], \qquad (23)$$

which is easier to calculate.

Finally, we want to simplify the condition (6).

**Lemma 4.** *In our notations, condition (6) is equivalent to the inequality*

$$(1 - p)^{n s_H} < 2 (1 - p)^{\frac{m s_H}{2}} - 1.$$

*Proof.* In our notations,

$$P_1 = p_1 q_0;$$
$$P_{-1} = p_0 q_1 + p_1 q_2; \qquad P_{-2} = p_0 q_2,$$

so inequality (6) can be rewritten as

$$p_0 q_1 + p_1 q_2 + 2 p_0 q_2 < p_1 q_0,$$

or

$$\frac{p_0}{1 - p_0} < \frac{q_0 - q_2}{1 - (q_0 - q_2)},$$

or

$$p_0 < q_0 - q_2.$$

Direct calculations give us

$$q_0 - q_2 = 2 (1 - p)^{\frac{m s_H}{2}} - 1,$$

and, according to the definition,

$$p_0 = (1 - p)^{n s_H}.$$

The Lemma is proved.
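Lemma 4 is easy to verify numerically. The sketch below checks, for one illustrative parameter set (my own choice, not the paper's), that condition (6) and the simplified inequality give the same verdict.

```python
# Illustrative parameters (assumptions of this sketch, not the paper's):
n, m, s_H, k = 1000, 300, 1000, 47.6
p = 1.0 / (k * n * s_H)

p0 = (1 - p) ** (n * s_H)
p1 = 1 - p0
q0 = (1 - p) ** (m * s_H)
q2 = (1 - (1 - p) ** (m * s_H / 2)) ** 2
q1 = 1 - q0 - q2

# Condition (6): P_-1 + 2*P_-2 < P_1
cond6 = (p0 * q1 + p1 * q2) + 2 * (p0 * q2) < p1 * q0
# Lemma 4's equivalent form
cond_lemma4 = (1 - p) ** (n * s_H) < 2 * (1 - p) ** (m * s_H / 2) - 1
```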
## 4. Model 3: fork probability for GHOST
In this section we assume $k = 1$, i.e.

$$p = \frac{1}{ns}, \qquad (24)$$

where $n$ is the number of HMs and $s$ is the number of attempts in one TS. Note that in this model the probability of success in one attempt (24) is about 47 times larger than in the two previous models.

In this section we make the following assumptions.

1) Some transaction was made at TS $t_0$, and there exists only one chain of blocks at this TS. Hence the block $B_0$ with the transaction was the last block of this chain, and all the next blocks generated by HMs are the “children” of block $B_0$, so its “weight” at some TS $t > t_0$ is equal to the number of all blocks generated by HMs from TS $t_0$ till TS $t$.

2) For the sake of simplicity, we assume that HMs can generate not more than 3 blocks and MMs can generate not more than 2 blocks during one TS. This restriction is not essential: the probability that HMs generate 4 or more blocks during one TS is about 0.01, and the probability that MMs generate 3 or more blocks during one TS is about 0.02 in the case when the ratio of MMs is about 33%. Without these restrictions, it seems impossible to obtain valuable results in this model.

Also let us define the probabilities

$$p_i = C_{sn}^{i}\, p^i (1 - p)^{sn - i}, \quad i = 0, 1, 2, 3, \qquad (25)$$

where $p_i$ is the probability that HMs generate exactly $i$ blocks during one TS.

Now we need one additional lemma. For some $l, l_0 \in \mathbb{N}$, define the event $H_{l,l_0}$ as “it takes exactly $l + l_0$ TSs for HMs to generate at least $l$ blocks”. In other words, $H_{l,l_0}$ means that HMs generate not more than $l - 1$ blocks during TSs $1, 2, \ldots, l + l_0 - 1$ and not less than $l$ blocks during TSs $1, 2, \ldots, l + l_0$.

**Lemma 5.** *In our notations,*

$$P(H_{l,l_0}) = P(S_{l+l_0-1} = l - 1) \cdot (p_1 + p_2 + p_3) + P(S_{l+l_0-1} = l - 2) \cdot (p_2 + p_3) + P(S_{l+l_0-1} = l - 3) \cdot p_3, \qquad (26)$$

*where*

$$P(S_{l+l_0-1} = l - i) = \sum_{k_3=0}^{\left[\frac{l-i}{3}\right]} \sum_{k_2=0}^{\left[\frac{l-i-3k_3}{2}\right]} C_{l+l_0-1}^{k_3}\, C_{l+l_0-1-k_3}^{k_2}\, C_{l+l_0-1-k_3-k_2}^{l-i-3k_3-2k_2} \cdot p_3^{k_3} \cdot p_2^{k_2} \cdot p_1^{l-i-3k_3-2k_2} \cdot p_0^{l_0-1+i+2k_3+k_2}, \quad i = 1, 2, 3. \qquad (27)$$

*Proof.* Denote by $\xi_i$, $i \ge 1$, the number of blocks that HMs generate in TS number $i$. According to (25) and our assumptions,

$$\xi_i = \begin{cases} 0, & \text{with probability } p_0; \\ 1, & \text{with probability } p_1; \\ 2, & \text{with probability } p_2; \\ 3, & \text{with probability } p_3. \end{cases}$$

Also define $S_n = \sum_{i=1}^{n} \xi_i$. Now we introduce the event

$$A_n = \{\min\{k \ge 1 : S_k \ge l\} = n\}.$$

In other words, $A_n$ means that $\{S_{n-1} < l\} \cap \{S_n \ge l\}$. In our notations, we need to find the probability $P(A_{l+l_0})$.

We define the events $B_n^{(i)} = \{\xi_n = i\}$, $i = 0, 1, 2, 3$, and note that $P(B_n^{(i)}) = p_i$. Then, according to the compound probability formula,

$$P(A_n) = \sum_{i=0}^{3} P(A_n / B_n^{(i)}) P(B_n^{(i)}) = \sum_{i=1}^{3} P(A_n / B_n^{(i)})\, p_i, \qquad (28)$$

as $P(A_n / B_n^{(0)}) = 0$. Next, note that for $i = 1, 2, 3$:

$$P(A_n / B_n^{(i)}) = P(l - i \le S_{n-1} \le l - 1). \qquad (29)$$

Let us find $P(S_{n-1} = l - i)$, $i = 1, 2, 3$. Denote by $k_i$ the number of TSs in which HMs generate exactly $i$ blocks, $i = 0, 1, 2, 3$. Then $0 \le k_3 \le \left[\frac{l-i}{3}\right]$, and if $k_3$ is fixed, then $0 \le k_2 \le \left[\frac{l-i-3k_3}{2}\right]$. Next, if $k_3$ and $k_2$ are fixed, then $k_1 = l - i - 3k_3 - 2k_2$ and, finally,

$$k_0 = n - 1 - k_3 - k_2 - k_1 = n - 1 - k_3 - k_2 - (l - i - 3k_3 - 2k_2) = n - 1 - l + i + 2k_3 + k_2.$$

So,

$$P(S_{n-1} = l - i) = \sum_{k_3=0}^{\left[\frac{l-i}{3}\right]} \sum_{k_2=0}^{\left[\frac{l-i-3k_3}{2}\right]} C_{n-1}^{k_3}\, C_{n-1-k_3}^{k_2}\, C_{n-1-k_3-k_2}^{l-i-3k_3-2k_2} \cdot p_3^{k_3} \cdot p_2^{k_2} \cdot p_1^{l-i-3k_3-2k_2} \cdot p_0^{n-1-l+i+2k_3+k_2}. \qquad (30)$$

Also, using (28) and (29), we can write that

$$P(A_n) = P(S_{n-1} = l - 1)(p_1 + p_2 + p_3) + P(S_{n-1} = l - 2)(p_2 + p_3) + P(S_{n-1} = l - 3)\, p_3, \qquad (31)$$

and formulas (30) and (31) finish the proof of the lemma for $n = l + l_0$.
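The inner probability (30) is a constrained multinomial sum. As an independent sanity check, it can be compared against a direct convolution of the per-TS distribution; the helpers below are an illustrative sketch of mine with toy probabilities, not the paper's values.

```python
import math

def pmf_multinomial(n, j, p0, p1, p2, p3):
    """P(S_n = j) for S_n a sum of n iid steps in {0,1,2,3}, enumerated as
    in (30) by k3 (TSs with 3 blocks) and k2 (TSs with 2 blocks)."""
    total = 0.0
    for k3 in range(j // 3 + 1):
        for k2 in range((j - 3 * k3) // 2 + 1):
            k1 = j - 3 * k3 - 2 * k2       # TSs with exactly one block
            k0 = n - k3 - k2 - k1          # TSs with no blocks
            if k0 < 0:
                continue
            total += (math.comb(n, k3) * math.comb(n - k3, k2)
                      * math.comb(n - k3 - k2, k1)
                      * p3 ** k3 * p2 ** k2 * p1 ** k1 * p0 ** k0)
    return total

def pmf_convolution(n, j, p0, p1, p2, p3):
    """The same pmf by direct convolution, used only as a cross-check."""
    dist = [1.0]
    for _ in range(n):
        new = [0.0] * (len(dist) + 3)
        for v, pr in enumerate(dist):
            for step, ps in enumerate((p0, p1, p2, p3)):
                new[v + step] += pr * ps
        dist = new
    return dist[j] if j < len(dist) else 0.0
```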
To formulate the main result, we also need formula (19) from Lemma 3, but with the values $q_0, q_1, q_2$ defined for Model 3 in (2).
**Theorem 4. Let the event 𝐹** (𝑙, 𝑁 ) be the same as defined
_in Models 1 or 2. Then_
_𝑃_ (𝐹 (𝑙, 𝑁 ))
_≤_
for adversarial nodes; 𝑛 = 1000 and 𝑁 = 17000 (these
parameters provide sufficiently good accuracy due to
attack success probability value saturation; further increasing of 𝑁, shows no changes in block confirmations
number given in the table). We took the ratio of block
generation time to network block propagation time as
_𝑘_ = 47.6 for Bitcoin, Model 1 and Model 2, and 𝑘 = 1
for GHOST, Model 3 [10].
To verify theoretical results independently, we also
performed direct simulation of attacks in the software
and obtained results that are very close to the ones
given in the table.
Though our method for Model 1 is quite different from
the methods proposed by M.Rosendeld and C.Grunspan,
we got exactly the same numbers for block confirmation
number. Full coincidence of results provides additional
evidence of right approach taken in the papers.
For the Model 2, we can see that even 2x faster
adversarial synchronization gives an advantage for an
attacker only for high adversarial hash rate (0.35+).
The GHOST rule requires the number of confirmation
blocks comparable to Bitcoin. Taking into account
much shorter time between blocks for GHOST, that
gives advantage to this consensus protocol by providing
the same level of blockchain security in shorter time.
## Conclusions
The number of transaction confirmation blocks is
important for application properties of a cryptocurrency
and is closely related to average time of receiving and
accepting of payments. The shortest confirmation time
for the same level of transaction security provides the
best user properties for cryptocurrency.
Papers that provide a number of transaction confirmation blocks for Bitcoin use models with implicit
assumption of prompt spreading of Bitcoin blocks over
the network that leads to conditions that are not always the case for the real world conditions of cryptocurrencies application. Papers that take into account
delays of message delivery on peer-to-peer networks,
provide proofs of asymptotic estimates on reaching of
main blockchain properties, with no specific values of
numbers of transaction confirmation blocks.
We developed three methods for determination of the
required number of confirmation blocks for Bitcoin and
GHOST. The first method uses a model that considers
equal network delays for message delivery on Bitcoin
peer-to-peer network both for honest and malicious
miners. The second one is for Bitcoin and assumes that
an attacker may have faster synchronization for attack
optimization. The third method allows to determine
required number of confirmation blocks for the GHOST
protocol. It is the first strict theoretical method (to our
knowledge) that allows obtaining of these values for the
GHOST.
Compared to other existing methods, in the conditions of equal delays of synchronization for honest
miners and adversarial nodes, our method gives the
same numbers as the known results by M.Rosenfeld and
C.Grunspan, et.al, though uses quite different approach
_𝑙−1_
∑︁{𝑃 (𝑀𝑙+𝑙0,𝑘) _·_ (1 _−_ _𝑄[(][𝑙][−][𝑘][)])})]︀,_
_𝑘=0_
_≤_
_𝑁_ _−𝑙_
∑︁
_𝑙0=0_
[︀𝑃 (𝐻𝑙,𝑙0 ) _×_ (1 _−_
_where 𝑃_ (𝑀𝑙+𝑙0,𝑘) is as defined in (19) and 𝑃 (𝐻𝑙,𝑙0 ) is
_as defined in (26) using values (2) and (3)._
The proof of this theorem is just the same as the
proof of Theorem 3, but the probabilities of events 𝐻𝑙,𝑙0
and 𝑀𝑙+𝑙0,𝑘 take other values that in (20).
## 5. Comparison of confirmation blocks’ num- bers for different methods
The Table 1 shows the number 𝑧 of block confirmations for attack success probability of 0.001 for various
values of the adversarial hashrate 𝑞, determined by the
methods developed by S.Nakamoto [1], M.Rosenfeld [2],
C. Grunspan and R.Perez-Marco [3], compared to our
results obtained for Bitcoin consensus in the network
with equal delays both for honest miners and attacker
nodes (Model 1), for Bitcoin consensus on the network
with faster (2x) adversarial synchronization (Model 2)
and for the GHOST protocol (Model 3).
For this computation, we took 𝑠𝐻 = 1000 and
_𝑠𝑀_ = 𝑠𝐻 for Model 1 and Model 3; for Model 2, we
took 𝑠𝑀 = _[𝑠]2[𝐻]_ [that means twice as fast synchronization]
-----
Table 1. The number 𝑧 of block confirmations for attack success probability of 0.001 for various values of the
adversarial hashrate 𝑞 for different models
|q|S.Nakamoto|M.Rosenfield|C.Grunspan and R.Perez- Marco|Model 1 (Bitcoin)|Model 2 (Bitcoin, fast adv. synch.)|Model 3 (GHOST)|
|---|---|---|---|---|---|---|
|0.1|5|6|6|6|6|6|
|0.15|8|9|9|9|9|8|
|0.2|11|13|13|13|13|12|
|0.25|15|20|20|20|20|18|
|0.3|24|32|32|32|32|28|
|0.35|41|58|58|58|59|49|
|0.4|81|133|133|133|136|101|
(also taking into account message delivery delays). In
the model with 2x faster adversarial synchronization,
an attacker gains an advantage only when controlling a
high hashrate (q ≥ 0.35).
According to our method, the GHOST protocol requires a number of confirmation blocks comparable
to Bitcoin's. However, since the time between blocks is much shorter, GHOST has the advantage of providing the same
level of blockchain security in less time.
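The baseline column of Table 1 can be reproduced from Nakamoto's [1] Poisson approximation of the double-spend success probability. A minimal sketch (this implements reference [1]'s formula only, not the Models 1–3 developed in this paper; function names are ours):

```python
import math

def attacker_success(q: float, z: int) -> float:
    """Nakamoto's probability that an attacker with hashrate share q
    eventually catches up after z confirmation blocks."""
    p = 1.0 - q
    lam = z * q / p  # expected attacker progress while z honest blocks are mined
    prob = 1.0
    for k in range(z + 1):
        poisson = math.exp(-lam) * lam ** k / math.factorial(k)
        prob -= poisson * (1.0 - (q / p) ** (z - k))
    return prob

def confirmations_needed(q: float, target: float = 0.001) -> int:
    """Smallest z with attacker success probability below `target`."""
    z = 0
    while attacker_success(q, z) >= target:
        z += 1
    return z
```

For q = 0.1 this yields z = 5 and for q = 0.3 it yields z = 24, matching the S. Nakamoto column of Table 1.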
## References

[1] S. Nakamoto, “Bitcoin: A peer-to-peer electronic cash system,” online, 2008.

[2] M. Rosenfeld, “Analysis of hashrate-based double spending,” arXiv preprint, 2014.

[3] C. Grunspan and R. Pérez-Marco, “Double spend races,” CoRR, vol. abs/1702.02867, 2017.

[4] C. Pinzon and C. Rocha, “Double-spend attack models with time advantage for bitcoin,” _Electronic Notes in Theoretical Computer Science_, vol. 329, pp. 79–103, 2016.

[5] J. A. Garay, A. Kiayias, and N. Leonardos, “The bitcoin backbone protocol: Analysis and applications,” _Advances in Cryptology - EUROCRYPT 2015 - 34th Annual International Conference on the Theory and Applications of Cryptographic Techniques, Sofia, Bulgaria, April 26-30, 2015, Proceedings, Part II_, pp. 281–310, 2015.

[6] R. Pass, L. Seeman, and A. Shelat, “Analysis of the blockchain protocol in asynchronous networks,” in _Annual International Conference on the Theory and Applications of Cryptographic Techniques_, pp. 643–673, Springer, 2017.

[7] J. A. Garay, A. Kiayias, and N. Leonardos, “The bitcoin backbone protocol with chains of variable difficulty,” _IACR Cryptology ePrint Archive_, vol. 2016, p. 1048, 2016.

[8] Y. Sompolinsky and A. Zohar, “Secure high-rate transaction processing in bitcoin,” _Financial Cryptography and Data Security - 19th International Conference, FC 2015, San Juan, Puerto Rico, January 26-30, 2015, Revised Selected Papers_, 2015.

[9] Y. Sompolinsky and A. Zohar, “Accelerating bitcoin’s transaction processing. Fast money grows on trees, not chains,” _IACR Cryptology ePrint Archive_, vol. 2013, p. 881, 2013.

[10] A. Kiayias and G. Panagiotakos, “Speed-security tradeoffs in blockchain protocols,” _Cryptology ePrint Archive_, Report 2015/1019, 2015.
-----
# risks
_Article_
## Lead Behaviour in Bitcoin Markets
**Ying Chen 1, Paolo Giudici 2,*, Branka Hadji Misheva 3 and Simon Trimborn 4**
1 Department of Mathematics and Risk Management Institute, National University of Singapore,
Singapore 119077, Singapore; matcheny@nus.edu.sg
2 Department of Economics and Management, University of Pavia, 27100 Pavia, Italy
3 School of Engineering, ZHAW University of applied sciences, 8005 Zurich, Switzerland; hadji@zhaw.ch
4 Department of Mathematics, National University of Singapore, Singapore 119077, Singapore;
simon.trimborn@nus.edu.sg
***** Correspondence: giudici@unipv.it
Received: 5 November 2019; Accepted: 31 December 2019; Published: 4 January 2020
**Abstract: We aim to understand the dynamics of Bitcoin blockchain trading volumes and, specifically,**
how different trading groups, in different geographic areas, interact with each other. To achieve this
aim, we propose an extended Vector Autoregressive model, aimed at explaining the evolution of
trading volumes, both in time and in space. The extension is based on network models, which improve
pure autoregressive models, introducing a contemporaneous contagion component that describes
contagion effects between trading volumes. Our empirical findings show that transaction activity
in bitcoins is dominated by groups of network participants in Europe and in the United States,
consistent with the expectation that market interactions primarily take place in developed economies.
**Keywords: bitcoin markets; bitcoin trading volumes; network models**
**1. Introduction**
The bitcoin is the leading cryptocurrency by capitalisation, with a market share greater than 50%
of the total cryptocurrency market, corresponding to 330 billion USD at its historical peak in December
2017. Recent studies report that this market capitalisation is concentrated among a limited number
of owners. In particular, Credit Suisse in January 2018 provided a study indicating that 97% of
Bitcoins are held by 4% of all Bitcoin addresses. Bloomberg reported similar findings, suggesting
that about 40 percent of Bitcoin is held by perhaps 1000 users.
The previous empirical findings suggest that the trading movements of a few bitcoin owners have
the potential to cause major disruptions in the price of all cryptocurrencies. An example of this is the
transaction that took place on 12 November 2017, when a user moved 25,000 Bitcoins, worth USD 159 million at the
time, to an exchange. A very important research question is therefore: “to find the
bitcoin owners who are most connected in the markets, in terms of trading volumes”.
Unfortunately, the anonymity of bitcoin transactions makes it very difficult to answer the
previous question. However, although it may be difficult to trace the “physical” identity of the users,
it may be possible to understand their “statistical” identity by applying appropriate econometric models
to the (very large) database of payments generated by bitcoin trades themselves. This may help to
answer a less demanding, but still important, research question: “to find groups of bitcoin owners who
are most connected in the market, in terms of trading volumes”.
In this study, we classify bitcoin owners according to their observed trading behaviour, in ten
classes of increasing average size. We add to this classification the geographical area of the owners,
defined (very broadly) by the continent to which they belong. We then apply network econometric
models to understand the map of interconnections that exist between the defined owner groups and,
in this way, identify the trading groups who lead bitcoin markets, along time.
-----
_Risks 2020, 8, 4_ 2 of 14
The econometric research on the dynamics of cryptocurrency markets has mainly been focused on
the issue of price discovery and prediction. In this context, many of the stylized facts that are valid for
traditional financial time series apply, to some extent, also in the context of these alternative currencies
Elendner et al. (2017). A large stream of papers consider the dynamics of crypto prices, using VAR
models (Bianchi (2019); Catania et al. (2019); Bohte and Rossini (2019); Giudici and Abu-Hashish
(2019)), VECM models (Giudici and Pagnottoni (2019a), (2019b)), similarity networks Giudici and
Polinesi (2019) and Generalized Autoregressive Conditional Hetheroskedasticity (GARCH) models
Bouoiyour et al. (2016). The results from the different papers, however, seem far from consistent. In our
view, this is mostly due to the nature of the cryptocurrencies. For example, they are much more volatile
compared to traditional currencies, their exchange rates cannot be assumed to be independently and
identically distributed and their global nature limits researchers’ ability to account for systematic
causal factors.
In our opinion, it becomes necessary to move away from traditional price volatility models
and to focus on identifying the mechanisms that drive trading behaviour, as in our research
question. The available literature on trading volume dependency in cryptocurrency markets is
very limited. Notable exceptions are the papers by Tasca et al. (2018), Foley et al. (2019)
and Chen et al. (2018). In particular, Tasca et al. (2018) attempt to identify different clusters within
the Bitcoin economy by analyzing trading patterns and ascribing them to particular business
categories. Using network-based methods, the authors identified three market regimes that have
characterized Bitcoin transactions.
Our work intends to extract the network of payment relationships between Bitcoin owners,
similar to Tasca et al. (2018). We extend their work by acquiring evidence on whether the trading volume
behaviours of different groups of Bitcoin traders, defined by volume size and geographical region,
are interconnected and, therefore, affect each other.
From an econometric viewpoint, we propose an econometric network model which extends
Vector Autoregressive models. The extension is based on network models, which improve over pure
autoregressive models, as they introduce a contemporaneous contagion component that describes
contagion effects between groups of traders.
The validity of the model was demonstrated in recent studies on systemic risk, in which
researchers have proposed correlation network models, able to combine the rich structure of financial
networks (see, e.g., Lorenz et al. (2009); Battiston et al. (2012)) with a more parsimonious approach
that can estimate contagion effects from the dependence structure among market prices. The first
contributions in this framework are Billio et al. (2012) and Diebold and Yilmaz (2014), who derive
contagion measures based on Granger-causality tests and variance decompositions. More recently,
Ahelegbey et al. (2016) and Giudici and Spelta (2016) have extended this methodology introducing
stochastic correlation networks.
While bivariate systemic risk models (such as Acharya et al. (2012), Acharya et al. (2016) and
Adrian and Brunnermeier (2015)) explain whether the risk of an institution is affected by a market
crisis event or by a set of exogenous risk factors, correlation network models explain whether the same
risk depends on contagion effects, in a cross-sectional perspective.
We extend the approach of Giudici and Spelta (2016) enriching their graphical Gaussian model
with an autoregressive component derived through a VAR model, as in Ahelegbey et al. (2016). In
contrast with the latter, we employ partial correlations rather than correlations, and we do not follow a
Bayesian approach.
We remark that our work is related to some recent papers that explore the cross-country
trading in cryptocurrency markets Makarov and Schoar (2019), the network dynamics across
cryptocurrency markets Ji et al. (2019) and the information content of trading volumes in crypto
investing Bianchi (2019); Bouri et al. (2019). We combine the views of the previous paper into a
network-based analysis of bitcoin trading patterns across countries and trading groups.
-----
To demonstrate our methodology, we consider all of the world’s bitcoin transactions,
independently of the exchange on which they were traded, in the period from 25 February 2012
to 17 July 2017.

Our empirical findings show that transaction activity in bitcoins is dominated by groups of
network participants in Europe and in the United States, consistent with the conventional wisdom
that market interactions, at least nominally, primarily take place in developed economies.
The paper is organized as follows: Section 2 contains our proposed model; Section 3 presents
the available data; Section 4 the empirical application of the proposed model to the obtained data;
Section 5 contains some concluding remarks.
**2. Proposal**
Let $y_t^i$ be the traded volume of Bitcoin by a specific group of traders $i$ ($i = 1, \dots, I$), at time
$t$ ($t = 1, \dots, T$). We assume that $y_t^i$ is a function of: (a) an autoregressive element that captures the
dependence on the past trading volumes of the same group; (b) a cross-sectional element that captures
the contemporaneous dependence on the trading volumes of other groups; (c) a stochastic residual.
Mathematically, we assume that, for each volume $i$ and time $t$,
the following equation holds:

$$y_t^i = \sum_{p=1}^{p_0} \alpha_p^i \, y_{t-p}^i + \sum_{j \neq i} \beta^{ij} y_t^j + \epsilon_t^i, \qquad (1)$$

where $p$ is a time lag (with a maximum value of $p_0 < t$), $\alpha_p^i$ and $\beta^{ij}$ are the coefficients to be
estimated, and $\epsilon_t^i$ are residuals, which we assume standard Gaussian and independent.

Equation (1) models the Bitcoin volume dynamics as a structural VAR, in which the traded volume
in each group depends on its $p$ past values, through the idiosyncratic autoregressive component
$\sum_{p=1}^{p_0} \alpha_p^i y_{t-p}^i$, and, in addition, on the contemporaneous values of the other groups, through
the systemic component $\sum_{j \neq i} \beta^{ij} y_t^j$.
Defining $B_0$ as an $I \times I$ symmetric matrix with null diagonal elements, containing the
contemporaneous coefficients, the previous model can be expressed in a more compact matrix form, as
follows:

$$Y_t = \sum_{p=1}^{p_0} A_p Y_{t-p} + B_0 Y_t + \varepsilon_t, \qquad (2)$$

where $Y_t$ is an $I$-dimensional vector containing the traded volumes of all groups at time $t$, $Y_{t-p}$ is the
same vector lagged at time $t - p$, $A_p$ is an $I \times I$ matrix that contains the autoregressive coefficients, and
$\varepsilon_t$ is a vector of residuals.

In the following step, we transform the model in (2) into a reduced form, for the purpose of
facilitating the estimation process, thus becoming:

$$Y_t = \Gamma_1 Y_{t-1} + \dots + \Gamma_{p_0} Y_{t-p_0} + U_t, \qquad (3)$$

with

$$\Gamma_p = (I - B_0)^{-1} A_p, \quad p = 1, \dots, p_0, \qquad U_t = (I - B_0)^{-1} \varepsilon_t. \qquad (4)$$
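The reduced form (3) can be estimated equation-by-equation with ordinary least squares. A minimal numpy sketch (illustrative code, not the authors'; the function name and the simulation used to check it are ours):

```python
import numpy as np

def estimate_var(Y: np.ndarray, p0: int):
    """OLS estimation of the reduced-form VAR of Equation (3).

    Y  : (T x I) array of traded volumes.
    p0 : number of lags.
    Returns the list of (I x I) matrices Gamma_1..Gamma_p0 and the
    modified residuals U (one row per fitted time point)."""
    T, I = Y.shape
    # Stack lagged regressors [Y_{t-1}, ..., Y_{t-p0}] row by row.
    X = np.hstack([Y[p0 - p:T - p] for p in range(1, p0 + 1)])
    target = Y[p0:]
    coef, *_ = np.linalg.lstsq(X, target, rcond=None)
    gammas = [coef[(p - 1) * I: p * I].T for p in range(1, p0 + 1)]
    U = target - X @ coef
    return gammas, U
```

On data simulated from a known one-lag VAR, the estimated $\Gamma_1$ recovers the true coefficient matrix up to sampling error.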
-----
This reduced form allows the estimation of the vectors of modified autoregressive coefficients
$\Gamma_1, \dots, \Gamma_{p_0}$, using time series data on the traded volumes contained in the stacked vector
$\{Y_1, \dots, Y_t, \dots, Y_T\}$.

However, we are not interested in estimating the $\Gamma_p$ themselves. The purpose of this analysis is
to disentangle the autoregressive and contemporaneous components, thus separately estimating
$\{A_1, \dots, A_{p_0}\}$ and $B_0$. Once $B_0$ is obtained, $\{A_1, \dots, A_{p_0}\}$ can be derived from (4).

To estimate $B_0$, note that $(I - B_0) U_t = \varepsilon_t$, so that $U_t = B_0 U_t + \varepsilon_t$. This implies that, for each
group $i$,

$$U_t^i = \sum_{j \neq i} \beta^{ij} U_t^j + \epsilon_t^i, \qquad (5)$$

meaning that the off-diagonal elements of $B_0$ can be obtained by regressing each modified residual,
derived from the application of (3), on those of the other groups.
Please note that the regression model in (5) is based on the transformation derived in Equation
(4), which makes the modified residuals correlated. The direction of such correlation is, however,
unknown. In the application of (5) it is therefore not clear which volume residual assumes the role
of the response variable, and which that of an explanatory regressor.

To determine the direction of such dependence, we propose to approximate each pair of regression
coefficients $\beta^{ij}$ and $\beta^{ji}$ with their partial correlation coefficient, which is undirected.
Mathematically, let $\Sigma = \mathrm{Corr}(U)$ be the correlation matrix between the modified residuals, and let
$\Sigma^{-1}$ be its inverse, with elements $\sigma^{ij}$. The partial correlation coefficient $\rho_{ij|S}$ between the residuals
$U^i$ and $U^j$, conditional on the remaining residuals ($U^s$, $s \in S$), where $S = I \setminus \{i, j\}$, can be
obtained as:

$$\rho_{ij|S} = \frac{-\sigma^{ij}}{\sqrt{\sigma^{ii}\sigma^{jj}}}. \qquad (6)$$

It can be shown that:

$$|\rho_{ij|S}| = \sqrt{\beta^{ij} \cdot \beta^{ji}}, \qquad (7)$$

which means that the absolute value of the partial correlation coefficient between $U^i$ and $U^j$, given all
the other residuals, can be obtained as the geometric average of the coefficients $\beta^{ij}$ and $\beta^{ji}$
defined by Equation (5) setting, respectively, $i$ rather than $j$ as the response variable. Equation (7) justifies
the replacement of $\beta^{ij}$ and $\beta^{ji}$ with their corresponding partial correlation coefficient $\rho_{ij|S}$.
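Equation (6) implies that the whole matrix of partial correlations can be read off the inverse of the residual correlation matrix. A minimal numpy sketch (illustrative code, not the authors'; the function name is ours):

```python
import numpy as np

def partial_correlations(U: np.ndarray) -> np.ndarray:
    """Matrix of partial correlations rho_{ij|S} from residuals U (T x I),
    computed as -sigma^{ij} / sqrt(sigma^{ii} sigma^{jj}), where sigma^{ij}
    are the elements of the inverse correlation matrix (Equation (6))."""
    Sigma = np.corrcoef(U, rowvar=False)
    P = np.linalg.inv(Sigma)
    d = np.sqrt(np.diag(P))
    R = -P / np.outer(d, d)
    np.fill_diagonal(R, 1.0)  # a variable is perfectly correlated with itself
    return R
```

As a sanity check, for a chain of variables X1 → X2 → X3 the partial correlation between X1 and X3 given X2 is (close to) zero, reproducing the conditional-independence reading discussed below.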
From an economic viewpoint, the partial correlation coefficient expresses how the trading volume
of node i is affected by the contemporaneous trading volume of node j (j = i), keeping the other
_̸_
volumes fixed.
An important advantage deriving from the use of partial correlations lies in
the possibility of employing correlation network models based on the conditional independence
relationships that partial correlations describe.
More precisely, let us assume that the vectors $U_t$ are independently distributed according to a
multivariate normal distribution $N_I(0, \Sigma)$, where $\Sigma$ represents the correlation matrix (which we assume
to be non-singular).
A correlation network model can be represented by an undirected graph $G = (V, E)$,
with a set of nodes $V = \{1, \dots, I\}$ and an edge set $E \subseteq V \times V$ that describes the connections between
the nodes. $G$ can be represented by a binary adjacency matrix $E$ with elements $e_{ij}$, each of them
providing the information of whether a pair of vertices in $G$ is (symmetrically) linked between each
other ($e_{ij} = 1$) or not ($e_{ij} = 0$). If the nodes $V$ of $G$ are put in correspondence with the random variables
$U_1, \dots, U_I$, the edge set $E$ induces conditional independences on $U$ via the so-called Markov properties
(see, e.g., Lauritzen (1996)).
-----
Following up on (7), Whittaker (1990) proved that the following equivalence holds:

$$\rho_{ij|S} = 0 \iff U_i \perp U_j \mid U_{V \setminus \{i,j\}} \iff e_{ij} = 0, \qquad (8)$$

where the symbol $\perp$ indicates conditional independence.
From a graph theoretic viewpoint, the previous equivalence means that a link between two
volume residuals is present if and only if the corresponding partial correlation coefficient is significantly
different from zero.
From a financial viewpoint, the previous equivalence implies that, if the partial correlation
between two measures is equal to zero, the corresponding volume residuals are conditionally
independent and, therefore, the corresponding groups do not (directly) impact each other.
From a statistical viewpoint, it is also possible to test the null hypothesis that two groups of Bitcoin
owners are conditionally independent by checking whether the corresponding partial correlation
coefficient is equal to zero, by means of the statistical test described in Whittaker (1990).
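A standard way to carry out such a test is the Fisher z-transform of the sample partial correlation; a sketch follows (this is the usual asymptotic test, used here as an illustrative stand-in for the exact statistic in Whittaker (1990); the function name is ours):

```python
import math

def partial_corr_pvalue(r: float, n: int, n_controls: int) -> float:
    """Two-sided p-value for H0: rho_{ij|S} = 0, given a sample partial
    correlation r from n observations while controlling for n_controls
    other variables (Fisher z-transform, normal approximation)."""
    z = math.atanh(r) * math.sqrt(n - n_controls - 3)
    phi = 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0)))  # standard normal CDF
    return 2.0 * (1.0 - phi)
```

Edges whose p-value exceeds the chosen significance level would then be removed from the network.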
However, this poses a problem of multiple testing, and correcting for it could result
in a loss of power (for example, using Bonferroni’s inequality). One of the most widely used methods
for limiting the number of spurious edges, while at the same time obtaining networks that are more
interpretable, is a regularization approach. One prominent such approach is
the least absolute shrinkage and selection operator (LASSO), which, in essence,
allows us to set estimates to exactly zero. More formally, the LASSO limits the sum of the absolute
partial correlation coefficients, which in turn leads to an overall shrinkage of the estimates, some of which
inevitably become zero. Mathematically, if $\hat{\sigma}$ represents the sample variance–covariance matrix, the graphical LASSO
estimates the precision matrix $\Theta$ by maximizing the penalized likelihood function (with $\lambda_k$ being the
penalty parameter):

$$l(\Theta) = \log \det \Theta - \mathrm{tr}(\hat{\sigma}\,\Theta) - \lambda_k \sum_{i,j} |\Theta_{i,j}|$$

For the purpose of our study, both the significance testing and the graphical LASSO serve as
robustness checks for identifying the true network that emerges between Bitcoin owner groups.
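The penalized objective can be written down directly; a minimal sketch evaluating it (illustrative only: the maximization itself is performed by graphical-LASSO solvers, which are not reimplemented here, and the function name is ours):

```python
import numpy as np

def penalized_loglik(Theta: np.ndarray, S: np.ndarray, lam: float) -> float:
    """l(Theta) = log det(Theta) - tr(S Theta) - lam * sum_ij |Theta_ij|,
    with S the sample variance-covariance matrix and lam the penalty
    parameter, penalizing all entries as written in the text."""
    sign, logdet = np.linalg.slogdet(Theta)
    if sign <= 0:
        return -np.inf  # Theta must be positive definite
    return float(logdet - np.trace(S @ Theta) - lam * np.abs(Theta).sum())
```

Larger `lam` favors sparser precision matrices, whose zero off-diagonal entries correspond to missing edges in the network.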
**3. Data**
We consider all data from the Bitcoin blockchain, from 25 February 2012 to 17 July 2017 (1969
days, with 1843 observed days), described in detail in Chen et al. (2018). Bitcoin blocks are published
approximately every 10 min and contain information about the transaction size, the (anonymous) account ID,
the participating accounts and the timestamp of the transactions.
The previous information is very useful for understanding the time dynamics of transaction volumes,
but it indicates nothing about the nature of the bitcoin owners who generate the trades. To
capture some information on bitcoin traders, we use the website Blockchain.info, which provides
information about the IP address of the relying party that provides secure access to the originator
of each transaction, and we extract from it the approximate geographical provenance of the trader who
generates the transaction. To avoid a too large approximation error, we decided to group geographical
provenance into a few classes, corresponding to six continental groups: Africa (Af), Asia (As), Europe
(Eu), North America (N_A), Oceania (Oc) and South America (S_A). More precisely, the continent of
the bitcoin trader is identified from the data in Blockchain.info, comparing its IP address with a dataset
of IP addresses from MaxMind Inc. The approximate location of the transaction origin can be tracked by
recording the first node relaying it. We remark that this approach works as long as the running node
does not use an anonymizing technology.
We thus have a first grouping of bitcoin owners that roughly corresponds to their continent of
residence. To further characterize them, for each of the six continental groups we associate the
account IDs with the absolute size of the total transaction amount they generated in the considered
time period. We then further group the IDs of each continent according to the deciles of their statistical
distribution. The first group, labeled 1 after the continent abbreviation, has the smallest
transactions, corresponding to the 0–10% percentile class, while the tenth group, labeled 10, has the largest
transactions, corresponding to the 90–100% percentile class. The final result is a classification
of bitcoin owners into 60 groups: 10 groups per continent.
With this grouping, we will investigate our research hypotheses and search for the bitcoin owners
who most impact the market. Specifically, we will investigate whether large-size Bitcoin
owners affect the trade decisions of the others, whether a specific continent drives the others in terms
of bitcoin trades, or both.
We remark that, although the Bitcoin is the most liquid and largest cryptocurrency, there is
sometimes low liquidity in its transactions. Our data show that there are days without a single
transaction in Africa, Asia, Oceania and South America, with the frequency of low liquidity varying
between 1% and 25%. We can overcome the liquidity problem by accumulating the 10-min data to a
daily frequency. In any case, this indicates that a further regional grouping, for example by country,
would lead to a lack of data for many of them.

For each of the considered groups, our main variable of interest is the volume of transactions
at any given time point. To normalise the data, we consider the logarithm of the transaction volumes.
To avoid computational problems, when no transactions in a group arise within a day, we add 1 Satoshi¹
to each transaction. Given the large numbers under consideration, the bias effect of the correction is
negligible.
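The normalisation step just described can be sketched as follows (illustrative; `log_volume` is our name, volumes are expressed in Satoshi, and, as a simplification, the 1-Satoshi correction for zero-volume days is applied uniformly to all days):

```python
import numpy as np

def log_volume(daily_satoshi: np.ndarray) -> np.ndarray:
    """Daily log transaction volume with a 1-Satoshi correction, so that
    zero-volume days map to log(1) = 0 instead of -inf."""
    return np.log(daily_satoshi + 1.0)
```

Given the magnitudes involved (1 BTC = 100,000,000 Satoshi), the bias introduced by the correction is negligible.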
In Figure 1 we illustrate the daily log accumulated transaction sizes over all 10 groups in each
continent. The largest transaction sizes appear in Europe and North America, whose dynamic pattern
is quite steady. Asia and Oceania are evidently more volatile than Europe and North America, but less
volatile than Africa and South America. The descriptive statistics, reported in Table 1, provide further
evidence for these findings. Note in particular that Asia, Oceania, Africa and South America have a
minimum value of zero, indicating a lack of liquidity in certain time periods.

For deeper insights into the data features of the groups in each continent, the empirical distribution
of the log transaction sizes is displayed by means of boxplots in Figure 1. For each continent, the left
plot corresponds to group 1, with the smallest transactions, and the right
one to group 10, with the largest transactions.
**Table 1. Descriptive statistics of the accumulated log transactions of the 6 regions Africa (Af), Asia (As),**
Europe (Eu), North America (N_A), Oceania (Oc) and South America (S_A). Eu and N_A show related
behavior in terms of the descriptive statistics, as do As and Oc; Af and S_A also behave similarly.
| | **Af** | **As** | **Eu** | **N_A** | **Oc** | **S_A** |
|---|---|---|---|---|---|---|
| mean | 142.25 | 193.77 | 232.18 | 230.45 | 186.60 | 155.80 |
| sd | 72.84 | 19.81 | 11.59 | 9.18 | 24.55 | 62.39 |
| skewness | −1.30 | −4.81 | −0.86 | −1.61 | −4.59 | −1.91 |
| kurtosis | 2.98 | 44.71 | 5.27 | 10.50 | 34.79 | 5.12 |
| min | 0.00 | 0.00 | 162.72 | 154.25 | 0.00 | 0.00 |
| max | 222.76 | 240.14 | 257.76 | 254.96 | 235.36 | 228.09 |
From Figure 1, the narrow box width of Europe and North America suggests that these continents
are characterised by transaction sizes with low volatility and few outliers. For Asia and
Oceania, however, the daily transaction sizes are more volatile, leading to larger center boxes and wider whiskers.
South America is extreme in the sense of showing even longer whiskers, with transaction sizes
varying more strongly between groups. Africa follows a very different picture from the other continents: it
has the lowest liquidity, a much higher volatility, and frequent drops of the transaction
volume to 0.
1 The BTC transactions are reported in Satoshi values, the smallest fraction of a BTC, where 1 BTC = 100,000,000 Satoshi.
-----
[Figure 1: six boxplot panels, (a) Af, (b) As, (c) Eu, (d) N_A, (e) Oc, (f) S_A]

**Figure 1.** Daily volume transactions (expressed in logarithms) of the 10 groups displayed as boxplots,
where the left boxplot represents the first group and the right one the tenth group of the respective
continent. The scatter plot displays the accumulated log transaction size of the 10 groups. The time
period goes from 25 February 2012 until 17 July 2017 in all 6 continents.
**4. Empirical Findings**
In this section, we present the results of the application of the proposed model. First, we
evaluate the model in terms of predictive accuracy, to gauge its validity in the present context; second,
we interpret the model results in terms of our research hypotheses, aimed at assessing the dependency
patterns among the trading behaviour of different bitcoin traders.
We first consider an unregularised network, whose edges are all present, even when the
corresponding partial correlation is very low.

By calculating the partial correlations as specified in (6), we can derive the $B_0$ matrix and, then, the
autoregressive parameters $A_1, \dots, A_{p_0}$. We are thus able to disentangle the time-dependent volume of
node $i$, separately estimating the autoregressive idiosyncratic component and the contemporaneous
one, according to Equation (2). Table 2 presents the assessment of the predictive performance of our
-----
model, to understand if the proposed approach is suitable, from a statistical viewpoint. Specifically,
we want to investigate whether the inclusion of the contemporaneous component improves predictive
accuracy, with respect to a much simpler pure autoregressive model. Table 2 contains the results of the
predictive assessment.
**Table 2. Comparison between the root mean squared errors obtained with our full VAR model and with a model composed solely of the autoregressive component.**
| **Group** | **RMSE_Full** | **RMSE_AR** | **Group** | **RMSE_Full** | **RMSE_AR** |
|---|---|---|---|---|---|
| Africa1 | 0.1945 | 0.2052 | N_A1 | 0.2495 | 0.2500 |
| Africa2 | 0.1298 | 0.1315 | N_A2 | 0.4590 | 0.4613 |
| Africa3 | 0.1600 | 0.1584 | N_A3 | 0.5523 | 0.5596 |
| Africa4 | 0.1521 | 0.1538 | N_A4 | 0.3241 | 0.3631 |
| Africa5 | 0.1492 | 0.1460 | N_A5 | 0.8437 | 0.8530 |
| Africa6 | 0.1609 | 0.1538 | N_A6 | 1.2396 | 1.2653 |
| Africa7 | 0.1385 | 0.1419 | N_A7 | 0.9865 | 0.9951 |
| Africa8 | 0.1382 | 0.1371 | N_A8 | 0.8721 | 0.9041 |
| Africa9 | 0.1276 | 0.1250 | N_A9 | 0.6895 | 0.6962 |
| Africa10 | 0.0960 | 0.0979 | N_A10 | 1.2575 | 1.2698 |
| Asia1 | 0.2258 | 0.2286 | Oceania1 | 0.3182 | 0.3209 |
| Asia2 | 0.2340 | 0.2264 | Oceania2 | 0.2447 | 0.2477 |
| Asia3 | 0.3148 | 0.3173 | Oceania3 | 0.3717 | 0.3655 |
| Asia4 | 0.3479 | 0.3432 | Oceania4 | 0.4795 | 0.4914 |
| Asia5 | 0.4328 | 0.4501 | Oceania5 | 0.4909 | 0.5057 |
| Asia6 | 0.5425 | 0.5493 | Oceania6 | 0.5837 | 0.5782 |
| Asia7 | 0.6143 | 0.6064 | Oceania7 | 0.5857 | 0.5965 |
| Asia8 | 0.6403 | 0.6455 | Oceania8 | 0.8265 | 0.8353 |
| Asia9 | 0.5294 | 0.6863 | Oceania9 | 0.3350 | 0.3255 |
| Asia10 | 0.5565 | 0.5623 | Oceania10 | 0.2659 | 0.2733 |
| Europe1 | 0.0558 | 0.0572 | S_A1 | 0.2577 | 0.2663 |
| Europe2 | 0.1414 | 0.1433 | S_A2 | 0.2162 | 0.2183 |
| Europe3 | 0.1779 | 0.1894 | S_A3 | 0.2315 | 0.2326 |
| Europe4 | 0.1405 | 0.1423 | S_A4 | 0.2307 | 0.2302 |
| Europe5 | 0.1822 | 0.1839 | S_A5 | 0.2196 | 0.2231 |
| Europe6 | 0.2241 | 0.2257 | S_A6 | 0.2227 | 0.2234 |
| Europe7 | 0.2852 | 0.2880 | S_A7 | 0.2152 | 0.2145 |
| Europe8 | 0.3673 | 0.3688 | S_A8 | 0.2052 | 0.2061 |
| Europe9 | 0.4021 | 0.4028 | S_A9 | 0.1970 | 0.1960 |
| Europe10 | 0.3460 | 0.3481 | S_A10 | 0.1749 | 0.1757 |
From Table 2 we note that the proposed model outperforms a pure autoregressive model, as the
corresponding root mean squared errors of the one-step-ahead predictions are lower in the vast
majority of cases. The overall RMSE is equal to about 0.37 for the proposed model,
against 0.42 for the autoregressive one, further confirming its superiority.
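The accuracy measure used in Table 2 is the usual root mean squared error of the one-step-ahead predictions; for reference (illustrative helper, not the authors' code):

```python
import numpy as np

def rmse(y_true, y_pred) -> float:
    """Root mean squared error between observed and predicted volumes."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))
```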
We now move to the interpretation of the results that can be drawn from our model and,
specifically, from the partial correlations (Equation (6)). In Figure 2, each node represents one of
the 60 groups of traders, and each edge present indicates that two groups are dependent on each
other in terms of their transactions (conditionally on all the others). Conversely, when an edge is
missing, the corresponding groups behave independently of each other (conditionally on all the others).
Each edge is associated with a weight, which corresponds to a partial correlation coefficient. The size
of each edge in Figure 2 is proportional to this weight, while the coloring of an edge
between two nodes indicates the sign of the partial correlation coefficient: green highlights a positive
partial correlation and red a negative one.
-----
**Figure 2. Unregularized Partial Correlation Network.**
What we can observe from the network in Figure 2 is that there exist many interconnections between Bitcoin groups of users. Precisely, the summary statistics provided in the upper left corner of Figure 2 indicate that the network contains a total of 1770 non-zero links between groups. Although the graph is difficult to interpret, some clusters can be identified. We can see about five clusters which for the most part correspond to the continents, with the exception of Europe and North America, which are placed in the same cluster, suggesting a strong dependence between the traders of the two continents. This is something we expected to see, due to the economic and political similarities between the two regions, as well as their shared news flow.
Note also that the groups representing the larger traders in Europe and North America (N_A10, N_A9, Eu10, Eu9) show stronger positive connections than other groups. This may be explained by the fact that these groups have a comparable size of transactions, drawn from a similar set of information, which induces them to behave similarly. If we match this result with that in Figure 1, which indicates the relatively larger volumes of transactions coming from these groups, we obtain a clear indication that these are the groups which can most impact the market. Note also that there exists a strong positive link between Oc10 and Eu9, but not between Oc9 and Eu9. This is consistent with our previous finding: the transaction volumes of Oc10 are more comparable in size to Eu9 than to Eu10 (see Figure 1) and, therefore, they act similarly.
As mentioned previously, in unregularized correlation networks some edges may be present but not statistically significant. In the graphical representation, such situations appear as very weak connections in the network. To prevent this and to correctly identify the significant associations between Bitcoin groups, a crucial step is to impose restrictions that limit (or eliminate) the occurrence of spurious edges. One way to achieve this is to test the statistical significance of the partial correlations.
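One standard way to carry out such a test, sketched below on hypothetical values, is the Fisher z-transform of each partial correlation, whose standard error depends on the sample size and on the number of conditioning variables. This illustrates the general approach, not necessarily the exact test statistic used in the paper:

```python
import numpy as np
from scipy.stats import norm

def significant_edges(P, n, alpha=0.05):
    """Keep only partial-correlation edges significant under the Fisher z-test.

    P: k x k partial correlation matrix; n: sample size.
    Each rho_ij conditions on the remaining k - 2 variables.
    """
    k = P.shape[0]
    se = 1.0 / np.sqrt(n - (k - 2) - 3)                  # z-transform standard error
    z_crit = norm.ppf(1 - alpha / 2)
    z = np.abs(np.arctanh(np.clip(P, -0.999, 0.999)))    # Fisher z of each coefficient
    keep = (z / se > z_crit) & ~np.eye(k, dtype=bool)    # drop the diagonal
    return keep

# Hypothetical 3 x 3 partial correlation matrix and sample size.
P = np.array([[1.00, 0.50, 0.02],
              [0.50, 1.00, 0.01],
              [0.02, 0.01, 1.00]])
mask = significant_edges(P, n=240, alpha=0.05)
print(mask[0, 1], mask[0, 2])   # the strong edge survives, the tiny one is removed
```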
Figure 3 presents the same network containing only the links that are found statistically significant at the 5% and 1% levels of significance, respectively.
**Figure 3. Regularized partial correlation networks (without edges that are not significant).**
Figure 3 shows that the structure of the network does not change significantly if we impose different levels of significance. What we observe from the graphs is that the majority of links present in the unregularized network have disappeared, reducing the total number of links from 1770 to 146 and 137, respectively. Interestingly, even though a significant portion of the links were removed, the clustering of nodes remains the same as in Figure 2. Specifically, we see the formation of clusters corresponding to the continents, and we again see significant interconnection between traders in Europe and North America. Furthermore, we also see a statistically significant positive correlation between Oceania's top group and Europe's, and between Asia's top group and Europe's.
To further confirm our findings, we perform an additional robustness check through the application of the graphical LASSO. As discussed previously, LASSO is a very popular method for eliminating spurious links. Figures 4 and 5 show the networks that emerge from applying the graphical LASSO with different smoothness parameters λ. We remark that, unlike the classical LASSO, in the graphical approach the choice of λ cannot be based on cross-validation, as the estimation is a completely unsupervised process. As we are mainly interested in assessing the robustness of the results, we consider four alternative values of λ and check whether the findings in Figure 3 change.
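A sketch of such a sweep, using scikit-learn's `GraphicalLasso` on simulated (not the paper's) data; the number of non-zero off-diagonal entries of the estimated precision matrix typically shrinks as the penalty λ grows:

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(2)
# Hypothetical standardized volume series for 8 trader groups.
Z = rng.normal(size=(300, 8))
Z[:, 1] += 0.8 * Z[:, 0]        # one genuine dependence

def n_edges(prec, tol=1e-8):
    """Count non-zero off-diagonal entries of a precision matrix (upper triangle)."""
    off = prec - np.diag(np.diag(prec))
    return int((np.abs(off) > tol).sum() // 2)

counts = {}
for lam in (0.001, 0.01, 0.25, 0.5):
    prec = GraphicalLasso(alpha=lam, max_iter=500).fit(Z).precision_
    counts[lam] = n_edges(prec)
print(counts)                    # heavier penalties yield sparser networks
```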
**Figure 4. GLASSO partial correlation networks [varying lambda], 1/2.**
**Figure 5. GLASSO partial correlation networks [varying lambda], 2/2.**
Figures 4 and 5 show that changing λ does change the structure of the network, but the underlying clusters remain the same, thus confirming the close interconnection between Europe and North America, as well as that between the top traders in Oceania and Europe.
A closer inspection of Figure 4 reveals frequent linkages between European and North American nodes, in line with the previous observations. Positive linkages appear more often within each continent than negative ones, whereas negative and positive edges appear with similar frequency between the two continents (see Table 3). The largest two groups in both continents share strong links with each other, confirming that they probably share a common information set. Interestingly, the largest trader group from Asia, As10, has multiple positive edges to several groups in Europe and North America. Considering that most Bitcoin mining farms are based in Asia, and especially in China, it follows that a large amount of capital is acquired in Asia and, therefore, traded from Asia with the rest of the world. Last, note that the largest-volume trading groups from Oceania and South America also share links with each other and with the larger Western groups. This observation leads to the conclusion that the large traders around the world are somewhat connected, possibly communicating with each other. On the other hand, smaller groups, which have less information, show fewer connections around the world.
**Table 3. Count of links between and within North America and Europe.**

| | Positive (λ = 0.001) | Negative (λ = 0.001) | Positive (λ = 0.01) | Negative (λ = 0.01) |
|---|---|---|---|---|
| Within Europe | 17 | 14 | 17 | 13 |
| Within North America | 21 | 13 | 19 | 13 |
| Between Europe and North America | 48 | 53 | 45 | 48 |
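Counts of this kind can be produced mechanically from the signed weight matrix; a toy sketch with hypothetical regions and weights (not the paper's estimates):

```python
import numpy as np

# Hypothetical signed partial-correlation weights among 6 groups:
# nodes 0-2 are "EU" (Europe), nodes 3-5 are "NA" (North America).
region = ["EU", "EU", "EU", "NA", "NA", "NA"]
W = np.zeros((6, 6))
W[0, 1] = 0.4; W[1, 2] = -0.1          # within Europe
W[3, 4] = 0.3                           # within North America
W[0, 3] = 0.5; W[2, 5] = -0.2           # between the two regions
W = W + W.T                             # symmetric adjacency

def count_links(W, region, a, b):
    """Count (positive, negative) edges within a region (a == b) or between two."""
    pos = neg = 0
    for i in range(len(region)):
        for j in range(i + 1, len(region)):
            if {region[i], region[j]} == ({a} if a == b else {a, b}):
                pos += W[i, j] > 0
                neg += W[i, j] < 0
    return int(pos), int(neg)

print(count_links(W, region, "EU", "EU"))   # (1, 1): one green, one red edge
print(count_links(W, region, "EU", "NA"))   # (1, 1): cross-region edges
```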
Figure 5 shows what happens when we increase the penalty level to λ = 0.25. Most edges vanish, but the previously found connections persist. The largest trader groups from Europe and North America remain connected, and the edges from Oc9, S_A10 and As10 to them persist; the connection runs via the largest groups in Europe, namely Eu9 and Eu10. Other persisting edges exist between the smaller groups from Asia and Europe, albeit with small magnitude. Within the continents, many edges are not affected by the penalty, emphasizing the importance of regional connectedness. Finally, when the penalty parameter increases to λ = 0.5, most cross-continent
edges are ruled out, except for the ones between the largest groups in Europe and North America.
The remaining edges only appear within the continents.
To further establish the robustness of the results to the varying value of λ, Table 4 compares some
centrality values, averaged over the whole network, under the four considered values of λ.
**Table 4. Average centralities across different lambda parameters.**

| | _λ_ = 0.001 | _λ_ = 0.01 | _λ_ = 0.25 | _λ_ = 0.5 |
|---|---|---|---|---|
| Average degree | 1.189206937 | 1.157479 | 0.855028 | 0.663931 |
| Average betweenness | 270.5666667 | 288.5667 | 269 | 39.3 |
| Average closeness | 0.000448235 | 0.000428 | 0 | 0 |
From Table 4, note that, consistently with our previous findings, increasing the parameter λ decreases the average centrality, whether measured by degree, betweenness or closeness. Regardless, our main conclusions remain stable.
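Averages of this kind can be reproduced with standard graph tooling; the sketch below uses `networkx` on a small hypothetical graph (the node names echo the paper's group labels, but the edges are purely illustrative):

```python
import networkx as nx

# Toy undirected network among six trader groups (illustrative edges only).
G = nx.Graph()
G.add_edges_from([("Eu9", "Eu10"), ("Eu9", "N_A10"), ("Eu10", "N_A9"),
                  ("N_A9", "N_A10"), ("Oc10", "Eu9"), ("As10", "Eu10")])

avg = lambda d: sum(d.values()) / len(d)

print(avg(dict(G.degree())))                              # average degree
print(avg(nx.betweenness_centrality(G, normalized=False)))  # average betweenness
print(avg(nx.closeness_centrality(G)))                    # average closeness
```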
To summarise, our empirical findings answer our research question: which groups of traders most affect the Bitcoin market? These groups were found among the top two classes of traders in North America and Europe, strongly and positively connected to each other. These traders are linked to the others, affecting their behaviour; in particular, they are especially linked with the top traders from Oceania and South America. In addition, top traders from Asia, especially the larger ones, are highly linked to the others, likely as a result of their mining activity.
**5. Conclusions**
In this paper, we proposed a model that explains the dynamics of Bitcoin trading volumes, based on a correlation network VAR process that models the interconnections between different groups of traders.
Our main methodological contribution is the introduction of partial correlations and correlation networks into VAR models. This allows us to describe the correlation patterns between trading volumes and to disentangle the autoregressive component of volumes from their contemporaneous part. The introduction of VAR correlation networks also allows us to build a volume predictive model that leverages the information contained in the correlation patterns.
Our main financial findings show that trading volumes are highly correlated within geographical regions. Groups of traders with high transaction volumes across all continents covary in the network model, leading to the conclusion that these groups share a mutual information set. The results are robust across various penalized network models. This result may have different economic explanations, such as common behaviour, a common time zone, or similar institutional and legal contexts.
Our results also contribute to the identification of the groups of Bitcoin traders that are the most likely influencers of the market. These are found to be high-volume traders, especially from North America, Europe, and Asia. These results are in line with the expectation that trading follows news-sharing patterns and the localization of major Bitcoin mining.
The proposed model can be very useful for policy makers and regulators. It can be used to predict
“regular” trading volumes and, therefore, identify anomalies. Our empirical findings show that the
proposed model is able to predict trading volumes with an error that is lower than that of a pure
autoregressive model.
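A minimal sketch of such anomaly flagging: mark a volume as anomalous when it deviates from the model's prediction by more than a chosen multiple of the residual standard deviation (the threshold and numbers below are illustrative, not taken from the paper):

```python
import numpy as np

def flag_anomalies(actual, predicted, resid_std, z=3.0):
    """Flag observations deviating from the prediction by more than z residual sd's."""
    return np.abs(np.asarray(actual) - np.asarray(predicted)) > z * resid_std

# Hypothetical volumes: the model predicts 1.00 for each period; sigma = 0.2.
actual = np.array([1.02, 0.98, 2.75, 1.01])
predicted = np.full(4, 1.00)
flags = flag_anomalies(actual, predicted, resid_std=0.2)
print(flags)   # only the third volume exceeds the 3-sigma band
```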
Our results suggest that policy makers and regulators interested in preserving the integrity of Bitcoin markets should pay particular attention to the transactions coming from large-volume traders, especially those from America, Europe and Asia, which have the potential to disrupt the market.
The main weakness of this work is related to the available sample. It refers to a specific cryptoasset, Bitcoin; it relates to a specific period of time; and it is taken directly from blockchain transactions, rather than from market exchanges. These limitations derive from the proprietary nature of the data that was made available to us. However, we believe that our model is rather general and can easily be extended to a different database, in particular to deal with transactions that take place on crypto exchanges, which are more frequent than the on-blockchain transactions considered here. Further work may concern acquiring data on the electronic identity of the traders, to investigate the reasons for "regional" behaviours, as also discussed in Tasca et al. (2018) and Foley et al. (2019).
From a methodological viewpoint, it may be worth extending correlation network models to become time-dependent, although this requires data with a higher frequency. In addition, it may be worth extending the model to account for exogenous factors, such as regulatory interventions, transaction fees, sentiment and media coverage. This may require an event-based analysis, aimed at understanding not only trading patterns but also what may originate them. To achieve this task, our work could be extended with Bayesian network models, following Giudici et al. (2014), Giudici and Bilotta (2014) and Cerchiello and Giudici (2016).
**Author Contributions: All four authors have contributed to the paper and, in particular, to its conceptualization,**
methodology, software, data curation, validation, writing, review and editing. The paper work has been
coordinated by the corresponding author. All authors have read and agreed to the published version of the
manuscript.
**Funding: This research received no specific external funding.**
**Acknowledgments: We acknowledge useful comments and suggestions from the participants at the workshops**
where the paper was presented. We also acknowledge very useful comments and suggestions from the four
referees that have commented the paper very thoroughly. The comments have helped us to substantially revise
the paper. This research has received funding from the European Union’s Horizon 2020 research and innovation
program “FIN-TECH: A Financial supervision and Technology compliance training programme” under the grant
agreement No 825215 (Topic: ICT-35-2018, Type of action: CSA). We also gratefully acknowledge the financial
support of Singapore Ministry of Education Academic Research Fund Tier 1 at National University of Singapore.
**Conflicts of Interest: The authors declare no conflict of interest.**
**References**
Acharya, Viral, Robert Engle, and Matthew Richardson. 2012. Capital shortfall: A new approach to ranking and
[regulating systemic risks. American Economic Review: Papers and Proceedings 102: 59–64. [CrossRef]](http://dx.doi.org/10.1257/aer.102.3.59)
Acharya, Viral, Lasse Pedersen, Thomas Philippon, and Matthew Richardson. 2016. Measuring systemic risk.
_[Review of Financial Studies 30: 2–47. [CrossRef]](http://dx.doi.org/10.1093/rfs/hhw088)_
Adrian, Tobias, and Markus Brunnermeier. 2015. CoVaR. American Economic Review: Papers and Proceedings 106:
[1705–41. [CrossRef]](http://dx.doi.org/10.1257/aer.20120555)
Ahelegbey, Daniel, Monica Billio, and Roberto Casarin. 2016. Bayesian graphical models for structural vector
[autoregressive processes. Journal of Applied Econometrics 31: 357–86. [CrossRef]](http://dx.doi.org/10.1002/jae.2443)
Battiston, Stefano, Domenico Delli Gatti, Mauro Gallegati, Bruce Greenwald, and Joseph Stiglitz. 2012. Liaisons
dangereuses: Increasing connectivity risk sharing, and systemic risk. Journal of Economic Dynamics and
_[Control 36: 1121–41. [CrossRef]](http://dx.doi.org/10.1016/j.jedc.2012.04.001)_
Bianchi, Daniele. 2019. Cryptocurrencies as an asset class? an empirical assessment. _Journal of Alternative_
_[Investments. [CrossRef]](http://dx.doi.org/10.2139/ssrn.3077685)_
Bianchi, Daniele, and Alexander Dickerson. 2019. Trading volumes in cryptocurrency markets WBS Finance Group
_[Research Paper. [CrossRef]](http://dx.doi.org/10.2139/ssrn.3239670)_
Billio, Monica, Mila Getmansky, Andrew Lo, and Loriana Pelizzon. 2012. Econometric measures of connectedness
[and systemic risk in the finance and insurance sectors. Journal of Financial Economics 104: 535–59. [CrossRef]](http://dx.doi.org/10.1016/j.jfineco.2011.12.010)
Bohte, Rick, and Luca Rossini. 2019. Comparing the forecasting of cryptocurrencies by bayesian time-varying
[volatility models. Journal of Risk and Financial Management 12: 150. [CrossRef]](http://dx.doi.org/10.3390/jrfm12030150)
Bouoiyour, Jamal, Refk Selmi, Aviral Tiwari, and Olayeni Olaulu. 2016. What drives Bitcoin price? Economics
_Bulletin 36: 843–50._
Bouri, Elie, Chi Keung Lau, Brian Lucey, and David Roubaud. 2019. Trading volume and the predictability of
[return and volatility in the cryptocurrency market. Finance Research Letters 29: 340–46. [CrossRef]](http://dx.doi.org/10.1016/j.frl.2018.08.015)
Catania, Leopoldo, Stefano Grassi, and Francesco Ravazzolo. 2019. Forecasting cryptocurrencies under model
[and parameter instability. International Journal of Forecasting 35: 485–501. [CrossRef]](http://dx.doi.org/10.1016/j.ijforecast.2018.09.005)
Cerchiello, Paola, and Paolo Giudici. 2016. Big data analysis for financial risk management. _Journal of Big Data 3:_
[1–18. [CrossRef]](http://dx.doi.org/10.1186/s40537-016-0053-4)
Chen, Ying, Simon Trimborn, and Jiejie Zhang. 2018. Discover Regional and Size Effects in Global Bitcoin
[Blockchain Via Sparse-Group Network Autoregressive Modeling. Available online: https://papers.ssrn.](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3245031)
[com/sol3/papers.cfm?abstract_id=3245031 (accessed on 1 October 2019). [CrossRef]](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3245031)
Diebold, Francis, and Kamil Yilmaz. 2014. On the network topology of variance decompositions: Measuring the
[connectedness of financial firms. Journal of Econometrics 182: 119–34. [CrossRef]](http://dx.doi.org/10.1016/j.jeconom.2014.04.012)
Elendner, Hermann, Simon Trimborn, Bobby Ong, and Teik Ming Lee. 2017. The Cross-Section of
Crypto-Currencies as Financial Assets: Investing in crypto-currencies beyond bitcoin. In Handbook of
_Blockchain, Digital Finance and Inclusion: Cryptocurrency, FinTech, InsurTech, and Regulation 1st ed. Edited by_
D. Lee Kuo Chuen and R. Deng. Amsterdam: Elsevier, vol. 1, pp. 145–73.
Foley, Sean, Jonathan Karlsen, and Talis Putnins. 2019. Sex, Drugs and Bitcoin: How much illegal activity is
financed through cryptocurrencies? Review of Financial Studies 32: 1798–853.
Giudici, Paolo, and Iman Abu-Hashish. 2019. What determines bitcoin exchange prices? a network var approach.
_[Finance Research Letters 28: 309–18. [CrossRef]](http://dx.doi.org/10.1016/j.frl.2018.05.013)_
Giudici, Paolo, and Annalisa Bilotta. 2004. Modelling operational losses: A Bayesian approach. Quality and
_[Reliability Engineering 20: 407–17. [CrossRef]](http://dx.doi.org/10.1002/qre.655)_
Giudici, Paolo, Maura Mezzetti, and Pietro Muliere. 2003. Mixtures of products of Dirichlet process for variable
selection in survival analysis. _[Journal of Statistical Planning and Inference 111: 101–15. [CrossRef]](http://dx.doi.org/10.1016/S0378-3758(02)00291-4)_
Giudici, Paolo, and Paolo Pagnottoni. 2019a. High frequency price change spillovers in bitcoin exchange markets.
_[Risks 7: 111. [CrossRef]](http://dx.doi.org/10.3390/risks7040111)_
Giudici, Paolo, and Paolo Pagnottoni. 2019b. Vector error correction models to measure connectedness of bitcoin
[exchange markets. Applied Stochastic Models in Business and Industry, in press. [CrossRef]](http://dx.doi.org/10.1002/asmb.2478)
Giudici, Paolo, and Gloria Polinesi. 2019. Crypto price discovery through correlation networks. _Annals of_
_[Operations Research. [CrossRef]](http://dx.doi.org/10.1007/s10479-019-03282-3)_
Giudici, Paolo, and Alessandro Spelta. 2016. Graphical network models for international financial flows. Journal
_[of Business and Economic Statistics 34: 128–38. [CrossRef]](http://dx.doi.org/10.1080/07350015.2015.1017643)_
Ji, Qiang, Elie Bouri, Chi Keung Lau, and David Roubaud. 2019. Dynamic connectedness and integration in
[cryptocurrency markets. International Review of Financial Analysis 63: 257–72. [CrossRef]](http://dx.doi.org/10.1016/j.irfa.2018.12.002)
Lauritzen, Steffen. 1996. Graphical Models. Oxford: Oxford University Press.
Lorenz, Jan, Stefano Battiston, and Frank Schweitzer. 2009. Systemic risk in a unifying framework for cascading
processes on networks. The European Physical Journal B—Condensed Matter and Complex Systems 71: 441–60.
[[CrossRef]](http://dx.doi.org/10.1140/epjb/e2009-00347-4)
Makarov, Igor, and Antoinette Schoar. 2019. Trading and arbitrage in cryptocurrency markets. Journal of Financial
_[Economics. [CrossRef]](http://dx.doi.org/10.1016/j)_
Tasca, Paolo, Shaowen Liu, and Adam Hayes. 2018. The evolution of bitcoin economy: Extracting and analyzing
[the network of payment relationship. The Journal of Risk Finance 19: 94–126. [CrossRef]](http://dx.doi.org/10.1108/JRF-03-2017-0059)
Whittaker, Joe. 1990. Graphical Models in Applied Multivariate Statistics. Chichester: John Wiley and Sons.
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access
article distributed under the terms and conditions of the Creative Commons Attribution
[(CC BY) license (http://creativecommons.org/licenses/by/4.0/).](http://creativecommons.org/licenses/by/4.0/.)
_International Journal of Advanced Computer Science and Applications_
# Fine-grained Access Control Method for Blockchain Data Sharing based on Cloud Platform Big Data
###### Yu Qiu*, Biying Sun, Qian Dang, Chunhui Du, Na Li
State Grid Gansu Electric Power Company Internet Division, Lanzhou, China
**_Abstract—Blockchain technology has the advantages of decentralization, de-trust, and non-tampering, which breaks through the limitations of traditional centralized technology, so it has gradually become the key technology of power data security storage and privacy protection. In the existing smart grid framework, the grid operator is a centralized key distribution organization responsible for sending all the secret credentials, so it is prone to a single point of failure, resulting in large-scale loss of personal information. To solve the problem of inflexible access control in the smart grid data-sharing framework, and considering the limitations of multi-party cooperation among grid operators and efficiency, an attribute-based access control scheme supporting privacy preservation in the smart grid is constructed in this paper. A fine-grained access control scheme supporting privacy protection is designed and extended to the smart grid system, which enables the system to achieve fine-grained access control of power data. A decryption test algorithm is added before the decryption algorithm. Finally, through performance analysis and comparison with other schemes, it is verified that the performance of this system is 7% higher than the traditional method and the storage cost is 9.5% lower, which reflects the superiority of the system. Full optimization of the access policy is achieved, and the scheme is shown to be more efficient in implementing the coordination and cooperation of multiple authorized agencies during system initialization._**

**_Keywords—Power grid data; blockchain technology; data sharing; fine-grained access control; game strategy; ciphertext key_**
I. INTRODUCTION
With the wide application of big data, fog computing, and
Internet of Things technology, more and more applications
store a large number of users' private data in the near-end fog
node for computing. This solves the problem of insufficient
storage space or limited computing resources of most mobile
terminals in the current Internet of Things environment. At the
same time, with the rise of new network architectures such as
SDN, the computing, and storage capabilities of edge network
devices and core gateway devices are continuously enhanced
[1]. However, because the private data of users can bring
commercial value to criminals, Internet of Things devices with
weak performance have become the main target of hackers
[2]. To prevent the user's data from being stolen, it is
necessary to authenticate all unknown devices in the
environment through identity authentication and other
technical means, and then grant the corresponding device
access to data after passing the identity authentication.
However, most existing identity authentication schemes ignore the user's privacy disclosure during the authentication process, including the user's functional attributes, real identity, and geographical location.
Power data can be used by other organizations outside the
grid system, for example, to calculate costs, monitor
unexpected behavior, and predict future conditions. However,
the power data of a single smart meter contains private
information such as household habits, which needs to be
protected. Therefore, how to balance the availability and
privacy of power data is a problem faced by the smart grid [3].
In addition, RTUs and power consumers want to control
access from users. Users want to get different power
information depending on their specific tasks. For example,
maintainers and system engineers monitor the network, while
costing and analysis will be performed by auditors [4].
Therefore, in the smart grid system, it is particularly important
to achieve fine-grained access control of power data [5].
However, most of the existing smart grid schemes focus on
information aggregation but ignore the privacy protection and
access control in the process of power data sharing.
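At the policy level, fine-grained access control amounts to checking a user's attribute set against an access policy before releasing data. The toy sketch below (attribute names and the policy are invented for illustration; the scheme proposed in this paper enforces policies cryptographically via attribute-based encryption, not via a plaintext check) shows the basic idea of attribute-based authorization:

```python
# A policy here is an OR over AND-clauses: each clause is a set of required
# attributes (all attribute names below are hypothetical).
policy = [
    {"role:auditor", "region:gansu"},            # auditors in Gansu, or ...
    {"role:engineer", "clearance:monitoring"},   # ... engineers cleared for monitoring
]

def satisfies(attributes: set, policy) -> bool:
    """Grant access iff the attribute set covers at least one AND-clause."""
    return any(clause <= attributes for clause in policy)

print(satisfies({"role:auditor", "region:gansu", "dept:billing"}, policy))  # True
print(satisfies({"role:auditor"}, policy))                                  # False
```

In the paper's setting this check is embedded in the ciphertext itself, so that only users whose secret keys encode a satisfying attribute set can decrypt the shared power data.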
Blockchain technology is a trusted storage network composed of distributed peer nodes, consisting of tamper-proof block data and automatically executable smart contract code, and it is characterized by tamper resistance, coordinated autonomy, high security, and trusted decentralized decision-making [6]. In the research of data
sharing mechanisms based on blockchain, Dai Mingjun et al.
[7] promoted the storage space of blockchain through
distributed storage (DS) based on network coding (NC). Yang
Jiachen et al. [8] introduced encryption algorithms to solve the
problem of distributed secure storage of big data. Wang Zuan
et al. [9] separated the original data storage and data
transactions by using a double-chain structure and combined
with proxy re-encryption technology to achieve secure and
reliable data sharing. In 2016, Alharbi et al. [10] proposed an
efficient privacy-preserving identity-based signature (IBS)
scheme for smart grid communication. In 2017, a smart grid
communication model [11] was proposed by Sedaghat et al.,
in which the cloud proxy service center, as a trusted third
party with powerful computing power, is responsible for
partially decrypting the shared ciphertext to reduce the burden
of authorized users. In 2019, a privacy-preserving power data
aggregation scheme [12] was proposed by Liu et al., but this
scheme does not consider the access control of shared data.
This paper aims to design a fine-grained access control
scheme supporting privacy preservation in the cloud
environment. Firstly, a fine-grained access control scheme for
data sharing with a completely hidden access policy is
constructed. Then, based on this, extended research on
application scenarios is carried out, and an attribute-based
access control scheme supporting privacy preservation in a
smart grid is constructed.
The main innovations of this paper are:
_1)_ The access policy and attribute set are transformed into
vectors, and the access policy is completely hidden.
_2)_ An attribute-based access control scheme supporting
privacy preservation in a smart grid is constructed. Combine
blockchain technology to control data sharing and access.
_3)_ The scheme in this paper can realize the independent
work of multiple distribution network operators, realize
lightweight encryption, and improve decryption efficiency.
The content and structure of this paper are as follows:
_1)_ Elaborate the research direction and introduce the research background and content;
_2)_ Introduce the relevant theoretical foundations;
_3)_ Design the security game strategy of the power grid big data access control system;
_4)_ Establish a shared data access scheme for the power grid blockchain;
_5)_ Realize the attribute-based access control scheme for power grid privacy protection;
_6)_ Summarize the paper and outline future work.
II. RELATED WORK
_A._ _Access Control Security Model_
With the rapid development of the ubiquitous power
Internet of Things (IoT), various IoT intelligent terminal
devices deployed in the smart grid generate a large amount of
data. Although the application of cloud Internet of Things technology has effectively solved the problem of massive data collection, storage, and sharing, the smart grid faces a huge number of intelligent terminal devices deployed in all aspects of the grid, sharply increasing data volumes, and a mix of personnel. As a result, data privacy issues pose a serious security threat [13]. These
security threats are mainly manifested in:
Data security risk: The combination of the smart grid with Internet of Things technology and the application of various emerging technologies make the smart grid more complex as a system, increasing the security risk to various types of data [14]. The application of cloud Internet of Things technology effectively realizes the collection, storage, and management of terminal data, but when this data interacts with users, business systems, and power grid researchers, misoperation and illegal access can cause data leakage.
User privacy risk: In the smart grid, while protecting the
privacy data of ordinary users, it is also necessary to prevent
the leakage of grid system data. Users' personal information
and power consumption data belong to users' privacy; and the
important operation data of each link of the smart grid system
also needs to be protected [15]. In the face of distributed attacks by malicious actors, illegal access by malicious users, and illegal operations by staff, the private data of power grid users and systems is threatened.
_B._ _Safety Requirements_
According to the security threat analysis of data privacy
protection in the smart grid cloud Internet of Things, effective
access control methods are adopted to achieve the goal of data
privacy security, and the following security requirements are
considered:
_1)_ _Authentication: The identity of the user connected to_
the smart grid control center needs to be authenticated to
prevent the user from stealing private data under false names.
Data visitors must be authenticated with the control center in
both directions [16].
_2)_ _Data confidentiality: When a visitor in the smart grid_
needs to decrypt and obtain encrypted data, its attribute set
needs to meet the access policy requirements defined by the
data owner, and unauthorized visitors cannot access user data.
_3)_ _Anti-collision attack: Unauthorized users cannot_
combine their key information to decrypt the ciphertext
through the collusion of multiple users.
_4)_ _Forward-backward_ _confidentiality:_ A newly authorized visitor cannot decrypt previously generated ciphertext with his own private key, and a visitor whose authorization has been revoked cannot decrypt subsequently generated ciphertext [17].
_5)_ _Data integrity: All kinds of private data must be_
encrypted before they can be transmitted between entities to
avoid illegal tampering, damage and plagiarism during
transmission and storage [18].
III. POWER GRID BIG DATA ACCESS CONTROL SYSTEM
The system consists of five main bodies, as shown in Fig.
1.
Grid Operator (GO): As a certification center, the GO is
responsible for setting up the smart grid system, distributing
GIDs to users, and granting access to users. In addition, GO
distributes identity keys for legitimate users.
Multiple Distribution Operators (DGOs): As multiple
attribute authorities, each DGO is responsible for establishing
its own domain, managing attributes, and distributing attribute
keys to users according to the attribute set.
Cloud storage server: It is responsible for storing power
data in the form of ciphertext. The cloud storage server does
not participate in the access control and data decryption
process.
Fig. 1. Access Control System Model of Smart Grid.
Power data owners: Power data owners include RTUs and
power consumers. The owner of the power data can define an
access policy, use it to encrypt the power data, and upload it to
the cloud storage server.
Users: Users may be maintainers, system engineers,
researchers, policymakers, and auditors of power systems
[19]. After the user downloads the encrypted power data from
the cloud storage server, if the user wants to decrypt it, he
needs to prove his identity to GO and initiate a key request to
DGOs.
_A._ _Fine-grained Shared Security Game Strategy_

This section elaborates the security model of the scheme through a security game between an attacker $A$ and a simulator $B$. The game consists of the following stages:

Initialization: Attacker $A$ sends the fine-grained authority $DGO_k^*$ to simulator $B$ and obtains the public parameter $pp$ of the system.

Authority establishment: For each fine-grained authority, simulator $B$ runs the authority establishment algorithm, obtains the public key $PK_k$ and the private key $SK_k$, and publishes the public key $PK_k$ to $A$ [20].

Stage 1: Attacker $A$ submits an attribute vector $y$ and a GID and initiates a user key challenge to simulator $B$, where $y$ is generated by encoding an attribute set $S'$ randomly selected by $A$. $B$ runs the user key generation algorithm and replies with the corresponding $SK_{k,j}$ and $SK_{gid}$ to $A$. In Stage 1, $A$ may query keys polynomially many times.

Challenge: $A$ submits two messages $M_0$ and $M_1$ of equal length and two policy vectors $x_0$ and $x_1$ to $B$, where $x_0$ and $x_1$ are encoded from the access policies $W_0'$ and $W_1'$ selected by $A$ [21]. It must hold that neither $x_0$ nor $x_1$ is orthogonal to $y$, that is, $\langle x_0, y\rangle \neq 0$ and $\langle x_1, y\rangle \neq 0$. Simulator $B$ tosses a coin to generate a random bit $\beta \in \{0,1\}$, runs the encryption algorithm to generate the corresponding ciphertext $CT^*$ of $M_\beta$, and sends it to attacker $A$.

Stage 2: As in Stage 1, $A$ again makes user key challenges to $B$, subject to the same restriction that neither $x_0$ nor $x_1$ is orthogonal to the submitted $y$.

Guess: $A$ outputs a guess $\beta'$ of $\beta$. If $\beta' = \beta$, then $A$ wins, and the advantage of $A$ is

$$Adv_A = \left|\Pr[\beta' = \beta] - \frac{1}{2}\right|$$

_B._ _Threshold Access Policy_

The key techniques of threshold access policy encoding are divided into the following two parts:

_1)_ _The access policy $W$ is transformed into a vector $x$:_ First, the power data owner defines an access policy $W = \{t_{1,n_1}, t_{2,n_2}, \ldots, t_{j,n_j}\}$, selects $t$ random coefficients $a_i \in Z_p$, and sets a polynomial $f(x)$ of order $t-1$ as follows:

$$f(x) = a_{t-1}x^{t-1} + \cdots + a_1 x + a_0 \pmod{p} \qquad (1)$$

Then, for each element $t_{i,j}$ in the access policy $W$, the component elements of the corresponding vector $x$ are generated:

$$x_i = \begin{cases} f(t_{i,j}) & t_{i,j} \in W \\ 0 & t_{i,j} \notin W \end{cases}, \quad i = 1, \ldots, L-1; \qquad x_L = -f(0) = -a_0 \qquad (2)$$

_2)_ _Convert the attribute set $S$ to a vector $y$:_ Let $S = \{v_{1,n_1}, v_{2,n_2}, \ldots, v_{j,n_j}\}$ and $U_k = \{w_{1,n_1}, w_{2,n_2}, \ldots, w_{j,n_j}\}$ be two attribute sets of the same length, where $S$ represents the attribute set of the user in the system and $U_k$ represents the attribute set managed by the authority $DGO_k$. For a position that does not correspond to an attribute managed by $U_k$, let $w_{i,j} = 0$, and define $S_k = S \cap U_k = \{v_{i,j} \mid v_{i,j} = w_{i,j},\; v_{i,j} \neq 0\}$. Then, calculate the value of the Lagrange polynomial $\Delta_{v_{i,j},S}(x)$ at $x = 0$, where

$$\Delta_{v_{i,j},S}(x) = \prod_{k \in S,\, k \neq i} \frac{x - v_{k,j}}{v_{i,j} - v_{k,j}}$$

For each element $v_{i,j}$ in the attribute set $S_k$, a component element of the corresponding vector $y$ is generated:

$$y_i = \begin{cases} \Delta_{v_{i,j},S}(0) & v_{i,j} \in S_k \\ 0 & v_{i,j} \notin S_k \end{cases}, \quad i = 1, \ldots, L-1; \qquad y_L = 1 \qquad (3)$$

Attention:

$$\langle x, y\rangle = 0 \;\Longleftrightarrow\; f(0) \text{ is reconstructed from at least } t \text{ positions with } t_{i,j} = v_{i,j} \;\Longleftrightarrow\; S \perp W \qquad (4)$$

The above calculations only appear in the exponential part of the decryption phase.
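The encoding above can be checked numerically. The following Python sketch uses plain modular arithmetic over a toy prime, not the paper's pairing groups; the prime, policy values, and all names are illustrative. It builds $x$ from a 2-of-3 threshold policy and $y$ from a user holding two of the listed attributes, and verifies that the inner product vanishes because the $t$ matched Lagrange shares reconstruct $f(0)$, cancelling $x_L y_L = -f(0)$:

```python
# Toy check of the threshold encoding (plain modular arithmetic, not the
# paper's pairing groups; the prime, policy values, and names are illustrative).
import random

p = 2**61 - 1  # toy prime standing in for the pairing-group order

def eval_poly(coeffs, x):
    """Evaluate a0 + a1*x + ... + a_{t-1}*x^{t-1} mod p (coeffs low-to-high)."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % p
    return acc

def lagrange_at_zero(points, i):
    """Lagrange basis coefficient for interpolation point i, evaluated at x = 0."""
    num, den = 1, 1
    for k in points:
        if k != i:
            num = num * (-k) % p
            den = den * (i - k) % p
    return num * pow(den, p - 2, p) % p  # modular inverse via Fermat's little theorem

# Access policy: "any t = 2 of the attribute values {3, 5, 7}".
policy = [3, 5, 7]
t = 2
coeffs = [random.randrange(1, p) for _ in range(t)]  # polynomial f of degree t - 1

# Data owner: x_i = f(t_i) on policy slots, last slot x_L = -f(0).
x = [eval_poly(coeffs, a) for a in policy] + [(p - eval_poly(coeffs, 0)) % p]

# User holding attributes {3, 5}: Lagrange coefficients on the matched slots,
# zero elsewhere, last slot y_L = 1.
matched = [a for a in policy if a in {3, 5}][:t]
y = [lagrange_at_zero(matched, a) if a in matched else 0 for a in policy] + [1]

inner = sum(xi * yi for xi, yi in zip(x, y)) % p
print(inner)  # 0: the t matched shares reconstruct f(0), cancelling x_L * y_L
```

With fewer than $t$ matching attributes the reconstruction fails and the inner product is nonzero except with negligible probability, which is exactly the decryption-test condition used later.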
IV. BLOCKCHAIN SHARED DATA ACCESS SCHEME
Let $u = \{att_1, \ldots, att_L\}$ be the global attribute set of the system, and let $H : \{0,1\}^* \times Z_p^{L+1} \to Z_p^{k+1}$ be a collision-resistant hash function. The specific construction of the attribute-based access control scheme supporting privacy protection in the smart grid is as follows:
System initialization: This phase consists of the following two algorithms. GO generates the whole system by running the system establishment algorithm, and each DGO generates its own domain by running the authority establishment algorithm.
GO-Setup: Run the group generator $\mathcal{G}$ to generate the bilinear group $(p, g_1, g_2, e, G_1, G_2, G_T)$. GO builds $N$ authorities for the system, $DGO_1, DGO_2, \ldots, DGO_N$, where each $DGO_k$ manages a mutually exclusive attribute set $U_k = \{Att_1, Att_2, \ldots, Att_{n_k}\}$ with $|U_k| = n_k$. Let $\Sigma_{sign} = (KeyGen, Sign, Verify)$ be a signature scheme. Select random matrices $A, B \in Z_p^{(k+1)\times k}$ and $P \in Z_p^{(k+1)\times(k+1)}$, calculate $P_1 = g_1^{A}$, $P_2 = g_2^{B}$, and $X = g_1^{PA}$, and return the common parameter $pp$ as follows:

$$pp = \{G_1, G_2, G_T, e, p, g_1, g_2, P_1, P_2, X, Verify\} \qquad (5)$$

DGOS-Setup: For any authority $DGO_k$ in the system, select two random matrices $U_k, W_k \in Z_p^{(k+1)\times(k+1)}$ and a random vector $a_k \in Z_p^{k+1}$, and calculate $V_{1,k} = g_1^{U_k A}$, $V_{2,k} = g_1^{W_k A}$, and $Y_k = e(g_1, g_2)^{a_k A}$. Then publish the public key $PK_k$ of $DGO_k$ and keep the private key $SK_k$:

$$PK_k = \{V_{1,k}, V_{2,k}, Y_k\}, \qquad SK_k = \{U_k, W_k, a_k\} \qquad (6)$$
Authentication and key distribution: When a user joins the system, GO assigns a unique GID to the user. If the user wants to decrypt the ciphertext, the user first converts the attribute set $S$ into a vector $y = \{y_j \mid j \in [1, L]\}$, and submits the attribute set $S$ and the attribute vector $y$ to request the key from GO. For a legal user who has completed registration, GO distributes the signed identity key $SK_{gid}$ by running the identity key generation algorithm. Next, the user submits the attribute set $S$, the attribute vector $y$, and the GO-signed identity key $SK_{gid}$ to request the attribute keys from the DGOs. Each $DGO_k$ then uses Verify to verify the signature. Once the verification passes, each $DGO_k$ generates the attribute keys $SK_{k,j}$ by running the attribute key generation algorithm and sends them to the user. This process involves the following two algorithms:

Identity key generation (GO-KeyGen): The authentication center GO randomly selects a vector $r \in Z_p^{k}$, sets $u = H(GID, y) \in Z_p^{k+1}$, and calculates the user identity key:

$$SK_{gid} = g_2^{\,u}\, P_2^{\,r} = g_2^{\,u + Br} \qquad (7)$$

Attribute key generation (DGOS-KeyGen): For each authority $DGO_k$ in the system, it is first necessary to convert the attribute set $S_k = S \cap U_k$ into the vector $y = \{y_j \mid j \in [1, n_k]\}$, where $\sum_{k=1}^{N} n_k = L$, according to the threshold encoding described above, and then calculate the user attribute key:

$$SK_{k,j} = g_2^{\,a_k} \cdot \left(SK_{gid}\right)^{\,y_j U_k + W_k} \qquad (8)$$
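The authenticate-then-issue flow above can be sketched as follows. HMAC here merely stands in for the paper's (Sign, Verify) signature pair; a real deployment would use a public-key signature so that DGOs can verify without holding GO's signing secret, and all key material, names, and message framing are illustrative assumptions:

```python
# Minimal sketch of the GO -> user -> DGO key-request flow. HMAC stands in for
# the (Sign, Verify) signature scheme (assumption: symmetric key shared for
# illustration only; the paper's scheme uses public-key signatures and
# pairing-based keys, not the placeholder digests below).
import hashlib
import hmac
import json

GO_KEY = b"go-signing-key"  # illustrative signing secret held by the Grid Operator

def go_issue_identity_key(gid, y):
    """GO signs (GID, y); the tag plays the role of the signed SK_gid."""
    msg = json.dumps({"gid": gid, "y": y}).encode()
    return hmac.new(GO_KEY, msg, hashlib.sha256).hexdigest()

def dgo_issue_attribute_keys(gid, y, tag, attrs):
    """A DGO_k runs Verify on the GO signature before issuing attribute keys."""
    msg = json.dumps({"gid": gid, "y": y}).encode()
    expected = hmac.new(GO_KEY, msg, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected):
        raise PermissionError("GO signature check failed")
    # Placeholder attribute keys, one per managed attribute the user holds.
    return {a: hashlib.sha256(f"{gid}:{a}".encode()).hexdigest() for a in attrs}

tag = go_issue_identity_key("user-42", [1, 0, 1])
keys = dgo_issue_attribute_keys("user-42", [1, 0, 1], tag, ["Att1", "Att3"])
print(sorted(keys))  # ['Att1', 'Att3']
```

Tampering with the submitted attribute vector invalidates the GO signature, so a DGO refuses to issue keys, which mirrors how the scheme binds the attribute vector $y$ to the identity key via $u = H(GID, y)$.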
Data release: The power data owner defines an access policy $W$ and converts it into a vector $x$; then the power data $M_p$ is encrypted by the following encryption algorithm, and the ciphertext $CT_p$ is uploaded.

Encrypt: The power data owner randomly selects two vectors $s, s' \in Z_p^{k}$ and calculates the ciphertext $CT_p = \{C_0, C_1, C_k, C_j^{*}, C_{k,j}\}$ as follows:

$$C_0 = M_p \cdot \prod_{k=1}^{N} Y_k^{\,s} = M_p \cdot e(g_1, g_2)^{\sum_{k=1}^{N} a_k A s}, \qquad C_1 = P_1^{\,s}$$
$$C_k = V_{2,k}^{\,s} = g_1^{\,W_k A s}, \qquad C_{k,j} = X^{\,x_j s}\, V_{1,k}^{\,s} = g_1^{\,(x_j P + U_k) A s}$$
$$C_j^{*} = P_1^{\,s' x_j} \qquad (k \in [1, N],\; j \in [1, L]) \qquad (9)$$
Data recovery: Any user can access the power data encrypted in the cloud, but only when $S \perp W$ can an authorized user successfully decrypt it. In order to reduce the cost of decryption, the decryption process is divided into two stages: a decryption test and complete decryption. The user first runs the test algorithm to verify whether $S \perp W$. If $F(W, S) = 1$ is output, the user runs the full decryption algorithm; otherwise, decryption is terminated. Details are as follows:

Decryption Test Phase: The user calculates:

$$F(W, S) = \prod_{j=1}^{L} \left(C_j^{*}\right)^{y_j} \qquad (10)$$

Attention: $F(W, S) = 1 \Longleftrightarrow \sum_{j=1}^{L} x_j y_j = 0 \Longleftrightarrow S \perp W$. If the output is $F(W, S) = 1$, this represents $S \perp W$, and the user proceeds to the next stage for full decryption; otherwise, the user terminates decryption.

Dec-Phase: Once the above test phase is passed, indicating $S \perp W$, the user performs the following calculation:

$$\frac{C_0 \cdot e\!\left(\prod_{k=1}^{N}\prod_{j=1}^{L} C_{k,j}^{\,y_j}\, C_k,\; SK_{gid}\right)}{e\!\left(C_1,\; \prod_{k=1}^{N}\prod_{j=1}^{L} SK_{k,j}\right)} = M_p \qquad (11)$$

V. ATTRIBUTE-BASED ACCESS CONTROL SCHEME FOR PRIVACY PROTECTION IN POWER GRID

The scheme consists of the following four algorithms:

Setup($1^\lambda$) → (PK, MSK): Input the security parameter to obtain the public key PK and the master key MSK.

Encrypt(M, PK, W) → CT: Input the message M, the public key PK, and the access policy W to get the ciphertext CT.

KeyGen(PK, MSK, S) → SK: Input the public key PK, the master key MSK, and the attribute set S to get the user key SK.

Decrypt(CT, SK, PK) → M or ⊥: Input the ciphertext CT, the user key SK, and the public key PK; if $S \perp W$, output the message M; otherwise, the algorithm aborts and outputs ⊥.

These algorithms are instantiated by the following phases:

System Setup (GO-Setup): This phase is executed by GO, which inputs the security parameter $1^\lambda$ and obtains the system public parameters $pp$, as well as a pair of signing and verification keys (Sign, Verify).

DGOS-Setup: This algorithm is executed by the DGOs, which input the subscript $k$ of $DGO_k$ and output the public and private keys.

User key generation (KeyGen): This includes two stages, identity key generation and attribute key generation, as follows:

_1)_ Identity key generation phase (GO-KeyGen): This phase is executed by GO. Input the global identity GID of the user to get the identity key $SK_{gid}$ of the user.

_2)_ Attribute key generation phase (DGOS-KeyGen): This phase is performed by the DGOs, inputting the private key $SK_k$ and the encoding vector $y$ of the attribute set $S$, and then outputting the attribute keys $SK_{k,j}$ of the user.

Encrypt: This algorithm is run by the owner of the power data, inputting the public key $PK_k$, the power data $M_p$, and the encoding vector $x$ of the access policy $W$, and outputting the power data in encrypted form $CT_p$.

Decrypt: This algorithm includes two phases, a decryption test and full decryption, as follows:

_1)_ Decryption Test Phase: Input the power data $CT_p$ in encrypted form and the encoding vector $y$. If $F(W, S) = 1$, proceed to the next phase; otherwise, the algorithm is aborted.

_2)_ Complete Decryption Phase (Dec-Phase): Input the power data $CT_p$ in encrypted form, the encoding vector $y$, the user's attribute keys $SK_{k,j}$, and the user's identity key $SK_{gid}$, and output the power data $M_p$ or ⊥.

VI. EXPERIMENTAL ANALYSIS

_A._ _Experimental Platform_

The experimental environment builds a micro-cloud environment to simulate the big data service under a cloud platform. The server-side and client-side configurations are shown in Table I.

The implementation is based on the Pairing-Based Cryptography (PBC) library and uses a 160-bit elliptic curve group over a 512-bit finite field to measure the cost of the test operation and the decryption operation.

_B._ _Performance Analysis_

In the simulation tests, $E_{G_1}$, $E_{G_2}$, and $E_{G_T}$ respectively represent the time cost of an exponentiation in $G_1$, $G_2$, and $G_T$. $N_W$ and $N_S$ represent the number of attributes in the access policy and in the user attribute set, respectively. $\hat{e}$ represents the time cost of computing a bilinear pairing, $O(H)$ represents the time cost of computing a hash function, and $p_i$ indicates the number of possible values of a multi-valued attribute.

_1)_ _Theoretical analysis:_ Table II compares the features of this scheme with schemes [22-25], and Table III compares the computational cost of the key generation, encryption, test, and decryption stages.
TABLE I. EXPERIMENTAL PLATFORM CONFIGURATION PARAMETERS

|Configuration|Server side|Client side|
|---|---|---|
|CPU|Core(TM) i7-10900K 4.6 GHz|Core(TM) i5-10400F 4.3 GHz|
|Memory|128 GB|16 GB|
|System|Windows Server LTSC Preview|Windows 11|

TABLE II. PERFORMANCE COMPARISON OF SMART GRID ACCESS CONTROL SCHEMES

|Plan|Type|Independent authorized agency|Decryption test|Full hiding strategy|IPE|Adaptive security|
|---|---|---|---|---|---|---|
|[22]|MA-ABE|√|×|×|×|×|
|[23]|MA-ABE|√|×|×|×|×|
|[24]|MA-ABE|√|×|×|×|×|
|[25]|D-MA-ABE|×|×|√|√|√|
|This scheme|MA-ABE|√|√|√|√|√|

TABLE III. COMPARISON OF COMPUTATIONAL COMPLEXITY OF SMART GRID ACCESS CONTROL SCHEMES IN DIFFERENT STAGES

|Plan|KeyGen: $E_{G_2}$|KeyGen: $O(H)$|Enc: $\hat{e}$|Enc: $E_{G_1}$|Enc: $E_{G_T}$|Test: $\hat{e}$|Test: $E_{G_1}$|Test: $E_{G_T}$|Dec: $\hat{e}$|Dec: $E_{G_2}$|Dec: $E_{G_T}$|
|---|---|---|---|---|---|---|---|---|---|---|---|
|[25]|$N_S$|$N_S$|1|$1+N_W$|1|-|-|-|2|2|$2p_iN_S$|
|This scheme|$N_S$|0|1|$1+3N_W$|1|0|$p_iN_S$|0|1|$3p_iN_S$|3|
It can be seen from Table III that this scheme is more efficient than scheme [25] in the key generation and decryption stages, because scheme [25] requires hash function computations in those stages.
_2)_ _Simulation test:_ The actual performance of this scheme and scheme [25] is tested. The results show that, compared with scheme [25], this scheme has obvious advantages in both the key distribution phase and the decryption phase. Fig. 2 shows the comparison of the storage
cost of this scheme and the scheme [25] in each stage of the
algorithm. In the simulation, the lengths of the elements in the
bilinear groups _G1,_ _G2, and_ _GT are set to 512 bits. Assume_
that there are 10 authorities in the system, that is, _N =10_,
and specify that each authority manages five attributes.
It can be seen from Fig. 2 that, compared with the scheme
[25], the construction of this scheme requires less space to
store the public key and the secret key of the user. Fig. 3, Fig.
4 and Fig. 5 respectively show the running time comparison of
the user key distribution algorithm, the encryption algorithm
and the decryption algorithm in this scheme and the scheme
[25].
Fig. 3 shows that the running time of the user key
distribution algorithm of the two schemes increases linearly
with the number of attribute sets.
As can be seen from Fig. 4, the encryption algorithm in this scheme is less efficient, which is a compromise made for stronger security.
Fig. 2. Storage Cost Comparison.

Fig. 3. Comparison of Secret Key Generation Time.
Fig. 4. Time Comparison of Encryption Algorithms.

Fig. 5. Time Comparison of Decryption Algorithms.

As shown in Fig. 5, the decryption test algorithm in this scheme takes significantly less time than the full decryption algorithm. If the attribute set of the user does not satisfy the access policy, the scheme only executes the decryption test algorithm and does not need to execute the complete decryption operation. Owing to the decryption test, the time required for successful decryption in this scheme is much shorter than that of scheme [25].

_C._ _Discussion_

If the distributions of the challenge ciphertexts in the two games are statistically identical, no adversary can distinguish the two games with any advantage.

Proof: In the challenge phase, simulator $B$ randomly selects the vector $b^{\perp}$ and the scalar $\hat{s}$, and then generates:

$$C_{k,j} = g_1^{\,(x_{\beta,j} P + U_k)(As + b^{\perp}\hat{s})} = g_1^{\,x_{\beta,j} P (As + b^{\perp}\hat{s})}\, g_1^{\,U_k (As + b^{\perp}\hat{s})} \qquad (12)$$

Given $g_1^{A}$, $g_1^{As + b^{\perp}\hat{s}}$, $g_1^{U_k A}$, and $g_2^{B}$, the term $g_1^{U_k(As + b^{\perp}\hat{s})}$ is uniformly distributed in the group. Therefore, no adversary can distinguish the games $Game_3$ and $Game_4$ with any advantage.

In $Game_4$, from the adversary's view, the choice of the bit $\beta$ by the simulator $B$ is statistically independent, and the adversary cannot win the game with any advantage.

If the k-Linear assumption holds, the privacy-preserving power data access control scheme is IND-CPA secure. Based on the proofs of the above lemmas, under the k-Linear assumption the attacker's advantage in winning the real security game is negligible. Therefore, the attacker cannot break the scheme in PPT.

VII. CONCLUSION

In this paper, a fine-grained access control scheme is proposed to support data sharing in the smart grid. The main work includes:

_1)_ The decentralized attribute-based encryption scheme is extended to the smart grid system, based on a more flexible threshold access structure.

_2)_ In order to improve efficiency, a test phase is added before the data is completely decrypted, which avoids many unnecessary decryption operations.

_3)_ Based on the k-Linear assumption, it is proved that the scheme achieves adaptive security.

Performance analysis shows that this scheme has obvious advantages over similar schemes.

The proposed approach can also protect the private information in trajectory data and improve data availability, which is verified through the experiments. In the next step, when the privacy budget is allocated by the series method, although infinitely many points can be protected, if there are too many position points in the trajectory data, the smaller the privacy budget allocated to the later position points, the larger the corresponding added random noise, and the availability of the data will be reduced.

REFERENCES
[1] Zhang P, Song J. Research progress on performance optimization of blockchain consensus algorithm. Computer Science, 2020, 47(12): 296-303.
[2] Lu G, Xie L, Li X. A comparative study of blockchain consensus
algorithms. Computer Science, 2020, 47(6A): 332-339.
[3] Bamakana S, Motavali A, Bondarti A. A survey of blockchain
consensus algorithms performance evaluation criteria. Expert Systems
with Applications, 2020, 154: 1-21.
[4] W. Sun, L. Wang, P. Wang, and Y. Zhang. Collaborative blockchain for
space-air-ground integrated network. IEEE Wireless Communications,
2020, 27(6): 82-89.
[5] Sel D, Zhang K, Jacobsen H. Towards solving the data availability
problem for sharded Ethereum. SERIAL 2018-Proceedings of the 2018
Workshop on Scalable and Resilient Infrastructures for Distributed
Ledgers, 2018, 1: 25-30.
[6] Liu X, Feng J. Trusted blockchain oracle scheme based on aggregate
signature. Journal of Computer and Communications, 2021: 95-109.
[7] Fan H, Liu Y, Zeng Z, Decentralized privacy-preserving data
aggregation scheme for smart grid based on blockchain. Sensors, 2020,
20(18): 1-14.
[8] Xue Z, Pan X, Lv Z, et al. Application of blockchain in energy and power business. Journal of Physics: Conference Series, 2020, 1626(1): 1-7.
[9] Zeng Z, Li Y, Cao Y, et al. Blockchain technology for information
security of the energy internet: fundamentals, features, strategy and
application. Energies, 2020, 13(4): 1-24.
[10] J. Huang, C. Lin, H. Zhou, Z. Xu, and C. Lin. Research on key
technologies of deduction of multinational power trading in the context
of Global Energy Interconnection. Global Energy Interconnection, 2019,
2(6): 560-566.
[11] Y. Jiang, C. Wang, Y. Wang, and L. Gao. A cross-chain solution to
integrating multiple blockchains for IoT data management. Sensors
(Switzerland), 2019, 19(9): 1-18.
[12] G. van Leeuwen, T. AlSkaif, M. Gibescu, and W. van Sark. An
integrated blockchain-based energy management platform with bilateral
trading for microgrid communities. Applied Energy, 2020, 263(1):
114613.
[13] M. Kim, et al. A secure charging system for electric vehicles based on
blockchain. Sensors (Switzerland), 2019, 19(13): 1-22.
[14] E. Mengelkamp, J. Gärttner, K. Rock. Designing microgrid energy
markets A case study: The Brooklyn Microgrid. Applied Energy, 2018,
210: 870–880.
[15] Z. Ji, X. Wang, C. Cai, and H. Sun. Power entity recognition based on
bidirectional long short-term memory and conditional random fields.
Global Energy Interconnection, 2020, 3(2): 186-192.
[16] T. Alladi, V. Chamola, J. J. P. C. Rodrigues, and S. A. Kozlov.
Blockchain in smart grids: A review on different use cases. Sensors
(Switzerland), 2019, 19(22): 1-25.
[17] J. Zhang, et al. Design scheme for fast charging station for electric
vehicles with distributed photovoltaic power generation. Global Energy
Interconnection, 2019, 2(2): 150-159.
[18] P. Liu, W. Jiang, X. Wang, H. Li, H. Sun. Research and application of
artificial intelligence service platform for the power field. Global Energy
Interconnection, 2020, 3(2): 175-185.
[19] B. Hong, Q. Li, W. Chen, B. Huang, H. Yan, K. Feng. Supply modes for
renewable-based distributed energy systems and their applications: case
studies in China. Global Energy Interconnection, 2020, 3(3): 259-271.
[20] N. Saxena, B. J. Choi. Authentication scheme for flexible charging and discharging of mobile vehicles in the V2G networks. IEEE Transactions on Information Forensics & Security, 2017, 11(7): 1438-1452.
[21] M. S. Rahman, A. Basu, S. Kiyomoto, M. Z. A. Bhuiyan. Privacy
friendly secure bidding for smart grid demand-response. Information
Sciences, 2017, 379: 229-240.
[22] K. Xue, Y. Xue, J. Hong, W. Li, H. Yue, D. S. L. Wei, P. Hong. RAAC:
Robust and Auditable Access Control With Multiple Attribute
Authorities for Public Cloud Storage. IEEE Transactions on Information
Forensics and Security, 2017, 12(4): 953-967.
[23] Aitzhan N Z, Svetinovic D. Security and Privacy in Decentralized
Energy Trading Through Multi-Signatures, Blockchain and Anonymous
Messaging Streams. IEEE Transactions on Dependable and Secure
Computing, 2018, 15(5): 840-852.
[24] Fan, T.; He, Q.; Nie, E.; Chen, S. A study of pricing and trading model
of Blockchain & Big data-based Energy-Internet electricity. In
Proceedings of the 3rd International Conference on Environmental
Science and Material Application (ESMA 2018), Chongqing, China,
25–26 November 2018: 1–12.
[25] C. Pop, et al. Blockchain based decentralized management of demand response programs in smart energy grids. Sensors, 2018, 18(1): 162.
},
{
"paperId": "9afb18333559794bf2933d19d5fbc19e456fab69",
"title": "A study of pricing and trading model of Blockchain & Big data-based Energy-Internet electricity"
}
] | 10,401
Source record: "A Secure E-Coupon Service Based on Blockchain Systems" by Jongbeen Han, Yongseok Son, and Hyeonsang Eom, published in IEEE Access (ISSN 2169-3536). Semantic Scholar: https://www.semanticscholar.org/paper/ffc5faa49654af2c3e1929d53f999a3096ead97c
Received January 17, 2022, accepted February 8, 2022, date of publication February 18, 2022, date of current version March 3, 2022.
_Digital Object Identifier 10.1109/ACCESS.2022.3152765_
# A Secure E-Coupon Service Based on Blockchain Systems
JONGBEEN HAN 1, YONGSEOK SON 2, AND HYEONSANG EOM1
1Department of Computer Science and Engineering, Seoul National University, Seoul 08826, Republic of Korea
2School of Computer Science and Engineering, Chung-Ang University, Seoul 06974, Republic of Korea
Corresponding author: Yongseok Son (sysganda@cau.ac.kr)
This work was supported in part by the BK21 FOUR Intelligence Computing (Department of Computer Science and Engineering, SNU)
funded by the Ministry of Education (MOE, South Korea); in part by the National Research Foundation of Korea (NRF) under Grant
4199990214639; and in part by the National Research Foundation of Korea (NRF) Grant funded by the Korean Government through MSIT
under Grant NRF-2021R1F1A1063438, Grant 2021R1C1C1010861, and Grant KIAT-P0012724.
**ABSTRACT As the popularity of e-commerce grows, an electronic coupon (e-coupon) is widely used due**
to its convenience and portability. In most e-coupon services, the information of e-coupons is managed
on a centralized server. However, e-coupon services are often vulnerable to security issues because of
centralization. For example, when the e-coupon information which is stored in a centralized e-coupon
server is forged, it becomes difficult to match the user and the e-coupon’s owner, and an expired e-coupon
can be used repetitively (i.e., double-spending). To handle this issue, we propose a new e-coupon service
by exploiting a blockchain system to improve the security of the service. To do this, we first design a
server to enable the e-coupon service and communicate with the blockchain system. Second, we devise
a smart contract on the blockchain system to provide integrity of the e-coupon business logic and the
e-coupon’s information. We implemented the proposed service on an Ethereum-based blockchain system.
The experimental results show that our proposed service achieves higher security with a minor performance
overhead compared with an existing e-coupon service.
**INDEX TERMS E-coupon, blockchain, smart contract, security.**
**I. INTRODUCTION**
With the growth of the electronic commerce market, electronic coupons (e-coupons) are being adapted as an effective
marketing tool [1], [2]. The electronic nature of e-coupons
not only provides coupon providers, such as sellers and
marketers, with an efficient way of management but is also
convenient for customers. For example, since an e-coupon
is provided by digital code, e-coupon providers can distribute the e-coupon to the customers online and easily collect
statistics such as downloading and using e-coupons. Also,
customers can easily manage the e-coupons via their mobile
devices or PCs. Because of these advantages of e-coupons,
Global Mobile Coupons Market 2016-2020 reports that the
global mobile coupon market will grow to a compound annual
growth rate (CAGR) of 73.14% over 2016-2020 [3].
Although the e-coupon market evolves and an e-coupon
provides several benefits, there are some challenges. For
easy management, most e-coupon services manage e-coupon
The associate editor coordinating the review of this manuscript and
approving it for publication was Zhangbing Zhou.
information in a centralized system. When an e-coupon is
used, the e-coupon is validated by using the information in the
centralized database system. However, the information can
be easily manipulated by an administrator due to its centralized nature, so there can be forgery and fraudulent
usage of an e-coupon. For example, an e-coupon may be
redeemed multiple times (double spending), or a malicious
attacker may manipulate the discount rate. In the United
States, PennLive estimates real e-coupon crime costs to be
around $300-$600 million dollars per year [4].
To enhance the security of e-coupons, Hsueh et al. [5]
propose an e-coupon system using a hash chain which is
combined with blockchain technology. Our study is in line
with that work in providing the integrity of e-coupon information via blockchain technology. Beyond it, however, we also provide the integrity of operations (e.g., managing e-coupons) by devising a secure smart contract.
In this paper, we propose an e-coupon service based on
a blockchain system to improve the security of the service.
To do this, we first design a server to enable e-coupon service
**FIGURE 1. Example of centralized e-coupon service.**
and communicate with the blockchain system. Second,
we devise an e-coupon smart contract in the blockchain
system to provide the integrity of the operations (i.e., business logic code [6]) and e-coupon information. In addition,
we deploy an e-coupon smart contract to the blockchain
automatically for user convenience.
We apply and implement the proposed service on the
Quorum blockchain system [7] for the security of e-coupon
information and business logic code (i.e., downloading,
giving, and using an e-coupon). Experimental results demonstrate that the proposed service improves security and has
a minor performance overhead compared with existing services. The contributions of our work are as follows:
- We investigate the existing e-coupon processing mechanism in terms of security and e-coupon trading.
- We propose a new service that enables secure e-coupon
trading via an e-coupon smart contract on a blockchain
system and deploys the e-coupon smart contract
automatically.
- We demonstrate that the proposed e-coupon service is
more secure compared with the existing services.
The rest of this paper is organized as follows. Section II
describes the background and motivation. Section III
presents the design and implementation of the proposed
service. Section IV shows the experimental results. Section V
discusses the related work. Section VI concludes this
paper.
**II. BACKGROUND AND MOTIVATION**
_A. SERVING E-COUPONS ON A CENTRALIZED SERVER_
With the expansion of smartphones and the development of
e-commerce, the usage of e-coupons is increasing [8]–[10].
Unlike traditional paper coupons, the e-coupons allow
coupon providers to collect and manage the coupon information easily (e.g., the number of coupons, the number of downloads, lists of customers, or whether coupons have been used).
In addition, e-coupons allow customers to use and manage
the e-coupons via website or smartphone [11]. As shown in
Figure 1, most e-coupons are provided by a centralized server
for managing the e-coupon information since the information
on the centralized server can be managed and collected efficiently. The e-coupon services have the following process to
redeem an e-coupon:
1) To download an e-coupon, a customer registers the
customer information to an e-coupon issuer.
2) The customer downloads an e-coupon from the issuer
via a mobile device or PC.
3) When a customer uses the e-coupon, the customer
sends the e-coupon to the store (i.e., e-coupon
provider).
4) The store requests the issuer to verify the e-coupon.
And the issuer verifies the validity of this e-coupon
according to the database.
In the process of e-coupon services, verifying an e-coupon
is the most important task because the forged or manipulated
e-coupons by malicious attacks lead to a financial problem.
To prevent this forgery of e-coupons, previous works [2],
[12]–[15] propose mechanisms to validate the e-coupons via
message-digest algorithm 5 (MD5), message authentication
code (MAC), and one-way hash function. However, they do
not provide the techniques to prevent the falsification of the
information on a centralized server. In other words, forgery
of e-coupons does not occur during data transmission with these techniques, but forgery of the e-coupon information stored in the e-coupon database can still occur. In addition, an administrator of the e-coupon server can modify any
e-coupon information for his/her own benefits. Therefore, our
study aims to introduce a new e-coupon service that does not
allow unauthorized forging of e-coupons and manipulation of
information on the e-coupon server. To this end, we devise an
e-coupon service based on a blockchain system.
_B. BLOCKCHAIN_
The blockchain technology [16], [17] is an attractive solution
to address security issues (e.g., data integrity) in distributed
systems. To address the issues, most blockchain systems
maintain a time-stamped chain of the blocks with every participating user. The block consists of a block header and block
body. The block body includes transactions. The block header
includes a previous block hash and the root of the Merkle
tree [18] generated with the transactions of the block body,
etc. The blocks are chained together by the previous block
hash and a new block can only be appended to the end of
the chain. With these features, the transactions stored in the
blockchain can not be updated or deleted due to the chain of
historical transactions. Thus, a blockchain system can provide
the Byzantine fault tolerance (BFT) [19] and inter-individual
transfers without an intermediate entity.
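As a rough illustration of why chained hashes make the stored transactions tamper-evident, the Python sketch below builds a toy chain and verifies it. It is a simplification under stated assumptions: SHA-256 stands in for the blockchain's hash function, and a flat hash over the transaction list stands in for a Merkle root.

```python
import hashlib
import json

def block_hash(block):
    # Hash the block header fields deterministically.
    header = json.dumps(
        {"prev": block["prev"], "txs_root": block["txs_root"]},
        sort_keys=True,
    )
    return hashlib.sha256(header.encode()).hexdigest()

def txs_root(txs):
    # Simplified stand-in for a Merkle root: one hash over all transactions.
    return hashlib.sha256("".join(txs).encode()).hexdigest()

def append_block(chain, txs):
    # A new block commits to the previous block's header hash.
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev": prev, "txs_root": txs_root(txs), "txs": list(txs)})

def verify(chain):
    # A block is valid only if it commits to its own transactions
    # and to the hash of the previous block's header.
    for i, block in enumerate(chain):
        if block["txs_root"] != txs_root(block["txs"]):
            return False
        expected_prev = block_hash(chain[i - 1]) if i > 0 else "0" * 64
        if block["prev"] != expected_prev:
            return False
    return True

chain = []
append_block(chain, ["issue coupon A"])
append_block(chain, ["download coupon A"])
assert verify(chain)

# Tampering with a historical transaction breaks verification.
chain[0]["txs"] = ["issue coupon A", "issue forged coupon B"]
assert not verify(chain)
```

Changing any historical transaction invalidates its block's transaction root, and replacing a whole block invalidates every later block's `prev` link, which is the property the text above relies on.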
Among the blockchain systems, Ethereum is one of the popular blockchain-based platforms that provide smart contracts.
A smart contract is a set of promises in a digital form which
users perform [20]. A smart contract can run consistently on
all the Ethereum nodes without the arbitration of a trusted
entity because the business logic code and status value (which
is the result of a smart contract) of the smart contracts are
stored in the blockchain [21], [22]. With these features,
users can construct distributed applications (DApps) with
anonymity, transparency, immediacy, and high-level security
via the smart contract. Although smart contracts enhance
**FIGURE 2. Overall architecture of the e-coupon service.**
security, there are shortcomings. For example, it is difficult
for users to manually build a smart contract, which can
decrease the usability of the smart contract. Thus, we devise
a secure and highly usable e-coupon service by exploiting the
high-level security of blockchain and deploying an e-coupon
smart contract automatically.
**III. DESIGN AND IMPLEMENTATION**
To achieve higher security and usability of an e-coupon service, we propose a new secure e-coupon service by exploiting
blockchain and smart contracts. In our service, we provide the
integrity of the business logic and e-coupon’s information by
adopting blockchain.
_A. OVERVIEW_
Figure 2 shows the overall architecture of the proposed
e-coupon service. The e-coupon service consists of three
layers: an application, e-coupon server, and Ethereum-based
blockchain. The application is similar to the existing applications except for signing transactions and sending the transactions to the blockchain. The e-coupon server is a broker
that delivers member information and e-coupon information
stored in the blockchain to the application. The Ethereum-based blockchain validates e-coupon transactions and stores
the data into the blockchain. Also, e-coupon smart contracts operate via the Ethereum virtual machine (EVM),
a sandboxed virtual machine implicitly enclosed within each
complete Ethereum node, capable of executing the contract
bytecode.
We consider blockchain architecture for improving the performance of the blockchain in e-coupon service. For example,
Ethereum blockchain stores all transaction states to a smart
contract by using a tree structure (i.e., account storage trie).
Therefore, when the size of stored states increases, the tree
size also increases. This result can increase the tree search
time to store and retrieve the state information. Therefore,
this scheme may show performance degradation in storing
or retrieving the e-coupon state information. On the other
hand, we provide each smart contract for each e-coupon
provider, and each tree in each smart contract manages its
own e-coupon state information. This scheme reduces the
tree depth so that it improves performance in storing and
retrieving e-coupon state information.
In addition, we enable to easily manage the e-coupon
and reduce the cost of development by making and deploying e-coupon smart contracts automatically. To do this, the
proposed e-coupon service provides a smart contract template to e-coupon providers. With this template, the e-coupon
providers can easily create a coupon smart contract and automatically deploy the smart contract to the blockchain without
writing a new smart contract by configuring the e-coupon
information (i.e., the quantity of the coupon, coupon validity
period, coupon type, discount amount, etc.). Therefore, it can
provide convenience to e-coupon providers and reduce the
cost of building the smart contract.
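A minimal sketch of how such a template could be filled in is shown below. The field names (`coupon_type`, `quantity`, `discount_amount`, and so on) are illustrative assumptions, not the paper's actual template parameters; the point is that a provider supplies configuration values and the service validates them before deploying the contract, so no contract code has to be written by hand.

```python
from datetime import date

# Hypothetical template defaults; names are for illustration only.
TEMPLATE_DEFAAULTS = None  # placeholder removed below
TEMPLATE_DEFAULTS = {
    "coupon_type": "discount",  # or "reserve"
    "price": 0,                 # 0 means a free coupon
    "quantity": 100,
    "discount_amount": 0,
}

def build_contract_config(provider_config):
    # Merge provider settings over the defaults and validate them,
    # mimicking how a template spares providers from writing a contract.
    config = {**TEMPLATE_DEFAULTS, **provider_config}
    if config["quantity"] <= 0:
        raise ValueError("quantity must be positive")
    if config["start_date"] > config["expiration_date"]:
        raise ValueError("start date must not be after expiration date")
    return config

config = build_contract_config({
    "coupon_type": "discount",
    "quantity": 50,
    "discount_amount": 10,
    "start_date": date(2022, 3, 1),
    "expiration_date": date(2022, 3, 31),
})
assert config["price"] == 0 and config["quantity"] == 50
```

The validated configuration would then be passed as constructor arguments when the service deploys the e-coupon smart contract on the provider's behalf.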
_B. E-COUPON SERVER_
1) E-COUPON MANAGER
The e-coupon manager provides an interface to deploy an
e-coupon smart contract, get an e-coupon list, download an
e-coupon, use the e-coupon, and provide the e-coupon to
customers. Furthermore, the manager communicates with the
blockchain to obtain and store e-coupon information. For
example, when an e-coupon provider issues an e-coupon, the
e-coupon provider requests to deploy an e-coupon smart contract to the e-coupon manager. Then, the e-coupon manager
generates the transaction that deploys the e-coupon smart
contract on the blockchain. After then, it stores the e-coupon
information and the smart contract address in the server’s
database. By using the information stored in the database,
the e-coupon manager provides e-coupon information to customers. Note that all the e-coupon data stored in the server is
only used for displaying to the application. The data modification must be performed via transaction processing based
on the data in blockchain.
We classify e-coupons into two types which our service
supports. The first type is a discount coupon, which e-coupon
providers use to attract new customers or provide a discount
on a product or service. This coupon can be free, depending
on the choice of the e-coupon provider. The second type is
a reserve coupon, which is used to increase the loyalty of
existing customers. Furthermore, the reserve coupon can be
a point or stamp. The point is used as cash, and the stamp is
used when the quantity set by an e-coupon provider is satisfied for redeeming goods or services. The reserve coupon is
related to payments because customers can obtain the coupon
when they purchase goods or services.
2) MEMBER MANAGER
The member manager manages user information for communicating between the application and the blockchain.
For example, the manager maps the wallet address in the
Ethereum-based blockchain to the user’s ID in the applications (e.g., e-coupon provider or customer). This is because
applications perform the transactions based on the wallet
**Algorithm 1 An Example of an E-Coupon Smart Contract**
1: function downloadCoupon(msg)
2: /* require() is a verification function for a given condition */
3: require(remain_coupons > 0)
4: require(coupons[msg.sender].downloaded == false)
5: require(expirationDate >= now)
6:
7: /* Set a customer and modify the number of coupons */
8: coupons[msg.sender] = Coupon({
9:   downloaded: true,
10:   pending: false
11: });
12: remain_coupons = remain_coupons.sub(1);
13:
14: DownloadCouponEvent(msg.sender)
15: end function
16:
17: function requestCoupon(msg)
18: require(coupons[msg.sender].downloaded == true)
19: require(coupons[msg.sender].pending == false)
20: require(expirationDate >= now)
21: require(startDate <= now)
22:
23: /* Modify a state to use the e-coupon from the customer */
24: coupons[msg.sender].pending = true;
25:
26: RequestCouponEvent(msg.sender)
27: end function
28:
29: function confirmCoupon(msg, customer)
30: require(msg.sender == owner)
31: require(coupons[customer].downloaded == true)
32: require(coupons[customer].pending == true)
33:
34: /* Confirm the use of e-coupon from the customer */
35: coupons[customer].pending = false
36: coupons[customer].downloaded = false
37:
38: ConfirmCouponEvent(customer, now)
39: end function
address in the blockchain, as well as the user’s ID on the
server. In addition, the manager maps the wallet addresses
of the e-coupon provider and customer to the smart contract
addresses of the e-coupon. To provide the privacy of the
members, we do not upload member information on the
blockchain. Instead, we use the member information mapped
between the blockchain and the e-coupon server.
3) PAYMENT MANAGER
The payment manager provides an interface to save information paid by a customer in the blockchain and search
the information. Also, the manager manages the history
of reserve e-coupons created when customers pay for
goods/services or redeem e-coupons to purchase corresponding goods or services. For instance, when a customer pays
for a product or service, the payment manager creates a
transaction related to the payment to the e-coupon smart
contract for saving reserve e-coupons to the blockchain. And
the manager stores payment information in a database and
serves the history to customers.
_C. E-COUPON SMART CONTRACT IN_
_ETHEREUM-BASED BLOCKCHAIN_
We exploit the blockchain to prevent the forgery of e-coupon
information via a consensus algorithm. Also, the smart contract stored in the blockchain does not allow falsification
because all nodes participating in the blockchain network
perform the smart contract’s business logic whether the logic
is correct or not. By exploiting this feature of the smart
contract, we guarantee the integrity of the e-coupon business
logic. The business logic of an e-coupon includes e-coupon
operations (e.g., issue, download, redeem, gift, etc.).
Algorithm 1 shows an example of how we guarantee
integrity using a smart contract for e-coupons. Specifically,
this algorithm describes the main business logic of downloading and using a discount e-coupon. downloadCoupon()
is a function in the smart contract to download an e-coupon
according to a customer request based on the transaction information (Algorithm 1, lines 1-15). To download an e-coupon, downloadCoupon() first validates
whether the e-coupon exists or not (line 3). If the e-coupon
exists, downloadCoupon() validates whether the customer already has the e-coupon or not (line 4). If the customer does not have the e-coupon, downloadCoupon()
finally validates whether the e-coupon is expired or not
(line 5). When the transaction satisfies all the above conditions, downloadCoupon() generates a new state for
the e-coupon which identifies the customer who has downloaded the e-coupon (lines 8-11). Also, it reduces the
number of e-coupons remaining (line 12). Subsequently,
downloadCoupon() generates an event of downloading
e-coupon used by the e-coupon server to track changes
in the state of the e-coupon (line 14). After calling
downloadCoupon(), the changed states are stored in the
blockchain, which cannot be falsified.
To redeem an e-coupon, there are two functions
which are requestCoupon() and confirmCoupon()
(Algorithm 1, lines 17-39). requestCoupon() is a function to use an e-coupon from a customer request (lines 17-27).
requestCoupon() first validates whether the customer
has the e-coupon or not (line 18). If the customer has
the e-coupon, requestCoupon() validates whether the
e-coupon has been used or not (line 19). If the e-coupon
has not been used, requestCoupon() finally validates
whether the e-coupon is available or not (lines 20-21). The
transaction is rejected if even a single condition is not satisfied. Otherwise, requestCoupon() modifies the state to
**TABLE 1. Notation in our e-coupon service.**
use the e-coupon based on the customer requests (line 24).
Next, requestCoupon() generates a request event
for the customer to use the e-coupon (line 26). Subsequently, the event is used by the e-coupon server to notify
the e-coupon provider that there is a request for the use
of e-coupons (line 26). confirmCoupon() is a function
that approves the customer’s request to use the e-coupon
(lines 29-39). First, confirmCoupon() validates whether
the message sender (who sent a transaction) is the owner of
the e-coupon smart contract and the customer has requested
to use the e-coupon (lines 30-32). After this verification,
confirmCoupon() confirms the use of the e-coupon
(lines 35-36). Finally, confirmCoupon() generates an
event that confirms the use of the e-coupon. The event is used
by the e-coupon server to notify the e-coupon provider and the
customer that the e-coupon has been applied (line 38). Note
that all statements in Algorithm 1 are performed among nodes
participating in the blockchain network. Thus, whenever each
statement is executed, the nodes reach a consensus to modify
or check the state. This can guarantee the integrity of the
business operations.
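To make the control flow of Algorithm 1 concrete, the sketch below models the same download/request/confirm checks as an in-memory Python class. This is a plain simulation for illustration, not a deployable smart contract: `require` is emulated by raising an exception, which plays the role of rejecting the transaction.

```python
import time

class CouponContract:
    """Python model of Algorithm 1's discount-coupon logic.

    Names and checks mirror the paper's pseudocode; this is an
    in-memory simulation, not an on-chain contract.
    """

    def __init__(self, owner, remaining, start, expiration):
        self.owner = owner
        self.remaining = remaining
        self.start = start
        self.expiration = expiration
        self.coupons = {}  # sender -> {"downloaded": bool, "pending": bool}

    @staticmethod
    def require(condition):
        # require() rejects the whole transaction if a check fails.
        if not condition:
            raise ValueError("transaction rejected")

    def download_coupon(self, sender, now=None):
        now = now if now is not None else time.time()
        state = self.coupons.get(sender, {"downloaded": False, "pending": False})
        self.require(self.remaining > 0)          # coupons left (line 3)
        self.require(not state["downloaded"])     # not already held (line 4)
        self.require(self.expiration >= now)      # not expired (line 5)
        self.coupons[sender] = {"downloaded": True, "pending": False}
        self.remaining -= 1

    def request_coupon(self, sender, now=None):
        now = now if now is not None else time.time()
        state = self.coupons.get(sender, {"downloaded": False, "pending": False})
        self.require(state["downloaded"])                 # line 18
        self.require(not state["pending"])                # line 19
        self.require(self.start <= now <= self.expiration)  # lines 20-21
        state["pending"] = True
        self.coupons[sender] = state

    def confirm_coupon(self, sender, customer):
        self.require(sender == self.owner)        # only the owner confirms
        state = self.coupons.get(customer, {"downloaded": False, "pending": False})
        self.require(state["downloaded"] and state["pending"])
        self.coupons[customer] = {"downloaded": False, "pending": False}

c = CouponContract(owner="provider", remaining=1, start=0, expiration=10**12)
c.download_coupon("alice")
c.request_coupon("alice")
c.confirm_coupon("provider", "alice")
# Double spending is rejected: the coupon is no longer marked as downloaded.
try:
    c.request_coupon("alice")
    raise AssertionError("double spend should have been rejected")
except ValueError:
    pass
```

On the real blockchain every one of these state changes would additionally go through consensus, which is what prevents a single party from falsifying the outcome.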
The process for reserve e-coupons is similar to that
for the discount e-coupon. Exceptionally, for the reserve
e-coupons, the smart contract manages the payment, the
reserve e-coupon (i.e., point or stamp) related to an e-coupon
provider, redeemable goods or services with the reserve
e-coupon, etc. For example, when a customer pays for a
product or service, the smart contract calculates the number of
coupons provided based on the configuration of the e-coupon
provider. In addition, the smart contract enables all types of
e-coupons to be transferred between customers.
_D. PROCESSING E-COUPON OPERATIONS_
In this section, we explain how to process each e-coupon
operation between application, e-coupon server, and
Ethereum-based blockchain. Table 1 lists the notations we
use in our e-coupon service. As shown in the table, UID is
a registered user identifier stored in the e-coupon server to
identify a member. It is used when the e-coupon server provides the e-coupon information to a corresponding member.
**FIGURE 3. Registration of a new member (i.e., creating a wallet).**
**FIGURE 4. Issuing an e-coupon smart contract.**
C represents the e-coupon information stored in the smart
contract of the blockchain. The initial e-coupon information is determined by an e-coupon provider. This information is updated when the e-coupon is downloaded, used,
or transferred as gift. Addrc and Addrecp are the external
owned address (EOA) of the customer and the e-coupon
provider, respectively. They are the wallet’s addresses used
in the blockchain when e-coupons are distributed, downloaded, used, or given by e-coupon providers or customers.
Addrecsc is a contract address (CA) about an e-coupon
smart contract and a target address to execute the business
logic of the e-coupon. Keyc and Keyecp are the private
keys of the customer and the e-coupon provider, respectively. The keys are used to sign an unsigned transaction
(Txu) including the e-coupon information (C) without a
signature when an e-coupon smart contract is deployed,
downloaded, or used. Txs_c and Txs_ecp refer to the transaction with a signature by an e-coupon provider or customer. The blockchain verifies Txs_c and Txs_ecp using
Pubc or Pubecp which is a public key related to Addrc
and Addrecp. The process of the e-coupon service consists of five steps: (1) registration of a new member,
(2) issuing an e-coupon smart contract, (3) downloading an
e-coupon, (4) gifting an e-coupon, and (5) using an e-coupon.
We explain each step as follows.
**FIGURE 5. Downloading an e-coupon.**
1) REGISTRATION OF A NEW MEMBER
Figure 3 shows the process of registering a new member (i.e.,
an e-coupon provider or a customer). To perform a business
logic operation through a smart contract, each member needs
to create a wallet via createWallet(). The wallet stores
pairs of public and private keys and is configured to interact
with the blockchain. For example, an e-coupon provider uses
the wallet when deploying e-coupon smart contracts, paying
for a product or service, or confirming usage of an e-coupon.
Also, a customer uses the wallet when downloading, giving,
or using an e-coupon.
To create a pair of private and public keys in the wallet, we use public-key cryptography (i.e., a public key infrastructure (PKI) [23]) that generates a pair of private and public
keys such as (Keyc, Pubc, Addrc) or (Keyecp, Pubecp,
Addrecp) as shown in Figure 3. Keyc and Keyecp are random
numbers and the key size is 256 bits (32 bytes). Generated
Keyc and Keyecp are encrypted using the password transmitted by a member for high-level security. Pubc and Pubecp are
derived from Keyc and Keyecp via elliptic curve cryptography (ECC) [24] and the key size is 512 bits (64 bytes). Pubc
and Pubecp are again hashed into a SHA-3 (i.e., Keccak-256),
resulting in 256 bits (32 bytes), and the last 20 bytes are
used as the wallet address (i.e., Addrc or Addrecp), which
is target address of a transaction. After creating the wallet,
the member requests to register the wallet address on the
e-coupon server, and the e-coupon server stores the wallet
address with UID via registerMember(). Finally, the
e-coupon server transfers the result of the registration.
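The address derivation described above (hash the 64-byte public key, then keep the last 20 bytes of the digest) can be sketched as follows. Two caveats: Python's `hashlib.sha3_256` is NIST SHA-3, which produces different digests than Ethereum's pre-standard Keccak-256, so the addresses here would not match real Ethereum addresses; and the "public key" below is fabricated, since deriving a real one from the private key requires a secp256k1 ECC library outside the standard library.

```python
import hashlib
import secrets

def derive_address(public_key: bytes) -> str:
    # Hash the 64-byte (512-bit) public key and keep the last
    # 20 bytes of the 32-byte digest as the wallet address.
    # NOTE: hashlib.sha3_256 is NIST SHA-3, not Ethereum's
    # Keccak-256; the derivation shape is the same, the digests differ.
    assert len(public_key) == 64
    digest = hashlib.sha3_256(public_key).digest()
    return "0x" + digest[-20:].hex()

# A private key is a 256-bit random number (32 bytes). The matching
# ECC public key would come from secp256k1; here a 64-byte stand-in
# is fabricated purely to demonstrate the hashing step.
private_key = secrets.token_bytes(32)
fake_public_key = hashlib.sha3_256(private_key).digest() * 2  # 64 bytes, illustration only
address = derive_address(fake_public_key)
assert address.startswith("0x") and len(address) == 42
```

The resulting 20-byte address is what the member registers with the e-coupon server alongside their UID.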
2) ISSUING AN E-COUPON SMART CONTRACT
Figure 4 shows the procedure of issuing an e-coupon smart
contract. When an e-coupon provider issues an e-coupon
(e.g., tickets, gift certificates, discount coupons, etc.), the
e-coupon provider sets the e-coupon information (C), including the price, the number of e-coupons, start date, expiration
**FIGURE 6. Gifting an e-coupon.**
date, etc. The price of an e-coupon is what a customer has
to pay when downloading the e-coupon. When the price is
set to 0, the e-coupon is classified as a free coupon. The
number of e-coupons indicates the total count of remaining
e-coupons. This means that customers cannot download the
e-coupon anymore if the number is 0. The start date shows
the starting date when a customer can download the e-coupon.
The expiration date shows the duration of an e-coupon. When
an e-coupon expires, all related operations will be disabled
(such as downloading, giving, and using the e-coupon).
After setting the e-coupon information (C), the e-coupon
provider requests the e-coupon server to create Txu, which
is a transaction to generate the e-coupon smart contract
via createContractTx(). The e-coupon server creates Txu with C, UID, Addrecp and returns Txu to the
e-coupon provider. The e-coupon provider checks Txu and
signs Txu with Keyecp via signTx(). And then, the
e-coupon provider transmits Txs_ecp to the blockchain via
deploySmartContract(). If the transaction of the
e-coupon smart contract (Txs_ecp) is valid, the transaction is
processed to issue the contract and the e-coupon information
is stored in the blockchain. After that, the e-coupon provider
requests to register the smart contract address (Addrecsc)
on the e-coupon server via registerContract().
At this time, to synchronize the e-coupon server and the
blockchain, the e-coupon server gets the e-coupon information (C) of the blockchain and stores it in its database
via getCouponInfo(). Furthermore, the e-coupon server
transfers the result of the smart contract registration for the
e-coupon smart contract to the e-coupon provider. Finally, the
e-coupon information can be provided to customers so that
they obtain the e-coupon list to download e-coupons.
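The createContractTx/signTx/deploySmartContract round-trip of Figure 4 can be illustrated with a mock signature scheme. Real transactions are signed with ECDSA over secp256k1; since the Python standard library has no ECDSA, an HMAC over the serialized transaction stands in here, so verification reuses the signer's key instead of a public key (a deliberate simplification, labeled as such in the comments).

```python
import hashlib
import hmac
import json

# Mock of the Figure 4 round-trip: the server builds the unsigned
# transaction Txu, the provider signs it, the blockchain verifies it.

def create_contract_tx(coupon_info, sender_addr):
    # The e-coupon server builds the unsigned transaction Txu.
    return {"from": sender_addr, "data": coupon_info}

def sign_tx(tx_unsigned, key: bytes):
    # The provider signs Txu locally. HMAC-SHA256 stands in for
    # ECDSA here; it is NOT how Ethereum signs transactions.
    payload = json.dumps(tx_unsigned, sort_keys=True).encode()
    sig = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return {**tx_unsigned, "sig": sig}

def verify_tx(tx_signed, key: bytes) -> bool:
    # The blockchain checks the signature before processing.
    # (With ECDSA this check would use the public key, not `key`.)
    unsigned = {k: v for k, v in tx_signed.items() if k != "sig"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tx_signed["sig"])

key = b"provider-private-key"
tx = sign_tx(create_contract_tx({"quantity": 100, "price": 0}, "0xecp"), key)
assert verify_tx(tx, key)

# Any change to the transaction after signing invalidates it.
tampered = {**tx, "data": {"quantity": 10**6, "price": 0}}
assert not verify_tx(tampered, key)
```

In the real system verification uses the public key recovered from the ECDSA signature, so the blockchain never sees the private key; the HMAC stand-in loses that asymmetry but preserves the tamper-evidence property the deployment flow relies on.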
3) DOWNLOADING AN E-COUPON
As shown in Figure 5, in the first step of downloading an
e-coupon, customers receive a list of e-coupon information (C) and e-coupon smart contract addresses (Addrecsc)
from the e-coupon server via getCouponList(). The
customer can use a filter to receive all e-coupons or specific e-coupons according to the e-coupon provider’s UID.
Next, the customer creates an e-coupon download transaction (Txu) of the desired e-coupon with the e-coupon smart
-----
**FIGURE 7. Using an e-coupon.**
contract address (Addrecsc) and the wallet address (Addrc)
via createDownloadTx().
Then, the customer signs the transaction with their own
private key (Keyc) via signTx(). The transaction (Txs_c)
is propagated to the blockchain via downloadCoupon()
and the corresponding e-coupon smart contract verifies the
validity of the downloadable e-coupon by checking different
parameters, such as the validity of the transaction signature,
the validity period, and the quantity of the coupon. If the
transaction is valid (i.e., the e-coupon can be downloaded), the
e-coupon smart contract in the blockchain updates its state
to record that the customer has downloaded the e-coupon and
propagates the e-coupon download transaction to the blockchain
network. Otherwise, the blockchain returns a failed result.
When a block containing the transaction is generated, the
e-coupon server obtains the e-coupon download event from
the blockchain and stores it in its database, keeping the
e-coupon server and the blockchain synchronized.
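The contract-side download check described above can be sketched as follows. This is illustrative JavaScript, not the deployed contract; `sigValid` stands in for the signature verification the contract performs, and the state shape is an assumption.

```javascript
// Minimal sketch of the contract-side download validation (names assumed).
function downloadCoupon(state, customerAddr, sigValid, now) {
  const ok = sigValid &&
    now >= state.startDate && now <= state.expirationDate &&
    state.quantity > 0;
  if (!ok) return { ok: false };              // blockchain returns a failed result
  state.quantity -= 1;                        // one fewer coupon in stock
  state.holders[customerAddr] = (state.holders[customerAddr] || 0) + 1;
  return { ok: true };                        // event later synced to the server DB
}

const state = {
  quantity: 2, holders: {},
  startDate: Date.parse('2021-01-01'), expirationDate: Date.parse('2021-12-31'),
};
downloadCoupon(state, '0xCustomer', true, Date.parse('2021-03-01'));
console.log(state.quantity, state.holders['0xCustomer']); // 1 1
```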
4) GIFTING AN E-COUPON
Figure 6 shows the process that a customer gifts an e-coupon
to another customer. First, customer-1 creates a transaction
to give an e-coupon to customer-2 via createGiftTx()
with three parameters: customer-1’s wallet address
(Addrc1), customer-2’s wallet address (Addrc2), and the
e-coupon smart contract address (Addrecsc) associated with
the e-coupon. After creating the transaction, customer-1
signs the transaction (Txu) with its own private key (Keyc1)
via signTx() and passes the signed transaction (Txs_c1)
to the blockchain via giveCoupon(). The e-coupon smart
contract at the address (Addrecsc) executes a gift operation
that validates the transaction (Txs_c1), e.g., whether
customer-1 owns enough coupons and whether the e-coupon
has not expired. Then, the blockchain returns the execution
result such as success or failure of the transaction. Also, the
e-coupon server obtains and stores the gift event and sends
a notification (indicating that the gifted e-coupon is now
available) to customer-2 via notifyGiftCoupon().
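The gift validation above (ownership and expiry checks, then the transfer) can be sketched in illustrative JavaScript. The state shape and names are assumptions, not the paper's contract.

```javascript
// Sketch of the gift operation validated by the smart contract:
// customer-1 must own the coupon and the coupon must not be expired.
function giveCoupon(state, fromAddr, toAddr, now) {
  if (now > state.expirationDate) return { ok: false, reason: 'expired' };
  if ((state.holders[fromAddr] || 0) < 1) return { ok: false, reason: 'no coupon' };
  state.holders[fromAddr] -= 1;
  state.holders[toAddr] = (state.holders[toAddr] || 0) + 1;
  return { ok: true }; // the server then notifies customer-2 of the gift event
}

const st = { expirationDate: Date.parse('2021-12-31'), holders: { '0xC1': 1 } };
console.log(giveCoupon(st, '0xC1', '0xC2', Date.parse('2021-06-01')).ok); // true
console.log(st.holders['0xC1'], st.holders['0xC2']); // 0 1
console.log(giveCoupon(st, '0xC1', '0xC2', Date.parse('2021-06-01')).ok); // false
```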
5) USING AN E-COUPON
Figure 7 shows the usage of an e-coupon. As shown in the
figure, two types of transactions are required when using an
e-coupon. One transaction is issued by the customer to use
the e-coupon; the other is issued by the e-coupon provider
to confirm that use. This prevents the e-coupon provider
from using the e-coupon without the customer’s permission
and guarantees that the provider can use the e-coupon only
at the user’s request.
To use an e-coupon (i.e., a discount or reserve e-coupon),
a customer creates a transaction (Txu) with the customer
wallet address (Addrc) and the e-coupon contract address
(Addrecsc) via createReqUsingCouponTx(). Then,
the customer signs the transaction with their private key
(Keyc) via signTx() and sends the signed transaction
(Txs_c) to the blockchain via requestUsingCoupon().
On the blockchain, the e-coupon smart contract performs the
transaction. When the transaction is valid, the smart contract
modifies the state of the e-coupon with the transaction. Then,
the blockchain returns the result of the usage request to
the customer. Then, the e-coupon server receives the
usage-request event and synchronizes the e-coupon state.
The e-coupon server notifies the e-coupon provider about
the request to use the e-coupon via
notifyReqUsingCoupon().
Subsequently, to obtain approval to use the e-coupon, the
e-coupon provider generates another transaction (Txu) to
confirm the use of the e-coupon via createConfirmUsing
CouponTx() and signs the transaction with the private key
of the e-coupon provider (Keyecp) via signTx(). Then,
the e-coupon provider sends the signed transaction
(Txs_ecp) to the blockchain via confirmUsingCoupon().
When the blockchain receives the transaction (Txs_ecp), the
e-coupon smart contract carries out the transaction and
returns the result regarding the confirmation of using the
e-coupon. Once the use of the e-coupon is approved, the
e-coupon server receives the usage-confirmation event and
notifies the customer and the e-coupon
provider via notifyUsingCoupon().
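The two-transaction usage flow above (customer request, then provider confirmation) can be sketched as a small state machine. This is illustrative JavaScript under assumed state names, not the actual contract; it also shows why the provider cannot consume a coupon without a prior customer request.

```javascript
// Sketch of the two-phase usage flow: the customer requests use,
// then the provider confirms it; only confirmation consumes the coupon.
function requestUsingCoupon(state, customerAddr) {
  if ((state.holders[customerAddr] || 0) < 1) return false;
  state.pending[customerAddr] = true;        // request recorded on-chain
  return true;
}
function confirmUsingCoupon(state, customerAddr) {
  if (!state.pending[customerAddr]) return false; // provider cannot act alone
  delete state.pending[customerAddr];
  state.holders[customerAddr] -= 1;          // coupon consumed on confirmation
  return true;
}

const s = { holders: { '0xC': 1 }, pending: {} };
console.log(confirmUsingCoupon(s, '0xC'));   // false: no customer request yet
console.log(requestUsingCoupon(s, '0xC'));   // true
console.log(confirmUsingCoupon(s, '0xC'));   // true
```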
_E. DEMONSTRATION OF THE PROPOSED_
_E-COUPON SERVICE_
Based on the proposed smart contract mechanisms,
we develop a proof-of-concept (PoC) service, as shown in
Figure 8. The figure shows the screens for the main page,
the list of stores, the detail of a store, the list of e-coupons,
and the e-coupon gift. Figure 8(a) shows the home screen, which
-----
**FIGURE 8. Proposed e-coupon service.**
**FIGURE 9. Performance results.**
includes the e-coupon information. The menu of My coupons
shows the discount e-coupons. The menus of My points and
_My stamps_ show the reserve e-coupons. To download a
discount e-coupon, the customer first gets a list of stores
(i.e., e-coupon providers), as shown in Figure 8(b). Then, the
customer can download the e-coupon from the store detail
screen, as shown in Figure 8(c).
Figure 8(d) shows the e-coupon information owned by a
customer. The customer can also give the e-coupon to other
customers using a gift button. Figure 8(e) shows a gift screen,
where a customer confirms giving an e-coupon to another
customer. Once the customer gives an e-coupon to another
customer, the customer cannot cancel it. Besides these
features, our service has additional functionalities which we
omit from Figure 8, such as customer registration, requesting
the use of an e-coupon, issuing an e-coupon, issuing points
or stamps, and confirming the use of an e-coupon. Also,
by using smart contracts, users can exchange their e-coupons
for the coupons they want, which improves efficiency.
**IV. EVALUATION**
_A. EXPERIMENTAL SETUP_
We perform all experiments on a private blockchain built
with five nodes. Each node includes two Intel Xeon
E5-2683 processors (32 cores in total), 64 GiB DRAM, and runs
the Ubuntu 16.04.5 LTS distribution with Linux kernel 4.4.0.
In our service, we construct an e-coupon server using
-----
web3.js and the Express framework. Also, we use Quorum,
an Ethereum-based distributed ledger protocol that provides
consensus mechanisms for private blockchains [7]. As the
consensus mechanism, we use the Istanbul BFT (Byzantine
Fault Tolerance) algorithm. To build this
BFT-based blockchain, more than 3f + 1 nodes are required
(f is the number of faulty nodes). The minimum number of
nodes is therefore five when the number of faulty nodes is
one. Thus, we build the BFT-based blockchain
with five nodes. In addition, in the case of an Ethereum-based
public blockchain, gas is used to prevent spam on the
Ethereum network. Meanwhile, our scheme uses an
Ethereum-based private blockchain (i.e., Quorum), where
gas is not required since only authorized nodes can access
the blockchain. Thus, we do not consider the gas limit and
gas price.
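For concreteness, the node-count rule stated above can be computed directly. The sketch below follows the text's "more than 3f + 1 nodes" requirement (i.e., the smallest integer strictly greater than 3f + 1); note that many BFT treatments state the bound as N >= 3f + 1 instead.

```javascript
// Minimum cluster size under the rule stated in the text: the network
// needs more than 3f + 1 nodes to tolerate f faulty nodes.
function minNodes(faulty) {
  return 3 * faulty + 2; // smallest integer strictly greater than 3f + 1
}

console.log(minNodes(1)); // 5, matching the five-node testbed
console.log(minNodes(2)); // 8
```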
We empirically evaluate our service using a synthetic
benchmark whose smart contract scenario consists of four
operations: deploying an e-coupon smart contract, and
downloading, giving, and using the e-coupon. To compare
the blockchain-based e-coupon service with a blockchain-free
service, we evaluate both the proposed service (with the
blockchain) and an existing service (without the blockchain).
We measure the performance with JMeter [25], a load-testing
tool for analyzing and measuring the performance of a
variety of web application services.
_B. PERFORMANCE RESULTS_
Figure 9 shows the performance results for each operation in
the existing and proposed services. As experimental parameters,
we set the number of clients to 10, 100, 300, and 1000
in each experiment. Each client generates one transaction,
so the number of clients equals the number of transactions.
The performance metric is transactions per second (TPS).
Note that Figure 9(a) shows the baseline performance,
measured by requesting an empty page from the e-coupon
server; it ranges from 425 TPS to 532 TPS. In contrast, the
other experiments show 38 TPS to 168 TPS for both the
existing and proposed schemes, as shown in
Figures 9(b), 9(c), 9(d), 9(e), and 9(f). This is because each
operation incurs overhead to store data in a database, execute
various business logic, etc.
Overall, Figure 9 shows that the performance increases
when a large number of requests is issued. This is due to the
increased parallelism as the number of threads processed
in the server grows. As shown in Figure 9(b), in the case
of the deploy operation, the performance of the proposed
service is reduced by up to 28%, 21%, 3%, and 3% compared
with the existing service (without blockchain) when
the number of clients is 10, 100, 300, and 1000, respectively.
This is because the proposed service uses the e-coupon
smart contract in the blockchain to improve security. In
particular, deploying an e-coupon smart contract, as depicted
in Figure 9(b), shows somewhat lower throughput than the
other operations (downloading, giving, requesting, and
confirming an e-coupon) because the deploy operation stores
more data and requires more steps, such as validating all
the e-coupon information. Apart from the deploy operation,
the other operations show similar performance.
In the case of downloading, giving, requesting, and confirming
an e-coupon, as shown in Figures 9(c), 9(d), 9(e),
and 9(f), the proposed service shows performance degradation
of up to 18%, 21%, 33%, and 33% compared with
the existing service when the number of clients is 10, 100,
300, and 1000, respectively. These results show that the
performance degradation stems from the use of the blockchain.
There is usually a trade-off between performance and
security [26], and we sacrifice some performance of the
e-coupon service.
Instead, we focus on improving the security level by
guaranteeing the integrity of the e-coupon information.
For example, the existing e-coupon service uses a database
system in which an administrator can easily obtain the
authority to modify data maliciously. Meanwhile, in our
service, which uses a blockchain system, an administrator
cannot easily obtain this authority since it must be granted
by the consensus of all users. Thus, it is hard to take over
the authority, which prevents malicious modification and
increases the security level of our proposed scheme.
We explain this in detail in Section IV-C. Also, note that
many studies [27], [28] are underway to improve blockchain
performance, so the performance issues of blockchain will
be mitigated in the future; we also leave improving the
performance of the blockchain as future work.
_C. DISCUSSION OF SECURITY_
A blockchain is an append-only database: it cryptographically
links each block added to the chain and provides no modify
or delete operations. For example, to change a block’s
contents, the hash values of all subsequent blocks must be
modified, which also has to be agreed upon by the other
nodes via a consensus algorithm. Therefore, a malicious
attacker must compromise multiple nodes in the blockchain
network; to modify or delete data, the attacker must have
overwhelming power compared with the other nodes.
Through these techniques, the blockchain provides a high
level of integrity and security for data (i.e., e-coupon
information). In addition, a smart contract on the blockchain
can ensure the integrity of the business logic (i.e., the
e-coupon operations) because the contract is also stored in
the blockchain. Consequently, our proposed e-coupon service,
which exploits the blockchain and smart contracts, provides
a more secure service while maintaining performance with
only minor overhead.
In an e-coupon service, there are security requirements
which are (1) non-repudiation and (2) unique usage of
e-coupon [29].
-----
1) NON-REPUDIATION
Non-repudiation in an e-coupon service is the assurance that
users (i.e., the e-coupon provider and the customer) cannot
deny the validity of associated transactions (i.e., issuing,
using, and giving an e-coupon). To provide non-repudiation,
we use digital signatures and the blockchain. To perform an
e-coupon operation, a user generates a transaction and signs
it. The smart contract checks whether the signed transaction
is valid; if it is, the signed transaction is stored in the
blockchain. With the signed transaction, anyone can identify
who issued, used, or gave an e-coupon. In addition, the
blockchain prevents modification of the stored transaction
through a consensus algorithm run by multiple nodes.
Therefore, to falsify a transaction, an attacker would need
overwhelming computing power compared with the other
nodes, which is unrealistic. Consequently, we can identify
who performed a transaction from its signature and keep the
transaction invariant with the blockchain. Thus, the user who
signed a transaction cannot deny having performed it.
2) UNIQUE USAGE
Unique usage of an e-coupon means that a customer cannot
use an already used e-coupon again. To guarantee unique
usage, we devise an e-coupon smart contract based on the
blockchain. The smart contract executes according to the
e-coupon state information (e.g., whether the coupon has
been used, the coupon price, the number of e-coupons, the
start date, the expiration date, etc.). For example, when a
customer uses an e-coupon, the e-coupon smart contract
allows the use and changes the e-coupon state to ‘‘used’’.
When the customer then tries to use the e-coupon again,
the e-coupon smart contract rejects the request since the
e-coupon state has already been changed to ‘‘used’’. Thus,
we provide unique usage of an e-coupon by exploiting the
e-coupon smart contract.
**V. RELATED WORK**
There are previous studies [2], [12], [13], [15] for providing secure e-coupon. Blundo et al. [2] propose new e-coupon
models and e-coupon protocols using message authentication code (MAC) for e-coupon security. Agarwal et al. [12]
propose a solution based on a third-party centralized coupon
mint, which checks for double-spending. Hsueh et al. [13]
sign the e-coupon with digital signatures (i.e., PKI) and
use hash functions to check the consistency of the information and verify all digital signatures of the e-coupon.
Chang et al. [15] use one-way hash function and MAC,
allowing e-coupon providers to prevent e-coupon from being
double-redeemed by customers without any additional computation cost on mobile devices.
With these techniques [2], [12], [13], [15], a user can
detect whether an e-coupon is modified or not by a malicious
attacker. Therefore, they prevent the forgery and falsification
of an e-coupon and effectively manage e-coupon issue and
use of e-coupons. However, these approaches are not suitable
when the e-coupon information in the e-coupon server
database can be modified by a malicious attacker. In addition,
these approaches cannot prevent malicious behavior by an
administrator. Our study is in line with these works [2],
[12], [13], [15] in terms of enhancing the security of
e-coupons. In contrast, we focus on improving the security
of the e-coupon information stored in the service database
as well as preventing forgery of the e-coupon itself.
Hsueh et al. [5] provide a hash chain which is combined
with the blockchain technology to verify the forgery of
e-coupons. They guarantee the integrity of e-coupon
information by using blockchain technology. Our study is in
line with their work [5] in terms of using blockchain
technology for e-coupon integrity. In contrast, we exploit a
smart contract to also provide the integrity of the e-coupon
business logic, such as downloading, using, and gifting an
e-coupon.
Podda et al. [29] analyze and compare several
blockchain-based coupon systems; they also propose a
general schema for digital coupons and point out the basic
properties that a coupon system should guarantee.
Hsu et al. [30] analyze the security requirements of an
e-voucher system and explore how to apply blockchain
technology and cryptography to build a secure e-voucher
system. They also propose a feasible application model that
integrates blockchain technology in the context of vouchers
to support a campus welfare meal voucher system. Our
study is in line with these approaches [29], [30] in terms of
providing e-coupon security features (non-repudiation,
unique usage, decentralized verification, etc.) by using a
blockchain system and smart contracts. In contrast, we focus
on investigating the performance and the development cost
of using an e-coupon smart contract template. In addition,
we consider a general-purpose e-coupon system rather than
a specific use case (i.e., a campus welfare meal voucher
system) with the e-coupon smart contract template.
**VI. CONCLUSION**
We have investigated e-coupon services that store e-coupon
information on a centralized server. We found that the
e-coupon information stored in the server can be manipulated
by a malicious attacker or administrator. To handle this issue,
we present a new e-coupon service that improves security by
exploiting e-coupon smart contracts in a blockchain system.
We have implemented the proposed service on the Quorum
blockchain and evaluated it using a synthetic benchmark.
According to our experimental results, the proposed service
prevents the manipulation of e-coupon information, providing
higher security at only a minor performance overhead. In the
future, we will focus on improving blockchain performance.
**REFERENCES**
[1] (2019). Wikipedia: E-coupon. [Online]. Available: https://en.wikipedia.
org/wiki/E-coupon
[2] C. Blundo, S. Cimato, and A. De Bonis, ‘‘Secure E-coupons,’’ Electron.
_Commerce Res., vol. 5, no. 1, pp. 117–139, Jan. 2005._
-----
[3] (2016). World Mobile Coupons Market to Grow at 73.1% CAGR
_to_ _2020._ [Online]. Available: https://www.prnewswire.com/news-releases/world-mobile-coupons-market-to-grow-at-7314-cagr-to-2020603320306.html
[4] (2017). Coupon Fraud is Crime, Even if it Feels Harmless: Coupon Coun_selor. [Online]. Available: https://goo.gl/2emab1._
[5] S.-C. Hsueh and J.-H. Zeng, ‘‘Mobile coupons using blockchain technology,’’ in Proc. Int. Conf. Intell. Inf. Hiding Multimedia Signal Process.
Springer, 2018, pp. 249–255.
[6] A. Knight and N. Dai, ‘‘Objects and the web,’’ IEEE Softw., vol. 19, no. 2,
pp. 51–59, Mar. 2002.
[7] (2018). Quorum. [Online]. Available: https://github.com/jpmorganchase/
quorum
[8] (2017). Coupon Statistics: The Ultimate Collection. [Online]. Available:
https://blog.accessdevelopment.com/ultimate-collection-coupon-statistics
[9] (2017). Digital Coupon Marketing—Statistics and Trends. [Online].
Available: https://www.invespcro.com/blog/digital-coupon-marketing
[10] (2019). Digital Coupons Continue to be the Fastest Growing Method
_of Redemption due to Shoppers’ Increased Demand for Convenience._
[Online]. Available: https://www.globenewswire.com/news-release/2019/02/13/1724510/0/en/Digital-Coupons-Continue-to-be-the-Fastest-Growing-Method-of-Redemption-Due-to-Shoppers-Increased-Demand-for-Convenience.html
[11] (2017). The Coupon Insider: Digital vs. Paper Coupons. [Online].
Available: https://livingonthecheap.com/coupon-insider-digital-paper-coupons/
[12] R. G.-P. M.-V. Agarwal and N. Modani, ‘‘An architecture for secure
generation and verification of electronic coupons,’’ in Proc. USENIX Annu.
_Tech. Conf., Boston, MA, USA, Jun. 2001, p. 51._
[13] S.-C. Hsueh and J.-M. Chen, ‘‘Sharing secure m-coupons for peer-generated targeting via eWOM communications,’’ Electron. Commerce
_Res. Appl., vol. 9, no. 4, pp. 283–293, Jul. 2010._
[14] R. Rivest, ‘‘The MD5 message-digest algorithm,’’ Tech. Rep., 1992.
[15] C.-C. Chang, C.-C. Wu, and I.-C. Lin, ‘‘A secure e-coupon system for
mobile users,’’ Int. J. Comput. Sci. Netw. Secur., vol. 6, no. 1, p. 273, 2006.
[16] M. Crosby, P. Pattanayak, S. Verma, and V. Kalyanaraman, ‘‘Blockchain
technology: Beyond bitcoin,’’ Appl. Innov., vol. 2, nos. 6–10, p. 71, 2016.
[17] S. Nakamoto, ‘‘Bitcoin: A peer-to-peer electronic cash system,’’
Tech. Rep., 2008.
[18] M. Szydlo, ‘‘Merkle tree traversal in log space and time,’’ in Proc. Int.
_Conf. Theory Appl. Cryptograph. Techn. Springer, 2004, pp. 541–554._
[19] M. Castro and B. Liskov, ‘‘Practical Byzantine fault tolerance,’’ in Proc.
_OSDI, vol. 99, 1999, pp. 173–186._
[20] N. Szabo, ‘‘Smart contracts: Building blocks for digital markets,’’
Tech. Rep., 2018.
[21] V. Buterin, ‘‘A next-generation smart contract and decentralized application platform,’’ Tech. Rep., 2014.
[22] V. Buterin, ‘‘A next-generation smart contract and decentralized application platform,’’ White Paper, vol. 3, p. 37, Jan. 2014.
[23] U. Maurer, ‘‘Modelling a public-key infrastructure,’’ in Proc. Eur. Symp.
_Res. Comput. Secur. Springer, 1996, pp. 325–350._
[24] D. Hankerson, A. J. Menezes, and S. Vanstone, Guide to Elliptic Curve
_Cryptography. Springer, 2006._
[25] (2019). Apache JMeter. [Online]. Available:
https://jmeter.apache.org/
[26] K. Wolter and P. Reinecke, ‘‘Performance and security tradeoff,’’ in
_Proc. Int. School Formal Methods Design Comput., Commun. Softw. Syst._
Springer, 2010, pp. 135–167.
[27] H. Dang, T. T. A. Dinh, D. Loghin, E.-C. Chang, Q. Lin, and B. C. Ooi,
‘‘Towards scaling blockchain systems via sharding,’’ in Proc. Int. Conf.
_Manage. Data, Jun. 2019, pp. 123–140._
[28] J. Wang and H. Wang, ‘‘Monoxide: Scale out blockchains with asynchronous consensus zones,’’ in Proc. 16th USENIX Symp. Netw. Syst.
_Design Implement. (NSDI), 2019, pp. 95–112._
[29] A. S. Podda and L. Pompianu, ‘‘An overview of blockchain-based systems
and smart contracts for digital coupons,’’ in Proc. IEEE/ACM 42nd Int.
_Conf. Softw. Eng. Workshops, Jun. 2020, pp. 770–778._
[30] C.-S. Hsu, S.-F. Tu, and Z.-J. Huang, ‘‘Design of an E-voucher system
for supporting social welfare using blockchain technology,’’ Sustainability,
vol. 12, no. 8, p. 3362, Apr. 2020.
JONGBEEN HAN received the B.S. degree in
computer engineering from Hansung University,
in 2015, and the M.S. degree from the Department of Computer Science and Engineering, Seoul
National University, in 2019, where he is currently
pursuing the Ph.D. degree in computer science
and engineering. His research interests include
blockchain, operating, and distributed systems.
YONGSEOK SON received the B.S. degree in
information and computer engineering from Ajou
University, in 2010, and the M.S. and Ph.D.
degrees from the Department of Intelligent Convergence Systems and Electronic Engineering
and Computer Science, Seoul National University, in 2012 and 2018, respectively. He was a
Postdoctoral Research Associate in electrical and
computer engineering at the University of Illinois
at Urbana-Champaign. Currently, he is an Assistant Professor with the School of Computer Science and Engineering,
Chung-Ang University. His research interests include operating, distributed,
and database systems.
HYEONSANG EOM received the B.S. degree
in computer science and statistics from Seoul
National University (SNU), Seoul, South Korea,
in 1992, and the M.S. and Ph.D. degrees in computer science from the University of Maryland,
College Park, MD, USA, in 1996 and 2003,
respectively. He was an Intern with the Data
Engineering Group, Sun Microsystems, CA, USA,
in 1997, and a Senior Engineer with the Telecommunication Research and Development Center,
Samsung Electronics, South Korea, from 2003 to 2004. He is currently a
Professor with the Department of Computer Science and Engineering, SNU,
where he has been a Faculty Member, since 2005. His research interests
include high performance storage systems, operating systems, distributed
systems, cloud computing, energy efficient systems, fault-tolerant systems,
security, and information dynamics.
-----
|
{
"disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.1109/ACCESS.2022.3152765?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1109/ACCESS.2022.3152765, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "CCBY",
"status": "GOLD",
"url": "https://ieeexplore.ieee.org/ielx7/6287639/9668973/09717237.pdf"
}
| 2,022
|
[
"JournalArticle"
] | true
| null |
[
{
"paperId": "6105c1c53f866864fabfd27c98f03192da24dbee",
"title": "An overview of blockchain-based systems and smart contracts for digital coupons"
},
{
"paperId": "d6dcdd6df41f83e72e294f546322771ccfc6b48d",
"title": "Design of an E-Voucher System for Supporting Social Welfare Using Blockchain Technology"
},
{
"paperId": "6fa22acca2ff1d15137cb0e0509c5277d6dc8940",
"title": "Mobile Coupons Using Blockchain Technology"
},
{
"paperId": "db458ee14286fa3c314794479bc3a5544f758356",
"title": "Towards Scaling Blockchain Systems via Sharding"
},
{
"paperId": "b4ad87df7eec8ab483e68ad451391a7f065d1efc",
"title": "Sharing secure m-coupons for peer-generated targeting via eWOM communications"
},
{
"paperId": "edb18ae75184aa6e434d99ea6e03ac86570547a0",
"title": "Performance and Security Tradeoff"
},
{
"paperId": "342ca79a0ac2cc726bf31ffb4ce399821d3e2979",
"title": "Merkle Tree Traversal in Log Space and Time"
},
{
"paperId": "c9e66fa71d3ed808d466b9ea14c6ee93348586cb",
"title": "Objects and the Web"
},
{
"paperId": "bc533d2f27381d81d8e0cd3f445c54556e938816",
"title": "The State of Elliptic Curve Cryptography"
},
{
"paperId": "8132164f0fad260a12733b9b09cacc5fff970530",
"title": "Practical Byzantine fault tolerance"
},
{
"paperId": "711b615426fdb925356ef7828bb5693413ecd2eb",
"title": "Modelling a Public-Key Infrastructure"
},
{
"paperId": "e9ce5ad132f753624a017dc036f45eff45839265",
"title": "The MD4 Message-Digest Algorithm"
},
{
"paperId": "75aa1f1a04b5f2bb6bf9afb662711121edde9eda",
"title": "A and V"
},
{
"paperId": "ea639e0bb4e4ef7ec5cbc7c0915033de4c89fd65",
"title": "Monoxide: Scale out Blockchains with Asynchronous Consensus Zones"
},
{
"paperId": null,
"title": "Apache JMeter—Apache JMeterT"
},
{
"paperId": null,
"title": "Digital Coupons Continue to be the Fastest Growing Method of Redemption due to Shoppers’ Increased Demand for Convenience"
},
{
"paperId": "9b6cd3fe0bf5455dd44ea31422d015b003b5568f",
"title": "Smart Contracts: Building Blocks for Digital Markets"
},
{
"paperId": null,
"title": "Quorum"
},
{
"paperId": null,
"title": "Coupon Fraud is Crime, Even if it Feels Harmless: Coupon Coun-selor"
},
{
"paperId": null,
"title": "emphDigital Coupon Marketing—Statistics and Trends"
},
{
"paperId": null,
"title": "The Coupon Insider: Digital vs. Paper Coupons"
},
{
"paperId": null,
"title": "Coupon Statistics: The Ultimate Collection"
},
{
"paperId": null,
"title": "World Mobile Coupons Market to Grow at 73.1 % CAGR to 2020"
},
{
"paperId": null,
"title": "‘‘Blockchain technology: Beyond bitcoin,’’"
},
{
"paperId": "0dbb8a54ca5066b82fa086bbf5db4c54b947719a",
"title": "A NEXT GENERATION SMART CONTRACT & DECENTRALIZED APPLICATION PLATFORM"
},
{
"paperId": "4e9ec92a90c5d571d2f1d496f8df01f0a8f38596",
"title": "Bitcoin: A Peer-to-Peer Electronic Cash System"
},
{
"paperId": "e79bc98361342565fcdb03e417a52d41942140d9",
"title": "A Secure E-coupon System for Mobile Users"
},
{
"paperId": "409883d042a1576b25a4ed7c53f9a17218ea22ab",
"title": "Secure E-Coupons"
},
{
"paperId": null,
"title": "Professor with the Department of Computer Science and Engineering, SNU, where he has been a Faculty Member,"
},
{
"paperId": "5b7d7c6b7c31e79fd32c09556c1172bf42dd14e7",
"title": "Ò Ö Blockinøø Blockinøùöö Óö Ëë Blockinùöö Òòööøøóò Òò Îöö¬ Blockin Blockinøøóò Óó Ðð Blockinøöóòò Óùôóò× Êêêùð Öö Èöùð Ååøøøð Îîî× Öûð Aeaeøûö Åóòò"
},
{
"paperId": null,
"title": "When a customer uses the e-coupon"
},
{
"paperId": null,
"title": "The customer downloads an e-coupon from the issuer via a mobile device or PC"
},
{
"paperId": null,
"title": "Secure E-Coupon Service Based on Blockchain Systems"
},
{
"paperId": null,
"title": "received the B.S. degree in information and computer engineering from Ajou University"
},
{
"paperId": null,
"title": "To download an e-coupon, a customer registers the customer information to an e-coupon issuer"
}
] | 13,997
|
en
|
[
{
"category": "Economics",
"source": "external"
},
{
"category": "Economics",
"source": "s2-fos-model"
},
{
"category": "Business",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/ffc6b7718fd70d8c390862cef84a7e19988d367d
|
[
"Economics"
] | 0.822678
|
Nonlinear Autoregressive Distributed Lag Approach: An Application on the Connectedness between Bitcoin Returns and the Other Ten Most Relevant Cryptocurrency Returns
|
ffc6b7718fd70d8c390862cef84a7e19988d367d
|
Mathematics
|
[
{
"authorId": "2186061750",
"name": "M. González"
},
{
"authorId": "2446104",
"name": "Francisco Jareño"
},
{
"authorId": "152460221",
"name": "Frank S. Skinner"
}
] |
{
"alternate_issns": null,
"alternate_names": null,
"alternate_urls": [
"http://nbn-resolving.de/urn/resolver.pl?urn=urn:nbn:ch:bel-283014",
"https://www.mdpi.com/journal/mathematics"
],
"id": "6175efe8-6f8e-4cbe-8cee-d154f4e78627",
"issn": "2227-7390",
"name": "Mathematics",
"type": null,
"url": "http://www.e-helvetica.nb.admin.ch/directAccess?callnumber=bel-283014"
}
|
This article examines the connectedness between Bitcoin returns and returns of ten additional cryptocurrencies for several frequencies—daily, weekly, and monthly—over the period January 2015–March 2020 using a nonlinear autoregressive distributed lag (NARDL) approach. We find important and positive interdependencies among cryptocurrencies and significant long-run relationships among most of them. In addition, non-Bitcoin cryptocurrency returns seem to react in the same way to positive and negative changes in Bitcoin returns, obtaining strong evidence of asymmetry in the short run. Finally, our results show high persistence in the impact of both positive and negative changes in Bitcoin returns on most of the other cryptocurrency returns. Thus, our model explains about 50% of the other cryptocurrency returns with changes in Bitcoin returns.
|
# mathematics
_Article_
## Nonlinear Autoregressive Distributed Lag Approach: An Application on the Connectedness between Bitcoin Returns and the Other Ten Most Relevant Cryptocurrency Returns
**María de la O González** **[1]** **, Francisco Jareño** **[1,]*** **and Frank S. Skinner** **[2]**
1 Department of Economics and Finance, Faculty of Economics and Business Sciences, University of
Castilla-La Mancha, Plaza de la Universidad 1, 02071 Albacete, Spain; MariaO.Gonzalez@uclm.es
2 Department of Economics and Finance, Brunel University, Uxbridge, Middlesex, London UB8 3PH, UK;
frank.skinner@brunel.ac.uk
***** Correspondence: Francisco.Jareno@uclm.es; Tel.: +34-967-599-200
Received: 22 April 2020; Accepted: 14 May 2020; Published: 17 May 2020
**Abstract: This article examines the connectedness between Bitcoin returns and returns of ten**
additional cryptocurrencies for several frequencies—daily, weekly, and monthly—over the period
January 2015–March 2020 using a nonlinear autoregressive distributed lag (NARDL) approach.
We find important and positive interdependencies among cryptocurrencies and significant long-run
relationships among most of them. In addition, non-Bitcoin cryptocurrency returns seem to react
in the same way to positive and negative changes in Bitcoin returns, obtaining strong evidence of
asymmetry in the short run. Finally, our results show high persistence in the impact of both positive
and negative changes in Bitcoin returns on most of the other cryptocurrency returns. Thus, our model
explains about 50% of the other cryptocurrency returns with changes in Bitcoin returns.
**Keywords: Bitcoin; cryptocurrencies; NARDL; connectedness**
**1. Introduction**
The importance of the cryptocurrency market has continued to increase, even in recent years.
References [1,2] highlighted that the cryptocurrency market was worth more than $12.5 billion in 2016.
Additionally, reference [2] noticed the growing popularity of the cryptocurrency markets, now being
suggested in the literature as an investment asset, and highlighting that the price of the most liquid
cryptocurrency—Bitcoin price—increased about 700%, from $616 to $4800 US dollars between October
2016 and October 2017. Presently, the overall cryptocurrency market is even more important as the
total cryptocurrency market capitalization is $251.5 billion on 7 March 2020 and the Bitcoin price has
increased almost 3300% from $269.2 to $8887.8 US dollars between the beginning (26 January 2015)
and the final (7 March 2020) date of the sample period.
Furthermore, Bitcoin dominance in the cryptocurrency market is increasing. Reference [3]
confirmed that Bitcoin’s capitalization was about 37% of the cryptocurrency market on 1 May 2018 but
now, merely two years later, Bitcoin’s market share is about 66% on 7 March 2020. Therefore, Bitcoin
is the most globally recognized cryptocurrency in terms of capitalization and the number of users.
Additionally, reference [4] notes that the cryptocurrency market reached a peak in early 2018 with
a market capitalization of $800 billion and suggests that cryptocurrencies can now be considered
to be an alternative investment option for everyone. This spectacular growth attracted the attention of
regulation authorities, big corporations, and small investors.
_Mathematics 2020, 8, 810_ 2 of 22
In this context, a wide and recent branch of the financial literature has focused on studying the
cryptocurrency market. Thus, many kinds of research analyze potential connectedness between different
altcoins in the cryptocurrency market, as well as between cryptocurrencies and alternative financial
assets. These studies apply different methodologies such as: Autoregressive Distributed Lag (ARDL)
model in [1]; several Diebold and Yilmaz type approach [5–7] in [8,9]; Vector AutoRegressive (VAR)
and Generalized AutoRegressive Conditional Heteroskedasticity (GARCH) methodologies in [3,10–12];
BEKK-GARCH framework in [13–16]; and other innovative approaches in [4,17], among many others.
All of them find important interdependencies between many altcoins of the cryptocurrency market.
Thus, the main aim of this research is to explore potential long- and short-run connectedness
between Bitcoin returns and the rest of the recent (March 2020) top 10 cryptocurrency returns (Ethereum,
XRP, Bitcoin Cash, Tether, Bitcoin SV, Litecoin, EOS, Binance coin and Tezos). For robustness,
these estimates are repeated for different frequencies (daily, weekly, and monthly) for a sample period
from 26 January 2015 to 7 March 2020 in a nonlinear ARDL framework.
This paper contributes to the previous literature in several ways. First, to the best of our
knowledge, this is the first research that simultaneously estimates both long- and short-run asymmetries
in the cryptocurrency markets. This is accomplished by using the NARDL approach [2,18] to
examine the relationship between Bitcoin returns and the remaining top 10 cryptocurrencies’ returns.
References [2,18,19] affirm that one of the main advantages of the NARDL methodology is that it is
suitable for small samples regardless of the stationarity of the variables. Additionally, this methodology
checks simultaneously long- and short-run nonlinearities by estimating positive and negative partial
sum decompositions of the regressors. Also, the NARDL approach separately measures responses
to positive and negative shocks of the regressors from the asymmetric dynamic multipliers. Second,
this research studies in depth the potential connectedness between Bitcoin and the nine alternative
named cryptocurrencies. Alternative cryptocurrencies have been selected as the largest market
capitalizations as reported on 7 March 2020 from the Coinmarketcap site. Finally, for robustness,
this study compares estimates for daily, weekly, and monthly frequencies.
The rest of the paper is structured as follows. Section 2 develops a wide literature review
concerning the interdependence among different altcoins of the cryptocurrency market. Section 3
presents the data and the methodology applied in this study. Section 4 collects the main results of
our NARDL estimates, distinguishing three different sub-sections depending on the frequency (daily,
weekly, and monthly) of the data. Finally, Section 5 summarizes and presents concluding remarks and
comments on potential implications and future research.
**2. Literature Review**
The number of empirical studies analyzing cryptocurrencies has grown exponentially in recent
years in the financial literature. Thus, reference [20] performs a rigorous review of the financial
literature about the cryptocurrency market, remarking that cryptocurrencies must face charges of
potential illicit use and inexperienced exchange systems, among others. Some additional recent
examples of research include reference [2], which studies the relationship between Bitcoin and Gold price
returns, finding a positive and statistically significant connectedness, and reference [21], which remarks
on the prevalence of cryptocurrencies, with over 2000 Bitcoin-like cryptocurrencies now in use, among
many other recent contributions.
However, a recent important extension of the literature examines the relationships among Bitcoin
and other alternative cryptocurrencies. Reference [1] proposes the autoregressive distributed lag
(ARDL) methodology to study interdependencies between the reference cryptocurrency Bitcoin plus
other alternative virtual currencies and two altcoin markets in the short and long run for the period
2013–2016. They find that there is a statistically significant relationship between Bitcoin and altcoin
markets, mainly in the short run. Using the same ARDL approach, reference [22] checks if the new coin
events significantly influence Bitcoin returns. They find evidence that Initial Public Offerings (IPOs) of
new altcoins reduce Bitcoin returns.
Reference [23] studies potential co-movements between Bitcoin and some relevant cryptocurrencies
(Dash, Ethereum, Litecoin, Monero and Ripple) using wavelet techniques. They find co-movements in
the following relationships: Bitcoin-Dash, Bitcoin-Monero, and Bitcoin-Ripple; additionally, they find
evidence of important diversification abilities with an Ethereum-Bitcoin portfolio in the long-term,
and Monero-Bitcoin portfolio in the short-term. Reference [24] uses wavelet-based methods to analyze
the time-varying co-movement patterns of some relevant cryptocurrency prices (Bitcoin, Ethereum,
Lite, and Dashcoin). First, using wavelet multiple correlation and cross-correlation, they show
Bitcoin could be the potential market leader. Additionally, they estimate wavelet local multiple
correlation for the aforementioned cryptocurrency prices across different time-scales concluding that
the correlation follows an aperiodic cyclical pattern and that the cryptocurrency prices are driven by
Bitcoin price fluctuations, with important implications for investment purposes. Reference [25] applies
the cross-quantilogram approach to study the hedging abilities of some relevant cryptocurrencies
against down fluctuations in the US stock market and US sector indices. They find very heterogeneous
results that help investors to manage cryptocurrencies portfolios. Reference [26] analyzes the volatility
movements of the most important cryptocurrencies (Bitcoin and Ether) by using a bivariate Diagonal
BEKK model. This research finds evidence of interdependencies in the cryptocurrency market as well as
the effects of important events on volatility with important implications for informed decision-making
by investors.
In the same vein, reference [8] measures interdependencies between the most important
cryptocurrencies’ returns and volatilities, using the Diebold and Yilmaz approach [5]. They suggest an
emergent and time-varying interdependence between the cryptocurrencies analyzed. One of the recent
methodologies is applied in [9], specifically the Diebold and Yilmaz measures [6,7], to study potential
return and volatility connectedness among six cryptocurrencies. They discover that changes in Litecoin
and Bitcoin returns show the most relevant impact on the rest of cryptocurrencies. Furthermore, Bitcoin
and Litecoin show the highest and Dash the lowest volatility connectedness, confirming the hedging
potential of Bitcoin and Litecoin when constructing portfolios with cryptocurrencies. Reference [27]
estimates market-herding dynamics in the cryptocurrency market by adapting the Capital Asset
Pricing Model (CAPM) framework as developed earlier by [28]. Thus, this methodology explores
time variation in betas and cross-sectional dispersion of individual assets, showing a recent growing
market herding.
Some other research, such as [3], uses a VAR modelling methodology to study the information
transmission between the most important cryptocurrencies (Bitcoin, Litecoin, Ripple, Ethereum,
and Bitcoin Cash). Specifically, by obtaining the Geweke’s feedback measures and generalized impulse
response functions, they confirm a strong contemporaneous information transmission, and some
lagged feedback effects, mainly from other cryptocurrencies to Bitcoin. Reference [10] examines
potential spillovers between Bitcoin and companies in the energy and technology sector in the context
of an asymmetric multivariate VAR-GARCH methodology. They find statistically significant return
and short-run volatility spillovers from (mainly technology) companies to Bitcoin and long-run
volatility spillovers from Bitcoin to energy companies. Reference [11] uses several time-varying copula
methods and bivariate dynamic conditional correlation GARCH models to examine the financial
properties of cryptocurrencies and their dynamic relationship with some financial and commodity assets.
They discover some important implications for investors, as the cross-correlation with conventional
assets is changeable over time, depending on economic shocks. Additionally, cryptocurrencies may be
suitable for financial diversification, but may form poor hedging instruments. Reference [12] applies
the GARCH-MIDAS approach to forecast the volatility of some relevant cryptocurrencies using different
data frequencies. In addition, they propose different economic and financial drivers. They conclude
that Global Real Economic Activity provides better volatility forecasts in bull and bear markets.
Reference [13] uses a multivariate BEKK-GARCH methodology and impulse response analysis
applied within a VAR model to check potential hedging properties and volatility spillovers between
Bitcoin and Ethereum. They find that the connectedness between them is time-variant and decreases the
potential diversification properties over time. These results have implications for investment strategies
mainly during economic turmoil. Reference [14] applies pairwise bivariate BEKK models to study
interlinkages and conditional correlations between different pairs of cryptocurrencies. Specifically,
they analyze Bitcoin-Ether, Bitcoin-Litecoin, and Ether-Litecoin, pairs finding evidence of bi-directional
effects in Bitcoin-Ether and Bitcoin-Litecoin, and uni-directional spillover from Ether to Litecoin.
Furthermore, bi-directional volatility spillovers are found in all cases, as well as time-varying and
positive conditional correlations. Reference [15] applies Diagonal BEKK and Asymmetric Diagonal
BEKK methodologies on eight cryptocurrencies (Bitcoin, Ethereum, Litecoin, Dash, Ethereum Classic,
Monero, Neo and OmiseGO) to study conditional volatility dynamics among them and their volatility
co-movements. They find that cryptocurrencies have high term persistence of volatility, show strong
interdependencies between them and have time-varying and positive conditional correlations. In the
same vein, reference [16] uses the Granger causality test and a BEKK-MGARCH approach to study the
return and volatility spillovers between Bitcoin and Litecoin. They show that both return and volatility
spillovers run in one direction, from Bitcoin to Litecoin.
Reference [29] studies, among other topics, the weak-form market efficiency in the cryptocurrency
market by analyzing the “price delay” measure, showing that it significantly decreases over time, thereby
supporting weak-form efficiency of the cryptocurrency market. Reference [30] studies Bitcoin,
Litecoin, Ripple and Dash portfolio optimization and the correlation between them showing that the
Black–Litterman model with Variance-Based Constraints (VBCs) offers better out-of-sample estimates
than other benchmarks. Therefore, investors should apply more advanced approaches such as the
Black–Litterman model to better manage cryptocurrency portfolios. Reference [31] studies many
(smaller and larger) cryptocurrencies and the potential existence of herding in this market, showing
inefficiency and excessive risk only in economic turmoil. In addition, smaller cryptocurrencies
may be herding with larger ones. Reference [32] studies the relationship between returns and
volatility of Bitcoin, at both contemporaneous and intertemporal levels, employing high-frequency
data. Thus, there could be a negative, statistically significant, and contemporaneous link between
all volatility measures and Bitcoin returns, but weak evidence in case of realized variance, jump
variation, and downside realized semivariance. Additionally, there is no justification for a positive
risk-return trade-off in Bitcoin markets. Reference [33] remarks on the relevance of correlation networks
on the evolution of cryptocurrency prices over time and finds a positive and statistically significant
connectedness between different cryptocurrencies. Specifically, one group of cryptocurrencies could be
particularly correlated with Cardano while another group associated with Ethereum.
Some of the literature use novel approaches. Reference [4] applies descriptive metrics from
Complex Networks to study the price synchronization in the cryptocurrency market. Specifically,
they employ the Threshold Weighted—Minimum Dominating Set (TW-MDS) methodology to detect
dominant cryptocurrencies over time, assuming that a dominant node would describe the behavior
of the cryptocurrency market. They conclude that there is strong evidence of a growing price
synchronization in this market. Reference [34] applies the generalized variance decomposition
methodology, which enables the construction of a directional weighted network to study the
connectedness between return and volatility of many cryptocurrencies. She finds highly connected
cryptocurrencies mainly during shocks and some cryptocurrencies (Ethereum, Monero, OmiseGo) have
more impact on the market than others. Additionally, some cryptocurrencies are less connected and less
affected by shocks implying they are more attractive for investment purposes. Reference [17] analyzes
the structure of the cryptocurrency market and proposes the Bitcoin-Ethereum filtering mechanism
(based on the agglomerative hierarchical clustering and minimum spanning tree) to exclude their linear
influences with other cryptocurrencies. For robustness, they examine the market structures before and
after filtering in terms of the Total, Pre-, and Post-regulation periods. They find evidence that Bitcoin
and Ethereum are leaders in the cryptocurrency market, there are six other clusters of cryptocurrencies,
and market structures renovate after the announcement of new regulations from several countries.
Reference [35] uses cointegrating tests and Vector Error Correction (VEC) Granger Causality/Block
Exogeneity Test approaches to research the Bitcoin–Altcoin price synchronization hypothesis for ten
altcoins, specifically Litecoin, Dash, Doge, IOTA, Nem, Neo, Stellar, Ripple and Tron for three different
sub-periods: 2015–2016, 2017, and 2018. They find cryptocurrency investors are more sensitive to
the features and quality of each coin during 2018 than for 2017. Reference [36] provides a systematic
survey of return and volatility spillovers of cryptocurrencies, considering other cryptocurrencies
and alternative assets. Thus, Bitcoin is the most relevant cryptocurrency mainly as a transmitter,
but also as a receiver of spillovers. Furthermore, Bitcoin shows the most important connectedness with
Ethereum, Litecoin, and Ripple. Return spillovers are more pronounced than volatility bi-directional
spillovers. Finally, reference [36] detects volatility transmission among Bitcoin and national currencies.
Reference [37] applies multivariate extreme value theory and estimates a bias-corrected extreme
correlation coefficient to study the contemporaneous tail dependence structure in pairwise comparisons
of a large number of cryptocurrencies (Bitcoin, Dash, Dogecoin, Ethereum, Litecoin, Monero, Namecoin,
Novacoin, Peercoin, and Ripple). They find significantly high bivariate dependency in the distribution
tails of some of the most important cryptocurrencies. Thus, extreme correlations increase in bear
markets, but not in bull markets for the pairs studied. Moreover, many cryptocurrency pairs show a
low level of dependency in the tails of the distribution. Reference [38] uses panel ordinary least squares
with cluster-robust standard errors to research the field of Tokenomics studying many blockchain
tokens. This paper analyzes the potential connectedness between non-digital entities and digital tokens,
finding that token functions significantly affect token prices regardless of the stage of the business
cycle. Finally, reference [39] studies the diversification capability of some cryptocurrencies (Bitcoin,
Litecoin, Ripple, Stellar, Monero, Dash, and Bytecoin) against certain economic risks such as changes
in oil price, gold price, interest rate, USD strength, and the stock market. Thus, they show structural
breaks and Autoregressive Conditional Heteroskedastic (ARCH) disturbance in each cryptocurrency,
suggesting a systematic risk within the cryptocurrency market. Furthermore, cryptocurrencies could
have insignificant correlations with economic risk factors, reducing their diversification abilities.
Thus, to the best of our knowledge, this paper contributes to this previous literature in several
ways. First, this research studies in depth the potential connectedness between Bitcoin and many
other important cryptocurrencies in terms of recent market capitalization using the NARDL approach.
The advantage of this methodology is that it enables us to simultaneously estimate both long- and
short-run asymmetries [2,18]. Additionally, for robustness, this study compares estimates from several
frequency data (daily, weekly, and monthly).
**3. Materials and Methods**
_3.1. Data_
Our data set consists of daily, weekly, and monthly log returns of the top ten cryptocurrencies
ranked by market capitalization. These ten cryptocurrencies ordered from highest to lowest by market
capitalization are Bitcoin (BTC), Ethereum (ETH), Ripple (XRP), Bitcoin_cash (BCH), Tether (USDT),
Bitcoin_sv (BSV), Litecoin (LTC), EOS, Binance_coin (BNB) and Tezos (XTZ). The data is provided
by the Coinmarketcap website. These top ten cryptocurrencies under study represent, on average,
over 92% of the cryptocurrency market capitalization and Bitcoin shows approximately 66% dominance
in this market, on 7 March 2020.
Our sample period runs from 26 January 2015 until 7 March 2020, which yields 1868 daily,
267 weekly, and 61 monthly data observations. The starting point is imposed by the price availability
of some cryptocurrencies and the end of this period is established just before the massive selloff
in the cryptocurrency market on 8 March 2020 and the recent stock market crash on 9 March 2020
caused by COVID-19. Due to this massive selloff, the cryptocurrency market lost $21 billion in
market capitalization in 24 hours from Saturday 7 March 2020 to Sunday 8 March 2020 (from a total
cryptocurrency market capitalization of $251.5 billion to $230.8 billion). Moreover, two weeks later,
on 22 March 2020, the cryptocurrency market has lost more than $84 billion because of COVID-19,
falling to a total of $167.1 billion. It is remarkable that despite the big drop in cryptocurrency market
capitalization, Bitcoin still has a 65.1% dominance of this market on 22 March 2020. These top ten
cryptocurrencies did not come into existence at the same time. The starting date for each cryptocurrency
is shown in column 7 of Table 1. Therefore, the most recent cryptocurrencies, especially Bitcoin_sv and
Tezos, will provide fewer monthly data for the empirical analysis.
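The daily, weekly, and monthly log returns described above can be sketched with a minimal stdlib example (the price series below is hypothetical, for illustration only); a convenient property of log returns is that they aggregate across frequencies by simple summation:

```python
import math

def log_returns(prices):
    """Log return series r_t = ln(P_t / P_(t-1))."""
    return [math.log(p1 / p0) for p0, p1 in zip(prices, prices[1:])]

# Hypothetical daily closes over one week (illustrative values only)
daily_close = [269.2, 275.0, 271.3, 280.1, 278.9, 285.4, 290.0, 288.2]

daily_r = log_returns(daily_close)        # seven daily log returns
weekly_r = log_returns(daily_close[::7])  # one weekly log return (closes 7 days apart)

# Additivity: the weekly log return equals the sum of its daily log returns
print(math.isclose(sum(daily_r), weekly_r[0]))  # True
```

This additivity is why studies comparing daily, weekly, and monthly frequencies, as this paper does, typically work with log rather than simple returns.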
**Table 1. Top 10 Cryptocurrencies by Market Capitalization (Date: 7 March 2020/22 March 2020)** (Total market capitalization: $251.5 billion/$167.1 billion). [1]

| Name | Date | Market Cap | Price | Volume (24 h) | Circulating Supply | Change (24 h) | Starting Date |
|---|---|---|---|---|---|---|---|
| Bitcoin | 7 Mar | $166,743,993,933 | $8887.80 | $47,868,579,352 | 18,238,800 BTC | −0.21% | 01/26/2015 |
| | 22 Mar | $106,591,196,069 | $5830.25 | $40,099,664,740 | 18,282,425 BTC | −5.73% | |
| Ethereum | 7 Mar | $26,966,016,878 | $237.32 | $25,206,666,119 | 109,863,231 ETH | 2.07% | 03/10/2016 |
| | 22 Mar | $13,590,860,527 | $123.32 | $12,497,707,224 | 110,207,055 ETH | −7.05% | |
| XRP | 7 Mar | $10,688,702,708 | $0.23624 | $3,252,412,868 | 43,749,413,421 XRP | −0.88% | 01/26/2015 |
| | 22 Mar | $6,585,765,149 | $0.150214 | $1,864,979,798 | 43,842,625,397 XRP | −5.02% | |
| Bitcoin_Cash | 7 Mar | $6,364,459,307 | $330.77 | $6,617,099,625 | 18,300,000 BCH | −0.25% | 08/03/2017 |
| | 22 Mar | $3,736,418,941 (5) | $203.67 | $4,015,953,536 | 18,345,250 BCH | −7.47% | |
| Tether | 7 Mar | $4,641,437,047 | $1.0047 | $66,519,050,406 | 4,642,367,414 USDT | 0.16% | 04/15/2017 |
| | 22 Mar | $4,637,871,717 (4) | $0.99903 | $49,036,623,749 | 4,642,367,414 USDT | −0.21% | |
| Bitcoin_SV | 7 Mar | $4,439,960,724 | $233.95 | $3,344,789,290 | 18,297,290 BSV | −1.66% | 11/19/2018 |
| | 22 Mar | $2,894,145,363 | $157.78 | $3,365,019,330 | 18,342,440 BSV | −6.35% | |
| Litecoin | 7 Mar | $4,072,866,599 | $60.45 | $6,342,837,357 | 64,168,987 LTC | −0.77% | 08/24/2016 |
| | 22 Mar | $2,292,391,578 | $35.63 | $3,148,219,029 | 64,342,318 LTC | −7.34% | |
| EOS | 7 Mar | $3,526,893,934 | $3.64 | $6,064,573,978 | 920,452,308 EOS | −0.47% | 07/02/2017 |
| | 22 Mar | $1,965,191,547 | $2.13 | $2,921,411,201 | 921,045,767 EOS | −6.45% | |
| Binance Coin | 7 Mar | $3,292,877,236 | $20.24 | $427,799,971 | 155,536,713 BNB | −1.68% | 11/09/2017 |
| | 22 Mar | $1,735,514,181 | $11.16 | $308,670,064 | 155,536,713 BNB | −7.48% | |
| Tezos | 7 Mar | $2,250,710,445 | $2.98 | $317,321,520 | 702,028,555 XTZ | −0.04% | 02/02/2018 |
| | 22 Mar | $1,038,511,561 | $1.47 | $113,589,399 | 704,565,511 XTZ | −11.11% | |

1 Compiled by the authors, based on the information provided by the Coinmarketcap website.
Figure 1 plots the time evolution of the cryptocurrencies’ daily prices up to the end of March 2020
and so incorporates the COVID-19 crash of 8 March 2020. Consequently, the market capitalizations
of the top ten cryptocurrencies analyzed in this paper decreased sharply on 8 March, ranging
from 53.8% for Tezos to 38.3% for Ripple, while Bitcoin suffered a lower loss of 36% (though not as low
as the 34.7% for Bitcoin_sv). Interestingly, Tether is an outlier, experiencing a very modest one-day
loss of 0.065%. Table 1 also shows that two weeks after the COVID-19 crash, the total cryptocurrency
market capitalization had fallen by about a third, from $251.5 billion to $167.1 billion, and these top ten
cryptocurrencies had decreased in value between 32% and 50%, except in the case of Tether, where the
decrease is only 0.5%.
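For illustration, the two-week declines discussed here can be recomputed directly from the prices reported in Table 1; a quick stdlib sketch (only Bitcoin and the Tether outlier are shown):

```python
def pct_drop(before, after):
    """Percentage decline between two observations."""
    return 100.0 * (1.0 - after / before)

# Prices on 7 March vs. 22 March 2020, taken from Table 1
btc_drop = pct_drop(8887.80, 5830.25)   # Bitcoin price decline
usdt_drop = pct_drop(1.0047, 0.99903)   # Tether, the stablecoin outlier

print(f"BTC: {btc_drop:.1f}%, USDT: {usdt_drop:.2f}%")  # BTC: 34.4%, USDT: 0.56%
```

The Bitcoin figure sits inside the 32–50% range cited for the top ten, while Tether's near-zero decline reflects its dollar peg.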
[Figure 1 appears here: line chart of daily prices over January 2015–March 2020, with Bitcoin on the right axis (0–25,000) and the other nine cryptocurrencies on the left axis (0–4000).]

**Figure 1. Time evolution of the Bitcoin and the rest of relevant cryptocurrencies daily prices (Bitcoin prices in the right-axis and the rest of cryptocurrencies prices in the left-axis).**
Figure 2 shows the time evolution of the cryptocurrency returns and Table 2 collects the descriptive
statistics and unit root tests of the ten cryptocurrency returns for daily, weekly, and monthly frequency
data. All cryptocurrencies show similar mean log returns, although Bitcoin_sv and Binance_coin
show slightly higher mean values. Additionally, the lower the frequency of data, the higher the mean
log returns and the higher the standard deviation. Most cryptocurrency returns show positive
skewness, except for Tezos, which shows the largest negative skewness for all three data frequencies.
All variables show excess kurtosis, especially for daily returns. The standard Augmented Dickey–Fuller
(ADF) and Phillips–Perron (PP) unit root tests and the Kwiatkowski–Phillips–Schmidt–Shin (KPSS)
stationarity test confirm that all cryptocurrency returns are stationary. However, for monthly data,
it is interesting to note the smaller sample size for some cryptocurrencies that leads to some doubt
about the stationarity of Tether and Tezos returns.
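The summary measures reported in Table 2 (skewness, kurtosis, and the Jarque–Bera normality statistic JB = n/6 · (S² + (K − 3)²/4)) can be computed with a short stdlib sketch. This is a simplified population-moment version, without the small-sample corrections some packages apply:

```python
import math

def moments(x):
    """Mean, standard deviation, skewness and raw kurtosis of a series."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    std = math.sqrt(var)
    skew = sum((v - mean) ** 3 for v in x) / (n * std ** 3)
    kurt = sum((v - mean) ** 4 for v in x) / (n * var ** 2)
    return mean, std, skew, kurt

def jarque_bera(x):
    """JB = n/6 * (S^2 + (K - 3)^2 / 4); large values reject normality."""
    _, _, s, k = moments(x)
    return len(x) / 6.0 * (s ** 2 + (k - 3.0) ** 2 / 4.0)

# A perfectly symmetric toy series: zero skewness, kurtosis below 3 (platykurtic)
_, _, s, k = moments([-2.0, -1.0, 0.0, 1.0, 2.0])
```

Kurtosis above 3, as every series in Panel A exhibits, signals fatter-than-normal tails and drives the large JB statistics in Table 2.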
**Table 2. Descriptive statistics of Bitcoin returns and the returns of the rest of the top ten cryptocurrencies.** [1]

**Panel A: Daily Frequency.**

| Name | Mean | Median | Max. | Min. | Std. Dev. | Skewness | Kurtosis | JB Stat. | ADF Stat. | PP Stat. | KPSS Stat. |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Bitcoin returns | 0.0019 | 0.0019 | 0.2276 | −0.1869 | 0.0376 | −0.1471 | 7.3114 | 1453 *** | −43.873 *** | −43.881 *** | 0.1581 |
| Ethereum returns | 0.0021 | −0.0001 | 0.2586 | −0.3134 | 0.0574 | −0.0418 | 6.4015 | 703.3 *** | −38.679 *** | −38.816 *** | 0.3182 |
| XRP returns | 0.0015 | −0.0013 | 1.0280 | −0.9965 | 0.0994 | 0.8984 | 30.2463 | 58000 *** | −32.003 *** | −59.811 *** | 0.1527 |
| Bitcoin_cash returns | 0.0001 | −0.0038 | 0.4355 | −0.4792 | 0.0780 | 0.6110 | 10.6729 | 2382 *** | −28.553 *** | −28.566 *** | 0.1053 |
| Tether returns | 0.0000 | 0.0000 | 0.0453 | −0.0575 | 0.0063 | 0.0252 | 19.1176 | 11441 *** | −22.254 *** | −47.324 *** | 0.0110 |
| Bitcoin_sv returns | 0.0026 | −0.0014 | 0.8979 | −0.3259 | 0.0860 | 3.6652 | 34.7653 | 20990 *** | −23.548 *** | −23.516 *** | 0.0578 |
| Litecoin returns | 0.0022 | −0.0024 | 0.6070 | −0.3080 | 0.0619 | 1.7426 | 16.6638 | 10696 *** | −36.409 *** | −36.453 *** | 0.3425 |
| EOS returns | 0.0003 | −0.0015 | 0.3559 | −0.3567 | 0.0757 | 0.4055 | 7.6595 | 912.4 *** | −32.951 *** | −32.980 *** | 0.0918 |
| Binance_coin returns | 0.0028 | 0.0007 | 0.4874 | −0.4023 | 0.0626 | 0.9070 | 13.6192 | 4105.6 *** | −27.227 *** | −27.191 *** | 0.2255 |
| Tezos returns | 0.0000 | −0.0042 | 0.2525 | −0.4094 | 0.0667 | −0.1728 | 6.4442 | 381.4 *** | −26.555 *** | −26.563 *** | 0.3154 |

**Panel B: Weekly Frequency.**

| Name | Mean | Median | Max. | Min. | Std. Dev. | Skewness | Kurtosis | JB Stat. | ADF Stat. | PP Stat. | KPSS Stat. |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Bitcoin returns | 0.0136 | 0.0093 | 0.3446 | −0.3686 | 0.1007 | −0.0770 | 4.9667 | 43.128 *** | −15.549 *** | −15.547 *** | 0.1537 |
| Ethereum returns | 0.0138 | 0.0083 | 0.7457 | −0.3951 | 0.1592 | 0.9938 | 6.4246 | 135.227 *** | −12.899 *** | −13.087 *** | 0.2326 |
| XRP returns | 0.0103 | −0.0124 | 1.2546 | −0.9822 | 0.2240 | 1.7314 | 12.631 | 1161.02 *** | −16.056 *** | −16.074 *** | 0.1336 |
| Bitcoin_cash returns | 0.0005 | −0.0087 | 0.8526 | −0.7188 | 0.2199 | 0.7793 | 6.1413 | 68.656 *** | −10.451 *** | −10.422 *** | 0.1020 |
| Tether returns | 0.0004 | 0.0001 | 0.0439 | −0.0444 | 0.0105 | −0.4256 | 8.2501 | 176.799 *** | −8.8943 *** | −14.437 *** | 0.1301 |
| Bitcoin_sv returns | 0.0216 | −0.0036 | 0.9894 | −0.4649 | 0.2205 | 1.6966 | 8.6941 | 122.655 *** | −7.6877 *** | −7.6881 *** | 0.0484 |
| Litecoin returns | 0.0150 | −0.0033 | 1.1406 | −0.3031 | 0.1828 | 2.6024 | 16.126 | 1528.52 *** | −13.285 *** | −13.310 *** | 0.2772 |
| EOS returns | 0.0017 | −0.0064 | 0.7216 | −0.4452 | 0.1966 | 0.5641 | 3.8327 | 11.387 *** | −9.8301 *** | −9.8971 *** | 0.0679 |
| Binance_coin returns | 0.0213 | 0.0102 | 0.6706 | −0.3331 | 0.1645 | 1.3036 | 6.8411 | 107.756 *** | −10.142 *** | −10.433 *** | 0.2077 |
| Tezos returns | 0.0016 | 0.0051 | 0.4392 | −0.6843 | 0.1690 | −0.4786 | 5.1781 | 25.471 *** | −8.8875 *** | −8.9152 *** | 0.2496 |

**Panel C: Monthly Frequency.**

| Name | Mean | Median | Max. | Min. | Std. Dev. | Skewness | Kurtosis | JB Stat. | ADF Stat. | PP Stat. | KPSS Stat. |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Bitcoin returns | 0.0625 | 0.0437 | 0.8826 | −0.5717 | 0.2452 | 0.7046 | 4.8384 | 13.414 *** | −7.6711 *** | −7.6713 *** | 0.1204 |
| Ethereum returns | 0.0640 | 0.0000 | 1.2973 | −0.7859 | 0.4150 | 0.5850 | 3.63045 | 3.4593 | −6.2936 *** | −6.3522 *** | 0.1704 |
| XRP returns | 0.0541 | −0.0258 | 2.0518 | −0.5347 | 0.4546 | 2.5123 | 10.569 | 206.345 *** | −6.2751 *** | −5.1123 *** | 0.1214 |
| Bitcoin_cash returns | 0.0130 | −0.0169 | 1.3271 | −1.5992 | 0.5085 | −0.3969 | 5.6314 | 9.4425 *** | −5.3384 *** | −5.3394 *** | 0.1039 |
| Tether returns | 0.0003 | 0.0003 | 0.0302 | −0.0441 | 0.0124 | −0.8521 | 6.8862 | 25.510 *** | −5.9941 *** | −14.375 *** | 0.5000 ** |
| Bitcoin_sv returns | 0.1087 | 0.0293 | 1.1937 | −0.4832 | 0.4831 | 1.2566 | 3.6701 | 3.9463 | −4.7496 *** | −4.7496 *** | 0.1259 |
| Litecoin returns | 0.0732 | 0.0373 | 1.5685 | −0.6346 | 0.3906 | 1.5518 | 7.1185 | 45.431 *** | −5.2324 *** | −5.2614 *** | 0.2251 |
| EOS returns | 0.0417 | 0.1166 | 1.5578 | −0.9160 | 0.5107 | 0.6028 | 4.3839 | 4.3512 | −3.9072 *** | −4.5276 *** | 0.3110 |
| Binance_coin returns | 0.1019 | 0.0534 | 1.5514 | −0.6107 | 0.4498 | 1.2385 | 5.5057 | 13.966 *** | −4.3508 *** | −4.7401 *** | 0.1590 |
| Tezos returns | 0.0073 | −0.0174 | 0.8747 | −1.0750 | 0.4401 | −0.1028 | 3.4300 | 0.2271 | −3.6817 ** | −3.6335 ** | 0.2478 |

1 This table presents the descriptive statistics of daily (Panel A), weekly (Panel B) and monthly (Panel C) Bitcoin returns and the returns of the rest of relevant cryptocurrencies over the period from January 2015 to March 2020. They include mean, median, minimum (Min.) and maximum (Max.) values, standard deviation (Std. Dev.), and skewness and kurtosis measures. JB denotes the statistic of the Jarque–Bera test for normality. The results of the Augmented Dickey–Fuller (ADF) and Phillips–Perron (PP) unit root tests and the Kwiatkowski et al. (KPSS) stationarity test are also reported in the last three columns. As usual, *, **, *** indicate statistical significance at the 10%, 5% and 1% levels, respectively.
[Figure 2 appears here: three panels of return series for Bitcoin and the other nine cryptocurrencies — Panel A: Daily frequency; Panel B: Weekly frequency; Panel C: Monthly frequency.]

**Figure 2. Time evolution of the Bitcoin returns and the rest of relevant cryptocurrency returns. Compiled by the authors, based on the information provided by the Coinmarketcap website.**
_3.2. Methodology_

To analyze the connectedness between Bitcoin returns and returns of the other top nine cryptocurrencies we use the nonlinear autoregressive distributed lag (NARDL) model developed by [40]. Importantly, NARDL is applied to simultaneously capture both long- and short-run asymmetries between our variables.

The asymmetric long-run regression of the top ten cryptocurrency returns [18,40] is a simple approach to modelling asymmetric cointegration based on partial sum decompositions:

$$R_{jt} = \alpha_0 + \alpha^{+} BR_t^{+} + \alpha^{-} BR_t^{-} + \varepsilon_{jt} \quad (1)$$

$$\Delta BR_t = v_t \quad (2)$$

where $R_{jt}$ and $BR_t$ are scalar I(1) variables. In detail, $R_{jt}$ is the return of the j-th alternative cryptocurrency corresponding to period t, for j = 1, ..., 9, and $BR_t$ is the Bitcoin return for period t, which is decomposed as $BR_t = BR_0 + BR_t^{+} + BR_t^{-}$, where $BR_t^{+}$ and $BR_t^{-}$ are partial sums of positive (appreciations) and negative (depreciations) changes in Bitcoin returns, $\varepsilon_{jt}$ and $v_t$ are random disturbances, and $\alpha = (\alpha_0, \alpha^{+}, \alpha^{-})$ is a vector of long-run parameters to be estimated.

$$BR_t^{+} = \sum_{i=1}^{t} \Delta BR_i^{+} = \sum_{i=1}^{t} \max(\Delta BR_i, 0) \quad (3)$$

$$BR_t^{-} = \sum_{i=1}^{t} \Delta BR_i^{-} = \sum_{i=1}^{t} \min(\Delta BR_i, 0) \quad (4)$$
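As an illustration, the partial sum decomposition of Equations (3) and (4) can be computed directly from a return series. The following is a minimal Python sketch; the series values are invented purely for illustration:

```python
import numpy as np

def partial_sums(br):
    """Decompose a series BR_t into the cumulative positive and negative
    partial sums of its changes, as in Equations (3) and (4)."""
    dbr = np.diff(br)                         # ΔBR_i
    br_pos = np.cumsum(np.maximum(dbr, 0.0))  # BR_t^+ (appreciations)
    br_neg = np.cumsum(np.minimum(dbr, 0.0))  # BR_t^- (depreciations)
    return br_pos, br_neg

br = np.array([0.00, 0.02, -0.01, 0.03])      # toy return series
pos, neg = partial_sums(br)
# By construction, BR_t = BR_0 + BR_t^+ + BR_t^- at every point in the sample.
```

The identity $BR_t = BR_0 + BR_t^{+} + BR_t^{-}$ holds by construction, which is what allows the NARDL regression to separate the effects of appreciations and depreciations.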
The coefficients α[+] and α[−], in Equation (1), capture the long-run relationship between each of the top alternative cryptocurrency returns and increases (α[+]) or decreases (α[−]), respectively, in the Bitcoin returns. Finally, we study whether the long-run relationship reflects asymmetric long-run Bitcoin returns passthrough to each of the alternative cryptocurrency returns.
Reference [40] affirms that the long-run relationship between Rjt and BRt is modelled as a piecewise
asymmetric linear function subject to the decomposition of BRt because if we suppose that |α[+]| < |α[−]| in
Equation (1), the long-run effect of a unit negative change in BRt will increase BRt by a greater amount
than a unit positive change would reduce it. Therefore, reference [40] confirms that the NARDL model
includes a regime-switching cointegrating relationship in which regime transitions are governed by
the sign of ∆BRt.
Thus, reference [40] developed the following flexible, dynamic, asymmetric, and nonlinear
ARDL(p,q) model by extending the well-known linear autoregressive distributed lag (ARDL) bounds
testing approach popularized by [41,42]:
$$R_{jt} = \beta_0 + \beta_1 R_{t-1} + \beta_2 BR_t^{+} + \beta_3 BR_t^{-} + \sum_{i=1}^{p} \varphi_i R_{t-i} + \sum_{i=0}^{q} \left( \gamma_i^{+} \Delta BR_{t-i}^{+} + \gamma_i^{-} \Delta BR_{t-i}^{-} \right) + \varepsilon_{jt} \quad (5)$$
where BRt is a k × 1 vector of multiple regressors defined such that BRt = BR0 + BRt[+] + BRt[−], φi is
the autoregressive parameter, p is the number of lagged dependent variables and q is the number of
lags for regressors, γi[+] and γi[−] are the asymmetric distributed lag parameters, and, finally, εjt is an iid
process with zero mean and constant variance σε[2].
Moreover, α[+] = −β2/β1, α[−] = −β3/β1, are the coefficients of long-run impacts of Bitcoin return
increases and decreases respectively on each of the nine alternative cryptocurrency returns. On the other
hand, $\sum_{i=0}^{q} \gamma_i^{+}$ and $\sum_{i=0}^{q} \gamma_i^{-}$ measure the short-run influences of increases and decreases, respectively, of Bitcoin returns on each of the top nine alternative cryptocurrency returns. Thus, not only is the asymmetric long-run relationship considered, but the asymmetric short-run influences of Bitcoin return changes on the top ten cryptocurrency returns are also captured in order to identify differences
in the response of economic agents to positive and negative shocks.
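To make these mechanics concrete, the following Python sketch simulates an altcoin tied to Bitcoin's positive and negative partial sums and recovers the long-run coefficients α[+] = −β2/β1 and α[−] = −β3/β1 from an error-correction-form regression. This is a toy version of Equation (5) with fixed lag orders (p = 1, q = 0) and plain OLS, not the stepwise NARDL estimation used in the paper; all series and true parameter values (0.5 and 0.2) are simulated:

```python
import numpy as np

rng = np.random.default_rng(42)
T = 500
br = np.cumsum(rng.normal(size=T))       # I(1) "Bitcoin return" series
dbr = np.diff(br)
p = np.cumsum(np.maximum(dbr, 0.0))      # BR_t^+
n = np.cumsum(np.minimum(dbr, 0.0))      # BR_t^-
# Altcoin in long-run equilibrium: true alpha+ = 0.5, alpha- = 0.2
r = 0.5 * p + 0.2 * n + rng.normal(0.0, 0.1, T - 1)

# Error-correction regression: ΔR_t on [1, R_{t-1}, BR_t^+, BR_t^-, ΔBR_t^+, ΔBR_t^-]
dr = np.diff(r)
X = np.column_stack([
    np.ones(len(dr)),
    r[:-1],          # R_{t-1}: speed-of-adjustment term (beta_1 < 0)
    p[1:],           # BR_t^+  (long-run)
    n[1:],           # BR_t^-  (long-run)
    np.diff(p),      # ΔBR_t^+ (short-run)
    np.diff(n),      # ΔBR_t^- (short-run)
])
beta, *_ = np.linalg.lstsq(X, dr, rcond=None)
alpha_pos = -beta[2] / beta[1]   # long-run effect of Bitcoin increases
alpha_neg = -beta[3] / beta[1]   # long-run effect of Bitcoin decreases
```

With a negative adjustment coefficient β1, the ratios −β2/β1 and −β3/β1 recover the asymmetric long-run elasticities, mirroring how the paper extracts α[+] and α[−] from the estimated NARDL model.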
Reference [40] affirms that the dynamic adjustment of the NARDL model in the error correction
form maps the gradual movement of the process from initial equilibrium through the shock and
towards the new equilibrium. Moreover, the estimation of the error correction model (ECM) improves
the performance of the NARDL model in small samples and increases the power of the cointegration
tests. Thus, we estimate the proposed NARDL model using stepwise regression under ECM.
In summary, the cointegrating NARDL model of reference [40] enables us to check for the possibility
that the time series are nonlinearly cointegrated. This methodology tests simultaneously the long- and
short-run asymmetries estimating positive and negative partial sum decompositions of the regressors in
a computationally simple and tractable manner that reflects its flexibility. Additionally, it also measures
the separate responses to positive and negative shocks of the regressors from the asymmetric dynamic
multipliers. Moreover, references [2,18,19] suggest, in addition to the advantages of good small sample
properties and simultaneous estimates of short- and long-run coefficients, some additional advantages
of the NARDL methodology, including suitability regardless of the stationarity of the variables and
freedom from residual correlation, so that it is not prone to omitted-lag bias.
However, empirical implementation of the NARDL approach requires classical unit root tests
in order to confirm that the variables are I(0) or I(1), because the presence of an I(2) variable renders
the computed F statistics for testing cointegration invalid. These tests, collected in Table 2, confirm
that all cryptocurrency returns are stationary for daily and weekly data, although there are doubts
about the stationarity of Tether and Tezos for monthly data due to the low number of observations for
these recent cryptocurrencies.
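As a hedged illustration of the kind of unit-root check involved, the sketch below computes the t ratio on the lagged level in a regression of Δy_t on a constant and y_{t−1} (a simplified Dickey–Fuller regression with no augmentation lags, not the full ADF/PP/KPSS battery reported in Table 2); strongly negative values reject a unit root. The two series are simulated for illustration only:

```python
import numpy as np

def df_stat(y):
    """Dickey-Fuller t statistic (intercept, no augmentation lags):
    regress Δy_t on [1, y_{t-1}] and return the t ratio on y_{t-1}."""
    dy = np.diff(y)
    X = np.column_stack([np.ones(len(dy)), y[:-1]])
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ beta
    sigma2 = resid @ resid / (len(dy) - X.shape[1])
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return beta[1] / np.sqrt(cov[1, 1])

rng = np.random.default_rng(1)
eps = rng.normal(size=400)
stationary = np.empty(400)
stationary[0] = 0.0
for t in range(1, 400):
    stationary[t] = 0.5 * stationary[t - 1] + eps[t]   # AR(1), stationary
random_walk = np.cumsum(eps)                           # unit-root series
# df_stat(stationary) is strongly negative, rejecting the unit root;
# df_stat(random_walk) typically stays above the tabulated critical values.
```

Note that this statistic is compared with Dickey–Fuller critical values (e.g., −2.86 at 5% with an intercept), not the standard normal table.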
Finally, based on the estimated NARDL model, we test for the presence of asymmetry and
cointegration in the relations between Bitcoin returns and the rest of the top ten cryptocurrencies.
Specifically, we study in the next section: first, the connectedness between these variables by the
Pearson’s correlation coefficients defined by the null hypothesis of no correlation (H0: PCorr = 0);
second, the presence of cointegration by the Wald F test for the joint null hypothesis that coefficients
on the level variables are jointly equal to zero (H0: β1 = β2 = β3 = 0); third, the cointegration equation
(long-run elasticities) between variables; fourth, the long-run symmetry by means of the Wald test,
with symmetry implying H0: −β2/β1 = −β3/β1; fifth, the short-run symmetry in the short-run model
by the Wald test for the null of short-run symmetry defined by γi[+] = γi[−] and sixth, the effect of the
cumulative sum of positive and negative changes (respectively) in Bitcoin returns for 1 to 4 lags on the
rest of cryptocurrencies’ returns.
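A hedged sketch of the restricted-versus-unrestricted F statistic behind the cointegration test (H0: β1 = β2 = β3 = 0, the coefficients on the level variables jointly zero) may help fix ideas. In the bounds-testing framework the statistic is compared with the critical values of [43], not the standard F table, so this Python toy only shows how the statistic itself is formed; the data are simulated:

```python
import numpy as np

def wald_f(y, X_u, X_r):
    """F statistic comparing an unrestricted design X_u with a restricted
    design X_r; the columns dropped from X_u are jointly tested as zero."""
    def ssr(X):
        b, *_ = np.linalg.lstsq(X, y, rcond=None)
        e = y - X @ b
        return e @ e
    m = X_u.shape[1] - X_r.shape[1]      # number of restrictions
    dof = len(y) - X_u.shape[1]          # residual degrees of freedom
    return ((ssr(X_r) - ssr(X_u)) / m) / (ssr(X_u) / dof)

rng = np.random.default_rng(7)
x = rng.normal(size=200)
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.1, 200)
X_u = np.column_stack([np.ones(200), x])   # unrestricted: constant + level term
X_r = np.ones((200, 1))                    # restricted: constant only
F = wald_f(y, X_u, X_r)                    # very large -> reject the null
```

The long- and short-run symmetry Wald tests in columns five and six have the same structure: the restricted model imposes −β2/β1 = −β3/β1 or γi[+] = γi[−], respectively.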
**4. Results and Discussion**
This section reports the estimates of the nonlinear ARDL model, including estimates of the long- and short-run relationships between Bitcoin returns and the rest of the top 10 cryptocurrency returns for
different frequencies (daily, weekly, and monthly) for a sample period from 26 January 2015 to 7 March
2020. We would like to highlight that the results may not be appropriate for monthly frequencies
because, due to the recent appearance of certain currencies such as “Bitcoin SV” (on 19 November 2018)
and “Tezos” (on 2 February 2018), there are very few monthly data in these two cases. In addition, it is
noteworthy that the maximum lag order considered in these NARDL estimations is 4.
_4.1. Results of the NARDL Models: Daily Frequency_
Table 3 reports the regression results of the nonlinear ARDL models and the asymmetry and
cointegration tests between Bitcoin returns and the rest of the top ten cryptocurrency returns (Ethereum,
XRP, Bitcoin Cash, Tether, Bitcoin SV, Litecoin, EOS, Binance coin, and Tezos) for daily frequency.
Table 3 is organized as follows. Column 2 contains the Pearson’s correlation coefficients, column 3 the
Wald F test for the presence of cointegration, column 4 the cointegration equation (long-run elasticities)
between Bitcoin returns and the rest of cryptocurrency returns, column 5 the Wald test for long-run
symmetry, column 6 the Wald test for short-run symmetry, columns 7 and 8 report the effect of the
cumulative sum of positive and negative changes (respectively) in Bitcoin returns for (1–4)-lags on the
rest of cryptocurrencies and finally, column 9 shows the Adjusted R[2] of each cryptocurrency.
The Pearson’s correlation coefficients in column 2 show that the null hypothesis of no correlation
(H0: PCorr = 0) is rejected by all the top ten cryptocurrencies. More specifically, a high positive
correlation is observed between Bitcoin returns and all the rest of the top ten cryptocurrency returns.
All of them exhibit statistical significance at the 1% level, showing Pearson’s correlation coefficients
between 43.3% and 82.2%, except for Tether that shows statistical significance at the 5% level and the
lowest Pearson’s correlation coefficient of 10.7%.
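The correlation test reported in this column is the standard one: the Pearson coefficient r together with the statistic t = r·sqrt((n−2)/(1−r²)), compared with a t distribution with n−2 degrees of freedom under H0: PCorr = 0. A minimal Python sketch (the sample values are invented):

```python
import math

def pearson_test(x, y):
    """Pearson correlation r and the t statistic for H0: PCorr = 0,
    t = r * sqrt((n-2)/(1-r^2)), with n-2 degrees of freedom."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    r = sxy / math.sqrt(sxx * syy)
    t = r * math.sqrt((n - 2) / (1 - r * r))
    return r, t
```

For the large daily sample used here, even moderate correlations produce t statistics far in the tails, which is why all ten coefficients are significant.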
The Wald F test for the presence of cointegration reported in column 3 shows that the null
hypotheses of no cointegration on the level variables jointly equal to zero (H0: β1 = β2 = β3 = 0) is
rejected by five cryptocurrencies (XRP, Bitcoin_cash, Tether, EOS, and Binance coin). Thus, the F
statistics show long-run relationships, i.e., cointegration, between changes in Bitcoin returns and XRP,
Bitcoin_cash, Tether, EOS and Binance_coin returns for daily frequency. Additionally, the long-run
coefficients of changes in Bitcoin returns are positive and statistically significant at 1% level for these
five cryptocurrencies, where the highest values are for XRP and Tether.
Column four of Table 3 shows the cointegration equation: Rjt−i = e[+]·BR[+]t−i + e[−]·BR[−]t−i (long-run
elasticities) between Bitcoin returns (BR) and the rest of the top ten cryptocurrencies’ returns (Rjt−i).
Thus, regarding the long-run elasticities for the cumulative sum of positive changes in Bitcoin returns
(BR[+]t−i) and the cumulative sum of negative changes in Bitcoin returns (BR[−]t−i), all cryptocurrency
returns respond in the same way to positive and negative changes in Bitcoin returns. Additionally,
the coefficients are quite similar and are of modest size for all cryptocurrencies. The largest coefficients
correspond to Bitcoin_sv returns that respond more to positive and negative changes in Bitcoin returns
(4.5% versus 5.7%, respectively). Moreover, the long-run elasticities for the cumulative sum of positive
and negative changes in Bitcoin returns are statistically significant just for four cryptocurrencies, EOS,
XRP, Tether and Binance_coin. Moreover, the coefficients are negative for XRP and EOS, meaning
they move in the opposite direction to the changes in Bitcoin returns, but are positive for Tether and
Binance_coin, meaning they fluctuate in line with Bitcoin returns.
**Table 3. Regression results of nonlinear ARDL models: asymmetry and cointegration tests between Bitcoin returns and the rest of relevant cryptocurrencies’ returns:**
daily frequency. [1]
| **Cryptocurrencies** | **PCorr** | **Coint** | **Eq** | **LAsym** | **SAsym** | **Lags[+]** | **Lags[−]** | **Adj. R[2]** |
|---|---|---|---|---|---|---|---|---|
| Ethereum returns | 0.8242 *** | 0.6334 | e[+]: 0.0370; e[−]: 0.0500 | 0.3384 | 17.776 *** | (2): 0.0935 *; (4): 0.1477 *** | (3): −0.1319 ** | 0.3254 |
| XRP returns | 0.7266 *** | 60.617 *** | e[+]: −0.0226 **; e[−]: −0.0272 ** | 3.3268 * | 8.1825 *** | (1): 0.2196 **; (3): 0.1807 ** | - | 0.1619 |
| Bitcoin_cash returns | 0.6778 *** | 15.534 *** | e[+]: 0.0203; e[−]: 0.0230 | 0.8904 | 13.737 *** | (1): −0.1787 ** | (1): −0.3240 *** | 0.3091 |
| Tether returns | 0.1069 ** | 54.861 *** | e[+]: 0.0019 **; e[−]: 0.0020 * | 0.2310 | - | - | (1): −0.0124 *; (2): −0.0224 *** | 0.1449 |
| Bitcoin_sv returns | 0.4328 *** | 0.3960 | e[+]: 0.4491; e[−]: 0.5710 | 0.2313 | 6.7191 *** | (2): 0.3620 ** | - | 0.1824 |
| Litecoin returns | 0.7694 *** | 0.6729 | e[+]: −0.0390; e[−]: −0.0550 | 0.4228 | 18.475 *** | (1): 0.1033 *; (2): 0.1408 ** | - | 0.3601 |
| EOS returns | 0.7609 *** | 5.7063 *** | e[+]: −0.4973 ***; e[−]: −0.5148 *** | 0.9959 | 18.881 *** | - | (4): −0.2319 *** | 0.4045 |
| Binance_coin returns | 0.6222 *** | 10.605 *** | e[+]: 0.0561 *; e[−]: 0.0668 ** | 3.9280 ** | 17.722 *** | - | (1): −0.3004 *** | 0.4023 |
| Tezos returns | 0.5006 *** | 1.0487 | e[+]: 0.1403; e[−]: 0.1275 | 0.3006 | 10.531 *** | - | - | 0.1936 |
1 This table reports the coefficient estimates of the NARDL model between Bitcoin returns and the rest of relevant cryptocurrencies’ returns. PCorr refers to the Pearson’s correlation
coefficients defined by the null of PCorr = 0. Coint refers to the Wald test for the presence of cointegration defined by β1 = β2 = β3 = 0. Eq shows the cointegration equation (long-run
elasticities) between Bitcoin returns (BR) and the rest of relevant cryptocurrencies’ returns Rjt-i = e[+]·BR[+]t-i + e[−]·BR[−]t-i. LAsym refers to the Wald test for the null of long-run symmetry
defined by −β2/β1 = −β3/β1. SAsym refers to the Wald test for the null of short-run symmetry defined by γi[+] = γi[−]. Lags[+] and Lags[−] show the effect of the cumulative sum of positive and
negative changes (respectively) in Bitcoin returns for ()-lags on the rest of relevant cryptocurrency returns. As usual, *, **, *** indicate statistical significance at the 10%, 5% and 1% levels,
respectively. The critical values are available in [43], in case of small sample size.
The fifth column shows the Wald test for investigating long-run symmetry. These results show that
the null hypothesis of long-run symmetry (H0: −β2/β1 = −β3/β1), is rejected only by two cryptocurrencies:
XRP and Binance_coin. Thus, the Wald test indicates that there could be asymmetry in the long-run
impact of Bitcoin returns on XRP and Binance_coin returns for daily data, corroborating previous
results obtained with long-run elasticities.
The sixth column shows the Wald test for short-run symmetry. In this case, the null hypothesis of
short-run symmetry (H0: γi[+] = γi[−]), is rejected by all the cryptocurrencies as all cryptocurrencies show
positive and statistically significant coefficients at the 1% significance level. Therefore, there is strong
evidence of asymmetric short-run responses of all cryptocurrency returns to changes in Bitcoin returns
for daily frequency. Thus, nonlinear asymmetries are important in the short-run relationship between
Bitcoin returns and the remaining top ten cryptocurrencies’ returns for daily data.
Columns seven and eight show the effect of the cumulative sum of positive and negative changes
respectively in Bitcoin returns for 1 to 4 lags on the rest of cryptocurrencies’ returns. In line with [2,18],
among others, we observe high persistence in the effect of both positive and negative changes in Bitcoin
returns, for 1 to 4 lags, in more than half of the cryptocurrency returns. More specifically, we observe a
positive and statistically significant effects of the cumulative sum of positive changes in Bitcoin returns
on Ethereum returns (for 2- and 4-lags), XRP returns (for 1- and 3-lags), Bitcoin_sv returns (for 2-lags)
and Litecoin returns (for 1- and 2-lags), as well as a negative and statistically significant effect of the
cumulative sum of positive changes in Bitcoin returns on Bitcoin_cash returns (for 1-lag). We also
notice just negative and statistically significant effect of the cumulative sum of negative changes in
Bitcoin returns on Ethereum returns (for 3-lags), Bitcoin_cash returns (for 1-lag), Tether returns (for 1- and 2-lags), EOS returns (for 4-lags) and Binance_coin returns (for 1-lag).
Finally, the explanatory power of the NARDL model as reported in the last column varies from a
minimum of 14.5% for Tether to a maximum of more than 40% for EOS and Binance_coin returns.
_4.2. Results of the NARDL Models: Weekly Frequency_
Table 4 shows the weekly regression results of nonlinear ARDL models and asymmetry and
cointegration tests between Bitcoin and the remaining top 10 cryptocurrency returns. Overall,
the explanatory power of the NARDL model as measured and reported in the last column of Table 4
varies from a minimum of 6.7% (for XRP returns) to a maximum of 51.6% (for Bitcoin_cash returns)
and 50% (for EOS returns). There appears to be a tendency for the R[2] to be a bit higher for weekly than
for daily frequencies.
Table 4, column 2, reports the Pearson’s correlation coefficients between Bitcoin returns and the
rest of the top ten cryptocurrency returns and states that the null hypothesis of no correlation is rejected
by all the top ten cryptocurrencies. There is a strong positive correlation, at least 40%, between Bitcoin
and all but Tether cryptocurrency returns and all of them show a statistical significance at the 1%
level. Tether is an interesting exception showing a negative and statistically significant correlation
with Bitcoin returns.
Column 3’s Wald’s F test for cointegration shows that the null hypothesis of no cointegration is
rejected by four cryptocurrencies (Ethereum, Tether, EOS, and Binance coin), thus indicating long-run
connectedness between weekly Bitcoin returns and Ethereum, Tether, EOS and Binance_coin weekly
returns. Additionally, the long-run coefficients of changes in Bitcoin returns are positive and significant
at the 5% significance level for Tether and EOS and at the 10% significance level for Ethereum and
Binance_coin.
**Table 4. Regression results of nonlinear ARDL models: asymmetry and cointegration tests between Bitcoin returns and the rest of relevant cryptocurrencies’ returns:**
weekly frequency. [1]
| **Cryptocurrencies** | **PCorr** | **Coint** | **Eq** | **LAsym** | **SAsym** | **Lags[+]** | **Lags[−]** | **Adj. R[2]** |
|---|---|---|---|---|---|---|---|---|
| Ethereum returns | 0.8123 *** | 2.3692 * | e[+]: 0.0529; e[−]: 0.0821 | 0.3332 | 6.9406 *** | - | - | 0.3861 |
| XRP returns | 0.7392 *** | 0.8958 | e[+]: −1.1248 *; e[−]: −1.7386 * | 0.2152 | 3.5334 *** | - | - | 0.0666 |
| Bitcoin_cash returns | 0.7315 *** | 0.5972 | e[+]: −0.9784; e[−]: −1.0266 | 0.0613 | 6.8692 *** | (2): 0.3845 **; (4): 0.3768 * | (1): 0.5360 **; (3): 0.7716 *** | 0.5155 |
| Tether returns | −0.4073 *** | 2.8918 ** | e[+]: 0.0388 ***; e[−]: 0.0429 *** | 0.6522 | - | (1): 0.0440 ***; (3): 0.0196 * | - | 0.1409 |
| Bitcoin_sv returns | 0.4208 *** | 1.0911 | e[+]: −0.7533; e[−]: −1.4758 | 0.6861 | 2.6063 *** | (1): 0.8402 ** | (1): −1.0168 ** | 0.2719 |
| Litecoin returns | 0.6745 *** | 0.2642 | e[+]: 0.0899; e[−]: −0.0127 | 0.1199 | 5.3563 *** | - | - | 0.3196 |
| EOS returns | 0.6991 *** | 3.1813 ** | e[+]: 0.6927 **; e[−]: 0.8068 ** | 0.7554 | 7.7183 *** | (3): −0.5188 *** | (1): −0.4054 *** | 0.5000 |
| Binance_coin returns | 0.5308 *** | 1.9915 * | e[+]: 0.1923; e[−]: 1.1908 | 0.0867 | 6.2489 *** | (2): 0.4735 *** | - | 0.3054 |
| Tezos returns | 0.5138 *** | 0.9228 | e[+]: 0.5929; e[−]: 0.4970 | 0.2075 | 6.2904 *** | - | - | 0.2798 |
1 This table reports the coefficient estimates of the NARDL model between Bitcoin returns and the rest of relevant cryptocurrencies’ returns. PCorr refers to the Pearson’s correlation
coefficients defined by the null of PCorr = 0. Coint refers to the Wald test for the presence of cointegration defined by β1 = β2 = β3 = 0. Eq shows the cointegration equation (long-run
elasticities) between Bitcoin returns (BR) and the rest of relevant cryptocurrencies’ returns Rj−i = e[+]·BR[+]t−i + e[−]·BR[−]t−i. LAsym refers to the Wald test for the null of long-run symmetry
defined by −β2/β1 = −β3/β1. SAsym refers to the Wald test for the null of short-run symmetry defined by γi[+] = γi[−]. Lags[+] and Lags[−] show the effect of the cumulative sum of positive and
negative changes (respectively) in Bitcoin returns for ()-lags on the rest of relevant cryptocurrency returns. As usual, *, **, *** indicate statistical significance at the 10%, 5% and 1% levels,
respectively. The critical values are available in [43], in case of small sample size.
Column four of Table 4 shows that all cryptocurrency returns (except for Litecoin returns)
respond in the same way to positive and negative changes in Bitcoin returns. Additionally,
the coefficients are quite similar for most cryptocurrencies except for Ethereum, Bitcoin_sv, Litecoin
and especially for Binance_coin where estimates for long-run elasticities are substantially different.
Clearly, the Binance_coin returns respond more to negative changes in Bitcoin returns because the
coefficient is larger. Thus, for instance, a 10% increase in Bitcoin returns is related to the increase
in the Binance_coin returns by about 1.9%. However, a 10% decrease in Bitcoin returns leads to an
11.9% decrease in Binance_coin returns. Nevertheless, these elasticities are not statistically significant.
Long-run elasticities for the cumulative sum of positive and negative changes in Bitcoin returns
are statistically significant just for Tether, EOS and XRP at the 1%, 5% and 10% significance level,
respectively. Moreover, the coefficients are negative for XRP and positive for EOS and Tether.
The Wald test for long-run symmetry reported in column five shows that the null hypothesis of
long-run symmetry is not rejected by any of the top ten cryptocurrencies. However, the corresponding
test for short-run symmetry reported in column six shows that the null hypothesis of short-run
symmetry is rejected by all the cryptocurrencies. More specifically, all cryptocurrencies show positive
and statistically significant coefficients at 1% significance level. Therefore, there is strong evidence of
asymmetric short-run responses of all cryptocurrency returns to changes in Bitcoin returns for weekly
frequency but there is no evidence of long-run asymmetry. Therefore, nonlinear asymmetries are also
important for the short-run relationship between Bitcoin and the remaining top 10 cryptocurrencies for
weekly data.
Weekly frequency data also corroborate a high persistence on the impact of both positive and
negative changes in Bitcoin returns, for 1 to 4 lags, on half of the remaining top 10 cryptocurrency
returns. More specifically, the cumulative sum of positive and negative changes (respectively) of
Bitcoin returns for 1 to 4 lags on the rest of cryptocurrency returns, shown in columns seven and
eight of Table 4, illustrates that there is a statistically significant and slightly larger short-run impact
of increases than decreases of Bitcoin returns on most cryptocurrency returns. We notice a positive
and statistically significant effect of the cumulative sum of positive changes in Bitcoin returns on
Bitcoin_cash returns for 2- and 4-lags, on Tether returns for 1- and 3-lags, on Bitcoin_sv returns for
1-lag and on Binance_coin returns for 2-lags, as well as a negative and statistically significant effect of
the cumulative sum of positive changes in Bitcoin returns on EOS returns for 3-lags. We also notice a
positive and statistically significant effect of the cumulative sum of negative changes in Bitcoin returns
on Bitcoin_cash for 1- and 3-lags and a negative and statistically significant effect of the cumulative
sum of negative changes in Bitcoin returns on Bitcoin_sv and EOS for 1-lag.
_4.3. Results of the NARDL Models: Monthly Frequency_
Table 5 shows the regression results of nonlinear ARDL models and asymmetry and cointegration
tests between Bitcoin returns and the remaining top 10 cryptocurrency returns for monthly frequency.
It should be noted that monthly data may give inaccurate results for a few of the altcoin cryptocurrencies
because some have only recently been created and so have a modest number of monthly observations.
Specifically, the most recent cryptocurrencies are Tezos, whose prices start on 2 February 2018, and
especially Bitcoin_sv, whose prices start on 19 November 2018. Therefore, we will analyze the monthly
results considering this potential limitation.
Neglecting the results of recently issued cryptocurrencies with modest sample size, the explanatory
power of the monthly NARDL model varies from a minimum adjusted R[2] of 26.8% for the Tether
returns to a maximum of 77.3% for EOS returns. It is noticeable that the two most recently issued
cryptocurrencies with the smallest sample size have the highest adjusted R[2]; 96.6% for Bitcoin_sv and
80.1% for Tezos. In any event, there is a clear tendency for the explanatory power of the NARDL model
to rise as the sampling frequency decreases. For example, for EOS the explanatory power steadily
increases as we move from daily, weekly, and monthly frequency, achieving R[2] of 40.4%, 50% and
77.3% respectively.
**Table 5. Regression results of nonlinear ARDL models: asymmetry and cointegration tests between Bitcoin returns and the rest of relevant cryptocurrencies’ returns:**
monthly frequency. [1]
| **Cryptocurrencies** | **PCorr** | **Coint** | **Eq** | **LAsym** | **SAsym** | **Lags[+]** | **Lags[−]** | **Adj. R[2]** |
|---|---|---|---|---|---|---|---|---|
| Ethereum returns | 0.6352 *** | 0.1902 | e[+]: −0.8061; e[−]: −1.0821 | 0.0205 | 3.9753 *** | - | - | 0.4302 |
| XRP returns | 0.4454 * | 4.4249 *** | e[+]: 0.1575; e[−]: 0.4109 | 0.9089 | 2.7308 *** | - | - | 0.2721 |
| Bitcoin_cash returns | 0.5927 ** | 0.4673 | e[+]: 0.7670; e[−]: 0.4763 | 0.1481 | 4.8457 *** | (1): 1.1441 *** | - | 0.5652 |
| Tether returns | −0.1473 | 3.8636 ** | e[+]: 0.0203 **; e[−]: 0.0289 ** | 1.8779 | −2.5775 ** | (1): 0.0210 ** | (1): −0.0292 * | 0.2680 |
| Bitcoin_sv returns | 0.2854 | 34.743 *** | e[+]: 0.7260; e[−]: 6.0939 * | 46.084 *** | −3.2676 *** | (1): 2.8139 *; (2): 2.4948 * | - | 0.9657 |
| Litecoin returns | 0.4924 * | 2.7840 ** | e[+]: 3.0736 ***; e[−]: 4.2521 ** | 0.1822 | 3.4526 *** | (1): 0.7763 **; (4): 0.8604 *** | (3): −0.6674 * | 0.4907 |
| EOS returns | 0.4932 * | 2.7137 * | e[+]: 1.4434; e[−]: 2.6779 ** | 0.3991 | 3.2146 *** | (1): 0.8562 *** | (1): −1.0826 **; (3): −0.7961 *** | 0.7731 |
| Binance_coin returns | 0.5057 * | 1.8156 | e[+]: 0.2134; e[−]: 0.2746 | 0.0705 | 2.4323 *** | (1): 1.3610 ***; (4): 0.3091 * | (2): −0.6275 **; (3): −1.1079 ***; (4): −0.6770 ** | 0.7481 |
| Tezos returns | 0.2630 | 14.1765 *** | e[+]: 1.5210 ***; e[−]: 3.2410 *** | 20.439 *** | 3.0335 *** | (2): −2.1163 *** | (1): 3.5387 ***; (2): 2.1296 ***; (3): 1.9299 *** | 0.8079 |
1 This table reports the coefficient estimates of the NARDL model between Bitcoin returns and the rest of relevant cryptocurrencies’ returns. PCorr refers to the Pearson’s correlation
coefficients defined by the null of PCorr = 0. Coint refers to the Wald test for the presence of cointegration defined by β1 = β2 = β3 = 0. Eq shows the cointegration equation (long-run
elasticities) between Bitcoin returns (BR) and the rest of relevant cryptocurrencies’ returns Rjt−i = e[+]·BR[+]t−i + e[−]·BR[−]t−i. LAsym refers to the Wald test for the null of long-run symmetry
defined by −β2/β1 = −β3/β1. SAsym refers to the Wald test for the null of short-run symmetry defined by γi[+] = γi[−]. Lags[+] and Lags[−] show the effect of the cumulative sum of positive and
negative changes (respectively) in Bitcoin returns for ()-lags on the rest of relevant cryptocurrency returns. As usual, *, **, *** indicate statistical significance at the 10%, 5% and 1% levels,
respectively. The critical values are available in [43], in case of small sample size.
The Pearson’s correlation reported in column two of Table 5 rejects the null hypothesis of no
correlation for just six out of nine cryptocurrencies. More specifically, a positive and statistically
significant relationship is observed between Bitcoin returns and Ethereum, Bitcoin_cash, XRP, Litecoin,
EOS and Binance_coin returns, but only Ethereum and Bitcoin_cash are highly significant; the rest
are significant at the 10% level. It is interesting to note that the three cryptocurrencies that do not reject
the null hypothesis are the two above-noted most recently issued cryptocurrencies and Tether, which has
the lowest R[2] of all, showing that there is no correlation between Bitcoin returns and the returns of
these more recent cryptocurrencies.
The results of the Wald’s F test for cointegration, reported in column three of Table 5, show that
the null hypothesis of no cointegration is rejected by six cryptocurrencies, XRP, Tether, Bitcoin_sv,
Litecoin, EOS, and Tezos. Thus, the bounds F statistics show long-run connectedness between these
cryptocurrency returns and changes in Bitcoin returns. In addition, the long-run coefficients of changes
in Bitcoin returns are positive and statistically significant in these six cryptocurrencies. We should
note that Tezos and Bitcoin_sv, the two most recently issued cryptocurrencies, have very high F
statistics that could be an artifact of a modest sample size.
The cointegration equation listed in column four shows that all cryptocurrency returns respond in
the same way to positive and negative changes in Bitcoin returns. Additionally, the coefficients are
quite similar for most cryptocurrencies except for the two most recently issued cryptocurrencies where
the Tezos coefficient of negative changes in Bitcoin returns is twice as high as the coefficient of positive
changes and especially the most recently issued Bitcoin_sv, where the coefficient of negative changes is
almost nine times higher than the coefficient of positive changes. Furthermore, the long-run elasticities
for the cumulative sum of positive and negative changes in Bitcoin returns are statistically significant
just for Tether, Litecoin and Tezos and just the coefficient of negative changes of Bitcoin returns for
Bitcoin_sv and EOS.
The results of the Wald test for testing the long-run symmetry reported in column five, show that
the null hypothesis of long-run symmetry is rejected only by Bitcoin_sv and Tezos indicating that
there could be asymmetry in the long-run impact of Bitcoin returns for these two most recently
issued cryptocurrencies. For the Wald test for testing the short-run symmetry reported in column
six, it is observed that only two of them, one of which is the modest-sample-size Bitcoin_sv,
show negative and statistically significant coefficients, while all the remaining cryptocurrencies
have positive and statistically significant coefficients at the 1% level. Therefore, all cryptocurrency returns
show asymmetric short-run responses to changes in Bitcoin returns for monthly frequency.
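A Wald test of symmetry such as the one above amounts to testing a single linear restriction, namely that the coefficients on positive and negative changes are equal. A minimal sketch, assuming the coefficient estimates and their estimated covariance are already available; the numbers in the example are hypothetical:

```python
def wald_symmetry(b_pos, b_neg, var_pos, var_neg, cov):
    """Wald statistic for H0: b_pos == b_neg (a single linear restriction).
    Under H0, W is asymptotically chi-square with 1 degree of freedom,
    so H0 is rejected at the 5% level when W > 3.841."""
    diff = b_pos - b_neg
    var_diff = var_pos + var_neg - 2.0 * cov  # Var(b_pos - b_neg)
    return diff * diff / var_diff

# Hypothetical estimates: the gap between coefficients is large
# relative to its variance, so symmetry is rejected.
W = wald_symmetry(0.9, 0.3, 0.02, 0.03, 0.005)
assert abs(W - 9.0) < 1e-9  # 9.0 > 3.841 -> reject symmetry
```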
The effect of the cumulative sum of positive and negative changes in Bitcoin returns for 1–4 lags
on the rest of cryptocurrency returns is shown in columns seven and eight of Table 5. There is a positive
and statistically significant effect for the cumulative sum of positive changes in Bitcoin returns on six
out of nine cryptocurrency returns: on Bitcoin_cash, Tether and EOS returns (for 1-lag), on Bitcoin_sv
returns (for 1- and 2-lags), and on Litecoin and Binance_coin returns (for 1- and 4-lags), as well as
a negative and statistically significant effect of positive changes in Bitcoin returns on Tezos returns (for 2-lags). We also
notice a positive and statistically significant effect of the cumulative sum of negative changes in Bitcoin
returns just on Tezos returns (for 1-, 2- and 3-lags), as well as a negative and statistically significant
effect of the cumulative sum of negative changes in Bitcoin returns on four out of nine cryptocurrency
returns: on Tether returns (for 1-lag), on Litecoin returns (for 3-lags), on EOS returns (for 1- and 3-lags)
and on Binance_coin returns (for 2-, 3-, and 4-lags). Consequently, for monthly frequency, we find a
high persistence in the effect of both positive and negative variations in Bitcoin returns, for 1 to 4 lags,
on most of the cryptocurrency returns.
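The 1- to 4-lag effects discussed here come from regressors that are lagged values of the cumulative positive and negative changes. A minimal Python sketch of building such lagged regressor rows (illustrative only, not the paper's estimation code):

```python
def lagged_rows(series, lags):
    """Build regressor rows [x_{t-1}, ..., x_{t-lags}] for t = lags..len-1,
    as used for short-run dynamics with 1 to 4 lags; the first `lags`
    observations are lost to initialization."""
    return [[series[t - k] for k in range(1, lags + 1)]
            for t in range(lags, len(series))]

rows = lagged_rows([10, 11, 12, 13, 14], 2)
assert rows == [[11, 10], [12, 11], [13, 12]]
```

Stacking these rows for both partial-sum series yields the design matrix whose coefficients are reported in columns seven and eight of Table 5.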
**5. Concluding Remarks**
This paper studies both long- and short-run interdependencies between the returns of Bitcoin
and the rest of the most important recent cryptocurrencies, namely Ethereum, XRP, Bitcoin Cash, Tether,
Bitcoin SV, Litecoin, EOS, Binance Coin, and Tezos, applying a NARDL approach. Our sample period
_Mathematics 2020, 8, 810_ 19 of 22
extends from 26 January 2015 to 7 March 2020, and we check the results for daily, weekly, and
monthly frequency data.
To the best of our knowledge, this is the first study that explores the co-movement between Bitcoin
and the remaining top ten cryptocurrencies selected according to the largest market capitalization,
by using the NARDL approach to evaluate both long- and short-run asymmetries. Pearson's
correlation coefficients provide evidence that there is a positive and statistically significant correlation
between Bitcoin returns and all the rest of the top ten cryptocurrencies for all frequencies, except for
the most recent cryptocurrencies, for monthly frequency, likely due to the lack of data. These results
are in line with those obtained in works such as [2,4,9,15,26,31]. We find a cointegration or long-run
relationship between most cryptocurrency returns and changes in Bitcoin returns for all frequencies [32],
while in [35] most of the variables are not cointegrated. Moreover, the cointegration equation reveals
that cryptocurrency returns usually respond in the same way to positive and negative changes in Bitcoin
returns, with very few exceptions. Furthermore, our tests indicate that asymmetries in the long-run
impact of Bitcoin returns are operative for at most two of the nine cryptocurrency returns, but
there is strong evidence of asymmetry in the short-run impact of Bitcoin returns in all cryptocurrency
returns for all frequencies. This provides strong evidence that nonlinear asymmetries are especially
important for the short-run relationships between these cryptocurrencies. Our results are similar to
those found in [1,22], but instead of using ARDL, we include non-linearity in the estimation. We find
evidence of high persistence in the impact of both positive and negative changes in Bitcoin returns,
for 1 to 4 lags for most of the cryptocurrency returns. Specifically, the cumulative sum of positive
and negative changes in Bitcoin returns has a statistically significant effect on most cryptocurrency
returns for daily, weekly, and monthly frequencies. The NARDL model explains more than 40% and
50% of the variation in cryptocurrency returns with changes in Bitcoin returns for the daily and weekly
time series, respectively, although the monthly results for the most recently issued cryptocurrencies
could be exaggerated due to the short time series available for monthly data.
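For completeness, the Pearson correlation coefficients used in the preliminary analysis above can be computed directly from two return series; a minimal standard-library Python sketch:

```python
from math import sqrt

def pearson(x, y):
    """Sample Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))  # cross-deviation sum
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / sqrt(sxx * syy)

# Perfectly linearly related series have correlation 1.
assert abs(pearson([1, 2, 3, 4], [2, 4, 6, 8]) - 1.0) < 1e-12
```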
According to our results, some cryptocurrencies (specifically XRP, Tether and EOS) are more
connected to Bitcoin than others (Tezos, among other altcoins), in line with [23]. The economic
intuition is that the more connected an altcoin is, the more likely it can be used as a substitute for
Bitcoin, whereas the lower the connectedness, the more it can be considered an alternative asset
distinct from Bitcoin. Thus, a practical application of our results is that the least connected virtual
coins can be used to diversify positions in Bitcoin, whereas the more connected an altcoin is, the better
it can be used to hedge positions in Bitcoin. Suppose there is a lack of liquidity in the cryptocurrency
market: if a potential investor wishes to reduce exposure to Bitcoin and sells, the selling itself could
move the Bitcoin price against the investor. Similarly, an investor who wants to hedge probably could
not short Bitcoin, so selling a highly correlated altcoin could be the alternative way to hedge.
Moreover, another relevant aspect of research is how the results change as we
move from daily to monthly observations. We seem to obtain an increase in R-squared as we reduce
the frequency of the observations. Does that suggest that the longer the periodicity of the data, the
more connected the altcoins are to Bitcoin? That would be interesting if, for example, we wished to
hedge Bitcoin positions with, say, Tether positions.
Overall, our results have important implications for market participants, because
potential connectedness between the top cryptocurrencies’ returns may affect the decision-making of
investors and policymakers. Thus, future research could extend our study to the analysis of potential
co-movements in volatility in the cryptocurrency market as volatility co-movements can have a key
role for implementing suitable investment strategies as well. To make more informed decisions,
an extensive study of interdependencies between cryptocurrencies and conventional assets is crucial.
Finally, it would be very interesting to incorporate into the analysis the stage of the economy, because
previous literature confirms that interdependence patterns may change over time. This is a significant
aspect in a market as volatile as the cryptocurrency market, especially in periods of economic recession
such as the present one, caused by COVID-19, which is affecting the whole world. Therefore, a critical
issue will be to propose investment strategies using cryptocurrencies as hedging and/or diversification
instruments in the current period affected by the SARS-CoV-2 pandemic.
**Author Contributions: Conceptualization, F.J.; Data curation, M.d.l.O.G.; Formal analysis, M.d.l.O.G., F.J.**
and F.S.S.; Funding acquisition, F.J. and F.S.S.; Investigation, M.d.l.O.G. and F.J.; Methodology, F.J.; Software,
F.J.; Supervision, F.J. and F.S.S.; Validation, M.d.l.O.G. and F.S.S.; Writing—original draft, M.d.l.O.G. and F.J.;
Writing—review & editing, M.d.l.O.G., F.J. and F.S.S. All authors have read and agreed to the published version of
the manuscript.
**Funding:** This research was funded by the Spanish Ministerio de Economía, Industria y Competitividad,
grant number ECO2017-89715-P.
**Conflicts of Interest: The authors declare no conflict of interest.**
**References**
1. Ciaian, P.; Rajcaniova, M.; Kancs, D. Virtual relationships: Short- and long-run evidence from BitCoin and
[altcoin markets. J. Int. Financ. Mark. Inst. Money 2018, 52, 173–195. [CrossRef]](http://dx.doi.org/10.1016/j.intfin.2017.11.001)
2. Jareño, F.; González, M.O.; Tolentino, M.; Sierra, K. Bitcoin and Gold Price Returns: A Quantile Regression
[and NARDL Analysis. Resour. Policy 2020, 67, 101666. [CrossRef]](http://dx.doi.org/10.1016/j.resourpol.2020.101666)
3. Bação, P.; Duarte, A.P.; Sebastião, H.; Redzepagic, S. Information Transmission Between Cryptocurrencies:
[Does Bitcoin Rule the Cryptocurrency World? Sci. Ann. Econ. Bus. 2018, 65, 97–117. [CrossRef]](http://dx.doi.org/10.2478/saeb-2018-0013)
4. Papadimitriou, T.; Gogas, P.; Gkatzoglou, F. The evolution of the cryptocurrencies market: A complex
[networks approach. J. Comput. Appl. Math. 2020, 376, 112831. [CrossRef]](http://dx.doi.org/10.1016/j.cam.2020.112831)
5. Diebold, F.X.; Yilmaz, K. Measuring Financial Asset Return and Volatility Spillovers, with Application to
[Global Equity Markets. Econ. J. 2009, 119, 158–171. [CrossRef]](http://dx.doi.org/10.1111/j.1468-0297.2008.02208.x)
6. Diebold, F.; Yilmaz, K. Better to give than to receive: Predictive directional measurement of volatility
[spillovers. Int. J. Forecast. 2012, 28, 57–66. [CrossRef]](http://dx.doi.org/10.1016/j.ijforecast.2011.02.006)
7. Diebold, F.X.; Yılmaz, K.; Yilmaz, K. On the network topology of variance decompositions: Measuring the
[connectedness of financial firms. J. Econ. 2014, 182, 119–134. [CrossRef]](http://dx.doi.org/10.1016/j.jeconom.2014.04.012)
8. Koutmos, D. Return and volatility spillovers among cryptocurrencies. Econ. Lett. 2018, 173, 122–127.
[[CrossRef]](http://dx.doi.org/10.1016/j.econlet.2018.10.004)
9. Ji, Q.; Bouri, E.; Lau, C.K.M.; Roubaud, D. Dynamic connectedness and integration in cryptocurrency markets.
_[Int. Rev. Financ. Anal. 2019, 63, 257–272. [CrossRef]](http://dx.doi.org/10.1016/j.irfa.2018.12.002)_
10. Symitsi, E.; Chalvatzis, K.J. Return, volatility and shock spillovers of Bitcoin with energy and technology
[companies. Econ. Lett. 2018, 170, 127–130. [CrossRef]](http://dx.doi.org/10.1016/j.econlet.2018.06.012)
11. Charfeddine, L.; Benlagha, N.; Maouchi, Y. Investigating the dynamic relationship between cryptocurrencies
[and conventional assets: Implications for financial investors. Econ. Model. 2020, 85, 198–217. [CrossRef]](http://dx.doi.org/10.1016/j.econmod.2019.05.016)
12. Walther, T.; Klein, T.; Bouri, E. Exogenous drivers of Bitcoin and Cryptocurrency volatility—A mixed data
[sampling approach to forecasting. J. Int. Financ. Mark. Inst. Money 2019, 63, 101133. [CrossRef]](http://dx.doi.org/10.1016/j.intfin.2019.101133)
13. Beneki, C.; Koulis, A.; Kyriazis, N.A.; Papadamou, S. Investigating volatility transmission and hedging
[properties between Bitcoin and Ethereum. Res. Int. Bus. Financ. 2019, 48, 219–227. [CrossRef]](http://dx.doi.org/10.1016/j.ribaf.2019.01.001)
14. Katsiampa, P.; Corbet, S.; Lucey, B.M. Volatility spillover effects in leading cryptocurrencies: A BEKK-MGARCH [analysis. Financ. Res. Lett. 2019, 29, 68–74. [CrossRef]](http://dx.doi.org/10.1016/j.frl.2019.03.009)
15. Katsiampa, P.; Corbet, S.; Lucey, B. High frequency volatility co-movements in cryptocurrency market. J. Int.
_[Financ. Mark. Inst. Money 2019, 62, 35–52. [CrossRef]](http://dx.doi.org/10.1016/j.intfin.2019.05.003)_
16. Tu, Z.; Xue, C. Effect of bifurcation on the interaction between Bitcoin and Litecoin. Financ. Res. Lett. 2019,
_[31, 382–385. [CrossRef]](http://dx.doi.org/10.1016/j.frl.2018.12.010)_
17. Song, J.Y.; Chang, W.; Song, J.W. Cluster analysis on the structure of the cryptocurrency market via
[Bitcoin-Ethereum filtering. Phys. A 2019, 527, 121339. [CrossRef]](http://dx.doi.org/10.1016/j.physa.2019.121339)
18. Jareño, F.; Tolentino, M.; De La O González, M.; Oliver, A. Impact of changes in the level, slope and curvature
of interest rates on U.S. sector returns: An asymmetric nonlinear cointegration approach. Econ. Res. -Ekon.
_[Istraživanja 2019, 32, 1275–1297. [CrossRef]](http://dx.doi.org/10.1080/1331677X.2019.1632726)_
19. Arize, A.C.; Malindretos, J.; Igwe, E.U. Do exchange rate changes improve the trade balance: An asymmetric
[nonlinear cointegration approach. Int. Rev. Econ. Financ. 2017, 49, 313–326. [CrossRef]](http://dx.doi.org/10.1016/j.iref.2017.02.007)
20. Corbet, S.; Lucey, B.M.; Urquhart, A.; Yarovaya, L. Cryptocurrencies as a financial asset: A systematic
[analysis. Int. Rev. Financ. Anal. 2019, 62, 182–199. [CrossRef]](http://dx.doi.org/10.1016/j.irfa.2018.09.003)
21. White, R.; Marinakis, Y.D.; Islam, N.; Walsh, S.T. Is Bitcoin a currency, a technology-based product,
[or something else? Technol. Forecast. Soc. Chang. 2020, 151, 119877. [CrossRef]](http://dx.doi.org/10.1016/j.techfore.2019.119877)
22. Nguyen, T.; Nguyen, B.T.; Nguyen, T.C.; Nguyen, Q.Q. Bitcoin return: Impacts from the introduction of new
[altcoins. Res. Int. Bus. Financ. 2019, 48, 420–425. [CrossRef]](http://dx.doi.org/10.1016/j.ribaf.2019.02.001)
23. Mensi, W.; Rehman, M.U.; Al-Yahyaee, K.H.; Al-Jarrah, I.M.W.; Kang, S.H.; Al-Jarrah, I. Time frequency
analysis of the commonalities between Bitcoin and major Cryptocurrencies: Portfolio risk management
[implications. N. Am. J. Econ. Financ. 2019, 48, 283–294. [CrossRef]](http://dx.doi.org/10.1016/j.najef.2019.02.013)
24. Kumar, A.S.; Ajaz, T. Co-movement in crypto-currency markets: Evidences from wavelet analysis. Financ.
_[Innov. 2019, 5, 1–17. [CrossRef]](http://dx.doi.org/10.1186/s40854-019-0143-3)_
25. Bouri, E.; Shahzad, S.J.H.; Roubaud, D. Cryptocurrencies as hedges and safe-havens for US equity sectors. Q.
_[Rev. Econ. Financ. 2020, 75, 294–307. [CrossRef]](http://dx.doi.org/10.1016/j.qref.2019.05.001)_
26. Katsiampa, P. Volatility co-movement between Bitcoin and Ether. Financ. Res. Lett. 2019, 30, 221–227.
[[CrossRef]](http://dx.doi.org/10.1016/j.frl.2018.10.005)
27. Leclair, E.M. Herding in the Cryptocurrency Market; ECON 5029 Final Research; Carleton University: Ottawa,
ON, Canada, 2018.
28. [Hwang, S.; Salmon, M. Market stress and herding. J. Empir. Financ. 2004, 11, 585–616. [CrossRef]](http://dx.doi.org/10.1016/j.jempfin.2004.04.003)
29. Köchling, G.; Müller, J.; Posch, P.N. Price delay and market frictions in cryptocurrency markets. Econ. Lett.
**[2019, 174, 39–41. [CrossRef]](http://dx.doi.org/10.1016/j.econlet.2018.10.025)**
30. Platanakis, E.; Urquhart, A. Portfolio management with cryptocurrencies: The role of estimation risk. Econ.
_[Lett. 2019, 177, 76–80. [CrossRef]](http://dx.doi.org/10.1016/j.econlet.2019.01.019)_
31. Vidal-Tomás, D.; Escribano, A.M.I.; Farinós, J.E. Herding in the cryptocurrency market: CSSD and CSAD
[approaches. Financ. Res. Lett. 2019, 30, 181–186. [CrossRef]](http://dx.doi.org/10.1016/j.frl.2018.09.008)
32. Ahmed, W.M. Is there a risk-return trade-off in cryptocurrency markets? The case of Bitcoin. J. Econ. Bus.
**[2019, 108, 105886. [CrossRef]](http://dx.doi.org/10.1016/j.jeconbus.2019.105886)**
33. Burnie, A. Exploring the Interconnectedness of Cryptocurrencies using Correlation Networks. In Proceedings
of the Cryptocurrency Research Conference 2018, Anglia Ruskin University Lord Ashcroft International
Business School Centre for Financial Research, Cambridge, UK, 24 May 2018.
34. Lebedeva, E. Spillovers between cryptocurrencies. Network map of cryptocurrencies. Master’s Thesis,
University of Tartu, Tartu, Estonia, 2018.
35. Adedokun, A. Bitcoin-Altcoin Price Synchronization Hypothesis: Evidence from Recent Data. J. Financ.
_Econ. 2019, 7, 137–147._
36. Kyriazis, N. A Survey on Empirical Findings about Spillovers in Cryptocurrency Markets. J. Risk Financial
_[Manag. 2019, 12, 170. [CrossRef]](http://dx.doi.org/10.3390/jrfm12040170)_
37. Gkillas, K.; Bekiros, S.; Siriopoulos, C. Extreme Correlation in Cryptocurrency Markets. SSRN Electron. J.
**[2018. [CrossRef]](http://dx.doi.org/10.2139/ssrn.3180934)**
38. Lo, Y.C.; Medda, F. Assets on the Blockchain: An Empirical Study of Tokenomics. SSRN Electron. J. 2019.
[[CrossRef]](http://dx.doi.org/10.2139/ssrn.3309686)
39. Canh, N.P.; Binh, N.Q.; Thanh, S.D. Cryptocurrencies and Investment Diversification: Empirical Evidence
[from Seven Largest Cryptocurrencies. Theor. Econ. Lett. 2019, 9, 431–452. [CrossRef]](http://dx.doi.org/10.4236/tel.2019.93031)
40. Shin, Y.; Yu, B.; Greenwood-Nimmo, M. Modelling Asymmetric Cointegration and Dynamic Multipliers in a
Nonlinear ARDL Framework. In Festschrift in Honor of Peter Schmidt; Springer Science and Business Media
LLC: New York, NY, USA, 2014; pp. 281–314.
41. Andvig, J.C.; Thonstad, T.; Bjerkholt, O.; Chipman, J.S.; Hausman, J.; Newey, W.K.; Blundell, R.; Griliches, Z.;
Mairesse, J.; Jorgenson, D.W.; et al. Econometrics and Economic Theory in the 20th Century. In Proceedings of
_the Econometrics and Economic Theory in the 20th Century; Cambridge University Press (CUP): Cambridge,_
UK, 1999.
42. Pesaran, M.H.; Shin, Y.; Smith, R.J. Bounds testing approaches to the analysis of level relationships. J. Appl.
_[Econ. 2001, 16, 289–326. [CrossRef]](http://dx.doi.org/10.1002/jae.616)_
43. Narayan, P.K. The saving and investment nexus for China: Evidence from cointegration tests. Appl. Econ.
**[2005, 37, 1979–1990. [CrossRef]](http://dx.doi.org/10.1080/00036840500278103)**
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access
article distributed under the terms and conditions of the Creative Commons Attribution
[(CC BY) license (http://creativecommons.org/licenses/by/4.0/).](http://creativecommons.org/licenses/by/4.0/.)
|
_Nanotechnology Perceptions_
_ISSN 1660-6795_
_[www.nano-ntp.com](http://www.nano-ntp.com/)_
# Implementation of the BFTIRS
Algorithm for Integrating Distributed
Ledgers with Supply Chain Network
## K. S. Chandrasekaran[1], V. Mahalakshmi[2], M. R. Anantha
Padmanaban[3]
_1Research Scholar, Department of Computer Science and Engineering, Annamalai_
_University, Chidambaram, India,_ _chandrasekaran-cse@saranathan.ac.in_
_2Assistant Professor, Department of Computer Science and Engineering, Annamalai_
_University, Chidambaram, India, mahaa80@gmail.com_
_3Associate Professor,_ _Saranathan College of Engineering, Tiruchirappalli, India,_
_mrpadmanaban-mech@saranathan.ac.in_
Over the past ten years, blockchain technology has significantly captured interest in various
application fields. Originally devised for the Bitcoin peer-to-peer cryptocurrency network,
extensive research now explores integrating blockchain with various other service domains. The
technology is celebrated for its decentralized structure, robust security, immutability, and
transparency. In blockchain systems, consensus algorithms play a crucial role in establishing
unanimous agreement among participants within a distributed computing environment, facilitating
the addition of new blocks to the blockchain network. The effectiveness and security of the network
largely hinge on the performance of these consensus algorithms. However, existing consensus
algorithms face challenges with throughput, latency, and communication complexity. To address
these issues, an enhanced consensus algorithm known as Intuitive Random Selection based
Byzantine Fault Tolerant (BFTIRS) is introduced. This algorithm optimizes the consensus process
by selecting a subset of nodes, thereby reducing network complexity and enhancing efficiency
without sacrificing security. To tackle scalability issues in blockchains, a hierarchical BFTIRS
algorithm that incorporates sharding is developed. This approach segments network participants
into local and global consensus groups, each conducting the consensus process independently.
Performance evaluations of this algorithm show improvements in both efficiency and security over
existing solutions.
**Keywords:** Blockchain, supply chain, reputation assessment, C-PBFT.
**1. Introduction**
A blockchain is a decentralised ledger, a digital technology utilised to document transactions
among multiple participants in a verifiable and tamper-resistant manner. The ledger can be
configured to execute transactions autonomously. The principal function of blockchain in
cryptocurrency networks designed to supplant conventional currencies is to enable secure and
_Nanotechnology Perceptions 20 No. S14 (2024) 414-424_
private transactions among several anonymous entities, eliminating the necessity for a central
intermediary. Supply chains employ restricted access to protect corporate operations from
adversarial entities and improve overall efficiency. The effective implementation of
blockchain technology in supply chains requires the development of private blockchains, the
installation of novel protocols for transaction recording, and the formation of new regulations
to govern the system. These components are presently under development at varying stages.
The Advantages of Blockchain Technology
During the 1990s, substantial advancements in the dissemination of supply chain information
were primarily propelled by companies such as Walmart and Procter & Gamble, through the
adoption of enterprise resource planning (ERP) systems. Nonetheless, the challenge of
visibility persists in broad supply chains that involve complex operations.
To illustrate the limitations of current financial ledger entries and ERP systems, along with the
potential benefits of a blockchain-based environment, we will present a hypothetical scenario:
This is a fundamental transaction in which a merchant acquires a product from a supplier, and
a bank transfers the requisite payments to the supplier to complete the order. The transaction
involves the exchange of information, transportation of merchandise, and transfer of financial
assets. It is important to acknowledge that a certain flow does not produce financial ledger
entries for all three parties involved. Cutting-edge ERP systems, manual audits, and
inspections fail to adequately integrate the three flows, leading to challenges in mitigating
execution errors, improving decision-making, and addressing supply chain problems.
Recently, e-commerce has profoundly influenced contemporary economic life as an
innovative trading model, expanding rapidly due to its accessibility and efficacy. The
expansion has stimulated a rise in the digital economy and increased consumer expenditure,
resulting in significant economic advantages for society. Supply chains are a crucial element
of e-commerce as they link various entities, including consumers, intermediaries,
manufacturers, and suppliers, to facilitate transactions on online platforms. As the number of
nodes proliferates, the intricacies of the supply chain escalate, resulting in significant
management and maintenance issues. Issues such as information transmission errors or
logistical disruptions are exacerbated when one party in a transaction possesses more or
superior information than another, complicating product traceability and intensifying the
bullwhip effect. This results in losses for consumers and providers, heightens supply and
inventory risks, and disrupts supply chain order and marketing management. Blockchain
technology has arisen as a prominent framework for decentralised applications owing to its
incorporation of distributed ledger storage, consensus mechanisms, and encryption
methodologies. The use of blockchain technology into the data-sharing framework enhances
administrative efficiency and provides transparent visibility into supply chain information,
benefiting all parties involved in the transaction. This integration also alleviates the bullwhip
effect by providing steady trade information. Despite these benefits, blockchain's consensus
mechanism inefficiency poses significant challenges to supply chain throughput and
transaction processing speed. Among the array of blockchain consensus mechanisms, Practical
Byzantine Fault Tolerance (PBFT) effectively addresses these issues with a protocol that
simplifies agreement among nodes. Nonetheless, PBFT struggles with efficiency under rapid
peer expansion. To address PBFT's shortcomings, concurrent PBFT (C-PBFT) has been
developed to enhance consensus efficiency, accommodating rapid expansions with low
transaction latency and high throughput. However, current research often overlooks the
selection of highly reputable primary peers within concurrent consensus clusters. To tackle
this, a consensus algorithm incorporating a reputation assessment, named C-PBFT, has been
designed to boost blockchain's efficacy in this integration. Key contributions of this work
include:
- Developing a framework that merges supply chain and blockchain for efficient
management and data transparency.
- Categorizing supply chain peers into clusters based on transaction history analysis.
- Employing reputation assessment methods like the Simple Additive Weighting.
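The paper names Simple Additive Weighting (SAW) but does not spell out the scoring step. As a minimal sketch in Go (the language of the evaluation environment), a peer's reputation can be taken as a weighted sum of normalized benefit criteria; the criteria, weights, and normalization below are illustrative assumptions, not the paper's actual scheme.

```go
package main

import "fmt"

// sawScore computes a Simple Additive Weighting (SAW) score: each benefit
// criterion is normalized against the best value observed for it, and the
// normalized values are combined with the given weights.
func sawScore(values, best, weights []float64) float64 {
	score := 0.0
	for i := range values {
		score += weights[i] * (values[i] / best[i])
	}
	return score
}

func main() {
	// Hypothetical criteria per peer: success rate, uptime, transaction volume.
	weights := []float64{0.5, 0.3, 0.2}
	best := []float64{1.0, 1.0, 500.0}
	peerA := []float64{0.98, 0.95, 400.0}
	peerB := []float64{0.80, 0.99, 500.0}
	fmt.Printf("peerA=%.3f peerB=%.3f\n",
		sawScore(peerA, best, weights), sawScore(peerB, best, weights))
}
```

Under these made-up weights, peerA outscores peerB (about 0.935 vs 0.897) and would be the preferred primary-peer candidate.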
**2. Literature Review**
With asymmetric and opaque information and complex administration, the e-commerce supply
chain is a vast network made up of suppliers, subcontractors, factories, warehouses,
transporters, customers, agents, after-sales services, and so forth. The implementation of
blockchain technology in the supply chain results in transparent information, easier
management tasks, and reliable transactions [1]. The integration of supply chain and blockchain
effectively resolves the information island phenomenon and breaks down the barriers between
manufacturing, sales, logistics, and supervision [1].
This study investigates the challenges associated with tracking the supply chain of cannabis
and its significance, specifically in terms of verifying the origin to ensure the product's
authenticity. The proposal suggests implementing a blockchain strategy using Polygon
technology for the cannabis supply chain. This plan would provide enhanced data security,
immutability, and decentralized control over cannabis extract goods [2, 4].
Research has already been done on the food supply chain with blockchain technology based
on the Internet of Things architecture with supply chain security in mind [5]. By utilizing
blockchain technology, companies were able to address the problem of drug safety and
establish medical traceability [6]. Some researchers propose a blockchain-based model for the
automotive sector to provide on-demand supply chain services [7]. They emphasized that, in
order to reduce transaction fraud, supply chain peers' trustworthiness can be enhanced using
blockchain technology. However, instead of taking into account the various
supply chains involved in e-commerce, these studies merely apply the blockchain to one [8,
9]. Because different businesses offer different products on an e-commerce platform, the
consensus procedures operate poorly in a contemporaneous environment [10,11].
To improve Byzantine fault tolerance in cloud computing and reduce delay, a novel method was
proposed to address the inefficiencies of the existing consensus mechanism [12]. One of the most
sophisticated consensus algorithms, SBFT, was demonstrated by Gueta et al. [13]. Its latency
is two-thirds that of PBFT and its throughput is approximately double. To improve
throughput and decrease transaction latency, a reputation system that runs on the blockchain
and is based on the Proof-of-Stake consensus mechanism was introduced. When compared to
existing systems, this one provides better privacy guarantees [14]. Using the distributed
storage mechanism of blockchain technology, a reliable platform was developed that lowers
management costs and enhances data transfer security [15].
Nevertheless, while choosing the principal peer, the current PBFT program does not account
for reputation evaluation. Additionally, Sulin and Yongqing looked into how banks make
decisions about credit risk [16]. A new credit model known as a negative rating model was
suggested by Luo, Jiang, and Zhao [17][21]. Additionally, the creditworthiness of online
retailers is assessed using artificial immune technology (negative survey) for the first time. In
order to guarantee efficiency and security, Huang et al. introduce a blockchain system with a
reputation-based consensus mechanism [18].
This article develops a reputation assessment approach based on past transaction records as
the foundation for primary peers, aiming to address the aforementioned issues. Because the
primary peers are reliable, there is a significantly lower chance that they will be Byzantine
peers, which enhances the stability of the consensus mechanism.
**3. Proposed Algorithm**
Scalability remains the primary obstacle to the widespread adoption of blockchain technology,
as noted in the literature. The consensus protocol plays a crucial role in how blockchains perform,
with the number of network nodes inversely affecting transaction synchronization speeds.
Sharding, originally a database technique that distributes data across several servers to boost
search speeds, can enhance consensus by dividing transactions among different groups and
merging their outcomes. This method integrates sharding into the consensus process by
organizing verifier nodes into functional groups that each handle consensus tasks separately.
The end result is a blockchain where smaller group outputs are consolidated into the final
block. This sharding-enhanced hierarchical BFT_IRS model not only scales and streamlines
the blockchain network but also maintains its security and fault tolerance. By fine-tuning the
number of nodes involved in consensus, transaction speeds can be increased without
sacrificing fault tolerance. Moreover, this proposed Hierarchical BFT_IRS framework offers
higher throughput than existing solutions.
Proposed Architecture
The design of the blockchain network in this proposed method is depicted in Figure 1. It
consists of verifier nodes organized into two distinct layers, namely the Local Consensus Group
(LCG) and the Global Consensus Group (GCG). Within each consensus group, one verifier node
is designated as the Primary node, with the remainder serving as backup nodes. These backup
nodes have the responsibility of validating the consensus outcomes and logging the data. The
BFT_IRS algorithm is implemented at the LCG level, where it processes and consolidates
verified transactions into mini blocks. These mini blocks are then collected by the GCG from
all the LCGs to form a large block, which is subsequently integrated into the blockchain
network.
Fig. 1. Sharding-based Layered Blockchain Architecture
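As a rough illustration of the two-layer organization in Figure 1, the Go sketch below partitions nodes into LCGs of a fixed size and promotes one member of each group into the GCG. Taking the first member as primary is a simplified stand-in for the paper's Intuitive Random Selection; the group size and the `Node` type are assumptions for the sketch.

```go
package main

import "fmt"

// Node stands in for a network participant; only an ID is needed here.
type Node struct{ ID int }

// shard partitions nodes into Local Consensus Groups (LCGs) of at most
// lcgSize members and promotes the first member of each group into the
// Global Consensus Group (GCG). Taking the first member is a placeholder
// for the paper's Intuitive Random Selection of a primary node.
func shard(nodes []Node, lcgSize int) (lcgs [][]Node, gcg []Node) {
	for i := 0; i < len(nodes); i += lcgSize {
		end := i + lcgSize
		if end > len(nodes) {
			end = len(nodes)
		}
		group := nodes[i:end]
		lcgs = append(lcgs, group)
		gcg = append(gcg, group[0])
	}
	return lcgs, gcg
}

func main() {
	nodes := make([]Node, 100)
	for i := range nodes {
		nodes[i] = Node{ID: i}
	}
	lcgs, gcg := shard(nodes, 20)
	fmt.Printf("LCGs=%d GCG members=%d\n", len(lcgs), len(gcg))
}
```

With 100 nodes and an LCG size of 20, this yields 5 LCGs and a 5-member GCG; in the paper's architecture the GCG is formed by election among LCG verifiers rather than by position.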
Consensus in the Local Consensus Group
The process of reaching consensus within the Local Consensus Group (LCG) begins when a
transaction is sent from a client to a backup node, which then assigns it to an LCG. The nodes
within the LCG use the IRS algorithm to determine which nodes will participate in the
consensus. Both primary and backup nodes carry out the BFTIRS consensus procedure,
culminating in the formation of a new mini-block. These mini-blocks are then relayed to the
Global Consensus Group (GCG) to assemble the large block that gets added to the blockchain
network. The following steps detail the consensus execution within an LCG:
i. The IRS algorithm is run to select verifier nodes. One node is designated as the primary
node, while others serve as backup nodes.
ii. The client submits a transaction REQUEST message to a backup node.
iii. This backup node checks the client's signature, assigns a transaction number, and sends
out a PRE_PREPARE message throughout the LCG.
iv. The primary node within the LCG validates the signatures of both the client and the
backup node, along with the transaction number. It also scans for any conflicting transactions
in its local database. If no conflicts are detected, the primary node issues a PREPARE message.
v. The backup nodes verify the PREPARE message. Upon receiving 2f identical PREPARE
messages, they broadcast a COMMIT message.
vi. A backup node, upon collecting 2f+1 identical COMMIT messages, processes the client
transaction and dispatches a REPLY message.
vii. The primary node, after receiving f+1 identical REPLY messages, confirms that
consensus has been reached for the transaction, allowing it to be incorporated into a mini-block.
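Steps v–vii use the standard BFT quorum sizes for a group of n = 3f + 1 nodes. The following minimal Go sketch makes those thresholds explicit; the LCG size of 20 matches the evaluation setup, and everything else is generic BFT arithmetic rather than code from the paper.

```go
package main

import "fmt"

// Quorum thresholds for the LCG message flow above, assuming the usual
// BFT sizing n = 3f + 1 (the group tolerates f Byzantine nodes).
func faultBound(n int) int    { return (n - 1) / 3 }
func prepareQuorum(n int) int { return 2 * faultBound(n) }   // step v: 2f PREPAREs
func commitQuorum(n int) int  { return 2*faultBound(n) + 1 } // step vi: 2f+1 COMMITs
func replyQuorum(n int) int   { return faultBound(n) + 1 }   // step vii: f+1 REPLYs

func main() {
	n := 20 // LCG size used in the paper's evaluation
	fmt.Printf("n=%d f=%d prepare=%d commit=%d reply=%d\n",
		n, faultBound(n), prepareQuorum(n), commitQuorum(n), replyQuorum(n))
}
```

For n = 20 this gives f = 6, so a backup node needs 2f = 12 matching PREPARE messages before broadcasting COMMIT, 2f + 1 = 13 matching COMMITs before replying, and the primary needs f + 1 = 7 matching REPLYs to confirm consensus.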
Consensus in the Global Consensus Group
The Global Consensus Group (GCG) implements the IRS algorithm to select its primary and
backup nodes. The primary node in the GCG is responsible for verifying all mini blocks
created by various Local Consensus Groups (LCGs) and ensuring their timestamps are correct.
It checks the integrity of these mini blocks by verifying signatures and confirming the correct
order of transactions. Before the mini blocks can be consolidated into a large block, several
conditions must be met by the GCG nodes: They check for any pending transactions that have
been verified but not yet included in a mini block. They confirm the authenticity of the
signatures from the primary nodes across all LCGs. They verify that the previous hash value
of the current large block is accurate. They ensure the correct order of transactions and resolve
any conflicts. As outlined in Figure 2, the consensus process within the GCG involves
verifying the signatures of both primary and backup nodes. Once these signatures are
confirmed, the verifier nodes within the GCG issue a PREPARE message to all LCGs. The
primary node in each LCG must receive 2f+1 identical PREPARE messages from the GCG
before sending its mini block back to the GCG. The primary node of the GCG then ensures
that all mini blocks sharing the same timestamp are collected. Upon successful aggregation,
the primary node of the GCG dispatches a COMMIT message to its backup nodes, leading to
the final packaging of the large block.
Figure 2: Consensus at Global Consensus Group
The backup nodes in the Global Consensus Group (GCG) play a crucial role in verifying the
COMMIT message and all received mini blocks. Should they discover that a mini block is
missing, they request the respective primary node in the Local Consensus Group (LCG) to
resend it. After confirming that the transaction order is correct, they send out a REPLY
message and proceed to package the large block. The integrity and accuracy of the newly
created large block are then verified by the backup nodes. The addition of the large block to
the existing ledger occurs only after receiving 2f+1 identical REPLY messages, ensuring that
a consensus has been achieved and the transactions are accurately recorded and synchronized
across the network.
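One way to picture this aggregation step is the sketch below: the GCG primary collects the mini blocks that share a timestamp, orders them deterministically by LCG ID, and chains them to the previous large block's hash. The `MiniBlock` fields and the hashing layout are illustrative assumptions, not the paper's actual data structures.

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"sort"
)

// MiniBlock is an illustrative stand-in for an LCG's output; the paper does
// not specify the structure at this level of detail.
type MiniBlock struct {
	LCGID     int
	Timestamp int64
	TxDigest  string
}

// assembleLargeBlock collects the mini blocks sharing the given timestamp,
// orders them deterministically by LCG ID, and chains the result to the
// previous large-block hash, yielding the new block's hash.
func assembleLargeBlock(prevHash [32]byte, ts int64, minis []MiniBlock) [32]byte {
	var selected []MiniBlock
	for _, m := range minis {
		if m.Timestamp == ts {
			selected = append(selected, m)
		}
	}
	sort.Slice(selected, func(i, j int) bool { return selected[i].LCGID < selected[j].LCGID })
	h := sha256.New()
	h.Write(prevHash[:])
	for _, m := range selected {
		fmt.Fprintf(h, "%d|%d|%s;", m.LCGID, m.Timestamp, m.TxDigest)
	}
	var out [32]byte
	copy(out[:], h.Sum(nil))
	return out
}

func main() {
	var genesis [32]byte
	minis := []MiniBlock{{2, 100, "b"}, {1, 100, "a"}, {3, 99, "stale"}}
	fmt.Printf("%x\n", assembleLargeBlock(genesis, 100, minis))
}
```

Because the selected mini blocks are sorted before hashing, every honest GCG node derives the same large-block hash regardless of the order in which mini blocks arrive.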
**4. Experimental Results**
The evaluation is conducted in a Golang command line-based development environment,
utilizing the Go programming language. Unique identities are assigned to participants for use
during the consensus procedure. Once identities are set, participants are organized based on
their geographical proximity. Local Consensus Groups (LCGs) are established by grouping
nodes within one-hop communication range. Using selection and election algorithms, various
node roles such as candidates, verifiers, and normal nodes are designated. The verifiers within
the LCGs select nodes to form the Global Consensus Group (GCG), ensuring equal
opportunities for all participants to be included in the GCG.
Fig. 3: Analysis of Throughput vs Verifying Nodes
The configuration varies with 4 to 12 LCGs, and each LCG consists of 20 to 50 nodes, while
the GCG consistently comprises 20 nodes. Each consensus group has exactly 5 verifier
nodes. Performance metrics such as query and confirmation latency, throughput, and block
creation time are rigorously measured. The size of each large block is standardized at 1 MB.
The simulation results, depicted in Figures 3 and 4, show outcomes for configurations with
50 nodes per LCG, where the number of LCGs ranges from 4 to 12. The observed data
indicates that query latency ranges from 2.25 to 3.35 seconds, confirmation latency from 6 to
9 seconds, and block creation time from 6 to 8 seconds. Through these configurations, the
Hierarchical BFTIRS technique achieves a throughput of up to 7500 transactions per second
(TPS).
Figure 4: Query and Confirmation Latency Analysis
TABLE I. PERFORMANCE COMPARISON OF THE PROPOSED HIERARCHICAL BFTIRS WITH EXISTING APPROACHES
|Parameter|Elastico|OmniLedger|Monoxide|PBFT|BFT IRS (Proposed)|Sharding + BFTIRS|
|---|---|---|---|---|---|---|
|Miner Selection / Consensus|PoW + PBFT|Atomix + PBFT|PoW + Chu-ko-nu|State Machine Based|Intuitive Random Selection|Intuitive Random Selection|
|Byzantine Fault Tolerance|50%|50%|50%|33%|33%|33%|
|Node Management|Public|Public|Public|Permissioned|Permissioned|Permissioned|
|Block Creation Time (s)|61-76|51-61|36-46|10-16|5-6|6-8|
|Throughput (TPS)|250-2500|300-2750|350-2900|600-4500|800-6000|1000-7500|
|Average Confirmation Latency (s)|15-30|18-25|20-35|11-15|7-8|8-10|
The proposed hierarchical BFTIRS algorithm is evaluated against other existing sharding-based approaches, with the comparative results detailed in Table I. The algorithm demonstrates
superior performance, achieving higher throughput and reduced latency compared to current
solutions. Additionally, it facilitates quicker block creation times and accommodates a larger
number of nodes without compromising transaction synchronization speeds. The hierarchical
structure of the algorithm allows for dynamic adjustments in the number of nodes within the
shards and the total number of groups, based on the overall network node count. This flexibility
helps in optimizing the performance of the consensus process while minimizing delays.
**5. Conclusion**
In this paper, a hierarchical consensus protocol is presented, which integrates the BFTIRS
algorithm with the sharding technique. By employing a stratified architecture, this
methodology successfully satisfies the requirement for scalability. The scheme is specifically
engineered to allow for the flexible adjustment of the quantity of Local Consensus Groups
(LCGs), which effectively regulates complexity and reduces the number of verifier nodes. By
increasing throughput without causing significant delays, this functionality optimizes the
performance of blockchain systems. Furthermore, a security analysis verifies that despite the
existence of malicious nodes, the system continues to function normally, thereby ensuring
robust security.
**References**
1. Xu Zhang, Wenpeng Lu, Fangfang Li, Xueping Peng, and Ruoyu Zhang. 2019. Deep feature
fusion model for sentence semantic matching. Comput. Mater. Contin. 61, 2 (2019), 601-616.
2. Piwat Nowvaratkoolchai, Natcha Thawesaengskulthai, Wattana Viriyasitavat and Pramoch
Rangsunvigit, “Blockchain-based Cannabis Traceability in Supply Chain Management”
International Journal of Advanced Computer Science and Applications(IJACSA), 15(2), 2024.
http://dx.doi.org/10.14569/IJACSA.2024.0150210
3. Kai Zheng, Ying Liu, Chuanyu Dai, Yanli Duan, and Xin Huang. 2018. Model checking PBFT
consensus mechanism in healthcare blockchain network. In Proceedings of the 9th
International Conference on Information Technology in Medicine and Education (ITME'18).
IEEE, 877-881
4. Asmaa Aldoubaee, Noor Hafizah Hassan and Fiza Abdul Rahim, “A Systematic Review on
Blockchain Scalability” International Journal of Advanced Computer Science and
Applications(IJACSA), 14(9), 2023. http://dx.doi.org/10.14569/IJACSA.2023.0140981
5. Saikat Mondal, Kanishka Wijewardena, Saranraj Karuppuswami, Nitya Kriti, Deepak Kumar,
and Premjeet Chahal. 2019. Blockchain inspired RFID-based information architecture for food
supply chain. IEEE Internet Things J. 6, 3 (2019), 5803-5813.
6. Randhir Kumar and Rakesh Tripathi. 2019. Traceability of counterfeit medicine supply chain
through Blockchain.In Proceedings of the 11th International Conference on Communication
Systems and Networks (COMSNETS'19). IEEE,568-570.
7. Pradip Kumar Sharma, Neeraj Kumar, and Jong Hyuk Park. 2018. Blockchain-based
distributed framework for auto- motive industry in a smart city. IEEE Trans. Industr. Info. 15,
7 (2018), 4197-4205
8. Mu-Chen Chen, Yu-Hsiang Hsiao, and Hsi-Yuan Huang. 2016. Semiconductor supply chain
planning with decisions of decoupling point and VMI scenario. IEEE Trans. Syst. Man
Cybernet.: Syst. 47, 5 (2016), 856-868.
9. Ke Huang, Xiaosong Zhang, Yi Mu, Xiaofen Wang, Guomin Yang, Xiaojiang Du, Fatemeh
Rezaeibagha, Qi Xia, and Mohsen Guizani. 2019. Building redactable consortium blockchain
for industrial Internet-of-Things. IEEE Trans. In- dustr. Info. 15, 6 (2019), 3670-3679.
10. Daniel Miehle, Dominic Henze, Andreas Seitz, Andre Luckow, and Bernd Bruegge. 2019.
PartChain: A decentralized traceability application for multi-tier supply chain networks in the
automotive industry. In Proceedings of the IEEE International Conference on Decentralized
Applications and Infrastructures (DAPPCON'19). IEEE, 140-145
11. Domantas Pelaitis and Georgios Spathoulas. 2018. Developing a universal, decentralized and
immutable Erasmus credit transfer system on blockchain. In Proceedings of the Innovations in
Intelligent Systems and Applications (IN· ISTA'18). IEEE, 1-6.
12. H. Baghban, M. Moradi, C. Hsu, J. Chou, and Y. Chung. 2016. Byzantine fault tolerant
optimization in federated cloud computing. In Proceedings of the IEEE International
Conference on Computer and Information Technology (CIT'16).658-661.
DOI:https://doi.org/10.1109/CIT.2016.114
13. G. Golan Gueta, I. Abraham, S. Grossman, D. Malkhi, B. Pinkas, M. Reiter, D. Seredinschi,
O. Tamir, and A. Tomescu. 2019. SBFT: A scalable and decentralized trust infrastructure. In
Proceedings of the 49th Annual IEEE/IFIP International Conference on Dependable Systems
and Networks (DSN'19). 568-580. DOI:https://doi.org/10.1109/DSN.2019.00063
14. Doaa Mohey El-Din M. Hussein, Mohamed Hamed N. Taha and Nour Eldeen M. Khalifa, "A
Blockchain Technology Evolution Between Business Process Management (BPM) and
Internet-of-Things (IoT)" International Journal of Advanced Computer Science and
Applications (IJACSA), 9(8), 2018. http://dx.doi.org/10.14569/IJACSA.2018.090856
15. Wei Liang, Mingdong Tang, Jing Long, Xin Peng, Jianlong Xu, and Kuan-Ching Li. 2019. A
secure fabric blockchain-based data transmission technique for industrial Internet-of-Things.
IEEE Trans. Industr. Info. 15, 6 (2019), 3582-3592.
16. Pang Sulin, Liu Yongqing, Wang Yanming, and Yao Hongzhu. 2001. The credit-risk decision
mechanism on fixed loan interest rate with imperfect information. J. Syst. Eng. Electron. 12, 3
(2001), 20-24.
17. Wenjian Luo, Hao Jiang, and Dongdong Zhao. 2017. Rating credits of online merchants using
negative ranks. IEEE Trans. Emerg. Topics Comput. Intell. 1, 5 (2017), 354-365.
18. Junqin Huang, Linghe Kong, Guihai Chen, Min-You Wu, Xue Liu, and Peng Zeng. 2019.
Towards secure industrial IoT: Blockchain system with credit-based consensus mechanism.
IEEE Trans. Industr. Info. 15, 6 (2019), 3680-3689.
19. Sobia Yaqoob, Muhammad Murad Khan, Ramzan Talib, Arslan Dawood Butt, Sohaib Saleem,
Fatima Arif and Amna Nadeem, "Use of Blockchain in Healthcare: A Systematic Literature
Review" International Journal of Advanced Computer Science and Applications (IJACSA),
10(5), 2019. http://dx.doi.org/10.14569/IJACSA.2019.0100581
20. Prajakta U. Waghe, A Suresh Kumar, Arun B Prasad, Vuda Sreenivasa Rao, E. Thenmozhi,
Sanjiv Rao Godla and Yousef A. Baker El-Ebiary, "Blockchain-Enabled Cybersecurity
Framework for Safeguarding Patient Data in Medical Informatics" International Journal of
Advanced Computer Science and Applications (IJACSA), 15(3), 2024.
http://dx.doi.org/10.14569/IJACSA.2024.0150381
21. Selvi, S., Revathy, G., & Brindha, P. (2024). Blockchain-Enabled Federated Learning for
Secured Edge Data Communication Through a Decentralized Software-Defined Network. In
Achieving Secure and Transparent Supply Chains With Blockchain Technology (pp. 128-141).
IGI Global.
|
{
"disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.62441/nano-ntp.v20is14.27?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.62441/nano-ntp.v20is14.27, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "CCBY",
"status": "HYBRID",
"url": "https://nano-ntp.com/index.php/nano/article/download/2788/2089"
}
| 2,024
|
[
"JournalArticle"
] | true
| 2024-11-04T00:00:00
|
[
{
"paperId": "6291cf5f4d01de67deb1c4e39e10ffbcb664cdee",
"title": "Blockchain-Based Distributed Framework for Automotive Industry in a Smart City"
},
{
"paperId": "5a8419bdb2db83a589d71a4a796c8a682a1cd815",
"title": "Traceability of counterfeit medicine supply chain through Blockchain"
},
{
"paperId": "848b8513db57488b94c6064c5115a9a19ae331fa",
"title": "Semiconductor Supply Chain Planning With Decisions of Decoupling Point and VMI Scenario"
},
{
"paperId": null,
"title": "Building redactable consortium blockchain"
},
{
"paperId": null,
"title": "vi. A backup node, upon collecting 2f+1 identical COMMIT messages, processes the client transaction and dispatches a REPLY message"
}
] | 7,227
|
en
|
[
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Engineering",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/ffc92b58d15c16b7d27a4951309d4309aefec6cd
|
[] | 0.843883
|
A Provable Secure Cross-Verification Scheme for IoT Using Public Cloud Computing
|
ffc92b58d15c16b7d27a4951309d4309aefec6cd
|
Security and Communication Networks
|
[
{
"authorId": "2158952000",
"name": "Naveed Khan"
},
{
"authorId": "47540115",
"name": "Jian-biao Zhang"
},
{
"authorId": "2131903191",
"name": "Jehad Ali"
},
{
"authorId": "34811818",
"name": "M. S. Pathan"
},
{
"authorId": "1952761",
"name": "Shehzad Ashraf Chaudhry"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"Secur Commun Netw"
],
"alternate_urls": [
"http://onlinelibrary.wiley.com/journal/10.1002/(ISSN)1939-0122",
"http://www.interscience.wiley.com/journal/security"
],
"id": "02a4454a-84c8-471c-9b40-cc045d4f3223",
"issn": "1939-0122",
"name": "Security and Communication Networks",
"type": "journal",
"url": "https://www.hindawi.com/journals/scn/"
}
|
Public cloud computing has become increasingly popular due to the rapid advancements in communication and networking technology. As a result, it is widely used by businesses, corporations, and other organizations to boost the productivity. However, the result generated by millions of network-enabled IoT devices and kept on the public cloud server, as well as the latency in response and safe transmission, are important issues that IoT faces when using the public cloud computing. These concerns and obstacles can only be overcome by designing a robust mutual authentication and secure cross-verification mechanism. Therefore, we have attempted to design a cryptographic protocol based on a simple hash function, xor operations, and the exchange of random numbers. The security of the proposed protocol has formally been verified using the ROR model, ProVerif2.03, and informally using realistic discussion. In contrast, the performance metrics have been analyzed by looking into the security feature, communication, and computation costs. To sum it up, we have compared our proposed security mechanism with the state-of-the-art protocols, and we recommend it to be effectively implemented in the public cloud computing environment.
|
Hindawi
Security and Communication Networks
Volume 2022, Article ID 7836461, 11 pages
[https://doi.org/10.1155/2022/7836461](https://doi.org/10.1155/2022/7836461)
# Research Article: A Provable Secure Cross-Verification Scheme for IoT Using Public Cloud Computing
## Naveed Khan,[1] Jianbiao Zhang,[1] Jehad Ali,[2] Muhammad Salman Pathan,[3] and Shehzad Ashraf Chaudhry[4]
_1Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China_
_2Department of Computer Science and Engineering, Sejong University, Seoul 05006, Republic of Korea_
_3Department of Computer Science, Maynooth University, Maynooth, Ireland_
_4Department of Computer Engineering, Faculty of Engineering and Architecture, Nisantasi University, Istanbul 34398, Turkey_
[Correspondence should be addressed to Jehad Ali; jehadali@ajou.ac.kr](mailto:jehadali@ajou.ac.kr)
Received 6 June 2022; Revised 24 July 2022; Accepted 30 July 2022; Published 23 November 2022
Academic Editor: Mohammad Ayoub Khan
[Copyright © 2022 Naveed Khan et al. This is an open access article distributed under the Creative Commons Attribution License,](https://creativecommons.org/licenses/by/4.0/)
which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Public cloud computing has become increasingly popular due to the rapid advancements in communication and networking technology. As a result, it is widely used by businesses, corporations, and other organizations to boost productivity. However, the data generated by millions of network-enabled IoT devices and kept on the public cloud server, as well as the latency in response and safe transmission, are important issues that IoT faces when using public cloud computing. These concerns and obstacles can only be overcome by designing a robust mutual authentication and secure cross-verification mechanism. Therefore, we have attempted to design a cryptographic protocol based on a simple hash function, XOR operations, and the exchange of random numbers. The security of the proposed protocol has been formally verified using the ROR model and ProVerif 2.03, and informally using realistic discussion. In contrast, the performance metrics have been analyzed by looking into the security features, communication, and computation costs. To sum up, we have compared our proposed security mechanism with the state-of-the-art protocols, and we recommend it to be effectively implemented in the public cloud computing environment.
## 1. Introduction

Nowadays, cloud computing offers different services to Internet-enabled (IoT) devices to reduce cost and provide efficiency. These cloud servers are accessible via an Internet connection at any time and from any location. As the world moves toward globalization, the importance and uses of IoT-enabled devices increase daily. IoT devices are deployed and used in different applications and environments such as smart homes, smart cities, industries, the Internet of Drones (IoD), space, underwater, and many more. IoT devices generate massive amounts of data that can be stored in cloud servers. IoT is an emerging heterogeneous network industry, and 41 billion IoT devices will be connected to the Internet worldwide. These devices will generate 79 Zettabytes of data annually [1].
Different enterprises and individuals use three primary cloud deployment models: private, public, and hybrid. The public cloud is the most commonly used because it is cheaper than the private and hybrid deployment models. Cloud servers provide platform as a service, storage as a service, software as a service, and infrastructure as a service to different enterprises and users according to their needs. The public cloud delivers services over a public network, which raises security concerns when services are delivered over a public network channel. Thus, secure transmission plays a vital role in outsourcing data by corporations, businesses, government entities, and individuals. However, the recent increase of cyberattacks on different networks and cloud servers, and the resulting privacy leakage, discourage those enterprises and individuals from using cloud services. Therefore, to tackle
these issues and challenges for such massive use of the cloud, it is imperative to authenticate the communicating entities to protect the outsourced data from cybercriminals. However, authentication in IoT-enabled devices is not easy because of their limited resources and energy. Therefore, the authentication process should be efficient and reliable for network- and energy-constrained devices.
_1.1. Motivation and Contribution. Recent developments in_ high-speed Internet, such as 5G and 6G architectures, increase the use of IoT-enabled devices, which generate gigantic amounts of data annually. However, storing, analyzing, and processing vast amounts of data locally is complex. Therefore, cloud computing offers different services to consumers over the Internet to store and process data on servers with minimal cost. However, security is a big concern while transmitting data to the cloud servers over insecure channels because of cyberattacks. Thus, authenticating the communicating party is very important to transmit data securely. According to our analysis, the scheme [2] has vulnerabilities regarding anonymity, untraceability, man-in-the-middle attacks, server impersonation attacks, and secret key disclosure attacks. This motivates us to cryptanalyze the scheme [2] and propose a secure and efficient alternative. Our contribution is to solve the security flaws in the scheme [2] and propose a more efficient and secure protocol. Further contributions are explained in detail below:
(i) The proposed scheme is efficient and based on symmetric-key cryptography to resist all known potential attacks.

(ii) The security analysis of the proposed scheme has been verified using (A) the ROR model and (B) ProVerif for key secrecy, confidentiality, and reachability.

(iii) The symmetric keys have been exchanged through the Diffie–Hellman method to confirm that no one can forge them.

(iv) The performance analysis of the proposed security mechanism has been made, bearing in mind (A) computation overheads and (B) communication overheads.

(v) Upon comparing the proposed scenario with the existing schemes, the proposed scheme is lightweight in terms of communication, computation costs, and efficiency.
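Contribution (iii) relies on a Diffie–Hellman exchange to establish the symmetric keys. The sketch below runs a textbook Diffie–Hellman exchange in Python; the modulus, generator, and variable names are illustrative assumptions of ours (a toy Mersenne-prime group), not the parameters used by the scheme, which would use a standardized 2048-bit or larger group in practice.

```python
import secrets

# Toy Diffie-Hellman exchange. The modulus below is a small Mersenne
# prime chosen only for illustration; real deployments use standardized
# groups of 2048 bits or more.
p = 2**127 - 1
g = 3

a = secrets.randbelow(p - 2) + 1        # one party's ephemeral secret
b = secrets.randbelow(p - 2) + 1        # the other party's ephemeral secret
A, B = pow(g, a, p), pow(g, b, p)       # public values sent over the channel

shared_1 = pow(B, a, p)                 # computed by the first party
shared_2 = pow(A, b, p)                 # computed by the second party
assert shared_1 == shared_2             # both derive the same symmetric key
```

An eavesdropper seeing only `A` and `B` cannot recover the shared value without solving a discrete logarithm, which is what prevents the keys from being forged in transit.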
_1.2. System Model. Our system model consists of four entities,_ as shown in Figure 1: IoT devices, users, a registration server, and the public cloud server. The IoT devices generate data and send it to the public cloud servers over the Internet. The users and IoT devices first need to register with the registration server. Further details are given in the proposed scheme section.
_1.3. Threat Model. We used the well-known DY model [3] and_ CK model [4] as the threat and adversary models in our article, where we consider the actions and assume the power of A as follows:

(i) The A can intercept the messages exchanged among the participants and replay, listen to, and forge messages.

(ii) The A can be an insider or outsider dishonest participant.

(iii) The A can extract secret values from IoT devices and perform power analysis [5].

(iv) The A cannot extract secret keys from data stored in IoT devices, users, and servers.

(v) The A can intercept messages and try to modify, delete, insert, and intentionally tamper with them.
_1.4. Paper Organization. The rest of this article is laid out as_ follows: The literature review is presented in detail in Section 2, and the proposed scenario is presented in Section 3. Then, in Sections 4 and 5, we examine the proposed framework's security, and in Section 6, we conduct a performance analysis. Finally, Section 7 brings the paper to a conclusion.
## 2. Literature Review

The integration of IoT-enabled devices in public cloud environments makes communication vulnerable to cybercriminals. Therefore, the biggest challenge is to communicate securely over open network channels. Researchers have proposed authentication schemes to communicate securely with IoT devices in the cloud server environment. However, these schemes have security vulnerabilities and high communication and computation costs. These high computation and communication costs and vulnerable schemes are discussed below.
The authors [6] proposed an authentication scheme for heterogeneous devices in wireless sensor networks. However, their scheme suffers from known session key and impersonation attacks and cannot provide perfect forward secrecy. Another scheme is proposed in [7] for wireless sensor networks. Nevertheless, their scheme is also vulnerable regarding known session keys and perfect forward secrecy. Finally, an ECC-based protocol is proposed in [8]. The protocol fulfils most security requirements except for replay attacks and perfect forward secrecy. On the other hand, the protocol proposed in [9] has security vulnerabilities such as known session keys, an insecure password change phase, and impersonation attacks.

Moreover, the authors [10] proposed a secure scheme in a multi-server environment based on the smartcard. However, their scheme has security flaws such as session key disclosure, spoofing, anonymity, traceability, and impersonation attacks. The author [11] proposed an authentication protocol and identified the security flaws in [12]. The protocol [13] proposed a scheme for smart home environments. However, their scheme suffers from offline password guessing attacks and insider attacks. Another scheme was also proposed for smart home environments in [14]. However, their scheme also has security vulnerabilities such as anonymity and cannot provide untraceability. Nevertheless, the protocols proposed in [15, 16] have
Figure 1: System model.
significant security flaws: mutual authentication, replay attacks, known session keys, anonymity, untraceability, and impersonation attacks.

The protocol proposed by [17] suffers from secret key guessing attacks. Therefore, the user and server can easily be compromised. The protocols [18, 19] suffer from session key attacks and secret key guessing attacks, and the server and user can be compromised. At the same time, the scheme [14] is also vulnerable to impersonation attacks. The protocol [20] is based on ECC, but the scheme has security vulnerabilities such as offline password guessing attacks, impersonation attacks, and anonymity issues. Finally, the scheme [21] has serious security vulnerabilities: it suffers from offline password guessing attacks, session key disclosure attacks, anonymity, perfect forward secrecy, impersonation attacks, desynchronization, and man-in-the-middle attacks. The author [22] proposed an ECC-based scheme for IoT devices in wireless sensor networks. According to the author [22], the protocol proposed in [23] is vulnerable to impersonation and password guessing attacks and unable to provide perfect forward secrecy.

Furthermore, the scheme [24] suffers from offline password guessing, impersonation, and perfect forward secrecy issues. The scheme proposed in [25] is based on ECC, but it suffers from impersonation attacks, offline password guessing, man-in-the-middle, and session key disclosure attacks. Finally, the authors proposed a scheme for a client-server environment in [26]. However, the scheme cannot resist impersonation, man-in-the-middle, password guessing, perfect forward secrecy, and insider attacks. Nevertheless, the scheme [27] suffers from offline password guessing attacks and anonymity issues, while the scheme [28] suffers from offline password guessing attacks. A multi-server cloud authentication scheme based on biometrics has been proposed in [29]. However, the scheme suffers from anonymity and man-in-the-middle attacks. The protocol in [30] is designed for a multi-server environment using biometrics. However, the scheme suffers from a known session temporary attack. Finally, a three-factor authentication scheme for a multi-server environment based on ECC is proposed in [31]. However, the scheme has significant security flaws such as impersonation, insider, and known session key temporary attacks and cannot provide perfect forward secrecy. The scheme [32] suffers from impersonation attacks and known session temporary attacks. The protocol in [33] suffers from DoS attacks and session key attacks, while the scheme [34] cannot resist offline password guessing attacks.

Moreover, the scheme [35] is proposed for IoT-enabled devices, but it suffers from insider attacks and cannot provide anonymity. Furthermore, the scheme [36] is vulnerable to impersonation and password guessing attacks.
An ECC-based authentication protocol was proposed in [37]. However, the scheme cannot resist impersonation and offline password guessing attacks. In contrast, the scheme [38] suffers from offline password guessing attacks and cannot provide anonymity. Furthermore, an ECC-based three-factor authentication scheme for a multi-server environment is proposed by [31]; it cannot resist impersonation attacks and is unable to provide perfect forward secrecy. Finally, the authentication schemes [39–41] are proposed for VANETs. However, these schemes have security vulnerabilities. For example, the scheme [39] is vulnerable to replay attacks, while the schemes [40, 41] have traceability issues. Finally, the author [42] proposed an anonymous authentication scheme for mobile devices in a public cloud server. The scheme [42] resolved all the security vulnerabilities of the scheme [43], and its communication and computation costs are also lower. In the end, we identify some flaws in the scheme [2]; these flaws are discussed in detail below:

(i) Anonymity and untraceability: In the protocol [2], the server identity is transmitted openly over an insecure network. Therefore, the A can easily intercept messages transmitted among users, the registration center, and the server. Thus, the protocol cannot fulfil the properties of anonymity and untraceability.

(ii) Man-in-the-Middle Attack: As the protocol does not provide anonymity and untraceability, the A can pretend to be a fake server and start communicating with peers. Thus, the A easily launches a man-in-the-middle attack.

(iii) Secret Key Disclosure Attack: The server's identity is known to the A. Therefore, the A can easily impersonate the server and fool the registration server. Once the A can impersonate the server, it easily obtains the registration server's secret key. Therefore, the scheme is vulnerable to secret key disclosure attacks.

(iv) Server Impersonation Attack: As we know, the A can easily obtain the server's identity, which is transmitted openly on an insecure channel. Therefore, the A can easily impersonate the server.
## 3. Proposed Protocol

Our proposed scheme is a symmetric-key authentication protocol for IoT devices in public cloud environments. Our protocol is described below:

_3.1. Deployment Phase. The registration server generates the_ secret key SKps and sends it to the public cloud server (PS). The public cloud server stores SKps. Furthermore, the registration server assigns unique identities to IoT-enabled devices: IDi ∈ {1, 2, 3, 4, . . ., n}. The registration server generates a secret key SKi for each IoT device and stores it in the device. Table 1 shows the notations of our proposed scheme and their descriptions.
Table 1: Notations and descriptions.

| Notation | Description |
| --- | --- |
| IDu | User identity |
| IDps | Public cloud server identity |
| PS | Public cloud server |
| MSKups | Master shared key between user and PS |
| SKps | Secret key of the public cloud server |
| SKu | Secret key of the user |
| \|\|, h(.) | Concatenation, hash function |
| Gen() | Generate |
| IDi | IoT device identity |
| RS | Registration server |
| ∯ | Fuzzy extractor |
| MSKips | Master shared key between IoT device and PS |
| SKi | Secret key of the IoT device |
| SK | Session key |
| ⊕, A | XOR operator, adversary |
| Rep() | Reproduce |
_3.2. User Registration Phase. The user generates a random_ number ru and selects an identity and password IDu, PWu. The user computes Gen(BIO) = (∯) and further computes PIDu = h(IDu||∯) and Pu = h(PWu||∯). The user sends (PIDu, Pu, ru) to the registration server. The registration server calculates MSKups = h(PIDu||SKps||ru) and U1 = h(ru||Pu) ⊕ MSKups. The registration server sends MSKups to the public cloud server while sending U1 to the user. The PS computes Nu = h(IDps||SKps) ⊕ MSKups and X = h(IDu||ru||MSKups). The cloud server stores (X, Nu) and sends X to the user. After receiving (U1, X), the user further calculates M = h(IDu||PWu||∯), U2 = EMSKups(U1), U3 = h(PIDu||Pu) ⊕ ru, and U4 = h(PIDu||Pu||ru). The user stores (U2, U3, U4, X).
_3.3. IoT Device Registration Phase. The IoT device selects a_ random number ri, calculates PIDi = h(IDi||ri), and sends (PIDi, ri) to the registration server. After receiving the credentials from the IoT device, the registration server calculates MSKips = h(PIDi||SKps||ri). The registration server sends (PIDi, ri) to the public cloud server, which stores (PIDi, ri). The registration server sends MSKips to the IoT device. The IoT device calculates D1 = h(IDi||SKi) ⊕ ri and D2 = MSKips ⊕ h(SKi||ri) and stores D1 and D2.
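The device never stores MSKips in the clear; it stores the masked values D1 and D2 and unmasks them again at login using its own secret SKi. A minimal sketch of this store-and-recover pattern, under the same SHA-256 and byte-encoding assumptions as before (the identities are hypothetical):

```python
import hashlib
import secrets

def h(*parts: bytes) -> bytes:
    return hashlib.sha256(b"||".join(parts)).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

ID_i = b"device-42"
SK_i = secrets.token_bytes(32)      # per-device secret installed by the RS
r_i = secrets.token_bytes(32)
SK_ps = secrets.token_bytes(32)     # public cloud server's secret

PID_i = h(ID_i, r_i)
MSK_ips = h(PID_i, SK_ps, r_i)      # master key shared with the cloud server

# values the device actually keeps in memory
D1 = xor(h(ID_i, SK_i), r_i)        # D1 = h(ID_i || SK_i) XOR r_i
D2 = xor(MSK_ips, h(SK_i, r_i))     # D2 = MSK_ips XOR h(SK_i || r_i)

# at login the device unmasks them again (step (iii) of Section 3.4)
r_i_rec = xor(D1, h(ID_i, SK_i))
assert r_i_rec == r_i
assert xor(D2, h(SK_i, r_i_rec)) == MSK_ips
```

An adversary who dumps D1 and D2 from a captured device still cannot recover MSKips without SKi, which is the basis of the device-capture argument in Section 5.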
_3.4. Login and Authentication Phase. This phase of the_ protocol is shown in Table 2 and completed in the following steps:

(i) The user enters the identity and password IDu, PWu and computes ∯ = Rep(BIO, []), PIDu = h(IDu||∯), Pu = h(PWu||∯), M = h(IDu||PWu||∯), U1 = DM(U2), ru = U3 ⊕ h(PIDu||Pu), and MSKups = U1 ⊕ h(ru||Pu), and checks U4 ? = h(PIDu||Pu||ru). The user selects a timestamp TLA1 and a random number r2 and further calculates S1 = (IDi||r2) ⊕ MSKups ⊕ TLA1, S2 = PIDu ⊕ h(MSKups||r2||TLA1), and S3 = h(PIDu||MSKups||r2||TLA1), and forwards Message1{S1, S2, S3, X, TLA1} to the PS.
(ii) The public cloud server checks TLA1 − T ≤ ΔT and computes MSKups = h(IDps||SKps) ⊕ Nu, (IDi||r2) = S1 ⊕ MSKups ⊕ TLA1, and PIDu = S2 ⊕ h(MSKups||r2||TLA1), and verifies S3 ? = h(PIDu||MSKups||r2||TLA1). The public cloud server selects a timestamp TLA2 and a random number r3. The PS further calculates MSKips = h(IDi||SKps), S4 = (PIDu||IDps||r2||r3) ⊕ h(IDi||MSKips||TLA2), and S5 = h(PIDu||IDps||MSKips||r2||r3||TLA2), and sends Message2{S4, S5, TLA2} to the IoT device over the open network channel.

(iii) The IoT device checks TLA2 − T ≤ ΔT and further calculates ri = D1 ⊕ h(IDi||SKi), MSKips = D2 ⊕ h(SKi||ri), and (PIDu||IDps||r2||r3) = S4 ⊕ h(IDi||MSKips||TLA2), and verifies S5 ? = h(PIDu||IDps||MSKips||r2||r3||TLA2). The IoT device selects a timestamp TLA3 and a random number r4. Now, the IoT device further calculates S6 = h(MSKips||PIDi||IDps||TLA3) ⊕ r4, SK = h(r2||r3||r4||PIDu||IDps||IDi), and S7 = h(IDi||r4||MSKips||SK||TLA3). The IoT device sends back Message3{S6, S7, TLA3} to the PS.

(iv) The PS first checks TLA3 − T ≤ ΔT, computes r4 = S6 ⊕ h(MSKips||PIDi||IDps||TLA3) and SK = h(r2||r3||r4||PIDu||IDps||IDi), and verifies S7 ? = h(IDi||r4||MSKips||SK||TLA3). Now, the PS selects a timestamp TLA4 and further calculates S8 = (IDps||r3||r4) ⊕ h(PIDu||MSKups||r2||TLA4), S9 = h(PIDu||IDps||r2||r3||SK||TLA4), and X^new = h(IDu||r3||MSKups), and sends Message4{S8, S9, TLA4} back to the user.

(v) The user verifies the timestamp TLA4 − T ≤ ΔT, computes (IDps||r3||r4) = S8 ⊕ h(PIDu||MSKups||r2||TLA4) and SK = h(r2||r3||r4||PIDu||IDps||IDi), and verifies S9 ? = h(PIDu||IDps||r2||r3||SK||TLA4). The user updates X = h(IDu||r3||MSKups).
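Each message in the phase above follows the same mask-and-tag pattern: a secret is hidden by XORing it with a hash that only the legitimate peer can recompute, and an accompanying hash serves as an integrity tag. The sketch below illustrates this for the S2/S3 part of Message1 only; for brevity it assumes the server has already recovered r2 from S1, and all encodings, lengths, and identities are our own illustrative choices.

```python
import hashlib
import os
import struct
import time

def h(*parts: bytes) -> bytes:
    return hashlib.sha256(b"||".join(parts)).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

MSK_ups = os.urandom(32)              # shared during registration
PID_u = h(b"alice", os.urandom(32))   # user's pseudo-identity
r2 = os.urandom(32)                   # assumed already recovered from S1
TLA1 = struct.pack(">q", int(time.time())).rjust(32, b"\0")  # 32-byte timestamp

# user side: mask the pseudo-identity and attach an integrity tag
S2 = xor(PID_u, h(MSK_ups, r2, TLA1))     # S2 = PID_u XOR h(MSK_ups||r2||TLA1)
S3 = h(PID_u, MSK_ups, r2, TLA1)          # S3 = h(PID_u||MSK_ups||r2||TLA1)

# cloud-server side: unmask PID_u and check the tag before proceeding
pid_rec = xor(S2, h(MSK_ups, r2, TLA1))
assert pid_rec == PID_u
assert h(pid_rec, MSK_ups, r2, TLA1) == S3
```

Because the mask depends on the fresh r2 and TLA1, the same PID_u produces a different S2 in every session, which is what underpins the untraceability claim later on.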
_3.5. Biometric and Password Change Phase_

(i) The user enters the identity IDu and old password PWu^P and imprints the old biometric BIO^P.

(ii) The user computes ∯* = Rep(BIO^P, []), PIDu* = h(IDu||∯*), Pu* = h(PWu^P||∯*), MSKups* = h(PIDu*||SKps||ru), U1* = DMSKups*(U2), U3* = h(PIDu*||Pu*) ⊕ ru, and U4* = h(PIDu*||Pu*||ru), and checks U4* ? = U4. If true, the user is allowed to input a new password and imprint a new biometric; otherwise, the connection is terminated.

(iii) The user inputs a new password PWu^N and imprints a new biometric BIO^N.

(iv) The user computes ∯^N = Rep(BIO^N, []), PIDuN = h(IDu||∯^N), PuN = h(PWu^N||∯^N), MSKupsN = h(PIDuN||SKps||ru), U2N = EMSKupsN(U1), U3N = h(PIDuN||PuN) ⊕ ru, and U4N = h(PIDuN||PuN||ru), and updates (U2N, U3N, U4N).
## 4. Formal Security Analysis

In this section of our research article, we investigate, analyze, discuss, and explain our proposed scheme against all potential attacks using ProVerif, the ROR model, and informal security discussions.

_4.1. ProVerif Code. ProVerif is a simulation toolkit that is_ used to simulate cryptographic algorithms. ProVerif checks key secrecy, reachability, and confidentiality [44]. Figure 2 shows the simulation result of our proposed scheme's code, and according to the ProVerif simulation result, our proposed scheme is secure.
_4.2. ROR Model. In this section, we evaluate the SK of our proposed_ scheme by using the ROR model [45]. Three participants are involved in our scheme: the user Pu^T1, the public cloud server Pps^T2, and the IoT-enabled device Pi^T3. We use the queries of the ROR model, namely Execute, CorruptSC, Reveal, Send, and Test.

**Theorem 1. The adversary A has an advantage in violating the SK of** _our scheme bounded by the inequality_

$$\mathrm{ADV}_A \le \frac{q_h^2}{|\mathrm{HASH}|} + 2C\,\frac{q_{send}^{\,s}}{2^{l_f}},$$

_where_ $q_h$ _denotes the number of hash queries,_ $q_{send}$ _the number of Send queries, and_ $C$, $s$, _and_ $l_f$ _are Zipf parameters [46]._
_Proof. A sequence of four games_ $\mathrm{Game}_g$: {g = 0, 1, 2, 3} is played by A, who has some probability of winning each game. These games are discussed below:

$\mathrm{Game}_{g0}$: In this game, A executes a real attack and tries to guess a bit in order to win the game:

$$\mathrm{ADV}_A = 2\,\mathrm{ADV}_{A,\mathrm{Game}_{g0}} - 1. \quad (1)$$

$\mathrm{Game}_{g1}$: A mounts an eavesdropping attack on the proposed scheme, intercepting all transmitted messages using the Execute query. A then performs Test and Reveal to check whether a message carries SK or random numbers. A would need the secret values SKu, SKps, SKi, PIDu, PIDi, and the random numbers to construct SK = h(r2||r3||r4||PIDu||IDps||IDi). Therefore, based on this, we obtain

$$\mathrm{ADV}_{A,\mathrm{Game}_{g1}} = \mathrm{ADV}_{A,\mathrm{Game}_{g0}}. \quad (2)$$

$\mathrm{Game}_{g2}$: In this game, A attacks our scheme actively/passively using the Send and Hash queries. A intercepts all exchanged messages, namely Message1{S1, S2, S3, TLA1}, Message2{S4, S5, TLA2}, Message3{S6, S7, TLA3}, and Message4{S8, S9, TLA4}. These messages are protected using secret keys, random numbers, and the hash function h(.). Therefore, we obtain

$$\left|\mathrm{ADV}_{A,\mathrm{Game}_{g2}} - \mathrm{ADV}_{A,\mathrm{Game}_{g1}}\right| \le C\,\frac{q_{send}^{\,s}}{2^{l_f}}. \quad (3)$$

$\mathrm{Game}_{g3}$: A tries to obtain {U2, U3, U4} from the device memory using CorruptSC through a power analysis attack, and then to recover the password PWu by an offline password guessing attack. However, in our scheme, A cannot obtain the password using the Send query. Therefore, we get

$$\left|\mathrm{ADV}_{A,\mathrm{Game}_{g3}} - \mathrm{ADV}_{A,\mathrm{Game}_{g2}}\right| \le \frac{q_h^2}{2\,|\mathrm{HASH}|}. \quad (4)$$
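To get a feel for the magnitude of the bound in Theorem 1, one can plug in concrete numbers. The paper fixes no parameter values, so all constants below are illustrative assumptions (the Zipf constants are in the range commonly reported in the password-guessing literature):

```python
# Hypothetical parameter values; the paper does not fix concrete numbers.
C, s = 0.0356, 0.175          # illustrative Zipf constants
l_f = 160                     # illustrative bit-length parameter
q_h, q_send = 2**10, 2**8     # hash and Send queries allowed to the adversary
hash_space = 2**256           # |HASH| for a 256-bit hash function

# ADV_A <= q_h^2 / |HASH| + 2C * q_send^s / 2^(l_f)
adv_bound = q_h**2 / hash_space + 2 * C * (q_send**s) / 2**l_f
assert 0 < adv_bound < 2**-80   # negligible for these parameter choices
```

For any realistic query budget, both terms are astronomically small, which is what "the adversary's advantage is negligible" means quantitatively.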
Figure 2: ProVerif simulation result.

After playing $\mathrm{Game}_{g0}$, $\mathrm{Game}_{g1}$, $\mathrm{Game}_{g2}$, and $\mathrm{Game}_{g3}$, A tries to guess the bit to win the game using the Test query. Hence, we get

$$\mathrm{ADV}_{A,\mathrm{Game}_{g3}} = \frac{1}{2}. \quad (5)$$

By applying (1), (2), and (5), we obtain

$$\frac{1}{2}\,\mathrm{ADV}_A = \mathrm{ADV}_{A,\mathrm{Game}_{g0}} - \frac{1}{2} = \mathrm{ADV}_{A,\mathrm{Game}_{g1}} - \frac{1}{2} = \mathrm{ADV}_{A,\mathrm{Game}_{g1}} - \mathrm{ADV}_{A,\mathrm{Game}_{g3}}. \quad (6)$$

Now, by using (3), (4), and (6), we get

$$\frac{1}{2}\,\mathrm{ADV}_A = \mathrm{ADV}_{A,\mathrm{Game}_{g1}} - \mathrm{ADV}_{A,\mathrm{Game}_{g3}} \le \left|\mathrm{ADV}_{A,\mathrm{Game}_{g1}} - \mathrm{ADV}_{A,\mathrm{Game}_{g2}}\right| + \left|\mathrm{ADV}_{A,\mathrm{Game}_{g2}} - \mathrm{ADV}_{A,\mathrm{Game}_{g3}}\right| \le \frac{q_h^2}{2\,|\mathrm{HASH}|} + C\,\frac{q_{send}^{\,s}}{2^{l_f}}. \quad (7)$$

Multiplying (7) by 2 on both sides, we get

$$\mathrm{ADV}_A \le \frac{q_h^2}{|\mathrm{HASH}|} + 2C\,\frac{q_{send}^{\,s}}{2^{l_f}}. \quad (8)$$

Hence, the theorem is proved.

_4.3. Shared Session Key Correctness. In this section, we_ prove that the shared session key is the same for all communicating participants. During the login and authentication phase, the IoT device calculates the shared session key SK = h(r2||r3||r4||PIDu||IDps||IDi), and at the receiving end, the user calculates the shared session key SK = h(r2||r3||r4||PIDu||IDps||IDi). The IoT device receives S4 = (PIDu||IDps||r2||r3) ⊕ h(IDi||MSKips||TLA2) and S5 = h(PIDu||IDps||MSKips||r2||r3||TLA2). It successfully computes (PIDu||IDps||r2||r3) = S4 ⊕ h(IDi||MSKips||TLA2) and verifies S5 = h(PIDu||IDps||MSKips||r2||r3||TLA2). Furthermore, the IoT device computes S6 = h(MSKips||PIDi||IDps||TLA3) ⊕ r4 and S7 = h(IDi||r4||MSKips||SK||TLA3) and forwards them to the public cloud server. Likewise, the public cloud server successfully recovers r4 = S6 ⊕ h(MSKips||PIDi||IDps||TLA3), verifies S7 = h(IDi||r4||MSKips||SK||TLA3), and further calculates S8 = (IDps||r3||r4) ⊕ h(PIDu||MSKups||r2||TLA4) and S9 = h(PIDu||IDps||r2||r3||SK||TLA4). The public cloud server forwards S8 and S9. The user successfully computes (IDps||r3||r4) = S8 ⊕ h(PIDu||MSKups||r2||TLA4) and verifies S9. Therefore, the communicating participants successfully obtain the required credentials to construct the shared session key.
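Once every party holds r2, r3, r4 and the identities, session key agreement reduces to all sides evaluating the same hash. A minimal sketch, assuming SHA-256 for h(.) and hypothetical identity strings:

```python
import hashlib
import secrets

def h(*parts: bytes) -> bytes:
    return hashlib.sha256(b"||".join(parts)).digest()

PID_u, ID_ps, ID_i = b"pid-alice", b"cloud-ps", b"device-42"
r2, r3, r4 = (secrets.token_bytes(16) for _ in range(3))

# each party derives SK = h(r2 || r3 || r4 || PID_u || ID_ps || ID_i)
sk_user = h(r2, r3, r4, PID_u, ID_ps, ID_i)
sk_iot = h(r2, r3, r4, PID_u, ID_ps, ID_i)
sk_ps = h(r2, r3, r4, PID_u, ID_ps, ID_i)
assert sk_user == sk_iot == sk_ps
```

The protocol's real work, reflected in the verification steps above, is delivering r2, r3, and r4 to each party confidentially and with integrity; the key derivation itself is deterministic.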
## 5. Informal Security Analysis
Informal security discussion and explanation of our proposed architecture are under:
(1) Impersonation Attack: Te A trying to impersonate
user, public cloud server, and IoTdevice. It will need
to calculate the authentication request messages such
as message1 and message4. However, it is challenging
for A to generate secret key SKps, random numbers,
and PIDu. Terefore, our proposed scheme resists
impersonation attacks because the A is unable to
compute the values mentioned above.
(2) IoT Device Capture Attack: Let us suppose the IoT
device is physically captured by A and A trying to
extract secret values such as {D1, D2}. However, the
A cannot compute MSKips without knowing the
secret key of public cloud SKps, random number r1,
and pseudo-identity PIDi. Terefore, the proposed
scheme resists IoT device capture attacks.
(3) Man-in-the-Middle Attack: Suppose the A eavesdrop on all transmitted messages among IoTdevices,
users, and public cloud servers, then it is possible to
launch a MITM attack. However, the A cannot
construct the transmitted messages because these
messages are protected with secret keys {SKi, SKu,
SKps}, identities {IDi, IDu, IDps}, and random
numbers {r1, r2, r3, r4}. Tus, our proposed scheme is
secure against MITM attacks.
(4) Session Key Disclosure Attack: Let suppose the A
obtain {U2, U3, U4} that are stored on the user side.
However, the A should get the random numbers {r1,
_r2, r3, r4} to construct session key Sk. Moreover, the A_
also needs to know the pseudo-identity of user PIDu,
cloud server identity IDps, and IoT identity IDi.
Hence, our scheme resists session key disclosure
attacks.
(5) Ofine Password Guessing Attack: Suppose the A
access to {U2, U3, U4} is stored on the user side. Tese
values are constructed in a way that the A cannot get
a password from it, such as _U2 �_ EMK(U1),
_U3 �_ _h(PIDu||Pu) ⊕_ _ru, and U4 �_ _h(PIDu||Pu||ru). Te_
A needs random number r1 and ⊕ to construct those
values. Terefore, our scheme is secure against ofline password guessing attacks.
(6) Anonymity and Untraceability: Suppose A has access
to all transmitted messages during the login and
authentication phases. However, A cannot obtain the
identities {IDu, IDps, IDi} or the pseudo-identities
{PIDi, PIDu} without knowing the secret keys.
Furthermore, the random numbers and timestamps are
different in each session. Therefore, A cannot
trace any peers. Hence, the proposed scheme provides
anonymity and untraceability.
(7) Mutual Authentication: In our proposed architecture,
all parties mutually authenticate each other. After
receiving Message1 = {S1, S2, S3, TLA1} from a user, the
public cloud server authenticates the user by checking
S3 ?= h(PIDi||MSKups||r2||TLA1), while the IoT device
authenticates PS by checking S5 ?= h(PIDu||IDps||MSKips||r2||
r3||TLA2). Furthermore, PS authenticates the IoT device
using S7 ?= h(IDi||r4||MSKips||SK||TLA3), and the
user authenticates PS using S9 ?= h(PIDu||IDps||r2||r3||
SK||TLA4). Hence, our proposed architecture provides
mutual authentication.
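The four checks above all follow the same pattern: the receiver recomputes a hash over the shared fields and compares it with the received authenticator. A minimal Python sketch of that pattern, with SHA-1 standing in for h and hypothetical field values, might look like:

```python
import hashlib
import hmac

def h(*parts: bytes) -> bytes:
    """One-way hash h(a||b||...), modeled with SHA-1."""
    return hashlib.sha1(b"".join(parts)).digest()

def verify(received: bytes, *parts: bytes) -> bool:
    """Receiver-side check S ?= h(field1||field2||...), in constant time."""
    return hmac.compare_digest(received, h(*parts))

# Hypothetical shared state for the user <-> public-cloud-server step:
PID_i, MSK_ups, r2, TLA1 = b"PID_i", b"MSK_ups", b"r2", b"TLA1"

S3 = h(PID_i, MSK_ups, r2, TLA1)                     # user computes S3 for Message1
assert verify(S3, PID_i, MSK_ups, r2, TLA1)          # PS accepts a genuine user
assert not verify(S3, PID_i, MSK_ups, b"r2x", TLA1)  # any altered field fails
```

Only a peer holding the shared secrets (here MSKups) can produce an authenticator that passes the check, which is what makes the exchange mutual when both sides perform it.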
(8) Replay Attack: Suppose A intercepts the messages
transmitted in a previous session, such as Message1 =
{S1, S2, S3, TLA1}, Message2 = {S4, S5, TLA2},
Message3 = {S6, S7, TLA3}, and Message4 = {S8, S9, TLA4}.
If A then tries to resend those messages, our proposed
scheme checks the validity of the timestamps.
Furthermore, all transmitted messages are protected
using secret keys and random numbers. Hence, our scheme
is resilient to replay attacks.
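The timestamp check can be sketched as a simple freshness window. The concrete bound MAX_SKEW below is an assumption, since the paper does not state one:

```python
# Hypothetical freshness window; the scheme validates timestamps but the
# paper does not give a concrete bound.
MAX_SKEW = 2.0  # seconds

def is_fresh(tla: float, now: float) -> bool:
    """Accept a message only if its timestamp TLA is within the window."""
    return abs(now - tla) <= MAX_SKEW

now = 1_000_000.0
assert is_fresh(now - 1.0, now)        # live message accepted
assert not is_fresh(now - 60.0, now)   # replayed (stale) message rejected
```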
(9) Perfect Forward Secrecy: In our proposed scheme, A
cannot construct the current session key even if a
previous session key SK is compromised, because A
would need MSKups, ri, PIDi, PIDu, and MSKips to
construct the session key. Therefore, the proposed
scheme provides perfect forward secrecy.
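The principle behind this property can be illustrated in a few lines: because fresh per-session randomness is hashed into every session key, keys from different sessions are independent. The long-term secret and derivation below are simplified stand-ins, not the paper's exact SK formula:

```python
import hashlib
import os

def h(*parts: bytes) -> bytes:
    return hashlib.sha1(b"".join(parts)).digest()

MSK_ips = b"hypothetical-long-term-secret"  # stands in for MSKips

def new_session_key() -> bytes:
    """Each run mixes fresh per-session randomness into SK, so a leaked
    session key gives an attacker no handle on any other session."""
    r = os.urandom(20)  # fresh r1..r4 in the real protocol
    return h(MSK_ips, r)

sk_old, sk_new = new_session_key(), new_session_key()
assert sk_old != sk_new  # independent keys across sessions
```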
## 6. Performance Analysis
We evaluate the proposed scheme in terms of security features, communication cost, and computation cost, and compare it with existing protocols. Our scheme provides comprehensive security at lower computation and communication costs.
_6.1. Security Features. This section evaluates our proposed scheme in terms of security features and compares our protocol with other recent related schemes. Table 3 shows that our scheme outperforms the existing schemes in terms of security features._
_6.2. Communication Cost. We calculate the communication cost of our proposed scheme in this section. We adopt SHA-1, so identities are 160 bits, random numbers are 160 bits, and timestamps are 32 bits. For encryption and decryption, we select AES-128, which takes 128-bit blocks as input and output. The hash function output is 160 bits. Authentication in our scheme completes in four rounds. The message transmitted from the user to the public cloud server is message1 = {512 bits}; from the public cloud server to the IoT device, message2 = {352 bits}; from the IoT device to the public cloud server, message3 = {352 bits}; and from the public cloud server to the user, message4 = {352 bits}. The total communication cost is 1568 bits, as shown in Figure 3._
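The total can be checked with simple arithmetic over the primitive sizes. The per-message field breakdowns below are hypothetical (the paper reports only the per-message sums), but they reproduce the stated totals:

```python
# Primitive sizes from Section 6.2 (bits)
HASH = 160   # hash output; identities and random numbers are also 160 bits
TS = 32      # timestamp

# Hypothetical per-message field breakdowns that reproduce the stated
# totals (the paper reports only the sums, not the exact field layout):
message1 = 2 * HASH + HASH + TS   # {S1, S2, S3, TLA1} -> 512 bits
message2 = 2 * HASH + TS          # {S4, S5, TLA2}     -> 352 bits
message3 = 2 * HASH + TS          # {S6, S7, TLA3}     -> 352 bits
message4 = 2 * HASH + TS          # {S8, S9, TLA4}     -> 352 bits

total = message1 + message2 + message3 + message4
assert (message1, message2, total) == (512, 352, 1568)
```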
_6.3. Computation Cost. We compute the computation cost of our proposed scheme in this section, adopting the timing measurements reported in [54]. Tm denotes the multiplication time, Th the time of a one-way hash function, and TE and TD the encryption and decryption times, respectively._
Security and Communication Networks 9
Table 3: Security features.

| Features ↓ / Schemes → | [2] | [47] | [48] | [49] | [50] | [51] | [52] | [53] | [54] | [35] | [55] | Our |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Impersonation attack | ∞ | ∝ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| Offline password guessing attack | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | — | ✓ |
| Man-in-the-middle attack | ∞ | ∞ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | — | ✓ |
| Session key disclosure attack | ∞ | ∞ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| Anonymity and untraceability | ∞ | ∞ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ∝ | ✓ | ✓ |
| Mutual authentication | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| Replay attack | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | — | ✓ | ✓ |
| Perfect forward secrecy | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| Stolen device attack | — | — | — | — | — | ∝ | ∝ | ∝ | — | — | — | ✓ |

✓: secure, ∞: insecure, —: not considered.
The operation execution times in ms are listed in Table 4. Furthermore, the computation cost of our scheme equals 0.266 ms, as shown in Figure 4.
## 7. Conclusions
Figure 3: Communication cost (in bits) across schemes [2], [47]–[55], and ours.
Table 4: Operation and execution time.

| Operation | Execution time (ms) |
| --- | --- |
| Th | 0.00097 |
| TA | 0.0028 |
| TE | 0.109 |
| TD | 0.0036 |
| TM | 0.0035 |
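Given the unit times in Table 4, a scheme's computation cost is the weighted sum of its operation counts. The counts below are illustrative assumptions (the paper reports only the resulting total of 0.266 ms, not the per-operation breakdown):

```python
# Per-operation execution times from Table 4 (ms)
T = {"Th": 0.00097, "TA": 0.0028, "TE": 0.109, "TD": 0.0036, "TM": 0.0035}

def total_cost(counts: dict) -> float:
    """Total computation cost = sum over operations of count * unit time."""
    return sum(n * T[op] for op, n in counts.items())

# Hypothetical operation counts for one authentication run; the paper
# reports only the resulting total (0.266 ms), not the breakdown.
example = {"Th": 20, "TE": 2, "TD": 2}
cost = total_cost(example)
print(f"total computation cost = {cost:.4f} ms")
```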
Misconfiguration, unauthorized access to applications, and the response of cloud servers to the results generated by IoT end-users in the cloud computing paradigm are yet to be fully addressed by researchers. In this regard, we have designed a security mechanism that mitigates the aforesaid issues to the maximum possible extent. The security analysis of the proposed framework has been carried out using widely used techniques: the ROR model, ProVerif2.03, and a realistic discussion. Furthermore, the performance analysis has been evaluated using three metrics, i.e., security features, communication cost, and computation cost. The comparison results show that the proposed scheme is suitable for practical implementation in the IoT using a public cloud server. In the future, we plan to design a transitional authentication for end-users of IoT, and its security analysis will be conducted using AVISPA.
## Data Availability
The data used to support the findings of this study can be obtained from the corresponding author upon request.
## Conflicts of Interest
The authors declare that they have no conflicts of interest.
## Acknowledgments
Figure 4: Computation cost (in ms) across schemes [2], [47]–[55], and ours.
This work was supported partially by the BK21 FOUR
program of the National Research Foundation of Korea,
funded by the Ministry of Education (NRF5199991514504)
and by the MSIT (Ministry of Science and ICT), Korea,
under the ITRC (Information Technology Research Center)
support program (IITP-2022-2018-0-01431) supervised by
the IITP (Institute for Information and Communications
Technology Planning and Evaluation).
## References
[1] M. Saqib, B. Jasra, and A. H. Moon, “A lightweight three factor
authentication framework for IoT based critical applications,”
_Journal of King Saud University-Computer and Information_
_Sciences, 2021._
[2] Z. Ali, S. Hussain, R. H. U. Rehman et al., “ITSSAKA-MS: An
improved three-factor symmetric-key based secure AKA
scheme for multi-server environments,” IEEE Access, vol. 8,
pp. 107993–108003, 2020.
[3] D. Dolev and A. Yao, “On the security of public key protocols,” IEEE Transactions on Information Theory, vol. 29, no. 2,
pp. 198–208, 1983.
[4] R. Canetti and H. Krawczyk, “Analysis of key-exchange
protocols and their use for building secure channels,” in
_Lecture Notes in Computer Science, Vol 2045, pp. 453–474,_
Springer, Berlin, Heidelberg, 2001.
[5] P. Kocher, J. Jaffe, and B. Jun, “Differential power analysis,” in
_Advances in Cryptology-CRYPTO’ 99, pp. 388–397, Springer,_
Berlin, Heidelberg, 1999.
[6] R. Amin, S. H. Islam, N. Kumar, and K.-K. R. Choo, “An
untraceable and anonymous password authentication protocol for heterogeneous wireless sensor networks,” Journal of
_Network and Computer Applications, vol. 104, pp. 133–144,_
2018.
[7] M. Alotaibi, “An enhanced symmetric cryptosystem and
biometric-based anonymous user authentication and session
key establishment scheme for WSN,” IEEE Access, vol. 6,
pp. 70072–70087, 2018.
[8] P. Chandrakar and H. Om, “An extended ECC-based anonymity-preserving 3-factor remote authentication scheme
useable in TMIS,” International Journal of Communication
_Systems, vol. 31, no. 8, p. e3540, 2018._
[9] A. H. Moon, U. Iqbal, and G. M. Bhat, “Mutual entity authentication protocol based on ECDSA for WSN,” Procedia
_Computer Science, vol. 89, pp. 187–192, 2016._
[10] W.-i. Bae and J. Kwak, “Smart card-based secure authentication protocol in multi-server IoT environment,” Multimedia Tools and Applications, vol. 79, no. 23, pp. 15793–15811,
2020.
[11] S. Shin and T. Kwon, “A lightweight three-factor authentication and key agreement scheme in wireless sensor networks
for smart homes,” Sensors, vol. 19, no. 9, p. 2012, 2019.
[12] J. Jung, J. Moon, D. Lee, and D. Won, “Efficient and security
enhanced anonymous authentication with key agreement
scheme in wireless sensor networks,” Sensors, vol. 17, no. 3,
p. 644, 2017.
[13] M. Fakroon, M. Alshahrani, F. Gebali, and I. Traore, “Secure
remote anonymous user authentication scheme for smart
home environment,” Internet of Things, vol. 9, p. 100158, 2020.
[14] S. Banerjee, V. Odelu, A. K. Das, S. Chattopadhyay, and
Y. Park, “An efficient, anonymous and robust authentication
scheme for smart home environments,” Sensors, vol. 20, no. 4,
p. 1215, 2020.
[15] L. Zhou, X. Li, K.-H. Yeh, C. Su, and W. Chiu, “Lightweight
IoT-based authentication scheme in cloud computing circumstance,” Future Generation Computer Systems, vol. 91,
pp. 244–251, 2019.
[16] R. Martínez-Peláez, H. Toral-Cruz, J. R. Parra-Michel et al.,
“An enhanced lightweight IoT-based authentication scheme
in cloud computing circumstances,” Sensors, vol. 19, no. 9,
p. 2098, 2019.
[17] X. Jia, D. He, N. Kumar, and K.-K. R. Choo, “A provably
secure and efficient identity-based anonymous authentication
scheme for mobile edge computing,” IEEE Systems Journal,
vol. 14, no. 1, pp. 560–571, 2019.
[18] C.-M. Chen, Y. Huang, K.-H. Wang, S. Kumari, and
M.-E. Wu, “A secure authenticated and key exchange scheme
for fog computing,” Enterprise Information Systems, vol. 15,
no. 9, pp. 1200–1215, 2021.
[19] X. Jia, D. He, N. Kumar, and K.-K. R. Choo, “Authenticated
key agreement scheme for fog-driven IoT healthcare system,”
_Wireless Networks, vol. 25, no. 8, pp. 4737–4750, 2019._
[20] B. Ying and A. Nayak, “Lightweight remote user authentication protocol for multi-server 5G networks using self-certified public key cryptography,” Journal of Network and
_Computer Applications, vol. 131, pp. 66–74, 2019._
[21] M. Nikooghadam, R. Jahantigh, and H. Arshad, “A lightweight authentication and key agreement protocol preserving
user anonymity,” Multimedia Tools and Applications, vol. 76,
no. 11, pp. 13401–13423, 2017.
[22] B. Hu, W. Tang, and Q. Xie, “A two-factor security Authentication scheme for wireless sensor networks in IoT
environments,” Neurocomputing, 2022.
[23] C.-T. Chen, C.-C. Lee, and I.-C. Lin, “Efficient and secure
three-party mutual authentication key agreement protocol for
WSNs in IoT environments,” PLoS One, vol. 15, no. 4,
p. e0232277, 2020.
[24] R. Amin, T. Maitra, D. Giri, and P. Srivastava, “Cryptanalysis
and improvement of an RSA based remote user authentication scheme using smart card,” Wireless Personal Commu_nications, vol. 96, no. 3, pp. 4629–4659, 2017._
[25] M. Luo, Y. Zhang, M. K. Khan, and D. He, “A secure and
efficient identity-based mutual authentication scheme with
smart card using elliptic curve cryptography,” International
_Journal of Communication Systems, vol. 30, no. 16, p. e3333,_
2017.
[26] T. Maitra, M. S. Obaidat, R. Amin, S. H. Islam,
S. A. Chaudhry, and D. Giri, “A robust ElGamal-based
password-authentication protocol using smart card for clientserver communication,” International Journal of Communi_cation Systems, vol. 30, no. 11, p. e3242, 2017._
[27] S. H. Islam, “Design and analysis of an improved smartcard-based remote user password authentication scheme,” International Journal of Communication Systems, vol. 29, no. 11,
pp. 1708–1719, 2016.
[28] T. Maitra, M. S. Obaidat, S. H. Islam, D. Giri, and R. Amin,
“Security analysis and design of an efficient ECC-based two-factor password authentication scheme,” Security and Communication Networks, vol. 9, no. 17, pp. 4166–4181, 2016.
[29] S. Kumari, X. Li, F. Wu, A. K. Das, K.-K. R. Choo, and J. Shen,
“Design of a provably secure biometrics-based multi-cloudserver authentication scheme,” Future Generation Computer
_Systems, vol. 68, pp. 320–330, 2017._
[30] Q. Feng, D. He, S. Zeadally, and H. Wang, “Anonymous
biometrics-based authentication scheme with key distribution
for mobile multi-server environment,” Future Generation
_Computer Systems, vol. 84, pp. 239–251, 2018._
[31] R. Ali and A. K. Pal, “An efficient three factor-based authentication scheme in multiserver environment using ECC,”
_International Journal of Communication Systems, vol. 31,_
no. 4, p. e3484, 2018.
[32] F. Wang, G. Xu, C. Wang, and J. Peng, “A provably secure
biometrics-based authentication scheme for multi-server
environment,” _Security_ _and_ _Communication_ _Networks,_
vol. 2019, 2019.
[33] J. Wang, H. Liu, H. Shao, and H.-y. Xia, “Novel two-way
security authentication wireless scheme based on hash
function,” Computer Science, vol. 43, no. 11, pp. 205–209,
2016.
[34] S. D. Kaul and A. K. Awasthi, “Security enhancement of an
improved remote user authentication scheme with key
agreement,” Wireless Personal Communications, vol. 89, no. 2,
pp. 621–637, 2016.
[35] S. S. Sahoo, S. Mohanty, and B. Majhi, “A secure three factor
based authentication scheme for health care systems using IoT
enabled devices,” Journal of Ambient Intelligence and Hu_manized Computing, vol. 12, no. 1, pp. 1419–1434, 2021._
[36] M. Qi and J. Chen, “New robust biometrics-based mutual
authentication scheme with key agreement using elliptic curve
cryptography,” Multimedia Tools and Applications, vol. 77,
no. 18, pp. 23335–23351, 2018.
[37] A. Ostad-Sharif, D. Abbasinezhad-Mood, and
M. Nikooghadam, “A robust and efcient ECC-based mutual
authentication and session key generation scheme for
healthcare applications,” Journal of Medical Systems, vol. 43,
no. 1, pp. 1–22, 2019.
[38] A. K. Sutrala, A. K. Das, V. Odelu, M. Wazid, and S. Kumari,
“Secure anonymity-preserving password-based user authentication and session key agreement scheme for telecare
medicine information systems,” Computer Methods and
_Programs in Biomedicine, vol. 135, pp. 167–185, 2016._
[39] I. Z. Ahmed, T. M. Mohamed, and R. A. Sadek, “A low
computation message delivery and authentication VANET
protocol,” in Proceedings of the 2017 12th International
_Conference on Computer Engineering and Systems (ICCES),_
pp. 204–211, IEEE, Cairo, Egypt, December 2017.
[40] H. Tan, Z. Gui, and I. Chung, “A secure and efficient certificateless authentication scheme with unsupervised anomaly
detection in VANETs,” IEEE Access, vol. 6, pp. 74260–74276,
2018.
[41] R. Ma, J. Cao, D. Feng et al., A secure Authentication scheme
_for Remote Diagnosis and Maintenance in Internet of Vehicles,_
pp. 1–7.
[42] N. Khan, J. Zhang, and S. U. Jan, “A robust and privacy-preserving anonymous user authentication scheme for
public cloud server,” Security and Communication Networks,
vol. 2022, 2022.
[43] Q. Jiang, N. Zhang, J. Ni, J. Ma, X. Ma, and K.-K. R. Choo,
“Unified biometric privacy preserving three-factor authentication and key agreement for cloud-assisted autonomous
vehicles,” IEEE Transactions on Vehicular Technology, vol. 69,
no. 9, pp. 9390–9401, 2020.
[44] B. Blanchet, B. Smyth, V. Cheval, and M. Sylvestre, ProVerif
_2.00: Automatic Cryptographic Protocol Verifier, User Manual_
_and Tutorial, pp. 05–16, 2018._
[45] S. A. Chaudhry, K. Yahya, F. Al-Turjman, and M.-H. Yang, “A
secure and reliable device access control scheme for IoT based
sensor cloud systems,” IEEE Access, vol. 8, pp.139244–139254,
2020.
[46] Z. Hou and D. Wang, “New Observations on Zipf’s Law in
passwords,” IEEE Transactions on Information Forensics and
_Security, 2022._
[47] S. Barman, H. P. Shum, S. Chattopadhyay, and D. Samanta, “A
secure authentication protocol for multi-server-based
e-healthcare using a fuzzy commitment scheme,” IEEE Access,
vol. 7, pp. 12557–12574, 2019.
[48] X. Li, T. Liu, M. S. Obaidat, F. Wu, P. Vijayakumar, and
N. Kumar, “A lightweight privacy-preserving authentication
protocol for VANETs,” IEEE Systems Journal, vol. 14, no. 3,
pp. 3547–3557, 2020.
[49] R. I. Abdelfatah, N. M. Abdal-Ghafour, and M. E. Nasr,
“Secure VANET Authentication protocol (SVAP) using
Chebyshev Chaotic Maps for Emergency Conditions,” IEEE
_Access, vol. 10, pp. 1096–1115, 2021._
[50] K. Mahmood, S. Shamshad, M. Rana et al., “PUF enable
lightweight key-exchange and mutual authentication protocol
for multi-server based D2D communication,” Journal of In_formation Security and Applications, vol. 61, p. 102900, 2021._
[51] M. Kaveh, D. Martín, and M. R. Mosavi, “A lightweight authentication scheme for V2G communications: A PUF-based approach ensuring cyber/physical security and identity/
location privacy,” Electronics, vol. 9, no. 9, p. 1479, 2020.
[52] P. Gope and B. Sikdar, “An efficient privacy-preserving authenticated key agreement scheme for edge-assisted internet
of drones,” IEEE Transactions on Vehicular Technology,
vol. 69, no. 11, pp. 13621–13630, 2020.
[53] P. Gope, Y. Gheraibia, S. Kabir, and B. Sikdar, “A secure IoT-based modern healthcare system with fault-tolerant decision
making process,” IEEE Journal of Biomedical and Health
_Informatics, vol. 25, no. 3, pp. 862–873, 2020._
[54] S. Shamshad, M. F. Ayub, K. Mahmood, S. Kumari,
S. A. Chaudhry, and C.-M. Chen, “An enhanced scheme for
mutual authentication for healthcare services,” _Digital_
_Communications and Networks, vol. 8, no. 2, pp. 150–161,_
2022.
[55] S. S. Sahoo, S. Mohanty, and B. Majhi, “Improved biometric-based mutual authentication and key agreement scheme using
ECC,” Wireless Personal Communications, vol. 111, no. 2,
pp. 991–1017, 2020.
|
{
"disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.1155/2022/7836461?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1155/2022/7836461, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "CCBY",
"status": "GOLD",
"url": "https://downloads.hindawi.com/journals/scn/2022/7836461.pdf"
}
| 2,022
|
[] | true
| 2022-11-23T00:00:00
|
[
{
"paperId": "5ade98ced7ae577bcd2ffd534347ff829593a40b",
"title": "A two-factor security authentication scheme for wireless sensor networks in IoT environments"
},
{
"paperId": "b28941df8c7457857fbb76219e4f6e7ba5853acd",
"title": "A Robust and Privacy-Preserving Anonymous User Authentication Scheme for Public Cloud Server"
},
{
"paperId": "0c4085b440d3174c76c29a3027d6949f8984d4a0",
"title": "A lightweight three factor authentication framework for IoT based critical applications"
},
{
"paperId": "28874ccfd3b803b3c15ddf2325b1e67935477aca",
"title": "An enhanced scheme for mutual authentication for healthcare services"
},
{
"paperId": "ecce523e0730385b5f950308bad5188b6bd8ef11",
"title": "A Lightweight Authentication Scheme for V2G Communications: A PUF-Based Approach Ensuring Cyber/Physical Security and Identity/Location Privacy"
},
{
"paperId": "5ed58e49129c95ac2b5243d40d54dd1401d4a425",
"title": "A Lightweight Privacy-Preserving Authentication Protocol for VANETs"
},
{
"paperId": "c3217229f293bf42d5f54679edd81ccd59d8142a",
"title": "An Efficient Privacy-Preserving Authenticated Key Agreement Scheme for Edge-Assisted Internet of Drones"
},
{
"paperId": "8183e1be214873d24720f37e76c52b90de54bee8",
"title": "A secure three factor based authentication scheme for health care systems using IoT enabled devices"
},
{
"paperId": "2a09780cd57fd3825a4900d982ef089c9c014a2f",
"title": "A Secure IoT-Based Modern Healthcare System With Fault-Tolerant Decision Making Process"
},
{
"paperId": "45e3f65721282c220c887b8b4dc1000888a486ab",
"title": "A Secure Authentication Scheme for Remote Diagnosis and Maintenance in Internet of Vehicles"
},
{
"paperId": "0937d27f62be5bea1c3adea3a3a9e393b41d94ae",
"title": "Efficient and secure three-party mutual authentication key agreement protocol for WSNs in IoT environments"
},
{
"paperId": "b219fe87b95cce73d8895b1182ae841edb408538",
"title": "Secure remote anonymous user authentication scheme for smart home environment"
},
{
"paperId": "f6c5bc56de18f50e19171771347879bc2f30e61a",
"title": "A Provably Secure and Efficient Identity-Based Anonymous Authentication Scheme for Mobile Edge Computing"
},
{
"paperId": "074f6fb6a6d041480881ae649f1050cc8ad39818",
"title": "Unified Biometric Privacy Preserving Three-Factor Authentication and Key Agreement for Cloud-Assisted Autonomous Vehicles"
},
{
"paperId": "39fba4b2bb549bdfae77d1b8b048e1e924aa34b7",
"title": "An Efficient, Anonymous and Robust Authentication Scheme for Smart Home Environments"
},
{
"paperId": "e554bc59c5dd96fc30056c04b02877966f92daae",
"title": "A secure authenticated and key exchange scheme for fog computing"
},
{
"paperId": "7cb9280e93438b9b4395da405f1d7155cd9ce203",
"title": "Improved Biometric-Based Mutual Authentication and Key Agreement Scheme Using ECC"
},
{
"paperId": "a003849b98cb11117d29f6a80b5771b8b5eb77ac",
"title": "A Provably Secure Biometrics-Based Authentication Scheme for Multiserver Environment"
},
{
"paperId": "6993702ff5effe75c94787d580d4f0e0be6d5b95",
"title": "An Enhanced Lightweight IoT-based Authentication Scheme in Cloud Computing Circumstances"
},
{
"paperId": "dbbb1dc1cda3a56e3283ba3809125b699551c33b",
"title": "A Lightweight Three-Factor Authentication and Key Agreement Scheme in Wireless Sensor Networks for Smart Homes"
},
{
"paperId": "50531a19ac3595a5fa245daf3d1b196437780025",
"title": "Lightweight remote user authentication protocol for multi-server 5G networks using self-certified public key cryptography"
},
{
"paperId": "58e17344ca69bf9f08e877672d69e0e2aaa10898",
"title": "Lightweight IoT-based authentication scheme in cloud computing circumstance"
},
{
"paperId": "11b71300c2a01c48f1e81a206f3ccf75c778b78b",
"title": "A Secure Authentication Protocol for Multi-Server-Based E-Healthcare Using a Fuzzy Commitment Scheme"
},
{
"paperId": "647e1b6ab4bcce492fc25d22f669fc3b81631206",
"title": "A Robust and Efficient ECC-based Mutual Authentication and Session Key Generation Scheme for Healthcare Applications"
},
{
"paperId": "a90a57ab53bf8a58d81cf8711cf4717320aa9105",
"title": "Authenticated key agreement scheme for fog-driven IoT healthcare system"
},
{
"paperId": "dbae942c652856f4240cbe5d8c6919841b727c93",
"title": "An extended ECC‐based anonymity‐preserving 3‐factor remote authentication scheme usable in TMIS"
},
{
"paperId": "41eb76e8ba45ba0c9f8ebb2b0f73b68bc6d65633",
"title": "An efficient three factor–based authentication scheme in multiserver environment using ECC"
},
{
"paperId": "19b1b7692a7f43c4c9a12e36aaec83ee095102fd",
"title": "New robust biometrics-based mutual authentication scheme with key agreement using elliptic curve cryptography"
},
{
"paperId": "1286fe22be5788a4bbeab41ef46f11b38d1b1c4f",
"title": "Smart card-based secure authentication protocol in multi-server IoT environment"
},
{
"paperId": "ef3a48578bb67c4a78b7ce51aa7ebb22f085fded",
"title": "An untraceable and anonymous password authentication protocol for heterogeneous wireless sensor networks"
},
{
"paperId": "82fc3653b16dc044d8896f30084f2839b96153c8",
"title": "A low computation message delivery and authentication VANET protocol"
},
{
"paperId": "2835966da056cb6171a2912c5186a466c4f79ed5",
"title": "Anonymous biometrics-based authentication scheme with key distribution for mobile multi-server environment"
},
{
"paperId": "c5d3eade368f13b322d2ad52d00687091e10e9d6",
"title": "A robust ElGamal‐based password‐authentication protocol using smart card for client‐server communication"
},
{
"paperId": "12a542e6eac198ca6bbb0bb813b3a7c35f34511e",
"title": "Cryptanalysis and Improvement of an RSA Based Remote User Authentication Scheme Using Smart Card"
},
{
"paperId": "98f2a44a86866751b9f1df354695253b002e1a0c",
"title": "A secure and efficient identity‐based mutual authentication scheme with smart card using elliptic curve cryptography"
},
{
"paperId": "402335a9ae06257b086fd921c58795e6ada7db85",
"title": "Design of a provably secure biometrics-based multi-cloud-server authentication scheme"
},
{
"paperId": "a1184cc8ce28bca7af69f992f6af304b4fd7f3b9",
"title": "Efficient and Security Enhanced Anonymous Authentication with Key Agreement Scheme in Wireless Sensor Networks"
},
{
"paperId": "0f94e094445d8e996150c3e756ce1e0f1b97e431",
"title": "Security analysis and design of an efficient ECC-based two-factor password authentication scheme"
},
{
"paperId": "4cadfdd61612bc0a26f053d4d93aab6847896ace",
"title": "Secure anonymity-preserving password-based user authentication and session key agreement scheme for telecare medicine information systems"
},
{
"paperId": "090c2b296565a7555872768dcef7904c8dd846c3",
"title": "Design and analysis of an improved smartcard‐based remote user password authentication scheme"
},
{
"paperId": "9eeb5a80d87610e7338a54e03530c28337b634e3",
"title": "A lightweight authentication and key agreement protocol preserving user anonymity"
},
{
"paperId": "78b7ea7a20b01e8b08b995d8e27070016d6568a4",
"title": "Security Enhancement of an Improved Remote User Authentication Scheme with Key Agreement"
},
{
"paperId": "e56f9dc2ef53c9479d3d94f593b66a40e6dba52b",
"title": "Analysis of Key-Exchange Protocols and Their Use for Building Secure Channels"
},
{
"paperId": "dfd1ffe1fe37c62e6738fef70447be98778ceab6",
"title": "On the security of public key protocols"
},
{
"paperId": "12d4cf751833fccc7ec752f6d96d7208218be3c6",
"title": "New Observations on Zipf’s Law in Passwords"
},
{
"paperId": "797bc40d95469074b0f8105f1e3c02d97b552998",
"title": "Secure VANET Authentication Protocol (SVAP) Using Chebyshev Chaotic Maps for Emergency Conditions"
},
{
"paperId": "58300a55089f71fe6928ad46c13b9d313bf47814",
"title": "PUF enable lightweight key-exchange and mutual authentication protocol for multi-server based D2D communication"
},
{
"paperId": "e21f646d1a0bf6831fd22e9d31bd7051453fb55b",
"title": "ITSSAKA-MS: An Improved Three-Factor Symmetric-Key Based Secure AKA Scheme for Multi-Server Environments"
},
{
"paperId": "c6ad3b4c66103c11e3ef50ebcbb2e9569eb3a0d6",
"title": "A Secure and Reliable Device Access Control Scheme for IoT Based Sensor Cloud Systems"
},
{
"paperId": "81316a42443f45e39ce073bf346da98de36a6d2c",
"title": "An Enhanced Symmetric Cryptosystem and Biometric-Based Anonymous User Authentication and Session Key Establishment Scheme for WSN"
},
{
"paperId": "a03caf9aed2cb7a932e66c4e0fc1ff8f80d71e03",
"title": "A Secure and Efficient Certificateless Authentication Scheme With Unsupervised Anomaly Detection in VANETs"
},
{
"paperId": "d73437220301286bc4560535f1da78425a5e24c1",
"title": "Mutual Entity Authentication Protocol Based on ECDSA for WSN"
},
{
"paperId": "d6f1b26a2718d638cb46c77c01726f9273bb3244",
"title": "一种新颖的基于Hash函数的无线双向安全认证方案 (Novel Two-way Security Authentication Wireless Scheme Based on Hash Function)"
},
{
"paperId": "ab059425a55a30e90aa0de78ff72e8867a1afa69",
"title": "ProVerif 1.85: Automatic Cryptographic Protocol Verifier, User Manual and Tutorial"
},
{
"paperId": null,
"title": "Diferential power analysis"
}
] | 14,274
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Mathematics",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Engineering",
"source": "s2-fos-model"
},
{
"category": "Environmental Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/ffcaa093dae6210fe8676bf0ddea2bbc4294a2fe
|
[
"Computer Science",
"Mathematics"
] | 0.853863
|
An Online Optimization Framework for Distributed Fog Network Formation With Minimal Latency
|
ffcaa093dae6210fe8676bf0ddea2bbc4294a2fe
|
IEEE Transactions on Wireless Communications
|
[
{
"authorId": "2163520",
"name": "Gilsoo Lee"
},
{
"authorId": "145412074",
"name": "W. Saad"
},
{
"authorId": "1702172",
"name": "M. Bennis"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"IEEE Trans Wirel Commun"
],
"alternate_urls": [
"http://ieeexplore.ieee.org/servlet/opac?punumber=7693",
"https://ieeexplore.ieee.org/xpl/RecentIssue.jsp?punumber=7693&year=2005"
],
"id": "bb40a041-3875-45d5-afd4-e1c75f896fa6",
"issn": "1536-1276",
"name": "IEEE Transactions on Wireless Communications",
"type": "journal",
"url": "http://www.comsoc.org/twc/"
}
|
Fog computing is emerging as a promising paradigm to perform distributed, low-latency computation by jointly exploiting the radio and computing resources of end-user devices and cloud servers. However, the dynamic and distributed formation of local fog networks is highly challenging due to the unpredictable arrival and departure of neighboring fog nodes. Therefore, a given fog node must properly select a set of neighboring nodes and intelligently offload its computational tasks to this set of neighboring fog nodes and the cloud in order to achieve low-latency transmission and computation. In this paper, the problem of fog network formation and task distribution is jointly investigated while considering a hybrid fog-cloud architecture. The overarching goal is to minimize the maximum communication and computation latency by enabling a given fog node to form a suitable fog network and optimize the task distribution under uncertainty on the arrival process of neighboring fog nodes. To solve this problem, a novel online optimization framework is proposed, in which the neighboring nodes are selected by using a threshold-based online algorithm that uses a target competitive ratio, defined as the ratio between the latency of the online algorithm and the offline optimal latency. The proposed framework repeatedly updates its target competitive ratio and optimizes the distribution of the fog node’s computational tasks in order to minimize latency. The simulation results show that, for specific settings, the proposed framework can successfully select a set of neighboring nodes while reducing latency by up to 19.25% compared with a baseline approach based on the well-known online secretary framework. The results also show how, using the proposed framework, the computational tasks can be properly offloaded between the fog network and a remote cloud server in different network settings.
|
# An Online Optimization Framework for Distributed Fog Network Formation with Minimal Latency
## Gilsoo Lee[∗], Walid Saad[∗], and Mehdi Bennis[†]
_∗_ Wireless@VT, Department of Electrical and Computer Engineering, Virginia Tech, Blacksburg, VA, USA,
Emails: {gilsoolee,walids}@vt.edu.
_† Centre for Wireless Communications, University of Oulu, Finland, Email: bennis@ee.oulu.fi._
**_Abstract_—Fog computing is emerging as a promising paradigm to perform distributed, low-latency computation by jointly exploiting the radio and computing resources of end-user devices and cloud servers. However, the dynamic and distributed formation of local fog networks is highly challenging due to the unpredictable arrival and departure of neighboring fog nodes. Therefore, a given fog node must properly select a set of neighboring nodes and intelligently offload its computational tasks to this set of neighboring fog nodes and the cloud in order to achieve low-latency transmission and computation. In this paper, the problem of fog network formation and task distribution is jointly investigated while considering a hybrid fog-cloud architecture. The overarching goal is to minimize the maximum communication and computation latency by enabling a given fog node to form a suitable fog network and optimize the task distribution, under uncertainty on the arrival process of neighboring fog nodes. To solve this problem, a novel online optimization framework is proposed in which the neighboring nodes are selected by using a threshold-based online algorithm that uses a target competitive ratio, defined as the ratio between the latency of the online algorithm and the offline optimal latency. The proposed framework repeatedly updates its target competitive ratio and optimizes the distribution of the fog node's computational tasks in order to minimize latency. Simulation results show that, for specific settings, the proposed framework can successfully select a set of neighboring nodes while reducing latency by up to 19.25% compared to a baseline approach based on the well-known online secretary framework. The results also show how, using the proposed framework, the computational tasks can be properly offloaded between the fog network and a remote cloud server in different network settings.**
**_Index Terms_—Fog Network, Edge Computing, Online Optimization, Online Resource Scheduling, Network Formation.**
I. INTRODUCTION
The Internet of Things (IoT) is expected to connect over 50 billion things worldwide by 2020 [2]–[4]. To meet the low-latency requirement of task computation for IoT devices,
relying on conventional, remote cloud solutions may not be
suitable due to the high end-to-end transmission latency of
the cloud [5]. Therefore, to reduce the transmission latency,
the local proximity of IoT devices can be exploited for offloading computational tasks, in a distributed manner. Such local
computational offload gives rise to the emerging paradigm of
_fog computing_ [6]. Fog computing, also known as edge computing, allows overcoming the limitations of centralized cloud
computation by enabling distributed, low-latency computation
at the network edge, for supporting various wireless and IoT
applications [7]. The advantages of the fog architecture come
from the transfer of some of the network functions to the
A preliminary conference version [1] of this work was presented at IEEE ICC 2017.
network edge. Indeed, significant amounts of data can be
stored, controlled, and computed over fog networks that can
be configured and managed by end-user nodes [5]. Within
the fog paradigm, computational tasks can be intelligently
allocated between the fog nodes and the cloud to meet
computational and latency requirements [8]. To implement the
fog paradigm, a three-layer network architecture is typically
needed to manage sensor, fog, and cloud layers [7]. When the
computing tasks are offloaded from the sensor layer to the fog
and cloud layers, fog computing faces a number of challenges
such as fog network formation and radio/computing resource
allocation [9]. In particular, it is challenging for fog nodes to
dynamically form and maintain a fog network that they can
use for offloading their task. This challenge is exacerbated
by the fact that fog computing devices are inherently mobile
and will join/leave a network sporadically [10]. Moreover, to
efficiently use the computing resource pool of the fog network,
novel resource management schemes for the hybrid fog-cloud
network architecture are needed [11].
To reap the benefits of fog networks, many architectural and
operational challenges must be addressed [12]–[25]. A number
of approaches for fog network formation are investigated in
[12]–[16]. To configure a fog network, the authors in [12]
propose the use of a device-to-device (D2D)-based network
that can efficiently support networking between a fog node
and a group of sensors. Also, to enable connectivity for fog
computing, the work in [13] reviews D2D techniques that can
be used for reliable wireless communications among highly
mobile nodes. The work in [14] proposes a framework for
vehicular fog computing in which fog servers can form a
distributed vehicular network for content distribution. In [15],
the authors study a message exchange procedure to form a
local network for resource sharing between the neighboring
fog nodes. The work in [16] introduces a method to form a
hybrid fog architecture in the context of transportation and
drone-based networks.
Once a fog network is formed, the next step is to share
resources and tasks among fog nodes as studied in [17]–
[25]. For instance, the work in [17] investigates the problem of scheduling tasks over heterogeneous cloud servers
in different scenarios in which multiple users can offload
their tasks to the cloud and fog layers. The work in [18]
studies the joint optimization of radio and computing resources
using a game-theoretic approach in which mobile cloud service providers can decide to cooperate in resource pooling.
Meanwhile, in [19], the authors propose a task allocation
approach that minimizes the overall task completion time by
using a multidimensional auction and finding the best time interval between multiple auctions to reduce unnecessary time overheads. The authors in [20] study a latency minimization problem to allocate the computational resources of the mobile-edge servers. Moreover, the authors in [21] study the delay minimization problem in fog and cloud-assisted networks under heterogeneous delay considerations. Moreover, the work in [22] investigates the problem of minimizing the aggregate
cloud fronthaul and wireless transmission latency. In [23], a
task scheduling algorithm is proposed to jointly optimize the
radio and computing resources to reduce the users’ energy
consumption while satisfying delay constraints. The problem
of optimizing power consumption is also considered in [24]
subject to delay constraint using a queueing-theoretic delay
model at the cloud. Moreover, the work in [25] studies the
power consumption minimization problem in an online scenario subject to uncertain task arrivals. Furthermore, the work
in [26] studies how tasks can be predicted and proactively
scheduled. Last, but not least, the work in [27] implements
a prototype for fog computing that can manage edge node’s
resources in a distributed computing environment.
In all of these existing fog network formation and task
scheduling works in fog networks [14]–[24], it is generally
assumed that information on the formation of the fog network
is completely known to all nodes. However, in practice, the
fog network can be spontaneously initiated by a fog node
when other neighboring fog nodes start to dynamically join
or leave the network. Hence, the presence of a neighboring
fog node to which one can offload tasks is unpredictable.
Indeed, it is challenging for a fog node to know when and
where another fog node will arrive. Thus, there exists an inherent uncertainty stemming from the unknown locations and
availability of fog nodes. Further, most of the existing works
[14], [15], [19]–[23] typically assume a simple transmission
or computational latency model for a fog node. In contrast,
the use of a queueing-theoretic model for both transmission
and computational latency is necessary to capture realistic
latency metrics. Consequently, unlike the existing literature
[15], [19]–[23] which assumes full information knowledge for
fog network formation and relies on simple delay models, our
goal is to design an online approach to enable an on-the-fly formation of the fog network, under uncertainty, while
minimizing the computational latency given an end-to-end
latency model.
The main contribution of this paper is a novel framework
for online fog network formation and task distribution in a
hybrid fog-cloud network. This framework allows any given
fog node to dynamically construct a fog network by selecting
the most suitable set of neighboring fog nodes in presence of
uncertainty on the arrival order of neighboring fog nodes. The
fog node can jointly use its fog network as well as a distant
cloud server to compute given tasks. We formulate an online
optimization problem whose objective is to minimize the
maximum computational latency of all fog nodes by properly
selecting the set of fog nodes to which computations will
be offloaded while also properly distributing the tasks among
those fog nodes and the cloud. To solve this problem without
any prior information on the future arrival order of fog nodes
Fig. 1: System model of the fog networking architecture and the cloud.
we propose an online optimization framework that achieves
a target competitive ratio, defined as the ratio between the
latency achieved by the proposed algorithm and the optimal
latency that can be achieved by an offline algorithm. In the
proposed framework, an online algorithm is used to form a
fog network when the neighboring nodes arrive sequentially,
the task distribution is optimized among the nodes on the
formed network, and the target competitive ratio is repeatedly
updated. We show the target competitive ratio can be achieved
by iteratively running the proposed algorithm. Simulation
results show that the proposed framework can achieve a target
competitive ratio of 1.21 in a given simulation scenario. For
a specific simulation setting, simulation results show that the
proposed algorithm can reduce the latency by up to 19.25%
compared to the baseline approach that is a modified version
of the popular online secretary algorithm [1]. Therefore, the
proposed framework is shown to be able to find a suitable
competitive ratio that can reduce the latency of fog computing
while properly selecting the neighboring fog nodes that have
high performance and suitably distributing tasks across fog
nodes and a cloud server.
The rest of this paper is organized as follows. In Section II, the system model is presented. We formulate the
online problem in Section III. In Section IV, we propose
our online optimization framework to solve the problem. In
Section V, simulation results are carried out to evaluate the
performance of our proposed framework. Conclusions are
drawn in Section VI.
II. SYSTEM MODEL
Consider a fog network consisting of a sensor layer, a
fog layer, and a cloud layer as shown in Fig. 1. In this
system, the sensor layer includes smart and small-sized IoT
sensors with limited computational capability. Therefore, when
sensors generate the computational tasks, the sensors’ tasks
are offloaded to the fog and cloud layers for purposes of
remote distributed computing. Similarly, cloud tasks can also
be offloaded to the fog layer. In our model, the cloud layer
can be seen as the conventional cloud computing center. The
fog layer refers to the set of IoT devices (also called fog
nodes) that can perform fog computing jobs such as storing
TABLE I: Summary of notations

| Symbol | Description |
|---|---|
| i | Index of initial fog node |
| j | Index of neighboring fog nodes in J |
| c | Index of cloud |
| J = \|J\| | Number of neighboring fog nodes |
| xi | Total task arrival rate from sensors to node i |
| αk, k ∈ {i, ij, c} | Tasks offloaded toward k |
| µij | Fog transmission service rate from i to j |
| µc | Cloud transmission service rate |
| µi | Computing service rate of fog node i |
| µj | Computing service rate of fog node j |
| 1/ωk, k ∈ {i, j, c} | Processing speed of node k |
| n | Arrival order |
| K | Size of a task packet |
| γ | Target competitive ratio |
data and computing tasks. We assume that various kinds of
sensors send their task data to a certain fog node i, and the
data arrival rate to this node is xi packets per second where a
task packet has a size of K bits[1]. Fog node i performs the roles
of collecting, storing, controlling, and processing the task data
from the sensor layer, as is typical in practical fog networking
scenarios [5]. In our architecture, for efficient computing, fog
node i must cooperate with other neighboring fog nodes and
the cloud data center. We consider a network having a set _N_ of N fog nodes other than fog node i. For a given fog node i, we focus on the fog computing case in which fog node i builds a network with a subset _J_ ⊂ _N_ of J neighboring fog nodes.
Also, since the cloud is typically located at a remote location,
fog node i must access the cloud via wireless communication
links using a cellular base station c.
Once the initial fog node i receives tasks that arrive with
the rate of xi packets per second, it assigns a fraction of xi to
other nodes. Then, each node within the considered fog-cloud
network will locally compute the assigned fraction of xi. The
fraction of tasks locally computed by fog node i is λi(αi) = αixi. Then, the task arrival rate offloaded from fog node i to fog node j ∈ _J_ is λij(αij) = αijxi. Therefore, the task arrival rate processed at the fog layer is (αi + Σj∈J αij)xi. The number of remaining tasks λc(αc) = αcxi will then be offloaded to the cloud. When fog node i makes a decision on the distribution of all input tasks xi, the task distribution variables are represented as vector α = [αi, αc, αi1, . . ., αij, . . ., αiJ] with Σj∈J αij + αi + αc = 1. Naturally, the total task arrival
rate that arrives at fog node i will be equal to the sum of
the task arrival rates assigned to all computation nodes in the
fog and cloud layers. Also, to model the random arrival of
tasks from the sensors to fog node i, the total task arrival
rate arriving at fog node i can be modeled by a Poisson
process [24]. The tasks offloaded to the fog nodes and the
cloud also follow a Poisson process if the tasks are randomly
scheduled in a round robin fashion [28]. Also, the initial fog
node can determine the transmission order of the task packets
offloaded from the sensor layer. Therefore, in future work, if
the tasks offloaded from the sensor layer have different service-level latency requirements, the initial fog node can prioritize
urgent task packets in its queue.
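The splitting rule above can be sketched in a few lines of Python; the numbers (xi = 100 packets/s, two neighboring fog nodes) and the function name are illustrative assumptions, not values from the paper.

```python
# Sketch of the task-splitting model of Section II: fog node i receives
# x_i packets/s and splits them among itself, its J neighbors, and the
# cloud according to the fractions alpha (constraint (11): they sum to 1).

def split_tasks(x_i, alpha_i, alpha_c, alpha_ij):
    """Return (lambda_i, lambda_c, [lambda_ij, ...]) for a split vector.

    alpha_ij is a list with one entry per neighboring fog node j.
    """
    total = alpha_i + alpha_c + sum(alpha_ij)
    assert abs(total - 1.0) < 1e-9, "alpha must sum to 1 (constraint (11))"
    lam_i = alpha_i * x_i                 # computed locally at node i
    lam_c = alpha_c * x_i                 # offloaded to the cloud
    lam_ij = [a * x_i for a in alpha_ij]  # offloaded to each fog node j
    return lam_i, lam_c, lam_ij

# Example: x_i = 100 packets/s, two neighbors absorbing 25% and 15%.
lam_i, lam_c, lam_ij = split_tasks(100.0, 0.4, 0.2, [0.25, 0.15])
```

The Poisson-splitting property mentioned in the text means each of these sub-streams can again be treated as Poisson when fed into the queues below.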
When the tasks arrive from the sensors to fog node i, they
are first saved in fog node i’s storage, incurring a waiting delay
before they are transmitted and distributed to other nodes (fog
or cloud). This additional delay pertains to the transmission
from fog node i to c or j and can be modeled using a _transmission queue_. Moreover, when the tasks arrive at the destination,
the latency required to perform the actual computations will be
captured by a computation queue. In Fig. 1, we show examples
of both queue types. For instance, for transmission queues,
fog node i must maintain transmission queues for each fog
node j and the cloud c. For computation, each fog node has
a computation queue. To model the transmission queue, the
tasks are transmitted to fog node j over a wireless channel.
Then, the service rate (in packets per second) can be given by
µij = (Wl / K) log2(1 + gij h Ptx,i / (Wl N0)),  (1)

1The initial fog node can gather data from any other node, including sensors.
where gij is the channel gain between fog nodes i and j with
_dij being the distance between them, and h is the average_
fading gain of the fog node i. When the fog nodes are located
in proximity within a similar environment, we assume that
they have identical average fading gains. If dij ≤ 1 m, gij ≜ β1, and, if dij > 1 m, gij ≜ β1 dij^(−β2), where β1 and β2 are,
respectively, the path loss constant and the path loss exponent.
Also, Ptx,i is the transmission power of fog node i and N0 is
the noise power spectral density. The bandwidth per node is
given by Wl where l = 1 and 2 indicate, respectively, two
types of bandwidth allocation schemes: equal allocation and
cloud-centric allocation.[2] For equal bandwidth allocation, all nodes in the network will be assigned equal bandwidth, i.e., W1 = B/(J + 1), where the total bandwidth B is equally shared by J + 1 nodes that include J neighboring fog nodes and the connection to the cloud via the base station. For the _cloud-centric_ bandwidth allocation, the bandwidth allocated to the cloud is twice the bandwidth used by a fog node, i.e., the cloud and the fog node will be assigned the bandwidths 2B/(J + 2) and B/(J + 2), respectively.
Since the tasks arrive according to a Poisson process, and
the transmission time in (1) is deterministic, the latency of the
transmission queue can be modeled as an M/D/1 system[3] [28]:
Tj(λij(αij), µij) = λij(αij) / [2µij(µij − λij(αij))] + 1/µij,  (2)
where the first term is the waiting time in the queue at fog
node i, and the second term is the transmission delay between
fog nodes i and j. Similarly, when the tasks are offloaded to
the cloud, the transmission queue delay will be:
Tc(λc(αc), µc) = λc(αc) / [2µc(µc − λc(αc))] + 1/µc,  (3)
where the service rate µc between fog node i and cloud c is
given by (1) where fog node j is replaced with cloud c.
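The service rate (1) and the M/D/1 latencies (2)–(3) can be sketched as follows; all parameter values (bandwidth B, packet size K, transmit power, noise density, path loss constants) and the function names are hypothetical placeholders for illustration.

```python
import math

# Sketch of the transmission model, eqs. (1)-(3), with assumed parameters.

def channel_gain(d_ij, beta1=1e-3, beta2=3.0):
    """g_ij = beta1 if d_ij <= 1 m, else beta1 * d_ij^(-beta2)."""
    return beta1 if d_ij <= 1.0 else beta1 * d_ij ** (-beta2)

def service_rate(W, K, d_ij, h=1.0, P_tx=0.1, N0=1e-17):
    """Transmission service rate mu_ij in packets/s, eq. (1)."""
    snr = channel_gain(d_ij) * h * P_tx / (W * N0)
    return (W / K) * math.log2(1.0 + snr)

def md1_latency(lam, mu):
    """M/D/1 waiting time plus service time, as in eqs. (2)-(3)."""
    assert lam < mu, "queue must be stable (lambda < mu)"
    return lam / (2.0 * mu * (mu - lam)) + 1.0 / mu

# Equal allocation: W1 = B/(J+1); cloud-centric: 2B/(J+2) for the cloud.
B, J, K = 10e6, 3, 8000.0
W_equal = B / (J + 1)
mu = service_rate(W_equal, K, d_ij=20.0)
latency = md1_latency(0.5 * mu, mu)  # latency at 50% load
```

Note how the per-link bandwidth, and hence µij, shrinks as J grows, which is exactly the coupling between network size and service rate discussed in Section III.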
Next, we define the computation queue. When a fog node
needs to compute a task, this task will experience a waiting
time in the computation queue of this fog node due to a
previous task that is currently being processed. Since a fog
2The problem of joint bandwidth optimization and fog computing can be a subject for future work.
3Instead of M/D/1 queueing, other delay models can be used to account for other queueing characteristics, such as different packet sizes or finite buffering.
node j receives tasks from not only fog node i but also
other fog nodes and sensors, the task arrival process can be
approximated by a Poisson process by applying the Kleinrock
approximation [28]. Therefore, the computation queue can be
modeled as an M/D/1 queue and the latency of fog node j’s
computation will be:
Sj(λij(αij)) = λij(αij) / [2µj(µj − λij(αij))] + 1/µj + ωjλij(αij),  (4)
where the first term is the waiting delay in the computation
queue, the second term is the delay for fetching the proper
application needed to compute the task, and the third term is
a function of the processor delay implying the processing delay
for the task. The delay of this fetching procedure depends on
the performance of the node’s hardware which is a deterministic constant that determines the service time of the computation
queue. In the first and second terms of (4), µj is a parameter
related to the overall hardware performance of fog node j.
In the third term, ωjλij(αij) is the actual computation time
of the task with ωj being a constant time needed to compute
a task. For example, 1/ωj can be proportional to the CPU
clock frequency of fog node j. ωjλij(αij) implies that the
delay needed to compute a task at a given node can increase
with the task arrival rate since the number of concurrently
running tasks increases with the task arrival rate. The increased
number of the concurrently running tasks also increases the
context switching delay that affects the computing delay. For
fog node j, it is assumed that the maximum of computing
_∈J_
service rate and processing speed are given by ¯µj and 1/ωj,
respectively. This information can be known in advance if
the manufacturers of fog devices can provide the hardware
performance in the database. Then, when fog node i locally
computes its assigned tasks λi(αi), the latency will be:
Si(λi(αi)) = λi(αi) / [2µi(µi − λi(αi))] + 1/µi + ωiλi(αi),  (5)
where µi is the computing service rate of fog node i (dependent on hardware performance) and ωiλi(αi) is the fog node
_i’s computing time. Since the cloud is equipped with more_
powerful and faster hardware than the fog node, the waiting
time at the computation queue of the cloud can be ignored.
This implies that the cloud initiates the computation for the
received tasks without queueing delay; thus, we only account
for the actual computing delay. As a result, when tasks are
computed at the cloud, the computing delay at the cloud will
be:
_Sc(λc(αc)) = ωcλc(αc)._ (6)
In essence, if a task is routed to the cloud c, the latency
will be
_Dc(λc(αc), µc) = Tc(λc(αc), µc) + Sc(λc(αc))._ (7)
Also, if a task is offloaded to fog node j, then the latency can
be defined as the sum of the transmission and computation
queueing delays:
Dj(λij(αij), µij) = Tj(λij(αij), µij) + Sj(λij(αij)).  (8)
Furthermore, when fog node i computes tasks locally, the
latency will be:
_Di(λi(αi)) = Si(λi(αi)),_ (9)
since no transmission queue is necessary for local computing.
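A minimal sketch of the end-to-end latency terms (7)–(9), assuming the M/D/1-style expressions of eqs. (2)–(5); the function names are ours and the parameter values in the test are hypothetical.

```python
# End-to-end latencies D_c (eq. 7), D_j (eq. 8), D_i (eq. 9).

def computation_latency(lam, mu, omega):
    """Eq. (4)/(5): queueing wait + fetch delay + processing time."""
    assert lam < mu
    return lam / (2.0 * mu * (mu - lam)) + 1.0 / mu + omega * lam

def transmission_latency(lam, mu):
    """Eq. (2)/(3): M/D/1 wait at node i plus transmission delay."""
    assert lam < mu
    return lam / (2.0 * mu * (mu - lam)) + 1.0 / mu

def D_local(lam_i, mu_i, omega_i):
    """Eq. (9): local computing only, no transmission queue."""
    return computation_latency(lam_i, mu_i, omega_i)

def D_fog(lam_ij, mu_ij, mu_j, omega_j):
    """Eq. (8): transmission to fog node j plus its computation queue."""
    return (transmission_latency(lam_ij, mu_ij)
            + computation_latency(lam_ij, mu_j, omega_j))

def D_cloud(lam_c, mu_c, omega_c):
    """Eq. (7): transmission plus cloud computing, no cloud queueing wait."""
    return transmission_latency(lam_c, mu_c) + omega_c * lam_c
```

These three functions are the arguments of the max in problem (10) below, one per candidate destination of a task.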
Since xi is constant, λk, k ∈ {i, ij, c}, depends only on αk. From now on, for notational simplicity, λk(αk) is denoted by λk.
Given this model, in the next section, we formulate an online
latency minimization problem to study how a fog network can
be formed and how tasks are effectively distributed in the fog
network.
III. PROBLEM FORMULATION
In distributed fog computing, the maximum latency of
computing nodes must be minimized for effective distributed
computing. To minimize the maximum latency, fog node i
must opportunistically find neighboring nodes to form a fog
network and carry out the process of task offload. In practice,
such neighbors will dynamically join and leave the system.
Also, the neighbors have to process their existing workloads
[29]. As a result, the initial fog node i will be unable to know
a priori whether an adjacent fog node will be available to assist
it with its computation by sharing the communication and
computational resources. Moreover, the total number of neighboring fog nodes as well as their locations and their available
computing resources are unknown and highly unpredictable.
Under such uncertainty, jointly optimizing the fog network
formation and task distribution processes is challenging since
selecting neighboring fog nodes must account for potential
arrival of new fog nodes that can potentially provide a higher
data rate and stronger computational capabilities. To cope with
the uncertainty of the neighboring fog node arrivals while
considering the data rate and computing capability of current
and future fog nodes, we introduce an online optimization
_scheme that can handle the problem of fog network formation_
and task distribution under uncertainty.
We formulate the following online fog network formation
and task distribution problem whose goal is to minimize the
maximum latency when computing a new task that arrives at
fog node i:
min_{Jσ, α} max (Di(λi), Dc(λc, µc), Dj∈Jσ(λij, µij)),  (10)
s.t.
αi + αc + Σj∈Jσ αij = 1,  (11)
αi ∈ [0, 1], αc ∈ [0, 1], αij ∈ [0, 1], ∀j ∈ Jσ ⊂ Nσ,  (12)
αixi ≤ µi, αcxi ≤ µc, αijxi ≤ µj, αijxi ≤ µij, ∀j ∈ Jσ,  (13)
|Nσ| ≤ N.  (14)
Since our goal is to minimize the worst-case latency among
the fog nodes and the cloud, any task can be processed with
a low latency regardless of which node actually computes the
task[4]. By using an auxiliary variable u, problem (10) can be
4If the objective function is defined with a minimum function, the initial fog node will minimize the latency of only one node, and, therefore, it will increase the latency of the other nodes.
transformed into the following:
min_{Jσ, α} u,  (15)
s.t. u ≥ max (Di(λi), Dc(λc, µc), Dj∈Jσ (λij, µij)), (16)
(11), (12), (13), (14),
where u is the maximum latency of the fog network. In (15),
_u represents the largest value among Di(λi), Dc(λc, µc), and_
_Dj(λij, µij). Then, minimizing u is equivalent to minimizing_
the max function in (10). Hence, problems (10) and (15) are
equivalent.
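For intuition on the epigraph form (15), note that for a fixed set of neighbors the offline problem can be solved by bisecting on the latency bound u: each latency function is increasing in its assigned rate, so the largest rate a node can absorb under a bound u is itself found by bisection, and u is feasible when the nodes' admissible rates together cover xi. The sketch below uses hypothetical node parameters and is not the paper's algorithm.

```python
# Offline sketch of the epigraph problem (15) for a FIXED neighbor set:
# bisect on u, checking whether the per-node admissible rates absorb x_i.

def max_rate_under(D, mu, u, iters=60):
    """Largest lam in [0, mu) with D(lam) <= u (D increasing in lam)."""
    lo, hi = 0.0, mu * (1 - 1e-9)
    if D(lo) > u:
        return 0.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if D(mid) <= u:
            lo = mid
        else:
            hi = mid
    return lo

def min_max_latency(x_i, nodes, iters=60):
    """nodes: list of (D, mu) pairs. Returns the smallest feasible u."""
    lo, hi = 0.0, 1e6
    for _ in range(iters):
        u = 0.5 * (lo + hi)
        capacity = sum(max_rate_under(D, mu, u) for D, mu in nodes)
        if capacity >= x_i:
            hi = u  # feasible: tighten the bound
        else:
            lo = u  # infeasible: relax the bound
    return hi

# Two hypothetical nodes with M/D/1-style latency curves, eq. (2).
def make_D(mu):
    return lambda lam: lam / (2 * mu * (mu - lam)) + 1 / mu

nodes = [(make_D(10.0), 10.0), (make_D(20.0), 20.0)]
u_star = min_max_latency(12.0, nodes)  # ≈ 0.0875 for these parameters
```

In the paper this offline step is only one stage; the harder part, treated next, is that the set of neighbors itself must be chosen online.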
In constraints (11) and (12), all tasks arriving at fog node i
are offloaded among the computing nodes in the fog network.
Due to constraint (13), the tasks offloaded to a node cannot
exceed the service rate of the computing node. In this problem,
the initial fog node i determines the set of neighboring fog
nodes Jσ when they arrive online and the task distribution
vector α so as to minimize the computing latency. Fog node
_i will observe a total number of N arriving fog nodes due to_
constraint (14). Fog node i has to make a decision on network
formation and task distribution while observing N neighboring
nodes. As the number of observations increases, fog node
_i may be able to discover neighboring fog nodes that have_
higher performance. However, due to constraint (14), fog node
_i cannot wait to observe an infinite number of neighboring_
fog nodes. Thus, while observing up to N arriving fog nodes,
fog node i should select J ≤ N neighboring fog nodes to minimize (10).
In our model, we assume that fog node i does not have
any prior information on the neighboring fog nodes given by
set Nσ, and the information about each neighboring node
is collected sequentially. Such random arrival sequence is
denoted by σ = σ1, . . ., σn, . . ., σN where the arrival of n-th
neighboring node is shown as σn. For example, a smartphone
can choose to become a fog node spontaneously if it decides
to share its resources. In practice, to discover the neighboring
nodes, the fog nodes can use the node discovery mechanisms
implemented in D2D networks [12]. When fog node i does
not have complete information on other fog nodes, the nodes
in Nσ arrive at fog node i in a random order, and index n
can be the arriving order of the neighboring fog nodes. At
the arrival of a neighboring node, the arrival order n increases
by one; thus, n captures the time order of arrival. At time n,
node n can transmit a beacon signal to fog node i to indicate
its willingness to join the network of fog node i. The beacon
signal can include an information tuple on node n that includes
the distance din, computing service rate µn, and the processing
speed ωn. At each time that σn is known, e.g., by receiving the
beacon signal, fog node i will now have information on these
parameters that pertain to node n [30]. Therefore, fog node i
only knows the information on the nodes that have previously
arrived (as well as the current node).
When fog node i observes σn and has knowledge of the n-th neighboring node, it has to make an online decision whether
to select node n. If fog node n is chosen by the initial fog
node i, it is indexed by j and included in a set Jσ which
is a subset of Nσ. Otherwise, fog node i will no longer be
able to select fog node n at a later time period since the latter
can join another fog network or terminate its resource sharing
offer to fog node i. For notational simplicity, Jσ and Nσ
are hereafter denoted as _J_ and _N_, respectively. Fog node i
will not be able to have complete information about all N
neighboring nodes before all neighboring nodes are selected
by fog node i. Therefore, since fog node i cannot know any
information on future fog nodes, it is challenging for the initial
fog node i to form the fog network by determining _J_.
Even when the information on each node is known to fog
node i, it is difficult to calculate the exact service rates of
the fog node in the formulated problem. This is due to the
fact that the service rate in (1), that includes the wireless data
rate, is a function of the network size J. As the number of
nodes sharing their wireless bandwidth increases, the available
channel bandwidth per node decreases, thus reducing the
data rate. Therefore, unlike the constant parameters µi and
_µj, the transmission service rates µij and µc will vary with_
the network size. As a consequence, in order to calculate
the service rates of neighboring nodes, fog node i has to
determine the network size. However, the optimal network
size can change by the selection of neighboring nodes. Since
network size and node selection are related, it is challenging
for fog node i to optimize both network size and the set of
neighboring nodes that minimize (15). To solve the online
problem, we need to find the set of neighboring fog nodes _J_
and the task distribution vector α that minimize the maximum
latency. Moreover, since there is uncertainty about the future
arrival of neighboring nodes as well as their service rates,
one has to seek an online, sub-optimal solution that is also
robust to uncertainty. In the next section, we propose an online
optimization framework that minimizes the value of u in (15).
IV. TASK DISTRIBUTION AND NETWORK FORMATION
ALGORITHMS
In our problem, fog node i has to decide whether to admit
each neighboring node as the different neighboring nodes
arrive in a random order. This problem can be formulated as an
online stopping problem. In such problems, such as the online secretary problem [31], the goal is to develop online algorithms
that enable a company to hire a number of employees, without
knowing in which order they will arrive to the interview. To
apply such known solutions from the stopping problems, the
following assumptions are commonly needed. For instance,
the number of hiring positions should be deterministic and
given in the problem. Also, the decision maker should be
able to decide the preference order among the candidates by
comparing the values that can be earned by hiring candidates.
Under these assumptions, online stopping algorithms can be
used to select the best set of candidates in an online manner.
In this regard, even though the structures of our fog network
formation problem and the secretary problem are similar,
the fog network formation case has different assumptions.
First, the number of neighboring fog nodes is an optimization
variable in our problem. Second, the latency of computing
nodes that somewhat maps to the valuation of hiring candidates
in the secretary problem is not constant. Moreover, in our
problem, each neighboring fog node exhibits two types of
latency: transmission latency and computing latency. As a

Fig. 2: Online optimization framework for fog network formation and task distribution.
result, it is challenging to define the preference order of the
neighboring nodes as done in conventional online stopping
problems. To address those challenges, we propose a new
online optimization framework[5] that extends existing results
from online stopping theory to accommodate the specific
challenges of the fog network formation problem[6].
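For context, the classical secretary rule that the baseline of [1] builds on can be sketched as follows: observe roughly the first n/e arrivals without committing, then accept the first later arrival that beats everything seen so far (here, "better" means lower latency). This is a generic illustration of the stopping rule, not the paper's Algorithm 1, and the latency values are random placeholders.

```python
import random

# Classical secretary rule: skip ~n/e candidates, then take the first
# candidate better (lower latency) than all observed so far.

def secretary_pick(latencies):
    n = len(latencies)
    cutoff = max(1, round(n / 2.718281828))  # observation phase length n/e
    threshold = min(latencies[:cutoff])
    for v in latencies[cutoff:]:
        if v < threshold:
            return v
    return latencies[-1]  # forced to take the last arrival

# Empirical check: the rule picks the overall best node in roughly a
# 1/e fraction of random arrival orders.
random.seed(0)
trials, wins = 2000, 0
for _ in range(trials):
    lats = [random.random() for _ in range(20)]
    if secretary_pick(lats) == min(lats):
        wins += 1
success_rate = wins / trials  # typically around 0.35-0.40 here
```

The paper's framework departs from this rule in exactly the ways listed above: the number of selected nodes is itself optimized, and candidates are compared through two coupled latency types rather than a single scalar value.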
_A. Overview of the Proposed Optimization Framework_
Problem (15) has two optimization variables _J_ and α that
constitute the solutions of the network formation and task
distribution problems, respectively. To solve (15), fog node
_i must first optimize the network formation by selecting the_
neighboring fog nodes, and then decide on its task distribution.
This two-step process is required due to the fact that the
computing resources of the fog nodes are unknown before
the network is formed. The online optimization framework
consists of three highly inter-related components as shown in
Fig. 2. In the network formation stage, an online algorithm
is used to find _J_ by determining the minimal network size and, then, selecting the neighboring fog nodes within N observations to satisfy (14). After _J_ is determined, the task
distribution among the selected nodes is optimized by using an
offline optimization method during the task distribution stage.
The output of the task distribution stage is the task allocation
vector α that satisfies constraints (11), (12), and (13). Finally,
we use a parameter update stage, during which the target
performance parameter γ that will be used in the next iteration
is updated in order to satisfy constraint (14). After repeatedly
running three components of our framework, fog node i is
able to form a network without any prior information on the
neighboring nodes and also offload the tasks to the nodes
on the fog network. This algorithm is shown to converge in
Theorem 3.
The performance of our online optimization framework
will be evaluated by using competitive analysis [33]. In this
analysis, the performance is measured by the competitive ratio
5The framework proposed in this work is different from the previous work
in [1] since this work uses a different definition of transmission service rate
in (1) and a different objective function in (10).
6Fog networks can be formed by using game-theoretic approaches such as
coalitional games which require a complete knowledge of the exact utility
functions [32]. However, such knowledge can be difficult to gather, since the
initial fog node cannot have the complete information on the neighboring
nodes in an online scenario, and, therefore, an online optimization framework
is more apropos. Moreover, using a coalitional game framework to solve the
proposed fog network formation problem under uncertainty will require the
use of very complex algorithms that are not amenable to analysis, unlike the proposed online optimization framework.
Fig. 3: Flow chart of the proposed framework for fog network
formation and task distribution.
$\gamma$ that is defined by
$$1 \leq \frac{\text{ALG}(\sigma)}{\text{OPT}(\sigma)} \leq \gamma, \qquad (17)$$
where ALG(σ) denotes the latency achieved by the online
algorithm and OPT(σ) is the optimal latency achieved by an
offline algorithm. If the online algorithm finds the optimal
solution, the online algorithm achieves γ = 1. However, since
the online algorithm cannot have complete information, it is
challenging to find the optimal solution in an online setting.
Therefore, in an online minimization problem, the online
algorithm should be able to achieve γ that is close to one.
We use this notion of competitive ratio to design our online
optimization framework.
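As a toy numeric illustration of the competitive ratio in (17), the ratio is simply the online latency divided by the offline optimal latency; the latency values below are made up for illustration.

```python
def competitive_ratio(alg_latency, opt_latency):
    """Ratio ALG(sigma)/OPT(sigma) from (17): 1 means the online
    algorithm matched the offline optimum; larger values quantify
    the degradation caused by deciding online."""
    return alg_latency / opt_latency

# An online schedule with 120 ms latency against a 100 ms offline
# optimum achieves gamma = 1.2, i.e., at most 20% worse than optimal.
gamma = competitive_ratio(120.0, 100.0)
```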
The online optimization framework is summarized in the flow chart shown in Fig. 3. In the network formation stage, fog node $i$ needs to select the set of neighboring fog nodes with high service rates and processing speeds to achieve a given value of $\gamma$. At each iteration, to achieve a target competitive ratio $\gamma$, fog node $i$ determines the number of neighboring nodes $\hat{J}$ by using Phase 1 of Algorithm 1, and it sequentially observes the arrivals of a total of $N$ neighboring fog nodes while making an online decision in Phase 2 of Algorithm 1. After the network formation stage is finished, the task distribution is optimized by the initial fog node in an offline manner. Then, fog node $i$ checks whether the number of selected neighboring nodes is $\hat{J}$. For a small value of $\gamma$, fog node $i$ must find neighboring nodes having a high computing service rate and processing speed so as to achieve low latency. Therefore, in this case, fog node $i$ must observe a large number of neighboring nodes until $\hat{J}$ neighboring nodes are selected. Hence, $N$ observations may not be sufficient to find $\hat{J}$ neighboring nodes. On the other hand, a large $\gamma$ can allow the target latency to be less stringent, thus allowing fog node $i$ to select the neighboring nodes with fewer observations. To find the proper value of $\gamma$, the proposed
**Algorithm 1 Online Fog Network Formation Algorithm**
1: **inputs:** $N$, $\gamma$, $\mu_i$, $\omega_i$, $\omega_c$, $d_c$, $\bar{\mu}_{ij}(d_{ij})$, $\bar{\mu}_j$, $\omega_j$.
*Phase 1: Calculate $\hat{\lambda}_{ij}$, $\hat{J}$, and $\hat{u}$.*
2: **initialize:** $J = 0$, $n = 0$.
3: **while** $\Delta \geq 0$
4: &nbsp;&nbsp; $J \leftarrow J + 1$.
5: &nbsp;&nbsp; $\Delta \leftarrow [D_j(\lambda_{ij}, \bar{\mu}_{ij})]_{|\mathcal{J}|=J-1} - [D_j(\lambda_{ij}, \bar{\mu}_{ij})]_{|\mathcal{J}|=J}$.
6: **end while**
7: Find $\hat{\lambda}_{ij}$ by optimizing the task distribution when $|\mathcal{J}| = J - 1$.
8: Set $\hat{J} = J - 1$ and $\hat{u} = \left[D_j(\hat{\lambda}_{ij}, \bar{\mu}_{ij})\right]_{|\mathcal{J}|=J-1}$.
*Phase 2: Decide $\mathcal{J}$.*
9: **while** $|\mathcal{J}| < \hat{J}$ and $n < N$
10: &nbsp;&nbsp; **if** $D_n(\hat{\lambda}_{ij}, \mu_{in}) \leq \gamma\hat{u}$,
11: &nbsp;&nbsp;&nbsp;&nbsp; $\mathcal{J} \leftarrow \mathcal{J} \cup \{n\}$.
12: &nbsp;&nbsp; **end if**
13: &nbsp;&nbsp; $n \leftarrow n + 1$.
14: **end while**
framework iteratively updates γ. For instance, the value of
_γ can be set to one initially. Then, if a smaller γ cannot be_
achieved in the network formation stage at that iteration, the
value of γ increases by a small constant τ . By repeatedly
increasing γ, the proposed framework can find the achievable
value of γ. In the next section, we present the details of the
proposed online algorithm that exploits the updated value of
_γ for the network formation stage._
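The iterative relaxation of $\gamma$ described above can be sketched as follows. Here `try_formation` is a hypothetical callback standing in for one run of the network formation stage (returning True when $\hat{J}$ neighbors are selected within $N$ observations), and the step $\tau$ mirrors the small constant used in the paper.

```python
def find_feasible_gamma(try_formation, gamma0=1.0, tau=0.002, gamma_max=3.0):
    """Repeatedly relax the target competitive ratio by tau until the
    formation stage succeeds, as in the parameter update stage of Fig. 3."""
    gamma = gamma0
    while not try_formation(gamma) and gamma < gamma_max:
        gamma += tau  # relax the target by the small constant tau
    return gamma

# Toy run: formation succeeds once the threshold is loose enough.
gamma = find_feasible_gamma(lambda g: g >= 1.2, tau=0.05)
```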
_B. Fog Network Formation: Online Approach_
In problem (15), the decision on $\mathcal{J}$ faces two primary challenges: how many fog nodes are needed in the network
and which fog nodes join the network (at which time). Since
the transmission service rates are functions of the wireless
bandwidth that can vary with the network size, the service
rates of neighboring fog nodes cannot be calculated without
having a fixed network size. Therefore, the proposed algorithm
includes two phases as shown in Algorithm 1. The goal of
the first phase is to determine the parameters including the
network size and the temporal task distribution so that the
parameters can be used in the second phase of Algorithm 1.
Then, the second phase of Algorithm 1 allows fog node i to
make an online decision regarding the selection of an arriving
node.
In the first phase of Algorithm 1, the goal is to determine the parameters that will be used in the second phase of Algorithm 1. In the given system model, a neighboring node will be referred to as ideal in terms of minimizing the latency in (15) if it has the highest computing service rate $\bar{\mu}_j$, processing speed $1/\omega_j$, and transmission service rate $\bar{\mu}_{ij}$ when the distance between two fog nodes is $d_{ij}$. Such an ideal node is denoted by $\bar{j}$. If a network is formed with nodes having high computing resources, a smaller network size can effectively minimize the latency. When the bandwidth is divided among the smallest number of nodes, the transmission service rates of the nodes can also be maximized, and, hence, the latency can be minimized. In the case in which the ideal nodes construct a network, the minimized latency of (15) is denoted by $\hat{u}$. Also, when the latency is $\hat{u}$, the corresponding number of neighboring nodes and task distribution are denoted by $\hat{J}$ and $\{\hat{\lambda}_i, \hat{\lambda}_c, \hat{\lambda}_{ij}\}$, respectively.
**First phase:** The first phase of Algorithm 1 is used to calculate $\hat{J}$ and $\hat{\lambda}_{ij}$. The latency in (15) decreases as the number of neighboring nodes increases, since the computational load per node can be reduced. However, if the number of neighboring nodes becomes too large, the bandwidth per fog node will be smaller, yielding lower transmission service rates for the nodes. Consequently, the latency can increase with the number of neighboring nodes due to these bandwidth limitations. By using this relationship between network size and latency, the first phase of Algorithm 1 searches for $\hat{J}$ while increasing the network size incrementally, one by one. Once the number of neighboring nodes $\hat{J}$ that minimizes $\hat{u}$ is found, the tasks offloaded to each ideal node are denoted by $\hat{\lambda}_{ij}$. Therefore, we will have $\hat{J}$, $\hat{u}$, and $\hat{\lambda}_{ij}$ as the outputs from the first phase of Algorithm 1 that will be used in the second phase of Algorithm 1.
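The incremental search of the first phase can be sketched as below. The latency model passed in is entirely made up (a decreasing computing term plus an increasing bandwidth-sharing term); the paper's actual $D_j$ functions depend on its queueing and transmission model.

```python
def phase1_network_size(latency, j_max=50):
    """Sketch of Phase 1 of Algorithm 1: grow the network one node at a
    time while the latency improvement Delta stays non-negative, and
    return the size J-hat at which the latency stops improving."""
    J = 1
    while J < j_max and latency(J) - latency(J + 1) >= 0:
        J += 1  # Delta >= 0: adding one more node still helps
    return J

# Toy ideal-node latency: computing delay falls as 10/J, while the
# per-node bandwidth split adds 0.6*J of transmission delay.
J_hat = phase1_network_size(lambda J: 10.0 / J + 0.6 * J)  # J_hat == 4
```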
**Second phase: In the second phase of Algorithm 1, fog**
node i decides on whether to select each neighboring node or
not, by using a threshold-based algorithm. Our algorithm uses
a single threshold so that the latency of each arriving node can
be compared with the threshold value. Since comparing two
values is a simple operation having constant time complexity,
a threshold-based algorithm can be executed with low latency.
However, before the network formation process is completed,
fog node i is not able to know the optimal latency of each
node, and, therefore, finding the distribution of tasks that must
be offloaded to each node is not possible. Nonetheless, fog
node i must set a threshold before the first neighbor arrives.
To this end, fog node $i$ sets this initial threshold by assuming that an equal amount of tasks, $\hat{\lambda}_{ij}$, is offloaded to each one of the $\hat{J}$ neighboring nodes. Thus, in our threshold-based algorithm, the threshold value is compared with the latency that results from offloading $\hat{\lambda}_{ij}$ tasks. For example, when a neighboring node $n$ arrives, the algorithm compares the latency of node $n$, $D_n(\hat{\lambda}_{ij}, \mu_{in})$, to the threshold $\gamma\hat{u}$. If the latency of node $n$ is smaller than the threshold, fog node $i$ will immediately select node $n$. This procedure is repeated until fog node $i$ observes $N$ arrivals or selects $\hat{J}$ neighboring
nodes. In the proposed algorithm, the initial fog node needs
to discover the neighboring nodes and know the information
on the communication and computational performance of the
neighboring nodes. This procedure can use any node-discovery
and message exchanging protocols developed for D2D communications or wireless sensor networks. Also, our framework
requires a low signaling and communication overhead since
each neighboring node can transmit its location and computing
speed using a very small packet after which the initial fog
node transmits a decision on node selection using a single bit.
After the fog network is formed, the task distribution is done
to minimize latency. In the next section, we investigate the
property of the optimal task distribution, and show that the
threshold can satisfy (17).
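The threshold rule of the second phase amounts to the following sketch; the arrival latencies are illustrative numbers, and in the actual algorithm each $D_n(\hat{\lambda}_{ij}, \mu_{in})$ is computed from the observed distance and computing speed of arrival $n$.

```python
def phase2_select(arrival_latencies, j_hat, u_hat, gamma):
    """Sketch of Phase 2 of Algorithm 1: irrevocably accept each arriving
    neighbor whose latency is at most the threshold gamma * u-hat, until
    j_hat nodes are selected or the observations run out."""
    selected = []
    for n, d_n in enumerate(arrival_latencies):
        if len(selected) >= j_hat:
            break  # J-hat neighbors already selected
        if d_n <= gamma * u_hat:
            selected.append(n)  # constant-time threshold test
    return selected

# With u-hat = 100 and gamma = 1.2, only arrivals at or below 120 pass;
# the first two such arrivals (indices 1 and 3) form the network.
picked = phase2_select([150, 110, 130, 95, 121, 80], j_hat=2, u_hat=100, gamma=1.2)
```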
_C. Task Distribution: Offline Optimization_
Once the nodes are selected to form a network, the task
distribution can be performed using an offline optimization
problem which can be solved using known algorithms such
as the interior-point algorithm [34]. From problem (15), the following properties can be derived, for a given $\mathcal{J}$.
**Theorem 1.** *If there exists a task distribution $\boldsymbol{\alpha}^*$ satisfying $u^* = D_i(\lambda_i) = D_c(\lambda_c, \mu_c) = D_j(\lambda_{ij}, \mu_{ij})$, $\forall j \in \mathcal{J}$, then $\boldsymbol{\alpha}^*$ is the unique and optimal solution of problem (10).*

*Proof.* Let $\boldsymbol{\alpha}$ be the initial task distribution, and assume that any other task distribution $\boldsymbol{\alpha}'$ different from $\boldsymbol{\alpha}$ is the optimal distribution. When $\boldsymbol{\alpha}'$ is considered, we can find a certain node $A$ satisfying $\alpha'_A < \alpha_A$, where $\alpha'_A \in \boldsymbol{\alpha}'$ and $\alpha_A \in \boldsymbol{\alpha}$. This, in turn, yields $D_A(\alpha'_A) < D_A(\alpha_A)$. Due to the constraint (11), there exists another node $B$ such that $B \neq A$, $\alpha'_B > \alpha_B$, and $D_B(\alpha'_B) > D_B(\alpha_B)$, where $\alpha'_B \in \boldsymbol{\alpha}'$ and $\alpha_B \in \boldsymbol{\alpha}$. Since $D_B(\alpha'_B) > D_B(\alpha_B) = D_A(\alpha_A) > D_A(\alpha'_A)$, we must decrease $\alpha'_B$ to minimize the maximum, i.e., $D_B(\alpha'_B)$. Thus, we can clearly see that $\boldsymbol{\alpha}'$ is not optimal, and, thus, the initial distribution $\boldsymbol{\alpha}$ is optimal.

Furthermore, $D_j(\lambda_{ij}, \mu_{ij})$ is a monotonically increasing function with respect to $\lambda_{ij} = x_i\alpha_{ij}$ since $\frac{\partial}{\partial\lambda_{ij}} D_j(\lambda_{ij}, \mu_{ij}) > 0$. Therefore, there cannot be two distinct distributions $\boldsymbol{\alpha}^*$ that achieve the same $u^*$. Hence, the distribution $\boldsymbol{\alpha}$ is unique and optimal.
Theorem 1 shows that the optimal solution of the offline
latency minimization problem results in an equal latency for
all fog nodes and the cloud on the network (whenever such a
solution is feasible). Using the objective function in (10), the
initial fog node minimizes the worst-case latency among the
nodes. To that end, the initial fog node can decrease the task arrival rate of the node having the highest latency, but, in turn, the latency of another node increases. This is due to the fact that reducing one node's task arrival rate increases another node's arrival rate, since we have $\sum_{j\in\mathcal{J}} \lambda_{ij} + \lambda_i + \lambda_c = x_i$.
Therefore, as shown in Theorem 1, an equal latency for all
fog nodes and the cloud is obtained by repeatedly reducing the
arrival rate of the node having the highest latency. According
to Theorem 1, selecting the node that has high computing
resources is beneficial to minimize latency. Once fog node
_i determines the task distribution, the efficiency of the task_
distribution can be derived by applying the definition of task
scheduling efficiency in [35]. For a task distribution α, the
*efficiency* is given by
$$\Gamma = 1 + \frac{\sum_{k\in\{i,c\}\cup\{ij\,|\,j\in\mathcal{J}\}}\left(\max\left\{D_i(\alpha_i),\, D_c(\alpha_c,\mu_c),\, \max_{j\in\mathcal{J}} D_j(\alpha_{ij},\mu_{ij})\right\} - D_k\right)}{D_i(\alpha_i) + D_c(\alpha_c,\mu_c) + \sum_{j\in\mathcal{J}} D_j(\alpha_{ij},\mu_{ij})} \geq 1. \qquad (18)$$
In other words, Γ is defined as one plus the ratio between
the total idle time of the fog computing nodes and the total
transmission and computing time. Therefore, Γ = 1 means
that all nodes in the fog network can complete their assigned
tasks with the same latency. Theorem 1 shows that the optimal
latency is $u^* = D_i(\lambda_i) = D_c(\lambda_c, \mu_c) = D_j(\lambda_{ij}, \mu_{ij})$. Since $u^*$ is the maximum value among $D_i(\lambda_i)$, $D_c(\lambda_c, \mu_c)$, and $D_j(\lambda_{ij}, \mu_{ij})$, from (10), the efficiency of the optimal task
distribution will be equal to one. Thus, if the efficiency of the task distribution becomes one, the latency of the task distribution is the optimal latency $u^*$, according to Theorem 1.
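Theorem 1's equal-latency condition and the efficiency $\Gamma$ in (18) can be checked on a toy instance. The M/M/1-style delay $D_k(\lambda) = 1/(\mu_k - \lambda)$ below is an illustrative assumption, not the paper's exact delay functions; under it, the common latency $u^*$ has a closed form.

```python
def equal_latency_split(mu, x):
    """Toy Theorem 1 instance with delays D_k(l) = 1/(mu_k - l): pick
    per-node arrival rates so all nodes see the same latency u*.
    Setting 1/(mu_k - lam_k) = u* and summing over the K nodes gives
    u* = K / (sum(mu) - x)."""
    K = len(mu)
    u = K / (sum(mu) - x)            # common latency u*
    lam = [m - 1.0 / u for m in mu]  # rates satisfying sum(lam) == x
    return u, lam

def efficiency(delays):
    """Gamma from (18): one plus total idle time over total busy time."""
    worst = max(delays)
    return 1.0 + sum(worst - d for d in delays) / sum(delays)

u, lam = equal_latency_split([10.0, 8.0, 6.0], x=18.0)
delays = [1.0 / (m, l)[0] - 0 if False else 1.0 / (m - l) for m, l in zip([10.0, 8.0, 6.0], lam)]
# Every node sees u = 0.5, so this distribution achieves Gamma = 1.
```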
_D. Performance Analysis of the Proposed Online Optimization_
_Framework_
Next, we show that the proposed framework can achieve the
target competitive ratio γ.
**Theorem 2.** *For a given $\gamma$, the proposed framework satisfies $\text{ALG}(\sigma)/\text{OPT}(\sigma) \leq \gamma$ if: (i) a given $\gamma$ enables fog node $i$ to select $\hat{J}$ nodes, and (ii) the optimal task distribution can always be found, i.e., $\Gamma = 1$.*

*Proof.* The offline optimal latency of the nodes in $\mathcal{J}$ is greater than or equal to $\hat{u}$, i.e., $\hat{u} \leq \text{OPT}(\sigma)$. Also, in Algorithm 1, the selected nodes satisfy $D_j(\hat{\lambda}_{ij}, \mu_{ij}) \leq \gamma\hat{u}$, $\forall j \in \mathcal{J}$, where $|\mathcal{J}| = \hat{J}$. When the task distribution is not yet optimized with respect to $\mathcal{J}$, the latency that results from using distribution $\{\hat{\lambda}_i, \hat{\lambda}_c, \hat{\lambda}_{ij}\}$ can be shown as $\text{ALG}_b(\sigma) = \max\{D_i(\hat{\lambda}_i), D_c(\hat{\lambda}_c, \mu_c), D_{j\in\mathcal{J}}(\hat{\lambda}_{ij}, \mu_{ij})\}$. Recall that $\hat{u} \triangleq \max\{D_i(\hat{\lambda}_i), D_c(\hat{\lambda}_c, \mu_c), D_{\bar{j}}(\hat{\lambda}_{ij}, \bar{\mu}_{i\bar{j}})\}$, and, by Theorem 1, $\hat{u} = D_i(\hat{\lambda}_i) = D_c(\hat{\lambda}_c, \mu_c) = D_j(\hat{\lambda}_{ij}, \bar{\mu}_{ij})$. Since the service rates and computing speeds of a selected node $j \in \mathcal{J}$ are less than or equal to those of the ideal node, i.e., $\mu_{ij} \leq \bar{\mu}_{ij}$, $\mu_j \leq \bar{\mu}_j$, and $1/\omega_j \leq 1/\omega_{\bar{j}}$, we have $\hat{u} \leq D_{j\in\mathcal{J}}(\hat{\lambda}_{ij}, \mu_{ij})$. Therefore, we have $\text{ALG}_b(\sigma) = \max\{\hat{u}, D_j(\hat{\lambda}_{ij}, \mu_{ij})\} = \max\{D_j(\hat{\lambda}_{ij}, \mu_{ij})\} \leq \gamma\hat{u}$, $\forall j \in \mathcal{J}$. By optimizing the task distribution for the nodes in $\mathcal{J}$, the latency can be further reduced, i.e., $\text{ALG}(\sigma) \leq \text{ALG}_b(\sigma)$. Hence, it is possible to conclude that $\text{ALG}(\sigma) \leq \text{ALG}_b(\sigma) \leq \gamma\hat{u} \leq \gamma\,\text{OPT}(\sigma)$ and, therefore, $\text{ALG}(\sigma)/\text{OPT}(\sigma) \leq \gamma$.
This result shows that the online optimization framework
can achieve the target competitive ratio γ by determining a
proper number of neighboring nodes _J[ˆ] and optimizing the_
task distribution. According to Theorem 2, the ratio between
the latency achieved by executing one iteration of the proposed
framework and an offline optimal latency can be bounded by
the value of γ.
To satisfy the first condition of Theorem 2, the proper value of $\gamma$ needs to be found iteratively as shown in Fig. 3. Then, we prove that $\gamma$ converges to an upper bound. For this proof, we define the lowest transmission service rate as $\underline{\mu}_{ij}$, obtained when the maximum of $d_{in}$ is $\bar{d}_{ij}$. Also, the lowest computing service rate and the lowest processing speed are defined as $\underline{\mu}_j$ and $1/\bar{\omega}_j$, respectively.
**Theorem 3.** *The target competitive ratio $\gamma$ converges to $D_j(\hat{\lambda}_{ij}, \underline{\mu}_{ij})/\hat{u}$ if: (i) a given $\gamma$ enables fog node $i$ to select $\hat{J}$ nodes, and (ii) the optimal task distribution can always be found, i.e., $\Gamma = 1$.*

*Proof.* We show that there exists an upper bound of $\gamma$ denoted by $\bar{\gamma}$. Therefore, for a given sequence $\sigma$, we show that
$$\frac{\text{ALG}(\sigma)}{\text{OPT}(\sigma)} \leq \frac{\max_{\sigma'} \text{ALG}(\sigma')}{\min_{\sigma'} \text{OPT}(\sigma')} = \bar{\gamma},$$
where $\sigma'$ denotes any sequence. In the first phase of Algorithm 1, since $\hat{u}$ is calculated by assuming that all neighboring nodes are ideal nodes, the lower bound of the offline latency for any sequence is given by $\min_{\sigma'} \text{OPT}(\sigma') = \hat{u}$. Also, if the $\hat{J}$ neighboring nodes are located at the farthest distance $\bar{d}_{ij}$, the lowest fog transmission service rate, denoted by $\underline{\mu}_{ij}$, is obtained. Then, the worst case is defined by assuming that the neighboring nodes have the lowest service rates and computing speed, i.e., $\underline{\mu}_{ij}$, $\underline{\mu}_j$, and $1/\bar{\omega}_j$. Therefore, the latency in the worst case can be presented by $\max_{\sigma'} \text{ALG}(\sigma') = D_j(\hat{\lambda}_{ij}, \underline{\mu}_{ij})$. Finally, $\gamma$ always increases when it is updated, and, hence, $\gamma$ converges to a competitive ratio given by $\bar{\gamma} = D_j(\hat{\lambda}_{ij}, \underline{\mu}_{ij})/\hat{u}$.
Therefore, the proposed framework is able to find the target competitive ratio by iteratively updating $\gamma$ when $\bar{d}_{ij}$, $\underline{\mu}_j$, and $1/\bar{\omega}_j$ are not known to fog node $i$. Thus, once $\gamma$ is found through the iterative process, Algorithm 1 is used to select the neighboring nodes, and the tasks are offloaded to the neighboring nodes as stated in Theorem 1. As a result, the proposed framework yields the set of $\hat{J}$ selected neighboring nodes and the corresponding task distribution that can achieve
the target competitive ratio as shown in Theorem 2.
The upper bound in Theorem 3 is the performance in the worst case if a given $\gamma$ enables fog node $i$ to select $\hat{J}$ neighboring nodes, and the optimal task distribution can always be found, i.e., $\Gamma = 1$. If the first condition on the network size in
Theorem 3 cannot be satisfied, γ is updated. When the target
competitive ratio $\gamma$ converges to $\bar{\gamma}$, the number of iterations tends to infinity since the value of $\gamma$ asymptotically approaches $\bar{\gamma}$. In particular, as $\gamma$ becomes closer to $\bar{\gamma}$, the probability of
updating γ decreases exponentially. Therefore, after running a
finite, large number of iterations, the probability of updating
_γ can become marginal. When the current value of γ is rarely_
updated, the first condition on the network size in Theorem 3
is assumed to be satisfied, and, thus, the iteration process used
to update γ will terminate. In doing so, the final value of γ
that is smaller than ¯γ can be used to further reduce the latency
of the formed fog network.
To this end, we derive a lower bound of the probability, with respect to $\gamma$, that the initial fog node forms a fog network with $\hat{J}$ neighboring nodes in an iteration including $N$ observations.
To derive a statistical result, we assume that the values of
the communications and computing capabilities of neighboring
nodes are random variables. For example, the distance $d_{in}$ between the initial node and a neighboring node is a random variable within a finite range $[d_{ij}, \bar{d}_{ij}]$, and, therefore, the service rate $\mu_{in}$ from (1) is a random variable in the range $[\underline{\mu}_{ij}, \bar{\mu}_{ij}]$. Also, a neighboring node's computing service rate $\mu_n$ and computing delay $\omega_n$ can be modeled as random variables that lie in the finite ranges $[\underline{\mu}_j, \bar{\mu}_j]$ and $[\underline{\omega}_j, \bar{\omega}_j]$, respectively.
**Proposition 1.** *The probability that the initial fog node forms a fog network with $\hat{J}$ neighboring nodes in an iteration including $N$ observations is at least*
$$p'(\gamma) = \sum_{k=\hat{J}}^{N} \binom{N}{k} (p'_s)^k (1 - p'_s)^{N-k},$$
*where*
$$p'_s = F_{d_{in}}\!\left(\left[\frac{W_l N_0}{\beta_1 P_{tx,i}\, h}\left(2^{\left(\frac{1}{\gamma}(\bar{\mu}_{ij} - \hat{\lambda}_{ij}) + \hat{\lambda}_{ij}\right)K/W_l} - 1\right)\right]^{-\beta_2^{-1}}\right)\left(1 - F_{\mu_n}\!\left(\frac{1}{\gamma}(\bar{\mu}_j - \hat{\lambda}_{ij}) + \hat{\lambda}_{ij}\right)\right)F_{\omega_n}\!\left(\gamma\omega_j\right).$$
_Proof. See Appendix A._
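The bound in Proposition 1 is a binomial tail: each of the $N$ observed neighbors independently passes the threshold test with some per-node probability $p'_s$, and the network forms if at least $\hat{J}$ of them pass. A minimal numeric sketch, with an assumed value of $p'_s$ rather than the full closed-form expression above:

```python
from math import comb

def formation_probability(p_s, n_obs, j_hat):
    """Probability that at least j_hat of n_obs independent arrivals pass
    the threshold test, each with per-node success probability p_s."""
    return sum(comb(n_obs, k) * p_s**k * (1 - p_s)**(n_obs - k)
               for k in range(j_hat, n_obs + 1))

# With an assumed p_s = 0.05, N = 300 observations, and J-hat = 6
# required nodes, the network forms with probability above 0.99.
p = formation_probability(0.05, 300, 6)
```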
Fig. 4: Example of the probability $p'$ derived in Proposition 1.

By using the probability in Proposition 1, the first condition of Theorem 3 can be replaced with the condition that $p'(\gamma)$ is very close to 1. This is due to the fact that, for a given $\gamma$,
a fog network is always formed with $\hat{J}$ neighboring nodes if $p'(\gamma) = 1$. We define $\bar{\gamma}_s$ as the smallest value of $\gamma$ with which the initial fog node forms a network including $\hat{J}$ neighboring nodes with probability $p'(\gamma) = 1$ in an iteration including $N$ observations, i.e., $\bar{\gamma}_s = \min(\{\gamma \mid p'(\gamma) = 1\})$.
Fig. 4 shows the upper bound $\bar{\gamma}$ derived in Theorem 3. Fig. 4 also shows the probability $p'(\gamma)$ derived in Proposition 1 with respect to the target competitive ratio $\gamma$ for different numbers of observations $N$. In Fig. 4, the neighboring nodes are randomly located on a circular area with the maximum distance $\bar{d}_{ij} = 50$ m. Also, $\mu_n$ and $\omega_n$ follow uniform distributions in the ranges $[15, 40]$ and $[0.05, 0.10]$, respectively. In Fig. 4, we use $h = 1$, $\hat{J} = 6$, $\hat{\lambda}_{ij} = 1.4$, and $l = 1$. If the initial fog node sets $\gamma = \bar{\gamma}_s$, we can see that $p'(\gamma) = 1$ for a large value of $N$. For example, the probability $p'(\gamma)$ is one when $\gamma = 2.08$ and $N = 300$. In this case, since the first condition of Theorem 3 is satisfied with a probability close to one, the iteration process for updating $\gamma$ will terminate if the optimal task allocation is achieved. Also, Fig. 4 shows that $\bar{\gamma}_s$ becomes larger with small $N$. This is due to the fact that the initial fog node must increase $\bar{\gamma}_s$ to select its neighboring nodes within a small number of observations. Since $p'(\gamma)$ approaches one with increasing $\gamma$, it is possible to determine $\bar{\gamma}_s$ by numerically finding the smallest $\gamma$ such that $p'(\gamma)$ is very close to 1. Then, in Fig. 4, we can observe that $p'(\bar{\gamma}_s)$ becomes one. Consequently, by setting the initial value of the target competitive ratio $\gamma$ to $\bar{\gamma}_s$, the results of Proposition 1 can be used to prevent any trial and error in the network formation stage. If the conditions of Theorem 3 are satisfied, a network can be formed at once, and updating $\gamma$ is not required. To do so, the initial fog node, however, has to know the information assumed to derive $\bar{\gamma}_s$. When the information is unknown, the proposed framework in Fig. 3 can be used to iteratively optimize the target competitive ratio.
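The numerical search for $\bar{\gamma}_s$ described above can be sketched as a grid scan: evaluate $p'(\gamma)$ for increasing $\gamma$ and return the first value at which it is numerically one. The mapping from $\gamma$ to the per-node success probability below is a made-up stand-in for the closed-form expression in Proposition 1.

```python
from math import comb

def smallest_feasible_gamma(p_s_of_gamma, n_obs, j_hat, grid, tol=1e-6):
    """Scan the gamma grid and return the first gamma whose formation
    probability p'(gamma) exceeds 1 - tol, i.e., an estimate of
    gamma_s-bar; return None if no gamma on the grid qualifies."""
    def p_prime(p_s):
        # Binomial tail: at least j_hat successes out of n_obs trials.
        return sum(comb(n_obs, k) * p_s**k * (1 - p_s)**(n_obs - k)
                   for k in range(j_hat, n_obs + 1))
    for g in grid:
        if p_prime(p_s_of_gamma(g)) > 1.0 - tol:
            return g
    return None

# Toy model: the per-node success probability grows linearly with gamma.
grid = [1.0 + 0.1 * i for i in range(20)]
g_s = smallest_feasible_gamma(lambda g: min(1.0, 0.05 * g), 300, 6, grid)
```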
V. SIMULATION RESULTS AND ANALYSIS
For our simulations, we use a MATLAB simulator[7] in which
we consider an initial fog node that can connect to neighboring
7For further validation of our results, future works can implement the proposed framework on a real networking testbed.
TABLE II: Simulation parameters

| Notation | Value |
| --- | --- |
| $\omega_i = \omega_j$, $\omega_c$ | 50, 25 msec/packet |
| $\underline{\mu}_i = \underline{\mu}_j$, $\bar{\mu}_i = \bar{\mu}_j$ | 15, 40 packet/sec |
| $N$, $\tau$ | 300, 0.002 (0.005 in Fig. 8) |
| $P_{tx,i}$, $\beta_1$, $\beta_2$, $h$ | 20 dBm, $10^{-3}$, 4, 1 |
| $K$ | 64 kilobytes |
| $B$, $N_0$ | 3 MHz, $-174$ dBm/Hz |
Fig. 6: Computing latency and percentage of tasks processed
at the initial fog node i.
Fig. 5: Latency for different task arrival rates at the initial fog
node i.
fog nodes uniformly distributed within a circular area of radius
50 m. The arrival sequence of the fog nodes follows a uniform
distribution. The task arrival rate at fog node i is xi = 10
packets per second. The computing service rate of the fog
nodes is randomly drawn from a uniform distribution over a
range of 15 to 40 packets per second. All statistical results are
averaged over a large number of simulation runs. Similar to
prior work [1], the simulation results are evaluated with the
parameters listed in Table II.
_A. Performance Evaluation of the Online Optimization Frame-_
_work_
Fig. 5 shows the latency when the total task arrival rate
increases from 10 to 19 packets per second with dc = 100,
120, and 140 m, respectively. For comparison purposes, we
use a baseline algorithm that observes the first 110 of the 300 arriving nodes and then selects the neighboring nodes from the rest of the arrivals by using the secretary algorithm in [1]. In Fig. 5, we show that the
the secretary algorithm in [1]. In Fig. 5, we show that the
proposed framework can reduce the latency compared to the
baseline, for all task arrival rates. For instance, the latency can
be reduced by up to 19.25% compared to the baseline when
_xi = 19 and dc = 140 m. Also, from Fig. 5, we can see that_
the latency decreases as the distance to the cloud is reduced.
With a shorter distance to the cloud, the cloud transmission
service rate becomes higher. Therefore, the cloud is able to
process more tasks with a low latency, and the overall latency
of the fog network is improved. For example, at xi = 19, if
_dc decreases from 140 m to 100 m, the latency is reduced by_
4.29%. Moreover, we show that the latency decreases as fewer tasks arrive at the initial fog node $i$. For instance, when $x_i$ decreases from 19 to 10, the latency is reduced by about 25% with $d_c = 100$ m.
Fig. 7: Latency for different numbers of neighboring nodes.
Fig. 6 shows the latency and the percentage of tasks
processed at the initial fog node i when the total task arrival
rate increases from 10 to 19 packets per second with average
fading gain values of h = 0.3, 0.6, and 1.0, respectively. In
Fig. 6, we show that the latency decreases as the average
fading gain increases, for all task arrival rates. For a higher
average fading gain, the transmission service rates of the fog
computing nodes become larger. Therefore, the tasks can be
efficiently offloaded, with low latency, to neighboring fog
nodes and the cloud hence improving the overall latency of the
fog network. Also, from Fig. 6, we can see that the percentage
of tasks processed at the initial fog node i decreases as the
total task arrival rate xi increases. Moreover, Fig. 6 shows
that the initial fog node i tends to process more tasks when
_h is smaller. This is due to the fact that a smaller h increases_
the wireless transmission latency required to offload tasks to
other computing nodes. For example, at xi = 10, if h increases
from 0.3 to 1.0, the percentage of tasks processed at node i
increases by up to about 10%.
Fig. 7 shows the relationship between the latency and the
number of neighboring nodes when the total task arrival rate
is given by xi = 10 and 13 packets per second, respectively,
and the processing delays of the fog nodes are given by $\omega_i = \omega_j = 50$ and 30 milliseconds, respectively. In Fig. 7, a smaller
Fig. 8: Changes in the target competitive ratio γ over 700
updates.
processing delay indicates that the fog nodes have a higher
processing speed. From Fig. 7, we can observe the tradeoff
between scenarios having a large number of fog nodes with
low processing power and scenarios having a small number
of fog nodes with high processing power. If fog nodes with
higher processing speed are deployed, latency is reduced, and
the formed network size decreases. This is due to the fact
that the fog nodes having a faster processing speed do not
need to form a large network. In fact, a larger network size
can lead to lower transmission service rates. For instance, if
the processing delay of fog nodes decreases from 50 to 30
milliseconds, the latency is reduced by up to 18.8% while the
number of neighboring nodes decreases from 7 to 5.
Fig. 8 plots the value of γ during 700 updates for different
distances to the cloud, dc = 100 m and 120 m, respectively.
Fig. 8 shows that the value of γ approaches a constant value.
For instance, γ first reaches 1.17 at 38 iterations with dc =
120 m. Then, γ becomes 1.21 at 329 iterations, and this value
is maintained thereafter. From Fig. 8, we can see that fog
node i can find a proper γ after a finite number of trials and
updates. Also, the results of Fig. 8 show that γ becomes larger
as the distance to the cloud decreases. This is because $\hat{u}$ and the
threshold value decrease when dc is reduced. If the threshold
value decreases, it becomes more challenging to select the $\hat{J}$
neighboring nodes within the limited number of observations
since the selected neighboring nodes must have a lower latency
than the threshold. Therefore, in order to maintain a proper
threshold value, γ will be larger when dc decreases.
Fig. 9 shows the relationship between the fog transmission
service rate and the number of neighboring nodes when
_xi = 10 and 13, respectively. Here, we can see that the_
fog transmission service rate increases as the number of
neighboring nodes decreases. This stems from the fact that
the bandwidth per node increases as fewer fog nodes share the
total bandwidth. For instance, the fog transmission service rate
can increase by 15.6% if $\hat{J}$ goes from 6 to 4 with $x_i = 10$.
Fig. 9 also shows that the formed network size becomes larger
if xi increases. This is due to the fact that offloading tasks to a
larger size of the network can reduce the tasks per node, and,
hence, the maximum latency of the network will decrease.
Fig. 9: Fog transmission service rate with respect to the
number of neighboring nodes.
Fig. 10: Task distribution with respect to the number of
neighboring nodes.
For instance, when $x_i = 10$, the range of $\hat{J}$ is between 4 and 6. However, if $x_i = 13$, $\hat{J}$ falls in the range between 5 and 7.
In Fig. 10, we show the task distribution among neighboring
nodes, the cloud, and fog node i for different numbers of
neighboring nodes when two bandwidth allocation approaches
are used, respectively. It can be seen that the cloud-centric
bandwidth allocation increases the tasks offloaded to the
cloud when compared to the equal-bandwidth allocation. This
is because the cloud transmission service rate increases, so
offloading more tasks to the cloud can lower the latency. For
instance, if the cloud-centric bandwidth allocation is used and
$\hat{J} = 4$, the cloud is allocated 22.86% more tasks than in
the case of equal bandwidth allocation. Also, in Fig. 10, we
show that the optimal network size is different, depending
on the bandwidth allocation scheme. For instance, the cloud-centric bandwidth allocation yields a larger network size than
the equal bandwidth allocation. When the network size is
large, the cloud can maintain a high transmission service rate
by using the cloud-centric bandwidth allocation. Therefore,
the high cloud transmission service rate enables to offload
most tasks to the cloud with a low transmission latency.
For example, Fig. 10 shows that the number of neighboring
nodes is between 4 and 6 if equal bandwidth allocation is
Fig. 11: Latency comparison versus the target competitive
ratio.
used. However, if the cloud-centric bandwidth allocation is
used, the number of neighboring nodes varies from 4 to 9.
Moreover, Fig. 10 shows that the number of tasks offloaded
to the cloud decreases when Ĵ increases from 4 to 6 for both
bandwidth allocation schemes. In this phase, the number of
tasks offloaded to neighboring nodes will increase because
offloading more tasks at the fog layer can reduce the latency
at the cloud. However, if the number of neighboring nodes
increases when using the cloud-centric bandwidth allocation,
e.g., there are 7 or more neighboring nodes, the number of
tasks offloaded to the neighboring nodes will decrease with the
network size. This is due to the fact that the fog transmission
service rates are smaller for larger networks, which yields
higher fog transmission latency. As a result, more tasks will
be allocated to the cloud so as to utilize its fast computing
resources.
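The intuition behind this shift toward the cloud can be sketched numerically. The following Python toy model is not the paper's exact formulation — the total task load, the bandwidth, and the cloud-centric split (cloud link receiving half the bandwidth) are illustrative assumptions — but it shows how raising the cloud service rate moves the min-max-optimal task share toward the cloud, using an M/M/1-style transmission delay 1/(µ − λ):

```python
# Toy sketch (assumed parameters, not the paper's model): compares equal
# vs. cloud-centric bandwidth allocation for a cloud plus n_fog fog nodes.

def delay(mu, lam):
    """M/M/1-style transmission delay for arrival rate lam at service rate mu."""
    assert lam < mu, "queue must be stable"
    return 1.0 / (mu - lam)

def best_cloud_share(mu_cloud, mu_fog, n_fog, total=10.0, steps=1000):
    """Grid-search the task share sent to the cloud that minimizes the
    maximum latency; the remainder is split equally over n_fog fog nodes."""
    best = None
    for i in range(steps + 1):
        lam_c = total * i / steps
        lam_f = (total - lam_c) / n_fog
        if lam_c < mu_cloud and lam_f < mu_fog:
            worst = max(delay(mu_cloud, lam_c), delay(mu_fog, lam_f))
            if best is None or worst < best[1]:
                best = (lam_c / total, worst)
    return best

n_fog = 5
bandwidth = 12.0
# Equal allocation: every link (cloud + n_fog fog links) gets the same share.
equal = best_cloud_share(mu_cloud=bandwidth / (n_fog + 1),
                         mu_fog=bandwidth / (n_fog + 1), n_fog=n_fog)
# Cloud-centric allocation: the cloud link gets half the bandwidth (an assumption).
centric = best_cloud_share(mu_cloud=bandwidth / 2,
                           mu_fog=(bandwidth / 2) / n_fog, n_fog=n_fog)
print(f"equal: cloud share {equal[0]:.2f}; cloud-centric: cloud share {centric[0]:.2f}")
```

Under the cloud-centric split, the optimal cloud share rises sharply, mirroring the qualitative behavior observed in Fig. 10.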
_B. Performance Evaluation of Algorithm 1 for a fixed γ_
In Figs. 11 and 12, we evaluate the performance of Algorithm 1 when the proposed framework uses a fixed value
of γ without constraint (14). While the target competitive
ratio is used in the proposed framework to determine the
threshold value and make a decision on node selection, the
baseline algorithm has a different mechanism to determine
threshold values. Therefore, the latency results of the baseline
do not depend on the target competitive ratio. By using a
predefined γ, the update step of γ is not needed, which can
be useful for scenarios in which the delay of this update would
increase the network latency. Fig. 11 shows the latency for
the different preset values of γ ranging from 1.2 to 1.5 with
_dc = 100 m and 120 m, respectively. From Fig. 11, we can see_
that the proposed framework achieves lower latencies than the
baseline, for all γ. For instance, the latency of the proposed
framework can be reduced by up to 20.3% compared to that of
the baseline if γ = 1.2 and dc = 100 m. Also, Fig. 11 shows
that the latency achieved by the proposed framework becomes
smaller when γ decreases. This stems from the fact that a
low threshold value with small γ allows the initial fog node
to only select neighboring nodes having a high performance.
For example, the latency can be reduced by up to 12.1% if γ
decreases from 1 5 to 1 2 with d 100 m
Fig. 12: The required number of observations for different
values of γ.
Fig. 13: Performance comparison of two bandwidth allocation
schemes with respect to the number of neighboring nodes.
In Fig. 12, we show the number of observations of the neighboring node arrivals until Ĵ neighboring nodes are selected for
different γ with dc = 100 m and 120 m, respectively. In this
figure, we can see that a large value of γ results in a small
number of observations due to the associated increase in the
threshold value. For instance, as γ increases from 1.2 to 1.5,
the number of observations can be reduced by about 96% with
dc = 100 m. Fig. 12 shows that a large value of dc lowers the
number of observations, since increasing dc results in a large û
and threshold value. For example, the number of observations
can be reduced by about 42% if dc increases from 100 m
to 120 m with γ = 1.2. Moreover, from Figs. 11 and 12,
we can characterize the tradeoff between the latency and the
number of observations. In particular, a small γ results in a
lower latency but requires a larger number of observations.
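This tradeoff can be mimicked with a short Python sketch. The acceptance rule below (select an arriving node when its latency sample falls below γû), the fixed estimate û = 1, and the uniform latency distribution are all assumptions made for illustration; in the framework, Dn depends on the actual service rates and distances:

```python
import random

# Illustrative sketch: an initial fog node observes arriving neighbors one
# by one and accepts a node when its latency sample is below gamma * u_hat.
# A smaller gamma (tighter threshold) needs more observations to select
# j_hat nodes -- the latency/observations tradeoff of Figs. 11 and 12.

def observations_until_formed(gamma, u_hat=1.0, j_hat=4, seed=0):
    """Count arrivals observed until j_hat nodes pass the threshold test."""
    rng = random.Random(seed)            # fixed seed: same arrival sequence
    selected, observed = 0, 0
    while selected < j_hat:
        observed += 1
        d_n = rng.uniform(0.5, 3.0)      # assumed latency distribution
        if d_n <= gamma * u_hat:
            selected += 1
    return observed

for gamma in (1.2, 1.5):
    print(f"gamma = {gamma}: network formed after "
          f"{observations_until_formed(gamma)} observations")
```

Because both runs draw the same arrival sequence, the looser threshold (γ = 1.5) accepts a superset of the nodes accepted at γ = 1.2, so it never needs more observations — the same monotone effect reported for Fig. 12.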
Fig. 13 shows the percentage of tasks offloaded to the
cloud and the scheduling efficiency of the task distribution
when the two bandwidth allocation schemes are used, respectively,
with γ = 1.2 and dc = 100 m. In Fig. 13(a), the fraction of tasks
offloaded to the cloud decreases as the number of fog nodes
increases, since the cloud transmission service rate decreases.
Also, Fig. 13(b) shows that when equal bandwidth allocation
is used for a large network size, the scheduling efficiency may
not be optimal, i.e., Γ > 1, due to a large latency for the transmissions to the cloud. In this case, though the equal-bandwidth
allocation still achieves a Γ that is close to 1, the cloud-centric
bandwidth allocation can be used to enhance efficiency. This
is because the cloud-centric bandwidth allocation increases the
cloud transmission service rate by allocating more bandwidth.
It can be seen, for instance, that the equal-bandwidth allocation
yields Γ = 1.013 in the case of 6 neighboring nodes, but the
efficiency of the cloud-centric bandwidth allocation becomes
Γ = 1.
_C. Optimal Network Size in an Offline Setting_
Fig. 14: Latency for different numbers of neighboring fog nodes
in an offline setting.
Fig. 14 shows the optimal latency for different network sizes
when all neighboring nodes are located at dij varying from
10 m to 40 m. In Fig. 14, it is assumed that complete information on the network is known and that the fog nodes have
identical parameters, i.e., µi = µj = 20 when dc = 150 m.
In this offline setting, we study the impact of the network
size on the latency by using an offline optimization solver to
find the optimal latency for a given network. Fig. 14 shows
that the optimal latency is directly affected by the number of
neighboring nodes. When the network size increases, the latency
starts to decrease, since fewer tasks can be offloaded to each
neighboring node. However, if the network size keeps increasing,
the latency will eventually increase, since the bandwidth per node
becomes smaller. For example, the optimal latency decreases when
the number of neighboring nodes increases from 1 to 3 with
dij = 40 m. However, once the number of neighboring nodes
increases beyond 3, the latency starts to increase. Moreover,
from Fig. 14, we can see that the optimal network size
changes with the distances between fog nodes. For instance,
for dij = 40 m, the latency can be minimized when there
are 3 neighboring nodes in the fog network. However, if
dij = 10 m, the latency is minimized when the number of
neighboring nodes is 5. Therefore, if the fog transmission
service rate is high (for shorter distances), increasing the
number of neighboring nodes to 5 can reduce the latency. On
the other hand, if the fog transmission service rate is low (due
to a poor wireless channel), having a smaller network size with
3 nodes is required to minimize the latency. Also, we note that
the results in Fig. 14 show that there exists an optimal network
size that can be found by running Phase 1 of Algorithm 1.
Finally, Fig. 14 clearly shows that the latency is reduced by
offloading the tasks to both the fog layer and the cloud, instead
of relying solely on the cloud. For example, if the tasks are
offloaded to the cloud, the initial fog node, and 5 neighboring
nodes located at dij = 10 m, the latency can be reduced by
up to 43.9% compared to the case using the cloud only.
VI. CONCLUSION AND FUTURE WORK
In this paper, we have proposed a novel framework to jointly
optimize the formation of fog networks and the distribution of
computational tasks in a hybrid fog-cloud system. We have
addressed the problem using an online optimization formulation whose goal is to minimize the maximum latency of the
nodes in the fog network in the presence of uncertainty about
fog nodes’ arrivals. To solve the problem, we have proposed
online optimization algorithms whose target competitive ratio
is achieved by suitably selecting the neighboring nodes while
effectively offloading the tasks to the neighboring fog nodes
and the cloud. The theoretical analysis and simulation results
have shown that the proposed framework achieves a low target
competitive ratio while successfully minimizing the maximum
latency in fog computing. Extensive simulation results are
used to showcase the performance benefits of the proposed
approach. For future work, a dynamic bandwidth scheme
can be designed to further reduce the latency. Also, packet
prioritizing can be adopted at the initial fog node to meet
different service-level latency requirements. Moreover, the
proposed framework can be extended to the scenario in which
multiple fog networks are formed by multiple initial fog nodes.
Further, the proposed fog network formation algorithm can
be extended to account for the instantaneous fading by using
advanced techniques such as stochastic optimization. Finally,
one important future work is to conduct an experimental
analysis pertaining to fog computing over an actual wireless
testbed.
APPENDIX A
PROOF OF PROPOSITION 1
Proof. For a given γ, the arriving node n is selected by the
initial fog node if Dn(λ̂ij, µin) ≤ γû. The probability of the
node selection event Es is ps = Pr{Dn(λ̂ij, µin) ≤ γû}. With the
same target competitive ratio γ, E is defined as the event
where Es happens more than Ĵ times during N trials within
an iteration. Since event E is a sufficient condition to form a
network for a given γ, the probability to form a network is at
least p = Σ_{k=Ĵ}^{N} (N choose k) ps^k (1 − ps)^{N−k}, where N is the
maximum number of observations allowed within an iteration,
and all inputs σn, ∀n ∈ [1, N], are independent.
Since ps = Pr{1/(µin − λ̂ij) + 1/µin + 1/(µn − λ̂ij) + 1/µn + 2ωn ≤
γ(1/(µ̄ij − λ̂ij) + 1/µ̄ij + 1/(µ̄j − λ̂ij) + 1/µ̄j + 2ωj)}, a lower bound on
ps can be given by p′s = Pr{E1 ∩ E1′ ∩ E2 ∩ E2′ ∩ E3},
where E1 is the event where 1/(µin − λ̂ij) − γ/(µ̄ij − λ̂ij) ≤ 0,
E1′ is the event where 1/µin − γ/µ̄ij ≤ 0, E2 is the event where
1/(µn − λ̂ij) − γ/(µ̄j − λ̂ij) ≤ 0, E2′ is the event where
1/µn − γ/µ̄j ≤ 0, and E3 is the event where ωn − γωj ≤ 0. Then, p′s can
be rewritten as Pr{E1′|E1}Pr{E1}Pr{E2′|E2}Pr{E2}Pr{E3}.
Since γ ≥ 1, if the condition for E1 is satisfied, i.e.,
1/(µin − λ̂ij) ≤ γ/(µ̄ij − λ̂ij), then the condition for E1′ is also
satisfied, i.e., 1/µin ≤ γ/µ̄ij. This, in turn, implies Pr{E1′|E1} =
1. Similarly, if E2 happens, then it always incurs E2′, and,
thus, Pr{E2′|E2} = 1. In consequence, p′s can be simplified
as p′s = Pr{E1}Pr{E2}Pr{E3}. Note that Pr{E1}
can be expressed by using din, since µin is a function of
din in (1). When Fdin, Fµn, and Fωn, respectively, are
the cumulative distribution functions with respect to din,
µn, and ωn, the three probabilities are Pr{E1} =
Fdin( [Wl N0 (2^{((1/γ)(µ̄ij(xij) − λ̂ij) + λ̂ij)K/Wl} − 1)/Ptx,i]^{−1/β} ), Pr{E2} =
1 − Fµn( (1/γ)(µ̄j − λ̂ij) + λ̂ij ), and Pr{E3} = Fωn(γωj).
Finally, it is clear that p′ ≜ Σ_{k=Ĵ}^{N} (N choose k) p′s^k (1 − p′s)^{N−k} ≤ p
due to p′s ≤ ps. Hence, p′ is a lower bound of the probability
that a given target competitive ratio is used to form a network
without an update.
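The bound p′ in Proposition 1 is a binomial tail probability and is straightforward to evaluate numerically. In the sketch below, the per-arrival selection probability, Ĵ, and N are assumed example values rather than quantities derived from the system model:

```python
from math import comb

# Binomial-tail lower bound from Proposition 1: probability that at least
# j_hat of n_max independent observations pass the threshold test.

def formation_probability(p_s, j_hat, n_max):
    """Return sum_{k=j_hat}^{n_max} C(n_max, k) p_s^k (1 - p_s)^(n_max - k)."""
    return sum(comb(n_max, k) * p_s**k * (1 - p_s)**(n_max - k)
               for k in range(j_hat, n_max + 1))

# Assumed example numbers: per-arrival selection probability 0.3,
# J_hat = 4 nodes required, N = 30 observations allowed.
p_lower = formation_probability(0.3, 4, 30)
print(f"lower bound on forming a network without a gamma update: {p_lower:.3f}")
```

As expected, the bound increases monotonically in the per-arrival selection probability, so any p′s ≤ ps yields a valid lower bound on p.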
REFERENCES
[1] G. Lee, W. Saad, and M. Bennis, “An online secretary framework for
fog network formation with minimal latency,” in Proc. IEEE Int. Conf.
_on Commun. (ICC), Paris, France, May 2017, pp. 1–6._
[2] Z. Dawy, W. Saad, A. Ghosh, J. G. Andrews, and E. Yaacoub, “Toward massive machine type cellular communications,” IEEE Wireless
_Communications, vol. 24, no. 1, pp. 120–128, Feb. 2017._
[3] M. Mozaffari, W. Saad, M. Bennis, and M. Debbah, “Unmanned aerial
vehicle with underlaid device-to-device communications: Performance
and tradeoffs,” IEEE Trans. Wireless Commun., vol. 15, no. 6, pp. 3949–
3963, Jun. 2016.
[4] T. Park, N. Abuzainab, and W. Saad, “Learning how to communicate in
the Internet of Things: Finite resources and heterogeneity,” IEEE Access,
vol. 4, pp. 7063–7073, Nov. 2016.
[5] M. Chiang and T. Zhang, “Fog and IoT: An overview of research
opportunities,” IEEE Internet of Things Journal, vol. 3, no. 6, pp. 854–
864, Dec. 2016.
[6] Cisco, “Fog computing and the Internet of Things: Extend the cloud to
where the things are,” Cisco white paper, 2015.
[7] M. Peng, S. Yan, K. Zhang, and C. Wang, “Fog-computing-based radio
access networks: issues and challenges,” IEEE Network, vol. 30, no. 4,
pp. 46–53, Jul. 2016.
[8] M. S. ElBamby, M. Bennis, and W. Saad, “Proactive edge computing
in latency-constrained fog networks,” in Proc. European Conf. on Netw.
_and Commun., Oulu, Finland, May 2017, pp. 1–6._
[9] G. Lee, W. Saad, and M. Bennis, “Online optimization techniques for
effective fog computing under uncertainty,” MMTC Communications Frontiers, vol. 12, no. 4, pp. 19–23, Jul. 2017.
[10] M. Yannuzzi, R. Milito, R. Serral-Gracià, D. Montero, and M. Nemirovsky, “Key ingredients in an IoT recipe: Fog computing, cloud
computing, and more fog computing,” in Proc. IEEE 19th International
_Workshop on Computer Aided Modeling and Design of Communication_
_Links and Networks (CAMAD), Athens, Greece, Dec 2014, pp. 325–329._
[11] F. Bonomi, R. Milito, J. Zhu, and S. Addepalli, “Fog computing and its
role in the internet of things,” in Proc. 1st MCC workshop on Mobile
_cloud computing._ Helsinki, Finland: ACM, Aug. 2012, pp. 13–16.
[12] C. Vallati, A. Virdis, E. Mingozzi, and G. Stea, “Exploiting LTE
D2D communications in M2M fog platforms: Deployment and practical
issues,” in Proc. IEEE 2nd World Forum on IoT, Milan, Italy, Dec. 2015,
pp. 585–590.
[13] A. Khelil and D. Soldani, “On the suitability of device-to-device
communications for road traffic safety,” in Proc. IEEE World Forum
_on Internet of Things (WF-IoT), Seoul, Korea, Mar. 2014, pp. 224–229._
[14] T. H. Luan, L. X. Cai, J. Chen, X. Shen, and F. Bai, “Vtube: Towards
the media rich city life with autonomous vehicular content distribution,”
in Proc. IEEE Conf. Sensor, Mesh and Ad Hoc Commun. and Netw.,
Salt Lake City, UT, USA, Jun. 2011, pp. 359–367.
[15] T. Nishio, R. Shinkuma, T. Takahashi, and N. B. Mandayam, “Service-oriented heterogeneous resource sharing for optimizing service latency
in mobile cloud,” in Proc. 1st Int. Wksh. on Mobile Cloud Comput.
_Netw., Bangalore, India, Jul. 2013, pp. 19–26._
[16] V. Sharma, J. D. Lim, J. N. Kim, and I. You, “SACA: Self-aware
communication architecture for IoT using mobile fog servers,” Mobile
_Information Systems, vol. 2017, pp. 1–17, Apr. 2017._
[17] T. Zhao, S. Zhou, X. Guo, and Z. Niu, “Tasks scheduling and resource
allocation in heterogeneous cloud for delay-bounded mobile edge computing,” in Proc. IEEE Int. Conf. on Commun. (ICC), Paris, France, May
2017, pp. 1–7.
[18] R. Kaewpuang, D. Niyato, P. Wang, and E. Hossain, “A framework for
cooperative resource management in mobile cloud computing,” IEEE J.
_Sel. Areas in Commun., vol. 31, no. 12, pp. 2685–2700, Dec. 2013._
[19] M. Khaledi, M. Khaledi, and S. K. Kasera, “Profitable task allocation in
mobile cloud computing,” in Proc. 12th Int. Symp. on QoS and Security
_for Wireless and Mobile Networks, Malta, Nov. 2016._
[20] I. Ketykó, L. Kecskés, C. Nemes, and L. Farkas, “Multi-user computation offloading as multiple knapsack problem for 5G mobile edge
computing,” in Proc. European Conf. on Netw. and Commun., Athens,
Greece, Jun. 2016, pp. 225–229.
[21] V. Souza, W. Ramirez, X. Masip-Bruin, E. Marn-Tordera, G. Ren,
and G. Tashakor, “Handling service allocation in combined fog-cloud
scenarios,” in Proc. IEEE Int. Conf. on Commun. (ICC), Kuala Lumpur,
Malaysia, May 2016, pp. 1–5.
[22] S. H. Park, O. Simeone, and S. S. Shitz, “Joint optimization of cloud and
edge processing for fog radio access networks,” IEEE Trans. Wireless
_Commun., vol. 15, no. 11, pp. 7621–7632, Nov. 2016._
[23] Y. Yu, J. Zhang, and K. B. Letaief, “Joint subcarrier and CPU time
allocation for mobile edge computing,” in Proc. IEEE Global Commun.
_Conf. (GLOBECOM), Washington DC, USA, Dec. 2016._
[24] R. Deng, R. Lu, C. Lai, and T. H. Luan, “Towards power consumptiondelay tradeoff by workload allocation in cloud-fog computing,” in Proc.
_IEEE Int. Conf. on Commun. (ICC), London, UK, Jun. 2015, pp. 3909–_
3914.
[25] Y. Mao, J. Zhang, S. Song, and K. B. Letaief, “Power-delay tradeoff
in multi-user mobile-edge computing systems,” in Proc. IEEE Global
_Commun. Conf. (GLOBECOM), Washington DC, USA, Dec. 2016._
[26] G. Lee, W. Saad, and M. Bennis, “Online optimization for low-latency
computational caching in fog networks,” in Proc. Fog World Congress
_2017, Santa Clara, CA, USA, Jun. 2017._
[27] N. Wang, B. Varghese, M. Matthaiou, and D. S. Nikolopoulos, “Enorm:
A framework for edge node resource management,” IEEE Transactions
_on Services Computing, pp. 1–1, Sep. 2017._
[28] D. P. Bertsekas, R. G. Gallager, and P. Humblet, Data networks.
Prentice-Hall International New Jersey, 1992, vol. 2.
[29] B. Varghese, N. Wang, S. Barbhuiya, P. Kilpatrick, and D. S. Nikolopoulos, “Challenges and opportunities in edge computing,” in Proc. Int.
_Conf. on Smart Cloud, New York, NY, USA, Nov. 2016, pp. 20–26._
[30] K. Doppler, C. B. Ribeiro, and J. Kneckt, “Advances in D2D communications: Energy efficient service and device discovery radio,” in
_Proc. Wireless Veh. Technol., Info. Theory, Aerosp. Electr. Syst. Technol.,_
Chennai, India, Feb. 2011, pp. 1–6.
[31] M. Babaioff, N. Immorlica, D. Kempe, and R. Kleinberg, “A knapsack
secretary problem with applications,” in Proc. Int. Workshop on Approx.
_and Random., and Combinatorial Optimization, Princeton, NJ, USA,_
Aug. 2007, pp. 16–28.
[32] W. Saad, Z. Han, M. Debbah, and A. Hjorungnes, “A distributed
coalition formation framework for fair user cooperation in wireless
networks,” IEEE Trans. Wireless Commun., vol. 8, no. 9, pp. 4580–
4593, Sep. 2009.
[33] A. Borodin and R. El-Yaniv, Online computation and competitive
_analysis._ Cambridge University Press, 2005.
[34] J. Nocedal and S. J. Wright, Numerical Optimization, 2nd ed. New
York, NY, USA: Springer, 2006.
[35] S. Mirshekarian and D. N. Sormaz, “Correlation of job-shop scheduling problem features with scheduling efficiency,” Expert Systems with
_Applications, vol. 62, pp. 131–147, 2016._
# sensors
_Article_
## Enriching IoT Modules with Edge AI Functionality to Detect Water Misuse Events in a Decentralized Manner
**Dimitrios Loukatos *, Kalliopi-Agryri Lygkoura, Chrysanthos Maraveas and Konstantinos G. Arvanitis**
Department of Natural Resources Management and Agricultural Engineering, Agricultural University of Athens,
75 Iera Odos Str., Botanikos, 11855 Athens, Greece; stud616018@aua.gr (K.-A.L.); maraveas@aua.gr (C.M.);
karvan@aua.gr (K.G.A.)
*** Correspondence: dlouka@aua.gr; Tel.: +30-210-5294-109**
**Citation:** Loukatos, D.; Lygkoura, K.-A.; Maraveas, C.; Arvanitis, K.G. Enriching IoT Modules with Edge AI Functionality to Detect Water Misuse Events in a Decentralized Manner. _Sensors_ **2022**, _22_, 4874. [https://doi.org/10.3390/s22134874](https://doi.org/10.3390/s22134874)
Academic Editor: Sigfredo Fuentes
Received: 10 May 2022
Accepted: 27 June 2022
Published: 28 June 2022
**Publisher’s Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license ([https://creativecommons.org/licenses/by/4.0/](https://creativecommons.org/licenses/by/4.0/)).
**Abstract: The digital transformation of agriculture is a promising necessity for tackling the increasing**
nutritional needs of the population on Earth and the degradation of natural resources. Focusing
on the “hot” area of natural resource preservation, the recent appearance of more efficient and
cheaper microcontrollers, the advances in low-power and long-range radios, and the availability of
accompanying software tools are exploited in order to monitor water consumption and to detect
and report misuse events, with reduced power and network bandwidth requirements. Quite often,
large quantities of water are wasted for a variety of reasons; from broken irrigation pipes to people’s
negligence. To tackle this problem, the necessary design and implementation details are highlighted
for an experimental water usage reporting system that exhibits Edge Artificial Intelligence (Edge AI)
functionality. By combining modern technologies, such as Internet of Things (IoT), Edge Computing
(EC) and Machine Learning (ML), the deployment of a compact automated detection mechanism
can be easier than before, while the information that has to travel from the edges of the network to
the cloud and thus the corresponding energy footprint are drastically reduced. In parallel, characteristic implementation challenges are discussed, and a first set of corresponding evaluation results
is presented.
**Keywords: water resource preservation; Internet of Things; Edge Computing; Machine Learning;**
Edge AI; Smart Sensing; Precision Agriculture; Arduino; Raspberry; Edge Impulse
**1. Introduction**
The degradation of natural resources in quality and quantity has a direct impact on
the global food production numbers. According to FAO [1], the agricultural sector should
increase its productivity by 60 per cent to counterbalance the depletion of natural resources
and the population growth on Earth. The utilization of innovative technologies seems
to be a key factor for addressing these issues. In this regard, toward a successful digital
transformation of agriculture, it is promising that the rapid development of the electronics
industry has managed to increase the production numbers and the quality of several
components, such as microcontroller units (MCUs), single board computers, sensors, and
radio transceivers, at very affordable cost levels. More specifically, the recently appeared
new generation of microcontrollers, apart from orchestrating typical sensing and acting
tasks, can support composite operations at reduced execution times, as they have faster and
more efficient processors and larger memory. In parallel, the advances in radio technology
deliver low-power modules capable of long-range communication at reduced energy levels.
These high-end components are not only widely available but are also accompanied by
very fluent documentation and software tools that facilitate their programming, leading
to improved implementations. These characteristics can lead to a more efficient approach
regarding serious problems, such as the preservation of natural resources. Nevertheless,
any fusion of software and hardware elements has first to address potential implementation
bottlenecks, prior to the delivery of any effective solution.
Indeed, as the world will be populated by billions of connected devices [2] of limited
resources, interacting with the surrounding environment and users, the bottleneck will be
the increased amount of data traffic that could congest the network and generate several
latency, reliability and privacy problems [3,4]. The deployment of enhanced processing
features on Internet of Things (IoT) devices, for example Machine Learning (ML), reduces
network congestion by allowing computations to be performed close to the data sources;
it thus preserves privacy, as less data is uploaded, and reduces the power consumed for
wireless transmission to gateways or cloud servers [4]. In this regard, one of the options is to run
the intelligent algorithms locally on the end devices (e.g., on the sensor nodes hardware).
If the tasks are performed by smaller devices, less power will be required to keep them
running and more flexible energy management will be applied, compared with the typical
central system case. Small devices can operate on batteries for months or even for years,
while a diverse set of energy harvesting options is offered for elongated operation duration.
Thankfully, the recent technological advances delivered end devices with improved hardware characteristics (i.e., processing capabilities and memory size), thus making it possible
for these devices to execute machine learning algorithms in an efficient and cost-effective
manner. Not only do the microcontrollers become better performing, but the application
of machine learning techniques on them, such as the artificial neural networks (ANNs),
have also become more efficient, due to the improvement of the corresponding software
platforms and tools.
In greater detail, the execution/utilization phase of an ANN requires less computational power than its training phase. In fact, during the training, a large amount of data
is used to calculate the weights and biases of the network, and thus a quite powerful machine is needed. Once the learning has been completed and the network has been trained,
the model can be used for inference actions with lower computational requirements [4].
Consequently, the AI algorithms can more likely be run on devices with less resources, as
microcontrollers, allowing local data processing. Nevertheless, as the trained models may
still remain comparatively heavy for the in situ MCUs, tools such as TensorFlow Lite [5], in
the context of TinyML [6], make possible the creation of trimmed-down versions that can
be fit safely in the improved generation of MCUs, but still of limited computational and
memory capacity.
Finally, the improved transmission range characteristics of the low power wide area
network (LPWAN) technologies, such as LoRa, perfectly fit to the reduced network traffic
profiles [7]. The balanced utilization of the discussed technological innovations can deliver
applications that can be very helpful for solving real-world problems, e.g., the preservation
of water resources.
Water is one of the most critical resources on the Earth as, apart from humans, both
plants and animals depend on it, while many processes from irrigation to washing or food
preparation, cannot be accomplished without it. Despite its necessity, large amounts of
water are being wasted due to a variety of reasons, from water pipe or valve failures to
human inattention. It is noteworthy that according to the World Bank [8], the non-revenue
water (NRW) level in developing countries ranges from 40% to 50% of the water pumped
into the distribution systems. Furthermore, 80 per cent of wastewater in the world flows
back into the ecosystem without being treated or reused, and 70 per cent of the world’s
natural wetland extent has been lost [9]. Sustainable Development Goal 6 (SDG 6) [9] on
water and sanitation, adopted by United Nations (UN) Member States as part of the 2030
Agenda for Sustainable Development [10], highlights in practice the importance of the
proper water resource management, from both quantitative and qualitative perspective. As
agriculture remains the largest consumer of water globally, the significance of water for
keeping the food produce to satisfactory levels is crucial.
Targeted at the preservation of water resources with emphasis on their impact on
agriculture, in this work, the pilot implementation of a smart water usage alerting system
is presented. The whole approach exploits the findings of the approach described in [11]
toward the delivery of a more compact and efficient solution with artificial intelligence (AI)
capabilities. The latter task is addressed by utilizing recently-appeared, cost effective but
powerful microcontroller boards and software, for supporting the in situ machine learning
operations, and a low-power and long-range radio network technology based on the LoRa
protocol. The combination of these elements results in reduced power consumption and
in less network traffic and processing load for the central entities of the network, as the
water usage classification decisions are taken locally, at the edges of the network, and only
notification messages have to travel toward the end user. Response times are also reduced,
while privacy is better preserved. The water usage episodes that the smart system had
been trained to intercept were of comparatively short duration, but the methods used
and the accuracy achieved make the proposed arrangements applicable, with only minor
modifications, to a wide variety of water preservation/misuse detection scenarios.
Apart from this introductory section, in order to better highlight the main objectives of
this research, the rest of this paper is organized as follows: Section 2 highlights the motives
and the challenges behind this work and the design directions being necessary. Section 3
provides interesting implementation details. Section 4 is dedicated to evaluation results
and discussion. Finally, Section 5 contains important concluding remarks.
**2. Background and Design Overview**
_2.1. Motives and Challenges for Agriculture_
Internet of things (IoT) is an emerging technology that includes devices connected to
the Internet equipped with sensors, transducers, radio transceivers, and actuators, forming
a functioning whole that gathers, exchanges and responds to information [12]. In this
regard, the IoT makes agricultural automation more efficient, and thus fosters production [13].
Recent works emphasize the contribution of the IoT technologies to critical
agricultural operations [14,15], including precision farming, livestock, and greenhouses,
with the irrigation and water management activities being among the open issues of
growing interest [16].
Machine Learning (ML) is a very welcome companion for any IoT solution and
provides solutions to problems that, some years ago, were among the most difficult to
tackle without it. The exploitation of the ML potential by agriculture is a necessity
that follows several directions [17], even beyond Agriculture 4.0 [18]. The most significant
advantage of machine learning techniques is that they can provide generally applicable
solutions, with minor human intervention and in a way that does not require meticulous
a priori knowledge of the idiosyncrasies of the system the solution is being tailored for.
This allows satisfactorily working solutions to be generated easily and quickly by people
with less expertise in a specific area. Apparently, the role of the “experts” of the sector
cannot be overlooked, but their involvement in the whole process remains consulting
and supervising, as they do not have to inject “magic” threshold values into conventional
and difficult-to-maintain blocks of code.
Edge Computing (EC) is a newcomer to the equation of tackling modern problems
more efficiently using IoT and ML. Indeed, a traditional IoT solution (a few years ago)
typically required a large amount of real-time sensor data to be sent to a central
computing entity in the cloud, which in its turn had to process this increased amount of data,
take the necessary decisions and probably deliver the corresponding responses
back to the appropriate nodes. This organization had to tackle high communication and
processing loads, while any potential failure of the central entity would result in total
system collapse. Furthermore, data privacy concerns were also very reasonable, as third-party
communication, storage and/or decision entities had to get involved in the whole
process. On the contrary, by increasing the intelligence at the edges of the network (i.e., on
or nearby the sensor nodes), decisions and any potential action are addressed locally, in a
faster, cheaper and more private way, thus leaving considerably less (or no) work
for the central entity [4,19]. Typically, only sporadic metadata information updates are
necessary toward the central entity, mainly for supervision purposes.
The enrichment of IoT with Edge Computing and Machine Learning functionality is
often referred as Edge Artificial Intelligence (Edge AI) and tries to exploit the advantages
of these technologies, for serving a wide set of applications in a better manner, with the
agricultural sector not to be an exception [20]. In this regard, the approach being presented
is trying to highlight how these elements of innovation can be combined to ease the intense
problem of water resource waste.
Demographics continue changing and unsustainable economic practices are affecting
the quantity and quality of the water being available, thus making it an increasingly scarce
and expensive resource [9]. Inevitably, water is at the core of sustainable development
and is closely linked to poverty reduction and climate change. As agriculture remains
the largest consumer of water globally and irrigation is responsible for 70% of its use
worldwide, water is the most valuable resource for keeping the quality and the quantity of
plant and animal production to satisfactory levels. The way water is utilized for both urban
and rural use directly impacts its future availability and thus, emphasis must be placed on
water management and irrigation efficiency and make sure clean water can be provided for
all people.
Apart from the more conventional bare IoT solutions for water resource management
and utilization, mainly with focus on agriculture, there is a growing interest for the exploitation of ML in order to achieve better results [21–24]. The fusion with Edge AI functionality
has yet a lot to offer. The potential exploitation of modern microcontrollers for water
usage related applications with embedded ML functionality has already started delivering
interesting outcomes [25], in neighboring scientific areas, with the selection of devices and
functions for communication between sensor appliances to remain a key challenge [26]
for success.
On the other hand, recent studies show that farmers still face concerns about
adopting IoT technologies in their everyday activities. This skepticism is attributed to a
variety of reasons, from privacy concerns due to the cloud-based nature of many solutions
to fears of job cuts and of high purchase and maintenance costs [27,28], while it is really
hard to find experts who have the necessary set of talents to a satisfactory degree and who
are available for fluent cooperation at the same time.
Furthermore, while the machine learning methods seem to provide accurate and less
expensive solutions [23] for water misuse detection events such as leaks, there is enough
room for further improvements. Indeed, due to the very recent character of the innovative
hardware and software components supporting in situ (i.e., on-device node) machine learning techniques, in the agricultural sector for water utilization report/classification purposes,
few works combine these assets toward the delivery of a cost effective and efficient solution
with Edge AI characteristics. There are research contributions that exploit IoT infrastructures for water monitoring purposes, but without incorporating AI functionality [29] or
there are contributions that exploit machine learning methods that either require central
processing of the data being collected [30,31] or that they are not optimized to be executed
by the new low-cost and high-efficiency microcontrollers [32]. These remarks are in line
with recent review findings in agriculture [24] and reflect a problem already specified in
the wider IoT area [4,33].
Trying to bridge this gap, the proposed solution indicates that, for water usage characterization/report delivery, a quite accurate model can now be trained, using flexible tools,
be executed on the end device and communicate its classification reports using almost
negligible power and bandwidth resources. Combining decentralized intelligence and
low-cost design, provision is made for reduced to null amount of information to travel
toward the cloud. These arrangements are addressing data privacy and reliability issues
as well.
_2.2. Functionality Overview and Component Selection_
This section reports briefly on the components being selected as well as on their role,
in order to develop a system capable of intercepting and characterizing water usage events.
This system includes sensor nodes, placed in situ, at the edge points where the water is
actually being used, as well as the suitable sink/gateway node(s) able to collect the reports
delivered by the aforementioned peripheral nodes. The “key” point of the approach being
presented is that the edge (sensor) nodes, apart from collecting time series corresponding
to events containing the instantaneous water consumption data, are “smart” enough to
classify these events into categories of proper or improper use of water, without assistance
from external entities. Thus, via this “filtering”, only the classification reports have to travel
toward the gateway and the cloud (if the latter is necessary). The analytical (low quality and
high volume) information of the instantaneous water consumption might flood the network
infrastructures and exhaust the batteries of the edge nodes. The user can easily monitor the
operation of the whole system via their portable equipment (e.g., tablet, smart phone,
or laptop) using conventional connectivity options (e.g., Wi-Fi or 3G/4G), either locally or
remotely (e.g., via a virtual private networking (VPN) service). The proposed architecture
is depicted in Figure 1.

**Figure 1. Functionality overview of the proposed water usage event characterization solution.**
The proposed implementation exploited the experience gained during the activities
described in [11] with the excellent Arduino Nano 33 BLE Sense [34] microcontroller, which
offers plenty of sensors and connectivity options, but utilized an even newer generation of
cheaper microcontroller modules able to host and execute composite machine learning
algorithms at the same price levels as the “traditional” units. For this reason, the
Raspberry Pi Pico [35] microcontroller board (costing about 6€) was selected which, apart
from its very attractive price, offers ample processing power and memory thanks to its new
RP2040 chip. This allows for larger and faster program execution compared to the typical
Arduino Uno [36] standard, as the Pico exhibits 64 times more flash memory (i.e., program
memory), 128 times more random access memory (RAM) and a much faster dual-core
processor. Consequently, the Raspberry Pi Pico board was able to support, apart from the
basic water consumption metering process, the necessary machine learning functionality
to invoke the corresponding water usage alert message generation. For the final deployment,
the absence of a radio interface on the Raspberry Pi Pico unit was counterbalanced by the
adoption of a cost-effective microcontroller board, running at 8 MHz and equipped with a
LoRa radio, namely a LoRa32u4 unit [37]. For programming both systems, the preferred
option was the well-supported Arduino IDE [38] environment. During the implementation
and testing stages, an ESP8266-based module [39], namely an ESP-01 unit offering Wi-Fi
connectivity, was utilized.
The water flow meter device is a Hall-effect counter sensor (YF-S201 [40] model), which
can detect the flow changes as the water passes through it and the rotor rolls. Furthermore,
the MIT App Inventor cloud-based programming environment [41] was selected for the
easy creation of a mobile application for inspecting the water usage activity, via the smart
phone/tablet device of the user.

To add machine learning functionality, it was necessary to prepare and incorporate a
trained artificial neural network (ANN) model into the software running on the Raspberry
Pi Pico. An artificial neural network is based on the operation of neurons in the human
brain. This structure has one input layer, one or more interconnected hidden layers,
and an output layer for delivering the results. A very simple and efficient manner to prepare
(i.e., to train and to extract/compile) a suitable ANN model was the Edge Impulse [42]
cloud environment. This processing environment incorporates the functionality of
the TensorFlow Lite engine for training neural networks. More specifically, it is equipped
with a fluent graphical interface and network connectivity options for importing sensor data,
designing the ANN model, applying assistive processing blocks, and creating, testing and
deploying the final version of the model. Finally, the coefficients describing the ANN are stored in
the memory of the Raspberry Pi Pico microcontroller, and thus the AI algorithm can be
executed on a device with comparatively low but sufficient capacity, in terms of processing
power and RAM. As of February 2022, the Edge Impulse platform provides full support
for the Raspberry Pi Pico board.

The gateway node gathers the classification decision information from the peripheral
(edge) sensor nodes, stores it and makes it available to the end device (e.g., smart phone,
tablet or laptop) of the user, via common network services installed on it, or posts the
information to the cloud for better visualization and post-processing. Details referring to
the latter choice are beyond the scope of this research work.
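To illustrate how compact these classification reports can be, compared with streaming raw 1 Hz flow samples, the sketch below packs one report into a few bytes. The field layout, function names, and class codes here are hypothetical illustrations chosen for this example, not the actual payload format used by the deployed nodes:

```python
import struct

# Hypothetical compact report: node id (1 byte), class code (1 byte),
# event duration in seconds (2 bytes, big-endian) -- 4 bytes per classified
# event, versus hundreds of 1 Hz flow samples if raw data were forwarded.
CLASSES = {0: "NU", 1: "WL", 2: "WW"}

def pack_report(node_id: int, class_code: int, duration_s: int) -> bytes:
    """Serialize one classification report for transmission over LoRa."""
    return struct.pack(">BBH", node_id, class_code, duration_s)

def unpack_report(payload: bytes):
    """Decode a report on the gateway side into (node id, label, duration)."""
    node_id, class_code, duration_s = struct.unpack(">BBH", payload)
    return node_id, CLASSES[class_code], duration_s
```

Even with framing overhead, such a message remains orders of magnitude smaller than the per-second flow series it summarizes, which is what keeps the LoRa link and the edge-node batteries lightly loaded.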
**3. Implementation Details**
In accordance with the design and functionality directions provided in Section 2.2,
this section presents characteristic details of the implementation process.
The analytic steps followed for the training are illustrated in Figure 2.
**Figure 2. The analytic steps being necessary for the training of the proposed water usage event characterization solution.**
More specifically, the basic water flow sensing unit connection and programming
arrangements are highlighted, in order to gather efficient data for training the ANN model
(step 1), and thus, to add machine learning capabilities to the whole system. The details for
this training are also explained (steps 2 and 3), as well as the incorporation of the trained
ANN model into the microcontroller of the flow-metering system (step 4) for enhancing its
functionality. In parallel, the corresponding network node(s) arrangements are discussed,
as well as the characteristics of a pairing end-user mobile application, for the delivery of a
fluently working solution.
_3.1. Initial Sensor Node Preparation_
The Raspberry Pi Pico is a 3.3 V level unit; for this reason, the flow sensor was
connected to its 3.3 V supply pin, in order to generate 3.3 V logic-compatible pulse signals
at its output. The 3.3 V level was adequate for the operation of the specific flow metering
device being selected. Furthermore, the output of the latter sensor was connected to an
interrupt (input) digital pin of the microcontroller, and the ground pins of both components
were also wired together. The sensor was connected to a testing tap via a pipe, and thus it
could be exposed to a variety of water consumption scenarios that a human might invoke,
according to empirical assumptions.
The Arduino IDE environment was customized properly by downloading and installing
the necessary libraries for the Raspberry Pi Pico, according to the instructions on its
official page, to facilitate programming the microcontroller from a computer through a
USB port connection.
The pulses that the flow sensor was generating correspond to the rotations of its
blades and thus to the water flow passing through it. More specifically, according to the
basic algorithm, as the flow sensor generated a pulse every time approximately 2.22 mL of
water passed through it, the Raspberry Pi Pico intercepted these pulses as interrupt triggers
to be counted and, in turn, calculated a one-second average value corresponding to the
water flow (in mL/s). The sequence of these flow values was output to the serial port of
the microcontroller. After compiling the program (sketch) and uploading it to the
Raspberry Pi Pico board, the sequence of the flow measurements was acquired via the USB
cable. The latter measurements were fed into the machine learning platform, in order to
train the suitable ANN model, as the Edge Impulse environment offers options for
automated uploading of the measured values.
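The per-second flow computation described above can be sketched as follows. This is a minimal illustration with hypothetical function names: the ~2.22 mL-per-pulse factor is the approximate calibration quoted in the text, and the real sketch increments the pulse counter inside an interrupt service routine rather than passing it in directly:

```python
# Minimal sketch of the per-second flow averaging performed on the
# Raspberry Pi Pico. On the device, pulses are counted in an interrupt
# service routine; here the count per window is supplied as an argument.

ML_PER_PULSE = 2.22  # approximate YF-S201 calibration: ~2.22 mL per rotor pulse

def flow_ml_per_s(pulses_in_window: int, window_s: float = 1.0) -> float:
    """Average water flow (mL/s) over one sampling window (1 s by default)."""
    return pulses_in_window * ML_PER_PULSE / window_s

def flow_series(pulse_counts_per_second):
    """Convert per-second pulse counts into the flow time series (mL/s)
    that is logged over the serial port and later used for training."""
    return [flow_ml_per_s(n) for n in pulse_counts_per_second]
```

For instance, 100 pulses counted within one second correspond to roughly 222 mL/s, below the 250–280 mL/s maximum observed during profile collection.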
_3.2. Training the Neural Network_
The corresponding ANN model to be generated had to be simple and lightweight
enough for the microcontroller's capabilities, but still precise enough. In this regard, the
system was trained to recognize three characteristic kinds of water utilization profiles:
Normal Use (NU), Water Leak (WL) and Water Waste (WW). The proper training of an
ANN requires data series corresponding to each of these categories to be collected and to be
uploaded to the Edge Impulse engine. The total data length was 5 h 55 min 47 s (148 files)
for all three cases. According to Edge Impulse platform requirements, the duration of the
data length had to be approximately the same for all categories, in order for the final model
to be more accurate. Nevertheless, the number of profiles for each case may differ (NU: 69,
WL: 44, WW: 44 profiles).
During the profile collection process, the lowest flow value that the flow sensor could
record was about 10–15 mL/s, while the maximum flow being recorded was in the range
between 250 and 280 mL/s. The network was trained using empirical data based on
human observations for classifying samples (water usage episodes) into categories. In
general, NU profiles were created so as to contain low to moderate flow values and to have
a duration below 180 s, under the training pattern hypothesis that a non-WL water usage
scenario would last for 3 min at maximum. Similarly, it was assumed that WL profiles
exhibited continuous flow duration of more than 180 s and that most WW profiles had
flow consumption over 160 mL/s and duration of more than 160 s, since during typical
episodes water would more likely be used for shorter times and at lower flow rates.
Some typical profiles for each category are given in Figure 3, where the water flow is
measured in mL/s and the time in seconds (s). For each category, there is a diversification
among the profiles being recorded and fed to the training system. This diversification
results in increased accuracy under real-world conditions.
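The empirical assumptions above can be condensed into a simple labelling rule, sketched below for clarity. This is an illustrative reconstruction of the heuristics used to label training profiles; the deployed classifier is the trained ANN, not these hand-written rules, and the function name is hypothetical:

```python
# Illustrative reconstruction of the empirical labelling heuristics.
# A profile is a per-second flow series in mL/s.

def label_profile(flow_ml_s):
    duration_s = sum(1 for f in flow_ml_s if f > 0)  # seconds with non-zero flow
    high_flow = any(f > 160 for f in flow_ml_s)      # episode exceeds 160 mL/s?
    if duration_s > 180:
        return "WL"  # Water Leak: continuous flow for more than 180 s
    if high_flow and duration_s > 160:
        return "WW"  # Water Waste: >160 mL/s flow lasting more than 160 s
    return "NU"      # Normal Use: low/moderate flow, under ~3 min
```

Such crisp thresholds would be brittle in deployment, which is precisely why the trained network, exposed to diversified profiles per category, is used instead.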
**Figure 3. (a,b) Normal Use profiles; (c,d) Water Leak profiles; (e,f) Water Waste profiles.**
In the next stage, the water flow data (raw data) were uploaded to the Edge Impulse
cloud platform, via the Data Acquisition menu category, and were split into training and
testing data automatically, while the data labelling was performed manually.

For training the ANN model, the window size was set at 200,000 ms (i.e., 200 s),
according to the profiles that were fed into the training system and by taking into consideration
the maximum time that a person might use the tap. Similarly, the window increase
was set at 1000 ms (i.e., 1 s) and the frequency at 1 Hz (i.e., a 1 sps sampling rate).
Furthermore, “Raw Data” was selected as the preferred processing block and “Classification
(Keras)” as the ANN learning block. The “Raw Data” option means that no additional
preprocessing (e.g., a spectral characteristics extraction) was applied before using the original
data for the training process. This option does not reduce the number of features to be fed
to the input layer of the network, but preserves as many characteristics of the initial
data as possible and, as explained right below, it fits easily in the microcontroller being
selected. Furthermore, the number of training cycles was set to the moderate value of 50, to
avoid overfitting, and the learning rate at 0.0005, via the NN Classifier configuration section,
as the Edge Impulse suggests. The final neural network structure has an input layer with
200 features (window size), two hidden layers, with the first one to have 20 neurons and the
second one 10 neurons, and an output layer with 3 classes (NU, WL, WW). This architecture
for the NN provided an optimal combination between performance and computer resource
allocation (i.e., model accuracy versus time needed for a decision to be made and memory
size needed for hosting the program in the flash and for executing it in RAM). For the
specific model, in the quantized version, the RAM usage was 1.9 KB and the flash memory
usage was 22.5 KB, values that are far below the capacity limit of the Raspberry Pi Pico
unit. It must be noted though that during the actual operation of the microcontroller, more
memory will be needed as along with the NN model coexist several variables and code
parts dedicated to other tasks.
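As a rough sanity check (this arithmetic is ours, not an Edge Impulse output), the parameter count of the described 200–20–10–3 fully connected network can be computed directly:

```python
# Parameter count of a fully connected network: one weight matrix plus one
# bias vector per layer transition.
def dense_params(layer_sizes):
    return sum(n_in * n_out + n_out
               for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

print(dense_params([200, 20, 10, 3]))  # 4263 parameters
```

At one byte per parameter in the quantized (int8) version, these roughly 4.3 K parameters occupy only a few KB, consistent with the reported 22.5 KB of flash usage, which also includes the inference code.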
The Edge Impulse platform allows for easy experimentation with various candidate
settings and for saving the model with the best performance after the end of the training
process. Finally, there is the option to download the model from the Edge Impulse cloud
platform, via the “Deployment” section of Edge Impulse menu category, as code that
includes library and sketches to be compiled and uploaded to the microcontroller via the
Arduino IDE environment.
_3.3. Sensor Node Software Enhancement_
As explained in Section 3.2, the code generated by the Edge Impulse platform, in
the form of a generic Arduino library, provides customizable examples (sketches) for the
Arduino environment, with the Raspberry Pi Pico board to be among the models being
supported, and thus, being compatible with the generated model parameters. The selection
of the “Arduino library” option (instead of the tailored firmware output one) provides
freedom to combine the machine learning engine with further algorithmic behaviors being
necessary to be executed by the hosting microcontroller.
In this regard, the final software running on the microcontroller had to be updated so
as to be able to perform (almost simultaneously) some simple but sharp calculations/tasks
of different time granularity:
_•_ Intercept the interrupt signals corresponding to the rotor roll pulses of the water flow
sensor module;
_•_ Calculate the instantaneous water consumption, at a fixed and specific rate, typically
1 or 2 times per second, update the aggregate metrics, and trigger the classification
process every time the predefined number of samples (i.e., 200) was gathered;
_•_ Deliver system status data and water usage reports via USB to the hosting computer,
or wirelessly to a gateway node or to the operator’s smart phone/tablet.
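The per-second sampling and 200-sample trigger described above can be sketched as follows (a simplified Python illustration with hypothetical names; the actual firmware is interrupt-driven Arduino code):

```python
# Illustrative sketch (not the actual firmware): 1 Hz flow readings are
# accumulated into fixed windows and a classification is triggered after
# every 200 consecutive samples, as in the described system.
WINDOW_SIZE = 200  # samples per classification window

def run_windowed_classifier(flow_samples, classify):
    """Feed flow readings into fixed windows and collect decisions."""
    window, decisions = [], []
    for value in flow_samples:
        window.append(value)
        if len(window) == WINDOW_SIZE:
            decisions.append(classify(window))  # e.g., "NU", "WL" or "WW"
            window = []  # start gathering the next episode
    return decisions

# Dummy classifier for demonstration: any non-zero flow counts as "WW".
toy = lambda w: "WW" if sum(w) > 0 else "NU"
print(run_windowed_classifier([0.0] * 200 + [1.5] * 200, toy))  # ['NU', 'WW']
```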
As expected, the above tasks had to be performed without blocking or delaying each
other, constraints that required meticulous programming (e.g., using timer events) to
achieve fluent operation. Optimally, the delivery of information toward the gateway had to
take place once, after the end of each classification process utilizing the 200 consecutive
samples. Nevertheless, for debugging or training purposes, all 200 values had to be
transmitted toward the gateway node. Communication with the LoRa32u4 radio module
was achieved through the serial TTL level port of the microcontroller.
_3.4. Gateway Node and User-End Software_
For the reception (and the inspection) of the remote alerts through Wi-Fi, an Android
smart phone or a tablet device, with which most modern people are familiar, was a
satisfactory solution. The MIT App Inventor environment was utilized in order to deploy a
simple monitoring application. The necessary programming was completed using visual
blocks, based on the information provided in [43,44].
The initial deployment involved direct connection between the smart water sensor
node and the end user equipment (e.g., a tablet device), typically through a Wi-Fi connection
link. This solution is not optimal if multiple sensor units exist and deliver water usage
reports in parallel. Furthermore, the latter sensors may be placed at comparatively long
distances from the user. These facts made necessary the development of a gateway/sink
node to gather the corresponding data and the migration to LoRa radio links.
For implementing the latter gateway node, a Raspberry Pi 3 Model A+ had been
selected [45], due to its reduced size and energy footprint and its fluent programming and
interfacing options. The Raspberry Pi Model 3 A+ unit allows for fast implementation
of code that intercepts the data reports from the peripheral smart sensor nodes, storing
them into files or a simple database, and making them available via the proper TCP/IP
based service. This request could be either asynchronous or periodic (i.e., generated by
a proper application running on the user’s mobile phone). These tasks are served using
python and Linux shell scripts, inter process communication (IPC) techniques exploiting IP
sockets, and the activation of preexisting applications such as the Apache web server, the
SSH server and/or a Virtual Private Networking (VPN) service. Furthermore, the gateway
node, properly combined with VPN networking techniques, assured monitoring functions
from distant locations, based on the availability of Wide Area Network (WAN) wired or
wireless technologies (i.e., 3G/4G, DSL, etc.).
_3.5. Summary of IoT Deployment Steps_
The Edge AI tasks had to be performed fluently, while deployment in open-field
environments using long-range radios, such as LoRa, was an important priority. The final
functionality being implemented can be summarized in the following steps/cases:
1. Use a Wi-Fi radio transceiver (e.g., an ESP-01 module), attached to the sensor node, to
provide communication between the sensor node and the user’s smart phone/tablet,
for testing purposes, during the initial deployment;
2. Use a Raspberry Pi Model 3 A+ and a LoRa radio module as a LoRa gateway/web
server, in conjunction with the LoRa radio transceiver modules being attached to the
(preferably more than one) smart sensor nodes;
3. Increase user-friendliness by adding services using the Raspberry Pi Model 3 A+ unit
of the gateway node and well-known web-based applications.
Case 1 was suitable for verifying the basic wireless connectivity potential of the sensor
node via the tablet/smart phone device of the user, being nearby the sensor. This
arrangement made it easy for the user to inspect the status of the water activity
characterization system for one smart sensor and from short distances.
The need to have a more complete on-demand view of the status of more than one
water use points, at increased distance, was favoring the adoption of a local gateway
node facilitating the whole monitoring process, as explained in case 2. The sensor nodes
were sending water usage notifications toward this local gateway, over LoRa. It must be
noted though that the TCP/IP technology, as a solution for the delivery of data (i.e., the
sporadic metadata) from the sensors to the gateway, is not optimal, in terms of energy
consumption, complexity and range coverage. Indeed, in a typical application scenario, the
distance between the sensor nodes and the gateway node is limited to a hundred meters,
approximately. If willing to extend this distance to the kilometer range or beyond, without
special and expensive equipment, transceivers utilizing technologies such as LoRa are
more suitable.
In case of the LoRa solution, the LoRa32u4 board, as a transceiver, was the optimal selection for both the sensor and gateway nodes, due to its low cost and its easy programming.
The RadioHead software package [46] is a very efficient library that supports several critical
LoRa protocol functions, and thus, it was adopted for adjusting the LoRa32u4 modules.
These modules were programmed easily via the Arduino IDE environment. Consequently,
the microcontroller of each sensor node was connected (typically via its hardware serial TTL
interface) with a LoRa32u4 board in order to relay the water usage information from the
machine learning engine toward the gateway node. A Lora32u4 board was also connected
via USB with the Raspberry Pi 3 Model A+ unit implementing the gateway functions. The
necessary code was written in python to bridge the serial port of the LoRa32u4 board with
an IP socket service running on the gateway node.
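A minimal sketch of such a serial-to-socket bridge is shown below; the report format in the docstring and the helper names are illustrative assumptions, and in the actual deployment the `serial_port` argument would be a pyserial `Serial` object opened on the LoRa32u4 USB device:

```python
# Hedged sketch (not the actual gateway code): lines arriving from the
# LoRa32u4 serial port are re-published on a local IP socket service.
import socket

def forward_report(line, sock):
    """Push one water-usage report line (e.g., "node01,WL,3.2,4050,-97")
    from the serial link to the IP socket, newline-terminated."""
    sock.sendall(line.strip().encode() + b"\n")

def bridge(serial_port, host="127.0.0.1", port=5000):
    """serial_port: any iterable of text lines (a pyserial Serial works)."""
    with socket.create_connection((host, port)) as sock:
        for line in serial_port:
            forward_report(line, sock)
```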
Characteristic deployment arrangements are depicted in Figure 4a,b. More specifically,
Figure 4a depicts the smart water sensor node implementation using a Raspberry Pi Pico
unit and a LoRa radio, while in Figure 4b the gateway/sink node implementation is
depicted using a Raspberry Pi 3 Model A+ and a LoRa radio. The information exchanged
between the LoRa radios was packetized and encrypted using the RadioHead library and
the Arduino Cryptography Library [47], in order to hide the sensitive data from
non-authorized users.
**Figure 4.** (a) Smart water sensor node deployment using Raspberry Pi Pico and LoRa radio;
(b) Gateway/sink node implementation using Raspberry Pi 3 Model A+ and LoRa radio.
Initial experiments were performed using USB powering via the hosting computer
and/or power banks. Later updates included LiPo or Li-ion batteries, mainly of 18650 type,
which are cheap and robust, as well as small photovoltaic panels (e.g., 2 W units). It must
be noted though that the absence of a permanent power supply source nearby is not always
the rule, and thus the operation of the alerting system was facilitated.
**4. Results and Evaluation**
This work is putting emphasis on intercepting water usage events and on characterizing
them properly. Via fluently-working machine learning techniques, applied at the edges
of the network, the amount of information that needs to travel from the peripheral nodes
to the central node and the cloud is minimized. This fact signifies reduced communication
load and energy consumption, and better autonomy and privacy. The adoption of simple,
long-range and low-energy radios facilitates the whole process. Relevant details are given
in the following Sections 4.1–4.4.
_4.1. Testing the Accuracy of the Model_
For classification evaluation algorithms, accuracy is the most frequently used indicator,
and it is defined as the proportion of the correctly classified samples to the total number
of samples. After the training process, based on the testing data, the system generated
the right outcome for the NU category with 77.8% accuracy. Similarly, for the WW and
WL categories, 100% success was achieved, according to Edge Impulse cloud environment.
These performance results made the final model to have a 98.5% expected accuracy, using
the testing data set, in the Quantized (int8) version.
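In code form, the accuracy definition above reads as follows (toy labels for illustration only):

```python
# Accuracy as defined above: correctly classified samples over all samples.
def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# Toy example over the three classes used in this work (NU, WL, WW).
print(accuracy(["NU", "WL", "WW", "WW"], ["NU", "WL", "WW", "NU"]))  # 0.75
```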
At the next stage, actual water consumption episodes of known type (i.e., NU, WW or WL)
had to be invoked, by rotating the tap head properly, thus letting the proposed machine
learning engine perform classification according to the flow data being collected (i.e., in
chunks of 200 consecutive values). The corresponding results were recorded. Figure 5
depicts the proposed sensor node connected in-line with a water tap. This process was
matching the steps being followed during the training stage of the system.
**Figure 5.** The proposed sensor node connected in-line with a water tap.
It must be noted that the in-parallel visual inspection of the ongoing process was
drastically facilitating the experiments. More specifically, further arrangements were made
in order for the whole sequence of the analytical flow readings to arrive to the smart
phone/tablet device, using a modified version of the application created for the end user
(as presented in Section 3.4). This application variant provided detailed real-time graphs
(in the form of histograms) reflecting the instantaneous water consumption during each
episode, for direct comparison and adjustments. Figure 6a–f illustrate indicative smart
phone screenshots reflecting typical water usage characterization decisions during the
actual testing process, corresponding to the NU, WL and WW categories, respectively.

The combination of the trained ANN model implementation with simple, more
conventional programming techniques was improving the accuracy and the response times
of the system being presented. For instance, the in situ module logic was modified so as to
ignore the zero-flow events, as an episode (i.e., event) started being recorded only after the
arrival of the first non-zero flow value.
Table 1 contains the confusion matrix that corresponds to the testing of the real
system, after classifying 100 water consumption episodes. The processing of the data being
collected revealed that the actual accuracy was 91% (i.e., 91 over 100 samples were classified
correctly), after testing the model with user-generated water consumption profiles, using
the proposed smart flow metering system. It is important to mention that the model
could clearly recognise the undesirable WL profiles, achieving accuracy values reaching
100%. On the other hand, there were some incorrect predictions, where the neural model
was classifying an actual WW scenario as NU or WL (with percentages 5.1% and 7.7%,
respectively). In fewer cases, the model was classifying an NU as WW or WL (with
percentages equal to 2.8%). These failures can be attributed to the fact that there was a
small area where the borders of those categories were overlapped, thus confusing the
neural network classifier. An additional 0.4 certainty threshold was programmed on the
microcontroller for more reliable characterizations. This performance is close to the one
expected according to the testing of the model. The overall performance is lower than the
one achieved by other machine learning approaches [23] using more composite systems,
but remains high and can be easily achieved by the proposed low-cost equipment. The
accuracy can be further improved by using more extensive training and samples.
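The reported per-class rates and the 91% overall figure are mutually consistent. The per-class episode counts used below (36 NU, 25 WL, 39 WW) are our own inference from Table 1 and are not stated in the paper:

```python
# Consistency check (our arithmetic): per-class episode counts that reproduce
# both the per-class rates of Table 1 and the 91/100 overall result.
counts = {"NU": 36, "WL": 25, "WW": 39}          # inferred, not stated
per_class_accuracy = {"NU": 0.917, "WL": 1.000, "WW": 0.846}

correct = sum(round(counts[c] * per_class_accuracy[c]) for c in counts)
overall = correct / sum(counts.values())
print(correct, overall)  # 91 0.91
```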
**Figure 6.** (a–f) Indicative smart phone screenshots during the in situ testing process, reflecting
typical water usage characterization decisions for the categories NU, WL and WW, respectively.
**Table 1.** The confusion matrix corresponding to the trained neural network model, created by
classifying 100 water consumption episodes, of specific (and known) type each.

**Class** **NU** **WL** **WW** **Unknown**
NU 91.7% 2.8% 2.8% 2.8%
WL 0.0% 100.0% 0.0% 0.0%
WW 5.1% 7.7% 84.6% 2.6%

_4.2. Networking and Power Consumption Issues_

According to the specifications of the experimental system being presented, although
200 consecutive samples had to be recorded before a classification decision could be made,
this decision was taken locally, and thus only the (final) characterization message had
to travel toward the gateway (and to the end user) instead of 200 messages containing
the corresponding analytical flow values. The packet payload information needed to
travel from the peripheral sensor nodes toward the gateway node did not exceed 10 bytes
in binary format, thus resulting in a below 50-byte description per episode in textual
format, in the final log files on the Raspberry Pi Model 3 A+ unit of the gateway. The size
requirements of the analytical data would be roughly 200 times higher. In addition to that,
the cost for performing the classification at the central node was not necessary any more.

Figure 7 provides indicative details of the water flow episode/event specific information
as stored into the log files on the Raspberry Pi Model 3 A+ unit implementing the
gateway node functionality. These files were directly available through the Apache web
server and typically contained an arrival timestamp, node address, episode type (i.e.,
NU/WW/WL), flow value per each sample into a specific episode (in debug mode only),
total water consumption per episode, as well as sensor battery voltage and RSSI indicator.
**Figure 7.** Characteristic details of the water flow episode/event specific information as stored into
the log files on the Raspberry Pi Model 3 A+ unit implementing the gateway node.
Some stability problems were experienced when using the highest baud rate (i.e., the
115,200 bps value) between the Raspberry Pi Pico and the LoRa32u4 module. For this
reason, the data rate was set to the “safe” 38,400 bps value.
The techniques being followed for testing the effective communication range of the
proposed system were utilizing the methods presented in [7,48]. The gateway node, apart
from the water flow specific information, for each node, was collecting assistive data, such
as sensor battery status and received signal strength indicator (RSSI). The latter information
was collected for sensor nodes being at various distances from the gateway node, for both
Wi-Fi and LoRa radio cases. The left part of Figure 8 depicts a LoRa radio transceiver
during the in situ radio coverage experiments. According to results being gathered, by
using ESP-01 Wi-Fi transceivers, the maximum range coverage was at about 100 m, while by
using LoRa modules with custom wire antennas the communication distance was extended
to 300 m in free space. By using standard but still cheap antennas, the LoRa link scenario
was easily achieving communication coverage of above 1 km. These results are justified
by the fact that the receiver sensitivity limit for nodes equipped with Wi-Fi radios was
around −90 dBm, while for the LoRa, the sensitivity being achieved was reaching the
−130 dBm level.
**Figure 8.** Experiments for testing the range coverage (**left**) and the energy consumption (**right**)
of the prototype sensor nodes.
In order to better capture and study the short-scale dynamics of the smart sensor
nodes, a measuring circuit was built, according to the directions provided in [49]. More
specifically, an Arduino Uno board was utilized to calculate the voltage drops over a
resistor of known value, connected in series with the load of interest (i.e., the smart water
sensor node); the right part of Figure 8 depicts the corresponding experimental setup.
The actual measuring process was performed via a separate ADC module (namely an
ADS1015 unit) capable of true differential measurements, of satisfactory resolution (i.e.,
of 12 bits) and of adjustable gain. The communication of this module with the hosting
Arduino board was completed using an I2C interface. The presence of the Arduino Uno
unit allowed for the additional processing of data and quick graphical inspection.
Consequently, for the system under testing, amperage consumption traces could be easily
captured, at a typical time resolution of 100 sps and at an approximate amperage resolution
of 1 mA, via the Serial Monitor or the Serial Plotter component of the Arduino IDE
environment. By using the specific measuring setup, several results were collected. The
behavior of the sensor nodes was the focus of this study, as, typically, the gateway node
was considered as having a fixed power supply and its consumption was around 250 mA.

More specifically, the consumption of a bare node, equipped only with a Raspberry Pi
Pico unit, was 27 mA, approximately, with the water flow metering unit absorbing 3–4 mA
of this quantity. When activating the radio modules on the system and letting them transmit
information, further data were collected. For debugging purposes, apart from the standard
settings where only the water usage decision was reported, the analytical flow data could
also be transmitted toward the gateway, limited only by the maximum data rate being
supported by the selected radio modules.
Referring to the Wi-Fi communication case, Figure 9 provides characteristic details of
the short time dynamics of the scanning and connection establishment stages that were
mandatory before the utilization of the radio modules. The inspection of the results revealed
that the scanning process was extremely energy-consuming, reaching the level of 90 mA (in
total) with additional and non-negligible sporadic spikes exceeding that level. The whole
scanning process lasted for 2 to 3 s, and after that, the overall consumption was stabilized
to the 40 mA level, with peaks of additional 50 mA corresponding to the water flow event
reports toward the gateway. The high cost for the Wi-Fi initialization link (especially in
optimized radio sleep/wakeup scenarios), along with its limited range coverage were
favoring the assessment of other communication alternatives, such as LoRa.
**Figure 9.** Short time dynamics of the mandatory scanning and connection establishment stages,
following the activation of the Wi-Fi radio module that smart sensors were equipped with.
Similarly, Figure 10 depicts characteristic short-time dynamics for the LoRa
communication alternative. Namely, from the LoRa module activation (left) to the energy
peaks reflecting the water usage notification packet transmission events (top right) and to
the corresponding textual information content as intercepted by the gateway (bottom
right). The LoRa32u4 LoRa board consumed 12–13 mA, approximately, at idling, with the
radio enabled, while the transmission events at the standard radio parameter settings
(i.e., having Coding Rate—CR set to 4/5, Bandwidth—BW to 128 kHz, Spreading
Factor—SF set to 7) and with the transmit power at 15 dBm, resulted in spikes of 70 mA (at
3.3 V), having an approximate duration of 50 ms, thus requiring around 12 mJ each. It
must be noted that the whole process lacked the high connection establishment cost (in
both time and energy) characterizing the Wi-Fi case. The tradeoff of LoRa was the far
lower communication rate, which was not an issue for the specific application case, since
only a few bytes had to be transmitted per sensor unit, every 2 to 3 min, at the fastest
utilization activity scenario.
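The quoted per-packet figure follows from E = V · I · t applied to the reported spike values:

```python
# Per-packet transmission energy implied by the figures above: E = V * I * t.
voltage_v, current_a, duration_s = 3.3, 0.070, 0.050  # 3.3 V, 70 mA, 50 ms
energy_mj = voltage_v * current_a * duration_s * 1000
print(round(energy_mj, 2))  # 11.55 mJ, i.e., "around 12 mJ" per packet
```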
**Figure 10.** Characteristic short-time dynamics for the LoRa communication alternative: From the
LoRa module activation (**left**) to the energy peaks reflecting the packet transmission events (**top
right**) and to the corresponding textual information content as intercepted by the gateway (**bottom
right**).
-----

_Sensors 2022, 22, 4874_ 17 of 20

According to the overall performance description presented herein, it can be inferred
that, typically, the benefits of the pilot implementation being discussed were maximized in
application cases where many water consumption check points were needed, spread over
an area of a few kilometres.
_4.3. Node Cost Issues_
The total cost of each of the discussed nodes, after adding 6€ for the Raspberry
Pi Pico unit, 15€ for the LoRa-equipped module, 8€ for the YF-S201 flow sensor,
8€ for LiPo batteries and 5€ for a good-quality plastic enclosure box, was
around 42€. The utilization of a LoRa transceiver instead of a typical Wi-Fi radio saved
energy and offered improved distance coverage. The decision to use the LoRa32u4
board added some extra cost (about 5€, compared with a bare LoRa chip) but provided
further GPIO pins and connectivity options, as well as fast programming and testing of
the diverse communication and arithmetic processing variants, thus counterbalancing the
almost 15 min required to compile the code containing the trained neural
network model destined for the Raspberry Pi Pico unit. The gateway node needed 30€ for
a Raspberry Pi Model 3 A+, 15€ for the LoRa32u4 board, 5€ for a plastic enclosure box, and
5€ for a power supply, resulting in a cost below 60€.
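As a sanity check on the figures above, the bill of materials can be tallied programmatically; the prices are exactly those quoted in the text:

```python
# Bill-of-materials tally for the sensor node and the gateway,
# using the component prices quoted in the text (in euros).
sensor_node = {
    "Raspberry Pi Pico": 6,
    "LoRa32u4 module": 15,
    "YF-S201 flow sensor": 8,
    "LiPo batteries": 8,
    "enclosure box": 5,
}
gateway = {
    "Raspberry Pi 3 Model A+": 30,
    "LoRa32u4 module": 15,
    "enclosure box": 5,
    "power supply": 5,
}

print(f"Sensor node total: {sum(sensor_node.values())} EUR")  # 42
print(f"Gateway total: {sum(gateway.values())} EUR")          # 55, i.e. below 60
```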
_4.4. Further Discussion_
This work presented a pilot implementation targeted at intercepting water usage
events and characterizing them properly, with emphasis on misuse cases
such as leakages or waste. The rapid growth of electronics and of the pairing software
allowed for very cost-effective yet efficient solutions with cutting-edge features. Indeed,
the adoption of machine learning techniques at the edge points (i.e., where the water
sensors are) drastically reduced the amount of information that needed to travel
from the peripheral nodes to the central node and the cloud. This resulted in reduced
communication load and energy consumption, while it increased autonomy and privacy.
The focus was put on in situ processing and the pairing with simple, long-range and
low-energy radios, e.g., those of the LoRa technology. The water usage episodes the
experimental system was trained to intercept were of comparatively short duration, but the
software and hardware methods used, and the accuracy achieved, make the proposed
arrangements applicable, with only minor configuration modifications, to
a wide variety of water preservation/misuse detection scenarios. Several issues
are still open, requiring more elaboration for the delivery of an out-of-the-box solution.
The time interval between the fixed number (e.g., 200) of consecutive flow readings
required for a characterization decision was set to 1 s during training. The same trained
model can still be valid for much longer intervals (e.g., 30 s instead
of 1 s), provided that the flow values are properly normalized and that the
activity is classified following the same pattern. Nevertheless, gathering richer
data sets, reflecting further realistic use cases, can train the model more accurately, and
is an apparent priority for wider applicability. This training can follow the same generic
principles and methods described herein.
The option of using a bare LoRa chip with the Raspberry Pi Pico unit is amongst the
future priorities toward a more commercially friendly version of the prototype presented
herein. While the adoption of the LoRa protocol allows for better flexibility, the LoRaWAN
solution is also feasible, either by implementing the necessary protocol stack in software
on the 32u4 LoRa board, or by utilizing native LoRaWAN chips. Furthermore, these
processes can become more efficient by introducing a sleep/wakeup energy management
schema which will allow the Raspberry Pi Pico to wake up (via interrupts) whenever water
flow activity is intercepted by the flow sensor. The task of efficiently powering the system
in the absence of a permanent power supply nearby is also quite challenging. Indeed, more
than one alternative can be adopted, from using solar panels or a tiny wind generator, to
pairing the rotating blades of the flow sensor unit with a tiny electric generator [50]. Finally,
as the adoption of a Raspberry Pi Model 3 A+ as a central/gateway node provided only
an adequate, basic level of functionality, via elementary web and archiving or database
services, linking with well-known and more user-friendly cloud services is also a case
worth investigating in the future.
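The sleep/wakeup schema mentioned above can be expected to pay off because the idle radio draw dominates the node's energy budget. A back-of-the-envelope estimate follows; the idle (13 mA) and transmission (70 mA for 50 ms) figures are the ones measured earlier, while the 1 mA sleep current and the 10% water-flow activity fraction are illustrative assumptions:

```python
# Back-of-the-envelope duty-cycling estimate for the sensor node.
# Idle and TX figures come from the measurements reported in this paper;
# SLEEP_MA and ACTIVE_FRACTION are illustrative assumptions.
IDLE_MA, SLEEP_MA, TX_MA = 13.0, 1.0, 70.0
TX_S, PERIOD_S = 0.050, 120.0   # one packet every 2 min (fastest scenario)
ACTIVE_FRACTION = 0.10          # assume water flows ~10% of the time

always_on = IDLE_MA + TX_MA * TX_S / PERIOD_S
duty_cycled = (ACTIVE_FRACTION * IDLE_MA
               + (1 - ACTIVE_FRACTION) * SLEEP_MA
               + TX_MA * TX_S / PERIOD_S)

print(f"Always-on average draw : {always_on:.2f} mA")
print(f"Duty-cycled average    : {duty_cycled:.2f} mA")
print(f"Battery life gain      : {always_on / duty_cycled:.1f}x")
```

Even under these rough assumptions, the gain is several-fold, which is why wake-on-flow interrupts are a natural next step for battery- or harvester-powered deployments.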
**5. Conclusions**
In this paper, the synergy between several innovative, low-cost electronic components and software was exploited in order to monitor and remotely report characteristic
water consumption/misuse events. The whole approach introduces modern Edge AI
techniques (i.e., combining IoT, ML and Edge Computing principles) that until recently
could not be executed on traditional low-cost microcontrollers. The challenges
for the delivery of a generally applicable and inexpensive alerting system for either urban
or rural water resource usage were further highlighted. The system being presented can
work in a decentralized manner, as the amount of information that has to travel from the
edges to the cloud is drastically reduced, or becomes practically unnecessary, thus minimizing energy requirements and increasing privacy. Only the final decision
(water usage characterization) information has to be transmitted to the final user (e.g., the
farmer), and the cloud is necessary only in case the latter is not nearby or asks for
sophisticated post-processing of the information.
As for the future, more optimized variants of the proposed system will be assessed, in
terms of hardware selection (e.g., of flow sensor units), neural network model accuracy,
networking options and energy autonomy. Great companies, such as Arduino, Raspberry,
ESP or Adafruit, during their noble competition, will continue to produce excellent parts
with leveraged application support potential. Finally, an out-of-the-box version of the
functionality being presented, of commercial standards, exploiting additional well-known
services, and thus exhibiting increased user-friendliness, will be a significant future priority.
**Author Contributions: Conceptualization, D.L.; methodology, D.L. and K.-A.L.; software, D.L.;**
validation, D.L., K.-A.L., C.M. and K.G.A.; investigation, D.L., K.-A.L. and C.M.; data curation,
K.-A.L.; writing—original draft preparation, D.L. and K.-A.L.; writing—review and editing, D.L.,
K.-A.L., C.M. and K.G.A.; visualization, D.L. and K.-A.L.; supervision, K.G.A. All authors have read
and agreed to the published version of the manuscript.
**Funding: This research received no external funding.**
**Institutional Review Board Statement: Not applicable.**
**Informed Consent Statement: Not applicable.**
**Data Availability Statement: Available upon request.**
**Acknowledgments: The authors would like to thank the personnel and the students of the Dept. of**
Natural Resources Management & Agricultural Engineering of the Agricultural University of Athens,
Greece, for their assistance in the deployment and testing of the discussed system.
**Conflicts of Interest: The authors declare no conflict of interest.**
**References**
1. [FAO. Climate-Smart Agriculture Sourcebook. 2013. Available online: http://www.fao.org/3/i3325e/i3325e.pdf (accessed on](http://www.fao.org/3/i3325e/i3325e.pdf)
25 March 2022).
2. [Statista. IoT: Number of Connected Devices Worldwide 2012–2025. Available online: https://www.statista.com/statistics/471264/iot-number-of-connected-devices-worldwide/ (accessed on 8 June 2022).](https://www.statista.com/statistics/471264/iot-number-of-connected-devices-worldwide/)
3. Dastjerdi, A.V.; Buyya, R. Fog Computing: Helping the Internet of Things Realize Its Potential. IEEE Comput. Soc. 2016, 49,
[112–116. [CrossRef]](http://doi.org/10.1109/MC.2016.245)
4. [Merenda, M.; Porcaro, C.; Iero, D. Edge machine learning for AI-Enabled IoT devices: A review. Sensors 2020, 20, 2533. [CrossRef]](http://doi.org/10.3390/s20092533)
[[PubMed]](http://www.ncbi.nlm.nih.gov/pubmed/32365645)
5. [TensorFlow Lite. Available online: https://www.tensorflow.org/lite (accessed on 8 June 2022).](https://www.tensorflow.org/lite)
6. [TinyML. Available online: https://www.tensorflow.org/lite (accessed on 8 June 2022).](https://www.tensorflow.org/lite)
7. Loukatos, D.; Arvanitis, K.G. Multi-Modal Sensor Nodes in Experimental Scalable Agricultural IoT Application Scenarios; Springer:
[Berlin/Heidelberg, Germany, 2021; pp. 101–128. [CrossRef]](http://doi.org/10.1007/978-3-030-71172-6_5)
8. Kingdom, B.; Liemberger, R.; Marin, P. The Challenge of Reducing Non-Revenue (NRW) Water in Developing Countries. How the Private
_Sector Can Help: A Look at Performance-Based Service Contracting; Water Supply and Sanitation (WSS) Sector Board Discussion_
Paper N. 8; The World Bank: Washington, DC, USA, 2006.
9. [United Nations SDG6. Available online: https://sdgs.un.org/goals/goal6 (accessed on 20 February 2022).](https://sdgs.un.org/goals/goal6)
10. [Sustainable Development Goals. Available online: https://sdgs.un.org/goals (accessed on 20 February 2022).](https://sdgs.un.org/goals)
11. Loukatos, D.; Lygkoura, K.-A.; Misthou, S.; Arvanitis, K.G. Internet of Things Meets Machine Learning: A Water Usage Alert Example. In Proceedings of the 2022 IEEE Global Engineering Education Conference (EDUCON), Tunis, Tunisia, 28–31 March 2022.
12. Singh, D.; Tripathi, G.; Jara, A.J. A survey of internet-of-things: Future vision, architecture, challenges, and services. In Proceedings
[of the IEEE World Forum on Internet of Things, Seoul, Korea, 6–8 March 2014; pp. 287–292. [CrossRef]](http://doi.org/10.1109/WF-IoT.2014.6803174)
13. Lova Raju, K.; Vijayaraghavan, V. IoT Technologies in Agricultural Environment: A Survey. Wirel. Pers. Commun. 2020, 113,
[2415–2446. [CrossRef]](http://doi.org/10.1007/s11277-020-07334-x)
14. Farooq, M.S.; Riaz, S.; Abid, A.; Umer, T.; Zikria, Y.B. Role of IoT Technology in Agriculture: A Systematic Literature Review.
_[Electronics 2020, 9, 319. [CrossRef]](http://doi.org/10.3390/electronics9020319)_
15. Zude-Sasse, M.; Akbari, E.; Tsoulias, N.; Psiroukis, V.; Fountas, S.; Ehsani, R. Sensing in Precision Horticulture. In
_Sensing Approaches for Precision Agriculture; Kerry, R., Escolà, A., Eds.; Progress in Precision Agriculture; Springer:_
[Cham, Switzerland, 2021. [CrossRef]](http://doi.org/10.1007/978-3-030-78431-7_8)
16. Islam, N.; Rashid, M.M.; Pasandideh, F.; Ray, B.; Moore, S.; Kadel, R. A Review of Applications and Communication Technologies
for Internet of Things (IoT) and Unmanned Aerial Vehicle (UAV) Based Sustainable Smart Farming. Sustainability 2021, 13, 1821.
[[CrossRef]](http://doi.org/10.3390/su13041821)
17. Liakos, K.G.; Busato, P.; Moshou, D.; Pearson, S.; Bochtis, D. Machine Learning in Agriculture: A Review. Sensors 2018, 18, 2674.
[[CrossRef] [PubMed]](http://doi.org/10.3390/s18082674)
18. Ahmad, L.; Nabi, F. Agriculture 5.0: Artificial Intelligence, IoT, and Machine Learning, 1st ed.; CRC Press: Boca Raton, FL, USA, 2021.
[[CrossRef]](http://doi.org/10.1201/9781003125433)
19. Garcia Lopez, P.; Montresor, A.; Epema, D.; Datta, A.; Higashino, T.; Iamnitchi, A.; Barcellos, M.; Felber, P.; Riviere, E. Edge-centric
[computing: Vision and challenges. ACM SIGCOMM Comput. Commun. Rev. 2015, 45, 37–42. [CrossRef]](http://doi.org/10.1145/2831347.2831354)
20. Gia, T.N.; Qingqing, L.; Queralta, J.P.; Zou, Z.; Tenhunen, H.; Westerlund, T. Edge AI in Smart Farming IoT: CNNs at the Edge and
[Fog Computing with LoRa. In Proceedings of the 2019 IEEE AFRICON, Accra, Ghana, 25–27 September 2019; pp. 1–6. [CrossRef]](http://doi.org/10.1109/AFRICON46755.2019.9134049)
21. Bu, F.; Wang, X. A smart agriculture IoT system based on deep reinforcement learning. Future Gener. Comput. Syst. 2019, 99,
[500–507. [CrossRef]](http://doi.org/10.1016/j.future.2019.04.041)
22. Huang, R.; Ma, C.; Ma, J.; Huangfu, X.; He, Q. Machine learning in natural and engineered water systems. Water Res. 2021, 15,
[117666. [CrossRef] [PubMed]](http://doi.org/10.1016/j.watres.2021.117666)
23. Mashhadi, N.; Shahrour, I.; Attoue, N.; El Khattabi, J.; Aljer, A. Use of Machine Learning for Leak Detection and Localization in
[Water Distribution Systems. Smart Cities 2021, 4, 1293–1315. [CrossRef]](http://doi.org/10.3390/smartcities4040069)
24. Ramachandran, V.; Ramalakshmi, R.; Kavin, B.P.; Hussain, I.; Almaliki, A.H.; Almaliki, A.A.; Elnaggar, A.Y.; Hussein, E.E.
[Exploiting IoT and Its Enabled Technologies for Irrigation Needs in Agriculture. Water 2022, 14, 719. [CrossRef]](http://doi.org/10.3390/w14050719)
25. Roy, A.; Dutta, H.; Griffith, H.; Biswas, S. An On-Device Learning System for Estimating Liquid Consumption from Consumer-Grade Water Bottles and Its Evaluation. Sensors 2022, 22, 2514. [[CrossRef] [PubMed]](http://doi.org/10.3390/s22072514)
26. Slama, S.-B. Prosumer in smart grids based on intelligent edge computing: A review on Artificial Intelligence Scheduling
[Techniques. Ain Shams Eng. J. 2022, 13, 101504. [CrossRef]](http://doi.org/10.1016/j.asej.2021.05.018)
27. Gaspar, P.D.; Fernandez, C.M.; Soares, V.N.G.J.; Caldeira, J.M.L.P.; Silva, H. Development of Technological Capabilities through
the Internet of Things (IoT): Survey of Opportunities and Barriers for IoT Implementation in Portugal’s Agro-Industry. Appl. Sci.
**[2021, 11, 3454. [CrossRef]](http://doi.org/10.3390/app11083454)**
28. McCaig, M.; Rezania, D.; Dara, R. Is the Internet of Things a helpful employee? An exploratory study of discourses of Canadian
[farmers. Internet Things 2022, 17, 100466. [CrossRef]](http://doi.org/10.1016/j.iot.2021.100466)
29. Nkosi, S.H.; Chowdhury, S.P.D. Automated Irrigation and Water Level Management System Using Raspberry PI. In Proceedings
[of the 2018 IEEE PES/IAS PowerAfrica, Cape Town, South Africa, 26–29 June 2018; pp. 804–809. [CrossRef]](http://doi.org/10.1109/PowerAfrica.2018.8521109)
30. Gosavi, G.; Gawde, G.; Gosavi, G. Smart water flow monitoring and forecasting system. In Proceedings of the 2017 2nd IEEE
International Conference on Recent Trends in Electronics, Information & Communication Technology (RTEICT), Bangalore, India,
[19–20 May 2017; pp. 1218–1222. [CrossRef]](http://doi.org/10.1109/RTEICT.2017.8256792)
31. Glória, A.; Dionisio, C.; Simões, G.; Cardoso, J.; Sebastião, P. Water Management for Sustainable Irrigation Systems Using
[Internet-of-Things. Sensors 2020, 20, 1402. [CrossRef]](http://doi.org/10.3390/s20051402)
32. Attallah, N.A.; Horsburgh, J.S.; Beckwith, A.S., Jr.; Tracy, R.J. Residential Water Meters as Edge Computing Nodes: Disaggregating
[End Uses and Creating Actionable Information at the Edge. Sensors 2021, 21, 5310. [CrossRef]](http://doi.org/10.3390/s21165310)
33. Neto, A.R.; Soares, B.; Barbalho, F.; Santos, L.; Batista, T.; Delicato, F.C.; Pires, P.F. Classifying Smart IoT Devices for Running
Machine Learning Algorithms. In Anais do XLV Seminário Integrado de Software e Hardware; SBC: Nashville, TN, USA, 2018.
34. Arduino Nano 33 BLE Sense. Overview of the Arduino Nano 33 BLE Sense Microcontroller Unit. 2022. Available online:
[https://store.arduino.cc/products/arduino-nano-33-ble-sense (accessed on 25 February 2022).](https://store.arduino.cc/products/arduino-nano-33-ble-sense)
35. [Raspberry Pi Pico. Overview of the Raspberry Pi Pico Microcontroller Unit. 2022. Available online: https://www.raspberrypi.](https://www.raspberrypi.com/products/raspberry-pi-pico/)
[com/products/raspberry-pi-pico/ (accessed on 25 March 2022).](https://www.raspberrypi.com/products/raspberry-pi-pico/)
36. [Arduino Uno. Arduino Uno Board Description on the Official Arduino Site. 2022. Available online: https://store.arduino.cc/](https://store.arduino.cc/arduino-uno-rev3)
[arduino-uno-rev3 (accessed on 20 February 2022).](https://store.arduino.cc/arduino-uno-rev3)
37. LoRa32u4. The LoRa32u4 Module Description. 2022. [Available online: https://www.diymalls.com/LoRa32u4-II-Lora-](https://www.diymalls.com/LoRa32u4-II-Lora-Development-Board-868mhz-915mhz-Lora-Module)
[Development-Board-868mhz-915mhz-Lora-Module (accessed on 25 March 2022).](https://www.diymalls.com/LoRa32u4-II-Lora-Development-Board-868mhz-915mhz-Lora-Module)
38. [Arduino Software IDE. 2022. Available online: https://www.arduino.cc/en/Guide/Environment (accessed on 20 February 2022).](https://www.arduino.cc/en/Guide/Environment)
39. [ESP8266. The ESP8266 Low-Cost Wi-Fi Microchip. 2022. Available online: https://en.wikipedia.org/wiki/ESP8266 (accessed on](https://en.wikipedia.org/wiki/ESP8266)
20 February 2022).
40. [Twinschip. Water Flow Meter. 2022. Available online: https://www.twinschip.com/Water-Flow%20Sensor-Control-Effect-](https://www.twinschip.com/Water-Flow%20Sensor-Control-Effect-Flowmeter-Hall--YF-S201)
[Flowmeter-Hall--YF-S201 (accessed on 20 February 2022).](https://www.twinschip.com/Water-Flow%20Sensor-Control-Effect-Flowmeter-Hall--YF-S201)
41. MIT App Inventor Programming Environment. Available online: [http://appinventor.mit.edu/explore/ (accessed on](http://appinventor.mit.edu/explore/)
20 February 2022).
42. [EdgeImpulse. 2022. Available online: https://www.edgeimpulse.com/ (accessed on 20 March 2022).](https://www.edgeimpulse.com/)
43. [TCP/IP Extension. Description of the TCP Extension for the MIT App Inventor Environment. 2022. Available online: https:](https://community.appinventor.mit.edu/t/tcp-ip-extension/7142)
[//community.appinventor.mit.edu/t/tcp-ip-extension/7142 (accessed on 20 March 2022).](https://community.appinventor.mit.edu/t/tcp-ip-extension/7142)
44. UDP/IP Extension. Description of the UDP Extension for the MIT App Inventor Environment. 2022. Available online:
[https://ullisroboterseite.de/android-AI2-UDP-en.html (accessed on 20 March 2022).](https://ullisroboterseite.de/android-AI2-UDP-en.html)
45. Raspberry Pi 3 Model A+. Raspberry Pi 3 Model A+ Board Description on the Official Raspberry Site. 2022. Available online:
[https://www.raspberrypi.com/products/raspberry-pi-3-model-a-plus/ (accessed on 25 March 2022).](https://www.raspberrypi.com/products/raspberry-pi-3-model-a-plus/)
46. [RadioHead. The RadioHead Library to Support LoRa Modules. 2022. Available online: https://www.airspayce.com/mikem/](https://www.airspayce.com/mikem/arduino/RadioHead/)
[arduino/RadioHead/ (accessed on 25 February 2022).](https://www.airspayce.com/mikem/arduino/RadioHead/)
47. [Arduino Cryptography Library. Description of the Arduino Cryptography Library. 2022. Available online: https://www.arduino.](https://www.arduino.cc/reference/en/libraries/crypto/)
[cc/reference/en/libraries/crypto/ (accessed on 25 February 2022).](https://www.arduino.cc/reference/en/libraries/crypto/)
48. Loukatos, D.; Fragkos, A.; Arvanitis, K. Experimental Performance Evaluation Techniques of LoRa Radio Modules and Exploitation for Agricultural Use. In Information and Communication Technologies for Agriculture—Theme I: Sensors; Bochtis, D.D.,
Lampridi, M., Petropoulos, G.P., Ampatzidis, Y., Pardalos, P., Eds.; Springer International Publishing: Cham, Switzerland, 2022;
[pp. 101–120. [CrossRef]](http://doi.org/10.1007/978-3-030-84144-7_4)
49. Loukatos, D.; Dimitriou, N.; Manolopoulos, I.; Kontovasilis, K.; Arvanitis, K.G. Revealing Characteristic IoT Behaviors by
Performing Simple Energy Measurements via Open Hardware/Software Components. In Proceedings of the Sixth International
Congress on Information and Communication Technology—ICICT 2021, London, UK, 25–26 February 2021; Springer: Singapore,
[2022; pp. 1045–1053. [CrossRef]](http://doi.org/10.1007/978-981-16-1781-2_90)
50. Micro Water Turbine—Hydroelectric Generator. Description of the 5V DC water turbine miniature electric generator.
[Available online: https://www.seeedstudio.com/Micro-Water-Tubine-Generator-DC-5V-p-4512.html (accessed on 19 June 2022).](https://www.seeedstudio.com/Micro-Water-Tubine-Generator-DC-5V-p-4512.html)
-----
|
{
"disclaimer": "Notice: Paper or abstract available at https://pmc.ncbi.nlm.nih.gov/articles/PMC9269755, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "CCBY",
"status": "GOLD",
"url": "https://www.mdpi.com/1424-8220/22/13/4874/pdf?version=1656464599"
}
| 2,022
|
[
"JournalArticle"
] | true
| 2022-06-28T00:00:00
|
[
{
"paperId": "cc227498317f99d9de7729403b60149dc3a45556",
"title": "Internet of Things Meets Machine Learning: A Water Usage Alert Example"
},
{
"paperId": "2101828e7b0a7de7fddc0e5b927b207762d647f2",
"title": "An On-Device Learning System for Estimating Liquid Consumption from Consumer-Grade Water Bottles and Its Evaluation"
},
{
"paperId": "35cb2ded0159ccf957f16a2597f35397ecdb4bec",
"title": "Exploiting IoT and Its Enabled Technologies for Irrigation Needs in Agriculture"
},
{
"paperId": "55cf0c266fb5af9fbd5e8ccbb2be2e5b439cc035",
"title": "Is the Internet of Things a helpful employee? An exploratory study of discourses of Canadian farmers"
},
{
"paperId": "366e5d69a755b050caca59905b319773a4e22a4c",
"title": "Use of Machine Learning for Leak Detection and Localization in Water Distribution Systems"
},
{
"paperId": "f5b0858d4b2e826a52a73b1cd220f0a296c2a42e",
"title": "Machine learning in natural and engineered water systems."
},
{
"paperId": "ac5e8de8f4b1af4e31a4738d33397315b67108e0",
"title": "Revealing Characteristic IoT Behaviors by Performing Simple Energy Measurements via Open Hardware/Software Components"
},
{
"paperId": "7d927dea3feae43f18c8cbedc00e423ac86fa9fe",
"title": "Residential Water Meters as Edge Computing Nodes: Disaggregating End Uses and Creating Actionable Information at the Edge"
},
{
"paperId": "a39a06b4769c1f346a68d8536d7931dc76bc8305",
"title": "Prosumer in smart grids based on intelligent edge computing: A review on Artificial Intelligence Scheduling Techniques"
},
{
"paperId": "b1bfdf9f3094ce9c77c1cd0924684667107f7d17",
"title": "Development of Technological Capabilities through the Internet of Things (IoT): Survey of Opportunities and Barriers for IoT Implementation in Portugal’s Agro-Industry"
},
{
"paperId": "cb342850cbf5d2a64460b377aa6683c355d69a6b",
"title": "Agriculture 5.0: Artificial Intelligence, IoT, and Machine Learning"
},
{
"paperId": "b090ed9ce4c1ef4bb371f85b243a1d758667e4f5",
"title": "A Review of Applications and Communication Technologies for Internet of Things (IoT) and Unmanned Aerial Vehicle (UAV) Based Sustainable Smart Farming"
},
{
"paperId": "821fde6dc36d1264c765d249d4247ea66daff55f",
"title": "Edge Machine Learning for AI-Enabled IoT Devices: A Review"
},
{
"paperId": "0c93d892f85d490bde75396e97f7ddc7c55f6186",
"title": "IoT Technologies in Agricultural Environment: A Survey"
},
{
"paperId": "e721f0c3169cbec5071df5d81c07c883d9006dbd",
"title": "Water Management for Sustainable Irrigation Systems Using Internet-of-Things†"
},
{
"paperId": "3bfa57dbd2551f6ccf65053632ae98078655a0ae",
"title": "Role of IoT Technology in Agriculture: A Systematic Literature Review"
},
{
"paperId": "ac6a136ff8c2940792426b62208ebdd72683e8d0",
"title": "A smart agriculture IoT system based on deep reinforcement learning"
},
{
"paperId": "6f76c0d5fed290645945340689246764aa1590e9",
"title": "Edge AI in Smart Farming IoT: CNNs at the Edge and Fog Computing with LoRa"
},
{
"paperId": "6e23398447a022fb9495c44fa80e9de593a574bc",
"title": "Machine Learning in Agriculture: A Review"
},
{
"paperId": "1acb4ece41a7eb8d1779d98fb6396b447197647e",
"title": "Classifying Smart IoT Devices for Running Machine Learning Algorithms"
},
{
"paperId": "1ec2c407cb0382801a21b0c95f33788bab990289",
"title": "Automated Irrigation and Water Level Management System Using Raspberry PI"
},
{
"paperId": "fd1fa124c60432bb2ddcff8f068e98482d098026",
"title": "Smart water flow monitoring and forecasting system"
},
{
"paperId": "778b90dbd4edcb1df6201978c20aa860a26eab7c",
"title": "Fog Computing: Helping the Internet of Things Realize Its Potential"
},
{
"paperId": "c00be1e030c2a7550f473bdfdcc59ba2afb3a0b6",
"title": "Edge-centric Computing: Vision and Challenges"
},
{
"paperId": "da094a4967fbf6c9153ad9b78aa8bbd0c36d3b39",
"title": "A survey of Internet-of-Things: Future vision, architecture, challenges and services"
},
{
"paperId": "9b1e90f23ca69c78f292c09b64f275dfeb60d263",
"title": "Information and Communication Technologies for Agriculture—Theme I: Sensors"
},
{
"paperId": "29c9db63a79610cf887fa0b595019247edfa46b3",
"title": "Multi-Modal Sensor Nodes in Experimental Scalable Agricultural IoT Application Scenarios"
},
{
"paperId": "f6a462166190d1c318ca4758408df9c23fa95ecb",
"title": "Climate-smart agriculture: sourcebook."
},
{
"paperId": null,
"title": "Arduino Uno Board Description on the Official Arduino Site. 2022"
},
{
"paperId": null,
"title": "Description of the UDP Extension for the MIT App Inventor Environment. 2022"
},
{
"paperId": null,
"title": "Overview of the Raspberry Pi Pico Microcontroller Unit"
},
{
"paperId": null,
"title": "TensorFlow Lite TinyML"
},
{
"paperId": null,
"title": "Sensing in Precision Horticulture"
},
{
"paperId": null,
"title": "Progress in Precision Agriculture"
},
{
"paperId": null,
"title": "Arduino Nano 33 BLE Sense . Overview of the Arduino Nano 33 BLE Sense Microcontroller Unit"
},
{
"paperId": null,
"title": "The Challenge of Reducing Non - Revenue ( NRW ) Water in Developing Countries"
},
{
"paperId": null,
"title": "United Nations SDG 6 Sustainable Development Goals"
},
{
"paperId": null,
"title": "The LoRa32u4 Module Description"
},
{
"paperId": null,
"title": "Overview of the Arduino Nano 33 BLE Sense Microcontroller Unit"
},
{
"paperId": null,
"title": "Arduino Uno Board Description on the Official Arduino Site"
},
{
"paperId": null,
"title": "IoT : Number of Connected Devices Worldwide 2012 – 2025"
},
{
"paperId": null,
"title": "Description of the Arduino Cryptography Library"
},
{
"paperId": null,
"title": "Micro Water Turbine-Hydroelectric Generator. Description of the 5V DC water turbine miniature electric generator"
},
{
"paperId": null,
"title": "The RadioHead Library to Support LoRa Modules"
},
{
"paperId": null,
"title": "Arduino Cryptography Library . Description of the Arduino Cryptography Library"
},
{
"paperId": null,
"title": "Description of the UDP Extension for the MIT App Inventor Environment"
},
{
"paperId": null,
"title": "Number of Connected Devices Worldwide 2012–2025"
},
{
"paperId": null,
"title": "Raspberry Pi 3 Model A + Board Description on the Official Raspberry Site"
},
{
"paperId": null,
"title": "Description of the TCP Extension for the MIT App Inventor Environment"
},
{
"paperId": null,
"title": "Sensing in Precision Horticulture. In Sensing Approaches for Precision Agriculture; Kerry, R., Escolà, A., Eds.; Progress in Precision Agriculture"
},
{
"paperId": null,
"title": "Arduino Uno Board Description on the Official Arduino Site LoRa 32 u 4 . The LoRa 32 u 4 Module Description"
}
] | 21,485
|
en
|
[
{
"category": "Engineering",
"source": "external"
},
{
"category": "Engineering",
"source": "s2-fos-model"
},
{
"category": "Environmental Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/ffd510fce9d0243d389179d9800edcce02bbc60e
|
[
"Engineering"
] | 0.828904
|
Search Improvement Process-Chaotic Optimization-Particle Swarm Optimization-Elite Retention Strategy and Improved Combined Cooling-Heating-Power Strategy Based Two-Time Scale Multi-Objective Optimization Model for Stand-Alone Microgrid Operation
|
ffd510fce9d0243d389179d9800edcce02bbc60e
|
[
{
"authorId": "40351394",
"name": "Fei Wang"
},
{
"authorId": "2116380418",
"name": "Lidong Zhou"
},
{
"authorId": "145789439",
"name": "Hui Ren"
},
{
"authorId": "2109323189",
"name": "Xiaoli Liu"
}
] |
{
"alternate_issns": null,
"alternate_names": null,
"alternate_urls": null,
"id": null,
"issn": null,
"name": null,
"type": null,
"url": null
}
|
The optimal dispatching model for a stand-alone microgrid (MG) is of great importance to its operation reliability and economy. This paper aims at addressing the difficulties in improving the operational economy and maintaining the power balance under uncertain load demand and renewable generation, which could be even worse in such abnormal conditions as storms or abnormally low or high temperatures. A new two-time scale multi-objective optimization model, including day-ahead cursory scheduling and real-time scheduling for finer adjustments, is proposed to optimize the operational cost, load shedding compensation and environmental benefit of stand-alone MG through controllable load (CL) and multi-distributed generations (DGs). The main novelty of the proposed model is that the synergetic response of CL and energy storage system (ESS) in real-time scheduling offset the operation uncertainty quickly. And the improved dispatch strategy for combined cooling-heating-power (CCHP) enhanced the system economy while the comfort is guaranteed. An improved algorithm, Search Improvement Process-Chaotic Optimization-Particle Swarm Optimization-Elite Retention Strategy (SIP-CO-PSO-ERS) algorithm with strong searching capability and fast convergence speed, was presented to deal with the problem brought by the increased errors between actual renewable generation and load and prior predictions. Four typical scenarios are designed according to the combinations of day types (work day or weekend) and weather categories (sunny or rainy) to verify the performance of the presented dispatch strategy. The simulation results show that the proposed two-time scale model and SIP-CO-PSO-ERS algorithm exhibit better performance in adaptability, convergence speed and search ability than conventional methods for the stand-alone MG’s operation.
|
# energies
_Article_
## Search Improvement Process-Chaotic Optimization-Particle Swarm Optimization-Elite Retention Strategy and Improved Combined Cooling-Heating-Power Strategy Based Two-Time Scale Multi-Objective Optimization Model for Stand-Alone Microgrid Operation
**Fei Wang 1,2 [[ID]](https://orcid.org/0000-0002-7332-9726), Lidong Zhou 1, Hui Ren 1,* and Xiaoli Liu 3**
1 State Key Laboratory of Alternate Electrical Power System with Renewable Energy Sources, North China
Electric Power University, Baoding 071003, China; feiwang@ncepu.edu.cn (F.W.);
zhoulidong_ncepu@sina.com (L.Z.)
2 Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign,
Urbana 61801, IL, USA
3 Shuozhou Power Company of State Grid Shanxi Electric Power Company, Shuozhou 036000, China;
ncepulxl@sina.com
***** Correspondence: hren@ncepu.edu.cn; Tel.: +86-139-3328-5267
Received: 18 October 2017; Accepted: 10 November 2017; Published: 23 November 2017
**Abstract: The optimal dispatching model for a stand-alone microgrid (MG) is of great importance**
to its operation reliability and economy. This paper aims at addressing the difficulties in improving
the operational economy and maintaining the power balance under uncertain load demand
and renewable generation, which could be even worse in such abnormal conditions as storms
or abnormally low or high temperatures. A new two-time scale multi-objective optimization
model, including day-ahead cursory scheduling and real-time scheduling for finer adjustments,
is proposed to optimize the operational cost, load shedding compensation and environmental benefit
of stand-alone MG through controllable load (CL) and multi-distributed generations (DGs). The main
novelty of the proposed model is that the synergetic response of CL and energy storage system (ESS)
in real-time scheduling offset the operation uncertainty quickly. And the improved dispatch strategy
for combined cooling-heating-power (CCHP) enhanced the system economy while the comfort is
guaranteed. An improved algorithm, Search Improvement Process-Chaotic Optimization-Particle
Swarm Optimization-Elite Retention Strategy (SIP-CO-PSO-ERS) algorithm with strong searching
capability and fast convergence speed, was presented to deal with the problem brought by the
increased errors between actual renewable generation and load and prior predictions. Four typical
scenarios are designed according to the combinations of day types (work day or weekend) and
weather categories (sunny or rainy) to verify the performance of the presented dispatch strategy.
The simulation results show that the proposed two-time scale model and SIP-CO-PSO-ERS algorithm
exhibit better performance in adaptability, convergence speed and search ability than conventional
methods for the stand-alone MG’s operation.
**Keywords: stand-alone MG; SIP-CO-PSO-ERS; two-time scale optimized model; improved CCHP**
dispatch strategy; multi-scenario; economic dispatch
_Energies 2017, 10, 1936_ 2 of 23
**1. Introduction**
Owing to the great pressure of the global energy crisis and environmental pollution [1], much
effort has been devoted to integrating different kinds of distributed generations (DGs) into microgrids
(MGs) in order to reduce carbon emissions and improve power quality [2]. MGs could operate in
grid-connected or islanded mode, managing all kinds of DGs effectively [3]. This is an ideal way
to realize local coordination control and optimized operation of multi-DGs, including micro-gas
turbines (MTs), diesel engines (DEs), fuel cells (FCs), photovoltaics (PVs), wind turbines (WTs), small
hydropower and some energy storage devices such as flywheels, super capacitors and accumulators [4].
Most of the existing MGs are designed to work primarily under on-grid mode, excluding emergency
situations [5]. However, the impact of hybrid renewable energy sources (HRES) on the power system deserves much attention. Studies on unsymmetrical faults [6], transient stability improvement [7] and ground fault current [8] have been conducted for MGs, and they benefit the application of renewable energies. On the other hand, more and more attention is being drawn to the study of the
stand-alone MG for its capability to supply power economically in some other particular applications,
such as MGs for islands or remote areas without power grids [9,10].
For a small but important power system like an MG, the problems of voltage balance [11], fault current limiting and power system stability are also very important. In all cases, power quality [12] must be guaranteed through a series of means such as storage coordination [13], dynamic control [14]
or demand response (DR) [15]. Fortunately, all these operation requirements could be included into
the optimized operation model as constraints. In order to take full advantages of stand-alone MGs
and promote their popularization, researchers around the world have devoted momentous efforts
to the optimal operation of stand-alone MGs [16]. However, the uncertainty of renewable power generation caused by weather conditions [17–19], together with uncertain load demand, greatly challenges economic operation. Because of this uncertainty, the predicted data of renewable energy and demand are subject
to errors, which negatively affect the optimized generation schedules and operation plans [20,21].
As a result, the economic operation cannot be realized and even the power balance would be broken
in extreme conditions such as storms, abnormally high or low temperatures, or part damage of
distribution facilities.
To mitigate the impact of uncertainty on optimized operation, energy storage devices were
introduced to ensure the safety and reliability of the MG with consideration of their lifetime
characteristics [22]. However, the advantage of fast responses for batteries was not used to its full extent
and the environmental benefit was not included in the optimization objective. Secondly, the stochastic
scheduling method was applied in the MG’s optimized operation to decrease the unfavorable effects
brought by the uncertainty [23–25]. To a certain degree, the impacts of uncertainty were mitigated by transforming the optimal operation into a deterministic optimization problem over a large number of stochastic scenarios. However, the fundamental issue of uncertainty was not resolved, because the stochastic method merely considers more scenarios, while no method can take all scenarios into account given the complexity of the environment; another drawback is that the computational burden increases accordingly. Thirdly, with the development of DR, the demand
side has been proved an effective tool to improve system reliability and maintain power balance by
responding to the dispatch information [26–28]. The applications of DR strategies may help to settle the
intermittency of renewable resources by devoting to the balance between energy supply and demand,
thus minimizing the operation costs and providing a more reliable grid management [29]. Although
DR was considered in studies such as [30–32], the expense for DR was not taken into account and the
constraints for DR were not described in the optimized model.
To address these problems and realize the optimized operation of stand-alone MG, this paper
establishes a multi-objective optimized model for a stand-alone MG, consisting of PV, WT, FC, DE,
MT and an energy storage system (ESS) based on the coordinated operation among sources-load-ESS
and an improved dispatch strategy of the MT’s CCHP operation mode. It should be pointed out that
multi-types of micro sources and ESS are considered at the same time so as to improve the stability
and flexibility of stand-alone MG by providing various choices to satisfy the power balance and
coping with emergency circumstances. And the installation cost increase of this structure is following
therefore. Controllable load (CL) is taken into account as DR resources to improve the reliability.
The optimized model is divided into two-time scales in order to deal with the uncertainty of load
demand and renewable power generation. The first time scale model is day-ahead optimization,
which is to seek a global optimal solution for all the generation resources, CL and ESS, based on the
day-ahead predicted data. The renewable integration could be further optimized if storage systems
are coupled with DR in order to enlarge load-shifting capacity [33,34]. Therefore, the coordinating
operation of ESS and CL are introduced into the second time scale model, called real-time optimization,
to adjust the optimized schedule considering the real-time weather condition and demand based on
the day-ahead scheduling.
In terms of the optimization solution, various algorithms are developed recently, such as basic
particle swarm optimization (PSO) [35], ε-constraint method [36] and non-dominated sorting genetic
algorithm II (NSGA-II) [37]. All these algorithms achieved relatively good result in the setting of
MGs and models. However, the performance needs to be further studied when it comes to different
scenarios. PSO is a stochastic and population-based evolutionary algorithm and has gained popularity
in the optimized operation of MGs due to its superiorities of having few constraints on fitness function,
simple principle, easy coding and rapid convergence speed [38]. However, when major fluctuations
occur in the base data of optimized model resulting from different scenarios during stand-alone MG’s
optimized operation, two problems would appear in PSO algorithm: (i) the local and global search
ability is not good enough to find an excellent solution in a relatively short time; (ii) the premature
phenomenon would occur due to the loss of population diversity in the later iterations. Moreover,
conditions could be worse especially for the model with complex variables and intricate scenarios [39].
Chaotic optimization (CO) has a strong local search capability profiting from the characteristics of
randomness, ergodicity and inherent regularity [40] which would be effective to the optimization
problem with many variables and the nature of chaos could also decrease the impact that comes from
renewable energy or load uncertainty. In addition, an adequate elite retention strategy (ERS) could
further improve the solution quality, as well as the convergence speed, even under the inconstant
conditions [41]. In order to solve the problems of poor search ability and premature in PSO, this paper
introduces a dual-step modification (search improvement process and CO) and ERS into PSO to present
a Search Improvement Process-Chaotic Optimization-Particle Swarm Optimization-Elite Retention
Strategy (SIP-CO-PSO-ERS). SIP-CO-PSO-ERS was applied to solve the day-ahead scheduling model,
while linear programming was used to deal with the real-time scheduling model due to the simplicity
of its model which contains fewer decision variables and constraints.
The main contributions of this paper can be summarized as follows:
A new two-time scale multi-objective optimization model which aims to optimize the operation
_•_
cost, load cut compensation and environmental benefit of stand-alone MGs that consists of electric,
thermal and cooling energy styles based on CL and multi-DGs; the synergetic response of CL
and ESS (battery in this paper) in real-time scheduling offsets the operation uncertainty quickly,
and the improved dispatch strategy for CCHP enhances the system economy while guaranteeing comfort;
A dual-step modification and ERS are introduced into PSO to present SIP-CO-PSO-ERS, which
_•_
has a strong search capability and fast convergence speed; four typical scenarios are designed
according to diverse situations to verify the adaptation of SIP-CO-PSO-ERS and proposed
optimized model.
This paper focuses on the achievement of the presented points and is organized as follows.
Section 2 gives a description of the two-time scale model. Section 3 gives a detailed explanation of the proposed SIP-CO-PSO-ERS method. Simulations are given in Section 4 to illustrate the advantages and
validity of the proposed algorithm and model. Section 5 gives a conclusion.
**2. Optimization Model**

_2.1. The CCHP Model and Improved Dispatch Strategy_

2.1.1. The CCHP Model of MT
Generally, the efficiency of an MT generating electricity is 30% at full load, or 10–15% at half load. This is very inefficient, letting much heat energy go to waste. The efficiency can actually rise to more than 80% if the remaining heat energy is reused through the CCHP operation mode [42]. CCHP is composed of a generation module and a heat recovery module, and the latter is further split into an absorption chiller (APC) and a heat-exchanging system (HES). The generation, APC and HES modules export electricity, cooling and heat energy, respectively. The structure is shown in Figure 1.

**Figure 1. The structure of MT’s CCHP operation mode.**

The cost model adopted in this paper for the MT is expressed by Equation (1) and the mathematical description of the heat recovery module is expressed by Equations (2)–(4):

CMT = Cnl × (PMT × Δt) / ηMT (1)

QMT = (PMT × Δt / ηMT) × (1 − ηMT − ηl) (2)

QH = QMT × ηH.REC × ξH (3)

QC = QMT × ηC.REC × ξC (4)

where CMT represents the fuel cost of the MT over the operation time; Cnl stands for the natural gas price; PMT is the electric energy produced by the MT, and ηMT represents the MT’s efficiency; Δt is the dispatch interval, which is 1 h in this paper; QMT is the residual heat of the exhaust air after power generation; ηl represents the heat loss factor of the CCHP system; QH and QC represent the heating and cooling capacity generated from the residual exhaust heat; ηH.REC and ηC.REC are the heat and cooling recovery efficiencies, respectively; ξH and ξC stand for the heating and refrigeration coefficients, respectively. For detailed information about PV, WT, FC, DE and ESS, please refer to [43–45].
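To make Equations (1)–(4) concrete, the sketch below evaluates them for a single dispatch interval. Every numeric parameter (gas price, efficiencies, heating/refrigeration coefficients) is an illustrative placeholder, not a value taken from this paper:

```python
def cchp_outputs(p_mt_kw, dt_h=1.0, c_nl=0.5, eta_mt=0.30,
                 eta_l=0.05, eta_h_rec=0.8, eta_c_rec=0.8,
                 xi_h=0.9, xi_c=1.2):
    """Fuel cost and recovered heat/cooling of an MT in CCHP mode, Eqs. (1)-(4).

    All default parameter values are illustrative placeholders.
    """
    fuel_in = p_mt_kw * dt_h / eta_mt         # fuel energy drawn, kWh
    c_mt = c_nl * fuel_in                     # Eq. (1): fuel cost
    q_mt = fuel_in * (1 - eta_mt - eta_l)     # Eq. (2): residual exhaust heat
    q_h = q_mt * eta_h_rec * xi_h             # Eq. (3): heating capacity
    q_c = q_mt * eta_c_rec * xi_c             # Eq. (4): cooling capacity
    return c_mt, q_mt, q_h, q_c
```

With these placeholder efficiencies, a 30 kW electric output over one hour implies roughly twice as much recoverable exhaust heat, which is exactly the waste the CCHP mode captures.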
2.1.2. Improved CCHP Dispatch Strategy

In general, the MT is designed to operate in CCHP mode, so the electric power it generates is decided solely by the whole MG’s thermal or cooling load. In that case, the electric power output of the MT turns from a decision variable into a constant value related only to the thermal or cooling load. Consequently, the optimization model for the MG is simplified and the contribution of the MT to operation performance is weakened. Based on the fact that a small variation (5% in this paper) in environmental parameters does not have a great impact on people’s comfort, an improved dispatch strategy for CCHP is presented, as shown in Figure 2 (taking the thermal load case as an example). The basic electric power is determined by the MG’s thermal load, while the upper limit rises 5% and the lower limit declines 5% owing to the variation margin of the indoor environmental parameters. Intuitively, the columns in Figure 2 stand for the adjustable range of the MT’s electric power generation.
**Figure 2. Adjustable range of an MT’s electric power generation.**
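The adjustable range shown in Figure 2 is simply a ±5% band around the basic electric power fixed by the thermal load. A minimal sketch (the hourly basic-power values below are made up for illustration):

```python
def mt_power_band(basic_electric_power, margin=0.05):
    """Return (lower, upper) dispatch limits for the MT's electric output:
    a +/-5% band around the basic power implied by the thermal load."""
    return ((1 - margin) * basic_electric_power,
            (1 + margin) * basic_electric_power)

# Hourly basic electric power implied by the thermal load (illustrative values)
basic = [40.0, 55.0, 70.0]
bands = [mt_power_band(p) for p in basic]
```

Within this band the MT output stays a decision variable for the optimizer, instead of being pinned to a single thermal-load-driven value.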
_2.2. Overview of Studied Stand-Alone MG_

Figure 3 shows the MG studied in this paper, comprising ESS, FC, PV, WT, MT and DE. A storage battery (SB) is selected as the ESS, and the improved dispatch strategy for CCHP is applied in this system. Various types of micro sources and ESS are integrated in the MG because operation reliability is the first concern for a stand-alone MG, which lacks support from the utility grid. As a result, the installation cost is not the most important factor in some cases, such as independent islands or scientific surveys in remote areas, and multiple types of generation improve operation stability and reliability. The objective is to obtain the optimal output combination of DGs and realize optimized operation under renewable energy and demand uncertainty. A two-time scale model, consisting of day-ahead scheduling and real-time scheduling, is established for the optimal operation of the stand-alone MG.

**Figure 3. The structure of stand-alone MG.**

All the controllable DGs and CLs are dispatched in the day-ahead scheduling on the basis of the 24-h forecasted output of WT and PV, while only the ESS and CL are dispatched in real time because of their fast response speed, with the MT or WT as an assistant dispatch means. The overall optimized process is shown in Figure 4.

Day-ahead scheduling provides the rough dispatch scheme, while real-time scheduling makes small adjustments based on the day-ahead results to smooth out the actual fluctuations of load and renewable energy relative to the predicted data, reducing their disadvantageous impacts. It should be noted that the battery is charged only in the first time scale.
**Figure 4. The overall optimized process of two-time scale optimization model.**
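The workflow of Figure 4 can be outlined as a coarse day-ahead pass followed by a per-interval correction loop. In the sketch below, the two solver callbacks and all data are hypothetical stand-ins, not the paper's actual optimizers:

```python
def two_time_scale_dispatch(day_ahead_forecast, realtime_feed,
                            solve_day_ahead, solve_realtime):
    """Skeleton of the two-time scale process in Figure 4.

    solve_day_ahead / solve_realtime are placeholder solver callbacks;
    in the paper the first stage also fixes the battery charging schedule.
    """
    # Stage 1: rough 24-h schedule for all DGs, CL and the battery
    schedule = solve_day_ahead(day_ahead_forecast)
    adjustments = []
    # Stage 2: per-interval correction using fast resources (ESS, CL)
    for t, measured in enumerate(realtime_feed):
        error = measured - day_ahead_forecast[t]
        adjustments.append(solve_realtime(schedule[t], error))
    return schedule, adjustments

# Toy usage with trivial stub solvers
forecast = [100.0, 110.0, 95.0]
actual = [102.0, 108.0, 97.0]
sched, adj = two_time_scale_dispatch(
    forecast, actual,
    solve_day_ahead=lambda f: list(f),   # "schedule" = forecast itself
    solve_realtime=lambda s, e: e)       # "adjustment" = raw tracking error
```

The structural point is that the second stage only ever sees the deviation from the first-stage plan, which keeps its model small enough for linear programming, as discussed in Section 3.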
_2.3. The Day-Ahead Scheduling Optimized Model_
The first time scale optimization is the day-ahead scheduling, which dispatches the primary outputs of PV, WT, MT, FC, DE, ESS and the load control quantity (LCQ). For stand-alone MGs, the key operation objective is to keep the power balance within the MG; consequently, it is better to have more energy supply than load demand rather than less. Considering that the response speed of the battery is fast [46], it is charged only in this stage, so that in the second time scale it has enough electricity to discharge rapidly, tracking the load fluctuation over the predicted data and weakening the influence of prediction errors.
2.3.1. Objective Function in Day-Ahead Scheduling
MG optimized operation is a multi-objective, multi-constraint minimization problem. This paper adopts a daily 24-h scheduling model in which the load and renewable energy output are assumed constant within each dispatch period. The objective function includes three sub-goals, which aim to minimize the operation and maintenance cost (OMC) of the different DGs, the pollutant disposal expense and the load control compensation (LCC). The established multi-objective optimization model is:

min F(t) ⇒ [F1(t), F2(t), F3(t)] (5)

where F1(t) is the OMC of the whole MG; F2(t) represents the pollutant disposal cost, and F3(t) is the LCC of the MG. In this paper, all the sub-goals are transformed into cost values and the multi-objective model can be converted into a single-objective model:

min f(t) = min[F1(t) + F2(t) + F3(t)] (6)
The proposed model provides a 24-h scheduling scheme of the various DGs that minimizes the total cost while satisfying the electricity, thermal and cooling loads of the MG.
(1) Operation and Maintenance Cost (OMC)
The OMCs of micro sources are usually proportional to their power outputs. Supposing that the
renewable power generations (WT and PV) have little OMC, then the sub-objective of OMC can be
expressed by:
F1(t) = ∑_{i=1}^{N} (Ci(Pi^t) + Ki Pi^t Δt) + KH PH^t Δt + KC PC^t Δt (7)
where Pi^t and Ci(Pi^t) are the generation output and fuel cost of micro source i in the t-th dispatch period; Ki, KH and KC are the maintenance factors of micro source i, the HES module and the APC module, respectively; PH^t and PC^t represent the heat power generated by the HES and the cooling power generated by the APC, respectively.
(2) Pollutant Disposal Cost
MT, DE and FC release NOX, CO2, SO2 and other pollutants into the air during generation. The emission coefficients differ among generation units, as do the pollutants’ impacts on the environment [47]. In this paper, the pollutant disposal cost is given by Equation (8):

F2(t) = ∑_{i=1}^{N} ∑_{k=1}^{M} αk × Eik × Pi^t × Δt (8)

where Eik is the quantity of pollutant k released when micro source i generates unit power; N is the number of generation units and M is the number of pollutant types; αk is the conversion coefficient for each pollutant (NOX, CO2, SO2).
(3) Load Control Compensation (LCC)
To take advantage of demand-side management and improve operation reliability, CL is considered; it can also act as an auxiliary resource for the MG’s power balance. The LCC corresponds to the reliability cost of the MG. It is difficult to calculate the reliability cost strictly in theory; generally, it is given by the product of the expected energy not supplied (EENS) and the unit interruption cost (UIC). In this paper, the EENS is represented by the LCQ, which takes the whole MG’s operation economy and reliability into account, and the corresponding compensation cost is calculated as follows:

F3(t) = pD^t × Pcut^t (9)

where pD^t is the UIC of the MG and Pcut^t is the LCQ.
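Combining Equations (6)–(9), the day-ahead cost of one candidate schedule for a single dispatch interval can be scored as below. The function and its argument layout are illustrative, not taken from the paper's implementation:

```python
def day_ahead_cost(p, fuel_cost, k, p_h, k_h, p_c, k_c,
                   emis, alpha, p_cut, p_d, dt=1.0):
    """Single-interval day-ahead cost f(t) = F1 + F2 + F3, Eqs. (6)-(9).

    p         : list of micro-source outputs P_i
    fuel_cost : list of fuel costs C_i(P_i) for the same interval
    k         : list of maintenance factors K_i
    emis      : emis[i][j] = E_ik, emission of pollutant k per unit power
    alpha     : conversion coefficients alpha_k
    """
    f1 = sum(c + ki * pi * dt for c, ki, pi in zip(fuel_cost, k, p)) \
         + k_h * p_h * dt + k_c * p_c * dt                    # Eq. (7)
    f2 = sum(a * e_ik * pi * dt
             for pi, row in zip(p, emis)
             for a, e_ik in zip(alpha, row))                  # Eq. (8)
    f3 = p_d * p_cut                                          # Eq. (9)
    return f1 + f2 + f3
```

In the day-ahead stage this scalar is what the SIP-CO-PSO-ERS particles are scored against, summed over the 24 dispatch intervals.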
2.3.2. Operation Constraints in Day-Ahead Scheduling
In terms of MG’s optimized operation, constraints like security, reliability and power
balance must be guaranteed [48]. These constraints can be divided into equality constraints and
inequality constraints.
(1) Power Balance Constraint:
∑_{i=1}^{K} Pi = PL − Pcut (10)

QH = QHL (11)

QC = QCL (12)
where Pi is the output of generation unit i; PL and Pcut are the load demand and load control power, respectively; QHL and QCL represent the thermal and cooling loads, respectively; QH and QC are the thermal and cooling power supplied by the micro sources.
(2) Output Constraint:
Pi.min ≤ Pi^t ≤ Pi.max (13)

where Pi.min and Pi.max are the minimum and maximum power outputs of generation unit i.
(3) Ramp Up/Down Rate Constraint:
Pi^t − Pi^(t−1) ≤ Rup Δt (14)

Pi^(t−1) − Pi^t ≤ Rdown Δt (15)

where Rup and Rdown are the ramp up/down rates of micro source i, and Pi^t and Pi^(t−1) represent the output of micro source i in the current and previous dispatch intervals.
(4) Battery Operation Constraint:
SSOC.min < SSOC < SSOC.max (16)

−KC QB ηSBC ≤ PSB^t ≤ KD QB ηSBD (17)

where SSOC.min and SSOC.max are the minimum and maximum state of charge (SOC) of the battery; KC and KD are the maximum charging/discharging proportions in a dispatch interval, while PSB^t is the battery’s power output in the t-th period; ηSBC and ηSBD represent the charge/discharge efficiencies, and QB represents the capacity of the battery.
(5) Load Control Constraint:
Pcut^t ≤ Pcut.max (18)

where Pcut^t is the LCQ in the t-th dispatch interval and Pcut.max is the load control upper limit of the MG.
(6) MT’s Electric Output Constraint:
0.95 PE,MT.base ≤ PE,MT ≤ 1.05 PE,MT.base (19)

where PE,MT is the electric output of the MT and PE,MT.base is the basic electric power determined by the thermal or cooling load (see Section 2.1.2).
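Constraints (10) and (13)–(15) can be screened with a simple feasibility check before a candidate dispatch is accepted. The sketch below omits the thermal/cooling balance, battery and load-control constraints for brevity; all names and the sample limit values are illustrative:

```python
def feasible(p_now, p_prev, p_load, p_cut, p_min, p_max,
             r_up, r_down, dt=1.0, tol=1e-6):
    """Partial feasibility check for a candidate dispatch vector:
    electric power balance (10), output limits (13), ramp limits (14)-(15)."""
    if abs(sum(p_now) - (p_load - p_cut)) > tol:       # Eq. (10)
        return False
    for pi, pj, lo, hi, ru, rd in zip(p_now, p_prev,
                                      p_min, p_max, r_up, r_down):
        if not (lo <= pi <= hi):                       # Eq. (13)
            return False
        if pi - pj > ru * dt or pj - pi > rd * dt:     # Eqs. (14)-(15)
            return False
    return True
```

In a PSO setting, a check like this (or an equivalent penalty term) is what keeps infeasible particles from dominating the swarm.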
_2.4. The Real-Time Scheduling Optimized Model_
The second time scale optimization is the real-time scheduling which further adjusts the battery
discharge and load control to realize the power balance in real time. The coordinated operation of ESS
and CL is put forward to reduce the impact of renewable energy and demand uncertainty, making the
best of their fast response characteristic. A unified prediction error percentage (UPEP) is defined to
describe the difference between the actual and predicted load demand:
ΔE% = (ΔPE − ΔPPV − ΔPWT − ΔPMT) / PRe × 100% (20)

ΔH% = ΔH / HRe × 100% (21)

ΔC% = ΔC / CRe × 100% (22)
where ∆E%, ∆H% and ∆C% are the UPEP of electric, heat and cooling load demands, respectively. PRe,
_HRe and CRe represent the predicted electric, heat and cooling load demands. ∆PE, ∆H and ∆C are the_
differences of actual and predicted electric, heat and cooling load demands. ∆PPV, ∆PWT and ∆PMT
stand for the differences between actual and predicted electric outputs of PV, WT and MT, respectively.
∆E%, ∆H% and ∆C% are the error quantization of predicted data.
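Equations (20)–(22) normalize the net tracking error by the corresponding forecast. A one-to-one transcription (the sample numbers in the test are illustrative):

```python
def upep(dp_e, dp_pv, dp_wt, dp_mt, p_re, dh, h_re, dc, c_re):
    """Unified prediction error percentages of Eqs. (20)-(22), in percent."""
    de_pct = (dp_e - dp_pv - dp_wt - dp_mt) / p_re * 100.0  # Eq. (20)
    dh_pct = dh / h_re * 100.0                              # Eq. (21)
    dc_pct = dc / c_re * 100.0                              # Eq. (22)
    return de_pct, dh_pct, dc_pct
```

Note that the electric UPEP nets out the renewable and MT output deviations, so only the residual imbalance left for the ESS and CL to absorb is counted.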
2.4.1. Objective Function in Real-Time Scheduling
In this stage, the number of decision variables has decreased and the model becomes simpler. The dispatch objects are mainly CL and the battery, which can respond rapidly to eliminate the errors of the last scheduling stage and realize optimal economy, while WT and MT remain auxiliary means. The objectives consist of OMC and LCC; the model can also be converted into a single-objective optimization.
(1) OMC Adjustment in Real-time Operation:
F4(t) = (KES PES[t] + KMT ∆PMT[t] + KH ∆PH[t] + KC ∆PC[t] + CMT(∆PMT[t])) × ∆t (23)
where KES and KMT are the maintenance factors of ESS and MT. PES[t] is the charge/discharge quantity of ESS. ∆PMT[t], ∆PH[t] and ∆PC[t] are the output adjustment of MT between the two time scales and the prediction errors of the heat and cooling load demands, respectively. CMT(∆PMT[t]) stands for the change of the fuel cost of MT.
(2) LCC Adjustment in Real-time Operation:
F5(t) = pD[t] × ∆Pcut[t] (24)
where ∆Pcut[t] is the LCQ difference between the two time scales.
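Equations (23) and (24) are straightforward to evaluate; in the sketch below only KES and KMT are taken from Table 1, while the remaining factors, the price pD[t] and the 15-min interval length are illustrative assumptions:

```python
def f4_omc_adjustment(K_ES, P_ES, K_MT, dP_MT, K_H, dP_H, K_C, dP_C,
                      c_mt_change, dt):
    """Eq. (23): OMC adjustment over one real-time interval of length dt."""
    return (K_ES * P_ES + K_MT * dP_MT + K_H * dP_H + K_C * dP_C
            + c_mt_change) * dt

def f5_lcc_adjustment(p_D, dP_cut):
    """Eq. (24): LCC adjustment for the extra load-control quantity dP_cut."""
    return p_D * dP_cut

# K_ES and K_MT ($/kWh) follow Table 1; every other value is illustrative.
f4 = f4_omc_adjustment(K_ES=0.01241, P_ES=10.0, K_MT=0.00587, dP_MT=2.0,
                       K_H=0.005, dP_H=1.0, K_C=0.005, dP_C=1.0,
                       c_mt_change=0.02, dt=0.25)
f5 = f5_lcc_adjustment(p_D=0.5, dP_cut=4.0)
```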
2.4.2. Operation Constraints in Real-Time Scheduling
In this time scale, constraints (1), (4), (5) and (6) in the day-ahead scheduling model will
be satisfied.
**3. SIP-CO-PSO-ERS Algorithm**
For a multi-objective optimization problem, the ideal outcome is to find an absolutely optimal solution. However, the sub-goals are usually contradictory, and no common solution can make all of them reach their optimal values at the same time. Therefore, the multi-objective model is transformed into a weighted single-objective model to optimize the whole system’s operation cost. Considering that the model of the first time scale has been converted into a
single-objective optimization model, this paper proposes SIP-CO-PSO-ERS to solve the day-ahead
scheduling model. Fewer decision variables and constraints simplify the model in the second time
scale. Linear programming in MATLAB/Optimization Tool (R2011B, MathWorks, Natick, MA, USA)
was conducted to solve the real-time scheduling model.
_3.1. Basic PSO Algorithm_
PSO is a meta-heuristic intelligent algorithm based on population search [49]. The individuals of the population update their velocity vectors according to their own speed, the individual optimal solution pbest and the population optimal solution gbest, converging toward the global optimal solution over the iterations. The velocity and position of particle i at moment t are updated as follows:
vi,j(t + 1) = w vi,j(t) + c1r1(pi,j − xi,j(t)) + c2r2(pg,j − xi,j(t)) (25)
xi,j(t + 1) = xi,j(t) + vi,j(t + 1), j = 1, 2, …, d (26)
where w is the inertia weight for PSO; c1 and c2 are learning factors; r1 and r2 are random numbers between 0 and 1; d is the dimension of the optimization problem; pi,j and pg,j represent the individual and population optimal solutions. vi,j(t) and vi,j(t + 1) are the velocity vectors for particle i in the j-th dimension at moments t and t + 1; accordingly, xi,j(t) and xi,j(t + 1) are the position vectors for particle i in the j-th dimension at moments t and t + 1.
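The update rules (25) and (26) translate directly into code. The sketch below is a minimal basic-PSO implementation minimizing a sample sphere function; the population size, bounds, velocity clamp and objective are illustrative choices, while w = 0.5 and c1 = c2 = 2 follow the parameter settings used later in Section 4.2.1:

```python
import random

def pso(objective, dim, n_particles=30, iters=200, w=0.5, c1=2.0, c2=2.0,
        lo=-10.0, hi=10.0, seed=0):
    """Basic PSO: velocity update (Eq. 25) and position update (Eq. 26),
    steered by each particle's pbest and the population's gbest."""
    rng = random.Random(seed)
    vmax = 0.2 * (hi - lo)  # common velocity clamp (an added assumption)
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [xi[:] for xi in x]
    pbest_f = [objective(xi) for xi in x]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for j in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Eq. (25): inertia + cognitive pull + social pull
                v[i][j] = (w * v[i][j] + c1 * r1 * (pbest[i][j] - x[i][j])
                           + c2 * r2 * (gbest[j] - x[i][j]))
                v[i][j] = max(-vmax, min(vmax, v[i][j]))
                # Eq. (26): move the particle
                x[i][j] += v[i][j]
            f = objective(x[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = x[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = x[i][:], f
    return gbest, gbest_f

best, best_f = pso(lambda z: sum(t * t for t in z), dim=3)
```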
Due to the full use of individual and group experience, the PSO algorithm is able to approach the optimal solution with relatively high convergence efficiency [50]. Because of the consideration of CL and multiple scenarios, more decision variables, more constraints, and intricate data for the variable scenarios complicate the optimization model. Therefore, PSO exhibits premature convergence and poor local and global search ability when solving the optimized operation model of a stand-alone MG [51]. Specifically, with inappropriate step lengths the search oscillates around certain local optima and falls into a local optimum. In addition, the convergence speed is slow in later iterations, because with great fluctuation in the predicted data of different scenarios the search easily goes beyond the constraints, causing the process to repeat several times until all constraints are satisfied. However, the MG’s day-ahead optimized scheduling requires not only a faster solution speed to meet the dispatch timeliness, but also excellent search performance to satisfy the dispatch accuracy. Reasonable modifications must be developed to improve the properties of basic PSO. In this paper, a dual-step modification consisting of SIP and CO is introduced into PSO, together with ERS.
_3.2. Search Improvement Process (SIP)_
Considering that a local optimum cannot take full advantage of the different DGs of a stand-alone MG in terms of economy and environmental protection, the overall ability of PSO in both global and local optimization must be improved. SIP is conducted on all the particles during the optimization to improve the global search ability of PSO. The global search ability improvement of the proposed SIP is based on [52]:
(1) Increasing the population’s diversity by mutation and cross operations.
(2) Promoting all the particles to move toward the most promising local or global individuals.
After the update of both the velocity and position vectors of particle i, a modified process is carried out as follows:
(1) Find the best individual Xbest and the worst individual Xworst through the calculation of the fitness function.
(2) For each particle i, two particles Xm and Xn are selected from the particle swarm randomly such that m ≠ n ≠ i; then the following two particles are generated by the cross operation:
X¹cross = Xi + ∆ × (Xm − Xn) (27)
X²cross = X¹cross + ∆ × (Xbest − Xworst) (28)
where ∆ is a random number between 0 and 1, and X¹cross and X²cross are the two new particles obtained by the cross operation.
(3) A mutation process is implemented after the cross to obtain five new particles; the j-th dimensions of X¹muta, X²muta, X³muta, X⁴muta and X⁵muta are obtained by:
X¹muta,j = λ1 × Xbest,j + λ2 × Xworst,j (29)
X²muta,j = Xbest,j if k1 ≥ k2; Xi,j if k1 < k2 (30)
X³muta,j = Xbest,j if k3 ≥ k4; X¹cross,j if k3 < k4 (31)
X⁴muta,j = Xbest,j if k5 ≥ k6; X²cross,j if k5 < k6 (32)
X⁵muta,j = X¹cross,j if k7 ≥ k8; X²cross,j if k7 < k8 (33)
where k1, k2, …, k8, λ1 and λ2 are all random numbers ranging from 0 to 1, with λ1 + λ2 = 1.
(4) The best particle among X¹muta, X²muta, X³muta, X⁴muta and X⁵muta is then selected by fitness value and compared with Xi. If it is better than Xi, Xi is replaced with the selected particle; otherwise, Xi remains in its initial position. After SIP, CO is conducted.
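A compact sketch of one SIP pass (Eqs. (27)–(33)) for a single particle, under the assumption of a minimization problem, could look as follows; `swarm` is a plain list of position vectors and `fitness` an arbitrary objective:

```python
import random

rng = random.Random(1)

def sip_step(i, swarm, fitness):
    """One SIP pass for particle i: cross with two random particles
    (Eqs. 27-28), five mutations (Eqs. 29-33), then greedy replacement."""
    n, d = len(swarm), len(swarm[0])
    ranked = sorted(swarm, key=fitness)
    x_best, x_worst = ranked[0], ranked[-1]
    m, k = rng.sample([p for p in range(n) if p != i], 2)  # m != k != i
    delta = rng.random()
    # Eqs. (27)-(28): two cross particles
    xc1 = [swarm[i][j] + delta * (swarm[m][j] - swarm[k][j]) for j in range(d)]
    xc2 = [xc1[j] + delta * (x_best[j] - x_worst[j]) for j in range(d)]
    lam = rng.random()  # lambda1; lambda2 = 1 - lambda1
    # Eq. (29): convex combination of the best and worst individuals
    cands = [[lam * x_best[j] + (1 - lam) * x_worst[j] for j in range(d)]]
    # Eqs. (30)-(33): dimension-wise random picks between two parents
    for a, b in [(x_best, swarm[i]), (x_best, xc1), (x_best, xc2), (xc1, xc2)]:
        cands.append([a[j] if rng.random() >= rng.random() else b[j]
                      for j in range(d)])
    best_cand = min(cands, key=fitness)
    # Keep the mutant only if it improves on the current particle
    return best_cand if fitness(best_cand) < fitness(swarm[i]) else swarm[i]

sphere = lambda z: sum(t * t for t in z)
swarm = [[rng.uniform(-5, 5) for _ in range(3)] for _ in range(6)]
new_x0 = sip_step(0, swarm, sphere)
```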
_3.3. Chaotic Optimization (CO)_
The ergodicity and randomness of chaos enable a deep local search [53]. Better local optimization ability is achieved by searching the space near superior individuals. The basic principle by which chaotic optimization-particle swarm optimization (CO-PSO) strengthens the local search ability is to map the chaotic variables linearly into the space of the optimized variables. For a given optimization target, the search process corresponds to the traversal of the chaotic orbit. The steps of the chaotic search in this paper are as follows:
(1) Suppose k = 0, and map the decision variables xj[k], j = 1, 2, …, d into chaotic variables sj[k] between 0 and 1 for every dimension of the solution, where xmax,j and xmin,j are the upper and lower search bounds of the j-th dimension:
sj[k] = (xj[k] − xmin,j) / (xmax,j − xmin,j), j = 1, 2, …, d (34)
(2) Calculate the chaotic variables of the next iteration:
sj[k+1] = 4 × sj[k] × (1 − sj[k]), j = 1, 2, …, d (35)
(3) Convert the chaotic variables sj[k+1] into the decision variables xj[k+1] by the following formula:
xj[k+1] = xmin,j + sj[k+1] × (xmax,j − xmin,j), j = 1, 2, …, d (36)
(4) Assess the new solution given by xj[k+1]. If it is better than the initial one, or the chaotic search has reached the maximum iteration, take it as the final result of the chaotic search; otherwise, set k = k + 1 and return to Step 2.
In this paper, the top 20% of the best particles in each iteration undergo the chaotic search, in order to further exploit the adaptability of excellent particles and improve the local search ability of the optimization algorithm.
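Steps (1)–(4) amount to a logistic-map probe around the current solution; a minimal sketch (bounds and objective chosen purely for illustration) is:

```python
def chaotic_search(x, objective, x_min, x_max, max_iter=10):
    """Chaotic local search: normalize x into (0,1) (Eq. 34), iterate the
    logistic map (Eq. 35), map back (Eq. 36), return the first improvement."""
    d = len(x)
    f0 = objective(x)
    # Eq. (34): decision variables -> chaotic variables in (0,1)
    s = [(x[j] - x_min[j]) / (x_max[j] - x_min[j]) for j in range(d)]
    for _ in range(max_iter):
        # Eq. (35): logistic map iteration
        s = [4.0 * sj * (1.0 - sj) for sj in s]
        # Eq. (36): chaotic variables -> decision variables
        cand = [x_min[j] + s[j] * (x_max[j] - x_min[j]) for j in range(d)]
        if objective(cand) < f0:
            return cand
    return x

sphere = lambda z: sum(t * t for t in z)
start = [4.0, -3.0]
result = chaotic_search(start, sphere, x_min=[-5.0, -5.0], x_max=[5.0, 5.0])
```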
_3.4. Elite Retention Strategy (ERS)_
Premature convergence of an optimization algorithm is caused by the loss of population diversity, which is due to the simplification of the population’s pattern in later iterations. It is an obstacle to finding the global optimal solution in the stand-alone MG’s optimized operation. ERS is a procedure that preserves the optimal individuals, or a part of the excellent individuals, in each iteration and replaces the worst individuals at the beginning of the next iteration. ERS avoids the loss of better solutions generated in each iteration and maximizes the advantages of superior individuals; that is to say, poor solutions are superseded as soon as possible. In addition, population diversity is guaranteed because of the reservation of initial particles at the beginning of each iteration, as well as the connection between two generations. Through this process, premature convergence is mitigated and the convergence speed is accelerated. In this paper, ERS is integrated into the basic PSO algorithm.
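The retention step itself is a small list operation; the sketch below (minimization assumed, with toy one-dimensional individuals) carries the top 10% of one generation into the next, displacing its worst members:

```python
def elite_retention(prev_gen, next_gen, fitness, frac=0.1):
    """ERS: copy the best `frac` of the previous generation into the next
    one, replacing its worst individuals (fitness is minimized)."""
    k = max(1, int(len(prev_gen) * frac))
    elites = [e[:] for e in sorted(prev_gen, key=fitness)[:k]]
    survivors = sorted(next_gen, key=fitness)[:-k]  # drop the k worst
    return survivors + elites

sphere = lambda z: sum(t * t for t in z)
prev_gen = [[0.1], [2.0], [3.0], [4.0], [5.0], [6.0], [7.0], [8.0], [9.0], [10.0]]
next_gen = [[1.5], [2.5], [3.5], [4.5], [5.5], [6.5], [7.5], [8.5], [9.5], [11.0]]
merged = elite_retention(prev_gen, next_gen, sphere)
```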
Specifically, the top 10% of the best individuals are reserved at the beginning of each iteration. Then the last 10% of the population in the next-generation individuals will be replaced correspondingly.
_3.5. Detailed Procedures of SIP-CO-PSO-ERS_
Figure 5 exhibits the structure of the presented algorithm, and the detailed procedures of SIP-CO-PSO-ERS in this paper are given as follows:
(1) Initialize the position and velocity of each particle in the population.
(2) Assess the fitness of each particle by objective function calculation.
(3) Preserve the current particles’ positions and fitness values into pbest of each particle; preserve the position and fitness value of the optimal individual in the current population into gbest.
(4) Save the top 10% of individuals whose fitness values are the best.
(5) Execute the SIP on all particles.
(6) Evaluate the fitness of each particle and search the top 20% of best individuals with CO; update pbest and gbest of the whole population.
(7) If the solution has reached the required search accuracy or the maximum iteration, stop the chaotic search and export the result; otherwise, turn to step 8.
(8) Update the position and speed of each particle; evaluate all particles’ fitness values and replace the last 10% of individuals with the worst fitness by the best individuals preserved in step 4, then turn to step 3.
**Figure 5. Structure of the proposed SIP-CO-PSO-ERS.**
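To make the control flow of steps (1)–(8) concrete, the following skeleton wires the pieces together. It is a simplified sketch, not the authors’ implementation: the SIP and CO stages are reduced to minimal stand-ins (a greedy random cross toward gbest, and one logistic-map probe), so only the orchestration — elite saving, SIP on all particles, CO on the top 20%, pbest/gbest updates, PSO moves and ERS replacement — is shown:

```python
import random

rng = random.Random(42)
sphere = lambda z: sum(t * t for t in z)

def sip_co_pso_ers(objective, dim=2, n=20, iters=50, w=0.5, c1=2.0, c2=2.0,
                   lo=-5.0, hi=5.0):
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    v = [[0.0] * dim for _ in range(n)]
    pbest = [xi[:] for xi in x]                     # steps 1-3
    pf = [objective(xi) for xi in x]
    gbest = min(pbest, key=objective)[:]
    for _ in range(iters):
        k = max(1, n // 10)
        elites = [e[:] for e in sorted(x, key=objective)[:k]]  # step 4
        for i in range(n):                          # step 5: SIP stand-in
            cand = [xi + rng.random() * (g - xi) for xi, g in zip(x[i], gbest)]
            if objective(cand) < objective(x[i]):
                x[i] = cand
        top = sorted(range(n), key=lambda p: objective(x[p]))[: n // 5]
        for i in top:                               # step 6: CO probe
            s = [(x[i][j] - lo) / (hi - lo) for j in range(dim)]
            s = [4 * sj * (1 - sj) for sj in s]
            cand = [lo + sj * (hi - lo) for sj in s]
            if objective(cand) < objective(x[i]):
                x[i] = cand
        for i in range(n):                          # update pbest / gbest
            f = objective(x[i])
            if f < pf[i]:
                pbest[i], pf[i] = x[i][:], f
        gbest = min(pbest, key=objective)[:]
        for i in range(n):                          # step 8: PSO move
            for j in range(dim):
                v[i][j] = (w * v[i][j]
                           + c1 * rng.random() * (pbest[i][j] - x[i][j])
                           + c2 * rng.random() * (gbest[j] - x[i][j]))
                x[i][j] += v[i][j]
        worst = sorted(range(n), key=lambda p: objective(x[p]))[-k:]
        for slot, e in zip(worst, elites):          # ERS replacement
            x[slot] = e[:]
    return gbest, objective(gbest)

best, best_f = sip_co_pso_ers(sphere)
```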
_3.6. The Limitations of Proposed SIP-CO-PSO-ERS_
SIP-CO-PSO-ERS has many advantages such as better adaptability, fast convergence speed and excellent search ability. However, it also has limitations:
(1) SIP-CO-PSO-ERS consists of different procedure modules due to the algorithm integration. As a result, it is genuinely difficult to write the program correctly, and any error in the code would lead to a wrong operational result. More time should be spent on the programming so as to ensure correct code;
(2) The randomly generated particles increase the operation time to some extent. When the proposed model and SIP-CO-PSO-ERS are applied in a specific MG, initial values of the particles could be given according to the MG’s historical operation states so as to decrease the iteration numbers and operation time.
_3.7. The Framework of Stand-Alone MG’s Optimized Operation_
Figure 6 shows the integrated framework of this study about the optimized operation for the proposed stand-alone MG in detail. The final dispatch scheme is obtained by the results of the day-ahead and real-time scheduling models.
**Figure 6. Integrated framework of the whole study.**
**4. Simulation Analysis**
_4.1. Description of the Stand-Alone MG System_
The stand-alone MG adopted in this paper is shown in Figure 3. The battery’s parameters are as follows [54]: the self-discharge rate is 0.14%, the charge/discharge efficiency is 92%, the minimum SOC is 20%, and the total capacity is 50 kWh, while the lower limit is assumed as the initial SOC. The efficiency of the convertors is assumed to be 95%. The rated power of PV and WT is assumed to be 250 kW and 300 kW, respectively. The proportion of CL is assumed to be 10%. Other parameters of the different DGs are summarized in Table 1. Table 2 lists the disposal cost for different kinds of pollutants and the respective pollutant emission factors of MT, DE and FC [55,56]. The simulation in this paper takes winter as an example, so the thermal load (TL) is included in addition to the electric load (EL).
**Table 1. Parameters setting of various DGs.**

| Type | Pe (kW) | Pmax/Pmin (kW) | Rup/Rdown (kW/min) | K ($/kWh) |
|------|---------|----------------|--------------------|-----------|
| DE | 150 | 180/10 | 20 | 0.01258 |
| FC | 130 | 160/10 | 10 | 0.00419 |
| MT | 100 | 125/10 | 10 | 0.00587 |
| ESS | 25 | - | - | 0.01241 |

**Table 2. Pollutant disposal cost and emission factors.**

| Type | Disposal Cost ($/lb) | DE (lb/kWh) | FC (lb/kWh) | MT (lb/kWh) |
|------|----------------------|-------------|-------------|-------------|
| NOx | 4.2 | 2.18 × 10⁻² | 3 × 10⁻⁵ | 4.4 × 10⁻⁴ |
| SO2 | 0.99 | 4.54 × 10⁻⁴ | 6 × 10⁻⁶ | 8 × 10⁻⁶ |
| CO2 | 0.014 | 1.432 × 10⁻³ | 1.078 × 10⁻³ | 1.596 × 10⁻³ |
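Using the disposal costs and emission factors as read from Table 2, a small sketch can compare the environmental cost of different DGs; the 100 kWh energy figure is an illustrative assumption:

```python
# Disposal cost ($/lb) and emission factors (lb/kWh), as read from Table 2.
DISPOSAL = {"NOx": 4.2, "SO2": 0.99, "CO2": 0.014}
EMISSION = {
    "DE": {"NOx": 2.18e-2, "SO2": 4.54e-4, "CO2": 1.432e-3},
    "FC": {"NOx": 3e-5,    "SO2": 6e-6,    "CO2": 1.078e-3},
    "MT": {"NOx": 4.4e-4,  "SO2": 8e-6,    "CO2": 1.596e-3},
}

def environmental_cost(unit, energy_kwh):
    """Disposal cost ($) of the pollutants emitted by `unit` while
    generating `energy_kwh` of electricity."""
    return sum(DISPOSAL[p] * f * energy_kwh for p, f in EMISSION[unit].items())

cost_de = environmental_cost("DE", 100.0)  # diesel engine over 100 kWh
cost_fc = environmental_cost("FC", 100.0)  # fuel cell over 100 kWh
```

Consistent with the dispatch preference noted in Section 4.2.1, the fuel cell’s environmental cost comes out orders of magnitude lower than the diesel engine’s.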
_4.2. Results Analysis_
In order to analyze and compare the optimized dispatch problem in various situations and verify the proposed model, different scenarios are designed in this paper for the stand-alone MG. Since the load demand on a work day differs from that on a weekend, and the output of PVs on a sunny day differs greatly from that on a rainy day, four scenarios are chosen for the designed stand-alone MG: sunny-work day, sunny-weekend, rainy-work day and rainy-weekend. The predicted load demand and renewable power generation in the different scenarios are displayed in Figure 7.
[Figure 7 shows four panels of power (kW) versus Time/h — (a) Sunny-working day, (b) Sunny-weekend, (c) Rainy-working day, (d) Rainy-weekend — plotting WT, PV, TL, OPBH, EL and WT+PV+MT.]
**Figure 7. The load demand and predicted renewable power generations in four scenarios for MG.**
Figure 7 shows clearly that the output of the renewable generations on sunny days and rainy days is quite different: the overall output of renewable energy on sunny days is larger, and the peak time intervals are concentrated in 11:00~15:00. Because of the weak solar radiation on rainy days, the PV’s power output is very low; as a result, the main output of renewable energy under these conditions is wind power. The load change is closely related to the activities of people. Based on the fact that the main load type in a stand-alone MG is from residents, the EL demand on weekends is obviously higher than that on work days, while the thermal load demand shows relatively little fluctuation between work days and weekends.
4.2.1. The Day-Ahead Scheduling Results
SIP-CO-PSO-ERS was used to solve the day-ahead scheduling model. For the algorithm, the iteration numbers of CO and PSO are set as 10 and 200, respectively. The number of particles is 30, the inertia weight is 0.5, and the learning factors are both 2. PV modules are under the control of a maximum power point tracking (MPPT) strategy. When the total electric output of PV, WT and MT (ordering power by heat, OPBH) is higher than the load demand, and the battery has reached the upper limit of capacity, WT is adjusted to track the load demand. Otherwise the WT modules are also under the control of MPPT. Figure 8 shows the optimized results of the first time scale in different scenarios.
[Figure 8 shows the dispatch results (kW, −50 to 350) in each of the 24 periods for the four scenarios.]
**Figure 8. The optimized results in each period for four scenarios in the first time scale.**
The model takes consideration of load control in a stand-alone MG. The simulation results in Figure 8 show that the load control, corresponding to the LCQ column of the figure, is inconspicuous in the sunny-work day and rainy-work day scenarios because of the low demand and sufficient energy supply. In contrast, the load control effect is apparent in the sunny-weekend and rainy-weekend scenarios and concentrated in two periods (noon and night) of a day. Compared with Figure 7, it is obvious that the load control mainly takes place in the periods with inadequate renewable outputs relative to the load demand. For a stand-alone MG, other DGs like DE, MT and FC must be started to maintain the power balance if the renewable energy is insufficient. When the LCC is lower than the generation cost of DGs, the system will cut off part of the unimportant load to maximize the operational economy. In addition, load control is more common in the rainy-weekend scenario than the sunny-weekend scenario, because the low PV output in the rainy-weekend scenario further expands the difference between renewable energy output and load demand. In case of emergency, load control is not only a measure to improve the system economy, but also an auxiliary resource to maintain stability and power balance for a stand-alone MG.
The SOC variation of the battery is related to whether the sum of renewable energy and the basic output (decided by the thermal load demand) of MT is higher than the EL demand. If the condition is satisfied, the battery will be charged. For instance, in the rainy-weekend scenario, the EL demand is relatively high and the PV output is low, which results in the EL demand being higher than the sum of renewable energy and MT’s basic output after 8:00; accordingly, there is no redundant electric power for the battery to charge in these periods, and the SOC of the battery will decrease slowly because of the self-discharge effect. However, before 8:00, the conditions are opposite and the battery is charged. If the battery is being charged, it indicates that the power of the whole system is surplus. Therefore, the outputs of DE and FC are 0, which is consistent with the actual situation.
Figure 8 also indicates that FC was dispatched preferentially over DE within a certain range, because the model considers the economic and environmental benefits, and FC is more eco-friendly than DE according to Table 2. Based on the optimized model, the 24-h operation costs of the four scenarios in the first time scale scheduling are shown in Figure 9.
[Figure 9 shows the operation cost ($, 0 to 200) in each of the 24 periods for the four scenarios: sunny-work day, sunny-weekend, rainy-work day and rainy-weekend.]
**Figure 9. Total operation costs of four scenarios in the first time scale.**
If the sum of renewable energy and MT’s basic output is higher or close to EL demand, the total
If the sum of renewable energy and MT’s basic output is higher or close to EL demand, the totaloperation cost will be low. For example, in sunny-work day, the sum output of WT, PV and MT is
operation cost will be low. For example, in sunny-work day, the sum output of WT, PV and MT ishigher than EL demand from 7:00 to 18:00; accordingly, the operation costs in these periods are very
higher than EL demand from 7:00 to 18:00; accordingly, the operation costs in these periods are verylow. Only WT, PV and MT are running in the whole system when battery’s SOC reaches the upper
limit. The MG tracks the EL change by adjusting WT’s output. When the EL demand is greater than
low. Only WT, PV and MT are running in the whole system when battery’s SOC reaches the upper
the sum of renewable energy and MT’s basic output, the cost increases due to the expenses
limit. The MG tracks the EL change by adjusting WT's output. When the EL demand is greater than
the sum of renewable energy and MT's basic output, the cost increases due to the expenses generated
by other DGs. Comparing the four scenarios, it can be seen that the costs of sunny-weekend and
rainy-weekend are significantly higher than those of the workday scenarios, owing to the higher load
demand on weekends. On the other hand, the cost of rainy-weekend is higher than that of
sunny-weekend because of the lower PV output during rainy days.

This paper proposes an improved dispatch strategy for the CCHP operation mode under the condition
that the essential load demand is not affected. The electric output of MT is allowed to vary within
95~105% of the basic electric demand ordered by the thermal load. To verify the effectiveness of the
improved strategy, simulations of the four scenarios were carried out under identical conditions
except for the CCHP strategy. Table 3 shows the resulting operation costs.

**Table 3. Comparison of CCHP's improved and general strategy.**

| Scenario | Sunny-Workday | Sunny-Weekend | Rainy-Work Day | Rainy-Weekend |
| --- | --- | --- | --- | --- |
| Improved Strategy ($) | 45.694 | 543.358 | 48.067 | 845.266 |
| Traditional Strategy ($) | 47.207 | 581.512 | 50.153 | 901.072 |
| Cost Decrease (%) | 3.21 | 6.56 | 4.16 | 6.19 |
| Load Demand (kW) | 1201.761 | 1268.601 | 1201.761 | 1268.601 |
| Actual Output (kW) | 1164.591 | 1280.596 | 1160.254 | 1263.831 |
| Demand Deviation (%) | 3.09 | −0.95 | 3.45 | 0.38 |

From the table, it is evident that the MG's economic and environmental benefits are improved in all
scenarios without sacrificing comfort or the primary demand. For example, in the rainy-weekend
scenario, the total operation cost decreased by 6.19% at the expense of only 0.38% load variation.
The improved CCHP strategy was clearly more effective in the weekend scenarios, because the
adjustment margin of the iterative optimization was larger as a result of the higher electric demand
during the weekend.

_Energies 2017, 10, 1936_ 17 of 23
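The improved strategy above reduces, for each dispatch interval, to a one-dimensional search: the MT electric output may deviate from the thermal-load-ordered baseline within the 95–105% band, and the cheapest point in the band is kept. A minimal Python sketch, where `total_cost` is a hypothetical stand-in for the paper's full objective (fuel, OMC, pollutant disposal, and load-control compensation):

```python
# Sketch of the improved CCHP dispatch rule: the MT electric output may
# deviate from the thermal-ordered baseline p_base within 95%..105%, and
# the factor that minimizes the total cost is selected. `total_cost` is a
# hypothetical stand-in for the paper's objective, not the real model.

def improved_cchp_dispatch(p_base, total_cost, steps=101):
    """Return (best MT electric output, its cost) over the 95%-105% band."""
    best_p, best_c = None, float("inf")
    for i in range(steps):
        factor = 0.95 + 0.10 * i / (steps - 1)   # sweep 0.95 .. 1.05
        p_mt = factor * p_base
        c = total_cost(p_mt)
        if c < best_c:
            best_p, best_c = p_mt, c
    return best_p, best_c

# Toy convex cost whose minimum (102 kW) lies inside the allowed band:
cost = lambda p: (p - 102.0) ** 2 + 40.0
p_opt, c_opt = improved_cchp_dispatch(100.0, cost)
```

Under the traditional ordering-power-by-heat strategy the factor would be fixed at 1.0; the extra ±5% freedom is what produces the cost decreases reported in Table 3.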
4.2.2. The Real-Time Scheduling Results

The real-time scheduling model mainly dispatches the CL and ESS to overcome the errors between
the actual data and the predicted data for the load demand and renewable energy. The error of the EL
demand and renewable energy is uniformly expressed by the UPEP, which represents the total
electricity variation. Fluctuations of the EL and thermal load are simulated by Monte-Carlo
simulation, and the model is solved by linear programming in the MATLAB Optimization Tool. Table 4
exhibits the simulation results of ∆E% and ∆H% obtained by Monte-Carlo simulation, while Figure 10
shows the optimized results in the four scenarios, including the adjustment quantity (AQ) of the
battery, the CL, and the cost variation.
**Table 4. Comparison of improved and traditional strategy for CCHP.**

| Interval | Sunny-Work Day ∆E/% | Sunny-Work Day ∆H/% | Sunny-Weekend ∆E/% | Sunny-Weekend ∆H/% | Rainy-Work Day ∆E/% | Rainy-Work Day ∆H/% | Rainy-Weekend ∆E/% | Rainy-Weekend ∆H/% |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1, 2, 3 | −1, 2, −5 | −3, 2, 2 | 2, 2, 3 | −2, 2, 1 | 5, 4, −1 | −3, 1, 3 | 3, −2, −2 | −2, 3, 3 |
| 4, 5, 6 | 5, −1, 3 | 2, 2, 3 | −4, 5, −1 | −3, −2, −3 | −5, 1, 4 | −2, 3, −1 | 5, −1, −3 | −3, 1, 3 |
| 7, 8, 9 | −5, −3, −4 | 2, 3, 3 | −3, −1, −1 | 2, −1, 2 | 4, 2, 4 | −2, 3, −2 | 4, 2, −5 | 1, −2, 1 |
| 10, 11, 12 | −5, −3, 3 | −2, −3, 2 | 4, −2, 3 | 1, 1, −1 | −5, 3, 5 | −2, 2, 2 | 4, 5, 1 | −3, 2, −2 |
| 13, 14, 15 | −3, −2, −1 | −3, 3, 2 | 5, −3, 2 | 3, −1, 3 | 3, 5, 3 | −2, −2, −2 | −4, 5, −5 | 1, −2, 3 |
| 16, 17, 18 | −2, 3, −5 | −1, 1, −2 | 2, 4, −5 | −3, −3, 2 | 5, −3, 1 | −1, −1, −3 | 1, 1, −5 | 2, −2, −1 |
| 19, 20, 21 | −4, 3, 5 | 2, −1, 3 | −4, −1, −4 | 2, 2, 1 | 1, −2, 1 | 2, 2, −1 | −5, 2, 4 | 3, 1, 1 |
| 22, 23, 24 | 5, 5, −1 | −2, 2, 3 | 3, −1, −3 | 1, −1, −3 | −5, −2, 3 | 1, −2, 1 | −3, 3, 4 | −1, 2, −2 |
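The fluctuation step behind Table 4 can be sketched as follows: for each dispatch interval, an error percentage (UPEP) is drawn for the electric and thermal demands and applied to the predicted profile. The uniform integer ±5% range is an assumption inferred from the table entries, not stated explicitly by the paper:

```python
import random

# Monte-Carlo sampling of per-interval prediction errors (UPEP) for the
# electric (dE) and thermal (dH) demands. The uniform integer +/-5% range
# is an assumption inferred from the entries of Table 4.

def sample_upep(n_intervals=24, max_err=5, rng=None):
    rng = rng or random.Random(0)
    dE = [rng.randint(-max_err, max_err) for _ in range(n_intervals)]
    dH = [rng.randint(-max_err, max_err) for _ in range(n_intervals)]
    return dE, dH

def actual_demand(predicted, err_pct):
    # Actual demand = predicted * (1 + UPEP/100) in each interval.
    return [p * (1 + e / 100.0) for p, e in zip(predicted, err_pct)]

dE, dH = sample_upep()
actual = actual_demand([50.0] * 24, dE)   # illustrative 50 kW flat profile
```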
(Panels: (a) Sunny-work day; (b) Sunny-weekend; (c) Rainy-work day; (d) Rainy-weekend. Each panel
plots the second dispatch of the storage battery (SB)/ESS, the second dispatch of the CL, and the
cost variation against time, 4:00–24:00.)

**Figure 10. The scheduling results of four different scenarios in the second time scale.**
Based on the results in the first time scale, Figure 10 reveals the minor adjustments of the battery
and CL, which aim to track the actual demand variation. Positive values for the battery represent
the discharge state, while negative values stand for the charge state. A positive adjustment of the
CL corresponds to an LCQ increase, while a negative adjustment represents an LCQ decrease. It can be
seen that the cost variation primarily depends on the CL adjustment because the cost of the battery
is low. For reasons of economy, the battery is dispatched first when the actual demand is higher
than the predicted demand. On the other hand, the CL is adjusted before the battery when the
predicted demand is higher than the actual demand. For instance, during the 14th period of the
rainy-weekend scenario, ∆E% was 5% and ∆H% was −2%. According to the optimization objective, the
battery first discharged 16.732 kW and then the CL cut 2.157 kW, because the battery had reached the
lower limit of its capacity. In the 9th period of the sunny-work day scenario, ∆E% was −4% and ∆H%
was 3%. Since the LCQ of this period in the first time scale was 0, the battery was charged instead
of the LCQ being decreased. Otherwise, the LCQ would decrease first and, if it were reduced to 0,
the battery would charge.
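The dispatch priority described here, battery first when actual demand exceeds prediction and LCQ first in the opposite case, with spillover to the other resource once a limit is reached, can be sketched for a single interval as follows. The limits and argument values are illustrative assumptions, not data from the model:

```python
# Sketch of the real-time adjustment priority for one dispatch interval.
# delta_e: actual minus predicted electric demand (kW).
# Battery power: positive = discharge, negative = charge.
# CL adjustment: positive = LCQ increase (more load cut), negative = decrease.

def realtime_adjust(delta_e, lcq, batt_headroom, batt_charge_room):
    """Return (battery power, CL adjustment) following the priority rule."""
    if delta_e > 0:                          # actual demand above prediction:
        batt = min(delta_e, batt_headroom)   # cheap battery discharges first,
        return batt, delta_e - batt          # remainder is cut from the CL
    surplus = -delta_e                       # actual demand below prediction:
    cl_adj = -min(surplus, lcq)              # LCQ is decreased first ...
    batt = -min(surplus + cl_adj, batt_charge_room)  # ... then battery charges
    return batt, cl_adj

# 14th period, rainy weekend: battery hits its 16.732 kW limit, CL cuts the rest.
b, cl = realtime_adjust(18.889, lcq=30.0, batt_headroom=16.732,
                        batt_charge_room=50.0)
# 9th period, sunny workday: LCQ is already 0, so the battery charges instead.
b2, cl2 = realtime_adjust(-10.0, lcq=0.0, batt_headroom=0.0,
                          batt_charge_room=50.0)
```

The first call reproduces the 16.732 kW discharge plus 2.157 kW CL cut of the worked example; the second reproduces the charge-instead-of-LCQ-decrease case.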
4.2.3. Algorithm Evaluation

To compare the effectiveness of different optimization algorithms, PSO, CO-PSO, and SIP-CO-PSO-ERS
were used to solve the same model under the rainy-weekend scenario in the first time scale.
The averaged costs and convergence times over 20 trials are given in Table 5.

**Table 5. Statistics of 20 operating results for three different optimization algorithms.**

| Algorithms | Total Cost, Average Value/$ | Total Cost, Standard Deviation/$ | Average Convergence Time/s |
| --- | --- | --- | --- |
| PSO | 859.677 | 1.984 | 176.49 |
| CO-PSO | 852.142 | 1.501 | 142.91 |
| SIP-CO-PSO-ERS | 845.373 | 0.361 | 104.43 |

According to Table 5, SIP-CO-PSO-ERS provided the lowest average total operation cost over the
20 trials, which reveals better search and convergence performance. This is because the ERS combined
with the dual-step modification was able to exploit the best individuals, improving the global and
local search ability of the optimization algorithm. The lowest standard deviation of SIP-CO-PSO-ERS
indicates that the algorithm is stable and strongly robust. SIP-CO-PSO-ERS also shows some
superiority in convergence speed due to the adoption of the ERS.

Figure 11 shows the iterative processes of the three algorithms in the first period of the
sunny-work day scenario. The objective-function values of all the algorithms decrease gradually
along the iterations, which indicates that the algorithms searched in a favorable direction and
finally reached a stable value. However, SIP-CO-PSO-ERS converges to a better solution much faster
because of the introduction of the dual-step modification and the ERS, which make full use of the
"survival of the fittest" principle under the premise of population diversity.
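For reference, the core update that all three compared algorithms share is the standard PSO velocity/position rule with inertia weight w and learning factors c1, c2 (the symbols of the Nomenclature); the chaotic initialization, dual-step modification, and elite retention of SIP-CO-PSO-ERS are deliberately not reproduced here. A minimal sketch on a toy one-dimensional objective:

```python
import random

# Baseline PSO (inertia weight w, learning factors c1, c2). The CO, SIP,
# and ERS enhancements of the paper are omitted: this only illustrates
# the shared velocity/position update rule.

def pso_minimize(f, lo, hi, n=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = random.Random(seed)
    x = [rng.uniform(lo, hi) for _ in range(n)]   # particle positions
    v = [0.0] * n                                  # particle velocities
    pbest = list(x)                                # individual best positions
    pbest_f = [f(p) for p in pbest]
    g = pbest[min(range(n), key=lambda i: pbest_f[i])]  # population best
    for _ in range(iters):
        for i in range(n):
            r1, r2 = rng.random(), rng.random()
            v[i] = w * v[i] + c1 * r1 * (pbest[i] - x[i]) + c2 * r2 * (g - x[i])
            x[i] = min(hi, max(lo, x[i] + v[i]))   # clamp to search bounds
            fi = f(x[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = x[i], fi
                if fi < f(g):
                    g = x[i]
    return g, f(g)

# Toy objective with its minimum at x = 3; the swarm converges near it.
g, fg = pso_minimize(lambda x: (x - 3.0) ** 2 + 16.4, -10.0, 10.0)
```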
(Objective-function value versus iterations for SIP-CO-PSO-ERS, CO-PSO, and PSO; the curves decrease
from about 17.4 to about 16.4 over 200 iterations.)

**Figure 11. Iterative process comparison of three algorithms.**
**5. Conclusions**

In this paper, a comprehensive optimized operation model is presented for a stand-alone MG. It is of
great significance to keep the power balance and decrease the operation cost, especially for a
stand-alone MG. The MG is composed of PV, WT, MT, DE, FC, and ESS, with the CL taken into
consideration. A two-time-scale multi-objective optimization model was developed based on MT's CCHP
mode. The dual-step modification and ERS were combined into the PSO to strengthen the global and local
search ability as well as to improve the convergence speed. An enhanced dispatch strategy for CCHP
and the proposed SIP-CO-PSO-ERS algorithm were applied to solve the model in the first time scale
with the related constraints. The presented SIP-CO-PSO-ERS effectively deals with the optimized
operation of the stand-alone MG across the different scenarios, and the improved CCHP strategy
significantly enhances the economic and environmental benefits. SIP-CO-PSO-ERS improved the
operation economy, with an average cost decrease of about 1.66%, and showed better robustness, with
a lower standard deviation than the general algorithms. In addition, the average convergence time
decreased by about 40.83% compared with PSO, which is commonly used in MG optimization. In other
words, it will promote the application of renewable energies to some degree. The coordinated
operation of the ESS and CL effectively reduced the impact of renewable energy and demand
uncertainty in real-time scheduling. After the optimized dispatch, the MG achieves economic
operation while the load demands are satisfied. In this paper, 24 data observations per day were
used; a finer time resolution will be considered in the future to improve the real-time dispatch
precision. Effective DR control and coordination schemes that can handle the simultaneous existence
of multiple DR techniques in the same MG also remain to be incorporated into the optimization model.
**Acknowledgments: This work was supported in part by the National Natural Science Foundation of China (grant**
No. 51577067), the Beijing Natural Science Foundation of China (grant No. 3162033), the Hebei Natural Science
Foundation of China (grant No. E2015502060), the State Key Laboratory of Alternate Electrical Power System with
Renewable Energy Sources (grant Nos. LAPS16007, LAPS16015), the Science & Technology Project of State Grid
Corporation of China (SGCC), the Open Fund of State Key Laboratory of Operation and Control of Renewable
Energy & Storage Systems (China Electric Power Research Institute) (No. 5242001600FB), the China Scholarship
Council. The authors would like to acknowledge Fangxing Li with The University of Tennessee, Knoxville, USA,
Saber Talari with University of Beira Interior, Portugal, for their contributions and suggestions to this manuscript.
**Author Contributions: All authors have worked on this manuscript together, and all authors have read and**
approved the final manuscript.
**Conflicts of Interest: The authors declare no conflict of interest.**
**Nomenclature**
DGs Distributed generations MGs Microgrids
MTs Micro-gas turbines DEs Diesel engines
FCs Fuel cells PVs Photovoltaics
WTs Wind turbines DR Demand response
ESS Energy storage system CL Controllable load
PSO Particle swarm optimization CO Chaotic optimization
ERS Elite retention strategy SIP Search improvement process
APC Absorption chiller HES Heat-exchanging system
_CMT_ The fuel cost of MT _Cnl_ The natural gas price
_PMT_ Electricity energy produced by MT _ηMT_ Efficiency of MT
∆t Dispatch interval time _QMT_ Residual heat of exhaust air
_ηl_ Heat loss factor of CCHP system _QH_ Heating capacity by exhaust
_QC_ Cooling capacity by exhaust _ηH.REC_ Heat efficiency
_ηC.REC_ Cooling efficiency _ξH,ξC_ Heating and refrigeration coefficient
SB Storage battery LCQ Load control quantity
OMC Operation and maintenance cost LCC Load control compensation
_F1(t)_ OMC of the whole MG _F2(t)_ Pollutant disposal cost
_F3(t)_ LCC of MG _Pi[t]_ Generation output of micro source i
_Ci(Pi[t])_ Fuel cost of micro source i _Ki_ Maintenance factor of micro source i
_KH_ Maintenance factor of HES module _KC_ Maintenance factor of AC modules
_PH[t]_ Heat power generated by HES _PC[t]_ Cooling power generated by AC
_Eik_ Released quantity of pollutant k _N_ The number of generation units
_M_ The number of pollutant types _αk_ Conversion coefficient for pollutant
EENS Expected energy not supplied UIC Unit interruption cost
_pD[t]_ The UIC of MG _Pcut[t]_ The LCQ of MG
_Pi_ Output of generation unit i _PL_ The electric load demand
_Pcut_ The load control power _QHL, QCL_ Thermal and cooling load demand
_QH, QC_ Supplied thermal and cooling power _Pimin_ Minimum output of generation unit i
_Pimax_ Maximum output of generation unit i _Rup,Rdown_ Ramp up/down rate of micro source i
_Pi[t]_ Output of micro source i at time t _Pi[t−1]_ Output of micro source i at time t − 1
_SSOC.min_ Minimum SOC for battery _SSOC.max_ Maximum SOC of battery
SOC State of charge _KC_ Maximum charging proportion
_KD_ Maximum discharging proportion _PSB[t]_ The output power of battery at time t
_ηSBC,ηSBD_ The charging/discharging efficiency _QB_ Capacity of battery
_Pcut[t]_ The LCQ in the t-th dispatch interval _Pcut.max_ Load control upper limit of MG
_PE,MT_ The electric output of MT UPEP Unified prediction error percentage
∆E% The UPEP of electric load demand ∆H% The UPEP of thermal load demand
∆C% The UPEP of cooling load demand _PRe_ Predicted electric load demand
_HRe_ Predicted thermal load demand _CRe_ Predicted cooling load demand
∆PE Difference of actual and predicted EL ∆H Difference of actual and predicted TL
∆C Difference of actual and predicted CL ∆PPV Difference of actual and predicted electric output of PV
∆PWT Difference of actual and predicted electric output of WT ∆PMT Difference of actual and predicted electric output of MT
_KES, KMT_ Maintenance factors of ESS and MT _PES[t]_ Charge/discharge quantity of EES
∆PMT[t] Output adjustments of MT ∆PH[t] Predicted error of heat load demand
∆PC[t] Predicted error of cooling load demand _CMT(∆PMT[t])_ The fuel cost change of MT
∆Pcut[t] LCQ difference of two time scales _pbest_ The individual optimal solution
_gbest_ The population optimal solution _w_ The inertia weight
_c1, c2_ Learning factors _r1, r2_ Random numbers between 0 and 1
_d_ Dimension of the optimization model _pi,j_ Individual optimal solution
_pg,j_ Population optimal solution _vi,j(t)_ Velocity vector for particle i in the j-th dimension at moment t
_vi,j(t+1)_ Velocity vector for particle i in the j-th dimension at moment t + 1 _xi,j(t)_ Position vector for particle i in the j-th dimension at moment t
_xi,j(t+1)_ Position vector for particle i in the j-th dimension at moment t + 1 _Xi_ The i-th solution in the population
_Xbest_ The best individual _Xworst_ The worst individual
_Xm, Xn_ Selected particles randomly ∆ Random number between 0 and 1
_X[1]cross_ New particle obtained by crossover _X[2]cross_ New particle obtained by crossover
EL Electric load TL Thermal load
CL Cooling load MPPT Maximum power point tracking
OPBH Ordering power by heat AQ Adjustment quantity
[exchanger by genetic and particle swarm algorithms. Energy Convers. Manag. 2015, 93, 84–91. [CrossRef]](http://dx.doi.org/10.1016/j.enconman.2015.01.007)
51. Tang, J.; Wang, D.; Wang, X.; Jia, H.; Wang, C.; Huang, R.; Yang, Z.; Fan, M. Study on day-ahead optimal
economic operation of active distribution networks based on Kriging model assisted particle swarm
[optimization with constraint handling techniques. Appl. Energy 2017, 204, 143–162. [CrossRef]](http://dx.doi.org/10.1016/j.apenergy.2017.06.053)
52. Mohammadi, S.; Mozafari, B.; Solimani, S.; Niknam, T. An Adaptive Modified Firefly Optimisation Algorithm
based on Hong’s Point Estimate Method to optimal operation management in a microgrid with consideration
[of uncertainties. Energy 2013, 51, 339–348. [CrossRef]](http://dx.doi.org/10.1016/j.energy.2012.12.013)
-----
_Energies 2017, 10, 1936_ 23 of 23
53. Zhou, Q.; Zhang, W.; Cash, S.; Olatunbosun, O.; Xu, H.; Lu, G. Intelligent sizing of a series hybrid electric
power-train system based on Chaos-enhanced accelerated particle swarm optimization. Appl. Energy 2017,
_[189, 588–601. [CrossRef]](http://dx.doi.org/10.1016/j.apenergy.2016.12.074)_
54. Diaf, S.; Diaf, D.; Belhamel, M.; Haddadi, M.; Louche, A. A methodology for optimal sizing of autonomous
[hybrid PV/wind system. Energy Policy 2007, 35, 5708–5718. [CrossRef]](http://dx.doi.org/10.1016/j.enpol.2007.06.020)
55. Pipattanasomporn, M.; Willingham, M.; Rahman, S. Implications of on-site distributed generation for
[commercial/industrial facilities. IEEE Trans. Power Syst. 2005, 20, 206–212. [CrossRef]](http://dx.doi.org/10.1109/TPWRS.2004.841233)
56. Bernow, S.; Marron, D. Valuation of Environmental Externalities for Energy Planning and Operations; Tellus
Institute Report 90-SB01; Tellus Institute: Boston, MA, USA, 1990.
© 2017 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access
article distributed under the terms and conditions of the Creative Commons Attribution
[(CC BY) license (http://creativecommons.org/licenses/by/4.0/).](http://creativecommons.org/licenses/by/4.0/.)
-----
**Scientific and Social Research**
**2023, Volume 5, Issue 2**
# Study on the Discursive Strategies of Wired to Repair Trust in Blockchain
**Qi Su***
Teaching Department of Public Courses, West Yunnan University of Applied Sciences, Dali 671000, Yunnan Province,
China
***Corresponding author: Qi Su, suuuup@163.com**
**[Copyright: © 2023 Author(s). This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC](https://creativecommons.org/licenses/by/4.0/)**
[BY 4.0), permitting distribution and reproduction in any medium, provided the original work is cited.](https://creativecommons.org/licenses/by/4.0/)
**Abstract: Digital trust involves not only human trust mediated by certain technology but trust in that technology. However,**
emerging technologies confront ever-growing skepticism. The blockchain debate is a typical example, one that may be
driven by hype from the mass media. If the place where blockchain is hyped is the place where damaged trust in blockchain is
repaired, then Wired magazine, the voice of the industry, is an appropriate third-party repairer. Though previous studies have deeply
investigated trust repair in interpersonal relationships, much remains unknown about how to measure trust in a specific
technology and how to repair it if it is violated. This study aims to examine how Wired discursively repairs trust in blockchain.
To address the issue, 60 Wired stories on blockchain are collected as the corpus data. The corpus is annotated with the help of
UAM CorpusTool. A discourse analysis is performed based on the annotation. Unlike the studies on interpersonal trust repair,
the results show that the magazine puts more effort into repairing the functionality and the helpfulness of blockchain, partly
due to contextual variables. The discourse of the magazine, sitting on the rational side of trust, is open, objective,
and straightforward. Together with the research standpoint of a third-party repairer, the repairing effect of trust-in-tech seems
to be more predictable. The reparative strategies of EP & NN could be interpreted as a kind of justification to explain the
violations of trust in blockchain, which the magazine mainly attributes to those externally unstable and uncontrollable factors.
Above all, blockchain is a technological innovation with the aim of building a trustless world, but meanwhile its development
requires the escort of cyber-resilience, which is built on netizens' digital trust.
**Keywords: Trust repair; Trust in a specific technology; Third-party evaluation; Blockchain; Wired**
**_Online publication:_** February 28, 2023
**1. Introduction**
Compared with the optimism of technology worship in the past, emerging technologies are confronted with
ever-growing skepticism. The mass media tend to be technophobic and sometimes exaggerate the
potential risks, and the public often form opinions and attitudes without scientifically or authoritatively
pertinent information. Furthermore, dispelling the mystification of emerging technologies is usually
beyond the reach of amateurs. The issue of trust is thus the weak link of the technology industry. Though
previous research has discussed the effect of trust repair attempts in interpersonal relationships [[1]], much
remains unknown about the outcomes of reparative strategies when they are administered by a cyber network
system. The disputable trustless mechanism of blockchain technology is one such digital trust issue.
Advocates consider it the driver of the future digital economy [[2]], but its decentralized feature
[3] also makes it possible for criminals to use it for illegal purposes. Concerns about cybersecurity [4] hereby
arise. More importantly, some empirical research has shown that nontechnical drivers are the real
obstacles to its current low adoption rate [[5]]. In the long run, the technology industry has to deal with its
users' damaged trust in a specific technology. As mass media is the place where blockchain has been
misrepresented, it should also be the place where people's distrust in that technology is
repaired. Wired, the voice of the technology industry, is at the forefront of reporting on blockchain, serving as
an appropriate third party [[6]] to tackle the problem. However, previous linguistic research on trust repair
mainly focuses on interpersonal trust and seldom steps into the field of trust between humans and technology.
Therefore, this study aims to examine how Wired discursively repairs trust in blockchain.
**2. Literature review**
A clear divergence over what exactly trust is exists across disciplines, because trust has long been an issue
of concern to scholars in various fields. Trust is also a complicated phenomenon that has been classified
into many types in different research backgrounds. Trust within a social context often refers to interpersonal
trust, and the existing literature mainly differentiates initial trust from experiential trust since a trust relationship
evolves. From a management point of view, trust is the lubricant of interpersonal relationships and an
important foundation of cooperation [[7]]. However, violations of trust seem unavoidable, so trust repair
becomes necessary, basically through either verbal (e.g., making an apology) or behavioral (e.g., making
compensation) strategies.
**2.1. Interpersonal trust repair discourse**
The action of trust repair can be taken not only by the trustee [[8]] but also by the trustor, or both, suggesting
three research standpoints. Among them, the standpoint of the violator is criticized for its lack of innovation
in reparative strategies and its neglect of realistic factors. Notably, the standpoint of third-party
evaluation has started to prevail in the field. The theoretical mechanism of trust repair tends to be grounded in
attribution theory [[9]], perceived equity theory, or the theory of social risk, schematically presented in
trust-related models. Reparative strategies such as apology, denial, and explanation [[10]] draw attention compared
with those models, but the effect of trust repair remains universally controversial since it is affected by
various measurable and non-measurable factors [[11]], namely emotion, time span, interpersonal relations,
attribution of violation, and so on. There are also no approbatory criteria within a discipline, nor relatively
mature approaches to consult, partly because of differing research methods.
Linguistic studies on the topic are still underdeveloped, but some of them hold that language plays
a role in building, maintaining, and sometimes undermining a trustworthy relationship [[12]]. It is feasible
to construct trust as discourse [[13]] where ideational concepts of trust are concerned. The model of trust repair
discourse [[14]], developed from the causal attribution model of trust repair, demonstrates how damaged
interpersonal trust is repaired through the discursively reparative strategies of "emphasize the positive and
neutralize the negative" (EP & NN) along the dimensions of the literature-grounded trusting beliefs of "ability,
integrity and benevolence" (AIB) [[15]]. However, the adaptability of the model is questioned because it was
developed from a particular text. Firstly, trust violation does not equate to, or necessarily lead to, a trust crisis,
yet relevant studies seem to prefer the background of a palpable crisis; similar research therefore seldom
probes trust repair against the background of a potential crisis. Secondly, the model lacks consideration of
discourse purpose: it is inappropriate to construct AIB as discourse effects, as they are not decided by
the speaker alone [[16]]. Thirdly, EP & NN are too general when applied in specific contexts, and they fail to manage
emotion, which is an important basis for interpersonal trust repair [[17]]. Although various modifications to the
model have been made to make up for the one-sidedness of previous research, trust between individuals
or groups, especially its emotional side, remains the focal point in complex social intercourse.
In fact, the rational side of trust plays a role in such reparative behaviors, and trust relationships are
not confined to the human-human pattern. People do place their trust in non-human entities in daily life.
With the overwhelming popularity of technology in society, a critical examination of the human-technology
trust relationship is ever more worthwhile. Considering the human factors inherent in trust, a
shift to trust in a specific technology does not surpass the research paradigm of interpersonal trust but
expands its application, and might weaken the flaws of the model by changing the trustee.
**2.2. Trust in blockchain**
“Trust in a specific technology” (trust-in-tech) [[18]] means to “treat technology as trustee” [[19]] in a digital world.
It is neither unreasonable nor uncommon, because people talk about trust in non-human entities in everyday
discourse. Previous studies on interpersonal trust repair can serve as the starting place for exploring
trust-in-tech, and relevant research questions, such as what constitutes trust-in-tech and how to measure it, help
draw up a general picture of the dynamic circulation of the human-technology trust relationship. The
answers to those questions lay a foundation for research on both the violation and the repair of trust-in-tech.
Specifically, the system-like trusting beliefs of “functionality, reliability, and helpfulness” (FRH) [[20]],
corresponding to the human-like trusting beliefs of AIB, are proposed to account for some of the complexities
of building and maintaining such a new relationship in the digital world. FRH mainly involve and assess
the social presence or affordance of a specific technology. The measurement of trust-in-tech resembles
that of interpersonal trust. Studies on the topic are welcome because they not only help to
elucidate how humans actually experience, feel about, and respond to the digital environment [[21]], but, more
importantly, address a pressing issue: in today's technologically manipulated society, trust-in-tech
confronts ever-growing skepticism, and the debate on blockchain is a typical example.
Blockchain originally appeared in the bitcoin papers [[22]] and became a buzzword during the
cryptocurrency mania of 2017 because it provides financial services, via smart contracts [[23]], for customers
without access to banking. As the most popular Distributed Ledger Technology (DLT) [[24]] deployed
in practice, it is believed to be the top area of exploration in supply chains and trade flows. Besides, it solves
a fatal defect of past online systems: once the center was hacked, the whole system collapsed. The center
of the system can be seen as the authority in which people place trust in reality. Quite a few studies focus
on the role of blockchain in strengthening cybersecurity and protecting privacy. Perhaps it is bringing
humans into a brand-new trust paradigm. However, it is not unbreakable [[25]]. Although DLT is encrypted,
its decentralized structure means that start-ups cannot have full control over clients' personal data. There
were industrial efforts to handle data vulnerability in the past, and internet engineers keep working on
technical loopholes and introducing new methods to resist cyberattacks [[26]]. Opinions vary on whether this
trustless technology eliminates our need for trust. The truth lies somewhere in the middle, as corresponding
challenges accompany its wide application [[27]].
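The immutability and tamper-resistance invoked above rest on hash chaining: each block commits to the hash of its predecessor, so editing any stored record breaks every later link. A minimal Python sketch of this mechanism (a toy ledger for illustration, not any production blockchain):

```python
import hashlib

def block_hash(index, prev_hash, record):
    """Hash a block's contents together with the previous block's hash."""
    payload = f"{index}|{prev_hash}|{record}".encode()
    return hashlib.sha256(payload).hexdigest()

def build_chain(records):
    """Build a toy ledger: each block commits to its predecessor's hash."""
    chain, prev = [], "0" * 64
    for i, rec in enumerate(records):
        h = block_hash(i, prev, rec)
        chain.append({"index": i, "prev_hash": prev, "record": rec, "hash": h})
        prev = h
    return chain

def verify(chain):
    """Recompute every hash; any edited record breaks the links after it."""
    prev = "0" * 64
    for blk in chain:
        if blk["prev_hash"] != prev or blk["hash"] != block_hash(blk["index"], prev, blk["record"]):
            return False
        prev = blk["hash"]
    return True

ledger = build_chain(["pay A 5", "pay B 3", "pay C 7"])
assert verify(ledger)
ledger[1]["record"] = "pay B 300"   # tamper with one stored record
assert not verify(ledger)           # verification now fails
```

In a real network the same check is performed by many independent nodes, which is why a single tampered copy cannot quietly rewrite the shared record.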
Blockchain, perhaps more than any other technology, is in need of trust-in-tech to change its currently low
adoption rate and to escort its future development. The decentralized feature of blockchain
leads to its coupling relation [[28]] with our trust-in-tech, but people's distrust in emerging technologies
customarily takes root and sprouts. This study aims to apply specific discursive strategies to repair the system-like
trusting beliefs of blockchain. In addition, Wired magazine is at the forefront of reporting on the technology
industry [[29-30]], where blockchain has been hyped and misunderstood. Therefore, a possible research question
could be: how does Wired apply EP & NN to repair trust in blockchain along the dimensions of FRH? The
study is not set in any trust crisis event, and the state of trust-in-tech involves only a subtle, unidirectional
flow of cognition and emotion.
**2.3. A model of trust-in-tech repair discourse**
Based on the theoretical foundation reviewed above, a model of trust-in-blockchain repair discourse is
initiated for the research needs and presented in Figure 1. The model is adapted from the model of trust repair
discourse and the causal attribution model of trust repair. It is a gradable model, circled in the dotted box,
that contains three linearly developed levels: discourse-as-context, reparative strategies, and system-like
trusting beliefs. At the micro level, the engagement and attitude systems of systemic functional linguistics [[31]]
are introduced to identify the linguistic resources of dialogic engagement, evaluation (explicit or invoked),
and affect, respectively, for fulfilling EP & NN. At the meso level, EP & NN are set to repair trust in
blockchain along the three key dimensions of FRH at the macro level. The research standpoint of third-party
evaluation runs through the whole process. The impact of contextual variables (i.e., Wired and blockchain)
and the causal attributions for violations of trust in blockchain will be discussed based on the results,
especially the discourse analysis.
**Figure 1. An adapted model of trust in blockchain repair discourse**
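Read as a data structure, the three levels of the adapted model can be sketched as follows; the labels come from the text, while the grouping is only an illustrative reading of Figure 1, not a formal specification:

```python
# The three levels of the adapted model, as described in the text. The grouping
# below is an illustrative reading of Figure 1, not a formal specification.
model = {
    "micro: discourse-as-context": {
        "engagement": ["contract", "expand"],
        "attitude": ["affect", "judgement", "appreciation"],
    },
    "meso: reparative strategies": [
        "EP (emphasize the positive)",
        "NN (neutralize the negative)",
    ],
    "macro: system-like trusting beliefs": [
        "functionality", "reliability", "helpfulness",   # FRH
    ],
}

# EP & NN operate on each FRH dimension, yielding the six cells that label
# the rows of Table 1 (F-EP, F-NN, R-EP, R-NN, H-EP, H-NN).
cells = [f"{belief[0].upper()}-{strategy[:2]}"
         for belief in model["macro: system-like trusting beliefs"]
         for strategy in model["meso: reparative strategies"]]
print(cells)  # ['F-EP', 'F-NN', 'R-EP', 'R-NN', 'H-EP', 'H-NN']
```

Crossing the meso and macro levels this way is what makes the row labels of Table 1 exhaustive: every reparative strategy is examined against every trusting belief.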
**3. Research methods**
To answer the research question, 60 articles from the official website of Wired are collected and imported
into UAM CorpusTool [[32]]. The corpus contains about 70,000 words. For corpus annotation, three
systems are built in the tool. Among them, amendments in branch and depth are made to the engagement
and attitude systems to identify the relevant linguistic resources in an alternating way. The trust-in-tech system is
responsible for identifying EP & NN and FRH, respectively, via text analysis. Finally, a discourse analysis is
conducted to describe the reparative process. The data processing is synchronized with the corpus annotation,
and each feature of the systems is accompanied by a detailed gloss to assist the annotation.
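The tallying behind the frequency-and-percentage figures reported next is straightforward once the annotations are exported. A minimal Python sketch, assuming a hypothetical list of (feature, strategy, belief) tuples rather than UAM CorpusTool's actual export format:

```python
from collections import Counter

# Hypothetical annotation export: one (feature, strategy, belief) tuple per
# identified instance. Real annotation is done in UAM CorpusTool; this tuple
# format is an assumption for illustration, not the tool's actual output.
annotations = [
    ("disclaim", "EP", "functionality"),
    ("proclaim", "EP", "helpfulness"),
    ("disclaim", "NN", "functionality"),
    ("expand",   "EP", "reliability"),
    ("disclaim", "EP", "helpfulness"),
]

features   = Counter(a[0] for a in annotations)   # engagement/attitude features
strategies = Counter(a[1] for a in annotations)   # EP vs. NN
beliefs    = Counter(a[2] for a in annotations)   # FRH dimensions

total = len(annotations)
for feature, n in features.most_common():
    # Frequency plus global percentage, as displayed in Figure 2.
    print(f"{feature}: {n} ({100 * n / total:.1f}%)")
```

The same counters, run over the full 4,310-instance annotation, yield the frequencies and global percentages discussed in the Results section.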
**4. Results**
4,310 featured linguistic resources are identified in terms of engagement and attitude, fulfilling
500 instances of EP & NN in the corpus data. The results are displayed in Figure 2, where each feature is
followed by its frequency and global percentage. Specifically, engagement is slightly less frequent
than attitude, but contract distinctly outweighs expand. Furthermore,
disclaim occurs about four times as often as proclaim. The subsystems of disclaim vary slightly, while
those of proclaim vary considerably.

As for the type of attitude, judgement ranks first, followed by appreciation and affect.
About four-fifths of the attitude is explicitly inscribed, and more than half of it is positive. The results
**11** Volume 5; Issue 2
above are generally consistent with similar studies of interpersonal trust repair [33]. Most of the judgement
falls under capacity, and about half of the appreciation falls under reaction.
In/security is the most prominent affect, but most of the affect is non-authorial. For EP & NN, EP is fulfilled
over four times as often as NN. For FRH, the data is inclined to discuss the functionality and helpfulness
of blockchain. Table 1 summarizes the main discursive moves made by Wired to repair the FRH
of blockchain through EP & NN. EP tends to start from the technology end, whereas NN tends to start from the human end of the
trust-in-tech relationship. Functionality shows what blockchain is, reliability deals with
what users care about, and helpfulness anticipates its potentialities.
**Figure 2. The statistical results of the annotation from UAMCT**
**Table 1. A summary of trust in blockchain repair discourse analysis**
_F-EP_ Blockchain is openly secure, highly self-managing, and hard to tamper with.
Blockchain is the solution to problems of record-keeping and provenance.
Blockchain fires middlemen and has the potential to create a trustless cyberspace.
_F-NN_ The proof-of-stake algorithm will make blockchain less energy-consuming.
Blockchain does not show the added information but only computational results.
Tech megatrends boost a blockchain hype that does not tell the full story.
_R-EP_ Blockchain is immutable, so records are permanently stored.
The interdependence of blockchain ensures the integrity of records.
_R-NN_ As a distributed ledger technology, blockchain is hard to take down.
Blockchain cannot avoid online attacks, but online attacks make it more robust.
Quantum computers could break blockchain, but could also rescue it.
What blockchain needs now is not regulation but understanding.
_H-EP_ Blockchain optimizes complex supply chains for big corporations.
Blockchain helps photographers assert control over their work.
Blockchain provides permanent provenance to counteract various kinds of fraud.
_H-NN_ Some use blockchain for illegal purposes, but others use it for good.
Blockchain disrupts the music market but develops the music business.
**5. Discussion**
It is inappropriate to construct AIB for interpersonal trust repair as discourse effects, because such
effects are not decided by the speaker alone. At the macro level, matters of emotionality are naturally harder to control than
those of rationality; at the micro level, the particularity of trust-in-tech requires a third party to play the role
of repairer, and the evaluation from the reputable _Wired_ lowers the uncertainty of the discourse effect.
Besides, trust repair dynamics in human-technology interaction differ from those in human-human
relationships. The FRH of a technology are, in theory, easier to measure than the AIB of a person. Moreover,
FRH carry a positive bias toward technology rather than humans [34], inclining the discourse effect to be
prominent.

According to attribution theory, Wired mainly attributes the violations of trust in blockchain to
external factors such as tech megatrends, the blockchain hype [35], the internet system, cyberattacks,
and illegal or unethical applications. Owing to the locality of these factors, subscribers of Wired perceive
a weak correlation between the violations and the violator, resulting in positive credential assessments of the
FRH of blockchain. The credibility of the violator survives because those factors are uncontrollable, and their
instability also favors repairing trust-in-tech. As for EP & NN, they can be categorized
more precisely as explanation and justification used to repair trust in blockchain, and both function well.
On the one hand, the unrequited emotion between the trustor and the trustee is less urgent to manage
than the negative, even hostile, emotions seen in trust crises; on the other hand, the involvement of a third party in trust repair makes emotion management between it and the other two parties largely unnecessary.
Furthermore, the trust-in-tech repair discourse focuses more on the technology and what users do with it
than on humans.
The influence of contextual variables on some of the results in Figure 2 is discussed from three
aspects. Firstly, affect fails to outnumber either judgement or appreciation in frequency. One possible
explanation lies in the context of Wired. The magazine has devoted itself to all aspects of technology and
innovation for three decades. Stylists see it as a men’s lifestyle magazine that allows for a negotiation of
masculinity premised on work and leisure, production and consumption. This way of conceptualizing
technology as culture exerts a cumulative, subtle influence on the language of _Wired_, which is open,
objective, and straightforward. Secondly, in/security is the most frequently observed affect even though
affect is the least frequent kind of attitude. This could be attributed to the seemingly predetermined relation
between the technology and data security [36]. Thirdly, the data talks more about the functionality and
helpfulness than the reliability of blockchain. This can be justified by considering the corpus annotation:
what FRH refer to is semantically linked with the subsystems of judgement and appreciation, but the
context of blockchain is the reason behind it. The blockchain hype is actually an exaggeration of its key
features or functionality under the tech megatrends [37]. The wide applications of blockchain argue for
its usefulness, and the technology is still nascent with limited feedback or assessment, which explains
the inferior positions of reliability and NN in the frequency counts.
The security concern is a trigger of the blockchain debate, and the trust-in-tech repair discourse analysis
finds that Wired appears to respond to the debate [38]. The response is not a black-or-white affair. There are
problems to think about, such as the general classification of the technology and the level of trust required.
Public or permissionless blockchains like Bitcoin and Ethereum are trustless, yet both require a low
level of trust among anonymous users in order to keep the network running. Private or permissioned
blockchains like Hyperledger are not trustless, owing to the dominant role of one or more organizations in
maintaining the ledgers [39]. Therefore, blockchain has indeed challenged the traditional mode of trust
and has been trying to bring us to the paradigm of digital trust [40], but we still need interpersonal trust to reach
a truly trustless world.
**6. Conclusion**
The consideration of both trust repair and digital trust is necessary to deal with the growing skepticism
toward emerging technologies in the digital age. This study starts from the theoretical foundation of
interpersonal trust repair applied to our damaged trust-in-tech and draws on Wired magazine to frame the blockchain
debate. The trust-in-tech repair discourse analysis demonstrates how Wired applies EP & NN to repair the FRH
of blockchain. Compared with studies on interpersonal trust repair, this study reiterates the rational side of
trust, which results in more predictable discourse effects. The major findings can serve as
references for technology enterprises tackling trust-related problems of products or services powered by
emerging technologies. Of course, there are limitations. The corpus data comes from only one magazine,
which may not show the whole picture of blockchain, and manual annotation is often questioned for its
subjectivity. Future research would expand the corpus data and, if possible, collect feedback from the subscribers of
_Wired_ on the topic by questionnaire.
**Acknowledgments**
The author thanks Prof. Chen for revising the ideas and proofreading the corpus annotation.
**Disclosure statement**
The author declares no conflict of interest.
**References**
[1] Kohn SC, Momen A, Wiese E, et al., 2019, The Consequences of Purposefulness and Human-Likeness
on Trust Repair Attempts Made by Self-Driving Vehicles. Proceedings of the Human Factors and
Ergonomics Society Annual Meeting, 63(1): 222–226. https://doi.org/10.1177/1071181319631381
[2] Upadhyay N, 2020, Demystifying Blockchain: A Critical Analysis of Challenges, Applications and
Opportunities. International Journal of Information Management, 54: 102120.
https://doi.org/10.1016/j.ijinfomgt.2020.102120
[3] Liu M, Shi Y, Chen Z, 2019, Distributed Trusted Network Connection Architecture Based on
Blockchain. Journal of Software, 30(8): 2314–2336.
[4] Wainwright R, 2019, The Cybersecurity Guide for Leaders in Today’s Digital World, World Economic
Forum, viewed October 3, 2022,
https://www3.weforum.org/docs/WEF_Cybersecurity_Guide_for_Leaders.pdf
[5] Koens T, Poll E, 2019, The Drivers Behind Blockchain Adoption: The Rationality of Irrational Choices,
in Euro-Par 2018: Parallel Processing Workshops, Springer, Cham. https://doi.org/10.1007/978-3-030-10549-5_42
[6] Bozoyan C, Vogt S, 2016, The Impact of Third-Party Information on Trust: Valence, Source, and
Reliability. PLoS ONE, 11(2): 1–18. https://doi.org/10.1371/journal.pone.0149542
[7] Huang X, Jiang S, He Z, 2020, A Research Summary of Factors of Interpersonal Trust Repair. Young
Society, 2020(14): 134–135.
[8] Mayer RC, Davis JH, Schoorman DF, 1995, An Integrative Model of Organizational Trust. AMR, 20,
709–734. https://doi.org/10.5465/amr.1995.9508080335
[9] Tomlinson EC, Mayer RC, 2009, The Role of Causal Attribution Dimensions in Trust Repair. Academy
of Management Review, 34(1): 85–104. https://doi.org/10.5465/AMR.2009.35713291
[10] Kim T, Song H, 2021, How Should Intelligent Agents Apologize to Restore Trust? Interaction Effects
Between Anthropomorphism and Apology Attribution on Trust Repair. Telematics and Informatics, 61:
101595. https://doi.org/10.1016/j.tele.2021.101595
[11] Zhang L, Zhang N, 2020, Effectiveness of Trust Repair Strategies in the Crisis of Corporate Internet
Public Opinion. American Journal of Management Science and Engineering, 5(1): 10.
https://doi.org/10.11648/j.ajmse.20200501.12
[12] Pelsmaekers K, Jacobs G, Rollo C, (eds) 2014, Trust and Discourse: Organizational Perspectives, John
Benjamins Publishing Company. https://doi.org/10.1075/dapsac.56
[13] Brugger P, 2015, Trust as a Discourse: Concept and Measurement Strategy – First Results from a Study
on German Trust in the USA. Journal of Trust Research, 5(1): 78–100.
https://doi.org/10.1080/21515581.2015.1011164
[14] Fuoli M, Paradis C, 2014, A Model of Trust-Repair Discourse. Journal of Pragmatics, 74: 52–69.
https://doi.org/10.1016/j.pragma.2014.09.001
[15] McKnight DH, Cummings LL, Chervany NL, 1998, Initial Trust Formation in New Organizational
Relationships. Academy of Management Review, 23(3): 473–490.
[16] Wang X, Liu D, 2019, Discursive Behaviors for Trust-Repair in Crisis: A Text Analysis of BP’s Letters
to Shareholders after Oil Spill Crisis. Foreign Language Research, 210(5): 43–48.
[17] Yao X, Qin Y, 2019, A Cognitive-Emotional Approach to Interpersonal Trust Repair. Modern Foreign
Languages (Bimonthly), 42(6): 743–754.
[18] Lankton NK, McKnight DH, Tripp J, 2015, Technology, Humanness, and Trust: Rethinking Trust in
Technology. Journal of the Association for Information Systems, 16(10): 880–918.
https://doi.org/10.17705/1jais.00411
[19] Corritore CL, Kracher B, Wiedenbeck S, 2003, On-Line Trust: Concepts, Evolving Themes, a Model.
International Journal of Human Computer Studies, 58(6): 737–758.
[20] McKnight DH, Carter M, Thatcher JB, et al., 2011, Trust in a Specific Technology: An Investigation
of its Components and Measures. ACM Transactions on Management Information System, 2(2): 12
https://doi.org/10.1145/1985347.1985353
[21] de Visser EJ, Pak R, Shaw TH, 2018, From ‘Automation’ to ‘Autonomy’: The Importance of Trust
Repair in Human–Machine Interaction. Ergonomics, 61(10): 1409–1427.
[22] Lin X, Hu Y, 2018, A Summary of Blockchain Technology. Investment, Financing and Trade, 45(2):
97–109.
[23] Hewett N, Lehmacher W, Wang Y, 2019, Inclusive Deployment of Blockchain for Supply Chains: Part
5-A Framework for Blockchain Cybersecurity, World Economic Forum, October 4, 2022,
https://www3.weforum.org/docs/WEF_Inclusive_Deployment_of_Blockchain_for_Supply_Chains_
Part_5.pdf
[24] Hewett N, Lehmacher W, Wang Y, 2019, Inclusive Deployment of Blockchain for Supply Chains Part 1 –
Introduction, World Economic Forum, October 4, 2022,
https://www3.weforum.org/docs/WEF_Introduction_to_Blockchain_for_Supply_Chains.pdf
[25] Madnick S, 2020, Blockchain Isn’t as Unbreakable as You Think. MIT Sloan Management Review,
61(2): 65–70. http://dx.doi.org/10.2139/ssrn.3542542
[26] Jordan A, 2020, Cybercrime Prevention Principles for Internet Service Providers, World Economic Forum,
October 5, 2022, https://www3.weforum.org/docs/WEF_Cybercrime_Prevention_ISP_Principles.pdf
[27] Woodside JM, Augustine, Fred KJ, Giberson W, 2017, Blockchain Technology Adoption Status and
Strategies. Journal of International Technology and Information Management, 26(2): 65–93.
https://doi.org/10.58729/1941-6679.1300
[28] Zhao G, Wan Q, Wu Y, Liu S, 2019, Study on the Trust Management Mechanism of Supply Chain
Based on Blockchain. Credit Reference, 250(11): 25–31.
[29] Bødker H, 2017, ‘Gadgets and Gurus’: Wired Magazine and Innovation as a Masculine Lifestyle.
Media History, 23(1): 67–79. https://doi.org/10.1080/13688804.2016.1273103
[30] White K, 1994, The Killer App: Wired Magazine, Voice of the Corporate Revolution. The Baffler, 6(6):
23–28. https://doi.org/10.1162/bflr.1994.6.23
[31] Martin JR, White PRR, 2005, The Language of Evaluation: Appraisal in English. Palgrave Macmillan.
https://doi.org/10.1057/9780230511910
[32] O’Donnell M, 2013, UAM Corpus Tool 3.0 Tutorial Introduction, viewed November 2, 2022,
http://www.corpustool.com/Documentation/UAMCorpusToolTutorial3.0.pdf
[33] Guo G, Mi C, 2020, An Analysis of Boeing’s Trust-Repair Discourse. English Language and Literature
Studies, 10(2): 17–26. https://doi.org/10.5539/ells.v10n2p17
[34] Baker AL, Phillips EK, Ullman D, et al., 2018, Toward an Understanding of Trust Repair in Human
Robot Interaction: Current Research and Future Directions. ACM Transactions on Interactive
Intelligent Systems, 8(4): 30. https://doi.org/10.1145/3181671
[35] Mulligan C, Zhu Scott J, Warren S, Rangaswami J, 2018, Blockchain Beyond the Hype a Practical
Framework for Business Leaders, World Economic Forum, viewed October 6, 2022
https://www3.weforum.org/docs/48423_Whether_Blockchain_WP.pdf
[36] Flanagan AJ, Maclean F, Sun M, et al., 2019, Inclusive Deployment of Blockchain for Supply Chains:
Part 4 – Protecting Your Data, World Economic Forum, October 6, 2022,
https://www3.weforum.org/docs/WEF_Inclusive_Deployment_of_Blockchain_for_Supply_Chains_
Part_4_Report.pdf
[37] Metag J, Marcinkowski F, 2014, Technophobia Towards Emerging Technologies? A Comparative
Analysis of the Media Coverage of Nanotechnology in Austria, Switzerland, and Germany. Journalism,
15(4): 463–481. https://doi.org/10.1177/1464884913491045
[38] Azhar NF, Jie NQ, Hyun KT, et al., 2020, Security and Privacy Issues in Wireless Networks.
https://doi.org/10.20944/preprints202008.0523.v1
[39] de Filippi P, Mannan M, Reijers W, 2020, Blockchain as a Confidence Machine: The Problem of Trust
& Challenges of Governance. Technology in Society, 62: 101284.
[40] Janssen M, Weerakkody V, Ismagilova E, et al., 2020, A Framework for Analyzing Blockchain
Technology Adoption: Integrating Institutional, Market and Technical Factors. International Journal of
Information Management, 50: 302–309.
**Publisher’s note**
Bio-Byword Scientific Publishing remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
_International Conference on Informatics, IoT, and Enabling Technologies (ICIoT)_
# A Blockchain-Based Architecture for Traffic Signal Control Systems
## Wanxin Li [a], Mark Nejad [a], Rui Zhang [b]
_a Department of Civil and Environmental Engineering_
_b Department of Computer and Information Sciences_
_University of Delaware_
_Newark, DE 19716, United States_
_{wanxinli, nejad, ruizhang}@udel.edu_
**_Abstract—Ever-growing incorporation of connected vehicle_**
**(CV) technologies into intelligent traffic signal control systems**
**brings about significant data security issues in the connected**
**vehicular networks. This paper presents a novel decentralized**
**and secure by design architecture for connected vehicle data**
**security, which is based on the emerging blockchain paradigm.**
**In a simulation study, we applied this architecture to defend the**
**Intelligent Traffic Signal System (I-SIG), a USDOT approved**
**CV pilot program, against congestion attacks. The results show**
**the performance of the proposed architecture for the traffic**
**signal control system.**
**_Keywords-blockchain; connected and automated vehicles;_**
**_data security; data credibility; internet of things; internet of_**
**_vehicles; vehicular networks; hyperledger; traffic signal control_**
I. INTRODUCTION
Emerging adaptive traffic signal control systems
incorporate real-time traffic data in their signal phase and
timing (SPaT) mechanisms to improve the performance of
intersections (e.g., safety and throughput). However,
centralized traffic signal control systems and their datacenters
can be attacked by receiving and processing malicious
messages from connected vehicles in the traffic network.
These malicious messages can include false information about
vehicle IDs, locations, trajectories, etc. Systematic malicious
attacks are a major challenge for traffic datacenters that need
to validate a large amount of vehicular data for making
decisions in real time. Without a trustable defending
mechanism, malicious information could lead to serious
consequences in a traffic network, such as collisions [1] and
congestion [2]. In this paper, we present a blockchain-based
architecture to defend intelligent traffic signal control systems
against information and data attacks by transforming the
conventional connected vehicle network into a trustable and
transparent decentralized network.
As an emerging computer network technology, blockchain
was first invented in the cryptocurrency system Bitcoin [3]. In
the past few years, blockchain-based system designs have
come a long way, and they have succeeded in various
decentralized applications [4, 5]. The traceability
and transparency inherent in blockchain match well with
the increasing demand for data security in connected-vehicle
networks. However, most blockchain-based applications
depend largely on digital tokens for their system design, which
limits blockchain technology to being implemented mostly in
cryptocurrency-related systems. In this paper, we extend
blockchain technology from classic cryptocurrency systems
to traffic signal control systems. Blockchain not only links
vehicles and infrastructure together in a decentralized
network but also works as a distributed and immutable
ledger that automatically records vehicular information with
timestamps. Furthermore, this distributed ledger provides
trustable input data directly to intelligent traffic signal control
systems.
_A. Our Contributions_
We address the problem of data security in CV-based
traffic signal control systems. These intelligent systems
receive and process a number of arrival vehicle records
as an input table to generate optimal traffic signal
plans at each intersection. Due to limited computational power
for real-time processing and their centralized algorithms and
datacenters, they are vulnerable if the input table contains
spoofed vehicle information. To defend CV-based traffic
signal control systems against malicious data attacks, we
designed a blockchain-based decentralized architecture.
To the best of our knowledge, this is the first study
exploring the blockchain paradigm in CV-based traffic signal
control systems. Our proposed architecture introduces i) a
customized blockchain network for connected vehicles;
and ii) a consensus protocol design for validating source data.
For the blockchain network, we choose the Hyperledger Fabric
[6] framework as the development platform. Compared with
other blockchain frameworks, Hyperledger Fabric provides
more flexibility for non-cryptocurrency system design.
In this study, we developed a blockchain network prototype.
In addition, we perform simulations showing that our
prototype network can maintain a trustable distributed ledger
for recording arrival vehicle information. For the consensus
protocol, we designed a new mechanism to prevent attackers
from sending spoofed source information to the blockchain
network. We use Roadside Units (RSUs) and witness vehicles
together as references for the other nodes in the network to
validate every piece of vehicle information before it is recorded
permanently on the blockchain.
To show how our proposed architecture contributes to a
realistic CV-based traffic signal control system, we applied
it to defend the vulnerable USDOT Intelligent
Traffic Signal System (I-SIG) [7] in a case analysis. In our
architecture design, we utilize the distributed ledger on the
blockchain network as input for the traffic signal controller,
which prevents spoofing attacks on the original datacenter.
_B. Organization_
The rest of the paper is organized as follows. In Section II,
we present previous research in CV network attacking and
recent progresses in blockchain applications. In Section III,
we describe a full architecture design for the vehicular
network transform, a blockchain framework preliminary, and
we present our blockchain-based network, consensus protocol,
and the workflow process. To further illustrate how our
blockchain based architecture works to defend realistic
intelligent traffic signal control systems, we choose Intelligent
Signal Control System (I-SIG) [7] as a case analysis in Section
IV. In Section V, we performed extensive experiments to test
the robustness and performance of our developed CV
blockchain network. In Section VI, we analyze the security of
the proposed architecture against potential attacks. In Section
VII, we conclude this study and present directions for future
research.
II. RELATED WORK
_A. Data Spoofing Attack in CV Networks_
Similar to many intelligent traffic signal control
systems, the I-SIG system [7] takes arrival vehicle information as
an input table and generates optimal signal plans at an intersection.
In a recent work, Chen et al. [2] showed that the I-SIG system
is vulnerable at the signal control algorithm level. Due to
limited computation power, the signal controller cannot
perform data validation within the real-time processing
requirement, usually 5-7 seconds. They conducted a V2I
attack by spoofing one vehicle's information in the
arrival table, which caused congestion. Previously,
Amoozadeh et al. [1] showed that a spoofing attack in a V2V-based
network can cause significant instability and even
collisions. In another work, Dominic et al. [8] reported new
attack surfaces and data flows in V2V-based networks. Note
that V2I attacks can affect all vehicles in the same network, as
in the I-SIG attack scenario [2], whereas V2V attacks
affect only a certain group of vehicles.
_B. Blockchain Technology in Transportation_
In recent years, exploring the Blockchain paradigm in
general transportation field has attracted a great deal of
attention (e.g. [9-11]). Founded in August 2017, Blockchain
in Transport Alliance (BiTA) has attracted more than 450
members around the world and became the largest
commercial blockchain alliance [12]. These members come
primarily from freight, logistics, and technology companies as
well as academic institutes. The mainstream applications of
blockchain technology in the transportation industry are freight
tracking and food supply chain management. For instance,
IBM has been working with retail giant Walmart to develop
an efficient blockchain-based tracking system for food supply
chain [13]. The blockchain technology helps Walmart to
reduce tracing product time from weeks to seconds. This gives
the company the ability to not only track where the food came
from quickly but also how it was processed and distributed
safely and responsibly.
Some studies have presented the possibility of
implementing blockchain technology in forensic investigation.
A recent study proposed a forensic investigation framework
for IoT using blockchain, which is called FIF-IoT [14]. In
addition, Guo et al. [15] proposed a blockchain-inspired
“proof of event” mechanism for an accident recording system in
a CAV network. Compared to these studies, our work focuses
on blockchain-based system design in a new field that
improves data security for CV-based traffic signal control
systems.
III. ARCHITECTURE DESIGN
_A. Vehicular Network Transform_
In a conventional centralized CV network (Fig. 1), every
traffic signal control system has to set up its own datacenter
that runs all the code and receives all the data. In addition,
vehicles interacting with this control system must
communicate with its centralized datacenter. Due to low
transparency and the single point of failure, a centralized
architecture is not suited for creating trustable connected
vehicle networks that involve frequent real-time data
transmissions.
We propose a blockchain traffic data network (Fig. 2) in
which decentralization brings vehicles closer. Instead of
having a central server and a database, the blockchain is a
network and a database all in one [16]. It creates a vehicle-to-vehicle and vehicle-to-infrastructure network that shares all the
data. Any vehicle connected to the blockchain talks to all the
other vehicles and infrastructures in the network. Thus, there
is no centralized server anymore, only connected vehicles
and infrastructures that reach agreements on the network.
Figure 1. Central Server Vehicle Network
Figure 2. Blockchain-Based Vehicle Network
_B. Blockchain Framework Preliminary_
In our architecture design, we choose Hyperledger Fabric
[6] as the development platform. It is a common platform for
various mainstream blockchain systems. Compared with
older frameworks like Bitcoin [3], both Hyperledger Fabric
and Ethereum [17] provide a programmable portion
called a Smart Contract [18]. A Smart Contract is where the
business logic of a blockchain network runs. We choose
Hyperledger Fabric [6] instead of Ethereum [17] because the
former provides more flexibility and modularity for
blockchain implementations across industries [19]. Popular
frameworks like Ethereum [17] cannot avoid digital
tokens in the system design, which restricts blockchain technology
to serving mostly cryptocurrency-related systems. In
addition, Hyperledger Fabric [6] has a cost-effective approach
to transactions since no mining process from a
cryptocurrency design is needed. By contrast,
both Bitcoin [3] and Ethereum [17] require nodes to mine
transactions, with longer processing times and significant
consumption of computation hardware and electricity.
In a connected vehicular network, we utilize blockchain
technology as a distributed ledger that records every vehicle's
information, including VIN, location (GPS), and trajectory,
in the ledger (Fig. 3). In addition, blockchain technology
automatically adds a timestamp to each record, which makes it
traceable. For this purpose, we do not involve digital tokens at
the architecture design level, to avoid adding unnecessary
components and overheads. On the other hand, the flexibility
and modularity of Hyperledger Fabric have been proven
in freight tracking and food supply chain systems such as the IBM
and Walmart project [13]. These precedents give us an
appropriate launchpad for bringing blockchain technology
into connected vehicular networks. Instead of recording
vehicular information on a vulnerable centralized server,
blockchain technology creates a transparent and trustworthy
decentralized database providing reliable information to the
traffic signal control systems.
As Figure 4 [20] shows, Hyperledger Fabric [6] is a
highly modularized framework for developing full-stack
blockchain networks. We first describe a blockchain network
in four programmable parts: the Model File, Script File, Access
Control, and Query File. The Model File is where we define all the
objects in the network. All the response functions are written
in the Script File. Hyperledger Fabric also provides Access
Control to restrict data access to certain roles in the network.
The Query File works similarly to conventional
database query definitions. Except for the Model File, the
other three parts are pluggable according to the application
requirements. We then package these files into one
Business Network Archive file and deploy it onto a running
blockchain network. This blockchain network can be
accessed and tested through a front-end webpage.
Figure 3. Distributed Ledger on Blockchain
Figure 4. Hyperledger Fabric Infrastructure
_C. Developing the Blockchain Network_
We developed a blockchain network prototype based on the Hyperledger Fabric framework. We identify each vehicle by its VIN. Our blockchain network maintains a distributed ledger for sharing and recording arrival vehicle information as input for the traffic signal control systems. As shown in Figure 3, we define arrival vehicle information in the Model File as follows:
Define Arrival Vehicle Information
Vehicle_Info {
    Record_ID
    VIN
    GPS {
        Longitude
        Latitude
    }
    Trajectory {
        Speed
        Acceleration
    }
    Timestamp
}
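For illustration, the record structure above maps naturally onto a small data class. This is a hypothetical Python sketch of the same schema, not the actual Composer modeling-language syntax; all class and field names here are our own.

```python
from dataclasses import dataclass, asdict
import time

@dataclass(frozen=True)
class GPS:
    longitude: float
    latitude: float

@dataclass(frozen=True)
class Trajectory:
    speed: float
    acceleration: float

@dataclass(frozen=True)
class VehicleInfo:
    """One arrival-vehicle record, mirroring the Model File definition."""
    record_id: str
    vin: str
    gps: GPS
    trajectory: Trajectory
    timestamp: float

record = VehicleInfo("R-001", "1HGCM82633A004352",
                     GPS(longitude=-83.74, latitude=42.28),
                     Trajectory(speed=13.4, acceleration=0.6),
                     time.time())
```

Freezing the dataclasses makes each record immutable once created, which matches the ledger semantics described later.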
Figure 5. Blockchain Network Webpage UI
To make the ledger immutable, we apply an Access Control rule to all participants. Each participant (i.e., vehicle, RSU, or traffic signal controller) has only ADD and READ access to ledger records; therefore, no one can modify data in the ledger. We use the Hyperledger Composer Tool [21] to generate the deployable unit file (.bna) and deploy it on the blockchain network. The Hyperledger Composer Tool [21] also provides a webpage interface for connecting to and testing the blockchain network (Fig. 5). Each participant has an ID registry for connecting to the blockchain network, and we assign the traffic signal controller as the administrator. The other users' roles are either vehicle or RSU.
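The ADD/READ-only rule can be mimicked in a few lines. This is a toy sketch with hypothetical names, not the Fabric/Composer API: a ledger object accepts appends and reads from known roles but raises on any modification attempt.

```python
class AppendOnlyLedger:
    """Toy ledger enforcing the ADD/READ-only Access Control rule."""

    ALLOWED_ROLES = {"vehicle", "rsu", "controller"}

    def __init__(self):
        self._records = []

    def add(self, participant_role, record):
        # Any known participant (vehicle, RSU, controller) may append.
        if participant_role not in self.ALLOWED_ROLES:
            raise PermissionError("unknown participant role")
        self._records.append(dict(record))  # store a defensive copy

    def read(self, index):
        # READ returns a copy so callers cannot mutate stored state.
        return dict(self._records[index])

    def modify(self, index, record):
        # There is deliberately no UPDATE or DELETE path.
        raise PermissionError("ledger records are immutable")

ledger = AppendOnlyLedger()
ledger.add("vehicle", {"vin": "VIN123", "speed": 12.0})
try:
    ledger.modify(0, {"vin": "VIN123", "speed": 99.0})
except PermissionError:
    rejected = True
```

In a real Fabric deployment this policy would be expressed declaratively in the Access Control file rather than in application code.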
_D. Consensus Protocol Design_
By deploying blockchain technology in a connected vehicle network, we can guarantee data immutability and traceability in a decentralized ledger. For this purpose, we design a consensus protocol for the network to validate the source vehicle information. After validation, our blockchain network records the vehicle information permanently. Classic blockchain protocols in cryptocurrencies can validate new transactions by checking the hash codes of tokens and the previous transaction history [22]. This is trustworthy because all tokens are carefully defined and encrypted as source data within the system from the beginning. However, we do not bring the digital-token concept into the proposed connected vehicular network; consequently, the consensus protocol requires a new design.
Figure 6. Broadcasting Scenario
Vehicles broadcast their information across the blockchain network. For the consensus protocol design, we add Roadside Units (RSUs) as nodes in our blockchain network. We then use witness vehicles and nearby RSUs together as references for validating source information. In this scenario (Fig. 6), if broadcast vehicular information matches the references from its nearby RSUs and witness vehicles, the source information is trustworthy and the blockchain network records it. Conversely, if the source vehicular information does not match the references, we treat it as malicious and do not let the blockchain network record it. At the same time, we can locate this vehicle and add it to a blacklist of attackers. The consensus algorithm is represented by the following pseudo-code:
Consensus Algorithm
s = source data;
r = reference data;

Function validation (s, r) {
    l = distributed ledger;
    b = blacklist for recording attackers;
    if (b.find(s) == true) {
        reject;
    } else {
        if (s == r) {
            l.add(s);
        } else {
            reject;
            b.add(s);
        }
    }
}
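The pseudo-code above can be turned into a short runnable sketch. Names and record fields here are illustrative, not from the paper's implementation:

```python
def validate(source, reference, ledger, blacklist):
    """Consensus check: record source data only if it matches the references."""
    if source["vin"] in blacklist:
        return False                      # known attacker: reject outright
    if source == reference:
        ledger.append(source)             # matched nearby RSU/witness references
        return True
    blacklist.add(source["vin"])          # mismatch: reject and blacklist
    return False

ledger, blacklist = [], set()

good = {"vin": "A1", "gps": (42.28, -83.74)}
ok = validate(good, {"vin": "A1", "gps": (42.28, -83.74)}, ledger, blacklist)

bad = {"vin": "B2", "gps": (0.0, 0.0)}
ok2 = validate(bad, {"vin": "B2", "gps": (42.30, -83.75)}, ledger, blacklist)
```

Note that, as in the pseudo-code, the blacklist is checked first: a vehicle that has been blacklisted once is rejected even if its later broadcasts match the references.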
_E. Workflow Process_
Combining the four parts described above, we arrive at the full view of our blockchain-based architecture for connected vehicular networks. As shown in the flowchart (Fig. 7), blockchain technology makes vehicular information transparent and trustworthy by providing a protocol and cryptography on a decentralized network. When a connected vehicle broadcasts its information, the other nodes in the same network first validate it by comparing it with references from nearby RSUs and witness vehicles. If the source information is false, the blockchain network does not record it for the traffic control processes; instead, it records the malicious attack and the attacker. If the source information is correct, the blockchain network records and shares it on a decentralized ledger. Blockchain technology automatically computes a hash code for each vehicle record. Since every node, including connected vehicles and RSUs, saves all the data in the network, a spoofing attack can be quickly found by a peer-to-peer check. All the nodes reach an agreement on the checked data, and this process can be finished in real time, within milliseconds.
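The hash-code and peer-to-peer check can be illustrated with a toy hash chain (a conceptual sketch, not the Fabric internals): each record's hash covers the previous hash, so tampering with any stored record breaks every later link and is caught when a peer recomputes the chain.

```python
import hashlib
import json

def chain_hash(prev_hash, record):
    """Hash a record together with the previous hash to chain the ledger."""
    payload = prev_hash + json.dumps(record, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

records = [{"vin": "A1", "speed": 12.0}, {"vin": "C3", "speed": 8.5}]
hashes, prev = [], "0" * 64
for rec in records:
    prev = chain_hash(prev, rec)
    hashes.append(prev)

def verify(records, hashes):
    """A peer recomputes the chain; any tampering is detected immediately."""
    prev = "0" * 64
    for rec, expected in zip(records, hashes):
        prev = chain_hash(prev, rec)
        if prev != expected:
            return False
    return True

# An attacker silently changes a stored speed value.
tampered = [dict(records[0], speed=99.0), records[1]]
```

Because every node holds the full chain, the check is a local recomputation, which is why it can complete within milliseconds.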
Figure 7. Architecture Flowchart
IV. CASE ANALYSIS
To show how our decentralized architecture works in defending traffic signal control systems, we use the I-SIG system [7] as a case analysis; it is an intelligent signal control system for connected vehicles. As one of the USDOT-approved CV Pilot Programs, the system has been deployed in New York City, Tampa, and Wyoming since 2016 [7].
The I-SIG system takes arriving vehicles' BSM (Basic Safety Message) messages, which contain locations and trajectories, as an input table to calculate and generate signal plans at each intersection (Fig. 8).
In a recent paper [2], Chen et al. showed that the I-SIG system can easily be attacked to create congestion (Fig. 9). They first showed that I-SIG [7] is not able to validate arriving vehicles' data in real time. They then modified one vehicle's location and trajectory data in the arrival table. This straightforward attack strategy can produce a blocking effect that jams the whole intersection.
Figure 8. Original I-SIG system
Figure 9. Attacking I-SIG system
Figure 10. I-SIG system with Blockchain Technology
This kind of attack strategy succeeds against a traffic signal control system that relies on a centralized vehicular network. Our defense is to leverage our blockchain-based architecture to transform the original centralized vehicular network into a decentralized one (Fig. 10). Instead of receiving and saving all vehicular information in a vulnerable datacenter, we record and share the information in a transparent and trustworthy decentralized ledger with traceable timestamps on the blockchain network. We show that blockchain keeps data immutable due to its decentralized cryptographic mechanism. We also introduce a consensus protocol that combines nearby RSUs and witness vehicles as references for validating the source vehicular information. In this way, the decentralized ledger provides clean data input for traffic signal control systems such as I-SIG [7]. If an attacker tries to modify a record on the blockchain, the network can quickly locate and reject the attack.
V. EXPERIMENTS
_A. Experimental Setup_
We conducted simulations to test the robustness and performance of our blockchain framework under spoofing attacks. We deployed the blockchain network on Hyperledger Composer [21], which maintains a distributed ledger for recording and sharing arrival vehicle information containing the VIN, GPS, trajectory, and timestamp. We simulated the process of sending and recording arrival vehicle information by initializing 20 records on the distributed ledger (Fig. 11). Based on our consensus protocol design, the initialized records on the distributed ledger are validated arrival vehicle information. As the default settings, we accessed the blockchain and conducted experiments on macOS High Sierra with a 2.9 GHz Intel i5 processor and a 60 Mbps Wi-Fi connection. We simulated the attack strategy by attempting to modify records on the ledger. We then checked the response of our blockchain framework against the attacks and recorded its performance via Chrome DevTools. To provide more insight into the hardware and internet requirements of our proposed architecture in a real CV environment, we conducted a series of experiments with different participant numbers, network speeds, and processor speeds.
Figure 11. Initializing Arrival Vehicle Information
Figure 12. Response Against Modifying Record
_B. Response Against Attack_
Protected by blockchain technology, our prototype framework successfully rejects vehicular spoofing attacks 100% of the time in real time. Once arrival vehicle information is saved on the distributed ledger, no participant is allowed to modify it. Our blockchain framework rejects any attempt to modify the records and pops up a warning message (Fig. 12). Using Chrome DevTools to record the performance, we found that the response time is on average 39 ms with the default hardware and internet settings. Considering that intelligent traffic control systems such as I-SIG take 5 to 7 seconds to process signal plans, our proposed architecture easily meets the requirements for real-time operation and protects vulnerable traffic control systems.
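The real-time claim can be sanity-checked with the numbers reported: a 39 ms rejection fits the 5 to 7 second I-SIG planning window with two orders of magnitude to spare. A quick arithmetic check (using the conservative 5 s end of the window):

```python
# Real-time budget check: blockchain rejection time vs. I-SIG planning window.
response_ms = 39                 # measured average rejection time
isig_window_ms = 5 * 1000        # I-SIG takes 5-7 s per signal plan (lower bound)

headroom = isig_window_ms / response_ms   # roughly 128x slack
within_budget = response_ms < isig_window_ms
```

Even the throttled-CPU figure reported later (118 ms) leaves the same conclusion intact.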
_C. Change in the Participant Number and Network Speed_
In a blockchain network, every participant runs the same code and saves the same data in a distributed way. In theory, our framework's performance should therefore not be affected by the participant number or the network speed when an attack happens. To change the number of participants, we increased the original arrival vehicle records in the ledger from 20 to 40, 80, 160, and 320, and conducted 8 attacks in each configuration. The average response time against attacks stays around 39 ms, as shown in Fig. 13.
Figure 13. Response Time When Changing Participant Number
Figure 14. Response Time When Changing Network Speed
To change the network speed, we changed the network settings from the default 60 Mbps Wi-Fi to fast 3G and slow 3G. Similarly, the average response time against 8 attacks remains around 39 ms (Fig. 14). As an extreme condition, we also set the attacker offline (i.e., it cannot access the ledger even locally); the ledger is restored once the network reconnects. Note that changing the network speed only affects the performance of adding and sharing new data (arrival vehicle information in our case analysis) on the distributed ledger.
_D. Change in the Processor Speed_
Since each participant runs the code on its own processor and hardware specifications differ, we tested our proposed architecture under different settings. We conducted this experiment by throttling the default CPU speed. With the CPU throttled to 4 times slower, the average response time is 74 ms. With the CPU 6 times slower, loading the webpage (e.g., popping up the warning message) slows to the order of seconds; however, the back-end response process stays at around 118 ms. Figure 15 shows the response results, based on 8 attacks, for the default CPU, the 4-times-slowdown, and the 6-times-slowdown configurations. The results show that our framework still works well in a CV environment with low-tier processors.
Figure 15. Response Time When Changing Processor Speed
Figure 16. Default Computer Response Time Against Multiple Attacks
_E. Multiple Attacks at the Same Time_
We conducted the above experiments based on the single-attack scenario [2] in the I-SIG system [7]. In our last experiment, we tested the response performance against multiple attacks at the same time. Based on the results in Part C, our framework's performance is not affected by other participants, since the framework distributes both code and data onto each participant's hardware. Therefore, multiple attacks should not affect the response performance. To verify this conjecture, we deployed our framework on a Local Area Network (LAN) and added three more computers, A, B, and C, to the network as potential attackers. Although these computers have different processors, we focus on the response time of our default computer, which has a 2.9 GHz i5 processor. To find the relationship between the response time and the number of simultaneous attacks, we conducted this experiment in four rounds: (1) the default computer is the only attacker; (2) the default computer and computer A are attackers; (3) the default computer, computer A, and computer B are attackers; (4) all four computers are attackers. We repeated each round 3 times and recorded the response performance of the default computer in Figure 16. The results show that when there are multiple attacks at the same time, the framework rejects the attacks and keeps the response time at 39 ms for the default computer.
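The per-participant independence can be illustrated with a toy concurrent sketch (hypothetical names, not the experiment's harness): several simulated attackers attempt to overwrite the same record at once, and each attempt is rejected locally without affecting the others.

```python
import threading

class AppendOnlyLedger:
    """Toy ledger: records can be read or appended, never modified."""
    def __init__(self, records):
        self._records = list(records)

    def modify(self, index, record):
        raise PermissionError("ledger records are immutable")

ledger = AppendOnlyLedger([{"vin": "A1", "speed": 12.0}])
rejections = []
rej_lock = threading.Lock()

def attacker(i):
    # Each simulated attacker independently tries to overwrite record 0.
    try:
        ledger.modify(0, {"vin": "A1", "speed": 99.0 + i})
    except PermissionError:
        with rej_lock:
            rejections.append(i)

threads = [threading.Thread(target=attacker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

All four attempts are rejected and the stored record is untouched, mirroring the four-round LAN experiment above.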
Our framework's performance is not affected by misreports from multiple participants. Our blockchain-based framework transforms the conventional connected vehicular network into a decentralized one in which not only the data but also the code is saved and executed on each participant's hardware.
VI. SECURITY ANALYSIS
In this section, we analyze the security of the proposed blockchain-based decentralized architecture for connected vehicular networks.
_A. Spoofing Source Vehicle Information_
In a connected vehicle network, it is possible for on-site attackers to broadcast spoofed source vehicle information, such as false locations or trajectories. To guard against this kind of attack, we first add RSUs as nodes in our blockchain network, and we then combine nearby RSUs and witness vehicles as references for the consensus protocol. With this consensus protocol in place, all participants in the same network reach agreement during the source-information validation process. If the source information matches the reference information from the RSUs and witness vehicles, the blockchain network approves it and saves it permanently on the distributed ledger. If an attacker broadcasts spoofed vehicle information, such as a false location or trajectory, the information will not match the references from the RSUs and witness vehicles; our architecture rejects the spoofed information and adds the attacker to a blacklist.
_B. Recorded Data Attack_
Blockchain technology keeps data immutable. It ensures data security by saving data in a distributed ledger, by peer-to-peer checks, and through various pluggable cryptographic algorithms, including hash digests [23] and Merkle trees [24]. As mentioned in Section III Part B, Hyperledger Fabric [6] also provides Access Control to restrict data access to certain users in the network. The Access Control is implemented such that participants can only read or add new data in the distributed ledger; they cannot make any modifications. When an attacker tries to modify a ledger record, our blockchain framework rejects the attempt and immediately pops up a warning message.
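The Merkle-tree check mentioned above can be sketched in a few lines (a simplified toy, not Fabric's implementation): the ledger's record hashes are folded pairwise into a single root, so changing any one record changes the root and exposes the tampering.

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Fold leaf hashes pairwise into a single Merkle root (toy version)."""
    level = [sha256(leaf.encode()) for leaf in leaves]
    if not level:
        return sha256(b"")
    while len(level) > 1:
        if len(level) % 2:                 # duplicate the last node on odd levels
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

root = merkle_root(["rec1", "rec2", "rec3"])
tampered_root = merkle_root(["rec1", "recX", "rec3"])
```

Comparing roots is enough to detect that a record was altered, without comparing every record individually.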
_C. Multiple Attacks at the Same Time_
We extended the I-SIG attack strategy presented in [2] from a single attack to multiple, simultaneous attacks. The proposed architecture rejects all the attacks and keeps the response performance the same for each participant. This shows that blockchain technology can fully transform a connected vehicle network into a decentralized architecture: there is no centralized server in the network, and each participant runs the code on its own hardware.
_D. Majority Attack_
A majority attack, or 51% attack, is an extreme attack scenario in which a super node with more computational power than the rest of the nodes tries to manipulate the blockchain network. This threat exists, even in theory, only in mining-based blockchain frameworks such as Bitcoin and Ethereum. In our proposed architecture, the blockchain network maintains a distributed ledger for recording arrival vehicle information. Our architecture is resilient to majority attacks because, by employing the flexible Hyperledger Fabric [6] framework, we avoid digital tokens, transactions, and the mining process altogether.
VII. CONCLUSION
In this paper, we designed a blockchain-based, decentralized architecture for connected vehicular networks. Targeting a promising blockchain implementation in a new area, we refined the workflow process in our vehicular network representation. In addition, we developed a blockchain prototype network and a consensus protocol. To show how our architecture works in a realistic traffic signal control system, we used the I-SIG system [7], which is part of the USDOT-approved CV Pilot Program, as a case analysis. By transforming the original centralized vehicular network into a decentralized one, we defend the originally vulnerable I-SIG system [7] against malicious attacks. In addition, we conducted a series of simulations to analyze the response performance under different settings.
This study serves as a first step toward migrating blockchain technology from cryptocurrency systems into traffic signal control systems. Future research directions include: (1) novel consensus protocol designs for validating broadcast source vehicular data under systematic on-site group attacks, in which both nearby RSUs and witness vehicles cooperate with the attacker to send spoofed references; (2) other realistic intelligent traffic control systems based on connected vehicles; and (3) flexible blockchain framework developments for cross-industry implementations.
REFERENCES
[1] M. Amoozadeh et al., "Security vulnerabilities of connected vehicle streams and their impact on cooperative driving," IEEE Communications Magazine, vol. 53, no. 6, pp. 126-132, 2015.
[2] Q. A. Chen, Y. Yin, Y. Feng, Z. M. Mao, and H. X. Liu, "Exposing Congestion Attack on Emerging Connected Vehicle based Traffic Signal Control," Network and Distributed Systems Security Symposium 2018, 2018.
[3] D. Tapscott and A. Tapscott, Blockchain Revolution: How the Technology Behind Bitcoin Is Changing Money, Business, and the World. Penguin, 2016.
[4] V. Buterin, "A next-generation smart contract and decentralized application platform," white paper, 2014.
[5] H. Guo, W. Li, M. Nejad, and C.-C. Shen, "Access Control for Electronic Health Records with Hybrid Blockchain-Edge Architecture," IEEE Blockchain 2019, 2019.
[6] "Hyperledger Fabric," https://www.hyperledger.org/projects/fabric.
[7] "CV Pilot Deployment Program," https://www.its.dot.gov/pilots/cv_pilot_apps.htm.
[8] D. Dominic et al., "Risk Assessment for Cooperative Automated Driving," in Proceedings of the 2nd ACM Workshop on Cyber-Physical Systems Security and Privacy, Vienna, Austria, 2016.
[9] Y. Yuan and F.-Y. Wang, "Towards blockchain-based intelligent transportation systems," in 2016 IEEE 19th International Conference on Intelligent Transportation Systems (ITSC), 2016, pp. 2663-2668.
[10] T. Jiang, H. Fang, and H. Wang, "Blockchain-based Internet of vehicles: distributed network architecture and performance analysis," IEEE Internet of Things Journal, 2018.
[11] V. Sharma, "An Energy-Efficient Transaction Model for the Blockchain-enabled Internet of Vehicles (IoV)," IEEE Communications Letters, vol. 23, no. 2, pp. 246-249, 2019.
[12] "BiTA: Blockchain in Transport Alliance," https://www.bita.studio/.
[13] "IBM Food Trust," https://www.ibm.com/blockchain/solutions/food-trust.
[14] M. Hossain, Y. Karim, and R. Hasan, "FIF-IoT: A Forensic Investigation Framework for IoT Using a Public Digital Ledger," in 2018 IEEE International Congress on Internet of Things (ICIOT), 2018, pp. 33-40.
[15] H. Guo, E. Meamari, and C.-C. Shen, "Blockchain-inspired Event Recording System for Autonomous Vehicles," in 2018 1st IEEE International Conference on Hot Information-Centric Networking (HotICN), 2018, pp. 218-222.
[16] E. Gaetani, L. Aniello, R. Baldoni, F. Lombardi, A. Margheri, and V. Sassone, "Blockchain-based database to ensure data integrity in cloud computing environments," 2017.
[17] "Ethereum," https://www.ethereum.org/.
[18] K. Christidis and M. Devetsikiotis, "Blockchains and smart contracts for the internet of things," IEEE Access, vol. 4, pp. 2292-2303, 2016.
[19] M. Valenta and P. Sandner, "Comparison of Ethereum, Hyperledger Fabric and Corda," FSBC Working Paper, 2017.
[20] "Welcome to Hyperledger Composer," https://hyperledger.github.io/composer/v0.19/introduction/introduction.
[21] "Hyperledger Composer," https://www.hyperledger.org/projects/composer.
[22] Y. Yuan and F. Wang, "Blockchain and Cryptocurrencies: Model, Techniques, and Applications," IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 48, no. 9, pp. 1421-1428, 2018.
[23] J. A. Dev, "Bitcoin mining acceleration and performance quantification," in 2014 IEEE 27th Canadian Conference on Electrical and Computer Engineering (CCECE), 2014, pp. 1-6.
[24] Q. Liu and K. Li, "Decentration Transaction Method Based on Blockchain Technology," in 2018 International Conference on Intelligent Transportation, Big Data & Smart City (ICITBS), 2018, pp. 416-419.
# Improving Byzantine Fault Tolerance in Swarm Robotics Collective Decision-Making Scenario via a New Blockchain Consensus Algorithm
Theviyanthan Krishnamohan ([theviyanthan.20201022@iit.ac.lk](mailto:theviyanthan.20201022@iit.ac.lk))
Informatics Institute of Technology

Research Article

Keywords: blockchain, swarm robotics, proof of identity, proof of work, blockchain consensus algorithm, collective perception

Posted Date: August 3rd, 2022
DOI: [https://doi.org/10.21203/rs.3.rs-1891485/v1](https://doi.org/10.21203/rs.3.rs-1891485/v1)
License: This work is licensed under a Creative Commons Attribution 4.0 International License. [Read Full License](https://creativecommons.org/licenses/by/4.0/)
### Abstract
Swarm robotics applies concepts of swarm intelligence to robotics. Discrete consensus achievement is
one of the major behaviors found in swarm robotics. Various algorithms have been developed for discrete
consensus achievement. However, existing discrete consensus achievement algorithms are vulnerable to
Byzantine robots. Blockchain has been successfully used to mitigate the negative effect of Byzantine
robots. Nevertheless, since the blockchain solution uses the Proof-of-Work blockchain consensus
algorithm, it is vulnerable to the 51% attack. Besides, the swarm also takes longer to achieve consensus.
This research proposes a novel blockchain consensus algorithm called Proof-of-Identity—which uses a
private-public key pair and a swarm controller—to create a dynamically permissioned blockchain that
would negate the 51%-attack problem associated with the Proof-of-Work algorithm while also reducing
the consensus time. This proposed solution was tested against the classical solution and the existing
blockchain solution using the collective perception scenario. Test results show that the Proof-of-Identity
algorithm prevents the 51%-attack problem while improving the consensus time in comparison to the
existing blockchain solution without affecting the exit probability.
### 1 Introduction
Swarm robotics uses multiple, simple robots to collectively solve real-life problems. Collective decision-making is one of the applications of swarm robotics. In collective decision-making, robots in a swarm try
to collectively come to a consensus on one particular decision. Consensus achievement is a type of
collective decision-making scenario where robots collectively choose one among several choices.
Several strategies exist to solve consensus achievement scenarios. However, such solutions are
vulnerable to Byzantine robots. Blockchain-based solutions were developed to provide protection against
Byzantine robots. However, blockchain introduced a new Byzantine problem in the form of the 51%
attack. Further, these solutions also performed poorly in comparison to the existing solutions. Such
issues with the blockchain-based solutions can be zeroed down to the Proof-of-Work (PoW) blockchain
consensus algorithm used.
This paper proposes a novel blockchain consensus algorithm called Proof of Identity (PoI) to provide
improved Byzantine fault tolerance to consensus achievement strategies in swarm robotics. Through
performance and security testing, this study shows that the PoI algorithm offers immunity against the
51% attack while improving performance.
This paper first discusses swarm robotics before providing a primer on blockchain. Then, existing
classical and blockchain-based solutions are explored. Subsequently, the methodology of the solution is
discussed by explaining the PoI algorithm and the benchmarking tool that was developed. Finally, the
experiment setup and test results are expounded before the findings are discussed and the conclusion is
presented.
### 2 Swarm Robotics
Swarm robotics applies concepts of swarm intelligence to robotics in order to solve problems that single,
monolithic or multi-agent robots cannot solve.
Swarm intelligence is heavily inspired by biological systems found in nature such as ant colonies, bee
colonies, bird flocking, and bacterial growth. These systems solve complex problems via the coordination
of simple individuals. A good example of this is insect societies that contain simple and homogenous
individuals that find the best route to a source by communicating using pheromones without
centralization or synchronization (Beni, 2005).
Swarm robotics can be formally defined as “the study of how large number of relatively simple physically
embodied agents can be designed such that a desired collective behavior emerges from the local
interactions among agents and between the agents and the environment” (Şahin, 2005).
## 2.1 Classification of Swarm Robotics
Brambilla et al. (2013) classify the existing works into two major taxonomies, viz. methods and collective
behaviors (Brambilla et al., 2013).
The methods taxonomy is based on the methods used to design swarm robotics systems. The collective
behaviors taxonomy is based on the basic problem-solving behaviors of swarms.
Collective behaviors are divided into four main groups: spatially organizing behaviors, navigation
behaviors, collective decision-making behaviors, and other collective behaviors.
This research deals with collective decision-making behaviors. Collective decision-making is having a
swarm agree on a certain decision. This can be divided into consensus achievement, and task allocation.
Consensus achievement is choosing one option among several others while task allocation is distributing
different tasks among robots. This research focuses on consensus-achievement behavior.
### 3 Blockchain
Blockchain was invented to decentralize monetary systems through a distributed ledger. However, over
time, blockchain has started to be used to create decentralized applications as well (Crosby, 2016)
(Krishnamohan et al., 2020).
A ledger is a chain of blocks that stores transactions. A private-public key pair is used to perform
transactions. All nodes in a blockchain network get a copy of this ledger (Nakamoto, 2009).
A transactor sends money to a recipient by using the recipient’s public key. The transaction is signed
using the transactor’s private key. A transactor must have already received the money to be able to send
it. This is verified by checking if there are transactions in the chain that are addressed to the transactor’s
public key.
To prevent double spending, the order of transactions should be recorded. So, transactions are packed
into blocks and the blocks are chained together using hashes. This makes the order immutable. The
blocks are generated through a process called mining. The nodes that generate blocks are called miners.
Miners compete to generate the next block. The winner is decided by a consensus algorithm. PoW is the
most popular consensus algorithm at present. This algorithm decides the winner by checking if the hash
value of a block is less than a specified value. The difficulty of mining a block can be adjusted by
lowering or raising this value. Miners add a nonce value to their block to try to produce a block with a
hash value below the specified value.
Producing the right hash value is done through trial and error. This work takes CPU time. The right hash
value serves as proof of the miner’s work. Thus, this algorithm is called Proof of Work. To modify the
order of blocks, the work done since that block has to be repeated. This is expensive, thus, making the
blockchain immutable.
### 4 Related Work
## 4.1 Classical Approach
Valentini, Brambilla, et al. (2016) introduced the collective perception scenario to test three different
consensus-achievement strategies (Valentini, Brambilla, et al., 2016). In this scenario, the swarm tried to
find the color of the majority of the tiles in a square grid that had black and white tiles. This scenario had
two states, namely the exploration state and the dissemination state, which were analogous to the
waggle dance of bee populations (Frisch, 1993).
Robots start with an opinion when the experiment is started. This opinion is about the color of the
majority of the tiles. In the exploration state, the robots explore their environments through random walk
and rotations for a random amount of time. If a robot detects an obstacle within 30cm, then it turns in the
opposite direction and continues its motion. In the meantime, the robots scan the color of the floor using
their ground sensors. The quality p_i of an opinion i, where i ∈ {a, b} (a corresponds to black and b to
white), is defined as the amount of time the robot detected the color of its opinion (t_i) over the amount of
time the robot spent in the exploration state (t):
##### p_i = t_i / t
Equation 1
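As a quick illustration of Equation 1, the quality of an opinion is simply the fraction of exploration time during which the robot sensed the color of its own opinion (a Python sketch with made-up tick counts; the function name is ours, not the paper's):

```python
def opinion_quality(ticks_seen_own_color: int, total_exploration_ticks: int) -> float:
    """Quality p_i = t_i / t (Equation 1)."""
    return ticks_seen_own_color / total_exploration_ticks

# e.g., a robot with opinion "white" that sensed white for 180 of 300 exploration ticks
print(opinion_quality(180, 300))  # 0.6
```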
After the exploration state, robots switch to the dissemination state. During this state, while performing
a random walk, robots broadcast their opinion to their neighbors. Depending on the decision-making
strategy used, the best opinion is chosen. Three such strategies exist:
Direct Modulation of Majority-based Decision (DMMD)
When this strategy is used, a robot remains in the dissemination state for a random amount of time
proportional to the quality of its opinion. This allows a robot with a higher quality opinion to broadcast its
opinion to a lot of neighbors.
During the dissemination state, robots also receive the opinions of their neighboring robots. By the end of
this state, the robots choose the opinion of the majority of their neighboring robots as their own and
begin the next cycle. (Valentini, Hamann and Dorigo, 2015)(Valentini, Ferrante, et al., 2016).
Direct Modulation of Voter-based Decision (DMVD)
The DMVD strategy differs from the DMMD only in its decision-making mechanism. Just like DMMD,
DMVD also modulates its dissemination time using the quality of its opinion.
However, when DMVD is used, robots choose the opinion of a random neighbor as their own (Valentini,
Hamann and Dorigo, 2014).
Direct Comparison (DC)
Unlike in DMMD and DMVD, the dissemination time is not modulated in DC. Instead, the dissemination
time is randomly chosen. Besides, the robots broadcast the quality of their opinion in addition to their
opinion. Towards the end, robots compare the quality of their opinion with that of a random neighbor and
choose the greater of the two as their opinion (Valentini, Brambilla, et al., 2016).
Consensus is achieved when all the robots end up with the same opinion.
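The three decision rules can be sketched in Python as follows. This is an illustrative reimplementation, not the authors' code: the `Robot` container and the neighbor-selection details are our assumptions, and the quality-modulated dissemination time of DMMD and DMVD is omitted.

```python
import random
from collections import Counter
from dataclasses import dataclass

@dataclass
class Robot:
    opinion: str      # "black" or "white"
    quality: float    # p_i from Equation 1

def dmmd(robot: Robot, neighbors: list[Robot]) -> str:
    # DMMD: adopt the opinion of the majority among self and neighbors.
    opinions = [robot.opinion] + [n.opinion for n in neighbors]
    return Counter(opinions).most_common(1)[0][0]

def dmvd(robot: Robot, neighbors: list[Robot]) -> str:
    # DMVD: adopt the opinion of one random neighbor (voter model).
    return random.choice(neighbors).opinion if neighbors else robot.opinion

def dc(robot: Robot, neighbors: list[Robot]) -> str:
    # DC: compare own quality against a random neighbor's and keep the better one.
    if not neighbors:
        return robot.opinion
    other = random.choice(neighbors)
    return other.opinion if other.quality > robot.quality else robot.opinion
```

In the actual strategies, DMMD and DMVD additionally modulate how long a robot disseminates (and hence how often its opinion is heard) by its quality, which the sketch above leaves out for brevity.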
## 4.2 Blockchain Approach
Strobel et al. (2018) attempted to solve the Byzantine problem in the classical DMMD, DMVD, and DC
strategies using blockchain (Strobel, Ferrer and Dorigo, 2018). The authors found that the classical
solutions faltered when faulty or malicious robots kept broadcasting the wrong opinion and they showed
that blockchain could make these strategies immune to Byzantine robots.
In the blockchain approach, the exploration state was the same as it was in the classical approach.
However, in the dissemination state, instead of broadcasting their opinion, robots voted using the smart
contract. A vote was cast every 5 ticks (10 ticks made a second), so the higher the quality, the higher the
number of votes was.
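Under these numbers, the vote count per dissemination phase can be sketched as follows. The maximum dissemination duration used below is an illustrative assumption; the paper fixes only the 5-tick vote interval and the 50-vote cap.

```python
def votes_cast(quality: float, max_dissemination_ticks: int = 250,
               ticks_per_vote: int = 5, vote_cap: int = 50) -> int:
    """One vote every 5 ticks, so higher-quality opinions earn more votes,
    up to the 50-vote cap used for DMMD and DMVD."""
    dissemination_ticks = round(quality * max_dissemination_ticks)
    return min(vote_cap, dissemination_ticks // ticks_per_vote)

print(votes_cast(0.6))  # 30 votes for a quality-0.6 opinion
```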
After voting, robots executed the decision-making strategy by calling the smart contract. When DMMD
was used, opinions of two pseudorandom robots were chosen and the opinion of the majority was
chosen as the best opinion. When DMVD was used, robots adopted the opinion of a single
pseudorandom robot as the best opinion.
When DC was used, robots passed both their opinion and its quality to the smart contract, and picked the
opinion of the higher quality between its own opinion and that of a pseudorandom robot.
Strobel et al. (2018) employed exogenous fault detection to identify Byzantine robots (Christensen,
O’Grady and Dorigo, 2009). A vote from a robot was rejected if it was based on an outdated opinion or if
the blockchain versions were different. An outdated opinion is an opinion that has not been updated
during the last 25 blocks. Besides, robots could cast a maximum of 50 votes when DMMD and DMVD
were used and only one vote when DC was used.
Even though Strobel et al. (2018) solved the Byzantine problem using this approach, consensus time was
found to be higher when compared to the classical approaches. This was because of the PoW consensus
algorithm.
Additionally, since PoW is resource-intensive, it is not suitable to run on simple robotics devices. Moreover,
PoW introduced a new Byzantine problem in the form of the 51% attack, which meant that the Byzantine
problem was not completely resolved.
The PoW algorithm can be compromised by a node or a group of nodes with a hash rate in excess of 50%
of the total hash rate of the network (Anita and Vijayalakshmi, 2019). This attack is known as the 51%
attack and the solution of Strobel et al. (2018) is vulnerable to it.
### 5 Methodology
## 5.1 Proof of Identity (PoI)
The PoI algorithm allows only authorized nodes to mine blocks and thus, creates a permissioned
blockchain. However, in contrast to the typical Proof-of-Authority (PoA) algorithms, the authorized nodes
are not declared before the blockchain is run [(Ferdous et al., 2020)]. To allow new miners into the network
during runtime, the PoI algorithm introduces a novel swarm controller that uses a private-public key pair
to sign authorized miners. This allows PoI to create dynamically permissioned blockchains.
When the swarm controller is spun up, a private-public key pair is generated. To add a new miner, the
miner first sends its coinbase to the swarm controller. The swarm controller signs the coinbase with its
private key and returns its signature. The miner also obtains the swarm controller’s public key.
When mining a block, a miner adds its signature to the header of the block and seals it. When verifying
blocks, the verifying node decrypts the signature of the block with the public key of the swarm controller
and checks if the decrypted value is equal to the coinbase of the miner. If the values match, then the
authenticity of the miner can be affirmed.
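The sign-and-verify flow can be illustrated with deliberately tiny textbook RSA parameters. This is a sketch only: the key values and the coinbase string are made up, and a real deployment would use a proper cryptographic library rather than toy RSA.

```python
import hashlib

# Toy RSA key pair for the swarm controller (textbook parameters, NOT secure):
# p = 61, q = 53 -> n = 3233, public exponent e = 17, private exponent d = 2753
N, E, D = 3233, 17, 2753

def _digest(coinbase: str) -> int:
    # Hash the coinbase and reduce it into the toy RSA modulus.
    return int.from_bytes(hashlib.sha256(coinbase.encode()).digest(), "big") % N

def sign_coinbase(coinbase: str) -> int:
    """Swarm controller signs a miner's coinbase with its private key d."""
    return pow(_digest(coinbase), D, N)

def verify_signature(coinbase: str, signature: int) -> bool:
    """A validator 'decrypts' the signature with the controller's public key e
    and checks that it matches the miner's coinbase."""
    return pow(signature, E, N) == _digest(coinbase)

sig = sign_coinbase("0xMinerCoinbase")
print(verify_signature("0xMinerCoinbase", sig))            # True: authorized miner
print(verify_signature("0xMinerCoinbase", (sig + 1) % N))  # False: tampered signature
```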
Consequently, a malicious node cannot mine blocks and add them to the
network if it is not authorized by the swarm controller. At the same time, since the algorithm does not
involve producing the right block through trial and error, the performance concerns are also rectified.
## 5.2 Benchmarking Tool
The benchmarking tool was developed to benchmark the performance of the PoI algorithm using the
collective perception scenario on top of the benchmarking tool developed by Valentini, Brambilla, et al.
(2016) and Strobel et al. (2018). This benchmarking tool improves the existing tool by introducing a live
dashboard to carry out experiments, a database to store experiment data, and a service layer to facilitate
communication between the dashboard and the simulator.
### 5.2.1 The Architecture of the Benchmarking Tool
The architecture of the prototype consists of the frontend layer, service layer, simulator layer, blockchain
layer, and data layer. The frontend layer provides the user of this prototype with a user interface to interact
with the prototype. The service layer sits in between the frontend layer and the simulator layer and
provides the necessary APIs to the frontend layer to communicate with the simulator layer. The simulator
layer interacts with the blockchain layer to solve the collective perception scenario using the smart
contract deployed in the blockchain. The following subsections discuss these layers and the modules
belonging to them in detail.
#### 5.2.1.1 The Frontend Layer
This layer consists of the Graphical User Interface (GUI) that a user will be using to interact with the
prototype. It consists of the following modules:
Experiment Creation Form—This is a form that allows a user to configure the parameters of the
experiment such as the number of robots, the decision rule to be used, the percentage of black and
white tiles, the number of Byzantine robots and the approach to be used.
Experiment Queue—Since, to benchmark different solutions, a user may need to run experiments in
batches, experiments created using the Experiment Creation Form are added to this queue. This
queue allows users to delete experiments that are later deemed unnecessary, specify the number of
times each experiment should be repeated and provides a button to start running the experiments in
the queue.
Experiment Data View—This view shows the result of each experiment live as it is completed in a
tabular format. This view also allows the user to download the results as a Comma-Separated
Values (CSV) file. Moreover, this view also shows a progress bar to give the user an idea about how
many experiments have been completed and how many more remain.
#### 5.2.1.2 The Service Layer
This layer sits in between the frontend layer and the simulator layer and provides the necessary
APIs. The configurations of the experiment entered through the frontend layer are fed to the simulator via
this layer. This layer also communicates the results of the experiment from the simulator layer to the
frontend layer. The modules contained in this layer are as follows:
REST API Service—This provides REST API services to be consumed by the frontend layer. Users can
configure experiments, start experiments and get experiment results using these REST API services.
The experiment configurations sent to this service by the frontend are also persisted in a database in
the data layer.
Websocket—This allows live experiment results to be streamed to the frontend layer so that users
can view the experiment results in a GUI that gets updated automatically.
Message Queue—This is used to capture the experiment results from the simulator layer. This allows
process-to-process communication between the server and the simulator. The experiment results in
the message queue are also persisted in a database in the data layer.
#### 5.2.1.3 The Simulator Layer
This is the layer where the experiments are run. This layer gets the experiment configuration from the
service layer, runs the experiments, and communicates the results of the experiments back to the service
layer using the message queue. This layer consists of the following modules:
Test Grid—This is the environment in which the robots operate. This is a 200 × 200 cm² grid
consisting of 10 × 10 cm² tiles of colors black and white. The ratio between the number of black and
white tiles is configurable. Moreover, this grid is bounded by walls that can be detected by the robots
to avoid collisions.
e-puck Robot—This is a small robot with a footprint of 7 cm² that is used to sense the color of the
tiles and to take part in the consensus achievement task to find the color of the majority of the tiles.
When blockchain is used, this robot also acts as a miner.
ARGoS 3—This is the simulator that controls the robots. This simulator runs the robots on the test
grid and finds out if consensus has been reached or not. Apart from this, the simulator also gathers
evaluation metrics such as the exit probability and consensus time and communicates them to the
service layer.
#### 5.2.1.4 The Blockchain Layer
The blockchain layer consists of the blockchain, the mining nodes, the validators, and the swarm
controller. The e-puck robots in the simulator layer publish their opinion to the blockchain and receive
updated opinions from the smart contract running on the blockchain. The functionality of the modules in
this layer is discussed below.
Swarm Controller—The swarm controller generates a private-public key pair, signs the coinbases of
authorized miners, and
distributes its public key to the miners. This allows the PoI algorithm to create a dynamically
permissioned blockchain.
Miner—The e-puck robots also act as miners who mine blocks to be added to the blockchain. When
the robots publish their opinions as transactions, the miners verify these transactions and add them
to a new block before sealing them with their signature.
Validator—The e-puck robots also act as validators. The validators validate the blocks mined by the
miners before adding them to their blockchain. The blocks are validated by verifying the signature
found in the blocks using their coinbase and the public key of the swarm controller.
Blockchain—The smart contract that runs the decision rule algorithm is deployed in the blockchain.
#### 5.2.1.5 The Data Layer
The data layer consists of a database that is used to persist the experiment results so that this data can
be later serialized into a different format or used as it is for data analysis. Aside from this, experiment
configurations sent by the frontend to the REST API service are also persisted in the database.
### 5.2.2 The Functioning of the Benchmarking Tool
Figure 4 shows the data flow diagram, which depicts how data flows between the different components of
the benchmarking tool. Accordingly, it can be seen that a user first inputs the experiment configuration to the
frontend app, which is then sent to the REST API service. This data is persisted in a database while being
fed into the ARGoS 3 simulator. The simulator then configures the e-puck robots using this configuration
data.
The e-puck robots sense the color of the tiles in the test grid and transact their opinion about the color to
the blockchain miners. The miners verify these transactions, pack them into blocks and broadcast them
to the validators. The validators validate these blocks and add them to their blockchain. The blockchain
smart contract runs the decision rules and updates the e-puck robots with the new opinion. The ARGoS 3
simulator reads the opinions of the robots to decide if consensus has been reached.
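The simulator's consensus check itself reduces to testing whether all robots hold the same opinion; a minimal sketch:

```python
def consensus_reached(opinions: list[str]) -> bool:
    """True when every robot in the swarm holds the same opinion."""
    return len(set(opinions)) == 1

print(consensus_reached(["white", "white", "white"]))  # True
print(consensus_reached(["white", "black", "white"]))  # False
```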
Once consensus is reached, the evaluation metrics of the experiment are pushed to the message queue.
These metrics are persisted in the database and emitted to the frontend using a WebSocket so that the
user can view the data live.
### 6 Testing
## 6.1 Collective Perception Experiment
The collective perception scenario was used to benchmark the research prototype. The collective
perception scenario involves a fixed number of robots coming to a consensus on the color of most tiles in
a grid. The grid used in the experiments was a 200 × 200 cm² square surrounded by four walls. The grid
had 400 tiles, each of area 10 cm × 10 cm. The tiles were either black or white in color
and the ratio between the number of black and white tiles determined the difficulty of the challenge. The
difficulty of the challenge is given by the following equation:
##### ρ_b = b / w
Equation 2
Where:
ρ_b is the difficulty of choosing white as the best opinion
b is the percentage of black tiles
w is the percentage of white tiles
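Reading the difficulty as the black-to-white tile ratio (ρ_b = b/w), a quick Python check shows how a difficulty value maps to a tile split (the helper names are ours):

```python
def difficulty(black_pct: float, white_pct: float) -> float:
    """rho_b = b / w: ratio of black to white tiles."""
    return black_pct / white_pct

def black_share_for(rho: float) -> float:
    """Invert the definition: with b + w = 100, b = 100 * rho / (1 + rho)."""
    return 100 * rho / (1 + rho)

# The hardest setting used in the experiments, rho_b = 0.92,
# corresponds to roughly 48% black and 52% white tiles.
print(round(black_share_for(0.92), 1))  # 47.9
```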
At the beginning of the experiment, one half of the robots started with the opinion black, and the other
half started with the opinion white. During the experiments, robots changed their opinions based on the
decision rules used and the experiment ended when all robots had the same opinion. In the experiments
performed, white was always the color of most of the tiles. This was done to ensure the results of these
experiments could be compared to those of the existing research works.
The experiments were executed in discrete time steps called ticks with 10 ticks forming a second. During
the experiment, two robots could communicate with one another only when the distance between them
was under 50cm.
The experiments had the following configurable parameters, and experiments were run for every value of
each of these parameters. The parameters and the values they took are given in Table 1.
Table 1
The experiment parameters and their values

| Parameter | Values |
| --- | --- |
| Difficulty | 0.52, 0.56, 0.61, 0.67, 0.72, 0.79, 0.85, 0.92 |
| Decision rules | DMMD, DMVD, DC |
| Approach | Classical, Proof of Work (PoW), PoI |
The following metrics were used for benchmarking:
1. Exit probability—The number of correct consensus decisions over the total number of runs.
2. Consensus time—The number of ticks taken by the swarm to reach consensus.

The following approaches were benchmarked:
1. Classical—The original approach used by Valentini et al. (2016).
2. PoW—The blockchain-based approach used by Strobel, Ferrer and Dorigo (2018) using the PoW
consensus algorithm.
3. PoI—The blockchain-based approach used by Strobel, Ferrer and Dorigo (2018) using the PoI
consensus algorithm.
Thus, altogether, 72 different types of experiments were planned. To avoid random errors and variations,
each type of experiment was repeated 10 times. Consequently, 720 experiments were run in total.
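The full factorial design can be enumerated in a few lines of Python (parameter values taken from Table 1; the `exit_probability` helper mirrors the first evaluation metric):

```python
from itertools import product

difficulties = [0.52, 0.56, 0.61, 0.67, 0.72, 0.79, 0.85, 0.92]
decision_rules = ["DMMD", "DMVD", "DC"]
approaches = ["Classical", "PoW", "PoI"]
repetitions = 10

# Every combination of difficulty, decision rule, and approach.
configs = list(product(difficulties, decision_rules, approaches))
print(len(configs))                # 72 experiment types
print(len(configs) * repetitions)  # 720 runs in total

def exit_probability(outcomes: list[bool]) -> float:
    """Correct consensus decisions over the total number of runs."""
    return sum(outcomes) / len(outcomes)

# e.g., 9 of 10 repetitions reaching the correct (white) consensus:
print(exit_probability([True] * 9 + [False]))  # 0.9
```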
## 6.2 Experiment Setup
The experiments were run on a virtual machine running on a macOS host. The details of the virtual
machine and the host machine are furnished in Table 2.
Table 2 Experiment setup

| Component | Model/Type/Capacity |
| --- | --- |
| Virtual machine CPU | ARM64 |
| Virtual machine operating system | Debian 11.3.0 |
| Virtual machine RAM | 14 GB |
| Hypervisor | QEMU 7.0 ARM Virtual Machine |
| Host machine CPU | Apple M1 Pro |
| Host machine operating system | macOS Monterey |
| Host machine RAM | 16 GB (unified memory) |
## 6.3 Test Results
Figure 5 shows the exit probability obtained for the three decision rules using the classical, PoW, and PoI
approaches on a column graph.
Figure 6 shows the consensus time obtained for the three decision rules using the three different
approaches on a box plot.
The results obtained for the classical as well as the PoW approaches were mostly consistent with the
findings of Strobel, Ferrer and Dorigo (2018). The DC decision rule with the classical approach showed
a steep decline in exit probability as the difficulty increased. The DC decision rule with the
classical approach also produced the fastest consensus time.
The DMVD decision rule with the classical approach showed a steady decline in exit probability with the
rise in difficulty whereas the DMMD rule, though showed an overall decline, had comparatively more
variability. Overall, as far as the exit probability was concerned, the blockchain approaches performed worse
than the classical approach. The PoI and PoW approaches performed largely on par with each other, even
though PoI performed marginally better under certain circumstances.
The consensus time of both the classical approach and the blockchain approaches steadily increased
with difficulty for the DC decision rule. However, even though the classical approach showed a similar
steady increase for both the DMMD and DMVD decision rules, the consensus time of the blockchain
approaches was largely unaffected by the difficulty. This observation is consistent with that of Strobel,
Ferrer and Dorigo (2018).
However, unlike the case with the exit probability, the consensus time of the PoI approach showed
a significant improvement over the PoW approach for all decision rules. Nevertheless, the
consensus time of the PoI approach was generally higher than that of the classical approach.
### 7 Discussion
The findings of the tests were very similar to the findings of Strobel, Ferrer and Dorigo (2018). Generally,
the classical approach had a better exit probability than both the PoI and PoW approaches. This is due to
the limitation of the blockchain approach as explained by Strobel, Ferrer and Dorigo (2018). In the
classical approach, duplicate opinions from the neighbors are discarded, while in the blockchain
approach, no such implementation exists. The classical approach was also faster than the PoI and PoW
approaches. This is due to the delays introduced by the mining process. However, the PoI approach was
shown to be faster than the PoW approach. The test results showed that the PoI algorithm, developed to
nullify the Byzantine robot issue introduced by the 51%-attack threat inherent to the PoW algorithm, made
consensus achievement faster while not impacting the exit probability under most circumstances and
slightly improving it under some.
### 8 Conclusion
This research work improved Byzantine fault tolerance in swarm robotics by addressing the 51%-attack
issue found in the existing blockchain solution without compromising on the performance. Moreover, the
developed solution was also shown to perform better than the existing blockchain solution improving the
practical usability of blockchain-based solutions.
Besides, this research also created a web application to benchmark solutions to the collective perception
scenario. This application will help future researchers benchmark their solutions in a much more
user-friendly manner.
-----
### Declarations
## Ethics approval and consent to participate
Not applicable
## Consent for publication
I hereby consent to have my work published in your journal.
## Availability of data and material
Data is available publicly.
https://github.com/PoI-Research/poi-analysis/blob/master/experiment-data.csv
## Competing interests
Not applicable
## Funding
Not applicable
## Authors' contributions
The prototype was designed, developed and tested, and the manuscript was written by Theviyanthan K.
## Authors' information (optional)
Theviyanthan Krishnamohan
theviyanthan.20201022@iit.ac.lk
14, 5/2, Mary’s Road,
Colombo-04,
Sri Lanka
-----
## Acknowledgements
Not applicable
### References
1. Anita, N. and Vijayalakshmi, M. (2019) “Blockchain Security Attack: A Brief Survey,” in 2019 10th
International Conference on Computing, Communication and Networking Technologies, ICCCNT
2019. Institute of Electrical and Electronics Engineers Inc. Available at:
https://doi.org/10.1109/ICCCNT45670.2019.8944615.
2. Beni, G. (2005) “From swarm intelligence to swarm robotics,” Lecture Notes in Computer Science,
3342, pp. 1–9. Available at: https://doi.org/10.1007/978-3-540-30552-1_1.
3. Brambilla, M. et al. (2013) “Swarm robotics: A review from the swarm engineering perspective,”
Swarm Intelligence, 7(1), pp. 1–41. Available at: https://doi.org/10.1007/s11721-012-0075-2.
4. Christensen, A.L., O’Grady, R. and Dorigo, M. (2009) “From fireflies to fault-tolerant swarms of robots,”
IEEE Transactions on Evolutionary Computation, 13(4), pp. 754–766. Available at:
https://doi.org/10.1109/TEVC.2009.2017516.
5. Crosby, M. (2016) “BlockChain Technology: Beyond Bitcoin,” Applied Innovation Review Issue
[Preprint], (2). Available at: http://scet.berkeley.edu/wp-content/uploads/AIR-2016-Blockchain.pdf.
6. Ferdous, M.S. et al. (2020) “Blockchain Consensus Algorithms: A Survey.” Available at:
http://arxiv.org/abs/2001.07091 (Accessed: August 30, 2021).
7. Frisch, K. von (1993) The Dance Language and Orientation of Bees, The Dance Language and
Orientation of Bees. Harvard University Press. Available at:
https://doi.org/10.4159/harvard.9780674418776.
8. Krishnamohan, T. et al. (2020) “BlockFlow: A decentralized SDN controller using blockchain,”
International Journal of Scientific and Research Publications (IJSRP), 10(3), p. p9991. Available at:
https://doi.org/10.29322/ijsrp.10.03.2020.p9991.
9. Nakamoto, S. (2009) “Bitcoin: A Peer-to-Peer Electronic Cash System.” Available at: www.bitcoin.org.
10. Şahin, E. (2005) “Swarm robotics: From sources of inspiration to domains of application,” Lecture
Notes in Computer Science, 3342, pp. 10–20. Available at: https://doi.org/10.1007/978-3-540-30552-1_2.
11. Strobel, V., Ferrer, E.C. and Dorigo, M. (2018) “Managing Byzantine Robots via Blockchain Technology
in a Swarm Robotics Collective Decision Making Scenario,” in International Conference on
Autonomous Agents and Multiagent Systems. Available at: www.ifaamas.org (Accessed: June 26,
2021).
12. Valentini, G., Ferrante, E., et al. (2016) “Collective decision with 100 Kilobots: speed versus accuracy
in binary discrimination problems,” Autonomous Agents and Multi-Agent Systems, 30(3), pp. 553–
580. Available at: https://doi.org/10.1007/s10458-015-9323-3.
13. Valentini, G., Brambilla, M., et al. (2016) “Collective Perception of Environmental Features in a Robot
Swarm,” in Swarm Intelligence, 10th International Conference, ANTS 2016 Brussels, Belgium,
September 7–9, 2016 Proceedings. Springer International Publishing Switzerland 2016, pp. 65–76.
Available at: https://doi.org/10.1007/978-3-319-44427-7_2.
14. Valentini, G., Hamann, H. and Dorigo, M. (2014) “Self-organized collective decision making: The
weighted voter model,” 13th International Conference on Autonomous Agents and Multiagent
Systems, AAMAS 2014, 1(January), pp. 45–52.
15. Valentini, G., Hamann, H. and Dorigo, M. (2015) Efficient Decision-Making in a Self-Organizing Robot
Swarm: On the Speed Versus Accuracy Trade-Off. Available at: www.ifaamas.org (Accessed: June 28,
2021).
### Figures
-----
Figure 1
A diagrammatic representation of the PoI algorithm
-----
Figure 2
The user interface of the benchmarking tool
-----
Figure 3
The layered architecture of the benchmarking tool
-----
Figure 4
The data flow diagram of the benchmarking tool
-----
Figure 5
Exit probability for different decision rules and approaches
-----
Figure 6
Consensus time for different decision rules and approaches
-----
**_Artificial Intelligence Advances | Volume 04 | Issue 02 | October 2022_**
## Artificial Intelligence Advances
https://ojs.bilpublishing.com/index.php/aia
ARTICLE
# A Novel Application of Blockchain Technology and Its Features in an Effort to Increase Uptake of Medications for Opioid Use Disorder
## Renee Garett1*, Zeyad Kelani3, Sean D. Young2,3
1. ElevateU, Irvine, California, CA 92697, United States of America
2. Department of Emergency Medicine, University of California, Irvine, California, CA 92697, United States of America
3. University of California Institute for Prediction Technology, Department of Informatics, University of California, Irvine, California, CA 92697, United States of America
ARTICLE INFO

_Article history_
Received: 11 January 2023
Revised: 28 January 2023
Accepted: 2 February 2023
Published Online: 8 February 2023

_Keywords:_
Blockchain
Opioid use disorder
Data Security

ABSTRACT

The opioid crisis has impacted the lives of millions of Americans. Digital technology has been applied in both research and clinical practice to mitigate this public health emergency. Blockchain technology has been implemented in healthcare and other industries outside of cryptocurrency, with few studies exploring its utility in dealing with the opioid crisis. This paper explores a novel application of blockchain technology and its features to increase uptake of medications for opioid use disorder.

## 1. Background

The misuse of and addiction to opioids is a national public health crisis that has a significant impact on society. In 2017, an estimated 1.7 million Americans suffered from opioid use disorder (OUD) and over 47,000 Americans died due to an opioid overdose. Among adult patients who suffered from chronic pain, between 21% and 29% of those prescribed opioid medication misused it, and 8% to 12% developed OUD [1]. The economic burden of non-medical opioid use attributed to health care services,

*Corresponding Author:
Renee Garett,
ElevateU, Irvine, California, CA 92697, United States of America;
_Email: reneegarettlcsw@gmail.com_

DOI: https://doi.org/10.30564/aia.v4i2.5398

Copyright © 2022 by the author(s). Published by Bilingual Publishing Co. This is an open access article under the Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) License. (https://creativecommons.org/licenses/by-nc/4.0/).
premature mortality, criminal justice activities, child and family assistance programs, education programs, and lost productivity was estimated to be $188 billion [2]. Effective treatment for opioid misuse is available. Food and Drug Administration approved medications for opioid use disorder (MOUD) are methadone, buprenorphine, and naltrexone. Studies showed that treatment with MOUD resulted in decreased mortality, reduced opioid use, and retention in an opioid treatment program (OTP) [3,4], and that long-term treatment improved outcomes [4]. Federal regulations mandate that counseling and behavioral therapy accompany methadone treatment and that buprenorphine providers have the capacity to recommend counseling to patients.

As digital tools continue to proliferate, researchers and clinical practitioners have adopted them to address public health issues. Applications of technology such as mobile health to educate patients [5], improve access [6], and maintain programs [7] for MOUD have been studied. Applications of blockchain technology for mitigating the opioid crisis have been proposed for data collection [8], pain management [9], prescription tracking, and the pharmaceutical supply chain [10]. This paper highlights features of blockchain technology as it applies to MOUD.
## 2. A Primer on Blockchain

Blockchain is an immutable distributed public ledger [11]. It came to prominence as the transformative technology that launched Bitcoin. Blockchain has utility beyond cryptocurrency and has applications in a variety of industries such as finance, e-commerce, governance, and healthcare [12]. Our main inspiration for this paper is the successful use of blockchain technology in Decentralized Finance (DeFi). DeFi is a decentralized, permissionless replication of the current traditional financial infrastructure that provides secure transactions using smart contracts and blockchain verification [13]. Blockchain has the potential to decrease both the cost and time of transaction completion compared to the traditional banking system. Moreover, it has the potential to lead to the democratization of financial transactions and to loosen restrictions on the transnational flow of money [14]. DeFi ensures that all financial transactions are transparent and public while preserving privacy through encryption of user information.
## 3. Features of Blockchain that are Relevant to MOUD

### 3.1 Immutable Chain

A key feature of blockchain technology is the immutable block. A block is akin to a digital folder that contains transactions, a timestamp of the transactions, and an encrypted code called a hash [11]. The blockchain follows a linked-list data structure, and hashes connect blocks: each block contains its own hash and the hash of the previous block, as shown in Figure 1 [15]. In the case of patients with OUD, patient records could be developed into blocks, and before adding each block to the chain, transactions would need to be verified by the network. Upon verification, new blocks would be secured and stored chronologically at the end of the chain. Once a block is added to the chain, its data cannot be altered, even by the data owner, allowing for secure storage and sharing of patient data.
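The hash-linking just described can be sketched in a few lines of Python. This is a toy illustration only, not the paper's implementation; the field names and the choice of SHA-256 are assumptions for demonstration:

```python
import hashlib
import json

def make_block(transactions, prev_hash):
    """Bundle transactions with the previous block's hash, then seal with a hash."""
    header = {"transactions": transactions, "prev_hash": prev_hash}
    digest = hashlib.sha256(json.dumps(header, sort_keys=True).encode()).hexdigest()
    header["hash"] = digest
    return header

def chain_is_valid(chain):
    """Each block must point at the hash of the block before it."""
    return all(curr["prev_hash"] == prev["hash"]
               for prev, curr in zip(chain, chain[1:]))

genesis = make_block(["patient record created"], prev_hash="0" * 64)
block2 = make_block(["prescription dispensed"], prev_hash=genesis["hash"])
chain = [genesis, block2]
assert chain_is_valid(chain)

# Altering a stored record breaks every link after it, illustrating immutability:
# corrections must instead be appended as new blocks.
genesis["hash"] = "tampered"
assert not chain_is_valid(chain)
```

Because any change to an earlier block invalidates the links that follow, a provider wanting to amend a record would append a new block rather than edit an old one, exactly as described above.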
The signature is a key component in ensuring secure communication between blocks. Verification happens by checking the sender's private key and the recipient's public key, as shown in Figure 1. OUD patient records on the blockchain could only be added, not changed. If a MOUD provider wants to change a patient's record, the new information would need to be included in a new block and added to the chain. Prescription drug monitoring programs (PDMP) might benefit from the immutability feature of the blockchain. Each transaction, or data entry, by the prescriber and pharmacist is verified and secured before being added to the blockchain as a separate block, which yields accurate data on the patient's prescriptions in real time.
**Figure 1. Block structure [15].**
Source: Rawat, D.B., Chaudhary, V., Doku, R. Blockchain Technology: Emerging Applications and Use Cases for Secure and Trustworthy Smart Systems. JCP. 2020 Nov 10; 1(1): 4-18.
### 3.2 Decentralized Network and Interoperability

A decentralized network refers to the structure of the blockchain. The blockchain is a distributed ledger technology in that the ledger is distributed to all participating computers (or nodes) in the network and can be accessed by all users on the network. There is no centralized authority that manages the blockchain. Nodes act in concert to verify new transactions on the network, and a copy of the updated blockchain is downloaded. As there is no gatekeeper, users access the data through encrypted keys.

A public (permissionless) blockchain is open in that the public has access to all data, and transactions can be recorded and verified by everyone in the network. It has high transparency and accountability. On the other hand, a private (permissioned) blockchain can only be read by those with the required access, typically granted by a single organization. Transparency is reduced in favor of greater access control. A consortium blockchain is a hybrid of public and private blockchains. The network is managed by a group of stakeholders instead of one central organization (private) or the public. Transactions are verified by a group of preapproved entities, which have a high degree of control over who can access the data [16].

With respect to healthcare, a consortium blockchain could afford patients more control over their data and medical records, since their data are not tied to a hospital or physician. They have the capacity to grant access to physicians, an opioid treatment program (OTP), counselors, pharmacies, and the PDMP. Each of these entities can then view or update the patient's medical records without needing approval or authorization. Communication between all involved in the patient's treatment is seamless, and issues with disparate medical records dissipate.
### 3.3 Secure Data Storage

The distributed ledger is the backbone of blockchain technology; it is composed of a write-only database that is continuously distributed across all network nodes [15]. Nodes execute blocks of programs known as smart contracts. The network then uses consensus algorithms to choose the final version of the database from all updated nodes.

Patient medical records should be kept private, secure, and confidential; marginalized patients such as those with OUD will discontinue treatment or avoid seeking treatment due to the fear of stigma [17,18] and perceived violations of privacy and confidentiality [17]. Due to potential legal consequences, as well as facing stigma from family and friends, individuals who misuse opioids value privacy and confidentiality. Additionally, individuals who misuse opioids may also experience stigma from their healthcare providers.

Therefore, OUD patients need a very secure method of storing and sharing their data to avoid further stigmatization or the negative consequences associated with identifying such patients. One project relevant to keeping MOUD patient data is the InterPlanetary File System (IPFS), a peer-to-peer network for storing data and making it available. IPFS splits data files into smaller chunks, encrypts them, and distributes them among different nodes on the network [19]. Files can then be queried back using a content identifier (CID).
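The chunk-and-address idea behind IPFS can be illustrated with a toy sketch. This simplifies heavily: real IPFS uses Merkle DAGs and multihash-encoded CIDs, omits nothing like encryption here, and the dictionary standing in for the network is purely hypothetical:

```python
import hashlib

CHUNK_SIZE = 4  # bytes, for illustration; real IPFS chunks are ~256 KiB

def store(data, network):
    """Split data into chunks, address each chunk by its hash, return a root id."""
    chunk_ids = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        cid = hashlib.sha256(chunk).hexdigest()
        network[cid] = chunk          # chunks may live on different nodes
        chunk_ids.append(cid)
    root = hashlib.sha256("".join(chunk_ids).encode()).hexdigest()
    network[root] = chunk_ids         # root object lists its child chunk ids
    return root

def retrieve(root, network):
    """Query the file back purely by its content identifier."""
    return b"".join(network[cid] for cid in network[root])

network = {}
root_cid = store(b"MOUD patient record", network)
assert retrieve(root_cid, network) == b"MOUD patient record"
```

Because identifiers are derived from content, the same data always yields the same root id, and any node holding a chunk can serve it without a central gatekeeper.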
### 3.4 Privacy

Users are provided with a pair of cryptographic keys: public and private. The public key is visible to the public and serves as the user's public identity. The private key is used to initiate and sign transactions and guarantees user authenticity [16]. In blockchain, protected health information (PHI) will be accessible to others only if permission is granted by the patient. Patients have agency over who can view their data, who can update it, and for how long entities have access. Patients own their data on the blockchain and may grant access to treatment programs, pharmacies, counselors, etc. If a patient transfers to another clinic or stops the program, access to the blockchain can be revoked. Patients may also view a history of who accessed their data.
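A minimal sketch of this patient-controlled access model follows. The class and method names are hypothetical; a real deployment would record grants, revocations, and reads as blockchain transactions rather than in-memory state:

```python
class AccessRegistry:
    """Toy patient-owned access-control list with an audit trail."""

    def __init__(self, owner):
        self.owner = owner
        self.granted = set()
        self.audit_log = []   # history of who accessed the data

    def grant(self, entity):
        self.granted.add(entity)

    def revoke(self, entity):
        self.granted.discard(entity)

    def read(self, entity):
        if entity != self.owner and entity not in self.granted:
            raise PermissionError(f"{entity} has no access")
        self.audit_log.append(entity)   # every access is logged
        return "protected health information"

record = AccessRegistry(owner="patient")
record.grant("pharmacy")
record.read("pharmacy")      # allowed while the grant is active
record.revoke("pharmacy")    # e.g., the patient leaves the program
```

After the revocation, a read attempt by the pharmacy fails, while the patient retains access and can inspect the full audit log.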
### 3.5 Transparency

In dealing with the opioid crisis, data provenance will keep a record of the history of MOUD participation: the date of entry into a program, which OTP the patient goes to, the type of medication used, visits with counselors, and insurance billing. All of these events will be updated on the blockchain, creating a transparent history of the patient's treatment. This is especially useful for populations without regular access to a healthcare provider, such as those without insurance, the homeless, and individuals recently released from prison.
### 3.6 Efficiency

One key feature of blockchain technology is its capacity for efficiency. Registration on the blockchain can be used as authentication for enrollment in programs. Treatment facilities may use blockchain identity authentication prior to providing treatment to patients, obviating the need to keep records in-house and minimizing the potential for private information to be stolen through network attacks. Removal of barriers to use of the PDMP would lead to increased use [20]. The PDMP could benefit from blockchain technology in delivering timely data to the network, thereby minimizing the interval between dispensing prescriptions and submission to the PDMP. This enhances patient safety by providing accurate information on a patient's recent prescriptions.
### 3.7 New Paradigms: DeSci and DAOs

Like the established DeFi, Decentralized Science (DeSci) is a new way of doing science built on blockchain technology. It is a new paradigm that utilizes smart contracts, blockchain, and other decentralized technologies to address the inefficiency of MOUD scientific research. DeSci is defined as an interoperable system that allows multiple stakeholders in the scientific research community to collaborate without trusting (or knowing) each other [21]. Trustless scientific collaboration in that regard can happen within Decentralized Autonomous Organizations (DAOs), which are collective, democratically managed organizations using programs running on the blockchain [22]. One application of DAOs in providing MOUD is facilitating treatment agreement contracts between patients and providers, Medicaid prior authorizations, and expansion of access. Despite the availability of MOUD, access and initiation by patients remain low [23]. One possible way to increase MOUD access is to expand the training and credentialing of eligible providers [23]. Once qualified practitioners submit all necessary documents (Waiver Notification of Intent, training certificate) to a DAO, a smart contract may fast-track the credentialing process using a decentralized governance structure and in-network due diligence.
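The credentialing flow sketched above could, in principle, look like the following toy contract, with plain Python standing in for on-chain code; the document names and the two-reviewer approval threshold are illustrative assumptions, not part of any real DAO:

```python
class CredentialingContract:
    """Toy smart-contract sketch: a provider is auto-credentialed once all
    required documents are submitted and enough reviewers have attested."""

    REQUIRED = {"waiver_notice_of_intent", "training_certificate"}

    def __init__(self, approvals_needed=2):
        self.approvals_needed = approvals_needed
        self.submissions = {}    # provider -> set of submitted document names
        self.attestations = {}   # provider -> set of attesting reviewer ids
        self.credentialed = set()

    def submit(self, provider, document):
        self.submissions.setdefault(provider, set()).add(document)
        self._maybe_credential(provider)

    def attest(self, provider, reviewer):
        self.attestations.setdefault(provider, set()).add(reviewer)
        self._maybe_credential(provider)

    def _maybe_credential(self, provider):
        # Fast-track rule: all documents present and enough attestations.
        if (self.REQUIRED <= self.submissions.get(provider, set())
                and len(self.attestations.get(provider, set())) >= self.approvals_needed):
            self.credentialed.add(provider)

dao = CredentialingContract()
dao.submit("dr_lee", "waiver_notice_of_intent")
dao.submit("dr_lee", "training_certificate")
dao.attest("dr_lee", "reviewer_1")
dao.attest("dr_lee", "reviewer_2")
assert "dr_lee" in dao.credentialed
```

The point of the sketch is the governance structure: no single administrator approves the provider; credentialing follows automatically from the recorded submissions and attestations.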
## 4. Challenges in Implementation

Like any new technology, blockchain is developing every day and faces several challenges related to MOUD applications. The most challenging is scalability: a permissionless blockchain allows higher computational resources across the network but limited transaction volume. For example, the Bitcoin blockchain allows only 7 transactions per second with almost 10 million users and 200,000 daily submitted transactions [24]. On the other hand, permissioned blockchains allow higher transaction volume but with limited computational power, owing to their limited network base. Another related challenge is the cost of operation, as it is still unknown what the exact cost of operating blockchain technology in healthcare would be.
## 5. Conclusions

Though effective treatment for opioid use disorder exists, barriers challenge uptake for those who would most benefit from treatment. The key features of blockchain technology presented here highlight ways in which innovative technologies may be implemented by healthcare and public health practitioners to address these limitations.

## Author Contribution

All authors contributed to the manuscript conception and design. All authors read and approved the final manuscript.
## Conflict of Interest
None of the authors report a conflict of interest.
## Funding
This work was supported by the National Center for
Complementary and Integrative Health under Grant
4R33AT010606-03 and National Institute on Drug Abuse.
## References
[1] National Institute on Drug Abuse, 2020. Opioid Overdose Crisis [Internet] [cited 2020 Sep 3]. Available from: https://www.drugabuse.gov/drug-topics/opioids/opioid-overdose-crisis.
[2] Davenport, S., Caverly, M., Weaver, A., 2019. Economic Impact of Non-Medical Opioid Use in the United States [Internet]. Annual Estimates and Projections for 2015 through 2019. Available from: https://www.soa.org/globalassets/assets/files/resources/research-report/2019/econ-impact-non-medical-opioid-use.pdf.
[3] Koehl, J.L., Zimmerman, D.E., Bridgeman, P.J., 2019. Medications for management of opioid use disorder. American Journal of Health-System Pharmacy. 76(15), 1097-1103.
[4] Mancher, M., Leshner, A.I., 2019. Medications for opioid use disorder save lives. National Academies Press: Washington (DC).
[5] Cavazos-Rehg, P.A., Krauss, M.J., Sowles, S.J., et al., 2015. "Hey Everyone, I'm Drunk." An Evaluation of Drinking-Related Twitter Chatter. Journal of Studies on Alcohol & Drugs. 76(4), 635-643.
[6] Gustafson, D.H., Landucci, G., McTavish, F., et al., 2016. The effect of bundling medication-assisted treatment for opioid addiction with mHealth: Study protocol for a randomized clinical trial. Trials. 17(1), 592.
[7] Guarino, H., Acosta, M., Marsch, L.A., et al., 2016. A mixed-methods evaluation of the feasibility, acceptability, and preliminary efficacy of a mobile intervention for methadone maintenance clients. Psychology of Addictive Behaviors. 30(1), 1-11.
[8] Raghavendra, M., 2019. Can Blockchain technologies help tackle the opioid epidemic: A Narrative Review. Pain Medicine. 20(10), 1884-1889.
[9] Chang, M.C., Hsiao, M.Y., Boudier-Revéret, M., 2020. Blockchain Technology: Efficiently managing medical information in the pain management field. Pain Medicine. 21(7), 1512-1513.
[10] Evans, J.D., 2019. Improving the transparency of the pharmaceutical supply chain through the adoption of Quick Response (QR) Code, Internet of Things (IoT), and Blockchain Technology: One result: Ending the opioid crisis. Pittsburgh Journal of Technology Law & Policy. 19, 35-53.
[11] Pilkington, M., 2016. Blockchain Technology: Principles and applications [Internet] [cited 2020 Aug 26]. Available from: https://www.elgaronline.com/view/edcoll/9781784717759/9781784717759.00019.xml.
[12] Underwood, S., 2016. Blockchain beyond bitcoin. Communications of the ACM. 59(11), 15-17.
[13] Schär, F., 2021. Decentralized Finance: On Blockchain- and Smart Contract-Based Financial Markets [Internet] [cited 2022 Mar 10]. Available from: https://research.stlouisfed.org/publications/review/2021/02/05/decentralized-finance-on-blockchain-and-smart-contract-based-financial-markets.
[14] Chen, Y., Bellavitis, C., 2020. Blockchain disruption and decentralized finance: The rise of decentralized business models. Journal of Business Venturing Insights. 13, e00151.
[15] Rawat, D.B., Chaudhary, V., Doku, R., 2020. Blockchain technology: Emerging applications and use cases for secure and trustworthy smart systems. Journal of Cybersecurity and Privacy. 1(1), 4-18.
[16] Dib, O., Brousmiche, K.L., Durand, A., et al., 2018. Consortium blockchains: Overview, applications and challenges. International Journal on Advances in Telecommunications. 11(1 & 2), 51-64.
[17] Tsai, A.C., Kiang, M.V., Barnett, M.L., et al., 2019. Stigma as a fundamental hindrance to the United States opioid overdose crisis response. PLOS Medicine. 16(11), e1002969.
[18] Boekel, L.C., Brouwers, E.P.M., Weeghel, J., et al., 2013. Stigma among health professionals towards patients with substance use disorders and its consequences for healthcare delivery: Systematic review. Drug and Alcohol Dependence. 131(1), 23-35.
[19] IPFS Powers the Distributed Web [Internet] [cited 2022 Mar 10]. Available from: https://ipfs.io/.
[20] Norwood, C.W., Wright, E.R., 2016. Promoting consistent use of prescription drug monitoring programs (PDMP) in outpatient pharmacies: Removing administrative barriers and increasing awareness of Rx drug abuse. Research in Social and Administrative Pharmacy. 12(3), 509-514.
[21] Tenorio-Fornés, Á., Tirador, E.P., Sánchez-Ruiz, A.A., et al., 2021. Decentralizing science: Towards an interoperable open peer review ecosystem using blockchain. Information Processing & Management. 58(6), 102724.
[22] Kaal, W.A. A Decentralized Autonomous Organization (DAO) of DAOs [Internet] [cited 2021 Mar 6]. Available from: https://ssrn.com/abstract=3799320 or http://dx.doi.org/10.2139/ssrn.3799320.
[23] Jones, C.M., Campopiano, M., Baldwin, G., et al., 2015. National and state treatment need and capacity for opioid agonist medication-assisted treatment. American Journal of Public Health. 105(8), e55-e63.
[24] Krawiec, R., Housman, D., White, M., et al., 2016. Opportunities for Health Care. 16.
|
{
"disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.30564/aia.v4i2.5398?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.30564/aia.v4i2.5398, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": null,
"status": "GOLD",
"url": "https://journals.bilpubgroup.com/index.php/aia/article/download/5398/4025"
}
| 2,023
|
[] | true
| 2023-02-08T00:00:00
|
[
{
"paperId": "04e1310c4e25d8afc3d5f1e8af09610389e7fc35",
"title": "Decentralizing science: Towards an interoperable open peer review ecosystem using blockchain"
},
{
"paperId": "908a73dc1212ca9fef02a5f7da975888ac9f6785",
"title": "A Decentralized Autonomous Organization (DAO) of DAOs"
},
{
"paperId": "5f1c4811b2446efc921c9818c8f81463dc84728b",
"title": "Blockchain Technology: Emerging Applications and Use Cases for Secure and Trustworthy Smart Systems"
},
{
"paperId": "082f7f6e1fda6358d47df5d26fe862ef6021a803",
"title": "Decentralized Finance: On Blockchain- and Smart Contract-Based Financial Markets"
},
{
"paperId": "5670ce7fe0f40c6c06554941cccfe09d75262e1a",
"title": "Blockchain Disruption and Decentralized Finance: The Rise of Decentralized Business Models"
},
{
"paperId": "ba23293ff0f0f71f1e12dc1d39683903f84d7ce8",
"title": "Stigma as a fundamental hindrance to the United States opioid overdose crisis response"
},
{
"paperId": "fd2596175a4d59f5d2a9511012bdc5bf0cd37716",
"title": "Blockchain Technology: Efficiently Managing Medical Information in the Pain Management Field."
},
{
"paperId": "48c75d9aad103302ed35ac027bd5b8704c0a8e16",
"title": "Can Blockchain technologies help tackle the opioid epidemic: A Narrative Review."
},
{
"paperId": "fdde449c70cb7c119fb1bf1be7a28f1fae39e984",
"title": "Medications for management of opioid use disorder."
},
{
"paperId": "223ffd80aee26bb4b9818c355f7b73010e9d00b9",
"title": "Improving the Transparency of the Pharmaceutical Supply Chain through the Adoption of Quick Response (QR) Code, Internet of Things (IoT), and Blockchain Technology: One Result: Ending the Opioid Crisis"
},
{
"paperId": "5175df3cffd519b4a8362549ae72f7b7b2575900",
"title": "The effect of bundling medication-assisted treatment for opioid addiction with mHealth: study protocol for a randomized clinical trial"
},
{
"paperId": "efe573cbfa7f4de4fd31eda183fefa8a7aa80888",
"title": "Blockchain beyond bitcoin"
},
{
"paperId": "781fc2c9f69a36dcd88599bd89ad8978c97a16f3",
"title": "Promoting consistent use of prescription drug monitoring programs (PDMP) in outpatient pharmacies: Removing administrative barriers and increasing awareness of Rx drug abuse."
},
{
"paperId": "c114400f822ceee43484b8e9af4834ce6d69719c",
"title": "A mixed-methods evaluation of the feasibility, acceptability, and preliminary efficacy of a mobile intervention for methadone maintenance clients."
},
{
"paperId": "e31ca71621e1402a46ac2c1afb2eba9a7061d139",
"title": "Blockchain Technology: Principles and Applications"
},
{
"paperId": "9addf1a016f9a401e482662a8105b8ebd61f396b",
"title": "National and State Treatment Need and Capacity for Opioid Agonist Medication-Assisted Treatment."
},
{
"paperId": "baeafb8a472dc915965edc7fe1ed3f73ab43c38b",
"title": "\"Hey Everyone, I'm Drunk.\" An Evaluation of Drinking-Related Twitter Chatter."
},
{
"paperId": null,
"title": "Opioid Overdose Crisis"
},
{
"paperId": "1f7656566668b5f980eb89abdc0e7dc37f8edd58",
"title": "Medications for Opioid Use Disorder Save Lives"
},
{
"paperId": null,
"title": "Economic Impact of Non-Medical Opioid Use in the United States [Internet]. Annual Estimates and Projections for 2015 through 2019"
},
{
"paperId": "2dc3f16404739c153ce6d45bf370e295623f6714",
"title": "Consortium Blockchains: Overview, Applications and Challenges"
},
{
"paperId": null,
"title": "Opportunities for Health Care"
},
{
"paperId": "6961233d5e641ccfe9a1b5e326283df11e030e7e",
"title": "[Stigma among health professionals towards patients with substance use disorders and its consequences for healthcare delivery: systematic review]."
},
{
"paperId": null,
"title": "Annual Estimates and Projections for"
},
{
"paperId": null,
"title": "IPFS Powers the Distributed Web"
}
] | 5,132
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/ffde41db2f51d5d7aaf52ee49b5b004276222b88
|
[
"Computer Science"
] | 0.80556
|
Provably Secure Group Key Management Approach Based upon Hyper-Sphere
|
ffde41db2f51d5d7aaf52ee49b5b004276222b88
|
IEEE Transactions on Parallel and Distributed Systems
|
[
{
"authorId": "1738396",
"name": "Shaohua Tang"
},
{
"authorId": "2156184",
"name": "Lingling Xu"
},
{
"authorId": "1848009",
"name": "Niu Liu"
},
{
"authorId": "144095295",
"name": "Xinyi Huang"
},
{
"authorId": "143985770",
"name": "Jintai Ding"
},
{
"authorId": "2109527354",
"name": "Zhiming Yang"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"IEEE Trans Parallel Distrib Syst"
],
"alternate_urls": [
"http://ieeexplore.ieee.org/servlet/opac?punumber=71"
],
"id": "7c9d091e-015e-4e5d-a11f-9bc369fcf414",
"issn": "1045-9219",
"name": "IEEE Transactions on Parallel and Distributed Systems",
"type": "journal",
"url": "http://www.computer.org/tpds"
}
| null |
# Provably Secure Group Key Management Approach Based upon Hyper-sphere
Shaohua Tang1,2, Lingling Xu1, Niu Liu1, Jintai Ding2,3, and Zhiming Yang1

1 School of Computer Science & Engineering, South China University of Technology, Guangzhou, China
shtang@IEEE.org, csshtang@scut.edu.cn
2 Department of Mathematical Sciences, University of Cincinnati, OH, USA
jintai.ding@mail.uc.edu
3 Dept. of Applied Math., South China University of Technology, China
jintai.ding@gmail.com
**Abstract.** Secure group communication systems have become increasingly important for many emerging network applications. An efficient and robust group key management approach is indispensable to a secure group communication system. Motivated by the theory of hyper-spheres, this paper presents a new group key management approach with a group controller GC. In our new design, a hyper-sphere is constructed for a group, and each member in the group corresponds to a point on the hyper-sphere, which is called the member's private point. The GC computes the central point of the hyper-sphere, intuitively, whose "distance" from each member's private point is identical. The central point is published such that each member can compute a common group key, using a function that takes each member's private point and the central point of the hyper-sphere as input. This approach is provably secure under the pseudo-random function (PRF) assumption. Compared with other similar schemes, by both theoretical analysis and experiments, our scheme (1) has significantly reduced memory and computation load for each group member; (2) can efficiently deal with massive membership change with only two re-keying messages, i.e., the central point of the hyper-sphere and a random number; and (3) is efficient and very scalable for large-size groups.

**Keywords:** Group Communication, Key Management, Hyper-Sphere, Pseudo-Random Function (PRF), Provable Security
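As a toy numeric illustration of the equidistance idea only (the actual scheme operates over a finite field with a PRF, not over Euclidean geometry): every point on a circle has the same squared distance to the published center, so each member can derive an identical value from its own private point and the public center alone.

```python
import hashlib
import math

# Published value: the "central point". Private values: distinct points
# placed on the same circle of radius 5 around it (toy 2-D analogue).
center = (3.0, 4.0)
radius = 5.0
private_points = [(center[0] + radius * math.cos(t),
                   center[1] + radius * math.sin(t))
                  for t in (0.3, 1.1, 2.5)]

def member_key(point, center):
    """Squared distance to the published center is identical for all members,
    so hashing it yields the same group key for everyone on the circle."""
    d2 = round((point[0] - center[0]) ** 2 + (point[1] - center[1]) ** 2, 6)
    return hashlib.sha256(str(d2).encode()).hexdigest()

keys = {member_key(p, center) for p in private_points}
assert len(keys) == 1   # every member derives the same group key
```

An outsider holding a point off the circle derives a different value, which is the intuition behind re-keying by publishing a fresh central point.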
## 1 Introduction

With the rapid development of Internet technology and the popularization of multicast, group-oriented applications such as video conferencing, network games, and video on demand are playing important roles. How to protect the communication security of these applications is becoming more and more significant. Generally speaking, a secure group communication system should not only provide data confidentiality, user authentication, and information integrity, but also accommodate perfect scalability. Without any doubt, a secure, efficient, and robust group key management approach is essential to a secure group communication system.
**Our Contributions.** This paper presents a secure group key management approach based on the properties of the hyper-sphere. In mathematics, a hyper-sphere is a generalization of the surface of an ordinary sphere to arbitrary dimension. The distance from any point on the hyper-sphere to its central point is identical. Inspired by this principle, a secure group key management scheme is designed. The most significant advantages of the proposed approach are the reduction of user storage, user computation, and the amount of update information during re-keying. The group key is updated periodically to protect its secrecy. Each key is completely independent of any previously used and future keys. A formal security proof for our scheme is given under the pseudo-random function assumption.

**Organization.** The remainder of this paper is organized as follows. A brief survey of related schemes for secure group key management is given in Section 2. Some preliminaries and the security model are given in Section 3. The proposed secure group key management approach is presented in Section 4. Security is formally proven, and performance is discussed, in Section 5. Comparisons with related work are presented in Section 6. Finally, Section 7 summarizes the major contributions of this paper.
## 2 A Brief Survey of Related Work
There are various approaches on the key management for secure group communication.
Rafaeli and Hutchison [30] presented a comprehensive survey on this area. Existing
schemes can be divided into three different categories: centralized, distributed, and decentralized schemes.
In a centralized system, there is an entity GC (Group Controller) controlling the
whole group [30]. Some typical schemes in this category include Group Key Management Protocol (GKMP) [19, 20], Secure Lock (SL) [12], Logical Key Hierarchy (LKH)
[41], etc. The Group Key Management Protocol (GKMP) [19, 20] is a direct extension
from unicast to multicast communication. It is assumed that there exists a secure channel between the GC and every group member. Initially, the GC selects a group key K0
and distributes this key to all group members via the secure channel. Whenever a member joins in the group, the GC selects a new group key KN and encrypts the new group
key with the old group key yielding K[′] = EK0(KN) then broadcasts K[′] to the group
members. Moreover, the GC sends KN to the joining member via the secure channel
between the GC and the new member. Obviously, the solution is not scalable [30]. The
Secure Lock (SL) scheme [12] takes advantage of Chinese Remainder Theorem (CRT)
to construct a secure lock to combine all the re-keying messages into a single message
while the group key is updated. However, CRT is a time-consuming operation. As mentioned in [12], the SL scheme is efficient only when the number of users in a group is
small, since the time to compute the lock and the length of the lock (hence the transmission time) is proportional to the number of users. The Logical Key Hierarchy (LKH)
scheme [41] adopts tree structure to organize keys. The GC maintains a virtual tree,
and the nodes in the tree are assigned keys. The key held by the root of the tree is the
group key. The internal nodes of the tree hold key encryption keys (KEK). Keys at leaf
nodes are possessed by individual members. Every member is assigned the keys along
the path from its leaf to the root. When a member joins or leaves the group, its parent
-----
Provably Secure Group Key Management Approach Based on Hyper-sphere 3
node’s KEK and all KEKs held by nodes in the path to the root should be updated. The
number of keys which need to be changed for a joining or leaving is O(log2 n) and the
number of encryptions is O(2 × log2 n). If there are a great deal of members need to join
or leave the group, then the re-keying overhead will increase proportionally to the number of members changed. There are some other schemes that adopt tree structures, for
example, OFT (One-way Function Tree) [37], OFCT (One-way Function Chain Tree)
[10], Hierarchical α-ary Tree with Clustering [11], Efficient Large-Group Key [29], etc.
In the distributed schemes, there is no explicit GC and the key generation can be
either contributory or done by one of the members [30]. Some typical schemes include: Burmester and Desmedt Protocol [9], Group Diffie-Hellman key exchange [38],
Octopus Protocol [5], Conference Key Agreement [7], Distributed Logical Key Hierarchy [34], Distributed One-way Function Tree [16], Diffie-Hellman Logical Key Hierarchy [28, 21], Distributed Flat Table [40], etc. More recent work has paid greater attention to contributory and collaborative group key agreement [14, 46, 24, 25, 1, 2]. Recently,
the concepts of asymmetric group key agreement and contributory broadcast encryption were proposed [42, 43]. An asymmetric group key agreement (ASGKA) protocol
[42] lets the group members negotiate a shared encryption key instead of a common
secret key. The encryption key is accessible to attackers and corresponds to different
decryption keys, each of which is only computable by one group member. A contributory broadcast encryption (CBE) [43] enables a group of members negotiate a common
public encryption key while each member holds a decryption key.
In the decentralized architectures, the large group is split into small subgroups.
Different controllers are used to manage each subgroup [30]. Some typical schemes
include: Scalable Multicast Key Distribution [4], Iolus [26], Dual-Encryption Protocol [15], MARKS [8], Cipher Sequences [27], Kronos [36], Intra-Domain Group Key
Management [13], Hydra [31], etc.
Secure group key management approaches can be applied in many areas, for example: wireless/mobile networks [33, 18, 44, 35, 39, 45], wireless sensor networks [32], storage area networks [22], etc.
## 3 Preliminaries
In this section, we briefly introduce the concept of hyper-sphere, and present some syntax used throughout this paper. Then we define Pseudo-Random Function (PRF), and
describe the security model in which we prove the security of our group key management protocol.
**3.1** **N-dimensional Hyper-sphere**
For any natural number N ∈ N, an N-dimensional hyper-sphere or an N-sphere is a
generalization of the surface of an ordinary sphere to arbitrary dimension. In particular,
a 0-sphere is a pair of points on a line, a 1-sphere (illustrated in Fig. 1) is a circle in
a plane, and a 2-sphere is an ordinary sphere in three-dimensional space. Spheres of
dimension N > 2 are sometimes called hyper-spheres.
S. Tang, L. Xu, N. Liu, J. Ding, Z. Yang
**Fig. 1. A 1-sphere (a circle) in a plane, with center C, radius R, and points B0, B1, B2 on the circle**
**Hyper-sphere in Euclidean Space.** In mathematics, an N-sphere of radius r ∈ R with
a central point C = (c0, c1, . . ., cN) ∈ R^{N+1} is defined as the set of points in (N + 1)-dimensional Euclidean space which are at distance r from the central point C. Any point
X = (x0, x1, . . ., xN) ∈ R^{N+1} on the hyper-sphere can be represented by the equation

(x0 − c0)^2 + (x1 − c1)^2 + . . . + (xN − cN)^2 = r^2. (1)
Any given N + 2 points Ai = (ai,0, ai,1, . . ., ai,N) ∈ R^{N+1}, where i = 0, 1, . . ., N + 1,
can uniquely determine a hyper-sphere as long as certain conditions are satisfied, which
will be presented at the end of this subsection. By applying the coordinates of the points
A0, A1, . . ., AN+1 to (1), we can obtain a system of N + 2 equations

(a0,0 − c0)^2 + (a0,1 − c1)^2 + . . . + (a0,N − cN)^2 = r^2,
(a1,0 − c0)^2 + (a1,1 − c1)^2 + . . . + (a1,N − cN)^2 = r^2, (2)
. . .
(aN+1,0 − c0)^2 + (aN+1,1 − c1)^2 + . . . + (aN+1,N − cN)^2 = r^2.
By subtracting the j-th equation from the (j+1)-th equation, where j = 1, 2, . . ., N + 1,
we can get a system of linear equations with N + 1 unknowns c0, c1, . . ., cN:

2(a0,0 − a1,0)c0 + . . . + 2(a0,N − a1,N)cN = Σ_{j=0}^{N} a0,j^2 − Σ_{j=0}^{N} a1,j^2,
. . .
2(aN,0 − aN+1,0)c0 + . . . + 2(aN,N − aN+1,N)cN = Σ_{j=0}^{N} aN,j^2 − Σ_{j=0}^{N} aN+1,j^2. (3)
If and only if the determinant of the coefficient matrix in (3) is non-zero, this system
of linear equations has a unique solution c0, c1, . . ., cN. By applying the values of
c0, c1, . . ., cN to one of the equations in (2), we can obtain r^2.
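The recovery of the center and radius described by (2)-(3) can be sketched for the smallest case, N = 1: a circle determined by three points in the plane. The helper name and sample points below are illustrative, not from the paper.

```python
# Sketch: recover the center and radius of a 1-sphere (circle) from
# N + 2 = 3 points in the plane, following equations (2)-(3).

def circle_from_points(A0, A1, A2):
    """Solve the 2x2 linear system obtained by subtracting
    consecutive equations of (2), then recover r^2 from (1)."""
    # Coefficients of the linear system (3): 2*(a_{j,k} - a_{j+1,k})
    m00, m01 = 2 * (A0[0] - A1[0]), 2 * (A0[1] - A1[1])
    m10, m11 = 2 * (A1[0] - A2[0]), 2 * (A1[1] - A2[1])
    z0 = (A0[0]**2 + A0[1]**2) - (A1[0]**2 + A1[1]**2)
    z1 = (A1[0]**2 + A1[1]**2) - (A2[0]**2 + A2[1]**2)
    det = m00 * m11 - m01 * m10
    if det == 0:
        raise ValueError("points are collinear: no unique circle")
    # Cramer's rule for the 2x2 system
    c0 = (z0 * m11 - z1 * m01) / det
    c1 = (m00 * z1 - m10 * z0) / det
    r2 = (A0[0] - c0)**2 + (A0[1] - c1)**2
    return (c0, c1), r2

# Three points on the unit circle centered at the origin:
center, r2 = circle_from_points((1.0, 0.0), (0.0, 1.0), (-1.0, 0.0))
print(center, r2)  # (0.0, 0.0) 1.0
```

The collinearity check corresponds to the non-zero determinant condition stated above.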
**Hyper-sphere over Finite Field.** We can extend the concept of hyper-sphere to finite
fields. For simplicity, the Galois field GF(p) is adopted as the ground field, where p is a
large prime number; the results can be easily extended to other forms of finite
fields. For any given positive integer N and vector C = (c0, c1, . . ., cN) ∈ GF(p)^{N+1},
we define the function
R : GF(p)^{N+1} → GF(p)

as

R(X) ≡ ∥X − C∥^2 mod p, (4)

where X = (x0, x1, . . ., xN) ∈ GF(p)^{N+1}, and

∥X − C∥^2 ≡ (x0 − c0)^2 + (x1 − c1)^2 + . . . + (xN − cN)^2 mod p.

For a given R ∈ GF(p), the hyper-sphere determined by R and C is defined by

R(X) ≡ R mod p, (5)

or

(x0 − c0)^2 + (x1 − c1)^2 + . . . + (xN − cN)^2 ≡ R mod p. (6)

Notice that only R is needed in our scheme; the square-root of R over GF(p) is never
required throughout this paper, and the square-root may not always be a valid operation
over GF(p).
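A minimal sketch of (4)-(6): evaluating R(X) over a toy field and checking that two points share the same value R, i.e., lie on the same hyper-sphere. The small prime and coordinates are illustrative only; the scheme requires a large prime p.

```python
# Sketch: evaluate R(X) = ||X - C||^2 mod p from (4) and check that
# several points lie on the same hyper-sphere (6). Toy values only.

p = 101  # small prime for illustration; the paper requires a large prime

def R_of(X, C):
    return sum((x - c) ** 2 for x, c in zip(X, C)) % p

C = (3, 7, 11)               # central point in GF(p)^3, i.e. a 2-sphere
X1 = (4, 9, 14)              # offsets (1, 2, 3): R = 1 + 4 + 9 = 14
X2 = (3 - 1, 7 - 2, 11 - 3)  # offsets (-1, -2, -3) give the same R
print(R_of(X1, C), R_of(X2, C))  # 14 14: both on the same hyper-sphere
```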
**3.2** **Syntax**
If κ ∈ N, then 1^κ is the string consisting of κ ones. If A is a randomized algorithm,
then y ← A(x) denotes the assignment to y of the output of A on input x when run with
fresh random coins. We use the notation u ←R S to denote that u is chosen randomly
from S. Unless noted, all algorithms are probabilistic polynomial-time (PPT) and we
implicitly assume that they take an extra parameter 1^κ in their input, where κ is a security
parameter. A function ν : N → [0, 1] is negligible if for all c ∈ N there exists a κc ∈ N
such that ν(κ) < κ^{−c} for all κ > κc.
**3.3** **Pseudo-Random Function (PRF)**
Let κ be a security parameter, and let F^κ : Keys(F^κ) × D → R be a family of functions with
input length lin(κ), output length lout(κ), and key length lkey(κ), where Keys(F^κ) stands
for the key space of F^κ, and D and R represent the input space and output space respectively.
Let Func : D → R be the set of all functions from D to R. We adopt some expressions of
pseudo-random functions from [6, 17]; the definition is given as follows.
**Definition 1 (Pseudo-Random Function).** _We say that F^κ is a pseudo-random function (or PRF for short) if FK(x) is polynomial-time computable in κ, where FK ∈ F^κ, K ∈ Keys(F^κ) and x ∈ D, and for every PPT distinguisher D who is given access to an oracle for a function g : D → R, where g is chosen at random either from Func or from F^κ, the advantage Adv^{PRF}_{F^κ,D} is negligible in κ. Adv^{PRF}_{F^κ,D} is defined by the indistinguishability of the following two experiments:_
Experiment EXP^{prf−1}(D):        Experiment EXP^{prf−0}(D):
    K ←R Keys(F^κ)                   g ←R Func
    b ← D(FK)                        b ← D(g)
    return b                         return b

_The advantage Adv^{PRF}_{F^κ,D} is defined as_

Adv^{PRF}_{F^κ,D} = |Prob[EXP^{prf−1}(D) = 1] − Prob[EXP^{prf−0}(D) = 1]|.
**PRF Assumption:** There exists no (t, ϵ)-PRF distinguisher in κ. In other words, for
every probabilistic, polynomial-time, 0/1-valued distinguisher D, Adv^{PRF}_{F^κ,D} ≤ ϵ for any
sufficiently small ϵ > 0.

In our construction of the group key management protocol, we specify a family of
pseudo-random functions F^κ : GF(p) × GF(p) → GF(p), i.e. F^κ = { fa(·) | a ∈ GF(p)}.
The cardinalities of F^κ and Func are p and p^p respectively.
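The paper leaves the concrete instantiation of the family F^κ open. As one hedged possibility, f_a(u) could be realized from a standard keyed primitive such as HMAC-SHA256 reduced mod p; this is an assumption for illustration, not the authors' construction, and the naive mod-p reduction of the digest introduces a small statistical bias.

```python
# Sketch of one possible instantiation of F^κ = { f_a : GF(p) -> GF(p) },
# using HMAC-SHA256 keyed by the index a. Illustrative assumption only.
import hmac, hashlib

p = (1 << 127) - 1  # a Mersenne prime as the ground field modulus (illustrative)

def f(a, u):
    """f_a(u): family member indexed by a in GF(p), evaluated at u in GF(p)."""
    digest = hmac.new(a.to_bytes(16, "big"), u.to_bytes(16, "big"),
                      hashlib.sha256).digest()
    return int.from_bytes(digest, "big") % p

u = 12345
print(f(42, u) == f(42, u))  # deterministic for a fixed key and input
print(f(42, u) != f(43, u))  # different keys almost certainly differ
```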
**3.4** **Security Model**
Usually, a group key management scheme includes phases such as initialization,
adding members, removing members, massively adding and removing members, and
periodic update.
Our adversarial model described below is similar to the formal security model of
Atallah et al. [3] and Dutta et al. [14]. Let P = {U1, U2, · · ·, UN} be a set of N users
or group members. At any point of time, any subset of P may decide to establish a
session key via the group controller GC who is a trusted third party. We identify the
execution of protocols for initial group key establishment, adding member, removing
member, and periodically re-keying as different sessions. The adversarial model allows each user an unlimited number of instances of joining or leaving or re-keying.
We assume that an adversary never participates as a user in the protocol. This adversarial model allows concurrent execution of the protocol. The interaction between
the adversary A and the protocol users occurs only by querying oracles, which models the adversary's capabilities in real attacks. Let G, G1, and G2 be three user sets
such that G ∩ G1 = ∅ and G2 ⊆ G. More precisely, let G = {(U1, i1), ..., (Un, in)},
G1 = {(Un+1, in+1), ..., (Un+k, in+k)}, and G2 = {(U_{j1}, i_{j1}), ..., (U_{jk}, i_{jk})}, where {U1, ..., Un} is
any non-empty subset of P. We will require the following notations.
-----
Provably Secure Group Key Management Approach Based on Hyper-sphere 7
LS_GC    Long-term secret kept by the group controller GC.
LS_U     Long-term secret of user U.
Π^i_U    The i-th instance of user U.
sk^i_U   Session key after execution of the protocol by Π^i_U.
sid^i_U  Session identity for instance Π^i_U. We set sid^i_U = G = {(U1, i1), · · ·, (Un, in)} such that (U, i) ∈ G and users U1, · · ·, Un wish to agree upon a common key in a session using unused instances Π^{i1}_{U1}, · · ·, Π^{in}_{Un}.
pid^i_U  Partner identity for instance Π^i_U, defined by pid^i_U = {U1, · · ·, Un}, such that (Uj, ij) ∈ sid^i_U for all 1 ≤ j ≤ n, where ij comes from sid^i_U defined above.
acc^i_U  0/1-valued variable which is set to 1 by Π^i_U upon normal termination of the session, and 0 otherwise.
In our setup we assume that each user U with instance Π^i_U knows his partners' identities pid^i_U in a session. Two instances Π^{i_{j1}}_{U_{j1}} and Π^{i_{j2}}_{U_{j2}} are _partnered_ if sid^{i_{j1}}_{U_{j1}} = sid^{i_{j2}}_{U_{j2}} and acc^{i_{j1}}_{U_{j1}} = acc^{i_{j2}}_{U_{j2}} = 1.
An adversary’s interaction with principals in the network is modeled by allowing it
to have access to the following oracles.
– Execute(G) : This query models passive attacks in which the attacker eavesdrops
on an honest execution of the group key management protocol among unused instances
Π^{i1}_{U1}, ..., Π^{in}_{Un}, and outputs the transcript of the execution. A transcript consists of the
messages that were exchanged during the honest execution of the protocol.
– Send(U, i, m) : This query models an active attack, in which the adversary A may
intercept a message and then either modify it, create a new one, or simply forward it
to the intended participant. The output of the query is the reply (if any) generated
by the instance Π^i_U upon receipt of message m.
– Reveal(U, i) : This query unconditionally outputs the session key sk^i_U if it has previously been accepted by Π^i_U; otherwise a value NULL is returned. This query
models the misuse of session keys, i.e. known session key attacks.
– Corrupt(U) : This query outputs the long-term secret LS_U (if any) of user U. We
say that user Ux is honest if and only if no query Corrupt(Ux) has ever been made
by the adversary. Corrupt(GC) is not allowed since the GC is a trusted third party
in the adversarial model we adopt.
– Test(U, i) : This query is allowed only once, at any time during the adversary's
execution. A bit b ∈ {0, 1} is chosen uniformly at random. The adversary is given
sk^i_U if b = 1, and a random session key otherwise.
Throughout the paper, we assume that all communications in the group key management protocol are authenticated. The adversary can ask Execute, Reveal, and Corrupt
queries several times, while the Test query is asked only once and on a fresh instance.
We say that an instance Π^i_{U_{x0}} is _fresh_ unless either the adversary, at a certain point,
queried Reveal(U_{x0}, i) or Reveal(U_{x1}, j) with (U_{x1}, j) ∈ sid^i_{U_{x0}}, or the adversary
queried Corrupt(U_{x2}) with U_{x2} ∈ pid^i_{U_{x0}}.
Finally, the adversary outputs a guess bit b′. Such an adversary is said to win the
game if b′ = b, where b is the hidden bit used by the Test oracle.
Let Succ denote the event that the adversary A wins the game for the protocol. We
define
**Adv := |2Prob[Succ] −** 1|
to be the advantage of the adversary A in attacking the protocol.
**Definition 2. We say that a group key management protocol is secure if for any PPT**
_adversary A who makes qE Execute queries, runs in time t and does not violate the_
_freshness of the Test instance, the advantage Adv(t) is negligible in κ._
## 4 The Proposed Scheme Based on Hyper-Sphere
**4.1** **The Proposed Approach**
Inspired by the mathematical principle that every point on a hyper-sphere is equidistant from the central point, a new secure group key management scheme is proposed.
Before the establishment of a group, the group controller GC chooses a large prime
number p and a family of pseudo-random functions F^κ = { fK : GF(p) × GF(p) →
GF(p)} as described in Section 3.3, and publishes them to the public. Hereafter,
all computations are conducted over the finite field GF(p).
Intuitively, a hyper-sphere is constructed for the group, and each member in the
group corresponds to a point on the hyper-sphere. The GC, who manages the group
initialization and membership change operations, computes the central point C of the
hyper-sphere and publishes it to the public. Then each member can calculate R via (5) or
(6). Therefore, the value K = (R − ∥C∥^2) mod p can be assigned as the group key, which
can be computed by all members of the group. An illegitimate user cannot calculate
this value without knowledge of a legitimate private point, and therefore cannot derive
the group key.
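The reason every member obtains the same key from only its own private point and the public center follows from expanding (6): R − ∥C∥^2 ≡ ∥X∥^2 − 2⟨X, C⟩ (mod p), which depends only on the member's point X and the public C. A toy check, with arbitrary illustrative values:

```python
# Sketch: K = (R - ||C||^2) mod p is computable from a private point X and
# the public center C alone, since R - ||C||^2 = ||X||^2 - 2<X, C> (mod p).
p = 97
C = (10, 20, 30)   # public central point (toy values)
X = (11, 22, 33)   # a member's private point on the hyper-sphere

R = sum((x - c) ** 2 for x, c in zip(X, C)) % p
normC = sum(c * c for c in C) % p
K_direct = (R - normC) % p
K_member = (sum(x * x for x in X)
            - 2 * sum(x * c for x, c in zip(X, C))) % p
print(K_direct == K_member)  # True: both equal ||X||^2 - 2<X,C> mod p
```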
Our group key management approach includes the phases of initialization, adding
members, removing members, massively adding and removing members, and periodic update.
**Initialization. The GC lets the first user U1 join the group at the initialization phase,**
including the following steps.
Step 1) The GC selects two different 2-dimensional private points S0 = (s00, s01) ∈
GF(p)^2 and S1 = (s10, s11) ∈ GF(p)^2 at random, and keeps them secret.

Step 2) After authenticating U1, the GC chooses a 2-dimensional private point
A1 = (a10, a11) at random for the user U1, where a10 ≠ 0, a11 ≠ 0 and a10 ≠ a11. The
GC stores the point A1 securely and transmits it to the user U1 via a secure channel.
A1 is the private information of U1, and should be kept secret by both the member
U1 and the GC.
Step 3) The GC selects a random number u ∈ GF(p) and computes:

b00 = f_{s00}(u), b01 = f_{s01}(u),
b10 = f_{s10}(u), b11 = f_{s11}(u),
b20 = f_{a10}(u), b21 = f_{a11}(u).

Then the GC constructs new points B0, B1, and B2:

B0 = (b00, b01), B1 = (b10, b11), B2 = (b20, b21).

If

2(b00 − b10) · 2(b11 − b21) − 2(b10 − b20) · 2(b01 − b11) ≠ 0 mod p, (7)

go to Step 4; otherwise, the GC repeats Step 3.
Notice that the condition in (7) guarantees that the points B0, B1, and B2 uniquely determine a circle in 2-dimensional space.
Step 4) The GC establishes a hyper-sphere, herein a circle, in 2-dimensional space
using the above points B0, B1, and B2. Suppose the central point of the hyper-sphere
is C = (c0, c1) ∈ GF(p)^2. By applying points B0, B1, and B2 to (5) or (6), the GC can
construct the following system of equations:

(b00 − c0)^2 + (b01 − c1)^2 ≡ R mod p,
(b10 − c0)^2 + (b11 − c1)^2 ≡ R mod p, (8)
(b20 − c0)^2 + (b21 − c1)^2 ≡ R mod p.

By subtracting the first equation from the second one, and subtracting the second
equation from the third one, we can get a system of linear equations with two unknowns
c0 and c1:

2(b00 − b10)c0 + 2(b01 − b11)c1 ≡ b00^2 + b01^2 − b10^2 − b11^2 mod p,
2(b10 − b20)c0 + 2(b11 − b21)c1 ≡ b10^2 + b11^2 − b20^2 − b21^2 mod p. (9)

The condition in (7) guarantees that (9) has one and only one solution (c0, c1). Then
the central point C = (c0, c1) of the hyper-sphere is determined.
Step 5) The GC delivers C and u to the member U1 via an open channel.

Step 6) The member U1 can calculate the group key by using its private point A1 =
(a10, a11) along with the public information C = (c0, c1) and u:

K = (R − ∥C∥^2) mod p
  = (b20^2 + b21^2 − 2 b20 c0 − 2 b21 c1) mod p (10)
  = ((f_{a10}(u))^2 + (f_{a11}(u))^2 − 2 f_{a10}(u) c0 − 2 f_{a11}(u) c1) mod p,

where C is the central point of the hyper-sphere, and ∥C∥^2 = c0^2 + c1^2.
Notice that in order to keep our scheme clear and simple, the dimension of the constructed hyper-sphere is designed to equal the number of group members. Therefore,
a 1-sphere, i.e., a circle, is constructed if the condition in (7) is satisfied, since only the first
member U1 is enrolled in the group at this phase.
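Steps 3-6 of the initialization can be sketched end-to-end over a small field. The quadratic "PRF" stand-in, the prime, and all point values below are toy choices for illustration only; a real deployment would use a large prime and an actual PRF.

```python
# Runnable sketch of the Initialization phase (Steps 3-6) over a small GF(p).
p = 10007

def f(key, u):          # stand-in for the pseudo-random family f_key(u)
    return (key * key + key * u + 3 * u + 1) % p  # NOT a real PRF

def inv(x):             # modular inverse in GF(p), p prime
    return pow(x, p - 2, p)

S0, S1 = (17, 29), (31, 47)      # GC's private points
A1 = (59, 73)                    # U1's private point (both coords nonzero, distinct)
u = 4321                         # public random value

b = [(f(P[0], u), f(P[1], u)) for P in (S0, S1, A1)]   # B0, B1, B2
(b00, b01), (b10, b11), (b20, b21) = b

# Condition (7): the 2x2 system (9) must be non-singular.
det = (2*(b00-b10) * 2*(b11-b21) - 2*(b10-b20) * 2*(b01-b11)) % p
assert det != 0

# Solve (9) for the center C = (c0, c1) with Cramer's rule mod p.
z0 = (b00**2 + b01**2 - b10**2 - b11**2) % p
z1 = (b10**2 + b11**2 - b20**2 - b21**2) % p
c0 = (z0 * 2*(b11-b21) - 2*(b01-b11) * z1) * inv(det) % p
c1 = (2*(b00-b10) * z1 - 2*(b10-b20) * z0) * inv(det) % p

# GC publishes (C, u); U1 derives the key via (10) from its private A1.
K_member = (f(A1[0], u)**2 + f(A1[1], u)**2
            - 2*f(A1[0], u)*c0 - 2*f(A1[1], u)*c1) % p
# Cross-check: R - ||C||^2 computed from B0 instead of B2.
R = ((b00 - c0)**2 + (b01 - c1)**2) % p
K_check = (R - (c0*c0 + c1*c1)) % p
print(K_member == K_check)  # True: both points give the same key
```

The cross-check works because (9) forces B0, B1, B2 to be equidistant from C mod p.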
**Adding Members.** Suppose that there are n − m members in the group before the
enrollment of new members, where n > 0 and n > m ≥ 0, and that m new members
want to join the group. After the new members are admitted, there will be n
members in the group, denoted by Ui1, Ui2, · · ·, Uin. The steps are as
follows.
follows.
Step 1) After a new user Ux is authenticated, the GC selects a unique 2-dimensional
private point Ax = (ax0, ax1) ∈ GF(p)^2 for each new member Ux, where ax0 ≠ 0,
ax1 ≠ 0, ax0 ≠ ax1, and x = (n − m) + 1, (n − m) + 2, · · ·, n.
The points Ax should satisfy Ai ≠ Aj if i ≠ j, where 1 ≤ i, j ≤ n.
Step 2) The GC sends the point Ax to the user Ux via a secure channel.
The point Ax is the private information of Ux, and should be kept secret by both the
member Ux and the GC.
Step 3) The GC selects a random number u ∈ GF(p), and computes

b00 = f_{s00}(u), b01 = f_{s01}(u),
b10 = f_{s10}(u), b11 = f_{s11}(u).

For j = 2, 3, · · ·, n + 1, the GC computes

bj0 = f_{a_{i_{j−1},0}}(u), bj1 = f_{a_{i_{j−1},1}}(u).

Then the GC constructs new points B0, B1, · · ·, Bn+1:

B0 = (b00, b01), B1 = (b10, b11), B2 = (b20, b21), · · ·, Bn+1 = (bn+1,0, bn+1,1).

If the condition

(2(b00 − b10) · 2(b11 − b21) − 2(b10 − b20) · 2(b01 − b11)) × Π_{t=3}^{n+1} (−2 bt1) ≠ 0 mod p (11)

is satisfied, go to Step 4; otherwise, the GC repeats Step 3.
Step 4) The GC expands each Bj to an (n + 1)-dimensional point Vj.
Then the GC constructs an n-dimensional hyper-sphere based on the set of points
V0, V1, · · ·, Vn+1. Suppose that the central point of the hyper-sphere is C = (c0, c1, · · ·, cn) ∈ GF(p)^{n+1}.

Step 4.1) The GC expands each Bj to an (n + 1)-dimensional point Vj.
For j = 0, 1, and 2, the point Bj is padded with (n − 1) zeros to become Vj, i.e.,

V0 = (b00, b01, 0, · · ·, 0),
V1 = (b10, b11, 0, · · ·, 0),
V2 = (b20, b21, 0, · · ·, 0).

For j = 3, 4, · · ·, n + 1, let

V3 = (b30, 0, b31, 0, · · ·, 0),
· · ·
Vj = (bj0, 0, · · ·, 0, bj1, 0, · · ·, 0),
· · ·
Vn+1 = (bn+1,0, 0, · · ·, 0, bn+1,1),

where the number of zeros between bj0 and bj1 is (j − 2), and there are (n + 1 − j) zeros
appended after bj1.
Step 4.2) The GC constructs the system of equations for the hyper-sphere by
applying the set of points V0, V1, · · ·, Vn+1 to (5) or (6):

(b00 − c0)^2 + (b01 − c1)^2 + (0 − c2)^2 + · · · + (0 − cn)^2 = R,
(b10 − c0)^2 + (b11 − c1)^2 + (0 − c2)^2 + · · · + (0 − cn)^2 = R,
(b20 − c0)^2 + (b21 − c1)^2 + (0 − c2)^2 + · · · + (0 − cn)^2 = R, (12)
(b30 − c0)^2 + (0 − c1)^2 + (b31 − c2)^2 + · · · + (0 − cn)^2 = R,
· · ·
(bn+1,0 − c0)^2 + (0 − c1)^2 + (0 − c2)^2 + · · · + (bn+1,1 − cn)^2 = R.

By subtracting the j-th equation from the (j + 1)-th equation in (12), where j =
1, 2, · · ·, n, we get a system of linear equations with (n + 1) unknowns c0, c1, ..., cn:

| 2(b00 − b10)     2(b01 − b11)  0      ...  0        | | c0 |   | b00^2 + b01^2 − b10^2 − b11^2             |
| 2(b10 − b20)     2(b11 − b21)  0      ...  0        | | c1 |   | b10^2 + b11^2 − b20^2 − b21^2             |
| 2(b20 − b30)     2b21          −2b31  ...  0        | | c2 | = | b20^2 + b21^2 − b30^2 − b31^2             |  (13)
| ...              ...           ...    ...  ...      | | .. |   | ...                                       |
| 2(bn0 − bn+1,0)  0             ...    ...  −2bn+1,1 | | cn |   | bn0^2 + bn1^2 − bn+1,0^2 − bn+1,1^2       |
Let Y denote the coefficient matrix

Y = | 2(b00 − b10)     2(b01 − b11)  0      ...  0        |
    | 2(b10 − b20)     2(b11 − b21)  0      ...  0        |
    | 2(b20 − b30)     2b21          −2b31  ...  0        |
    | ...              ...           ...    ...  ...      |
    | 2(bn0 − bn+1,0)  0             ...    ...  −2bn+1,1 |,

and let the vectors

C^T = (c0, c1, c2, · · ·, cn)^T,
Z = (b00^2 + b01^2 − b10^2 − b11^2, b10^2 + b11^2 − b20^2 − b21^2, b20^2 + b21^2 − b30^2 − b31^2, · · ·, bn0^2 + bn1^2 − bn+1,0^2 − bn+1,1^2)^T,

where C^T denotes the transpose of C.
Then (13) can be expressed in matrix-vector form as

Y × C^T = Z. (14)

The condition in (11) guarantees that (13), or equivalently (14), has one and only one solution
C^T = Y^{−1} × Z. Then the central point C = (c0, c1, · · ·, cn) of the hyper-sphere is determined.

Step 5) The GC multicasts C and u to all the group members Ui1, Ui2, · · ·, Uin via
an open channel.
Step 6) Each group member Ux can calculate the group key by using its private
point Ax = (ax0, ax1) along with the public information C = (c0, c1, · · ·, cn) and u:

K = (R − ∥C∥^2) mod p
  = (bx+1,0^2 + bx+1,1^2 − 2 bx+1,0 c0 − 2 bx+1,1 c_{ix}) mod p (15)
  = ((f_{a_{ix,0}}(u))^2 + (f_{a_{ix,1}}(u))^2 − 2 f_{a_{ix,0}}(u) c0 − 2 f_{a_{ix,1}}(u) c_{ix}) mod p,

where C is the central point of the hyper-sphere, and ∥C∥^2 = c0^2 + c1^2 + · · · + cn^2.
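For general n, Steps 3-6 of the re-keying amount to building Y and Z as in (13) and solving (14) by linear algebra mod p. The sketch below uses a toy quadratic stand-in for the PRF and a generic Gauss-Jordan solver; all names, point values, and the prime are illustrative assumptions, not the paper's parameters.

```python
# Sketch of the Adding Members re-keying (Steps 3-6) for n members over GF(p).
p = 10007

def f(key, u):
    return (key*key + key*u + 5*u + 7) % p   # toy stand-in, NOT a real PRF

def solve_mod(Y, Z):
    """Solve Y * c = Z over GF(p) by Gauss-Jordan elimination with pivoting."""
    m = len(Y)
    M = [row[:] + [z] for row, z in zip(Y, Z)]
    for col in range(m):
        piv = next(r for r in range(col, m) if M[r][col] % p != 0)
        M[col], M[piv] = M[piv], M[col]
        inv = pow(M[col][col], p - 2, p)
        M[col] = [x * inv % p for x in M[col]]
        for r in range(m):
            if r != col and M[r][col] % p != 0:
                factor = M[r][col]
                M[r] = [(x - factor * y) % p for x, y in zip(M[r], M[col])]
    return [M[r][m] for r in range(m)]

n = 4                                          # group size after the update
S = [(17, 29), (31, 47)]                       # GC's private points S0, S1
A = [(59, 73), (61, 83), (67, 89), (71, 97)]   # members' private points
u = 2468                                       # fresh public random value

b = [(f(P[0], u), f(P[1], u)) for P in S + A]  # B0 ... B_{n+1}

# Step 4.1: expand each B_j to an (n+1)-dimensional point V_j.
V = []
for j, (bj0, bj1) in enumerate(b):
    v = [0] * (n + 1)
    v[0] = bj0
    v[1 if j <= 2 else j - 1] = bj1
    V.append(v)

# Step 4.2: build (13), row j: 2*(V_j - V_{j+1}) . c = |V_j|^2 - |V_{j+1}|^2.
Y = [[2 * (V[j][k] - V[j+1][k]) % p for k in range(n + 1)] for j in range(n + 1)]
Z = [(sum(x*x for x in V[j]) - sum(x*x for x in V[j+1])) % p for j in range(n + 1)]
C = solve_mod(Y, Z)                            # center, as in (14)

# Step 6: every member recovers the same key from its point and the public C.
normC = sum(c * c for c in C) % p
keys = set()
for j in range(2, n + 2):                      # member x corresponds to B_{x+1}
    R = sum((V[j][k] - C[k]) ** 2 for k in range(n + 1)) % p
    keys.add((R - normC) % p)
print(len(keys) == 1)  # True: all members derive the same group key
```

The solver succeeds exactly when condition (11) holds, since the determinant of Y factors into the 2x2 block determinant times Π(−2 bt1).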
**Removing Members.** Suppose that there are n + w members in the group before
membership exclusion, where n > 0 and w ≥ 0. Now w members want to leave
the group, so there will be n members in the group after they leave. Suppose the
set of remaining members is {Ui1, Ui2, · · ·, Uin} after removing members.
The steps are as follows.
Step 1) The GC deletes the leaving members’ private 2-dimensional points.
Step 2) The GC’s private 2-dimensional points S0 and S1, and the remaining members’ private points Ai1, Ai2, · · ·, Ain should be stored securely by the GC.
The following steps are the same as Steps 3 - 6 in the “Adding Members” phase,
i.e., the GC re-selects a new random number u ∈ _GF(p) and constructs new points_
**_B0, B1, · · ·, Bn+1 in Step 3. Then the GC constructs a new hyper-sphere in Step 4, and_**
publishes the new random number u and the new central point C of the hyper-sphere in
Step 5. Finally, each group member can calculate the new group key by using its private
point in Step 6.
**Massively Adding and Removing Members.** This subsection handles the situation
where many members join and other members leave the group at the same time, in
batch mode. Suppose that there are n + w − m members in the group before the membership
change, where n > 0, w ≥ 0, and n + w > m ≥ 0. Now w members want to
leave, and m new members want to join the group simultaneously. After the membership
update, there will be n members in the group. The steps are as follows.
Step 1) The GC deletes the leaving members' private 2-dimensional points, and lets
the new users join at the same time. After a new user Ux is authenticated, the value of x is
assigned as the identifier of the new joining member, where x = (n − m) + 1, (n − m) +
2, · · ·, n.
The GC selects a unique 2-dimensional point Ax = (ax0, ax1) ∈ GF(p)^2 as Ux's private
information, where ax0 ≠ 0, ax1 ≠ 0, and ax0 ≠ ax1. The private points Ax should satisfy
Ai ≠ Aj if i ≠ j, where 1 ≤ i, j ≤ n.
Step 2) The GC sends the private point Ax to the user Ux via a secure channel.
The point Ax is the private information of Ux, and should be kept secret by both the
member Ux and the GC.
The other steps, which remove the w leaving members from the group, are the same as Steps 3 - 6 described in the "Adding Members" phase. By executing Steps 3 to 6, the GC re-selects a new random number u ∈ GF(p), constructs a new hyper-sphere, and publishes
the new random number u and the new central point C of the hyper-sphere. Then each
group member can calculate the new group key.
**Periodically Update.** If the group key has not been updated within a period of time, the GC
starts the periodic update procedure to renew the group key and safeguard the secrecy
of group communication. The GC re-selects a new random number u ∈ GF(p),
constructs a new hyper-sphere, and publishes the new random number u and the new
central point of the hyper-sphere. These steps are the same as Steps 3 - 6 in the "Adding
Members" phase.
## 5 Security and Performance Analysis
**5.1** **Security Analysis**
We will show (in Theorem 1) that our group key management protocol is secure, assuming that all communications are authenticated. The proof is similar to the proofs of
security for the unauthenticated protocols of Dutta and Barua [14] and Atallah et
al. [3]. In our security model, the adversary A can access five oracles, i.e., Execute,
**Reveal, Corrupt, Send and Test. The Send query may be ignored by A because all**
communications are assumed to be authenticated. Some notations, such as F^κ, Func, p,
and GF(p), are defined in Section 3.
**Theorem 1.** _Our protocol is secure under the PRF assumption, and the adversary's advantage Adv(t) satisfies the following:_

Adv(t) ≤ (2n + 4) × (qE × Adv^{PRF}_{GF(p)}(t^{(1)}) + 1/p^{p−1}),

_where qE is the number of Execute queries that the adversary can make while running in time t, and t^{(1)} = t + O(n^3)M + O(n)H, in which n is the number of members in the group, M is the average time required to perform a multiplication over GF(p), and H is the average time to compute f._
_Proof._ Let A be an adversary against the group key management protocol. Using A, we
can construct an algorithm D that distinguishes between random and pseudo-random
functions. Assume that A makes qE Execute queries and chooses the r-th session as the
Test session, and assume that D correctly guesses the Test session r. Then, when A
makes an Execute query (except for the r-th session), D follows the real protocol. When
A queries the Reveal or Corrupt oracle (other than for the r-th session), D sends A all the
corresponding information as in a real interaction.
-----
14 S. Tang, L. Xu, N. Liu, J. Ding, Z. Yang
Real := {
  u ←R GF(p);
  b00 = f_{s00}(u), b01 = f_{s01}(u),
  b10 = f_{s10}(u), b11 = f_{s11}(u);
  for j = 2, 3, · · ·, n + 1: bj0 = f_{a_{j−1,0}}(u), bj1 = f_{a_{j−1,1}}(u);
  V0 = (b00, b01, 0, · · ·, 0),
  V1 = (b10, b11, 0, · · ·, 0),
  V2 = (b20, b21, 0, · · ·, 0);
  for j = 3, 4, · · ·, n + 1: Vj = (bj0, 0, · · ·, 0, bj1, 0, · · ·, 0);

  Y = | 2(b00 − b10)     2(b01 − b11)  0      ...  0        |
      | 2(b10 − b20)     2(b11 − b21)  0      ...  0        |
      | 2(b20 − b30)     2b21          −2b31  ...  0        |
      | ...              ...           ...    ...  ...      |
      | 2(bn0 − bn+1,0)  0             ...    ...  −2bn+1,1 |,

  Z = (b00^2 + b01^2 − b10^2 − b11^2, b10^2 + b11^2 − b20^2 − b21^2, b20^2 + b21^2 − b30^2 − b31^2, · · ·, bn0^2 + bn1^2 − bn+1,0^2 − bn+1,1^2)^T;

  C^T = (c0, c1, · · ·, cn)^T = Y^{−1} × Z;
  R = ∥Vi − C∥^2;
  T = {u; C}; K = R − ∥C∥^2.
}

Fake^{(0,0)} := {
  u ←R GF(p);
  b00 = g00(u), b01 = f_{s01}(u),
  b10 = f_{s10}(u), b11 = f_{s11}(u);
  for j = 2, 3, · · ·, n + 1: bj0 = f_{a_{j−1,0}}(u), bj1 = f_{a_{j−1,1}}(u);
  the rest are the same as the ones in Real.
}
In the rest of the proof, we will assume that D correctly guessed the Test session.
Since such an a priori guess is correct with probability 1/qE, this affects the exact security of
the reduction proof by a factor of qE.

As a stepping stone, we first define the distributions Real and Fake^{(0,0)} above for the transcript/session key pair (T, K), where Real is the real execution scenario
of our protocol while f_{s00} is replaced with a truly random function g00 in Fake^{(0,0)}.
Similarly, we can define the distributions Fake^{(0,1)}, . . ., Fake^{(n+1,0)}, Fake^{(n+1,1)}. For
i = 0, 1, . . ., n + 1, Fake^{(i,1)} is the same as Fake^{(i,0)} except that bi1 = gi1(u) where gi1
is a truly random function, and Fake^{(i+1,0)} is the same as Fake^{(i,1)} except that bi+1,0 =
gi+1,0(u) where gi+1,0 is a truly random function. Finally, the distribution Fake^{(n+1,1)} (which we
denote as Fake hereafter) is described as follows, where for i = 0, 1, . . ., n + 1, gi0 and
gi1 are all truly random functions:

Fake := {
  u ←R GF(p);
  for i = 0, 1, · · ·, n + 1: bi0 = gi0(u), bi1 = gi1(u);
  the rest are the same as the ones in Real.
}
Due to the PRF assumption, we can obtain from Lemma 1 below that

|Prob[(T, K) ← Real : A(T, K) = 1] − Prob[(T, K) ← Fake^{(0,0)} : A(T, K) = 1]| ≤ Adv^{PRF}_{GF(p)}(t^{(1)}) + 1/p^{p−1}, (1)

where t is the running time of A, t^{(1)} = t + O(n^3)M + O(n)H, n is the number of members
in the group, M is the average time required to perform a multiplication over GF(p), and
H is the average time to compute f.

Similarly, for i = 0, 1, . . ., n + 1, we can further conclude that

|Prob[(T, K) ← Fake^{(i,0)} : A(T, K) = 1] − Prob[(T, K) ← Fake^{(i,1)} : A(T, K) = 1]| ≤ Adv^{PRF}_{GF(p)}(t^{(1)}) + 1/p^{p−1}, (2)

and

|Prob[(T, K) ← Fake^{(i,1)} : A(T, K) = 1] − Prob[(T, K) ← Fake^{(i+1,0)} : A(T, K) = 1]| ≤ Adv^{PRF}_{GF(p)}(t^{(1)}) + 1/p^{p−1}. (3)

From equations (2) and (3), we have

|Prob[(T, K) ← Real : A(T, K) = 1] − Prob[(T, K) ← Fake : A(T, K) = 1]| ≤ (2n + 4)(Adv^{PRF}_{GF(p)}(t^{(1)}) + 1/p^{p−1}). (4)

Furthermore, from Lemma 2, the success probability of A in distinguishing between
the keys from the distribution Fake and keys randomly chosen from GF(p) is just 1/2.
That is,

Prob[(T, K0) ← Fake; K1 ←R GF(p); b ←R {0, 1} : A(T, Kb) = b] = 1/2. (5)

Hence by Lemmas 1 and 2, we can conclude that

|Prob[(T, K0) ← Real, K1 ←R GF(p), b ←R {0, 1} : A(T, Kb) = b] − 1/2|
 = |Prob[(T, K0) ← Real, K1 ←R GF(p), b ←R {0, 1} : A(T, Kb) = b]
    − Prob[(T, K0) ← Fake, K1 ←R GF(p), b ←R {0, 1} : A(T, Kb) = b]|
 ≤ (2n + 4) × (Adv^{PRF}_{GF(p)}(t^{(1)}) + 1/p^{p−1}). (6)
We assumed above that D correctly guessed the Test session, which affects the
exact security of the reduction proof by a factor of qE. Finally we conclude that the
adversary's advantage is negligible under the pseudo-random function assumption:

Adv(t) ≤ (2n + 4) × (qE × Adv^{PRF}_{GF(p)}(t^{(1)}) + 1/p^{p−1}). (7)
**Lemma 1.** _For any algorithm A running in time t, we have the following, where t^{(1)} = t + O(n^3)M + O(n)H:_

|Prob[(T, K) ← Real : A(T, K) = 1] − Prob[(T, K) ← Fake^{(0,0)} : A(T, K) = 1]| ≤ Adv^{PRF}_{GF(p)}(t^{(1)}) + 1/p^{p−1}.
_Proof._ We construct a distinguisher D using A which takes as input g1 ∈ Func. D
first generates a pair (T, K) according to the distribution Dist′ described below, which
depends on g1, then runs A on (T, K) and outputs whatever A outputs.

Dist′ := {
  u ←R GF(p);
  b00 = g1(u), b01 = f_{s01}(u),
  b10 = f_{s10}(u), b11 = f_{s11}(u);
  for j = 2, 3, · · ·, n + 1: bj0 = f_{a_{j−1,0}}(u), bj1 = f_{a_{j−1,1}}(u);
  the rest are the same as the ones in Real.
}
Define the set E1 = {g | g ∈ Func \ F^κ}. The distribution Real and the distribution {(T, K) :
g1 ←R F^κ; (T, K) ← Dist′(g1)} are statistically equivalent. On the other hand, the distribution
Fake^{(0,0)} and the distribution {(T, K) : g1 ←R E1; (T, K) ← Dist′(g1)} are statistically
equivalent but for a factor of p/p^p = 1/p^{p−1}, since g1 is not in F^κ. Hence, by the
definition of PRF,

|Prob[(T, K) ← **Real** : A(T, K) = 1] − Prob[(T, K) ← **Fake**^{(0,0)} : A(T, K) = 1]|
≤ |Prob[g1 ←R F^κ : D(g1) = 1] − Prob[g1 ←R E1 : D(g1) = 1]| + |F^κ|/|Func|
≤ Adv^{PRF}_{GF(p)}(t^{(1)}) + 1/p^{p−1}.
The time required to perform an n × n matrix inversion and an n × n matrix multiplication by an
n-dimensional vector in GF(p) is O(n^3)M and O(n^2)M respectively. There
are 2n + 3 computations of f in Dist′. Hence t^{(1)} is basically equal to t + O(n^3)M + O(n)H.
⊓⊔
**Lemma 2.** For any computationally unbounded adversary A, we have

Prob[(T, K0) ← **Fake**; K1 ←R GF(p); b ←R {0, 1} : A(T, Kb) = b] = 1/2.
Provably Secure Group Key Management Approach Based on Hyper-sphere 17
_Proof._ We have T = {u; C}, K = R − ∥C∥^2 and C^T = (c0, c1, · · ·, c_{n+1})^T = Y^{−1} × Z.
Because the Test query may be made on a Fresh session only once, no player in this session
is corrupted, so a_{2,0}, a_{2,1}, · · ·, a_{n+1,0}, a_{n+1,1} are kept secret and unknown to A. Moreover, C =
Y^{−1} × Z is independent of u, since all elements b_{j0}, b_{j1} in both Y and Z are chosen
at random. Thus K is also independent of u and is a random value in GF(p). A gets no
information on either K0 or K1; therefore the probability of guessing the bit b correctly
is exactly 1/2.
⊓⊔
Above we presented the static security of our scheme. In the phases of **Adding Members** and **Removing Members**, when new users join the group or members leave the
group, the GC establishes the new group key as in the **Initialization** phase by re-selecting
a new random value u ∈ GF(p). Therefore neither the new users who join the group in **Adding
Members** nor the members who leave the group in **Removing Members** can obtain
any information about the previous group key.
**5.2** **Performance Analysis**
Suppose that the length of the prime p in binary representation is L bits. Table 1 shows the
performance requirements of both the GC and each member.
**Storage.** Each member needs to store only its own 2-dimensional private point, while the GC
stores all members’ 2-dimensional private points. Hence the storage for each member is 2 × L bits, and the storage for the GC is 2 × (n + 2) × L bits.
**Computation.** The major computation for each member is to calculate the group key
via (13) or (14), which involves two computations of the f function, four modular multiplications and five modular additions over the finite field. The computation for the GC
is to solve a system of linear equations. Since the coefficient matrix in (13) can easily be converted to a lower triangular matrix, the computation complexity of solving
(c0, c1, · · ·, cn) from (13) is O(n).
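To make the triangular-solve step concrete, the sketch below (our own illustration; the helper name `solve_lower_triangular_mod` and the small 3 × 3 system are hypothetical, not taken from the paper) performs forward substitution over GF(p). For the paper's coefficient matrix, which has only a constant number of non-zero entries per row, the inner sum is O(1) and the whole solve is the claimed O(n); for a general lower triangular matrix the loop below is O(n^2).

```python
# Sketch: solving a lower triangular linear system Y c = z over GF(p)
# by forward substitution. Diagonal entries must be invertible mod p.

def solve_lower_triangular_mod(Y, z, p):
    """Solve Y c = z (mod p) for lower-triangular Y with non-zero diagonal."""
    n = len(z)
    c = [0] * n
    for i in range(n):
        # accumulate the already-solved part of row i
        s = sum(Y[i][j] * c[j] for j in range(i)) % p
        # modular inverse of the diagonal entry via Fermat's little theorem
        c[i] = (z[i] - s) * pow(Y[i][i], p - 2, p) % p
    return c

p = 101  # a toy prime; the paper uses a 128-bit prime
Y = [[2, 0, 0],
     [3, 5, 0],
     [1, 4, 7]]
z = [10, 20, 30]
c = solve_lower_triangular_mod(Y, z, p)
# verify Y c = z (mod p)
assert all(sum(Y[i][j] * c[j] for j in range(3)) % p == z[i] for i in range(3))
```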
**Number and Size of Re-keying Messages.** The total number of re-keying messages is
two: the central point of the hyper-sphere and the random number u. The total size
of the re-keying messages is (n + 2) × L bits.
**Batch Processing.** If many users join and leave the group simultaneously,
only a single batch re-keying operation is needed.
**5.3** **Experiments**
While f can be any computationally efficient function assumable to be pseudo-random,
we instantiate it by a cryptographic hash function to ease the comparison. Our experimental test bed for the GC is a 2.33GHz Intel Xeon quad-core dual-processor PC
server with 4GB memory and running Linux, and the platform for the member is a
HP XW4600 Workstation with 2.33GHz Intel dual-processor and 2GB memory and
**Table 1. Performance Requirements by the GC and each Member**

| | Storage (bits) | Computation | Re-keying Messages: Number | Re-keying Messages: Size (bits) |
|---|---|---|---|---|
| GC | 2 × (n + 2) × L | O(n) | 2 | (n + 2) × L |
| Member | 2 × L | 2H + 4M + 5A | 0 | 0 |

Notation for Table 1:
n : number of members in the group
L : the length of the prime p in bits
H : average time required by an f function
M : average time required by a modular multiplication
A : average time required by a modular addition
running Linux. The software simulating the behavior of the GC and the members is written
in C/C++. We choose L = 128 bits, which denotes the length of the prime p in binary form, and compute the average cost of the
GC and each member. The time was averaged over 20 distinct runs of the experiments,
and the difference among identical experiments is less than 2%. A summary of the
experimental results is presented in Table 2 and Table 3.
In Table 2, the first column represents the size of the group; the second, the storage
used; and the third and fourth, the computation time for adding and removing members,
respectively. For a large group of n = 100000 members, the GC takes 85.2 ms = 0.0852 seconds to process member adding or
removing. We can observe from these experimental data that the GC can manage a large
group efficiently.
Table 3 shows that the storage and the computation cost do not increase at all for
each group member even when the group size increases, which is very desirable.
Our experimental results confirm that our scheme is very scalable and very efficient
for large groups.
**Table 2. Storage and Computation Required by the GC**
| Size of group | Storage (bytes) | Adding Members (ms) | Removing Members (ms) |
|---|---|---|---|
| 10 | 384 | 0.06 | 0.06 |
| 100 | 3,264 | 0.4 | 0.4 |
| 1,000 | 32,064 | 0.7 | 0.7 |
| 10,000 | 320,064 | 7.7 | 7.7 |
| 100,000 | 3,200,064 | 85.2 | 85.2 |
**Table 3. Storage and Computation Required by each Member**
| Size of group | Storage (bytes) | Adding Members (ms) | Removing Members (ms) |
|---|---|---|---|
| 10 | 32 | 0.00564 | 0.00564 |
| 100 | 32 | 0.00564 | 0.00564 |
| 1,000 | 32 | 0.00564 | 0.00564 |
| 10,000 | 32 | 0.00564 | 0.00564 |
| 100,000 | 32 | 0.00564 | 0.00564 |
## 6 Comparison with Related Work
Our scheme falls into the category of centralized systems; therefore we compare
it with some typical centralized key management schemes. A summary of the
comparison results is presented in Table 4 and Table 5.
GKMP (Group Key Management Protocol) is a simple extension from unicast to
multicast, but it is neither scalable nor efficient. Table 4 clearly shows that our scheme
outperforms GKMP with respect to both secrecy and performance.
The LKH (Logical Key Hierarchy) scheme can be considered representative of tree-based schemes, including OFT [37], OFCT [10], Hierarchical α-ary Tree
with Clustering [11], Efficient Large-Group Key [29], etc. Hence we compare our
scheme with LKH only; the results are similar for the other tree-based schemes.
The advantages of our scheme over LKH are as follows: 1) our scheme is scalable under massive membership change; 2) the number of re-keying messages is O(1) in
our scheme, but O(log2 n) in LKH; 3) the computation complexity of each member is
O(1) in our scheme, but O(log2 n) in LKH.

The major differences between our scheme and LKH are: 1) the underlying principles
differ: our scheme uses a hyper-sphere, whereas LKH uses a tree structure;
2) the computation complexity of the GC in our scheme is O(n) simple operations, whereas in LKH it is O(2 log2 n) encryptions. On average, simple operations can be computed faster than encryptions.
**Table 4. Feature and Computation Complexity Comparison among Schemes**
| | GKMP | LKH | Secure Lock | This Paper |
|---|---|---|---|---|
| Major principle adopted | Encryption | Tree structure | Chinese Remainder Theorem | Hyper-sphere |
| Efficient for very large group | No | Yes | No | Yes |
| Scalable to massively adding and removing members | No | No | Yes | Yes |
| Number of re-keying messages | n | O(log2 n) | O(1) | O(1) |
| Member computation complexity | O(1) decryptions | O(log2 n) decryptions | O(1) decryptions and modular operations | O(1) simple operations |
| GC computation complexity | O(n) encryptions | O(log2 n) encryptions | O(n) encryptions and modular operations | O(n) simple operations |
**Table 5. GC’s Computation Comparison between Secure Lock and our Scheme**

| | Secure Lock | This Paper |
|---|---|---|
| Computation complexity | E · O(n) + M · O(2n) + A · O(n) + R · O(2n) | H · O(2n) + M · O(2n) + A · O(4n) + R · O(n) |
| Difference between schemes | E · O(n) + R · O(n) | 2H · O(n) + 3A · O(n) |

Notation for Table 5:
n : number of members in the group
E : average time required by a symmetric encryption
M : average time required by a modular multiplication over GF(p)
H : average time required by a hash function
R : average time required by a modular multiplicative inverse over GF(p)
A : average time required by a modular addition over GF(p)
Notice that a tree structure could also be adopted by our scheme to divide the members into different sub-trees and further speed it up. We will explore this
direction in our future research.
The SL (Secure Lock) scheme is based on the Chinese Remainder Theorem (CRT), which is a
time-consuming operation. Hence, the SL scheme is applicable only to small groups
[12].
There are some similarities between SL and our scheme: 1) both schemes can
be regarded as flat structures, that is, no hierarchical structures such as trees are
adopted; 2) the number of re-keying messages in both schemes is O(1); 3) the computation
complexity of each member in both schemes is also O(1); 4) the computation
complexity of the GC in both schemes is O(n).
Table 5 compares the computation complexity of the GC in SL and in our scheme.
The figure for SL is based on an optimized CRT [12]. The first row presents the computation complexity, and the second row shows the difference in computation complexity
between the two schemes after omitting the identical terms from the first row. The complexity differences are E · O(n) + R · O(n) in SL, and 2H · O(n) + 3A · O(n) in our scheme, where
n is the number of members in the group, and E, R, H and A are the average times required
by an encryption, a modular multiplicative inverse, an f function, and a modular addition, respectively. Usually we can choose a pseudo-random function f that can be computed
very fast, so E > 2H. A modular inverse over a finite field is a time-consuming
computation, thus R ≫ 3A, and then

E · O(n) + R · O(n) ≫ 2H · O(n) + 3A · O(n),

or

E · O(n) + M · O(2n) + A · O(n) + R · O(2n)
≫ H · O(2n) + M · O(2n) + A · O(4n) + R · O(n).

Hence, the computation of our scheme is much faster than that of SL.
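The inequality above rests on the assumption that a hash-based f is cheap while a modular inverse over GF(p) is expensive. A small sketch can sanity-check this (our own illustration, not the paper's test code; the helpers `f` and `mod_inverse` and the choice of a 127-bit Mersenne prime standing in for the paper's 128-bit p are assumptions):

```python
import hashlib
import secrets
import time

p = (1 << 127) - 1  # a 127-bit Mersenne prime, standing in for the paper's 128-bit p

def f(key, u):
    """A hash-based instantiation of the keyed pseudo-random function f_key(u)."""
    digest = hashlib.sha256(key.to_bytes(16, "big") + u.to_bytes(16, "big")).digest()
    return int.from_bytes(digest, "big") % p

def mod_inverse(x):
    """Modular multiplicative inverse over GF(p) — the costly operation R."""
    return pow(x, p - 2, p)

x = secrets.randbelow(p - 1) + 1
assert x * mod_inverse(x) % p == 1  # R computes a correct inverse
assert 0 <= f(123, 456) < p         # f maps into GF(p)

t0 = time.perf_counter()
for _ in range(1000):
    f(123, 456)
hash_time = time.perf_counter() - t0

t0 = time.perf_counter()
for _ in range(1000):
    mod_inverse(x)
inv_time = time.perf_counter() - t0
# On typical hardware inv_time exceeds hash_time, i.e. R >> H; timings vary
# by platform, so no fixed ratio is asserted here.
```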
Therefore, the advantages of our scheme over SL include: 1) our
scheme is efficient for very large groups; 2) the performance of each member and of the
GC in our scheme is much better than in SL.

Our scheme belongs to the category of centralized systems. Thus some common
disadvantages of centralized schemes, such as the group controller being a single point of
failure, also apply to our scheme. The failure of the group controller could
compromise the system completely. This is one main disadvantage compared with distributed or decentralized schemes. However, techniques for preventing single points of
failure can be adopted to mitigate this disadvantage. In addition, our scheme can
serve as a fundamental component for constructing decentralized schemes in combination with
other techniques.
## 7 Conclusions
In this paper, we study the problem of group key management from a very different
angle than before. A new secure group key management scheme based on a hyper-sphere
is constructed, where each member in the group corresponds to a private point on the
hyper-sphere and the group controller (GC) computes the central point of the hyper-sphere, whose “distance” from each member’s private point is, intuitively, identical. The
central point is published, and each member can compute a common group key using a
function that takes the member’s own private point and the central point of the hyper-sphere
as input. Our new approach is formally proved secure under the pseudo-random
function (PRF) assumption.
The advantages of our scheme include: (1) the re-keying messages can be broadcast or multicast via an open channel, and a secure channel is required only once,
when new users register to join the group for the first time; (2) it is very efficient
and scalable for large groups and can deal with massive membership change efficiently with only two re-keying messages, i.e., the central point of the hyper-sphere
and a random number; (3) both the storage and the computation overhead of each member are significantly reduced and are independent of the group size; and (4) the GC’s
storage and computation cost is also acceptable: its storage and computation overhead
increase linearly with the group size.
The performance estimates are further confirmed by our experiments. For example, for a group of size n = 100000, the storage cost for each member’s private
information is 32 bytes, the time for each member to compute the group key is 0.00564
ms (5.64 × 10^{−6} seconds), and the time for the GC to process a membership change is
85.2 ms (8.52 × 10^{−2} seconds) on a 2.33 GHz Intel Xeon quad-core dual-processor PC
server.
## Acknowledgement
This work is financially supported by the National Natural Science Foundation of China
under Grants No. U1135004 and 61170080, the Guangdong Province Universities and
Colleges Pearl River Scholar Funded Scheme (2011), the Guangzhou Metropolitan Science and Technology Planning Project under Grant No. 2011J4300028, the High-level
Talents Project of Guangdong Institutions of Higher Education (2012), the Fundamental Research Funds for the Central Universities under Grants No. 2009ZZ0035 and
2011ZG0015, and the Guangdong Provincial Natural Science Foundation under Grant
No. 9351064101000003.
## References
1. Y. Amir, C. Nita-Rotaru, S. Stanton, and G. Tsudik, “Secure spread: an integrated architecture for secure group communication,” IEEE Transactions on Dependable and Secure
Computing, vol. 2, no. 3, pp. 248-261, 2005.
2. Y. Amir, Y. Kim, C. Nita-Rotaru, J. L. Schultz, J. Stanton, and G. Tsudik, “Secure group
communication using robust contributory key agreement,” IEEE Transactions on Parallel
and Distributed Systems, vol. 15, no.5, pp. 468-480, 2004.
3. M. Atallah, M. Blanton, N. Fazio, and K. Frikken, “Dynamic and efficient key management
for access hierarchies,” ACM Trans. Inf. Syst. Secur., vol. 12, no. 3, pp. 1-43, 2009.
4. A. Ballardie, “Scalable multicast key distribution,” RFC 1949, 1996.
5. C. Becker, U. Wille, “Communication complexity of group key distribution,” In Proceedings
of the 5th ACM Conference on Computer and Communications Security, San Francisco,
Calif., ACM, New York, pp. 1-6, Nov. 1998.
6. M. Bellare, R. Canetti, and H. Krawczyk, “Pseudorandom functions revisited; the cascade
construction and its concrete security,” In Proceedings of the 37th Annual Symposium on
Foundations of Computer Science. IEEE Computer Society, pp. 514 - 523, 1996.
7. C. Boyd, “On key agreement and conference key agreement,” In Proceedings of the Information Security and Privacy: Australasian Conference, Lecture Notes in Computer Science,
vol. 1270. Springer-Verlag, New York, pp. 294-302, 1997.
8. B. Briscoe, “MARKS: Multicast key management using arbitrarily revealed key sequences,”
In Proceedings of the 1st International Workshop on Networked Group Communication,
Pisa, Italy, pp. 301-320, Nov. 1999.
9. M. Burmester, and Y. Desmedt, “A secure and efficient conference key distribution system
(extended abstract),” In Advances in Cryptology-EUROCRYPT 94, A. D. Santis, Ed., Lecture Notes in Computer Science, vol. 950. Springer- Verlag, New York, pp. 275-286, 1994.
10. R. Canetti, J. Garay, G. Itkis, D. Micciancio, M. Naor, and B. Pinkas, “Multicast security: A
taxonomy and some efficient constructions,” In Proceedings of the IEEE INFOCOM, New
Yok, pp. 708-716, Mar. 1999.
11. R. Canetti, T. Malkin, and K. Nissim, “Efficient communication-storage tradeoffs for multicast encryption,” In Advances in Cryptology-EUROCRYPT 1999, Lecture Notes in Computer Science, vol. 1599, Springer- Verlag, New York, pp. 459-474, 1999.
12. G.-H. Chiou, and W.-T. Chen, “Secure broadcasting using the secure lock,” IEEE Transactions on Software Engineering, vol. 15, no. 8, pp. 929-934, Aug. 1989.
13. B. Decleene, L. Dondeti, S. Griffin, T. Hardjono, D. Kiwior, J. Kurose, D. Towsley, S. Vasudevan, and C. Zhand, “Secure group communications for wireless networks,” In Proceedings of the MILCOM, pp. 113 - 117, June 2001.
14. R. Dutta, and R. Barua, “Provably secure constant round contributory group key agreement in
dynamic setting,” IEEE Transactions On Information Theory, vol. 54, no. 5, pp. 2007-2025,
May 2008.
15. L. Dondeti, S. Mukherjee, and A. Samal, “Scalable secure one-to-many group communication using dual encryption,” Comput. Commun., vol. 23, no. 17, pp. 1681-1701, Nov. 1999.
16. L. Dondeti, S. Mukherjee, and A. Samal, “A distributed group key management scheme
for secure many-to-many communication,” Tech. Rep. PINTL-TR-207-99, Department of
Computer Science, University of Maryland, 1999.
17. S. Goldwasser and M. Bellare, “Lecture notes on cryptography,” Summer course on cryptography and computer security at MIT, 2008.
18. Q. Gu, P. Liu, W. C. Lee, and C. H. Chu, “KTR: An efficient key management scheme for
secure data access control in wireless broadcast services,” IEEE Transactions on Dependable
and Secure Computing, vol. 6, no. 3, pp. 188-201, 2009.
19. H. Harney, C. Muckenhirn, and T. Rivers, “Group key management protocol (GKMP) specification,” RFC 2093, July 1997. [Online]. Available: http://tools.ietf.org/rfc/rfc2093.txt
20. H. Harney, C. Muckenhirn, and T. Rivers, “Group key management protocol (GKMP) architecture,” RFC 2094, July 1997. [Online]. Available: http://tools.ietf.org/rfc/rfc2094.txt.
21. Y. Kim, A. Perrig, and G. Tsudik, “Simple and fault-tolerant key agreement for dynamic
collaborative groups,” In Proceedings of the 7th ACM conference on Computer and Communications security, pp. 235-244, 2000.
22. Y. Kim, M. Narasimha, and G. Tsudik, “Secure group key management for storage area
networks,” IEEE Communications Magazine, vol. 41, no. 8, pp. 92-99, 2003.
23. Y. Kim, A. Perrig, and G. Tsudik, “Group key agreement efficient in communication,” IEEE
Transactions on Computers, vol. 53, no. 5, pp. 905-921, 2004.
24. P. P. C. Lee, J. C. S Lui, and D. K. Y. Yau, “Distributed collaborative key agreement and authentication protocols for dynamic peer Groups,” IEEE/ACM Transactions on Networking,
vol. 14, no. 2, pp. 263-276, 2006.
25. Y. Mao, Y. Sun, M. Wu and K. J. R. Liu, “JET: Dynamic join-exit-tree amortization and
scheduling for contributory key management,” IEEE/ACM Transactions on Networking, vol.
14, no. 5, pp. 1128-1140, Oct. 2006.
26. S. Mittra, “Iolus: A framework for scalable secure multicasting,” In Proceedings of the ACM
SIGCOMM, ACM, New York, vol. 27, no. 4, pp. 277-288, Sept. 1997.
27. R. Molva, and A. Pannetrat, “Scalable multicast security in dynamic groups,” In Proceedings
of the 6th ACM Conference on Computer and Communications Security, Singapore, ACM,
New York, 101-112, Nov. 1999.
28. A. Perrig, “Efficient collaborative key management protocols for secure autonomous group
communication,” In Proceedings of the International Workshop on Cryptographic Techniques and E-Commerce (CrypTEC’99), Hong Kong, China, M. Blum and C H Lee, Eds.
City University of Hong Kong Press, Hong Kong, China, pp. 192-202, July 1999.
29. A. Perrig, D. Song, and J. Tygar, “ELK, a new protocol for efficient large-group key distribution,” In Proceedings of the IEEE Symposium on Security and Privacy, Oakland, pp.
247-262, May 2001.
30. S. Rafaeli, and D. Hutchison, “A survey of key management for secure group communication,” ACM Comput.Surv., vol. 35, no. 3, pp. 309-329, Sept. 2003.
31. S. Rafaeli, and D. Hutchison, “Hydra: A decentralised group key management,” In Proceedings of the 11th IEEE International WETICE: Enterprise Security Workshop, A. Jacobs, Ed.,
Pittsburgh, Pa., IEEE Computer Society Press, Los Alamitos, Calif, pp. 62-67, June 2002.
32. K. Ren, W. Lou, B. Zhu and S. Jajodia, “Secure and efficient multicast in wireless sensor
networks allowing Ad hoc group formation,” IEEE Transactions on Vehicular Technology,
vol. 58, no. 4, pp. 2018-2029, 2009.
33. B. Rong, H. Chen, Y. Qian, K. Lu, R. Hu, and S. Guizani, “A pyramidal security model
for large-scale group-oriented computing in mobile Ad hoc networks: The key management
study,” IEEE Transactions on Vehicular Technology, vol. 58, no. 1, pp. 398-408, January
2009.
34. O. Rodeh, K. Birman, and D. Dolev, “Optimized group rekey for group communication
systems,” In Network and Distributed System Security, San Diego, Calif., Feb. 2000.
35. J. Salido, L. Lazos, and R. Poovendran, “Energy and bandwidth-efficient key distribution in
wireless Ad hoc networks: a cross-layer approach,” IEEE/ACM Transactions on Networking,
vol. 15, no. 6, pp. 1527-1540, 2007.
36. S. Setia, S. Koussih, and S. Jajodia, “Kronos: A scalable group re-keying approach for secure
multicast,” In Proceedings of the IEEE Symposium on Security and Privacy, Oakland Calif.,
IEEE Computer Society Press, Los Alamitos, Calif, pp. 215-228, May 2000.
37. A.T. Sherman and D.A McGrew, “Key establishment in large dynamic groups using one-way
function trees,” IEEE Transactions on Software Engineering, vol. 29, no. 5, pp. 444-458, May
2003.
38. M. Steiner, G. Tsudik, and M. Waidner, “Diffie-Hellman key distribution extended to group
communication,” In SIGSAC Proceedings of the 3rd ACM Conference on Computer and
Communications Security, New Delhi, India, ACM, New York, pp. 31-37, Mar. 1996.
39. Y. Sun, W. Trappe, and K. J. R. Liu, “A scalable multicast key management scheme for
heterogeneous wireless networks,” IEEE/ACM Transactions on Networking, vol. 12, no. 4,
pp. 653-666, Aug. 2004.
40. M. Waldvogel, G. Caronni, D. Sun, N. Weiler, and B. Plattner, “The VersaKey framework:
Versatile group key management,” IEEE J. Sel. Areas Commun., vol. 17, no. 9, pp. 1614-1631, Sept. 1999.
41. C. K. Wong, M. Gouda, and S. S. Lam, “Secure group communications using key graphs,”
IEEE/ACM Transactions on Networking, vol. 8, no. 1, pp. 16-30, Feb. 2000.
42. Q. H. Wu, Y. Mu, W. Susilo, B. Qin, and J. Domingo-Ferrer, “Asymmetric group key agreement,” In Advances in Cryptology-EUROCRYPT 2009, A. Joux, Ed., Lecture Notes in
Computer Science, vol. 5479. Springer-Verlag, Heidelberg, pp. 153-170, 2009.
43. Q. H. Wu, B. Qin, L. Zhang, J. Domingo-Ferrer, and O. Farràs, “Bridging broadcast encryption
and group key agreement,” In Advances in Cryptology-ASIACRYPT 2011, D. Lee and
X. Y. Wang, Eds., Lecture Notes in Computer Science, vol. 7073. Springer-Verlag, Heidelberg,
pp. 143-160, 2011.
44. X. Yi, C. K. Siew, C. H. Tan, and Y. Ye, “A secure conference scheme for mobile communications,” IEEE Transactions on Wireless Communications, vol. 2, no. 6, pp. 1168-1177,
2003.
45. X. Yi, C. K. Siew, and C. H. Tan, “A secure and efficient conference scheme for mobile
communications,” IEEE Transactions on Vehicular Technology, vol. 52, no. 4, pp. 784-793,
2003.
46. W. Yu, Y. Sun, and K. J. R. Liu, “Optimizing rekeying cost for contributory group key agreement schemes,” IEEE Transactions On Dependable and Secure Computing, vol. 4, no. 3, pp.
228-242, 2007.
## A Toy Example
A toy example is given to illustrate the procedure of massive membership change in our
group key management approach based upon the hyper-sphere.

Before system setup, the group controller GC should choose a large prime number p and a family of pseudo-random functions F^κ = { f_K : GF(p) × GF(p) → GF(p)}
as described in Section 3.3, and publish them. Hereafter, all computations are conducted over the finite field GF(p).

At the initialization stage, the GC selects two different 2-dimensional private points
S0 = (s00, s01) ∈ GF(p)^2 and S1 = (s10, s11) ∈ GF(p)^2 at random, and keeps them
secret.
Now suppose the set of members in the current group is {U1, U2, U3, U4}. Members U2 and U4 want to leave the group, and new users U5 and U6 want to join.
The following steps support massive adding and removing of members.
Step 1) The GC deletes the private points A2 = (a20, a21) and A4 = (a40, a41) of the
leaving members.
After the new users U5 and U6 are authenticated, the GC assigns ID=5 and ID=6 to
the new members U5 and U6 respectively.
The GC selects unique 2-dimensional points A5 = (a50, a51) and A6 = (a60, a61) as
the private information of U5 and U6 respectively.
Now the set of private points of the group members is {A1, A3, A5, A6}, and the
subscripts of the private points are i1 = 1, i2 = 3, i3 = 5, and i4 = 6. The points
should also satisfy A_y ≠ A_z if y ≠ z, where y, z ∈ {1, 3, 5, 6}.

Step 2) The GC sends the point Ax to the member Ux via a secure channel, where
x ∈ {5, 6}.
Step 3) The GC chooses a random number u, and computes:

b00 = f_{s00}(u), b01 = f_{s01}(u),
b10 = f_{s10}(u), b11 = f_{s11}(u),
b20 = f_{a_{i1,0}}(u) = f_{a10}(u), b21 = f_{a_{i1,1}}(u) = f_{a11}(u),
b30 = f_{a_{i2,0}}(u) = f_{a30}(u), b31 = f_{a_{i2,1}}(u) = f_{a31}(u),
b40 = f_{a_{i3,0}}(u) = f_{a50}(u), b41 = f_{a_{i3,1}}(u) = f_{a51}(u),
b50 = f_{a_{i4,0}}(u) = f_{a60}(u), b51 = f_{a_{i4,1}}(u) = f_{a61}(u).

The GC then constructs points B0, B1, · · ·, B5:

B0 = (b00, b01), B1 = (b10, b11),
B2 = (b20, b21), B3 = (b30, b31),
B4 = (b40, b41), B5 = (b50, b51).

If the condition

(2(b00 − b10) · 2(b11 − b21) − 2(b10 − b20) · 2(b01 − b11)) × ∏_{t=3}^{5} (−2b_{t1}) ≢ 0 mod p (16)

is satisfied, go to Step 4; otherwise, the GC repeats Step 3.
Step 4) The GC expands B0, B1, B2, B3, B4, and B5 into 5-dimensional points:

V0 = (b00, b01, 0, 0, 0),
V1 = (b10, b11, 0, 0, 0),
V2 = (b20, b21, 0, 0, 0),
V3 = (b30, 0, b31, 0, 0),
V4 = (b40, 0, 0, b41, 0),
V5 = (b50, 0, 0, 0, b51).

The GC now establishes a 4-dimensional hyper-sphere based on the
set of points V0, V1, · · ·, V5. Suppose the central point of the hyper-sphere is C =
(c0, c1, · · ·, c4). The GC then constructs the set of equations describing the hyper-sphere:
(b00 − c0)^2 + (b01 − c1)^2 + (0 − c2)^2 + (0 − c3)^2 + (0 − c4)^2 ≡ R mod p,
(b10 − c0)^2 + (b11 − c1)^2 + (0 − c2)^2 + (0 − c3)^2 + (0 − c4)^2 ≡ R mod p,
(b20 − c0)^2 + (b21 − c1)^2 + (0 − c2)^2 + (0 − c3)^2 + (0 − c4)^2 ≡ R mod p,
(b30 − c0)^2 + (0 − c1)^2 + (b31 − c2)^2 + (0 − c3)^2 + (0 − c4)^2 ≡ R mod p,
(b40 − c0)^2 + (0 − c1)^2 + (0 − c2)^2 + (b41 − c3)^2 + (0 − c4)^2 ≡ R mod p,
(b50 − c0)^2 + (0 − c1)^2 + (0 − c2)^2 + (0 − c3)^2 + (b51 − c4)^2 ≡ R mod p.
(17)
Let matrix

Y =
| 2(b00 − b10)   2(b01 − b11)   0       0       0     |
| 2(b10 − b20)   2(b11 − b21)   0       0       0     |
| 2(b20 − b30)   2b21           −2b31   0       0     |
| 2(b30 − b40)   0              2b31    −2b41   0     |
| 2(b40 − b50)   0              0       2b41    −2b51 |

and vectors

C^T = (c0, c1, c2, c3, c4)^T,

Z = (b00^2 + b01^2 − b10^2 − b11^2,
     b10^2 + b11^2 − b20^2 − b21^2,
     b20^2 + b21^2 − b30^2 − b31^2,
     b30^2 + b31^2 − b40^2 − b41^2,
     b40^2 + b41^2 − b50^2 − b51^2)^T.
By subtracting the j-th equation from the (j + 1)-th equation in (17), for j =
1, 2, · · ·, 5, we obtain a system of linear equations in the 5 unknowns c0, c1, · · ·, c4,
which can be expressed in matrix-vector form as

Y × C^T = Z. (18)

The condition (16) in Step 3 guarantees that (18) has one and only one solution
C^T = Y^{−1} × Z. The central point C = (c0, c1, · · ·, c4) of the hyper-sphere is
thus determined.
Step 5) The GC multicasts C and u to all group members U1, U3, U5, and U6 via
an open channel.
Step 6) Each group member can calculate the new group key.

The member U1 calculates the group key by using its private point A1 = (a10, a11)
along with the public information C = (c0, c1, · · ·, c4) and u, and the third equation in
(17):

K = R − ∥C∥^2 = b20^2 + b21^2 − 2b20c0 − 2b21c1
  = (f_{a10}(u))^2 + (f_{a11}(u))^2 − 2f_{a10}(u)c0 − 2f_{a11}(u)c1.

Similarly, the member U3 calculates the group key by using its private point
A3 = (a30, a31) along with the public information C = (c0, c1, · · ·, c4) and u, and the fourth
equation in (17):

K = R − ∥C∥^2 = b30^2 + b31^2 − 2b30c0 − 2b31c2
  = (f_{a30}(u))^2 + (f_{a31}(u))^2 − 2f_{a30}(u)c0 − 2f_{a31}(u)c2.
For users U5 and U6, the computation procedure is similar to that of members
U1 and U3. Finally, all the group members re-construct the same hyper-sphere and
calculate the same group key K = R − ∥C∥^2.
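The toy example can be exercised end-to-end. The sketch below is our own illustration, not the paper's code: f is instantiated with SHA-256, p is a small 31-bit prime rather than the paper's 128-bit prime, and the 5 × 5 system is solved by generic Gaussian elimination mod p instead of the optimized triangular form. The GC builds the hyper-sphere center for members {U1, U3, U5, U6}, and the check at the end confirms that every member independently derives the same group key K = R − ∥C∥^2.

```python
import hashlib
import random

p = 2_147_483_647  # a toy prime standing in for the paper's large p

def f(key, u):
    """Hash-based stand-in for the pseudo-random function f_key(u)."""
    d = hashlib.sha256(f"{key}|{u}".encode()).digest()
    return int.from_bytes(d, "big") % p

def solve_mod(Y, z):
    """Solve Y x = z over GF(p) by Gaussian elimination; None if singular."""
    n = len(z)
    M = [row[:] + [zi] for row, zi in zip(Y, z)]
    for col in range(n):
        piv = next((r for r in range(col, n) if M[r][col] % p != 0), None)
        if piv is None:
            return None
        M[col], M[piv] = M[piv], M[col]
        inv = pow(M[col][col], p - 2, p)
        M[col] = [v * inv % p for v in M[col]]
        for r in range(n):
            if r != col and M[r][col] % p:
                factor = M[r][col]
                M[r] = [(a - factor * b) % p for a, b in zip(M[r], M[col])]
    return [M[r][n] for r in range(n)]

random.seed(0)
S = [(random.randrange(p), random.randrange(p)) for _ in range(2)]        # GC's secret points
members = {uid: (random.randrange(p), random.randrange(p)) for uid in (1, 3, 5, 6)}

while True:  # Step 3: retry u until condition (16) holds (Y invertible)
    u = random.randrange(p)
    pts = list(S) + [members[uid] for uid in sorted(members)]
    B = [(f(a0, u), f(a1, u)) for a0, a1 in pts]
    # Step 4: expand B_j to 5-dimensional points V_j
    V = []
    for j, (b0, b1) in enumerate(B):
        v = [b0, 0, 0, 0, 0]
        v[1 if j <= 2 else j - 1] = b1
        V.append(v)
    # subtract consecutive sphere equations of (17) -> linear system Y C = Z
    Y = [[2 * (V[j][k] - V[j + 1][k]) % p for k in range(5)] for j in range(5)]
    Z = [(sum(x * x for x in V[j]) - sum(x * x for x in V[j + 1])) % p for j in range(5)]
    C = solve_mod(Y, Z)
    if C is not None:
        break

def member_key(uid):
    """Member-side computation: K = ||V_j||^2 - 2<V_j, C> mod p."""
    a0, a1 = members[uid]
    j = sorted(members).index(uid) + 2        # member uid owns point V_j
    b0, b1 = f(a0, u), f(a1, u)
    coord = 1 if j <= 2 else j - 1
    return (b0 * b0 + b1 * b1 - 2 * b0 * C[0] - 2 * b1 * C[coord]) % p

keys = {uid: member_key(uid) for uid in members}
assert len(set(keys.values())) == 1           # all members derive the same key
```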
|
{
"disclaimer": "Notice: The following paper fields have been elided by the publisher: {'abstract', 'references'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1109/TPDS.2013.2297917?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1109/TPDS.2013.2297917, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": null,
"status": "GREEN",
"url": "https://eprint.iacr.org/2011/216.pdf"
}
| 2,014
|
[
"JournalArticle"
] | true
| 2014-01-16T00:00:00
|
[] | 26,350
|
en
|
[
{
"category": "Business",
"source": "external"
},
{
"category": "Law",
"source": "s2-fos-model"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/ffdfa2b6f5e2e70c3b47fe824210515f93b1e9c3
|
[
"Business"
] | 0.888719
|
The K-Y Protocol: The First Protocol for the Regulation of Crypto Currencies (E.G.-Bitcoin)
|
ffdfa2b6f5e2e70c3b47fe824210515f93b1e9c3
|
[
{
"authorId": "73152562",
"name": "Kartik Hegadekatti"
},
{
"authorId": "1741238247",
"name": "Yatish S G"
}
] |
{
"alternate_issns": null,
"alternate_names": null,
"alternate_urls": null,
"id": null,
"issn": null,
"name": null,
"type": null,
"url": null
}
| null |
###### Munich Personal RePEc Archive
#### The K-Y Protocol: The First Protocol for the Regulation of Crypto Currencies (E.g.-Bitcoin)
###### Hegadekatti, Kartik and S G, Yatish
23 February 2016
Online at https://mpra.ub.uni-muenchen.de/82067/ MPRA Paper No. 82067, posted 23 Oct 2017 08:28 UTC
###### THE K-Y PROTOCOL: THE FIRST PROTOCOL FOR THE
REGULATION OF CRYPTO CURRENCIES (E.g.-Bitcoin)
Dr.Kartik H & Dr.Yatish S.G
Authors’ Email: dr.kartik.h@gmail.com ; dryatish.blr@gmail.com
Abstract- Crypto currencies like Bitcoin are gaining prominence as a medium of exchange.
They have several benefits, such as very low transaction costs, fungibility etc. But Crypto
currencies are also identified with their use in crime, illegal activities and speculation. Part
of the reason for their prominence as well as their notoriety is that they have no
Sovereign Backing whatsoever and that they are decentralized. To make Crypto
currencies acceptable to the people and also curb their misuse, the authors have proposed
a protocol containing a set of standards and procedures. By using this procedure, any nation
can create its own Sovereign Backed crypto currency, called a NationCoin. A commission will
be established which will hold a certain quantum of money loaned by the Government. This
loaned money will provide the Sovereign backing to the Crypto Currency. A Controlled Block
Chain Protocol is used. The Genesis Block of several NationCoins is then provided to the
banks in the country for use in interbank settlements. These interbank transactions
will lead to the mining (generation) of additional NationCoins by the commission, which will
hold them without releasing them to the public. Once there are enough NationCoins
to equal the loaned amount unit-for-unit, they shall be released to the public for
use.
###### INTRODUCTION
A Crypto currency is a store of value and a medium of exchange. It uses cryptographic
techniques to protect transactions and to manage the generation of money.
Crypto currencies are decentralized, meaning they are outside the control of central banks.
They also use a decentralized ledger system, which makes it possible to verify and confirm
transactions across the entire network, and to track each unit of crypto currency from its
creation to its most recent transaction. Being outside the control of central banks, and
explicitly NOT RECOGNISED by them, crypto currencies fall outside the ambit of regulation.
The absence of regulation no doubt makes the system free
from the supervision of Governments and appears to give more freedom and rights to the
people using Crypto currencies. The privacy, anonymity and personal space appear to be
"enhanced" in the absence of regulation. But since they are unregulated, Crypto currencies
have been misused for money laundering and criminal activities by various anti-social
elements. The personal freedom and rights that were “enhanced” due to the absence of
regulation will be usurped by powerful antisocial elements that do not respect any law or
have any ethical considerations.
To protect people’s rights and also optimize economic activity, it is necessary to regulate
Crypto currencies. But we need to do it in a way that it eliminates all (or a majority) of the
shortcomings of Crypto currencies. At the same time we need to enhance its benefits.
People tend to think of decentralization as an inherent, inseparable character of Crypto
currencies. They are led to believe that the Crypto currency concept will fail if regulation
and sovereign backing is introduced. But debates around Crypto currencies tend to discount
the fact that it is possible and feasible to regulate Crypto currencies.
Bitcoin is the first and most famous Crypto currency. It has recently gained widespread
usage. But it is not regulated or backed by any sovereign authority and is thus susceptible to
misuse.
Advantages of a Regulated and Sovereign Backed (RSB) Crypto currency
1) Minimal or no transaction cost to the public- The people can use the RSB Crypto
currency without any trepidation as it will be guaranteed by the Government. Nil
transaction cost is the basic feature of a crypto currency. Lack of transaction cost will
allow seamless and unhindered exchange of money leading to increased economic
activity. It will also leave more money in the hands of the public.
2) Money Accountability- It will be possible for Governments to account for all the
money in the system. This way, the counterfeit and parallel economy can be curbed,
Money laundering can be detected and flow of money to possible illegal activities can
be monitored.
3) No need for Bank Accounts- Banks need to be paid to maintain bank accounts. Bank
accounts also need to have a minimum balance so as to be viable. But Crypto
currencies do not need accounts. Having only a digital wallet is enough. RSB crypto
currencies can be maintained in digital wallets at no cost to the owners.
4) Easy transfer of funds-Governments can transfer funds or social security benefits to
citizens’ wallets in an instant, free of cost. Citizens’ digital wallets can be linked to
their social security number or other Government mandated IDs.
5) Easy Taxation- A person’s money holding can be inferred by the Government when
necessary. The Government can automatically deduct taxes without the need for
people to file tax returns. It can wind up its tax collecting infrastructure and invest
those resources somewhere else.
6) Certification- Assets can be certified, recorded and maintained using the same
protocols that a RSB crypto currency will use. The protocol for RSB crypto currency
will be called as Controlled Block Chain.
(A Controlled Block chain is different from a Block Chain per se [1]. A Block Chain is a
permissionless Distributed Database, whereas a Controlled Block Chain will be
Permission Based, the permission being provided by the Sovereign Authority.)
7) Price stability-Presently, crypto currencies like Bitcoin are highly volatile. This is
because a lack of backing has led to rampant speculation. Consequently, Bitcoin has
undergone many Boom-and-bust cycles. RSB crypto currencies will provide stability in
value so as to be a reliable medium of exchange.
8) Manageable Deflationary and Inflationary indices- Because the RSB crypto currency will
be backed by the Government, it will have a manageable inflation and deflation index.
9) Environmental advantage- Printing currency notes and maintaining them in
circulation is costly both for the economy as well as the environment. In the long run,
RSB crypto currencies will replace paper currency. It will thus save a lot of trees from
being cut and used for paper.
10) Easy convertibility- People from one country will be able to invest more freely in
other countries. This will lead to the emergence of a loan market which is highly
competitive. This will make cheap and safe credit available to the neediest. This is
presently not possible due to existing monetary, fiscal and distance barriers.
**THE K-Y PROTOCOL**
The K-Y Protocol aims to make Regulated and Sovereign Backed (RSB) Crypto currencies a
practical reality. The authors have designed this protocol carrying their initials in
abbreviated form as the name of the protocol. The Protocol consists of a set of rules and
procedures.
(*)NationCoin- abbreviated as NC, it is a generalized designation for any RSB Crypto
currency (RSBC). For example USA's RSB Crypto currency can be called USCoin, India’s as
IndiaCoin, China’s as ChinaCoin etc. Each nation can have only one NationCoin i.e. RSB
Crypto currency.
Since various countries have currencies of their own with differing Exchange rates, we have
defined a NationCoin Unit as
One NationCoin Unit=One NationCoin X Exchange rate of the currency with the US Dollar.
For example, in the case of the Rupee:
One IndiaCoin Unit = 1 IndiaCoin X 68 = 68 IndiaCoins.
One ChinaCoin Unit=6.5 ChinaCoins
One EuroCoin Unit=0.88 EuroCoins
One JapanCoin Unit=112 JapanCoins
One BritishCoin Unit=0.69 BritishCoins
(1 USD=0.88 Euros=0.69 Pounds=112 Yen=6.5 Chinese Yuan=68 Indian Rupees; as on
12/02/2016)
**Note that NationCoin Unit is different from NationCoin. A NationCoin Unit is generic in**
**nature. One NationCoin Unit is always equal to one US Dollar. Whereas One NationCoin is**
**equal to one unit of native currency in that particular nation.**
The KY Protocol is as follows
1. The Government of the country wanting to introduce the NationCoin will first set up the
**“DIGITAL ASSETS RESERVE” (DAR)** by passing a law or amending existing laws as need
be. It will also set up the **“DIGITAL ASSETS REGULATION & EXCHANGE COMMISSION”
(DAREC)**, which initially will have no role to play. Later on, when NationCoin becomes
established as a primary mode of transaction, DAREC will play the role of an impartial
regulator. The NATIONAL LEDGER DATABASE (NLD) is also created. It will be closely
linked with the DAR. It will keep track of the transactions in its Block Chain Ledger whose
copies will be distributed throughout the Network Nodes.
2. By separate funding from the Government, DAR will set up “Grid Computing Clusters”
with several nodes throughout the country. These networks will not be open to the
public. These are the nodes that will mine the NationCoins. This will be done by “DATA –
**DIGITAL ASSETS TRACKING & ADMINISTRATION” which will be the technical wing and**
technical assistance arm of the DAR.
3. The Government will provide the DAR $10 million worth of loans. This will form The
Corpus- to be used to back NationCoins.
4. DATA will also help the banks in the country to set up NationCoin-compatible software.
DATA will create block chain protocols for NationCoin.
5. The RESERVE will be the entity which will Sovereign stamp the Crypto Currencies and
give it the RSB (Regulated and Sovereign Backed) certification. It will be an integral part
of the DAR.
[Figure: organizational structure of the DAR, comprising DATA, the RESERVE, the NLD and DAREC]
6. The networks so formed will be tested in trial runs involving NationCoin transactions,
interest payments and exchange procedures. This is the System Configuration Stage.
7. The Government will provide a soft loan of $10 million worth of assets in any form
(either in $ or National currency) to the DAR. This $10 million will be called “THE
CORPUS”.
8. When the corpus is in place it will be securely locked up physically in vaults and the
“GENESIS BLOCK” [2] (the First Block in the Block chain) of 50,000 NationCoin Units
(NCUs) will be generated.
9. The 50,000 NationCoin Units will be provided to the banks for their daily interbank
clearances. These 50,000 NationCoin Units will be pegged to $10 million in the Corpus
giving each NationCoin a value of $200. This Backing will be certified by the Governor of
DAR.
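The peg in step 9 is straightforward arithmetic; a minimal check using the figures from the text:

```python
# Arithmetic behind step 9 (all figures are from the text):
CORPUS_USD = 10_000_000   # the Corpus locked in the vaults
GENESIS_NCUS = 50_000     # NationCoin Units in the Genesis Block

value_per_ncu = CORPUS_USD / GENESIS_NCUS
print(value_per_ncu)  # 200.0 -> each NCU is initially backed by $200
```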
[Figure: the 50,000 NCUs of the Genesis Block pegged to the $10 Million Corpus]
10. Banks will be mandated to use these NationCoin Units in their Intra-day and Inter-day
settlements and clearances. For this purpose, Banks will be provided their own
NationCoin wallets maintained by DATA.
11. Each Bank is mandated to use at least 25 cents worth of NationCoin Units per $100 in
settlements and interbank transactions.
12. These transactions will be verified by DATA's Network nodes. Once verified, these will be
categorized into blocks of between 45 kB and 85 kB and “Mined”. The Mining will be done
by DATA's systems only and will not be open to public. Once mined, 190 NationCoin
Units will be generated every 10 minutes. Therefore the Block time for each block will be
10 minutes. Reward per block will be 190 NationCoin units.
13. The NationCoin Units so mined will go into HOLDING. HOLDING is a Digital vault of DAR
which is not connected to the public Network and will not to be released to the banks
either. The NationCoin Units in HOLDING are not yet sovereign backed.
14. The DAR will hold the NationCoin Units in HOLDING until it accumulates 9.95 Million
NationCoin Units. Along with the 50,000 NationCoin Units used by banks, there are now
a total of 10 million NationCoin Units altogether.
15. When there are 10 Million NationCoin Units in Toto, it reaches the next crucial stage
called the Equation.
16. Equation: When there are 9.95 Million NationCoin Units, DAR will start pegging its
Corpus to the 9.95 Million NationCoin Units that it holds. Once sovereign stamped and
certified, these 10 Million NationCoin Units will be exactly equal to $10 Million in the
Corpus. When one NationCoin Unit= One Dollar in the Corpus, then Equation is said to
have been achieved.
[*As mentioned earlier, since various countries have currencies of their own with
differing Exchange rates, we have defined a NationCoin Unit.
One NationCoin Unit=One NationCoin X Exchange rate of the currency with
the US Dollar.
For example, in the case of the Rupee:
One IndiaCoin Unit = 1 IndiaCoin X 68 = 68 IndiaCoins.
10 Million Dollars=10 Million IndiaCoin Units=680 Million IndiaCoins=680 Million Rupees
Therefore, when there are 680 Million IndiaCoins, each IndiaCoin will be equal to One
Indian Rupee and Equation is said to have been reached. In case of Yen, Equation will be
attained at 1,120 Million JapanCoins; for Euro it will be 8.8 Million EuroCoins; for
Chinese Yuan it will be 65 Million ChinaCoins; and so on.]
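The Equation thresholds in the bracketed example follow from the same conversion; a short check (rates as quoted in the text):

```python
# Equation thresholds: the number of NationCoins at which one NationCoin
# equals one unit of native currency. 10 million NCUs correspond to
# 10 million US Dollars; rates as quoted in the text.
EQUATION_NCUS = 10_000_000
RATES_PER_USD = {"IndiaCoin": 68, "JapanCoin": 112, "EuroCoin": 0.88, "ChinaCoin": 6.5}

for coin, rate in RATES_PER_USD.items():
    print(f"{coin}: {EQUATION_NCUS * rate / 1e6:g} million coins")
# IndiaCoin: 680, JapanCoin: 1120, EuroCoin: 8.8, ChinaCoin: 65
```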
17. Once Equation is reached, two things will happen in parallel.
1. First Parallel: - DAR will release this 10 Million NationCoin Units to the Banking
System in 4 phases over a period of 4 weeks. 2.5 Million NationCoin Units will be
released every week. This is necessary so as to release NationCoins Units in a
controlled manner without overloading or harming the Computing Systems.
2. Second Parallel: This is the most important step. A process called Scaling is
initiated. The number of NationCoin Units mined per Block is increased to 15 times
the mining rate per block before Equation. The block size will also
dramatically increase due to the large number of inter-bank transactions that will
be taking place (as more and more NCUs are pumped into the system). The block
size will increase to around 5 MB. The Block time will reduce from 10 minutes to
1 minute and number of NationCoin Units mined per block will be 2,850 NCUs.
Thus the total rate of NationCoin Units generation will increase by 150 times the
rate it was before Equation.
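The Scaling arithmetic can be checked directly; the 15x per-block and 150x overall factors follow from the figures in the text:

```python
# Mining rates before and after Scaling (step 17, Second Parallel);
# all figures are from the text.
pre_reward, pre_block_min = 190, 10      # 190 NCUs every 10 minutes
post_reward, post_block_min = 2_850, 1   # 2,850 NCUs every minute

pre_rate = pre_reward / pre_block_min    # 19 NCUs per minute
post_rate = post_reward / post_block_min # 2,850 NCUs per minute

print(post_reward / pre_reward)  # 15.0  -> per-block reward increase
print(post_rate / pre_rate)      # 150.0 -> overall generation-rate increase
```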
18. All the NCUs mined will flow into the HOLDING and is not backed in any manner. It will
not be released to the public. But Banks can buy them by paying requisite currency
which will go into the Corpus and an equivalent number of NCUs are released.
**Equation is important for several reasons.**
1. For the sake of public convenience, One Dollar has to be equal to One
NationCoin Unit. The public may get confused with any other value and this may
cause chaos and panic leading to adverse economic outcomes. By Equation, we
ensure that people still identify One Dollar with One NationCoin Unit.
[In case of Euro, 1 Euro=1 EuroCoin
Yen, 1 Yen=1 JapanCoin
Rupee, 1 Rupee=1 IndiaCoin
Pound, 1 Pound =1 BritishCoin and so on]
2. Say, for instance 1 NationCoin Unit is equal to 2 Dollars, then speculators may
see 1 NationCoin Unit as more valuable and may begin to hoard it, this will cause
many problems for the society both in long and short term.
3. In case One Dollar is equal to 2 NationCoin Units, people may see
NationCoins as less valuable and may not prefer to use them for transactions. Then the
whole idea of RSB Crypto currency will become impractical.
Freshly Mined NationCoin Units will not be backed by anything and as such will have no
value. They are put into Holding. HOLDING will always contain non-backed NationCoin Units.
These non-backed NationCoin Units will have a unique identity that sets them apart from
RSB NCUs. The non-backed NCUs, when backed, will be certified as Backed NCUs by the
DAR. These Backed NCUs, after Sovereign Stamping and Certification will be known as RSB
NCUs. As soon as they are backed, they will undergo a change in their identity which will
make them recognisable by the DAR and other Network nodes as RSB NCUs, fit for use in
transactions.
This change in identity and certification will happen electronically in the Reserve.
19. Equation will happen one year after the Genesis Block. Scaling will start immediately
after Equation.
20. From the end of first year to the end of second year around 1.5 Billion NCUs (NationCoin
**Units) will be generated which will be put in HOLDING.**
21. From the beginning of the third year the Government can start paying a small part
(around 1%) of the Government salaries through RSB NCUs. Say, the Government decides to
pay 1 Million NCUs as salary. It will provide $1 million to the DAR. DAR will then provide 1
Million RSB NCUs to the Government to pay salaries.
22. The National Coin Wallets (NCW) of employees will be created and maintained by DATA
free of cost. This NCW will be linked to the social security number or any ID system
depending on the country (In case of the US it will be linked to the Social Security Number.
In case of India it will be linked to PAN number).
23. Joe is a Government employee drawing $10,000 per month as salary. The Government
decides to pay 1% of salaries in NCUs i.e. $ 9,900 will be in Dollar form and $100 worth in
NCUs. Now Joe decides that he does not want NCUs. All that he has to do is access his bank
account via internet and give back NCUs to the DAR (There will be a facility provided for this
purpose). The DAR will credit $100 into Joe’s account in lieu of 100 NCUs.
24. Say Joe wants to transfer $1000 to Alice; he can do it in Dollars by paying around 25
cents as transaction cost. But if he transfers 1000 NCUs to Alice, he can do it freely without
any transaction cost. International transaction costs of money transfers in native currencies
will be even higher. But for RSB NCUs it will be minimal or zero.
25. The Bitcoin Protocol follows the practice of halving: every 4 years the number of bitcoins
mined per block will halve. This will go on till there are 21 Million Bitcoins in the system. But
for RSB NCUs, this is not the case. The primary objective for RSB NCUs is to be widely
utilized among the public. As such we need a large supply of RSB NCUs so as to replace a
proportion of paper currency in circulation. For this reason, the RSB NCUs will undergo a
process called Doubling.
26. Doubling: One year after Scaling has taken place, the process of Doubling will occur.
Block time will remain 1 minute. The number of NCUs mined per block will now be 5,700
NCUs (it was 2,850 NCUs after Scaling). The block size may (or may not, depending on
the number of transactions) double to 10 MB.
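The output figures around Doubling can likewise be checked. Block rewards are from the text; the 365-day year is an assumption made here:

```python
# Annual NCU output after Scaling and after Doubling (steps 20 and 26).
# Block rewards are from the text; a 365-day year is an assumption.
MINUTES_PER_YEAR = 365 * 24 * 60           # 525,600 one-minute blocks

after_scaling = 2_850 * MINUTES_PER_YEAR   # NCUs per year after Scaling
after_doubling = 5_700 * MINUTES_PER_YEAR  # NCUs per year after Doubling

print(after_scaling)   # 1497960000 -> "around 1.5 Billion NCUs" (step 20)
print(after_doubling)  # 2995920000
```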
[Figure: timeline of protocol stages: System Configuration, Scaling, Doubling]
27. All the NCUs mined will follow the process of flowing into the Holding, to be backed and
certified in the Reserve when funds flow into the CORPUS or as and when mandated by the
Government (on being provided equivalent backing in currency).
28. All this time, the NLD (National Ledger Database) whose copy is present in all the nodes
of the DAR network is promptly updated from time-to-time duly following the Controlled
Block Chain protocol. The NLD keeps track of all RSB NCUs through its NCU ledger.
29. The DAR shall aim to replace around 50% of the total currency in
circulation over a period of 10 years.
30. For the US Dollar, at present rates it will take about 8-10 years to replace half the
currency in circulation by USCoin.
31. Linkage: Linkage here means that the NationCoin is allowed to be freely traded in the
International Market. When around 50% of circulating currency is RSB NCUs, then Linkage
with international markets can be allowed. 50% replacement is necessary so as to have a
robust amount of NCUs which will not be affected by minor speculation. For the purpose of
Linkage, NLD copies will be uploaded into satellites, so that they will act as a network node.
For example take JapanCoin, if Joe sends a JapanCoin from Argentina to Alice who is in
South Africa, the transaction is recorded and beamed to a Network Node in space (Japanese
satellite). This will in turn update all other nodes in Japan, thus upgrading the Ledger.
[Figure: stages toward full adoption: Public Use, Replacement, Linkage]
32. Later on, every National Capital can host at least one network node of every other
nation as part of a diplomatic treaty.
33. Once Linkage occurs, the Government (through the DAR of the country) can decide if it
will allow a “Free Float” of its NCU or a “Managed Float”.
34. In case of “Free Float”, market forces will determine the value of NCUs whereas in case
of Managed float, DAR will allow the rates to float up to a particular range. Beyond that
range it will manage NCU rates as it presently manages its native paper currency.
35. After a certain level is reached, say 50% of total circulating currency, Doubling can be
stopped and NationCoins generated at a steady rate every year, accounting for inflation if
necessary. Eventually RSB NCUs will replace paper currencies to a large extent.
36. RSB Crypto Currencies can also be introduced at the International Level. A WorldCoin
can be created based on the K-Y Protocol. Only, the WorldCoin will be backed by SDRs
(Special Drawing Rights) of the IMF. Exchange rates of various NationCoins vis-à-vis the
WorldCoin will decide the inter-relations between the several RSB Crypto Currencies.
###### CONCLUSION
We have proposed a system for the creation of Regulated and Sovereign Backed (RSB)
Crypto currencies. They will eventually replace, to a large extent, paper currencies of their
respective nations. We began with the setting up of the Digital Assets Reserve which will be
a sovereign authority. The first cache of NationCoins generated in the Genesis Block [2] will
be given to banks for their internal settlements. This will ensure that the system continues
to generate NationCoins subsequent to transaction verification as per the Controlled Block
Chain Protocol. It will also test the robustness of the system before the NationCoins are
released to the public.
Equation defines the unit-for-unit equivalency of NationCoin Units with the native currency.
Scaling after Equation is used to cater to the huge demand that the Crypto currency will
face. Doubling is aimed at replacement of a particular nation's currency with NationCoins.
Linkage will enable the NationCoin to be used across borders.
The unique feature of The K-Y Protocol is that it can be used by any Sovereign Authority to
create a credible RSB Crypto Currency. The people stand to benefit from all the advantages
accruing from such a currency. Nations with a larger and more diverse economy will take
longer to shift to NationCoins from paper currencies as the common medium of exchange.
Smaller Economies can shift faster.
To make the NationCoin secure, several security features at various stages have been
incorporated. Holding, Corpus Backing, Sovereign Stamping, Certification and National
Ledger Database are some of the built-in security features. Hence it has a Multi-tiered
security structure. The introduction of RSB NationCoins will usher in an era of Cashless
**Liquidity. The National Ledger Database can also be used for Non-Financial Block Chain uses**
where object ownership is decoupled from functional Utility.
**ABBREVIATIONS**
**DAR-Digital Assets Reserve**
**DAREC-Digital Assets Regulation and Exchange Commission**
**DATA- Digital Assets Tracking and Administration**
**NCU- NationCoin Units**
**NCW-National Coin Wallet**
**NLD-National Ledger Database**
**RSB-Regulated and Sovereign Backed**
###### REFERENCES
[1][2] Satoshi Nakamoto, “Bitcoin: A Peer-to-Peer Electronic Cash System”.
|
{
"disclaimer": "Notice: The following paper fields have been elided by the publisher: {'abstract', 'references'}. Paper or abstract available at https://api.unpaywall.org/v2/10.2139/SSRN.2735267?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.2139/SSRN.2735267, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": null,
"status": "GREEN",
"url": "https://mpra.ub.uni-muenchen.de/82067/1/MPRA_paper_82067.pdf"
}
| 2,016
|
[] | true
| 2016-02-13T00:00:00
|
[] | 6,348
|
|
en
|
[
{
"category": "Engineering",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/ffe618b5b04563094e4a84734edaec09961e8bbe
|
[] | 0.809663
|
Distributed Model Predictive Control and Coalitional Control Strategies—Comparative Performance Analysis Using an Eight-Tank Process Case Study
|
ffe618b5b04563094e4a84734edaec09961e8bbe
|
Actuators
|
[
{
"authorId": "46305620",
"name": "A. Maxim"
},
{
"authorId": "52002479",
"name": "Ovidiu Pauca"
},
{
"authorId": "35363951",
"name": "C. Caruntu"
}
] |
{
"alternate_issns": null,
"alternate_names": null,
"alternate_urls": [
"http://nbn-resolving.de/urn/resolver.pl?urn=urn:nbn:ch:bel-271143",
"https://www.mdpi.com/journal/actuators"
],
"id": "f6d0ccd1-a3b3-4c19-8677-753b3279918f",
"issn": "2076-0825",
"name": "Actuators",
"type": "journal",
"url": "http://www.e-helvetica.nb.admin.ch/directAccess?callnumber=bel-271143"
}
|
Complex systems composed of multiple interconnected sub-systems need to be controlled with specialized control algorithms. In this paper, two classes of control algorithms suitable for such processes are presented. Firstly, two distributed model predictive control (DMPC) strategies with different formulations are described. Afterward, a coalitional control (CC) strategy is proposed, with two different communication topologies, i.e., a default decentralized topology and a distributed topology. All algorithms were tested on the same simulation setup consisting of eight water tanks. The simulation results show that the coalitional control methodology has a similar performance to the distributed algorithms. Moreover, due to its simplified formulation, the former can be easily tested on embedded systems with limited computation storage.
|
# actuators
_Article_
## Distributed Model Predictive Control and Coalitional Control Strategies—Comparative Performance Analysis Using an Eight-Tank Process Case Study
**Anca Maxim** **, Ovidiu Pauca** **and Constantin-Florin Caruntu ***
Department of Automatic Control and Applied Informatics, “Gheorghe Asachi” Technical University of Iasi,
700050 Iasi, Romania
*** Correspondence: caruntuc@ac.tuiasi.ro**
**Abstract: Complex systems composed of multiple interconnected sub-systems need to be controlled**
with specialized control algorithms. In this paper, two classes of control algorithms suitable for such
processes are presented. Firstly, two distributed model predictive control (DMPC) strategies with
different formulations are described. Afterward, a coalitional control (CC) strategy is proposed, with
two different communication topologies, i.e., a default decentralized topology and a distributed
topology. All algorithms were tested on the same simulation setup consisting of eight water tanks.
The simulation results show that the coalitional control methodology has a similar performance to the
distributed algorithms. Moreover, due to its simplified formulation, the former can be easily tested
on embedded systems with limited computation storage.
**Keywords: distributed model predictive control; coalitional control; networked systems**
**Citation: Maxim, A.; Pauca, O.; Caruntu, C.-F. Distributed Model Predictive Control and Coalitional Control Strategies—Comparative Performance Analysis Using an Eight-Tank Process Case Study. _Actuators 2023, 12, 281._ [https://doi.org/10.3390/act12070281](https://doi.org/10.3390/act12070281)**

Academic Editor: Eihab M. Abdel-Rahman

Received: 24 May 2023; Revised: 23 June 2023; Accepted: 7 July 2023; Published: 10 July 2023

**Copyright: © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license ([https://creativecommons.org/licenses/by/4.0/](https://creativecommons.org/licenses/by/4.0/)).**

**1. Introduction**
Distributed model predictive control (DMPC) is a preferred control strategy when
dealing with complex systems. Such processes are composed of multiple sub-systems, more
often completely or partially interconnected, either physically or through common shared
resources or goals [1]. To control such systems, centralized control is not a reliable strategy,
due to the sheer size of the computational burden, for solving a unique optimization
problem [2]. Decentralized control can be applied only in the particular case of a weak
interconnection between sub-systems since, from the control point of view, all of them are
independently treated, deliberately ignoring the interdependent connections [3]. Thus,
distributed control is a control strategy of compromise between the aforementioned ones, by
independently controlling the sub-systems while also taking into account the links between
them. The DMPC methodology was developed within the mature model predictive control
(MPC) research field [4], in which each sub-system solves a coupled MPC optimization
problem, considering both local and inter-shared information.
The subject is ongoing and in fast development, evidenced by extensive research in
the DMPC field. During the last decade (i.e., publication years 2013–2023), in the Web Of
Science Core Collection, around 1000 DMPC-related papers were published, with more
than 500 articles published in prestigious journals such as Annual Reviews in Control,
Automatica, IEEE Transactions on Control Systems Technology, Systems & Control Letters
and IEEE Control Systems Magazine, among others.
The DMPC strategy was successfully applied in various domains, such as microgrids [5,6], smart grids [7,8], traffic control [9–14], vehicle platooning [15–18], wind
farms [19–21], wastewater treatment plants [22,23], chemical processes [24,25] or network
systems [26], just to name a few. In [27], a robust DMPC algorithm for energy management
optimization in a multi-microgrid system was presented. The stability of an independent
microgrid with respect to the uncertainties introduced by the renewable energy sources
_Actuators 2023, 12, 281_ 2 of 23
was ensured using the advantages of robust MPC optimization. Moreover, a robust DMPC
strategy was used to dynamically develop an energy schedule for the multi-microgrid
system, using the advantage of power transactions between independent units. In [28], a
DMPC approach for the online scheduling involved in the coordination problem between
demand response and alternating current optimal power flow in a smart grid was proposed.
In [29], a DMPC strategy for high-speed train traffic control was developed. To ensure
smaller traveling distances between each train, a virtual coupling was considered, and
proofs for feasibility and terminal invariant set constraint stability were provided. In [30], a
DMPC approach for a vehicle platoon with two string stability criteria based on the
l∞- and l2-norms was investigated. In [31], an economic DMPC strategy for a large-scale wind
farm was introduced. For each wind turbine, a local Nash optimal solution was reached
using an iterative algorithm while also ensuring the dynamic global economic target for
the overall wind farm. In [22], an economic DMPC method for a wastewater treatment
plant was presented. Two design approaches for the economic DMPC were proposed, with
the difference consisting in the model used in the local controller. In one case, for each
subsystem, the centralized plant model was used in the optimization problem, whereas
the other approach used the corresponding local model defined for each subsystem in the
local controller. The simulations performed in various weather conditions showed that
the first approach outperforms the second one in terms of control performance. In [32], an
explicit DMPC design for chemical processes was introduced. The strategy was used to
handle the constraints in a matrix form, while dividing them into two sets. When compared
with a classical DMPC, the simulation results obtained for a coke oven pressure control
system showed the efficiency of the explicit DMPC formulation. In [33], a robust DMPC
for networked control systems with uncertainties and time delays was presented. By
decomposing the network optimization problem in multiple optimization sub-problems,
each one described using an upper bound robust objective, the computational complexity
of the algorithm was decreased.
A comprehensive recent review of DMPC strategies, classified by their robustness to system faults (in sensors and actuators), external cyber-attacks on the communication network, or internal attacks from malignant agents inside the network that share false information, is given in [34].
A highly cited review in the DMPC field, which early on envisioned the research trends for the following decade, is provided in [35]. There, DMPC algorithms are classified according to the optimization problem to be solved as:
- Non-cooperative DMPC—if each agent (or controller) solves a local cost function
using both local information from its sub-system and information received from the
interconnected sub-systems;
- Cooperative DMPC—if each agent solves a global cost function, taking into account
both local information and information received from the entire system.
Depending on the communication protocols established between different agents, the
cooperative architectures are further classified as:
**–** Iterative DMPC—if each agent exchanges information with other agents multiple times within a sampling period; to this end, the communication flow is bidirectional.
**–** Non-iterative or sequential DMPC—if each agent exchanges information with other agents only once during a sampling period; in this case, the communication flow is unidirectional.
Moreover, based on the topology of the communication network, DMPC methods are
categorized as [36]:
- Fully connected DMPC—if each agent is connected with all other agents from
the network;
- Partially connected DMPC—if each agent is connected with only a group of agents
within the network, called neighbours.
All the above mentioned DMPC strategies have one common denominator, namely
that the communication and controller topologies are fixed, i.e., once established in the
beginning, they do not change during operation. However, this characteristic is rather
restrictive, and thus another methodology was introduced, called coalitional control (CC),
using the principle of flexible architecture [37]. In this methodology, rather than having to
choose between a fully connected or a partially connected communication network, the
idea is that, during operation, in a fully connected topology, certain communication links
can be disabled (if they are not necessary), thus obtaining a partially connected topology.
A group of agents partially connected (through communication link activation) is called
a coalition (or cooperative group), and, within the coalition, a cooperative optimization
problem is solved. When the links are disabled, the coalition is dissolved, and the agents
solve a non-cooperative optimization problem [38,39].
In this work, we extended the comparative performance analysis provided in [40] for
two DMPC methodologies to also include a coalitional control algorithm. The contributions
of this work are the following:
- A comprehensive performance analysis was performed for two non-cooperative
DMPC algorithms (one formulated using a state-space model, and another formulated
using an input–output model) and a CC method, described using a state-space model.
- All three algorithms were tested in simulation on the same process, i.e., the eight-tank
process introduced in [40].
- The CC algorithm was based on a matrix gain feedback controller, computed by
solving a gradient-based optimization problem. The basic principle of computing the
gains was firstly presented in [41].
With respect to our previous works, the following novelties are listed:
- The eight-tank process model introduced in [40] was extended with the nonlinear
mathematical description based on Bernoulli’s law and the mass balances.
- The DMPC strategies given in [40] are presented in an extended version.
- The gradient-based methodology for computing the gain feedback matrix in the coalitional control framework provided in [41] was reformulated to achieve comparative
results with respect to the DMPC strategies. To this end, the feedback gain matrices
used in the coalitional control methodology were computed solving a cost function,
which minimizes the error between the coalitional state trajectories, with respect to
a set of DMPC state trajectories. Moreover, a closed-loop stability constraint was
also introduced.
- Two communication topologies were designed for the CC algorithm (with different
sets of feedback matrices optimally computed), i.e., a default decentralized communication topology without communication between sub-systems, and a distributed
topology with communication links between sub-systems.
- A procedure that automatically switches between the distributed and decentralized communication topologies designed for the coalitional control methodology
is introduced.
The remainder of this paper is structured as follows: Section 2 introduces the state-space DMPC algorithm, Section 3 describes the input–output DMPC algorithm, and, in
Section 4, the CC algorithm is provided. The process model description, followed by the
simulation results and discussion, is given in Section 5. The conclusions and future work
ideas are presented in Section 6.
**2. DMPC Algorithm with State-Space Model (DMPCSS)**
In this section, a non-cooperative DMPC algorithm with velocity-form formulation,
designed for a system composed of N sub-systems, is presented. This algorithm was firstly
introduced in [42] for a two-agent system and then extended to N sub-systems in [40].
_2.1. Problem Formulation_
Let us introduce a class of linear time-invariant (LTI) systems consisting of $N$ sub-systems, interconnected through input signals. Each sub-system $i$, $\forall i \in \mathcal{N}$, with $\mathcal{N}$ the set $\{1, \dots, N\} \subseteq \mathbb{N}$, has the following dynamics:

$$x_{p_i}(k+1) = A_{p_i} x_{p_i}(k) + B_{p_{ii}} u_i(k) + \sum_{j \in \mathcal{N}_i} B_{p_{ij}} u_j(k) \qquad (1)$$

$$y_i(k) = C_{p_i} x_{p_i}(k), \quad \forall i \in \mathcal{N} \qquad (2)$$

with $x_{p_i} \in \mathbb{R}^{n_x}$, $u_i \in \mathbb{R}^{n_u}$, $u_j \in \mathbb{R}^{n_u}$ and $y_i \in \mathbb{R}^{n_y}$ the state, input, coupling input and output vectors of the process, respectively; $k$ is the discrete-time index; $A_{p_i}$, $B_{p_{ii}}$, $B_{p_{ij}}$ and $C_{p_i}$ are matrices of adequate dimensions. All the sub-systems coupled with sub-system $i$ are included in the set $\mathcal{N}_i = \{j \in \mathcal{N} : B_{p_{ij}} \neq 0\}$. Within this neighbourhood set, relevant information pertaining to the input vectors is exchanged between sub-systems $i$ and $j$.
Both the input and output vectors are constrained as:

$$u_i \in \mathcal{U}_i, \quad y_i \in \mathcal{Y}_i, \quad \forall i \in \mathcal{N} \qquad (3)$$

where $\mathcal{U}_i$ and $\mathcal{Y}_i$ denote sets defined by linear inequalities.
As previously mentioned, the proposed DMPC strategy has a velocity-form formulation to ensure the presence of an integral action in the control loop. This is achieved by applying the difference operator to both sides of (1), obtaining:

$$\underbrace{x_{p_i}(k+1) - x_{p_i}(k)}_{\Delta x_{p_i}(k+1)} = A_{p_i} \underbrace{\left(x_{p_i}(k) - x_{p_i}(k-1)\right)}_{\Delta x_{p_i}(k)} + B_{p_{ii}} \underbrace{\left(u_i(k) - u_i(k-1)\right)}_{\Delta u_i(k)} + \sum_{j \in \mathcal{N}_i} B_{p_{ij}} \underbrace{\left(u_j(k) - u_j(k-1)\right)}_{\Delta u_j(k)}, \quad \forall i \in \mathcal{N} \qquad (4)$$

with the compact form:

$$\Delta x_{p_i}(k+1) = A_{p_i} \Delta x_{p_i}(k) + B_{p_{ii}} \Delta u_i(k) + \sum_{j \in \mathcal{N}_i} B_{p_{ij}} \Delta u_j(k), \quad \forall i \in \mathcal{N} \qquad (5)$$
Applying the same operation to (2) and substituting (5), we obtain:

$$\underbrace{y_i(k+1) - y_i(k)}_{\Delta y_i(k+1)} = C_{p_i} \Delta x_{p_i}(k+1) = C_{p_i} \left( A_{p_i} \Delta x_{p_i}(k) + B_{p_{ii}} \Delta u_i(k) + \sum_{j \in \mathcal{N}_i} B_{p_{ij}} \Delta u_j(k) \right), \quad \forall i \in \mathcal{N} \qquad (6)$$
The new state variable is selected as $x_i(k) = \left[\Delta x_{p_i}(k)^T \; y_i(k)\right]^T$, obtaining the velocity-form model:

$$\underbrace{\begin{bmatrix} \Delta x_{p_i}(k+1) \\ y_i(k+1) \end{bmatrix}}_{x_i(k+1)} = \underbrace{\begin{bmatrix} A_{p_i} & O \\ C_{p_i} A_{p_i} & I \end{bmatrix}}_{A_i} \underbrace{\begin{bmatrix} \Delta x_{p_i}(k) \\ y_i(k) \end{bmatrix}}_{x_i(k)} + \underbrace{\begin{bmatrix} B_{p_{ii}} \\ C_{p_i} B_{p_{ii}} \end{bmatrix}}_{B_{ii}} \Delta u_i(k) + \sum_{j \in \mathcal{N}_i} \underbrace{\begin{bmatrix} B_{p_{ij}} \\ C_{p_i} B_{p_{ij}} \end{bmatrix}}_{B_{ij}} \Delta u_j(k), \quad \forall i \in \mathcal{N} \qquad (7)$$

$$y_i(k) = \underbrace{\begin{bmatrix} O & I \end{bmatrix}}_{C_i} \begin{bmatrix} \Delta x_{p_i}(k) \\ y_i(k) \end{bmatrix}$$
where I and O are the identity and zero matrix, respectively, with adequate dimensions.
In compact form, model (7) can be written as:

$$x_i(k+1) = A_i x_i(k) + B_{ii} \Delta u_i(k) + \sum_{j \in \mathcal{N}_i} B_{ij} \Delta u_j(k), \qquad y_i(k) = C_i x_i(k), \quad \forall i \in \mathcal{N} \qquad (8)$$
where ∆ui(k) and ∆uj(k), ∀i ∈N, ∀j ∈Ni, are the inputs in velocity form.
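As a quick numerical illustration, the velocity-form augmentation in (7) and (8) can be assembled directly from the process matrices. The sketch below is minimal and hypothetical: the 2-state sub-system matrices are illustrative placeholders, not taken from the paper.

```python
import numpy as np

def velocity_form(Ap, Bp, Cp):
    """Augmented velocity-form matrices of (7)-(8): state x = [dx_p; y]."""
    nx, _ = Bp.shape
    ny = Cp.shape[0]
    A = np.block([[Ap, np.zeros((nx, ny))],
                  [Cp @ Ap, np.eye(ny)]])
    B = np.vstack([Bp, Cp @ Bp])   # the same construction applies to each B_ij
    C = np.hstack([np.zeros((ny, nx)), np.eye(ny)])
    return A, B, C

# Illustrative 2-state, 1-input, 1-output sub-system
Ap = np.array([[0.9, 0.0], [0.1, 0.95]])
Bp = np.array([[0.2], [0.0]])
Cp = np.array([[0.0, 1.0]])
A, B, C = velocity_form(Ap, Bp, Cp)
print(A.shape, B.shape, C.shape)  # (3, 3) (3, 1) (1, 3)
```

Note that the output $y_i$ is carried inside the augmented state, which is what gives the controller its integral action on the input increments.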
_2.2. Optimization Problem_
Each agent $\forall i \in \mathcal{N}$ solves the following cost function $J_i$:

$$J_i(x_i(k), \Delta U_i(k), \{\Delta U_j(k)\}_{j \in \mathcal{N}_i}) = \left(R_{sp_i} - Y_i\right)^T \left(R_{sp_i} - Y_i\right) + \Delta U_i(k)^T R_i \Delta U_i(k) \qquad (9)$$

The optimal input sequence

$$\Delta U_i^*(k) = \left[\Delta u_i^*(k|k) \; \dots \; \Delta u_i^*(k+N_c-1|k)\right]^T$$

is computed by minimizing (9), defined based on the output predictor:

$$Y_i = \left[y_i(k+1|k) \; \dots \; y_i(k+N_p|k)\right]^T, \quad \forall i \in \mathcal{N}$$

where $N_p$ is the prediction horizon and $N_c \leq N_p$ is the control horizon. $R_{sp_i} \in \mathbb{R}^{N_p}$ is the predicted reference trajectory, imposed constant over the prediction window and equal to the imposed setpoint at sampling time $k$. $R_i = \alpha_i I_{N_c}$, $\alpha_i \geq 0$, is the input weight matrix.
The output predictor $Y_i$ is iteratively computed from (8), obtaining the compact form:

$$Y_i = \tilde{A}_i x_i(k) + \tilde{B}_{ii} \Delta U_i(k) + \sum_{j \in \mathcal{N}_i} \tilde{B}_{ij} \Delta U_j(k) \qquad (10)$$

in terms of the current state $x_i(k)$ (and, implicitly, the measured process state $x_{p_i}(k)$) and the input trajectories $\Delta U_i(k)$, $\forall i \in \mathcal{N}$, and $\{\Delta U_j(k)\}_{j \in \mathcal{N}_i}$. $\tilde{A}_i$, $\tilde{B}_{ii}$ and $\tilde{B}_{ij}$ are the predictor matrices.
Explicitly, the cost function to be minimized by each agent $\forall i \in \mathcal{N}$ is:

$$J_i(x_i(k), \Delta U_i, \{\Delta U_j(k)\}_{j \in \mathcal{N}_i}) = \left(R_{sp_i} - \tilde{A}_i x_i(k)\right)^T \left(R_{sp_i} - \tilde{A}_i x_i(k)\right) + 2 \Delta U_i^T \tilde{B}_{ii}^T \sum_{j \in \mathcal{N}_i} \tilde{B}_{ij} \Delta U_j - 2 \Delta U_i^T \tilde{B}_{ii}^T \left[R_{sp_i} - \tilde{A}_i x_i(k)\right] - 2 \sum_{j \in \mathcal{N}_i} \Delta U_j^T \tilde{B}_{ij}^T \left[R_{sp_i} - \tilde{A}_i x_i(k)\right] + \Delta U_i^T \left(\tilde{B}_{ii}^T \tilde{B}_{ii} + R_i\right) \Delta U_i + \sum_{j \in \mathcal{N}_i} \Delta U_j^T \tilde{B}_{ij}^T \tilde{B}_{ij} \Delta U_j \qquad (11)$$
obtained by the substitution of (10) in (9). Note that, in (11), the unknown variable is
∆Ui(k), ∀i ∈N, while we consider that {∆Uj(k)}j∈Ni is available inside the neighbourhood.
The optimal solution is obtained minimizing (11) subject to (3).
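For intuition, in the unconstrained case the minimizer of (11) has a closed form: setting the gradient with respect to $\Delta U_i$ to zero gives $(\tilde{B}_{ii}^T \tilde{B}_{ii} + R_i)\,\Delta U_i = \tilde{B}_{ii}^T (R_{sp_i} - \tilde{A}_i x_i - \sum_j \tilde{B}_{ij} \Delta U_j)$. The minimal sketch below uses randomly generated placeholder predictor matrices (not the paper's); handling the hard constraints (3) would additionally require a QP solver.

```python
import numpy as np

def dmpc_agent_step(A_t, B_ii, B_ij_list, dU_j_list, R_sp, x_i, alpha):
    """Unconstrained minimizer of cost (11) for one agent:
    (B_ii^T B_ii + alpha*I) dU_i = B_ii^T (R_sp - A_t x_i - sum_j B_ij dU_j)."""
    Nc = B_ii.shape[1]
    residual = R_sp - A_t @ x_i
    for B_ij, dU_j in zip(B_ij_list, dU_j_list):
        residual = residual - B_ij @ dU_j
    H = B_ii.T @ B_ii + alpha * np.eye(Nc)
    return np.linalg.solve(H, B_ii.T @ residual)

rng = np.random.default_rng(0)
A_t = rng.standard_normal((5, 3))    # placeholder predictor matrices
B_ii = rng.standard_normal((5, 2))
B_ij = rng.standard_normal((5, 2))
dU_i = dmpc_agent_step(A_t, B_ii, [B_ij], [np.zeros(2)],
                       R_sp=np.ones(5), x_i=np.ones(3), alpha=10.0)
print(dU_i.shape)  # (2,)
```

In the non-cooperative scheme, each agent would evaluate this step with the neighbours' latest input trajectories held fixed.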
**3. DMPC Algorithm with Input–Output Model (DMPCIO)**
In this section, a non-cooperative DMPC with an input–output model, designed for
a system composed of N sub-systems, is presented. The algorithm was firstly tested on a
three-agent system in [43], and extended to N sub-systems in [40].
_3.1. Problem Formulation_
Let us introduce an LTI system, similar to the one given in Section 2.1, where each
sub-system i has the following dynamics:
$$y_i(k) = G_{ii}(q^{-1}) u_i(k) + \sum_{j \in \mathcal{N}_i} G_{ij}(q^{-1}) u_j(k) + w_i(k) \qquad (12)$$
with $u_i \in \mathbb{R}^{n_u}$, $y_i \in \mathbb{R}^{n_y}$ and $w_i \in \mathbb{R}^{n_w}$ the input, output and disturbance vectors, respectively; $q^{-1}$ is the backward shift operator; $k$ denotes the discrete-time index; $G_{ii}(q^{-1})$ and $G_{ij}(q^{-1})$ are discrete-time transfer functions with monic denominators.
All the sub-systems coupled with sub-system $i$ are included in the set $\mathcal{N}_i = \{j \in \mathcal{N} : G_{ij}(q^{-1}) \neq 0\}$. The disturbance term $w_i$, $\forall i \in \mathcal{N}$, is considered a white noise signal filtered with an appropriate model [44]. To introduce an integral action in the control loop, the disturbance model was chosen as an integrator:

$$w_i(k) = \frac{C_i(q^{-1})}{D_i(q^{-1})} e_i(k) = \frac{1}{1 - q^{-1}} e_i(k) \qquad (13)$$
where ei, ∀i ∈N is a white noise signal.
The input and output vectors are constrained as (3).
_3.2. Optimization Problem_
Each agent $\forall i \in \mathcal{N}$ solves the following cost function $J_i$:

$$J_i(Y_i(k), U_i(k), \{U_j(k)\}_{j \in \mathcal{N}_i}) = \left(R_{sp_i}(k) - Y_i(k)\right)^T \left(R_{sp_i}(k) - Y_i(k)\right) + \Delta U_i(k)^T R_i \Delta U_i(k) \qquad (14)$$

where $Y_i(k) = \left[y_i(k+1|k) \; \dots \; y_i(k+N_p|k)\right]^T$ is the output predictor; the input sequence $\Delta U_i(k) = \left[\Delta u_i(k|k) \; \dots \; \Delta u_i(k+N_c-1|k)\right]^T$ is defined as the control increment over the control horizon $N_c \leq N_p$; $R_{sp_i}(k) \in \mathbb{R}^{N_p}$ is the reference trajectory, imposed constant over the prediction horizon and equal to the set-point at the current time instant $k$; $R_i = \alpha_i I_{N_c}$ is the input weight.
The input–output MPC formulation provided in [45], which is the basis for the DMPC implementation, computes the output predictor by aggregating past and future effects:

$$Y_i(k) = \bar{Y}_i(k) + Y_i^{opt}(k), \qquad (15)$$

where $Y_i^{opt}(k)$, formulated in (16), represents the future control actions, while $\bar{Y}_i(k) = X_i(k) + W_i(k)$ aggregates the past actions $X_i(k)$ and the disturbance prediction $W_i(k)$. In compact matrix form, $Y_i^{opt}(k)$ is calculated as:

$$Y_i^{opt}(k) = \tilde{G}_{ii} U_i(k) + \sum_{j \in \mathcal{N}_i} \tilde{G}_{ij} U_j(k), \quad \forall i \in \mathcal{N} \qquad (16)$$
with

$$\tilde{G}_{ii} = \begin{bmatrix} h_1^{ii} & 0 & \dots & g_{1-N_c+1}^{ii} \\ h_2^{ii} & h_1^{ii} & \dots & \dots \\ \dots & \dots & \dots & \dots \\ h_{N_p}^{ii} & h_{N_p-1}^{ii} & \dots & g_{N_p-N_c+1}^{ii} \end{bmatrix}, \qquad \tilde{G}_{ij} = \begin{bmatrix} h_1^{ij} & 0 & \dots & g_{1-N_c+1}^{ij} \\ h_2^{ij} & h_1^{ij} & \dots & \dots \\ \dots & \dots & \dots & \dots \\ h_{N_p}^{ij} & h_{N_p-1}^{ij} & \dots & g_{N_p-N_c+1}^{ij} \end{bmatrix} \qquad (17)$$

where $\{h_1^{ij}, h_2^{ij}, h_3^{ij}, \dots\}$ are the impulse response coefficients from input $j$, $\forall j \in \mathcal{N}_i$, to output $i$, and $g_{N_p-N_c+1}^{ij}$ is the corresponding step response coefficient.
Explicitly, the cost function to be minimized by each agent i, ∀i ∈N is:
$$J_i(Y_i, U_i, \{U_j\}_{j \in \mathcal{N}_i}) = \left(R_{sp_i} - \bar{Y}_i - \tilde{G}_{ii} U_i - \sum_{j \in \mathcal{N}_i} \tilde{G}_{ij} U_j\right)^T \left(R_{sp_i} - \bar{Y}_i - \tilde{G}_{ii} U_i - \sum_{j \in \mathcal{N}_i} \tilde{G}_{ij} U_j\right) + \left(\bar{A}_i U_i + \bar{b}_i\right)^T R_i \left(\bar{A}_i U_i + \bar{b}_i\right)$$

$$= U_i^T \left(\tilde{G}_{ii}^T \tilde{G}_{ii} + \bar{A}_i^T R_i \bar{A}_i\right) U_i - 2\left[\tilde{G}_{ii}^T \left(R_{sp_i} - \bar{Y}_i - \sum_{j \in \mathcal{N}_i} \tilde{G}_{ij} U_j\right) - \bar{A}_i^T R_i \bar{b}_i\right]^T U_i + \left(R_{sp_i} - \bar{Y}_i - \sum_{j \in \mathcal{N}_i} \tilde{G}_{ij} U_j\right)^T \left(R_{sp_i} - \bar{Y}_i - \sum_{j \in \mathcal{N}_i} \tilde{G}_{ij} U_j\right) + \bar{b}_i^T R_i \bar{b}_i \qquad (18)$$

where the incremental variable $\Delta U_i(k)$ is written in matrix form as $\Delta U_i = \bar{A}_i U_i + \bar{b}_i$. Matrix $\bar{A}_i$ and vector $\bar{b}_i$ are recursively computed from the relation $\Delta u_i(k|k) = u_i(k|k) - u_i(k-1)$, with $u_i(k-1)$ being the actual input sent to the sub-system at the previous sampling instant.
Note that, in (14), the unknown variable is $U_i(k)$, $\forall i \in \mathcal{N}$, while $\{U_j(k)\}_{j \in \mathcal{N}_i}$ is considered available inside the neighbourhood.
The optimal solution $U_i^*(k)$ is obtained by minimizing (18) subject to (3).
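The predictor matrices in (17) are filled with response coefficients of the discrete transfer functions. The sketch below illustrates the idea using a plain step-response (DMC-style) lower-triangular matrix rather than the paper's exact mixed impulse/step layout, applied to a first-order coupling channel of the same form as $\bar{G}_{d_{14}}$ in (37).

```python
import numpy as np

def step_response(num, den, n):
    """Step response of G(q^-1) = (num[0] q^-1 + ...)/(1 + den[1] q^-1 + ...),
    simulated through its difference equation (monic denominator)."""
    y, u = np.zeros(n + 1), np.ones(n + 1)
    for k in range(1, n + 1):
        acc = sum(b * u[k - m - 1] for m, b in enumerate(num) if k - m - 1 >= 0)
        acc -= sum(a * y[k - m] for m, a in enumerate(den[1:], start=1) if k - m >= 0)
        y[k] = acc
    return y[1:]

def prediction_matrix(num, den, Np, Nc):
    """Lower-triangular dynamic matrix mapping future input moves to outputs."""
    g = step_response(num, den, Np)
    G = np.zeros((Np, Nc))
    for r in range(Np):
        for c in range(min(r + 1, Nc)):
            G[r, c] = g[r - c]
    return G

# Coupling channel 0.07351 q^-1 / (1 - 0.9227 q^-1), cf. (37)
G = prediction_matrix([0.07351], [1.0, -0.9227], Np=5, Nc=2)
print(np.round(G[:, 0], 4))
```

The first column contains the step-response coefficients themselves; each later column is the same sequence shifted down by one sample.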
**4. Coalitional Control with Gain Feedback Control (CC)**
In this section, a coalitional control algorithm with gain feedback matrix formulation
based on a state-space model is presented. The algorithm was firstly introduced in [41].
As previously mentioned, the idea behind the coalitional control is to ensure a degree of
flexibility in the control architecture. This is obtained by enabling or disabling certain
communication links between different agents, thus obtaining different communication
topologies [41].
_4.1. Problem Formulation_
Consider the LTI system introduced in Section 2.1, where each sub-system i has the
dynamics (1) and (2) and the constraints (3).
In the proposed CC strategy, to ensure the presence of an integral action in the control loop, an additional state was introduced. This state, denoted $\bar{x}_{p_i}$, is the integral of the control error, defined as $\bar{x}_{p_i}(k+1) = \bar{x}_{p_i}(k) + r_i(k) - C_{p_i} x_{p_i}(k)$. It is used to extend the state vector, obtaining the extended model:

$$\underbrace{\begin{bmatrix} x_{p_i}(k+1) \\ \bar{x}_{p_i}(k+1) \end{bmatrix}}_{x_i(k+1)} = \underbrace{\begin{bmatrix} A_{p_i} & O \\ -C_{p_i} & I \end{bmatrix}}_{A_i} \underbrace{\begin{bmatrix} x_{p_i}(k) \\ \bar{x}_{p_i}(k) \end{bmatrix}}_{x_i(k)} + \underbrace{\begin{bmatrix} O \\ I \end{bmatrix}}_{B_{sp_i}} r_i(k) + \underbrace{\begin{bmatrix} B_{p_{ii}} \\ O \end{bmatrix}}_{B_{ii}} u_i(k) + \sum_{j \in \mathcal{N}_i} \underbrace{\begin{bmatrix} B_{p_{ij}} \\ O \end{bmatrix}}_{B_{ij}} u_j(k) \qquad (19)$$

$$y_i(k) = \underbrace{\begin{bmatrix} C_{p_i} & O \end{bmatrix}}_{C_i} \begin{bmatrix} x_{p_i}(k) \\ \bar{x}_{p_i}(k) \end{bmatrix}, \quad \forall i \in \mathcal{N} \qquad (20)$$

where $I$ and $O$ are the identity and zero matrices, respectively, of adequate dimensions.
In compact form, model (19) and (20) can be written as:

$$x_i(k+1) = A_i x_i(k) + B_{sp_i} r_i(k) + B_{ii} u_i(k) + \sum_{j \in \mathcal{N}_i} B_{ij} u_j(k), \qquad y_i(k) = C_i x_i(k), \quad \forall i \in \mathcal{N} \qquad (21)$$

where $u_i(k)$ and $u_j(k)$, $\forall i \in \mathcal{N}$, $\forall j \in \mathcal{N}_i$, are the input and the coupling input, respectively.
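The extended matrices in (19)-(21) can be assembled directly from the process matrices. A minimal sketch, here fed with the discrete-time matrices of sub-system $S_1$ later reported in (36):

```python
import numpy as np

def extended_model(Ap, Bpii, Cp):
    """Extended matrices of (21): state [x_p; xbar], with the integral state
    xbar(k+1) = xbar(k) + r(k) - Cp x_p(k)."""
    nx, nu = Bpii.shape
    ny = Cp.shape[0]
    A = np.block([[Ap, np.zeros((nx, ny))],
                  [-Cp, np.eye(ny)]])
    B_sp = np.vstack([np.zeros((nx, ny)), np.eye(ny)])
    B_ii = np.vstack([Bpii, np.zeros((ny, nu))])
    C = np.hstack([Cp, np.zeros((ny, ny))])
    return A, B_sp, B_ii, C

# Sub-system S1 matrices from (36)
Ap = np.array([[0.8761, 0.0], [0.1189, 0.9227]])
Bpii = np.array([[0.1275], [0.0084]])
Cp = np.array([[0.0, 1.0]])
A, B_sp, B_ii, C = extended_model(Ap, Bpii, Cp)
print(A.shape)  # (3, 3)
```

The coupling matrices $B_{ij}$ are padded with the same zero block as $B_{ii}$, so the integral state is driven only by the reference and the measured output.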
_4.2. Optimization Problem_
In the proposed coalitional control strategy, each agent ∀i ∈N is controlled using
a state feedback gain matrix. Within the methodology, a given communication topology
will have a particular form for the corresponding overall gain matrix (comprising all
individual feedback matrices, correlated to each sub-system). As such, in the initialization
phase of the methodology, one must decide the communication topologies that will be
employed in the coalitional control. The difference between different topologies is the
uni-directional communication links that are enabled, thus resulting in different overall
gain feedback matrices.
Hereafter, we will formulate the following communication topologies:
1. A decentralized topology, where the control action of the sub-systems is computed
without external information; thus, all the communication links are disabled;
2. A distributed topology, where the control action of the sub-systems is computed using
relevant external information from the neighbours. This means that the communication links between neighbours are enabled.
In all tests, for each sub-system, the control action is obtained using a gain feedback matrix computed as the optimal solution that minimizes the difference between the closed-loop trajectories produced by the DMPC algorithm and those produced by the feedback gain matrix.
Each feedback gain matrix $K$, corresponding to each communication topology, is computed by solving the following cost function using gradient optimization:

$$J(K) = \sum_{x_i^{DMPC} \in X^{DMPC}} \sum_{i=1}^{N} J_{x_i^{DMPC}}(K) \qquad (22)$$

with

$$J_{x_i^{DMPC}}(K) = \sum_{j=1}^{M} \left\| x_i(j) - x_i^{DMPC}(j) \right\|_2^2, \qquad (23)$$

$$\text{s.t. } (21), (3),$$

$$\max\left(\left|\mathrm{eig}\left(A_i + B_{ii} K_{i,i}\right)\right|\right) < 1 \qquad (24)$$

$$\text{with } u_i(k) = K_{i,i} x_i(k). \qquad (25)$$

where $X^{DMPC}$ is a set of state trajectories, denoted $x_i^{DMPC}$, $\forall i \in \mathcal{N}$, obtained from the DMPCSS algorithm simulated for $M$ time samples.
Within the optimization, to compute the matrix K, a cost index is defined as the error
between the state trajectory xi[DMPC] chosen as an imposed reference for the state trajectories
_xi obtained using the control law (25) corresponding to the decentralized communication_
topology. In this manner, we ensure that the closed-loop dynamics obtained using the
coalitional control strategy are similar to the closed-loop dynamics from DMPCSS (i.e., we
consider the response generated by the DMPCSS strategy to be the desired response for our
coalitional control method). Moreover, note that constraint (24) ensures that all eigenvalues
(computed with Matlab function eig.m) of the closed-loop system are within the unit circle,
i.e., the closed-loop stability is satisfied, with the control law based on the feedback gain
matrix Ki,i, ∀i ∈N .
The set XDMPC contains manifold state trajectories obtained by testing the process in
multiple operating points feasible for the process functionality (i.e., respecting the imposed
hard constraints (3)). Using this set ensures that no bias from a particular simulation case
influences the computation of the optimal overall gain matrix K.
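A minimal sketch of the trajectory-fitting idea behind (22)-(25): fit a static feedback gain so the closed loop tracks a recorded reference trajectory, rejecting candidates that violate the stability constraint (24). For simplicity this uses a single sub-system, an autonomous closed loop, a derivative-free optimizer (Nelder-Mead) instead of the paper's gradient method, and illustrative matrices; the "reference" trajectory is generated from a known gain to stand in for a DMPCSS run.

```python
import numpy as np
from scipy.optimize import minimize

def simulate(A, B, K, x0, M):
    """Closed-loop trajectory x(k+1) = (A + B K) x(k), for M steps."""
    x, traj = x0.copy(), []
    for _ in range(M):
        x = (A + B @ K) @ x
        traj.append(x.copy())
    return np.array(traj)

def fit_feedback_gain(A, B, x_ref, x0):
    """Fit u = K x so the closed loop tracks x_ref, cf. (22)-(25)."""
    M = x_ref.shape[0]
    n, m = B.shape

    def cost(theta):
        K = theta.reshape(m, n)
        if np.max(np.abs(np.linalg.eigvals(A + B @ K))) >= 1.0:  # constraint (24)
            return 1e9
        return np.sum((simulate(A, B, K, x0, M) - x_ref) ** 2)

    res = minimize(cost, np.zeros(m * n), method="Nelder-Mead",
                   options={"xatol": 1e-10, "fatol": 1e-12})
    return res.x.reshape(m, n)

A = np.array([[0.9, 0.0], [0.1, 0.95]])   # illustrative sub-system
B = np.array([[0.2], [0.0]])
x0 = np.array([1.0, 1.0])
K_true = np.array([[-0.5, -0.3]])
x_ref = simulate(A, B, K_true, x0, 30)    # stands in for a DMPCSS trajectory
K = fit_feedback_gain(A, B, x_ref, x0)
print(np.round(K, 3))
```

Penalizing unstable candidates with a large constant is a crude but common way to embed the spectral-radius constraint in a derivative-free search.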
Since we wished to compare the distributed results obtained with the DMPCSS strategy
with the coalitional ones, a distributed communication topology was defined taking into
account the physical coupling between sub-systems. It resulted in an optimal feedback
matrix K, which has elements Ki,i, ∀i ∈N, on the main diagonal, corresponding to each
sub-system and elements off-diagonal Ki,j, ∀i, j ∈N, ∀j ∈Ni, corresponding to the
communication links enabled between neighbours.
The overall gain matrix K for the distributed topology was computed by minimizing the same cost function (22), where (25) was rewritten as ui(k) = Ki,ixi(k) + Ki,jxj(k),
and (24) was rewritten as max(|eig(Ai + BiiKi,i + BijKi,j)|) < 1, so that the interaction
between neighbours is considered.
Note that, for the proposed coalitional control strategy, we designed two communication topologies. From the coalitional point of view, these two case studies can be
regarded as: (i) the default test without coalitions, where the sub-systems do not exchange
information, and the overall gain matrix is diagonal, and (ii) the test with uni-directional
coalitions only between each two neighbours, which are coupled directly through inputs.
In this case, the overall gain matrix has only one non-zero element on each row, placed
off-diagonal.
As previously mentioned, the main advantage of the proposed coalitional control
methodology is to minimize the communication burden of the algorithm. This is managed
by opening additional communication links only when needed. In this framework, a
coalitional control strategy with switching communication topologies was designed, in
which the sub-systems can work either in a decentralized or in a distributed manner.
An important aspect of the coalitional control test is the criterion on which switching between the two topologies is based. In our case, we adopted a time-based framework in which, every $T$ sampling instants, each communication topology is re-evaluated (i.e., a cost index is computed). The evaluation is performed over the next $T$-sample horizon, starting from the current initial conditions (i.e., similar to the receding horizon principle in DMPC). The topology with the smallest predicted cumulative cost is then used for the next $T$ sampling instants.
Let us denote by $J_{dist}(K)$ the cumulative cost for the distributed communication topology, computed as follows:

$$J_{dist}(K) = \sum_{i=1}^{N} J_{x_i}(K_i) \qquad (26)$$

with

$$J_{x_i}(K_i) = \sum_{j=1}^{T} \left\| r_i(k+j) - C_i x_i(k+j) \right\|_2^2 + \beta \left\| u_i(k+j) \right\|_2^2 + \gamma |K_i| \qquad (27)$$

$$\text{s.t. } (21), (3), \quad \text{with } u_i(k) = K_{i,i} x_i(k) + K_{i,j} x_j(k) \qquad (28)$$
where |Ki| denotes the number of off-diagonal, non-zero elements from gain matrix Ki
corresponding to sub-system i. The weight γ is selected by the user, and influences the
importance given to the communication cost involved within a given topology (i.e., to
provide a balance between performance and the number of enabled communication links).
In an analogous manner, the cumulative cost for the decentralized communication
topology Jdec(K) can be computed using (26) by replacing (28) with (25) and selecting
_γ = 0, since no communication links are opened._
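The switching rule can be sketched as follows: simulate each candidate topology over the next $T$ steps, accumulate cost (27) including the $\gamma$-weighted link count, and keep the cheaper topology. All numerical values below are illustrative, not taken from the paper.

```python
import numpy as np

def topology_cost(A, B, C, K, x0, r, T, beta=0.01, gamma=0.01):
    """Predicted cumulative cost (26)-(27) of one topology over T steps."""
    links = np.count_nonzero(K) - np.count_nonzero(np.diag(K))  # enabled links
    x, J = x0.copy(), gamma * links * T
    for _ in range(T):
        u = K @ x                      # overall feedback law, cf. (25)/(28)
        x = A @ x + B @ u
        J += np.sum((r - C @ x) ** 2) + beta * np.sum(u ** 2)
    return J

# Illustrative coupled two-sub-system plant
A = np.array([[0.9, 0.05], [0.05, 0.9]])
B = np.eye(2)
C = np.eye(2)
x0 = np.array([1.0, -1.0])
r = np.zeros(2)
K_dec = np.diag([-0.2, -0.2])                      # decentralized topology
K_dist = np.array([[-0.2, -0.05], [-0.05, -0.2]])  # distributed topology
costs = {"dec": topology_cost(A, B, C, K_dec, x0, r, T=20),
         "dist": topology_cost(A, B, C, K_dist, x0, r, T=20)}
chosen = min(costs, key=costs.get)
print(chosen, {k: round(v, 3) for k, v in costs.items()})
```

Setting `gamma=0` for the decentralized candidate, as the text prescribes, is automatic here because its link count is zero.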
**5. Numerical Analysis on an Eight-Tank Process**
The proposed control strategies (i.e., DMPCSS, DMPCIO and CC) were tested in simulation on a process consisting of eight interconnected water tanks.
_5.1. Process Description_
Let us introduce a benchmark process that can be decomposed into four input-coupled
sub-systems. Namely, two quadruple-tank processes, described in [46] (consisting of two
sub-systems each) were connected in a circular architecture (i.e., sub-system 1 coupled with
sub-system 4, which is coupled with sub-system 3, which is coupled with sub-system 2,
which is coupled with sub-system 1), obtaining an eight-tank process, introduced in [40].
In Figure 1 (from [40]), the schematic diagram of the eight-tank process is provided. For this process, the goal is to control the water levels in the lower tanks (L2, L4, L6, L8) by manipulating the corresponding water flows (i.e., implicitly, by changing the voltages of the four pumps Vp1, Vp2, Vp3, Vp4). Note that the sub-systems are coupled through the inputs (marked in Figure 1 with dashed coloured lines). Thus, a percentage of the water flow provided by pump Vp1 from sub-system 1 influences the water level L4 in sub-system 2 (see the water flow marked with the red dashed arrow).
**Figure 1. Schematic diagram of the eight-tank process [40].**
The nonlinear mathematical model corresponding to sub-system 1 (the ensemble of two water tanks, denoted Tank 1 (upper level) and Tank 2 (lower level)) is described using Bernoulli's law and the mass balances, obtaining:

$$\frac{dL_2}{dt} = \underbrace{\frac{(1-\gamma_4) k_p}{A_{t_2}}}_{a_4} V_{p_4} - \underbrace{\frac{A_{o_2}}{A_{t_2}}}_{D_2} \sqrt{2 g L_2} + \underbrace{\frac{A_{o_1}}{A_{t_2}}}_{D_1} \sqrt{2 g L_1} \qquad (29)$$

$$\frac{dL_1}{dt} = \underbrace{\frac{\gamma_1 k_p}{A_{t_1}}}_{b_1} V_{p_1} - \frac{A_{o_1}}{A_{t_1}} \sqrt{2 g L_1} \qquad (30)$$

where $g = 981$ cm/s² is the gravitational constant on Earth, and $A_{o_i} = \pi D_{o_i}^2/4$ cm² and $A_{t_i} = \pi D_{t_i}^2/4$ cm² are the cross-sections of the outflow orifice and of Tank $i$, $i = \{1, 2\}$, respectively. The voltage applied to Pump $i$, $i = \{1, 4\}$, is $V_{p_i}$ and the corresponding flow is $k_p V_{p_i}$. The parameters $\gamma_i \in (0, 1)$, $i = \{1, 4\}$, represent the percentages of the flow from Pump $i$ through inlets Out 1 and Out 2, respectively, and are defined as:

$$\gamma_1 = \frac{A_{i_1}}{A_{i_1} + A_{i_2}}, \qquad \gamma_4 = \frac{A_{i_7}}{A_{i_7} + A_{i_8}} \qquad (31)$$

where $A_{i_1} = A_{i_7} = \pi \, \mathrm{Out1}^2/4$ cm² and $A_{i_2} = A_{i_8} = \pi \, \mathrm{Out2}^2/4$ cm² are the upper and lower tank inlet areas. The numerical values of the set-up parameters are taken from the user manual for the quadruple-tank process provided by Quanser and are given in Table 1. Note that sub-system 1, defined by (29) and (30), is coupled with sub-system 4 through Pump 4, since the water level $L_2$ depends on the flow $k_p V_{p_4}$, which is the control input of sub-system 4. The water level $L_1$ of the upper tank Tank 1 depends on the flow provided by Pump 1, i.e., $k_p V_{p_1}$ (see Figure 1).
Following this reasoning and the schematic diagram of the process, which indicates
the interconnection between sub-systems, the remaining models for sub-systems 2, 3 and 4
can be easily derived.
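The sub-system dynamics (29) and (30) can be simulated directly. The sketch below integrates them with forward Euler using the Table 1 parameters; the pump voltages are taken from the equilibrium list in Section 5.1 and are purely illustrative here (the steady levels reached depend on the exact set-up constants).

```python
import numpy as np

# Parameters from Table 1 (Quanser eight-tank set-up)
g, kp = 981.0, 3.3            # cm/s^2, cm^3/(s*V)
At, Ao = 15.517, 0.178        # tank and outlet cross-sections, cm^2
gamma1, gamma4 = 0.6402, 0.6402

def subsystem1_rhs(L1, L2, Vp1, Vp4):
    """Nonlinear dynamics (29)-(30) of sub-system 1 (Tanks 1 and 2)."""
    dL1 = (gamma1 * kp * Vp1 - Ao * np.sqrt(2.0 * g * L1)) / At
    dL2 = ((1.0 - gamma4) * kp * Vp4 + Ao * np.sqrt(2.0 * g * L1)
           - Ao * np.sqrt(2.0 * g * L2)) / At
    return dL1, dL2

# Forward-Euler integration toward a steady state
L1, L2, dt = 5.0, 5.0, 0.05
for _ in range(20000):                       # 1000 s of simulated time
    d1, d2 = subsystem1_rhs(L1, L2, Vp1=3.73, Vp4=8.24)
    L1, L2 = max(L1 + dt * d1, 0.0), max(L2 + dt * d2, 0.0)
print(round(L1, 2), round(L2, 2))            # steady levels in cm
```

At a fixed point of the Euler map the right-hand sides vanish, so the converged levels balance the pump inflows against the Bernoulli outflows exactly.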
The nonlinear sub-system model was linearized via a Taylor expansion around the desired equilibrium value of the lower tank level (i.e., $L_{2_0} = 10$ cm). The same equilibrium value was used for sub-systems 2, 3 and 4.
The process states were chosen as deviations from the equilibrium point, $x_i := L_i - L_{i_0}$, $i = \{1, \dots, 8\}$ (the upper tank equilibrium points were chosen as $L_{1_0} = 3.69$ cm, $L_{3_0} = 6.76$ cm, $L_{5_0} = 2.89$ cm and $L_{7_0} = 4.86$ cm). The input variables were also defined as deviations, $u_i := V_{p_i} - V_{p_{i_0}}$, $i = \{1, \dots, 4\}$ (with the equilibrium values $V_{p_{1_0}} = 3.73$ V, $V_{p_{2_0}} = 9.71$ V, $V_{p_{3_0}} = 6.35$ V and $V_{p_{4_0}} = 8.24$ V).
**Table 1. Eight-tank process from Quanser model parameters.**

| Variable | Value | Unit | Description |
|---|---|---|---|
| Out 1 | 0.635 | cm | "Out 1" orifice diameter |
| Out 2 | 0.476 | cm | "Out 2" orifice diameter |
| $D_{t_i}$ | 4.445 | cm | Inner diameter of Tank $i$, $i \in \{1, \dots, 8\}$ |
| $D_{o_i}$ | 0.476 | cm | Outlet diameter of Tank $i$, $i \in \{1, \dots, 8\}$ |
| $\gamma_i$ | 0.6402 | - | Flow ratio parameter for Pump $i$, $i \in \{1, \dots, 4\}$ |
| $A_{i_1}, A_{i_3}, A_{i_5}, A_{i_7}$ | 0.316 | cm² | Inlet area of Tank $i$, $i \in \{1, 3, 5, 7\}$ |
| $A_{i_2}, A_{i_4}, A_{i_6}, A_{i_8}$ | 0.178 | cm² | Inlet area of Tank $i$, $i \in \{2, 4, 6, 8\}$ |
| $A_{t_i}$ | 15.517 | cm² | Inside cross-section area of Tank $i$, $i \in \{1, \dots, 8\}$ |
| $A_{o_i}$ | 0.178 | cm² | Outlet area of Tank $i$, $i \in \{1, \dots, 8\}$ |
| $k_p$ | 3.3 | cm³/(s·V) | Pump flow constant |
| $g$ | 981 | cm/s² | Gravitational constant on Earth |
Further on, after the linearization procedure, we obtained the following overall linearized state-space model of the eight-tank process:

$$\dot{x} = \underbrace{\begin{bmatrix} -\eta_1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \eta_1 & -\eta_2 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & -\eta_3 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & \eta_3 & -\eta_4 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & -\eta_5 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & \eta_5 & -\eta_6 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & -\eta_7 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & \eta_7 & -\eta_8 \end{bmatrix}}_{\bar{A}_c} x + \underbrace{\begin{bmatrix} b_1 & 0 & 0 & 0 \\ 0 & 0 & 0 & a_4 \\ 0 & b_2 & 0 & 0 \\ a_1 & 0 & 0 & 0 \\ 0 & 0 & b_3 & 0 \\ 0 & a_2 & 0 & 0 \\ 0 & 0 & 0 & b_4 \\ 0 & 0 & a_3 & 0 \end{bmatrix}}_{\bar{B}_c} u, \qquad y = \underbrace{\begin{bmatrix} 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \end{bmatrix}}_{\bar{C}_c} x \qquad (32)$$

where $x = [x_1 \dots x_8]^T$ is the state vector, $u = [u_1 \dots u_4]^T$ is the input vector and $y = [y_1 \dots y_4]^T$ is the output vector. The parameters $\eta_i$, $i \in \{1, \dots, 8\}$, were computed from the partial derivatives of the outflow terms evaluated at the equilibrium levels.
By replacing all the numerical values provided in Table 1, we obtained the following system matrices:
$$\bar{A}_c = \begin{bmatrix} -0.13 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0.13 & -0.08 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & -0.09 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0.09 & -0.08 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & -0.14 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0.14 & -0.08 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & -0.11 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0.11 & -0.08 \end{bmatrix}, \qquad \bar{B}_c = \begin{bmatrix} 0.13 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0.07 \\ 0 & 0.13 & 0 & 0 \\ 0.07 & 0 & 0 & 0 \\ 0 & 0 & 0.13 & 0 \\ 0 & 0.07 & 0 & 0 \\ 0 & 0 & 0 & 0.13 \\ 0 & 0 & 0.07 & 0 \end{bmatrix} \qquad (33)$$
The overall continuous-time state-space model (32) was discretized with the sampling period $T_s = 1$ s using the MATLAB function c2d.m with the zero-order-hold discretization method, obtaining:

$$x_d(k+1) = \bar{A}_d x_d(k) + \bar{B}_d u_d(k), \qquad y_d(k) = \bar{C}_d x_d(k) \qquad (34)$$

where $\bar{A}_d$, $\bar{B}_d$ and $\bar{C}_d$ are the discrete-time counterparts of the continuous-time system matrices from (32).
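An equivalent discretization in Python uses `scipy.signal.cont2discrete` with the `zoh` method. Below, the continuous-time block of sub-system 1 is taken from the rounded entries of (33); because those entries are rounded, the resulting discrete matrices differ slightly from the values later listed in (36).

```python
import numpy as np
from scipy.signal import cont2discrete

# Continuous-time block of sub-system 1, from the rounded entries of (33)
Ac = np.array([[-0.13, 0.0],
               [0.13, -0.08]])
Bc = np.array([[0.13],
               [0.0]])
Cc = np.array([[0.0, 1.0]])
Dc = np.zeros((1, 1))

Ad, Bd, Cd, Dd, _ = cont2discrete((Ac, Bc, Cc, Dc), dt=1.0, method="zoh")
print(np.round(Ad, 4))   # close to the A_d1 matrix in (36)
```

For a triangular continuous-time matrix, the diagonal entries of the zero-order-hold discretization are simply $e^{\lambda T_s}$ for each continuous-time pole $\lambda$.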
Next, the system was decomposed into four input-coupled sub-systems, hereafter denoted $S_i$, $i \in \{1, \dots, 4\}$, with the following components:

$$S_1: \; x_{S_1} = [x_{d_1} \; x_{d_2}]^T, \quad u_{S_1} = u_1, \quad \mathcal{N}_{S_1} = \{4\}, \quad y_{S_1} = x_{d_2}$$
$$S_2: \; x_{S_2} = [x_{d_3} \; x_{d_4}]^T, \quad u_{S_2} = u_2, \quad \mathcal{N}_{S_2} = \{1\}, \quad y_{S_2} = x_{d_4}$$
$$S_3: \; x_{S_3} = [x_{d_5} \; x_{d_6}]^T, \quad u_{S_3} = u_3, \quad \mathcal{N}_{S_3} = \{2\}, \quad y_{S_3} = x_{d_6}$$
$$S_4: \; x_{S_4} = [x_{d_7} \; x_{d_8}]^T, \quad u_{S_4} = u_4, \quad \mathcal{N}_{S_4} = \{3\}, \quad y_{S_4} = x_{d_8} \qquad (35)$$

where $x_{S_1}$, $u_{S_1}$, $\mathcal{N}_{S_1}$ and $y_{S_1}$ are the state, input, neighbourhood set and output of $S_1$, respectively. Similar definitions hold for sub-systems $S_2$, $S_3$ and $S_4$.
With the state, input and output partitions given in (35), the discrete-time matrices of
sub-systems Si, i ∈{1, . . ., 4}, are the following:
$$S_1: \; \bar{A}_{d_1} = \begin{bmatrix} 0.8761 & 0 \\ 0.1189 & 0.9227 \end{bmatrix}, \quad \bar{B}_{d_{11}} = \begin{bmatrix} 0.1275 \\ 0.0084 \end{bmatrix}, \quad \bar{B}_{d_{14}} = \begin{bmatrix} 0 \\ 0.0735 \end{bmatrix}, \quad \bar{C}_{d_1} = \begin{bmatrix} 0 & 1 \end{bmatrix}$$
$$S_2: \; \bar{A}_{d_2} = \begin{bmatrix} 0.9069 & 0 \\ 0.0894 & 0.9227 \end{bmatrix}, \quad \bar{B}_{d_{22}} = \begin{bmatrix} 0.1297 \\ 0.0063 \end{bmatrix}, \quad \bar{B}_{d_{21}} = \begin{bmatrix} 0 \\ 0.0735 \end{bmatrix}, \quad \bar{C}_{d_2} = \begin{bmatrix} 0 & 1 \end{bmatrix}$$
$$S_3: \; \bar{A}_{d_3} = \begin{bmatrix} 0.8612 & 0 \\ 0.1333 & 0.9227 \end{bmatrix}, \quad \bar{B}_{d_{33}} = \begin{bmatrix} 0.1265 \\ 0.0094 \end{bmatrix}, \quad \bar{B}_{d_{32}} = \begin{bmatrix} 0 \\ 0.0735 \end{bmatrix}, \quad \bar{C}_{d_3} = \begin{bmatrix} 0 & 1 \end{bmatrix}$$
$$S_4: \; \bar{A}_{d_4} = \begin{bmatrix} 0.8912 & 0 \\ 0.1045 & 0.9227 \end{bmatrix}, \quad \bar{B}_{d_{44}} = \begin{bmatrix} 0.1286 \\ 0.0074 \end{bmatrix}, \quad \bar{B}_{d_{43}} = \begin{bmatrix} 0 \\ 0.0735 \end{bmatrix}, \quad \bar{C}_{d_4} = \begin{bmatrix} 0 & 1 \end{bmatrix} \qquad (36)$$
Each sub-system Si, i ∈{1, . . ., 4}, with the state-space model matrices given in (36),
was converted to a minimal realization of its corresponding transfer function form using
the MATLAB functions ss2tf.m and minreal.m, obtaining:
$$S_1: \; \bar{G}_{d_{11}} = \frac{0.00839 q^{-1} + 0.007816 q^{-2}}{1 - 1.799 q^{-1} + 0.8084 q^{-2}}, \qquad \bar{G}_{d_{14}} = \frac{0.07351 q^{-1}}{1 - 0.9227 q^{-1}}$$
$$S_2: \; \bar{G}_{d_{22}} = \frac{0.006274 q^{-1} + 0.005912 q^{-2}}{1 - 1.83 q^{-1} + 0.8368 q^{-2}}, \qquad \bar{G}_{d_{21}} = \frac{0.07351 q^{-1}}{1 - 0.9227 q^{-1}}$$
$$S_3: \; \bar{G}_{d_{33}} = \frac{0.009428 q^{-1} + 0.008733 q^{-2}}{1 - 1.784 q^{-1} + 0.7946 q^{-2}}, \qquad \bar{G}_{d_{32}} = \frac{0.07351 q^{-1}}{1 - 0.9227 q^{-1}} \qquad (37)$$
$$S_4: \; \bar{G}_{d_{44}} = \frac{0.007351 q^{-1} + 0.006887 q^{-2}}{1 - 1.814 q^{-1} + 0.8223 q^{-2}}, \qquad \bar{G}_{d_{43}} = \frac{0.07351 q^{-1}}{1 - 0.9227 q^{-1}}$$
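The same conversion can be reproduced in Python with `scipy.signal.ss2tf` (the counterpart of MATLAB's ss2tf.m). Using the $S_1$ matrices from (36), the recovered polynomial coefficients agree with (37) up to the rounding of the entries in (36).

```python
import numpy as np
from scipy.signal import ss2tf

# Discrete-time matrices of sub-system S1 from (36)
Ad = np.array([[0.8761, 0.0], [0.1189, 0.9227]])
Bd = np.array([[0.1275], [0.0084]])
Cd = np.array([[0.0, 1.0]])
Dd = np.zeros((1, 1))

num, den = ss2tf(Ad, Bd, Cd, Dd)
# num/den are polynomials in z; dividing by z^2 yields the q^-1 form of (37)
print(np.round(num, 5), np.round(den, 4))
```

The denominator coefficients follow directly from the trace and determinant of $\bar{A}_{d_1}$, which is a quick sanity check on the conversion.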
Since DMPCSS has a velocity-form formulation, each sub-system $S_i$, $i \in \{1, \dots, 4\}$, with the state-space model matrices given in (36), was converted to the augmented state-space model (8). Moreover, since the CC algorithm uses an extended model with an integrator, each sub-system $S_i$, $i \in \{1, \dots, 4\}$, with the state-space model matrices given in (36), was also converted to the extended state-space model (21).
_5.2. Simulation Results_
The proposed DMPC and CC strategies have the following optimization parameters and constraint limits:
- The sampling period $T_s = 1$ s, the prediction horizon $N_p = 30$ samples and the control horizon $N_c = 30$ samples;
- The input weight matrices $R_i = \alpha I_{N_c}$, with $\alpha = 10$, $\forall i \in \{1, \dots, 4\}$;
- The input weight $\beta = 0.01$, the communication cost weight $\gamma = 0.01$ and the switching horizon $T = 20$ samples;
- The input constraints $0 \text{ V} \leq u_i \leq 22 \text{ V}$, $\forall i \in \{1, \dots, 4\}$;
- The output constraints $0 \text{ cm} \leq y_i \leq 25 \text{ cm}$, $\forall i \in \{1, \dots, 4\}$.
All proposed methodologies were compared in a setpoint tracking test, performed on
the eight-tank process described in Section 5.1. The test had a length of M = 1000 s and
was designed as a series of step changes as follows:
- During the first 200 s, all references ri for all sub-systems Si, i ∈ {1, . . ., 4}, are equal to 5 cm.
- At time 201 s, the reference values are: r1 = 8 cm, r2 = 10 cm, r3 = 12 cm and r4 = 15 cm.
- At time 401 s, the reference values are: r1 = 15 cm, r2 = 12 cm, r3 = 10 cm and r4 = 15 cm.
- At time 601 s, the reference values are: r1 = 10 cm, r2 = 15 cm, r3 = 15 cm and r4 = 12 cm.
- At time 801 s, the reference values are: r1 = 10 cm, r2 = 20 cm, r3 = 15 cm and r4 = 15 cm.
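With Ts = 1 s, seconds map one-to-one to samples, so the test scenario above can be generated as piecewise-constant arrays (a reproducibility sketch with 0-based sample indexing):

```python
# Piecewise-constant reference trajectories for the setpoint tracking test
# (M = 1000 samples, one step change every 200 samples).
M = 1000
breaks = [0, 200, 400, 600, 800]   # start sample of each segment
levels = {                         # reference level per segment (cm)
    1: [5, 8, 15, 10, 10],
    2: [5, 10, 12, 15, 20],
    3: [5, 12, 10, 15, 15],
    4: [5, 15, 15, 12, 15],
}

def build_reference(sub_system):
    r = [0.0] * M
    vals = levels[sub_system]
    for seg, start in enumerate(breaks):
        end = breaks[seg + 1] if seg + 1 < len(breaks) else M
        for k in range(start, end):
            r[k] = float(vals[seg])
    return r

r1, r2, r3, r4 = (build_reference(i) for i in range(1, 5))
```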
**Remark 1.** _For the DMPC strategies, the numerical values of the optimization parameters were chosen empirically, after several numerical simulations, taking into account factors such as the open-loop dynamics of the process and the compromise between good closed-loop performance and small control effort._
_The prediction horizon Np was selected to be large enough for the prediction to cover part of the transient response of the open-loop sub-system. A larger prediction horizon results in a slower closed-loop response, with the benefit of a smaller control effort._
_The input weight matrix Ri was chosen as a compromise between a good tracking error and a small control effort. A smaller value puts more emphasis on the minimization of the tracking error, to the detriment of the control effort. Since the process is hard-constrained in its input values, it makes more sense to bias the optimization toward minimizing the input, with second priority given to the tracking error._
**Remark 2. For the CC strategy with switching topologies, the values for the parameters from the**
_cumulative cost (26) used for the evaluation of the topologies were also empirically chosen, after_
_several tests._
_Similar to the prediction horizon parameter from the DMPC, the value of the horizon T was_
_selected as large enough to cover part of the transient response of the open-loop system. A larger_
_value for the horizon T will influence the switching rate between topologies._
_The weight γ was selected taking into account that the decentralized topology has no enabled links and therefore a zero communication cost. A non-zero, positive γ thus only influences the cumulative cost of the distributed topology. A value that is too large excessively penalizes communication, forcing the activation of the decentralized topology only._
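The evaluation rule described in this remark can be sketched as follows. This is a simplified stand-in for the cumulative cost (26): predicted tracking and input cost over the horizon plus γ per enabled link; the candidate data are toy numbers, not the paper's predictions:

```python
# Select the communication topology with the smallest cumulative cost:
#   J_topo = sum_{k=1..T} (e(k)^2 + beta*u(k)^2) + gamma * n_links
# A large gamma penalizes communication and forces the decentralized
# topology (0 links); gamma = 0 lets pure performance decide.
def topology_cost(pred_err, pred_u, beta, gamma, n_links):
    return sum(e * e + beta * u * u for e, u in zip(pred_err, pred_u)) + gamma * n_links

def select_topology(candidates, beta, gamma):
    # candidates: {name: (pred_err, pred_u, n_links)}
    return min(candidates,
               key=lambda name: topology_cost(candidates[name][0],
                                              candidates[name][1],
                                              beta, gamma,
                                              candidates[name][2]))

# Toy example: the distributed topology predicts a slightly better response
# over the horizon but uses 4 communication links.
candidates = {
    "decentralized": ([0.5, 0.3, 0.2], [1.0, 0.8, 0.6], 0),
    "distributed":   ([0.4, 0.2, 0.1], [1.0, 0.8, 0.6], 4),
}
best_cheap_comm = select_topology(candidates, beta=0.01, gamma=0.01)
best_costly_comm = select_topology(candidates, beta=0.01, gamma=1.0)
```

With cheap communication the distributed topology wins; an excessive γ forces the decentralized one, as the remark warns.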
The comparative simulation results for the DMPCSS and DMPCIO strategies are given in Figures 2 and 3, depicting the outputs and inputs, respectively. As expected, despite the fact that these two DMPC algorithms have different implementations, using the same optimization parameters and in identical simulation conditions we obtained quasi-indistinguishable transient performances. This is because the distributed methodologies are similar, exchanging the optimal input between coupled sub-systems.
Next, the decentralized CCK dec and the distributed CCK dist communication topologies designed for the coalitional control strategy were comparatively tested in the same
simulation scenario. The results obtained are given in Figures 4 and 5, depicting the outputs
and inputs, respectively. As previously mentioned, within the decentralized formulation,
there are no communication links enabled between coupled sub-systems.
**Figure 2. Comparative simulation results for DMPCSS (red lines) and DMPCIO (blue lines) strategies—**
outputs for all sub-systems.
**Figure 3. Comparative simulation results for DMPCSS (red lines) and DMPCIO (blue lines) strategies—**
inputs for all sub-systems.
**Figure 4.** Comparative simulation results for CCK dec (green lines) and CCK dist (black lines)
strategies—outputs for all sub-systems.
**Figure 5.** Comparative simulation results for CCK dec (green lines) and CCK dist (black lines)
strategies—inputs for all sub-systems.
For this reason, the control effort is more aggressive during the transients than with the distributed topology (see Figure 5, around time 600 samples). In the latter, communication links are open between coupled sub-systems, which results in a smoother output response.
Moreover, a strength of the proposed coalitional control methodology is the dynamic configuration of the communication topology. Thus, the next step in our analysis was to test the efficiency of the algorithm when automatically switching between the decentralized and distributed communication topologies.
The obtained results are presented in Figures 6 and 7, depicting the outputs and inputs,
respectively. In Figure 8, the switching times between the two topologies are presented.
It is interesting to notice in this figure that the distributed topology is activated when
the need for coupling information is more stringent to ensure a better response. Thus,
between time 0 samples and time 390 samples, the topology is decentralized. When the
simulation conditions are more challenging (see Figure 6, in the interval 390–600 samples
and 790–1000 samples), the communication topology switches to distributed and shares
information between sub-systems. This is partially due to the fact that sub-systems S2 and
_S3 are coupled and have opposite setpoint changes._
Another remark is that, for this setup, a desired decrease in a tank's water level requires a decrease in the water flow and, implicitly, a lower pump voltage. However, if the coupled sub-system undergoes a significant water-level increase, the physical coupling between sub-systems can drive the pump into saturation at its lower limit of 0 V (see Figure 7 at time 200 samples for sub-system S1).
**Figure 6. Simulation results for CCK switch strategy—outputs for all sub-systems.**
**Figure 7. Simulation results for CCK switch strategy—inputs for all sub-systems.**
**Figure 8. Switching dynamics for CCK switch strategy—1 corresponds to CCK dist, whereas 0 corresponds to CCK dec.**
_5.3. Discussion_
The performance of the proposed strategies was analyzed with respect to the following
performance index:
$$
J_{cost} = \frac{1}{M} \sum_{i=1}^{4} \sum_{k=1}^{M} \left[ \bigl(r_i(k) - y_i(k)\bigr)^2 + \beta\, u_i(k)^2 \right]
\tag{38}
$$
where M is the length of the simulation time and yi(k), ri(k) and ui(k) are the measured
output, the imposed reference and the computed input of sub-system Si, ∀i ∈{1, . . ., 4},
at sample time k. As the numerical values given in Table 2 show, the DMPCSS has a
slightly smaller cost index than the DMPCIO. When comparing the coalitional strategies by the same criterion, as expected, the coalitional control with the switching communication topology CCK switch outperforms the other two CC strategies, achieving the smallest Jcost.
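Index (38) can be evaluated directly from the logged signals; a minimal sketch (the signal arrays below are placeholders, not the paper's data):

```python
# Performance index (38):
#   J = (1/M) * sum_i sum_k [ (r_i(k) - y_i(k))^2 + beta * u_i(k)^2 ]
def j_cost(refs, outs, ins, beta):
    M = len(refs[0])
    total = 0.0
    for r, y, u in zip(refs, outs, ins):
        total += sum((r[k] - y[k]) ** 2 + beta * u[k] ** 2 for k in range(M))
    return total / M

# Toy check with two sub-systems and three samples.
refs = [[1.0, 1.0, 1.0], [2.0, 2.0, 2.0]]
outs = [[0.0, 1.0, 1.0], [0.0, 2.0, 2.0]]
ins  = [[1.0, 0.0, 0.0], [2.0, 0.0, 0.0]]
J = j_cost(refs, outs, ins, beta=0.01)
```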
**Table 2.** Comparative analysis for DMPCSS, DMPCIO, CCK dist, CCK dec and CCK switch algorithms based on performance index Jcost, overshoot (σ) and settling time (tt).

| Algorithm | Jcost | σ (%) | tt (s) |
|---|---|---|---|
| DMPCSS | 4.6103 | 3.9102 | 33 |
| DMPCIO | 5.0120 | 2.3250 | 31 |
| CCK dec | 4.4757 | 0 | 29 |
| CCK dist | 5.4070 | 4.6806 | 54 |
| CCK switch | 4.4682 | 0 | 30 |
Noteworthy is that, from this cost analysis, the coalitional control methods with the gain feedback formulations have performances similar to the DMPC strategies. This outcome was expected, since the CC algorithms were designed using the results obtained with the DMPCSS method as a reference.
In terms of transient response performances (i.e., overshoot and settling time), for
simplicity, only sub-system S1 was analyzed, at the beginning of the experiment (first
100 samples). The results are also given in Table 2, and confirm that the DMPC strategies
have comparable results with the coalitional control. The latter algorithm, based on gain
feedback matrix control, provides an alternative control strategy to the optimization-based
distributed model predictive control methods, and can be easily implemented on embedded
systems due to its simpler formulation.
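The overshoot and settling time reported in Table 2 can be extracted from a logged step response as sketched below; the 2% settling band is an assumption, since the paper does not state the band it used:

```python
# Overshoot (%) and settling time (samples) of a step response y toward ref.
# The settling time is taken as one past the last sample outside a band of
# +/- band*ref around the reference (band = 2% here, an assumed value).
def step_metrics(y, ref, band=0.02):
    overshoot = max(0.0, (max(y) - ref) / ref * 100.0)
    tol = band * ref
    settle = 0
    for k, v in enumerate(y):
        if abs(v - ref) > tol:
            settle = k + 1
    return overshoot, settle

# Toy response settling to ref = 5 cm.
y = [0.0, 3.0, 6.0, 5.2, 4.95, 5.0, 5.0]
sigma, tt = step_metrics(y, ref=5.0)
```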
The computation time required by the local controller to compute the solution at each sampling instant is: DMPCSS 6.75 × 10^−3 s, DMPCIO 5.75 × 10^−3 s, CCK dec 6.3609 × 10^−8 s, CCK dist 6.4234 × 10^−8 s and CCK switch 6.8371 × 10^−8 s. These numerical values show that the CC strategy is more time-efficient than the DMPC methods.
Note that the numerical value of the Jcost for CCK switch given in Table 2 depends on
the simulation test (i.e., the switching dynamics from Figure 8). Another simulation test,
with other references, can give different results. The overall index value will be influenced
by which topology is ‘dominant’ in the switching test depending on the corresponding
simulation scenario.
To this end, an additional analysis was performed to evaluate the performance cost
for multiple tracking scenarios. Hence, a set of 50 references was generated with the
following characteristics:
- Length of the simulation time M = 500.
- The input weight β = 0.001.
- During the first 100 s, reference r1 = 10 cm; at time 101 s, r1 steps to a randomly generated value between 5 and 15 cm.
- During the first 200 s, reference r2 = 10 cm; at time 201 s, r2 steps to a randomly generated value between 5 and 15 cm.
- During the first 300 s, reference r3 = 10 cm; at time 301 s, r3 steps to a randomly generated value between 5 and 15 cm.
- During the first 400 s, reference r4 = 10 cm; at time 401 s, r4 steps to a randomly generated value between 5 and 15 cm.
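The 50 scenarios can be generated as sketched below; the paper does not specify its random generator or seed, so both are assumptions made for reproducibility:

```python
import random

# Generate one tracking scenario: each sub-system i holds 10 cm, then steps
# to a random level in [5, 15] cm at sample 100*i (i = 1..4), over M = 500
# samples (Ts = 1 s, so seconds map one-to-one to samples).
def make_scenario(rng, M=500):
    refs = []
    for i in range(1, 5):
        step_at = 100 * i                 # samples 100, 200, 300, 400
        level = rng.uniform(5.0, 15.0)
        refs.append([10.0] * step_at + [level] * (M - step_at))
    return refs

rng = random.Random(0)                    # fixed seed (an assumption)
scenarios = [make_scenario(rng) for _ in range(50)]
```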
For clarity, only the first 4 out of 50 references are depicted in Figure 9. For all
50 references, the Jcost was computed and is provided in Table 3. The results show that there
are situations (see ref1 and ref11) in which the switching dynamics for CCK switch selects
only one topology for the entire simulation. In such cases, for that reference, the Jcost of CCK switch equals that of the selected fixed topology. For each algorithm, the mean of the Jcost values from Table 3 is 7.32 for
DMPCSS, 4.08 for DMPCIO, 6.96 for CCK dec, 8.39 for CCK dist and 7.02 for CCK switch. These
mean values reinforce the initial findings, i.e., that the coalitional control strategy has a
similar performance to DMPCSS.
Another analysis was performed to investigate the influence of the horizon T value
within the switching algorithm. Using the same reference scenarios provided in Table 3,
the algorithm CCK switch was tested for T = 40 and T = 70. For simplicity, only the mean
of Jcost values are provided. Thus, algorithm CCK switch has an average Jcost of 6.98 and
6.97 for T = 40 and T = 70, respectively. This small difference when compared with the
average cost of 7.02 corresponding to T = 20 implies that there is no gain in using larger
horizon values when evaluating the topologies.
Regarding the imposed hard input and output constraints, only the lower input limit was reached (and respected), whereas the upper limits were never active.
In the coalitional control strategy, when computing the optimal K for each topology (distributed and decentralized), a closed-loop stability constraint (24) was imposed within the problem. After the computation of matrix K for each topology, the stability constraint value, denoted ρ, was computed. Thus, the closed-loop stability of the coalitional control strategy was assessed numerically for both communication topologies, obtaining two values within the unit circle, i.e., ρ = 0.9506 for CCK dec and ρ = 0.9596 for CCK dist.
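This numerical check amounts to verifying that the spectral radius of the closed-loop matrix is below one; a sketch with hypothetical matrices (the paper's actual A, B and K are not reproduced in this excerpt):

```python
import cmath

# Spectral radius of a 2x2 closed-loop matrix Acl = A + B*K; stability of
# the discrete-time loop requires rho(Acl) < 1. For 2x2, the eigenvalues
# follow from the characteristic polynomial z^2 - tr(Acl) z + det(Acl).
def spectral_radius_2x2(M):
    tr = M[0][0] + M[1][1]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    disc = cmath.sqrt(tr * tr - 4.0 * det)
    return max(abs((tr + disc) / 2.0), abs((tr - disc) / 2.0))

# Hypothetical stabilized loop (illustrative numbers only).
A = [[0.95, 0.10], [0.00, 0.98]]
B = [[0.0], [1.0]]
K = [[0.0, -0.05]]
Acl = [[A[i][j] + B[i][0] * K[0][j] for j in range(2)] for i in range(2)]
rho = spectral_radius_2x2(Acl)
```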
**Remark 3. Both DMPC and CC algorithms were tested using an academic simulation benchmark.**
_The simulations were performed using MATLAB R2021a on Windows 10, 64-bit Operating System_
_with a laptop with an Intel Core i5-9850H CPU @ 2.60 GHz and 8 GB RAM. Thus, the DMPC algorithms have not yet been optimized for execution on embedded devices or tested in a real-time setup, but this is a subject of future work. However, the simplicity of the coalitional control_
_formulation, as well as its reduced computation burden, makes it suitable for controlling various_
_coupled sub-systems, using embedded devices with limited storage and computation capabilities._
_This endeavor is subject to ongoing work._
**Figure 9. First 4 out of 50 reference sets scenarios used for the performance analysis provided in**
Table 3.
**Table 3.** Comparative analysis for DMPCSS, DMPCIO, CCK dist, CCK dec and CCK switch algorithms based on performance index Jcost for 50 reference tracking scenarios.

| Algorithm | ref1 | ref2 | ref3 | ref4 | ref5 | ref6 | ref7 | ref8 | ref9 | ref10 |
|---|---|---|---|---|---|---|---|---|---|---|
| DMPCSS | 7.06 | 6.9232 | 7.234 | 6.9941 | 7.4663 | 7.1101 | 6.2858 | 7.5058 | 7.1761 | 7.2923 |
| DMPCIO | 3.8399 | 3.7616 | 3.9228 | 3.7897 | 4.0333 | 3.8603 | 3.4247 | 4.5454 | 3.9086 | 3.9745 |
| CCK dec | 6.6852 | 6.5503 | 6.9014 | 6.6434 | 7.1358 | 6.7618 | 5.8971 | 7.1275 | 6.7765 | 6.957 |
| CCK dist | 8.0269 | 7.9341 | 8.3214 | 7.9283 | 8.6276 | 8.1404 | 7.1244 | 8.4239 | 8.1521 | 8.3999 |
| CCK switch | 6.6852 | 6.6496 | 6.9014 | 6.6434 | 7.2027 | 6.708 | 5.8971 | 7.1338 | 6.9815 | 7.0748 |

| Algorithm | ref11 | ref12 | ref13 | ref14 | ref15 | ref16 | ref17 | ref18 | ref19 | ref20 |
|---|---|---|---|---|---|---|---|---|---|---|
| DMPCSS | 7.1924 | 7.8314 | 7.9128 | 6.8795 | 7.1698 | 7.5848 | 7.3844 | 6.8498 | 6.684 | 7.3929 |
| DMPCIO | 3.906 | 4.6893 | 4.278 | 3.7476 | 3.8727 | 4.1029 | 4.0108 | 3.7335 | 3.6424 | 4.0175 |
| CCK dec | 6.8137 | 7.4959 | 7.5955 | 6.529 | 6.8305 | 7.2667 | 7.0623 | 6.4549 | 6.3308 | 7.0272 |
| CCK dist | 8.1562 | 10.0325 | 9.0328 | 7.8895 | 8.2037 | 8.7419 | 8.5967 | 7.7314 | 7.6851 | 8.5349 |
| CCK switch | 6.8137 | 7.3403 | 7.5955 | 6.7604 | 6.8305 | 7.3402 | 7.0623 | 6.4549 | 6.3626 | 7.0342 |

| Algorithm | ref21 | ref22 | ref23 | ref24 | ref25 | ref26 | ref27 | ref28 | ref29 | ref30 |
|---|---|---|---|---|---|---|---|---|---|---|
| DMPCSS | 6.6234 | 6.526 | 7.534 | 8.3664 | 8.869 | 6.8697 | 6.9339 | 7.4802 | 6.8035 | 6.4023 |
| DMPCIO | 3.6011 | 3.5644 | 4.2458 | 4.5478 | 6.5396 | 3.7348 | 3.7476 | 4.0698 | 3.7181 | 3.4967 |
| CCK dec | 6.2535 | 6.1489 | 7.1624 | 8.0334 | 8.5175 | 6.5079 | 6.588 | 7.1204 | 6.4382 | 6.016 |
| CCK dist | 7.4819 | 7.4183 | 8.8481 | 9.691 | 10.2905 | 7.8125 | 7.9544 | 8.5508 | 7.7259 | 7.2463 |
| CCK switch | 6.2535 | 6.1489 | 7.4142 | 8.434 | 8.5175 | 6.5079 | 6.588 | 7.1204 | 6.4382 | 6.016 |
**Table 3.** Cont.

| Algorithm | ref31 | ref32 | ref33 | ref34 | ref35 | ref36 | ref37 | ref38 | ref39 | ref40 |
|---|---|---|---|---|---|---|---|---|---|---|
| DMPCSS | 10.0696 | 7.6568 | 6.1639 | 6.9149 | 7.0216 | 6.7982 | 8.2884 | 7.6686 | 8.1218 | 6.5389 |
| DMPCIO | 7.3631 | 4.1428 | 3.3615 | 3.7589 | 3.8114 | 3.708 | 5.2155 | 4.1158 | 4.3801 | 3.5525 |
| CCK dec | 9.7118 | 7.2916 | 5.7669 | 6.5551 | 6.6673 | 6.4463 | 7.9295 | 7.3523 | 7.8033 | 6.1602 |
| CCK dist | 11.4071 | 8.7722 | 6.9522 | 7.8125 | 8.0195 | 7.795 | 9.2517 | 8.8459 | 9.3484 | 7.4062 |
| CCK switch | 9.9748 | 7.2916 | 5.7669 | 6.6249 | 6.7133 | 6.4463 | 7.9884 | 7.6813 | 7.8033 | 6.2888 |

| Algorithm | ref41 | ref42 | ref43 | ref44 | ref45 | ref46 | ref47 | ref48 | ref49 | ref50 |
|---|---|---|---|---|---|---|---|---|---|---|
| DMPCSS | 7.9808 | 7.4112 | 7.6753 | 7.3881 | 8.3035 | 6.7645 | 7.0579 | 7.3141 | 7.4849 | 7.1852 |
| DMPCIO | 4.2934 | 4.347 | 4.1347 | 4.0217 | 4.5533 | 3.6815 | 3.8437 | 3.9524 | 4.035 | 3.8977 |
| CCK dec | 7.6536 | 7.0194 | 7.3547 | 7.0098 | 7.9717 | 6.4051 | 6.6749 | 6.9475 | 7.1379 | 6.7994 |
| CCK dist | 9.0457 | 8.8949 | 8.8047 | 8.3742 | 9.8268 | 7.7782 | 7.9759 | 8.2984 | 8.4842 | 8.1058 |
| CCK switch | 7.494 | 7.7026 | 7.3214 | 7.0098 | 7.8147 | 6.5034 | 6.6749 | 7.1071 | 7.2662 | 6.7994 |
**6. Conclusions**
In this paper, a comparative performance analysis for two classes of control strategies
was performed. When testing the DMPC and coalitional control strategies in a simulation
setup, chosen as an eight-tank process with interconnected sub-systems, the results reveal
that the coalitional methodology, based on feedback gain matrix control, is a suitable
replacement for the optimization-based DMPC algorithms. Since the DMPC algorithm is based on online optimization and requires specialized optimization software, it is not trivial to deploy on embedded systems with limited capabilities. This was the motivation behind
introducing the CC methodology, which has a simpler formulation based on a matrix gain
feedback controller, and, once computed offline, can be easily employed on embedded
systems. These findings are encouraging, and future work will test the proposed coalitional
control strategy in a challenging, real-time experimental setup.
**Author Contributions: Conceptualization, A.M. and C.-F.C.; methodology, A.M., O.P. and C.-F.C.;**
software, A.M. and O.P.; validation, A.M. and O.P.; writing—original draft preparation, A.M.; supervision, C.-F.C. All authors have read and agreed to the published version of the manuscript.
**Funding: The work of A.M. and O.P. was supported by “Institutional development through increas-**
ing the innovation, development and research performance of TUIASI—COMPETE 2.0”, project
funded by contract no. 27PFE /2021, financed by the Romanian government. The work of A.M.
was also supported by “Gheorghe Asachi” Technical University of Iasi (TUIASI) through the Project
“Performance and excellence in postdoctoral research 2022”. The work of C.F.C. was supported by a
grant of the Ministry of Research, Innovation and Digitization, CNCS/CCCDI-UEFISCDI, project
number PN-III-P1-1.1-TE-373 2019-1123, within PNCDI III.
**Institutional Review Board Statement: Not applicable.**
**Informed Consent Statement: Not applicable.**
**Data Availability Statement: Not applicable.**
**Conflicts of Interest: The authors declare no conflict of interest.**
**Abbreviations**
The following abbreviations are used in this manuscript:
MPC Model Predictive Control
DMPC Distributed Model Predictive Control
DMPCSS DMPC with state-space model
DMPCIO DMPC with input–output model
CC Coalitional Control
CCK dec CC with decentralized communication topology
CCK dist CC with distributed communication topology
CCK switch CC with switching communication topology
-----
|
{
"disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.3390/act12070281?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.3390/act12070281, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "CCBY",
"status": "GOLD",
"url": "https://www.mdpi.com/2076-0825/12/7/281/pdf?version=1688974859"
}
| 2,023
|
[
"JournalArticle"
] | true
| 2023-07-10T00:00:00
|
[
{
"paperId": "da62af7dfc39d1b50d0c8fb07ff8b5b6a02fff0e",
"title": "Explicit distributed model predictive control design for chemical processes under constraints and uncertainty"
},
{
"paperId": "b7bb827ad27ac72edd4033123c0e7e7cebe999b5",
"title": "Distributed Robust Model Predictive Control-Based Energy Management Strategy for Islanded Multi-Microgrids Considering Uncertainty"
},
{
"paperId": "f9d42cfa5225b48c5fbf47833d90cb1cbf8a307f",
"title": "Large-scale wind farm control using distributed economic model predictive scheme"
},
{
"paperId": "29be4570bd232fc6abc119f41246153a409800a4",
"title": "Distributed Model Predictive Control Strategy for Constrained High-Speed Virtually Coupled Train Set"
},
{
"paperId": "44958ce35f6eb131fd673a3a0a0a6c4a7da25cd0",
"title": "Cyber-security in networked and distributed model predictive control"
},
{
"paperId": "ae31544c230aeb7ea4d9ac9c3e28391f395c2a7c",
"title": "A survey on clustering methods for distributed and networked control systems"
},
{
"paperId": "6d0c4206ff0b115cbd7b6752dd8cd4b7ef77fb29",
"title": "A Coalitional Distributed Model Predictive Control Perspective for a Cyber-Physical Multi-Agent Application"
},
{
"paperId": "8e7c3e1ee67da4f66a16ed53664f09d6f5c1baa6",
"title": "Distributed model predictive control for joint coordination of demand response and optimal power flow with renewables in smart grid"
},
{
"paperId": "1278210bcacb25adcce4fee953689c9212d85af0",
"title": "Decentralized Game-Theoretical Approaches for Behaviorally-Stable and Efficient Vehicle Platooning"
},
{
"paperId": "06729b1aa3b35de8407b876ae87eb6e4e4f009f7",
"title": "Energy Management Systems for Microgrids: Main Existing Trends in Centralized Control Architectures"
},
{
"paperId": "faa8afe2ac9a332cb44f2a238ef1b35dfdc0ee8c",
"title": "Distributed model predictive control approach for cooperative car-following with guaranteed local and string stability"
},
{
"paperId": "f4ad777c859db6554faca885309f5e292852bd55",
"title": "Distributed economic model predictive control of wastewater treatment plants"
},
{
"paperId": "01b3acac6397ccaff6bf89433a2d7c82725569ed",
"title": "An industrially relevant formulation of a distributed model predictive control algorithm based on minimal process information"
},
{
"paperId": "61eb3ad8ee711bbc5f824ff8609cb7a14ef2ad60",
"title": "Coordinated Distributed MPC for Load Frequency Control of Power System With Wind Farms"
},
{
"paperId": "eb95cef14442a2cf9ff4d68adaf3faa69c8b2024",
"title": "Distributed MPC-based secondary voltage control scheme for autonomous droop-controlled microgrids"
},
{
"paperId": "8de70defde2ae2aafd1cbed3ec6968d534aa8151",
"title": "Distributed MPC for cooperative highway driving and energy-economy validation via microscopic simulations☆"
},
{
"paperId": "c98f2ed5e846a786456e41d4af802da21600250c",
"title": "Coalitional Control: Cooperative Game Theory and Control"
},
{
"paperId": "416e4d95b0f18613059e1c8e8e8f23620ca48af9",
"title": "Online distributed cooperative model predictive control of energy-saving trajectory planning for multiple high-speed train movements"
},
{
"paperId": "5c73c212c9c7ae5a9dfad29192f2124cbafe72cf",
"title": "Distributed model predictive control for railway traffic management"
},
{
"paperId": "ca6483012ff7ea55f971396728ea6bc8fe9b1ebc",
"title": "Distributed MPC of Aggregated Heterogeneous Thermostatically Controlled Loads in Smart Grid"
},
{
"paperId": "4ee15dddd9644b964a756e33b356af1192eb8b41",
"title": "Cooperative distributed model predictive control for wind farms"
},
{
"paperId": "624f79efbed9c22e98c3fd80f7f1d426a2d71295",
"title": "Distributed Model Predictive Control Method for Optimal Coordination of Signal Splits in Urban Traffic Networks"
},
{
"paperId": "0b5f2ff2ab3470178f3c4190a4df66c64eeacc9f",
"title": "Distributed Model Predictive Control of a Wind Farm for Optimal Active Power ControlPart II: Implementation With Clustering-Based Piece-Wise Affine Wind Turbine Model"
},
{
"paperId": "0b6486e20d9319a1d327cda339aedc1edf365ceb",
"title": "Freeways as Systems of Systems: A Distributed Model Predictive Control Scheme"
},
{
"paperId": "91a5f07d0a24ffaa1942e3e4093d26069bfc5cfc",
"title": "Distributed Model Predictive Control Made Easy"
},
{
"paperId": "f0707f321ed62e14ec95c174e1f1a98b2fb90ea3",
"title": "Distributed model predictive control: A tutorial review and future research directions"
},
{
"paperId": "bbd1141c46ffc283c94ef224d1248f45241bdd9c",
"title": "Architectures for distributed and hierarchical Model Predictive Control - A review"
},
{
"paperId": "3c3fbadbcfe98e3dd096865948c486bb3b3ac386",
"title": "Model Predictive Control"
},
{
"paperId": "f3bd06c423ea52731d0d475fc0711d04237abef1",
"title": "On distributed reactive power and storage control on microgrids"
},
{
"paperId": "d09a317a9929e0ea35a4e3f502f80e8b529dd406",
"title": "Combined environmental and economic dispatch of smart grids using distributed model predictive control"
},
{
"paperId": "36521e51a2fe6c5a7acf3078f6655dc1b1e5296f",
"title": "Application of Predictive Control Strategies to the Management of Complex Networks in the Urban Water Cycle"
}
] | 24,083
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/ffeba7f808aff3d5a4eaa80ceaacc47e4efee057
|
[
"Computer Science"
] | 0.853188
|
In Differential Privacy, There is Truth: On Vote Leakage in Ensemble Private Learning
|
ffeba7f808aff3d5a4eaa80ceaacc47e4efee057
|
Neural Information Processing Systems
|
[
{
"authorId": "2110238504",
"name": "Jiaqi Wang"
},
{
"authorId": "39347554",
"name": "R. Schuster"
},
{
"authorId": "47473421",
"name": "Ilia Shumailov"
},
{
"authorId": "47412202",
"name": "D. Lie"
},
{
"authorId": "1967156",
"name": "Nicolas Papernot"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"Neural Inf Process Syst",
"NeurIPS",
"NIPS"
],
"alternate_urls": null,
"id": "d9720b90-d60b-48bc-9df8-87a30b9a60dd",
"issn": null,
"name": "Neural Information Processing Systems",
"type": "conference",
"url": "http://neurips.cc/"
}
|
When learning from sensitive data, care must be taken to ensure that training algorithms address privacy concerns. The canonical Private Aggregation of Teacher Ensembles, or PATE, computes output labels by aggregating the predictions of a (possibly distributed) collection of teacher models via a voting mechanism. The mechanism adds noise to attain a differential privacy guarantee with respect to the teachers' training data. In this work, we observe that this use of noise, which makes PATE predictions stochastic, enables new forms of leakage of sensitive information. For a given input, our adversary exploits this stochasticity to extract high-fidelity histograms of the votes submitted by the underlying teachers. From these histograms, the adversary can learn sensitive attributes of the input such as race, gender, or age. Although this attack does not directly violate the differential privacy guarantee, it clearly violates privacy norms and expectations, and would not be possible at all without the noise inserted to obtain differential privacy. In fact, counter-intuitively, the attack becomes easier as we add more noise to provide stronger differential privacy. We hope this encourages future work to consider privacy holistically rather than treat differential privacy as a panacea.
|
## In Differential Privacy, There is Truth: On Vote Leakage in Ensemble Private Learning
**Jiaqi Wang[a b], Roei Schuster[b], Ilia Shumailov[b c], David Lie[a], Nicolas Papernot[a b]**
_aUniversity of Toronto_
_bVector Institute_
_cUniversity of Oxford_
### Abstract
When learning from sensitive data, care must be taken to ensure that training algorithms address privacy concerns. The canonical Private Aggregation of Teacher
Ensembles, or PATE, computes output labels by aggregating the predictions of a
(possibly distributed) collection of teacher models via a voting mechanism. The
mechanism adds noise to attain a differential privacy guarantee with respect to
the teachers’ training data. In this work, we observe that this use of noise, which
makes PATE predictions stochastic, enables new forms of leakage of sensitive information. For a given input, our adversary exploits this stochasticity to extract
high-fidelity histograms of the votes submitted by the underlying teachers. From
these histograms, the adversary can learn sensitive attributes of the input such as
race, gender, or age. Although this attack does not directly violate the differential
privacy guarantee, it clearly violates privacy norms and expectations, and would
not be possible at all without the noise inserted to obtain differential privacy. In
fact, counter-intuitively, the attack becomes easier as we add more noise to provide stronger differential privacy. We hope this encourages future work to consider
privacy holistically rather than treat differential privacy as a panacea.
### 1 Introduction
The canonical Private Aggregation of Teacher Ensembles, PATE, is a model-agnostic approach to
obtaining differential privacy guarantees for the training data of ML models [1], [2], that is widely
applied [3], [4] and adapted [5]–[7] due to its comparatively favorable trade-off between differential
privacy, utility, and ease of decentralization [6]. In PATE, one considers an ensemble of independently trained teacher models. To generate a prediction, PATE first collects the predictions of these
teachers to form a histogram of votes. It then adds Gaussian noise to the histogram and only reveals
the label achieving plurality. This label can be used directly as a prediction, or to supervise the
training of a student model—in a form of knowledge transfer. Because PATE only reveals the label
receiving the most votes, it comes with guarantees of differential privacy, i.e., the noisy voting mechanism allows us to bound how much information from the training data is potentially exposed [8].
But PATE does not explicitly protect from leakage of a key element in its inference procedure: the
histogram of votes submitted by teachers. While the histogram is used internally and not directly
exposed to clients, a careful examination of PATE reveals that information about the histogram leaks
to clients via query answers.
The histogram can contain highly sensitive information, not the least of which is membership in
minority groups which, if revealed, can be used to discriminate against individuals. We demonstrate
this by showing how an attacker, using the vote histogram of a PATE ensemble trained to predict
an individual’s income, can infer wholly different attributes such as their level of education, even
when the attacker’s instance does not contain any information related to education-level. Why is this
36th Conference on Neural Information Processing Systems (NeurIPS 2022).
possible? At a high level, the histogram of votes can be interpreted as a relatively rich representation
of the instance, one that reveals attributes beyond what the ensemble was designed to predict.
Next, we ask, is this attack a realistic threat? We answer this in the affirmative by designing an
attack that extracts PATE histograms by repeatedly querying PATE, and showing that it reconstructs
internal histograms to near perfection. Our attack builds on the fact that repeated executions of
the same query produce the same internal histogram and a consistent distribution of PATE’s noised
answers corresponding to this histogram. Our adversary can thus sample this distribution many
times via querying, and use it to reconstruct the histogram.
This implies that our attack relies on the stochasticity of PATE’s output, which is a product of Gaussian noise, the very mechanism that was intended to protect privacy. In fact, we find that the larger
the variance of noise added to the histogram votes, the more successful our adversary is in reconstructing the histogram. This is in sharp contrast with the known and expected effect in differential
privacy, that higher noise scale generally leads to stronger privacy. Put simply: differential privacy
makes our attack possible.
An astute reader may observe that histogram leakage does not violate the differential privacy guarantee: the guarantee only protects individual users in the training data, and that data is not compromised here.
While it is absolutely true that our attack does not violate differential privacy, it clearly violates
societal norms and user expectations that differential privacy is often incorrectly assumed to protect.
The fact that differential privacy enables the leakage we exploit nicely underscores the distinction
between technical definitions of privacy and common conceptions of privacy.
The attack is difficult to mitigate. In particular, we show that it is stealthy in the sense that PATE’s
own accounting of “privacy cost” considers our attacker’s set of queries “cheap”, meaning that revealing their answers has a relatively small effect in terms of differential privacy. Consequently,
PATE’s privacy-spending monitoring does not prevent our attack. Our attack also performs only
a moderate number of queries in absolute terms, the same number used by common legitimate
PATE clients, so a hard limit on queries would impede PATE’s utility. We will discuss other mitigation approaches, which are not robust and/or not always usable.
To summarize, our contributions are as follows:
- We posit the novel threat of extracting PATE’s internal vote histograms. We observe and
show that those contain sensitive information such as minority-group membership.
- We show that differential privacy is the cause for histogram-information leakage to PATE’s
querying clients.
- We exploit this leakage to reconstruct the vote histogram. We achieve this by minimizing
the difference between (a) the probability distribution of outcomes observed by repeatedly
querying PATE and (b) an analytical counterpart that we derive.
- We experiment with standard PATE benchmarks, showing that the attack can recover high-fidelity histograms while using a low number of queries that remains well within PATE’s
budget intended to control leakage.
### 2 Vote Histograms are Sensitive Information
We consider an ensemble’s vote histogram, such as those computed internally in PATE. Clearly,
such histograms contain far more information on PATE’s inner workings than its revealed
decision alone, but it is important to clarify that there are common contexts in which this leakage can
actually be used to hurt individuals, because the histograms contain sensitive information about them.
As a prominent example, minority-group membership often leaks via histograms, and can of course
be used to discriminate against group members. To understand this, consider a minority group
that is under-represented in the training data distributed across PATE’s teachers. Each teacher observes some outliers and mis-representative phenomena such as coincidental correlations or out-of-distribution examples. When data on group members is scarce, each model will tend to over-fit to
the outlier phenomena within its own data, creating inter-model inconsistencies and resulting in disagreement, or low consensus, when predicting on similar inputs at test time—which readily presents
itself in vote histograms. Thus, we expect histograms to reveal minority-group members
via low consensus values. Next, we illustrate this via a simple experiment.
**Extracting sensitive attributes from UCI Adult-Income histograms.** We now simulate an attack that receives the vote histogram of a salary-predictor ensemble and uses it to detect a small
minority of the population, specifically, PhD holders. Following the above observation, our attack will simply classify all high-consensus (consensus > 75%) predictions as non-PhD-holders,
whereas low-consensus (< 75%) predictions will be classified as PhD holders. This is a heuristic
attack that relies on intuition rather than learning the ensemble’s behavior using a labeled dataset.
On one hand, it may underestimate the attacker’s ability to detect PhD holders; on the other hand, it
does not require a labeled dataset and only assumes that the attacker sees the votes histogram.
We use the UCI Adult-Income dataset [9], containing around 41,000 data points with basic personal
information on people such as age, work hours, weight, education, marital status, and more. PhD
holders form about 1% of this dataset. We randomly selected 80% of the dataset for training, and
held out the rest for testing. We randomly partitioned the data into 250 disjoint sets. For each,
we fitted a random forest model (using a hyperparameter grid search, see Appendix D) predicting
whether income is above or below $50,000. For both training and test data, we removed the data
columns explicitly indicating education levels, that is, training and test individuals do not contain
any feature that directly distinguishes PhD from non-PhD holders.
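The consensus heuristic described above amounts to a threshold test on the vote histogram. As a sketch (function names are ours; the 75% threshold comes from the text):

```python
def consensus(histogram):
    """Fraction of teachers that vote for the plurality class."""
    return max(histogram) / sum(histogram)

def looks_like_phd_holder(histogram, threshold=0.75):
    """Heuristic from the text: low consensus suggests a minority-group member."""
    return consensus(histogram) < threshold
```

For example, with 250 teachers, a 240-vs-10 vote split (96% consensus) would be classified as a non-PhD-holder, while a 140-vs-110 split (56% consensus) would be flagged as a likely PhD holder.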
Figure 1 shows the distribution of high-consensus and low-consensus predictions on the test set (to
make the effect clearer, we balanced the minority and majority groups in the test set by randomly
removing most of the non-PhD samples). We observe that low consensus indeed indicates minority-group
membership. Our attacker’s precision is not particularly high (75% on the balanced set), but
they can still use this signal to discriminate against minority groups.

Figure 1: High vs. low-consensus distributions of the PhD-detection attack: vote histograms of
minority-group members present lower consensus, allowing an attacker to identify them.

**End-to-end scenario and other attacks.** Appendix E presents this attack in an end-to-end
scenario where the attacker does not have direct access to the histogram and has to first query
a PATE instance to infer it, using our methodology in Section 3. More sophisticated attackers
can look for distinctive histogram patterns that characterize certain groups; the attack should
become more accurate as more models are added to the ensemble, refining the attacker’s histogram
measurement; and precision can be amplified if the attacker holds multiple samples that are known
to belong to the same group. Further, we note that sensitive-attribute extraction is not the only
example of vote histograms leaking sensitive information: an attack could use votes to try to infer
dataset properties [10] or distinguish between different partitions of the data associated with the
different teachers in the ensemble.
### 3 How to Extract PATE Histograms
Having established that vote histogram leakage poses a risk to privacy and fairness, we proceed to
provide a generic method for extracting vote histograms from PATE.
**3.1** **Problem Formulation and Attack Model**
**A primer on PATE.** The PATE framework begins by independently training an ensemble of models, called teachers, on partitions of the private data. There is no particular requirement for the
training procedure of each of these teacher models; the only constraint is that the partitions be disjoint. Queries made by clients are answered as follows: (1) each teacher model predicts a label on
the instance, (2) the PATE aggregator builds a histogram of class votes submitted by teachers, (3)
Gaussian noise is added to this histogram, and (4) the client receives the noised histogram’s argmax
class (henceforth result class). This noisy voting mechanism gives PATE its differential privacy
(DP) guarantee, in what is an application of the Gaussian mechanism [11].
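For concreteness, steps (2)–(4) of the query-answering procedure can be sketched as follows (a minimal illustration, not the authors’ released implementation; function names are ours, and the default σ = 40 is taken from the evaluation settings later in the paper):

```python
import numpy as np

# Seeded generator so the sketch is reproducible.
rng = np.random.default_rng(0)

def vote_histogram(teacher_predictions, num_classes):
    """Step (2): count the teachers' votes per class."""
    return np.bincount(teacher_predictions, minlength=num_classes)

def noisy_argmax(histogram, sigma=40.0):
    """Steps (3)-(4): add Gaussian noise to the vote histogram and reveal
    only the plurality label, never the vote counts themselves."""
    noisy = np.asarray(histogram, dtype=float) + rng.normal(0.0, sigma, size=len(histogram))
    return int(np.argmax(noisy))
```

Note that only the result class leaves `noisy_argmax`; the histogram itself stays internal, which is precisely what the attack in Section 3 tries to undo.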
To preserve differential privacy, PATE tracks the privacy cost of the set of past queries, and stops
answering queries once the cost surpasses the privacy budget. The cost computation is parameterized
by a size δ. The key differential privacy guarantee of PATE can be stated as follows: for a given set
of queries with cost ε, PATE is (ε, δ) differentially private. Put succinctly, ε bounds an adversary’s
ability to distinguish between any adjacent training datasets, whereas δ bounds the (usually small)
probability, over PATE’s randomness, of this bound not holding. We defer additional details on
differential privacy in PATE to Appendix A.
**Attacker’s motivation.** Our attacker’s goal is to recover the histogram of the vote counts when PATE
labels an instance. Formally, given N predictors {P_1, ..., P_N} and a target input a, our attacker
wants to infer H ≡ Count(P_1(a), ..., P_N(a)) = [h_1, ..., h_c], where Count counts the number of
appearances of each element of [c]. Vote histograms can be used to extract potentially-sensitive
information about an instance, such as its race, gender, or religion (see Section 2).
**Attacker’s access and knowledge.** Our attacker
can send queries to the aggregator and receive the
label predicted by PATE (i.e., the output of the noisy
voting mechanism). This may be possible because
the aggregator willfully exposes the predictions of
PATE, e.g., through an MLaaS API. Alternatively,
fully-decentralized implementations of PATE have
been proposed where the central aggregator is replaced with a cryptographic multi-party computation
protocol [6], and its output is exposed directly. Figure 2 visualizes the workflow of our attack.
Figure 2: In an online phase, the attacker sends a specific query to PATE repeatedly
and receives labels output by the noisy argmax. Offline, the attacker uses the labels
to recover the histogram by constructing and solving an optimization problem.
In PATE, the parameters (mean, variance) of the noise added during aggregation are public domain [2]; we therefore assume the attacker knows them. We also assume the attacker knows the
number of teacher models N, which may or may not be public. This assumption is only necessary
to shift the attacker’s learned distribution by a constant to attain a low L1 approximation error when
reconstructing histograms (Section 3). We note that the attacker could just as easily exploit the leakage (e.g. to learn sensitive attributes or differentiate between training sets) without it, but we chose
to instead make this assumption to simplify result presentation and interpretation.[1]
**3.2** **Our histogram reconstruction attack**
The idea behind the attack, given in pseudo-code in Algorithm 2, is as follows: let Q be a function
that computes the output-class probability distribution of PATE given a vote histogram H. First, our
attacker will sample PATE to find an estimate q̄ ≈ Q(H) for this distribution. Second, the attacker
will use gradient descent to find Ĥ that minimizes the Euclidean distance between Q(Ĥ) and q̄.
Finally, they shift the estimated histogram Ĥ by a constant to account for the number of teachers
(this step assumes the number of teachers is known, but is done mostly for presentation purposes;
see Section 3.1). We now detail these 3 steps.
**Step 1: Monte Carlo approximation.** The first step will sample PATE M times and estimate the
distribution over PATE’s outputs, q̄ ≈ Q(H), by setting each class probability to its Monte Carlo
estimated mean frequency, i.e., q̄_i ← (1/M) Σ_{j=1..M} q_i^j, where q_i^j indicates whether class i was sampled in
the jth step. By the law of large numbers, as M increases, q̄_i converges to i’s sampling probability
[Q(H)]_i, and we can expect the attacker’s estimate produced in the next steps to be more accurate.
Our attacker would want to increase M as much as possible, until they exceed PATE’s privacy
budget.
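Step 1 can be sketched as follows; `query_pate` is a hypothetical stand-in for whatever interface exposes PATE’s labels for a fixed input (function and parameter names are ours):

```python
import numpy as np

def estimate_output_distribution(query_pate, M, num_classes):
    """Step 1 (Monte Carlo): issue the same query M times and estimate
    the per-class frequency q-bar of PATE's noisy answers."""
    counts = np.zeros(num_classes)
    for _ in range(M):
        counts[query_pate()] += 1.0
    return counts / M
```

Because every call replays the same internal histogram with fresh noise, the returned frequencies converge to Q(H) as M grows.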
Indeed, in PATE, the privacy leakage expended by each individual query can then be composed over
multiple queries to obtain the total privacy cost ε needed to answer the set of queries. Once the total
privacy cost ε exceeds a maximum tolerable privacy budget, PATE must stop answering queries to
preserve differential privacy. Section 4 shows that the attack succeeds for values of M that remain
well below PATE’s privacy budget, and are also moderate in absolute value, as they are similar to
the query number of student models that use PATE.

¹ Indeed, we could avoid this assumption while still retaining low error if we measured the attacker’s error
with shift-invariant distances, like Pearson correlation.
**Step 2: constructing the optimization objective.** Our attacker wants to find Ĥ such that
‖Q(Ĥ) − q̄‖₂ is minimized, where ‖·‖₂ denotes the Euclidean norm. Given a (differentiable)
closed-form expression for Q, it becomes natural to program and solve this with modern
gradient-based optimization frameworks. Theorem 1 provides a closed-form expression, and
our attacker will use a differentiable approximation of this expression, as explained below.

**Algorithm 2 Attack pseudocode**

**Input:**
1: N ∈ ℕ ▷ total number of teachers (see Section 3.1 for why this is needed)
2: O ▷ PATE instance
3: T, λ ▷ optimization termination threshold and learning rate
**Output:** Ĥ
4: S ← sample(O, M) ▷ sampling PATE M times and storing the answers into S ∈ {1, ..., c}^M
5: for i = 1, 2, ..., M do
6: &nbsp;&nbsp; for j = 1, 2, ..., c do
7: &nbsp;&nbsp;&nbsp;&nbsp; q_j^i = int(S^i == j) ▷ q_j^i = 1 if S^i = j, 0 otherwise
8: &nbsp;&nbsp; end for
9: end for
10: q̄ ← 0^c ▷ initialization of q̄ with a 0 vector
11: for i = 1, 2, ..., c do
12: &nbsp;&nbsp; q̄_i ← (1/M) Σ_{j=1..M} q_i^j
13: end for
14: Ĥ ← 0^c ▷ initialization of Ĥ; here we use an all-zero array of length c
15: while ‖Q(Ĥ) − q̄‖₂ > T do
16: &nbsp;&nbsp; Ĥ ← Ĥ − λ∇_Ĥ ‖Q(Ĥ) − q̄‖₂
17: end while
18: Ĥ ← Ĥ + (N − Σ Ĥ)/c ▷ shift Ĥ to sum to N
**return** Ĥ

**Theorem 1.** _Let H = [H_1, ..., H_c] be the vote histogram for the c classes, and let PATE’s
Gaussian-mechanism function be Agg(H) ≡ argmax_i {H_i + S_i}, where S = [S_1, ..., S_c] is a
vector of c samples from a zero-mean normal distribution with variance σ². Then the probability
that the randomized aggregator outputs the class k is given by_

[Q(H)]_k = P(Agg(H) = k) = ∫_{−∞}^{∞} ∏_{i=1..c, i≠k} Φ_i(α) φ_k(α) dα,

_where Φ_i(·) is the cumulative probability distribution (CDF) of N(H_i, σ²) (the normal distribution
with mean H_i and variance σ²) and φ_k(·) is the probability density function (PDF) of N(H_k, σ²)._

_Proof._ [Q(H)]_k = P(Agg(H) = k) is the probability that H_k + S_k = max_i {H_i + S_i}. For any
k, H_k + S_k is a random variable that follows a normal distribution with mean H_k and variance σ².
Let g_k = H_k + S_k; then g_k ∼ N(H_k, σ²). [Q(H)]_k is the probability that g_k is greater than g_j
for all j ∈ {1, ..., k−1, k+1, ..., c}:

[Q(H)]_k = P(Agg(H) = k)
= P(g_k > g_1, ..., g_k > g_{k−1}, g_k > g_{k+1}, ..., g_k > g_c)
= ∫_{−∞}^{∞} ∏_{i=1..c, i≠k} P(g_i < α | g_k = α) P(g_k = α) dα
= ∫_{−∞}^{∞} ∏_{i=1..c, i≠k} Φ_i(α) φ_k(α) dα.
The expression in Theorem 1 is not directly usable in automatic differentiation and optimization frameworks;
we therefore use an approximation of the integral by the trapezoid formula. We select points with
higher probability and sum up their values to get an approximation of the integral with infinite
bounds. To decide which values to select, note that in the integral ∫_{−∞}^{∞} ∏_{i=1..c, i≠k} Φ_i(α) φ_k(α) dα, α is
the value of g_k ∼ N(H_k, σ²). Therefore α has the highest probability at H_k, and higher
probability the closer it is to H_k. More specifically, properties of the normal distribution give us that μ ± 6σ
covers over 99% of the values of a Gaussian random variable z ∼ N(μ, σ), so values of α between
H_k ± 6σ cover over 99% of the integral area. Therefore,

∫_{−∞}^{∞} ∏_{i=1..c, i≠k} Φ_i(α) φ_k(α) dα ≈ Σ_{α ∈ [H_k − 6σ, H_k + 6σ]} ∏_{i=1..c, i≠k} Φ_i(α) φ_k(α),

which is differentiable and is handled well by most automatic differentiation packages.

Figure 3: Divisions of the 9,000 and 26,032 histograms of MNIST (left) and SVHN (right) datasets
into 3 consensus levels, measured by top-agreed label percentage. The dashed red lines delineate
the 33.3% and 66.7% quantiles.
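As an illustration, [Q(H)]_k from Theorem 1 can be evaluated numerically in this truncated fashion; the sketch below uses a plain Riemann sum over the H_k ± 6σ window rather than the trapezoid rule, and all function names are ours:

```python
import math
import numpy as np

# Standard-normal CDF, vectorized over numpy arrays.
_std_cdf = np.vectorize(lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0))))

def output_distribution(H, sigma, grid_points=2001):
    """Approximate [Q(H)]_k = integral of prod_{i != k} Phi_i(a) * phi_k(a) da
    by summing over a grid covering H_k +/- 6*sigma."""
    H = np.asarray(H, dtype=float)
    c = len(H)
    q = np.zeros(c)
    for k in range(c):
        alpha = np.linspace(H[k] - 6.0 * sigma, H[k] + 6.0 * sigma, grid_points)
        step = alpha[1] - alpha[0]
        # phi_k: density of N(H_k, sigma^2) at the grid points
        pdf_k = np.exp(-0.5 * ((alpha - H[k]) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))
        prod_cdf = np.ones_like(alpha)
        for i in range(c):
            if i != k:
                prod_cdf *= _std_cdf((alpha - H[i]) / sigma)
        q[k] = np.sum(prod_cdf * pdf_k) * step
    return q
```

For two classes, [Q(H)]_0 reduces to Φ((H_0 − H_1)/(σ√2)), which gives a quick sanity check on the numerical result.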
**Step 3: accounting for the number of teachers.** The distribution estimate produced by our optimization may be skewed by a constant, because [Q(H)]_k only depends on the differences between
g_k and g_1, ..., g_{k−1}, g_{k+1}, ..., g_c. The attacker therefore shifts each element of Ĥ by (N − Σ Ĥ)/c, so
that the new histogram Ĥ sums up to Σ Ĥ + c · (N − Σ Ĥ)/c = N. Theorem 2 in Appendix B
provides proof that shifting Ĥ by a constant does not affect Q(Ĥ).
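This shift is a one-line correction (a sketch; the naming is ours):

```python
def shift_to_sum_n(H_hat, N):
    """Step 3: shift every entry of the estimated histogram by the same
    constant so that the entries sum to the number of teachers N."""
    c = len(H_hat)
    offset = (N - sum(H_hat)) / c
    return [h + offset for h in H_hat]
```

Since every entry moves by the same constant, the pairwise differences between classes, and hence Q(Ĥ), are unchanged.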
### 4 Evaluation
We evaluate our attack against instantiations of PATE on common benchmarks. We show that the
extracted histograms only differ slightly from the true ones underlying PATE’s decision. This is
despite the low privacy cost of the attacker’s queries, which remains well within budgets enforced
by common PATE instantiations. We also quantify the impact of the choice of scale for the noise
being added to preserve DP: we show that higher noise values result in increased attack success for
a given number of queries. We offer a hypothesis to explain this ostensibly surprising observation.
**4.1** **Experimental Setup**
**Data.** We use the experimental results from Papernot et al. [1] to simulate our attack environment. Papernot et al. released the histograms obtained by PATE using 250 teachers for two 10-class
computer-vision benchmarks, MNIST [12] and SVHN [13]. There are 9,000 histograms generated
by MNIST experiments and 26,032 histograms generated by SVHN experiments, corresponding to
the sizes of these datasets’ test sets.
We define a histogram’s consensus as its maximum value, and divide each dataset into three equal-sized groups corresponding to high consensus, medium consensus, and low consensus. Figure 3
illustrates this. We sample five histograms randomly from each group, and mount our attack for
various noise levels.
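The tercile split described above can be sketched as follows (our naming; the use of `np.quantile` for the 33.3%/66.7% cut points is an assumption, as the paper only states that the groups are equal-sized):

```python
import numpy as np

def consensus_terciles(histograms):
    """Split histograms into equal-sized low/medium/high consensus groups
    using the 33.3% and 66.7% quantiles of the top-vote count."""
    consensus = np.array([max(h) for h in histograms])
    q1, q2 = np.quantile(consensus, [1.0 / 3.0, 2.0 / 3.0])
    low = consensus <= q1
    high = consensus > q2
    medium = ~(low | high)
    return low, medium, high
```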
**Attack parameterization.** We simulated attackers with two types of query limits: first, an attacker
limited by PATE’s canonical privacy budget; we used the parameterization from Papernot et al. [1],
i.e. budgets of 1.97 and 4.96 for MNIST and SVHN and $\sigma = 40$. Second, an attacker with a hard limit of $10^4$ queries; this is a moderate number of queries for clients wishing to train their own "student" model using the aggregator's labels (see [1], [2]). We applied this attack against PATE instantiations for MNIST and SVHN with noise levels $\sigma \in \{40, 60, 80, 100\}$.
For optimization (see Section 3), we use an adaptive learning rate: at the beginning of training, we use a learning rate of $10 / \|\nabla_{\hat{H}} J\|_2$, where $J = Q(\hat{H}) - \bar{q}$ is the optimization objective. As the optimization starts to converge, $10 / \|\nabla_{\hat{H}} J\|_2$ becomes too large, so we switch to a learning rate of $1 / \|\nabla_{\hat{H}} J\|_2$. This results in changes to the histogram of the magnitude of one vote for each update. We use 0.01 as a threshold on the loss to establish convergence: when $\|J\|_2 < 0.01$, we stop optimizing.
For the attacks against canonical settings, we stopped once estimated histograms started presenting
negative values, which we found to be a slightly better strategy. (We could also try to constrain it to
only-positive values; we discuss improving this optimization procedure further in Section 5).
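For reference, the forward pass underlying these steps, i.e. the Monte Carlo estimate of the aggregator's output distribution $Q$, can be sketched as follows (the histogram is one of our high-consensus MNIST samples from Table 1; the sample count is our illustrative choice):

```python
import numpy as np

def q_estimate(h, sigma, n_samples, rng):
    """Monte Carlo estimate of Q(H)_k = P(class k wins the sigma-noised argmax)."""
    noisy = h + rng.normal(0.0, sigma, size=(n_samples, len(h)))
    wins = noisy.argmax(axis=1)
    return np.bincount(wins, minlength=len(h)) / n_samples

rng = np.random.default_rng(0)
h = np.array([4., 7., 207., 10., 4., 4., 0., 10., 3., 1.])  # high-consensus MNIST H2
q = q_estimate(h, sigma=40.0, n_samples=200_000, rng=rng)
# the top-voted class dominates q, but minority classes retain a small,
# informative probability mass that the optimization matches against q-bar
```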
**Metrics.** For every attack, we measured the error rate and privacy cost. The error rate is defined as the normalized L1 distance between the ground-truth histogram $H = [H_1, \ldots, H_c]$ and our attacker's estimate $\hat{H} = [\hat{H}_1, \ldots, \hat{H}_c]$, i.e., $\sum_i |H_i - \hat{H}_i| \,/\, (2 \sum_i |H_i|)$. (While the optimization minimizes Euclidean distance, we report L1 errors because they can be interpreted as corresponding to the number of mis-counted votes.)
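This metric can be computed directly (illustrative numbers; the true histogram sums to 250 teachers):

```python
import numpy as np

def error_rate(h_true, h_est):
    """Normalized L1 distance between true and estimated histograms;
    interpretable as the fraction of mis-counted votes."""
    return np.abs(h_true - h_est).sum() / (2 * np.abs(h_true).sum())

h_true = np.array([4., 7., 207., 10., 4., 4., 0., 10., 3., 1.])
h_est  = np.array([5., 7., 205., 11., 4., 4., 0., 10., 3., 1.])
err = error_rate(h_true, h_est)  # 4 vote-units of error over 2 * 250 -> 0.008
```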
We define and compute the privacy cost incurred by the adversary using established practices. At a high level (see details in [1]), we model PATE as a Rényi-differentially-private mechanism and leverage known privacy-preserving-composition theorems; we attain (non-Rényi) differential privacy via a known reduction from Rényi differential privacy to differential privacy.

The parameter $\delta$ is set to $10^{-5}$ for MNIST and $10^{-6}$ for SVHN, following Papernot et al. [1].
**Implementation.** Our implementation is provided in Python and the optimization uses the Jax library. Our code is open-sourced at https://github.com/cleverhans-lab/monte-carlo-adv. We ran the optimization on an Intel Xeon Processor E5-2630 v4; it takes about 2.5 hours to complete for a single histogram.
**4.2 Results**
**Our attack has high performance within canonical privacy budgets.** We first evaluate our attack on the canonical PATE from Papernot et al. [1]. Figures 4a and 4b show our attacker's error rates for the different histograms, averaging 0.11 on the MNIST setup and 0.05 on the SVHN setup.
**Our attack extracts high-fidelity histograms and has low privacy costs.** Figure 4 reports the performance of the privacy-budget-limited attack; Figures 6 and 7 show our hard-query-limit attacker's error rate and query costs for different noise levels, i.e. values of $\sigma$. We observe that, across attacks, we attain very low error rates, often as low as 0.03, translating to 3% of the votes being miscounted. For the hard-query-limit attack, privacy costs roughly range between 1 and 12, which is the order of magnitude for the budget one would plausibly use, for example to attain guarantees similar to Papernot et al. [2] (which uses budgets of up to 8 in a directly comparable setting to ours) or Abadi et al. [14] (which also employs an $(8, 10^{-5})$-differentially private mechanism for MNIST).
(a) Error rates on attacking a canonical MNIST PATE with privacy budget = 1.97 and $\sigma = 40$. (b) Error rates on attacking a canonical SVHN PATE with privacy budget = 4.96 and $\sigma = 40$.

Figure 4: Error rates on the budget-limited attack on the canonical PATE [1], for our 15 low/median/high-consensus sample histograms.
-----
Figure 6: Our attack's error extracting 15 MNIST histograms with low/medium/high consensus (L1-5, M1-5, and H1-5 respectively) using different noise scales ($\sigma \in \{40, 60, 80, 100\}$) and a query limit of $10^4$. The red dots and the right axis show the privacy cost of the attack on each histogram.
Figure 7: Our attack's error extracting 15 SVHN histograms with low/medium/high consensus (L1-5, M1-5, and H1-5 respectively) using different noise scales ($\sigma \in \{40, 60, 80, 100\}$) and a query limit of $10^4$. The red dots and the right axis show the privacy cost of the attack on each histogram.
**Adding more noise helps the attacker.** Perhaps the most surprising result in this work is that the higher the noise scale, the lower the attacker's error is. This is not necessarily aligned with using up more of the privacy budget. In fact, in many cases, increasing the noise decreases both the attacker's privacy cost and their error; Figure 5 shows the correlation between cost and error.

This is counter-intuitive, as larger Gaussian scales $\sigma$ usually correspond to tighter privacy guarantees, i.e., more expected protection against attacks. Specifically, our Monte Carlo estimation should be less accurate when higher-variance noise is added, as convergence to the mean is slower. Nevertheless, our attack actually performs better with higher noise levels.

Figure 5: Our attack's average error rate vs. privacy cost on the histograms extracted with $10^4$ queries. Weak inverse correlation implies cheaper attacks are often more accurate.

To explain this, consider the aggregator's output distribution. When it is uniform, classes are sampled with equal probabilities, contributing equal information to each Monte Carlo estimator. Conversely, when some classes have a lower probability than others, their estimators receive fewer samples. Sharp output distributions, for example, have a peak that essentially "eclipses" other classes. To illustrate this, consider the case where no noise is added at all; here, the output is always the plurality vote, and a black-box querying adversary cannot learn _anything_ about the histogram except its top-voted class, which is already known after a single query.

Our results indicate that the mitigation of this eclipsing effect by increasing the noise can be more dominant than the adverse effect that increasing noise has on Monte Carlo convergence. Interestingly, this is not always reflected in PATE's privacy-cost score, which is often lower for setups that leak more on vote histograms. Technically, there is no contradiction: privacy cost measures differential privacy, which does not necessarily translate to protection against vote-histogram leakage.
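The eclipsing effect is easy to reproduce in simulation (the histogram is a high-consensus MNIST sample from Table 1; the noise scales and sample count are our illustrative choices):

```python
import numpy as np

def output_distribution(h, sigma, n_samples, seed):
    """Empirical distribution of PATE's sigma-noised argmax output."""
    rng = np.random.default_rng(seed)
    noisy = h + rng.normal(0.0, sigma, size=(n_samples, len(h)))
    return np.bincount(noisy.argmax(axis=1), minlength=len(h)) / n_samples

h = np.array([5., 205., 7., 8., 4., 3., 0., 11., 6., 1.])   # high-consensus MNIST H3
low_noise = output_distribution(h, sigma=1.0, n_samples=100_000, seed=0)
high_noise = output_distribution(h, sigma=100.0, n_samples=100_000, seed=0)
# with tiny noise, the top class eclipses all others and queries reveal nothing new;
# with large noise, minority classes win occasionally, leaking their relative sizes
```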
-----
### 5 Discussion
**Mitigation.** The possibility of this attack is inherent to PATE’s aggregation mechanism, as long as
the attacker can make multiple queries to PATE. Our experiments in Section 4 show that (1) using
tighter privacy budgets does not necessarily mitigate the attack, as there is no strong correspondence
between the privacy cost and the attack’s success, and (2) it would be hard to limit the number
of queries some other way without crippling PATE’s utility, because our attack is successful while
using the same number of queries used in common scenarios from the literature.
Theoretically, the attack would be mitigated if PATE returned a consistent answer for each query.
PATE can thus try to cache answers to past queries and not recalculate them. Unfortunately, this
defense would be exposed to adversarial perturbations that try to evade the caching mechanism
without affecting predictions, and would not be possible for settings that keep queries confidential
and/or include decentralized aggregation [6].
Finally, we can try to prevent sensitive information from leaking onto vote histograms. Particularly,
models that generalize well across subgroups will be more immune to an attacker inferring group
membership via consensus. This reduces to the problem of subgroup fairness, an active line of work
with many proposed approaches [15]–[19] but no silver-bullet solutions.
**Limitations.** Our empirical analysis of sensitive-attribute leakage onto vote histograms (Section 2) could be expanded to cover more sophisticated attackers, other scenarios, and other forms of sensitive information that can leak onto histograms. We instead focus our work on extracting histograms from PATE, noting that this can be used as a foundation for various different attacks.
A full optimization procedure takes a noticeably long time (roughly 10 minutes for a single step
and 13 hours to convergence on a histogram), which prevented us from fully optimizing its hyperparameter choices. This is however a limitation of our current experimental setup, not of the attack,
bearing the main consequence that we are potentially under-estimating our attack’s capabilities.
**Related work.** PATE is a widely-adopted framework for differentially-private ML, with myriad
applications [3], [4] and extensions [5]–[7]; our attack is generally applicable to many of those
frameworks, which inherit their privacy analysis from PATE.
Another prominent decentralized ML framework, Federated Learning (FL) [20], has been extensively investigated from a privacy perspective. As we did for PATE in this work, prior work attacking
FL uncovered numerous forms of leakage. For example, Hitaj et al. [21] reconstructed the average training-set representation of each class; Geiping et al. [22] reconstructed training data with high fidelity; Nasr et al. [23] mounted a membership inference attack against the clients; Wang et al. [24]
showed how a malicious server could distinguish multiple properties of data simultaneously; and
Melis et al. [25] inferred the clients’ training data sensitive properties. These prior efforts all focus
on FL, and are orthogonal to ours. We are the first to evaluate any attack against PATE.
**Conclusion** We are the first to audit the confidentiality of PATE from an adversarial perspective.
Our attack extracts histograms of votes, which can reveal attributes of the input such as race or gender, or help attackers characterize teacher partitions. The attacker’s success is not highly correlated
with their queries’ privacy cost, which is monitored by PATE. Thus, mitigations of this attack are
nontrivial and/or significantly hinder prediction utility. Particularly, using larger Gaussian noise,
even when it fortifies the differential privacy guarantee, actually increases risk to the confidentiality
of the vote histogram. This surprising tension demonstrates that care must be taken to analyze the
protection differential privacy provides within a given threat model, rather than treat it as a silver
bullet protecting against any form of leakage.
**Broader Impact** Our work studies information leakage in a widely-adopted system, thus promoting our understanding of its risks. Our adversarial method can be used by developers and auditors
to evaluate the confidentiality and privacy promises of PATE-based frameworks.
Our observation that differential privacy does not prevent but rather enables the attack is the first of
its kind in that it reveals a discrepancy between differential privacy and societal norms of privacy.
Characterizing this distinction is essential to building technology that uses technical definitions of
privacy as an instrument to protect privacy norms.
-----
### Acknowledgments
We would like to acknowledge our sponsors, who support our research with financial and in-kind
contributions: Amazon, CIFAR through the Canada CIFAR AI Chair program, DARPA through the
GARD program, Intel, Meta, Microsoft, NFRF through an Exploration grant, NSERC through the
Discovery Grant, the OGS Scholarship Program, a Tier 1 Canada Research Chair and the COHESA
Strategic Alliance. Resources used in preparing this research were provided, in part, by the Province
of Ontario, the Government of Canada through CIFAR, and companies sponsoring the Vector Institute. We also thank members of the CleverHans Lab for their feedback.
### References
[1] N. Papernot, S. Song, I. Mironov, A. Raghunathan, K. Talwar, and Ú. Erlingsson, "Scalable private learning with PATE," 2018. arXiv: 1802.08908.

[2] N. Papernot, M. Abadi, Ú. Erlingsson, I. Goodfellow, and K. Talwar, "Semi-supervised knowledge transfer for deep learning from private training data," 2017. arXiv: 1610.05755.

[3] Y. Long, B. Wang, Z. Yang, B. Kailkhura, A. Zhang, C. A. Gunter, and B. Li, "G-PATE: Scalable differentially private data generator via private aggregation of teacher discriminators," in Thirty-Fifth Conference on Neural Information Processing Systems, 2021.

[4] C.-H. H. Yang, S. M. Siniscalchi, and C.-H. Lee, "PATE-AAE: Incorporating adversarial autoencoder into private aggregation of teacher ensembles for spoken command classification," 2021. arXiv: 2104.01271.

[5] M. M. Esmaeili, I. Mironov, K. Prasad, I. Shilov, and F. Tramèr, "Antipodes of label differential privacy: PATE and ALIBI," in Thirty-Fifth Conference on Neural Information Processing Systems, 2021.

[6] C. A. Choquette-Choo, N. Dullerud, A. Dziedzic, Y. Zhang, S. Jha, N. Papernot, and X. Wang, "CaPC learning: Confidential and private collaborative learning," in International Conference on Learning Representations, 2021.

[7] B. Wang, F. Wu, Y. Long, L. Rimanic, C. Zhang, and B. Li, "DataLens: Scalable privacy preserving training via gradient compression and aggregation," in Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security, 2021. DOI: 10.1145/3460120.3484579.

[8] M. Abadi, A. Chu, I. Goodfellow, H. B. McMahan, I. Mironov, K. Talwar, and L. Zhang, "Deep learning with differential privacy," in Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, 2016, pp. 308–318.

[9] D. Dua and C. Graff, "UCI machine learning repository," 2017. [Online]. Available: http://archive.ics.uci.edu/ml.

[10] K. Ganju, Q. Wang, W. Yang, C. A. Gunter, and N. Borisov, "Property inference attacks on fully connected neural networks using permutation invariant representations," in Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security, 2018, pp. 619–633.

[11] K. Nissim, S. Raskhodnikova, and A. Smith, "Smooth sensitivity and sampling in private data analysis," in Proceedings of the Thirty-Ninth Annual ACM Symposium on Theory of Computing, 2007, pp. 75–84.

[12] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, "Gradient-based learning applied to document recognition," Proceedings of the IEEE, vol. 86, no. 11, pp. 2278–2324, 1998. DOI: 10.1109/5.726791.

[13] Y. Netzer, T. Wang, A. Coates, A. Bissacco, B. Wu, and A. Y. Ng, "Reading digits in natural images with unsupervised feature learning," in NIPS Workshop on Deep Learning and Unsupervised Feature Learning, 2011.

[14] M. Abadi, A. Chu, I. Goodfellow, H. B. McMahan, I. Mironov, K. Talwar, and L. Zhang, "Deep learning with differential privacy," in Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, 2016, pp. 308–318.

[15] N. V. Chawla, K. W. Bowyer, L. O. Hall, and W. P. Kegelmeyer, "SMOTE: Synthetic minority over-sampling technique," Journal of Artificial Intelligence Research, vol. 16, pp. 321–357, 2002.

[16] M. Kearns, S. Neel, A. Roth, and Z. S. Wu, "Preventing fairness gerrymandering: Auditing and learning for subgroup fairness," in International Conference on Machine Learning, PMLR, 2018, pp. 2564–2572.

[17] M. J. Kearns, R. E. Schapire, and L. M. Sellie, "Toward efficient agnostic learning," Machine Learning, vol. 17, no. 2, pp. 115–141, 1994.

[18] M. Mohri, G. Sivek, and A. T. Suresh, "Agnostic federated learning," in International Conference on Machine Learning, PMLR, 2019, pp. 4615–4625.

[19] Y. Cui, M. Jia, T.-Y. Lin, Y. Song, and S. Belongie, "Class-balanced loss based on effective number of samples," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 9268–9277.

[20] J. Konečný, H. B. McMahan, F. X. Yu, P. Richtárik, A. T. Suresh, and D. Bacon, "Federated learning: Strategies for improving communication efficiency," in NIPS Workshop on Private Multi-Party Machine Learning, 2016. arXiv: 1610.05492.

[21] B. Hitaj, G. Ateniese, and F. Pérez-Cruz, "Deep models under the GAN: Information leakage from collaborative deep learning," 2017. arXiv: 1702.07464.

[22] J. Geiping, H. Bauermeister, H. Dröge, and M. Moeller, "Inverting gradients: How easy is it to break privacy in federated learning?" in Advances in Neural Information Processing Systems, vol. 33, 2020, pp. 16937–16947.

[23] M. Nasr, R. Shokri, and A. Houmansadr, "Comprehensive privacy analysis of deep learning: Passive and active white-box inference attacks against centralized and federated learning," in 2019 IEEE Symposium on Security and Privacy (SP), 2019. DOI: 10.1109/SP.2019.00065.

[24] Z. Wang, M. Song, Z. Zhang, Y. Song, Q. Wang, and H. Qi, "Beyond inferring class representatives: User-level privacy leakage from federated learning," 2018. arXiv: 1812.00535.

[25] L. Melis, C. Song, E. De Cristofaro, and V. Shmatikov, "Inference attacks against collaborative learning," 2018. arXiv: 1805.04049.
-----
### A Differential Privacy
An algorithm is said to be differentially private if its outputs on adjacent inputs (in our case, datasets) are statistically indistinguishable. Informally, the framework of differential privacy requires that the probabilities of an algorithm producing specific outputs be indistinguishable on two adjacent input datasets. Two datasets are said to be adjacent if they differ by at most one training record. The degree of indistinguishability is bounded by a parameter denoted $\varepsilon$. The lower $\varepsilon$ is, the stronger the privacy guarantee for the algorithm, because it is harder for an adversary to distinguish adjacent datasets given access to the algorithm's predictions on these datasets. In the variant of differential privacy we use, we also tolerate that the guarantee fail to hold with probability $\delta$. This allows us to achieve higher utility.
### B Shifting Distributions
In Section 3, we explain that we shift our histogram estimate by a constant to account for the number of teachers known to the attacker. The following theorem shows that the number of teachers does not affect the attacker's computation.

**Theorem 2.** *For two histograms $H^1 = [h^1_1, \ldots, h^1_m]$ and $H^2 = [h^2_1, \ldots, h^2_m]$, $Q^{H^1,\sigma} = Q^{H^2,\sigma}$ if $h^1_i - h^2_i = h^1_j - h^2_j$ for all $i, j = 1, \ldots, m$.*

*Proof.* Let $d = h^1_i - h^2_i$ for all $i = 1, \ldots, m$. For all $i, j = 1, \ldots, m$, the difference $g^2_i - g^2_j$ is distributed as

$$\mathcal{N}\big(h^2_i - h^2_j,\ 2\sigma^2\big) = \mathcal{N}\big((h^1_i - d) - (h^1_j - d),\ 2\sigma^2\big) = \mathcal{N}\big(h^1_i - h^1_j,\ 2\sigma^2\big),$$

which is exactly the distribution of $g^1_i - g^1_j$; hence $P(g^2_i > g^2_j) = P(g^1_i > g^1_j)$, and the same holds jointly over all such comparison events, since the joint distribution of the differences depends only on the pairwise histogram differences. Therefore

$$Q^{H^1,\sigma}_k = P\big(g^1_k > g^1_1, \ldots, g^1_k > g^1_{k-1},\ g^1_k > g^1_{k+1}, \ldots, g^1_k > g^1_m\big) = P\big(g^2_k > g^2_1, \ldots, g^2_k > g^2_{k-1},\ g^2_k > g^2_{k+1}, \ldots, g^2_k > g^2_m\big) = Q^{H^2,\sigma}_k.$$

What Theorem 2 states is that if the difference between two histograms is uniform, then the probability distribution of the outcomes is the same. With the support of Theorem 2, $\hat{H}$ can be safely shifted by a constant amount so that it sums up to the number of teachers, $N$.
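A quick numerical illustration (hypothetical 3-class histograms, $\sigma = 40$): under a shared noise realization, a uniform shift leaves every noisy comparison, and hence the winning class, literally unchanged, which is the intuition behind the proof.

```python
import numpy as np

def q_estimate(h, sigma, n_samples, seed):
    """Monte Carlo estimate of the noisy-argmax output distribution Q^{H,sigma}."""
    rng = np.random.default_rng(seed)
    noisy = h + rng.normal(0.0, sigma, size=(n_samples, len(h)))
    return np.bincount(noisy.argmax(axis=1), minlength=len(h)) / n_samples

h1 = np.array([10.0, 60.0, 30.0])
h2 = h1 + 17.0                     # uniformly shifted: h1_i - h2_i is constant
q1 = q_estimate(h1, sigma=40.0, n_samples=500_000, seed=1)
q2 = q_estimate(h2, sigma=40.0, n_samples=500_000, seed=1)
# with identical noise draws, adding a constant never changes the argmax,
# so the two estimated distributions coincide exactly
```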
### C Chosen histograms for evaluation
Table 1 shows the histograms we chose for evaluation in the 3 consensus-level categories.
-----
**MNIST** **SVHN**
_High consensus_
H1 [4, 7, 6, 8, 4, 2, 0, 214, 4, 1] [0, 0, 0, 0, 250, 0, 0, 0, 0, 0]
H2 [4, 7, 207, 10, 4, 4, 0, 10, 3, 1] [0, 0, 250, 0, 0, 0, 0, 0, 0, 0]
H3 [5, 205, 7, 8, 4, 3, 0, 11, 6, 1] [0, 0, 0, 250, 0, 0, 0, 0, 0, 0]
H4 [4, 7, 6, 7, 4, 200, 4, 10, 7, 1] [0, 250, 0, 0, 0, 0, 0, 0, 0, 0]
H5 [4, 7, 210, 7, 4, 4, 0, 10, 3, 1] [0, 0, 0, 0, 0, 0, 250, 0, 0, 0]
_Median consensus_
H1 [5, 183, 9, 16, 4, 3, 1, 10, 17, 2] [0, 0, 1, 0, 249, 0, 0, 0, 0, 0]
H2 [6, 7, 6, 30, 4, 181, 0, 10, 5, 1] [0, 10, 1, 232, 1, 3, 0, 1, 0, 2]
H3 [4, 7, 6, 10, 13, 4, 0, 17, 3, 186] [0, 0, 0, 6, 0, 243, 0, 0, 0, 1]
H4 [6, 18, 184, 7, 10, 4, 7, 10, 3, 1] [236, 0, 0, 7, 0, 0, 6, 0, 1, 0]
H5 [7, 7, 8, 7, 4, 9, 193, 10, 4, 1] [234, 2, 0, 4, 0, 0, 0, 1, 9, 0]
_Low consensus_
H1 [12, 7, 6, 30, 4, 161, 0, 10, 19, 1] [1, 1, 20, 12, 0, 0, 2, 207, 7, 0]
H2 [4, 8, 7, 11, 38, 16, 1, 13, 8, 144] [0, 158, 1, 6, 4, 38, 0, 40, 1, 2]
H3 [4, 7, 15, 33, 6, 5, 0, 171, 5, 4] [0, 184, 0, 2, 3, 0, 0, 61, 0, 0]
H4 [4, 7, 117, 99, 4, 4, 0, 10, 4, 1] [0, 0, 24, 0, 0, 0, 0, 0, 0, 226]
H5 [4, 17, 6, 11, 154, 4, 0, 11, 5, 38] [10, 1, 2, 19, 7, 109, 73, 0, 19, 10]
Table 1: The 30 MNIST and SVHN vote histograms sampled from the collection of histograms provided by Papernot et al. [1] (divided into 3 equal-sized consensus groups). We refer to histograms denoted here by H1-5 in the different consensus groups throughout the presentation of our results.
### D Fitting Random Forests

Every one of our teachers in Section 2 fits a random forest classifier using the sklearn package; each teacher performed a grid search over the following hyperparameters, and picked the values that lead to the lowest training loss.

- max depth: the maximum number of levels that a tree has, an integer chosen between 1 and 11 inclusively;
- max features: the maximum number of features considered while splitting a node, one of sqrt(number of features), log(number of features), or k·(number of features) for k ∈ {0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9};
- n estimators: the number of trees that the forest has, an integer chosen between log(9.5) and log(300.5);
- criterion: the loss function, one of gini impurity and entropy;
- min samples split: the minimum number of instances for a node to split, one of 2, 5, 10;
- bootstrap: one of True or False.
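A hedged sketch of this search space as it might be encoded for sklearn's grid search (the key names follow sklearn's `RandomForestClassifier`; the exact value lists are our paraphrase of the description above, and the log-spaced `n_estimators` grid is our reading of the log-range entry):

```python
import numpy as np

# Hypothetical encoding of the per-teacher hyperparameter grid described above.
param_grid = {
    "max_depth": list(range(1, 12)),                       # 1..11 inclusive
    "max_features": ["sqrt", "log2"]                       # sklearn's names for sqrt/log
                    + [round(k, 1) for k in np.arange(0.1, 1.0, 0.1)],
    "n_estimators": np.unique(
        np.logspace(np.log10(10), np.log10(300), num=10).astype(int)).tolist(),
    "criterion": ["gini", "entropy"],
    "min_samples_split": [2, 5, 10],
    "bootstrap": [True, False],
}
n_configs = int(np.prod([len(v) for v in param_grid.values()]))
# each teacher would fit e.g. GridSearchCV(RandomForestClassifier(), param_grid)
# and keep the configuration with the lowest training loss
```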
### E End-to-end sensitive-attribute inference
In Section 2, we showed that histograms leak by mounting an attack that classifies histograms into low-consensus and high-consensus groups, which reveals information about minority-group membership. In Section 4, we showed that we can extract histograms by querying PATE instances. Now, we combine these two attacks to extract minority-group membership information directly from a PATE instance. Our setting mirrors the setting from Section 2, but the attacker does not have direct access to the histograms of individuals; instead, they extract them from PATE's answers using our methodology (Section 3). We used the same ensemble from Section 2, but this time, the 250 teachers' vote histogram was noised, again using $\sigma = 40$, $\delta = 10^{-5}$ and a privacy budget of 1.9
-----
as in [2]. We sampled 10 low-consensus and 10 high-consensus members of the test set, and ran
the attack on them: we queried PATE with each member’s data record until exhausting the privacy
budget, computed the Monte Carlo estimators, ran the optimization to recover the vote histogram,
and then classified it to low-consensus/high-consensus as in Section 2. Results are given in Figure 8,
and indeed, they mirror the results of the attack in Section 2.
Figure 8: High vs. low-consensus distributions of the PhD-detection attack on PATE: vote histograms of minority-group members present lower consensus, allowing an attacker to identify them.
### F Edge values for noise
Here, our purpose is to evaluate our attack given extremely low and extremely high values of $\sigma$. We repeated the query-number-limited attack from Section 4.1, where adversaries perform $10^4$ queries. This time, we used a $\sigma$ value approaching 0 and a very high one (400). Figure 9 shows that when the noise is close to 0, the error rate is the highest; it then drops, and climbs again as we increase the noise. This is consistent with what we would expect: we know that when $\sigma = 0$, the attacker cannot learn anything but the argmax class, whereas if $\sigma$ is infinitely large, PATE's output distribution is uniform regardless of the underlying votes, and the attacker again cannot learn anything.
Figure 9: Error rates with baselines of a median-consensus histogram (from H3) in SVHN. When
the noise is close to 0, the error is the largest; at some point, the error starts moderately increasing
as the noise increases.
|
{
"disclaimer": "Notice: Paper or abstract available at https://arxiv.org/abs/2209.10732, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": null,
"status": "GREEN",
"url": "http://arxiv.org/pdf/2209.10732"
}
| 2,022
|
[
"JournalArticle"
] | true
| 2022-09-22T00:00:00
|
[
{
"paperId": "4ace0211031f9ef79c70fbeca9e07dc121be90ff",
"title": "Antipodes of Label Differential Privacy: PATE and ALIBI"
},
{
"paperId": "2bbfd3671198bc23a96cb7f992e3faca62721ee6",
"title": "PATE-AAE: Incorporating Adversarial Autoencoder into Private Aggregation of Teacher Ensembles for Spoken Command Classification"
},
{
"paperId": "7a3cf7aa2a25d70255c32c2a2c9768a71f6e5e38",
"title": "DataLens: Scalable Privacy Preserving Training via Gradient Compression and Aggregation"
},
{
"paperId": "12bc3df669b64666c9fac71e918c9761f6ed5b71",
"title": "CaPC Learning: Confidential and Private Collaborative Learning"
},
{
"paperId": "698ab1cc02a79596a87f92d5a0882ab1a7aee266",
"title": "Inverting Gradients - How easy is it to break privacy in federated learning?"
},
{
"paperId": "198c4eaf73f41573fcc892b596848f548da5824f",
"title": "G-PATE: Scalable Differentially Private Data Generator via Private Aggregation of Teacher Discriminators"
},
{
"paperId": "159395b0f7a2b9ea04f9a758d18887bcb970ee78",
"title": "Agnostic Federated Learning"
},
{
"paperId": "54036f43acc6c9b49b334270c7237217685f52fb",
"title": "Class-Balanced Loss Based on Effective Number of Samples"
},
{
"paperId": "fd9541fe4317904b9a0637b6505fb0bea0979491",
"title": "Comprehensive Privacy Analysis of Deep Learning: Passive and Active White-box Inference Attacks against Centralized and Federated Learning"
},
{
"paperId": "33c3f816bde8ee63ee9f2e60d4387b9390696371",
"title": "Beyond Inferring Class Representatives: User-Level Privacy Leakage From Federated Learning"
},
{
"paperId": "490c30b1d6b680be3c5a13552073e4fc10a850dc",
"title": "Property Inference Attacks on Fully Connected Neural Networks using Permutation Invariant Representations"
},
{
"paperId": "8bdf6f03bde08c424c214188b35be8b2dec7cdea",
"title": "Inference Attacks Against Collaborative Learning"
},
{
"paperId": "44058a625cb64c311043145655645d8206e272c2",
"title": "Scalable Private Learning with PATE"
},
{
"paperId": "19930147204c97be4d0964e166e8fe72ac1d6c3d",
"title": "Preventing Fairness Gerrymandering: Auditing and Learning for Subgroup Fairness"
},
{
"paperId": "44a97f4eaaefaf5338f8aed2913d5debb2459f7e",
"title": "Deep Models Under the GAN: Information Leakage from Collaborative Deep Learning"
},
{
"paperId": "e70b9a38fcf8373865dd6e7b45e45cca7ff2eaa9",
"title": "Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data"
},
{
"paperId": "7fcb90f68529cbfab49f471b54719ded7528d0ef",
"title": "Federated Learning: Strategies for Improving Communication Efficiency"
},
{
"paperId": "e9a986c8ff6c2f381d026fe014f6aaa865f34da7",
"title": "Deep Learning with Differential Privacy"
},
{
"paperId": "9cc0cff26e287f59aa030d7e664b97b74afc2777",
"title": "Smooth sensitivity and sampling in private data analysis"
},
{
"paperId": "fd77430f6f5c5e35e8a45ff3478032b680fa0b0c",
"title": "To all authors"
},
{
"paperId": "02227c94dd41fe0b439e050d377b0beb5d427cda",
"title": "Reading Digits in Natural Images with Unsupervised Feature Learning"
},
{
"paperId": "78cb4ca2c9c78348ade4c17621491d14d72f2a19",
"title": "Toward efficient agnostic learning"
},
{
"paperId": "8cb44f06586f609a29d9b496cc752ec01475dffe",
"title": "SMOTE: Synthetic Minority Over-sampling Technique"
},
{
"paperId": "162d958ff885f1462aeda91cd72582323fd6a1f4",
"title": "Gradient-based learning applied to document recognition"
}
] | 15,375
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Engineering",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/ffecead4be7deb3b7fbe82488c77a9e89a51b117
|
[
"Computer Science"
] | 0.894987
|
Enhanced Usability of Managing Workflows in an Industrial Data Gateway
|
ffecead4be7deb3b7fbe82488c77a9e89a51b117
|
IEEE International Conference on e-Science
|
[
{
"authorId": "2219421",
"name": "G. McGilvary"
},
{
"authorId": "97702983",
"name": "M. Atkinson"
},
{
"authorId": "1702333",
"name": "S. Gesing"
},
{
"authorId": "40489795",
"name": "Alvaro Aguilera"
},
{
"authorId": "1706590",
"name": "Richard Grunzke"
},
{
"authorId": "2621881",
"name": "E. Sciacca"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"e-Science",
"Int Conf e-science",
"IEEE Int Conf e-science",
"E-Science",
"International Conference on e-Science"
],
"alternate_urls": null,
"id": "34342fcb-3fe0-45c6-a017-6e65b73d030f",
"issn": null,
"name": "IEEE International Conference on e-Science",
"type": "conference",
"url": "https://escience-conference.org/"
}
|
The Grid and Cloud User Support Environment (gUSE) enables users convenient and easy access to grid and cloud infrastructures by providing a general purpose, workflow-oriented graphical user interface to create and run workflows on various Distributed Computing Infrastructures (DCIs). Its arrangements for creating and modifying existing workflows are, however, non-intuitive and cumbersome due to the technologies and architecture employed by gUSE. In this paper, we outline the first integrated web-based workflow editor for gUSE with the aim of improving the user experience for those with industrial data workflows and the wider gUSE community. We report initial assessments of the editor's utility based on users' feedback. We argue that combining access to diverse scalable resources with improved workflow creation tools is important for all big data applications and research infrastructures.
|
## Edinburgh Research Explorer
### Enhanced Usability of Managing Workflows in an Industrial Data Gateway
**Citation for published version:**
McGilvary, GA, Atkinson, M, Gesing, S, Aguilera, A, Grunzke, R & Sciacca, E 2015, Enhanced Usability of
Managing Workflows in an Industrial Data Gateway. in Proceedings of the 1st International Workshop on
_Interoperable Infrastructures for Interdisciplinary Big Data Sciences. pp. 495-502._
[https://doi.org/10.1109/eScience.2015.62](https://doi.org/10.1109/eScience.2015.62)
**Digital Object Identifier (DOI):**
[10.1109/eScience.2015.62](https://doi.org/10.1109/eScience.2015.62)
**Link:**
[Link to publication record in Edinburgh Research Explorer](https://www.research.ed.ac.uk/en/publications/20998ccd-c373-49f5-b82a-fbc4b1418e57)
**Document Version:**
Peer reviewed version
**Published In:**
Proceedings of the 1st International Workshop on Interoperable Infrastructures for Interdisciplinary Big Data
Sciences
-----
#### Enhanced Usability of Managing Workflows in an Industrial Data Gateway
Gary A. McGilvary[∗], Malcolm Atkinson[∗], Sandra Gesing[†], Alvaro Aguilera[‡], Richard Grunzke[‡] and Eva Sciacca[§]
_∗Edinburgh Data-Intensive Research Group, School of Informatics, The University of Edinburgh_
_Email: gary.mcgilvary@ed.ac.uk_
_† Center for Research Computing, University of Notre Dame, Indiana, United States_
_‡ Center for Information Services and High Performance Computing (ZIH), Technische Universität Dresden, Germany_
_§ INAF-Osservatorio Astrofisico di Catania, Italy_
**_Abstract—The Grid and Cloud User Support Environment_**
**(gUSE) enables users convenient and easy access to grid**
**and cloud infrastructures by providing a general purpose,**
**workflow-oriented graphical user interface to create and run**
**workflows on various Distributed Computing Infrastructures**
**(DCIs). Its arrangements for creating and modifying existing**
**workflows are, however, non-intuitive and cumbersome due**
**to the technologies and architecture employed by gUSE. In**
**this paper, we outline the first integrated web-based workflow**
**editor for gUSE with the aim of improving the user experience**
**for those with industrial data workflows and the wider gUSE**
**community. We report initial assessments of the editor’s utility**
**based on users’ feedback. We argue that combining access to**
**diverse scalable resources with improved workflow creation**
**tools is important for all big data applications and research**
**infrastructures.**
**_Keywords-workflows; gateways; gUse; usability_**
I. INTRODUCTION
A plethora of mature workflow systems has evolved that
support diverse workflow concepts and workflow languages
with different strengths and focus on different areas of workflow processing. As well as requiring appropriate workflow
concepts for their applications, a user community has to
evaluate four other requirements: a) its usability for all
members of their community in their work context; b) its
availability, with respect to licensing terms and cost; c) its
anticipated long-term support, e.g. via an active open-source
community; and d) its ability to deal efficiently with the
scales of data, computation and concurrent use required.
The majority of users in the context of the project VAVID
[1] have no previous exposure to the kind of HPC systems
used to power big data analysis. Consequently, the main
preference regarding usability has been for a web-based
graphical user interface enabling intuitive creation, editing,
submission and monitoring of workflows without the need
for programming or installations on the users’ side. The
aspects of scale most critical in the VAVID project are
large amounts of data to be processed and a requirement
to access high-performance computing infrastructures. The
third requirement has been that the system should be free of charge, including for companies, since the project partners are partly from industry.
Last but not least, a robust security concept is paramount
given the sensitive nature of the industrial data.
gUSE, with its flexible web-based user interface WS-PGRADE, consists of web services for workflow management, exploiting local clusters as well as diverse distributed grid and cloud infrastructures via the “DCI Bridge” [2] and accessing various distributed data systems via the “Data Avenue” [4]. With these mappings to diverse computing resources, and as open-source software, gUSE fulfills the second, third and fourth criteria for the selection of a workflow system. The usability of WS-PGRADE has been found sufficient except for the process of creating workflows.
With the WS-PGRADE system prior to the work reported here, users had to create workflows in three stages, one of which required the use of a particular graph editor. This editor is a Java Web Start application and therefore requires a local installation of Java and its security preferences to be set correctly; the latter is quite inconvenient for users, particularly within industrial and organizational contexts. For example, conflicts with an organization’s security management policies, or restrictions on downloads and self-administered installations, will often inhibit the use of the gUSE workflow editor in such contexts.
The three-stage creation process also impeded experiment
and innovation by requiring completion of one aspect, the
topology of data and control flow, for every step of a
workflow, before the details of individual steps could be
considered. Whereas, a scientist or engineer may want
to refine some parts before outlining others, or be able
to modify the workflow’s graphical representation after a
workflow has been created. Both are important in R&D
contexts epitomized by VAVID, as the practitioners need to
fluently innovate, refine methods formalized as workflows,
incrementally develop workflows and repeatedly use existing
workflows on new data or with new parameters: a _modus operandi_ well supported by science gateways [5].
Therefore, the usability issues surrounding gUSE/WS-PGRADE (the graph editor, the three-stage workflow creation process, and the inability to incrementally develop and refine workflows) have been addressed by the workflow editor presented in this paper. This allows domain scientists to focus on, and take more responsibility for, their own work rather than the technical aspects surrounding it. While
-----
this editor is specific to gUSE/WS-PGRADE, it is a step
in the direction of also empowering scientists and engineers, improving their prototyping agility and reducing their
dependence on IT specialists during innovation [6]. When
methods have stabilized and are being used in large-scale production, the IT specialists may still contribute efficiency and reliability improvements.
The paper establishes the background, and presents the
design and implementation of the new editor in that context.
An initial evaluation is then reported, that leads to conclusions and plans for further work.
II. RELATED WORK
Developers and providers of workflow management systems have recognized the demand by user communities for
usability during the composition of workflows, i.e. their
initial creation and their subsequent edits to improve the
method or develop a derived method. WS-PGRADE [7],
Pegasus [8], KNIME [9], Galaxy [10], Taverna [11], Kepler
[12], Swift [13] and UNICORE [14] are widely used open-source workflow management systems, which offer workflow canvases. Workflows are illustrated as directed graphs
on the canvases. Nodes normally represent jobs or executable
modules, while the directed edges define the control and data
dependencies between the jobs.
Conceptually WS-PGRADE distinguishes between an abstract workflow and a concrete workflow. The abstract
workflow is created via the graph editor with drag-and-drop
mechanisms to add nodes and connect them to each other via
input and output ports representing the data flow. The result
is a graphical representation of the workflow lacking the
information about distinctive jobs or data. In a further step,
the abstract workflow is extended to a concrete workflow,
which can be configured for concrete jobs, parameters and
data files. Similar to gUSE, Pegasus supports a wide range
of cluster, grid and cloud infrastructures with cutting-edge
data management capabilities. Its web-based user interface
is formed by Triana [15] but only exists as a prototype.
KNIME follows a different approach to the workflow
canvas than WS-PGRADE, one that its users find convenient and
intuitive. Users select from available modules and nodes that
they want to connect with each other. They can develop
parts of a workflow completely, including running that
subgraph and inspecting intermediate data, before extending
the workflow towards completion. This allows their focus
to match the way they think about a method. Advanced
users can also create new modules, which requires some
programming experience.
The KNIME workflow canvas is very intuitive but is offered as a workbench based on Eclipse requiring installation
on the users’ side and not as a web-based user interface. This
detracts from its utility in contexts such as VAVID. Galaxy
follows a concept for creating workflows similar to the one
in KNIME and offers a toolbox via a web-based solution.
While Galaxy is widely used, especially by the biomedical
community, the data management capabilities are quite restricted for large data and necessitate data transfers between
single jobs of a workflow to the server hosting the backend of Galaxy. However, Galaxy can map to the highly
parallelized enactments of Swift [16]. Another workflow
system well established in the biomedical community is
Taverna but the workflow canvas is only available as a
workbench. The workflows can be shared via the social
website myExperiment [17].
Kepler offers a desktop application and a web-based
graphical user interface for workflow management. The
latter has fewer features than the desktop solution and lacks
support for creating or modifying a workflow’s structure.
Thus, it cannot be used for composing a workflow, but
only for uploading existing workflows, which can then be
modified only with respect to the data and parameters used.
While UNICORE also provides both solutions for workflow
management and the web-based one is capable of all features
available in the desktop application, its use is restricted to
computing infrastructures interfaced via UNICORE.
Commercial products offering workflow canvases include
a commercial version of KNIME, products applying WS-BPEL (Web Services Business Process Execution Language)
[18], PipelinePilot [19] or the Genomics Research Platform
created by OnRamp [20]. The commercial version of KNIME supports advanced features for increasing productivity
such as connectors to clouds and Software-as-a-Service
(SaaS) as well as features for collaboration.
WS-BPEL is widely used in industry but requires that all applications integrated into a workflow are available as web services. PipelinePilot, as well as the Genomics Research Platform, are solutions that are especially tuned for bioinformatics applications but generally applicable to diverse domains. Workflows can be configured for local and
batch systems but are missing connectors to grid or cloud
infrastructures. Since partners in the VAVID project are
from industry, the business models behind such commercial
solutions would necessitate the coverage of license costs
without delivering more functionalities than gUSE.
In summary, few workflow systems deliver the power of
diverse digital resources as gUSE does and most of the
web-based creation and editing tools either require local
software installations with inherent security problems or
offer incomplete functionality. Hence we suggested a general
approach to these deficiencies [6], however, the current
work, though a step in that direction, is specific to gUSE.
III. DESIGNING THE WORKFLOW EDITOR
In this section, we first give an overview of the pre-existing workflow editing capabilities of gUSE/WS-PGRADE and detail their associated problems. We then introduce a partial solution that was under development before discussing the design of our new web-based workflow editor. We explain how it overcomes the aforementioned inconveniences and how it is integrated into gUSE/WS-PGRADE.
_A. gUSE/WS-PGRADE Graph and Workflow Creation_
gUSE/WS-PGRADE is composed of a number of Liferay
portlets each providing a specific functionality in relation
to workflow management. These portlets are typically composed of a presentation layer, portlet layer and persistence
layer. The portlet content is displayed using Java Server
Pages (JSP), with optional imported JavaScript libraries,
where the portlet layer interacts with the client-side presentation layer to serve resources or perform defined actions
dependent on the actions of a user. If necessary, the portlet
will interact with the database to store or retrieve data.
Using the pre-existing facilities to create a gUSE/WS-PGRADE workflow, a user must navigate through three
portlets: Graph, Create Concrete and Concrete. A workflow’s graph, or an abstract workflow, is created by downloading and executing a Java Network Launch Protocol
(JNLP) file from the Graph portlet. This instantiates the
Java Web Start (JWS) graph editor application, only after
the user has correctly added a Java security exception. This
process is not user friendly and many problems can arise
if the correct security exception is not added or there are
problems with the local Java installation. Figure 1 shows an
example graph created using the JWS graph editor.
Figure 1. gUSE Java Web Start Graph Editor
Users have the ability to add and remove jobs, input and
output ports as well as the connections between ports, all of
which are represented as an XML document. After the graph
has been saved, it is stored in the gUSE database. Graphs can
then be transformed into workflows via the Create Concrete
portlet and configured using the Concrete portlet. The latter
displays a static image of the workflow graph where jobs can
be selected allowing configuration parameters to be entered
via a pop-up form, e.g. defining a job’s executable type, its
arguments and data files. Although configuration changes
can be made to an existing workflow, the graph’s topology
and geometry cannot be modified. Therefore, when a user
wishes to make such changes, a new graph and workflow
must be created and re-configured.
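This pre-existing three-stage flow can be sketched as the following data transformations; all function and field names here are illustrative assumptions, not the actual gUSE API:

```javascript
// Illustrative sketch of the old three-stage flow:
//  1. Graph portlet: an abstract graph is saved as XML.
//  2. Create Concrete portlet: a workflow is derived from a saved graph.
//  3. Concrete portlet: each job is configured (executable, arguments, files).
// Once created, the graph itself can no longer be modified.
function createGraph(db, graphName, xml) {
  db.graphs[graphName] = xml;                              // stage 1
}
function createConcrete(db, wfName, graphName) {
  db.workflows[wfName] = { graph: graphName, config: {} }; // stage 2
}
function configureJob(db, wfName, jobName, config) {
  db.workflows[wfName].config[jobName] = config;           // stage 3
}

const db = { graphs: {}, workflows: {} };
createGraph(db, "g1", "<graph/>");
createConcrete(db, "wf1", "g1");
configureJob(db, "wf1", "Job1", { executable: "a.out" });
```

Because the topology lives only in the stage-1 graph, any structural change forces the user to repeat all three stages, which is the limitation the new editor removes.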
_B. A Web-based Workflow Editor for gUSE/WS-PGRADE_
We first introduce a graph editor that was being developed
contemporaneously, which fed into our design, and then
explain the design of our workflow editor.
_1) Graph Editor:_ Our workflow editor builds on the previous work of the National Institute of Astrophysics (INAF)[1]
that created a web-based graph editor portlet implementation
of the JWS graph editor, named GraphEditorPortlet. The
graph editor was developed in the context of the VisIVO
mobile application [21] to allow gUSE/WS-PGRADE usage
from mobile devices, where the JWS editor application
cannot operate. The web-based graph editor was developed
using the JavaScript libraries KineticJS 4.7.3[2], jQuery 1.9
and jQuery UI 1.10.3[3] and replicates the JWS graph editor
both in terms of functionality and presentation. Therefore
any user familiar with the current JWS graph editor of
gUSE/WS-PGRADE will be able to easily use the web-based graph editor.
The web-based graph editor is split into two components:
the graphical editor front-end and the back-end Liferay
portlet implementation. Much of the editor’s complexity
resides with the former, where the position of graphical
objects and their respective states must conform to the user’s
requirements. An object’s state consists of the object name,
description and its xy coordinates. If an object is a port, the
port type, its sequence number and a list of any connections
to other ports are included.
The front-end also provides dialogs, similar to those of
the JWS graph editor, which must initiate the appropriate
operations such as saving and loading graphical representations. Save operations convert each object’s state into XML,
using the XMLWriter library[4], to create an XML document
that is passed to the portlet via an AJAX call. The XML is
then sent to the gUSE wfs module via existing mechanisms
to store the graph as an abstract workflow in the gUSE
database. Similarly, a load operation retrieves the required
graph’s XML from wfs, which is then passed to KineticJS
to reconstruct each object’s state on the display canvas.
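The save path can be sketched roughly as follows; the object shape and the `serializeGraph`/`escapeXml` names are illustrative assumptions, not the actual editor code:

```javascript
// Illustrative sketch of the save path: each canvas object's state
// (name, coordinates; for ports also type and sequence number) is
// serialized into an XML string before being sent to the portlet.
function escapeXml(s) {
  return String(s).replace(/&/g, "&amp;").replace(/</g, "&lt;")
                  .replace(/>/g, "&gt;").replace(/"/g, "&quot;");
}

function serializeGraph(jobs) {
  const parts = ["<graph>"];
  for (const job of jobs) {
    parts.push(`<job name="${escapeXml(job.name)}" x="${job.x}" y="${job.y}">`);
    for (const port of job.ports) {
      parts.push(
        `<port name="${escapeXml(port.name)}" type="${port.type}" seq="${port.seq}"/>`
      );
    }
    parts.push("</job>");
  }
  parts.push("</graph>");
  return parts.join("");
}

const xml = serializeGraph([
  { name: "Job1", x: 10, y: 20,
    ports: [{ name: "in0", type: "input", seq: 0 }] }
]);
```

The resulting XML string would then be passed to the portlet via an AJAX call and forwarded to the gUSE wfs module; a load operation reverses the process to rebuild the canvas.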
This web-based graph editor is a direct replacement for
the current gUSE JWS graph editor. It does not allow graphs
of existing workflows to be modified, nor does it remove the
inefficient three-stage process of creating, configuring and
submitting workflows.
1www.inaf.it/en
2www.kineticjs.com
3www.jquery.com
4www.javascriptsource.com/ajax/xmlwriter.htm
-----
_2) Workflow Editor:_ In order to transition from a graph
to workflow editor and to solve these usability issues, we
developed a new portlet named the WorkflowEditorPortlet,
which inherits from both the GraphEditorPortlet and the
_Concrete_ portlet but contains additional functionality and
improvements to allow the user to directly interact with
workflows as opposed to just graphs. The only common
entity between the graph and workflow editor is that of the
interface and its associated code. Improvements to both the
front-end and back-end graph editor components, as well as
the necessary additions to gUSE, are the foundations of the
workflow editor. Figure 2 gives a preview of this complete
workflow editor.
We see that users have the necessary functionality to
create, save and load workflows. Furthermore, users have
the ability to operate the editor in two modes: _graph_ or _workflow_. The former mode is an improved version of the
web-based graph editor inherited from INAF, while the latter
mode allows direct interactions with workflows, including
those created by the JWS graph editor, as well as the ability
to submit syntactically correct workflows to a configured
DCI. The differentiation of modes ensures past and present
users of gUSE/WS-PGRADE are still able to operate on
graphs and workflows as individual entities.
In addition to creating this new portlet, we have modified
the existing gUSE/WS-PGRADE Concrete portlet to exhibit
equal functionality to that of the WorkflowEditorPortlet by
modifying the former’s configure.jsp presentation layer to
include our editor in place of the static workflow image
previously provided. In order to ensure both the _WorkflowEditorPortlet_ and the _Concrete_ portlet provide consistent
functionality, both share the same presentation layer, as
shown in Figure 3 depicting the editor’s architecture.
In effect, our _WorkflowEditorPortlet_ replaces the
gUSE/WS-PGRADE Concrete portlet, but with added
functionality. The availability of the latter remains at the
discretion of gUSE. Figure 3 also shows that configure.jsp
includes the JSP files related to the selected operating mode.
Regardless of the mode selected, users continue to interact
with the same KineticJS objects, however the integration of
workflow editing capabilities required substantial changes
to both the graph editor and the gUSE back-end; a task that proved difficult when integrating a solution into a system adopting legacy libraries and where the distinction between front-end and back-end functionality was minimal.

Figure 3. Architecture of the workflow editor: the shared configure.jsp presentation layer (including workflow_editor_mode.jsp and workflow_editor_submit.jsp), the portlet layer (WorkflowEditorPortlet and ConcretePortlet) with its AJAX handlers (AddNewJob, AddPort, RemoveJob, RemovePort, RemoveLine, ChangeJobConfig, ChangePortConfig), and the cache and persistence layer (the UserData cache and the gUSE database).
A large number of these modifications were made to allow
graphs of existing workflows to be altered on-demand. The
previous implementation of gUSE/WS-PGRADE lacks the
functionality to save incremental changes to a workflow’s
graph and instead only permits the bulk saving of graphs
and workflows to the database. This is a result of storage
mechanisms, which cache loaded workflows and only allow
configuration parameters to be added or modified. Upon a
save operation, the cache contents are saved to the database,
in turn saving any configuration changes; however, any
modifications to the graph are not replicated in the cache
and therefore are not saved.
We upgraded the cache to account for such changes by
creating and instantiating a jQuery AJAX call for each
type of change made to the graph. The change is caught
and processed by the portlet which is then passed to the
appropriate handler to update the cache. This process, as
well as the available handlers, are shown in Figure 3.
-----
For example, upon the addition of a new port, the presentation layer concatenates the values of the port’s properties
into a string and an AJAX call is made. The portlet processes
this call and spawns the AddPort handler, which enters
the values directly into the cache, either for a new or
an existing workflow; the latter resulting in current values
being overwritten. The properties of an existing port can be
amended via the ChangePortConfig handler. The amended
cache, present in the Java class UserData, can then be stored
into the database when a save operation is initiated by the
user.
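The cache-update logic of these handlers can be sketched as follows (in JavaScript for brevity, although the real handlers are server-side Java; all names and data shapes are illustrative):

```javascript
// Hypothetical sketch of the AddPort / ChangePortConfig handler logic:
// a port entry is written into the per-user cache keyed by job and
// sequence number; adding a port that already exists overwrites the
// current values, and configuration changes are merged in place.
function addPort(cache, jobName, port) {
  const job = cache.jobs[jobName] || (cache.jobs[jobName] = { ports: {} });
  job.ports[port.seq] = { name: port.name, type: port.type, seq: port.seq };
}

function changePortConfig(cache, jobName, seq, changes) {
  Object.assign(cache.jobs[jobName].ports[seq], changes);
}

const cache = { jobs: {} };
addPort(cache, "Job1", { name: "in0", type: "input", seq: 0 });
addPort(cache, "Job1", { name: "in0-renamed", type: "input", seq: 0 }); // overwrite
changePortConfig(cache, "Job1", 0, { file: "input.dat" });
```

On a save operation the whole cache would be flushed to the gUSE database, persisting structural and configuration changes together.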
The close conceptual relationship between a gUSE graph
and workflow means that in order to allow the user to
directly store workflows, a workflow must first be saved as a graph.
The workflow can then be created from the graph by calling
the existing method newWorkflow, which takes the graph
name as one of many arguments, and saves the workflow in
the database. Similarly, workflows are loaded by determining
the graph name of a specified workflow and returning the
graph’s XML to reconstruct each object’s state on the display
canvas.
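A minimal sketch of this graph/workflow coupling follows; the store layout and function names are assumptions for illustration (gUSE's newWorkflow is real, but its signature is simplified here):

```javascript
// Hypothetical sketch of the save/load coupling: a workflow is always
// persisted via its graph. Saving stores the graph XML first and then
// records a workflow referencing the graph by name (as gUSE's
// newWorkflow does, taking the graph name as an argument); loading
// resolves the workflow back to its graph's XML.
function saveWorkflow(store, name, graphXml) {
  store.graphs[name] = graphXml;            // the graph must be saved first
  store.workflows[name] = { graph: name };  // then the workflow references it
}

function loadWorkflow(store, name) {
  const graphName = store.workflows[name].graph;
  return store.graphs[graphName];           // XML used to rebuild the canvas
}

const store = { graphs: {}, workflows: {} };
saveWorkflow(store, "wf1", "<graph/>");
```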
The modification of the gUSE cache appears to be a trivial addition; however, it introduced many complications.
Firstly, a new series of database interactions had to be
created to retrieve unique identifiers for each new workflow
object added to the display canvas. Secondly, any object
added to the canvas had to be checked for uniqueness and
correctness; a feature that was not present in the inherited
web-based graph editor. For example, by adding a port,
its name and sequence number must be compared with all
others attached to the job.
Validity checks must also ensure objects and their state
are consistent with a correctly constructed workflow. For
example, validity rules must ensure an output port cannot
be connected to another output port. Thirdly, and most
importantly, the workflow’s state present in the cache must
be equivalent to the state present on the display canvas; a
feature also not present in the inherited web-based graph editor. If the state is not equivalent in both entities, workflows
will be incorrectly configured and subsequently, are likely
to exhibit unexpected behaviour when executing on a DCI.
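The validity rules mentioned above might look like the following sketch; the function names and data shapes are illustrative, not the editor's actual code:

```javascript
// Illustrative validity checks: a port's name and sequence number must
// be unique among the ports already attached to the job, and data must
// flow from an output port into an input port (never output to output).
function isPortUnique(job, port) {
  return !job.ports.some(p => p.name === port.name || p.seq === port.seq);
}

function isValidConnection(from, to) {
  return from.type === "output" && to.type === "input";
}

const job = { ports: [{ name: "out0", type: "output", seq: 0 }] };
```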
The ability to dynamically add jobs, ports and connections
to the cache also allows on-demand workflow configuration.
Previously, users had to create and save a workflow before
it could be configured via the Concrete portlet, by selecting
jobs from the static representation of the workflow. By
selecting the desired job, users can now instantly add configuration parameters without having to save the workflow
in the first instance; all changes are reflected in the cache
and are uploaded to the database when the user initiates a
save operation.
The incorporation of this feature came with many difficulties, primarily due to the incompatibilities between the
different jQuery versions used by the web-based graph editor
and the gUSE/WS-PGRADE workflow configuration entry
form. The latter uses jQuery 1.3.2 and outdated associated jQuery libraries such as jqDock and BeautyTips. In
order to upgrade these libraries, a complete re-design of
the gUSE/WS-PGRADE elements reliant on these libraries
would have to take place. As the inherited web-based editor
is only compatible with jQuery versions 1.9 and above, a
solution was devised to operate multiple jQuery versions
concurrently.
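One established way to run two jQuery versions side by side is jQuery's `noConflict` mechanism; the paper does not name the technique used, so this is an assumption. The following minimal simulation illustrates the idea, with plain objects standing in for the two jQuery builds:

```javascript
// Simulates how two jQuery versions can coexist: the legacy 1.3.2 build
// owns the global $, a newer build is loaded afterwards, and a
// noConflict-style call hands $ back to the legacy copy while the editor
// keeps a private reference to the new one.
const globals = {};
globals.$ = { version: "1.3.2" };   // legacy build loaded by WS-PGRADE
const previous = globals.$;         // remembered when the new build loads
globals.$ = { version: "1.9.0" };   // newer build overwrites the global

function noConflict() {
  const current = globals.$;
  globals.$ = previous;             // restore the legacy build
  return current;                   // editor code keeps this reference
}

const jq19 = noConflict();
// legacy widgets keep using globals.$ (1.3.2); editor code uses jq19 (1.9.0)
```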
The web-based workflow editor provides a much needed
solution for the workflow community, and in particular for
those who interact with and submit workflows via gUSE.
We have shown the necessary changes to create a simple
yet effective web-based editor, removing the dependency for
a client-side Java installation and extending the Java server
portlet implementation. Furthermore, by using standard web
technologies, the editor operates on all popular web browsers
allowing all users to efficiently create workflows and modify
existing ones.
IV. EVALUATION
The new editor and its integrated method for workflow
creation and management have been deployed and evaluated
on one of the test systems used for the VAVID project;
detailed functionality and performance testing of the editor
will take place after the use cases of VAVID have been fully
created.
When opening the workflow editor portlet, as expected,
no Java Web Start application is instantiated and the editor
is now displayed inside the web browser. As there is no
separate editor window, the editor now follows the same
style conventions used in the rest of WS-PGRADE. Furthermore, it is also much faster and involves less user interaction
than downloading and opening the former editor. The former
method was also cumbersome, often involving having to
determine how to enable Java support in the web browser
and properly adjust the security settings of Java to execute
the editor.
The new editor improves usability in other scenarios as well, one of them being the ability to use test systems
located behind a remote firewall by simply tunnelling the
HTTP port using SSH and accessing the localhost with the
browser. For users without previous exposure to gUSE, the
new integrated method of workflow creation, configuration
and submission within the same portlet is more intuitive
than the previous three-stage method. These improvements
translate into less helpdesk support required by end users and
thus, more time for the development and integration teams
to concentrate on other aspects of the VAVID project.
While the general idea of simplifying the three stages of
workflow management into a single one is perceived as being
more intuitive by the users, the current way of configuring
jobs with the new editor could be further improved. Once the
workflow graph is created, users can modify the name and
-----
description of each job by double-clicking on it. However,
selecting any other point of the node that is not its name
will display the configuration dialog for the corresponding
job. This behaviour is hinted at by highlighting the job’s name on mouse-over. Our experience shows this is not sufficiently clear for most users, irrespective of their experience level, so it will be revised in future versions.
Other potential improvements could be made to the
accessibility and positioning of the workflow nodes. The
accessibility problems relate to the color-scheme and style
used to render workflows, making certain selections and
active elements difficult to recognize. This is simple to
resolve and will be fixed in future releases. The suboptimal positioning of the elements can be traced back to
the JavaScript frameworks upon which the editor is based.
Despite being state of the art when the original INAF implementation of the editor was created, they have now
been superseded by more powerful ones. Reimplementing
the editor with a new framework would have required an
effort outside the means of the VAVID project.
An important requirement for the new editor is that
of backward compatibility with workflows created using
former versions of the editor. In addition to VAVID’s own
workflows, the gUSE development team provided a set of
test workflows to evaluate the backward compatibility. No
compatibility problems have been found during our tests.
Previous workflows could be loaded, modified and submitted
by the new editor. Moreover, given that the underlying format in which the workflows are stored in the database hasn’t
changed, compatibility issues are not expected. Another vital
compatibility aspect is a consistent rendering and functioning of the editor across different browsers and platforms.
During the development and evaluation of the editor, current
versions of Mozilla Firefox, Google Chrome, and Safari
were used on Linux, OS X, and Microsoft Windows without
observing any major changes of the HTML-rendering or a
reduction in usability.
Finally, the installation procedure and accompanying documentation of the new editor were also evaluated. Installing
or updating the editor from the source code involves the
compilation and re-deployment of the gUSE frontendbase, wfs, and wspgrade modules. In our experience of using gUSE 3.6.8, this can be performed with little effort by
following the installation instructions, if there is a working
Java SDK and Apache Maven installed on the system. It
is our hope that the new editor will be integrated into
future releases of gUSE making the manual installation
unnecessary.
V. CONCLUSION AND OUTLOOK
In this paper, we have outlined an improved workflow
editor for gUSE/WS-PGRADE that replaces the three-stage
process of creating, configuring and submitting workflows,
which was unnecessarily cumbersome for prototyping processing and analysis methods and raised conflicts with
security policies. Our web-based workflow editor portlet
implementation directly replaces the gUSE Java Web Start
graph editor application and subsequently, the requirement
of a local Java installation and correctly specified security
preferences. The previous three-stage process of creating
workflows has been reduced to a single-stage process, allowing workflow creation, instant configuration and submission
all within our workflow editor portlet.
Furthermore, users now have the ability to dynamically
modify the graphical structure of their existing workflows
and update job configuration parameters on demand, allowing the incremental development and refinement of workflows, a feature supported by many other science gateways
and a requirement from the users of the VAVID project and
many other communities.
We believe that the aforementioned improvements to the
gUSE/WS-PGRADE workflow creation process will greatly
enhance the user experience of interacting with workflows
allowing domain scientists to focus on and take more responsibility for their own work rather than the technical aspects surrounding it. Preliminary usability studies strongly support this. However, there are many improvements that could
be made to gUSE and to our web-based workflow editor
to improve the users’ experience and operational behavior
further.
The revision of the system’s architecture to make the
client-side (browser embedded) and server-side of gUSE and
WS-PGRADE more independent would be a first step. The
API presented by the server side should support both bulk
and incremental changes to workflows. This might be partitioned across several back-end micro-services with sharply
focused functionality to improve flexibility and maintainability [22]. These stable and relevant interfaces would support
incremental enhancements to these adopted web-based tools
and permit others to create advanced alternatives.
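As a purely hypothetical sketch (none of these names exist in gUSE), a back end supporting both kinds of change proposed above could expose operations along these lines:

```python
# Hypothetical sketch of a server-side workflow API supporting both bulk and
# incremental changes. Class, method, and workflow names are illustrative only,
# not part of gUSE/WS-PGRADE.

class WorkflowStore:
    def __init__(self):
        self.workflows = {}

    # Bulk operation: replace a whole workflow graph at once.
    def put_workflow(self, wf_id, graph):
        self.workflows[wf_id] = dict(graph)

    # Incremental operations: change a single node without resubmitting the graph.
    def add_node(self, wf_id, node_id, config=None):
        self.workflows[wf_id][node_id] = config or {}

    def update_node(self, wf_id, node_id, **config):
        self.workflows[wf_id][node_id].update(config)

store = WorkflowStore()
store.put_workflow("wf1", {"generator": {}, "collector": {}})   # bulk upload
store.update_node("wf1", "generator", executable="gen.sh")       # incremental edit
```

Partitioning the bulk and incremental operations across separate micro-services, as suggested above, would let each evolve independently behind a stable interface.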
Such workflow editors would exploit novel JavaScript libraries and agile web frameworks. For example, the JavaScript library jsPlumb (www.jsplumb.org) would improve the visual representation and deliver ready-made graphical interaction modes because of its excellent design. It offers many features for diverse illustration, representation and manipulation models for the nodes and edges of a workflow graph. It is also developed by an extensive open-source community, relieving the workflow-editor developers of substantial responsibilities.
The workflow editor reported here does not yet use jsPlumb, for pragmatic and historical reasons; its adoption is anticipated. jsPlumb underpinned the prototype generic workflow editor reported by Gesing et al. [6]. That proposed web-based workflow editor is intended to accommodate multiple workflow systems for the following reasons: a) developing powerful and easily learnt web-based GUIs that run
on all devices from handhelds to work stations demands
skills and effort best amortized over many communities
and the similarities between workflow systems make this
feasible; b) user communities have considerable investments
in particular workflow systems that make transfer to replacement workflow systems infeasible, consequently when
inter-disciplinary work develops across communities using
different systems, and when researchers transfer between
groups that consistency saves the researchers intellectual
hurdles and delays; and c) the workflow enactment systems
are already developing capabilities for integrated multiworkflow language enactments, e.g. [23], and at present
developers of the scientific methods have to use each native
workflow editor rather than being able to work on the whole
method.
A long-term campaign is required to improve the usability
and abstraction so that users who are not adept at computing
can nevertheless take full responsibility for the logic of their
own methods and can innovate and experiment freely. This
becomes ever more necessary as the wealth of available data
grows and as more and more domains expect to exploit its
potential. A broad collaboration across disciplines should
address this agenda.
ACKNOWLEDGMENT
The authors would like to thank the Institute for Computer
Science and Control (SZTAKI) of the Hungarian Academy
of Sciences (MTA) and the gUSE development team for
their support throughout this project. The authors would also
like to thank the German Federal Ministry of Education and
Research (BMBF) for the opportunity to do research in the
VAVID project under grant 01IS14005. Furthermore, financial support by the German Research Foundation (DFG) for
the MASi project is gratefully acknowledged. The research
leading to these results has partially been supported by the
LSDMA project of the Helmholtz Association of German
Research Centres.
REFERENCES
[1] A. Aguilera, R. Grunzke, U. Markwardt, D. Habich, D. Schollbach, and J. Garcke, “Towards an industry data gateway: An integrated platform for the analysis of wind turbine databases,” in Science Gateways (IWSG), 2015 7th International Workshop on, accepted.
[2] M. Kozlovszky, K. Karóczkai, I. Márton, P. Kacsuk, and T. Gottdank, “DCI Bridge: Executing WS-PGRADE Workflows in Distributed Computing Infrastructures,” in [3], P. Kacsuk, Ed. Springer, 2014, ch. 4, pp. 51–67.
[3] P. Kacsuk, Ed., Science Gateways for Distributed Computing Infrastructures: Development framework and exploitation by scientific user communities. Springer International Publishing, 2014.
[4] A. Hajnal, Z. Farkas, P. Kacsuk, and T. Pintér, “Remote storage resource management in WS-PGRADE/gUSE,” in [3], P. Kacsuk, Ed. Springer, 2014, ch. 5, pp. 69–81.
[5] A. Balasko, Z. Farkas, and P. Kacsuk, “Building science gateways by utilizing the generic WS-PGRADE/gUSE workflow system,” Computer Science, vol. 14, no. 2, 2013.
[6] S. Gesing, M. Atkinson, R. Filgueira, I. Taylor, A. Jones, V. Stankovski, C. S. Liew, A. Spinuso, G. Terstyanszky, and P. Kacsuk, “Workflows in a Dashboard: A New Generation of Usability,” in Proc. WORKS ’14. Piscataway, NJ, USA: IEEE Press, 2014, pp. 82–93. [Online]. Available: http://dx.doi.org/10.1109/WORKS.2014.6
[7] P. Kacsuk, Z. Farkas, M. Kozlovszky, G. Hermann, A. Balasko, K. Karoczkai, and I. Marton, “WS-PGRADE/gUSE Generic DCI Gateway Framework for a Large Variety of User Communities,” Journal of Grid Computing, vol. 10, no. 4, pp. 601–630, 2012.
[8] E. Deelman, K. Vahi, G. Juve, M. Rynge, S. Callaghan, P. J. Maechling, R. Mayani, W. Chen, R. F. da Silva, M. Livny, and K. Wenger, “Pegasus, a workflow management system for science automation,” Future Gener. Comput. Syst., 2014.
[9] S. Beisken, T. Meinl, B. Wiswedel, L. de Figueiredo, M. Berthold, and C. Steinbeck, “KNIME-CDK: Workflow-driven cheminformatics,” BMC Bioinformatics, vol. 14, no. 1, p. 257, 2013.
[10] D. Blankenberg, G. V. Kuster, N. Coraor, G. Ananda, R. Lazarus, M. Mangan, A. Nekrutenko, and J. Taylor, Galaxy: A Web-Based Genome Analysis Tool for Experimentalists. John Wiley & Sons, Inc., 2010.
[11] K. Wolstencroft, R. Haines, D. Fellows, A. Williams, D. Withers, S. Owen, S. Soiland-Reyes, I. Dunlop, A. Nenadic, P. Fisher, J. Bhagat, K. Belhajjame, F. Bacall, A. Hardisty, A. Nieva de la Hidalga, M. P. Balcazar Vargas, S. Sufi, and C. Goble, “The Taverna workflow suite: designing and executing workflows of Web Services on the desktop, web or in the cloud,” Nucleic Acids Research, vol. 41, no. W1, pp. W557–W561, 2013.
[12] B. Ludäscher, I. Altintas, C. Berkley, D. Higgins, E. Jaeger, M. Jones, E. A. Lee, J. Tao, and Y. Zhao, “Scientific workflow management and the Kepler system,” Concurrency and Computation: Practice and Experience, vol. 18, no. 10, pp. 1039–1065, August 2006.
[13] J. Wozniak, T. Armstrong, M. Wilde, D. Katz, E. Lusk, and I. Foster, “Swift/T: Large-scale application composition via distributed-memory dataflow processing,” in Proc. IEEE/ACM CCGRID ’13, May 2013, pp. 95–102.
[14] K. Benedyczak, P. Bala, S. van den Berghe, R. Menday, and B. Schuller, “Key aspects of the UNICORE 6 security model,” Future Generation Comp. Syst., vol. 27, no. 2, pp. 195–201, 2011.
[15] I. Taylor, M. Shields, I. Wang, and A. Harrison, “The Triana workflow environment: Architecture and applications,” in [24]. Springer London, 2007, pp. 320–339.
[16] K. Maheshwari, A. Rodriguez, D. Kelly, R. Madduri, J. Wozniak, M. Wilde, and I. Foster, “Enabling multi-task computation on Galaxy-based gateways using Swift,” in CLUSTER 2013, Sept 2013, pp. 1–3.
[17] D. De Roure, C. Goble, and R. Stevens, “The design and realisation of the myExperiment Virtual Research Environment for social sharing of workflows,” Future Gener. Comput. Syst., vol. 25, no. 5, pp. 561–567, 2009.
[18] M. B. Juric, Business Process Execution Language for Web Services BPEL and BPEL4WS 2nd Edition. Packt Publishing, 2006.
[19] Accelrys, “Pipeline Pilot,” 2015. [Online]. Available: http://accelrys.com/products/collaborative-science/biovia-pipeline-pilot/
[20] OnRamp, “Genomics research platform,” 2015. [Online]. Available: http://www.onrampbioinformatics.com
[21] F. Vitello, E. Sciacca, U. Becciani, A. Costa, P. Massimino, E. Takacs, and B. Szakal, “Mobile application development exploiting science gateway technologies,” Concurrency and Computation: Practice and Experience, 2015.
[22] M. Fowler, “Microservices,” http://martinfowler.com/articles/microservices.html.
[23] G. Terstyanszky, T. Kukla, T. Kiss, P. Kacsuk, A. Balasko, and Z. Farkas, “Enabling scientific workflow sharing through coarse-grained interoperability,” Future Gener. Comput. Syst., vol. 37, pp. 46–59, 2014.
[24] I. J. Taylor, E. Deelman, D. B. Gannon, and M. Shields, Workflows for e-Science: Scientific Workflows for Grids. Springer London, 2007.
_Review_
# A Contemporary Survey on 6G Wireless Networks: Potentials, Recent Advances, Technical Challenges and Future Trends
**Syed Agha Hassnain Mohsan 1,*, Yanlong Li 1,2**
1 Optical Communications Laboratory, Ocean College, Zhejiang University, Zheda Road 1, Zhoushan 316021, China; hassnainagha@zju.edu.cn (S.A.H.M.); lylong@zju.edu.cn (Y.L.)
2 Ministry of Education Key Laboratory of Cognitive Radio and Information Processing, Guilin University of Electronic Technology, Guilin 541004, China
* Correspondence: hassnainagha@zju.edu.cn
**Abstract:** Smart services based on the Internet of Everything (IoE) are expected to attract notable attention from both academia and industry in the future. Although fifth-generation (5G) is a promising communication technology, it cannot fulfill the complete demands of novel applications. Sixth-generation (6G) technology is envisaged to overcome the limitations of 5G. The vision and planning of the future 6G network has begun with the aim of meeting the stringent requirements of mobile communication. Our aim is to explore recent advances and
potential challenges to enable 6G technology in this review. We have devised a taxonomy based on
computing technologies, networking technologies, communication technologies, use cases,
machine learning algorithms and key enabler technologies. In this regard, we subsequently
highlight potential features and key areas of 6G. Key technological breakthroughs which include
quantum communication, tactile communication, holographic communication, terahertz communication, visible light communication (VLC) and the Internet of Bio-Nano Things, which can have a profound impact on wireless communication, have been elaborated at length in this review. In this
review, our prime focus is to discuss potential enabling technologies which can develop seamless
and sustainable network, encompassing symbiotic radio, blockchain, new communication
paradigm, VLC and terahertz. These transformative technologies can help manage the rapidly growing number of services and devices. In addition, we have investigated open research
challenges which can hamper the performance of 6G network. Finally, we have outlined several
practical considerations, 6G key projects and future directions. We envision 6G undergoing
unprecedented breakthroughs to eliminate technical uncertainties and provide enlightening
research directions for subsequent future studies. Although it is impossible to envisage complete
details of 6G, we believe this study will pave the way for future research work.
**Keywords:** 6G; communication; terahertz communications; quantum communication; Internet of Everything (IoE); visible light communications (VLC); holographic communications
**1. Introduction**
Commercial deployment of 5G began in 2019. It marks a new era for the digital society and introduces innovative breakthroughs in terms of mobility, data rates, latency and communication [1]. Looking at the development of previous technologies, each generation remains in use for almost ten years; that is, research on the next generation starts during the commercialization phase of the previous one. As 5G has reached its commercialization phase, it is the right time to launch research on 6G. Some countries have already made strategic plans for 6G [2].
They have started 6G projects for timely deployment. In 2018, Finland introduced the 6Genesis Flagship program with a $290 million investment in the 6G ecosystem [3]. The German, South Korean and UK governments have invested in 6G quantum technology, while the USA has started projects on terahertz (THz) 6G wireless networks [4]. The Ministry of Industry and Information Technology of China has also focused on the development of 6G, and the Japanese government has started 6G projects as well [5]. The key technologies and novel services of 6G will mark a revolution in wireless networks.
The rapidly growing research on 6G, its emerging technologies and their associated applications will drive sustained growth in this domain. The International Telecommunication Union (ITU) has predicted up to 5 zettabytes of global mobile data by 2030 [6], as shown in figure 1. Meanwhile, due to the emergence of smart cities, e-health, smart industry and the Internet-of-Everything (IoE) paradigm, there is an urgent need to focus on ultra-reliable low-latency communications (URLLC), which can enable a networked society. Besides offering massive data, the upsurge of IoE will support a myriad of new data services. Additionally, the promising IoE services entail integrating features such as communication, computing and control into a single network architecture. In order to support these forefront services and meet their heterogeneous desiderata, several challenges must be addressed. These challenges include providing flexibility in the network architecture, monitoring network performance, leveraging the sub-terahertz (THz) bands and designing a holistic orchestration strategy to integrate all network resource functionalities, such as sensing, computing, control and communication, in a scalable, efficient, intelligent and sustainable manner.
**Figure 1. 2020-2030 Global mobile data traffic predicted by ITU [6].**
Upcoming applications such as self-driving, smart cities and e-health have stringent demands on throughput, data rate and latency which are beyond the limits of 5G. It is anticipated that 5G services will be widely available within a decade, after which the emerging 6G technology will pave its way into industry.
The technical prospects of 6G include:
- Ultra-low latency and ultra-high data rates.
- Energy-efficient resource-constrained devices.
- Ubiquitous network coverage.
- Intelligent and trusted connectivity.
6G will alter the perception and definition of communication, industry, society and the modern lifestyle, and it will revolutionize several technological domains which are yet to be envisioned. Besides its advantages, several critical problems exist in deploying 6G. In this article, we investigate these potential challenges, and we also analyze and compare 5G, B5G and 6G.
The hype about 5G in the media, industry and academia is validated by its prominent features with regard to data rate, reliability and the accessibility of mobile services. Concretely, a paradigm shift in its design architecture has made 5G suitable for solving real business requirements [7].
The prominent features and promises of 6G technology have attracted attention from the research community. It is expected that 6G will bring revolutions in diverse domains from 2030 onward. Various aspects of 6G are being considered in top-tier forums, and several requirements of 6G have been collected [9-11]. S. Nayak et al. [8] have exposed 6G communication challenges. Moreover, several algorithms have been reported for 6G [12-13].
6G applications are vulnerable to some uncertainties. Autonomous systems and connected robotics depend on VLC and AI technology, where data transmission, encryption and malicious behavior can be intricate. Multisensory XR applications use quantum, terahertz and molecular communication technologies, which make them susceptible to data transmission risks, access-control risks and exposure to malicious behavior. Wireless brain-computer interactions also utilize multisensory XR applications but have their own privacy and security problems; the main weaknesses are encryption and malicious behavior. Although distributed ledger technologies and blockchain are secure, they too can face malicious behavior. Overall, the new areas of 6G are vulnerable to communication, encryption, malicious behavior, access control and authentication issues.
**2. Our Survey**
While several operators have announced plans to roll out 5G services, research on 6G technological trends has gained strong momentum in both academia and industry. A number of studies have reported key technological trends, potential issues and future research aspects which can bring 6G into reality; see, e.g., [14-15]. In [14-15], the authors provide speculative studies which address use cases, trends and technologies and briefly discuss the associated challenges and future research aspects. In this article, we adopt an approach that analyzes the research challenges associated with 6G networks. We expect that a combination of the evolution of current networks and breakthrough technologies will be investigated in the future, and we believe our findings will promote research efforts toward promising technologies that meet the stringent demands of 6G.
An overview of 6G is illustrated in figure 2, which highlights key aspects such as localization, data rate, capacity and reliability in terms of energy per bit, jitter, latency and frame data rate [16]. In addition, an overview of 1G to 6G and their applications is presented in figure 3 [17], and a comparison between 5G and 6G parameters is provided in Table 1 [16], [18].
**Figure 2. 6G wireless systems overview.**
The studies discussed in [10], [15-16] and [18-21] have focused on key enabler technologies of 6G. To the best of our knowledge, we are among the few research groups to have provided a taxonomy and the state of the art for 6G, as given in Table 1. Additionally, we have discussed potential challenges and future research directions, and we suggest risk mitigation techniques in this article. Hence, our major contributions include:
1) A comprehensive overview of various 6G topics, highlighting recent academic activities and industry developments in different aspects of 6G.
2) The emerging key technologies are outlined with detailed explanation of potential issues.
3) An overview of 6G applications and future aspects are discussed.
4) The state-of-the-art towards 6G is provided.
5) A taxonomy based on machine learning techniques, communication technologies, computing
technologies, use cases, key enablers and network technologies is provided.
6) Research challenges and associated solutions are discussed.
7) The privacy and security concerns are investigated and presented.
8) An outlook for future directions is provided.
9) For researchers, this review opens new horizons by pointing toward future research perspectives, as it includes new references which can enable the pursuit of 6G.
The rest of our survey is organized as follows. The evolution of mobile communication networks is presented in Section 3. We briefly discuss current research toward 6G in Section 4. Section 5 outlines the state-of-the-art advances toward enabling 6G wireless networks. Section 6 presents the devised taxonomy. Key areas in 6G networks are listed in Section 7. Section 8 presents the vision and key features of 6G. Potential challenges and applications are discussed in Sections 9 and 10, respectively. Finally, we conclude this study in Section 11.
**TABLE 1.** COMPARISON SUMMARY OF THE EXISTING SURVEYS

| Reference | Use cases | Key enablers | Taxonomy | Recent advances |
|---|---|---|---|---|
| Giordani et al. [10] | Yes | Yes | No | No |
| Saad et al. [15] | Yes | Yes | No | No |
| Chen et al. [18] | No | Yes | No | No |
| Letaief et al. [19] | Yes | Yes | No | No |
| Akyildiz et al. [20] | Yes | Yes | No | No |
| Kato et al. [21] | No | No | No | No |
| Yang et al. [22] | No | Yes | Yes | Yes |
| Zhang et al. [23] | Yes | Yes | No | No |
| Khan et al. [24] | Yes | Yes | Yes | Yes |
| Tariq et al. [25] | No | No | Yes | Yes |
| Zong et al. [26] | No | Yes | Yes | Yes |
| Our survey | Yes | Yes | Yes | Yes |
**3. Evolution of Mobile Communication Network**
A phenomenal advancement has been witnessed in mobile communication networks since the emergence of the first generation in the 1980s. This advancement spans several generations with different techniques, technologies, data rates, capacities and standards. A new generation has been introduced roughly every ten years [27]. Figure 3 presents the evolution of mobile networks.
**Figure 3. Evolution of wireless mobile technologies**
_3.1. From 1G to 3G_
1G was developed in 1980 for voice calling with a data rate of 2.4 kbps. Data were transmitted as analogue signals without any universal wireless standard, which led to several drawbacks, e.g., security issues, low transmission efficiency and hand-off problems [28]. In 1990, 2G was introduced; it depended on digital modulation techniques, e.g., Code Division Multiple Access (CDMA) and Time Division Multiple Access (TDMA). It supported a data rate of 64 kbps, featuring both the Short Message Service (SMS) and better voice calling. The Global System for Mobile Communications (GSM) was the dominant standard of the 2G era [29]. In 2000, 3G was introduced with the aim of transmitting data at high speed. The 3G network provides high-speed internet access and a 2 Mbps data transfer rate [30]. It offers advanced features compared to 1G and 2G, including video services, navigational maps, live streaming and web browsing. To support global coverage, the Third Generation Partnership Project (3GPP) was developed to define the technological aspects and standardizations [31].
_3.2. 4G_
4G was introduced in the 2000s. It is an IP-based network that can offer data rates of up to 1 Gbps for downlink and 500 Mbps for uplink communication. It reduces latency and enhances spectral efficiency, and is capable of meeting the requirements of video chatting, HD TV content, and Digital Video Broadcasting (DVB). In addition, it provides automatic roaming to facilitate wireless service anywhere and at any time.
_3.3. 5G_
5G has completed its initial testing and standardization processes and paved its way to commercialization in a few countries: China, the UK, South Korea, and the USA have launched 5G technology [32]. The main targets of 5G are to revolutionize energy efficiency, network reliability, latency, data rates, and massive connectivity [33]. It makes use of both mmWave and microwave bands to push data rates up to 10 Gbps. 5G features access technologies such as Filter Bank Multi-Carrier (FBMC) and Beam Division Multiple Access (BDMA). Emerging technologies, e.g., Software-Defined Networking (SDN), massive MIMO, Information-Centric Networking (ICN), and network slicing, are also integrated into 5G [34-36]. IMT-2020 has suggested three key usage scenarios: massive Machine-Type Communications (mMTC), enhanced Mobile Broadband (eMBB), and Ultra-Reliable Low-Latency Communications (URLLC).
_3.4. Vision of Green 6G_
As 5G enters its commercialization phase, the research fraternity around the globe has started focusing on the future 6G technology, which is expected to launch in the 2030s. The progress of 5G yields the conceptualization of 6G, with the capability to unleash the promise of ample autonomous services. Specifically, 6G is envisaged to bring innovative wireless techniques and novel network designs into perspective. 6G can bring remarkable advancement in wireless technology, with ultra-low latency in microseconds and data rates up to 1 Tbps. Its capacity is envisioned to be 1000 times higher than 5G through spatial multiplexing and THz-frequency communication. One main objective of 6G is ubiquitous coverage, incorporating undersea and satellite communication to support global coverage [37]. Haptic communication, quantum machine learning, and energy-harvesting technologies will have a profound impact on realizing future sustainable green networks. More precisely, 6G has the capability for high-precision communications for tactile services, enabling the desired sensing experience across smell, touch, vision, and hearing. 6G is defined by three service classes: ultrahigh data density (uHDD), ultrahigh-speed-with-low-latency communications (uHSLLC), and ubiquitous mobile ultra-broadband (uMUB). Table II compares 5G and 6G, while Table III summarizes the evolution from 1G to 6G. 6G is expected to fill the radio-coverage gaps of previous generations: it will accommodate the whole surface area of the earth, including airspace, forests, deserts, and oceans. The complete vision of 6G can be seen in Figure 4.
The main technical aspects for realizing this vision of 6G include:
- Meeting extremely high levels of communication reliability.
- Offering ultra-high throughput and high data rates to support massive connectivity, even in extreme conditions.
- Providing the quality of immersion and unified quality of experience required by extended reality (XR) applications.
- Delivering real-time tactile feedback to meet the needs of targeted haptic applications such as digital healthcare.
- Integrating AI to enable seamless connectivity and control of environments such as smart cities, smart industry, self-driving systems, and smart structures.
**Figure 4. Vision of future 6G technology**
**TABLE II.** EVOLUTION FROM 5G TO 6G

| Key parameters or characteristics | 5G | 6G |
|---|---|---|
| Reliability | 10^-5 | 10^-9 |
| Mobility (km/h) | 350-500 | 1000 |
| End-to-end latency (ms) | 1 | 0.1 |
| Area traffic capacity (Mbps/m^2) | 10 | 1000 |
| Energy efficiency (Tb/J) | NA | 1 |
| Spectral efficiency (b/s/Hz) | 0.3 | 3 |
| Peak spectral efficiency (b/s/Hz) | 30 | 60 |
| Connection density (devices/km^2) | 10^6 | 10^7 |
| User data rate | 1 Gb/s | >10 Gb/s |
| Peak data rate | 10-20 Gb/s | >100 Gb/s |
| Channel bandwidth (GHz) | 1 | 100 |
| Receiver sensitivity | -120 dBm | <-130 dBm |
| Coverage | 70% | >99% |
| Position precision | m | cm |
| Localization precision | 10 cm in 2D | 1 cm in 3D |
| Delay | ms | <ms |
| Processing delay | 100 ns | 10 ns |
| Jitter | 1 µs | 0.1 µs |
| Automatic integration | Partial | Full |
| Haptic communication | Partial | Full |
| THz communication | No | Yes |
| XR/AI integration | Partial | Full |
| Intelligent Reflecting Surface (IRS) | Conceivable | Yes |
| Satellite integration | No | Full |
| Cell-free networks | Possible | Yes |
| Real-time buffering | No | Yes |
| Pervasive AI | No | Yes |
| VLC | No | Yes |
| Center of gravity | User | Service |
| Technique | m-MIMO | SM-MIMO, UM-MIMO |
| Energy consumption | Low | Ultra-low |
| Device lifetime | 10 years | 40 years |
| Dependability | Not considered | Relevant |
| End-to-end optimization | Not considered | Relevant |
| Device type | Sensors, smartphones and drones | Distributed Ledger Technology (DLT) devices, smart implants, Connected Robotic and Autonomous Systems (CRAS) |
| Services | eMBB, URLLC, mMTC | HCS, MPS, MBRLLC, mURLLC |
**TABLE III.** 1G-6G TECHNOLOGIES CHARACTERISTICS

| Feature | 1G | 2G | 3G | 4G | 5G | 6G |
|---|---|---|---|---|---|---|
| Time span | 1980-1990 | 1990-2000 | 2000-2010 | 2010-2020 | 2020-2030 | 2030-2040 |
| Highlight | Mobile | Digital format | Internet connectivity | Real-time applications | Extreme data rates | Secrecy, privacy, security |
| Core network | PSTN | PSTN | Packet N/W | Internet | IoT | IoE |
| Utility | Voice calling | SMS | Image | Telecasting | 3D VR/AR | Quantum |
| Framework | SISO | SISO | SISO | MIMO | Massive MIMO | Intelligent Surface |
| Frequency band | 800 MHz | 890-960 MHz | 1.94-2.14 GHz | — | 30-300 GHz | 0.3-3 THz |
| Maximum data rate | 2.4 kb/s | 144 kb/s | 2 Mb/s | 1 Gb/s | 35.46 Gb/s | 100 Gb/s |
| Transmission range | — | 35 km | 10 km | 5 km | Below 1 km | Below 1 km |
| Multiplexing | FDMA | FDMA, TDMA | CDMA | OFDMA | OFDMA | Smart OFDMA plus IM |
| Application | Voice calling | Macro calling | Macro cell | Macro cell | Pico cell | Small cell |
**4. Current Research Progress towards 6G**
Several researchers have presented visions for 6G, and many research institutes have started planning activities [38-40]. Referring to the 6G vision, David et al. [41] suggested that service classes and the battery lifetime of mobile devices need more attention than latency and data rates. Raghavan et al. [42] pointed out that 6G research should consider device manufacturing capabilities to design a closed loop of research plans. Yastrebova et al. [43] predicted new communication aspects, including the tactile internet, self-driving vehicles, UAVs, and holographic connectivity. The tactile internet (TI) is an emerging paradigm envisaged to catalyze a plethora of new services in areas such as education, eHealth, and smart manufacturing. To fully realize the TI, the communication infrastructure (CI) must meet stringent design requirements, in particular high reliability and extremely low latency, and it must fortify data privacy and security without imperiling the latency requirements. To meet these desiderata and address new services with distinctive features, the maturing of disruptive 6G wireless communication technologies is of paramount significance.

It is expected that future wireless communications will have a reliability similar to that of wired communications. Future trends and driving applications are discussed in references [38] and [39]. In the future, blockchain technology will offer satisfactory performance and simplify network controllability. Tariq et al. [25] proposed human-centric services and key performance indicators, along with a comprehensive comparison between 5G and 6G. Some recent articles have discussed practical scenarios, including 6G data centers [44], intelligent reflecting surfaces (IRSs) [45], and multiple access [46]. The IRS is seen as an energy-efficient technology for enlarging the coverage area of future wireless networks at low complexity and implementation cost. Networking patterns such as 3D super-connectivity, decentralized resource allocation, and cell-less architecture are outlined in some studies [47-48]. Mahmood et al. [49] elaborated on vertical-specific wireless networks and machine-type communications (MTCs), which can provide a unified solution for seamless connectivity in vertical industries.

Reconfigurable intelligent surfaces, artificial intelligence (AI), and terahertz communications are attractive technological aspects of 6G. Rappaport et al. [50] provided a comprehensive study of THz communications with practical demonstrations. Stoica et al. suggested that AI-integrated 6G can empower new features such as opportunistic set-up, self-configuration, context awareness, and self-aggregation [51]; moreover, AI-empowered 6G will enable a paradigm-shifting perspective in mobile networks [52]. A quantum machine learning algorithm for AI-empowered 6G is discussed in [53]. Renzo et al. envisaged reconfigurable intelligent surfaces as the hardware foundation of AI [54], and reconfigurable intelligent surfaces have been proposed for massive MIMO in earlier studies [55-57]. Here we have presented some standardization efforts and research activities; a summary of research studies on 6G is provided in Table IV.
**TABLE IV.** SUMMARY OF RECENT RESEARCH STUDIES ON 6G

| Reference | Year | Research contributions | Key focus |
|---|---|---|---|
| Katz et al. [3] | 2018 | Sheds light on the initial research of 6G and the 6Genesis Flagship Program (6GFP); includes motivation, trends, and future aspects of 6G. | 6Genesis Flagship Program (6GFP) |
| Letaief et al. [19] | 2019 | Discusses AI-based 6G key technologies and applications; presents key trends in the evolution to 6G. | Artificial intelligence |
| Yang et al. [22] | 2019 | Overview of promising 6G techniques and key requirements; highlights potential challenges, solutions, and security approaches. | 6G vision and potential techniques |
| Zhao et al. [45] | 2019 | Outlines 6G challenges, future directions, and a possible roadmap for AI-based cellular networks. | MIMO and intelligent reflecting surfaces |
| Rappaport et al. [50] | 2019 | Presents novel approaches, promising discoveries, key technologies, and potential challenges for 6G; discusses current standard-body regulations for applications above 100 GHz; provides in-depth details of THz products and applications. | Challenges and opportunities of 6G |
| Stoica et al. [52] | 2019 | Outlines the AI revolution for future 6G networks. | AI |
| Nawaz et al. [64] | 2019 | Comprehensive study of ML, QC, and QML, their challenging issues and potential benefits; presents a QC-aided, QML-enabled framework for future networks spanning the user end, air interface, network edge, and network infrastructure; identifies groundbreaking future research directions for B5G networks. | Quantum machine learning |
| Dang et al. [4] | 2020 | Presents a systematic framework of 6G applications; highlights communication technologies and key potential features of 6G; investigates issues that can hamper 6G deployment. | Secrecy, security and privacy |
| Giordani et al. [10] | 2020 | Discusses technologies that will develop wireless networks toward 6G; presents key enablers, use cases, and a full-stack overview of 6G requirements. | 6G use cases and technologies |
| Saad et al. [15] | 2020 | Presents a holistic vision of 6G; identifies primary drivers, technological trends, and applications; proposes a new set of service classes; outlines a comprehensive research agenda and solid recommendations for the 6G roadmap. | 6G performance components |
| Gui et al. [11] | 2020 | Outlines 6G core services, eight KPIs, and two centricities; presents a 6G architecture, potential challenges, possible solutions, and four application scenarios. | 6G key performance indices (KPIs) and core services |
| Mao et al. [13] | 2020 | Proposes an AI-enabled adaptive security strategy for IoT networks in 6G, with IoT devices linked to cellular networks through mmWave and THz; uses EKF for efficient energy harvesting to avoid energy exhaustion. | QoS and security for 6G |
| Kato et al. [21] | 2020 | Analyzes machine learning techniques for 6G and highlights 10 crucial challenges for advancing ML in 6G. | Challenges in machine learning for 6G |
| This review study | — | Devises a taxonomy based on computing technologies, networking technologies, communication technologies, use cases, machine learning algorithms, and key enabler technologies; briefly discusses 6G key projects, potential challenges, and applications. | 6G technologies, key enablers, key areas, use cases, key projects, potential challenges and applications |
Apart from the above discussions, some countries around the globe have started 6G projects to reshape the framework of 6G networks. In 2019, the University of Oulu, Finland, started the 6Genesis Flagship Program [58]. In March 2019, the 6G research race was triggered at the first 6G Wireless Summit, organized in Levi, Finland. Many seminars and workshops have been conducted worldwide, such as Carleton 6G, the Wi-UAV Globecom 2018 workshop, and the Huawei 6G Workshop, which was organized as a virtual event in March 2020 [59]. Beyond academia, 6G has also attracted governments, industrial organizations, and standardization bodies. In 2018, "Enabling 5G and beyond" was launched by IEEE. Google has launched the Loon Project [60] to provide internet connectivity to five billion users in remote communities. At the end of 2018, the Ministry of Industry and Information Technology, China, made an official announcement to expand 6G research and investment. The Korea Advanced Institute of Science and Technology (KAIST) has collaborated with LG Electronics to establish a 6G research center, and SK Telecom, Ericsson, and Nokia are collaborating in 6G research. The Federal Communications Commission (FCC), USA, has opened the 95 GHz-3 THz spectrum for research contributions on 6G. Moreover, the "Networking Research beyond 5G" project has been launched in Japan to use the 100 GHz to 450 GHz THz spectrum. Other countries, such as Germany, Australia, and Sweden, are also carrying out research on 6G. We have summarized country-wise 6G initiatives in Table V.
**TABLE V.** 6G PROJECTS IN DIFFERENT COUNTRIES

| Country | Year | Research initiative |
|---|---|---|
| Finland | 2018 | The University of Oulu launched a 6G initiative in 2018. UROS and the University of Oulu announced a strategic partnership. The University of Oulu acquired a Toyota self-driving car for research purposes. |
| China | 2019 | 37 research institutes collaborated on 6G research and launched the National 6G Technology Research and Development Promotion Working Group. |
| USA | 2019 | The USA opened the spectrum between 95 GHz and 3 THz. The BWN Lab at the Georgia Institute of Technology is working on 6G research projects. |
| South Korea | 2019 | KAIST collaborated with LG Electronics to establish a 6G research center. |
| Germany and France | 2019 | The German and French ministries officially announced the development of a 6G combat aircraft to bring a revolution in military affairs. TU Berlin established a new Einstein fellowship to strengthen 6G research. |
| Japan | 2020 | NTT, Sony, and Intel started collaborating on 6G research. Japan also plans to invest US$2 billion in industrial 6G research. |
| Saudi Arabia | 2020 | Research groups at KAUST initiated 6G research. |
| Brazil | 2020 | The 6G Brazilian Project was introduced to develop a nationwide framework for 6G networks. |
| South Korea | 2021-2026 | The Government of Korea plans to spend $169 million to secure 6G and will start a 6G pilot project around 2026. |
**5. 6G: State-of-the-art**
In this section, we present state-of-the-art approaches to enable 6G. Federated learning for edge networks, including a Stackelberg-game-based incentive mechanism, hardware-software co-design, and resource optimization, is discussed in [61], together with potential challenges and future research plans. A 3D wireless cellular network using drones is demonstrated in [62]: the authors provide an analytical approach for frequency planning and use truncated-octahedron cells to minimize the number of drone base stations, considering the two issues of network planning and 3D cell association. Opportunities and critical challenges in THz communication are presented in [63], where Mumtaz et al. investigate different standardization activities and available bands for THz communication; it remains important to define key 6G standards that incorporate the THz range at this stage. Nawaz et al. [64] presented quantum machine learning in the context of 6G: they outlined state-of-the-art machine learning techniques and quantum communication schemes, and investigated the research challenges of implementing quantum machine learning in 6G. In [65], Salem et al. demonstrated an EM-based model for blood using effective-medium theory and discussed its advantages for healthcare applications. S. Canovas-Carrasco et al. [66] developed a THz-communication architecture for nano-networks: they designed two devices, nanorouters and nanonodes, carried out THz communication between nanonodes, mitigated path loss and molecular-absorption noise, and enhanced the transmission rate through energy harvesting from blood flow and an additional external source. X. Wang et al. [67] proposed machine-learning-based In-Edge AI to empower intelligent edge computing, with good results for edge computing and caching; double deep Q-learning networks (DDQN), federated-learning-based DDQN, and centralized DDQN are proposed in this article. Finally, Basar et al. demonstrated that intelligent surfaces can enhance the spectral efficiency of 6G networks [68].
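The federated-learning workflow described above — local training at edge clients followed by server-side aggregation — can be sketched with a minimal federated-averaging (FedAvg) loop on synthetic data. This is an illustrative toy, not the incentive-aware scheme of [61]; the linear model, learning rate, and data split are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(w, X, y, lr=0.1, epochs=5):
    """One client's local training: a few gradient steps on linear regression."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# Synthetic data split across three edge clients, all drawn from y = 2x + 1.
true_w = np.array([2.0, 1.0])
clients = []
for _ in range(3):
    X = np.column_stack([rng.uniform(-1, 1, 50), np.ones(50)])
    y = X @ true_w + rng.normal(0, 0.01, 50)
    clients.append((X, y))

# Federated averaging: clients train locally, the server averages weights.
w_global = np.zeros(2)
for _round in range(30):
    local_ws = [local_update(w_global, X, y) for X, y in clients]
    w_global = np.mean(local_ws, axis=0)  # aggregation step at the server

print(w_global)  # approaches [2.0, 1.0] without raw data leaving the clients
```

The key privacy property is visible in the loop: only model weights cross the network, never the clients' raw samples.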
**6. Taxonomy**
We devise the taxonomy by considering communication technologies, computing technologies, networking technologies, machine learning schemes, and key enablers, as shown in Figure 5. Further details are provided in the subsections below.
**Figure 5. Taxonomy of 6G wireless systems.**
_6.1. Communication Technologies_
_6.1.1. Terahertz Communication_
One key solution to the existing spectrum crunch is to utilize the THz band, which is expected to complement the infrared (IR) and mmWave bands by offering considerably wider bandwidth and supporting promising services with higher data-rate requirements. It operates in the region of 100 GHz to 10 THz, as shown in Figure 6, and enables high data rates and high-frequency connectivity. The main issues preventing the commercial use of THz are high penetration loss, molecular absorption, propagation loss, RF circuitry, and antenna engineering challenges. THz communication can be improved by selecting frequency bands that are less affected by molecular absorption. THz communication is characterized by high security, moderate energy consumption, short range, and robustness to atmospheric conditions [69-71]. In fact, low-frequency channel models cannot capture the full characteristics of high-frequency THz communication, which experiences high molecular absorption and attenuation. Therefore, it is important to design realistic channel models for THz links that address the LOS path and allow the performance limitations of this technology to be investigated. THz communication also requires rethinking current solutions and finding new approaches that provide seamless functionality over the complete THz band. For example, designing efficient beamforming and tracking methods that can precisely and dynamically trace the location of THz-assisted devices is an open research problem. Additionally, research is needed to design tunable, intelligent, ultra-fast modulators to support reliable and efficient THz communication links. Other open research issues in THz communication include novel hardware architecture designs and the incorporation of massive MIMO and intelligent surfaces.

A dramatic increase in data traffic has been witnessed recently, and this exponential growth has put demands on coverage and data rates [72]. THz (0.1-10 THz) communication is envisaged to be among the key enabling technologies for future 6G: the THz band can facilitate ultra-fast massive data transfer to support a plethora of applications. The Federal Communications Commission (FCC) has opened the frequency band above 95 GHz [73] for future contributions. Researchers should pay attention to multiple factors, such as interference, circuit imperfections, and the high complexity of realistic communication channels, to enhance data rates. Although THz bands are used in object detection, imaging, and radio spectroscopy, they still need research attention in the wireless communication domain. The THz band lies between the IR and mmWave spectrum, as shown in Figure 6, and was previously known as "no-man's land". Recently, significant research progress has been made toward realizing wireless networks-on-chip (WNoC) in THz [74]. Z. Chen et al. [75] have provided a comprehensive survey of THz communications.
**Figure 6. THz spectrum [20].**
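The bandwidth argument for THz can be made concrete with the Shannon capacity C = B log2(1 + SNR) of an AWGN channel. The sketch below uses an assumed 10 dB SNR purely for illustration; real THz links additionally suffer the molecular absorption and path losses noted above:

```python
import math

def shannon_capacity_bps(bandwidth_hz, snr_db):
    """Shannon capacity C = B * log2(1 + SNR) for an AWGN channel."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear)

# Illustrative comparison at an assumed 10 dB SNR: a 100 MHz sub-6 GHz
# channel versus a 1 GHz mmWave channel versus a 10 GHz THz slice.
for label, bw in [("sub-6 GHz, 100 MHz", 100e6),
                  ("mmWave, 1 GHz", 1e9),
                  ("THz, 10 GHz", 10e9)]:
    c = shannon_capacity_bps(bw, 10.0)
    print(f"{label}: {c / 1e9:.2f} Gb/s")
```

At fixed SNR, capacity scales linearly with bandwidth, which is why the wide contiguous blocks available in the THz band are attractive for Tbps-class 6G links.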
_6.1.2. Visible Light Communication (VLC)_
6G will support high coverage by integrating undersea and space networks with terrestrial networks. As undersea and space/air environments differ from typical terrestrial networks, conventional EM waves cannot attain high-speed data transfer in these environments; optical communication using laser diodes can operate there to achieve high-speed data transmission. Meanwhile, visible light communication (VLC), operating between 430-790 THz [76], is a promising alternative to RF for future 6G. Since VLC operates in the THz range, it provides substantial bandwidth to meet the data-rate and capacity needs of 6G. With 6G in mind, a hybrid network can be designed to leverage the best of VLC and other optical or RF systems such as WiFi and Bluetooth (BLE).

VLC can be performed using light-emitting diodes (LEDs), with a photodetector or solar cell as the receiver. It can be used for several applications, including indoor positioning, energy harvesting, diver-to-diver communication, vehicle-to-vehicle communication, and underwater networks [77]. VLC offers inherent benefits, including high data rates, safety, low-cost deployment, robustness against interference, high energy efficiency, and an ultra-wide frequency band, and it can be employed in future 6G applications. The two main functions of VLC are communication and illumination. Compared with RF, VLC systems are considered intrinsically secure. The technology has been successfully used in an extensive range of applications, including underwater mines, visible-light identification systems, underwater communication, and vehicular communication. However, VLC faces several challenges, such as coverage, mobility, inter-cell interference, and LED connectivity to the internet [77]. Specifically, because of the broadcast nature of VLC, these systems are vulnerable to eavesdropping threats in public places. The functionalities of VLC systems differ from those of RF systems, which must be taken into account when developing physical layer security (PLS) strategies. For example, VLC channels are real-valued and quasi-static; such functional constraints must be reconsidered in the optimization and performance analysis of PLS strategies for VLC systems. It is also important to mitigate the mobility issue to ensure seamless connectivity.
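Since VLC modulates light intensity, a common baseline scheme is on-off keying (OOK). The toy simulation below estimates the bit error rate of such a link under an assumed Gaussian receiver-noise model with a mid-amplitude detection threshold; it ignores ambient light, multipath, and LED nonlinearity:

```python
import numpy as np

rng = np.random.default_rng(1)

def vlc_ook_ber(n_bits=100_000, amplitude=1.0, noise_std=0.2):
    """Toy on-off-keying VLC link: LED on/off intensities, additive
    Gaussian receiver noise, threshold detection at half amplitude."""
    bits = rng.integers(0, 2, n_bits)
    tx = bits * amplitude                       # intensity modulation: light is non-negative
    rx = tx + rng.normal(0, noise_std, n_bits)  # photodetector noise (Gaussian assumption)
    detected = (rx > amplitude / 2).astype(int)
    return np.mean(detected != bits)

print(vlc_ook_ber(noise_std=0.2))  # roughly Q(2.5), i.e. on the order of 6e-3
print(vlc_ook_ber(noise_std=0.4))  # noisier receiver -> markedly higher BER
```

The non-negativity of the transmitted signal is the key difference from RF baseband models and is one reason PLS and modulation designs must be rethought for VLC.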
_6.1.3. 3D Communication_
3D communication is another leading aspect of 6G, integrating airborne and ground networks. In 3D communication, low-orbit satellites and unmanned aerial vehicles (UAVs) can be used as base stations (BSs) [78]. Compared with 2D, 3D communication has a significantly different nature because of the altitude dimension; thus, novel techniques are required to handle mobility and resource allocation.
_6.1.4. Molecular Communication_
Advances in nanotechnology enable the manufacturing of biosensors, implantable chips, and nano-robots, with applications such as biomedicine and nanoscale sensing [79]. In particular, biomedical applications can enhance healthcare through the monitoring of body organs and intelligent drug delivery. Establishing a connection between nanodevices and the internet allows information transfer and effective communication. The Internet of Bio-Nano-Things (IoBNT) can connect biological entities and nanodevices [79], and combining body area networks with the IoBNT offers a feasible way to enhance healthcare. This technique makes use of shorter wavelengths to communicate at short ranges, on the order of centimeters to meters. The key challenges in this technique are channel modeling and transceiver design.
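For diffusion-based molecular channels, a standard starting point for channel modeling (a textbook result, not specific to [79]) is Fick's law: after an impulsive release of Q molecules in free 3D space, the concentration at distance d and time t is c(d,t) = Q (4πDt)^(-3/2) exp(-d²/(4Dt)). A sketch with assumed illustrative parameters:

```python
import math

def concentration(Q, D, d, t):
    """Molecules per unit volume at distance d (m) and time t (s) after an
    impulsive release of Q molecules in free 3D space (Fick's diffusion)."""
    return Q / (4 * math.pi * D * t) ** 1.5 * math.exp(-d * d / (4 * D * t))

# Assumed illustrative values: 1e6 molecules, D ~ 1e-9 m^2/s (a small
# molecule in water), receiver 10 micrometers away.
Q, D, d = 1e6, 1e-9, 10e-6

# In 3D, the concentration at the receiver peaks at t_peak = d^2 / (6D),
# which sets the natural symbol duration of a diffusion channel.
t_peak = d * d / (6 * D)
print(t_peak, concentration(Q, D, d, t_peak))
```

The long diffusive tail after t_peak is what makes inter-symbol interference the central transceiver-design challenge mentioned above.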
_6.1.5 Quantum Communication_
Another emerging technology is quantum communication, which will provide considerable security, long-distance communication, and higher data rates in 6G networks [80-81]. It is a technique for delivering a quantum state from a sending component to a receiving component, and it can execute tasks that cannot be performed with classical techniques. Appealing contributions of quantum communication include quantum networks, Quantum Key Distribution (QKD), quantum teleportation, Quantum Secret Sharing (QSS), and Quantum Secure Direct Communication (QSDC). The strong security mechanism of quantum communication makes it an appropriate technology for future 6G: in particular, quantum entanglement, the no-cloning theorem, superposition, and non-locality offer strong privacy and security. The next generation of applications enabled by quantum communication includes brain-computer interaction (BCI), the tactile internet, and intelligent communications. As it is difficult to achieve both high data rates and long-distance communication [82], new repeaters can be designed to achieve high-rate, secure long-distance links. Some research groups have already started working on quantum key distribution (QKD) protocols. UAVs, high-altitude stations, and satellites can be selected as key-distribution or regeneration nodes, and single-photon emitter devices can operate as quantum devices above absolute-zero temperature. A summary of existing research surveys is given in Table VI.
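To make QKD concrete, the sketch below is a purely classical toy simulation of BB84 basis sifting — no actual quantum states, no eavesdropper, no channel noise. Roughly half of the raw bits survive sifting, and the sifted keys agree exactly under these idealized assumptions:

```python
import random

random.seed(42)

def bb84_sift(n=32):
    """Toy classical simulation of BB84 key sifting (no eavesdropper, no noise).
    Alice encodes random bits in random bases; Bob measures in random bases.
    Only positions where the bases match contribute to the sifted key."""
    alice_bits  = [random.randint(0, 1) for _ in range(n)]
    alice_bases = [random.choice("+x") for _ in range(n)]  # '+' rectilinear, 'x' diagonal
    bob_bases   = [random.choice("+x") for _ in range(n)]
    # If Bob's basis matches Alice's, he reads her bit; otherwise his outcome is random.
    bob_bits = [b if ab == bb else random.randint(0, 1)
                for b, ab, bb in zip(alice_bits, alice_bases, bob_bases)]
    # Public basis comparison: keep only the matching-basis positions.
    key_a = [b for b, ab, bb in zip(alice_bits, alice_bases, bob_bases) if ab == bb]
    key_b = [b for b, ab, bb in zip(bob_bits,  alice_bases, bob_bases) if ab == bb]
    return key_a, key_b

ka, kb = bb84_sift()
print(len(ka), ka == kb)  # about half the raw bits survive; the keys agree
```

In a real BB84 run, the no-cloning theorem guarantees that an eavesdropper measuring the photons introduces detectable errors in the sifted key, which this classical toy deliberately omits.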
**TABLE VI.** SUMMARY OF THE EXISTING SURVEYS

| Technology | Reference | Security and privacy challenges |
|---|---|---|
| Artificial intelligence | [83] | Malicious threat |
| Artificial intelligence | [84] | Communication |
| Artificial intelligence and quantum communication | [85] | Encryption |
| AI | [86] | Access control |
| AI | [87] | Authentication |
| Blockchain | [88] | Communication |
| Blockchain | [89] | Access control |
| Blockchain | [90] | Authentication |
| Visible light communication | [91] | Malicious threat |
| Visible light communication | [92] | Communication |
| Terahertz communication | [93] | Malicious threat |
| Terahertz communication | [94] | Authentication |
| Quantum communication | [95] | Encryption |
| Molecular communication | [96] | Authentication |
| Molecular communication | [97] | Encryption |
| Molecular communication | [98] | Malicious threat |
_6.2. Networking Technologies_
Innovative networking technologies for 6G include 3D networks, optical networks, bio-networks, and nano-networks [99]. Molecular communication is used to operate the Nano-Internet-of-Things (N-IoT), and nanometer-range devices can be designed using metamaterials and graphene, while the Bio-Internet-of-Things (B-IoT) is used for biologically based IoT communication [100]. N-IoT and B-IoT are core components of emerging 6G devices: physical-layer technologies and novel routing schemes should be designed, and efficient biodevices and nanodevices developed, for B-IoT and N-IoT. In addition, new models for 3D communication must be devised.
_6.3. Computing Technologies_
6G systems include various smart applications that generate large amounts of data. Intelligent data analytics can be carried out using quantum and edge computing technologies. In the coming few years, quantum computing will pave its way to the commercial market and will be a great threat to existing cryptographic techniques. Quantum computing will revolutionize the 6G network with data rates not available today [101], [102], and it can be used in 6G to detect, mitigate, and prevent security vulnerabilities. An important characteristic of quantum communication is a secure channel for data encryption; in the future, quantum channels will replace noiseless classical channels to attain extreme levels of reliability. This advantage makes quantum computing appropriate for 6G smart applications. Similarly, integrating physical-layer security schemes with post-quantum cryptography will help ensure secure 6G communication. Several 6G applications, including terahertz communication, terrestrial wireless networks, satellite communication, and underwater communication systems, have the potential to use quantum communication protocols, e.g., quantum key distribution (QKD). Other emerging features are quantum encryption and intelligent edge computing, which ensure privacy and storage capability with low latency [103]. Z. Zhou et al. [104] demonstrated energy-efficient edge computing for vehicular networks.
_6.4. Key Enablers_
The key enablers of the 6G network are network slicing, blockchain, AI, homomorphic encryption, edge intelligence, and photonics-based cognitive radio. This section discusses some of these key enablers for future 6G.
_6.4.1. Blockchain_
Blockchain is a distributed-ledger-based database for the secure registration and updating of
transactions [105]. It maintains a digital ledger in a distributed and secure manner: the ledger
is cryptographically sealed and all transactions are stored in chronological order. It is an
emerging candidate for modernizing internet services, providing an auditable, decentralized and
secure way to exchange and authenticate information. Blockchain offers numerous advantages such
as integrity, pseudonymity, proof of provenance, non-repudiation, immutability and
disintermediation. Its anonymity and decentralized tamper-resistance make blockchain technology
ideal for certain applications [106]. In 2018, FCC commissioner Jessica Rosenworcel highlighted
blockchain technology at the Mobile World Congress Americas (MWCA) as a revolution for future
generations [107]. Blockchain provides secure access for network entities, and its tamper-proof
distributed ledger strengthens its security properties [108]. Blockchains are also beneficial for
network access and resource orchestration. X. Liang et al. [109] showed that administration costs
can be reduced through blockchain-based decentralized control mechanisms. Moreover, spectral
efficiency can be enhanced by integrating blockchain into spectrum management. In 2020, F. Jameel
et al. presented a survey on reinforcement learning in blockchain and explained its integration
with the industrial Internet-of-Things (IIoT) [110]. Blockchain will enable smart healthcare,
smart grids and smart supply chains [111-112]. It is identified as one of the key enablers of
future 6G technology, and several research efforts have leveraged its capabilities to enhance
both the use cases of the 6G ecosystem and its technical aspects. Despite these advantages,
blockchain also faces challenges including high energy consumption, high latency, reliability
and scalability [113].
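The chained-hash structure that gives the ledger its tamper-evidence can be sketched in a few lines of Python. This is a toy illustration, not a production ledger; the block structure and field names are our own, and consensus/mining is omitted entirely:

```python
import hashlib
import json

def block_hash(block):
    """Deterministic SHA-256 hash of a block's contents."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(index, transactions, prev_hash):
    return {"index": index, "transactions": transactions, "prev_hash": prev_hash}

def build_chain(tx_batches):
    """Chain blocks by embedding each predecessor's hash."""
    chain = [make_block(0, [], "0" * 64)]  # genesis block
    for i, txs in enumerate(tx_batches, start=1):
        chain.append(make_block(i, txs, block_hash(chain[-1])))
    return chain

def is_valid(chain):
    """Any edit to an earlier block breaks every later hash link."""
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = build_chain([["A->B: 5"], ["B->C: 2"]])
assert is_valid(chain)
chain[1]["transactions"][0] = "A->B: 500"  # tamper with history
assert not is_valid(chain)                 # chain is now detectably broken
```

Because each block commits to the hash of its predecessor, rewriting one transaction invalidates every subsequent link, which is the immutability property the text refers to.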
_6.4.2. Ubiquitous sensing_
Ubiquitous sensing uses 3D imaging and machine-vision-based video information for automatic
sensing and intelligent decision making [114-115]. J. M. Segui discussed RFID tags for
ubiquitous sensing in the automaker industry [116]. In future 6G, ubiquitous sensing will
potentially change every avenue of life. However, it also raises significant problems, e.g.,
lack of collaboration and the inability to ingest and utilize distributed information sources.
It can be used in clinical diagnostics, quality control and surveillance, and has already been
demonstrated in clinical diagnosis and environmental monitoring. The key elements of ubiquitous
sensing are implantable and wearable sensors.
_6.4.3. Homomorphic Encryption_
M. Salem et al. [117] used homomorphic encryption to secure biometric recognition and preserve
privacy. F. Tang et al. [118] demonstrated a deep learning technique over homomorphic encryption
to strengthen security properties. Homomorphic encryption can be used to protect copyrights and
preserve the privacy of multimedia transmission [119]. Catak et al. proposed a novel technique to
preserve privacy using homomorphic encryption and clustering methods [120]. This encryption
technique allows arithmetic operations to be performed directly on encrypted data, offering data
privacy without ever exposing the data in plain form.
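As a concrete toy illustration of computing on encrypted data, textbook RSA happens to be multiplicatively homomorphic: multiplying two ciphertexts yields a ciphertext of the product of the plaintexts. The parameters below are the classic tiny textbook values and are wholly insecure; they are chosen only to make the property visible:

```python
# Toy textbook-RSA demo of (multiplicative) homomorphic encryption.
# Parameters are tiny and insecure -- illustration only.
p, q = 61, 53
n = p * q                            # public modulus (3233)
e = 17                               # public exponent
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent via modular inverse

def encrypt(m):
    return pow(m, e, n)

def decrypt(c):
    return pow(c, d, n)

c1, c2 = encrypt(7), encrypt(11)
# Multiplying ciphertexts multiplies the underlying plaintexts:
assert decrypt((c1 * c2) % n) == 7 * 11   # 77, computed without decrypting the inputs
```

Fully homomorphic schemes extend this idea to both addition and multiplication, which is what enables privacy-preserving computation over outsourced data.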
_6.4.4. Edge Intelligence_
A promising enabler for the IIoT is edge intelligence, as it provides smart cloud services at
lower cost and with low latency [121]. Edge intelligence, formed by integrating edge computing
and AI [122], has received a tremendous amount of attention. It has a wide range of applications
including the energy internet, smart grids, networked UAVs, connected robots and autonomous
driving. However, gaps remain in finding solutions for big data, coded computing, system
modeling and scheduling schemes for edge intelligence. A particularly challenging issue in
industrial networks is ensuring edge service. Zhang et al. [121] demonstrated a blockchain- and
edge-intelligence-based IIoT framework to obtain secure and flexible edge service. Edge
intelligence can also be implemented in the cognitive Internet of Things to improve
interactivity and sensitivity: Zhang et al. [123] introduced CIoT, a new network paradigm, to
meet technical requirements such as efficient storage, handling big sensory data and integrating
multiple data sources.
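A back-of-envelope latency model shows why pushing computation to the edge matters. All numbers below (distances, link rate, compute times) are illustrative assumptions, not measurements:

```python
def round_trip_ms(distance_km, payload_bits, link_rate_bps, compute_ms):
    """Propagation (at ~2/3 c in fiber, ~200,000 km/s) + transmission + processing, in ms."""
    propagation = 2 * distance_km / 200_000 * 1000   # there and back
    transmission = payload_bits / link_rate_bps * 1000
    return propagation + transmission + compute_ms

# Illustrative numbers: 1 Mb payload over a 1 Gbps link.
# The distant cloud has faster compute, but propagation delay dominates.
cloud = round_trip_ms(distance_km=1500, payload_bits=1e6, link_rate_bps=1e9, compute_ms=2)
edge  = round_trip_ms(distance_km=5,    payload_bits=1e6, link_rate_bps=1e9, compute_ms=5)
print(f"cloud: {cloud:.1f} ms, edge: {edge:.1f} ms")
```

Under these assumptions the edge round trip is roughly a third of the cloud's, even though the edge node computes more slowly, which is the core argument for edge intelligence in latency-sensitive IIoT scenarios.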
_6.5. Use Cases_
It is important to define new use cases for the promising 6G technology. Innovative 6G services
include low-latency communication, reliable mobile broadband, the Nano-Internet-of-Things
(N-IoT), the Bio-Internet-of-Things (B-IoT), massive URLLC and autonomous connected vehicles.
We discuss some use cases for 6G below.
_6.5.1. Haptics communication_
Haptics communication is a technology based on tactile sensation for human-computer interaction.
It is a tangible feedback system that exploits the human sense of touch through motion, sensation
or force, enabling physical interaction between humans and remote objects. It is an innovative
research domain for understanding the core functions of human touch. Haptic devices such as
actuators and sensors allow users to sense and control objects in the virtual and real worlds.
These devices still face gaps in cost effectiveness as well as degrees of freedom, and
substantial design effort is needed to enable this technology in 6G. Van Den Berg et al. [124]
have investigated some challenges in realizing haptics communication over the tactile internet.
In order to realize the envisioned applications, haptics communication should offer tactile and
kinesthetic control simultaneously.
_6.5.2. Holographic communication_
Holographic communication enables remote connectivity with high accuracy. Generally, it is
multi-dimensional camera-image communication, which requires very high data rates (Tbps) [16].
Huang et al. [125] have discussed emerging trends and challenges for holographic communication
in 6G.
_6.5.3. Unmanned Mobility_
-----
This use case concerns autonomous connected vehicles, which enable enhanced traffic management,
smart infotainment, secure driving and unmanned mobility. Giordani et al. [126] discussed
unmanned mobility with safe-driving and autonomous-transportation features.
_6.5.4. Bio-Internet of Things_
This technology uses the IoT for communication among bio-devices, with clear advantages in the
smart healthcare sector. The performance characteristics of B-IoT must be defined in the same
way as those of N-IoT. A. Salem investigated wireless communication in the THz band considering
red blood cell (RBC) concentration in blood [65]. In 2018, S. Canovas-Carrasco et al. [66] used
a human-hand scenario to develop a nanoscale communication network. Thus, B-IoT can efficiently
enable 6G.
_6.6. Machine Learning Techniques_
Recently, machine learning has attracted great attention for enabling a wide range of
applications, and it can be a fundamental pillar of future 6G networks. Machine learning has
delivered efficient performance in various areas including game AI, autonomous driving, language
processing [127], IoT security [128], wireless-powered ambient backscatter communication [129],
vehicular networks and pattern recognition. Perspectives of ML in vehicular networks are shown
in figure 7. Generally, we divide ML into the categories discussed below.
**Figure 7. Perspectives of ML in vehicular networks**
_6.6.1. Quantum machine learning_
Quantum machine learning is another highly promising technology and has emerged as an exciting
paradigm; several research studies have been presented in this domain [120-133]. It combines
machine learning and quantum physics to design quantum machine learning models, using quantum
devices for intelligent, accurate and fast calculations and for improved control of quantum
systems. It is widely used in quantum mechanics and quantum biomimetics.
_6.6.2. Meta learning_
We have witnessed a dramatic rise in interest in meta-learning in recent years, with many
studies presented in this domain [134-136]. Meta-learning has potential uses in neural networks
[137], speech recognition [138] and the development of curiosity algorithms [139]. It can
address several conventional challenges related to data and computation bottlenecks.
-----
_6.6.3. Federated learning_
Federated learning (FL) has achieved widespread attention as it prevents the leakage of personal
information: model parameters are updated without collecting raw data. Several research studies
[140-142] have focused on FL from various aspects. T. Yang et al. [143] demonstrated FL to
improve Google keyboard query search. However, several issues, e.g., security, privacy, resource
allocation and cost, hinder the implementation of FL at large scale. FL also has some inherent
challenges such as incentive mechanism design, computation resource optimization and
communication. Some challenges for advanced machine-learning-based 6G are shown in figure 8.
**Figure 8. Challenges for advance machine learning based 6G.**
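The parameter-sharing idea behind federated learning can be sketched with a toy federated-averaging (FedAvg-style) loop on a one-dimensional linear model. The data, learning rate and round counts below are illustrative assumptions; note that only weights, never raw samples, are sent to the server:

```python
def local_update(weights, data, lr=0.1, epochs=5):
    """One client's local SGD on y ~ w*x (toy 1-D linear model); raw data never leaves."""
    w = weights
    for _ in range(epochs):
        for x, y in data:
            grad = 2 * (w * x - y) * x
            w -= lr * grad
    return w

def fed_avg(client_weights, client_sizes):
    """Server aggregation: average weighted by each client's sample count."""
    total = sum(client_sizes)
    return sum(w * n for w, n in zip(client_weights, client_sizes)) / total

# Two clients whose private data both follow y = 3x.
clients = [[(1.0, 3.0), (2.0, 6.0)], [(0.5, 1.5)]]
global_w = 0.0
for rnd in range(10):  # communication rounds
    updates = [local_update(global_w, d) for d in clients]
    global_w = fed_avg(updates, [len(d) for d in clients])
print(round(global_w, 2))  # converges toward 3.0
```

Real deployments add secure aggregation, client sampling and compression on top of this loop, which is where the resource-allocation and privacy challenges mentioned above arise.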
_7. Key areas in 6G networks_
Some 5G features have already applied AI in various applications. However, the traditional
network architecture limits AI-driven technologies, as it does not support intelligent radios
and distributed AI. Although a real-time intelligent edge is already deployed in 5G networks,
it cannot be fully controlled in real time; 6G networks can handle this scenario. In addition,
5G is limited to the ground level, so undersea and space communication is not possible.
Accordingly, we discuss some key areas and their potential issues below. Table VII provides a
summary of these key areas.
_7.1. Real-time intelligent edge_
The implementation of Unmanned Aerial Vehicle (UAV) networks is not fully possible with current
technologies, as such networks can only be controlled with real-time intelligence and extremely
low latency. Although 5G technology supports self-driving, prediction, self-awareness and
self-adaptation of network parameters are not featured [144]. Hence, a new technology is needed
to tackle these challenges, and 6G is highly feasible for enabling such AI-assisted services. As
AI becomes integrated into vehicular networks, it can support numerous security algorithms;
however, this integration can also cause various privacy and security challenges. In [145], Tang
et al. stated that both the physical environment and the network system should be taken into
account for a vehicular network, as this can mitigate malicious attacks.
_7.2. Intelligent Radio_
-----
In previous generations, transceiver algorithms and devices were developed together. Now,
however, transceiver algorithms and hardware can be separated, so a transceiver algorithm can
update itself on the basis of hardware information [146]. P. Yang et al. [147] stated that
software-defined network techniques can enable intelligent radio signals when combined with
multiple leveraged high-frequency bands. Shafin et al. [148] discussed AI-based cellular
networks; however, several requirements must be satisfied to enable intelligent radios. Tariq et
al. [25] investigated suspicious activities during the communication process, while Jiang et al.
[149] investigated signal-jamming problems during data transmission. There is a need to develop
simple yet highly effective security approaches, as communication systems suffer from security,
privacy and jamming attacks.
_7.3. Internet of Everything (IoE)_
6G networks will support the Internet of Everything (IoE), which refers to an extension of the
IoT covering people, data, processes and things. The key idea of IoE is to incorporate different
sensing devices to identify, monitor and take intelligent decisions in order to design new
operations. The sensing devices in IoE are capable of acquiring several parameters including
pressure, bio-signals, light, position, velocity and temperature. These devices are utilized in
application scenarios ranging from traffic, smart cities and digital healthcare to the
industrial sector. IoE will support intelligent decision-making features in 6G networks [146].
The incorporation of IoE and 6G will enhance services related to body sensor networks, smart
cities, smart grids, connected robotics, the internet of medical things and many more avenues.
It is envisaged that the fusion of IoE and
6G will enable various novel applications to create a new era with improved and agile features.
_7.4. 3D intercoms_
In future technology, network planning and optimization will be extended from two dimensions to
three [114]. 6G technology will feature 3D communication to support underwater, aerial and
satellite links; a 3D intercom can support this attribute with precise location and accurate
timing. Additionally, network resources, routing and mobility aspects also need optimization
strategies in the 3D intercom. By using the THz band, emerging technologies like molecular and
quantum communications can be used for distant communication [151]. Wei et al. [152]
investigated some security attacks from an authentication perspective. Likewise, the performance
of 6G networks in underwater environments remains unexplored; once 6G network operations
underwater become achievable, innovative applications and challenges will appear in the near
future. Different application scenarios empowered by 6G technologies are shown in figure 9.
-----
**Figure 9. Some applications supported by 6G**
**TABLE VII.** SUMMARY OF KEY AREAS.
| Key area | Relation to 6G | Characteristics | Summary |
| --- | --- | --- | --- |
| 3D intercoms | Coverage | Full 3D cover | It can provide coverage at ground, space and undersea levels. |
| Intelligent radio | Communication | Self-adaptive | This framework can configure and update dynamically according to the provided hardware information. |
| Distributed artificial intelligence | Decision-making capacity | Intelligent decision making | This system is capable of making intelligent decisions at various levels. |
| Real-time intelligent edge | Control capability | Real-time response | It can provide autonomous driving at an unfamiliar place in real time. |
**8. Vision and key features for future 6G**
This section highlights various key features of future 6G networks. In this regard, Table VIII
summarizes key features such as mMTC, eMBB, eMBB-Plus, BigCom and URLLC.
_8.1. Mobile Broadband Reliable Low Latency Communication (MBRLLC)_
Saad et al. [15] proposed MBRLLC by integrating eMBB and URLLC for the 6G system to enable low
latency and high reliability. The core aspect of MBRLLC is energy efficiency, and it also
considers the impacts of resource utilization, rate and reliability on the 6G network.
_8.2. eMBB-Plus_
eMBB-Plus [153-154] will provide a high quality of experience (QoE) in future 6G technology.
Notably, other key features like interference and handover will be able to exploit big data. Moreover,
-----
globally compatible connections and accurate indoor positioning are also expected. There is a
need to design strategic plans for eMBB-Plus without any compromise on the privacy, secrecy and
security of network users.
_8.3. Multi-Purpose 3CLS and Energy Services_
6G systems must support multi-purpose services; for example, they can wirelessly transfer power
to small devices through a WPT function. An MPS system suits CRAS; however, it should meet
computing, control, mapping, sensing and energy-consumption performance requirements.
_8.4. Big communications (BigCom)_
BigCom [155] in 6G will be capable of supporting high coverage in distinct areas. It will
maintain a resource balance to establish high-data-rate communication among users. Furthermore,
high AI and the THz band in 6G will take environmental and operational aspects into account for
better communication.
_8.5. Human-Centric Services (HCS)_
In [15], the authors proposed human-centric services (HCS), which require QoPE targets. Wireless
BCI is a related aspect of realizing HCS, in which the physiology of users defines the network
performance. For HCS, a function of raw QoE and QoS metrics must be defined.
_8.6. Secure ultra-reliable low-latency communications (SURLLC)_
SURLLC can be highly beneficial for vehicular communication [155-156]. SURLLC in 6G is an
advancement of mMTC and URLLC with highly stringent demands on latency (lower than 0.1 ms) and
reliability (more than 99.99%).
_8.7. Massive URLLC_
URLLC in 5G technology was introduced to meet the latency requirements of IoE applications such
as smart factories. Massive URLLC will take scalability, latency and reliability into
consideration. Hence, a proper 6G framework that enables better performance in decision making,
topology, architecture, reliability and delay is highly required.
_8.8. Three-dimensional integrated communications (3D-InteCom)_
There is a need for a radical change from the 2D to the 3D-InteCom model by including the height
of communication nodes for full-dimensional MIMO architectures [156-158]. Notable technologies
in which 3D-InteCom can be incorporated include underwater communication, unmanned aerial
vehicles (UAVs) and satellite communication. Thus, a re-adjustment of the 2D model, which stems
from graph theory and stochastic geometry, is required.
_8.9. Unconventional data communications (UCDC)_
Up to now, there has been no proper definition of UCDC [155]. However, the following facets must
be discussed: human bond, tactile and holographic communication.
_8.9.1. Holographic communications_
It is expected to add glamor to 6G technology. Holography is a 3D technology which controls a
light beam incident on an object and uses a recording device to capture the resulting pattern.
In reality, 3D images without stereo sound are insufficient for a true-presence scenario. In future 6G, stereo
-----
audio will be incorporated to obtain presence characteristics; in other words, the received
video or holographic data can be modified. Holographic data will use high bandwidth to transmit
data over a reliable network [159].
_8.9.2. Tactile communications_
Real-time conveyance or a cinematic experience is possible through the tactile internet [160].
Some expected advantages of this technology are interpersonal communication, cooperative
self-driving and teleoperation, and a haptic touch can be implemented in it. Realizing this
technology imposes stringent needs on the cross-layer architecture. It can trigger research
activities to design novel physical-layer schemes, and will also bring attention to the design
of procedures, e.g., protocols, handover, scheduling, queuing and buffering, to meet the
requirements of 6G networks.
_8.9.3. Human-centric communications_
This technology will provide human access to physical features and will invariably involve the
five human senses. A promising use case of this technology is the "communication through breath"
project, which uses exhaled breath to read a bio-profile [161].
Consequently, it will enable remote interactions with the human body, collection of biological
features, emotion detection and disease diagnosis. Designing a communication system that can
realize the five human senses thus requires interdisciplinary research efforts.
**TABLE VIII.** 6G SERVICES, PERFORMANCE INDICATORS AND APPLICATIONS
| Service | Performance Indicator | Applications |
| --- | --- | --- |
| MBRLLC | Energy efficiency | Autonomous drones; XR/AR/VR |
| eMBB-Plus | QoE | Accurate indoor positioning |
| MPS | Wireless energy transfer; accurate mapping; stable control | XR; telemedicine; CRAS |
| Big communications (BigCom) | Balanced resource utilization | High coverage to remote areas |
| HCS | QoPE | Efficient communication; haptics |
| Secure ultra-reliable low-latency communications (SURLLC) | Low latency; high reliability | Vehicular communication |
| mURLLC | Massive reliability; high connectivity | Autonomous robots; blockchain; user tracking |
| 3D-InteCom | MIMO architectures | Underwater and satellite communication |
| Unconventional data communications (UCDC) | Holographic and tactile communication | Automated driving; disease diagnosis; teleoperation |
**9. Potential Challenges and Practical Considerations**
There exist multiple challenges which can affect the performance of future 6G technology. In
this section, we explore potential unresolved challenges in hardware design, power supply,
network security, reliability, latency and user mobility, and motivate readers to address and
solve some of these challenges, as shown in figure 10.
-----
_9.1. Portable and Low-latency Algorithm and Processors_
Existing artificial intelligence technologies are designed to fulfill specific requirements;
however, they suffer from limited portability. A potential solution is to develop portable,
low-latency algorithms, and it is essential that these algorithms meet an accuracy-latency
trade-off beyond that of conventional computer vision tasks. In order to perform well in
latency-critical scenarios such as medical/healthcare and automated-vehicle applications, a
communication link must be established within a short interval of time. Achieving latencies of
a few milliseconds is quite challenging. To attain low latency and ultra-high reliability,
powerful high-end processing units with minimal power consumption must be designed.
_9.2. Hardware Co-Design_
High-density parallel computing techniques are needed in AI-assisted techniques, while certain
parameters are required in the wireless network architecture to enable AI-assisted
communication. Furthermore, computing performance degrades with advanced materials such as
high-temperature superconductors and graphene transistors. Thus, miniaturizing high-frequency
transceivers is a key issue. For example, Qualcomm and several other companies have been working
to shrink mmWave components from meter scale down to fingertip-sized antennas, and this issue
will be even more adverse in the THz band. As explained in a previous study [162],
optoelectronics is a promising solution that can exploit advanced antennas, high-speed
semiconductors and on-chip integration.
Transceiver design is a challenging issue in the THz band, as current designs cannot handle THz
frequency sources (>300 GHz) and cannot properly operate at THz-band frequencies [163]. New
signal processing techniques are needed to mitigate propagation losses in the THz spectrum.
Furthermore, noise figure, high sensitivity and high-power parameters must be controlled, and a
careful investigation of transmission distance and power is also required. Moreover, a novel
transceiver design considering modulation index, phase noise, RF filters and nonlinear
amplifiers is needed. Nanomaterials like graphene and metal-oxide semiconductor technologies can
be considered for new transceiver structures for THz devices [164]. The aforementioned
metasurfaces are envisaged to support different applications operating over frequencies ranging
from 1 to 60 GHz. Thus, developing efficient metasurface structures that can dynamically switch
their operating frequency will open a new research era toward realizing THz communication.
_9.3. Power Supply_
6G has the capability to connect autonomous mobile devices efficiently and flexibly, making
energy-efficient techniques essential in such scenarios. Current smartphones require novel
power-supply techniques to perform efficiently with 6G technology, and the limited battery
lifespan of wireless devices poses a substantial design challenge. To deal with this challenge,
different wireless charging methods, including wireless power transfer (WPT) [165] and wireless
energy harvesting, have been proposed as potential solutions offering perpetual energy
replenishment in these networks. In addition, signal-detection algorithms and low-complexity
precoding techniques can be developed for high power efficiency. On the other hand, a strategic
approach to optimizing WPT techniques for future 6G mobile devices is required to enable
-----
energy autonomy in diverse conditions. Similarly, research contributions must be dedicated to
exploring metasurfaces that can steer, collimate and absorb electromagnetic waves, in order to
exploit the main operations of metasurfaces for wireless charging of devices over considerably
long distances.
_9.4. Network Security and Privacy Issue_
A major challenge in 6G is the security and privacy problem. In 6G, integrated network security
should be considered together with physical-layer security; therefore, an intensive study is
required to find new security approaches. Moreover, 5G security techniques can be extended to
6G: for example, secure mmWave and massive MIMO techniques can be integrated into THz-band
applications. H. Yao et al. [166] demonstrated a distributed key management mechanism, which is
a key solution for STIN. A well-integrated security mechanism can be formulated to secure
privacy in 6G networks. Furthermore, exponential growth in the number of IoT devices has been
witnessed over the last few years. These devices span industrial, healthcare and personal IoT,
which can be linked to create a mesh network. 6G technology is envisaged as the key enabler for
large-scale cyber mechanisms within IoT scenarios, where distributed denial-of-service (DDoS)
attacks will be very common since IoT devices are linked to the internet. Such large-scale DDoS
attacks can cause trust, privacy and security issues in the network. In the future, it is
important to address physical-layer security (PLS) mechanisms that link users to the proper
source so as to enhance the system secrecy rate. The adaptability and flexibility of PLS
strategies, especially in resource-constrained environments, together with the services provided
by the promising 6G technology, will reveal new research directions for PLS in 6G.
_9.5. 3D Networking Reliability-Latency Fundamentals_
6G technology will support the deployment of 3D applications such as 3D base stations, so
research into propagation models for 3D structures is essential. Frequency utilization and 3D
network planning are needed due to the change in degrees of freedom and the altitude dimension
from 2D to 3D. Furthermore, 3D evaluation metrics for the rate-reliability-latency trade-off are
necessary; some recent studies [167,168] have provided brief discussions in this direction.
_9.6. Potential Healthcare Issues_
Although 6G technology can provide massive data rates in the THz spectrum, experts consider 6G
applications to be still inchoate. THz wave propagation can affect human safety, as its photon
energy level is three times higher than that of non-ionizing photons [169]. International
Commission on Non-Ionizing Radiation Protection [170] and Federal Communications Commission
(FCC) [171] regulations are followed to reduce potential hazards. Moreover, careful
consideration of the molecular and biological impacts of THz waves is required. Another
promising solution to mitigate health issues is electromotive-force transmission [172]. 6G will
be the right approach for intelligent healthcare services in the future; thus device
authentication, secure data transmission, encryption and the control of wearable devices will be
crucial security issues to be solved in the 6G era. User privacy and the ethical concerns of
electronic health data will be major issues in future healthcare systems. New AI-driven models
must be developed under strict ethical standards to preserve the privacy and integrity of
healthcare records, and these models must observe the privacy rules and regulations implemented
by the concerned authorities.
-----
_9.7. Interference Management_
To cope with the short-range limitations of wireless communication technologies, a common
approach is to deploy as many access points (APs) as possible to enhance link coverage in
small-cell scenarios. In indoor environments such as conference rooms and office cubicles,
networks then face severe interference due to the large number of access points; interference
becomes detrimental when a device is located close to the interfering APs. Thus, researchers
should focus on developing new interference management mechanisms to avoid significant
degradation in the performance of wireless communication technologies.
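The impact of a nearby interfering AP can be quantified with a simple downlink SINR calculation under a log-distance path-loss model. The path-loss exponent, reference loss, transmit power and noise floor below are illustrative assumptions:

```python
import math

def rx_power_dbm(tx_dbm, dist_m, pl_exp=3.0, pl_ref_db=40.0):
    """Log-distance path loss: PL(d) = PL(1 m) + 10 * n * log10(d)."""
    return tx_dbm - (pl_ref_db + 10 * pl_exp * math.log10(dist_m))

def sinr_db(serving_dist, interferer_dists, tx_dbm=20.0, noise_dbm=-90.0):
    """SINR = serving power / (sum of interferer powers + noise), in dB."""
    to_mw = lambda dbm: 10 ** (dbm / 10)
    signal = to_mw(rx_power_dbm(tx_dbm, serving_dist))
    interference = sum(to_mw(rx_power_dbm(tx_dbm, d)) for d in interferer_dists)
    return 10 * math.log10(signal / (interference + to_mw(noise_dbm)))

# Serving AP at 5 m. A remote interferer (50 m) leaves SINR near 30 dB,
# while a nearby interferer (8 m) crushes it to single digits.
print(round(sinr_db(5.0, [50.0]), 1), round(sinr_db(5.0, [8.0]), 1))
```

This is the quantitative core of the interference-management problem: densifying APs raises coverage but moves interferers closer, and the SINR collapse must be countered by coordination or scheduling.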
_9.8. User Mobility_
User mobility imposes a great challenge on the implementation of any wireless network, such as
mmWave networks, and it severely degrades the system's performance and capacity. It is therefore
suggested to develop adaptive, efficient and novel coding and modulation schemes to overcome
channel variations. In addition, in indoor environments containing multiple access points (APs)
serving multiple devices, user mobility incurs rapid load fluctuations. This calls for the
development of sophisticated handover mechanisms that provide improved system capacity, balanced
load and guaranteed QoS to realize efficient communications in future wireless networks.
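A hysteresis-plus-time-to-trigger rule (a simplified version of the 3GPP A3 measurement event) illustrates how such handover mechanisms trade responsiveness against ping-pong handovers. The thresholds and RSRP traces below are made-up illustration values:

```python
def handover_decisions(serving_rsrp, neighbor_rsrp, hysteresis_db=3.0, ttt_samples=3):
    """Trigger a handover only after the neighbor beats the serving cell by
    `hysteresis_db` for `ttt_samples` consecutive measurements (simplified
    A3-style rule). Returns the sample indices where a handover fires."""
    streak, events = 0, []
    for i, (s, n) in enumerate(zip(serving_rsrp, neighbor_rsrp)):
        streak = streak + 1 if n > s + hysteresis_db else 0
        if streak == ttt_samples:
            events.append(i)
            streak = 0  # assume we switched cells; restart evaluation
    return events

# A moving user: the serving cell fades while a neighbor strengthens (dBm).
serving  = [-80, -82, -85, -88, -90, -91, -92]
neighbor = [-85, -84, -83, -82, -81, -80, -79]
print(handover_decisions(serving, neighbor))  # → [5]
```

Larger hysteresis or time-to-trigger values suppress spurious handovers under fast fading but delay the switch, which is exactly the capacity/QoS balance discussed above.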
_9.9. Variable Radio Resource Allocation_
For variable quality-of-service requirements, variable radio resources must be allocated to the
user: variable power, variable bandwidth, or in some scenarios both. Another challenging factor
in 6G is that signals have high penetration loss and attenuate quickly at higher frequencies;
they are also heavily attenuated when entering workplaces, residences, offices and houses. As
radio waves suffer increasing attenuation with frequency, they face hurdles penetrating the
walls of houses and buildings, which ultimately affects the QoS requirements. It is therefore of
high significance to design precise and stable algorithms that meet 6G communication
requirements through dynamic allocation of variable resources.
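The frequency dependence of attenuation can be made concrete with the free-space path-loss formula, which grows by 20 dB per decade of frequency even before wall-penetration or absorption losses are added. The comparison below between a 5G mid-band carrier and a THz-range carrier is purely illustrative:

```python
import math

def fspl_db(freq_hz, dist_m):
    """Free-space path loss: 20*log10(d) + 20*log10(f) + 20*log10(4*pi/c)."""
    c = 3e8  # speed of light, m/s
    return (20 * math.log10(dist_m)
            + 20 * math.log10(freq_hz)
            + 20 * math.log10(4 * math.pi / c))

# Same 10 m link: moving from 3.5 GHz (5G mid-band) to 300 GHz adds
# 20*log10(300/3.5) ~= 38.7 dB of loss before any blockage or absorption.
for f in (3.5e9, 300e9):
    print(f"{f/1e9:>5.0f} GHz: {fspl_db(f, 10):.1f} dB")
```

This nearly 40 dB gap on an identical link is why high-frequency 6G bands demand dynamic power/bandwidth allocation and dense deployments to preserve QoS.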
_9.10. Blockage and Shadowing control_
Sensitivity to blockage of LOS links represents a major challenge in communication technologies.
Specifically, an abrupt obstruction of the line-of-sight path between the base station and the
user causes delay or even disconnection, notably decreasing the system's performance and
reliability. Moreover, establishing a new link between another base station and the user
increases the network overhead, affecting the overall network latency. A promising solution is
signal steering, which can mitigate human obstructions; however, it needs a large number of APs,
which increases complexity as well as interference. Therefore, it is essential to design
reliable anti-blockage mechanisms before implementing effective communication technologies such
as mmWave communication in future 6G wireless networks.
-----
**Figure 10. Problems in 6G and promising solutions**
**10. Key Projects on 6G**
_10.1. 6G Flagship (May 2018 – April 2026)_
The 6G Flagship [58] is an eight-year project funded by the Academy of Finland for a "6G-Enabled
Wireless Smart Society and Ecosystem". The aim of this project is to discover how 6G will change
our lives. The project is organized into four research domains: device and circuit technology,
wireless connectivity, distributed computing, and services and applications. New 6G standards
will be developed under this project for future digital societies. It was started in cooperation
with Aalto University, Oulu University of Applied Sciences, BusinessOulu and the VTT Technical
Research Centre of Finland. Project opportunities within the 6G Flagship program include
academic research, summits, symposiums, multi-partner projects with tailored companies, and
commercialization. The academic research under this program will address communication among
people, objects and devices, considering privacy and security challenges. On the industrial
side, its aim is to enable a highly automated and smart society, delivering unique wireless
solutions for future digital societies in tight collaboration with industrial experts from
various fields. It also focuses on the 5G Test Network (5GTN), providing unique possibilities to
test 5G technology, components and services in real time.
_10.2. South Korea MSIT 6G research program_
The government of South Korea aims to initiate a 6G pilot project in 2026, and 6G services in
South Korea will be commercially available between 2028 and 2030 [173]. The government expects
to invest $169 million between 2021 and 2026 to enable basic 6G technology. The government's
strategic plan for 6G is based on preemptive development of 6G technology, new standards,
high-value-added patents, research and development (R&D) and industrial collaborations. The
initial strategic tasks include hyper-trust, hyper-intelligence, hyper-space, hyper-precision,
hyper-bandwidth and hyper-performance. Major research areas adopted for the 6G pilot project
include smart factories, smart cities, self-driving cars and immersive digital healthcare
content. The South Korean Ministry of Science and ICT (MSIT) has also formed the "6G R&D
Strategy Committee" [173], which comprises public universities in South Korea, government
agencies and small- and large-scale device manufacturers to manage 6G-related projects. The
goals of this 6G pilot project are: 1) to use AI within the entire network; 2) to extend connectivity up to 6.2 miles
-----
from the ground; 3) to reduce latency to 0.1 ms; 4) to achieve a 1 Tbps data rate; and 5) to
enable various security features to secure the entire network.
_10.3. Japan B5G/6G Promotion Strategy_
The Japanese government will earmark $482 million (50 billion yen) to promote R&D initiatives
under its 6G promotion strategy. This fund is allocated to support a 6G test-bed facility for
institutional and industrial testing of the designed technologies. The Japanese government plans
to use 30 billion yen of this fund in the coming years to support R&D in 6G technology, and 20
billion yen to build a facility where companies and other collaboration partners can test their
developed technologies. Japan envisages designing and showcasing core technologies in 2025,
while 6G will be commercially launched around 2030 [174]. The 6G vision includes scalability,
autonomy, reliability, ultra-security and resiliency, ultra-low latency, ultra-high speed and
large capacity, ultra-numerous connectivity and ultra-low power consumption [174].
_10.4. INSPIRE-5Gplus_
INSPIRE-5Gplus, a Research and Innovation Action (RIA) project under EC H2020, is a 36-month project
started in 2019 [175]. Its project partners include Universidad de Murcia, the National Centre
for Scientific Research Demokritos and TAGS [175]. INSPIRE-5Gplus is devoted to
strengthening the security of 5G and B5G networks across different dimensions, including learning
models, use cases, architecture, novel enablers and network management. It is based on two
approaches: 1) leveraging existing assets and 2) introducing novel solutions through
blockchain, AI and ML. This project will address key security challenges for the efficient and concrete
realization of 5G. Its outcomes will serve the crucial objectives of pervasive trust
and intelligent security, and will deliver unique assets to enable trusted and intelligent
multi-tenancy, i.e. liability-aware, evidence-based and confident operation across the holistic
architecture of multi-tenant networks.
_10.5. AI@EDGE_
The key objective of the AI@EDGE project is to design a secure AI-assisted platform for edge
computing in B5G networks [176]. It will provide frameworks to create, utilize and adapt
trustworthy, reusable and secure AI/ML models. The project aims to design a
connect-compute fabric for creating and managing secure, elastic and resilient end-to-end slices
that support an extensive range of AI-enabled applications. Moreover, trusted
networking and privacy-preserving ML techniques will be adopted to ensure privacy and
framework usage without disclosing sensitive information. The project focuses on breakthroughs
such as multi-connectivity, provisioning of AI-enabled applications, privacy preservation, AI/ML for
closed-loop automation and ML for multi-stakeholder environments. The AI@EDGE platform will
be validated through four high-impact use cases: smart data and content curation for
in-flight entertainment services, edge-AI-aided monitoring through UAVs in beyond-visual-line-of-sight
(BVLOS) operation, resilient and secure orchestration of large IoT networks, and virtual validation
of cooperative vehicular networks [176].
_10.6. Hexa-X (January 2021 – June 2023)_
The Hexa-X project [177] was initiated with the vision of firmly anchoring the human and digital worlds
through a fusion of 6G key enablers. The Hexa-X vision demands an x-enabler fabric of
trustworthiness, extreme experience, global service coverage, sustainability, networks of networks,
operational resilience, integrity of secure communication and connected intelligence. This project
aims to investigate new key enablers in 6G for:
- Connected intelligence via an AI-driven air interface
- High-resolution localization and sensing
- Management of future networks
- Radio access technologies at higher frequencies
- 6G architectural elements for dynamic dependability in the network

Considering the above aspects, the Hexa-X project has been started under the 6G flagship to bring together
the main industrial stakeholders, network operators and network vendors, as well as academic
researchers from the most prestigious European research centers, to make an integrated contribution to
research and development (R&D) towards 6G.
_10.7. 5GZORRO_
5GZORRO is another EC H2020 RIA project, which aims to investigate a new set of solutions to
enable zero-touch privacy, security and trust in network and security management in distributed
multi-stakeholder environments [178]. It will enable smart contracts for dynamic spectrum
allocation and ubiquitous connectivity, and will support the required agility. It will design an
architecture for the 5G network in a trusted and secure way. The target stakeholders of 5GZORRO are
regulators, spectrum owners, virtual slice operators, telecom service providers and active/passive
facility owners.
_10.8. NEW-6G and RISE-6G_
Recently, two new European initiatives have been announced: NEW-6G and RISE-6G [179].
NEW-6G stands for Nano Electronics and Wireless for 6G, while RISE-6G has been launched under the
5G PPP with a focus on reconfigurable intelligent surfaces (RIS). Both projects will be led by the
French Alternative Energies and Atomic Energy Commission (CEA). The EU has allocated €6.49 million
for RISE-6G under the Horizon 2020 (H2020) program. According to an article published in November 2020
featuring the RISE-6G principal investigator Marco Di Renzo, the project will enable ubiquitous
wireless connectivity as well as ultra-massive, instantaneous, data-driven connected intelligence.
RISE-6G will perform preliminary tests in real-time scenarios such as a train station, and will help
investigate a broad range of subjects: deployment, infrastructure, network optimization, innovative
technologies and fundamental science. Furthermore, NEW-6G will support unprecedented
opportunities to rethink the role of nano-electronics and to promote innovative ideas, share
knowledge, encourage cooperation and establish roadmaps [179].
_10.9. ATIS’ Next G Alliance_
Several western companies, including Qualcomm, Nokia and AT&T, have initiated the Next G
Alliance through a U.S.-based standards organization, the Alliance for Telecommunications
Industry Solutions (ATIS) [179]. ATIS initiated this project to lay the foundation of 6G for a
vibrant marketplace of services and products in North America. The coalition has already announced a
team devoted to producing a 6G roadmap for the next decade to establish strong global mobile
technology leadership. The group has 43 founding members, including tech giants such as Facebook,
Apple, Google and Microsoft. Unlike other programs fostering 6G, the Next G Alliance started as a
private-sector-led initiative whose objective is to influence U.S. funding agencies, which will
incentivize the industry [180]. Besides funding and research, the Next G Alliance aims to encompass
a high-level strategic perspective on standards and developments, manufacturing standardization
and market readiness. The main idea is to bring together diverse segments of
government, research institutes and industry, with a strong emphasis on technology
commercialization and on engaging the international community in discussions about standardization.
_10.10. Other Projects_
Several projects have been launched under the Horizon 2020 (H2020) program. 6G BRAINS [181]
has been launched to apply AI-driven deep reinforcement learning (DRL) to resource allocation over
new spectrum, including optical wireless communication (OWC) and THz, to improve
latency, reliability and capacity for future industrial networks. Similarly, DEDICAT 6G [182] has
been launched with the vision of transforming B5G networks into a smart connectivity platform that
is resilient, ultra-fast and highly adaptive to support human-centric services; it will address
trust, privacy and security assurance for novel interactions between digital systems and humans.
Additionally, MARSAL [183] has been initiated with the aim of developing an entire framework for
the orchestration of network resources in B5G by using optical wireless infrastructure and radically
enhancing the flexibility of this architecture. Its objective is to enable mechanisms that
offer security and privacy to application data and workloads. Furthermore, DAEMON [184] aims to
set forth a pragmatic strategy for network intelligence design. The main objectives of DAEMON
include extremely high reliability, a reduced energy footprint of the mobile network and extremely high
performance in real-time scenarios. Simultaneously, REINDEER [185] aims to design smart
connectivity technologies with uninterrupted availability, perceived zero latency and resilient
interaction experiences. It will develop a novel wireless access infrastructure called RadioWeaves,
a massively distributed antenna array composed of a fabric of distributed radio, computing and storage
components. Its objective is to design algorithms and protocols to enable new resilient interactive
services that require real-space and real-time cooperation for future intuitive care, immersive
entertainment and robotized industrial environments.
**11. Potential Applications of 6G**
Every new wireless generation introduces some novel applications. Here, we have discussed
several potential applications for future 6G wireless networks. Table IX provides a summary of these
6G applications.
_11.1. Multi-sensory XR applications_
The advantages of 5G technology, such as high bandwidth and low latency, have extended the
VR/AR experience for network users. However, several potential issues must be
addressed in the future 6G network to enhance this experience further. Various sensing devices
can be deployed to collect sensory data, so a new feature, extended reality (XR), can be realized
from eMBB and URLLC. XR is an appealing technology encompassing
Augmented Reality (AR), Virtual Reality (VR) and Mixed Reality (MR). 6G will support the
advancement of XR in various use cases, including robot control, healthcare, video conferencing,
entertainment and virtual tourism. This requires extremely low latency, high resolution, extreme data
rates and strong connectivity, which 6G is envisioned to support. Additionally, several
aspects, including device diversity, low overhead and high scalability, should be taken into account
while designing the security mechanism of XR. The major security concerns are malicious threats,
access control, encryption, authentication and internal communication. R. Chen et al. [186]
briefly discussed security challenges in URLLC services, while J. M. Hamamreh et al. [187]
proposed an approach to enhance security against malicious attacks in URLLC. Furthermore, the
authors in [188] have suggested a 3D model which addresses secrecy threats in XR applications.
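To give a sense of why XR pushes toward such extreme data rates, the following back-of-envelope sketch (Python; the stereoscopic 8K at 90 fps parameters are illustrative assumptions, not figures from this survey) estimates the uncompressed throughput of a high-end VR stream:

```python
# Illustrative only: raw (uncompressed) throughput of a stereoscopic video
# stream. All parameter values below are assumptions for illustration.
def raw_video_bps(width: int, height: int, eyes: int,
                  bits_per_pixel: int, fps: int) -> int:
    """Uncompressed bit rate: pixels per frame x views x depth x frame rate."""
    return width * height * eyes * bits_per_pixel * fps

# Assumed: 8K per eye, two eyes, 24-bit color, 90 frames per second.
rate = raw_video_bps(7680, 4320, eyes=2, bits_per_pixel=24, fps=90)
print(f"~{rate / 1e9:.0f} Gbps uncompressed")
```

Even before adding haptic or positional data, the raw stream lands in the hundreds of Gbps; compression reduces this substantially, but the headroom of multi-hundred-Gbps to Tbps links is what makes untethered multi-sensory XR plausible.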
_11.2. Connected robotics and autonomous systems_
Academic researchers and industrial experts have shown considerable interest in future
transport systems such as the Internet of Vehicles, cooperative vehicular networks, intelligent robotics
and self-driving. Almost 50 leading technology and automotive companies have shown interest
in investing in autonomous vehicle technology. In the future, connected autonomous vehicle (CAV)
technologies will introduce a new service ecosystem, such as self-driving public transport.
Specifically, AI-enabled future vehicular networks will pave the way towards intelligent transport
systems (ITS). Strinati et al. [189] discussed automatic handling, caching and resource control in the
network, and designed a fully automated factory based on UAVs, databases and cloud services.
Similarly, UAV networks, new algorithms and advanced hardware can be deployed in different
operations, including agriculture, emergency response, construction and fire control. In the future, fully
automated vehicles and robots will participate in maintenance, monitoring, operation and
real-time diagnostics. Intelligent robots will be deployed in harsh environments for communication
and research tasks. Highly reliable and self-organized automation will bring a
revolution in several aspects of daily life. Such innovations will pave the way to develop new cities
that are smart, greener, sustainable and productive.
_11.3. Wireless brain-computer interactions_
The key idea behind wireless BCI is to connect the human brain with a device, which can be
located inside or outside the human body. One potential feature of wireless BCI is to support
disabled people by controlling auxiliary equipment. It is envisaged that wireless BCI will become an
integral part of future 6G technology. Chen et al. [190] proposed a BCI mechanism to accelerate
spelling. Despite its advantages, a BCI system faces several security threats, such as malicious
attacks and weak encryption. To tackle these challenges, the authors in [191-192] have highlighted
security issues, hacking applications and prevention methods.
_11.4. Accurate indoor positioning_
The global positioning system (GPS) has played a significant role in outdoor environments. However,
indoor positioning systems still require research focus to overcome complicated indoor EM
propagation. Several studies on indoor positioning are presented in [193-196]. New
full-fledged services are envisaged with accurate and reliable indoor positioning
systems, and it should be possible to realize these services in future 6G technology.
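A minimal sketch of the range-based positioning idea behind such systems (plain Python; the anchor coordinates and target point are hypothetical) is linearized trilateration solved by least squares:

```python
import math

def trilaterate(anchors, dists):
    """Least-squares 2D position from >= 3 anchor (x, y) points and ranges.

    Linearizes the range equations against the first anchor and solves the
    2x2 normal equations directly (no external dependencies)."""
    (x0, y0), d0 = anchors[0], dists[0]
    A, b = [], []
    for (xi, yi), di in zip(anchors[1:], dists[1:]):
        A.append((2 * (xi - x0), 2 * (yi - y0)))
        b.append(xi**2 - x0**2 + yi**2 - y0**2 + d0**2 - di**2)
    # Normal equations: (A^T A) p = A^T b
    s11 = sum(a[0] * a[0] for a in A); s12 = sum(a[0] * a[1] for a in A)
    s22 = sum(a[1] * a[1] for a in A)
    t1 = sum(a[0] * bi for a, bi in zip(A, b))
    t2 = sum(a[1] * bi for a, bi in zip(A, b))
    det = s11 * s22 - s12 * s12
    return ((s22 * t1 - s12 * t2) / det, (s11 * t2 - s12 * t1) / det)

# Hypothetical anchors and target; ranges computed from true geometry.
anchors = [(0, 0), (10, 0), (0, 10)]
target = (3, 4)
dists = [math.dist(a, target) for a in anchors]
print(trilaterate(anchors, dists))  # ~ (3.0, 4.0)
```

Real indoor systems must additionally cope with multipath-corrupted range estimates, which is exactly the "complicated indoor EM propagation" problem noted above.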
_11.5. Intelligent Internet of medical Things (IIoMT)_
It is envisaged that 6G will bring a revolution in the healthcare sector. In the future, 6G will
overcome space and time barriers to perform medical tasks beyond boundaries. Intelligent vehicles
will enable Hospital-to-Home (H2H) services. Diverse intelligent sensors and wearable devices will
assist in real-time accident detection and automated surgery. High-speed-communication-based
telesurgery will allow remote doctors to perform surgery through tele-assist, verbal or telestration
modes [197]. In the verbal mode, doctors will use holographic communication to obtain a better view
of the surgery; they can tele-assist surgical operations through haptic or tactile communication; and
for telestration, they will use VR and AR. An overview of telesurgery is presented in figure 11. In
2019, China achieved a remarkable feat by performing 5G remote brain surgery: with help from the
Chinese technology giant Huawei and China Mobile, China's PLA General Hospital (PLAGH)
successfully performed the operation over 5G with the doctor 3,000 km away from the
patient [198].
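The 3,000 km case above also illustrates a hard physical floor on telesurgery latency: one-way propagation delay alone, ignoring all processing and queuing, can be estimated as follows (illustrative Python; the fiber velocity of roughly 2/3 c is a typical assumption):

```python
# Illustrative only: one-way propagation delay over the ~3,000 km
# surgeon-patient distance cited above, ignoring processing and queuing.
C_VACUUM = 299_792_458            # m/s, speed of light in vacuum
C_FIBER = C_VACUUM * 2 / 3        # assumed typical group velocity in fiber

def one_way_delay_ms(distance_m: float, velocity: float) -> float:
    """Pure propagation delay in milliseconds."""
    return distance_m / velocity * 1e3

d = 3_000_000  # 3,000 km in metres
print(f"free space: {one_way_delay_ms(d, C_VACUUM):.1f} ms, "
      f"fiber: {one_way_delay_ms(d, C_FIBER):.1f} ms")
```

At this distance the propagation delay alone exceeds the 0.1 ms latency targets quoted for 6G, so ultra-long-range telesurgery depends on edge placement, prediction and haptic-loop design rather than raw link latency.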
**Figure 11. An overview of telesurgery**
_11.6. Internet of Nano Things (IoNT)_
Nanotechnology has provided excellent opportunities to design advanced-material-based nanodevices
for medical and industrial use [197]. Nano-things can perform basic sensing and actuation functions
at high speed while having low data storage capacity. Generally, the idea of IoNT is derived by
merging nanotechnology with IoT: nanosensors or nano-things are connected through a nanoscale
network and exchange data over short distances [199]. The typical architecture of IoNT is
presented in figure 12.
**Figure 12. Typical architecture of IoNT**
IoNT-based communication can be implemented via THz or molecular communication. THz
communication is faster, more reliable and more secure than molecular communication [200].
Future 6G technology with >1 Tbps speeds will enable smooth data transmission in the IoNT.
Furthermore, the high density supported by 6G will make it easier to control an IoNT comprising
massive numbers of nano-things. IoNT is expected to bring a remarkable revolution in modern
healthcare [201]. IoNT deployment is also complemented by other associated technologies, as shown
in figure 13.
**Figure 13. IoNT and allied technologies**
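One reason THz/IoNT links remain short-range, as noted above, is free-space path loss, which can be sketched with the standard Friis formula (illustrative Python; the 300 GHz carrier and distances are example values):

```python
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss (Friis formula), in dB."""
    c = 299_792_458  # m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

# Loss grows by 20 dB per decade of distance, so at 300 GHz even a
# metre-scale link already suffers ~80 dB of free-space loss (before
# molecular absorption, which worsens matters at THz frequencies).
for d in (0.01, 0.1, 1.0):
    print(f"{d:5.2f} m @ 300 GHz: {fspl_db(d, 300e9):.1f} dB")
```

This is a lower bound on loss: at THz frequencies molecular absorption adds further attenuation, reinforcing why IoNT communication is confined to very short ranges.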
**TABLE IX.** SECURITY, PRIVACY AND CHALLENGING ISSUES IN 6G APPLICATIONS

| Reference | Application | Security, Privacy and Challenging Issue |
|---|---|---|
| [186] | Multi-sensory XR applications | Communication |
| [191] | Wireless brain-computer interactions | Malicious attack |
| [193] | Accurate indoor positioning | Multi-access |
| [195] | Accurate indoor positioning | Positioning |
| [201] | IoNT | Limited memory space and computational capability |
| [202] | IIoMT | QoL |
| [203] | Multi-sensory XR applications | Access control |
| [204] | Wireless brain-computer interactions | Encryption |
| [205] | Connected robotics and autonomous systems | Authentication |
| [206] | Connected robotics and autonomous systems | Communication |
_11.7. Edge Computing for Consumer Electronics (ECCE)_
The edge computing characteristic of 5G enables the research fraternity and industrial experts to
reconsider innovative use cases across an extensive range of applications. Future wireless
technologies such as B5G and 6G are envisaged to efficiently support low-latency, high-capacity
short-range applications. In this regard, future consumer electronics (CE) are expected to
effectively support the wireless capabilities of B5G/6G. Nawaz et al. [207] proposed the concept of
ECCE to provide the required computing services to consumer electronics by exploiting
e-URLLC wireless connectivity, as shown in figure 14. Several CE devices appear in the proposed
ECCE framework to support eHealth, surveillance, virtual reality and entertainment. The
proposed concept is expected to bring improvements in latency, reliability and link speed for
performing tasks locally at the devices in the B5G/6G era. The anticipated innovations include:
1) processor-less devices, 2) inter-chip communication through THz links and 3) removing cabling
requirements between the processor and the associated user interface.
**Figure 14. Edge computing for consumer electronics (ECCE) [207]**
**12. Conclusion**
During the global deployment of 5G, both academics and industrial experts have started
realizing 6G with the aim of strengthening the competitive advantages of future wireless technologies.
To support this vision, we have highlighted the most promising research lines from the recent
literature. Future 6G technology will focus on establishing communication links among objects,
devices, users and industries. Performance analysis of network transmission is no longer the only
paramount parameter; AI, IoT and blockchain have become essential candidates. It is expected that
6G technology will keep penetrating ubiquitous spaces, human-perceived actions and virtual
societies. It will offer an intelligent, deep, reliable, secure, seamless and holographic network
architecture. The main contributions involve several industrial projects and research activities
around the globe to support the vision of 6G. Furthermore, 6G will support several promising
technologies, including holographic communication, tactile communication and visible light
communication. In the future, B5G/6G technologies will enable smart services and faster technologies
than the existing ones. Consequently, the existing security approaches for 4G/5G will not be
sufficient to protect the future 6G network, and the basic parameters of authenticity, availability,
integrity and confidentiality must be addressed. Similarly, privacy-by-design must be incorporated
to meet the demands of user, identity, location and data privacy. In summary, the research fraternity
must develop innovative privacy and security solutions with low cost, easy integration and
high security. This review article began by providing a historical overview of wireless generations
and the associated pivotal elements that foster the future 6G network. We then examined
ongoing research progress, technological breakthroughs and potential issues associated with future
6G technology. This paper also outlined the key technologies, use cases and key enablers of 6G
networks, along with a perspective on future aspects. Finally, we concluded this article by shedding
some light on key projects and potential applications of the future 6G wireless network. We believe
this review will open new horizons for future research directions by accelerating the interest of the
research community in future wireless network innovations.
**Conflicts of Interest: The authors declare no conflict of interest.**
**References**
1. D. Soldani and A. Manzalini, "Horizon 2020 and beyond: On the 5G operating system for a true digital
society," IEEE Veh. Technol. Mag., vol. 10, no. 1, pp. 32–42, Mar. 2015.
2. Mohsan, S. A. H., Mazinani, A., Malik, W., Younas, I., Othman, N. Q. H., Amjad, H., & Mahmood, A.
(2020). 6G: Envisioning the key technologies, applications and challenges. International Journal of
Advanced Computer Science and Applications, 11(9).
3. M. Katz, M. Matinmikko-Blue, and M. Latva-Aho, "6Genesis flagship program: Building the bridges
towards 6G-enabled wireless smart society and ecosystem," in Proc. IEEE 10th Latin-Amer. Conf.
Commun. (LATINCOM), Nov. 2018, pp. 1–9.
4. S. Dang, O. Amin, B. Shihada, and M.-S. Alouini, “What should 6g be?” Nature Electronics, vol. 3, no. 1,
pp. 2520–1131, 2020.
5. N. DOCOMO, “White paper 5g evolution and 6g,” Accessed on 1 March 2020 from
https://www.nttdocomo.co.jp/english/binary/pdf/corporate/technology/whitepaper_6g/DOCOMO_6G_W
hite_PaperEN_20200124.pdf, 2020.
6. IMT Traffic Estimates for the Years 2020 to 2030, document ITU-R SG05, Jul. 2015.
7. ETSI, "5th Generation (5G)," 2018, retrieved Jan. 2019. [Online]. Available:
https://www.etsi.org/technologies-clusters/technologies/5g
8. S. Nayak and R. Patgiri, "6G: Envisioning the Key Issues and Challenges," CoRR, vol. abs/2004.04024,
2020. [Online]. Available: https://arxiv.org/abs/2004.04024
9. S. Chen, Y. Liang, S. Sun, S. Kang, W. Cheng, and M. Peng, “Vision, requirements, and technology trend
of 6g: How to tackle the challenges of system coverage, capacity, user data-rate and movement speed,”
IEEE Wireless Communications, pp. 1–11, 2020.
10. M. Giordani, M. Polese, M. Mezzavilla, S. Rangan, and M. Zorzi, “Toward 6g networks: Use cases and
technologies,” IEEE Communications Magazine, vol. 58, no. 3, pp. 55–61, March 2020.
11. G. Gui, M. Liu, F. Tang, N. Kato, and F. Adachi, “6g: Opening new horizons for integration of comfort,
security and intelligence,” IEEE Wireless Communications, pp. 1–7, 2020.
12. W. Dong, Z. Xu, X. Li, and S. Xiao, “Low cost subarrayed sensor array design strategy for iot and future
6g applications,” IEEE Internet of Things Journal, pp. 1–1, 2020.
13. B. Mao, Y. Kawamoto, and N. Kato, “AI-based joint optimization of qos and security for 6g energy
harvesting internet of things,” IEEE Internet of Things Journal, pp. 1–1, 2020.
14. Viswanathan, Harish, and Preben E. Mogensen. "Communications in the 6G era." IEEE Access 8 (2020):
57063-57074.
15. W. Saad, M. Bennis, and M. Chen, “ A Vision of 6G Wireless Systems: Applications, Trends,
Technologies, and Open Research Problems,” IEEE Network (Early Access), 2019.
16. E. C. Strinati, S. Barbarossa, J. L. Gonzalez-Jimenez, D. Ktenas, N. Cassiau, L. Maret, and C. Dehos, “6G:
The next frontier: From holographic messaging to artificial intelligence using subterahertz and visible
light communication,” IEEE Vehicular Technology Magazine, vol. 14, no. 3, pp. 42–50, August 2019.
17. M. H. Alsharif, A. H. Kelechi, M. A. Albreem, S. A. Chaudhry, M. S. Zia, and S. Kim, “Sixth generation
(6G) wireless networks: Vision, research activities, challenges and potential solutions,” Symmetry, vol. 12,
no. 4, p. 676, 2020.
18. S. Chen, Y.-C. Liang, S. Sun, S. Kang, W. Cheng, and M. Peng, “Vision, requirements, and technology
trend of 6g: how to tackle the challenges of system coverage, capacity, user data-rate and movement
speed,” IEEE Wireless Communications, vol. 27, no. 2, pp. 218–228, 2020.
19. K. B. Letaief, W. Chen, Y. Shi, J. Zhang, and Y.-J. A. Zhang, “The roadmap to 6G: Ai empowered wireless
networks,” IEEE Communications Magazine, vol. 57, no. 8, pp. 84–90, August 2019.
20. I. Akyildiz, A. Kak, and S. Nie, “6G and beyond: The future of wireless communications systems,” To
appear, IEEE Access, 2020.
21. N. Kato, B. Mao, F. Tang, Y. Kawamoto, and J. Liu, “Ten challenges in advancing machine learning
technologies toward 6G,” To appear, IEEE Wireless Communications, pp. 1–8, 2020.
22. P. Yang, Y. Xiao, M. Xiao, and S. Li, “6G wireless communications: Vision and potential techniques,”
IEEE Network, vol. 33, no. 4, pp. 70–75, August 2019.
23. Z. Zhang, Y. Xiao, Z. Ma, M. Xiao, Z. Ding, X. Lei, G. K. Karagiannidis, and P. Fan, “6G wireless
networks: Vision, requirements, architecture, and key technologies,” IEEE Vehicular Technology
Magazine, vol. 14, no. 3, pp. 28–41, September 2019.
24. Khan, Latif U., et al. "6G Wireless Systems: A Vision, Architectural Elements, and Future
Directions." IEEE Access 8 (2020): 147029-147044.
25. F. Tariq, M. Khandaker, K.-K. Wong, M. Imran, M. Bennis, and M. Debbah, “A speculative study on 6G,”
arXiv preprint arXiv:1902.06700, 2019.
26. B. Zong, C. Fan, X. Wang, X. Duan, B. Wang, and J. Wang, “6G technologies: Key drivers, core
requirements, system architectures, and enabling technologies,” IEEE Vehicular Technology Magazine,
vol. 14, no. 3, pp. 18–27, July 2019.
27. K. David and H. Berndt, "6G vision and requirements: Is there any need for beyond 5G?" IEEE Veh.
Technol. Mag., vol. 13, no. 3, pp. 72–80, Sep. 2018.
28. A. Gupta and R. K. Jha, "A survey of 5G network: Architecture and emerging technologies," IEEE
Access, vol. 3, pp. 1206–1232, Jul. 2015.
29. Dunnewijk, Theo, and Staffan Hultén. "A brief history of mobile communication in Europe." Telematics
and Informatics 24.3 (2007): 164-179.
30. Mshvidobadze, Tinatin. "Evolution mobile wireless communication and LTE networks." 2012 6th
international conference on Application of information and communication technologies (AICT). IEEE,
2012.
31. Yan, Hui. "The 3G Standard Setting Strategy and Indigenous Innovation Policy in China: Is TD-SCDMA a
Flagship?." DRUID Summer Conference, Copenhagen, Denmark. 2007.
32. Nikkei Asian Review [Online]. Available:
https://asia.nikkei.com/Spotlight/5G-networks/China-closes-in-on-70-of-world-s-5G-subscribers
33. F. Tariq, M. Khandaker, K.-K. Wong, M. Imran, M. Bennis, and M. Debbah, “A speculative study on 6G,”
arXiv preprint arXiv:1902.06700, 2019.
34. J. Wu, M. Dong, K. Ota, J. Li, W. Yang, and M. Wang, "Fog-computing enabled cognitive network
function virtualization for an information centric future Internet," IEEE Commun. Mag., vol. 57, no. 7, pp.
48–54, Jul. 2019.
35. J. Wu, M. Dong, K. Ota, J. Li, and Z. Guan, "Big data analysis-based secure cluster management for
optimized control plane in software-defined networks," IEEE Trans. Netw. Service Manag., vol. 15, no. 1,
pp. 27–38, Mar. 2018.
36. Z. Zhou, M. Dong, K. Ota, G. Wang, and L. T. Yang, "Energy efficient resource allocation for D2D
communications underlaying cloud-RAN-based LTE-A networks," IEEE Internet Things J., vol. 3, no. 3,
pp. 428–438, Jun. 2016.
37. A. Yastrebova, R. Kirichek, Y. Koucheryavy, A. Borodin, and A. Koucheryavy, "Future networks 2030:
Architecture & requirements," in Proc. 10th Int. Congr. Ultra Mod. Telecommun. Control Syst.
Workshops (ICUMT), Nov. 2018, pp. 1–8.
38. Saad, W., Bennis, M. & Chen, M. A vision of 6G wireless systems: applications, trends, technologies, and
open research problems. IEEE Netw. https://doi.org/10.1109/MNET.001.1900287 (2019).
39. Calvanese Strinati, E. et al. 6G: the next frontier: from holographic messaging to artificial intelligence
using subterahertz and visible light communication. IEEE Veh. Technol. Mag. 14, 42–50 (2019).
40. Tariq, F. et al. A speculative study on 6G. Preprint at https://arxiv.org/abs/1902.06700 (2019).
41. David, K. & Berndt, H. 6G vision and requirements: is there any need for beyond 5G? IEEE Veh. Technol.
Mag. 13, 72–80 (2018).
42. Raghavan, V. & Li, J. Evolution of physical-layer communications research in the post-5G era. IEEE
Access 7, 10392–10401 (2019).
43. Yastrebova, A., Kirichek, R., Koucheryavy, Y., Borodin, A. & Koucheryavy, A. Future networks 2030:
architecture & requirements. In Proc. IEEE ICUMT 1–8 (2018).
44. Rommel, S., Raddo, T. R. & Monroy, I. T. Data center connectivity by 6G wireless systems. In Proc. IEEE
PSC, https://doi.org/10.1109/PS.2018.8751363 (IEEE, 2018).
45. J. Zhao, “A survey of intelligent reflecting surfaces (IRSs): Towards 6G wireless communication
networks,” Jul. 2019.
46. Clazzer, F. et al. From 5G to 6G: has the time for modern random access come? Preprint at
https://arxiv.org/abs/1903.03063 (2019).
47. Yaacoub, E. & Alouini, M.-S. A key 6G challenge and opportunity— connecting the remaining 4 billions:
a survey on rural connectivity. Preprint at https://arxiv.org/abs/1906.11541 (2019).
48. Giordani, M., Polese, M., Mezzavilla, M., Rangan, S. & Zorzi, M. Towards 6G networks: use cases and
technologies. Preprint at https://arxiv.org/abs/1903.12216 (2019).
49. Mahmood, N. H. et al. Six key enablers for machine type communication in 6G. Preprint at
https://arxiv.org/abs/1903.05406 (2019).
50. Rappaport, T. S. et al. Wireless communications and applications above 100 GHz: opportunities and
challenges for 6G and beyond. IEEE Access 7, 78729–78757 (2019).
51. Stoica, R.-A. & de Abreu, G. T. F. 6G: the wireless communications network for collaborative and AI
applications. Preprint at https://arxiv.org/abs/1904.03413 (2019).
52. Stoica, Razvan-Andrei, and Giuseppe Thadeu Freitas de Abreu. "6G: the wireless communications
network for collaborative and AI applications." arXiv preprint arXiv:1904.03413 (2019).
53. Nawaz, S. J., Sharma, S. K., Wyne, S., Patwary, M. N. & Asaduzzaman, M. Quantum machine learning for
6G communication networks: state-of-the-art and vision for the future. IEEE Access 7, 46317–46350
(2019).
54. Renzo, D. et al. Smart radio environments empowered by reconfigurable AI meta-surfaces: an idea whose
time has come. EURASIP J. Wireless Commun. Netw. 2019, 129 (2019).
55. Nadeem, Q.-U.-A., Kammoun, A., Chaaban, A., Debbah, M. & Alouini, M.-S. Intelligent reflecting surface
assisted wireless communication: modeling and channel estimation. Preprint at
https://arxiv.org/abs/1906.02360v2 (2019).
56. Mohsan, S. A. H., Khan, M. A., Alsharif, M. H., Uthansakul, P., & Solyman, A. A. (2022). Intelligent
reflecting surfaces assisted UAV communications for massive networks: current trends, challenges, and
research directions. Sensors, 22(14), 5278.
57. Zhao, J. A survey of intelligent reflecting surfaces (IRSs): towards 6G wireless communication networks.
Preprint at https://arxiv.org/abs/1907.04789v3 (2019).
58. 6G Flagship [Online]. Available: https://www.oulu.fi/6gflagship
59. 6G Wireless Summit [Online]. Available: http://www.6gsummit.com/2019/
60. Loon [Online]. Available: https://loon.com/
61. L. U. Khan, N. H. Tran, S. R. Pandey,W. Saad, Z. Han, M. N. Nguyen, and C. S. Hong, “Federated
learning for edge networks: Resource optimization and incentive mechanism,” arXiv preprint
arXiv:1911.05642, 2019.
62. M. Mozaffari, A. T. Z. Kasgari, W. Saad, M. Bennis, and M. Debbah, “Beyond 5G with uavs: Foundations
of a 3d wireless cellular network,” IEEE Transactions
63. S. Mumtaz, J. M. Jornet, J. Aulin, W. H. Gerstacker, X. Dong, and B. Ai, "Terahertz communication for
vehicular networks," IEEE Transactions on Vehicular Technology, vol. 66, no. 7, pp. 5617–5625, July 2017.
64. S. J. Nawaz, S. K. Sharma, S. Wyne, M. N. Patwary, and M. Asaduzzaman, "Quantum machine learning
for 6G communication networks: State-of-the-art and vision for the future," IEEE Access, vol. 7, pp.
46317–46350, April 2019.
65. A. Salem and M. M. A. Azim, “The effect of rbcs concentration in blood on the wireless communication in
nano-networks in the thz band,” Nano communication networks, vol. 18, pp. 34–43, December 2018.
66. S. Canovas-Carrasco, A.-J. Garcia-Sanchez, and J. Garcia-Haro, “A nanoscale communication network
scheme and energy model for a human hand scenario,” Nano communication networks, vol. 15, pp. 17–
27, March 2018.
67. X. Wang, Y. Han, C. Wang, Q. Zhao, X. Chen, and M. Chen, “In-edge ai: Intelligentizing mobile edge
computing, caching and communication by federated learning,” IEEE Network, vol. 33, no. 5, pp. 156–
165, July 2019.
68. Basar, E. Reconfigurable intelligent surface-based index modulation: a new beyond MIMO paradigm for
6G. Preprint at https://arxiv.org/abs/1904.06704v2 (2019).
69. Yanikomeroglu, H. Integrated terrestrial/non-terrestrial 6G networks for ubiquitous 3D
super-connectivity. In Proc. 21st ACM Int. Conf. Modeling, Analysis and Simulation of Wireless and
Mobile Systems 3–4 (ACM, 2018).
70. C. Han, Y. Wu, Z. Chen, and X. Wang, “Terahertz communications (teracom): Challenges and impact on
6G wireless systems,” arXiv preprint arXiv:1912.06040, 2019.
71. J. F. O’Hara, S. Ekin, W. Choi, and I. Song, “A perspective on terahertz next-generation wireless
communications,” Technologies, vol. 7, no. 2, p. 43, June 2019.
72. I. Akyildiz, J. Jornet, and C. Han, “TeraNets: Ultra-broadband communication networks in the terahertz
band,” IEEE Wireless Commun., vol. 21, no. 4, pp. 130–135, Aug. 2014.
73. FCC. (Mar. 2019). FCC Takes Steps to Open Spectrum Horizons for New Services and Technologies.
[Online]. Available: https://docs.fcc.gov/public/attachments/DOC-356588A1.pdf
74. S. Abadal, C. Han, and J. M. Jornet, “Wave propagation and channel modeling in chip-scale wireless
communications: A survey from millimeter-wave to terahertz and optics,” IEEE Access, vol. 8, pp.
278–293, 2020.
75. Z. Chen, X. Ma, B. Zhang, Y. Zhang, Z. Niu, N. Kuang, W. Chen, L. Li, and S. Li, “A survey on terahertz
communications,” China Communications, vol. 16, no. 2, pp. 1–35, March 2019.
76. S. Vappangi and V. Mani, “Concurrent illumination and communication: A survey on visible light
communication,” Physical Communication, vol. 33, pp. 90–114, April 2019.
77. Mohsan, S. A. H., Mazinani, A., Sadiq, H. B., & Amjad, H. (2022). A survey of optical wireless
technologies: Practical considerations, impairments, security issues and future research
directions. Optical and Quantum Electronics, 54(3), 187.
78. Mohsan, S. A. H., Khan, M. A., Noor, F., Ullah, I., & Alsharif, M. H. (2022). Towards the unmanned aerial
vehicles (UAVs): A comprehensive review. Drones, 6(6), 147.
79. O. B. Akan, H. Ramezani, T. Khan, N. A. Abbasi, and M. Kuscu, “Fundamentals of molecular information
and communication science,” Proc. IEEE, vol. 105, no. 2, pp. 306–318, Feb. 2017. doi: 10.1109/
JPROC.2016.2537306.
80. F. Cavaliere, E. Prati, L. Poti, I. Muhammad, and T. Catuogno, “Secure quantum communication
technologies and systems: from labs to markets,” Quantum Reports, vol. 2, no. 1, pp. 80–106, January
2020.
81. T. A. Elsayed, “Deterministic secure quantum communication with and without entanglement,” arXiv
preprint arXiv:1904.05881, 2019.
82. S. Pirandola, “End-to-end capacities of a quantum communication network,” Communications Physics,
vol. 2, no. 1, pp. 1–10, March 2019.
83. S. Dang, O. Amin, B. Shihada, M.-S. Alouini, What should 6G be? Nat. Electron. 3 (1) (2020) 20–29.
84. T. Hong, C. Liu, M. Kadoch, Machine learning based antenna design for physical layer security in
ambient backscatter communications, Wireless Commun. Mobile Comput. (2019).
85. S.J. Nawaz, S.K. Sharma, S. Wyne, M.N. Patwary, M. Asaduzzaman, Quantum machine learning for 6G
communication networks: state-of-the-art and vision for the future, IEEE Access 7 (2019) 46317–46350.
86. L. Lovén, T. Leppänen, E. Peltonen, J. Partala, E. Harjula, P. Porambage, M. Ylianttila, J. Riekki, Edge AI:
A Vision for Distributed, Edge-Native Artificial Intelligence in Future 6G Networks, The 1st 6G Wireless
Summit, 2019, pp. 1–2.
87. R. Sattiraju, A. Weinand, H. D. Schotten, AI-assisted PHY technologies for 6G and beyond wireless
networks, arXiv preprint arXiv:1908.09523.
88. P. Ferraro, C. King, R. Shorten, Distributed ledger technology for smart cities, the sharing economy, and
social compliance, IEEE Access 6 (2018) 62728–62746.
89. K. Kotobi, S.G. Bilen, Secure blockchains for dynamic spectrum access: a decentralized database in
moving cognitive radio networks enhances security and user access, IEEE Veh. Technol. Mag. 13 (1)
(2018) 32–39.
90. S. Kiyomoto, A. Basu, M.S. Rahman, S. Ruj, On blockchain-based authorization architecture for
beyond-5G mobile services, in: 2017 12th International Conference for Internet Technology and Secured
Transactions (ICITST), IEEE, 2017, pp. 136–141.
91. S. Cho, G. Chen, J.P. Coon, Enhancement of physical layer security with simultaneous beamforming and
jamming for visible light communication systems, IEEE Trans. Inf. Forensics Secur. 14 (10) (2019) 2633–
2648.
92. S. Ucar, S. Coleri Ergen, O. Ozkasap, D. Tsonev, H. Burchardt, SecVLC: secure visible light communication
for military vehicular networks, in: Proceedings of the 14th ACM International Symposium on Mobility
Management and Wireless Access, 2016, pp. 123–129.
93. J. Ma, R. Shrestha, J. Adelberg, C.-Y. Yeh, Z. Hossain, E. Knightly, J.M. Jornet, D.M. Mittleman, Security
and eavesdropping in terahertz wireless links, Nature 563 (7729) (2018) 89–93.
94. I.F. Akyildiz, J.M. Jornet, C. Han, Terahertz band: next frontier for wireless communications, Phys.
Commun. 12 (2014) 16–32.
95. J.-Y. Hu, B. Yu, M.-Y. Jing, L.-T. Xiao, S.-T. Jia, G.-Q. Qin, G.-L. Long, Experimental quantum secure direct
communication with single photons, Light Sci. Appl. 5 (9) (2016), e16144.
96. V. Loscri, C. Marchal, N. Mitton, G. Fortino, A.V. Vasilakos, Security and privacy in molecular
communication and networking: opportunities and challenges, IEEE Trans. NanoBioscience 13 (3) (2014)
198–207.
97. Y. Lu, M.D. Higgins, M.S. Leeson, Comparison of channel coding schemes for molecular communications
systems, IEEE Trans. Commun. 63 (11) (2015) 3991–4001.
98. N. Farsad, H.B. Yilmaz, A. Eckford, C.-B. Chae, W. Guo, A comprehensive survey of recent advancements
in molecular communication, IEEE Commun. Surv. Tutorials 18 (3) (2016) 1887–1919.
99. A. Galal and X. Hesselbach, “Nano-networks communication architecture: Modeling and functions,”
Nano Communication Networks, vol. 17, pp. 45–62, September 2018.
100. I. F. Akyildiz, M. Pierobon, S. Balasubramaniam, and Y. Koucheryavy, “The internet of bio-nano things,”
IEEE Communications Magazine, vol. 53, no. 3, pp. 32–40, March 2015.
101. T. M. Fernández-Caramès and P. Fraga-Lamas, “Towards post-quantum blockchain: A review on
blockchain cryptography resistant to quantum computing attacks,” IEEE Access, vol. 8, pp. 21 091–21 116,
January 2020.
102. H. Thapliyal and E. Muñoz-Coreas, “Design of quantum computing circuits,” IT Professional, vol. 21, no.
6, pp. 22–26, November 2019.
103. Z. Zhou, J. Feng, Z. Chang, and X. Shen, “Energy-efficient edge computing service provisioning for
vehicular networks: A consensus admm approach,” IEEE Transactions on Vehicular Technology, vol. 68,
no. 5, pp. 5087–5099, May 2019.
104. Z. Zhou, J. Feng, Z. Chang, and X. Shen, “Energy-efficient edge computing service provisioning for
vehicular networks: A consensus admm approach,” IEEE Transactions on Vehicular Technology, vol. 68,
no. 5, pp. 5087–5099, May 2019.
105. Lin, Xi, et al. "Making knowledge tradable in edge-AI enabled IoT: A consortium blockchain-based
efficient and incentive approach." IEEE Transactions on Industrial Informatics 15.12 (2019): 6367-6378.
106. H. Liang, J. Wu, S. Mumtaz, J. Li, X. Lin, and M. Wen, “MBID: Micro-blockchain-based geographical
dynamic intrusion detection for V2X,” IEEE Commun. Mag., vol. 57, no. 10, pp. 77–83, Oct. 2019.
107. Y. Dai, D. Xu, S. Maharjan, Z. Chen, Q. He, and Y. Zhang, “Blockchain and deep reinforcement learning
empowered intelligent 5G beyond,” IEEE Netw., vol. 33, no. 3, pp. 10–17, May/Jun. 2019.
108. I. Ahmad, S. Shahabuddin, T. Kumar, J. Okwuibe, A. Gurtov, and M. Ylianttila, “Security for 5G and
beyond,” IEEE Commun. Surveys Tuts., vol. 21, no. 4, pp. 3682–3722, 4th Quart., 2019.
109. X. Ling, J. Wang, T. Bouchoucha, B. C. Levy, and Z. Ding, “Blockchain radio access network (B-RAN):
Towards decentralized secure radio access paradigm,” IEEE Access, vol. 7, pp. 9714–9723, 2019.
110. Jameel, Furqan, et al. "Reinforcement Learning in Blockchain-Enabled IIoT Networks: A Survey of Recent
Advances and Open Challenges." Sustainability 12.12 (2020): 5161.
111. M. Belotti, N. Božić, G. Pujolle, and S. Secci, “A vademecum on blockchain technologies: When, which,
and how,” IEEE Communications Surveys Tutorials, vol. 21, no. 4, pp. 3796–3838, July 2019.
112. R. Yang, F. R. Yu, P. Si, Z. Yang, and Y. Zhang, “Integrated blockchain and edge computing systems: A
survey, some research issues and challenges,” IEEE Communications Surveys Tutorials, vol. 21, no. 2, pp.
1508–1532, January 2019.
113. T. Alladi, V. Chamola, J. J. Rodrigues, and S. A. Kozlov, “Blockchain in smart grids: A review on different
use cases,” Sensors, vol. 19, no. 22, p. 4862, 2019.
114. H. Ning, X. Ye, A. Ben Sada, L. Mao, and M. Daneshmand, “An attention mechanism inspired selective
sensing framework for physical-cyber mapping in internet of things,” IEEE Internet of Things Journal,
vol. 6, no. 6, pp. 9531–9544, July 2019.
115. A. Castiglione, K. R. Choo, M. Nappi, and S. Ricciardi, “Context aware ubiquitous biometrics in edge of
military things,” IEEE Cloud Computing, vol. 4, no. 6, pp. 16–20, December 2017.
116. J. Melià-Seguí and X. Vilajosana, “Ubiquitous moisture sensing in automaker industry based on standard
UHF RFID tags,” in IEEE International Conference on RFID, April.
117. Salem, Milad, Shayan Taheri, and Jiann-Shiun Yuan. "Utilizing transfer learning and homomorphic
encryption in a privacy preserving and secure biometric recognition system." Computers 8.1 (2019): 3.
118. Tang, Fengyi, et al. "Privacy-preserving distributed deep learning via homomorphic
re-encryption." Electronics 8.4 (2019): 411.
119. Li, Li, et al. "Homomorphic Encryption-Based Robust Reversible Watermarking for 3D
Model." Symmetry 12.3 (2020): 347.
120. Catak, Ferhat Ozgur, et al. "Practical Implementation of Privacy Preserving Clustering Methods Using a
Partially Homomorphic Encryption Algorithm." Electronics 9.2 (2020): 229.
121. Zhang, Ke, et al. "Edge intelligence and blockchain empowered 5G beyond for the industrial Internet of
Things." IEEE Network 33.5 (2019): 12-19.
122. Deng, Shuiguang, et al. "Edge intelligence: the confluence of edge computing and artificial
intelligence." IEEE Internet of Things Journal (2020).
123. Zhang, Yin, et al. "Edge intelligence in the cognitive Internet of Things: Improving sensitivity and
interactivity." IEEE Network 33.3 (2019): 58-64.
124. Van Den Berg, Daniël, et al. "Challenges in haptic communications over the tactile internet." IEEE
Access 5 (2017): 23502-23518.
125. Huang, Chongwen, et al. "Holographic MIMO surfaces for 6G wireless networks: Opportunities,
challenges, and trends." IEEE Wireless Communications (2020).
126. Giordani, Marco, et al. "Toward 6g networks: Use cases and technologies." IEEE Communications
Magazine 58.3 (2020): 55-61.
127. Bisio, Igor, et al. "Smart and robust speaker recognition for context-aware in-vehicle applications." IEEE
Transactions on Vehicular Technology 67.9 (2018): 8808-8821.
128. Hussain, Fatima, et al. "Machine learning in IoT security: current solutions and future challenges." IEEE
Communications Surveys & Tutorials (2020).
129. Jameel, Furqan, et al. "Machine learning techniques for wireless-powered ambient backscatter
communications: Enabling intelligent IoT networks in 6G era." Convergence of Artificial Intelligence and
the Internet of Things. Springer, Cham, 2020. 187-211.
130. Lamata, Lucas. "Quantum machine learning and quantum biomimetics: A perspective." Machine
Learning: Science and Technology (2020).
131. Biamonte, Jacob, et al. "Quantum machine learning." Nature 549.7671 (2017): 195-202.
132. Dunjko, Vedran, and Peter Wittek. "A non-review of Quantum Machine Learning: trends and
explorations." Quantum Views 4 (2020): 32.
133. Zhang, Yao, and Qiang Ni. "Recent advances in quantum machine learning." Quantum Engineering 2.1
(2020): e34.
134. Chen, Zhen, et al. "iLearn: an integrated platform and meta-learner for feature engineering,
machine-learning analysis and modeling of DNA, RNA and protein sequence data." Briefings in
bioinformatics 21.3 (2020): 1047-1057.
135. Khan, Irfan, et al. "A Literature Survey and Empirical Study of Meta-Learning for Classifier
Selection." IEEE Access 8 (2020): 10262-10281.
136. Hospedales, Timothy, et al. "Meta-learning in neural networks: A survey." arXiv preprint
arXiv:2004.05439 (2020).
137. Hospedales, Timothy, et al. "Meta-learning in neural networks: A survey." arXiv preprint
arXiv:2004.05439 (2020).
138. Hsu, Jui-Yang, Yuan-Jui Chen, and Hung-yi Lee. "Meta learning for end-to-end low-resource speech
recognition." ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal
Processing (ICASSP). IEEE, 2020.
139. Alet, Ferran, et al. "Meta-learning curiosity algorithms." arXiv preprint arXiv:2003.05325 (2020).
140. Zhu, Hangyu, and Yaochu Jin. "Multi-objective evolutionary federated learning." IEEE transactions on
neural networks and learning systems 31.4 (2019): 1310-1322.
141. Lim, Wei Yang Bryan, et al. "Federated learning in mobile edge networks: A comprehensive
survey." IEEE Communications Surveys & Tutorials (2020).
142. Xu, Guowen, et al. "Verifynet: Secure and verifiable federated learning." IEEE Transactions on
Information Forensics and Security 15 (2019): 911-926.
143. Yang, Timothy, et al. "Applied federated learning: Improving google keyboard query suggestions." arXiv
preprint arXiv:1812.02903 (2018).
144. Khan, M. A., Kumar, N., Mohsan, S. A. H., Khan, W. U., Nasralla, M. M., Alsharif, M. H., ... & Ullah, I.
(2022). Swarm of UAVs for network management in 6G: A technical review. IEEE Transactions on
Network and Service Management.
145. F. Tang, Y. Kawamoto, N. Kato, J. Liu, Future intelligent and secure vehicular network toward 6G:
machine-learning approaches, Proc. IEEE.
146. T. Huang, W. Yang, J. Wu, J. Ma, X. Zhang, D. Zhang, A survey on green 6G network: architecture and
technologies, IEEE Access 7 (2019) 175758–175768.
147. P. Yang, Y. Xiao, M. Xiao, S. Li, 6G wireless communications: vision and potential techniques, IEEE
Network 33 (4) (2019) 70–75.
148. Shafin, Rubayet, et al. "Artificial intelligence-enabled cellular networks: A critical path to beyond-5G and
6G." IEEE Wireless Communications 27.2 (2020): 212-217.
149. C. Jiang, H. Zhang, Y. Ren, Z. Han, K.-C. Chen, L. Hanzo, Machine learning paradigms for
next-generation wireless networks, IEEE Wireless Commun. 24 (2) (2016) 98–105.
150. I. Tomkos, D. Klonidis, E. Pikasis, S. Theodoridis, Toward the 6g network era: opportunities and
challenges, IT Prof. 22 (1) (2020) 34–38.
151. M. Katz, P. Pirinen, H. Posti, Towards 6g: getting ready for the next decade, in: 2019 16th International
Symposium on Wireless Communication Systems (ISWCS), IEEE, 2019, pp. 714–718.
152. Y. Wei, H. Liu, J. Ma, Y. Zhao, H. Lu, G. He, Global voice chat over short message service of Beidou
navigation system, in: 2019 14th IEEE Conference on Industrial Electronics and Applications (ICIEA),
IEEE, 2019, pp. 1994–1997.
153. Gui, G.; Liu, M.; Kato, N.; Adachi, F.; Tang, F. 6G: Opening New Horizons for Integration of Comfort,
Security and Intelligence. IEEE Wirel. Commun. 2020, 1–7.
154. Chen, S.; Sun, S.; Xu, G.; Su, X.; Cai, Y. Beam-space Multiplexing: Practice, Theory, and Trends-From 4G
TD-LTE, 5G, to 6G and Beyond. arXiv 2020, arXiv:2001.05021.
155. Dang, S.; Amin, O.; Shihada, B.; Alouini, M.-S. What should 6G be? Nat. Electron. 2020, 3, 20–29.
156. Liang, Y.-C.; Larsson, E.G.; Niyato, D.; Popovski, P. 6G Mobile Networks: Emerging Technologies and
Applications. China Commun. 2020, 17, 1–6.
157. Yuan, Y.; Zhao, Y.; Zong, B.; Parolari, S. Potential Key Technologies for 6G Mobile Communications.
arXiv 2019, arXiv:1910.00730.
158. Chowdhury, M.Z.; Shahjalal, M.; Ahmed, S.; Jang, Y.M. 6G Wireless Communication Systems:
Applications, Requirements, Technologies, Challenges, and Research Directions. arXiv 2019,
arXiv:1909.11315v1.
159. Wakunami, K.; Hsieh, P.-Y.; Oi, R.; Senoh, T.; Sasaki, H.; Ichihashi, Y.; Okui, M.; Huang, Y.; Yamamoto, K.
Projection-type see-through holographic three-dimensional display. Nat. Commun. 2016, 7, 1–7.
160. Aggarwal, S.; Kumar, N. Fog Computing for 5G-Enabled Tactile Internet: Research Issues, Challenges,
and Future Research Directions. Mob. Netw. Appl. 2019, 1–28.
161. Khalid, M.; Amin, O.; Ahmed, S.; Shihada, B.; Alouini, M.-S. Communication through breath: Aerosol
transmission. IEEE Commun. Mag. 2019, 57, 33–39.
162. C. Mukherjee et al., “Reliability-Aware Circuit Design Methodology for Beyond-5G Communication
Systems,” IEEE Trans. Dev. and Mat. Reli., vol. 17, no. 3, Sept. 2017, pp. 490–506.
163. Huq, K.M.S.; Busari, S.A.; Rodriguez, J.; Frascolla, V.; Bazzi, W.; Sicker, D.C. Terahertz-enabled wireless
system for beyond-5G ultra-fast networks: A brief survey. IEEE Netw. 2019, 33, 89–95.
164. Mumtaz, S.; Jornet, J.M.; Aulin, J.; Gerstacker, W.H.; Dong, X.; Ai, B. Terahertz communication for
vehicular networks. IEEE Trans. Veh. Technol. 2017, 66, 5617–5625.
165. Mohsan, S. A. H., Khan, M. A., Mazinani, A., Alsharif, M. H., & Cho, H. S. (2022). Enabling Underwater
Wireless Power Transfer towards Sixth Generation (6G) Wireless Networks: Opportunities, Recent
Advances, and Technical Challenges. Journal of Marine Science and Engineering, 10(9), 1282.
166. H. Yao et al., “The Space-Terrestrial Integrated Network: An Overview,” IEEE Commun. Mag., vol.
56, no. 9, Sept. 2018, pp. 178–85.
167. Bennis, M.; Debbah, M.; Poor, H.V. Ultrareliable and low-latency wireless communication: Tail, risk, and
scale. Proc. IEEE 2018, 106, 1834–1853.
168. Kasgari, A.T.Z.; Saad, W. Model-free ultra-reliable low-latency communication (URLLC): A deep
reinforcement learning framework. In Proceedings of the 2019 IEEE International Conference on
Communications (ICC), Shanghai, China, 20–24 May 2019; pp. 1–6.
169. Wu, T.; Rappaport, T.S.; Collins, C.M. Safe for generations to come: Considerations of safety for
millimeter waves in wireless communications. IEEE Microw. Mag. 2015, 16, 65–84.
170. Kleine-Ostmann, T. Health and Safety Related Aspects Regarding the Operation of THz Emitters. In
Towards Terahertz Communications Workshop; European Commission: Brussels, Belgium, 2018.
Available online: https://ec.europa.eu/digital-single-market/events/cf/towards-terahertz-communications-workshop/item-display.cfm?id=21219 (accessed on 12 August 2020).
171. Cho, C.; Maloy, M.; Devlin, S.M.; Aras, O.; Castro-Malaspina, H.; Dauer, L.T.; Jakubowski, A.A.; O’Reilly,
R.J.; Papadopoulos, E.B.; Perales, M.-A.; et al. Characterizing ionizing radiation exposure after T-cell
depleted allogeneic hematopoietic cell transplantation. Biol. Blood Marrow Transplant. 2018, 24, 252–253.
172. Chiaraviglio, L.; Cacciapuoti, A.S.; di Martino, G.; Fiore, M.; Montesano, M.; Trucchi, D.; Melazzi, N.B.
Planning 5G networks under EMF constraints: State of the art and vision. IEEE Access 2018, 6, 51021–
51037.
173. Korea lays out plan to become the first country to launch 6G. [Online]. Available:
https://www.6gworld.com/exclusives/korea-lays-out-plan-to-become-the-first-country-to-launch-6g/
174. Beyond 5G Promotion Consortium. [Online]. Available: https://b5g.jp/en/
175. INSPIRE 5Gplus. [Online]. Available: https://5g-ppp.eu/inspire-5gplus/
176. The AI@EDGE H2020 Project. [Online]. Available: https://aiatedge.eu/
177. Hexa-X Project. [Online]. Available: https://hexa-x.eu/
178. 5GZORRO Project. [Online]. Available: https://www.5gzorro.eu/
179. 6G Gains Momentum with Initiatives Launched Across the World. [Online]. Available:
https://www.6gworld.com/exclusives/6g-gains-momentum-with-initiatives-launched-across-the-world/
180. Next G Alliance. [Online]. Available: https://nextgalliance.org/
181. 6G BRAINS. [Online]. Available: https://6g-brains.eu/
182. DEDICATE 6G. [Online]. Available: https://5g-ppp.eu/dedicat-6g/
183. MARSAL. [Online]. Available: https://www.marsalproject.eu/
184. DAEMON. [Online]. Available: https://5g-ppp.eu/daemon/
185. REINDEER. [Online]. Available: https://reindeer-project.eu/about/
186. R. Chen, C. Li, S. Yan, R. Malaney, J. Yuan, Physical layer security for ultra-reliable and low-latency
communications, IEEE Wireless Commun. 26 (5) (2019) 6–11.
187. J.M. Hamamreh, E. Basar, H. Arslan, Ofdm-subcarrier index selection for enhancing security and
reliability of 5g urllc services, IEEE Access 5 (2017) 25863–25875.
188. Yamakami, A privacy threat model in xr applications, in: International Conference on Emerging
Internetworking, Data & Web Technologies, Springer, 2020, pp. 384–394.
189. E. C. Strinati, S. Barbarossa, J. L. Gonzalez-Jimenez, D. Ktenas, N. Cassiau, C. Dehos, 6G: The Next
Frontier, arXiv preprint arXiv:1901.03239.
190. X. Chen, Y. Wang, M. Nakanishi, X. Gao, T.-P. Jung, S. Gao, High-speed spelling with a noninvasive
brain–computer interface, Proc. Natl. Acad. Sci. Unit. States Am. 112 (44) (2015) E6058–E6067.
191. R.A. Ramadan, A.V. Vasilakos, Brain computer interface: control signals review, Neurocomputing 223
(2017) 26–44.
192. P. McCullagh, G. Lightbody, J. Zygierewicz, W.G. Kernohan, Ethical challenges associated with the
development and deployment of brain computer interface technology, Neuroethics 7 (2) (2014) 109–122.
193. Pham, Ngoc Quan, Vega Pradana Rachim, and Wan-Young Chung. "High-accuracy VLC-based indoor
positioning system using multi-level modulation." Optics express 27.5 (2019): 7568-7584.
194. Huynh, Phat, and Myungsik Yoo. "VLC-based positioning system for an indoor environment using an
image sensor and an accelerometer sensor." Sensors 16.6 (2016): 783.
195. Lv, Huichao, et al. "High accuracy VLC indoor positioning system with differential detection." IEEE
Photonics Journal 9.3 (2017): 1-13.
196. Rahman, A. B. M., Ting Li, and Yu Wang. "Recent Advances in Indoor Localization via Visible Lights: A
Survey." Sensors 20.5 (2020): 1382.
197. Omanović-Mikličanin, Enisa, Mirjana Maksimović, and Vladimir Vujović. "The future of healthcare:
nanomedicine and internet of nano things." Folia Medica Facultatis Medicinae Universitatis
Saraeviensis 50.1 (2015).
198. CGTN News. [Online]. Available:
https://news.cgtn.com/news/3d3d774d7945444e33457a6333566d54/index.html
199. Akyildiz, Ian F., and Josep Miquel Jornet. "The internet of nano-things." IEEE Wireless
Communications 17.6 (2010): 58-63.
200. Sicari, Sabrina, et al. "Beyond the smart things: Towards the definition and the performance assessment of
a secure architecture for the Internet of Nano-Things." Computer Networks 162 (2019): 106856.
201. Pramanik, Pijush Kanti Dutta, et al. "Advancing Modern Healthcare With Nanotechnology,
Nanobiosensors, and Internet of Nano Things: Taxonomies, Applications, Architecture, and
Challenges." IEEE Access 8 (2020): 65230-65266.
202. A. J. Hung, J. Chen, A. Shah, and I. S. Gill, “Telementoring and telesurgery for minimally invasive
procedures,” The Journal of Urology, vol. 199, no. 2, pp. 355 – 369, 2018.
203. Y. Al-Eryani, E. Hossain, The D-OMA method for massive multiple access in 6G: performance, security,
and challenges, IEEE Veh. Technol. Mag. 14 (3) (2019) 92–99.
204. I. Svogor, T. Kišasondi, Two factor authentication using EEG-augmented passwords, in: Proceedings of
the ITI 2012 34th International Conference on Information Technology Interfaces, IEEE, 2012, pp. 373–378.
205. J. Ni, X. Lin, X. Shen, Toward privacy-preserving valet parking in autonomous driving era, IEEE Trans.
Veh. Technol. 68 (3) (2019) 2893–2905.
206. X. Sun, W. Yang, Y. Cai, R. Ma, L. Tao, Physical layer security in millimeter wave swipt uav-based relay
networks, IEEE Access 7 (2019) 35851–35862.
207. Syed, Junaid Nawaz, et al. "Next-Generation Consumer Electronics for 6G Wireless Era." (2020).
},
{
"paperId": "72cb9b7db52ef30f9fab2f5f8df9f224598c81b7",
"title": "Wireless Communications and Applications Above 100 GHz: Opportunities and Challenges for 6G and Beyond"
},
{
"paperId": "857bcf523e2447c6ed74f3c3ee94bbe3ad05c42e",
"title": "Intelligent Reflecting Surface Assisted Wireless Communication: Modeling and Channel Estimation"
},
{
"paperId": "f83bc3b05635deccf50bf876fceebff1c7822338",
"title": "Global voice chat over short message service of Beidou navigation system"
},
{
"paperId": "fa3c0f3c0dc834d65cef14105e3f8070cef20dd9",
"title": "Smart radio environments empowered by reconfigurable AI meta-surfaces: an idea whose time has come"
},
{
"paperId": "d6e281c2b02b59c7b519e2f883a7fdc7a1c2de31",
"title": "End-to-end capacities of a quantum communication network"
},
{
"paperId": "848474f7b999308c0e63fc590d93646cb2281464",
"title": "Making Knowledge Tradable in Edge-AI Enabled IoT: A Consortium Blockchain-Based Efficient and Incentive Approach"
},
{
"paperId": "5536d7c5b1c4c3f24004ca6082e2a5593a891a83",
"title": "Security for 5G and Beyond"
},
{
"paperId": "9b37298e1497ac6bbd6a2c6d27c6acdb9d54d7d5",
"title": "What should 6G be?"
},
{
"paperId": "7d82961e80696d0d900a544370606bd359591d7e",
"title": "Edge Intelligence in the Cognitive Internet of Things: Improving Sensitivity and Interactivity"
},
{
"paperId": "a509a456b6ec55e066d316b9f4b55cddafc260cf",
"title": "Model-Free Ultra Reliable Low Latency Communication (URLLC): A Deep Reinforcement Learning Framework"
},
{
"paperId": "cae22ac04006c776a9110802f4b14567e9794d54",
"title": "Blockchain and Deep Reinforcement Learning Empowered Intelligent 5G Beyond"
},
{
"paperId": "70978b22b32a344545bfce3793b8dc7820e38974",
"title": "The Roadmap to 6G - AI Empowered Wireless Networks"
},
{
"paperId": "4acc1d2bf93d3ea4f319c41f0f96a2785103b170",
"title": "iLearn : an integrated platform and meta-learner for feature engineering, machine-learning analysis and modeling of DNA, RNA and protein sequence data"
},
{
"paperId": "ed5481af16a43cdff9d16ac6b88e5299358e0a14",
"title": "Reconfigurable Intelligent Surface-Based Index Modulation: A New Beyond MIMO Paradigm for 6G"
},
{
"paperId": "8a8a66c7470b1a2b1fdd03e15dfe45d4bcea534d",
"title": "Deterministic secure quantum communication with and without entanglement"
},
{
"paperId": "3162f47e300004d3fa0d09f680f4177c649187fb",
"title": "Privacy-Preserving Distributed Deep Learning via Homomorphic Re-Encryption"
},
{
"paperId": "7b9534aa5b8bda629fed2a1d2c525aa4f40f4a15",
"title": "6G: the Wireless Communications Network for Collaborative and AI Applications"
},
{
"paperId": "d682326dc1f552053ffbbee41e426da8170d8872",
"title": "Quantum Machine Learning for 6G Communication Networks: State-of-the-Art and Vision for the Future"
},
{
"paperId": "ac34f3b6326c55074706a1c0fa091037e93332f9",
"title": "Ubiquitous moisture sensing in automaker industry based on standard UHF RFID tags"
},
{
"paperId": "aa0cd8ce69947fe0028e1cd103c7e2107cff8d39",
"title": "Concurrent illumination and communication: A survey on Visible Light Communication"
},
{
"paperId": "73b5d87a0d882891e652deaa3b2a170b7d27e3db",
"title": "Towards 6G Networks: Use Cases and Technologies"
},
{
"paperId": "31a80229f0e3bcc971a549f2be7997bee826acb8",
"title": "Energy-Efficient Edge Computing Service Provisioning for Vehicular Networks: A Consensus ADMM Approach"
},
{
"paperId": "bfdad8066d2b8dff3e9bae43715724813ad5dbc3",
"title": "Machine Learning in IoT Security: Current Solutions and Future Challenges"
},
{
"paperId": "2c9378414235768542a7ae9200dbac725f01b966",
"title": "Six Key Enablers for Machine Type Communication in 6G"
},
{
"paperId": "f0020df023bfe8f6363982fe4a4a90cdf587fca3",
"title": "Physical Layer Security in Millimeter Wave SWIPT UAV-Based Relay Networks"
},
{
"paperId": "06f767ad7f6b49cc34b4359d55837a19bd285928",
"title": "Enhancement of Physical Layer Security With Simultaneous Beamforming and Jamming for Visible Light Communication Systems"
},
{
"paperId": "614867de2938badb567fc0d513447bb25cb4dfc5",
"title": "A survey on terahertz communications"
},
{
"paperId": "f5534af2bb2e3804f2a19b3a034aa6310bdb006d",
"title": "High-accuracy VLC-based indoor positioning system using multi-level modulation."
},
{
"paperId": "92cc5678bc8a323409d2bcbf02b558635b8ea3fc",
"title": "From 5G to 6G: Has the Time for Modern Random Access Come?"
},
{
"paperId": "98929800f75790c41e1c88e5f613c3c9412fac6f",
"title": "A Vision of 6G Wireless Systems: Applications, Trends, Technologies, and Open Research Problems"
},
{
"paperId": "bf976cf1b2035d58a7691ed15a550389d3cd608e",
"title": "A Speculative Study on 6G"
},
{
"paperId": "e3dc80530c9e27e06a033bb00bd0e9d8b79590d2",
"title": "Toward Privacy-Preserving Valet Parking in Autonomous Driving Era"
},
{
"paperId": "0b7a10138d8037d28dff301e207ff92f61554c2a",
"title": "Integrated Blockchain and Edge Computing Systems: A Survey, Some Research Issues and Challenges"
},
{
"paperId": "bfdf579e76178f84598cba563a5e013bf43cc8e6",
"title": "6G: The Next Frontier"
},
{
"paperId": "83f3b8760ee924996967907ce677b8db144d675c",
"title": "Evolution of Physical-Layer Communications Research in the Post-5G Era"
},
{
"paperId": "866bafba4c92e0185fe767b5bc9207a1fca395a3",
"title": "Machine Learning Based Antenna Design for Physical Layer Security in Ambient Backscatter Communications"
},
{
"paperId": "95a82da2071b7aec42907aa4174263153d48c400",
"title": "Utilizing Transfer Learning and Homomorphic Encryption in a Privacy Preserving and Secure Biometric Recognition System"
},
{
"paperId": "aceb99ad63ba02ea2df0b5bc7502fcdabf04d4e3",
"title": "Multi-Objective Evolutionary Federated Learning"
},
{
"paperId": "b97047c4dc75cbe8d6fc5cb3dd5a81d36458892d",
"title": "APPLIED FEDERATED LEARNING: IMPROVING GOOGLE KEYBOARD QUERY SUGGESTIONS"
},
{
"paperId": "e828531558af7cfb7c5a1d7802aa59fe242bd142",
"title": "The effect of RBCs concentration in blood on the wireless communication in Nano-networks in the THz band"
},
{
"paperId": "d97c65a74cc5c4ad26ccf0a045880a69ccfe9697",
"title": "Communication through Breath: Aerosol Transmission"
},
{
"paperId": "03c7af3cd6348ef0427dafdeea4f0a944e50e3eb",
"title": "Future Networks 2030: Architecture & Requirements"
},
{
"paperId": "043b0f74aae98fe9a3c6a434d30aec743222502d",
"title": "6Genesis Flagship Program: Building the Bridges Towards 6G-Enabled Wireless Smart Society and Ecosystem"
},
{
"paperId": "2c5688d0eeaff8a678ceab8658579bf6a315bdac",
"title": "Integrated Terrestrial/Non-Terrestrial 6G Networks for Ubiquitous 3D Super-Connectivity"
},
{
"paperId": "3ad86826e19ba02e01024a80f1acc1df4b391567",
"title": "Security and eavesdropping in terahertz wireless links"
},
{
"paperId": "6f89a632ceb8fcb81eac3d7b52e937099659cc6a",
"title": "In-Edge AI: Intelligentizing Mobile Edge Computing, Caching and Communication by Federated Learning"
},
{
"paperId": "6b10f73fc24b35aaa27c406ba88d2dd3c08adbe8",
"title": "Planning 5G Networks Under EMF Constraints: State of the Art and Vision"
},
{
"paperId": "8fcac9ef31c4f790bb417f04d44bf882863a0b29",
"title": "Data Center Connectivity by 6G Wireless Systems"
},
{
"paperId": "eb7be33dbc1c70cd5d620e536f298d4cd00a6c7e",
"title": "Nano-networks communication architecture: Modeling and functions"
},
{
"paperId": "8ec66ecc30e8cc84ae45469283b7dbf8f59ea09e",
"title": "6G Vision and Requirements: Is There Any Need for Beyond 5G?"
},
{
"paperId": "b1d0ced99f7aecbcf41ab29bdc0fe0dfb3ff2c81",
"title": "Distributed Ledger Technology for Smart Cities, the Sharing Economy, and Social Compliance"
},
{
"paperId": "2ce4c8741fea2ed49b8ff47ddf828d6f5e533f2c",
"title": "Smart and Robust Speaker Recognition for Context-Aware In-Vehicle Applications"
},
{
"paperId": "4499f86c2a99bbe29ab00cb2f783d5baaaa86ec9",
"title": "Beyond 5G With UAVs: Foundations of a 3D Wireless Cellular Network"
},
{
"paperId": "f715c379ee1ff46050e4a4b914cfeb55155f672d",
"title": "The Space-Terrestrial Integrated Network: An Overview"
},
{
"paperId": "ff8eea01cbb5de505672cf9bbda3a6a91624cf52",
"title": "Quantum Machine Learning"
},
{
"paperId": "336442fc49b9e038c1fb2c33db38968adc341e61",
"title": "A nanoscale communication network scheme and energy model for a human hand scenario"
},
{
"paperId": "a3846fef9562b213ed29036d6f550b35fc356cf2",
"title": "Telementoring and Telesurgery for Minimally Invasive Procedures"
},
{
"paperId": "1041640fdfcd0c3aebdf36881c9585206b5a0faa",
"title": "Secure Blockchains for Dynamic Spectrum Access: A Decentralized Database in Moving Cognitive Radio Networks Enhances Security and User Access"
},
{
"paperId": "f6aa41419770fd2c6cc1c352b36a35ce5a38fde5",
"title": "Ultrareliable and Low-Latency Wireless Communication: Tail, Risk, and Scale"
},
{
"paperId": "d2dd2324c11bc866899507449ad05a333aaf1f87",
"title": "Characterizing Ionizing Radiation Exposure after T-Cell Depleted Allogeneic Hematopoietic Cell Transplantation"
},
{
"paperId": "5106df4ec98e7257e022f693e1250ffef72e2384",
"title": "On blockchain-based authorization architecture for beyond-5G mobile services"
},
{
"paperId": "bea0528999eb6cd0e18864b808006a520e8ffd16",
"title": "Context Aware Ubiquitous Biometrics in Edge of Military Things"
},
{
"paperId": "4bf2874ed2df1f1db8f444925efd2a3f7c9e8339",
"title": "OFDM-Subcarrier Index Selection for Enhancing Security and Reliability of 5G URLLC Services"
},
{
"paperId": "ddabfda93ba39531b8512271cc94706733cdbcfc",
"title": "Challenges in Haptic Communications Over the Tactile Internet"
},
{
"paperId": "f2bb4a7568d9a32ec02e496baf04acad8827b65d",
"title": "Terahertz Communication for Vehicular Networks"
},
{
"paperId": "c5ba96f47c8e8c41cdbcc12806929035d351e8e9",
"title": "Reliability-Aware Circuit Design Methodology for Beyond-5G Communication Systems"
},
{
"paperId": "32ff816db1657bf917450b3e55b9d5543ff57210",
"title": "High Accuracy VLC Indoor Positioning System With Differential Detection"
},
{
"paperId": "dd41d656e21c30dd761bee2eba303d1aa014d120",
"title": "Machine Learning Paradigms for Next-Generation Wireless Networks"
},
{
"paperId": "f2bc778c26f6945ddf1335bd8ec62c8127662967",
"title": "Brain computer interface: control signals review"
},
{
"paperId": "1dffa17380cd3cef1ebf19cbd96397750e675a08",
"title": "Fundamentals of Molecular Information and Communication Science"
},
{
"paperId": "8c0544f5176a0c28a7433f69fe707421edee13d0",
"title": "SecVLC: Secure Visible Light Communication for Military Vehicular Networks"
},
{
"paperId": "a5838456a5117d5cee34c1a560868f8c72a0c302",
"title": "Projection-type see-through holographic three-dimensional display"
},
{
"paperId": "f315db3199ee6729f12e8f5e9ab003091586531b",
"title": "Energy-Efficient Resource Allocation for D2D Communications Underlaying Cloud-RAN-Based LTE-A Networks"
},
{
"paperId": "04aeabbce327b23ca7670be7ea5af5e93ebc492f",
"title": "VLC-Based Positioning System for an Indoor Environment Using an Image Sensor and an Accelerometer Sensor"
},
{
"paperId": "e8dac4e0bd6a6b765319c96b9f2fc5b991f83e4a",
"title": "High-speed spelling with a noninvasive brain–computer interface"
},
{
"paperId": "7b4bb9f65030baeb49c5e8e89eeb28a85a2e54d2",
"title": "Comparison of Channel Coding Schemes for Molecular Communications Systems"
},
{
"paperId": "26e309cad2af82adb3606d776a4e606966d2fde7",
"title": "A Survey of 5G Network: Architecture and Emerging Technologies"
},
{
"paperId": "1323d5de09adcac37d599147aa1a554ebac88f40",
"title": "The Future of Healthcare: Nanomedicine and Internet of Nano Things"
},
{
"paperId": "834828629d9b0684f3d57318dac894401c20083a",
"title": "The internet of Bio-Nano things"
},
{
"paperId": "9bb2efd9e69fd353316a220d271c453dfec8390c",
"title": "Experimental quantum secure direct communication with single photons"
},
{
"paperId": "de9f48320c2ee2970fa4909635729b0c27b7890f",
"title": "Horizon 2020 and Beyond: On the 5G Operating System for a True Digital Society"
},
{
"paperId": "9af2eee4f190f27ba718992026c5fa8dfabdda27",
"title": "Safe for Generations to Come: Considerations of Safety for Millimeter Waves in Wireless Communications"
},
{
"paperId": "57f71a752a67e5daabb4795b253e89ed43e36e92",
"title": "A Comprehensive Survey of Recent Advancements in Molecular Communication"
},
{
"paperId": "e7121cc91eab11e985a1b7f532bffd676f67f366",
"title": "Full length article: Terahertz band: Next frontier for wireless communications"
},
{
"paperId": "037453f311548b4f90dee58ef38a1651ea0e7fc5",
"title": "TeraNets: ultra-broadband communication networks in the terahertz band"
},
{
"paperId": "0bdb85d1c5c3b7870e5c1b98c7679c021d279030",
"title": "Security and Privacy in Molecular Communication and Networking: Opportunities and Challenges"
},
{
"paperId": "61a22de7f2c1cc424e3f6f77b3a0c2b6b6c231e5",
"title": "Ethical Challenges Associated with the Development and Deployment of Brain Computer Interface Technology"
},
{
"paperId": "61a2afa2d7ab0e12400b76648de12e9b651a4b9a",
"title": "Evolution mobile wireless communication and LTE networks"
},
{
"paperId": "c229e8b711b45124dfd738b0014befc247c0350a",
"title": "Two factor authentication using EEG augmented passwords"
},
{
"paperId": "e757c0c2aa71dce4491e3a542171f22670da5a67",
"title": "A brief history of mobile communication in Europe"
},
{
"paperId": "c29157683751e8f20f1ececfad65c265dd1eaf87",
"title": "VerifyNet: Secure and Verifiable Federated Learning"
},
{
"paperId": "fa1e0779039659fa8ee06d0874192cf45d13ac9e",
"title": "6G: Envisioning the Key Technologies, Applications and Challenges"
},
{
"paperId": "587f59303202e60ac8ce69bb871ec33a25e8f27d",
"title": "Machine Learning Techniques for Wireless-Powered Ambient Backscatter Communications: Enabling Intelligent IoT Networks in 6G Era"
},
{
"paperId": "2f96ba7af81a89d3e57b04e7566766211c733c24",
"title": "White Paper 5G Evolution and 6G"
},
{
"paperId": "61758b4e43aaf246a765179c34ff967cac3cd2d6",
"title": "Wave Propagation and Channel Modeling in Chip-Scale Wireless Communications: A Survey From Millimeter-Wave to Terahertz and Optics"
},
{
"paperId": "b17fad6604a7d72db9cc29adf93ba0b7a3a91b8f",
"title": "A Literature Survey and Empirical Study of Meta-Learning for Classifier Selection"
},
{
"paperId": "49f1dea3c7442e200e5f7dd102e14912eec5a0b9",
"title": "Advancing Modern Healthcare With Nanotechnology, Nanobiosensors, and Internet of Nano Things: Taxonomies, Applications, Architecture, and Challenges"
},
{
"paperId": null,
"title": "6G Mobile Networks: Emerging Technologies and Applications"
},
{
"paperId": "7cd316505f52aa337ef8a2aff10bc6bf1df561d0",
"title": "and s"
},
{
"paperId": "3e6255bcabf9533cb5e2f58dc08f00d3a753621e",
"title": "Blockchain Radio Access Network (B-RAN): Towards Decentralized Secure Radio Access Paradigm"
},
{
"paperId": "773833a629415d918a536bd9aeebc61d50617920",
"title": "EdgeAI: A Vision for Distributed, Edge-native Artificial Intelligence in Future 6G Networks"
},
{
"paperId": "9c32203d79e469ca77cee36468fad608143d71c8",
"title": "IEEE Transactions on Neural Networks and Learning Systems"
},
{
"paperId": null,
"title": "́c, G"
},
{
"paperId": null,
"title": "``Big data analysis-based secure cluster management for optimized control plane in software-de_ned networks,'' IEEE Trans"
},
{
"paperId": null,
"title": "Health and Safety Related Aspects Regarding the Operation of THz Emitters"
},
{
"paperId": null,
"title": "“5th Generation (5G),”"
},
{
"paperId": null,
"title": "MT Traf_c Estimates for the Years 2020 to 2030"
},
{
"paperId": "60f87896b0e3fbf0ae637237a58a1be2a0e64c9e",
"title": "The 3G standard setting strategy and indigenous innovation policy in China is TD-SCDMA a flagship?"
},
{
"paperId": null,
"title": "FCC Takes Steps to Open Spectrum Horizons for New Services and Technologies"
},
{
"paperId": null,
"title": "Nikkei Asian Review [ Online ]"
},
{
"paperId": null,
"title": "Korea lays out plan to become the first country to launch 6G"
},
{
"paperId": null,
"title": "6 G : the wireless communications network for collaborative and AI applications \" 6 G : the wireless communications network for collaborative and AI applications"
},
{
"paperId": null,
"title": "ETSI , “ 5 th Generation ( 5 G ) , ” 2018 , retrieved Jan . 2019 . [ Online ] Patgiri , “ 6 G : Envisioning the Key Issues and Challenges"
},
{
"paperId": null,
"title": "Beyond 5G Promotion Consortium"
},
{
"paperId": null,
"title": "6G Flagship"
},
{
"paperId": null,
"title": "The privacy and security concerns are investigated and presented"
},
{
"paperId": null,
"title": "The state-of-the-art towards 6G is provided"
},
{
"paperId": null,
"title": "A speculative study on 6 G , ” arXiv preprint arXiv : 1902"
},
{
"paperId": null,
"title": "A taxonomy based on machine learning techniques, communication technologies, computing technologies, use cases, key enablers and network technologies is provided"
},
{
"paperId": null,
"title": "6G Wireless Summit"
},
{
"paperId": null,
"title": "Research challenges and associated solutions"
}
] | 34,645
|
en
|
[
{
"category": "Education",
"source": "s2-fos-model"
},
{
"category": "Mathematics",
"source": "s2-fos-model"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/fff273f87cd446b452cd2dab2a6f00913d69a161
|
[] | 0.863576
|
Instruments for Measuring Pre-service Mathematics Teachers’ TPACK Skill in Integrating Technology: A Systematic Literature Review
|
fff273f87cd446b452cd2dab2a6f00913d69a161
|
International Journal of Information and Education Technology
|
[
{
"authorId": "73710836",
"name": "Naufal Ishartono"
},
{
"authorId": "8282343",
"name": "S. H. Halili"
},
{
"authorId": "8474864",
"name": "R. Razak"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"Int J Inf Educ Technol"
],
"alternate_urls": [
"http://www.ijiet.org/list-6-1.html"
],
"id": "e15d2773-8b00-446c-8553-29014f48feaf",
"issn": "2010-3689",
"name": "International Journal of Information and Education Technology",
"type": "journal",
"url": "http://www.ijiet.org/"
}
|
A Systematic Literature Review (SLR) was undertaken by many researchers to examine studies that examined Pre-Service Mathematics Teachers’ technology integration skills in the Technological Pedagogical Content Knowledge (TPACK) framework. However, there has been little SLR research that analyzes the tools employed by earlier studies to measure these skills. As a result, this SLR investigates the instruments used to assess Pre-Service Mathematics Teachers’ (PSMTs) TPACK skills in integrating technology during teaching practice by addressing three issues: 1) what instruments have previous studies used to assess PSMTs’ TPACK skills in integrating technology? 2) what instruments are frequently used as references? and 3) what other frameworks are combined with TPACK in the measurement? This study adhered to the PRISMA guidelines based on the Scopus and Web of Science databases. This study filtered out 17 papers in total. According to the findings of this study, the TPACK questionnaire is the most commonly utilized instrument by researchers in the examined studies. The best appropriate instrument is the TPACK questionnaire created by Schmidt et al. Finally, attitude and perception are heavily incorporated into studies testing the TPACK skills of PSMTs. Future studies can use this study to determine the best instrument for testing PSMTs’ TPACK skills.
|
# Instruments for Measuring Pre-service Mathematics Teachers’ TPACK Skill in Integrating Technology: A Systematic Literature Review
Naufal Ishartono, Siti Hajar binti Halili*, and Rafiza binti Abdul Razak
**Abstract—A Systematic Literature Review (SLR) was undertaken by many researchers to examine studies that examined Pre-Service Mathematics Teachers’ technology integration skills in the Technological Pedagogical Content Knowledge (TPACK) framework. However, there has been little SLR research that analyzes the tools employed by earlier studies to measure these skills. As a result, this SLR investigates the instruments used to assess Pre-Service Mathematics Teachers’ (PSMTs) TPACK skills in integrating technology during teaching practice by addressing three issues: 1) what instruments have previous studies used to assess PSMTs’ TPACK skills in integrating technology? 2) what instruments are frequently used as references? and 3) what other frameworks are combined with TPACK in the measurement? This study adhered to the PRISMA guidelines based on the Scopus and Web of Science databases. This study filtered out 17 papers in total. According to the findings of this study, the TPACK questionnaire is the most commonly utilized instrument by researchers in the examined studies. The best appropriate instrument is the TPACK questionnaire created by Schmidt et al. Finally, attitude and perception are heavily incorporated into studies testing the TPACK skills of PSMTs. Future studies can use this study to determine the best instrument for testing PSMTs’ TPACK skills.**
**_Index Terms_—Technological Pedagogical Content Knowledge (TPACK), pre-service mathematics teachers, technology integration**
I. INTRODUCTION
Many previous studies aimed to improve students’
understanding of mathematical concepts by integrating digital
technology in mathematics learning, such as GeoGebra,
Matlab, android applications, Augmented Reality, and Virtual
Reality [1–5]. Integrating digital technology in mathematics
learning helps teachers deliver relatively complex
mathematical concepts more efficiently [6]. The complexity
of mathematical concepts arises from mathematical objects
which have an abstract nature [7]. Therefore, teachers’
awareness of the need for digital learning media to bridge
teachers’ delivery and students’ understanding of
mathematical concepts is fundamental. Realizing the
importance of digital technology integration in mathematics
learning, the skills of teachers must be prepared as early as
Manuscript received December 27, 2022; revised February 15, 2023;
accepted February 27, 2023.
Naufal Ishartono is now with University of Malaya, Kuala Lumpur,
Malaysia and the Faculty of Teacher Training and Education in Universitas
Muhammadiyah Surakarta, Indonesia.
Siti Hajar binti Halili and Rafiza binti Abdul Razak are with the
Department of Curriculum and Instructional Technology, University of
Malaya, Malaysia.
[*Correspondence: siti_hajar@um.edu.my (S.H.B.H.)](mailto:siti_hajar@um.edu.my)
possible, especially at the Pre-Service Mathematics Teachers
(PSMTs) level.
By definition, PSMTs are similar to other college students.
PSMTs are Pre-Service Teachers (PSTs) who study
mathematics education in the mathematics education
department of an education faculty or at higher
education institutions [8]. PSMTs also follow a curriculum and
programs to become prospective professional mathematics
teachers, like other pre-service teachers. Some examples of
programs provided to PSMTs are microteaching and
school-teaching internships. Microteaching is a course that
focuses on developing the initial skills of PSMTs in teaching
[9]. In this course, they practice teaching their peers who
pretend to be students. Of course, these activities are under
the supervision and evaluation of lecturers regarding teaching
techniques, the validity of the materials taught, and their skills
in delivering the materials. This course is a prerequisite to
continue to the school-teaching internship program, where the
PSMTs become assistants for in-service teachers in teaching
and managing classes. The main goal of a teaching internship
is to strengthen and deepen the knowledge gained by students
in the learning process and to improve their skills and
knowledge of the future profession [10].
Almost all universities that organize the Professional
Teacher Training Program (PTTP) in Indonesia provide
microteaching and school-teaching internship programs as
part of their curriculum [11]. The same programs also run in
China, Korea, and Turkey, where universities in the three
countries provide microteaching and teaching internship
programs for PSTs [12–14]. This is done to ensure that the
PSTs have enough experience and initial insight as
professional teacher candidates. Many pedagogical concepts
are taught in these programs, one of which is the improvement
of PSTs’ skills in integrating digital technology into their
teaching practice.
The digital technology integration skills given to them are
about using digital-based mathematics multimedia—such as
GeoGebra, MATLAB, the Statistical Package for Social Sciences
(SPSS), and Desmos—as part of various mathematics
teaching activities such as assessment, information delivery,
visualization of mathematical objects, and simulation of
mathematics concepts. Therefore, a framework is needed to
assist PSMTs in integrating technology into their teaching
practice.
_A._ _Theoretical Perspective of Technology Integration in_
_Mathematics Education_
Technological integration in education has become a
long-standing issue among educational researchers.
Researchers in the field of education have highlighted the
importance of improving the quality of the learning process in
terms of effectiveness and efficiency without reducing the
meaningfulness of the learning process. In the mathematics
learning process, technology integration helps
mathematics teachers in many aspects, one of which is
material visualization [1]. Although experts have
no agreement regarding the definition of mathematics, some
argue that mathematics has abstract working objects [15–18].
Since the processing of abstract objects only occurs in the
brain, it can be said that mathematics is a cognitive
activity [19]. The problem is that not all students have good
mathematical abstraction skills. So, a medium that makes
abstract mathematical objects easier for students to
understand is needed [20]. In that case, technological
integration plays a significant role, namely visualizing abstract
mathematical objects.
Previous researchers have developed frameworks that
guide teachers in integrating technology into their learning
designs (see Table I for a sample of technological
integration frameworks). Table I shows several technological
integration frameworks often used by researchers in education:
Technological-Pedagogical-Content-Knowledge (TPACK);
Substitution, Augmentation, Modification and Redefinition
(SAMR); Universal Design for Learning (UDL);
Technological Integration Matrix (TIM); Technology
Integration Planning (TIP); Level of Technology
Implementation (LoTi); Passive, Interactive, Creative
Replacement, Amplification, and Transformation (PIC-RAT);
and Translational, Transformational, and Transcendent (T3).
Table I also shows the number of research publications (n)
related to each framework where the data were taken from the
ERIC (Education Resources Information Center) database.
ERIC was selected because its standard of article
selection is relatively high [21]. The data collection
was carried out with the limitation that the articles were
research articles published between 2018 and 2022. Based on
Table I, this section compares the three frameworks with the
highest number of research articles: TPACK, Universal
Design for Learning (UDL), and SAMR.
TABLE I: TECHNOLOGICAL INTEGRATION FRAMEWORKS

| Frameworks | Inventors | Description | n |
|---|---|---|---|
| TPACK | [22] | This framework combines three main knowledge components, namely technological knowledge (TK), pedagogical knowledge (PK), and content knowledge (CK). | 413 |
| SAMR | [23] | This framework consists of substitution (S), augmentation (A), modification (M), and redefinition (R). | 43 |
| UDL | [24] | The Universal Design for Learning (UDL) framework consists of three principles, which are multiple means of representation (MMR), multiple means of action and expression (MMAE), and multiple means of engagement (MME). | 239 |
| T3 | [25] | The T3 framework consists of three hierarchical domains: T1) Translational, T2) Transformational, and T3) Transcendent. | 1 |
| TIM | [26] | TIM (Technological Integration Matrix) has five interdependent characteristics of meaningful learning environments: active, collaborative, constructive, authentic, and goal-directed. | 0 |
| PIC-RAT | [27] | PICRAT consists of two parts, which are PIC (passive, interactive, and creative) and RAT (replacement, amplification, and transformation). | 1 |
| TIP | [28] | TIP (Technology Integration Planning) is a framework that has seven steps, namely 1) identifying an instructional goal, 2) determining a pedagogical approach, 3) considering tools, 4) contributing to instruction, 5) identifying constraints, 6) delivering instruction, and 7) reflecting. | 3 |
| LoTI | [29] | LoTI (Level of Technology Implementation) has the levels 0 (non-use), 1 (awareness), 2 (exploration), 3 (infusion), 4a (mechanical integration), 4b (routine integration), 5 (expansion), and 6 (refinement). | 4 |
The TPACK framework, or Technological, Pedagogical,
and Content Knowledge, is a framework proposed by
Mishra and Koehler [22]. In addition to having three essential
components—TK, PK, and CK—the combination of the three
components also produces three combined components,
namely TPK (Technological and Pedagogical Knowledge),
TCK (Technological and Content Knowledge), and PCK
(Pedagogical and Content Knowledge). This framework has
been widely used by previous researchers who examine how
teachers integrate technology in education from practical and
psychological aspects, such as teachers’ beliefs on
technological integration using TPACK [30–34].
The second framework is Substitution, Augmentation,
Modification, and Redefinition (SAMR) which was first
introduced by Puentedura [23]. This framework is a
development of the framework RAT (Replacement,
Amplification, and Transformation) proposed by
Hughes et al. [35]. This framework encourages educators to
improve the quality of learning via technology. However, this
framework is considered unclear regarding the boundaries
between its levels, specifically between augmentation and
substitution [27]. In addition, Kimmons argues that this
framework’s levels of distinction may not be meaningful for practitioners.
Lastly, Universal Design for Learning (UDL) Framework
is a framework initiated by the Center for Applied Special
Technology (CAST) in 2012; this framework is an approach
to instruction that promotes access, participation, and
progress in the general education curriculum for all learners
[24]. UDL acknowledges the necessity to provide curricula
and instructional activities that allow for multiple forms of
representation, expression, and interaction to promote the
inclusion of diverse learners [36]. Based on this explanation,
it can be said that this framework is not explicitly made for
integrating technology into the learning process.
In teaching mathematics in the 21st century, teachers‘ skills
in integrating digital technology into learning are one of the
factors that can determine the success of the transfer of
knowledge [37]. Mathematics that contains abstract objects
requires the teachers to be able to make the object closer to
students‘ life. The more students can feel it through their
senses, the more meaningful the learning process will be, for
example, when the teacher visualizes abstract objects or lets
students manipulate the digital mathematics learning media.
Therefore, the technological integration framework is an
essential framework that mathematics educators must hold.
The framework in question can relate to the teachers‘ basic
knowledge of technological aspects, pedagogical aspects, and
aspects of the material taught. Thus, the technological
integration framework that complies with these demands is
TPACK.
_B._ _TPACK and Pre-service Mathematics Teachers_
The need for a theory and framework for the concept of
professionalism of a teacher prompted Shulman to propose a
framework called PCK, or Pedagogical and Content
Knowledge [38]. The PCK framework proposed by Shulman
includes a dynamic and complex relationship between
pedagogical knowledge and content knowledge (the material
taught) (See Fig. 1). According to Shulman, PCK integrates
content knowledge and pedagogy and affirms teachers‘
understanding of how a topic is structured, adapted, and
presented according to the diversity of students‘ abilities and
interests [38]. Furthermore, Shulman suggested that subjects‘
pedagogy and content should be integrated because teaching
pedagogy and content as separate activities was not adequate.
PCK became a fundamental framework for researchers and
practitioners in the field of education and became the basis for
the subsequent extensive educational research [39].
Fig. 1. Pedagogical content knowledge.
Studies related to the PCK framework continue to develop
and adapt to the times. One of the adjustments made is the one
by Mishra and Koehler [22], where they integrated
technological knowledge into the PCK framework and
became TPCK (Technology, Pedagogical, and Content
Knowledge). This is because, in 2006, computer technology
was developing rapidly and entering education. Moreover,
Mishra and Koehler [22] also argue that teaching using
technology is very complex for teachers. They saw that
existing technology was still partial and did not support each
other, such as pencils used for writing and microscopes used
only to see small objects. Therefore, integrating technology
into PCK becomes a way to address educational problems
effectively and efficiently: students can fully understand the
material taught using various resources that increase their
understanding. By 2008, the research community had
proposed a more pronounceable name, TPACK [40]. To date,
the TPACK framework has become a reference for assessing
teachers‘ skills in teaching, focused on how teachers can
connect their pedagogical knowledge, content knowledge,
and technological knowledge in a comprehensive and
meaningful learning process [41, 42].
The TPACK framework in Fig. 2 explains the knowledge of
technology (TK), the knowledge of content (CK), and the
knowledge of pedagogical (PK). TK in this framework is the
knowledge related to how a teacher knows and understands
how to operate technologies such as specific tools, software,
and hardware and integrate them into a learning process. With
this technology, learning becomes more meaningful and
comprehensive. Next, CK is teachers‘ knowledge of the
content they teach. The knowledge related to the material
taught must be valid so that what is delivered to students is
also valid. The last is PK, which is knowledge of learning
approaches, models, and strategies and their syntax. In
addition, this knowledge is also related to various learning
administrations that can help improve the quality of learning.
Apart from the three main components, Fig. 2 also
comprises combinations of two components: the
knowledge of content and technology (TCK), the knowledge
of technology and pedagogy (TPK), and the knowledge of
pedagogy and content (PCK). TCK is teachers' knowledge of
integrating technology into the content taught, such as how
to visualize mathematical objects using computer software in
mathematics. Next, TPK is teachers' knowledge of integrating
technology into their pedagogical knowledge, such as
utilizing PowerPoint in an active learning-based process. The
last is PCK, teachers' knowledge of teaching the material well
based on a particular learning approach, model, or strategy.
Fig. 2. The TPACK framework.
Koehler then united these three combinations into the
technological-pedagogical-content knowledge (TPACK)
framework [43]. More importantly, the framework reflects the
complex context on which teachers' actions rely [44].
Schmidt et al. [45] define TPACK as a helpful
framework for thinking about what knowledge teachers must
have to integrate technology into their teaching; as a
framework for measuring teaching knowledge, TPACK could
potentially impact the type of training and professional
development experiences designed for both pre-service and
in-service teachers. The
same notion is also conveyed by Niess [46], for whom TPACK
is principally an integration of knowledge of the
subject matter, technology, and teaching-learning. TPACK
requires an understanding of the conceptions of using
technologies such as (1) pedagogical techniques that use
technology in constructive ways to teach content, (2)
knowledge of how to make initially tricky concepts more
accessible for students to understand, (3) knowledge of
students' prior knowledge and theories of epistemology, and (4)
knowledge of how technology can be used to build existing
knowledge and evolve it into a new epistemology or
strengthen the old epistemology [47]. Based on this definition,
teachers with TPACK can combine the three elements
(pedagogical knowledge, content knowledge, and
technological knowledge) in their teaching to simplify the
complexity of a concept so that it
is easy for students to understand. The teachers can establish
effective solutions, pointing to an adaptable, pragmatic,
in-depth, and comprehensive understanding of instructional
activities with technology [43].
In mathematics education, the TPACK framework has been
widely studied concerning how pre-service teachers can
integrate technology to deliver mathematical concepts in the
classroom. Niess‘s research on TPACK in pre-service
mathematics teachers examines four components of
professional development for pre-service mathematics
teachers [48]. Such components are: (a) an overarching
conception of teaching mathematics with technology, (b)
instructional strategies and representation for teaching
mathematics with technologies, (c) students‘ understanding,
thinking, and learning in mathematics with technology, and (d)
mathematics curriculum and curricular materials. From these
four components, it can be concluded that a mathematics
teacher—including pre-service mathematics teachers
(PSMTs)—must be able to integrate technology as part of the
implementation of the learning process—including the
implementation of learning approach and assessment—to
teach mathematical concepts more comprehensively. For
example, suppose PSMTs practice teaching the concept of the
graph of a quadratic function to junior high school students
using the Problem-Based Learning (PBL) model integrated
with GeoGebra. To choose GeoGebra as the technology to
integrate into this teaching practice, they must already have
Technological Knowledge (TK) of the characteristics of
GeoGebra and how well they master it. They relate this
knowledge to Pedagogical Knowledge (PK): whether the
GeoGebra software can be integrated into the PBL model, and
how skilled the students are at operating GeoGebra.
Furthermore, their technological knowledge is developed
again with the Content Knowledge (CK) of a quadratic
function, which in this context concerns whether GeoGebra is
appropriate for visualizing the quadratic function. Finally,
combining this knowledge, they can adequately teach the
concept of quadratic function graphs through the PBL model
integrated with GeoGebra.
_C._ _Rationale and the Purpose of the Study_
One of the essential aspects in measuring the PSMT‘s
TPACK in integrating technology during their teaching
practice is the instruments used by the researchers. In a study,
the research instrument determines the quality of the research
methodology [49]. Therefore, there needs to be a study
related to what instruments were used by previous researchers
in measuring the PSMTs‘ TPACK skills, where one of the
ways is to conduct a systematic literature review study to
examine data and findings of other authors relative to a
specified research question or questions [50]. Previous
researchers have tried to study TPACK and pre-service
teachers in a systematic literature review [51]. A systematic
literature review was conducted on 37 research articles from
ERIC, Scopus, and Web of Science databases from 2010 to
2020. The study examined the treatment of technologies that
initial teacher education offers to early childhood and primary
education pre-service teachers facing their practicum
experiences. Nuangchalerm [52] conducted a systematic
literature review of 11 research articles collected from the
ASEAN Citation Index (ACI). The study identified and
summarized the features of TPACK in ASEAN literature.
Wang and Schmidt-Crawford et al. [53] conducted a
systematic literature review of 88 research articles collected
from ERIC, PsycINFO, and Mendeley TPACK Research
Group from 2006 to 2015. This study analyzed pre-service
teachers‘ TPACK development organized around five
research methods (self-report measures, open-ended
questionnaires, performance assessments, interviews, and
observations). However, from those studies, the subjects
studied were not specific to Pre-Service Mathematics
Teachers (PSMTs). An SLR that does examine TPACK
and PSMTs is that of Yigit [54]. This study
analyzed 45 articles from databases such as ERIC,
JSTOR-Scholarly Journal Archive, and PsychINFO.
However, Yigit [54] focused only on identifying PSMTs‘
development of the components of the TPACK framework,
their perspectives for their future teaching, how their
development of TPACK can be measured, and strategies to
develop their TPACK. Therefore, based on previous
empirical studies, this systematic literature review examines
instruments used to measure the PSMTs‘ TPACK skills in
integrating technology during teaching practice. The findings
of this study are expected to be a reference for stakeholders in
determining policies related to improving the skills of PSMTs
in integrating technology during teaching practice. The
research questions addressed in the study are as follows:
1) What kind of instruments are used to measure the
PSMTs‘ TPACK ability?
2) Which references are used to develop measurement
instruments for the PSMTs‘ TPACK in the technology
integration?
3) What other frameworks are combined with the
TPACK framework?
II. METHOD
This study uses a systematic literature review model to see
the factors influencing PSMTs in integrating technology
during teaching practice. Nightingale suggests that the first
stage of conducting SLR is developing a protocol that clearly
defines [55]: (1) the aims and objectives of the review, (2) the
inclusion and exclusion criteria for studies, (3) how the study
will be identified, and (4) the plan of analysis. Among those
four definitions, the second point is the most critical in
determining whether the SLR is well conducted. Nightingale
uses six inclusion criteria which are (1) type of study, (2) type
of participants, (3) type of intervention, (4) comparison, (5)
outcome measures, and (6) other aspects related to the
characteristics of the study [55]. To ensure that the protocol is
well conducted, Moher et al. [56] suggest
the concept of PRISMA (Preferred Reporting Items for
Systematic Reviews and Meta-Analyses), which consists of
four stages of review, namely identification, screening,
eligibility, and inclusion (see Fig. 4 for the PRISMA steps in
this study).
_A._ _Search Identification_
The identification stage of this study was carried out by
determining the keywords used to browse the needed research
articles. Articles that best fit the research objectives come
from a reputable database, a suitable range of publication
years, and the PICO principle—an abbreviation of Participant,
Intervention, Comparison, and Outcome—as used by
Mamédio et al. [57]. The databases used in this study are
Scopus and Web of Science (WoS), covering 2012 to 2022.
Both databases index high-quality journals that publish
high-quality research articles. In addition, Burnham argues
that WoS surpasses Scopus in depth of coverage, as the WoS
database goes back to 1945 while Scopus goes back to 1966
[58]. However, the databases complement each other, as
neither resource is all-inclusive. The databases were
prominent in educational technologies, and the publications
found in these databases were scientific articles [59]. The next
step is determining PICO, which enables the researchers to
identify keywords for the systematic review in the various
databases [60]. See Table II for the chosen keywords for each
PICO component. Keywords defined in Table II are then used
to find the desired research article using Boolean Operators
such as AND and OR (see Fig. 3 for the search sample in
Scopus). The articles were searched using Publish or Perish
(PoP) software [61]. At this stage, there were 1,807 articles
from the two databases.
Fig. 3. Sample of the search strategy.
_B._ _Article Screening_
This stage involves excluding research articles that are not of
the desired publication type. Therefore, articles of the
proceedings, review article, and book chapter types are deleted
from the list. Proceeding-type articles are excluded since this
type has a relatively limited scientific impact, their relative
importance is shrinking, and they become obsolete faster than
the scientific literature [62]. Next, review articles are also
excluded since these articles do not convey the research
results carried out empirically [63].
TABLE II: KEYWORDS BASED ON PICO PRINCIPLES
PICO Aspect | Keywords
Participants | "pre-service mathematics teachers", "prospective mathematics teachers"
Intervention | "TPACK", "TPCK", "Technological, Pedagogical, Content Knowledge"
Comparison | "factors"
Outcome | "Technology integration"
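As an illustration of how such keywords combine with Boolean operators, the sketch below (a hypothetical helper, not the study's actual query shown in Fig. 3) joins the synonyms within each PICO group with OR and the groups themselves with AND:

```python
# Sketch: build a Boolean search string from PICO keyword groups.
# The group/term lists mirror Table II; the helper name is illustrative only.

pico_keywords = {
    "Participants": ["pre-service mathematics teachers",
                     "prospective mathematics teachers"],
    "Intervention": ["TPACK", "TPCK",
                     "Technological, Pedagogical, Content Knowledge"],
    "Outcome": ["Technology integration"],
}

def build_query(groups):
    """Join synonyms within a group with OR, then join groups with AND."""
    clauses = []
    for terms in groups.values():
        quoted = " OR ".join(f'"{t}"' for t in terms)
        clauses.append(f"({quoted})")
    return " AND ".join(clauses)

query = build_query(pico_keywords)
print(query)
```

A string of this shape can be pasted into the advanced-search field of Scopus or WoS, although each database adds its own field codes (e.g., title/abstract restrictions) that are not shown here.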
Besides the article type aspect, the exclusion criteria are
also based on the language used. At this stage, this research
selects only articles written in English. English is an
international language, making it easier for researchers to
analyze and synthesize. The last criterion is excluding
duplicated articles; because this study uses two international
databases, duplicates might be found. Based on
these criteria, 666 articles were excluded, leaving 1,141
articles.
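The screening rules above (publication type, language, deduplication) amount to a simple record filter; the following is a minimal sketch with toy records, not the actual set of 1,807 retrieved articles:

```python
# Sketch of the screening stage: drop unwanted publication types,
# non-English records, and duplicates (toy records, not the real dataset).

EXCLUDED_TYPES = {"proceedings", "review", "book chapter"}

def screen(records):
    seen_titles = set()
    kept = []
    for rec in records:
        if rec["type"] in EXCLUDED_TYPES:
            continue
        if rec["language"] != "English":
            continue
        if rec["title"] in seen_titles:  # duplicate across the two databases
            continue
        seen_titles.add(rec["title"])
        kept.append(rec)
    return kept

records = [
    {"title": "A", "type": "article", "language": "English"},
    {"title": "A", "type": "article", "language": "English"},  # duplicate
    {"title": "B", "type": "review", "language": "English"},
    {"title": "C", "type": "article", "language": "Turkish"},
    {"title": "D", "type": "article", "language": "English"},
]

print([r["title"] for r in screen(records)])  # ['A', 'D']
```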
Fig. 4. Design of PRISMA steps.
_C._ _Article Eligibility and Inclusion_
The eligibility stage is achieved by selecting the articles
based on the abstract and title. Titles that involve
pre-service teachers in general but do not explicitly deal with
PSMTs are not selected at this stage. One example is a
research article from Baran and Canbazoglu Bilici et al. [64]
entitled "Investigating the impact of teacher education
strategies on pre-service teachers' TPACK"; the article does
not explicitly involve PSMTs as subjects. Likewise, a study
from Valtonen and Leppänen et al. [65] titled "Fresh
perspectives on TPACK: pre-service teachers' appraisal of
their challenging and confident TPACK areas" also did not
involve PSMTs as subjects in the study. Other excluded
articles do not contain TPACK/TPCK and PSMTs in either the
article title or the abstract, such as the research conducted by
Parra and Raynor et al. [66].
Although it deals with TPACK, it does not involve PSMTs as
the research subject. Furthermore, another study was the
research of Undheim [67], which raised the topic of TPACK
but did not involve PSMTs as the research subject. Based on
the results of the title and abstract-based selection, there were
391 articles eliminated and 40 articles left.
The last step after the eligibility stage is the inclusion stage.
This stage is carried out by analyzing the suitability of each
article with the objectives of the SLR, which is related to the
identification of instruments to assess PSMTs‘ TPACK. From
the 40 articles selected at the eligibility stage, 22 articles were
eliminated for several reasons: the research did not use a
survey [68–72], did not focus on TPACK assessment
instruments [73–84], was of the Design-Based Research type
[85–87], or was a case study [88, 89]. As a result, 17 papers
were included and further analyzed using NVIVO 12. The
fundamental steps are visualized in Fig. 4.
III. RESULT
This section explains the analysis results related to the
research questions. Based on the results of the PRISMA
protocol, 17 articles were obtained (see Table III).
TABLE III: LISTED ARTICLE PROFILE
Authors | Journal | Country | Number of Participants | Research Method
[90] Technology, Pedagogy and Education Ghana 104 Mixed-Method
[91] International Journal of Research in Education and Science (IJRES) Ghana 126 Quantitative
[92] Educational Sciences: Theory & Practice Turkey 52 Mixed-Method
[93] The New Educator USA 3 (sample) Qualitative
[94] International Journal of Technology in Mathematics Education USA 51 Qualitative
[31] Australian Journal of Teacher Education Turkey 71 Mixed-Method
[95] Eurasia Journal of Mathematics, Science and Technology Education Spain 6 Quantitative
[96] Mathematics Education Research Journal Australia 373 Mixed-Method
[97] Australian Educational Computing Australia 18,690 Quantitative
[98] Australasian Journal of Educational Technology Tanzania 22 Quantitative
[99] Educational Technology & Society Turkey 427 Quantitative
[100] International Journal of Mathematical Education in Science and Technology Turkey 33 Qualitative
[101] Educational Sciences: Theory & Practice Turkey 407 Quantitative
[102] Education Sciences USA 175 Quantitative
[103] Contemporary Educational Technology Turkey 340 Quantitative
[104] Interactive Learning Environments Serbia 226 Quantitative
[105] Journal of Research on Technology in Education USA 315 Quantitative
_A._ _Instruments Used to Measure the PSMTs’ TPACK_
Based on the results of the literature analysis conducted on
the 17 articles, six types of instruments were used to measure
the PSMTs‘ TPACK skills: the TPACK questionnaire, lesson
plan rubric, observation form, interview, microteaching
artifact, and other questionnaires. In general, the TPACK
questionnaire is used by 88% of the listed authors, while
another 12% use lesson plan rubrics. In addition, 23% of
the listed authors used more than one instrument to measure
the PSMTs' TPACK skills (see Table IV for details).
TABLE IV: TPACK INSTRUMENTS USED BY PREVIOUS STUDIES
Authors | TPACK Questionnaire | Lesson Plan Rubric | Observation Form | Interview Guidance | Microteaching Artefact | Other Questionnaires
[90] √ √ √ √ (TAC)
[91] √
[92] √ √ √ √ √ (CAMI & SES)
[93] √
[94] √
[31] √ √ √
[95] √
[96] √
[97] √
[98] √
[99] √
[100] √
[101] √
[102] √
[103] √
[104] √
[105] √
Total 15 3 2 2 2 2
Table IV shows the variation of instruments used by the
authors to measure PSMTs‘ TPACK skills, where three
authors use various instruments, namely [31, 90, 92]. Agyei
and Voogt [90] used various instruments, which is inseparable
from their effort to answer the research question: "How do the
techniques used in the course on mathematics instructional
technology affect the technology competencies (attitudes,
knowledge, and abilities) of aspiring math teachers?"
Although they used four instruments, only three are used to
measure the PSMTs' TPACK skills, while the other is
the Teachers' Attitude toward Computers (TAC)
questionnaire adapted from research by Christensen and
Knezek [106]. To answer the research question, they analyzed
technology integration competencies by analyzing evidence
in the PSMTs‘ lesson plans, lesson observation, and
self-reports. To analyze TPACK in the lesson plan, they used
the TPACK Lesson Plan Rubric adapted from the Technology
Integration Assessment Rubric (TIAR) proposed by Harris
and Grandgenett et al. [107]. Next, they adapted the TPACK
Survey developed by Schmidt and Baran et al. [45] by using a
5-point Likert scale format in the questionnaire. One of the
interesting aspects of this study is that [90] classified the
TPACK components into three parts: a technology component
using spreadsheets (TKss); a content component in
mathematics (CKmaths and TPCKmaths); and a pedagogy
component using activity-based learning (PKABL, PCKABL,
TCKABL, and TPKABL). That way, they can distinguish the measurement
aspects of the PSMTs‘ knowledge and skills. The last
instrument used was the TPACK Observation Rubric, adapted
from the TPACK-based Technology Integration Observation
Instrument (TPACK-TIOI) developed by Hofer and
Grandgenett et al. [108]. Adaptations were made so that
TPACK observations could be carried out using
spreadsheet-supported Activity-Based Learning (ABL) in
mathematics; the resulting instrument consists of 20 items
with a 3-point Likert scale.
Next, Aydogan Yenmez and Özpinar [92] used six
instruments in their research, of which only four are used to
measure the PSMTs' TPACK skills. In line with their research
objective, namely to examine the elements of microteaching
as organized within the theoretical framework of TPCK, as
well as the changes pre-service mathematics teachers
encounter within the setting of TPCK, they use four
instruments: observation forms, microteaching videos,
semi-structured interviews, and
self-evaluation forms. At the same time, the two other
instruments are the self-efficacy scale of Computer-Based
Education, adapted from Arslan [109], and the
Computer-Assisted Mathematics Instruction (CAMI)
questionnaire, adapted from a study conducted by Yenilmez
and Sarier [110]. Their observation form is used for peer
evaluation between PSMTs during the teaching practice. The
goal here is to improve the efficacy of microteaching by
requiring pre-service teachers to use the criteria within the
framework of components when assessing each pre-service
teacher. The microteaching videos are used to examine the
change in each pre-service teacher along the axis of TPACK.
Next, the self-evaluation form is given to the PSMTs so they
can evaluate themselves on the TPACK components; this form
consists of 22 questions derived from the observation form.
Lastly, semi-structured interviews explore the data obtained
from the self-evaluation form. Notably, although Aydogan
Yenmez and Özpinar [92] involved seven experts in validating
the instruments, they did not describe the references on which
the instruments were developed, nor how the quantitative
validity analysis was carried out.
Lastly, Kaya and Daǧ [111] used three instruments to
measure 71 Turkish PSMTs‘ TPACK skills in integrating
technology during their teaching practice. The research aims
to analyze PSTs‘ development of TPACK through a course
implementation that was designed and implemented based on
a TPACK framework. They used TPACK surveys,
semi-structured interviews, and microteaching evaluation
scales to answer this goal. The first instrument they used was
the TPACK questionnaire which was adapted from an
instrument developed by Kaya and Daǧ [111]. The
questionnaire showed that the overall sub-domains had alpha
reliability coefficients between 0.77 and 0.88. The second
instrument is a semi-structured interview consisting of six
open-ended questions. This interview aims to investigate the
PSMTs‘ development of TPACK in detail. They asked two
mathematics education teachers to read the questions and
confirm their clarity. The third instrument is the
Microteaching Evaluation Scale (MTES), which was
developed to obtain the required information related to the
microteaching performances of the PSMTs concerning
TPACK and course gains. The MTES was validated by two
researchers who independently evaluated the scale based on
common views.
Other authors were recorded to use only one type of
instrument, namely TPACK surveys [70, 91, 94–97, 99, 101,
103–105, 112]. In addition, two authors who only used rubric
lesson plan instruments as developed by Lyublinskaya and
Kplon-Schilis [113] and Kartal and Çinar [114] were also
recorded. The tendency of the listed authors to use the
TPACK questionnaire to obtain data on the PSMTs‘ TPACK
skills cannot be separated from the nature of the questionnaire
that reaches people quickly, data accuracy, flexibility of time
and place, scalability, and respondent anonymity [115].
_B._ _References Used to Develop the Instruments_
Instruments in a study determine the quality of the
methodology and the research itself. Therefore, an instrument
must have a basis for each of its components. One way is to
adapt existing instruments to the research needs; another is to
develop the necessary instruments based on theory established
in previous research. Since Table IV indicates that the most
widely used instrument is the questionnaire, this section only
focuses on the references used to develop the questionnaires.
Therefore, there are two articles whose instruments will not
be discussed: the research article by Kartal and Çınar [100]
and Lyublinskaya and Kaplon-Schilis [113]. Both articles use
rubric lesson plans as their primary research instruments, so
the number of articles analyzed is 15. Based on the analysis
results of the listed articles, nine previous studies have been
used as a reference for adaptations of the TPACK
questionnaire instrument. In addition, it was also noted that
some authors chose to develop their TPACK questionnaires
according to their research objectives. Fig. 5 illustrates the
proportion of references used by the fifteen listed articles.
Fig. 5. Basis of research questionnaire development.
From Fig. 5, it can be seen that the instrument developed by
Schmidt et al. [45] became the most adapted. However, Fig. 5
also shows that the number of researchers who develop their
own instruments is similar to those who adapt their instruments
from Schmidt et al. [45]. Details of the adapted and
self-developed instruments can be seen in Table V.

TABLE V: DETAILS OF ADAPTED AND SELF-DEVELOPED INSTRUMENTS
Authors | Type | References | Cronbach's Alpha | EFA
[90] A [45] 0.700 Unexplained
[91] A [122, 123] 0.726 Unexplained
[92] DA N/A Unexplained Unexplained
[93] A [117] Unexplained Unexplained
[94] A [118–120] Unexplained Unexplained
[31] A [111] 0.770 √
[95] DA N/A Unexplained Unexplained
[96] DA N/A Unexplained Unexplained
[97] A [116] 0.970 √
[98] A [121] 0.812 Unexplained
[99] A [45] 0.940 √
[101] A [45] 0.890 √
[103] A [124] 0.830 √
[104] DA N/A 0.870 √
[105] A [45] 0.880 √
*A: Adapted; DA: Developed by Author; N/A: Not Applicable; EFA: Exploratory Factor Analysis

Table V shows the type of questionnaire development
(adapted (A) or developed by the author (DA)), the reference
used, the reliability level by Cronbach's Alpha, and the
Exploratory Factor Analysis (EFA). From the aspect of the
type of development, as previously explained, most of the
instruments developed are the result of adaptations from
previous research carried out by Apeanti, Agyei and
Voogt [91, 125]. From the reference aspect, the instrument
developed by Schmidt et al. [45] is the most
adapted compared to other reference instruments. Four
studies [125, 126, 101, 127] adapted the questionnaire
developed by Schmidt et al. [45].
However, none of them explains why they prefer to adopt the
instrument developed by Schmidt et al. [45]. It
may be because the instrument developed by Schmidt et
al. [45] is intended to assess pre-service teachers'
TPACK abilities, the same as the four studies' research
subjects. Besides, four studies [92, 95, 96, 104] developed
their own TPACK questionnaires.

The next aspect is the Cronbach's Alpha reliability
level of the developed instruments. In general, several studies
convey the level of reliability of the instruments developed,
where the minimum recorded level is 0.700 [125]. However,
it was also noted that five studies do not include the level of
reliability of the instruments developed. Interestingly, three
studies developed their TPACK questionnaire instruments
[92, 95, 96], while two others are adapted instruments [70, 94].
In developing research instruments, the internal reliability test
of an instrument (Cronbach‘s Alpha) is critical to verify that
each test item is relevant to the issue under investigation [128].
In addition, in the context of the research article publication,
the delivery of the reliability level of the research instrument
can provide an overview to other researchers related to the
quality of the instrument developed, which indirectly also
describes the quality of the research methodology used and
the results of the research.
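For readers less familiar with these reliability figures, Cronbach's Alpha for a k-item scale is α = (k / (k − 1)) × (1 − Σ item variances / total-score variance). A stdlib-only sketch on toy Likert responses (not data from any of the reviewed studies):

```python
# Sketch: Cronbach's alpha for a small Likert-scale response matrix
# (rows = respondents, columns = items; toy data for illustration only).
from statistics import variance  # sample variance (ddof = 1)

def cronbach_alpha(scores):
    k = len(scores[0])                      # number of items
    items = list(zip(*scores))              # transpose: per-item responses
    item_var_sum = sum(variance(item) for item in items)
    total_var = variance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

scores = [
    [3, 4, 3],
    [4, 5, 4],
    [2, 3, 2],
    [5, 5, 5],
]

alpha = cronbach_alpha(scores)
print(round(alpha, 3))  # well above the 0.700 floor reported in Table V
```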
The last aspect in Table V is conducting exploratory factor
analysis (EFA) for the TPACK questionnaire development. In
theory, factor analysis is a multivariate statistical procedure
with three benefits. It is used to 1) compress a large number of
variables into a smaller set of variables/factors, 2) establish
underlying dimensions between measured variables and latent
constructs, and 3) give valid evidence for self-reporting scales
[129]. Next, EFA is a factor analysis that allows researchers to
explore the main dimensions to generate a theory or model
from a relatively large set of latent constructs often
represented by a set of items [130–132]. Based on this
understanding, the EFA is essential for researchers, especially
in developing the TPACK questionnaire. Since the TPACK
questionnaires in the listed articles are either developed by the
authors themselves or adapted from instruments developed in
previous research, rather than being used entirely as-is, EFA
analysis is vital to do.
Table V shows that of the fifteen articles listed, 53% do not
explicitly report the EFA analysis, three of which are
DA-coded articles; the rest are A-coded articles. The
reporting of the EFA analysis in the development of the
TPACK questionnaire in a research article is important
because it provides and clarifies information related to the
construct validity of the instrument, even when the
instrument is an adaptation from previous research.
For example, Karatas and Tunc et al. [99] stated in their
research article that the TPACK questionnaire they used was
an adaptation of Schmidt and Baran et al. [45] and was
transliterated by Öztürk and Horzum [133]. Next, Karatas and
Tunc et al. [126] added that the instruments they used had
been tested with EFA by Öztürk and Horzum [133] to determine
the construct validity of the instruments. Thus, the
information can indicate the quality of the adapted instrument.
Such reporting becomes mandatory for researchers with DA
codes because they developed the TPACK questionnaire they
used; thus, the questionnaire quality, which affects the
methodology and research results, can be accounted for.
C. Other Framework Measured Besides TPACK
To get a holistic picture of the PSMTs' technological
integration skills during their teaching practice, some
previous researchers tried to combine TPACK with various
frameworks. Based on the listed articles, some frameworks
are integrated with TPACK, namely Teacher Acceptance
towards Computers (TAC), Theory of Planned Behavior
(TPB), Technology Acceptance Model (TAM), Perception
Toward Technology (PTT), SAMR (Substitution,
Augmentation, Modification, Redefinition), PoE (Perception
of Effectiveness) & PoB (Perception of Barriers), and
self-efficacy & PCaE (Perception of Computer-assisted
Education). However, there are still some articles that review
TPACK only. See Fig. 6 for the details of the references of
each additional framework.
Fig. 6. Other Framework Integrated with TPACK.
The decision to integrate other frameworks with TPACK in measuring the skills of the PSMTs in integrating technology is based on the purpose of their research. From the listed articles, several researchers examined the PSMTs' technological integration from the aspect of attitude. Three frameworks appeared in the studies to measure the PSMTs' attitude towards technology, which integrated the TAC framework [90], the TPB [95], and the TAM [104]. The three studies have similarities in formulating questions and research objectives, namely the measurement related to the PSMTs' attitude toward technology. In theory, TAC is a framework used to measure PSTs' attitudes toward technology [106]. Next, TAM—an adaptation of TRA (Theory of Reasoned Action) proposed by Ajzen and Fishbein [134]—is the framework proposed by Davis [135] to measure an individual's acceptance and attitude toward technology. Lastly, TPB is a theory proposed by Ajzen (1991) that aims to measure students'—in this case, the pre-service mathematics teachers'—persistence intentions. Within the TPB framework, a particular component examines individuals' attitudes toward anything. The relationship between the three theories/frameworks lies in measuring individuals' intention toward anything, which in the context of TPACK becomes intention toward technology; each framework has an attitude component. Therefore, it can be understood why the three studies each use one of the frameworks.
In addition to measuring attitude factors, several listed
researchers measured the PSMTs' technological integration
skills from the aspect of perception. In Fig. 6, four types of
perception measurements are recorded through several
theories from previous research, such as (1) PTT (Perception
Toward Technology) proposed by Öksüz and Ak et al. [137];
(2) PoE (Perception of Effectiveness) and PoB (Perception of
Barriers) contained in the Teaching with Technology Instrument
(TTI) adapted and modified from Yidana and Sahin [122, 123];
and (3) self-efficacy perception in computer-based education,
contained in the Self-Efficacy Scale proposed by Arslan
[109]. Perception analysis is essential because how an
individual sees an object can determine how the individual
behaves toward and treats that object [138]. Thus, it
can be concluded that the relationship between PSMTs'
perception, TPACK skills, and technological integration
during teaching practice lies in the PSMTs' willingness to
integrate technology during teaching practice based on how
they perceive technology and how well they master the
TPACK framework. This is seen in the research of Karatas
and Tunc et al. [99], who examined how the PSMTs use
technology through the PTT aspect. Similarly, Apeanti
[91] uses the PoE and PoB aspects in the TTI instrument, and
Aydogan Yenmez and Özpinar et al. [92] use the
Self-Efficacy Scale to see the PSMTs' perception toward
technology use.
Fig. 6 also shows that TPACK can be integrated with other
technology integration frameworks, such as SAMR, by
Caniglia and Meadows [94]. In theory, SAMR is a framework
proposed by Puentedura [23] to facilitate the acquisition of
proficiency in modern technologies. In the context of the
research of Caniglia and Meadows [94], the integration of
TPACK and SAMR is used for particular purposes
corresponding to each framework. TPACK provides a
framework for integrating technology across the curriculum,
while the SAMR model provides insight into how the
digital-based learning media chosen by PSMTs may affect
teaching and learning.
IV. DISCUSSION AND CONCLUSION
Technology integration skills for PSMTs are critical in
successfully implementing their teaching practices. In
addition to helping them learn more effectively and efficiently,
these skills can also help them communicate material more
clearly and validly through visualization or simulation of
abstract mathematical objects. Thus, the effort to measure the
skills of PSMTs in integrating technology into the practice of
teaching mathematics is an excellent first step. However, studies
related to measurement instruments carried out by previous
researchers were deemed necessary to provide insight to
subsequent researchers regarding alternatives and variations
of what instruments could be used in measuring the PSMTs‘
technological integration skills, especially those based on the
TPACK framework. In addition, as explained in the
introduction section, systematic literature review research that
examines PSMTs‘ technological integration skill
measurement instruments from the TPACK framework aspect
is still limited, so the findings of this study can fill in the gaps.
The first concern in this study is the type of instrument used
by the authors. The TPACK questionnaire is the most widely
used instrument for measuring PSMTs‘ technological
integration skills, followed by the rubric lesson plans used by
three authors. Observation form instruments, interview guides,
and microteaching artifacts (such as video) were each used by
an equal number of studies. The ease of using questionnaires in collecting
data is one of the considerations of the listed researchers. This
is in line with the opinion of Jenny and Diesinger [139] that a
self-administered questionnaire, which is simple to use and
has answers that can be mailed, is helpful for large-scale
assessments. Next is the use of the rubric's lesson plan, which
three researchers used, namely [90, 100, 102]. Based on the
analysis of the three articles, it was found that the
measurement of the PSMTs' technological integration skills
through TPACK was carried out while the PSMTs conducted
microteaching or instructional practice, reviewed through the
lesson plans developed by the PSMTs. Therefore, the
instrument is an appropriate alternative technological
integration skill measurement tool. This is in line with what
was done by Kereluik and Casperson et al. [140], where they
used a rubric‘s lesson plan to see the skills of PSTs in
integrating technology in terms of the lesson plan that has
been developed. The last is observation form instruments,
interview guidance, and microteaching artifacts. These three
instruments are supporting instruments to strengthen the
questionnaire used as the main instrument. For example, Agyei
and Voogt [90] used the observation form to deepen the data
obtained from the questionnaire and the rubric's lesson plan,
while Durdu and Dag [31] used interviews and microteaching
artifacts to triangulate and deepen the data obtained from the
distributed questionnaires.
The next aspect is the reference used to develop the
instruments, specifically in the TPACK questionnaire
development. As already explained, the instrument developed
by Schmidt and Baran et al. [45] became the most widely
cited reference for developing the TPACK questionnaire.
Apart from targeting the same research subjects—namely,
pre-service teachers—the instrument developed by Schmidt
and Baran et al. [45] has been statistically tested for both
internal reliability using Cronbach's Alpha and construct
validity with varimax rotation within each knowledge domain.
Several previous researchers who studied TPACK skills at the
pre-service teacher level using questionnaires also adapted the
instrument developed by Schmidt and Baran et al. [45].
Ritzhaupt and Huggins-Manley et al. [141] adapted the
instrument Schmidt developed to measure the TPACK skills
of US PSTs [45]. Next, Tondeur and Scherer [142] also
adapted the TPACK questionnaire developed by Schmidt and
Baran et al. [45] and combined it with the TPACK self-report
scale developed by Scherer and Tondeur et al. [143] to measure
688 Belgian pre-service teachers' TPACK skills through an
online survey. Lastly, Kotzebue [144] adapted the TPACK
questionnaire developed by Schmidt and Baran et al. [45] to
analyze the TPACK skills of 206 Austrian biology PSTs,
combined with a biology-specific self-report. Thus, it can be
concluded that the TPACK questionnaire developed by
Schmidt and Baran et al. [45] is an appropriate reference
instrument for measuring PSTs' TPACK skills.
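The varimax rotation mentioned for the construct-validity check is a standard orthogonal rotation of the factor loading matrix. The NumPy sketch below implements the generic textbook algorithm (not the exact procedure used by Schmidt and Baran et al. [45]); the loading matrix shown is hypothetical.

```python
import numpy as np

def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-6):
    """Orthogonal varimax rotation of an (items x factors) loading matrix."""
    L = np.asarray(loadings, dtype=float)
    n_items, n_factors = L.shape
    R = np.eye(n_factors)       # accumulated rotation matrix
    d = 0.0
    for _ in range(max_iter):
        d_old = d
        B = L @ R
        # SVD of the gradient of the varimax criterion
        u, s, vt = np.linalg.svd(
            L.T @ (B ** 3 - (gamma / n_items) * B @ np.diag((B ** 2).sum(axis=0))))
        R = u @ vt
        d = s.sum()
        if d_old != 0 and d / d_old < 1 + tol:
            break                # converged
    return L @ R

# Hypothetical unrotated loadings: four items on two factors
L = np.array([[0.8, 0.3], [0.7, 0.4], [0.3, 0.9], [0.2, 0.8]])
rotated = varimax(L)
# Communalities (row sums of squares) are preserved by orthogonal rotation
print(np.allclose((rotated ** 2).sum(axis=1), (L ** 2).sum(axis=1)))  # True
```

After rotation, each item tends to load strongly on one factor and weakly on the others, which is what makes the knowledge-domain structure of a TPACK questionnaire easier to interpret.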
Other instrument references, such as the one developed by
Albion and Jamieson-Proctor et al. [116], target the same
subject level, i.e., PSTs. However, their instrument takes the
form of the TPACK Confidence Survey (TCS). The
TPACK-TCS includes items that assess teachers' attitudes
about utilizing ICT, their confidence in using ICT for teaching
and learning tasks (TPACK), their proficiency with ICT, their
Technology Knowledge (TK), and their TPACK Vocational
Self-efficacy. Thus, this instrument can be an alternative to be
adapted to measure the psychological aspects of the PSTs
regarding the TPACK framework. Another alternative to the
TPACK survey instrument reference that can be used is the
one developed by Sahin [124]. This instrument has the same
target level of research subjects, namely pre-service teachers.
However, the question asked is relatively more technical, as
seen in the list of statements on technological knowledge
[124]. At that point, the TK statements developed led to the
technical mastery of computer devices, resulting in many
questions that were not holistic. Examples include ―I know
about communicating through Internet tools (ex., e-mail,
MSN Messenger)‖. This type of question becomes inflexible
because technology will continue to evolve.
In contrast, the TK statements developed by Schmidt
and Baran et al. [45] are more general, such as ―I can learn
_technology easily‖. This makes the instrument developed by_
Schmidt and Baran et al. [45] easier to adapt. Finally, this
section does not discuss the instrument references [111, 117,
121–123] because their authors do not provide accessible
instruments.
The last aspect discussed in this section is the other
frameworks integrated into the TPACK framework to measure
the PSMTs' technological integration skills. The contexts of
perception (PTT, the Self-Efficacy Scale, and Perception of
Effectiveness & Perception of Barriers) and attitude (TAM, TPB,
and TAC) are most often associated with the TPACK framework,
followed by the context of the Technology Integration
Framework (TIF), namely SAMR. Some previous researchers
defined the two terminologies differently in the context of
perception and attitude. According to Allport [145], an
attitude is a mental or neurological state of readiness that is
organized by experience and has a directive or dynamic
impact on the individual‘s behavior toward all objects and
circumstances to which it is linked. Individuals‘ attitudes
affect their decisions, drive their conduct, and influence what
they selectively recall (not always the same as what we hear).
Attitudes come in various strengths, and they, like most things
taught or impacted by experience, may be assessed and
modified [146].
Meanwhile, perception is how organisms interpret and
arrange sensations to form a meaningful experience of their
surroundings [147]. In other words, a person is presented with
a scenario or stimulus. Based on earlier experiences, the
person interprets the inputs as something significant to him or
her. However, what a person thinks or sees may differ
significantly from reality [148]. Based on these two
explanations, it is very natural that TPACK researchers
embed aspects of perception and attitude as part of measuring
individual skills—in the context of this study, PSMTs—in
integrating technology into a learning process. Some previous
studies have also tried to integrate TPACK with the attitudes
embedded in the TPB [142, 149, 150], and perception aspects
[151–154].
On the other hand, SAMR is recorded as a TIF integrated
with TPACK in research by Caniglia and Meadows [94]. In
the study, SAMR was used as a comparison to TPACK.
Whereas TPACK provides a framework for integrating
technology across the curriculum, the SAMR model provides
insight into how the websites chosen by PSTs may affect
teaching and learning. Several previous studies have
combined TPACK and SAMR, such as those conducted by
Hilton [155] using both frameworks to see the effectiveness of
iPad use in future social studies learning.
From all these discussions, it can be concluded that the
TPACK Questionnaire is the most widely used instrument in
previous research related to efforts to measure the PSMTs‘
TPACK skills in integrating technology during teaching
practice. Next, the instrument developed by Schmidt and
Baran et al. [45] was found to be the most adapted by previous
researchers as an alternative instrument to measure the
PSMTs' TPACK skill. Finally, the attitude and perception
contexts are the ones most often integrated with TPACK-based
measurement frameworks.
This study still leaves some space for further research. One
direction concerns the field aspect, because this research
focuses only on pre-service mathematics teachers. Thus,
systematic literature review research can be done on TPACK
instruments used to measure PSTs' technological integration
skills in other fields. It is expected
that the results of this study can provide insight to subsequent
researchers on what instruments can be used to measure
PSMTs‘ TPACK, which research instruments can be used as
references, and what frameworks/factors can be integrated
with TPACK instruments.
CONFLICT OF INTEREST
The authors declare no conflict of interest.
AUTHOR CONTRIBUTIONS
N.I. conducted the data collection and analysis and wrote
the paper; S.H.H. conducted the content and format review;
and R.A.R. conducted the format and content review; all
authors approved the final version.
FUNDING
Universitas Muhammadiyah Surakarta fully supports the
funding of the present study through a Ph.D. scholarship and
a research publication grant.
ACKNOWLEDGMENT
The authors would like to thank the University of
Muhammadiyah Surakarta and the University of Malaya for
helping provide the online literature as the data of this study.
REFERENCES
[1] N. Ishartono, A. Nurcahyo, M. Waluyo, H. J. Prayitno, and M. Hanifah,
―Integrating GeoGebra into the flipped learning approach to improve
students‘ self-regulated learning during the Covid-19 pandemic,‖ _J._
_Math._ _Educ.,_ vol. 13, no. 1, pp. 69–85, 2022, doi:
10.22342/jme.v13i1.pp69-86
[2] A. H. Bhatti, G. R. Laigo, H. M. GebreYohannes, and L. K. Pulipaka,
―Using a blended learning approach in teaching mathematics,‖ in Proc.
_EDULEARN16, 2016, vol. 1, no. July, pp. 1366–1373, doi:_
10.21125/edulearn.2016.1273
[3] A. H. Wahid _et al., ―Effectiveness of Android-based mathematics_
learning media application on student learning achievement,‖ Journal
_of Physics: Conference Series, 2020, vol. 1594, no. 1, doi:_
10.1088/1742-6596/1594/1/012047
[4] M. N. Wangid, H. E. Rudyanto, and Gunartati, ―The use of
AR-assisted storybook to reduce mathematical anxiety on elementary
school students,‖ _Int. J. Interact. Mob. Technol., vol. 14, no. 6, pp._
195–204, 2020, doi: 10.3991/IJIM.V14I06.12285
[5] D. Lindlbauer and A. D. Wilson, ―Remixed reality: Manipulating
space and time in augmented reality,‖ in _Proc._ _Conf. Hum. Factors_
_Comput._ _Syst.,_ vol. 2018-April, pp. 1–13, 2018, doi:
10.1145/3173574.3173703
[6] A. S. Fatimah and S. Santiana, ―Teaching in 21st century:
Students-teachers‘ perceptions of technology use in the classroom,‖
_Scr. J. J. Linguist. English Teach., vol. 2, no. 2, p. 125, 2017, doi:_
10.24903/sj.v2i2.132
[7] S. Widodo and Wahyudin. (2018). Selection of learning media
mathematics for junior school students. _Turkish Online J. Educ._
_Technol.–TOJET._ [Online]. 17(1). pp. 154–160. Available:
http://ezproxy.lib.uconn.edu/login?url=https://search.ebscohost.com/l
ogin.aspx?direct=true&db=eric&AN=EJ1165728&site=ehost-live.
[8] M. Brinkhurst, P. Rose, G. Maurice, and J. D. Ackerman,
―Sustainability consciousness of preservice teachers in Pakistan,‖ Int.
_J. Sustain. High. Educ. Inf., vol. 18, no. 7, pp. 1090–1107, 2017, doi:_
10.1108/IJSHE-11-2016-0218
[9] K. R. Reddy, ―Teaching how to teach: Microteaching (a way to build
up teaching skills),‖ J. Gandaki Med. Coll., vol. 12, no. 01, pp. 65–71,
2019, doi: 10.3126/jgmcn.v12i1.22621
[10] A. Sharzadin et al., ―Teaching internship in math teacher education,‖
_Int. J. Emerg. Technol. Learn., vol. 14, no. 12, pp. 57–70, 2019, doi:_
10.3991/ijet.v14i12.10449
[11] U. Kusmawan. (2017). Online microteaching: A multifaceted
approach to teacher professional development. _J. Interact. Online_
_Learn._ [Online]. 15(1). pp. 42–56. Available:
https://www.ncolr.org/jiol/issues/pdf/15.1.3.pdf
[12] Y. Seo, ―Self-reflections of pre-service English teachers on
microteaching experiences,‖ English Lang. Teach., vol. 32, no. 3, pp.
1–20, 2020, doi: 10.17936/pkelt.2020.32.3.1
[13] J. Yuan, W. Zhang, and Q. Wang, ―Microteaching based on internet
and multimedia technology,‖ in Proc. 8th Int. Conf. Comput. Sci. Educ.
_ICCSE_ _2013,_ no. Iccse, 2013, pp. 885–887, doi:
10.1109/ICCSE.2013.6554035
[14] G. O. D. Yasemin, ―Science teacher trainees microteaching
experiences: A focus group study,‖ Educ. Res. Rev., vol. 11, no. 16, pp.
1473–1493, 2016, doi: 10.5897/err2016.2892
[15] M. Mitchelmore and P. White, ―Abstraction in mathematics learning,‖
in _Proc._ _the 28th Conference of the International Group for the_
_Psychology of Mathematics Education, 2004, vol. 3, pp. 329–336, doi:_
10.1007/springerreference_226248
[16] C. Huan, C. C. Meng, and M. Suseelan, ―Mathematics learning from
concrete to abstract (1968-2021): A bibliometric analysis,‖ _Particip._
_Educ._ _Res.,_ vol. 9, no. 4, pp. 445–468, 2022, doi:
10.17275/per.22.99.9.4
[17] G. Lakoff and R. E. Nunez, Where Mathematics Comes From, New
York: Basic Books, 2002.
[18] M. Noor Kholid _et al.. (2022). Hierarchy of students‘ reflective_
thinking levels in mathematical problem solving. _Acta Sci. [Online]._
24(6). pp. 24–59. Available: 10.17648/acta.scientiae.6883.
[19] J. L. P. Velazquez, ―Brain, behaviour and mathematics: Are we using
the right approaches?‖ Phys. D Nonlinear Phenom., vol. 212, no. 3–4,
pp. 161–182, 2005, doi: 10.1016/j.physd.2005.10.005
[20] Sutama, H. J. Prayitno, S. Narimo, N. Ishartono, and D. P. Sari, ―The
development of student worksheets based on higher order thinking
skill for mathematics learning in junior high school,‖ _J. Phys. Conf._
_Ser., vol. 1776, 2021, doi: 10.1088/1742-6596/1776/1/012032_
[21] ERIC. (2018). Eric Selection Policy January 2016. _ERIC Selection_
_Policy._ [Online]. Available:
https://eric.ed.gov/pdf/ERIC_Selection_Policy.pdf
[22] P. Mishra and M. J. Koehler, ―Technological pedagogical content
knowledge: A framework for teacher knowledge,‖ _Teach. Coll. Rec._
_Voice Scholarsh. Educ., vol. 108, no. 6, pp. 1017–1054, 2006, doi:_
10.1177/016146810610800610
[23] R. Puentedura, ―Learning, technology, and the samr model: Goals,
processes, and practice,‖ Iste, pp. 1–20, 2014.
[24] CAST. (2012). Universal design for learning. [Online]. Available:
https://www.cast.org/impact/universal-design-for-learning-udl
[25] S. Magana, Education’s Moonshot, 2017.
[26] J. Welsh, J. C. Harmes, and R. Winkelman. (2011). Florida‘s new
technology integration matrix. Princ. Leadersh. [Online]. October. pp.
69–72. Available:
https://www.setda.org/wp-content/uploads/2013/12/PLOct11_techtip
s.pdf
[27] R. Kimmons, C. R. Graham, and R. E. West. (2020). The PICRAT
model for technology integration in teacher preparation. _Contemp._
_Issues Technol. Teach. Educ. [Online]. 20(1) pp. 176–198. Available:_
https://citejournal.org/volume-20/issue-1-20/general/the-picrat-model
-for-technology-integration-in-teacher-preparation
[28] M. Roblyer and A. H. Doering, Integrating Educational Technology
_into Teaching, USA: Pearson, 2007._
[29] C. Moersch, ―Levels of Technology implementation (LoTi): A
framework for measuring classroom technology use,‖ _Learn. Lead._
_with Technol., no. 23, pp. 40–42, 1995, doi: 10.1002/ca.10103_
[30] Y. E. Chieng and C. K. Tan, ―A sequential explanatory investigation of
TPACK: Malaysian science teachers‘ survey and perspective,‖ Int. J.
_Inf. Educ. Technol., vol. 11, no. 5, pp. 235–241, 2021, doi:_
10.18178/ijiet.2021.11.5.1517
[31] L. Durdu and F. Dag, ―Pre-service teachers‘ TPACK development and
conceptions through a TPACK-based course,‖ Aust. J. Teach. Educ.,
vol. 42, no. 11, pp. 150–171, 2017, doi: 10.14221/ajte.2017v42n11.10
[32] U. Uluçinar, ―The effects of technology supported UbD based
instructional design training on student teachers‘ technological
pedagogical content knowledge and learning — Teaching
conceptions,‖ _International Online Journal of Education and_
_Teaching, vol. 8, no. 4. pp. 2636–2664, 2021._
[33] P. Arya, T. Christ, and W. Wu, ―Patterns of technological pedagogical
and content knowledge in preservice-teachers‘ literacy lesson
planning,‖ Journal of Education and Learning, vol. 9, no. 5. pp. 1–14,
2020.
[34] N. Ishartono, A. Nurcahyo, M. Waluyo, R. A. Razak, S. F. Sufahani,
and M. Hanifah, ―GeoGebra-based flipped learning model: An
alternative panacea to improve student‘s learning independency in
online mathematics learning,‖ JRAMathEdu (Journal Res. Adv. Math.
_Educ., vol. 7, no. 3, 2022, doi: 10.23917/jramathedu.v7i3.18141_
[35] J. Hughes, R. Thomas, and C. Scharber, ―Assessing technology
integration : The RAT—replacement, amplification, and
transformation-framework,‖ in _Proc._ _Society for Information_
_Technology & Teacher Education International Conference, 2006, no._
March, pp. 1616–1620, doi: https://www.learntechlib.org/p/22293/
[36] M. E. King-Sears, ―Facts and fallacies: Differentiation and the general
education curriculum for students with special educational needs,‖
_Support Learn., vol. 23, no. 2, pp. 55–62, 2008, doi:_
10.1111/j.1467-9604.2008.00371.x
[37] D. Muhtadi, Wahyudin, B. G. Kartasasmita, and R. C. I. Prahmana,
―The integration of technology in teaching mathematics,‖ _J. Phys._
_Conf._ _Ser.,_ vol. 943, no. 1, 2018, doi:
10.1088/1742-6596/943/1/012020
[38] L. S. Shulman, ―Those who understand knowledge,‖ Educ. Res., vol.
15, no. 2, pp. 4–14, 1986.
[39] J. Gess-Newsome, ―Pedagogical content knowledge: An introduction
and orientation,‖ in _Proc._ _Examining Pedagogical Content_
_Knowledge, Springer, 1999, pp. 3–17._
[40] A. D. Thompson and P. Mishra, ―Editors‘ remarks: Breaking news:
TPCK becomes TPACK!‖ J. Comput. Teach. Educ., vol. 24, no. 2, pp.
38–64, 2007.
[41] C. S. Chai, J. H. L. Koh, and C.-C. Tsai. (2010). Facilitating preservice
teachers‘ development of Technological, Pedagogical, and Content
Knowledge (TPACK). Educ. Technol. Soc., [Online]. 13(4). pp. 63–73.
Available: http://www.ifets.info/
[42] J. Tondeur, A. Ottenbreit-leftwich, J. Voogt, G. Sang, and J. Tondeur,
―Preparing pre-service teachers to integrate technology in education: A
synthesis of qualitative evidence,‖ Comput. Educ., pp. 1–11, 2012, doi:
10.1016/j.compedu.2011.10.009
[43] M. J. Koehler, P. Mishra, and W. Cain, ―What is Technological
Pedagogical Content Knowledge (TPACK)?‖ J. Educ., vol. 193, no. 3,
pp. 13–19, 2009, doi: 10.1177/002205741319300303
[44] R. Voithofer and M. J. Nelson, ―Teacher Educator technology
integration preparation practices around TPACK in the United States,‖
_J. Teach. Educ., vol. 72, no. 3, pp. 314–328, 2021, doi:_
10.1177/0022487120949842
[45] D. A. Schmidt, E. Baran, A. D. Thompson, P. Mishra, M. J. Koehler,
and T. S. Shin. (2009). Technological pedagogical content knowledge
(TPACK): The development and validation of an assessment
instrument for preservice teachers. [Online]. Available: www.iste.org
[46] M. L. Niess, ―Investigating TPACK: Knowledge growth in teaching
with technology,‖ J. Educ. Comput. Res., vol. 44, no. 3, pp. 299–317,
2011, doi: 10.2190/EC.44.3.c
[47] P. Mishra and M. J. Koehler, ―Introducing technological pedagogical
content knowledge,‖ presented at Annu. Meet. Am. Educ. Res. Assoc.,
pp. 1–16, 2008.
[48] M. L. Niess, ―Preparing teachers to teach science and mathematics
with technology: Developing a technology pedagogical content
knowledge,‖ Teach. Teach. Educ., vol. 21, no. 5, pp. 509–523, 2005,
doi: 10.1016/j.tate.2005.03.006
[49] K. Slim, E. Nini, D. Forestier, F. Kwiatkowski, Y. Panis, and J.
Chipponi, ―Methodological index for non-randomized studies
(Minors): Development and validation of a new instrument,‖ _ANZ J._
_Surg.,_ vol. 73, no. 9, pp. 712–716, 2003, doi:
10.1046/j.1445-2197.2003.02748.x
[50] A. Kofod-petersen. (2018). How to do a structured literature review in
computer science. _Researchgate. pp. 1–7. [Online]. Available:_
https://www.researchgate.net/profile/Anders-Kofod-Petersen/publicati
on/265158913_How_to_do_a_Structured_Literature_Review_in_co
mputer_science/links/599a00350f7e9b3edb17cda2/How-to-do-a-Stru
ctured-Literature-Review-in-computer-science.pdf
[51] I. García-Lázaro, J. Conde-Jiménez, and M. P. Colás-Bravo,
―Integration and management of technologies through practicum
experiences: A review in preservice teacher education (2010-2020),‖
_Contemp._ _Educ._ _Technol.,_ vol. 14, no. 2, 2022, doi:
10.30935/cedtech/11540
[52] P. Nuangchalerm, ―TPACK in ASEAN perspectives: Case study on
Thai pre-service teacher,‖ _International Journal of Evaluation and_
_Research in Education, vol. 9, no. 4. pp. 993–999, 2020, doi:_
10.11591/ijere.v9i4.20700
[53] W. Wang, D. Schmidt-Crawford, and Y. Jin, ―Preservice teachers‘
TPACK development: A review of literature,‖ J. Digit. Learn. Teach.
_Educ.,_ vol. 34, no. 4, pp. 234–258, 2018, doi:
10.1080/21532974.2018.1498039
[54] M. Yigit, ―A review of the literature: How pre-service mathematics
teachers develop their technological, pedagogical, and content
knowledge,‖ Int. J. Educ. Math. Sci. Technol., vol. 2, no. 1, pp. 26–35,
2014, doi: 10.18404/ijemst.96390
[55] A. Nightingale, ―A guide to systematic literature reviews,‖ _Surgery,_
vol. 27, no. 9, pp. 381–384, 2009, doi: 10.1016/j.mpsur.2009.07.005
[56] D. Moher, A. Liberati, J. Tetzlaff, D. G. Altman, and P. Group. (2009).
Preferred reporting items for systematic reviews and meta-analyses:
The PRISMA statement. _Ann. Intern. Med. [Online]. 151(4). pp._
246–269. Available: www.annals.org
[57] C. Mamédio, C. Santos, C. Andrucioli De Mattos Pimenta, M. Roberto,
and C. Nobre. (2009). The pico strategy for the research question
construction and evidence. Rev Latino-am Enferm. [Online]. 15(3).
pp. 508–511. Available: www.eerp.usp.br/rlae
[58] J. F. Burnham, ―Scopus database: A review,‖ _Biomedical Digital_
_Libraries, vol. 3, Mar. 08, 2006, doi: 10.1186/1742-5581-3-1_
[59] H. I. Haseski, U. Ilic, and U. Tugtekin, ―Defining a new 21st century
skill-computational thinking: Concepts and trends,‖ _Int. Educ. Stud.,_
vol. 11, no. 4, p. 29, Mar. 2018, doi: 10.5539/ies.v11n4p29
[60] A. Cooke, D. Smith, and A. Booth, ―Beyond PICO: The SPIDER tool
for qualitative evidence synthesis,‖ Qual. Health Res., vol. 22, no. 10,
pp. 1435–1443, Oct. 2012, doi: 10.1177/1049732312452938
[61] S. Rawat and S. Meena. (2014). ―Publish or perish: Where are we
heading?‖ _J. Res. Med. Sci. [Online]. 19(2). pp. 87–89. Available:_
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3999612/
[62] C. Usée, V. Larivière, and É. Archambault, ―Conference proceedings
as a source of scientific information: A bibliometric analysis,‖ J. Am.
_Soc. Inf. Sci. Technol., vol. 59, no. 11, pp. 1776–1784, 2008, doi:_
10.1002/asi.20888
[63] J. Short, ―The art of writing a review article,‖ Journal of Management,
vol. 35, no. 6. pp. 1312–1317, Nov. 2009, doi:
10.1177/0149206309337489
[64] E. Baran, S. Canbazoglu Bilici, A. Albayrak Sari, and J. Tondeur,
―Investigating the impact of teacher education strategies on preservice
teachers‘ TPACK,‖ Br. J. Educ. Technol., vol. 50, no. 1, pp. 357–370,
Jan. 2019, doi: 10.1111/bjet.12565
[65] T. Valtonen, U. Leppänen, M. Hyypiä, E. Sointu, A. Smits, and J.
Tondeur, ―Fresh perspectives on TPACK: Pre-service teachers‘ own
appraisal of their challenging and confident TPACK areas,‖ Educ. Inf.
_Technol., vol. 25, no. 4, pp. 2823–2842, Jul. 2020, doi:_
10.1007/s10639-019-10092-4
[66] J. Parra, C. Raynor, A. Osanloo, and R. O. Guillaume, ―(Re) Imagining
an undergraduate integrating technology with teaching course,‖
_TechTrends,_ vol. 63, no. 1, pp. 68–78, 2019, doi:
10.1007/s11528-018-0362-x
[67] M. Undheim, ― ‗We Need Sound Too!‘ children and teachers creating
multimodal digital stories together,‖ Nord. J. Digit. Lit., vol. 15, no. 3,
pp. 165–177, 2020, doi: 10.18261/issn.1891-943x-2020-03-03
[68] F. G. Bate, L. Day, and J. Macnish, ―Conceptualising changes to
pre-service teachers‘ knowledge of how to best facilitate learning in
mathematics: A TPACK inspired initiative,‖ Aust. J. Teach. Educ., vol.
38, no. 5, pp. 14–30, 2013, doi: 10.14221/ajte.2013v38n5.3
[69] R. W. D. S. Bueno, D. Lieban, and C. C. Ballejo, ―Mathematics
teachers‘ TPACK development based on an online course with
Geogebra,‖ Open Educ. Stud., vol. 3, no. 1, pp. 110–119, 2021, doi:
10.1515/edu-2020-0143
[70] F. C. Bonafini and Y. Lee, ―Investigating prospective teachers‘
TPACK and their use of mathematical action technologies as they
create screencast video lessons on iPads,‖ TechTrends, vol. 65, no. 3,
pp. 303–319, 2021, doi: 10.1007/s11528-020-00578-1
[71] S. Casler-Failing, ―Learning to teach mathematics with robots:
Developing the ‗t‘ in technological pedagogical content knowledge,‖
_Res. Learn. Technol., vol. 29, 2021, doi: 10.25304/RLT.V29.2555_
[72] Z. M. Yan, C. S. Chai, and H. J. So, ―Creating tools for inquiry-based
mathematics learning from technological pedagogical content
knowledge perspectives: Collaborative design approach,‖ Australas. J.
_Educ. Technol., vol. 34, no. 4, pp. 57–71, 2018, doi:_
10.14742/ajet.3755
[73] F. Saltan and K. Arslan, ―A comparison of in-service and pre-service
teachers‘ technological pedagogical content knowledge
self-confidence,‖ _Cogent Educ., vol. 4, no. 1, 2017, doi:_
10.1080/2331186X.2017.1311501
[74] [O. L. Ng and T. Chan, ―In-service mathematics teachers‘ video-based
noticing of 3D printing pens‘ in action,‘‖ Br. J. Educ. Technol., vol. 52,
no. 2, pp. 751–767, 2021, doi: 10.1111/bjet.13053
[75] C. Reading and H. Doyle, ―Teacher educators as learners: Enabling
learning while developing innovative practice in ict-rich education,‖
_Aust. Educ. Comput., vol. 27, no. 3, pp. 109–116, 2013._
[76] A. Kaplon-Schilis and I. Lyublinskaya, ―Analysis of relationship
between five domains of TPACK framework: TK, PK, CK math, CK
science, and TPACK of pre-service special education teachers,‖
_Technol. Knowl. Learn., vol. 25, no. 1, pp. 25–43, 2020, doi:_
10.1007/s10758-019-09404-x
[77] R. A. Filho, ―Pre-service teachers‘ knowledge: Analysis of teacher
education situation based on TPACK,‖ _Math. Enthus., vol. 19, no. 2, pp._
594–631, 2022.
[78] K. Bergeson and B. Beschorner, ―Modeling and scaffolding the
technology integration planning cycle for pre-service teachers: A case
study,‖ Int. J. Educ. Math. Sci. Technol., vol. 8, no. 4, pp. 330–341,
2020, doi: 10.46328/IJEMST.V8I4.1170
[79] N. Alrwaished, A. Alkandari, and F. Alhashem, ―Exploring in-and
pre-service science and mathematics teachers‘ technology, pedagogy,
and content knowledge (TPACK): What next?‖ Eurasia J. Math. Sci.
_Technol. Educ., vol. 13, no. 9, pp. 6113–6131, 2017, doi:_
10.12973/eurasia.2017.01053a
[80] S. Kim, ―Technological, pedagogical, and content knowledge (TPACK)
and beliefs of preservice secondary mathematics teachers: Examining
the relationships,‖ Eurasia J. Math. Sci. Technol. Educ., vol. 14, no.
10, pp. 1–24, 2018, doi: 10.29333/ejmste/93179
[81] G. G. Gonzales and R. R. Gonzales, ―Introducing IWB to preservice
mathematics teachers: An evaluation using the TPACK framework,‖
_Cypriot J. Educ. Sci., vol. 16, no. 2, pp. 436–450, 2021, doi:_
10.18844/CJES.V16I2.5619
[82] K. Larkin, R. Jamieson-Proctor, and G. Finger, ―TPACK and
pre-service teacher mathematics education: defining a signature
pedagogy for mathematics education using ICT and Based on the
metaphor ‗Mathematics is a Language,‘‖ _Comput. Sch., vol. 29, no._
1–2, pp. 207–226, 2012, doi: 10.1080/07380569.2012.651424
[83] D. Akyuz, ―Measuring technological pedagogical content knowledge
(TPACK) through performance assessment,‖ Comput. Educ., vol. 125,
no. May 2017, pp. 212–225, 2018, doi:
10.1016/j.compedu.2018.06.012
[84] H. S. Tokmak, L. Incikabi, and S. Ozgelen, ―An investigation of
change in mathematics, science, and literacy education pre-service
teachers‘ TPACK,‖ _Asia-Pacific Educ. Res., vol. 22, no. 4, pp._
407–415, 2013, doi: 10.1007/s40299-012-0040-2
[85] C. S. Chai, Y. Rahmawati, and M. S. Y. Jong, ―Indonesian science,
mathematics, and engineering preservice teachers‘ experiences in
stem-tpack design-based learning,‖ Sustain., vol. 12, no. 21, pp. 1–14,
2020, doi: 10.3390/su12219050
[86] L. F. Gutierrez-Fallas and A. Henriques, ―Prospective mathematics
teachers‘ TPACK in a context of a teacher education experiment/o
TPACK de Futuros Professores de matematica numa experiencia de
formacao,‖ Rev. Latinoam. Investig. en Matemática Educ., vol. 23, no.
2, pp. 175–203, 2020.
[87] M. Yigit. (2014). A review of the literature: How pre-service
mathematics teachers develop their technological, pedagogical, and
content knowledge. _International Journal of Education in_
_Mathematics, Science and Technology, [Online]. 2(1). pp. 26–35._
Available: https://eric.ed.gov/?id=EJ1066346
[88] D. D. Agyei and J. Voogt, ―Developing technological pedagogical
content knowledge in pre-service mathematics teachers through
collaborative design,‖ Australas. J. Educ. Technol., vol. 28, no. 4, pp.
547–564, 2012, doi: 10.14742/ajet.827
[89] İ. Saralar, M. Işıksal-Bostan, and D. Akyüz, ―The evaluation of a
pre-service mathematics teacher‘s TPACK: A case of 3D shapes with
GeoGebra,‖ Int. J. Technol. Math. Educ., vol. 25, no. 2, pp. 3–22, 2021,
doi: 10.1564/tme
[90] D. D. Agyei and J. M. Voogt, ―Pre-Service teachers‘ TPACK
competencies for spreadsheet integration: Insights from a
mathematics-specific instructional technology course,‖ _Technol._
_Pedagog. Educ., vol. 24, no. 5, pp. 605–625, 2015._
[91] W. O. Apeanti, ―Contributing factors to pre-service mathematics
teachers‘ e-readiness for ICT integration contributing factors to
pre-service mathematics teachers‘ e-readiness for ICT integration,‖ Int.
_J. Res. Educ. Sci., vol. 2, no. 1, pp. 223–238, 2016._
[92] A. Aydogan Yenmez, İ. Özpinar, and S. Gökçe, ―Examining changes in
preservice mathematics teachers‘ technological pedagogical content
knowledge from their microteaching,‖ Educ. Sci. Theory Pract., vol.
17, no. 5, pp. 1699–1732, 2017, doi: 10.12738/estp.2017.5.0454
[93] F. C. Bonafini and Y. Lee, ―Portraying mathematics pre-service
teachers‘ experience of creating video lessons with portable interactive
whiteboards through the TPACK,‖ The New Educator, vol. 17, no. 4.
pp. 327–352, 2021.
[94] J. Caniglia and M. Meadows, ―Pre-service mathematics teachers‘ use
of web resources,‖ _Int. J. Technol. Math. Educ., vol. 25, no. 3, pp._
17–35, 2021, doi: 10.1564/tme_v25.3.02
[95] M. J. Gonzalez and I. González-Ruiz, ―Behavioural intention and
pre-service mathematics teachers‘ technological pedagogical content
knowledge,‖ Eurasia J. Math. Sci. Technol. Educ., vol. 13, no. 3, pp.
601–620, 2017, doi: 10.12973/eurasia.2017.00635a
[96] B. Handal, C. Campbell, and M. Cavanagh, ―Characterising the
perceived value of mathematics educational apps in preservice
teachers,‖ Math. Educ. Res. J., vol. 28, no. 1, pp. 199–221, 2016, doi:
10.1007/s13394-015-0160-0
[97] R. Jamieson-Proctor et al., ―Development of the TTF TPACK survey
instrument,‖ Aust. Educ. Comput., vol. 27, no. 3, pp. 26–35, 2013.
[98] A. Kafyulilo, P. Fisser, J. Pieters, and J. Voogt, ―ICT use in science and
mathematics teacher education in Tanzania: Developing technological
pedagogical content knowledge,‖ Australas. J. Educ. Technol., vol. 31,
no. 4, pp. 381–399, 2015, doi: 10.14742/ajet.1240
[99] I. Karatas, M. P. Tunc, N. Yilmaz, and G. Karaci, ―An investigation of
technological pedagogical content knowledge, self-confidence, and
perception of pre-service middle school mathematics teachers towards
instructional technologies,‖ _Educational Technology & Society, vol._
20, no. 3. pp. 122–132, 2017.
[100]B. Kartal and C. Çınar, ―Preservice mathematics teachers‘ TPACK
development when they are teaching polygons with geogebra,‖ Int. J.
_Math. Educ. Sci. Technol., May 2022, doi:_
10.1080/0020739X.2022.2052197
[101]Z. Kaya, O. N. Kaya, and I. Emre, ―Adaptation of technological
pedagogical content knowledge scale to Turkish,‖ _Educ. Sci. Theory_
_Pract.,_ vol. 13, no. 4, pp. 2367–2375, 2013, doi:
10.12738/estp.2013.4.1913
[102]I. Lyublinskaya and A. Kaplon-Schilis, ―Analysis of differences in the
levels of TPACK: Unpacking performance indicators in the TPACK
levels rubric,‖ _Educ._ _Sci.,_ vol. 12, no. 2, 2022, doi:
10.3390/educsci12020079
[103]K. Özgen and S. Narli, ―Intelligent data analysis of interactions and
relationships among technological pedagogical content knowledge
constructs via rough set analysis,‖ _Contemporary Educational_
_Technology, vol. 11, no. 1. pp. 77–98, 2020._
[104]T. Teo, V. Milutinović, M. Zhou, and D. Banković, ―Traditional vs.
innovative uses of computers among mathematics pre-service teachers
in Serbia,‖ Interact. Learn. Environ., vol. 25, no. 7, pp. 811–827, 2016,
doi: 10.1080/10494820.2016.1189943
[105]J. Zelkowski, J. Gleason, D. C. Cox, and S. Bismarck, ―Developing and
validating a reliable TPACK instrument for secondary mathematics
preservice teachers,‖ _J. Res. Technol. Educ., vol. 46, no. 2, pp._
173–206, 2013, doi: 10.1080/15391523.2013.10782618
[106]R. Christensen and G. Knezek, ―Internal consistency reliabilities for 14
computer attitude scales,‖ J. Technol. Teach. Educ., vol. 8, no. 4, pp.
327–336, 2000.
[107]J. Harris, N. Grandgenett, and M. Hofer. (2009). Testing a
TPACK-based technology integration assessment rubric developing
and assessing TPACK. Technology. [Online]. 2010(1). pp. 3833–3840.
Available: http://www.editlib.org/p/33978
[108]M. J. Hofer, N. Grandgenett, J. Harris, and K. Swan, ―Testing a
TPACK-based technology integration observation rubric,‖ in
_Educational Assessment, Evaluation, and Research Commons, and_
_the Teacher Education and Professional Development Commons,_
10th ed., W&M ScholarWorks, 2011.
[109]A. Arslan. (2006). Self-efficacy scale in relation to computer based
education. Abant İzzet Baysal Univ. Fac. Educ. J. [Online]. 6(1). pp.
191–198. Available: https://search.trdizin.gov.tr/yayin/detay/80295/
[110]K. Yenilmez and Y. Sarier, ―Preservice teachers‘ opinions on computer
based mathematics teaching,‖ in _Proc. International Computer and_
_Instructional Technologies Symposium, 2007, pp. 1184–1024._
[111]S. Kaya and F. Daǧ, ―Turkish adaptation of Technological Pedagogical
Content Knowledge Survey for elementary teachers,‖ _Kuram ve_
_Uygulamada Egit. Bilim., vol. 13, no. 1, pp. 302–306, 2013._
[112]A. Kafyulilo, P. Fisser, J. Pieters, and J. Voogt, ―ICT use in science and
mathematics teacher education in Tanzania: Developing technological
pedagogical content knowledge,‖ Australasian Journal of Educational
_Technology, vol. 31, no. 4. pp. 381–399, 2015._
[113]I. Lyublinskaya and A. Kaplon-Schilis, ―Analysis of Differences in the
levels of TPACK: Unpacking performance indicators in the TPACK
levels rubric,‖ Education Sciences, vol. 12, 2022.
[114]B. Kartal and C. Çinar, ―Examining pre-service mathematics teachers‘
beliefs of TPACK during a method course and field experience,‖
_Malaysian Online Journal of Educational Technology, vol. 6, no. 3._
pp. 11–37, 2018.
[115]P. Cleave. (2021). Advantages of questionnaires in online research.
_Smart_ _Survey._ [Online]. Available:
https://www.smartsurvey.co.uk/blog/advantages-of-questionnaires-in-online-research
[116]P. Albion, R. Jamieson-Proctor, and G. Finger. (2010). Auditing the
TPACK confidence of Australian pre-service teachers: The TPACK
confidence survey (TCS). in Proc. Society for Information Technology
_Teacher Education International Conference 2010. [Online]. 11(3)._
pp. 3772–3779. Available: http://www.editlib.org/p/33969
[117]J. T. DeCuir-Gunby, P. L. Marshall, and A. W. McCulloch,
―Developing and using a codebook for the analysis of interview data:
An example from a professional development research project,‖ Field
_Methods,_ vol. 23, no. 2, pp. 136–155, 2011, doi:
10.1177/1525822X10388468.
[118]CCSS. (2010). High-quality academic standards in mathematics and
English language arts/literacy (ELA). [Online]. Available:
http://www.corestandards.org/about-the-standards/
[119]NCTM. (2001). Principles and Standards for School Mathematics.
_Principles_ _and_ _Standards._ [Online]. Available:
https://www.nctm.org/standards/
[120]ISTE. (2017). Iste standards: Educators. [Online]. Available:
https://www.iste.org/standards/iste-standards-for-teachers
[121]J. Voogt, P. Fisser, N. Pareja Roblin, J. Tondeur, and J. van Braak,
―Technological pedagogical content knowledge — A review of the
literature,‖ J. Comput. Assist. Learn., vol. 29, no. 2, pp. 109–121, 2013,
doi: 10.1111/j.1365-2729.2012.00487.x
[122]J. L. Lambert, ―Technology integration expertise in middle school
social studies teachers: A study of multiplicity in thinking and
practice,‖ North Carolina State University, 2004.
[123]I. Yidana, ―Education curriculum: A survey of two Ghanaian
universities,‖ Ohio University, 2007.
[124]I. Sahin, ―Development of survey of technological pedagogical and
content knowledge (TPACK),‖ Turkish Online J. Educ. Technol., vol.
10, no. 1, pp. 97–105, 2011.
[125]D. D. Agyei and J. M. Voogt, ―Pre-service teachers‘ TPACK
competencies for spreadsheet integration: insights from a
mathematics-specific instructional technology course,‖ _Technol._
_Pedagog. Educ., vol. 24, no. 5, pp. 605–625, 2015, doi:_
10.1080/1475939X.2015.1096822
[126]I. Karatas, M. P. Tunc, N. Yilmaz, and G. Karaci, ―An investigation of
technological pedagogical content knowledge, self-confidence, and
perception of pre-service middle school mathematics teachers towards
instructional technologies,‖ _Educ. Technol. Soc., vol. 20, no. 3, pp._
122–132, 2017.
[127]J. Zelkowski, J. Gleason, D. C. Cox, and S. Bismarck, ―Developing and
validating a reliable TPACK instrument for secondary mathematics
preservice teachers,‖ Journal of Research on Technology in Education,
vol. 46, no. 2. pp. 173–206, 2013.
[128]P. E. L. Marks, B. Babcock, A. H. N. Cillessen, and N. R. Crick, ―The
effects of participation rate on the internal reliability of peer
nomination measures,‖ Soc. Dev., vol. 22, no. 3, pp. 609–622, 2013,
doi: 10.1111/j.1467-9507.2012.00661.x
[129]B. Williams, A. Onsman, and T. Brown. (2010). Exploratory factor
analysis: A five-step guide for novices. J. Emerg. Prim. Heal. Care.
[Online]. 8(3) pp. 1–13. Available:
http://ajp.paramedics.org/index.php/ajp/article/view/93
[130]K. G. Sapnas and R. A. Zeller, ―Minimizing sample size when using
exploratory factor analysis for measurement,‖ J. Nurs. Meas., vol. 10,
no. 2, pp. 135–154, 2002.
[131]M. A. Pett, N. R. Lackey, and J. J. Sullivan, Making Sense of Factor
_Analysis: The Use of Factor Analysis For Instrument Development In_
_Health Care Research, Sage, 2003._
[132]B. Thompson, _Exploratory and Confirmatory Factor Analysis:_
_Understanding Concepts and Applications, vol. 10694, Citeseer,_
2004.
[133]E. Öztürk and M. B. Horzum. (2011). Adaptation of technological
pedagogical content knowledge scale to Turkish. _Ahi Evran_
_Üniversitesi Eğitim Fakültesi Derg. [Online]. 12(3). pp. 255–278._
Available: https://dergipark.org.tr/en/pub/kefad/issue/59494/855137
[134]I. Ajzen and M. Fishbein, ―Attitudes and the attitude-behavior relation:
Reasoned and automatic processes,‖ Eur. Rev. Soc. Psychol., vol. 11,
no. 1, pp. 1–33, 2000, doi: 10.1080/14792779943000116
[135]F. D. Davis, ―A technology acceptance model for empirically
testing new end-user information systems: Theory and results,‖
Massachusetts Institute of Technology, 1980.
[136]I. Ajzen, Theory of Planned Behavior, Academic Press, 1991.
[137]C. Öksüz, Ş. Ak, and S. Uça, ―A perceptions scale for technology use in
the teaching of elementary mathematics,‖ _Yüzüncü Yıl Üniversitesi_
_Eğitim Fakültesi Derg., pp. 270–287, 2009._
[138]O. U. Qiong, ―A brief introduction to perception,‖ Stud. Lit. Lang., vol.
15, no. 4, pp. 18–28, 2017, doi: 10.3968/10055
[139]J. Y. Jenny and Y. Diesinger, ―Validation of a French version of the
Oxford knee questionnaire,‖ Orthop. Traumatol. Surg. Res., vol. 97,
no. 3, pp. 267–271, 2011, doi: 10.1016/j.otsr.2010.07.009
[140]K. Kereluik, G. Casperson, and M. Akcaoglu, ―Coding pre-service
teacher lesson plans for TPACK,‖ in Proc. Soc. Inf. Technol. Teach.
_Educ. Int. Conf. 2010, no. July, pp. 3889–3891, 2010, doi:_
10.13140/RG.2.1.1761.6484
[141]A. D. Ritzhaupt, A. C. Huggins-Manley, K. Ruggles, and M. Wilson,
―Validation of the survey of pre-service teachers‘ knowledge of
teaching and technology: A multi-Institutional sample,‖ J. Digit. Learn.
_Teach._ _Educ.,_ vol. 32, no. 1, pp. 26–37, 2016, doi:
10.1080/21532974.2015.1099481
[142]J. Tondeur, R. Scherer, F. Siddiq, and E. Baran, ―Enhancing
pre-service teachers‘ Technological Pedagogical Content Knowledge
(TPACK): A mixed-method study,‖ Educ. Technol. Res. Dev., vol. 68,
no. 1, pp. 319–343, 2020, doi: 10.1007/s11423-019-09692-1
[143]R. Scherer, J. Tondeur, F. Siddiq, and E. Baran, ―The importance of
attitudes toward technology for pre-service teachers‘ technological,
pedagogical, and content knowledge: Comparing structural equation
modeling approaches,‖ _Comput. Human Behav., vol. 80, pp. 67–80,_
2018, doi: 10.1016/j.chb.2017.11.003
[144]L. von Kotzebue, ―Two is better than one—Examining
biology-specific TPACK and its T-dimensions from two angles,‖ _J._
Res. Technol. Educ., vol. 0, no. 0, pp. 1–18, 2022, doi:
10.1080/15391523.2022.2030268.
[145]G. W. Allport, ―1. Attitudes,‖ Terminology, 1933.
[146]J. Pickens, ―Attitudes and perceptions,‖ in Organizational Behavior in
_Health Care, 2005, pp. 123–136._
[147]P. H. Lindsay and D. A. Norman, Human Information Processing: An
_Introduction to Psychology, Academic press, 2013._
[148]N. Ishartono, I. D. Setyono, A. R. Maharani, and S. Firdaus, ―The
quality of mathematics teaching aids developed by mathematics
pre-service teachers in Indonesia,‖ _J. Varidika, vol. 34, no. 1, pp._
14–27, 2022, doi: 10.23917/varidika.v1i1.18034
[149]J. M. Marban and E. J. Sintema, ―Pre-service teachers‘ TPACK and
attitudes toward integration of ICT in mathematics teaching,‖ _Int. J._
_Technol. Math. Educ., vol. 28, no. 1, pp. 37–46, 2021, doi:_
10.1564/tme
[150]D. Altun, ―Investigating pre-service early childhood education
teachers‘ Technological Pedagogical Content Knowledge (TPACK)
competencies regarding digital literacy skills and their technology
attitudes and usage,‖ Journal of Education and Learning, vol. 8, no. 1.
pp. 249–263, 2019.
[151]P. Redmond and J. Lock, ―Secondary pre-service teachers‘ perceptions
of Technological Pedagogical Content Knowledge (TPACK): What do
they really think?‖ Australasian Journal of Educational Technology,
vol. 35, no. 3. pp. 45–54, 2019.
[152]P. Luik, M. Taimalu, and R. Suviste, ―Perceptions of technological,
pedagogical and content knowledge (TPACK) among pre-service
teachers in Estonia,‖ Educ. Inf. Technol., vol. 23, no. 2, pp. 741–755,
2018, doi: 10.1007/s10639-017-9633-y
[153]V. Reyes, C. Reading, N. Rizk, S. Gregory, and H. Doyle, ―An
exploratory analysis of TPACK perceptions of pre-service science
teachers: A regional Australian perspective,‖ Teach. Train. Prof. Dev.
_Concepts, Methodol. Tools, Appl., vol. 4, pp. 1968–1983, 2018, doi:_
10.4018/978-1-5225-5631-2.ch093
[154]N. Ishartono _et al., ―The role of instructional design in improving_
pre-service and in-service teacher‘s mathematics learning sets skills: A
systematic literature review in Indonesian context,‖ Indones. J. Learn.
_Adv._ _Educ.,_ vol. 5, no. 1, pp. 13–31, 2023, doi:
10.23917/ijolae.v5i1.20377
[155]J. T. Hilton, ―A case study of the application of SAMR and TPACK for
reflection on technology integration into two social studies
classrooms,‖ _Soc. Stud., vol. 107, no. 2, pp. 68–73, 2016, doi:_
10.1080/00377996.2015.1124376
Copyright © 2023 by the authors. This is an open access article distributed
under the Creative Commons Attribution License which permits unrestricted
use, distribution, and reproduction in any medium, provided the original
work is properly cited (CC BY 4.0).
},
{
"paperId": "8a521621abd6ef6476ef2180a201e7874b6d4d12",
"title": "Abstraction in mathematics learning"
},
{
"paperId": "402b4a9cdf7eed7fdeaff4ad1d18d8faae855139",
"title": "The PICO strategy for the research question construction and evidence search."
},
{
"paperId": "70e9bf32ba870128a4e6cc4ad09c6e691667fb14",
"title": "Assessing Technology Integration: The RAT – Replacement, Amplification, and Transformation - Framework"
},
{
"paperId": "c9e151ba8e59422320013d64307a17a94e018a98",
"title": "Scopus database: a review"
},
{
"paperId": "54cc1f2e86d1913521b466cef19d72ed02b6c800",
"title": "Argonaute—a database for gene regulation by mammalian microRNAs"
},
{
"paperId": "80cb8516ba8777bba176c4902eb79ac39a6a2ecd",
"title": "Brain, behaviour and mathematics: Are we using the right approaches?"
},
{
"paperId": "3b8404f22d97947200deba9745cf44b261df2db0",
"title": "Preparing teachers to teach science and mathematics with technology: Developing a technology pedagogical content knowledge."
},
{
"paperId": "82d8d6731dd95b0fd4aeaba93f95eb63b5872eda",
"title": "Technology Integration Expertise in Middle School Social Studies Teachers: A Study of Multiplicity in Thinking and Practice"
},
{
"paperId": "9bac152c18c1e7fefd5ce59c719166ed17553079",
"title": "Methodological index for non‐randomized studies (MINORS): development and validation of a new instrument"
},
{
"paperId": "67d63c86944d3854ff1e7b0119590dceef8a7511",
"title": "Making Sense of Factor Analysis: The Use of Factor Analysis for Instrument Development in Health Care Research"
},
{
"paperId": "7eeeeda99d659b201ff8f099fed38f8441f156be",
"title": "Minimizing Sample Size When Using Exploratory Factor Analysis for Measurement"
},
{
"paperId": "fcf65292084004ecafff29296a33c024ef7a27ce",
"title": "Internal consistency reliabilities for 14 computer attitude scales"
},
{
"paperId": "f79c2686db913efcb6baa47abf287405cea8c6d5",
"title": "Attitudes and the Attitude-Behavior Relation: Reasoned and Automatic Processes"
},
{
"paperId": "386eedde1ad12e725609b47c941843e7245fe0e8",
"title": "Universal Design for Learning"
},
{
"paperId": "3606854c6d294cc59fcea3e9badf013707d933ca",
"title": "Integrating Educational Technology Into Teaching"
},
{
"paperId": null,
"title": "Pre-service teachers‘ knowledge : Analysis of teachers ̳education situation based on TPACK Let us know how access to this document benefits you,‖ Math. Enthus"
},
{
"paperId": "a78956b0db446b112e09c1bc365e1074ec3ae0ab",
"title": "OF TECHNOLOGY SUPPORTED UbD BASED INSTRUCTIONAL DESIGN TRAINING ON STUDENT TEACHERS’"
},
{
"paperId": "249aaa6b80142f72f741c9667279918cfc75dbb5",
"title": "A Sequential Explanatory Investigation of TPACK: Malaysian Science Teachers‟ Survey and Perspective"
},
{
"paperId": null,
"title": "Advantages of questionnaires in online research"
},
{
"paperId": null,
"title": "Self-reflections of pre-service English teachers on microteaching experiences,"
},
{
"paperId": null,
"title": "Prospective mathematics teachers' TPACK in a context of a teacher education experiment/o TPACK de Futuros Professores de matematica numa experiencia de formacao"
},
{
"paperId": "6b95270c87306612358e9dcfa4998e0af08a08e6",
"title": "Investigating the impact of teacher education strategies on preservice teachers' TPACK"
},
{
"paperId": "aabadf8a762342c52e403de266effecf30d58c4d",
"title": "Selection of Learning Media Mathematics for Junior School Students."
},
{
"paperId": null,
"title": "How to do a structured literature review in computer science"
},
{
"paperId": "570deaeed7146d45eda0f86648380b1c48be441c",
"title": "Exploratory and confirmatory factor analysis of the"
},
{
"paperId": "98186058be4dda7f1b44dcf4acde13e14a37c089",
"title": "Online Microteaching: A Multifaceted Approach to Teacher Professional Development."
},
{
"paperId": null,
"title": "Sustainability consciousness of preservice teachers in Pakistan,‖"
},
{
"paperId": null,
"title": "Education's Moonshot"
},
{
"paperId": null,
"title": "standards: Educators"
},
{
"paperId": null,
"title": "Eric Selection Policy"
},
{
"paperId": "0539818492e2a12cfa7f1382138996d97873bc9d",
"title": "ICT Use in Science and Mathematics Teacher Education in Tanzania: Developing Technological Pedagogical Content Knowledge"
},
{
"paperId": null,
"title": "Learning, technology, and the samr model: Goals, processes, and practice,‖ Iste"
},
{
"paperId": "f7333de7582c23a666cdb6358bec9cb15dc871b1",
"title": "Turkish Adaptation of Technological Pedagogical Content Knowledge Survey for Elementary Teachers."
},
{
"paperId": "caeb916cdbf46bbaec2ee88e2aed156a71c2c75f",
"title": "Teacher educators as learners: Enabling learning while developing innovative practice in ICT-rich education."
},
{
"paperId": "e7bb851ba171e3eef8c5e94a5da30bb2905af654",
"title": "Development of the TTF TPACK Survey Instrument"
},
{
"paperId": null,
"title": "Beyond PICO: The SPIDER tool for qualitative evidence synthesis,"
},
{
"paperId": "011c803be51224be7381432f41fb2dcbaa542024",
"title": "DEVELOPMENT OF SURVEY OF TECHNOLOGICAL PEDAGOGICAL AND CONTENT KNOWLEDGE (TPACK)"
},
{
"paperId": "c991294cc19b8151b3a4493d6fe9ec915af0f7aa",
"title": "Testing a TPACK-Based Technology Integration Observation Rubric"
},
{
"paperId": null,
"title": "High-quality academic standards in mathematics and English language arts/literacy (ELA)"
},
{
"paperId": "e21bcb9f955f32ed32020ef2170577ddaa0a4d83",
"title": "A Mixed Method Study"
},
{
"paperId": null,
"title": "Uça, ―A perceptions scale for technology use in the teaching of elementary mathematics,‖ Yüzüncü Yıl Üniversitesi Eğitim Fakültesi Derg"
},
{
"paperId": "cfc5783601e9e5c541611fb5cbb20b8cc0d700dc",
"title": "Introducing Technological Pedagogical Content Knowledge"
},
{
"paperId": "7986865d81826859875a536ddec385bac5244cc8",
"title": "Faculty Perceptions of Technology Integration in the Teacher Education Curriculum: A Survey of Two Ghanaian Universities"
},
{
"paperId": null,
"title": "Self-efficacy scale in relation to computer based education"
},
{
"paperId": null,
"title": "Where Mathematics Comes From , New York:"
},
{
"paperId": "6046e761cc63e1bd572fa0c6cb789ec134caf509",
"title": "\"Principles and Standards for School Mathematics\" in the Classroom."
},
{
"paperId": "6b947956d7251fc41a354fb0b8bbd73217ae04c1",
"title": "Pedagogical Content Knowledge: An Introduction and Orientation"
},
{
"paperId": "db2a381fe11c207d368adf66a54a6cc416f0957b",
"title": "Levels of Technology Implementation (LoTi): A Framework for Measuring Classroom Technology Use."
},
{
"paperId": null,
"title": "Those who understand knowledge,"
},
{
"paperId": "93ea4da5f08cd2c8f29c800e730f6daa227755f7",
"title": "A technology acceptance model for empirically testing new end-user information systems : theory and results"
},
{
"paperId": "292161f17499575234bf1551ca619b17ee984d0a",
"title": "Attitudes and Perceptions"
},
{
"paperId": "81e8a47eb4add46027d408b6fa938ff80e852498",
"title": "Human Information Processing: An Introduction to Psychology"
},
{
"paperId": null,
"title": "Attitudes,‖ Terminology, 1933"
},
{
"paperId": null,
"title": "Knowledge: A Framework for Teacher Knowledge"
},
{
"paperId": "b325b34e95a437dd22dabfdc64b78739e3ec9b10",
"title": "Digital Commons@Georgia Southern Digital Commons@Georgia Southern Learning to Teach Mathematics With Robots: Developing the ‘T’ in Learning to Teach Mathematics With Robots: Developing the ‘T’ in Technological Pedagogical Content Knowledge Technological Pedagogical Content Knowledge"
},
{
"paperId": null,
"title": "‘ remarks : Breaking news : TPCK becomes TPACK !"
}
] | 26,084
|
en
|
[
{
"category": "Medicine",
"source": "external"
},
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Engineering",
"source": "s2-fos-model"
},
{
"category": "Environmental Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/fff39b0143513e3deb350cbc59834d0bf3135439
|
[
"Medicine",
"Computer Science"
] | 0.881329
|
PyFF: A Fog-Based Flexible Architecture for Enabling Privacy-by-Design IoT-Based Communal Smart Environments †
|
fff39b0143513e3deb350cbc59834d0bf3135439
|
Italian National Conference on Sensors
|
[
{
"authorId": "33159542",
"name": "Fatima-Zohra Benhamida"
},
{
"authorId": "153385569",
"name": "Joan Navarro"
},
{
"authorId": "1404353892",
"name": "Oihane Gómez-Carmona"
},
{
"authorId": "1404253617",
"name": "D. Casado-Mansilla"
},
{
"authorId": "1383994527",
"name": "D. López-de-Ipiña"
},
{
"authorId": "34858034",
"name": "A. Zaballos"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"SENSORS",
"IEEE Sens",
"Ital National Conf Sens",
"IEEE Sensors",
"Sensors"
],
"alternate_urls": [
"http://nbn-resolving.de/urn/resolver.pl?urn=urn:nbn:ch:bel-142001",
"http://www.mdpi.com/journal/sensors",
"https://www.mdpi.com/journal/sensors"
],
"id": "3dbf084c-ef47-4b74-9919-047b40704538",
"issn": "1424-8220",
"name": "Italian National Conference on Sensors",
"type": "conference",
"url": "http://www.e-helvetica.nb.admin.ch/directAccess?callnumber=bel-142001"
}
|
The advent of the Internet of Things (IoT) and the massive growth of devices connected to the Internet are reshaping modern societies. However, human lifestyles are not evolving at the same pace as technology, which often derives into users’ reluctance and aversion. Although it is essential to consider user involvement/privacy while deploying IoT devices in a human-centric environment, current IoT architecture standards tend to neglect the degree of trust that humans require to adopt these technologies on a daily basis. In this regard, this paper proposes an architecture to enable privacy-by-design with human-in-the-loop IoT environments. In this regard, it first distills two IoT use-cases with high human interaction to analyze the interactions between human beings and IoT devices in an environment which had not previously been subject to the Internet of People principles.. Leveraging the lessons learned in these use-cases, the Privacy-enabling Fog-based and Flexible (PyFF) human-centric and human-aware architecture is proposed which brings together distributed and intelligent systems are brought together. PyFF aims to maintain end-users’ privacy by involving them in the whole data lifecycle, allowing them to decide which information can be monitored, where it can be computed and the appropriate feedback channels in accordance with human-in-the-loop principles.
|
# sensors
_Article_
## PyFF: A Fog-Based Flexible Architecture for Enabling Privacy-by-Design IoT-Based Communal Smart Environments [†]
**Fatima Zohra Benhamida** 1,2,*, **Joan Navarro** 3, **Oihane Gómez-Carmona** 2, **Diego Casado-Mansilla** 2, **Diego López-de-Ipiña** 2 and **Agustín Zaballos** 3
1 Laboratoire des Méthodes de Conception des Systèmes, Ecole Nationale Supérieure D’Informatique,
Algiers 16309, Algeria
2 DeustoTech, University of Deusto, 48007 Bilbao, Spain; oihane.gomezc@deusto.es (O.G.-C.);
dcasado@deusto.es (D.C.-M.); dipina@deusto.es (D.L.-d.-I.)
3 Grup de Recerca en Internet Technologies & Storage (GRITS), La Salle—Universitat Ramon Llull, C/Quatre
Camins, 30, 08022 Barcelona, Spain; joan.navarro@salle.url.edu (J.N.); agustin.zaballos@salle.url.edu (A.Z.)
***** Correspondence: f_benhamida@esi.dz
† This paper is an extended version of our paper published in CPSSIoT2019: 1st Workshop on Cyber-Physical
Social Systems co-located with the 9th International Conference on the Internet of Things (IoT 2019).
**Citation:** Benhamida, F.Z.; Navarro, J.; Gómez-Carmona, O.; Casado-Mansilla, D.; López-de-Ipiña, D.; Zaballos, A. PyFF: A Privacy Fog-Based Flexible Architecture for IoT-Based Communal Smart Environments. _Sensors_ 2021, _21_, 3640. [https://doi.org/10.3390/s21113640](https://doi.org/10.3390/s21113640)

Academic Editors: Soumya Kanti Datta, Mirko Presser, Antonio Skarmeta, Sébastien Ziegler, Srdjan Krčo and Latif Ladid

Received: 11 April 2021; Accepted: 20 May 2021; Published: 24 May 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons [Attribution (CC BY) license](https://creativecommons.org/licenses/by/4.0/).
**Abstract:** The advent of the Internet of Things (IoT) and the massive growth of devices connected to the Internet are reshaping modern societies. However, human lifestyles are not evolving at the same pace as technology, which often results in users' reluctance and aversion. Although it is essential to consider user involvement and privacy while deploying IoT devices in a human-centric environment, current IoT architecture standards tend to neglect the degree of trust that humans require to adopt these technologies on a daily basis. This paper therefore proposes an architecture to enable privacy-by-design in human-in-the-loop IoT environments. It first distills two IoT use-cases with high human interaction to analyze the interactions between human beings and IoT devices in an environment which had not previously been subject to the Internet of People principles. Leveraging the lessons learned in these use-cases, the Privacy-enabling Fog-based and Flexible (PyFF) human-centric and human-aware architecture is proposed, which brings together distributed and intelligent systems. PyFF aims to maintain end-users' privacy by involving them in the whole data lifecycle, allowing them to decide which information can be monitored, where it can be computed and the appropriate feedback channels, in accordance with human-in-the-loop principles.
**Keywords: user involvement; fog computing; internet of things; privacy; flexibility; smart environments**
**1. Introduction**
The Internet of Things (IoT)—committed to smartly connecting a deluge of digital
assets deployed in users' environments—is one of the main drivers of the digital transformation in modern societies [1]. The advent of the IoT has materialized the conception of a new
interconnected world composed of new ubiquitous computing technologies. Several fields
and domains ranging from education [2] to Industry 4.0 [3], including transportation [4],
healthcare [5] and business [6], are exploiting the never-ending advances of IoT. Under
this context, the overriding presence of technology can play a relevant role in addressing
new societal challenges [7] and bringing added-value services in a way
never imagined before. However, despite this continuous progress in smart services and
technology, human beings seem to struggle to keep up with the pace of such digital achievements (e.g., smartphone adoption, use of social networks or e-administration services).
On the one hand, the cultural divide, digital skills or economic inequality may hinder
the equitable growth of these technologies [8]. On the other hand, human factors such
as the apprehension about being tracked or privacy concerns relating to who may access
_Sensors 2021, 21, 3640_ 2 of 27
the collected data can also be candidates to explain this issue [9,10]. This work focuses on
the latter.
Generally speaking, people in modern societies are averse to being continuously surveyed
(i.e., monitored) by a digital entity that they do not trust (i.e., up to what extent humans are
confident with the data or service offered by a “thing” [11]), without knowing which data
they are sharing [12]. This lack of trust continues to grow despite the efforts made by many
initiatives on user and data privacy (e.g., GDPR (General Data Protection Regulation) in
Europe [13], CCPA (California Consumer Privacy Act) in USA [14] or LGPD (Lei Geral de
Proteção de Dados (in English: General Data Protection Law)) in Brazil [15]). In addition,
the lack of understanding about the behavior of these digital services (e.g., for a regular
user, it is hard to grasp why a given IoT device has taken a certain decision) makes users
lose their trust and perceived value toward them. Notwithstanding, the IoT paradigm
should greatly contribute to boosting the involvement of human beings in new optimized
services powered by technology and, hence, somehow minimize their reluctance [16].
Current IoT reference architectures [17], such as RAMI 4.0, IIRA, or even the IoT
World Forum Reference Model, focus on specifying the hierarchical layers (also referred to
as levels), information flows, functionalities and interoperability guidelines to design an
IoT environment. However, the role of end-users is typically seen as a passive high-end
interface rather than an embedded entity inside the whole data lifecycle (also referred
to as human-in-the-loop [18]). Possibly, this design approach, together with the lack of
standards for trustworthiness in the IoT [19], has led to the aforementioned trust concerns
of IoT environments [11]. Note that these trust issues are more relevant than ever because
of the current global COVID-19 situation and the measures taken by different countries to
control the flows of people [20]. In the last months of 2020, society has witnessed important
concerns raised over privacy involving the tracking strategies established to cope with the
disease (i.e., technologies to track where people are, where they have been or what their
disease status is) [21].
Therefore, the purpose of this paper is to propose a human-centric and human-aware
(i.e., human-in-the-loop) IoT architecture where distributed and intelligent systems are
brought together to foster user adoption and trustworthiness in IoT environments. In this
regard, this work first proposes two different real-world use-cases to discuss the tangible
challenges of enabling the digitization of user environments by means of IoT architectures,
while considering user preferences, characteristics and behaviors. The findings and experiences collected from these two use-cases define the requirements of the proposed Privacy-enabling Fog-based and Flexible (PyFF) architecture: a user-oriented architecture for enabling privacy-by-design with human-in-the-loop IoT environments.
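As a minimal sketch of this privacy-by-design, human-in-the-loop idea (all names such as `ConsentPolicy` and `fog_filter` are hypothetical illustrations, not PyFF's concrete interfaces), a fog node could check every reading against a per-user consent policy before any data leaves the edge:

```python
# Minimal sketch of a privacy-by-design consent filter at a fog node.
# All names (ConsentPolicy, fog_filter) are illustrative, not part of PyFF.
from dataclasses import dataclass, field

@dataclass
class ConsentPolicy:
    """Per-user choices: which streams may be monitored, and where computed."""
    allowed_streams: set = field(default_factory=set)   # e.g. {"energy"}
    compute_at: str = "fog"                             # "edge", "fog" or "cloud"

def fog_filter(readings, policies):
    """Forward only the readings whose user consented to that stream."""
    forwarded = []
    for r in readings:
        policy = policies.get(r["user"])
        if policy and r["stream"] in policy.allowed_streams:
            forwarded.append(r)
    return forwarded

policies = {"alice": ConsentPolicy({"energy"}, "fog"),
            "bob": ConsentPolicy(set(), "edge")}        # bob shares nothing

readings = [{"user": "alice", "stream": "energy", "kwh": 0.12},
            {"user": "alice", "stream": "presence", "value": 1},
            {"user": "bob", "stream": "energy", "kwh": 0.30}]

print(fog_filter(readings, policies))  # only alice's consented "energy" reading
```

Here `bob` has consented to nothing, so none of his readings are forwarded; the `compute_at` choice could likewise be honored when deciding whether processing stays on the edge, the fog or the cloud.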
This work shows that understanding users, securing their privacy and including them
in the data lifecycle (as done in PyFF, to make them aware of which data they
are disclosing) is pivotal in the design and deployment of any IoT service that involves
physical interaction [22]. In fact, PyFF is also envisaged as a first step to conceive Internet
of People [23] architectures, where a shift from infrastructure-centric to human-centric
environments is necessary. Although an extensive real-world deployment and evaluation
of the PyFF architecture is still not available, the benefits of this approach are contextualized
in the framework of a communal smart IoT environment: the digital transformation of a
traditional office-based workplace. The selection of this particular use-case is conditioned
by the additional difficulties it poses. Beyond the traditional privacy and security concerns
that smart spaces need to face, smart workplaces pose additional threats. For example,
privacy perception acquires new dimensions involving a social component, as these data
can be associated with the image given to third parties or with the perception of productivity
and work performance [24]. Additionally, due to the long hours that users (i.e., workers)
spend in workplaces, this can be considered a strategic environment to address challenges
such as user comfort and energy efficiency by means of IoT.
In essence, the contributions of this paper are twofold:
1. The PyFF architecture is proposed, which is conceived to transform digital environments while increasing energy efficiency and user comfort and maintaining users' privacy.
This architecture is derived from the analysis of two empirical studies (i.e., Smart
Sustainable Coffee Machines and GreenSoul project) that aimed to study user behaviors towards workplace digitization when it comes to automating energy-saving
actions.
2. A multi-faceted qualitative comparison among the proposed PyFF architecture, GreenSoul and the Smart Sustainable Coffee Machines is presented. This comparison enables practitioners to assess the strengths and weaknesses of these three different
IoT paradigms discussed in this work. In addition, these results can be taken as
reference guidelines on how to convert a digital workplace into an appropriate setting
to involve workers in decision making and motivate them towards more sustainable
and healthier behaviors while promoting changes.
As an expanded version of our work in [25], this paper argues that the novelty of PyFF lies in
combining innovative data processing architectures, distributed intelligence processes and
advanced immersive interaction interfaces between users and things to give rise to user-aware (human-centred) IoT domains. This idea seeks to turn IoT environments into more
efficient, trustworthy and acceptable scenarios for their users. Thus, we aim to transform
the way users interact with their environment while promoting healthier behaviors or
increasing levels of comfort for their occupants in return. PyFF offers a generalized fog-based, privacy-aware architecture that can be used for any IoT-based smart environment.
To sum up, the proposal of the PyFF architecture, which puts humans in the loop
within IoT environments, aims to contribute to making the Internet of People a reality and
enable the conception of privacy-by-design IoT environments. The qualitative evaluation
conducted in this paper shall guide developers and system architects to build reliable
heterogeneous systems with regards to the data life cycle from the Edge to the Cloud.
The remainder of this paper is organized as follows. Section 2 details the two use-cases
that inspired us to introduce and define the requirements of the new PyFF architecture:
(1) the Smart Sustainable Coffee Machines project, designed to test the effectiveness of
persuasive technology to raise energy-efficiency awareness in the mid and long term; and
(2) the GreenSoul project, which aimed at reducing energy consumption in tertiary buildings
by engaging employees through bespoke ICT-based feedback. Section 3 depicts the PyFF
architecture and discusses how it can be used to transform a digital workplace into a
human-centric smart workplace. To better understand the functionality of the proposed
architecture, an illustrative scenario is provided in Section 4 which showcases the flexibility
and use of PyFF in a smart workplace scenario. Section 5 provides a qualitative comparison
of the three IoT environments used in this paper. Section 6 compares our findings in the
field of smart workplaces derived from the conception of PyFF with the related work.
Finally, a discussion on the drivers and challenges and some conclusions are provided in
Section 7.
**2. Enabling the Digitization of User Environments by Means of IoT Architectures**
To discuss the tangible challenges of enabling the digitization of user environments,
this section analyzes two already existing real-world use-cases: (1) the Smart Sustainable
Coffee Machine; and (2) the GreenSoul project. They are briefly introduced in the following.
- The Smart Sustainable Coffee Machines use-case [26] consists of instrumenting several
capsule-based Coffee Machines in ten different work environments to provide them
energy sensing and user-interaction capabilities. This scenario is aimed at measuring
the importance of preserving user’s privacy when it comes to collecting sensitive
data. The conducted experimental tests have led to a better understanding of the
importance of user environment digitization and its side-effects. In fact, over-reliance
on automation may bring undesired effects to pro-environmental behavior and reduce
personal responsibility for action [27].
- The GreenSoul project use-case [28] consists of deploying IoT interactive artifacts
to employees of six tertiary buildings across Europe (Austria, Greece, the UK and
Spain) to enhance their awareness about energy consumption. The objective was
to understand the new dynamics and discussions that these devices may bring in a
communal context when they are deployed from scratch (e.g., the interaction with the
device in the daily routine, the attachment or the confidence to the information they
provide, emotions related to the IoT devices or their role as mediators of conversations
among peers).
The analysis of these two use-cases, combined with our previous work in [25] about
boosting energy efficiency in smart workplaces, exhibits the key parameters that limit user
involvement in IoT environments. Indeed, these use-cases have been used to collect new
insights and issues on what IoT may bring to communal contexts. These insights have
motivated the design requirements of the PyFF architecture.
_2.1. Use-Case 1: Smart Sustainable Coffee Machines_
The first use-case under analysis comes from an experimental intervention that took place
over one year in 15 different sites with more than 100 users. This use-case was designed
to assess the benefits of using IoT devices to increase users’ consciousness about energy
consumption in a persuasive way. To this end, the Coffee Machines found in office environments were selected as the target IoT devices that would be used to persuade users to
become more energy efficient (and aware) in the mid- and long-term. It is worth noting that
the selection of the Coffee Machine for this experiment is not arbitrary. On the one hand, it
is well-known that Coffee Machines are a commonplace asset in the majority of office-based
working environments. On the other hand, due to the fact that Coffee Machines need to
spend a considerable amount of energy maintaining the pump pressure and water heated,
their power consumption can be higher than other A-class appliances such as modern
refrigerators (i.e., A++), laptops, monitors or even ovens [29]. Full information and further
details about the implementation of this experiment can be found in [26].
In the following, the main strategies used in this use-case to transform a regular appliance
into an IoT device are summarized, and the major findings on user interaction with an IoT
domain derived from it are outlined.
2.1.1. Preparation of the IoT Environment and Experiment Configuration
As shown in Figure 1, embedded energy measurement equipment developed with
an Arduino board was attached to the capsule-based Coffee Machines. The Arduino
microcontroller proved to be a very convenient way to sense the energy consumption of the
Coffee Machine—by means of an energy meter directly attached to its I/O ports—while, at
the same time, providing a straightforward gateway to the Internet by means of its Ethernet
port. This enabled the system itself to easily send energy consumption information to a
remote server [30].
**Figure 1. The energy consumption data flow from the Ethernet-based Arduino microcontroller board to the remote server**
where the data were stored for later processing and analysis [26].
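The gateway firmware itself is not reproduced in the paper; as an illustration of the server-side processing this data flow enables (sketched in Python rather than the Arduino C of the deployment, with an assumed sample layout of `(unix_timestamp, watts)` pairs taken once per minute), raw wattage samples can be aggregated into kWh per hour of day:

```python
# Illustrative server-side aggregation of the energy samples uploaded by the
# Arduino gateway; the (unix_ts, watts) layout and the 60 s sampling period
# are assumptions, not the project's actual wire format.
from collections import defaultdict

def hourly_energy_kwh(samples, period_s=60):
    """Aggregate (unix_ts, watts) samples into kWh per hour of day."""
    kwh = defaultdict(float)
    for ts, watts in samples:
        hour = (ts // 3600) % 24          # hour of day the sample belongs to
        kwh[hour] += watts * period_s / 3600.0 / 1000.0  # W*s -> kWh
    return dict(kwh)

# One hour of a 1200 W coffee machine sampled every minute, starting at 08:00
samples = [(8 * 3600 + i * 60, 1200.0) for i in range(60)]
print(hourly_energy_kwh(samples))  # roughly {8: 1.2}
```

Such per-hour aggregates are exactly the kind of input a usage-forecasting model on the remote server can consume.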
This layout enabled researchers to define three different experimental conditions
related to how the users would be informed on energy awareness (i.e., how the IoT domain
would interact with users): Automation, Persuasion and Web-based dashboard. Each
experimental condition is detailed in the following:
1. Web-based dashboard: In this configuration, a website was developed showing each user's energy consumption from the coffee machine. This enabled participants to
monitor their own consumption and provide rational insights by means of showing
historical data.
2. Persuasive feedback: This configuration combined subtle visual hints with ambient
feedback provided in real-time to persuade the user to decide when the coffee machine
should be turned off.
3. Automation: This configuration required no intervention from the user. In this way,
the coffee machines decided by themselves when the best moment was to shut down
and did so accordingly. This was aimed at providing a notion of comfort for the users
since they did not have to worry about switching the coffee machine off and on to
save energy.
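The three conditions differ only in who acts on the hourly usage forecast: nobody (dashboard), the user prompted by a hint (persuasion), or the machine itself (automation). A toy sketch of this decision logic (function and action names, and the idle threshold, are illustrative assumptions, not taken from the deployment):

```python
# Toy decision logic for the three experimental conditions; names and the
# idle threshold are illustrative assumptions.

def act(condition, forecast_users_next_hour, threshold=1):
    """Return the action a coffee machine under each condition would take."""
    idle = forecast_users_next_hour < threshold
    if condition == "dashboard":
        return "show-history"                            # information only
    if condition == "persuasion":
        return "hint-switch-off" if idle else "no-hint"  # the user decides
    if condition == "automation":
        return "power-off" if idle else "stay-on"        # the machine decides
    raise ValueError(f"unknown condition: {condition}")

print(act("persuasion", 0))  # hint-switch-off
print(act("automation", 0))  # power-off
```

Keeping the user in the loop for the persuasion condition is what later allows human supervision to correct forecast errors that the automation condition acts on blindly.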
It is worth mentioning that the last two configurations (i.e., Persuasive feedback and
_Automation) used an Auto Regressive Integrated Moving Average (ARIMA) model (running_
on an external server rather than on the coffee machine itself due to the reduced storage
and computing capabilities of the Arduino board) to statistically forecast the number of
users who would use the appliance every hour of the day [31]. The final architecture to run
the experiment is shown in Figure 2.
**Figure 2. System architecture of the Coffee Machine use-case [26].**
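The ARIMA model itself ran on an external server and its parameters are not given in the paper; in practice one would fit, e.g., statsmodels' `ARIMA` class to the hourly counts. As a dependency-free stand-in for that forecast, the seasonal-naive sketch below (an assumption, not the paper's actual model) predicts the upcoming hour's users as the average observed at the same hour on previous days:

```python
# Seasonal-naive stand-in for the hourly-usage forecast; a real deployment
# would fit an ARIMA model (e.g. statsmodels.tsa.arima.model.ARIMA) instead.

def forecast_next_hour(history, period=24):
    """history: users per hour, oldest first. Forecast the upcoming hour as
    the average count observed at that same hour of day."""
    target = len(history) % period            # hour of day being forecast
    same_hour = [history[i] for i in range(target, len(history), period)]
    return sum(same_hour) / len(same_hour)

# Two days of history: quiet nights, a morning burst at hour 9
history = [0] * 9 + [5] + [1] * 14 + [0] * 9 + [7] + [1] * 14

print(forecast_next_hour(history))        # hour 0 of day 3 -> 0.0
print(forecast_next_hour(history[:33]))   # hour 9 of day 2 -> 5.0
```

A forecast of zero users would drive the shut-down hint (persuasion) or the shut-down itself (automation) for the coming hour.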
2.1.2. Evaluation Procedure and Obtained Results
The evaluation procedure was based on structured questionnaires. These questionnaires aimed to obtain information related to the socioeconomic profile of each participant, to contextualize the experiment population; their pro-environmental attitudes [32]
as well as their pro-environmental readiness to change [33]; and their confidence in
technology as a means to address environmental challenges. This information facilitated
objectively assessing whether and up to what extent the users wanted to modify their pro-environmental behavior. It is worth noting that each participant enrolled in the experiment
had to answer these questionnaires twice: once before the experiment and then after the
experiment (i.e., 1 year later).
_Sensors 2021, 21, 3640_ 6 of 27
The obtained results and main lessons learned from this use-case are summarized in
the following:
- Energy Consumption: After running the experiment with the IoT coffee machines, the energy consumption under the Persuasive feedback and Automation experimental conditions dropped by 44% and 14%, respectively. Surprisingly, no reduction in energy consumption was observed under the Web-based dashboard condition. Therefore, the following remarks can be inferred. First, it is possible to improve the energy consumption of daily appliances. Second, human supervision can mitigate bias in statistical models (i.e., the Persuasive feedback condition saved more energy than the Automation one). Finally, persuasion is key to involving users (i.e., no changes were observed under the Web-based dashboard condition).
- Questionnaires: After analyzing all the questionnaire data, it was found that the users under the Automation experimental condition were the ones who most distrusted the autonomous behavior of the coffee machine and, thus, were skeptical that technology could be a driver for pro-environmental change. Additionally, after the experiment, this group proved to be less likely to adopt attitudes in favor of the environment. These findings correlate fairly well with the work of Murtagh et al. [27], who found that automation impairs pro-environmental attitudes and undermines actions of personal responsibility. To sum up, the following remark can be inferred from the evidence above: autonomous appliances (e.g., the coffee machine in this use-case) may contribute to reducing confidence and trust in technology. Therefore, user idiosyncrasy cannot be neglected when implementing automation in an IoT domain.
- Focus Groups: To further capture user feedback on this experiment, a set of focus groups was conducted. The most relevant observation came from the users under the Automation experimental condition. Specifically, they complained that they were kept out of the loop of the coffee machine's operation; that is, it was not possible to intervene in the decision process by which the coffee machine shut itself down. Users reported feelings of frustration at being unable to use the appliance at will, although they were aware that this was done to improve energy consumption.
The main lesson learned from this situation is that users need to understand the behavior of an autonomous device in order to ensure a long-term effective coexistence. Overall, the results obtained in this use-case showed an unexpected rebound effect associated with automation in IoT environments. To sum up, leaving process management, particularly of processes related to energy efficiency, to automated entities (e.g., statistical and machine learning models) may lead to adverse phenomena: passivity to act in favor of the environment and widespread distrust in the suitability of technological solutions to address latent environmental issues.
_2.2. Use-Case 2: GreenSoul Project_
The second use-case, referred to as GreenSoul (GS) [28], was designed to optimize
energy costs in tertiary buildings considering the individual profile of each user. Although
this use-case is also targeted at energy consumption, GS takes a step forward from the
Coffee Machine and considers user behavioral patterns in order to take/suggest actions.
Therefore, before giving personalized recommendations and/or subtle nudges on
energy consumption, GS accurately monitored the operation of as many appliances as
possible (e.g., monitors, heating, ventilation and air conditioning devices). In addition,
GS considered the idiosyncrasy of each user in order to provide him/her with suitable,
yet effective, feedback to reach the overall goal of increasing energy efficiency without
neglecting privacy and comfort. Overall, GS took some of the lessons from the Coffee Machine use-case and proposed strengthening the engagement of end-users, rather than developing complex automation algorithms, in order to obtain durable results.
In the following, the IoT infrastructure deployed on the buildings to optimize their
energy consumption is summarized and the major findings on the user interaction with
the IoT domain derived from this use-case are outlined.
2.2.1. Preparation of the IoT Environment and Experiment Configuration
A three-layered scheme following the physical building deployment and Edge Computing approach (Figure 3) was designed for the GS architecture: (1) the Device Layer;
(2) the Building Layer; and (3) the Front-End Layer.
**Figure 3. GreenSoul Reference Architecture [28].**
The Device Layer, the bottom part of the architecture, features the set of sensors that
are considered relevant for data extraction and analysis; actuators that can be remotely
controlled to assure that energy efficiency is achieved; and adaptors, which are new
electronic devices connected to home or office appliances, of personal use (e.g., monitors,
PCs, etc.) or collective use (e.g., printers, coffee-makers, outlets or power strips, etc.).
Similarly to the smart coffee maker, the purpose of such adaptors was to optimize efficient
usage of the mentioned appliances.
The Building Layer is responsible for giving value and meaning to the information
retrieved. It consists of the GS-Decision Support System (GS-DSS) component, responsible
for processing data and generating final operational recommendations at the Edge level.
Finally, the Front-End Layer features the components of the Visualization Interfaces that give users access to mobile and web applications. Through these interfaces, the GS platform captures, stores and manages energy-consumption data per device/user. The data are then analyzed and displayed for educational and informative purposes.
The GS architecture benefits from flexibility in terms of: (1) enabling remote intelligent
management of diverse remote devices (energy-meters and persuasive-ambient devices)
always within the building; (2) applying persuasion techniques through GS-ed devices
and mobile apps to eco-educate users both individually and at user-group level; and (3)
providing device and environment decision-intelligence locally and at the Edge level to
enhance the eco-friendliness profile of a given installation, where several common use
devices are used by a group of users [26].
2.2.2. Evaluation Procedure and Obtained Results
The effectiveness of the overall GreenSoul system was tested by carrying out an
intervention in six pilot buildings across Europe involving more than 350 people. Four
different treatments combining three different persuasion principles through ICT were
deployed (i.e., self-monitoring, cause–effect and conditioning). These treatments were delivered using different feedback channels: a custom-built interactive coaster that provided visual information about energy consumption (self-monitoring); a gamified mobile app with some automation features (conditioning); a series of analog signage in the form of post-its and posters with “green messages” (cause–effect), which can be considered the control treatment; and all three previous treatments together. Figure 4 illustrates each of them.
**Figure 4. The GreenSoul Persuasion Treatments with the associated technology to deliver them (post-its, mobile app,**
physical devices and all the treatments together) [28].
As with the smart coffee-maker intervention, this study was divided into two phases:
individual and collective. During the individual phase, the primary objective was to
foster the awareness and motivation of the participants in energy efficiency practices.
Hence, the only individual information that was provided to end-users was regarding their
performance with devices and appliances under their own control. In the second phase, we
gave persuasive hints about how to reduce the energy consumption of electricity-powered
devices not directly attached to the individual but more related to equipment of shared use
(e.g., lighting, HVAC or common appliances).
Again, the overall GS solution was evaluated through a triangulation approach. To
this aim, three different qualitative and quantitative sources were used: (1) pre–post
validated surveys to assess energy awareness, motivations to change the behavior and
main obstacles that hinder the adoption of energy practices in the workplace; (2) the energy
consumption per user, per treatment and per building throughout the whole study; and (3) focus groups across all experimental phases to understand user motivations at each point in time, intervention pitfalls and other relevant matters.
The results emphasized the importance of understanding user profiles in both socioeconomic and behavioral terms to inform ICT-based campaigns to promote sustainable
practices among employees. Regarding privacy, automation and trust in systems and work peers, we found that people trusted ICT interventions more at the beginning, yet later simply showed cues of absentmindedness. This suggests that providing frequent subtle feedback (i.e., reminders) to employees and tenants would help users remember green actions once they are aware of an energy-related problem. The GS intervention also shed light on the importance of understanding the level of confidence in technology if an ICT-based intervention to change people's behavior is to be applied. This finding was also relevant in the previous use-case. The pilot sites with higher levels of confidence in technology at the end of the intervention were found to be the ones with fewer barriers to behaving energy efficiently. Finally, we also observed that high rates of confidence in technology and trust are correlated with a more actionable approach in favor of the environment.
To sum up, both use-cases stress the need to build or maintain end-users' confidence in technology if we want them to stay involved in the green actions suggested by ICT interventions. This suggests the use of Fog/Edge Computing architectures that retain private data close to end-users, while the whole internal process of computing the feedback is explained to them at any point.
_2.3. Architecture Requirements for Enabling a Privacy-by-Design with Human-in-the-Loop_
_IoT Environment_
The results and experiences collected from the Smart Sustainable Coffee Machines
and the GreenSoul project endorse the need to conceive a more flexible and privacy-aware
architectural solution. The most important insights derived from the analysis of these
use-cases are summarized below:
- A fully-automated management system focused on energy efficiency seems to cause
passivity among people to act in favor of the environment. In fact, users are not
involved in actions which are taken automatically by the systems, and thus can hardly be influenced to adopt good habits that help reduce energy consumption.
- The automated system can also generate widespread distrust in the technology since
it will discourage humans from taking the lead on their own actions.
- Users are often sensitive about sharing their data, and become reluctant if the desired level of privacy is not respected. However, it is of paramount importance to sense as much data and monitor as many devices as possible to provide accurate recommendations (e.g., in health- or energy-related scenarios) in order to increase end-users' confidence.
- Since involving users to take actions in the smart environment is recommended, it is
important to study their profiles in both socioeconomic and behavioral terms. This
will help in defining the ICT intervention campaigns to communicate with each one
accordingly and promote sustainable practices among users.
These insights allow us to define the following requirements that will guide the
conception of the PyFF architecture:
1. Flexibility: The system must be able to provide different degrees of service at the
same time according to the user profile and service to be delivered.
2. Privacy: The system must take into account the sensitivity of the data originating in the IoT environment, the service properties and the user's willingness to expose her/his associated data when exchanging and computing data over the IoT environment.
Therefore, service performance shall be reduced, if necessary, to keep the desired
privacy level.
3. Scalability: The system must support an ever-growing number of devices (and users) cohabiting and communicating with each other in the same IoT environment.
4. Including humans in the loop: The system must consider user preferences and behavior, which requires a shift from infrastructure-centric to human-centric [23] architectures. Therefore, users are no longer a high-end interface but a critical part of the whole information flow.
5. Data governance: The system must provide clear means to define which data will be
exchanged, by whom and where they will be processed.
**3. PyFF: A Privacy-Fog-Based Flexible Architecture**
Driven by these reflections, this work proposes PyFF: a Privacy Fog-based Flexible
architecture for IoT-based smart environments. PyFF features a distributed hierarchical
system that takes advantage of the Fog Computing paradigm for enabling privacy-by-design with human-in-the-loop IoT environments. Specifically, PyFF is committed to:
(1) collecting, storing and processing multi-modal data from low-cost devices in a scalable
way; (2) providing several degrees of data privacy according to the user and application
preferences; (3) hosting recommendation and forecasting distributed algorithms with
variable computational cost; and (4) implementing ICT-based channels to communicate
the concluded recommendations to users based on their profiles and preferences. Overall,
based on a hierarchical design inspired by Fog Computing, we detail hereafter the PyFF
system model and the functionalities of its layers. These layers are depicted in Figure 5.
**Figure 5. The proposed PyFF system architecture.**
From an architecture point of view, PyFF is compatible with existing well-known
IoT architectures that, incidentally, are typically composed of three logical layers [34]:
Perception layer (that could be mapped to the Sensing layer of PyFF), Network layer (that
could be mapped to the Early Stage Computing Layer of PyFF) and Application layer (that
could be mapped to the Intensive Computing layer of PyFF). However, existing architecture
reference models (e.g., RAMI 4.0, IIRA, IoT-A and IEEE 2413-2019) focus on specific
challenges (e.g., infrastructure data and connectivity, business usage implementation,
interoperability and secure information exchange) and seem to neglect user involvement
in the whole data lifecycle [35]. Therefore, PyFF aims to: (1) simplify the complexity of
existing IoT reference models; and (2) enable privacy-by-design with human-in-the-loop
IoT environments.
_3.1. PyFF: System Model_
The very first requirement that PyFF should meet—thoroughly learned from the
GreenSoul use-case—is flexibility. The level of data privacy can change according to
company policies and/or users’ preferences (e.g., users from the same company may have
different privacy policies). Accordingly, the use-case of the Smart Sustainable Coffee
Machines has stressed the relevance of providing user-adapted recommendations when
using persuasive techniques to raise energy efficiency awareness. Therefore, PyFF must
be able to adapt to the desired and dynamic levels of privacy, accuracy and automation.
The flexibility of the proposed approach allows the user to interact with the system while
iteratively personalizing it at any time. Thus, fine-grained control is given to the user,
who has the power to modify and adjust the system behavior according to their privacy
requirements and their current willingness to be an active part of the process. This fine-grained control consists of specifying how “far” the associated data of users will go, that is, which devices (and users) will store and/or process a certain datum for a given service. Such a specification is made by the user at service sign-up and epidemically propagated [36] to all the affected devices. This fine-grained control could be implemented by means of a
declarative access control policy language such as XACML [37], which can be adapted to
provide adaptive reasoning, as done in [38].
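As a toy illustration of this scope specification (not an XACML policy; the layer and datum names are invented), a per-datum limit on how far data may travel could be checked as follows:

```python
# Toy per-datum scope policy: each datum names the highest layer it may reach.
# Layer ordering follows PyFF: sensing < early (fog) < intensive (cloud).
LAYERS = ["sensing", "early", "intensive"]

def allowed(policy, datum, target_layer):
    """True if `datum` may be stored/processed at `target_layer`."""
    limit = policy.get(datum, "sensing")      # default: never leaves the device
    return LAYERS.index(target_layer) <= LAYERS.index(limit)

# Hypothetical user policy, set at service sign-up
policy = {"power_consumption": "intensive",   # may be aggregated in the cloud
          "presence": "early"}                # stays inside the fog network
```

A real deployment would propagate such a policy epidemically to all affected devices and could express it in a declarative language such as XACML, as the text notes.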
The Fog Computing nature of the proposed approach (see Figure 5) helps the system
to be inherently flexible and enables it to integrate different technologies and standards
with little effort, which makes it adaptable to any given scenario restrictions.
PyFF is composed of four main and flexible layers: (1) Sensing Layer is responsible for
data collection; (2) Early Stage Computing Layer is represented by a Fog network used for
local computation; (3) Intensive Computing Layer is deployed in a Cloud infrastructure and
responsible for data aggregation, which is used to obtain more accurate recommendations;
and (4) User–Environment-interaction Layer is used to optimize the interaction between the
users and their surrounding smart devices while giving recommendations.
Such flexibility provides scalable data processing, storage and networking services between Cloud Computing infrastructures and IoT devices, generally, but not exclusively, located at the Edge of the network [39]. Indeed, the Fog Computing approach alleviates
those fears related to sharing sensitive and private data on the Cloud by enabling users
and applications to conduct intensive operations close to where the data were generated
(i.e., Edge) and, thus, minimize the amount of information sent to the remote servers. This
approach inherently increases data security since these data are kept inside the enterprise
network and its firewalls, which can best be seen as a privacy-by-design [40] enabler.
The four layers featured by PyFF are supervised by a Decision Support System (DSS)
that, with the aid of the user, defines through intents the scope of every datum according
to some rules such as privacy, presence or availability. This intent-based DSS is based on a
previous work of the authors, the S³OiA framework [41]. Hence, PyFF can be considered a flexible architecture thanks to the fact that it can be decomposed into layers that can be added/removed depending on the system needs. The role and functionality of each layer are detailed hereafter.
3.1.1. Sensing Layer
Similar to submetering [42] in the electric field, the sensing layer is committed to
collecting the greatest amount of data from the environment. It can be best seen as an
IoT sub-domain where Internet-connected digital objects sense as many environmental
variables as possible. For instance, a desktop computer can easily detect user presence,
sitting posture and eye gaze/blinking by means of the built-in camera [43]. It can also
infer user activity by counting keystrokes (or clicks on the mouse) during a period of time.
Analogously, a smartphone can easily sense background noise, ambient light intensity or the number of phone calls interrupting the user’s activity. Additionally, other smart devices
such as smart plugs, smart watches or smart speakers (digital assistants) can be easily
reconfigured to report all the data that they seamlessly capture. Data communications in
this Sensing Layer can be implemented by means of well-known protocols such as XMPP,
MQTT or CoAP [44] since all sensed data will be later processed and matched to a certain
behavior at the upper layers.
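For instance, a sensing-layer reading could be packaged as a small JSON document before being published over one of these protocols. The topic scheme and field names below are invented for illustration, and the actual transport (e.g., an MQTT client publish call) is omitted.

```python
import json
import time

def make_reading(building, room, device, metric, value, unit):
    """Package one sensed value for publication on, e.g., an MQTT topic."""
    topic = f"{building}/{room}/{device}/{metric}"   # hypothetical topic scheme
    payload = json.dumps({
        "metric": metric,
        "value": value,
        "unit": unit,
        "ts": int(time.time()),                      # publication timestamp
    })
    return topic, payload

# Hypothetical smart-plug reading; with paho-mqtt one would then call
# client.publish(topic, payload)
topic, payload = make_reading("hq", "office-12", "smartplug-3",
                              "power", 87.5, "W")
```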
3.1.2. Early Stage Computing Layer
Inspired by the Fog architecture, the Early Stage Computing Layer receives data from the Sensing Layer and conducts local non-intensive computations. From a data privacy point of view, this layer can best be seen as the frontier beyond which sensitive data shall not go. In fact, as already seen in the GS use-case (see Section 2.2), several studies have shown that users, enterprises and stakeholders are keener to share and collaborate if such sensitive data are managed at the Edge of the network (i.e., the fog) rather than outside the premises [45].
Consequently, as long as the data privacy policies allow it, the Early Stage Computing
Layer sends encrypted objects to the upper layer for strong recommendations or more
sophisticated aggregated analytics. The latter requires greater computing power and more
robust models.
Devices located at the Edge of the network can be typically identified as gateways,
computers or local servers. Additionally, it is worth mentioning the situation in which
the same physical device—due to its advanced sensing, computing and communication
capabilities—can belong to the Sensing and Early Stage Computing Layers at the same
time. This would be the case of the Arduino boards used in the Coffee Machines use-case (see Figure 1). One of these Arduino boards can locally decide (at the Early Stage Computing Layer) to turn the coffee machines on or off according to the current date and time, which would result in an immediate energy saving but may potentially lead to user dissatisfaction. However, before taking this decision, the Arduino board can check the overall energy consumption of the whole building (e.g., it might be empty) and decide, irrespective of the current date and time, to allow the user to have a cup of coffee. This is why this early-stage layer may transfer sensed data to the upper layer for more intensive computing and, in exchange, obtain a richer and more accurate picture of the environment.
For a further explanation of the role of the Early Stage Computing Layer, imagine that
a smart plug reports the power consumption of a heater. When the gateway detects that the heater has been working uninterruptedly for a specified number of hours, it might suggest turning off the heater, which would result in energy saving, similar to the Smart
Sustainable Coffee Machines use-case. In the upper layer (i.e., Intensive Computing Layer),
the power consumption data will be correlated with other variables (e.g., office hours,
office occupancy and ambient temperature) to make the recommendation stronger and,
maybe, more widespread (e.g., in addition to the user, it could also trigger an alert to the
staff in charge of facility management).
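This gateway-side heuristic could be sketched as follows; the threshold and the returned action labels are invented:

```python
def heater_recommendation(on_hours, max_hours=3.0, building_occupied=True):
    """Local (fog) rule: suggest switching off a heater that has run too long.

    Mirrors the Early Stage Computing Layer logic: decide with local data
    only, and defer to richer context (e.g., occupancy) when available.
    """
    if on_hours < max_hours:
        return None                 # nothing to recommend yet
    if not building_occupied:
        return "auto-off"           # nobody affected: act directly
    return "suggest-off"            # user present: persuade, do not impose
```

In the full system, the Intensive Computing Layer would strengthen this recommendation by correlating it with office hours, occupancy and ambient temperature, as described above.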
In addition, another example could be a situation where a camera is used to track users’ positions and, thus, user privacy becomes of paramount importance. In this case, the proposal is to take an alternative approach by encrypting and sending to the following layer only the user’s body/face edges and most notable features [43], instead of the whole video stream (as done in [46]). Note that this strategy intrinsically boosts workers’ privacy since it guarantees that: (1) the whole image stream cannot be reconstructed from the landmarks (i.e., no raw images are sent); and (2) no other environmental information of the user leaves the physical building. Additionally, the overall amount of data transferred over the communications network is greatly reduced, which increases the system performance.
Indeed, as data go from one layer to the next, the degree of data privacy is unavoidably
reduced. Therefore, PyFF aims to move as few data as possible (following the principles of
Cloud Computing [47]: move computation to data rather than moving data to computation)
and, when the size of the data or the complexity of the computation associated with them makes it necessary for them to be sent to the next layer, the data are encrypted (using a privacy scheme such as the one proposed in [48]).
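A minimal sketch of this "export features, not raw data" principle is given below: only numeric landmarks plus a salted pseudonym leave the building. The additional encryption step (e.g., the scheme of [48]) is omitted, and all names are illustrative.

```python
import hashlib
import json

def landmark_payload(user_id, landmarks, salt=b"building-secret"):
    """Keep raw images local; export only landmarks under a pseudonym."""
    # Salted hash: stable per user, but the identity is not exported
    pseudo = hashlib.sha256(salt + user_id.encode()).hexdigest()[:16]
    return json.dumps({"user": pseudo, "landmarks": landmarks})

# Hypothetical face/body landmarks extracted at the Edge
msg = landmark_payload("alice", [[0.12, 0.34], [0.56, 0.78]])
```

The salted pseudonym lets the upper layer correlate readings from the same person over time without ever learning who that person is.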
3.1.3. Intensive Computing and Storage Layer
Recent advances in machine learning require powerful computing platforms (e.g.,
GPUs) to run analysis and forecasting algorithms (e.g., those based on deep neural networks). This comes together with an eagerness for data. That is, these algorithms typically
require large amounts of data to operate properly and provide accurate recommendations.
For those applications/services that require these artificial intelligence algorithms, the modest features of devices deployed on the Edge network are not sufficient to appropriately handle such amounts of data. Therefore, PyFF proposes a layer deployed in a Cloud infrastructure, named the Intensive Computing and Storage Layer, which can be used at will whenever more computation and/or storage is needed (e.g., cloud bursting). Furthermore,
this layer can also complement applications whose processing capabilities are placed at the Early Stage Computing Layer. In those cases, inference tasks can be performed locally, where new data can be extracted, processed and converted into knowledge. Then, if the user allows their information to be externalized, the learning models can be updated with this extracted knowledge using the greater resources available at the Intensive Computing Layer.
At this point, the power of a Cloud Computing infrastructure is exploited by: (1) logging and aggregating all the collected data that reaches this layer—ideally, most of the
data would reside on the lower layers; (2) using a computing-intensive Learning Classifier
System able to build a set of user-readable rules (i.e., recommendations); and (3) forwarding these rules to the devices that have sensing but also acting capabilities from the Early
Stage Computing Layer (i.e., User–Environment-interaction Layer). The recommendations
resulting from this computing intensive data analysis will be mainly transmitted by means
of the User–Environment-interaction Layer, which will be in charge of finding the best
time/manner to deliver recommendations to the user (for instance, user’s presence must
be guaranteed before making a recommendation), as previously learned with the Smart
Sustainable Coffee Machines and GreenSoul use-cases. Note that the server used for the
coffee machines use-case (see Figure 1) could be deployed in this layer.
3.1.4. User–Environment-Interaction Layer
The availability of a large amount of data enables us to use this information to influence
users and guide their actions towards more accurate and precise behaviors. For instance, it
is better to recommend the user to switch off the light than to tell him/her to reduce the energy consumption. For this reason, this layer is in charge of optimizing the interaction
between the users and the devices by delivering contextualized feedback. This depends
on when and how to interact with the users to effectively influence their behavior: on the
one hand, by choosing the right recommendation mechanism (e.g., persuasive strategies
based on personalized messages [26]), while, on the other hand, by selecting the right
moment to provide the recommendations through anticipation (about-to-do moments)
and reflection on action (just-in-time moments). The first one is based on anticipation,
consisting of recognizing pre-action patterns that allow providing immediate interaction to
redirect the activity through context-aware signals (lights, sounds or vibrations, among
others). The second one consists of providing the user with all the information related to their behavior and performance, analyzing in depth the patterns and changes over time and
showing the possible consequences of this trend. Unlike the previous type of action, in this
case, we seek to influence future habits through personal inquiry.
A second approach that PyFF also supports is related to closing the loop of interaction
and allowing the users to not only receive information but also provide feedback to the
system through intents [41]. Implementation wise, these intents are in line with the idea of
the contemporary concept of human-in-the-loop [18] (i.e., human beings are the ones who
guide an intelligent system as it learns) and with the way Amazon Alexa or other voice
assistants are developed (available online: https://developer.amazon.com/en-US/docs/alexa/custom-skills/create-the-interaction-model-for-your-skill.html (accessed on 7 May 2021)). The intents and their associated utterances can be provided through multimodal
interaction (e.g., tangible, voice-based or explicitly through a digital interface such as a web
app or mobile app). These intents have to be propagated through the system to retrain and
tailor the way and moment the feedback is provided according to the users’ criteria and
needs. Hence, feedback and intents are two interwoven concepts towards personalization. The more feedback users provide to the system, the sooner PyFF will deliver bespoke interaction. The intents could be interpreted by the system through a
rule-based engine following the Rete Algorithm [49]. Some candidate implementations are Jess [50], CLIPS (available online: http://www.clipsrules.net/ (accessed on 7 May 2021)), pyKe (available online: http://pyke.sourceforge.net/index.html (accessed on 7 May 2021)) or Durable Rules (available online: https://github.com/jruizgit/rules (accessed on 7 May 2021)), which allow different programming languages for their implementation.
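As a toy stand-in for such a Rete-based engine, the intent-to-action matching could look as follows; the intent names and actions are invented:

```python
# Minimal intent -> action rule matching; a stand-in for a Rete-based engine
# such as CLIPS or Durable Rules.
RULES = [
    # (condition on the intent dict, action to perform)
    (lambda i: i["name"] == "mute_feedback" and i.get("hours", 0) > 0,
     lambda i: f"suppress recommendations for {i['hours']}h"),
    (lambda i: i["name"] == "prefer_channel",
     lambda i: f"deliver feedback via {i['channel']}"),
]

def handle(intent):
    """Fire the first rule whose condition matches the user intent."""
    for cond, action in RULES:
        if cond(intent):
            return action(intent)
    return "intent not understood"
```

A production engine would instead compile many such rules into a shared matching network and propagate the resulting configuration through the system, as the text describes.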
Finally, in certain applications/services, no recommendation to the end-user is required (see the Automation group in the Smart Sustainable Coffee Machines use-case). In
this case, this layer could be removed/overlooked, which again shows the flexibility of the
proposed system.
3.1.5. Decision Support System
Since PyFF features a hierarchical heterogeneous architecture, a system orchestration
is, hence, required to ensure communication and interoperability between the proposed
four layers. PyFF integrates a Decision Support System (DSS) mainly based on middleware
solutions for IoT-, Fog- and/or Cloud-based systems [51–54]. Building on the work that Pore et al. [55] carried out on design issues for Fog and Edge middlewares, an approach using micro-services could be implemented to host and orchestrate the PyFF system.
Indeed, some well-known Fog Computing frameworks such as Apache Edgent (available online: https://edgent.incubator.apache.org (accessed on 7 May 2021)) or EdgeX Foundry (available online: https://docs.edgexfoundry.org/ (accessed on 7 May 2021)) use this
paradigm that enables modular, scalable, secure and technology-agnostic applications [56].
In fact, the DSS with the aid of the user defines through intents the scope of every datum
according to some rules. The list of rules to decide how to assign services and communicate
between layers includes:
1. Privacy: Users are asked about their willingness to share sensitive data.
2. Accuracy: Decides where (i.e., Fog and/or Cloud) the computation (e.g., a recommendation) will take place.
3. User involvement: The system decides the communication channels used to notify users based on their preferences, and the multi-modal channels employed to assess how good or bad the received feedback was.
With the defined rules, the DSS covers communication and interaction between PyFF
layers in order to decide: (1) which data to retrieve from physical devices; (2) how to
protect data (anonymization, encryption, etc.); (3) which computation layer to address for
recommendation (Fog or Cloud); (4) how to interfere with the environment to take actions
based on computational results; and (5) how to communicate recommendations to users.
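In a highly simplified sketch, these decisions could be composed into a single routing function over the rules above; all field names and channel labels are invented:

```python
def route(datum):
    """Apply the DSS rules: privacy first, then accuracy, then user channel."""
    # 1. Privacy: sensitive data never leave the fog without user consent,
    #    whatever the accuracy need.
    if datum["sensitive"] and not datum["user_consents_cloud"]:
        layer = "fog"
    # 2. Accuracy: heavy models (e.g., deep neural networks) justify the cloud.
    elif datum["needs_heavy_model"]:
        layer = "cloud"
    else:
        layer = "fog"
    # 3. User involvement: use the notification channel the user asked for.
    channel = datum.get("preferred_channel", "ambient")
    return layer, channel
```

Note the ordering: the privacy rule dominates, so a recommendation that would benefit from cloud-scale models is still computed in the fog if the user has not consented.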
In essence, the main difference between PyFF and prior Fog/Edge architectures, systems
and frameworks lies in its user involvement and in the flexibility of the architecture to
enable all the layers or just the most basic and functional ones. In other reviewed
approaches, the end-user is mainly depicted as a bare consumer of the services provided
by the architecture, usually in the top layer called “applications” or “marketplace”.
In contrast, PyFF provides a technology-agnostic orchestration system that puts the user
at the center of deciding which services to offer and at what level of privacy they
should be offered.
**4. Illustrative Example: Smart Workplace**
To better understand the functionality of the proposed architecture, an illustrative
scenario to showcase the flexibility and use of PyFF is provided. This scenario is abstracted
in Figure 6, while Figure 7 shows the mapping of the PyFF multi-layers architecture in a
real-world environment. Let us consider an SME company that has several/shared offices
for its workers and management. Every office, regardless of the employee category (i.e.,
blue or white collar) uses a set of standard devices (i.e., desktop computer with in-built
_Sensors 2021, 21, 3640_ 15 of 27
camera, smartphone, smart plug, smart light and voice assistant) equipped with sensing
capabilities in the workplace environment (yellow row in Figure 6 and yellow components
in Figure 7).
**Figure 6. Abstraction of the PyFF architecture to address energy efficiency and user comfort in a smart workplace environment.**
**Figure 7. Implementation of the PyFF architecture in a smart workplace.**
On the one hand, the desktop computer of the office continuously monitors (i.e., Early
Stage Computing Layer) the worker's position and periodically triggers alerts when no significant
movement is detected for long periods of time. This is aimed at improving the workers' health
conditions by reminding them to avoid sedentary attitudes (blue row in Figure 6 and blue
server in Figure 7).
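As a hypothetical sketch of this Early Stage check (thresholds, the window size and the `(x, y)` landmark representation are our assumptions), the desktop could flag a sedentary period when the total displacement of the tracked position over the last few samples stays below a small threshold:

```python
# Illustrative Early Stage Computing check: detect "no significant movement".
def sedentary_alert(positions, threshold=0.05, window=5):
    """positions: list of (x, y) body landmarks sampled periodically (0..1 range)."""
    if len(positions) < window + 1:
        return False                     # not enough history yet
    recent = positions[-(window + 1):]
    # Total displacement across consecutive samples in the window.
    moved = sum(abs(x2 - x1) + abs(y2 - y1)
                for (x1, y1), (x2, y2) in zip(recent, recent[1:]))
    return moved < threshold             # True -> trigger the sedentary alert

still = [(0.50, 0.50)] * 10              # worker has not moved at all
print(sedentary_alert(still))            # triggers the reminder
active = [(0.1 * i, 0.5) for i in range(10)]
print(sedentary_alert(active))           # normal activity, no alert
```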
For those workers with no data privacy concerns (note that, e.g., high-level staff may be
averse to allowing their sensed data to leave the company), the face/body landmarks are sent
to the Intensive Computing Layer (red row/cloud in Figures 6 and 7) to precisely analyze
the worker’s gaze, eye blinking and sitting posture. This layer sends back recommendations
to the desktop (green row in Figure 6) in order to complement their local decisions (e.g., in
addition to the “sedentary attitude” alert, another specific recommendation could be triggered:
“perform neck exercises”). Note that, at this point, some users are taking advantage of the
rich recommendations provided by the machine learning algorithms running at the Intensive
Computing Layer (at the cost of assuming potential privacy leaks of the sensed data), while other
users renounce these recommendations (at the price of keeping their sensed data safe). This
flexibility is aimed at obtaining a larger user acceptance, as learned from the Smart Sustainable
Coffee Machines and GreenSoul use-cases.
In this scenario, it is also worth considering the case in which a smartphone collects
(Sensing Layer) data regarding ambient light intensity. When the smartphone detects an excess
of ambient light (Early Stage Computing Layer), it triggers a notification for the user suggesting
that s/he turns off the office light to reduce energy waste. Additionally, the ambient data sensed
by the smartphone will again be cross-checked with data from other sources (e.g., it might be
the case that the desktop screen is momentarily displaying bright images) in order to make a
stronger recommendation (e.g., making an automated phone call to the user). This is why, in
some situations, the early stage layer needs to transfer sensed data to the upper layer for more
intensive and correlated computing and global storage.
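This cross-check could look as follows; the lux limit, the screen-brightness threshold and the escalation labels are illustrative assumptions, not values from the deployed systems:

```python
# Illustrative cross-check: escalate the "turn off the light" suggestion only
# when the excess light cannot be explained by the desktop screen momentarily
# displaying bright content.
def light_recommendation(ambient_lux, screen_brightness, lux_limit=500):
    if ambient_lux <= lux_limit:
        return None                  # Early Stage: no excess light detected
    if screen_brightness > 0.8:
        return "soft_notification"   # likely a transient screen effect
    return "phone_call"              # correlated evidence -> stronger action

print(light_recommendation(650, 0.9))   # excess light, but bright screen
print(light_recommendation(650, 0.2))   # excess light confirmed
print(light_recommendation(300, 0.2))   # nothing to do
```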
Similarly, the smart plug is continuously sending the power consumption to the same
desktop application that locally monitors worker’s movements. This enables the system to
autonomously infer behavioral status (via association rules [57]) from the user and his/her
environment. For instance, with these rules, the system can assume—as early as at the Early
Stage Computing Layer—that, if there is no movement and the fan is turned on (i.e., there is
power consumption), the worker might have left and forgotten to turn off the fan and, thus,
might decide to trigger a warning via the voice assistant, just in case the worker is still in the
office. This inferred behavior must be further refined at the Intensive Computing Layer, where
the power consumption of the smart plug will be correlated with the worker agenda to check
whether the worker may be elsewhere and, thus, unilaterally decide to turn off the fan by means
of the smart plug.
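The two-stage rule described above can be sketched as follows. This is a minimal illustration under assumed names and thresholds (the 5 W idle cutoff, the action labels and the agenda flag are hypothetical), showing how an Early Stage inference is later refined by the Intensive Computing Layer:

```python
# Association rule at the Early Stage Computing Layer:
# "no movement AND fan power draw -> warn via voice assistant".
def early_stage_action(movement_detected, fan_watts):
    if not movement_detected and fan_watts > 5:
        return "voice_warning"       # worker may have left the fan on
    return None

# Refinement at the Intensive Computing Layer: correlate with the agenda.
def intensive_refinement(early_action, worker_in_office_per_agenda):
    if early_action == "voice_warning" and not worker_in_office_per_agenda:
        return "turn_off_fan"        # unilaterally cut power via the smart plug
    return early_action

action = early_stage_action(movement_detected=False, fan_watts=40)
print(action)                                            # early, cautious step
print(intensive_refinement(action, worker_in_office_per_agenda=False))
```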
Finally, it is worth considering how the proposed system gets users involved in these
recommendations (engaging them and leading them to a more responsible lifestyle) by
means of the User–Environment-interaction Layer. In fact, workers are directly involved in
changing their own habits in terms of energy waste. Users can configure the degree of privacy
they want and through which interfaces (e.g., cell phone or email) they are willing to receive
recommendations. Indeed, the system could be completely autonomous and, for instance, turn
devices on and off accordingly, as done in the Smart Sustainable Coffee Machines use-case.
However, in PyFF, rather than implementing a user-unaware energy-efficiency model, we
prefer to instill better intentions in workers. With this, we avoid users' reluctance
towards technology and help tackle the root problem of energy consumption/waste by having
them apply those recommendations at a larger scale (i.e., at home, in public spaces and elsewhere).
Overall, this example shows how the IoT architecture provided by PyFF can contribute to
worker comfort and energy efficiency in a flexible and privacy-friendly, yet persuasive, way.
In addition, as shown in Figure 6, the PyFF approach makes it possible to add or remove
layers according to the desired services or user constraints, which endorses the system's
flexibility. Indeed, one application may choose to use only the Early Stage Computing and
the User–Environment-interaction Layers if all its users are reluctant to share their data.
However, when users have different data sensitivity preferences, both computing layers can
be kept, with only those data that meet the desired levels of privacy being moved to the Cloud.
**5. Qualitative Evaluation**
We present hereafter a qualitative study comparing both use-cases with the new PyFF
architecture. The reason behind this cross-validation is to demonstrate the improvement
that PyFF brings in terms of flexible design for a smart workplace. Since PyFF has not
been implemented yet, and since we conducted extensive experiments for both use-cases,
Smart Sustainable Coffee Machines and GreenSoul, the comparison below is mainly based on
the strategy of each architecture to enable a privacy-by-design, human-in-the-loop smart
workplace. To this end, we define in Table 1 a set of metrics under four categories:
(1) Privacy, to evaluate up to what level the architecture respects privacy policies at
the corporate and/or employee level; (2) Automation, to assess the autonomy of the
proposed system to offer the required optimization (e.g., energy efficiency) while trading
off the degree of intrusiveness; (3) Flexibility, to estimate the possibility of re-adapting
the design considering all potential parameters (physical components/architecture, ethical
and privacy policies, size of data/network, etc.); and (4) Deployment, to assess the effort
required to deploy it in a real-world environment.
To help read the qualitative comparison, we rank most factors from ++ (fully implemented/measured)
to −− (not implemented). The ranking reflects how much each evaluation criterion was considered
(or not) in each architecture. For example, Table 1 shows that the data protection factor has
been considered in both use-cases, but to a lesser extent than in PyFF (anonymization schemes
vs. a privacy-based, user-centric scheme), yielding a + value for both GreenSoul and the Coffee
Machines and ++ for PyFF. Similarly, the disruption factor has clearly been neglected in the
Coffee Machines because of their fully automated (i.e., out-of-control) system, which costs
them a −− in the evaluation.
_5.1. Privacy Metrics_
Smart environments are challenging scenarios where technology is the primary way
to collect data and obtain information about users. They must preserve users’ privacy and
consider ethical concerns regarding personal data collection [58]. In Table 1, we evaluate
privacy through four main metrics: (1) Data protection, i.e., which protocols are used
to protect data; (2) Data usage, i.e., at which level data are disseminated (Local/Edge,
Cloud, etc.); (3) Homogeneity, i.e., whether the same rule/protocol is used for every
device/user in the application; and (4) Disruption/Intrusion, i.e., whether the new
smart environment is intrusive/disruptive to the user or not.
As demonstrated along the proposed two use-cases, users are more reluctant to be
monitored in spaces that can be associated with their behaviors and habits (e.g., schedules
and work performance in smart workplaces) [59]. In PyFF, privacy concerns are covered,
ensuring the security of the data on every layer of the architecture, with special focus on
the way sensitive information is processed and sent to the Cloud. Therefore, no unwanted
personal data are made available and, thus, the privacy of the users is preserved. In
this regard, the Early Stage Computing Layer is introduced as an intermediate layer that
offers local decisions based on data collected at the Sensing Layer and ensures sharing
resources and services in the neighborhood of a network while enhancing their secrecy
and availability. Nonetheless, in some applications, pre-processed data still need to be
delivered to an upper layer with more computing and storage capabilities. To maintain the
management requirements of the potentially sensitive information, the most critical point
to consider on this layer is data privacy. Therefore, PyFF proposes to: (1) filter/transform
personal data; and/or (2) encrypt data before sending them to the upper layer (i.e., Cloud
services). Many existing security schemes can be used in this Fog-inspired architecture. For
instance, SKES-Fog can be implemented as far as a smart environment architecture could
be presented using domains, as suggested in [48]. Besides, data filtering or transformation
allows deleting unnecessary data during the decision-making process (e.g., user’s identity).
Later, the interaction layer will assign the anonymized data to its corresponding worker to
send accurate recommendations (based on the decisions from the Intensive Computing
and Early Stage Computing Layers) and receive feedback from them.
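A minimal sketch of option (1) combined with pseudonymization is shown below. The field names, the salt and the SHA-256-based pseudonym are our assumptions (a real deployment would use a proper encryption or anonymization scheme such as the SKES-Fog approach cited above); the point is only that personal fields are stripped before upload while the interaction layer can still re-associate recommendations locally:

```python
# Illustrative filter/anonymize step before sending data to the Cloud layer.
import hashlib

PERSONAL_FIELDS = {"name", "email"}   # assumed set of sensitive fields

def anonymize(record: dict, salt: str = "pyff-demo") -> dict:
    # (1) Filter: drop personal fields not needed for the recommendation.
    out = {k: v for k, v in record.items() if k not in PERSONAL_FIELDS}
    # (2) Pseudonymize: a deterministic token lets the interaction layer
    #     map Cloud recommendations back to the worker, locally.
    out["pseudonym"] = hashlib.sha256(
        (salt + record["name"]).encode()).hexdigest()[:12]
    return out

record = {"name": "Alice", "email": "a@corp.com", "posture": "slouched"}
clean = anonymize(record)
print("name" in clean, "posture" in clean)   # identity removed, payload kept
```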
**Table 1. PyFF qualitative evaluation.**

| Category | Metric | GreenSoul | Smart Sustainable Coffee Machines | PyFF |
|---|---|---|---|---|
| Privacy | Data protection | + (anonymization & encryption) | + (anonymization) | ++ (based on privacy policy) |
| Privacy | Data usage | Edge | Cloud | Device, Edge, Cloud (based on user's choice) |
| Privacy | Homogeneity | Yes | Yes | Heterogeneous privacy rules & preferences |
| Privacy | Disruption/Intrusion | − (many new deployed devices) | −− (full automation) | ++ (interaction-based scheme & no extra devices) |
| Automation | User involvement | + (one-way recommendations) | −− (full automation) | ++ (full-duplex & adapted to user involvement preferences) |
| Automation | Recommendation accuracy | Fog-based | Cloud-based | Cloud/Fog (parameter) |
| Automation | ICT/HCI | Dashboard | Dashboard | Depends on user's behavior/preference |
| Automation | Real-time | Yes | Yes | Yes |
| Flexibility | Adaptive reasoning | Non-existent | Non-existent | Layer-based |
| Flexibility | Context-based | Energy | Energy (coffee machines) | Any context |
| Flexibility | Scalability | − (workplace) | + (home & workplace) | ++ |
| Deployment | Deployment cost | Hardware + software | Hardware + software | Hardware + software |
| Deployment | Fault isolation and tolerance | NA | Yes | Yes |
| Deployment | Heterogeneous devices | Yes | No | Yes |
| Deployment | Reliability | − (Fog-ML-based recommendation) | + (statistical method) | NA |
| Deployment | Distributed | No | No | Yes |
| Deployment | Event management | + (DSS) | NA | ++ (DSS + User–Environment layer) |
Furthermore, data need to be gathered without affecting users’ routine and minimizing
their attention span, especially in workplaces. Thus, these systems need to be non-intrusive,
creating an ecosystem surrounding the user that allows collecting data without any effect
on his/her routine [60]. PyFF avoids intrusion and disruption by using digital devices
already deployed in the environment or the users’ devices so that space is not overinstrumented with disruptive elements. In general terms, one of the strong points of
a successful ICT initiative should be ensuring how the user interacts with technology,
promoting its adherence while creating a sense of confidence and trust. When comparing the
PyFF architectural approach with the Smart Sustainable Coffee Machines and GreenSoul
use-cases, we found both use-cases relatively intrusive: GreenSoul requires a number of
newly deployed devices (which over-instruments the smart environment), while the Coffee
Machines make the system fully automatic, which causes users' reluctance. Since privacy
strongly depends on the level of users' adherence to sharing data and/or being instrumented
with smart devices, PyFF enables privacy-by-design [61] with a heterogeneous scheme. With
it, users can subscribe to the level of privacy they feel comfortable with (e.g., sharing
data/identity, selecting the set of smart devices to collect data from, etc.) and update
it according to the context or their current attitude towards the system. In contrast,
both use-cases implement one single protocol for all users and devices, which makes them
less adaptive to changes in users' preferences and behaviors at run time.
_5.2. Automation Metrics_
Designing a smart environment requires building autonomous processes to collect
data, analyze information and make decisions. In the qualitative comparison, four metrics
are defined to evaluate Automation in PyFF: (1) how much User involvement is respected;
(2) the level of Recommendation accuracy (i.e., intensive/early-stage computation based
on Cloud/Fog); (3) ICT/HCI, i.e., how the system interacts with users (communication
channels); and (4) whether the system offers Real-time services.
As concluded from the proposed use-cases, it remains important to communicate
with the users during any actions/recommendations issued from the automation process.
In fact, there is a risk of losing users’ trust and adherence in technology, while making
the architecture totally automated (as in the Smart Sustainable Coffee Machines use-case).
For this reason, PyFF was designed from a human-centered perspective to promote new
habits in smart environments by considering the role of the user as a key factor in bringing
changes. The basis of the change-management process is the way the information is used
as an awareness mechanism and how this information is provided to the workers. In
particular, information needs to be delivered effectively, and digital feedback is an
appropriate way to influence the receiver [62]. In PyFF, the role of the user is boosted by
the User–Environment-interaction Layer, in charge of optimizing the interaction between
the users and the system through contextualized feedback [26] and privacy-based user
intentions [41]. The former pursues involving the workers in the smart process and
influencing their behavior through technological persuasion techniques that increase their
engagement and motivation. The latter allows users to express which data they want to
preserve and the set of requirements that must be fulfilled to that end. Thus, the user
will always be able to supervise the whole procedure
in a reliable and understandable manner. This human-in-the-loop approach augments
human interactions, making them part of the information retrieval, understanding and
processing [63,64]. Thanks to this layer, Cloud services and humans in the loop transparently interact with each other, allowing a more secure and confident data exchange. The
User–Environment-interaction Layer involves users in the process of promoting sustainable
behaviors and, thus, encourages them to have confidence on a layered architecture that
seeks to ensure the security, privacy and trust. This provides an adaptable interaction that
can be dynamically adapted to different contexts and user preferences and, ultimately,
allows users to educate the system (and reciprocally help the system educate the users)
rather than relying exclusively on what the system decides for them.
_5.3. Flexibility Metrics_
An effective way to achieve flexibility in a layer-based architecture is: (1) to offer
Adaptive Reasoning by adding/removing layers in each application accordingly; (2) to
implement Context-based protocols by offering a solution for any application domain
(instead of only smart workplaces, as in the previous use-cases); and (3) to define a
Scalable solution that easily re-adapts to the size of the network, devices and users (see Table 1).
The PyFF adaptive reasoning feature offers the possibility to add/remove one or more
layers depending on the system needs (recall that we propose four layers). According
to the service complexity, this can be implemented in the user registration process of the
service by including a semantic reasoner or a simple questionnaire (e.g., “would you be
comfortable with device X having access to your datum Y?”). The output of this module
will be the privacy and behavioral rules that will constrain the scope of the service delivered
to each user.
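As a hypothetical sketch of this registration step (the questionnaire flags, layer names and `enabled_layers` helper are our assumptions), the answers can be mapped directly to the subset of PyFF layers enabled for a given user or service:

```python
# Illustrative mapping from questionnaire answers to enabled PyFF layers.
def enabled_layers(allow_cloud: bool, needs_local_decisions: bool) -> list:
    layers = ["sensing", "interaction"]        # always present
    if needs_local_decisions:
        layers.insert(1, "early_stage")        # Fog / Early Stage Computing
    if allow_cloud:
        layers.insert(-1, "intensive")         # Cloud / Intensive Computing
    return layers

# Top-confidential environment (e.g., military field): Cloud excluded.
print(enabled_layers(allow_cloud=False, needs_local_decisions=True))
# Smart farm: accurate Cloud aggregation, no local decision layer needed.
print(enabled_layers(allow_cloud=True, needs_local_decisions=False))
```

These two configurations correspond to the examples discussed next: removing the Intensive Computing Layer in confidential settings, or the Early Stage Computing Layer when privacy is not a concern.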
The addition and removal of PyFF layers is shown in the following examples. Let us
first consider a use-case about a top-confidential work environment (i.e., military field):
here, Cloud services can easily be excluded by removing the Intensive Computing Layer,
which may result in reduced performance if the lower-layer devices lack the required
storage and computing capabilities to deliver the service. Inversely, in a smart farm
environment [65], where very accurate recommendations are needed by aggregating data from
all distributed lands (i.e., farms), and where privacy is not a big issue, there will be
no need for the Early Stage Computing Layer. In addition, the role of the User–Environment-interaction
Layer will be limited to communicating decisions to the user (i.e., farmer) without
suggesting actions to take (because the goal of the system is to
remotely monitor the fields using deployed smart devices). These two examples—different
from the smart workplace scenario—show that PyFF is a context-sensitive solution where
its architecture can be generalized to a larger spectrum of use. Even though in this paper
we focus on the energy-efficiency and users well-being in a smart workplace environment
as an illustrative example, PyFF architecture is based on decoupling elementary services
in any system (physical devices, privacy and computation rules, real-time and accuracy,
HCI, etc.).
_5.4. Deployment_
When the size of IoT-based environments grows in terms of devices, the deployment
and maintenance of their systems becomes relevant and intensive. It is very common to
find IoT domains composed of heterogeneous and non-standardized devices, which makes
them hard to deploy (e.g., individual configurations required) and maintain (e.g., when the
system fails it is hard to find and isolate the faulty device).
Additionally, when the number of devices grows, the system may degrade its performance due to the communication overhead between devices and a lack of a scalable
backbone. In this regard, the hierarchical approach featured by PyFF relies on Fog and
Cloud Computing to alleviate the scalability issues emerged when facing a large number
of IoT devices.
Furthermore, the distributed nature of PyFF makes it very robust against faulty IoT
devices. These devices are known to be fault-prone for several reasons (e.g., lack of
reliable power sources, continuous exposure to harsh environments, etc.). In the likely case of a
faulty IoT device, PyFF would be able to: (1) trace the source of the fault (i.e., the Intensive
Computing Layer would identify non-coherent values compared to other sources or the
Early Stage computing layer would receive very different values compared to its historic
records); (2) isolate and ignore the faulty device (i.e., conducting a top-down analysis of
the information flow along the hierarchical architecture); and (3) report to the user that a
device is faulty (i.e., using the User–Environment-interaction Layer). Hence, the source of
the events can be traced naturally.
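Step (1) of this procedure can be sketched with a simple statistical check. The z-score approach and the 3-sigma limit below are our assumptions (the Early Stage Computing Layer would compare new readings against its historic records in some such way):

```python
# Illustrative fault check: flag a device whose reading deviates strongly
# from its own historic record (simple z-score; threshold is an assumption).
from statistics import mean, stdev

def is_faulty(history, reading, z_limit=3.0):
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return reading != mu           # flat history: any change is suspicious
    return abs(reading - mu) / sigma > z_limit

history = [21.0, 21.5, 22.0, 21.8, 21.2]   # e.g., °C from one office sensor
print(is_faulty(history, 21.6))            # coherent value
print(is_faulty(history, 85.0))            # likely a faulty device
```

A flagged device would then be isolated (step 2) and reported through the User–Environment-interaction Layer (step 3).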
**6. Related Work**
As shown above, technological advancements are starting to accelerate the evolution
of future smart environments. Now, this concept goes much further than implementing
technology to achieve this digital transformation and points to creating interactive spaces
where people and technology collaborate. Under this vision, smart environments sense the
physical world, give meaning to the obtained information and trigger suitable reactions to
transform human lifestyles. As a consequence, the Internet of Things (IoT) can enhance
health [66], wellness [67] or promote sustainable practices [68] in domains such as the
city [69] or the workplace [70]. The latter is a good example of how human and machine
intelligence can collaborate. Indeed, the inherent nature of these spaces, where the average
employee spends a substantial part of her daily routine, means that the habits and
behaviors performed in the workplace play a key role for every individual and for society.
Thus, workplaces can be seen as ideal scenarios to guide workers towards new
lifestyles that extend beyond their workday [71]. Linking the workplace with health
promotion and energy-related matters leads to the development of a sustainable working
environment that increases awareness through healthier and more sustainable behaviors [72].
In particular, workplaces can contribute to more environmentally friendly energy
management [73] and address individuals' lack of awareness about the impact of these habits
on their health [74]. For example, a work environment augmented with IoT can detect
and classify unhealthy habits such as bad postures or sedentary habits and notify those
harmful practices to end-users. Moreover, it can assist the user towards energy-awareness
and to attain sustainable changes in the mid and long term.
A key factor when designing and implementing programs to promote new habits
in the workplace is to study specific methods to identify which are the main problems
and then to carry out useful strategies to solve them [75]. In this regard, ubiquitous
technology can be used, firstly, to identify the unhealthy and unsustainable behaviors that
are executed in these spaces and, secondly, to correct the inadequate practices that are
recognized. Transforming the quality of the workplace experience implies monitoring
which habits need to be changed and providing information about the consequences of
these habits. Technology-based solutions allow us to physically or digitally interact with
our surroundings to obtain data that can be transformed into information and, in the end,
knowledge about the daily routines of the workers. Based on this knowledge, contextaware guidance can be provided to influence the users and change their behaviors. Thus,
technology-based solutions can be considered appropriate drivers to promote wellness and
energy awareness in the workplace.
Several attempts have been made to design enhanced workplaces [76] through the
adoption of Information and Communication Technologies (ICTs). From occupational risk
assessment to ensuring safety in the workplace [77], different solutions have been proposed
to reach large audiences, help them prevent indirect risks associated with these spaces
and bring energy awareness to their routine. Among these, occupational
health and promoting more active behaviors in the workplace stand out as one of the
most addressed concerns. In this direction, Taylor et al. reviewed the existing literature
addressing interventions designed to reduce sitting time and the role of the organizational
culture [78]. The obtained results coincide with the ones presented by Stephenson et al. [79],
who concluded that interventions using a computer and mobile and wearable technologies
can be useful in reducing these behaviors. The PEROSH initiative [80] studied how wearable devices could be part of wellness promotion interventions. It elaborated a decision
support framework for selecting useful sensors and proper data collection strategies for
avoiding sedentary behaviors without neglecting data privacy issues. In the same way, Jimenez
et al. [81] presented some guidelines to promote workplace health by using electronic
and mobile health tools to provide easier administration for campaign proposers while
considering data privacy from technical and psychological points of view. However, no
specific ICT architectures have been proposed to conduct these processes. Other works
have approached wellness interventions through digital technologies and have also been
proposed for reducing sedentary behaviors [82] as well as to increase energy expenditure
and promote more active periods [83,84].
Commercial solutions such as Comfy (available online: https://www.comfyapp.com/, accessed
on 7 May 2021) are committed to providing a virtual link between the digital
workplace and the physical environment by means of a Cloud-based platform able to
collect users data. These data might also be used to monitor user activity [85] or even
suggest the most appropriate time intervals to take a break [86] considering the user’s
focus state. Collected data can also come from a smart chair that could be used to improve
the user’s sitting position [87]. Novel technologies such as 5G in IoT domains have been
devised to boost comfort [88] and safety [89] in working environments.
As far as energy awareness in working environments is concerned, there have also
been some proposals so far. For instance, a digital interface was proposed by Irizar-Arrieta
et al. [90] that was aimed to notify users about their associated energy consumption.
This is very similar to the interactive coaster developed in the context of the GS project
(Section 2.2) which was aimed to make workers aware of the energy consumption of the
electronic devices that were naturally spread over their offices [91]. Recently, there have
been proposals aimed at reaching a large number of users: from displaying statistics in
real-time regarding energy consumption in a physical ground of a factory [92] to measuring
the power consumption of shared laboratory equipment [93], including proposals to
transform working tools and equipment into smart devices that persuade their users with
eco-awareness [26].
Moreover, some works have already explored the human factor behind these interventions and how people and the devices that populate smart workplaces can cooperate
towards higher energy efficiency [94] or bringing health awareness to the workplace by
increasing technology acceptance [95]. In general, work environments are especially challenging scenarios where additional barriers regarding privacy concerns of the collected
information [96] and the ethical concerns [58] must be considered. Moreover, context and
commitment to change are also a key factor when workday duties involve the total daily
routine [97].
This work goes one step further in the line of converting work environments into
appropriate settings to promote the adoption of lifestyle changes that persist over time.
In contrast to the literature reviewed, our proposal puts the focus on the users’ concerns
as a way to successfully tailor their future actions. To that end, we present the requirements to design an open novel architecture able to allocate interactive interventions in
the workplaces while considering system scalability, users’ privacy and cost. Moreover,
this work highlights the role of the worker at the center of a system that addresses both
energy consumption and workers' health as a whole, rather than tackling these aspects
individually with expensive or commercial (e.g., Comfy Enlighted, available online:
https://www.enlightedinc.com/, accessed on 7 May 2021) ad hoc single-purpose devices.
In essence, the presented approach links innovative data architectures with the future
work environment while addressing the human role in the process.
**7. Conclusions**
The IoT paradigm has enabled the rapid conception of a plethora of new applications
and use-cases committed to improving and supporting humans’ daily lives. However,
despite the apparent benefits brought by these solutions, there is a growing number of
users who exhibit a somehow averse behavior towards these improvements. In this work,
we describe and analyze two IoT use-cases (i.e., Smart Sustainable Coffee Machines and
GreenSoul projects) to identify the source of these reluctant attitudes and set up the grounds
of an architecture to address them. The results from both tested deployments allow us
to conclude the importance of involving users to take actions in the smart environment
themselves while preserving their privacy preferences. This motivated the design of PyFF,
a privacy-friendly by design architecture aimed to enable the transformation of physical
spaces into smart environments by actively involving the user in such a process.
PyFF is a Privacy Fog-based Flexible approach where the user decides which data
he or she wants to disclose (i.e., respecting privacy) and to what extent (i.e., exploiting
the Fog and Cloud Computing paradigms). From these premises, PyFF can continuously
monitor users’ activities and their environment and advise on the best actions to increase
their comfort while, for instance, optimizing energy usage (i.e., through flexible ICT
communication channels). Additionally, instead of conceiving expensive and new ad hoc
gadgets, PyFF aims to take advantage of the off-the-shelf technology already deployed in
user environments (e.g., desktop computers and smartphones) to sense the environmental
status and user dynamics and naturally interact with them. To overcome the data storage
and computing limitations associated to this continuous monitoring, PyFF features a
Fog Computing domain (i.e., Early Stage Computing Layer) composed of all the digital
devices deployed around the user (that can join or leave at will) and a Cloud Computing
layer (i.e., intensive computing layer) that will be used whenever these devices need
to carry out more complex computations. Therefore, the combination of the Fog and Cloud
Computing layers enables PyFF to limit the scope of the sensed data according to the
privacy preferences each user wants to preserve, while obscuring the data
when needed (i.e., splitting the computation process across several distributed nodes improves
data security [98,99]). In essence, other architectures [54] focus on how to distribute
the data, which data models to use, how many vendor protocols can be supported, or which
interoperability mechanisms are most appropriate to define a minimally interoperable
[system](https://oascities.org/minimal-interoperability-mechanisms/) (accessed on 7 May 2021).
PyFF, in contrast, does not propose yet another architecture
with more or fewer layers than others, but rather a way of understanding the data flow and the
deployment based on the user's requirements, needs and privacy concerns.
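The user-driven scoping of sensed data described above can be sketched as a simple routing policy. The following Python fragment is a minimal, hypothetical illustration (all class and function names are ours, not part of PyFF): each data type is processed in the Fog layer by default and is offloaded to the Cloud layer only when the user's policy explicitly allows it.

```python
from dataclasses import dataclass
from enum import Enum


class Scope(Enum):
    """Layer on which a sensed reading may be processed."""
    LOCAL = "fog"     # processed only on nearby (Fog) devices
    REMOTE = "cloud"  # may be offloaded for intensive computation


@dataclass
class PrivacyPolicy:
    """Per-data-type scope choices made by the user."""
    rules: dict  # maps reading type (str) -> Scope

    def route(self, reading_type: str) -> Scope:
        # Default to the most restrictive scope when unspecified.
        return self.rules.get(reading_type, Scope.LOCAL)


def dispatch(reading_type, value, policy, fog_process, cloud_process):
    """Hand a sensed value to the layer permitted by the user's policy."""
    if policy.route(reading_type) is Scope.REMOTE:
        return cloud_process(reading_type, value)
    return fog_process(reading_type, value)


# Example: ambient temperature may be aggregated in the cloud,
# but presence data never leaves the local devices.
policy = PrivacyPolicy(rules={"temperature": Scope.REMOTE,
                              "presence": Scope.LOCAL})
```

Under such a scheme, data types the user never mentions stay in the Fog layer, which matches the privacy-by-default stance the architecture advocates.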
The qualitative evaluation we conducted shows to what extent PyFF can adjust its architecture
to make it more flexible in both use cases in terms of privacy, deployment
cost and automation. The next steps for this research are to: (1) conduct experiments
in a real environment to assess quantitative metrics; (2) deepen the security protocols
to enhance the proposed privacy scheme; and (3) study the possibility of splitting each
layer into microservices to offer more flexibility in terms of fault tolerance, heterogeneity
and accuracy.
**Author Contributions: Conceptualization, F.Z.B., J.N., D.C.-M. and A.Z.; data curation, D.C.-M.;**
formal analysis, O.G.-C.; funding acquisition, D.L.-d.-I.; methodology, F.Z.B., J.N. and D.C.-M.;
project administration, D.L.-d.-I.; resources, O.G.-C. and D.C.-M.; software, D.C.-M.; supervision,
D.L.-d.-I. and A.Z.; validation, F.Z.B., J.N. and A.Z.; writing—original draft, F.Z.B., J.N., O.G.-C.
and D.C.-M.; and writing—review and editing, F.Z.B., D.L.-d.-I. and A.Z. All
authors have read and agreed to the published version of the manuscript.
**Funding: This research was partially supported by Secretaria d’Universitats i Recerca of the Depart-**
ment of Business and Knowledge of the Generalitat de Catalunya under grant 2017-SGR-977 for Joan
Navarro and Agustín Zaballos. We gratefully acknowledge the support of the Basque Government's
Department of Education for the predoctoral funding of one of the authors and the Deustek Research
Group. We also acknowledge the support of the Spanish government for SentientThings under
Grant No. TIN2017-90042-R and the support of ACM under Grant No. ACM2021_32. Finally, Joan
Navarro acknowledges Fundació "La Caixa" for supporting the research leading to these results under
grant agreement 2020-URL-IR2nQ-008.
**Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.**
**Conflicts of Interest: The authors declare no conflict of interest.**
_Sensors 2021, 21, 3640_ 24 of 27
**References**
1. Zhu, K.; Dong, S.; Xu, S.X.; Kraemer, K.L. Innovation diffusion in global contexts: Determinants of post-adoption digital
[transformation of European companies. Eur. J. Inf. Syst. 2006, 15, 601–616. [CrossRef]](http://doi.org/10.1057/palgrave.ejis.3000650)
2. Collins, A.; Halverson, R. Rethinking Education in the Age of Technology: The Digital Revolution and Schooling in AMERICA; Teachers
College Press: New York, NY, USA, 2018.
3. Ustundag, A.; Cevikcan, E. Industry 4.0: Managing the Digital Transformation; Springer: Berlin/Heidelberg, Germany, 2017.
4. Hars, A. Self-Driving Cars: The Digital Transformation of Mobility. In Marktplätze im Umbruch: Digitale Strategien für Services im
_[Mobilen Internet; Springer: Berlin/Heidelberg, Germany, 2015; pp. 539–549. [CrossRef]](http://dx.doi.org/10.1007/978-3-662-43782-7_57)_
5. Agarwal, R.; Gao, G.; DesRoches, C.; Jha, A.K. Research commentary—The digital transformation of healthcare: Current status
[and the road ahead. Inf. Syst. Res. 2010, 21, 796–809. [CrossRef]](http://dx.doi.org/10.1287/isre.1100.0327)
6. [Berman, S.J. Digital transformation: Opportunities to create new business models. Strategy Leadersh. 2012, 40, 16–24. [CrossRef]](http://dx.doi.org/10.1108/10878571211209314)
7. Majchrzak, A.; Markus, M.L.; Wareham, J. Designing for digital transformation: Lessons for information systems research from
[the study of ICT and societal challenges. MIS Q. 2016, 40, 267–277. [CrossRef]](http://dx.doi.org/10.25300/MISQ/2016/40:2.03)
8. Van Deursen, A.J.; Mossberger, K. Any thing for anyone? A new digital divide in internet-of-things skills. Policy Internet
**[2018, 10, 122–140. [CrossRef]](http://dx.doi.org/10.1002/poi3.171)**
9. Paul, C.; Scheibe, K.; Nilakanta, S. Privacy concerns regarding wearable IoT devices: How it is influenced by GDPR? In Proceedings of the 53rd Hawaii International Conference on System Sciences, Maui, HI, USA, 7–10 January 2020.
10. Zheng, S.; Apthorpe, N.; Chetty, M.; Feamster, N. User perceptions of smart home IoT privacy. Proc. ACM Hum. Comput. Interact.
**[2018, 2, 1–20. [CrossRef]](http://dx.doi.org/10.1145/3274469)**
11. Voas, J.; Kuhn, R.; Laplante, P.; Applebaum, S. Internet of Things (IoT) Trust Concerns. NIST Tech. Rep 2018, 1, 1–50.
12. Agaku, I.T.; Adisa, A.O.; Ayo-Yusuf, O.A.; Connolly, G.N. Concern about security and privacy, and perceived control over
collection and use of health information are related to withholding of health information from healthcare providers. J. Am. Med.
_[Informatics Assoc. 2013, 21, 374–378. [CrossRef]](http://dx.doi.org/10.1136/amiajnl-2013-002079)_
13. [General Data Protection Regulation. Available online: https://gdpr-info.eu/ (accessed on 7 May 2021).](https://gdpr-info.eu/)
14. Bukaty, P. The California Consumer Privacy Act (CCPA): An Implementation Guide; IT Governance Publishing: Ely, UK, 2019.
15. [Brazilian General Data Protection Law. Available online: https://iapp.org/resources/article/brazilian-data-protection-law-](https://iapp.org/resources/article/brazilian-data-protection-law-lgpd-english-translation/)
[lgpd-english-translation/ (accessed on 10 May 2021).](https://iapp.org/resources/article/brazilian-data-protection-law-lgpd-english-translation/)
16. Kim, Y.; Park, Y.; Choi, J. A study on the adoption of IoT smart home service: Using Value-based Adoption Model. Total Qual.
_[Manag. Bus. Excell. 2017, 28, 1149–1165. [CrossRef]](http://dx.doi.org/10.1080/14783363.2017.1310708)_
17. [Weyrich, M.; Ebert, C. Reference architectures for the internet of things. IEEE Softw. 2015, 33, 112–116. [CrossRef]](http://dx.doi.org/10.1109/MS.2016.20)
18. Amershi, S.; Cakmak, M.; Knox, W.B.; Kulesza, T. Power to the people: The role of humans in interactive machine learning.
_[Ai Mag. 2014, 35, 105–120. [CrossRef]](http://dx.doi.org/10.1609/aimag.v35i4.2513)_
19. Labib, N.S.; Brust, M.R.; Danoy, G.; Bouvry, P. Trustworthiness in IoT–A Standards Gap Analysis on Security, Data Protection
and Privacy. In Proceedings of the 2019 IEEE Conference on Standards for Communications and Networking (CSCN), Granada,
Spain, 28–30 October 2019; pp. 1–7.
20. Currie, D.J.; Peng, C.Q.; Lyle, D.M.; Jameson, B.A.; Frommer, M.S. Stemming the flow: How much can the Australian smartphone
[app help to control COVID-19. Public Health Res. Pr. 2020, 30, e3022009. [CrossRef] [PubMed]](http://dx.doi.org/10.17061/phrp3022009)
21. Park, S.; Choi, G.J.; Ko, H. Information technology—Based tracing strategy in response to COVID-19 in South Korea—Privacy
[controversies. JAMA 2020, 323, 2129–2130. [CrossRef] [PubMed]](http://dx.doi.org/10.1001/jama.2020.6602)
22. Hänsel, K. Wearable and ambient sensing for well-being and emotional awareness in the smart workplace. In Proceedings of
the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing: Adjunct, Heidelberg, Germany, 12–16
September 2016; pp. 411–416.
23. Conti, M.; Passarella, A.; Das, S.K. The Internet of People (IoP): A new wave in pervasive mobile computing. Pervasive Mob.
_[Comput. 2017, 41, 1–27. [CrossRef]](http://dx.doi.org/10.1016/j.pmcj.2017.07.009)_
24. Nappi, I.; de Campos Ribeiro, G. Internet of Things technology applications in the workplace environment: A critical review.
_[J. Corp. Real Estate 2020. [CrossRef]](http://dx.doi.org/10.1108/JCRE-06-2019-0028)_
25. Benhamida, F.; Navarro, J.; Gómez-Carmona, O.; Casado-Mansilla, D.; López-de-Ipiña, D.; Zaballos, A. SmartWorkplace:
A Privacy-based Fog Computing Approach to Boost Energy Efficiency and Wellness in Digital Workspaces. In Proceedings of the
1st Workshop on Cyber-Physical Social Systems Co-Located with the 9th International Conference on the Internet of Things (IoT
2019), Bilbao, Spain, 22 October 2019; Volume 2530, pp. 9–15.
26. Casado-Mansilla, D.; Garaizar, P.; López-de-Ipiña, D. Design-insights for Devising Persuasive IoT Devices for Sustainability in
the Workplace. In Proceedings of the 2018 Global Internet of Things Summit (GIoTS), Bilbao, Spain, 4–7 June 2018; pp. 1–6.
27. Murtagh, N.; Gatersleben, B.; Cowen, L.; Uzzell, D. Does perception of automation undermine pro-environmental behaviour?
[Findings from three everyday settings. J. Environ. Psychol. 2015, 42, 139–148. [CrossRef]](http://dx.doi.org/10.1016/j.jenvp.2015.04.002)
28. [GreenSoul: Persuasive Eco-awareness for User Engagement through Networked Data Devices. Available online: https://cordis.](https://cordis.europa.eu/project/id/696129)
[europa.eu/project/id/696129 (accessed on 7 May 2021).](https://cordis.europa.eu/project/id/696129)
29. Nipkow, J.; Bush, E.; Josephy, B.; Pilone, A. For a tasty but efficient coffee. Proc. ECEEE 2011, 11, 1453–1470.
30. Casado-Mansilla, D.; López-de Armentia, J.; Ventura, D.; Garaizar, P.; López-de-Ipiña, D. Embedding intelligent eco-aware
[systems within everyday things to increase people’s energy awareness. Soft Comput. 2016, 20, 1695–1711. [CrossRef]](http://dx.doi.org/10.1007/s00500-015-1751-0)
31. Ventura, D.; Casado-Mansilla, D.; López-de Armentia, J.; Garaizar, P.; López-de-Ipiña, D.; Catania, V. ARIIMA: A real IoT
implementation of a machine-learning architecture for reducing energy consumption. In Proceedings of the International Conference
_on Ubiquitous Computing and Ambient Intelligence; Springer: Berlin/Heidelberg, Germany, 2014; pp. 444–451._
32. Milfont, T.; Duckitt, J. A brief version of the environmental attitudes inventory. 2007. Unpublished Manuscript.
33. Tribble, S.L. Promoting Environmentally Responsible Behaviors Using Motivational Interviewing Techniques. Ph.D. Thesis,
Illinois Wesleyan University, Bloomington, IL, USA, 2008.
34. Zhao, K.; Ge, L. A survey on the internet of things security. In Proceedings of the 2013 Ninth International Conference on
Computational Intelligence and Security, Emei Moutain, China, 14–15 December 2013; pp. 663–667.
35. Mendez, D.M.; Papapanagiotou, I.; Yang, B. Internet of things: Survey on security and privacy. arXiv 2017, arXiv:1707.01879.
36. Arrieta-Salinas, I.; Armendáriz-Íñigo, J.E.; Navarro, J. Epidemia: Variable consistency for transactional cloud databases. J. Univers.
_Comput. Sci. 2014, 20, 14._
37. Rao, P.; Lin, D.; Bertino, E.; Li, N.; Lobo, J. An algebra for fine-grained integration of XACML policies. In Proceedings of the 14th
ACM Symposium on Access Control Models and Technologies, Stresa, Italy, 3–5 June 2009; pp. 63–72.
38. Riad, K.; Cheng, J. Adaptive XACML access policies for heterogeneous distributed IoT environments. Inf. Sci. 2021, 548, 135–152.
[[CrossRef]](http://dx.doi.org/10.1016/j.ins.2020.09.051)
39. Bonomi, F.; Milito, R.; Zhu, J.; Addepalli, S. Fog Computing and Its Role in the Internet of Things. In Proceedings of the First
_[Edition of the MCC Workshop on Mobile Cloud Computing; MCC’12; ACM: New York, NY, USA, 2012; pp. 13–16. [CrossRef]](http://dx.doi.org/10.1145/2342509.2342513)_
40. Langheinrich, M. Privacy by design—Principles of privacy-aware ubiquitous systems. In Proceedings of the International Conference
_on Ubiquitous Computing; Springer: Berlin/Heidelberg, Germany, 2001; pp. 273–291._
41. Vega-Barbas, M.; Casado-Mansilla, D.; Valero, M.A.; López-de-Ipiña, D.; Bravo, J.; Flórez, F. Smart spaces and smart objects
interoperability architecture (S3OiA). In Proceedings of the Innovative Mobile and Internet Services in Ubiquitous Computing
(IMIS), 2012 Sixth International Conference, Palermo, Italy, 4–6 July 2012; pp. 725–730.
42. Alonso-Rosa, M.; Gil-de Castro, A.; Medina-Gracia, R.; Moreno-Muñoz, A.; Cañete-Carmona, E. Novel Internet of Things
[Platform for In-Building Power Quality Submetering. Appl. Sci. 2018, 8, 1320. [CrossRef]](http://dx.doi.org/10.3390/app8081320)
43. Cao, Z.; Hidalgo, G.; Simon, T.; Wei, S.; Sheikh, Y. OpenPose: Realtime Multi-Person 2D Pose Estimation using Part Affinity
[Fields. CoRR 2018, abs/1812.08008. Available online: http://xxx.lanl.gov/abs/1812.08008 (accessed on 7 May 2021). [CrossRef]](http://xxx.lanl.gov/abs/1812.08008)
44. Dizdarević, J.; Carpio, F.; Jukan, A.; Masip-Bruin, X. A survey of communication protocols for internet of things and related
[challenges of fog and cloud computing integration. ACM Comput. Surv. (CSUR) 2019, 51, 1–29. [CrossRef]](http://dx.doi.org/10.1145/3292674)
45. Khan, S.; Parkinson, S.; Qin, Y. Fog computing security: A review of current applications and security solutions. J. Cloud Comput.
**[2017, 6, 19. [CrossRef]](http://dx.doi.org/10.1186/s13677-017-0090-3)**
46. Zhao, W.; Lun, R.; Gordon, C.; Fofana, A.B.M.; Espy, D.D.; Reinthal, M.A.; Ekelman, B.; Goodman, G.D.; Niederriter, J.E.; Luo,
X. A human-centered activity tracking system: Toward a healthier workplace. IEEE Trans. Hum. Mach. Syst. 2017, 47, 343–355.
[[CrossRef]](http://dx.doi.org/10.1109/THMS.2016.2611825)
47. Armbrust, M.; Fox, A.; Griffith, R.; Joseph, A.D.; Katz, R.; Konwinski, A.; Lee, G.; Patterson, D.; Rabkin, A.; Stoica, I.; et al. A view
[of cloud computing. Commun. ACM 2010, 53, 50–58. [CrossRef]](http://dx.doi.org/10.1145/1721654.1721672)
48. Challal, Y.; Benhamida, F.Z.; Nouali, O. Scalable Key Management for Elastic Security Domains in Fog Networks. In Proceedings
of the 2018 IEEE 27th International Conference on Enabling Technologies: Infrastructure for Collaborative Enterprises (WETICE),
[Paris, France, 27–29 June 2018; pp. 187–192. [CrossRef]](http://dx.doi.org/10.1109/WETICE.2018.00043)
49. Liu, D.; Gu, T.; Xue, J.P. Rule engine based on improvement rete algorithm. In Proceedings of the The 2010 International Conference on Apperceiving Computing and Intelligence Analysis Proceeding, Chengdu, China, 17–19 December 2010; pp. 346–349.
50. [Friedman-Hill, E. Jess, the Rule Engine for the Java Platform. 2008. Available online: http://alvarestech.com/temp/fuzzyjess/](http://alvarestech.com/temp/fuzzyjess/Jess60/Jess70b7/docs/embedding.html)
[Jess60/Jess70b7/docs/embedding.html (accessed on 7 May 2021).](http://alvarestech.com/temp/fuzzyjess/Jess60/Jess70b7/docs/embedding.html)
51. Yang, C.H.; Zhang, Y.S. Research on the Architecture of Iot Middleware Platform Based on BeiDou Navigation Satellite System.
_[Procedia Comput. Sci. 2020, 166, 46–50. [CrossRef]](http://dx.doi.org/10.1016/j.procs.2020.02.011)_
52. Blair, G.; Schmidt, D.; Taconet, C. Middleware for Internet distribution in the context of cloud computing and the Internet of
[Things: Editorial Introduction. Ann. Des. Telecommun. Telecommun. 2016, 71, 87–92. [CrossRef]](http://dx.doi.org/10.1007/s12243-016-0493-z)
53. Azeem, N.S.A.; Tarrad, I.; Hady, A.A.; Youssef, M.I.; El-kader, S.M.A. Shared Sensor Networks Fundamentals, Challenges,
Opportunities, Virtualization Techniques, Comparative Analysis, Novel Architecture and Taxonomy. J. Sens. Actuator Netw. 2019,
_[8. [CrossRef]](http://dx.doi.org/10.3390/jsan8020029)_
54. Antonini, M.; Vecchio, M.; Antonelli, F. Fog computing architectures: A reference for practitioners. IEEE Internet Things Mag.
**[2019, 2, 19–25. [CrossRef]](http://dx.doi.org/10.1109/IOTM.0001.1900029)**
55. Pore, M.; Chakati, V.; Banerjee, A.; Gupta, S. Middleware for fog and edge computing: Design issues. In Fog and Edge Computing:
_Principles and Paradigms; Wiley: Hoboken, NJ, USA, 2019; pp. 123–144._
56. [Zimmermann, O. Microservices tenets. Comput. Sci. Res. Dev. 2017, 32, 301–310. [CrossRef]](http://dx.doi.org/10.1007/s00450-016-0337-0)
57. Chen, J.H.; Hsu, S.C.; Chen, C.L.; Tai, H.W.; Wu, T.H. Exploring the association rules of work activities for producing precast
[components. Autom. Constr. 2020, 111, 103059. [CrossRef]](http://dx.doi.org/10.1016/j.autcon.2019.103059)
58. Bowen, J.; Hinze, A.; Griffiths, C.; Kumar, V.; Bainbridge, D. Personal Data Collection in the Workplace: Ethical and Technical
Challenges. In Proceedings of the British Human Computer Interaction Conference, (BHCI), Sunderland, UK, 3–6 July 2017;
pp. 1–11.
59. Ajunwa, I.; Crawford, K.; Schultz, J. Limitless worker surveillance. Cal. L. Rev. 2017, 105, 735.
60. Shabha, G. A critical review of the impact of embedded smart sensors on productivity in the workplace. _Facilities_
**[2006, 24, 538–549. [CrossRef]](http://dx.doi.org/10.1108/02632770610705301)**
61. Pape, S.; Rannenberg, K. Applying privacy patterns to the internet of things’(iot) architecture. _Mob._ _Netw._ _Appl._
**[2019, 24, 925–933. [CrossRef]](http://dx.doi.org/10.1007/s11036-018-1148-2)**
62. Hermsen, S.; Frost, J.; Renes, R.J.; Kerkhof, P. Using feedback through digital technology to disrupt and change habitual behavior:
[A critical review of current literature. Comput. Hum. Behav. 2016, 57, 61–74. [CrossRef]](http://dx.doi.org/10.1016/j.chb.2015.12.023)
63. Holzinger, A. Interactive machine learning for health informatics: When do we need the human-in-the-loop? Brain Inform.
**[2016, 3, 119–131. [CrossRef]](http://dx.doi.org/10.1007/s40708-016-0042-6)**
64. [Zanzotto, F.M. Human-in-the-loop Artificial Intelligence. J. Artif. Intell. Res. 2019, 64, 243–252. [CrossRef]](http://dx.doi.org/10.1613/jair.1.11345)
65. Muangprathub, J.; Boonnam, N.; Kajornkasirat, S.; Lekbangpong, N.; Wanichsombat, A.; Nillaor, P. IoT and agriculture data
[analysis for smart farm. Comput. Electron. Agric. 2019, 156, 467–474. [CrossRef]](http://dx.doi.org/10.1016/j.compag.2018.12.011)
66. Doukas, C.; Maglogiannis, I. Bringing IoT and cloud computing towards pervasive healthcare. In Proceedings of the 2012 Sixth
International Conference on Innovative Mobile and Internet Services in Ubiquitous Computing, Palermo, Italy, 4–6 July 2012;
pp. 922–926.
67. Hiremath, S.; Yang, G.; Mankodiya, K. Wearable Internet of Things: Concept, architectural components and promises for
person-centered healthcare. In Proceedings of the 2014 4th International Conference on Wireless Mobile Communication and
Healthcare-Transforming Healthcare Through Innovations in Mobile and Wireless Technologies (MOBIHEALTH), Athens, Greece,
3–5 November 2014; pp. 304–307.
68. GhaffarianHoseini, A.; Dahlan, N.D.; Berardi, U.; GhaffarianHoseini, A.; Makaremi, N. The essence of future smart houses: From
[embedding ICT to adapting to sustainability principles. Renew. Sustain. Energy Rev. 2013, 24, 593–607. [CrossRef]](http://dx.doi.org/10.1016/j.rser.2013.02.032)
69. Sánchez-Corcuera, R.; Núñez-Marcos, A.; Sesma-Solance, J.; Bilbao-Jayo, A.; Mulero, R.; Zulaika, U.; Azkune, G.; Almeida, A.
Smart cities survey: Technologies, application domains and challenges for the cities of the future. Int. J. Distrib. Sens. Netw. 2019,
_[15, 1550147719853984. [CrossRef]](http://dx.doi.org/10.1177/1550147719853984)_
70. Simmers, C.A.; Anandarajan, M. The Internet of People, Things and Services: Workplace Transformations; Routledge: London, UK, 2018.
71. Young, W.; Davis, M.; McNeill, I.M.; Malhotra, B.; Russell, S.; Unsworth, K.; Clegg, C.W. Changing behaviour: Successful
[environmental programmes in the workplace. Bus. Strategy Environ. 2015, 24, 689–703. [CrossRef]](http://dx.doi.org/10.1002/bse.1836)
72. Timmer, D.; Appleby, D.; Timmer, V. Sustainable Lifestyles: Options & Opportunities in the Workplace; One planet; One Earth:
Vancouver, BC, Canada, 2018.
73. Leygue, C.; Ferguson, E.; Spence, A. Saving energy in the workplace: Why, and for whom? J. Environ. Psychol. 2017, 53, 50–62.
[[CrossRef]](http://dx.doi.org/10.1016/j.jenvp.2017.06.006)
74. Sparks, K.; Faragher, B.; Cooper, C.L. Well-being and occupational health in the 21st century workplace. J. Occup. Organ. Psychol.
**[2001, 74, 489–509. [CrossRef]](http://dx.doi.org/10.1348/096317901167497)**
75. Ilvesmaki, A. Drivers and challenges of personal health systems in workplace health promotion. In Proceedings of the 2007
_29th Annual International Conference of the IEEE Engineering in Medicine and Biology Society; IEEE: New York, NY, USA, 2007;_
pp. 5878–5881.
76. Appelbaum, E.; Batt, R.L. The new American Workplace: Transforming Work Systems in the United States; Cornell University Press:
Ithaca, NY, USA, 1994.
77. Maman, Z.S.; Yazdi, M.A.A.; Cavuoto, L.A.; Megahed, F.M. A data-driven approach to modeling physical fatigue in the workplace
[using wearable sensors. Appl. Ergon. 2017, 65, 515–529. [CrossRef] [PubMed]](http://dx.doi.org/10.1016/j.apergo.2017.02.001)
78. Taylor, W.; Suminski, R.; Das, B.; Paxton, R.; Craig, D. Organizational Culture and Implications for Workplace Interventions to
[Reduce Sitting Time Among Office-Based Workers: A Systematic Review. Front. Public Health 2018, 6, 263. [CrossRef] [PubMed]](http://dx.doi.org/10.3389/fpubh.2018.00263)
79. Stephenson, A.; McDonough, S.M.; Murphy, M.H.; Nugent, C.D.; Mair, J.L. Using computer, mobile and wearable technology
enhanced interventions to reduce sedentary behaviour: A systematic review and meta-analysis. Int. J. Behav. Nutr. Phys. Act.
**[2017, 14, 105. [CrossRef]](http://dx.doi.org/10.1186/s12966-017-0561-4)**
80. Holtermann, A.; Schellewald, V.; Mathiassen, S.E.; Gupta, N.; Pinder, A.; Punakallio, A.; Veiersted, K.B.; Weber, B.; Takala, E.P.;
Draicchio, F.; et al. A practical guidance for assessments of sedentary behavior at work: A PEROSH initiative. Appl. Ergon. 2017,
_[63, 41–52. [CrossRef]](http://dx.doi.org/10.1016/j.apergo.2017.03.012)_
81. Jimenez, P.; Bregenzer, A. Integration of eHealth tools in the process of workplace health promotion: Proposal for design and
[implementation. J. Med Internet Res. 2018, 20, e65. [CrossRef]](http://dx.doi.org/10.2196/jmir.8769)
82. Huang, Y.; Benford, S.; Blake, H. Digital Interventions to Reduce Sedentary Behaviors of Office Workers: Scoping Review.
_[J. Med. Internet Res. 2019, 21, e11079. [CrossRef]](http://dx.doi.org/10.2196/11079)_
83. Pedersen, S.J.; Cooley, P.D.; Mainsbridge, C. An e-health intervention designed to increase workday energy expenditure by
[reducing prolonged occupational sitting habits. Work 2014, 49, 289–295. [CrossRef] [PubMed]](http://dx.doi.org/10.3233/WOR-131644)
84. Howarth, A.; Quesada, J.; Silva, J.; Judycki, S.; Mills, P.R. The impact of digital health interventions on health-related outcomes in
[the workplace: A systematic review. Digit. Health 2018, 4, 2055207618770861. [CrossRef] [PubMed]](http://dx.doi.org/10.1177/2055207618770861)
85. Cheng, J.; Zhou, B.; Sundholm, M.; Lukowicz, P. Smart chair: What can simple pressure sensors under the chairs legs tell us
about user activity. In Proceedings of the UBICOMM13: The Seventh International Conference on Mobile Ubiquitous Computing,
Systems, Services and Technologies, Porto, Portugal, 29 September–3 October 2013; pp. 81–84.
86. Kaur, H.; Williams, A.C.; McDuff, D.; Czerwinski, M.; Teevan, J.; Iqbal, S. Optimizing for Happiness and Productivity: Modeling
Opportune Moments for Transitions and Breaks at Work. In Proceedings of the ACM Conference on Human Factors in Computing
Systems (CHI), Honolulu, HI, USA, 25–30 April 2020; pp. 1–15.
87. Roossien, C.; Stegenga, J.; Hodselmans, A.; Spook, S.; Koolhaas, W.; Brouwer, S.; Verkerke, G.; Reneman, M.F. Can a smart chair
[improve the sitting behavior of office workers? Appl. Ergon. 2017, 65, 355–361. [CrossRef] [PubMed]](http://dx.doi.org/10.1016/j.apergo.2017.07.012)
88. Ahmed, E.; Yaqoob, I.; Gani, A.; Imran, M.; Guizani, M. Internet-of-things-based smart environments: State of the art, taxonomy,
[and open research challenges. IEEE Wirel. Commun. 2016, 23, 10–16. [CrossRef]](http://dx.doi.org/10.1109/MWC.2016.7721736)
89. Podgorski, D.; Majchrzycka, K.; Dabrowska, A.; Gralewicz, G.; Okrasa, M. Towards a conceptual framework of OSH risk
management in smart working environments based on smart PPE, ambient intelligence and the Internet of Things technologies.
_[Int. J. Occup. Saf. Ergon. 2017, 23, 1–20. [CrossRef]](http://dx.doi.org/10.1080/10803548.2016.1214431)_
90. Irizar-Arrieta, A.; Casado-Mansilla, D. Coping with user diversity: UX informs the design of a digital interface that encourages
sustainable behaviour. In Proceedings of the 11th International Conference on Interfaces and Human Computer Interaction,
Lisbon, Portugal, 21–23 July 2017; pp. 1–8.
91. Irizar-Arrieta, A.; Casado-Mansilla, D.; Retegi, A. Accounting for User Diversity in the Design for Sustainable Behaviour in Smart
Offices. In Proceedings of the 2018 3rd International Conference on Smart and Sustainable Technologies (SpliTech), Split, Croatia,
26–29 June 2018; pp. 1–6.
92. Jönsson, L.; Broms, L.; Katzeff, C. Watt-Lite: Energy statistics made tangible. In Proceedings of the 8th ACM Conference on
Designing Interactive Systems, Aarhus, Denmark, 16–20 August 2010; pp. 240–243.
93. Morgan, E.; Webb, L.; Carter, K.; Goddard, N. Co-Designing a Device for Behaviour-Based Energy Reduction in a Large
[Organisation. Proc. ACM Hum. Comput. Interact. 2018, 2, 125. [CrossRef]](http://dx.doi.org/10.1145/3274394)
94. Casado-Mansilla, D.; Moschos, I.; Kamara-Esteban, O.; Tsolakis, A.C.; Borges, C.E.; Krinidis, S.; Irizar-Arrieta, A.; Kitsikoudis, K.;
Pijoan, A.; Tzovaras, D.; et al. A Human-centric & Context-aware IoT Framework for Enhancing Energy Efficiency in Buildings
[of Public Use. IEEE Access 2018, 6, 31444–31456. [CrossRef]](http://dx.doi.org/10.1109/ACCESS.2018.2837141)
95. Gómez-Carmona, O.; Casado-Mansilla, D.; García-Zubia, J. Opportunities and Challenges of Technology-based Interventions
to Increase Health-awareness in the Workplace. Transform. Ergon. Pers. Health Intell. Work. 2019, 25, 33.
96. Schall, M.C., Jr.; Sesek, R.F.; Cavuoto, L.A. Barriers to the adoption of wearable sensors in the workplace: A survey of occupational
[safety and health professionals. Hum. Factors 2018, 60, 351–362. [CrossRef] [PubMed]](http://dx.doi.org/10.1177/0018720817753907)
97. Helfrich, C.D.; Kohn, M.J.; Stapleton, A.; Allen, C.L.; Hammerback, K.E.; Chan, K.; Parrish, A.T.; Ryan, D.E.; Weiner, B.J.;
Harris, J.R.; et al. Readiness to change over time: Change commitment and change efficacy in a workplace health-promotion trial.
_[Front. Public Health 2018, 6, 110. [CrossRef] [PubMed]](http://dx.doi.org/10.3389/fpubh.2018.00110)_
98. Lindell, Y. Secure multiparty computation for privacy preserving data mining. In Encyclopedia of Data Warehousing and Mining;
IGI Global: Hershey, PA, USA, 2005; pp. 1005–1009.
99. Léauté, T.; Faltings, B. Protecting privacy through distributed computation in multi-agent decision making. J. Artif. Intell. Res.
**[2013, 47, 649–695. [CrossRef]](http://dx.doi.org/10.1613/jair.3983)**
},
{
"paperId": "b4d7bdbc839deeff3d975e73dac5ea16e02e7350",
"title": "Towards a conceptual framework of OSH risk management in smart working environments based on smart PPE, ambient intelligence and the Internet of Things technologies"
},
{
"paperId": "dfad8f616bd2a05c8cae5f61060f743f966ece85",
"title": "Realtime Multi-person 2D Pose Estimation Using Part Affinity Fields"
},
{
"paperId": "bc902a07d59b193bfa42f0de827fb3456a69148b",
"title": "Internet-of-things-based smart environments: state of the art, taxonomy, and open research challenges"
},
{
"paperId": "11f4ce4c244ecedf50d5289023715a4f485d5c26",
"title": "Wearable and ambient sensing for well-being and emotional awareness in the smart workplace"
},
{
"paperId": "9fc4a61235d16dd1adc45be39d017f53004cdc38",
"title": "Designing for Digital Transformation: Lessons for Information Systems Research from the Study of ICT and Societal Challenges"
},
{
"paperId": "d13f94652f07785977aa0b77ada0d69bd5e68ec0",
"title": "Using feedback through digital technology to disrupt and change habitual behavior: A critical review of current literature"
},
{
"paperId": "901a06b70e66af72dd44d4364645335e919e1410",
"title": "Limitless Worker Surveillance"
},
{
"paperId": "09755f549468209199565f8037061281080c968f",
"title": "Interactive machine learning for health informatics: when do we need the human-in-the-loop?"
},
{
"paperId": "13880df7b92a60cb8e66c67b60b57d0f9dd94fc0",
"title": "Middleware for Internet distribution in the context of cloud computing and the Internet of Things"
},
{
"paperId": "73bf233ef14984f184bcd433080c62a02d6af32f",
"title": "Changing Behaviour: Successful Environmental Programmes in the Workplace"
},
{
"paperId": "2fb8b35882e2f4bf06574323dc741cbacfa5e1a3",
"title": "Embedding intelligent eco-aware systems within everyday things to increase people’s energy awareness"
},
{
"paperId": "e6aeb3ab56152b1286344547613df2775c9c52f8",
"title": "Does perception of automation undermine pro-environmental behaviour? Findings from three everyday settings"
},
{
"paperId": "825ca26af5a2a510dbc1a7b97587212bc98ae968",
"title": "Power to the People: The Role of Humans in Interactive Machine Learning"
},
{
"paperId": "4feb9e0f8ad2921f592c964ece84197e6b557f48",
"title": "Wearable Internet of Things: Concept, architectural components and promises for person-centered healthcare"
},
{
"paperId": "42de908343d3cd4696205cd2dc212f57f9d7e2f1",
"title": "ARIIMA: A Real IoT Implementation of a Machine-Learning Architecture for Reducing Energy Consumption"
},
{
"paperId": "d9b52c488f15a71cf639fb3053d016cc53cb9958",
"title": "An e-health intervention designed to increase workday energy expenditure by reducing prolonged occupational sitting habits"
},
{
"paperId": "67644e7fdbd5c4a3d3a7686a242bb236389592f6",
"title": "Concern about security and privacy, and perceived control over collection and use of health information are related to withholding of health information from healthcare providers"
},
{
"paperId": "6d81fa9ae2f96560bd5f6bf9377c0243ac4f6d55",
"title": "A Survey on the Internet of Things Security"
},
{
"paperId": "c093b435fb16ff9d8623c1215a293e43f82f6fb8",
"title": "Smart Chair: What Can Simple Pressure Sensors under the Chairs' Legs Tell Us about User Activity?"
},
{
"paperId": "5594eca62f37d69421f5feba31f37cb963b3881a",
"title": "The essence of future smart houses: from embedding ICT to adapting to sustainability principles"
},
{
"paperId": "5b95941435d64a6fc1723ad6bc80f66d9e227caf",
"title": "Protecting Privacy through Distributed Computation in Multi-agent Decision Making"
},
{
"paperId": "207ea0115bf4388d11f0ab4ddbfd9fd00de5e8d1",
"title": "Fog computing and its role in the internet of things"
},
{
"paperId": "5f25c219f94a7659388a33c25232e55030729f7f",
"title": "Smart Spaces and Smart Objects Interoperability Architecture (S3OiA)"
},
{
"paperId": "4132551dc6a891c62979c4f2c8a07f5d5cc90b6d",
"title": "Bringing IoT and Cloud Computing towards Pervasive Healthcare"
},
{
"paperId": "e1859bd14f2b60c6c6694160037f0942bc0302b9",
"title": "Digital transformation: opportunities to create new business models"
},
{
"paperId": "54dc501b4d0b52158e9c53d3ca406374942bffcc",
"title": "Rule Engine based on improvement Rete algorithm"
},
{
"paperId": "325f8e305ad8df705b69729badabb58d8fcbd521",
"title": "Rethinking education in the age of technology: The digital revolution and schooling in America"
},
{
"paperId": "52212011534777610963d045b8672fb8ce530a11",
"title": "Watt-Lite: energy statistics made tangible"
},
{
"paperId": "5deef74e922df23a636a3fd4e33c119247de8d30",
"title": "A view of cloud computing"
},
{
"paperId": "2dca68222df5745e595cee3dfbc1a28f836f5f3c",
"title": "An algebra for fine-grained integration of XACML policies"
},
{
"paperId": "a6f644f6e739fa73ada11dc4c85b812b31f63d53",
"title": "Secure Multiparty Computation for Privacy-Preserving Data Mining"
},
{
"paperId": "5096bb92951f90324c9c86367c463a2db9a5e3b6",
"title": "Drivers and Challenges of Personal Health Systems in Workplace Health Promotion"
},
{
"paperId": "e2f30502add268f953eed90cb14bb3ef141f1522",
"title": "Innovation diffusion in global contexts: determinants of post-adoption digital transformation of European companies"
},
{
"paperId": "20bd0053cef9ce9f8e0e35a9cccddb4b67d076b2",
"title": "A critical review of the impact of embedded smart sensors on productivity in the workplace"
},
{
"paperId": "4c36160d588084ff2240cef7a0639d890b42a171",
"title": "Well‐being and occupational health in the 21st century workplace"
},
{
"paperId": "e664af0e871ea3346320f3a16be2d85db69139ee",
"title": "Privacy by Design - Principles of Privacy-Aware Ubiquitous Systems"
},
{
"paperId": "745dd50a9bab83af48686e97710889bdc73ccb75",
"title": "The New American Workplace: Transforming Work Systems in the United States."
},
{
"paperId": "8bcc5e83cce5a5ebe07f7f4751eeb290a8cd7b6d",
"title": "The New American Workplace: Transforming Work Systems in the United States"
},
{
"paperId": "54800bbbcfd4b2bde01fd7a748f5a89ddfa1a2a2",
"title": "Research on the Architecture of Iot Middleware Platform Based on BeiDou Navigation Satellite System"
},
{
"paperId": "aac90c156cafb1e507d4109726e53321390429bb",
"title": "SmartWorkplace: A Privacy-based Fog Computing Approach to Boost Energy Efficiency and Wellness in Digital Workspaces"
},
{
"paperId": null,
"title": "Opportunities and Challenges of Technology-based Interventions to Increase Health-awareness in the Workplace"
},
{
"paperId": "04d65cc284e7fbf7aca3de88353fa14357bb2a11",
"title": "Sustainable Lifestyles: Options & Opportunities in the Workplace"
},
{
"paperId": "5450244ef4e45865da945f5243c7af74fd5161c2",
"title": "Industry 4.0: Managing The Digital Transformation"
},
{
"paperId": null,
"title": "The Internet of People, Things and Services: Workplace Transformations"
},
{
"paperId": "f713c351c1b97c87f5da8163ab122c499d5f11c1",
"title": "Internet of Things: Survey on Security and Privacy"
},
{
"paperId": "226eae435788d64fd890a41cb050be341c2419b3",
"title": "COPING WITH USER DIVERSITY : UX INFORMS THE DESIGN OF A DIGITAL INTERFACE THAT ENCOURAGES SUSTAINABLE BEHAVIOUR"
},
{
"paperId": "3d5642dd8721253861a16bb53a9d2a998335dca4",
"title": "Rethinking Education in the Age of Technology: The Digital Revolution and Schooling in America"
},
{
"paperId": "a3b720bbcdf7b2a349c28dacd3b8707f05d600fe",
"title": "Reference Architectures for the Internet of Things"
},
{
"paperId": "618d2f5722bad184626c21f5572250b6d1733a0d",
"title": "Self-Driving Cars: The Digital Transformation of Mobility"
},
{
"paperId": "96ded74141a2113e0add275a37541f1355aa13a0",
"title": "Epidemia: Variable Consistency for Transactional Cloud Databases"
},
{
"paperId": null,
"title": "For a tasty but efficient coffee"
},
{
"paperId": "cf8b96dabf484fc28049eebd3afc86e50bba45c8",
"title": "The Digital Transformation of Healthcare: Current Status and the Road Ahead"
},
{
"paperId": "44ad1e6062337026aba57e25c3afe12cef63d920",
"title": "The Rule Engine for the Java Platform"
},
{
"paperId": "9c38296531417331fa6e74c1fdccc087912a58dd",
"title": "Promoting Environmentally Responsible Behaviors Using Motivational Interviewing Techniques"
},
{
"paperId": null,
"title": "A brief version of the environmental attitudes inventory"
},
{
"paperId": "9f54677fc97e2d51a7e4f6c9c783dc53055811c8",
"title": "Privacy"
},
{
"paperId": null,
"title": "Persuasive feedback: This configuration combined subtle visual hints with ambient feedback provided in real-time to persuade the user to decide when the coffee machine should be turned off"
},
{
"paperId": null,
"title": "Including humans in the loop: The system must consider user preferences and behavior, which requires a shift from infrastructure-centric to human-centric"
},
{
"paperId": null,
"title": "GreenSoul: Persuasive Eco-awareness for User Engagement through Networked Data Devices"
}
] | 27,335
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/fff54102e9e269a3f9c78616af03b90cb8d5d602
|
[
"Computer Science"
] | 0.815324
|
Efficient Image Representation Learning with Federated Sampled Softmax
|
fff54102e9e269a3f9c78616af03b90cb8d5d602
|
arXiv.org
|
[
{
"authorId": "40612048",
"name": "S. Waghmare"
},
{
"authorId": "47935745",
"name": "Qi"
},
{
"authorId": "49177577",
"name": "Huizhong Chen"
},
{
"authorId": "89903811",
"name": "Mikhail Sirotenko"
},
{
"authorId": "2158169261",
"name": "Tomer Meron"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"ArXiv"
],
"alternate_urls": null,
"id": "1901e811-ee72-4b20-8f7e-de08cd395a10",
"issn": "2331-8422",
"name": "arXiv.org",
"type": null,
"url": "https://arxiv.org"
}
|
Learning image representations on decentralized data can bring many benefits in cases where data cannot be aggregated across data silos. Softmax cross entropy loss is highly effective and commonly used for learning image representations. Using a large number of classes has proven to be particularly beneficial for the descriptive power of such representations in centralized learning. However, doing so on decentralized data with Federated Learning is not straightforward as the demand on FL clients' computation and communication increases proportionally to the number of classes. In this work we introduce federated sampled softmax (FedSS), a resource-efficient approach for learning image representation with Federated Learning. Specifically, the FL clients sample a set of classes and optimize only the corresponding model parameters with respect to a sampled softmax objective that approximates the global full softmax objective. We examine the loss formulation and empirically show that our method significantly reduces the number of parameters transferred to and optimized by the client devices, while performing on par with the standard full softmax method. This work creates a possibility for efficiently learning image representations on decentralized data with a large number of classes under the federated setting.
|
## EFFICIENT IMAGE REPRESENTATION LEARNING WITH FEDERATED SAMPLED SOFTMAX
**Sagar M. Waghmare** Hang Qi Huizhong Chen
Google Research Google Research Google Research
sagarwaghmare@google.com hangqi@google.com huizhongc@google.com
Mikhail Sirotenko Tomer Meron∗
Google Research Google Research
msirotenko@google.com tomer.meron@gmail.com
##### ABSTRACT
Learning image representations on decentralized data can bring many benefits in cases where data
cannot be aggregated across data silos. Softmax cross entropy loss is highly effective and commonly
used for learning image representations. Using a large number of classes has proven to be particularly beneficial for the descriptive power of such representations in centralized learning. However,
doing so on decentralized data with Federated Learning is not straightforward as the demand on FL
clients’ computation and communication increases proportionally to the number of classes. In this
work we introduce federated sampled softmax (FedSS), a resource-efficient approach for learning
image representation with Federated Learning. Specifically, the FL clients sample a set of classes
and optimize only the corresponding model parameters with respect to a sampled softmax objective
that approximates the global full softmax objective. We examine the loss formulation and empirically show that our method significantly reduces the number of parameters transferred to and
optimized by the client devices, while performing on par with the standard full softmax method.
This work creates a possibility for efficiently learning image representations on decentralized data
with a large number of classes under the federated setting.
##### 1 Introduction
The success of many computer vision applications, such as classification [Kolesnikov et al., 2020, Yao et al., 2019,
Huang et al., 2016], detection [Lin et al., 2014, Zhao et al., 2019, Ouyang et al., 2016], and retrieval [Sohn, 2016,
Song et al., 2016, Musgrave et al., 2020], relies heavily on the quality of the learned image representation. Many
methods have been proposed to learn better image representation from centrally stored datasets. For example, the
contrastive [Chopra et al., 2005] and the triplet losses [Weinberger and Saul, 2009, Qian et al., 2019] enforce local
constraints among individual instances while taking a long time to train on O(N²) pairs and O(N³) triplets for N labeled training examples in a minibatch, respectively. A more efficient loss function for training image representations is the softmax cross entropy loss, which involves only O(N) inputs. Today’s top performing computer vision models [Kolesnikov et al., 2020, Mahajan et al., 2018, Sun et al., 2017] are trained on centrally stored large-scale datasets
using the classification loss. In particular, using an extremely large number of classes has proven to be beneficial for
learning universal feature representations [Sun et al., 2017].
However, a few challenges arise when learning such image representations with the classification loss under the cross-device federated learning scenario [Kairouz et al., 2019] where the clients are edge devices with limited computational
resources, such as smartphones. First, a typical client holds data from only a small subset of the classes due to
the nature of non-IID data distribution among clients [Hsieh et al., 2020, Hsu et al., 2019]. Second, as the size of
the label space increases, the communication cost and computation required to train the model will grow
proportionally. Particularly for ConvNets the total number of parameters in the model will be dominated by those in
its classification layer [Krizhevsky, 2014]. Given these constraints, for an FL algorithm to be practical it needs to be
resilient to the growth of the problem scale.
_∗Work done while at Google._
Figure 1: A FedSS training round: The client sends a set of obfuscated class labels S_k to the FL server and receives the feature extractor ϕ and a few columns W_{S_k}, corresponding to the classes in S_k, from the weight matrix of the classification layer. The client optimizes this sub network with the sampled softmax loss and then communicates back the
model update to the server. The server aggregates the model updates from all the selected clients to construct a new
global model for the next round.
In this paper, we propose a method called federated sampled softmax (FedSS) for using the classification loss efficiently in the federated setting. Inspired by sampled softmax [Bengio and Senécal, 2008], which uses only a subset of
the classes for training, we devise a client-driven negative class sampling mechanism and formulate a sampled softmax
loss for federated learning. Figure 1 illustrates the core idea. The FL clients sample negative classes and request a sub
network from the FL server by sending a set of class labels that anonymizes the client’s positive class labels in its local
dataset. The clients then optimize a sampled softmax loss that involves both the clients’ sampled negative classes as
well as its local positive classes to approximate the global full softmax objective.
To the best of our knowledge, this is the first work addressing the intersection of representation learning with Federated
Learning and resource efficient sampled softmax training. Our contributions are:
1. We propose a novel federated sampled softmax algorithm, which extends the image representation learning
via large-scale classification loss to the federated learning scenario.
2. Our method performs on-par with full softmax training, while requiring only a fraction of its cost. We evaluate
our method empirically and show that less than 10% of the parameters from the classification layer can be
sufficient to get comparable performance.
3. Our method is resilient to the growth of the label space and makes it feasible for applying Federated Learning
to train image representation and classification models with large label spaces.
##### 2 Related Work
**Large scale classification.** The scale of a classification problem could be defined by the total number of classes
involved, number of training samples available or both. Large vocabulary text classification is well studied in the
natural language processing domain [Bengio and Senécal, 2008, Liu et al., 2017, Jean et al., 2015, Zhang et al.,
2018]. On the contrary, image classification is well studied with small to medium number of classes [LeCun et al.,
1998, Krizhevsky et al., Russakovsky et al., 2015] while only a handful of works [Kolesnikov et al., 2020, Hinton
et al., 2015, Mahajan et al., 2018, Sun et al., 2017] address training with large number of classes. Training image
classification with a significant number of classes requires a large amount of computational resources. For example,
Sun et al. [2017] splits the last fully connected layer into sub layers, distributes them on multiple parameter servers
and uses asynchronous SGD for distributed training on 50 GPUs. In this work, we focus on a cross-device FL scenario
and adopt sampled softmax to make the problem affordable for the edge devices.
**Representation learning.** The majority of works on learning image representations are based on classification
loss [Kolesnikov et al., 2020, Hinton et al., 2015, Mahajan et al., 2018] and metric learning objectives [Oh Song
et al., 2016, Qian et al., 2019]. Using full softmax loss with a large number of classes in the FL setting can be very expensive and sometimes infeasible for two main reasons: (i) exorbitant cost of communication and storage on the clients
can be imposed by the classification layer’s weight matrix; (ii) edge devices like smartphones typically do not have
computational resources required to train on such scale. On the other hand, for metric learning methods [Oh Song
et al., 2016, Qian et al., 2019] to be effective, extensive hard sample mining from quadratic/cubic combinations of
the samples [Sheng et al., 2020, Schroff et al., 2015, Qian et al., 2019] is typically needed. This requires considerable computational resources as well. Our federated sampled softmax method addresses these issues by efficiently
approximating the full softmax objective.
**Federated learning for large scale classification.** The closest related work to ours is Yu et al. [2020], which considers
the classification problem with large number of classes in the FL setting. They make two assumptions: (a) every
client holds data for a single fixed class label (e.g. user identity); (b) along with the feature extractor only the class
representation corresponding to the client’s class label is transmitted to and optimized by the clients. We relax these
assumptions in our work since we focus on learning generic image representation rather than individually sensitive
users’ embedding. We assume that the clients hold data from multiple classes and the full label space is known to all
the clients as well as the FL server. In addition, instead of training individual class representations we formulate a
sampled softmax objective to approximate the global full softmax cross-entropy objective.
##### 3 Method
**3.1** **Background and Motivation**
**Softmax cross-entropy and the parameter dominance. Consider a multi-class classification problem with n classes**
where for a given input x only one class is correct y ∈ [0, 1][n] with [�]i[n]=1 _[y][i][ = 1][. We learn a classifier that computes]_
a d-dimensional feature representation f (x) ∈ R[d] and logit score oi = wi[T] _[f]_ [(][x][) +][ b][ ∈] [R][ for every class][ i][ ∈] [[][n][]][. A]
softmax distribution is formed by the class probabilities computed from the logit scores using the softmax function
_pi =_ �nexp(oi) _i ∈_ [n]. (1)
_j=1_ [exp(][o][j][)] _[,]_
Let t ∈ [n] be the target class label for the input x such that y_t = 1. The softmax cross-entropy loss for the training example (x, y) is defined as

$$\mathcal{L}(x, y) = -\sum_{i=1}^{n} y_i \log p_i = -o_t + \log \sum_{j=1}^{n} \exp(o_j). \tag{2}$$
The second term involves computing the logit scores for all n classes. As the number of classes n increases, so does the number of columns in the weight matrix W ≡ [w_1, w_2, . . ., w_n] ∈ R^{d×n} of the classification layer. The complexity of computing this full softmax loss grows linearly as well.
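As a quick numerical check of Eqs. (1)–(2), the sketch below (shapes, weights, and the target class are illustrative assumptions, not values from the paper) confirms that the negative log-probability form and the log-sum-exp form of the loss agree:

```python
import numpy as np

rng = np.random.default_rng(0)

n, d = 1000, 64                      # number of classes, feature dimension
W = 0.01 * rng.normal(size=(d, n))   # classification weight matrix [w_1 ... w_n]
b = np.zeros(n)
f_x = rng.normal(size=d)             # feature representation f(x)
t = 42                               # target class label

o = W.T @ f_x + b                    # logits o_i = w_i^T f(x) + b for all n classes

# Eq. (1): softmax probabilities (shifted by max(o) for numerical stability).
p = np.exp(o - o.max()) / np.exp(o - o.max()).sum()

# Eq. (2): -sum_i y_i log p_i reduces to -log p_t, which equals the
# log-sum-exp form -o_t + log sum_j exp(o_j).
loss_nll = -np.log(p[t])
loss_lse = -o[t] + np.log(np.sum(np.exp(o)))

assert np.isclose(loss_nll, loss_lse)
```

Note that computing `o` touches every column of `W`, which is exactly the O(n) cost that motivates sampling only a subset of classes.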
Moreover, for a typical ConvNet classifier over n classes, the classification layer dominates the total number of parameters in the model as n increases, because the convolutional layers typically have small filters and comparatively few parameters (see Figure 9 in A.1 for concrete examples). This motivates us to use an alternative loss function to overcome the growing compute and communication complexity in the cross-device federated learning scenario.
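A back-of-the-envelope count makes the dominance concrete; the backbone size and feature dimension below are assumptions for illustration, not the numbers from Figure 9:

```python
# Parameter count of a hypothetical ConvNet classifier: a fixed-size feature
# extractor plus a d x n classification matrix that grows with the label space.
d = 1280                       # assumed feature dimension
backbone_params = 4_000_000    # assumed feature-extractor parameter count

for n in (1_000, 100_000, 1_000_000):
    head_params = d * n        # parameters in the classification layer W
    frac = head_params / (head_params + backbone_params)
    print(f"n={n:>9,}: classification layer = {head_params:>13,} params ({frac:.0%} of model)")
```

With a million classes the classification layer alone holds over 99% of the parameters, which is why transmitting only a thin slice of W per client pays off.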
**Sampled softmax.** Sampled softmax [Bengio and Senécal, 2008] was originally proposed for training probabilistic language models on datasets with large vocabularies. It reduces the computation and memory requirement by approximating the class probabilities using a subset of negative classes N whose size is m ≡ |N| ≪ n. These negative classes are sampled from a proposal distribution Q, with q_i being the sampling probability of class i. Using the adjusted logits o'_j = o_j − log(m q_j), ∀j ∈ N, the target class probability can be approximated with

$$p'_t = \frac{\exp(o'_t)}{\exp(o'_t) + \sum_{j \in \mathcal{N}} \exp(o'_j)}. \tag{3}$$
This leads to the sampled softmax cross-entropy loss

$$\mathcal{L}_{\mathrm{sampled}}(x, y) = -o'_t + \log \sum_{j \in \mathcal{N} \cup \{t\}} \exp(o'_j). \tag{4}$$
Note that the sampled softmax gradient is a biased estimator of the full softmax gradient. The bias decreases as m increases. The estimator is unbiased only when the negatives are sampled from the full softmax distribution [Blanc and Rendle, 2018] or as m → ∞ [Bengio and Senécal, 2008].
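The following sketch instantiates Eqs. (3)–(4) with a uniform proposal Q; the sizes and logit values are made up, and note that only the sampled negatives j ∈ N receive the −log(m q_j) adjustment:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, t = 1000, 20, 42                 # classes, sampled negatives, target class

o = rng.normal(size=n)                 # full logit scores (kept only for reference)

# Uniform proposal over the non-target classes: q_j = 1 / (n - 1) for all j.
candidates = np.delete(np.arange(n), t)
N = rng.choice(candidates, size=m, replace=False)
q = 1.0 / (n - 1)

o_neg = o[N] - np.log(m * q)           # adjusted logits o'_j = o_j - log(m q_j)
o_t = o[t]                             # target logit

# Eq. (3): approximate target probability over just m + 1 candidate classes.
p_t = np.exp(o_t) / (np.exp(o_t) + np.sum(np.exp(o_neg)))

# Eq. (4): sampled softmax loss; only m + 1 logits are needed, not n.
loss_sampled = -np.log(p_t)

# The full softmax loss of Eq. (2) that Eq. (4) approximates.
loss_full = -o_t + np.log(np.sum(np.exp(o)))
```

The adjustment term −log(m q_j) makes ∑_{j∈N} exp(o'_j) an unbiased estimate of the summed negative exponentials, so `loss_sampled` tracks `loss_full` more closely as m grows.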
**3.2** **Federated Sampled Softmax (FedSS)**
Now we discuss our proposed federated sampled softmax (FedSS) algorithm, listed in Algorithm 1, which adopts sampled softmax in the federated setting by incorporating negative sampling under the FedAvg [McMahan et al., 2017] framework, the standard algorithmic framework in federated learning.
One of the main characteristics of FedAvg is that all the clients receive and optimize the exact same model. To allow
efficient communication and local computing, our federated sampled softmax algorithm transmits a much smaller
sub network to the FL clients for local optimization. Specifically, we view ConvNet classifiers parameterized by
θ = (ϕ, W) as two parts: a feature extractor f(x; ϕ) : R^{h×w×c} → R^d parameterized by ϕ that computes a d-dimensional feature given an input image, and a linear classifier parameterized by a matrix W ∈ R^{d×n} that outputs logits for class prediction². The FL clients, indexed by k, train sub networks parameterized by (ϕ, W_{S_k}), where W_{S_k} contains a subset of the columns in W, rather than training the full model. With this design, federated sampled softmax is more communication-efficient than FedAvg since the full model is never transmitted to the clients, and more computation-efficient because the clients never compute gradients of the full model.
In every FL round, every participating client first samples a set of negative classes N_k ⊂ [n]/P_k that does not overlap with the class labels P_k = {t : (x, y) ∈ D_k, y_t = 1, t ∈ [n]} in its local dataset D_k. The client then communicates the union of these two disjoint sets S_k = P_k ∪ N_k to the FL server to request a model for local optimization. The server subsequently sends back the sub network (ϕ, W_{S_k}): all the parameters of the feature extractor together with a classification matrix that consists of the class vectors corresponding to the labels in S_k.
**Algorithm 1: Federated sampled softmax (FEDSS).** The key differences to FedAvg are lines 5–7, where the clients request and optimize different sub networks locally. η and α are the client and server learning rates, respectively.
**1** Initialize θ_0 = (ϕ, W), where ϕ is the parameter of the feature extractor and W is the classification matrix.
**2** **for** each round t = 0, 1, . . . **do**
**3**   Select K participating clients.
**4**   **for** each client k = 1, 2, . . ., K **do in parallel**
**5**     Client k samples negatives N_k.
**6**     Client k requests the model wrt S_k = P_k ∪ N_k.
**7**     The server sends back the model θ_t^{(k)} = (ϕ, W_{S_k}).
**8**     Start local optimization with θ^{(k)} ← θ_t^{(k)}.
**9**     **for** each local mini-batch b over E epochs **do**
**10**      θ^{(k)} ← θ^{(k)} − η ∇L_sampled^{(k)}(b; θ^{(k)})
**11**    Δθ^{(k)} ← θ_t^{(k)} − θ^{(k)}
**12**  ḡ_t ← ∑_{k=1}^{K} (n_k / n) Δθ^{(k)}, where n = ∑_{k=1}^{K} n_k
**13**  θ_{t+1} ← θ_t − α ḡ_t
Then every client trains its sub network by minimizing the following sampled softmax loss on its local dataset

$$\mathcal{L}^{(k)}_{\mathrm{FedSS}}(x, y) = -o'_t + \log \sum_{j \in S_k} \exp(o'_j), \tag{5}$$
after which the same procedure as FedAvg is used for aggregating model updates from all the participating clients.
In our federated sampled softmax algorithm, the set of positive classes Pk is naturally constituted by all the class
labels from the client’s local dataset, whereas the negative classes Nk are sampled by each client individually. Next
we discuss negative sampling and the use of positive classes in the following two subsections respectively.
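One round of the client–server exchange (Algorithm 1, lines 5–8) can be sketched as follows; the label counts, dimensions, and single-example loss are illustrative assumptions, and the feature extractor is reduced to a fixed vector for brevity:

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, m = 1000, 64, 20                     # label-space size, feature dim, negatives

# Server-side global classification matrix W (the feature extractor is omitted).
W = 0.01 * rng.normal(size=(d, n))

# --- Client k: build the request set S_k = P_k ∪ N_k -----------------------
P_k = np.array([3, 17, 256])               # positive labels present in D_k
pool = np.setdiff1d(np.arange(n), P_k)     # [n] / P_k
N_k = rng.choice(pool, size=m, replace=False)
S_k = np.concatenate([P_k, N_k])           # sent to the server; positives are masked

# --- Server: reply with the sub network, here just the columns W_{S_k} -----
W_sub = W[:, S_k].copy()                   # shape d x (|P_k| + m), never d x n

# --- Client: sampled softmax loss over S_k (Eq. 5) for one example ---------
f_x = rng.normal(size=d)                   # stands in for f(x; phi)
t_idx = 0                                  # position of the target class within S_k
o = W_sub.T @ f_x
loss = -o[t_idx] + np.log(np.sum(np.exp(o)))

assert W_sub.shape == (d, len(P_k) + m)    # only a fraction of W is transmitted
```

The client would then run local SGD on this sub network and return its delta for FedAvg-style aggregation, as in lines 9–13 of Algorithm 1.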
**3.3** **Client-driven uniform sampling of negative classes**
For centralized learning, proposal distributions and sampling algorithms are designed for efficient sampling of negatives or high quality estimations of the full softmax gradients. For example, Jean et al. [2015] partition the training
corpus and define non-overlapping subsets of class labels as sampling pools. The algorithm is efficient once implemented, but the proposal distribution imposes sampling bias which is not mitigable even as m → ∞. Alternatively,
²We omit the bias term in the discussion without loss of generality.
efficient kernel-based algorithms [Blanc and Rendle, 2018, Rawat et al., 2019] yield unbiased estimators of the full
softmax gradients by sampling from the softmax distribution. These algorithms depend on both the current model
parameters (ϕ, W ) and the current raw input x for computing feature vectors and logit scores. However, this is not
feasible in the FL scenario, on the one hand due to the lack of resources on FL clients for receiving the full model, on the
other hand due to the constraint of keeping raw inputs only on the devices.
In the FedSS algorithm, we assume the label space is known and take a client-driven approach, where every participating FL client uniformly samples negative classes N_k from [n]/P_k. Using a uniform distribution over the entire label space is a simple yet effective choice that does not incur sampling bias. The bias on the gradient estimation can be mitigated by increasing m (see 4.5 for an empirical analysis). Moreover, N_k can be viewed as noisy samples from the maximum-entropy distribution over [n]/P_k that mask the client’s positive class labels. From the server’s perspective, it is not able to identify which labels in S_k belong to the client’s dataset.
techniques [Chor et al., 1995] can further be used such that no identity information about the set is revealed to the
server. The sampling procedure can be performed on every client locally and independently without requiring peer
information or the current latest model from the server.
**3.4** **Inclusion of positives in local optimization**
When computing the federated sampled softmax loss, including the set of positive class labels P_k in Eq. 5 is crucial. To see this, Eq. 5 can be equivalently written as follows (shown in A.5)

$$\mathcal{L}^{(k)}_{\mathrm{FedSS}}(x, y) = \log\Big(1 + \sum_{j \in S_k / \{t\}} \exp(o'_j - o'_t)\Big). \tag{6}$$
Minimizing this loss function pulls the input image representation f(x; ϕ) and the target class representation w_t closer, while pushing the representations of the negative classes W_{S_k/{t}} away from f(x; ϕ). Utilizing P_k/{t} as an additional set of negatives to compute this loss encourages the separation of the classes in P_k with respect to each other as well as with respect to the classes in N_k (Figure 2d).
(a) Input-dependent (b) NegOnly (c) PosOnly (d) FedSS (Ours)
Figure 2: The set of classes providing pushing forces for the local training under different sampled softmax loss
formulations. (a) Input-dependent negative classes (depicted by the red squares) are sampled wrt to the inputs and
current model, not feasible in the FL setting. (b) Only using the sampled negatives reduces the problem to a binary
classification. (c) Using only the local positives lets the local objectives diverge from the global one. (d) FedSS
approximates the global objective with sampled negative classes together with local positives.
Alternatively, not using Pk/{t} as additional negatives leads to a negatives-only loss function
$$\mathcal{L}_{NegOnly}^{(k)}(x, y) = \log\Big(1 + \sum_{j \in N_k} \exp(o'_j - o'_t)\Big), \tag{7}$$
where t ∈ Pk only contributes to computing the true logit for individual inputs, while the same Nk is shared across all inputs (Figure 2b). Minimizing this negatives-only loss admits trivial solutions for a client’s local optimization: it only encourages separating the target class representations WPk from the negative class representations WNk, which can be achieved simply by increasing the magnitudes of the former and reducing those of the latter. In addition, the learned representations can collapse, as the local optimization reduces to a binary classification problem between the on-client classes Pk and the off-client classes Nk.
-----
In contrast, using only the local positives Pk without the sampled negative classes Nk gives
$$\mathcal{L}_{PosOnly}^{(k)}(x, y) = \log\Big(1 + \sum_{j \in P_k/\{t\}} \exp(o'_j - o'_t)\Big). \tag{8}$$
Minimizing this loss function solves the client’s local classification problem, which diverges from the global objective (Figure 2c), especially when Pk remains fixed over FL rounds and |Pk| ≪ n.
##### 4 Experiments
**4.1** **Setup**
**Notations and Baseline methods.** We denote our proposed algorithm as FedSS where both the sampled negatives
and the local positives are used in computing the client’s sampled softmax loss. We compare our method with the
following alternatives:
- NegOnly: The client’s objective is defined by sampled negative classes only (Eq. 7).
- PosOnly: The client’s objective is defined by the local positive classes only; no negative classes are sampled
(Eq. 8).
- FedAwS [Yu et al., 2020]: Client optimization is the same as in PosOnly, but a spreadout regularization is
applied on the server.
In addition, we also provide two reference baselines:
- FullSoftmax: The client’s objective is the full softmax cross-entropy loss (Eq. 2), serving as performance
references when it is affordable for clients to compute the full model.
- Centralized : A model is trained with the full softmax cross-entropy loss (Eq. 2) in a centralized fashion using
IID data batches.
**Evaluation protocol.** We conduct experiments on two computer vision tasks: multi-class image classification and
image retrieval. Performance is evaluated on the test splits of the datasets, which have no sample overlap with the
corresponding training splits. We report the mean and standard deviation of the performance metrics from three
independent runs. For the FullSoftmax and Centralized baselines, we report the best result from three independent
runs. Please see A.2 for implementation details.
**4.2** **Multi-class Image Classification**
For multi-class classification we use the Landmarks-User-160K [Hsu et al., 2020] and report top-1 accuracy on its test
split. Landmarks-User-160k is a landmark recognition dataset created for FL simulations. It consists of 1,262 natural
clients based on image authorship. On average, every client contains 130 images distributed across 90 class labels.
For our experiments, K = 64 clients are randomly selected to participate in each FL round. We train for a total of 5,000 rounds, which is sufficient for reaching convergence.
| \|Sk\| (% of n) | 95 (4.7%) | 100 (4.9%) | 110 (5.4%) | 130 (6.4%) | 170 (8.4%) |
|---|---|---|---|---|---|
| FedSS (Ours) | 51.7 ± 0.4 | 53.3 ± 0.6 | 54.9 ± 0.3 | 55.3 ± 0.6 | **56.0 ± 0.06** |
| NegOnly | 7.1 ± 3.7 | 18.7 ± 0.4 | 22.0 ± 0.8 | 25.0 ± 0.4 | 26.5 ± 1.4 |
| PosOnly | 43.1 ± 0.2 | | | | |
| FedAwS [Yu et al., 2020] | 42.5 ± 0.4 | | | | |
| FullSoftmax | 56.8 | | | | |
| Centralized | 59.5 | | | | |

Table 1: Top-1 accuracy (%) on Landmarks-Users-160k at the end of 5k FL rounds. PosOnly and FedAwS have ∼4.4% of class representations on the clients, whereas FullSoftmax has all the class representations.
Table 1 summarizes the top-1 accuracy on the test split. For FedSS and NegOnly we report accuracy across different |Sk|. Overall, we observe that our method performs similarly to the FullSoftmax baseline while requiring only a
-----
[Figure 3 shows learning curves (top-1 accuracy for Landmarks, MAP@10 for SOP) over FL rounds for FedSS (Ours), NegOnly, PosOnly, FedAwS, and FullSoftmax.]

Figure 3: Learning curves for the different methods at an average number of classes |Sk| on the clients: (a) Landmarks, |Sk| = 110; (b) SOP, |Sk| = 40. The PosOnly, FedAwS and FullSoftmax methods have |Pk|, |Pk| and n classes on the clients, respectively.
fraction of the classes on the clients. Our FedSS formulation also outperforms the alternative NegOnly, PosOnly and
FedAwS formulations by a large margin. Approximating the full softmax loss with FedSS does not degrade the rate of convergence either, as seen in Figure 3a. Additionally, Figure 4a shows learning curves for FedSS with different |Sk|. Learning with a sufficiently large |Sk| closely follows the performance of the FullSoftmax baseline. We also report performance on ImageNet-21k [Deng et al., 2009] in A.3.
**4.3** **Image Retrieval**
| \|Sk\| (% of n) | 25 (0.22%) | 30 (0.27%) | 40 (0.35%) | 60 (0.53%) | 100 (0.88%) |
|---|---|---|---|---|---|
| FedSS (Ours) | 25.2 ± 0.2 | 25.8 ± 0.2 | 26.1 ± 0.1 | 26.4 ± 0.12 | **26.5 ± 0.03** |
| NegOnly | 15.5 ± 0.2 | 16.2 ± 0.1 | 16.3 ± 0.1 | 16.5 ± 0.04 | 16.7 ± 0.17 |
| PosOnly | 19.7 ± 0.09 | | | | |
| FedAwS [Yu et al., 2020] | 20.0 ± 0.04 | | | | |
| FullSoftmax | 25.7 | | | | |
| Centralized | 25.4 | | | | |

Table 2: MAP@10 on the SOP dataset at the end of 2k FL rounds.
The Stanford Online Products dataset [Song et al., 2016] has 120,053 images of 22,634 online products as the classes.
The train split includes 59,551 images from 11,318 classes, while the test split includes 11,316 different classes with
60,502 images in total. For FL experiments, we partition the train split into 596 clients, each containing 100 images
distributed across 20 class labels. For each FL round, K = 32 clients are randomly selected. Similar to the metric learning literature, we use nearest neighbor retrieval to evaluate the models. Every image in the test split is used as a query image against the remaining ones. We use normalized Euclidean distance to compare two image representations. We report MAP@R (R = 10) as the evaluation metric [Musgrave et al., 2020], which is defined as follows:
$$\mathrm{MAP@}R = \frac{1}{R}\sum_{i=1}^{R} P(i), \quad \text{where } P(i) = \begin{cases} \text{precision at } i, & \text{if the } i^{th} \text{ retrieval is correct} \\ 0, & \text{otherwise.} \end{cases} \tag{9}$$
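A minimal sketch of this metric for a single query, assuming L2-normalized embeddings compared by Euclidean distance as described above (function and variable names are illustrative):

```python
import numpy as np

def map_at_r(query_emb, gallery_emb, query_label, gallery_labels, R=10):
    """MAP@R (Eq. 9) for one query against a gallery of embeddings."""
    q = query_emb / np.linalg.norm(query_emb)
    g = gallery_emb / np.linalg.norm(gallery_emb, axis=1, keepdims=True)
    order = np.argsort(np.linalg.norm(g - q, axis=1))[:R]  # R nearest neighbors
    hits = np.asarray(gallery_labels)[order] == query_label
    # P(i) = precision at i when the i-th retrieval is correct, else 0.
    precisions = np.cumsum(hits) / (np.arange(len(hits)) + 1)
    return float(np.sum(precisions * hits) / R)

# Toy gallery: two relevant items ranked first, one irrelevant item last.
query = np.array([1.0, 0.0])
gallery = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])
score = map_at_r(query, gallery, 1, [1, 1, 0], R=3)  # (1 + 1 + 0) / 3
```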
Table 2 summarizes MAP@10 on the SOP test split at the end of 2k FL rounds. Our FedSS formulation consistently
outperforms the alternative methods while requiring less than 1% of the classes on the clients. This reduces the overall
communication cost by 16% when |Sk| = 100 for every client per round. For reasonably small values of |Sk| our method has a similar rate of convergence to the FullSoftmax baseline, as seen in Figure 3b and Figure 4b.
Using the MobileNetV3 [Howard et al., 2019] architecture with embedding size 64, the classification layer contributes 16% of the total number of parameters in the SOP experiment and 3.4% in the Landmarks-User-160k experiment. In the former, our FedSS method requires only 84% of the model parameters on every client per round when |Sk| = 100. In the latter, it reduces the model parameters transmitted by 3.38% per client per round when |Sk| = 170 (summarized in Figure 5). These savings increase as the embedding size or the total number of classes increases (Figure 9 in A.1). For example, with an embedding size of 1280, the default for MobileNetV3, the above setup would result in 79% and 38% reductions in the communication cost per client per round for the SOP and Landmarks-User-160k datasets, respectively.
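The quoted 84% figure can be reproduced with back-of-the-envelope arithmetic; the backbone size below is inferred from the stated 16% classifier share and is an assumption, not a number reported in the paper:

```python
def transmitted_fraction(backbone_params, n_classes, emb_dim, n_sampled):
    """Fraction of model parameters sent per client per round when only the
    sampled rows of the classification layer are transmitted."""
    full = backbone_params + n_classes * emb_dim
    sent = backbone_params + n_sampled * emb_dim
    return sent / full

# SOP setup: n = 11,318 classes, d = 64. The classifier is stated to be 16%
# of the model, which implies a backbone of roughly 3.8M parameters
# (an inferred figure used only for this illustration).
backbone = 11318 * 64 * (1 - 0.16) / 0.16
frac = transmitted_fraction(backbone, 11318, 64, 100)  # ≈ 0.84
```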
-----
[Figure 4 plots accuracy (Landmarks) and MAP@10 (SOP) against FL rounds for FedSS at several values of |Sk| (95, 100, 110, 130, 170 for Landmarks), with the FullSoftmax baseline for reference.]

Figure 4: Convergence curves for the proposed FedSS method at different cardinalities of Sk. Given that Pk is fixed for a client, the increase in |Sk| is caused by an increase in |Nk|. The sampled-softmax estimate of the softmax probability improves as |Sk| increases, improving the efficacy of the method.
[Figure 5 plots performance against the number of parameters in the classification layer (×10³) for FedSS (Ours), NegOnly, PosOnly, FedAwS, and FullSoftmax on both datasets.]

Figure 5: Performance vs. number of parameters in the classification layer transmitted to and optimized by the clients for the Landmarks-Users-160k (a) and SOP (b) datasets, respectively.
**4.4** **On importance of Pk in local optimization**
One may note that the NegOnly loss (Eq. 7) involves fewer terms inside the logarithm than FedSS (Eq. 6). To show that NegOnly is not unfairly penalized, we compare FedSS with NegOnly such that the number of classes providing pushing forces for every input is the same. This is done by sampling an additional |Pk| − 1 negative classes for the NegOnly method. As seen in Figure 6, using the on-client classes (Pk) as additional negatives instead of additional off-client negatives is crucial to the learning.
[Figure 6 plots performance at round 5000 (Landmarks, n = 2028) and round 2000 (SOP, n = 11318) against the number of sampled negatives (5 to 80), comparing FedSS (Ours) with |Nk| + |Pk| − 1 negative classes, NegOnly with |Nk| negative classes, and NegOnly with |Nk| + |Pk| − 1 negative classes.]

Figure 6: Performance of the FedSS (Ours) and NegOnly methods with different compositions of the negative classes used for computing the sampled softmax loss. Utilizing on-client classes as additional negatives, i.e., the FedSS method, is superior to the NegOnly method with an equivalent number of negatives.
This boost can be attributed to better approximation of the global objective by the clients. Figure 7 plots a client’s
confusion matrix corresponding to the FedSS and NegOnly methods. The NegOnly loss leads to a trivial solution for
-----
the client’s local optimization problem such that the client’s positive class representations collapse onto one representation, as reasoned in Section 3.4.

[Figure 7 shows two confusion matrices (predicted vs. true labels, 0–1 color scale), one for FedSS and one for NegOnly.]

Figure 7: Confusion matrices for Pk of the same client from the Landmarks-User-160k dataset. In both the FedSS and NegOnly formulations we used |Sk| = 95. In the former, the class representations are learned and well-separated, but they collapse in the latter.
**4.5** **FedSS Gradient noise analysis**
Bengio and Senécal [2008] provide a theoretical analysis of the convergence of the sampled softmax loss. Doing so for the proposed federated sampled softmax within the FedAvg framework is beyond the scope of this work. Instead we provide an empirical gradient noise analysis for the proposed method. To do so we compute the expected difference between the FedAvg (with FullSoftmax) and FedSS gradients, i.e., E(|ḡ_FedAvg − ḡ_FedSS|), where ḡ_FedAvg and ḡ_FedSS are the client model changes aggregated by the server for the FedAvg (with FullSoftmax) and FedSS methods, respectively. Given that FedSS is an estimate of FedAvg (with FullSoftmax), this difference essentially represents the noise in the FedSS gradients.
[Figure 8, titled "FedSS convergence analysis with gradient noise", plots the gradient noise against |Nk| (up to 10,000).]

Figure 8: Empirical FedSS gradient noise analysis. As we increase the sample size, the difference between FedAvg (with FullSoftmax) and FedSS diminishes.
To compute a single instance of gradient noise we assume that the clients participating in the FL round have the same local dataset D with |D| = 32. Please note that the clients will have different Nk. For a given |Nk| we compute the expectation of the gradient noise across multiple batches (D) of the SOP dataset. Figure 8 shows the FedSS gradient noise as a function of |Nk|. For very small values of |Nk| the gradients can be noisy, but as |Nk| increases the gradient noise drops exponentially.
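Given the server-aggregated model changes from the two methods, the noise estimate reduces to a mean absolute difference; a sketch with illustrative inputs (the real quantities would be flattened model deltas):

```python
import numpy as np

def gradient_noise(g_fedavg, g_fedss):
    """Mean absolute difference between server-aggregated model changes
    under FedAvg-with-FullSoftmax and FedSS, i.e. |ḡ_FedAvg − ḡ_FedSS|."""
    return float(np.mean(np.abs(np.asarray(g_fedavg) - np.asarray(g_fedss))))

def expected_noise_over_batches(pairs):
    """Average the per-batch noise over multiple batches D, as in Figure 8."""
    return float(np.mean([gradient_noise(a, b) for a, b in pairs]))

# Two toy batches of aggregated model deltas.
pairs = [(np.array([1.0, 2.0]), np.array([1.0, 2.5])),
         (np.array([0.0, 0.0]), np.array([0.2, 0.0]))]
noise = expected_noise_over_batches(pairs)  # (0.25 + 0.1) / 2 = 0.175
```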
-----
##### 5 Conclusion
Federated Learning is becoming a prominent field of research. Major contributing factors to this trend are the rise in privacy awareness among general users, the surge in the amount of data generated by edge devices, and the noteworthy increase in the computing capabilities of edge devices. In this work we presented a novel federated sampled softmax
method which facilitates efficient training of large models on edge devices with Federated Learning. The clients solve
small subproblems approximating the global problem by sampling negative classes and optimizing a sampled softmax
objective. Our method significantly reduces the number of parameters transferred to and optimized by the clients,
while performing on par with the standard full softmax method. We hope that this encouraging result can inform
future research on efficient local optimization beyond the classification layer.
-----
##### References
Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Joan Puigcerver, Jessica Yung, Sylvain Gelly, and Neil Houlsby. Big transfer
(BiT): General visual representation learning. In ECCV, 2020.
Hantao Yao, Shiliang Zhang, Richang Hong, Yongdong Zhang, Changsheng Xu, and Qi Tian. Deep representation learning with
part loss for person re-identification. IEEE Transactions on Image Processing, 28(6):2860–2871, 2019.
Chen Huang, Yining Li, Chen Change Loy, and Xiaoou Tang. Learning deep representation for imbalanced classification. In CVPR,
2016.
Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick.
Microsoft COCO: Common objects in context. In ECCV, 2014.
Zhong-Qiu Zhao, Peng Zheng, Shou-tao Xu, and Xindong Wu. Object detection with deep learning: A review. IEEE Transactions
_on Neural Networks and Learning Systems, 30(11):3212–3232, 2019._
Wanli Ouyang, Xiaogang Wang, Cong Zhang, and Xiaokang Yang. Factors in finetuning deep model for object detection with
long-tail distribution. In CVPR, 2016.
Kihyuk Sohn. Improved deep metric learning with multi-class n-pair loss objective. In NIPS, pages 1857–1865, 2016.
Hyun Oh Song, Yu Xiang, Stefanie Jegelka, and Silvio Savarese. Deep metric learning via lifted structured feature embedding. In
_CVPR, 2016._
Kevin Musgrave, Serge Belongie, and Ser-Nam Lim. A metric learning reality check. arXiv preprint arXiv:2003.08505, 2020.
Sumit Chopra, Raia Hadsell, and Yann LeCun. Learning a similarity metric discriminatively, with application to face verification.
In CVPR, 2005.
Kilian Q Weinberger and Lawrence K Saul. Distance metric learning for large margin nearest neighbor classification. JMLR, 10
(2), 2009.
Qi Qian, Lei Shang, Baigui Sun, Juhua Hu, Hao Li, and Rong Jin. SoftTriple loss: Deep metric learning without triplet sampling.
In ICCV, 2019.
Dhruv Mahajan, Ross Girshick, Vignesh Ramanathan, Kaiming He, Manohar Paluri, Yixuan Li, Ashwin Bharambe, and Laurens
van der Maaten. Exploring the limits of weakly supervised pretraining. In ECCV, 2018.
Chen Sun, Abhinav Shrivastava, Saurabh Singh, and Abhinav Gupta. Revisiting unreasonable effectiveness of data in deep learning
era. In ICCV, 2017.
Peter Kairouz, H Brendan McMahan, et al. Advances and open problems in federated learning. arXiv preprint arXiv:1912.04977,
2019.
Kevin Hsieh, Amar Phanishayee, Onur Mutlu, and Phillip B. Gibbons. The non-IID data quagmire of decentralized machine
learning. arXiv preprint arXiv:1910.00189, 2020.
Tzu-Ming Harry Hsu, Hang Qi, and Matthew Brown. Measuring the effects of non-identical data distribution for federated visual
classification. arXiv preprint arXiv:1909.06335, 2019.
Alex Krizhevsky. One weird trick for parallelizing convolutional neural networks. arXiv preprint arXiv:1404.5997, 2014.
Yoshua Bengio and Jean-Sébastien Senécal. Adaptive importance sampling to accelerate training of a neural probabilistic language
model. IEEE Transactions on Neural Networks, 19(4):713–722, 2008.
Jingzhou Liu, Wei-Cheng Chang, Yuexin Wu, and Yiming Yang. Deep learning for extreme multi-label text classification. In
_SIGIR, 2017._
Sébastien Jean, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. On using very large target vocabulary for neural machine
translation. In ACL and IJCNLP, pages 1–10, 2015.
Wenjie Zhang, Junchi Yan, Xiangfeng Wang, and Hongyuan Zha. Deep extreme multi-label learning. In ACM International
_Conference on Multimedia Retrieval, pages 100–107, 2018._
Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton. CIFAR-100 (Canadian Institute for Advanced Research). URL http://www.cs.toronto.edu/~kriz/cifar.html.
Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya
Khosla, Michael Bernstein, et al. ImageNet large scale visual recognition challenge. IJCV, 115(3), 2015.
Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. stat, 1050:9, 2015.
Hao Sheng, Yanwei Zheng, Wei Ke, Dongxiao Yu, Xiuzhen Cheng, Weifeng Lyu, and Zhang Xiong. Mining hard samples globally
and efficiently for person reidentification. IEEE Internet of Things Journal, 7(10):9611–9622, 2020.
-----
Florian Schroff, Dmitry Kalenichenko, and James Philbin. FaceNet: A unified embedding for face recognition and clustering. In
_CVPR, 2015._
Felix Yu, Ankit Singh Rawat, Aditya Menon, and Sanjiv Kumar. Federated learning with only positive labels. In ICML, 2020.
Guy Blanc and Steffen Rendle. Adaptive sampled softmax with kernel based sampling. In ICML, 2018.
Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. Communication-efficient learning
of deep networks from decentralized data. In AISTATS, 2017.
Ankit Singh Rawat, Jiecao Chen, Felix Xinnan X Yu, Ananda Theertha Suresh, and Sanjiv Kumar. Sampled softmax with random
fourier features. In NeurIPS, 2019.
Benny Chor, Oded Goldreich, Eyal Kushilevitz, and Madhu Sudan. Private information retrieval. In Annual Foundations of
_Computer Science, 1995._
Tzu-Ming Harry Hsu, Hang Qi, and Matthew Brown. Federated visual classification with real-world data distribution. In ECCV,
2020.
Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In
_CVPR, 2009._
Andrew Howard, Mark Sandler, Grace Chu, Liang-Chieh Chen, Bo Chen, et al. Searching for MobileNetV3. In ICCV, 2019.
Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift.
In ICML, 2015.
Yuxin Wu and Kaiming He. Group normalization. In ECCV, 2018.
Feng Wang, Xiang Xiang, Jian Cheng, and Alan Loddon Yuille. NormFace: L2 hypersphere embedding for face verification. In
_ACM Multimedia, 2017._
-----
##### A Supplementary Material
**A.1** **Parameters in the last layer**
The number of parameters in the classification layer grows linearly with respect to the number of classes and typically
dominates the total number of parameters in the model. Figure 9 shows the number of parameters in the classification
layer as the percentage of total number of parameters in the MobileNetV3 model. Each curve shows the percentage
for different number of target classes for a fixed embedding size.
[Figure 9 plots the percentage (0–100%) of model parameters in the classification layer against the number of classes n, one curve per embedding size.]

Figure 9: The number of parameters in the classification layer dominates the model as the number of classes n grows. We show the percentage of parameters in the last layer using the MobileNetV3 architecture [Howard et al., 2019] while varying the number of classes n and the dimension d of the feature (d = 1280 is the default dimensionality of MobileNetV3).
It is obvious that as the number of classes or the size of the image representation increases, so does the communication and local optimization cost of full softmax training in the federated setting. In either situation our proposed method facilitates training at significantly lower cost.
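The linear growth of the last layer can be made concrete with a small helper; the 3M-parameter backbone below is an illustrative assumption, not a figure from the paper:

```python
def classifier_param_share(n_classes, emb_dim, backbone_params):
    """Percentage of all model parameters that sit in the classification
    layer, which holds n_classes × emb_dim weights."""
    cls = n_classes * emb_dim
    return 100.0 * cls / (backbone_params + cls)

# With an assumed 3M-parameter backbone and the default d = 1280,
# the last layer quickly dominates as n grows (cf. Figure 9).
share_1k = classifier_param_share(1_000, 1280, 3_000_000)    # ≈ 30%
share_21k = classifier_param_share(21_000, 1280, 3_000_000)  # ≈ 90%
```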
**A.2** **Implementation Details**
For all the datasets we use the default MobileNetV3 architecture [Howard et al., 2019], except that instead of a 1280-dimensional embedding we output a 64-dimensional embedding. We replace Batch Normalization [Ioffe and Szegedy, 2015] with Group Normalization [Wu and He, 2018] to improve the stability of federated learning [Hsu et al., 2019, Hsieh et al., 2020]. Input images are resized to 256 × 256, from which a random crop of size 224 × 224 is taken. All ImageNet-21k trainings start from scratch, whereas for Landmarks-User-160k and the SOP we start from an ImageNet-1k [Russakovsky et al., 2015] pretrained checkpoint. For client-side optimization we go through the local data once and use a stochastic gradient descent optimizer with a batch size of 32. We use a learning rate of 0.01 for the SOP and Landmarks-User-160k, and 0.001 for all ImageNet-21k experiments. To have a fair comparison with the FedAwS method, we do a hyperparameter search to find the best spreadout weight and report the performance corresponding to it. For all the experiments, we use scaled cosine similarity with a fixed scale value of 20 [Wang et al., 2017] for computing the logits; the server-side optimization is done using a Momentum optimizer with a learning rate of 1.0 and momentum of 0.9. All Centralized baselines are trained with stochastic gradient descent. For a given dataset, all the FL methods are trained for a fixed number of rounds. The corresponding centralized experiment is trained for an equivalent number of model updates.
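The scaled cosine-similarity logits mentioned above can be sketched as follows; this is a minimal version under the stated scale of 20, and the names are illustrative rather than taken from the paper's implementation:

```python
import numpy as np

def cosine_logits(embeddings, class_weights, scale=20.0):
    """Logits as scaled cosine similarity between L2-normalized image
    embeddings and class representations (fixed scale, as in A.2)."""
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    w = class_weights / np.linalg.norm(class_weights, axis=1, keepdims=True)
    return scale * e @ w.T

x = np.array([[3.0, 4.0]])               # one embedding
W = np.array([[1.0, 0.0], [0.0, 1.0]])   # two class representations
logits = cosine_logits(x, W)             # cosines 0.6 and 0.8, scaled by 20
```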
**A.3** **Imagenet-21k experiments**
Along with the Landmarks-User-160K [Hsu et al., 2020] and SOP [Song et al., 2016] datasets, we also experiment with the ImageNet-21k [Deng et al., 2009] dataset. It is a superset of the widely used ImageNet-1k [Russakovsky et al., 2015] dataset. It contains 14.2 million images distributed across 21k classes organized by the WordNet hierarchy. For every class we do a random 80-20 split on its samples to generate the train and test splits, respectively. The train split is used to generate 25,691 clients, each containing approximately 400 images distributed across 20 class labels.
-----
ImageNet-21k requires a large number of FL rounds given its abundant training images, hence we set a training budget
of 25,000 FL rounds to make our experiments manageable. Although the performance we report on ImageNet-21k is
not comparable with the (converged) state-of-the-art, we emphasize that the setup is sufficient to evaluate our FedSS
method and demonstrate its effectiveness.
| \|Sk\| (% of n) | 70 (0.3%) | 120 (0.5%) | 220 (1.0%) | 420 (1.9%) | 820 (3.7%) |
|---|---|---|---|---|---|
| FedSS (Ours) | 9.1 ± 0.4 | 9.2 ± 0.1 | 9.9 ± 0.3 | 10.0 ± 0.5 | 9.8 ± 0.5 |
| NegOnly | 3.9 ± 0.1 | 4.2 ± 0.1 | 4.3 ± 0.2 | 4.4 ± 0.1 | 4.7 ± 0.2 |
| PosOnly | 5.1 ± 0.4 | | | | |
| FedAwS [Yu et al., 2020] | 5.1 ± 0.1 | | | | |
| FullSoftmax | 11.3 | | | | |
| Centralized | 15.4 | | | | |

Table 3: Top-1 accuracy (%) on ImageNet-21k at the end of 25k FL rounds. PosOnly and FedAwS have ∼0.1% of class representations on the clients, whereas FullSoftmax has all the ∼21k class representations.
[Figure 10 plots top-1 accuracy against the number of parameters in the classification layer (1.3–52.5 ×10³ for the sampled methods, 1398 ×10³ for FullSoftmax) for FedSS (Ours), NegOnly, PosOnly, and FedAwS.]

Figure 10: ImageNet-21k: Top-1 accuracy vs. number of parameters in the classification layer transmitted to and optimized by the clients.
Table 3 summarizes top-1 accuracy on the ImageNet-21k test split. We experiment with five different choices of |Sk|. The FullSoftmax method reaches a (best) top-1 accuracy of 11.30% by the end of 25,000 FL rounds, while our method achieves a top-1 accuracy of 10.02 ± 0.5% with less than 2% of the classes on the clients. Figure 10 summarizes the performance of the different methods with respect to the number of parameters in the classification layer transmitted to and optimized by the clients. Our client-driven negative sampling with positive inclusion method (FedSS) requires a very small fraction of the parameters in the classification layer while performing reasonably close to the full softmax training (FullSoftmax).
**A.4** **Overfitting in the SOP FullSoftmax experiments**
The class labels in the train and test splits of the SOP dataset do not overlap. In addition, it has, on average, only
5 images per class label. This makes the SOP dataset susceptible to overfitting (Table 4). In this case, using FedSS
mitigates the overfitting as only a subset of class representations is updated every FL round.
| Method | Top-1 Accuracy (train) | MAP@10 (test) |
|---|---|---|
| FedSS (Ours) | 97.6 ± 0.2 | **26.5 ± 0.03** |
| FullSoftmax | 99.9 | 25.7 |
| Centralized | 99.9 | 25.4 |

Table 4: Top-1 accuracy on the train split and corresponding MAP@10 on the test split for the SOP dataset at the end of 2k FL rounds. The FedSS model shown here is trained with |Sk| = 100.
-----
**A.5** **Derivations from Eq. 5 to Eq. 6**
_Proof._ Starting from Eq. 5, we have

$$\begin{aligned}
\mathcal{L}_{FedSS}^{(k)}(x, y) &= -o'_t + \log \sum_{j \in S_k} \exp(o'_j) \\
&= \log\Big(\exp(-o'_t) \cdot \sum_{j \in S_k} \exp(o'_j)\Big) \\
&= \log \sum_{j \in S_k} \exp(o'_j - o'_t) \\
&= \log\Big(\exp(o'_t - o'_t) + \sum_{j \in S_k/\{t\}} \exp(o'_j - o'_t)\Big) \\
&= \log\Big(1 + \sum_{j \in S_k/\{t\}} \exp(o'_j - o'_t)\Big).
\end{aligned}$$

This gives Eq. 6.
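The chain of identities can also be checked numerically; a small sketch comparing the Eq. 5 and Eq. 6 forms for arbitrary logits:

```python
import math

def lhs(o, t):
    """Eq. 5 form over S_k: -o'_t + log Σ_j exp(o'_j)."""
    return -o[t] + math.log(sum(math.exp(v) for v in o))

def rhs(o, t):
    """Eq. 6 form: log(1 + Σ_{j≠t} exp(o'_j − o'_t))."""
    return math.log1p(sum(math.exp(v - o[t]) for j, v in enumerate(o) if j != t))

o = [0.3, -1.2, 2.0, 0.0]
# The two forms agree for every choice of the target index t.
assert all(abs(lhs(o, t) - rhs(o, t)) < 1e-9 for t in range(len(o)))
```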
-----
|
{
"disclaimer": "Notice: Paper or abstract available at https://arxiv.org/abs/2203.04888, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": null,
"status": "GREEN",
"url": "http://arxiv.org/pdf/2203.04888"
}
| 2,022
|
[
"JournalArticle"
] | true
| 2022-03-09T00:00:00
|
[
{
"paperId": "ef00c7264c8e5020f44c96a69c4a0f4fbb3da8e5",
"title": "Mining Hard Samples Globally and Efficiently for Person Reidentification"
},
{
"paperId": "8452a1317237ddebffd80e610ecc773bfb678e9c",
"title": "Federated Learning with Only Positive Labels"
},
{
"paperId": "e740a2b706fcae34850fd0e56619a2df7ee4dce7",
"title": "Federated Visual Classification with Real-World Data Distribution"
},
{
"paperId": "3926c80eb33d12ba2838e0890c372431192f42a6",
"title": "A Metric Learning Reality Check"
},
{
"paperId": "bc51622358d8eea83248ef29402fe10640d07ba6",
"title": "Big Transfer (BiT): General Visual Representation Learning"
},
{
"paperId": "07912741c6c96e6ad5b2c2d6c6c3b2de5c8a271b",
"title": "Advances and Open Problems in Federated Learning"
},
{
"paperId": "206261db1196e4e391ca42077f6fca6b3ece34d0",
"title": "The Non-IID Data Quagmire of Decentralized Machine Learning"
},
{
"paperId": "46d8c9e2dc9c12615eb5f6813d18f967d61c7e0d",
"title": "Measuring the Effects of Non-Identical Data Distribution for Federated Visual Classification"
},
{
"paperId": "6556cdf36a72c9842dbfb146df84b3e7d633e8df",
"title": "SoftTriple Loss: Deep Metric Learning Without Triplet Sampling"
},
{
"paperId": "2a687609ac1cecb9b20ba52d4f5d72ba14e0eaf2",
"title": "Sampled Softmax with Random Fourier Features"
},
{
"paperId": "5e19eba1e6644f7c83f607383d256deea71f87ae",
"title": "Searching for MobileNetV3"
},
{
"paperId": "7998468d99ab07bb982294d1c9b53a3bf3934fa6",
"title": "Object Detection With Deep Learning: A Review"
},
{
"paperId": "0f885fd46064d271d4404cf9bb3d758e1a6f8d55",
"title": "Exploring the Limits of Weakly Supervised Pretraining"
},
{
"paperId": "d08b35243edc5be07387a9ed218070b31e502901",
"title": "Group Normalization"
},
{
"paperId": "1bf64f0961da08ea0f9941bd899e916a385e9540",
"title": "Adaptive Sampled Softmax with Kernel Based Sampling"
},
{
"paperId": "1a0365567850837931d04126714ae6e2cbfc6270",
"title": "Deep Learning for Extreme Multi-label Text Classification"
},
{
"paperId": "8760bc7631c0cb04e7138254e9fd6451b7def8ca",
"title": "Revisiting Unreasonable Effectiveness of Data in Deep Learning Era"
},
{
"paperId": "a6b163dcca2054df8629cfd66c26700409c39604",
"title": "Deep Representation Learning With Part Loss for Person Re-Identification"
},
{
"paperId": "21063765fc3dc7884dc2a28c68e6c7174ab70af2",
"title": "NormFace: L2 Hypersphere Embedding for Face Verification"
},
{
"paperId": "27600451752d1a07937d11d4e0e276fcfb3d0c48",
"title": "Deep Extreme Multi-label Learning"
},
{
"paperId": "78a11b7d2d7e1b19d92d2afd51bd3624eca86c3c",
"title": "Improved Deep Metric Learning with Multi-class N-pair Loss Objective"
},
{
"paperId": "c88dbaa5d8f4c915e286be5e38b5599038220493",
"title": "Learning Deep Representation for Imbalanced Classification"
},
{
"paperId": "d1dbf643447405984eeef098b1b320dee0b3b8a7",
"title": "Communication-Efficient Learning of Deep Networks from Decentralized Data"
},
{
"paperId": "d3e09080f662f155a7f4c44597d963a2e97976a5",
"title": "Factors in Finetuning Deep Model for Object Detection with Long-Tail Distribution"
},
{
"paperId": "884750937bb97e82c41316d80e5d104e0c0e4795",
"title": "Deep Metric Learning via Lifted Structured Feature Embedding"
},
{
"paperId": "5aa26299435bdf7db874ef1640a6c3b5a4a2c394",
"title": "FaceNet: A unified embedding for face recognition and clustering"
},
{
"paperId": "0c908739fbff75f03469d13d4a1a07de3414ee19",
"title": "Distilling the Knowledge in a Neural Network"
},
{
"paperId": "995c5f5e62614fcb4d2796ad2faab969da51713e",
"title": "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift"
},
{
"paperId": "1938624bb9b0f999536dcc8d8f519810bb4e1b3b",
"title": "On Using Very Large Target Vocabulary for Neural Machine Translation"
},
{
"paperId": "e74f9b7f8eec6ba4704c206b93bc8079af3da4bd",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"paperId": "71b7178df5d2b112d07e45038cb5637208659ff7",
"title": "Microsoft COCO: Common Objects in Context"
},
{
"paperId": "80d800dfadbe2e6c7b2367d9229cc82912d55889",
"title": "One weird trick for parallelizing convolutional neural networks"
},
{
"paperId": "d2c733e34d48784a37d717fe43d9e93277a8c53e",
"title": "ImageNet: A large-scale hierarchical image database"
},
{
"paperId": "699d5ab38deee78b1fd17cc8ad233c74196d16e9",
"title": "Adaptive Importance Sampling to Accelerate Training of a Neural Probabilistic Language Model"
},
{
"paperId": "78947497cbbffc691aac3f590d972130259af9ce",
"title": "Distance Metric Learning for Large Margin Nearest Neighbor Classification"
},
{
"paperId": "cfaae9b6857b834043606df3342d8dc97524aa9d",
"title": "Learning a similarity metric discriminatively, with application to face verification"
},
{
"paperId": "b4465c399f2937990077eb57200ef7be6788e428",
"title": "Private information retrieval"
},
{
"paperId": null,
"title": "Input images are resized to 256×256 from which a random crop of size 224×224"
},
{
"paperId": null,
"title": "2017] of 20 for computing the logits; the server side optimization is done using Momentum optimizer"
},
{
"paperId": null,
"title": "CIFAR-100 (canadian institute for advanced research)"
},
{
"paperId": "162d958ff885f1462aeda91cd72582323fd6a1f4",
"title": "Gradient-based learning applied to document recognition"
},
{
"paperId": null,
"title": "Efficient Image Representation Learning with Federated Sampled Softmax A P"
}
] | 12,820
|
en
|
[
{
"category": "Medicine",
"source": "external"
},
{
"category": "Business",
"source": "s2-fos-model"
},
{
"category": "Environmental Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/fff652c8be3a91b2ddb5c964c064d934e6b4d9fa
|
[
"Medicine"
] | 0.847973
|
The Effect of Traceability System and Managerial Initiative on Indonesian Food Cold Chain Performance: A Covid-19 Pandemic Perspective
|
fff652c8be3a91b2ddb5c964c064d934e6b4d9fa
|
Global Journal of Flexible Systems Management
|
[
{
"authorId": "152447020",
"name": "I. Masudin"
},
{
"authorId": "1712171024",
"name": "A. Ramadhani"
},
{
"authorId": "2082189006",
"name": "D. P. Restuputri"
},
{
"authorId": "72476287",
"name": "Ikhlasul Amallynda"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"Glob J Flex Syst Manag"
],
"alternate_urls": [
"https://link.springer.com/journal/40171"
],
"id": "5ed06dcb-b965-4071-a8b6-2867794825a1",
"issn": "0972-2696",
"name": "Global Journal of Flexible Systems Management",
"type": "journal",
"url": "https://www.springer.com/business+&+management/journal/40171"
}
|
This study aims to determine the effect of managerial initiatives on the adoption of traceability systems on food cold chain performance during the Covid-19 pandemic. Managerial initiatives are allegedly needed to improve the company's performance because it improves the traceability system in the supply chain. In addition, the effect of the traceability system adoption on the Indonesian food cold-chain performance during the Covid-19 pandemic is also discussed in this study. This study uses a quantitative approach and purposive sampling with a questionnaire research instrument obtained 250 statements of Indonesian consumers and retail employees. Partial least squares for structural equation modeling (PLS-SEM) were used to analyze latent variables' relationships. This study indicates that the traceability system has a significant effect on the performance of the food cold-chain during the Covid-19 pandemic. In addition, the adoption of electronic data exchange (EDI), radio frequency identification (RFID), and blockchain significantly impacted traceability systems during the Covid-19 pandemic. The managerial application of the initiative showed a positive and significant impact on the performance of the food cold-chain during the Covid-19 pandemic. However, the managerial initiative is not able to moderate the adoption of the traceability system.
|
[https://doi.org/10.1007/s40171-021-00281-x](https://doi.org/10.1007/s40171-021-00281-x)
ORIGINAL RESEARCH
# The Effect of Traceability System and Managerial Initiative on Indonesian Food Cold Chain Performance: A Covid-19 Pandemic Perspective
Ilyas Masudin[1] - Anggi Ramadhani[1] - Dian Palupi Restuputri[1] - Ikhlasul Amallynda[1]
Received: 22 March 2021 / Accepted: 10 July 2021 / Published online: 3 August 2021
© Global Institute of Flexible Systems Management 2021
Abstract This study aims to determine the effect of managerial initiatives on the adoption of traceability systems
on food cold chain performance during the Covid-19
pandemic. Managerial initiatives are allegedly needed to
improve the company’s performance because it improves
the traceability system in the supply chain. In addition, the
effect of the traceability system adoption on the Indonesian
food cold-chain performance during the Covid-19 pandemic is also discussed in this study. This study uses a
quantitative approach and purposive sampling with a
questionnaire research instrument obtained 250 statements
of Indonesian consumers and retail employees. Partial
least squares for structural equation modeling (PLS-SEM)
were used to analyze latent variables’ relationships. This
study indicates that the traceability system has a significant
effect on the performance of the food cold-chain during the
Covid-19 pandemic. In addition, the adoption of electronic
data exchange (EDI), radio frequency identification
(RFID), and blockchain significantly impacted traceability
systems during the Covid-19 pandemic. The managerial
application of the initiative showed a positive and significant impact on the performance of the food cold-chain
during the Covid-19 pandemic. However, the managerial
& Ilyas Masudin
masudin@umm.ac.id
Anggi Ramadhani
anggi222ramadhani@gmail.com
Dian Palupi Restuputri
restuputri@umm.ac.id
Ikhlasul Amallynda
ikhlasulamallynda@gmail.com
1 University of Muhammadiyah Malang, Jalan Raya Tlogomas
246, Malang 65144, Indonesia
initiative is not able to moderate the adoption of the
traceability system.
Keywords Blockchain · EDI · Food cold chain performance during Covid-19 · Managerial initiative · RFID · Traceability system
### Introduction
Fulfilling the increase in the supply of cold-chain products
requires a good integration to connect all supply chain
parties (Lewis & Boyle, 2017). Food cold-chain management associates all parties in the supply chain, from the
farmer to the consumer (Joshi et al., 2011). Indonesia's cold-chain market grows significantly every year, and growth is predicted to rise from 4–6% to 8–10% over the next five years (ILFA, 2020). Furthermore, the food and agriculture sector makes the largest contribution of the cold-chain sector to Indonesia's gross domestic product (GDP) (BPS, 2019). The food cold-chain
system helps the expansion of the Indonesian food supply.
This system uses the cold chain's temperature control to inhibit microbial growth, which extends product storage life and maintains nutritional quality (Aung & Chang, 2014a; Carullo et al., 2008; Shashi et al.,
2018).
However, the food cold-chain market has been disrupted by the worldwide spread of the Covid-19 pandemic, caused by the Coronavirus (SARS-CoV-2). This is a major challenge for Indonesia's food cold chain industry. An
infected worker's droplets can transmit the virus rapidly, which is a serious concern because the virus causes acute respiratory syndrome (Ganyani et al., 2020; Wiersinga et al.,
2020). The government issued several policies to reduce
## 1 3
-----
the spread of Coronavirus transmission by implementing
health protocols to large-scale social restrictions in all
aspects of the industry (Paramita et al., 2021; Tam et al.,
2021; Ufua et al., 2021; Vergara et al., 2021). This policy
impacts supply and demand, for example through cases of food loss. Food cold chain products have short shelf lives and cannot be recycled (Masudin & Safitri, 2020). Moreover, the possibility of Coronavirus contamination along the supply chain is another concerning issue for product safety. The complexity of these problems, together with the rapid transmission of the virus, motivates product-information traceability systems that can monitor product conditions along the supply chain.
Optimal integration can evaluate supply chain performance from traceability initiatives and operation management (Wang et al., 2009). The traceability system detects
the causes of quality and safety problems by determining
their origin and characteristics from the upstream supply
chain (Bechini et al., 2005). The utilization of the Internet
of Things (IoT) can increase product visibility, such as
product information, environmental conditions around the
product, and product quality (Tsang et al., 2018). Effective
management between corporate governance and employees
is needed to affect business performance positively (Galbreath, 2006). Besides, stakeholders’ initiative in the food
cold-chain is an important factor in successfully implementing the traceability system (Lewis & Boyle, 2017).
Without any initiative to encourage stakeholders, the
traceability system performance cannot be optimal. Thus,
this study was conducted to determine the influence of
managerial initiative on traceability system adoption. The
traceability system’s effect on the Indonesian food cold
chain performance during the Covid-19 pandemic was
determined in this study.
The structure of this article is written in six sections. The
first section (introduction) discusses this study’s background and identifies the gap between previous studies and
the research statement. Section two discusses the related
studies that contributed to developing the framework.
Subsequently, the following section discusses the research
methodology, followed by Section four, which discusses
the results and discussion. Section five presents the managerial implications, and it is followed by the final section,
which is the conclusion and limitations.
### Literature Review
Food Cold Chain Performance
The cooling system is applied in the post-harvest and after
food processing. This system uses the proper temperature
settings to keep product quality in good condition (Bogataj
et al., 2005; Shabani et al., 2015; Shashi et al., 2018).
Temperature control errors can occur before and after
loading and unloading in warehouses or consumer refrigerators’ storage (Mercier et al., 2017). These errors lead to
potential damage to the cold-chain product. Product classification in the food cold-chain is shown in Fig. 1.
Products are distinguished by the structural resistance of the different foodstuffs. Fresh food products such as vegetables and fruit can keep for up to two months in chilled rooms at low temperatures (1–7 °C). Meanwhile, processed products, canned food, and animal protein require a freezer at temperatures below 0 °C (Capricorn
Indonesia Consult, 2019). Many aspects of cold chain
performance, such as product shelf life, production time,
production period, physical product properties, type of
transportation, storage conditions, product safety, and
environmental conditions, make it quite challenging to
measure (Aramyan et al., 2007; Joshi et al., 2012). The
complexity of cold chain management often leaves it unknown where product damage occurs, especially for products with a short shelf life (Aiello et al., 2012).
Food Cold Chain Industry Expenditure Cost
The first dimension in measuring the food cold chain’s
performance is costs incurred in all cold-chain operations.
Managing product losses, expenditure on energy used,
operating costs, maintenance of cooling systems, and
expenses caused by lost time can improve the competitiveness in the supply chain (Joshi et al., 2011). Food
industry waste from the loss due to microbes’ decay is the
most significant waste because spoilage can occur at any
cold chain stage. The level of performance efficiency of the
agro-food supply chain can be measured using cost indicators. Those indicators are distribution costs, transaction
costs, net profit from an investment, and return on investment. The cost indicators also include company inventory
costs such as products, raw materials, semifinished goods,
and finished goods (Aramyan et al., 2007).
Quality and Safety of Food Cold Chain Products
Quality and safety indicators are often used to measure the
food supply chain’s performance. The main concerns of
society are food production and consumption because they
have a wide range of social, economic, and environmental
consequences (Aung & Chang, 2014b). Thus, food product
problems become more customer-oriented by providing
excellent and fast responses in the food industry. The
increased regulations and consumer awareness regarding
food safety lead researchers to research food supply chains
(Kuo & Chen, 2010). Considering the effect of low temperatures in storage along the supply chain is one of the
Fig. 1 Food cold chain source and derivatives
supporting aspects of improving food products’ quality and
safety, because that can minimize the risk of the growth of
spoilage-causing microorganisms (Kuo & Chen, 2010;
Montanari, 2008; Rediers et al., 2009). Moreover, implementing worker training, recording product acceptance
temperatures, setting real-time temperatures, and using an
alarm system can reduce the food cold chain system’s
quality and safety risks (Wu & Hsiao, 2020).
Food Cold Chain Service Level
The service level an organization provides to its customers is another dimension used as a food cold chain performance
attribute. Customer satisfaction is supported by the maximum service level (Joshi et al., 2011). The cold chain uses
a service level as a differentiator from other competitors.
Those services include cooling systems vehicles as delivery services, flexible company operating hours, and placing
a strategic company location to reach customers quickly
and easily (Joshi et al., 2011). Consideration for retail
companies is improving the service quality. In increasing
operational efficiency and customer service, Wal-Mart
implements superior supply chain management practices
(Blanchard et al., 2008). These practices are maximizing
sales and revenue, merging distribution centers to maintain
control over shipping. Moreover, the practice also includes
minimizing inventory, maximizing the use of technology to
simplify the transaction process, and collaborating with
suppliers to reduce product prices each year. An organization can certainly please its customers with good service.
Without the organization’s willingness to establish an
organizational culture and ensure that delivery is effective,
the customer-focused services and practices cannot be
developed or maintained in the long term (Bartley et al.,
2007).
Food Cold Chain During the Covid-19 Pandemic
The outbreak of the Covid-19 pandemic in the world has
sparked fears due to the rapid transmission of the virus. In Indonesia, as of May 2, 2021, there were 1,672,880 confirmed cases of
Covid-19. Covid-19, caused by Coronavirus 2 (SARS-CoV-2), triggers acute respiratory syndrome (Wiersinga et al., 2020). The SARS-CoV-2 virus is transmitted through saliva droplets released while breathing during
direct face-to-face contact and transmission of the virus
from the surface of objects (Ganyani et al., 2020). The food
supply chain's quality and safety, including the food cold-chain, can be interrupted by the Covid-19 pandemic. When
infected workers sneeze or cough while being in the food
production supply chain, respiratory droplets could transmit Coronavirus on food products (Rizou et al., 2020).
Moreover, the Covid-19 pandemic disrupts supply chain
integration due to supply chain uncertainty (uncertainty of
suppliers or technology) (Paul & Chowdhury, 2020; Shukor et al., 2020). Supply and demand problems can lead to
product returns, food losses, increase product prices, and
trigger transportation problems for food cold chain products caused by reusable packages in product delivery; there
is a risk of transmitting the virus (Masudin & Safitri, 2020).
The disruption of supply chain integration has an impact on
the flexibility of the company organization. Organizational
flexibility is one of the strategic dimensions for supply
chain integration and the external environment (Khoobiyan et al., 2017). Shukor et al. (2020) show that environmental uncertainty and organizational capability are important elements that affect supply chain agility and organizational flexibility. It is more astute for
companies when dealing with external uncertainties that
force them to look beyond the normal limits of their
business.
Traceability in the Food Cold Chain
The possibility of Coronavirus contamination is raising
more attention to the traceability of food products in the
cold food chain. Applying a traceability system in food
cold chains helps ensure food safety and quality to maintain consumer trust (Aung & Chang, 2014b). When
building a traceability system in a supply chain, one
problem is the large scale of the food cold chain stages
(production, processing, and distribution) (Bechini et al.,
2008). Traceability systems allow the causes of product quality problems to be detected across the wide range of the product path along the supply chain, starting from downstream.
According to Aiyar and Pingali (2020), it is necessary to
integrate traceability technology to reduce the risk of
pandemic disruption on the food system. This technology
could monitor the emergence of disease in several places
along the trade chain. This is very important because it can
improve long-term food security by preventing the
expansion of pandemics and disrupting the food system in
the future. One of the food cold-chain performance
matrices is the consistency in tracing product information
related to origin and location (Shashi et al., 2018). There is
a relationship between traceability and performance evaluation of a cold chain (Joshi et al., 2011). The track record
of temperature and its origin from each stage in the cold
chain may be obtained using a traceability system. Based
on this description, the following hypothesis can be
proposed:
H1 The traceability system (T) significantly affects the
food cold chain performance (FCCP).
An effective traceability system must be flexible and
responsive in identifying potential risky products and then
recalling products that are declared unsafe (Mc Carthy
et al., 2018). Supporting technology is needed to help trace information along the supply chain and maximize traceability systems in food cold chains. Advanced
information technology properly adopted in a traceability
system would enable strategic flexibility (Lau, 1996) when
adaptable, quick, and responsive systems are highly sought
to reduce environmental threats caused by the Covid-19
pandemic. The Covid-19 pandemic requires minimal contact between workers, so a flexible supply chain system needs technology that automates processes in purchasing and between suppliers, operations, and customers (Duclos et al., 2003), because flexibility considers the speed at which hardware and software architectures can change. Synchronization between companies in the supply chain must also be enabled (Duclos et al., 2003). With a traceability system's advanced
technology, the speed and accuracy of data transmission
pandemics.
Electronic Data Interchange (EDI) Adoption
Transmitting information from one computer to another for
business transactions between organizations in the supply
chain uses electronic data interchange (EDI) technology
(Walton & Marucheck, 1997). Conventional businesses
such as purchase orders, material forecasting, shipping, and
invoice can be replaced with EDI tools (Hart & Saunders,
1997). EDI is important for transferring information
quickly and automatically to create more effective and
efficient integration or coordination (Hill & Scudder,
2002). Maximizing EDI technology in supply chain management requires integration between organizations (Konsynski, 1993).
Various researchers have tried the use of EDI in various
industries. Ford Motor uses EDI as one of the applications
to handle corporate data transfers with partners (Webster,
1995). In the retail sector, Wal-Mart uses EDI to provide
real-time information with suppliers regarding order
accuracy and transparency throughout its supply chain
(Blanchard et al., 2008). Inventory visibility such as
making invoices and payments can increase by using EDI.
EDI is used to inform each department’s schedule, information related to production activities, and sales activities.
Companies view EDI as a tool to increase efficiency and be
more accommodating to customer desires than suppliers
(Hill & Scudder, 2002). Based on this description, the
following hypothesis can be proposed as follow:
H2 Electronic data interchange adoption (EDI) significantly affects the traceability system (T).
Radio Frequency Identification (RFID) Adoption
Radio frequency identification (RFID) is one of the technologies often used in communication between Internet of
Things (IoT) devices related to food safety (Bouzembrak
et al., 2019). This technology is an identification tool that
uses radio waves to detect the presence of objects through
tags. RFID has many advantages: ease of use, automatic scanning, high data rates, large memory, and the ability to scan multiple tags simultaneously (Aung & Chang,
2014b; Musa & Dabo, 2016; Patil & Suresh, 2019). RFID
works by transmitting radio signals through an antenna
with a fixed frequency from a certain distance to form an
electromagnetic field (Cao et al., 2019).
RFID is used as a tracking device for items inside and outside the store in the retail industry, such as at Wal-Mart (Blanchard et al., 2008). This tracking device keeps track of goods in stores, simplifies refilling, and retrieves items more
accurately. RFID also helps curb counterfeiting and theft and increases visibility throughout the supply chain. Several researchers have applied RFID technology in the food cold-chain for
monitoring the temperature of transport and remote storage
(Abad et al., 2009; Badia-Melis et al., 2015; Jedermann
et al., 2009; Ruiz-Garcia et al., 2008, 2010; Zou et al.,
2014), estimating shelf life (Chen et al., 2014; Nicometo
et al., 2014), monitoring of counterfeiting in food products
(Rajakumar et al., 2018), and the detection of gas or
volatile chemicals (Fiddes & Yan, 2013). Similarly, Óskarsdóttir and Oddsson (2019) state that RFID is the most advanced technology for integrity and traceability in the supply chain. Based on this
description, the following hypothesis can be proposed:
H3 Radio frequency identification adoption (RFID) significantly affects the traceability system (T).
Blockchain Adoption
Some researchers have started to apply blockchain technology to the traceability of supply chain systems in recent
years. A blockchain is a set of blocks, each containing the data of all transactions within a certain period. A digital fingerprint is used in the verification process to guarantee the validity of the information and to link a block with other blocks (Tian, 2016). The blockchain is a distributed network that keeps system data open and transparent, with no way to tamper with or destroy the data. Blockchain technology has been applied in several sectors such as finance, industry, health, social services, transportation, education, and agriculture (Cao et al., 2019).
According to Cole et al. (2019), blockchain can improve
product safety and security and improve quality management. It can also reduce illegal counterfeiting, improve
sustainable supply chain management, reduce the need for
intermediaries, and reduce the usual supply chain transactions by implementing blockchain technology in supply
chain operations and management. In the cold chain sector, Kim and Shin (2019) used blockchain to measure and monitor the entire network in a transparent and real-time manner, considering that cold chains have a very complex structure and require different criteria for each stage and item. According to Pal and Kant (2019), blockchain information provides the product traceability that all parties in the chain need. End-users can use blockchain for
obtaining product-related information that will be used to
consider before buying products. Meanwhile, auditors can
ensure that the processing, handling, transportation, and
storage regulations have been carried out correctly. The
research also includes information that blockchain can
reduce the time to track information related to product
contamination cases from one week to just seconds. Based
on this description, the following hypothesis can be
proposed:
H4 Blockchain adoption (BC) has a significant effect on
the traceability system (T).
Managerial Initiatives
In their research, Lewis and Boyle (2017) provide an
overview of industry and government initiatives to improve
the seafood supply chain’s traceability system. The traceability system’s improvement is driven by industry leaders’
initiatives, pre-competitive collaboration, public–private
partnerships, and government involvement with the private
sector. Management initiatives in a supply chain are needed
to drive the performance of a company. Some literature tries
to explain the importance of an initiative in a company, such
as pressure from stakeholders and retailers to affect adding
value to customers and company/market performance and
supply chain finance (Baert et al., 2012; Kumar et al., 2013;
Martínez-Jurado & Moyano-Fuentes, 2014; Reuter et al.,
2012). Research conducted by Sousa et al. (2008) shows the
Portuguese pear industry is driven by the retailer’s leadership in introducing a quality assurance system and traceability along the supply chain. Masudin et al. (2018) prove
that implementing green supply chain management practices (GSCM), an initiative given to the organization, has a
positive and significant impact. This shows that the managerial initiative’s role is critical in encouraging the sustainability of a supply chain. Moreover, the implementation
of new technologies in the supply chain requires managerial
attention to help increase the organizational members’
willingness to learn in an uncertain field of knowledge. The
managerial initiatives allow an outward-looking and experimental learning process without damaging ongoing
efficiency-oriented activities of the organization (Khanagha
et al., 2017). Based on this description, the following
hypothesis can be proposed:
H5 Managerial initiatives (MIs) support traceability system (T) adoption on food cold chain performance (FCCP)
improvement.
H6 Managerial initiatives (MIs) have a significant effect
on the food cold chain performance (FCCP).
### Research Method
This study is explanatory research with a quantitative method approach because the research variables are measured numerically and analyzed statistically (Nur & Supomo, 2002). The choice of method is based
on the study’s objectives. The consideration for selecting
the partial least squares–structural equation modeling (PLS-SEM) method, according to Hair et al. (2019), is when the
analysis is related to testing the theoretical framework from
a predictive perspective or the study conducted requires a
latent variable score for follow-up analysis. This study uses
the PLS-SEM method to determine whether all factors are
interrelated and affect food cold chain performance. The
analysis was conducted to score each latent variable and
identify the construct’s key driver.
This study’s respondents are an expert group of consumers who have consumed food cold chain products for at
least one year and retail employees who work in departments that handle food cold chain products. This study has
two stages of testing: pilot and field test. There are several
methods to determine a sample when the population is not
known with certainty. According to Alwi (2015), a sample
of 15 to 30 respondents is required for experimental and
comparative research. The number of respondents used in
the pilot test of this study was 30 respondents. The number
of samples used for the field test was 10 × the number of variables, following the sample-size guidance for PLS-SEM analysis (Masudin et al., 2018). With six latent variables, the number of field test respondents used is therefore 10 × 6 = 60 respondents.
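The 10× sample-size rule can be expressed as a one-line check. A minimal sketch, assuming the six construct abbreviations from the conceptual model; the function name is ours, not from the study:

```python
# Minimal sketch of the PLS-SEM rule of thumb described above: the field-test
# sample must be at least 10x the number of latent variables. The construct
# list mirrors the study's six latent variables.
LATENT_VARIABLES = ["T", "EDI", "RFID", "BC", "MI", "FCCP"]

def minimum_sample_size(n_variables: int, factor: int = 10) -> int:
    """Return the minimum respondent count for a PLS-SEM field test."""
    return factor * n_variables

print(minimum_sample_size(len(LATENT_VARIABLES)))  # prints 60
```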
The results of the questionnaire were numbers from the
Likert scale and analyzed using statistical methods. The
use of SPSS 20.0 software helps process pilot testing data
descriptively when examining the validity and reliability.
Meanwhile, PLS-SEM was used to evaluate the relationships between variables using the SmartPLS 3.2.9
software. This study measured each research variable’s
indicators using a questionnaire administered through Google Forms. The questionnaire questions are arranged based on each variable's indicators as determined in the conceptual model. The questionnaire question groups are measured on a five-level Likert scale: 5 (very important), 4 (important), 3 (neutral), 2 (less important), and 1 (not important).

Fig. 2 Conceptual model
Conceptual Model
A conceptual map describes the studied area through the compiled theories and depicts the relationships between variables (Rowley & Slack, 2004). The conceptual model maps the authors' frame of reference to make it easier for readers to understand (as shown in Fig. 2). This model is developed based on the theories of
previous researchers in the journal literature. The conceptual model describes a causal relationship and an effect
between each variable. In this study, the six latent variables are T, EDI, RFID, BC, MI, and FCCP, and the manifest variables comprise 32 attributes. The following hypotheses describe the relationships between the latent variables in this study (Abad et al., 2009).
From the conceptual model above, six hypotheses were
obtained, as given in Table 1.
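As a compact sketch, the hypothesized relationships of Table 1 can be written as a simple path list; the tuple encoding and helper function are ours, not part of the study's SmartPLS analysis (H5 is the moderation effect of MI on the T → FCCP path):

```python
# The six hypothesized paths of the structural model, encoded as
# (predictor, outcome) pairs. "MI*T" denotes the moderation term of H5.
PATHS = [
    ("T", "FCCP"),     # H1
    ("EDI", "T"),      # H2
    ("RFID", "T"),     # H3
    ("BC", "T"),       # H4
    ("MI*T", "FCCP"),  # H5: MI moderates the T -> FCCP relationship
    ("MI", "FCCP"),    # H6
]

def predictors_of(construct: str) -> list:
    """List the constructs hypothesized to affect the given construct."""
    return [p for p, o in PATHS if o == construct]

print(predictors_of("T"))     # prints ['EDI', 'RFID', 'BC']
print(predictors_of("FCCP"))  # prints ['T', 'MI*T', 'MI']
```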
Table 1 Research hypothesis

Hypothesis Relationship description
H1 T has a significant effect on FCCP
H2 EDI has a significant effect on T
H3 RFID has a significant effect on T
H4 BC has a significant effect on T
H5 MI supports FCCP to adopt T
H6 MI has a significant effect on FCCP

Operational Variable

Operational variables define a concept that can be measured by determining the idea's dimensions and characteristics (Pujihastuti, 2010). Research variables can be measured by identifying operational variables and considering each variable's processes (Plumier & Maier, 2018). The authors determined the operational
variables by identifying them through journal literature
studies. The operational variables used in this study are
described in Table 2.
### Results and Discussion
Pilot Test
The questionnaire’s data were obtained from 30 respondents with an age range between 18 and 49 years. The
expert group consisted of 30% women and 70% men from
several western and central Indonesia areas. From the
screening, it is known that 76.7% of respondents have
consumed cold-chain products for more than five years and
obtained products from minimarkets (40%), supermarkets
(36.7%), and stalls/agents (23.3%). In this test, information from retail employees is also needed to capture their comprehension of the managerial situation in the field. Retail employee data were obtained from 18 respondents (included among the 30 respondents), 40% of whom have worked for less than one year. The pilot test questionnaire results were tested for validity and reliability, as shown in Tables 3 and 4.
Pearson correlation is used to determine the strength of
research instruments in measuring precisely or determining
the validity of the answers. The criterion for acceptance of
validity is when the Pearson correlation value obtained is
more than the Rtable value (Arikunto, 2006). The Rtable
value was determined using a significance level of 5%, so
that the value of R(n-2;0.05) = R(28;0.05) = 0.361. Using SPSS
ver.20 software, data processing results indicate that all
question items were mutually correlated between variables.
Most of them had a strong correlation because their values lay between 0.70 and 0.89 (Schober et al., 2018). After comparing with the Rtable value, it can be seen that all the
questions have a Pearson correlation value that exceeds the
Rtable value (0.361). The research instrument is therefore valid and can be used in the field test.
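The critical-value calculation and item validity check described above can be sketched as follows. This is an illustrative sketch, not the study's SPSS procedure: the Likert responses are hypothetical, the helper functions are ours, and t_crit = 2.048 is the standard two-tailed t critical value for df = 28 at the 5% level.

```python
import math

def critical_r(df: int, t_crit: float) -> float:
    """Convert a t critical value into the critical Pearson correlation."""
    return t_crit / math.sqrt(t_crit ** 2 + df)

def pearson(x, y):
    """Plain Pearson correlation between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# R(28; 0.05): with n = 30 pilot respondents, df = n - 2 = 28.
r_table = critical_r(df=28, t_crit=2.048)
print(round(r_table, 3))  # prints 0.361

# Hypothetical item scores vs. respondents' total scores: the item is
# judged valid when its item-total correlation exceeds r_table.
item = [5, 4, 5, 3, 4, 5, 2, 4]
total = [28, 24, 29, 20, 25, 30, 15, 23]
print(pearson(item, total) > r_table)  # prints True for this sample
```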
Research instruments also need to be tested for accuracy
and consistency as a means of measuring research data.
This is done by testing reliability with the Cronbach's
alpha test, because the questionnaire has more than one
correct answer (Adamson & Prion, 2013). A variable is
considered reliable when its Cronbach's alpha value
exceeds 0.60, which is regarded as a strong level of
relationship (Streiner, 2003; Sugiyono, 2013). The results
of the reliability testing are given in Table 4. All of the
research variables are reliable because they meet the
rule-of-thumb criterion for Cronbach's alpha. Almost all
variables show a robust correlation because their values
lie between 0.80 and 1.00 (Sugiyono, 2013). It is concluded
that the research instrument can be used for the field test
because it has high accuracy and precision.
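The Cronbach's alpha computation and the 0.60 acceptance rule above can be sketched as follows, again with hypothetical pilot-test scores rather than the study's data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x question-items matrix of Likert scores.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical pilot data: 30 respondents, 3 items of one variable,
# built around a shared component so the items correlate.
rng = np.random.default_rng(1)
base = rng.integers(2, 6, size=(30, 1)).astype(float)
items = np.clip(base + rng.normal(0, 0.4, size=(30, 3)), 1, 5)

alpha = cronbach_alpha(items)
print(f"alpha = {alpha:.3f}, reliable = {alpha > 0.60}")
```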
Profile of Respondents and Descriptive Statistics of Field Test
After ensuring the research questionnaire is valid and
reliable, the field test was conducted with another data set.
The field test compiles data from 220 respondents from
various western, central, and eastern Indonesian regions,
as given in Table 5. Most respondents have consumed
cold-chain products for more than five years (72%), so they
know how cold-chain products are supposed to be handled,
especially during the Covid-19 pandemic. Most respondents
get cold food products from retailers (53% minimarkets and
30% supermarkets). To obtain more accurate data about the
traceability system's needs and the managerial conditions
in the food cold chain, the authors also sampled 93 retail
employees (included in the 220 respondents).
In describing the characteristics of the sample obtained,
the researcher used descriptive statistics. Descriptive
statistics can help researchers detect sample characteristics
that can influence conclusions (Thompson, 2009). Table 6
contains descriptive statistical data in this study, which
shows respondents’ tendency to assess each variable
indicator.
All questions (variable indicators) were answered by all
220 respondents. Most question indicators received the
highest score on the Likert scale of 5 (very important),
while the lowest answers for most indicators were at
Likert scale 2 (less important). The sample data show a
narrow range of standard deviations (between 0.662 and
0.787). The variation is 30%, indicating that the
respondents comprehended the
Table 2 Operational variable definition
Variable Definition Dimension Attribute
EDI EDI is a tool for exchanging data between computer systems and business partners
EDI 1 EDI technology as a transaction tool in the cold chain during the Covid-19 pandemic (Foraker et al., 2020; Hart & Saunders, 1997; Sharma & Pai, 2015)
EDI 2 EDI is a communication system between food
supply chain suppliers and consumers during
the Covid-19 pandemic (Foraker et al., 2020;
Hill & Scudder, 2002)
EDI 3 EDI technology can be accessed globally on the
food supply chain during the Covid-19
pandemic (Foraker et al., 2020; Hill & Scudder,
2002; Webster, 1995)
RFID A tool to detect the presence of an object with a tag using radio frequency
RFID 1 Data information's suitability with actual conditions along the cold supply chain during the Covid-19 pandemic (Ho et al., 2020; Óskarsdóttir & Oddsson, 2019)
RFID 2 The food supply chain information can be
accessed quickly and easily during the Covid-19
pandemic (Aung & Chang, 2014b; Otoom et al.,
2020)
RFID 3 RFID allows tracking product temperature and humidity along the cold chain during the Covid-19 pandemic (Abad et al., 2009; Garg et al., 2020)
RFID 4 There was transparency in food cold product
information during the Covid-19 pandemic
(Tian, 2016; Sarkis et al., 2020)
BC Technologies with a wide range of transactions and
distributed across parties from each block are
continuously evolving
T The traceability system detects the causes of quality and
safety problems by determining their origin and
characteristics from the upstream supply chain. The
traceability system is for data information on food
cold chains during the Covid-19 pandemic
BC 1 The information system can be accessed
anonymously by all parties in the food supply
chain during the Covid-19 pandemic (Marbouh
et al., 2020; Pal & Kant, 2019; Tian, 2016)
BC 2 Data security on the food supply chain is
guaranteed during the Covid-19 pandemic
(Marbouh et al., 2020; Tian, 2016)
BC 3 The entire network’s security on the food supply
chain is guaranteed during the Covid-19
pandemic (Marbouh et al., 2020; Tian, 2016)
BC 4 The easy-to-access database system on the food
supply chain during the Covid-19 pandemic
(Marbouh et al., 2020; Tian, 2016)
BC 5 Data obtained of food cold products in real time
during the Covid-19 pandemic (Kim & Shin,
2019; Marbouh et al., 2020)
T 1 Able to trace along the supply chain during the Covid-19 pandemic (Joshi et al., 2011; Onoda, 2020)
T 2 Highly detailed data tracing results (including
information related to transactions, locations,
product conditions, production stages, and
transportation) during the Covid-19 pandemic
(Sahin, Dallery, & Gershwin, 2002; Joshi et al.,
2011; Onoda, 2020)
Table 2 continued
Variable Definition Dimension Attribute
T 3 Degree of automation in item identification and
data collection along the supply chain during
the Covid-19 pandemic (Joshi et al., 2011;
Onoda, 2020; Sahin et al., 2002)
FCCP The food cold chain’s performance during the Covid-19
pandemic uses a temperature control system that can
inhibit microbial growth, which extends product
storage life and maintains nutritional product quality
MI An action that has elements of control, theory, and
purpose. It is the development of a unique terminology
to distinguish different cases when the organization
has the initiative (R. Cohen et al., 1998). This study
analyzes managerial initiatives during the Covid-19
pandemic in Indonesia
Cost (C) C 1 Operating costs related to service and maintenance
costs in the cooling process are minimal during
the Covid-19 pandemic (Joshi et al., 2011)
C 2 Food cold companies incurred minimal storage
and transportation costs during the Covid-19
pandemic (Joshi et al., 2011)
C 3 Affordable refrigerated handling freight charges
during the Covid-19 pandemic (Joshi et al.,
2011)
C 4 Minimizing the cost of lost products expired or
wasted due to mishandling during the Covid-19
pandemic (Joshi et al., 2011)
C 5 Provide training for staff who handle food cold
products to improve the skills and knowledge
needed during the Covid-19 pandemic (Joshi
et al., 2011)
Product
Quality
and Safety
(QS)
Service
Level
(SL)
QS 1 The company had quality and safety of food cold
products certification (Joshi et al., 2011)
QS 2 Food cold products are continuously monitored to
ensure their products’ quality and safety from
Coronavirus contamination (Joshi et al., 2011)
QS 3 The freshness of food cold products is maintained
until the end consumer during the Covid-19
pandemic (Joshi et al., 2011)
SL 1 Easy-to-use transaction methods during the Covid-19 pandemic (Joshi et al., 2011)
SL 2 Comfort and convenience in reaching consumers
during the Covid-19 pandemic (Joshi et al.,
2011)
SL 3 Flexible operating hours during the Covid-19
pandemic (Joshi et al., 2011)
SL 4 The scope of shipping with coolers is extensive
during the Covid-19 pandemic (Joshi et al.,
2011)
SL 5 Complete and varied product availability during
the Covid-19 pandemic (Joshi et al., 2011)
MI 1 Regulations issued by the organization as a
driving force for other organizations to carry out
activities in the food cold chain during the
Covid-19 pandemic (Masudin et al., 2018)
MI 2 Consumption of food cold products encourages producers to trace products along the food cold chain during the Covid-19 pandemic (Masudin et al., 2018)
Table 2 continued
Variable Definition Dimension Attribute
MI 3 Food cold product supplier initiatives in
traceability technology can increase the food
cold chain effectiveness during the Covid-19
pandemic (Masudin et al., 2018)
MI 4 Several organizations in the food cold-chain
utilize traceability systems to maintain product
quality and safety during the Covid-19
pandemic (Masudin et al., 2018)
Table 3 Validity of pilot test
Variable Indicator Pearson correlation Evidence
EDI EDI 1 0.886 Valid
EDI 2 0.860 Valid
EDI 3 0.821 Valid
RFID RFID 1 0.736 Valid
RFID 2 0.882 Valid
RFID 3 0.785 Valid
RFID 4 0.887 Valid
BC BC 1 0.866 Valid
BC 2 0.723 Valid
BC 3 0.832 Valid
BC 4 0.868 Valid
BC 5 0.853 Valid
T T 1 0.850 Valid
T 2 0.756 Valid
T 3 0.874 Valid
FCCP C 1 0.611 Valid
C 2 0.584 Valid
C 3 0.841 Valid
C 4 0.787 Valid
C 5 0.756 Valid
QS 1 0.792 Valid
QS 2 0.824 Valid
QS 3 0.763 Valid
SL 1 0.721 Valid
SL 2 0.842 Valid
SL 3 0.786 Valid
SL 4 0.713 Valid
SL 5 0.849 Valid
MI MI 1 0.858 Valid
MI 2 0.792 Valid
MI 3 0.896 Valid
MI 4 0.820 Valid
questions comprehensively. The sample data's tendency is
seen from the mean value of each indicator and variable.
The QS 2 indicator (4.445) has the highest mean value and
falls into the very important category because it exceeds
4.21 (Restuputri et al., 2020). This indicates that
respondents consider that continuous monitoring of cold
Table 4 Reliability of pilot test
Variable Cronbach’s alpha Evidence
EDI 0.814 Reliable
RFID 0.843 Reliable
BC 0.883 Reliable
T 0.749 Reliable
FCCP 0.930 Reliable
MI 0.860 Reliable
Table 5 Respondent’s profile of formal questionnaires
Profile Frequency Percentage (%)
Age
< 18 11 5
18–25 175 79.5
26–33 8 3.6
34–41 5 2.3
42–49 14 6.4
> 50 7 3.2
Gender
Female 103 46.8
Male 117 53.2
Length of work
1–5 years 38 40.9
More than 5 years 55 59.1
Education level
High school 142 64.5
Diploma 14 6.4
Bachelor 118 53.6
Master 7 3.2
chain products is critical in the traceability system.
Regular monitoring ensures the quality and safety of the
products and protects them from Coronavirus contamination.
Partial Least Square–Structural Equation Modeling
(PLS-SEM) Analysis
PLS-SEM analysis is used to analyze all constructs
between latent variables. This study is formed by a
reflective model of manifest variables (indicators), and
the framework is illustrated in Fig. 3. Blue circles
represent latent variables; their connections to other
latent variables indicate the research hypotheses. Inside
each blue circle is the R-square value of the latent
variable, while the number on each hypothesis path between
latent variables is the path coefficient value. The yellow
boxes represent the manifest variables, which are the
measured variables in this study. Each manifest variable's
loading factor value is shown on the arrow from the
manifest variable to the latent variable. The green circle
represents the moderating variable: managerial initiatives
moderating the effect of the traceability system on the
food cold chain's performance.
There are two types of model fit criteria in PLS-SEM: the
outer and inner models. The outer model measures the
relationship between latent variables and manifest
variables in terms of validity and reliability; in other
words, the outer model's suitability evaluates the
measurement model. The inner model concerns the regression
assessing the effect of one variable (construct) on other
variables and is known as the structural model evaluation
(Hair et al., 2010; Tenenhaus et al., 2005).
Table 6 Descriptive statistics of formal questionnaires
Variable Indicator N Min Max SD Mean value Mean
EDI EDI 1 220 2 5 0.725 4.232 4.197
EDI 2 220 2 5 0.662 4.150
EDI 3 220 2 5 0.735 4.209
RFID RFID 1 220 2 5 0.754 4.209 4.177
RFID 2 220 2 5 0.755 4.168
RFID 3 220 1 5 0.757 4.159
RFID 4 220 2 5 0.745 4.173
BC BC 1 220 2 5 0.744 4.186 4.289
BC 2 220 2 5 0.715 4.382
BC 3 220 2 5 0.708 4.345
BC 4 220 1 5 0.773 4.264
BC 5 220 2 5 0.680 4.268
T T 1 220 2 5 0.709 4.150 4.195
T 2 220 3 5 0.685 4.264
T 3 220 2 5 0.707 4.173
FCCP C 1 220 2 5 0.761 4.214 4.324
C 2 220 1 5 0.736 4.168
C 3 220 2 5 0.776 4.236
C 4 220 2 5 0.787 4.273
C 5 220 2 5 0.718 4.286
QS 1 220 2 5 0.743 4.400
QS 2 220 3 5 0.670 4.445
QS 3 220 2 5 0.700 4.409
SL 1 220 3 5 0.692 4.405
SL 2 220 2 5 0.687 4.414
SL 3 220 2 5 0.703 4.241
SL 4 220 2 5 0.705 4.332
SL 5 220 2 5 0.696 4.386
MI MI 1 220 1 5 0.747 4.177 4.224
MI 2 220 2 5 0.741 4.164
MI 3 220 2 5 0.730 4.245
MI 4 220 3 5 0.712 4.309
Evaluation of Measurement Model
As explained in the previous section, the measurement
model is evaluated by assessing the validity and
reliability of the latent and manifest variables (outer
model). There are two types of validity in PLS-SEM:
convergent validity, which refers to the correlation of
indicator items with one another, and discriminant
validity, which determines how distinct the constructs
are. The convergent validity between indicator constructs
can be estimated based on the
loading factor (outer loading value) and the average
variance extracted (AVE) value, as shown in Table 7.
Meanwhile, discriminant validity is generally assessed by
testing the value of each cluster using the cross-loading
test (Table 8) and then, for a more robust assessment, by
comparing the square root of the AVE, better known as the
Fornell–Larcker criterion (Table 9) (Fornell & Larcker,
1981; Hair et al., 2011, 2016). In addition, this study
also reports model fit measures, as shown in Table 13.
The model fit analysis uses the PLS-SEM indices
standardized root mean square residual (SRMR), normed fit
index (NFI), and the root mean square residual covariance
matrix (RMS_theta) (Hair et al., 2016).
In general, reliability evaluation is defined by analyzing
Cronbach’s alpha value (Allen & Yen, 2002). However,
Cronbach’s alpha has been criticized because its lower
bound values tend to underestimate true internal
Fig. 3 Initial model
consistency reliability and are sensitive to the number of
items on the scale (Nunnally, 1994; Peterson & Kim,
Composite reliability is used as an alternative because
its value is slightly higher than Cronbach's alpha
(Peterson & Kim, 2013). A recapitulation of the composite
reliability values is shown in Table 14.
The loading factor or outer loading value describes each
indicator item's contribution to measuring its variable.
An outer loading below 0.4 is considered unacceptable, and
a value below 0.7 is considered weak (Hair et al., 2011).
Some researchers therefore argue that indicators with weak
outer loadings should be excluded from the research model.
However, deleting these items will affect other values, so
before deletion the AVE and composite reliability values
need to be considered (Hair et al., 2016; Hulland, 1999).
Meanwhile, according to Ghozali (2008), an indicator
should be removed when its outer loading value is below
0.6. The AVE acceptance criterion is met when the value is
higher than 0.5 (Hair et al., 2011).
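As a check on the convergent validity figures, the AVE can be reproduced from the reported outer loadings under the standard formula (the mean of the squared loadings). Using the EDI loadings in Table 7:

```python
import numpy as np

# Outer loadings of the three EDI indicators from Table 7.
loadings = np.array([0.813, 0.846, 0.763])

# AVE = mean of squared loadings.
ave = np.mean(loadings ** 2)
print(f"AVE = {ave:.3f}")                    # reproduces the 0.653 reported for EDI
print(f"convergent validity met: {ave > 0.5}")
```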
Based on the recapitulation of the outer loading and AVE
values in Table 7, all indicator items are declared valid
because their values exceed the cutoffs of the acceptance
criteria for convergent validity. Nevertheless, several
indicator items had weak outer loading values (< 0.7):
items C 1, C 2, C 3, SL 3, and SL 4, all indicators of the
FCCP variable. The AVE value of the FCCP variable is still
acceptable, so the weak indicator items were retained even
though FCCP has the lowest AVE value among the variables.
The cross-loading test determines the value of each
cluster. The acceptance criterion is met when the values
of the indicators on their own variable are higher than
their values on the other variables (Hair et al., 2011,
2016). Based on this study's cross-loading test results in
Table 8, all indicator items in each variable are valid
because each value in the shaded (own-variable) column is
higher than the other values in its row. This indicates
that the correlations between the indicator items and
their variables are interrelated and valid. The analysis
then continues to the Fornell–Larcker criterion to support
discriminant validity.
In evaluating discriminant validity, the Fornell–Larcker
criterion also strengthens the outer measurement model. It
tests the correlations between variables in the research
model with the rule of thumb that the value on the
diagonal (a variable with itself) must exceed the other
values in its row and column (Fornell & Larcker, 1981;
Hair et al., 2016). Table 9 lists the Fornell–Larcker test
results between variables and shows a relationship that
does not meet the acceptance criteria. The correlation
value for the FCCP variable is 0.724, which is lower than
the T–FCCP correlation value of 0.735, so the Fornell–
Larcker criterion is not satisfied. As in the research
conducted by Restuputri et al. (2021), before analyzing
the model's reliability, this study also ensures that the
Fornell–
Table 7 Initial convergent validity recapitulation
Variable Indicator Outer loading AVE Evidence
EDI EDI 1 0.813 0.653 Valid
EDI 2 0.846 Valid
EDI 3 0.763 Valid
RFID RFID 1 0.795 0.615 Valid
RFID 2 0.823 Valid
RFID 3 0.739 Valid
RFID 4 0.778 Valid
BC BC 1 0.716 0.588 Valid
BC 2 0.738 Valid
BC 3 0.802 Valid
BC 4 0.788 Valid
BC 5 0.788 Valid
T T 1 0.766 0.606 Valid
T 2 0.752 Valid
T 3 0.825 Valid
FCCP C 1 0.667 0.525 Valid
C 2 0.647 Valid
C 3 0.679 Valid
C 4 0.741 Valid
C 5 0.753 Valid
QS 1 0.742 Valid
QS 2 0.789 Valid
QS 3 0.786 Valid
SL 1 0.752 Valid
SL 2 0.751 Valid
SL 3 0.670 Valid
SL 4 0.699 Valid
SL 5 0.723 Valid
MI MI 1 0.771 0.623 Valid
MI 2 0.791 Valid
MI 3 0.801 Valid
MI 4 0.795 Valid
Moderating effect (MI supports FCCP to adopt T) T*MI 1.216 1.000 Valid
Larcker recapitulation accords with the criteria. The
violation may have occurred because the weak outer loading
indicators were initially retained, affecting the other
tests (Hair et al., 2016; Hulland, 1999). We therefore
re-evaluated the model by removing the indicator items
with weak outer loading.
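The Fornell–Larcker rule applied here (the square root of each construct's AVE must exceed its correlations with the other constructs) can be sketched as a small check. The two-construct matrices below illustrate it with the FCCP/T figures reported in this study; a full application would use all seven constructs:

```python
import numpy as np

def fornell_larcker_ok(corr: np.ndarray, ave: np.ndarray) -> bool:
    """corr: construct correlation matrix; ave: AVE per construct.
    Valid when sqrt(AVE_i) exceeds every off-diagonal |corr| in row i."""
    sqrt_ave = np.sqrt(ave)
    n = len(ave)
    for i in range(n):
        for j in range(n):
            if i != j and abs(corr[i, j]) >= sqrt_ave[i]:
                return False
    return True

# Final model: corr(T, FCCP) = 0.697, AVE(FCCP) = 0.609, AVE(T) = 0.606.
corr = np.array([[1.0, 0.697],
                 [0.697, 1.0]])
ave = np.array([0.609, 0.606])
print(fornell_larcker_ok(corr, ave))        # criterion satisfied

# Initial model: corr(T, FCCP) = 0.735 exceeded sqrt(0.525) ~ 0.725.
corr_bad = np.array([[1.0, 0.735],
                     [0.735, 1.0]])
ave_bad = np.array([0.525, 0.606])
print(fornell_larcker_ok(corr_bad, ave_bad))  # criterion violated
```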
The deletion of the items with outer loading < 0.7 (C 1,
C 2, C 3, SL 3, and SL 4) affected the other loading
values. This decision was also taken in Masudin et al.'s
(2021a, 2021b) research; before continuing the measurement
model evaluation, they ensured that all indicators had
outer loadings > 0.7 and AVE values > 0.5. Based on the
recapitulation in Table 10, all indicator items now exceed
the cutoff values of the
outer loading and AVE criteria. Interestingly, after the
deletion of the five indicator items, the AVE value of the
FCCP variable increased by 0.084, and the lowest AVE value
now belongs to the BC variable.
The cross-loading test was re-run to evaluate discriminant
validity. Some correlation values between indicator items
and variables changed, especially for the FCCP variable's
indicators, because of the effect of deleting indicator
items on the same variable. However, the changes are
insignificant and remain within the cross-loading
acceptance criteria, so the value test for each cluster is
declared valid (Table 11).
Table 8 Initial discriminant validity based on cross-loading
EDI RFID BC T FCCP MI T*MI
EDI 1 0.813 0.473 0.481 0.482 0.497 0.370 - 0.152
EDI 2 0.846 0.544 0.551 0.490 0.536 0.429 - 0.106
EDI 3 0.763 0.510 0.468 0.483 0.558 0.502 - 0.197
RFID 1 0.500 0.795 0.511 0.548 0.539 0.387 - 0.153
RFID 2 0.580 0.823 0.607 0.575 0.590 0.476 - 0.205
RFID 3 0.427 0.739 0.542 0.529 0.491 0.467 - 0.178
RFID 4 0.466 0.778 0.582 0.547 0.538 0.379 - 0.150
BC 1 0.486 0.576 0.716 0.509 0.489 0.419 - 0.152
BC 2 0.436 0.524 0.738 0.482 0.640 0.359 - 0.144
BC 3 0.477 0.597 0.802 0.523 0.621 0.418 - 0.218
BC 4 0.511 0.534 0.788 0.549 0.566 0.472 - 0.162
BC 5 0.462 0.512 0.788 0.528 0.561 0.471 - 0.174
T 1 0.479 0.550 0.536 0.766 0.521 0.387 - 0.029
T 2 0.430 0.534 0.502 0.752 0.609 0.506 - 0.190
T 3 0.494 0.553 0.542 0.815 0.583 0.486 - 0.086
C 1 0.467 0.501 0.573 0.529 0.667 0.547 - 0.152
C 2 0.460 0.487 0.479 0.479 0.647 0.582 - 0.211
C 3 0.481 0.539 0.539 0.573 0.679 0.460 - 0.108
C 4 0.543 0.582 0.630 0.543 0.741 0.540 - 0.201
C 5 0.516 0.569 0.540 0.553 0.753 0.540 - 0.215
QS 1 0.471 0.528 0.573 0.566 0.742 0.439 - 0.204
QS 2 0.522 0.566 0.632 0.575 0.789 0.441 - 0.221
QS 3 0.497 0.508 0.566 0.569 0.786 0.485 - 0.249
SL 1 0.412 0.467 0.529 0.473 0.752 0.486 - 0.224
SL 2 0.452 0.477 0.614 0.549 0.751 0.416 - 0.240
SL 3 0.455 0.448 0.445 0.495 0.670 0.513 - 0.211
SL 4 0.450 0.373 0.483 0.478 0.699 0.517 - 0.238
SL 5 0.435 0.409 0.429 0.514 0.723 0.540 - 0.289
MI 1 0.396 0.471 0.479 0.574 0.537 0.771 - 0.264
MI 2 0.420 0.432 0.422 0.444 0.518 0.791 - 0.210
MI 3 0.446 0.421 0.423 0.459 0.595 0.801 - 0.220
MI 4 0.431 0.396 0.444 0.394 0.535 0.795 - 0.212
T*MI - 0.188 - 0.219 - 0.222 - 0.132 - 0.294 - 0.287 1.000
Table 9 Initial discriminant validity based on Fornell–Larcker
BC EDI FCCP MI T*MI RFID T
BC 0.767
EDI 0.619 0.808
FCCP 0.749 0.657 0.724
MI 0.560 0.537 0.694 0.789
T*MI - 0.222 - 0.188 - 0.294 - 0.287 1.000
RFID 0.715 0.630 0.689 0.545 - 0.219 0.784
T 0.677 0.601 0.735 0.592 - 0.132 0.701 0.778
After ensuring that all outer loading indicator items were
> 0.7, the Fornell–Larcker values that previously failed
the validity acceptance changed. Table 12 shows that in
the final Fornell–Larcker results, the diagonal
correlation of each variable with itself is higher than
the other values in its row and column. The FCCP–FCCP
correlation value, previously 0.724, increased to 0.781,
and the T–FCCP correlation, previously 0.735, became
0.697. The Fornell–Larcker criteria therefore meet the
acceptance criteria, and discriminant validity is declared
valid. In addition to model validity, we report the model
fit measures of this study in Table 13.
The SRMR describes the difference between the observed
correlation matrix and the model-implied (expected)
correlation matrix, serving as an absolute measure of fit
(Hair et al., 2014). The model is considered fit if the
SRMR is less than 0.10, or less than 0.08 in the more
conservative version (Hu & Bentler, 1998). The SRMR
obtained here is 0.070 < 0.10, so the model is fit.
Table 10 Final convergent validity recapitulation
The NFI is an additional measure of fit that compares the
Chi-squared value of the proposed model with that of the
null model (Bentler & Bonett, 1980). The closer the value
is to 1, the better the model fit. The NFI value of this
study is 0.912, or 91.2%, so the model's fit is
acceptable.
Only the reflective model has an RMS_theta value, which
explains how far the outer model residuals are correlated.
A good value is close to zero, meaning that the outer
residual correlations are minimal. In this study, the
RMS_theta value of 0.103 indicates that the model fits
because it is less than 0.12 (Hair et al., 2014). Based on
the three parameters
Variable Indicator Outer loading AVE Evidence
EDI EDI 1 0.813 0.653 Valid
EDI 2 0.846 Valid
EDI 3 0.763 Valid
RFID RFID 1 0.795 0.615 Valid
RFID 2 0.823 Valid
RFID 3 0.739 Valid
RFID 4 0.778 Valid
BC BC 1 0.716 0.588 Valid
BC 2 0.738 Valid
BC 3 0.802 Valid
BC 4 0.788 Valid
BC 5 0.788 Valid
T T 1 0.766 0.606 Valid
T 2 0.754 Valid
T 3 0.813 Valid
FCCP C 4 0.747 0.609 Valid
C 5 0.770 Valid
QS 1 0.793 Valid
QS 2 0.826 Valid
QS 3 0.826 Valid
SL 1 0.767 Valid
SL 2 0.782 Valid
SL 5 0.727 Valid
MI MI 1 0.777 0.622 Valid
MI 2 0.783 Valid
MI 3 0.803 Valid
MI 4 0.793 Valid
Moderating effect (MI supports FCCP to adopt T) T*MI 1.217 1.000 Valid
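The three fit-index cutoffs discussed above (SRMR < 0.10, NFI close to 1, RMS_theta < 0.12) can be checked mechanically against the values this study reports in Table 13:

```python
# Fit-index values reported in Table 13.
srmr, nfi, rms_theta = 0.070, 0.912, 0.103

# Cutoffs from Hu & Bentler (1998), Bentler & Bonett (1980),
# and Hair et al. (2014), as applied in the text.
checks = {
    "SRMR < 0.10 (0.08 conservative)": srmr < 0.10,
    "NFI close to 1 (here > 0.90)": nfi > 0.90,
    "RMS_theta < 0.12": rms_theta < 0.12,
}
for rule, ok in checks.items():
    print(f"{rule}: {ok}")
```

All three rules hold, matching the paper's conclusion that the model shows a good fit.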
Table 11 Final discriminant validity based on cross-loading
EDI RFID BC T FCCP MI T*MI
EDI 1 0.813 0.473 0.481 0.482 0.457 0.37 - 0.152
EDI 2 0.846 0.544 0.551 0.49 0.511 0.429 - 0.107
EDI 3 0.763 0.51 0.468 0.483 0.53 0.501 - 0.198
RFID 1 0.500 0.795 0.511 0.548 0.522 0.386 - 0.154
RFID 2 0.580 0.823 0.607 0.575 0.560 0.476 - 0.205
RFID 3 0.427 0.739 0.542 0.530 0.459 0.468 - 0.180
RFID 4 0.466 0.778 0.582 0.547 0.526 0.381 - 0.152
BC 1 0.486 0.576 0.716 0.509 0.448 0.419 - 0.155
BC 2 0.436 0.524 0.738 0.482 0.63 0.36 - 0.144
BC 3 0.477 0.597 0.802 0.523 0.626 0.418 - 0.219
BC 4 0.511 0.534 0.788 0.55 0.533 0.473 - 0.164
BC 5 0.462 0.512 0.788 0.528 0.542 0.472 - 0.176
T 1 0.479 0.550 0.536 0.766 0.494 0.389 - 0.029
T 2 0.430 0.534 0.502 0.754 0.590 0.507 - 0.192
T 3 0.494 0.553 0.542 0.813 0.541 0.488 - 0.086
C 4 0.543 0.582 0.630 0.543 0.747 0.541 - 0.202
C 5 0.516 0.569 0.540 0.554 0.770 0.540 - 0.216
QS 1 0.471 0.528 0.573 0.566 0.793 0.442 - 0.205
QS 2 0.522 0.566 0.632 0.575 0.826 0.442 - 0.221
QS 3 0.497 0.508 0.566 0.570 0.826 0.487 - 0.250
SL 1 0.412 0.467 0.529 0.473 0.767 0.486 - 0.226
SL 2 0.452 0.477 0.614 0.549 0.782 0.419 - 0.242
SL 5 0.435 0.409 0.429 0.514 0.727 0.539 - 0.290
MI 1 0.396 0.471 0.479 0.574 0.503 0.777 - 0.265
MI 2 0.420 0.432 0.422 0.444 0.441 0.783 - 0.210
MI 3 0.446 0.421 0.423 0.459 0.546 0.803 - 0.221
MI 4 0.431 0.396 0.444 0.393 0.474 0.793 - 0.211
T*MI - 0.188 - 0.221 - 0.224 - 0.133 - 0.297 - 0.288 1.000
Table 12 Final discriminant validity based on Fornell–Larcker
BC EDI FCCP MI T*MI RFID T
BC 0.767
EDI 0.619 0.808
FCCP 0.723 0.619 0.781
MI 0.560 0.537 0.626 0.789
T*MI - 0.224 - 0.188 - 0.297 - 0.288 1.000
RFID 0.715 0.630 0.660 0.545 - 0.221 0.784
T 0.677 0.601 0.697 0.595 - 0.133 0.701 0.778
of the fit model analyzed, it can be concluded that the
model shows a good fit. Similar findings appear in Masudin
et al.'s (2021a, 2021b) research examining the effect of
traceability on humanitarian logistics performance, which
obtained an SRMR value of 0.081, an NFI of 92.3%, and an
RMS_theta of 0.099, indicating a good model.
The composite reliability parameter of the reliability
test aims to capture the loadings beyond the construct,
which is insufficient when using the Cronbach's alpha
parameter alone (Fornell & Larcker, 1981; Hair et al.,
2016). The composite reliability acceptance criterion is a
value between 0.6 and 0.7, including
Table 13 Recapitulation of model fit
PLS-SEM index Estimated model
SRMR 0.070
NFI 0.912
RMS_theta 0.103
having moderate and acceptable reliability; if the
composite reliability value reaches 0.7 to 0.9, it is
declared strong. Table 14 lists the recapitulation of this
study's composite reliability values. The lowest value,
0.822, belongs to the traceability system (T), and the
highest, 1.000, to the moderating effect between the
traceability system and managerial initiatives (T*MI).
Based on the acceptance criteria, all variables are
declared reliable with a strong level of reliability,
showing the magnitude of the phenomenon for all
the identical indicator items in the same construct.
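As a consistency check, composite reliability can be reproduced from the outer loadings under the standard formula: the squared sum of loadings divided by that quantity plus the summed indicator error variances. Using the EDI loadings from Table 10:

```python
import numpy as np

# Outer loadings of the EDI indicators from Table 10.
loadings = np.array([0.813, 0.846, 0.763])

# CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum(1 - loading^2)).
num = loadings.sum() ** 2
cr = num / (num + (1 - loadings ** 2).sum())
print(f"CR = {cr:.3f}")      # reproduces the 0.849 reported for EDI in Table 14
```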
Evaluation of Structural Model
Structural model evaluation was conducted to evaluate the
inner relationships of this research model. Figure 4 shows a
valid and reliable research model. Furthermore, the model
has analyzed the coefficient of determination and path
coefficient.
The coefficient of determination, or R-square, describes
the variance of a latent variable explained by the other
latent variables (Hair et al., 2011, 2016). In Fig. 4, the
R-square value is shown on the endogenous variable icon:
0.572 for the traceability system variable and 0.575 for
the food cold chain performance variable. The R-square
values of the two endogenous variables fall into the
moderate prediction accuracy category because they range
between 0.33 and 0.67 (Ghozali, 2008). This shows that
57.2% of the traceability system variable is explained by
the model, and the remaining 42.8% by variables not
discussed in this study. The food cold chain
Table 14 Reliability of formal questionnaires
performance variable is explained by 57.5%, with the
remaining 42.5% not discussed in this study. These values
explain more than half of the total variance. Masudin
et al. (2021a, 2021b), in their research on the
humanitarian logistics performance variable, obtained an
R-square value of 57.3%, which likewise explains about
half of the variance of the variable.
Path coefficient analysis explains the relationship of
latent variables to other latent variables, including the
direction of the relationship (positive or negative) (Hair
et al., 2016). Table 15 summarizes the path coefficient
values obtained using a bootstrapping technique. A path
coefficient below 0.15 is considered weak, values of 0.15
to 0.45 are moderate, and values above 0.45 are strong
(Cohen, 1992). Five relationships in the research model
show a moderate to strong positive effect; only the
moderating effect variable correlates negatively with the
food cold chain performance variable.
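The Cohen (1992) strength classification used above can be sketched as a small helper applied to the Table 15 coefficients:

```python
def path_strength(coef: float) -> str:
    """Cohen (1992) rule of thumb: < 0.15 weak, 0.15-0.45 moderate,
    > 0.45 strong (applied to the absolute value)."""
    size = abs(coef)
    if size < 0.15:
        return "weak"
    if size <= 0.45:
        return "moderate"
    return "strong"

# Path coefficients from Table 15.
paths = {"EDI -> T": 0.180, "RFID -> T": 0.375, "BC -> T": 0.297,
         "T -> FCCP": 0.511, "MI -> FCCP": 0.279,
         "T*MI -> FCCP": -0.122}
for name, coef in paths.items():
    print(name, path_strength(coef))
```

Only T -> FCCP (0.511) is classified as strong, and the negative moderating path (-0.122) is weak, consistent with the discussion above.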
Hypothesis Testing
Hypothesis testing aims to determine the influence of the
exogenous, endogenous, and moderating variables. The
acceptance criterion is a T-statistic ≥ the T-table value
or a P-value ≤ the significance level (α) (Hair et al.,
2016). This study uses a significance level of 5% with a
two-tailed test, so the T-table value used is 1.96. The
following are the results of hypothesis testing using
bootstrapping techniques.
Variable Composite reliability Evidence
EDI 0.849 Reliable
RFID 0.865 Reliable
BC 0.877 Reliable
T 0.822 Reliable
FCCP 0.926 Reliable
MI 0.868 Reliable
Moderating effect (MI supports FCCP to adopt T) (T*MI) 1.000 Reliable
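The bootstrapping decision rule (T-statistic ≥ 1.96 at the 5% two-tailed level) can be sketched as follows. The data and the simple correlation coefficient below are hypothetical stand-ins for the study's PLS path coefficients:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical paired scores standing in for a latent predictor (x)
# and an outcome (y), with a built-in positive relationship.
x = rng.normal(size=220)
y = 0.5 * x + rng.normal(scale=0.8, size=220)

def coef(xs, ys):
    # Stand-in estimate; the real study bootstraps PLS path coefficients.
    return np.corrcoef(xs, ys)[0, 1]

# Resample respondents with replacement and re-estimate the coefficient.
boot = []
for _ in range(2000):
    idx = rng.integers(0, len(x), len(x))
    boot.append(coef(x[idx], y[idx]))

# T-statistic: point estimate divided by the bootstrap standard error,
# compared with the T-table value of 1.96 (5%, two-tailed).
t_stat = coef(x, y) / np.std(boot, ddof=1)
print(f"t = {t_stat:.2f}, significant = {t_stat >= 1.96}")
```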
Fig. 4 Final model
Table 15 Path coefficient recapitulation
Variable T FCCP
EDI 0.180
RFID 0.375
BC 0.297
T 0.511
MI 0.279
Moderating effect (MI will support FCCP to adopt T) - 0.122
The evaluation using bootstrapping techniques determines
the acceptance of the research hypotheses. The findings in
Table 16 are explained further below.
H1 T has a significant effect on FCCP.
The calculation for the relationship between the T and
FCCP variables obtained a t-statistic of 9.656 and a
p-value of 0.000, which meet the acceptance criteria of
the hypothesis test. Therefore, the traceability system
has a positive and significant effect on the performance
of the food cold chain. Furthermore, this hypothesis has
the highest t-statistic among the variables, which shows
that the traceability system's role in obtaining
information along the cold chain helps improve industrial
performance during the Covid-19 pandemic. These results
are relevant to previous studies.
The ongoing Covid-19 pandemic has triggered social
restriction policies that disrupt activities along the
food cold chain. The possibility of virus contamination of
food cold chain products requires a health protocol during
product handling, which increases processing time and
reduces worker movement. Most food cold chain products are
easily damaged and have a relatively short product life,
so they need to be handled quickly and swiftly to keep
product quality good (Bogataj et al., 2005; Shabani
et al., 2015; Shashi et al., 2018). Slow and uncontrolled
handling can cause food losses triggered by food damage
before the product reaches the end consumer. This
phenomenon creates an unusual routine for workers. One
managerial task in this case involves the creation or
promotion of dynamic capabilities. Dynamic capabilities
spur managerial initiatives to modify the company's
resource base or regular routines and will increase
management control capabilities in general
Table 16 Bootstrapping recapitulation
Hypothesis Relationship description T-statistic P-value Evidence
H1 T has a significant effect on FCCP 9.656 0.000 Significant
H2 EDI has a significant effect on T 2.486 0.013 Significant
H3 RFID has a significant effect on T 5.018 0.000 Significant
H4 BC has a significant effect on T 3.884 0.000 Significant
H5 MI supports FCCP to adopt T 3.428 0.001 Significant
H6 MI has a significant effect on FCCP 4.667 0.000 Significant
(Huber, 2011; Volberda, 2003; Winter, 2003). Managerial
initiatives also need operational flexibility to respond to
expected changes rapidly and aim to maximize efficiency
and minimize risk in volatile markets (van der Weerdt
et al., 2012; Volberda, 1996; Zollo & Winter, 2002).
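The t-statistics reported in Table 16 come from a bootstrapping procedure. As an illustrative aside (not part of the original analysis), the underlying idea — divide the original coefficient by the standard deviation of coefficients refitted on resampled data — can be sketched in Python, with a simple regression slope standing in for a PLS path coefficient and synthetic data in place of the study's survey responses:

```python
import random
import statistics

def slope(xs, ys):
    """OLS slope, standing in for a PLS path coefficient (illustrative only)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

def bootstrap_t(xs, ys, n_boot=2000, seed=1):
    """t-statistic = original coefficient / standard error of bootstrap coefficients."""
    rng = random.Random(seed)
    original = slope(xs, ys)
    n = len(xs)
    boots = []
    for _ in range(n_boot):
        picks = [rng.randrange(n) for _ in range(n)]  # resample cases with replacement
        boots.append(slope([xs[i] for i in picks], [ys[i] for i in picks]))
    return original / statistics.stdev(boots)

# Hypothetical indicator scores, not the study's data.
rng = random.Random(0)
x = [rng.gauss(0, 1) for _ in range(100)]
y = [0.6 * xi + rng.gauss(0, 1) for xi in x]
t = bootstrap_t(x, y)
print(f"bootstrap t-statistic: {t:.2f}")  # compared against the 1.96 cut-off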
The traceability system can ensure the product's condition by monitoring the product storage temperature (Joshi et al., 2011). Proper temperature control along the food cold-chain is needed to reduce microbial growth and preserve micronutrients in food products (Joshi et al., 2011; Liao et al., 2011; Shashi et al., 2018). Moreover, if a case of Covid-19 contamination is found in the food cold-chain, the traceability system can help trace the origin of the product and facilitate the handling of other Covid-19 cases. Extra services with a traceability system can increase customer satisfaction and trust in the quality and safety of food cold products (Joshi et al., 2011). Thus, a good and effective traceability system helps improve the food cold chain's performance during the Covid-19 pandemic.
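The origin-tracing step described above can be illustrated with a minimal sketch: a traceability back end might record, for each handled unit, the unit or stage it came from, and then walk that log backwards. The identifiers below are hypothetical, not from the study:

```python
# Hypothetical handling log: each unit records the unit/stage it came from.
handling = {
    "retail-001": "distributor-12",
    "distributor-12": "coldstore-3",
    "coldstore-3": "farm-A",
}

def trace_origin(unit, log):
    """Walk the recorded chain backwards until the source is reached."""
    path = [unit]
    while path[-1] in log:
        path.append(log[path[-1]])
    return path

print(trace_origin("retail-001", handling))
# ['retail-001', 'distributor-12', 'coldstore-3', 'farm-A']
```

Given a contaminated retail unit, the same walk identifies every upstream stage to inspect.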
H2 EDI significantly affects T.
The statistical test of the EDI variable's relationship with the T variable obtained a t-statistic of 2.486 and a p-value of 0.013. These results meet the criteria for acceptance of the hypothesis. Therefore, it is concluded that the adoption of electronic data interchange has a positive and significant effect on the traceability system variable. However, it should also be noted that the EDI variable has the lowest t-statistic among the technology adoption variables (RFID and blockchain). This shows that electronic data interchange adoption has a comparatively weak effect on the traceability of food cold-chain product information during the Covid-19 pandemic.
In addition to the advantages it offers, EDI technology also has several disadvantages, such as complicated use, sizeable initial capital costs, and, in some cases, insufficient security for certain companies (Scala & McGrath, 1993). Tracking product units along the supply chain becomes more effective and efficient when the internal management system relies on EDI technology (Hu et al., 2013). EDI allows fast and accurate data transmission with a minimum of recurring errors (Scala & McGrath, 1993). This can improve the relationship between customers and suppliers, which supports flexibility in responding to changes in demand and unexpected supply disruptions during the Covid-19 pandemic (Hobbs, 2020; Scala & McGrath, 1993).
H3 RFID has a Significant Effect on T.
Based on statistical calculations, the t-statistic for the RFID variable's effect on the T variable is 5.018, and the p-value is 0.000. Both values meet the acceptance criteria of the t-statistic and p-value parameters, so it can be concluded that the adoption of radio frequency identification has a positive and significant effect on the traceability system variable. Interestingly, the RFID variable has the highest t-statistic among the technology adoption variables. This shows that RFID technology is influential and effective in helping traceability systems collect better information on the food cold-chain during the Covid-19 pandemic. These results are also consistent with previous studies.
RFID technology's potential in wholesale supply chain traceability systems has been proven to provide operational efficiency and increase the transparency of product stocks with a short shelf life (Kärkkäinen, 2003). This is possible because RFID tags make information such as the product's use-by date easy to access (Nicola et al., 2020). In addition, RFID technology is also applied to food cold chains to monitor product temperature along the chain (Abad et al., 2009; Badia-Melis et al., 2015; Jedermann et al., 2009; Ruiz-Garcia et al., 2010; Zou et al., 2014). Temperature control errors are among the top five food quality and safety risks in the food cold-chain (Wu & Hsiao, 2020). Food losses due to decreased product quality and safety have occurred during the Covid-19 pandemic and impacted the food cold-chain (Masudin & Safitri, 2020). In addition, a monitoring system along the food cold-chain is currently required to anticipate Coronavirus transmission in cold foods (Han et al., 2021).
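A traceability back end consuming such RFID temperature logs could flag temperature-control errors as simple threshold excursions. The readings and the 4 °C limit below are hypothetical values for illustration, not figures from the study:

```python
# Hypothetical RFID tag log for one shipment: (hour into transit, °C reading).
readings = [(0, 2.1), (1, 2.4), (2, 3.9), (3, 5.2), (4, 6.0), (5, 3.5)]
LIMIT_C = 4.0  # assumed ceiling for this chilled product, not a value from the study

def excursions(log, limit):
    """Return every reading where the cold chain was breached."""
    return [(hour, temp) for hour, temp in log if temp > limit]

breaches = excursions(readings, LIMIT_C)
print(f"{len(breaches)} breach reading(s): {breaches}")  # flags hours 3 and 4
```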
H4 BC has a Significant Effect on T.
Based on Table 16, the relationship between the BC variable and the T variable obtained a t-statistic of 3.884 and a p-value of 0.000. These results also meet the acceptance criteria of the hypothesis, so it can be concluded that the blockchain adoption variable has a positive and significant relationship with the traceability system. Blockchain technology was able to help the food cold-chain traceability system during the Covid-19 pandemic. These results are consistent with previous studies on a similar topic.
Food safety and consumer confidence in the food industry can be significantly improved by utilizing blockchain technology (Tian, 2016). Blockchain technology can provide real-time information to all entities in the supply chain. In addition, blockchain can reduce the risks of centralized information systems because it is more secure, distributed, transparent, and collaborative. This capability makes it easier to monitor food quality and trace safety issues. As previously explained, there are findings that the Coronavirus can survive and remain stable for 14–21 days in cold and freezing conditions. With comprehensive and real-time monitoring, blockchain can help ensure the safety and quality of products in the food cold-chain during the Covid-19 pandemic (Han et al., 2021). Given the many advantages it offers, blockchain technology plays an important role in supporting the traceability system.
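The tamper-evidence property described above comes from linking each record to its predecessor by hash. The following is a minimal, self-contained sketch of that mechanism with hypothetical cold-chain events — an illustration of the principle, not the implementation of any particular blockchain platform:

```python
import hashlib
import json

def make_block(record, prev_hash):
    """Link a traceability record to its predecessor by hashing both together."""
    payload = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
    return {"record": record, "prev": prev_hash,
            "hash": hashlib.sha256(payload.encode()).hexdigest()}

# Hypothetical cold-chain events appended as a chain of blocks.
chain = []
prev = "0" * 64  # genesis
for event in ["harvested", "chilled at 2C", "shipped", "received at retail"]:
    block = make_block(event, prev)
    chain.append(block)
    prev = block["hash"]

def verify(chain):
    """Any tampering with an earlier record breaks every later link."""
    prev = "0" * 64
    for b in chain:
        payload = json.dumps({"record": b["record"], "prev": prev}, sort_keys=True)
        if b["prev"] != prev or hashlib.sha256(payload.encode()).hexdigest() != b["hash"]:
            return False
        prev = b["hash"]
    return True

print(verify(chain))                  # True
chain[1]["record"] = "chilled at 9C"  # tamper with a stored temperature record
print(verify(chain))                  # False
```

In a real deployment the chain would also be replicated across the supply-chain parties, which is what provides the distributed, collaborative property cited above.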
H5 MI supports T adoption on FCCP improvement.
Based on the hypothesis test in Table 16, the t-statistic of the MI variable in supporting the adoption of the T variable on the FCCP variable is 3.428, and the p-value is 0.001. These results meet the acceptance criteria of the hypothesis. However, based on the bootstrapping results in Table 14, the managerial initiative variable shows a negative relationship with the food cold-chain performance variables. Therefore, it can be concluded that managerial initiatives negatively support the adoption of the traceability system in improving food cold-chain performance.
This finding differs from the previous research conducted by Lewis and Boyle (2017), whose study shows the positive influence of industry-leading initiatives, pre-competitive collaboration, partnerships, and government involvement in improving the traceability system. This difference may occur because the participation of certain parties can sometimes exert a negative influence. Collier et al. (2004) explained that cultural inertia, increased politics, and a more constrained strategy process can negatively affect the quality of strategic decisions and the efficiency of their implementation. Excessive initiative tends to lead to unnecessary interference and may result in an ineffective strategy. When power and politics are very dominant, they can distort information and reduce the quality of strategic decisions. The urgency of fulfilling needs during the Covid-19 pandemic has triggered many parties to abuse their policies for personal benefit. Therefore, implementing a traceability system to improve food cold-chain performance requires managerial initiatives that are more structured and strategic.
H6 MI significantly affects FCCP.
The hypothesis testing results indicate that managerial initiatives positively and significantly affect the food cold chain's performance. This conclusion is based on the t-statistic of the managerial initiative variable on the FCCP variable: the t-statistic of 4.667 exceeds the t-table value (1.96), and the p-value of 0.000 is below the significance level (0.05). This is in accordance with several previous studies which state that added value for customers and company/supply chain performance can be improved through initiatives such as pressure from stakeholders (Baert et al., 2012; Kumar et al., 2013; Martínez-Jurado & Moyano-Fuentes, 2014; Reuter et al., 2012). For example, food safety status can be increased by creating a food policy or an initiative taken by risk managers in the food industry (Baert et al., 2012). This allows for an increase in the food cold chain's performance due to stakeholder involvement in decision-making or strategy. During the Covid-19 pandemic, effective and efficient policies were needed because, as previously discussed, the food cold chain deals with easily damaged products. The quality of food cold products will gradually decline if they are not handled properly (for example, through temperature monitoring or careful human handling). Concern and participation from all parties are needed to ensure good quality and safety in the food cold-chain during the pandemic because it is prone to Coronavirus transmission. Thus, the most important benefit of preventing declines in product quality and safety is the possibility of reducing food losses.
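The decision rule applied throughout this section — a hypothesis is accepted when its t-statistic exceeds the t-table value of 1.96 and its p-value falls below 0.05 — can be restated as a short, illustrative check over the Table 16 figures (a restatement of the rule, not part of the original analysis):

```python
T_TABLE = 1.96  # two-tailed critical value at the 5% significance level
ALPHA = 0.05

# (hypothesis, t-statistic, p-value) as reported in Table 16.
results = [
    ("H1", 9.656, 0.000),
    ("H2", 2.486, 0.013),
    ("H3", 5.018, 0.000),
    ("H4", 3.884, 0.000),
    ("H5", 3.428, 0.001),
    ("H6", 4.667, 0.000),
]

for name, t, p in results:
    verdict = "Significant" if (t > T_TABLE and p < ALPHA) else "Not significant"
    print(f"{name}: t = {t:.3f}, p = {p:.3f} -> {verdict}")
```

All six rows satisfy both criteria, matching the "Significant" evidence column of Table 16.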
### Managerial Implications
This section is expected to provide contributions that help improve the food cold chain's performance. The managerial implications are compiled based on the indicators with the highest factor loading values on the exogenous and moderating variables. The researchers offer the following suggestions to parties in the food cold-chain during the Covid-19 pandemic:
1. Adopting electronic data interchange technology as a communication system between food supply chain suppliers and consumers during the Covid-19 pandemic is beneficial. One of EDI technology's advantages is that it allows for the fast distribution of information with a minimum number of errors (Scala & McGrath, 1993); in other words, this technology has a high level of information accuracy. In reality, however, companies view EDI as a tool to increase efficiency and accommodate customer needs rather than suppliers (Hill & Scudder, 2002). Therefore, more attention is needed because EDI between suppliers and consumers (such as retail) can provide ordering accuracy and transparency in the food cold-chain during the Covid-19 pandemic.
2. The food supply chain information can be accessed quickly and easily during the Covid-19 pandemic. As explained in the previous section, fast and easily accessible information is very important because cold-chain products tend to be short-lived. Radio frequency identification technology can help provide information more effectively and efficiently because it combines large memory with automatic scanning (Aung & Chang, 2014b). These advantages of RFID make information systems easily accessible during the Covid-19 pandemic.
3. The security of the entire network in the food supply chain must be guaranteed during the Covid-19 pandemic. The blockchain database system integrates all data blocks and creates a distributed network (Tian, 2016). The blockchain's massive database system raises concerns among some parties regarding the security of the stored data. Moreover, all information on the blockchain is transparent, open, neutral, and reliable (Tian, 2016). Therefore, the security of the blockchain system needs to be considered so that data are not easily damaged.
4. Food cold product supplier initiatives in traceability technology can increase the food cold chain's effectiveness during the Covid-19 pandemic. The traceability system has been proven to improve the performance of the food cold-chain during the Covid-19 pandemic. However, the benefit of a traceability system naturally depends on whether users implement it optimally. Suppliers of final goods (before retail) play a significant role in product availability, so an industry with good leadership can be encouraged to generate a strategy for quality assurance and traceability along the supply chain.
### Conclusion and Limitations
Conclusion
This study discusses how food cold-chain performance can be improved during the current worldwide Covid-19 pandemic. Adopting an information system can help trace product data and information on the possibility of Coronavirus transmission. The researchers added managerial initiatives as the driving factor for the adoption of the traceability system. Six research hypotheses were formulated based on previous literature studies on a similar topic. A total of 250 respondents from various Indonesian regions participated in this study by answering the 32 questions given in a questionnaire. Finally, the data were collected and analyzed further in Sect. 5, along with a detailed discussion.
Many previous studies have described the impacts of the Covid-19 pandemic on the food industry, ranging from food safety issues and product availability to food losses caused by deteriorating food quality. This study shows that traceability system technologies such as EDI, RFID, and blockchain are beneficial for the food cold-chain during the Covid-19 pandemic. Equipped with various advantages, these technologies can facilitate easy-to-access information and the monitoring of the food cold-chain. However, it should be noted that excessive involvement in managerial initiatives can make things worse: excessive interference from a party dominant in its power can disrupt the adoption of the traceability system.
Limitations
This research has limitations which are the scope of the
study. This research only refers to the needs of users and
retail employees who have consumed and or handled coldchain products. In addition, responses were collected based
on the perspective of the Covid-19 pandemic in Indonesia.
It is expected that the proposed application of a traceability
system with managerial initiatives can help improve the
performance of food cold chains in Indonesia, as summarized in the managerial implication. Future studies can use
different respondents and circumstances/perspectives or
use different variables in adopting a traceability system to
improve the performance of food cold chains. Different
methods for selecting traceability system requirements are
also possible, such as clustering indicators by calculating
their weights.
Acknowledgements We would like to thank the reviewers for their appreciated and exceptional contribution in providing critical feedback and comments to improve the manuscript. We would like to thank the editors and the editor-in-chief for their encouragement and support in keeping the paper at this level of quality. We would like to thank the Engineering Faculty of the University of Muhammadiyah Malang for its full support in completing the research.
Funding No funding was received to assist with the preparation of
this manuscript.
Declarations
Conflict of interest The authors hereby declare that there are no
potential conflicts of interest in terms of authorship, research, and/or
publication of this article.
Informed Consent There are no human subjects in this article, and
informed consent is not applicable.
### References
Abad, E., Palacio, F., Nuin, M., De Zarate, A. G., Juarros, A., Gómez, J. M., & Marco, S. (2009). RFID smart tag for traceability and cold chain monitoring of foods: Demonstration in an intercontinental fresh fish logistic chain. Journal of Food Engineering, 93(4), 394–399.
Adamson, K. A., & Prion, S. (2013). Reliability: Measuring internal consistency using Cronbach's α. Clinical Simulation in Nursing, 9(5), e179–e180.
Aiello, G., La Scalia, G., & Micale, R. (2012). Simulation analysis of
cold chain performance based on time–temperature data. Production Planning & Control, 23(6), 468–476.
Aiyar, A., & Pingali, P. (2020). Pandemics and food systems-towards
a proactive food safety approach to disease prevention &
management. Food Security, 12(4), 749–756.
Allen, M., & Yen, W. (2002). Introduction to measurement theory (4th printing). Waveland Press Inc.
Alwi, I. (2015). Kriteria empirik dalam menentukan ukuran sampel
pada pengujian hipotesis statistika dan analisis butir. Formatif:
Jurnal Ilmiah Pendidikan MIPA, 2(2), 140–148.
Aramyan, L. H., Lansink, A. G. O., Van Der Vorst, J. G., & Van
Kooten, O. (2007). Performance measurement in agri-food
supply chains: A case study. Supply Chain Management: An
International Journal, 12(4), 304–315.
Arikunto, S. (2006). Metodelogi penelitian. Yogyakarta: Bina Aksara.
Aung, M. M., & Chang, Y. S. (2014a). Temperature management for
the quality assurance of a perishable food supply chain. Food
Control, 40, 198–207.
Aung, M. M., & Chang, Y. S. (2014b). Traceability in a food supply
chain: Safety and quality perspectives. Food Control, 39,
172–184.
Badia-Melis, R., Ruiz-Garcia, L., Garcia-Hierro, J., & Villalba, J.
I. R. (2015). Refrigerated fruit storage monitoring combining
two different wireless sensing technologies: RFID and WSN.
Sensors, 15(3), 4781–4795.
Baert, K., Van Huffel, X., Jacxsens, L., Berkvens, D., Diricks, H.,
Huyghebaert, A., & Uyttendaele, M. (2012). Measuring the
perceived pressure and stakeholders’ response that may impact
the status of the safety of the food chain in Belgium. Food
Research International, 48(1), 257–264.
Bartley, B., Gomibuchi, S., & Mann, R. (2007). Best practices in
achieving a customer-focused culture. Benchmarking: An International Journal, 14(4), 482–496.
Bechini, A., Cimino, M. G., Lazzerini, B., Marcelloni, F., & Tomasi,
A. (2005). A general framework for food traceability. Paper
presented at the 2005 Symposium on Applications and the
Internet Workshops (SAINT 2005 Workshops).
Bechini, A., Cimino, M. G., Marcelloni, F., & Tomasi, A. (2008).
Patterns and technologies for enabling supply chain traceability
through collaborative e-business. Information and Software
Technology, 50(4), 342–359.
Bentler, P. M., & Bonett, D. G. (1980). Significance tests and
goodness of fit in the analysis of covariance structures.
Psychological Bulletin, 88(3), 588.
Blanchard, C., Comm, C. L., & Mathaisel, D. F. (2008). Adding value
to service providers: Benchmarking Wal-Mart. Benchmarking:
An International Journal, 15(2), 166–177.
Bogataj, M., Bogataj, L., & Vodopivec, R. (2005). Stability of
perishable goods in cold logistic chains. International Journal of
Production Economics, 93, 345–356.
Bouzembrak, Y., Klüche, M., Gavai, A., & Marvin, H. J. (2019).
Internet of Things in food safety: Literature review and a
bibliometric analysis. Trends in Food Science & Technology, 94,
54–64.
BPS. (2019). Badan Pusat Statistik. Retrieved from http://www.bps.go.id/
Cao, Y., Jia, F., & Manogaran, G. (2019). Efficient traceability
systems of steel products using blockchain-based industrial
Internet of Things. IEEE Transactions on Industrial Informatics,
16(9), 6004–6012.
Carullo, A., Corbellini, S., Parvis, M., & Vallan, A. (2008). A
wireless sensor network for cold-chain monitoring. IEEE
Transactions on Instrumentation and Measurement, 58(5),
1405–1411.
Cohen, J. (1992). A power primer. Psychological Bulletin, 112(1),
155.
Cohen, R., Allaby, C., Cumbaa, C., Fitzgerald, M., Ho, K., Hui, B.,
et al. (1998). What is initiative? User Modeling and UserAdapted Interaction, 8(3–4), 171–214.
Cole, R., Stevenson, M., & Aitken, J. (2019). Blockchain technology:
Implications for operations and supply chain management.
Supply Chain Management: An International Journal, 24(4),
469–483.
Collier, N., Fishwick, F., & Floyd, S. W. (2004). Managerial
involvement and perceptions of strategy process. Long Range
Planning, 37(1), 67–83.
Capricorn Indonesia Consult, P. (2019). A cold chain study of
Indonesia. In E. Kusano (Ed.), The cold chain for agri-food
products in ASEAN (pp. 101–147). Jakarta: ERIA Research
Project Report FY2018 ed.
Duclos, L. K., Vokurka, R. J., & Lummus, R. R. (2003). A conceptual
model of supply chain flexibility. Industrial Management &
Data Systems, 103(6), 446–456.
Fiddes, L. K., & Yan, N. (2013). RFID tags for wireless electrochemical detection of volatile chemicals. Sensors and Actuators
B: Chemical, 186, 817–823.
Foraker, R. E., Lai, A. M., Kannampallil, T. G., Woeltje, K. F.,
Trolard, A. M., & Payne, P. R. (2020). Transmission dynamics:
Data sharing in the COVID-19 era. Learning Health Systems,
e10235.
Fornell, C., & Larcker, D. F. (1981). Evaluating structural equation
models with unobservable variables and measurement error.
Journal of Marketing Research, 18(1), 39–50.
Galbreath, J. (2006). Corporate social responsibility strategy: strategic
options, global considerations. Corporate Governance: The
international journal of business in society, 6(2), 175–187.
Ganyani, T., Kremer, C., Chen, D., Torneri, A., Faes, C., Wallinga, J.,
& Hens, N. (2020). Estimating the generation interval for
coronavirus disease (COVID-19) based on symptom onset data,
March 2020. Eurosurveillance, 25(17), 2000257.
Garg, L., Chukwu, E., Nasser, N., Chakraborty, C., & Garg, G.
(2020). Anonymity preserving IoT-based COVID-19 and other
infectious disease contact tracing model. IEEE Access, 8,
159402–159414.
Ghozali, I. (2008). Structural equation modeling: Metode alternatif
dengan partial least square (pls): Badan Penerbit Universitas
Diponegoro.
Hair, J. F., Anderson, R. E., Babin, B. J., & Black, W. C. (2010).
Multivariate data analysis: A global perspective (Vol. 7): Upper
Saddle River, NJ: Pearson.
Hair, J. F., Henseler, J., Dijkstra, T. K., & Sarstedt, M. (2014).
Common beliefs and reality about partial least squares: Comments on Rönkkö and Evermann.
Hair Jr, J. F., Hult, G. T. M., Ringle, C., & Sarstedt, M. (2016). A
primer on partial least squares structural equation modeling
(PLS-SEM): Sage publications.
Hair, J. F., Ringle, C. M., & Sarstedt, M. (2011). PLS-SEM: Indeed a
silver bullet. Journal of Marketing Theory and Practice, 19(2),
139–152.
Hair, J. F., Risher, J. J., Sarstedt, M., & Ringle, C. M. (2019). When to
use and how to report the results of PLS-SEM. European
Business Review, 31(1), 2–24.
Han, J., Zhang, X., He, S., & Jia, P. (2021). Can the coronavirus
disease be transmitted from food? A review of evidence, risks,
policies and knowledge gaps. Environmental Chemistry Letters,
19(1), 5–16.
Hart, P., & Saunders, C. (1997). Power and trust: Critical factors in
the adoption and use of electronic data interchange. Organization Science, 8(1), 23–42.
Hill, C. A., & Scudder, G. D. (2002). The use of electronic data
interchange for supply chain coordination in the food industry.
Journal of Operations Management, 20(4), 375–387.
Ho, H. J., Zhang, Z. X., Huang, Z., Aung, A. H., Lim, W.-Y., &
Chow, A. (2020). Use of a real-time locating system for contact
tracing of health care workers during the COVID-19 pandemic at
an infectious disease center in Singapore: Validation study.
Journal of Medical Internet Research, 22(5), e19437.
Hobbs, J. E. (2020). Food supply chains during the COVID-19
pandemic. Canadian Journal of Agricultural Economics/revue
Canadienne D’agroeconomie, 68(2), 171–176.
Hu, J., Zhang, X., Moga, L. M., & Neculita, M. (2013). Modeling and
implementation of the vegetable supply chain traceability
system. Food Control, 30(1), 341–353.
Hu, L.-T., & Bentler, P. M. (1998). Fit indices in covariance structure
modeling: Sensitivity to underparameterized model misspecification. Psychological Methods, 3(4), 424.
Huber, G. P. (2011). Organizations: Theory, design, future APA
handbook of industrial and organizational psychology, Vol 1:
Building and developing the organization. (pp. 117–160):
American Psychological Association.
Hulland, J. (1999). Use of partial least squares (PLS) in strategic
management research: A review of four recent studies. Strategic
Management Journal, 20(2), 195–204.
ILFA. (2020). Asosiasi Logistik dan Forwarder Indonesia. Retrieved from http://www.ilfa.or.id/
Jedermann, R., Ruiz-Garcia, L., & Lang, W. (2009). Spatial
temperature profiling by semi-passive RFID loggers for perishable food transportation. Computers and Electronics in Agriculture, 65(2), 145–154.
Joshi, R., Banwet, D., & Shankar, R. (2011). A Delphi-AHP-TOPSIS
based benchmarking framework for performance improvement
of a cold chain. Expert Systems with Applications, 38(8),
10170–10182.
Joshi, R., Banwet, D., Shankar, R., & Gandhi, J. (2012). Performance
improvement of cold chain in an emerging economy. Production
Planning & Control, 23(10–11), 817–836.
Kärkkäinen, M. (2003). Increasing efficiency in the supply chain for
short shelf life goods using RFID tagging. International Journal
of Retail & Distribution Management, 31(10), 529–536.
Khanagha, S., Volberda, H., & Oshri, I. (2017). Customer co-creation
and exploration of emerging technologies: The mediating role of
managerial attention and initiatives. Long Range Planning,
50(2), 221–242.
Khoobiyan, M., Pooya, A., Tavakkoli, A., & Rahimnia, F. (2017).
Taxonomy of manufacturing flexibility at manufacturing companies using imperialist competitive algorithms, support vector
machines and hierarchical cluster analysis. Engineering, Technology & Applied Science Research, 7(2), 1559–1566.
Kim, C., & Shin, K. (2019). A study on the measurement method of
cold chain service quality using smart contract of Blockchain.
The Journal of Society for e-Business Studies, 24(3), 1–18.
Konsynski, B. R. (1993). Strategic control in the extended enterprise.
IBM Systems Journal, 32(1), 111–142.
Kumar, S., Luthra, S., & Haleem, A. (2013). Customer involvement
in greening the supply chain: An interpretive structural modeling
methodology. Journal of Industrial Engineering International,
9(1), 1–13.
Kuo, J.-C., & Chen, M.-C. (2010). Developing an advanced multitemperature joint distribution system for the food cold chain.
Food Control, 21(4), 559–566.
Lau, R. S. M. (1996). Strategic flexibility: A new reality for world-class manufacturing. SAM Advanced Management Journal,
61(2), 11.
Lewis, S. G., & Boyle, M. (2017). The expanding role of traceability
in seafood: Tools and key initiatives. Journal of Food Science,
82(S1), A13–A21.
Liao, P.-A., Chang, H.-H., & Chang, C.-Y. (2011). Why is the food
traceability system unsuccessful in Taiwan? Empirical evidence
from a national survey of fruit and vegetable farmers. Food
Policy, 36(5), 686–693.
Marbouh, D., Abbasi, T., Maasmi, F., Omar, I. A., Debe, M. S., Salah,
K., et al. (2020). Blockchain for COVID-19: Review, opportunities, and a trusted tracking system. Arabian Journal for
Science and Engineering, 45(12), 9895–9911.
Martínez-Jurado, P. J., & Moyano-Fuentes, J. (2014). Lean management, supply chain management and sustainability: A literature
review. Journal of Cleaner Production, 85, 134–150.
Masudin, I., Aprilia, G. D., Nugraha, A., & Restuputri, D. P. (2021a).
Impact of E-procurement adoption on company performance:
Evidence from Indonesian manufacturing industry. Logistics,
5(1), 16.
Masudin, I., Lau, E., Safitri, N. T., Restuputri, D. P., & Handayani, D.
I. (2021b). The impact of the traceability of the information
systems on humanitarian logistics performance: Case study of
Indonesian relief logistics services. Cogent Business & Management, 8(1), 1906052.
Masudin, I., & Safitri, N. T. (2020). Food cold chain in indonesia
during the Covid-19 pandemic: A current situation and mitigation. Jurnal Rekayasa Sistem Industri, 9(2), 99–106.
Masudin, I., Wastono, T., & Zulfikarijah, F. (2018). The effect of
managerial intention and initiative on green supply chain
management adoption in Indonesian manufacturing performance. Cogent Business & Management, 5(1), 1485212.
Mc Carthy, U., Uysal, I., Badia-Melis, R., Mercier, S., O’Donnell, C.,
& Ktenioudaki, A. (2018). Global food security–Issues, challenges and technological solutions. Trends in Food Science &
Technology, 77, 11–20.
Mercier, S., Villeneuve, S., Mondor, M., & Uysal, I. (2017). Time–
temperature management along the food cold chain: A review of
recent developments. Comprehensive Reviews in Food Science
and Food Safety, 16(4), 647–667.
Montanari, R. (2008). Cold chain tracking: A managerial perspective.
Trends in Food Science & Technology, 19(8), 425–431.
Musa, A., & Dabo, A. A. A. (2016) A Review of RFID in Supply
Chain Management: 2000–2015, Global Journal of Flexible
Systems Management, 17(2), 189–228.
Nicola, M., Alsafi, Z., Sohrabi, C., Kerwan, A., Al-Jabir, A., Iosifidis,
C., et al. (2020). The socio-economic implications of the
coronavirus and COVID-19 pandemic: A review. International
Journal of Surgery, 78, 185–193.
Nunnally, J. C. (1994). Psychometric theory (3rd ed.). Tata McGraw-Hill Education.
Nur, I., & Supomo, B. (2002). Metodologi Penelitian Bisnis untuk
Akuntansi dan Manajemen. Edisi Kedua. BPEE.
Onoda, H. (2020). Smart approaches to waste management for post-COVID-19 smart cities in Japan. IET Smart Cities, 2(2), 89–94.
Óskarsdóttir, K., & Oddsson, G. V. (2019). Towards a decision
support framework for technologies used in cold supply chain
traceability. Journal of Food Engineering, 240, 153–159.
Otoom, M., Otoum, N., Alzubaidi, M. A., Etoom, Y., & Banihani, R.
(2020). An IoT-based framework for early identification and
monitoring of COVID-19 cases. Biomedical Signal Processing
and Control, 62, 102149.
Pal, A., & Kant, K. (2019). Using blockchain for provenance and
traceability in Internet of things-integrated food logistics.
Computer, 52(12), 94–98.
Paramita, W., Rostiani, R., Winahjoe, S., Wibowo, A., Virgosita, R.,
& Audita, H. (2021) Explaining the voluntary compliance to
COVID-19 measures: An extrapolation on the gender perspective. Global Journal of Flexible Systems Management, 22(Suppl
1), S1–S18.
Patil, M., & Suresh, M. (2019) Modelling the enablers of workforce
agility in IoT projects: A TISM approach, Global Journal of
Flexible Systems Management, 20(2), 157–175.
Paul, S. K., & Chowdhury, P. (2020). Strategies for managing the
impacts of disruptions during COVID-19: an example of toilet
paper. Global Journal of Flexible Systems Management, 21(3),
283–293.
Peterson, R. A., & Kim, Y. (2013). On the relationship between
coefficient alpha and composite reliability. Journal of Applied
Psychology, 98(1), 194.
Plumier, B. M., & Maier, D. E. (2018). Sensitivity analysis of a
fumigant movement and loss model for bulk stored grain to
predict effects of environmental conditions and operational
variables on fumigation efficacy. Journal of Stored Products
Research, 78, 18–26.
Pujihastuti, I. (2010). Prinsip penulisan kuesioner penelitian.
CEFARS: Jurnal Agribisnis dan Pengembangan Wilayah, 2(1),
43–56.
Rajakumar, G., Kumar, T. A., Samuel, T., & Kumaran, E. M. (2018).
Iot based milk monitoring system for detection of milk
adulteration. International Journal of Pure and Applied Mathematics, 118(9), 21–32.
Rediers, H., Claes, M., Peeters, L., & Willems, K. A. (2009).
Evaluation of the cold chain of fresh-cut endive from farmer to
plate. Postharvest Biology and Technology, 51(2), 257–262.
Restuputri, D. P., Indriani, T. R., & Masudin, I. (2021). The effect of
logistic service quality on customer satisfaction and loyalty
using Kansei engineering during the COVID-19 pandemic.
Cogent Business & Management, 8(1), 1906492.
Restuputri, D. P., Masudin, I., & Sari, C. P. (2020). Customers
perception on logistics service quality using Kansei engineering:
Empirical evidence from indonesian logistics providers. Cogent
Business & Management, 7(1), 1751021.
Reuter, C., Goebel, P., & Foerstl, K. (2012). The impact of
stakeholder orientation on sustainability and cost prevalence in
supplier selection decisions. Journal of Purchasing and Supply
Management, 18(4), 270–281.
Rizou, M., Galanakis, I. M., Aldawoud, T. M., & Galanakis, C. M.
(2020). Safety of foods, food supply chain and environment
within the COVID-19 pandemic. Trends in Food Science &
Technology, 102, 293–299.
Rowley, J., & Slack, F. (2004). Conducting a literature review.
Management Research News, 27(6), 31–39.
Ruiz-Garcia, L., Barreiro, P., & Robla, J. (2008). Performance of
ZigBee-based wireless sensor nodes for real-time monitoring of
fruit logistics. Journal of Food Engineering, 87(3), 405–415.
Ruiz-Garcia, L., Barreiro, P., Robla, J. I., & Lunadei, L. (2010).
Testing ZigBee motes for monitoring refrigerated vegetable transportation under real conditions. Sensors, 10(5),
4968–4982.
Sahin, E., Dallery, Y., & Gershwin, S. (2002). Performance
evaluation of a traceability system. An application to the radio
frequency identification technology. Paper presented at the IEEE
International Conference on Systems, Man and Cybernetics.
Sarkis, J., Cohen, M. J., Dewick, P., & Schröder, P. (2020). A brave
new world: Lessons from the COVID-19 pandemic for transitioning to sustainable supply and production. Resources, Conservation, and Recycling, 159, 104894.
Scala, S., & McGrath, R., Jr. (1993). Advantages and disadvantages
of electronic data interchange: An industry perspective. Information & Management, 25(2), 85–91.
Schober, P., Boer, C., & Schwarte, L. A. (2018). Correlation
coefficients: Appropriate use and interpretation. Anesthesia &
Analgesia, 126(5), 1763–1768.
Shabani, A., Torabipourv, S. M. R., & Saen, R. F. (2015). A new
super-efficiency dual-role FDH procedure: An application in
dairy cold chain for vehicle selection. International Journal of
Shipping and Transport Logistics, 7(4), 426–456.
Sharma, S., & Pai, S. S. (2015). Analysis of operating effectiveness of
a cold chain model using Bayesian networks. Business Process
Management Journal, 21(4), 722–742.
Shashi, S., Cerchione, R., Singh, R., Centobelli, P., & Shabani, A.
(2018). Food cold chain management. The International Journal
of Logistics Management, 29(3), 792–821.
Shukor, A. A. A., Newaz, M. S., Rahman, M. K., & Taha, A. Z.
(2020). Supply chain integration and its impact on supply chain
agility and organizational flexibility in manufacturing firms.
International Journal of Emerging Markets. https://doi.org/10.1108/IJOEM-04-2020-0418
Sousa, R., Yeung, A. C., & Cheng, T. (2008). Customer heterogeneity
in operational e-service design attributes. International Journal
of Operations & Production Management, 28(7), 592–614.
Streiner, D. L. (2003). Starting at the beginning: An introduction to
coefficient alpha and internal consistency. Journal of Personality
Assessment, 80(1), 99–103.
Sugiyono, P. (2013). Statistik untuk Penelitian. CV. Alvabeta
Bandung.
Tam, L. T., Ho, H. X., Nguyen, D. P., Elias, A., & Le, A. N. H.
(2021). Receptivity of Governmental Communication and Its
Effectiveness During COVID-19 Pandemic Emergency in Vietnam: A Qualitative Study. Global Journal of Flexible Systems
Management, 22(Suppl 1), S45–S64.
Tenenhaus, M., Vinzi, V. E., Chatelin, Y.-M., & Lauro, C. (2005).
PLS path modeling. Computational Statistics & Data Analysis,
48(1), 159–205.
Thompson, C. B. (2009). Descriptive data analysis. Air Medical
Journal, 28(2), 56–59.
Tian, F. (2016). An agri-food supply chain traceability system for
China based on RFID & blockchain technology. Paper presented
at the 2016 13th international conference on service systems and
service management (ICSSSM).
Tobing, B. (2015). Food supply chain. Retrieved from https://supplychainindonesia.com/wp-content/files/Rantai_Pasok_Pangan1.pdf
## 1 3
-----
Tsang, Y. P., Choy, K. L., Wu, C.-H., Ho, G. T., Lam, C. H., & Koo,
P. (2018). An Internet of Things (IoT)-based risk monitoring
system for managing cold supply chain risks. Industrial Management & Data Systems, 118(7), 1432–1462.
Ufua, D. E., Osabuohien, E., Ogbari, M. E., Falola, H. O., Okoh, E.
E., & Lakhani, A. (2021) Re-Strategising government palliative
support systems in tackling the challenges of COVID-19
lockdown in Lagos State, Nigeria. Global Journal of Flexible
Systems Management, 22(Suppl 1), S19–S32.
van der Weerdt, N. P., Volberda, H. W., Verwaal, E., & Stienstra, M.
(2012). Organizing for flexibility: Addressing dynamic capabilities and organization design. In Collaborative communities of firms
(pp. 105–125). Springer.
Vergara, I. G. P., Gómez, M. C. L., Martínez, I. L., & Hernández, J.
V. (2021). Strategies for the preservation of service levels in the
inventory management during COVID-19: A case study in
company of biosafety products. Global Journal of Flexible
Systems Management, 22(Suppl 1), S65–S80.
Volberda, H. W. (1996). Toward the flexible form: How to remain
vital in hypercompetitive environments. Organization Science,
7(4), 359–374.
Volberda, H. W. (2003). Strategic flexibility creating dynamic
competitive advantages. The Oxford handbook of strategy.
Walton, S. V., & Marucheck, A. S. (1997). The relationship between
EDI and supplier reliability. International Journal of Purchasing
and Materials Management, 33(2), 30–35.
Wang, X., Li, D., & O’brien, C. (2009). Optimisation of traceability
and operations planning: An integrated model for perishable
food production. International Journal of Production Research,
47(11), 2865–2886.
Webster, J. (1995). Networks of collaboration or conflict? Electronic
data interchange and power in the supply chain. The Journal of
Strategic Information Systems, 4(1), 31–42.
Wiersinga, W. J., Rhodes, A., Cheng, A. C., Peacock, S. J., &
Prescott, H. C. (2020). Pathophysiology, transmission, diagnosis,
and treatment of coronavirus disease 2019 (COVID-19): A
review. JAMA, 324(8), 782–793.
Winter, S. G. (2003). Understanding dynamic capabilities. Strategic
Management Journal, 24(10), 991–995.
Wu, J.-Y., & Hsiao, H.-I. (2020). Food quality and safety risk
diagnosis in the food cold chain through failure mode and effect
analysis. Food Control, 120, 107501.
Zollo, M., & Winter, S. G. (2002). Deliberate learning and the
evolution of dynamic capabilities. Organization Science, 13(3),
339–351.
Zou, Z., Chen, Q., Uysal, I., & Zheng, L. (2014). Radio frequency
identification enabled wireless sensing for intelligent food
logistics. Philosophical Transactions of the Royal Society A:
Mathematical, Physical and Engineering Sciences, 372(2017),
20130313.
Publisher’s Note Springer Nature remains neutral with regard to
jurisdictional claims in published maps and institutional affiliations.
Ilyas Masudin is a professor of logistics and
supply chain at the Industrial Engineering Department,
University of Muhammadiyah Malang, Indonesia.
He holds a Ph.D. in Logistics from RMIT
University, Australia. His research interests
include logistics optimization, supply chain management, multi-criteria decision-making and
operations management.
Anggi Ramadhani is a researcher in the Industrial
Engineering Department, University of Muhammadiyah Malang, Indonesia. Her research interests
are industrial system optimization, system modelling and operations management.
Dian Palupi Restuputri is a senior lecturer and
researcher in the Industrial Engineering Department at
the University of Muhammadiyah Malang. Her
research interests are in the area of ergonomics
and human factors engineering. She received her
bachelor’s degree in Industrial Engineering from
Diponegoro University, Indonesia (2007). She
holds a master’s degree in Industrial Engineering
from Institute of Technology Bandung, Indonesia (2012).
Ikhlasul Amallynda is a lecturer in the Industrial
Engineering Department, University of Muhammadiyah Malang, Indonesia. Her research interests
are system modeling and operations management.
Key Questions
1. What are the important elements that affect supply chain
agility and organizational flexibility?
2. What are the factors that affect the performance of the food
cold chain?
3. How do managerial initiatives moderate the relationship
between traceability systems and food cold chain
performance?
|
{
"disclaimer": "Notice: Paper or abstract available at https://pmc.ncbi.nlm.nih.gov/articles/PMC8328815, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": null,
"status": "GREEN",
"url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8328815"
}
| 2021
|
[
"JournalArticle"
] | true
| 2021-08-03T00:00:00
|
[
{
"paperId": "93f48cc16be475559997581e7b390e7644583240",
"title": "Iot Based Milk Monitoring System for the Detection of Milk Adulteration"
},
{
"paperId": "db4577428341fdd158b9c5eaad39393ca3c1b752",
"title": "Receptivity of Governmental Communication and Its Effectiveness During COVID-19 Pandemic Emergency in Vietnam: A Qualitative Study"
},
{
"paperId": "ba2951724d567114d85e60534353309ad09fddf4",
"title": "Re-Strategising Government Palliative Support Systems in Tackling the Challenges of COVID-19 Lockdown in Lagos State, Nigeria"
},
{
"paperId": "3e3a4563a3ce28c2a848072b7f149c17e234d574",
"title": "Impact of E-Procurement Adoption on Company Performance: Evidence from Indonesian Manufacturing Industry"
},
{
"paperId": "80ffc3333f8d76baa53fe31e657e3bf2bf9a70c7",
"title": "Explaining the Voluntary Compliance to COVID-19 Measures: An Extrapolation on the Gender Perspective"
},
{
"paperId": "bde27299d4dd5d7aa0019518bea8008b26a9190b",
"title": "Food quality and safety risk diagnosis in the food cold chain through failure mode and effect analysis"
},
{
"paperId": "5c2d71c38e6009ba7b5b66308f0efa8e527e9932",
"title": "The effect of logistic service quality on customer satisfaction and loyalty using kansei engineering during the COVID-19 pandemic"
},
{
"paperId": "62fe3f4604d2d5139118cf486f72b74c88e72983",
"title": "The impact of the traceability of the information systems on humanitarian logistics performance: Case study of Indonesian relief logistics services"
},
{
"paperId": "268cf786f9891adc1c747fce844a69ce2cf68f43",
"title": "Strategies for the Preservation of Service Levels in the Inventory Management During COVID-19: A Case Study in a Company of Biosafety Products"
},
{
"paperId": "cc73896955e378c8aafbb6a0e0b518b17cd8bc33",
"title": "Customer Involvement"
},
{
"paperId": "c415f19525279a97d6bea477bbb74231bcc67d69",
"title": "Can the coronavirus disease be transmitted from food? A review of evidence, risks, policies and knowledge gaps"
},
{
"paperId": "1fe5fa78b92b15bb01de85229fc19b6664f10446",
"title": "Efficient Traceability Systems of Steel Products Using Blockchain-Based Industrial Internet of Things"
},
{
"paperId": "6790cb1248fd5632afa7641dceaa65accdcb555f",
"title": "Anonymity Preserving IoT-Based COVID-19 and Other Infectious Disease Contact Tracing Model"
},
{
"paperId": "c02daecd99104df69ece23030df56beff0b37283",
"title": "An IoT-based framework for early identification and monitoring of COVID-19 cases"
},
{
"paperId": "9e89c739a3de960ed59c5089a4847b261a85a1b2",
"title": "Supply chain integration and its impact on supply chain agility and organizational flexibility in manufacturing firms"
},
{
"paperId": "5e790dcabca3a958ee6fcc23c521326d579fb442",
"title": "Food Cold Chain in Indonesia during the Covid-19 Pandemic: A Current Situation and Mitigation"
},
{
"paperId": "69ce0154ed4b1ede5ca00904d7145df558d42294",
"title": "Strategies for Managing the Impacts of Disruptions During COVID-19: an Example of Toilet Paper"
},
{
"paperId": "20f9f266a5aa1e3dbf9700e8513537e93ba0ec97",
"title": "Pandemics and food systems - towards a proactive food safety approach to disease prevention & management"
},
{
"paperId": "64a834713edb3d36f21985bc8d4e35e7124874ad",
"title": "Pathophysiology, Transmission, Diagnosis, and Treatment of Coronavirus Disease 2019 (COVID-19): A Review."
},
{
"paperId": "2917ef2ed6e8ec8f71c83a7913430c7450976942",
"title": "Blockchain for COVID-19: Review, Opportunities, and a Trusted Tracking System"
},
{
"paperId": "388531f083ccf8dea03ef46b88380e8079cc83bd",
"title": "Transmission dynamics: Data sharing in the COVID‐19 era"
},
{
"paperId": "4b9b1dcab6d0577d0fa75481f7c39ec9201fbc0d",
"title": "Safety of foods, food supply chain and environment within the COVID-19 pandemic"
},
{
"paperId": "245e3df5d42b4f77d4eb4e606eedb39fcb96c878",
"title": "Smart approaches to waste management for post‐COVID‐19 smart cities in Japan"
},
{
"paperId": "659722603a9f8902097e21b2e6af985a88bb1704",
"title": "The impacts of the novel SARS-CoV-2 outbreak on surgical oncology - A letter to the editor on “The socio-economic implications of the coronavirus and COVID-19 pandemic: A review”"
},
{
"paperId": "4642715506a470a4257b019c8698660d1579e5cb",
"title": "Food supply chains during the COVID‐19 pandemic"
},
{
"paperId": "5f1d2fa73eccf7d106979f62d6ded2e113e5bf35",
"title": "Use of a Real-Time Locating System for Contact Tracing of Health Care Workers During the COVID-19 Pandemic at an Infectious Disease Center in Singapore: Validation Study"
},
{
"paperId": "925b17cf82894ac1343dd2ac8aec3433ac23a07d",
"title": "A brave new world: Lessons from the COVID-19 pandemic for transitioning to sustainable supply and production"
},
{
"paperId": "28ba5703e4c89b3207e21e48eb27e9c1a5b57a07",
"title": "The socio-economic implications of the coronavirus pandemic (COVID-19): A review"
},
{
"paperId": "459873a0a7f6426881eb205c4523ae7411f96d26",
"title": "Estimating the generation interval for coronavirus disease (COVID-19) based on symptom onset data, March 2020"
},
{
"paperId": "a8ed6a1b8bf7e465b5a22bcccf6e03b7cceb9873",
"title": "Customers perception on logistics service quality using Kansei engineering: empirical evidence from indonesian logistics providers"
},
{
"paperId": "a2d4e1444343f5e938557d5292b5a811d3112a7e",
"title": "Internet of Things in food safety: Literature review and a bibliometric analysis"
},
{
"paperId": "3b053a20293178c2042b9b92bb4ec647f2b878b0",
"title": "Using Blockchain for Provenance and Traceability in Internet of Things-Integrated Food Logistics"
},
{
"paperId": "abdb4a5bbdcab60344497edd3e48599b64f0c00f",
"title": "Blockchain technology: implications for operations and supply chain management"
},
{
"paperId": "46f5729f4c3c75f2364a5da73a876c7da0391020",
"title": "Modelling the Enablers of Workforce Agility in IoT Projects: A TISM Approach"
},
{
"paperId": "af86ecda37a25a129a3e3f8a03db7db053d8873e",
"title": "When to use and how to report the results of PLS-SEM"
},
{
"paperId": "cae4261496acd17bdfed85896b3d0ba864af261a",
"title": "Towards a decision support framework for technologies used in cold supply chain traceability"
},
{
"paperId": "f5644e4ddd62f33ef1b0c53e66d97158aadffd27",
"title": "Sensitivity analysis of a fumigant movement and loss model for bulk stored grain to predict effects of environmental conditions and operational variables on fumigation efficacy"
},
{
"paperId": "ac7cadf5dc641c4a06b434f10a1ecdf879d85d79",
"title": "An Internet of Things (IoT)-based risk monitoring system for managing cold supply chain risks"
},
{
"paperId": "c4e7c2f8634feb4faa713bcded4661f57417e25a",
"title": "Global food security – Issues, challenges and technological solutions"
},
{
"paperId": "07095e32861a3982d0f8d1794b2e560b81acbdc2",
"title": "Food cold chain management"
},
{
"paperId": "b8d5dbafc507ec21221d523ae869fc14941f61ad",
"title": "Correlation Coefficients: Appropriate Use and Interpretation"
},
{
"paperId": "69415b4fdf40828bc8227d6ba31f770c3eaf733d",
"title": "The effect of managerial intention and initiative on green supply chain management adoption in Indonesian manufacturing performance"
},
{
"paperId": "2b5e0999828c699d5bee400b84991df5a4ec66ad",
"title": "The Expanding Role of Traceability in Seafood: Tools and Key Initiatives"
},
{
"paperId": "cf6473f47bcbbb21968963a0c2ba567e456c7a39",
"title": "Partial Least Squares Strukturgleichungsmodellierung (PLS-SEM): Eine anwendungsorientierte Einführung"
},
{
"paperId": "0653146109dbaac5ad4f04feca4dc717251b54c7",
"title": "Time-Temperature Management Along the Food Cold Chain: A Review of Recent Developments."
},
{
"paperId": "25135939a74fc5b75e7a3e429b23226b84dfc8ed",
"title": "Customer Co-Creation and Exploration of Emerging Technologies: The Mediating Role of Managerial Attention and Initiatives"
},
{
"paperId": "1cd03d5374d2dad941624a30be8645add4bf1d65",
"title": "Taxonomy of Manufacturing Flexibility at Manufacturing Companies Using Imperialist Competitive Algorithms, Support Vector Machines and Hierarchical Cluster Analysis"
},
{
"paperId": "24cdeb7d7421012c2fdd362b8e2816c105b7071f",
"title": "An agri-food supply chain traceability system for China based on RFID & blockchain technology"
},
{
"paperId": "9f448766dc6bbc3c17c426a8c9dffb21a8498dcb",
"title": "A Review of RFID in Supply Chain Management: 2000–2015"
},
{
"paperId": "eae43f41ec60dbb644e42fe2337a5fbf4bfc663d",
"title": "A Review of RFID in Supply Chain Management: 2000–2015"
},
{
"paperId": "5a60e37f03bcfd7991badcff3137a2a60ada52b8",
"title": "Food supply chain"
},
{
"paperId": "5e90c524e288bce7944134b72379cd26f7e70441",
"title": "Kriteria Empirik dalam Menentukan Ukuran Sampel pada Pengujian Hipotesis Statistika dan Analisis Butir"
},
{
"paperId": "3c25f6863fb783cc3c8f13964be8bc8b287cd961",
"title": "Analysis of operating effectiveness of a cold chain model using Bayesian networks"
},
{
"paperId": "23cd440f70c6b8371056b4c51f72f8bab9052c19",
"title": "A new super-efficiency dual-role FDH procedure: an application in dairy cold chain for vehicle selection"
},
{
"paperId": "3657e479216f61a439d163304ecc4aa12cb51662",
"title": "Refrigerated Fruit Storage Monitoring Combining Two Different Wireless Sensing Technologies: RFID and WSN"
},
{
"paperId": "94cb959f57caeb70f7235f8bfcefd31cafa6da2c",
"title": "Lean Management, Supply Chain Management and Sustainability: A Literature Review"
},
{
"paperId": "3cef79debbc116f0ebfb92778bc9d35fb00d5db2",
"title": "STATISTIK UNTUK PENELITIAN"
},
{
"paperId": "fcd6db0d795464932917a24c7051762ef0f2f4bf",
"title": "Radio frequency identification enabled wireless sensing for intelligent food logistics"
},
{
"paperId": "970ab694af58df156ebbb1d5c462e29158dc0ae4",
"title": "Partial least squares structural equation modeling (PLS-SEM): An emerging tool in business research"
},
{
"paperId": "8e690230905042d2329e9e9cb06786302bfc8bc2",
"title": "Temperature management for the quality assurance of a perishable food supply chain"
},
{
"paperId": "ef719ba9342801d2c2d829819a7fa792027c4662",
"title": "PRINSIP PENULISAN KUESIONER PENELITIAN"
},
{
"paperId": "5ebb49ecf0393b35ccab3a6f28620a284a6ceec7",
"title": "Traceability in a food supply chain: Safety and quality perspectives"
},
{
"paperId": "a78c912853eafb15bd6fe1ec404211dad8cd2715",
"title": "Common Beliefs and Reality About PLS"
},
{
"paperId": "03a8ef2a57c57741aff68a7746fb18d45d6590e9",
"title": "RFID tags for wireless electrochemical detection of volatile chemicals"
},
{
"paperId": "f32577a03429b734c60ed7fdd5ad0b93f5520daf",
"title": "On the relationship between coefficient alpha and composite reliability."
},
{
"paperId": "498e4699838130773ee1c7a43fa0811354ef325a",
"title": "Reliability: Measuring Internal Consistency Using Cronbach's α"
},
{
"paperId": "11501638e3fa44bfd3af531d4bd6c4229b053689",
"title": "Customer involvement in greening the supply chain: an interpretive structural modeling methodology"
},
{
"paperId": "6a8cf47b79e2b2eed13f47a86ba0e38293b56a8b",
"title": "Modeling and implementation of the vegetable supply chain traceability system"
},
{
"paperId": "755cd666b08f64b9789af36ffafd5aa5cb3c4ebb",
"title": "Forecasting S&P 500 index using artificial neural networks and design of experiments"
},
{
"paperId": "c1c7159a9f54e71dfce282196de93293edf1fe70",
"title": "A Primer on Partial Least Squares Structural Equation Modeling"
},
{
"paperId": "624b608186af9f1b42e0a10746f90bfa721fe292",
"title": "The impact of stakeholder orientation on sustainability and cost prevalence in supplier selection decisions"
},
{
"paperId": "59e7b22f9fbf4504f30232eb09c6ce4f73540f57",
"title": "Performance improvement of cold chain in an emerging economy"
},
{
"paperId": "b948f919d039a1daf47ca74ad8a856e1e9b5ca1c",
"title": "Measuring the perceived pressure and stakeholders' response that may impact the status of the safety of the food chain in Belgium"
},
{
"paperId": "d365bd12fdad88957430641180515d3bc2aaef9f",
"title": "Simulation analysis of cold chain performance based on time–temperature data"
},
{
"paperId": "1d911c03cc60113541eaf404ea65d3cd20a9833b",
"title": "Why is the food traceability system unsuccessful in Taiwan? Empirical evidence from a national survey of fruit and vegetable farmers"
},
{
"paperId": "54d057fe71d164b549fe22620305a0201f77e552",
"title": "A Delphi-AHP-TOPSIS based benchmarking framework for performance improvement of a cold chain"
},
{
"paperId": "eae2faa5b4c771867fd54ea6436762028d56bcb6",
"title": "PLS-SEM: Indeed a Silver Bullet"
},
{
"paperId": "f2fe155aea38a3ebedc22b3bf45fca782596d28e",
"title": "Testing ZigBee Motes for Monitoring Refrigerated Vegetable Transportation under Real Conditions"
},
{
"paperId": "d1ee5c92542a2af6196612ea03a9929ecd822d7a",
"title": "Developing an advanced Multi-Temperature Joint Distribution System for the food cold chain"
},
{
"paperId": "45d261b873d614816f00b52db7e9d8ae433a7b3e",
"title": "RFID smart tag for traceability and cold chain monitoring of foods: Demonstration in an intercontinental fresh fish logistic chain"
},
{
"paperId": "f524273979e5950495e231c8358a957175d9f414",
"title": "A Wireless Sensor Network for Cold-Chain Monitoring"
},
{
"paperId": "225ae484950de378ce1dc1ccad833fa78ba1cc82",
"title": "Optimisation of traceability and operations planning: an integrated model for perishable food production"
},
{
"paperId": "e5412da0fb9dc880e13c8efabad598878aca2e0d",
"title": "Descriptive data analysis."
},
{
"paperId": "5a290637aaadf5e85aa6efc9e8e57a7e15dfb09e",
"title": "Spatial temperature profiling by semi-passive RFID loggers for perishable food transportation"
},
{
"paperId": "967bdad0b793e7f72ccc60a562375fe25650267e",
"title": "Evaluation of the cold chain of fresh-cut endive from farmer to plate"
},
{
"paperId": "a73605d64b19234d90190e92eb03d93fd0c3133f",
"title": "Performance of ZigBee-Based wireless sensor nodes for real-time monitoring of fruit logistics"
},
{
"paperId": "d7ef01937b0c077dbc63157480dfd6f38fa87f43",
"title": "Cold chain tracking: a managerial perspective"
},
{
"paperId": "8800ff371fbd3dbed6db3f994d4ae2850534fc32",
"title": "Customer heterogeneity in operational e‐service design attributes: An empirical investigation of service quality"
},
{
"paperId": "fcfa812ba3d442401338be46706c66e7773710e6",
"title": "Adding value to service providers: benchmarking Wal‐Mart"
},
{
"paperId": "2ecf222b0ce061755dc91bac34552695ae36f19c",
"title": "Patterns and technologies for enabling supply chain traceability through collaborative e-business"
},
{
"paperId": "80a2b61787d6c2316ecc9edf503b067a645d2ba2",
"title": "Best practices in achieving a customer‐focused culture"
},
{
"paperId": "2440c5df228a1dc0642abcde02b9897cddc407dc",
"title": "Performance measurement in agri‐food supply chains: a case study"
},
{
"paperId": "8d2cfa33a93d95b39bb8617085364fd8fa6a6e1e",
"title": "Strategic Flexibility Creating Dynamic Competitive Advantages"
},
{
"paperId": "bdce9d49de9ab7b7b62aca1c3a35eaac5bd59163",
"title": "Corporate social responsibility strategy: strategic options, global considerations"
},
{
"paperId": "836a45defe5279f626d785087a9e41638abbb05f",
"title": "A General Framework for Food Traceability"
},
{
"paperId": "d5e368f9b8a234e698fc0cb2fec5c1df1c326fca",
"title": "Stability of perishable goods in cold logistic chains"
},
{
"paperId": "3dce3b02eca5ce8432d524124b41a2b018869ccd",
"title": "Conducting a Literature Review"
},
{
"paperId": "22a9891bb463696b09ec4d332754dedbb9db8f78",
"title": "Managerial Involvement and Perceptions of Strategy Process"
},
{
"paperId": "f46ce27219f73a535cf018718dbd519898d300f4",
"title": "Understanding dynamic capabilities"
},
{
"paperId": "3047a9405ae0eda421791448878966e503b63e6c",
"title": "Increasing efficiency in the supply chain for short shelf life goods using RFID tagging"
},
{
"paperId": "335ed00dc5c632e1ac992aecab94d031f42ae724",
"title": "A conceptual model of supply chain flexibility"
},
{
"paperId": "9816722b7355561a07200e5ceb8174c052b90515",
"title": "Starting at the Beginning: An Introduction to Coefficient Alpha and Internal Consistency"
},
{
"paperId": "1c92e23164564ba69a30930cf7a60b720d05bd1c",
"title": "Performance evaluation of a traceability system. An application to the radio frequency identification technology"
},
{
"paperId": "92aea0529c130c8554ed4dd568ba6038fbc22e35",
"title": "The use of electronic data interchange for supply chain coordination in the food industry"
},
{
"paperId": "6ff68e94f9a9196c3d4f77d3b099a61352854216",
"title": "Deliberate Learning and the Evolution of Dynamic Capabilities"
},
{
"paperId": "75db310847a48e30c2a6235c73e6e59cd11ecf23",
"title": "Sensitivity Analysis"
},
{
"paperId": "b66fb884d8f37eb96e7b4bee0777b4b9ce7ce946",
"title": "Use of partial least squares (PLS) in strategic management research: a review of four recent studies"
},
{
"paperId": "4f318929bc14ddad7c9065c812940b3be9cf45f9",
"title": "What is Initiative?"
},
{
"paperId": "6b130ce87fe0331f97309feeb597d6aa093a2893",
"title": "The Relationship Between EDI and Supplier Reliability"
},
{
"paperId": "d531e4b01439a7055d27c140df5f008e2f14381e",
"title": "Power and Trust: Critical Factors in the Adoption and Use of Electronic Data Interchange"
},
{
"paperId": "ed4301ba45c7a7ce22c4dd0149c54ad9b11b560a",
"title": "Towards The Flexible Form: How To Remain Vital in Hypercompetitive Environments"
},
{
"paperId": "fb789d94bcaf697141bc6c2128c37c43e6b68d12",
"title": "Strategic Flexibility: A New Reality for World-Class Manufacturing"
},
{
"paperId": "07cd8ba3367e2a12b00143d5da33f641fa019117",
"title": "Networks of collaboration or conflict? Electronic data interchange and power in the supply chain"
},
{
"paperId": "978779728965c2884e4587df44b34bc2e774f8f4",
"title": "Advantages and disadvantages of electronic data interchange an industry perspective"
},
{
"paperId": "48e52c24ffaeaf268c787064d6e810f1f62c8c38",
"title": "Conducting A Literature Review"
},
{
"paperId": "1f3c7f6145b3bafef321b6808ce4097810f36ff9",
"title": "Simulation analysis"
},
{
"paperId": "b5cbb975c96b917792e0415a5d17c757c1c636cb",
"title": "Evaluating Structural Equation Models with Unobservable Variables and Measurement Error"
},
{
"paperId": "7494a3c88adeae87f48d20927aac72c67bcf9eb1",
"title": "Significance Tests and Goodness of Fit in the Analysis of Covariance Structures"
},
{
"paperId": "182acb41b0cc1e05118a4583f0b1387f61b942e1",
"title": "Socio-economic implications"
},
{
"paperId": "692902b35c28f082ae0e883cbc913a2a685fa328",
"title": "Towards a Decision"
},
{
"paperId": "a77c01f57e90ce660306b3690a506d970040d149",
"title": "Increasing the Efficiency of Supply Chain"
},
{
"paperId": "a3552e6caacbdfdeb90fa20c690b4405c6c9c263",
"title": "Temperature Management"
},
{
"paperId": null,
"title": "Asosiasi Logistik dan Forwarder Indonesia"
},
{
"paperId": null,
"title": "A cold chain study of Indonesia"
},
{
"paperId": "d31d316d7bbd39a2544696b8031e1b004a5f1763",
"title": "A Study on the Measurement Method of Cold Chain Service Quality Using Smart Contract of Blockchain"
},
{
"paperId": null,
"title": "Badan Pusat Statistik"
},
{
"paperId": null,
"title": "The cold chain for agri-food products in ASEAN"
},
{
"paperId": "9fbac72779a3070559b3f3a0931bb42acc918aa4",
"title": "Common Beliefs and Reality about Partial Least Squares : Comments on Rönkkö and Evermann"
},
{
"paperId": "c9de5071197fd8adf6b320ef2b8381f660f61395",
"title": "Organizing for Flexibility: Addressing Dynamic Capabilities and Organization Design"
},
{
"paperId": "74c9fddabdd91aa64864dd21b79d6e25295554a7",
"title": "Organizations: Theory, design, future."
},
{
"paperId": "f0ea368b924644319d75e9e607b4e2b50bfdf382",
"title": "APA handbook of industrial and organizational psychology, Vol 1: Building and developing the organization."
},
{
"paperId": "89fe2ca0bc4ea3f53f5745de6a88e094b8734a2b",
"title": "Multivariate data analysis : a global perspective"
},
{
"paperId": "f85e8771b19c7fc24226d5dae22f60b387a742fb",
"title": "Relationship between"
},
{
"paperId": null,
"title": "Structural equation modeling: Metode alternatif dengan partial least square (pls): Badan Penerbit Universitas"
},
{
"paperId": "269bdf5846d71ac3eee425376215e7f1bba42576",
"title": "PLS path modeling"
},
{
"paperId": "83b205bb0de66586214bdaabac4b1ec67e24fe0f",
"title": "the Effect of"
},
{
"paperId": null,
"title": "Introduction to measurement theory. 4 (printing)"
},
{
"paperId": "97e63a04355f93f503bbdb450369ef3ae775f648",
"title": "Metodologi Penelitian Bisnis : untuk akuntansi dan manajemen"
},
{
"paperId": "a92c9726361d9c1d165dbf2ea781b6c48363a816",
"title": "Fit indices in covariance structure modeling : Sensitivity to underparameterized model misspecification"
},
{
"paperId": null,
"title": "Toward the flexible form: How to remain"
},
{
"paperId": null,
"title": "Psychometric theory 3E: Tata McGraw-hill education"
},
{
"paperId": "1d4a47c88725073b95e0b0283622056e2d17e2c4",
"title": "Strategic Control in the Extended Enterprise"
},
{
"paperId": "124d726971ebb0858df205143d58b364537dae67",
"title": "QUANTITATIVE METHODS IN PSYCHOLOGY A Power Primer"
}
] | 27,103
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Education",
"source": "s2-fos-model"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/fff7eeabcf77501e6dd77f13095a1b7c6533c4d8
|
[
"Computer Science"
] | 0.873751
|
MetaEdu: a new framework for future education
|
fff7eeabcf77501e6dd77f13095a1b7c6533c4d8
|
Discover Artificial Intelligence
|
[
{
"authorId": "2212663863",
"name": "Luobin Cui"
},
{
"authorId": "2212048988",
"name": "Chengzhang Zhu"
},
{
"authorId": "49209539",
"name": "Ryan Hare"
},
{
"authorId": "144928235",
"name": "Ying Tang"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"Discov Artif Intell"
],
"alternate_urls": null,
"id": "dda0a41e-efcd-40e6-87f7-a663355aceb3",
"issn": "2731-0809",
"name": "Discover Artificial Intelligence",
"type": "journal",
"url": "https://www.springer.com/journal/44163"
}
|
The potential of the metaverse in the field of education is an area of increasing interest, with many researchers exploring the space to increase the ease and efficacy of student education while reducing time and labor requirements to deliver effective teaching. However, there has been little work on the systematic and technological aspects of delivering education through the metaverse. To fill this gap, we propose a metaverse education system that takes advantage of virtual reality and Web3 blockchain technologies to create a social learning environment. With this added emphasis on social aspects, learners are able to socialize and engage in collaborative efforts to improve their own knowledge. Using blockchain technology, the system can also help to ensure security and transparency while also keeping progression and grading fair for all participating students.
|
# Discover Artificial Intelligence
**Research**
## MetaEdu: a new framework for future education
**LuoBin Cui[1] · ChengZhang Zhu[1] · Ryan Hare[1] · Ying Tang[1]**
Received: 1 January 2023 / Accepted: 28 February 2023
© The Author(s) 2023 OPEN
**Abstract**
The potential of the metaverse in the field of education is an area of increasing interest, with many researchers exploring
the space to increase the ease and efficacy of student education while reducing time and labor requirements to deliver
effective teaching. However, there has been little work on the systematic and technological aspects of delivering education through the metaverse. To fill this gap, we propose a metaverse education system that takes advantage of
virtual reality and Web3 blockchain technologies to create a social learning environment. With this added emphasis on
social aspects, learners are able to socialize and engage in collaborative efforts to improve their own knowledge. Using
blockchain technology, the system can also help to ensure security and transparency while also keeping progression
and grading fair for all participating students.
**Keywords Metaverse learning · Artificial intelligence · Parallel Intelligence · Blockchain**
#### 1 Introduction
Education is fundamental for the growth and advancement of society because it helps all people understand new
concepts, ideas, and methodologies to better the world. Understanding how people learn to offer an education system
that achieves effective learning for all people has always been challenging. While some similarities exist, most students
have significantly different preferred approaches to learning new concepts. For example, many of them prefer guided
learning approaches to self-driven discovery learning [1]. Ideally, all education would be personalized to each individual
student’s preferences. However, the wide range of learning styles and varying degrees of aptitude makes it hard for traditional teaching methods to be universally effective, especially when considering personalized learning approaches.
Furthermore, the current one-size-fits-all approach to education presents a barrier to students who would succeed if
given personalized coaching [2].
To tackle this challenge, early research efforts have been devoted to intelligent tutoring systems (ITSs), where
computational intelligence methods are used to mimic human tutors. As stated in a recent survey [3], the long history of productive research of ITSs has resulted in successful applications in education [4], military training [5], and
healthcare [6], with even more work still ongoing. Early ITSs are often described as “homework helpers”, where a set
of generalized or specific hints is provided upon a learner’s request [7]. If a student were puzzled with a problem
and failed to phrase a meaningful question, older ITSs might offer irrelevant or incorrect guidance that harms the
student more than helps. With this in mind, ITSs continue to improve their mathematical student models through
LuoBin Cui, ChengZhang Zhu and Ryan Hare contributed equally to this work
- Ying Tang, tang@rowan.edu; LuoBin Cui, cuiluo77@students.rowan.edu; ChengZhang Zhu, zhuche95@students.rowan.edu; Ryan Hare,
harer6@students.rowan.edu | [1]Department of Electrical and Computer Engineering, Rowan University, Glassboro, NJ 08028, USA.
Discover Artificial Intelligence (2023) 3:10 | https://doi.org/10.1007/s44163-023-00053-9
sensor informatics and machine learning. Rather than requiring students to ask relevant questions, modern ITSs
monitor students’ behaviors in their learning and identify their individual needs for support [3]. However, modern
ITSs still have issues engaging students and providing interesting lessons. Furthermore, sensor informatics is a limited
approach since many applications will not allow for the easy use of complex external sensors.
A second line of work that aims to overcome these shortcomings is to exploit the strengths of ITSs and increase
student engagement through gamification. So called adaptive serious games use the principles of gamification to
present educational concepts in an enjoyable and engaging setting. In other words, students can be distracted by
game playing to the point where they do not recognize that they are learning. By adding intelligent or adaptive
support, these games can be fully self-contained, providing lessons without the need for instructor intervention.
While providing such personalized serious games is important and holds much potential benefit [8], many challenges still exist in the area. Although games provide a great environment to support contextualized knowledge construction, the requirement of self-directed and self-regulated learning on students makes it difficult to maximize a game’s
potential. While there is a wide range of data available in games for developers and researchers to analyze student
performance and game effectiveness, physical learning data is still sparse as non-invasive physical sensors are often
challenging to implement. Without the necessary data, it is impossible to take full advantage of the power of data
mining and artificial intelligence to build accurate and precise student/player models. Furthermore, many adaptive
serious games offer one-player experiences, which do not consider the benefits of more social and group learning.
However, recent technological advancement has made it possible for learning to occur anywhere and anytime. Any
new and effective systems and platforms must consider that learning is no longer confined to classrooms, and must
be able to capture learner information in any possible setting.
Metaverse, considered as the next generation of social connection [9], presents one potential solution to the
aforementioned challenge in education. By extending physical learning through virtual and augmented technologies, physical education can be seamlessly integrated with virtual learning. By combining virtual reality learning with
physical learning, an educational social space can be constructed where students are able to interact and socialize
with peers while learning. Additionally, the flexible and configurable nature of virtual spaces makes it possible to tailor
a wide range of lessons and educational approaches including personalized support. However, Metaverse education
is still an emerging topic, with few efforts made to develop deep systematic approaches to this type of education
system. Our prior work attempted to provide a systematic model for Metaverse education from the perspective of
non-player characters (NPCs) that tutor students [10]. In that work, we did not consider other types of NPCs that
would learn alongside the students. In other words, we did not consider the benefits that social interaction with these learner NPCs would bring to an educational system. And though these aspects are beneficial to consider, they also
raise numerous issues with system security and safety. This paper aims to address the challenges and make the following contributions:
A) Extending our prior work, this paper proposes MetaEdu, a novel framework that integrates both artificial intelligence (AI) and Web3 technologies through the ACP method (Artificial societies, Computational experiments, and
Parallel execution) for effective Metaverse learning. MetaEdu considers that learning can occur everywhere, both
within and outside of a standard classroom, including social interactions via extracurricular activities such as study
groups. The three developmental phases of MetaEdu are then defined, and their relations are elaborated to show
the progression and symbiosis of virtual and physical learning.
B) A detailed architecture of MetaEdu is then developed and analyzed, showing how the key technologies are applied
to design various types of NPCs in the virtual space with the aim to optimize physical learning. In particular, blockchain technology is used to ensure the security, transparency, and fairness of shared social connection, while AI is
deployed to provide students with an adaptive educational experience as they interact with MetaEdu.
The rest of the paper is organized as follows: Sect. 2 provides a review of relevant technologies that inspired the
proposed system, and a discussion of outstanding issues with existing research. Section 3 presents the definition
of MetaEdu, with the emphasis on its three developmental phases. Section 4 discusses challenges that could arise
when moving forward toward the implementation stage of such a system, followed by our conclusions in Section 5.
#### 2 Related work
##### 2.1 ITS and serious games
ITSs have made great strides in recent years [11], sharing responsibility with instructors for estimating student knowledge and providing coaching and tutoring. Their effectiveness has been demonstrated in various fields of education,
such as computer programming [12], language learning [13], dynamic system modelling [14], mathematics [14], and
more general-purpose e-learning approaches [15]. By providing students with more personalized education, ITSs
aim to improve the efficacy of education while simultaneously reducing the strain on instructors’ limited time and
resources. With an ITS, students can receive timely and personalized feedback on their learning without instructor
intervention.
Among the various successful implementations, there are many AI methods that have been applied to map student data or performance into actionable system decisions. Methods like reinforcement learning [16] and genetic
algorithms [17] allow AI systems to learn and adapt to new data. Other methods like Bayesian approaches [18] and
fuzzy logic [19] allow experts to define their own logical behavior for AI tutors.
Beyond methods that focus solely on the AI side of ITSs, there has also been extensive developments in data mining [20], big data [21], and multimodal learning analytics [3] for educational approaches, with both areas showing
promise for integration with more advanced AI methods. Methods like generative adversarial networks [22], unsupervised learning [23], and clustering [24] can work with student data to spot trends and make predictions that in
turn can be used by AI methods to provide appropriate support.
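As a concrete illustration of a Bayesian approach to student modeling, the sketch below implements standard Bayesian knowledge tracing, which updates a mastery probability from each observed answer. The parameter values (slip, guess, and learning rates) are illustrative assumptions rather than values drawn from any cited system.

```python
def bkt_update(p_mastery, correct, p_slip=0.1, p_guess=0.2, p_learn=0.15):
    """One Bayesian knowledge tracing step: condition the mastery estimate
    on a single observed answer, then apply the learning transition."""
    if correct:
        evidence = p_mastery * (1 - p_slip)
        posterior = evidence / (evidence + (1 - p_mastery) * p_guess)
    else:
        evidence = p_mastery * p_slip
        posterior = evidence / (evidence + (1 - p_mastery) * (1 - p_guess))
    # Chance the student learned the skill between practice opportunities.
    return posterior + (1 - posterior) * p_learn

# A short answer sequence gradually raises the mastery estimate.
p = 0.3
for answer in [True, True, False, True]:
    p = bkt_update(p, answer)
print(round(p, 3))
```

An ITS can then threshold this estimate to decide when to offer hints or advance the student to new material.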
A prominent field that extends the capabilities of ITSs is serious games, which are games made for education or
training purposes. Serious games can be integrated with ITSs to both increase student engagement and to create a
learning environment that focuses more on problem-solving. Principles of gamification [25, 26] are often applied to
increase the educational merit and engagement of the system. And as such, serious games often focus on providing
more immersive and exciting lessons compared to a standard ITS or classroom education. Beyond that, many of the
technologies and systems established in the field of ITSs are also applicable within serious games such as reinforcement learning, supervised learning methods, and fuzzy logic [27].
As stated earlier, technological advances have made it easier to connect globally, resulting in vibrant networks
of learners and content around the world. Learning communities are inevitably expanded beyond the boundaries
of the classroom. However, both ITSs and serious games are primarily used in a traditional classroom setting or for
one-on-one tutoring, despite their successful research and educational merit. Thus, there is a crucial need to bring
ITSs and serious games into new development to address the emerging theme of “learning without borders” and
many social situations where education is present.
##### 2.2 Metaverse
The idea of the Metaverse has taken off in recent years with many researchers now exploring the possibilities and
technologies of a shared virtual social space for work, school, and fun. The level of social connection, mobility, and
collaboration in Metaverse presents great value to education, especially when considering the theme of “learning
without borders”. Metaverse promotes deeper learning by naturally bringing learning into new contexts and allowing
socialization for deep group collaboration [28]. Gu et al. [29], for example, proposed using a metaverse and deep
reinforcement learning to improve emergency evacuations, with a training system to help evacuees learn and predict
efficient routes, with a great improvement over traditional approaches [29]. Artificial intelligence (AI) also plays a very important role in the Metaverse, ensuring proper arbitration, simulations, and decision-making [30]. The involvement of AI in the Metaverse enables data analytics that better estimate learner knowledge for personalization. Similarly, blockchain technology can be fused into the Metaverse, bringing education to a different level [31, 32].
Despite these prominent features of Metaverse for education, the research is still in its infancy. Besides heated
discussions on its benefits and potential applications [33], there are very few technological developments. The
design of virtual classroom with commercial-grade software and hardware is presented by Shen et al. to allow for a
seamless connection between physical and virtual learning environments [34]. Hare and Tang focused their efforts
on building a virtual learning environment and designing AI-enabled tutor NPCs to offer guided learning [10]. A case
study of a consortium university in Korea for Metaverse education is presented in [35]. Despite all these works, there
is still a need for formal, systematic methods to guide the development of Metaverse education and, particularly,
the integration of physical and virtual worlds to achieve optimal learning.
##### 2.3 Parallel intelligent systems
With the advancement of system science and computer simulations, ACP (Artificial Systems, Computational Experiments,
and Parallel Execution) methods were formally proposed by Fei-Yue Wang [36] to achieve Parallel Intelligence. ACP methods introduce a circular feedback mechanism to guide the operations of parallel intelligent systems - the integration of
an artificial system with a real system. While the artificial system mirrors the actual system, computational experiments
provide a unique way of testing models and algorithms in the virtual system that might be difficult or even impossible
to conduct in the physical system. The optimal schema validated in the virtual system is then applied to the real system
through parallel execution, including virtual-real interactions, double-feedback, and double closed-loop between the
virtual and physical spaces.
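Stripped to its essentials, the double closed loop described above can be sketched as follows. The surrogate and real-system functions here are toy stand-ins chosen only to show the structure: computational experiments rank candidate policies in the artificial system, the best one is executed on the real system, and the observed outcome feeds back to recalibrate the artificial model.

```python
import random

def artificial_system(policy, model_bias):
    """Cheap surrogate of the real system: predicted outcome of a policy."""
    return policy * (1.0 - model_bias)

def real_system(policy):
    """The costly real system; only the chosen policy is ever run here."""
    return policy * 0.8 + random.gauss(0, 0.01)

model_bias = 0.0
policies = [0.2, 0.5, 0.9]
for step in range(5):
    # Computational experiments: evaluate every candidate virtually.
    best = max(policies, key=lambda p: artificial_system(p, model_bias))
    # Parallel execution: apply only the best candidate to the real system.
    observed = real_system(best)
    predicted = artificial_system(best, model_bias)
    # Feedback: nudge the artificial model toward the real observation.
    model_bias += 0.5 * (predicted - observed) / best
print(round(model_bias, 2))
```

After a few iterations the surrogate's bias converges toward the real system's behavior, which is the "double-feedback" property the ACP method relies on.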
In recent years, the ACP method has been widely applied to many domains. Ren et al. successfully used it to design a
parallel vehicular crowd sensing (VCS) system [37]. In particular, various computational experiments considering human
and social factors were conducted, evaluated, and shared with the real VCS system to improve its efficiency and robustness. Similar studies can be found in transportation systems [38], healthcare [39], education [40], and image encryption
[41].
Given these recent developments using the ACP methods and parallel intelligent systems, it can be said that there are
many commonalities between parallel systems and metaverses. In particular, they both share the same challenges when
dealing with complex systems. For example, there are many variables involved in operations of a complex system including many unknown latent variables. Understanding these variables is key to characterizing the complex system for any
control and management application. However, such studies in the real world might be very costly or even impossible
due to financial, legal, or institutional constraints. In this case, the ACP approach offers a viable solution. The successful
application of ACP in other domains should be adopted for the design of Metaverse. Following this line of thinking, the
proposed system focuses on applying an ACP approach to metaverse education to create MetaEdu.
#### 3 MetaEdu
It is clear that Metaverse has the potential to make education more flexible, interactive, and effective with equal learning accessibility. The more opportunities the Metaverse presents, the more complex learning systems become, and the more
challenges have to be dealt with. Taking this into consideration, we propose a system called MetaEdu which aims to build
a virtual learning world that starts from mirroring the physical world but goes far beyond it. MetaEdu is built to store
users’ learning trajectories and knowledge trees irreversibly on the blockchain and establish a safe, fair, and open circle
with credible data through partial disclosure. Unlike current virtual reality education, MetaEdu is also able to protect
user privacy while keeping user information up-to-date through Web3, in addition to meeting the requirements of social
interaction in educational conditions.
##### 3.1 Definition
MetaEdu refers to a virtual-reality learning system based on metaverse technologies and features. It aims to generate a
virtual clone of real-world learning environments and extend it to make the learning process more immersive for users.
In addition to this, MetaEdu includes a blockchain technology-based Web3 reserve system that tightly integrates the
virtual world with the physical world in terms of the learning system, social system, and identity system, and allows each
user to produce specific content and edit the virtual world through their avatars. MetaEdu consists of three parts: the
physical learning system, Web3, and the virtual learning system.
The human world is the physical world of humans (teachers, students, etc.) who can communicate with each other
and perform learning activities. The physical learning system aims to enable learning in the physical world, and therefore,
it contains devices/hardware, systems, communication, and computing with educational applications. For example,
books, personal communication devices, cloud computing devices, storage devices, management systems, and campus
or social environments. The virtual learning system is a simulated system that can perform all learning operations in the
physical world through artificial intelligence technology. It can also run and generate algorithms or systems designed
as physical learning systems and store the results on Web3. In addition, its AI can interact with avatars of users in the
human world through interactive devices. In contrast, users or robots in the human world can manipulate elements in
the virtual learning system through Web3 to achieve MetaEdu’s integration of physical and virtual worlds.
##### 3.2 Development
The development of MetaEdu consists of three phases: clone, expansion, and fusion of surreality. The detailed development of the proposed system is given in Fig. 1.
The cloning phase refers to the mirroring process from the physical learning system to the virtual learning system. To
give users a learning experience consistent with reality, the virtual world will have different scenarios that correspond
to the physical world. For example, a classroom, library, and study room all located in the virtual space. These virtual
scenarios must have the exact same elements and attributes as the physical world to encourage the same behaviors that
users would perform in a physical learning environment. The end goal of the cloning phase is to allow users to experience
a more convenient, efficient, and familiar virtual learning experience.
The expansion phase focuses on further developing and extending the framework created in the first phase. The
main manifestation of this phase of work is that the virtual learning system will be improved and extended. At this stage,
the virtual world as a mirror of the physical will be expanded with more scenarios and functions than the physical. For
example, virtual classrooms that are easier to access with free technology experiments. In addition, virtual worlds are
no longer just a mapping, but instead offer a way for students to self-improve beyond the limits of the physical world.
Users participate in virtual worlds by logging into them to generate an avatar. Under the control of parallel strategies, the
user’s behavior not only changes objects in the virtual world, but also affects the user experience in reality.
Additionally, since the framework has already been built, the extended content of the virtual learning system will have a
lower development cost with greater complexity and possibilities than the physical learning system. At the same time,
however, security and privacy are critical factors to consider when transitioning to a virtual education system, including:
A) Cybersecurity threats: The teaching and learning resources of a virtual education system originate from the web and
therefore may be vulnerable to cybersecurity threats such as hacking, malware, and phishing attacks.
B) Student safety: Virtual education systems may also pose greater risks to student safety, such as the possibility of
cyberbullying or exposure to inappropriate content.
C) Data privacy: Virtual education systems often involve the collection and storage of student data, and online data
storage may raise concerns about data privacy. It is therefore of utmost importance to ensure that student data is
properly protected and collection and use of data is as transparent as possible.
The last phase is to deploy a multi-faceted interactive virtual reality system based on blockchain technology. In order
to address the security and privacy issues raised in the second stage, the main goal is to ensure security, transparency,
immutability, decentralization, and efficiency of information transmission between all participating parties. For these
specific goals, blockchain technology offers a good solution. It is a decentralized and distributed technology that allows
behavior and data to be securely recorded and verified without the need for a central authority. In MetaEdu, the physical system collects the user’s data and constantly updates a student model on the blockchain. This model can then be
retrieved directly from the blockchain each time an educator or AI system calls for relevant content. Valid training results
that need to be saved will also be uploaded to the blockchain to reduce storage risk.
Correspondingly, this new framework solves the problems of the original virtual world through the following aspects:
A) Security - Because blockchain is decentralized and distributed, it is more secure than traditional databases stored in
a single location. This makes it more difficult for hackers to make unwanted changes to user information and records
stored on the blockchain.
B) Transparency - Blockchain is a transparent system, which means that all learning records and the non-encrypted data stored on the chain are visible to anyone who has access to the network. This can help increase trust in the system.
C) Immutability - Once learning data has been added to the blockchain, it cannot be changed or deleted. This ensures
that the information stored on the blockchain is accurate and cannot be tampered with.
D) Efficiency - Using blockchain to store user learning information has the potential to be more efficient than traditional
databases because it eliminates the need for a middleman and can automate certain processes.
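The immutability and verification properties listed above can be sketched with a minimal hash-linked record chain. This is a deliberate simplification of real blockchain storage; the record fields and helper names are illustrative, not an actual MetaEdu schema.

```python
import hashlib, json

def make_block(record, prev_hash):
    """Append-only block: any edit to a record changes every later hash."""
    body = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
    return {"record": record, "prev": prev_hash,
            "hash": hashlib.sha256(body.encode()).hexdigest()}

def verify(chain):
    """Recompute each link; returns False if any stored record was altered."""
    prev = "genesis"
    for block in chain:
        body = json.dumps({"record": block["record"], "prev": block["prev"]},
                          sort_keys=True)
        if block["prev"] != prev or \
           block["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev = block["hash"]
    return True

chain = []
prev = "genesis"
for record in [{"user": "alice", "quiz": 1, "score": 0.8},
               {"user": "alice", "quiz": 2, "score": 0.9}]:
    block = make_block(record, prev)
    chain.append(block)
    prev = block["hash"]

print(verify(chain))             # the untampered chain verifies
chain[0]["record"]["score"] = 1.0
print(verify(chain))             # tampering with any record breaks verification
```

In a deployed system the same guarantee comes from the consensus of many nodes rather than a single local verifier, but the hash-linking principle is identical.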
**Fig. 1 MetaEdu System development**
The three stages stated above also represent trends in human learning styles, so the systematic structure of the third
stage will be explained in detail in the architecture.
##### 3.3 Architecture
The architecture of MetaEdu is shown in Fig. 2. As described in the previous section, MetaEdu is built on two worlds:
the physical world and the virtual world. In MetaEdu, the two worlds interact and synchronize information through
Web3-based on-chain connections to allow for independence and mutual feedback.
**Fig. 2 MetaEdu system architecture**
**3.3.1 Physical world system**
The physical world consists of three parts: Information Collection; Communication, Computation, and Storage; and Management and Control.
A) Information Collection (IC): The IC system handles all in-boundary and over-boundary transmission. The in-boundary
transmission will include users’ information entry in the off-chain Internet, while over-boundary transmission covers
the over-bound user information authentication, the over-bound update of the knowledge system framework, and
sensor data such as voice recordings, gestures, expressions, heartbeat data, gaze tracking, or any other data collected
when the user participates in MetaEdu.
B) Communication, Computation, and Storage (CCS): The CCS system is a system that enables the exchange of information, the processing of data, and the storage of data. The communication component of the system allows for the
transmission of information between devices or systems through the internet. The computation component allows
for the computational processing of data. The storage component allows for the preservation of data through the
use of storage devices. Together, these three components enable the exchange, processing, and storage of information, allowing for efficient communication, data analysis, and data management.
C) Management and Control Center: The physical world management system and control system involves collaboration
between teachers, school administrators, and other stakeholders in order to create a positive and effective learning environment for students. It also involves the combination of technological tools and pedagogical strategies
online, as well as effective communication and collaboration between instructors, students, and other stakeholders.
In particular, this system is also responsible for communicating with IC and CCS systems in our MetaEdu cycle, so as
to complete on-chain user authentication, information upload, and knowledge framework update.
For the MetaEdu ecological cycle, the physical world system needs to rely on these three components for synchronization and feedback with the virtual system:
– IC systems to collect user authentication and feedback, update user learning status, and improve the on-chain model.
– The CCS system to ensure user communication, collect and back up knowledge frameworks, and maintain efficient up-link communication. CCS is also responsible for outputting on-chain/off-chain information to users.
– The Management and Control Center to monitor and maintain the flow within the loop, using the best educational
strategies to ensure that users learn easily and efficiently.
In relation to the blockchain, the chain stores not only the knowledge framework updated and kept by CCS, but also all
the data of offline users, including login authentication data, interaction records and users’ knowledge records. In particular, due to blockchain irreversibility and on-chain publicness, MetaEdu can help users create on-chain knowledge trees
with cascading updates to ensure fair and valid certification through group public scoring. Because of this, blockchain
is a key technology that allows MetaEdu to operate more openly, fairly, securely, and efficiently.
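The text does not specify the "group public scoring" rule used for certification. One minimal interpretation, assumed here purely for illustration (the quorum and pass-mark values are hypothetical), is a quorum-plus-median vote over peer scores, which resists a small number of outlier or malicious scores.

```python
import statistics

def certify(peer_scores, quorum=3, pass_mark=0.6):
    """Group public scoring: certification passes only when enough peers
    have scored the work and the median score clears the pass mark."""
    if len(peer_scores) < quorum:
        return False
    return statistics.median(peer_scores) >= pass_mark

print(certify([0.9, 0.7, 0.8]))        # enough peers, strong median: passes
print(certify([0.9, 0.2, 0.3, 0.1]))   # median below the pass mark: fails
```

Because the scores and the outcome would live on-chain, any participant can recompute the rule and confirm the certification was applied fairly.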
**3.3.2 Web3 system**
Web3 refers to the next generation of the World Wide Web built on top of decentralized technologies such as blockchain.
Web3 technologies are designed to allow users to interact with decentralized applications (dApps) and to take advantage
of the security and transparency offered by blockchain. Blockchain in this case functions as a decentralized method of
securely storing data and recording transactions. It consists of a network of computers that work together to validate
and record transactions, which are then added to a chain of blocks that form a permanent record. Currently, blockchain
is used for a variety of purposes, including the creation of digital currencies, the facilitation of financial transactions, and
the storage and access of information which MetaEdu takes advantage of.
Blockchain in MetaEdu consists of 5 layers, as shown in Fig. 3:
Hardware/ Infrastructure layer: The hardware layer refers to the network of computers that forms the blockchain’s computing power. A node is a computer or a network of computers that validates and records transactions.
Data storage layer: This layer is responsible for storing the data that is recorded on the blockchain. The data storage
layer might use a variety of data structures, such as linked lists or hash tables to efficiently store and retrieve the data.
**Fig. 3 MetaEdu blockchain layers**
Network layer: This layer refers to the protocols that are used to connect the nodes in the network and enable them
to communicate with each other.
Consensus layer: This layer is responsible for ensuring that all nodes in the network reach consensus on the state
of the blockchain. It uses various algorithms and protocols to ensure that all nodes agree on the transactions that are
included in the blockchain.
Application layer: This is the highest layer of the blockchain, and it refers to the applications and services that are built
on top of the blockchain. These applications might include decentralized applications (dApps) and other services that
allow users to interact with the blockchain and use its features.
**Fig. 4 Computational experiment model**
In this layer structure, the primary function of the blockchain is to store and access information, and the various layers
of the blockchain are structured in a way that enables this function to be performed efficiently and securely.
Users and virtual systems can access data on the blockchain as shown in Fig. 4. Smart contracts based on parallel intelligence can facilitate social interaction or interaction with other smart contracts. Building on the training model provided by the virtual system, contracts can be designed to allow testing and experimentation with different inputs or scenarios. Primarily, contracts are designed to accept different variables or parameters as input and provide outputs based on these inputs.
It is worth noting, however, that the execution of smart contracts based on parallel intelligence is usually facilitated through the use of virtual machines, requiring consideration of the underlying blockchain platform as well
as the capabilities and limitations of the smart contract. While parallel execution can occur in the contract itself, off-chain computation or sharding on the blockchain platform can also be used to improve overall efficiency and capacity.
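As a rough, chain-agnostic sketch, such a contract can be modeled as a deterministic function of its input parameters whose calls are appended to an immutable log. The class, field names, and weighting rule below are illustrative assumptions; a real deployment would target a specific blockchain platform and virtual machine.

```python
class ExperimentContract:
    """Toy stand-in for a smart contract: deterministic outputs from
    input parameters, with every call appended to an append-only log."""

    def __init__(self, pass_threshold=0.6):
        self.pass_threshold = pass_threshold
        self.log = []  # append-only record of every computational experiment

    def run_experiment(self, student_id, params):
        # Deterministic rule: weighted score over the supplied factors.
        score = 0.7 * params.get("accuracy", 0.0) + 0.3 * params.get("speed", 0.0)
        outcome = "pass" if score >= self.pass_threshold else "needs_support"
        self.log.append((student_id, dict(params), outcome))
        return outcome

contract = ExperimentContract()
print(contract.run_experiment("alice", {"accuracy": 0.9, "speed": 0.5}))
print(contract.run_experiment("bob", {"accuracy": 0.4, "speed": 0.4}))
```

Determinism is what allows every node to re-execute the same call and agree on the outcome; the log here plays the role of the on-chain transaction history.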
**3.3.3 Virtual world system**
The virtual world system is a mirror and extension of the physical world that offers users a platform for personalized
learning and communication. With AI-enabled non-player characters (NPCs), it can build a virtual learning system
that revolves around the user’s physical world and their digital avatar, continuously optimizing learning methods
and improving efficiency. The system is divided into two main parts, learner NPCs and tutor NPCs.
A) Learner NPCs, which act as peers to users, and can be either skilled learners or apprentice learners.
Skilled learner NPCs in MetaEdu exist to create more challenging and dynamic gameplay experiences for users.
These NPCs act as challenging opponents that react to user strategies in competitive situations and try to outperform users.
Apprentice learner NPCs in MetaEdu exist to “learn” at a slower pace than users and skilled learner NPCs. Unlike
skilled learner NPCs which exist to compete with users, apprentice NPCs instead offer users an opportunity to teach
others. They act as peers to users to help them accomplish goals and help them achieve a deeper education through
teaching others.
Skilled learner NPCs and Apprentice learner NPCs will store and share learning experiences through the blockchain
while accessing information and data to learn and make decisions based on that information and data. They can
also use natural language processing and AI methods to communicate and interact with students in meaningful
ways. Behind the scenes, both types of NPC behaviors can be adjusted to ensure that students receive appropriate
competition or guidance from both competitive and collaborative NPCs. And while these NPCs may have conflicting
goals, educational scenarios can be tailored carefully to students to ensure that NPCs only act when it is appropriate
for collaboration or competition.
B) Tutor NPCs
Unlike learner NPCs which function as peers, Tutor NPCs in MetaEdu are meant to create more effective educational experiences. Tutor NPCs can be used to present information and explanations, provide examples and practice
**Fig. 5 Computational experiment model**
exercises, and offer feedback and reinforcement to help students improve their understanding and performance.
This could be particularly useful in online or distance learning environments, where students may not have access
to a human instructor.
Tutor NPCs access information about learning frameworks and student users via the blockchain. Using machine
learning algorithms to analyze data about the student’s performance and learning progress, tutor NPCs adjust the
learning experience accordingly. The NPC will provide more or less challenging material based on the student’s
performance, or may focus on specific areas where the student is struggling. This can help ensure that the learning
experience is tailored to the student’s needs and abilities, and can help them progress more quickly and effectively.
Some additional details and possible methods of NPCs were addressed in our prior work [10].
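To make this adaptation policy concrete (a minimal sketch only: the function name, thresholds, and difficulty levels are hypothetical, not prescribed by MetaEdu), a tutor NPC could map a student's recent scores to the difficulty of the next exercise:

```python
def next_difficulty(recent_scores, current_level, min_level=1, max_level=5):
    """Pick the difficulty of the next exercise from recent scores in [0, 1].

    Hypothetical policy: step up when the student is consistently strong,
    step down when they are struggling, otherwise stay at the same level.
    """
    if not recent_scores:
        return current_level
    avg = sum(recent_scores) / len(recent_scores)
    if avg >= 0.85:                        # consistently strong -> harder material
        return min(current_level + 1, max_level)
    if avg < 0.5:                          # struggling -> easier material
        return max(current_level - 1, min_level)
    return current_level                   # adequate -> keep the current level
```

Here a sustained average above 0.85 steps the difficulty up one level and an average below 0.5 steps it down, clamped to the available range; a real tutor NPC would learn such thresholds from student data rather than hard-code them.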
To provide students with an adaptive learning experience in the virtual world, we use the model shown in Fig. 5.
This computational experiment model is built to be highly controllable, easy to apply, and easily reproduced. In
Fig. 5, the inputs Fa, Fb, ..., Fn are factors collected by the system. For example, the system might collect a student's score on an exam, the time taken to complete it, and gaze-tracking data on which question the student looked at longest.
The optimization model is then trained on this data to estimate student performance and select what guidance those
students require. While specific methods to translate student data into knowledge models are beyond the scope of
this paper and left up to implementation, the system may, for example, score the user in several categories using
clustering methods. It would then select a hint from a database of hints, or generate a paragraph of useful information with a natural language model. In addition to supporting the learner in the physical world, student models also feed back into the system to improve the behavior of NPCs, making them more realistic (for learner NPCs) or more effective (for tutor NPCs). With this parallel approach, the goal is to improve system performance on multiple fronts while helping the user learn.
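As one possible (and deliberately simplified) realization of this pipeline, the sketch below reduces the collected factors to a learner profile with a nearest-centroid rule and looks up guidance for that profile; the factor choices, centroids, and hint table are invented for illustration and are not part of the MetaEdu specification:

```python
import math

# Hypothetical learner profiles: centroids over (exam score, minutes taken,
# seconds of gaze on the hardest question), each mapped to a canned hint.
CENTROIDS = {
    "confident": (0.9, 20.0, 10.0),
    "rushed":    (0.5, 10.0, 5.0),
    "stuck":     (0.4, 45.0, 60.0),
}
HINTS = {
    "confident": "Try the challenge problems next.",
    "rushed":    "Slow down and re-check your answers.",
    "stuck":     "Review the worked example for the hardest question.",
}

def classify(factors, centroids=CENTROIDS):
    """Return the profile whose centroid is closest to the factor vector."""
    return min(centroids, key=lambda c: math.dist(factors, centroids[c]))

def select_hint(factors):
    """Map collected factors (Fa, Fb, ..., Fn) to a guidance message."""
    return HINTS[classify(factors)]
```

In a full system the centroids would be fitted with a clustering method over many students' data (and the features normalized), and the hint could instead be generated by a language model, as described above.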
#### 4 Challenges
While MetaEdu presents a good framework for a new way of education, there are many challenges ahead.
1 Security: because MetaEdu is a complex system composed of multiple smaller systems, it raises many privacy and security issues. At the system level, the virtual world is a clone of the physical world and therefore naturally contains geographic
-----
information; the virtual learning world could also contain sensitive knowledge that needs to be taught, such as proprietary information from industries or countries. At the user level, human users interact with the digital world through virtual reality devices, and the personal and activity data collected by these devices are stored in the MetaEdu blockchain. Loss or leakage of this information in transit could cause serious harm to the affected users. At the same time, a large amount of user information and many knowledge models are stored on the chain, and protecting their security and integrity is essential. Because the number of MetaEdu users is huge and the knowledge system is constantly expanding, protecting user privacy and security is a major ongoing challenge for MetaEdu.
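To make the integrity part of this challenge concrete (a minimal, hypothetical sketch; MetaEdu does not prescribe this exact scheme), each user record could be content-addressed before being written to the chain, so that later tampering is detectable by recomputing the digest:

```python
import hashlib
import json

def record_digest(record: dict) -> str:
    """Content-address a user record with SHA-256 over a canonical JSON form."""
    canonical = json.dumps(record, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def verify(record: dict, stored_digest: str) -> bool:
    """Re-hash the record and compare against the digest stored on-chain."""
    return record_digest(record) == stored_digest

record = {"user": "alice", "activity": "quiz-3", "score": 0.8}
digest = record_digest(record)
assert verify(record, digest)                 # untouched record passes
assert not verify(dict(record, score=1.0), digest)  # any edit is detected
```

Hashing addresses only integrity; confidentiality of the data in transit and at rest would additionally require encryption and access control, which are part of the open challenge described above.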
2 Intelligence: to achieve the goal of introducing teaching and learning into both the physical and virtual worlds,
MetaEdu relies on artificial intelligence (AI) to build various non-player characters (NPCs) that present diverse challenges in terms of intelligence requirements. On the one hand, since NPCs in virtual worlds have changing goals and
environments, an AI model that can continuously learn and update itself is required. On the other hand, the system contains multiple training models from top to bottom, and they must be trained on all of the collected data. This data has considerable complexity and dimensionality, which puts tremendous pressure on training. Adding a layer of trainers that can dynamically filter and update the training data set is therefore a possible solution that would ensure smoother operation of the completed MetaEdu system.
3 Computation: as mentioned in the previous point, a stable and efficient system ecosystem is necessary as the number of users increases and the knowledge architecture is updated. Without degrading the user experience, MetaEdu therefore needs a system that can provide substantial computing power: a large amount of storage space, fast computation, and the ability to manage system processes while keeping the system running stably within manageable latency.
#### 5 Conclusion
To break the boundaries of the traditional education model and push education to a higher platform, we apply the concept of the Metaverse to education and propose MetaEdu. MetaEdu is an educational system that enables learning and communication simultaneously in the physical and virtual worlds, greatly improving learning efficiency while enabling secure, seamless connections and interactions between users. The development of MetaEdu proceeds through cloning, extending, and surreality-fusing stages that bring together the physical and virtual components and the blockchain technology needed for the completed system. By connecting both worlds through the blockchain, the MetaEdu framework allows safe and secure collection and storage of user data to power AI techniques, all with the end goal of enhancing student learning. With the ideas outlined in this paper, we hope to inspire future researchers to build on MetaEdu and offer more effective and efficient education to students around the world.
**Author contributions YT conceived the ideas and wrote chapters 1 and 2. RH wrote chapter 2. CZ wrote chapters 2–5 and Figs. 1, 3, and 4. LC wrote chapters 2–4 and Figs. 2 and 3. All authors read and approved the final manuscript.**
**Data availability Data sharing is not applicable to this article as no data were generated or analysed during the study.**
##### Declarations
**Competing interests The authors declare no competing interests.**
**Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adapta-**
tion, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source,
provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article
are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in
the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will
[need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.](http://creativecommons.org/licenses/by/4.0/)
-----
#### References
1. Hattie J. Visible learning: a synthesis of over 800 meta-analyses relating to achievement. London: Routledge; 2008. https://doi.org/10.4324/9780203887332.
2. Tang Y, Franzwa C, Bielefeldt T, Jahan K, Saeedi-Hosseiny MS, Lamb N, Sun S. Sustain City: effective serious game design in promoting science and engineering education. Hershey: IGI Global; 2023. p. 914–43. https://doi.org/10.4018/978-1-6684-7589-8.ch044.
3. Liang J, Hare R, Chang T, Xu F, Tang Y, Wang F-Y, Peng S, Lei M. Student modeling and analysis in adaptive instructional systems. IEEE Access. 2022;10:59359–72. https://doi.org/10.1109/ACCESS.2022.3178744.
4. Jing S, Tang Y, Liu X, Gong X, Cui W, Liang J. A parallel education based intelligent tutoring systems framework. In: 2020 IEEE International Conference on Networking, Sensing and Control (ICNSC); 2020. p. 1–6. https://doi.org/10.1109/ICNSC48988.2020.9238052.
5. Lafond D, DuCharme MB, Rioux F, Tremblay S, Rathbun B, Jarmasz J. Training systems thinking and adaptability for complex decision making in defence and security. In: 2012 IEEE International Multi-Disciplinary Conference on Cognitive Methods in Situation Awareness and Decision Support; 2012. p. 51–8. https://doi.org/10.1109/CogSIMA.2012.6188408.
6. Koutsojannis C, Prentzas J, Hatzilygeroudis I. A web-based intelligent tutoring system teaching nursing students fundamental aspects of biomedical technology. In: 2001 Conference Proceedings of the 23rd Annual International Conference of the IEEE Engineering in Medicine and Biology Society, vol. 4; 2001. p. 4024–7. https://doi.org/10.1109/IEMBS.2001.1019728.
7. Chad H. Intelligent tutoring systems: prospects for guided practice and efficient learning. 2006.
8. Hare R, Tang Y. Player modelling and adaptation methods within adaptive serious games. In: 2021 International Conference on Cyber-Physical Social Intelligence (ICCSI); 2021. p. 1–6. https://doi.org/10.1109/ICCSI53130.2021.9736213.
9. Hwang G-J, Chien S-Y. Definition, roles, and potential research issues of the metaverse in education: an artificial intelligence perspective. Comput Educ: Artif Intell. 2022;3:100082. https://doi.org/10.1016/j.caeai.2022.100082.
10. Hare R, Tang Y. Hierarchical deep reinforcement learning with experience sharing for metaverse in education. IEEE Trans Syst Man Cybern Syst. 2022. https://doi.org/10.1109/TSMC.2022.3227919.
11. Gobert JD, Sao Pedro MA, Li H, Lott C. Intelligent tutoring systems: a history and an example of an ITS for science. In: Tierney RJ, Rizvi F, Ercikan K, editors. International encyclopedia of education. 4th ed. Oxford: Elsevier; 2023. p. 460–70. https://doi.org/10.1016/B978-0-12-818630-5.10058-2.
12. Hooshyar D, Ahmad RB, Yousefi M, Fathi M, Abdollahi A, Horng S-J, Lim H. A solution-based intelligent tutoring system integrated with an online game-based formative assessment: development and evaluation. Educ Technol Res Dev. 2016;64(4):787–808. https://doi.org/10.1007/s11423-016-9433-x.
13. Vogt P, van den Berghe R, de Haas M, Hoffman L, Kanero J, Mamus E, Montanier J-M, Oranç C, Oudgenoeg-Paz O, García DH, Papadopoulos F, Schodde T, Verhagen J, Wallbridge CD, Willemsen B, de Wit J, Belpaeme T, Göksun T, Kopp S, Krahmer E, Küntay AC, Leseman P, Pandey AK. Second language tutoring using social robots: a large-scale study. In: 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI); 2019. p. 497–505. https://doi.org/10.1109/HRI.2019.8673077.
14. Zhang B, Jia J. Evaluating an intelligent tutoring system for personalized math teaching. In: 2017 International Symposium on Educational Technology (ISET); 2017. p. 126–30. https://doi.org/10.1109/ISET.2017.37.
15. Ennouamani S, Mahani Z. An overview of adaptive e-learning systems. In: 2017 Eighth International Conference on Intelligent Computing and Information Systems (ICICIS); 2017. p. 342–7. https://doi.org/10.1109/INTELCIS.2017.8260060.
16. Georgila K, Core MG, Nye BD, Karumbaiah S, Auerbach D, Ram M. Using reinforcement learning to optimize the policies of an intelligent tutoring system for interpersonal skills training. In: Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems; 2019. p. 737–45.
17. Stein G, Gonzalez AJ, Barham C. Machines that learn and teach seamlessly. IEEE Trans Learn Technol. 2013;6(4):389–402. https://doi.org/10.1109/TLT.2013.32.
18. Hooshyar D, Ahmad RB, Wang M, Yousefi M, Fathi M, Lim H. Development and evaluation of a game-based Bayesian intelligent tutoring system for teaching programming. J Educ Comput Res. 2018;56(6):775–801. https://doi.org/10.1177/0735633117731872.
19. Papadimitriou S, Chrysafiadi K, Virvou M. FuzzEG: fuzzy logic for adaptive scenarios in an educational adventure game. Multimed Tools Appl. 2019;78(22):32023–53. https://doi.org/10.1007/s11042-019-07955-w.
20. Baker RS. Educational data mining: an advance for intelligent systems in education. IEEE Intell Syst. 2014;29(3):78–82. https://doi.org/10.1109/MIS.2014.42.
21. Baneres D, Caballé S, Clarisó R. Towards a learning analytics support for intelligent tutoring systems on MOOC platforms. In: 2016 10th International Conference on Complex, Intelligent, and Software Intensive Systems (CISIS); 2016. p. 103–10. https://doi.org/10.1109/CISIS.2016.48.
22. Chui KT, Liu RW, Zhao M, De Pablos PO. Predicting students' performance with school and family tutoring using generative adversarial network-based deep support vector machine. IEEE Access. 2020;8:86745–52. https://doi.org/10.1109/ACCESS.2020.2992869.
23. Hershcovits H, Vilenchik D, Gal K. Modeling engagement in self-directed learning systems using principal component analysis. IEEE Trans Learn Technol. 2020;13(1):164–71. https://doi.org/10.1109/TLT.2019.2922902.
24. Bunić D, Jugo I, Kovačić B. Analysis of clustering algorithms for group discovery in a web-based intelligent tutoring system. In: 2019 42nd International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO); 2019. p. 759–65. https://doi.org/10.23919/MIPRO.2019.8756951.
25. Bakhanova E, Garcia JA, Raffe WL, Voinov A. Targeting social learning and engagement: what serious games and gamification can offer to participatory modeling. Environ Model Softw. 2020;134:104846. https://doi.org/10.1016/j.envsoft.2020.104846.
26. Fleming T, Sutcliffe K, Lucassen M, Pine R, Donkin L. Serious games and gamification in clinical psychology. In: Asmundson GJG, editor. Comprehensive clinical psychology. 2nd ed. Oxford: Elsevier; 2022. p. 77–90. https://doi.org/10.1016/B978-0-12-818697-8.00011-X.
27. Esfahlani SS, Cirstea S, Sanaei A, Wilson G. An adaptive self-organizing fuzzy logic controller in a serious game for motor impairment rehabilitation. In: 2017 IEEE 26th International Symposium on Industrial Electronics (ISIE); 2017. p. 1311–8. https://doi.org/10.1109/ISIE.2017.8001435.
-----
28. Zhong J, Zheng Y. Empowering future education: learning in the edu-metaverse. In: 2022 International Symposium on Educational Technology (ISET); 2022. p. 292–5. https://doi.org/10.1109/ISET55194.2022.00068.
29. Gu J, Wang J, Guo X, Liu G, Qin S, Bi Z. A metaverse-based teaching building evacuation training system with deep reinforcement learning. IEEE Trans Syst Man Cybern: Syst. 2023. https://doi.org/10.1109/TSMC.2022.3231299.
30. Hwang G-J, Chien S-Y. Definition, roles, and potential research issues of the metaverse in education: an artificial intelligence perspective. Comput Educ: Artif Intell. 2022;3:100082. https://doi.org/10.1016/j.caeai.2022.100082.
31. Tlili A, Huang R, Shehata B, Liu D, Zhao J, Metwally AHS, Wang H, Denden M, Bozkurt A, Lee L-H, et al. Is metaverse in education a blessing or a curse: a combined content and bibliometric analysis. Smart Learn Environ. 2022;9(1):1–31. https://doi.org/10.1186/s40561-022-00205-x.
32. Li J, Lan M, Tang Y, Chen S, Wang F-Y, Wei W. A blockchain-based educational digital assets management system. IFAC-PapersOnLine (3rd IFAC Workshop on Cyber-Physical Human Systems CPHS 2020). 2020;53(5):47–52. https://doi.org/10.1016/j.ifacol.2021.04.082.
33. Wang M, Yu H, Bell Z, Chu X. Constructing an edu-metaverse ecosystem: a new and innovative framework. IEEE Trans Learn Technol. 2022;15(6):685–96. https://doi.org/10.1109/TLT.2022.3210828.
34. Shen T, Huang S-S, Li D, Lu Z, Wang F-Y, Huang H. VirtualClassroom: a lecturer-centered consumer-grade immersive teaching system in cyber-physical-social space. IEEE Trans Syst Man Cybern: Syst. 2022. https://doi.org/10.1109/TSMC.2022.3228270.
35. Jeon JH. A study on education utilizing metaverse for effective communication in a convergence subject. Int J Internet Broadcast Commun. 2021;13(4):129–34. https://doi.org/10.7236/IJIBC.2021.13.4.129.
36. Wang F-Y. Intelligent systems and technology for integrative and predictive medicine: an ACP approach. ACM Trans Intell Syst Technol. 2013. https://doi.org/10.1145/2438653.2438667.
37. Ren Y, Jiang H, Feng X, Zhao Y, Liu R, Yu H. ACP-based modeling of the parallel vehicular crowd sensing system: framework, components and an application example. IEEE Trans Intell Veh. 2022. https://doi.org/10.1109/TIV.2022.3221927.
38. Shi H, Liu G, Zhang K, Zhou Z, Wang J. MARL sim2real transfer: merging physical reality with digital virtuality in metaverse. IEEE Trans Syst Man Cybern: Syst. 2022. https://doi.org/10.1109/TSMC.2022.3229213.
39. Aloqaily M, Elayan H, Guizani M. C-HealthIER: a cooperative health intelligent emergency response system for C-ITS. IEEE Trans Intell Transp Syst. 2022. https://doi.org/10.1109/TITS.2022.3141018.
40. Gao Y, Miao H, Chen J, Song B, Hu X, Wang W. Explosive cyber security threats during COVID-19 pandemic and a novel tree-based broad learning system to overcome. IEEE Trans Intell Transp Syst. 2022. https://doi.org/10.1109/TITS.2022.3160182.
41. Li P, Sun Z, Situ Z, He M, Song T. Joint JPEG compression and encryption scheme based on order-8-16 block transform. IEEE Trans Intell Transp Syst. 2022. https://doi.org/10.1109/TITS.2022.3217304.
**Publisher’s Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.**
-----
|
{
"disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.1007/s44163-023-00053-9?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1007/s44163-023-00053-9, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "CCBY",
"status": "GOLD",
"url": "https://link.springer.com/content/pdf/10.1007/s44163-023-00053-9.pdf"
}
| 2,023
|
[
"JournalArticle"
] | true
| 2023-03-20T00:00:00
|
[
{
"paperId": "61ae2dc36a97bb9d9c051b0182e3f282570cb594",
"title": "Explosive Cyber Security Threats During COVID-19 Pandemic and a Novel Tree-Based Broad Learning System to Overcome"
},
{
"paperId": "19af9337647e7e3a2009734d53f35d62a7e15286",
"title": "Joint JPEG Compression and Encryption Scheme Based on Order-8-16 Block Transform"
},
{
"paperId": "b521c3c7ffb5d4bc772606b25da59c022a931a1d",
"title": "VirtualClassroom: A Lecturer-Centered Consumer-Grade Immersive Teaching System in Cyber–Physical–Social Space"
},
{
"paperId": "959465859a5812aa5cb54e85de018825cab33f44",
"title": "Hierarchical Deep Reinforcement Learning With Experience Sharing for Metaverse in Education"
},
{
"paperId": "396b2edb73c02e8536c7ea5b916b0422a2f1fc0d",
"title": "A Metaverse-Based Teaching Building Evacuation Training System With Deep Reinforcement Learning"
},
{
"paperId": "14a143d05635f79011c79d931d24efcfed26da2b",
"title": "MARL Sim2real Transfer: Merging Physical Reality With Digital Virtuality in Metaverse"
},
{
"paperId": "9b252459bd132748ebfb3f9df9984cb774f639ee",
"title": "C-HealthIER: A Cooperative Health Intelligent Emergency Response System for C-ITS"
},
{
"paperId": "0caf2d17b33b60e9f2b7ed2cfa65405d7469a183",
"title": "ACP-Based Modeling of the Parallel Vehicular Crowd Sensing System: Framework, Components and an Application Example"
},
{
"paperId": "a1d2b8a3e0fb972c33ba753270834021418780a2",
"title": "Constructing an Edu-Metaverse Ecosystem: A New and Innovative Framework"
},
{
"paperId": "4496da3881181e93ee2a771f80b1154767b70ab3",
"title": "Is Metaverse in education a blessing or a curse: a combined content and bibliometric analysis"
},
{
"paperId": "8f21e2176a510c941b4e4d56636fc9497e50952a",
"title": "Empowering Future Education: Learning in the Edu-Metaverse"
},
{
"paperId": "e2d3b48b46d34fac164ebcad2eb39661712a1d97",
"title": "Definition, roles, and potential research issues of the metaverse in education: An artificial intelligence perspective"
},
{
"paperId": "ed55dc529e051ea5a1df5a1c2d4901775aa21819",
"title": "Player Modelling and Adaptation Methods within Adaptive Serious Games"
},
{
"paperId": "3e72a883525fd4b894abf4c6fcfe629cc3c8840f",
"title": "Targeting social learning and engagement: What serious games and gamification can offer to participatory modeling"
},
{
"paperId": "c3ad418a7b60f71b3c85b20389776f507bc4e521",
"title": "A Parallel Education Based Intelligent Tutoring Systems Framework"
},
{
"paperId": "1d6b3301b37fba453a387d92af9ab5593b5a4432",
"title": "Serious games and gamification in Clinical Psychology"
},
{
"paperId": "3c9777af20cb85518cf32abbc65859bc6fedd514",
"title": "Modeling Engagement in Self-Directed Learning Systems Using Principal Component Analysis"
},
{
"paperId": "352dd041122241c1fe3a49853efdbd21f2c27317",
"title": "FuzzEG: Fuzzy logic for adaptive scenarios in an educational adventure game"
},
{
"paperId": "9a85c67b24632aac31b8c837a659b43dc668d2c1",
"title": "Using Reinforcement Learning to Optimize the Policies of an Intelligent Tutoring System for Interpersonal Skills Training"
},
{
"paperId": "62b64715e856654872992f52de0df18739699655",
"title": "Analysis of clustering algorithms for group discovery in a web-based intelligent tutoring system"
},
{
"paperId": "8e618dfa1233158609634aa0b89881219e044bea",
"title": "Second Language Tutoring Using Social Robots: A Large-Scale Study"
},
{
"paperId": "c5aa74fb8851ba47986d9d202d92f69d870bf96e",
"title": "Development and Evaluation of a Game-Based Bayesian Intelligent Tutoring System for Teaching Programming"
},
{
"paperId": "4be51f1672f5e01aec5d52e09bfb15f52ca6973b",
"title": "An overview of adaptive e-learning systems"
},
{
"paperId": "ab95638e01aac5e837ea840214c01cb500f5dbc7",
"title": "An adaptive self-organizing fuzzy logic controller in a serious game for motor impairment rehabilitation"
},
{
"paperId": "dbce31557b91e7018f3992999acecad7e0473322",
"title": "Evaluating an Intelligent Tutoring System for Personalized Math Teaching"
},
{
"paperId": "9068760000d10170e1a1fb3ce981447573e06099",
"title": "Towards a Learning Analytics Support for Intelligent Tutoring Systems on MOOC Platforms"
},
{
"paperId": "d9a8b895eb345d74e8e3165d1e433d8eb6111b31",
"title": "A solution-based intelligent tutoring system integrated with an online game-based formative assessment: development and evaluation"
},
{
"paperId": "a34634ec0778847807ffb97513166befa877dcea",
"title": "Educational Data Mining: An Advance for Intelligent Systems in Education"
},
{
"paperId": "20143871064ee4d7713b7a595249fc2ef7705c63",
"title": "Machines that learn and teach seamlessly"
},
{
"paperId": "a0ef7c385b54c7da072ece8bc4d5e2949a8232af",
"title": "Training systems thinking and adaptability for complex decision making in defence and security"
},
{
"paperId": "89c622f711c10e1bdc0c8e50b9ca4bd6936c73b7",
"title": "Visible learning: a synthesis of over 800 meta‐analyses relating to achievement"
},
{
"paperId": "ca4492c5b8a7d4a94765a25188a87891f1ca224e",
"title": "Visible Learning: A Synthesis of Over 800 Meta-Analyses Relating to Achievement"
},
{
"paperId": "80358d9c98346c9ce17aab9429410c452094cba3",
"title": "A web-based intelligent tutoring system teaching nursing students fundamental aspects of biomedical technology"
},
{
"paperId": null,
"title": "Intelligent tutoring systems: a history and an example of an its for science"
},
{
"paperId": null,
"title": "2017 IEEE 26th International Symposium on Industrial Electronics (ISIE), 2017:1311–1318"
},
{
"paperId": null,
"title": "Sustain City: effective serious game design in promoting science and engineering education"
},
{
"paperId": "a490f6e8454abb54ec6f76645dd07e9f2a53048f",
"title": "Student Modeling and Analysis in Adaptive Instructional Systems"
},
{
"paperId": "e3d79c72eb8cf56681b9ff18c7762c9e4bdf2a38",
"title": "A Study on Education Utilizing Metaverse for Effective Communication in a Convergence Subject"
},
{
"paperId": "e8e9449f192c8079e58cbd9047b5c3e11f8044e1",
"title": "Predicting Students’ Performance With School and Family Tutoring Using Generative Adversarial Network-Based Deep Support Vector Machine"
},
{
"paperId": "e2a0c7529e8ceaafc3ee8f918c9da0255197a8de",
"title": "A Blockchain-based Educational Digital Assets Management System"
},
{
"paperId": "d466520bf8a43e37d5126ea4c6436ffeda13b989",
"title": "Research commentary: Intelligent systems and technology for integrative and predictive medicine: An ACP approach"
},
{
"paperId": "4e52348f27fd1176604069e452b23f01bb8c6395",
"title": "Intelligent Tutoring Systems : Prospects for Guided Practice and Efficient Learning"
},
{
"paperId": null,
"title": "C) Immutability"
},
{
"paperId": null,
"title": "A) Information Collection (IC): The IC system handles all in-boundary and over-boundary transmission"
}
] | 12,552
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Engineering",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/fff9b2323e86e4ecf577307e6cdf759aadb7731f
|
[
"Computer Science"
] | 0.887299
|
Decentralized in-order execution of a sequential task-based code for shared-memory architectures
|
fff9b2323e86e4ecf577307e6cdf759aadb7731f
|
IEEE International Symposium on Parallel & Distributed Processing, Workshops and Phd Forum
|
[
{
"authorId": "2180419037",
"name": "Charly Castes"
},
{
"authorId": "2659884",
"name": "E. Agullo"
},
{
"authorId": "1729212",
"name": "Olivier Aumage"
},
{
"authorId": "3152466",
"name": "Emmanuelle Saillard"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"IPDPSW",
"IEEE Int Symp Parallel Distrib Process Work Phd Forum"
],
"alternate_urls": null,
"id": "7ddefda0-174f-499a-9dce-855879dd01b7",
"issn": null,
"name": "IEEE International Symposium on Parallel & Distributed Processing, Workshops and Phd Forum",
"type": "conference",
"url": null
}
|
The hardware complexity of modern machines makes the design of adequate programming models crucial for jointly ensuring performance, portability, and productivity in high-performance computing (HPC). Sequential task-based programming models paired with advanced runtime systems allow the programmer to write a sequential algorithm independently of the hardware architecture in a productive and portable manner, and let a third party software layer -the runtime system- deal with the burden of scheduling a correct, parallel execution of that algorithm to ensure performance. Many HPC algorithms have successfully been implemented following this paradigm, as a testimony of its effectiveness. Developing algorithms that specifically require fine-grained tasks along this model is still considered prohibitive, however, due to per-task management overhead [1], forcing the programmer to resort to a less abstract, and hence more complex “task+X” model. We thus investigate the possibility to offer a tailored execution model, trading dynamic mapping for efficiency by using a decentralized, conservative in-order execution of the task flow, while preserving the benefits of relying on the sequential task-based programming model. We propose a formal specification of the execution model as well as a prototype implementation, which we assess on a shared-memory multicore architecture with several synthetic workloads. The results show that under the condition of a proper task mapping supplied by the programmer, the pressure on the runtime system is significantly reduced and the execution of fine-grained task flows is much more efficient.
|
## Decentralized in-order execution of a sequential task-based code for shared-memory architectures
### Charly Castes, Emmanuel Agullo, Olivier Aumage, Emmanuelle Saillard
To cite this version:
#### Charly Castes, Emmanuel Agullo, Olivier Aumage, Emmanuelle Saillard. Decentralized in-order execution of a sequential task-based code for shared-memory architectures. IPDPSW 2022 - IEEE International Parallel and Distributed Processing Symposium Workshops, May 2022, Lyon, France. pp.552-561, 10.1109/IPDPSW55747.2022.00095. hal-03896030
### HAL Id: hal-03896030
https://inria.hal.science/hal-03896030
#### Submitted on 13 Dec 2022
-----
# Decentralized in-order execution of a sequential task-based code for shared-memory architectures
#### Charly Castes
_Inria - LaBRI, EPFL_
Bordeaux, France
charly.castes@epfl.ch
#### Emmanuel Agullo
_Inria - LaBRI_
Bordeaux, France
emmanuel.agullo@inria.fr
#### Olivier Aumage
_Inria - LaBRI_
Bordeaux, France
olivier.aumage@inria.fr
#### Emmanuelle Saillard
_Inria - LaBRI_
Bordeaux, France
emmanuelle.saillard@inria.fr
**_Abstract— The hardware complexity of modern machines makes the design of adequate programming models crucial for jointly ensuring performance, portability, and productivity in high-performance computing (HPC). Sequential task-based programming models paired with advanced runtime systems allow the programmer to write a sequential algorithm independently of the hardware architecture in a productive and portable manner, and let a third party software layer —the runtime system— deal with the burden of scheduling a correct, parallel execution of that algorithm to ensure performance. Many HPC algorithms have successfully been implemented following this paradigm, as a testimony of its effectiveness._**

**_Developing algorithms that specifically require fine-grained tasks along this model is still considered prohibitive, however, due to per-task management overhead [1], forcing the programmer to resort to a less abstract, and hence more complex "task+X" model. We thus investigate the possibility to offer a tailored execution model, trading dynamic mapping for efficiency by using a decentralized, conservative in-order execution of the task flow, while preserving the benefits of relying on the sequential task-based programming model. We propose a formal specification of the execution model as well as a prototype implementation, which we assess on a shared-memory multicore architecture with several synthetic workloads. The results show that under the condition of a proper task mapping supplied by the programmer, the pressure on the runtime system is significantly reduced and the execution of fine-grained task flows is much more efficient._**
I. INTRODUCTION
Parallel computing is a requirement in HPC, to achieve
the necessary level of performance. Writing correct parallel
programs is a notoriously difficult task, though. Runtime
systems for automated parallelization have thus long been used as a means to offset part of this burden, and common patterns have even been standardized (see OpenMP [2]). In the
last fifteen years, a new class of task-based runtime systems
such as StarPU [3], PaRSEC [4], SuperGlue [5], OmpSs [6],
to name a few, has been proposed to better take advantage
of multicore, manycore and heterogeneous accelerated architectures. This effort resulted in a rich ecosystem of runtimes, each with its own goals, guarantees, performance characteristics and programming-model variations [1]. A common trait to many
of those initiatives is the ability to accept from the programmer
a sequential series of tasks with implicit dependencies as the
input algorithm to be parallelized. This programming model is
sometimes referred to as Sequential Task Flow (STF) [7], [8].
The STF programming model is supported by a large number
of runtimes, including OpenMP since revision 4.0 (through
the task construct and depend clause inspired from OmpSs),
StarPU with the default configuration and PaRSEC through its
_dynamic task discovery (DTD) mode._
While STF is therefore arguably a popular programming
model in HPC, the per-task management overhead incurred
by such runtime systems makes it prohibitive in practice to
execute fine-grained tasks, as highlighted in a recent study of
their performance as a function of task sizes [1]. The study
estimates that on current architectures, the minimal duration
of individual tasks should be on the order of 100µs for the approach to be profitable. Unfortunately, some important classes
of HPC applications actually do involve tasks of small granularity. A typical example is the High Performance Linpack
_Benchmark [9] (HPL) used for establishing the TOP500 [10]_
supercomputer ranking. The core of the HPL algorithm is
a LU matrix factorization with partial pivoting: while most
operations are performed at coarse granularity, the pivoting
itself requires fine-grained operations that can not be efficiently
executed as tasks with such runtime systems.
Most task-based runtime systems assessed in [1] support
the STF programming model while internally using various
strategies for their execution model. In this paper, we further
formalize the important but often implicit difference between
the programming model and execution model. We highlight
that runtime systems supporting the STF programming model on shared-memory machines most often explicitly or
implicitly assume a centralized, out-of-order execution model
(the scheduling work possibly being decentralized, but the
consistency management work remaining centralized). While
this execution model may be an excellent choice for dealing with moderate- or coarse-grained tasks, this paper, on the contrary, proposes a new lightweight execution model relying on the principles of decentralized dependency management and in-order execution, to drastically reduce per-task management
overhead. We introduce a formal specification of the proposed
execution model as well as a prototype implementation (for
shared-memory, homogeneous multicore architectures), which
we assess with synthetic workloads. The results are promising,
showing that under the condition of a proper task mapping
supplied by the programmer, our proposal enables a costeffective parallel execution of algorithms with fine-grained
tasks expressed in the STF programming model. There are,
however, two limitations: first, the programming model is slightly modified by the need to provide a mapping function, and second, the absence of dynamic re-ordering leads to less efficient pipelining in the presence of coarse tasks. Even though we compare our model with the established centralized out-of-order paradigm, our intent is not to replace the general-purpose runtimes cited earlier, but to demonstrate superior efficiency
on some classes of computation involving fine granularity, and
eventually enable those general purpose runtimes to delegate
relevant computations to an embedded low-overhead runtime,
as the one described in this paper.
Our original contributions include the execution model, its
formal specification (as well as a specification of the STF programming model that the execution model must satisfy), and
an analysis of the performance of a prototype implementation
of that model on different synthetic benchmarks.
The paper is organized as follows. Section II presents some
background on the STF programming model (section II-A),
typical execution models (section II-B) employed in the HPC
literature for supporting it on shared-memory machines, and
our methodology to assess the efficiency of execution models (section II-C). Section III introduces our proposal for a
lightweight execution model implemented in our Run-in-Order
(RIO) runtime system prototype, to execute sequential flows
of fine-grained tasks. Section IV presents the methodology
we have employed to define the formal specification of both
the STF model and the proposed execution model. Section V
reports on experiments we conducted to assess the proposed
approach. Section VI concludes this paper.
II. BACKGROUND
Throughout this paper we make a clear distinction between the programming model and the execution model.
The programming model defines the semantics of a program: it gives guarantees about the behavior of the program but does not specify how it is executed. Defining the precise execution of a program is the role of the execution model. It must conform to the high-level semantics described by
the programming model but is free to choose the underlying
implementation. Decoupling the programming and execution
models is important when discussing performance, because
even though the programming model imposes constraints on
the execution, different implementations can result in very
different performance profiles.
_A. The Sequential Task Flow programming model_
In the STF model the programmer writes their program as a sequence of tasks to be executed, which we call the task _flow_. A task is a pure function (i.e., without side effects) that can operate on some data objects managed by the runtime system. For each such data object, the task declares an access _mode_: read-only, write-only or read-write. The STF model
gives the sequential consistency guarantee that the result of
a valid parallel execution in this model will be the same as
the result of a sequential execution of the tasks in the order
given by the task flow.
Fig. 1. Illustration of a centralized out-of-order execution model. A master
thread executes the STF program, producing a sequence of tasks that are
dispatched to a pool of workers, using tasks queues for instance. The master
thread can re-order the tasks to reduce worker idle time by taking advantage
of independent tasks, effectively executing tasks out of their original order.
The appeal of STF comes from the implicit management
of data dependencies it offers: such dependencies are deduced
from the access order in the task flow and the respective data
access modes declared by the tasks. Sequential consistency is
guaranteed by the runtime by ensuring that each read access
happens after all previous write operations and that each write
access happens after all previous read and write operations.
Dependencies being implicit, writing a STF algorithm is
similar to writing the sequential version of that algorithm. As
a result, STF applications avoid common pitfalls of concurrent
programs such as deadlocks, and data races.
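The two dependency rules above (a read waits for the last write; a write waits for the last write and all reads since) can be sketched as an executable model. This is an illustrative Python rendering, not the code of any particular runtime, and the `(name, reads, writes)` task representation is our own convention:

```python
def infer_dependencies(task_flow):
    """Derive implicit STF dependencies from declared access modes.

    task_flow: list of (task_name, reads, writes), where reads/writes
    are sets of data names. Returns {task_name: set of predecessors}.
    Rules: a read depends on the last write to that data; a write
    depends on the last write and on every read since that write.
    """
    last_write = {}          # data -> task that last wrote it
    reads_since = {}         # data -> tasks that read it since last write
    deps = {}
    for name, reads, writes in task_flow:
        deps[name] = set()
        for d in reads:
            if d in last_write:
                deps[name].add(last_write[d])
            reads_since.setdefault(d, []).append(name)
        for d in writes:
            if d in last_write:
                deps[name].add(last_write[d])
            deps[name].update(t for t in reads_since.get(d, []) if t != name)
            last_write[d] = name
            reads_since[d] = []
        deps[name].discard(name)
    return deps
```

For instance, on a flow "write x; read x; read x; write x", the second write depends on the first write and on both intervening reads, exactly the write-after-read/write-after-write ordering the runtime must enforce.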
_B. The Execution Model_
While the programming model describes the semantics of the programs (STF in our case), the execution details within the boundaries of these semantic constraints are left for the runtime to decide. The simplest possible execution
model for STF would be to execute the tasks sequentially in
the order given by the task flow. While semantically correct,
this execution model would make a poor usage of a parallel
computer. More efficient execution models have thus been
developed and are gaining momentum as an effective way to
write high performance applications for supercomputers.
Multiple runtimes are compliant with the STF programming
model: StarPU [3], PaRSEC [4] with Dynamic Task Discovery, Quark [11], SuperGlue [5], OmpSs [6] and OpenMP
starting with version 4.0 [2] and the introduction of the task
construct and depend clause. Within a hardware node, most
STF-compliant runtimes use very similar execution models
that we describe as centralized and out-of-order (OoO). We
designate them as centralized because they rely on a master-worker model (especially on shared-memory architectures), in
which a master thread unrolls the task flow to discover the
tasks and dispatch them to a pool of workers (illustrated in
Figure 1). In addition, the master thread (through scheduling)
and/or the workers (through work stealing) can re-order the
tasks to minimize idleness as long as sequential consistency is maintained. The execution is thus said to be OoO.

Fig. 2. Execution time against task size for a 4096 by 4096 square matrix multiplication using StarPU with the Intel MKL DGEMM kernel in shared memory (24 cores). The task size corresponds to the dimensions of the square sub-matrices.
Centralized OoO runtimes are indeed effective. StarPU, for instance, consistently achieves performance within a few percent of the best-performing (possibly non-STF) implementation in the Task Bench runtime survey [1]. OoO runtimes are
able to make good scheduling decisions at runtime by taking
into account parameters such as data locality, expected task
execution time and upcoming tasks, while also dynamically
balancing the workload through work stealing techniques.
Those features come at the cost of higher per-task overhead, as
highlighted by the Task Bench survey, which makes execution
of fine-grained tasks intractable.
_C. Decomposing runtime efficiency_
Figure 2 shows the evolution of the execution time against
the dimensions of the sub-matrices, for a matrix multiplication.
It uses a state-of-the-art general matrix multiplication kernel
for double precision values (DGEMM) from the Intel MKL
library, together with StarPU, on a dual socket 12-core Intel
Xeon E5-2680 v3 processor [12]. It illustrates the impact of
granularity on the execution time: finer grained tasks lead
to a longer execution. However, Figure 2 by itself does not
explain why the efficiency decreases, which results from a
combination of factors. Figure 3 shows the efficiency of the
Intel MKL DGEMM routine against the matrix tile sizes when
splitting the whole computation into tasks. This experiment
makes it clear that the global execution time is not a good
measurement of the runtime performance characteristics, since
the computation kernel itself loses efficiency with smaller
tasks. Matrix multiplication kernels usually exploit hardware
caches efficiently on sufficiently large matrices, while dividing
the computation into smaller tasks reduces opportunities for
cache reuse, which in return degrades the kernel efficiency.
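As a back-of-the-envelope illustration (our own arithmetic, not a measurement from this experiment), splitting a matrix multiplication into smaller tiles multiplies the number of tasks cubically while dividing the work per task by the same factor, which is precisely the regime where per-task overhead starts to dominate:

```python
def tiled_gemm_stats(n, b):
    """For an n x n matrix multiplication split into b x b tiles,
    return (number of GEMM tasks, flops per task).
    With t = n // b tiles per dimension there are t^3 tile products,
    each costing 2*b^3 flops; the total 2*n^3 flops is unchanged."""
    assert n % b == 0, "tile size must divide the matrix size"
    t = n // b
    return t ** 3, 2 * b ** 3
```

Halving the tile size multiplies the number of tasks by eight while shrinking each task's work by the same factor.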
In this paper we investigate the impact of the runtime system
on the global computation efficiency, using a methodology
inspired by previous works ([13], [8] and [14]) to decompose
the global efficiency into a product of efficiencies more easily attributable to specific components and properties of execution models.

Fig. 3. Sequential Intel MKL DGEMM kernel efficiency as a function of the task size, in this case the dimension of the sub-matrices.

In the following, we use the notations:
_• t: execution time of the fastest sequential algorithm;_
_• t(g): execution time of the sequential algorithm when_
splitting the problem in tasks of granularity g;
_• tp(g): execution time when using a runtime with p_
threads and tasks of granularity g;
_• e: parallel efficiency [15]._
As discussed, the parallel efficiency encapsulates not only
the cost of the runtime but also overheads such as the reduced
efficiency of tasks’ computation kernels at a given granularity.
In our analysis, we thus want to isolate the efficiency of the
computation kernel from the efficiency of the runtime itself. To
that effect we further refine our notations by introducing the
cumulative execution time using a runtime τp(g) = p tp(g)
and decomposing it into three parts depending on the type of
event occurring at a given instant:
_• τp,t_ (g): cumulative time spent executing tasks;
_• τp,i_ (g): cumulative time spent idle, waiting for a dependence constraint to be resolved, for instance;
_• τp,r_ (g): cumulative time spent in the runtime not executing a task nor idle, which corresponds to the management
cost of tasks (e.g. memory allocation, scheduling).
The sum of these cumulative times corresponds to the total
parallel execution time multiplied by the number of threads:
_τp(g) = τp,t_ (g) + τp,i (g) + τp,r (g). This can be viewed as
a rectangle of height p and width tp being covered by events
among the above three possible types (processing tasks, idle,
internal runtime management).
Using these notations we decompose the parallel efficiency
_e into a product of four efficiencies: the granularity efficiency_
_eg representing the efficiency of the computation kernel at_
a given granularity, the locality efficiency el encapsulating
the effect of locality in a multi-threaded application, the
pipelining efficiency ep for the ability of the runtime to
efficiently pipeline tasks execution, and the runtime efficiency
_er_ representing the overhead of managing tasks in the runtime. Introducing t(g), the sequential time when operating at granularity g, we can indeed write:

$$e(g) = \frac{t}{p\,t_p(g)} = \frac{t}{t(g)} \times \frac{t(g)}{\tau_{p,t}(g)} \times \frac{\tau_{p,t}(g)}{\tau_{p,t}(g)+\tau_{p,i}(g)} \times \frac{\tau_{p,t}(g)+\tau_{p,i}(g)}{\tau_{p,t}(g)+\tau_{p,i}(g)+\tau_{p,r}(g)} = e_g(g)\,e_l(g)\,e_p(g)\,e_r(g),$$

where:

$$e_g(g) = \frac{t}{t(g)};\qquad e_l(g) = \frac{t(g)}{\tau_{p,t}(g)};\qquad e_p(g) = \frac{\tau_{p,t}(g)}{\tau_{p,t}(g)+\tau_{p,i}(g)};\qquad e_r(g) = \frac{\tau_{p,t}(g)+\tau_{p,i}(g)}{\tau_{p,t}(g)+\tau_{p,i}(g)+\tau_{p,r}(g)}.$$

Fig. 4. Efficiency decomposition on a 4096 by 4096 square matrix multiplication with StarPU (24 threads).
Figure 4 shows the efficiency decomposition using StarPU
for a matrix multiplication. The granularity efficiency is independent of the runtime; it corresponds to the efficiency pictured in Figure 3 when measured in isolation. We observe a small runtime overhead (er < 1) due to the StarPU execution model, in which one of the threads is exclusively dedicated to the runtime. The pipelining efficiency ep is maximized at middle-sized granularities: fine enough to expose parallelism, without flooding the runtime. Finally, the locality efficiency can either slow down the computation in a memory-bound regime or speed it up beyond what is possible in a single-threaded application (el > 1) by leveraging multiple caches. We use
this decomposition in Section V to analyse the performance
of different execution models for several granularities.
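The decomposition can be checked numerically. The following sketch, with made-up timings and the notation of this section, verifies that the four factors telescope back to the parallel efficiency:

```python
def efficiency_decomposition(t, t_g, p, tau_task, tau_idle, tau_rt):
    """Decompose the parallel efficiency e into eg * el * ep * er.
    t: best sequential time; t_g: sequential time at granularity g;
    p: number of threads; tau_*: cumulative task/idle/runtime times,
    with tau_task + tau_idle + tau_rt = p * t_p."""
    tau_total = tau_task + tau_idle + tau_rt
    e_g = t / t_g                              # granularity efficiency
    e_l = t_g / tau_task                       # locality efficiency
    e_p = tau_task / (tau_task + tau_idle)     # pipelining efficiency
    e_r = (tau_task + tau_idle) / tau_total    # runtime efficiency
    e = t / tau_total                          # = t / (p * t_p)
    return e, e_g, e_l, e_p, e_r
```

Each factor isolates one loss: kernel slowdown at fine grain, locality effects, idle time, and runtime management overhead.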
III. A LIGHTWEIGHT EXECUTION MODEL
Runtime systems such as StarPU are designed for the execution of “reasonably” coarse tasks. They are built around a rich
centralized OoO execution model using advanced heuristics
for dynamic decisions. This model achieves good pipelining
efficiency as long as the per-task overhead is negligible compared to the cost of executing the task. This assumption no
longer holds with smaller tasks, however. In this section, we
propose an alternative decentralized in-order execution model
optimized for small granularity, for which we will assess a
minimal implementation in section V.
_A. In-order execution_
In OoO execution models, tasks can be freely re-ordered as
long as sequential consistency holds. A smart OoO scheduler
can take advantage of that to yield better computation overlapping and reduce idle time. The gains from OoO scheduling
come from the ability to execute ready tasks while other tasks
are waiting for a dependency, which can produce efficient
execution even if the order of task submissions in the task
flow is not optimal. The overhead of OoO execution is due to
both the need for good (hence expensive) heuristics and the
necessary data structures used to store pending tasks, whose
space requirement is linear in the number of tasks.
To handle a high volume of fine-grained tasks, we propose
to use an in-order execution model rather than traditional
OoO. An in-order execution model removes the need for
scheduling heuristics and task storage, drastically reducing
the per-task overhead at the cost of a much higher sensitivity
to task submission order. The scheduler in OoO models is
also responsible for resource allocation, and often aims at
maximizing data locality. In our proposed in-order execution
model there is no dynamic scheduler; the assignment of tasks
to resources must thus be done through other means.
_B. Task mapping_
We propose to rely on a static mapping of the tasks to do
so. For some classes of computation, including most popular
numerical algorithms, there has been extensive research on
efficient static scheduling, such as 2d-block cyclic mapping in
dense linear algebra [16] or proportional mapping in sparse
linear algebra [17], [18], which can be leveraged to write
efficient task mapping and discovery order. Static mappings
have also been used in a distributed-memory task-based context [7], [19]. Although such mappings have been much more
often considered for designing distributed-memory algorithms,
nothing prevents one from translating them to the shared-memory case. Note that when the mapping is collected from the application, this slightly changes the programming model, as an additional piece of information (the mapping) is required to write the algorithm. However, the automatic computation of
static mappings has also been considered [20]. We advocate
that, although less convenient than the original STF model
relying on dynamic scheduling, the additional constraint of
providing (or computing) a task mapping may be viewed as
reasonable in HPC, where there is already a well-established
expertise of optimizing mappings in a distributed context. In
any case, this is the assumption we assess in this paper.
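For instance, the classical 2D block-cyclic distribution used in dense linear algebra can serve as such a deterministic mapping function. The sketch below is our own toy rendering: it maps the tile coordinates carried by a task to a worker on a pr x pc grid:

```python
def block_cyclic(i, j, pr, pc):
    """Map tile (i, j) to a worker id on a pr x pc worker grid,
    cycling over rows and columns (2D block-cyclic distribution)."""
    return (i % pr) * pc + (j % pc)
```

Every thread can evaluate such a function independently and obtain the same answer, which is all that a decentralized execution model requires of its mapping.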
Fig. 5. Illustration of a decentralized in-order execution model. All the
workers execute the STF program to produce the sequence of tasks but only
execute the tasks attributed to them by a deterministic mapping function. The
workers can make progress independently, synchronization is only needed
when there is a dependency between tasks executed by different workers.
_C. Decentralized task management_
Centralized runtimes rely on a master-workers model in
which a single thread is responsible for unrolling the task flow
and managing dependencies, while delegating task execution
to a worker pool. This model makes sense when the task
execution time is much greater than the unrolling and management cost, but the master thread can become a bottleneck with
smaller tasks. The total execution time tp(g) can be modelled,
in first approximation, as a function of the time spent in the
runtime per task tr and the task execution time tt (g):
$$t_{p,\text{centralized}} = \max\left(n\,t_{r,\text{centralized}},\ \frac{n\,t_t(g)}{w}\right), \tag{1}$$

where n is the number of tasks to execute and w the number of worker threads. With coarse tasks the application is limited by the speed at which the workers execute the tasks, but at smaller granularity the master thread may become the bottleneck.

We propose to use a decentralized execution model instead: all the threads have symmetric roles, they all unroll the whole task flow, while only executing tasks assigned to them through the mapping function (see section III-B). The model is illustrated in Figure 5. We also present an algorithm for cheap decentralized data synchronization in section III-D. With this model the total execution cost can be modelled as:

$$t_{p,\text{decentralized}} = n\,t_{r,\text{decentralized}} + \frac{n\,t_t(g)}{w}. \tag{2}$$

Cost model (2) is obviously worse than model (1), all things being equal. In practice the runtime cost per task tr differs between the two execution models: in a centralized runtime the master thread has to perform expensive operations for each task, including updating data structures, scheduling and dispatching tasks, whereas a worker in the decentralized model can simply skip over the tasks executed by other workers, leading to a much lower runtime cost. In the algorithm we present in section III-D, the runtime cost of a task not assigned to the thread boils down to one or two writes in private (non-shared) memory per dependency, depending on the access modes. The decentralized execution model combined with cheaper management costs is not affected by the bottleneck effect introduced by the master thread. Figure 6 illustrates this behavior by reporting the execution times of a minimalist program (executing a fixed number of tasks with no dependencies, each consisting in incrementing a counter), first with StarPU (a centralized runtime) and then with RIO, our minimal decentralized runtime, for different task sizes. The cost of runtime management quickly dominates in StarPU, for which the centralized cost model (1) accurately predicts a bottleneck at small granularities. We discuss possible improvements to mitigate the worse theoretical complexity of the decentralized model in section III-E.

Fig. 6. Execution time of a program executing a fixed number of tasks with no dependencies consisting in incrementing a counter, with the centralized runtime StarPU, and with our minimal decentralized runtime RIO.
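The two cost models are easy to compare numerically. The sketch below uses illustrative, made-up per-task costs (not measurements) to show the regime in which each model wins:

```python
def t_centralized(n, t_r, t_t, w):
    """Cost model (1): the execution is bound either by the master
    thread (n * t_r) or by the workers (n * t_t / w)."""
    return max(n * t_r, n * t_t / w)

def t_decentralized(n, t_r, t_t, w):
    """Cost model (2): every worker unrolls all n tasks (n * t_r)
    on top of executing its share of the work (n * t_t / w)."""
    return n * t_r + n * t_t / w
```

With a cheap decentralized per-task cost (skipping a task is only one or two private writes) and fine-grained tasks, model (2) beats model (1) despite its additive shape; with coarse tasks both are worker-bound and nearly identical.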
_D. Decentralized data synchronization_
Without a master thread to coordinate workers, a new protocol is needed to ensure data accesses are properly synchronized
and respect the sequential consistency ordering imposed by the
STF model. Such a distributed protocol is actually commonly
used by task-based runtime systems (including StarPU) on
distributed-memory machines [7], [19], where there is typically one master thread per hardware node: each master thread
delegates the handling of tasks mapped on the node to the
node workers, but the master threads of all the nodes have
to coordinate with each other. We adapt this approach into
a shared-memory algorithm defining a light-weight protocol
for synchronizing data accesses in a decentralized in-order
execution model. We present this approach in algorithm 1,
which we further introduce in the remainder of this section.
We make the following assumptions:
1. Tasks are numbered in the order in which they appear
in the control flow; that number is called the Task ID.
2. All the threads discover the same sequence of tasks,
i.e. the tasks have the same ID and dependencies and
are delivered in the task flow order for all threads.
3. All the threads have access to a mapping function that
deterministically associates a Task ID to a unique thread.
A shared-memory region is managed by a data object, composed of both a thread-local and a shared state. Accesses to
the latter must be properly synchronized. To keep pseudocode
concise, algorithm 1 supposes there is a single data object.
The local state contains two integer values: local.nb
reads since write corresponding to the number of read
operations encountered by the thread on this shared-memory
region (but maybe not yet executed) since the last write, and
local.last registered write which is the Task ID
of the last write operation encountered on this memory region.
The shared state also contains two integers: shared.nb
reads since write holding the number of reads performed on the shared-memory region since the last write, and
shared.last executed write containing the Task ID
of the last write operation performed on the memory region.
Finally we define a set of routines for the data object that
manipulates the local and shared states. Each routine exists in
two versions: read or write. The appropriate version must be
called depending on the access mode requested by the task
(lines 4 & 12 in algorithm 1). We replace read or write by op
in the following routines (detailed in algorithm 2):
_• declare op: declare an operation in op mode but does_
not execute it on the current thread. This only requires modifying the local state.
_• get op: return a pointer to the data for use in op mode._
This operation might be blocking: it can only return once
all dependencies have been resolved, which may require
reading the shared state and potentially waiting for other
threads.
_• terminate op: declare that an operation in op mode_
has been executed. This modifies the shared state.
Given these definitions, to synchronize accesses to a shared-memory location through a data object, all the threads must
iterate over the list of tasks. For each task in which the
memory location is involved, the thread calls the mapping
function (line 3 in algorithm 1) to get the identifier of the
thread responsible for that task. If the thread is assigned to
the task it calls get op (lines 6 & 14) to get access to
the memory location, performs the task and then releases the
memory location with terminate op (lines 8 & 16). If the
thread is not responsible for the task, it updates its local state
by calling the declare op function (lines 10 & 18).
A read-only operation can be executed if local.last
registered write is equal to shared.last
executed write of the data object (algorithm 2, lines 12
& 13), this ensures that all the required writes have been
performed on the data. A write operation has to check that
local.last registered write and shared.last
executed write are the same and the number of reads
since that write in the local and shared nb reads since
write variables are equal (algorithm 2, lines 17 to 20).
This asserts that all the previous reads and writes have been
performed on the data.
A property of algorithm 1 is its low overhead, both in time and space. A data object requires 2 integers in the shared state plus 2 integers per worker for synchronization, independently of the number of tasks. In contrast with centralized execution models, threads progress independently until they block on a dependency. Coupled with a very small per-task overhead when the thread is not responsible for executing the task (a single write in private memory per data object for a read operation, two writes in private memory for a write operation), the decentralized model avoids the bottleneck of centralized runtimes, whose workers (section III-C) wait for the master thread to dispatch the tasks.

**Algorithm 1: Decentralized Data Synchronization**
1: for all threads do
2:   for all task in TaskFlow do
3:     executor ← mapping(task)
4:     if task has read dependency then
5:       if executor = self then
6:         data ← get read()
7:         /* data can be used in read mode here */
8:         terminate read()
9:       else
10:        declare read()
11:      end if
12:    else if task has write dependency then
13:      if executor = self then
14:        data ← get write()
15:        /* data can be used in write mode here */
16:        terminate write(task.id)
17:      else
18:        declare write(task.id)
19:      end if
20:    end if
21:  end for
22: end for

**Algorithm 2: Decentralized Data Synchronization Routines**
1: function declare read() do
2:   local.nb reads since write ←
3:     local.nb reads since write + 1
4: end function
5:
6: function declare write(task id) do
7:   local.nb reads since write ← 0
8:   local.last registered write ← task id
9: end function
10:
11: function get read() do
12:   wait for local.last registered write =
13:     shared.last executed write
14: end function
15:
16: function get write() do
17:   wait for local.last registered write =
18:     shared.last executed write
19:   wait for local.nb reads since write =
20:     shared.nb reads since write
21: end function
22:
23: function terminate read() do
24:   shared.nb reads since write ←
25:     shared.nb reads since write + 1
26:   declare read()
27: end function
28:
29: function terminate write(task id) do
30:   shared.nb reads since write ← 0
31:   shared.last executed write ← task id
32:   declare write(task id)
33: end function
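A direct transcription of the write path of Algorithms 1 and 2 can be run to get a feel for the protocol. This is a didactic Python sketch, not the paper's RIO implementation: a condition variable stands in for the waiting of a native runtime, and the round-robin mapping and counter workload are our own choices:

```python
import threading

class DataObject:
    """One shared-memory region synchronized with the decentralized
    protocol: a shared state plus one local state per worker."""

    def __init__(self, num_workers):
        self.value = 0
        # shared state (guarded by the condition variable)
        self.shared_reads = 0          # shared.nb_reads_since_write
        self.shared_last_write = -1    # shared.last_executed_write
        self.cond = threading.Condition()
        # per-worker local state (only ever touched by its owner)
        self.local = [{"reads": 0, "last_write": -1} for _ in range(num_workers)]

    def declare_write(self, tid, task_id):
        self.local[tid]["reads"] = 0
        self.local[tid]["last_write"] = task_id

    def get_write(self, tid):
        loc = self.local[tid]
        with self.cond:
            # wait until all previously declared reads and writes are done
            self.cond.wait_for(lambda: self.shared_last_write == loc["last_write"]
                               and self.shared_reads == loc["reads"])
            return self.value

    def terminate_write(self, tid, task_id, new_value):
        with self.cond:
            self.value = new_value
            self.shared_reads = 0
            self.shared_last_write = task_id
            self.cond.notify_all()
        self.declare_write(tid, task_id)

def run_task_flow(num_tasks, num_workers):
    """Each task increments a shared counter (a write dependency); every
    worker unrolls the whole flow but only executes its mapped tasks."""
    data = DataObject(num_workers)

    def worker(tid):
        for task_id in range(num_tasks):
            if task_id % num_workers == tid:   # deterministic mapping
                v = data.get_write(tid)
                data.terminate_write(tid, task_id, v + 1)
            else:                              # skip: local updates only
                data.declare_write(tid, task_id)

    threads = [threading.Thread(target=worker, args=(t,)) for t in range(num_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return data.value
```

Because every write waits for the previous write on the same data, the increments are serialized by the dependency chain and the counter ends at exactly num_tasks, whatever the thread interleaving.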
Fig. 7. Total execution time of 2^15 ≈ 32000 independent tasks per worker consisting in incrementing counters.
An extended variant of this algorithm is used for dependence management in the centralized, OoO task-based
runtime system SuperGlue [5]. It introduces the notion of data
versioning [21], where a new version of a piece of data is
created upon a write by a task, and lets task dependencies be
expressed as references to specific versions of some pieces of
data. It enables expressing additional constructs beyond the
strict sequential consistency of STF, such as reductions.
_E. Task pruning_
The main drawback of the decentralized model is that the
work of unrolling the task flow is duplicated on all the workers.
Scaling the number of tasks with the number of workers
increases the overhead, because each worker has to process
the tasks of all workers. Figure 7 illustrates this behavior.
It reports the total execution time of 2^15 independent tasks per worker consisting in incrementing counters, on a 64-core
AMD EPYC 7702 chip. Since all workers discover all the
tasks, more tasks to execute translates into more time spent
by workers in managing tasks and dependencies. Depending
on the number of workers and task granularity, the overhead
incurred might be negligible, as might be the case for a hypothetical centralized OoO runtime delegating fine-grained tasks to an embedded decentralized in-order runtime on a subset of workers.
In case the runtime overhead becomes intractable because of
a high volume of extremely fine-grained tasks, an application-specific solution is to use task pruning. Task pruning for
STF has been successful in distributed-memory settings [7].
It consists in having each entity (worker or master depending
on the execution model) unrolling only the relevant part of
the task flow. The effectiveness of task pruning depends on
the application and the density of the dependency graph, but
for common and well known applications such as dense linear
algebra the gains can be substantial.
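A crude illustration of the idea follows. This is our own simplification: a one-level, conservative relevance filter based on data footprints, whereas real pruning schemes exploit the structure of the algorithm:

```python
def prune_task_flow(task_flow, mapping, tid):
    """Keep only the part of the flow relevant to worker `tid`: its own
    tasks, plus any task touching a piece of data its tasks also touch
    (those may carry cross-worker dependencies to synchronize on).
    task_flow: list of (task_id, frozenset of data names)."""
    my_data = set()
    for task_id, data in task_flow:
        if mapping(task_id) == tid:
            my_data |= data
    return [(task_id, data) for task_id, data in task_flow
            if mapping(task_id) == tid or data & my_data]
```

On a flow of fully independent tasks, each worker thus unrolls only its own fraction of the flow instead of all of it, removing the duplicated unrolling cost.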
IV. FORMAL SPECIFICATION

In addition to the algorithm described in section III and a
concrete implementation of the decentralized in-order execution model, we propose a formal specification of the model
in TLA+ [22]. This formalism allows us to precisely (1)
distinguish the programming model from the execution model
and (2) define the proposed model in terms independent from
the proposed implementation. In addition, although model
checking is subject to combinatorial explosion and is intrinsically
limited to the assessment of very small test cases, it may
still provide further confidence in the assessed model (as a
complement to the necessarily non-exhaustive, at-scale
actual experiments we will discuss later in section V).

The specification consists of two modules: a specification
of the STF model and a specification of our Run-In-Order
execution model, which must comply with this STF specification. For the sake of conciseness, we only present here the
methodology we have followed, together with the illustration
of a particular property, and refer to appendix B of the
associated research report [23] for an exhaustive specification.

The STF module describes all the possible executions of
an STF program for a given set of workers, data, tasks and
task flow. By giving concrete values to these variables, tools
such as the TLA+ model checker, TLC [24], can be used to
verify that some properties hold for any possible execution.
We illustrate it with the termination property. In the STF
specification, termination is defined as any state in which the
union of active tasks (tasks that a worker is actively executing)
and pending tasks (tasks not yet executed or being executed
by a worker) is empty:

Terminated ≜ pendingTasks ∪ activeTasks = {}
The STF specification also defines a data-race freedom property, satisfied as
long as no two workers are simultaneously executing tasks that depend on the
same data with at least one of the tasks writing to that data. There is no property
explicitly enforcing sequential consistency in the STF specification.
Instead, it is encoded in the state transition by exclusively
allowing states to be reached for which sequential consistency
holds. We report to appendix B.1 of the associated research
report [23] for an exhaustive specification of the STF model.
The Run-In-Order module describes all possible executions
of the in-order execution model presented in this paper.
In addition to the workers, data, tasks and task flow variables, an additional mapping variable is used to assign
tasks to workers. The state transition is further restricted
to prevent workers from re-ordering their tasks. The only
property checked against the Run-In-Order model is that it
implements the STF specification, that is, the set of executions
allowed by the Run-In-Order model is a subset of all possible
STF executions. Because the STF model is checked to verify
termination and data-race freedom and ensures sequential
consistency, checking the Run-In-Order model also ensures
those properties. Appendix B.2 of the associated research
report [23] gives the full specification of the execution model.
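In the same spirit as the TLC runs reported below, a miniature state exploration can be written directly (this is our own toy checker, not the paper's TLA+ specification): a state records each worker's position in its in-order queue, and a transition executes a worker's next task once all of that task's predecessors in the sequential task order are done.

```python
from itertools import chain

def explore(queues, deps):
    """Exhaustively explore all interleavings of the in-order model.

    queues: per-worker lists of task ids, each executed strictly in order.
    deps:   dict task -> set of tasks that must complete first.
    Returns (number of distinct states seen, True if every path terminates)."""
    start = tuple(0 for _ in queues)
    n = [len(q) for q in queues]
    seen = {start}
    stack = [start]
    while stack:
        state = stack.pop()
        # tasks already completed: the prefix of every worker's queue
        done = set(chain.from_iterable(q[:i] for q, i in zip(queues, state)))
        progressed = False
        for w, q in enumerate(queues):
            i = state[w]
            if i < n[w] and deps.get(q[i], set()) <= done:
                progressed = True
                nxt = state[:w] + (i + 1,) + state[w + 1:]
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        if not progressed and any(i < m for i, m in zip(state, n)):
            return len(seen), False   # deadlock: pending tasks, no enabled step
    return len(seen), True
```

With cross dependencies between two workers (worker 0 runs a then c, worker 1 runs b then d, with c waiting on b and d on a) every interleaving terminates, whereas a mapping whose first tasks wait on each other is reported as a deadlock.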
Using the TLC model checker we checked the correctness
of the STF and Run-In-Order specifications by emulating a
tiled LU matrix factorization using two workers. The results
for different sizes using TLC are reported in Table I.

TABLE I. Number of states found and execution time of TLC to check the STF
and Run-In-Order models on the LU factorization algorithm with different
matrix sizes (number of row × column blocks).

| Size | STF Generated States | STF Distinct States | STF Time | Run-In-Order Generated States | Run-In-Order Distinct States | Run-In-Order Time |
|---|---|---|---|---|---|---|
| 2 × 2 | 445 | 23 | 1s | 2 322 | 11 | 1s |
| 3 × 2 | 54 481 | 94 | 11s | 1 847 877 | 29 | 56s |
| 3 × 3 | 542 753 065 | 655 | 22h27min | - | - | >48h |

The exponentially growing number of tasks only allows us to assess
very small test cases. We nonetheless found no errors during
model checking and obtained a low state collision probability
of at most 1.9 × 10^−8, giving us some confidence in the
correctness of the proposed models.
V. PERFORMANCE EVALUATION
To evaluate the ability of a decentralized in-order runtime
to efficiently execute tasks of fine granularity, we have implemented the specifications proposed in section III within our
new RIO runtime. We compare it against StarPU, a state-of-the-art runtime whose default execution model within a node
is centralized OoO. The experiments have been conducted on
a dual-socket 12-core Haswell Intel Xeon E5-2680 v3 [12].
_A. Methodology_
We consider four test cases to assess our method:
• Experiment 1 (Fig. 8, row 1) uses independent tasks;
• Experiment 2 (Fig. 8, row 2) uses random read and write
dependencies (128 data objects with 2 random read and
1 random write dependencies per task);
• Experiment 3 (Fig. 8, row 3) uses the matrix multiplication
dependency graph; and
• Experiment 4 (Fig. 8, row 4) uses the dependency graph
of an LU factorization without pivoting.
As illustrated above with the matrix multiplication (Figure
3), the efficiency of the considered kernels executed by the
tasks may be sensitive to the effects of granularity and
locality, which are orthogonal to the issues we focus on
in the present study. When operating at low granularity,
dumping the traces of all the events that would
allow one to remove such effects in post-processing would
have a non-negligible impact on the overall performance.
Instead, we chose to substitute each actual task with a synthetic
task, common to all the tasks of the four experiments. This
common synthetic task consists of incrementing a counter:
```c
volatile uint64_t counter = 0;
for (uint64_t i = 0; i < N; i++)
    counter = i;
```
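To make the kernel's key property explicit, it can be transcribed and checked (a Python transcription of our own, not part of the paper's benchmark): cutting N iterations into n tasks performs exactly the same total work as a single task of N iterations.

```python
def kernel(n_iters):
    """Python transcription of the synthetic C kernel: write the loop
    index to a counter n_iters times; returns the iterations performed."""
    counter = 0
    for i in range(n_iters):
        counter = i
    return n_iters

def total_work(N, n_tasks):
    """Split N iterations into n_tasks equal tasks and sum the work done."""
    assert N % n_tasks == 0, "N must be divisible by the number of tasks"
    return sum(kernel(N // n_tasks) for _ in range(n_tasks))
```

Since `total_work(N, 1) == total_work(N, n)` for any divisor n, the work per task scales down without any work being added or lost, which is why the granularity efficiency of this kernel is 1.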
Using this kernel, we get a granularity efficiency eg(g) = 1,
as incrementing a single counter up to N takes almost exactly as
long as incrementing n counters up to N/n. Also, because
the only relevant memory location lives on the thread's stack,
the locality efficiency becomes irrelevant: el(g) = 1.
With this kernel the experiments are sensitive only to
the two remaining efficiencies, ep(g) and er(g), the ones of
interest for our study. They depend on the cumulative time
spent executing tasks τp,t(g), the cumulative idle time τp,i(g), and the
total cumulative execution time τp(g). Because there is no locality effect,
τp,t(g) is equal to the execution time for the same sequence of
tasks on a single computation unit without runtime, t(g), and
the total measured execution time tp(g) can be used to trivially
derive τp(g). As RIO uses mutexes for synchronization, the
idle time can be obtained with non-intrusive measurements
from the CPU time share, while StarPU offers lightweight
built-in online performance monitoring tools for measuring
idle time that do not require dumping a trace. Measurements
in StarPU are intrusive and do incur a small overhead, but
because StarPU has a parallel efficiency close to zero due to
the bottleneck effect at fine granularity, that overhead is
negligible in our experiments.
All in all, the four experiments we conducted therefore
correspond to the actual task graphs of the considered test
cases but the tasks themselves are synthetically generated.
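As a worked example of how the two remaining efficiencies could be derived from these measurements, we assume the definitions e_p = (τp − τp,i)/τp and e_r = τp,t/(τp − τp,i), with τp = p · tp(g) and τp,t = t(g); this is our reading of the decomposition, the exact definitions are given earlier in the paper.

```python
def efficiencies(t_seq, t_parallel, p, idle_total):
    """Derive pipelining (e_p) and runtime (e_r) efficiencies from measured
    times, under the assumed definitions stated above.
    t_seq:      t(g), single-unit execution time of the task sequence
    t_parallel: tp(g), measured wall-clock time on p workers
    idle_total: tau_p,i(g), cumulative idle time over all workers"""
    tau_p = p * t_parallel          # total cumulative execution time
    busy = tau_p - idle_total       # cumulative non-idle time
    e_p = busy / tau_p              # share of time workers were not idle
    e_r = t_seq / busy              # share of busy time spent on task work
    return e_p, e_r
```

Under these definitions the product e_p · e_r equals t(g) / (p · tp(g)), i.e. it recovers the overall parallel efficiency, consistent with e_g = e_l = 1 for the synthetic kernel.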
_B. Results_
The results of the four experiments are shown in Figure 8.
Centralized OoO and decentralized in-order execution models
indeed exhibit very different performance profiles: StarPU
demonstrates very good and consistent performance for coarse
tasks on all four experiments while RIO is much more sensitive
to the dependency graph, especially when no appropriate
mapping and task ordering can be given, as with random
dependencies (experiment 2).
The runtime overhead of StarPU is almost independent of
task size and is explained by the fact that one of the threads
is completely dedicated to task management, capping the
maximal theoretical runtime efficiency at (p − 1)/p when running
on p threads. When tasks get small, between 10^5 and 10^6
instructions for StarPU, the centralized model starts struggling
to handle all the tasks: the master thread is not able to produce
enough tasks to feed all the workers, who are then forced to
enter idle mode, leading to the observed drop in pipelining
efficiency. Decentralized models do not have this weakness
because the workers independently process the task flow. With
RIO, we observe that the execution becomes limited either
by the pipelining or by the runtime efficiency depending on
the task graph. If the number of synchronizations required is
low or mainly for read operations (experiments 1 and 3), the
time spent by the runtime for processing the task flow is the
main source of slowdown, but thanks to the efficient in-order
execution, the overhead is still reasonable even for very fine
tasks of 10^3 to 10^4 operations. When more synchronizations
are needed (experiments 2 and 4), the time spent waiting for
dependencies becomes the main source of total execution time.
VI. CONCLUSION
While most modern STF runtimes rely on centralized
OoO execution models when dealing with shared-memory
machines, other models are possible. In particular, inefficiency
in handling fine-grained tasks was previously considered a
limitation of the STF programming model itself, while we
showed it can in fact be attributed to the centralized execution
Fig. 8. Efficiency decomposition as a function of task sizes for a decentralized in-order runtime (RIO) and a centralized OoO runtime (StarPU) on different
task graphs.
model used de facto in current implementations. We have
proposed and assessed an alternative decentralized in-order
execution model, on top of an enriched (with the additional
requirement to provide a static mapping) STF model. This
execution model achieves a higher level of performance in
the special case of fine-grained tasks, thanks to lower runtime
overhead and independent task flow unrolling. By drawing a
distinction between the programming and execution models,
we demonstrate that the case of small granularity is not an
intrinsic limitation of the STF model itself and we hope that
the present study might motivate future work combining both
execution models (and thus requiring only partial mappings)
for enabling efficient and portable implementations of wider
classes of algorithms within the STF programming model. We
also plan to investigate leveraging OpenMP’s included tasks
as a building block for implementing a decentralized in-order
STF execution model within the scope of the standard, to let
a broader audience benefit from it.
REFERENCES
[1] E. Slaughter, W. Wu, Y. Fu, L. Brandenburg, N. Garcia,
W. Kautz, E. Marx, K. S. Morris, Q. Cao, G. Bosilca, et
al., “Task bench: A parameterized benchmark for evaluating
parallel runtime performance,” in SC20: International
Conference for High Performance Computing,
Networking, Storage and Analysis, IEEE, 2020.
[2] OpenMP Architecture Review Board, “OpenMP Application Program
Interface Version 4.0,” The OpenMP Forum, Tech. Rep., 2013.
[3] C. Augonnet, S. Thibault, R. Namyst, and P.-A. Wacrenier, “Starpu: A unified platform for task scheduling
on heterogeneous multicore architectures,” Concurrency
_and Computation: Practice and Experience, vol. 23,_
no. 2, 2011.
[4] C. Cao, T. Herault, G. Bosilca, and J. Dongarra, “Design
for a soft error resilient dynamic task-based runtime,”
in International Parallel and Distributed Processing
_Symposium, IEEE, 2015._
[5] M. Tillenius, “Scientific computing on multicore architectures,” Ph.D. dissertation, Uppsala Universiteit, 2014.
[6] A. Duran, E. Ayguadé, R. M. Badia, J. Labarta, L.
Martinell, X. Martorell, and J. Planas, “OmpSs: A
proposal for programming heterogeneous multi-core
architectures,” Parallel Processing Letters, 2011.
[7] E. Agullo, O. Aumage, M. Faverge, N. Furmento, F.
Pruvost, M. Sergent, and S. P. Thibault, “Achieving
high performance on supercomputers with a sequential
task-based programming model,” IEEE Transactions on
_Parallel and Distributed Systems, 2017._
[8] E. Agullo, A. Buttari, A. Guermouche, and F. Lopez,
“Implementing multifrontal sparse solvers for multicore
architectures with sequential task flow runtime
systems,” ACM Transactions on Mathematical Software
(TOMS), 2016.
[9] J. J. Dongarra, P. Luszczek, and A. Petitet, “The LINPACK
benchmark: Past, present and future,” Concurrency and
Computation: Practice and Experience, 2003.
[10] J. J. Dongarra, H. W. Meuer, E. Strohmaier, et al.,
“Top500 supercomputer sites,” Supercomputer, vol. 13,
1997.
[11] A. YarKhan, J. Kurzak, and J. Dongarra, “Quark users’
guide: Queueing and runtime for kernels,” University of
_Tennessee Innovative Computing Laboratory Technical_
_Report ICL-UT-11-02, 2011._
[12] Haswell Intel® Xeon® E5-2680 v3 @ 2.5 GHz, https://ark.intel.com/content/www/us/en/ark/products/81908/intel-xeon-processor-e5-2680-v3-30m-cache-2-50-ghz.html, Accessed: 2021-09-06.
[13] S. Nakov, “On the design of sparse hybrid linear solvers
for modern parallel architectures,” Ph.D. dissertation,
Université de Bordeaux, 2015.
[14] E. Agullo, O. Aumage, B. Bramas, O. Coulaud, and
S. Pitoiset, “Bridging the gap between OpenMP 4.0 and
native runtime systems for the fast multipole method,”
Inria, Tech. Rep., 2016.
[15] H. Casanova, A. Legrand, and Y. Robert, Parallel Algo_rithms, ser. Chapman & Hall/CRC Numerical Analysis_
and Scientific Computing Series. CRC Press, 2008,
ISBN: 9781584889465.
[16] L. S. Blackford, J. Choi, A. Cleary, E. D’Azevedo,
J. Demmel, I. Dhillon, J. Dongarra, S. Hammarling,
G. Henry, A. Petitet, et al., ScaLAPACK users’ guide.
SIAM, 1997.
[17] A. George, J. W. Liu, and E. Ng, “Communication
results for parallel sparse cholesky factorization on a
hypercube,” Parallel Computing, vol. 10, no. 3, 1989.
[18] A. Pothen and C. Sun, “A mapping algorithm for
parallel sparse cholesky factorization,” SIAM Journal
_on Scientific Computing, vol. 14, no. 5, 1993._
[19] J. Lee and M. Sato, “Implementation and performance
evaluation of XcalableMP: A parallel programming language
for distributed memory systems,” in International
Conference on Parallel Processing Workshops, 2010.
[20] E. Agullo, O. Beaumont, L. Eyraud-Dubois, and S.
Kumar, “Are static schedules so bad? a case study
on cholesky factorization,” in 2016 IEEE Interna_tional Parallel and Distributed Processing Symposium_
_(IPDPS), IEEE, 2016._
[21] A. Zafari, M. Tillenius, and E. Larsson, “Programming
models based on data versioning for dependency-aware
task-based parallelisation,” in International Conference
_on Computational Science and Engineering, 2012._
[22] L. Lamport, “The temporal logic of actions,” ACM
_Transactions on Programming Languages and Systems_
_(TOPLAS), vol. 16, no. 3, 1994._
[23] C. Castes, E. Agullo, O. Aumage, and E. Saillard,
“Decentralized in-order execution of a sequential task-based
code for shared-memory architectures,” Inria
Bordeaux Sud-Ouest, Research Report RR-XXX, Jan.
2022. [Online]. Available: https://hal.inria.fr/hal-XXX.
[24] L. Lamport, Specifying Systems: The TLA+ Language
_and Tools for Hardware and Software Engineers._
Addison-Wesley, Jun. 2002.
# The Application of Blockchain Technology in Crowdfunding: Towards Financial Inclusion via Technology
## Aishath Muneeza, Nur Aishah Arshad, Asma’ Tajul Arifin
### International Centre for Education in Islamic Finance (INCEIF)
Malaysia
**ABSTRACT**
The emergence of innovative digital financial technologies, namely blockchain and
crowdfunding, indicates new ways to reach the poor and economically vulnerable
groups. This paper contributes to the emerging literature on financial technology by
presenting the case of crowdfunding in financial inclusion. The rationale behind this
inquiry is to demonstrate the relevance of crowdfunding to financial inclusion, and
how might blockchain technology fuel the development of crowdfunding. This paper
also constitutes one of the first attempts to analyse crowdfunding in Malaysia and
Shariah-compliant crowdfunding. In this paper, a desk research is conducted where
journal articles, books, magazines, newspapers, industry reports published on the
subject matter are reviewed critically. To analyse the development of crowdfunding in
Malaysia, 6 crowdfunding platforms are examined. The outcome of this research
suggests that crowdfunding is a viable means to promote financial inclusion, and
blockchain technology could help mitigate the current issues faced by platform
operators.
**Keywords: Blockchain Technology; Crowdfunding; Financial Inclusion; Islamic**
Finance and Banking; Islamic Crowdfunding
Received: 28 July 2018; Revised: 18 Aug 2018; Accepted: 28 Aug 2018
ISSN 2056-757X
https://doi.org/10.18646/2056.52.18-007
**Inclusion via Technology**
#### 1. Introduction
Financial inclusion has become a prominent financial reform agenda in most countries
around the world. This phenomenon stems from the realisation that an inclusive
financial system is critical in reducing poverty and promoting shared prosperity. In
reference to The World Bank (2018), “financial inclusion means that individuals and
businesses have access to useful and affordable financial products and services that
meet their needs such as transactions, payments, savings, credit and insurance, and
being delivered in a responsible and sustainable way”. Kim and De Moor (2017)
highlighted that financial exclusion is not limited to individuals but also extends to
companies, especially small and medium enterprises (SMEs), which have limited or
no financial support.
The rise of digital financial services indicates an alternative to reach the financially
excluded people with a range of financial services in a cost-effective and sustainable
manner. Financial innovations such as microfinance, mobile payment, crowdfunding,
and cryptography are playing a vital role in providing greater financial access to the
financially underserved populations. In particular, the growing use of crowdfunding
platforms and blockchain has created new means to reach financially constrained
individuals, households and companies.
It is in this regard that this study analyses the role of crowdfunding and blockchain in
expanding financial inclusion based on data from Malaysia. Although a growing
literature examines crowdfunding, little work has been done in the context of Muslim
developing countries and financial inclusion. According to a report by Pew Research
Center (2011), Muslim-majority countries are among the poorest in the world, as
measured by gross domestic product (GDP) per capita in U.S. dollars. Moreover, the
number of venture capitalists in the Arab world is alarmingly insufficient, compared to
the rising demand for venture capital (Taha and Macias, 2014). The main purpose of
this paper is thus to explore crowdfunding as a means to widen financial access in
Muslim developing country, and Malaysia was chosen for a number of reasons. First,
Malaysia has achieved one of the highest levels of financial inclusion among Southeast
Asia countries, due in part to policies taking advantage of digital technology to expand
financial access for all (World Bank, 2017). The Global Findex Database of the World
Bank revealed that 81 percent of Malaysia’s adults had an account at a licensed
financial institution in 2014, which indicates a high level of financial inclusion
(Demirgüç-Kunt et al., 2018; World Bank, 2017). Second, Malaysia is one of the first
countries in Southeast Asia to give regulatory approval for equity crowdfunding, and
the number of crowdfunding platforms in Malaysia is rising (Thas Thaker et al., 2018).
#### 2. Methodology
This is a desk research where literatures written on the subject are reviewed to derive
conclusions. As such, data required for the study is primarily collected from secondary
sources consist of books, research articles, industry reports, various websites, trade
journals, magazines, and newspapers.
International Journal of Management and Applied Research, 2018, Vol. 5, No. 2
- 83
#### 3. Literature Review
**_3.1 Financial Inclusion and Crowdfunding_**
Financial inclusion has become a global agenda in order to bridge the gap between the
poor and the rich. The World Bank has been keeping track of global financial
inclusion to ensure that all planned agendas for upholding it are implemented
accordingly. The Global Financial Inclusion Database (Global Findex) covers more
than 140 economies, and the indicators of financial inclusion measure how people
save, borrow, make payments and manage risk. According to the 2017 Global Findex
survey, 69 percent of adults or 3.8 billion people as of 2017 have a bank account
(Demirgüç-Kunt et al., 2018). There are reasons why globally 31 percent of the adults
are unbanked. The most commonly cited barriers include: lack of enough money, the
belief that an account is not needed, accounts being too expensive, family members
already having an account, financial institutions being too far away, lack of necessary
documentation, lack of trust, and religious reasons (Demirgüç-Kunt et al., 2018). An
examination of these reasons reveals that limited access to finance (lack of money,
banks too far away) is the main barrier to creating a bank account, while personal
belief (religious reasons, feeling an account is unnecessary) accounts for a small part.
Studies show that there has been a significant increase in the use of mobile phones and
the internet to conduct financial transactions (Demirgüç-Kunt et al., 2018; Ouma et al.,
2017; World Bank, 2013). Between 2014 and 2017, this has contributed to a rise in the
share of account owners sending or receiving payments digitally from 67 percent to 76
percent globally, and in the developing world from 57 percent to 70 percent
(Demirgüç-Kunt et al., 2018). The growing internet access through affordable devices
could enable those from developing countries to use a cheaper payment system in
making money transactions. According to the data by the World Bank, globally there
are 1.7 billion adults remain unbanked, yet two-thirds of them own a mobile phone that
enables them to access financial services (see Figure 1).
**Figure 1: Unbanked adults who own a mobile phone**
Source: Demirgüç-Kunt et al. (2018: 11)
Jenik et al. (2017) suggest that crowdfunding can benefit financial inclusion efforts
in the following ways: (i) it improves access to finance by excluded and
underserved individuals and micro, small, and medium enterprises; (ii) it allows for
innovations of existing models to serve Bottom of Pyramid (BoP) customers, such as
microfinance and mobile financial services; and (iii) it opens access to more complex
investment products for resilience and asset building. A study by World Bank (2013)
indicates that there is an opportunity for up to 344 million people in developing
economies to participate in crowdfunding. Crowdfunding also opens access to funding
and investment opportunities that are currently unavailable to customers at the BoP. To
ensure that people benefit from digital financial services, it is important to have a
well-developed payment system, good physical infrastructure, appropriate regulations, and
vigorous consumer protection safeguards (Demirgüç-Kunt et al., 2018).
At the core of crowdfunding are two defining aspects: first, raising small amounts of
money from a large number of people (hence the term ‘crowd’); second, the
fundraising and transactions take place via the internet. The World Bank (2013)
defines crowdfunding as an internet-enabled way for businesses or other organizations
to raise money in the form of either donations or investments from multiple
individuals. Similarly, Kirby and Worner (2014) described crowdfunding occurs where
small amounts of money is obtained from a large number of individuals or
organisations, to fund a project, a business or personal loan, and other needs through
an online web-based platform in crowdfunding. In short, crowdfunding can be
described as an internet enabled platform that is open for individuals or corporations
for particular purposes, including wealth creation and social value creation.
The United States (US) began to implement crowdfunding in 2007 and was subsequently
followed by other markets after the 2008 global financial crisis (Jenik et al.,
2017; Kirby and Worner, 2014; Kim and De Moor, 2017). Crowdfunding offers an
alternative to traditional banking, which has grown rapidly in markets driven by
technology, as well as macroeconomic and regulatory factors (Jenik et al, 2017).
Crowdfunding can be categorised into four types: loan, equity, reward, and donation. While
the former two involve financial returns, the latter two have no payback.
With the growing emphasis on the social roles of financial services, crowdfunding
could be seen as an innovative way to improve financial inclusion (Jenik et al, 2017;
Kim and De Moor, 2017). Many developing countries are on the verge of financial
exclusion due to remoteness, restricted access to financial services, lack of money, and
lack of necessary documentation, which indicates the weakness in the existing
financial system (Demirgüç-Kunt et al., 2018). Financial technology in a broader sense
can increase financial inclusion because it has a capability to reach the financially
vulnerable populations. For instance, mobile banking and electronic financial
transactions are considered important ways to promote financial inclusion due to its
accessibility, affordability, and safety (Ouma et al., 2017). Equally, crowdfunding can
help those who have limited access to finance to raise funds quickly at affordable cost.
Nonetheless, crowdfunding still can be further enhanced by altering a certain set of
regulations in order to improve its implementation (Kim and De Moor, 2017).
Blockchain-based financial services could help to resolve the dependency of the
unbanked on cash and traditional peer-to-peer trust circles. Theoretically, blockchain
technology is a solution that allows an efficient and low-cost equity registration, equity
transaction and transfer, and shareholder voting in the crowdfunding industry, and
eliminating legal risks related to fund management (Zhu and Zhou, 2016). However,
there are many legal and technical issues to be resolved for blockchain technology to
be widely implemented in the market (Guo and Liang, 2016; Zhu and Zhou, 2016).
**_3.2 Overview of Islamic Crowdfunding_**
The concept of crowdfunding is in line with Islamic teachings in which Allah said in
the Quran, “Cooperate in righteousness and piety”. To a great extent, crowdfunding
and Islamic finance have many similarities. Both Islamic finance and crowdfunding
place a strong emphasis on trust and most importantly, both share the same principle of
financing: profit and loss sharing philosophy (Asian Institute of Finance, 2017; Taha
and Macias, 2014: 116).
Crowdfunding can be conceptualised as “Shariah compliant” if it conforms to
Shariah law: it shares profit and loss, does not involve prohibited industries (alcohol,
pork, drugs, etc.), and does not charge any interest on lending. While most
crowdfunding categories fit into these principles of Islamic finance, loan-based
crowdfunding requires adaptation to be Shariah compliant (IFSB, 2017; Marzban and
Asutay, 2014; Taha and Macias, 2014). More specifically, equity-based crowdfunding
can be equated with the PLS concept of Islamic finance, while donation-based
crowdfunding matches the mandatory charitable contribution in Islam -- zakah. While
reward-based crowdfunding has no parallels in Islamic finance, it does not challenge
its principles because money is exchanged for non-financial rewards. However, loanbased crowdfunding would need to be interest-free in order to comply with Shariah
law. Any excess amount taken when repaying is considered Riba which is not
permissible in Islam.
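The arithmetic difference between the two financing modes can be illustrated as follows; this is a hypothetical example with made-up numbers and function names of our own, not drawn from the paper.

```python
def pls_payout(capital, profit, investor_share):
    """Profit-and-loss sharing: the investor receives a pre-agreed share of
    any profit, and bears losses; no fixed return is guaranteed."""
    if profit >= 0:
        return capital + profit * investor_share
    return capital + profit          # a loss reduces the capital returned

def interest_payout(capital, rate):
    """Interest-based lending: a fixed return regardless of the outcome;
    the excess over capital is what Shariah law prohibits as Riba."""
    return capital * (1 + rate)
```

For instance, an investor providing 1,000 with a 60 percent profit share receives 1,120 if the venture makes 200 of profit, but only 900 if it loses 100, whereas an interest-based lender at 5 percent would claim 1,050 in either case.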
The Islamic Financial Services Board (IFSB) recognised the importance of
crowdfunding, as can be shown in the efforts of Organisation of Islamic Cooperation
(OIC) to introduce Shariah compliant crowdfunding platforms to the local funding
ecosystem. In its annual Islamic financial services industry stability report, IFSB
(2017) reported that there are 80 active crowdfunding platforms with a primary
location in an OIC member state. However, most of these platforms do not provide the
full details of admission criteria, contracts, as well as measures to ensure Shariah
compliance (IFSB, 2017). Some of the notable platforms are summarised as follows:
1. Beehive, a loan-based platform in the UAE, applies a dual approach: it offers
both conventional and Shariah-compliant lending options. The Islamic option is
described in a detailed manner over its website (IFSB, 2017:120).
2. Yomken, a Shariah-friendly platform based in Cairo, follows the profit and loss
sharing concept and does not impose any interest rate. No investments will be
made in projects associated with industries prohibited in Islam, such as
alcohol, drugs, and pork (Taha and Macias, 2014: 118).
International Journal of Management and Applied Research, 2018, Vol. 5, No. 2
**Inclusion via Technology**
3. Liwwa, a loan-based crowdfunding platform in Lebanon gives a brief
explanation of its business model (based primarily on murabaḥah) in the FAQ
section of its website (IFSB, 2017:120).
4. Ethis Crowd and KapitalBoost are Islam-oriented crowdfunding platforms
operating outside the OIC. Based in Singapore, these two platforms provide
financing for SMEs and real estate developers (IFSB, 2017:121).
5. Shekra, one of the oldest equity crowdfunding platforms in Egypt, does not
explain how it assures Shariah compliance on its website, but the platform
follows a profit-sharing concept (IFSB, 2017:120).
6. Danadidik, an Indonesian platform for student loans, applies a profit and loss
sharing model to calculate the returns for investors; however, its Shariah
compliance is uncertain (IFSB, 2017:120).
Islamic crowdfunding could respond to the needs of both Muslims and non-Muslims
(Taha and Macias, 2014) who might not have the means and resources to access
finance. These individuals or firms may have low credit ratings or lack
guarantees (Kim and De Moor, 2017), but possess intangible assets which are difficult
to quantify using traditional methods. In this context, Shariah-friendly crowdfunding
platforms could fill the gaps in the financial industry by providing a means for the
crowd to support each other.
For a financial product to be labelled as Shariah compliant, the underlying contract and
instrument used in its structuring must be valid in form and substance, and the
implementation of the product must be in line with Shariah principles (Abozaid, 2014).
Form relates to fulfilling the basic Shariah structural requirements and conditions in
the contract and its parties, while substance is concerned with the essence and spirit
of the structured product, especially when more than one contract or element is
involved in the product. In terms of implication, the structured product must not lead
to harm or have unfavourable or negative consequences.
Take donation-based crowdfunding, for instance: suitable instruments would be
Hibah, Qard-Hasan and Murabaha. Hibah is a form of benevolent (tabarru`) contract
which can be applied in a crowdfunding platform, where a donor can transfer an asset to a
recipient without any consideration (Bank Negara Malaysia, 2016). Murabaha refers to
a sale and purchase of an asset where the acquisition cost and the mark-up are
disclosed to the purchaser (Bank Negara Malaysia, 2013). Murabaha can serve as an
alternative to an interest-based (riba) arrangement by using a disclosed mark-up price
to attain profit. Qard refers to a contract of lending money by a lender to a borrower,
where the latter is bound to repay
an equivalent replacement amount to the lender (Bank Negara Malaysia, 2018).
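The arithmetic behind these instruments is simple enough to sketch. The figures below are hypothetical, chosen only to contrast a disclosed murabaha mark-up with the par repayment of qard:

```python
# Illustrative sketch (hypothetical figures): a murabaha sale discloses the
# acquisition cost and the mark-up up front, while qard is repaid at par.

def murabaha_price(acquisition_cost: float, markup_rate: float) -> float:
    """Selling price = disclosed cost plus an agreed, fixed mark-up."""
    return round(acquisition_cost * (1 + markup_rate), 2)

def qard_repayment(principal: float) -> float:
    """Qard is benevolent lending: the borrower repays only the principal."""
    return principal

# A financier buys equipment for RM10,000 and resells it at a 10% mark-up,
# both figures disclosed to the purchaser before the contract is concluded.
print(murabaha_price(10_000, 0.10))  # 11000.0, payable e.g. in instalments
print(qard_repayment(5_000))         # 5000, no excess amount (no riba)
```

The key point the sketch captures is that the financier's return is a fixed, disclosed mark-up agreed at contract time, not interest accruing on a debt.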
Marzban and Asutay (2014) proposed a number of Shariah-compliant contracts that
can be applied to Islamic crowdfunding, and these are summarised in Table 1:
**Table 1: Islamic Crowdfunding**

| Models | Characteristics | Proposed instruments |
|---|---|---|
| Donation | Debt-free funding with no payback; no tangible returns | Hiba; Qard-Hasan; Murabaha |
| Reward | Debt-free funding with no payback; token of appreciation | Sale |
| Loan | Fixed periodic returns; repayment | Murabaha; Ijarah |
| Equity | No guarantee on repayment; profit-sharing | Diminishing Musharakah; Musharakah |

Source: IFSB (2017); Marzban and Asutay (2014); Taha and Macias (2014)
An ijarah refers to− (a) a lease contract that transfers the ownership of a usufruct of an
asset to another person for a specified period in exchange for a specified consideration;
or (b) a contract for hiring of services of a person for a specified period in exchange for
a specified consideration (Bank Negara Malaysia, 2018). Leasing gives businesses,
especially small companies, the opportunity to continue operating without incurring
the high cost of buying new machinery. It is also a chance to inject capital into the
business by securing a project.
Musyarakah refers to a partnership between two or more parties, whereby all parties
will share the profit and bear the loss from the partnership. On the other hand, a
musyarakah may be entered into by two or more parties on a particular asset or venture
which allows one of the partners to gradually acquire the shareholding of the other
partner through an agreed redemption method during the tenure of the musyarakah
contract. Such an arrangement is commonly referred to as musyarakah mutanaqisah
(diminishing partnership) (Bank Negara Malaysia, 2018). Musyarakah is widely used
in investment-based financing where profit and loss are shared between the parties. It
benefits both parties, as one obtains the capital to operate the business and the other
gains profit from the investment.
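The redemption mechanics of musyarakah mutanaqisah can be illustrated with a short sketch. The shares and redemption size below are hypothetical; a real contract would specify the agreed redemption method:

```python
# Illustrative sketch (hypothetical figures) of a diminishing partnership:
# one partner gradually buys out the other's share via agreed redemptions.

def diminishing_schedule(financier_share: float, redemption: float) -> list:
    """Return the financier's remaining share after each periodic redemption."""
    assert redemption > 0, "redemption per period must be positive"
    shares = []
    while financier_share > 0:
        # round() keeps the running share tidy despite float arithmetic
        financier_share = max(0.0, round(financier_share - redemption, 10))
        shares.append(financier_share)
    return shares

# Financier starts with 80% of the venture; the client redeems
# 20 percentage points per period until full ownership transfers.
print(diminishing_schedule(0.80, 0.20))  # [0.6, 0.4, 0.2, 0.0]
```

Each step of the list is a point at which ownership (and hence the share of profit and loss) is rebalanced between the partners.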
**_3.3 Blockchain-based Crowdfunding_**
Blockchain technology could mitigate the problems faced by crowdfunding and
traditional banking. For instance, fundraisers could issue their own shares or perhaps
smart contracts guaranteeing that pledge contributions would be returned where
funding targets were not met. This allows project initiators and crowdfunding
shareholders to securely register their rights at low cost (Zhu and Zhou, 2016).
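As an illustration of the refund guarantee mentioned above, the following sketch mimics, in plain Python rather than an actual smart-contract language, the all-or-nothing rule such a contract could enforce:

```python
# Minimal sketch (not a real smart contract) of an all-or-nothing escrow:
# pledges are released only if the funding target is met, otherwise refunded.

class CampaignEscrow:
    def __init__(self, target: float):
        self.target = target
        self.pledges: dict[str, float] = {}

    def pledge(self, backer: str, amount: float) -> None:
        """Record (or top up) a backer's pledge held in escrow."""
        self.pledges[backer] = self.pledges.get(backer, 0.0) + amount

    def settle(self) -> dict[str, float]:
        """Release funds to the initiator if the target is met;
        otherwise refund every backer in full."""
        raised = sum(self.pledges.values())
        if raised >= self.target:
            return {"initiator": raised}   # funds released
        return dict(self.pledges)          # refunds, one per backer

escrow = CampaignEscrow(target=1_000)
escrow.pledge("alice", 400)
escrow.pledge("bob", 300)
print(escrow.settle())  # target missed -> {'alice': 400.0, 'bob': 300.0}
```

On an actual blockchain this rule would execute automatically at the campaign deadline, with no intermediary able to withhold refunds.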
Blockchain has the following characteristics: secure and indelible, distributed ledger,
decentralised data management, transparent and auditable, anti-tampering and anti-forgery, efficient, low cost, orchestrated and flexible (Guo and Liang, 2016; Niforos et
al., 2017; Zhu and Zhou, 2016). Blockchain is a decentralised and distributed ledger
technology that ensures data security, transparency, and integrity; records cannot be
tampered with or forged, and thus it is deemed to have great potential in the finance
industry. Table 2 summarises the differences between traditional banking and how
blockchain could resolve the issues in crowdfunding.
**Table 2: How Blockchain Could Disrupt Traditional Banking and Aid Crowdfunding**

| | Traditional banking | Blockchain |
|---|---|---|
| Efficiency bottlenecks | Complex clearing process; large amount of manual inspection; many intermediate links | Distributed ledger; automated; disintermediation |
| Security of fund management | A central trusted party; complex equity transaction and transfer | Point-to-point transmission; uniqueness of equity transaction and transfer |
| Cost | High cost | Low cost |
| Transaction lag | Centralised data management; leads and lags | Decentralised data management; transactions are time-stamped and can be verified in near real-time |
| Operation risk | Information asymmetry, which often leads to adverse selection and moral hazards; double payment | Use of asymmetric encryption; transparent |

Source: Guo and Liang (2016); Niforos et al. (2017); Zhu and Zhou (2016)
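The anti-tampering characteristic noted among blockchain's properties comes from hash chaining: each record embeds a hash of its predecessor, so altering any past entry invalidates every later link. A minimal sketch:

```python
# Sketch of tamper evidence via hash chaining: editing any past record
# breaks the hash of every block that follows it.
import hashlib

def make_block(prev_hash: str, record: str) -> dict:
    """Create a block whose hash covers the previous hash and the record."""
    digest = hashlib.sha256((prev_hash + record).encode()).hexdigest()
    return {"prev": prev_hash, "record": record, "hash": digest}

def chain_is_valid(chain: list) -> bool:
    """Recompute every link; any edited record makes validation fail."""
    prev = "genesis"
    for block in chain:
        expected = hashlib.sha256((prev + block["record"]).encode()).hexdigest()
        if block["prev"] != prev or block["hash"] != expected:
            return False
        prev = block["hash"]
    return True

chain, prev = [], "genesis"
for record in ["alice pledges 400", "bob pledges 300"]:
    block = make_block(prev, record)
    chain.append(block)
    prev = block["hash"]

print(chain_is_valid(chain))                 # True
chain[0]["record"] = "alice pledges 4,000"   # attempted tampering
print(chain_is_valid(chain))                 # False
```

A real network adds consensus and replication on top of this, so no single party can quietly rewrite the ledger.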
The benefits of building a platform on blockchain technology are numerous. To
illustrate, a crowdfunding platform may:
1. introduce a blockchain-based voting system, allowing the crowd or even
shareholders to participate in corporate governance in a cost-effective
manner (Zhu and Zhou, 2016);
2. use blockchain-based smart contracts to keep track of all changes in the
agreement made between the crowd and the project initiator, thereby allowing
regulators to identify fraudulent fundraising (Niforos et al., 2017; Zhu and Zhou,
2016);
3. develop an identity management system that gives full control to users via
blockchain (Niforos et al., 2017), preventing identity theft and money
laundering;
4. implement digital currencies such as bitcoin to avoid intermediaries like banks and
payment providers (Collins and Baeck, 2015);
5. establish the conditions under which a transaction occurs, helping regulators to
observe and regulate the quota of investment and the qualification of investors
(Niforos et al., 2017; Zhu and Zhou, 2016).
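Benefit (1), shareholder voting, reduces at its core to a weighted tally over verified holdings. The sketch below assumes a simple share-weighted design; the source does not prescribe any particular voting scheme:

```python
# Sketch (assumed design) of an on-chain shareholder vote in which each
# verified holder's vote is weighted by their recorded shareholding.
from collections import Counter

def tally(votes: dict, holdings: dict) -> Counter:
    """votes: shareholder -> choice; holdings: shareholder -> share count.
    Votes from addresses with no recorded holding carry zero weight."""
    result = Counter()
    for shareholder, choice in votes.items():
        result[choice] += holdings.get(shareholder, 0)
    return result

holdings = {"alice": 60, "bob": 30, "carol": 10}
votes = {"alice": "approve", "bob": "reject", "carol": "approve"}
print(tally(votes, holdings))  # Counter({'approve': 70, 'reject': 30})
```

Run on-chain, the share register and the tally are both publicly auditable, which is what makes such governance cost-effective for small shareholders.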
There are plenty of examples of combining blockchain technology and crowdfunding.
The Initial Coin Offering (ICO), where start-ups use blockchain protocols and
cryptocurrency tokens as a means of crowdfunding their ventures, has become a
phenomenon. A number of crowdfunding platforms (e.g. Fundedbyme, StartEngine,
WeFunder) are already accepting bitcoin. More notably, crowdfunding platforms
such as Swarm and Lighthouse allow companies to create their own coins
(cryptocurrencies) which can be traded for other virtual currencies (Collins and Baeck,
2015).
Thus, based on the above analysis, as blockchain technology matures and gains wide
use, a secure, efficient, and cost-effective crowdfunding platform can be established
on it.
**_3.4 Crowdfunding Platforms in Malaysia_**
Since the early 1980s, Malaysians have been involved in community-based
crowdfunding projects (Asian Institute of Finance, 2017: 16). One notable example is
the collection of public contributions to watch live football matches in the days when live
television was not easily available (Securities Commission Malaysia, 2014). In 1982, a
football fan, Peter Teo, pitched a crowdfunding campaign to pay for the live telecast of
World Cup football matches. After several weeks of collection, the campaign
successfully raised a total of RM300,000, which was sufficient to pay for live telecasts of
the World Cup (Chua, 2018).
In 2012, crowdfunding platforms using digital technology came to Malaysia. The early
adopters were largely donation- and reward-based (Cambridge Judge Business School,
2017) and unregulated before 2015 (Asian Institute of Finance, 2017). Securities
Commission Malaysia announced regulatory frameworks for crowdfunding in 2015
and peer-to-peer lending in 2016 respectively.
In 2018, the transaction value in the crowdfunding segment amounted to US$0.7m in
Malaysia (Statista, 2018). Crowdfunding platforms are regulated under the
supervision of the Securities Commission Malaysia (SCM). According to Securities
Commission Malaysia (2018), there are seven crowdfunding operators and six peer-to-peer financing operators registered with the SCM (see Table 3). To date, these platforms
have raised a total of RM118 million collectively, benefiting over 300 micro, small, and
medium enterprises (Securities Commission Malaysia, 2018).
**Table 3: List of Market Operators licensed by Securities Commission Malaysia**

| No. | Company | Official website | Platform |
|---|---|---|---|
| 1 | Ata Plus Sdn Bhd | http://ata-plus.com/ | Equity Crowdfunding |
| 2 | Crowdo Malaysia Sdn Bhd | https://crowdo.com/ | Equity Crowdfunding |
| 3 | Eureeca SEA Sdn Bhd | https://eureeca.com/ | Equity Crowdfunding |
| 4 | FBM Crowdtech Sdn Bhd | https://www.fundedbyme.com/ | Equity Crowdfunding |
| 5 | Funnel Technologies Sdn Bhd | N/A | Equity Crowdfunding |
| 6 | Pitch Platforms Sdn Bhd | https://www.equity.pitchin.my/ | Equity Crowdfunding |
| 7 | Crowdplus Sdn Bhd | https://www.crowdplus.asia/ | Equity Crowdfunding |
| 8 | B2B Finpal Sdn Bhd | http://www.b2bfinpal.com/ | Peer-to-Peer Financing |
| 9 | Ethis Kapital Sdn Bhd | https://www.nusakapital.com/ | Peer-to-Peer Financing |
| 10 | FBM Crowdtech Sdn Bhd | https://www.alixco.com/ | Peer-to-Peer Financing |
| 11 | Modalku Ventures Sdn Bhd | https://fundingsocieties.com.my/ | Peer-to-Peer Financing |
| 12 | Peoplender Sdn Bhd | https://www.fundaztic.com/ | Peer-to-Peer Financing |
| 13 | QuicKash Malaysia Sdn Bhd | https://www.quickash.com/ | Peer-to-Peer Financing |

Source: Securities Commission Malaysia, n.d.
The development stage of these platforms varies: while the majority of platforms are
functioning (e.g. Ata Plus, Crowdo, Eureeca), one company is still under development
(Funnel Technologies), and another has expanded its services into related categories
(Ethis Kapital). In particular, the founder of Ethis Kapital, Umar Munshi, has created a
number of Shariah-compliant platforms, ranging from real estate crowdfunding (Ethis
Crowd) to donation-based crowdfunding (Global Sadaqah).
The key characteristics of the crowdfunding platforms are summarised as follows:
1. Ata Plus, a blockchain-enhanced licensed equity crowdfunding platform, currently
uses blockchain technology for record-keeping purposes and accepts bitcoin as an
investment instrument since digital currency is not recognised as legal tender in the
country (Noordin, 2018).
2. Crowdo, a crowdfunding platform that is fully licensed by regulators in Malaysia,
Singapore, and Indonesia. In early 2018, Crowdo announced a strategic partnership
and cooperation with Sentinel Chain, a blockchain-based financial inclusion
services marketplace (Riana, 2018).
3. Pitch IN, a reward- and equity-based crowdfunding platform active in Malaysia.
4. Eureeca, a Dubai-based equity crowdfunding platform, has received licences
from the UK, Malaysia and the Netherlands.
5. FundedByMe, a Stockholm based crowdfunding platform, mostly active in
Scandinavia but also operates in Singapore and Malaysia.
6. Crowd Plus, an equity crowdfunding platform which has offices in China, Hong
Kong, Vietnam, and Malaysia.
Nearly 300 campaigns were successfully funded via these six platforms (Asian Institute of
Finance, 2017: 26), and the amounts raised differ significantly. The Asian Institute of
Finance (2017: 28-29) reported that the lowest amount raised was RM6 for a technology
project and the highest thus far is RM2,636,900 for a brick and mortar business.
**_3.5 Shariah Compliant Blockchain-based Crowdfunding in Malaysia_**
By its very nature, blockchain technology does not contradict Islamic teaching,
since technology itself is always deemed permissible in Shariah; it is the way
technology is utilised that makes it lead to Haram or Halal. A careful examination of
blockchain technology suggests that its form, substance and implication (Abozaid,
2014) are all aligned with Islamic values, as it leads to irrevocability and
transparency in business. Thus, the Islamic finance industry could benefit greatly from
blockchain technology in its efforts to provide services in the true spirit of Shariah
compliance.
Malaysia has set up a regulatory sandbox for developing blockchain solutions by
partnering with industry and technology providers (Niforos et al., 2017: 41). In
November 2017, Securities Commission Malaysia announced that it will be embarking
on a blockchain pilot project for Over The Counter (OTC) markets (Fong, 2017b).
Neuroware, a Malaysia-based blockchain service provider, is the sole technical vendor
behind this pilot project. This pilot project is done through the aFFINity Innovation
lab, which is an initiative facilitated by the Securities Commission Malaysia to catalyse
greater interest towards the development of emerging technology-driven innovations in
financial services (Fong, 2017a). In February 2018, Neuroware announced that the
company is now taking part in government tenders (Neuroware, 2018); in June 2018,
the Malaysian government signed a Memorandum of Understanding with a South
Korean blockchain lab, IncuBlock, to develop a blockchain platform permissible under
Islamic law (Zuckerman, 2018). These recent announcements imply a favourable
attitude displayed by the Malaysian government towards blockchain technology.
Based on the above reports, it can be seen that the Malaysian government is open to new
developments in financial technology. This finding is consistent with earlier studies
which concluded that the Malaysian government and its financial regulator, the Securities
Commission, have positive attitudes towards financial technology. For example, the World
Bank (2017) found that the Malaysian government leverages technology to provide
financial services to low-income households using new instruments and
innovative solutions (e.g. agent banking, mobile banking).
#### 4 Discussion
The idea of integrating blockchain technology into crowdfunding platforms is highly
feasible in Malaysia, where implementation is already in progress. Malaysia provides a
very good blueprint for regulators to engage with the industry, practitioners, experts,
potential funders and fund-raisers (Cambridge Judge Business School, 2017). In
addition, the ongoing blockchain pilot project of the Securities Commission
Malaysia has been a significant milestone on the road to implementing blockchain
technology in the finance sector.
This paper proposes a crowdfunding structure that combines both Shariah principles
and blockchain technology to be implemented in the industry (see Figure 2).
**Figure 2: Proposed Framework -- Blockchain-enabled Mudharabah Crowdfunding**
Mudharabah is one of the most popular contracts used in Islamic finance transactions.
In a Mudharabah contract, profits and losses are shared according to the profit-sharing
ratio. The issuer, as the Mudhaarib, pledges the issuance of funds through a
crowdfunding platform. The application as Mudhaarib is made through blockchain
technology, i.e. an Economic Identity which provides digital identity to individuals with
enhanced privacy, so that identity is restricted to devices as well as other individuals
with access. Additionally, Smart Contracts could be used for transaction verification
and storage purposes, eliminating the need for third parties. The Mudhaarib discloses
all the information with regard to their projects, including the percentage of actual
profits to be divided between them in the event of a return. The crowd, or potential
investors, then review the proposal and invest if they consider the project worthwhile.
Since profits depend on the performance of the venture, both entrepreneur and investor
need to allocate resources (both financial and non-financial) efficiently. Mudharabah
crowdfunding is thus a symbiotic relationship whereby both parties leverage on the
competence of the other.
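The profit-sharing rule described above can be made concrete with a worked example. The figures and the 60:40 ratio in the investors' favour are hypothetical:

```python
# Worked sketch (hypothetical figures) of Mudharabah profit-sharing:
# outcomes are split by the pre-agreed ratio disclosed on the platform,
# so returns depend entirely on the venture's performance.

def mudharabah_split(outcome: float, investor_ratio: float):
    """Split a venture's profit (or loss) between the investor and the
    Mudhaarib according to the agreed ratio."""
    investor = round(outcome * investor_ratio, 2)
    mudhaarib = round(outcome - investor, 2)
    return investor, mudhaarib

# Agreed ratio is 60:40 in the investors' favour.
print(mudharabah_split(10_000, 0.60))  # profitable year: (6000.0, 4000.0)
print(mudharabah_split(-2_000, 0.60))  # loss-making year: (-1200.0, -800.0)
```

Because there is no fixed repayment, a bad year simply yields a smaller (or negative) outcome for both sides, which is what aligns the incentives of entrepreneur and investor.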
However, even in this conducive framework, this paper has identified a few challenges
that could prevent blockchain technology from being harnessed to its fullest potential
in crowdfunding platforms. These include:
i. 52% of the world’s population still do not have access to the Internet and one
billion people worldwide lack the digital literacy and skills necessary to fully
take advantage of ICTs (International Telecommunications Union, 2017). The
cost of Internet access is high in developing and underdeveloped economies. As
Demirgüç-Kunt et al. (2018) pointed out, mobile phones and internet cannot
drive financial inclusion in the absence of necessary infrastructure, namely
reliable electricity and mobile networks.
ii. The disadvantaged groups may lack the necessary know-how to attract funding.
In equity-based crowdfunding platforms, prospective entrepreneurs must
demonstrate that their ideas are viable in order to attract investments. There is
also a lack of training and education to equip the disadvantaged groups with the
necessary skill sets in business administration and information technology.
iii. The crowd size for equity-based crowdfunding is still quite small in Malaysia.
This can be attributed to low public awareness and a limited investor pool,
which have yet to reach the desired level of maturity (Asian Institute of
Finance, 2017).
iv. The current guidelines on equity-based crowdfunding stipulate a cap of
RM5,000 per project owner and RM50,000 a year for total crowdfunding
investment. Retail investors will need to self-declare that they are willing to
take the associated risk if they wish to invest beyond the safety threshold. Such
additional steps and paperwork may hinder the growth of crowdfunding (Asian
Institute of Finance, 2017).
v. Blockchain technology is still in its infancy in Malaysia, and thus it will take
time to reach a critical mass of ecosystem participants and to realise full
network benefits (Niforos et al., 2017).
vi. The industry needs time to adopt blockchain technology. Executives need to
rethink their business models and test their viability before making any strategic
move. To make smart contracts viable, lawyers and regulators will need to
develop an in-depth understanding of blockchain (Iansiti and Lakhani, 2017).
Their adoption will require major regulatory, economic and social change.
vii. There is an absence of one common set of standards that can ensure the
interoperability of systems across industry and supply chains (Niforos et al.,
2017: 49). Gaining institutional agreement on standards and processes involve
coordinating the activity of many different actors (Iansiti and Lakhani, 2017).
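Some of these regulatory constraints are themselves easy to encode. The sketch below expresses the retail-investor caps from point (iv) as a rule check of the kind a smart contract or platform back-end might apply; the exact interpretation of the caps and of the self-declaration override is an assumption for illustration:

```python
# Sketch (assumed interpretation) of the retail-investor caps in point (iv):
# RM5,000 per issuer and RM50,000 in total per year, unless the investor
# self-declares willingness to take the associated risk.

PER_ISSUER_CAP = 5_000   # RM, per project owner
ANNUAL_CAP = 50_000      # RM, total crowdfunding investment per year

def can_invest(amount: int, invested_in_issuer: int,
               invested_this_year: int, self_declared: bool = False) -> bool:
    """Return True if the proposed investment stays within both caps,
    or if the retail investor has self-declared to exceed them."""
    if self_declared:
        return True
    return (invested_in_issuer + amount <= PER_ISSUER_CAP
            and invested_this_year + amount <= ANNUAL_CAP)

print(can_invest(3_000, invested_in_issuer=1_000, invested_this_year=20_000))  # True
print(can_invest(3_000, invested_in_issuer=4_000, invested_this_year=20_000))  # False
print(can_invest(3_000, 4_000, 20_000, self_declared=True))                    # True
```

Automating such checks on-chain would remove the extra paperwork step while still giving regulators an auditable record of each declaration.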
To sum up, blockchain-based crowdfunding has a huge potential to be a viable
platform to promote financial inclusion. It could make financial services become
accessible for all, bridging the gaps between the rich and poor, urban and rural, men
and women. Blockchain-based crowdfunding may improve financial inclusion to
another level when its mechanism involves the crowd in a sustainable manner. Shariah
principles, on the other hand, provide guidelines to build and develop a socially
responsible blockchain-based crowdfunding. Taking these together, blockchain-based
crowdfunding that is Shariah-compliant could benefit the society as a whole.
#### 5 Lessons from Malaysia’s Experience
There are several lessons that can be drawn from Malaysia's experience in
crowdfunding that could be useful for other countries, especially those wishing to
leverage financial technology to provide financial services to those facing financial
constraints.
1. Engaged, open, and proactive regulator: Malaysia is one of the first countries in
Southeast Asia to introduce crowdfunding regulation. There are regulatory
measures of varying scope to safeguard the interests of investors, in addition to
ongoing efforts to invite open dialogues with the private sector. Nonetheless, as
the Asian Institute of Finance (2017: 6) points out, the current regulatory
framework requires periodic recalibration as crowdfunding evolves and the
market grows.
2. Build awareness: Campaigns, roadshows, and conferences to create awareness of
crowdfunding and blockchain are necessary in empowering the financially
disadvantaged groups. The mainstream media is also important in showing the
benefits of crowdfunding (Asian Institute of Finance, 2017) and blockchain. Media
coverage of success stories of crowdfunding and progress on the regulatory framework
has been useful in attracting the attention of the public.
3. Encourage financial innovation: Securities Commission Malaysia and Bank Negara
Malaysia have been supportive of the development of financial technology.
Malaysia adopted a regulatory sandbox in October 2016, enabling the
experimentation of fintech solutions in a live environment, subject to appropriate
safeguards and regulatory requirements (Cambridge Judge Business School, 2017;
Niforos et al., 2017; World Bank, 2017).
4. Education and training: The World Bank (2017: 59) reported that the Malaysian
government proactively educates the population to improve their financial literacy
and encourages them to adopt new technologies. In the digital age, however,
comprehensive training sessions should be provided to aspiring entrepreneurs so
that they can improve their marketing and pitching skills to attract investments
(Asian Institute of Finance, 2017) using digital technology such as video or social
media.
5. Engage with private sectors: Active and constructive dialogue between the
regulator and the private sector has been critical in promoting financial inclusion
(Cambridge Judge Business School, 2017; World Bank, 2017). Leveraging on
resources and inputs from the private sector is crucial in widening financial access
to those in need of financial help. Additionally, outreach initiatives with other
industry players such as business angel and investment network could enlarge the
investor pool in crowdfunding platforms (Asian Institute of Finance, 2017).
6. Shariah-compliant crowdfunding: With Malaysia being an Islamic financial centre,
its regulatory framework is expected to drive the development of Islamic
crowdfunding in the Muslim countries. To date, the number of Shariah-compliant
crowdfunding platforms in Malaysia is quite limited. The application of blockchain
technology to crowdfunding presents a new chapter in fundraising, financial
inclusion, and perhaps Islamic banking. The consensus-based and transactional
nature of blockchain (Niforos et al., 2017: 12) could reduce administrative and
legal complexities of crowdfunding.
#### 6 Conclusion
Crowdfunding is a practice of funding a project or venture by raising small amounts of
money from a large number of people via the internet. It can be seen as an alternative
to existing financial services, targeted at many different audiences, ranging from
aspiring entrepreneurs to investors, from the needy to philanthropists. Crowdfunding has
the potential to attain financial inclusion. Blockchain technology could bring
crowdfunding to another level because it not only helps in enhancing data security but
also efficiency and affordability.
It might be too early for jubilation, but there are good reasons to be confident and
hopeful about the application of blockchain on crowdfunding and the future of
Shariah-compliant crowdfunding platforms in Malaysia. Not least of these is the fact
that the regulator has been supportive towards the emerging financial technology.
This paper provides a basis for further work in Islamic crowdfunding and how
blockchain might improve crowdfunding platforms. This paper provides background
by defining Islamic crowdfunding, providing an overview of its forms and substance,
describing the most recent technological trends in crowdfunding, highlighting benefits
of integrating blockchain to crowdfunding, and summarising the key barriers to
blockchain-enabled Islamic crowdfunding platforms in Malaysia. Follow-up work
could focus specifically on the competitive advantage of blockchain-based Islamic
crowdfunding platforms, and how it varies in different economic and legal contexts.
#### 7 References
1. Abozaid, A. (2014), Reforming the methodology of product development in Islamic
_finance, Germany: Lap Lambert Academic Publishing._
2. Asian Institute of Finance (2017), Crowdfunding Malaysia’s Sharing Economy:
_Alternative Financing For Micro, Small, and Medium Enterprises, Kuala Lumpur:_
Asian Institute of Finance.
3. Bank Negara Malaysia (2016), Hibah, 3 August, BNM/RH/PD 028-5
4. Bank Negara Malaysia (2018), Ijarah, 29 June, BNM/RH/PD 028-2
5. Bank Negara Malaysia (2013), Murabahah, 23 December, BNM/RH/STD 028-4 I
6. Bank Negara Malaysia (2015), Musyarakah, 20 April, BNM/RH/STD 028-7
7. Cambridge Judge Business School (2017), Crowdfunding in East Africa:
_Regulation and Policy for Market Development, available from:_
https://www.jbs.cam.ac.uk/fileadmin/user_upload/research/centres/alternativefinance/downloads/2017-05-eastafrica-crowdfunding-report.pdf [accessed on 1
Aug 2018].
8. Chua, J. (2018), “Did You Know That Malaysians Once Sponsored RTM To Air
The 1982 World Cup?”, Rojak Daily, available from:
http://www.rojakdaily.com/entertainment/article/5133/did-you-know-thatmalaysians-once-sponsored-rtm-to-air-the-1982-world-cup [accessed on 1 Aug
2018].
9. Collins, L. and Baeck, P. (2015), Cryptocurrencies could bring cost-savings to
_crowdfunding and make it easier to hold small stakes in companies, UK: NESTA,_
available from: https://www.nesta.org.uk/blog/crowdfunding-and-cryptocurrencies/
[accessed on 11 Aug 2018].
10. Demirgüç-Kunt, A. et al. (2018), The Global Findex Database 2017: Measuring
_Financial Inclusion and Fintech Revolution, International Bank for Reconstruction_
and Development, Washington, DC: World Bank.
11. Fong, V. (2017a), “Behind The Scenes: Securities Commission Malaysia’s
Blockchain Project”, Fintech News Singapore, available from:
http://fintechnews.sg/15270/blockchain/securities-commission-malaysiablockchain-neuroware/ [accessed on 1 Aug 2018].
12. Fong, V. (2017b), “Securities Commission Malaysia Embarks on Blockchain Pilot
Project”, Fintech News Singapore, available from:
http://fintechnews.sg/13963/malaysia/securities-commission-malaysia-embarks-onblockchain-pilot-project/ [accessed on 1 Aug 2018].
13. Guo, Y. and Liang, C. (2016), “Blockchain application and outlook in the banking
industry”, Financial Innovation, Vol. 2, No. 1, pp. 24.
https://doi.org/10.1186/s40854-016-0034-9
14. Iansiti, M. and Lakhani, K. R. (2017), “The Truth About Blockchain”, Harvard
_Business Review, Vol. 95, No. 1, pp. 118–127._
15. Islamic Financial Services Board (IFSB) (2017), Islamic Financial Services
_Industry Stability Report 2017, Kuala Lumpur, Malaysia: IFSB._
16. International Telecommunications Union (ITU) (2017), Fast Forward Progress:
_Leveraging Tech to Achieve the Global Goals, ITU, Geneva, Switzerland: ITU._
17. Jenik, I., Lyman T., and Nava, A. (2017), Crowdfunding and financial inclusion,
The Consultative Group to Assist the Poor (CGAP), available from:
https://www.cgap.org/sites/default/files/Working-Paper-Crowdfunding-andFinancial-Inclusion-Mar-2017.pdf [accessed on 11 Aug 2018].
18. Kim, H. and De Moor, L. (2017), “The Case of Crowdfunding in Financial Inclusion:
A Survey”, Strategic Change, Vol. 26, No. 2, pp. 193-212.
https://doi.org/10.1002/jsc.2120
19. Kirby, E., and Worner, S. (2014), Crowd-funding: An Infant Industry Growing
_Fast, Madrid, Spain: International Organization of Securities Commissions_
(IOSCO).
20. Marzban, S. and Asutay, M. (2014), “Shariah-compliant Crowd Funding: An
Efficient Framework for Entrepreneurship Development in Islamic Countries”,
Conference Paper presented in Harvard Islamic Finance Forum, April 2014,
Boston, United States of America, https://doi.org/10.13140/RG.2.1.2696.1760.
21. Neuroware (2018), Tender Support for Blockchain Technology in Malaysia,
available from: http://neuroware.io/blog/tender-support-for-blockchain-technologyin-malaysia/ [accessed on 11 Aug 2018].
22. Niforos, M.; Ramachandran, V.; Rehermann, T. (2017), Block Chain:
_Opportunities for Private Enterprises in Emerging Market. Washington, D.C.:_
International Finance Corporation, available from:
https://openknowledge.worldbank.org/handle/10986/28962 [accessed on 1 Aug
2018].
23. Noordin, K. A. (2018), “Profile: Putting her faith in equity crowdfunding”, The
_Edge Market, available from: http://www.theedgemarkets.com/article/profile-_
putting-her-faith-equity-crowdfunding [accessed on 11 Aug 2018].
24. Ouma, S.A., Odongo, T.M. and Were, M. (2017). “Mobile financial services and
financial inclusion: Is it a boon for savings mobilization?”, Review of Development
_Finance, Vol. 7, No. 1, pp.29–35._ https://doi.org/10.1016/j.rdf.2017.01.001
25. Pew Research Center (2011), The Future of the Global Muslim Population,
available from: http://assets.pewresearch.org/wpcontent/uploads/sites/11/2011/01/FutureGlobalMuslimPopulation-WebPDFFeb10.pdf [accessed on 1 Aug 2018].
26. Riana, A. (2018), “InfoCorp announces Strategic Cooperation with Crowdo—the
First Financial Service Provider to join Sentinel Chain in providing P2P loan
services”, Medium, available from: https://medium.com/sentinelchain/infocorpand-crowdo-announces-strategic-partnership-for-sentinel-chain-407469424cbf
[accessed on 11 Aug 2018].
27. Securities Commission Malaysia (n.d.), List of Registered Market Operators,
available from: https://www.sc.com.my/digital/list_rmo/ [accessed on 1 Aug 2018].
28. Securities Commission Malaysia (2014), Annual Report: Part 1 Growing Our
_Market, available from: https://www.sc.com.my/wp-_
content/uploads/eng/html/resources/annual/ar2014_eng/part1.pdf [accessed on 1
Aug 2018].
International Journal of Management and Applied Research, 2018, Vol. 5, No. 2
- 97
-----
**Inclusion via Technology**
29. Securities Commission Malaysia (2018), SC Invites Applications for Registration
_as Equity Crowdfunding and Peer-to-Peer Financing Operators, available from:_
https://www.sc.com.my/post_archive/sc-invites-applications-for-registration-asequity-crowdfunding-and-peer-to-peer-financing-operators/ [accessed on 1 Aug
2018].
30. Taha T. and Macias I. (2014), “Crowdfunding and Islamic Finance: A Good
Match?”, In: Atbani F.M., Trullols C. (eds) Social Impact Finance. London:
Palgrave Macmillan, https://doi.org/10.1057/9781137372697_10
31. Thas Thaker, M. A. M.; Thas Thaker, H. M. and Pitchay, A. A. (2018), “Modeling
crowdfunders’ behavioral intention to adopt the crowdfunding-waqf model (CWM)
in Malaysia: The theory of the technology acceptance model”, International
_Journal of Islamic and Middle Eastern Finance and Management, Vol. 11, No. 2,_
pp. 231-249, https://doi.org/10.1108/ IMEFM-06-2017-0157
32. The Statistics Portal. (2018). Crowdfunding Malaysia, available from:
https://www.statista.com/outlook/335/122/crowdfunding/malaysia#market-arpu
[accessed on 1 Aug 2018].
33. World Bank, (2013), Crowdfunding's Potential for the Developing World,
Washington, DC: World Bank.
https://openknowledge.worldbank.org/handle/10986/17626
34. World Bank (2017), Financial Inclusion in Malaysia: Distilling Lessons for Other
_Countries. Washington, DC: World Bank._
https://openknowledge.worldbank.org/handle/10986/27543
35. World Bank (2018), _Financial Inclusion Overview, available from:_
http://www.worldbank.org/en/topic/financialinclusion/overview [accessed on 1 Aug
2018].
36. Zhu, Z., and Zhou, Z. Z. (2016), “Analysis and outlook of applications of
blockchain technology to equity crowdfunding in China”, Financial Innovation,
Vol. 2, No. 1, pp. 29. https://doi.org/10.1186/s40854-016-0044-7
37. Zuckerman, M. J. (2016), “Malaysian Gov’t Committee Partners With Korean Lab
to Develop Sharia-Compliant Blockchain”, Cointelegraph, available from:
https://cointelegraph.com/news/malaysian-gov-t-committee-partners-with-koreanlab-to-develop-sharia-compliant-blockchain [accessed on 1 Aug 2018].
International Journal of Management and Applied Research, 2018, Vol. 5, No. 2
- 98
-----
|
{
"disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.18646/2056.52.18-007?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.18646/2056.52.18-007, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "CCBY",
"status": "HYBRID",
"url": "https://www.ijmar.org/v5n2/18-007.pdf"
}
| 2,018
|
[
"Review"
] | true
| 2018-07-30T00:00:00
|
[
{
"paperId": "466ad9f0e86e48b830c79609205d1f7b56e663e1",
"title": "Modeling crowdfunders’ behavioral intention to adopt the crowdfunding-waqf model (CWM) in Malaysia: The theory of the technology acceptance model"
},
{
"paperId": "81d8a65951c70bb2d928d2a985ac650ea22eada0",
"title": "Mobile financial services and financial inclusion: Is it a boon for savings mobilization?"
},
{
"paperId": "5296e81728d72fb6f3476e462d9f6aff7f1ba1a7",
"title": "Financial inclusion in Malaysia : distilling lessons for other countries"
},
{
"paperId": "8b07b1c1cf88e72a0255bc92de33af0cf469b5bd",
"title": "The Case of Crowdfunding in Financial Inclusion: A Survey"
},
{
"paperId": "36dfcd6c6a2365dd13a0e166d0ec34aba6e45dd8",
"title": "Analysis and outlook of applications of blockchain technology to equity crowdfunding in China"
},
{
"paperId": "444f63ec398bc47be32936c35567a78ab2c8e062",
"title": "Blockchain application and outlook in the banking industry"
},
{
"paperId": "ba9a9034bee525c621e42301c69df61aae480118",
"title": "Reforming the methodolgoy of product development in Islamic finance"
},
{
"paperId": null,
"title": "The Application of Blockchain Technology in Crowdfunding: Towards Financial Inclusion via"
},
{
"paperId": null,
"title": "The Global Findex Database 2017: Measuring Financial Inclusion and Fintech Revolution, International Bank for Reconstruction"
},
{
"paperId": "a8327365caeb960a50a0c746fe0b683804ccb6ba",
"title": "The Truth about Blockchain"
},
{
"paperId": "c83f6914282ef2177f868b5339229946054a2b08",
"title": "International Organization of Securities Commissions (IOSCO)"
},
{
"paperId": null,
"title": "Fast Forward Progress: Leveraging Tech to Achieve the Global Goals"
},
{
"paperId": "cbbb426987455372a88fc1354539f8814b09ceed",
"title": "Financial Inclusion-An Overview"
},
{
"paperId": null,
"title": "( 2018 ) , “ Did You Know That Malaysians Once Sponsored RTM To Air The 1982 World Cup ?"
},
{
"paperId": "e0a93c6cf959efbc0a79dfd6b8d272314f6e086c",
"title": "Crowdfunding and Islamic Finance: A Good Match?"
},
{
"paperId": null,
"title": "Shariah-compliant Crowd Funding: An Efficient Framework for Entrepreneurship Development in Islamic Countries"
},
{
"paperId": "4f164eab2e3b4ce9e0a353ac6acb228c57562431",
"title": "Crowdfunding's Potential for the Developing World"
},
{
"paperId": null,
"title": "Block Chain : Opportunities for Private Enterprises in Emerging Market"
},
{
"paperId": null,
"title": "Tender Support for Blockchain Technology in Malaysia"
},
{
"paperId": null,
"title": "The Consultative Group to Assist the Poor (CGAP)"
},
{
"paperId": null,
"title": "Crowdfunding in East Africa: Regulation and Policy for Market Development"
},
{
"paperId": null,
"title": "SC Invites Applications for Registration as Equity Crowdfunding and Peer-to-Peer Financing Operators"
},
{
"paperId": null,
"title": "Cryptocurrencies could bring cost-savings to crowdfunding and make it easier to hold small stakes in companies, UK: NESTA"
},
{
"paperId": null,
"title": "List of Registered Market Operators"
},
{
"paperId": null,
"title": "Malaysian Gov't Committee Partners With Korean Lab to Develop Sharia-Compliant Blockchain"
},
{
"paperId": null,
"title": "The Future of the Global Muslim Population"
},
{
"paperId": null,
"title": "Crowdfunding Malaysia"
},
{
"paperId": null,
"title": "Behind The Scenes: Securities Commission Malaysia's Blockchain Project"
},
{
"paperId": null,
"title": "( 2018 ) , “ InfoCorp announces Strategic Cooperation with Crowdo — the First Financial Service Provider to join Sentinel Chain in providing P 2 P loan services ”"
},
{
"paperId": null,
"title": "Profile: Putting her faith in equity crowdfunding"
}
] | 12,009
|
en
|
[
{
"category": "Medicine",
"source": "external"
},
{
"category": "Medicine",
"source": "s2-fos-model"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/fffcd92064b2e1e6bef26e63e85d3b6895630ade
|
[
"Medicine"
] | 0.808838
|
Developing a Standardized and Reusable Method to Link Distributed Health Plan Databases to the National Death Index: Methods Development Study Protocol
|
fffcd92064b2e1e6bef26e63e85d3b6895630ade
|
JMIR Research Protocols
|
[
{
"authorId": "38191268",
"name": "C. Fuller"
},
{
"authorId": "2052830648",
"name": "Wei Hua"
},
{
"authorId": "2494340",
"name": "Charles E. Leonard"
},
{
"authorId": "5275775",
"name": "A. Mosholder"
},
{
"authorId": "6579667",
"name": "R. Carnahan"
},
{
"authorId": "34135447",
"name": "S. Dutcher"
},
{
"authorId": "2068703620",
"name": "Katelyn King"
},
{
"authorId": "48864105",
"name": "Andrew B Petrone"
},
{
"authorId": "47310417",
"name": "Robert Rosofsky"
},
{
"authorId": "11953865",
"name": "L. Shockro"
},
{
"authorId": "6360669",
"name": "Jessica G. Young"
},
{
"authorId": "144925635",
"name": "J. Min"
},
{
"authorId": "3671857",
"name": "I. Binswanger"
},
{
"authorId": "2059981452",
"name": "Denise Boudreau"
},
{
"authorId": "1883849",
"name": "M. Griffin"
},
{
"authorId": "4559142",
"name": "Margaret A. Adgent"
},
{
"authorId": "48846061",
"name": "J. Kuntz"
},
{
"authorId": "2237972671",
"name": "C. McMahill-Walraven"
},
{
"authorId": "3512658",
"name": "P. Pawloski"
},
{
"authorId": "152532981",
"name": "R. Ball"
},
{
"authorId": "1921029",
"name": "S. Toh"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"JMIR Res Protoc"
],
"alternate_urls": [
"http://www.researchprotocols.org/index"
],
"id": "278131df-030d-4e6c-b083-d57f3b740dc4",
"issn": "1929-0748",
"name": "JMIR Research Protocols",
"type": "journal",
"url": "https://www.researchprotocols.org/"
}
|
Background Certain medications may increase the risk of death or death from specific causes (eg, sudden cardiac death), but these risks may not be identified in premarket randomized trials. Having the capacity to examine death in postmarket safety surveillance activities is important to the US Food and Drug Administration’s (FDA) mission to protect public health. Distributed networks of electronic health plan databases used by the FDA to conduct multicenter research or medical product safety surveillance studies often do not systematically include death or cause-of-death information. Objective This study aims to develop reusable, generalizable methods for linking multiple health plan databases with the Centers for Disease Control and Prevention’s National Death Index Plus (NDI+) data. Methods We will develop efficient administrative workflows to facilitate multicenter institutional review board (IRB) review and approval within a distributed network of 6 health plans. The study will create a distributed NDI+ linkage process that avoids sharing of identifiable patient information between health plans or with a central coordinating center. We will develop standardized criteria for selecting and retaining NDI+ matches and methods for harmonizing linked information across multiple health plans. We will test our processes within a use case comprising users and nonusers of antiarrhythmic medications. Results We will use the linked health plan and NDI+ data sets to estimate the incidences and incidence rates of mortality and specific causes of death within the study use case and compare the results with reported estimates. These comparisons provide an opportunity to assess the performance of the developed NDI+ linkage approach and lessons for future studies requiring NDI+ linkage in distributed database settings. This study is approved by the IRB at Harvard Pilgrim Health Care in Boston, MA.
Results will be presented to the FDA at academic conferences and published in peer-reviewed journals. Conclusions This study will develop and test a reusable distributed NDI+ linkage approach with the goal of providing tested NDI+ linkage methods for use in future studies within distributed data networks. Having standardized and reusable methods for systematically obtaining death and cause-of-death information from NDI+ would enhance the FDA’s ability to assess mortality-related safety questions in the postmarket, real-world setting. International Registered Report Identifier (IRRID) DERR1-10.2196/21811
|
JMIR RESEARCH PROTOCOLS Fuller et al
##### Protocol
# Developing a Standardized and Reusable Method to Link Distributed Health Plan Databases to the National Death Index: Methods Development Study Protocol
##### Candace C Fuller[1], MPH, PhD; Wei Hua[2], MSc, MHS, MD, PhD; Charles E Leonard[3], PharmD, MSCE; Andrew Mosholder[2], MD, MPH; Ryan Carnahan[4], PharmD, MS; Sarah Dutcher[2], PhD, MS; Katelyn King[1], BA; Andrew B Petrone[1], MPH; Robert Rosofsky[5], MA; Laura A Shockro[1], BA; Jessica Young[1], PhD; Jea Young Min[6], PharmD, MPH, PhD; Ingrid Binswanger[7], MD, MPH, MS; Denise Boudreau[8], RPh, PhD, MS; Marie R Griffin[6], MD, MPH; Margaret A Adgent[6], MSPH, PhD; Jennifer Kuntz[9], MS, PhD; Cheryl McMahill-Walraven[10], MSW, PhD; Pamala A Pawloski[11], PharmD; Robert Ball[2], MD, MPH, ScM; Sengwee Toh[1], ScD
1Department of Population Medicine, Harvard Pilgrim Health Care Institute, Harvard Medical School, Boston, MA, United States
2Office of Surveillance and Epidemiology, Center for Drug Evaluation and Research, Food and Drug Administration, Silver Spring, MD, United States
3Center for Pharmacoepidemiology Research and Training, Department of Biostatistics, Epidemiology, and Informatics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States
4University of Iowa, College of Public Health, Iowa City, IA, United States
5Health Information Systems Consulting, Milton, MA, United States
6Vanderbilt University, Nashville, TN, United States
7Kaiser Permanente Colorado, Aurora, CO, United States
8Kaiser Permanente Washington Health Research Institute and University of Washington, Seattle, WA, United States
9Kaiser Permanente Northwest, Portland, OR, United States
10Aetna, a CVS Health company, Blue Bell, PA, United States
11HealthPartners Institute, Bloomington, MN, United States
**Corresponding Author:**
Candace C Fuller, MPH, PhD
Department of Population Medicine
Harvard Pilgrim Health Care Institute
Harvard Medical School
401 Park Drive, Suite 401 East
Boston, MA, 02215
United States
Phone: 1 617 867 4867
[Email: Candace_Fuller@harvardpilgrim.org](mailto:Candace_Fuller@harvardpilgrim.org)
### Abstract
**Background:** Certain medications may increase the risk of death or death from specific causes (eg, sudden cardiac death), but
these risks may not be identified in premarket randomized trials. Having the capacity to examine death in postmarket safety
surveillance activities is important to the US Food and Drug Administration’s (FDA) mission to protect public health. Distributed
networks of electronic health plan databases used by the FDA to conduct multicenter research or medical product safety surveillance
studies often do not systematically include death or cause-of-death information.
**Objective:** This study aims to develop reusable, generalizable methods for linking multiple health plan databases with the
Centers for Disease Control and Prevention’s National Death Index Plus (NDI+) data.
**Methods:** We will develop efficient administrative workflows to facilitate multicenter institutional review board (IRB) review
and approval within a distributed network of 6 health plans. The study will create a distributed NDI+ linkage process that avoids
sharing of identifiable patient information between health plans or with a central coordinating center. We will develop standardized
criteria for selecting and retaining NDI+ matches and methods for harmonizing linked information across multiple health plans.
We will test our processes within a use case comprising users and nonusers of antiarrhythmic medications.
**Results:** We will use the linked health plan and NDI+ data sets to estimate the incidences and incidence rates of mortality and
specific causes of death within the study use case and compare the results with reported estimates. These comparisons provide
an opportunity to assess the performance of the developed NDI+ linkage approach and lessons for future studies requiring NDI+
linkage in distributed database settings. This study is approved by the IRB at Harvard Pilgrim Health Care in Boston, MA. Results
will be presented to the FDA, at academic conferences, and published in peer-reviewed journals.
**Conclusions:** This study will develop and test a reusable distributed NDI+ linkage approach with the goal of providing tested
NDI+ linkage methods for use in future studies within distributed data networks. Having standardized and reusable methods for
systematically obtaining death and cause-of-death information from NDI+ would enhance the FDA’s ability to assess
mortality-related safety questions in the postmarket, real-world setting.
**International Registered Report Identifier (IRRID):** DERR1-10.2196/21811
**_(JMIR Res Protoc 2020;9(11):e21811)_** [doi: 10.2196/21811](http://dx.doi.org/10.2196/21811)
**KEYWORDS**
National Death Index; data linkage; all-cause mortality; cause specific mortality; distributed analysis; multisite research
### Introduction
##### Public Health Significance and Study Motivation
Certain medications may increase the risk of death and specific
causes of death (eg, sudden cardiac death [SCD]), but these
risks may not be identified in premarket randomized controlled
trials owing to the relatively small sample sizes and the highly
selected patient populations in these trials. The capacity to
examine the risk of death in postmarket safety surveillance
activities is an important part of the US Food and Drug
Administration’s (FDA) mission to protect public health.
Although the FDA Adverse Event Reporting System (FAERS)
[1] identifies drug safety signals [2] and is vital to this mission
[3], FAERS has a number of known limitations. Similar to most
spontaneous reporting systems that rely primarily on voluntarily
reported adverse events, FAERS is susceptible to underreporting,
variable data quality, lack of denominator information, and
frequent absence of details necessary to evaluate clinical events
and associations with a specific medication [4-6].
Other components of the FDA’s postmarket medical product
safety surveillance system complement FAERS in many ways
but often do not systematically capture death or cause-of-death
information. For example, the FDA’s Sentinel System [7,8]
includes a distributed network of electronic health plan
databases. The health plans that participate in the Sentinel
System or other multicenter research networks routinely capture
data on in-hospital deaths and medically attended deaths but
often do not have complete capture of out-of-hospital deaths or
cause-of-death information. Although some health plans perform
routine or ad hoc linkages with local or state death registries or
Social Security Administration (SSA) data to address these data
gaps, such linkages are often specific to a particular study or
site.
In addition, some multicenter research networks use a distributed
data approach in which individual study sites or health plans
maintain physical and operational control over their electronic
health data behind their respective firewalls. A distributed
network approach promotes data sharing by protecting patient
privacy, data security, and proprietary interests [9-12]. The
development of a systematic method to link distributed databases
to a data source that includes both death and cause-of-death
information, such as the National Death Index (NDI), would
enhance the FDA’s ability to answer mortality-related safety
questions in the postmarket setting.
##### NDI and Cause-of-Death Information
The NDI, a self-supporting service within the National Center
for Health Statistics (NCHS) of the Centers for Disease Control
and Prevention, is a centralized database of death record
information compiled from the vital statistics offices of states
and other jurisdictions. The NDI provides death information
including death date and death certificate number (referred to
as the NDI data) and cause of death from death certificates
(referred to as NDI Plus or NDI+ data) upon request [13].
Although the SSA also provides the fact of death, it does not
provide cause-of-death information, and a 2011 determination
by the SSA that data submitted electronically by states cannot
be publicly shared in the SSA death master file has since limited
its coverage [14].
The limitations of the cause-of-death information derived from
death certificates, the foundation of state death records, and
subsequent NDI information have been well described [15]. In
brief, although efforts have been made to improve the
completeness and accuracy of cause-of-death reporting in the
United States, the cause-of-death information in the death
certificate ultimately represents medical opinions. The certifier
(eg, attending physician, medical examiner, coroner) provides
a clinical judgment informed by their training, knowledge of
medicine, and available medical history of the decedent [16].
Certifier requirements (eg, coroner or medical examiner) can
also vary according to state laws [17]. Variation in all of these
elements can lead to inaccurate documentation by the certifier,
and studies have found that causes of death listed on the death
certificates, and subsequently coded in NDI+ data, may be
misclassified by 16% to 40%, depending on the cause [18,19].
Misclassification may increase when the death is sudden and
unobserved [20,21] and also when more narrowly defined causes
of death are listed [22]. Errors introduced during translation of
the causes of death on death certificates to the International
Classification of Diseases, 10th Revision (ICD-10) codes are
much less common [23,24].
Despite the known limitations of death certificate data,
researchers have used these data to examine national death data
trends and changes in causes of death over time [22,25,26] and
have used death certificate data with other data sources to more
accurately define specific causes of death, such as SCD [27].
Notwithstanding the above mentioned limitations, the NDI is
currently the only complete national source of death and
cause-of-death information accessible to large-scale
population-based epidemiologic studies in the United States.
##### Primary and Secondary Objective of the Study
Overview of the Study Objectives
The primary objective of this study is to develop reusable
administrative and technical processes for linking multiple
health plan databases with NDI+ data to allow the FDA to assess
death and specific causes of death as outcomes in medical
product safety and effectiveness studies in distributed networks
of electronic health plan databases. We will pilot the developed
approach through a use case comprising antiarrhythmic
medication users and nonusers. The outcomes of interest in the
use case are all-cause mortality and SCD, but cardiovascular
death may also be examined if it is feasible within the study
timeline.
The secondary objectives focus on using the linked health plan
and NDI+ data to estimate the incidences and incidence rates
of mortality and specific causes of death within the use case
and comparing them with estimates reported in the literature.
Examining the incidences and incidence rates of mortality and
death from specific causes within the use case will provide an
opportunity to assess the performance of the workflows and
processes developed under the primary objectives.
##### Primary Objectives
1. Develop and pilot an administrative workflow that facilitates
efficient, coordinated, multicenter institutional review board
(IRB) review and approval for linking health plan data with
NDI+ data.
2. Create and pilot a distributed technical process for linking
health plan and NDI+ data that:
- uniformly identifies records to be submitted to the NDI
from each health plan
- avoids sharing of identifiable patient information
between participating health plans or with the
coordinating center and allows health plans to work
directly with the NDI
- uses standardized criteria to select and retain confirmed
or best match from linked NDI+ data across multiple
health plans
- harmonizes linked information across multiple health
plans by saving NDI+ data in a standardized format
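The distributed technical process described above can be sketched for a single health plan as follows. This is a minimal illustration only: the finder-file fields, the match-score threshold, and the output layout are assumptions for the sketch, not the study's actual specification (the standardized match-selection criteria are themselves a deliverable of this project).

```python
import csv
import hashlib

# Hypothetical cutoff for accepting an NDI match; illustrative only.
MATCH_SCORE_THRESHOLD = 30.0

def build_finder_file(members, path):
    """Write the identifiable finder file. It stays behind the health plan's
    firewall and is sent only to the NDI, never to the coordinating center."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["last", "first", "dob", "sex", "ssn"])
        writer.writeheader()
        writer.writerows(members)

def select_best_matches(ndi_results):
    """Apply the same score-based rule at every plan: keep at most one NDI+
    record per member, the highest-scoring record above the threshold."""
    best = {}
    for rec in ndi_results:
        if rec["score"] < MATCH_SCORE_THRESHOLD:
            continue
        prev = best.get(rec["member_id"])
        if prev is None or rec["score"] > prev["score"]:
            best[rec["member_id"]] = rec
    return best

def harmonize(best_matches):
    """Produce the de-identified, standardized table that leaves the plan for
    analysis: a pseudonymous ID, death date, and ICD-10 cause-of-death code."""
    rows = []
    for member_id, rec in best_matches.items():
        pseudo_id = hashlib.sha256(member_id.encode()).hexdigest()[:16]
        rows.append({"pseudo_id": pseudo_id,
                     "death_date": rec["death_date"],
                     "underlying_cause": rec["icd10"]})
    return rows
```

The key design point the sketch tries to capture is that only the harmonized, de-identified table is ever shared; the identifiable finder file and the raw NDI+ match results never leave the individual health plan.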
##### Secondary Objectives
The secondary objectives are as follows:
1. Estimate the incidences and incidence rates of all-cause
mortality, SCD, and potentially cardiovascular death within
a high-risk use case cohort (ie, individuals using
antiarrhythmic medications) and an average-risk cohort (ie,
individuals not on antiarrhythmic medications).
2. Assess the performance of the developed workflows and
processes for linking health plan and NDI+ data by
examining the incidences and incidence rates of all-cause
mortality, SCD, and potentially cardiovascular death within
the use case cohorts, and comparing them with estimates
previously reported in the literature.
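The quantitative core of these secondary objectives is a crude events-per-person-time calculation. A sketch with made-up numbers (not study results) shows the shape of the comparison between a high-risk and an average-risk cohort:

```python
def incidence_rate(events, person_years, per=1000.0):
    """Crude incidence rate per `per` person-years of follow-up."""
    return per * events / person_years

# Hypothetical illustration only: an antiarrhythmic-user cohort versus an
# average-risk reference cohort. These counts are invented for the sketch.
users_rate = incidence_rate(events=120, person_years=8000)     # 15.0 per 1,000 PY
nonusers_rate = incidence_rate(events=45, person_years=30000)  #  1.5 per 1,000 PY
rate_ratio = users_rate / nonusers_rate                        # 10.0
```

Rates computed this way from the linked data would then be set against estimates previously reported in the literature to gauge the performance of the linkage.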
Figure 1 provides an overview of the questions this study will
address and anticipated contributions.
**Figure 1.** Overview of study questions and anticipated contributions. NDI: National Death Index; IRB: Institutional Review Board; PHI: Protected
Health Information.
### Methods
##### Use Case and Rationale
For this study, we chose antiarrhythmic medications as the use
case. The arrhythmogenicity of antiarrhythmic medications is
well known, and several antiarrhythmic medications are known
to be associated with elevated risks of all-cause mortality and
SCD [28-30]. SCD associated with arrest, generally defined as
the sudden cessation of heart function, is a major cause of
mortality and a major public health concern. Ventricular
fibrillation is often associated with SCD and is a pulseless
arrhythmia with irregular and chaotic electrical activity and
ventricular contraction in which the heart immediately loses its
ability to pump [31]. Ventricular fibrillation is the initial
electrocardiogram rhythm in 75% of outpatient cases of SCD
[32]. Torsade de Pointes is a specific form of polymorphic
ventricular tachycardia that, if rapid or prolonged, can lead to
ventricular fibrillation and SCD [33].
There are approximately 20 cardiovascular medications and
well over 100 noncardiovascular medications suspected of
causing SCD, ventricular fibrillation, or Torsade de Pointes
[28]. For example, although class III antiarrhythmic medications
are used to treat atrial or ventricular arrhythmias, they prolong
repolarization and cardiac refractoriness and can increase an
individual’s propensity for Torsade de Pointes [34]. In addition,
individuals with arrhythmias are at a high risk of death and
SCD. Therefore, we expect all-cause mortality as well as SCD
to be more common in antiarrhythmic medication users than in
a cohort not exposed to these medications. As the incidences
of mortality and SCD in the US population are well described
[35-37], identification of a cohort at average risk of these
outcomes will provide an efficient reference point for
antiarrhythmic medication users and an opportunity to assess
the performance or validity of the linkage to NDI+ data.
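Splitting members into the exposed and reference cohorts from dispensing records might look like the sketch below. The drug list and record fields are illustrative assumptions; the protocol does not reproduce the study's actual antiarrhythmic code list or accrual rules here.

```python
from datetime import date

# Illustrative drug names only, not the study's exposure definition.
ANTIARRHYTHMICS = {"amiodarone", "sotalol", "dofetilide", "flecainide"}

def split_cohorts(members, dispensings, window_start, window_end):
    """Assign each member to the high-risk (user) cohort on any qualifying
    dispensing in the accrual window; everyone else is average-risk."""
    users = set()
    for d in dispensings:
        if (d["drug"].lower() in ANTIARRHYTHMICS
                and window_start <= d["date"] <= window_end):
            users.add(d["member_id"])
    nonusers = {m for m in members if m not in users}
    return users, nonusers
```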
##### Participating Organizations
This project will be led and coordinated by the Harvard Pilgrim
Health Care Institute (HPHCI), which will work closely with
the FDA and participating health plans in all aspects of the
project. A total of 6 health plans—Aetna, a CVS Health
company; HealthPartners Institute; Kaiser Permanente Colorado;
Kaiser Permanente Northwest; Kaiser Permanente Washington;
and Vanderbilt University (which provides access to Tennessee
Medicaid data)—will participate in this project. They represent
a diverse group of health plans, including national insurers,
regional health plans, and integrated delivery systems, and cover
both commercial and public insurance programs. Although the
project will leverage the Sentinel infrastructure and be built on
the successful collaboration among participating institutions, it
will be conducted outside of the Sentinel Initiative and will be
relevant to other distributed data networks. The project is a
research activity subject to the Office for Human Research
Protections regulations, following the 45 Code of Federal
Regulations 46 [38] on the protection of human subjects, and
will undergo IRB review.
##### Development of Multisite Administrative Workflows to Support Linkage to NDI+ Data
Overview of the Administrative Workflows
This project will develop reusable and flexible administrative
workflows required to support simultaneous linkage of multiple
health plan databases with NDI+ data. As the lead project site
and coordinating center, the HPHCI will develop and facilitate
administrative processes for IRB workflow as well as
submission of the master NDI application on behalf of the
participating health plans. The HPHCI will lead the development
of the NDI application package, coordinate review by
participating health plans and the FDA as well as the execution
of legal agreements (as necessary), and will submit the master
NDI application that will include IRB documents and approvals.
The HPHCI will review, consider, and accommodate the
requirements of institutions involved in this project to ensure
that the developed workflows for NDI and IRB application
review and approval are flexible enough to be reused in future
studies. This may require review and response to any of the
following: health plan institutional requirements, FDA
requirements, relevant federal requirements (eg, revised
Common Rule [39] and other requirements), relevant state or
local jurisdiction requirements (eg, laws concerning death data),
institutional IRB requirements, or NCHS/NDI requirements.
For example, preliminary work with participating health plans
suggested the need to consider any state or local laws pertaining
to death data within project workflows. Balancing such
requirements as well as any other identified prerequisites or
constraints will be a key focus of the developed multisite
administrative workflow. In the following paragraphs, we
describe our anticipated processes for implementing coordinated
multisite, central IRB review and approval, as well as multisite
NDI application review and approval.
##### IRB Application Workflow
The revised Common Rule requires the use of a central IRB for
multisite research, with certain exceptions (82 Fed. Reg. at 7265
[final rule §.114]) [39]. In addition, the NDI currently requires
all studies requesting NDI+ data to undergo IRB review. This
project will develop and pilot an administrative workflow that
facilitates efficient, coordinated, multicenter IRB review and
approval for linking health plan data with NDI+ data in
accordance with the revised Common Rule.
The IRB at Harvard Pilgrim Health Care, the parent organization
of the HPHCI, is responsible for managing and supporting
scientific and ethical review of research studies submitted by
the HPHCI. The HPHC IRB also enters into reliance agreements
for multisite studies as a reviewing IRB and a relying IRB. The
HPHC IRB holds a Federalwide Assurance (FWA) with the US
Department of Health and Human Services [FWA00000100]
and thus is compliant with human subjects regulations within
45 Code of Federal Regulations 46 [38,40]. As the lead study
institution, the HPHCI will aim to have the HPHC IRB serve
as the IRB of record, with all participating sites ceding their
IRB review to the HPHC IRB. However, if the use of a single
IRB entity is determined not to be feasible or acceptable to the
NCHS, the NDI board, or participating health plans, the HPHCI
will work with each participating health plan to attain IRB
approval.
The study team will describe the necessary administrative
workflow processes and highlight any encountered governance
challenges (eg, local institutional policies or procedures) and
potential solutions. Furthermore, the study team will address
any complications with individual study sites obtaining approval
to cede to the HPHC IRB in the final developed workflow. The
anticipated central IRB workflow is as follows:
1. The HPHCI will submit an IRB application to the HPHC
IRB and obtain HPHC IRB approval for the study. The
HPHCI and collaborating health plans will then cede review by
initiating and executing reliance agreements with the respective
health plan IRB(s). Reliance agreements must be in place
for local health plan IRBs to cede review and for the HPHC
IRB to serve as the lead reviewing IRB. We anticipate the
cede process will proceed as follows:
- The HPHCI will provide the HPHC IRB application
and approval to participating health plans for review.
The HPHCI will work with health plans to address any
concerns or amendments needed to satisfy approval to
cede to the HPHC IRB. Individual health plan–specific
policies and procedures may apply and will be
documented.
- Participating health plans will prepare all necessary
cede request documents required by site IRB(s) and
the HPHC IRB. Health plans will submit a cede request
to the HPHC IRB.
- The HPHC IRB will review the submitted cede requests
and may require additional health plan–specific
materials in determining approval to accept the request
(eg, documentation of human subjects training from
key personnel).
- The lead HPHC IRB and the IRB(s) at participating
health plans will fully execute reliance agreements,
formally known as IRB authorization agreements, to
officially confirm the HPHC IRB as the lead reviewing
IRB of record for the study.
2. Following the completed cede process, the HPHC IRB will
be responsible for continuing review, as well as review of
amendments and of any unanticipated problems. Participating
health plans will be responsible for timely communication
and reporting to the HPHC IRB for any unanticipated
problems encountered at their site for this study.
The anticipated central IRB workflow process will be updated
as new procedures or processes are encountered. A final
recommended IRB workflow will be created after the process
is piloted and will include lessons learned, requirements for
each involved institution (eg, FDA, HPHCI, participating health
plans), relevant flowcharts, and recommendations for future
studies.
##### NDI Application Workflow
The HPHCI will lead the NDI application development and
subsequent application review by the FDA and the health plans
before submission of the final application package to the NDI.
The published guidelines for obtaining NDI application approval
by the NDI board will inform the developed workflow [41].
The HPHCI will also work with staff at the NDI to ensure all
requirements are met in accordance with the NDI guidelines.
Process development may be iterative, with the NDI providing
guidelines and the HPHCI subsequently working with health
plans and the FDA to ensure guidelines are met. Preliminary
work has identified the need for specific process development
in IRB approval for the protection of human subjects, final
disposition of identifiable data, and NDI-required agreements.
The HPHCI will document lessons learned from piloting the
administrative workflows that will inform the development of
a flexible and reusable process intended to guide future studies.
The HPHCI will review the NDI and IRB stipulations
encountered during this study and ensure appropriate processes
and guidelines are built to accommodate them. As the NDI and
IRB administrative workflows are interdependent, we will use
an iterative process outlining and updating the IRB and NDI
administrative workflows as new stipulations or requirements
are encountered. Thus, the overall administrative workflow will
include recommendations for IRB and NDI application
development for use in future studies.
##### Development of Distributed Process for Linkage Between Health Plan and NDI+ Data
Overview of the Distributed Linkage Process
The HPHCI, in collaboration with the FDA and participating
health plans, will develop a distributed linkage process that
allows health plans to work directly with the NDI to eliminate
sharing of identifiable patient information between participating
health plans or with the coordinating center. The HPHCI will
develop the distributed NDI+ data linkage process with input
from the participating health plans and pilot the process within
the study use case. Health plans will identify and submit
individuals meeting specific criteria within the use case cohorts
to the NDI for matching. The HPHCI will also work with each
participating health plan to develop and ensure a standardized
NDI+ data linkage process across databases. Figure 2 provides
a high-level overview of the anticipated distributed process for
linkage between health plan data and NDI+ data.
Piloting the process with the study use case will elucidate
adjustments that could be made to improve efficiency and
provide flexible options for future studies. We will summarize
practical lessons learned from the participating health plans and
the NDI. Although the NDI User’s Guide [42] describes the
general process for NDI+ data linkage within a single site, the
developed technical workflow will need to enable linkage to
NDI+ data at multiple study sites. Accomplishing timely and
standardized linkage to NDI+ data across multiple sites requires
defining and implementing a set of NDI submission criteria,
ensuring adequate file preparation and quality control processes
across sites, standardizing the selection and retention of NDI
matches, and storing information retrieved from the NDI in
standardized table(s) so that study analyses can be implemented.
We anticipate the following tasks will be required to build a
distributed process for linkage between health plan and NDI+
data.
**Figure 2.** Overview of the distributed National Death Index data linkage process. NDI: National Death Index; PHI: Protected Health Information.
##### Defining NDI Submission Criteria
This project will develop, pilot, and recommend case
identification and NDI submission criteria for future multicenter
studies. Multimedia Appendix 1 includes the case identification
and NDI submission criteria this project will use to determine
which individuals will be initially selected for sending to the
NDI, thereby obtaining death and cause-of-death information.
We anticipate submitting patients with deaths recorded in health
plan data or patients with potential deaths to the NDI for linkage.
We will define potential death as health plan disenrollment
between cohort entry and cohort exit plus 365 days, without
subsequent reenrollment or medical utilization >60 days after
disenrollment. It is possible that these NDI submission criteria
will be refined or redesigned as they are piloted within the study
use case. We will describe the final developed case identification
and NDI submission algorithm and provide this information for
use in future studies.
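As a concrete illustration, the disenrollment-based criterion above can be sketched as a simple predicate. The function and field names below are illustrative only, not the project's actual distributed code:

```python
from datetime import date, timedelta
from typing import Optional

def is_potential_death(cohort_entry: date, cohort_exit: date,
                       disenrollment: Optional[date],
                       reenrollment: Optional[date],
                       last_utilization: Optional[date]) -> bool:
    """Flag a member as a potential death per the draft submission criteria:
    disenrollment between cohort entry and cohort exit + 365 days, with no
    subsequent reenrollment or medical utilization >60 days after
    disenrollment."""
    if disenrollment is None:
        return False
    window_end = cohort_exit + timedelta(days=365)
    if not (cohort_entry <= disenrollment <= window_end):
        return False
    # Evidence of plan activity >60 days after disenrollment suggests the
    # member is alive, so the record would not be submitted as a potential death.
    cutoff = disenrollment + timedelta(days=60)
    if reenrollment is not None and reenrollment > cutoff:
        return False
    if last_utilization is not None and last_utilization > cutoff:
        return False
    return True
```

In practice, the criteria may be refined as they are piloted within the study use case, so thresholds such as the 60-day window would be parameterized.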
##### Preparing Files for Submission to the NDI
The NDI publishes information that health plans must provide
to conduct an NDI+ data search as well as the required file
structures in their NDI User’s Guide [42]. Health plans will
need to access these required data elements from their source
systems and transmit complete records to the NDI for matching.
To ensure that files submitted to the NDI are of sufficient
completeness, the HPHCI will develop distributed programs
for local execution by the health plans to identify any potential
data or formatting issues. Any lessons learned during these file
preparation and quality control processes will be documented
for future use and incorporation into the technical workflow.
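A distributed quality-control program of the kind described might look like the following sketch. The required-field list here is illustrative; the authoritative element list and file layout are specified in the NDI User's Guide:

```python
import csv

# Illustrative required fields only; the actual elements and formats are
# defined by the NDI User's Guide.
REQUIRED_FIELDS = ["last_name", "first_name", "birth_date", "sex"]

def check_rows(rows):
    """Return (row_number, missing_fields) for each record lacking a required
    field, numbering rows from 2 to account for the header line."""
    problems = []
    for i, row in enumerate(rows, start=2):
        missing = [k for k in REQUIRED_FIELDS if not (row.get(k) or "").strip()]
        if missing:
            problems.append((i, missing))
    return problems

def check_submission_file(path):
    """Scan a delimited extract locally, before anything leaves the health
    plan's firewall, and report rows with missing required fields."""
    with open(path, newline="") as f:
        return check_rows(csv.DictReader(f))
```

Because the program runs locally at each site, only the summary of detected problems, never identifiable data, would need to be discussed with the coordinating center.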
##### Standardizing NDI+ Data Linkage Across Multiple Health Plan Databases
After files intended for submission to the NDI have been
checked to ensure sufficient completeness and quality, each
health plan will submit their selected health plan members for
matching directly with NDI+ data. Health plan data files will
be transferred to the NCHS via either password-protected
encrypted CDs or a secure file transfer protocol site, according
to the health plan and NCHS or NDI requirements. When NDI
staff return data files directly to health plans, health plans will
load the returned files to their computer servers behind their
firewalls. These data sets will remain behind their firewalls and
will not be shared with the HPHCI, the FDA, or other health
plans. We will summarize the processes, challenges, and
requirements in the technical workflow.
##### Selecting and Retaining the Best NDI Match
When the NDI performs matching, multiple possible matches
for each individual submitted may be provided within the
NDI-returned data files. The NDI User’s Guide [42] provides
guidelines for selection and retention of NDI matches, from
among multiple possible matches for each individual submitted.
This requires researchers to assess the quality of each possible
NDI record match listed and determine which possible matches
are _best_ matches. The NDI recommends a multistep process
when determining the best match among possible multiple
matches, including using the NDI-provided probabilistic
matching scores to distinguish true matches from false matches.
The HPHCI, guided by the principles within the NDI User’s
Guide [42], will develop a standardized process for ascertaining
and keeping confirmed or best matches locally at the
participating health plan sites. This will be implemented in
distributed programs to examine all possible matches and
identify matches that are considered best based on specific
criteria.
We will design the process to be flexible and reusable, and we
anticipate a multistep process using variables within the returned
NDI data files for match selection. Processes will assess the
distribution of NDI-provided matching variables such as the
_Status Code_ (indicates NDI assessment of probability of truly
being alive or dead), _Class Code_ (indicates that some
NDI-identifying data items used in the matching criteria are
more important for determining true matches than others),
assessment of item-by-item matches between health plan and
NDI information, and probabilistic matching scores (score for
each potential match). We will implement rules for retaining
NDI matches in distributed program(s).
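A minimal sketch of such a retention rule is shown below. The field names (`status_code`, `score`), the convention that a status code of 1 denotes a presumed death, and the score threshold are all assumptions for illustration; the actual rules will follow the NDI User's Guide and the NDI-provided variables:

```python
def select_best_match(candidates, score_threshold=30.0):
    """From the list of possible NDI matches returned for one submitted
    individual, keep the candidate with the highest probabilistic score,
    provided its status code indicates a presumed death and the score clears
    a locally chosen threshold. Field names, the status-code convention, and
    the threshold are illustrative assumptions, not NDI specifications."""
    presumed_dead = [c for c in candidates if c.get("status_code") == 1]
    if not presumed_dead:
        return None
    best = max(presumed_dead, key=lambda c: c["score"])
    return best if best["score"] >= score_threshold else None
```

In the planned distributed programs, a rule like this would be applied identically at every health plan so that match retention is standardized across sites.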
The NDI returns a cause-of-death code only for records that
rank first in the list of possible NDI matches. If our match
selection process identifies a match that was not ranked first by
the NDI, this record will not have the cause-of-death information
in the initial NDI+ data files. In such instances, the HPHCI will
work with the NDI to attain this missing cause-of-death
information. However, it is possible that the NDI will be unable
to supply the cause-of-death information or may have time
delays for the return of this information. If this occurs, the
HPHCI may not be able to include newly supplied
cause-of-death information in final use case analyses and will
pilot the process for requesting and attaining this information
and document lessons learned.
The HPHCI will develop a proposed standardized table structure
that can be used in future studies to store information retrieved
from the NDI. The HPHCI will work with the health plans to
develop the ultimate table structure. The data included in this
table will be maintained behind each health plan’s firewall,
thereby preserving the distributed nature of health plan
databases. The HPHCI will document these processes and
programs in a report for future use.
##### Draft Use Case Specifications
Use Case Inclusion and Exclusion Criteria
This study will use data captured within participating health
plan databases between 2000 and 2017 (or earliest or latest
available health plan data) and the most recent NDI+ data
available at the time of NDI application.
Cohort 1 will include new users of select antiarrhythmic
medications for men aged 45 years and older and women aged
55 years and older on the date of cohort entry between 2000
and 2017 (or earliest available health plan data). The list of
select antiarrhythmic medications of interest and new-user
definition are described under the _Exposure Identification for the
Use Case_ section. We chose different age cutoff values for men
and women because risks of all-cause mortality and SCD vary
considerably by sex. The goal is to improve the specificity of
mortality and specific causes of death outcomes identified
through NDI+ matching. Younger individuals are less likely to
experience mortality and SCD than older individuals, and within
age groups, women are less likely to experience mortality and
SCD than men. The risk for SCD has been shown to increase
in women after the age of 55 years [43]. All-cause mortality is
also rare in younger age groups. Choosing a higher age cutoff
for women is intended to decrease false-positive matches and
minimize the number of NDI submissions.
We will use the entire cohort for the all-cause mortality analysis
and potentially the cardiovascular death analysis. For analyses
focused on SCD, we will restrict the cohorts to individuals under
the age of 75 years to maintain consistency with a study by
Chung et al [27], which developed and validated a computerized
algorithm to identify community originating SCD. As the risk
of mortality increases with age, Chung et al [27] found death
certificates to be less reliable for identifying SCD in older
individuals and removed patients aged ≥75 years to minimize
false positives. Although it may be difficult to capture nursing
home stays within the participating health plan databases, to
maintain consistency with the algorithm by Chung et al [27],
we will exclude individuals with evidence of a nursing home
stay in the baseline period. Cohort 1 entry will begin on an
individual’s first prescription dispensing for an oral dosage form
of an antiarrhythmic medication of interest that was preceded
by a 365-day baseline period with medical and pharmacy
benefits (gaps in enrollment <45 days bridged), during which
the individual has ≥1 encounter with a diagnosis recorded in
any care setting or an outpatient dispensing of any medication.
To mimic typical drug safety study situations in which no future
information is available to determine medication users’ vital
status, individuals with more than one episode of new use during
the study period will contribute only their first episode. This
study design choice also helps avoid the selection bias that use
of future information may generate. The protocol allows gaps
in enrollment of <45 days because it is believed that these may
not represent true gaps in coverage but rather administrative
changes. Index date will be the date of the first eligible
dispensing for a select antiarrhythmic drug of interest.
Cohort 2 will be drawn from average-risk individuals who are
not current (ie, on day of cohort entry) or past (ie, before 365
days) users of antiarrhythmic medications of interest. We will
match cohort 2 at a one-to-one ratio with cohort 1 based on age,
sex, and health plan. Index dates will also be matched to cohort
1. We will require individuals in cohort 2 to have a 365-day
baseline period with medical and pharmacy benefits (gaps in
enrollment <45 days ignored as specified above in cohort 1)
and at least one medical encounter or outpatient pharmacy
dispensing claim in the previous 365 days. As in cohort 1, cohort
2 will include the entire cohort for the all-cause mortality
analysis and potentially the cardiovascular death analysis but
will be restricted to individuals younger than 75 years and with
no evidence of a nursing home stay in the baseline period for
the SCD analyses. It is worth noting that individuals included
in either cohort 1 or 2 may in fact have used antiarrhythmic
medications outside of the study period or before enrolling in
a participating health plan.
##### Use Case Exposure Definitions
We will identify select oral antiarrhythmic medications of
interest using National Drug Codes. New use will be defined
by excluding individuals with dispensings of class I and III
antiarrhythmic drugs (all routes of administration), including
amiodarone, disopyramide, dofetilide, dronedarone, flecainide,
mexiletine, procainamide, propafenone, quinidine, and sotalol
[44,45], in the 365-day baseline period. Individuals with
dispensings of intravenous lidocaine in the 365-day baseline
period will also be excluded. Baseline exposure to adenosine
A1 agonists, digoxin, phenytoin, class II β-blocker agents, and
calcium channel blockers (class IV) agents will be ignored.
When creating treatment episodes, we will apply a stockpiling
algorithm [46] to account for the possibility that members may
refill prescriptions before the end of days’ supply of their
previous prescription. For example, if a member receives a
30-day dispensing for sotalol on January 1, and then receives
a second 30-day dispensing on January 20, the stockpiling
algorithm will adjust the second dispensing so that it starts on
January 31, after the first dispensing has been used in full. The
treatment episode will thus be 60 days in total, through March
1 (assuming February has 28 days). We will also implement a
14-day episode gap when creating treatment episodes to account
for imperfect adherence. An episode gap is the maximum
number of days of interrupted days-supply allowed between
two claims for the same drugs of interest. If the number of days
between when one prescription claim runs out and the next
claim is smaller than or equal to the episode gap, the algorithm
_bridges_ these two claims to build a continuous treatment
episode. However, if the number of days between the two claims
of the same treatment exceeds the episode gap, the treatment
episode ends at the end of the 14-day period. The episode gap
is assessed after the claim service dates are adjusted by the
stockpiling algorithm. Because we are interested in the risk of
all-cause mortality and SCD for the class of medications in
general and not individual antiarrhythmic medications, our
analyses will focus on users of any antiarrhythmic medications
of interest as a group, and the results will not be stratified by
individual medication.
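The stockpiling and episode-gap logic described above can be sketched as follows. This is a simplified illustration of the described behavior, not the exact Sentinel implementation:

```python
from datetime import date, timedelta

def build_episodes(dispensings, episode_gap=14):
    """Build continuous treatment episodes from (dispense_date, days_supply)
    pairs. Early refills are shifted to start the day after the prior supply
    runs out (stockpiling); gaps of at most `episode_gap` days between
    adjusted claims are bridged; larger gaps end the episode. Returns a list
    of (start_date, end_of_supply_date) episodes. Follow-up censoring would
    additionally extend the final episode by the 14-day gap, as described in
    the censoring plan."""
    episodes = []
    start = end = None
    for disp_date, days in sorted(dispensings):
        if start is None:
            start, end = disp_date, disp_date + timedelta(days=days - 1)
            continue
        # Stockpiling: an early refill begins the day after the prior
        # supply is exhausted.
        adj_start = max(disp_date, end + timedelta(days=1))
        if (adj_start - end).days - 1 <= episode_gap:
            # Interruption (if any) is within the allowable gap: bridge.
            end = adj_start + timedelta(days=days - 1)
        else:
            episodes.append((start, end))
            start, end = disp_date, disp_date + timedelta(days=days - 1)
    if start is not None:
        episodes.append((start, end))
    return episodes
```

Applied to the sotalol example in the text (a 30-day dispensing on January 1 and a second on January 20, in a 28-day February), this yields a single 60-day episode running through March 1.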
##### Use Case Follow-Up and Censoring Plan
For cohort 1, follow-up time will begin with the cohort
entry-defining antiarrhythmic medication dispensing (ie, day 1
of follow-up=dispensing date) and will continue based on the
treatment episode as described above. For cohort 2, follow-up
time will begin on the same day as the individual’s
corresponding match from the antiarrhythmic medication user
cohort. Follow-up will be censored upon the earliest of the
following occurrences:
1. Death or specific causes of death, as determined from NDI+
data; date of death will be the last day of follow-up (both
cohorts).
2. Health plan disenrollment (gaps of enrollment <45 days
will be ignored); the last day of enrollment will be the last
day of follow-up (both cohorts).
3. End of database time; database end date will be the last day
of follow-up (both cohorts).
4. Initiation of an antiarrhythmic medication of interest; the
day before the date of medication initiation will be the last
day of follow-up (cohort 2 only).
5. Gap between dispensings exceeding the allowable episode gap,
defined as >14 days between two consecutive dispensings of
a study antiarrhythmic medication of interest; the last day
of follow-up will be the end of days' supply of the most
recent dispensing of the study antiarrhythmic medication of
interest +14 days (cohort 1 only).
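Operationally, censoring reduces to taking the earliest applicable date among the candidate events above. A minimal helper, with illustrative reason labels, might look like:

```python
from datetime import date
from typing import Dict, Optional, Tuple

def follow_up_end(candidates: Dict[str, Optional[date]]) -> Tuple[Optional[date], Optional[str]]:
    """Return the earliest applicable censoring date and its reason from a
    mapping of reason -> date, where None marks a criterion that did not
    occur for this individual. Reason labels are illustrative."""
    applicable = {reason: d for reason, d in candidates.items() if d is not None}
    if not applicable:
        return None, None
    reason = min(applicable, key=applicable.get)
    return applicable[reason], reason
```

For cohort 2, the candidate set would also include initiation of an antiarrhythmic medication of interest (censoring the day before), while for cohort 1 it would include the treatment-gap criterion instead.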
The analysis will follow use case cohorts for death, SCD, and
potentially cardiovascular death until censored. As linking to
NDI+ data allows us to follow patients for survival through the
end of the study period, if feasible, we will also conduct an
analysis that ignores the censoring criteria and follows use case
cohorts for death and SCD, and potentially cardiovascular death
through the end of NDI+ data.
##### Use Case Outcomes
The primary outcomes of interest are all-cause mortality and
SCD. If timeline and study resources permit, we will assess
cardiovascular death as a secondary outcome of interest. Ideally,
the selected outcome algorithms would: (1) facilitate the
assessment of the performance or validity of the linkage to NDI+
data; (2) allow for comparing the incidences and incidence rates
of all-cause mortality and specific causes of death with rates
previously reported in the literature, or other national death
information sources; and (3) use data retrieved from the NDI,
and possibly information within health plan databases. To inform
future studies, we will try to capture both medically attended
and nonmedically attended deaths. We will identify these
outcomes using NDI+ data and will evaluate each outcome
separately. Although we will attempt to replicate SCD or
cardiovascular death algorithms that have been previously
validated by other studies, it may be necessary to modify or
tailor the algorithms to data elements available within the health
plan databases that have been converted into the Sentinel
Common Data Model format [47]. Multimedia Appendix 2
[27,48,49] describes the operational definitions of the outcomes.
We also provide the high-level details in the following
paragraphs.
We will determine all-cause mortality through linkage to the
NDI+ data (all deaths, including both medically attended and
nonmedically attended deaths). Two algorithms for SCD will
be used, both of which exclude persons aged ≥75 years. For the
primary SCD definition, we will adapt an algorithm focused on
community-originating events defined by Chung et al [27] for
use within the health plan databases. This algorithm uses
information available in claims data to exclude patients with
certain conditions (Table 1 [50]) as well as cause-of-death
information provided by the NDI (Table 2) [27]. The definition
of secondary SCD will focus on events that occur in medical
care settings. Studies examining ventricular arrhythmia diagnosis
in hospital settings (ie, inpatient or emergency department) have
found inpatient diagnosis codes for ventricular arrhythmia to
have high positive predictive values, regardless of diagnosis
code position [49,51,52]. To identify SCD outcomes originating
in medical settings, we will adapt these algorithms for use within
health plan databases. Secondary emergency department or
inpatient diagnoses consistent with ventricular arrhythmia or
sudden cardiac arrest were selected to attempt to identify events
occurring in medical settings, as principal diagnosis codes would
generally define conditions established after study to be chiefly
responsible for admission [53]. If feasible, we may also include
a sensitivity analysis exploring the principal emergency
department or inpatient diagnoses consistent with ventricular
arrhythmia or sudden cardiac arrest. Finally, we may examine
cardiovascular death if it is determined to be feasible by the
study team, and we would define cardiovascular death with
cause-of-death codes typically used by national death data
sources, such as the underlying cause of death consistent with
a cardiovascular cause [25]. The algorithm parameters are
outlined in more detail in Multimedia Appendix 2.
**Table 1.** High-risk conditions likely to be miscoded as sudden cardiac death per Ray et al[a].
Condition and operational definition[b]:

- **Cancer:** Diagnosis of cancer (except for nonmelanoma skin cancers) or select antineoplastic agents. Includes the following neoplasms of uncertain behavior ICD-9-CM[c] codes[d]: 235-238, except 238.2 (skin), 238.9 (site unspecified), 237.70 and 237.71 (neurofibromatosis), 238.4 (polycythemia vera), 238.7 (lymphoproliferative disease), and 285.22 (anemia in neoplastic disease)
- **HIV:** Diagnosis of HIV or use of antiretroviral agents appropriate for HIV or pentamidine (also used for other major immunocompromised patients)
- **Renal:** Diagnosis or procedure code for dialysis outside of the hospital (includes 996.73). Includes end-stage renal disease diagnosis (285.21, 585.5, 585.6), also outside of the hospital
- **Liver:** Diagnoses 570-573
- **Respiratory:** Diagnosis of respiratory failure, cardiorespiratory failure, or pulmonary heart disease. Also includes tracheostomy (excluding temporary), home oxygen, or home ventilator
- **Organ transplant:** Includes kidney, heart, lung, liver, bone marrow, and pancreas. Includes complications of transplanted organ (996.8)
- **Serious neuromuscular:** Multiple sclerosis (340), amyotrophic lateral sclerosis (335.20), Duchenne muscular dystrophy (335.21), Huntington chorea (333.4), quadriplegia, paraplegia, or spinal cord injury. Recent stroke (inpatient with primary discharge diagnosis of 430, 431, 433.x1, 434, 436) with hemiplegia/hemiparesis (342, 438.2)
- **Cardiovascular congenital anomalies:** Common truncus (745.0), transposition of great vessels (745.1), tetralogy (745.2), common ventricle (745.3), endocardial cushion defect (745.6), pulmonary atresia (746.0), tricuspid atresia (746.1), hypoplastic left heart (746.7), coarctation of aorta (747.1), other anomalies of aorta (747.2), total anomalous pulmonary venous connection (747.41). A single diagnosis is sufficient for exclusion
- **Other congenital anomalies/childhood conditions:** Sickle cell (282.6), cerebral palsy (343), spina bifida (741), Down syndrome (758.0), hydrocephalus (742.3), microcephalus (742.1), encephalocele (742.0), severe mental retardation (318.1, 318.2), cystic fibrosis
- **Other end-stage illness:** (a) Hospice care; (b) diagnosis of coma, vegetative state, debility (799.3); (c) total parenteral nutrition, percutaneous endoscopic gastrostomy, enteral feeding, malnutrition (260, 261, 262, 263) when these are for outpatients; (d) gangrene (040, gas gangrene; 785.4, gangrene: single diagnosis sufficient); (e) intravenous medications outside of the hospital, as indicated by procedures for intravenous access outside a hospital stay period
- **Drug abuse:** Includes all medications and drugs with abuse potential, with the exception of alcohol (unless hospitalization with primary discharge diagnosis: 291.x, 303.x, 305.0, 980.0, 980.9, E860.0, E860.1, E860.9) and tobacco. Codes are 292.0 (drug withdrawal syndrome), 304.x (drug dependence), 305.2-305.9 (drug abuse, except alcohol/tobacco; 305.9 is abuse not otherwise specified, may be nonspecific, but better to exclude), 965.01 (accidental poisoning, heroin), 969.6 (poisoning, psychodysleptic [hallucinogens]), 970.81 (cocaine poisoning, added in 2010), E8500 (heroin poisoning), E8541 (psychodysleptic poisoning)
aRay et al [50].
bUnless otherwise indicated, codes are ICD-9-CM diagnostic codes and a 3- or 4-digit code implies inclusion of all subcodes. Further, a single diagnosis
is sufficient for exclusion.
cICD-9-CM: International Classification of Diseases, 9th Revision, Clinical Modification.
dICD-9-CM codes will be mapped to ICD-10-CM codes during the study.
**Table 2.** Underlying cause-of-death diagnostic codes consistent with sudden cardiac death.
| International Classification of Diseases, 10th Revision code | Description |
| --- | --- |
| I10 | Essential hypertension, not otherwise specified |
| I11.9 | Hypertensive heart disease, without heart failure |
| I20 | Angina pectoris |
| I21 | Acute myocardial infarction |
| I22 | Subsequent myocardial infarction |
| I23 | Certain current complications following ST elevation and non-ST elevation myocardial infarction |
| I24 | Other acute ischemic heart disease |
| I25 | Chronic ischemic heart disease |
| I25.2 | Old myocardial infarction |
| I42.8, I42.9 | Cardiomyopathy, not otherwise specified |
| I46 | Cardiac arrest |
| I47.0 | Re-entry ventricular arrhythmia |
| I47.2 | Ventricular tachycardia |
| I49.0 | Ventricular fibrillation and flutter |
| I49.8 | Other specified cardiac arrhythmias |
| I49.9 | Cardiac arrhythmia, unspecified |
| I51.6 | Cardiovascular disease, unspecified |
| I51.9 | Heart disease, unspecified |
| I70.9 | Atherosclerosis, not otherwise specified |
| R96.1 | Death in <24 hours |
| R98 | Unattended death |
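Classifying a returned underlying cause of death against this code set can be sketched as a simple membership check. The prefix-matching convention (so that, for example, a subcode of I21 counts as I21) is an illustrative assumption; actual code handling will follow the final algorithm specification:

```python
# Code set from Table 2. Three-character entries (e.g., "I21") are treated as
# categories that cover their subcodes; this prefix convention is an
# illustrative assumption, not part of the published algorithm.
SCD_CODES = {"I10", "I11.9", "I20", "I21", "I22", "I23", "I24", "I25",
             "I25.2", "I42.8", "I42.9", "I46", "I47.0", "I47.2", "I49.0",
             "I49.8", "I49.9", "I51.6", "I51.9", "I70.9", "R96.1", "R98"}

def is_scd_consistent(underlying_cause: str) -> bool:
    """True if an underlying cause-of-death code matches the Table 2 set,
    either exactly or at the three-character category level."""
    code = underlying_cause.strip().upper()
    return code in SCD_CODES or code.split(".")[0] in SCD_CODES
```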
##### Use Case Analytic Plan
For both cohort 1 and cohort 2, we will generate a baseline
characteristics table. Table 3 includes the proposed list of
baseline characteristics and Table 4 includes the initial code
lists. We will examine demographic variables, health care
utilization intensity measures, and select comorbid conditions
during the 365-day baseline period. Expert opinion and review
of the literature will inform variable selection. If feasible, we
will also consider examining a claims-based measure of frailty
[54].
Separately for all-cause mortality, SCD, and cardiovascular
death, we will estimate the incidences and incidence rates as
the number of outcome events during the observation period as
defined in the outcome section above, divided by total persons
in cohort (for incidences) or person-time (for incidence rates)
of observation. All incidences or incidence rates will also be
stratified by cohort. We will further estimate the incidences and
incidence rates by age group (<65, 65-74, ≥75 [for all-cause
mortality only]), sex, and cohort entry year. To facilitate
comparison with previously published estimates, incidence will
be presented per 1000 persons and incidence rates will be
presented per 1000 person-years. For SCD, we will further
estimate the incidences and incidence rates by selecting
comorbidities (coronary heart disease [35,36,55,56] and diabetes
mellitus [55,57,58]). If feasible, to facilitate comparisons with
the literature, we will include analyses using multiple age
subgroups (eg, age subgroup 1: 45-54, 55-64, 65-74, 75-84, and
≥85 years; age subgroup 2: 45-46, 47-51, 52-56, 57-61, 62-66,
67-71, 72-74; and 45-54, 55-64, 65-74) [35,64].
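For clarity, the two measures reduce to: incidence per 1000 persons = 1000 × events / persons in cohort, and incidence rate per 1000 person-years = 1000 × events / person-years of observation. A minimal helper:

```python
def incidence_per_1000(events: int, n_persons: int) -> float:
    """Cumulative incidence, expressed per 1000 persons in the cohort."""
    return 1000.0 * events / n_persons

def incidence_rate_per_1000py(events: int, person_years: float) -> float:
    """Incidence rate, expressed per 1000 person-years of observation."""
    return 1000.0 * events / person_years
```

For example, 25 events among 5000 cohort members is an incidence of 5.0 per 1000 persons; the same 25 events over 12,500 person-years is a rate of 2.0 per 1000 person-years.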
Although medical records, autopsy reports, ambulance, or other
similar records might be used to validate death information
attained from the NDI, this type of evaluation is beyond the
scope of this study. If project timelines permit, we will consider
two other indirect approaches to evaluate the performance of
the NDI+ data linkage. The first strategy would involve
comparing rates of mortality and SCD with rates previously
reported in the literature. We will describe and examine the
incidences and incidence rates of mortality and SCD in the use
case cohorts and compare them with estimates previously
reported in the literature. This comparison will provide indirect
evidence for outcome definition accuracy. For all-cause
mortality, we will compare our estimated incidence rates with
those from the CDC Wonder data [65]. For SCD, we will
compare the incidence rates estimated in cohort 1 with the range
of incidence rates reported in the literature (Table 5). In general,
we will examine and compare the incidences and incidence rates
in cohort 2 with national data sources such as CDC Wonder
and studies included in the literature because such data sources
and studies focus on the overall population and are thus
comparable with our cohort 2.
**Table 3.** Baseline characteristics associated with users of antiarrhythmic medications (cohort 1) and among the average-risk population (cohort 2)
identified at participating health plans, 2000 to 2017 or latest health plan and National Death Index Plus data availability.
| Demographics | Cohort 1[a] | Cohort 2[a] |
| --- | --- | --- |
| **Age groups (<65, 65-74, ≥75)** | | |
| Mean age, in years (±SD) | N/A[c] | N/A |
| Median age, in years (±SD) | N/A | N/A |
| Sex, % female | N/A | N/A |
| **Health care utilization intensity measures during the baseline period** | | |
| #hospitalizations | N/A | N/A |
| #emergency department visits | N/A | N/A |
| #ambulatory care visits | N/A | N/A |
| #unique medications dispensed | N/A | N/A |
| **Comorbid conditions, identified during the baseline period** | | |
| Arrhythmia/conduction disorder, by type | N/A | N/A |
| Atrial fibrillation and flutter | N/A | N/A |
| Paroxysmal ventricular tachycardia | N/A | N/A |
| Ventricular fibrillation and flutter | N/A | N/A |
| Paroxysmal supraventricular tachycardia | N/A | N/A |
| Unspecified paroxysmal tachycardia | N/A | N/A |
| Premature beats | N/A | N/A |
| Other specified or unspecified cardiac dysrhythmia | N/A | N/A |
| Cerebrovascular disease | N/A | N/A |
| Coronary heart disease | N/A | N/A |
| Diabetes mellitus | N/A | N/A |
| Heart failure/cardiomyopathy | N/A | N/A |
| Cardioverter-defibrillator/pacemaker | N/A | N/A |
| Hyperlipidemia | N/A | N/A |
| Hypertension | N/A | N/A |
| Kidney disease | N/A | N/A |
| Circulatory system disease | N/A | N/A |
| Seizure disorder | N/A | N/A |
| Smoking[b] | N/A | N/A |
| Obesity[b] | N/A | N/A |
| **Charlson comorbidity score** | | |
| 0 | N/A | N/A |
| 1 | N/A | N/A |
| ≥2 | N/A | N/A |
| **Risk of Torsades de pointes (TdP), per CredibleMeds [28]** | | |
| Known risk | N/A | N/A |
| Possible risk | N/A | N/A |
| Conditional risk | N/A | N/A |
| To be avoided by congenital long QT patients | N/A | N/A |
aThis table represents planned study analyses, and cells are blank because analyses are not yet complete.
-----
JMIR RESEARCH PROTOCOLS Fuller et al
bAlthough these covariates are often not well-captured in claims data, given the importance of these factors we will include them with the understanding
under capture of these elements is expected within source data.
cN/A: Not yet available
**Table 4.** International Classification of Diseases, 9th Revision, Clinical Modification, diagnosis, and procedure codes for identifying comorbidities and other conditions.[a]

| Baseline table conditions | Codes |
| --- | --- |
| Atrial fibrillation and flutter | ICD-9[b]-CM: 427.31 and 427.32 |
| Paroxysmal ventricular tachycardia | ICD-9-CM: 427.1 |
| Ventricular fibrillation and flutter | ICD-9-CM: 427.4X |
| Paroxysmal supraventricular tachycardia | ICD-9-CM: 427.0 |
| Unspecified paroxysmal tachycardia | ICD-9-CM: 427.2 |
| Premature beats | ICD-9-CM: 427.6X |
| Other specified or unspecified cardiac dysrhythmia | ICD-9-CM: 427.8X or 427.9X |
| Cerebrovascular disease | ICD-9-CM: 430.X-432.X; 433.01, 433.11, 433.21, 433.31, 433.81, 433.91, 434.x, 436; 362.34, 433.00, 433.10, 433.20, 433.30, 433.80, 433.90, 435.x, 437.0, 437.1, 437.9, 438.x; 38.11, 38.12, 38.41, 38.42; 325.X, 437.6; 781.4, 784.3, 997.0 |
| Coronary heart disease [35,36,55,56] | ICD-9-CM: 410.XX, 412.XX, 412, 413.X, 414.XX |
| Diabetes mellitus [55,57,58] | ICD-9-CM: 250.XX |
| Heart failure/cardiomyopathy [35,59,60] | ICD-9-CM: 402.X1, 404.X1, 404.X3, 428.XX |
| Cardioverter-defibrillator/pacemaker | ICD-9-CM: 996.01, 996.04, V45.X, V53.31, V53.32; ICD-9-CM Volume 3 procedure codes: 00.50-00.54, 37.7X, 37.8X, 37.94, 37.95, 37.96, 37.97, 37.98, 89.45-89.49; CPT-4[c] Category II codes: 00530, 33200-33249, 33262-33264, 93280, 93288, 93294, 93296, 93297, 93640, 93641, 93642; CPT-4 Category III codes: 0319T-0328T; Healthcare Common Procedure Coding System (HCPCS) codes: C1721, C1722, C1777, C1779, C1785, C1786, C1882, C1895, C1896, C1898, C1899, C2619, C2620, C2621, E0610, E0615, E0617, G0297, G0298, G0299, G0300, G0448, K0606, K0607, K0608, K0609 |
| Hyperlipidemia | ICD-9-CM: 272.0X, 272.1X, 272.2X, 272.3X, 272.4X, 272.7X |
| Hypertension | ICD-9-CM: 401-405 (excluding 402.01, 402.11, 402.91) |
| Chronic kidney disease [58,61,62] | ICD-9-CM: 585.3, 585.4, 585.5 |
| Circulatory system disease, thereby capturing rheumatic fever, rheumatic heart disease, hypertensive disease, ischemic heart disease, diseases of pulmonary circulation, other heart disease, cerebrovascular disease, arterial disease, and venous disease | ICD-9-CM: 390.X-459.X |
| Seizure disorder | ICD-9-CM: 345x, 780.3x (not 780.31) |
| Smoking tobacco [55][e] | Presence of any of the following codes on any claim type: ICD-9-CM: 305.1, 649.0X, 989.84, V15.82; CPT-I: 83887, 99406, 99407; CPT-II: 1034F, 1035F, 4000F, 4001F, 4004F; HCPCS: C9801, C9802, G0375, G0376, G0436, G0437, G8093, G8094, G8402, G8403, G8453, G8454, G8455, G8456, G8688, G9016, S4990, S4991, S4995, S9075, S9453; NDC[d]: nicotine replacement, varenicline, Zyban (brand only) |
| Obesity [55,63][e] | ICD-9-CM: 278.0X |
| **Conditions included in the SCD[f] subgroup analyses** | |
| Coronary heart disease [35,36,55,56] | ICD-9-CM: 410.XX, 412.XX, 412, 413.X, 414.XX |
| Diabetes mellitus [55,57,58] | ICD-9-CM: 250.XX |

[a] Codes will be mapped to ICD-10-CM (ICD-10: International Classification of Diseases, 10th Revision) codes during the study.
[b] ICD-9-CM: International Classification of Diseases, 9th Revision, Clinical Modification.
[c] CPT-4: Current Procedural Terminology-4.
[d] NDC: National Drug Code.
[e] Although obesity and smoking are often not well captured in claims data, we will include them with the understanding that undercapture of these elements is expected within the source data.
[f] SCD: sudden cardiac death.
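The diagnosis-code lists in Table 4 translate naturally into prefix matching against claims data. The sketch below is illustrative only: the condition names, the abbreviated prefix lists, and the wildcard handling are simplifications for exposition, not the study's actual cohort identification program.

```python
# Hypothetical, abbreviated prefix lists; Table 4 "X"/"x" wildcards are
# represented by truncating each code to its fixed part.
COMORBIDITY_PREFIXES = {
    "atrial_fibrillation_flutter": ("427.31", "427.32"),
    "diabetes_mellitus": ("250.",),  # 250.XX
    "hyperlipidemia": ("272.0", "272.1", "272.2", "272.3", "272.4", "272.7"),
}

def flag_comorbidities(claim_codes):
    """Return the set of comorbidity names supported by any ICD-9-CM code."""
    return {
        name
        for name, prefixes in COMORBIDITY_PREFIXES.items()
        if any(code.startswith(p) for code in claim_codes for p in prefixes)
    }

print(sorted(flag_comorbidities(["427.31", "250.02", "428.0"])))
# → ['atrial_fibrillation_flutter', 'diabetes_mellitus']
```

A production program would also restrict matching to the baseline period and the appropriate claim types, and would carry the ICD-9-to-ICD-10 mapping noted in footnote a.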
**Table 5.** Published incidences or incidence rates of sudden cardiac death and all-cause mortality among users of antiarrhythmic medications and among the average-risk population.

| Patient characteristics | SCD events/risk: antiarrhythmic medication users[b] | SCD events/risk: average-risk population, without respect to antiarrhythmic use | All-cause mortality events/risk: antiarrhythmic medication users[a] | All-cause mortality events/risk: average-risk population[a] |
| --- | --- | --- | --- | --- |
| Overall | N/A | 0.5-1.5/1000 persons, Deo et al [66], Chugh et al [36], Straus et al [67] | N/A[c] | N/A |
| **Female** | N/A | Female<male, Zheng et al [43], Kannel et al [68], Stecker et al [37]; beginning at age 35, incidence increases monotonically until age 85 (Zheng et al [43], Chugh et al [36], Straus et al [67]) | N/A | N/A |
| 55-64 years | N/A | 1.0/1000 persons | N/A | N/A |
| 65-74 years | N/A | 2.8/1000 persons | N/A | N/A |
| **Male** | N/A | Male>female, Zheng et al [43], Kannel et al [59], Stecker et al [37]; beginning at age 35, incidence increases monotonically until age 85 (Zheng et al [43], Chugh et al [36], Straus et al [67]) | N/A | N/A |
| 45-54 years | N/A | 1.2/1000 persons | N/A | N/A |
| 55-64 years | N/A | 2.8/1000 persons | N/A | N/A |
| 65-74 years | N/A | 6.0/1000 persons | N/A | N/A |
| **Year** | N/A | Given that sudden cardiac death incidence declined from 1979-1998 [69], it may be reasonable to expect a small decline in incidence from 2001-2002 to 2009-2010. This is likely driven by a reduction in coronary heart disease. Yet, any small decline could be halted by the increasing incidence of heart failure [70] | N/A | N/A |
| 1990-1995 | N/A | 1.0/1000 person-years (for 1990s) [71] | N/A | N/A |
| 1996-1999 | N/A | 0.91-1.0/1000 persons [67] | N/A | N/A |
| 2000-2004 | N/A | 0.79/1000 persons [67] | N/A | N/A |
| 2005-2009 | N/A | N/A | N/A | N/A |
| 2010-2014 | N/A | N/A | N/A | N/A |
| 2015-2017 | N/A | N/A | N/A | N/A |
| **Comorbidities** | | | | |
| **Coronary heart disease** | N/A | 2-12X increased risk, Chugh et al [36], Kannel et al [56,59], Albert et al [72] | N/A | N/A |
| Presence | N/A | 4.6-25.1/1000 persons | N/A | N/A |
| Absence | N/A | 1.5-3.6/1000 persons | N/A | N/A |
| **Diabetes mellitus** | N/A | 2-3 times increased risk, Jouven et al [73,74], Albert et al [72], Vasiliadis et al [58]; 1.3/1000 person-years in sulfonylurea users, Leonard et al [75] | N/A | N/A |
| Presence | N/A | N/A | N/A | N/A |
| Absence | N/A | N/A | N/A | N/A |

[a] Estimates from CDC Wonder or other national death data sources.
[b] Estimates located at the time of protocol development were included; blank cells indicate no available information at the time of protocol development.
[c] N/A: not yet available.
The second strategy would be to examine the concordance
between NDI data and health plan death data. Several
participating health plans collect death information through
linkage with the state death records. If timeline and resources
permit, this project will attempt to identify time periods in which
death information is considered well populated within each
health plan and examine the concordance of this information
with information attained through linkage to NDI data. At health
plans that do not attain death information from state death
records, if timeline and resources permit, we will consider
examining discharge disposition (ie, discharged expired) for
in-hospital deaths included in health plan databases, and
comparing this information with NDI data. Although we expect
agreement between both data sources, such comparisons will
assist in any evaluations of matching with NDI data and would
also provide indirect evidence for accuracy (Table 6).
**Table 6.** Example concordance matrix, all-cause mortality (to be repeated for each health plan and time period of interest[a]).

| NDI[b] data | Health plan 1 death=yes[c] | Health plan 1 death=no[c] |
| --- | --- | --- |
| NDI death=yes | A | C |
| NDI death=no | B | D |

[a] Death data within the health plan databases are known to be incomplete. Time periods of interest will be time periods in which participating health plans are confident in the completeness of their death data. Additional stratifications, such as stratifying results by data source (eg, hospital discharge disposition), may be conducted.
[b] NDI: National Death Index.
[c] No gold standard; we can only describe concordance and discordance (ie, cells "a" and "d" are concordant, "b" and "c" discordant).
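Because neither source is a gold standard, only agreement and disagreement between the two can be described. A minimal sketch of summarizing the Table 6 cells (the counts below are hypothetical, not study data):

```python
def concordance_summary(a, b, c, d):
    """Summarize a 2x2 NDI-vs-health-plan death matrix: a and d are the
    concordant cells, b and c the discordant cells. With no gold standard,
    sensitivity/specificity are not computed, only raw agreement."""
    total = a + b + c + d
    return {
        "percent_agreement": (a + d) / total,
        "ndi_only_deaths": c,   # NDI death=yes, health plan death=no
        "plan_only_deaths": b,  # NDI death=no, health plan death=yes
    }

summary = concordance_summary(a=120, b=8, c=35, d=9837)
print(f"raw agreement: {summary['percent_agreement']:.4f}")  # 0.9957
```

In practice, such summaries would be produced per health plan and time period, with the discordant cell counts flagging incompleteness of health plan death capture (cell C) or potential NDI matching problems (cell B).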
##### Proposed Use Case Workflow
Below, we summarize a high-level overview of steps to execute
the use case.
1. Study team will finalize the following:
- Use case specifications
- Criteria for NDI patient record submission
- The limited set of identifiable data elements needed
for NDI+ matching
- Analytic plan
2. The HPHCI will develop a cohort identification program
that will query health plan databases formatted in the
Sentinel Common Data Model. This program will identify
individuals who meet the criteria for entry into the cohorts as
well as for matching with the NDI at the participating health
plans; the program will be distributed to participating health
plans for local execution.
3. Participating health plans will populate files to be sent
directly to the NDI from their operational data source with
the NDI-required patient identifiers (eg, name, date of birth,
age, social security number).
4. The HPHCI will develop a data quality assurance and check
program that will ensure that the data files to be sent to the
NDI are completely populated, meet NDI’s minimal criteria
as eligible for matching, and are correctly formatted. The
program will be distributed to participating health plans for
local execution.
5. Participating health plans will individually submit the
necessary quality-checked data files to the NDI.
6. The NDI will conduct matching activities and return files
to health plans.
7. The HPHCI will develop a program to remove all
identifiable data, identify matches to be saved, and create
analytic files with minimally necessary information from
health plan data and the NDI. The program will be
distributed to participating health plans for local execution.
8. The HPHCI will develop an analytic program to generate
information necessary to conduct the statistical analysis for
the use case. The program will be distributed to participating
health plans for local execution, and only summary-level
information will be shared between health plans and the
coordinating center.
9. The HPHCI will retrieve output produced by health plans
and complete the statistical analysis.
10. The HPHCI will lead the writing of the final project report
and standard operating procedures.
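Step 4's quality-assurance check can be pictured as record-level validation before submission. The field names, plausibility bounds, and rules below are hypothetical stand-ins for exposition; the NDI's actual file specification governs the real program:

```python
import re
from datetime import date

REQUIRED_FIELDS = ("last_name", "first_name", "date_of_birth")
SSN_RE = re.compile(r"^\d{9}$")  # SSN is optional but, if present, 9 digits

def check_record(rec):
    """Return a list of problems that would block submission of one record."""
    problems = [f"missing {f}" for f in REQUIRED_FIELDS if not rec.get(f)]
    dob = rec.get("date_of_birth")
    if dob and not (date(1880, 1, 1) <= dob <= date.today()):
        problems.append("implausible date_of_birth")
    ssn = rec.get("ssn")
    if ssn and not SSN_RE.match(ssn):
        problems.append("malformed ssn")
    return problems

rec = {"last_name": "Doe", "first_name": "Jane",
       "date_of_birth": date(1950, 6, 1), "ssn": "12345"}
print(check_record(rec))  # ['malformed ssn']
```

Distributing a single validation program to all health plans, as in the workflow above, ensures every site applies identical checks before its files reach the NDI.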
### Results
We will use the linked health plan and NDI+ data sets to
estimate the incidence and incidence rate of mortality and
specific causes of death within the use case and compare the
results with previously reported estimates. These comparisons
provide an opportunity to assess the performance of the
developed NDI+ linkage approach and to draw lessons for future
studies requiring NDI+ linkage in distributed database settings.
study is approved by the Harvard Pilgrim Health Care IRB in
Boston, MA. We will present results and the reusable NDI+
linkage approach to the FDA, at academic conferences, and
publish in peer-reviewed journals. We have attained NDI
approval and are summarizing the administrative processes that
we developed and implemented for use in other studies.
Currently, the study team is in the process of developing and
testing the distributed NDI+ linkage process as described above
and anticipates having initial results in early 2021.
### Discussion
##### Use Case Limitations
Given that the outcomes of death, SCD, and cardiovascular
death could be rare in the general population, large cohorts will
be required to adequately address the use case. Although we
anticipate potentially large available sample sizes within the
use case, estimates of incidences and incidence rates in small
subgroups may be imprecise. If it is not feasible to perform
linkage for all the identified individuals, we will develop a
sampling scheme that will still allow us to pilot the linkage
methods.
The incidences and incidence rates estimated from our study
may not be directly comparable with those reported in the
literature. For example, our proposed use case exclusion
conditions and matching of persons in cohort 2 with persons of
cohort 1 by age, sex, health plan, and index dates (thereby
making the population in cohort 2 more similar to the
antiarrhythmic medication users in cohort 1), may make our
population of interest different from other populations studied
previously. In addition, privately insured patients may have
lower mortality rates compared with the general population
owing to better health care access. Due to these anticipated
differences, the comparison between the incidences and
incidence rates derived from our study and the literature-reported
estimates will be performed qualitatively.
Some of the outcome algorithms used in this study have been
validated in other data sources but have not been validated
specifically within the participating health plan databases. For
example, the SCD algorithm by Chung et al [27] was originally
developed and implemented within a population including
Tennessee Medicaid recipients aged 30-74 years. While the
participating health plans in this study include mainly
commercially insured populations, Medicaid beneficiaries
included in the study by Chung et al may be different (eg, more
vulnerable, economically disadvantaged). However, in our study,
one participating health plan also provides Tennessee Medicaid
data, and thus analyses stratified by health plan may inform
potential population differences. In addition, the Chung et al
study relied on both death certificate data and state hospital
discharge data when developing a computerized algorithm to
identify SCD. Although not all information included in the
Chung et al study is available to participating health plans, the
selected algorithms can be adapted to utilize data elements
available within health plan data. The potential inability to
replicate validated computerized algorithms developed in other
data sources in their entirety is a study limitation.
Health plan disenrollment will be used as a proxy to select
individuals for linkage to NDI+ data. Most individuals who
disenroll from their health plans have not died but instead have
lost or changed their insurance coverage. If individuals in an
average-risk cohort are healthier and more likely to change
health insurance plans, they may have higher rates of
disenrollment than antiarrhythmic medication users. These
higher rates of disenrollment are unlikely to reflect death and
may lead to a disproportionate number of submissions to the
NDI that do not result in a death record. We expect that the
incidence of death and SCD will be low and disenrollment rates
will be high (approximately 20%-30% per year). Therefore, we
expect that our NDI+ data linkage activity will yield false
positives. However, given the goal of this project is to determine
an algorithm for identifying individuals to submit to NDI in
future studies, lessons learned concerning false positives during
analyses examining concordance between health plan death data
and NDI data as well as ways to refine the disenrollment
algorithm will inform future NDI+ data linkage studies.
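A back-of-envelope calculation makes the expected false-positive burden concrete. The rates come from the text (20%-30% annual disenrollment; roughly 1/1000 annual death incidence in an average-risk population), while the cohort size and the simplifying assumption that every death also appears as a disenrollment are ours:

```python
cohort_size = 100_000
annual_disenrollment = 0.25  # midpoint of the 20%-30% range in the text
annual_death_rate = 0.001    # ~1/1000 persons per year, average risk

submissions = cohort_size * annual_disenrollment  # all disenrollees sent to the NDI
true_deaths = cohort_size * annual_death_rate     # assumed to be a subset of disenrollees
match_yield = true_deaths / submissions

print(f"{submissions:.0f} submissions, ~{match_yield:.1%} expected to match a death record")
# → 25000 submissions, ~0.4% expected to match a death record
```

Under these illustrative numbers, the vast majority of disenrollment-triggered submissions would not correspond to a death record, which is exactly the false-positive pattern the protocol anticipates and plans to use to refine the submission algorithm.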
In general, study results will be highly dependent on the quality
of the NDI+ data linkage. Some identifiers that would be highly
desirable to use as keys for linkage may not be uniformly
available across all health plans. For example, provision of
social security number information to the NDI will likely
increase the number of correct matches. However, social security
number information is not always complete in health plans. A
lack of social security number submittal could result in a greater
number of multiple matches returned by the NDI, which requires
resolution and selection. The study team is designing strategies
to optimize the selection of the best match. However, regardless
of whether a social security number is submitted, it is possible
that an incorrect match could be selected. In addition, if personal
identifiers submitted by the health plans are incorrect,
mismatches between health plan and NDI+ data could also
occur. Such mismatches will most likely result in misclassifying
patients who are dead as alive (ie, unable to locate a death in
NDI+ data). The study team has anticipated these potential
issues and is designing quality assurance steps where possible.
To inform future studies, we will summarize lessons learned
about ways to maximize the quality of the NDI+ data linkage.
##### Study Strengths
The NDI is currently the best data source of death and
cause-of-death information for large-scale population-based
epidemiologic studies in the United States. We anticipate the
development of standardized processes to attain and analyze
death and cause-of-death information from the NDI will provide
avenues for multisite research networks to efficiently obtain
more complete death information. As many health plans that
participate in multisite research networks do not have complete
capture of out-of-hospital deaths or cause-of-death information,
the ability to efficiently attain this information from the NDI
may provide opportunities to answer a wider variety of
mortality-related research questions. We also anticipate that our
newly developed NDI+ linkage methods will enhance the FDA’s
ability to answer mortality-related safety questions in distributed
networks.
Although conducted independently of the Sentinel Initiative,
our study will leverage the infrastructure of a well-known
distributed network, the FDA Sentinel System [7,8], to develop
and test reusable administrative and technical processes for
linking multiple health plan databases with NDI+ data.
Leveraging the Sentinel System infrastructure will ensure that
health plan databases are standardized and research ready. As
our study sites are health plans that participate in the Sentinel
System, administrative processes or NDI+ data linkage programs
we will develop could be reused by the Sentinel System as well
as other multisite studies using distributed research networks.
As the Sentinel System publishes its common data model
publicly [7,8] and in some instances provides translation code
to help certain data sources with data conversion, other
researchers would have the ability to directly transform other
health plan databases into the Sentinel Common Data Model
and directly use any developed NDI+ data linkage programs
from this study for NDI+ data linkage. In addition, we will test
our newly developed NDI+ data linkage methods among a
diverse group of participating health plans (ie, national insurers,
regional health plans, and integrated delivery systems, which
cover both commercial and public insurance programs). We
anticipate that our testing will ensure that developed NDI+ data
linkage processes will be applicable to multiple settings.
Another strength of this study is our focus on developing a
distributed process for NDI+ data linkage in multisite research
studies. A distributed approach allows individual study sites to
maintain physical and operational control over their electronic
health data behind their respective firewalls, thus promoting
data sharing by protecting patient privacy, data security, and
proprietary interests [9-11]. We will develop methods that will
allow health plans to work directly with the NDI and eliminate
sharing of identifiable patient information between participating
health plans or the coordinating center.
Finally, we chose our antiarrhythmic medications use case to
robustly test the NDI+ data linkage processes within a cohort
at high risk of death (antiarrhythmic medication users) and a
cohort at average risk of death (nonusers matched by age and
sex to antiarrhythmic medication users). This use case should
provide sufficient sample sizes for patients who are dead and
alive. To indirectly validate our newly developed linkage
methods, we plan to examine the concordance between NDI
data and health plan death data as well as compare rates of
mortality and SCD with rates previously reported in the
literature. Information we will gather as part of these indirect
validation activities will provide some metrics for the
performance of our NDI+ data linkage methods.
##### Anticipated Study Contributions
We anticipate this project to provide future studies with a tested
administrative workflow that facilitates efficient, coordinated,
multicenter IRB review and approval for linking health plan
data with NDI+ data in accordance with the revised Common
Rule. We will also provide recommendations for completing a
successful NDI application, along with lessons learned that may
help future studies navigate the process more efficiently. We
will develop a standardized and reusable distributed technical
process for efficiently attaining and analyzing death and
cause-of-death information from the NDI across multiple health
plan databases without sharing protected health information
between health plans or with the coordinating center. Our study
will also provide considerations for determining which patients
to submit to the NDI for matching. We will leverage lessons
learned by developing and testing our NDI+ data linkage
methods with the goal of improving the ability to answer
mortality-related research questions within multisite studies
based in distributed data networks.
##### Acknowledgments
This project is supported by the US Department of Health and Human Services (HHS), Assistant Secretary of Planning and
Evaluation, Patient Centered Outcomes Research Trust Fund, through the Food and Drug Administration contract number
HHSF223301710132C, project titled, “A Reusable Method to Link Health Plan Data with the National Death Index Plus to
Examine the Associations Between Medical Products and Death and Causes of Death.” This paper reflects the views of the authors
and does not necessarily represent the FDA’s views or policies.
A previous mini-Sentinel project workgroup laid an important groundwork for this project and included the following members
and organizations: Steven Bird, Victor Crentsil, David Graham, Terry Harrison, Monika Houstoun, Stephanie Keeton, Susan Lu,
Katrina Mott, Rita Ouellet-Hellstrom, Simone Pinheiro, Marsha Reichman, Marry Ross Southworth, and Anne Tobenkin of
Office of Surveillance and Epidemiology, Center for Drug Evaluation and Research, US Food and Drug Administration; Eric
Frimpong and Margie Goulding of Harvard Pilgrim Health Care Institute; Sascha Dublin, Monica Fujii, Kristina Hansen, Jennifer
Nelson, and Robert Wellman of Group Health Research Institute; Susan Andrade of Meyers Primary Care Institute; Nancy Lin
of OptumInsight Life Sciences Inc; Todd Lee of University of Illinois at Chicago; Rajat Deo and Sean Hennessy of Center for
Pharmacoepidemiology Research and Training, Department of Biostatistics, Epidemiology, and Informatics, Perelman School
of Medicine, University of Pennsylvania; and James Floyd, Bruce Psaty and David Siscovik of University of Washington
Department of Biostatistics.
Furthermore, the authors acknowledge the helpful input and contributions to the current project as follows: Noelle Cocoros, Qoua
Her, April DuCott, Matthew Lakoma, Christine Draper, Zilu Zhang, Elizabeth Dee, and Susan Forrow of Harvard Pilgrim Health
Care Institute; Jacqueline M Major, Deloris Willis, Carla Walls, Denise Jones, and Rita Noel of Office of Surveillance and
Epidemiology, Center for Drug Evaluation and Research, US Food and Drug Administration; Sonal Singh of Meyers Primary
Care Institute; and Samantha Soprano of Center for Pharmacoepidemiology Research and Training, Department of Biostatistics,
Epidemiology, and Informatics, Perelman School of Medicine, University of Pennsylvania.
##### Authors' Contributions
CF collaborated with coauthors on the study design and wrote the protocol. All authors reviewed and approved the final manuscript.
##### Conflicts of Interest
CEL serves on the Executive Committee of the University of Pennsylvania's Center for Pharmacoepidemiology Research and
Training. The Center receives funds for education from Pfizer and Sanofi. He recently received honoraria from the American
College of Clinical Pharmacy Research Institute and the University of Florida College of Pharmacy. CEL's research is funded
by the American Diabetes Association, Food and Drug Administration, and National Institutes of Health. CEL is a Special
Government Employee of the Food and Drug Administration.
##### Multimedia Appendix 1
Proposed National Death Index submission criteria to be used to determine which individuals will be initially selected for sending
to the NDI, thereby obtaining death and cause-of-death information.
[[PPTX File, 47 KB-Multimedia Appendix 1]](https://jmir.org/api/download?alt_name=resprot_v9i11e21811_app1.pptx&filename=59ff06a74eda5b2ae6c058ef52185d80.pptx)
##### Multimedia Appendix 2
Operational definitions of outcomes of interest in the use case.
[[PPTX File, 53 KB-Multimedia Appendix 2]](https://jmir.org/api/download?alt_name=resprot_v9i11e21811_app2.pptx&filename=79cf22c6c4145d471310513d02934596.pptx)
##### References
1. Questions and Answers on FDA's Adverse Event Reporting System (FAERS). US Food and Drug Administration. 2018.
[URL: https://www.fda.gov/drugs/surveillance/questions-and-answers-fdas-adverse-event-reporting-system-faers [accessed](https://www.fda.gov/drugs/surveillance/questions-and-answers-fdas-adverse-event-reporting-system-faers)
2018-03-28]
2. Colman E, Szarfman A, Wyeth J, Mosholder A, Jillapalli D, Levine J, et al. An evaluation of a data mining signal for
amyotrophic lateral sclerosis and statins detected in FDA's spontaneous adverse event reporting system. Pharmacoepidemiol
[Drug Saf 2008 Nov;17(11):1068-1076. [doi: 10.1002/pds.1643] [Medline: 18821724]](http://dx.doi.org/10.1002/pds.1643)
3. Wysowski DK, Swartz L. Adverse drug event surveillance and drug withdrawals in the United States, 1969-2002: The
importance of reporting suspected reactions. Arch Intern Med 2005 Jun 27;165(12):1363-1369. [doi:
[10.1001/archinte.165.12.1363] [Medline: 15983284]](http://dx.doi.org/10.1001/archinte.165.12.1363)
4. Wong CK, Ho SS, Saini B, Hibbs DE, Fois RA. Standardisation of the FAERS database: a systematic approach to manually
[recoding drug name variants. Pharmacoepidemiol Drug Saf 2015 Jul;24(7):731-737. [doi: 10.1002/pds.3805] [Medline:](http://dx.doi.org/10.1002/pds.3805)
[26017154]](http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&list_uids=26017154&dopt=Abstract)
5. Moore TJ, Cohen MR, Furberg CD. Serious adverse drug events reported to the food and drug administration, 1998-2005.
[Arch Intern Med 2007 Sep 10;167(16):1752-1759. [doi: 10.1001/archinte.167.16.1752] [Medline: 17846394]](http://dx.doi.org/10.1001/archinte.167.16.1752)
6. Weiss-Smith S, Deshpande G, Chung S, Gogolak V. The FDA drug safety surveillance program: adverse event reporting
[trends. Arch Intern Med 2011 Mar 28;171(6):591-593. [doi: 10.1001/archinternmed.2011.89] [Medline: 21444854]](http://dx.doi.org/10.1001/archinternmed.2011.89)
7. Platt R, Brown JS, Robb M, McClellan M, Ball R, Nguyen M, et al. The FDA sentinel initiative- an evolving national
[resource. N Eng J Med 2018;379(9):2091-2093. [doi: 10.1056/NEJMp1809643] [Medline: 30485777]](http://dx.doi.org/10.1056/NEJMp1809643)
8. Behrman RE, Benner JS, Brown JS, McClellan M, Woodcock J, Platt R. Developing the sentinel system--a national resource
[for evidence development. N Engl J Med 2011 Feb 10;364(6):498-499. [doi: 10.1056/NEJMp1014427] [Medline: 21226658]](http://dx.doi.org/10.1056/NEJMp1014427)
9. Toh S, Platt R, Steiner JF, Brown JS. Comparative-effectiveness research in distributed health data networks. Clin Pharmacol
[Ther 2011;90(6):883-887. [doi: 10.1038/clpt.2011.236] [Medline: 22030567]](http://dx.doi.org/10.1038/clpt.2011.236)
10. Maro JC, Platt R, Holmes JH, Strom BL, Hennessy S, Lazarus R, et al. Design of a national distributed health data network.
[Ann Intern Med 2009 Sep 1;151(5):341-344. [doi: 10.7326/0003-4819-151-5-200909010-00139] [Medline: 19638403]](http://dx.doi.org/10.7326/0003-4819-151-5-200909010-00139)
11. Brown JS, Holmes JH, Shah K, Hall K, Lazarus R, Platt R. Distributed health data networks: a practical and preferred
approach to multi-institutional evaluations of comparative effectiveness, safety, and quality of care. Med Care 2010 Jun;48(6
[Suppl):S45-S51. [doi: 10.1097/MLR.0b013e3181d9919f] [Medline: 20473204]](http://dx.doi.org/10.1097/MLR.0b013e3181d9919f)
12. Her Q, Malenfant J, Zhang Z, Vilk Y, Young J, Tabano D, et al. Distributed regression analysis application in large distributed
[data networks: analysis of precision and operational performance. JMIR Med Inform 2020 Jun 4;8(6):e15073 [FREE Full](https://medinform.jmir.org/2020/6/e15073/)
[text] [doi: 10.2196/15073] [Medline: 32496200]](https://medinform.jmir.org/2020/6/e15073/)
13. [Fact sheet: national death index. National Center for Health Statistics. 2018. URL: https://www.cdc.gov/nchs/about/factsheets/](https://www.cdc.gov/nchs/about/factsheets/factsheet_ndi.htm)
[factsheet_ndi.htm [accessed 2018-03-28]](https://www.cdc.gov/nchs/about/factsheets/factsheet_ndi.htm)
14. da Graca B, Filardo G, Nicewander D. Consequences for healthcare quality and research of the exclusion of records from
[the death master file. Circ Cardiovasc Qual Outcomes 2013;6(1):124-128. [doi: 10.1161/CIRCOUTCOMES.112.968826]](http://dx.doi.org/10.1161/CIRCOUTCOMES.112.968826)
[[Medline: 23322808]](http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&list_uids=23322808&dopt=Abstract)
15. Brooks EG, Reed KD. Principles and pitfalls: a guide to death certification. Clin Med Res 2015 Jun;13(2):74-82; quiz 83-4.
[[doi: 10.3121/cmr.2015.1276] [Medline: 26185270]](http://dx.doi.org/10.3121/cmr.2015.1276)
16. [Possible Solutions to Common Problems in Death Certification. National Center for Health Statistics. URL: https://www.](https://www.cdc.gov/nchs/nvss/)
[cdc.gov/nchs/nvss/ [accessed 2017-06-24]](https://www.cdc.gov/nchs/nvss/)
17. Hanzlick R. The conversion of coroner systems to medical examiner systems in the United States: a lull in the action. Am
[J Forensic Med Pathol 2007 Dec;28(4):279-283. [doi: 10.1097/PAF.0b013e31815b4d5a] [Medline: 18043011]](http://dx.doi.org/10.1097/PAF.0b013e31815b4d5a)
18. Ives DG, Samuel P, Psaty BM, Kuller LH. Agreement between nosologist and cardiovascular health study review of deaths:
[implications of coding differences. J Am Geriatr Soc 2009 Jan;57(1):133-139 [FREE Full text] [doi:](http://europepmc.org/abstract/MED/19016930)
[10.1111/j.1532-5415.2008.02056.x] [Medline: 19016930]](http://dx.doi.org/10.1111/j.1532-5415.2008.02056.x)
-----
JMIR RESEARCH PROTOCOLS Fuller et al
19. Lakkireddy DR, Basarakodu KR, Vacek JL, Kondur AK, Ramachandruni SK, Esterbrooks DJ, et al. Improving death certificate completion: a trial of two training interventions. J Gen Intern Med 2007 Apr;22(4):544-548 [FREE Full text] [doi: 10.1007/s11606-006-0071-6] [Medline: 17372807]
20. Kircher T, Anderson RE. Cause of death. Proper completion of the death certificate. J Am Med Assoc 1987 Jul 17;258(3):349-352. [Medline: 3599328]
21. Folsom AR, Gomez-Marin O, Gillum RF, Kottke TE, Lohman W, Jacobs DJ. Out-of-hospital coronary death in an urban population--validation of death certificate diagnosis. The Minnesota heart survey. Am J Epidemiol 1987 Jun;125(6):1012-1018. [doi: 10.1093/oxfordjournals.aje.a114617] [Medline: 3578243]
22. Lenfant C, Friedman L, Thom T. Fifty years of death certificates: the Framingham heart study. Ann Intern Med 1998 Dec 15;129(12):1066-1067. [doi: 10.7326/0003-4819-129-12-199812150-00013] [Medline: 9867763]
23. Olubowale OT, Safford MM, Brown TM, Durant RW, Howard VJ, Gamboa C, et al. Comparison of expert adjudicated coronary heart disease and cardiovascular disease mortality with the national death index: results from the reasons for geographic and racial differences in stroke (REGARDS) study. J Am Heart Assoc 2017 May 3;6(5). [doi: 10.1161/JAHA.116.004966] [Medline: 28468785]
24. Sathiakumar N, Delzell E, Abdalla O. Using the national death index to obtain underlying cause of death codes. J Occup Environ Med 1998 Sep;40(9):808-813. [doi: 10.1097/00043764-199809000-00010] [Medline: 9777565]
25. Miniño A, Klein R. Mortality From Major Cardiovascular Diseases: United States, 2007. National Center for Health Statistics. 2010. URL: https://www.cdc.gov/nchs/data/hestat/cardio2007/cardio2007.htm [accessed 2020-07-31]
26. Kodadhala V, Obi J, Wessly P, Mehari A, Gillum RF. Asthma-related mortality in the United States, 1999 to 2015: a multiple causes of death analysis. Ann Allergy Asthma Immunol 2018 Jun;120(6):614-619. [doi: 10.1016/j.anai.2018.03.005] [Medline: 29548908]
27. Chung CP, Murray KT, Stein CM, Hall K, Ray WA. A computer case definition for sudden cardiac death. Pharmacoepidemiol Drug Saf 2010 Jun;19(6):563-572 [FREE Full text] [doi: 10.1002/pds.1888] [Medline: 20029823]
28. Combined list of all QT drugs and the list of drugs to avoid for patients with congenital long QT syndrome. CredibleMeds. 2017. URL: https://www.crediblemeds.org/index.php [accessed 2017-06-22]
29. Torres V, Flowers D, Somberg JC. The arrhythmogenicity of antiarrhythmic agents. Am Heart J 1985 May;109(5 Pt 1):1090-1097. [doi: 10.1016/0002-8703(85)90253-4] [Medline: 3993517]
30. Cowan JC, Bourke J, Campbell RWF. Arrhythmogenic effects of antiarrhythmic drugs. Eur Heart J 1987 Mar;8(Suppl A):133-136. [doi: 10.1093/eurheartj/8.suppl_a.133] [Medline: 3582392]
31. Oliver MF. Metabolic causes and prevention of ventricular fibrillation during acute coronary syndromes. Am J Med 2002 Mar;112(4):305-311. [doi: 10.1016/s0002-9343(01)01104-4] [Medline: 11893370]
32. Greene HL. Sudden arrhythmic cardiac death--mechanisms, resuscitation and classification: the Seattle perspective. Am J Cardiol 1990 Jan 16;65(4):4B-12B. [doi: 10.1016/0002-9149(90)91285-e] [Medline: 2404396]
33. Yap YG, Camm AJ. Drug induced QT prolongation and torsades de pointes. Heart 2003 Nov;89(11):1363-1372. [doi: 10.1136/heart.89.11.1363] [Medline: 14594906]
34. Lazzara R. Antiarrhythmic drugs and torsade de pointes. Eur Heart J 1993 Nov;14(Suppl H):88-92. [doi: 10.1093/eurheartj/14.suppl_h.88] [Medline: 8293758]
35. Chugh SS, Reinier K, Teodorescu C, Evanado A, Kehr E, Al Samara M, et al. Epidemiology of sudden cardiac death: clinical and research implications. Prog Cardiovasc Dis 2008;51(3):213-228 [FREE Full text] [doi: 10.1016/j.pcad.2008.06.003] [Medline: 19026856]
36. Chugh SS, Jui J, Gunson K, Stecker EC, John BT, Thompson B, et al. Current burden of sudden cardiac death: multiple source surveillance versus retrospective death certificate-based review in a large US community. J Am Coll Cardiol 2004 Sep 15;44(6):1268-1275 [FREE Full text] [doi: 10.1016/j.jacc.2004.06.029] [Medline: 15364331]
37. Stecker EC, Reinier K, Marijon E, Narayanan K, Teodorescu C, Uy-Evanado A, et al. Public health burden of sudden cardiac death in the United States. Circ Arrhythm Electrophysiol 2014 Apr;7(2):212-217 [FREE Full text] [doi: 10.1161/CIRCEP.113.001034] [Medline: 24610738]
38. Electronic Code of Federal Regulations: Title 45: Subtitle A, Subchapter C, Part 160. URL: https://www.ecfr.gov/cgi-bin/text-idx?tpl=/ecfrbrowse/Title45/%2045cfr160_main_02.tpl [accessed 2018-04-18]
39. Federal Policy for the Protection of Human Subjects ('Common Rule'). Office for Human Research Protections. URL: https://www.hhs.gov/ohrp/ [accessed 2017-06-22]
40. Federalwide Assurance (FWA) for the Protection of Human Subjects: 45 CFR 46. Office for Human Research Protection. URL: https://www.hhs.gov/ohrp/register-irbs-and-obtain-fwas/fwas/fwa-protection-of-human-subjecct/index.html [accessed 2020-07-31]
41. Criteria To Be Applied In Approving National Death Index Applications. Centers for Disease Control and Prevention. 2017. URL: https://www.cdc.gov/nchs/data/ndi/ndi_approval_criteria.pdf [accessed 2018-04-13]
42. National Death Index: User's Guide. Centers for Disease Control and Prevention. 2019. URL: https://www.cdc.gov/nchs/data/ndi/ndi_users_guide.pdf [accessed 2019-06-06]
43. Zheng ZJ, Croft JB, Giles WH, Mensah GA. Sudden cardiac death in the United States, 1989 to 1998. Circulation 2001 Oct 30;104(18):2158-2163. [doi: 10.1161/hc4301.098254] [Medline: 11684624]
44. Carnes C. Antiarrhythmic drug classification. In: Billman GE, editor. Novel Therapeutic Targets for Antiarrhythmic Drugs. Hoboken, New Jersey: Wiley; 2010:155-170.
45. Campbell TJ, Vaughan Williams EM, editors. Classification of antiarrhythmic actions. In: Antiarrhythmic Drugs. New York, USA: Springer; 1989:45-67.
46. Cohort Identification and Descriptive Analysis (CIDA) Module. Sentinel Initiative. URL: https://dev.sentinelsystem.org/projects/SENTINEL/repos/sentinel-routine-querying-tool-documentation/browse/files/file118-type02-overlap.md [accessed 2018-04-11]
47. Curtis LH, Weiner MG, Boudreau DM, Cooper WO, Daniel GW, Nair VP, et al. Design considerations, architecture, and use of the mini-sentinel distributed data system. Pharmacoepidemiol Drug Saf 2012 Jan;21(Suppl 1):23-31. [doi: 10.1002/pds.2336] [Medline: 22262590]
48. CDC Wonder. Centers for Disease Control and Prevention. URL: https://wonder.cdc.gov/ [accessed 2020-07-31]
49. Hennessy S, Leonard CE, Freeman CP, Deo R, Newcomb C, Kimmel SE, et al. Validation of diagnostic codes for outpatient-originating sudden cardiac death and ventricular arrhythmia in medicaid and medicare claims data. Pharmacoepidemiol Drug Saf 2010 Jun;19(6):555-562 [FREE Full text] [doi: 10.1002/pds.1869] [Medline: 19844945]
50. Ray WA, Murray KT, Hall K, Arbogast PG, Stein CM. Azithromycin and the risk of cardiovascular death. N Engl J Med 2012 May 17;366(20):1881-1890. [doi: 10.1056/nejmoa1003833] [Medline: 22591294]
51. Hennessy S, Leonard C, Newcomb C, Kimmel S, Bilker W. Cisapride and ventricular arrhythmia. Br J Clin Pharmacol 2008 Sep;66(3):375-385 [FREE Full text] [doi: 10.1111/j.1365-2125.2008.03249.x] [Medline: 18662288]
52. Trac MH, McArthur E, Jandoc R, Dixon SN, Nash DM, Hackam DG, et al. Macrolide antibiotics and the risk of ventricular arrhythmia in older adults. Can Med Assoc J 2016 Apr 19;188(7):E120-E129 [FREE Full text] [doi: 10.1503/cmaj.150901] [Medline: 26903359]
53. Health information policy council. 1984 revision of the Uniform Hospital Discharge Data Set--HHS. Notice. Federal Register 1985;50(147):31038-31040. [Medline: 10272121]
54. Faurot KR, Jonsson Funk M, Pate V, Brookhart MA, Patrick A, Hanson LC, et al. Using claims data to predict dependency in activities of daily living as a proxy for frailty. Pharmacoepidemiol Drug Saf 2015 Jan;24(1):59-66 [FREE Full text] [doi: 10.1002/pds.3719] [Medline: 25335470]
55. Leonard CE, Freeman CP, Razzaghi H, Carnahan RM, Chrischilles EA, Andrade SE, et al. Mini-Sentinel Methods: 15 Cohorts of Interest for Surveillance Preparedness. Sentinel Initiative. 2014. URL: https://www.sentinelinitiative.org/sites/default/files/Methods/Mini-Sentinel_Methods_15-Cohorts-of-Interest-for-Surveillance-Preparedness_0.pdf [accessed 2020-07-31]
56. Kannel WB, Schatzkin A. Sudden death: lessons from subsets in population studies. J Am Coll Cardiol 1985 Jun;5(6 Suppl):141B-149B [FREE Full text] [doi: 10.1016/s0735-1097(85)80545-3] [Medline: 3889106]
57. Kucharska-Newton AM, Couper DJ, Pankow JS, Prineas RJ, Rea TD, Sotoodehnia N, et al. Diabetes and the risk of sudden cardiac death, the atherosclerosis risk in communities study. Acta Diabetol 2010 Dec;47(Suppl 1):161-168 [FREE Full text] [doi: 10.1007/s00592-009-0157-9] [Medline: 19855920]
58. Vasiliadis I, Kolovou G, Mavrogeni S, Nair D, Mikhailidis D. Sudden cardiac death and diabetes mellitus. J Diabetes Complications 2014;28(4):573-579. [doi: 10.1016/j.jdiacomp.2014.02.003] [Medline: 24666923]
59. Kannel WB, Plehn JF, Cupples LA. Cardiac failure and sudden death in the Framingham study. Am Heart J 1988 Apr;115(4):869-875. [doi: 10.1016/0002-8703(88)90891-5] [Medline: 3354416]
60. Toh D, Reichman ME, Houstoun M, Ross Southworth M, Ding X, Hernandez A, et al. Mini-Sentinel Medical Product Assessment: Signal Refinement of Angioedema Events in Association With Use of Drugs That Act on the Renin-Angiotensin-Aldosterone System Report. Sentinel Initiative. 2012. URL: https://www.sentinelinitiative.org/sites/default/files/Drugs/Assessments/Mini-Sentinel_Angioedema-and-RAAS_Final-Report.pdf [accessed 2020-07-31]
61. Pun PH, Smarz TR, Honeycutt EF, Shaw LK, Al-Khatib SM, Middleton JP. Chronic kidney disease is associated with increased risk of sudden cardiac death among patients with coronary artery disease. Kidney Int 2009 Sep;76(6):652-658 [FREE Full text] [doi: 10.1038/ki.2009.219] [Medline: 19536082]
62. Shamseddin MK, Parfrey PS. Sudden cardiac death in chronic kidney disease: epidemiology and prevention. Nat Rev Nephrol 2011 Mar;7(3):145-154. [doi: 10.1038/nrneph.2010.191] [Medline: 21283136]
63. Quan H, Li B, Saunders LD, Parsons GA, Nilsson CI, Alibhai A, et al. Assessing validity of ICD-9-CM and ICD-10 administrative data in recording clinical conditions in a unique dually coded database. Health Serv Res 2008 Aug;43(4):1424-1441 [FREE Full text] [doi: 10.1111/j.1475-6773.2007.00822.x] [Medline: 18756617]
64. Becker LB, Han BH, Meyer PM, Wright FA, Rhodes KV, Smith DW, et al. Racial differences in the incidence of cardiac arrest and subsequent survival. The CPR Chicago project. N Engl J Med 1993 Aug 26;329(9):600-606. [doi: 10.1056/NEJM199308263290902] [Medline: 8341333]
65. Friede A, Reid JA, Ory HW. CDC wonder: a comprehensive on-line public health information system of the centers for disease control and prevention. Am J Public Health 1993 Sep;83(9):1289-1294. [doi: 10.2105/ajph.83.9.1289] [Medline: 8395776]
66. Deo R, Albert CM. Epidemiology and genetics of sudden cardiac death. Circulation 2012 Jan 31;125(4):620-637 [FREE Full text] [doi: 10.1161/CIRCULATIONAHA.111.023838] [Medline: 22294707]
67. Straus S, Bleumink G, Dieleman J, van der Lei J, Stricker B, Sturkenboom M. The incidence of sudden cardiac death in the general population. J Clin Epidemiol 2004 Jan;57(1):98-102. [doi: 10.1016/S0895-4356(03)00210-5] [Medline: 15019016]
68. Kannel WB, Wilson PW, D'Agostino RB, Cobb J. Sudden coronary death in women. Am Heart J 1998 Aug;136(2):205-212. [doi: 10.1053/hj.1998.v136.90226] [Medline: 9704680]
69. Goraya TY, Jacobsen SJ, Kottke TE, Frye RL, Weston SA, Roger VL. Coronary heart disease death and sudden cardiac death: a 20-year population-based study. Am J Epidemiol 2003 May 1;157(9):763-770. [doi: 10.1093/aje/kwg057] [Medline: 12727669]
70. Zipes DP, Wellens HJ. Sudden cardiac death. Circulation 1998 Nov 24;98(21):2334-2351. [doi: 10.1161/01.cir.98.21.2334] [Medline: 9826323]
71. Fox CS, Evans JC, Larson MG, Kannel WB, Levy D. Temporal trends in coronary heart disease mortality and sudden cardiac death from 1950 to 1999: the Framingham Heart Study. Circulation 2004;110:522-527. [doi: 10.1161/01.cir.0000136993.34344.41]
72. Albert CM, Chae CU, Grodstein F, Rose LM, Rexrode KM, Ruskin JN, et al. Prospective study of sudden cardiac death among women in the United States. Circulation 2003 Apr 29;107(16):2096-2101. [doi: 10.1161/01.CIR.0000065223.21530.11] [Medline: 12695299]
73. Jouven X, Desnos M, Guerot C, Ducimetière P. Predicting sudden death in the population: the Paris prospective study I. Circulation 1999 Apr 20;99(15):1978-1983. [doi: 10.1161/01.cir.99.15.1978] [Medline: 10209001]
74. Jouven X, Lemaître RN, Rea T, Sotoodehnia N, Empana J, Siscovick D. Diabetes, glucose level, and risk of sudden cardiac death. Eur Heart J 2005 Oct;26(20):2142-2147. [doi: 10.1093/eurheartj/ehi376] [Medline: 15980034]
75. Leonard CE, Brensinger CM, Aquilante CL, Bilker WB, Boudreau DM, Deo R, et al. Comparative safety of sulfonylureas and the risk of sudden cardiac arrest and ventricular arrhythmia. Diabetes Care 2018 Apr;41(4):713-722 [FREE Full text] [doi: 10.2337/dc17-0294] [Medline: 29437823]
##### Abbreviations
**FAERS:** FDA Adverse Event Reporting System
**FDA:** United States Food and Drug Administration
**FWA:** Federalwide Assurance
**HPHCI:** Harvard Pilgrim Health Care Institute
**ICD-9:** International Classification of Diseases, 9th Revision
**ICD-10:** International Classification of Diseases, 10th Revision
**IRB:** institutional review board
**NCHS:** National Center for Health Statistics
**NDI:** National Death Index
**NDI+:** National Death Index Plus
**SCD:** sudden cardiac death
**SSA:** Social Security Administration
_Edited by G Eysenbach; submitted 29.06.20; peer-reviewed by N Mohammad Gholi Mezerji, G Luo; comments to author 22.07.20;_
_revised version received 04.08.20; accepted 11.08.20; published 02.11.20_
_Please cite as:_
_Fuller CC, Hua W, Leonard CE, Mosholder A, Carnahan R, Dutcher S, King K, Petrone AB, Rosofsky R, Shockro LA, Young J, Min_
_JY, Binswanger I, Boudreau D, Griffin MR, Adgent MA, Kuntz J, McMahill-Walraven C, Pawloski PA, Ball R, Toh S_
_Developing a Standardized and Reusable Method to Link Distributed Health Plan Databases to the National Death Index: Methods_
_Development Study Protocol_
_JMIR Res Protoc 2020;9(11):e21811_
_URL: https://www.researchprotocols.org/2020/11/e21811_
_doi: 10.2196/21811_
_PMID: 33136063_
©Candace C Fuller, Wei Hua, Charles E Leonard, Andrew Mosholder, Ryan Carnahan, Sarah Dutcher, Katelyn King, Andrew
B Petrone, Robert Rosofsky, Laura A Shockro, Jessica Young, Jea Young Min, Ingrid Binswanger, Denise Boudreau, Marie R
Griffin, Margaret A Adgent, Jennifer Kuntz, Cheryl McMahill-Walraven, Pamala A Pawloski, Robert Ball, Sengwee Toh.
Originally published in JMIR Research Protocols (http://www.researchprotocols.org), 02.11.2020. This is an open-access article
distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which
permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR
Research Protocols, is properly cited. The complete bibliographic information, a link to the original publication on
http://www.researchprotocols.org, as well as this copyright and license information must be included.
-----
|
{
"disclaimer": "Notice: Paper or abstract available at https://pmc.ncbi.nlm.nih.gov/articles/PMC7669437, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "CCBY",
"status": "GREEN",
"url": "https://www.researchprotocols.org/2020/11/e21811/PDF"
}
| 2,020
|
[
"JournalArticle",
"Review"
] | true
| 2020-06-29T00:00:00
|
[
{
"paperId": "f4e14494ec4af6e76c4938da5d7efcdf32043e3d",
"title": "Distributed Regression Analysis Application in Large Distributed Data Networks: Analysis of Precision and Operational Performance."
},
{
"paperId": "e27d85e597dd9200a8896f466bcbc743df79ab60",
"title": "Cohorts"
},
{
"paperId": "d0fcfbc72409201dca9ae93d226b95999814e523",
"title": "Identifying the DEAD: Development and Validation of a Patient-Level Model to Predict Death Status in Population-Level Claims Data"
},
{
"paperId": "344dd86e2aa296a7d27666891b3154faeb532470",
"title": "The FDA Sentinel Initiative - An Evolving National Resource."
},
{
"paperId": "d073e8c3141921095640c8c1780e2427031c96ef",
"title": "Asthma-related mortality in the United States, 1999 to 2015: A multiple causes of death analysis."
},
{
"paperId": "ee217c984c5e3d9f5835fdc99439e47a348b23ce",
"title": "Comparative Safety of Sulfonylureas and the Risk of Sudden Cardiac Arrest and Ventricular Arrhythmia"
},
{
"paperId": "6a62af2ad35d099c8b88eab49cbdb3f9a3220707",
"title": "Comparison of Expert Adjudicated Coronary Heart Disease and Cardiovascular Disease Mortality With the National Death Index: Results From the REasons for Geographic And Racial Differences in Stroke (REGARDS) Study"
},
{
"paperId": "1df7c49e5aab991b7596dfde7099481361f3cf7b",
"title": "Macrolide antibiotics and the risk of ventricular arrhythmia in older adults"
},
{
"paperId": "06cfffcb8ba0b6a119aa9202d9514d19b1dff082",
"title": "Wonder"
},
{
"paperId": "720ae345dd3de3b434496d23f7490a725a409452",
"title": "Standardisation of the FAERS database: a systematic approach to manually recoding drug name variants"
},
{
"paperId": "ad5373d6efdcc96a0ee89419deb0727ea40300dc",
"title": "Principles and Pitfalls: a Guide to Death Certification"
},
{
"paperId": "f863831afa6e1b6454ae56c54adb2acf7e7b34d6",
"title": "Serious adverse Drug events reported to the Food and Drug Administration (FDA): analysis of the FDA adverse event reporting system (FAERS) 2006-2011 database"
},
{
"paperId": "a76bcce666abcbdffb9d7b10d110fd44f4a317a1",
"title": "Using claims data to predict dependency in activities of daily living as a proxy for frailty"
},
{
"paperId": "0ca5ccae80b865b81520ace232c5caf4cf4813c9",
"title": "Sudden cardiac death and diabetes mellitus."
},
{
"paperId": "4781b37690e939ebeb894f99b3da0150f4ac0657",
"title": "Public Health Burden of Sudden Cardiac Death in the United States"
},
{
"paperId": "5d5594a3f4b6ef8e85a55fce593a4f6393bad846",
"title": "Federal Policy for the Protection of Human Subjects (“Common Rule”)"
},
{
"paperId": "3d42707360cc0bb042fbeef357c57567d4ba6bf8",
"title": "Azithromycin and the risk of cardiovascular death."
},
{
"paperId": "25dbc840cfc8b99c23bb2e03cb28b6f552e43af0",
"title": "Epidemiology and genetics of sudden cardiac death."
},
{
"paperId": "5740b4153e6d657a5cf0f1da4d1cefa23ed4604c",
"title": "Design considerations, architecture, and use of the Mini‐Sentinel distributed data system"
},
{
"paperId": "c58fe7e67056d464b50068d70b04cb2f047391a5",
"title": "Comparative‐Effectiveness Research in Distributed Health Data Networks"
},
{
"paperId": "62157c97c31b3288d5cecc54f1fdb05a32d6765a",
"title": "The FDA drug safety surveillance program: adverse event reporting trends."
},
{
"paperId": "8da66e4e0cdb5e55ab3732b92c829799e0a7f8ed",
"title": "Sudden cardiac death in chronic kidney disease: epidemiology and prevention"
},
{
"paperId": "4f06147f953f87ac2da3fbbbe23f3030a163e7b4",
"title": "Developing the Sentinel System--a national resource for evidence development."
},
{
"paperId": "6270222e604249221b35a75bb1acca6d1801a02d",
"title": "Diabetes and the risk of sudden cardiac death, the Atherosclerosis Risk in Communities study"
},
{
"paperId": "fbe59690018d70b3d219b03ca621cde98c93bb88",
"title": "Distributed Health Data Networks: A Practical and Preferred Approach to Multi-Institutional Evaluations of Comparative Effectiveness, Safety, and Quality of Care"
},
{
"paperId": "d8d98b45c92131b6b741e52109860f875f026f53",
"title": "Novel Therapeutic Targets for Antiarrhythmic Drugs"
},
{
"paperId": "d491bea3aa86a539b266e48be8dff9ee380845ca",
"title": "A computer case definition for sudden cardiac death"
},
{
"paperId": "93ae326100202011d8ab9c5d28d9b5433ff760ce",
"title": "Antiarrhythmic Drug Classification"
},
{
"paperId": "76b6a090b08c691aaf01dff0f15fe24344df4cc2",
"title": "Validation of diagnostic codes for outpatient‐originating sudden cardiac death and ventricular arrhythmia in Medicaid and Medicare claims data"
},
{
"paperId": "67564241281a4c9248b75b51eb3833c17383852d",
"title": "Chronic kidney disease is associated with increased risk of sudden cardiac death among patients with coronary artery disease."
},
{
"paperId": "236d7409e0000481fe95625acf24570f4816fd8f",
"title": "Design of a National Distributed Health Data Network"
},
{
"paperId": "db88241272c9b9b2c6e2c80e00a2ce13cbbdee06",
"title": "Agreement Between Nosologist and Cardiovascular Health Study Review of Deaths: Implications of Coding Differences"
},
{
"paperId": "26a9140eaf95a4c1812cd63166c326206d81f542",
"title": "An evaluation of a data mining signal for amyotrophic lateral sclerosis and statins detected in FDA's spontaneous adverse event reporting system"
},
{
"paperId": "1d901dce29904b8717672954b4122298df8ef022",
"title": "Epidemiology of sudden cardiac death: clinical and research implications."
},
{
"paperId": "2caa53cb3a4ee54464dce73f541ee2d8db2a96b1",
"title": "Cisapride and ventricular arrhythmia."
},
{
"paperId": "c2f5443a2dfd7e7efb747a2fb0b81b1624db6111",
"title": "Assessing validity of ICD-9-CM and ICD-10 administrative data in recording clinical conditions in a unique dually coded database."
},
{
"paperId": "260a3da5de23197e0916146a8faac2d3ed6c5fce",
"title": "The conversion of coroner systems to medical examiner systems in the United States: a lull in the action."
},
{
"paperId": "9901b98dd02d8ed10bc3fa3bf891e81958ca49e3",
"title": "Improving Death Certificate Completion: A Trial of Two Training Interventions"
},
{
"paperId": "d8410ca7ef94267f649647d8d3efff25db98d4e6",
"title": "Diabetes, glucose level, and risk of sudden cardiac death."
},
{
"paperId": "d1c8da80fa69b051a1337feaf66d4426986c6c50",
"title": "Adverse drug event surveillance and drug withdrawals in the United States, 1969-2002: the importance of reporting suspected reactions."
},
{
"paperId": "36ff70ae8dedfcc7447276cb769bad3e97137c06",
"title": "Current burden of sudden cardiac death: multiple source surveillance versus retrospective death certificate-based review in a large U.S. community."
},
{
"paperId": "483f947ff9974a6aeb4148a17032c76aa7e2b64a",
"title": "Temporal Trends in Coronary Heart Disease Mortality and Sudden Cardiac Death From 1950 to 1999: The Framingham Heart Study"
},
{
"paperId": "c127b594d46f01a7b09cb0c57dc46d41e43a8a5b",
"title": "Drug induced QT prolongation and torsades de pointes"
},
{
"paperId": "018bb6553ea3b5e906c16c14a53fb2aaabd49276",
"title": "Coronary heart disease death and sudden cardiac death: a 20-year population-based study."
},
{
"paperId": "4b173e8b0c64e0ae621a18781a4d724c973561ee",
"title": "Prospective Study of Sudden Cardiac Death Among Women in the United States"
},
{
"paperId": "1824a9bae62b993966ae08569d9217cef7f191cc",
"title": "The paradox of human subjects protection in research: some thoughts on and experiences with the Federalwide Assurance Program."
},
{
"paperId": "9ff7ad4b5e0b8edc3e5bff35a305c5190904baa9",
"title": "Metabolic causes and prevention of ventricular fibrillation during acute coronary syndromes."
},
{
"paperId": "a480a9711e4f89b96e0c20a67e9d8294b4192693",
"title": "Sudden Cardiac Death in the United States, 1989 to 1998"
},
{
"paperId": "7db9bd6cae02394706742441c5c63f6ebee26ff2",
"title": "Predicting sudden death in the population: the Paris Prospective Study I."
},
{
"paperId": "25b018a7aaa7e92e7655cd65f3d51371af5e8bdc",
"title": "Fifty Years of Death Certificates: The Framingham Heart Study"
},
{
"paperId": "49b6b3692e620da603b1d524bcd173da7021a2c0",
"title": "Using the National Death Index to obtain underlying cause of death codes."
},
{
"paperId": "d91f91968d5386d5fef7d6e354aad1fc092175da",
"title": "Sudden coronary death in women."
},
{
"paperId": "421fe4ea3cf2f1778cbb95d073c5a0154d2de0ae",
"title": "Sudden cardiac death."
},
{
"paperId": "91672bb082e8904a46a2d1a83504b21456106049",
"title": "Antiarrhythmic drugs and torsade de pointes."
},
{
"paperId": "830e152471a45ef2a91364e8a674db69711130b9",
"title": "Federal Policy for the Protection of Human Subjects"
},
{
"paperId": "940afa78e411c1782ef3b2c752d0978ded1d89c9",
"title": "CDC WONDER: a comprehensive on-line public health information system of the Centers for Disease Control and Prevention."
},
{
"paperId": "2c6a27297b252f5f3e3506d62c601b56a0d6eb40",
"title": "Racial differences in the incidence of cardiac arrest and subsequent survival. The CPR Chicago Project."
},
{
"paperId": "45df1745bb8918bf9ca260f6b3470bab3ada79b7",
"title": "Sudden arrhythmic cardiac death--mechanisms, resuscitation and classification: the Seattle perspective."
},
{
"paperId": "6072364872c243cf929914c93c871760d1a235db",
"title": "Cardiac failure and sudden death in the Framingham Study."
},
{
"paperId": "d6b94bbbc810f7dc717f5b51f8d35efd4d50563a",
"title": "Cause of death. Proper completion of the death certificate."
},
{
"paperId": "b9217ce913537e06f4229b605c02943b0b92edf8",
"title": "Out-of-hospital coronary death in an urban population--validation of death certificate diagnosis. The Minnesota Heart Survey."
},
{
"paperId": "1e4734e6abb92cf13696f08c61f8318c0807dd39",
"title": "Arrhythmogenic effects of antiarrhythmic drugs."
},
{
"paperId": "3d1a685121e72500571deb044fb6242b60412cb4",
"title": "The arrhythmogenicity of antiarrhythmic agents."
},
{
"paperId": "67e92ee1dd5dde8836e9919ef190732d8cca2624",
"title": "Asthma: Lessons from Epidemiology"
},
{
"paperId": null,
"title": "National death Index: User's Guide"
},
{
"paperId": null,
"title": "and Answers on FDA's Adverse Event Reporting System (FAERS)"
},
{
"paperId": null,
"title": "Fact sheet: national death index. National Center for Health Statistics"
},
{
"paperId": "88db714904749fb3412e35aa2d1cdf97373bb05c",
"title": "MINI-SENTINEL MEDICAL PRODUCT ASSESSMENT SIGNAL REFINEMENT OF ANGIOEDEMA EVENTS IN ASSOCIATION WITH USE OF DRUGS THAT ACT ON THE RENIN-ANGIOTENSIN-ALDOSTERONE SYSTEM REPORT"
},
{
"paperId": null,
"title": "Criteria To Be Applied In Approving National Death Index Applications"
},
{
"paperId": "b06bee0ba410dd8fea35ba7d0512ff53564f459e",
"title": "From the Centers for Disease Control and Prevention"
},
{
"paperId": null,
"title": "Mini-Sentinel Methods: 15 Cohorts of Interest for Surveillance Preparedness"
},
{
"paperId": "25041603097bbd2fd808f1eb3f52e5be754c5e5b",
"title": "Consequences for healthcare quality and research of the exclusion of records from the Death Master File."
},
{
"paperId": "7ec5d33b16bb6b66cb60d08ba8a1a44f8dd9b1fe",
"title": "Mortality From Major Cardiovascular Diseases: United States, 2007"
},
{
"paperId": "15e7ab07a1c690b22a8a4c1a9219f51630dcbf12",
"title": "The incidence of sudden cardiac death in the general population."
},
{
"paperId": "e09b1f06a450694e7669288ca6fe3a35075c7be7",
"title": "Classification of Antiarrhythmic Actions"
},
{
"paperId": "edcdc17b8cd45157b76b45bf1aec6b523aa93332",
"title": "Health information policy council; 1984 revision of the Uniform Hospital Discharge Data Set--HHS. Notice."
},
{
"paperId": null,
"title": "Possible Solutions to Common Problems in Death Certification"
},
{
"paperId": null,
"title": "Combined list of all QT drugs and the list of drugs to avoid for patients with congenital long QT syndrome"
},
{
"paperId": null,
"title": "Research Protocols, is properly cited. The complete bibliographic information"
},
{
"paperId": null,
"title": "FWA) for the Protection of Human Subjects: 45 CFR 46. Office for Human Research Protection"
},
{
"paperId": null,
"title": "Electronic Code of Federal Regulations: Title 45: Subtitle A, Subchapter C, Part 160"
},
{
"paperId": null,
"title": "Health plan disenrollment (gaps of enrollment <45 days will be ignored)"
},
{
"paperId": null,
"title": "Participating health plans will individually submit the necessary quality-checked data files to the NDI"
},
{
"paperId": null,
"title": "Initiation of an antiarrhythmic medication of interest; the day before the date of medication initiation will be the last day of follow-up (cohort 2 only)"
},
{
"paperId": null,
"title": "The NDI will conduct matching activities and return files to health plans"
},
{
"paperId": null,
"title": "The HPHCI will develop a cohort identification program that will query health plan databases formatted in the Sentinel Common Data Model"
},
{
"paperId": null,
"title": "Death or specific causes of death, as determined from NDI+ data; date of death will be the last day of follow-up (both"
},
{
"paperId": null,
"title": "Cohort Identification and Descriptive Analysis ( CIDA ) Module"
}
] | 28,323
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Engineering",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/fffe570263c29c449eb56acec6308f206a85ca94
|
[
"Computer Science"
] | 0.862142
|
Decentralized Algorithm for Randomized Task Allocation in Fog Computing Systems
|
fffe570263c29c449eb56acec6308f206a85ca94
|
IEEE/ACM Transactions on Networking
|
[
{
"authorId": "3124970",
"name": "Sladana Jošilo"
},
{
"authorId": "143996776",
"name": "G. Dán"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"IEEE ACM Trans Netw",
"IEEE ACM Transactions on Networking",
"IEEE/ACM Trans Netw"
],
"alternate_urls": [
"https://ieeexplore.ieee.org/xpl/RecentIssue.jsp?punumber=90",
"https://ieeexplore.ieee.org/servlet/opac?punumber=90"
],
"id": "b1aea3ab-edf0-430b-a9c2-cce5469f6b23",
"issn": "1063-6692",
"name": "IEEE/ACM Transactions on Networking",
"type": "journal",
"url": "http://portal.acm.org/ton/"
}
|
Fog computing is identified as a key enabler for using various emerging applications by battery powered and computationally constrained devices. In this paper, we consider devices that aim at improving their performance by choosing to offload their computational tasks to nearby devices or to an edge cloud. We develop a game theoretical model of the problem and use a variational inequality theory to compute an equilibrium task allocation in static mixed strategies. Based on the computed equilibrium strategy, we develop a decentralized algorithm for allocating the computational tasks among nearby devices and the edge cloud. We use the extensive simulations to provide insight into the performance of the proposed algorithm and compare its performance with the performance of a myopic best response algorithm that requires global knowledge of the system state. Despite the fact that the proposed algorithm relies on average system parameters only, our results show that it provides a good system performance close to that of the myopic best response algorithm.
|
# Decentralized Algorithm for Randomized Task Allocation in Fog Computing Systems
## Slađana Jošilo and György Dán, School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, Stockholm, Sweden. E-mail: {josilo, gyuri}@kth.se
**_Abstract—Fog computing is identified as a key enabler for using various emerging applications by battery powered and computationally constrained devices. In this paper, we consider devices that aim at improving their performance by choosing to offload their computational tasks to nearby devices or to an edge cloud. We develop a game theoretical model of the problem, and we use variational inequality theory to compute an equilibrium task allocation in static mixed strategies. Based on the computed equilibrium strategy, we develop a decentralized algorithm for allocating the computational tasks among nearby devices and the edge cloud. We use extensive simulations to provide insight into the performance of the proposed algorithm, and we compare its performance with the performance of a myopic best response algorithm that requires global knowledge of the system state. Despite the fact that the proposed algorithm relies on average system parameters only, our results show that it provides good system performance close to that of the myopic best response algorithm._**
**_Index terms— computation offloading, fog computing,_**
Nash equilibria, decentralized algorithms
I. INTRODUCTION
Fog computing is widely recognized as a key component
of 5G networks and an enabler of the Internet of Things
(IoT) [1], [2]. The concept of fog computing extends
the traditional centralized cloud computing architecture by
allowing devices not only to use computing and storage
resources of centralized clouds, but also resources distributed across the network including the resources of each
other and resources located at the network edge [3].
Traditional centralized cloud computing allows devices
to offload the computation to a cloud infrastructure with
significant computational power [4],[5], [6]. Cloud offloading may indeed accelerate the execution of applications,
but it may suffer from high communication delays, on
the one hand due to the contention of devices for radio
spectrum, on the other hand due to the remoteness of
the cloud infrastructure. Thus, traditional centralized cloud
computing may not be able to meet the delay requirements
of emerging IoT applications [7], [8], [9], [10].
Fog computing addresses this problem by allowing
collaborative computation offloading among nearby devices and distributed cloud resources close to the network
edge [11]. The benefits of collaborative computation offloading are twofold. First, collaboration among devices
can make use of device-to-device (D2D) communication,
and thereby it can improve spectral efficiency and free
up radio resources for other purposes [12], [13], [14].
Second, the proximity of devices to each other can enable
low communication delays. (The work was partly funded by the Swedish Research Council through project 621-2014-6.) Thus, fog computing allows to
explore the tradeoff between traditional centralized cloud
offloading, which ensures low computing time, but may
suffer from high communication delay, and collaborative
computation offloading, which ensures low communication delay, but may involve higher computing times.
One of the main challenges facing the design of fog
computing systems is how to manage fog resources efficiently. Compared to traditional centralized cloud computing, where a device only needs to decide whether to
offload the computation of a task, in the case of fog
computing the number of offloading choices increases with
the number of devices. Furthermore, today’s devices are
heterogeneous in terms of computational capabilities, in
terms of what tasks they have to execute and how often.
At the same time, some devices may be autonomous, and
hence they would be interested in minimizing their own
perceived completion times. Therefore, developing low
complexity algorithms for efficient task allocation among
nearby devices is an inherently challenging problem.
In this paper we address this problem by considering a
fog computing system, where devices can choose either to
perform their computation locally, to offload the computation to a nearby device, or to offload the computation to
an edge cloud. We provide a game theoretical model of
the completion time minimization problem. We show that
an equilibrium task allocation in static mixed strategies
always exists, i.e., if devices can choose at random whether
to offload, and where to offload. Based on the game
theoretical model we propose a decentralized algorithm
that relies on average system parameters, and allocates
the tasks according to a Nash equilibrium in static mixed
strategies. We use the algorithm to address the important
question whether efficient task allocation is feasible using
an algorithm that requires low signaling overhead, and
we compare the performance achieved by the proposed
algorithm with the performance of a myopic best response
algorithm that requires global knowledge of the system
state. Our results show that the proposed decentralized
algorithm, despite significantly lower signaling overhead,
provides good system performance close to that of the
myopic best response algorithm.
The rest of the paper is organized as follows. We
present the system model in Section II. We present two
algorithms in Sections III and IV. In Section V we present
numerical results and in Section VI we review related
work. Section VII concludes the paper.
Fig. 1. Fog computing system that consists of 6 devices and an edge cloud.
II. SYSTEM MODEL AND PROBLEM FORMULATION
We consider a fog computing system that consists of a set $\mathcal{N} = \{1, 2, \dots, N\}$ of devices, and an edge cloud. Device $i \in \mathcal{N}$ generates a sequence $(t_{i,1}, t_{i,2}, \dots)$ of computational tasks. We consider that the size $D_{i,k}$ (e.g., in bytes) of task $t_{i,k}$ of device $i$ can be modeled by a random variable $D_i$, and the number of CPU cycles $L_{i,k}$ required to perform the task by a random variable $L_i$. According to results reported in [15], [16], [17] the number $X_i$ of CPU cycles per data bit can be approximated by a Gamma distribution, and thus we can model the relation between $L_i$ and $D_i$ as $L_i = D_i X_i$. Furthermore, assuming that the first moment $\overline{X}_i$ and the second moment $\overline{X^2_i}$ of $X_i$ can be estimated based on the past, the statistics of the number of CPU cycles required to perform the task of device $i$ can be easily obtained. Similar to other works [18], [19], [20], we assume that the task arrival process of device $i$ can be modeled by a Poisson process with arrival intensity $\lambda_i$.

For each task $t_{i,k}$ device $i$ can decide whether to perform the task locally, to offload it to a device $j \in \mathcal{N} \setminus \{i\}$, or to offload it to an edge cloud. Thus, device $i$ chooses a member of the set $\mathcal{N} \cup \{0\}$, where $0$ corresponds to the edge cloud. We allow for randomized policies, and we denote by $p_{i,j}(k)$ the probability that device $i$ assigns its task $t_{i,k}$ to $j \in \mathcal{N} \cup \{0\}$, and we define the probability vector $p_i(k) = \{p_{i,0}(k), p_{i,1}(k), \dots, p_{i,N}(k)\}$, where $\sum_{j \in \mathcal{N} \cup \{0\}} p_{i,j}(k) = 1$. Finally, we denote by $\mathcal{P}$ the set of probability distributions over $\mathcal{N} \cup \{0\}$, i.e., $p_i(k) \in \mathcal{P}$.
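To make the task model concrete, the following Python sketch samples tasks for one device under the assumptions above (Poisson arrivals, Gamma-distributed cycles per bit, randomized offloading over $\mathcal{N} \cup \{0\}$). All numeric parameter values, and the exponential task-size distribution, are illustrative assumptions of ours, not values from the paper.

```python
import random

random.seed(0)

# Illustrative parameters (assumed, not from the paper).
lam_i = 2.0               # task arrival intensity lambda_i (tasks/s)
mean_D = 4e5              # mean task size D_i in bits (exponential, for illustration)
shape, scale = 2.0, 50.0  # Gamma parameters for X_i, the CPU cycles per bit
p_i = [0.5, 0.3, 0.2]     # offloading probabilities over nodes {0 (cloud), 1, 2}

K = 5
arrival_times, D, X, L, dest = [], [], [], [], []
t = 0.0
for _ in range(K):
    t += random.expovariate(lam_i)         # Poisson arrival process
    arrival_times.append(t)
    d = random.expovariate(1.0 / mean_D)   # task size D_{i,k}
    x = random.gammavariate(shape, scale)  # cycles per bit X_i ~ Gamma
    D.append(d)
    X.append(x)
    L.append(d * x)                        # L_{i,k} = D_{i,k} * X_{i,k}
    dest.append(random.choices(range(3), weights=p_i)[0])  # randomized offloading
```

Each sampled task thus carries a size, a cycle requirement, and a randomly drawn destination node, mirroring the randomized policy $p_i(k)$.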
The above fog computing system relies on the assumption that all devices faithfully execute the tasks offloaded
to them. To ensure this, the devices need to be incentivized
to collaborate in executing each others’ computational
tasks, as discussed in [21]. The collaboration among
devices in fog computing systems can be ensured with an
adequate incentive scheme similar to those used in peer-to-peer systems [22], [23], [24]. These schemes ensure the collaboration among the peers through a reputation-based trust supporting mechanism. In the context of fog
computing systems, the mechanism would result in an
incentive scheme in which only devices that process
offloaded tasks themselves are entitled to offload the tasks.
_A. Communication model_

We consider that the devices communicate using an orthogonal frequency division multiple access (OFDMA) framework in which there is an assignment of subcarriers to pairs of communicating nodes [25], [26]. Furthermore, we consider that devices use dedicated bandwidth resources, i.e., node-to-node pairs do not share the bandwidth with each other and with the other cellular users [25]. This can be implemented by assigning an orthogonal subcarrier per transmission direction for each pair of communicating nodes, resulting in $N \times N$ subcarriers in total. We denote the transmission rate from device $i$ to device $j$ by $R_{i,j}$, and the transmission rate from device $i$ to the edge cloud through a base station by $R_{i,0}$. Each device maintains $N$ transmission queues, i.e., $N-1$ queues for transmitting to devices $j \in \mathcal{N} \setminus \{i\}$ and one for transmitting to the edge cloud, and the tasks are transmitted in FIFO order.

We consider that the time $T^t_{i,j}(k)$ needed to transmit a task $t_{i,k}$ from device $i$ to $j \in \mathcal{N} \cup \{0\}$ is proportional to its size $D_{i,k}$, and is given by

$$T^t_{i,j}(k) = D_{i,k}/R_{i,j}.$$

Furthermore, the time $T^d_{i,j}(k)$ needed to deliver the input data $D_{i,k}$ from device $i$ to $j \in \mathcal{N} \cup \{0\}$ is the sum of the transmission time $T^t_{i,j}(k)$ and of the waiting time (if any). Similar to other works [27], [28], [29], [30], we consider that the time needed to transmit the results of the computation back to the device is negligible. This assumption is justified for many applications including face and object recognition, and anomaly detection, where the size of the result of the computation is much smaller than the size of the input data.
Observe that our system model can accommodate systems in which certain devices $i \in \mathcal{N}$ only serve for performing the computational tasks of others, by setting the arrival intensity $\lambda_i = 0$. These devices can be considered as micro-data centers located at the network edge, whose function in fog computing systems is to perform the computational tasks of the other devices [31], [32]. Furthermore, our system model can accommodate systems in which certain devices $j \in \mathcal{N}$ are not supposed to perform the computational tasks of others, by setting the transmission rates $R_{i,j}$ from the other devices $i \in \mathcal{N} \setminus \{j\}$ to device $j$ to low enough values.

Figure 1 illustrates a fog computing system that consists of six devices and one edge cloud; device 1 and device 2 offload their tasks through a base station to the cloud server, device 4 offloads its tasks to device 2, device 5 offloads its task to device 3 that serves as a micro-data center, and device 6 performs computation locally.
_B. Computation model_

To model the time that is needed to compute a task in a device $i$, we consider that each device $i$ maintains one execution queue with tasks served in FIFO order. We denote by $F_i$ the computational capability of device $i$. Unlike devices, the cloud server has a large number of processors with computational capability $F_0$ each, and we assume that computing in the edge cloud begins immediately upon arrival of a task.

Similar to common practice [21], [27] we consider that the time $T^c_{i,j}(k)$ needed to compute a task $t_{i,k}$ on $j \in \mathcal{N} \cup \{0\}$ is proportional to its complexity $L_{i,k}$, and is given by

$$T^c_{i,j}(k) = L_{i,k}/F_j.$$
Fig. 2. Fog computing system modeled as a queuing network.
Furthermore, the execution time $T^e_{i,j}(k)$ of a task $t_{i,k}$ on device $j$ is the sum of the computation time $T^c_{i,j}(k)$ and of the waiting time (if any). Figure 2 illustrates the queuing model of a computation offloading system.
_C. Problem formulation_
We define the cost $C_i$ of device $i$ as the mean completion time of its tasks. Given a sequence $(t_{i,1}, t_{i,2}, \dots)$ of computational tasks, we can thus express the cost $C_i$ as

$$C_i = \lim_{K \to \infty} \frac{1}{K} \sum_{k=1}^{K} \Big[ p_{i,i}(k)\, T^e_{i,i}(k) + \sum_{j \in \mathcal{N} \setminus \{i\} \cup \{0\}} p_{i,j}(k) \big( T^d_{i,j}(k) + T^e_{i,j}(k) \big) \Big]. \quad (1)$$

Since the devices are autonomous, we consider that each device aims at minimizing its cost by solving

$$\min\ C_i \quad \text{s.t.} \quad (2)$$
$$p_i(k) \in \mathcal{P}. \quad (3)$$
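Equation (1) averages completion times over the task sequence; for a finite trace of $K$ observed tasks, an empirical estimator of $C_i$ can be sketched as follows (the function and argument names are ours; the inputs are assumed per-task observations):

```python
def empirical_cost(i, assignments, exec_times, deliver_times):
    """Empirical estimate of the cost C_i in eq. (1): the average
    completion time over K observed tasks of device i.

    assignments[k]: node j chosen for task t_{i,k} (0 is the edge cloud).
    exec_times[k]: observed execution time T^e_{i,j}(k).
    deliver_times[k]: observed delivery time T^d_{i,j}(k) (ignored if j == i).
    """
    total = 0.0
    for j, te, td in zip(assignments, exec_times, deliver_times):
        # Local execution pays no delivery time; offloading pays both terms.
        total += te if j == i else td + te
    return total / len(assignments)
```

As $K$ grows, this trace average converges to the limit in (1) for a fixed randomized policy.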
Since devices’ decisions affect each other, the devices play
a dynamic non-cooperative game, and we refer to the game
as the multi user computation offloading game (MCOG).
The game is closest to an undiscounted stochastic game
with countably infinite state space, but the system state
evolves according to a semi-Markov chain (instead of a
Markov chain, depending on the distribution of Di and
_Li) and payoffs (the completion times) are unbounded. We_
are not aware of existence results for Markov equilibria for
this class of problem, and even for the case when the state
evolves according to a Markov chain with countable state
space and unbounded payoffs, there are only a few results
on the existence of equilibria in Markov strategies [33],
[34], [35].
_D. Decentralized solution supported by a centralized entity_
Since fog computing architecture is decentralized in nature, and devices in fog computing systems are expected to
be autonomous [11], [36] we are interested in developing
decentralized algorithms that will allow devices to make
their offloading decisions locally. Motivated by widely
considered hierarchical fog computing architectures [37],
[38], we consider that there is a single central entity
with a high level of hierarchy that collects and stores the
information about the fog computing system. The entity
p_i(k) = MyopicBestResponse(t_{i,k})
 1: p_{i,j}(k) = 0, ∀j ∈ N ∪ {0}
 2: /* Estimate completion time of t_{i,k} on every j ∈ N ∪ {0} */
 3: for j = 0, . . . , N do
 4:   if j = i then
 5:     ECompleteT(j) = T^e_{i,j}(k)
 6:   else
 7:     ECompleteT(j) = T^d_{i,j}(k) + T^e_{i,j}(k)
 8:   end if
 9: end for
10: /* Make a greedy decision */
11: i′ ← arg min_{j ∈ N ∪ {0}} ECompleteT(j)
12: p_{i,i′}(k) = 1
13: return p_i(k)
Fig. 3. Pseudo code of myopic best response.
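A direct Python transcription of the pseudo code in Fig. 3 might look as follows; the function and argument names are our own, and the completion-time estimates are assumed to be supplied from the instantaneous queue state.

```python
def myopic_best_response(i, exec_local, deliver, exec_remote):
    """One-hot probability vector p_i(k) over nodes 0..N (Fig. 3).

    exec_local: estimated local completion time T^e_{i,i}(k).
    deliver[j] + exec_remote[j]: estimated delivery plus execution time
    on node j, where index 0 is the edge cloud.
    """
    est = []
    for j in range(len(deliver)):
        if j == i:
            est.append(exec_local)                   # lines 4-5 of Fig. 3
        else:
            est.append(deliver[j] + exec_remote[j])  # line 7
    p = [0.0] * len(est)
    p[min(range(len(est)), key=est.__getitem__)] = 1.0  # greedy pick, lines 11-12
    return p
```

For instance, `myopic_best_response(1, 0.5, [2.0, 0.0, 1.0], [0.3, 0.0, 0.4])` keeps the task local because 0.5 beats both 2.3 (cloud) and 1.4 (the other device).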
need not be a single physical entity, but a single logically
centralized entity that can handle high loads and can be
resilient to failure.
Furthermore, we consider that the entity periodically
sends the needed information to the devices and thus
supports them in making their offloading decisions. Intuitively, more information about the system state will allow
devices to make better offloading decisions, but at the cost
of increased signaling overhead. Therefore, one important
objective when developing decentralized algorithms for
allocating the computational tasks is to achieve good
system performance at the cost of an acceptable signaling
overhead. With this in mind, in what follows we propose
and discuss two decentralized solutions for the MCOG
problem in the form of a Markov strategy and in static
mixed strategies, respectively.
III. MYOPIC BEST RESPONSE
The first algorithm we consider, called Myopic Best
_Response (MBR), requires global knowledge of the system_
state, but decisions are made locally at the devices. Similar
to the WaterFilling algorithm proposed in [39], in the MBR
algorithm every device i makes a decision based on a
myopic best response strategy, i.e., every device i chooses
a node $j \in \mathcal{N} \cup \{0\}$ that minimizes the completion time of its task $t_{i,k}$, given the instantaneous state of the queuing
network. The pseudo-code for computing the myopic best
response strategy is shown in Figure 3. Note that since the
devices make their decisions based on the instantaneous
states of the queues, they do not take into account the tasks
that may arrive to the other devices’ execution queues
while transmitting a task. Furthermore, if the devices’
execution queues were stable if all devices perform all
tasks locally, then under the MBR algorithm the queue
lengths do not grow unbounded since each device chooses
the destination node based on the instantaneous state of the
queues.
Note that if we define the system state upon the arrival
of task ti,k as the number of jobs in the transmission
and execution queues, then the devices’ decisions depend
on the instantaneous system state only, and hence the
myopic best response is a Markov strategy for the MCOG.
Nonetheless, it is not necessarily a Markov perfect equilibrium.
device $i$ as a function of the strategy profile $(p_i)_{i \in \mathcal{N}}$, i.e., the mean completion time of its tasks in steady state. Throughout the section we denote by $\overline{D}_i$ and $\overline{D^2_i}$ the first and the second moment of $D_i$, respectively, and by $\overline{L}_i$ and $\overline{L^2_i}$ the first and the second moment of $L_i$, respectively.
_A. Transmission time in steady state_
Fig. 4. State transition diagram of the semi-Markov process induced by
the offloading decisions for the single device case (N = 1).
In a system with $N$ devices we have $N \times N$ transmission queues and $N+1$ execution queues, and we can thus model the system as an $N \times (N+1) + 1$ dimensional semi-Markov process.
**Example 1.** Figure 4 shows the state transition diagram for a single device, i.e., $N = 1$, which is three dimensional. We use the triplet $(n_l, n_t, n_0)$ to denote the system state, where $n_l$, $n_t$ and $n_0$ stand for the number of tasks in the local execution queue, the number of tasks in the transmission queue and the number of tasks in the cloud server, respectively. Since $N = 1$, a device only needs to decide whether to offload the computation to the edge cloud or to perform the computation locally, and hence the transition intensities from state $(n_l, n_t, n_0)$ to state $(n_l, n_t + 1, n_0)$ and from state $(n_l, n_t, n_0)$ to state $(n_l + 1, n_t, n_0)$ are $(1 - p_{1,1})\lambda_1$ and $p_{1,1}\lambda_1$, respectively. In the case of computation offloading, the task with size $D_1$ and complexity $L_1$ needs to be transmitted to the edge cloud at rate $R_{1,0}$ and executed with computational capability $F_0$, and thus the transition intensities from state $(n_l, n_t, n_0)$ to state $(n_l, n_t - 1, n_0 + 1)$ and from state $(n_l, n_t, n_0)$ to state $(n_l, n_t, n_0 - 1)$ are $\mu^T_{1,0} = \overline{D}_1/R_{1,0}$ and $\mu^E_{1,0} = n_0 \overline{L}_1/F_0$, respectively. Finally, in the case of local execution the task with complexity $L_1$ needs to be executed locally with local computational capability $F_1$, and hence the transition intensity from state $(n_l, n_t, n_0)$ to state $(n_l - 1, n_t, n_0)$ is $\mu^E_1 = \overline{L}_1/F_1$.
A significant detriment of the MBR algorithm is its
signaling overhead, as it requires global information about
the system state upon the arrival of each task. To reduce
the signaling requirements, in what follows we propose an
algorithm that is based on a strategy that relies on average
system parameters only.
Since tasks arrive to each device as a Poisson process and we aim for a constant probability vector $p_i$ as a solution, the arrival processes to the transmission queues are Poisson processes. If the transmission queues are sufficiently large, we can approximate them as infinite, similar to [20], and thus we can model each transmission queue as an $M/G/1$ system. Let us denote by $\overline{T}^t_{i,j}$ and $\overline{(T^t_{i,j})^2}$ the mean and the second moment of the time needed to transmit a task from device $i$ to $j \in \mathcal{N} \setminus \{i\} \cup \{0\}$, respectively. Then the mean time $\overline{T}^d_{i,j}$ needed to deliver the input data from device $i$ to $j \in \mathcal{N} \setminus \{i\} \cup \{0\}$ is the sum of the mean waiting time in the transmission queue and the mean transmission time $\overline{T}^t_{i,j}$, and can be expressed as

$$\overline{T}^d_{i,j} = \frac{p_{i,j} \lambda_i \overline{(T^t_{i,j})^2}}{2\big(1 - p_{i,j} \lambda_i \overline{T}^t_{i,j}\big)} + \overline{T}^t_{i,j}, \quad (4)$$

and the queue is stable as long as the offered load $\rho^t_{i,j} = p_{i,j} \lambda_i \overline{T}^t_{i,j} < 1$.
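Equation (4) is the Pollaczek-Khinchine mean waiting time of an $M/G/1$ queue plus the mean transmission time. A small helper (our own naming; the second moment of the transmission time is passed in explicitly) might read:

```python
def mean_delivery_time(p_ij, lam_i, Tt_mean, Tt_m2):
    """Mean time to deliver input data from device i to node j, eq. (4).

    p_ij * lam_i is the Poisson arrival rate to transmission queue (i, j);
    Tt_mean and Tt_m2 are the first and second moments of the
    transmission time T^t_{i,j}.
    """
    rho = p_ij * lam_i * Tt_mean  # offered load rho^t_{i,j}
    if rho >= 1.0:
        raise ValueError("transmission queue (i,j) unstable: rho = %.3f" % rho)
    waiting = p_ij * lam_i * Tt_m2 / (2.0 * (1.0 - rho))  # P-K waiting time
    return waiting + Tt_mean
```

For an exponential transmission time with mean 0.1 s (second moment 0.02), $p_{i,j} = 0.5$ and $\lambda_i = 4$, the offered load is 0.2 and the mean delivery time evaluates to 0.125 s.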
_B. Computation time in steady state_
IV. EQUILIBRIUM IN STATIC MIXED STRATEGIES
As a practical alternative to the MBR algorithm, in this
section we propose a decentralized algorithm, which we
refer to as the Static Mixed Nash Equilibrium (SM-NE) algorithm. The algorithm is based on an equilibrium (pi)i∈N
in static mixed strategies, that is, device i chooses the node
where to execute an arriving task at random according
to the probability vector pi, which is the same for all
tasks. For computing a static mixed strategy, it is enough
for a device to know the average task arrival intensities,
transmission rates, and the first and second moments of the
task size and the task complexity distribution. Therefore,
the SM-NE algorithm requires significantly less signaling
than the MBR algorithm.
In order to compute an equilibrium strategy, we start
with expressing the (approximate) equilibrium cost of
Observe that if the input data size $D_i$ follows an exponential distribution, then departures from the transmission queues can be modeled by a Poisson process, and thus tasks arrive to the devices' execution queues according to a Poisson process. In what follows we use the approximation that the tasks arrive according to a Poisson process even if $D_i$ is not exponentially distributed. Furthermore, following common practice [40], [19], for analytical tractability we approximate the execution queues as being infinite. This approximation is reasonable if the queues are sufficiently large. These two approximations allow us to model the execution queue of each device as an $M/G/1$ system, and the edge cloud as an $M/G/\infty$ system.

Let us denote by $\overline{T}^c_{i,j}$ and $\overline{(T^c_{i,j})^2}$ the mean and the second moment of the time needed to compute device $i$'s task on $j \in \mathcal{N} \cup \{0\}$, respectively. Then the mean time $\overline{T}^e_{i,j}$ that device $j \in \mathcal{N}$ needs to complete the execution of device $i$'s task is the sum of the mean waiting time in the execution queue and the mean computation time $\overline{T}^c_{i,j}$, and can be expressed as

$$\overline{T}^e_{i,j} = \frac{\sum_{i' \in \mathcal{N}} p_{i',j} \lambda_{i'} \overline{(T^c_{i',j})^2}}{2\big(1 - \sum_{i' \in \mathcal{N}} p_{i',j} \lambda_{i'} \overline{T}^c_{i',j}\big)} + \overline{T}^c_{i,j}, \quad (5)$$

and the queue is stable as long as the offered load $\rho^e_j = \sum_{i' \in \mathcal{N}} p_{i',j} \lambda_{i'} \overline{T}^c_{i',j} < 1$.

Since computing in the edge cloud begins immediately upon arrival of a task, the mean time $\overline{T}^e_{i,0}$ that the cloud needs to complete the execution of device $i$'s task is equal to the mean computation time $\overline{T}^c_{i,0}$, i.e.,

$$\overline{T}^e_{i,0} = \overline{L}_i/F_0. \quad (6)$$
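Equations (5) and (6) can be transcribed in the same way as (4); here the load on an execution queue aggregates over all devices $i'$, and the cloud ($j = 0$) contributes no waiting time since it is modeled as $M/G/\infty$. Function and argument names are our own.

```python
def mean_execution_time(i, j, P, lam, Tc_mean, Tc_m2, L_mean=None, F0=None):
    """Mean completion time of device i's task on node j, eqs. (5)-(6).

    P[i2][j]: probability that device i2 offloads to node j.
    Tc_mean[i2][j], Tc_m2[i2][j]: first and second moments of the
    computation time of device i2's tasks on node j.
    Node j == 0 is the edge cloud, where execution starts immediately.
    """
    if j == 0:
        return L_mean[i] / F0  # eq. (6): no queueing at the cloud
    n = len(lam)
    rho = sum(P[i2][j] * lam[i2] * Tc_mean[i2][j] for i2 in range(n))
    if rho >= 1.0:
        raise ValueError("execution queue of node %d is unstable" % j)
    num = sum(P[i2][j] * lam[i2] * Tc_m2[i2][j] for i2 in range(n))
    return num / (2.0 * (1.0 - rho)) + Tc_mean[i][j]  # eq. (5)
```

Note that, unlike the transmission queue in (4), the waiting time here couples the devices through the sum over $i'$, which is what makes the offloading decisions interdependent.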
_C. Existence of Static Mixed Strategy Equilibrium_

We can rewrite (1) to express the cost $C_i$ of device $i$ in steady state as a function of $(p_i)_{i \in \mathcal{N}}$,

$$C_i(p_i, p_{-i}) = p_{i,i} \overline{T}^e_{i,i} + \sum_{j \in \mathcal{N} \setminus \{i\} \cup \{0\}} p_{i,j} \big( \overline{T}^d_{i,j} + \overline{T}^e_{i,j} \big),$$

where we use $p_{-i}$ to denote the strategies of all devices except device $i$.
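Given the mean delivery and execution times above, the steady-state cost of a device is a straightforward weighted sum; a minimal sketch (our own naming, with the mean times assumed precomputed for the current strategy profile):

```python
def device_cost(i, p_i, Te, Td):
    """Steady-state cost C_i(p_i, p_{-i}) of device i.

    p_i[j]: probability of assigning a task to node j (0 = edge cloud,
    node i = local execution).
    Te[j]: mean completion time on node j; Td[j]: mean delivery time
    to node j (Td[i] is unused, as local execution needs no delivery).
    """
    cost = p_i[i] * Te[i]  # local execution term
    for j in range(len(p_i)):
        if j != i:
            cost += p_i[j] * (Td[j] + Te[j])  # offloading terms
    return cost
```

For example, with $p_i = (0.2, 0.5, 0.3)$ for device 1, mean completion times $(1.0, 0.4, 0.8)$ and delivery times $(0.5, 0.0, 0.2)$, the cost is $0.5 \cdot 0.4 + 0.2 \cdot 1.5 + 0.3 \cdot 1.0 = 0.8$.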
Observe that the static mixed strategy profile $(p_i)_{i \in \mathcal{N}}$ of the devices has to ensure that the entire system is stable in steady state, and we assume that the load is such that there is at least one strategy profile that satisfies the stability condition of the entire system. Now, we can define the set of feasible strategies of device $i$ as the set of probability vectors that ensure stability of the transmission and the execution queues,

$$K_i(p_{-i}) = \{ p_i \in \mathcal{P} \mid \rho^t_{i,j} \le S^t,\ \rho^e_{i'} \le S^t,\ \forall j \in \mathcal{N} \setminus \{i\} \cup \{0\},\ \forall i' \},$$

where $0 < S^t < 1$ is the stability threshold associated with the transmission and the execution queues.

Note that due to the stability constraints the set of feasible strategies $K_i(p_{-i})$ of device $i$ depends on the other devices' strategies, and we are interested in whether there is a strategy profile $(p^*_i)_{i \in \mathcal{N}}$ such that

$$C_i(p^*_i, p^*_{-i}) \le C_i(p_i, p^*_{-i}), \quad \forall p_i \in K_i(p^*_{-i}).$$

We are now ready to formulate the first main result of the section.

**Theorem 1.** The MCOG has at least one equilibrium in static mixed strategies.
In the rest of this subsection we use variational inequality (VI) theory to prove the theorem and for computing an equilibrium. For a given set $\mathcal{K} \subseteq \mathbb{R}^n$ and a function $F : \mathcal{K} \to \mathbb{R}^n$, the $VI(\mathcal{K}, F)$ problem is the problem of finding a point $x^* \in \mathcal{K}$ such that $F(x^*)^T (x - x^*) \ge 0$, for $\forall x \in \mathcal{K}$. We define the set $\mathcal{K}$ as

$$\mathcal{K} = \{ (p_i)_{i \in \mathcal{N}} \mid p_i \in \mathcal{P},\ \rho^t_{i,j} \le S^t,\ \rho^e_i \le S^t,\ j \in \mathcal{N} \setminus \{i\} \cup \{0\},\ \forall i \}.$$
Before we prove the theorem, in the following we formulate an important result concerning the cost function $C_i(p_i, p_{-i})$.

**Lemma 1.** $C_i(p_i, p_{-i})$ is a convex function of $p_i$ for any fixed $p_{-i}$ and $(p_i, p_{-i}) \in \mathcal{K}$.

_Proof._ For notational convenience let us start the proof with introducing a few shorthand notations,

$$\gamma_{i,j} = p_{i,j} \lambda_i \overline{(T^t_{i,j})^2}, \qquad \delta_i = \sum_{j \in \mathcal{N}} p_{j,i} \lambda_j \overline{(T^c_{j,i})^2},$$
$$\epsilon_{i,j} = 1 - \rho^t_{i,j}, \qquad \zeta_i = 1 - \rho^e_i.$$

Using this notation we expand the cost $C_i(p_i, p_{-i})$ as
$$C_i(p_i, p_{-i}) = p_{i,i} \Big( \frac{\delta_i}{2\zeta_i} + \overline{T}^c_{i,i} \Big) + p_{i,0} \Big( \frac{\gamma_{i,0}}{2\epsilon_{i,0}} + \overline{T}^t_{i,0} + \overline{T}^c_{i,0} \Big) + \sum_{j \in \mathcal{N} \setminus \{i\}} p_{i,j} \Big( \frac{\gamma_{i,j}}{2\epsilon_{i,j}} + \overline{T}^t_{i,j} + \frac{\delta_j}{2\zeta_j} + \overline{T}^c_{i,j} \Big).$$

To prove convexity we proceed with expressing the first order derivatives $h_{i,j} = \frac{\partial C_i(p_i, p_{-i})}{\partial p_{i,j}}$,

$$h_{i,0} = \overline{T}^t_{i,0} + \overline{T}^c_{i,0} + \frac{\gamma_{i,0}}{2\epsilon_{i,0}} + p_{i,0} \lambda_i \Big( \frac{\overline{(T^t_{i,0})^2}}{2\epsilon_{i,0}} + \frac{\overline{T}^t_{i,0}\, \gamma_{i,0}}{2\epsilon_{i,0}^2} \Big),$$

$$h_{i,i} = \overline{T}^c_{i,i} + \frac{\delta_i}{2\zeta_i} + p_{i,i} \lambda_i \Big( \frac{\overline{(T^c_{i,i})^2}}{2\zeta_i} + \frac{\overline{T}^c_{i,i}\, \delta_i}{2\zeta_i^2} \Big),$$

$$h_{i,j}\big|_{j \ne i} = \overline{T}^t_{i,j} + \overline{T}^c_{i,j} + \frac{\gamma_{i,j}}{2\epsilon_{i,j}} + \frac{\delta_j}{2\zeta_j} + p_{i,j} \lambda_i \Big( \frac{\overline{(T^t_{i,j})^2}}{2\epsilon_{i,j}} + \frac{\overline{(T^c_{i,j})^2}}{2\zeta_j} + \frac{\overline{T}^t_{i,j}\, \gamma_{i,j}}{2\epsilon_{i,j}^2} + \frac{\overline{T}^c_{i,j}\, \delta_j}{2\zeta_j^2} \Big).$$

We can now express the Hessian matrix

$$H_i(p_i, p_{-i}) = \begin{pmatrix} h^i_{i,0} & 0 & \dots & 0 \\ 0 & h^i_{i,1} & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & h^i_{i,N} \end{pmatrix},$$

where $h^i_{i,j} = \frac{\partial^2 C_i(p_i, p_{-i})}{\partial p_{i,j}^2}$, and

$$h^i_{i,0} = \frac{\lambda_i}{\epsilon_{i,0}} \Big( \overline{(T^t_{i,0})^2} + \frac{\gamma_{i,0} \overline{T}^t_{i,0}}{\epsilon_{i,0}} \Big) \Big( 1 + p_{i,0} \frac{\lambda_i \overline{T}^t_{i,0}}{\epsilon_{i,0}} \Big),$$

$$h^i_{i,i} = \frac{\lambda_i}{\zeta_i} \Big( \overline{(T^c_{i,i})^2} + \frac{\delta_i \overline{T}^c_{i,i}}{\zeta_i} \Big) \Big( 1 + p_{i,i} \frac{\lambda_i \overline{T}^c_{i,i}}{\zeta_i} \Big),$$

$$h^i_{i,j}\big|_{j \ne i} = \frac{\lambda_i}{\epsilon_{i,j}} \Big( \overline{(T^t_{i,j})^2} + \frac{\gamma_{i,j} \overline{T}^t_{i,j}}{\epsilon_{i,j}} \Big) \Big( 1 + p_{i,j} \frac{\lambda_i \overline{T}^t_{i,j}}{\epsilon_{i,j}} \Big) + \frac{\lambda_i}{\zeta_j} \Big( \overline{(T^c_{i,j})^2} + \frac{\delta_j \overline{T}^c_{i,j}}{\zeta_j} \Big) \Big( 1 + p_{i,j} \frac{\lambda_i \overline{T}^c_{i,j}}{\zeta_j} \Big).$$

Observe that all diagonal elements of $H_i(p_i, p_{-i})$ are nonnegative, and thus the Hessian matrix $H_i(p_i, p_{-i})$ is positive semidefinite on $\mathcal{K}$, which implies convexity.

We are now ready to prove Theorem 1.

_Proof of Theorem 1._ Let us define the generalized Nash equilibrium problem $\Gamma^s = \langle \mathcal{N}, (\mathcal{P})_{i \in \mathcal{N}}, (C_i)_{i \in \mathcal{N}} \rangle$, subject to $(p_i)_{i \in \mathcal{N}} \in \mathcal{K}$. $\Gamma^s$ is a strategic game, in which each device $i \in \mathcal{N}$ plays a mixed strategy $p_i \in K_i(p_{-i})$, and aims at minimizing its cost $C_i$ by solving

$$\min_{p_i}\ C_i(p_i, p_{-i}) \quad \text{s.t.} \quad (7)$$
$$p_i \in K_i(p_{-i}). \quad (8)$$

Clearly, a pure strategy Nash equilibrium $(p^*_i)_{i \in \mathcal{N}}$ of $\Gamma^s$ is an equilibrium of the MCOG in static mixed strategies, as

$$C_i(p^*_i, p^*_{-i}) \le C_i(p_i, p^*_{-i}), \quad \forall p_i \in K_i(p^*_{-i}).$$

We thus have to prove that $\Gamma^s$ has a pure strategy Nash equilibrium. To do so, let us first define the function

$$F = \begin{pmatrix} \nabla_{p_1} C_1(p_1, p_{-1}) \\ \vdots \\ \nabla_{p_N} C_N(p_N, p_{-N}) \end{pmatrix},$$
where $\nabla_{p_i} C_i(p_i, p_{-i})$ is the gradient vector given by

$$\nabla_{p_i} C_i(p_i, p_{-i}) = \begin{pmatrix} h_{i,0} \\ h_{i,1} \\ \vdots \\ h_{i,N} \end{pmatrix}.$$
We prove the theorem in two steps based on the $VI(\mathcal{K}, F)$ problem, which corresponds to $\Gamma^s$.

First, we prove that the solution set of the $VI(\mathcal{K}, F)$ problem is nonempty and compact. Since the first order derivatives $h_{i,j}$ are rational functions, the function $F$ is infinitely differentiable at every point in $\mathcal{K}$, and hence it is continuous on $\mathcal{K}$. Furthermore, the set $\mathcal{K}$ is compact and convex. Hence, the solution set of the $VI(\mathcal{K}, F)$ problem is nonempty and compact (Corollary 2.2.5 in [41]).

Second, we prove that any solution of the $VI(\mathcal{K}, F)$ problem is an equilibrium of the MCOG. Since the function $F$ is continuous on $\mathcal{K}$, it follows that $C_i(p_i, p_{-i})$ is continuously differentiable on $\mathcal{K}$. Furthermore, by Lemma 1 we know that $C_i(p_i, p_{-i})$ is a convex function. Therefore, any solution of the $VI(\mathcal{K}, F)$ problem is a pure strategy Nash equilibrium of $\Gamma^s$ [42], and is thus an equilibrium in static mixed strategies of the MCOG. This proves the theorem.
Theorem 1 guarantees that the MCOG possesses at least
one equilibrium in static mixed strategies, according to
which the SM-NE algorithm allocates the tasks among the
devices and the edge cloud. The next important question is
whether there is an efficient algorithm for solving the VI
problem, and hence for computing an equilibrium (p[∗]i [)][i][∈N]
of the MCOG in static mixed strategies.
In what follows we show that an equilibrium can
be computed efficiently under certain conditions. To do
so, we show that the function F is monotone if the
execution queue of each device can be modeled by an
_M/M/1 system and all task arrival intensities are equal._
Monotonicity of F is a sufficient condition for various
algorithms proposed for solving VIs [43], e.g., for the
_Solodov-Tseng Projection-Contraction (ST-PC) method._
**Theorem 2.** _If the task sizes and complexities are exponentially distributed, the arrival intensities λ_i = λ, and_

λ max_{j∈N} T^c_{j,i} ≤ (1 − S_t)/N, ∀i ∈ N,

_then the function F is monotone._
The proof is given in Appendix A.
Note that the sufficient condition provided by Theorem 2 ensures stability of all execution queues in the worst case scenario, i.e., when T^c_{j,i} = max_{j∈N} T^c_{j,i} for
all devices. This condition is, however, not necessary for
function F to be monotone in realistic scenarios. In fact,
our simulations showed that the ST-PC method converges
to an equilibrium for various considered scenarios.
V. NUMERICAL RESULTS
In what follows we show simulation results obtained
using an event driven simulator, in which we implemented
the MBR and SM-NE algorithms. For the ST-PC method
we set pi,i = 1, ∀i ∈N as starting point, which
corresponds to the strategy profile in which each device
performs all tasks locally. The ST-PC method stops when the norm of the difference of two successive iterates is less than 10^{−4}.
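As a rough illustration of how a projection-type method computes a solution of a VI over per-device probability simplices, the sketch below uses the closely related extragradient iteration; the simplex projection routine, step size, stopping rule, and the toy operator in the usage note are illustrative assumptions, not the exact ST-PC update of [43].

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of a vector onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u - css / np.arange(1, len(v) + 1) > 0)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

def extragradient_vi(F, p0, step=0.1, tol=1e-6, max_iter=5000):
    """Approximate a solution of VI(K, F) for a monotone operator F,
    where K is a product of probability simplices.

    F maps a list of per-device strategy vectors to a list of gradient
    vectors; p0 is the starting strategy profile."""
    p = [np.asarray(x, dtype=float) for x in p0]
    for _ in range(max_iter):
        g = F(p)                                           # predictor step
        q = [project_simplex(x - step * gx) for x, gx in zip(p, g)]
        gq = F(q)                                          # corrector step
        p_new = [project_simplex(x - step * gx) for x, gx in zip(p, gq)]
        # stop when two successive iterates are close
        if max(np.linalg.norm(a - b) for a, b in zip(p_new, p)) < tol:
            return p_new
        p = p_new
    return p
```

For the toy operator F_i(p) = p_i − t_i, whose unique equilibrium is the projection of t_i onto the simplex, the iteration recovers that point.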
Similar to [44], [45], we placed the devices at random on a regular grid with 10^4 points defined over a square area of 1km × 1km, and we placed the edge cloud at the center of the grid as in [44]. Unless otherwise noted, we consider that the wired link latency τ_c incurred during communication with the cloud server can be neglected, since the cloud is located in close proximity to the devices [46]. For simplicity, we consider a static bandwidth
assignment for the simulations; we assign a bandwidth of
_Bi,j = 5 MHz for communication between device i and_
device j [47], [48], and for the device to cloud bandwidth
assignment we consider two scenarios. In the elastic scenario the bandwidth B_{i,0} assigned for communication between device i and the edge cloud is independent of the number of devices. In the fixed scenario the devices share a fixed amount of bandwidth B_0 when they want to offload a task to the edge cloud, and the bandwidth B_{i,0} scales inversely with the number of devices, i.e., B_{i,0} = B_0/N. We consider that the channel gain of device i to a node j ∈ N\{i} ∪ {0} is proportional to d^{−α}_{i,j}, where d_{i,j} is the distance between device i and node j, and α is the path loss exponent, which we set to 4 according to the path loss model in urban and suburban areas [49]. We set the data transmit power P^t_i of every device i to 0.4 W according to [50], and given the bandwidth B_{i,j} available for the communication between nodes i and j we calculate the noise power P_n as P_n = B_{i,j} N_0, where N_0 = 1.38065 × 10^{−23} · T is the spectral density of the thermal noise at temperature T = 290 K. Finally, we calculate the transmission rate R_{i,j} from device i to node j ∈ N\{i} ∪ {0} as R_{i,j} = B_{i,j} log_2(1 + P^t_i d^{−α}_{i,j}/P_n).
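The noise and rate computation above can be sketched as follows; the unit proportionality constant in the d^{−α} channel gain is an assumption carried over from the text.

```python
import math

def transmission_rate(bandwidth_hz, tx_power_w, distance_m, alpha=4.0, temp_k=290.0):
    """Shannon-rate estimate R = B * log2(1 + P * d^-alpha / Pn), where the
    noise power is Pn = N0 * B with N0 = kB * T (thermal noise density)."""
    k_boltzmann = 1.38065e-23                  # Boltzmann constant [J/K]
    noise_w = k_boltzmann * temp_k * bandwidth_hz
    snr = tx_power_w * distance_m ** (-alpha) / noise_w
    return bandwidth_hz * math.log2(1.0 + snr)

# Example: a 5 MHz device-to-device link at 0.4 W transmit power
rate_100m = transmission_rate(5e6, 0.4, 100.0)
rate_200m = transmission_rate(5e6, 0.4, 200.0)
```

As expected from the model, the achievable rate decreases monotonically with the distance between the two nodes.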
The input data size D_i follows a uniform distribution on [a^d_i, b^d_i], where a^d_i and b^d_i are uniformly distributed on [0.1, 1.4] Mb and on [2.2, 3.4] Mb, respectively. The arrival intensity λ_i of the tasks of device i is uniformly distributed on [0.01, 0.03] tasks/s, and the stability threshold is S_t = 0.6. Note that for the above set of parameters the maximum arrival intensity does not satisfy the sufficient condition of Theorem 2 already for N = 20 devices. Yet, our evaluation shows that the ST-PC method converges even for larger instances of the problem.

The computational capability F_i of device i is drawn from a continuous uniform distribution on [1, 4] GHz, while the computation capability of the edge cloud is F_0 = 64 GHz [51]. The task complexity L_i follows a uniform distribution on [a^l_i, b^l_i], where a^l_i and b^l_i are uniformly distributed on [0.2, 0.5] Gcycles and [0.7, 1] Gcycles, respectively.
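The parameter draws described above can be sketched as follows (the dictionary field names are illustrative; in the event driven simulator the size and complexity of each task would additionally be drawn per arrival from the device-specific [a, b] ranges):

```python
import random

def sample_device_params(n_devices, seed=0):
    """Draw per-device simulation parameters from the stated distributions."""
    rng = random.Random(seed)
    devices = []
    for _ in range(n_devices):
        a_d, b_d = rng.uniform(0.1, 1.4), rng.uniform(2.2, 3.4)   # Mb bounds
        a_l, b_l = rng.uniform(0.2, 0.5), rng.uniform(0.7, 1.0)   # Gcycle bounds
        devices.append({
            "data_mb": rng.uniform(a_d, b_d),    # input size D_i
            "arrival": rng.uniform(0.01, 0.03),  # task intensity lambda_i [1/s]
            "cpu_ghz": rng.uniform(1.0, 4.0),    # capability F_i
            "gcycles": rng.uniform(a_l, b_l),    # complexity L_i
        })
    return devices
```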
We use three algorithms as a basis for comparison.
The first algorithm computes the socially optimal static mixed strategy profile (p̄_i)_{i∈N} that minimizes the system cost C = (1/N) Σ_{i∈N} C_i, i.e., (p̄_i)_{i∈N} = arg min_{(p_i)_{i∈N}} C.
We refer to this algorithm as the Static Mixed Optimal
(SM-OPT) algorithm. The second algorithm considers that
the devices are allowed to offload the tasks to the edge
cloud only (i.e., pi,i + pi,0 = 1), and we refer to this
algorithm as the Static Mixed Cloud Nash Equilibrium
(SMC-NE) algorithm. The third algorithm considers that
Fig. 5. Performance gain vs. number of devices for B_{i,0} = 0.2 MHz, B_{i,0} = 1.25 MHz and B_{i,0} = 12.5/N MHz.
all devices perform local execution (i.e., pi,i = 1). Furthermore, we define the performance gain of an algorithm as
the ratio between the system cost reached when all devices
perform local execution and the system cost reached by
the algorithm. For the SM-OPT algorithm the results
are shown only up to 30 or 35 devices, because the
computation of the socially optimal strategy profile was
computationally infeasible for larger problem instances.
The results shown in all figures are the averages of 50
simulations, together with 95% confidence intervals.
_A. Performance gain_
We start with evaluating the performance gain as a
function of the number of devices. Figure 5 shows the
performance gain for the MBR, SM-NE, SM-OPT and
SMC-NE algorithms as a function of the number of
devices for the two scenarios of device to cloud bandwidth
assignment. For the elastic scenario Bi,0 = 0.2 MHz
and Bi,0 = 1.25 MHz, and for the fixed scenario B0 =
12.5 MHz.
The results show that the SM-NE and the SM-OPT
algorithms perform close to the MBR algorithm, despite
the fact that they are based on average system parameters only. We can also observe that when the device to
cloud bandwidth is low (about 0.2 MHz), SMC-NE does
not provide significant gain compared to local execution
(the performance gain is close to one for all values of
_N_ ). On the contrary, the MBR, SM-NE and SM-OPT
algorithms, which allow collaborative offloading, provide
a performance gain of about 50%, and the gain slightly
increases with the number of devices. The reason for the
slight increase of the gain is that when there are more
devices, devices are closer to each other on average, which
allows higher transmission rates between devices.
Compared to the case when Bi,0 = 0.2 MHz, the results
for Bi,0 = 1.25 MHz show that all algorithms achieve
very high performance gains (up to 300%). Furthermore,
the performance gain of the SMC-NE algorithm is similar
to that of the SM-NE and the SM-OPT algorithms, while
the MBR algorithm performs slightly better. The reason is
that for high device to cloud bandwidth in the static mixed
equilibrium most devices offload to the edge cloud, as on
average it is best to do so, even if given the instantaneous
system state it may be better to offload to a device,
as done by the MBR algorithm. Furthermore, unlike for
_Bi,0 = 0.2 MHz, for Bi,0 = 1.25 MHz the performance_
Fig. 6. Performance gain vs. device to cloud bandwidth Bi,0 for N = 8
devices placed over 0.5km × 0.5km square area, for N = 30 devices
placed over 1km × 1km square area, and for N = 60 devices placed
over 1.41km × 1.41km square area.
gain becomes fairly insensitive to the number of devices,
which is again due to the increased reliance on the cloud
resources for computation offloading.
The results are fairly different for the fixed device to
cloud bandwidth assignment scenario, as in this scenario
the number of devices affects the device to cloud bandwidth. In this scenario collaboration among the devices
improves the system performance (SMC-NE vs. SM-NE
algorithms). We can also observe that as N increases, the curves for the fixed scenario approach the curves for the elastic scenario with B_{i,0} = 0.2 MHz. This is because for large values of N the device to cloud bandwidth B_{i,0} becomes low and the devices offload more to each other than to the edge cloud.
Finally, the results show that the gap between the SM-NE and the SM-OPT algorithms is almost negligible for
all scenarios, and hence we can conclude that the price of
stability of the MCOG game in static mixed strategies is
close to one.
_B. Impact of cloud availability_
In order to analyse the impact of the possibility to
offload to the edge cloud, in the following we vary the
bandwidth Bi,0 between 0.2 MHz and 5.2 MHz.
Figure 6 shows the average and the median performance gain for the MBR, SM-NE, SM-OPT and SMC-NE algorithms as a function of the device to cloud bandwidth for 8 devices placed over a square area of 0.5km × 0.5km, for 30 devices placed over a square area of 1km × 1km, and for 60 devices placed over a square area of 1.41km × 1.41km.
Note that the three scenarios have approximately the same
density of devices. We first observe that the median performance gain is almost equal to the average performance
gain for all algorithms and for all considered scenarios,
which suggests that the distribution of the completion times of
the tasks is approximately symmetrical. The figure shows
that the performance gain achieved by the algorithms increases with the bandwidth Bi,0. Furthermore, we observe
that the gap between the algorithms decreases as the device
to cloud bandwidth increases, and for reasonably high
bandwidths the SM-NE algorithm performs almost equally
well as the MBR algorithm. The results also show that
collaboration among the devices has highest impact on
the system performance when the bandwidth Bi,0 is low,
and for Bi,0 = 1.2 MHz offloading to the edge cloud
Fig. 7. Performance gain vs. latency τc to the cloud server, for N = 30
devices placed over 1km × 1km square area, and Bi,0 = 1.25 MHz.
only (SMC-NE) is as good as the SM-NE and SM-OPT
algorithms.
Comparing the performance for different sized areas, we observe that the performance gain decreases as the size of the area increases, which is because the devices are, on average, closer to the cloud server in a smaller area.
_C. Impact of cloud remoteness_
In order to evaluate the impact of the cloud access
latency, in the following we vary the latency τc between
0 s and 0.4 s. A low latency (0ms ≤ _τc < 20ms)_
would correspond to the case of an edge cloud or a home
gateway, a moderate latency (20ms ≤ _τc < 100ms) would_
correspond to an edge cloud located deeper in the network
(e.g., metro network), and high latency (100ms ≤ _τc)_
would correspond to remote cloud servers.
In Figure 7 we show the average performance gain
as a function of the latency τc for the MBR, SM-NE,
SM-OPT and SMC-NE algorithms in a fog computing
system that serves N = 30 devices, each of them assigned
a bandwidth of Bi,0 = 1.25 MHz for communication with
the cloud. The figure shows that the performance gain of
all algorithms decreases as the latency to the cloud server
increases. Furthermore, we observe that the performance
gain of the SMC-NE algorithm approaches one, as in the
case of a high latency it is better for most of devices
to perform the computation locally. On the contrary, the
performance gain of the MBR, SM-NE and SM-OPT
algorithms remains slightly above 1.5 even for high values
of the latency (τc ≥ 300ms), which additionally confirms
that devices can decrease the average completion times of
their tasks through collaboration even in systems where
they cannot entirely rely on the cloud resources.
_D. Performance gain perceived per device_

Fig. 8. Distribution of the performance gain for N = 30 devices, B_{i,0} = 0.2 MHz, B_{i,0} = 0.8 MHz and B_{i,0} = 1.25 MHz.

In order to evaluate the performance gain perceived per device, we use the notions of ex-ante and ex-post individual rationality. These are important in situations when the devices are allowed to decide whether or not to participate in the collaboration before and after learning their types (i.e., the exact size and complexity of their tasks), respectively. The results in Figure 5 show that on average the devices benefit from collaboration, as the performance gain is greater than one, and hence collaboration among the devices is ex-ante individually rational. In order to investigate whether collaboration among the devices is ex-post individually rational, in Figure 8 we plot the CDF of the performance gain for the elastic device to cloud bandwidth assignment scenario with 30 devices and for B_{i,0} = 0.2 MHz, B_{i,0} = 0.8 MHz, and B_{i,0} = 1.25 MHz.

The results for B_{i,0} = 0.2 MHz show that the SMC-NE algorithm is ex-post individually rational, as devices always gain compared to local computation. At the same time, the SM-NE and MBR algorithms achieve a performance gain below one for a small fraction of the devices, and hence collaboration among devices is not ex-post individually rational. On the contrary, the results for B_{i,0} = 0.8 MHz show that the MBR algorithm is ex-post individually rational, since the performance gain of every device is larger than one, but the SM-NE algorithm is not. Finally, the results for B_{i,0} = 1.25 MHz show that all algorithms ensure that every device achieves a performance gain of at least one, and hence for B_{i,0} = 1.25 MHz collaboration among devices is ex-post individually rational using all algorithms.

The above results show that collaboration among the devices is ex-post individually rational only if sufficient bandwidth is provided for communication to the edge cloud. Thus, if ex-post individual rationality is important, then the device to cloud bandwidth has to be managed appropriately.

_E. Utilization ratio of collaboration among devices_

In order to evaluate the impact of collaboration on the system performance, we consider the ratio of the tasks executed at different nodes in the system. To obtain this ratio, we simulated stochastic task arrivals over a period of 10^4 s. We recorded the N_t tasks generated in the system during this period, and for an algorithm A ∈ {MBR, SM-NE, SM-OPT} we recorded N^A_l and N^A_c, the number of tasks executed locally and the number of tasks executed in the edge cloud, respectively. Figure 9 shows the ratio N^A_l/N_t of the tasks executed locally, and the ratio (N_t − N^A_c)/N_t of the tasks executed either locally or at one of the other devices, for the MBR, SM-NE and SM-OPT algorithms as a function of the number of devices for B_{i,0} = 12.5/N MHz.
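The two ratios plotted in Figure 9 can be computed from a per-task log of execution sites; a minimal sketch, with the 'local'/'cloud'/'d2d' labels as illustrative assumptions:

```python
from collections import Counter

def execution_ratios(task_sites):
    """task_sites: one entry per completed task, each 'local', 'cloud', or
    'd2d' (executed at another device).

    Returns (N_l/N_t, (N_t - N_c)/N_t): the fraction executed locally, and
    the fraction executed either locally or at one of the other devices."""
    counts = Counter(task_sites)
    n_t = sum(counts.values())
    local_ratio = counts["local"] / n_t
    local_or_d2d_ratio = (n_t - counts["cloud"]) / n_t
    return local_ratio, local_or_d2d_ratio
```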
The results in Figure 9 show that for N = 10, i.e., when
the bandwidth assigned to each device for communication
with the edge cloud is 1.25 MHz, the devices offload
more tasks to the edge cloud in the case of the SM-NE
and SM-OPT algorithms than in the case of the MBR
algorithm, which coincides with the observation made in
Fig. 9. Ratio of the tasks executed locally and the tasks executed at any of the devices for B_{i,0} = 12.5/N MHz.
Figure 5 for B_{i,0} = 1.25 MHz. On the contrary, when N ≥ 20 the devices offload more tasks to the edge cloud in the case of the MBR algorithm than in the case of the SM-NE and SM-OPT algorithms, which achieve approximately the same performance. Furthermore, we observe that while the ratio of the tasks executed locally increases up to 30 users and remains constant for more devices, the ratio of the tasks executed either locally or at one of the other devices continues to increase with the number of devices for all algorithms. These results confirm the observation made for B_{i,0} = 12.5/N MHz in Figure 5 that the collaboration among the devices improves the system performance.
_F. Computational efficiency of the SM-NE algorithm_
Recall that the SM-NE algorithm is based on the static
mixed strategy equilibrium, and that the SM-OPT algorithm is based on the socially optimal static mixed strategy
profile. In order to assess the computational efficiency of
the SM-NE algorithm we measured the time needed to
compute a static mixed strategy equilibrium by the ST-PC
method and the time needed to compute a socially optimal
static mixed strategy profile by the quasi-Newton method.
Figure 10 shows the measured times as a function of the
number of devices. We observe that the time needed to
compute the socially optimal static mixed strategy profile
increases exponentially with the number of devices at a
fairly high rate, and already for 30 devices it is more than
an order of magnitude faster to compute a static mixed
strategy equilibrium than to compute the socially optimal
static mixed strategy profile. Therefore, we conclude that
the SM-NE algorithm, which is based on an equilibrium in
static mixed strategies, is a computationally efficient solution for medium to large scale collaborative computation
offloading systems.
VI. RELATED WORK
There is a large body of work on augmenting the
execution of computationally intensive applications using
cloud resources [52], [53], [54], [55], [27], [56]. In [52] the
authors studied the problem of maximizing the throughput
of mobile data stream applications through partitioning,
and proposed a genetic algorithm as a solution. The
authors in [53] considered multiple QoS factors in a 2-tiered cloud infrastructure, and proposed a heuristic for
minimizing the users’ cost. In [54] the authors proposed
an iterative algorithm that minimizes the users’ overall
Fig. 10. Time needed to compute a static mixed strategy equilibrium and
a socially optimal static mixed strategy profile for Bi,0 = 1.25 MHz.
energy consumption, while meeting latency constraints.
The authors in [55] considered the joint optimization of the
offloading decisions, and the allocation of communication
and computation resources, proved the NP-hardness of
the problem and proposed a heuristic offloading decision algorithm for minimizing the completion time and
the energy consumption of devices. The authors in [27]
considered a single wireless link and an elastic cloud,
provided a game theoretic treatment of the problem of
minimizing completion time and showed that the game is
a potential game. The authors in [56] considered multiple
wireless links, elastic and non-elastic cloud, provided a
game theoretic analysis of the problem and proposed
a polynomial complexity algorithm for computing an
equilibrium allocation. In [19] the authors considered a
three-tier cloud architecture with stochastic task arrivals,
provided a game theoretical formulation of the problem,
and used a variational inequality to prove the existence
of a solution and to provide a distributed algorithm for
computing an equilibrium. Unlike these works, we allow
devices to offload computations to each other as well.
A few recent works considered augmenting the execution of computationally intensive applications using the
computational power of nearby devices in a collaborative
way [57], [58], [59], [18], [39]. The authors in [57]
modeled the collaboration among mobile devices as a
coalition game, and proposed a heuristic method for
solving a 0−1 integer quadratic programming problem that minimizes the overall energy consumption. In [58]
the authors formulated the resource allocation problem
among neighboring mobile devices as a multi-objective
optimization that aims to minimize the completion times
of the tasks as well as the overall energy consumption,
and as a solution proposed a two-stage approach based on
enumerating Pareto optimal solutions. In [59] the authors
formulated the problem of maximizing the probability of
computing tasks before their deadlines through mobility-assisted opportunistic computation offloading as a convex
optimization problem, and used the barrier method to solve
the problem. The authors in [18] considered a collaborative cloudlet that consists of devices that can perform
shared offloading, and proposed two heuristic allocation
algorithms that minimize the average relative usage of all
the nodes in the cloudlet. The authors in [39] proposed
an architecture that enables a mobile device to remotely
access computational resources on other mobile devices,
and proposed two greedy algorithms that require complete
information about devices’ states, for minimizing the job
completion time and the energy consumption, respectively.
Our work differs from these works, as we consider computation offloading to an edge cloud and nearby devices,
and provide a non-cooperative game theoretic treatment of
the problem.
Only a few recent works considered the computation
offloading problem in fog computing systems [60], [61],
[62], [63]. The authors in [60] considered a fog computing
system in which the tasks can be performed locally at
the devices, at a fog node or at a remote cloud server,
and proposed a suboptimal algorithm for computing the
offloading decisions and allocating resources with the objective to minimize the delay and the energy consumption
of devices. In [61] the authors considered a fog computing
system, where devices may offload their computation to
small cell access points that provide computation and
storage capacities, and designed a heuristic for a joint
optimization of radio and computational resources with the
objective of minimizing the energy consumption. Unlike
this work, we consider stochastic task arrivals, and we
provide a game theoretical treatment of the completion
time minimization problem. In [62] the authors formulated the power consumption-delay tradeoff problem in a fog computing system that consists of a set of fog devices and a set of
cloud servers, and proposed a heuristic for allocating the
workload among fog devices and cloud servers. In [63]
the authors considered the joint optimization problem
of task allocation and task image placement in a fog
computing system that consists of a set of storage servers,
a set of computation servers and a set of users, and proposed a low-complexity three-stage algorithm for the task
completion time minimization problem. Our work differs
from these works, as we consider heterogeneous computational tasks, and our queueing system model captures
the contention for both communication and computational
resources.
To the best of our knowledge ours is the first work
based on a game theoretical analysis that proposes a
decentralized algorithm with low signaling overhead for
solving the completion time minimization problem in fog
computing systems.
VII. CONCLUSION
We have provided a game theoretical analysis of a fog
computing system. We proposed an efficient decentralized
algorithm based on an equilibrium task allocation in static
mixed strategies. We compared the performance achieved
by the proposed algorithm that relies on average system
parameters with the performance of a myopic best response algorithm that requires global knowledge of the
system state. Our numerical results show that the proposed
algorithm achieves good system performance, close to that
of the myopic best response algorithm, and could be a
possible solution for coordinating collaborative computation offloading with low signaling overhead. There is a
number of interesting extensions of our model. First, one
could consider a communication model in which devices
share the bandwidth with each other. Another direction is
to consider the energy cost of offloading, e.g., use it as a
constraint for offloading optimization.
APPENDIX
_A. Proof of Theorem 2_
Observe that if λ_i = λ, then by Little's law the cost C_i can equivalently be defined as N_i = λC_i, i.e., the mean number of tasks in the system. Furthermore, since the task complexities are assumed to be exponentially distributed, the execution queues are M/M/1 systems. We can thus rewrite T^e_{i,j} as
_M/M/1 systems. We can thus rewrite T_ _[e]i,j as_
_T_ _[e]i,j =_ _[T][ ci,j]_ _,_ (9)
1 − _ρ[e]j_
and the cost Ni(pi, p−i) of device i as
_Ni(pi, p−i) =pi,iλ_ _[T][ ci,i]ζi_ +pi,0λ� 2γϵi,i,00 +T _[t]i,0 + T_ _[c]i,0�_
+ � _pi,jλ�_ _γi,j_ +T _[t]i,j +_ _[T][ ci,j]_ �.
2ϵi,j _ζj_
_j∈N \{i}_
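Equation (9) is the standard M/M/1 mean response time; a one-line sketch makes the blow-up of the completion time near the stability boundary explicit:

```python
def mm1_response_time(mean_service_time, utilization):
    """Mean time spent in an M/M/1 queue: T = T_service / (1 - rho)."""
    if not 0.0 <= utilization < 1.0:
        raise ValueError("M/M/1 queue requires 0 <= rho < 1")
    return mean_service_time / (1.0 - utilization)
```

For example, at utilization 0.5 the mean response time is twice the mean service time, and it grows without bound as the utilization approaches one, which is why the stability threshold S_t caps the admissible load.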
Next, we express the first order derivatives h_{i,j} of N_i(p_i, p_{−i}) as

h_{i,0} = λ (T^t_{i,0} + T^c_{i,0} + γ_{i,0}/(2ε_{i,0})) + p_{i,0} λ^2 (2T^t_{i,0}/(2ε_{i,0}) + T^t_{i,0} γ_{i,0}/(2ε_{i,0}^2)),

h_{i,i} = λ T^c_{i,i}/ζ_i + p_{i,i} λ^2 (T^c_{i,i})^2/ζ_i^2,

h_{i,j}|_{j≠i} = λ (T^t_{i,j} + γ_{i,j}/(2ε_{i,j}) + T^c_{i,j}/ζ_j) + p_{i,j} λ^2 (2T^t_{i,j}/(2ε_{i,j}) + T^t_{i,j} γ_{i,j}/(2ε_{i,j}^2) + (T^c_{i,j})^2/ζ_j^2).
In order to prove the monotonicity of the function F, in what follows we show that the Jacobian J of F is positive semidefinite. The Jacobian J has the following structure

J =
[ h^1_{1,0}  0          ...  0           0  0          ...  0           ...  0  0          ...  0
  0          h^1_{1,1}  ...  0           0  h^1_{2,1}  ...  0           ...  0  h^1_{N,1}  ...  0
  ...
  0          0          ...  h^1_{1,N}   0  0          ...  h^1_{2,N}   ...  0  0          ...  h^1_{N,N}
  ...
  0          0          ...  0           0  0          ...  0           ...  h^N_{N,0}  0  ...  0
  0          h^N_{1,1}  ...  0           0  h^N_{2,1}  ...  0           ...  0  h^N_{N,1}  ...  0
  ...
  0          0          ...  h^N_{1,N}   0  0          ...  h^N_{2,N}   ...  0  0          ...  h^N_{N,N} ],
where the second order derivatives can be expressed as
h^i_{i,0} = (λ^2/ε_{i,0}) (2T^t_{i,0} + γ_{i,0} T^t_{i,0}/ε_{i,0}) (1 + p_{i,0} λT^t_{i,0}/ε_{i,0}),

h^i_{i,i} = (λT^c_{i,i}/ζ_i)^2 (2 + 2(λ/ζ_i) p_{i,i} T^c_{i,i}),

h^i_{i,j}|_{j≠i} = (λT^c_{i,j}/ζ_j)^2 (2 + 2(λ/ζ_j) p_{i,j} T^c_{i,j}) + h^t_{i,j},

where h^t_{i,j} = (λ^2/ε_{i,j}) (2T^t_{i,j} + γ_{i,j} T^t_{i,j}/ε_{i,j}) (1 + p_{i,j} λT^t_{i,j}/ε_{i,j}),

and

h^{i'}_{i,j}|_{i'≠i} = (λT^c_{i,j} λT^c_{i',j}/ζ_j^2) (1 + 2(λ/ζ_j) p_{i,j} T^c_{i,j}).
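A key linear-algebra step in the remainder of the proof is that a symmetric rank-two matrix of the form (t e^T + e t^T)/2, with e the all-ones vector, has N − 2 zero eigenvalues and extreme eigenvalues (e^T t ± √N ∥t∥)/2. This can be checked numerically; the vector t below is an arbitrary illustrative choice:

```python
import numpy as np

def rank_two_spectrum(t):
    """Eigenvalues (ascending) of T = (t e^T + e t^T) / 2, e = all-ones."""
    t = np.asarray(t, dtype=float)
    e = np.ones(len(t))
    T = 0.5 * (np.outer(t, e) + np.outer(e, t))
    return np.sort(np.linalg.eigvalsh(T))

t = np.array([0.3, 0.1, 0.7, 0.2])
eigs = rank_two_spectrum(t)
# closed-form extreme eigenvalues k_+ and k_-
k_plus = (t.sum() + np.sqrt(len(t)) * np.linalg.norm(t)) / 2.0
k_minus = (t.sum() - np.sqrt(len(t)) * np.linalg.norm(t)) / 2.0
```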
Reordering the rows and columns, the Jacobian J can be
rewritten as
C 0 _. . ._ 0
0 _M1_ _. . ._ 0
_J =_
... ... ... ...
_[,]_
0 0 _. . ._ _MN_
where
so, let us denote by e the all-ones vector and define the
vector t[p]i [= (][p][1][,i][T][ c][1][,i][ p][2][,i][T][ c][2][,i][ . . . p][N,i][T][ cN,i][)][. Now, we]
can express matrix Ti[p] [as]
_Ti[p]_ [= 1] �t[p]i _[e][T][ +][ e][(][t]i[p][)][T][ �]._
2
The characteristic polynomial of the symmetric matrix Ti[p]
is given by [65]
_k[N]_ _[−][2]_
2 �k[2] _−_ 2(e[T] _t[p]i_ [)][k][ + (][e][T][ t]i[p][)][2][ −] _[N]_ _[∥][t]i[p][∥][2][�]._
We observe that Ti[p] [has][ N][ −] [2][ zero eigenvalues, and]
one non-negative and one non-positive eigenvalue given by
_k+ =_ �e[T] _t[p]i_ [+]√N _∥t[p]i_ _[∥]�/2 and k−_ = �e[T] _t[p]i_ _[−]√N_ _∥t[p]i_ _[∥]�/2,_
respectively. Therefore, the minimum eigenvalue of the
matrix [2]ζ[λ]i _[T][ p]i_ [is greater than][ −][1][ if]
_λ_ _√_
_ζi_ � _N_ _∥t[p]i_ _[∥−]_ _[e][T][ t]i[p]�_ _≤_ 1. (10)
Since t[p]i is a vector with non-negative elements, we
_√have that e[T]_ _t[p]i_ _[≥∥][t]i[p][∥]_ [and it also holds that][ ∥][t]i[p][∥≤]
_N maxj∈N tj,i. Therefore, the following inequalities_
hold
_λ_ _√_ _√_ _√_
_ζi_ � _N_ _∥t[p]i_ _[∥−]_ _[e][T][ t]i[p]�_ _≤_ _ζ[λ]i_ � _N maxj∈N_ _[t][j,i][(]_ _N −_ 1)�
max max
_≤_ _[Nλ]ζi_ _j∈N_ _[t][j,i][ ≤]_ _[Nλ]ζi_ _j∈N_ _[T][ cj,i][.]_
Since ρ[e]i _[≤]_ _[S][t][, we have that][ ζ][i][ ≥]_ [1][ −] _[S][t][, and therefore]_
_Nλ_ _Nλ_
max max (11)
_ζi_ _j∈N_ _[T][ cj,i][ ≤]_ 1 − _St_ _j∈N_ _[T][ cj,i][.]_
Based on (11) a sufficient condition for (10) is that
_λ maxj∈N T_ _[c]j,i ≤_ [1][−]N[S][t] [. This proves the theorem.]
REFERENCES
[1] M. Chiang and T. Zhang, “Fog and IoT: An overview of research
opportunities,” IEEE Internet of Things Journal, pp. 854–864, 2016.
[2] A. V. Dastjerdi and R. Buyya, “Fog computing: Helping the internet
of things realize its potential,” Computer, pp. 112–116, 2016.
[3] Y. Ai, M. Peng, and K. Zhang, “Edge computing technologies
for internet of things: a primer,” Digital Communications and
_Networks, vol. 4, no. 2, pp. 77–86, 2018._
[4] E. Cuervo, A. Balasubramanian, D.-k. Cho, A. Wolman, S. Saroiu,
R. Chandra, and P. Bahl, “MAUI: Making smartphones last longer
with code offload,” in Proc. of ACM MobiSys, 2010, pp. 49–62.
[5] K. Kumar, J. Liu, Y.-H. Lu, and B. Bhargava, “A survey of
computation offloading for mobile systems,” Mobile Networks and
_Applications, vol. 18, no. 1, pp. 129–140, 2013._
[6] Y. Wen, W. Zhang, and H. Luo, “Energy-optimal mobile application execution: Taming resource-poor mobile devices with cloud
clones,” in Proc. of IEEE INFOCOM, 2012, pp. 2716–2720.
[7] J. G. Andrews, S. Buzzi, W. Choi, S. V. Hanly, A. Lozano, A. C.
Soong, and J. C. Zhang, “What will 5G be?” IEEE J-SAC, pp.
1065–1082, 2014.
[8] G. P. Fettweis, “The tactile internet: Applications and challenges,”
_IEEE Vehicular Technology Magazine, pp. 64–70, 2014._
[9] M. S. Elbamby, M. Bennis, and W. Saad, “Proactive edge computing in latency-constrained fog networks,” in Proc. of IEEE
_Networks and Communications (EuCNC), 2017, pp. 1–6._
[10] S. Li, L. Da Xu, and S. Zhao, “The internet of things: a survey,”
_Information Systems Frontiers, vol. 17, no. 2, pp. 243–259, 2015._
[11] L. M. Vaquero and L. Rodero-Merino, “Finding your way in the
fog: Towards a comprehensive definition of fog computing,” ACM
_SIGCOMM Computer Communication Review, vol. 44, no. 5, pp._
27–32, 2014.
[12] G. Fodor, E. Dahlman, G. Mildh, S. Parkvall, N. Reider, G. Miklós,
and Z. Turányi, “Design aspects of network assisted device-todevice communications,” IEEE Communications Magazine, vol. 50,
no. 3, 2012.
$$M_i = \begin{pmatrix} h^1_{1,i} & h^1_{2,i} & \dots & h^1_{N,i} \\ h^2_{1,i} & h^2_{2,i} & \dots & h^2_{N,i} \\ \vdots & \vdots & \ddots & \vdots \\ h^N_{1,i} & h^N_{2,i} & \dots & h^N_{N,i} \end{pmatrix}, \qquad C = \begin{pmatrix} h^1_{1,0} & 0 & \dots & 0 \\ 0 & h^2_{2,0} & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & h^N_{N,0} \end{pmatrix}.$$

Observe that all diagonal elements of $C$ are nonnegative, and thus the matrix $C$ is positive definite. In order to show that $J$ is positive semidefinite we have to show that the symmetric matrix $M_i^s = \frac{1}{2}(M_i^T + M_i)$ is positive semidefinite.

The diagonal elements ${}^{d}h^s_{j,i}$ of $M_i^s$ are given by

$${}^{d}h^s_{j,i}\big|_{j=i} = \left(\frac{\lambda T^c_{i,i}}{\zeta_i}\right)^{2}\left(2 + \frac{2\lambda}{\zeta_i}\, p_{i,i} T^c_{i,i}\right),$$

$${}^{d}h^s_{j,i}\big|_{j\neq i} = \left(\frac{\lambda T^c_{j,i}}{\zeta_i}\right)^{2}\left(2 + \frac{2\lambda}{\zeta_i}\, p_{j,i} T^c_{j,i}\right) + h^t_{j,i},$$

where

$$h^t_{j,i} = \frac{\lambda^{2}}{\epsilon_{j,i}^{2}}\left(2T^t_{j,i} + \gamma_{j,i} T^t_{j,i}\right)\left(1 + \frac{p_{j,i}\,\lambda T^t_{j,i}}{\epsilon_{j,i}}\right),$$

and the off-diagonal elements ${}^{o}h^s_{j,i} = \frac{1}{2}\big(h^i_{j,i} + h^j_{i,i}\big)\big|_{j\neq i}$ are given by

$${}^{o}h^s_{j,i} = \frac{\lambda T^c_{i,i}\,\lambda T^c_{j,i}}{\zeta_i^{2}}\left(1 + \frac{\lambda}{\zeta_i}\big(p_{i,i} T^c_{i,i} + p_{j,i} T^c_{j,i}\big)\right).$$

Let us define the vector $T^c_i = (T^c_{1,i}\; T^c_{2,i}\; \dots\; T^c_{N,i})^{T}$ and the matrix

$$T^t_i = \begin{pmatrix} \operatorname{diag}\big(h^t_{j,i}\big)\big|_{j\in\mathcal{N}\setminus\{i\}} & 0 \\ 0 & 0 \end{pmatrix}.$$

Furthermore, let us define the matrix $T^p_i$ as

$$T^p_i = \begin{pmatrix} p_{1,i}T^c_{1,i} & \frac{p_{1,i}T^c_{1,i} + p_{2,i}T^c_{2,i}}{2} & \dots & \frac{p_{1,i}T^c_{1,i} + p_{N,i}T^c_{N,i}}{2} \\ \frac{p_{2,i}T^c_{2,i} + p_{1,i}T^c_{1,i}}{2} & p_{2,i}T^c_{2,i} & \dots & \frac{p_{2,i}T^c_{2,i} + p_{N,i}T^c_{N,i}}{2} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{p_{N,i}T^c_{N,i} + p_{1,i}T^c_{1,i}}{2} & \frac{p_{N,i}T^c_{N,i} + p_{2,i}T^c_{2,i}}{2} & \dots & p_{N,i}T^c_{N,i} \end{pmatrix}.$$

Now, matrix $M_i$ can be rewritten as

$$M_i = \frac{\lambda^{2}}{\zeta_i^{2}}\, T^c_i T^{cT}_i \circ \left(I + E + \frac{2\lambda}{\zeta_i}\, T^p_i\right) + T^t_i,$$

where $\circ$ denotes the Hadamard product, i.e., the component-wise product of two matrices.

It is well known that the identity matrix $I$ and the unit matrix $E$ are positive definite, while the positive definiteness of the matrix $T^c_i T^{cT}_i$ follows from the definition. Observe that the matrix $T^t_i$ is positive semidefinite as well, since it is a diagonal matrix with non-negative elements. Since the sum of two positive semidefinite matrices is positive semidefinite and the Hadamard product of two positive semidefinite matrices is also positive semidefinite [64], the proof reduces to showing that the matrix $I + E + \frac{2\lambda}{\zeta_i} T^p_i$ is positive semidefinite. To do so, we will show that the minimum eigenvalue of the matrix $\frac{2\lambda}{\zeta_i} T^p_i$ is greater than or equal to $-1$. To do
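The decomposition step above leans on the Schur product theorem cited as [64]: the Hadamard product of two positive semidefinite matrices is positive semidefinite. As a quick numerical illustration only (not part of the original proof; all names below are ours):

```python
import numpy as np

def is_psd(m, tol=1e-9):
    # Eigenvalues of the symmetric part must all be >= -tol.
    return bool(np.all(np.linalg.eigvalsh((m + m.T) / 2) >= -tol))

rng = np.random.default_rng(0)
B1 = rng.standard_normal((4, 4))
B2 = rng.standard_normal((4, 4))
A1, A2 = B1 @ B1.T, B2 @ B2.T   # A = B B^T is positive semidefinite by construction

hadamard = A1 * A2              # element-wise (Hadamard) product, A1 ∘ A2
```

The theorem guarantees `hadamard` is positive semidefinite whenever `A1` and `A2` are; the tolerance only absorbs floating-point noise.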
[13] K. Doppler, C.-H. Yu, C. B. Ribeiro, and P. Janis, “Mode selection
for device-to-device communication underlaying an lte-advanced
network,” in Proc. of IEEE WCNC, 2010, pp. 1–6.
[14] M. Zulhasnine, C. Huang, and A. Srinivasan, “Efficient resource
allocation for device-to-device communication underlaying lte network,” in Proc. of IEEE WiMob, 2010, pp. 368–375.
[15] A. P. Miettinen and J. K. Nurminen, “Energy efficiency of mobile
clients in cloud computing,” in Proc. of USENIX Conference on
_Hot Topics in Cloud Computing, 2010, pp. 4–4._
[16] J. R. Lorch and A. J. Smith, "Improving dynamic voltage scaling algorithms with PACE," in ACM SIGMETRICS Performance Evaluation Review, vol. 29, no. 1. ACM, 2001, pp. 50–61.
[17] W. Yuan and K. Nahrstedt, “Energy-efficient cpu scheduling for
multimedia applications,” ACM Transactions on Computer Systems
_(TOCS), vol. 24, no. 3, pp. 292–331, 2006._
[18] S. Bohez, T. Verbelen, P. Simoens, and B. Dhoedt, "Discrete-event simulation for efficient and stable resource allocation in collaborative mobile cloudlets," Simulation Modelling Practice and Theory, vol. 50, pp. 109–129, 2015.
[19] V. Cardellini, V. De Nitto Personé, V. Di Valerio, F. Facchinei, V. Grassi, F. Lo Presti, and V. Piccialli, "A game-theoretic approach to computation offloading in mobile cloud computing," Mathematical Programming, pp. 1–29, 2015.
[20] Y. Wang, X. Lin, and M. Pedram, “A nested two stage game-based
optimization framework in mobile cloud computing system,” in
_Service Oriented System Engineering, Mar. 2013, pp. 494–502._
[21] L. Pu, X. Chen, J. Xu, and X. Fu, “D2D fogging: An
energy-efficient and incentive-aware task offloading framework via
network-assisted D2D collaboration,” IEEE J-SAC, vol. 34, no. 12,
pp. 3887–3901, 2016.
[22] K. Aberer and Z. Despotovic, “Managing trust in a peer-2-peer
information system,” in Proc. of ACM International Conference on
_Information and Knowledge Management, 2001, pp. 310–317._
[23] S. D. Kamvar, M. T. Schlosser, and H. Garcia-Molina, “The
eigentrust algorithm for reputation management in P2P networks,”
in Proc. of ACM International Conference on World Wide Web,
2003, pp. 640–651.
[24] L. Xiong and L. Liu, “Peertrust: Supporting reputation-based trust
for peer-to-peer electronic communities,” IEEE Transactions on
_Knowledge and Data Engineering, pp. 843–857, 2004._
[25] P. Mach, Z. Becvar, and T. Vanek, “In-band device-to-device
communication in OFDMA cellular networks: A survey and challenges,” IEEE Communications Surveys & Tutorials, vol. 17, no. 4,
pp. 1885–1922, 2015.
[26] S. Sharma, N. Gupta, and V. A. Bohara, "OFDMA-based device-to-device communication frameworks: Testbed deployment and measurement results," IEEE Access, vol. 6, pp. 12019–12030, 2018.
[27] X. Chen, “Decentralized computation offloading game for mobile
cloud computing,” IEEE Transactions on Parallel and Distributed
_Systems, vol. 26, no. 4, pp. 974–983, 2015._
[28] D. Huang, P. Wang, and D. Niyato, “A dynamic offloading algorithm for mobile computing,” IEEE Transactions on Wireless
_Communications, vol. 11, no. 6, pp. 1991–1995, Jun. 2012._
[29] S. Jošilo and G. Dan, “Selfish decentralized computation offloading
for mobile cloud computing in dense wireless networks,” IEEE
_Transactions on Mobile Computing, 2018._
[30] S. Jošilo and G. Dán, “Decentralized scheduling for offloading
of periodic tasks in mobile edge computing,” in Proc. of IFIP
_NETWORKING, 2018._
[31] R. Mahmud, R. Kotagiri, and R. Buyya, “Fog computing: A
taxonomy, survey and future directions,” in Internet of Everything.
Springer, 2018, pp. 103–130.
[32] A. Brogi, S. Forti, A. Ibrahim, and L. Rinaldi, “Bonsai in the
fog: an active learning lab with fog computing,” in Proc. of IEEE
_International Conference on Fog and Mobile Edge Computing, Apr._
2018.
[33] L. I. Sennott, “Nonzero-sum stochastic games with unbounded
costs: discounted and average cost cases,” Mathematical Methods
_of Operations Research, vol. 40, no. 2, pp. 145–162, 1994._
[34] E. Altman, A. Hordijk, and F. Spieksma, "Contraction conditions for average and α-discount optimality in countable state Markov games with unbounded rewards," Mathematics of Operations Research, vol. 22, no. 3, pp. 588–618, 1997.
[35] A. S. Nowak, “Sensitive equilibria for ergodic stochastic games
with countable state spaces,” Mathematical Methods of Operations
_Research, vol. 50, no. 1, pp. 65–76, 1999._
[36] X. Masip-Bruin, E. Marín-Tordera, G. Tashakor, A. Jukan, and G.-J. Ren, "Foggy clouds and cloudy fogs: a real need for coordinated management of fog-to-cloud computing systems," IEEE Wireless Communications, vol. 23, no. 5, pp. 120–128, 2016.
[37] B. Tang, Z. Chen, G. Hefferman, T. Wei, H. He, and Q. Yang,
“A hierarchical distributed fog computing architecture for big
data analysis in smart cities,” in Proc. of the ASE BigData &
_SocialInformatics._ ACM, 2015, p. 28.
[38] O. Consortium et al., “OpenFog reference architecture for fog
computing,” Tech. Rep., Feb. 2017.
[39] C. Shi, V. Lakafosis, M. H. Ammar, and E. W. Zegura, “Serendipity: enabling remote computing among intermittently connected
mobile devices,” in Proc. of ACM MobiHoc, 2012, pp. 145–154.
[40] L. Liu, Z. Chang, X. Guo, S. Mao, and T. Ristaniemi, “Multiobjective optimization for computation offloading in fog computing,”
_IEEE Internet of Things Journal, vol. 5, no. 1, pp. 283–294, 2018._
[41] F. Facchinei and J.-S. Pang, Finite-dimensional variational inequalities and complementarity problems. Springer Science & Business Media, 2007.
[42] F. Facchinei, A. Fischer, and V. Piccialli, “On generalized nash
games and variational inequalities,” Operations Research Letters,
vol. 35, no. 2, pp. 159–164, 2007.
[43] F. Tinti, "Numerical solution for pseudomonotone variational inequality problems by extragradient methods," in Variational Analysis and Applications. Springer, 2005, pp. 1101–1128.
[44] E. Balevi and R. D. Gitlin, “Optimizing the number of fog nodes for
cloud-fog-thing networks,” IEEE Access, vol. 6, pp. 11 173–11 183,
2018.
[45] S. Sigg, P. Jakimovski, and M. Beigl, “Calculation of functions on
the RF-channel for IoT,” in Proc. of IEEE IOT, 2012, pp. 107–113.
[46] L. F. Bittencourt, J. Diaz-Montes, R. Buyya, O. F. Rana, and
M. Parashar, “Mobility-aware application scheduling in fog computing,” IEEE Cloud Computing, vol. 4, no. 2, pp. 26–35, March
2017.
[47] Y.-L. Chung, "Rate-and-power control based energy-saving transmissions in OFDMA-based multicarrier base stations," IEEE Systems Journal, vol. 9, no. 2, pp. 578–584, 2015.
[48] M. N. Tehrani, M. Uysal, and H. Yanikomeroglu, “Device-to-device
communication in 5G cellular networks: challenges, solutions, and
future directions,” IEEE Communications Magazine, vol. 52, no. 5,
pp. 86–92, 2014.
[49] A. Aragon-Zavala, Antennas and propagation for wireless communication systems. John Wiley & Sons, 2008.
[50] N. Balasubramanian, A. Balasubramanian, and A. Venkataramani,
“Energy consumption in mobile phones: a measurement study and
implications for network applications,” in Proc. of ACM Internet
_Measurement Conference (IMC), 2009, pp. 280–293._
[51] M. Satyanarayanan, "A brief history of cloud offload: A personal journey from odyssey through cyber foraging to cloudlets," GetMobile: Mobile Computing and Communications, pp. 19–23, 2015.
[52] L. Yang, J. Cao, Y. Yuan, T. Li, A. Han, and A. Chan, “A framework
for partitioning and execution of data stream applications in mobile
cloud computing,” ACM SIGMETRICS Performance Evaluation
_Review, vol. 40, no. 4, pp. 23–32, Apr. 2013._
[53] M. R. Rahimi, N. Venkatasubramanian, S. Mehrotra, and A. V.
Vasilakos, “On optimal and fair service allocation in mobile cloud
computing,” IEEE Transactions on Cloud Computing, 2015.
[54] S. Sardellitti, G. Scutari, and S. Barbarossa, “Joint optimization of
radio and computational resources for multicell mobile-edge computing,” IEEE Transactions on Signal and Information Processing
_over Networks, vol. 1, no. 2, pp. 89–103, Jun. 2015._
[55] X. Lyu, H. Tian, C. Sengul, and P. Zhang, “Multiuser joint task
offloading and resource optimization in proximate clouds,” IEEE
_Transactions on Vehicular Technology, vol. 66, no. 4, pp. 3435–_
3447, 2017.
[56] S. Jošilo and G. Dán, “A game theoretic analysis of selfish mobile
computation offloading,” in Proc. of IEEE INFOCOM, 2017.
[57] L. Xiang, B. Li, and B. Li, "Coalition formation towards energy-efficient collaborative mobile computing," in Proc. of IEEE ICCCN, 2015, pp. 1–8.
[58] S. Ghasemi-Falavarjani, M. Nematbakhsh, and B. S. Ghahfarokhi, “Context-aware multi-objective resource allocation in mobile cloud,” Computers & Electrical Engineering, vol. 44, pp. 218–
240, 2015.
[59] C. Wang, Y. Li, and D. Jin, “Mobility-assisted opportunistic computation offloading,” IEEE Communications Letters, vol. 18, no. 10,
pp. 1779–1782, 2014.
[60] J. Du, L. Zhao, J. Feng, and X. Chu, "Computation offloading and resource allocation in mixed fog/cloud computing systems with min-max fairness guarantee," IEEE Transactions on Communications, 2017.
[61] J. Oueis, E. C. Strinati, and S. Barbarossa, “The fog balancing:
Load distribution for small cell cloud computing,” in Proc. of IEEE
_Vehicular Technology Conference, 2015, pp. 1–6._
[62] R. Deng, R. Lu, C. Lai, T. H. Luan, and H. Liang, “Optimal
workload allocation in fog-cloud computing toward balanced delay
and power consumption,” IEEE Internet of Things Journal, vol. 3,
no. 6, pp. 1171–1181, 2016.
[63] D. Zeng, L. Gu, S. Guo, Z. Cheng, and S. Yu, “Joint optimization
of task scheduling and image placement in fog computing supported software-defined embedded system,” IEEE Transactions on
_Computers, vol. 65, no. 12, pp. 3702–3712, 2016._
[64] R. A. Horn and C. R. Johnson, Matrix analysis. Cambridge
University Press, 2012.
[65] D. S. Bernstein, Matrix mathematics: Theory, facts, and formulas
_with application to linear systems theory._ Princeton University
Press Princeton, 2005, vol. 41.
**Slađana Jošilo is a Ph.D. student at**
the Department of Network and Systems Engineering in KTH, Royal Institute of Technology. She received her
M.Sc. degree in electrical engineering
from the University of Novi Sad, Serbia in 2012. She worked as a research
engineer at the Department of Power,
Electronics and Communication Engineering, University of Novi Sad from 2013 to 2014. Her
research interests are design and analysis of distributed algorithms for exploiting resources available at the network
edge using game theoretical tools.
**György Dán (M’07) is an associate**
professor at KTH Royal Institute of
Technology, Stockholm, Sweden. He received the M.Sc. in computer engineering from the Budapest University of
Technology and Economics, Hungary
in 1999, the M.Sc. in business administration from the Corvinus University
of Budapest, Hungary in 2003, and the
Ph.D. in Telecommunications from KTH in 2006. He
worked as a consultant in the field of access networks,
streaming media and videoconferencing 1999-2001. He
was a visiting researcher at the Swedish Institute of
Computer Science in 2008, a Fulbright research scholar
at University of Illinois at Urbana-Champaign in 2012–2013, and an invited professor at EPFL in 2014–2015. His
research interests include the design and analysis of content management and computing systems, game theoretical
models of networked systems, and cyber-physical system
security in power systems.
|
{
"disclaimer": "Notice: The following paper fields have been elided by the publisher: {'references'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1109/TNET.2018.2880874?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1109/TNET.2018.2880874, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "other-oa",
"status": "GREEN",
"url": "https://kth.diva-portal.org/smash/get/diva2:1295997/FULLTEXT01"
}
| 2,019
|
[
"JournalArticle"
] | true
| 2019-02-01T00:00:00
|
[] | 23,426
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Business",
"source": "external"
},
{
"category": "Business",
"source": "s2-fos-model"
},
{
"category": "Law",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/fffed5294ff2689f528f44ee9ae4e9ff0c28dee1
|
[
"Computer Science",
"Business"
] | 0.900068
|
Smart Contracts for Global Sourcing Arrangements
|
fffed5294ff2689f528f44ee9ae4e9ff0c28dee1
|
Global Sourcing Workshop
|
[
{
"authorId": "1836284",
"name": "J. Hillegersberg"
},
{
"authorId": "144710255",
"name": "J. Hedman"
}
] |
{
"alternate_issns": null,
"alternate_names": null,
"alternate_urls": null,
"id": null,
"issn": null,
"name": null,
"type": null,
"url": null
}
| null |
# Smart Contracts for Global Sourcing Arrangements
Jos van Hillegersberg1(B) and Jonas Hedman2
1 Faculty of Behavioral, Management and Social Sciences, Industrial Engineering and Business
Information Systems, University of Twente, P.O. Box 217, 7500 AE Enschede, The Netherlands
j.vanhillegersberg@utwente.nl
2 Department of Digitalization, Copenhagen Business School,
Howitzvej 60, 2000 Frederiksberg, Denmark
jhe.digi@cbs.dk
**Abstract.** While global sourcing arrangements are highly complex and usually represent large value to the partners, little is known about the use of e-contracts or smart contracts and contract management systems to enhance the contract management process. In this paper we assess the potential of emerging technologies
for global sourcing. We review current sourcing contract issues and evaluate three
technologies that have been applied to enhance contracting processes. These are
(1) semantic standardisation, (2) cognitive technologies, and (3) smart contracts and blockchain. We argue that each of these seems to have merit for contract management and can potentially contribute to managing more complex and dynamic sourcing arrangements. The combination and configuration in which these three technologies will provide value to sourcing should be on the agenda for future research in sourcing contract management.
**Keywords: Global outsourcing · Contracts · E-Contracting · Smart contracts ·**
Semantic standards · Cognitive technology
## 1 Introduction
Sourcing is difficult. Unfortunately, one thing that many sourcing arrangements have in common is a lose-lose scenario. A recent story on Dell's and FedEx's eight-year contract situation illustrates this. In 2005, Dell and FedEx wrote a 100-page contract with numerous "Supplier shall" paragraphs to manage all possible issues in Dell's hardware return-and-repair process. During the following decade, both parties complied with the obligations outlined in the contract, which was even re-negotiated on three occasions. Dell was unhappy with the lack of proactivity from FedEx - no innovation. FedEx was unhappy with the detailed process descriptions that had to be met - very expensive. At the end of the contract, neither party was happy, but neither could afford to cancel or discontinue the relationship [1]. However, this is not a unique story in the history of sourcing arrangements and the contracts governing such relationships.
Contracts have existed since ancient times of trade and barter. Our current conceptualization of contracts can be traced back to the mid-1700s and the industrial revolution.
© Springer Nature Switzerland AG 2020
I. Oshri et al. (Eds.): Global Sourcing 2019, LNBIP 410, pp. 82–92, 2020.
In particular, the growing British economy and the adaptability and flexibility of the
English common law led to the development of modern contract law. Mainland Europe,
with its more rigid civil law, was slower in developing a legal framework governing
the role contracts. Not until the 20th century and with the growth of global trade and
sourcing agreements there was a need for international contract law. Today, we have a
number of global conventions, such as the Hague-Visby Rules and the UN Convention
on Contracts for the International Sale of Goods, that regulate trade and contracts.
So, what is a contract? Ryan defines a contract as “a legally binding agreement
which recognises and governs the rights and duties of the parties to the agreement” that
addresses the exchange of goods, services, money, or promises of any of those [2]. Over time, contracts and their interpretation have evolved. Most recently, a new type of contract has emerged - so-called e-contracts [3]. The development of e-contracts has followed the emergence of digital signatures and electronic identification [4]. E-contracts enable the promise of goods, services, or money to be controlled and monitored by digital technologies and potentially automated [3]. Furthermore, the International Association
for Contract and Commercial Management (IACCM) concludes in a recent report that
the future of contracts will focus more on relationships instead of costs. Therefore, we
expect that contract management will evolve to include a degree of “intelligence” and
become “smarter” while becoming more relationship oriented.
Much of the research on smart contracts relates to cryptocurrencies [5–7], but the field has broadened in scope to include topics such as the internet of things (IoT) [8], banking ledgers [9], and global shipping [10]. However, there is still not much research on the use of information technology in sourcing contracts. One reason could be the complexity of sourcing agreements, where a contract could last for many years, span continents, involve multiple actors, etc. Therefore, our aim is to explore the role of information technology in sourcing contract management.
The remainder of this paper is structured as follows: In the following section, we review contract types in sourcing arrangements. In the third section, we broaden
our review to issues and challenges in sourcing contract management. Thereafter, we
look into the information technology developments for contract management systems
including the recent emergence of smart contracts. In the fifth section we provide a
synthesis and our assessment of the use of these technologies for sourcing contracts. We
conclude the paper by combining and discussing our findings.
## 2 Contracts in Sourcing Arrangements
Outsourcing arrangements are agreed upon and governed by contracts. Contracts can vary from short and straightforward to voluminous and highly complex, cf. Dell and FedEx. There are several main types of contracts. The most common are Firm Fixed Price Contracts and Cost Reimbursement Contracts. In the first type, the price is not subject to any adjustment on the basis of the contractor's incurred costs - this is the simplest form of contract and imposes a minimal administrative burden. The second type gives the supplier payment of allowable incurred costs, to the extent prescribed in the contract. This opens up some room for interpretation and negotiation. The different types of contracts are determined by factors like the regulatory framework, the complexity of the outsourcing services specified, total value, duration of the contract, the number of partners involved, and any incentive or penalty clauses included. The variety in contracts follows the logic of Roman-based law: usus (the right to use a good), fructus (the right to what a good produces), and abusus (the right to sell a good). Thus, clearly, the contract governing a multi-year, multi-million sourcing deal is likely to differ greatly from the contract specification of a relatively simple and largely standardized micro-service. Still, sourcing contracts have much in common as well.
Sourcing Contract Templates, such as the sourcing contract template compiled by
the Dutch Platform Outsourcing, give an overview of elements that should be present
in a balanced and mature contract. This template was created by a committee of both
vendor and client representatives and aimed at medium size to larger organizations and
medium to complex services sourced [11]. The full table of contents can be viewed in
the appendix. While some of the typical contract elements are relatively static, others
require continuous monitoring and management. Think of contract changes, contract
performance monitoring and auditing, and the enactment of penalties and bonus/malus
schemes based on compliance and service level agreements.
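A bonus/malus scheme of the kind mentioned above can be sketched as a simple fee adjustment tied to SLA compliance. The target, rates, and per-0.1%-deviation scaling below are illustrative assumptions, not taken from any real contract:

```python
def bonus_malus(measured_uptime, target=0.999, bonus_rate=0.02,
                penalty_rate=0.05, base_fee=100_000.0):
    """Toy bonus/malus adjustment (all parameters hypothetical):
    reward over-performance against an availability target and
    penalise shortfalls, per 0.1 percentage point of deviation."""
    delta = measured_uptime - target
    rate = bonus_rate if delta >= 0 else penalty_rate
    # delta * 1000 converts the fraction into units of 0.1%.
    return base_fee * (1 + rate * delta * 1000)
```

For example, exactly meeting the target leaves the base fee unchanged, while a shortfall of 0.4 percentage points under a 5% malus rate per 0.1% reduces the fee by 20%.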
The role of contracts changes throughout the four phases of global sourcing
arrangements:
Pre-sourcing collaboration: A global sourcing arrangement begins when an initiator starts exploring the possibility of sourcing services or resources externally via a tender process. In this phase the scope of the collaboration is defined by assigning roles to each company involved, inviting potential companies, and defining the business requirements. During this phase a draft contract or contract frame may be present, but the phase is often largely informal, supported by trust and a sense of common purpose.
Sourcing arrangement creation and consolidation: After a sourcing arrangement is established, procedures are formalized and rules and obligations are described in a contract. This also includes specific pricing agreements, incentive/penalty clauses, and duration and renewal conditions. At the end of this phase, the selected services and/or resources should be implemented and made ready for use.
Sourcing arrangement delivery: In this phase, the sourced services or resources are executed. The contract should be managed and monitored. That is, actual execution and delivery performance should be monitored against the agreements defined in the contract. Contract rules should be executed when execution events trigger them. Incentives/penalties should be paid or charged as defined in the contract. Before the end date, the contract should be evaluated and renewed, or termination should be initiated.
Partnership termination or succession: In this phase a re-assessment of the contract is organized by the initiator and sourcing partners. Eventually this leads to termination of the contract, straightforward renewal, or renewal after adaptation.
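As an illustration only, the four phases above can be modelled as a small state machine. The phase and event names are our own shorthand, and the transitions are an assumption, not part of the paper:

```python
# Hypothetical sketch of the four sourcing phases as a state machine.
TRANSITIONS = {
    "pre-sourcing":  {"formalize": "creation"},
    "creation":      {"go-live": "delivery"},
    "delivery":      {"renew": "delivery",        # renewal keeps delivery running
                      "terminate": "termination"},
    "termination":   {},                          # terminal phase
}

def advance(phase, event):
    """Return the next phase, or raise if the event is invalid here."""
    try:
        return TRANSITIONS[phase][event]
    except KeyError:
        raise ValueError(f"event {event!r} not valid in phase {phase!r}")
```

The self-loop on "delivery" reflects that a renewal restarts the delivery phase rather than ending the arrangement.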
## 3 Challenges in Sourcing Contract Management
Sourcing and contract management is not easy. A case study on IT offshoring at Shell
Global IT functions, clearly illustrates the central role a contract plays in a sourcing
relationship [12]. Based on interviews with internal and external experts the study reveals
that a contract is instrumental in governance of a sourcing relationship. It is input to joint
processes between customer and vendor including performance management (is service
delivery in line with the contract), financial management (is cost allocation and pricing in
line with the contract), and escalation and relationship management (are measures taken
in case of anomalies in line with the contract). Clearly the contract is also central in the
contract management process. The Shell case also shows that interactions between the
many roles in a sourcing relationship are better manageable if well-defined contracts are
in place. Think of interactions between purchaser (client) and contract manager (vendor),
service manager (client) and delivery manager (vendor), and innovation manager (client)
and competence manager (vendor). Moreover, risk management and compliance benefit from well-specified contracts. This includes risks related to confidentiality and compliance with legislation.
The main results of the Shell case are confirmed in a survey by McKinsey [13] that reviewed 200 live sourcing contracts of over 50 companies, analysing three main dimensions: general terms and conditions, commercial terms and conditions, and governance structure. The review revealed several frequent issues that hindered both supplier and customer. Some remarkable results of the McKinsey study, related to sourcing contract management, are: (1) purchasers and providers faced unclear definitions of quality of service and limited tracking and control of business and financial targets (60%); (2) few incentives for joint innovation (90%); (3) limited collaboration (90%); (4) key performance indicators had not been defined (75%); (5) no value-based negotiation on price and no mutual incentives or gain-sharing initiatives (67%).
Companies are often involved in multiple sourcing arrangements. Each of these
arrangements may include multiple partners and a mix of services and resources (multi-vendor sourcing). "However, the lack of expressivity in current SLA specifications
and the inadequacy of tools for managing SLA and contract compositions is relevant.”
[14]. Outsourcing contracts span hundreds of pages of legal contractual language that
describes the delivered services and their performance. As the terms and conditions use a
variety of metrics usually specified in natural language, it becomes increasingly difficult
to monitor the performance of the contract [15].
Empirical research into IT outsourcing contracts has revealed that a large variety exists in their structure. Moreover, perhaps counter-intuitively, their length and complexity tend to grow as contract partners gain experience [16]. The contracts are unlikely to
be synchronized, i.e. a variety of contracts in different phases of their life cycle need to
be managed. In many cases contract management cannot keep up with the increasing
dynamics and complexity of the arrangements. This leads to insufficient monitoring and
execution of contracts, no insight in compliance, incorrect payments, ignoring the rules
specified and violation of renewal or termination conditions. Most contracts are still
defined in natural language and no support for automatic negotiation of smart contracts
is provided [17]. Contract management of sourcing arrangement can thus become a time
consuming and complex endeavor.
Many of these issues require organizational measures and practices to improve sourcing relationship contracting. Still, there also seems to be ample opportunity for emerging contract management technology to address the issues described above,
reduce the risks in sourcing of services and increase the value. While the research
into e-Contracting has made considerable progress over the last decades, there is no
comprehensive proposal that covers the full e-contracting life cycle [18].
## 4 IT for Sourcing Contract Management Systems
**4.1** **Contract Management Systems**
Contract Management Systems are emerging that support the phases of sourcing arrangements and manage the lifecycle of contracts. Clearly, the possibilities of contract management systems are much more powerful if the contracts that are managed are e-contracts or smart contracts and not simply digital scans of printed documents. Recent contract management system software is a stand-alone program or series of related software programs for storing and managing agreements with sourcing partners. Its overall purpose is to streamline administrative tasks and reduce overhead by providing a single, unified interface for managing new contracts, capturing data related to the contract, document authoring, and contract creation and negotiation. The contract management system can then follow the contract as it goes through the review and approval process, providing documentation for digital signatures and execution of the contract, including post-execution tracking and commitments management. Most contract management systems are designed from the perspective of the buyer and thereby have a cost focus. This view is criticized by [1] since a contract fundamentally deals with at least two parties - buyer and seller. However, contract management system providers do not view a contract management system as a platform business or as a two-sided market.
Various standards, architectures, and tools have been developed to facilitate the contract management process. These include automated support for identifying service providers and for negotiation and offer building. Business architectures have been proposed that build upon e-contract SLA standards. A study by [18] describes the design of such an environment that supports contract management processes such as price offering and billing, compliance, arbitration and mediation, reporting, and termination and archiving, and eventually also support for negotiation and merging of subcontractors' terms and conditions.
On the technology side, there is a historical progression from paper to digital formats with varying degrees of possibilities for re-negotiation. In its simplest form a digital contract is just a tick-off box at the end of a page or in an app, for instance when a company signs up for a Dropbox account to store or share files. The other extreme is a contract management system that supports all activities related to pre-sourcing collaboration, sourcing arrangement creation and consolidation, sourcing arrangement delivery, and partnership termination. Clearly, the role of information technology varies between these extremes of digitalized sourcing contracts, from keeping track of approvals to contract life-cycle management.
**4.2** **Semantic Standards for Contract Management**
An e-contract is any type of contract formed in the interaction between two or more parties using electronic means. The parties may be human or digital agents (computer software). This even includes contracts between two digital agents that are programmed to recognize the existence of a contract. See for instance the Uniform Computer Information Transactions Act, which provides rules regarding the formation, governance, and basic terms of an e-contract. E-commerce is the legacy of most research and conceptualizations of e-contracts.
Based on nine contracting templates, a study by IBM Research developed a Generic SLA Semantic Model for the Execution Management of e-Business Outsourcing Contracts [19]. They also use actual service agreements and, based on these, develop a semantic model of a service contract that includes common data elements (see Table 1). As the area of e-business hosting is relatively well understood, the study manages to standardise common service level agreements and measurement data and, based on these, define refund/reward specifications that can be automatically executed. The researchers also report that they have successfully developed a contract management system based on the semantic model and a service specification language that would reduce the financial risk of service-level violations [20].
**Table 1. Typical elements in an E-business service contract source: [19]**
Description of service
Functional requirements of the service system
Start date and duration of service
Pricing and payment terms
Terms and conditions for service installation, revisions, and termination
Planned service maintenance windows
Customer support procedures and response time
Problem escalation procedures
Acceptance testing criteria, i.e., quality requirements that must be met before the service can
be deployed for production use. These criteria could be stated in terms of, for example,
benchmark-based transaction throughput performance, business-oriented synthetic transaction
processing performance, fail-over latency, service usability, service system configurations (e.g.
computer main memory size), etc.
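Once an SLA is captured in a machine-readable form like the semantic model above, its refund/reward specifications can be evaluated automatically against monitoring data. The Python sketch below is our own illustration of that idea; the class and field names are hypothetical and do not come from the model in [19]:

```python
# Hypothetical sketch of a machine-readable SLA guarantee with an
# automatically executable refund/reward rule, in the spirit of [19].
from dataclasses import dataclass

@dataclass
class SlaGuarantee:
    metric: str                # e.g. "response_time_ms"
    threshold: float           # agreed service level
    refund_per_breach: float   # credit owed per violating measurement
    reward_per_success: float  # bonus earned per compliant measurement

    def settle(self, measurements):
        """Return the net refund (negative means a reward) for a period."""
        breaches = sum(1 for m in measurements if m > self.threshold)
        successes = len(measurements) - breaches
        return breaches * self.refund_per_breach - successes * self.reward_per_success

sla = SlaGuarantee("response_time_ms", 200.0,
                   refund_per_breach=5.0, reward_per_success=0.1)
print(sla.settle([150, 250, 180, 300]))  # 2 breaches, 2 successes
```

A contract management system such as the one reported in [20] would run this kind of settlement logic continuously against measurement feeds rather than on a static list.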
More recently, and with the advent of cloud computing, studies have addressed contracting of cloud services. Advances have been made in viewing services as dynamic
compositions and striving for machine-readable SLAs based on standardised quality
attributes and contract elements. The design of a tool named DAMASCO (DAta MAnager for Service COmposition) that offers SLA evaluation and assessment capabilities
to IT professionals during the design phase is an example of such a study [14]. The
authors propose an extension to the Web Service Agreement (WS-Agreement) standard proposed by the Open Grid Forum (OGF) to define agreements and their contexts
between providers and consumers, as well as a set of service attributes (e.g., name; context; guarantee terms; constraints), to obtain a flexible template for IT service contracts.
A contract can be composed of sub-contracts and includes standard specifications of items
such as cost, duration, service quality and penalty.
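Such a composable contract template can be sketched as a small recursive structure in which items like cost and penalty aggregate over sub-contracts. The names below are our own and do not reflect DAMASCO's actual design:

```python
# Illustrative sketch (hypothetical names, not DAMASCO's API) of a
# contract composed of sub-contracts, aggregating cost and penalty.
from dataclasses import dataclass, field

@dataclass
class Contract:
    name: str
    cost: float = 0.0
    penalty: float = 0.0
    subcontracts: list = field(default_factory=list)

    def total_cost(self):
        return self.cost + sum(c.total_cost() for c in self.subcontracts)

    def total_penalty(self):
        return self.penalty + sum(c.total_penalty() for c in self.subcontracts)

hosting = Contract("hosting", cost=100.0, penalty=10.0)
backup = Contract("backup", cost=40.0, penalty=5.0)
master = Contract("master", subcontracts=[hosting, backup])
print(master.total_cost())  # aggregated over sub-contracts
```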
**4.3** **Cognitive Technology for Sourcing Contract Management**
An alternative to striving for more formal specification of SLAs is using text-mining
techniques to elicit SLAs stated in the contract in natural language and evaluate their
performance using data from service performance logs. The work in [15] is an example, proposing Fitcon, a contract mining system that detects service level
agreements from contracts, tracks the delivery performance against them and predicts
the health of long-term contracts. The study develops a framework to automatically
extract SLAs and SLA metrics from contract documents, using IBM’s Watson Document
Conversion Service (DCS). Next, SLAs and their performance are mapped to internal
standards. Terms and conditions are extracted using a Natural Language Toolkit that
works on top of DCS. The approach was tested on actual client contracts and evaluated
with subject matter experts, demonstrating promising results.
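The extraction step can be illustrated in miniature. Fitcon relies on IBM's Watson Document Conversion Service and a Natural Language Toolkit on top of it; the regex-based sketch below is only a toy demonstration of the principle of pulling candidate SLA metrics out of a natural-language clause, not the paper's method:

```python
# Toy illustration of SLA mining: extract candidate SLA metrics from a
# natural-language contract clause. Patterns and metric names are ours.
import re

CLAUSE = ("The provider shall resolve severity-1 incidents within 4 hours "
          "and guarantee 99.9% monthly availability.")

patterns = {
    "resolution_time_hours": r"within (\d+) hours",
    "availability_percent": r"(\d+(?:\.\d+)?)% (?:monthly )?availability",
}

# Keep only metrics whose pattern actually matches the clause.
extracted = {name: float(m.group(1))
             for name, rx in patterns.items()
             if (m := re.search(rx, CLAUSE))}
print(extracted)
```

A production system would replace the hand-written patterns with a trained document-conversion and NLP pipeline, but the output, a mapping from standardized metric names to values, is the same kind of artifact that Fitcon tracks delivery performance against.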
Thus, a widely agreed, standardized model that would make it possible to apply templates to every type of contract and SLA, and to categorize contract terms for use in different service domains, is still a significant need [14].
**4.4** **Smart Contracts and Blockchain Applications for Sourcing Contract**
**Management**
More recently the secure storage of contracts in distributed ledger technology (DLT)
or blockchain has been proposed to allow for open access by partners involved in the
arrangement. Moreover, a DLT architecture can store mutually agreed upon transactions
in a safe and decentralized manner. For instance, a decentralized, blockchain-based platform for temporary employment contracts is proposed in [21]. Their platform design ensures temporary employees fair and legal remuneration (including taxes) for work performed and respect for the rights of all actors involved in a temporary arrangement, and offers the employer support for processing contracts through a fully automated and fast procedure. The full transparency and immutability that blockchain offers would enable compliance checking of the rights of both the worker and the employer. Their
proposed decentralized infrastructure makes use of the Smart Contract feature included
in new-generation blockchain architectures such as Ethereum. The Smart Contract is
stored in the blockchain and opens the possibility to store and execute contractual agreements without dependence on a regulator. The design by [21] proposed a work ledger,
that is used to register work offers to which workers can apply. Agreements and work
hours are also stored in the ledger. Smart contracts are used to check certification of
workers, allow governments to check compliance to legislation, manage the relationship
and transfer value automatically. The study describes an application of the concept to
agriculture but does not include an implementation nor a field test. While many details
still need to be addressed, the idea could also apply to international contracting of service workers in outsourcing arrangements without an intermediary platform or a sourcing
vendor. Smart contracting could thus be used to reduce the coordination costs involved
in resource-based sourcing contracts.
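The work-ledger idea in [21] boils down to deterministic bookkeeping over an append-only record of agreed work. The Python sketch below is our own hedged illustration: a real deployment would run this logic as an Ethereum smart contract, and the flat tax rate and function names are assumptions for the example only:

```python
# Hedged sketch of a work ledger: an append-only record of work entries,
# with remuneration (including a tax share) computed deterministically
# from the ledger. In [21] this logic would live in a smart contract.
TAX_RATE = 0.2  # assumed flat rate, for illustration only

ledger = []  # append-only list of (worker, hours, hourly_rate)

def register_work(worker, hours, hourly_rate):
    ledger.append((worker, hours, hourly_rate))

def settle(worker):
    """Compute gross pay, tax due, and net pay from the recorded entries."""
    gross = sum(h * r for w, h, r in ledger if w == worker)
    tax = gross * TAX_RATE
    return {"gross": gross, "tax": tax, "net": gross - tax}

register_work("alice", 8, 15.0)
register_work("alice", 4, 15.0)
print(settle("alice"))
```

Because every party reads the same ledger, the worker, the employer, and (in the proposal of [21]) the government can each verify that the settlement follows from the recorded hours, which is what removes the need for an intermediary.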
A related development is the verifiable storage of degrees, credentials and certificates
of professionals using blockchain and smart contracts. Especially in time/resource-based
contracts, verification of the qualifications of professionals could enhance trust in the
sourcing relationship. A conceptual architecture and prototype to this end is developed
in [22]. They use the Ethereum blockchain and Smart contracts written in Solidity
to manage the issuing of certificates to learners. Certification authorities validate or
revoke these, and smart contracts verify that only accredited certification authorities
can manage certification rights. Similar proof of concepts have been implemented by
specific universities such as the University of Nicosia, MIT, and the University of Twente [23].
The use of blockchain and smart contracts has also been piloted by companies such as SAP for their professional courses. Combined with educational domain standards (e.g. openbadges.org), such systems may evolve into trustable global infrastructures that allow companies to verify qualifications and make the verification steps part of their contracts.
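The access-control core of the certification scheme in [22], only accredited authorities may issue or revoke certificates, while anyone may verify one, can be modelled in a few lines. The real prototype runs as Solidity contracts on Ethereum; this plain-Python model, with hypothetical authority and certificate names, mirrors only the logic:

```python
# Simplified model of accredited certificate issuance and verification,
# mirroring the access-control rules of the scheme in [22].
accredited = {"example-university", "sap-training"}  # hypothetical authorities
certificates = {}  # cert_id -> (issuer, holder, valid)

def issue(issuer, cert_id, holder):
    if issuer not in accredited:
        raise PermissionError("issuer is not an accredited authority")
    certificates[cert_id] = (issuer, holder, True)

def revoke(issuer, cert_id):
    stored_issuer, holder, _ = certificates[cert_id]
    if issuer != stored_issuer:
        raise PermissionError("only the issuing authority may revoke")
    certificates[cert_id] = (stored_issuer, holder, False)

def verify(cert_id, holder):
    entry = certificates.get(cert_id)
    return entry is not None and entry[1] == holder and entry[2]

issue("example-university", "msc-001", "bob")
print(verify("msc-001", "bob"))   # certificate is valid for bob
revoke("example-university", "msc-001")
print(verify("msc-001", "bob"))   # revocation is immediately visible
```

On-chain, the `accredited` set would itself be managed by a governing smart contract, so that accreditation, issuance, and revocation are all publicly auditable.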
A study by [17] applies the idea of Smart contracts to managing dynamics in cloud
services. They propose a formal contracting language that should allow a contract to
be updated automatically to include new requirements such as increased service capacity needs. This language is used to manage automatic adaptation, consistency check,
and verification and change management of contracts. In addition, the authors propose a mechanism for autonomous negotiation based on the joint utility of client and
cloud provider. The study is innovative in that it does not strive to achieve an exact
match between client requirements and provider offerings. They focus on modelling the
dynamic aspects of SLAs, i.e., under what conditions SLAs can change, such as a price increment for enhanced response times of services. The smart contract proposal here
focuses more on the automatic reconfiguration of the contract rather than on a blockchain
architecture.
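The joint-utility idea can be made concrete with a toy model: instead of demanding an exact match between client requirements and provider offerings, negotiation selects the SLA variant that maximizes the sum of both parties' utilities. The formulation below is entirely our own, not the language or utility functions of [17]:

```python
# Toy joint-utility negotiation: choose the offer maximizing the sum of
# client and provider utility, rather than an exact requirements match.
# Offer format and utility functions are illustrative assumptions.
offers = [  # (price_per_month, guaranteed_response_ms)
    (100, 400),
    (150, 250),
    (220, 150),
]

def client_utility(price, response_ms):
    return 1000 / response_ms - 0.5 * price   # prefers fast and cheap

def provider_utility(price, response_ms):
    return price - 20000 / response_ms        # prefers high price, lax SLA

best = max(offers, key=lambda o: client_utility(*o) + provider_utility(*o))
print(best)
```

In the contracting language of [17], a change such as a price increment for a tighter response-time guarantee would simply move the arrangement to a different point on this joint-utility surface, which the contract can then adopt automatically.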
A smart contract application proposed by [24] even goes a step further. They implement a distributed peer-to-peer cloud storage platform DStore using smart contracts for
the storage lease and automating the transfers. This offers a secure and effortless storage cloud that also facilitates financial settlement based on actual usage. Their proposal
eliminates the role of third parties thus offering efficiency gains, especially when the
demand for storage space is dynamic.
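The settlement side of such a platform is straightforward once usage is metered on-chain: payment is a deterministic function of recorded usage, so a smart contract can execute the transfer without a third party. The sketch below is illustrative; the unit price and the gigabyte-hour metric are our assumptions, not figures from [24]:

```python
# Sketch of usage-based storage settlement: the lease payment is derived
# from metered gigabyte-hours, so a smart contract can automate the
# transfer. Rate and log format are illustrative assumptions.
PRICE_PER_GB_HOUR = 0.002  # assumed rate

usage_log = [  # (gigabytes_stored, hours)
    (50, 24),
    (80, 12),
]

def settlement(log):
    """Total payment due: metered gigabyte-hours times the unit price."""
    return sum(gb * hours for gb, hours in log) * PRICE_PER_GB_HOUR

print(round(settlement(usage_log), 2))  # 50*24 + 80*12 = 2160 GB-hours
```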
## 5 Assessment of Technologies for Sourcing Contracts
Based on the properties of the three technologies discussed, we provide an assessment
of the potential of each of them to address contracting requirements (Table 2).
In Table 2, we indicate a clear and promising match between requirements and the features of the technology with a (+), and leave cells empty where we do not see
a clear application of the technology. Where more research is needed to identify the
match, we place a “?”. The assessment presented in Table 2 illustrates that no single technology can address all contract management requirements in isolation. The three
emerging technologies should be combined and further developed to meet the demands
of complex and evolving sourcing arrangements.
**Table 2. Our assessment of the potential of reviewed technologies to address contracting issues**

|Contracting phase|Requirements for contract management technologies based on current issues|Semantic standards|Cognitive technology|Blockchain/Smart contracts|
|---|---|---|---|---|
|Contract Definition and updating|Can value-based negotiation be supported?|+|+|+|
||Can contracts and subcontracts be linked and aggregated?|+||?|
||Is service quality well defined, e.g. as precisely defined SLAs?|+|||
||Can KPIs be defined?|+|||
||Can terms and conditions be precisely specified?|+||?|
||Can incentives for joint innovation be defined?|?||?|
||Can renewal/termination conditions be specified?|?||?|
||Can multiple roles access the contract and update/change it according to their rights?|||+|
|Contract Execution and Monitoring|Are collaborative processes in defining and updating the contract supported?|||+|
||Is service delivery monitored to be in line with the contract?|+||+|
||Are cost allocation and pricing monitored to be in line with the contract?|+||+|
||Are business and financial targets tracked?|+||+|
||Can mutual incentives and gain-sharing initiatives be implemented?||+||
||Are measures taken in case of anomalies in line with the contract?|+||+|
|Contract Compliance and Health|Can the health of the contract be assessed?||+||
||Can business and financial targets be predicted?||+||
||Can confidentiality be managed?|||+|
The next challenge is to evaluate to what extent these technologies, possibly combined, can relieve sourcing contract issues and improve contract management practices and performance. We are currently working on theorizing how a particular type
of IT artefact - namely Contract Management Systems - can deploy a combination of
semantic, cognitive and smart contracting technologies.
## 6 Conclusions and Future Research
We started out by revisiting the role of contracts in sourcing relationships. The literature on
this area is vast, so we centred our introduction around the type of contracts currently in
use during the phases in the life cycle of a contract. Clearly, sourcing contracts are a core
element of a sourcing relationship and are of eminent importance. Next, we reviewed
issues with sourcing contracts reported on in the literature. Remarkably, while both
clients and vendors in sourcing relationships often have very mature knowledge of IT and
process automation, the sourcing contracts in place and the contract management process usually do not deploy any technology beyond traditional document management.
At the same time, various information technologies have emerged to support contract management. We evaluated the potential use of these technologies and systems
in improving contracting for global sourcing arrangements. In this paper we illustrated
this by reviewing three technologies: (1) Semantic standards, (2) Cognitive technology, and (3) Smart Contracting and Blockchain. These technologies have all received increasing
attention over the past few years.
However, while they have been applied to (micro) IT-outsourcing, they have not been
discussed and compared in the context of complex and long-running sourcing contracts.
Pilots are mainly reported on in computer science-oriented conferences and journals and usually make use of publicly available sourcing contracts or relatively standardized e-business or cloud sourcing arrangements. In Sect. 5, we provide an initial assessment of the match between the three technologies surveyed and the contracting requirements. We believe
further work on this question is needed to advance the use of technology in sourcing
contract management.
## References
1. Frydlinger, D., Hart, O.D.: Overcoming contractual incompleteness: the role of guiding principles. National Bureau of Economic Research, Working Paper 26245, September 2019. https://
doi.org/10.3386/w26245
2. Ryan, D.F.: Contract Law. Round Hall Ltd, Dublin (2006)
3. Krishna, P.R., Karlapalem, K.: Electronic Contracts. IEEE Internet Comput. 12(4), 60–68
[(2008). https://doi.org/10.1109/MIC.2008.77](https://doi.org/10.1109/MIC.2008.77)
4. Eaton, B., Hedman, J., Medaglia, R.: Three different ways to skin a cat: financialization in
the emergence of national e-ID solutions. J. Inf. Technol. 33(1), 70–83 (2018). https://doi.
org/10.1057/s41265-017-0036-8
5. Bartoletti, M., Pompianu, L.: An empirical analysis of smart contracts: platforms, applications,
and design patterns. In: Brenner, M., et al. (eds.) FC 2017. LNCS, vol. 10323, pp. 494–509.
[Springer, Cham (2017). https://doi.org/10.1007/978-3-319-70278-0_31](https://doi.org/10.1007/978-3-319-70278-0_31)
6. Luu, L., Chu, D.-H., Olickel, H., Saxena, P., Hobor, A.: Making smart contracts smarter.
In: Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications
Security, New York, NY, USA, pp. 254–269 (2016). https://doi.org/10.1145/2976749.297
8309
7. Velner, Y., Teutsch, J., Luu, L.: Smart contracts make bitcoin mining pools vulnerable. In:
Brenner, M., et al. (eds.) FC 2017. LNCS, vol. 10323, pp. 298–316. Springer, Cham (2017).
[https://doi.org/10.1007/978-3-319-70278-0_19](https://doi.org/10.1007/978-3-319-70278-0_19)
8. Christidis, K., Devetsikiotis, M.: Blockchains and smart contracts for the internet of things.
[IEEE Access 4, 2292–2303 (2016). https://doi.org/10.1109/ACCESS.2016.2566339](https://doi.org/10.1109/ACCESS.2016.2566339)
9. Peters, G.W., Panayi, E.: Understanding modern banking ledgers through blockchain technologies: future of transaction processing and smart contracts on the internet of money. In:
Tasca, P., Aste, T., Pelizzon, L., Perony, N. (eds.) Banking Beyond Banks and Money. NEW,
[pp. 239–278. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-42448-4_13](https://doi.org/10.1007/978-3-319-42448-4_13)
10. Jensen, T.D., Hedman, J., Henningson, S.: How TradeLens Delivers Business Value with
Blockchain Technology, 2019, vol. Forthcoming (2019)
11. Platform Outsourcing Netherlands: Template Sourcing Agreements v1.0. Dutch Outsourc[ing Association (2011). www.platformoutsourcing.nl, https://sourcingnederland.nl/](http://www.platformoutsourcing.nl)
12. de Jong, F., van Hillegersberg, J., van Eck, P., van der Kolk, F., Jorissen, R.: Governance of
offshore it outsourcing at shell global functions IT-BAM development and application of a
governance framework to improve outsourcing relationships. In: Oshri, I., Kotlarsky, J. (eds.)
Global Sourcing 2010. LNBIP, vol. 55, pp. 119–150. Springer, Heidelberg (2010). https://
doi.org/10.1007/978-3-642-15417-1_8
13. McKinsey, “Five ways to unlock win–win value from IT-services sourcing relationships.
McKinsey (2017). https://www.mckinsey.com/business-functions/mckinsey-digital/our-ins
ights/five-ways-to-unlock-win-win-value-from-it-services-sourcing-relationships. Accessed
09 Oct 2019
14. Longo, A., Zappatore, M., Bochicchio, A.M.: Service level aware - contract management.
In: 2015 IEEE International Conference on Services Computing, June 2015, pp. 499–506.
[https://doi.org/10.1109/scc.2015.74](https://doi.org/10.1109/scc.2015.74)
15. Madaan, N., et al.: A system for predicting health of an E-Contract. In: 2018 IEEE International
Conference on Services Computing (SCC), July 2018, pp. 57–64. https://doi.org/10.1109/scc.
2018.00015
16. Chen, Y., Bharadwaj, A.: An empirical analysis of contract structures in IT outsourcing. Inf.
[Syst. Res. 20(4), 484–506 (2009). https://doi.org/10.1287/isre.1070.0166](https://doi.org/10.1287/isre.1070.0166)
17. Scoca, V., Uriarte, R.B., Nicola, R.D.: Smart contract negotiation in cloud computing. In: 2017
IEEE 10th International Conference on Cloud Computing (CLOUD), June 2017, pp. 592–599.
[https://doi.org/10.1109/cloud.2017.81](https://doi.org/10.1109/cloud.2017.81)
18. Gómez, S.G., Rueda, J.L., Chimeno, A.E.: Management of the business SLAs for services
eContracting. In: Wieder, P., Butler, J., Theilmann, W., Yahyapour, R. (eds.) Service Level
Agreements for Cloud Computing, pp. 209–224. Springer, New York (2011). https://doi.org/
10.1007/978-1-4614-1614-2_13
19. Ward, C., Buco, M.J., Chang, R.N., Luan, L.Z.: A generic SLA semantic model for the
execution management of e-Business outsourcing contracts. In: Bauknecht, K., Tjoa, A.M.,
Quirchmayr, G. (eds.) EC-Web 2002. LNCS, vol. 2455, pp. 363–376. Springer, Heidelberg
[(2002). https://doi.org/10.1007/3-540-45705-4_38](https://doi.org/10.1007/3-540-45705-4_38)
20. Buco, M., Chang, R., Luan, L., Ward, C., Wolf, J., Yu, P.: Managing eBusiness on demand
SLA contracts in business terms using the cross-SLA execution manager SAM. In: The Sixth
International Symposium on Autonomous Decentralized Systems. ISADS 2003, April 2003,
[pp. 157–164 (2003). https://doi.org/10.1109/isads.2003.1193944](https://doi.org/10.1109/isads.2003.1193944)
21. Pinna, A., Ibba, S.: A blockchain-based decentralized system for proper handling of temporary
employment contracts. In: Arai, K., Kapoor, S., Bhatia, R. (eds.) SAI 2018. AISC, vol. 857,
[pp. 1231–1243. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-01177-2_88](https://doi.org/10.1007/978-3-030-01177-2_88)
22. Gräther, W., Kolvenbach, S., Ruland, R., Schütte, J., Torres, C., Wendland, F.: Blockchain for
[Education: Lifelong Learning Passport (2018). https://doi.org/10.18420/blockchain2018_07](https://doi.org/10.18420/blockchain2018_07)
23. Brinkkemper, F.L.: Decentralized credential publication and verification : a method for issuing
and verifying academic degrees with smart contracts, 28 June 2018. https://essay.utwente.nl/
75199/. Accessed 14 Oct 2019
24. Xue, J., Xu, C., Zhang, Y., Bai, L.: DStore: a distributed cloud storage system based on
smart contracts and blockchain. In: Vaidya, J., Li, J. (eds.) ICA3PP 2018. LNCS, vol. 11336,
[pp. 385–401. Springer, Cham (2018). https://doi.org/10.1007/978-3-030-05057-3_30](https://doi.org/10.1007/978-3-030-05057-3_30)