| column | dtype | values / range |
|---|---|---|
| lang | stringclasses | 1 value |
| s2FieldsOfStudy | listlengths | 0 – 8 |
| url | stringlengths | 78 – 78 |
| fieldsOfStudy | listlengths | 0 – 5 |
| lang_conf | float64 | 0.8 – 0.98 |
| title | stringlengths | 4 – 300 |
| paperId | stringlengths | 40 – 40 |
| venue | stringlengths | 0 – 300 |
| authors | listlengths | 0 – 105 |
| publicationVenue | dict | |
| abstract | stringlengths | 1 – 10k |
| text | stringlengths | 1.94k – 184k |
| openAccessPdf | dict | |
| year | int64 | 1.98k – 2.03k |
| publicationTypes | listlengths | 0 – 4 |
| isOpenAccess | bool | 2 classes |
| publicationDate | timestamp[us] | 1978-02-01 00:00:00 – 2025-04-23 00:00:00 |
| references | listlengths | 0 – 958 |
| total_tokens | int64 | 509 – 40k |
en
[ { "category": "Education", "source": "s2-fos-model" }, { "category": "Environmental Science", "source": "s2-fos-model" }, { "category": "Computer Science", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/020122fcd3271980e2e32e33bf7f15599b812a32
[]
0.884879
Building a Blockchain-Based Decentralized Crowdfunding Platform for Social and Educational Causes in the Context of Sustainable Development
020122fcd3271980e2e32e33bf7f15599b812a32
Sustainability
[ { "authorId": "2267990636", "name": "B. Țigănoaia" }, { "authorId": "2267993831", "name": "George-Madalin Alexandru" } ]
{ "alternate_issns": null, "alternate_names": null, "alternate_urls": [ "http://mdpi.com/journal/sustainability", "http://nbn-resolving.de/urn/resolver.pl?urn=urn:nbn:ch:bel-172127" ], "id": "8775599f-4f9a-45f0-900e-7f4de68e6843", "issn": "2071-1050", "name": "Sustainability", "type": "journal", "url": "http://www.e-helvetica.nb.admin.ch/directAccess?callnumber=bel-172127" }
Blockchain technology contributes to achieving the Sustainable Development Goals. Education for sustainable development (ESD) is UNESCO’s education sector response to the urgent and dramatic challenges the planet faces. The traditional way of donating money to charitable causes, such as education, has been through centralized methods and organizations that lack transparency, and donors often do not have a clear understanding of how their contributions are being utilized. Blockchain technology, particularly platforms like Ethereum and Polygon, has the potential to address the issues associated with traditional donation systems. This paper proposes a decentralized web3 application that utilizes blockchain technology to enhance transparency and efficiency in educational donations in the context of sustainable development. The platform leverages decentralized protocols and smart contracts to ensure secure and transparent transactions, enabling donors to track the utilization of their contributions and ensuring their funds reach their intended beneficiaries. This paper discusses the design and implementation of the platform, highlighting its features and potential for transforming the landscape of charitable donations. The software application can be used in education; a demo and several scenarios/use cases are presented and analyzed. The main results and contributions open further research directions, not only for the authors.
# Building a Blockchain-Based Decentralized Crowdfunding Platform for Social and Educational Causes in the Context of Sustainable Development

**Bogdan Tiganoaia 1,\* and George-Madalin Alexandru 2**

1 Entrepreneurship and Management Department, National University of Science and Technology POLITEHNICA Bucharest, 060042 Bucharest, Romania
2 Computer Science Department, National University of Science and Technology POLITEHNICA Bucharest, 060042 Bucharest, Romania
\* Correspondence: bogdantiganoaia@gmail.com

**Abstract:** Blockchain technology contributes to achieving the Sustainable Development Goals. Education for sustainable development (ESD) is UNESCO’s education sector response to the urgent and dramatic challenges the planet faces. The traditional way of donating money to charitable causes, such as education, has been through centralized methods and organizations that lack transparency, and donors often do not have a clear understanding of how their contributions are being utilized. Blockchain technology, particularly platforms like Ethereum and Polygon, has the potential to address the issues associated with traditional donation systems. This paper proposes a decentralized web3 application that utilizes blockchain technology to enhance transparency and efficiency in educational donations in the context of sustainable development. The platform leverages decentralized protocols and smart contracts to ensure secure and transparent transactions, enabling donors to track the utilization of their contributions and ensuring their funds reach their intended beneficiaries. This paper discusses the design and implementation of the platform, highlighting its features and potential for transforming the landscape of charitable donations. The software application can be used in education; a demo and several scenarios/use cases are presented and analyzed. The main results and contributions open further research directions, not only for the authors.

**Keywords:** education; blockchain; decentralization; web3 platform; Ethereum; smart contracts; transparency; sustainable development

Citation: Tiganoaia, B.; Alexandru, G.-M. Building a Blockchain-Based Decentralized Crowdfunding Platform for Social and Educational Causes in the Context of Sustainable Development. Sustainability 2023, 15, 16205. https://doi.org/10.3390/su152316205. Academic Editor: Yang (Jack) Lu. Received: 3 October 2023; Revised: 7 November 2023; Accepted: 15 November 2023; Published: 22 November 2023. © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

**1. Introduction**

The Sustainable Development Goals (SDGs) are a set of 17 general targets focused on sustainable development, covering aspects such as:
- education, poverty, and equality;
- changes in climate, infrastructure, water, and land;
- production and consumption.

The target year for the SDGs is 2030.

_About Sustainability Education_

UNESCO shares knowledge, produces information, and provides policy support and technical guidance to its Member States. It implements projects related to the SDGs.
It also acts as an advocate with one objective: that governments are able to provide quality Climate Change Education (CCE) [1]. Education for sustainable development is an exciting new field whose objective is to highlight the connections between the environment, society, and the economy, contributing to a sustainable future [2]:

1. Students are able to understand and apply, at a basic level, the concepts and principles related to sustainability.
2. Students know about sustainability viewed as a triad of economic, ecological, and social systems.

Today, blockchain is an important new technology. Its impact relates to:
- the economy, as it can transform (social) interactions;
- public institutions;
- our relationship with the environment [3].

How does blockchain technology contribute to achieving the SDGs? The answer can be summarized as follows:
1. Blockchain technology has applications that address sustainable goals in different ways.
2. It offers resilience and security capabilities, without any implications that may harm third parties.
3. The blockchain guarantees precision in the surveillance of actions [4]; we can use smart contracts, which validate the actions that occur within the blockchain.

The paradigm shift that started with blockchain technology can be seen in the world’s biggest companies, like Google, IBM, and Meta (formerly Facebook), which are adopting this technology to build new decentralized applications or to integrate it into existing products to enhance security, transparency, and innovation. Decentralized applications are software applications that live and run on a blockchain instead of on a single server. Such applications benefit from security (most blockchains use cryptographic algorithms to secure data), user privacy, and resistance to censorship. Blockchain technology is most commonly used in the financial and banking industries, with protocols like AAVE allowing users to stake digital assets such as cryptocurrencies and stablecoins to receive incentives and to borrow assets against their collateral.

The advantages of blockchains can also benefit the educational industry:
1. Shah et al. [5] presented a combined approach in which they integrated a blockchain for storing student information, used a machine learning algorithm to forecast the potential job roles for students post-graduation, and evaluated the outcomes using various machine learning methods.
2. Zhang et al. [6] presented an innovative approach aimed at improving the management of teaching information in higher education through the integration of blockchain technology.
3. Gresch et al. [7] presented a fresh technique for enhancing the transparency of educational certificates through the application of blockchain technology. They integrated blockchain networks at every stage of the system, ensuring the secure transmission of student certificates.

People often donate money to charities because they wish to give back to society and their communities, because they believe in a certain cause, or because they have experienced traumatic events and the donations of others have been helpful to them or their loved ones. Some of the main causes that people donate money to support include, for example:
- Disaster relief: to assist those affected by natural disasters like earthquakes, hurricanes, and floods.
- Health causes: to assist those in need of financial assistance for surgeries or to support research into diseases such as cancer, Alzheimer’s, or Parkinson’s.
- Animal shelters.
- Education: to support local schools and universities as well as to provide scholarships and school supplies for students in need.

With the development of technology, donations can now be made through a variety of channels, including social media, online donation platforms, and SMS. The Blackbaud Institute conducted research on charitable giving [8] and estimated that the total amount of donations made in the United States in 2021 was around USD 46.4 billion, of which USD 2.9 billion came from online donations, representing a 42% increase in overall online giving since 2019.

The problem with these traditional ways of donating money to charities is that they lack transparency. Centralized organizations, such as banks, handle the processing of donated money, and most of the time the donors do not have a clear understanding of how their contributions are being utilized. Furthermore, the distribution of the raised funds may be slow and bureaucratic, with funds often taking a long time to reach the intended beneficiaries.

The technology of distributed and decentralized networks has advanced significantly over the past few years, particularly blockchain technology. Numerous public decentralized blockchain protocols, like Bitcoin, Ethereum, Elrond, and Polygon, have been developed and are used by millions of users daily. This paper proposes an alternative to the conventional methods of donating funds to charities: a solution that uses a decentralized blockchain protocol and allows users to transparently see how all the contributions are being utilized to support each cause.

This paper is divided into seven sections: the second presents the motivation and the weaknesses of the existing centralized methods; the third presents the main concepts and technologies (such as blockchain, smart contracts, wallets, Ethereum, and Polygon); the fourth describes the proposed solution and its implementation; the fifth illustrates a short demo of the platform; the sixth discusses the security of the smart contract; and the final section wraps up this article and discusses ideas for future development.

**2. Motivation**

To design the solution and its smart contract, we started from the problems of the existing tools that we want to solve or mitigate, namely:
- High fees: By using a public blockchain protocol such as Polygon, we can eliminate the commissions applied by donation collection platforms. To interact with the platform, in particular with its smart contract, users still have to pay the fees imposed by the Polygon network to secure it, but these fees are significantly lower than those of existing platforms. For example, comparing GoFundMe’s fee of 2.9% + USD 0.30 per transaction with the Polygon network fee for a donation of USD 100 gives the following results (a back-of-the-envelope version of this calculation is sketched after this list):
  - GoFundMe: out of USD 100, USD 3.20 is deducted as fees and USD 96.80 reaches the beneficiaries.
  - Proposed solution: out of USD 100 paid in MATIC, the full amount reaches the balance of the contract, with the user separately paying a transaction fee of 0.00015112 MATIC, i.e., less than USD 0.001 (calculated taking into account the network load at the time and a price of 0.61 USD/MATIC on 11 June 2023).
- Bureaucracy and delays: Some platforms, such as DonorsChoose, use a campaign verification process to ensure that campaigns follow certain rules and standards. Due to the public nature of the smart contract, anyone can interact with it and create new campaigns. After the completion and expiration of the campaign deadline, its initiator can immediately use the amounts collected, without any further delays.
- Lack of transparency: This is a sensitive and difficult issue to address. Although most platforms provide users with a transparent way to track the progress of donations, after the end of a campaign and the transfer of money to beneficiaries there is no longer a way to follow how the funds are used to support the cause. The proposed solution implements, with smart contracts, a functionality in which the initiators of the campaigns must provide a description of each transaction, the address of the recipient, and the amount they intend to use. For example, there may be a campaign that aims to raise donations for the purchase of beds to equip a ward in a hospital. After the end of the campaign, once the necessary funds are collected, the initiator of the campaign can use the platform to send the necessary amount to the bed merchant. This is performed using a form in which the initiator completes the description of the transaction (for example, the name of the beneficiary hospital and the number of beds purchased can be mentioned), the amount of the transaction, and the address of the consignee (in the above-mentioned example, the address of the bed trader, which can theoretically be verified on the official website of the trader). All these transactions are public and available both on the blockchain and within the platform, providing transparency to donors and the opportunity to monitor the progress and use of donations.
- Geographical limitation: As mentioned above, platforms are generally only available in certain countries, such as the United States and some countries in Europe or Asia. Due to the decentralized nature of the blockchain, the proposed solution is available regardless of the geographical region in which the users are located, provided that the law of the country in which the activity takes place allows the holding and trading of cryptocurrencies [9].
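For concreteness, the fee calculation from the first bullet can be reproduced in a few lines. This is a minimal TypeScript sketch using only the snapshot values quoted above (2.9% + USD 0.30 platform fee, 0.00015112 MATIC gas, 0.61 USD/MATIC); these are the paper’s example figures, not live data.

```typescript
// Back-of-the-envelope comparison of the two fee models discussed above.
// All constants are snapshot values quoted in the text (assumed, not live).
const donationUsd = 100;

// Platform model: percentage fee plus fixed fee, deducted from the donation.
const platformFeeUsd = donationUsd * 0.029 + 0.3;           // 3.20 USD
const reachesBeneficiaryUsd = donationUsd - platformFeeUsd; // 96.80 USD

// On-chain model: the full donation reaches the contract balance;
// the donor pays the network gas fee separately.
const gasFeeMatic = 0.00015112;
const maticPriceUsd = 0.61;                    // 11 June 2023 snapshot from the text
const gasFeeUsd = gasFeeMatic * maticPriceUsd; // ≈ 0.000092 USD, well under USD 0.001

console.log({ platformFeeUsd, reachesBeneficiaryUsd, gasFeeUsd });
```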
**3. Main Concepts and Technologies**

_3.1. Blockchain_

Blockchain technology is considered by universities as an approach to improving the teaching and learning process. It encourages the participation of all stakeholders: (1) undergraduates; (2) professors; (3) family members [10]. Blockchain technology is based on Distributed Ledger Technology (DLT), which enables direct transactions between users without the need for intermediaries or a centralized authority to oversee them [11]. Transactions are validated with a consensus mechanism within an interconnected network of computers.

What is a blockchain? In 1991, Stuart Haber and W. S. Stornetta published a paper titled “How to Time-Stamp a Digital Document” [12], in which they proposed a method for digitally time-stamping documents using hash functions, digital signatures, and data stored in blocks. This paper is considered the first description of the blockchain concept. Today, the term “blockchain” refers to a distributed database or ledger that is shared among the nodes of a computer network and stores data in blocks that are chained together using different consensus algorithms.
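To make the “blocks chained together” idea concrete, here is a minimal, self-contained TypeScript sketch of hash chaining (illustrative only; it is not the paper’s code and it omits consensus entirely): each block commits to the hash of its predecessor, so tampering with any block invalidates every later hash.

```typescript
// Minimal hash-chained ledger: each block stores the SHA-256 hash of its
// predecessor, so altering any historical block breaks the chain.
import { createHash } from 'node:crypto';

interface Block {
  index: number;
  timestamp: number;
  data: string;
  prevHash: string;
  hash: string;
}

function hashBlock(b: Omit<Block, 'hash'>): string {
  return createHash('sha256')
    .update(`${b.index}|${b.timestamp}|${b.data}|${b.prevHash}`)
    .digest('hex');
}

function appendBlock(chain: Block[], data: string): Block {
  const prev = chain[chain.length - 1];
  const partial = {
    index: chain.length,
    timestamp: Date.now(),
    data,
    prevHash: prev ? prev.hash : '0'.repeat(64), // genesis block has no predecessor
  };
  const block: Block = { ...partial, hash: hashBlock(partial) };
  chain.push(block);
  return block;
}
```

A real blockchain adds a consensus algorithm on top of this structure to decide which block is appended next.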
Being open and distributed, the blockchain provides immutability, security, and transparency. In 2008, Satoshi Nakamoto published a paper titled “Bitcoin: A Peer-to-Peer Electronic Cash System”, in which he proposed a decentralized financial instrument using a digital currency called Bitcoin. It proposes a “peer-to-peer network using proof-of-work to record a public history of transactions” [13].

Several organizations are involved in the development of blockchain technology:
- IBM is the most involved and a principal investor.
- Mastercard has over 100 blockchain patents filed and uses the technology to increase protection against fraud and to reduce transaction costs [14].
- According to Investopedia [15], the three biggest blockchain companies are Coinbase Global Inc. (San Francisco, CA, USA)—COIN; Canaan Inc. (Beijing, China)—CAN; and Galaxy Digital Holdings Ltd. (New York, NY, USA)—BRPHF.

Last, but not least, some authors used blockchain technology to develop the BookChain project—a secure system for storing and sharing library books in academic institutions; for details, see [16]. Another application of blockchain technology is electronic voting systems—for more information, see [17].

_3.2. Cryptocurrency_

A cryptocurrency is a decentralized digital currency that uses cryptography to secure transactions and that lives on a blockchain, which may be interpreted as a public digital ledger recording all transactions made using the cryptocurrency. In contrast with traditional fiat currencies, cryptocurrencies are not issued, regulated, or backed by any financial institution. Instead, transactions are verified and approved by a network of users using various consensus algorithms such as Proof-of-Work or Proof-of-Stake.

The popularity of blockchain technology and cryptocurrencies increased considerably after the launch of Bitcoin in 2009. As stated on CoinMarketCap’s website (https://coinmarketcap.com, accessed on 10 September 2023) [18], on 23 April 2023 there were 23,562 cryptocurrencies with a total market capitalization of USD 1.17 trillion. According to a report by Crypto.com (for more information see https://crypto.com/, accessed on 10 September 2023), on-chain data analysis revealed that the total number of global crypto owners reached 425 million in December 2022 [19]. In addition, according to the “Developer Report” by Electric Capital, there are 23,343 monthly active developers in crypto.

_3.3. Polygon_

The blockchain trilemma [20] covers the challenge developers face in building a blockchain that is secure, decentralized, and scalable without sacrificing any of these characteristics. The Ethereum Foundation has focused its efforts on building a decentralized and secure blockchain at the cost of scalability; as a result, transactions can be slow and expensive. According to the Etherscan website (see [21]), on 24 April 2023 the average gas price was around 44 Gwei, which means that a simple transaction on the Ethereum network costs about USD 1.78 and takes roughly three minutes to complete. As stated on their website, “Polygon is a Layer 2 scaling solution” (see [22]). As a Layer-2 protocol, Polygon intends to increase transaction speed and reduce costs for users rather than duplicate Ethereum’s functionalities.
The native cryptocurrency of the Polygon network is MATIC. As a comparison, the current number of transactions per second on Ethereum is around 11, while on Polygon it is 34; Polygon, however, promises the potential of over 7000 transactions per second. Looking at the cost and completion time of a simple transaction, the average gas price on 24 April 2023 was 439.6 Gwei, which means that a transaction costs about USD 0.00823 and takes between 30 and 60 s to complete. Compared with Ethereum, this is 3 to 6 times faster and roughly 215 times less expensive.

_3.4. Sustainable Development Goals_

Some important SDGs (a selection from the set of 17 goals) include:
- Goal No. 1: No poverty;
- Goal No. 4: Quality education;
- Goal No. 5: Gender equality;
- Goal No. 7: Affordable and clean energy;
- Goal No. 11: Sustainable cities and communities.

**4. Application for Social and Educational Causes**

The web platform in this study is designed to facilitate interaction between the blockchain and the deployed smart contract through an intuitive and user-friendly interface for creating new campaigns and donating money to support the causes. The platform is developed using Next.JS [24], a React framework that enables the creation of full-stack web applications (NodeJS [23] is also a solution for developers). The modules of the application are described below.

_4.1. Frontend_

The frontend of the application is developed using React, HTML, and CSS, and its role is to provide an interactive and intuitive interface through which users can easily interact with the platform. This module also facilitates communication between the user, the smart contract, and the backend of the application. Using the frontend, users can view and donate cryptocurrencies to already-running campaigns, create a new campaign, and claim raised funds for their own campaigns.

_4.2. Backend_

The backend of the application is developed in JavaScript, and its role is to handle the server-side logic, data storage, and communication with other systems.

_4.3. IPFS Module_

4.3.1. The IPFS

The IPFS (InterPlanetary File System) is “a peer-to-peer hypertext protocol designed to preserve and develop the knowledge of humanity by transforming the web into an up-to-date, resilient and more open platform” [25]. It was created to solve the problems of scalability, redundancy, and censorship associated with traditional architectures for storage and file transfer. This protocol is well suited to the proposed platform because it can store large files, such as pictures, outside the blockchain network, while only immutable and permanent links to those files are stored on the blockchain.

4.3.2. The IPFS Module

The platform interface provides the ability to add a representative picture for each campaign. The contract itself has no way of storing pictures for campaigns. The classic solution would be to add the pictures to a database (more on databases and SQL in [26]), but this would limit the decentralized nature of the proposed solution. To avoid a centralized, platform-specific database, we chose to use the IPFS protocol to store campaign images and saved in the contract, for each campaign, a reference to the picture uploaded to IPFS. This reference is subsequently used by the application interface to retrieve the picture from IPFS and display it to users.
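A sketch of the upload step described in Section 4.3.2, assuming the ipfs-http-client package and a generic IPFS HTTP API endpoint (the URL and the function name are placeholders, not details taken from the paper):

```typescript
// Upload a campaign image to IPFS and return the CID that the smart contract
// stores as an immutable reference. The endpoint is an assumed example; a
// hosted node (e.g., a pinning service) would also require auth headers.
import { create } from 'ipfs-http-client';

const ipfs = create({ url: 'https://ipfs.example.com:5001/api/v0' });

export async function uploadCampaignImage(file: File): Promise<string> {
  const { cid } = await ipfs.add(file);
  return cid.toString(); // saved on-chain; resolved later via an IPFS gateway
}
```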
_4.4. Wallet Connect_

In order to use the platform, it is mandatory to have a wallet. When a wallet is created on the blockchain, it generates two paired keys: a public key used for identification and a private key used for authorization (e.g., for signing transactions). In the new Web 3.0 iteration, decentralized applications authenticate users via their wallets. For implementing the connection to a crypto wallet, RainbowKit (https://www.rainbowkit.com, accessed on 15 September 2023) [27] is used; it is a React library that facilitates wallet connections for decentralized applications.
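A minimal provider setup consistent with the description above, using the RainbowKit v1 / wagmi v1 style API on the Polygon Mumbai testnet; the app name and projectId are placeholders, and the exact API surface varies between library versions, so treat this as a sketch rather than the paper’s implementation:

```tsx
// Hypothetical RainbowKit + wagmi wiring (v1-style APIs; version-sensitive).
import '@rainbow-me/rainbowkit/styles.css';
import type { ReactNode } from 'react';
import { getDefaultWallets, RainbowKitProvider } from '@rainbow-me/rainbowkit';
import { configureChains, createConfig, WagmiConfig } from 'wagmi';
import { polygonMumbai } from 'wagmi/chains';
import { publicProvider } from 'wagmi/providers/public';

const { chains, publicClient } = configureChains([polygonMumbai], [publicProvider()]);
const { connectors } = getDefaultWallets({
  appName: 'Crowdfunding Demo',          // placeholder
  projectId: 'WALLETCONNECT_PROJECT_ID', // placeholder WalletConnect Cloud ID
  chains,
});
const config = createConfig({ autoConnect: true, connectors, publicClient });

// Wrap the Next.js app so every page can open the wallet-connection modal.
export function Providers({ children }: { children: ReactNode }) {
  return (
    <WagmiConfig config={config}>
      <RainbowKitProvider chains={chains}>{children}</RainbowKitProvider>
    </WagmiConfig>
  );
}
```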
_4.5. Smart Contracts_

Smart contracts are programs stored on a blockchain that run when predetermined conditions are met. They are used to automate the execution of an agreement, without the involvement of intermediaries or loss of time, and they can automate workflows by triggering the next action when conditions are met [28]. A smart contract works as a simple “if/when...then...” statement inserted into code on the blockchain. When the transaction is finished, the blockchain is updated; once recorded, the transaction cannot be changed. Another important aspect is that only parties who have been granted permission can view the results (based on [28]). The concept of “smart contracts” was first introduced in the early 1990s by computer scientist Nick Szabo in his work titled “Formalizing and Securing Relationships on Public Networks” [29]. Bitcoin is the first blockchain to use this concept, with Vitalik Buterin affirming in the Ethereum whitepaper that the “Bitcoin protocol actually does facilitate a weak version of a concept of ‘smart contracts’” [30]. Nowadays, smart contracts are implemented in numerous blockchain protocols, such as Ethereum [21,31] and Elrond [32], and can be seen as self-executing programs that ensure that the terms of an agreement are respected or fulfilled without the need for trust among the involved parties.

The smart contract for our application is developed in Solidity, an object-oriented, high-level language for smart contracts. Ethereum is considered the best-known and most secure blockchain, with more than 500,000 validators as of December 2022. Its main disadvantage is its low scalability: it can process only 10–30 transactions per second, resulting in high gas fees. To solve the scalability issue and avoid the high gas fees of the Ethereum network, which is a Layer-1 blockchain, we chose the Polygon network, an overlay of Ethereum (a Layer-2 protocol) that offers higher speed and lower gas fees; it can handle around 700 TPS, with gas fees of around USD 0.01 per transaction. The contract is deployed on the Polygon public blockchain, and users can interact with it either through the platform or directly on the blockchain using the Polygon scan website. The smart contract was developed by the authors without any additional cost. Figure 1 shows the scheme of the contract and the data structures used.

The data structures used are as follows:
- Donation: for each donation, this structure retains the donated value and the address of the donor;
- Transaction: for each transaction made by the campaign initiator, this structure retains the transferred value, the address of the recipient, and a description of the transaction;
- Campaign: this structure retains the information relevant to a campaign, namely: an ID, name, objective, balance, address of the originator, number of donations, a deadline saved in UNIX timestamp format (the number of seconds elapsed since 1 January 1970), a description of the campaign, a flag indicating whether or not the campaign is over, a reference to the campaign image uploaded to IPFS, and the final amount raised.

**Figure 1. Smart contract and data structures used.**

The structure of a smart contract is similar to that of a class in an object-oriented language such as C++ or Java. The “Crowdfunding” contract encapsulates the main functionality of the crowdfunding platform and provides the necessary functions and mappings for the management of campaigns, donations, and transactions. It contains the following attributes and methods:

1. Attributes:
- owner: the address of the contract owner;
- indexOfCampaigns: a global variable used to assign a unique ID to each campaign;
- campaigns: a mapping that associates the ID of a campaign with the campaign structure;
- userDonationPerCampaign: a mapping that retains the amount donated by each donor for each campaign;
- donationsPerCampaign: a mapping that retains the donations for each campaign;
- transactionsPerCampaign: a mapping that retains the transactions performed for each campaign after its completion.

2. Methods:
- createCampaign: allows users to create a new campaign; checks that the deadline is in the future, registers the campaign in the campaigns mapping, and increments indexOfCampaigns;
- donate: allows users to donate to a campaign; checks the donated amount, updates the campaign balance, records the donation in the donationsPerCampaign mapping, and increases the number of donations for the campaign;
- endCampaign: allows the owner of a campaign to end it; checks that the campaign is still active and that the deadline has been exceeded, and updates the status of the campaign;
- useFunds: allows the owner of a campaign to use the funds collected; checks various conditions (the campaign has been completed, the caller is the owner, the balance of the campaign is not zero, the amount used is positive, etc.), updates the campaign balance, records the transaction in the transactionsPerCampaign mapping, and transfers the funds to the recipient;
- getCampaign: a getter that returns information about a campaign;
- getBalanceOfContract: a getter that returns the balance of the contract;
- getUserDonationPerCampaign: a getter that returns the amount donated by a specific user for a specified campaign;
- getCampaigns: returns a list of “Campaign” structures representing all the campaigns created so far;
- getDonationsPerCampaign: returns a list of “Donation” structures representing all registered donations for a specified campaign;
- getTransactionsPerCampaign: returns a list of “Transaction” structures representing all transactions performed for a specified campaign.
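To illustrate how the frontend talks to this interface, here is a hedged ethers v5 sketch: the method names follow the list above, but the parameter lists, the ABI fragment, and the contract address are assumptions made for illustration, not the paper’s actual signatures.

```typescript
// Client-side interaction with the Crowdfunding contract via ethers v5.
// Method names come from the paper; signatures and address are assumed.
import { ethers } from 'ethers';

const CONTRACT_ADDRESS = '0x0000000000000000000000000000000000000000'; // placeholder

const abi = [
  // Assumed parameter order, based on the Campaign fields described above.
  'function createCampaign(string name, string description, uint256 goal, uint256 deadline, string imageCid)',
  'function donate(uint256 campaignId) payable',
  'function endCampaign(uint256 campaignId)',
  'function useFunds(uint256 campaignId, address payable recipient, uint256 amount, string description)',
  'function getBalanceOfContract() view returns (uint256)',
];

export async function donateToCampaign(campaignId: number, amountMatic: string) {
  // An injected wallet (e.g., MetaMask) exposes window.ethereum.
  const provider = new ethers.providers.Web3Provider((window as any).ethereum);
  await provider.send('eth_requestAccounts', []); // prompt the user to connect
  const contract = new ethers.Contract(CONTRACT_ADDRESS, abi, provider.getSigner());

  // msg.value carries the donation; the contract credits the campaign balance.
  const tx = await contract.donate(campaignId, {
    value: ethers.utils.parseEther(amountMatic),
  });
  await tx.wait(); // wait for inclusion in a Polygon block
}
```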
The diagram in Figure 2 shows how the crowdfunding platform is structured.

**Figure 2. The architecture of the application.**

**5. A Short Demo for Educational Campaigns**

_An Introduction to Ethereum_

Ethereum is a technology that is home to digital money, global payments, and applications [33]. It is a decentralized, open-source blockchain platform that natively supports smart contracts. It was first introduced in 2014 by Vitalik Buterin in his paper entitled “Ethereum: A Next-Generation Smart Contract and Decentralized Application Platform” [30] and launched in 2015. When the Ethereum network originally started, it relied on a proof-of-work consensus system that let miners compete by solving mathematical problems to add new blocks to the chain and earn Ether as a reward. In 2022, Ethereum switched to a proof-of-stake algorithm, in which validators stake their own Ether into a smart contract and use it as collateral in order to validate new blocks and earn rewards in Ether proportional to the amount they staked. Ether (ETH) is the native cryptocurrency of the Ethereum network. As stated on their website, “Ether is the main internal crypto-fuel of Ethereum” [33]; it is used to pay transaction fees when users interact with the network and as staking collateral to secure the network and earn rewards.

Regarding Ethereum, two important concepts can be discussed: (1) accountability in Ethereum—for details, see [34]; (2) anonymity in Ethereum—for details, see [35].

In this section, we illustrate how the platform can be used to create a charity or educational campaign, donate to different causes, and claim the raised funds after the campaign goal is reached. Blockchain technology can also be used to develop applications for other purposes, such as voting applications in non-profit settings (universities, communities, etc.); such applications can be described by use case. Next, we detail a short demo for educational campaigns.

**Step 1:** When you first access the platform—see Figure 3, you are redirected to the Donate page, where you can see all the ongoing campaigns.

**Figure 3. Donate page where all ongoing campaigns are displayed.**

**Step 2:** If you click on the “Create campaign” button, a form for creating a new campaign is displayed—see Figure 4.

**Figure 4. Form for creating a new campaign.**

**Step 3:** After you fill in the form by adding a title, telling your story, and setting the goal (in MATIC) and the deadline for your campaign, at the bottom of the form you will see a message informing you that you should first connect your wallet in order to create a new campaign—see Figure 5. You can connect your wallet using the button in the top right corner.

**Figure 5. Filled form for creating a new campaign.**

**Step 4:** After you click on the “Connect wallet” button—see Figure 6, a new modal will appear, from which you can choose the wallet provider you want to use.
**Figure 6. Modal for choosing which wallet provider you want to use.**

**Step 5:** After you connect your wallet, in the top right corner you will see some information regarding your wallet, like your account balance and part of your public address. A submit button will also appear at the bottom of the form. After clicking on the “Submit new campaign” button, you will be asked to sign a transaction, because this is how you interact with the smart contract, which is deployed on the Polygon Mumbai testnet. Figures 7–9 show some other aspects related to a new campaign.

**Figure 7. Form for creating a new campaign with the submit button active.**

**Figure 8. Signing the transaction for creating a new campaign.**

**Figure 9. Display message after the campaign was successfully created.**

**Step 6:** After successfully creating the campaign, you can either close the modal or choose to go to the campaign page. We chose to close the modal and go to the “Donate” page—see Figures 10 and 11, where we were able to see the newly created campaign.

**Figure 10. Donate page displaying the newly created campaign—Part 1.**

**Figure 11. Donate page displaying the newly created campaign—Part 2.**

**Step 7:** You can now click on the newly created campaign, and a new page will open displaying relevant information for the campaign, like the amount raised, the number of days left, the number of backers, the number of donations, the story, and a list of all the donations.

**Step 8:** Now you can support this campaign. First, fill in the amount of MATIC you want to donate; then click on the “Donate” button. You will again be asked to sign a transaction, and after the transaction is complete, a message will be displayed. Other aspects related to the campaign can be viewed in Figures 12–19.

**Step 9:** After closing the modal, we can see that the campaign raised 100% of its goal. We can also see the list of donations.

**Step 10:** If we scroll down, we can see an “End Campaign” section. This section only appears when you are the owner of the campaign. After successfully raising all the funds needed for your campaign, or after the deadline has passed, you can click on the “Finish campaign” button to end the campaign and claim the raised amount. You will again be asked to sign a transaction, and after the transaction is finished, a new message will appear on the screen.

**Figure 12. The Donate page ready for a donation of 0.5 MATIC.**

**Figure 13. Signing a transaction for donating to the campaign.**

**Figure 14. Display message after the donation was successful.**

**Figure 15. Campaign page with the updated state.**

**Figure 16. Campaign page with the “End Campaign” section.**

**Figure 17. Signing the transaction for finishing the campaign and claiming the raised funds.**

**Figure 18. Display message after ending the campaign successfully.**

**Figure 19. The Polygon scan—the transparent view of the crowdfunding platform.**

All the interactions with the smart contract can also be found on Polygon scan, which provides users with a transparent view of what is happening on the crowdfunding platform.
**6. Security of the Smart Contract—Unit Testing**

Unit tests are a form of software testing that verifies the individual functionality of the smallest components of an application, called units; these can be functions, methods, or classes. The main purpose of unit tests is to validate the correct behavior and functionality of these units. (You can develop your apps using [36] and monitor them using [37].) In the blockchain environment, a single mistake can compromise the security of the contract and, implicitly, of its funds. Once a contract is deployed on a blockchain, it becomes public and immutable, and any errors in the contract can no longer be fixed. Such errors can lead to contract vulnerabilities that can be exploited by malicious users; for this reason, contract testing is a necessary step. The Hardhat development environment provides support for testing contracts by writing unit tests. Figure 20 presents the coverage achieved by the unit tests written to verify the contract, measuring the coverage of statements, decision branches, functions, and lines in the code. The branch coverage is 94.74% because we could not simulate the case where the transfer of cryptocurrencies between the contract and an external address fails.

**Figure 20. Percentage of coverage of written unit tests.**
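A minimal example of such a unit test, written in TypeScript for Hardhat with the chai matchers; only the method names are taken from the paper, while the constructor and createCampaign argument shapes are assumptions for illustration:

```typescript
// Sketch of a Hardhat unit test for the "Crowdfunding" contract (ethers v5 style).
import { expect } from 'chai';
import { ethers } from 'hardhat';

describe('Crowdfunding', () => {
  it('credits a donation to the contract balance', async () => {
    const [, donor] = await ethers.getSigners();
    const Crowdfunding = await ethers.getContractFactory('Crowdfunding');
    const crowdfunding = await Crowdfunding.deploy();
    await crowdfunding.deployed();

    // Deadline one week in the future, as createCampaign requires.
    const deadline = Math.floor(Date.now() / 1000) + 7 * 24 * 3600;
    await crowdfunding.createCampaign(
      'Beds for a hospital ward', 'description', // assumed argument order
      ethers.utils.parseEther('10'), deadline, 'QmExampleImageCid'
    );

    await crowdfunding
      .connect(donor)
      .donate(0, { value: ethers.utils.parseEther('1') });

    expect(await crowdfunding.getBalanceOfContract())
      .to.equal(ethers.utils.parseEther('1'));
  });
});
```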
**7. Contributions, Future Work, Limitations and Conclusions**

_7.1. Contributions_

This platform aims to demonstrate the potential of blockchain technology and decentralized applications, provide a decentralized and transparent system for creating and managing crowdfunding campaigns, and enable greater trust and accountability between donors and recipients in the field of charitable giving to support educational causes. The contributions include:
- bibliographic research on the paper topic: Polygon, Ethereum, blockchain, smart contracts, etc.;
- the design of the platform (including its architecture);
- the implementation of the web application, consisting of the frontend and backend;
- the implementation of the smart contract for the platform;
- a short demo for charity and educational campaigns.

This project innovates by providing a decentralized solution for raising funds for schools, students, and the educational field in general. It uses recent technologies like Next.JS for the web platform development and combines blockchain protocols like Polygon, the network where the smart contract is deployed, with IPFS, where the platform stores images so as not to limit the decentralized character of the proposed solution.

_7.2. Future Work_

The platform’s next stage is to offer a brand-new, improved user interface. We also plan to integrate multiple blockchains and cryptocurrencies, explore the use of non-fungible tokens (NFTs) as a means of incentivizing donations, and extend the smart contracts to automate and streamline fund distribution. Blockchains can reshape the educational system as we know it: they provide significant benefits that can secure processes, and they create security and trust by eliminating the need for an intermediary to validate transactions.

_7.3. Limitations_

Generally speaking, the pursuit of objectives like decentralization, transparency, and privacy has limitations. The platform’s user interface and experience are still in their early stages; the user interface is in its second version and could be further improved. The application is not fully decentralized, because some data—such as images or descriptions of the charities and other relevant information—are not stored on the blockchain; we aim for these aspects to be improved or corrected in case of errors.

_7.4. Conclusions_

The Sustainable Development Goals (more info in [38])—the SDGs—are a set of 17 general targets focused on sustainable development issues. A selection from this set includes:
- Goal No. 2: Zero hunger;
- Goal No. 3: Good health and well-being;
- Goal No. 6: Clean water and sanitation;
- Goal No. 8: Decent work and economic growth;
- Goal No. 9: Industry, innovation, and infrastructure;
- Goal No. 10: Reduced inequalities;
- Goal No. 12: Responsible consumption and production;
- Goal No. 13: Climate action;
- Goal No. 14: Life below water;
- Goal No. 15: Life on land;
- Goal No. 16: Peace, justice, and strong institutions;
- Goal No. 17: Partnerships for the goals.

In line with this direction, this paper proposes a decentralized web3 application that utilizes blockchain technology and addresses the problems of traditional donation systems by creating a platform where people can create and manage charitable campaigns to support educational causes, all by interacting with a smart contract deployed on a public blockchain. The platform leverages decentralized protocols and smart contracts to ensure secure and transparent transactions, enabling donors to track the utilization of their contributions and ensuring that funds reach their intended beneficiaries. The main contributions are highlighted in a separate section, and the limitations and future work are also provided. For now, the software application can be used in education, but not only there, and it will gain new features in the near future.

**Author Contributions:** Methodology, B.T. and G.-M.A.; Software, G.-M.A.; Validation, G.-M.A.; Investigation, B.T.; Resources, B.T.; Writing—original draft, G.-M.A.; Writing—review & editing, B.T.; Visualization, G.-M.A.; Project administration, B.T.; Funding acquisition, B.T. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was funded by POLITEHNICA BUCHAREST.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Informed consent was obtained from all subjects involved in the study.

**Data Availability Statement:** The data presented in this study are available on request from the corresponding author.

**Conflicts of Interest:** The authors declare no conflict of interest.

**References**

1. Education for Sustainable Development. Available online: https://www.unesco.org/en/education-sustainable-development (accessed on 2 May 2023).
2. What Is Sustainability Education? 2021. Available online: https://online.sou.edu/degrees/education/msed/curriculum-and-instruction-stem/what-is-sustainability-edu/ (accessed on 2 May 2023).
3. Sirimanne, S.N.; Freire, C. How Blockchain Can Power Sustainable Development. 2021.
Available online: https://unctad.org/news/how-blockchain-can-power-sustainable-development (accessed on 2 May 2023).
4. Blockchain and Its Impact on the SDGs (Sustainability Objectives). Available online: https://icommunity.io/en/blockchain-sdgs/ (accessed on 2 May 2023).
5. Shah, D.; Patel, D.; Adesara, J.; Hingu, P.; Shah, M. Integrating machine learning and blockchain to develop a system to veto the forgeries and provide efficient results in education sector. Vis. Comput. Ind. Biomed. Art 2021, 4, 18. [CrossRef] [PubMed]
6. Zhang, L.; Ma, Z.; Ji, X.; Wang, C. Blockchain: Application in the System of Teaching Informatization Management of Higher Education. In Proceedings of the 2020 3rd International Conference on Smart BlockChain (SmartBlock), Zhengzhou, China, 23–25 October 2020; pp. 185–190. [CrossRef]
7. Gresch, J.; Rodrigues, B.; Scheid, E.; Kanhere, S.S.; Stiller, B. The Proposal of a Blockchain-Based Architecture for Transparent Certificate Handling. In Business Information Systems Workshops, BIS 2018; Abramowicz, W., Paschke, A., Eds.; Lecture Notes in Business Information Processing; Springer: Cham, Switzerland, 2019; Volume 339. [CrossRef]
8. MacLaughlin, S.; Perrotti, E.; Thomson, A. Charitable Giving Report. Blackbaud Institute, February 2022. Available online: https://institute.blackbaud.com/wp-content/uploads/2022/03/BBI_CGR_2022.pdf (accessed on 24 April 2023).
9. The World’s Leading Cryptocurrency Platform. Available online: https://crypto.com/ (accessed on 2 May 2023).
10. Chinnasamy, P.; Ramani, D.R.; Ayyasamy, R.K.; Jebamani, B.J.A.; Dhanasekaran, S.; Praveena, V. Applications of Blockchain Technology in Modern Education System—Systematic Review. In Proceedings of the 2023 International Conference on Computer Communication and Informatics (ICCCI), Coimbatore, India, 23–25 January 2023. [CrossRef]
11. Michael, J.; Cohn, A.; Butcher, J.R. Blockchain technology. Journal 2018, 1, 35–45. Available online: https://www.steptoe.com/a/web/171269/3ZEKzc/lit-febmar18-feature-blockchain.pdf (accessed on 20 April 2023).
12. Haber, S.; Stornetta, W.S. How to Time-Stamp a Digital Document; Springer: Berlin/Heidelberg, Germany, 1991; pp. 437–455.
13. Nakamoto, S. Bitcoin: A Peer-to-Peer Electronic Cash System. 2008. Available online: https://bitcoin.org/bitcoin.pdf (accessed on 20 April 2023).
14. Cernian, A.; Vlasceanu, E.; Tiganoaia, B.; Iftemi, A. Deploying blockchain technology for storing digital diplomas. In Proceedings of the CSCS23: The 23rd International Conference on Control Systems and Computer Science, Bucharest, Romania, 26–28 May 2021; IEEE Computer Society: New York, NY, USA, 2021; pp. 322–328.
15. 6 Biggest Blockchain Companies.
Available online: https://www.investopedia.com/10-biggest-blockchain-companies-5213784 (accessed on 2 May 2023).
16. Chinnasamy, P.; Elumalai, A.; Ayyasamy, R.K.; Kavya, S.P.; Dhanasekaran, S.; Kiran, A. BookChain: A Secure Library Book Storing and Sharing in Academic Institutions using Blockchain Technology. In Proceedings of the 2023 International Conference on Computer Communication and Informatics (ICCCI), Coimbatore, India, 23–25 January 2023. [CrossRef]
17. Dai, H.-N.; Wu, J.; Wang, H. Blockchain for Electronic Voting System—Review and Open Research Challenges. Sensors 2021, 21, 5874. [CrossRef]
18. Coin Market Cap. Available online: https://coinmarketcap.com/ (accessed on 2 May 2023).
19. Crypto Market Sizing. January 2023. Available online: https://content-hub-static.crypto.com/wp-content/uploads/2023/01/Cryptodotcom_Crypto_Market_Sizing_Jan2023-1.pdf (accessed on 24 April 2023).
20. Musharraf, M. What Is the Blockchain Trilemma? Available online: https://www.ledger.com/academy/what-is-the-blockchain-trilemma (accessed on 1 May 2023).
21. Ethereum Gas Tracker. Available online: https://etherscan.io/gastracker (accessed on 30 April 2023).
22. What Is Polygon. Available online: https://wiki.polygon.technology/docs/home/polygon-basics/what-is-polygon (accessed on 30 April 2023).
23. Next-Generation Node.js and TypeScript ORM. Available online: https://www.prisma.io/ (accessed on 20 April 2023).
24. React Framework for the Web. Available online: https://nextjs.org/ (accessed on 20 April 2023).
25. The Official Website of IPFS. Available online: https://ipfs.tech/ (accessed on 29 August 2023).
26. PostgreSQL: The World’s Most Advanced Open Source Relational Database. Available online: https://www.postgresql.org/ (accessed on 20 April 2023).
27. RainbowKit—The Best Way to Connect a Wallet. Available online: https://www.rainbowkit.com/ (accessed on 2 May 2023).
28. IBM. Available online: https://www.ibm.com/topics/smart-contracts (accessed on 1 August 2023).
29. Szabo, N. Formalizing and securing relationships on public networks. First Monday 1997, 2. [CrossRef]
30. Buterin, V. A next-generation smart contract and decentralized application platform. White Pap. 2014, 3, 1–36.
31. The Official Website of Ethereum. Available online: https://ethereum.org/ (accessed on 2 May 2023).
32. The Official Website of the MultiversX. Available online: https://multiversx.com/ (accessed on 2 May 2023).
33. Ethereum Whitepaper. Available online: https://ethereum.org/en/whitepaper/ (accessed on 2 May 2023).
34. Gramoli, V.
What Is Blockchain Accountability, and Why It Matters? Available online: https://www.redbelly.network/blog/what-is-blockchain-accountability-and-why-it-matters (accessed on 1 October 2023).
35. Pseudonymity and Anonymity: Be Untraceable in the Blockchain World. Available online: https://www.immunebytes.com/blog/pseudonymity-and-anonymity-be-untraceable-in-the-blockchain-world/ (accessed on 1 October 2023).
36. Electric Capital Developer Report. Available online: https://github.com/electric-capital/developer-reports/blob/master/dev_report_2022.pdf (accessed on 1 May 2023).
37. Bring Your Code, We’ll Handle the Rest. Available online: https://railway.app/ (accessed on 20 April 2023).
38. Population and the Sustainable Development Goals. Available online: https://populationmatters.org/un-sdgs/?gclid=Cj0KCQjwsIejBhDOARIsANYqkD1vyLpVSL9Xf_bsHFT_MCGZxFW99a4Tm6byzQzf5mAaGWf3_HoVRCgaAg8jEALw_wcB (accessed on 2 May 2023).

**Disclaimer/Publisher’s Note:** The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
{ "disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.3390/su152316205?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.3390/su152316205, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "CCBY", "status": "GOLD", "url": "https://www.mdpi.com/2071-1050/15/23/16205/pdf?version=1700659792" }
2023
[ "JournalArticle" ]
true
2023-11-22T00:00:00
[ { "paperId": "80260a4af770a022ccb4955b0999075dc908643a", "title": "Blockchain for Electronic Voting System—Review and Open Research Challenges" }, { "paperId": "7ec86f51a93d646e5e4cf15d4ee48d9570021adc", "title": "Integrating machine learning and blockchain to develop a system to veto the forgeries and provide efficient results in education sector" }, { "paperId": "1192f9947017ac30295fbc1621fccee7e7bbafd3", "title": "Blockchain Technology" }, { "paperId": "5b4cf1e37954ccd1ca6b315986d45904f9d2f636", "title": "Formalizing and Securing Relationships on Public Networks" }, { "paperId": "a85721c6d540e02fde873fbb07af3dab3b8c0452", "title": "Business Information Systems Workshops: BIS 2019 International Workshops, Seville, Spain, June 26–28, 2019, Revised Papers" }, { "paperId": "0dbb8a54ca5066b82fa086bbf5db4c54b947719a", "title": "A NEXT GENERATION SMART CONTRACT & DECENTRALIZED APPLICATION PLATFORM" } ]
12,895
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Engineering", "source": "s2-fos-model" }, { "category": "Environmental Science", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/020293b1e58f0d7aefdf0d5c72e95fa7fc01fc2e
[ "Computer Science" ]
0.879753
Distributed Strategy for Optimal Dispatch of Unbalanced Three-Phase Islanded Microgrids
020293b1e58f0d7aefdf0d5c72e95fa7fc01fc2e
IEEE Transactions on Smart Grid
[ { "authorId": "30811424", "name": "P. Vergara" }, { "authorId": "38506412", "name": "J. Rey" }, { "authorId": "1814090", "name": "H. Shaker" }, { "authorId": "2255353261", "name": "J. Guerrero" }, { "authorId": "9114734", "name": "B. Jørgensen" }, { "authorId": "134292628", "name": "L. D. da Silva" } ]
{ "alternate_issns": null, "alternate_names": [ "IEEE Trans Smart Grid" ], "alternate_urls": null, "id": "1c2f3998-b5ca-48ca-9991-94b71c71ecb7", "issn": "1949-3053", "name": "IEEE Transactions on Smart Grid", "type": "journal", "url": "http://ieeexplore.ieee.org/servlet/opac?punumber=5165411" }
This paper presents a distributed strategy for the optimal dispatch of islanded microgrids, modeled as unbalanced three-phase electrical distribution systems. To set the dispatch of the distributed generation (DG) units, an optimal generation problem is stated and solved distributively based on primal–dual constrained decomposition and a first-order consensus protocol, where units can communicate only with their neighbors. Thus, convergence is guaranteed under the common convexity assumptions. The islanded microgrid operates with the standard hierarchical control scheme, where two control modes are considered for the DG units: a voltage control mode, with an active droop control loop, and a power control mode, which allows setting the output power in advance. To assess the effectiveness and flexibility of the proposed approach, simulations were performed in a 25-bus unbalanced three-phase microgrid. According to the obtained results, the proposed strategy achieves a lower cost solution when compared with a centralized approach based on a static droop framework, with a considerable reduction on the communication system complexity. Additionally, it corrects the mismatch between generation and consumption even during the execution of the optimization process, responding to changes in the load consumption, renewable generation, and unexpected faults in units.
## Distributed Strategy for Optimal Dispatch of Unbalanced Three-Phase Islanded Microgrids

#### Pedro P. Vergara, Juan M. Rey, Hamid R. Shaker, Josep M. Guerrero, Fellow, IEEE, Bo N. Jørgensen, Luiz C. P. da Silva

Published in IEEE Transactions on Smart Grid, 10(3), 3210-3225. DOI: [10.1109/TSG.2018.2820748](https://doi.org/10.1109/TSG.2018.2820748)

Abstract—This paper presents a distributed strategy for the optimal dispatch of islanded microgrids, modeled as unbalanced three-phase electrical distribution systems (EDS). To set the dispatch of the distributed generation (DG) units, an optimal generation problem is stated and solved distributively based on primal-dual constrained decomposition and a first-order consensus protocol, where units can communicate only with their neighbors. Thus, convergence is guaranteed under the common convexity assumptions. The islanded microgrid operates with the standard hierarchical control scheme, where two control modes are considered for the DG units: a voltage control mode (VCM), with an active droop control loop, and a power control mode (PCM), which allows setting the output power in advance. To assess the effectiveness and flexibility of the proposed approach, simulations were performed in a 25-bus unbalanced three-phase microgrid. According to the obtained results, the proposed strategy achieves a lower cost solution when compared with a centralized approach based on a static droop framework, with a considerable reduction in the communication system complexity. Additionally, it corrects the mismatch between generation and consumption even during the execution of the optimization process, responding to changes in the load consumption, renewable generation and unexpected faults in units.

Index Terms—Consensus algorithm, distributed dispatch, optimal power flow, nonlinear programming, three-phase microgrid.
This work was supported by the São Paulo Research Foundation (FAPESP), Research Grants 2015/09136-8 and 2016/04164-6. Pedro P. Vergara and Luiz C. P. da Silva are with the Department of Systems and Energy, UNICAMP, University of Campinas, 13083-852 Campinas, São Paulo, Brazil (emails: {pedropa,lui}@fee.dsee.unicamp.br). Juan M. Rey is with Escuela de Ingenierías Eléctrica, Electrónica y de Telecomunicaciones (E3T), Universidad Industrial de Santander (UIS), 680002 Bucaramanga, Colombia (e-mail: juanmrey@uis.edu.co). Josep M. Guerrero is with the Department of Energy Technology, Aalborg University, Aalborg DK-9220, Denmark (e-mail: joz@et.aau.dk). Pedro P. Vergara, Hamid R. Shaker and Bo N. Jørgensen are with the Center for Energy Informatics, University of Southern Denmark, Odense DK-5230, Denmark.

NOTATION

Sets:
F: Set of phases {A, B, C}
G: Set of DG units, G = G1 ∪ G2
G1: Set of DG units operating in PCM, G1 ⊂ G
G2: Set of DG units operating in VCM, G2 ⊂ G
L: Set of lines
N: Set of nodes of the EDS
O_m: Set of operational constraints of DG unit m
W: Set of wind turbine (WT) units

Indexes:
φ, ψ: Phases φ ∈ F and ψ ∈ F
mn: Line mn ∈ L
m, n: Nodes m ∈ N and n ∈ N

Parameters:
α_m: Constant parameter of the DG operation cost
β_m: Linear parameter of the DG operation cost
γ_m: Quadratic parameter of the DG operation cost
Δt_D: Length for the discretization of the operational time
Δω: Angular frequency deviation
ΔV: Voltage magnitude deviation
D_m^P: Active power droop gain of DG units in VCM
D_m^Q: Reactive power droop gain of DG units
ε: Parameter to control the convergence of the active droop protocol
ε̂: Parameter to control the convergence of the frequency reference protocol
λ: Dual variable associated with the active power balance constraint
λ_m: Local estimation of λ by DG unit m
κ: Parameter to control the convergence of the consensus protocol
P_m^W: Expected active power generation of the WTs
P̄_m^G, P̲_m^G: Maximum and minimum active generation limits of the DG units
P_{m,φ}^D: Active load consumption
Q̄_m^G, Q̲_m^G: Maximum and minimum reactive generation limits of the DG units
Q_{m,φ}^D: Reactive load consumption
V̄, V̲: Maximum and minimum voltage magnitude
V_0: Nominal voltage magnitude
ω_0: Nominal angular frequency
ω_m: Frequency reference of the DG units in VCM
Z_{mn,φ,ψ}: Line impedance
Z'_{mn,φ,ψ}: Transformed line impedance, defined as Z'_{mn,φ,ψ} = Z_{mn,φ,ψ} ∠(θ_ψ − θ_φ)

Continuous Variables:
P_m^G: Total active output power of the DG units
P_m^{G0}: Total scheduled active power of the DG units
P_{mn,φ}: Active power flow in line mn at phase φ
Q_m^G: Total reactive output power of the DG units
Q_{mn,φ}: Reactive power flow in line mn at phase φ
S_{mn,φ}: Apparent power of line mn at phase φ
S_{mn,φ}^L: Apparent power losses in line mn at phase φ
V_{m,φ}: Voltage magnitude of nodes
ω: Frequency of the system

Remark: Throughout the paper, it is assumed that the DG unit m ∈ G (and equivalently the WT m ∈ W) is connected to node m ∈ N of the EDS.

I. INTRODUCTION

Traditionally, the optimal dispatch of a microgrid is performed in a centralized way, where a system operator gathers all the operational and technical information of the distributed generation (DG) units, aiming to define the generation dispatch that minimizes the overall cost [1]. Centralized optimal strategies for microgrids have been proposed in [2]–[4].
Due to the way information is exchanged with the central operator, these approaches require high-bandwidth communication infrastructures and high levels of connectivity, increasing the complexity of their implementation, especially considering that the number of DG units can be large. Moreover, these approaches do not show privacy-preserving characteristics, considering that units can belong to different owners, who might not be interested in sharing private operational information. In contrast, distributed approaches offer features that make them an interesting alternative, including scalability, adaptability, privacy preservation and robustness, allowing them to respond to changes in the number of operating units and unexpected increases in renewable generation or load consumption, among others [5].

Recently, distributed approaches have drawn a lot of attention in the technical literature [6]. In general, two main groups can be identified: (i) approaches based on consensus algorithms and (ii) approaches based on local updating rules. In all of these, the objective is to define the generation dispatch of each DG unit locally, limiting the amount of information that is exchanged between the DG units. For the first group, in [7] an iterative algorithm is developed based on the incremental cost principle, which corresponds to the consensus variable. In these works, a DG leader unit is required to balance the generation and load consumption. In [8], two consensus algorithms are executed in parallel to estimate locally the mismatch between generation and load consumption. In [9], a term is added to the consensus algorithm using only local information based on the nodal power balance equation, which plays the role of a gradient. In [10], [11] a modified consensus algorithm with finite-time convergence characteristics is presented, while in [12] a distributed gradient-based algorithm is developed, taking the derivative of the cost function of each DG unit as the consensus variable.

In the second group, simple updating rules are developed. These rules are continuously executed in an iterative procedure aiming to define the operational schedule of each unit until a convergence criterion is reached. For instance, in [13]–[17], the iterative rule is defined to be proportional to the active power mismatch between generation and load consumption. In addition to this, in [14], a proportional term based on the marginal cost is also considered; thus, units with low marginal cost will increase their output power faster than high-cost generators. As the active power mismatch is a global variable, and to be able to estimate it locally, in [13] local measurements of the frequency deviation are used, while in [14] and [17], a complex communication procedure between neighboring units is considered.

The main drawback of the above-discussed works [7]–[17] is that they assume that all the generators and loads are connected to one bus, ignoring the underlying operation of the electrical distribution system (EDS). In general, these works assume that a balancing mechanism operates, i.e., a leader unit supplies the required active power to correct the mismatch between generation and load consumption, all this while the optimization algorithm is executed. In an actual operation, this is not a practical assumption, since the mismatch between generation and load consumption is corrected with a faster speed of response (normally, in the order of seconds) by the lower-level controllers [18].
Moreover, as in islanded operation the DG units are responsible for providing the frequency and voltage magnitude references for the system, if the optimal dispatch does not consider the control operation of the DG units, the system might operate with a higher frequency or voltage deviation, and consequently, the optimal schedule might not be technically feasible. In this regard, in [5], [19], a distributed approach including the operation of the EDS was developed. However, a centralized communication infrastructure is still required, while the unbalanced operation of the microgrid is not taken into account.

Considering this, a distributed strategy for the optimal dispatch of islanded microgrids is presented in this paper. The microgrid is modeled as an unbalanced three-phase EDS, operating within a hierarchical control scheme. To define the active dispatch of the DG units, an optimal generation problem is stated and solved distributively using a first-order consensus protocol, where units can communicate only with their neighbors. This strategy is based on primal-dual constrained decomposition theory, in order to distribute the problem among the units and take their technical operational requirements into account locally. Thus, convergence is guaranteed under the common convexity assumptions. Additionally, two control modes are considered for the DG units: a voltage control mode (VCM), with an active droop control loop, and a power control mode (PCM), which allows setting the output power of the unit in advance. To assess the effectiveness and flexibility of the proposed approach, simulations were performed in a 25-bus microgrid for different case studies. According to the results, the proposed strategy achieves a lower cost solution when compared with a centralized approach, with a considerable reduction in the communication system complexity, responding to changes in the load consumption, renewable generation, and unexpected faults in units.

Among all the features previously discussed, the main contributions of this paper can be summarized as follows:

- The proposed strategy considers the control modes of the DG units (VCM and PCM) in the optimization approach. In this context, as the units in VCM operate with a droop control loop, these are responsible for correcting the active power mismatch between generation and consumption after any load or renewable generation increase/decrease, and more importantly, during the execution of the optimization algorithm.
- The proposed strategy is considered to operate within the standard hierarchical control framework for microgrids. Thus, it is ensured that the dynamics of the optimization algorithm and the primary control layer (implemented with droop control) are decoupled, which helps to maintain the stability of the system. Moreover, it considers a correction protocol, in order to operate in steady state with a lower frequency deviation.

II. CONTROL AND OPERATION OF MICROGRIDS

Microgrids can operate in two modes: grid-connected or islanded mode. In grid-connected mode, the frequency and voltage magnitude references are provided by the main grid, while in islanded mode, these must be provided by the DG units [20]. Since the operation of microgrids deals with issues from different technical areas, time scales and infrastructure levels, the hierarchical control scheme has been widely accepted as the standard solution [18].
In general, the hierarchical control scheme comprises three different and well-defined levels: (i) a primary level, the fastest level, responsible for the local control of the DG units, generally based on droop control, which does not require communications; (ii) a secondary level, which deals with the steady-state deviation of the frequency and the voltage magnitude due to the operation of the primary level; and (iii) a tertiary level, the slowest level, responsible for the economical operation of the system, implemented through a dispatch algorithm, generally based on the solution of an optimization problem.

A. Primary and Secondary Control Level

Regarding the primary control level, DG units can operate in two different control modes: power control mode (PCM) and voltage control mode (VCM) [21]. In islanded operation, at least one unit is required to operate in VCM to define the frequency and voltage magnitude reference of the EDS [18]. Hence, if the unit operates in PCM, the output power can be set at the schedule value defined by the tertiary control level, i.e.,

$$P_m^G = P_m^{G0}, \quad \forall m \in \mathcal{G}_1. \qquad (1)$$

In this case, the output power is independent of the state of the EDS. Different from this, if the unit operates in VCM, its output power cannot be set in advance, since this unit operates with a droop control loop. Therefore, all units in VCM share the remaining active power mismatch between generation and consumption, in inverse proportion to their active droop gain ($D_m^P$). The droop operation of a unit in VCM can be represented using the expression

$$\omega = \omega_m - D_m^P P_m^G, \quad \forall m \in \mathcal{G}_2, \qquad (2)$$

where $P_m^G$ is the total active output power of the unit. A schematic representation of both control modes is shown in Fig. 1.

[Figure 1. Control modes of the DG units: VCM and PCM. The line in the VCM indicates the direction of variation in $P_m^G$ when $D_m^P$ is decreased. Additionally, $\omega_m$ can be modified in order to reduce the frequency deviation.]

The droop gain $D_m^P$ reflects the slope of the $\omega$-$P$ curve. Thus, the total output power of units in VCM (i.e., $P_m^G$) can be set to their scheduled value ($P_m^{G0}$) by tuning $D_m^P$, all this in order to minimize the overall generation cost. The main difference between these control modes is related to the operation of the control loops and how these set the output power in steady-state conditions. Thus, they are essentially independent, which means that each DG unit can decide its operation mode (see Sec. IV-C for a further discussion). Implementation and stability issues related to the transition between both control modes are discussed in detail in [21].

As for the reactive power $Q_m^G$, in both control modes all the DG units share the reactive power consumption, defining their output voltage magnitude using the expression

$$V_{m,\varphi} = V_0 - D_m^Q Q_m^G, \quad \forall m \in \mathcal{G}. \qquad (3)$$

This droop control is based on the assumption that the output impedance of the DG unit is inductive, which is valid for synchronous-based and the majority of inverter-based units coupled to the EDS with an inductor filter. Nevertheless, in case of a non-inductive output impedance, control strategies that aim to decouple the active and reactive power regulation can be implemented, e.g., virtual output impedance strategies [22].

Regarding the secondary control level, its main function is related to the definition of the frequency and voltage references, i.e., $\omega_m$ and $V_0$. This is done in order to reduce the frequency and voltage deviation in steady-state conditions [23], as shown in Fig. 1. The operation of the secondary control level can be seen as a correction process, which operates with a lower speed of response than the primary control, in order to maintain their dynamics decoupled.
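To make the droop behavior in (1)-(3) concrete, the following minimal sketch (ours, not from the paper; all numerical values are illustrative) computes the steady-state frequency and the resulting VCM power sharing on a lossless single-bus system:

```python
import numpy as np

# Minimal sketch (not from the paper): steady-state active power sharing
# among VCM units under the omega-P droop in Eq. (2), on a lossless
# single-bus system. Gains, references and loads below are assumptions.
D_P = np.array([1.57, 3.14])                    # active droop gains (VCM)
omega_ref = np.array([60.0, 60.0]) * 2 * np.pi  # frequency references w_m
P_pcm = 25.0    # fixed injection of the PCM units, Eq. (1) [p.u.-scaled kW]
P_load = 400.0  # total active load consumption

# With a common frequency w, Eq. (2) gives P_m = (w_m - w) / D_m^P.
# Enforcing sum(P_m) + P_pcm = P_load yields the steady-state frequency:
omega = (np.sum(omega_ref / D_P) - (P_load - P_pcm)) / np.sum(1.0 / D_P)
P_vcm = (omega_ref - omega) / D_P  # each VCM unit's share of the mismatch

print(f"frequency deviation: {omega_ref[0] - omega:.4f} rad/s")
print("VCM output powers:", P_vcm)  # inversely proportional to D_m^P
```

As the sketch shows, the mismatch not covered by PCM units is split among VCM units in inverse proportion to their droop gains, which is exactly the mechanism the dispatch strategy later exploits by tuning $D_m^P$.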
B. Tertiary Control Level

Regarding the tertiary level, to define the operational schedule of the DG units, an optimization problem is formulated and solved. The formulation of this problem must account for all the operational constraints of the units, while the total load consumption is supplied at minimum generation cost. In general, this problem is known as the optimal generation problem, which can be stated using the formulation given by (4)-(6), for a microgrid comprising DG units, WT units and loads:

$$\min \; \sum_{m \in \mathcal{G}} f_m(P_m^{G0}) \qquad (4)$$

subject to

$$\sum_{m \in \mathcal{G}} P_m^{G0} + \sum_{m \in \mathcal{W}} P_m^{W} = \sum_{m \in \mathcal{N}} \sum_{\varphi \in \mathcal{F}} P_{m,\varphi}^{D}, \qquad (5)$$

$$\underline{P}_m^G \le P_m^{G0} \le \overline{P}_m^G, \quad \forall m \in \mathcal{G}. \qquad (6)$$

In the above formulation, the objective function in (4) aims to minimize the overall generation cost, where $f_m(P_m^{G0})$ models the generation cost of each DG unit, which can be approximated with a quadratic function [1], such as

$$f_m(P_m^{G0}) = \gamma_m (P_m^{G0})^2 + \beta_m P_m^{G0} + \alpha_m, \quad \forall m \in \mathcal{G}, \qquad (7)$$

where usually $\gamma_m$ holds a positive value, which yields convexity of the generation cost function. For the operational constraints, the active power balance in the EDS (neglecting power losses) is modeled in (5) as a function of the three-phase output power of the DG units ($P_m^{G}$), WT units ($P_m^{W}$) and the load consumption ($P_{m,\varphi}^{D}$), while the constraints in (6) model the generation limits of the DG units.
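For reference, the convex problem (4)-(7) can also be solved centrally by searching for the common incremental cost λ that balances (5). The sketch below is ours and only illustrative; it plugs in the cost data of the three-unit example introduced later in Sec. IV-B (Table II) and bisects on λ, reproducing the optimum P* = (193.47, 77.49, 129.03) kW and the total cost of 15572.25 $ reported there:

```python
import numpy as np

# Illustrative sketch (not the paper's algorithm): centralized solution of
# (4)-(7) by bisecting on the incremental cost lambda. At the optimum, all
# units whose limits are inactive share df_m/dP = 2*gamma_m*P + beta_m.
gamma = np.array([0.20, 0.50, 0.30])   # $/kW^2 (Table II data)
beta  = np.array([0.25, 0.15, 0.22])   # $/kW
Pmin  = np.array([40.0, 20.0, 25.0])   # kW
Pmax  = np.array([400.0, 200.0, 250.0])
P_load = 400.0                         # kW, total load, Eq. (5)

def dispatch(lam):
    # Each unit's cost-minimizing output for a given lambda, projected
    # onto its generation limits (cf. Eq. (34) later in the paper).
    return np.clip((lam - beta) / (2.0 * gamma), Pmin, Pmax)

lo, hi = 0.0, 1e4
for _ in range(100):                   # bisection on the balance (5)
    lam = 0.5 * (lo + hi)
    if dispatch(lam).sum() < P_load:
        lo = lam
    else:
        hi = lam

P = dispatch(lam)
cost = np.sum(gamma * P**2 + beta * P)  # alpha_m = 0 in this example
print("P* =", P.round(2), " total cost =", round(cost, 2))
```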
Thus, it can be said that the DG unit m and n agree if and only if xm = xn. Moreover, it can be said that all the DG units have reached consensus if and only if xm = xn ∀m, n ∈V. The dynamics of the consensus variable xm for each DG unit can be described by the discrete-time model in (8), where k is an iteration counter. xm(k + 1) = xm(k) + um(k). (8) It can be shown that under the protocol in (9), all the DG units reach consensus when k →∞ [24], where C = [cmn] is known as the consensus matrix. um(k) = � cmn(xn(k) − xm(k)). (9) 1Notice that Om as defined in (6), is closed due that P Gm [take values within] the range P [G]m [≤] [P][ G]m [≤] [P] Gm[. Additionally, it is convex since it is described] b t f li ti cmn = �1/(dm + 1) if n ∈Nm ∪{m}, (11) 0 if n ̸∈Nm. Such definition leads to a row-stochastic (i.e., row sum of 1), as required according [25]. It is important to highlight that the notion of neighborhood used here is related to the existence of a communication link between the DG units. The protocol in (10) is known as the first-order consensus algorithm, and its speed of convergence depends on the level of connectivity of the communication topology of the DG units. Nevertheless, convergence is guarantee as long as the communication topology fulfills the design requirements discussed in Sec. III-E. B. Distributed Optimal Dispatch Strategy The optimal generation problem, as formulated in Section II-B, can be solved using a distributed optimization approach taking advantage of its structure. In this, the only constraint that couple the problem among all the units is the active power balance in (5). Moreover, the set of operational constraints of the DG units, given by (6), defines a closed and convex set[1] Om, ∀m ∈G, in such a way that Om ∩On = ∅, ∀m, n ∈G; or in other words, the operational constraints of the DG unit m are independent of those of unit n. In this case, only generation limit constraints are considered. However, other operational constraints such as prohibited operational zones can be added to the set Om without modifying the proposed optimization strategy. Based on this, a distributed strategy can be developed. Firstly, define the Lagrangian function L(Pm[G][0] [, λ][)][ as] L(Pm[G][0] [, λ][) =] � fm(Pm[G][0][)] m∈G  + λ  [�] � Pm,φ[D] [−] � Pm[G][0] [−] � Pm[W] m∈N φ∈F m∈G m∈W  , (12)  where λ corresponds to the dual variable associated to constraint in (5). The optimal solution, which defines the active power dispatch of each DG unit, must meet the first optimality condition, which can be expressed as ∂L(·) m [)] = [df][m][(][P][ G][0] − λ = 0, ∀m ∈G (13) ∂Pm[G][0] dPm[G][0] or equivalently [26], min Pm[G][0] [∈O]m � fm(Pm[G][0][)][ −] [λP][ G]m [0] � . (14) ----- λn(k) λn(k) |Col1|Stage I k = k + 1 Consensus Algorithm Ag ent m Agent n Agent p| |---|---| Pm[G][, ω] Define Pm[G][0][(][k][)][ solving (14)]2 DefineAgent Pn[G][0] n,[(][k][)] ∀[ solving (14)]n ∈G1 Vm,φ Vm,φ Define Dm[P] [(][k][)][ using (16)] (a) DG unit operating with VCM control. (b) DG unit operating with PCM control. Figure 2. Structure of a DG unit seen as an agent. The black dashed lines represent exchange of information between different agents, while the red dashed lines represent local measurements. Therefore, to define its scheduled active power, i.e., Pm[G][0][,] each DG unit m ∈G solve (14) locally. The only global information required to solve (14) corresponds to λ, which can be estimated locally by each unit. 
To do this, variable λm is introduced and defined as the local estimation of the dual variable by unit m. From the economic operation of power systems, λm can be seen as the incremental cost of the DG units. Hence, the minimum cost dispatch is reached when all units have the same incremental cost value [7]. This condition is equivalent to state that λm = λn, ∀m, n ∈G, which suggests that variable λm can be defined as the consensus variable. Therefore, to estimate λm, this paper proposes that each unit execute locally the iterative consensus protocol given by, |Col1|DP m(k), ωm(k) P G0 m (k)|Col3| |---|---|---| ||Microgrid|| |||| |||| Figure 3. Flowchart of the proposed distributed dispatch strategy composed of Stage I and Stage II. Dm[P] [(][k][ + 1) =][ D]m[P] [(][k][) +][ ε][(][P][ G]m [−] [P][ G]m [0][)][,][ ∀][m][ ∈G][2][,] (16) λm(k+1) = � cmnλn(k)+κ(Pm[G][−][P][ G]m [0][)][,][ ∀][m][ ∈G][,][ (15)] n∈Nm where κ is a parameter that controls the convergence of the protocol. Notice that for units operating in PCM, the second term in (15) is reduced to zero, due to (1). The rationale of protocol in (15) can be understood if each DG unit is modeled as an independent agent, with functionalities such as acquiring local measurements, exchange information with its neighboring agents and define its active power schedule independently. A representation of the structure and information flow of a DG unit as an agent for both control modes, is shown in Fig. 2. Notice that each DG unit has two active power variables: Pm[G][, which stands for the active output power, and][ P][ G]m [0][,] which stands for the scheduled active output power. These two variables must not be confused: Pm[G][0] is obtained for all the DG units solving the problem in (14), and corresponds to the output power that minimizes the total generation cost, while Pm[G] [is the actual power that the DG unit is supplying] to the microgrid. Thus, the active output power of a DG unit operating in PCM, can be directly set considering (1). On the other hand, as the output power of units operating in VCM cannot be directly set (i.e., Pm[G] [cannot take directly the] value given by Pm[G][0][), the active droop gain (][D]m[P] [) is modified] iteratively using the protocol where ε controls the convergence. This protocol guarantees that the active output power of the DG units in VCM (Pm[G][),] defined through the droop expression in (2), equals the dispatched power (Pm[G][0][), defined cooperatively by all the DG] units. Nevertheless, when Dm[P] [is modified, the system might reach] a steady-state with a frequency different from the nominal frequency value, given by ω0. To reduce this deviation, each unit updates its frequency reference (ωm) in (2), using the following protocol, ωm(k + 1) = ωm(k) + ˆε(ω0 − ω), ∀m ∈G2. (17) This protocol guarantees that the system will operate with a lower frequency deviation in steady-state. A theoretical convergence analysis of protocols (15), (16) and (17), is presented in the Appendix. As for the reactive power, and as shown in Fig. 2, information about the voltage at the node of connection is required for all units in order to define their reactive output power using (3). In this case, reactive power has not been considered in the optimization strategy, as it does not incur in any cost [27]. C. Overview of the Distributed Strategy Fig. 
C. Overview of the Distributed Strategy

Fig. 3 shows the flowchart of the proposed distributed dispatch strategy, composed of two stages: one to run the consensus protocol (Stage I) and the other to run the optimization algorithm (Stage II).

[Figure 3. Flowchart of the proposed distributed dispatch strategy, composed of Stage I and Stage II.]

At each iteration k, each stage can be explained as follows:

Stage I: All DG units execute locally their consensus protocols, as explained in Section III-B, in order to calculate λ_m(k), i.e., the local estimation of the variable λ. Here, it is assumed that the exchange of information between the DG units is done synchronously, i.e., the time delay of the communication process is not considered. Additionally, recall that the exchange of information between units depends on the communication topology, as explained in Sec. III-A and shown in Fig. 3.

Stage II: Here, each DG unit solves the problem in (14) independently, in order to define P_m^{G0}(k), i.e., the optimal dispatch that minimizes the overall generation cost for the current λ_m(k). To be able to solve the sub-problem in (14), each DG unit requires λ_m(k) (previously calculated in Stage I) and its own operational data, i.e., the parameters of the cost function (α_m, β_m, γ_m) and the maximum and minimum active generation capacities (P̄_m^G, P̲_m^G). Additionally, the units in VCM update their parameters D_m^P(k) and ω_m(k), using (16) and (17), respectively.

It is important to highlight that, as the consensus protocol of units operating in VCM considers the current output power through the droop control loop, the proposed strategy takes into account the active power losses, even though these were not considered in the formulation stated in Sec. II-B. Notice that, in order to update protocols (16) and (17) in Stage II, operational information of the EDS, such as the frequency (ω) and the current output power of the DG units in VCM (P_m^G), is required (see Fig. 2). This information can be obtained by the DG unit through local measurements, as explained in [12].
D. Operation within the Hierarchical Control Framework

To better understand the operation of the proposed strategy within the hierarchical control scheme, consider the illustrative example of the dynamics of the microgrid shown in Fig. 4. The hierarchical control is activated at t1. Before this time, it is considered that the microgrid is in steady-state operation and the DG unit m has output power P_m^G. After t1, the active droop gain D_m^P is modified, forcing the unit in VCM to supply P_m^{G0} (previously defined as a predetermined value of the tertiary control). This process is performed by the primary control level, which operates with the fastest time of response, denoted T_P. Due to the operation of the primary control, the frequency is modified, as shown in Fig. 4b. This frequency deviation is corrected by the secondary level, which acts on a slower time scale compared with the primary level. For this reason, its time of response T_S is greater than T_P, which helps to decouple its dynamics. Finally, the tertiary level operates, defining the new scheduled active power value P_m^{G0} with the slowest time scale T_T. Thus, the update of the dispatch variables is done once the frequency recovery process is completed.

[Figure 4. Illustrative example of the dynamics of the primary, secondary and tertiary control levels: (a) active output power of the DG unit m operating in VCM; (b) angular frequency of the microgrid. The convergence time of the optimization algorithm in the proposed strategy is limited by the time length T_D.]

In this context, considering the operation of the proposed distributed strategy, Stages I and II perform the functions of the tertiary and secondary control, defining the optimal schedule of all the DG units (i.e., P_m^{G0}), as well as the parameters of the droop control, to reduce the frequency deviation in steady-state conditions (i.e., D_m^P, ω_m). Considering this, and aiming to keep the dynamics of Stages I and II decoupled from the primary control, the response time of the proposed strategy (denoted T_D in Fig. 4a) should be selected to have a value greater than T_S but lower than T_T, i.e., T_S ≤ T_D ≤ T_T. The selected value will depend on the speed of response desired for the system. Usually, T_S takes values near 30 s or lower [23], [28], [29], while T_T can take values between 5 and 15 minutes [18]. Notice also that T_D limits the maximum processing time of the proposed strategy in a practical implementation.

E. Design Considerations

The selection of the parameters κ, ε and ε̂ in protocols (15), (16) and (17) can affect the speed of convergence of the distributed strategy. Higher values for these parameters can lead to faster convergence. However, a trade-off exists, so that if they are set too large, an oscillatory behavior can be observed due to the excessively fast update of λ_m(k) in (15). Additionally, the number of DG units also affects the choice of these parameters. It is possible to observe that, after a load or WT generation increase/decrease, the higher the number of DG units in VCM, the lower the active power that each DG unit supplies to reduce the mismatch between generation and load consumption, which means that the output power of the DG unit (P_m^G) might not be too far from its new scheduled value (P_m^{G0}). Therefore, the distributed approach can converge faster. This analysis suggests that the control parameters κ, ε and ε̂ should be set inversely proportional to the number of DG units, N. Thus, the following heuristic rules can be used (see the sketch below):

$$\kappa = 100/N, \qquad \varepsilon = 1/(100N), \qquad \hat{\varepsilon} = 1/N. \qquad (18)$$

Although these rules do not estimate an optimal value for the convergence parameters κ, ε and ε̂, they have shown a good performance in different scenarios, as described in Sec. IV-F.
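A direct transcription of the heuristic rules in (18) follows (ours; note that the N = 3 values match the factors 33.33 and 0.0033 used in the worked example of Sec. IV-B):

```python
# Illustrative transcription of the heuristic tuning rules in Eq. (18):
# the convergence parameters scale inversely with the number of units N.
def tuning_rules(N: int):
    kappa = 100.0 / N        # consensus correction gain in Eq. (15)
    eps = 1.0 / (100.0 * N)  # droop adaptation gain in Eq. (16)
    eps_hat = 1.0 / N        # frequency reference gain in Eq. (17)
    return kappa, eps, eps_hat

for N in (3, 5, 7, 10):
    # e.g., N = 3 gives kappa = 33.33 and eps = 0.0033 (cf. Sec. IV-B)
    print(N, tuning_rules(N))
```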
The design of the communication topology can follow multiple criteria in order to define how the DG units exchange information, including geographical localization, closeness in the electrical network, and tolerance to link failures, among others [7], [12]. Nevertheless, the convergence of the distributed strategy is guaranteed as long as the graph G that models the communication topology meets, at least, the following design criteria:

- There exists a path that links any DG unit m to any DG unit n. This condition guarantees that all the DG units are connected.
- The consensus matrix C = [c_{mn}] is balanced, which can be obtained with bidirectional communication links.

The use of the proposed strategy, being based on a distributed approach, reduces the dependence on the communication system complexity when compared with a centralized approach. In fact, in a centralized approach, a high level of connectivity is required due to the central operator, reducing its robustness and reliability, since a single point of failure exists. Moreover, in case of a link failure, the corresponding DG unit would be isolated, preventing its control. In this context, in the proposed strategy the communication topology can be designed to be robust to any link failure, following, for instance, the design rule presented in [12]. Additionally, as there is no leader unit, the system will continue its operation in case of a unit fault, as will be shown in Sec. IV-D.

IV. SIMULATION RESULTS AND DISCUSSION

The proposed strategy was tested in the unbalanced 25-bus microgrid shown in Fig. 5a. In total, five DG units and two WT units are considered. For comparison purposes, the two communication topologies shown in Fig. 5 are considered. These are selected as they correspond to the most common topologies used in the literature to test distributed algorithms [7]. Nevertheless, other topologies can be tested as well, as long as they meet the design requirements described in Sec. III-E. The information of the DG units is shown in Table I.

Table I. DG units information.

m  | γ_m [$/kW²] | β_m [$/kW] | α_m [$] | P̲_m^G [kW] | P̄_m^G [kW] | Q̲_m^G [kvar] | Q̄_m^G [kvar]
8  | 0.444 | 0.111 | 0.0 | 90  | 900  | -180 | 540
13 | 0.264 | 0.067 | 0.0 | 150 | 1500 | -300 | 900
19 | 0.400 | 0.100 | 0.0 | 120 | 1200 | -120 | 720
22 | 0.500 | 0.125 | 0.0 | 80  | 800  | -160 | 480
25 | 0.250 | 0.063 | 0.0 | 160 | 1600 | -240 | 960

The reactive droop gains of all units were defined as in (19), while at initialization (i.e., at k = 0), the active droop gains of the units operating in VCM were defined as in (20). This definition allows the DG units operating in VCM to achieve proper active power sharing according to their power ratings [30]:

$$D_m^Q = (\overline{V} - \underline{V}) / (2\,\overline{Q}_m^G), \quad \forall m \in \mathcal{G}, \qquad (19)$$

$$D_m^P(k) = \Delta\omega / \overline{P}_m^G, \quad \forall m \in \mathcal{G}_2 \,|\, k = 0. \qquad (20)$$

Additionally, V̲, V̄ and Δω/2π were set to 0.94 p.u., 1.05 p.u. and 0.1 Hz, respectively. The parameter κ was set to 20, while ε and ε̂ were defined to be 0.02 and 0.2, respectively, using (18). The active power in all the protocols is in p.u., using 1000 kW as the nominal base. Initially, the WT units are not operating. The units G13, G19 and G25 operate in VCM, while units G8 and G22 operate in PCM. The distributed model was implemented in AMPL and solved with IPOPT [31], using a computer with an Intel i7-4749 processor and 16 GB RAM.

[Figure 5. Microgrid and communication topologies presented as graphs: (a) 25-bus three-phase microgrid; (b) ring topology; (c) tree topology. The data of the system and the load demand of each node can be found in [32].]
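As a small sketch of the initialization in (19)-(20), the following code (ours; it plugs in the Table I ratings and the Δω, V̄, V̲ values quoted above, with the active power expressed in p.u. as in the protocols) computes the droop gains:

```python
import numpy as np

# Sketch: initializing the droop gains per Eqs. (19)-(20) with the
# Table I ratings. Units 8, 13, 19, 22, 25 in order; VCM units 13, 19, 25.
Q_max = np.array([540.0, 900.0, 720.0, 480.0, 960.0])   # kvar
P_max_vcm = np.array([1500.0, 1200.0, 1600.0])          # kW
V_max, V_min = 1.05, 0.94                               # p.u.
d_omega = 2 * np.pi * 0.1                               # rad/s (0.1 Hz)

D_Q = (V_max - V_min) / (2.0 * Q_max)    # Eq. (19), all units
D_P0 = d_omega / (P_max_vcm / 1000.0)    # Eq. (20), VCM units, P in p.u.
print("D_Q:", D_Q.round(6))
print("initial D_P:", D_P0.round(4))
```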
As explained in Sec. III-C, at each iteration k, information related to the frequency (ω) and the current output power of the DG units in VCM (P_m^G) is required to update protocols (16) and (17). To simulate this measurement process, in this paper an optimal power flow (OPF) formulation is used, as explained next in Sec. IV-A.

A. Simulation of the Measurement Process

In order to simulate the physical response of the microgrid during the optimization process at each iteration k, an optimal power flow formulation is solved. In practical cases, the response of the microgrid variables is measured using sensors locally implemented in each DG unit; thus, this formulation is not necessary. The use of this formulation is based on two facts: first, the dynamics of the primary control level are faster than the dynamics of the higher control levels (or, equivalently, T_D ≫ T_P in Fig. 4); and second, to define D_m^P in (16), P_m^G is obtained by filtering the measured instantaneous active power with a low-pass filter [23]. This means that, at the end of the time length T_D, the system has reached a steady-state condition, which can be estimated using a power flow model [30].

The unbalanced three-phase islanded OPF formulation is given by the nonlinear optimization problem in (21)-(33), which is based on the work presented in [33]. Notice that in this formulation there are no control variables; thus, its solution is equivalent to the one provided by a three-phase power flow tool, such as OpenDSS or GridLAB-D:

$$\min \; \sum_{mn \in \mathcal{L}} \sum_{\varphi \in \mathcal{F}} \mathrm{Re}\{S_{mn,\varphi}^{L}\}, \qquad (21)$$

where $S_{mn,\varphi}^{L}$ is defined as

$$S_{mn,\varphi}^{L} = \left( \sum_{\psi \in \mathcal{F}} \frac{Z'_{mn,\varphi,\psi}\, S_{mn,\psi}^{*}}{V_{m,\varphi}\, V_{m,\psi}} \right) S_{mn,\varphi}. \qquad (22)$$
The DG units parameters are shown in Table II, meanwhile it is considered that all the DG units exchange information with the remaining, defining the consensus matrix to be equal to C = [cmn] = 1/3, ∀(n, m) ∈V, as explained in Sec. III-A. The first iterations for this simple case are shown in Table III. For k = 0, all the DG units define the local estimation of the incremental cost as λ1(0) = λ2(0) = λ3(0) = 0. To define Pm[G][0][(0)][, each DG unit solves the problem in (14). This] can be solved analytically, giving the next expression, P [G]m if (λm(k) − βm)/(2γm) < P [G]m[,] Pm[G][0][(][k][) =] P Gm if (λm(k) − βm)/(2γm) > P Gm[,]  λm(2kγ) −m βm Otherwise. (34) Thus, using the current local estimation of λm(k) and (34), each DG unit defines its optimal active power as P1[G][0](0) = 40, P2[G][0](0) = 20 and P3[G][0](0) = 25, which corresponds to their minimum active power generation. Notice in Table III, that as the unit G3 operates in PCM, its output power can be defined to be its scheduled value i e P [G][0](0) P [G](0) 25 For the 2 � Zmn,φ,ψ′∗ [S][mn,φ] ψ∈F ������ ∀mn ∈L, ∀φ ∈F (25) I{Zmn,φ,ψ′ [}][Q][mn,ψ]� − Vm,φ1[2] ������ Pm[G] [=] � Pm,ψ[G] ∀m ∈G2 (26) ψ∈F Q[G]m [=] � Q[G]m,ψ ∀m ∈G (27) ψ∈F ω = ωm(k) − Dm[P] [(][k][)][P][ G]m ∀m ∈G2 (28) Vm,φ = V0 − Dm[Q] [Q][G]m ∀m ∈G (29) Vm,φ = Vm,ψ ∀m ∈G2, ∀φ, ψ ∈F (30) V ≤ Vm,φ ≤ V ∀m ∈N, ∀φ ∈F (31) P [G]m [≤] [P][ G]m [≤] [P] Gm ∀m ∈G2 (32) G Q[G]m [≤] [Q]m[G] [≤] [Q]m ∀m ∈G (33) In the above formulation, the objective function in (21) aims to minimize the total active power losses of the EDS. This ensure that the solution of the power flow formulation matches the steady-state operation of the unbalanced microgrid, considering the active droop loop of the DG units operating in VCM and the reactive droop loop of all units [30], [33]. The EDS is modeled by (23)–(25), derived as a function of the active and reactive power flow through lines, i.e., Smn,φ = Pmn,φ + jQmn,φ. The line impedance is considered to be constant. Additionally, a transformation is introduced and ′ defined as Zmn,φ,ψ [=][ Z][mn,φ,ψ] [θ][ψ] [−] [θ][φ][. Notice that due to this] ′ definition Zmn,φ,ψ [is not symmetric as][ Z][mn,φ,ψ][. Constraints] (23) and (24) model the active and reactive power balance, respectively, considering the output power of the DG units operating in VCM and PCM, and the balanced power of the WTs. In (23), the three-phase output power of DG units in PCM is modeled as a balanced constant power injection, which value was previously defined in the Stage II (i.e. Pm[G][0][(][k][)][, see] Fig. 3); while the three-phase output power of units operating in VCM is considered as a power flow variable. Equation (25) models the voltage magnitude drop in the lines. In (26), the three-phase output power of units in VCM is modeled as a function of their output power per phase, while (27) models the three phase reactive output power of all units (in PCM ----- units in VCM, the DG current output power can be obtained using power flow models or analytic solutions, if the size of the problem allows it. For real operation, the estimation of the current output power is based on local real-time measurements. In this case, the current output power of the units in VCM can be estimated as[2], 3 [(][k][))][D]2[P] [(][k][)] P1[G][(][k][) = (][P][ D][ −] [P][ G], (35) D1[P] [(][k][) +][ D]2[P] [(][k][)] 3 [(][k][))][D]1[P] [(][k][)] P2[G][(][k][) = (][P][ D][ −] [P][ G] . 
(36) D1[P] [(][k][) +][ D]2[P] [(][k][)] 10 Tree Topology Ring Topology 9 8 7 0 100 200 300 400 500 600 700 800 900 Figure 7. Total generation cost of the DG units considering the ring and tree topology shown in Fig. 5. The x-axis represents k, i.e., the iteration counter. |Col1|Tree T Ring T|opology opology|Col4|Col5|Col6|Col7|Col8|Col9|Col10|Col11| |---|---|---|---|---|---|---|---|---|---|---| |||||||||||| |||||||||||| |||||||||||| Using these expressions, for k = 0, the current output power of the units in VCM are given by P1[G][(0) = 250][ and][ P][ G]2 [(0) =] 125. Additionally, D1[P] [(0)][ and][ D]2[P] [(0)][, are defined as using] the standard expression, which is given in (20), defining the values[3] of D1[P] [(0) = 1][.][57][ and][ D]2[P] [(0) = 3][.][14][.] For k = 1, first all the DG units update their local estimation of the incremental cost, using the expression in (15). For the unit G1, (15) gives[4], λ1(1) = 1/3 λ1(0) + 1/3 λ2(0) + 1/3 λ3(0) + 33.33(P1[G][(0)][ −] [P][ G]1 [0](0)) = 7.00. (37) The factor 33.33 represents κ, calculated as explained in Sec. III-E. For the remaining DG units, the same procedure is done, giving: λ2(1) = 3.5 and λ3(1) = 0. Once the local estimation of the incremental cost are obtained, each DG unit uses the expression in (34) to obtain their optimal schedule, which gives: P1[G][0](1) = 40, P2[G][0](1) = 20 and P3[G][0](1) = 25. Then, the droop gains of units in VCM are updated, using the expression in (16), giving, D1[P] [(1) =][ D]1[P] [(0)] + 0.0033(P1[G][(0)][ −] [P][ G]1 [0](0)) = 1.5707 (38) The factor 0.0033 represents ε. Applying the same procedure for G2, gives D2[P] [(1) = 3][.][1404][. Finally, the current output] power of the units in VCM are estimated using the expressions in (35) and (36), which gives: P1[G][(1) = 249][.][97][ and][ P][ G]2 [(1) =] 125.03. Notice in Table III, and as described in Sec. III-B, the units in VCM always maintain the power balance, supplying the total amount of load, even during the execution of the proposed distributed strategy. This procedure can be applied iteratively, reducing the total generation cost, as can be seen in Table III. The final solution will converge to the optimal solution of the centralized problem, given by P1[G][0] = 193.47, P2[G][0] = 77.49 and P3[G][0] = 129.03, with a total generation cost of 15572.25 $. C. Case I: Initialization, Validation and Comparison Fig. 7 shows the total generation cost, while Fig. 8 and Fig. 9 show the local estimation of the incremental cost variable (λm) and the total active output power of each DG unit, respectively; all during operation. The iteration counter 2A detailed derivation of this solution can be found in [20]. 3D1P [(0) = 2][π][ ·][ 0][.][1][/][0][.][4 = 1][.][5700][.] 4In all the protocols, the active power is in p.u., using 1000 kW as the i l b k, in Fig. 7 to Fig. 9 (and the remaining ones), should be seen as a discretization of the operational time, using a time length of ∆tD (see Sec. IV-F). Additionally, it is assumed that there is not changes in the operational conditions (increase/decrease of load and WT generation) until the proposed strategy reaches the optimal solution. This is done in order to assess its convergence properties. Initially, the value of λm was set to zero at each DG unit. Due to this, the output power of units in PCM is set at their minimum value, as a result of the problem stated in (14). This is shown in the early iterations in Fig. 9d. 
C. Case I: Initialization, Validation and Comparison

Fig. 7 shows the total generation cost, while Fig. 8 and Fig. 9 show the local estimation of the incremental cost variable (λ_m) and the total active output power of each DG unit, respectively, all during operation.

[Figure 7. Total generation cost of the DG units considering the ring and tree topologies shown in Fig. 5. The x-axis represents k, i.e., the iteration counter.]

The iteration counter k in Fig. 7 to Fig. 9 (and the remaining ones) should be seen as a discretization of the operational time, using a time length of Δt_D (see Sec. IV-F). Additionally, it is assumed that there are no changes in the operational conditions (increase/decrease of load and WT generation) until the proposed strategy reaches the optimal solution. This is done in order to assess its convergence properties.

Initially, the value of λ_m was set to zero at each DG unit. Due to this, the output power of units in PCM is set at their minimum value, as a result of the problem stated in (14). This is shown in the early iterations in Fig. 9d. Simultaneously, the units operating in VCM correct the active power mismatch, supplying the remaining active power, as can be seen in Figs. 9a to 9c. After some iterations, the units operating in VCM increase their local estimation of λ_m, as shown in Fig. 8; this value is then distributed through the communication topology, and as a consequence, the output powers of units operating in PCM are increased. The units operating in VCM decrease their output powers to respond to the increase in the output powers of the units operating in PCM. For this, the active gains of the droop loops are modified, as shown in Fig. 10. Moreover, due to the way the consensus protocol is defined in (15) for units operating in VCM, their output powers will converge to the optimal value defined through the solution of (14). Therefore, through this cooperative procedure, the total generation cost is reduced as the strategy reaches the optimal solution, as can be seen in Fig. 7.

Regarding the reactive power, as all units operate with a droop-based loop, the reactive generation is distributed proportionally according to the rating of each DG unit, as shown in Fig. 11. Notice that, although the reactive power dispatch is not considered in the proposed strategy, the voltage constraints are considered through the definition of the reactive droop gain in (19), which guarantees that the voltage in the microgrid will operate within the maximum and minimum allowed values, as shown in Fig. 12, before and after the convergence of the iteration process. As for the frequency, Fig. 13 compares the frequency of the system with and without the protocol in (17). When the protocol in (17) is considered, the system operates with a lower frequency deviation. This is accomplished after each DG unit defines locally its frequency reference ω_m, using local measurements of the system's frequency, as also shown in Fig. 13.

The solution that the proposed distributed strategy reaches is independent of the communication topology of the DG units. This can be seen in Fig. 7, where the total cost is the same for both communication topologies. Moreover, notice that, although the results for the dual variables displayed in Fig. 8 are obtained considering the ring topology shown in Fig. 5b, the same results are obtained if the tree topology is considered instead. In this sense, the main difference is related to the speed of convergence of the consensus algorithm, which depends on the connectivity (i.e., the number of links) of the communication topology.
Moreover, notice that, although the results for the dual variables displayed in Fig 8 are obtained considering the ring topology shown in ----- Table III FIRST ITERATIONS OF THE NUMERICAL EXAMPLE k P1[G][(][k][)] P2[G][(][k][)] P3[G][(][k][)] � P Gm [(][k][)] D1[P] [(][k][)] D2[P] [(][k][)] λ1(k) λ2(k) λ3(k) P1[G][0] (k) P2[G][0] (k) P3[G][0] (k) $ 0 250.00 125.00 25.00 400.0 1.5700 3.1400 0.000 0.000 0.000 40.00 20.00 25.00 20586.75 1 249.97 125.03 25.00 400.0 1.5707 3.1404 7.000 3.500 0.000 40.00 20.00 25.00 20587.44 2 249.94 125.06 25.00 400.0 1.5714 3.1407 10.499 7.001 3.500 40.00 20.00 25.00 20588.14 3 249.92 125.08 25.00 400.0 1.5721 3.1411 13.998 10.502 7.000 40.00 20.00 25.00 20588.83 4 249.89 125.11 25.00 400.0 1.5728 3.1414 17.497 14.003 10.500 43.12 20.00 25.00 20589.53 5 249.86 125.14 25.00 400.0 1.5735 3.1418 20.892 17.504 14.000 51.61 20.00 25.00 20590.21 6 247.34 123.92 28.74 400.0 1.5741 3.1421 24.074 20.970 17.465 59.56 20.82 28.74 20247.74 7 243.58 122.06 34.36 400.0 1.5748 3.1424 27.096 24.273 20.836 67.11 24.12 34.36 19756.54 8 239.97 120.29 39.75 400.0 1.5754 3.1428 29.950 27.333 24.068 74.25 27.18 39.75 19311.92 9 236.56 118.61 44.83 400.0 1.5759 3.1431 32.641 30.221 27.117 80.98 30.07 44.83 18916.08 10 233.35 117.03 49.62 400.0 1.5764 3.1434 35.179 32.944 29.993 87.32 32.79 49.62 18563.74 0.15 0.1 G13 G19 G25 0.05 460 500 450 400 440 300 296 298 300 302 304 306 308 730 740 750 760 770 780 790 0 0 100 200 300 400 500 600 700 800 900 Figure 10. Active droop gain of units operating in VCM during operation. The x-axis represents k, i.e., the iteration counter. G8 G13 G19 G22 G25 |λ8 λ13 λ19 λ22 λ25 600|Col2|Col3|Col4|Col5|Col6|Col7|Col8|Col9|Col10|Col11|Col12|Col13|Col14|Col15| |---|---|---|---|---|---|---|---|---|---|---|---|---|---|---| |400 200 0 0 100 200 300 400 500 600 700 800 900 500 400 300||||||||||||||| |||||||||||||||| |||||||||||||||| |||||||||||||||| |||||||||||||||| Figure 8. Local estimation of the incremental cost variable (or dual variable) at each DG unit using the ring topology in Fig.5b. The x-axis represents k, i.e., the iteration counter. 1500 Pm[G] Pm[G][0] 1000 500 800 600 400 200 00 |Col1|P mG|P mG0|Col4|Col5|Col6|Col7|Col8|Col9| |---|---|---|---|---|---|---|---|---| |||||||||| |||||||||| 100 100 900 900 300 400 500 600 (a) Active power of unit G13 300 400 500 600 (b) Active power of unit G19 700 700 200 m[G] 200 800 800 1000 500 00 |Col1|Col2|Col3|(a) Active|e power of|f unit G13|Col7|Col8|Col9| |---|---|---|---|---|---|---|---|---| ||P mG|P mG0||||||| |||||||||| 1500 1000 500 00 |Col1|P mG|P mG0|Col4|Col5|Col6|Col7|Col8|Col9|Col10| |---|---|---|---|---|---|---|---|---|---| ||||||||||| ||||||||||| 900 900 300 400 500 600 (c) Active power of unit G25 |Col1|Col2|Col3|(c) Active|e power of|Col6|f unit G25|Col8|Col9|Col10|Col11| |---|---|---|---|---|---|---|---|---|---|---| |G8|G22|W5|+ W17|||||||| |||||||||||| |||||||||||| 300 200 22 200 800 800 0 0 100 200 300 400 500 600 700 800 900 Figure 11. Total reactive output power of all the DG units. The x-axis represents k, i.e., the iteration counter. Notice that the formulation of the optimal generation problem in Sec. II-B is independent on the operational mode of the DG units (VCM or PCM), suggesting that the optimal solution is also independent on these operation modes. However, as units operating in VCM share the power losses in proportion to the their ratings, the final solution will depend on the set of units operating in VCM and in PCM. 
To show this, Table IV compares the optimal solution for different cases of modes of operation. In all cases, the communication topology used was the ring topology. According to these results, the solution obtained have the same total generation cost, but different active power losses and incremental cost variable. The same total generation cost is due to the cost of the power losses, which is negligible when compared with the cost of supplying the total active load consumption. The selection of the units operating in VCM is an important issue, as these units are responsible for automatically correct the mismatch between generation and load consumption after any change in the operating conditions and, more importantly, during the optimization process. Therefore, its selection should be based on their maximum generation capacity. In this context, if the generation capacity of these units is lower than the total load demand the droop control cannot guarantee 1000 500 00 100 8 100 600 700 700 500 400 (d) Active power of units G8, G22 and W5+W17 Figure 9. Total active output power of DG and WT units. The x-axis represents k, i.e., the iteration counter. In (a) to (c), the blue line represents Pm[G][, i.e., the real active power of the DG units, while the dashed red line] represents Pm[G][0] [, i.e., the optimal scheduled active power of each DG unit.] After convergence, the real output and the optimal scheduled power are the same. Fig. 5b, the same results are obtained if the tree topology is considered instead. In this sense, the main difference is related to the speed of convergence of the consensus algorithm, which depends on the connectivity (i.e., the number of links) of the communication topology ----- Phase A Phase B Phase C Table IV COMPARISON OF THE DISTRIBUTED STRATEGY FOR DIFFERENT CASES Distributed Strategy Centralized [33] 1.05 1 0.95 1.05 Units in PCM G8, G22 G8 – – G13, G19 G13, G19, G8, G13, G19, G8, G13, G19, Units in VCM G25 G22, G25 G22, G25 G22, G25 |Col1|Col2|Col3|Col4|Col5| |---|---|---|---|---| |||||| |||||| |||||| 25 20 5 10 15 (a) At iteration k = 1 Total Cost [10[5]$] 7.369 7.369 7.369 7.413 Total Losses [kW] 32.77 32.79 33.04 33.54 Frequency [Hz] 60.00 60.00 60.00 59.97 1 max{λm} [$/kW] 450.495 450.43 450.328 – min{λm} [$/kW] 450.058 450.095 450.185 – 0.95 5 10 15 20 25 (b) At iteration k = 300 Figure 12. Voltage profile of the microgrid for all the phases before and after the convergence process. |Col1|Col2|Col3|Col4|Col5| |---|---|---|---|---| |||||| |||||| |||||| |||||| |||||| 60.1 60.05 60 59.95 59.9 0 100 200 300 400 500 600 700 800 900 Figure 13. Frequency and frequency reference for all units during operation. The x-axis represents k, i.e., the iteration counter. |Col1|Col2|Col3|Col4|Col5|Col6|Col7|Col8|Col9| |---|---|---|---|---|---|---|---|---| |||||||||| |||ωm(k|)||ω(k|) with the|protocol i|n (17)| ||||ω(k|) without|the prot|ocol in (17|)|| |||||||||| |||||||||| a feasible operation, especially in the early iterations of the optimization process and after any load increase or WT generation reduction. Table IV also shows a comparison with the solution obtained using the centralized strategy in [33], which considers a static droop operation framework, i.e., the active and reactive droop gains are not modified during operation, and they are defined based on the Standard IEEE 1547.7 [34]. 
In this case, as the strategy proposed in [33] is centralized, all the information regarding the microgrid, the DG units and the loads is available in advance, gathered by a high-level central operator. According to these results, the total generation cost and the power losses are reduced by 0.6% and 2.4%, respectively, when comparing the distributed with the centralized solution. Notice that the static definition of the active droop gain (D_m^P) used by the centralized solution does not consider the economic operation of the DG units, but only the generation capacity, sharing the active load among all the units in proportion to their ratings [33]. In contrast, the proposed approach defines the active droop gains based on the solution of the economical dispatch problem, and consequently, a lower cost solution is obtained. Regarding the frequency, the microgrid operates at the nominal value in steady state in all cases, while in the solution obtained by the centralized strategy a deviation of 0.05% was observed. These results show the effectiveness of the proposed strategy when compared with a centralized strategy.

D. Case II: Time-Varying Conditions

To assess the flexibility of the proposed distributed strategy under time-varying conditions, different unexpected changes in the operational conditions of the microgrid are analyzed. Here, the communication topology used was also the ring topology. At k = 300, the load demand is increased by 10%. In this case, units operating in VCM respond automatically, correcting the active power mismatch in the EDS, as can be seen in Fig. 9. Due to this, the local estimations of the incremental cost variable (λ_m) are increased, which causes the units in PCM to respond to the new operational condition, increasing their active output powers. After some iterations, the system reaches a new optimal operational state.

At k = 450, both WTs are dispatched, as shown in Fig. 9d. This increase in the renewable generation creates an active power mismatch (generation is higher than consumption), causing the units operating in VCM to respond automatically, reaching a new operational condition characterized by a lower value of the incremental cost variable, as shown in Fig. 8.

At k = 600, unit G22, which is operating in PCM, is unexpectedly turned off and all its communication links are disabled, simulating a fault. Here, it is assumed that the WT generation maintains the same value, in order to assess only the impact of the DG unit fault. This fault creates an active power mismatch (generation is lower than consumption) that leads to an immediate response of the units operating in VCM. In fact, this is one of the main advantages of the proposed strategy: since some units operate in VCM mode (i.e., with an active droop loop), they are responsible for automatically reducing the active power mismatch between generation and consumption after any change in the operational conditions of the microgrid, using only local information, increasing the robustness and reliability during operation. Finally, at k = 750, unit G22 restores its operation and the system reaches the same optimal operational point as before the fault. Notice, however, that after unit G22 is turned on, the units operating in VCM have different values of the active droop gains, as shown in Fig. 10. This is due to the fact that the output power of units operating in VCM, defined through the active droop loops, depends on the ratios D_m^P / D_n^P, ∀m, n ∈ G_2, which in this case are the same before and after the simulated fault of unit G22.
E. Case III: Impact of the Communication Topology on the Performance

In this section, the impact of the communication topology on the performance of the proposed strategy is assessed. To do this, the communication topologies shown in Fig. 14 are considered, in addition to the ring and tree topologies shown in Fig. 5. All the parameters and the units operating in VCM and PCM are defined as in Case I.

Figure 14. Additional communication topologies used to test the proposed strategy: (a) fully connected, (b) leader and (c) robust, each linking units G8, G13, G19, G22 and G25. The robust topology was designed using the rule proposed in [12].

Figure 15. Comparison of the convergence of the total generation cost of the DG units for the ring, tree, fully connected, leader and robust communication topologies. The x-axis represents k, i.e., the iteration counter.

The convergence of the proposed strategy can be affected by the connectivity level of the DG units, as it is based on a first-order consensus algorithm. This level of connectivity can be measured by the coefficient β, defined as the ratio between the number of links and the number of DG units. For the considered topologies, β takes the value of 1 and 0.8 for the ring and tree topologies, respectively (see Fig. 5); and 2, 0.8 and 1.2 for the fully connected, leader and robust topologies, respectively (see Fig. 14). Thus, communication topologies with a higher value of β are expected to reach consensus faster. However, according to the results shown in Fig. 15, the tree and robust topologies have better performance (i.e., they converge faster) when compared with the fully connected topology, which has the highest β. Considering these results, it is possible to conclude that although the level of connectivity β and the speed of convergence are closely related, there is no strict proportionality between them. For this reason, it is not possible to know exactly which topology will have the fastest convergence speed based exclusively on the value of β. These results are in agreement with those presented in [7], where a first-order consensus algorithm was also studied. Finally, it is important to add that in the simulations the proposed strategy defines the same optimal dispatch for all the DG units, regardless of the communication topology used.
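The relation between the link count and the consensus speed can be made concrete with a small numerical check. The sketch below computes β together with the algebraic connectivity (the second-smallest Laplacian eigenvalue), a common proxy for the convergence rate of first-order consensus; the small five-node graphs are illustrative, not the exact topologies of Figs. 5 and 14.

```python
# Sketch: connectivity coefficient beta (links / units) versus algebraic
# connectivity for a few illustrative five-node communication graphs.
import numpy as np

def laplacian(n, edges):
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, i] += 1.0; L[j, j] += 1.0
        L[i, j] -= 1.0; L[j, i] -= 1.0
    return L

topologies = {
    "ring": [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)],
    "tree": [(0, 1), (0, 2), (2, 3), (2, 4)],
    "full": [(i, j) for i in range(5) for j in range(i + 1, 5)],
}
for name, edges in topologies.items():
    beta = len(edges) / 5
    fiedler = np.sort(np.linalg.eigvalsh(laplacian(5, edges)))[1]
    print(f"{name}: beta = {beta:.1f}, algebraic connectivity = {fiedler:.3f}")
```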
F. Case IV: Scalability and Computational Time

In this section, simulations with 3, 7 and 10 DG units were carried out, in addition to the case with 5 DG units presented in Sec. IV-C, in order to assess the scalability of the proposed strategy. In all the simulations, the communication topology used was the ring topology, while the units operating in VCM and PCM were selected following the discussion presented in Sec. IV-C. Fig. 16 shows the active output power of the DG units in VCM and PCM in all cases.

Figure 16. Active output power of the DG units for the cases with (a) 3, (b) 7 and (c) 10 units. Upper: units in VCM. Lower: units in PCM. The x-axis represents k, i.e., the iteration counter.

For these simulations, the maximum computational time required by one DG unit to solve the problem (14) in one iteration k is near 0.030 s. Based on this, a conservative value of 0.1 s can be used as the time-step to discretize the operational time (∆tD). This time should also include the time to perform all the measurements and exchange the data between the DG units. In this context, this low computational time is one of the main advantages of the proposed distributed strategy, which is a consequence of the reduction of the size of the optimization problem solved by each DG unit. Notice that in all cases, all the DG units have reached consensus in less than 400 iterations, which means that the proposed approach requires approximately 12 s to converge to the optimal solution (or 40 s, if 0.1 s is used for ∆tD), if all the DG units execute the optimization process in parallel, as expected in practical implementations. In fact, based on the hierarchical control approach discussed in Sec. III-D, the maximum time for the distributed approach to reach the optimal solution is actually limited by TD, which can take values in the order of minutes. Thus, the proposed algorithm is sufficiently fast and suitable for implementation. Finally, it is important to highlight that the results shown in Fig. 16 were obtained using the proposed heuristic rules in (18), showing their good performance for these cases, as the distributed strategy properly reaches the optimal solution.

V. CONCLUSION

In this paper, a distributed strategy for the optimal dispatch of unbalanced three-phase islanded microgrids was presented. To define the generation dispatch of the DG units that minimizes the overall generation cost, an optimization problem is stated and solved distributively based on primal-dual constrained decomposition and a first-order consensus algorithm. Two operational modes are considered for the DG units: VCM and PCM. Comprehensive simulations and comparisons were given to show the effectiveness and flexibility of the proposed distributed approach. According to the obtained results, the proposed strategy achieves a lower-cost solution when compared with the standard centralized approach based on a static droop framework, since the solution of the economic dispatch is used to define the active droop gains, while the frequency deviation is reduced in steady state using a local correction term. Additionally, as units in VCM operate with an active droop loop, they are responsible for automatically reducing the active power mismatch between generation and consumption after any change in the operational conditions of the microgrid and, more importantly, during the optimization process. Finally, as the proposed strategy is considered to operate within the standard hierarchical control framework, the dynamics of the primary level (implemented with droop control) and the dispatch layer (implemented in Stages I and II) are decoupled, which helps to maintain the stability of the system.
APPENDIX
CONVERGENCE ANALYSIS OF THE PROPOSED DISTRIBUTED STRATEGY

To study the convergence of the proposed distributed strategy, the following conditions are assumed to hold.

(C1) Condition 1: The problem in (4)–(6) is technically feasible, i.e., the DG units in VCM have enough generation capacity to correct the active power mismatch between generation and consumption during the optimization process. This can be written mathematically as

∑_{m∈G2} \overline{P}_m^G ≫ ∑_{m∈N} ∑_{φ∈F} P_{m,φ}^D − ∑_{m∈W} P_m^W.   (39)

(C2) Condition 2: All the DG units exchange information following a pre-defined communication topology, which defines the consensus matrix C = [c_mn], designed as explained in Sec. III-E.

(C3) Condition 3: All the DG units gather the required data, and process and update their protocols synchronously and in parallel.

Thus, the next proposition can be defined.

(P1) Proposition 1: The proposed distributed strategy converges monotonically if conditions C1–C3 hold.

To prove Proposition 1, first recall the proposed distributed strategy. For each iteration k, and each DG unit m ∈ G, apply sequentially:

P_m^{G0}(k) = argmin_{P_m^G ∈ O_m} { f_m(P_m^{G0}) − λ_m(k) P_m^{G0} }   (40)
ΔP_m^G(k) = P_m^G − P_m^{G0}(k)   (41)
λ_m(k+1) = ∑_{n∈N_m} c_mn λ_n(k) + κ ΔP_m^G(k)   (42)
D_m^P(k+1) = D_m^P(k) + ε ΔP_m^G(k)   (43)
Δω(k+1) = ω_0 − ω   (44)
ω_m(k+1) = ω_m(k) + ε̂ Δω(k+1)   (45)

In (41), P_m^G corresponds to the total active output power of the DG unit, while ω in (44) corresponds to the angular frequency of the microgrid, both obtained by the DG unit through local measurements. Additionally, consider the following lemmas.

(L1) Lemma 1: The active output power of a DG unit operating with droop control is inversely proportional to its active droop gain, or equivalently, P_m^G ∝ 1/D_m^P.
Proof: A technical discussion related to the operation of DG units with droop control is presented in [20].

(L2) Lemma 2: In (40), P_m^{G0}(k+1) > P_m^{G0}(k) if λ_m(k+1) > λ_m(k).
Proof: Recall that (40) is equivalent to the definition of the Lagrangian given in (12). Thus, (40) can be stated as

L(P_m^{G0}, λ_m(k)) = γ_m (P_m^{G0})^2 + β_m P_m^{G0} + α_m − λ_m(k) P_m^{G0},   (46)

which can be re-written as

L(·) = γ_m (P_m^{G0})^2 + (β_m − λ_m(k)) P_m^{G0} + α_m.   (47)

Considering that α_m = 0, since the start-up/shut-down cost is not considered, and that λ_m(k) ≫ β_m in the economic dispatch problem [35], (47) can be approximated as

L(·) ≈ γ_m (P_m^{G0})^2 − λ_m(k) P_m^{G0}.   (48)

In order to better understand the solution of (40), which defines P_m^{G0} as a function of λ_m(k), Fig. 17 shows the second-order polynomial function given by (48) for different values of λ_m(k). Notice that the solution of (40) is equivalent to finding the root of the derivative of (48), in such a way that P_m^{G0} ∈ O_m. This root can be analytically found as

∂L(·)/∂P_m^{G0} = 2γ_m P_m^{G0} − λ_m(k) = 0,   (49)

which gives

P_m^{G0} ≈ [ λ_m(k) / 2γ_m ]_{P^{G0} ∈ O_m}.   (50)

Figure 17. Schematic representation of L(P_m^{G0}, λ_m(k)) in (48) as a function of P_m^{G0} for different values of λ_m(k), such that λ_m(k+2) > λ_m(k+1) > λ_m(k). P_m^{G0} can be obtained as the root of ∂L(·), which is approximately λ_m(k)/2γ_m. The areas P_m^G < \underline{P}_m^G and P_m^G > \overline{P}_m^G represent the non-feasible values for P_m^{G0}, according to the set O_m.

Thus, as γ_m > 0, from (50) and Fig. 17 it is possible to conclude that P_m^{G0} ∝ λ_m(k), which proves L2.
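Before turning to the monotone-series argument, a minimal sketch of one pass of (40)–(45) for a single DG unit is given below, assuming the quadratic cost of (46) and that the feasible set O_m is a simple box [P_min, P_max]. The helper name `unit_step`, the gains κ, ε, ε̂ and all numeric values are illustrative assumptions, not the tuned values used in the simulations.

```python
# One iteration k of the distributed strategy (40)-(45) for a single DG unit.
import numpy as np

def unit_step(lam_m, lam_all, c_row, P_meas, omega_meas, D, omega_ref,
              gamma, beta, P_min, P_max,
              kappa=1e-4, eps=1e-6, eps_hat=0.05, omega0=1.0):
    # (40): closed-form minimizer of gamma*P^2 + (beta - lam_m)*P over the box
    P_opt = float(np.clip((lam_m - beta) / (2.0 * gamma), P_min, P_max))
    dP = P_meas - P_opt                              # (41): local power mismatch
    lam_next = float(c_row @ lam_all) + kappa * dP   # (42): consensus + feedback
    D_next = D + eps * dP                            # (43): droop-gain update
    d_omega = omega0 - omega_meas                    # (44): frequency deviation
    omega_next = omega_ref + eps_hat * d_omega       # (45): reference correction
    return P_opt, lam_next, D_next, omega_next

# Illustrative call: own lambda plus the estimates received from two neighbours.
out = unit_step(lam_m=450.0, lam_all=np.array([450.0, 449.6, 450.4]),
                c_row=np.array([0.4, 0.3, 0.3]), P_meas=520.0, omega_meas=0.999,
                D=0.002, omega_ref=1.0, gamma=0.45, beta=12.0,
                P_min=100.0, P_max=800.0)
print(out)
```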
For each iteration k ∈ Z+ ∪ {0}, construct the series of variables (ΔP_m^G(k), λ_m(k), P_m^{G0}(k), D_m^P(k), Δω(k), ω_m(k)) following the iterative strategy in (41)–(45). Proposition 1 will be proved if the series defined using (41)–(45) converge and are monotonic. To show this, let k = 0; thus λ_m(0) = 0, ∀m ∈ G, while D_m^P(0) is defined as in (20) and ω_m(0) = ω_0, as explained in Sec. IV. Since λ_m(0) = 0, then P_m^{G0}(0) = \underline{P}_m^G, due to (50). Moreover, P_m^G > 0, since C1 holds and the DG units in VCM correct the active power mismatch in the microgrid. Thus, ΔP_m^G(0) > 0.

For k = 1, λ_m(1) > λ_m(0) and D_m^P(1) > D_m^P(0), since ΔP_m^G(0) > 0 and κ > 0, ε > 0 in (42) and (43). As λ_m(1) > λ_m(0), then P_m^{G0}(1) > P_m^{G0}(0) in (40), due to L2. Moreover, P_m^G(1) < P_m^G(0), since D_m^P(1) > D_m^P(0), due to L1. Finally, Δω(1) > 0, since in islanded droop-based microgrids the frequency is under the nominal value when the frequency reference is set to ω_0 [33], as for k = 0. Thus, ω_m(1) > ω_m(0), since ε̂ > 0, and as a consequence Δω(1) < Δω(0), which means that the frequency deviation of the microgrid is reduced, considering also that C3 holds (see in Fig. 1 how ω behaves when ω_m increases).

Because of the iterative nature of (41)–(45), it can be verified that the following monotonic series exist for k = 2, 3, 4, ...:

{P_m^G(k)} : P_m^G(0) > P_m^G(1) > · · · > \underline{P}_m^G
{P_m^{G0}(k)} : P_m^{G0}(0) < P_m^{G0}(1) < · · · < \overline{P}_m^G
{ΔP_m^G(k)} : ΔP_m^G(0) > ΔP_m^G(1) > · · · > 0
{λ_m(k)} : λ_m(0) < λ_m(1) < · · · < λ_m(k)
{D_m^P(k)} : D_m^P(0) < D_m^P(1) < · · · < D_m^P(k)
{Δω(k)} : Δω(0) > Δω(1) > · · · > 0
{ω_m(k)} : ω_m(0) < ω_m(1) < · · · < ω_m(k).

In the above series, \underline{P}_m^G is a bound of {P_m^G(k)}, while \overline{P}_m^G is a bound of {P_m^{G0}(k)}. Moreover, the series {P_m^G(k)} is monotonically decreasing, while the series {P_m^{G0}(k)} is monotonically increasing. Due to this, the series {ΔP_m^G(k)} is monotonically decreasing and bounded by 0. These series can be technically interpreted as follows: during the optimization process, the DG units cannot supply less than \underline{P}_m^G, and the optimal dispatch cannot be greater than \overline{P}_m^G. The fact that the series {ΔP_m^G(k)} is decreasing implies that P_m^G converges to P_m^{G0}. The series {λ_m(k)}, {D_m^P(k)} and {ω_m(k)} are not bounded, but they are limited, since their definitions in (42), (43) and (45) are functions of ΔP_m^G(k) and Δω(k), which are bounded and monotonically converge to 0. Additionally, as C2 holds, all the λ_m(k) converge to λ through the consensus matrix C = [c_mn]. Thus, based on the fact that all these series are monotonic and limited, P1 is proved to be valid.

REFERENCES
[1] J. Zhu, Optimization of Power System Operation. USA: Wiley-IEEE, 2009. [2] D. Olivares, C. Cañizares, and M. Kazerani, “A centralized energy management system for isolated microgrids,” IEEE Trans. Smart Grid, vol. 5, no. 4, pp. 1864–1875, July 2014. [3] K. H. Youssef, “Power quality constrained optimal management of unbalanced smart microgrids during scheduled multiple transitions between grid-connected and islanded modes,” IEEE Trans. Smart Grid, vol. 8, no. 1, pp. 457–464, Jan.
2017. [4] P. P. Vergara, J. C. Lopez, L. C. da Silva, and M. J. Rider, “Securityconstrained optimal energy management system for three-phase residential microgrids,” Electr. Pow. Syst. Res., vol. 146, pp. 371–382, May 2017. [5] W. Shi, X. Xie, C. C. Chu, and R. Gadh, “Distributed optimal energy management in microgrids,” in IEEE Trans. Smart Grid, vol. 6, no. 3, May 2015, pp. 1137–1146. [6] Y. Wang, S. Wang, and L. Wu, “Distributed optimization approaches for emerging power systems operation: A review,” Electr. Pow. Syst. Res., vol. 144, pp. 127 – 135, 2017. [7] Z. Zhang and M. Y. Chow, “Convergence analysis of the incremental cost consensus algorithm under different communication network topologies in a smart grid,” IEEE Trans. Power Systems, vol. 27, no. 4, pp. 1761– 1768, Nov. 2012. [8] S. Yang, S. Tan, and J. X. Xu, “Consensus based approach for economic dispatch problem in a smart grid,” IEEE Trans. Power Systems, vol. 28, no. 4, pp. 4416–4426, Nov 2013. [9] G. Hug, S. Kar, and C. Wu, “Consensus and innovations approach for distributed multiagent coordination in a microgrid,” IEEE Trans. Smart Grid, vol. 6, no. 4, pp. 1893–1903, July 2015. [10] G. Chen, J. Ren, and E. N. Feng, “Distributed finite-time economic dispatch of a network of energy resources,” IEEE Trans. Smart Grid, to be published, 2016. [11] F. Guo, C. Wen, J. Mao, and Y. D. Song, “Distributed economic dispatch for smart grids with random wind power,” IEEE Trans. Smart Grid, vol. 7, no. 3, pp. 1572–1583, May 2016. [12] W. Zhang, W. Liu, X. Wang, L. Liu, and F. Ferrese, “Online optimal generation control based on constrained distributed gradient algorithm,” IEEE Trans. Power Systems, vol. 30, no. 1, pp. 35–45, Jan 2015. [13] R. Mudumbai, S. Dasgupta, and B. B. Cho, “Distributed control for optimal economic dispatch of a network of heterogeneous power generators,” IEEE Trans. Power Systems, vol. 27, no. 4, pp. 1750–1760, Nov 2012. [14] G. Binetti, A. Davoudi, F. L. Lewis, D. Naso, and B. Turchiano, “Distributed consensus-based economic dispatch with transmission losses,” IEEE Trans. Power Systems, vol. 29, no. 4, pp. 1711–1720, July 2014. [15] Y. Xu and Z. Li, “Distributed optimal resource management based on the consensus algorithm in a microgrid,” IEEE Trans. on Ind. Electr., vol. 62, no. 4, pp. 2584–2592, April 2015. [16] G. Chen, F. L. Lewis, E. N. Feng, and Y. Song, “Distributed optimal active power control of multiple generation systems,” IEEE Trans. Ind. Electr., vol. 62, no. 11, pp. 7079–7090, Nov 2015. [17] H. Pourbabak, J. Luo, T. Chen, and W. Su, “A novel consensus-based distributed algorithm for economic dispatch based on local estimation of power mismatch,” IEEE Trans. Smart Grid, to be published, 2017. [18] J. M. Guerrero, J. C. Vasquez, J. Matas, L. G. de Vicuña, and M. Castilla, “Hierarchical control of droop-controlled AC and DC microgrids - A general approach toward standardization,” IEEE Trans. Ind. Electron., vol. 58, no. 1, pp. 158–172, Jan. 2011. [19] W. J. Ma, J. Wang, V. Gupta, and C. Chen, “Distributed energy management for networked microgrids using online alternating direction method of multipliers with regret,” IEEE Trans. Smart Grid, to be published, 2016. [20] S. J. Ahn, J. W. Park, I. Y. Chung, S. I. Moon, S. H. Kang, and S. R. Nam, “Power-sharing method of multiple distributed generators considering control modes and configurations of a microgrid,” IEEE Trans. Power Delivery, vol. 25, no. 3, pp. 2007–2016, July 2010. [21] D. Wu, F. Tang, T. Dragicevic, J. C. Vasquez, and J. M. 
Guerrero, “A control architecture to coordinate renewable energy sources and energy storage systems in islanded microgrids,” IEEE Trans. Smart Grid, vol. 6, no. 3, pp. 1156–1166, May 2015. [22] J. Matas, M. Castilla, L. G. d. Vicuña, J. Miret, and J. C. Vasquez, “Virtual impedance loop for droop-controlled single-phase parallel inverters using a second-order general-integrator scheme,” IEEE Trans. Power Electronics, vol. 25, no. 12, pp. 2993–3002, Dec. 2010. [23] J. M. Rey, P. Marti, M. Velasco, J. Miret, and M. Castilla, “Secondary switched control with no communications for islanded microgrids,” IEEE Trans. Ind. Electronics, vol. 64, no. 11, pp. 8534–8545, Nov. 2017. [24] R. Olfati-Saber and R. M. Murray, “Consensus problems in networks of agents with switching topology and time-delays,” IEEE Trans. Automatic Control, vol. 49, no. 9, pp. 1520–1533, Sep. 2004. [25] R. Olfati-Saber, J. A. Fax, and R. M. Murray, “Consensus and cooperation in networked multi-agent systems,” Proceedings of the IEEE, vol. 95, no. 1, pp. 215–233, Jan. 2007. [26] M. Zhu and S. Martinez, “On distributed convex optimization under inequality and equality constraints,” IEEE Trans. Automatic Control, vol. 57, no. 1, pp. 151–164, Jan. 2012. [27] F. Chen, M. Chen, Q. Li, K. Meng, Y. Zheng, J. M. Guerrero, and D. Abbott, “Cost-based droop schemes for economic dispatch in islanded microgrids,” IEEE Trans. Smart Grid, vol. 8, no. 1, pp. 63–74, Jan. 2017. [28] Q. Shafiee, C. Stefanovic, T. Dragicevic, P. Popovski, J. C. Vasquez, and J. M. Guerrero, “Robust networked control scheme for distributed secondary control of islanded microgrids,” IEEE Trans. Ind. Electronics, vol. 61, no. 10, pp. 5363–5374, Oct. 2014. [29] X. Lu, X. Yu, J. Lai, Y. Wang, and J. M. Guerrero, “A novel distributed secondary coordination control approach for islanded microgrids,” IEEE Trans. Smart Grid, to be published, 2017. [30] M. M. A. Abdelaziz, H. E. Farag, E. F. El-Saadany, and Y. Mohamed, “A novel and generalized three-phase power flow algorithm for islanded microgrids using a Newton trust region method,” IEEE Trans. Power Systems, vol. 28, no. 1, pp. 190–201, Feb. 2013. [31] A. Wachter and L. T. Biegler, “On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming,” Math. Prog., vol. 106, no. 1, pp. 25–57, 2006. [32] G. K. V. Raju and P. R. Bijwe, “Efficient reconfiguration of balanced and unbalanced distribution systems for loss minimisation,” IET Gen. Trans. Distr., vol. 2, no. 1, pp. 7–12, Jan. 2008. [33] P. P. Vergara, J. C. Lopez, M. J. Rider, and L. C. P. da Silva, “Optimal operation of unbalanced three-phase islanded droop-based microgrids,” IEEE Trans. Smart Grid, to be published, 2017. [34] Standard 1547: IEEE Standard for Interconnecting Distributed Resources with Electric Power Systems, IEEE Interconnecting Committee Std., 2013. [35] A. Wood and B. Wollenberg, Power Generation, Operation and Control. New York, NY, USA: Wiley, 1996. Pedro P. Vergara was born in Barranquilla, Colombia, in 1990. He received the B.Sc. degree in electronic engineering from the Universidad Industrial de Santander, Bucaramanga, Colombia, in 2012, and the M.Sc. degree in electrical engineering from the University of Campinas, UNICAMP, Campinas, Brazil, in 2015. He is currently working toward the Ph.D. degree in electrical engineering at the University of Campinas and at the University of Southern Denmark, SDU, Denmark, as part of a double degree program between UNICAMP and SDU.
His current research interests include the development of methodologies for the optimization, planning, and control of electrical distribution systems with high penetration of distributed generation and renewable energy systems. Juan M. Rey was born in Bucaramanga, Colombia, in 1989. He received the B.S. degree in electrical engineering from the Universidad Industrial de Santander, Bucaramanga, Colombia, in 2012. He is currently working toward the Ph.D. degree in the Department of Electronic Engineering, Technical University of Catalonia, Spain. Since 2013, he has been with the Electrical, Electronic and Telecommunications Engineering School (E3T), Universidad Industrial de Santander, Bucaramanga, Colombia, where he is currently an Assistant Professor. His research interests are power electronics and control for distributed generation and microgrids. Hamid R. Shaker received his Ph.D. in 2010 from Aalborg University, Denmark. He has been a visiting researcher at MIT, a post-doctoral researcher and an assistant professor at Aalborg University within 2009-2013, and an associate professor at the Norwegian University of Science and Technology (NTNU), Norway, within 2013-2014. Since 2014, he has been an associate professor at the Center for Energy Informatics, University of Southern Denmark. His research interests are in the area of fault detection and diagnosis, process monitoring, and modeling and control with applications in energy technology. His contributions have been reported in more than 90 journal and conference publications. He serves three journals as a member of the editorial board and has been an IPC member for several conferences. Josep M. Guerrero (S'01-M'04-SM'08-FM'15) received the B.S. degree in telecommunications engineering, the M.S. degree in electronics engineering, and the Ph.D. degree in power electronics from the Technical University of Catalonia, Barcelona, in 1997, 2000 and 2003, respectively. Since 2011, he has been a Full Professor with the Department of Energy Technology, Aalborg University, Denmark, where he is responsible for the Microgrid Research Program (www.microgrids.et.aau.dk). His research interests are oriented to different microgrid aspects, including power electronics, distributed energy-storage systems, hierarchical and cooperative control, energy management systems, smart metering and the internet of things for AC/DC microgrid clusters and islanded minigrids. Prof. Guerrero is an Associate Editor for a number of IEEE TRANSACTIONS. He received the best paper award of the IEEE Transactions on Energy Conversion for the period 2014-2015, and the best paper prize of IEEE-PES in 2015. He also received the best paper award of the Journal of Power Electronics in 2016. In 2014, 2015, 2016, and 2017 he was recognized by Thomson Reuters as a Highly Cited Researcher, and in 2015 he was elevated to IEEE Fellow for his contributions on distributed power systems and microgrids. Bo Nørregaard Jørgensen is founder and head of the Center for Energy Informatics at the University of Southern Denmark. The Center for Energy Informatics is an interdisciplinary research center focusing on innovative solutions for facilitating the transition towards a smart sustainable energy system. The center's research is conducted in close collaboration with industrial partners, public bodies, and government agencies. As head of the center, Dr. Jørgensen represents the University of Southern Denmark at national and international events, in advisory boards and government reference committees. He is an appointed member of the Danish Academy of Technical Sciences. Dr.
Jørgensen's research focuses on the integration and management of demand-side flexibility with supply-side fluctuations, from the business and technological perspectives. He holds a Ph.D. in Computer Science from the University of Southern Denmark, and M.Sc. and B.Sc. degrees in Computer System Engineering from Odense University, Denmark. Luiz C. P. da Silva graduated in electrical engineering from the Federal University of Goias, Goias, Brazil, in 1995, and received the M.Sc. and Ph.D. degrees in electrical engineering from the University of Campinas, UNICAMP, Campinas, Brazil, in 1997 and 2001, respectively. From 1999 to 2000, he was a visiting Ph.D. student at the University of Alberta, Edmonton, AB, Canada. Currently, he is an Associate Professor at the University of Campinas, UNICAMP, Campinas, Brazil. His research interests are power system transmission and distribution. -----
{ "disclaimer": "Notice: The following paper fields have been elided by the publisher: {'references'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1109/TSG.2018.2820748?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1109/TSG.2018.2820748, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "other-oa", "status": "GREEN", "url": "https://findresearcher.sdu.dk/ws/files/160389217/Distributed_Strategy_for_Optimal_Dispatch_of_Unbalanced_Three_Phase_Islanded_Microgrids.pdf" }
2,019
[ "JournalArticle" ]
true
2019-05-01T00:00:00
[]
26,050
en
[ { "category": "Engineering", "source": "s2-fos-model" }, { "category": "Environmental Science", "source": "s2-fos-model" }, { "category": "Economics", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/02038327a403f44d76766bdb875612bc733fb499
[]
0.845957
A Novel Distributed Consensus-Based Approach to Solve the Economic Dispatch Problem Incorporating the Valve-Point Effect and Solar Energy Sources
02038327a403f44d76766bdb875612bc733fb499
Energies
[ { "authorId": "2077526466", "name": "Muhammad Moin" }, { "authorId": "2120578249", "name": "Waqas Ahmed" }, { "authorId": "144811267", "name": "M. Rehan" }, { "authorId": "2069560928", "name": "Muhammad Iqbal" }, { "authorId": "34734215", "name": "N. Ullah" }, { "authorId": "9236658", "name": "Kamran Zeb" }, { "authorId": "35702336", "name": "Waqar Uddin" } ]
{ "alternate_issns": null, "alternate_names": null, "alternate_urls": [ "http://nbn-resolving.de/urn/resolver.pl?urn=urn:nbn:ch:bel-155563", "https://www.mdpi.com/journal/energies", "http://www.mdpi.com/journal/energies" ], "id": "1cd505d9-195d-4f99-b91c-169e872644d4", "issn": "1996-1073", "name": "Energies", "type": "journal", "url": "http://www.e-helvetica.nb.admin.ch/directAccess?callnumber=bel-155563" }
This research focused on the design of a distributed approach using consensus theory to find an optimal solution of the economic dispatch problem (EDP) by considering the quadratic cost function along with the valve-point effect of generators and renewable energy systems (RESs). A distributed consensus approach is presented for the optimal economic dispatch under a complex valve-point effect by accounting for solar energy in addition to conventional power plants. By employing the beta distribution function and communication topology between generators, a new optimality condition for the dispatch problem was formulated. A novel distributed updation law for generation by considering the communication between generators was provided to deal with the valve-point effect. The convergence of the proposed updation law was proved analytically using Lyapunov stability and graph theory. An algorithm for ensuring a distributed economic dispatch via conventional power plants, integrated with solar energy, was addressed. To the best of the authors’ knowledge, a distributed nonlinear EDP approach for dealing with the valve-point loading issue via nonlinear incremental costs has been addressed for the first time. The designed approach was simulated for benchmark systems with and without a generation capacity constraint, and the results were compared with the existing centralized and distributed strategies.
# energies _Article_ ## A Novel Distributed Consensus-Based Approach to Solve the Economic Dispatch Problem Incorporating the Valve-Point Effect and Solar Energy Sources **Muhammad Moin** **[1], Waqas Ahmed** **[1], Muhammad Rehan** **[1,]*, Muhammad Iqbal** **[2], Nasim Ullah** **[3]** **,** **Kamran Zeb** **[4,]*** **and Waqar Uddin** **[5]** 1 Department of Electrical Engineering, Pakistan Institute of Engineering and Applied Sciences (PIEAS), Islamabad 45650, Pakistan 2 Department of Computer Science, National University of Technology (NUTECH), Islamabad 44000, Pakistan 3 Department of Electrical Engineering, College of Engineering, Taif University, P.O. Box 11099, Taif 21944, Saudi Arabia 4 School of Electrical Engineering and Computer Science, National University of Sciences and Technology, Islamabad 44000, Pakistan 5 Department of Electrical Engineering, National University of Technology (NUTECH), Islamabad 44000, Pakistan ***** Correspondence: rehanqau@gamil.com (M.R.); kamran.zeb@seecs.edu.pk (K.Z.) **Citation: Moin, M.; Ahmed, W.;** Rehan, M.; Iqbal, M.; Ullah, N.; Zeb, K.; Uddin, W. A Novel Distributed Consensus-Based Approach to Solve the Economic Dispatch Problem Incorporating the Valve-Point Effect and Solar Energy Sources. Energies **[2023, 16, 447. https://doi.org/](https://doi.org/10.3390/en16010447)** [10.3390/en16010447](https://doi.org/10.3390/en16010447) Academic Editors: David Borge-Diez and Donato Morea Received: 9 November 2022 Revised: 10 December 2022 Accepted: 21 December 2022 Published: 30 December 2022 **Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons [Attribution (CC BY) license (https://](https://creativecommons.org/licenses/by/4.0/) [creativecommons.org/licenses/by/](https://creativecommons.org/licenses/by/4.0/) 4.0/). **Abstract: This research focused on the design of a distributed approach using consensus theory to** find an optimal solution of the economic dispatch problem (EDP) by considering the quadratic cost function along with the valve-point effect of generators and renewable energy systems (RESs). A distributed consensus approach is presented for the optimal economic dispatch under a complex valve-point effect by accounting for solar energy in addition to conventional power plants. By employing the beta distribution function and communication topology between generators, a new optimality condition for the dispatch problem was formulated. A novel distributed updation law for generation by considering the communication between generators was provided to deal with the valve-point effect. The convergence of the proposed updation law was proved analytically using Lyapunov stability and graph theory. An algorithm for ensuring a distributed economic dispatch via conventional power plants, integrated with solar energy, was addressed. To the best of the authors’ knowledge, a distributed nonlinear EDP approach for dealing with the valve-point loading issue via nonlinear incremental costs has been addressed for the first time. The designed approach was simulated for benchmark systems with and without a generation capacity constraint, and the results were compared with the existing centralized and distributed strategies. **Keywords: consensus; distributed algorithm; economic dispatch problem; renewable energy sources;** incremental cost; non-smooth cost function; optimization; valve-point loading effect **1. 
Introduction**

In recent years, great attention has been given to the study and development of optimization techniques; see, for instance, [1–5]. One of the fundamental optimization problems in power systems is deciding the output power of generation facilities that minimizes the total generation cost, which is commonly referred to as the economic dispatch problem (EDP). The EDP has been widely investigated since the advent of computers, and efforts have been focused on developing centralized optimization algorithms [6,7]. Particle swarm optimization (PSO) is the most popular among the metaheuristic techniques, despite the fact that it may not converge to an optimal solution in the case of the non-convex power system optimization problem [8]. Inspired by PSO, economic dispatch algorithms were investigated by considering generation constraints [9] and wind power uncertainty [10].

The consideration of the valve-point effect (VPE), resulting from the sequential opening of control valves in thermal power plants, makes the cost function highly nonlinear. Due to the VPE, ripples float over the cost function, which may be modeled as rectified sine waves. Different techniques are well established in the literature for solving the complex EDP considering the VPE. A genetic algorithm with a multi-parent crossover solution for the EDP with the VPE was presented in [11]. The coalescence of incremental rates and bee colony optimization methods was used in [12]. The authors in [13] used an iterative piecewise linear function approximation and mixed-integer programming to find an optimal solution, and the obtained solution was then improved using nonlinear programming models. In [14] (see also [15]), a multi-population-based differential evolution algorithm was applied to optimize the cost function with the VPE. All of these approaches for solving the EDP with the VPE are centralized and require a central controller to receive information from the available nodes. Emerging renewable energy resource (RES) technologies, such as solar energy, wind energy, and hydro-power, have influenced researchers to devise methods to solve the EDP considering integrated power plants. The authors in [16] exploited PSO, Newton–Raphson, and binary integer programming methods for finding a combined optimized solution for solar-integrated power systems. The work of [17] considered a modified genetic algorithm for thermal power cost optimization along with wind–solar constraints for a reduction in toxic emissions. The concept of a multi-generation system based on photovoltaic cells along with a battery system for cost-of-energy optimization was revealed in [18]. To attain a low-carbon economic dispatch through the consideration of bio-gas, wind, and solar sources, the work of [19] considered a stochastic optimization approach. The methods of [20,21] accounted for low-carbon energy optimization under various constraints by considering uncertainties in solar irradiance and energy efficiency, respectively. The major common concern with the above-mentioned algorithms [11–14,17–21] is that these methods apply a central dispatching facility, which gathers the data of all generation nodes and gives dispatch commands to all nodes accordingly.
The centralized approaches have several concerns, such as a single point of failure (if the central node fails), system insecurity (as the central processor can be vulnerable to cyberattacks), and time delays (due to the communication of all nodes with a central dispatch center). In addition, these centralized optimization methods raise data-privacy issues in a competitive environment, increase the load on the main server due to requests from all generating nodes, and have computational issues due to the central facility. Owing to these shortcomings, efforts have been devoted in the recent era to investigating distributed techniques, as observed in [22–29]. Recently, the cooperative control of multi-agent systems (MASs) has been widely investigated, and the EDP has been transformed into a consensus problem for MASs. Some recent works applying consensus theory to resolve the EDP in a distributed manner were discussed in [30–35]. The authors in [36] showed that the distributed EDP is solvable and that an optimal solution can be obtained if the incremental costs (ICs) of all generation facilities reach an agreement. In [37], a fully distributed control strategy was designed using two-level control, with an upper level for discovering the reference of optimal power generation and a lower level for reference tracking. The method in [38] utilized stochastic programming along with robust and distributed optimization methods to minimize the overall cost of all generation units, including uncertain and intermittent renewable generation. The work in [39] developed a distributed scheme via the alternating direction method of multipliers for resolving the EDP. To address communication delays, it was studied in [40] that a discrete-time consensus approach should be adopted, because information flows discretely through the underlying communication network. A distributed consensus strategy for the EDP with communication delays was presented in [41]. Adaptive consensus-based strategies for the EDP under communication uncertainties were designed in [42,43]. Based on the literature review, a brief summary of the different areas considered in the existing works is provided in Table 1. Most of the attention in the above-mentioned literature is paid to minimizing a
It was shown that updating the generators’ output power in the consensus-based optimization protocol ultimately results in a consensus of the proposed modified ICs with the VPE under an initial supply–demand balance assumption according to RESs. The authors further improved the distributed algorithm to deal with the generation capacity constraint by adding a power limit compensation factor and by omitting the initial supply–demand balance restriction. It was shown that the proposed algorithms are able to solve the EDP with or without the generator capacity constraint, while the power demand and supply is balanced in addition to the consideration of RESs. The novel contributions of the presented work are four-fold: 1. _Optimality Condition under VPE: A new optimality condition for the EDP under the_ VPE of power plants, integrated with solar energy (for the distributed optimization case), was revealed via the Lagrangian method. In contrast to existing conditions [2, 30,33,36,42,43,45,46], the proposed conditions employ modified ICs with the VPE, and can be applied to more complicated scenarios of the EDP for considering the VPE. 2. _Distributed Dispatching Strategy: A novel distributed approach for the optimal solution_ of the EDP under the VPE and solar energy is proposed. To the best of our knowledge, a distributed method by considering the communication topology between generators, without requiring a central dispatch facility, under the nonlinear handling of the VPE, has been provided for the first time. In contrast to central methods [11–14,17–21,36,47, 48], the proposed distributed approach applies a smart-grid concept for cooperation between agents, which supports plug-and-play, privacy of data, a simple generatorlevel handling of the dispatch, and better security against cyber attacks. As opposed to existing centralized strategies in [11–14,17–21], the design of a distributed consensus algorithm avoids single-point failure, ensures the minimum interaction between nodes, reduces the computation burden, reduces lags due to the central facility and promotes the flexible use of communication resources. 3. _Convergence of Algorithm: An analytical convergence analysis of the proposed method_ was performed under VPE constraints, in contrast to the conventional distributed methods [2,30,33,36,42,43,45,46]. The optimal convergence of the proposed approach was guaranteed via analysis through Lyapunov stability theory, dynamics of modified ICs, modified ICs consensus, generation dynamics analysis, and properties of graph theory, which are non-trivial in the analysis. 4. _Consideration of Clean Energy: The integration of solar energy sources with conventional_ thermal power plants has a substantial influence on the cost and emission reduction, which was considered in this study, in contrast with the conventional (distributed) methods [2,30,33,36,42,43,45,46]. The incorporation of green energy sources has a favorable ecological impact and helps conventionally fuelled power plants to achieve better carbon trade-offs, resulting in lower carbon penalties imposed by environmental regulatory authorities. Furthermore, the application of renewable energy plays an important role in stabilizing state GDP because fuel imports are cut significantly. 
Based on these contributions, the proposed approach can be applied for attaining the advantages of the distributed EDP (rather than the central EDP), along with the challenges of the VPE constraint and low-carbon footprints. However, the adaptation of this approach will require smart infrastructure at generating units, including communication devices, smart meters, and real-time computational facilities. The simulation was accomplished on two benchmark test systems, i.e., a ten-unit system and a forty-unit system, to validate the theoretical results, and a comparison was provided with the existing centralized and distributed approaches. In comparison to [36,47,48], the proposed consensus algorithm gives a better optimal cost and requires less CPU time. The remaining paper is organized as follows. In Section 2, the mathematical background of algebraic graphs and consensus in MASs is reviewed. The description of the problem is provided in Section 3. In Section 4, a distributed algorithm for the EDP considering the VPE, with and without the generation capacity constraint, is proposed. In Section 5, simulation results and comparisons are provided to validate the effectiveness of the algorithm. Finally, a conclusion is provided to conclude the article.

**Table 1. Area of research considered in existing works.**

| Area of Research Considered | Works | Limitations |
|---|---|---|
| Methods with VPE | [11–14] | Mostly central optimization |
| Methods concerning RESs | [17–21] | Mostly central optimization |
| Distributed EDP methods | [2,30,33,36,42,43,45,46] | Mostly ignore VPE and RESs |

**2. Preliminaries**

Before presenting a detailed analysis of the proposed algorithm, a mathematical background of algebraic graph theory and the consensus of first-order MASs is provided.

_2.1. Graph Theory_

In a networked system, agents are represented as nodes and the communication between nodes is represented by edges. A graph is defined as G = {V, E}, where V is the set of nodes and E is the set of edges. An undirected edge E_ij in the network is denoted by an unordered pair of vertices (v_i, v_j). The degree of a vertex in an undirected graph is the total number of edges associated with it. For simplicity, it is assumed that there are no self-loops and that the graph is connected [36]. Two important matrices associated with graphs are the adjacency matrix A = [a_ij]_{N×N} and the Laplacian matrix L = [l_ij]_{N×N}. We consider that a_ij = a_ji = 1 if i and j are connected; otherwise, a_ij = 0. The entries of the Laplacian matrix are taken as l_ij = −a_ij for i ≠ j and l_ii = ∑_{j=1, j≠i}^{N} a_ij, which ensures the diffusion property ∑_{j=1}^{N} l_ij = 0. The following lemma is required to prove the main results.

**Lemma 1 ([36]).** 1. The Laplacian matrix for a connected undirected graph has a zero eigenvalue, and the remaining eigenvalues are positive.
2. The second least eigenvalue of the Laplacian matrix, denoted by λ_o(L), validates the following condition: λ_o(L) ≤ (x^T L x)/(x^T x).

_2.2. Consensus of First-Order MASs_

The consensus protocol in MASs is defined as follows [49]:

ẋ_i(t) = u_i(t),
u_i(t) = ∑_{j=1, j≠i}^{N} a_ij (x_j(t) − x_i(t)) = − ∑_{j=1}^{N} l_ij x_j(t),   (1)

where u_i(t) is referred to as the control signal, x_i(t) is the state vector, which can represent a physical quantity, a_ij are the adjacency matrix entries, and l_ij are the Laplacian matrix entries. Consensus in multi-agents is achieved if the following holds:

lim_{t→∞} ∥x_i(t) − x_j(t)∥ = 0, ∀i, j = 1, 2, · · ·, N.   (2)
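As a concrete illustration of (1) and (2), the following minimal sketch integrates the protocol with a forward-Euler step on a small ring graph; the graph, initial states, and step size are illustrative choices.

```python
# Sketch of the first-order consensus protocol (1): x_dot = -L x, integrated
# with forward Euler on a 4-node ring. The states converge to the average of
# the initial states, in line with the consensus condition (2).
import numpy as np

A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)   # adjacency of a 4-node ring
L = np.diag(A.sum(axis=1)) - A              # l_ii = sum_j a_ij, l_ij = -a_ij

x = np.array([4.0, -1.0, 2.5, 0.5])
for _ in range(400):
    x = x - 0.05 * (L @ x)                  # Euler step of x_dot = -L x

print(np.round(x, 4), "initial average:", np.mean([4.0, -1.0, 2.5, 0.5]))
```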
An interesting result on the consensus of multi-agents is established in [50] as follows.

**Lemma 2.** Consensus in multi-agents can be achieved for a connected undirected graph if the following condition holds:

lim_{t→∞} ∥x_i(t) − x^*(t)∥ = 0, ∀i = 1, 2, · · ·, N,   (3)

where x^*(t) = (1/N) ∑_{k=1}^{N} x_k(t) represents the average value of the states of all agents.

**3. System Description**

We assumed a network of N generating facilities working cooperatively to achieve an optimal power dispatch in a power system or smart grid. To this end, a quadratic cost function without the VPE for each generation facility was assumed, which is given as follows:

C_i = a_i + b_i P_i + c_i P_i^2.   (4)

Thermal power plants apply steam to run turbines, which are controlled sequentially through the opening of steam valves. This opening of valves is needed to increase the generation of a unit. However, the effect of this valve opening (namely, the VPE) causes a nonlinear rippling effect in the cost function. Hence, a practical generating unit cannot have a simple quadratic cost function, leading to a highly nonlinear EDP. Including the VPE in the quadratic cost function leads to

C_i^{vpe} = a_i + b_i P_i + c_i P_i^2 + |e_i sin(f_i(P_i^{min} − P_i))|,   (5)

where a_i, b_i, c_i, e_i, f_i > 0 are cost function coefficients, P_i represents the output power of the ith generator, P_i^{min} is the lower bound of the generation capacity, and |e_i sin(f_i(P_i^{min} − P_i))| is the VPE in the cost function. The difference between the cost functions (4) and (5) is depicted in Figure 1. The following mathematical strategy may be employed to estimate the expense of photovoltaic energy (PE) production:

C_SC = ∑_{s=1}^{NSUs} R_{P,s} × MiG_s.   (6)

In this scenario, C_SC represents the cost of solar energy, whereas NSUs and R_{P,s} represent the number of solar units and their power, respectively. It is evident from Figure 1 that (4) is a convex function, whereas (5) is a nonlinear, non-smooth, and non-convex function, which, in turn, inherits the difficulty in devising an optimization algorithm to solve the EDP subject to the VPE. The total cost of the power generation is given by

C_T^{vpe} = (∑_{i=1}^{N} C_i^{vpe}) + C_SC.   (7)

**Figure 1.** The cost function with and without the valve-point effect, plotted against the power output (MW) between P^{min} and P^{max}.

The research objective was to minimize the total generation cost by considering the valve-point loading effect under the constraint that the power demand and generation must be balanced; that is,

min ∑_{i=1}^{N} C_i^{vpe}
s.t. P_D = ∑_{i=1}^{N} P_i + R_{P,s},   (8)

where P_D is the total power demand. Sunlight rays, surrounding temperatures, and the efficiency characteristics of the photovoltaic panel all have a substantial effect on solar power production. Here, we incorporated the beta distribution function (BDF) to calculate the energy production, and the BDF was used to describe solar energy mathematically:

BDF_β(B) = [D(F + G)/(D(F) D(G))] × B^{F−1} (1 − B)^{G−1}, for 0 ⩽ B ⩽ 1, F ⩾ 0, G ⩾ 0; and 0 otherwise,   (9)

where F and G are the parameters of BDF_β and D(·) denotes the gamma function. We can write these parameters in terms of the mean X and standard deviation Z:

F = X (X(X + 1)/Z^2 − 1),   (10)
G = (1 − X) (X(X + 1)/Z^2 − 1).   (11)
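A minimal sketch of this beta model is given below, using the shape parameters exactly as written in (10) and (11); the mean, standard deviation, rating, and the use of numpy's beta sampler are illustrative assumptions.

```python
# Sketch: sampling normalized solar irradiance from the beta PDF in (9), with
# shape parameters computed from the mean X and standard deviation Z as
# written in (10)-(11). All numeric values are illustrative.
import numpy as np

def beta_params(X, Z):
    common = X * (X + 1.0) / Z**2 - 1.0     # shared factor in (10)-(11)
    return X * common, (1.0 - X) * common   # F and G

rng = np.random.default_rng(0)
F, G = beta_params(X=0.55, Z=0.15)
irradiance = rng.beta(F, G, size=1000)      # normalized samples in [0, 1]
print(f"F = {F:.2f}, G = {G:.2f}, mean irradiance = {irradiance.mean():.3f}")
```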
As said before, the following model can be used to predict how solar radiation and ambient temperature affect the solar output:

R_P(t) = N_srs × N_parl [R_{P(SC)} × (R_rad(t)/R_{rad.SC}) × [1 − Θ × (U_cel − U_{cel.SC})]],   (12)
U_cel = U_ambt + (R_rad(t)/R_{rad.stc}) × (U_{nrml.temp} − 20).   (13)

**Assumption 1.** The communication topology between generators is connected.

**Assumption 2.** The initial condition of the generators is such that ∑_{i=1}^{N} P_i(0) + R_{P,s} = P_D.

An important constraint for generators is the capacity constraint, which is given by P_i^{min} ≤ P_i ≤ P_i^{max}, where P_i^{min} and P_i^{max} represent the minimum and maximum generation limits of the ith generator.

**4. Main Results**

Before presenting the main algorithm, the conventional and proposed definitions of the IC for generators are given.

**Definition 1.** The incremental cost of the ith generator (ignoring the VPE) is given by

η_i = ∂C_i/∂P_i = b_i + 2c_i P_i,  i = 1, · · ·, N.   (14)

**Definition 2.** The incremental cost of the ith generator incorporating the VPE has the form

η_{i,f} = ∂C_i^{vpe}/∂P_i = b_i + 2c_i P_i − f_i sgn(g_i) e_i cos(f_i(P_i^{min} − P_i)),   (15)

where g_i = sin(f_i(P_i^{min} − P_i)).

For dealing with the VPE, we applied the modified definition of the IC in Definition 2. Based on this modified definition, the EDP was resolved via the application of η_{i,f} rather than the conventional η_i. Equation (15) can also be written in a convenient form as

η_{i,f} = ∂C_i^{vpe}/∂P_i = η_i + φ_i,   (16)

where φ_i = −f_i sgn(g_i) e_i cos(f_i(P_i^{min} − P_i)).

Note that the above condition provides the relation between the conventional IC and the modified IC for the issue of the VPE. The proposed Definition 2 can be interesting, as it can be applied to deal with the EDP addressing the non-convex valve-point loading effect.

**Remark 1.** _An expression for the IC with the VPE was derived in the recent interesting and motivating study of [44]. This condition is given as η_{i,f} = b_i + 2c_i P_i + f_i e_i cos(mod(f_i(P_i^{min} − P_i), π)), which is also equivalent to the present case of (15). However, the expression (15) is more convenient than the above condition, as the signum function is easier to understand, realize, and implement. It is also easier to approximate than the MOD function. Due to this difficulty in [44], the definition provided in [44] for the IC with the VPE is based on a piecewise linear approximation of the mentioned MOD-based expression. The resultant approach is conservative, due to the loss of information owing to linearization. Furthermore, it is also difficult to design and implement due to the consideration of several regions. The switching between these regions may also cause a discontinuous operation, which can be fatal. The present work is based on the nonlinear and more relevant Definition 2, which does not have the conservatism observed in [44]._
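Since Definition 2 is central to everything that follows, a minimal sketch of the modified IC computation is given below; the coefficient values are illustrative, not unit data from the benchmark systems.

```python
# Sketch of Definition 2: the modified incremental cost under the VPE,
# eta_{i,f} = b + 2*c*P - f*e*sgn(g)*cos(f*(P_min - P)), g = sin(f*(P_min - P)).
# Coefficients below are illustrative.
import math

def incremental_cost_vpe(P, b, c, e, f, P_min):
    g = math.sin(f * (P_min - P))
    sgn_g = (g > 0) - (g < 0)   # signum of g
    return b + 2.0 * c * P - f * e * sgn_g * math.cos(f * (P_min - P))

# Equal modified ICs across all units signal optimality (see Section 4.1 below).
print(incremental_cost_vpe(P=300.0, b=8.0, c=0.005, e=150.0, f=0.063, P_min=100.0))
```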
_4.1. Proposed Optimality Condition_

The optimization problem (8) has an optimal solution if the conditions in Lemma 3 are satisfied.

**Lemma 3.** _The optimal solution of the EDP with the VPE and RESs as in (8) can be obtained if_

η_i + φ_i = η_j + φ_j   (17)

_and_

∑_{i=1}^{N} P_i + R_{P,s} = P_D.   (18)

**Proof.** Using the Lagrange multiplier method, the Lagrange function for (8) was constructed as

L(P_i, λ) = ∑_{i=1}^{N} C_i^{vpe} + λ (P_D − ∑_{i=1}^{N} P_i − R_{P,s}),   (19)

where λ is the Lagrange multiplier. By the application of (5), we attain

L(P_i, λ) = ∑_{i=1}^{N} [a_i + b_i P_i + c_i P_i^2 + |e_i sin(f_i(P_i^{min} − P_i))|] + λ (P_D − ∑_{i=1}^{N} P_i − R_{P,s}).   (20)

Differentiating L(P_i, λ) with respect to P_i leads to

∂L/∂P_i = b_i + 2c_i P_i − f_i sgn(g_i) e_i cos(f_i(P_i^{min} − P_i)) − λ.   (21)

Setting the derivative equal to zero to obtain an optimality condition, we have

η_i + φ_i − λ = 0, i.e., η_i + φ_i = λ.   (22)

The above equation shows that all ICs with the VPE should be equal to a constant. Therefore, we can say that

η_i + φ_i = η_j + φ_j, ∀i, j = 1, · · ·, N.   (23)

In addition, taking the derivative of L(P_i, λ) with respect to the Lagrange multiplier produces

∂L/∂λ = P_D − ∑_{i=1}^{N} P_i − R_{P,s}.   (24)

Setting this derivative equal to zero leads to

∑_{i=1}^{N} P_i + R_{P,s} = P_D.   (25)

This completes our proof.

**Remark 2.** _The conventional distributed IC consensus method [36] (see also [45,46]) does not consider the VPE. Therefore, it has φ_i = 0, ∀i = 1, · · ·, N. By using this condition in the proposed optimality condition of Lemma 3, the generalized optimal condition in (17) reduces to_

η_i = η_j, ∀i, j = 1, · · ·, N.   (26)

_Hence, the proposed condition in Lemma 3 is a generalization of the conventional condition. Our approach supports the use of the VPE for attaining coherency between generators for an effective cost minimization._

_4.2. Proposed Consensus-Based Optimization Protocol_

The IC with the VPE contains a nonlinearity, which is difficult to handle and update in a consensus protocol. Therefore, we proposed a novel consensus-based optimization protocol using the power generation P_i, updating it to reach the consensus of the ICs with the VPE. The designed consensus protocol is as follows:

Ṗ_i = c ∑_{j=1}^{N} a_ij [(b_i + 2c_i P_i − f_i sgn(g_i) e_i cos(f_i(P_i^{min} − P_i))) − (b_j + 2c_j P_j − f_j sgn(g_j) e_j cos(f_j(P_j^{min} − P_j)))],   (27)

with the initial condition ∑_{i=1}^{N} P_i(0) + R_{P,s} = P_D. For the novel proposed method (27), the following condition in Theorem 1 provides the optimal solution of the EDP (8).

**Theorem 1.** _Consider N distributed generators with generations P_i, ∀i = 1, · · ·, N, with individual cost functions (5)–(6) under the VPE, connected via a graph of Assumption 1 and validating Assumption 2. The proposed optimization protocol (27), for c < 0 and under 2c_i > f_i^2 e_i, will ensure the optimal convergence of P_i to P_i^*, where P_i^* is an optimal solution of the problem (8)._

**Proof.** Using the cost functions in (5)–(6), the IC with the VPE is calculated as in (15). Expanding (15) leads to

η_{i,f} = b_i + 2c_i P_i − f_i e_i cos(f_i(P_i^{min} − P_i)), for g_i > 0;
η_{i,f} = b_i + 2c_i P_i, for g_i = 0;
η_{i,f} = b_i + 2c_i P_i + f_i e_i cos(f_i(P_i^{min} − P_i)), for g_i < 0.   (28)

Taking the time derivative, we have

η̇_{i,f} = (2c_i − f_i^2 e_i sin(f_i(P_i^{min} − P_i))) Ṗ_i, for g_i > 0;
η̇_{i,f} = 2c_i Ṗ_i, for g_i = 0;
η̇_{i,f} = (2c_i + f_i^2 e_i sin(f_i(P_i^{min} − P_i))) Ṗ_i, for g_i < 0.   (29)

After combining all of these piecewise functions, we have the generalized dynamics of the IC with the VPE as follows.
η̇_{i,f} = (2c_i − f_i^2 e_i g_i sgn(g_i)) Ṗ_i.   (30)

Equation (30) can also be written as

η̇_{i,f} = s(t, P_i) Ṗ_i,   (31)

where s(t, P_i) = 2c_i − f_i^2 e_i g_i sgn(g_i). It is important to note that the following condition must be satisfied for a guaranteed consensus (which can be relaxed, as discussed later): 2c_i > f_i^2 e_i, which makes s(t, P_i) > 0. From (31), we have

Ṗ_i = η̇_{i,f} / s(t, P_i),   (32)

which indicates that the dynamics of the IC with the VPE and the dynamics of the power generation depend on each other, that is, Ṗ_i ∝ η̇_{i,f}. By multiplying both sides of (27) by s(t, P_i) and writing the result in terms of the IC with the VPE, we can convert the generation dynamics into the dynamics of the IC with the VPE via

η̇_{i,f} = c s(t, P_i) ∑_{j=1}^{N} a_ij (η_{i,f} − η_{j,f}).   (33)

This indicates that the design of the EDP protocol using P_i can ultimately result in the consensus of the ICs with the VPE. In (33), s(t, P_i) is a time-dependent variable. This variable can be transformed into a linear parameter-varying (LPV) model as follows [51]:

s(t, P_i) = Θ_i, where Θ_i ∈ [Θ_min, Θ_max].   (34)

Hence, by the application of the LPV model, the relation (33) becomes

η̇_{i,f} = c Θ_i ∑_{j=1}^{N} a_ij (η_{i,f} − η_{j,f}).   (35)

Now, we develop the consensus error dynamics of the ICs with the VPE. Let ε_i = η_{i,f} − η̄ be the consensus error, where η̄ = ∑_{j=1}^{N} (1/(Θ_j Θ)) η_{j,f} and Θ = ∑_{i=1}^{N} 1/Θ_i. As per Lemma 2, consensus between the ICs with the VPE will be achieved if this consensus error converges to zero. For constructing the error dynamics, we take the time derivative of this error as follows:

ε̇_i = η̇_{i,f} − ∑_{j=1}^{N} (1/(Θ_j Θ)) η̇_{j,f}.   (36)

Applying (35) leads to

ε̇_i = c Θ_i ∑_{j=1}^{N} a_ij (η_{i,f} − η_{j,f}) − (c/Θ) ∑_{j=1}^{N} ∑_{k=1}^{N} a_jk (η_{j,f} − η_{k,f}).   (37)

The term ∑_{j=1}^{N} ∑_{k=1}^{N} a_jk (η_{j,f} − η_{k,f}) reduces to zero, and we are left with

ε̇_i = c Θ_i ∑_{j=1}^{N} a_ij (η_{i,f} − η_{j,f}),   (38)

which can be further written as

ε̇_i = c Θ_i ∑_{j=1}^{N} a_ij (η_{i,f} − η̄ + η̄ − η_{j,f}).   (39)

The compact form of the error dynamics is attained as follows:

ε̇_i = c Θ_i ∑_{j=1}^{N} a_ij (ε_i − ε_j) = c Θ_i ∑_{j=1}^{N} l_ij ε_j.   (40)

After attaining the error dynamics for the ICs with the VPE, we show that this error converges to the origin. This convergence is required to attain the first optimality condition in Lemma 3. In addition, we also show that the supply–demand condition holds. The conditions for the consensus of the ICs with the VPE and the supply–demand balance are investigated in Appendix A. By the application of Lemma 3, the proposed consensus-based optimization protocol (27) guarantees the convergence of P_i to the optimal solution P_i^* of (8). This completes the proof.
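To make the behaviour of (27) concrete, the following minimal sketch integrates the protocol with a forward-Euler step for three illustrative units on a fully connected graph. The coefficients are invented, but chosen to satisfy 2c_i > f_i^2 e_i as required by Theorem 1, and the symmetric adjacency keeps the total generation, and hence the balance of Assumption 2, invariant.

```python
# Sketch: Euler discretization of the proposed protocol (27) on three units.
# With a symmetric adjacency, sum(P) is invariant, so the initial
# supply-demand balance is preserved while the modified ICs of Definition 2
# are driven toward consensus. All coefficients are illustrative.
import numpy as np

b  = np.array([8.0, 7.0, 9.0])
c2 = np.array([0.006, 0.005, 0.007])
e  = np.array([50.0, 40.0, 60.0])
f  = np.array([0.010, 0.012, 0.009])      # chosen so 2*c2 > f**2 * e
P_min = np.array([100.0, 120.0, 90.0])

A = np.ones((3, 3)) - np.eye(3)           # fully connected 3-unit graph
L = np.diag(A.sum(axis=1)) - A            # graph Laplacian

def eta_vpe(P):                           # modified IC of Definition 2
    g = np.sin(f * (P_min - P))
    return b + 2 * c2 * P - f * e * np.sign(g) * np.cos(f * (P_min - P))

P = np.array([600.0, 700.0, 700.0])       # satisfies the balance at k = 0
c_gain, dt = -2.0, 1.0                    # c < 0, per Theorem 1
for _ in range(500):
    P = P + dt * c_gain * (L @ eta_vpe(P))  # P_dot_i = c * sum_j a_ij (eta_i - eta_j)

print("modified ICs:", np.round(eta_vpe(P), 3))   # nearly equal at convergence
print("total generation:", round(P.sum(), 6))     # unchanged from the start
```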
The conventional distributed optimization methods, which ignore the VPE, can be applied for the initial solution, while the presented method can be used for fine-tuning. Moreover, if different operational constraints are considered, then these constraints will drive the solution towards the global one.

**Remark 4.** In Theorem 1, a distributed consensus-based algorithm is designed to dispatch power in a distributed manner in the presence of the VPE and RESs. This is different from conventional distributed strategies, as they only consider a quadratic cost function [30,33,36,42,43,45,46].

**Remark 5.** Conventional distributed approaches use the IC as the consensus protocol variable [30,33,36,42,43,45,46]. In our approach, a modified IC with the VPE was taken as the consensus variable. In addition, the protocol's update variable was also different (the power generation P_i). The inclusion of the VPE in the ICs and the variation in the protocol update variable for (27) allow the proposed distributed approach to be applied to a complex objective function with the valve-point loading effect.

**Remark 6.** In this approach, the LPV model was used to transform a time-dependent variable through s(t, P_i) = Θ_i, where Θ_i ∈ [Θ_min, Θ_max], to reach the consensus of the ICs with the VPE. The proposed optimization protocol is different from the existing studies, as it contains the highly nonlinear terms f_i sgn(g_i) e_i cos(f_i(P_i^min − P_i)) and f_j sgn(g_j) e_j cos(f_j(P_j^min − P_j)), rather than linear terms as in [30,33,36,42,43,45,46]. These terms appear due to the novel distributed optimization scenario of the VPE considered in the present study. It should also be noted that the optimization analysis for the highly nonlinear protocol (27) is itself a challenging research task. The presented proof required the generation and IC dynamics with valve-point nonlinearities, LPV modeling, and LPV-based modified IC dynamics. Even the presented Lyapunov function and stability analysis are based on the LPV parameter Θ_i.

**Remark 7.** The presented EDP approach of Theorem 1 requires 2c_i > f_i^2 e_i to make s(t, P_i) > 0, which is a limitation of the proposed method. As s(t, P_i) = 2c_i − f_i^2 e_i g_i sgn(g_i) for g_i = sin(f_i(P_i^min − P_i)), the sign of g_i can be either positive or negative (with unity gain). Usually, we have c_i > 0, and a negative sign of g_i will also contribute towards s(t, P_i) > 0. Therefore, the term s(t, P_i) can have a positive value for most of the time, even if 2c_i > f_i^2 e_i is not validated. The expected values of Θ_i for i = 1, ..., N can be positive, resulting in the consensus of the expected values of the modified ICs. A simulation study is also provided in the next section to demonstrate the relaxation of the constraint 2c_i > f_i^2 e_i. The simulation comparison demonstrates that the presented approach is still better than the conventional distributed optimization schemes.

**Remark 8.** The problem of an optimal dispatch under the complex nonlinear VPE, without any linearization, is formulated in the framework of distributed consensus-based optimization. To the best of our knowledge, a nonlinear consensus-based distributed approach for the EDP under the VPE for smart-grid applications has been formulated for the first time. The proof of convergence analysis was provided, which is a non-trivial research problem for a distributed strategy. The problem becomes complicated because a central processor and the collection of information at a central unit are dispensed with in our study.
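The piecewise modified IC in (28), its slope s(t, P_i) from (30)–(31), and the sufficient condition 2c_i > f_i^2 e_i of Theorem 1 can all be checked numerically. The following is a minimal Python/NumPy sketch using the coefficients of unit 1 from Table 2; the helper names are ours, not the paper's.

```python
import numpy as np

# Minimal sketch: VPE cost (5), modified incremental cost (28), and slope s(t, P) from
# (30)-(31), evaluated for unit 1 of Table 2. Helper names are ours, not the paper's.
a, b, c, e, f, p_min, p_max = 1000.403, 40.5407, 0.12951, 33.0, 0.0174, 10.0, 55.0

def cost_vpe(p):
    # Quadratic fuel cost plus the rectified-sine valve-point term.
    return a + b * p + c * p**2 + abs(e * np.sin(f * (p_min - p)))

def ic_vpe(p):
    # eta_{i,f}: one expression covering the three branches of (28) via sgn(g).
    g = np.sin(f * (p_min - p))
    return b + 2 * c * p - np.sign(g) * f * e * np.cos(f * (p_min - p))

def s_coeff(p):
    # s(t, P) = 2c - f^2 e g sgn(g); Theorem 1 asks for 2c > f^2 e so that s > 0.
    g = np.sin(f * (p_min - p))
    return 2 * c - f**2 * e * g * np.sign(g)

print(2 * c > f**2 * e)                                  # True: the condition holds
print(s_coeff(np.linspace(p_min, p_max, 500)).min())     # stays well above zero
```

For this unit, f_i^2 e_i is roughly 0.01 while 2c_i is roughly 0.26, so s(t, P_i) remains strictly positive over the whole operating range.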
To solve the EDP subject to the VPE in a distributed manner, the proposed distributed algorithm is summarized in steps in Algorithm 1. The proposed approaches in Theorem 1 and Algorithm 1 remain valid as long as Assumption 2 holds from the communication graph topology point of view. However, if a graph has more connections, the convergence of the algorithm can be faster. It should also be noted that the convergence of the proposed optimization protocol (27) can be improved by increasing the magnitude of c; however, this can also amplify noise effects.

**Algorithm 1: Algorithm to solve the EDP with the VPE and RESs**

Input: P_D − R_{P,s}, a_ij
Output: P_i

1. Initialize the generator parameters a_i, b_i, c_i, e_i, f_i, P_i^min, P_i^max, and the tolerance τ.
2. Set the initial generations according to ∑_{i=1}^{N} P_i(0) + R_{P,s} = P_D.
3. Choose c < 0.
4. While |∑ a_ij(η_{i,f} − η_{j,f})| > τ do
5.   Each unit computes its IC with the VPE, given by η_{i,f} = b_i + 2c_i P_i − f_i sgn(g_i) e_i cos(f_i(P_i^min − P_i)).
6.   All generation units share b_i + 2c_i P_i − f_i sgn(g_i) e_i cos(f_i(P_i^min − P_i)) with their neighbours according to the underlying communication topology.
7.   Each generator updates P_i according to (27).
8. End if |∑ a_ij(η_{i,f} − η_{j,f})| ≤ τ.

_4.3. Extension to Generator's Capacity Constraints_

The consensus protocol in (27) does not take the generator's capacity constraint into account and is hence unable to solve the EDP with the VPE in the presence of the capacity limit constraint. For this protocol to be able to solve this optimization problem, a power limit compensation factor, along with a conditional statement regulating the generation constraint, was added. The proposed protocol (27) can be modified as follows:

Ṗ_i = 0,  if P_i ≤ P_i^min,
Ṗ_i = c ∑_{j=1}^{N} a_ij (η_{i,f} − η_{j,f}) + δ_i,  if P_i^min ≤ P_i ≤ P_i^max,  (41)
Ṗ_i = 0,  if P_i ≥ P_i^max,

where δ_i = −c_0 ΔP_i and c_0 > 0. The term ΔP_i represents an estimate of the power mismatch for the ith generation facility, computed via local knowledge. The estimate of the local power mismatch can be determined through local communication with neighbouring units.

**5. Simulation Results and Discussions**

_5.1. Simulation_

In this subsection, the designed distributed algorithm is simulated, with and without the generation capacity constraint, to validate the results of the designed strategy. The simulations were carried out on an Intel Core i7-3520M CPU @ 2.90 GHz processor equipped with 4 GB RAM. For the numerical simulations, two benchmark test systems were selected: a ten-unit system with P_D = 2000 MW and a forty-unit system with P_D = 10500 MW. The data set for both test systems was taken from [48]. The unit data for the ten-unit system are depicted in Table 2. The communication topology graph for the generators in the case of the ten-unit system is shown in Figure 2. For the forty-unit system, a randomly generated connected graph was considered.

**Figure 2. Communication topology graph for a ten-unit system.**
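Before turning to the benchmark data, the consensus update of Algorithm 1 can be prototyped in a few lines. The sketch below integrates protocol (27) with forward Euler on a three-unit toy system over a line graph; the coefficients are illustrative, not those of Table 2.

```python
import numpy as np

# Toy three-unit run of Algorithm 1 / protocol (27): forward-Euler integration of
# P_dot_i = c * sum_j a_ij (eta_i - eta_j) with c < 0. Coefficients are illustrative.
b = np.array([40.0, 39.0, 41.0]); c2 = np.array([0.10, 0.12, 0.09])
e = np.array([3.0, 2.5, 3.5]);    f = np.array([0.017, 0.018, 0.016])
p_min = np.array([10.0, 10.0, 10.0])

A = np.array([[0., 1., 0.],          # line-graph communication topology
              [1., 0., 1.],
              [0., 1., 0.]])
deg = A.sum(axis=1)

P_D, R_Ps = 300.0, 20.0
P = np.array([100.0, 100.0, 80.0])   # sum(P(0)) + R_Ps = P_D, as Algorithm 1 requires
c_gain, dt = -0.1, 0.01

def eta(P):
    g = np.sin(f * (p_min - P))
    return b + 2 * c2 * P - np.sign(g) * f * e * np.cos(f * (p_min - P))

for _ in range(200_000):
    h = eta(P)
    P += dt * c_gain * (deg * h - A @ h)   # sum_j a_ij (eta_i - eta_j), vectorized

print(np.round(eta(P), 4))   # modified ICs reach (near-)consensus
print(P.sum() + R_Ps)        # total supply still matches P_D = 300 (up to float error)
```

Because the adjacency matrix is symmetric, each Euler step leaves ∑_i P_i unchanged, which is exactly the supply–demand invariance (25) exploited in the proof.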
**Table 2. Unit data for ten-unit system.**

| Unit | P_i^min | P_i^max | a_i | b_i | c_i | e_i | f_i |
|------|---------|---------|-----|-----|-----|-----|-----|
| 1 | 10 | 55 | 1000.403 | 40.5407 | 0.12951 | 33 | 0.0174 |
| 2 | 20 | 80 | 950.606 | 39.5804 | 0.10908 | 25 | 0.0178 |
| 3 | 47 | 120 | 900.705 | 36.5104 | 0.12511 | 32 | 0.0162 |
| 4 | 20 | 130 | 800.705 | 39.5104 | 0.12111 | 30 | 0.0168 |
| 5 | 50 | 160 | 756.799 | 38.5390 | 0.15247 | 30 | 0.0148 |
| 6 | 70 | 240 | 451.325 | 46.1592 | 0.10587 | 20 | 0.0163 |
| 7 | 60 | 300 | 1243.531 | 38.3055 | 0.03546 | 20 | 0.0152 |
| 8 | 70 | 340 | 1049.998 | 40.3965 | 0.02803 | 30 | 0.0128 |
| 9 | 135 | 470 | 1658.569 | 36.3278 | 0.02111 | 60 | 0.0136 |
| 10 | 150 | 470 | 1356.659 | 38.2704 | 0.01799 | 40 | 0.0141 |

5.1.1. Simulation on Ten-Unit System without Generation Constraint

In this case, no generation capacity constraint is imposed on the generation units, and the initial condition is set such that ∑_{i=1}^{N} P_i(0) = P_D. The consensus protocol (27) was used, with its parameter selected as c = −0.1 by virtue of Theorem 1. The total output power and the generators' active power are plotted in Figures 3 and 4, respectively. Figure 5 shows that the ICs with the VPE reach consensus. The optimal output power of each generation unit, with the optimal cost and CPU time, is given in Table 3.

**Figure 3. Total active power output without capacity constraint.**

**Figure 4. Output power of ten generation nodes without capacity constraint.**

**Figure 5. Consensus of ICs with the VPE.**

**Table 3. Optimal output power of generation units and total cost in case of no capacity limits.**

| Quantity | Optimal Results |
|----------|-----------------|
| P_1 (MW) | 64.06 |
| P_2 (MW) | 80.42 |
| P_3 (MW) | 80.85 |
| P_4 (MW) | 72.98 |
| P_5 (MW) | 60.23 |
| P_6 (MW) | 53.10 |
| P_7 (MW) | 266.66 |
| P_8 (MW) | 311.62 |
| P_9 (MW) | 494.37 |
| P_10 (MW) | 515.71 |
| Total Generation (MW) | 2000 |
| Cost ×10^5 ($/MWh) | 1.06 |
| CPU Time (s) | 1.7 |

5.1.2. Simulation on Ten-Unit System Using Improved Algorithm with Capacity Constraint

In this case, the improved distributed algorithm (41) is applied to a ten-unit system with a capacity constraint. In addition, the initial condition is not restricted to be equal to P_D. Again, c = −0.1 was selected, and we chose c_0 = 2 for the modified approach (41). The total active output power and the generation units' output power are plotted in Figures 6 and 7, respectively. The initial condition on the total power generation was taken to be 1830 MW. The simulation shows that the algorithm is able to solve the EDP considering the generation capacity constraint and initial conditions other than P_D. Figure 8 illustrates the ICs with the VPE. These ICs tend towards consensus until the generation of a unit saturates due to the capacity constraint, and therefore consensus is not fully achieved: the individual generations are restricted by the capacity limits, which prevents the generators from achieving complete consensus in the modified ICs. It can be seen in Figure 8 that some generation units achieve consensus while a few cannot, due to the generation capacity limit (see Figure 7).
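The saturation behaviour just described is easy to reproduce at toy scale. The sketch below extends the earlier run with the compensation term δ_i = −c_0 ΔP_i and the hard limits of (41); since the excerpt leaves the local mismatch estimator to neighbour communication, ΔP_i is approximated here by an equal share of the global mismatch, which is an assumption, as are all coefficient values.

```python
import numpy as np

# Toy run of the modified protocol (41): consensus step of (27) plus delta_i = -c0 * dP_i,
# with P_dot = 0 enforced at the capacity limits via clipping. dP_i is a local mismatch
# estimate; we assume an equal share of the global mismatch (the paper instead builds it
# from neighbour-to-neighbour communication). Coefficients are illustrative.
b = np.array([40.0, 39.0, 41.0]); c2 = np.array([0.10, 0.12, 0.09])
e = np.array([3.0, 2.5, 3.5]);    f = np.array([0.017, 0.018, 0.016])
p_min = np.array([10.0, 10.0, 10.0]); p_max = np.array([120.0, 80.0, 150.0])
A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]]); deg = A.sum(axis=1)
P_D, R_Ps, c_gain, c0, dt = 300.0, 20.0, -0.1, 2.0, 0.01
P = np.array([90.0, 75.0, 90.0])     # initial total need not match P_D - R_Ps here

def eta(P):
    g = np.sin(f * (p_min - P))
    return b + 2 * c2 * P - np.sign(g) * f * e * np.cos(f * (p_min - P))

for _ in range(300_000):
    h = eta(P)
    dP_est = (P.sum() + R_Ps - P_D) / P.size          # assumed local mismatch estimate
    P = np.clip(P + dt * (c_gain * (deg * h - A @ h) - c0 * dP_est), p_min, p_max)

print(np.round(P, 2), round(P.sum() + R_Ps, 2))   # unit 2 saturates at 80 MW; total ~ P_D
```

As in Figures 7 and 8, the saturated unit sits at its limit while the remaining units keep adjusting, so the modified ICs only reach partial consensus.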
**Figure 6. Total active power output using improved consensus protocol considering capacity constraint.**

**Figure 7. Output power of ten generation nodes for improved consensus protocol considering capacity constraint.**

**Figure 8. Incremental cost consensus in case of generation capacity limit constraint.**

5.1.3. Simulation on Forty-Unit System under RESs

To discuss the validity of the proposed method on large-scale systems, the proposed consensus protocol (41) was applied to a forty-unit system [48] in the presence of a capacity constraint. Additionally, the present work also considered renewable energy sources in this simulation: we assumed a share of 500 MW from renewable sources, leading to R_{P,s} = 500 MW. The initial conditions were balanced such that ∑_{i=1}^{N} P_i(0) + R_{P,s} = P_D. The communication topology and adjacency matrix for this system were generated randomly in MATLAB, using a standard uniform distribution, to incorporate random behavior. Again, c = −0.1 was selected, and c_0 = 2 was chosen. The total power ∑_{i=1}^{N} P_i(t) and the individual generation of each conventional unit are shown in Figures 9 and 10, respectively. In Figure 11, the modified ICs are plotted for the sake of analysis. The results show that most of the units achieved consensus, whereas the remaining units attained partial consensus due to saturation at the maximum upper limit of power generation imposed by the generator capacity constraint. Hence, the proposed approach can be applied to a large-scale system with capacity, non-convexity, and renewable energy constraints.

**Figure 9. Total power generation in case of forty-unit system.**

**Figure 10. Individual power generation of forty units.**

**Figure 11. Modified IC consensus for forty-unit system.**

_5.2. Discussion and Comparison_

5.2.1. Comparison with Centralized Algorithms

To authenticate the proposed distributed consensus-based algorithm, a comparison between existing centralized strategies for solving the EDP with the VPE and the proposed strategy is presented in Table 4. The results obtained from the proposed algorithm are compared with those obtained from multi-objective differential evolution (MODE) in [47] and the new global particle swarm optimization (NGPSO) in [48]. For this comparison study, we considered the case of the capacity constraint and used the consensus protocol (41). It can be seen that the proposed strategy gives a comparable cost (the central methods are multi-objective schemes) with the advantage of solving the problem in a distributed manner. We also provide the expected time for a single node, as the previously reported CPU time was computed on a central processing unit.
As the generating units work independently in the proposed scheme, we can roughly compute the time for an individual node by dividing the CPU time by the total number of units. Hence, the time in our case will be further reduced by the use of distributed computing facilities. Note that the communication delays of central methods and the congestion of the central processor are also eliminated in our approach. In addition, the proposed approach is not prone to single-point failure and is resilient against attacks due to its distributed nature, in contrast to [11–14,17–21,36,47,48]: to launch a cyber attack, an expensive attack blocking all generating units would be needed, rather than targeting the central unit only. Furthermore, the proposed approach is flexible with respect to increasing the number of generating units, as this does not require an enhancement of the communication and computational powers of a central facility. With these advantages, the proposed approach can be a better choice than the conventional central methods.

**Table 4. Generation, cost, and CPU time comparison for ten-unit system (P_D = 2000 MW).**

| Quantity | MODE [47] (Centralized) | NGPSO [48] (Centralized) | Proposed (Distributed) |
|----------|-------------------------|--------------------------|------------------------|
| P_1 (MW) | 55.00 | 55.00 | 55.00 |
| P_2 (MW) | 79.81 | 80.00 | 80.00 |
| P_3 (MW) | 106.82 | 106.94 | 62.42 |
| P_4 (MW) | 102.83 | 100.58 | 87.35 |
| P_5 (MW) | 82.24 | 81.50 | 160.00 |
| P_6 (MW) | 80.44 | 83.02 | 69.99 |
| P_7 (MW) | 300.00 | 300.00 | 300.00 |
| P_8 (MW) | 340.00 | 340.00 | 340.00 |
| P_9 (MW) | 470.00 | 470.00 | 470.00 |
| P_10 (MW) | 469.90 | 470.00 | 375.38 |
| Cost ×10^5 ($/MWh) | 1.1150 | 1.1149 | 1.082 |
| CPU time (s) | 9.42 | – | 2.00 |
| Time for one node (s) | – | – | 0.2 approx. |

5.2.2. Comparison with Distributed Methods

The consensus protocol from [36], one of the fundamental distributed consensus-based strategies for solving the EDP, was applied to the ten-unit system. This comparison study investigated the optimization protocol (27) in an unconstrained environment. For the sake of comparison, we took e_i to be 100 times larger than in Table 2, and no generation constraint was imposed in this experiment. The large value of e_i was chosen because we wanted to attempt to solve the EDP with the VPE in the case of a violation of the constraint 2c_i > f_i^2 e_i. The obtained power generation P_i was used to calculate the cost from (4) and (5), and the results were compared with those of the proposed algorithm in Table 5. Table 5 shows that the optimized cost for [36] with the cost function (5) is higher than the optimized cost with the cost function (4), which is logical because the cost function (4) is an idealized approximation of the fuel cost and does not incorporate the VPE. It is also evident that the proposed algorithm gives better optimal results than [36] when the VPE is considered. In contrast to the conventional distributed methods [2,30,33,36,42,43,45,46], the presented approach considered the effect of RESs for the forty-unit system, handles the highly nonlinear VPE constraint, and employs low-carbon energy sources in the form of solar energy. In addition to these technical advantages, the theoretical convergence analysis of the proposed method via the stability theory of MASs was performed in the presence of the new constraints through a complex Lyapunov, graph-theoretic, and dynamical analysis formulation, which improves the reliability of the proposed method.
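The cost rows of Tables 4 and 5 follow from evaluating a dispatch against the two cost models. Below is a minimal sketch using the Table 2 coefficients and the "Proposed" dispatch column of Table 4; (4) and (5) refer to the quadratic cost and its VPE-augmented version, as above.

```python
import numpy as np

# Evaluate a given dispatch under the quadratic cost (4) and the VPE-augmented cost (5),
# using the ten-unit data of Table 2; P below is the "Proposed" column of Table 4.
a = np.array([1000.403, 950.606, 900.705, 800.705, 756.799,
              451.325, 1243.531, 1049.998, 1658.569, 1356.659])
b = np.array([40.5407, 39.5804, 36.5104, 39.5104, 38.5390,
              46.1592, 38.3055, 40.3965, 36.3278, 38.2704])
c = np.array([0.12951, 0.10908, 0.12511, 0.12111, 0.15247,
              0.10587, 0.03546, 0.02803, 0.02111, 0.01799])
e = np.array([33., 25., 32., 30., 30., 20., 20., 30., 60., 40.])
f = np.array([0.0174, 0.0178, 0.0162, 0.0168, 0.0148,
              0.0163, 0.0152, 0.0128, 0.0136, 0.0141])
p_min = np.array([10., 20., 47., 20., 50., 70., 60., 70., 135., 150.])

P = np.array([55.00, 80.00, 62.42, 87.35, 160.00,
              69.99, 300.00, 340.00, 470.00, 375.38])

cost_quad = (a + b * P + c * P**2).sum()                          # model (4)
cost_vpe = cost_quad + np.abs(e * np.sin(f * (p_min - P))).sum()  # model (5)
print(P.sum())              # ~2000 MW, matching P_D up to the table's rounding
print(cost_quad, cost_vpe)  # on the order of the 1.08e5 entry in Table 4
```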
**Table 5. Comparison with distributed approach.**

| Quantity | Existing Protocol | Proposed Protocol |
|----------|-------------------|-------------------|
| P_1 (MW) | 64.29 | 10.00 |
| P_2 (MW) | 80.73 | 200.55 |
| P_3 (MW) | 82.66 | 47.00 |
| P_4 (MW) | 73.00 | 206.99 |
| P_5 (MW) | 61.17 | 50.02 |
| P_6 (MW) | 52.11 | 164.46 |
| P_7 (MW) | 266.32 | 266.71 |
| P_8 (MW) | 299.61 | 315.45 |
| P_9 (MW) | 494.20 | 366.00 |
| P_10 (MW) | 525.91 | 372.81 |
| Cost without VPE ×10^5 ($/MWh) | 1.058 | – |
| Cost with VPE ×10^5 ($/MWh) | 1.257 | 1.144 |
| Total generation (MW) | 2000 | 2000 |

_5.3. CPU Time_

To emphasize that the proposed approach can solve the optimization problem significantly more quickly than the existing centralized methods, the CPU time was calculated for all test systems. Owing to the distributed framework of the optimization, the computation time was significantly reduced compared to the central methods, as shown in Table 4. In addition, Table 6 compares the CPU time of the proposed approach as applied to the different benchmark test systems. We emphasize that these CPU times were calculated for the whole simulation time and should not be confused with the convergence time of the ICs. It should also be noted that these simulations were conducted on a central processor; when the algorithm is implemented in real time on distributed controllers in the framework of MASs, the CPU time will be much shorter than reported here.

**Table 6. CPU time comparison.**

| Test System with Approach | CPU Time (s) |
|---------------------------|--------------|
| Ten-unit unconstrained | 1.7 |
| Ten-unit constrained | 2.0 |
| Forty-unit unconstrained | 5.4 |
| Forty-unit constrained | 13.5 |

Recently, some Lyapunov and energy function methods have been reported for improved convergence analysis, as in [52–55]. In the future, these methods can be applied for investigating comprehensive convergence properties.

**6. Conclusions**

This paper considered a distributed optimization approach for the EDP under the VPE and solar energy constraints over a communication topology. The generators were assumed to be equipped with smart devices, such as transmitters, receivers, and real-time computational facilities. The proposed strategy applied the power generation as the update variable and the modified ICs as the consensus variables for dealing with cost optimization under clean energy sources, by accounting for the solar energy distribution properties. In contrast with the conventional central optimization methods, the proposed distributed approach is cooperative, resilient against cyber attacks, not limited by one-point failure, does not suffer delays due to a dispatch center, and does not have server congestion issues at a central unit. In addition, it can easily be extended to an increasing number of units and requires less computational effort due to its simple algorithm and the division of the computation among several nodes. Compared with the existing distributed approaches, the designed distributed consensus protocol deals with the highly nonlinear constraint of the VPE and incorporates a solar energy system for attaining a low-carbon footprint.
Simulation results for medium-scale and large-scale systems were presented, along with comparisons with central and distributed methods. With respect to the central methods, the CPU time of the proposed algorithm was found to be considerably shorter. Compared with the existing distributed methods, our approach provides a better optimal cost due to the consideration of the VPE constraint. In the future, a more practical approach considering realistic network reconfiguration, including the sizing and allocation of distributed energy hubs, will be considered within a distributed optimization framework.

**Author Contributions: M.M. wrote the initial version of the manuscript. W.A. and M.R. completed the final version of the manuscript. W.A., M.I., N.U. and K.Z. conceived of the idea. M.M., W.A., N.U. and K.Z. developed the theoretical framework. M.R. and W.U. verified the analytical methods. M.M. and M.R. performed the simulation results. All authors have read and agreed to the published version of the manuscript.**

**Funding: This research was supported by Adaptive Controller Design and Validation of Electric Vehicle Charger (Project No. NUST-22-41-45).**

**Data Availability Statement: The data used in this study are included in the article. Further inquiries can be directed to the corresponding authors.**

**Conflicts of Interest: The authors declare no conflict of interest.**

**Appendix A. Consensus and Supply–Demand Conditions**

We take support from Lyapunov stability theory and consider the following Lyapunov function [56,57]:

V = ∑_{i=1}^{N} ε_i^T ε_i / (2Θ_i).  (A1)

Note that Θ_i is a positive scalar because 2c_i > f_i^2 e_i leads to s(t, P_i) > 0, resulting in the LPV parameter Θ_i > 0. Taking the time-derivative of V gives

V̇ = ∑_{i=1}^{N} ε_i^T ε̇_i / Θ_i.  (A2)

Applying (40) leads to

V̇ = ∑_{i=1}^{N} ε_i^T c ∑_{j=1}^{N} l_ij ε_j.  (A3)

The expansion and evaluation of these sums, along with e^T = [ε_1, ε_2, ..., ε_N], imply that

V̇ = e^T c L e.  (A4)

By application of Lemma 1, we have

V̇ ≤ c λ_o(L) e^T e.  (A5)

Since c < 0, V̇ < 0 is obtained. This implies that the ICs with the valve-point loading effect reach consensus with each other. Hence, the first optimality condition in Lemma 3 has been validated.

To assure the second condition of Lemma 3, related to the supply–demand balance, we turn to the generation dynamics. Substituting (32) into (33) leads to the generation dynamics for the ith generator as follows:

Ṗ_i = c ∑_{j=1}^{N} a_ij (η_{i,f} − η_{j,f}).  (A6)

To obtain the total generation dynamics, we apply the summation over the generators to achieve

∑_{i=1}^{N} Ṗ_i = c ∑_{i=1}^{N} ∑_{j=1}^{N} a_ij (η_{i,f} − η_{j,f}).  (A7)

The expansion and evaluation of these sums reduce the right-hand side to zero. Therefore, the total generation dynamics follow

∑_{i=1}^{N} Ṗ_i = 0.  (A8)

Equation (A8) implies that ∑_{i=1}^{N} P_i remains constant during the dispatch process. Hence, ∑_{i=1}^{N} P_i + R_{P,s} = P_D, and the second optimality condition in Lemma 3 also holds.
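The two Appendix A conclusions are easy to verify numerically: along the error dynamics (40), V decreases whenever c < 0, and a symmetric adjacency makes the total generation invariant as in (A8). A small sketch on a toy five-node topology follows; the matrix and values are illustrative.

```python
import numpy as np

# Spot-check of Appendix A on a toy 5-node symmetric topology (values illustrative):
# (i) V_dot = c * eps' L eps < 0 for c < 0 and any non-consensus error, cf. (A4)-(A5);
# (ii) sum_i P_dot_i = 0 under the generation dynamics (A6), cf. (A7)-(A8).
rng = np.random.default_rng(1)
c = -0.1
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 1],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 1, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A

eps = rng.normal(size=5)
eps -= eps.mean()                      # remove the consensus component of the error
print(c * eps @ L @ eps < 0)           # True: V is strictly decreasing

eta = rng.normal(size=5)               # arbitrary modified ICs
P_dot = c * (A.sum(axis=1) * eta - A @ eta)
print(abs(P_dot.sum()) < 1e-12)        # True: total generation is invariant
```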
**References**

1. Samia, C.; Houssem, J. The Use of a Heuristic Optimization Method to Improve the Design of a Discrete-time Gain Scheduling Control. Int. J. Control Autom. Syst. 2021, 19, 1836–1846.
2. Meraihi, Y.; Gabis, A.B.; Mirjalili, S.; Ramdane-Cherif, A. Grasshopper Optimization Algorithm: Theory, Variants, and Applications. IEEE Access 2021, 9, 50001–50024.
3. Perng, J.W.; Kuo, Y.C.; Lu, K.C. Design of the PID controller for hydro-turbines based on optimization algorithms. Int. J. Control Autom. Syst. 2020, 18, 1758–1770.
4. Awal, M.A.; Masud, M.; Hossain, M.S.; Bulbul, A.A.M.; Mahmud, S.M.H.; Bairagi, A.K. A Novel Bayesian Optimization-Based Machine Learning Framework for COVID-19 Detection From Inpatient Facility Data. IEEE Access 2021, 9, 10263–10281.
5. Nelem, A.T.; Ele, P.; Ndiaye, P.A.; Essiane, S.N.; Pesdjock, M.J.P. Dynamic Optimization of Switching States of an Hybrid Power Network. Int. J. Control Autom. Syst. 2021, 19, 2468–2478.
6. Xia, X.; Elaiw, A. Optimal dynamic economic dispatch of generation: A review. Electr. Power Syst. Res. 2010, 80, 975–986.
7. Attaviriyanupap, P.; Kita, H.; Tanaka, E.; Hasegawa, J. A Hybrid EP and SQP for Dynamic Economic Dispatch with Nonsmooth Fuel Cost Function. IEEE Power Eng. Rev. 2002, 22, 77.
8. Abbas, G.; Gu, J.; Farooq, U.; Asad, M.U.; El-Hawary, M. Solution of an economic dispatch problem through particle swarm optimization: A detailed survey-part I. IEEE Access 2017, 5, 15105–15141.
9. Gaing, Z.L. Particle swarm optimization to solving the economic dispatch considering the generator constraints. IEEE Trans. Power Syst. 2003, 18, 1187–1195.
10. Yao, F.; Dong, Z.Y.; Meng, K.; Xu, Z.; Iu, H.H.C.; Wong, K.P. Quantum-inspired particle swarm optimization for power system operations considering wind power uncertainty and carbon tax in Australia. IEEE Trans. Ind. Inf. 2012, 8, 880–888.
11. Pulluri, H.; Vyshnavi, M.; Shraddha, P.; Priya, B.S.; Hari, T.S. Genetic Algorithm with Multi-Parent Crossover Solution for Economic Dispatch with Valve Point Loading Effects. In Innovations in Electrical and Electronics Engineering; Springer: Singapore, 2020; pp. 429–438.
12. Khamsen, W.; Takeang, C.; Aunban, P. Hybrid method for solving the non smooth cost function economic dispatch problem. Int. J. Electr. Comput. Eng. 2020, 10, 609.
13. Sharifzadeh, H. Sharp formulations of nonconvex piecewise linear functions to solve the economic dispatch problem with valve-point effects. Int. J. Electr. Power Energy Syst. 2021, 127, 106603.
14. Li, X.; Zhang, H.; Lu, Z. A differential evolution algorithm based on multi-population for economic dispatch problems with valve-point effects. IEEE Access 2019, 7, 95585–95609.
15. Sakthivel, V.P.; Goh, H.H.; Srikrishna, S.; Sathya, P.D.; Abdul Rahim, S.K. Multi-Objective Squirrel Search Algorithm for Multi-Area Economic Environmental Dispatch With Multiple Fuels and Valve Point Effects. IEEE Access 2020, 9, 3988–4007.
16. Khan, N.A.; Sidhu, G.A.S.; Gao, F. Optimizing combined emission economic dispatch for solar integrated power systems. IEEE Access 2016, 4, 3340–3348.
17. Kumar Dey, S.; Prasad Dash, D.; Basu, M. Application of NSGA-II for environmental constraint economic dispatch of thermal-wind-solar power system. Renew. Energy Focus 2022, 43, 239–245.
18. Zhang, C.; Xia, J.; Guo, X.; Huang, C.; Lin, P.; Zhang, X. Multi-optimal design and dispatch for a grid-connected solar photovoltaic-based multigeneration energy system through economic, energy and environmental assessment. Sol. Energy 2022, 243, 393–409.
19. Bakirtzis, A.; Petridis, V.; Kazarlis, S. Genetic algorithm solution to the economic dispatch problem. IEE Proc.-Gener. Transm. Distrib. 1994, 141, 377–382.
20. Kahvecioğlu, G.; Morton, D.P.; Wagner, M.J. Dispatch optimization of a concentrating solar power system under uncertain solar irradiance and energy prices. Appl. Energy 2022, 326, 119978.
21. Xu, Y.; Song, Y.; Deng, Y.; Liu, Z.; Guo, X.; Zhao, D. Low-carbon economic dispatch of integrated energy system considering the uncertainty of energy efficiency. Energy Rep. 2023, 9, 1003–1010.
22. Hua, M.; Ding, H.; Yao, X.Y.; Zhang, X. Distributed Fixed-time Formation-containment Control for Multiple Euler-Lagrange Systems with Directed Graphs. Int. J. Control Autom. Syst. 2021, 19, 837–849.
23. Nezami, Z.; Zamanifar, K.; Djemame, K.; Pournaras, E. Decentralized Edge-to-Cloud Load Balancing: Service Placement for the Internet of Things. IEEE Access 2021, 9, 64983–65000.
24. Jameel, A.; Rehan, M.; Hong, K.S.; Iqbal, N. Distributed adaptive consensus control of Lipschitz nonlinear multiagent systems using output feedback. Int. J. Control 2016, 89, 2336–2349.
25. Fu, H.; Cui, B.; Zhuang, B.; Zhang, J. Anti-collision and Obstacle Avoidance of Mobile Sensor-plus-actuator Networks over Distributed Parameter Systems with Time-varying Delay. Int. J. Control Autom. Syst. 2021, 19, 2373–2384.
26. Tang, X.; Li, M.; Wei, S.; Ding, B. Event-triggered Synchronous Distributed Model Predictive Control for Multi-agent Systems. Int. J. Control Autom. Syst. 2021, 19, 1273–1282.
27. Li, S.; Ai, W.; Wu, J.; Feng, Q. A fixed-time distributed algorithm for least square solutions of linear equations. Int. J. Control Autom. Syst. 2021, 19, 1311–1318.
28. Zhang, Q.; Gong, Z.; Yang, Z.; Chen, Z. Distributed convex optimization for flocking of nonlinear multiagent systems. Int. J. Control Autom. Syst. 2019, 17, 1177–1183.
29. Hu, C.; Meng, Z.; Qu, G.; Shin, H.S.; Tsourdos, A. Distributed cooperative path planning for tracking ground moving target by multiple fixed-wing UAVs via DMPC-GVD in urban environment. Int. J. Control Autom. Syst. 2021, 19, 823–836.
30. Wang, A.; Liu, W. Distributed incremental cost consensus-based optimization algorithms for economic dispatch in a microgrid. IEEE Access 2020, 8, 12933–12941.
31. Wang, R.; Li, Q.; Zhang, B.; Wang, L. Distributed consensus based algorithm for economic dispatch in a microgrid. IEEE Trans. Smart Grid 2019, 10, 3630–3640.
32. Zhang, Z.; Ying, X.; Chow, M.Y. Decentralizing the economic dispatch problem using a two-level incremental cost consensus algorithm in a smart grid environment. In Proceedings of the 2011 North American Power Symposium, Boston, MA, USA, 4–6 August 2011; pp. 1–7.
33. Zhang, Z.; Chow, M.Y. Incremental cost consensus algorithm in a smart grid environment. In Proceedings of the 2011 IEEE Power and Energy Society General Meeting, Detroit, MI, USA, 24–28 July 2011; pp. 1–6.
34. Tang, Z.; Hill, D.J.; Liu, T. A novel consensus-based economic dispatch for microgrids. IEEE Trans. Smart Grid 2018, 9, 3920–3922.
35. Zhang, Z.; Chow, M.Y. Convergence Analysis of the Incremental Cost Consensus Algorithm Under Different Communication Network Topologies in a Smart Grid. IEEE Trans. Power Syst. 2012, 27, 1761–1768.
36. Yu, W.; Li, C.; Yu, X.; Wen, G.; Lü, J. Distributed consensus strategy for economic power dispatch in a smart grid. In Proceedings of the 2015 10th Asian Control Conference (ASCC), Kota Kinabalu, Malaysia, 31 May–3 June 2015; pp. 1–6.
37. Xu, Y.; Li, Z. Distributed Optimal Resource Management Based on the Consensus Algorithm in a Microgrid. IEEE Trans. Ind. Electron. 2015, 62, 2584–2592.
38. Chang, X.; Xu, Y.; Gu, W.; Sun, H.; Chow, M.Y.; Yi, Z. Accelerated Distributed Hybrid Stochastic/Robust Energy Management of Smart Grids. IEEE Trans. Ind. Inf. 2021, 17, 5335–5347.
39. Li, P.; Hu, J. An ADMM based distributed finite-time algorithm for economic dispatch problems. IEEE Access 2018, 6, 30969–30976.
40. Zhang, Z.; Chow, M.Y. The influence of time delays on decentralized economic dispatch by using incremental cost consensus algorithm. In Control and Optimization Methods for Electric Smart Grids; Springer: New York, NY, USA, 2012; pp. 313–326.
41. Zhu, Y.; Yu, W.; Wen, G. Distributed consensus strategy for economic power dispatch in a smart grid with communication time delays. In Proceedings of the 2016 IEEE International Conference on Industrial Technology (ICIT), Taipei, Taiwan, 14–17 March 2016; pp. 1384–1389.
42. Wen, G.; Yu, W.; Yu, X.; Cao, J. Designing adaptive consensus-based scheme for economic dispatch of smart grid. In Proceedings of the 2016 Eighth International Conference on Advanced Computational Intelligence (ICACI), Chiang Mai, Thailand, 14–16 February 2016; pp. 236–241.
43. Wen, G.; Yu, X.; Liu, Z.W.; Yu, W. Adaptive consensus-based robust strategy for economic dispatch of smart grids subject to communication uncertainties. IEEE Trans. Ind. Inf. 2017, 14, 2484–2496.
44. Zhou, Y.; Zhu, S.; Chen, Q. Distributed Prescribed Finite Time Consensus Scheme for Economic Dispatch of Smart Grids with the Valve Point Effect. Complexity 2020, 2020, 5476846.
45. Yu, M.; Song, C.; Feng, S.; Tan, W. A consensus approach for economic dispatch problem in a microgrid with random delay effects. Int. J. Electr. Power Energy Syst. 2020, 118, 105794.
46. Chen, W.; Li, T. Distributed Economic Dispatch for Energy Internet Based on Multiagent Consensus Control. IEEE Trans. Autom. Control 2021, 66, 137–152.
47. Basu, M. Economic environmental dispatch using multi-objective differential evolution. Appl. Soft Comput. 2011, 11, 2845–2853.
48. Zou, D.; Li, S.; Li, Z.; Kong, X. A new global particle swarm optimization for the economic emission dispatch with or without transmission losses. Energy Convers. Manag. 2017, 139, 45–70.
49. Olfati-Saber, R.; Murray, R.M. Consensus problems in networks of agents with switching topology and time-delays. IEEE Trans. Autom. Control 2004, 49, 1520–1533.
50. Ren, W.; Beard, R.W. Distributed Consensus in Multi-Vehicle Cooperative Control; Springer: Berlin/Heidelberg, Germany, 2008.
51. Kamran, M.A.; Hong, K.S. Linear parameter-varying model and adaptive filtering technique for detecting neuronal activities: An fNIRS study. J. Neural Eng. 2013, 10, 056002.
52. Yao, Z.; Zhou, P.; Zhu, Z.; Ma, J. Phase synchronization between a light-dependent neuron and a thermosensitive neuron. Neurocomputing 2021, 423, 518–534.
53. Xu, L.; Qi, G.; Ma, J. Modeling of memristor-based Hindmarsh-Rose neuron and its dynamical analyses using energy method. Appl. Math. Model. 2022, 101, 503–516.
54. Zhou, P.; Hu, X.; Zhu, Z.; Ma, J. What is the most suitable Lyapunov function? Chaos Solitons Fractals 2021, 150, 111154.
55. Ahmad, S.; Rehan, M.; Hong, K.S. Observer-based robust control of one-sided Lipschitz nonlinear systems. ISA Trans. 2016, 65, 230–240.
56. Rehan, M.; Ahmad, S.; Hong, K.S. Novel results on observer-based control of one-sided Lipschitz systems under input saturation. Eur. J. Control 2020, 53, 29–42.
57. Hussain, M.; Rehan, M.; Ahn, C.K.; Hong, K.S.; Saqib, N.u. Simultaneous design of AWC and nonlinear controller for uncertain nonlinear systems under input saturation. Int. J. Robust Nonlinear Control 2019, 29, 2877–2897.

**Disclaimer/Publisher's Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.**
{ "disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.3390/en16010447?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.3390/en16010447, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "CCBY", "status": "GOLD", "url": "https://www.mdpi.com/1996-1073/16/1/447/pdf?version=1672888489" }
2022
[]
true
2022-12-30T00:00:00
[ { "paperId": "e77c908ad1e5bcabaa8bf8a204cbabb8a47c3724", "title": "Low-carbon economic dispatch of integrated energy system considering the uncertainty of energy efficiency" }, { "paperId": "903edff32ae88ba67bd5199d335ed48b209a8c74", "title": "Dispatch optimization of a concentrating solar power system under uncertain solar irradiance and energy prices" }, { "paperId": "f5bc57b736a2f3f3efe4483004cbfb8f315f3db7", "title": "Application of NSGA-II for Environmental Constraint Economic Dispatch of Thermal-Wind-Solar Power System" }, { "paperId": "2730e1f3d9f939bab0bb2895a9180760d66a3d86", "title": "Multi-optimal design and dispatch for a grid-connected solar photovoltaic-based multigeneration energy system through economic, energy and environmental assessment" }, { "paperId": "d3307bc0c48318dba4f0acbae3a2d2431756d9d2", "title": "Modeling of memristor-based Hindmarsh-Rose neuron and its dynamical analyses using energy method" }, { "paperId": "23d882676f08b6a3862d623da839312ea998d584", "title": "What is the most suitable Lyapunov function" }, { "paperId": "e2043116c2609c1d9bcaf3ad751417a40297527d", "title": "Accelerated Distributed Hybrid Stochastic/Robust Energy Management of Smart Grids" }, { "paperId": "ab640b4e41574811376b481c2b7a47d88e53ae48", "title": "Dynamic Optimization of Switching States of an Hybrid Power Network" }, { "paperId": "2c7c48090bd56b49ed08aaddc67807628abe30ae", "title": "Anti-collision and Obstacle Avoidance of Mobile Sensor-plus-actuator Networks over Distributed Parameter Systems with Time-varying Delay" }, { "paperId": "c2d3dda981852d7a5f4e8e077843ed2461eb9c96", "title": "The Use of a Heuristic Optimization Method to Improve the Design of a Discrete-time Gain Scheduling Control" }, { "paperId": "ddb41d61c745d6e5f4a2ff27987c5c8ae0090efe", "title": "A Novel Bayesian Optimization-Based Machine Learning Framework for COVID-19 Detection From Inpatient Facility Data" }, { "paperId": "d6f1e247ffe03183e8fb14c2a09df931b3f8dbb5", "title": "Phase synchronization between a light-dependent neuron and a thermosensitive neuron" }, { "paperId": "8a17832ba4062bf3854933b9e46a4a4834f8468b", "title": "Sharp formulations of nonconvex piecewise linear functions to solve the economic dispatch problem with valve-point effects" }, { "paperId": "d60fad1c905393d61c0ab725192acaa4172bac82", "title": "Event-triggered Synchronous Distributed Model Predictive Control for Multi-agent Systems" }, { "paperId": "60a067fafbe7f3239530fdeb35f48ed778363ce7", "title": "Distributed Fixed-time Formation-containment Control for Multiple Euler-Lagrange Systems with Directed Graphs" }, { "paperId": "0b9a5b49e2d15428cd97bddcd67da53bd2c873c3", "title": "Distributed Cooperative Path Planning for Tracking Ground Moving Target by Multiple Fixed-wing UAVs via DMPC-GVD in Urban Environment" }, { "paperId": "983e65bfd76c6f18c5847609cdbad7bd94695f59", "title": "Distributed Prescribed Finite Time Consensus Scheme for Economic Dispatch of Smart Grids with the Valve Point Effect" }, { "paperId": "cf4fcae3667b9443d23dcc3fe21425eadb012b4c", "title": "A consensus approach for economic dispatch problem in a microgrid with random delay effects" }, { "paperId": "525defc6b7bd9629802eaa5be823c6b23b50cefe", "title": "Decentralized Edge-to-Cloud Load Balancing: Service Placement for the Internet of Things" }, { "paperId": "23748090bf03f129e552f431facfe70bae69b37e", "title": "Novel results on observer-based control of one-sided Lipschitz systems under input saturation" }, { "paperId": "18d665651b841ef8aa9f0bf8e6775927041554da", 
"title": "Design of the PID Controller for Hydro-turbines Based on Optimization Algorithms" }, { "paperId": "8c42dbc9c094432b1dd892ad833a37b3788dfc54", "title": "Hybrid method for solving the non smooth cost function economic dispatch problem" }, { "paperId": "499ecfdf7fabeb66e1b367bb4b579225ca620782", "title": "Distributed Incremental Cost Consensus-Based Optimization Algorithms for Economic Dispatch in a Microgrid" }, { "paperId": "dc4c63f276883a095b2d6436db6e45efcb79fd6b", "title": "Distributed Consensus Based Algorithm for Economic Dispatch in a Microgrid" }, { "paperId": "6b1c707c68a1d1885bd0e418fb697a466381be05", "title": "A Fixed-time Distributed Algorithm for Least Square Solutions of Linear Equations" }, { "paperId": "3cc9842af90668ade143880b65f1fd9ea1a0f929", "title": "Distributed Convex Optimization for Flocking of Nonlinear Multi-agent Systems" }, { "paperId": "624716e76bc51fb3ef73941d034ee000200d0d23", "title": "Simultaneous design of AWC and nonlinear controller for uncertain nonlinear systems under input saturation" }, { "paperId": "8c9ac526021d13fda27220c98e2894a27ba446a6", "title": "Distributed Economic Dispatch for Energy Internet Based on Multiagent Consensus Control" }, { "paperId": "c4bdc7b9f8aae36de0678b206d9a72f62580759b", "title": "Adaptive Consensus-Based Robust Strategy for Economic Dispatch of Smart Grids Subject to Communication Uncertainties" }, { "paperId": "a9dd1498b4fe4a0da5a838431d4abe3e8648bd90", "title": "A Novel Consensus-Based Economic Dispatch for Microgrids" }, { "paperId": "b722707389c1b10599b802866e22ec5a81ba5108", "title": "A new global particle swarm optimization for the economic emission dispatch with or without transmission losses" }, { "paperId": "10f908862b4f80bd0394403309821a742e539bf4", "title": "Observer-based robust control of one-sided Lipschitz nonlinear systems." 
}, { "paperId": "674a19cdeef6a2059356cc265d0c170c8336ed2b", "title": "Optimizing Combined Emission Economic Dispatch for Solar Integrated Power Systems" }, { "paperId": "00812d9e653d4954552a4dcce987ea7de51c31b4", "title": "Distributed adaptive consensus control of Lipschitz nonlinear multi-agent systems using output feedback" }, { "paperId": "1260805dc54fb86c893f94df3298622e1f1c2c39", "title": "Distributed Optimal Resource Management Based on the Consensus Algorithm in a Microgrid" }, { "paperId": "6b93640439df1f5f8bfc32657a947831b1211b74", "title": "Linear parameter-varying model and adaptive filtering technique for detecting neuronal activities: an fNIRS study" }, { "paperId": "95023389a834a6b9fca3df5899fac96d477753eb", "title": "Quantum-Inspired Particle Swarm Optimization for Power System Operations Considering Wind Power Uncertainty and Carbon Tax in Australia" }, { "paperId": "d7c06dca27dc6329b5f061f38f9004f521dd3cda", "title": "Convergence Analysis of the Incremental Cost Consensus Algorithm Under Different Communication Network Topologies in a Smart Grid" }, { "paperId": "974dd86ab3c07673ca4e78ff73cc68d0fda6fc12", "title": "Economic environmental dispatch using multi-objective differential evolution" }, { "paperId": "a5072af724cce320e6aca60840a3cbee35b76cbc", "title": "Optimal dynamic economic dispatch of generation: A review" }, { "paperId": "bbadd4370cb353cd31a260668824f27a41c9af56", "title": "Distributed Consensus in Multi-vehicle Cooperative Control - Theory and Applications" }, { "paperId": "9839ed2281ba4b589bf88c7e4acc48c9fa6fb933", "title": "Consensus problems in networks of agents with switching topology and time-delays" }, { "paperId": "4ed4c873b592eecdf4c662d95e0f17be136c5409", "title": "Particle swarm optimization to solving the economic dispatch considering the generator constraints" }, { "paperId": "86e87db2dab958f1bd5877dc7d5b8105d6e31e46", "title": "A Hybrid EP and SQP for Dynamic Economic Dispatch with Nonsmooth Fuel Cost Function" }, { "paperId": "0f7af097fe558e98686349be82ad2ae90104a71e", "title": "Genetic algorithm solution to the economic dispatch problem" }, { "paperId": "d44ee8614d118bc1d68a59869e35352df0d2bc08", "title": "Multi-Objective Squirrel Search Algorithm for Multi-Area Economic Environmental Dispatch With Multiple Fuels and Valve Point Effects" }, { "paperId": "f194a613e67ee4d8ba1a24faf4edf0d8d8b1ab0e", "title": "Grasshopper Optimization Algorithm: Theory, Variants, and Applications" }, { "paperId": "8c64b9e0dc3c842a1b68ffecb609d45a5d09941b", "title": "A Differential Evolution Algorithm Based on Multi-Population for Economic Dispatch Problems With Valve-Point Effects" }, { "paperId": "5accccb060936cec582020e562844924d45f1135", "title": "An ADMM Based Distributed Finite-Time Algorithm for Economic Dispatch Problems" }, { "paperId": "09d320c1922384fd9c1cee2aa92ea3bae0ebad06", "title": "Solution of an Economic Dispatch Problem Through Particle Swarm Optimization: A Detailed Survey - Part I" } ]
20,780
en
[ { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Environmental Science", "source": "s2-fos-model" }, { "category": "Engineering", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/020558f09e952f682805891d6c1393b0d3b2be5c
[]
0.822493
An Incentive Mechanism for Energy Internet of Things Based on Blockchain and Stackelberg Game
020558f09e952f682805891d6c1393b0d3b2be5c
International Journal of Engineering
[ { "authorId": "2217243090", "name": "H. Zhou" }, { "authorId": "2112953148", "name": "J. Gong" }, { "authorId": "2054501316", "name": "W. Bao" }, { "authorId": "2107786910", "name": "Q. Liu" } ]
{ "alternate_issns": [ "1985-2312", "1728-1431", "2423-7167", "1735-9244", "1728-144X", "2545-417X" ], "alternate_names": [ "International Journal of Engineering&Technology", "Int j eng Trans basic", "International journal of engineering. Transactions A: basics", "Int J Eng" ], "alternate_urls": [ "http://ijet.ielas.org/", "http://www.cscjournals.org/", "http://www.cscjournals.org/journals/IJE/description.php?JCode=IJE" ], "id": "e6019940-2d4f-4a6e-95b4-1c2735d629c2", "issn": "1025-2495", "name": "International Journal of Engineering", "type": "journal", "url": "http://www.ije.ir/" }
In the Internet of Everything era, the Energy Internet of Things (IoT), as a typical application of IoT technology, has been extensively studied. Meanwhile, blockchain technology and the energy IoT can be coordinated and complementary. The energy IoT is diversified and has a high transaction demand, so it is an issue worthy of research to discuss the impact of the energy IoT environment on the performance of blockchain consensus algorithms and to guarantee blockchain stability in the energy IoT environment. In this research, an incentive mechanism based on the Stackelberg game is proposed for a network scenario involving multiple roadside units and user nodes. The proposed strategy is analyzed on the Matlab simulation platform. The simulation results show that the proposed scheme can effectively protect the interests of blockchain users and miners, and can improve the security and stability of the blockchain-based energy IoT system. Moreover, the numerical results not only verify the model's feasibility but also show that when there are many blockchain miners, the model performs well; however, when the number of miners reaches a certain value, the growth becomes insignificant. Furthermore, it is
IJE TRANSACTIONS B: Applications Vol. 36, No. 08, (August 2023) 1468-1477

# International Journal of Engineering

Journal Homepage: www.ije.ir

## An Incentive Mechanism for Energy Internet of Things Based on Blockchain and Stackelberg Game

H. Zhou[a], J. Gong[*b], W. Bao[b], Q. Liu[b]

_a Department of Electromechanical and Information Engineering, Changde Vocational Technical College, Changde, China_
_b School of Communication and Information Engineering, Chongqing University of Posts and Telecommunications, Chongqing, China_
*Corresponding Author Email: junquan123gong@163.com (J. Gong)

_Paper history:_ Received 01 November 2022; Received in revised form 18 April 2023; Accepted 19 April 2023

_Keywords:_ _Blockchain_, _Energy Internet of Things_, _Incentive Mechanism_, _Stackelberg Game_

_A B S T R A C T_

In the Internet of Everything era, the Energy Internet of Things (IoT), as a typical application of IoT technology, has been extensively studied. Meanwhile, blockchain technology and the energy IoT can be coordinated and complementary. The energy IoT is diversified and has a high transaction demand, so it is an issue worthy of research to discuss the impact of the energy IoT environment on the performance of blockchain consensus algorithms and to guarantee blockchain stability in the energy IoT environment. In this research, an incentive mechanism based on the Stackelberg game is proposed for a network scenario involving multiple roadside units and user nodes. The proposed strategy is analyzed on the Matlab simulation platform. The simulation results show that the proposed scheme can effectively protect the interests of blockchain users and miners, and can improve the security and stability of the blockchain-based energy IoT system. Moreover, the numerical results not only verify the model's feasibility but also show that when there are many blockchain miners, the model performs well; however, when the number of miners reaches a certain value, the growth becomes insignificant. Furthermore, it is also confirmed that the wireless energy IoT environment will also have a certain impact on the game model.

**_doi: 10.5829/ije.2023.36.08b.07_**

**NOMENCLATURE**

| Symbol | Meaning |
|--------|---------|
| N | The set of miner nodes |
| TPS_dag | The number of transactions verified per second in the blockchain network |
| T_s(λ) | User's response time |
| T_w(λ) | The queuing and service time |
| T_v(λ) | The transaction verification delay |
| U_l | The user's benefit function |
| U_l* | The user's optimal benefit |
| U_r* | The optimal total reward |
| λ* | The optimal equilibrium point |
| ∂²U_r/∂x² | The second derivative of U_r with respect to x |
| f(λ_i) | The satisfaction function of blockchain users |
| τ(λ_i) | The verification delay of the transaction under high load |
| τ(λ) | The ideal response time demand |
| α | The weight factor of the response time function |
| c | The computing and storage cost in each transaction |
| L_l | A convex function with respect to λ |
| x* | The optimal pricing strategy that maximizes U_r |

**1. INTRODUCTION**

Energy IoT is a new energy internet system based on cutting-edge technologies such as 5G and artificial intelligence, combined with energy. According to the complementary mode of different energy sources, the energy internet greatly promotes the linkage between electricity, fossil, and heat energy sources with the help of internet technology [1]. Meanwhile, blockchain technology and the energy IoT can be coordinated and complementary in integrated development.
This complementarity is mainly reflected in decentralization, collaborative autonomy, marketization, and smart contracts. As a cutting-edge technology, blockchain deeply integrates a series of emerging computer technologies, such as distributed data storage, P2P (peer-to-peer) transmission, consensus mechanisms, and encryption algorithms. It also displays the distinct application characteristics of decentralization, openness and transparency, traceability, and tamper-proofness [2]. The application value and application scenarios of blockchain technology in the field of the energy IoT have been deeply discussed in a large number of studies. Zhao et al. [3] summarized the development status of blockchain energy application engineering at home and abroad, and provided reliable development ideas and suggestions for the engineering application of blockchain technology in China's energy field. Zhang et al. [4] comprehensively and systematically sorted out the application dimensions of blockchain technology in the energy Internet, elaborating the key role of blockchain technology in the field of the energy Internet from the perspectives of energy, information, and value. Fernández-Caramés et al. [5] described the demand for blockchain technology in the IoT field and the impact of its application on the development of the modern IoT. Doshi and Varghese [6] examined how renewable energy and AI-powered IoT can be used to improve agriculture; their paper explores how to use these technologies to optimize crop yield, reduce water consumption, and improve the efficiency of the agricultural industry, and also discusses potential challenges and solutions to ensure the successful implementation of smart agriculture. Wang and Liu [7] presented an energy-efficient optimization method for smart-IoT data centers based on task arrival, proposing a task scheduling algorithm that minimizes energy consumption while ensuring system performance. The algorithm dynamically assigns tasks to different nodes based on task arrival, system load, and energy consumption; compared with existing scheduling algorithms, the results show that this method improves energy efficiency while maintaining system performance. However, the most pressing challenge is that the current performance of the traditional blockchain cannot meet the needs of high-frequency data usage. The traditional single-chain structure results in a limited number of transactions that can be processed in a consensus cycle, which cannot meet the dynamic scalability requirements on the performance of blockchain technology in actual production. Therefore, for the scalability of blockchain, a distributed ledger based on a DAG has been proposed, which greatly improves system performance under high concurrency. How to balance the response strategies of each participant so as to protect the interests of blockchain users, miners, and the system is a problem worth studying. Game theory is a mathematical model for the study of strategic interactions between rational decision makers [6, 7]; it can be used to analyze the strategies of nodes and the interactions between nodes.
Due to the power of game theory, using it to solve optimization problems in blockchain is one of the new trends of future development, especially the CAP-theory problem of current blockchains [8], namely the "impossible triangle" of decentralization, scalability, and security. Secondly, the Stackelberg game model is widely used to solve pricing problems between service providers and users [9, 10]. For wireless environments like the energy IoT, the work of end users relies on purchasing computing resources from edge computing networks; modeling the interaction between the two using Stackelberg games is a problem worth investigating for system optimization. Nejati and Faraji [11] dealt with the issue of actuator fault detection and isolation for a helicopter unmanned aerial vehicle, proposing a methodology based on the observer and residual generation technique to detect and isolate actuator faults in real time [11]. Khosravian and Maghsoudi [12] discussed the design of an intelligent controller for station keeping, attitude control, and path tracking of a quadrotor using recursive neural networks, proposing a control scheme based on the fusion of multiple recursive neural networks for precise control of the quadrotor [12]. Xiong et al. [13] discussed cloud computing and pricing management for blockchain networks. Wei et al. [14] investigated the application of blockchain under uncertainty in energy pricing and market pricing for the energy sectors. Given the basis of game theory and the problems faced in this paper, this paper proposes a Stackelberg game-based incentive mechanism built on the DAG consensus mechanism. The game model simulates the interaction between blockchain users and miners, verifying the existence of the game equilibrium point. The simulation results show that the algorithm can effectively improve system security and stability. Specifically, it aims to improve system security by encouraging miners to join the blockchain network, while meeting the needs of blockchain users. The rest of the paper is organized as follows. Section 2 introduces the related problems and system models. Section 3 introduces the optimal solution analysis and leader analysis. In Section 4, the simulation results are analyzed and the system performance is evaluated numerically. Finally, Section 5 summarizes this paper. The research objective of this paper is to propose an incentive mechanism based on the Stackelberg game model to simulate the transaction behavior between blockchain users and miners. The proposed scheme can effectively protect the interests of blockchain users and miners, and the security and stability of the blockchain-based energy IoT system are improved.

**2. PROBLEM DESCRIPTION AND NETWORK MODEL**

**2. 1. System Model** Our model consists of two entities: 1. the blockchain user, e.g., a solar inverter, vehicle, etc.; and 2. the blockchain consensus node, i.e., a roadside unit with computing and storage capabilities, also known as a miner, as shown in Figure 1. It is noteworthy that in a DAG, miners do not need large computing resources for mining; they only need to verify every collected transaction, which is referred to as mining behavior in this paper. Blockchain users deliver transactions to miner nodes through wireless channels.
In the wireless channels, all blockchain users in the area covered by a miner node compete with each other for access. Miner nodes communicate with each other via wired channels, run the DAG consensus algorithm, and validate and store the collected transactions, which consumes computing and storage resources. Because nodes are selfish, bearing this cost without compensation is unfair to the miners; therefore, to maintain the normal operation of the blockchain system, it is reasonable for miners to charge blockchain users a certain transaction fee. For blockchain users, transaction verification causes additional delay, so the process from the publication to the confirmation of a transaction in the blockchain goes through two stages: delivery and verification.

The blockchain network model considered in this study consists of multiple blockchain user clusters, each of which is served by one miner node, where $\mathcal{N} = \{1, \dots, N_c\}$ denotes the set of miner nodes. The number of blockchain users within the coverage area of each miner follows a Poisson distribution, and the transaction arrival rate of user $i$ is $\lambda_i$, $i \in \mathcal{N}$. Moreover, each user has an independent satisfaction function whose value depends on its own response-time needs and on the miner's per-transaction price $x$. In the blockchain-based energy IoT, the user's response time $T_s(\lambda)$ is composed of two parts: the queuing and service time in the wireless phase, $T_w(\lambda) = T_q(\lambda) + T_{st}(\lambda)$, and the transaction verification delay $T_v(\lambda)$, namely:

$$T_s(\lambda) = T_w(\lambda) + T_v(\lambda) \qquad (1)$$

After joining the blockchain network, the user's response time is dominated by the verification delay. The delay $T_v(\lambda)$ for a transaction to be validated at the miner nodes is the time it takes for the cumulative weight of the transaction to reach the weight threshold $W$. Owing to the directed-acyclic-graph property of the DAG, the verification delay is inversely proportional to the transaction generation rate $\lambda$: blockchain users need to generate more transactions to meet lower response-delay requirements. In view of the queuing process in the first stage, this paper only considers the transaction verification delay under a stable high load. Following the description in the DAG white paper, the verification delay as a function of the transaction arrival rate $\lambda$ can be expressed as:

$$T_v(\lambda) = 0.352\,D\,\ln\!\left(\frac{4 L_s N_c \lambda D}{2}\right) + \frac{2W - W T_a \lambda}{2 L_s N_c \lambda} \qquad (2)$$

Since this study only considers the block verification process during the high-load phase, a restriction on the transaction generation rate is required, i.e.:

$$\sum_{i=1}^{\bar{N}} \lambda_i \leq \frac{1}{\bar{N} N_c D} \qquad (3)$$

where $\bar{N}$ represents the mean number of the distributed blockchain user nodes. Meanwhile, it should be made clear that in the transaction delivery stage the wireless channel capacity is limited; the wireless channel restricts transaction delivery once the service intensity exceeds $\mu \leq 1$. This section therefore imposes the restriction $\mu \leq 1$, which can be specifically expressed as:

$$\lambda_i \leq \frac{m}{E[T_{st}]} \qquad (4)$$
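As a numerical illustration of Equations (1)-(4), the following minimal Python sketch evaluates the verification delay and checks the two feasibility restrictions for a single user. The functional form of `verification_delay` follows the reconstruction of Equation (2) given above rather than a verbatim quotation of the authors' formula, and the values of `N_c`, `L_s`, `N_bar`, and `E_Tst` are illustrative assumptions (only `D`, `W`, the transaction weight, and `m` come from Table 1 later in the paper).

```python
import math

# D, W, T_a (transaction weight), and m follow Table 1 of the paper;
# N_c, L_s, N_bar, and E_Tst are made-up demonstration values.
D, W, T_a, m = 1e-2, 800, 3, 32
N_c, L_s, N_bar, E_Tst = 10, 4, 5, 0.5

def verification_delay(lam: float) -> float:
    """Reconstructed Eq. (2): T_v(lam) falls as the generation rate lam rises."""
    assert 0 < lam < 2 / T_a, "rate must keep the cumulative-weight term positive"
    adaptation = 0.352 * D * math.log(4 * L_s * N_c * lam * D / 2)
    accumulation = (2 * W - W * T_a * lam) / (2 * L_s * N_c * lam)
    return adaptation + accumulation

def feasible(lam: float) -> bool:
    """High-load restriction (3) and wireless restriction (4) for one user."""
    return lam <= 1 / (N_bar * N_c * D) and lam <= m / E_Tst

for lam in (0.05, 0.2, 0.5):
    print(f"lam={lam:4.2f}  feasible={feasible(lam)}  T_v={verification_delay(lam):9.3f} s")
```

As expected from the model, the printed delays shrink rapidly as the generation rate grows, which is the inverse relationship the follower's satisfaction function exploits below.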
**2.2. Analysis of the Stackelberg Game Model** To encourage blockchain miners to share their computing resources, more miners should be motivated to participate in the blockchain consensus so as to improve system security. The system has the authority to require blockchain users to pay a fee for each transaction, and blockchain users in turn have different needs for response time. Therefore, there is a non-cooperative game between blockchain users and miners. In this paper, an incentive mechanism based on the Stackelberg game model is proposed to simulate the interaction between blockchain users and miner nodes, where the set of blockchain miners is the leader and the blockchain users are the followers. Miners charge transaction fees at the expense of computing and storage resources, while blockchain users have a high demand for system response time. This paper mainly uses the game model to maximize the benefits of blockchain users and miner nodes, and it verifies the existence of equilibrium points in this game.

**Figure 1.** Game model: blockchain users upload transactions to blockchain consensus nodes (miners), which propagate and verify blocks.

**2.2.1. Benefit Function of Blockchain Users** For blockchain users, the benefit function consists of a satisfaction function of the response time and the incentive cost, namely the transaction cost. The response time here represents the verification delay of transactions in the DAG network; since transactions arrive in the network under different loads, they experience different verification delays. Therefore, the user's benefit function can be defined as:

$$U_l = f(\lambda_1, \lambda_2, \dots, \lambda_i) - \lambda_i x \qquad (5)$$

In general, a logarithmic function is used to evaluate user satisfaction [11]. Therefore, in this paper, the satisfaction function of blockchain users with respect to the response time is expressed as follows:

$$f(\lambda_i) = \log\big(1 + \theta\, g(\zeta(\lambda_i))\big) \qquad (6)$$

where $\theta$ represents the weight factor of the response-time function and $\zeta(\lambda_i)$ represents the verification delay of the transaction under high load, which has been calculated previously; $\zeta(\lambda_i)$ is inversely proportional to the transaction rate $\lambda_i$. Let $g(\zeta(\lambda_i)) = 1/\zeta(\lambda_i)$, so that lower delays yield higher satisfaction. This paper assumes that each user issues the same transaction request, i.e., $\lambda_i \equiv \lambda$. Through the above analysis, the user benefit function can be rewritten as:

$$U_l = \log\big(1 + \theta\, g(\zeta(\lambda_i))\big) - \lambda_i x \qquad (7)$$

**2.2.2. Benefit Function of Blockchain Miners** For the blockchain miners, the benefit function is defined as the charged transaction fees minus the cost of the computing resources consumed per transaction. Miners help blockchain users verify and store valid transactions and charge a transaction fee $x$ for each transaction, thereby maximizing revenue. Mathematically, the optimization problem can be expressed as follows:

$$U_r = TPS_{dag}\,(x - N_c c) \qquad (8)$$

where $TPS_{dag} = 2 L_s N_c \lambda$ represents the system throughput of the blockchain network under wireless channel service intensity $\mu \leq 1$, that is, the number of transactions verified per second; $x$ is the average verification revenue per transaction of the blockchain miners, and $N_c c$ is the computing and storage cost incurred per transaction, with $c$ the per-miner cost. In general, the optimization problems of the leader and the followers are expressed as follows:

$$\text{Leader:}\ \max_x U_r, \quad \text{s.t.}\ N_c c \leq x \leq x_{max}$$
$$\text{Followers:}\ \max_{\lambda} U_l, \quad \text{s.t.}\ \lambda \leq \frac{1}{\bar{N} N_c D},\ \ \lambda \leq \frac{m}{E[T_{st}]} \qquad (9)$$

**3. ANALYSIS OF OPTIMAL SOLUTION**

According to the Stackelberg game model proposed in Section 2.2, both blockchain users and miners are rational players who want to maximize their revenues. If one party unilaterally achieves its maximum revenue, it damages the other party's revenue and eventually leads to a breakdown of the game; therefore, an equilibrium point acceptable to both buyers and sellers must be found. In the model, the blockchain miners first fix the price of each transaction on the basis of their own cost function to gain the optimal total reward $U_r^*$ from their own strategy space. The blockchain users then choose their respective response-time strategies according to the miners' pricing. In this section, backward induction [15, 16] is used to first analyze the benefit function of the following blockchain user, especially the verification delay, to obtain the optimal equilibrium point $\lambda^*$ and benefit function $U_l^*$ of the blockchain user. Then, the optimal equilibrium point $x^*$ and benefit function $U_r^*$ of the leading blockchain miner are analyzed. Finally, in the distributed environment, the optimal solution is obtained with the help of our proposed iterative update function.
Therefore, Definition 1 can be stated on the basis of the above analysis.

Definition 1: Let the policy set of blockchain users be $R = \{\lambda_1, \dots, \lambda_i\}$ and the policy set of miners be $C = \{x_1, \dots, x_j\}$. When $x$ is fixed, if $\lambda_i^*$ satisfies $U_l(\lambda_i^*, R_{-i}, x) \geq U_l(\lambda, R_{-i}, x)$, where $R_{-i}$ denotes the user policy set excluding $\lambda_i^*$; and when $\lambda$ is fixed, if $x_j^*$ satisfies $U_r(x_j^*, C_{-j}, \lambda) \geq U_r(x, C_{-j}, \lambda)$, where $C_{-j}$ denotes the miner strategy set excluding $x_j^*$, then the strategy pair $(\lambda^*, x^*)$ is the optimal equilibrium point of the non-cooperative Stackelberg game.

**3.1. Follower Analysis** Through backward induction, the benefit-maximization strategy of the following blockchain user is analyzed first. Differentiating the user benefit function (7) with respect to $\lambda$ gives:

$$\frac{\partial U_l}{\partial \lambda} = \frac{\theta\,\big(g(\zeta(\lambda))\big)'}{1+\theta\,g(\zeta(\lambda))} - x, \qquad \frac{\partial^2 U_l}{\partial \lambda^2} = \frac{\theta\,\big(g(\zeta(\lambda))\big)''\big(1+\theta g(\zeta(\lambda))\big)-\theta^2\big[\big(g(\zeta(\lambda))\big)'\big]^2}{\big(1+\theta g(\zeta(\lambda))\big)^2} \qquad (10)$$

From the analysis of these two expressions, combined with the derivation in Section 2, it can be concluded that $\partial^2 U_l/\partial \lambda^2 \leq 0$ on the feasible region, so $U_l$ is a concave function of $\lambda$ and possesses a unique maximizer. Because of the constraint conditions in Equation (9), the Lagrange multiplier method is used to solve the optimization problem. Incorporating the constraints into the benefit function gives:

$$L_l(\lambda,\beta,\nu) = \log\big(1+\theta\,g(\zeta(\lambda))\big) - \lambda x - \nu\Big(\lambda-\frac{1}{\bar{N}N_cD}\Big) - \beta\Big(\lambda-\frac{m}{E[T_{st}]}\Big) \qquad (11)$$

On this basis, the KKT conditions can be obtained as shown in Equation (12), where the asterisk denotes the optimal solution:

$$\beta^*\Big(\lambda^*-\frac{m}{E[T_{st}]}\Big)=0,\quad \nu^*\Big(\lambda^*-\frac{1}{\bar{N}N_cD}\Big)=0,\quad \lambda^*\le\frac{m}{E[T_{st}]},\quad \lambda^*\le\frac{1}{\bar{N}N_cD},\quad \beta^*\ge0,\ \nu^*\ge0,\ \lambda^*\ge0 \qquad (12)$$

Setting $\partial L_l(\lambda,\beta,\nu)/\partial\lambda = 0$, the optimal policy $\lambda^*$ of the blockchain user can be obtained:

$$\lambda^* = \frac{\theta}{x-\nu^*+\beta^*} - \frac{2W-WT_a}{2L_sN_c} \qquad (13)$$

It is noteworthy that $\lambda^*$ is a function of $(x, \nu^*, \beta^*)$, which means that the corresponding $(x, \nu^*, \beta^*)$ is the information necessary to obtain $\lambda^*$. In addition, the instantaneous values of the iteration parameters $\beta^t$ and $\nu^t$ at time $t$ can be calculated by solving Equations (11) and (12) simultaneously, i.e.:

$$\big(\beta^t,\nu^t\big):\ \left.\frac{\partial L_l(\lambda,\beta^t,\nu^t)}{\partial\lambda}\right|_{\lambda=\lambda^t}=0 \ \ \text{together with the complementary-slackness conditions of (12)} \qquad (14)$$

where $t$ represents the index of the iteration.
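The follower step of this backward induction can be illustrated numerically. In the sketch below, a simple grid search over feasible rates stands in for the closed-form KKT solution of Equations (11)-(13); the delay model and all parameter values are the same illustrative assumptions as in the earlier snippet, not the authors' exact quantities.

```python
import math

# Follower (blockchain user) best response to a fixed price x:
# maximize U_l(lam) = log(1 + theta / T_v(lam)) - lam * x, Eqs. (5)-(7).
theta = 1.0
D, W, T_a, N_c, L_s = 1e-2, 800, 3, 10, 4
LAM_MAX = 0.999 * 2 / T_a      # stay where the reconstructed delay model is positive

def t_v(lam: float) -> float:
    return (0.352 * D * math.log(4 * L_s * N_c * lam * D / 2)
            + (2 * W - W * T_a * lam) / (2 * L_s * N_c * lam))

def u_user(lam: float, x: float) -> float:
    return math.log(1.0 + theta / t_v(lam)) - lam * x

def best_response(x: float, grid: int = 10_000) -> float:
    rates = (LAM_MAX * (k + 1) / grid for k in range(grid))
    return max(rates, key=lambda lam: u_user(lam, x))

for price in (0.5, 1.0, 2.0):
    lam_star = best_response(price)
    print(f"x={price:3.1f}  lam*={lam_star:.4f}  U_l={u_user(lam_star, price):+.4f}")
```

Running the loop for several prices reproduces the qualitative behavior derived above: a higher per-transaction fee pushes the follower toward lower demand rates, which is exactly the coupling the leader exploits in Section 3.2.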
**3.2. Leader Analysis** On the basis of the optimal strategy of the following blockchain user, the second step of the backward induction method is to substitute the obtained optimal strategy of the follower into the leader's utility function, and then to use first- and second-derivative analysis in the Stackelberg game to find the optimal strategy $x^*$ of the leading blockchain miner.

For the blockchain consensus node, to prove the existence of an extreme value of $U_r$, its concavity must be analyzed first. Substituting $\lambda = \lambda^*(x)$ into Equation (8), the second derivative of $U_r$ with respect to $x$ can be expressed as:

$$\frac{\partial^2 U_r}{\partial x^2} = 2L_sN_c\left[\,2\frac{\partial\lambda}{\partial x}+(x-N_cc)\frac{\partial^2\lambda}{\partial x^2}\right] \qquad (15)$$

To evaluate Equation (15), the first and second derivatives of $\lambda$ with respect to $x$ follow from Equation (13):

$$\frac{\partial\lambda}{\partial x}=-\frac{\theta}{(x-\nu+\beta)^2}<0,\qquad \frac{\partial^2\lambda}{\partial x^2}=\frac{2\theta}{(x-\nu+\beta)^3}>0 \qquad (16)$$

Through the above analysis, it can be obtained that $\partial^2 U_r/\partial x^2 \leq 0$ when $x \in [x_{min}, x_{max}]$, so the benefit function $U_r$ of the blockchain miners is concave with respect to $x$ on this interval and attains its maximum there. Setting the first derivative to zero,

$$\frac{\partial U_r}{\partial x}=2L_sN_c\left[\lambda+(x-N_cc)\frac{\partial\lambda}{\partial x}\right]=0$$

the optimal strategy price $x^*$ of the blockchain miners can be obtained as the positive root:

$$x^*=\sqrt{\frac{2L_sN_c\,\theta\,(\beta^*+N_cc-\nu^*)}{2W-WT_a}}+\nu^*-\beta^* \qquad (17)$$

where the negative root of $x^*$ does not meet the conditions and is not discussed here. Meanwhile, as can be seen from Equation (17), $x^*$ is a closed-form expression in $(\lambda^*, \nu^*, \beta^*)$. Therefore, to solve this equation, the game strategies of both parties in the previous round must be obtained first. However, in a distributed environment, since the two sides of the game are non-cooperative, neither the blockchain miner nor the user knows the optimal strategy of the other. Therefore, this paper uses the classical iterative method [17] to find the optimal solution; this process is shown in Algorithm 1 in Section 4. In Algorithm 1, if the iterative convergence condition is not met, the value calculated in the current round is used as the initial value for the next round of updates, and this process is repeated until $x$ and $\lambda$ converge.

The above analysis, on the basis of Definition 1, demonstrates that the optimal solution is the unique equilibrium solution by proving the following two statements.

Proof 1: For a blockchain user, when the transaction price $x$ is fixed, $\lambda^*$ makes the user benefit function $U_l$ globally optimal. In particular, Section 3.1 proves that $\partial^2 U_l/\partial\lambda^2 \leq 0$ and that $\partial^2 L_l/\partial\lambda^2 = \partial^2 U_l/\partial\lambda^2 \leq 0$ under the KKT conditions, so $L_l$ is concave with respect to $\lambda$, which satisfies Definition 1.

Proof 2: For blockchain miners, once the users' ideal response-time demand $\zeta(\lambda)$ is obtained, the optimal trading strategy $x^*$ can be derived. As proved in Section 3.2, under the condition $\partial^2 U_r/\partial x^2 \leq 0$, $x^*$ is the optimal pricing strategy that maximizes $U_r$.

**4. PERFORMANCE EVALUATION**

In this paper, an incentive scheme based on the Stackelberg game is proposed for the network scenario involving multiple roadside units and user nodes. The proposed strategy is analyzed through the Matlab simulation platform.
The following first explains the scenario setting of the simulation verification; the specific simulation parameters are shown in Table 1. In this section, the system performance is evaluated numerically from three aspects. First, the update process of the blockchain user and miner policies with the number of iterations is examined. Second, the influence of the user distribution on the benefit functions in the energy IoT scenario is considered. Third, the trend of the benefit functions as the number of blockchain miners increases is analyzed.

**TABLE 1. Simulation parameters of the game model**

| Parameter | Value (range) |
| --- | --- |
| DAG transaction broadcast delay D | 1×10⁻² s |
| DAG verification threshold W | 800 |
| DAG transaction weight | 3 |
| Wireless transmission transaction threshold m | 32 |
| Algorithm convergence accuracy ε | 10⁻⁸ |
| Weight factor θ | 1 |
| Mining cost per transaction c | 10⁻² |

Miners, as leaders, first have the authority to formulate their pricing strategies; the following blockchain users then update their respective strategies on the basis of the miners' strategies to meet their own response-time requirements. The interaction is iterated according to Algorithm 1.

**Algorithm 1** Iterative update algorithm
Input: initial values x^t, λ^t, β^t, ν^t; convergence accuracy ε; the other parameter values of the energy IoT.
Output: the convergent x*, λ*, β*, ν*.
1: Initialize the iteration counter t = 0 and the flag bit flag = false; let ΔU_r = |U_r^{t+1} − U_r^t|, where ε denotes the convergence accuracy;
2: while (!flag)
3: the blockchain user obtains x^t from the blockchain miner and updates its rate λ^t(x^t);
4: the blockchain miner obtains the updated λ^t from the DAG network and substitutes it into Equation (17);
5: update β^t and ν^t according to Equation (14);
6: if (ΔU_r ≤ ε)
7: flag = true;
8: x* = x^t, λ* = λ^t;
9: t = t + 1;
10: end while
11: return x*, λ*, β*, ν*;
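A compact numerical sketch of the same procedure is given below, recast as explicit backward induction: for each candidate price, the followers' best-response rate is computed by grid search, and the leader then selects the price that maximizes its anticipated revenue. The grid searches stand in for the closed forms of Equations (13), (14), and (17), the revenue coupling in `u_miner` is an interpretive reading of Equation (8), and parameter values beyond Table 1 are illustrative assumptions, so the numbers produced are indicative only.

```python
import math

theta = 1.0
D, W, T_a, N_c, L_s, c = 1e-2, 800, 3, 10, 4, 1e-2
LAM_MAX, X_MAX, GRID = 0.999 * 2 / T_a, 2.0, 2000

def t_v(lam):
    return (0.352 * D * math.log(4 * L_s * N_c * lam * D / 2)
            + (2 * W - W * T_a * lam) / (2 * L_s * N_c * lam))

def u_user(lam, x):                                  # Eq. (7)
    return math.log(1.0 + theta / t_v(lam)) - lam * x

def lam_star(x):                                     # follower best response, Sec. 3.1
    rates = (LAM_MAX * (k + 1) / GRID for k in range(GRID))
    return max(rates, key=lambda lam: u_user(lam, x))

def u_miner(x):                                      # Eq. (8) with lam = lam*(x)
    return 2 * L_s * N_c * lam_star(x) * (x - N_c * c)

# Leader sweeps feasible prices N_c*c <= x <= X_MAX and anticipates the response.
prices = [N_c * c + (X_MAX - N_c * c) * (k + 1) / 200 for k in range(200)]
x_eq = max(prices, key=u_miner)
print(f"x*={x_eq:.4f}  lam*={lam_star(x_eq):.4f}  U_r={u_miner(x_eq):.4f}")
```

The printed pair (x*, λ*) is the grid approximation of the Stackelberg equilibrium: no unilateral deviation by either side improves its own payoff, mirroring Definition 1.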
**Figure 2.** Update process of the transaction price strategy with the number of iterations

Figure 2 shows the iterative update process of the transaction pricing of blockchain miners. In this figure, the transaction price decreases as the number of iterations increases and ultimately converges to a stable value. This is because only when the transaction price x is lower will blockchain users choose to increase their transaction arrival rate strategy λ. Although the transaction price falls, the greater number of transactions in the network increases the miners' total revenue. In addition, as the number of miners increases, so does the capacity to collect transactions in the network; therefore, despite the low transaction price, the miners' revenue can still be guaranteed.

**Figure 3.** Update process of the demand strategy for the transaction arrival rate with the number of iterations

Figure 3 shows the trend of the transaction demand rate of blockchain users over the iterations under different numbers of miners. Similarly, it can be seen from the figure that as the number of iterations increases, the transaction demand rate of blockchain users increases and finally enters a stable state. This is because as the number of miners increases, the transaction price decreases, which encourages blockchain users to demand faster transaction rates. Here, the verification delay T_v(λ) is a function of λ and represents the transaction verification delay of blockchain users. As can be seen from Figure 4, as the number of iterations increases, the value of T_v(λ) gradually decreases, which is consistent with the analysis result of Figure 3: since T_v(λ) is inversely proportional to λ, the user's verification delay decreases when λ increases. Consequently, the benefit function of the user is guaranteed, and T_v(λ) eventually tends to a stable value.

**Figure 4.** Update process of the transaction verification delay with the number of iterations

Figures 5 and 6 show the impact of the number of miners on the benefit functions of blockchain users and of the miners themselves. According to the figures, as the number of miners increases, the benefit functions of both blockchain users and miners also increase. This is because more miners can process more transactions per unit time; that is, the number of transactions participating in the consensus process per unit time increases. This leads to a decrease in the verification delay, an improvement in blockchain users' satisfaction, and an increase in transaction demand. For blockchain miners, although the transaction price is falling, the larger number of transactions in the system still ensures that the miners gain decent revenue.

**Figure 5.** Benefit function of blockchain users

**Figure 6.** Benefit function of blockchain miners

Figures 5 and 6 also compare four groups of blockchain user distributions. It can be seen that, due to the limitations of the wireless environment, a larger blockchain-user distribution area, that is, a larger distribution-density value, yields a larger benefit function. However, as the density keeps increasing, the gap between the curves for densities 0.15 and 0.20 is clearly smaller than that between the curves for 0.05 and 0.10. This is because a dense distribution of blockchain users leads to a continuous decline in the transaction delivery efficiency of the wireless environment, which slows down the growth of the number of transactions in the network and reduces the benefits of both blockchain users and miners.

Through the above simulations, it can be concluded that the incentive mechanism proposed in this paper not only encourages miners to join the blockchain network, which increases system stability, but also meets the response-time requirements of blockchain users. This is precisely the purpose of the algorithm: guaranteeing the interests of both parties of the game while improving the distributed stability of the system.

**5. DISCUSSION**

The proposed incentive mechanism based on the Stackelberg game has been shown numerically to be beneficial for both blockchain users and miners. The simulation results show that the proposed scheme can effectively protect the interests of blockchain users and miners and improve the security and stability of the blockchain-based energy IoT system. This conclusion is supported by the results of several studies. For example, the survey by Liu et al. [8] on the use of game theory to analyze the incentives of different participants in a blockchain system found that such an incentive mechanism was able to balance the interests of energy producers, consumers, and miners.
Similarly, Sun et al. [18] investigated the impact of game theory on the security of blockchain-based energy trading systems and found that game-theoretic approaches can effectively enhance the security of energy trading systems. Moreover, Dong et al. [19] studied the use of game theory to optimize the performance of blockchain-based energy trading systems and found that the game-theoretic approach can effectively improve their performance. These studies all provide evidence that the proposed incentive mechanism based on the Stackelberg game can protect the interests of blockchain users and miners and improve the security and stability of the blockchain-based energy IoT system.

**6. CONCLUSION**

In this paper, the Stackelberg game is used to coordinate the needs of blockchain users and miners. Blockchain users can upload data to the DAG blockchain by paying a fee to the blockchain miners, and the miners gain revenue by charging transaction fees. Through the game, the revenue of the blockchain miners as a whole can be guaranteed on the one hand, and the response-time demand of the blockchain users can be met on the other. The numerical results not only verify the feasibility of the model but also show that the model performs well when there are many blockchain miners; however, once the number of miners reaches a certain value, further growth becomes marginal. Furthermore, the wireless energy IoT environment is confirmed to have a certain impact on the game model. The simulation results also show that as the number of miners increases, the benefit functions of blockchain users and miners increase, because more miners can process more transactions per unit time. This reduces the verification delay, improves blockchain users' satisfaction, and increases transaction demand. For blockchain miners, although the transaction price falls, the larger number of transactions in the system still ensures that the miners gain decent revenue. Overall, the results of this study show that the proposed incentive scheme based on the Stackelberg game model can effectively protect the interests of blockchain users and miners and improve the security and stability of the blockchain-based energy IoT system.

This research has several limitations. First, it focuses only on the game model between blockchain users and miners and does not consider the impact of other factors on the system performance. Second, the simulation parameters are applied only in the energy IoT environment; the application of the proposed model in other scenarios is not discussed. Third, the game model in this paper considers only the response-time requirements of blockchain users and not the resource-utilization efficiency of blockchain miners.

To further improve the system performance, much work remains for the future. First, the game model should be extended to consider the resource-utilization efficiency of blockchain miners. Second, the game model should consider the impact of other factors on system performance, such as network latency and transaction broadcast delay. Third, the application of the proposed model should be further extended to other scenarios. Finally, additional research should be done to explore other incentive mechanisms for blockchain networks.
**7. FUNDING**

The research is supported by: the Basic and Advanced Research Projects of CSTC (No. cstc2019jcyj zdxmX0008); the Science and Technology Research Program of the Chongqing Municipal Education Commission (No. KJZD-K201900605); the project of the Chongqing Big Data Application and Development Administration Bureau (No. 22-30); and the project of the Housing and Urban-rural Development Commission of Chongqing Municipality (No. 2021-0-104).

**8. REFERENCES**

1. Rifkin, J., "The third industrial revolution: How lateral power is transforming energy, the economy, and the world", Macmillan, (2011).
2. Nakamoto, S., "Bitcoin: A peer-to-peer electronic cash system", Decentralized Business Review, (2008), 21260.
3. Zhao, Y., Peng, K., Xu, B.-y. and Liu, Y., "Status and prospect of pilot project of energy blockchain", Automation of Electric Power Systems, Vol. 43, No. 7, (2019), 14-22.
4. Zhang, N., Wang, Y., Kang, C., Cheng, J. and He, D., "Blockchain technique in the energy internet: Preliminary research framework and typical applications", Proceedings of the CSEE, Vol. 36, No. 15, (2016), 4011-4022.
5. Fernández-Caramés, T.M. and Fraga-Lamas, P., "A review on the use of blockchain for the internet of things", IEEE Access, Vol. 6, (2018), 32979-33001. doi: 10.1109/ACCESS.2018.2842685.
6. Doshi, M. and Varghese, A., "Smart agriculture using renewable energy and AI-powered IoT", in AI, Edge and IoT-based Smart Agriculture, Elsevier, (2022), 205-225.
7. Wang, B. and Liu, F., "Task arrival based energy efficient optimization in smart-IoT data center", Mathematical Biosciences and Engineering, Vol. 18, No. 3, (2021), 2713-2732. doi: 10.3934/mbe.2021138.
8. Liu, Z., Luong, N.C., Wang, W., Niyato, D., Wang, P., Liang, Y.-C. and Kim, D.I., "A survey on blockchain: A game theoretical perspective", IEEE Access, Vol. 7, (2019), 47615-47643. doi: 10.1109/ACCESS.2019.2909924.
9. Saad, W., Han, Z., Debbah, M., Hjorungnes, A. and Basar, T., "Coalitional game theory for communication networks", IEEE Signal Processing Magazine, Vol. 26, No. 5, (2009), 77-97. doi: 10.1109/MSP.2009.000000.
10. Hakak, S., Khan, W.Z., Gilkar, G.A., Imran, M. and Guizani, N., "Securing smart cities through blockchain technology: Architecture, requirements, and challenges", IEEE Network, Vol. 34, No. 1, (2020), 8-14. doi: 10.1109/MNET.001.1900178.
11. Nejati, Z. and Faraji, A., "Actuator fault detection and isolation for helicopter unmanned aerial vehicle in the presence of disturbance", International Journal of Engineering, Transactions C: Aspects, Vol. 34, No. 3, (2021), 676-681. doi: 10.5829/IJE.2021.34.03C.12.
12. Khosravian, E. and Maghsoudi, H., "Design of an intelligent controller for station keeping, attitude control, and path tracking of a quadrotor using recursive neural networks", International Journal of Engineering, Transactions B: Applications, Vol. 32, No. 5, (2019), 747-758. doi: 10.5829/ije.2019.32.05b.17.
13. Xiong, Z., Feng, S., Wang, W., Niyato, D., Wang, P. and Han, Z., "Cloud/fog computing resource management and pricing for blockchain networks", IEEE Internet of Things Journal, Vol. 6, No. 3, (2018), 4585-4600. doi: 10.1109/JIOT.2018.2871706.
14. Wei, W., Liu, F. and Mei, S., "Energy pricing and dispatch for smart grid retailers under demand response and market price uncertainty", IEEE Transactions on Smart Grid, Vol. 6, No. 3, (2014), 1364-1374. doi: 10.1109/TSG.2014.2376522.
15. Yang, D., Xue, G., Fang, X. and Tang, J., "Incentive mechanisms for crowdsensing: Crowdsourcing with smartphones", IEEE/ACM Transactions on Networking, Vol. 24, No. 3, (2015), 1732-1744. doi: 10.1109/TNET.2015.2421897.
16. Hedges, J., "Backward induction for repeated games", arXiv preprint arXiv:1804.07074, (2018). https://doi.org/10.48550/arXiv.1804.07074
17. Cao, B., Xia, S., Han, J. and Li, Y., "A distributed game methodology for crowdsensing in uncertain wireless scenario", IEEE Transactions on Mobile Computing, Vol. 19, No. 1, (2019), 15-28. doi: 10.1109/TMC.2019.2892953.
18. Sun, J., Wu, C. and Ye, J., "Blockchain-based automated container cloud security enhancement system", in 2020 IEEE International Conference on Smart Cloud (SmartCloud), IEEE, (2020), 1-6.
19. Dong, J., Song, C., Liu, S., Yin, H., Zheng, H. and Li, Y., "Decentralized peer-to-peer energy trading strategy in energy blockchain environment: A game-theoretic approach", Applied Energy, Vol. 325, (2022), 119852. https://doi.org/10.1016/j.apenergy.2022.119852

**COPYRIGHTS**

©2023 The author(s). This is an open access article distributed under the terms of the Creative Commons Attribution (CC BY 4.0) license, which permits unrestricted use, distribution, and reproduction in any medium, as long as the original authors and source are cited. No permission is required from the authors or the publishers.
{ "disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.5829/ije.2023.36.08b.07?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.5829/ije.2023.36.08b.07, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "CCBY", "status": "GOLD", "url": "https://www.ije.ir/article_170108_0606d7f89e99e10eed7dcd50415a9b55.pdf" }
2023
[ "JournalArticle" ]
true
null
[ { "paperId": "5044316f1a8510873c9c3ce2bb53f2fdd6102854", "title": "Decentralized peer-to-peer energy trading strategy in energy blockchain environment: A game-theoretic approach" }, { "paperId": "b16153b44542775da53036b811ac829405b3bcb4", "title": "Task arrival based energy efficient optimization in smart-IoT data center." }, { "paperId": "f66b0a487551452aa73def8ca3e43d5775dc7245", "title": "Actuator Fault Detection and Isolation for Helicopter Unmanned Arial Vehicle in the Present of Disturbance" }, { "paperId": "58de6c3adb6f303ceb691c8a99a3c1aa6c410511", "title": "Blockchain-based Automated Container Cloud Security Enhancement System" }, { "paperId": "a59b3a1ff98afe3f98ec89fb3027a9e8bcf62866", "title": "A Distributed Game Methodology for Crowdsensing in Uncertain Wireless Scenario" }, { "paperId": "ee59451deedd0c7532b5b4c34340a615c1d76312", "title": "Securing Smart Cities through Blockchain Technology: Architecture, Requirements, and Challenges" }, { "paperId": "00ed0d22e38f54ed3e0e5c2a73b582ffda55ab3e", "title": "Design of an Intelligent Controller for Station Keeping, Attitude Control, and Path Tracking of a Quadrotor Using Recursive Neural Networks" }, { "paperId": "7ff71f9b85ca18554d56228edb7271d439214707", "title": "A Survey on Blockchain: A Game Theoretical Perspective" }, { "paperId": "02458904f9bd718bd8c6a1a36e9847ad83b0410b", "title": "A Review on the Use of Blockchain for the Internet of Things" }, { "paperId": "e173f6a03d74ca2192d6cbddd1a09632e882d41d", "title": "Backward induction for repeated games" }, { "paperId": "b3f538f13b441c969a998350e63efaded86223e6", "title": "Cloud/Fog Computing Resource Management and Pricing for Blockchain Networks" }, { "paperId": "d8403984bff42604cce78876efb31c11321bfed8", "title": "Incentive Mechanisms for Crowdsensing: Crowdsourcing With Smartphones" }, { "paperId": "acade013bcbcadcafc0498c0a5ce97facb600804", "title": "Energy Pricing and Dispatch for Smart Grid Retailers Under Demand Response and Market Price Uncertainty" }, { "paperId": null, "title": "A game-theoretic approach\", Applied Energy, Vol" }, { "paperId": "f8458ab0d8988bcd7a74c63992f358d64a49babf", "title": "Smart agriculture using renewable energy and AI-powered IoT" }, { "paperId": null, "title": "Status and prospect of pilot project of energy blockchain" }, { "paperId": "18cf728c1f9e9ad19d1354add882c1da48f7291a", "title": "Blockchain Technique in the Energy Internet: Preliminary Research Framework and Typical Applications" }, { "paperId": "b10f53fbb737f5a244202eda8a2ab20d4b3336f2", "title": "The third industrial revolution : how lateral power is transforming energy, the economy, and the world" }, { "paperId": "4e9ec92a90c5d571d2f1d496f8df01f0a8f38596", "title": "Bitcoin: A Peer-to-Peer Electronic Cash System" }, { "paperId": "41bab193bd1d5432d3784d077d3ef3f078168dac", "title": "IEEE Signal Processing Magazine, Special Issue on Game Theory, to appear, 2009 Coalitional Game Theory for Communication Networks: A Tutorial" } ]
12,802
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Medicine", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/0205b4b033b326a2bfc28ec21fccbd91df47ab2f
[ "Computer Science" ]
0.862838
Using Visualization to Build Transparency in a Healthcare Blockchain Application
0205b4b033b326a2bfc28ec21fccbd91df47ab2f
Sustainability
[ { "authorId": "145467727", "name": "Jesús Peral" }, { "authorId": "2056867384", "name": "E. Gallego" }, { "authorId": "2060054848", "name": "D. Gil" }, { "authorId": "1413908809", "name": "M. Tanniru" }, { "authorId": "2368536", "name": "P. Khambekar" } ]
{ "alternate_issns": null, "alternate_names": null, "alternate_urls": [ "http://mdpi.com/journal/sustainability", "http://nbn-resolving.de/urn/resolver.pl?urn=urn:nbn:ch:bel-172127" ], "id": "8775599f-4f9a-45f0-900e-7f4de68e6843", "issn": "2071-1050", "name": "Sustainability", "type": "journal", "url": "http://www.e-helvetica.nb.admin.ch/directAccess?callnumber=bel-172127" }
With patients demanding services to control their own health conditions, hospitals are looking to build agility in delivering care by extending their reach into patient and partner ecosystems and sharing relevant patient data to support care continuity. However, sharing patient data with several external stakeholders outside a hospital network calls for the development of a digital platform that is trusted by both hospitals and stakeholders, given that there is often no single entity supporting such coordination. In this paper, we propose a methodology that uses a blockchain architecture to address the technical challenge of linking disparate systems used by multiple stakeholders and the social challenge of engendering trust by using visualization to bring about transparency in the way in which data are shared. We illustrate this methodology using a pilot implementation. The paper concludes with a discussion and directions for future research and makes some concluding comments.
## sustainability

_Article_

# Using Visualization to Build Transparency in a Healthcare Blockchain Application

**Jesús Peral** 1,*, **Eduardo Gallego** 1,2, **David Gil** 2, **Mohan Tanniru** 3 and **Prashant Khambekar** 4

1 Department of Software and Computing Systems, University of Alicante, 03690 Alicante, Spain; ejgl2@alu.ua.es
2 Department of Computer Technology and Computation, University of Alicante, 03690 Alicante, Spain; dgil@dtic.ua.es
3 College of Public Health, University of Arizona, Phoenix, AZ 85006, USA; tanniru@oakland.edu
4 Harbinger Systems, Philadelphia, PA 19103, USA; Prashant.Khambekar@harbingergroup.com
* Correspondence: jperal@dlsi.ua.es; Tel.: +34-96-590-3772

Received: 12 July 2020; Accepted: 17 August 2020; Published: 20 August 2020

**Abstract: With patients demanding services to control their own health conditions, hospitals are looking to build agility in delivering care by extending their reach into patient and partner ecosystems and sharing relevant patient data to support care continuity. However, sharing patient data with several external stakeholders outside a hospital network calls for the development of a digital platform that is trusted by both hospitals and stakeholders, given that there is often no single entity supporting such coordination. In this paper, we propose a methodology that uses a blockchain architecture to address the technical challenge of linking disparate systems used by multiple stakeholders and the social challenge of engendering trust by using visualization to bring about transparency in the way in which data are shared. We illustrate this methodology using a pilot implementation. The paper concludes with a discussion and directions for future research and makes some concluding comments.**

**Keywords: blockchain; IoT; secure transaction; health; file sharing; visualization**

**1. Introduction**

In today's digital age, advanced technologies are continually altering customer expectations of services delivered and requiring that organizations build "agility" within their internal operations by using an agile organizational model of structure and governance [1]. The agile model supports the exploration of innovative service value propositions and the use of a mix of internal and external resources to evaluate these innovations to fulfill customer value [2,3]. Such a model is also used to support evaluation, adaptation, and learning to improve organizational capacity to sustain value as customer expectations change [4,5]. One can argue that "agile" organizations are indeed sustainable organizations, as they continue to meet the current needs of customers by using external resources and conserve their own resources to address future customer needs.

In this paper, we focus on hospitals that are responsible for supporting continuity of care for patients outside the hospital. Hospitals are extending patient care using several care facilities (e.g., urgent care facilities, ambulatory care facilities, etc.) [6,7] and are helping patients self-manage their care using multiple technologies [8–11]. This calls for hospitals to build agility to leverage the resources of external partners and motivate patients to self-manage their health in a tightly regulated and resource-constrained environment.
This means that patient data are generated by multiple stakeholder systems (partners and patients) that use several advanced technologies, such as internet of things (IoT), mobile apps, digital exchanges, and social media, and such data have to be understood, collected, integrated, and shared by all involved in the support of patient care. Unless there is a public health crisis (e.g., COVID-19) that calls for public health agencies to coordinate responses to significant disruptions of economic, health, and social conditions [12], opioid addiction that calls for tracking drug distribution [13], or chronic care management of high-risk patients to reduce hospital readmissions [14–16], there is little incentive for hospitals to coordinate patient data sharing outside their hospital networks. This calls for a distributed digital platform that is either coordinated by a trusted third party or built on an architecture that ensures trust for everyone to contribute and use the data shared.

Blockchain technology has been suggested in prior research as a platform when there is no trusted coordinator to support data sharing. It supports peer-to-peer connectivity among various stakeholders using agreed-upon protocols about who can participate in such data sharing. Using characteristics such as immutability and auditability, it is considered a viable and trusted platform to share data when there is no central entity coordinating such sharing activities. In healthcare, establishing trust is both a technical challenge (i.e., ensuring the integrity of data shared by multiple stakeholder systems and making it available for impact on care) and a social challenge (i.e., ensuring transparency to engender confidence that the mechanism used to share data addresses confidentiality). For example, a system that monitors patients' vital signs and uses an algorithm to generate a metric used to track patient conditions has to be trusted for its integrity. Patients' questions sent to peers and clinicians may be anonymized to share with peers for comment to ensure confidentiality, and identified and made available to patients quickly for treatment adaptation. This paper proposes a methodology that uses blockchain technology as a digital platform with a visualization feature added to address both the technical and social challenges.

This paper is organized as follows. Section 2 provides prior research on the use of blockchain in multiple domains as well as in healthcare. Section 3 discusses the methodology that creates visibility for both the creation of the data and its movement within the network to address the challenges. A case study to illustrate a pilot implementation is explained in Section 4. Section 5 discusses, in detail, an implementation methodology, and Section 6 includes discussion, future research directions, and limitations. Section 7 provides some concluding comments.

**2. Background**

Blockchain applications can be categorized by domain (financial or non-financial) [17], since cryptocurrencies represent many but not all of the applications using blockchain technology. These applications can also be classified by the version of technology used (i.e., 1.0, 2.0, and 3.0) [18,19]. Along application domains, they can be classified by application type (e.g., financial, healthcare, business and industrial, education, etc.), business issue focus (e.g., governance, privacy and security, etc.), or technical issue focus (e.g., integrity verification, IoT, data management, etc.) [20].
Application of blockchain in healthcare has been more recent [21,22], and, as discussed earlier, trust in sharing sensitive healthcare information among several actors outside a hospital system has been a challenge [23]. However, the mechanisms embedded in the distributed ledger technology associated with blockchain technology may be able to address this challenge [21,22,24–26]. In other words, if healthcare organizations are to become agile in meeting patient needs outside a hospital, the digital platform has to address some of the technical challenges, such as ensuring security, interoperability, data sharing, and mobility, if it is to engender trust [23]. Let us take each of these in more detail.

(1) Security

Existing methods used to protect and secure patient medical records have not been effective [27,28]. While access controls and authentication of records are widely used to ensure the integrity, confidentiality, and accessibility of medical information [26,29,30], their implementation becomes a challenge once systems are extended outside a hospital [31,32]. The encryption of data among Electronic Medical Record (EMR) and stakeholder systems is useful, but this leads to problems when there are many different encryption standards [33,34]. With no single technology platform addressing the security challenges [35], a distributed platform that allows local control of the data at each node but ensures security as the data move across the platform may be a solution. Blockchain technology, which has a uniform method to encrypt the data transferred, public–private keys for the authentication of users who transfer the data, and validation of those who decrypt the data for use, can be effective in addressing security when data are shared by several stakeholders [20,26,36,37].

(2) Interoperability

Sharing data among multiple stakeholder systems, such as apps or intelligent agents, or multiple people, such as messages sent using mobile phones, requires having a uniform method to collect disparate sources of data and a centralized database for all to share. With no single entity coordinating such a shared database, a blockchain architecture can allow each partner to upload data for sharing and use under certain agreed-upon protocols about who can contribute and access data, with embedded security, controlled redundancy, and auditability [23].

(3) Data Sharing

Data sharing in healthcare is critical, as patient care is remotely managed at various locations (at home, at partner sites, or at hospitals) and must be shared with others to support continuity of care. Moreover, the data gathered at each site may be in a different form [24,33,38]. Blockchain technology allows each partner connected to the network to share data either directly or indirectly using a secure link. In some cases, data are stored elsewhere (e.g., when the data are large, as with image scans, or in narrative form, as with doctors' notes), and associated links can be used for data access. In summary, blockchain technology allows the sharing of multiple data types without forcing a single data normalization method.

(4) Mobility (IoMT)

As patients become mobile and must access their data when and where they need it, its portability is critical. With more devices such as smart phones and sensors (IoT) connected to the Internet, the data collected from these devices [39–41] have to be effectively integrated. This concept is often referred to as digital mobility or Internet of Medical Things (IoMT) [42–44]. With a blockchain's ability to connect with any partner (human or machine) with permission to share data with others, such mobility is feasible.
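As a minimal illustration of the integrity and authentication mechanisms mentioned under (1) Security, the sketch below seals a record with a SHA-256 digest and a keyed authentication tag. Real blockchain deployments rely on public/private key signatures rather than the shared-secret HMAC assumed here, so this snippet is a simplified stand-in, not the scheme of any particular platform.

```python
import hashlib
import hmac
import os

# A provisioned shared credential stands in for a proper key pair.
shared_key = os.urandom(32)

def seal(record: bytes) -> tuple[str, str]:
    digest = hashlib.sha256(record).hexdigest()                      # integrity
    tag = hmac.new(shared_key, record, hashlib.sha256).hexdigest()   # authenticity
    return digest, tag

def verify(record: bytes, digest: str, tag: str) -> bool:
    ok_hash = hashlib.sha256(record).hexdigest() == digest
    ok_auth = hmac.compare_digest(
        hmac.new(shared_key, record, hashlib.sha256).hexdigest(), tag)
    return ok_hash and ok_auth

rec = b'{"patient": "p-001", "bp": "120/80"}'
d, t = seal(rec)
print(verify(rec, d, t))            # True: record intact and authenticated
print(verify(rec + b"!", d, t))     # False: any tampering is detected
```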
_2.1. Blockchain in Healthcare_

Blockchain technology has begun to see applications that extend care to patients outside a hospital. Traditionally, EMRs are used to manage patient data within a hospital system, and their use has grown significantly [10,45,46]. However, as hospitals try to extend care to patients outside the hospital, and with partners and patients using a myriad of systems, the challenge is one of interoperability. Blockchains can provide a gateway for data sharing among these systems by addressing the four key areas of importance discussed above: security, interoperability, data sharing, and mobility. For example, OmniPHR (Omnipresent Personal Health Record) has been proposed as a distributed model to integrate personal health records for patients and hospitals to access and use [38], and MedRec (a decentralized record management system to handle EMRs) is being developed as a component of a hospital EMR system [24]. A framework for EMR data sharing for cancer patients is proposed by Dubovitskaya et al. [47], and a decentralized platform that provides a secure, fast, and transparent exchange of a single version of a patient's data is provided by Medicalchain [48]. Other applications include HealthChain, which leverages blockchain technology to support the sharing of patients' medical data [49,50]. MediBchain is another patient-centric healthcare data management system that enables patient data sharing using cryptographic techniques [51]. Borioli and Couturier [52] discuss the potential of blockchain to conduct clinical trials using smart contracts, and Mamoshina et al. [53] propose a roadmap for decentralizing the personal health data ecosystem for drug discovery, biomarker development, and preventative healthcare. The use of microscopy sensors that take an image of fingernails for identity authentication was proposed by Lee et al. [54] to protect data privacy, and an Ethereum protocol that remotely monitors and manages patients using data from sensors, smart devices, and smart contracts was presented by Griggs et al. [27]. MeDShare (a system that addresses the issue of medical data sharing) is a blockchain-based system that is used to provide data provenance, auditing, and control of shared medical data in cloud repositories and to monitor malicious use of these data [28].

The goal of all these applications is to support operational continuity as care is extended outside a hospital, so that patient data can be accessed by doctors, hospitals, laboratories, pharmacists, insurers, etc., and strategic support (e.g., analysis of treatment adherence to change diagnoses or treatment plans, at an individual level over time or at an aggregate level for discovering patterns, possibly using big data analytics). To address these two types of support, one may consider two different blockchain architectures: one blockchain with parallel computing capabilities and big data analytics for strategic support, and another blockchain to support operational continuity that includes data integration, secure identity management, and a trust-supporting data sharing component [55].
Each of these blockchains still leverages the blockchain properties of authentication, confidentiality, accountability, and data sharing among those using the networks. In other words, operational continuity leads to data collection (or surveillance of patient–partner activities), and strategic support leverages these data for analysis and for refining care processes.

_2.2. Increasing Trust through Visibility_

While the discussion thus far demonstrates the role of blockchain technology in addressing a number of technical challenges to ensure trust in the way data are collected from disparate systems and shared to ensure integrity and confidentiality, there is still the issue of the social challenge: Will those who have to adopt the system trust the system enough to contribute to it? Transparency through visualization to enhance trust has been discussed in the literature. For example, transparency of the supply chain is viewed as critical to engender trust among the participating stakeholders [56], and visualization is often used to communicate information to groups with varying technical backgrounds, especially when there are opportunities for misrepresentation of the data [57]. In some cases, interactive graphics are used to make static reports dynamic, so that individuals can understand the data by seeing such data at various levels of granularity [58]. Dashboards with drill-down capabilities have been used by many organizations to improve both transparency and accountability, especially when clinical decisions and administrative decisions lead to conflicts [59]. Visualization has also been used to debug software and help with understanding the reasoning processes of forward-chaining rule-based expert systems [60], as well as when individuals are engaged in global software development, to ensure that workflows that are generating data to influence a project can be monitored [61]. Today, when data are manipulated by multiple entities including robots, designing human-like and visualization-based transparency is critical to map the processes used to manipulate data so they can match an individual's mental models [62] and reduce the cognitive burden by helping with external anchoring, information foraging, and cognitive offloading [63]. The methodology discussed here uses "visualization" to improve the trustworthiness of those sharing the data using the blockchain architecture, thus addressing both the technical and social challenges.

**3. The Proposed Methodology: A Blockchain-Based Solution**

In this section, we present our general methodology, where a blockchain architecture is used to visually show how data are shared by users as they move among the various nodes in the network. The architecture uses two web applications: one to create the data for the blockchain and the other to visualize the network to improve transparency and build trust. The application supports the sharing of data files (PDF, text, images, etc.) between different nodes, so that a user has the ability to visually see the files as they are sent and received, ensuring the existence, order, and immutability of these files. Specifically, we will illustrate the process used when permission is granted for some data by the patient and the subsequent movement of these data along the network to support transparency. To achieve the stated objectives, the methodology uses two features: blockchain technology and visualization techniques.
This methodology is technology agnostic, i.e., different blockchain technologies can be used for the application implementation. The methodology can be summarized as follows:

(1) Create the blockchain with the different network nodes, where each node corresponds to the different users who will participate in data sharing. In our case study, the nodes correspond to patients who decide to share their files as well as to the buyers of information from these files.
(2) Manage the transactions generated by the different nodes. Here, we will focus on authentication, file transfer, and visualization. These transactions are combined with other transactions to create a new block.
(3) Configure and customize the information to be visualized after choosing a tool for the network visualization.
(4) Connect or integrate the blockchain with the visualization tool.
(5) Demonstrate the visualization of how nodes interact during a transaction (a minimal illustrative sketch of steps (1), (2), and (5) is given below).

Figure 1 shows at a high level how the transactions are managed within the blockchain network.

**Figure 1. Methodology for transactions management within the blockchain network.**
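The following hypothetical Python sketch miniaturizes steps (1), (2), and (5): registering network nodes (patients and buyers), recording file-transfer transactions, and extracting the node-to-node edges that a visualization front-end could render. All class, method, and file names are illustrative; the paper's two web applications are not reproduced here.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    node_id: str
    role: str                      # "patient" or "buyer"

@dataclass
class Network:
    nodes: dict = field(default_factory=dict)
    transactions: list = field(default_factory=list)

    def register(self, node: Node) -> None:
        """Step (1): add a participant node to the network."""
        self.nodes[node.node_id] = node

    def transfer(self, sender: str, receiver: str, file_name: str) -> dict:
        """Step (2): record a file-transfer transaction between known nodes."""
        assert sender in self.nodes and receiver in self.nodes
        tx = {"from": sender, "to": receiver, "file": file_name}
        self.transactions.append(tx)     # later grouped with others into a block
        return tx

    def edges_for_visualization(self) -> list:
        """Step (5): the (sender, receiver, file) edges a graph tool would draw."""
        return [(t["from"], t["to"], t["file"]) for t in self.transactions]

net = Network()
net.register(Node("patient-1", "patient"))
net.register(Node("buyer-1", "buyer"))
net.transfer("patient-1", "buyer-1", "encounter-2020-07.pdf")
print(net.edges_for_visualization())
```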
Despite having introduced in this section the generic methodology based on blockchain, not all environments or organizations may use the methodology shown here. As Hebert et al. [64] point out, varying levels of security threats specific to a blockchain may call for an integrated multi-staged architecture. For the healthcare application chosen here, where a patient stores his or her data and provides access to these data to others, the methodology used here is considered appropriate. The data shared are tamper-proof because of the immutability property, and participants of the network are a priori authenticated as trusted partners to share data using the network (i.e., a permissioned blockchain). The application also uses the proposed transparency feature to enable users to see who accessed the data, and the blockchain keeps a record of every time data are accessed with a time stamp, thus providing an audit trail. The following section will discuss the application.

**4. Case Study**

The case study presented here allows patients to share their health data, including diagnoses and treatments, and gives research organizations access to these data for payment. The transparency of access and its purpose ensures that payments are made for the right purpose and accurately, while protecting patients' rights over their data. The payments increase patients' willingness to share their data for research purposes, and research institutions will benefit by paying a small amount to gather a large amount of patient data to support analysis. While such a payment of small amounts to patients may be viewed as too complicated [65], research organizations today spend large sums of money to solicit patient participation in clinical trials, and a large proportion of this money goes to intermediaries. Within a blockchain, the patient has more control over their data and can monetize the data by selling it directly to potential buyers. As the blockchain can track every access, the payment can be coupled with access, thus leading to immediacy and accuracy. Such transparency can lead to increased patient participation and improve the quality of clinical trials. Some systems have used token mechanisms for payments, and several blockchains have their own crypto tokens.
However, the tradability of tokens with fiat currency, liquidity, and the handling of inflationary pressures makes their use complicated [66]. Therefore, the system described here uses fiat currency (US dollars), thus creating a determined value for each transaction.

_Processes for Uploading and Accessing Health Data_

The main roles in the blockchain-based system are patients, caregivers, and buyers. The caregivers monitor patients for care continuity (e.g., doctors, external care providers, family members, etc.). The case study here, however, will focus on the interaction between patients and buyers, where patients upload health data and allow buyers to download and read the data after purchasing it. The patient data can be in the form of a Continuity of Care Document (CCD) or Fast Healthcare Interoperability Resources (FHIR). Patients get these data either from hospitals and clinics or can upload them from their own devices (e.g., via Fitbit devices). These data can either relate to a patient visit to a care center (encounter) or an episode related to his or her health/wellness. The data are uploaded one record at a time by the system front-end and stored in the blockchain. The metadata about that data are stored in local storage, and this can include the nature of the data uploaded. Depending on the size of the data, this can take a couple of minutes. The patient is informed once the data are uploaded and stored. This is shown in Figure 3.
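The following TypeScript sketch illustrates the split just described between the record stored on the blockchain and the metadata kept in local storage. The type and field names are our own assumptions for illustration, not the schema of the actual system.

```typescript
// Field names below are illustrative assumptions, not the system's schema.
interface HealthRecord {
  patientId: string;
  format: "CCD" | "FHIR";        // Continuity of Care Document or FHIR resource
  kind: "encounter" | "episode"; // care-center visit vs. health/wellness episode
  payload: string;               // the record content itself (stored on-chain)
}

interface LocalMetadata {
  recordId: string; // points at the on-chain record
  nature: string;   // nature of the data uploaded, e.g., "lab results, Jan-Mar"
  uploadedAt: string;
  sizeBytes: number;
}

// Records are uploaded one at a time; the patient is informed once
// the upload (which may take a couple of minutes) has completed.
async function uploadRecord(
  record: HealthRecord,
  nature: string,
  storeOnChain: (r: HealthRecord) => Promise<string>, // returns the on-chain record id
  saveLocally: (m: LocalMetadata) => void,
): Promise<void> {
  const recordId = await storeOnChain(record);
  saveLocally({
    recordId,
    nature,
    uploadedAt: new Date().toISOString(),
    sizeBytes: record.payload.length,
  });
  console.log(`Patient ${record.patientId}: record ${recordId} uploaded and stored.`);
}
```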
Patients publish the names of the files they want to share. When a buyer wants to purchase data, they are shown different types of data and the corresponding information (e.g., time range). Subsequently, once the buyer decides to purchase some data, the system determines the owner of the data and checks whether permission was provided. If permission was not already provided, the system informs the patient of the buyer request and the incentive offered by the buyer. If the patient provides permission, then the system stores the permission (for one patient, one buyer, and one piece of data) in the blockchain. It notifies the buyer of the permission, so the buyer can request the data to be read. The system stores the data access, deducts the payment from the buyer, and credits it to the patient. The payments made are accumulated with each buyer read. These interactions are shown in Figure 4.

**Figure 3. Process for uploading of health data by patient.**

**Figure 4. Process for access to health data by buyer.**

The blockchain keeps health data, permission data, and the monetary amounts belonging to both the patient and the buyer. Based on the size of each block, these patient–buyer transactions can be spread across many blocks, as shown in Figure 5. The next section will discuss the implementation.

**Figure 5. Patient's data and information across blocks.**
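As a hedged illustration of the flow in Figures 3 and 4, the sketch below models permissions as (patient, buyer, data) triples and couples each read with a fiat payment, as described above. The class, method names, and in-memory state are our own; the real system records these facts as blockchain transactions rather than mutable state.

```typescript
// Hypothetical model of the permission-and-payment flow; amounts are in USD.
interface Permission {
  patientId: string;
  buyerId: string;
  dataId: string;   // one permission covers one patient, one buyer, one piece of data
  priceUsd: number; // fiat incentive offered by the buyer
}

class PermissionLedger {
  private permissions: Permission[] = [];
  private balancesUsd = new Map<string, number>();

  grantPermission(p: Permission): void {
    this.permissions.push(p);
    console.log(`Buyer ${p.buyerId} notified: may now request ${p.dataId}.`);
  }

  // Each read records the access, deducts the payment from the buyer,
  // and credits it to the patient; payments accumulate per read.
  readData(buyerId: string, dataId: string): boolean {
    const p = this.permissions.find(x => x.buyerId === buyerId && x.dataId === dataId);
    if (!p) return false; // no permission yet: the patient must grant it first
    this.credit(buyerId, -p.priceUsd);
    this.credit(p.patientId, p.priceUsd);
    return true;
  }

  private credit(id: string, amount: number): void {
    this.balancesUsd.set(id, (this.balancesUsd.get(id) ?? 0) + amount);
  }
}

const ledger = new PermissionLedger();
ledger.grantPermission({ patientId: "p1", buyerId: "b1", dataId: "ccd-2020-01", priceUsd: 2 });
ledger.readData("b1", "ccd-2020-01"); // b1 pays USD 2; p1 is credited USD 2
```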
**5. Implementation**

In this section, we will discuss the implementation of the case. It is assumed that permission is granted by the patient, his or her data are sent to the buyer, and these transactions are tracked. The technologies considered for the development of this application were Corda R3, Hyperledger Fabric, and Ethereum. After studying these three technologies, Hyperledger Fabric [67] was chosen for its robustness and the privacy it offers for the stored information compared to several competitors. It is also configurable and guarantees security, interoperability, and data sharing. Inside the Hyperledger family, there is also Hyperledger Sawtooth, with a different consensus algorithm and a different mode of execution. For the purpose of this study, Hyperledger Fabric was chosen because its Explorer is much easier to use than the Explorer that comes with Hyperledger Sawtooth. The main challenge in the implementation was the integration of the different technologies used, such as Hyperledger Fabric, Hyperledger Explorer, and Vue.js; implementation details are shown in the following subsections.

_5.1. Blockchain Creation_

Each node in the network (associated with the users: patient, buyer, etc.) will be created using Vue.js. Different templates will be created for viewing files, sending files, and supporting authentication. When the permission is provided by the patient, the corresponding transaction and the subsequent block are created. The first step calls for downloading the latest version of Hyperledger Fabric from the official repository, unzipping it, and accessing the first-network folder to check the accuracy of the download. Once in the folder, if the network is running correctly, the message shown in Figure A1 will be displayed.

_5.2. Transaction Management: Patient Permission, File Transmission and Block Creation_

Once the permission is granted by the patient to share his or her data, the application will check that the recipient is in the system and the file is in the right format. Then, the file will be encoded in base64. Base64 is a method of encoding and decoding binary data (e.g., HTML, CSS, text documents, or images) as plain text [68]. After encoding the file, the Application Programming Interface (API) endpoint will be called to upload the file to the blockchain. Subsequently, a "json" file with the user's credentials and the encoded document is sent to the API. This information becomes part of the transaction and will be converted into a block, as shown in Figure 6.

**Figure 6. Description of information block.**
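The following is a minimal client-side sketch of the Section 5.2 flow: the file is read, base64-encoded, and posted as a JSON payload. The endpoint URL and JSON field names are assumptions for illustration only; the paper does not publish its API.

```typescript
import { readFile } from "fs/promises";

// Hypothetical upload client. Requires Node 18+ for the global fetch.
async function sendFile(path: string, recipientId: string, credentials: string): Promise<void> {
  const raw = await readFile(path);        // read the binary file
  const document = raw.toString("base64"); // base64-encode it [68]

  const response = await fetch("https://api.example.org/upload", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    // The credentials and the encoded document travel together as JSON;
    // this payload becomes part of the transaction (Figure 6).
    body: JSON.stringify({ credentials, recipient: recipientId, document }),
  });
  if (!response.ok) throw new Error(`Upload failed with status ${response.status}`);
}
```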
_5.3. File Reception_

In this step, the receiver (the buyer) can download the shared files. When the receiver logs on to the home page and clicks "View my received documents", a screen (as shown in Figure 7) will appear. The recipient user will be able to download the documents needed; these are ordered from the most recent to the oldest, showing the sender, the send date, and the ID of the sender. When the receiver clicks on "Download", the file will be decoded from base64 and then downloaded.

**Figure 7. Received files.**
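The reception side reverses the encoding. A hypothetical sketch, mirroring the fields shown in Figure 7 (sender, send date, and sender ID) and the upload sketch above:

```typescript
import { writeFile } from "fs/promises";

// Counterpart of the hypothetical upload: the base64 payload is decoded
// back to the original bytes when "Download" is clicked.
interface ReceivedDocument {
  sender: string;   // shown in the list of received files (Figure 7)
  senderId: string;
  sentAt: string;   // documents are ordered newest first
  document: string; // base64-encoded content
}

async function download(doc: ReceivedDocument, targetPath: string): Promise<void> {
  const bytes = Buffer.from(doc.document, "base64"); // base64 decode
  await writeFile(targetPath, bytes);
  console.log(`Saved file from ${doc.sender} (${doc.senderId}), sent ${doc.sentAt}.`);
}
```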
_5.4. Visualization Configuration and Connection with the Blockchain Network_

Hyperledger Explorer will be used for the display of the network, using React.js. It offers default templates ready to be launched or edited, and it provides several graphics to customize the templates for visualization. Such a method of sharing documents and using visualization to track their flow is useful in healthcare to build transparency and gain the trust of all the actors involved. There are potentially other applications where such transparency is needed to ensure user adoption of blockchain technology for sharing data. The rest of the section will discuss some of the implementation details, such as the installation, configuration, and visualization of Hyperledger Explorer.

5.4.1. Installation and Configuration

For the installation, the first step is to download the latest version of Hyperledger Explorer from the official repository, followed by downloading the PostgreSQL packages and running the database services to make sure the database has been installed correctly (as shown in Figure A2). Once installed, the next step will be to authorize Hyperledger Explorer to access the network in Fabric (configuration). In the "app" folder inside the main folder of "blockchain-explorer", the file "explorerconfig.json" should be modified (Figure A3). In "platform", the fabric platform is used. In "PostgreSQL", the database credentials will be detailed. To connect Explorer with Fabric, access "blockchain-explorer/app/platform/fabric", where the file "config.json" will be modified. The goal here is to define the connection with Fabric (Figure A4). The name of the blockchain network in our case is set to "first-network". Finally, we open the json file located at:
```
/blockchain-explorer/app/platform/fabric/connection-profile/first-network.json
```
Then, we update "adminPrivateKey", "signedCert" and "path" with the corresponding routes of the Fabric network for visualization (Figure A5). Once Fabric and Explorer are connected, the last commands (Figure 8) are executed to build the project, which contains our case study:

`./main.sh install` (To make the build of the project.)
`./main.sh clean` (To clean up unnecessary files that were installed with the previous command.)
`./main.sh test` (To test the REST API as well as the interface components; it generates a document reporting errors.)

**Figure 8. Commands needed to run Fabric.**

5.4.2. Visualization

The final step is to visualize the blockchain network from an analytical point of view. For this purpose, it is necessary to modify some packages of Hyperledger Explorer. Its structure is shown in Figure A6. In order to customize Hyperledger Explorer, the default code of the official package must be modified. It is developed with the React.js and Redux frameworks. Therefore, to edit the components, it is necessary to access the folder "/blockchain-explorer/client/src/components" and edit the components that are required. Here, we have only modified Charts, as it supports visualization. Figure 9 shows the dashboard of Explorer, including a set of panels with the current configuration.

**Figure 9. Dashboard of Hyperledger Explorer.**

On the top panel, we can see that the network has eight blocks (from Block 0 to Block 7; the genesis block is a configuration block for a specific Hyperledger Fabric channel and contains no data) with eight transactions (one transaction per block). There are four nodes representing four different users registered on the network. In this case study, there are zero chaincodes, since no smart contracts were created.
Chaincode refers to the code for executing programs in the blockchain. These codes or smart contracts signify a particular mini agreement that gets automatically triggered when the condition values align with the required set of conditions. The word chaincode is a simple phrase to indicate that the code is related to the blockchain.

Below the top panel are the list of Peers on the left and the network traffic on the right. Peers are network elements that help maintain the network and verify and approve transactions. They also provide methods for interacting with the network, such as creating different APIs. The component on the lower left shows the blockchain. It shows the last block added (Block 7). Each block has three different fields:

- Channel Name: The name of the channel through which the block has been created. A channel is a mechanism by which a set of components of a blockchain network interact and exchange information. Channels provide privacy to the network. There can be different channels, and users can access one or another, depending on how their permissions are configured.
- Datahash: This is an encrypted code that contains all the information of the block. Here, you can find information about the sender of the file, the receiver of the file, and the file itself.
- Number of Tx: This represents the number of transactions per block.

To the right of this last component is Transactions by Organization, an entity that has access to different channels and shows how network participants are grouped according to their privileges.
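A small illustrative model of the three per-block fields listed above, with assumed values; the names paraphrase the dashboard's panel labels and are not Explorer's actual API.

```typescript
// Illustrative model of the per-block fields shown by the Explorer dashboard.
interface BlockSummary {
  channelName: string; // channel through which the block was created
  dataHash: string;    // hash covering sender, receiver, and the file itself
  numberOfTx: number;  // transactions per block (one per block in this study)
}

const lastBlockAdded: BlockSummary = {
  channelName: "mychannel",    // assumed channel name, for illustration only
  dataHash: "9f2c51e0a41b...", // truncated example value
  numberOfTx: 1,
};
console.log(`${lastBlockAdded.channelName}: ${lastBlockAdded.numberOfTx} tx in last block`);
```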
Finally, it is important to question the suitability of approaches similar to ours for inherently decentralized architectures such as distributed ledgers or blockchains, where processing, storage, and control flow are shared among many equal participants. Van Landuyt et al. [69] performed an analysis of blockchain security and the privacy of the data it supports against other threat-modeling approaches discussed in the literature, and their findings identify areas for future improvements needed for threat-modeling approaches.

_5.5. User Study_

A user study was carried out to determine the features important to users with respect to the visualization model and implementation. The total number of users within the authors' research group performing the study was 11: two full professors, three associate professors, three PhD students, and three degree students. Figure 10 shows the results from the users' responses. The users indicated that security and a user-friendly nature are the most important features. The preliminary results show that transparency in data sharing is important for user participation when there is no single trusted coordinating entity that users can rely on.

**Figure 10. User study results: (a) main feature and (b) second feature important to users with respect to the visualization model and implementation. The options polled were security, user friendly, easy integration into other systems, and others (e.g., open source).**

In summary, the proposed digital platform can be used in any healthcare application where there are multiple actors (hospital, patients, external clinical and non-clinical care providers) sharing select data among each other to support care. The content of the file (or resource) to be shared and who it should be sent to is determined by the client (patient, provider, etc.), and the blockchain architecture supports interoperability among a number of distributed systems outside a hospital's own EMR. With the immutability of data stored and the authenticity of those accessing the data, the architecture ensures that those who are designated to receive the data are indeed the ones who are accessing the data. More importantly, by visually tracking the movement of data files, the users can see and interpret the activity. This is a key contribution of this paper.

**6. Discussion**

The implementation can be generalized to share different types of files based on the application context. For example, users may cast their vote on an issue or in an election and see how these are pooled by an authentic node on the network for compilation. Similarly, in today's COVID-19 environment, data from various test facilities and hospitals can be tracked for the number of people infected (or testing positive) and the number of hospitalizations and deaths, for public health officials to develop regional patterns. With some of the demographic or geographical data of each node stored outside the blockchain, it can reduce the data redundancy but provide access to interpret the data traffic within the network. Moreover, blockchains using smart contracts can provide alerts in appropriate nodes based on data analysis. For example, an alert can be sent to a public health node on the network when the number of positive cases coming from that region exceeds a threshold for its regional population, so that it can develop alternative preventive practices. Similarly, it can trigger an alert to an emergency management vehicle station node when a hospital within its area has exceeded its hospitalization capacity, so that patients can be diverted to another hospital. Furthermore, some of these partners, such as emergency management vehicle stations or public health agencies, can be outside the blockchain if they are primarily receiving alerts or aggregated data to reduce network complexity, or else in a separate blockchain that is used for receiving such alerts.
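As one hedged example of such a smart-contract alert, the sketch below fires a notification when a region's positive cases exceed a per-capita threshold. The data shape, threshold, and function names are invented for illustration; this is written independently of any specific chaincode API.

```typescript
// Fires automatically when condition values align with the threshold,
// in the spirit of the chaincode/smart-contract alerts described above.
interface RegionStats {
  region: string;
  population: number;
  positiveCases: number;
}

function checkAlert(
  stats: RegionStats,
  thresholdPerCapita: number,
  notify: (msg: string) => void,
): void {
  if (stats.positiveCases / stats.population > thresholdPerCapita) {
    notify(`Public health node ${stats.region}: positive cases exceed ` +
           `${thresholdPerCapita * 100}% of the regional population.`);
  }
}

checkAlert({ region: "north", population: 500_000, positiveCases: 6_000 },
           0.01, console.log); // 1% threshold; 1.2% positive, so the alert fires
```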
_6.1. Future Research Directions_

When care is moved outside a hospital, with a number of actors sharing different types of data at varying frequencies, future research needs to explore certain heuristics or algorithmic models to segment the digital platform, which may include a mix of centralized and distributed networks. Each of these networks must be synchronized to ensure that data moving within and across them are not lost. The larger the network, the greater the technical challenge of managing the actors and the data they share, and the more complex the social challenge of aligning the goals of these actors. In addition, the distributed actors using blockchain must eventually interact with other actors (e.g., hospitals) who operate centrally coordinated patient health records or a government agency that regulates the type of data shared. This leads to three different possible research directions.

Addressing the technical challenge: Are there ways to decide when to segment the data based on frequency of use and the size of the data shared? Given the redundancy embedded in the way in which the blockchain replicates the data, decision rules may guide the size of the data to be shared, the type of data shared (e.g., images vis-à-vis text), the frequency of data sharing (e.g., once a month with a few nodes, or real time for tracking infections) and, of course, the number of nodes who need access to these data. This may lead to the creation of subnetworks, which are also relevant in addressing the complexity of the social challenge.
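To illustrate what such decision rules might look like, here is a hypothetical heuristic over data size, type, sharing frequency, and audience size; the thresholds and field names are ours, not validated rules.

```typescript
// Flags a data stream as a candidate for its own subnetwork based on the
// load it would impose through the blockchain's built-in replication.
interface DataStream {
  sizeMb: number;
  type: "image" | "text";
  sharesPerDay: number;  // e.g., ~0.03 for monthly sharing, high for real time
  consumerNodes: number; // number of nodes needing access
}

function suggestSubnetwork(s: DataStream): boolean {
  // Large, frequently shared data replicated to many nodes strains the
  // network; thresholds here are purely illustrative.
  const dailyReplicationLoadMb = s.sizeMb * s.sharesPerDay * s.consumerNodes;
  const bulkyImagery = s.type === "image" && s.sizeMb > 5;
  return dailyReplicationLoadMb > 100 || bulkyImagery;
}

console.log(suggestSubnetwork(
  { sizeMb: 8, type: "image", sharesPerDay: 24, consumerNodes: 3 })); // true
```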
Addressing the social challenge: Healthcare outside a hospital is supported by many different actors, such as clinical actors like pharmacies or testing labs, non-clinical actors like social workers or caregivers of patients at home, or researchers who analyze data for treatment adherence or disease patterns. The motivation of these users to use such a platform to share data, and the transparency they need to enhance their trust in using the system, may vary. Therefore, having different networks support clinical actors, non-clinical actors, and analysis may reduce the goal-alignment complexity and help mitigate the need for visualization and the associated complexities in system design. Moreover, many of these subgroups have varying levels of interaction with hospitals, thus creating the need for different gateways for data sharing with the hospital EMR, an issue which is discussed next.

Gateways to centralized systems: Hospitals and government agencies still drive much of healthcare around the world, and the type of data integration they need with external actors varies. For example, central public health agencies of regions or countries need aggregated data from hospitals and other external care providers like test facilities to track disease conditions, except during health emergencies, when real-time data access is critical. Similarly, hospitals may need certain data in real time from clinical actors outside the system, like pharmacies, to control the over-prescription or use of drugs, whereas they need periodic data from social workers on patient adherence to treatment protocols. This means that each blockchain network may have to decide which centralized systems will become nodes and how data are aggregated and sent to these nodes based on pre-defined criteria. In some cases, the centralized systems may be part of a separate network, with the blockchain network of the distributed actors simply connected to the centralized system network to ensure the integrity of each.

_6.2. Limitations_

We reviewed the state of the art of both the challenges and the opportunities offered by blockchain technology-based solutions in terms of modeling problems in general and in healthcare in particular. It is important to emphasize that although the technology itself is not new, the fundamental contribution of the paper here is the use of visualization to make blockchain use transparent. This highly model-driven and flexible methodology provides integration with existing technologies; highlights various challenges and opportunities that arise when a blockchain is integrated with the IoT [70]; and suggests improvements to support decentralization and scalability, identity (identification of every device), autonomy, reliability (verifying data authenticity), security (validation by smart contracts, among other services), a market of services (interesting solutions for an IoT ecosystem of services and data marketplaces), and secure code deployment (a significant advantage of blockchain's secure, immutable storage). Similarly, the survey in [71] reviews blockchain challenges and opportunities and indicates a wide spectrum of blockchain applications extending from cryptocurrency, financial services, risk management, and the Internet of Things (IoT) to public and social services. The authors conducted a comprehensive survey on blockchain technology with a focus on taxonomy, algorithms, applications, and technical challenges, as well as recent advances to address some of these challenges. Another important issue within the blockchain framework is cryptocurrencies, as they are an emerging economic force, but there are concerns about their security. The reason for this is the complex collusion cases and new threat vectors that could be missed by conventional security assessment strategies. Almashaqbeh et al. [72] propose ABC, an asset-based, cryptocurrency-focused threat-modeling framework, and demonstrate its effectiveness on some real-world use cases. Finally, as we have observed in Section 5.5, the user study that has been carried out has the usual limitations of a preliminary study. For this reason, it will be necessary to extend it to a study with more users with different profiles in order to evaluate our proposal in a more exhaustive and comprehensive way and thus make it more general purpose. It would also be necessary to compare our proposal with similar cases, where visualization is not present, to demonstrate the advantages of our methodology in gaining user trust to use a blockchain to share data.

**7. Conclusions**

This paper illustrates the use of a digital platform based on an underlying blockchain technology architecture to support data sharing by patients with external partners.
It brings to the surface the mechanism used by blockchain technology to send and receive data in a secure manner to engender trust among those sharing the data. Such transparency is key if the digital platform is to motivate patients, who are unfamiliar with the technology, to share their data with others who are willing to provide a service. Ultimately, the ease of use supported by interoperability among different patient and partner systems and the transparency with regard to how the data are shared among patients and partners are both critical for enhancing the external resources used to sustain care outside a hospital.

**Author Contributions: Conceptualization, J.P., D.G., M.T. and P.K.; methodology, J.P., E.G., D.G., M.T. and P.K.; software, E.G.; validation, E.G. and P.K.; formal analysis, J.P., D.G., M.T. and P.K.; investigation, J.P., E.G., D.G., M.T. and P.K.; resources, E.G.; data curation, E.G.; writing—original draft preparation, J.P., E.G., D.G., M.T. and P.K.; writing—review and editing, J.P., D.G. and M.T.; visualization, E.G.; supervision, J.P., D.G., M.T. and P.K.; project administration, J.P., D.G. and M.T.; funding acquisition, J.P. and D.G. All authors have read and agreed to the published version of the manuscript.**

**Funding: This study has been partially funded by the ECLIPSE-UA project (RTI2018-094283-B-C32).**

**Conflicts of Interest: The authors declare no conflict of interest.**

**Appendix A. Screenshots of Terminal Windows**

**Figure A1. Successful blockchain creation.**

**Figure A2. Check for correct database creation.**

**Figure A3. Hyperledger Explorer access to the network in Fabric.**

**Figure A4. Connection with Fabric: network name.**

**Figure A5. Connection with Fabric: path settings.**

**Figure A6. Hyperledger Explorer structure.**
**References**

1. Aghina, W.; De Smet, A.; Weerda, K. Agility: It rhymes with stability. McKinsey Quarterly. December 2015. Available online: https://www.mckinsey.com/business-functions/organization/our-insights/agility-it-rhymes-with-stability (accessed on 6 June 2020).
2. Bossert, O.; Laartz, J.; Ramsey, T.J. Running your company at two speeds. McKinsey Quarterly. December 2014. Available online: https://www.mckinsey.com/business-functions/mckinsey-digital/our-insights/running-your-company-at-two-speeds (accessed on 6 June 2020).
3. Lusch, R.F.; Nambisan, S. Service Innovation: A Service-Dominant Logic Perspective. MIS Quart. 2015, 39, 155–175.
4. Tanniru, M.; Khuntia, J.; Weiner, J. Hospital Leadership in Support of Digital Transformation. Pac. Asia J. Assoc. Inf. Syst. 2018, 10, 1–24. [CrossRef]
5. Vargo, S.L.; Lusch, R.F. Service-Dominant Logic: What it is, What it is not, What it might be. In The Service-Dominant Logic of Marketing: Dialog, Debate, and Directions; Vargo, S.L., Lusch, R.F., Eds.; M.E. Sharpe: Armonk, NY, USA, 2006; pp. 43–55.
6. Nuckols, T.K.; Fingar, K.R.; Barrett, M.; Steiner, C.A.; Stocks, C.; Owens, P.L. The shifting landscape in utilization of inpatient, observation, and emergency department services across payers. J. Hosp. Med. 2017, 12, 443–446. [CrossRef]
7. Sanborn, B.J. Outpatient shift, digitization, shortages, and telehealth will shape healthcare systems of tomorrow. Healthcare Finance. May 2018. Available online: https://www.healthcarefinancenews.com/news/outpatient-shift-digitization-shortages-and-telehealth-will-shape-healthcare-systems-tomorrow (accessed on 6 June 2020).
8. Tanniru, M.; Sandhu, K. Engagement leading to empowerment-Digital Innovation Strategies for Patient Care Continuity. In Proceedings of the Australasian Conference on Information Systems, Sydney, Australia, January 2018. Available online: https://aisel.aisnet.org/cgi/viewcontent.cgi?article=1039&context=acis2018 (accessed on 6 June 2020).
9. Johnston, L.; Zemanek, J.; Reeve, M.J.; Grills, N. The evidence for using mHealth technologies for diabetes management in low- and middle-income countries. J. Hosp. Manag.
Health Policy 2018, 2. Available online: http://jhmhp.amegroups.com/article/view/4403/5182 (accessed on 6 June 2020).
10. Agarwal, R.; Gao, G.; DesRoches, C.M.; Jha, A.K. Research Commentary—The Digital Transformation of Healthcare: Current Status and the Road Ahead. Inform. Syst. Res. 2010, 21, 796–809. [CrossRef]
11. Ghosh, K.; Khuntia, J.; Chawla, S.; Deng, X. Media Reinforcement for Psychological Empowerment in Chronic Disease Management. Commun. Assoc. Inf. Syst. 2014, 34, 419–438. [CrossRef]
12. Tanniru, M. Transforming public health using value lens and extended partner networks. Learn. Health Syst. 2020. Available online: https://onlinelibrary.wiley.com/doi/full/10.1002/lrh2.10234 (accessed on 6 June 2020).
13. Dowell, D.; Haegerich, T.; Chou, R. No Shortcuts to Safer Opioid Prescribing. N. Engl. J. Med. 2019, 380, 2285–2287. [CrossRef]
14. Rahman, A.; Rashid, M.; Kernec, J.L.; Phillippe, B.; Barnes, S.J.; Fioranell, F.; Yang, S.; Romain, O.; Abbasi, Q.; Loukas, G.; et al. A Secure Occupational Therapy Framework for Monitoring Cancer Patients Quality of Life. Sensors 2019, 19, 5258. [CrossRef]
15. Boots, L.M.M.; de Vugt, M.E.; van Knippenberg, R.J.M.; Kempen, G.I.J.M.; Verhey, F. A Systematic Review of Internet-Based Supportive Interventions for Caregivers of Patients with Dementia. Int. J. Geriatr. Psych. 2013, 29, 331–344. [CrossRef]
16. Carretero, S.; Stewart, J.; Centeno, C. Information and Communication Technologies for Informal Carers and Paid Assistants: Benefits from Micro-, Meso-, And Macro-Levels. Eur. J. Ageing 2015, 12, 163–173. [CrossRef]
17. Crosby, M.; Nachiappan; Pattanayak, P.; Verma, S.; Kalyanaraman, V. Blockchain technology: Beyond bitcoin. Appl. Innov. Rev. 2016, 2, 6–19. Available online: http://scet.berkeley.edu/wp-content/uploads/AIR-2016-Blockchain.pdf (accessed on 6 June 2020).
18. Swan, M. Blockchain: Blueprint for a New Economy, 1st ed.; McGovern, T., Ed.; O'Reilly Media: Sebastopol, CA, USA, 2015.
19. Zhao, J.L.; Fan, S.; Yan, J. Overview of business innovations and research opportunities in blockchain and introduction to the special issue. Financ. Innov. 2016, 2, 28. [CrossRef]
20. Casino, F.; Dasaklis, T.K.; Patsakis, C. A systematic literature review of blockchain-based applications: Current status, classification, and open issues. Telemat. Inform. 2019, 36, 55–81.
21. Beninger, P.; Ibara, M.A. Pharmacovigilance and Biomedical Informatics: A Model for Future Development. Clin. Ther. 2016, 38, 2514–2525. [CrossRef]
22. Sankar, L.S.; Sindhu, M.; Sethumadhavan, M. Survey of consensus protocols on blockchain applications. In Proceedings of the 2017 4th International Conference on Advanced Computing and Communication Systems (ICACCS), Coimbatore, India, 6–7 January 2017; pp. 1–5. [CrossRef]
23. McGhin, T.; Choo, K.K.R.; Liu, C.Z.; He, D.
Blockchain in healthcare applications: Research challenges and opportunities. J. Netw. Comput. Appl. 2019, 135, 62–75. [CrossRef]
24. Azaria, A.; Ekblaw, A.; Vieira, T.; Lippman, A. MedRec: Using blockchain for medical data access and permission management. In Proceedings of the 2016 2nd International Conference on Open and Big Data (OBD), Vienna, Austria, 22–24 August 2016; pp. 25–30. [CrossRef]
25. Yue, X.; Wang, H.; Jin, D.; Li, M.; Jiang, W. Healthcare Data Gateways: Found Healthcare Intelligence on Blockchain with Novel Privacy Risk Control. J. Med. Syst. 2016, 40, 218. [CrossRef]
26. Yüksel, B.; Küpçü, A.; Özkasap, Ö. Research issues for privacy and security of electronic health services. Future Gener. Comput. Syst. 2017, 68, 1–13. [CrossRef]
27. Griggs, K.N.; Ossipova, O.; Kohlios, C.P.; Baccarini, A.N.; Howson, E.A.; Hayajneh, T. Healthcare Blockchain System Using Smart Contracts for Secure Automated Remote Patient Monitoring. J. Med. Syst. 2018, 42, 1–7. [CrossRef]
28. Xia, Q.; Sifah, E.B.; Asamoah, K.O.; Gao, J.; Du, X.; Guizani, M. MeDShare: Trust-Less Medical Data Sharing among Cloud Service Providers via Blockchain. IEEE Access 2017, 5, 14757–14767. [CrossRef]
29. Abouelmehdi, K.; Beni-Hssane, A.; Khaloufi, H.; Saadi, M. Big data security and privacy in healthcare: A Review. Procedia Comput. Sci. 2017, 113, 73–80. [CrossRef]
30. Small, A.; Wainwright, D. Privacy and security of electronic patient records – Tailoring multimethodology to explore the socio-political problems associated with Role Based Access Control systems. Eur. J. Oper. Res. 2018, 265, 344–360.
31. Kshetri, N. Blockchain's roles in strengthening cybersecurity and protecting privacy. Telecommun. Policy 2017, 41, 1027–1038. [CrossRef]
32. Ramani, V.; Kumar, T.; Bracken, A.; Liyanage, M.; Ylianttila, M. Secure and Efficient Data Accessibility in Blockchain Based Healthcare Systems. In Proceedings of the 2018 IEEE Global Communications Conference (GLOBECOM), Abu Dhabi, UAE, 9–13 December 2018; pp. 206–212. [CrossRef]
33. Khan, S.I.; Hoque, A.S.M.L. Privacy and security problems of national health data warehouse: A convenient solution for developing countries. In Proceedings of the 2016 International Conference on Networking Systems and Security (NSysS), Dhaka, Bangladesh, 7–9 January 2016; pp. 1–6. [CrossRef]
34. Rajkomar, A.; Oren, E.; Chen, K.; Dai, A.M.; Hajaj, N.; Hardt, M.; Liu, P.J.; Liu, X.; Marcus, J.; Sun, M.; et al. Scalable and accurate deep learning with electronic health records. Npj Digital Med. 2018, 1, 1–10. [CrossRef]
35. Esposito, C.; De Santis, A.; Tortora, G.; Chang, H.; Choo, K.K.R. Blockchain: A Panacea for Healthcare Cloud-Based Data Security and Privacy? IEEE Cloud Comput. 2018, 5, 31–37. [CrossRef]
36. Dagher, G.G.; Mohler, J.; Milojkovic, M.; Marella, P.B. Ancile: Privacy-preserving framework for access control and interoperability of electronic health records using blockchain technology. Sustain. Cities Soc.
2018, 39, 283–297. [CrossRef]
37. Ekblaw, A.; Azaria, A.; Halamka, J.D.; Lippman, A. A Case Study for Blockchain in Healthcare: "MedRec" prototype for electronic health records and medical research data. White Paper, reprinted from 2nd International Conference on Open & Big Data, 22–24 August 2016; pp. 1–13. Available online: https://www.media.mit.edu/publications/medrec-whitepaper/ (accessed on 6 June 2020).
38. Roehrs, A.; da Costa, C.A.; da Rosa Righi, R. OmniPHR: A distributed architecture model to integrate personal health records. J. Biomed. Inform. 2017, 71, 70–81. [CrossRef]
39. Kotz, D.; Gunter, C.A.; Kumar, S.; Weiner, J.P. Privacy and Security in Mobile Health: A Research Agenda. Computer 2016, 49, 22–30. [CrossRef]
40. Sahi, M.; Abbas, H.; Saleem, K.; Yang, X.; Derhab, A.; Orgun, M.; Iqbal, W.; Rashid, L.; Yaseen, A. Privacy preservation in e-healthcare environments: State of the art and future directions. IEEE Access 2018, 6, 464–478. [CrossRef]
41. Zhang, Y.; Liu, T.; Li, K.; Zhang, J. Improved visual correlation analysis for multidimensional data. J. Visual Lang. Comput. 2017, 41, 121–132. [CrossRef]
42. Cecil, J.; Gupta, A.; Pirela-Cruz, M.; Ramanathan, P. An IoMT based cyber training framework for orthopedic surgery using Next Generation Internet technologies. Inf. Med. Unlocked 2018, 12, 128–137. [CrossRef]
43. Joyia, G.J.; Liaqat, R.M.; Farooq, A.; Rehman, S. Internet of Medical Things (IOMT): Applications, Benefits and Future Challenges in Healthcare Domain. J. Commun. 2017, 12, 240–247. [CrossRef]
44. Pilkington, M. Can Blockchain Improve Healthcare Management? Consumer Medical Electronics and the IoMT. SSRN 2017, 1–13. Available online: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3025393 (accessed on 6 June 2020).
45. Hoy, M.B. An introduction to the blockchain and its implications for libraries and medicine. Med. Ref. Serv. Q. 2017, 36, 273–279.
46. Kuo, T.T.; Kim, H.E.; Ohno-Machado, L. Blockchain distributed ledger technologies for biomedical and health care applications. J. Am. Med. Inform. Assoc. 2017, 24, 1211–1220.
47. Dubovitskaya, A.; Xu, Z.; Ryu, S.; Schumacher, M.; Wang, F. How blockchain could empower ehealth: An application for radiation oncology. Lect. Notes Comput. Sci. 2017, 10494, 3–6. [CrossRef]
48. Medicalchain. Whitepaper 2.1. 2018. Available online: https://medicalchain.com/Medicalchain-Whitepaper-EN.pdf (accessed on 6 June 2020).
49. Peterson, K.; Deeduvanu, R.; Kanjamala, P.; Boles, K. A blockchain-based approach to health information exchange networks. In Proc. NIST Workshop Blockchain Healthcare, 2016.
Available online: https://pdfs.semanticscholar.org/c1b1/89c81b6fda71a471adec11cfe72f6067c1ad.pdf?_ga=2.151693625.186512466.1597711686-512884126.1594177302 (accessed on 6 June 2020).
50. Ahram, T.; Sargolzaei, A.; Sargolzaei, S.; Daniels, J.; Amaba, B. Blockchain technology innovations. In Proceedings of the 2017 IEEE Technology and Engineering Management Society Conference (TEMSCON), Santa Clara, CA, USA, 8–10 June 2017; pp. 137–141. [CrossRef]
51. Al Omar, A.; Rahman, M.S.; Basu, A.; Kiyomoto, S. MediBchain: A blockchain based privacy preserving platform for healthcare data. In Proceedings of the 10th International Conference on Security, Privacy and Anonymity in Computation, Communication and Storage, Guangzhou, China, 12–15 December 2017; pp. 534–543. [CrossRef]
52. Borioli, G.S.; Couturier, J. How blockchain technology can improve the outcomes of clinical trials. Br. J. Health Care Manag. 2018, 24, 156–162.
53. Mamoshina, P.; Ojomoko, L.; Yanovich, Y.; Ostrovski, A.; Botezatu, A.; Prikhodko, P.; Izumchenko, E.; Aliper, A.; Romantsov, K.; Zhebrak, A.; et al. Converging blockchain and next-generation artificial intelligence technologies to decentralize and accelerate biomedical research and healthcare. Oncotarget 2018, 9, 5665–5690.
54. Lee, S.H.; Yang, C.S. Fingernail analysis management system using microscopy sensor and blockchain technology. Int. J. Distrib. Sens. Netw. 2018, 14, 1–13. [CrossRef]
55. Shae, Z.; Tsai, J.J.P. On the Design of a Blockchain Platform for Clinical Trial and Precision. In Proceedings of the 2017 IEEE 37th International Conference on Distributed Computing Systems (ICDCS), Atlanta, GA, USA, 5–8 June 2017; pp. 1972–1980. [CrossRef]
56. Frentrup, M.; Theuvsen, L. Transparency in supply chains: Is trust a limiting factor? Presented at the 99th EAAE Seminar Trust and Risk in Business Networks, Bonn, Germany, 8–10 February 2006. Available online: https://www.semanticscholar.org/paper/Transparency-in-supply-chains%3A-is-trust-a-limiting-Frentrup-Theuvsen/8fa45abcce7d8231c756af48810c6c797fea1f2a (accessed on 6 June 2020).
57. Weissgerber, T.L.; Milic, N.M.; Winham, S.J.; Garovic, V.D. Beyond bar and line graphs: Time for a new data presentation paradigm. PLoS Biol. 2015, 13, 1–10. [CrossRef]
58. Weissgerber, T.L.; Garovic, V.D.; Savic, M.; Winham, S.J.; Milic, N.M. From Static to Interactive: Transforming Data Visualization to Improve Transparency. PLoS Biol. 2016, 14, 1–8. [CrossRef]
59. Weiner, J.; Balijepally, V.; Tanniru, M. Integrating Strategic to Operational Decision-Making using Data-Driven Dashboard Implementation: The Case of St. Joseph Mercy Oakland Hospital. J. Healthc. Manag.
2015, _[60, 319–331. [CrossRef]](http://dx.doi.org/10.1097/00115514-201509000-00005)_ 60. Selig, W.J.; Johannes, J.D. Reasoning visualization in expert systems-the applicability of algorithm animation techniques. In Proceedings of the 3rd international conference on Industrial and engineering applications of [artificial intelligence and expert systems, New York, NY, USA, 15–18 July 1990; pp. 457–466. [CrossRef]](http://dx.doi.org/10.1145/98784.98869) ----- _Sustainability 2020, 12, 6768_ 20 of 20 61. Trainer, E.H. Supporting Trust in Globally Distributed Software Teams: The Impact of Visualized Collaborative Traces on Perceived Trustworthiness. Doctoral Dissertation, University of California, La Jolla, CA, USA, 2012. 62. Perlmutter, L.; Kernfeld, E.; Cakmak, M. Situated language understanding with human-like and visualization-based transparency. In Proceedings of the Robotics: Science and Systems Conference, [Ann Arbor, MI, USA, 18–22 June 2016; pp. 1–10. [CrossRef]](http://dx.doi.org/10.15607/RSS.2016.XII.040) 63. Liu, Z.; Stasko, J. Mental Models, Visual Reasoning and Interaction in Information Visualization: A Top-down [Perspective. IEEE Trans. Vis. Comput. Graph. 2010, 16, 999–1008. [CrossRef]](http://dx.doi.org/10.1109/TVCG.2010.177) 64. Hebert, C.; Di Cerbo, F. Secure blockchain in the enterprise: A methodology. Perv. Mob. Comput. 2019, _[59, 101038. [CrossRef]](http://dx.doi.org/10.1016/J.PMCJ.2019.101038)_ 65. Yaraghi, N. Who should profit from the sale of patient data? The Brookings Institution. Available [online: https://www.brookings.edu/blog/techtank/2018/11/19/who-should-profit-from-the-sale-of-patient-](https://www.brookings.edu/blog/techtank/2018/11/19/who-should-profit-from-the-sale-of-patient-data/) [data/ (accessed on 6 June 2020).](https://www.brookings.edu/blog/techtank/2018/11/19/who-should-profit-from-the-sale-of-patient-data/) 66. Hertzog, E.; Benartzi, G.; Benartzi, G.; Bancor Protocol. Continuous Liquidity for Cryptographic Tokens [through their Smart Contracts. Available online: https://storage.googleapis.com/website-bancor/2018/04/](https://storage.googleapis.com/website-bancor/2018/04/01ba8253-bancor_protocol_whitepaper_en.pdf) [01ba8253-bancor_protocol_whitepaper_en.pdf (accessed on 6 June 2020).](https://storage.googleapis.com/website-bancor/2018/04/01ba8253-bancor_protocol_whitepaper_en.pdf) 67. [A Blockchain Platform for the Enterprise. Hyperledger. Available online: https://hyperledger-fabric.](https://hyperledger-fabric.readthedocs.io/en/release-2.0/) [readthedocs.io/en/release-2.0/ (accessed on 15 May 2020).](https://hyperledger-fabric.readthedocs.io/en/release-2.0/) 68. Josefsson, S. The Base16, Base32, and Base64 Data Encodings; RFC 4648, The Internet Society. 2006. Available [online: https://www.semanticscholar.org/paper/The-Base16%2C-Base32%2C-and-Base64-Data-Encodings-](https://www.semanticscholar.org/paper/The-Base16%2C-Base32%2C-and-Base64-Data-Encodings-Josefsson/2718f599c9bbb96aecd81180167d10dcf9c65c47) [Josefsson/2718f599c9bbb96aecd81180167d10dcf9c65c47 (accessed on 6 June 2020).](https://www.semanticscholar.org/paper/The-Base16%2C-Base32%2C-and-Base64-Data-Encodings-Josefsson/2718f599c9bbb96aecd81180167d10dcf9c65c47) 69. Van Landuyt, D.; Sion, L.; Vandeloo, E.; Joosen, W. On the Applicability of Security and Privacy Threat Modeling for Blockchain Applications. 
In Computer Security; Katsikas, S., Cuppens, F., Cuppens, N., Lambrinoudakis, C., Kalloniatis, C., Mylopoulos, J., Antón, A., Gritzalis, S., Pallas, F., Pohle, J., et al., Eds.; [Springer: Cham, Switzerland, 2020; Volume 11980, pp. 195–203. [CrossRef]](http://dx.doi.org/10.1007/978-3-030-42048-2_13) 70. AnaReyna, A.; Martín, C.; Chen, J.; Soler, E.; Díaz, M. On blockchain and its integration with IoT–Challenges [and opportunities. Future Gener. Comput. Syst. 2018, 88, 173–190. [CrossRef]](http://dx.doi.org/10.1016/j.future.2018.05.046) 71. Zheng, Z.; Xie, S. Blockchain challenges and opportunities: A Survey. Int. J. Web Grid. Serv. 2018, 14, 352–375. [[CrossRef]](http://dx.doi.org/10.1504/IJWGS.2018.10016848) 72. Almashaqbeh, G.; Bishop, A.; Cappos, J. ABC: A Cryptocurrency-Focused Threat Modeling Framework. In Proceedings of the IEEE INFOCOM 2019-IEEE Conference on Computer Communications Workshops [(INFOCOM WKSHPS), Paris, France, 29 April–2 May 2019; pp. 859–864. [CrossRef]](http://dx.doi.org/10.1109/INFCOMW.2019.8845101) © 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution [(CC BY) license (http://creativecommons.org/licenses/by/4.0/).](http://creativecommons.org/licenses/by/4.0/.) -----
{ "disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.3390/su12176768?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.3390/su12176768, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "CCBY", "status": "GOLD", "url": "https://www.mdpi.com/2071-1050/12/17/6768/pdf?version=1598255404" }
2020
[]
true
2020-08-20T00:00:00
[ { "paperId": "ce50d80f9ad070fb9a4b66e98cf787402804177f", "title": "Can Blockchain Improve Healthcare Management?" }, { "paperId": "85a2d6936a792572bb2a2bb9a2687f584b5ec623", "title": "Transforming public health using value lens and extended partner networks" }, { "paperId": "31a9d3bf8bebd0ce6ad95c72b16c6164f431db34", "title": "A Secure Occupational Therapy Framework for Monitoring Cancer Patients’ Quality of Life" }, { "paperId": "d53231838c5887c3a5b1e05ff7567606f0f36c69", "title": "Secure blockchain in the enterprise: A methodology" }, { "paperId": "489cf88a2d399edf065fc0d4cea95f1e0a8a7f88", "title": "On the Applicability of Security and Privacy Threat Modeling for Blockchain Applications" }, { "paperId": "d0b74583708a96ed7562ab023aaad1261f25241d", "title": "Engagement Leading to Empowerment-Digital Innovation Strategies for Patient Care Continuity" }, { "paperId": "c1e98fa629cd65080afd919eecb8628829fe66ce", "title": "Blockchain in healthcare applications: Research challenges and opportunities" }, { "paperId": "3b652bc4e0ca3397357bd8a28587cdf41004796a", "title": "No Shortcuts to Safer Opioid Prescribing." }, { "paperId": "dbf53d232e5282261ab628e75717d6f0b983384e", "title": "ABC: A Cryptocurrency-Focused Threat Modeling Framework" }, { "paperId": "4c0945cb52d0734b25ecea49e3ae1c1b243fca66", "title": "A systematic literature review of blockchain-based applications: Current status, classification and open issues" }, { "paperId": "41db775646e5909053126a13ced6c35199377f3f", "title": "Secure and Efficient Data Accessibility in Blockchain Based Healthcare Systems" }, { "paperId": "305edd92f237f8e0c583a809504dcec7e204d632", "title": "Blockchain challenges and opportunities: a survey" }, { "paperId": "7b55b3f41446c613453b14cbc5ea48ae4f634eeb", "title": "Hospital Leadership in Support of Digital Transformation" }, { "paperId": "0c2ad254173631422348b28b6d9045e144180786", "title": "The evidence for using mHealth technologies for diabetes management in low- and middle-income countries" }, { "paperId": "6d661299a8207a4bff536494cec201acee3c6c1c", "title": "Healthcare Blockchain System Using Smart Contracts for Secure Automated Remote Patient Monitoring" }, { "paperId": "863dff6fea7811e6c2b76b3eb64eee84ee280b33", "title": "Ancile: Privacy-Preserving Framework for Access Control and Interoperability of Electronic Health Records Using Blockchain Technology" }, { "paperId": "950c5a5c6dd3e8e338160fef2d0ff1b3c092de57", "title": "Fingernail analysis management system using microscopy sensor and blockchain technology" }, { "paperId": "c078bf5f00a1bfb8df6bda9bc7fee6bfea6f5cbb", "title": "Blockchain: A Panacea for Healthcare Cloud-Based Data Security and Privacy?" 
}, { "paperId": "baad851f534d38ffef5732149089a4a46b31e707", "title": "How blockchain technology can improve the outcomes of clinical trials" }, { "paperId": "5d949d5f2c48572ff385da5576b2c10ab7142089", "title": "Privacy and security of electronic patient records - Tailoring multimethodology to explore the socio-political problems associated with Role Based Access Control systems" }, { "paperId": "1f7eec4c76963a4ba7516ca00e6a2f855667b3f2", "title": "Scalable and accurate deep learning with electronic health records" }, { "paperId": "87c259303857ef81593b2abd49106b9b5ce89b48", "title": "MediBchain: A Blockchain Based Privacy Preserving Platform for Healthcare Data" }, { "paperId": "7f1c3c97a639a93796e935add3665c3ef329c0c8", "title": "Blockchain's roles in strengthening cybersecurity and protecting privacy" }, { "paperId": "5bbc4181e073ec6b3ec894a35eacdc6a67e8c3a3", "title": "Blockchain distributed ledger technologies for biomedical and health care applications" }, { "paperId": "b756c3c1f4d02421680c5ad3420bc11d5d3baeca", "title": "How Blockchain Could Empower eHealth: An Application for Radiation Oncology - (Extended Abstract)" }, { "paperId": "4697609882b87d138455a60882608b293bb4e3ca", "title": "Can Blockchain Improve Healthcare Management? Consumer Medical Electronics and the IoMT" }, { "paperId": "8f00a9e60c7bdb8ac473ee7a09bf1b051b33393d", "title": "Improved visual correlation analysis for multidimensional data" }, { "paperId": "49af9119c09b97af977595b011afd8a3f588412d", "title": "MeDShare: Trust-Less Medical Data Sharing Among Cloud Service Providers via Blockchain" }, { "paperId": "996d2697e16db08b6cfa89cadb924da534ddf3dd", "title": "An Introduction to the Blockchain and Its Implications for Libraries and Medicine" }, { "paperId": "6536e22778df1d8b371cd8ff263145713e7bffd9", "title": "OmniPHR: A distributed architecture model to integrate personal health records" }, { "paperId": "9092a7802f6e56dd5b6d1be30c8b5588a22e53fe", "title": "Blockchain technology innovations" }, { "paperId": "c082bbc681566f6761729837a97a0ebab92a2ce8", "title": "On the Design of a Blockchain Platform for Clinical Trial and Precision Medicine" }, { "paperId": "e11466449176d78b209e7ee32a03f5ddfc68cd8e", "title": "The Shifting Landscape in Utilization of Inpatient, Observation, and Emergency Department Services Across Payers" }, { "paperId": "ef6e7f49c81767695d3048e6fc43d3d0a8188f72", "title": "Research issues for privacy and security of electronic health services" }, { "paperId": "b66602ad0b663e93bea5765c4a56d83b7ef1c2dc", "title": "Pharmacovigilance and Biomedical Informatics: A Model for Future Development." 
}, { "paperId": "c7083cfa963f9a58a27cbae152295b3a555bcc5a", "title": "Overview of business innovations and research opportunities in blockchain and introduction to the special issue" }, { "paperId": "208735a6c437b8ae3efba01693c3e8a06289c3dd", "title": "Healthcare Data Gateways: Found Healthcare Intelligence on Blockchain with Novel Privacy Risk Control" }, { "paperId": "bd8a307efcffbf57d2e5c3c23577de44d883d865", "title": "MedRec: Using Blockchain for Medical Data Access and Permission Management" }, { "paperId": "3a1690ecb06986c3b5f66a81034c76e76845a80e", "title": "Situated Language Understanding with Human-like and Visualization-Based Transparency" }, { "paperId": "538dd43365c1b6629f055df06f694fa53c969405", "title": "Privacy and Security in Mobile Health: A Research Agenda" }, { "paperId": "457080ccecbfc681f1520cee65d8f94d39887a3e", "title": "From Static to Interactive: Transforming Data Visualization to Improve Transparency" }, { "paperId": "b0f2c8ce920a63dfff803be0abe583ff03ee083e", "title": "Converging blockchain and next-generation artificial intelligence technologies to decentralize and accelerate biomedical research and healthcare" }, { "paperId": "85ad29c9c76585ad33b1a339ac96abe8230ef4ae", "title": "Integrating Strategic and Operational Decision Making Using Data‐Driven Dashboards: The Case of St. Joseph Mercy Oakland Hospital" }, { "paperId": "4d02488921e248b8a0cfe70f7d60c0606d3954c4", "title": "Beyond Bar and Line Graphs: Time for a New Data Presentation Paradigm" }, { "paperId": "5d1d25e8e7c0abbac2b9189dbe7b9b67fee5fda7", "title": "Service Innovation: A Service-Dominant Logic Perspective" }, { "paperId": "97fddbbfd681bce9eeb8e0a013353b4d5b2ba0db", "title": "Blockchain: Blueprint for a New Economy" }, { "paperId": "d5d0c5c8bd73aa20d3d332ba2d6b0a2c0848ee62", "title": "Information and communication technologies for informal carers and paid assistants: benefits from micro-, meso-, and macro-levels" }, { "paperId": "75dfeed53405a480817a668e966508156b33ef5d", "title": "Service-Dominant Logic: What It Is, What It Is Not, What It Might Be" }, { "paperId": "d8d2d004d3ce7b5d036e5ec5587b9682cd53c714", "title": "A systematic review of Internet‐based supportive interventions for caregivers of patients with dementia" }, { "paperId": "162bd6eadcdebdbc76c39a78c39333b6d763f31f", "title": "Mental Models, Visual Reasoning and Interaction in Information Visualization: A Top-down Perspective" }, { "paperId": "61209d903884e8bc800e6f9ccdf61f26501dc257", "title": "The Base16, Base32, and Base64 Data Encodings" }, { "paperId": "452f658a73f86a2220ac9d1163f6b01b686e428b", "title": "Reasoning visualization in expert systems—the applicability of algorithm animation techniques" }, { "paperId": null, "title": "https://hyperledger-fabric" }, { "paperId": null, "title": "Beyond bitcoin" }, { "paperId": null, "title": "Outpatient shift, digitization, shortages, and telehealth will shape healthcare systems of tomorrow" }, { "paperId": null, "title": "Who should profit from the sale of patient data? 
The Brookings Institution" }, { "paperId": null, "title": "A Blockchain Platform for the Enterprise" }, { "paperId": "36c8624c210b8e0accfc03ec68542d8ebf45cb12", "title": "WITHDRAWN: An IoMT-based Cyber Training Framework for Orthopedic Surgery using Next Generation Internet Technologies" }, { "paperId": "aa156cc213266649b6809ac00be087a296f41143", "title": "Privacy Preservation in e-Healthcare Environments: State of the Art and Future Directions" }, { "paperId": "33a0e8d78ccef17adaba0d9df3446c5d466c36c9", "title": "Bancor Protocol Continuous Liquidity for Cryptographic Tokens through their Smart Contracts" }, { "paperId": null, "title": "Medicalchain" }, { "paperId": "a3f9127171ada4280b5a8f6803b1348515b37816", "title": "Big data security and privacy in healthcare: A Review" }, { "paperId": "5fac7d95cbae8f435b482bc819353f27d342373c", "title": "Internet of Medical Things (IOMT): Applications, Benefits and Future Challenges in Healthcare Domain" }, { "paperId": "00113e81ef3a179d74d988d72329d306eae78525", "title": "Survey of consensus protocols on blockchain applications" }, { "paperId": "de0398060e9a5f525099447b4d8093b5480a41b3", "title": "Agility: It rhymes with stability" }, { "paperId": "deaa536affe5da5c87536281508f2d9ef83e60d7", "title": "Privacy and security problems of national health data warehouse: a convenient solution for developing countries" }, { "paperId": "3ed0db58a7aec7bafc2aa14ca550031b9f7021d5", "title": "A Case Study for Blockchain in Healthcare: “MedRec” prototype for electronic health records and medical research data" }, { "paperId": "c1b189c81b6fda71a471adec11cfe72f6067c1ad", "title": "A Blockchain-Based Approach to Health Information Exchange Networks" }, { "paperId": "4027dcacc881fcc33f009ec6f18629b1295b1be4", "title": "Media Reinforcement for Psychological Empowerment in Chronic Disease Management" }, { "paperId": "628bb3b1472745ccfd1bd430e9f237f3b9dbf7e4", "title": "Supporting trust in globally distributed software teams: the impact of visualized collaborative traces on perceived trustworthiness" }, { "paperId": "cf8b96dabf484fc28049eebd3afc86e50bba45c8", "title": "The Digital Transformation of Healthcare: Current Status and the Road Ahead" }, { "paperId": "8fa45abcce7d8231c756af48810c6c797fea1f2a", "title": "Transparency in Supply Chains: Is Trust a Limiting Factor?" }, { "paperId": null, "title": "This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license" }, { "paperId": null, "title": "Whitepaper 2.1" }, { "paperId": null, "title": "Running your company at two speeds; McKinsey Quarterly, December 2014" }, { "paperId": null, "title": "Manage the transactions generated by different nodes" }, { "paperId": null, "title": "Create the blockchain with the different network nodes, where each node corresponds to different users who will participate in data sharing" } ]
20689
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Engineering", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/02071107d0474cbd1f4077016d3014e1c2c9974e
[ "Computer Science" ]
0.88378
Trust but Verify: Cryptographic Data Privacy for Mobility Management
02071107d0474cbd1f4077016d3014e1c2c9974e
IEEE Transactions on Control of Network Systems
[ { "authorId": "5716768", "name": "Matthew W. Tsao" }, { "authorId": "37577482", "name": "Kaidi Yang" }, { "authorId": "92858908", "name": "Stephen Zoepf" }, { "authorId": "1696085", "name": "M. Pavone" } ]
{ "alternate_issns": null, "alternate_names": [ "IEEE Trans Control Netw Syst" ], "alternate_urls": null, "id": "0d3564f0-947d-4124-b171-400399406075", "issn": "2325-5870", "name": "IEEE Transactions on Control of Network Systems", "type": "journal", "url": "http://ieeexplore.ieee.org/servlet/opac?punumber=6509490" }
The era of big data has brought with it a richer understanding of user behavior through massive datasets, which can help organizations optimize the quality of their services. In the context of transportation research, mobility data can provide municipal authorities (MAs) with insights on how to operate, regulate, or improve the transportation network. Mobility data, however, may contain sensitive information about end users and trade secrets of mobility providers (MPs). Due to this data privacy concern, MPs may be reluctant to contribute their datasets to MA. Using ideas from cryptography, we propose an interactive protocol between an MA and an MP, in which MA obtains insights from mobility data without MP having to reveal its trade secrets or sensitive data of its users. This is accomplished in two steps: 1) a commitment step and 2) a computation step. In the first step, Merkle commitments and aggregated traffic measurements are used to generate a cryptographic commitment. In the second step, MP extracts insights from the data and sends them to MA. Using the commitment and zero-knowledge proofs, MA can certify that the information received from MP is accurate, without needing to directly inspect the mobility data. We also present a differentially private version of the protocol that is suitable for the large query regime. The protocol is verifiable for both MA and MP in the sense that dishonesty from one party can be detected by the other. The protocol can be readily extended to the more general setting with multiple MPs via secure multiparty computation.
## Trust but Verify: Cryptographic Data Privacy for Mobility Management

Matthew Tsao (Stanford University, mwtsao@stanford.edu), Kaidi Yang (Stanford University, kaidi.yang@stanford.edu), Stephen Zoepf (Lacuna Technologies, stephen.zoepf@lacuna.ai), Marco Pavone (Stanford University, pavone@stanford.edu)

November 16, 2021

**Abstract**

The era of Big Data has brought with it a richer understanding of user behavior through massive data sets, which can help organizations optimize the quality of their services. In the context of transportation research, mobility data can provide Municipal Authorities (MA) with insights on how to operate, regulate, or improve the transportation network. Mobility data, however, may contain sensitive information about end users and trade secrets of Mobility Providers (MP). Due to this data privacy concern, MPs may be reluctant to contribute their datasets to MA. Using ideas from cryptography, we propose an interactive protocol between a MA and a MP in which MA obtains insights from mobility data without MP having to reveal its trade secrets or sensitive data of its users. This is accomplished in two steps: a commitment step, and a computation step. In the first step, Merkle commitments and aggregated traffic measurements are used to generate a cryptographic commitment. In the second step, MP extracts insights from the data and sends them to MA. Using the commitment and zero-knowledge proofs, MA can certify that the information received from MP is accurate, without needing to directly inspect the mobility data. We also present a differentially private version of the protocol that is suitable for the large query regime. The protocol is verifiable for both MA and MP in the sense that dishonesty from one party can be detected by the other. The protocol can be readily extended to the more general setting with multiple MPs via secure multi-party computation.

This research was supported by the National Science Foundation under CAREER Award CMMI-1454737. K. Yang would like to acknowledge the support of the Swiss National Science Foundation (SNSF) Postdoc Mobility Fellowship (P400P2 199332).

### Contents

1 Introduction
  1.1 Statement of Contributions
  1.2 Organization
  1.3 Related Work
2 Model & Problem Description
  2.1 Transportation Network Model
  2.2 Objective: Privacy for Mobility Management (PMM)
    2.2.1 Regulation Compliance for Mobility Providers
    2.2.2 Transportation Infrastructure Development Projects
    2.2.3 Congestion Pricing
3 A high level description of the protocol
4 The Protocol
  4.1 Protocol Description
  4.2 Ensuring accuracy of σ
    4.2.1 Rider Witness: Detecting underreported demand
    4.2.2 Aggregated Roadside Audits: Detecting overreported demand
    4.2.3 Implementation details for ARA
5 Discussion
6 Conclusion
A Incorporating Differential Privacy for the Large Query Regime
  A.1 Goal: Differential Privacy without Trust
  A.2 A Differentially Private version of the protocol
B Supplementary Material
  B.1 Mobility Provider Serving Demand
  B.2 Mobility Provider Serving Demand (Steady State)
  B.3 Cryptographic Tools (B.3.1 Cryptographic Hash Functions; B.3.2 Cryptographic Commitments; B.3.3 Merkle Trees; B.3.4 Merkle Proofs; B.3.5 Digital Signatures; B.3.6 Public Key Encryption; B.3.7 Zero Knowledge Proofs; B.3.8 zk-SNARKs)
  B.4 Implementation Details and Examples (B.4.1 Obtaining ridehailing period activity; B.4.2 Evaluating contributions to congestion)
  B.5 Necessity of Assumption 1 for Verifiability
  B.6 Roadside Audits with fewer sensors (B.6.1 Security Discussion)
  B.7 Establishing Verifiability and Differential Privacy for Appendix A
  B.8 More Details on Congestion Pricing
  B.9 Efficacy of Merkle Proofs

### 1 Introduction

The rise of mobility as a service, smart vehicles and smart cities is revolutionizing transportation industries all over the world. Mobility management, which entails operation, regulation, and innovation of transportation systems, can leverage mobility data to improve the efficiency, safety, accessibility, and adaptability of transportation systems far beyond what was previously achievable. The analysis and sharing of mobility data, however, introduces two key concerns. The first concern is data privacy; sharing mobility data can introduce privacy risks to end users that comprise the datasets. The second concern is credibility; in situations where data is not shared, how can the correctness of numerical studies be verified? These concerns motivate the need for data analysis tools for transportation systems which are both privacy preserving and verifiable.

The data privacy issue in transportation is a consequence of the trade-off between data availability and data privacy.
While user data can be used to inform infrastructure improvement, equity and green initiatives, the data may contain sensitive user information and trade secrets of mobility providers. As a result, end users and mobility providers may be reluctant to share their data with city authorities. Cities have recently begun mandating micromobility providers to share detailed trajectory data of all trips, arguing that the data is needed to enforce equity or environmental objectives. Some mobility providers argued that while names and other directly identifiable information may not be included in the data, trajectory data can still reveal schedules, routines and habits of the city's inhabitants. The mobility providers' concern over the release of anonymized data is justified. [1] showed that any attempt to release anonymized data either fails to provide anonymity, or there are low-sensitivity attributes of the original dataset that cannot be determined from the published version. In general, anonymization is increasingly easily defeated by the very techniques that are being developed for many legitimate applications of big data [2]. Such disputes highlight the need for privacy-preserving data analysis tools in transportation.

A communication scheme between a sender and a receiver is verifiable if it enables the receiver to determine whether the message or report it receives is an accurate representation of the truth. When the objectives of mobility providers and policy makers are not aligned, one party may benefit from misreporting data or other information, giving rise to verifiability issues in transportation. An example of this is Greyball software [3]. Mobility providers developed Greyball software to deny service or display misleading information to targeted users. It was originally developed to protect their drivers from oppressive authorities in foreign countries, by misreporting driver location to accounts that were believed to belong to the oppressive authorities. However, mobility providers also used Greyball to hide their activity from authorities in the United States when their operations were scrutinized. Another example of verifiability issues is third party wage calculation apps [4]. Drivers, frustrated by instances of being underpaid, created an app to confirm whether the pay was consistent with the length and duration of each trip. Such incidents highlight the need for verifiable data analysis tools in transportation.

##### 1.1 Statement of Contributions

In this paper we propose a protocol between a Municipal Authority and a Mobility Provider that enables the Mobility Provider to send insights from its data to the Municipal Authority in a privacy-preserving and verifiable manner. In contrast to non-interactive data sharing mechanisms (which are currently used by most municipalities), where a Municipal Authority is provided an aggregated and anonymized version of the data to analyze, our proposed protocol is an interactive mechanism where a Municipal Authority sends queries and Mobility Providers give responses.

Figure 1: The Mobility Provider can answer the Municipal Authority's data-related mobility queries in a verifiable way without needing to share the data. The absence of data sharing in the protocol reduces the chance that a malicious third party intercepts and uses the data for nefarious privacy-invasive purposes.
By sharing responses to queries rather than the entire dataset, interactive mechanisms circumvent the data anonymization challenges faced by non-interactive approaches [1, 2]. Our proposed protocol, depicted in Figure 1, has three main steps. In the first step, the Mobility Provider uses its data to produce a data identifier which it sends to the Municipal Authority. The Municipal Authority can then send its data query to the Mobility Provider in the second step. In the third step, the Mobility Provider sends its response along with a zero knowledge proof. The Municipal Authority can use the zero knowledge proof to check that the response is consistent with the identifier, i.e., the response was computed from the same data that was used to create the identifier. If the Municipal Authority has multiple queries, steps 2 and 3 are repeated.

The protocol uses cryptographic commitments and aggregated traffic measurements to ensure that the identifier is properly computed from the true mobility data. In particular, any deviation from the protocol by one party can be detected by the other, making the protocol strategyproof for both parties. Given that the identifier is properly computed, the zero knowledge proof then enables the Municipal Authority to verify the correctness of the response without needing to directly inspect the mobility data. Since the Municipal Authority never needs to inspect the mobility data, the protocol is privacy-preserving. The protocol can be extended to the more general case of multiple Mobility Providers, each with a piece of the total mobility data. This is done by including a secure multi-party computation in step 3 of the protocol.

Answering a large number of queries with our protocol can lead to privacy issues since it was shown in [5] that a dataset can be reconstructed from many accurate statistical measurements. To address this concern, we generalize the protocol to enable differentially private responses from the Mobility Provider in large query regimes.

##### 1.2 Organization

This paper is organized as follows. The remainder of the introduction discusses academic work related to privacy and verifiability in transportation networks. In Section 2 we introduce a mathematical model of transportation networks and use it to formulate the data privacy problem for Mobility Management. We provide a high level intuitive description of our proposed protocol in Section 3. In Section 4 we provide a full technical description of our protocol. We discuss some of the technical nuances of the protocol and their implications in Section 5. We summarize our work and identify important areas for future research in Section 6. In Appendix A we present a differentially private extension of the protocol that is suitable for the large query regime.

##### 1.3 Related Work

Within the academic literature, this work is related to the following four fields: misbehavior detection in cooperative intelligent transportation networks, data privacy in transportation systems, differential privacy, and secure multi-party computation. We briefly discuss how this work complements ideas from these fields.

Cooperative intelligent transportation networks (cITS) aim to provide benefits to the safety, efficiency, and adaptability of transportation networks by having individual vehicles share their information.
As with all decentralized systems, security and robustness against malicious agents is essential for practical deployment. As such, misbehavior detection in cITS has been studied extensively [6]. Misbehavior detection techniques often rely on honest agents acting as referees, and are able to detect misbehavior in the honest majority setting. Watchdog is one such protocol [7, 8] which uses peer-to-peer refereeing. The protocol uses a public key infrastructure (PKI) to assign a persisting identity to each node in the network, and derives a reputation for each node based on its historical behavior. Our objective in this work is also detection of misbehavior, but in a different setting. In our setting, while the mobility network is comprised of many agents (customers and drivers), there is a single entity (the Mobility Provider, e.g., a ridehailing service) who is responsible for the storage and analysis of trip data. As such, the concept of honest majority does not apply to our setting. Furthermore, [8] does not address the issue of data privacy; indeed, PKIs can often expose the users' identities, especially if an attacker cross-references the network traffic with other traffic records.

Privacy in intelligent transportation systems is often implemented by using non-interactive anonymization (e.g., data aggregation), cryptographic tools or differential privacy. Providing anonymity in non-interactive data analysis mechanisms is challenging [1, 2], and thus data aggregation alone is often not enough to provide privacy. From the cryptography side, to address the lack of anonymity provided by blockchains like Bitcoin and Ethereum, zero knowledge proofs [9] were deployed in blockchains like Zcash [10] to provide fully confidential transactions. In the context of transportation, zero knowledge proofs have been proposed for privacy-preserving vehicle authentication to EV charging services [11], and privacy-preserving driver authentication to customers in ridehailing applications [12]. These privacy-preserving authentication systems rely on a trusted third party to distribute and manage certificates.

Differential privacy is an interactive mechanism for data privacy which uses randomized responses to hide user-specific information [1]. For any query, the data collector provides a randomized response, where two datasets which differ in only one entry produce statistically indistinguishable outputs. Due to this randomization, there is a trade-off between the accuracy of the response and the level of privacy provided. Randomization is necessary to preserve privacy in the large query regime as demonstrated by [5], which showed that a dataset can be reconstructed from many accurate statistical measurements. The standard model of differential privacy, however, relies on a trusted data collector to apply the appropriate randomized response to queries. This is problematic in situations where the data collector is not trusted. A local model of differential privacy, where users perturb their data before sending it to the data collector, has received significant attention due to trust concerns [13]. However mobility providers often record exact details about user trips, making local differential privacy unsuitable for current mobility applications (see Remark 15). Instead, we believe cryptographic techniques can be used to address trust concerns.
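To make the randomized-response idea concrete, below is a minimal Python sketch of the Laplace mechanism, the standard way to calibrate noise to a query's sensitivity. The query, sensitivity, and ε values are illustrative choices of ours, not taken from the paper.

```python
import numpy as np

def laplace_mechanism(true_answer: float, sensitivity: float, epsilon: float) -> float:
    """Release true_answer + Laplace(sensitivity / epsilon) noise.

    With this noise scale the released value is epsilon-differentially private.
    """
    return true_answer + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)

# A counting query ("how many trips requested zone 7 at time t?") has
# sensitivity 1: adding or removing one rider changes the count by at most 1.
noisy_count = laplace_mechanism(true_answer=1342, sensitivity=1.0, epsilon=0.5)
```

Smaller ε gives stronger privacy but noisier answers, which is exactly the accuracy/privacy trade-off described above.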
There are also more general concerns about trust; downstream applications of data queries can lead to conflicts of interest and encourage strategic behavior. Secure Multi-Party Computation (MPC) is a technique whereby several players, each possessing private data, can jointly compute a function on their collective data without any player having to reveal their data to other players [14]. MPC achieves confidentiality by applying Shamir's Secret Sharing [15] to inputs and intermediate results. In its base form, MPC is secure against honest-but-curious adversaries, which follow the protocol, but may try to do additional calculations to learn the private data of other players. In general, security against active malicious adversaries, which deviate from the protocol arbitrarily, requires a trusted third party to perform verified secret sharing [16]. In verified secret sharing, the trusted third party creates initial cryptographic commitments for each player's private data. The commitments do not leak any information about the data, and allow honest players to detect misbehavior using zero knowledge proofs. MPC is a very promising tool for our problem, but a trusted third party able to eliminate strategic behavior does not yet exist in the transportation industry, therefore a key objective of this work is to develop mechanisms to defend against strategic behavior.

_In Summary_ - Our goal in this work is to develop a protocol that enables a mobility provider to share insights from its data to a municipal authority in a privacy-preserving and verifiable manner. Existing work in accountability and misbehavior detection focuses on networks with many agents and relies on honest majority. Such assumptions, however, are not realistic for interactions between a municipal authority and a few mobility providers. We thus turn our attention to differential privacy and secure multi-party computation, which provide data privacy but require honesty of participating parties. To address this, we develop mechanisms based on cryptography and aggregated roadside measurements to detect dishonest behavior.
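Before turning to the model, here is a minimal sketch of Shamir's Secret Sharing [15], the primitive underlying MPC confidentiality in the discussion above. The prime, threshold, and toy secret are our own choices; production MPC stacks use vetted, constant-time implementations.

```python
import random

PRIME = 2**127 - 1  # a Mersenne prime large enough for toy secrets

def make_shares(secret: int, n: int, t: int):
    """Split `secret` into n shares so that any t of them reconstruct it.

    The secret is the constant term of a random degree-(t-1) polynomial;
    each share is one evaluation of that polynomial.
    """
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, k, PRIME) for k, c in enumerate(coeffs)) % PRIME
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over the prime field."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = make_shares(secret=4242, n=5, t=3)
assert reconstruct(shares[:3]) == 4242  # any 3 shares suffice; 2 reveal nothing
```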
### 2 Model & Problem Description

In this section we present a model for a city's transportation network and formulate a data Privacy for Mobility Management (PMM) problem. Section 2.1 introduces a mathematical representation of a city's transportation network along with the demand and mobility providers. In Section 2.2 we formalize the notion of data privacy using secure multi-party computation, and introduce assumptions on user behavior that we will need to construct verifiable protocols. We then formally introduce the PMM problem and describe several transportation problems that can be formulated in the PMM framework.

##### 2.1 Transportation Network Model

_Transportation Network_ - Consider the transportation network of a city, which we represent as a directed graph G = (V, E, f), where vertices represent street intersections and edges represent roads. For each road e ∈ E we use an increasing differentiable convex function $f_e : \mathbb{R}_+ \to \mathbb{R}_+$ to denote the travel cost of the road (which may depend on travel time, distance, and emissions) as a function of the number of vehicles on the road. We will use n := |V| and m := |E| to denote the total number of vertices and edges in G respectively. Time is represented in discrete timesteps of size ∆t. The operation horizon is comprised of T + 1 timesteps $\mathcal{T} := \{0, \Delta t, 2\Delta t, \ldots, T\Delta t\}$.

_Mobility Provider_ - A Mobility Provider (MP) is responsible for serving the transportation demand. It does so by choosing a routing x of its vehicles within the transportation network. The routing must satisfy multi-commodity network flow constraints (see Supplementary Material SM-I and SM-II for explicit descriptions of these constraints) and the MP will choose a feasible flow that maximizes its utility function J_MP. Some examples of MPs are ridehailing companies, bus companies, train companies, and micromobility (i.e., bikes & scooters) companies.

_Transportation Demand Data_ - The MP's demand data is a list of completed trips Λ := {λ1, ..., λq}, where λi contains the following basic metadata about the ith trip: pickup location, dropoff location, request time, match time (i.e., the time at which the user is matched to a driver), pickup time, dropoff time, driver wage, trip fare, trip trajectory (i.e., the vehicle's trajectory from the time the vehicle is matched to the rider until the time the rider is dropped off at their destination), and properties of the service vehicle.

For locations i, j ∈ V and a timestep t, we use Λ(i, j, t) to denote the number of users in the data set who request transit from location i to location j at time t.

**Remark 1 (Multiple Mobility Providers).** We can consider settings where there are multiple mobility providers, MP1, MP2, ..., MPℓ, where Λj is the demand data of MPj. The demand data set for the whole city is thus $\Lambda = \cup_{j=1}^{\ell} \Lambda_j$.

_Ridehailing Periods_ - For MPs that operate ridehailing services, a ridehailing vehicle's trajectory is often divided into three different periods (with Period 0 often ignored):

- Period 0: The vehicle is not online with a platform. The driver may be using the vehicle personally.
- Period 1: The vehicle is vacant and has not yet been assigned to a rider.
- Period 2: The vehicle is vacant, but it has been assigned to a rider, and is en route to pickup.
- Period 3: The vehicle is driving a rider from its pickup location to its dropoff location.
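As a purely illustrative view of this data model, the sketch below represents trips with a small Python dataclass and computes the counts Λ(i, j, t). The field names and toy trips are our own choices; the paper's trip records carry the richer metadata listed above.

```python
from dataclasses import dataclass
from collections import Counter

@dataclass(frozen=True)
class Trip:
    """A slimmed-down trip record λ_i (subset of the metadata fields)."""
    pickup: int        # vertex id i
    dropoff: int       # vertex id j
    request_time: int  # timestep index t
    fare: float

def demand_counts(trips):
    """Map (i, j, t) -> Λ(i, j, t): number of requests from i to j at time t."""
    return Counter((tr.pickup, tr.dropoff, tr.request_time) for tr in trips)

trips = [Trip(1, 4, 0, 12.5), Trip(1, 4, 0, 9.0), Trip(2, 3, 1, 7.25)]
Lambda = demand_counts(trips)
assert Lambda[(1, 4, 0)] == 2
```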
##### 2.2 Objective: Privacy for Mobility Management (PMM)

In the data Privacy for Mobility Management (PMM) problem, a Municipal Authority (MA) wants to compute a function g(Λ) on the travel demand, where g(Λ) is some property of Λ that can inform MA on how to improve public policies. There are two main obstacles to address: privacy and verifiability. Privacy issues arise since trip information may contain sensitive customer information as well as trade secrets of Mobility Providers (MP). For this reason MPs may be reluctant to contribute their data for MA's computation of g(Λ). This motivates the following notion of privacy:

**Definition 1 (Privacy in Multi-Party Computation).** Suppose MP1, ..., MPℓ serve the demands Λ1, ..., Λℓ respectively, and we denote $\Lambda = \cup_{i=1}^{\ell} \Lambda_i$. We say a protocol for computing g(Λ) between a MA and several MPs is privacy preserving if

1. MA learns nothing about Λ beyond the value of g(Λ).
2. For any pair i ≠ j, MPi learns nothing about Λj beyond the value of g(Λ).

Verifiability issues arise if there is incentive misalignment between the players. In particular, if the MA or a MP can increase their utility by deviating from the protocol, then the computation of g(Λ) may be inaccurate. To address this issue, we need the protocol to be verifiable, as described by Definition 2. The following assumption is necessary to ensure accurate reporting of demand (see Supplementary Material SM-V for more details):

**Assumption 1 (Strategic Behavior).** We assume in this work that drivers and customers of the transportation network will behave honestly (by this we mean they will always follow the protocol), but MA and MPs may act strategically to maximize their own utility functions.

**Definition 2 (Verifiable Protocol).** A protocol for computing g(Λ) is verifiable under Assumption 1 if:

1. Any deviation from the protocol by the MA can be detected by the MPs provided that all riders and drivers act honestly (i.e., follow the protocol).
2. Any deviation from the protocol by an MP can be detected by the MA provided that all riders and drivers act honestly.

Our objective in this paper is to present a PMM protocol, which is defined below.

**Definition 3 (PMM Protocol).** A PMM protocol between a MA and MP1, ..., MPℓ can, given any function g, compute g(Λ) for MA while ensuring privacy and verifiability as described by Definitions 1 and 2 respectively.

**Remark 2 (Admissible Queries and Differential Privacy).** While a PMM protocol hides all information about Λ beyond the value of g(Λ), g(Λ) itself may contain sensitive information about Λ. The extreme case would be if g is the identity function, i.e., g(Λ) = Λ. In such a case, the MPs should reject the request to protect the privacy of its customers. More generally, MPs should reject functions g if g(Λ) is highly correlated with sensitive information in Λ. The precise details as to which functions g are deemed acceptable queries must be decided upon beforehand by MA and the MPs together.

Differential privacy mechanisms provide a principled way to address the sensitivity of g by having MPs include noise in the computation of g(Λ). If the noise distribution is chosen according to both the desired privacy level and the sensitivity of g to its inputs, then the output is differentially private. Note that this privacy is not for free; the noise reduces the accuracy of the output. The precise choice of noise distribution is important for both the privacy and accuracy of this method, so ensuring that the randomization step is conducted properly in the face of strategic MAs and MPs is essential. This can be done with a combination of coinflipping protocols and secure multi-party computation, which we describe in Appendix A.

**Remark 3 (A note on computational complexity).** The applications we consider in this work do not impose strict requirements on computation times of protocols. Regulation checks can be conducted daily or weekly, and infrastructure improvement initiatives are seldom more frequent than one per week. The low frequency of such queries gives plenty of time to compute a solution. For this reason, we do not expect the computational complexity of the solution to be an issue.

We now present some important social decision making problems that can be formulated within the PMM framework.

**2.2.1 Regulation Compliance for Mobility Providers**

Suppose MA wants to check whether a MP is operating within a set of regulations ρ1, ..., ρk. The metadata contained within each trip includes request time, match time, pickup time, dropoff time, and trip trajectory, which can be used to check regulation compliance.
If we define the function ρi(Λ) to be 1 if and only if regulation i is satisfied, and 0 otherwise, then regulation compliance can be determined from the function $g(\Lambda) := \sum_{t=1}^{k} \rho_t(\Lambda)$. Below are some examples of regulations that can be enforced using trip metadata.

**Example 1 (Waiting Time Equity).** MP is not discriminating against certain requests due to the pickup or dropoff locations. Specifically, the difference in average waiting time among different regions should not exceed a specified regulatory threshold.

**Example 2 (Congestion Contribution Limit).** The contribution of MP vehicles (in Period 2 or 3) to congestion should not exceed a specified regulatory threshold.

**Example 3 (Accurate Reporting of Period 2 Miles).** A ridehailing driver's pay per mile/minute depends on which period they are in. In particular, the earning rate for period 2 is often greater than that of period 1. For this reason, mobility providers are incentivized to report period 2 activity as period 1 activity. To protect ridehailing drivers, accurate reporting of period 2 activity should be enforced.

**Example 4 (Emissions Limit).** The collective emission rate of MP vehicles in Periods 2 and 3 should not exceed a specified regulatory threshold. MP emissions can be computed from the metadata of served trips, in particular the trajectory and vehicle make and model.

See Supplementary Material SM-IV for further details on formulating the above examples within the PMM framework.
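Example 1 translates directly into an indicator ρ. A minimal sketch, assuming waiting times have already been grouped by region; the regions, numbers, and threshold are hypothetical and chosen only to make the snippet runnable.

```python
import statistics

def waiting_time_equity(waits_by_region, threshold_minutes: float) -> int:
    """ρ_equity(Λ): 1 iff the largest gap between regional average waiting
    times is within the regulatory threshold (Example 1), else 0."""
    averages = [statistics.mean(w) for w in waits_by_region.values()]
    return int(max(averages) - min(averages) <= threshold_minutes)

waits = {"downtown": [3.0, 4.5, 2.5], "suburbs": [6.0, 7.5]}
compliant = waiting_time_equity(waits, threshold_minutes=5.0)  # gap ≈ 3.4 -> 1
```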
**2.2.2 Transportation Infrastructure Development Projects**

_Transportation Infrastructure Improvement Projects_ - A Municipal Authority (MA) measures the efficiency of the current transportation network via a concave social welfare function J_MA(x). The MA wants to make improvements to the network G through infrastructure improvement projects. Below are some examples of such projects.

**Example 5 (Building new roads).** The MA builds new roads E_new so the set of roads is now E ∪ E_new, i.e., G now has more edges.

**Example 6 (Building train tracks).** The MA builds new train routes. Train routes differ from roads in that the travel time is independent of the number of passengers, i.e., there is no congestion effect.

**Example 7 (Adding lanes to existing roads).** The MA adds more lanes to some roads E′ ⊂ E. As a consequence, the shape of f_e will change for each e ∈ E′.

**Example 8 (Adjusting speed limits).** Similar to adding more lanes, adjusting the speed limit of a road will change its delay function.

_Evaluation of Projects_ - We measure the utility of a project using a Social Optimization Problem (SOP). An infrastructure improvement project θ makes changes to the transit network, so let G_θ denote the transit network obtained by implementing θ. The routing problem ROUTE(θ, Λ) associated with θ is the optimal way to serve requests in G_θ as measured by MP's objective function J_MP. Letting S_{θ,Λ} be the set of flows satisfying multi-commodity network flow constraints (see Supplementary Material SM-I and SM-II for time-varying and steady state formulations respectively) for the graph G_θ and demand Λ, ROUTE(θ, Λ) is given by

$$\max_{x} \ J_{MP}(x) \quad \text{s.t.} \quad x \in S_{\theta,\Lambda}. \tag{ROUTE($\theta,\Lambda$)}$$

**Definition 4 (The Infrastructure Development Selection Problem).** Suppose there are k infrastructure improvement projects Θ := {θ1, θ2, ..., θk} available, but the city only has the budget for one project. The city will want to implement the project that yields the most utility, which is determined by the following optimization problem:

$$\operatorname*{argmax}_{1 \le i \le k} \ J_{MA}\!\left( \operatorname*{argmax}_{x \in S_{\theta_i,\Lambda}} J_{MP}(x) \right). \tag{SOP($\Theta,\Lambda$)}$$

In the context of PMM, the function g associated with the infrastructure development selection problem is g(Λ) := SOP(Θ, Λ).
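Definition 4 is an outer argmax over projects wrapped around an inner routing problem. A minimal sketch of that outer loop, with toy stand-ins for solving ROUTE(θ, Λ) and for J_MA (both hypothetical, chosen only to make the snippet run):

```python
# Each "project" maps to the MP-optimal flow it induces; in practice this
# would come from actually solving ROUTE(theta, Lambda).
mp_optimal_flow = {"new_road": [4.0, 6.0], "more_lanes": [5.0, 5.0]}

def route_solver(theta):          # stand-in for argmax_{x in S} J_MP(x)
    return mp_optimal_flow[theta]

def J_MA(flow):                   # toy concave welfare: penalize imbalance
    return -abs(flow[0] - flow[1])

best_project = max(mp_optimal_flow, key=lambda t: J_MA(route_solver(t)))
assert best_project == "more_lanes"
```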
This commitment will enable MA to certify that the result given to it by MP is computed using the true demand Λ. The commitment is confidential, meaning it reveals nothing about Λ, and is binding, meaning that it will be inconsistent with any other demand Λ[′] = Λ. Now suppose MP _̸_ computes a message z = g(Λ). To convince MA that the calculation is correct, MP will construct a witness w := (Λ, r). When MA receives the message z and witness w, it will compute C(σ, z, w), where C is an evaluation algorithm. C(σ, z, w) evaluates to True if 1. Rider Witness and Aggregated Roadside Audit checks are satisfied. (σ was reported honestly) 2. MCommit(Λ, r) = σ. (Λ is the demand that was used to compute σ). ----- Cryptographic Data Privacy for Mobility Management Tsao, Yang, Zoepf and Pavone 3. g(Λ) = z (g was evaluated properly.) If any of these conditions are not met, C(σ, z, w) will evaluate to False. Finally, MA will accept the message z only if C(σ, z, w) = True. The approach presented in the previous paragraph is not privacy-preserving because the witness _w being sent from MP to MA includes the demand Λ. Fortunately, we can use zero knowledge proofs_ to obtain privacy. Given an arithmetic circuit C (which in our case is the evaluation algorithm C), it is possible for one entity (the prover) to convince another entity (the verifier) that it knows an input z, w so that C(σ, z, w) = True without revealing what w is. This is done by constructing a zero knowledge proof π from (z, w) and sending (z, π) to the verifier instead of sending (z, w). MA can then check whether π is a valid proof for z. The proof π is zero knowledge in the sense that it is computationally intractable to deduce anything about w from π, aside from the fact _C(σ, z, w) = True._ For our application, the prover will be MP who is trying to convince the verifier, which is MA, that it computed g(Λ) correctly. This protocol requires MP to send a commitment of the true demand data to MA. This is problematic if MP has incentive to be dishonest, i.e., provide a commitment corresponding to a different dataset. To ensure this does not happen, our protocol uses a Rider Witness incentive to prevent MP from underreporting demand, and Aggregated Roadside Audits to prevent MP from overreporting demand. These two mechanisms establish the verifiability of the protocol, since, as seen in first requirement of C, MA will reject the message if either of these mechanisms detect dishonesty. _In Summary - We present a verifiable interactive protocol. First, MP sends a commitment_ of the demand to MA, which ensures that the report is computed using the true demand. The correctness of this commitment is enforced by Rider Witness and Aggregated Roadside Audits. MA then announces the function g that it wants to evaluate. MP computes a message z _g(Λ)_ _←_ and constructs a witness w to the correctness of z. Since w in general contains sensitive information, it cannot be used directly to convince MA to accept the message z. MP computes a zero knowledge proof π of the correctness of z from w, and sends the message z and proof π to MA. MA accepts z if π is a valid zero knowledge proof for z. _Implementation - To implement our protocol we will use several tools from cryptography. The_ commitment σ is implemented as a Merkle commitment. For computing zero knowledge proofs, we will need a zk-SNARK that doesn’t require a trusted setup. PLONK [18], Sonic [19], and Marlin [20] using a DARK based polynomial commitment schemes described in [21, 22]. 
### 3 A high level description of the protocol

We focus our discussion on the case where there is one MP. The protocol we will present can be generalized to the multiple MP setting through secure Multi-party Computation [14].

The simplest way for MA to obtain g(Λ) is via a non-interactive protocol where MP sends Λ to MA. MA could then compute g(Λ) and any other attributes of Λ that it wants to know. This simple procedure, however, does not satisfy data privacy, since MA now has full access to the demand Λ. To address this concern, one could use an interactive protocol where MA sends a description of the function g to MP, and MP then computes g(Λ) and sends it to MA. This protocol does not require MP to share the demand Λ. The problem with this approach is that there is no way for MA to check whether MP computed g(Λ) properly, i.e., this approach is not verifiable. This is problematic if there is an incentive for MP to act strategically, e.g., if MP wants to maximize its own revenue rather than social utility. In this paper we present a verifiable interactive protocol, which allows MA to check whether or not the message it receives from MP is in fact g(Λ). This will result in a protocol where MA is able to obtain g(Λ) without requiring MP to reveal any information about Λ beyond the value of g(Λ).

First, we describe a non-confidential way to compute g(Λ). We will discuss how to make it confidential in the next paragraph. MP will send a commitment σ = MCommit(Λ, r) of Λ to MA. This commitment will enable MA to certify that the result given to it by MP is computed using the true demand Λ. The commitment is confidential, meaning it reveals nothing about Λ, and is binding, meaning that it will be inconsistent with any other demand Λ′ ≠ Λ. Now suppose MP computes a message z = g(Λ). To convince MA that the calculation is correct, MP will construct a witness w := (Λ, r). When MA receives the message z and witness w, it will compute C(σ, z, w), where C is an evaluation algorithm. C(σ, z, w) evaluates to True if

1. Rider Witness and Aggregated Roadside Audit checks are satisfied (σ was reported honestly),
2. MCommit(Λ, r) = σ (Λ is the demand that was used to compute σ),
3. g(Λ) = z (g was evaluated properly).

If any of these conditions are not met, C(σ, z, w) will evaluate to False. Finally, MA will accept the message z only if C(σ, z, w) = True.

The approach presented in the previous paragraph is not privacy-preserving because the witness w being sent from MP to MA includes the demand Λ. Fortunately, we can use zero knowledge proofs to obtain privacy. Given an arithmetic circuit C (which in our case is the evaluation algorithm C), it is possible for one entity (the prover) to convince another entity (the verifier) that it knows an input z, w so that C(σ, z, w) = True without revealing what w is. This is done by constructing a zero knowledge proof π from (z, w) and sending (z, π) to the verifier instead of sending (z, w). MA can then check whether π is a valid proof for z. The proof π is zero knowledge in the sense that it is computationally intractable to deduce anything about w from π, aside from the fact that C(σ, z, w) = True.

For our application, the prover will be MP, who is trying to convince the verifier, which is MA, that it computed g(Λ) correctly. This protocol requires MP to send a commitment of the true demand data to MA. This is problematic if MP has incentive to be dishonest, i.e., provide a commitment corresponding to a different dataset. To ensure this does not happen, our protocol uses a Rider Witness incentive to prevent MP from underreporting demand, and Aggregated Roadside Audits to prevent MP from overreporting demand. These two mechanisms establish the verifiability of the protocol, since, as seen in the first requirement of C, MA will reject the message if either of these mechanisms detects dishonesty.

_In Summary_ - We present a verifiable interactive protocol. First, MP sends a commitment of the demand to MA, which ensures that the report is computed using the true demand. The correctness of this commitment is enforced by Rider Witness and Aggregated Roadside Audits. MA then announces the function g that it wants to evaluate. MP computes a message z ← g(Λ) and constructs a witness w to the correctness of z. Since w in general contains sensitive information, it cannot be used directly to convince MA to accept the message z. MP computes a zero knowledge proof π of the correctness of z from w, and sends the message z and proof π to MA. MA accepts z if π is a valid zero knowledge proof for z.

_Implementation_ - To implement our protocol we will use several tools from cryptography. The commitment σ is implemented as a Merkle commitment. For computing zero knowledge proofs, we will need a zk-SNARK that does not require a trusted setup; suitable options include PLONK [18], Sonic [19], and Marlin [20] using the DARK-based polynomial commitment schemes described in [21, 22], as well as Bulletproofs [23] and Spartan [24]. The cryptographic tools used in the protocol are reviewed in Supplementary Material SM-III.

### 4 The Protocol

In this section we present our protocol for the PMM problem described in Section 2.2. For clarity and simplicity of exposition we will focus on the case where there is one Mobility Provider. The single MP case can be extended to the multiple MP case via secure multi-party computation [14]. We present the protocol, which is illustrated in Figure 2, in Section 4.1. In Section 4.2 we discuss mechanisms used to ensure verifiability of the protocol.

The protocol uses the following cryptographic primitives: hash functions, commitment schemes, Merkle trees, public key encryption and zero knowledge proofs. Hash functions map data of arbitrary size to fixed size messages, often used to provide succinct identifiers for large datasets. Commitment schemes are a form of verifiable data sharing where a receiver can reserve data from a sender, obtain the data at a later point, and verify that the data was not changed between the reservation and reception times. A Merkle tree is a particular commitment scheme we will use. In public key encryption, every member of a communication network is endowed with a public key and a private key. The public key is like a mailbox which tells senders how to reach the member, and the secret key is the key to the mailbox, so messages can be viewed only by their intended recipients. Zero knowledge proofs, as discussed in Section 3, enable a prover to convince a verifier that it knows a solution to a mathematical puzzle without directly revealing its solution. For a more detailed description of these concepts, we refer the reader to the Supplementary Material SM-III, where we provide a self-contained introduction of the cryptographic tools used in this work.
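Before the stage-by-stage description, here is a minimal sketch of the Merkle commitment and Merkle proofs just mentioned, built on SHA-256 via Python's hashlib. The nonce length and trip encoding are illustrative choices of ours, not the paper's specification.

```python
import hashlib, os

def H(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def merkle_commit(trips):
    """Sketch of MCommit(Λ, r): hide each trip behind a fresh nonce,
    then fold the hidden leaves pairwise into a single root σ."""
    nonces = [os.urandom(16) for _ in trips]
    leaves = [H(r + t) for r, t in zip(nonces, trips)]
    level = list(leaves)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])     # duplicate last node on odd levels
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0], nonces, leaves     # (root σ, nonces r, hidden leaves)

def merkle_proof(leaves, i):
    """Sibling path certifying that leaves[i] is under the root."""
    path, level = [], list(leaves)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        path.append((level[i ^ 1], (i ^ 1) < i))  # (sibling, sibling-is-left)
        level = [H(level[j] + level[j + 1]) for j in range(0, len(level), 2)]
        i //= 2
    return path

def merkle_verify(leaf, path, root) -> bool:
    h = leaf
    for sibling, sibling_is_left in path:
        h = H(sibling + h) if sibling_is_left else H(h + sibling)
    return h == root

trips = [b"trip A", b"trip B", b"trip C"]
sigma, nonces, leaves = merkle_commit(trips)
assert merkle_verify(leaves[1], merkle_proof(leaves, 1), sigma)
```

The root reveals nothing about the trips (each leaf is masked by a nonce), yet any single trip's inclusion can later be proven with a logarithmic-size path, which is exactly the property the Rider Witness Test in Section 4.2.1 relies on.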
S is a set of public parameters that describes the circuit C, P is a prover function which MP will use to construct a proof, and V is a verification function which MA will use to verify the correctness of MP's proof. It sends C, (S, V, P), g to MP.

Figure 2: A block diagram of the communication between MA and MP.

_Stage 4: (Function Evaluation)_ If the request g is not a privacy-invasive function (see Remark 2), MP will compute a message z = g(Λ) and construct a witness w := (Λ, r, c_w) to the correctness of z.

_Stage 5: (Creating a Zero Knowledge Proof)_ MP uses the zk-SNARK's prover function P to construct a proof π := P(σ, z, w) that certifies the calculation of z. MP sends z, π to MA.

_Stage 6: (zk-SNARK Verification)_ MA uses the zk-SNARK's verification function V(σ, z, π) to check whether MP is giving a properly computed message. If this is the case, MA accepts the message z.

**Remark 4 (Computational Gains via Commit-then-Prove).** Steps 2) and 3) of the evaluation circuit C involve different types of computation. This heterogeneity can introduce computational overhead in the zk-SNARK. Commit-and-Prove zk-SNARKs [25, 26] are designed to handle computational heterogeneities; however, existing implementations require a trusted setup.

**Remark 5 (Verifying solutions to convex optimization problems).** If g(Λ_w) is the solution to a convex optimization problem parameterized by Λ_w (e.g., g(Λ_w) = SOP(Θ, Λ_w) or congestion pricing g_cp(Λ_w)), then computing g(Λ_w) within the evaluation algorithm C may cause C to be a large circuit, thus making evaluation of C computationally expensive. Fortunately, this can be avoided by leveraging the structure of convex problems. If z = g(Λ_w), we can include the optimal primal and dual variables associated with z in the optional input c_w. This way, checking the optimality of z can be done by checking that c_w satisfies the KKT conditions rather than needing to re-solve the problem.

##### 4.2 Ensuring accuracy of σ

The protocol presented in the previous section requires MP to share a commitment to the true demand Λ. However, scenarios exist where the MP may face direct or indirect incentives to misreport demand, such as per-ride fees, congestion charges, or other regulations that may constrain MP operations. In this section we present mechanisms to ensure that MP submits a commitment σ = MCommit(Λ, r) corresponding to the true demand Λ rather than a commitment σ′ = MCommit(Λ′, r) corresponding to some other demand Λ′. Specifically, we present Rider Witness and Aggregated Roadside Audits, which detect underreporting and overreporting of demand respectively.

The Rider Witness mechanism described in Section 4.2.1 prevents MP from omitting real trips from its commitment. Under the Rider Witness mechanism, each rider is given a receipt for their trip signed by MP. By signing a receipt, the trip is recognized as genuine by MP. Since σ′ = MCommit(Λ′, r) is a Merkle commitment, for each λ′ ∈ Λ′, MP can provide a proof that λ′ is included in the calculation of σ′. Conversely, if λ ∉ Λ′, MP is unable to forge a valid proof to claim that λ is included in the calculation of σ′. Therefore if there exists a genuine trip λ ∈ Λ that is not included in Λ′, then that rider can report its receipt to MA.
MP cannot provide a proof that λ was included, and since the receipt of λ is signed by MP, this is evidence that MP omitted a genuine trip from σ′. If this happens, MP is fined, and the reporting rider is rewarded.

The Aggregated Roadside Audit mechanism described in Section 4.2.2 prevents MP from adding fictitious trips into its commitment. Due to Rider Witness, MP will not omit genuine trips, so σ′ = MCommit(Λ′, r) where Λ ⊆ Λ′. Recall that the trip metadata includes the trajectory. If Λ′ contains fictitious trips, then the road usage reported by Λ′ will be greater than what happens in reality. Thus if MA measures the number of passenger carrying vehicles that traverse each road, then it will be able to detect if MP has included fictitious trips. However, auditing every road can lead to privacy violations. Therefore, the audits are aggregated so that MA obtains the total volume of passenger carrying traffic in the entire network, but not the per-road traffic information.

**4.2.1 Rider Witness: Detecting underreported demand**

In this section, we present a Rider Witness mechanism to detect omission or tampering of the demand Λ. Concretely, if a MP sends to MA a Merkle commitment σ′ = MCommit(Λ′, r) which underreports demand, i.e., Λ \ Λ′ is non-empty, then Rider Witness will enable MA to detect this. MA can impose fines or other penalties when such detection occurs to deter MP from underreporting the demand.

_Rider Witness Incentive Mechanism_ - At the beginning of Stage 0 (Data Collection) of the protocol, MP constructs a public key and private key pair (pk_mp, sk_mp) to use for digital signatures. The payment process is as follows: when the ith customer is delivered to their destination, the customer sends a random nonce r_i to MP. MP responds with a receipt (H(r_i||λ_i), σ_i), where σ_i := sign(sk_mp, H(r_i||λ_i)) is a digital signature certifying that MP recognizes λ_i as an official ride (here || represents concatenation of binary strings). Here H is SHA256, so that H(r_i||λ_i) is a cryptographic commitment to the trip λ_i. The customer is required to pay the trip fare only if verify(pk_mp, H(r_i||λ_i), σ_i) = True, i.e., they received a valid receipt.

**Definition 5 (Rider Witness Test).** Given a commitment σ′ reported by MP to MA, each rider who was served by MP requests a Merkle proof that their ride is included in the computation of σ′. If there exists a valid¹ ride receipt (H(r_i||λ_i), σ_i) for which MP cannot provide a Merkle proof, then the customer associated with λ_i will report (H(r_i||λ_i), σ_i) to MA. MA checks if σ_i is a valid signature for H(r_i||λ_i), and if so, directly asks MP for a Merkle proof that λ_i is included in the computation of σ′. If MP is unable to provide the proof, then σ′ fails the Rider Witness Test.

**Observation 1 (Efficacy of Rider Witness).** _Under Assumption 1, if MP submits a commitment σ′ = MCommit(Λ′, r) which omits a ride, i.e., Λ \ Λ′ is non-empty, then σ′ will fail the Rider Witness Test._

_Proof of Observation 1._ If Λ ⊈ Λ′, then there exists some λ_i which is in Λ but not Λ′. Suppose Alice was the rider served by ride λ_i. Forging a proof that λ_i ∈ Λ′ requires finding a hash collision for the hash function used in the Merkle commitment. Since MCommit is implemented using a cryptographic hash function (e.g., SHA256), it is computationally intractable to find a hash collision, and thus MP will be unable to forge a valid proof that λ_i ∈ Λ′. If MP does not provide Alice a valid proof within a reasonable amount of time (e.g., several hours), Alice can then report (H(r_i||λ_i), σ_i) to MA. This reporting does not compromise Alice's privacy due to the hiding property of cryptographic hash functions. MA will check whether verify(pk_mp, H(r_i||λ_i), σ_i) = True, and if so, this means that λ_i is recognized as a genuine trip by MP. MA will directly ask MP for a Merkle proof that H(r_i||λ_i) ∈ T_Λ. Since MP cannot provide a valid proof, this is evidence that a genuine trip was omitted in the computation of σ′, and hence σ′ will fail the Rider Witness test.

¹ In the sense that verify(pk_mp, H(r_i||λ_i), σ_i) = True.
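For illustration, the following sketch shows how receipt issuance and verification could look, using SHA256 for H and Ed25519 signatures from the Python cryptography package. The helper names and the trip encoding are ours; any secure signature scheme with the properties of Supplementary Material SM-III would do.

```python
import hashlib
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# MP's signing key pair (sk_mp, pk_mp).
sk_mp = Ed25519PrivateKey.generate()
pk_mp = sk_mp.public_key()

def issue_receipt(trip_metadata: bytes, r_i: bytes):
    """MP-side: at dropoff, sign the commitment H(r_i || λ_i)."""
    digest = hashlib.sha256(r_i + trip_metadata).digest()
    return digest, sk_mp.sign(digest)        # receipt (H(r_i||λ_i), σ_i)

def receipt_is_valid(digest: bytes, sig: bytes) -> bool:
    """Rider/MA-side: pay (or accept a report) only for valid receipts."""
    try:
        pk_mp.verify(sig, digest)
        return True
    except InvalidSignature:
        return False

r_i = os.urandom(32)                         # rider-chosen nonce
digest, sig = issue_receipt(b"origin=3,dest=7,t=42,route=...", r_i)
assert receipt_is_valid(digest, sig)
```

Note that the rider never has to reveal λ_i when reporting: by the hiding property of H, the pair (H(r_i||λ_i), σ_i) is enough evidence for MA.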
**Remark 6 (Tamperproof Property).** We note that Rider Witness also prevents the MP from altering the data associated with genuine rides. If MP makes changes to λ_i ∈ Λ, resulting in some λ′_i, then by collision resistance of H, it is computationally infeasible to find r′_i so that H(r_i||λ_i) = H(r′_i||λ′_i). If such a change is made, then H(r′_i||λ′_i) is included into the computation of σ′ instead of H(r_i||λ_i). This means (H(r_i||λ_i), σ_i) becomes a valid witness that data tampering has occurred.

**Remark 7 (Receipts are Unforgeable).** Note that it is not possible for a rider to report a fake ride λ′ ∉ Λ to MA. This is because the corresponding signature σ′ cannot be forged without knowing MP's secret key sk_mp. Therefore, assuming sk_mp is only known to MP, only genuine trips can be reported.

**Remark 8 (Honesty of riders).** The Rider Witness mechanism assumes that riders are honest, i.e., they will not collude with MP by accepting invalid receipts.

**4.2.2 Aggregated Roadside Audits: Detecting overreported demand**

In this section we present an Aggregated Roadside Audit (ARA) mechanism to detect overreporting of demand. Concretely, if MP announces a commitment σ′ = MCommit(Λ′, r), where Λ′ is a strict superset of Λ (i.e., Λ′ \ Λ is non-empty), then ARA will enable MA to detect this. Thus between ARA and Rider Witness, MA can detect if MP commits to a demand that is not Λ.

_Aggregated Roadside Audits_ - Due to the Rider Witness mechanism, we can assume that MP submits a commitment σ′ computed from Λ′ satisfying Λ ⊆ Λ′, i.e., Λ′ is a superset of Λ. For an edge e ∈ E and a demand Λ, define

ϕ(e, Λ) := Σ_{λ∈Λ} 1[λ traverses e]    (3)

to be the number of trips that traversed e during passenger pickup (Period 2) or passenger delivery (Period 3). Since the trip route is provided in the trip metadata, ϕ(e, Λ) can be computed from Λ.

**Definition 6 (ARA Test).** The Aggregated Roadside Audit places a sensor on every road to conduct an audit on each road e ∈ E to measure ϕ(e, Λ). These values are then aggregated as φ := Σ_{e∈E} ϕ(e, Λ). A witness w = (Λ_w, r_w, c_w) passes the ARA test if and only if

Σ_{e∈E} ϕ(e, Λ_w) = φ.    (ARA)
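The following sketch illustrates the quantities in the ARA test, including the tolerance parameter ε introduced in Remark 9 below. The trip representation (a route given as a list of edge labels) is ours, for illustration only.

```python
from collections import Counter

def phi(edge, demand):
    # ϕ(e, Λ): number of reported trips whose route traverses e.
    return sum(1 for trip in demand if edge in trip["route"])

def ara_test(demand_witness, phi_total, eps=0.0):
    """Error-tolerant ARA test: |φ − Σ_e ϕ(e, Λ_w)| ≤ ε·φ."""
    edge_counts = Counter()
    for trip in demand_witness:
        edge_counts.update(trip["route"])
    reported = sum(edge_counts.values())     # Σ_e ϕ(e, Λ_w)
    return abs(phi_total - reported) <= eps * phi_total

# Toy example: two genuine trips; sensors measured φ = 5 traversals.
demand = [{"route": ["e12", "e23"]}, {"route": ["e56", "e63", "e31"]}]
assert ara_test(demand, phi_total=5)
# Adding a fictitious trip inflates the reported road usage.
assert not ara_test(demand + [{"route": ["e12"]}], phi_total=5)
```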
**Observation 2 (Efficacy of Aggregated Roadside Audits).** _Under Assumption 1, if MP submits a commitment σ′ = MCommit(Λ′, r) to a strict superset of the demand, i.e., Λ ⊂ Λ′, then any proof submitted by MP will either be inconsistent with σ′ or will fail the ARA test. Hence MP cannot overreport demand._

_Proof of Observation 2._ Suppose Λ′ is a strict superset of Λ, which means that there exists some λ′ ∈ Λ′ \ Λ. Then there must exist some e′ ∈ E for which ϕ(e′, Λ′) > ϕ(e′, Λ). In particular, any edge in the trip route of λ′ will satisfy this condition. With the inclusion of the ARA test, MP is unable to provide a valid witness for MA's evaluation algorithm C (and as a consequence, will be unable to produce a valid zero knowledge proof) for the following reason:

1. MCommit is a collision-resistant function (since it is built using a cryptographic hash function H), so because σ′ = MCommit(Λ′, r), it is computationally intractable for MP to find Λ′′ ≠ Λ′ and nonce values r′′ so that MCommit(Λ′′, r′′) = σ′. Therefore, in order to satisfy condition 2 of C (see Stage 3 of Section 4.1), MP's witness must choose Λ_w to be Λ′.

2. However, Λ′ will not pass the ARA test. To see this, note that (a) Λ ⊆ Λ′ implies that ϕ(e, Λ) ≤ ϕ(e, Λ′) for all e ∈ E. Furthermore, (b) there exists an edge e′ where the inequality is strict, i.e., ϕ(e′, Λ) < ϕ(e′, Λ′). From this, we see that

φ = Σ_{e∈E} ϕ(e, Λ) = ϕ(e′, Λ) + Σ_{e∈E, e≠e′} ϕ(e, Λ)
  ≤ ϕ(e′, Λ) + Σ_{e∈E, e≠e′} ϕ(e, Λ′)    (by (a))
  < ϕ(e′, Λ′) + Σ_{e∈E, e≠e′} ϕ(e, Λ′)    (by (b))
  = Σ_{e∈E} ϕ(e, Λ′),

i.e., if the witness passes condition 2 of C, then it will fail the ARA test.

Therefore the value of φ can be used to detect fictitious rides. See Figure 3 for a visualization of ARA.

Figure 3: An example of ARA. The true demand is Λ, which results in the traffic shown on the left. Here ϕ(e_ij, Λ) is the total number of trips in Λ that use the edge from i to j. Suppose MP submits a commitment to Λ′ = Λ ∪ {λ′}, i.e., inserts a fake trip λ′ into the commitment. In this example, λ′ is a fake trip from 5 to 2 that MP claims was served via the route {e56, e63, e31, e12} (shown in red on the right). λ′ increases the total traffic on the roads e56, e63, e31, e12, and as a result we have Σ_{e∈E} ϕ(e, Λ′) = φ + 4.

In the following remark, we present a variant of ARA that is robust to measurement errors.

**Remark 9 (Error Tolerance in ARA).** Trip trajectories are often recorded via GPS, so GPS errors can lead to inconsistencies between ARA sensor measurements and reported trajectories. To prevent an honest MP from failing the ARA test due to GPS errors, one can use an error tolerant version of the ARA test defined below:

|φ − Σ_{e∈E} ϕ(e, Λ_w)| ≤ εφ,

where ε ∈ [0, 1] is a tuneable tolerance parameter to account for GPS errors while still detecting non-negligible overreporting of demand.

**Remark 10 (Honesty of Drivers).** The correctness of ARA presented in Observation 2 assumes that drivers are honest when declaring their current period to ARA sensors, e.g., a driver who is in period 3 will not report themselves as period 1 or 2.

Two challenges that arise in the computation of φ are privacy and honesty, which are described below.

**Remark 11 (Privacy-Preserving computation of φ).** The naïve way to compute φ is for MA to collect the values ϕ(e, Λ) from each road. This, however, can compromise data privacy.
Indeed, if there is only 1 request in Λ, then measuring the number of customer carrying vehicles that traverse each link exposes the trip route of that request: edges that are traversed 1 time are in the route, and edges that are traversed 0 times are not. More generally, observing ϕ(e, Λ) on all roads e ∈ E exposes trip routes to or from very unpopular locations.

**Remark 12 (Honest computation of φ).** It is essential that MA acts truthfully when taking measurements and computing φ in ARA; otherwise MP would be wrongfully accused of dishonesty. Fortunately, the ARA sensors can use public key encryption to share their data with each other to compute φ in a privacy-preserving and honest way, so that MA cannot learn ϕ(e, Λ) for any e ∈ E even if it tries to eavesdrop on the communication between the sensors. After φ has been sent to MA and the protocol has finished, the data on the sensors should be erased. We describe this process in Section 4.2.3.

**4.2.3 Implementation details for ARA**

In this section we describe the implementation details of ARA to ensure that the computation of φ is both privacy-preserving and accurate.

_ARA Sensors_ - To implement ARA, MA designs a sensor to detect MP vehicles. Concretely, the sensor records the current period of all MP vehicles that pass by. For communication, the sensor will generate a random public and private key pair, and share its public key with the other sensors. The sensor should have hardware that enables it to encrypt and decrypt messages it sends and receives, respectively. To ensure honest auditing by MA, these sensors are inspected by MP to ensure that they detect MP vehicles properly, that key generation, encryption and decryption are functioning properly, and that there are no other functionalities. Once the sensors have passed the inspection, the following storage and communication restrictions are placed on them:

1. The device can only transmit data if it receives permission from both MP and MA.
2. The device can only transmit to addresses (i.e., public keys) that are on its sender whitelist. The sender whitelist is managed by both MA and MP, i.e., an address can only be added with the permission of MA and MP.
3. The device can only receive data from addresses that are on its receiver whitelist. The receiver whitelist is managed by both MA and MP.
4. The device's storage can be remotely erased with permission from both MP and MA.

Figure 4: An ARA sensor records the vehicle ID number, vehicle period and current timestamp of each ridehailing vehicle that traverses the road. The dashed line around the sensor represents a communication restriction: the sensor data can only be accessed with the consent of both parties.

_Deployment_ - To conduct ARA, a sensor is placed on every road, and will record the timestamp and period information of MP vehicles that pass by during the operation period. During operation, both the sender and receiver whitelists should be empty. As a consequence, MA cannot retrieve the sensor data. After the operation period ends and MP has sent a commitment σ to MA, MP and MA conduct a coin flipping protocol to choose one sensor at random to elect as a leader. (The leader is elected randomly for the sake of robustness: if the leader were the same every time, then the system would be unable to function if this sensor malfunctioned or were compromised in any way.)
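This coin flip can be implemented with a commit-reveal exchange in the style of Blum [28]. Below is a minimal sketch, assuming SHA256 commitments and an illustrative sensor count; the XOR of the two contributions is uniform as long as at least one party sampled honestly.

```python
import hashlib
import os
import secrets

NUM_SENSORS = 128            # illustrative network size

def commit(value: int, nonce: bytes) -> bytes:
    return hashlib.sha256(nonce + value.to_bytes(8, "big")).digest()

# 1. MP commits to its randomness and publishes only the commitment.
mp_val, mp_nonce = secrets.randbits(32), os.urandom(32)
c_mp = commit(mp_val, mp_nonce)
# 2. MA, having seen only c_mp, reveals its randomness in the clear.
ma_val = secrets.randbits(32)
# 3. MP opens its commitment; MA checks the opening before accepting.
assert commit(mp_val, mp_nonce) == c_mp
# Neither party could bias the outcome: MP was bound before seeing
# ma_val, and MA saw only a hiding commitment when choosing ma_val.
leader = (mp_val ^ ma_val) % NUM_SENSORS
```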
A coin flipping protocol is a procedure whereby several parties can jointly generate unbiased random bits. The leader sensor's public key is added to the whitelist of all other sensors, and all sensors are added to the leader's receiving whitelist. Each sensor then encrypts and sends its data under the leader sensor's public key. Since the MA does not know the leader sensor's secret key, it cannot decrypt the data even if it intercepts the ciphertexts. The addresses of MA and MP are then added to the leader's sender whitelist. The leader sensor decrypts the data, computes φ and reports the result to both MA and MP. Once the protocol is over, the sender and receiver whitelists of all sensors are cleared, and MA and MP both give permission for the sensors to delete their data. Figure 4 illustrates the sensor setup for ARA.

### 5 Discussion

The protocol requires minimal computational resources from the MA. Indeed, the computation of g(Λ), and all data analysis therein, is conducted by the MPs. The MA only needs to construct an evaluation circuit C and zk-SNARK (S, V, P) for each of its queries g. In terms of data storage, the MA only needs to store the commitments σ to the demand and the total recorded volume of MP traffic φ for each data collecting period. If the Merkle Trees are built using the SHA256 hash function, then σ is only 256 bits, and is thus easy to store. φ is a single integer, which is also easy to store.

On the other hand, the hardware requirements for the Aggregated Roadside Audits may be difficult for cities to implement, as placing a sensor on every road in the city will be expensive. To address this concern, we present an alternative mechanism known as Randomized Roadside Audits (RRA) in Supplementary Material SM-VI. RRA is able to use fewer sensors by randomly sampling the roads to be audited; as a tradeoff for using fewer sensors, however, overreported demand will only be detected probabilistically. See Supplementary Material SM-VI for more details.

There is a trade-off between privacy and diagnosis when using zero knowledge proofs. In the event that the zk-SNARK's verification function fails, i.e., V(σ, z, π) = False, we know that z is not a valid message, but we do not know why it is invalid. Specifically, V(σ, z, π) does not specify which step of the evaluation algorithm C failed (see Stage 3 of Section 4.1). Thus, in order to determine whether the failure was due to integrity checks, inconsistency between Λ and σ, or a mistake in the computation of g, further investigation would be required. So while the zero knowledge proof enables us to check the correctness of z without directly inspecting the data, it does not provide any diagnosis in the event that z is invalid.

Multi-party computation is a natural way to generalize the proposed protocol to the multiple MP setting. In such a case, the demand Λ = ∪_{i=1}^{k} Λ_i is the disjoint union of Λ_1, ..., Λ_k, where Λ_i is the demand served by the ith MP, and is hence the private data of the ith MP. Multi-party computation is a procedure by which several players can compute a function over their combined data without any player learning the private data of other players. In the context of PMM with multiple MPs, the MPs are the players and their private data is the Λ_i's.
In stage 0, each MP would send to MA a commitment to its demand data, and the computation of z and π in stages 4 and 5 would be done using secure multi-party computation. Verifiability is established using Rider Witness and ARA, as is done in the single MP case. See [27] and multiparty.org for an open-source implementation of multi-party computation.

### 6 Conclusion

In this paper we presented an interactive protocol that enables a Municipal Authority to obtain insights from the data of Mobility Providers in a verifiable and privacy-preserving way. During the protocol, a Municipal Authority submits queries and a Mobility Provider computes responses based on its mobility data. The protocol is privacy-preserving in the sense that the Municipal Authority learns nothing about the dataset beyond the answer to its query. The protocol is verifiable in the sense that any deviation from the protocol's instructions by one party can be detected by the other. Verifiability is achieved by using cryptographic commitments and aggregated roadside measurements, and data privacy is achieved using zero knowledge proofs. We showed that the protocol can be generalized to a setting with multiple Mobility Providers using secure multi-party computation. We present a differentially private version of the protocol in Appendix A to address situations where the Municipal Authority has many queries.

There are several interesting and important directions for future work. First, while this work accounts for strategic behavior of the Municipal Authority and Mobility Providers, it assumes that drivers and customers will act honestly. A more general model which also accounts for potential strategic behavior of drivers and customers would be of great value and interest. Second, while secure multi-party computation can be used to generalize the protocol to settings with multiple Mobility Providers, generic tools for secure multi-party computation introduce computational and communication overhead. Developing specialized multi-party computation tools for mobility related queries is thus of significant practical interest. Finally, we suspect there are other applications for this protocol in transportation research beyond city planning and regulation enforcement that could be investigated.

### References

[1] C. Dwork, F. McSherry, K. Nissim, and A. D. Smith, "Calibrating noise to sensitivity in private data analysis," in Theory of Cryptography Conference, vol. 3876, pp. 265–284, Springer, 2006.

[2] PCAST, "Big data and privacy: A technological perspective," 2014.

[3] M. Isaac, "How Uber deceives the authorities worldwide," New York Times, Mar 2017.

[4] S. Szymkowski, "Google removes app that calculated if Uber drivers were underpaid," Roadshow, Feb 2021.

[5] I. Dinur and K. Nissim, "Revealing information while preserving privacy," in ACM Symposium on Principles of Database Systems, pp. 202–210, ACM, 2003.

[6] R. W. van der Heijden, S. Dietzel, T. Leinmüller, and F. Kargl, "Survey on misbehavior detection in cooperative intelligent transportation systems," IEEE Commun. Surv. Tutorials, vol. 21, no. 1, pp. 779–811, 2019.

[7] S. Marti, T. J. Giuli, K. Lai, and M. Baker, "Mitigating routing misbehavior in mobile ad hoc networks," in Proceedings of the 6th Annual International Conference on Mobile Computing and Networking, 2000.
[8] J. Hortelano, J. C. Ruiz, and P. Manzoni, "Evaluating the usefulness of watchdogs for intrusion detection in VANETs," in 2010 IEEE International Conference on Communications Workshops, pp. 1–5, 2010.

[9] S. Goldwasser, S. Micali, and C. Rackoff, "The knowledge complexity of interactive proof systems," SIAM J. Comput., vol. 18, no. 1, pp. 186–208, 1989.

[10] E. B. Sasson, A. Chiesa, C. Garman, M. Green, I. Miers, E. Tromer, and M. Virza, "Zerocash: Decentralized anonymous payments from bitcoin," in 2014 IEEE Symposium on Security and Privacy, pp. 459–474, 2014.

[11] D. Gabay, K. Akkaya, and M. Cebe, "Privacy-preserving authentication scheme for connected electric vehicles using blockchain and zero knowledge proofs," IEEE Transactions on Vehicular Technology, vol. 69, no. 6, pp. 5760–5772, 2020.

[12] W. Li, C. Meese, H. Guo, and M. Nejad, "Blockchain-enabled identity verification for safe ridesharing leveraging zero-knowledge proof," arXiv preprint arXiv:2010.14037, 2020.

[13] S. P. Kasiviswanathan, H. K. Lee, K. Nissim, S. Raskhodnikova, and A. D. Smith, "What can we learn privately?," SIAM J. Comput., vol. 40, no. 3, pp. 793–826, 2011.

[14] O. Goldreich, S. Micali, and A. Wigderson, "How to play any mental game or a completeness theorem for protocols with honest majority," in STOC 1987, New York, New York, USA (A. V. Aho, ed.), pp. 218–229, ACM, 1987.

[15] A. Shamir, "How to share a secret," Commun. ACM, vol. 22, no. 11, pp. 612–613, 1979.

[16] B. Chor, S. Goldwasser, S. Micali, and B. Awerbuch, "Verifiable secret sharing and achieving simultaneity in the presence of faults," in FOCS 1985, pp. 383–395, IEEE Computer Society, 1985.

[17] Y. Sheffi, Urban Transportation Networks: Equilibrium Analysis with Mathematical Programming Methods. Prentice-Hall, Englewood Cliffs, New Jersey, 1 ed., 1985.

[18] A. Gabizon, Z. J. Williamson, and O. Ciobotaru, "PLONK: Permutations over Lagrange-bases for oecumenical noninteractive arguments of knowledge," IACR Cryptol. ePrint Arch., vol. 2019, p. 953, 2019.

[19] M. Maller, S. Bowe, M. Kohlweiss, and S. Meiklejohn, "Sonic: Zero-knowledge SNARKs from linear-size universal and updatable structured reference strings," in ACM Conference on Computer and Communications Security, CCS, pp. 2111–2128, ACM, 2019.

[20] A. Chiesa, Y. Hu, M. Maller, P. Mishra, N. Vesely, and N. P. Ward, "Marlin: Preprocessing zkSNARKs with universal and updatable SRS," in Advances in Cryptology - EUROCRYPT 2020, vol. 12105 of Lecture Notes in Computer Science, pp. 738–768, Springer, 2020.

[21] B. Bünz, B. Fisch, and A. Szepieniec, "Transparent SNARKs from DARK compilers," in Advances in Cryptology - EUROCRYPT 2020, vol. 12105 of Lecture Notes in Computer Science, pp. 677–706, Springer, 2020.

[22] A. R. Block, J. Holmgren, A. Rosen, R. D. Rothblum, and P. Soni, "Time- and space-efficient arguments from groups of unknown order," in Advances in Cryptology - CRYPTO 2021, vol. 12828 of Lecture Notes in Computer Science, pp. 123–152, Springer, 2021.

[23] B. Bünz, J. Bootle, D. Boneh, A. Poelstra, P. Wuille, and G. Maxwell, "Bulletproofs: Short proofs for confidential transactions and more," in IEEE Symposium on Security and Privacy, SP, pp. 315–334, IEEE Computer Society, 2018.

[24] S. T. V. Setty, "Spartan: Efficient and general-purpose zkSNARKs without trusted setup," in Advances in Cryptology - CRYPTO 2020, vol. 12172 of Lecture Notes in Computer Science, pp. 704–737, Springer, 2020.
[25] M. Campanelli, D. Fiore, and A. Querol, "LegoSNARK: Modular design and composition of succinct zero-knowledge proofs," in ACM Conference on Computer and Communications Security, CCS, pp. 2075–2092, ACM, 2019.

[26] M. Campanelli, A. Faonio, D. Fiore, A. Querol, and H. Rodríguez, "Lunar: A toolbox for more efficient universal and updatable zkSNARKs and commit-and-prove extensions," IACR Cryptol. ePrint Arch., p. 1069, 2020.

[27] A. Lapets, F. Jansen, K. D. Albab, R. Issa, L. Qin, M. Varia, and A. Bestavros, "Accessible privacy-preserving web-based data analysis for assessing and addressing economic inequalities," in Conference on Computing and Sustainable Societies, ACM, 2018.

[28] M. Blum, "Coin flipping by telephone - a protocol for solving impossible problems," in COMPCON'82, pp. 133–137, IEEE Computer Society, 1982.

[29] T. P. Pedersen, "Non-interactive and information-theoretic secure verifiable secret sharing," in Advances in Cryptology - CRYPTO (J. Feigenbaum, ed.), vol. 576 of Lecture Notes in Computer Science, pp. 129–140, Springer, 1991.

[30] A. Narayanan, J. Bonneau, E. W. Felten, A. Miller, and S. Goldfeder, Bitcoin and Cryptocurrency Technologies - A Comprehensive Introduction. Princeton University Press, 2016.

[31] D. Boneh and V. Shoup, A Graduate Course in Applied Cryptography. 2020.

[32] V. Buterin, "Zk rollup." Available online, 2016.

### A Incorporating Differential Privacy for the Large Query Regime

One potential concern with the protocol described in Section 4 arises in the large query regime. It was shown in [5] that a dataset can be reconstructed from many accurate statistical measurements. One way to address this is to set a limit on the number of times the MA can query the data for a given time period. Such a restriction would not lead to data scarcity, since the MP is collecting new data daily. Differential privacy offers a principled way to determine how many times MA should query a dataset (see Remark 13). Differentially private mechanisms address the result of [5] by reducing the accuracy of the responses to queries, i.e., responding to a query g with a noisy version of g(Λ). In this section we describe how the protocol from Section 4 can be generalized to facilitate verifiable and differentially private responses from MP. To this end we first define differential privacy.

**Definition 7 (Datasets and Adjacency).** A dataset Λ is a set of datapoints. In the context of transportation demand, a datapoint is the metadata corresponding to a single trip. We say two datasets Λ, Λ′ are adjacent if either (a) Λ ⊂ Λ′ with Λ′ containing exactly 1 more datapoint than Λ, or (b) Λ′ ⊂ Λ with Λ containing exactly 1 more datapoint than Λ′.

**Definition 8 (Differential Privacy).** Let F be a σ-algebra on a space Ω. A mechanism M : D → Ω is (ϵ, δ)-differentially private if for any two adjacent datasets Λ, Λ′ ∈ D and any F-measurable event S,

P(M(Λ) ∈ S) ≤ e^ϵ P(M(Λ′) ∈ S) + δ.

In words, the output of an (ϵ, δ)-differentially private mechanism on Λ is statistically indistinguishable from the output of the mechanism on Λ ∪ {λ} for any single datapoint λ ∉ Λ. Since Λ does not contain λ, M(Λ) does not reveal any information about λ.
Since M(Λ ∪ {λ}) is statistically indistinguishable from M(Λ), M(Λ ∪ {λ}) does not reveal much about λ.

**Example 9 (Laplace Mechanism for Vote Tallying).** Suppose a city is trying to decide whether to expand its railways or expand its roads based on a majority vote from its citizens. The dataset is Λ := {λ_1, ..., λ_n}, where λ_i is a boolean which is 0 if the ith citizen prefers the railway and 1 if the ith citizen prefers the roads. To implement majority vote, the city needs to compute g(Λ) := Σ_{i=1}^{n} λ_i. The Laplace Mechanism achieves (ϵ, 0)-differential privacy for this computation via

M_laplace(Λ) := Y + Σ_{i=1}^{n} λ_i,

where Y has the discrete Laplace distribution: for any k ∈ Z, P[Y = k] ∝ e^{−ϵ|k|}. To see why this achieves (ϵ, 0)-differential privacy, for any 1 ≤ j ≤ n, note that

P[M(Λ) = k] / P[M(Λ \ {λ_j}) = k] = e^{−ϵ|k − Σ_{i=1}^{n} λ_i|} / e^{−ϵ|k − Σ_{i≠j} λ_i|} ≤ e^{ϵλ_j} ≤ e^{ϵ}.

Note that the noise distribution for Y depends only on ϵ, and is independent of n, the size of the dataset.

**Remark 13 (Privacy Budget).** By composition rules, the result of k queries to an (ϵ, 0)-differentially private mechanism is (kϵ, 0)-differentially private. Thus a dataset should only be used to answer k separate (ϵ, 0)-differentially private queries if e^{kϵ} is sufficiently close to 1.

##### A.1 Goal: Differential Privacy without Trust

Given a query function g from MA, let M be a polynomial-time computable (ϵ, δ)-differentially private mechanism for computing g. For a given dataset Λ we can represent the random variable M(Λ) with a function g̃(Λ, Z), where Z ∈ {0, 1}^v represents the random bits used by M. Here v is an upper bound on the number of random bits needed for the computation of M. By its construction, g̃(Λ, Z) is (ϵ, δ)-differentially private if Z is drawn uniformly at random over {0, 1}^v. Therefore differential privacy is achieved if MP draws Z uniformly at random over {0, 1}^v and sends g̃(Λ, Z) to MA. However, as mentioned in Assumption 1, we are studying a model where MP can act strategically. Thus we cannot assume that MP will sample Z uniformly at random if there is some other distribution over Z that leads to a more favorable outcome for MP. We revisit Example 9 to illustrate this concern.

**Example 10 (Dishonest Vote Tallying).** Consider the setting from Example 9. The Laplace mechanism can be represented as

g̃(Λ, Z) := Y + Σ_{i=1}^{n} λ_i, where Y = F⁻¹_laplace(int(Z)/2^v),

where int(Z) is the integer whose binary representation is the bits of Z. Here F⁻¹_laplace is the inverse cumulative distribution for the discrete Laplace distribution. Thus F⁻¹_laplace(int(Z)/2^v) is an application of inverse transform sampling that converts a uniform random variable Z into a random variable Y with a discrete Laplace distribution. Suppose the MP has a ridehailing service and would thus prefer an upgrade to city roads over an upgrade to the railway system. If this is the case, choosing Z so that g̃(Λ, Z) > n/2 (as opposed to choosing Z randomly) is a weakly dominant strategy for MP, even if g(Λ) < n/2 and a majority of the citizens prefer railway upgrades.

Thus we need a way to verify that the randomness Z used in MP's evaluation of g̃(Λ, Z) has the correct distribution.
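To make the role of the bits Z concrete, the sketch below performs the inverse transform sampling from Example 10, i.e., Y = F⁻¹_laplace(int(Z)/2^v). The function names are ours, and the closed-form CDF inversion is one of several equivalent ways to implement F⁻¹_laplace; the point is that an honest MP feeds uniformly random bits into this deterministic map, while a dishonest MP could search for bits that push the output where it wants.

```python
import math
import secrets

def sample_discrete_laplace(z_bits: int, v: int, eps: float) -> int:
    """Map uniform bits Z in {0,1}^v to an integer Y with
    P[Y = k] proportional to exp(-eps*|k|), via inverse transform."""
    u = (z_bits + 0.5) / 2.0**v          # uniform value in (0, 1)
    a = math.exp(-eps)
    if u < a / (1 + a):                  # negative tail: F(k) = a^(-k)/(1+a)
        return math.ceil(-math.log(u * (1 + a)) / math.log(a))
    # nonnegative side: F(k) = 1 - a^(k+1)/(1+a)
    return math.ceil(math.log((1 - u) * (1 + a)) / math.log(a)) - 1

def laplace_mechanism(votes, z_bits, v, eps):
    # g̃(Λ, Z) = Σ λ_i + Y, with Y derived deterministically from Z.
    return sum(votes) + sample_discrete_laplace(z_bits, v, eps)

# Honest usage: Z drawn uniformly at random.
noisy_tally = laplace_mechanism([1, 0, 1, 1], secrets.randbits(64), 64, 0.5)
```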
We will now show how the protocol can be adjusted to accommodate this, and as a consequence, enable verifiable differentially private data queries for MA.

**Remark 14 (MA provided randomness).** One natural attempt to ensure that Z is uniformly random is to have MA specify Z. However, this destroys the differential privacy, since for some mechanisms (including the Laplace mechanism) g(Λ) can be computed from g̃(Λ, Z) and Z. Also, it is not clear a priori whether such a setup is strategyproof for MA.

##### A.2 A Differentially Private version of the protocol

In this section, we present modifications to the protocol from Section 4.1 that enable verifiable differentially private responses from MP. At a high level, the MA and MP jointly determine the random bits Z via a coin flipping protocol [28]. The zk-SNARK can then be modified to ensure that g̃(Λ, Z) is computed correctly. The protocol has a total of 6 stages, which are described below.

_Stage 0: (Data Collection)_ MP builds a Merkle Tree T_Λ of the demand Λ that it serves. It computes a commitment σ := MCommit(Λ, r) to this demand. Additionally, MP samples Z_mp uniformly at random from {0, 1}^v and computes a Pedersen commitment [29] z_mp := Commit(Z_mp, r_mp). The Pedersen commitment scheme is a secure commitment scheme which is perfectly hiding and computationally binding. MP sends both σ, z_mp to MA.

_Stage 1: (Integrity Checks)_ Same as in Section 4.1.

_Stage 2: (Message Specifications)_ MA specifies the function g it wants to compute. Additionally, MA samples Z_ma uniformly at random from {0, 1}^v and specifies a differentially private mechanism g̃ for the computation of g.

_Stage 3: (zk-SNARK Construction)_ MA constructs an evaluation circuit C for the function g̃. The public parameters of C are σ, z_mp, Z_ma, z, and the input to C is a witness of the form w = (Λ_w, r_w, c_w, Z_mp,w, r_mp,w). C does the following:

1. Checks whether the Rider Witness and Aggregated Roadside Audit tests are satisfied,
2. Checks whether MCommit(Λ_w, r_w) = σ,
3. Checks whether Commit(Z_mp,w, r_mp,w) = z_mp,
4. Checks whether g̃(Λ_w, Z_ma ⊕ Z_mp,w) = z (here ⊕ is bit-wise XOR).

C will return True if and only if all of these checks pass. MA constructs a zk-SNARK (S, V, P) for C and sends g, g̃, Z_ma, C, (S, V, P) to MP.

_Stage 4: (Function Evaluation)_ If g̃ is a differentially private mechanism for computing g, then MP computes a message z = g̃(Λ, Z_ma ⊕ Z_mp) and a witness w := (Λ, r, c_w, Z_mp, r_mp) to the correctness of z.

_Stage 5: (Creating a Zero Knowledge Proof)_ Same as in Section 4.1.

_Stage 6: (zk-SNARK Verification)_ Same as in Section 4.1.

In Supplementary Material SM-VII we show that this protocol has the following two desirable features, which enable verifiable and differentially private responses from MP to MA queries:

1. Verifiability - If the MA receives a valid proof from MP, then it can be sure that the corresponding message is indeed g̃(Λ, Z_ma ⊕ Z_mp).
2. Differential Privacy - The MP's output is differentially private with respect to the dataset Λ if at least one of Z_ma, Z_mp is sampled uniformly at random.

**Remark 15 (A note on Local Differential Privacy).** Local Differential Privacy [13] addresses the setting where the data collector is untrusted. Differential privacy is achieved by users adding noise to their data before sending it to the data collector.
This is in contrast to the setting we study here, where an untrusted data collector has the clean data of many users. We chose to study the latter model due to the way current mobility companies collect high resolution data on the trips they serve. Additionally, local differential privacy requires users to add noise to their data so that they become statistically indistinguishable from one another. In the context of transportation, this means the noisy data of users will be statistically indistinguishable from one another, even if they have very different travel preferences. This level of noise significantly reduces the accuracy of any computation done on the data.

### B Supplementary Material

##### B.1 Mobility Provider Serving Demand

For a given discretization of time T := {0, Δt, 2Δt, ..., TΔt}, the demand Λ ∈ R^{n×n×T} can be represented as a 3-dimensional matrix (i.e., a 3-Tensor) where Λ(i, j, t) represents the number of riders who request transit from i to j at time t. We use τ_ij to represent the time it takes to travel from i to j.

To serve the demand from i to j, the MP chooses passenger carrying flows x^{ij} ∈ R^{mT}_+, where x^{ij}_t(u, v) is the number of passenger carrying trips from i to j that enter the road (u, v) at time t. Such vehicles will exit the road at time t + τ_uv. There is also a rebalancing flow r ∈ R^{mT}, which represents the movement of vacant vehicles that are re-positioning themselves to better align with future demand. Concretely, r_t(u, v) is the number of vacant vehicles which enter road (u, v) at time t. The initial condition is y ∈ R^n_+, where y_i denotes the number of vehicles at location i at time 0. The Mobility Provider's routing strategy is thus x := ({x^{ij}}_{(i,j)∈V×V}, r), which satisfies the following multi-commodity network flow constraints:

Σ_{v:(u,v)∈E} [r_t(u, v) + Σ_{(i,j)∈V×V} x^{ij}_t(u, v)] = Σ_{v:(v,u)∈E} [r_{t−τ_vu}(v, u) + Σ_{(i,j)∈V×V} x^{ij}_{t−τ_vu}(v, u)] for all (u, t) ∈ V × [T],    (4)

Σ_{v:(u,v)∈E} x^{ij}_t(u, v) = Σ_{v:(v,u)∈E} x^{ij}_{t−τ_vu}(v, u) for all (i, j) ∈ V × V, t ∈ [T], u ∉ {i, j},    (5)

Σ_{τ=0}^{t} [Σ_{v:(i,v)∈E} x^{ij}_τ(i, v) − Σ_{v:(v,i)∈E} x^{ij}_{τ−τ_vi}(v, i)] ≤ Σ_{τ=0}^{t} Λ(i, j, τ) for all (i, j, t) ∈ V × V × [T],    (6)

x^{ij}_t(j, v) = 0 for all (i, j) ∈ V × V, t ∈ [T], (j, v) ∈ E,    (7)

Σ_{j:(i,j)∈E} x^{ij}_0 = y_i for all i ∈ V.    (8)

Here (4) represents conservation of vehicles, (5), (6), (7) enforce pickup and dropoff constraints according to the demand Λ, and (8) enforces initial conditions. The utility received by the Mobility Provider (e.g., total revenue) from implementing flow x for a given demand Λ is J_MP(x; Λ). An optimal routing algorithm for demand Λ is a solution to the following optimization problem:

maximize_x J_MP(x; Λ) s.t. (4), (5), (6), (7), (8).

##### B.2 Mobility Provider Serving Demand (Steady State)

In a steady state model, the demand can be represented as Λ ∈ R^{n×n}_+, a matrix where Λ(i, j) represents the rate at which riders request transit from node i to node j.
For each origin-destination pair (i, j) ∈ V × V, the MP serves the demand Λ(i, j) by choosing a passenger carrying flow x^{ij}_p ∈ R^m_+ and a rebalancing flow x_r ∈ R^m_+, so that x := ({x^{ij}_p}_{(i,j)∈V×V}, x_r) satisfies the multi-commodity network flow constraints with demand Λ:

Σ_{v:(v,u)∈E} [x_r(v, u) + Σ_{(i,j)∈V×V} x^{ij}_p(v, u)] = Σ_{v:(u,v)∈E} [x_r(u, v) + Σ_{(i,j)∈V×V} x^{ij}_p(u, v)] for all u ∈ V,    (9)

Λ(i, j)1[u=j] + Σ_{v:(u,v)∈E} x^{ij}_p(u, v) = Λ(i, j)1[u=i] + Σ_{v:(v,u)∈E} (x_r(v, u) + x^{ij}_p(v, u)) for all (i, j) ∈ V × V, u ∈ V.    (10)

Here (9) represents conservation of flow and (10) enforces pickup and dropoff constraints according to the demand Λ. The utility received by the Mobility Provider (e.g., total revenue) from implementing flow x is J_MP(x). Therefore the Mobility Provider will choose x according to the following program:

maximize_x J_MP(x) s.t. (9), (10).

##### B.3 Cryptographic Tools

In this section we introduce existing cryptographic tools that are used in the protocol. The contents of this section are discussed in greater detail in [30, 31, 9, 10, 18]. Throughout this paper, we use r||x to denote the concatenation of r and x.

**B.3.1 Cryptographic Hash Functions**

**Definition 9 (Cryptographic Hash Functions).** A function H is a d-bit cryptographic hash function if it is a mapping from binary strings of arbitrary length to {0, 1}^d and has the following properties:

1. It is deterministic.
2. It is efficient to compute.
3. H is collision resistant - For sufficiently large d, it is computationally intractable to find distinct inputs x_1, x_2 so that H(x_1) = H(x_2).
4. H is hiding - If r is a sufficiently long random string (256 bits is often sufficient), then it is computationally intractable to deduce anything about x by observing H(r||x).

Property 3 is called collision resistance and enables the hash function to be used as a digital fingerprint. Indeed, since it is unlikely that two files will have the same hash value, H(x) can serve as a unique identifier for x. We refer the interested reader to [30] for further details on cryptographic hash functions. SHA256 is a widely used collision resistant hash function which has extensive applications, including but not limited to establishing secure communication channels, computing checksums for online downloads, and computing proof-of-work in Bitcoin.

**B.3.2 Cryptographic Commitments**

Cryptographic commitment schemes are tamper-proof communication protocols between two parties: a sender and a receiver. In a commitment scheme, the sender chooses (i.e., commits to) a message. At a later time, the sender reveals the message to the receiver, and the receiver can be sure that the message it received is the same as the original message chosen by the sender.

_Intuition_ - We can think of a commitment scheme as follows: a sender places a message into a box and locks the box with a key. The sender then gives the locked box to the receiver. Once the sender has given the box away, the sender can no longer change the message inside the box. At this point, the receiver, who does not have the key, cannot open the box to read the message.
At a later time, the sender can give the key to the receiver, allowing the receiver to read the message.

A commitment scheme is specified by a message space M, a nonce space R, a commitment space X, a commitment function commit : M × R → X, and a verification function verify : M × R × X → {0, 1}. Creating a commitment to a message m ∈ M happens in two steps:

1. Commitment Step - The sender computes σ := commit(m, r) for some r ∈ R and gives σ to the receiver.
2. Reveal Step - At some later time, the sender gives m, r to the receiver, who accepts m as the original message if and only if verify(m, r, σ) := 1[commit(m,r)=σ] evaluates to 1.

A secure commitment scheme has two important properties:

1. Binding - If σ is a commitment to a value m, it is computationally intractable to find m′, r′ so that m′ ≠ m and commit(m′, r′) = σ. Hence σ binds the committer to the value m.
2. Hiding - It is computationally intractable to learn anything about m from σ.

Cryptographic hash functions can be used to build secure commitment schemes. To do this, given a cryptographic hash function H, we define commit(m, r) := H(r||m) and verify(m, r, σ) := 1[H(r||m)=σ]. The security of this commitment scheme comes from the properties of H. The binding property of this commitment scheme follows directly from collision resistance of H. Furthermore, if r is chosen uniformly at random from R, then the commitment scheme is hiding due to the hiding property of H.

**B.3.3 Merkle Trees**

A Merkle tree is a data structure that is used to create commitments to a collection of items M := {m_0, ..., m_{q−1}}. A Merkle Tree has two main features:

1. The root of the tree contains a hiding commitment to the entire collection M.
2. The root can also serve as a commitment to each item m ∈ M. Furthermore, the proof that m is a leaf of the tree reveals nothing about the other items in M and has length O(log q), where q is the total number of leaves in the tree.

A Merkle tree can be constructed from a cryptographic hash function. Concretely, given a cryptographic hash function H and a collection of items m_0, ..., m_{q−1}, construct a binary tree with these items as the leaves. The leaves of the Merkle Tree are the zeroth level h_{0,0}, h_{0,1}, ..., h_{0,q−1}, where h_{0,i} = m_i. The next level has the same number of nodes h_{1,0}, ..., h_{1,q−1}, defined by h_{1,i} = H(r_i||m_i), where r_i is a random nonce. Level k, where k ≥ 2, has half as many nodes as level k−1, defined by h_{k,i} = H(h_{k−1,2i}||h_{k−1,2i+1}). Figure 5 illustrates an example of a Merkle tree. In total there are ℓ_q + 1 levels, where ℓ_q := ⌈log₂ q⌉ + 1. With this notation, h_{ℓ_q,0} is the value at the root of the Merkle Tree.

The root of a Merkle tree h_{ℓ_q,0} is a commitment to the entire collection due to collision resistance of H. To commit to the data M, the committer will generate r_0, ..., r_{q−1}, compute the Merkle Tree, and announce h_{ℓ_q,0}. In the reveal step, the committer can announce {(m_i, r_i)}_{i=0}^{q−1}, and anyone can then compute the resulting Merkle tree and confirm that the root is equal to h_{ℓ_q,0}.

**B.3.4 Merkle Proofs**

The root also serves as a commitment to each m_i ∈ M. Suppose someone who knows m_i wants a proof that m_i is a leaf in the Merkle tree. A proof π(m_i) can be constructed from the Merkle tree.
Furthermore, this proof reveals nothing about the other items {m_j}_{j≠i}. Define x_0, x_1, ..., x_{ℓ_q} recursively as:

x_0 := i,
x_j := ⌊x_{j−1}/2⌋ for 1 ≤ j ≤ ℓ_q.

With this notation, {h_{j,x_j}}_{j=0}^{ℓ_q} is the path from m_i to the root of the Merkle Tree. The Merkle proof for m_i is denoted as π(m_i) and is given by

π(m_i) := {r_i} ∪ {sibling(h_{j,x_j})}_{j=1}^{ℓ_q}, where

sibling(h_{i,j}) := h_{i,j+1} if j is even, and h_{i,j−1} if j is odd.

See Supplementary Material SM-IX for details on the binding and hiding properties of Merkle commitments, and how to verify the correctness of Merkle proofs.

**Definition 10 (Merkle Commitment).** Given a data set M = {m_1, ..., m_t} and a set of random nonce values r = {r_1, ..., r_t}, we use MCommit(M, r) to denote the root of the Merkle Tree constructed from the data M and random nonces r.

We refer the interested reader to Section 8.9 of [31] for more details on Merkle Trees.

Figure 5: An example of a Merkle tree containing 8 items. Each item m_i is a leaf node and has one parent, which is H(r_i||m_i), where r_i is a random hiding nonce. All other internal nodes are computed by applying H to the concatenation of their children.

**B.3.5 Digital Signatures**

A digital signature scheme comprises three functions: Gen, sign, verify. Gen() is a random function that produces valid public and private key pairs (pk, sk). Given a message, a signature is produced using the secret key via σ = sign(sk, m). The authenticity of the signature is checked using the public key via verify(pk, m, σ). A secure digital signature scheme has two properties:

1. Correctness - For a valid key pair (pk, sk) obtained from Gen() and any message m, we have verify(pk, m, sign(sk, m)) = True.
2. Security - Given a public key pk, if the corresponding secret key sk is unknown, then it is computationally intractable to forge a signature on any message. Specifically, if sk has never been used to sign a message m′, then without knowledge of sk, it is computationally intractable to find (m′, σ′) so that verify(pk, m′, σ′) = True.

We refer the interested reader to Section 13 of [31] for more details on digital signatures.

**B.3.6 Public Key Encryption**

A public key encryption scheme is specified by three functions: a key generation function, an encryption function E, and a decryption function D. In a public key encryption scheme, each user has a public key and private key denoted (pk, sk), produced by the key generation function. As the name suggests, the public key pk is known to everyone, while each secret key sk is known only by its owner. Encryption is done using public keys, and decryption is done using secret keys. To send a message m to Bob, one would encrypt m using Bob's public key via c = E(pk_Bob, m). Then Bob would decrypt the message via D(sk_Bob, c). A secure public key encryption scheme has two properties:

1. Correctness - For every valid key pair (pk, sk) and any message m, we have m = D(sk, E(pk, m)), i.e., the intended recipient receives the correct message upon decryption.
2. Security - For a public key pk and any message m, if the corresponding secret key sk is not known, then it is computationally intractable to deduce anything about m from the ciphertext E(pk, m).

The appeal of public key encryption is that users do not have to have a shared common key in order to send encrypted messages to one another.
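Returning to the constructions of B.3.2-B.3.4, the following sketch builds a Merkle commitment and verifies a Merkle proof. For simplicity it stores the hidden leaves H(r_i||m_i) directly as the bottom level, and duplicate-last padding for odd-sized levels is an implementation choice of ours.

```python
import hashlib
import os

H = lambda b: hashlib.sha256(b).digest()

def merkle_commit(items):
    """Build the tree: each item m_i is hidden as H(r_i||m_i); higher
    levels hash the concatenation of two children. Returns the root
    (the commitment), the nonces, and all levels for proof building."""
    nonces = [os.urandom(32) for _ in items]
    level = [H(r + m) for r, m in zip(nonces, items)]
    levels = []
    while True:
        if len(level) % 2 and len(level) > 1:
            level = level + [level[-1]]          # pad odd levels
        levels.append(level)
        if len(level) == 1:
            break
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return levels[-1][0], nonces, levels

def merkle_proof(levels, i):
    """Sibling path for leaf i; the nonce r_i is revealed separately."""
    path = []
    for level in levels[:-1]:
        path.append(level[i ^ 1])                # sibling at each level
        i //= 2
    return path

def merkle_verify(root, item, nonce, i, path):
    h = H(nonce + item)
    for sib in path:
        h = H(h + sib) if i % 2 == 0 else H(sib + h)
        i //= 2
    return h == root

items = [b"trip-%d" % k for k in range(8)]
root, nonces, levels = merkle_commit(items)
assert merkle_verify(root, items[5], nonces[5], 5, merkle_proof(levels, 5))
```

Note that the proof for item 5 reveals only hashes, so nothing about the other items leaks, in line with the hiding property discussed above.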
We refer the interested reader to Part II of [31] for more details on Public Key Encryption.

**B.3.7 Zero Knowledge Proofs**

A zero knowledge proof for a mathematical problem is a technique whereby one party (the prover) can convince another party (the verifier) that it knows a solution w to the problem without revealing any information about w other than the fact that it is a solution. Before discussing zero knowledge proofs further, we must first introduce proof systems.

**Definition 11 (Proof System).** Consider an arithmetic circuit C : X × W → {0, 1}, and the following problem: for a fixed x ∈ X, find a w ∈ W so that C(x, w) = 0. Here x is part of the problem statement, and w is a solution candidate. Consider a tuple of functions (S, V, P) where

1. S is a preprocessing function that takes as input C, x and outputs public parameters pp.
2. P is a prover function that takes as input pp, x, w and produces a proof π.
3. V is a verification function that takes as input pp, x, π and outputs either 0 or 1, corresponding to whether the proof π is invalid or valid respectively.

The tuple (S, V, P) is a proof system for C if it satisfies the following properties:

1. Completeness - If C(x, w) = 0, then V(pp, x, P(pp, x, w)) should evaluate to 1; i.e., the verifier should accept proofs constructed from valid solutions w.
2. Proof of Knowledge - If V(pp, x, π) = 1, then whoever constructed π must have known a w satisfying C(x, w) = 0.

With this definition in hand, we can now define zero knowledge proof systems.

**Definition 12 (Zero Knowledge Proof Systems).** Consider a proof system (S, V, P) for the problem of finding w so that C(x, w) = 0. (S, V, P) is a zero knowledge proof system if it is computationally intractable to learn anything about w from π := P(pp, x, w). If this is the case, then π is a zero knowledge proof.

Zero knowledge proofs were first proposed by [9], but the prover and verifier functions were not optimized to be computationally efficient. In the next section, we present zk-SNARKs, which are computationally efficient zero knowledge proof systems.

**B.3.8 zk-SNARKs**

In this section we introduce Succinct Non-interactive Arguments of Knowledge (SNARKs). SNARKs are proof systems where proofs are short, and both the construction and verification of proofs are computationally efficient.

**Definition 13 (Succinct Non-interactive Argument of Knowledge (SNARK)).** Consider the problem of finding w ∈ W so that C(x, w) = 0, where C is an arithmetic circuit with n logic gates. A proof system (S, V, P) is a SNARK if

1. The runtime of the prover P is Õ(n),
2. The length of a proof computed by P is O(log n),
3. The runtime of the verifier V is O(log n).

**Definition 14 (zk-SNARK).** If a SNARK (S, V, P) is also a zero knowledge proof system, then it is a zk-SNARK.

The Zcash cryptocurrency, which provides fully confidential transactions, was the first setting where zk-SNARKs were used in the field [10]. zk-SNARKs have also been deployed in the zk-rollup procedure, which increases the transaction throughput of the Ethereum blockchain [32]. For PMM we will need a zk-SNARK that does not require a trusted setup. Suitable choices include PLONK [18], Sonic [19], and Marlin [20] using a DARK-based polynomial commitment scheme described in [21, 22]. Other options include Bulletproofs [23] and Spartan [24].
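Before turning to the implementation examples, we give a concrete instance of the Gen/sign/verify interface from B.3.5, which underlies both the ride receipts of Section 4.2.1 and the signed match-time messages of B.4.1 below. The sketch uses Ed25519 signatures from the Python cryptography package; the message text is illustrative.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Gen() -> (pk, sk): MP generates its signing key pair.
sk = Ed25519PrivateKey.generate()
pk = sk.public_key()

# sign(sk, m): MP certifies a match-time announcement.
m = b"You have been matched to vehicle 174 at time 1651012345"
sigma = sk.sign(m)

# verify(pk, m, sigma): the rider accepts the message only if valid.
try:
    pk.verify(sigma, m)
    accepted = True
except InvalidSignature:
    accepted = False
assert accepted
```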
##### B.4 Implementation Details and Examples

In this section, we show how driver period information in ridehailing services, and a mobility provider's impact on congestion, can be obtained from the protocol. Both cases involve specifying characteristics of the query function g and trip metadata that enable the desired information to be computed by the protocol.

**B.4.1 Obtaining ridehailing period activity**

As discussed in Example 3, the pay rate of ridehailing drivers depends on the period they are in. Ridehailing companies use period 2 to tell users that they are matched and a driver is en route, thereby reducing the likelihood that the user leaves the system out of impatience. Due to this utility, period 2 has a higher pay rate than period 1. There is thus a financial incentive for ridehailing companies to report period 2 activity as period 1 activity, so that they can have improved user retention while keeping operations costs low. Accurate period information is thus important to protect the wages of ridehailing drivers.

We achieve accurate period information by including digital signatures in the trip metadata. Recall that the trip metadata includes the request time, match time, pickup time, and dropoff time of the request. The period 2 and period 3 activity associated with a trip can be deduced from these timestamps, as shown in Figure 6. Furthermore, Rider Witness and ARA ensure that reporting the true demand is a dominant strategy for the ridehailing operator. Therefore, to ensure accurate period information, it is sufficient to ensure that the aforementioned timestamps are recorded correctly.

Figure 6: The timestamps within the trip metadata (request time, match time, pickup time, dropoff time) determine the Period 2 and Period 3 activity, as well as the wait time, of the vehicle that serves a trip.

For period 2 accuracy, we need to ensure that the match time and pickup time are recorded properly for each trip. To do this, we will use digital signatures. To notify a user that they have been matched, the ridehailing operator will send (m_pt, σ_pt), where:

m_pt = "You have been matched to vehicle vehID at time currtime",
σ_pt = sign(sk_mp, m_pt).

The user will only consider the message m_pt as genuine if it is accompanied by a valid signature σ_pt. Therefore, telling a user they are matched (and thus reducing the likelihood that this user cancels their trip) requires the ridehailing company to provide an irrefutable and unforgeable declaration of the match time in the form of (m_pt, σ_pt). The message and signature (m_pt, σ_pt) are then included in the trip metadata to certify the trip's match time. The same can be done for the pickup time, and as a result, ensure accurate reporting of all period 2 activity. The accuracy of period 3 activity can be ensured by ensuring that the pickup time and dropoff time are recorded correctly.

To implement driver wage inspection through the protocol, the query function g would be

g_wage(Λ) := ∏_{λ∈Λ} 1[w(λ) = f_wage(λ)],

where w(λ) is the driver wage of ride λ, and f_wage is the MP's wage formula, which may depend on the period and trajectory information contained in the trip metadata of λ. Note that g_wage(Λ) = 1 if and only if all drivers were paid properly, and is 0 otherwise.

**Remark 16 (Evaluating Waiting Time Equity).** Using the idea from Section B.4.1, one can also evaluate the equity of waiting times throughout the network.
**Remark 16 (Evaluating Waiting Time Equity).** Using the idea from Section B.4.1, one can also evaluate the equity of waiting times throughout the network. It is clear from Figure 6 that the wait time can be determined from the request time and pickup time, both of which can be found in the trip metadata. The trip metadata also includes the pickup location and dropoff location, so the average wait time as a function of pickup location, dropoff location, or both pickup and dropoff locations can all be computed from the trip metadata. To implement a waiting time evaluation through the protocol, the Municipal Authority would specify a fairness threshold $\tau$. The query function $g$ is then designed to output 1 if and only if the average waiting time across locations does not vary by more than the pre-specified threshold, and outputs 0 otherwise. Concretely, if we want to enforce wait time equity across pickup regions, we could do this with the function

$$g_{\text{wait}}(\Lambda) = \prod_{i,j \in V} \mathbb{1}\left[\, |\tau_i - \tau_j| \le \tau \,\right],$$

where $\tau_i$ is the average wait time for requests in region $i$.

**B.4.2** **Evaluating contributions to congestion**

The trip metadata contains the trip trajectory, which can be used to evaluate a ridehailing fleet's contribution to congestion. The trip trajectory provides the location of the service vehicle as a function of time, which provides two important insights. First, the trip trajectories can be used to determine how many ridehailing vehicles are on a particular road at any given time. Second, from a trajectory one can compute the amount of time the vehicle spends on each road within the trip path. Thus the average travel time for a road can be calculated, which can then be used to estimate the total traffic flow on the road using traffic models. Combining these two pieces of information, the fraction of a road's total traffic that is ridehailing vehicles can be computed from the trip metadata.
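The following sketch illustrates this congestion computation under stated assumptions: trajectories are lists of (time, road) samples, and each road's total traffic flow is taken as given rather than estimated from a calibrated traffic model as the text describes.

```python
# A hedged sketch of estimating a fleet's congestion contribution from trip
# trajectories (Section B.4.2). Trajectory format and the total-flow values
# are illustrative assumptions.
from collections import defaultdict

def ridehail_flow_per_road(trajectories, window):
    """Count MP vehicles observed on each road during a time window."""
    t0, t1 = window
    counts = defaultdict(int)
    for traj in trajectories:              # traj: list of (time, road) samples
        roads = {road for t, road in traj if t0 <= t < t1}
        for road in roads:
            counts[road] += 1
    return counts

def congestion_share(trajectories, total_flow, window):
    """Fraction of each road's total traffic that is ridehailing vehicles."""
    mp = ridehail_flow_per_road(trajectories, window)
    return {road: mp[road] / total_flow[road] for road in mp}

trajs = [[(1, "A"), (3, "B")], [(2, "A"), (8, "C")]]
print(congestion_share(trajs, {"A": 40, "B": 25, "C": 25}, (0, 5)))
# {'A': 0.05, 'B': 0.04}
```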
##### B.5 Necessity of Assumption 1 for Verifiability

In this section we show that Assumption 1 is necessary for verifiable queries on mobility data, under the natural assumption that MA does not have surveillance in the interior of MP vehicles. This assumption on the limited surveillance ability of MA is formalized in Assumption 2.

**Assumption 2.** *There does not exist a practical way for MA to determine whether a MP vehicle is carrying a customer or not, without directly tracking all customers. In particular, MA cannot determine the period information of MP vehicles.*

Note that MA can obtain period information from the drivers or from MP, but in the absence of Assumption 1, drivers and MPs may act strategically, and may not be trustworthy. The following result shows that under Assumption 2, if the drivers or riders are willing to collude with the MP, then the MP can misreport properties of its mobility demand in a way that is undetectable by the MA.

**Observation 3 (Necessity of Assumption 1 for Strategyproofness).** *Under Assumption 2, the following events are undetectable by MA, even if MA can track all MP vehicles (i.e., knows the location of each MP vehicle at any time):*

*1. If drivers collude with MP, then MP can overreport demand.*

*2. If riders and drivers collude with MP, then MP can underreport demand, or misreport attributes of the demand.*

*Proof of Observation 3.* By Assumption 2, MA cannot distinguish between MP vehicles in period 1 and MP vehicles in period 2 or 3. Suppose the drivers are willing to collude with MP. If MP wants to overreport demand from some origin $i$ to some destination $j$, it can have some drivers drive from $i$ to $j$ without a passenger. This will lead to period 1 traffic from $i$ to $j$; however, the drivers will report themselves in period 3 to MA. This way, even if MA is able to track the MP vehicles, the reported period information from the drivers will be consistent with the demand report from MP. Now suppose both riders and drivers are willing to collude with MP. If the MP wants to underreport demand from $i$ to $j$, they can have some drivers who are serving passengers from $i$ to $j$ report themselves in period 1 to MA.

So in the absence of Assumption 1, the MA has no way of checking whether the messages it receives from MP are computed from the true demand.

**Remark 17 (Tracking Users is also insufficient).** Even if the MA is able to track users and thus determine whether a MP vehicle has a passenger, this still does not prevent overreporting of demand. In this case, MP can hire people to hail rides for specific trips if it wants to overreport demand.

##### B.6 Roadside Audits with fewer sensors

In this section we present the Randomized Roadside Audits (RRA) mechanism. Like ARA, RRA detects overreporting of demand by conducting road audits. Where ARA places sensors on every road, RRA places sensors on a small subset of randomly selected roads, enabling it to use fewer sensors. The sensors used in RRA are similar to the sensors used in ARA described in Section 4.2.3, with the following differences:

1. Each sensor has its own pair of public and secret keys $(pk_s, sk_s)$ for digital signatures. Everyone knows $pk_s$, but $sk_s$ is contained in a Hardware Security Module within the sensor, so that it is impossible to extract $sk_s$ from the sensor, but it is still possible to sign messages using $sk_s$.

2. Each sensor now records its own location using GPS.

First, the MA and MP agree on a list of public keys belonging to the sensors. In particular, they must agree on the number of sensors being deployed in the network. Let $mp$ be the number of sensors being deployed (recall that $m$ is the number of roads in the network). We will focus on the case where $p \in (0, 1)$. If $p > 1$, then there are enough sensors to implement ARA.

During the data-collection period, the MA will place the sensors inside vehicles which are driven by its employees. We assume that MP cannot determine which vehicles are carrying sensors. In practice, MA can have many more than $mp$ employees driving around in the network, but only $mp$ of them will have sensors. The data collection period is divided up into many rounds (e.g., a round could be 1 hour long). In each round, MA will sample a random set of $mp$ roads. Each vehicle with a sensor is assigned to one of these roads, where it will stay (i.e., parked on the side of the street) to measure the MP traffic that passes by. The sensor will record a measurement $u$ which specifies the time, the period and the location of every MP vehicle that passes by. It will then sign the message with its secret key via $\sigma_u := \mathrm{sign}(sk_s, u)$. In particular, a sensor assigned to road $e \in E$ in round $t$ will be able to determine $\phi_t(e, \Lambda)$, the total number of period 2 or period 3 MP vehicles that traverse $e$ in round $t$, formally described below:

$$\phi_t(e, \Lambda) := \sum_{\lambda \in \Lambda} \mathbb{1}[\lambda \text{ traverses } e \text{ in round } t].$$
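A minimal sketch of the per-sensor signing step follows, using Ed25519 from the Python `cryptography` package as a stand-in for the paper's unspecified signature scheme; the measurement encoding is an illustrative assumption, and in a real sensor $sk_s$ would never leave the Hardware Security Module.

```python
# A hedged sketch of an RRA sensor signing its measurements (Section B.6).
# Ed25519 and the string encoding of u are illustrative assumptions; the
# paper only requires that u binds time, period, and location, and that
# sk_s is protected by a Hardware Security Module.
from cryptography.hazmat.primitives.asymmetric import ed25519

sensor_sk = ed25519.Ed25519PrivateKey.generate()  # would live inside the HSM
sensor_pk = sensor_sk.public_key()                # shared with MA and MP

def record_measurement(time, period, location):
    u = f"t={time};period={period};loc={location}".encode()
    sigma_u = sensor_sk.sign(u)                   # sigma_u := sign(sk_s, u)
    return u, sigma_u

u, sigma_u = record_measurement(1042, 2, "road_17")
sensor_pk.verify(sigma_u, u)  # raises InvalidSignature if data was tampered
print("measurement verified")
```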
As was the case in ARA, the sensors have a communication constraint that prevents them from transmitting their data unless both MA and MP give permission. Therefore, during the data collection period, MP does not know where the sensors are. Once the data collection period is over, both MA and MP give permission to collect the data from the sensors.

**Definition 15 (RRA Test).** The RRA test checks whether the road usage on sampled roads is consistent with the demand reported by MP. Concretely, a witness $w = (\Lambda_w, r_w, c_w)$ passes the RRA test if $\phi_t(e, \Lambda) = \phi_t(e, \Lambda_w)$ for all pairs $(e, t)$ such that $e$ was sampled in round $t$.

**Observation 4 (Efficacy of Random Roadside Audits).** *Under Assumption 1, if MP submits a commitment $\sigma' = \mathrm{MCommit}(\Lambda', r)$ to a strict superset of the demand, i.e., $\Lambda \subset \Lambda'$, then with probability at least $p$, any proof submitted by MP will either be inconsistent with $\sigma'$ or will fail the RRA test. Hence overreporting of demand will be detected with positive probability.*

*Proof of Observation 4.* We use a similar analysis to ARA. Suppose MP overreports the demand, i.e., submits a commitment $\sigma' = \mathrm{MCommit}(\Lambda', r)$ where $\Lambda'$ is a strict superset of $\Lambda$. Then there exists $\lambda' \in \Lambda' \setminus \Lambda$. Let $e'$ be any road in the trip trajectory of $\lambda'$, and let $t(\lambda', e')$ be the round in which trip $\lambda'$ traverses $e'$. We then have $\phi_{t(\lambda', e')}(e', \Lambda) < \phi_{t(\lambda', e')}(e', \Lambda')$. If $e'$ is audited in round $t(\lambda', e')$, then $\sigma'$ will be inconsistent with the roadside audit measurements, and will fail the RRA test. Since MA samples $mp$ roads to audit uniformly at random in each round, and there are a total of $m$ roads, the probability that $e'$ is chosen in round $t(\lambda', e')$ is $p$.

Since overreporting is detected only probabilistically, in the event that it is detected, MA should fine MP so that MP's expected utility is reduced if it overreports demand.

**Remark 18 (Comparing RRA to ARA).** When compared to ARA, RRA uses fewer sensors. This, however, is not without drawbacks, since RRA detects demand overreporting only probabilistically. Thus in RRA the MA needs to fine the MP in the event that demand overreporting is detected. In particular, the fine should be chosen so that the MP's expected utility is decreased if it decides to overreport demand. Concretely, suppose $U_h, U_d$ are the utilities received by MP when acting honestly and dishonestly, respectively. Since dishonesty is detected with probability $p$, the fine $F$ must satisfy

$$U_h > (1 - p)U_d - pF \implies F > \frac{1}{p}(U_d - U_h) - U_d.$$

If MA is using very few sensors or if $U_d$ is much larger than $U_h$, then $F$ needs to be very large. A large fine, however, can be difficult to implement. Recall from Section 4.2.2 that inconsistencies between demand metadata and roadside measurements due to GPS errors can occur even if all parties are honest. If such errors occur, then an MP would incur a large fine even if it behaves honestly. For this reason, even an honest MP may not want to participate in the protocol. One could use an error-tolerant version of RRA, but for large $F$ the tolerance parameter $\epsilon$ would need to be large, enabling a dishonest MP to overreport demand while remaining within the tolerance parameter.
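A worked numeric check of the fine bound from Remark 18 makes the sensor/fine trade-off concrete. The utility values below are illustrative assumptions, not numbers from the paper.

```python
# Honesty is preferred iff U_h > (1 - p) * U_d - p * F, i.e.,
# F > (U_d - U_h) / p - U_d  (the bound in Remark 18).

def min_fine(U_h, U_d, p):
    """Smallest fine F making the MP's expected utility favor honesty."""
    return (U_d - U_h) / p - U_d

U_h, U_d = 100.0, 150.0           # honest vs. dishonest utility (assumed)
for p in (0.5, 0.1, 0.01):        # fraction of roads audited per round
    F = min_fine(U_h, U_d, p)
    # with a fine just above F, overreporting no longer pays in expectation
    assert (1 - p) * U_d - p * (F + 1e-9) < U_h
    print(f"p={p}: F must exceed {F:,.0f}")
# p=0.5: -50 (any non-negative fine suffices); p=0.1: 350; p=0.01: 4,850
```

As the audit probability $p$ shrinks, the required fine grows roughly as $1/p$, which is exactly why Remark 18 warns that very sparse sensing forces impractically large fines.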
**B.6.1** **Security Discussion**

We now make several remarks regarding the two sensor modifications we made for RRA.

First, the signatures generated by the sensors' secret keys ensure that MA cannot fabricate or otherwise tamper with the sensors' data. This is important because the sensors are in the possession of the MA and its employees. Even if the MA manages to change the data in a sensor's storage, it cannot produce the corresponding signatures for the altered data, since it does not know the secret key, which is protected by a Hardware Security Module.

Second, the sensor's location data is essential to prevent MA from conducting relay attacks. A relay attack is as follows: Suppose Alice and Bob are both MA employees. Alice has a sensor in her car. Bob does not have a sensor in his car, but he wants to collect data as if he had a sensor in his car. The MA can give Bob an unofficial sensor (this sensor does not have a valid public and secret key recognized by the MP), allowing Bob to detect signals from MP vehicles. Since Bob's sensor does not have an official secret key, he cannot obtain a valid signature for his measurements. To get the signatures, Bob sends the detected signal to Alice, and Alice relays the signal to her sensor, which will sign the measurement and record it. In this manner, the MA is able to get official measurements and signatures on Bob's road even though he does not have an official sensor. Fortunately, this attack is thwarted if the sensor knows its own location: if a sensor receives a measurement whose location is very different from its own location, then it will reject the message. (The measurements can additionally be protected by authenticated encryption so that relayers, e.g., Bob, cannot modify the messages, for instance by changing the vehicle position part of the measurement.)

##### B.7 Establishing Verifiability and Differential Privacy for Appendix A

Verifiability is established by steps 1, 3 and 4 of $C$. Based on the analysis in Section 4.2, a witness satisfies step 1 of $C$ if and only if $\Lambda_w = \Lambda$, i.e., the demand is reported honestly. Since the Pedersen commitment scheme is secure, it is computationally binding, meaning that it is computationally intractable for MP to find $Z'_{mp}, r'_{mp}$ with $Z_{mp} \ne Z'_{mp}$ and $\mathrm{Commit}(Z'_{mp}, r'_{mp}) = z_{mp}$. So in order for the MP's witness to pass step 3 of $C$, it must have $Z_{mp,w} = Z_{mp}$. Given that steps 1 and 3 have passed, step 4 ensures that the message $z$ is indeed equal to $\hat{g}(\Lambda, Z_{ma} \oplus Z_{mp})$, which establishes verifiability.

To establish differential privacy, we need to show two things: (a) MA does not know $Z_{ma} \oplus Z_{mp}$ (see Remark 14), and (b) $Z_{ma} \oplus Z_{mp}$ is uniformly distributed over $\{0, 1\}^v$, even if MA and MP are acting strategically. To this end, we consider a game between MA and MP with actions $Z_{ma}, Z_{mp} \in \{0, 1\}^v$ and outcome $Z_{ma} \oplus Z_{mp} \in \{0, 1\}^v$. We will show that the strategy profile where both $Z_{ma}, Z_{mp}$ are independently sampled uniformly at random is a Nash equilibrium, meaning that differential privacy is achieved as long as at least one party is honest.

To show that independent uniform random sampling of both $Z_{ma}, Z_{mp}$ is a Nash equilibrium, we first need to show that $Z_{ma}, Z_{mp}$ are independent. In the protocol, $Z_{mp}$ is sampled first, and a Pedersen commitment $z_{mp}$ is sent to MA. Since Pedersen commitments are perfectly hiding, the distribution of $z_{mp}$ does not depend on $Z_{mp}$. So even if MA samples $Z_{ma}$ based on the value of $z_{mp}$, the result will be independent of $Z_{mp}$. Now that we have established independence of $Z_{mp}, Z_{ma}$, we make use of the following observation.
**Observation 5 (One Time Pad).** *Suppose $Z_{ma}, Z_{mp}$ are independent random variables. If $Z_{ma}$ is uniformly distributed over $\{0, 1\}^v$, then $Z_{ma} \oplus Z_{mp}$ is uniformly distributed over $\{0, 1\}^v$, regardless of how $Z_{mp}$ is sampled. If $Z_{mp}$ is uniformly distributed over $\{0, 1\}^v$, then $Z_{ma} \oplus Z_{mp}$ is uniformly distributed over $\{0, 1\}^v$, regardless of how $Z_{ma}$ is sampled.*

Observation 5 says that if $Z_{ma}, Z_{mp}$ are independent, the distribution of $Z_{ma} \oplus Z_{mp}$ does not depend on $Z_{mp}$ if $Z_{ma}$ is uniformly random, and vice versa. Hence independent uniform sampling of $Z_{ma}, Z_{mp}$ is a Nash equilibrium, establishing condition (b). To establish (a), if at least one party is honest, then we can assume without loss of generality that both parties are acting according to the Nash equilibrium. By Observation 5, this means the marginal distribution of $Z_{ma} \oplus Z_{mp}$ and the conditional distribution of $Z_{ma} \oplus Z_{mp}$ given $Z_{ma}$ are both uniform. In particular, MA does not learn anything about $Z_{ma} \oplus Z_{mp}$ from $Z_{ma}$.

##### B.8 More Details on Congestion Pricing

When the travel cost is the same as the travel time, the prices can be obtained from the following optimization problem:

$$\begin{aligned}
\min \quad & \sum_{e \in E} x_e f_e(x_e) \\
\text{s.t.} \quad & x = \sum_{o \in V} \sum_{d \in V} x^{od} \\
& x^{od} \succeq 0 \quad \forall o \in V,\ d \in V \\
& \sum_{(u,v) \in E} \left( x^{od}_{(u,v)} - x^{od}_{(v,u)} \right) = \Lambda(o, d)\left( \mathbb{1}[u = o] - \mathbb{1}[u = d] \right) \quad \forall u \in V
\end{aligned}$$

where $x^{od}_e$ is the traffic flow from $o$ to $d$ that uses edge $e$, and $x$ is the total traffic flow. $\Lambda$ is the travel demand, where $\Lambda(o, d)$ is the rate at which users require transport from $o$ to $d$. Here the objective measures the sum of the travel times of all requests in $\Lambda$. Let $x^*$ be a solution to (1). By first order optimality conditions of $x^*$ (i.e., it should be impossible to decrease the objective function by reallocating flow from $p_1$ to $p_2$ or vice versa), for any origin-destination pair $(i, j)$, and any two paths $p_1, p_2$ from $i$ to $j$ with non-zero flow, we have

$$\sum_{e \in p_1} \frac{\partial}{\partial x_e}\big( x_e f_e(x_e) \big)\Big|_{x_e = x^*_e} = \sum_{e' \in p_2} \frac{\partial}{\partial x_{e'}}\big( x_{e'} f_{e'}(x_{e'}) \big)\Big|_{x_{e'} = x^*_{e'}} \implies \sum_{e \in p_1} \big( f_e(x^*_e) + x^*_e f'_e(x^*_e) \big) = \sum_{e' \in p_2} \big( f_{e'}(x^*_{e'}) + x^*_{e'} f'_{e'}(x^*_{e'}) \big). \quad (11)$$

In order to realize $x^*$ as a user equilibrium, the costs of $p_1, p_2$ should be the same, so that no user has an incentive to change their strategy. This can be achieved by setting the toll for each road $e$ as $p_e := x^*_e f'_e(x^*_e)$. By doing so, from (11) we can see that the cost (travel time plus toll) for the two paths will be equal. In the context of PMM, the function $g$ associated with congestion pricing is

$$g_{cp}(\Lambda) := \big( x^*_e f'_e(x^*_e) \big)_{e \in E} \quad \text{where } x^* \text{ solves (1)}.$$

##### B.9 Efficacy of Merkle Proofs

To verify the proof $\pi(m_i) = \{r_i\} \cup \{\mathrm{sibling}(h_{j, x_j})\}_{j=1}^{\ell_q}$ for membership of $m_i$, the recipient of the proof would compute $v_1, v_2, \ldots, v_{\ell_q}$ recursively via

$$v_1 := H(r_i \,\|\, m_i), \qquad v_j := \begin{cases} H\big(v_{j-1} \,\|\, \mathrm{sibling}(h_{j-1, x_{j-1}})\big) & \text{if } x_{j-1} \text{ is even}, \\ H\big(\mathrm{sibling}(h_{j-1, x_{j-1}}) \,\|\, v_{j-1}\big) & \text{if } x_{j-1} \text{ is odd}, \end{cases} \quad \text{for } 1 < j \le \ell_q.$$

By the construction of the Merkle tree, $v_j = h_{j, x_j}$, and so in particular the Merkle proof is valid if and only if $v_{\ell_q}$ is equal to the root, i.e., $v_{\ell_q} = h_{\ell_q, 0}$.
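The recursion above translates directly into code. The sketch below assumes SHA-256 for $H$, byte-string concatenation for $\|$, and the even-index-means-left-child convention from the text; these byte-level details are illustrative assumptions.

```python
# A hedged sketch of Merkle proof verification following Appendix B.9.
# The leaf is hashed with a hiding nonce r_i, as in the text.
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_merkle_proof(root, m_i, r_i, siblings, leaf_index):
    """siblings[j] is the sibling hash at level j; leaf_index locates m_i."""
    v, x = H(r_i + m_i), leaf_index
    for sib in siblings:
        v = H(v + sib) if x % 2 == 0 else H(sib + v)  # even index: v on left
        x //= 2                                       # move to parent's index
    return v == root

# Tiny two-leaf tree: root = H(H(r0 || m0) || H(r1 || m1)).
r0, m0, r1, m1 = b"n0", b"itemA", b"n1", b"itemB"
root = H(H(r0 + m0) + H(r1 + m1))
assert verify_merkle_proof(root, m0, r0, [H(r1 + m1)], leaf_index=0)
assert not verify_merkle_proof(root, b"forged", r0, [H(r1 + m1)], 0)
print("proof checks out")
```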
Since there are $q$ leaves in the binary tree, the path $p_i$ from $m_i$ to the root has at most $\log_2 q$ vertices in it, and each hash is $d$ bits, so the length of $\pi$ is at most $d \log_2 q$ bits. By the collision resistance of $H$, it is intractable to forge a proof if $m_i$ is not in the tree, and since hiding nonces are used when hashing the items, the proof reveals nothing about the other items in the tree.
{ "disclaimer": "Notice: Paper or abstract available at https://arxiv.org/abs/2104.07768, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "publisher-specific-oa", "status": "BRONZE", "url": "https://doi.org/10.1109/tcns.2022.3141027" }
2,021
[ "JournalArticle" ]
true
2021-04-15T00:00:00
[ { "paperId": "a5f325bd45f33872560042a63a06ae26f1c52b27", "title": "Blockchain-Enabled Identity Verification for Safe Ridesharing Leveraging Zero-Knowledge Proof" }, { "paperId": "a876aeff0d2d0bef08738ac063ef6c48f7eb8789", "title": "Spartan: Efficient and general-purpose zkSNARKs without trusted setup" }, { "paperId": "7a5b83e86384eef4eb9285032d1ee6761473ba42", "title": "Blockchain for the Internet of Vehicles Towards Intelligent Transportation Systems: A Survey" }, { "paperId": "ea6facc5e26fb6ca80a54930dc49158ca9dd7f60", "title": "Verifiable and Privacy-Preserving Traffic Flow Statistics for Advanced Traffic Management Systems" }, { "paperId": "70ef5d5fd45518d0ab498fa23f15d80e326fbd64", "title": "A differential privacy-based privacy-preserving data publishing algorithm for transit smart card data" }, { "paperId": "6a346977fe0eb181c1de9ba636235727ea4ecc80", "title": "Marlin: Preprocessing zkSNARKs with Universal and Updatable SRS" }, { "paperId": "c71edf106571a39c9cf6d2ace05b63b4c66bf72a", "title": "Transparent SNARKs from DARK Compilers" }, { "paperId": "6a2d8d174ecd081d2b4ce0f09480751f5445b3e9", "title": "A Blockchain-Assisted Intelligent Transportation System Promoting Data Services with Privacy Protection" }, { "paperId": "83d68f99b53491d52d68e7d2a3e49fd7e02e73ed", "title": "Optimal privacy control for transport network data sharing" }, { "paperId": "4e5a0594e5c35a37df1818cd9c0fdbfac968fdc4", "title": "Privacy-Preserving Authentication Scheme for Connected Electric Vehicles Using Blockchain and Zero Knowledge Proofs" }, { "paperId": "fd575712e32943402ade9c75e8439ba860af48fb", "title": "A Review of Blockchain-Based Systems in Transportation" }, { "paperId": "2ac167785e327e4e74b754265c6fd92542963353", "title": "Sonic: Zero-Knowledge SNARKs from Linear-Size Universal and Updatable Structured Reference Strings" }, { "paperId": "153798f293f693b12a2c8dc9300ba117b18248f5", "title": "LegoSNARK: Modular Design and Composition of Succinct Zero-Knowledge Proofs" }, { "paperId": "6cc50a2fb28f7c8f810a5e1c919686e2d6bf9ed2", "title": "A Privacy-Preserving Trust Model Based on Blockchain for VANETs" }, { "paperId": "cd61d69c58565c9f2a9baee6c7b780553d80627f", "title": "Accessible Privacy-Preserving Web-Based Data Analysis for Assessing and Addressing Economic Inequalities" }, { "paperId": "31d5acdca6c2543a191c7ce3ca27c4e357a81be2", "title": "Bulletproofs: Short Proofs for Confidential Transactions and More" }, { "paperId": "2313d55f4e7739de529e966521018c9043c3fe45", "title": "Survey on Misbehavior Detection in Cooperative Intelligent Transportation Systems" }, { "paperId": "80b826d4effc89e347cd243905ab4b63193907ba", "title": "Racial and Gender Discrimination in Transportation Network Companies" }, { "paperId": "c2de5385c197aab309abd859b36bee1362147688", "title": "Bitcoin and Cryptocurrency Technologies - A Comprehensive Introduction" }, { "paperId": "3797d924fc5f832b72a84c512e92021906793965", "title": "Zerocash: Decentralized Anonymous Payments from Bitcoin" }, { "paperId": "773d73400f9b453e28949b316e65a1954d19f44c", "title": "Privacy Protection Method for Fine-Grained Urban Traffic Modeling Using Mobile Sensors" }, { "paperId": "4ce8f55b89d2622ca0709f5e1e7f467057ca0ccf", "title": "Evaluating the Usefulness of Watchdogs for Intrusion Detection in VANETs" }, { "paperId": "c8f98af75931260b0a58d5aed9b62ee0f6bf2f23", "title": "Challenges in teaching a graduate course in applied cryptography" }, { "paperId": "8c23ea0ed7badd70a8e26dcea73f2d673cc0c74d", "title": "What Can We Learn Privately?" 
}, { "paperId": "e4ce10063cd25447dcde75c2d9ce327446ced952", "title": "Calibrating Noise to Sensitivity in Private Data Analysis" }, { "paperId": "61b66a8324742a09d259a24f98effbb1fbfec9b2", "title": "Revealing information while preserving privacy" }, { "paperId": "afc0af9aa4462fbb52f8e85859ad6e445ed1e9a2", "title": "Mitigating routing misbehavior in mobile ad hoc networks" }, { "paperId": "26747053b4bc759f4517f2d570b7c0227d116c2c", "title": "Non-Interactive and Information-Theoretic Secure Verifiable Secret Sharing" }, { "paperId": "9ccf7b6cb32cf89752a35bd910555adac54773e0", "title": "The Knowledge Complexity of Interactive Proof Systems" }, { "paperId": "df2473061df11b76cebb7400c50246d0b354390c", "title": "How to play ANY mental game" }, { "paperId": "8f3282f4141f3a096f821a19aeeaf0f9f6c491f6", "title": "Verifiable secret sharing and achieving simultaneity in the presence of faults" }, { "paperId": "88abb2cda4f2a57499a717966ac4fbe9a993027a", "title": "How to share a secret" }, { "paperId": "9700307a2d35a594929d44edc49fcde7226ed663", "title": "Time- and Space-Efficient Arguments from Groups of Unknown Order" }, { "paperId": null, "title": "Uber must face lawsuit over ‘woefully inadequate" }, { "paperId": "9e0f026d02ed411889cb2b14efb390b7661924b8", "title": "Lunar: a Toolbox for More Efficient Universal and Updatable zkSNARKs and Commit-and-Prove Extensions" }, { "paperId": "d928b78ea85cae93d3ca0bfabe47bf954db55e7a", "title": "PLONK: Permutations over Lagrange-bases for Oecumenical Noninteractive arguments of Knowledge" }, { "paperId": null, "title": "“Zk rollup.”" }, { "paperId": "0dbb8a54ca5066b82fa086bbf5db4c54b947719a", "title": "A NEXT GENERATION SMART CONTRACT & DECENTRALIZED APPLICATION PLATFORM" }, { "paperId": null, "title": "Big data and privacy: A technological perspective" }, { "paperId": "b7f039f3e24404dfdae60547ec1a26df671675aa", "title": "A Fistful of Bitcoins Characterizing Payments Among Men with No Names" }, { "paperId": "1e34435375dcfc6381200ac6d23c48037b3fa5a9", "title": "Available online at:" }, { "paperId": "4e9ec92a90c5d571d2f1d496f8df01f0a8f38596", "title": "Bitcoin: A Peer-to-Peer Electronic Cash System" }, { "paperId": "f3ca3e173ea3cfa0f8a74e4c68970e8a281d95eb", "title": "Urban Transportation Networks: Equilibrium Analysis With Mathematical Programming Methods" }, { "paperId": "a513c22df84d752391f050fa8e004ba2630409d4", "title": "Coin flipping by telephone a protocol for solving impossible problems" }, { "paperId": null, "title": "How uber deceives the authorities worldwide" }, { "paperId": null, "title": "If riders and drivers collude with MP, then MP can underreport demand, or misreport attributes of the demand" }, { "paperId": null, "title": "H is hiding - If r is a sufficiently long random string (256 bits is often sufficient), then it is computationally intractable to deduce anything about x by observing H ( r || x ). 
31" }, { "paperId": null, "title": "Secure - Given a public key pk" }, { "paperId": null, "title": "The tuple ( S, V, P ) is a proof system for C if it satisfies the" }, { "paperId": null, "title": "H is collision resistant - For sufficiently large d , it is computationally intractable to find distinct inputs x 1 , x 2 so that H ( x 1 ) = H ( x 2 )" }, { "paperId": "f48281cf5f701777bc1d664dcb7eb5e669e089e1", "title": "Lunar : a Toolbox for More Efficient Universal and Updatable zkSNARKs and Commit-and-Prove Extensions" }, { "paperId": null, "title": "Cryptographic Data Privacy for Mobility Management" }, { "paperId": null, "title": "Data sharing: What's the worst that could happen? Government Technology" }, { "paperId": null, "title": "Google removes app that calculated if uber drivers were underpaid" } ]
33,226
en
[ { "category": "Engineering", "source": "external" }, { "category": "Engineering", "source": "s2-fos-model" }, { "category": "Environmental Science", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/02081d2aaf6c56a33e89743aa88faafa64171819
[ "Engineering" ]
0.8888
Distributed Energy Storage Control for Dynamic Load Impact Mitigation
02081d2aaf6c56a33e89743aa88faafa64171819
[ { "authorId": "30554145", "name": "Maximilian J. Zangs" }, { "authorId": "2055232521", "name": "P. B. Adams" }, { "authorId": "46515838", "name": "Timur Yunusov" }, { "authorId": "2678149", "name": "W. Holderbaum" }, { "authorId": "2124923", "name": "B. Potter" } ]
{ "alternate_issns": null, "alternate_names": null, "alternate_urls": null, "id": null, "issn": null, "name": null, "type": null, "url": null }
The future uptake of electric vehicles (EV) in low-voltage distribution networks can cause increased voltage violations and thermal overloading of network assets, especially in networks with limited headroom at times of high or peak demand. To address this problem, this paper proposes a distributed battery energy storage solution, controlled using an additive increase multiplicative decrease (AIMD) algorithm. The improved algorithm (AIMD+) uses local bus voltage measurements and a reference voltage threshold to determine the additive increase parameter and to control the charging, as well as discharging rate of the battery. The used voltage threshold is dependent on the network topology and is calculated using power flow analysis tools, with peak demand equally allocated amongst all loads. Simulations were performed on the IEEE LV European Test feeder and a number of real U.K. suburban power distribution network models, together with European demand data and a realistic electric vehicle charging model. The performance of the standard AIMD algorithm with a fixed voltage threshold and the proposed AIMD+ algorithm with the reference voltage profile are compared. Results show that, compared to the standard AIMD case, the proposed AIMD+ algorithm further improves the network’s voltage profiles, reduces thermal overload occurrences and ensures a more equal battery utilisation.
# energies

_Article_

## Distributed Energy Storage Control for Dynamic Load Impact Mitigation

**Maximilian J. Zangs [†], Peter B. E. Adams [†], Timur Yunusov, William Holderbaum * and Ben A. Potter**

School of Systems Engineering, University of Reading, Whiteknights Campus, Reading RG6 6AY, UK; m.j.zangs@pgr.reading.ac.uk (M.J.Z.); p.b.e.adams@pgr.reading.ac.uk (P.B.E.A.); t.yunusov@reading.ac.uk (T.Y.); b.a.potter@reading.ac.uk (B.A.P.)

***** Correspondence: w.holderbaum@reading.ac.uk; Tel.: +44-118-378-6086; Fax: +44-118-975-1994

† These authors contributed equally to this work.

Academic Editor: Rui Xiong

Received: 31 January 2016; Accepted: 4 August 2016; Published: 17 August 2016

**Abstract: The future uptake of electric vehicles (EV) in low-voltage distribution networks can cause** increased voltage violations and thermal overloading of network assets, especially in networks with limited headroom at times of high or peak demand. To address this problem, this paper proposes a distributed battery energy storage solution, controlled using an additive increase multiplicative decrease (AIMD) algorithm. The improved algorithm (AIMD+) uses local bus voltage measurements and a reference voltage threshold to determine the additive increase parameter and to control the charging, as well as discharging rate of the battery. The used voltage threshold is dependent on the network topology and is calculated using power flow analysis tools, with peak demand equally allocated amongst all loads. Simulations were performed on the IEEE LV European Test feeder and a number of real U.K. suburban power distribution network models, together with European demand data and a realistic electric vehicle charging model. The performance of the standard AIMD algorithm with a fixed voltage threshold and the proposed AIMD+ algorithm with the reference voltage profile are compared. Results show that, compared to the standard AIMD case, the proposed AIMD+ algorithm further improves the network's voltage profiles, reduces thermal overload occurrences and ensures a more equal battery utilisation.

**Keywords: battery storage; distributed control; electric vehicles; additive increase multiplicative** decrease (AIMD); voltage control; smart grid

**1. Introduction**

The adoption of electric vehicles (EV) is seen as a potential solution for the decarbonisation of future transport networks, offsetting emissions from conventional internal combustion engine vehicles. The current rate of EV uptake is anticipated to increase with improved driving range, reduced cost of purchase and a greater emphasis on leading an environmentally-friendly lifestyle [1]. It is predicted that by 2030, there will be three million plug-in hybrid electric vehicles (PHEV) and EVs sold in Great Britain and Northern Ireland [2], and it is expected that by 2020, every tenth car in the United Kingdom will be electrically powered [3]. It is anticipated that the majority of PHEV/EV will be charged at home, putting additional stress on the existing local low voltage distribution network, which must then cater for the increased demand for energy [4,5]. Uncontrolled charging of multiple PHEV/EV can raise the daily peak power demand, which leads to increased transmission line losses, higher voltage drops, and equipment overload, damage and failure [6–9]. Accommodating the increased demand and mitigating such failures is a major area of research interest, with the focus mainly placed on the coordination and support of home charging.
Demand Side Management (DSM) strategies for Distributed Energy Resources (DER) aim to alleviate the impacts of PHEV/EV home-charging and are a favoured solution. Mohsenian-Rad et al. in [10] developed a distributed DSM algorithm that implicitly controls the operation of loads, based on game theory and the network operator's ability to dynamically adjust energy prices. Focusing on financial incentive-driven DSM strategies, a Time-Of-Use (TOU) tariff and real-time load management strategy was proposed in [11], where disruptive charging is avoided by allocating higher prices to times of peak demand. Financial incentives have also become a driver for optimising the operation of Battery Energy Storage Solutions (BESS) and Distributed Generation (DG) when including PHEV/EV in the problem formulation [12].

Research focused on grid support has been driven by the need to deliver long-term savings and to avoid the immediate costs and disruption of network reinforcements and upgrades. This area of research proposes the implementation of alternative solutions to support the adoption of low carbon technologies, such as EVs, heat pumps and the electrification of consumer products. To reduce the resulting increased peak demand, Mohsenian-Rad et al. developed an approach of direct interaction between grid and consumer to achieve valley-filling, by means of dynamic game theory [10]. In [13], a Multi-Agent System (MAS) was used to manage flexible loads for the minimisation of cost in a dynamic game. The use of aggregators has been proposed to allow a number of small providers to participate in network support, such as grid frequency response [14–16]. Yet, without the availability of power demand forecasts, real-time control needs to be implemented.

Real-time DSM can be implemented in either a centralised or a distributed control approach. In the former, a central controller relays control signals to its aggregated DERs, whereas the latter allows each DER to control itself. A common form of controlling DERs in this mode of operation is set-point control [17]. Using set-point control on multiple identically-configured DERs would yield optimal operating conditions if each DER's control parameters (e.g., bus voltage) were shared. In a system without shared network information, DER control algorithms have to be improved to prevent, for example, devices located furthest from the substation from being used more frequently than others. This paper therefore presents an individualised BESS control algorithm that lets distributed batteries respond to fluctuations in real-time local bus voltage readings. The proposed algorithm is based on the robust Additive Increase Multiplicative Decrease (AIMD) type of algorithm, yet implements a set-point adjustment based on the location of the controlled BESS. It will be shown how these home-connected batteries can mitigate the impact of additional loads (i.e., EV uptake), whilst assuring that all BESS are cycled equally.

The key contribution of this work can be summarised as a novel distributed battery storage algorithm for mitigating the negative impact of dynamic load uptake on the low-voltage network. This algorithm uses an individualised set-point control to regulate bi-directional battery power flow and, for convergence, extends the traditional AIMD algorithm. As a result, the developed battery control method reduces voltage deviation, over-currents and the inequality of battery usage.
Reducing this usage inequality leads to a homogeneous usage of all of the distributed batteries and, hence, prevents unequal degradation rates and unfair device utilisation.

The remainder of this paper is organised as follows: Section 2 gives some background on the related work on AIMD algorithms on which this research is based. Section 3 outlines the EV, network and storage models used in the research, and explains the assumptions that accommodate and justify these models. Section 4 elaborates on the proposed AIMD control algorithm (AIMD+). Next, Section 5 details the implementation and scenarios used for a set of test cases; for later comparison, this section also outlines a set of comparison metrics. Section 6 presents and discusses the results, followed by the conclusion in Section 7.

**2. Related Work**

Existing literature addresses the usage of energy storage units in low-voltage distribution networks to assure voltage security [18–22]. An approach used by, e.g., Mokhtari et al. in [21] relies on bus voltage and network load measurements to prevent system overloads. Yet, these kinds of storage control systems require communication infrastructures to relay the network information and control instructions. This requirement has also been addressed in the comprehensive review of storage allocation and application methods by Hatziargyriou et al. [23]. In the work presented here, a control algorithm is proposed that removes the need for such inter-BESS communication, since it only uses local voltage measurements to infer the network operation. To prevent conflicting device behaviour, the underlying coordination mechanism is of particular importance; since it assures convergence, the AIMD algorithm is well suited to such coordinated control.

Originally, AIMD algorithms were applied to congestion management in communication networks using the TCP protocol [24], to maximise utilisation while ensuring a fair allocation of data throughput amongst a number of competing users [25]. AIMD-type algorithms have previously been applied to power sharing scenarios in low voltage distribution networks, where the limited resource is the availability of power from the substation's transformer. For instance, such an algorithm was first proposed for EV charging by Stüdli et al. [26], requiring a one-way communications infrastructure to broadcast a "capacity event" [27,28]. Later, their work was further developed to include vehicle-to-grid applications with reactive power support [29]. The battery control algorithm proposed in this paper builds upon the algorithm used by Mareels et al. [30] for organising EV charging, by including bidirectional power flow and the use of a reference voltage profile derived from network models. Similar to the work by Xia et al. [31], who utilised local voltage measurements to adjust the charging rate, only voltage measurements at the batteries' connection sites were used in this work to control the batteries' operation.

The work presented here therefore extends previous research, which has only utilised common set-point thresholds for controlling each of the DERs. The approach proposed in this paper ensures that unavoidable voltage drops along the feeder do not skew the control decisions, and that voltage oscillations caused by demand variation are taken into consideration by the control.
In contrast to previous work, where substation monitoring was used to inform control units of the transformer's present operational capacity, the proposed AIMD+ algorithm does not require this information and, hence, does not require such an extensive communications infrastructure.

**3. System Modelling**

In this section, the underlying assumptions to validate the research are addressed. Next, a model to describe EV charging behaviour is explained. This is followed by a model of the BESS. Finally, the network models used to simulate the power distribution networks are explained.

_3.1. Assumptions_

For this work, several underlying assumptions were made to obtain the models:

1. The uptake of EVs is assumed to increase and, hence, to have a significant impact on the normal operation of the low voltage distribution network. This assumption is based on a well-established prediction that the majority of EV charging will take place at home [32].
2. The transition from internal combustion engine-powered vehicles to EVs is assumed not to impact the users' driving behaviour. Similar to [33], this assumption allows the utilisation of recent vehicle mobility data [34] to generate leaving, driving and arriving probabilities, from which the EV charging demand can be determined.
3. The transition to low carbon technologies will increase the variability of electricity demand, and therefore grid-supporting devices, such as BESS, are anticipated to play a more important role [35]. Hence, alongside a high uptake of EVs, an increased adoption of distributed BESS devices is assumed.
4. It is assumed that battery energy storage solutions start the simulations at a 50% SOC and are not 100% efficient at storing and releasing electrical energy, as in [36]. Additionally, their utilisation will degrade the energy storage capability and performance over time, as shown in [37]. Therefore, the requirement for equal and fair storage usage is of high importance.
5. It is assumed that the load profiles provided by the IEEE Power and Energy Society (PES) are sufficient as base load profiles for all simulations.

_3.2. Electric Vehicle Charging Behaviour_

From publicly-available car mobility data [33,34], an empirical model was developed to capture the underlying driving behaviour. The raw data, $n_r(t)$, represent the probabilities of starting a trip during a 15-min period of a weekday. Three continuous normal distribution functions, each defined as

$$\hat{n}_x(t) = \beta_x \frac{1}{\sigma_x \sqrt{2\pi}} \exp\left( -\frac{(t/24 - \mu_x)^2}{2\sigma_x^2} \right), \quad t \in [0, 24], \quad (1)$$

were used to represent vehicles leaving in the morning, $\hat{n}_m(t)$, at lunch time, $\hat{n}_l(t)$, and in the evening, $\hat{n}_e(t)$. The aggregate probability of these three functions was optimised using a Generalised Reduced Gradient (GRG) algorithm to fit the original data. In order to represent a symmetric commuting behaviour, i.e., vehicles departing in the morning and returning during the evening, an equality amongst the three probabilities was defined as follows:

$$0 = \int_0^{24} \left[ \hat{n}_m(t) + \hat{n}_l(t) - \hat{n}_e(t) \right] dt \quad (2)$$

The resulting parameters from the GRG fitting of the three distribution functions are tabulated in Table 1. Additionally, the resulting departure probabilities, as well as the reference data $n_r(t)$, are shown in Figure 1.
**Table 1. Parameters for normal distributions.**

| Equation $\hat{n}_x(t)$ | $\mu_x$ (Mean) | $\sigma_x$ (SD) | $\beta_x$ (Weight) |
|---|---|---|---|
| $\hat{n}_m(t)$ | 0.3049 | 0.0488 | 0.00206 |
| $\hat{n}_l(t)$ | 0.4666 | 0.0829 | 0.00314 |
| $\hat{n}_e(t)$ | 0.7042 | 0.0970 | 0.00521 |

**Figure 1.** The probability of starting a trip at a particular time during a weekday, extrapolated into three normal distributions (RMS error: 9.482%).

Statistical data capturing the probability distribution of a trip being of a certain distance were also extracted from the dataset. This was done for both weekdays, $w_{wd}(d)$, and weekends, $w_{we}(d)$. The Weibull function was chosen to be fitted against the extracted probability distributions and is defined as:

$$\hat{w}_x(d) := \begin{cases} \dfrac{k_x}{\gamma_x} \left( \dfrac{d}{\gamma_x} \right)^{k_x - 1} \exp\left( -\left( \dfrac{d}{\gamma_x} \right)^{k_x} \right) & \text{if } d \ge 0 \\ 0 & \text{if } d < 0 \end{cases} \quad (3)$$

Performing the curve fitting using the GRG optimisation algorithm, a weekday trip distance distribution, $\hat{w}_{wd}(d)$, and a weekend trip distribution, $\hat{w}_{we}(d)$, could be estimated. The computed function parameters for these two estimated distribution functions are tabulated in Table 2. Their resulting probability distributions are plotted for comparison against the real data, $w_{wd}(d)$ and $w_{we}(d)$, in Figure 2.

**Table 2. Parameters for Weibull distributions.**

| Equation $\hat{w}_x(d)$ | $\gamma_x$ (Scale) | $k_x$ (Shape) |
|---|---|---|
| $\hat{w}_{wd}(d)$ | 15.462 | 0.6182 |
| $\hat{w}_{we}(d)$ | 38.406 | 0.4653 |

**Figure 2.** The probability of a trip being of a particular distance during a weekday, extrapolated into a Weibull distribution (RMS error: 3.791%).

In addition to these probabilities, an average driving speed of 56 km/h (35 mph) and an average driving energy efficiency of 0.1305 kWh/km (0.21 kWh/mile) are taken from [38]. Using the predicted driving distance and average driving speed, together with the driving energy efficiency, it is possible to estimate an EV's energy demand upon arrival. Starting to charge from this arrival time until the energy demand has been met allows the generation of an estimated charging profile for a single EV. To do this, a maximum charging power equal to the U.K.'s average household circuit rating (i.e., 7.4 kW) and an immediate disconnection of the EV upon charge completion were assumed [39]. Generating several such charging profiles and aggregating them produces an estimated charging demand for an entire fleet of EVs. To provide an example, charge demand profiles for 50 EVs were generated, aggregated and plotted in Figure 3.

**Figure 3.** Excerpt from the aggregated 50 EVs' charging powers, each generated from the empirical models.

This plot shows the expected magnitude and variability of the energy demand required to charge several EVs at consumers' homes, based on the vehicles' daily usage. This model's EV charging behaviour has been implemented to reflect EV demand if applied today, without a widespread smart charging infrastructure; it therefore reflects the worst case scenario. Future smart-charging schemes would mitigate the currently present collective EV charging spike, yet the implementation and validation of available smart-charging schemes lies beyond the scope of this paper. This model's data were used to feed additional demand into the power network models, which are outlined in the next section.
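The following hedged sketch samples a single EV's weekday trip and the resulting charging requirement from the fitted distributions of Tables 1 and 2. The mixture-component selection and the treatment of the trip as one-way are simplifying assumptions on top of the paper's model.

```python
# A hedged sketch of sampling one EV's weekday trip and charging demand
# (Section 3.2). Parameter values come from Tables 1-2; the sampling and
# rounding details are illustrative assumptions.
import random

SPEED_KMH, KWH_PER_KM, P_CHARGE_KW = 56.0, 0.1305, 7.4

def sample_weekday_trip():
    # Pick the morning/lunch/evening departure component by weight (Table 1),
    # then a departure hour and a Weibull trip distance (Table 2).
    mu, sigma = random.choices(
        [(0.3049, 0.0488), (0.4666, 0.0829), (0.7042, 0.0970)],
        weights=[0.00206, 0.00314, 0.00521])[0]
    depart_h = 24.0 * random.gauss(mu, sigma)          # t/24 ~ N(mu, sigma)
    dist_km = random.weibullvariate(15.462, 0.6182)    # gamma (scale), k (shape)
    return depart_h, dist_km

depart_h, dist_km = sample_weekday_trip()
arrive_h = depart_h + dist_km / SPEED_KMH              # one-way trip, simplified
energy_kwh = dist_km * KWH_PER_KM                      # energy demand on arrival
charge_hours = energy_kwh / P_CHARGE_KW                # charge until complete
print(f"arrive {arrive_h:.2f} h, need {energy_kwh:.2f} kWh, "
      f"charging for {charge_hours:.2f} h at {P_CHARGE_KW} kW")
```

Aggregating many such profiles over a fleet reproduces the kind of evening charging spike shown in Figure 3.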
_3.3. Battery Modelling_

For this work, a well-established battery model that has been used in previous publications by this research group was used [36,40,41]. This model consists of a battery with a self-discharge loss that depends on the battery's current State Of Charge (SOC), and an energy conversion loss that represents the energy lost when charging or discharging the battery. A complete list of all notation used for this battery model is included in Table 3.

**Table 3. Table of the notation used in this section.**

| Parameter | Description |
|---|---|
| $P_{bat}(t)$ | Battery power at time $t$ |
| $SOC(t)$ | Battery state of charge at time $t$ |
| $\delta_{SOC}(t)$ | Change in SOC during time period $\tau$ |
| $\mu$ | Self-discharge loss factor |
| $\eta$ | Energy conversion efficiency |
| $SOC_{min}$ | Minimum rated SOC for limited battery operation |
| $SOC_{max}$ | Maximum rated SOC for limited battery operation |
| $C$ | Battery capacity |
| $P_{max}$ | Power rating of battery |

When an ideal battery charges or discharges, the change in SOC is related to the battery power, $P_{bat}$. When sampling battery operation at a regular period, $\tau$, the energy transferred into the battery can be described as $P_{bat}(t)\tau$. The change in SOC for this ideal battery, $\delta_{SOC}$, is therefore defined as:

$$\delta_{SOC}(t) := \frac{P_{bat}(t)\,\tau}{C} = SOC(t) - SOC(t - \tau) \quad (4)$$

The self-discharge loss is added to this ideal battery model to represent the continual loss of energy typical of chemical energy storage. This self-discharge loss, $\delta_{SOC,\text{self-discharge}}$, is proportional to the current SOC and is determined using the self-discharge loss factor, $\mu$:

$$\delta_{SOC,\text{self-discharge}}(t) := \mu\, SOC(t) \quad (5)$$

Additionally, to represent the losses in the power electronics and the energy conversion process, an energy conversion loss, $\delta_{SOC,\text{conversion}}$, is defined. This loss is proportional to the rate at which the battery's SOC changes, via the energy conversion efficiency $\hat{\eta}$, as follows:

$$\delta_{SOC,\text{conversion}}(t) := \hat{\eta}\, \delta_{SOC}(t) \quad (6)$$

Here, the conversion losses in the power electronics are reflected as an asymmetric efficiency, which depends on the direction of the flow of energy. This is done by charging the battery at a lower power when consuming energy and discharging it more quickly when releasing energy. Mathematically, this can be represented as:

$$\hat{\eta} = \begin{cases} \eta & \text{if } \delta_{SOC}(t) \ge 0 \\ \dfrac{1}{\eta} & \text{if } \delta_{SOC}(t) < 0 \end{cases} \quad (7)$$

When substituting the self-discharge loss and conversion losses, respectively $\delta_{SOC,\text{self-discharge}}$ and $\delta_{SOC,\text{conversion}}$, into the SOC evolution equation, the full battery model can be summarised as follows:

$$SOC(t) := \delta_{SOC}(t - \tau) - \delta_{SOC,\text{self-discharge}}(t - \tau) - \delta_{SOC,\text{conversion}}(t) = (1 - \mu)\,\delta_{SOC}(t - \tau) - \hat{\eta}\,\delta_{SOC}(t) \quad (8)$$

In addition, both the SOC and $P_{bat}$ are constrained by the device's maximum and minimum energy storage capabilities, $SOC_{max}$ and $SOC_{min}$ respectively, and by the maximum charge and discharge rate, $P_{max}$. These limitations are captured in Equations (9) and (10), respectively:

$$SOC_{min} \le SOC(t) \le SOC_{max} \quad (9)$$

$$|P_{bat}(t)| \le P_{max} \quad (10)$$
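A hedged sketch of one SOC update step follows. Because the extracted form of Equation (8) is ambiguous, the code uses one consistent reading (self-discharge applied to the stored SOC, conversion efficiency applied to the power step); the parameter values are illustrative assumptions.

```python
# A hedged sketch of the battery model of Section 3.3: SOC update with a
# self-discharge factor mu and an asymmetric conversion efficiency (Eqs. 4-10).
# Parameter values and the per-step form of the update are assumptions.

def step_soc(soc, p_bat_kw, tau_h=1/60, cap_kwh=7.0, mu=1e-5, eta=0.95,
             soc_min=0.0, soc_max=1.0, p_max_kw=2.0):
    p = max(-p_max_kw, min(p_max_kw, p_bat_kw))      # Eq. (10): power rating
    delta = p * tau_h / cap_kwh                      # Eq. (4): ideal SOC step
    eta_hat = eta if delta >= 0 else 1.0 / eta       # Eq. (7): asymmetric loss
    soc = (1 - mu) * soc + eta_hat * delta           # self-discharge + losses
    return max(soc_min, min(soc_max, soc))           # Eq. (9): SOC limits

soc = 0.5                                            # start at 50% SOC
for _ in range(60):                                  # charge at 2 kW for 1 h
    soc = step_soc(soc, 2.0)
print(f"SOC after 1 h of charging: {soc:.3f}")       # a bit under 0.5 + 2/7
```

Note how the asymmetric $\hat{\eta}$ stores slightly less energy than is drawn when charging and draws slightly more than is delivered when discharging, matching the description above.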
_3.4. Network Models_

To simulate the low-voltage energy distribution networks, the Open Distribution System Simulator (OpenDSS), developed by the Electric Power Research Institute (EPRI), was used. It requires element-based network models, including line, load and transformer information, and generates realistic power flow results.

**Figure 4.** Sample OpenDSS power flow plots of the power networks used. Consumers are indicated as red crosses and 11/0.416-kV substations are marked with a green square. (a) IEEE Power and Energy Society (PES) EU Low Voltage Test Feeder plot; (b) Scottish and Southern Energy Power Distribution (SSE-PD) Common Information Model (CIM) (UK) feeder plot.

Simulations were conducted using the IEEE's European Low Voltage Test Feeder [42] and six detailed U.K. feeder models that are based on real power distribution networks and provided by Scottish and Southern Energy Power Distribution (SSE-PD). The SSE-PD circuit models were provided as Common Information Models (CIM) during the collaboration on the New Thames Valley Vision Project (NTVV) [43]. An example of the IEEE EU LV Test feeder and a U.K. feeder provided by SSE-PD are shown in Figure 4a,b, respectively. A summary of these models' parameters is given in Table 4.

**Table 4. Network model parameters.**

| Parameter | 1 ¹ (IEEE EU LV Test Feeder) | 2 ¹ | 3 | 4 | 5 | 6 | 7 |
|---|---|---|---|---|---|---|---|
| Number of customers | 55 | 56 | 53 | 91 | 59 | 88 | 37 |
| Median load per customer (VA) | 227 | 227 | 231 | 241 | 224 | 237 | 237 |
| Maximum load per customer (kVA) | 16.8 | 16.8 | 16.8 | 19.5 | 16.8 | 19.5 | 16.8 |
| Median substation load (kVA) | 24.4 | 24.9 | 23.9 | 41.9 | 25.6 | 38.9 | 16.3 |
| Maximum substation load (kVA) | 72.6 | 72.7 | 72.2 | 92.9 | 73.5 | 89.6 | 60.5 |

¹ These networks are shown in Figure 4; networks 2–7 are the SSE-PD LV feeders. All customer connections are single-phase. The IEEE EU LV Test Feeder uses a three-phase implicit-neutral line model, while the SSE-PD feeders use a three-phase explicit-neutral line model.

Throughout this paper, all excerpt and time series results were extracted from experiments with the IEEE EU LV Test feeder (i.e., Network No. 1). All concluding results are based on an aggregation of all networks, to include network diversity in the analysis. The model-derived EV data and the IEEE EU LV Test feeder consumer demand profiles were used in all simulations. The resultant demand profiles represent the total daily electricity demand of households with EVs. These profiles were sampled at $\tau = 1$ min. The OpenDSS simulation environment was controlled from MATLAB through OpenDSS's Common Object Model (COM) interface, accessible using Microsoft's ActiveX server bridge.

**4. Storage Control**

In this section, the control of the energy storage system is explained. Firstly, the additive increase multiplicative decrease algorithm is presented and its decision mechanism is explained in full. Then, the voltage referencing used for AIMD+ is outlined.

_4.1. Additive Increase Multiplicative Decrease_

The proposed distributed battery storage control is shown in Algorithm 1. The parameter $\alpha$ denotes the size of the power's additive increase step, and $\beta$ denotes the size of the multiplicative decrease step. It is worth mentioning that $\alpha$ linearly increases, and $\beta$ exponentially decreases, both the charging and discharging powers, where discharging power is represented as a negative power flow, i.e., energy released by the battery. The constants $V_{max}$ and $V_{thr}$ are the maximum historic voltage value and the set-point threshold used to regulate the total demand. When the total demand is too high, the local voltages fall below $V_{thr}$, and the batteries reduce their charging power and start discharging; this behaviour reduces the total demand on the feeder. At simulation start, $V_{max}$ is set to the nominal voltage of the substation transformer, i.e., 240 V, and $V_{thr}$ is set to a fraction of $V_{max}$, which was found by solving a balanced power flow analysis. The variable $V(t)$ is the battery's local bus voltage, and $P_{max}$ denotes the maximum charging/discharging power of the battery.
The charging and discharging power of the batteries is increased in proportion to the available headroom on the network, which is inferred from the local voltage measurement $V(t)$, to avoid any sudden overloading of the substation transformer.

**Algorithm 1 Compute battery power.**

1: $R(t) = (V(t) - V_{thr})/(V_{max} - V_{thr})$ ▷ Defines the rate for the current voltage reading
2: **if** $V(t) \ge V_{thr}$ **then** ▷ Given the voltage levels are nominal...
3: &nbsp;&nbsp;**if** $SOC < SOC_{max}$ **then** ▷ ...and the battery is not fully charged...
4: &nbsp;&nbsp;&nbsp;&nbsp;$P(t) = P(t - \tau) + \alpha P_{max} R(t)$ ▷ ...increase the charging power
5: &nbsp;&nbsp;**else** ▷ If the battery has fully charged...
6: &nbsp;&nbsp;&nbsp;&nbsp;$P(t) = 0$ ▷ ...shut off
7: &nbsp;&nbsp;**end if**
8: &nbsp;&nbsp;**if** $P(t) < 0$ **then** ▷ If the battery has been discharging...
9: &nbsp;&nbsp;&nbsp;&nbsp;$P(t) = \beta P(t - \tau)$ ▷ ...reduce the discharging power by $\beta$
10: &nbsp;&nbsp;**end if**
11: **else** ▷ If voltage levels are not nominal...
12: &nbsp;&nbsp;**if** $SOC > SOC_{min}$ **then** ▷ ...and the battery is charged sufficiently...
13: &nbsp;&nbsp;&nbsp;&nbsp;$P(t) = P(t - \tau) + \alpha P_{max} R(t)$ ▷ ...increase the discharging power
14: &nbsp;&nbsp;**else** ▷ If the battery is not sufficiently charged...
15: &nbsp;&nbsp;&nbsp;&nbsp;$P(t) = 0$ ▷ ...shut off
16: &nbsp;&nbsp;**end if**
17: &nbsp;&nbsp;**if** $P(t) > 0$ **then** ▷ If the battery has been charging...
18: &nbsp;&nbsp;&nbsp;&nbsp;$P(t) = \beta P(t - \tau)$ ▷ ...reduce the charging power by $\beta$
19: &nbsp;&nbsp;**end if**
20: **end if**
21: $P(t) = \mathrm{signum}(P(t)) \times \min\{|P(t)|, P_{max}\}$ ▷ Limit the power to battery specifications

The algorithm itself, as shown in Algorithm 1, contains two decision levels. The first determines whether the network is over- or under-loaded by comparing the local bus voltage, $V(t)$, to the battery's set-point threshold, $V_{thr}$. In the event that the network is not under high load, the battery's SOC is compared to its operation limit to check whether the battery can charge, i.e., $SOC < SOC_{max}$. If there is enough charging capacity left, then the battery's charging power is linearly increased following Line 4. If the battery was previously discharging, the related discharging power is exponentially reduced (Line 9) to reflect the multiplicative decrease. The second decision level is entered when the network is under load. Here, the discharging power is linearly increased if the battery has enough energy stored, i.e., $SOC > SOC_{min}$ (Line 13). Additionally, if the battery was previously charging, then its charging power is multiplicatively reduced (Line 18). The direction of the charging/discharging power adjustment is determined by the first decision level, as well as by the threshold proximity ratio $R(t)$. As the battery's bus voltage, $V(t)$, approaches the threshold voltage, $V_{thr}$, this ratio tends to zero and, hence, stops the battery operation. Therefore, oscillatory hunting is effectively mitigated. The last step of the algorithm (Line 21) assures that the battery charge/discharge power is within its device rating.
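A hedged Python rendering of Algorithm 1 is given below. The constants are illustrative assumptions; under AIMD+, the threshold `v_thr` would come from the bus-specific reference voltage profile of Section 4.2 rather than being a single fixed value.

```python
# A hedged sketch of Algorithm 1 (AIMD battery power update).
# alpha, beta, thresholds and ratings below are illustrative values.

def aimd_step(p_prev, v, soc, v_thr, v_max=240.0, alpha=0.1, beta=0.5,
              p_max=2.0, soc_min=0.1, soc_max=0.9):
    r = (v - v_thr) / (v_max - v_thr)        # Line 1: threshold proximity ratio
    if v >= v_thr:                           # network lightly loaded
        p = p_prev + alpha * p_max * r if soc < soc_max else 0.0
        if p < 0:                            # was discharging: back off (Line 9)
            p = beta * p_prev
    else:                                    # network heavily loaded (r < 0)
        p = p_prev + alpha * p_max * r if soc > soc_min else 0.0
        if p > 0:                            # was charging: back off (Line 18)
            p = beta * p_prev
    return max(-p_max, min(p_max, p))        # Line 21: device power rating

# One battery reacting to a voltage dip below its 235 V threshold:
p = 0.0
for v in (239.0, 238.5, 233.0, 232.0, 236.0):
    p = aimd_step(p, v, soc=0.5, v_thr=235.0)
    print(f"V={v:5.1f} V -> P={p:+.3f} kW")
```

Running the loop shows the additive ramp-up while the voltage is healthy and the multiplicative back-off of charging power once the voltage falls below the threshold, exactly the behaviour the two decision levels describe.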
_4.2. Reference Voltage Profile_

When using a fixed voltage threshold, the difference in the location and load of each customer results in the over-utilisation of batteries located at the feeder end. Similar to Papaioannou et al. [44], yet for the control of BESS instead of EV charging, a reference voltage profile is proposed, which is produced by performing a power flow analysis of the network under maximum demand. An example of a fixed threshold and a reference voltage profile is shown in Figure 5. In AIMD+, consumers located at the head of the feeder are allocated a higher voltage threshold, while those towards the end of the feeder have voltage thresholds similar to the fixed threshold. This replicates the expected voltage drop along the length of the feeder, hence resulting in a more equal utilisation of the battery storage units located at those distances. The voltage threshold is set in such a way as to limit the maximum voltage drop to 3% at the end of the feeder.

**Figure 5.** A plot showing the difference between the fixed voltage threshold (AIMD) and the reference voltage profile (AIMD+).

**5. Scenarios and Comparison Metrics**

In this section, several scenarios are explained that were used to test the performance of the battery control algorithm. This is followed by the definition of three comparison metrics, which quantify the improvements achieved by the different algorithms in comparison to the worst case scenario.

_5.1. Test Cases and Scenarios_

In all simulations, the EVs plug in on arrival and charge at their nominal charging rate until fully charged. The BESS devices were chosen to have a capacity of 7 kWh with a maximum power rating of 2 kW (battery specifications are based on the Tesla Powerwall [45]). Four excerpt cases were defined with different levels of EV and storage uptake, as follows:

**A** A baseline scenario, where only household demand is used.

**B** A worst case scenario, in which EV uptake is 100% and no BESS is used.

**C** An AIMD scenario, in which EV uptake is 100% and each household has a battery energy storage device. Here, each battery was controlled using the AIMD algorithm with a fixed voltage threshold.

**D** An AIMD+ scenario, in which EV uptake is 100% and each household has a battery energy storage device. Here, each battery was controlled using the AIMD+ algorithm with the optimised reference voltage profile.

A storage uptake of 100% was adopted to represent the worst case scenario. In addition to the four defined scenarios, a full set of simulations was performed with EV and storage uptake combinations of 0% to 100%, in steps of 10%.

_5.2. Performance Metric Definition_

To obtain comparable performance metrics, three parameters are defined. These parameters capture the improvements in voltage violation mitigation, line overload reduction and the equality of battery usage. All excerpt performance metrics were calculated based on simulations of the IEEE EU LV Test feeder, for reproducibility.

5.2.1. Parameter for Voltage Improvement

The first parameters are $\zeta_C^*$ and $\zeta_D^*$ for Cases C and D, respectively; they quantify the magnitude of the voltage level improvement by comparing two voltage frequency distributions. More specifically, they find the difference between these probability distributions and compute a weighted sum. Here, the weighting, $\delta^*(v)$, emphasises voltage levels that deviate further from the nominal substation voltage $V_{ss}$. If the resulting weighted sum is negative, then the obtained voltage frequency distribution is improved in comparison to the associated worst case scenario. In contrast, a positive number indicates a worse outcome. The performance metric $\zeta_C^*$ is defined as follows:

$$\zeta_C^* := \sum_{v = V_{min}}^{V_{max}} \delta^*(v) \left[ P_C(v) - P_B(v) \right] \quad (11)$$

Here, $V_{min}$ is the lowest recorded voltage, and $V_{max}$ is the highest recorded voltage.
5.2.2. Parameter for Line Overload Reduction

Similar to measuring the voltage level improvements, the line utilisation probability distributions of the storage and worst case scenarios were compared. This follows a similar equation to Equation (11), but uses a different scaling factor:

$$\zeta^{**}_{C} := \sum_{c=0}^{C_{\max}} \delta^{**}(c)\,\left[P_C(c) - P_B(c)\right] \tag{13}$$

Here, $C_{\max}$ is the highest line utilisation. $P_B(c)$ and $P_C(c)$ are the line utilisation probability distributions for Cases B and C, respectively, and $\delta^{**}(c)$ is the associated scaling factor. Since the relationship between line current and ohmic losses is quadratic, this scaling factor is defined as a quadratic function that amplifies the impact of line currents beyond the line's nominal rating:

$$\delta^{**}(c) := \begin{cases} \left(\dfrac{c}{C_{\min}} - 1\right)^{2} & \text{if } c \geq C_{\min} \\[1.5ex] 0 & \text{otherwise} \end{cases} \tag{14}$$

The capacity scale modifier, $C_{\min}$, defines where the scaling starts and has been set to 0.5 for this work, as only line utilisation above 0.5 p.u. was considered. Therefore, a reduction in line overloads gives a negative $\zeta^{**}$, whereas a positive value implies a higher line utilisation, i.e., worse results.

5.2.3. Parameter for the Improvement of Battery Cycling

The final metric, $\zeta^{***}$, gives an indication of the inequality of battery cycling (one battery cycle is defined as a full discharge and charge of the battery at maximum operating power, i.e., $P_{\max}$) across all battery units. It does this by computing the ratio between the peak and mean battery cycling. This Peak-to-Average Ratio (PAR) of the batteries' cycling is defined in the following equation:

$$\zeta^{***}_{C} := \frac{\max C_C}{B^{-1}\sum_{b=1}^{B}\left|c^{b}_{C}\right|} \tag{15}$$

Here, B is the number of batteries, and $c^{b}_{C}$ is the total cycling of battery b during Scenario C. $C_C$ is a vector in $\mathbb{R}^{B}_{\geq 0}$ that contains all the batteries' cycling values, i.e., $c^{b}_{C} \in C_C$. Equally, the battery cycling for Scenario D is captured by $\zeta^{***}_{D}$. In the unlikely event of an equal cycling of all batteries, $\zeta^{***}$ will have a value of one. Yet, as batteries are operated differently, the value of $\zeta^{***}$ is likely to be greater than one. Therefore, a resulting PAR closer to one implies a more equal and therefore fairer utilisation of the deployed batteries.
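The remaining two metrics follow the same pattern; a sketch under the same assumptions (probabilities over common utilisation bins, and a list of per-battery cycling totals; names again illustrative):

```python
def delta_star2(c, c_min=0.5):
    # Equation (14): quadratic amplification of utilisation beyond C_min
    return ((c / c_min) - 1.0) ** 2 if c >= c_min else 0.0

def zeta_star2(bins, p_b, p_c, c_min=0.5):
    # Equation (13): note the order P_C - P_B, so negative = fewer overloads
    return sum(delta_star2(c, c_min) * (pc - pb)
               for c, pb, pc in zip(bins, p_b, p_c))

def zeta_par(cycling):
    # Equation (15): Peak-to-Average Ratio of the batteries' cycling totals;
    # a value close to one indicates an equal, fair battery utilisation.
    return max(cycling) / (sum(cycling) / len(cycling))
```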
**6. Results and Discussion**

In this section, the results generated from all simulations are outlined. In each of the three subsections, the performances of the AIMD and AIMD+ algorithms are compared against each other. To do so, the performance metrics outlined in Section 5.2 were used. In each subsection, results from the four test cases defined as A, B, C and D in Section 5.1 are explained first; then the results from the full analysis over the large range of EV and battery storage uptakes are presented. Finally, these results are summarised and discussed.

_6.1. Voltage Violation Analysis_

For the comparison of voltage improvements, the analysis assesses the algorithms' ability to reduce bus voltage variation, particularly by raising the lowest recorded bus voltage. Each load's bus voltage was recorded, from which a sample voltage profile, Figure 6, was extracted, where the bus voltage fluctuation over time becomes apparent. It can be seen that the introduction of EVs has significantly lowered the line-to-neutral voltage. Adding BESS devices did raise the voltage levels during times of peak demand, as can be seen between 17:00 and 21:00, where the AIMD+ algorithm elevated voltages further than the AIMD scenario. To obtain a better understanding of the level of improvement, the voltage frequency distribution of all buses along the feeder was generated and plotted in a histogram in Figure 7.

**Figure 6.** Recorded voltage profile at the bus of the customer closest to the substation over the period of one day with a certain uptake in EV and battery storage devices, using a moving average over a window of 5 min. Here, Case A is blue; Case B is red; Case C is yellow; and Case D is violet.

In this histogram, the voltage probability distributions for all four cases were normalised and plotted against each other. Here, the previously seen drop in voltages caused by introducing EVs is recorded as a shift in the voltage distribution. This voltage drop is mitigated by the introduction of the storage solutions, since the probability distribution is shifted towards higher voltage bands. For the IEEE EU LV Test feeder, the AIMD+-controlled batteries outperform the AIMD devices as the resulting $\zeta^{*}_{C}$ is greater than $\zeta^{*}_{D}$.

**Figure 7.** Voltage probability distribution of all loads' buses for certain uptakes of EV and battery storage devices. Here, Case A is blue; Case B is red; Case C is yellow; and Case D is violet; with $\zeta^{*}_{C} = -0.153$ and $\zeta^{*}_{D} = -0.135$.

To gain a full understanding of the performance of the AIMD and AIMD+ algorithms, a full sweep of EV and BESS uptake combinations was simulated on all available power distribution networks. The resulting parameters were averaged and plotted in Figure 8.

**Figure 8.** Comparison of voltage improvement indices (i.e., $\zeta^{*}$): (a) $\zeta^{*}_{C}$ indices (AIMD); (b) $\zeta^{*}_{D}$ indices (AIMD+).

These figures show that the AIMD+ control algorithm reduces voltage deviation more effectively as the uptake in storage and EVs increases. For low storage uptake, the AIMD algorithm does not perform as strongly, since more $\zeta^{*}_{C}$ values are positive and larger than their corresponding $\zeta^{*}_{D}$ values. This becomes more apparent when averaging all $\zeta^{*}_{C}$ and $\zeta^{*}_{D}$ values for their common storage uptake and across all EV uptakes. The resulting averaged metrics are plotted in Figure 9. In this last figure, it can be seen how the sole impact of BESS uptake is reflected in a continuing improvement of voltage levels.
In fact, both compared algorithms improved the bus voltage, which coincides with the findings in the case studies. On average, this is the case for all BESS uptakes, as $\zeta^{*}_{C} \approx \zeta^{*}_{D}$. Nonetheless, it should be noted that the AIMD+ algorithm reduced the frequency of severe voltage deviations in comparison to the AIMD algorithm and is more effective during scenarios with lower BESS uptake.

**Figure 9.** Average $\zeta^{*}_{C}$ (AIMD) and $\zeta^{*}_{D}$ (AIMD+) values recorded against the corresponding storage uptake.

_6.2. Line Overload Analysis_

Similar to the voltage improvement analysis, a frequency distribution of the line utilisation was generated. Figure 10 shows a probability distribution of the per unit current (1 p.u. represents a 100% line usage, i.e., a line current of the same value as the line's nominal current rating) in all lines, for each of the four scenarios. The corresponding $\zeta^{**}_{C}$ and $\zeta^{**}_{D}$ values for the AIMD and AIMD+ storage deployments have also been included in the figure's caption. In this figure, the observed high probability of line over-utilisation confirms that the used test network is of insufficient capacity to cater for the chosen EV uptake.

**Figure 10.** Line utilisation probability distribution of all lines in the simulated feeder for certain uptakes of EV and battery storage devices. Here, Case A is blue; Case B is red; Case C is yellow; and Case D is violet; with $\zeta^{**}_{C} = -0.360$ and $\zeta^{**}_{D} = -0.518$.

Here, the AIMD+-controlled storage devices yielded a noticeable reduction in line overloads. This improvement is apparent through the compressed width of the probability distribution and the negative $\zeta^{**}_{D}$ value. In contrast, the AIMD-controlled storage devices do not utilise the line capacity as effectively, which leads to a positive value of $\zeta^{**}_{C}$. To evaluate the line utilisation improvement across all simulations, the full range of EV and storage uptake was evaluated. The resulting plots are shown in Figure 11.

In these figures, it can be seen how the performance metrics change as EV uptake and storage uptake increase. For the AIMD-controlled BESS, the resulting $\zeta^{**}_{C}$ values are distributed around zero, whereas the AIMD+ algorithm achieved mostly negative values of $\zeta^{**}_{D}$. These negative values confirm the better usage of available line capacity. This becomes particularly noticeable for scenarios where very low EV uptake is combined with larger BESS uptake. Here, AIMD-controlled storage devices commence their initial charge simultaneously. As they are located closer to the substation, they do not measure a sufficient bus voltage offset to regulate down their charging power. This behaviour causes a number of line overloads at the very beginning of the simulated days. The AIMD+ algorithm, on the other hand, with its adjusted thresholds, is more responsive to non-optimal network operation and, therefore, increases the charging rate gradually.

**Figure 11.** Comparison of line utilisation improvement indices: (a) $\zeta^{**}_{C}$ indices (AIMD); (b) $\zeta^{**}_{D}$ indices (AIMD+).

This gradual adjustment is based on the fact that the bus voltages in the AIMD+ algorithm are closer to their nominal voltages (i.e., the bus voltages found by simulating the feeder with its equally-distributed nominal load) than they are in the conventional AIMD case.
A greater voltage disparity, which is the case in AIMD, causes a prolonged additive adjustment to the battery's power. This prolonged adjustment is particularly apparent for batteries situated at the bottom of the feeder, as their voltage measurements deviate the furthest from the substation voltage level. AIMD+, on the other hand, prevents this behaviour by setting the voltage threshold based on the network's nominal voltage drop, which is dependent on the distance between the BESS and its feeding substation. As a result, the set-point voltage thresholds at the bottom of the feeder are lower than those closer to the substation. Hence, the additive power adjustment is equalised along the entire feeder. Therefore, by applying these individualised control thresholds, the sensitivity of the algorithm is corrected, whilst successfully mitigating the severity of line overloads.

Averaging the $\zeta^{**}_{C}$ and $\zeta^{**}_{D}$ values over all EV uptakes gives a clearer indication of performance, as this is now the only variable in the performance analysis. The result is plotted in Figure 12. Here, the hypothesis that AIMD-controlled energy storage devices do not improve line utilisation is confirmed. In contrast, the AIMD+-controlled devices succeed at effectively reducing line overloads. This is also demonstrated by the values of $\zeta^{**}_{C}$, which remain positive yet close to zero, whereas $\zeta^{**}_{D}$ decreases with increasing uptake of battery storage devices.

**Figure 12.** Average $\zeta^{**}_{C}$ (AIMD) and $\zeta^{**}_{D}$ (AIMD+) values recorded against the corresponding storage uptake.

Whilst the deployment of energy storage has often been seen as a possible solution to defer network reinforcements, the presented results show that this is not always the case. In fact, the importance of choosing an appropriate control algorithm outweighs the availability of the energy storage itself. This becomes particularly apparent when energy storage devices need to recharge their injected energy for times of peak demand. For the AIMD case, this recharging was not controlled sufficiently, which led to higher line currents. The proposed AIMD+ algorithm was not as susceptible to this kind of behaviour, as it has been designed to take battery location into account. This immunity and well-controlled power flow caused little to no additional strain on the network's equipment, allowing the deployed storage devices to also provide voltage support.

_6.3. Battery Utilisation Analysis_

In this part of the analysis, the batteries' fairness of usage was evaluated. The battery power profiles were recorded; excerpts are plotted in Figure 13 and are arranged by distance from the substation.

**Figure 13.** Battery power profiles of each load's battery storage device over four days: (a) Case C, 60% EV and 100% AIMD uptake (kW); (b) Case D, 60% EV and 100% AIMD+ uptake (kW).

In this figure, it can be seen that only half of the deployed storage devices were active in Case C (AIMD control), whereas all devices are utilised in Case D (AIMD+ control). From the recorded battery SOC profiles, the net cycling of each battery was computed and divided by the duration of the simulation, giving an average daily cycling value. This value is plotted for each load in Figure 14a. The corresponding statistical analysis is presented in Figure 14b.

**Figure 14.** Each load's battery cycling compared for (a) 60% EV and 100% AIMD and AIMD+ uptake and (b) in a statistical context.
Here, $\zeta^{***}_{C} = 3.89$ and $\zeta^{***}_{D} = 2.54$. (a) Battery cycling for each load; (b) statistics.

These two plots show the under-usage of AIMD-controlled batteries, as well as the variance in battery usage under AIMD and AIMD+ control. In fact, under AIMD control, 20 out of 55 batteries experienced a cycling of less than 10% per day, whereas the remaining devices were utilised fully. This discrepancy causes the $\zeta^{***}_{C}$ value to be noticeably larger than $\zeta^{***}_{D}$. A more detailed comparison is given when plotting the Peak-to-Average Ratios (PAR) of the batteries' daily cycling over the full range of EV and storage uptake scenarios; these plots are shown in Figure 15. Section 5.2.3 gives the details of the PAR, $\zeta^{***}$.

**Figure 15.** Peak-to-Average Ratios (PAR) of the battery cycling profiles of each load's battery storage device over four days: (a) cycling PAR for AIMD; (b) cycling PAR for AIMD+.

The figure shows that for any EV uptake scenario, AIMD-controlled energy storage units were cycled less equally than the AIMD+-controlled devices. Results also show that with a low EV uptake, both the AIMD and AIMD+ algorithms performed worse, yet improved as EV uptake increased. Averaging the PARs for all batteries' SOC profiles over all EV uptake percentages yields a clear performance difference between AIMD and AIMD+. These resulting PARs, i.e., the $\zeta^{***}_{C}$ and $\zeta^{***}_{D}$ values for their corresponding storage uptake percentages, are presented in Figure 16.

**Figure 16.** The performance index $\zeta^{***}_{C}$ for AIMD storage and $\zeta^{***}_{D}$ for AIMD+ storage control against storage uptake.

Although the AIMD-controlled batteries were, on average, cycled less than the batteries controlled by the proposed AIMD+ algorithm, looking at the average produces a distorted understanding of the performance. In fact, as more than half of the assigned AIMD BESS devices never partook in the network control, a lower average cycling was expected to begin with. The variation in cycling across all batteries, or the cycling PAR, reveals the difference between usage and effective usage. A lower ratio indicates a better usage of the deployed batteries.

**7. Conclusions**

In this paper, an algorithm is proposed for distributed battery energy storage, in order to mitigate the negative impact of highly variable uncontrolled loads, such as the charging of EVs. The improved AIMD algorithm uses local bus voltage measurements and implements a reference voltage profile, derived from power flow analysis of the distribution network, for its set-point control. Taking the distance to the feeding substation into account allowed optimising the algorithm's parameters for each BESS. Simulations were performed on the IEEE EU LV Test feeder and a set of real U.K. suburban network models. Comparisons were made of the standard AIMD algorithm with a fixed voltage threshold against the proposed AIMD+ algorithm using a reference voltage threshold. A set of European demand profiles and a realistic EV travel model were used to feed load data into the simulations. For all conducted simulations, the performance of the energy storage units was improved by using the proposed AIMD+ algorithm instead of traditional AIMD control. The improved algorithm resulted in a reduction of voltage variation and an increased utilisation of available line capacity, which also reduced the frequency of line overloads.
Additionally, the same algorithm equalised the cycling and utilisation of battery energy storage, making better use of the deployed battery assets. Future work will also consider distributed generation, such as photovoltaic panels, smart-charging EV uptake, as well as decentralised methods for determining voltage reference values, so that no prior network knowledge is required.

**Acknowledgments:** The authors would like to thank SSE-PD for providing their network information for the utilised U.K. feeder models and also Miss Catriona Scrivener for proofreading this manuscript.

**Author Contributions:** Maximilian J. Zangs and Peter B. E. Adams contributed equally to this piece of work and were supervised by William Holderbaum and Ben A. Potter. Timur Yunusov has provided technical input and feedback throughout.

**Conflicts of Interest:** The authors declare no conflict of interest.

**References**

1. Shah, V.; Booream-Phelps, J. F.I.T.T. for Investors: Crossing the Chasm; Technical Report; Deutsche Bank Market Research: Frankfurt am Main, Germany, 2015.
2. Department for Business Enterprise and Regulatory Reform (DBER); Department for Transport (DfT). Investigation into the Scope for the Transport Sector to Switch to Electric Vehicles and Plug-in Hybrid Vehicles; Technical Report; Department for Business Enterprise and Regulatory Reform (DBER), Department for Transport (DfT): London, UK, 2008.
3. Ecolane, University of Aberdeen. Pathways to High Penetration of Electric Vehicles; Technical Report; Ecolane, University of Aberdeen: Bristol, UK, 2013.
4. Clement-Nyns, K.; Haesen, E.; Driesen, J. The impact of charging plug-in hybrid electric vehicles on a residential distribution grid. IEEE Trans. Power Syst. 2010, 25, 371–380.
5. Fernández, L.P.; San Román, T.G.; Cossent, R.; Domingo, C.M.; Frías, P. Assessment of the impact of plug-in electric vehicles on distribution networks. IEEE Trans. Power Syst. 2011, 26, 206–213.
6. Hadley, S.W.; Tsvetkova, A.A. Potential Impacts of Plug-in Hybrid Electric Vehicles on Regional Power Generation. Electr. J. 2009, 22, 56–68.
7. Putrus, G.; Suwanapingkarl, P.; Johnston, D.; Bentley, E.; Narayana, M. Impact of electric vehicles on power distribution networks. In Proceedings of the IEEE Vehicle Power and Propulsion Conference, Dearborn, MI, USA, 7–10 September 2009; pp. 827–831.
8. Pillai, J.R.; Bak-Jensen, B. Vehicle-to-grid systems for frequency regulation in an islanded Danish distribution network. In Proceedings of the IEEE Vehicle Power and Propulsion Conference (VPPC), Lille, France, 1–3 September 2010.
9. Zhou, K.; Cai, L. Randomized PHEV Charging Under Distribution Grid Constraints. IEEE Trans. Smart Grid 2014, 5, 879–887.
10. Mohsenian-Rad, A.H.; Wong, V.W.S.; Jatskevich, J.; Schober, R.; Leon-Garcia, A. Autonomous demand-side management based on game-theoretic energy consumption scheduling for the future smart grid. IEEE Trans. Smart Grid 2010, 1, 320–331.
11. Deilami, S.; Masoum, A.S. Real-time coordination of plug-in electric vehicle charging in smart grids to minimize power losses and improve voltage profile. IEEE Trans. Smart Grid 2011, 2, 456–467.
12. Masoum, A.S.; Deilami, S.; Masoum, M.A.S. Fuzzy Approach for Online Coordination of Plug-In Electric Vehicle Charging in Smart Grid. IEEE Trans. Sustain. Energy 2015, 6, 1112–1121.
13. Karfopoulos, E.L.; Hatziargyriou, N.D. A Multi-Agent System for Controlled Charging of a Large Population of Electric Vehicles. IEEE Trans. Power Syst. 2013, 28, 1196–1204.
14. Wu, C.; Mohsenian-Rad, H.; Huang, J. Vehicle-to-aggregator interaction game. IEEE Trans. Smart Grid 2012, 3, 434–442.
15. Samadi, P.; Mohsenian-Rad, H.; Schober, R.; Wong, V.W.S. Advanced Demand Side Management for the Future Smart Grid Using Mechanism Design. IEEE Trans. Smart Grid 2012, 3, 1170–1180.
16. Xu, N.Z.; Chung, C.Y. Challenges in Future Competition of Electric Vehicle Charging Management and Solutions. IEEE Trans. Smart Grid 2015, 6, 1323–1331.
17. Leadbetter, J.; Swan, L. Battery storage system for residential electricity peak demand shaving. Energy Build. 2012, 55, 685–692.
18. Sugihara, H.; Yokoyama, K.; Saeki, O.; Tsuji, K.; Funaki, T. Economic and efficient voltage management using customer-owned energy storage systems in a distribution network with high penetration of photovoltaic systems. IEEE Trans. Power Syst. 2013, 28, 102–111.
19. Toledo, O.M.; Oliveira, D.; Diniz, A.; Martins, J.H.; Vale, M.H.M. Methodology for Evaluation of Grid-Tie Connection of Distributed Energy Resources-Case Study with Photovoltaic and Energy Storage. IEEE Trans. Power Syst. 2013, 28, 1132–1139.
20. Marra, F.; Yang, G.Y.; Fawzy, Y.T.; Træholt, C.; Larsen, E.; Garcia-Valle, R.; Jensen, M.M. Improvement of local voltage in feeders with photovoltaic using electric vehicles. IEEE Trans. Power Syst. 2013, 28, 3515–3516.
21. Mokhtari, G.; Nourbakhsh, G.; Ghosh, A. Smart coordination of energy storage units (ESUs) for voltage and loading management in distribution networks. IEEE Trans. Power Syst. 2013, 28, 4812–4820.
22. Atia, R.; Yamada, N. Sizing and Analysis of Renewable Energy and Battery Systems in Residential Microgrids. IEEE Trans. Smart Grid 2016, 7, 1204–1213.
23. Hatziargyriou, N.D.; Škrlec, D.; Capuder, T.; Georgilakis, P.S.; Zidar, M. Review of energy storage allocation in power distribution networks: Applications, methods and future research. IET Gener. Transm. Distrib. 2015, 10, 1–8.
24. Chiu, D.M.; Jain, R. Analysis of the increase and decrease algorithms for congestion avoidance in computer networks. Comput. Netw. ISDN Syst. 1989, 17, 1–14.
25. Wirth, F.; Stuedli, S.; Yu, J.Y.; Corless, M.; Shorten, R. IBM Research Report: Nonhomogeneous Place-Dependent Markov Chains, Unsynchronised AIMD, and Network Utility Maximization; Technical Report; IBM: New York, NY, USA, 2014.
26. Stüdli, S.; Crisostomi, E.; Middleton, R.; Shorten, R. A flexible distributed framework for realising electric and plug-in hybrid vehicle charging policies. Int. J. Control 2012, 85, 1130–1145.
27. Stüdli, S.; Griggs, W.; Crisostomi, E.; Shorten, R. On Optimality Criteria for Reverse Charging of Electric Vehicles. IEEE Trans. Intell. Transp. Syst. 2014, 15, 451–456.
28. Stüdli, S.; Crisostomi, E.; Middleton, R.; Shorten, R. Optimal real-time distributed V2G and G2V management of electric vehicles. Int. J. Control 2014, 87, 1153–1162.
29. Stüdli, S.; Crisostomi, E.; Middleton, R.; Braslavsky, J.; Shorten, R. Distributed Load Management Using Additive Increase Multiplicative Decrease Based Techniques. In Plug in Electric Vehicles in Smart Grids; Springer: Singapore, 2015; pp. 173–202.
30. Mareels, I.; Alpcan, T.; Brazil, M.; de Hoog, J.; Thomas, D.A. A distributed electric vehicle charging management algorithm using only local measurements. In Proceedings of the IEEE PES Innovative Smart Grid Technologies Conference (ISGT), Washington, DC, USA, 19–22 February 2014; pp. 1–5.
31. Xia, L.; Hoog, J.D.; Alpcan, T.; Brazil, M. Electric Vehicle Charging: A Noncooperative Game Using Local Measurements; The International Federation of Automatic Control: Cape Town, South Africa, 2014; pp. 5426–5431.
32. Munkhammar, J.; Bishop, J.D.; Sarralde, J.J.; Tian, W.; Choudhary, R. Household electricity use, electric vehicle home-charging and distributed photovoltaic power production in the city of Westminster. Energy Build. 2015, 86, 439–448.
33. Dallinger, D.; Wietschel, M. Grid integration of intermittent renewable energy sources using price-responsive plug-in electric vehicles. Renew. Sustain. Energy Rev. 2012, 16, 3370–3382.
34. Institut für angewandte Sozialwissenschaft GmbH; Deutsches Zentrum für Luft- und Raumfahrt e.V. Mobilität in Deutschland 2008; Technical Report; Mobilität in Deutschland: Bonn and Berlin, Germany, 2008.
35. National Grid. Future Energy Scenarios 2015; Technical Report; National Grid: Warwick, UK, July 2015.
36. Rowe, M.; Yunusov, T.; Haben, S.; Singleton, C.; Holderbaum, W.; Potter, B. A Peak Reduction Scheduling Algorithm for Storage Devices on the Low Voltage Network. IEEE Trans. Smart Grid 2014, 5, 2115–2124.
37. Laresgoiti, I.; Käbitz, S.; Ecker, M.; Sauer, D.U. Modeling mechanical degradation in lithium ion batteries during cycling: Solid electrolyte interphase fracture. J. Power Sources 2015, 300, 112–122.
38. Government Digital Service. Vehicle Free-Flow Speeds (SPE01); Government Digital Service: London, UK, 2013.
39. Office for Low Emission Vehicles. Electric Vehicle Homecharging Scheme—Guidance for Manufacturers and Installers; Technical Report; Office for Low Emission Vehicles: London, UK, 2016.
40. Rowe, M.; Holderbaum, W.; Potter, B. Control methodologies: Peak reduction algorithms for DNO owned storage devices on the Low Voltage network. In Proceedings of the 4th IEEE/PES Innovative Smart Grid Technologies Europe (ISGT), Lyngby, Denmark, 6–9 October 2013; pp. 1–5.
41. Rowe, M.; Yunusov, T.; Haben, S.; Holderbaum, W.; Potter, B. The real-time optimisation of DNO owned storage devices on the LV network for peak reduction. Energies 2014, 7, 3537–3560.
42. IEEE Power & Energy Society. European Low Voltage Test Feeder. Available online: http://ewh.ieee.org/soc/pes/dsacom/testfeeders/ (accessed on 31 January 2016).
43. Thames Valley Vision—Project Library—Published Documents. Available online: http://thamesvalleyvision.co.uk/project-library/published-documents/ (accessed on 31 January 2016).
44. Papaioannou, I.T.; Purvins, A.; Demoulias, C.S. Reactive power consumption in photovoltaic inverters: A novel configuration for voltage regulation in low-voltage radial feeders with no need for central control. Prog. Photovolt. Res. Appl. 2015, 23, 611–619.
45. Tesla Motors Inc. Tesla Powerwall; Tesla Motors Inc.: Fremont, CA, USA, 2015.

© 2016 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC-BY) license (http://creativecommons.org/licenses/by/4.0/).
{ "disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.3390/EN9080647?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.3390/EN9080647, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "CCBY", "status": "GOLD", "url": "https://www.mdpi.com/1996-1073/9/8/647/pdf?version=1471431770" }
2,016
[]
true
2016-08-17T00:00:00
[ { "paperId": "f92dbeeeff4c7c01616c4ebbbeea94cb79e3a009", "title": "Service" }, { "paperId": "bea991075c2d63ff016a05298daa2aee58ef32bb", "title": "Review of energy storage allocation in power distribution networks: applications, methods and future research" }, { "paperId": "d2c2d2430a7b21ab6448983dd8948bf28d2c1165", "title": "Sizing and Analysis of Renewable Energy and Battery Systems in Residential Microgrids" }, { "paperId": "4bfb725c697d5ad666d84f5fea7adb73819db862", "title": "Modeling mechanical degradation in lithium ion batteries during cycling: Solid electrolyte interphase fracture" }, { "paperId": "18e110a5314c4d8a652bbf980ecda63b4ec7c8db", "title": "Fuzzy Approach for Online Coordination of Plug-In Electric Vehicle Charging in Smart Grid" }, { "paperId": "53cf2ba8428417db7296ab8edc4bae922e5a6246", "title": "Challenges in Future Competition of Electric Vehicle Charging Management and Solutions" }, { "paperId": "74c6066ca2e388b88f85c3f3ee7a4a6016968115", "title": "Reactive power consumption in photovoltaic inverters: a novel configuration for voltage regulation in low‐voltage radial feeders with no need for central control" }, { "paperId": "e280a50a565feb8bffe6f121cd689eae516caccd", "title": "A Peak Reduction Scheduling Algorithm for Storage Devices on the Low Voltage Network" }, { "paperId": "c7dbc255e51d2693d581850184f178f5c41985bb", "title": "The Real-Time Optimisation of DNO Owned Storage Devices on the LV Network for Peak Reduction" }, { "paperId": "c79e09ec37d3ced5fe30e5d41ff2e826b5da1457", "title": "A distributed electric vehicle charging management algorithm using only local measurements" }, { "paperId": "20c178d152b3fab13710d3dd7ae1d8d4c0be2f28", "title": "Nonhomogeneous Place-Dependent Markov Chains, Unsynchronised AIMD, and Network Utility Maximization" }, { "paperId": "22016e5e4871e52a377b927fe4e376775bd96821", "title": "Randomized PHEV Charging Under Distribution Grid Constraints" }, { "paperId": "e31aca41cda405f371bfb1f71725ce3638fa9d84", "title": "Optimal real-time distributed V2G and G2V management of electric vehicles" }, { "paperId": "75edf98c614dbe54b330502000b986846a08ca1e", "title": "On Optimality Criteria for Reverse Charging of Electric Vehicles" }, { "paperId": "9cd40acac793e054efaed364af3c3a36e315472e", "title": "Control methodologies: Peak reduction algorithms for DNO owned storage devices on the Low Voltage network" }, { "paperId": "304ea789633461c2ed89c078743b431ad9ceefde", "title": "Smart Coordination of Energy Storage Units (ESUs) for Voltage and Loading Management in Distribution Networks" }, { "paperId": "00e3337142859baf13ecdcf8dd3189d832f316b3", "title": "A Multi-Agent System for Controlled Charging of a Large Population of Electric Vehicles" }, { "paperId": "00bbd9ef8ad23302ebea5f3893f436a24a62c16a", "title": "Methodology for evaluation of grid-tie connection of distributed energy resources - Case study with photovoltaic and energy storage" }, { "paperId": "b5d6abdef987fc9cf776fc57a6fdb0237a3f0057", "title": "Improvement of Local Voltage in Feeders With Photovoltaic Using Electric Vehicles" }, { "paperId": "6db10cf37bcfe71c37dcaf166114ae2f0a082c34", "title": "Economic and Efficient Voltage Management Using Customer-Owned Energy Storage Systems in a Distribution Network With High Penetration of Photovoltaic Systems" }, { "paperId": "9c3d2cce5a973fa5b4023c5baa492223a6e636c7", "title": "Battery storage system for residential electricity peak demand shaving" }, { "paperId": "0c386db6f124431e753b5fbba79cab32a07ed1cc", "title": "Advanced Demand Side 
Management for the Future Smart Grid Using Mechanism Design" }, { "paperId": "0fd0ef5c807d6cca4c4012108dcb229f8019c758", "title": "A flexible distributed framework for realising electric and plug-in hybrid vehicle charging policies" }, { "paperId": "7d546948366df251e4007e446570ae7b40cb6afe", "title": "Grid integration of intermittent renewable energy sources using price-responsive plug-in electric vehicles" }, { "paperId": "2d69bb478b740fa70e4f2e0da55cdc8c67f60489", "title": "Vehicle-to-Aggregator Interaction Game" }, { "paperId": "8a59e11d3e42d200cb4892194f2b018b1952e806", "title": "Real-Time Coordination of Plug-In Electric Vehicle Charging in Smart Grids to Minimize Power Losses and Improve Voltage Profile" }, { "paperId": "eeff9128a6f0fec8d2af1afa8241253b59458297", "title": "Assessment of the Impact of Plug-in Electric Vehicles on Distribution Networks" }, { "paperId": "021d2b357c2d4fb0000f93970732bdef961ac6d4", "title": "Autonomous Demand-Side Management Based on Game-Theoretic Energy Consumption Scheduling for the Future Smart Grid" }, { "paperId": "783fad50d6a366fb48f0343d233d14b5ed5ca501", "title": "Vehicle-to-grid systems for frequency regulation in an Islanded Danish distribution network" }, { "paperId": "88d9ed51d8bf9c230052dcadad0b675588e46981", "title": "The Impact of Charging Plug-In Hybrid Electric Vehicles on a Residential Distribution Grid" }, { "paperId": "3fe0ddbe3dbcf8cd0bc00923755f56385e65e8fd", "title": "Potential Impacts of Plug-in Hybrid Electric Vehicles on Regional Power Generation" }, { "paperId": "4239b92d7d159c483dbd1f933be2d4795e44b4f5", "title": "Impact of electric vehicles on power distribution networks" }, { "paperId": "805d0da469da6ba7571ee75732ab66202aaea9e0", "title": "Analysis of the Increase and Decrease Algorithms for Congestion Avoidance in Computer Networks" }, { "paperId": "c37157c4b031f8e5e9669135e226127bf10c4ed3", "title": "Distributed Load Management Using Additive Increase Multiplicative Decrease Based Techniques" }, { "paperId": "3f996d1353aaad418807ac2f5401f97da7d2058a", "title": "Household electricity use, electric vehicle home-charging and distributed photovoltaic power production in the city of Westminster" }, { "paperId": "cc953dffedaf5347dfd1cfe0b458a44dd046594e", "title": "Electric Vehicle Charging: A Noncooperative Game Using Local Measurements" }, { "paperId": null, "title": "Autonomous 399 demand-side management based on game-theoretic energy consumption scheduling for the future 400 smart grid" }, { "paperId": null, "title": "F.I.T.T. 
for Investors: Crossing the Chasm" }, { "paperId": null, "title": "Energy , European Low Voltage Test Feeder Thames Valley Vision — Project Library — Published Documents" }, { "paperId": null, "title": "Energy, European Low Voltage Test Feeder" }, { "paperId": null, "title": "This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC-BY) license" }, { "paperId": null, "title": "Pathways to High Penetration of Electric Vehicles" }, { "paperId": null, "title": "European Low Voltage Test Feeder" }, { "paperId": null, "title": "Electric Vehicle Charging: A Noncooperative Game Using Local Measurements; The International Federation of Automatic Control: Cape" }, { "paperId": null, "title": "Thames Valley Vision-Project Library-Published Documents" }, { "paperId": null, "title": "Electric Vehicle Homecharging Scheme-Guidance for Manufacturers and Installers" }, { "paperId": null, "title": "Investigation into the Scope for the Transport Sector to Switch to Electric Vehicles and Plug-in Hybrid Vehicles" }, { "paperId": null, "title": "for Investors: Crossing the Chasm" }, { "paperId": null, "title": "Government Digital Service Vehicle Free-Flow Speeds (SPE01); Government Digital Service" }, { "paperId": null, "title": "This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC-BY) license (http://creativecommons" }, { "paperId": null, "title": "Household electricity 430 use, electric vehicle home-charging and distributed photovoltaic power production in the city of 431 Westminster" }, { "paperId": null, "title": "Office for Low Emission Vehicles Electric Vehicle Homecharging Scheme—Guidance for Manufacturers and Installers" }, { "paperId": null, "title": "Impact of electric 392 vehicles on power distribution networks" } ]
15,377
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Mathematics", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/02086f491c51a3222da753b7d2b7100fffa01e0b
[ "Computer Science" ]
0.813728
Compact McEliece Keys from Goppa Codes
02086f491c51a3222da753b7d2b7100fffa01e0b
IACR Cryptology ePrint Archive
[ { "authorId": "2237912", "name": "Rafael Misoczki" }, { "authorId": "1698800", "name": "Paulo L. Barreto" } ]
{ "alternate_issns": null, "alternate_names": [ "IACR Cryptol eprint Arch" ], "alternate_urls": null, "id": "166fd2b5-a928-4a98-a449-3b90935cc101", "issn": null, "name": "IACR Cryptology ePrint Archive", "type": "journal", "url": "http://eprint.iacr.org/" }
null
# Compact McEliece Keys from Goppa Codes

Rafael Misoczki and Paulo S.L.M. Barreto⋆

Departamento de Engenharia de Computação e Sistemas Digitais (PCS), Escola Politécnica, Universidade de São Paulo, Brazil. {rmisoczki,pbarreto}@larc.usp.br

⋆ Supported by the Brazilian National Council for Scientific and Technological Development (CNPq) under research productivity grant 312005/2006-7 and universal grant 485317/2007-9, and by the Science Foundation Ireland (SFI) as E. T. S. Walton Award fellow under grant 07/W.1/I1824.

**Abstract.** The classical McEliece cryptosystem is built upon the class of Goppa codes, which remains secure to this date in contrast to many other families of codes, but leads to very large public keys. Previous proposals to obtain short McEliece keys have primarily centered around replacing that class by other families of codes, most of which were shown to contain weaknesses, and at the cost of reducing in half the capability of error correction. In this paper we describe a simple way to reduce significantly the key size in McEliece and related cryptosystems using a subclass of Goppa codes, while also improving the efficiency of cryptographic operations to $\tilde{O}(n)$ time, and keeping the capability of correcting the full designed number of errors in the binary case.

## 1 Introduction

Quantum computers can potentially break most if not all conventional cryptosystems actually deployed in practice, namely, all systems based on the integer factorization problem (like RSA) or the discrete logarithm problem (like traditional or elliptic curve Diffie-Hellman and DSA, and also all of pairing-based cryptography).

Certain classical cryptosystems, inspired by computational problems of a nature entirely different from the above and potentially much harder to solve, remain largely unaffected by the threat of quantum computing, and have thus been called quantum-resistant or, more suggestively, 'post-quantum' cryptosystems. These include lattice-based cryptosystems and syndrome-based cryptosystems like McEliece [16] and Niederreiter [19]. Such systems usually even have a speed advantage over conventional schemes; for instance, both McEliece and Niederreiter encryption over a code of length n have time complexity $O(n^2)$, while Diffie-Hellman/DSA and (private exponent) RSA with n-bit keys have time complexity $O(n^3)$. On the other hand, they are plagued by very large keys compared to their conventional counterparts.

It is therefore of utmost importance to seek ways to reduce the key sizes for post-quantum cryptosystems while keeping their security level. The first steps toward this goal were taken by Monico et al. using low density parity-check codes [18], by Gaborit using quasi-cyclic codes [8], and by Baldi and Chiaraluce using a combination of both [1]. However, these proposals were all shown to contain weaknesses [22]. In those proposals the trapdoor is protected essentially by no other means than a private permutation of the underlying code.
The attack strategy consists of obtaining a solvable system of linear equations that the components of the permutation matrix must satisfy, and was successfully mounted due to the very constrained nature of the secret permutation (since it has to preserve the quasi-cyclic structure of the result) and the fact that the secret code is a subcode of a public code. A dedicated fix to the problems in [1] is proposed in [2]. More recently, Berger et al. [3] showed how to circumvent the drawbacks of Gaborit's original scheme and remove the weaknesses pointed out in [22] by means of two techniques:

1. Extracting block-shortened public codes from very large private codes, exploiting Wieschebrink's theorem on the NP-completeness of distinguishing punctured codes [29];
2. Working with subfield subcodes over an intermediate subfield between the base field and the extension field of the original code.

These two techniques were successfully applied to quasi-cyclic codes, yet we will see that their applicability is not restricted to that class.

**Our contribution:** In this paper we propose the class of quasi-dyadic Goppa codes, which admit a very compact parity-check or generator matrix representation, for efficiently instantiating syndrome-based cryptosystems. We stress that we are not proposing any new cryptosystem, but rather a technique to obtain efficient parameters and algorithms for such systems, current or future. In contrast to many other proposed families of codes [10,11,22,27], Goppa codes have withstood cryptanalysis quite well, and despite considerable progress in the area [14,26] (see also [6] for a survey) they remain essentially unscathed since they were suggested with the very first syndrome-based cryptosystem known, namely, the original McEliece scheme. Our method produces McEliece-type keys that are up to a factor $t = \tilde{O}(n)$ smaller than keys produced from generic t-error correcting Goppa codes of length n in characteristic 2. In the binary case it also retains the ability of correcting the full designed number of errors rather than just half as many, a feature that is missing in all previous attempts at constructing compact codes for cryptographic purposes, including [3]. Moreover, the complexity of all typical cryptographic operations becomes $\tilde{O}(n)$; specifically, under the common cryptographic setting $t = O(n/\lg n)$, code generation, encryption and decryption all have asymptotic complexity $O(n \lg n)$.

The remainder of this paper is organized as follows. Section 2 introduces some basic concepts of coding theory. In Section 3 we describe our proposal of using binary Goppa codes in quasi-dyadic form, and how to build them. We consider hardness issues in Section 4, and efficiency issues, including guidelines on how to choose parameters, in Section 5. We conclude in Section 6.

## 2 Preliminaries

In what follows all vector and matrix indices are numbered from zero onwards.

**Definition 1.** Given a ring R and a vector $h = (h_0, \ldots, h_{n-1}) \in R^n$, the dyadic matrix $\Delta(h) \in R^{n \times n}$ is the symmetric matrix with components $\Delta_{ij} = h_{i \oplus j}$, where $\oplus$ stands for bitwise exclusive-or on the binary representations of the indices. The sequence h is called its signature. The set of dyadic $n \times n$ matrices over R is denoted $\Delta(R^n)$. Given $t > 0$, $\Delta(t, h)$ denotes $\Delta(h)$ truncated to its first t rows.
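As an illustration, a few lines of Python (using numpy; all names our own) build $\Delta(h)$ directly from the XOR-indexing rule of Definition 1:

```python
import numpy as np

def dyadic(h):
    # Delta(h): the n x n matrix with entries Delta[i, j] = h[i XOR j];
    # n must be a power of 2 so that i XOR j stays below n.
    h = np.asarray(h)
    n = len(h)
    assert n & (n - 1) == 0, "signature length must be a power of 2"
    idx = np.arange(n)
    return h[idx[:, None] ^ idx[None, :]]

def dyadic_permutation(i, n):
    # Pi^i: the dyadic matrix whose signature is the i-th row of I_n
    e = np.zeros(n, dtype=int)
    e[i] = 1
    return dyadic(e)
```

For example, `dyadic([1, 2, 3, 4])` is symmetric with a constant diagonal, and `dyadic_permutation(i, n)` squared is the identity, matching the involution property noted after Definition 2 below.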
One can recursively characterize a dyadic matrix when n is a power of 2: any $1 \times 1$ matrix is dyadic, and for $k > 0$ any $2^k \times 2^k$ dyadic matrix M has the form

$$M = \begin{pmatrix} A & B \\ B & A \end{pmatrix}$$

where A and B are $2^{k-1} \times 2^{k-1}$ dyadic matrices. It is not hard to see that the signature of a dyadic matrix coincides with its first row. Dyadic matrices form a commutative subring of $R^{n \times n}$ as long as R is commutative [12].

**Definition 2.** A dyadic permutation is a dyadic matrix $\Pi^i \in \Delta(\{0,1\}^n)$ whose signature is the i-th row of the identity matrix.

A dyadic permutation is clearly an involution, i.e. $(\Pi^i)^2 = I$. The i-th row (or equivalently the i-th column) of the dyadic matrix defined by a signature h can be written $\Delta(h)_i = h\Pi^i$.

**Definition 3.** A quasi-dyadic matrix is a (possibly non-dyadic) block matrix whose component blocks are dyadic submatrices.

Quasi-dyadic matrices are at the core of our proposal. We will be mainly concerned with the case $R = \mathbb{F}_q$, the finite field with q (a prime power) elements.

**Definition 4.** Given two disjoint sequences $z = (z_0, \ldots, z_{t-1}) \in \mathbb{F}_q^t$ and $L = (L_0, \ldots, L_{n-1}) \in \mathbb{F}_q^n$ of distinct elements, the Cauchy matrix $C(z, L)$ is the $t \times n$ matrix with elements $C_{ij} = 1/(z_i - L_j)$, i.e.

$$C(z, L) = \begin{pmatrix} \dfrac{1}{z_0 - L_0} & \cdots & \dfrac{1}{z_0 - L_{n-1}} \\ \vdots & \ddots & \vdots \\ \dfrac{1}{z_{t-1} - L_0} & \cdots & \dfrac{1}{z_{t-1} - L_{n-1}} \end{pmatrix}.$$

Cauchy matrices have the property that all of their submatrices are nonsingular [25]. Notice that, in general, Cauchy matrices are not dyadic and vice-versa, although the intersection of these two classes is non-empty in characteristic 2.

**Definition 5.** Given $t > 0$ and a sequence $L = (L_0, \ldots, L_{n-1}) \in \mathbb{F}_q^n$, the Vandermonde matrix $\mathrm{vdm}(t, L)$ is the $t \times n$ matrix with elements $V_{ij} = L_j^i$.

**Definition 6.** Given a sequence $L = (L_0, \ldots, L_{n-1}) \in \mathbb{F}_q^n$ of distinct elements and a sequence $D = (D_0, \ldots, D_{n-1}) \in \mathbb{F}_q^n$ of nonzero elements, the Generalized Reed-Solomon code $\mathrm{GRS}_r(L, D)$ is the $[n, k, r]$ linear error-correcting code defined by the parity-check matrix

$$H = \mathrm{vdm}(r - 1, L) \cdot \mathrm{diag}(D).$$

An alternant code is a subfield subcode of a Generalized Reed-Solomon code.

Let p be a prime power, let $q = p^d$ for some d, and let $\mathbb{F}_q = \mathbb{F}_p[x]/b(x)$ for some irreducible polynomial $b(x) \in \mathbb{F}_p[x]$ of degree d. Given a code specified by a parity-check matrix $H \in \mathbb{F}_q^{t \times n}$, the trace construction derives from it an $\mathbb{F}_p$-subfield subcode by writing the $\mathbb{F}_p$ coefficients of each $\mathbb{F}_q$ component of H onto d successive rows of a parity-check matrix $T_d(H) \in \mathbb{F}_p^{dt \times n}$ for the subcode. The related co-trace parity-check matrix $T'_d(H) \in \mathbb{F}_p^{dt \times n}$, equivalent to $T_d(H)$ by a left permutation, is obtained from H by writing the $\mathbb{F}_p$ coefficients of terms of equal degree from all components on a column of H onto successive rows of $T'_d(H)$. Thus, given elements $u_i(x) = u_{i,0} + \cdots + u_{i,d-1}x^{d-1} \in \mathbb{F}_q = \mathbb{F}_p[x]/b(x)$, the trace construction maps a column $(u_0, \ldots, u_{t-1})^T$ of H to the column $(u_{0,0}, \ldots, u_{0,d-1}; \ldots; u_{t-1,0}, \ldots, u_{t-1,d-1})^T$ of the trace matrix $T_d(H)$, and to the column $(u_{0,0}, \ldots, u_{t-1,0}; \ldots; u_{0,d-1}, \ldots, u_{t-1,d-1})^T$ of the co-trace matrix $T'_d(H)$.
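The trace and co-trace constructions are just two layouts of the same coefficient data. A sketch, assuming each entry of H is already given as its vector of d coefficients over $\mathbb{F}_p$ (so H is represented as a $t \times n \times d$ array; function names are ours):

```python
import numpy as np

def trace_matrix(H_coeffs):
    # T_d(H): the d coefficients of each entry occupy d successive rows,
    # so output row i*d + deg holds coefficient `deg` of input row i.
    t, n, d = H_coeffs.shape
    return H_coeffs.transpose(0, 2, 1).reshape(t * d, n)

def cotrace_matrix(H_coeffs):
    # T'_d(H): rows are grouped by coefficient degree instead, so output
    # row deg*t + i holds coefficient `deg` of input row i; this is a left
    # (row) permutation of T_d(H).
    t, n, d = H_coeffs.shape
    return H_coeffs.transpose(2, 0, 1).reshape(d * t, n)
```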
Finally, one of the most important families of linear error-correcting codes for cryptographic purposes is that of Goppa codes:

**Definition 7.** Given a prime power p, $q = p^d$ for some d, a sequence $L = (L_0, \ldots, L_{n-1}) \in \mathbb{F}_q^n$ of distinct elements and a polynomial $g(x) \in \mathbb{F}_q[x]$ of degree t such that $g(L_i) \neq 0$ for $0 \leqslant i < n$, the Goppa code $\Gamma(L, g)$ over $\mathbb{F}_p$ is the alternant code over $\mathbb{F}_p$ corresponding to $\mathrm{GRS}_t(L, D)$ where $D = (g(L_0)^{-1}, \ldots, g(L_{n-1})^{-1})$, and its minimum distance is at least $2t + 1$.

An irreducible Goppa code in characteristic 2 can correct up to t errors using Patterson's algorithm [23], or slightly more using Bernstein's list decoding method [5], and t errors can still be corrected by suitable decoding algorithms if the generator g(x) is not irreducible¹. In all other cases no algorithm is known that can correct more than t/2 errors (or just a few more).

¹ For instance, one can equivalently view the binary Goppa code as the alternant code defined by the generator polynomial $g^2(x)$, in which case any alternant decoder will decode t errors. We are grateful to Nicolas Sendrier for pointing this out.

## 3 Goppa Codes in Cauchy and Dyadic Form

A property of Goppa codes that is central to our proposal is that they admit a parity-check matrix in Cauchy form:

**Theorem 1 ([28]).** The Goppa code generated by a monic polynomial $g(x) = (x - z_0)\cdots(x - z_{t-1})$ without multiple zeros admits a parity-check matrix of the form $H = C(z, L)$, i.e. $H_{ij} = 1/(z_i - L_j)$, $0 \leqslant i < t$, $0 \leqslant j < n$.

This theorem (also appearing in [15, Ch. 12, §3, Pr. 5]) is entirely general when one considers the factorization of the Goppa polynomial over its splitting field, in which case a single root of g is enough to completely characterize the code. For simplicity, we will restrict our attention to the case where all roots of that polynomial are in the field $\mathbb{F}_q$ itself.

**3.1** **Building a Binary Goppa Code in Dyadic Form**

We now show how to build a binary Goppa code that admits a parity-check matrix in dyadic form. To this end we seek a way to construct dyadic Cauchy matrices. The following theorem characterizes all matrices of this kind.

**Theorem 2.** Let $H \in \mathbb{F}_q^{n \times n}$ with $n > 1$ be simultaneously a dyadic matrix $H = \Delta(h)$ for some $h \in \mathbb{F}_q^n$ and a Cauchy matrix $H = C(z, L)$ for two disjoint sequences $z \in \mathbb{F}_q^n$ and $L \in \mathbb{F}_q^n$ of distinct elements. Then $\mathbb{F}_q$ is a field of characteristic 2, h satisfies

$$\frac{1}{h_{i \oplus j}} = \frac{1}{h_i} + \frac{1}{h_j} + \frac{1}{h_0}, \tag{1}$$

and $z_i = 1/h_i + \omega$, $L_j = 1/h_j + 1/h_0 + \omega$ for some $\omega \in \mathbb{F}_q$.

*Proof.* Since a dyadic matrix is symmetric, the sequences that define it must satisfy $1/(z_i - L_j) = 1/(z_j - L_i)$, hence $L_j = z_i + L_i - z_j$ for all i and j. Then $z_i + L_i$ must be a constant $\alpha$, and taking $i = 0$ in particular this simplifies to $L_j = \alpha - z_j$. Substituting back into the definition $H_{ij} = 1/(z_i - L_j)$ one sees that $H_{ij} = 1/(z_i + z_j + \alpha)$. But dyadic matrices also have a constant diagonal, namely, $H_{ii} = 1/(2z_i + \alpha) = h_0$. This is only possible if all $z_i$ are equal (contradicting the definition of a Cauchy matrix), or else if the characteristic of the field is 2, as claimed. In this case we see that $\alpha = 1/h_0$, and hence $H_{ij} = 1/(z_i + z_j + 1/h_0)$. Plugging in the definition $H_{ij} = h_{i \oplus j}$ we get $1/H_{ij} = 1/h_{i \oplus j} = z_i + z_j + 1/h_0$, and taking $j = 0$ in particular this yields $1/h_i = z_i + z_0 + 1/h_0$, or simply $z_i = 1/h_i + 1/h_0 + z_0$. Substituting back one obtains $1/h_{i \oplus j} = z_i + z_j + 1/h_0 = 1/h_i + 1/h_0 + z_0 + 1/h_j + 1/h_0 + z_0 + 1/h_0 = 1/h_i + 1/h_j + 1/h_0$, as expected.
Finally, define $\omega = 1/h_0 + z_0$ and substitute into the derived relations $z_i = 1/h_i + 1/h_0 + z_0$ and $L_j = \alpha - z_j$ to get $z_i = 1/h_i + \omega$ and $L_j = 1/h_j + 1/h_0 + \omega$, as desired. □

Therefore all we need is a method to solve Equation (1). The technique we propose consists of simply choosing distinct nonzero $h_0$ and $h_i$ at random, where i scans all powers of two smaller than n, and setting all other values as

$$\frac{1}{h_{i+j}} \leftarrow \frac{1}{h_i} + \frac{1}{h_j} + \frac{1}{h_0}$$

for $0 < j < i$ (so that $i + j = i \oplus j$), as long as this value is well-defined. Algorithm 1 captures this idea. Since each element of the signature h is assigned a value exactly once, its running time is O(n) steps. The notation $u \leftarrow\$\ U$ means that the variable u is uniformly sampled at random from the set U. For convenience we also define the essence of h to be the sequence $\eta_s = 1/h_{2^s} + 1/h_0$ for $s = 0, \ldots, \lceil\lg n\rceil - 1$, together with $\eta_{\lceil\lg n\rceil} = 1/h_0$, so that, for $i = \sum_{k=0}^{\lceil\lg n\rceil - 1} i_k 2^k$, $1/h_i = \eta_{\lceil\lg n\rceil} + \sum_{k=0}^{\lceil\lg n\rceil - 1} i_k \eta_k$.

**Algorithm 1. Constructing a binary Goppa code in dyadic form**

Input: q (a power of 2), $n \leqslant q/2$, t.
Output: Support L, generator polynomial g, dyadic parity-check matrix H for a binary Goppa code $\Gamma(L, g)$ of length n and design distance $2t + 1$ over $\mathbb{F}_q$, and the essence $\eta$ of the signature of H.

1: $U \leftarrow \mathbb{F}_q \setminus \{0\}$ ▷ Choose the dyadic signature $(h_0, \ldots, h_{n-1})$. N.B. whenever $h_j$ with $j > 0$ is taken from U, so is $1/(1/h_j + 1/h_0)$, to prevent a potential spurious intersection between z and L.
2: $h_0 \leftarrow\$\ U$
3: $\eta_{\lceil\lg n\rceil} \leftarrow 1/h_0$
4: $U \leftarrow U \setminus \{h_0\}$
5: for $s \leftarrow 0$ to $\lceil\lg n\rceil - 1$ do
6:   $i \leftarrow 2^s$
7:   $h_i \leftarrow\$\ U$
8:   $\eta_s \leftarrow 1/h_i + 1/h_0$
9:   $U \leftarrow U \setminus \{h_i,\ 1/(1/h_i + 1/h_0)\}$
10:  for $j \leftarrow 1$ to $i - 1$ do
11:    $h_{i+j} \leftarrow 1/(1/h_i + 1/h_j + 1/h_0)$
12:    $U \leftarrow U \setminus \{h_{i+j},\ 1/(1/h_{i+j} + 1/h_0)\}$
13:  end for
14: end for
15: $\omega \leftarrow\$\ \mathbb{F}_q$ ▷ Assemble the Goppa generator polynomial:
16: for $i \leftarrow 0$ to $t - 1$ do
17:   $z_i \leftarrow 1/h_i + \omega$
18: end for
19: $g(x) \leftarrow \prod_{i=0}^{t-1}(x - z_i)$ ▷ Compute the support:
20: for $j \leftarrow 0$ to $n - 1$ do
21:   $L_j \leftarrow 1/h_j + 1/h_0 + \omega$
22: end for
23: $h \leftarrow (h_0, \ldots, h_{n-1})$
24: $H \leftarrow \Delta(t, h)$
25: return L, g, H, $\eta$
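The sketch below mirrors the core of Algorithm 1 in Python over $\mathbb{F}_{2^5}$ (the toy field of Section 3.3); field addition is XOR and inversion uses Fermat's little theorem. For brevity it simply retries on an ill-defined value instead of maintaining the excluded set U, so it illustrates the recurrence rather than the exact bookkeeping of the algorithm.

```python
import random

M, POLY = 5, 0b100101          # F_{2^5} = F_2[u]/(u^5 + u^2 + 1)
Q = 1 << M

def gf_mul(a, b):
    # carry-less multiplication modulo the reduction polynomial
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & Q:
            a ^= POLY
        b >>= 1
    return r

def gf_inv(a):
    # a^(q-2) = a^(-1) for a != 0 (Fermat), by square-and-multiply
    r, e = 1, Q - 2
    while e:
        if e & 1:
            r = gf_mul(r, a)
        a = gf_mul(a, a)
        e >>= 1
    return r

def dyadic_signature(n):
    # draw h_0 and h_{2^s} at random and fill in the rest via
    # 1/h_{i+j} = 1/h_i + 1/h_j + 1/h_0 (field addition is XOR)
    while True:
        h = [random.randrange(1, Q)] + [0] * (n - 1)
        i, ok = 1, True
        while i < n and ok:
            h[i] = random.randrange(1, Q)
            for j in range(1, i):
                den = gf_inv(h[i]) ^ gf_inv(h[j]) ^ gf_inv(h[0])
                if den == 0:          # ill-defined value: start over
                    ok = False
                    break
                h[i + j] = gf_inv(den)
            i <<= 1
        if ok and len(set(h)) == n:
            return h
```

With h in hand, the roots and support follow as $z_i = 1/h_i + \omega$ and $L_j = 1/h_j + 1/h_0 + \omega$ for a random offset $\omega$ (additions again being XOR); a full implementation would also enforce the disjointness of z and L exactly as the set U does in Algorithm 1.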
., η⌈lg t⌉−1 only changes the order of those roots. Since the Goppa polyno-_ mial itself is defined by its roots regardless of their order, the total number of possible Goppa polynomials is therefore ��⌈i=0lg t⌉ [(][q][ −] [2][i][)]� _/⌈lg t⌉! ≈_ (q−t)�⌈lgq t⌉�. For n ≈ _q/2 the number of dyadic codes can be approximated by q[m]Q = 2[m][2]Q_ where Q = [�]i[∞]=1 [(1][ −] [1][/][2][i][)][ ≈] [0][.][2887881][. We will also see that the number] of quasi-dyadic codes, which we describe next and propose for cryptographic applications, is larger than this. Before we proceed, however, it is interesting to notice that one of the reasons the attack proposed in [22] succeeds against certain quasi-cyclic codes, besides the constrained structure of the applied permutation, is that those schemes start from a known BCH or Reed-Solomon code which is unique up to the choice of a primitive element from the underlying finite field. Thus, in those proposals an initial code over F2m is at best chosen from a set of O(2[m]) codes. In comparison, we start from a secret code sampled from a much larger family of O(2[m][2] ) codes. For instance, while those proposals have only 2[15] starting points over F216, our scheme can sample a family with more than 2[254] codes over the same field. The main protection of the hidden trapdoor is, of course, the block puncturing process and the more complex blockwise permutation of the initial secret code, as detailed next. **3.2** **Constructing Quasi-Dyadic, Permuted Subfield Subcodes** To complete the construction it is necessary to choose a compact generator matrix for the subfield subcode. Although the parity check matrix H built by Algorithm 1 is dyadic over Fq, the usual trace construction leads to a generator of the dual code that most probably violates the dyadic symmetry. However, by representing each field element to a basis of Fq over the subfield Fp, one can view _H as a superposition of d = [Fq : Fp] distinct dyadic matrices over Fp, and each_ of them can be stored in a separate dyadic signature. A cryptosystem cannot be securely defined on a Goppa code specified directly by a parity-check matrix in Cauchy form, since this would immediately reveal ----- Compact McEliece Keys from Goppa Codes 383 the Goppa polynomial g(x): it suffices to solve the overdefined linear system _zi −_ _Lj = 1/Hij consisting of tn equations in t + n unknowns._ Algorithm 1 generates fully dyadic codes. We now show how to integrate the techniques of Berger et al. with Algorithm 1 so as to build quasi-dyadic subfield subcodes whose parity-check matrix is a non-dyadic matrix composed of blocks of dyadic submatrices. The principle to follow here is to select, permute, and _scale the columns of the original parity-check matrix so as to preserve quasi-_ dyadicity in the target subfield subcode and the distribution of introduced errors in cryptosystems. A similar process yields a generator matrix in convenient quasidyadic, systematic form. For the desired security level (see the discussion in Section 5.1), choose p = 2[s] for some s, q = p[d] = 2[m] for some d with m = ds, a code length n and a design number of correctable errors t such that n = ℓt for some ℓ> d. For simplicity we assume that t is a power of 2, but the following construction method can be modified to work with other values. 
Run Algorithm 1 to produce a code over $\mathbb{F}_q$ whose length $N \gg n$ is a large multiple of $t$ not exceeding the largest possible length $q/2$, so that the constructed $t \times N$ parity-check matrix $\hat H$ can be viewed as a sequence of $N/t$ dyadic blocks $[B_0 \mid \cdots \mid B_{N/t-1}]$ of size $t \times t$ each. Select uniformly at random $\ell$ distinct blocks $B_{i_0}, \dots, B_{i_{\ell-1}}$ in any order from $\hat H$, together with $\ell$ dyadic permutations $\Pi^{j_0}, \dots, \Pi^{j_{\ell-1}}$ of size $t \times t$ and $\ell$ nonzero scale factors $\sigma_0, \dots, \sigma_{\ell-1} \in \mathbb{F}_p$. Let $\hat H' = [B_{i_0}\Pi^{j_0} \mid \cdots \mid B_{i_{\ell-1}}\Pi^{j_{\ell-1}}] \in (\mathbb{F}_q^{t \times t})^{\ell}$ and $\Sigma = \mathrm{diag}(\sigma_0 I_t, \dots, \sigma_{\ell-1} I_t) \in (\mathbb{F}_p^{t \times t})^{\ell \times \ell}$. Compute the co-trace matrix $H' = T'_d(\hat H' \Sigma) = T'_d(\hat H')\Sigma \in (\mathbb{F}_p^{t \times t})^{d \times \ell}$, and finally the systematic form $H$ of $H'$. Notice that, if the systematic form of $T'_d(\hat H')$ is $H_0$, then $H = U^{-1} H_0 V$, where $U = \mathrm{diag}(\sigma_0 I_t, \dots, \sigma_{\ell-d-1} I_t)$ and $V = \mathrm{diag}(\sigma_{\ell-d} I_t, \dots, \sigma_{\ell-1} I_t)$.

The resulting parity-check matrix defines a code of length $n$ and dimension $k = n - dt$ over $\mathbb{F}_p$, and since all block operations performed during the Gaussian elimination are carried out in the ring $\Delta(\mathbb{F}_p^t)$, the result still consists of dyadic submatrices which can be represented by a signature of length $t$. Hence the whole matrix can be stored in an area a factor $t$ smaller than a general matrix. However, the dyadic submatrices that appear in this process are not necessarily nonsingular, as they are no longer associated to a Cauchy matrix; should all the submatrices on a column be found to be singular (above or below the diagonal, according to the direction of this process) so that no pivot is possible, the whole block containing that column may be replaced by another block $B_{j'}$ chosen at random from $\hat H$ as above.

The trapdoor information, consisting of the essence $\eta$ of $h$, the sequence $(i_0, \dots, i_{\ell-1})$ of blocks, the sequence $(j_0, \dots, j_{\ell-1})$ of dyadic permutation identifiers, and the sequence of scale factors $(\sigma_0, \dots, \sigma_{\ell-1})$, relates the public code defined by $H$ with the private code defined by $\hat H$. The space occupied by the trapdoor information is thus $m^2 + \ell \lg N + \ell s$ bits. If one starts with the largest possible $N = 2^{m-1}$, this simplifies to the maximal size of $m^2 + \ell(m - 1 + s)$ bits.

The total space occupied by the essential part of the resulting generator (or parity-check) matrix over $\mathbb{F}_p$ is $dt \times (n - dt)/t = dk$ elements of $\mathbb{F}_p$, or $mk$ bits – a factor $t$ better than plain Goppa codes, which occupy $k(n - k) = mkt$ bits. Had $t$ not been chosen to be a power of 2, say, $t = 2^u v$ where $v > 1$ is odd, the cost of multiplying $t \times t$ matrices would be in general $O(2^u u v^3)$ rather than simply $O(2^u u)$, and the final parity-check matrix would be compressed by only a factor $2^u$.

For each code produced by Algorithm 1, the number of codes generated by this construction is $\binom{N/t}{\ell} \times \ell! \times t^{\ell} \times (r - 1)^{\ell}$, hence $\binom{N/t}{\ell} \times \ell! \times t^{\ell} \times (r - 1)^{\ell} \times \prod_{i=0}^{\lceil \lg N \rceil}(q - 2^i)$ codes are possible in principle.
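A schematic of the selection/permutation/scaling step in Python (our own code; the co-trace projection and the dyadic Gaussian elimination are omitted, and we take a dyadic permutation $\Pi^j$ to be $\Delta$ of the $j$-th unit vector, so that right-multiplying a dyadic block simply re-indexes its signature by $k \mapsto k \oplus j$):

```python
import random

def qd_select(block_sigs, ell, p, mul):
    """Sketch of the block selection/permutation/scaling step of Section 3.2.
    block_sigs: the N/t dyadic signatures of the blocks [B_0 | ... | B_{N/t-1}]
    of the t x N matrix H-hat produced by Algorithm 1.
    mul(a, c): multiplication of a field element a by a scalar c in F_p."""
    t = len(block_sigs[0])
    blocks = random.sample(range(len(block_sigs)), ell)       # i_0, ..., i_{ell-1}
    perms = [random.randrange(t) for _ in range(ell)]         # j_0, ..., j_{ell-1}
    scales = [random.randrange(1, p) for _ in range(ell)]     # nonzero sigma_0..sigma_{ell-1}
    # B_{i_k} * Pi^{j_k} is still dyadic, with signature k -> b[k XOR j]; scaling
    # by sigma_k preserves dyadicity as well
    pub_sigs = [[mul(block_sigs[b][k ^ j], s) for k in range(t)]
                for b, j, s in zip(blocks, perms, scales)]
    trapdoor = (blocks, perms, scales)
    return pub_sigs, trapdoor
```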
**3.3** **A Toy Example**

Let $\mathbb{F}_{2^5} = \mathbb{F}_2[u]/(u^5 + u^2 + 1)$. The dyadic signature $h = (u^{20}, u^3, u^6, u^{28}, u^9, u^{29}, u^4, u^{22}, u^{12}, u^5, u^{10}, u^2, u^{24}, u^{26}, u^{25}, u^{15})$ and the offset $\omega = u^{21}$ define a 2-error-correcting binary Goppa code of length $N = 16$ with $g(x) = (x - u^{12})(x - u^{15})$ and support $L = (u^{21}, u^{29}, u^{19}, u^{26}, u^6, u^{16}, u^7, u^5, u^{25}, u^3, u^{11}, u^{28}, u^{27}, u^9, u^{22}, u^2)$. The associated parity-check matrix built according to Theorem 1 is

$$\hat H = \begin{bmatrix} u^{20} & u^{3} & u^{6} & u^{28} & u^{9} & u^{29} & u^{4} & u^{22} & u^{12} & u^{5} & u^{10} & u^{2} & u^{24} & u^{26} & u^{25} & u^{15} \\ u^{3} & u^{20} & u^{28} & u^{6} & u^{29} & u^{9} & u^{22} & u^{4} & u^{5} & u^{12} & u^{2} & u^{10} & u^{26} & u^{24} & u^{15} & u^{25} \end{bmatrix},$$

with eight $2 \times 2$ blocks $B_0, \dots, B_7$ as indicated (consecutive column pairs). From this we extract the shortened, rearranged and permuted sequence $\hat H' = [B_7\Pi^0 \mid B_5\Pi^1 \mid B_1\Pi^0 \mid B_2\Pi^1 \mid B_3\Pi^0 \mid B_6\Pi^1 \mid B_4\Pi^0]$ (because in this example the subfield is the base field itself, all scale factors have to be 1), i.e.:

$$\hat H' = \begin{bmatrix} u^{25} & u^{15} & u^{2} & u^{10} & u^{6} & u^{28} & u^{29} & u^{9} & u^{4} & u^{22} & u^{26} & u^{24} & u^{12} & u^{5} \\ u^{15} & u^{25} & u^{10} & u^{2} & u^{28} & u^{6} & u^{9} & u^{29} & u^{22} & u^{4} & u^{24} & u^{26} & u^{5} & u^{12} \end{bmatrix},$$

whose co-trace matrix over $\mathbb{F}_2$ has the systematic form:

$$H = \begin{bmatrix}
0&1&0&1&1&0&0&0&0&0&0&0&0&0 \\
1&0&1&0&0&1&0&0&0&0&0&0&0&0 \\
0&1&0&0&0&0&1&0&0&0&0&0&0&0 \\
1&0&0&0&0&0&0&1&0&0&0&0&0&0 \\
0&0&1&1&0&0&0&0&1&0&0&0&0&0 \\
0&0&1&1&0&0&0&0&0&1&0&0&0&0 \\
0&1&1&0&0&0&0&0&0&0&1&0&0&0 \\
1&0&0&1&0&0&0&0&0&0&0&1&0&0 \\
1&1&0&0&0&0&0&0&0&0&0&0&1&0 \\
1&1&0&0&0&0&0&0&0&0&0&0&0&1
\end{bmatrix} = [M^{T} \mid I_{n-k}],$$

from which one readily obtains the $k \times n = 4 \times 14$ generator matrix in systematic form:

$$G = \begin{bmatrix}
1&0&0&0&0&1&0&1&0&0&0&1&1&1 \\
0&1&0&0&1&0&1&0&0&0&1&0&1&1 \\
0&0&1&0&0&1&0&0&1&1&1&0&0&0 \\
0&0&0&1&1&0&0&0&1&1&0&1&0&0
\end{bmatrix} = [I_k \mid M],$$

where both $G$ and $H$ share the essential part $M$:

$$M = \begin{bmatrix}
\mathbf{0}&\mathbf{1}&\mathbf{0}&\mathbf{1}&\mathbf{0}&\mathbf{0}&\mathbf{0}&\mathbf{1}&\mathbf{1}&\mathbf{1} \\
1&0&1&0&0&0&1&0&1&1 \\
\mathbf{0}&\mathbf{1}&\mathbf{0}&\mathbf{0}&\mathbf{1}&\mathbf{1}&\mathbf{1}&\mathbf{0}&\mathbf{0}&\mathbf{0} \\
1&0&0&0&1&1&0&1&0&0
\end{bmatrix},$$

which is entirely specified by the elements in boldface and can thus be stored in 20 bits instead of, respectively, $4 \cdot 14 = 56$ and $10 \cdot 14 = 140$ bits.
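The 20-bit figure can be checked mechanically: since $M$ consists of $2 \times 2$ dyadic blocks, each boldface row determines the row below it by swapping adjacent entries, i.e. $M_{2i+1,\,j} = M_{2i,\,j \oplus 1}$. A quick check in Python (our own code):

```python
# Rows 0 and 2 of M (the boldface rows) determine rows 1 and 3, because M is made
# of 2x2 dyadic blocks: the second row of each block is the first with the entries
# of every adjacent pair swapped (column index j -> j XOR 1).
row0 = [0, 1, 0, 1, 0, 0, 0, 1, 1, 1]
row2 = [0, 1, 0, 0, 1, 1, 1, 0, 0, 0]
expand = lambda r: [r[j ^ 1] for j in range(len(r))]
M = [row0, expand(row0), row2, expand(row2)]
assert M[1] == [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]
assert M[3] == [1, 0, 0, 0, 1, 1, 0, 1, 0, 0]
# storage: 2 rows x 10 bits = 20 bits, as claimed
```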
## 4 Assessing the Hardness of Decoding Quasi-Dyadic Codes

The original McEliece (or, for that matter, the original Niederreiter) schemes are perhaps better described as candidate trapdoor one-way functions rather than full-fledged public-key encryption schemes. Such functions are used in cryptography in many different settings, each with different security requirements, and we do not consider such applications in this paper. Instead we focus purely on the question of inverting the trapdoor function, in other words, decoding.

As we pointed out in Section 1, the well-studied class of Goppa codes remains one of the best choices to instantiate McEliece-like schemes. Although our proposal is ultimately based on Goppa codes, one may wonder whether or not the highly composite nature of the Goppa generator polynomial $g(x)$, or the peculiar structure of the quasi-dyadic parity-check and generator matrices, leaks any information that might facilitate decoding without knowledge of the trapdoor. Yet, any alternant code can be written in Goppa-like fashion by using the diagonal component of its default parity-check matrix (see Definition 6) to interpolate a generating polynomial (not necessarily of degree $t$) that is composite with high probability. We are not aware of any way this fact could be used to facilitate decoding without full knowledge of the code structure, and clearly any result in this direction would affect most of the alternant codes proposed for cryptographic purposes to date.

Otmani et al.'s attack against quasi-cyclic codes [22] could be modified to work against Goppa codes in dyadic form. For this reason we adopt the same countermeasures proposed by Berger et al. to thwart it for cyclic codes, namely, working with a block-shortened subcode of a very large code as described in Section 3.2. This idea also builds upon the work of Wieschebrink [29], who proved that deciding whether a code is equivalent to a shortened code is NP-complete. In our case, the result is to hide the Cauchy structure of the private code in a general dyadic structure, rather than disguising a quasi-cyclic code as another one with the same symmetry.

We now give a reduction of the problem of decoding the particular class of quasi-dyadic codes to the well-studied syndrome decoding problem, classical in coding theory and known to be NP-complete [4].

**Definition 8 (Syndrome decoding).** *Let $\mathbb{F}_q$ be a finite field, and let $(H, w, s)$ be a triple consisting of a matrix $H \in \mathbb{F}_q^{r \times n}$, an integer $w < n$, and a vector $s \in \mathbb{F}_q^r$. Does there exist a vector $e \in \mathbb{F}_q^n$ of Hamming weight $\mathrm{wt}(e) \leqslant w$ such that $He^T = s^T$?*

The corresponding problem for quasi-dyadic matrices reads:

**Definition 9 (Quasi-Dyadic Syndrome Decoding).** *Let $\mathbb{F}_q$ be a finite field, and let $(H, w, s)$ be a triple consisting of a quasi-dyadic matrix $H \in \Delta(\mathbb{F}_q^{\ell})^{r \times n}$, an integer $w < \ell n$, and a vector $s \in \mathbb{F}_q^{\ell r}$. Does there exist a vector $e \in \mathbb{F}_q^{\ell n}$ of Hamming weight $\mathrm{wt}(e) \leqslant w$ such that $He^T = s^T$?*

**Theorem 4.** *The quasi-dyadic syndrome decoding problem (QD-SDP) is polynomially equivalent to the syndrome decoding problem (SDP). In other words, decoding quasi-dyadic codes is as hard in the worst case as decoding general codes.*

*Proof.* The QD-SDP, being an instance of the SDP restricted to a particular class of codes, is clearly a decision problem in NP. Consider now a generic instance $(H', w', s') \in \mathbb{F}_q^{r \times n} \times \mathbb{Z} \times \mathbb{F}_q^r$ of the SDP. Assume one is given an oracle that solves the QD-SDP over $\Delta(\mathbb{F}_q^{\ell})$ for some given $\ell > 0$. Let $v_\ell \in \mathbb{F}_q^{\ell}$ be the all-one vector, i.e. $(v_\ell)_j = 1$ for all $j$. Define the quasi-dyadic matrix $H = H' \otimes I_\ell \in \Delta(\mathbb{F}_q^{\ell})^{r \times n}$ with blocks $H_{ij} = H'_{ij} I_\ell$, the vector $s = s' \otimes v_\ell \in (\mathbb{F}_q^{\ell})^r$ with blocks $s_i = s'_i v_\ell$, and $w = \ell w'$. It is evident that the instance $(H, w, s) \in \Delta(\mathbb{F}_q^{\ell})^{r \times n} \times \mathbb{Z} \times (\mathbb{F}_q^{\ell})^r$ of the QD-SDP can be constructed in polynomial time.

Assume now that there exists $e \in \mathbb{F}_q^{\ell n}$ of Hamming weight $\mathrm{wt}(e) \leqslant w$ such that $He^T = s^T$. For all $0 \leqslant i < \ell$, let $e'_i \in \mathbb{F}_q^n$ be the vector with elements $(e'_i)_j = e_{i+j\ell}$, $0 \leqslant j < n$, so that the $e'_i$ are interleaved to compose $e$. Obviously at least one of the $e'_i$ has Hamming weight not exceeding $w/\ell = w'$, and by the construction of $H$ any of them satisfies $H'{e'_i}^T = s'^T$, constituting a solution to the given instance of the SDP. This effectively reduces the SDP to the QD-SDP for any given $\ell$ in polynomial time. Thus, the QD-SDP itself is NP-complete. ⊓⊔
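The reduction in the proof is straightforward to express in code; a small sketch (our own naming, with matrices as lists of rows):

```python
def to_qd_instance(Hp, wp, sp, ell):
    """Build the QD-SDP instance (H, w, s) = (H' kron I_ell, ell*w', s' kron 1_ell)
    from an SDP instance (H', w', s'), as in the proof of Theorem 4."""
    r, n = len(Hp), len(Hp[0])
    # block (i, j) is H'_{ij} * I_ell; a constant-diagonal block is trivially dyadic
    H = [[Hp[i][j] if a == b else 0 for j in range(n) for b in range(ell)]
         for i in range(r) for a in range(ell)]
    s = [sp[i] for i in range(r) for _ in range(ell)]
    return H, ell * wp, s

def candidate_solutions(e, ell, n):
    """De-interleave a QD-SDP solution e into the ell vectors e'_i with
    (e'_i)_j = e[i + j*ell]; at least one has weight <= w' and solves the SDP."""
    return [[e[i + j * ell] for j in range(n)] for i in range(ell)]
```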
Although this theorem does not say anything about hardness in the average case, it nevertheless strengthens our claim that the family of codes we propose is in principle no less suitable for cryptographic applications than a generic code, in the sense that, should the QD-SDP problem turn out to be feasible in the worst case, then all coding-based cryptosystems would definitely be ruled out, regardless of which code is used to instantiate them. Incidentally, the expected running time of all known algorithms for the SDP (and the QD-SDP) is exponential, so there is empirical evidence that the average case is also very hard. We stress, however, that particular cryptosystems based on quasi-dyadic codes will usually depend on more specific security assumptions, whose assessment transcends the scope of this paper.

## 5 Efficiency Considerations

Due to their simple structure, the matrices in our proposal can be held in a simple vector not only for long-term storage or transmission, but for processing as well. The operation of multiplying a vector by a (quasi-)dyadic matrix is at the core of McEliece encryption. The fast Walsh–Hadamard transform (FWHT) [12] approach for dyadic convolution via lifting² to characteristic 0 leads to the asymptotic complexity $O(n \lg n)$ for this operation, and hence also for encoding. Sarwate's decoding method [24] sets the asymptotic cost of that operation at roughly $O(n \lg n)$ as well for the typical cryptographic setting $t = O(n/\lg n)$. Inversion, on the other hand, can be carried out in $O(n)$ steps: one can show by induction that a binary dyadic matrix $\Delta(h)$ of dimension $n$ satisfies $\Delta^2 = (\sum_i h_i)^2 I$, and hence its inverse, when it exists, is $\Delta^{-1} = (\sum_i h_i)^{-2} \Delta$, which can be computed in $O(n)$ steps since it is entirely determined by its first row.

² We are grateful to Dan Bernstein for suggesting the lifting technique to emulate the FWHT in characteristic 2.

Converting a quasi-dyadic matrix to systematic (echelon) form involves a Gaussian elimination incurring about $d^2\ell$ products of dyadic $t \times t$ submatrices, implying a complexity $O(d^2 \ell t \lg t) = O(d^2 n \lg n)$, and hence the overall cost of formatting is $O(n \lg n)$ as long as $d$ is a small constant, which is indeed the case in practice since maximum size reduction is achieved when $\mathbb{F}_p$ is a large proper subfield of $\mathbb{F}_q$ (see Section 5.1). Notice that, contrary to systems based on quasi-circulant matrices [8, Proposition 3.4], our proposal does not require a lengthy process, involving expensive $O(n^3)$ matrix rank computations, to construct a generator matrix in suitable form, often larger than one would expect for a code of the given dimension. Table 1 summarizes the asymptotic complexities of code generation (mainly due to systematic formatting), encoding and decoding, which coincide with the complexities of key generation, encryption and decryption of typical cryptosystems based on codes.

**Table 1.** Operation complexity relative to the code length $n$

| operation | generic | ours |
|---|---|---|
| Code generation | $O(n^3)$ | $O(n \lg n)$ |
| Encode/Decode | $O(n^2)$ | $O(n \lg n)$ |
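For the binary case, the lifted-FWHT multiplication is short enough to sketch directly (our own code): the dyadic product $y_i = \sum_j h_{i \oplus j} x_j$ is an XOR-convolution, computed exactly over the integers with two forward transforms, a pointwise product and one more transform, then reduced mod 2.

```python
def fwht(a):
    """In-place (unnormalized) Walsh-Hadamard transform over the integers."""
    n, h = len(a), 1
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        h *= 2
    return a

def dyadic_mul_f2(h, x):
    """y = Delta(h) x over F_2, i.e. y_i = sum_j h_{i XOR j} x_j mod 2, in O(n lg n):
    lift the bits to Z, use the XOR-convolution theorem, then reduce mod 2."""
    n = len(h)
    A, B = fwht([int(v) for v in h]), fwht([int(v) for v in x])
    y = fwht([a * b for a, b in zip(A, B)])
    return [(v // n) % 2 for v in y]        # exact division by n, then reduce mod 2

# naive O(n^2) cross-check
h, x = [1, 0, 1, 1], [0, 1, 1, 0]
naive = [sum(h[i ^ j] & x[j] for j in range(4)) % 2 for i in range(4)]
assert dyadic_mul_f2(h, x) == naive
```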
**5.1** **Suggested Parameters**

Several trade-offs are possible when choosing parameters for a particular application. One may wish to minimize the key size, or increase speed, or simplify the underlying arithmetic, or attain a balance between them. We present here some non-exhaustive combinations. The number of errors is always a power of 2 to enable maximum size reduction.

Table 2 shows the influence of varying the subfield degree while keeping fixed the approximate security level and the number of design errors. In general, codes over larger subfields allow for smaller keys, as already indicated in [3]. For these parameters the number of possible codes ranges from $2^{392}$ to $2^{731}$.

**Table 2.** Sample parameters for a fixed number of errors ($t = 128$) and approximately 128-bit security level, using a subcode over the subfield $\mathbb{F}_{2^s}$ of $\mathbb{F}_{2^{16}}$

| $s$ | $n$ | $k$ | size (bits) |
|---|---|---|---|
| 1 | 4096 | 2048 | 32768 |
| 2 | 2560 | 1536 | 24576 |
| 4 | 1408 | 896 | 14336 |
| 8 | 768 | 512 | 8192 |

Table 3 displays a different trade-off whereby the key size and the subfield are kept constant at the cost of varying the number of errors and the code length. The estimated security level on column 'level' refers to the approximate logarithmic cost of the best known attack according to the guidelines in [7].

**Table 3.** Sample parameters for a fixed key size (8192 bits, corresponding to $k = 512$), using a subcode over the subfield $\mathbb{F}_{2^8}$ of $\mathbb{F}_{2^{16}}$

| $n$ | $t$ | level |
|---|---|---|
| 640 | 64 | 102 |
| 768 | 128 | 136 |
| 1024 | 256 | 168 |

One more trade-off is obtained by defining the subfield subcode over the base field itself, following the common practice for generic codes. The corresponding settings³ are summarised on Table 4.

**Table 4.** Sample parameters for a subcode over the base subfield $\mathbb{F}_2$ of $\mathbb{F}_{2^{16}}$

| level | $n$ | $k$ | $t$ | size (bits) |
|---|---|---|---|---|
| 80 | 2304 | 1280 | 64 | 20480 |
| 112 | 3584 | 1536 | 128 | 24576 |
| 128 | 4096 | 2048 | 128 | 32768 |
| 192 | 7168 | 3072 | 256 | 49152 |
| 256 | 8192 | 4096 | 256 | 65536 |

³ The actual security levels computed according to the attack strategy in [7] for the parameters suggested in Table 4 are, respectively, 84.3, 112.3, 136.5, 216.0, and 265.1. We are grateful to Christiane Peters for kindly providing these estimates.
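The 'size (bits)' entries above are exactly the $mk$-bit figure derived in Section 3.2, with $m = 16$ here; a one-line sanity check (our own code):

```python
# size = m*k bits for the essential part of a systematic quasi-dyadic matrix (m = 16)
for k, size in [(2048, 32768), (1536, 24576), (896, 14336), (512, 8192),
                (1280, 20480), (3072, 49152), (4096, 65536)]:
    assert 16 * k == size
```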
Table 5 contains a variety of balanced parameters for practical security levels. Although we do not recommend these for actual deployment before further analysis is carried out, these parameters were chosen to stress the possibilities of our proposal while giving a realistic impression of what one might indeed adopt in practice. The target security level, roughly corresponding to the estimated logarithmic cost of the best known attack according to the guidelines in [7], is shown on the 'level' column. The 'size' column contains the amount of bits effectively needed to store a quasi-dyadic generator or parity-check matrix in systematic form. The size of a corresponding systematic matrix for a generic Goppa code at roughly the same security level, as suggested in [7], is given on column 'generic'. The 'shrink' column contains the size ratio between such a generic matrix and a matching quasi-dyadic matrix. The 'RSA' column lists the typical size of a (quantum-susceptible) RSA modulus at the specified security level (more accurate RSA estimates can be found in [20,21]). To assess our results against what can be achieved by other post-quantum settings, column 'QC' lists key sizes for quasi-cyclic codes of approximately the specified security level (although not necessarily for the same code length, dimension, and distance) as suggested in [3], column 'LDPC' does the same for (quasi-cyclic) low-density parity-check codes as discussed in [2], and finally the 'NTRU' column contains the range (from size-optimal to speed-optimal) of NTRU key sizes as suggested in the draft IEEE 1363.1 standard [13]. For these very compact parameters the number of possible codes ranges between $2^{346}$ and $2^{392}$, less than those of Table 2 but still very large.

**Table 5.** Sample parameters for a subcode over the subfield $\mathbb{F}_{2^8}$ of $\mathbb{F}_{2^{16}}$

| level | $n$ | $k$ | $t$ | size | generic | shrink | RSA | QC | LDPC | NTRU |
|---|---|---|---|---|---|---|---|---|---|---|
| 80 | 512 | 256 | 128 | 4096 | 460647 | 112 | 1024 | 6750 | 49152 | – |
| 112 | 640 | 384 | 128 | 6144 | 1047600 | 170 | 2048 | 14880 | – | 4411–7249 |
| 128 | 768 | 512 | 128 | 8192 | 1537536 | 188 | 3072 | 20400 | – | 4939–8371 |
| 192 | 1280 | 768 | 256 | 12288 | 4185415 | 340 | 7680 | – | – | 7447–11957 |
| 256 | 1536 | 1024 | 256 | 16384 | 7667855 | 468 | 15360 | – | – | 11957–16489 |

For the parameters on Table 5, we observed the timings on Table 6 (measured in ms) for generic Goppa codes and quasi-dyadic (QD) codes, and also for RSA, to assess the efficiency relative to a very common pre-quantum cryptosystem. We made no serious attempt at optimizing the implementation, which was done in C++ and tested on an AMD Turion 64X2 2.4 GHz. Benchmarks for RSA-15360 were omitted due to the enormous time needed to generate suitable parameters.

**Table 6.** Benchmarks for typical parameters (ms)

| level | generation RSA | generation generic | generation QD | encoding RSA | encoding generic | encoding QD | decoding RSA | decoding generic | decoding QD |
|---|---|---|---|---|---|---|---|---|---|
| 80 | 563 | 375 | 17.2 | 0.431 | 0.736 | 0.817 | 15.61 | 1.016 | 3.685 |
| 112 | 1971 | 1320 | 18.7 | 1.548 | 1.696 | 1.233 | 110.34 | 2.123 | 4.463 |
| 128 | 4998 | 2196 | 20.5 | 3.467 | 2.433 | 1.575 | 349.91 | 3.312 | 5.261 |
| 192 | 628183 | 13482 | 47.6 | 22.320 | 6.872 | 4.695 | 5094.10 | 8.822 | 17.783 |
| 256 | – | 27161 | 54.8 | – | 12.176 | 6.353 | – | 15.156 | 21.182 |

## 6 Conclusion and Further Research

We have described how to generate Goppa codes in quasi-dyadic form suitable for cryptographic applications. Key sizes for a typical McEliece-like cryptosystem are roughly a factor $t = \tilde O(n)$ smaller than generic Goppa codes, and keys can be kept in this compact size not only for storing and transmission but for processing as well. In the binary case these codes can correct the full design number of errors. This brings the size of cryptographic keys to within a factor 4 or less of equivalent RSA keys, comparable to NTRU keys. Our work provides an alternative to conventional cyclic and quasi-cyclic codes, and benefits from the same trapdoor-hiding techniques proposed by Wieschebrink in general [29], and by Berger et al. for that family of codes [3]. The complexity of all operations in McEliece and related cryptosystems is reduced to $O(n \lg n)$.

Other cryptosystems can also benefit from dyadic codes: e.g. entity identification and certain digital signatures for which double circulant codes have been proposed [9] could use dyadic codes instead, even random ones without a Goppa trapdoor. One further line of research is whether one can securely combine the techniques in [2] with ours to define quasi-dyadic, low-density parity-check (QD-LDPC) codes that are suitable for cryptographic purposes and potentially even shorter than plain quasi-dyadic codes.
Interestingly, it is equally possible to define lattice-based cryptosystems with short keys using dyadic lattices entirely analogous to the ideal (cyclic) lattices proposed by Micciancio [17], achieving comparable size reduction. We leave this line of inquiry for future research since it falls outside the scope of this paper.

## Acknowledgments

We are most grateful and deeply indebted to Marco Baldi, Dan Bernstein, Pierre-Louis Cayrel, Philippe Gaborit, Steven Galbraith, Robert Niebuhr, Christiane Peters, Nicolas Sendrier, and the anonymous reviewers for their valuable comments and feedback during the preparation of this work.

## References

1. Baldi, M., Chiaraluce, F.: Cryptanalysis of a new instance of McEliece cryptosystem based on QC-LDPC code. In: IEEE International Symposium on Information Theory – ISIT 2007, Nice, France, pp. 2591–2595. IEEE, Los Alamitos (2007)
2. Baldi, M., Chiaraluce, F., Bodrato, M.: A new analysis of the McEliece cryptosystem based on QC-LDPC codes. In: Ostrovsky, R., De Prisco, R., Visconti, I. (eds.) SCN 2008. LNCS, vol. 5229, pp. 246–262. Springer, Heidelberg (2008)
3. Berger, T.P., Cayrel, P.-L., Gaborit, P., Otmani, A.: Reducing key length of the McEliece cryptosystem. In: Preneel, B. (ed.) AFRICACRYPT 2009. LNCS, vol. 5580, pp. 77–97. Springer, Heidelberg (2009), http://www.unilim.fr/pages_perso/philippe.gaborit/reducing.pdf
4. Berlekamp, E., McEliece, R., van Tilborg, H.: On the inherent intractability of certain coding problems. IEEE Transactions on Information Theory 24(3), 384–386 (1978)
5. Bernstein, D.J.: List decoding for binary Goppa codes (2008) (preprint), http://cr.yp.to/papers.html#goppalist
6. Bernstein, D.J., Buchmann, J., Dahmen, E.: Post-Quantum Cryptography. Springer, Heidelberg (2008)
7. Bernstein, D.J., Lange, T., Peters, C.: Attacking and defending the McEliece cryptosystem. In: Buchmann, J., Ding, J. (eds.) PQCrypto 2008. LNCS, vol. 5299, pp. 31–46. Springer, Heidelberg (2008), http://www.springerlink.com/content/68v69185x478p53g
8. Gaborit, P.: Shorter keys for code based cryptography. In: International Workshop on Coding and Cryptography – WCC 2005, Bergen, Norway, pp. 81–91. ACM Press, New York (2005)
9. Gaborit, P., Girault, M.: Lightweight code-based authentication and signature. In: IEEE International Symposium on Information Theory – ISIT 2007, Nice, France, pp. 191–195. IEEE, Los Alamitos (2007)
10. Gibson, J.K.: Severely denting the Gabidulin version of the McEliece public key cryptosystem. Designs, Codes and Cryptography 6(1), 37–45 (1995)
11. Gibson, J.K.: The security of the Gabidulin public key cryptosystem. In: Maurer, U.M. (ed.) EUROCRYPT 1996. LNCS, vol. 1070, pp. 212–223. Springer, Heidelberg (1996)
12. Gulamhusein, M.N.: Simple matrix-theory proof of the discrete dyadic convolution theorem. Electronics Letters 9(10), 238–239 (1973)
13. IEEE P1363 Working Group: IEEE 1363-1: Standard Specifications for Public-Key Cryptographic Techniques Based on Hard Problems over Lattices, Draft (2009), http://grouper.ieee.org/groups/1363/lattPK/index.html
14. Loidreau, P., Sendrier, N.: Some weak keys in McEliece public-key cryptosystem. In: IEEE International Symposium on Information Theory – ISIT 1998, Boston, USA, p. 382. IEEE, Los Alamitos (1998)
15. MacWilliams, F.J., Sloane, N.J.A.: The Theory of Error-Correcting Codes. North-Holland Mathematical Library, vol. 16 (1977)
16. McEliece, R.: A public-key cryptosystem based on algebraic coding theory. The Deep Space Network Progress Report, DSN PR 42–44 (1978), http://ipnpr.jpl.nasa.gov/progressreport2/42-44/44N.PDF
17. Micciancio, D.: Generalized compact knapsacks, cyclic lattices, and efficient one-way functions. Computational Complexity 16(4), 365–411 (2007)
18. Monico, C., Rosenthal, J., Shokrollahi, A.: Using low density parity check codes in the McEliece cryptosystem. In: IEEE International Symposium on Information Theory – ISIT 2000, Sorrento, Italy, p. 215. IEEE, Los Alamitos (2000)
19. Niederreiter, H.: Knapsack-type cryptosystems and algebraic coding theory. Problems of Control and Information Theory 15(2), 159–166 (1986)
20. European Network of Excellence in Cryptology (ECRYPT): ECRYPT yearly report on algorithms and keysizes (2007–2008). D.SPA.28 Rev. 1.1, IST-2002-507932 ECRYPT, 07/2008 (2008), http://www.ecrypt.eu.org/ecrypt1/documents/D.SPA.28-1.1.pdf
21. National Institute of Standards and Technology (NIST): Recommendation for key management – part 1: General (2007), http://csrc.nist.gov/publications/nistpubs/800-57/sp800-57-Part1-revised2_Mar08-2007.pdf
22. Otmani, A., Tillich, J.-P., Dallot, L.: Cryptanalysis of two McEliece cryptosystems based on quasi-cyclic codes (2008) (preprint), http://arxiv.org/abs/0804.0409v2
23. Patterson, N.J.: The algebraic decoding of Goppa codes. IEEE Transactions on Information Theory 21(2), 203–207 (1975)
24. Sarwate, D.V.: On the complexity of decoding Goppa codes. IEEE Transactions on Information Theory 23(4), 515–516 (1977)
25. Schechter, S.: On the inversion of certain matrices. Mathematical Tables and Other Aids to Computation 13(66), 73–77 (1959), http://www.jstor.org/stable/2001955
26. Sendrier, N.: Finding the permutation between equivalent linear codes: the support splitting algorithm. IEEE Transactions on Information Theory 46(4), 1193–1203 (2000)
27. Sidelnikov, V., Shestakov, S.: On cryptosystems based on generalized Reed-Solomon codes. Discrete Mathematics 4(3), 57–63 (1992)
28. Tzeng, K.K., Zimmermann, K.: On extending Goppa codes to cyclic codes. IEEE Transactions on Information Theory 21, 721–726 (1975)
29. Wieschebrink, C.: Two NP-complete problems in coding theory with an application in code based cryptography. In: IEEE International Symposium on Information Theory – ISIT 2006, Seattle, USA, pp. 1733–1737. IEEE, Los Alamitos (2006)
{ "disclaimer": "Notice: The following paper fields have been elided by the publisher: {'references', 'abstract'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1007/978-3-642-05445-7_24?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1007/978-3-642-05445-7_24, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "BRONZE", "url": "https://link.springer.com/content/pdf/10.1007/978-3-642-05445-7_24.pdf" }
2,009
[ "JournalArticle" ]
true
2009-11-04T00:00:00
[]
15,683
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Engineering", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/020b47164f980cf8f1c447a85554fe0217c96be6
[ "Computer Science" ]
0.874104
A LIGHT-WEIGHT MUTUAL AUTHENTICATION AND KEY-EXCHANGE PROTOCOL BASED ON ELLIPTICAL CURVE CRYPTOGRAPHY FOR ENERGY-CONSTRAINED DEVICES
020b47164f980cf8f1c447a85554fe0217c96be6
[ { "authorId": "2275441", "name": "K. Yow" }, { "authorId": "2064749211", "name": "Amol Dabholkar" } ]
{ "alternate_issns": null, "alternate_names": null, "alternate_urls": null, "id": null, "issn": null, "name": null, "type": null, "url": null }
null
A LIGHT-WEIGHT MUTUAL AUTHENTICATION AND KEY-EXCHANGE PROTOCOL BASED ON ELLIPTICAL CURVE CRYPTOGRAPHY FOR ENERGY-CONSTRAINED DEVICES

Kin Choong Yow and Amol Dabholkar

School of Computer Engineering, Nanyang Technological University, Singapore 639798, email: kcyow@ntu.edu.sg, amold@pmail.ntu.edu.sg

**Abstract**

Wireless devices are characterized by low computational power and memory. Hence, security protocols dealing with these devices have to be designed to impose minimal computational and memory load. We present an efficient authentication and key exchange protocol for low-end wireless clients and high-end servers, which is overall nearly three times as fast as comparable protocols. The basic idea of our protocol is to use symmetric key encryption in place of public key encryption wherever possible.

**1.** **Introduction**

The wireless environment is by its very nature insecure and limited in bandwidth [1]. For example, an adversary could easily tap into wireless communication channels or jam other people's devices. Furthermore, personal wireless devices are generally low in power, computational ability and memory. These reasons have prevented a simple migration of cryptographic protocols from fixed networks to wireless networks for authentication and security [2].

The first phase of a typical secure transaction or communication is a secure key exchange, where both parties decide on a common secret key known only to them. The security of the rest of the transaction then depends on this first step, and if there is a compromise at this phase, the rest of the communication may be compromised. In this paper we will look at protocols for key exchange and ways for devices to authenticate each other. We will be looking specifically at protocols that combine these two objectives, which are called Mutually Authenticated Key Exchange Protocols or MAKEPs. The goal of these protocols is to provide the communicating parties with some assurance that they know each other's true identity and at the same time have a common key known only to them [3].

We will first look into the design strategies for a typical MAKEP and study two recently proposed protocols that deal with similar scenarios. We will then present our protocol and discuss its design and security. Finally, we will compare implementations of the protocols based on elliptic curve cryptography using PDA clients.

**2.** **Design Strategies**

In this section we will briefly analyze the designs of two MAKEPs designed for low-powered clients and discuss their advantages and disadvantages.

10.5121/ijnsa.2010.2210

_2.1 Server-Specific MAKEP_

This protocol was proposed in [2] as an efficient MAKEP for communication between a low-powered wireless client and a powerful server. The main feature of this MAKEP is that it eliminates any public key cryptographic operations on the client side. The client side performs only symmetric key operations. Symmetric key cryptography is much faster than public key cryptography [4]. Hence the protocol executes much faster on the client side than it would have if public key operations were involved. The protocol relies on server-specific certificates of the form Cert_A^B = <ID_A, {K_A}_PK_B, Sig_TA(ID_A, {K_A}_PK_B)>, which is a certificate from the low-powered wireless client Alice to the high-powered server Bob. Here K_A is Alice's long-lived symmetric secret key, PK_B is the well-known public key of Bob, and {K_A}_PK_B means Alice's secret key K_A is encrypted with Bob's public key PK_B.
Sig_TA is a signature from a trusted authority TA, and ID_A is Alice's identity.

[Figure 1: Server-Specific MAKEP protocol — Alice holds K_A and picks r_A ←$ Z_q; Bob holds (PK_B, SK_B). Alice sends {r_A}_K_A together with Cert_A^B; Bob verifies Cert_A^B, picks r_B ←$ Z_q and replies with {r_A, r_B, ID_B}_K_A; Alice answers with {r_B}_K_A; both sides compute σ = r_A ⊕ r_B.]

On receiving the first message from Alice, Bob verifies the TA's signature and obtains Alice's long-lived symmetric key K_A by decrypting {K_A}_PK_B from the certificate using his private key SK_B. This step also authenticates Alice. Bob then encrypts a new random value r_B, along with Alice's random value r_A and his identity ID_B, with Alice's symmetric key K_A. This step authenticates Bob. Now the session key σ is calculated as r_A ⊕ r_B.

The protocol completes in 3 steps with hardly any pre-computations. Another major advantage is that the client uses only symmetric-key cryptography with her secret key K_A. This increases its speed and efficiency as compared to other comparable MAKEPs that use public-key cryptography. The freshness of the session key is maintained by using the random numbers r_A and r_B. Unlike the Jakobsson-Pointcheval protocol [5], where only the client computes the session key σ, here we use client and server contributions, r_A and r_B respectively, to calculate σ. This increases the strength of the key.

However, there are some disadvantages to this protocol in its raw form:

1. The protocol needs server-specific certificates. For n servers it will need n distinct certificates. This creates scalability issues.
2. The client's long-lived secret key K_A is known to the server. If the server is malicious, it can create nonexistent sessions and there is no way for Alice to deny these occurred, i.e. a malicious server can impersonate its clients to create false runs of the protocol.
3. Even if the server is trustworthy, it has access to the client's secret key K_A. If the server's security is compromised in any way, it implies the client's security is also compromised, since a hacker can get K_A from Bob's memory.

These problems can be taken care of by our proposed protocol described in the next section.
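For reference, a runnable sketch of the Section 2.1 message flow (our own code, with AES-GCM standing in for the paper's unspecified symmetric cipher, and the certificate/TA-signature checks elided; library: pyca/cryptography):

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def enc(key, data):                       # {data}_key, with a fresh nonce prepended
    n = os.urandom(12)
    return n + AESGCM(key).encrypt(n, data, None)

def dec(key, blob):
    return AESGCM(key).decrypt(blob[:12], blob[12:], None)

K_A = os.urandom(16)        # Alice's long-lived key; Bob recovers it from Cert_A^B
ID_B = b"Bob"

r_A = os.urandom(16)
msg1 = enc(K_A, r_A)                      # Alice -> Bob: {r_A}_K_A (plus Cert_A^B)

r_B = os.urandom(16)                      # Bob -> Alice: {r_A, r_B, ID_B}_K_A
msg2 = enc(K_A, dec(K_A, msg1) + r_B + ID_B)

plain2 = dec(K_A, msg2)                   # Alice checks her r_A came back
assert plain2[:16] == r_A
msg3 = enc(K_A, plain2[16:32])            # Alice -> Bob: {r_B}_K_A

sigma = bytes(a ^ b for a, b in zip(r_A, r_B))   # session key on both sides
```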
_2.2 Client-Server MAKEP_

This protocol solves the scalability and other problems of the Server-Specific protocol by using public-key cryptography on the client side as well as the server side [3]. However, the public-key computations are kept to a minimum and most of the costly operations are done on the server side.

Alice chooses a random value r_A as her contribution towards creating the shared key, and encrypts it using Bob's well-known public key PK_B. Alice also computes a random value 'b' and sends β = g^b to Bob. Bob replies by sending his shared key contribution r_B, encrypted with Alice's contribution r_A. This authenticates Bob to Alice: since r_A had been encrypted under Bob's public key, only Bob will know r_A, and r_B encrypted under r_A implies that Bob is replying to the current session. Alice sends the value y = a·h(σ) + b mod q (q is the prime number used to define the field F_q) to Bob, where σ is computed from r_A and r_B. Bob checks if g^y = (g^a)^h(σ)·β. Now g^y = g^(a·h(σ)+b) = (g^a)^h(σ)·g^b = (g^a)^h(σ)·β. Bob knows g^a from the certificate and he has received β in the previous message. Thus Alice is authenticated, as the secret-key portion 'a' in the value of y can only be computed by Alice, and the β value effectively binds the earlier and later messages, as Alice will compute a new value of 'b' for each session.

[Figure 2: Client-Server MAKEP]

The third step is necessary because Alice has not been authenticated to Bob. Note that in the first step, when Alice sends "Cert_A, β, x" to Bob, this does not authenticate Alice, as anyone could send this message by capturing previous protocol runs. The dotted line in the figure shows the binding between the first and the third step by b and β. The third step combines the secret key (a) and the session random (b) value of Alice to authenticate her to Bob. If Alice were authenticated in the first step itself, the third step would not be necessary. This protocol has been shown to be secure against attacks by using the Bellare-Rogaway [6] model. Since it uses general certificates, there is no scaling problem like the server-specific MAKEP. Most of the computationally expensive operations, like certificate verification, are carried out on the server side.

**3.** **Our Proposed Protocol**

We observe that by using the keys of the client and the server to make a Diffie-Hellman [7] key pair in the first message, we can have a protocol that can be executed in just 2 steps. This removes the need of a separate binding 'b' and β between the first and the last message, and further removes the need for the server to perform exponential calculations to verify and authenticate the client.

The protocol is shown in figure 3. Let g be the generator of the field over which the DH problem is intractable. The client Alice has a private key 'a' and a public key g^a, and the server Bob has a private key 'y' with a public key g^y. We assume that the client has a certificate Cert_A from a trusted authority, where Cert_A = <ID_A, g^a, Sig_TA(ID_A, g^a)>. Unlike the Server-Specific MAKEP algorithm in [2], we do not use a server-specific certificate and avoid scaling issues.

[Figure 3: A new MAKEP for wireless devices — both parties calculate σ = r ⊕ r′.]

Alice first computes a random number r. She then calculates the DH key g^(ya) using her private key a and Bob's public key g^y. Our protocol is unique in the sense that we propose that this DH key be converted to an AES key. The subsequent use of the AES symmetric key algorithm in encryption and decryption of data leads to a substantial improvement in time.

Alice sends her random contribution r, encrypted under the AES key, to Bob along with her certificate. Bob verifies Alice's public key g^a and uses it to calculate the DH key g^(ya) using his own private key y. He also converts the DH key to an AES key and uses it to decrypt the message and get Alice's random number r. Bob computes his own random number r′ and uses r as an AES key to encrypt it and send it to Alice. Alice can decrypt this message since she already knows her random r. σ is the new session key and H is a cryptographic hash function. By using r (the client random) and r′ (the server random) we are maintaining the freshness of the session key, and the g^(ya) value is used for authenticating Alice and Bob to each other, as well as preventing any other party from mounting replay attacks or Denial of Service attacks on Alice or Bob. Since we have authenticated Alice to Bob in the first step, we don't need a third step like in [2].
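A minimal sketch of the proposed two-message flow (our own code): we substitute X25519 for the paper's DH group, AES-GCM for the unspecified AES mode, and HKDF for the unspecified DH-key-to-AES-key conversion; certificate handling and identity fields are omitted for brevity. Library: pyca/cryptography.

```python
import os
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def kdf(shared):                  # derive a 128-bit AES key from the DH secret
    return HKDF(hashes.SHA256(), 16, None, b"makep-dh").derive(shared)

# long-term keys: a / g^a for Alice, y / g^y for Bob
alice_priv, bob_priv = X25519PrivateKey.generate(), X25519PrivateKey.generate()
alice_pub, bob_pub = alice_priv.public_key(), bob_priv.public_key()

# Message 1 (Alice -> Bob): r encrypted under the AES key derived from g^(ya)
r = os.urandom(16)
k_dh = kdf(alice_priv.exchange(bob_pub))
n1 = os.urandom(12)
msg1 = n1 + AESGCM(k_dh).encrypt(n1, r, None)          # plus Cert_A in the paper

# Message 2 (Bob -> Alice): r' encrypted under r used as an AES key
assert kdf(bob_priv.exchange(alice_pub)) == k_dh       # Bob derives the same key
r_bob = AESGCM(k_dh).decrypt(msg1[:12], msg1[12:], None)
r_prime = os.urandom(16)
n2 = os.urandom(12)
msg2 = n2 + AESGCM(r_bob).encrypt(n2, r_prime, None)

# Session key: sigma = r XOR r' (the paper additionally mentions a hash H)
r_prime_alice = AESGCM(r).decrypt(msg2[:12], msg2[12:], None)
sigma = bytes(x ^ y for x, y in zip(r, r_prime_alice))
```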
Another variation of this protocol could be made using elliptic curve cryptography (ECC). A 1024-bit RSA key is comparable to only a 164-bit ECC key [8], i.e. it provides the same level of security, so with the shorter key length the protocol will be much faster. In the ECC version of the protocol, all the variables will now be points on an elliptic curve E. Alice's private and public keys will be the field element a and the point Pa (obtained by point multiplication of the base point P and the secret key a), while Bob's keys will be y and Py. P is a publicly known base point. Instead of encrypting r with g^(ya), we will encrypt it with (Pya). The rest of the protocol remains similar to the original one.

**4.** **A More Secure Version of the Protocol**

There is, however, a problem with the protocol based on trust issues. A weakness has been pointed out for the Server-Specific MAKEP protocol in [9], which is that the server has control over the session key. If we look at the server-side computations before the second message is sent in figure 3, we observe that the server can decide beforehand which session key (σ) will be used, by calculating r′ = (r ⊕ σ); it can do this because it knows the random value of the client before the final session key is calculated. We can easily overcome this by prepending a predetermined value to the random contributions before encrypting them. By doing this, we force the participants to check whether the decryption has led to a valid result. For example, Alice will send Cert_A, {ID_A, h(r)}_g^(ya) and Bob will reply with {ID_B, r′}_h(r).

[Figure 4: A more secure version — Bob decrypts {r}, checks h(r), and both parties calculate x = r ⊕ r′.]

As can be seen from figure 4, an extra step has been added to the protocol. In the first step we send h(r), which is the hashed value of the client random r. So now the server Bob does not know the client's random number before calculating his own random contribution r′, and there is no way for the server to decide the session key x beforehand. In the third step we send the actual client random r, encrypted with a symmetric key derived from the DH key g^(ya) and the server random r′. Note that this will also be a symmetric key encryption and hence almost negligible in comparison to the public key operations. So the addition of the third step will not cause any difference in the speed of the protocol.
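The anti-bias fix is essentially a commitment scheme; a tiny sketch (our own code, with SHA-256 standing in for the unspecified hash h):

```python
# Alice first commits to r via h(r); Bob must choose r' before ever seeing r,
# so he cannot steer the session key x = r XOR r'.
import hashlib, os

r = os.urandom(16)
commitment = hashlib.sha256(r).digest()        # message 1 carries h(r), not r
r_prime = os.urandom(16)                       # Bob picks r' knowing only h(r)
# message 3 reveals r; Bob verifies it against the earlier commitment
assert hashlib.sha256(r).digest() == commitment
x = bytes(a ^ b for a, b in zip(r, r_prime))   # session key
```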
**5.** **Implementation and Results**

We have implemented the ECC version of our protocol and an ECC version of the Client-Server protocol. The ECC version of the Client-Server protocol has been done by replacing the exponentiation operations with point multiplication operations, and by replacing the field generator g with the base point P. By doing this we have a level playing field on which to compare both protocols for timing efficiency, using the same key sizes and the same ECC implementation. Our implementation of ECC is based on the one described by Rosing [10]. We have used a 206 MHz, 64 MB RAM HP-Compaq iPAQ PDA as the client and a 1.7 GHz P4 PC as the server.

Table 1: Comparison for the total time taken

| Field Size (Bits) | Our Protocol (secs) | Client-Server MAKEP (secs) | Speed comparison |
|---|---|---|---|
| 158 | 4.81 | 15.3 | 3.18 times |
| 155 | 4.53 | 12.44 | 2.75 times |
| 134 | 3.45 | 9.3 | 2.7 times |
| 119 | 2.88 | 7.49 | 2.6 times |
| 113 | 2.79 | 7.03 | 2.51 times |
| 90 | 1.41 | 4.02 | 2.85 times |
| 65 | 1.078 | 2.06 | 1.9 times |
| 50 | 1.04 | 1.63 | 1.56 times |

As we can see from the graph in figure 5, our protocol is almost 1.5 times faster at low field sizes of around 50 bits (1.04 seconds vs 1.63 seconds) and more than 3 times faster for higher bit sizes (at 158 bits our protocol runs in around 4.81 seconds as opposed to 15.3 seconds for the Client-Server MAKEP).

[Figure 5: Graph of total time comparison]

We will now analyze the two protocols by breaking them up into a pre-computation part and a run-time part. The pre-computation portion is the portion of the protocol that does not need to be executed at runtime. The pre-computation time is given in table 2.

Table 2: Runtime comparison

| Field Size (Bits) | Our Protocol pre-comp time (sec) | Our Protocol run time (sec) | Client-Server MAKEP pre-comp time (sec) | Client-Server MAKEP run time (sec) |
|---|---|---|---|---|
| 158 | 2.976 | 1.84 | 2.97 | 12.4 |
| 155 | 2.745 | 1.79 | 2.74 | 9.7 |
| 134 | 1.844 | 1.61 | 1.83 | 7.56 |
| 119 | 1.26 | 1.54 | 1.309 | 6.2 |
| 113 | 0.964 | 1.4 | 0.988 | 6.004 |
| 90 | 0.51 | 1.2 | 0.531 | 3.67 |
| 65 | 0.19 | 0.98 | 0.211 | 1.84 |
| 50 | 0.1 | 0.94 | 0.101 | 1.5 |

The actual time from table 1 minus the pre-computation time from table 2 gives us the run time of the protocol. Figure 6 shows these values plotted in a graph. If we compare the graphs in figures 6 and 5, we can see that our protocol is still nearly 7 times faster at the higher bit-size fields and 1.5 times faster for the lower bit sizes.

[Figure 6: Actual run time graph]

We will now look at a major improvement we have made in our protocol in the first step. This is converting the DH key into an AES key and carrying out symmetric key encryption of the client random value. As opposed to this, we have used El Gamal public key encryption to encrypt the client random value in the Client-Server MAKEP protocol. We can see the time difference in table 3.

Table 3: Step 1 encryption comparison

| Field Size (Bits) | Our Protocol [AES encrypt r_A (msecs)] | Client-Server MAKEP [El Gamal encrypt r_A (msecs)] |
|---|---|---|
| 158 | 13 | 5600 |
| 155 | 12 | 5090 |
| 134 | 12 | 3500 |
| 119 | 12 | 2700 |
| 113 | 13 | 2030 |
| 90 | 12 | 1040 |
| 65 | 12 | 400 |
| 50 | 12 | 200 |

As we can see from table 3, this is a very big improvement in terms of speed.

**6.** **Conclusions**

We have proposed a new and efficient mutual authentication and key exchange protocol for low-powered clients like PDAs in wireless environments. We have analyzed two recently proposed protocols and have implemented one of them using ECC for comparison with our protocol. We have seen that our protocol is around 3 times as fast at key sizes of around 160 bits. Furthermore, our protocol is scalable, as it does not require server-specific certificates. We use random contributions from both the client and the server to produce a session key that ensures forward secrecy.

**References**

1. Nichols R. & Lekkas P. (2002) 'Wireless Security: Models, Threats and Solutions', McGraw-Hill.
2. Wong D. & Chan A. (2001) 'Mutual Authentication and Key Exchange for Low Power Wireless Communications', IEEE MILCOM 2001 Conference Proceedings.
3. Wong D. & Chan A. (2001) 'Efficient and Mutually Authenticated Key Exchange for Low Power Computing Devices', ASIACRYPT 2001.
4. Schneier B. (1996) 'Applied Cryptography', Wiley.
5. Jakobsson M. & Pointcheval D. (2001) 'Mutual Authentication for Low-Power Mobile Devices', Proceedings of Financial Cryptography 2001, Springer-Verlag.
6. Frankel S., Glenn R. et al. 'RFC 3602: The AES-CBC Cipher Algorithm and Its Use with IPsec', retrieved September 2004 from http://rfc3602.x42.com/.
7. Diffie W. & Hellman M. (1976) 'New Directions in Cryptography', IEEE Transactions on Information Theory, 644-654.
8. Lenstra A. & Verheul E. (2001) 'Selecting Cryptographic Key Sizes', Journal of Cryptology, 14(4):255-293.
9. Ng S.-L. & Mitchell C. J. 'Comments on Mutual Authentication and Key Exchange Protocols for Low Power Wireless Communications', retrieved October 2003 from http://www.isg.rhul.ac.uk/~cjm/comaak.pdf.
10. Rosing, M. (1999) 'Implementing Elliptic Curve Cryptography'. Greenwich, CT: Manning.
{ "disclaimer": "Notice: The following paper fields have been elided by the publisher: {'abstract', 'references'}. Paper or abstract available at https://api.unpaywall.org/v2/10.5121/IJNSA.2010.2210?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.5121/IJNSA.2010.2210, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "BRONZE", "url": "https://doi.org/10.5121/ijnsa.2010.2210" }
2,010
[]
true
2010-04-25T00:00:00
[]
5,270
en
[ { "category": "Law", "source": "s2-fos-model" }, { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Business", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/020b4eec8eb34b2752e7f0c4072f8eedc07d1fd2
[]
0.869871
TOOLS TO STIMULATE BLOCKCHAIN: APPLICATION OF REGULATORY SANDBOXES, SPECIAL ECONOMIC ZONES AND PUBLIC PRIVATE PARTNERSHIPS
020b4eec8eb34b2752e7f0c4072f8eedc07d1fd2
International Journal of Law in Changing World
[ { "authorId": "72777661", "name": "E. Gromova" }, { "authorId": "2058262052", "name": "D. Ferreira" } ]
{ "alternate_issns": null, "alternate_names": [ "Int J Law Chang World" ], "alternate_urls": null, "id": "226eec69-7e47-42e4-99c5-8b17c4cd2b9c", "issn": "2764-6068", "name": "International Journal of Law in Changing World", "type": "journal", "url": null }
The Blockchain technology has significant and almost limitless potential. However, today their use for implementation is associated with the problems of lack of high-quality legal regulation of this technology; technical standards for its application; investments required for its development. These problems and the search for their solutions are especially relevant now, in the context of the financial crisis. In this regard, the purpose of the article is to analyse the legal mechanisms and tools that make up special and experimental regimes, the use of which contributed to the introduction of the Blockchain technology into industrial production, identifying their features in relation to individual countries, problems associated with their implementation and finding solutions. The research is based on comparative legal and system analysis, as well as methods of legal modelling and content analysis. The author comes to the conclusion that in order to increase the attractiveness of the legal climate for the implementation of the Blockchain technology, it is necessary, first, to develop a “high-quality” legal regulation, which will be possible in the case of prior testing of an innovative product (service) based on the application of this technology in conditions of the experimental legal regime (regulatory sandbox); second, to develop standards for normative and technical regulation of this technology; third, to improve legislation on the main tools aimed at stimulating investment in the creation and implementation of digital innovations, and Blockchain technology, including - on special economic zones, public-private partnerships and state support of companies-developers of the Blockchain services for industrial production.
**Volume 2 Issue 1 (2023) ISSN 2764-6068**

**Research article**

**JNL:** https://ijlcw.emnuvens.com.br/revista
**DOI:** https://doi.org/10.54934/ijlcw.v2i1.48

# TOOLS TO STIMULATE BLOCKCHAIN: APPLICATION OF REGULATORY SANDBOXES, SPECIAL ECONOMIC ZONES, AND PUBLIC PRIVATE PARTNERSHIPS

**Elizaveta A. Gromova**, South Ural State University (National Research University), Russian Federation
**Daniel Brantes Ferreira**, Brazilian Centre for Meditation and Arbitration, Brazil

**Article Information:** Received April 16, 2023; Approved April 21, 2023; Accepted May 3, 2023; Published June 15, 2023

**Keywords:** blockchain, experimental regime, regulatory sandboxes, special economic zones, public private partnership, standardization, government support

**ABSTRACT**

The Blockchain technology has significant and almost limitless potential. However, today its use is associated with problems: the lack of high-quality legal regulation of this technology, of technical standards for its application, and of the investments required for its development. These problems and the search for their solutions are especially relevant now, in the context of the financial crisis. In this regard, the purpose of the article is to analyse the legal mechanisms and tools that make up special and experimental regimes, the use of which contributed to the introduction of the Blockchain technology into industrial production, identifying their features in relation to individual countries, problems associated with their implementation, and possible solutions. The research is based on comparative legal and system analysis, as well as methods of legal modelling and content analysis. The authors come to the conclusion that, in order to increase the attractiveness of the legal climate for the implementation of the Blockchain technology, it is necessary to develop "high-quality" legal regulation and standards for normative and technical regulation of this technology, and to improve legislation on the main tools aimed at stimulating investment in the Blockchain technology.

**FOR CITATION:** Gromova, E. A., & Ferreira, D. B. (2023). Tools to Stimulate Blockchain: Application of Regulatory Sandboxes, Special Economic Zones, and Public Private Partnerships. International Journal of Law in Changing World, 2 (1), 17-36. DOI: https://doi.org/10.54934/ijlcw.v2i1.48

**1.** **INTRODUCTION**

The Blockchain technology (Blockchain) was first talked about in 2008, after the publication of the article "Bitcoin: A Peer-to-Peer Electronic Cash System", written by a group of authors under the pseudonym S. Nakamoto (2009). This technology (Blockchain 1.0) was recommended to be used to verify Bitcoin transactions. Initially, this technology was used to verify financial transactions in operations with cryptocurrencies. However, it afterwards found its application in the registration of rights to real estate, of medical data, etc. This technology can also be used very effectively for the development of the so-called "smart" industry. At present, a number of companies in the energy, mining and manufacturing industries are using this technology, among other needs, to ensure reliable supply chains. For the successful implementation of this technology in industrial production, states use special and experimental regimes (special and experimental regulation).
Further, with the advent of self-executing contracts (smart contracts) in 2013, Blockchain technology gained even more popularity (Borg and Schembri, 2019). Smart contracts made it possible to implement through Blockchain technology (Blockchain 2.0) (Aggarwal and Kumar, 2021) a diverse set of business functions related to the transfer of information and/or values, while leaving transparent and reliably verifiable information flows¹.

Over time, the capabilities of this technology made it possible to use distributed ledgers not only within the framework of cryptocurrencies. Since the records in the chain are stored and distributed across the nodes of the network, they are very difficult to falsify, which makes Blockchain a safe and transparent way to record transactions and service information. Therefore, today Blockchain is used in the fields of trade, government services, healthcare, tourism and even music². For example, the courts of the PRC use Blockchain to record court hearings (Tran, 2020). Japanese animation studios use Blockchain to combat anime piracy³.

The Blockchain technology is actively used in various industries (smart, digital industry) (Xu et al., 2021). Programs such as IBM Blockchain are designed to improve supply chains, data identification and management. Blockchain Foundry focuses on Blockchain-based services for prototyping and industrial production. In manufacturing, 75% of industrial companies are expected to use distributed ledger systems by 2024. This will reduce the cost of controlling the quality of raw materials by 50%, and the cost of document circulation by 40%. The share of successful cyber-attacks will be halved⁴. It is expected that the third generation of the Blockchain technology (Blockchain 3.0) will allow the development of large-scale industrial applications capable of simultaneously managing many processes, processing and storing huge amounts of data, ensuring their consistency (Xu et al., 2021).

¹ White Paper «Blockchain in Trade Facilitation» ECE/TRADE/C/CEFACT/2019/9/Rev.1, available at: https://unece.org/fileadmin/DAM/cefact/GuidanceMaterials/WhitePaperBlockchain.pdf (accessed 28.01.2023).
² White Paper «Blockchain in Trade Facilitation» ECE/TRADE/C/CEFACT/2019/9/Rev.1, available at: https://unece.org/fileadmin/DAM/cefact/GuidanceMaterials/WhitePaperBlockchain.pdf (accessed 28.01.2023).
³ Japan's Blockchain Sandbox Is Paving The Way For The Fintech Future, available at: https://www.forbes.com/sites/japan/2019/06/26/japans-blockchain-sandbox-is-paving-the-way-for-the-fintech-future/?sh=5ef085832795 (accessed 28.01.2023).
⁴ Russian Blockchain, available at: https://www.cnews.ru/articles/2019-08-27_rossijskim_blokchejnrazrabotchikam (accessed 28.01.2023).

According to a survey conducted by the World Economic Forum, while in 2015 only 0.025% of global GDP was based on the use of Blockchain, by 2027 this ratio is expected to jump to 10%⁵. According to the respondents of the Deloitte survey conducted in 2020, the implementation of this technology plays a very important role, and any state that does not use it is capable of losing its competitive advantages. The cost of quality control of raw materials will be reduced by 50%, and the cost of document flow by 40%. The share of successful cyber-attacks will be halved⁶.

⁵ Global Agenda Council on the Future of Software & Society, Deep Shift: Technology Tipping Points and Societal Impact, available at: http://www3.weforum.org/docs/WEF_GAC15_Technological_Tipping_Points_report_2015.pdf (accessed 28.01.2023).
⁶ Russian Blockchain, available at: https://www.cnews.ru/articles/2019-08-27_rossijskim_blokchejnrazrabotchikam (accessed 28.01.2023).
The third generation of the Blockchain technology (Blockchain 3.0) allows the development of large-scale industrial applications that can simultaneously manage many processes, process and store huge amounts of data, ensuring their logical interconnection and consistency (Di Francesco and Mori, 2020).

However, today states are faced with universal problems that can level the potential of Blockchain: the lack of high-quality legal regulation of this technology, the lack of technical standards for its application, and the lack of investments required for its development. These problems are the main barriers to its implementation, including in industrial production. Lack of regulatory clarity is one such barrier, according to Deloitte's 2020 Blockchain Survey⁷. Consequently, if states are unable to effectively implement this technology, they can lose their competitive advantages (Swan, 2015).

In this regard, the purpose of the article is to analyze the mechanisms and tools that make up special and experimental modes, the use of which would contribute to solving these problems. To achieve the goal of the study, the authors examine the national regulation of the creation and implementation of the Blockchain technology in various areas, including industrial production, the tools that help to improve the quality of legal (including regulatory and technical) regulation of this technology, as well as the tools that help to attract investments in its development.

Currently, many scientific articles and monographs have been published on the Blockchain technology. These articles were written by representatives of various branches of science and touch on completely different aspects of the creation and implementation of this technology (Mohamad et al., 2017; Dong et al., 2018; Fan et al., 2018; Wu et al., 2021). The scientific works analyzed by the authors can be conditionally divided into the following groups:

1. Legal regulation of the creation and implementation of the Blockchain technology.

⁷ Deloitte 2020 Blockchain Survey, available at: https://www2.deloitte.com/content/dam/insights/us/articles/6608_2020global-blockchain-survey/DI_CIR%202020%20global%20blockchain%20survey.pdf (accessed 28.01.2023).

2. Legal aspects of the implementation of the Blockchain technology in certain areas of public life, sectors of the economy, and industrial production.

3. Ways to develop and implement the Blockchain technology.

A significant number of works by authors from different countries are devoted to the legal regulation of the Blockchain technology, which emphasizes the relevance of legal research on its creation and implementation. As a rule, these articles are devoted to the search for an optimal model of legal regulation for individual countries, their unions and integration formations. The works of other researchers are devoted to defining the legal essence of this technology, trying to formulate its definition, legal features and classification (Sultan et al., 2018; Bouraga, 2021).
A significant number of scientific articles are devoted to individual legal problems of the creation and implementation of the Blockchain technology, for the most part the problem of its correlation with legislation on the protection of personal data (Ivanc et al., 2016; Mohamad et al., 2017; Tatar et al., 2020; Campanile et al., 2021). Most of the published scientific papers on this technology aim to describe the role of the Blockchain technology in the financial market, cryptocurrencies and smart contracts (Ali et al., 2020; De Filippi et al., 2020; Elisavetsky and Marun, 2020). At the same time, some authors turn to the legal analysis of the application of the Blockchain technology in other areas, such as medicine and public administration (Mohamad et al., 2017; Dong et al., 2018; Fan et al., 2018; Roman-Belmonte et al., 2018; Joppen et al., 2019; Balasubramanian et al., 2021). Some of the works analyzed by the author describe individual ways of developing and implementing this technology; as a rule, these articles are devoted to the issues of attracting investments in its development (Jani and Panda, 2019).

Without diminishing the importance of the research carried out by these authors, it should be noted that the author has not found a comprehensive study of the special and experimental legal regimes that contribute to the implementation of the Blockchain technology in various spheres of society, including industrial production, or of their constituent legal instruments and mechanisms that would make it possible to overcome the barriers hindering its development and to promote its implementation.

To achieve the goals of the article, the author applied a set of methods that included the comparative legal and systemic methods, as well as the methods of legal modeling and content analysis. The comparative legal method was used to analyze approaches to the legal regulation of the Blockchain technology and the national legislation of the countries implementing this technology. The application of this method made it possible to identify the tools and mechanisms that help to attract investments in the development of this technology, as well as the best practices that contribute to improving the quality of its legal, including regulatory and technical, regulation. The systemic method made it possible to consider the legal instruments and mechanisms that contribute to the creation and implementation of the Blockchain technology as a single system of techniques and methods whose combined use can overcome the barriers that hinder the development and implementation of this technology. The method of content analysis made it possible to analyze the content of individual information resources in order to identify existing practices for the implementation of the Blockchain technology, for attracting investments in its development, and for state support of its developers.

**2. TOOLS TO STIMULATE BLOCKCHAIN**

As a rule, when states seek to achieve certain goals in a particular area of development, they apply special regulation that differs from the general one (Podshivalov, 2018; Gromova, 2018; Ferreira and Filho, 2020; Kraljić, 2020; Nikitin and Marius, 2020; Ostanina and Titova, 2020). The introduction of special regulation is due to the need to achieve goals that cannot be achieved through general regulation. In this regard, states use so-called special legal regimes, which are a set of legal means aimed at achieving a certain result.
When it comes to the creation and implementation of the Blockchain technology, states likewise apply special regulation (special and experimental regimes) that contributes to solving these problems. Such regimes consist of a certain set of tools and mechanisms that contribute to the achievement of the set goals and the solution of existing problems.

**2.1 Experimental legal regimes (regulatory sandboxes) for Blockchain**

Regulatory sandboxes for Blockchain services are the experimental legal regimes used by many states today to create optimal regulation that facilitates the implementation of the Blockchain technology. The significant potential of the Blockchain technology, as well as the possible danger of its improper use, has confronted modern states with the question of finding approaches to the legal regulation of this technology. The policy ecosystem is not fully adapted to this technology, and rules and regulations would have to be retrofitted (Gabison, 2016). In this regard, the governments of many countries have chosen an approach aimed at creating "breakthrough" regulation of digital innovations. Its essence is that, even in the absence of legal regulation, business entities have the opportunity to "test" the capabilities of services and products based on digital technologies in a real market and under state control. For this purpose, states began to use regulatory sandboxes: environments controlled by the regulator in which entrepreneurs are given the opportunity to test innovative services or products while certain regulatory "indulgences" are applied. These may include the non-application of licensing, accreditation or certification requirements to a participant. The purpose of the regulatory sandbox is to create an environment for testing digital innovations in the absence of proper legal regulation. Sandboxes are used, first of all, to check the "viability" of a digital innovative service (product) by temporarily removing legislative barriers in the form of mandatory regulatory requirements.

The advantages of regulatory sandboxes are the following. First, their application enables business entities to test innovations in a safe environment, which helps to minimize the risks of violating legal requirements. Second, the use of regulatory sandboxes allows regulators to examine the "work" of new technologies "from the outside," in a low-risk environment; this opens the possibility of searching for the most appropriate ways of adapting legislation. Third, the use of regulatory sandboxes helps to minimize the harm that can be done to consumers, which becomes possible through the provision of guarantees and additional protection methods. Thus, this mechanism is an example of abandoning traditional regulatory approaches in favor of more flexible regulation (Gromova and Ivanc, 2020). Regulatory sandboxes were first introduced in 2015 as part of a government initiative to support British digital financial innovation companies (Arner et al., 2016). This initiative enabled companies to test the innovative products, services and business models they create in an isolated environment. The first experience with regulatory sandboxes has been positive.
The British regulatory sandbox has contributed to the development of the innovative activities of more than 500 companies, more than 40 of which have received regulatory authorization (Global Regulatory Sandbox Review…, 2017). The success of the United Kingdom in creating sandboxes has led to their proliferation throughout the world. Currently, regulatory sandboxes are used in countries such as Singapore, the UAE, Australia, the EU countries, China, India and Russia. The typical area of application of regulatory sandboxes is Fintech (digital financial technologies); this is the case, for example, in the UK, Singapore, Australia, India and the UAE (Jenik and Lauer, 2017). Separate regulatory sandboxes in China, in turn, are created to develop not only Fintech innovations but also the InsurTech market (digital innovations in the field of insurance). Regulatory sandboxes in Russia can be used to test innovative services or business models in the fields of medicine, transport and education[8].

In order to become a participant in a regulatory sandbox, a business entity must apply to a state-authorized body (regulator) and provide a so-called experimental regime program. It should present the innovative business model (service or product) based on digital innovation, analyze its potential and possible risks, and identify ways to minimize them. If the submitted program is approved by the regulator, the participant of the experimental legal regime gets the opportunity to test it in a real market with real consumers, but with certain regulatory indulgences (special regulation) applied, within a certain period of time. Typically, this period is from 3 to 12 months (UK, China, India, Australia) (Jenik and Lauer, 2017). However, the legislation of certain countries sets longer periods: for example, up to 2 years in the UAE (Global Regulatory Sandbox Review…, 2017) and up to 5 years in Russia (Gromova and Ivanc, 2020). Upon the expiration of this period, the authorized body, based on the results of monitoring and evaluating the effectiveness and efficiency of the experiment, draws conclusions about: the admissibility of giving the special regulation the properties of general regulation; the admissibility of giving the special regulation the properties of general regulation subject to amendments to it; or the inadmissibility of giving the special regulation the properties of general regulation.

8 Global Regulatory Sandbox Review: An Overview on the Impact, Challenges, and Benefits of Regulatory FinTech Sandboxes, available at: https://financedocbox.com/Insurance/73322297-Global-regulatory-sandbox-review.html (accessed 28.01.2021); Regulatory Sandbox Review, available at: https://digitalchamber.org/wp-content/uploads/2017/11/Regulatory-SandboxReview_Nov-21-2017_2.pdf (accessed 28.01.2021); Regulatory Sandboxes and Financial Inclusion, available at: https://www.cgap.org/sites/default/files/researches/documents/Working-Paper-Regulatory-Sandboxes-Oct-2017.pdf (accessed 28.01.2023); Federal Law "On Experimental Legal Regimes for Digital Innovation" (in Russ.), available at: https://sozd.duma.gov.ru/bill/922869-7.
One of the development trends of this tool is the creation of regulatory sandboxes aimed at testing services based on the Blockchain technology (Cheah et al., 2018). The World Federation of Exchanges highlighted the importance of creating regulatory sandboxes for distributed ledger technologies in connection with the need to study the potential of this technology for the implementation of Blockchain-based services[9]. In this regard, some foreign countries operate so-called "thematic" regulatory sandboxes, the main purpose of which is to test innovative services, products and business models based on the Blockchain technology. For example, the Government of Japan has launched a regulatory sandbox for incubating Blockchain innovations[10]. Thailand's regulatory sandbox is also being applied to the development of the Blockchain technology; examples of projects currently being tested in it are services using Blockchain for letters of guarantee and cross-border funds transfers, iris identification for identity verification, and QR code payment verification (Guide for Regulatory Sandboxes, 2018).

Of particular interest is the regulatory sandbox of the International Civil Aviation Organization (ICAO), whose goal was the introduction of the Blockchain technology for the development of civil aviation. The ICAO Blockchain Sandbox (2021) is a cloud-hosted network enabling different partners to work on subjects on the same platform. It is a Blockchain infrastructure for the aviation sector that empowers partners to create and test services, systems or products on a decentralized platform. The European Union expects to launch a regulatory Blockchain sandbox by 2022. The project was initiated by the European Commission and the European Blockchain Partnership[11]. This sandbox will test the viability of Blockchain technologies in healthcare, environment, energy and other key sectors[12].

This means, firstly, that one of the trends in the use of regulatory sandboxes is the creation of specialized Blockchain sandboxes; and, secondly, and importantly, that Blockchain sandboxes aim to introduce the Blockchain technology not only in the field of financial markets but also in completely different areas. These trends should be assessed positively, since they will contribute to the creation of adequate and effective legal regulation of this technology. At the same time, the importance of reconciling special regulation (the granting of regulatory concessions) with fundamental human rights and consumer protection legislation should be considered. It is no coincidence that critics of regulatory sandboxes see them as a means to circumvent consumer protection laws: regulatory indulgences applied in testing conditions may negatively affect the quality of services provided to consumers or otherwise violate their rights. However, as a rule, the only special protective measure is obtaining the consent of consumers to participate in the experiment.

9 Exchange Body Calls for Creation of Regulatory Sandboxes for Distributed Ledgers, available at: https://www.finextra.com/newsarticle/29390/exchange-body-calls-for-creation-of-regulatory-sandboxes-for-distributedledgers (accessed 28.01.2023).
10 Japan's Blockchain Sandbox Is Paving the Way for the Fintech Future, available at: https://www.forbes.com/sites/japan/2019/06/26/japans-blockchain-sandbox-is-paving-the-way-for-the-fintechfuture/?sh=5ef085832795 (accessed 28.01.2023).
11 European Commission Launch Blockchain Regulatory Sandbox, available at: https://ec.europa.eu/digital-singlemarket/en/legal-and-regulatory-framework-blockchain (accessed 28.01.2023).
12 Legal and Regulatory Framework for Blockchain, available at: https://ec.europa.eu/digital-single-market/en/legal-andregulatory-framework-blockchain (accessed 28.01.2023).
Only a few jurisdictions provide for liability insurance for sandbox participants and compensation in case of violation of consumer rights[13]. When it comes to testing services and products based on the use of the Blockchain technology, it is very important to integrate the rules for participation in regulatory sandboxes with legislation on the protection of personal data. The problem of personal data protection in the context of the special regulation applied within the framework of a regulatory sandbox is already obvious. For example, in the Russian Federation there is an experiment on the development of artificial intelligence technologies in Moscow. As part of this experiment, anonymized personal data of Moscow residents are transferred for processing to artificial intelligence programs. The possibility of using anonymized personal data significantly reduces the cost of processing them. At the same time, to date there is no clarification in Russian legislation as to what "anonymized" personal data are and what the mechanism of their depersonalization is (Mavrinskaya et al., 2017).

When it comes to the use of the Blockchain technology, the issues of personal data protection come first. After all, even outside regulatory sandboxes, there are major problems of convergence between the Blockchain technology and legislation on the protection of personal data. In this regard, it is very important to work on improving the national regulatory framework and developing international legislation in this area[14].

13 Fintech regulatory sandbox, available at: https://asic.gov.au/for-business/innovation-hub/fintech-regulatory-sandbox/ (accessed 28.01.2023).
14 Blockchain: Playing in the regulatory sandbox, 07 September 2016, available at: https://www.finextra.com/blogposting/13055/blockchain-playing-in-the-regulatory-sandbox (accessed 28.01.2023).

**2.2 Special economic zones to attract investments in the development of the Blockchain technology**

There is no doubt that the implementation of the Blockchain technology may require significant investment. This is especially true for the introduction of such technology into industrial production. In this regard, it is very important to attract investments in its creation and implementation. That is why it is important for each country to create a favorable investment climate, including adequate legal conditions for attracting investments into the national economy. For this, countries around the world use various tools.

**_2.2.1 Special economic zones for Blockchain_**

These are, first of all, special economic zones. According to statistics, there are more than 5400 such territories in 147 countries of the world (World Investment Report, 2019). Scholars note that these territories are recognized as factors of accelerated economic growth due to their ability to intensify trade, attract investment, and deepen integration processes (Bost, 2019; Veselkova, 2019). Within the boundaries of such territories, representatives of the private sector are provided with tax and other preferences in order to stimulate investment and other entrepreneurial activities. The most famous example of the successful creation and operation of special economic zones is undoubtedly China.
The rise of the Chinese economy, associated, among other things, with the creation of special economic zones, is called the "Chinese economic miracle". In order to attract private investors, residents were provided with various preferences, including inexpensive land, tax and customs benefits, the possibility of repatriating profits and capital investments, exemption from export tax, and a limited license to sell goods in the domestic market [11]. The creation of innovative products is the main goal of the establishment of High-Tech Industrial Development Zones. Today, 54 such zones are successfully operating in China. Their creation began in 1980 under the Program of the Ministry of Science and Technology of China. The main goal of the Program was to use the technological capacity and resources of research institutes, universities, and large and medium enterprises to develop new and high-tech products and to expedite the commercialization of research and development [36].

Today, some countries are considering the possibility of creating and implementing the Blockchain technology within the boundaries of special and free economic zones. For example, the Central Committee of the Chinese Communist Party recently announced that research into the creation and implementation of the Blockchain technology for the digital currency market will be supported within the Shenzhen special economic zone. The Chinese government intends to use the Shenzhen special economic zone as a pilot demonstration zone for supporting innovative applications such as digital money research and mobile payment [40]. Special conditions for the implementation of the Blockchain technology are also envisaged in Georgia: the Gldani Free Industrial Zone guaranteed a UK-based company access to electricity at discounted rates for a brand-new, power-hungry 40-megawatt datacenter devoted to the mining of cryptocurrencies. Other special economic zones providing tech companies with the special regulatory environment they need to thrive are springing up across the globe, particularly in countries that have embraced Blockchain and cryptocurrencies. The Cagayan special economic zone in the Philippines has licensed as many as 37 crypto exchanges since receiving a special mandate to develop the "Crypto Valley of Asia" in May 2018.

Note that in the Russian Federation it is also planned to create a Blockchain cluster within the free economic zone of the Republic of Crimea and the federal city of Sevastopol. The purpose of creating such a cluster would be to attract investment in the implementation of Blockchain projects. For the development of cryptocurrencies and Blockchain projects, it is also planned to use the Russian part of the territory of the Bolshoi Ussuriysky Island (2019), where the creation of a special administrative region with preferential conditions for international companies planning to operate in this area is envisaged[15]. It is believed that the creation of clusters within the boundaries of special and free economic zones will contribute to the development of the Blockchain technology. The operation of geographically related companies carrying out complementary activities will have a positive effect on the creation and implementation of this technology, including in industrial production.

15 Why Blockchain Developers are being given the VIP treatment, available at: https://www.fdiintelligence.com/article/75453 (accessed 28.01.2023).
Membership in a cluster is strongly believed to enhance local productivity and competitiveness. It is no wonder that policymakers are concerned to create, establish, promote or simply label existing interfirm networks or agglomerations of firms or industries as clusters [23].

**2.3 Public-private partnership for Blockchain projects**

Public-private partnership (hereinafter, PPP) is an important tool that helps to attract investment in the development of socially significant projects. This tool is actively used all over the world; there is even a term, "innovative public-private partnership" (innovative PPP), for the creation and implementation of digital innovations. The European Union is actively using the mechanisms of public-private partnership for the development of infrastructure projects, and EU legislation also provides for the possibility of creating innovations such as robotics and supercomputers under contractual forms of innovative PPP. For example, the creation and development of robotics and artificial intelligence, one of the key areas for the development of the digital economy, is actively taking place in foreign countries precisely on the basis of PPP (Cyman et al., 2020). In the European Union, in particular, research in the field of robotics received the largest funding under the innovation program Horizon 2020 on the basis of PPP projects, about €190 million. Under another European robotics development program, SPARC, EU states are investing €700 million, and the private sector €2.1 billion[16], in the creation of industrial robotics. In addition, the development of another breakthrough direction of the digital industry, supercomputers (high performance computing), in the EU countries is also carried out on the basis of PPP[17].

Note that in the United States there is a separate research and development program in the field of implementing the Blockchain technology in the electoral process. This program is based on the principles of public-private partnership and was initiated by the Government Blockchain Association (GBA) in the USA. The objectives of the GBA Public-Private Partnership (PPP) program include researching the technological, regulatory and political issues associated with Blockchain and voting; the second phase of the program includes developing the requirements for, implementing, and deploying Blockchain-based voting solutions[18]. Another PPP project in the field of the creation and implementation of the Blockchain technology has also been launched in the United States: the Security and Software Engineering Research Center at Georgetown University (S2ERC). S2ERC is a great example of a public-private partnership that seeks to merge the interests of the federal government and commercial innovation[19]. Yet another example of PPP in the field of the creation and implementation of Blockchain projects is the infrastructure project of the US government and DeFi to create toll roads, payments for the use of which are recorded under the Blockchain program[20]. The Chinese government is also actively developing public-private partnerships in the creation and implementation of Blockchain technologies. One of these projects was the creation of a fund (Xiong'An Global Blockchain Innovation Fund) for the development of Blockchain startups.

16 Is Europe investing in robotics? (In Russ.), available at: http://www.robogeek.ru/analitika/evropa-vkladyvaet-dengi-vrobototehniku (accessed 28.01.2023).
17 Contractual forms of PPP for high performance computer, available at: https://ec.europa.eu/digital-single-market/en/highperformance-computing-contractual-public-private-partnership-hpc-cppp (accessed 28.01.2023).
18 Government Blockchain Association (GBA), available at: https://www.gbaglobal.org/blockchain-voting-public-privatepartnership-ppp-forming-now/ (accessed 28.01.2023).
19 Public Private Partnerships for Innovation Blockchain, available at: https://federalnewsnetwork.com/federal-techtalk/2018/01/public-private-partnerships-innovation-blockchain/ (accessed 28.01.2023).
20 DeFi Blockchain contract, available at: https://www.ledgerinsights.com/us-space-force-awards-blockchain-contract-toxage-security/ (accessed 28.01.2023).
At the same time, the state's share was 25%, while the remaining $1.2 billion were private investments of the Tunlan Investment Company[21].

With regard to the Russian Federation, it should be noted that only in 2018 was the Federal Law "On Public-Private, Municipal-Private Partnership" (2015) (hereinafter, the Law on PPP) amended to allow the creation of information technology objects within the framework of PPP. The introduction of these changes should be assessed positively, since in the previous edition the creation of information technology objects was not allowed within the framework of PPP. Meanwhile, the fact that only information technology objects can be created on the basis of PPP limits the possibility of implementing PPP in the field of innovation. According to Art. 2 of the Federal Law "On Information, Information Technologies and Information Protection" dated July 27, 2006, No. 149-FZ (2006), "information technologies" are "processes, methods of searching, collecting, storing, processing, providing and disseminating information, and ways of implementing such processes and methods". It seems that the term "information technology objects" chosen by the legislator significantly limits the potential of PPPs in the field of creating innovations.

Note that within the framework of the Federal Program "Digital Economy" it is proposed to develop a number of "end-to-end" digital technologies: big data; neurotechnology and artificial intelligence; distributed ledger systems; quantum technologies; new production technologies; the industrial internet; robotics and sensorics components; wireless technology; and technologies of virtual and augmented reality. This list is not exhaustive and can be expanded as new technologies appear and develop. At the same time, while such digital technologies as artificial intelligence, neurotechnologies, wireless communication technologies and virtual reality can be considered information technologies and, accordingly, created within the framework of PPP, classifying a whole range of end-to-end digital technologies, such as new production technologies and the components of robotics, as information technology objects is highly controversial. This, in turn, may affect the opportunities for developing the Blockchain technology itself within the framework of PPP. The fact is that a project implemented within the framework of a PPP may be associated with not one but several digital innovations, and in the event that one of them does not "fall" under the regulation of this act, the creation and implementation of the rest may be questionable.

21 China invests $16 billion to develop Blockchain PPP, available at: https://cryptor.net/news/kitay-investiruet-v-blokcheyntehnologii-16-mlrd-v-ramkah-gosudarstvenno-chastnogo-partnerstva (accessed 28.01.2023).
It seems that the current legislation on PPP and its legal forms should be amended to allow the creation of digital technologies within the framework of PPP. Such changes, it seems, would be more conducive to the development of innovation and the digital economy on the basis of PPP and would thereby improve a country's competitiveness in the digital technology market (Ertz and Boily, 2019).

**2.4 Other tools to stimulate Blockchain**

**_2.4.1 Standardization for Blockchain_**

In the context of intensive digitalization, the most important component of state innovation policy today is the creation of standards in the field of digital technologies, consolidating in them the technical aspects of the functioning of such technologies [8]. Standardization in the field of the Blockchain technology will make it possible to develop a universal terminology associated with this technology and will ensure the safe use of the technologies based on it. Moreover, the standardization of this technology will increase the level of its interoperability with other digital technologies, which, in turn, will have a positive impact on the development of scientific and technological progress[22]. At the same time, the development and adoption of "ineffective" standards can constrain the development of digital technologies. In this regard, global international cooperation and coordination on the development of standards in the field of distributed ledger technology will be critical for the successful standardization of digital technologies in general, ensuring fair competition, removing trade barriers and allowing innovation to flourish[23].

Today, standards in the field of the Blockchain technology are being developed intensively. International standardization organizations, as well as the authorized bodies of many foreign countries, are actively involved in this process. The International Organization for Standardization (ISO) established an international technical committee for the standardization of Blockchain and distributed ledger technologies in 2016 (ISO/TC 307 Blockchain and Distributed Ledger Technologies)[24]. The committee includes five working groups: on Blockchain architecture and ontology, scope, security and privacy, identification, and smart contracts. The committee comprises 35 states, led by Australia. In March 2017, the first Blockchain standardization roadmap was published. Standards in this field are also developed by the standardization bodies of individual foreign countries. For example, in 2020 a focus group on the application of distributed ledger technology (FG DLT), established by the International Telecommunication Union (ITU-T) Standardization Sector, completed its work in Geneva. The Institute of Electrical and Electronics Engineers (IEEE) is working on a series of standards for general-purpose frameworks and architectures, interoperability, core technology components, and Blockchain industry specifications (P2418) (Blockchain standards, 2020).

22 Artificial Intelligence's standardization helps create innovation friendly framework conditions for the technology of the future, available at: https://www.din.de/blob/306690/f0eb72ae529d8a352e0b0923c67b6156/position-paper-artificialintelligence-english--data.pdf (accessed 28.01.2023).
23 U.S. leadership in AI: A Plan for Federal Engagement in Developing Technical Standards and Related Tools, available at: https://www.nist.gov/sites/default/files/documents/2019/08/10/ai_standards_fedengagement_plan_9aug2019.pdf (accessed 28.01.2023).
24 Blockchain standards, available at: https://blockchain.ieee.org/standards (accessed 28.01.2023).
The National Institute of Standards and Technology (NIST), within the framework of the doctrine of US leadership in the field of digital technologies, approved the "Plan for Federal Engagement in Developing Technical Standards and Related Tools" in 2019 (U.S. leadership in AI…, 2019) and is actively working on the creation of an international standard for the Blockchain technology. Thus, in the fall of 2020, NIST posted the draft standard NISTIR 8301, Blockchain Networks: Token Design and Management Overview, which provides a high-level technical overview and conceptual framework of token designs and management methods. The draft standard for the application of the Blockchain technology in industry is also important: the Blockchain Project for Industrial Applications Community of Interest is providing guidelines to create a (better) synergy between end users, the research community, and solution providers in order to reduce the complexity, cost, and delay of the adoption of Blockchain technologies[25].

In modern conditions, Russia also does not stay away from world trends. To date, it has adopted two strategically important documents in the field of the technical regulation of digital technologies. One of these is the Passport of the Digital Economy Program[26], which provides for the development of a federal project, "Normative Regulation of the Digital Environment", aimed, among other things, at improving standardization mechanisms in the field of digital technologies. In turn, the Action Plan for the "Normative Regulation" direction of the "Digital Economy of the Russian Federation" program dated December 18, 2017 envisages a set of measures to improve the mechanisms for standardizing digital technologies in order to eliminate barriers to their use. Among the activities of this plan is amending the current legislation in order to simplify the procedures for developing standardization documents, shorten the time for their development, and accelerate the adoption of national standardization documents based on, or taking into account, the standards of the most authoritative associations and organizations.

The measures proposed in the Plan cannot be assessed unambiguously. On the one hand, establishing the possibility of adopting standards based on, or taking into account, the standards of the most authoritative associations and organizations will contribute to a better "filling" of such documents with technical requirements already tested in practice. On the other hand, simplifying the standardization procedure and shortening its time frame will not in itself contribute to the development and adoption of "working" standards in the field of the Blockchain technology. In this case, it is necessary to revise not the quantitative but the qualitative aspects of standardization.

25 NIST Blockchain Standardization, available at: https://www.nist.gov/blockchain (accessed 28.01.2023).
26 Russia's National Program for Digital Economy, available at: https://ac.gov.ru/en/projects/project/digital-economyprogram-implementation-42 (accessed 28.01.2023); Plan of measures for the direction "Regulatory regulation" of the program "Digital economy of the Russian Federation", 2017 (in Russ.), available at: http://static.government.ru/media/files/P7L0vHUjwVJPlNcHrMZQqEEeVqXACwXR.pdf (accessed 28.01.2023).
The need for standardization in the field of digital technologies predetermined the creation of a technical committee for Blockchain standardization, named "Distributed Ledger Technologies and Blockchain Hardware and Software". Its main task is to increase the efficiency of work on the development of the domestic regulatory and technical base in the field of distributed ledger and Blockchain technologies. Within the framework of TC 26, methodological recommendations on terminology were issued: MR 26.4.001-2018, "Terms and definitions in the field of chain data recording technologies (Blockchain) and distributed ledgers" (2018). One of the strategically important directions of the committee's work is participation in the international standardization process on behalf of the Russian Federation, including the consideration of the application of international standards in the field of distributed ledger technology at the national level. This is important, since the participation of this Russian committee in international standardization will contribute to ensuring national interests to a greater extent than mere adherence to an international standard developed without the participation of representatives of the country. According to experts, it is obvious today that there is a need to move from the passive assimilation of foreign experience to the stage of the active construction of domestic developments in the field of standardization, which should significantly strengthen Russia's position in the field of high technologies[27].

It seems that a number of important points should be taken into account when preparing standards in the field of the Blockchain technology. First, it is imperative for the international community to continue to work together on the standardization of digital technologies. Second, at the national level, it is worth actively involving the private sector in the standardization process, and not only large and medium-sized businesses but also small businesses. It should be remembered that small businesses can also be actively involved in the development and application of the Blockchain technology and, as a rule, are more "mobile" in these matters. Third, it is recommended to involve leading research universities and research organizations in the process of developing digital standards. Finally, fourth, in order to stimulate the participation of these entities in the development of standards in the field of the Blockchain technology, it is also necessary to create a mechanism to compensate their costs of participation in the development of international and national standards.

27 Standardization of the Digital Economy, available at: http://www.connect-wit.ru/standartizatsiya-tsifrovoj-ekonomikirossii.html (accessed 28.01.2023).

**_2.4.2 Governmental support for Blockchain companies due to COVID-19_**

During the pandemic, programs to support business entities engaged in the creation and implementation of digital technologies, including Blockchain technologies, are of particular importance.
At the moment, there are practically no government support programs for Blockchain companies; the US is an exception. The U.S. Small Business Administration (SBA) launched the Paycheck Protection Program (PPP), under which more than 75 companies in the Blockchain industry received government loans totaling $30 million. The Paycheck Protection Program was created by the Trump administration during the COVID-19 outbreak to help businesses pay their employees during the ongoing economic crisis[28] (Market Wrap…, 2021). Note that in other countries the situation is diametrically opposite. For example, in the Russian Federation the Blockchain projects industry has not yet received any financial support from the state (subsidies, compensation for lost rental income) (Blockchain has officially become…, 2020). According to experts, at the moment the infrastructure for supporting distributed ledger systems is insufficient to ensure the continuous improvement of the relevant solutions. This is especially true today, in the context of the financial and economic crisis, the imposed trade restrictions on certain technological components or ready-made solutions, the inaccessibility of foreign capital markets, the lack of opportunities for exchanging experience with foreign experts, and the insufficient demand for such solutions in the domestic market when foreign markets are not accessible (Blockchain will bring…, 2019)[29]. In this regard, the author believes that, in order to support the developers of Blockchain services that can find application in industry, a number of government support measures should be developed, similar to those in the United States. As for Russian legislation, it seems possible to grant such developers, as small and medium-sized businesses, the right to receive financial, property and other support measures in accordance with the legislation on small and medium-sized businesses.

28 Market Wrap: Bitcoin Hovers Around $34.2K While Options Traders Pay Up for Possible ETH Upside, available at: https://www.coindesk.com/market-wrap-bitcoin-hovers-35k-options-traders-pay-eth-upside (accessed 28.01.2023).
29 Blockchain will bring 16 bill to the Russian Economy, available at: https://www.cnews.ru/news/top/2019-0717_blokchejn_prineset_rossijskoj_ekonomike_16_trillionov (accessed 28.01.2023).

**3. RECOMMENDATIONS**

The recommendations for improving the special and experimental regimes are as follows. First, with regard to experimental legal regimes (regulatory sandboxes), it is important to integrate the rules of participation in Blockchain sandboxes with legislation on the protection of personal data, as well as on the protection of consumer rights. In this regard, it is very important to work on improving the national regulatory framework and developing international legislation in this area, since many states are now considering the possibility of creating interstate regulatory sandboxes for testing Blockchain services. Second, it is very important to improve the Blockchain standardization process, again both at the international and national levels. At the state level, it is important to involve small and medium-sized businesses and other business representatives, as well as leading research universities and research
organizations in the development of standards. In order to stimulate their participation in the development of standards for the Blockchain technology, it is also necessary to create a mechanism to compensate their costs of participation in the development of international and national standards. As for the international level, the joint efforts of states to create universal, understandable and, most importantly, working standards are also very important. Third, attention should be paid to improving the legislation on special regimes aimed at attracting private investment in the creation of Blockchain technologies for smart industry: it is necessary to legally provide for the possibility of creating Blockchain clusters within such territories. In addition, it seems necessary to develop public-private partnerships in the creation and implementation of the Blockchain technology. This would contribute to the development of innovation and the digital economy on the basis of PPPs and thereby make it possible to increase the competitiveness of each country in the digital technology market. To support the developers of Blockchain services that can find application in industry, a number of government support measures, including financial, property and other measures, should also be developed.

**4. CONCLUSIONS**

Thus, for the legal support of the implementation of the Blockchain technology, including in industrial production, states apply special and experimental regimes. As a rule, such regimes are largely universal across states. In this regard, it can be concluded that for the successful implementation of the Blockchain technology in industrial production, states should use the particular tools and mechanisms that make up the content of these regimes. This will make it possible not only to determine the national legal framework for the implementation of the Blockchain technology, but also to work together to create international regulation for the implementation of the Blockchain technology in industrial production, to develop cooperation in this area, and to share and develop the best world practices. The latter is especially important given the fact that the instruments and mechanisms used by states that make up the content of these regimes are not without drawbacks. These shortcomings, in turn, are due to the lack of quality legal regulation of such instruments, and eliminating them also requires a concerted effort, both nationally and internationally. The combined use of these special and experimental regimes (special and experimental regulation) will increase the attractiveness of a state's jurisdiction by creating more adequate legal conditions for the activities of investors and developers of services and products based on the Blockchain technology. The conclusions reached by the author can be used in the development of the international and national legal foundations for the implementation of the Blockchain technology in industrial production. In addition, the results of the study can serve as a basis for further scientific research in the field of the legal regulation of the creation and implementation of the Blockchain technology and other digital technologies.

**REFERENCES**

[1] Ali, O., Ally, M., Clutterbuck, Y. (2020). The state of play of blockchain technology in the financial services sector: A systematic literature review.
_International Journal of Information Management_, 54, 102199.

[2] Arner, D., Barberis, J., Buckley, R.P. (2016). The Evolution of Fintech: A New Post-Crisis Paradigm? _Georgetown Journal of International Law_, 47, 1271-1281.

[3] Bouraga, S. (2021). A taxonomy of blockchain consensus protocols: A survey and classification framework. _Expert Systems with Applications_, 168, article number 114384.

[4] Balasubramanian, S., Shukla, V., Singh, J., Islam, N., Saloum, R. (2021). A readiness assessment framework for Blockchain adoption: A healthcare case study. _Technological Forecasting and Social Change_, 165, article number 120536.

[5] Borg, J.F., Schembri, T. (2019). The regulation of blockchain technology. In: J. Dewey (Ed.), _Blockchain & Cryptocurrency Regulation_ (pp. 188-192).

[6] Bost, F. (2019). Special economic zones: methodological issues and definition. _Transnational Corporations_, 26(2), 141-153.

[7] Campanile, L., Iacono, M., Marulli, F. (2021). Designing a GDPR compliant blockchain based IoV distributed information tracking system. _Information Processing & Management_, 58(3), article number 102511.

[8] Cyman, D., Gromova, E., Juchnevicius, E. (2021). Regulation of Artificial Intelligence in BRICS and the European Union. _BRICS Law Journal_, 8(1), 86-115.

[9] De Filippi, P., Mannan, M., Reijers, W. (2020). Blockchain as a confidence machine: The problem of trust & challenges of governance. _Technology in Society_, 62, article number 101284.

[10] Dong, Z., Luo, F., Liang, G. (2018). Blockchain: a secure, decentralized, trusted cyber infrastructure solution for future energy systems. _Journal of Modern Power Systems and Clean Energy_, 6(5), 958-967.

[11] Enright, M.J., Scott, E., Chung, K. (2005). _Regional Powerhouse: The Greater Pearl River Delta and the Rise of China_. Singapore: John Wiley & Sons (Asia), 404 p.

[12] Ertz, M., Boily, E. (2019). The rise of the digital economy: Thoughts on blockchain technology and cryptocurrencies for the collaborative economy. _International Journal of Innovation Studies_, 3(4), 84-93.

[13] Elisavetsky, A., Marun, M.V. (2020). Technologies applied to conflict resolution. Understanding it for the efficiency of ODR and its projection in Latin America [La tecnología aplicada a la resolución de conflictos. Su comprensión para la eficiencia de las ODR y para su proyección en Latinoamérica]. _Revista Brasileira de Alternative Dispute Resolution_, 3(2), 51-69.

[14] Fan, K., Wang, Sh., Ren, Y. (2018). MedBlock: Efficient and Secure Medical Data Sharing Via Blockchain. _Journal of Medical Systems_, 42(8), 136-145.

[15] Ferreira, D.B., Filho, E.A. (2020). Action for Annulment of Arbitral Award: a doctrinaire and empirical analysis of the jurisprudence of the courts of Santa Catarina, Rio de Janeiro and São Paulo between 2015 and 2019 [Anulatória de sentença arbitral: uma análise doutrinária e empírica da jurisprudência dos tribunais dos estados de Santa Catarina, Rio de Janeiro e São Paulo entre 2015 e 2019]. _Revista Brasileira de Alternative Dispute Resolution_, 3(2), 195-214.

[16] Gabison, G. (2016). Policy Considerations for the Blockchain Technology Public and Private Applications. _Science and Technology Law Review_, 19(3), 327-350.

[17] Gromova, E., Ivanc, T. (2020). Regulatory Sandboxes (Experimental Legal Regimes) for Digital Innovations in BRICS. _BRICS Law Journal_, 7(2), 10-36. https://doi.org/10.21684/2412-2343-2020-7-2-10-36

[18] Gromova, E. (2018).
The Free Economic Zone of the Republic of Crimea and the Federal City of Sevastopol. _Russian Law Journal_, 6(3), 79-99. https://doi.org/10.17589/2309-8678-2018-6-3-79-99

[19] Ivanc, T., Rijavec, V., Kerestes, T. (2016). _Theoretical background of using information technology in evidence taking. Dimensions of evidence in European civil procedure_. UK: Wolters Kluwer.

[20] Jani, N., Panda, P. (2019). Blockchain Technology: A Study of Investment and Applications Introduction. In _NCUICM-2019 International Conference on Management: Winning in a VUCA World_, First Impression. Excel India Publishers.

[21] Joppen, R., Lipsmeier, A., Tewes, A., Dumitrescu, R. (2019). Evaluation of investments in the digitalization of a production. _Procedia CIRP_, 81, 411-416.

[22] Kraljić, S. (2020). New family code and the dejudicialization of divorce in Slovenia. _Balkan Social Science Review_, 15, 157-176.

[23] Lehmann, E., Menter, M. (2018). Public cluster policy and performance. _The Journal of Technology Transfer_, 43(2), 558-592. https://doi.org/10.1007/s10961-017-9626-4

[24] Mavrinskaya, T.V., Loshkaryov, A.V., Churakova, E.N. (2017). Depersonalization of Personal Data and "Big Data" Technology (BIG DATA).

[25] Mohamad, A.M., Zaiton, H., Mohd Bahrin, O. (2017). Balancing Open Justice and Privacy Rights in Adopting ICT in the Malaysian Courts. _Advanced Science Letters_, 23(8), 7996-8000.

[26] Nakamoto, S. (2009). Bitcoin: A Peer-to-Peer Electronic Cash System. https://bitcoin.org/bitcoin.pdf

[27] Nikitin, E., Marius, M.C. (2020). Unified Digital Law Enforcement Environment – Necessity and Prospects for Creation in the "BRICS Countries". _BRICS Law Journal_, 7(2), 66-93.

[28] Ostanina, E., Titova, E. (2020). The Protection of Consumer Rights in the Digital Economy Conditions – the Experience of the BRICS Countries. _BRICS Law Journal_, 7(2), 118-147.

[29] Podshivalov, T. (2018). Protection of Property Rights Based on the Doctrine of Piercing the Corporate Veil in the Russian Case Law. _Russian Law Journal_, 6(2), 39-72. https://doi.org/10.17589/2309-8678-2018-6-2-39-72

[30] Jenik, I., Lauer, K. (2017). Regulatory Sandboxes and Financial Inclusion.

[31] Roman-Belmonte, J.M., De la Corte-Rodriguez, H., Rodriguez-Merchan, E.C. (2018). How blockchain technology can change medicine. _Postgraduate Medicine_, 130(4), 420-427.

[32] Swan, M. (2015). _Blockchain: Blueprint for a New Economy_. Newton: O'Reilly Media, Inc.

[33] Spevakov, A.G., Spevakova, S.V., Primenko, D.V. (2020). Method of data depersonalization in protected automated information systems. _Radio Electronics, Computer Science, Control_, 1, 162-168. https://doi.org/10.15588/1607-3274-2020-1-16

[34] Sultan, K., Ruhi, U., Lakhani, R. (2018). Conceptualizing Blockchains: Characteristics & Applications. In _11th IADIS International Conference Information Systems_ (pp. 49-57).

[35] Tatar, U., Gokce, Y., Nussbaum, B. (2020). Law versus technology: Blockchain, GDPR, and tough tradeoffs. _Computer Law & Security Review_, 38, article number 105454.

[36] Zeng, Zh. (2012). _China's Special Economic Zones and Industrial Clusters. Success and Challenges: Working Paper_. https://www.lincolninst.edu/sites/default/files/pubfiles/2261_1600_Zeng_WP13DZ1.pdf

[37] Xu, Zh., Zhang, J., Song, Zh., Liu, Y. (2021).
A scheme for intelligent blockchain-based manufacturing industry supply chain management. _Computing_. https://doi.org/10.1007/s00607-020-00880-z

[38] Cheah, S., Ho, P., Pattalachinti, S. (2018). Blockchain Industries, Regulations and Policies in Singapore. _Asian Research Policy_, 9(2), 83-98.

[39] Veselkova, E. (2019). Goals and Methods of Using Special Economic Zones in International Economic Cooperation. _Norwegian Journal of Development of the International Science_, 37, 53-56.

**ABOUT THE AUTHORS**

**Daniel Brantes Ferreira** – Ph.D., Senior Researcher, National Research South Ural State University (Russia); Professor, AMBRA University (USA); CEO, Brazilian Centre for Mediation and Arbitration (Rio de Janeiro, Brazil)
Email: daniel.brantes@gmail.com
ORCID ID: 0000-0003-0504-1154
Web of Science Researcher ID: AEN-4058-2022
Scopus Author ID: 56555993000
Google Scholar ID: 7maCMCUAAAAJ

**Elizaveta Alexandrovna Gromova** – Ph.D. (Law), Associate Professor, Deputy Director of the Law Institute for international activity; Associate Professor, Department of Entrepreneurial, Competition and Environmental Law, South Ural State University (national research university) (Chelyabinsk, Russian Federation)
Address: 454090, Lenina Prospekt, 78, Chelyabinsk, Russian Federation
Email: gromovaea@susu.ru
ORCID ID: 0000-0001-6655-8953
Web of Science Researcher ID: AAO-8876-2020
Scopus Author ID: 57208846603
Google Scholar ID: fDz6FkUAAAAJ

**ABOUT THIS ARTICLE**

**Conflict of interests:** The authors declare no conflicting interests.
{ "disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.54934/ijlcw.v2i1.48?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.54934/ijlcw.v2i1.48, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "CCBY", "status": "GOLD", "url": "https://ijlcw.emnuvens.com.br/revista/article/download/48/18" }
2,023
[ "JournalArticle" ]
true
2023-06-15T00:00:00
[]
15,819
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Engineering", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/020c3ce8434387e9f141198493a2f68c2e25ca2c
[ "Computer Science" ]
0.896482
An Adaptive Lightweight Security Framework Suited for IoT
020c3ce8434387e9f141198493a2f68c2e25ca2c
Internet of Things
[ { "authorId": "35009042", "name": "M. Domb" } ]
{ "alternate_issns": [ "2199-1073" ], "alternate_names": [ "Internet Thing" ], "alternate_urls": [ "https://www.sciencedirect.com/journal/internet-of-things", "http://www.springer.com/series/11636" ], "id": "2989732e-2668-4b47-9c29-326646a60273", "issn": "2542-6605", "name": "Internet of Things", "type": null, "url": "https://www.journals.elsevier.com/internet-of-things" }
Standard security systems are widely implemented in the industry. These systems consume considerable computational resources. Devices in the Internet of Things [IoT] are very limited in processing capacity, memory and storage. Therefore, existing security systems are not applicable to IoT. To cope with this, we propose downsizing of existing security processes. In this chapter, we describe three areas where we reduce the required storage space and processing power. The first is the classification process required for ongoing anomaly detection, whereby values accepted or generated by a sensor are classified as valid or abnormal. We collect historic data and analyze it using machine learning techniques to draw a contour, where all streaming values are expected to fall within the contour space. Hence, the detailed data collected from the sensors are no longer required for real-time anomaly detection. The second area involves the implementation of the Random Forest algorithm to apply distributed and parallel processing for anomaly discovery. The third area is downsizing cryptography calculations to fit IoT limitations without compromising security. For each area, we present experimental results supporting our approach and implementation. The chapter is organized as follows: we begin with an introduction followed by the relevant literature review. We then discuss rules extraction using machine learning techniques. We present Random Forest as the most suitable ML algorithm for IoT. We proceed with various improvements utilizing RF and IoT attributes. We then outline an experiment that executes RF building and its corresponding classifications using 15 different configurations, each based on a unique combination of the number of processors and the forest size.
##### Chapter 2

#### An Adaptive Lightweight Security Framework Suited for IoT

Menachem Domb

Additional information is available at the end of the chapter.

http://dx.doi.org/10.5772/intechopen.73712

© 2018 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

**Abstract**

Standard security systems are widely implemented in the industry. These systems consume considerable computational resources. Devices in the Internet of Things [IoT] are very limited in processing capacity, memory, and storage. Therefore, existing security systems are not applicable to IoT. To cope with this, we propose downsizing existing security processes. In this chapter, we describe three areas where we reduce the required storage space and processing power. The first is the classification process required for ongoing anomaly detection, whereby values accepted or generated by a sensor are classified as valid or abnormal. We collect historic data and analyze it using machine learning techniques to draw a contour, where all streaming values are expected to fall within the contour space. Hence, the detailed collected data from the sensors are no longer required for real-time anomaly detection. The second area involves the implementation of the Random Forest algorithm to apply distributed and parallel processing for anomaly discovery. The third area is downsizing cryptography calculations to fit IoT limitations without compromising security. For each area, we present experimental results supporting our approach and implementation.

**Keywords: IoT, anomaly detection, entropy, machine learning, random forest, cryptography, RSA**

##### 1. Introduction

The area of the Internet of Things [IoT] is rapidly growing, raising severe security concerns for the entire network. Due to its high traffic volume and real-time operation, a security framework is essential. The system should timely predict possible attacks and react accordingly. Standard security systems are widely implemented in the industry. These systems consume considerable computational resources and cannot operate in IoT devices (i.e., sensors) due to their very limited memory and computation power.
To cope with these limitations, two alternatives come to mind: the development of novel security measures tailored to IoT [1], or downsizing existing security processes to enable proper operation in IoT devices. We apply the latter option, as it is highly recommended to use proven algorithms that have been extensively analyzed and tested, while new algorithms expose the user to vulnerabilities. We introduce lightweight versions of several known security processes. We analyze each relevant process and its corresponding limitations, and then we divide each complex and large process into a collection of smaller processes. These small processes are distributed and executed by sensors connected to the same network, based on their available capacity. Once all small processes are completed, we collect the partial results and input them into a complementary process that integrates them to compose the desired result. The final result is the same as if the original process had been executed.

In this chapter, we describe three areas where we minimize the required storage space and processing power. The first is the classification process required for ongoing anomaly detection, whereby values accepted or generated by a sensor are classified as valid or abnormal. We collect historic data and analyze it using machine learning techniques to draw a contour, and all streaming values are expected to fall within the contour space. The detailed collected data are no longer required, thereby considerably reducing the storage space. The second area involves the implementation of the Random Forest algorithm to apply distributed and parallel processing for anomaly discovery, resulting in the use of limited processing power. The third area is downsizing cryptography calculations, such as RSA, a public-key cryptosystem, to fit IoT limitations.

The rest of this chapter is organized as follows: In Section 2, we describe the preparation stage of the classification process, which minimizes the need for the entire historic data, and then the anomaly detection process using the outcome of the previous stage. In Section 3, we describe the use of the Random Forest algorithm for distributed and parallel processing of automatic classification and anomaly detection. In Section 4, we present an improved implementation of RSA to allow high-class cryptography that runs in an IoT configuration. In Section 5, we conclude this chapter and discuss our ongoing and future work.

##### 2. Classification framework for data streaming anomaly detection

To predict the behavior of a system, we usually examine its past data to discover common patterns and other classification issues. This process consumes considerable computational power and data storage. In this section, we describe an approach and a system which require much less resources without compromising prediction capabilities and accuracy. It employs three basic methods: a common behavior graph, the contour surrounding the graph, and entropy calculation methods. When the system is about to be implemented for a specific domain, the optimized combination of these three methods is considered, such that it fits the unique nature of the domain and its corresponding type of data.
In addition, we present a framework and a process that will assist system designers in finding the optimal methods for the case at hand. We use a case study to demonstrate this approach, with meteorological data collected over 15 years, to classify and detect anomalies in new data. This section is organized as follows: We begin by defining the problem, proceed with various solutions proposed in the literature, and then present our adjustable contour approach. We then show how it is applicable to IoT. We proceed with a case study demonstrating the build-up of the contour and how it is used for instant anomaly detection. We conclude with a summary of the section.

**2.1. Problem definition**

The problem we attempt to solve is the optimization of the amount of sampling data collected, to maintain a proper balance between the quantity of sampling data and the information extracted from it. The problem statement focuses on extracting concepts, methods, rules, and measurements, so that at the end of the process, the original sampling data become redundant and no longer need to be stored. However, to keep improving and adjusting the extracted items to natural changes in the behavior of the sampled mechanism, we incorporate in the approach an ongoing learning process. In addition, in this study, we concentrate on time-dependent streaming sampling data, divided into fixed periods, so that we can repeat the analysis process for each period/cycle. Thus, while there are many classification algorithms using time series sampling, the aim is not to compare the performance of yet another classifier, but rather to present a flexible method to compactly represent the data with several parameters that can be chosen and adjusted. We suggest an independent framework that allows a flexible adaptation of the contour to the nature of the given domain. Indeed, some of the reviewed works, such as Reeves et al. [6], can be revised and adjusted to the problem statement and serve as a valid alternative to the approach we present.

We are striving for the best sampling strategy given sequential data generated by IoT devices. The input is a set of time series D = {d^(1), d^(2), …, d^(n)}, where each time series d^(i) contains pairs (timestamp, numeric value). The required output is an optimal set D_w = {a_1, a_2, …, a_m}, where a_i can be any sampling item, such as a minimal data set, trends, graphs, measurements, or rules, which strongly represents and supports the purpose of the original data set D. We consider the set D_w and the full data set D as containing the same information if they produce the same classifier, that is, if f(d) = f_w(d) ∈ {−1, 1} for every new data series d, where f is a classifier learned from D and f_w is a classifier based on D_w. For instance, we can judge whether a series of yearly temperatures represents an El Niño (EN) year or not, or whether a series of sensor data is characteristic of a suspected intrusion or not. Here, we consider two sets D and D_w as containing the same (or similar) information if both can predict the future pattern of an initial series d; that is, we can use either D or D_w to predict a future item d_n with similar accuracy.

**2.2. Literature review**

Real-world data typically contain repeated and periodic patterns.
This suggests that the data can be effectively represented and compressed using only a few coefficients of an appropriate basis. Mairal et al. [2] study modeling data vectors as sparse linear combinations of basic elements, generating a generic dictionary and then adapting it to specific data. Jankov et al. [3] present an implementation of a real-time anomaly detection system over data streams and report experimental results and performance tuning strategies. Vlachos et al. [4] formulate the problem of estimating lower/upper distance bounds as an optimization problem and establish the properties of optimal solutions to develop an algorithm which obtains an exact solution to the problem. Sakurada and Yairi [5] use autoencoders with nonlinear dimensionality reduction for the anomaly detection task. They demonstrate the ability to detect subtle anomalies where linear PCA fails. Reeves et al. [6] present a multi-scale analysis to decompose time series and to obtain sparse representations in various domains. Chilimbi and Hirzel [7] implement a dynamic pre-fetching scheme that operates in several phases. The first is profiling, which gathers a temporal data reference profile from a running program. Next, an algorithm extracts hot data streams, which are data reference sequences that frequently repeat in the same order. Then, code is dynamically injected at appropriate program points to detect and pre-fetch the hot data streams. Finally, the process enters the hibernation phase, where the program continues to execute with the added pre-fetch instructions. At the end, the program is deoptimized to remove the inserted checks and pre-fetch instructions, and control returns to the profiling phase. Lane and Brodley [8] claim that features can be extracted from object behavior and a domain heuristic. Experiments show that their approach detects anomalous conditions and is able to distinguish a profiled user from other users. They present several techniques for reducing 70% of the storage required for a user profile. Kasiviswanathan et al. [9] proposed a two-stage approach based on detection and clustering of novel user-generated content, deriving a scalable approach by using the alternating directions method to solve the resulting optimization problems. Aldroubi et al. [10] show that for each dataset there is an optimized collection of cells spanning the entire space, which generates the optimized sampling set. The common underlying idea of the reviewed approaches is the problem they aim to solve: optimizing the size of the collected sampling data so that it keeps the proper balance between the quantity of sampling data and the information extracted from it.

**2.3. Contour-based approach**

Briefly, we analyze sampling data collected over several periods. We divide the period into time-units. For example, for a period of a year, we divide it into daily time-units. For each time-unit, we extract one value that represents it. This is done by averaging the samples collected during the time-unit. In the example, we may calculate the average value of all samples of that day. We may also decide to select one of the samples to represent the day, e.g., the first or last sample. We then calculate the average value for each time-unit from the collected values for the same time-unit in all periods, resulting in an average value for a given time-unit.
We repeat this process for all time-units in the period and obtain a graph that represents the average values for an average, common period. Assuming we have the average graph line for an average period, we now calculate the contour around this average. The generated contour represents the standard range of values, such that an unanalyzed period can be compared to this contour. If its graph is completely within the contour, the period is a standard period. If it is completely out of the contour, then it is clearly not standard. If some sections of the graph are within the contour while others are out of it, we use an entropy measure to calculate the overall “distance” of the given period from the standard contour. Assuming an existing entropy threshold, we can decide whether the period is a standard one or not. We apply the same concept at the unit level and decide whether a specific time-unit in a period is within the standard or not. This specific check is relevant, for example, to anomaly detection of IoT behavior.

In conclusion, the entire process is based on three key elements: the average graph per period, the contour around the average graph, and an entropy value representing the overall distance of a period from the contour. Each of these elements—average, contour, and entropy—can be one of several possibilities. For the contour, a simplistic choice would be minimum and maximum (min-max) values. Alternatively, the SD or confidence interval (CI) could be employed. These three elements affect each other, and every choice of such a triplet—average, contour, and entropy—will produce a different behavior of the compressed classifier. The objective is to find the best triplet that makes it possible to disregard the original data after extracting the representative contour, without compromising the ability to successfully analyze future series. In our work, we consistently use the arithmetic average and classical entropy and focus on finding the best contour.

_2.3.1. Finding the optimal contour_

We begin with a supervised learning approach for classification, in which each time series is labeled as one of two classes. To demonstrate, using the data set from the experiments, the time series are year-long recordings of temperature samplings, labeled as positive if the corresponding year was an EN year, and otherwise negative. We now describe in detail the process of building the classifier, with emphasis on finding the optimal contour.

Constructing the best contour is described in **Figure 1**. We begin with raw data collected during N periods, where each record corresponds to a specific time-unit. These cycles have already been classified positive or negative according to some classification criteria. These classified cycles will later be used to determine the best contour. The process is divided into four stages. In stage one, we use a selected average method and calculate the average graph line representing the N given cycles. This is done horizontally, by calculating the average of the values related to the same time-unit across all N cycles. For example, we calculate the average of the values for January 1st across the various years. Doing so for all time-units will generate the average graph line. In stage two, we select several distance calculation methods, and for each method, we construct its associated contour.
This is done by calculating the distance value for each distance method, e.g., the min-max difference, SD, and CI. Taking the distance value, we add it to and subtract it from the average line to get the contour around the average. We repeat this process for all distance methods. At this stage, we have constructed several contours around the average line. The goal now is to select the contour which is most effective in classifying unclassified cycles. This is done in stages three and four. In stage three, we calculate the prediction power of each contour and select the one with the highest prediction power. This is done by summing, for each contour, the number of cases in which its prediction was right and calculating the average entropy of these correctly classified cycles. We do the same for wrong predictions. In stage four, we use one entropy method with an associated threshold value. An unclassified cycle with an entropy value lower than the threshold will be classified positive, and otherwise negative. For each contour, we calculate the entropy of the given classified cycles. The result is a set of entropy values, where some are below the threshold and others are above it.

**Figure 1. Process of finding the optimal contour.**

**a.** We repeat this for all classified cycles. We then sum up the number of correct predictions and their total entropies. We do the same for wrong predictions. We then subtract the totals for wrong predictions from those for correct predictions. We repeat this process for all the constructed contours and select the contour with the highest prediction power.

**b.** Calculating the entropy. The entropy of a period, given a contour, is calculated as follows:

- Marking, for every timestamp, whether the cycle’s value at that timestamp is below, within, or above the contour.
- Calculating the frequency of each of these three possibilities: below (p1), within (p2), and above (p3).
- Using these as a ternary probability distribution, its entropy is calculated according to the formula H = −(p1 log(p1) + p2 log(p2) + p3 log(p3)), taking 0·log(0) = 0.
- The entropy measure is expected to return its minimum value in the two extreme cases: when the cycle graph is entirely contained within the contour and when the cycle graph lies entirely outside of it. All other cycles are expected to fall mostly within the contour, and those which diverge enough from the contour will have a high entropy value, which will lead to the right conclusion.

**c.** Classifying a cycle/period. **Figure 2** describes the process of classifying unlabeled data cycles, as listed below:

**1.** Applying the given data cycle to the contour and matching it according to timestamps.
**2.** Noting for each timestamp whether the data point is below the contour, within it, or above it.
**3.** Marking these cases respectively as −1, 0, and +1.
**4.** Calculating the frequencies of each of the three values: −1 (p1), 0 (p2), and +1 (p3).
**5.** Calculating the entropy of the distribution defined by p1, p2, and p3.
**6.** Classifying the cycle as belonging to the contour’s class if the entropy is below the threshold determined in the learning phase.

**Figure 2. Classifying a cycle.**
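To make the pipeline above concrete, here is a minimal Python sketch of the contour build and the entropy-based classification. It is an illustration under simplifying assumptions, not the chapter's implementation: cycles are fixed-length arrays, the contour is the average ± 3 SD band, entropy uses natural logarithms, and the threshold and toy data are arbitrary.

```python
import numpy as np

def build_contour(cycles, width=3.0):
    """Average graph line plus a symmetric band of `width` standard deviations.

    `cycles` is an (n_cycles, n_timeunits) array of per-time-unit values; the
    SD band is one candidate distance method (min-max and CI would be built
    analogously). Returns (lower, upper) bounds, one value per time-unit.
    """
    avg = cycles.mean(axis=0)  # stage one: horizontal average per time-unit
    sd = cycles.std(axis=0)    # stage two: one candidate distance method
    return avg - width * sd, avg + width * sd

def ternary_entropy(cycle, lower, upper):
    """Entropy of the below/within/above distribution of a cycle vs. a contour."""
    marks = np.where(cycle < lower, -1, np.where(cycle > upper, 1, 0))
    probs = np.array([(marks == v).mean() for v in (-1, 0, 1)])
    probs = probs[probs > 0]   # treat 0 * log(0) as 0
    return float(-(probs * np.log(probs)).sum())

def classify(cycle, lower, upper, threshold=0.4):
    """Step 6: positive (same class as the contour) iff entropy < threshold."""
    return ternary_entropy(cycle, lower, upper) < threshold

# Toy usage: 10 labeled cycles of 365 daily values each.
rng = np.random.default_rng(0)
training = 20 + rng.normal(0, 1, size=(10, 365))
low, high = build_contour(training)
print(classify(20 + rng.normal(0, 1, 365), low, high))  # True: mostly within
print(classify(23 + rng.normal(0, 1, 365), low, high))  # False: straddles the band
```

As noted in (b) above, a cycle lying entirely outside the contour also yields low entropy, so in practice the contour and threshold are selected together in the learning phase to rule out that degenerate case.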
_2.3.2. Advantages of the proposed technique_

The proposed technique has several advantages over other methods. First, the technique is a family of sampling methods and is defined by the three parameters described above. It is reasonable to expect that different datasets will require different parameters for the best sampling. Different combinations can be tested and evaluated to ensure optimal treatment of the data. The technique we propose is therefore flexible and adjustable and thus suits every given data set. Secondly, this technique can be applied not only for classification but also for prediction of time series. Thirdly, the technique can be used to evaluate the reliability of data online. In cases of high fluctuations or sharp changes in the cycle graph, which do not conform to either of the two class contours, suspicion may arise that the reliability of the data has been compromised. This can indicate that the sensor is damaged or that there has been a security breach. Fourthly, the approach allows self-learning and automatic adjustments in cases where common behavior changes and a new standard has been established. Lastly, a post-mortem may occasionally be run to check the system’s reaction to actual behavior and thereafter adjust the system parameters accordingly.

**2.4. Anomaly detection for IoT security**

IoT devices generate time-related data, i.e., structured records containing a timestamp and one or more numeric values. In many cases, we can identify recurrent time frames where the system behavior has a repetitive format. Hence, IoT data have a structure to which the contour approach is highly applicable. IoT security utilizes common data patterns and quantitative measurements. Based on the identified patterns and measurements, we can extract logical rules that will be executed once an exception is discovered. An exception may be any violation of predefined patterns, measurements, and other parameters which represent normal, standard, and permitted behavior. In IoT, there is an abundance of possible patterns, starting with column-level patterns up to a super-network controlling several IoT networks. The goal is to find the methods and tools to define standard patterns and how they can be identified. Once this is done, we can apply the contour method. In our work, we show a two-dimensional contour. Using the same concept, we can expand it to a multi-dimensional contour. This case is common where there is a dependency among several columns within one record, and the same applies where there are dependencies among networks of IoT systems.

**2.5. Case study**

In the following case study, we used meteorological data collected in EN years (positive class) and non-EN (NEN) years (negative class) from 1980 to 1998. For the positive contours, we took data from the EN years 1982, 1983, 1987, 1988, 1991, and 1992. All other years in the range were NEN years. We tested three methods for generating contours: (a) min-max over all cycles; (b) average cycle ± SD; and (c) CI.

**Figures 3 and 4** depict the contours for NEN years. **Figure 3** shows the NEN contour in black according to the average ± SD and depicts how EN years diverge from this contour, as compared to the NEN year 1995. The years 1992 and 1988 (EN years) show clear divergence from the contour, while 1995 (a NEN year) is more contained within the contour.
This is nicely captured by the entropy values, which were 0.4266 for 1992 and 0.3857 for 1988—above the threshold, leading to the conclusion that they are not NEN years—while for 1995, the entropy was 0.3631, lower than those of the EN years, leading to the correct conclusion that 1995 was indeed a NEN year.

**Figure 3. EN cycles on NEN average ± SD contour.**

**Figure 4** shows two contours: the min-max contour and the average ± SD contour. The Y-axis in these graphs is the temperature value, and the X-axis is the time. Within each contour, the year 1995 (a NEN year) is graphed. Its entropy is 0.3631 for the average ± SD contour and 0.2932 for the min-max contour. Both are below the threshold, which leads to the correct conclusion that it should indeed be classified as NEN.

**Figure 4. NEN contours—min-max and SD.**

In the case study, we compared the contours constructed using the average graph ± SD and the min-max bounds. For the SD contour, we obtained a significant entropy value difference between a classified EN case and a NEN case. In comparison, the min-max contour resulted in close entropy values for the EN cycle and the NEN cycle. Thus, the ability to differentiate between two extreme situations using entropy depends on the parameter used to build the contour.

**2.6. Section summary**

In this section, we dealt with the classification problem of an unclassified cycle of IoT streaming data. We introduced the contour approach to draw the borders around the standard area representing a specific class. Given an unclassified cycle, we measure its distance from the contour using an entropy formula. Then, we compare the result to a predefined threshold. If the entropy value is below the threshold, the cycle is of the same class. We propose a process for constructing the best contour, i.e., the one that will presumably predict the correct underlying class. The process is based on three measurement methods: average, distance, and entropy. For each method, there are several alternative formulas that we may use. Each combination of these three methods may result in a different contour and hence a different entropy value for the same unclassified cycle. We select the combination with the maximum difference between positive and negative values. In addition to the initial construction of the class contours from the given data, we suggest ongoing improvement of the initial contours. Namely, we recalculate the class averages and their contours to refine and revise the contours for improved classification performance. In this manner, we are able to improve the contour approach in several respects, such as determining the minimal number of classified cycles required to define the best contour, expanding the use of the contour to discover early trends or significant changes in behavior and adjusting the contour accordingly, and exploring the possibility of dividing one cycle into several segments and associating a different contour method with each segment.

##### 3. Lightweight adaptive random forest for rule generation and execution

The volume of data transmitted by the various sensors continuously grows. Sensors are typically low on storage, memory, and processing-power resources. Data security and privacy are among the major concerns and drawbacks of this growing domain.
An IoT network intrusion detection system is required to monitor and analyze the traffic and predict possible attacks. Machine learning techniques can automatically extract normal and abnormal patterns from a large set of training sensor data. Due to the high volume of traffic and the need for real-time reaction, accurate threat discovery is mandatory. This section focuses on designing a lightweight, comprehensive IoT rule generation and execution framework. It is composed of three components: a machine-learning rule-discovery component, a threat-prediction model builder, and tools to ensure timely reaction to rule violations and to non-standard, ongoing changes in traffic behavior. The generated detection model is expected to identify exceptions in real time and notify the system accordingly. We use random forest (RF) as the machine learning platform for the discovery of rules and real-time anomaly detection. To allow RF adaptation for IoT, we propose several improvements to make it lightweight, and we propose a process that combines IoT network capabilities, messaging, and resource sharing to build a comprehensive and efficient IoT security framework. The rest of this section is organized as follows: We begin with an introduction followed by the relevant literature review. We then discuss rule extraction using machine learning techniques. We present random forest as the most suitable ML technique for IoT. We proceed with various improvements utilizing RF and IoT attributes. We then outline an experiment that executes RF building and its corresponding classifications using 30 different configurations, each based on a unique combination of the number of processors and the forest size.

**3.1. Introduction**

IoT is a network of objects, consisting of sensors, the Internet, software, and exchange of data. This generates critical security issues, which must be addressed. Since to date there is no standard for sensors, any system under development at this stage must consider the possibility that a standard will soon be defined, and systems must be able to easily adjust to it. Along with the limited processing power and the fact that security issues must be dealt with in real time, we realize the immediate need for a flexible and lightweight solution. The solution should be dynamic, open, scalable, distributed, and decentralized. The analysis discovers patterns and measurements from the data, which are then translated into anomaly detection rules associated with actions to be executed when a rule is violated. The rules are then deployed in the IoT devices. When data are received from, or transmitted to, an IoT device, the rules are executed. If the result is positive, the corresponding action is triggered to cope with the situation.

**3.2. Literature review**

Mansoori et al. [11] proposed a systematic process for retrieving fuzzy rules from a given data set. To improve performance, the retrieved rules are then crystallized based on their effectiveness and applicability. Dubois et al. [12] use Sugeno integrals, which are qualitative criteria aggregations where it is possible to assign weights to groups of criteria. They show how to extract if-then rules that express the selection of situations based on local evaluations, and rules to detect bad situations. Barowy, Gulwani, Hart, and Zorn [13] deal with converting data into an appropriate layout, which otherwise requires major investment in manual reformatting.
The paper introduces a synthesis engine to extract structured relational data. It uses examples to synthesize a program in an extraction language. Kosina and Gama [14] presented a fast and compact decision rule algorithm. The algorithm works online to learn rule sets. It presents a technique to detect local drifts by taking advantage of the modularity of the rule sets. Each rule monitors the evolution of performance metrics to detect concept drift. It provides useful information about the dynamics of the process generating the data, faster adaptation to changes, and more compact rule sets. Jafarzadeh et al. [15] used averaging techniques to improve a previous algorithm for association rule mining with respect to specifying minimum support. It uses fuzzy logic to distribute data into different clusters and then tries to provide the user with the most appropriate threshold automatically. Pourpanah et al. [16] used Fuzzy ARTMAP and Q-learning to build a data classification and rule mining model. To justify the classification, the model provides a fuzzy conditional rule; Q-values are used to minimize the number of QFAM prototypes. Mashinchi et al. [17] proposed a granular-rule extraction method to simplify a data set into a granular-rule set with unique granular rules. It performs two stages, to construct and then prune the granular rules. Yang et al. [18] proposed an anomaly detection algorithm for Quick Access Recorder (QAR) data based on attribute support of a rough set. The method retains the time characteristics of QAR data and strengthens the relation between the condition and decision attributes. Tang [19] described an approach to data mining with Excel, using the XLMiner add-in, with an example of mining association rules to illustrate all the steps of this approach. Tong and Koller [20] introduced an algorithm for choosing which instances to request next, in a setting in which the learner has access to a pool of unlabeled instances and can request the labels for some number of them. The algorithm is based on a theoretical motivation for using support vector machines (SVMs). Osugi et al. [21] proposed an active learning algorithm that balances exploration by dynamically adjusting the probability of exploring at each step. Lang et al. [22] proposed an active learning method for multi-class classification. The method selects informative training compounds to optimally support the learning progress. Bharathidason et al. [23] improved performance and accuracy by including only uncorrelated, high-performing trees in a random forest. The reviewed literature focuses on improvements to known rule discovery mechanisms, such as machine learning, to transform them into lightweight systems that can be executed in limited-resource settings. In most cases, the proposed solutions remain general-purpose but can run with fewer required resources. We are seeking a solution that takes advantage of the unique IoT attributes and utilizes them to build a combined, comprehensive framework for IoT security.

**3.3. Rules generation and deployment process**

The process consists of seven stages. Stage 1 collects training data from the IoT network, removes irrelevant records, and complements records with missing data. In stage 2, we apply discovery techniques to extract important measurements and patterns.
Stage 3 consists of generating a rule for each measurement and pattern. In stage 4, we evaluate the effectiveness of each rule with a set of training data. If the number of times a rule has been executed is below a given threshold, the rule is removed from the rule set. Next, in stage 5, we check the completeness and integrity of the generated set of rules. Rules that contradict another rule are removed, and missing rules are added. Stage 6 runs a simulation with the same training data, with the presumption that all the designated rules will be executed. Finally, in stage 7, we deploy the generated rule set. At this point, the system is ready to accept the IoT traffic data in real time and automatically check it against the set of rules.

**3.4. Extracting rules from training data**

A typical sensor record contains the sensor ID, a timestamp, and one or more values per feature. The main source for extracting rules is data collected from the concrete processes involved in the explored domain. What matters for IoT is making an accurate decision in real time and reacting in real time with security alerts, notifications, automation, and predictive maintenance. To ensure the completeness and integrity of the generated set of rules, we use a consistent multi-layer process of accumulating rules, starting with the simplest rules and working up to the most complicated, multi-stage rules. Simple rules are extracted at the single-feature level, and then we proceed with rules extracted from a combination of any number of features having a common relation, such as features of sensors sharing the same workflow. The generated rules at this level relate to basic statistics such as maximum, minimum, average, standard deviation, median, and most frequent value.
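The single-feature layer of this hierarchy is easy to picture. The following Python sketch derives one min/max rule per feature from clean training records and flags violations in new records; the rule form, the slack margin, and the toy data are illustrative assumptions, not the chapter's exact procedure.

```python
import numpy as np

def learn_feature_rules(training, slack=0.1):
    """Derive one (low, high) rule per feature from clean training records.

    `training` is an (n_records, n_features) array; `slack` widens the
    permitted band so borderline-but-valid values are not flagged.
    """
    lows, highs = training.min(axis=0), training.max(axis=0)
    margin = slack * (highs - lows)
    return list(zip(lows - margin, highs + margin))

def check_record(record, rules):
    """Return indices of features violating their rule (empty list = normal)."""
    return [i for i, (lo, hi) in enumerate(rules) if not lo <= record[i] <= hi]

# Toy usage: 1000 clean 3-feature sensor records.
rng = np.random.default_rng(1)
train = rng.normal([20.0, 50.0, 1.0], [1.0, 5.0, 0.1], size=(1000, 3))
rules = learn_feature_rules(train)
print(check_record(np.array([20.5, 52.0, 1.02]), rules))  # [] -> no violation
print(check_record(np.array([20.5, 90.0, 1.02]), rules))  # [1] -> feature 1 violated
```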
More complex relations, such as proportions among subsequent values, sequence trends, and significant patterns, require reasoning capabilities and can be reached with machine learning and data mining techniques. The outcomes are measurements, thresholds, and patterns used to draw the corresponding decision trees. These decision trees tend to grow fast, consuming large storage and memory space, along with high runtime when pruning and analyzing them to find a specific rule. The depth of the tree grows linearly with the number of variables, but the number of branches grows exponentially with the number of states. Decision trees are useful when the number of states per variable is limited. They become complicated when the state of the variables depends on a threshold or on complex computations. Communicating this rationale requires labeling every edge and then tracing the tree path to understand the logic incorporated in it. Complex event processing (CEP) engines are popular in IoT. They support matching time series data patterns that originate from different sources. However, they suffer from the same modeling issues as trees and pipeline processing. Rule engines have two major drawbacks in the context of IoT: the logic representation is not compact, and using it requires much processing power and time. We cope with these drawbacks in two ways:

1. Reduce the number of decision trees and improve the search navigation scope, resulting in a reasonable and acceptable search time.
2. Utilize IoT attributes and functionality to optimize the tree navigation flow and process sharing.

In the following sections, we present random forest machine learning and propose several improvements that remove the known drawbacks.

**3.5. Decision automation using random forest**

Random forest employs bootstrap aggregation for training. While the predictions of a single tree are sensitive to noise in its training set, the average of many uncorrelated trees is not. Bootstrap sampling is a way of decorrelating the trees by showing them different training sets. Many trees reduce the depth and width of each tree and thus save pruning and analysis time, which suits IoT constraints. The algorithm has two key parameters: the number of trees K that form the random forest and the number of features F randomly sampled for building each decision tree. For large, high-dimensional data, a large K should be used. Estimating the performance of random forest for one core is based on the following parameters: the number of trees K, the number of features F, the number of rows R, and the maximum depth D. The estimated runtime is influenced by the number of features. Hence, keeping only the most important features lowers the number of records and keeps the maximum depth low, which improves the overall random forest performance. Random forest performance is better than that of the classical decision tree algorithm. However, it may still be insufficient for IoT due to the memory space and processing power it requires. Hence, building a lightweight RF process and utilizing IoT networking are required. In the following section, we describe four proposals that make random forest lightweight.

**3.6. Improving RF performance and consumption of resources**

**a.** Randomization may cause the occurrence of redundant, irrelevant, or even contradicting trees, which may lead to redundant searches or even to wrong decisions. Therefore, selecting trees with high classification accuracy leads to improved performance and better decision accuracy. A decision process is effective when the difference among the relevant alternatives is significant. RF contains many decision trees, where each of them may contribute to the final decision. Many such trees generally require wider searches and thus expand the decision process. On the one hand, reducing the number of searched trees will shorten the process; on the other hand, it may increase the probability of making the wrong decision. Therefore, a selection criterion for removing the “redundant” trees is required. An initial approach is to remove similar trees, as correlated trees hardly contribute to reaching the correct decision. Thus, for effective RF decisions, we strive to remove correlated trees [23]. The correlation between two trees may be defined in various ways, such as:

1. Distance—we transform the tree into a sequence of values, and then we apply a hashing function on this sequence to get a score. Two trees are correlated if the difference between their scores is below a predefined threshold.
2. Common components—count the number of similar components and compare.
3. Empirically—if, by removing the tree and trying a vast number of cases, we reach the same decisions as we would if the tree were included, the tree has no effect on practical decisions.

**b.** Prioritize trees by simulation using labeled, already classified cases. Instead of removing trees, we propose prioritizing them. The prioritization can be based on an empirical study of the historical use and effectiveness in true/false decisions. Another way is to run an intensive Monte-Carlo simulation and prioritize trees accordingly.
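As a rough illustration of the distance/agreement criterion in (a), trees can be compared by their votes on a held-out slice, and near-duplicates dropped. The 0.95 agreement threshold and the data shapes below are arbitrary choices, not values from the chapter.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Fit a forest, then keep only trees whose votes on held-out data do not
# essentially duplicate the votes of an already-kept tree.
X, y = make_classification(n_samples=2000, n_features=95, random_state=0)
X_fit, X_val, y_fit = X[:1500], X[1500:], y[:1500]

forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_fit, y_fit)
votes = np.array([tree.predict(X_val) for tree in forest.estimators_])

keep = []
for i, v in enumerate(votes):
    # agreement rate with every already-kept tree; 1.0 means identical votes
    if all(np.mean(v == votes[j]) < 0.95 for j in keep):
        keep.append(i)
print(f"kept {len(keep)} of {len(votes)} trees")
```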
**3.7. Prioritize trees by their threat level**

We define several security levels: low, normal, high, and emergency. For each level, we associate the most effective trees and the order in which the trees are to be visited. For each network, we designate a security manager device, which collects messages from its network devices, assesses them, and determines the network security level. When the network is initiated, the designated level is low. As time passes, messages arrive at the security manager device, which analyzes the input and may decide to change the security level. Then, a message is distributed requesting a security level change. Once the level is changed, the local system activates the new tree search schedule.

**3.8. Messaging-assisted best-trees selection**

MQTT is a lightweight messaging protocol over TCP, adjusted to the IoT domain. Given MQTT, we can utilize the IoT network itself to improve performance. We use it to transfer messages and data from one device to another. For example, when a suspicious occurrence is detected by one of the sensors, the device uses the protocol to send alert messages to the other members. The messages include data strings and unique data patterns that receivers should expect to receive and can thus detect a malicious situation. The message may also include the most effective trees that may cope with the suspected threat. When suspicious data reach a sensor, they are analyzed locally, and the best tree sequence is identified. The device sends a message to the security manager, containing the data with the detected anomaly and the sequence of trees to visit, so that it can act accordingly. The messaging protocol serves here as a lightweight alternative to HTTP.
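A minimal sketch of such an alert message, assuming the paho-mqtt client package (1.x API); the broker address, topic name, and payload fields are hypothetical illustrations, not part of the chapter.

```python
import json
import paho.mqtt.client as mqtt  # assumes the paho-mqtt 1.x client API

# A sensor that spots suspicious data publishes an alert carrying the data
# pattern to watch for and the tree sequence that handled it best locally.
alert = {
    "device": "sensor-17",            # hypothetical device name
    "pattern": [20.4, 91.2, 1.01],    # values that triggered the anomaly
    "trees": [3, 12, 7],              # most effective trees, in visit order
    "level": "high",                  # suggested network security level
}

client = mqtt.Client()
client.connect("broker.local", 1883)  # hypothetical broker address
client.publish("iot/security/alerts", json.dumps(alert), qos=1)
client.disconnect()
```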
**3.9. Experiment using random forest in an IoT setting**

In this section, we describe a comprehensive test, simulating the building of various random forests and then running several classification cycles for a given set of anonymous records. We used a computer with eight processors running the random forest PMI platform with 10–1000 trees per forest. It contained a random forest builder, an anonymous-records classification process, and a configuration tool. We sought the best configuration for optimized performance and accuracy of a random forest simulation. A configuration in this context is a combination of the number of processors and the number of trees in a forest. For the simulation, we used 500 anonymous records and 3350 already classified samples, where each sample has 95 attributes. We ran 30 test cycles, where each cycle represented a unique configuration—number of processors: 2, 4, 6, 8, and 16, and number of trees per forest: 10, 100, 250, 500, 750, and 1000. For comparison, all test cycles used the same data set. In cases of similar trees, we ran a process that removes the similar trees. The performance of the entire 30 test cycles is evaluated by accuracy and processing time.

**Figures 5 and 6** show that the accuracy and the performance of each of the processes, individually and combined, are best when using 10 trees per forest and 8 processors.

**Figure 5. Results of running the 30 classification processes.**

**Figure 6. Accuracy and combined results of running the 30 classification processes.**

Based on the above simulations, it seems that for the example at hand, using a relatively small number of trees per forest and multi-core processors is recommended for optimal performance and high accuracy. However, this may not be the common case. Therefore, prior to implementing RF-based anomaly detection, it is recommended that a simulation test be run with the actual data.

In addition, we propose a prototype of an IoT environment. The prototype is composed of one server and six Arduino OS devices. We built two configurations, A and B. In configuration A, all the devices are connected via Wi-Fi to the server, and data transmission between two devices is done through the server. The entire RF is loaded on the server, while each device has one tree installed on it. The data flow of an incoming event in configuration A can be one of the following:

1. An event arrives at a device; the device forwards it to the server, which then runs the RF and classifies the event.
2. An event arrives at a device and the device forwards it to the server. The server forwards it to all devices. Each device checks the event against its local tree and sends the result to the server. The server then tallies the results and sends the reply to the sender, which acts accordingly.

The flow in configuration B is as follows: An event arrives at a device; the device propagates it to the other devices and checks it against its own tree, and the other devices propagate their results back to the sender. The sender classifies the event and acts accordingly. To test the feasibility of the prototype, we used the trees built by the simulation tool and loaded them onto the server and devices accordingly. We transmitted 500 events to the devices in a round-robin schedule. The resulting accuracy level was similar to the level we found in the previous simulation. Performance was out of scope at the prototype stage. Nonetheless, we did not notice streaming interruptions or delays. In future work, we intend to design and perform consistent and comprehensive tests of this device and other similar devices. Based on the results, we will be better able to determine which rules are to be executed in real time and which are to be executed online or in batch mode.
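The shape of that sweep is easy to reproduce with scikit-learn, where n_jobs stands in for the processor count. The data below are synthetic (sized like the chapter's 3350 × 95 sample set), so the numbers will not reproduce the chapter's figures; this is a sketch of the methodology only.

```python
import time
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Sweep (cores x forest size) configurations and report accuracy and time.
X, y = make_classification(n_samples=3350, n_features=95, random_state=0)

for n_jobs in (2, 4, 8):
    for n_trees in (10, 100, 250, 500):
        t0 = time.perf_counter()
        clf = RandomForestClassifier(n_estimators=n_trees, n_jobs=n_jobs,
                                     random_state=0)
        acc = cross_val_score(clf, X, y, cv=3).mean()
        print(f"jobs={n_jobs:2d} trees={n_trees:4d} acc={acc:.3f} "
              f"time={time.perf_counter() - t0:.1f}s")
```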
We ran comprehensive encryption and decryption processes on messages of various lengths. The results prove that lightweight RSA is ready to be incorporated in IoT devices. The rest of this section outlines the relevant literature review. Then, we describe an example of smart modular exponential calculation, which runs efficiently in an IoT architecture. **4.1. Literature review** Lin et al. [24] proposed the execution in parallel on CPU/GPU hybrids, of the Montgomery algorithm, to improve RSA performance and security. Fadhil and Younis [25] proposed a hybrid system, running RSA on multi-core CPU and multi GPU cores. For comparison purposes, they implemented variants of RSA, Crypto++, and the sequential counterpart. Multithread CPU improved performance by 6, over the sequential CPU implementation, and with GPU, it improved 23 times over the sequential implementation. The throughput gained for 1024 bits was ~1800 msg/sec, and for 2048 bits, it was ~250 msg/sec. Yanga et al. [26] suggested a parallel block Wiedemann algorithm in cloud to enhance the performance of GNFS and reduce communication costs, involved in solving large and sparse linear systems over GF. **4.2. Example of the acceleration of a modular exponentiation calculator** The calculation of “a factor b modulo n” is the heart of RSA cryptography and is also the most resource consuming component. Dividing this calculation into smaller parts will allow distributed and parallel processing of this calculation, where each smaller part is calculated by one sensor and later is integrated to obtain the result of “a factor b modulo n.” The underlying concept is the following conceptual equation: ((a mod n) * (b mod n)) mod n = (a*b) mod n. This concept is used by the following algorithm to calculate modular exponentiation. Step 1: Translate the input into a binary number. Step 2: Start at the rightmost digit, let k = 0, for each positive digit calculate the value of 2^k, Step 3: Calculate mod n of the powers of two ≤ b, Step 4: Use modular multiplication properties to combine the calculated mod n values. Steps 2 and 3 can be executed in parallel by several connected sensors. The results from the sensors are then sent to the sensor requested the encryption/decryption, to execute step 4 and obtain the final result. Using a network of 7688 devices, we ran a comprehensive test, which proves the feasibility of executing RSA using parallel and distributed processing. ----- An Adaptive Lightweight Security Framework Suited for IoT 49 http://dx.doi.org/10.5772/intechopen.73712 ##### 5. Conclusion Connecting sensors to the Internet exposes the entire network to malicious penetrations. This is due to poor computation resources in standard sensors, which do not allow the execution of robust security systems. Hence, lightweight primitive systems should be implemented in IoT. To maintain current Internet security level, we adjusted implementations of known security concepts and mechanisms, which contribute to the security of the Internet of things. In this chapter, we focused on three key security elements where downsizing is feasible without compromising security: (a) Eliminating the frequent use of detailed data in the classification process. (b) Adjusted random forest machine learning to work in a distributed and parallel mode, when building the forest and during the detection process. (c) Adjust RSA cryptography calculations which are executed in parallel and distributed. 
##### 5. Conclusion

Connecting sensors to the Internet exposes the entire network to malicious penetration. This is due to the poor computation resources of standard sensors, which do not allow the execution of robust security systems. Hence, lightweight systems should be implemented in IoT. To maintain the current Internet security level, we adjusted implementations of known security concepts and mechanisms so that they contribute to the security of the Internet of Things. In this chapter, we focused on three key security elements where downsizing is feasible without compromising security: (a) eliminating the frequent use of detailed data in the classification process; (b) adjusting random forest machine learning to work in a distributed and parallel mode, both when building the forest and during the detection process; and (c) adjusting RSA cryptography calculations so that they are executed in a parallel and distributed manner. The proposed solutions have smaller footprints, are efficient, and in most cases demonstrate better performance. We show that downsizing and parallel processing are appropriate approaches for implementing comprehensive security concepts for proper operation in the constrained environments of IoT. We are currently working on expanding the current research areas, for example, additional improvements to the RF implementation and the exploration of other machine learning technologies to check their applicability to IoT anomaly detection. We are also exploring other asymmetric cryptography systems to check their applicability to IoT. In parallel, we are investigating authentication methods and technologies to discover one suitable for IoT, or we may consider building an IoT-specific authentication scheme.

##### Author details

Menachem Domb

Address all correspondence to: dombmnc@edu.aac.ac.il

Ashkelon Academic College, Computer Science Department, Ashkelon, Israel

##### References

[1] Aldosari HM, Snasel V, Abraham A. A new security layer for improving the security of Internet of Things (IoT). International Journal of Computer Information Systems and Industrial Management Applications. 2016;8:275-283. ISSN: 2150-7988

[2] Mairal J, Bach F, Ponce J, Sapiro G. Online dictionary learning for sparse coding. In: Proceedings of the 26th Annual International Conference on Machine Learning. Montreal, Quebec, Canada: ACM; 2009. pp. 689-696

[3] Jankov D, Sikdar S, Mukherjee R, Teymourian K, Jermaine C. Real-time high performance anomaly detection over data streams: Grand challenge. In: Proceedings of the 11th ACM International Conference on Distributed and Event-based Systems. ACM; 2017. pp. 292-297

[4] Vlachos M, Freris NM, Kyrillidis A. Compressive mining: Fast and optimal data mining in the compressed domain. The VLDB Journal. 2015;24(1):1-24

[5] Sakurada M, Yairi T. Anomaly detection using autoencoders with nonlinear dimensionality reduction. In: Proceedings of the MLSDA 2014 2nd Workshop on Machine Learning for Sensory Data Analysis. ACM; 2014. p. 4

[6] Reeves G, Liu J, Nath S, Zhao F. Managing massive time series streams with multi-scale compressed trickles. Proceedings of the VLDB Endowment. 2009;2(1):97-108

[7] Chilimbi TM, Hirzel M. Dynamic hot data stream prefetching for general-purpose programs. ACM SIGPLAN Notices. 2002;37(5):199-209

[8] Lane T, Brodley CE. Temporal sequence learning and data reduction for anomaly detection. ACM Transactions on Information and System Security (TISSEC). 1999;2(3):295-331

[9] Kasiviswanathan SP, Melville P, Banerjee A, Sindhwani V. Emerging topic detection using dictionary learning. In: Proceedings of the 20th ACM International Conference on Information and Knowledge Management. ACM; 2011. pp. 745-754

[10] Aldroubi A, Cabrelli C, Molter U. Optimal non-linear models for sparsity and sampling. Journal of Fourier Analysis and Applications. 2008;14(5-6):793-812

[11] Mansoori EG, Zolghadri MJ, Katebi SD. SGERD: A steady-state genetic algorithm for extracting fuzzy classification rules from data. IEEE Transactions on Fuzzy Systems. 2008;16(4):1061-1071. ISSN: 1063-6706

[12] Dubois D, Durrieu C, Prade H, Rico A, Ferro Y. Extracting decision rules from qualitative data using Sugeno integral: A case study. In: Proceedings of the 13th European Conference, ECSQARU 2015. Compiègne, France: Springer; 2015. LNCS 9161, pp. 14-24. ISBN: 978-3-319-20806-0
[13] Barowy DW, Gulwani S, Hart T, Zorn B. FlashRelate: Extracting relational data from semi-structured spreadsheets using examples. In: Proceedings of the 36th ACM SIGPLAN Conference on Programming Language Design and Implementation. Vol. 50, Issue 6. New York: ACM; 2015. pp. 218-228

[14] Kosina P, Gama J. Very fast decision rules for classification in data streams. Data Mining and Knowledge Discovery. 2015;29(1):168-202. ISSN: 1384-5810

[15] Jafarzadeh H, Torkashvand R, Asgari C, Amiry A. Provide a new approach for mining fuzzy association rules using Apriori algorithm. Indian Journal of Science and Technology. 2015;8(S7):127-134. ISSN: 0974-6846

[16] Pourpanah F, Lim CP, Mohamad Saleh J. A hybrid model of fuzzy ARTMAP and genetic algorithm for data classification and rule extraction. Expert Systems with Applications. 2016;49:74-85

[17] Mashinchi R, Selamat A, Ibrahim S, Krejcar O. Granular-rule extraction to simplify data. In: Intelligent Information and Database Systems. Vol. 9012 of the series LNCS. Springer; 2015. pp. 421-429. ISBN: 978-3-319-15704-7

[18] Yang H, Xiao C, Qiao Y. Study on anomaly detection algorithm of QAR data based on attribute support of rough set. International Journal of Hybrid Information Technology. 2015;8(1):371-382. DOI: 10.14257/ijhit.2015.8.1.33. ISSN: 1738-9968

[19] Tang H. A simple approach of data mining in Excel. In: 4th International Conference. IEEE; 2008

[20] Tong S, Koller D. Support vector machine active learning with applications to text classification. Journal of Machine Learning Research. 2001;2:45-66

[21] Osugi T, Kun D, Scott S. Balancing exploration and exploitation: A new algorithm for active machine learning. In: Fifth IEEE International Conference on Data Mining. IEEE; 2005. DOI: 10.1109/ICDM.2005.33

[22] Lang T, Flachsenberg F, von Luxburg U, Rarey M. Feasibility of active machine learning for multiclass compound classification. Journal of Chemical Information and Modeling. 2016. PMID: 26740007. DOI: 10.1021/acs.jcim.5b00332

[23] Bharathidason S, Jothi Venkataeswaran C. Improving classification accuracy based on random forest model with uncorrelated high performing trees. International Journal of Computer Applications. 2014;101(13)

[24] Lin C, Liu J, Li C-C, Chu P-W. Parallel modulus operations in RSA encryption by CPU/GPU hybrid computation. In: Asia Joint Conference on Information Security (AsiaJCIS). IEEE; 2014. DOI: 10.1109/AsiaJCIS.2014.25

[25] Fadhil HM, Younis MI. Parallelizing RSA algorithm on multicore CPU and GPU. International Journal of Computer Applications. 2014;87(6)

[26] Yang LT, Huang G, Feng B, Xu L. Parallel GNFS algorithm integrated with parallel block Wiedemann algorithm for RSA security in cloud computing. Information Sciences. 2016
{ "disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.5772/INTECHOPEN.73712?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.5772/INTECHOPEN.73712, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "CCBY", "status": "HYBRID", "url": "https://www.intechopen.com/citation-pdf-url/59350" }
2018
[ "Review" ]
true
2018-08-01T00:00:00
[ { "paperId": "a83bc4f8ad7e2867ac5a34ccca850d1a52ca3d47", "title": "Real-time High Performance Anomaly Detection over Data Streams: Grand Challenge" }, { "paperId": "8f2983c45eac6f055bc44a2f41dada29e1c46881", "title": "Parallel GNFS algorithm integrated with parallel block Wiedemann algorithm for RSA security in cloud computing" }, { "paperId": "f361bb890bf1186bb4ccdecf0c684c9d5ccb8766", "title": "A hybrid model of fuzzy ARTMAP and genetic algorithm for data classification and rule extraction" }, { "paperId": "49ab7c56113d526411a715c91fe80080e20750c9", "title": "Feasibility of Active Machine Learning for Multiclass Compound Classification" }, { "paperId": "feae78252abb92b6066968d490a1e691f3dc0b94", "title": "Extracting Decision Rules from Qualitative Data Using Sugeno Integral: A Case-Study" }, { "paperId": "8a60432910c1d4c0459bf448008ed967e25529b1", "title": "FlashRelate: extracting relational data from semi-structured spreadsheets using examples" }, { "paperId": "5b3ec3254e8591728d164c2a975db1c3bbc0c257", "title": "Provide A New Approach for Mining Fuzzy Association Rules using Apriori Algorithm" }, { "paperId": "0325826ae01aa5f79af71b4592af5c8d48294b0e", "title": "Granular-Rule Extraction to Simplify Data" }, { "paperId": "3e911b6d87e9e6ac661d1bb04782497d0a958277", "title": "Study on Anomaly Detection Algorithm of QAR Data Based on Attribute Support of Rough Set" }, { "paperId": "ee70341e75c2dffebbabd24b239cc158ad691ed1", "title": "Anomaly Detection Using Autoencoders with Nonlinear Dimensionality Reduction" }, { "paperId": "e9393f9a6a6817b96c51ee49f8697f56635aebea", "title": "Improving Classification Accuracy based on Random Forest Model with Uncorrelated High Performing Trees" }, { "paperId": "1552fa6de4088ed9466cef940e05d9e4d3e731b4", "title": "Parallel Modulus Operations in RSA Encryption by CPU/GPU Hybrid Computation" }, { "paperId": "98f3a2ca2dfe5387da11c283f64c4c16caf2e3f6", "title": "Compressive mining: fast and optimal data mining in the compressed domain" }, { "paperId": "d7aad9d68a40aed4fa215f2e26b31ac7d26c1027", "title": "Parallelizing RSA Algorithm on Multicore CPU and GPU" }, { "paperId": "8465541f4bc7a0488124cad784bccba56647f94f", "title": "Very fast decision rules for classification in data streams" }, { "paperId": "12a2958c923b48849cb1d06faaab357969295806", "title": "Emerging topic detection using dictionary learning" }, { "paperId": "6d91ceb0e58c6f82c6901ab1861bbb91332db2c8", "title": "Managing Massive Time Series Streams with MultiScale Compressed Trickles" }, { "paperId": "12439a6ff384e95ee2262ee982bc055534e30487", "title": "Online dictionary learning for sparse coding" }, { "paperId": "de8230310005d9945271fce3b58a8598241f7ddc", "title": "A Simple Approach of Data Mining in Excel" }, { "paperId": "f15ab3c43ebb0d6201818925eabe41a553386bf7", "title": "SGERD: A Steady-State Genetic Algorithm for Extracting Fuzzy Classification Rules From Data" }, { "paperId": "aa6da31754897581238cdf1f3602bdde325c3c21", "title": "Optimal Non-Linear Models for Sparsity and Sampling" }, { "paperId": "01a27d77257c47a24daaa969f258ea844b9cbff8", "title": "Balancing exploration and exploitation: a new algorithm for active machine learning" }, { "paperId": "60b85b7ee655397a4d2202f9cdf6dd5e3f04f6fd", "title": "Dynamic hot data stream prefetching for general-purpose programs" }, { "paperId": "a8797f1d253c75669d96e6fcceda2be3f8534e1d", "title": "Support Vector Machine Active Learning with Applications to Text Classification" }, { "paperId": "62fdd44b8048848c5bd25cab55ff4913215da731", "title": 
"Temporal sequence learning and data reduction for anomaly detection" }, { "paperId": "ee240b54cba7480f40f182c932edd06f5d5d4ab7", "title": "A New Security Layer for Improving the security of internet of things ( IoT )" } ]
12997
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Engineering", "source": "s2-fos-model" }, { "category": "Environmental Science", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/02118711c1841d380df17c2537690bc0cedc4906
[ "Computer Science" ]
0.903187
A Double Auction for Charging Scheduling among Vehicles Using DAG-Blockchains
02118711c1841d380df17c2537690bc0cedc4906
ACM Trans. Sens. Networks
[ { "authorId": "47093617", "name": "Jianxiong Guo" }, { "authorId": "3437541", "name": "Xingjian Ding" }, { "authorId": "47203186", "name": "Weili Wu" }, { "authorId": "2065276848", "name": "Dingzhu Du" } ]
{ "alternate_issns": null, "alternate_names": null, "alternate_urls": null, "id": null, "issn": null, "name": null, "type": null, "url": null }
Electric Vehicles (EVs) are becoming more and more popular in our daily life, which replaces traditional fuel vehicles to reduce carbon emissions and protect the environment. EVs need to be charged, but the number of charging piles in a Charging Station (CS) is limited, and charging is usually more time-consuming than fueling. According to this scenario, we propose a secure and efficient charging scheduling system based on a Directed Acyclic Graph (DAG)-blockchain and double-auction mechanism. In a smart area, it attempts to assign EVs to the available CSs in the light of their submitted charging requests and status information. First, we design a lightweight charging scheduling framework that integrates DAG-blockchain and modern cryptography technology to ensure security and scalability during performing scheduling and completing tradings. In this process, a constrained multi-item double-auction problem is formulated because of the limited charging resources in a CS, which motivates EVs and CSs in this area to participate in the market based on their preferences and statuses. Due to this constraint, our problem is more complicated and harder to achieve truthfulness as well as system efficiency compared to the existing double-auction model. To adapt to it, we propose two algorithms, namely, Truthful Mechanism for Charging (TMC) and Efficient Mechanism for Charging (EMC), to determine an assignment between EVs and CSs and pricing strategies. Then, both theoretical analysis and numerical simulations show the correctness and effectiveness of our proposed algorithms.
## A Double Auction for Charging Scheduling among Vehicles Using DAG-Blockchains

#### Jianxiong Guo, Member, IEEE, Xingjian Ding, Weili Wu, Senior Member, IEEE, and Ding-Zhu Du

**_Abstract—Electric Vehicles (EVs) are becoming more and more popular in our daily life, replacing traditional fuel vehicles to reduce carbon emissions and protect the environment. EVs need to be charged, but the number of charging piles in a Charging Station (CS) is limited, and charging is usually more time-consuming than fueling. According to this scenario, we propose a secure and efficient charging scheduling system based on a Directed Acyclic Graph (DAG)-blockchain and a double auction mechanism. In a smart area, it attempts to assign EVs to the available CSs in the light of their submitted charging requests and status information. First, we design a lightweight charging scheduling framework that integrates DAG-blockchain and modern cryptography technology to ensure security and scalability during scheduling and the completion of trades. In this process, a constrained multi-item double auction problem is formulated because of the limited charging resources in a CS, which motivates EVs and CSs in this area to participate in the market based on their preferences and statuses. Due to this constraint, our problem is more complicated, and it is harder to achieve truthfulness as well as system efficiency, compared to the existing double auction models. To adapt to it, we propose two algorithms, namely the Truthful Mechanism for Charging (TMC) and the Efficient Mechanism for Charging (EMC), to determine an assignment between EVs and CSs and the pricing strategies. Then, both theoretical analysis and numerical simulations show the correctness and effectiveness of our proposed algorithms._**

**_Index Terms—Electric Vehicle (EV), Charging Scheduling, DAG-based Blockchain, Constrained Multi-item Double Auction, Truthfulness, System Efficiency._**

I. INTRODUCTION

To deal with the fossil energy crisis and reduce the emission of greenhouse gases, Electric Vehicles (EVs) have attracted more and more people's attention because of their great potential. Renewable energy will become the mainstream way of energy supply in the near future. Compared to traditional fuel vehicles, EVs have a number of advantages such as cost reduction, renewability, and environmental protection. With the development of EVs, a large number of Charging Stations (CSs) will be deployed in cities, which is different from current gas stations. Because of the tedious storage and transportation of gasoline, the deployment of gas stations is usually centralized.

Jianxiong Guo is with the Advanced Institute of Natural Sciences, Beijing Normal University, Zhuhai 519087, China, and also with the Guangdong Key Lab of AI and Multi-Modal Data Processing, BNU-HKBU United International College, Zhuhai 519087, China. (E-mail: jianxiongguo@bnu.edu.cn) Xingjian Ding is with the Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China. (E-mail: dxj@bjut.edu.cn) Weili Wu and Ding-Zhu Du are with the Department of Computer Science, Erik Jonsson School of Engineering and Computer Science, The University of Texas at Dallas, Richardson, TX 75080, USA. (E-mail: weiliwu@utdallas.edu; dzdu@utdallas.edu) _(Corresponding author: Xingjian Ding.)_
There is no such problem with electricity, especially after the rise of distributed energy. It can be seen that the deployment of CSs is more distributed, the number of CSs is larger, and each single CS is smaller. Besides, it usually takes more than half an hour to charge an EV, which is not the same thing as being able to refuel in an instant. This has also created a degree of administrative hardship. In this paper, we consider the following scenario: in a smart area, there is a manager that is responsible for the overall scheduling of EVs and CSs in this area. EVs hope to complete charging at the fastest speed and at the least cost, while CSs hope to maximize their profits by providing charging services. However, there are still some challenges that need to be resolved. On the one hand, there is a lack of an effective approach to ensure the security of charging trading between EVs and CSs. Traditional centralized trading platforms depend on a trusted third party to manage and store every transaction between EVs and CSs. These platforms are plagued by attacks [1] [2] such as single point of failure, denial of service, and privacy leakage [3]. A lot of existing research about charging management neglects the security and privacy protection of trading. The advent of blockchain technology has made it possible to improve these issues. Blockchain is a kind of decentralized ledger, which can realize security and privacy protection through modern cryptography and distributed consensus protocols. Based on that, we propose a Blockchain-based Charging Scheduling (BCS) system to manage the charging assignments between EVs and CSs in a secure and efficient manner. First, we are able to protect the contents of communication among the entities in this system from being tampered with or leaked by using digital encryption techniques. However, due to their confirmation delay, limited scalability, and the inherent uncertainty of consensus mechanisms [4] [5], traditional chain-based blockchains are not applicable to transactions with high-frequency characteristics. In addition, the incentive to miners, transaction fees, and restricted block sizes are also important factors that limit the use of such blockchains. To overcome the above drawbacks, the infrastructure of our BCS system is built on a Directed Acyclic Graph (DAG)-based blockchain [6]. It has an asynchronous consensus process, which can make full use of network bandwidth to improve scalability. The DAG-based blockchain has no miners, thus it reduces latency and transaction fees, which makes frequent energy transactions between EVs and CSs possible [6]. At the same time, it can guarantee the same security and decentralization as the traditional blockchain. In our BCS system, EVs work as light nodes, while CSs work as full nodes, which are responsible for issuing new transactions and for storing and maintaining the whole blockchain system. On the other hand, we have mentioned that charging is more time-consuming than fueling. For each CS in this system, the number of charging piles is limited. We can imagine that an EV goes to a CS for charging, but there is no idle charging pile at this moment that can charge it. This EV has to wait for other EVs to finish charging or go to other CSs, which is frustrating. Each EV in this system is more inclined toward those CSs that are located near it or provide better services to it. Because the EVs and CSs in this system belong to different entities, they are driven by their own utilities.
Then, a natural question is how to assign each EV that submits a charging request in this system to a CS with a limited number of charging piles. Based on that, we formulate a constrained multi-item double auction model, where EVs are buyers and CSs are sellers. This model can be considered as an instance of a single-round multi-item double auction process [7]. For the multi-item double auction, Jin et al. [8] [9] proposed a resource allocation problem in mobile cloud computing and designed a truthful resource auction mechanism for the resource trading between mobile users (buyers) and cloudlets (sellers). However, this is a one-to-one mapping from buyers to sellers, that is, at most one mobile user can be assigned to one cloudlet at a moment. This scheme cannot be applied to solve our problem, because the number of EVs that can be assigned to a CS is more than one but less than the number of available charging piles in this CS. In other words, this is a many-to-one mapping from buyers to sellers. Secondly, to maximize the utilities of CSs (sellers), the assignment and price determination depend not only on the unit bids of EVs, but also on their charging amounts. Due to the limited number of charging piles in each CS as well as the different preferences and charging amounts of each EV, our constrained multi-item double auction model is distinguished from any existing double auction mechanisms. It is more complex, and it is harder to achieve truthfulness and system efficiency. In order to solve this challenge, we first design a Truthful Mechanism for Charging (TMC) to determine the winners and transaction prices. By theoretical analysis, it meets the requirements of individual rationality, budget balance, truthfulness, and computational efficiency. Because of achieving truthfulness, TMC sacrifices a part of the system efficiency. To improve the system efficiency, we design an Efficient Mechanism for Charging (EMC) that increases the number of successful trades (winning buyers) significantly more than TMC. Nevertheless, EMC is not able to ensure truthfulness for the buyers in extreme cases. The DAG-based blockchain and the built-in smart contract implemented by the double auction enable our BCS system to work in a distributed and secure manner. The contributions of this paper are summarized as follows:

_• We investigate the challenges of EV charging at present, integrate DAG-blockchain with charging scheduling, and propose a complete framework for charging scheduling and digital trading._

_• We model the assignment between EVs and CSs as a constrained multi-item double auction. This model is first proposed by us, and its objective and constraint are fundamentally different from existing auctions. To solve it, we design a TMC algorithm that can guarantee truthfulness._

_• In order to meet the rapid-response requirement of scheduling systems, we design an EMC algorithm that is more efficient than TMC at the expense of truthfulness._

_• We build a simulation environment for the BCS system, and verify our expected goals for the system and the auction theory through intensive simulations._

**Organization:** In Sec. II, we discuss the state-of-the-art work. In Sec. III, we introduce the background of the DAG-based blockchain. In Sec. IV, we present our BCS system and charging scheduling framework elaborately. In Sec. V, the constrained multi-item double auction is formulated. Then, the two solving mechanisms TMC and EMC are shown in Sec. VI and VII.
Finally, we evaluate our proposed algorithms by numerical simulations in Sec. VIII and show the conclusions in Sec. IX.

II. RELATED WORK

Recently, blockchain has been used as an effective method to deal with the issues of transactions generated by peer-to-peer (P2P) energy trading among EVs. Kang et al. [10] exploited a P2P electricity trading system with consortium blockchain to motivate EVs to discharge for balancing local electricity demand. Wu et al. [11] studied energy scheduling in office buildings by combining renewable energies and workplace EV charging, and used stochastic programming to manage the uncertainty of charging. Liu et al. [12] proposed an EV participation charging scheme for a blockchain-enabled system to minimize the fluctuation level of the grid network and the charging cost of EVs. Su et al. [13] designed a contract-based energy blockchain in order to make EVs charge securely in the smart community, and implemented a reputation-based delegated Byzantine fault tolerance consensus algorithm. Zhou et al. [14] developed a secure and efficient energy trading mechanism based on consortium blockchain for vehicle-to-grid that exploited the bidirectional transfer technology of EVs to reduce the demand-supply mismatch. Xia et al. [15] proposed a vehicle-to-vehicle electricity scheme in blockchain-enabled Internet of Vehicles to address the driving endurance issue of EVs. Guo et al. [16] designed a lightweight blockchain-based information trading framework to realize real-time traffic monitoring. However, due to their confirmation delay, limited scalability, and lack of computational power, traditional chain-based blockchains cannot be deployed in the systems associated with EVs. The DAG-based blockchain (Tangle) [6] emerged as a new type of blockchain, which reduces the reliance on computational power and improves the throughput significantly. Huang et al. [17] presented a DAG-based blockchain with a credit-based consensus mechanism for power-constrained IoT devices. Hassija et al. [18] exploited the DAG-based blockchain to support the increasing number of transactions in the vehicle-to-grid network. In this paper, our BCS system is built on a DAG-based blockchain as well, because the devices involved have limited computational power and the transactions have high-frequency characteristics.

Auction theory has been widely applied in many different fields, such as mobile crowdsensing [19] [20] [21], mobile cloud computing [8] [9] [22], and energy trading [23] [24]. Here, we only focus on double auctions, where buyers (resp. sellers) submit their bids (resp. asks) to an auctioneer. For example, Borjigin et al. [25] proposed a double auction method to maximize the profit of participants that can be applied in service function chain routing and NFV price adjustment. There are two classic models: the McAfee double auction [26] and the Vickrey-based model [27]. They only considered homogeneous items and, unfortunately, the Vickrey-based model cannot satisfy truthfulness. For heterogeneous items (multi-item), Yang et al. [7] designed a truthful double auction scheme for cooperative communications, where the auctioneer first uses an assignment algorithm based on its design goal to get candidates and a mapping from buyers to sellers, and then applies the McAfee double auction to determine the winners and corresponding transaction prices. According to [7], Jin et al. [8] [9] proposed a truthful resource auction mechanism in mobile cloud computing, which is the most relevant to our paper and has been introduced in Sec. I.
However, there are obvious differences in our work. In our constrained multi-item double auction model, the number of buyers that can be assigned to the same seller is constrained. Furthermore, we should not only consider the unit bids of buyers, but also consider their charging amounts. It is more difficult to achieve truthfulness and system efficiency, which is our main contribution in this paper.

III. BACKGROUND FOR DAG-BLOCKCHAINS

Blockchain is an emerging technology that has acted as a decentralized ledger or database since Nakamoto published his original prototype in 2008 [28]; it is implemented with modern cryptographic technologies and distributed consensus algorithms. Because of these technical characteristics, it takes more than half of the computational power to tamper with transactions in the blockchain, which prevents attacks from malicious nodes effectively. In the process of communication and storage, all transactions are encrypted digitally, which realizes the anonymity and privacy protection of the blockchain system. These technologies enable users who do not trust each other to trade freely in a secure and reliable environment without a third-party platform. Then, a number of real applications flourished, such as Bitcoin [28], Ethereum [29], and Hyperledger Fabric [30], all of which were based on the chain-based blockchain. The chain-based blockchain is shown in Fig. 1(a), where a block contains a certain number of transactions. All users maintain the longest main chain jointly, and only the transactions in the main chain can be considered legal. However, this chain-based structure is limited by its high requirement for computational power because of its consensus mechanism based on hashing puzzles; thereby, it cannot be applied to devices with limited power. Secondly, it generates and verifies the blocks in a sequential and synchronous manner, resulting in unacceptable confirmation delay and limited scalability. Take Bitcoin as an example: its average throughput is about 7 TPS (Transactions Per Second) [31]. Thus, it is not suitable for those real-time application scenarios with high-frequency characteristics. The system we design here is based on EVs, which are both resource-limited and high-frequency; hence, the chain-based structure can hardly achieve our goal.

Fig. 1. The comparison of chain-based and DAG-based blockchains, where the square nodes in (a) are blocks and the circle nodes in (b) are transactions: (a) the chain-based blockchain; (b) the DAG-based blockchain.

To improve the scalability, a new blockchain architecture based on the Directed Acyclic Graph (DAG) was proposed, called the DAG-based blockchain or tangle [6]. The DAG-based blockchain is shown in Fig. 1(b), where each node represents a transaction instead of a block. When a new transaction is issued, it must validate two tips, which are previous transactions attached in the tangle but not verified by any others. This validation is denoted by a directed edge in the tangle. This new transaction will be verified by the other upcoming transactions. The time required to verify a tip (confirmation delay) depends on the tip selection algorithms [6] and the rate of new transactions. Generally, there is a weight associated with each transaction, which is proportional to the difficulty of the hashing puzzle defined by itself. When a node issues a new transaction, it has to find a random number nonce such that

$$\mathrm{Hash}(\text{Transaction}, \text{Timestamp}, \textit{nonce}) \le \text{Target} \quad (1)$$

where a smaller target implies a greater weight.
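To make the proof-of-work condition in (1) concrete, the following minimal Python sketch searches for a valid nonce with SHA-256. The JSON byte encoding, the integer form of the target, and the toy transaction are illustrative assumptions, not details fixed by the paper.

```python
import hashlib
import json
from itertools import count

def pow_nonce(transaction: dict, timestamp: int, target: int) -> int:
    """Search for a nonce such that Hash(tx, timestamp, nonce) <= target, as in Eq. (1).

    A smaller target makes the puzzle harder, which in the tangle corresponds
    to a larger weight for the issued transaction.
    """
    payload = json.dumps(transaction, sort_keys=True).encode()
    for nonce in count():
        digest = hashlib.sha256(
            payload + str(timestamp).encode() + str(nonce).encode()
        ).digest()
        if int.from_bytes(digest, "big") <= target:
            return nonce

# Toy usage: a target with ~16 leading zero bits (deliberately easy).
tx = {"from": "EV_1", "to": "CS_3", "amount": 12.5}
print("valid nonce:", pow_nonce(tx, timestamp=1700000000, target=1 << 240))
```

A full node can tune the difficulty simply by shrinking the target, which is exactly how the paper later lets CSs adapt the puzzle to their own computational power.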
At this point, the newly issued transaction is complete and waits to be verified by subsequent transactions. When can we consider a transaction valid? This is related to its cumulative weight. The cumulative weight of a transaction is the weighted sum of the transactions that approve it directly or indirectly. As shown in Fig. 1(b), the cumulative weight of transaction A equals the weighted sum of transactions A, B, C, D, E, F, and G. For a given transaction, a larger cumulative weight can only be achieved by consuming more computational power; thereby, a transaction is more likely to be legal if it has a larger cumulative weight. A transaction is believed to be legal if and only if its cumulative weight exceeds a predefined threshold. Supposing that most computational power is controlled by legal users, we can distinguish the transactions issued by malicious users, since valid transactions will be verified by other legal users and their cumulative weights will grow larger and larger. Different from the synchronous consensus mechanisms in the chain-based blockchain, the consensus process in the DAG-based blockchain is completed asynchronously. This eliminates an inherent defect of the chain-based blockchain regarding consensus finality caused by forking and pruning [32]. Besides, it is able to defend against possible attacks, such as a single point of failure, Sybil attacks, lazy tips, and double-spending. Accordingly, the DAG-based blockchain can not only provide us with a secure and reliable trading environment but also improve the scalability and reduce the requirements on the computational power of devices. A number of real systems based on DAG-based blockchains have emerged, such as IOTA [6], Byteball [33], and Nano [34]. Finally, the devices in our charging scheduling system are resource-limited and trade with others frequently. Therefore, the DAG-based blockchain is an ideal choice to act as the infrastructure to achieve security and privacy protection.
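The cumulative-weight rule just described is easy to state in code. The sketch below stores the tangle as an approval map and sums the own-weights of every transaction that approves a given one, directly or indirectly; the data layout and the particular edges are illustrative assumptions loosely mirroring Fig. 1(b).

```python
def cumulative_weight(tx: str, own_weight: dict, approvers: dict) -> float:
    """Cumulative weight = own weight of `tx` plus the own weights of all
    transactions that approve it directly or indirectly (Sec. III)."""
    seen, stack = {tx}, [tx]
    while stack:
        node = stack.pop()
        for a in approvers.get(node, ()):  # transactions whose edges point at `node`
            if a not in seen:
                seen.add(a)
                stack.append(a)
    return sum(own_weight[t] for t in seen)

# Toy tangle: B and C approve A, D and E approve B, E and F approve C, G approves E.
own_weight = {t: 1.0 for t in "ABCDEFG"}
approvers = {"A": ["B", "C"], "B": ["D", "E"], "C": ["E", "F"], "E": ["G"]}
# Transactions A..G all (transitively) approve A, so its cumulative weight is 7,
# matching the Fig. 1(b) example; a node deems A legal once this exceeds a threshold.
assert cumulative_weight("A", own_weight, approvers) == 7.0
```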
IV. BLOCKCHAIN-BASED CHARGING SCHEDULING SYSTEM

In this section, we present an overview of our Blockchain-based Charging Scheduling (BCS) system by introducing its entities and the charging scheduling framework.

_A. Entities for BCS System_

Consider a smart area: there are a certain number of CSs with charging piles available to charge the EVs in this area. Then, there exists a manager that is responsible for managing the entities and executing the charging scheduling between EVs and CSs. Therefore, there are three main entities in this system, shown as follows:

_• Electric Vehicle (EV): The EVs running in this area play the part of requesters. When it is low on power, an EV will request charging services from the manager._

_• Charging Station (CS): The CSs located in this area play the part of providers that charge the EVs. A CS has many charging piles, and each charging pile can charge one EV at a time. When it is idle, the CS will inform the manager of its status information._

_• Manager: The manager works as an energy scheduler. Each EV sends a request about its charging preference and each CS submits its status information on how many charging piles are available to the manager. Then, the manager acts as an auctioneer to perform a constrained double auction mechanism between EVs and CSs that assigns EVs to CSs and determines the transaction prices. The price charged to each EV and the payment rewarded to each CS are determined by the transaction prices._

Thus, the BCS system can be denoted by $\mathcal{B} = \{M, \mathcal{V}, \mathcal{C}\}$, where $\mathcal{V} = \{V_1, V_2, \cdots\}$ is the set of EVs, $\mathcal{C} = \{C_1, C_2, \cdots\}$ is the set of CSs, and $M$ is the manager. Then, we define a transaction $T$ between $V \in \mathcal{V}$ and $C \in \mathcal{C}$ as the charging trading and digital payment record between them. This transaction is stored in the blockchain and includes the pseudonyms of $V_i$ and $C_j$, the data type, the transaction details, and the timestamp of generation. In order to ensure security and privacy protection, the transaction is encrypted with their digital signatures, and the payment is made in the form of charging coins.

Fig. 2. The architecture of the blockchain-based charging scheduling system in a smart area.

_B. Architecture for BCS System_

The infrastructure of the BCS system $\mathcal{B} = \{M, \mathcal{V}, \mathcal{C}\}$ is established on the DAG-based blockchain, where each aforementioned entity is a node in this system. Depending on their abilities in computation and storage, they can be split into two categories: light nodes and full nodes. Light nodes have limited computational power and memory space; thereby, they are only responsible for generating transactions together with full nodes. Only part of their own information is stored so that they can check it conveniently. Full nodes usually possess powerful servers with multiple functions, which can issue new transactions by finding a valid nonce and validating the previous tips. Moreover, they are responsible for storing and maintaining the entire blockchain, namely the tangle. The architecture of our BCS system is shown in Fig. 2. In our BCS system $\mathcal{B}$, each EV $V_i \in \mathcal{V}$ works as a light node, and each CS $C_j \in \mathcal{C}$ works as a full node. The manager $M$ is a specific full node that manages the entire system to make sure it works, instead of storing and maintaining the tangle. For the full nodes, the difficulty of the hashing puzzle can be set by modifying the target values dynamically to adapt to their own computational power. For the manager, in addition to the above-mentioned function of carrying out the constrained multi-item double auction between EVs and CSs as an auctioneer, it has the right to add or delete nodes in real time according to the actual situation. For example, it can permit new CSs to join this system and remove malicious EVs from this system. Besides, it can block invalid requests from the nodes within the system and prevent various attacks from the nodes outside the system in advance. Also, the architecture of our BCS system is based on the DAG-based blockchain, which is not only distributed and reliable enough to defend against attacks but also improves the throughput to be qualified for high-frequency energy trading.

Fig. 3. The procedure and message flow of charging scheduling between CSs and EVs in a smart area.

In our BCS system, we use asymmetric cryptography, such as the elliptic curve digital signature algorithm [35], for system initialization. Each EV $V_i \in \mathcal{V}$ registers with a trusted authority to become a legitimate node by obtaining a unique identification $ID_i$ that is associated with its license plate and a certificate $Cer_i$ that is signed by the private key of the authority to certify the authenticity of this identity. After verifying its certificate, the EV $V_i$ can join this system and is assigned a public/private key pair $(PK_i, SK_i)$, where its public key works as a pseudonym that is open to all nodes and its private key is kept by itself.
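A minimal sketch of this initialization step, using the pyca/`cryptography` package and the secp256k1 curve as illustrative choices (the paper only specifies an elliptic curve digital signature algorithm [35]); the compressed-point pseudonym encoding is our assumption.

```python
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec

# Key pair (PK_i, SK_i) assigned to an EV when it joins the system.
sk = ec.generate_private_key(ec.SECP256K1())
pk = sk.public_key()

# The public key doubles as the EV's pseudonym, open to all nodes.
pseudonym = pk.public_bytes(
    serialization.Encoding.X962, serialization.PublicFormat.CompressedPoint
)

# Sign a message with SK_i; any node can verify it against the pseudonym PK_i.
message = b"charging request R_i"
signature = sk.sign(message, ec.ECDSA(hashes.SHA256()))
pk.verify(signature, message, ec.ECDSA(hashes.SHA256()))  # raises InvalidSignature on tampering
print("pseudonym:", pseudonym.hex())
```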
In asymmetric encryption, a message encrypted with the public key can be decrypted with the corresponding private key, and vice versa. After joining this system, there is a set of wallet addresses $\{W_i(k)\}_{k=1}^{\theta}$ owned by the EV $V_i$, where we assume there are $\theta$ wallets for each entity. Thus, the account of each EV $V_i \in \mathcal{V}$ that joins this system can be denoted by $AE_i = \{ID_i, Cer_i, (PK_i, SK_i), \{W_i(k)\}_{k=1}^{\theta}\}$. Similarly, the account of each CS $C_j \in \mathcal{C}$ can be denoted by $CE_j = \{ID_j, Cer_j, (PK_j, SK_j), \{W_j(k)\}_{k=1}^{\theta}\}$.

_C. Charging Scheduling Framework_

The detailed procedure and message flow of charging scheduling between EVs and CSs are shown in Fig. 3. We assume that the system operates in discrete time slots. Here, we only consider the situation in time slot $t \in \{1, 2, \cdots\}$; thus, we omit the time superscript $t$ from the following variables. By adding this superscript $t$, we can further design online scheduling according to the actual demand. The specific operations can be divided into 7 steps:

**1) Request and Status:** Each EV $V_i \in \mathcal{V}$ sends a request message $R_i$, which includes its bid for charging at each CS $C_j \in \mathcal{C}$, to the manager. This request message is denoted by $ReqMsg = \{PK_M(SK_i(R_i)), Cer_i, STime\}$, where $PK_M$ is the public key of the manager and $STime$ is the timestamp of message generation. At the same time, each CS $C_j \in \mathcal{C}$ sends a status message $S_j$, which includes its ask for serving a vehicle and the number of available charging piles, to the manager. This status message is denoted by $StaMsg = \{PK_M(SK_j(S_j)), Cer_j, STime\}$. Here, the request and status messages are encrypted with the manager's public key $PK_M$, since they are only allowed to be read by the manager, for privacy protection and fair trading.

**2) Scheduling:** The manager waits to receive request messages from EVs and status messages from CSs. After collecting them, the manager confirms their legitimate identities by verifying their certificates. Then, it works as a scheduler to assign each winning EV to a CS that has unoccupied charging piles. Besides, it determines the price charged to each EV and the payment rewarded to each CS as well, which is executed by the built-in smart contract. The smart contract is implemented by the constrained multi-item double auction model explained in the following Sec. V and VI.

**3) Order and Assignment:** The manager sends an order message $O_i$ to each winning EV $V_i \in \mathcal{V}$ and an assignment message $A_j$ to each winning CS $C_j \in \mathcal{C}$. The $O_i$ includes the CS that can serve it and the price charged to it, which is denoted by $OrdMsg = \{PK_i(SK_M(O_i)), Cer_M, STime\}$. The $A_j$ includes an assignment, i.e., the set of EVs that can be charged at $C_j$, and the payment rewarded to it, which is denoted by $AssMsg = \{PK_j(SK_M(A_j)), Cer_M, STime\}$. Here, the order and assignment messages are encrypted with the public keys $PK_i$ and $PK_j$, since they are permitted to be read only by the respective recipients.

**4) Confirm:** If EV $V_i$ receives an order message from the manager, it implies that it can be charged at the CS $C_x$ designated by the manager. Then, the EV $V_i$ sends a confirm message $F_i$, like "I will come on time", to the designated CS $C_x$. It is denoted by $ConMsg = \{PK_x(SK_i(F_i)), Cer_i, STime\}$, encrypted with $C_x$'s public key $PK_x$ for similar reasons.

**5) Charging:** Once CS $C_x$ receives the confirm message from the EV $V_i$, it checks whether $V_i$ is in its assignment $A_x$. If yes, the CS $C_x$ can provide charging service to EV $V_i$ before the deadline. After charging, the EV $V_i$ generates a new transaction $T_{ix}$ according to their trading information, signs it, and sends $SK_i(T_{ix})$ back to $C_x$.

**6) Transactions:** When CS $C_x$ receives the new transaction from the EV $V_i$, it checks and signs this transaction with its private key as well. Now, the CS $C_x$ issues this new transaction $SK_x(SK_i(T_{ix}))$ in the DAG-based blockchain. It is able to adjust the difficulty of the hashing puzzle by setting different targets dynamically according to its computational power and transaction frequency.

**7) Verification and Payment:** After the transaction $T_{ix}$ is issued, it will be verified to be legal in the future, when its cumulative weight is large enough. At this moment, charging coins are transferred from the wallet of $V_i$ to that of $C_x$. The coins corresponding to the price charged to $V_i$ are deducted from the wallet $W_i(k)$, and the coins corresponding to the payment rewarded to $C_x$ are added to the wallet $W_x(k)$ permanently.
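The message pattern used throughout these steps is sign-then-encrypt: for example, $ReqMsg$ wraps $SK_i(R_i)$ under the manager's public key $PK_M$. The sketch below illustrates this pattern; it substitutes RSA-OAEP for the confidentiality layer and RSA-PSS for the signature purely as an illustrative stand-in, since the paper specifies ECDSA signatures [35] but does not pin down the encryption primitive.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()), algorithm=hashes.SHA256(), label=None)
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)

sk_i = rsa.generate_private_key(public_exponent=65537, key_size=2048)  # EV V_i
sk_m = rsa.generate_private_key(public_exponent=65537, key_size=2048)  # manager M
pk_i, pk_m = sk_i.public_key(), sk_m.public_key()

request = b"R_i = (B_i, r_i)"                        # the EV's bids and charging amount
signature = sk_i.sign(request, pss, hashes.SHA256()) # SK_i(R_i): only V_i can produce this
ciphertext = pk_m.encrypt(request, oaep)             # PK_M(...): only M can read it
# (A deployment would hybrid-encrypt {request, signature} together; they are
# split here only to fit RSA-OAEP's plaintext size limit.)

# Manager side: decrypt with SK_M, then check the signature against V_i's pseudonym PK_i.
plaintext = sk_m.decrypt(ciphertext, oaep)
pk_i.verify(signature, plaintext, pss, hashes.SHA256())  # raises InvalidSignature if forged
```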
V. PROBLEM FORMULATION

In our BCS system $\mathcal{B} = \{M, \mathcal{V}, \mathcal{C}\}$, the CSs in $\mathcal{C}$ provide charging piles for the EVs in $\mathcal{V}$ that need charging. Each CS $C_i \in \mathcal{C}$ has limited charging piles, where the number of charging piles in this charging station is $k_i \in \mathbb{Z}^+$. In general, CSs are distributed evenly across the smart area, and those located in the area center are usually more crowded. Furthermore, CSs have different charging efficiencies, where higher efficiency means shorter charging time. Thus, there are two critical attributes, location and efficiency, associated with each CS, which determine the valuation of an EV toward it. The valuation of an EV toward a CS can be decided according to its requirement. For example, when the battery of an EV is very low, it highly values a CS that is nearest to it. But an EV in a hurry considers both location and efficiency to minimize its charging time. In the trading between EVs and CSs, we aim to incentivize CSs to provide charging services and to meet the demands of EVs. To benefit both EVs and CSs, we design a constrained multi-item double auction model that obtains a truthful assignment between EVs and CSs.

_A. Constrained Multi-Item Double Auction Model_

As shown in Fig. 3, we assume that this system runs in discrete time slots. At each time step, EVs send request messages and CSs send status messages privately to the manager. Based on the single-round multi-item double auction model [7], EVs are buyers and CSs are sellers in this auction. The manager $M \in \mathcal{B}$ works as the trusted third-party auctioneer to assign $n$ buyers to $m$ sellers and to determine the price charged to each buyer and the payment rewarded to each seller. The set of buyers is $\mathcal{V} = \{V_1, V_2, \cdots, V_n\}$ and the set of sellers is $\mathcal{C} = \{C_1, C_2, \cdots, C_m\}$. For each buyer $V_i \in \mathcal{V}$, its bid vector is denoted by $B_i = (b_i^1, b_i^2, \cdots, b_i^m)$, where $b_i^j$ is the unit bid (maximum buying price per unit of charging) of $V_i$ for charging at seller $C_j \in \mathcal{C}$. Additionally, we define a charging vector $R = (r_1, r_2, \cdots, r_n)$, where $r_i$ is the charging amount of $V_i$. For the sellers in $\mathcal{C}$, the ask vector is denoted by $A = (a_1, a_2, \cdots, a_m)$, where $a_j \in A$ is the unit ask (minimum selling price per unit of charging) of $C_j$. As for the number of available charging piles in each CS, we define a vector $K = (k_1, k_2, \cdots, k_m)$, where $k_j \in \mathbb{Z}^+$ is the number of piles that can charge EVs in $C_j \in \mathcal{C}$. Here, we notice that the bids of a buyer vary with sellers, since each EV has different evaluations of the CSs according to its requirements regarding location and efficiency.
However, the ask of a seller remains unchanged among buyers, because it is only concerned about the payment from charging vehicles. By the aforementioned definitions, the request message sent by buyer $V_i$ is denoted by $R_i = (B_i, r_i)$ and the status message sent by seller $C_j$ is denoted by $S_j = (a_j, k_j)$. After it gets the collection $(B, R, A, K)$, where $B = (B_1; B_2; \cdots; B_n)$, the auctioneer determines the winning buyer set $\mathcal{V}_w \subseteq \mathcal{V}$, the winning seller set $\mathcal{C}_w \subseteq \mathcal{C}$, a mapping from $\mathcal{V}_w$ to $\mathcal{C}_w$, that is $\sigma: \{i: V_i \in \mathcal{V}_w\} \to \{j: C_j \in \mathcal{C}_w\}$, the unit price $\hat{p}_i$ charged to buyer $V_i \in \mathcal{V}_w$, and the unit payment $\bar{p}_j$ rewarded to seller $C_j$. The assignment $A_j$ for each $C_j \in \mathcal{C}_w$ is

$$A_j = \{V_i \in \mathcal{V}_w : \sigma(i) = j\} \quad \text{where} \quad |A_j| \le k_j \quad (2)$$

because the CS $C_j$ permits at most $k_j$ EVs to be charged at the same time, which is the reason why this model is called a "constrained" double auction. Moreover, for each buyer $V_i \in \mathcal{V}$, its valuation vector is denoted by $V_i = (v_i^1, v_i^2, \cdots, v_i^m)$, where $v_i^j$ is the unit valuation of $V_i$ for charging at seller $C_j \in \mathcal{C}$. For the sellers in $\mathcal{C}$, the cost vector is denoted by $C = (c_1, c_2, \cdots, c_m)$, where $c_j \in C$ is the unit cost of $C_j$ to provide charging service. Based on the buyer's valuation and the seller's cost, the utility $\hat{u}_i$ of a winning buyer $V_i \in \mathcal{V}_w$ and the utility $\bar{u}_j$ of a winning seller $C_j \in \mathcal{C}_w$ can be defined as follows:

$$\hat{u}_i = (v_i^{\sigma(i)} - \hat{p}_i) \cdot r_i \quad (3)$$

$$\bar{u}_j = (\bar{p}_j - c_j) \cdot \sum_{V_i \in A_j} r_i \quad (4)$$

Otherwise, for a losing buyer $V_i \notin \mathcal{V}_w$ and a losing seller $C_j \notin \mathcal{C}_w$, the utilities are $\hat{u}_i = 0$ and $\bar{u}_j = 0$. Here, the utility $\hat{u}_i$ is proportional to the difference between its valuation and the charged price, which reflects the satisfaction level of $V_i$ with its assigned CS. The utility $\bar{u}_j$ is proportional to the difference between the rewarded payment and its cost, which characterizes the profitability of $C_j$ for providing charging service.

_B. Design Rationales_

The constrained multi-item double auction model defined in the last subsection can be denoted by $\Psi = (\mathcal{V}, \mathcal{C}, B, R, A, K)$. A valid double auction mechanism has to meet the following three properties first:

_• Individual Rationality: The price charged to a winning buyer is not more than its bid, and the payment rewarded to a winning seller is not less than its ask. Considering our model $\Psi$, we have $\hat{p}_i \le b_i^{\sigma(i)}$ for each $V_i \in \mathcal{V}_w$ and $\bar{p}_j \ge a_j$ for each $C_j \in \mathcal{C}_w$._

_• Budget Balance: The total price charged to all winning buyers is not less than the total payment rewarded to all winning sellers, which ensures the profitability of the auctioneer. Thus,_

$$\sum_{V_i \in \mathcal{V}_w} \hat{p}_i \cdot r_i - \sum_{C_j \in \mathcal{C}_w} \bar{p}_j \cdot \sum_{V_i \in A_j} r_i \ge 0 \quad (5)$$

Besides, in our model $\Psi$, we need to guarantee that the assignment for each winning seller is not larger than its number of charging piles, that is, $|A_j| \le k_j$ for $C_j \in \mathcal{C}_w$.

_• Computational Efficiency: The auction results, including the winning buyers, the winning sellers, the mapping from winning buyers to winning sellers, the prices charged to buyers, and the payments rewarded to sellers, can be obtained in polynomial time._

In addition to the above three properties, there are two more important properties that should be satisfied strictly or approximately.

_• Truthfulness: A double auction is truthful if, for every buyer (resp. seller), bidding (resp. asking) truthfully is a dominant strategy that maximizes its utility. That is to say, no buyer can increase its utility by giving a bid that is different from its true valuation, and no seller can increase its utility by giving an ask that is different from its true cost. Considering our model $\Psi$, $\hat{u}_i$ is maximized by bidding $B_i = V_i$ for each $V_i \in \mathcal{V}$, and $\bar{u}_j$ is maximized by asking $a_j = c_j$ for each $C_j \in \mathcal{C}$, when the other players do not change their strategies._

_• System Efficiency: There are a number of different metrics to evaluate the system efficiency of a double auction model. The most common approach [27] is the number of completed trades, that is, the number of buyers in the winning buyer set $\mathcal{V}_w$ in our model $\Psi$. Here, each winning buyer $V \in \mathcal{V}$ is assigned to a seller $C \in \mathcal{C}$ which satisfies the requirements of both buyer and seller. Maximizing it is in line with our original intention of designing this system: to allow as many EVs as possible to be charged. Other metrics, such as the total price charged to winning buyers, the total payment rewarded to winning sellers, and the profit of the auctioneer, should be considered as well based on needs. We will analyze them later._

Regarding truthfulness, we assume the submitted vectors $R$ and $K$ are trusted and cannot be tampered with, because they can be monitored by reliable hardware. Thereby, we only consider the bids $B$ and asks $A$ when analyzing the truthfulness of model $\Psi$. When it is truthful, the double auction model avoids being manipulated maliciously, due to the fact that each player gets its best utility by telling the truth. No player has the motivation to lie, since players do not have to adapt to others' strategies by lying to improve their utilities. Therefore, truthfulness simplifies the strategic decisions of players and ensures a fair market environment, which plays an important role in mechanism design.
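As a quick numerical sanity check of (3)-(5), consider the following toy outcome; every number here is invented purely for illustration and is not taken from the paper's simulations.

```python
# Toy outcome: two winning buyers, both mapped by sigma to the same winning seller.
v = {1: 5.0, 2: 4.0}      # unit valuations v_i^{sigma(i)} of the winning buyers
p_hat = {1: 3.0, 2: 3.0}  # unit prices charged to the buyers
r = {1: 10.0, 2: 20.0}    # charging amounts r_i
p_bar, c, A0 = 2.5, 2.0, [1, 2]   # seller's unit payment, unit cost, and assignment A_0

u_hat = {i: (v[i] - p_hat[i]) * r[i] for i in A0}                           # Eq. (3)
u_bar = (p_bar - c) * sum(r[i] for i in A0)                                 # Eq. (4)
surplus = sum(p_hat[i] * r[i] for i in A0) - p_bar * sum(r[i] for i in A0)  # Eq. (5)

print(u_hat)         # {1: 20.0, 2: 20.0} -> positive since v_i >= p_hat_i (individual rationality)
print(u_bar)         # 15.0               -> positive since p_bar >= c
print(surplus >= 0)  # True               -> budget balance holds for the auctioneer
```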
VI. TRUTHFUL MECHANISM FOR CHARGING

In this section, we design a Truthful Mechanism for Charging (TMC) based on our constrained multi-item double auction model and analyze whether it satisfies the desired properties mentioned in Sec. V-B.

**Algorithm 1 TMC $(\mathcal{V}, \mathcal{C}, B, R, A, K)$**
**Input:** $\mathcal{V}, \mathcal{C}, B, R, A, K$. **Output:** $\mathcal{V}_w, \mathcal{C}_w, \sigma, \hat{P}_w, \bar{P}_w$
1: $(\mathcal{V}_c, \mathcal{C}_c, a_{j_\phi}) \leftarrow$ TMC-WCD $(\mathcal{V}, \mathcal{C}, B, A)$
2: $(\mathcal{V}_w, \mathcal{C}_w, \sigma, \hat{P}_w, \bar{P}_w) \leftarrow$ TMC-AP $(\mathcal{V}_c, \mathcal{C}_c, a_{j_\phi}, R, K)$
3: return $(\mathcal{V}_w, \mathcal{C}_w, \sigma, \hat{P}_w, \bar{P}_w)$

**Algorithm 2 TMC-WCD $(\mathcal{V}, \mathcal{C}, B, A)$**
**Input:** $\mathcal{V}, \mathcal{C}, B, A$. **Output:** $\mathcal{V}_c, \mathcal{C}_c, a_{j_\phi}$
1: $\mathcal{V}_c \leftarrow \emptyset$, $\mathcal{C}_c \leftarrow \emptyset$
2: Construct a set $\mathcal{V}' = \{V_{st} : b_s^t > 0, V_s \in \mathcal{V}\}$ based on $B$
3: Sort the buyers in $\mathcal{V}'$ into an ordered list $\mathcal{V}' = \langle V_{s_1 t_1}, V_{s_2 t_2}, \cdots, V_{s_x t_x}\rangle$ such that $b_{s_1}^{t_1} \ge b_{s_2}^{t_2} \ge \cdots \ge b_{s_x}^{t_x}$
4: Sort the sellers into an ordered list $\mathcal{C}' = \langle C_{j_1}, C_{j_2}, \cdots, C_{j_m}\rangle$ such that $a_{j_1} \le a_{j_2} \le \cdots \le a_{j_m}$
5: Find the median ask $a_{j_\phi}$ of $\mathcal{C}'$, $\phi = \lfloor (m+1)/2 \rfloor$
6: Find the minimum $\varphi$ from $\mathcal{V}'$ such that $b_{s_{\varphi+1}}^{t_{\varphi+1}} < a_{j_\phi}$
7: $\mathcal{V}'' \leftarrow \langle V_{s_1 t_1}, V_{s_2 t_2}, \cdots, V_{s_\varphi t_\varphi}\rangle$
8: for each $V_{st} \in \mathcal{V}''$ do
9: &nbsp;&nbsp;if $a_t < a_{j_\phi}$ then
10: &nbsp;&nbsp;&nbsp;&nbsp;$\mathcal{V}_c \leftarrow \mathcal{V}_c \cup \{V_{st}\}$
11: &nbsp;&nbsp;&nbsp;&nbsp;if $C_t \notin \mathcal{C}_c$ then
12: &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;$\mathcal{C}_c \leftarrow \mathcal{C}_c \cup \{C_t\}$
13: &nbsp;&nbsp;&nbsp;&nbsp;end if
14: &nbsp;&nbsp;end if
15: end for
16: return $(\mathcal{V}_c, \mathcal{C}_c, a_{j_\phi})$

**Algorithm 3 TMC-AP $(\mathcal{V}_c, \mathcal{C}_c, a_{j_\phi}, R, K)$**
**Input:** $\mathcal{V}_c, \mathcal{C}_c, a_{j_\phi}, R, K$. **Output:** $\mathcal{V}_w, \mathcal{C}_w, \sigma, \hat{P}_w, \bar{P}_w$
1: $\mathcal{V}_w \leftarrow \emptyset$, $\mathcal{C}_w \leftarrow \emptyset$, $\hat{P}_w \leftarrow \emptyset$, $\bar{P}_w \leftarrow \emptyset$
2: Sort the buyers in $\mathcal{V}_c$ into an ordered queue $Q_c = \langle V_{s_1 t_1}, V_{s_2 t_2}, \cdots, V_{s_y t_y}\rangle$ such that $b_{s_1}^{t_1} \cdot r_{s_1} \ge b_{s_2}^{t_2} \cdot r_{s_2} \ge \cdots \ge b_{s_y}^{t_y} \cdot r_{s_y}$
3: Create a tentative set $H_j \leftarrow \emptyset$ for each $C_j \in \mathcal{C}_c$
4: $K' = (k'_1, k'_2, \cdots, k'_m)$ copied from vector $K$
5: while $Q_c \ne \emptyset$ do
6: &nbsp;&nbsp;$V_{s_l t_l} \leftarrow Q_c.\mathrm{pop}(0)$ // obtain the first element in the queue $Q_c$, then remove it from $Q_c$
7: &nbsp;&nbsp;if $k'_{t_l} > 0$ then
8: &nbsp;&nbsp;&nbsp;&nbsp;$H_{t_l} \leftarrow H_{t_l} \cup \{V_{s_l}\}$, $k'_{t_l} \leftarrow k'_{t_l} - 1$
9: &nbsp;&nbsp;&nbsp;&nbsp;$\hat{p}_{s_l t_l} \leftarrow a_{j_\phi}$
10: &nbsp;&nbsp;else
11: &nbsp;&nbsp;&nbsp;&nbsp;for each $V_s \in H_{t_l}$ do
12: &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;$\hat{p}_{s t_l} \leftarrow \max\{a_{j_\phi}, b_{s_l}^{t_l} \cdot (r_{s_l}/r_s)\}$
13: &nbsp;&nbsp;&nbsp;&nbsp;end for
14: &nbsp;&nbsp;&nbsp;&nbsp;for each $V_{s t_l} \in Q_c$ do
15: &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;$Q_c \leftarrow Q_c \setminus \{V_{s t_l}\}$
16: &nbsp;&nbsp;&nbsp;&nbsp;end for
17: &nbsp;&nbsp;end if
18: end while
19: $\mathcal{V}_w \leftarrow \{V_s : \exists j \, (C_j \in \mathcal{C}_c \wedge V_s \in H_j)\}$
20: for each $V_s \in \mathcal{V}_w$ do
21: &nbsp;&nbsp;$I_s \leftarrow \{C_t : V_s \in H_t, C_t \in \mathcal{C}_c\}$
22: &nbsp;&nbsp;Find $C_{t_m} \in \arg\max_{C_t \in I_s} \{\hat{u}_{st} = (v_s^t - \hat{p}_{st}) \cdot r_s\}$
23: &nbsp;&nbsp;$\sigma(s) = t_m$
24: &nbsp;&nbsp;$\hat{p}_s \leftarrow \hat{p}_{s t_m}$, $\hat{P}_w \leftarrow \hat{P}_w \cup \{\hat{p}_s\}$
25: &nbsp;&nbsp;if $C_{t_m} \notin \mathcal{C}_w$ then
26: &nbsp;&nbsp;&nbsp;&nbsp;$\mathcal{C}_w \leftarrow \mathcal{C}_w \cup \{C_{t_m}\}$
27: &nbsp;&nbsp;end if
28: end for
29: for each $C_t \in \mathcal{C}_w$ do
30: &nbsp;&nbsp;$\bar{p}_t \leftarrow a_{j_\phi}$, $\bar{P}_w \leftarrow \bar{P}_w \cup \{\bar{p}_t\}$ // payments
31: end for
32: return $(\mathcal{V}_w, \mathcal{C}_w, \sigma, \hat{P}_w, \bar{P}_w)$

_A. Algorithm Design_

The process of TMC is shown in Algorithm 1, which is composed of two sub-processes: Winning Candidate Determination (TMC-WCD), shown in Algorithm 2, and Assignment & Pricing (TMC-AP), shown in Algorithm 3. In TMC, we select the winning candidates and then assign winning buyer candidates to winning seller candidates truthfully. At the same time, the price charged to each winning buyer and the payment rewarded to each winning seller are determined.

As shown in Algorithm 2, in the process of winning candidate determination, we sort the buyers in descending order based on their bids for different sellers, where each buyer $V_s \in \mathcal{V}$ is replaced with a buyer set $\{V_{st} : b_s^t > 0, C_t \in \mathcal{C}\}$ in which buyer $V_s$ gives a positive bid to seller $C_t$. The sellers are sorted in ascending order based on their asks. The median ask of the sellers, $a_{j_\phi}$, is selected as a threshold [8] to control the number of buyer and seller candidates. Let $\varphi$ satisfy $b_{s_\varphi}^{t_\varphi} \ge a_{j_\phi}$ and $b_{s_{\varphi+1}}^{t_{\varphi+1}} < a_{j_\phi}$; buyer $V_{st}$ will be a winning buyer candidate if its bid $b_s^t$ is not less than $b_{s_\varphi}^{t_\varphi}$ and the ask of the requested seller $a_t$ is less than the threshold $a_{j_\phi}$. Seller $C_t$ will be a winning seller candidate if its ask $a_t$ is less than $a_{j_\phi}$ and there exists at least one winning buyer candidate bidding for it.
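A compact Python rendering of Algorithm 2 under assumed data structures (bids as a dict keyed by (buyer, seller) pairs); the variable names are ours, and ties are broken arbitrarily, which the paper does not specify.

```python
def tmc_wcd(bids, asks):
    """Winning Candidate Determination (Algorithm 2).

    bids: {(s, t): b_s^t > 0}  unit bids of buyer s for seller t
    asks: {t: a_t}             unit asks of the sellers
    Returns (buyer candidates V_c, seller candidates C_c, median ask a_phi).
    """
    ordered = sorted(bids.items(), key=lambda kv: -kv[1])      # line 3: descending bids
    sorted_asks = sorted(asks.values())                        # line 4: ascending asks
    a_phi = sorted_asks[(len(sorted_asks) + 1) // 2 - 1]       # line 5: median ask
    kept = [st for st, b in ordered if b >= a_phi]             # lines 6-7: prefix above a_phi
    v_c = {st for st in kept if asks[st[1]] < a_phi}           # lines 8-15: cheap sellers only
    c_c = {t for (_, t) in v_c}
    return v_c, c_c, a_phi

# Toy instance: 3 buyers, 3 sellers; median ask is 4.0, so only seller 1 survives.
bids = {(1, 1): 5.0, (1, 2): 4.0, (2, 1): 3.5, (2, 3): 2.0, (3, 2): 6.0}
asks = {1: 3.0, 2: 4.0, 3: 5.0}
print(tmc_wcd(bids, asks))   # ({(1, 1)}, {1}, 4.0)
```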
Thereby, there is a “tentative set” Ht associated with each Ct Cc, which contains at most kt buyers with _∈_ maximum total bids to Ct in Qc. It can be implemented in line 5-18. We denoted by ˆpst the unit price charged to buyer _Vs that gets service from Ct. Similar, the utility ˆust for each_ buyer Vs ∈ Ht can be defined as ˆust = (vs[t] _[−]_ _[p][ˆ][st][)][ ·][ r][s][;]_ Otherwise, the utility is ˆust = 0 for the buyer Cs /∈ Ht. For example, let ⟨Vo1t, · · ·, Vozt⟩⊆ Qc be all buyers in Qc that bid for Ct with b[t]o1 1 _oz_ _z_ [. If] _[·][ r][o]_ _[≥· · · ≥]_ _[b][t]_ _[·][ r][o]_ _kt ≥_ _z, we have Ht = ⟨Vo1t, · · ·, Vozt⟩_ and ˆpoit = ajϕ for each Voit ∈ Ht; Else, we have Ht = ⟨Vo1t, · · ·, Vokt _t⟩_ and _pˆoit = max{ajϕ_ _, b[t]okt_ +1 _[·][ (][r][o]kt_ +1 _[/r][o]i_ [)][}][ for each][ V][o]i[t] _[∈]_ [H][t] [to] guarantee truthfulness. Then, for each winning buyer Vs ∈ Vw, it can be assigned to one of the seller in Is. The Vs selects the optimal seller Ctm Is such that ˆustm _uˆst for each_ _∈_ _≥_ _Ct_ Is. Now, the mapping is σ(s) = tm and the charged _∈_ price is ˆps = ˆpstm . Finally, the payment rewarded to winning seller in Cw is given by ajϕ unanimously. _B. Properties of TMC_ Next, we argue that our proposed TMC mechanism satisfies individual rationality, budget balance, computational efficiency, and truthfulness. **Lemma 1. The TMC is individually rational.** _Proof. For each winning buyer Vs_ Vw and winning seller _∈_ _Ct ∈_ Cw, we need to show that the charged price ˆps ≤ _b[σ]s_ [(][s][)] and the rewarded payment ¯pt ≥ _at. According to Algorithm 2,_ it must be at < ajϕ if Ct ∈ Cw ⊆ Cc. Thereby the payment rewarded to winning seller ¯pt = ajϕ is larger than its ask at for each Ct ∈ Cw. Consider a winning buyer Vs ∈ Vw, the charged price is either ˆps = ajϕ or ˆps = bs[σ]l[(][s][)] _· (rsl_ _/rs)._ _• In the first case, it must be b[σ]s_ [(][s][)] _≥_ _b[q]p[φ]φ_ [if][ V]s _[∈]_ [V]w[. The] price charged to winning buyer ˆps = ajϕ is not more than its bid b[σ]s [(][s][)] definitely. _• In the second case, we have b[σ]s_ [(][s][)] _·rs ≥_ _b[t]s[l]l ·rsl according_ to Algorithm 3. The price charged to winning buyer ˆps = _b[σ]sl[(][s][)]_ _· (rsl_ _/rs) is not more than its bid b[σ]s_ [(][s][)]. Thus, the TMC is individually rational because all winning buyers and sellers are individually rational. **Lemma 2. The TMC is budget balanced.** _Proof. Shown as Algorithm 3, each winninng buyer Vs ∈_ Vw is assigned to exact one winning seller C ∈ C hence this is since b[t]o _[·][ r][o]_ _[≥]_ _[v]s[t]_ _[·][ r][s][. Thus we have][ ˆ][u]st[′]_ _[<][ ˆ][u][st]_ [= 0][.] **(b) The Vs is not a winning buyer when bidding truthfully:** According to Algorithm 3 there is no such a H for C ∈ C a many-to-one mapping from Vw to Cw and Vw = ∪Cj _∈Cw_ _Aj._ For each mapping σ(s) = t from winning buyer Vs to winning seller Ct, we have ˆps ≥ _ajϕ = ¯pt. Based on (5), it can be_ shown that � (6) _Vi∈Vw_ [(ˆ][p][i][ −] _[p][¯][σ][(][i][)][)][ ·][ r][i][ ≥]_ [0] Besides, it is easy to see that the assignment Aj for each winning seller Cj ∈ Cw satisfies |Aj| ≤ _kj._ **Lemma 3. The TMC is truthful.** _Proof. Here, we need to show the truthfulness to sellers and_ buyers one by one as follows: For each buyer Vs V, we need to show its utility ˆus when _∈_ giving a truthful bid Bs = Vs is not less than its corresponding utility ˆu[′]s [when given an untruthful bid][ B]s[′] _[̸][=][ V][s][. Afterward,]_ any notation xs and x[′]s [refer to the concepts given by bid][ B][s] and Bs[′] [respectively.] 
_B. Properties of TMC_

Next, we argue that our proposed TMC mechanism satisfies individual rationality, budget balance, computational efficiency, and truthfulness.

**Lemma 1. The TMC is individually rational.**

_Proof._ For each winning buyer $V_s \in \mathcal{V}_w$ and winning seller $C_t \in \mathcal{C}_w$, we need to show that the charged price $\hat{p}_s \le b_s^{\sigma(s)}$ and the rewarded payment $\bar{p}_t \ge a_t$. According to Algorithm 2, it must be that $a_t < a_{j_\phi}$ if $C_t \in \mathcal{C}_w \subseteq \mathcal{C}_c$. Thereby, the payment rewarded to a winning seller, $\bar{p}_t = a_{j_\phi}$, is larger than its ask $a_t$ for each $C_t \in \mathcal{C}_w$. Consider a winning buyer $V_s \in \mathcal{V}_w$; the charged price is either $\hat{p}_s = a_{j_\phi}$ or $\hat{p}_s = b_{s_l}^{\sigma(s)} \cdot (r_{s_l}/r_s)$.

_• In the first case, it must be that $b_s^{\sigma(s)} \ge b_{s_\varphi}^{t_\varphi} \ge a_{j_\phi}$ if $V_s \in \mathcal{V}_w$. The price charged to the winning buyer, $\hat{p}_s = a_{j_\phi}$, is not more than its bid $b_s^{\sigma(s)}$ definitely._

_• In the second case, we have $b_s^{\sigma(s)} \cdot r_s \ge b_{s_l}^{t_l} \cdot r_{s_l}$ according to Algorithm 3. The price charged to the winning buyer, $\hat{p}_s = b_{s_l}^{\sigma(s)} \cdot (r_{s_l}/r_s)$, is not more than its bid $b_s^{\sigma(s)}$._

Thus, the TMC is individually rational, because all winning buyers and sellers are individually rational.

**Lemma 2. The TMC is budget balanced.**

_Proof._ As shown in Algorithm 3, each winning buyer $V_s \in \mathcal{V}_w$ is assigned to exactly one winning seller $C \in \mathcal{C}$; hence this is a many-to-one mapping from $\mathcal{V}_w$ to $\mathcal{C}_w$ and $\mathcal{V}_w = \cup_{C_j \in \mathcal{C}_w} A_j$. For each mapping $\sigma(s) = t$ from winning buyer $V_s$ to winning seller $C_t$, we have $\hat{p}_s \ge a_{j_\phi} = \bar{p}_t$. Based on (5), it can be shown that

$$\sum_{V_i \in \mathcal{V}_w} (\hat{p}_i - \bar{p}_{\sigma(i)}) \cdot r_i \ge 0 \quad (6)$$

Besides, it is easy to see that the assignment $A_j$ for each winning seller $C_j \in \mathcal{C}_w$ satisfies $|A_j| \le k_j$.

**Lemma 3. The TMC is truthful.**

_Proof._ Here, we need to show the truthfulness for buyers and sellers one by one, as follows.

For each buyer $V_s \in \mathcal{V}$, we need to show that its utility $\hat{u}_s$ when giving a truthful bid $B_s = V_s$ is not less than its corresponding utility $\hat{u}'_s$ when giving an untruthful bid $B'_s \ne V_s$. In the following, any notations $x_s$ and $x'_s$ refer to the concepts given by bids $B_s$ and $B'_s$, respectively.

**(a) $V_s$ is a winning buyer when bidding truthfully:** According to Algorithm 3, it wins the seller $C_{t_m}$ that maximizes its utility. Thereby, the utility $\hat{u}_{s t_m} \ge \hat{u}_{st}$ for each $C_t \in I_s$. First, for each seller $C_t \in I_s$, it implies $V_s \in H_t$, and giving an untruthful bid $(b_s^t)'$ to seller $C_t$ cannot increase the utility such that $\hat{u}'_{st} > \hat{u}_{st}$, since:

_• $(b_s^t)' > v_s^t$: The charged price $\hat{p}'_{st} (= \hat{p}_{st})$ will not be changed, because $V_s$ was already in $H_t$ when bidding $b_s^t (= v_s^t)$. Thus, we have $\hat{u}'_{st} = \hat{u}_{st}$._

_• $(b_s^t)' < v_s^t$: The charged price $\hat{p}'_{st} (= \hat{p}_{st})$ will not be changed if $V_s$ is still in $H_t$ when bidding $(b_s^t)' (< v_s^t)$. Thus, we have $\hat{u}'_{st} = \hat{u}_{st}$. However, when this untruthful bid $(b_s^t)'$ decreases below $\hat{p}_{st}$, $V_s$ cannot be in $H_t$ and $\hat{u}'_{st} = 0$. Thus, we have $\hat{u}'_{st} < \hat{u}_{st}$._

Then, for each seller $C_t \in \mathcal{C} \setminus I_s$, the analysis can be divided into two sub-cases, where giving an untruthful bid $(b_s^t)'$ to seller $C_t$ cannot increase the utility such that $\hat{u}'_{st} > \hat{u}_{st}$ either:

_• $V_{st} \notin \mathcal{V}_c$: If $a_t \ge a_{j_\phi}$, it is impossible to make $V_{st}$ be in $\mathcal{V}_c$ according to Algorithm 2, regardless of what $(b_s^t)'$ is. Thus, we have $\hat{u}'_{st} = \hat{u}_{st} = 0$. If $a_t < a_{j_\phi}$ but the truthful bid $b_s^t (= v_s^t) < a_{j_\phi}$, then $V_s$ has to increase its bid such that $(b_s^t)' \ge a_{j_\phi}$ in order to make $V_{st}$ be in $\mathcal{V}_c$. At this time, we have_

$$\hat{u}'_{st} = (v_s^t - \hat{p}'_{st}) \cdot r_s \le (v_s^t - a_{j_\phi}) \cdot r_s < 0 \quad (7)$$

_if $C_t \in I_s$ when bidding $(b_s^t)'$ untruthfully; otherwise $\hat{u}'_{st} = 0$. Thus, we have $\hat{u}'_{st} \le \hat{u}_{st} = 0$._

_• $V_{st} \in \mathcal{V}_c$ but $V_s \notin H_t$: In this case, we know that $H_t$ is already full, $|H_t| = k_t$. From this, we have $b_o^t \cdot r_o \ge b_s^t (= v_s^t) \cdot r_s$, where $V_o$ has the minimum value of $b_o^t \cdot r_o$ among all buyers in $H_t$. $V_s$ has to increase its bid such that $(b_s^t)' \ge b_o^t \cdot (r_o/r_s)$ in order to replace $V_o$ in $H_t$. Then, the charged price will be changed to $\hat{p}'_{st} = b_o^t \cdot (r_o/r_s)$. The utility is_

$$\hat{u}'_{st} = (v_s^t - \hat{p}'_{st}) \cdot r_s = \left(v_s^t - b_o^t \, \frac{r_o}{r_s}\right) \cdot r_s < 0 \quad (8)$$

_since $b_o^t \cdot r_o \ge v_s^t \cdot r_s$. Thus, we have $\hat{u}'_{st} < \hat{u}_{st} = 0$._
For each seller $C_t \in \mathcal{C}$, we need to show that its utility $\bar{u}_t$ when giving a truthful ask $a_t = c_t$ is not less than its corresponding utility $\bar{u}'_t$ when giving an untruthful ask $a'_t \neq c_t$. In what follows, any notations $x_t$ and $x'_t$ refer to the quantities induced by the ask $a_t$ and $a'_t$, respectively.

**(c) The $C_t$ is a winning seller when asking truthfully:** According to Algorithm 3, its ask $a_t(=c_t) < a_{j_\phi}$ and at least one buyer is assigned to it; hence, we have $|A_t| > 0$. Since $\bar{p}_t = a_{j_\phi}$, its utility can be denoted by
$$\bar{u}_t = (a_{j_\phi} - c_t) \cdot \sum_{V_i \in A_t} r_i > 0. \quad (9)$$
Consider giving an untruthful ask $a'_t$; we can discuss it as follows:

- $a'_t \geq a_{j_\phi}$: It loses the auction because the new median ask $a'_{j_\phi}$ is not less than $a_{j_\phi}$ and $a'_t \geq a'_{j_\phi} \geq a_{j_\phi}$. Thus, the utility is $\bar{u}'_t = 0 < \bar{u}_t$.
- $a'_t < a_{j_\phi}$: At this time, the new median ask $a'_{j_\phi}$ is equal to $a_{j_\phi}$ and $a'_t < a'_{j_\phi} = a_{j_\phi}$. Moreover, the assignment of $C_t$ remains unchanged, $A'_t = A_t$, and the rewarded payment $\bar{p}'_t = \bar{p}_t$. According to (9), the utility when asking untruthfully is the same as that when asking truthfully. Thus, we have $\bar{u}'_t = \bar{u}_t$.

**(d) The $C_t$ is not a winning seller when asking truthfully:** At this time, its utility when asking truthfully is $\bar{u}_t = 0$. Here, we need to analyze how the $C_t$ loses the auction. If it loses because $c_t \geq a_{j_\phi}$, we have:

- $a'_t < a_{j_\phi}$: The new median ask $a'_{j_\phi}$ is not more than $a_{j_\phi}$ and $a'_t \leq a'_{j_\phi} \leq a_{j_\phi}$. If the $C_t$ still loses the auction, its utility is $\bar{u}'_t = 0$. If the $C_t$ now wins, its utility satisfies $\bar{u}'_t \leq 0$ because of the rewarded payment $\bar{p}'_t = a'_{j_\phi} \leq a_{j_\phi} \leq c_t$. Thus, we have $\bar{u}'_t \leq \bar{u}_t = 0$.
- $a'_t \geq a_{j_\phi}$: It loses the auction because the new median ask $a'_{j_\phi}$ is equal to $a_{j_\phi}$ and $a'_t \geq a_{j_\phi} = a'_{j_\phi}$. Thus, the utility is $\bar{u}'_t = \bar{u}_t = 0$.

If $c_t < a_{j_\phi}$ but $C_t$ still loses, then one of two situations occurs: no buyer $V_{st}$ gives a bid $b_s^t$ such that $b_s^t \geq a_{j_\phi}$, or the utility $\hat{u}_{st}$ is not the maximum one for any $V_s \in \mathcal{V}_w$. Now:

- $a'_t < a_{j_\phi}$: The above two situations still occur because the new median ask $a'_{j_\phi} = a_{j_\phi}$ and $a'_t < a'_{j_\phi}$. Thus, we have $\bar{u}'_t = \bar{u}_t = 0$.
- $a'_t \geq a_{j_\phi}$: It loses the auction because the new median ask $a'_{j_\phi}$ is not less than $a_{j_\phi}$ and $a'_t \geq a'_{j_\phi} \geq a_{j_\phi}$. Thus, the utility is $\bar{u}'_t = \bar{u}_t = 0$.

Therefore, the sellers are truthful. In summary, neither the buyers nor the sellers can improve their utility by deviating from their valuations and costs.

**Lemma 4.** _The TMC is computationally efficient._

_Proof._ In Algorithm 2, there are at most $nm$ buyer requests in $\mathcal{V}'$, hence it takes $O(nm \log(nm))$ and $O(m \log m)$ time to sort $\mathcal{V}'$ and $\mathcal{C}'$, respectively. The size of $\mathcal{V}''$ in line 7 is at most $n\phi$, and the number of iterations of the for-loop (lines 8-15) is at most $n\phi$. Consequently, the time complexity of Algorithm 2 is $O(nm \log(nm))$. In Algorithm 3, it takes $O(n\phi \log(n\phi))$ time to sort $\mathcal{V}_c$. The number of iterations of the while-loop (lines 5-18) is at most $n$. Line 12 can be executed at most $\sum_{i=1}^{m} k_i$ times, so executing this while-loop takes $O(n + \sum_{i=1}^{m} k_i)$ time. Besides, there are at most $n$ buyers in $\mathcal{V}_w$, and finding the best seller (line 22) searches among at most $\phi$ sellers, so the for-loop (lines 20-28) takes $O(n\phi)$ time. Consequently, the time complexity of Algorithm 3 is $O(n\phi \log(n\phi))$, and the overall time complexity of TMC is $O(nm \log(nm))$.

**Theorem 1.** _The TMC is individually rational, budget balanced, truthful, and computationally efficient._

_Proof._ It can be derived from Lemma 1 to Lemma 4.
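Algorithm 3 itself appears earlier in the paper; as a reading aid, the following Python sketch reconstructs the assignment-and-pricing behavior that the arguments above rely on (tentative sets $H_t$, the critical price $\max\{a_{j_\phi}, b_o^t \cdot (r_o/r_s)\}$, and utility-maximizing seller selection). The data layout and function name are our own, and the sketch is an approximation inferred from the text and the walk-through in Sec. VII-C, not the authors' code.

```python
def tmc_ap(bids, rates, a_med, piles):
    """bids: {(s, t): b_s^t} requests surviving WCD; rates: {s: r_s};
    a_med: median ask a_{j_phi}; piles: {t: k_t} of candidate sellers."""
    tentative, price = {}, {}
    for t, k in piles.items():
        # H_t: the (at most) k_t requesting buyers with the largest total bids.
        req = sorted((s for (s, t2) in bids if t2 == t),
                     key=lambda s: bids[(s, t)] * rates[s], reverse=True)
        tentative[t] = req[:k]
        for s in tentative[t]:
            if len(req) > k:  # price: median ask vs. first excluded total bid
                o = req[k]
                price[(s, t)] = max(a_med, bids[(o, t)] * rates[o] / rates[s])
            else:
                price[(s, t)] = a_med
    sigma, charged = {}, {}
    for s in {s for h in tentative.values() for s in h}:
        # Each buyer in at least one H_t picks the utility-maximizing seller.
        I_s = [t for t in tentative if s in tentative[t]]
        best = max(I_s, key=lambda t: (bids[(s, t)] - price[(s, t)]) * rates[s])
        sigma[s], charged[s] = best, price[(s, best)]
    return sigma, charged, {t: a_med for t in set(sigma.values())}
```

On the Table I example of Sec. VII-C, this sketch reproduces $\sigma(1) = 4$, $\sigma(3) = 2$, and charged prices $\hat{p}_1 = \hat{p}_3 = 3$.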
VII. EFFICIENT MECHANISM FOR CHARGING

Even though TMC is able to ensure truthfulness, it sacrifices system efficiency. As shown in Algorithm 3, suppose a winning buyer $V_s$ satisfies $V_s \in H_{t_1}$ and $V_s \in H_{t_2}$; it can be assigned to only one seller $t \in \{t_1, t_2\}$, so the other seller is left with an empty charging pile that could have been used to charge other buyers. Thus, for each winning buyer $V_s \in \mathcal{V}_w$ with $|I_s| > 1$, there are $|I_s| - 1$ charging piles being wasted. To address this drawback, we propose an Efficient Mechanism for Charging (EMC) that improves system efficiency and ensures truthfulness to some extent.

_A. Algorithm Design_

The process of EMC is shown in Algorithm 4. Similar to TMC, it is composed of Winning Candidate Determination (EMC-WCD) and Assignment & Pricing (EMC-AP). Here, the EMC-WCD is the same as TMC-WCD shown in Algorithm 2, and generates a winning buyer candidate set $\mathcal{V}_c$ and a winning seller candidate set $\mathcal{C}_c$. The EMC-AP is shown in Algorithm 5.

**Algorithm 4** EMC $(\mathcal{V}, \mathcal{C}, B, R, A, K)$
**Input:** $\mathcal{V}, \mathcal{C}, B, R, A, K$
**Output:** $\mathcal{V}_w, \mathcal{C}_w, \sigma, \hat{P}_w, \bar{P}_w$
1: $(\mathcal{V}_c, \mathcal{C}_c, a_{j_\phi}) \leftarrow$ EMC-WCD $(\mathcal{V}, \mathcal{C}, B, A)$
2: $(\mathcal{V}_w, \mathcal{C}_w, \sigma, \hat{P}_w, \bar{P}_w) \leftarrow$ EMC-AP $(\mathcal{V}_c, \mathcal{C}_c, a_{j_\phi}, R, K)$
3: **return** $(\mathcal{V}_w, \mathcal{C}_w, \sigma, \hat{P}_w, \bar{P}_w)$

In the process of assignment & pricing, we sort the winning buyer candidates in $\mathcal{V}_c$ in descending order of their total bids, and give priority to assigning the buyer that gives the maximum total bid, which is the critical step for improving system efficiency. At each iteration, we pop the buyer with the maximum total bid from $Q_c$, denoted by $V_{s_l t_l}$, and check whether the seller $C_{t_l}$ requested by $V_{s_l}$ still has available charging piles.

**Algorithm 5** EMC-AP $(\mathcal{V}_c, \mathcal{C}_c, a_{j_\phi}, R, K)$
**Input:** $\mathcal{V}_c, \mathcal{C}_c, a_{j_\phi}, R, K$
**Output:** $\mathcal{V}_w, \mathcal{C}_w, \sigma, \hat{P}_w, \bar{P}_w$
1: $\mathcal{V}_w \leftarrow \emptyset$, $\mathcal{C}_w \leftarrow \emptyset$, $\hat{P}_w \leftarrow \emptyset$, $\bar{P}_w \leftarrow \emptyset$
2: Sort the buyers in $\mathcal{V}_c$ into an ordered queue $Q_c = \langle V_{s_1 t_1}, V_{s_2 t_2}, \cdots, V_{s_y t_y} \rangle$ such that $b_{s_1}^{t_1} \cdot r_{s_1} \geq b_{s_2}^{t_2} \cdot r_{s_2} \geq \cdots \geq b_{s_y}^{t_y} \cdot r_{s_y}$
3: $K' = (k'_1, k'_2, \cdots, k'_m)$ copied from vector $K$
4: **while** $Q_c \neq \emptyset$ **do**
5: $V_{s_l t_l} \leftarrow Q_c.\text{pop}(0)$
6: **if** $k'_{t_l} > 0$ **then**
7: $\sigma(s_l) = t_l$, $k'_{t_l} \leftarrow k'_{t_l} - 1$
8: $\mathcal{V}_w \leftarrow \mathcal{V}_w \cup \{V_{s_l}\}$, $\hat{p}_{s_l} \leftarrow a_{j_\phi}$, $\hat{P}_w \leftarrow \hat{P}_w \cup \{\hat{p}_{s_l}\}$
9: **if** $C_{t_l} \notin \mathcal{C}_w$ **then**
10: $\mathcal{C}_w \leftarrow \mathcal{C}_w \cup \{C_{t_l}\}$
11: **end if**
12: **for** each $V_{s_l t} \in Q_c$ **do**
13: $Q_c \leftarrow Q_c \setminus \{V_{s_l t}\}$
14: **end for**
15: **else**
16: $A_{t_l} \leftarrow \{V_s \in \mathcal{V}_w : \sigma(s) = t_l\}$
17: **for** each $V_s \in A_{t_l}$ **do**
18: $\hat{p}_s \leftarrow \max\{a_{j_\phi}, b_{s_l}^{t_l} \cdot (r_{s_l}/r_s)\} \in \hat{P}_w$
19: **end for**
20: **for** each $V_{s t_l} \in Q_c$ **do**
21: $Q_c \leftarrow Q_c \setminus \{V_{s t_l}\}$
22: **end for**
23: **end if**
24: **end while**
25: **for** each $C_t \in \mathcal{C}_w$ **do**
26: $\bar{p}_t \leftarrow a_{j_\phi}$, $\bar{P}_w \leftarrow \bar{P}_w \cup \{\bar{p}_t\}$ // Payments
27: **end for**
28: **return** $(\mathcal{V}_w, \mathcal{C}_w, \sigma, \hat{P}_w, \bar{P}_w)$

If yes, $k'_{t_l} > 0$, the buyer $V_{s_l}$ is assigned to seller $C_{t_l}$; $V_{s_l}$ becomes a winning buyer, $C_{t_l}$ becomes a winning seller, and the price charged to buyer $V_{s_l}$ is tentatively set to $\hat{p}_{s_l} = a_{j_\phi}$. Furthermore, the requests from buyer $V_{s_l}$ to other sellers are deleted from $Q_c$, since the buyer $V_{s_l}$ has already been assigned. If no, $k'_{t_l} = 0$, the buyer $V_{s_l}$ cannot be assigned to seller $C_{t_l}$ because there are no unoccupied charging piles at $C_{t_l}$. This implies that in previous iterations the buyers in $A_{t_l}$ were assigned to seller $C_{t_l}$; the price charged to each winning buyer $V_s \in A_{t_l}$ is then changed to its critical price $\hat{p}_s = \max\{a_{j_\phi}, b_{s_l}^{t_l} \cdot (r_{s_l}/r_s)\}$. Finally, the payment rewarded to each winning seller is uniformly $a_{j_\phi}$.

_B. Properties of EMC_

Similar to Sec. VI-B, we analyze whether our EMC mechanism satisfies the aforementioned four properties.

**Lemma 5.** _The EMC is individually rational._

_Proof._ It can be discussed similarly to the proof of Lemma 1.

**Lemma 6.** _The EMC is budget balanced._

_Proof._ It can be discussed similarly to the proof of Lemma 2.

**Lemma 7.** _The EMC is not truthful, but truthfulness holds for the sellers._

_Proof._ For a buyer $V_s \notin \mathcal{V}_w$ when bidding truthfully, giving an untruthful bid $(b_s^t)'$ to seller $C_t$ cannot increase its utility such that $\hat{u}'_{st} > \hat{u}_{st} = 0$. The analysis splits into $V_{st} \notin \mathcal{V}_c$ (similar to the analysis for $V_{st} \notin \mathcal{V}_c$ in part (a) of the proof of Lemma 3) and $V_{st} \in \mathcal{V}_c$. Consider the case $V_{st} \in \mathcal{V}_c$: there exists a $V_{ot} \in Q_c$ with $b_o^t \cdot r_o \geq b_s^t(=v_s^t) \cdot r_s$, which is the last request that can be assigned to seller $C_t$ in Algorithm 5. To replace $V_{ot}$, the $V_s$ has to bid $(b_s^t)' \geq b_o^t \cdot (r_o/r_s)$, and the charged price becomes $\hat{p}'_{st} = b_o^t \cdot (r_o/r_s)$. From here, we have $\hat{u}'_{st} \leq 0$ since $\hat{p}'_{st} \geq v_s^t$.

For a buyer $V_s \in \mathcal{V}_w$, we give two examples where an untruthful bid may improve its utility. Suppose two sellers $o_1$ and $o_2$ requested by the $V_s$ lie in $Q_c$ when it bids truthfully, so $Q_c = \langle \cdots, V_{so_1}, \cdots, V_{s_l o_1}, \cdots, V_{so_2}, \cdots \rangle$ with $v_s^{o_1} \cdot r_s \geq b_{s_l}^{o_1} \cdot r_{s_l} \geq v_s^{o_2} \cdot r_s$. The results returned by Algorithm 5 are $\sigma(s) = o_1$, $\hat{p}_s = b_{s_l}^{o_1} \cdot (r_{s_l}/r_s)$, and $k'_{o_2} > 0$. At this time, its utility is $\hat{u}_s = (v_s^{o_1} - b_{s_l}^{o_1} \cdot (r_{s_l}/r_s)) \cdot r_s$. When giving an untruthful bid $(b_s^{o_1})'$ such that $(b_s^{o_1})' < v_s^{o_2}$, the results change to $\sigma'(s) = o_2$ and $\hat{p}'_s = a_{j_\phi}$; at this time, its utility is $\hat{u}'_s = (v_s^{o_2} - a_{j_\phi}) \cdot r_s$. We cannot judge which one is larger, since $v_s^{o_1} \geq v_s^{o_2}$ and $b_{s_l}^{o_1} \cdot (r_{s_l}/r_s) \geq a_{j_\phi}$. If $\hat{u}'_s > \hat{u}_s$, its utility can be improved by bidding untruthfully. Similarly, it can give an untruthful bid $(b_s^{o_2})'$ such that $(b_s^{o_2})' > v_s^{o_1}$, and the results change to $\sigma'(s) = o_2$, $\hat{p}'_s = a_{j_\phi}$ as well. Thus, truthfulness does not hold for winning buyers. The analysis for sellers is similar to cases (c) and (d) in the proof of Lemma 3, so truthfulness holds for sellers.

**Lemma 8.** _The EMC is computationally efficient._

_Proof._ It can be discussed similarly to the proof of Lemma 4.

**Theorem 2.** _The EMC is individually rational, budget balanced, and computationally efficient, but not truthful._

_Proof._ It can be derived from Lemma 5 to Lemma 8.

In EMC, we attempt to assign each buyer in $\mathcal{V}_c$ to a seller in a greedy manner, thus avoiding the waste of charging piles. Therefore, it increases the number of winning buyers (successful trades) and improves system efficiency. Even though the winning buyers are able to improve their utilities by bidding untruthfully, this is difficult to achieve. In our BCS system, each player bids or asks privately. A buyer has no knowledge of the other players' strategies, such as the other buyers' bids and the sellers' asks; it is therefore unable to predict whether it will win, which seller it will be assigned to, or what price it will be charged. For a buyer that loses the auction, bidding untruthfully may yield negative utility. For a buyer that wins the auction, despite the potential of improving its utility, there is also the possibility of losing the auction when bidding untruthfully. Players thus have little motivation to lie, because the risks are great. Therefore, we can say the EMC is truthful to some extent.
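Before walking through a concrete example, here is a minimal Python transliteration of Algorithm 5 (EMC-AP). The request-dictionary layout is our own assumption, and the sketch is a reading aid rather than the authors' implementation.

```python
def emc_ap(bids, rates, a_med, piles):
    """bids: {(s, t): b_s^t} requests surviving EMC-WCD; rates: {s: r_s};
    a_med: median ask a_{j_phi}; piles: {t: k_t} of candidate sellers."""
    # Q_c: requests sorted by total bid b_s^t * r_s, descending (line 2).
    queue = sorted(bids, key=lambda st: bids[st] * rates[st[0]], reverse=True)
    k = dict(piles)                       # working copy K' (line 3)
    sigma, charged = {}, {}
    while queue:
        s, t = queue.pop(0)               # pop the maximum total bid (line 5)
        if k[t] > 0:                      # seller t still has a free pile
            sigma[s], charged[s] = t, a_med
            k[t] -= 1
            queue = [(s2, t2) for (s2, t2) in queue if s2 != s]
        else:                             # t is full: raise winners' prices
            b, r = bids[(s, t)], rates[s] # first rejected request to t
            for s2 in [x for x in sigma if sigma[x] == t]:
                charged[s2] = max(a_med, b * r / rates[s2])  # critical price
            queue = [(s2, t2) for (s2, t2) in queue if t2 != t]
    payments = {t: a_med for t in set(sigma.values())}       # lines 25-27
    return sigma, charged, payments
```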
_C. A Walk-through Example_

To understand our TMC and EMC algorithms clearly and to compare their differences, we give a walk-through example with 5 buyers and 5 sellers. The bids and charging amounts of the buyers, and the asks and numbers of charging piles of the sellers, are shown in Table I.

TABLE I. An example with 5 buyers and 5 sellers.

| **B** | $C_1$ | $C_2$ | $C_3$ | $C_4$ | $C_5$ | **R** |
|---|---|---|---|---|---|---|
| $V_1$ | 0 | 4 | 0 | 5 | 2 | 5 |
| $V_2$ | 2 | 0 | 5 | 1 | 0 | 2 |
| $V_3$ | 7 | 5 | 0 | 4 | 0 | 6 |
| $V_4$ | 6 | 4 | 0 | 3 | 0 | 4 |
| $V_5$ | 0 | 0 | 2 | 3 | 5 | 3 |
| **A** | 4 | 1 | 3 | 2 | 5 | - |
| **K** | 1 | 2 | 4 | 2 | 3 | - |

In TMC-WCD (EMC-WCD), according to Algorithm 2, the median of the asks is $a_{j_\phi} = a_3 = 3$. We get $\mathcal{V}' = \langle V_{31}, V_{41}, V_{14}, V_{23}, V_{32}, V_{55}, V_{12}, V_{34}, V_{42}, V_{44}, V_{54} \rangle$. By removing those $V_{st} \in \mathcal{V}'$ with $a_t \geq a_{j_\phi}$, we have $\mathcal{V}_c = \{V_{14}, V_{32}, V_{12}, V_{34}, V_{42}, V_{44}, V_{54}\}$ and $\mathcal{C}_c = \{C_2, C_4\}$. Then, sorting $\mathcal{V}_c$ according to the total bids, we have $Q_c = \langle V_{32} = 30, V_{14} = 25, V_{34} = 24, V_{12} = 20, V_{42} = 16, V_{44} = 12, V_{54} = 9 \rangle$.

For TMC, in TMC-AP according to Algorithm 3, we have the tentative sets $H_2 = \{V_1, V_3\}$ with $\hat{p}_{12} = \max\{a_{j_\phi}, b_4^2 \cdot (r_4/r_1)\} = 3.2$ and $\hat{p}_{32} = \max\{a_{j_\phi}, b_4^2 \cdot (r_4/r_3)\} = 3$, and $H_4 = \{V_1, V_3\}$ with $\hat{p}_{14} = \max\{a_{j_\phi}, b_4^4 \cdot (r_4/r_1)\} = 3$ and $\hat{p}_{34} = \max\{a_{j_\phi}, b_4^4 \cdot (r_4/r_3)\} = 3$. For the buyer $V_1$, its utility satisfies $\hat{u}_{12} = (b_1^2 - \hat{p}_{12}) \cdot r_1 = 4 < 10 = \hat{u}_{14}$. For the buyer $V_3$, its utility satisfies $\hat{u}_{32} > \hat{u}_{34}$. Thus, we have $\mathcal{V}_w = \{V_1, V_3\}$, $\mathcal{C}_w = \{C_2, C_4\}$, $\{\sigma(1) = 4, \sigma(3) = 2\}$, $\hat{P}_w = \{\hat{p}_1 = 3, \hat{p}_3 = 3\}$, and $\bar{P}_w = \{\bar{p}_2 = 3, \bar{p}_4 = 3\}$.

For EMC, in EMC-AP according to Algorithm 5, we assign buyer $V_3$ to seller $C_2$ with $\hat{p}_3 = 3$ in the first iteration, after which $Q_c$ is revised to $Q_c = \langle V_{14} = 25, V_{12} = 20, V_{42} = 16, V_{44} = 12, V_{54} = 9 \rangle$. Repeating this until $Q_c = \emptyset$, we have $\mathcal{V}_w = \{V_1, V_3, V_4, V_5\}$, $\mathcal{C}_w = \{C_2, C_4\}$, $\{\sigma(1) = 4, \sigma(3) = 2, \sigma(4) = 2, \sigma(5) = 4\}$, $\hat{P}_w = \{\hat{p}_1 = 3, \hat{p}_3 = 3, \hat{p}_4 = 3, \hat{p}_5 = 3\}$, and $\bar{P}_w = \{\bar{p}_2 = 3, \bar{p}_4 = 3\}$. From this example, we can see that the winning sellers in TMC are not full: there are idle charging piles that are not used to charge vehicles. Therefore, the number of successful trades $|\mathcal{V}_w| = 2$ in TMC is less than $|\mathcal{V}_w| = 4$ in EMC, which explains why the system efficiency of EMC is better than that of TMC. The short script below replays this example with the sketch given after Sec. VII-B.
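A quick sanity check of the walk-through, assuming the `emc_ap` sketch defined earlier; indices follow Table I.

```python
# Requests surviving EMC-WCD in the Table I example (median ask a_med = 3).
bids = {(1, 4): 5, (3, 2): 5, (3, 4): 4, (1, 2): 4,
        (4, 2): 4, (4, 4): 3, (5, 4): 3}
rates = {1: 5, 2: 2, 3: 6, 4: 4, 5: 3}
sigma, charged, payments = emc_ap(bids, rates, a_med=3, piles={2: 2, 4: 2})
print(sigma)     # {3: 2, 1: 4, 4: 2, 5: 4}: four successful trades
print(charged)   # every winning buyer is charged the median ask 3
print(payments)  # {2: 3, 4: 3}
```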
VIII. NUMERICAL SIMULATIONS

In this section, we implement our TMC and EMC algorithms, evaluate their performance, and verify whether they satisfy our design rationale.

_A. Simulation Setup_

To simulate our TMC and EMC, we consider a smart area $\mathcal{B} = (M, \mathcal{V}, \mathcal{C})$ of $1000 \times 1000$ km$^2$. There are $m$ CSs and $n$ EVs distributed uniformly in this area, where we default to $n = 10m$ unless otherwise specified. For each CS $C_j \in \mathcal{C}$, its number of charging piles $k_j$ is generated from $\{1, 2, \cdots, 10\}$ randomly, each with probability $1/10$, and its cost $c_j$ is generated according to a uniform distribution on $(0, 1]$. Similarly, for each EV $V_i \in \mathcal{V}$, its charging amount $r_i$ is sampled from a truncated normal distribution with mean 50 and variance 1 on the interval $(0, 100]$. To quantify its valuation $v_i^j$ of CS $C_j$, we define the distance $d_i^j$ between $V_i$ and $C_j$ according to their coordinates, that is,
$$d_i^j = \sqrt{(x_i - x_j)^2 + (y_i - y_j)^2}, \quad (10)$$
where $(x_i, y_i)$ and $(x_j, y_j)$ are the coordinates of $V_i$ and $C_j$. We assume the valuation of an EV for a CS is related to their distance: the larger $d_i^j$ is, the lower $v_i^j$ is. The maximum distance between two entities in this area is $1000\sqrt{2}$, so we assume that $v_i^j = 1 - d_i^j/(1000\sqrt{2})$. The snippet below sketches this setup.
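This setup can be reproduced with a short NumPy sketch (our own code; the truncated normal is sampled by rejection, and `uniform(0, 1)` only approximates the half-open intervals used above):

```python
import numpy as np

rng = np.random.default_rng(0)
m = 10
n = 10 * m                                  # default n = 10m
cs_xy = rng.uniform(0, 1000, size=(m, 2))   # CS coordinates
ev_xy = rng.uniform(0, 1000, size=(n, 2))   # EV coordinates
k = rng.integers(1, 11, size=m)             # charging piles in {1,...,10}
c = rng.uniform(0, 1, size=m)               # seller costs
r = rng.normal(50, 1, size=n)               # charging amounts (mean 50, var 1)
while ((r <= 0) | (r > 100)).any():         # rejection step to stay in (0, 100]
    bad = (r <= 0) | (r > 100)
    r[bad] = rng.normal(50, 1, size=int(bad.sum()))
d = np.linalg.norm(ev_xy[:, None, :] - cs_xy[None, :, :], axis=2)  # Eq. (10)
v = 1 - d / (1000 * np.sqrt(2))             # valuations v_i^j
```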
_B. Simulation Results and Analysis_

To evaluate individual rationality, budget balance, and truthfulness, we consider a smart area with $m = 10$ CSs and $n = 100$ EVs, denoted by $\mathcal{C} = \{C_1, C_2, \cdots, C_{10}\}$ and $\mathcal{V} = \{V_1, V_2, \cdots, V_{100}\}$. The median ask is $a_{j_\phi} = 0.764$ and there are five winning sellers, $\mathcal{C}_w = \{C_1, C_2, C_6, C_7, C_{10}\}$; the corresponding numbers of charging piles of the CSs in $\mathcal{C}_w$ are $\{k_1: 3, k_2: 8, k_6: 4, k_7: 3, k_{10}: 8\}$.

Fig. 4. The assignment results and individual rationality obtained by our TMC and EMC: (a) TMC; (b) EMC.

**Individual Rationality:** Fig. 4 shows the assignment results and individual rationality obtained by TMC and EMC. The first row from the bottom shows the sellers (CSs) and the second row from the bottom shows the buyers (EVs). Take Fig. 4(a) as an example: for the seller $C_1$, two buyers, $V_{50}$ and $V_{58}$, are assigned to it in TMC. For the mapping $\sigma(50) = 1$, the payment rewarded to $C_1$ (red column) is more than the ask of $C_1$ (grey column), and the price charged to $V_{50}$ (green column) is less than the bid of $V_{50}$ (blue column). Then, for any mapping from $\mathcal{V}_w$ to $\mathcal{C}_w$ in TMC and EMC, the price charged to the winning buyer is not more than its bid and the payment rewarded to the winning seller is not less than its ask; thus individual rationality holds.

**Budget Balance:** According to the charged prices and rewarded payments shown in Fig. 4, the total price charged to all winning buyers is not less than the total payment rewarded to all winning sellers. Furthermore, the number of buyers $|A_j|$ assigned to each seller $C_j \in \mathcal{C}_w$ is not more than its number of charging piles $k_j$, namely $|A_j| \leq k_j$. Thereby, budget balance holds in both TMC and EMC.

Fig. 5. The truthfulness of buyers and sellers in TMC: (a) buyer $V_{50} \in \mathcal{V}_w$; (b) seller $C_1 \in \mathcal{C}_w$; (c) buyer $V_{86} \notin \mathcal{V}_w$; (d) seller $C_3 \notin \mathcal{C}_w$.

Fig. 6. The truthfulness of buyers and sellers in EMC: (a) buyer $V_{50} \in \mathcal{V}_w$; (b) seller $C_1 \in \mathcal{C}_w$; (c) buyer $V_{86} \notin \mathcal{V}_w$; (d) seller $C_3 \notin \mathcal{C}_w$.

**Truthfulness:** We select a winning buyer $V_{50} \in \mathcal{V}_w$, a losing buyer $V_{86} \notin \mathcal{V}_w$, a winning seller $C_1 \in \mathcal{C}_w$, and a losing seller $C_3 \notin \mathcal{C}_w$ as representatives to evaluate the truthfulness of buyers and sellers in our TMC and EMC. Fig. 5 and Fig. 6 show the truthfulness of buyers and sellers in TMC and EMC, respectively. Consider Fig. 5 (TMC) first. The winning buyer $V_{50}$, with $\sigma(50) = 1$, gets the maximum utility $\hat{u}_{50} = 9.932$ when giving the truthful bid $b_{50}^1 = v_{50}^1 = 0.905$. Here, we have $I_{50} = \{C_1, C_{10}\}$, and its utility cannot be improved by changing the bids to the sellers in $I_{50}$. If the bid $b_{50}^1 < 0.77$, its utility decreases to $1.819$, since $V_{50}$ is then no longer selected in $H_1$ and is assigned to seller $C_{10}$ instead. Besides, changing its bids to sellers outside the candidate set, or to candidate sellers not in $I_{50}$, cannot improve its utility either. The winning seller $C_1$ gets the maximum utility $\bar{u}_1$ when giving the truthful ask $a_1 = c_1 = 0.434$, which cannot be improved by changing its ask. For the losing buyer $V_{86}$, the utility $\hat{u}_{86}$ cannot exceed zero when bidding untruthfully; its utility becomes negative if it increases the bids to the candidate sellers ($C_3$ and $C_6$). For the losing seller $C_3$, it achieves zero utility when giving the truthful ask $a_3 = c_3 = 0.896$, and its utility becomes negative if it decreases its ask. Next, consider Fig. 6 (EMC). We have the same observations in sub-figures (b), (c), and (d). For the winning buyer $V_{50}$, $\sigma(50) = 1$, the situation differs slightly from TMC: if it increases its bids to the other candidate sellers ($C_6$ and $C_{10}$), its utility decreases and may even become negative, because $V_{50}$ would then be assigned to $C_6$ or $C_{10}$ instead of $C_1$ once the total bids change.

To evaluate computational efficiency and system efficiency, we consider a smart area whose number of CSs $m$ ranges from 0 to 200. The parameters are sampled according to the rules described in the simulation setup.

Fig. 7. The running time varies with the increasing number of CSs in TMC and EMC.

Fig. 8. The number of successful trades (winning buyers) varies with the increasing number of CSs in TMC and EMC.

**Computational Efficiency:** Fig. 7 shows the running-time comparison between TMC and EMC. We default to $n = 10m$, so the time complexity $O(nm \log(nm))$ can be considered as roughly proportional to $10 \cdot m^2$. The trends shown in Fig. 7 are in line with our expectations, and both mechanisms are computationally efficient. Besides, we can observe that the running time of EMC is slightly lower than that of TMC, since more entities are eliminated in advance.

**System Efficiency:** Here, system efficiency is characterized by the number of successful trades between buyers and sellers, which equals the number of winning buyers in $\mathcal{V}_w$, because each winning buyer is assigned to a winning seller and then begins to trade. Fig. 8 shows the system-efficiency comparison between TMC and EMC. The curves are not monotone because we sample the parameters independently at each number of sellers. As shown in Fig. 8, the system efficiency of EMC is clearly better than that of TMC, which implies that our proposed EMC is an effective approach to improving system efficiency, even though it does not guarantee the truthfulness of buyers in some extreme cases. Moreover, the gap between TMC and EMC widens gradually as the number of sellers increases. This is because a buyer who bids higher can occupy more tentative sets of candidate sellers in Algorithm 3, yet it can be assigned to only one of these tentative sets, which causes a great deal of waste and enlarges the gap. A crude timing harness in the spirit of Fig. 7 is sketched below.
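The running-time trend can be probed with a rough harness like the following (our own sketch; it assumes the `emc_ap` function and the NumPy generator `rng` from the earlier snippets, plus a simplistic random bid model of our own):

```python
import time

def random_instance(m):
    """Generate a toy post-WCD instance with n = 10m buyers (our own model)."""
    n = 10 * m
    asks = rng.uniform(0, 1, size=m)
    a_med = float(np.median(asks))
    piles = {t: int(rng.integers(1, 11)) for t in range(m) if asks[t] < a_med}
    rates = {s: float(rng.uniform(1, 100)) for s in range(n)}
    bids = {(s, t): float(rng.uniform(a_med, 1.0))
            for s in range(n) for t in piles if rng.random() < 0.1}
    return bids, rates, a_med, piles

for m in (50, 100, 200):
    inst = random_instance(m)
    t0 = time.perf_counter()
    emc_ap(*inst)
    print(m, round(time.perf_counter() - t0, 4))
```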
_C. Further Discussion_

According to Lemma 7, we know that truthfulness does not hold for buyers in some extreme cases. However, a buyer cannot deterministically construct an untruthful bid that improves its utility, because it does not know the bidding strategies of the other buyers. For the buyers, it is therefore very risky and difficult to increase their utilities by changing their bids. Our simulation results, shown in Fig. 6, also prove this point: the EMC satisfies truthfulness to some extent. As shown in Fig. 7 and Fig. 8, EMC has a lower running time and a much better system efficiency than TMC. Therefore, we prefer to use our EMC instead of TMC in practical applications.

IX. CONCLUSION

In this paper, a charging scheduling system based on blockchain technology and a constrained multi-item double auction model has been designed and implemented. To achieve privacy protection and scalability, we gave a lightweight charging scheduling framework based on asymmetric encryption and a DAG-based blockchain. To incentivize EVs and CSs to participate in the market, we considered a constrained multi-item double auction model and designed two algorithms, TMC and EMC, that assign the EVs (buyers) in the area to be charged at CSs (sellers). Both algorithms are feasible: TMC ensures individual rationality, budget balance, truthfulness, and computational efficiency, while EMC achieves a better system efficiency than TMC at the cost of weakening the truthfulness of buyers to some extent. Finally, the results of the numerical simulations indicate that our model is robust and our theoretical analysis is correct.

ACKNOWLEDGMENT

This work was supported in part by the National Natural Science Foundation of China (NSFC) under Grant No. 62202055 and No. 62202016, the Start-up Fund from Beijing Normal University under Grant No. 310432104, the Start-up Fund from BNU-HKBU United International College under Grant No. UICR0700018-22, the Project of Young Innovative Talents of Guangdong Education Department under Grant No. 2022KQNCX102, and the National Science Foundation (NSF) under Grant No. 1907472 and No. 1822985.

REFERENCES

[1] Z. Xiong, H. Xu, W. Li, and Z. Cai, “Multi-source adversarial sample attack on autonomous vehicles,” IEEE Transactions on Vehicular Technology, vol. 70, no. 3, pp. 2822–2835, 2021.
[2] Z. Wang, Q. Hu, R. Li, M. Xu, and Z. Xiong, “Incentive mechanism design for joint resource allocation in blockchain-based federated learning,” IEEE Transactions on Parallel and Distributed Systems, vol. 34, no. 5, pp. 1536–1547, 2023.
[3] R. Zhang, R. Xue, and L. Liu, “Security and privacy on blockchain,” ACM Computing Surveys (CSUR), vol. 52, no. 3, pp. 1–34, 2019.
[4] J. Guo, X. Ding, and W. Wu, “A blockchain-enabled ecosystem for distributed electricity trading in smart city,” IEEE Internet of Things Journal, vol. 8, no. 3, pp. 2040–2050, 2020.
[5] ——, “An architecture for distributed energies trading in byzantine-based blockchains,” IEEE Transactions on Green Communications and Networking, vol. 6, no. 2, pp. 1216–1230, 2022.
[6] S. Popov, “The tangle,” 2016.
[7] D. Yang, X. Fang, and G. Xue, “Truthful auction for cooperative communications,” in Proceedings of the Twelfth ACM International Symposium on Mobile Ad Hoc Networking and Computing, 2011, pp. 1–10.
[8] A.-L. Jin, W. Song, and W. Zhuang, “Auction-based resource allocation for sharing cloudlets in mobile cloud computing,” IEEE Transactions on Emerging Topics in Computing, vol. 6, no. 1, pp. 45–57, 2015.
[9] A.-L. Jin, W. Song, P. Wang, D. Niyato, and P. Ju, “Auction mechanisms toward efficient resource sharing for cloudlets in mobile cloud computing,” IEEE Transactions on Services Computing, vol. 9, no. 6, pp. 895–909, 2015.
[10] J. Kang, R. Yu, X. Huang, S. Maharjan, Y. Zhang, and E. Hossain, “Enabling localized peer-to-peer electricity trading among plug-in hybrid electric vehicles using consortium blockchains,” IEEE Transactions on Industrial Informatics, vol. 13, no. 6, pp. 3154–3164, 2017.
[11] D. Wu, H. Zeng, C. Lu, and B. Boulet, “Two-stage energy management for office buildings with workplace ev charging and renewable energy,” IEEE Transactions on Transportation Electrification, vol. 3, no. 1, pp. 225–237, 2017.
[12] C. Liu, K. K. Chai, X. Zhang, E. T. Lau, and Y. Chen, “Adaptive blockchain-based electric vehicle participation scheme in smart grid platform,” IEEE Access, vol. 6, pp. 25657–25665, 2018.
[13] Z. Su, Y. Wang, Q. Xu, M. Fei, Y.-C. Tian, and N. Zhang, “A secure charging scheme for electric vehicles with smart communities in energy blockchain,” IEEE Internet of Things Journal, vol. 6, no. 3, pp. 4601–4613, 2018.
[14] Z. Zhou, B. Wang, M. Dong, and K. Ota, “Secure and efficient vehicle-to-grid energy trading in cyber physical systems: Integration of blockchain and edge computing,” IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 50, no. 1, pp. 43–57, 2019.
[15] S. Xia, F. Lin, Z. Chen, C. Tang, Y. Ma, and X. Yu, “A bayesian game based vehicle-to-vehicle electricity trading scheme for blockchain-enabled internet of vehicles,” IEEE Transactions on Vehicular Technology, vol. 69, no. 7, pp. 6856–6868, 2020.
[16] J. Guo, X. Ding, and W. Wu, “Reliable traffic monitoring mechanisms based on blockchain in vehicular networks,” IEEE Transactions on Reliability, vol. 71, no. 3, pp. 1219–1229, 2021.
[17] J. Huang, L. Kong, G. Chen, M.-Y. Wu, X. Liu, and P. Zeng, “Towards secure industrial iot: Blockchain system with credit-based consensus mechanism,” IEEE Transactions on Industrial Informatics, vol. 15, no. 6, pp. 3680–3689, 2019.
[18] V. Hassija, V. Chamola, S. Garg, N. G. K. Dara, G. Kaddoum, and D. N. K. Jayakody, “A blockchain-based framework for lightweight data sharing and energy trading in v2g network,” IEEE Transactions on Vehicular Technology, vol. 69, no. 6, pp. 5799–5812, 2020.
[19] D. Yang, G. Xue, X. Fang, and J. Tang, “Incentive mechanisms for crowdsensing: Crowdsourcing with smartphones,” IEEE/ACM Transactions on Networking, vol. 24, no. 3, pp. 1732–1744, 2015.
[20] Y. Lin, Z. Cai, X. Wang, F. Hao, L. Wang, and A. M. V. V. Sai, “Multi-round incentive mechanism for cold start-enabled mobile crowdsensing,” IEEE Transactions on Vehicular Technology, vol. 70, no. 1, pp. 993–1007, 2021.
[21] Y. Zhang and X. Zhang, “Incentive mechanism with task bundling for mobile crowd sensing,” ACM Transactions on Sensor Networks, vol. 19, no. 3, pp. 1–23, 2023.
[22] X. Ding, J. Guo, D. Li, and W. Wu, “An incentive mechanism for building a secure blockchain-based internet of things,” IEEE Transactions on Network Science and Engineering, vol. 8, no. 1, pp. 477–487, 2020.
[23] Z. Zhao, C. Feng, and A. L. Liu, “Comparisons of auction designs through multiagent learning in peer-to-peer energy trading,” IEEE Transactions on Smart Grid, vol. 14, no. 1, pp. 593–605, 2022.
[24] D. An, Q. Yang, D. Li, and Z. Wu, “Distributed online incentive scheme for energy trading in multi-microgrid systems,” IEEE Transactions on Automation Science and Engineering, 2023.
[25] W. Borjigin, K. Ota, and M. Dong, “In broker we trust: A double-auction approach for resource allocation in nfv markets,” IEEE Transactions on Network and Service Management, vol. 15, no. 4, pp. 1322–1333, 2018.
[26] R. P. McAfee, “A dominant strategy double auction,” Journal of Economic Theory, vol. 56, no. 2, pp. 434–450, 1992.
[27] D. C. Parkes, J. Kalagnanam, and M. Eso, “Achieving budget-balance with vickrey-based payment schemes in exchanges,” in Proceedings of the 17th International Joint Conference on Artificial Intelligence - Volume 2, 2001, pp. 1161–1168.
[28] S. Nakamoto, “Bitcoin: A peer-to-peer electronic cash system,” White Paper, 2008.
[29] G. Wood et al., “Ethereum: A secure decentralised generalised transaction ledger,” Ethereum project yellow paper, vol. 151, no. 2014, pp. 1–32, 2014.
[30] E. Androulaki, A. Barger, V. Bortnikov, C. Cachin, K. Christidis, A. De Caro, D. Enyeart, C. Ferris, G. Laventman, Y. Manevich et al., “Hyperledger fabric: a distributed operating system for permissioned blockchains,” in Proceedings of the Thirteenth EuroSys Conference, 2018, pp. 1–15.
[31] E. K. Kogias, P. Jovanovic, N. Gailly, I. Khoffi, L. Gasser, and B. Ford, “Enhancing bitcoin security and performance with strong consistency via collective signing,” in 25th USENIX Security Symposium (USENIX Security 16), 2016, pp. 279–296.
[32] V. Hassija, V. Chamola, D. N. G. Krishna, and M. Guizani, “A distributed framework for energy trading between uavs and charging stations for critical applications,” IEEE Transactions on Vehicular Technology, vol. 69, no. 5, pp. 5391–5402, 2020.
[33] A. Churyumov, “Byteball: A decentralized system for storage and transfer of value,” https://byteball.org/Byteball.pdf, 2016.
[34] C. LeMahieu, “Nano: A feeless distributed cryptocurrency network,” https://nano.org/en/whitepaper (accessed 24.03.2018), 2018.
[35] D. Johnson, A. Menezes, and S. Vanstone, “The elliptic curve digital signature algorithm (ecdsa),” International Journal of Information Security, vol. 1, no. 1, pp. 36–63, 2001.

**Jianxiong Guo** received his Ph.D. degree from the Department of Computer Science, University of Texas at Dallas, Richardson, TX, USA, in 2021, and his B.E. degree from the School of Chemistry and Chemical Engineering, South China University of Technology, Guangzhou, China, in 2015. He is currently an Assistant Professor with the Advanced Institute of Natural Sciences, Beijing Normal University, and also with the Guangdong Key Lab of AI and Multi-Modal Data Processing, BNU-HKBU United International College, Zhuhai, China. He is a member of IEEE/ACM/CCF. He has published more than 40 peer-reviewed papers and has served as a reviewer for many international journals and conferences. His research interests include social networks, wireless sensor networks, combinatorial optimization, and machine learning.

**Xingjian Ding** received his B.E. degree in electronic information engineering from Sichuan University in 2012 and his M.S. degree in software engineering from Beijing Forestry University in 2017. He obtained his Ph.D. degree from the School of Information, Renmin University of China, in 2021. He is currently an assistant professor at the School of Software Engineering, Beijing University of Technology. His research interests include wireless rechargeable sensor networks, approximation algorithm design and analysis, and blockchain.
**Weili Wu** received the Ph.D. and M.S. degrees from the Department of Computer Science, University of Minnesota, Minneapolis, MN, USA, in 2002 and 1998, respectively. She is currently a Full Professor with the Department of Computer Science, The University of Texas at Dallas, Richardson, TX, USA. Her research mainly deals with the general area of data communication and data management, and focuses on the design and analysis of algorithms for optimization problems that occur in wireless networking environments and various database systems.

**Ding-Zhu Du** received the M.S. degree from the Chinese Academy of Sciences, Beijing, China, in 1982, and the Ph.D. degree from the University of California at Santa Barbara, Santa Barbara, CA, USA, in 1985, under the supervision of Prof. R. V. Book. Before settling at The University of Texas at Dallas, Richardson, TX, USA, he was a Professor with the Department of Computer Science and Engineering, University of Minnesota, Minneapolis, MN, USA. He was with the Mathematical Sciences Research Institute, Berkeley, CA, USA, for one year, with the Department of Mathematics, Massachusetts Institute of Technology, Cambridge, MA, USA, for one year, and with the Department of Computer Science, Princeton University, Princeton, NJ, USA, for one and a half years. Dr. Du is the Editor-in-Chief of the Journal of Combinatorial Optimization and is also on the editorial boards of several other journals.
{ "disclaimer": "Notice: Paper or abstract available at https://arxiv.org/abs/2010.01436, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "GREEN", "url": "https://arxiv.org/pdf/2010.01436" }
2,020
[ "JournalArticle" ]
true
2020-10-03T00:00:00
[ { "paperId": "bf17cb050c156b2d5822e9da7dca66c167485039", "title": "Distributed Online Incentive Scheme for Energy Trading in Multi-Microgrid Systems" }, { "paperId": "672d378cd798a03d6a90a79b4b402ff49fce5aa7", "title": "Next-power: Next-generation framework for secure and sustainable energy trading in the metaverse" }, { "paperId": "83d2d11f73249d19ddb06df9acd0d8fbb476dfd0", "title": "Incentive Mechanism with Task Bundling for Mobile Crowd Sensing" }, { "paperId": "f0ef374bd6095c16ffdea79de6ce3d00a2f42f15", "title": "Comparisons of Auction Designs Through Multiagent Learning in Peer-to-Peer Energy Trading" }, { "paperId": "3bab76d6bfba8c821d109aa120d700d5a2580dad", "title": "Understanding Characteristics and System Implications of DAG-Based Blockchain in IoT Environments" }, { "paperId": "4ad85567c7bb89e6f2074ed5308e833de49c63bc", "title": "A DAG Blockchain-Enhanced User-Autonomy Spectrum Sharing Framework for 6G-Enabled IoT" }, { "paperId": "51fe4b279e6d4e7ae37aa56a6555271c64d1a326", "title": "Incentive Mechanism Design for Joint Resource Allocation in Blockchain-Based Federated Learning" }, { "paperId": "86a7b1ae4bf592b2d2294e4bd05eda505639799a", "title": "Exploiting Multi-Dimensional Task Diversity in Distributed Auctions for Mobile Crowdsensing" }, { "paperId": "26c136c5fb2961995847ef43b6a9f6bb560f0446", "title": "Multi-Source Adversarial Sample Attack on Autonomous Vehicles" }, { "paperId": "0e9bb03f7c99807e9733f2357ffe4b10ca0455ea", "title": "An Incentive Mechanism for Building a Secure Blockchain-Based Internet of Things" }, { "paperId": "29039ddf5725295af8cebe52b5709f5ee4dabb9c", "title": "Multi-Round Incentive Mechanism for Cold Start-Enabled Mobile Crowdsensing" }, { "paperId": "63ee9b99e77dee8c5d497babdadf3fd569d85185", "title": "Reliable Traffic Monitoring Mechanisms Based on Blockchain in Vehicular Networks" }, { "paperId": "44ff32b8bbfa0b4bdda40162afcbe12af6e5b1b5", "title": "zkCrowd: A Hybrid Blockchain-Based Crowdsourcing Platform" }, { "paperId": "efcf2a07467ce55afbfc6942870a4376fbb29e95", "title": "An Architecture for Distributed Energies Trading in Byzantine-Based Blockchains" }, { "paperId": "831ddcfd1ddfa0d5b9e34a625fab89d39d16f6a0", "title": "A Bayesian Game Based Vehicle-to-Vehicle Electricity Trading Scheme for Blockchain-Enabled Internet of Vehicles" }, { "paperId": "d1eea8fbe1163e20576c7a43d0ca2cd1615e9da6", "title": "A Blockchain-Enabled Ecosystem for Distributed Electricity Trading in Smart City" }, { "paperId": "5dee597647e62b79d07d34a3f31459456801ca28", "title": "A Distributed Framework for Energy Trading Between UAVs and Charging Stations for Critical Applications" }, { "paperId": "0085516e1951f2e7c0f5fecb0fb9385820883e1a", "title": "DEAL: Differentially Private Auction for Blockchain-Based Microgrids Energy Trading" }, { "paperId": "5603dc2ef70cde2de7af9140480f7756884ee5c5", "title": "A Blockchain-Based Framework for Lightweight Data Sharing and Energy Trading in V2G Network" }, { "paperId": "1bd6c38338bbb44b6ab4ca6af36a0a20b5e1f06d", "title": "Secure and Efficient Vehicle-to-Grid Energy Trading in Cyber Physical Systems: Integration of Blockchain and Edge Computing" }, { "paperId": "3edf84d91b475af359d490f0581a009591559664", "title": "Double Auction Mechanisms For Dynamic Autonomous Electric Vehicles Energy Trading" }, { "paperId": "eeafc2434bbd5f63995c206eb3eedc1a01550145", "title": "A Secure Charging Scheme for Electric Vehicles With Smart Communities in Energy Blockchain" }, { "paperId": "b269f69d4a8a9a7bdfff7780d4f7e7fd1ce554ec", "title": "Coin 
Hopping Attack in Blockchain-Based IoT" }, { "paperId": "b4c3ae7667cc4c64ba6cf7114ab3be0b163312cf", "title": "Security and Privacy on Blockchain" }, { "paperId": "9be319953ae3b40f7c9fe70f649c1ff60915e6f7", "title": "Towards Secure Industrial IoT: Blockchain System With Credit-Based Consensus Mechanism" }, { "paperId": "dbd85a482b21038c2d792f1652f8c36976bf2690", "title": "In Broker We Trust: A Double-Auction Approach for Resource Allocation in NFV Markets" }, { "paperId": "d05d8515953857cf3375211281838ec44fee669e", "title": "Adaptive Blockchain-Based Electric Vehicle Participation Scheme in Smart Grid Platform" }, { "paperId": "88ad82e6f2264f75f7783232ba9185a2f931a5d1", "title": "Facial Expression Analysis under Partial Occlusion" }, { "paperId": "1b4c39ba447e8098291e47fd5da32ca650f41181", "title": "Hyperledger fabric: a distributed operating system for permissioned blockchains" }, { "paperId": "29ec6d070828fabc60982fb3251fe6acdd04eb19", "title": "Energy Big Data Security Threats in IoT-Based Smart Grid Communications" }, { "paperId": "6ed916428e6596961f7a9ee769f5552a80af285f", "title": "Distributed Auctions for Task Assignment and Scheduling in Mobile Crowdsensing Systems" }, { "paperId": "81bfea080e833fd0046b1e9b879a19429c1d08bf", "title": "Enabling Localized Peer-to-Peer Electricity Trading Among Plug-in Hybrid Electric Vehicles Using Consortium Blockchains" }, { "paperId": "33709e0bb08464cdaf26b1ef68b7c388333a7156", "title": "BlockChain: A Distributed Solution to Automotive Security and Privacy" }, { "paperId": "3b21b6f9e9e7b9477c4f8ab668a348bb25abf7c9", "title": "Two-Stage Energy Management for Office Buildings With Workplace EV Charging and Renewable Energy" }, { "paperId": "f00e466cd4c74e9b50a622c4ef55f8fa0fc0f07c", "title": "Auction Mechanisms Toward Efficient Resource Sharing for Cloudlets in Mobile Cloud Computing" }, { "paperId": "d8403984bff42604cce78876efb31c11321bfed8", "title": "Incentive Mechanisms for Crowdsensing: Crowdsourcing With Smartphones" }, { "paperId": "efd99fe3b5b620d89aa03201199c45988c688670", "title": "Enhancing Bitcoin Security and Performance with Strong Consistency via Collective Signing" }, { "paperId": "8bee02071879104528c87b659b9de05b2427d1cb", "title": "Truthful auction for cooperative communications" }, { "paperId": "353d374013d9849c5534e7b586a583cd97fa275f", "title": "Achieving Budget-Balance with Vickrey-Based Payment Schemes in Exchanges" }, { "paperId": "65ecd5eaec484201c701c4a1508f8d2132c125f0", "title": "The Elliptic Curve Digital Signature Algorithm (ECDSA)" }, { "paperId": "36c724c2f381a74261b878cc78c77bb19bf90e96", "title": "A dominant strategy double auction" }, { "paperId": null, "title": "Blockchain-basedreverseauctionforv2v charging in smart grid environment" }, { "paperId": "e6c2cb33391083f4e60da9378079b453a5591891", "title": "Auction-Based Resource Allocation for Sharing Cloudlets in Mobile Cloud Computing" }, { "paperId": "600c574adfbd0a6895934ec8d3dbfcb56fb2bd68", "title": "Nano : A Feeless Distributed Cryptocurrency Network" }, { "paperId": null, "title": "Currently, he is working toward the PhD degree in the School of Information, Renmin University of China, Beijing, China. 
His research interests include wireless rechargeable sensor networks algorithm" }, { "paperId": null, "title": "Byteball: A decentralized system for storage and transfer of value" }, { "paperId": "43586b34b054b48891d478407d4e7435702653e0", "title": "The Tangle" }, { "paperId": "64e428003bedbde11b7409999c836c848f735319", "title": "Designing Truthful Spectrum Double Auctions with Local Markets" }, { "paperId": "3c50bb6cc3f5417c3325a36ee190e24f0dc87257", "title": "ETHEREUM: A SECURE DECENTRALISED GENERALISED TRANSACTION LEDGER" }, { "paperId": "4e9ec92a90c5d571d2f1d496f8df01f0a8f38596", "title": "Bitcoin: A Peer-to-Peer Electronic Cash System" }, { "paperId": null, "title": "of California at Santa Barbara, Santa Barbara, CA, USA," }, { "paperId": null, "title": "Transactions: When CS C x receives the new transaction from the EV V i , it will check and sign this transaction by its private key as well. Now, the CS C x issues this new transaction" }, { "paperId": null, "title": "A Double Auction for Charging Scheduling among Vehicles Using DAG-Blockchains" }, { "paperId": null, "title": "— We model the assignment between EVs and CSs as a constrained multi-item double auction" }, { "paperId": null, "title": "United International College, Zhuhai, China" }, { "paperId": null, "title": "Received 28 August 2023; revised 15 February 2024; accepted 26" }, { "paperId": null, "title": "sensor networks, combinatorial optimization, and machine learning" }, { "paperId": null, "title": "2023. Trusted mobile edge computing: DAG blockchain-aided trust management and resource allocation" }, { "paperId": null, "title": "Scheduling: The manager waits to receive request messages from EVs and status messages from CSs" }, { "paperId": null, "title": "Confirm: If EV V i receives an order message from the manager, then it can be charged at the CS C x designated by the manager" }, { "paperId": null, "title": "Order and Assignment" }, { "paperId": null, "title": "Engineering, University of Minnesota, Minneapolis, MN, USA" } ]
25,517
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/0213f41fa2b6958b9142cc43454cfa1974de97dd
[ "Computer Science" ]
0.913839
Databases in Cloud Computing: A Literature Review
0213f41fa2b6958b9142cc43454cfa1974de97dd
[ { "authorId": "71456857", "name": "Harrison John Bhatti" }, { "authorId": "2380227", "name": "Babak Bashari Rad" } ]
{ "alternate_issns": null, "alternate_names": null, "alternate_urls": null, "id": null, "issn": null, "name": null, "type": null, "url": null }
— Information Technology industry has been using the traditional relational databases for about 40 years. However, in the most recent years, there was a substantial conversion in the IT industry in terms of commercial applications. Stand-alone applications have been replaced with electronic applications, committed servers with various appropriate servers and devoted storage with system storage. Lower fee, flexibility, the model of pay-as-you-go are the main reasons, which caused the distributed computing are turned into reality. This is one of the most significant revolutions in Information Technology, after the emergence of the Internet. Cloud databases, Big Table, Sherpa, and SimpleDB are getting to be more familiar to communities. They highlighted the obstacles of current social databases in terms of usability, flexibility, and provisioning. Cloud databases are essentially employed for information-escalated applications, such as storage and mining of huge data or commercial data. These applications are flexible and multipurpose in nature. Numerous value-based information administration applications, like banking, online reservation, e-trade and inventory administration, etc. are produced. Databases with the support of these types of applications have to include four important features: Atomicity, Consistency, Isolation, and Durability (ACID), although employing these databases is not simple for using in the cloud. The goal of this paper is to find out the advantages and disadvantages of databases widely employed in cloud systems and to review the challenges in developing cloud databases
Published Online April 2017 in MECS (http://www.mecs-press.org/) DOI: 10.5815/ijitcs.2017.04.02

# Databases in Cloud Computing: A Literature Review

## Harrison John Bhatti

School of Computing and Technology, Asia Pacific University of Technology and Innovation (APU), Technology Park Malaysia (TPM), Bukit Jalil, Kuala Lumpur 57000 Malaysia E-mail: harrisonjohn03@gmail.com

## Babak Bashari Rad

School of Computing and Technology, Asia Pacific University of Technology and Innovation (APU), Technology Park Malaysia (TPM), Bukit Jalil, Kuala Lumpur 57000 Malaysia E-mail: babak.basharirad@apu.edu.my

**_Abstract_**—The Information Technology industry has used traditional relational databases for about 40 years. In recent years, however, the industry has undergone a substantial shift in its commercial applications: stand-alone applications have been replaced with web-based applications, dedicated servers with shared pools of servers, and dedicated storage with network storage. Lower cost, flexibility, and the pay-as-you-go model are the main reasons cloud computing has become a reality; it is one of the most significant revolutions in Information Technology after the emergence of the Internet. Cloud databases such as BigTable, Sherpa, and SimpleDB are becoming increasingly familiar, and they highlight the obstacles of current relational databases in terms of usability, flexibility, and provisioning. Cloud databases are essentially employed for data-intensive applications, such as the storage and mining of huge or commercial data; these applications are flexible and multipurpose in nature. Numerous transactional data management applications, such as banking, online reservation, e-trade, and inventory administration, have been produced. Databases supporting these types of applications have to provide four important features: Atomicity, Consistency, Isolation, and Durability (ACID), although employing such databases in the cloud is not simple. The goal of this paper is to identify the advantages and disadvantages of databases widely employed in cloud systems and to review the challenges in developing cloud databases.

**_Index Terms_**—Cloud, Database, Cloud Computing, Cloud Database, Cloud Service.

I. INTRODUCTION

All branches of IT are committed to providing computing, storage, and supporting facilities and IT infrastructure at the lowest achievable cost. According to [1], the enormous investment required for IT infrastructure is a barrier to its adoption, especially for small-scale organizations. Organizations with limited budgets look for alternatives that can reduce the capital investment involved in acquiring and maintaining IT hardware and software, so that they can obtain the maximum benefit from IT. At this point, cloud databases are considered a smart answer for developers who want to store the data of their applications in a scalable and highly available backend; these services are referred to as Database-as-a-Service (DBaaS) [2]. A cloud-hosted DBMS must have some means of persistently storing its database. One approach is to use a persistent storage service provided within the cloud and accessed over the network by the DBMS. An illustration of this is Amazon's Elastic Block Store (EBS), which provides network-accessible persistent storage volumes that can be attached to virtual machines [3].
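As a concrete, hedged illustration of this pattern, the boto3 snippet below creates a network-backed volume and attaches it to a virtual machine, where a cloud-hosted DBMS could then keep its data files. The region, zone, and instance ID are placeholders, not values from the paper.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")      # placeholder region
vol = ec2.create_volume(AvailabilityZone="us-east-1a",  # placeholder zone
                        Size=100, VolumeType="gp2")     # 100 GiB volume
ec2.attach_volume(VolumeId=vol["VolumeId"],
                  InstanceId="i-0123456789abcdef0",     # placeholder VM
                  Device="/dev/sdf")                    # device exposed to the VM
```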
This article investigates the advantages and weaknesses of deploying database systems in the cloud. We look at how the typical features of available cloud computing platforms affect the choice of which data management applications to move to the cloud. Because of the growing need of today's commercial world for investigation and exploration, we can infer that analytical and descriptive data management applications are better suited to the cloud than transactional data management applications. We therefore lay out a research agenda for large-scale data analysis in the cloud, to demonstrate that the available systems are not suitable for cloud deployment, and to argue that there is a need for a newly designed DBMS built especially for cloud computing platforms [4].

In this paper, cloud databases, cloud computing, and the databases that can be hosted and deployed in the cloud are discussed, respectively. Furthermore, the advantages and disadvantages of the most widely used database in cloud computing are presented. This paper is organized as follows. In the next section, the cloud database is introduced in brief. Then, cloud computing and its features are discussed. Next, some popular databases used in cloud computing are reviewed, including the advantages and disadvantages of MySQL, and in the last section, the most important challenges in the development of cloud databases are discussed. Finally, at the end of the article, a summary and a list of references are given.

II. CLOUD DATABASE

A cloud database holds information in different data centers situated in different locations, which makes the cloud database structure different from a conventional database management system. Over a cloud database, there are numerous nodes, designed for query services, serving data centers (including corporate data centers) located in different geographical areas. This connectivity is required for convenient and full access to the database through cloud services. Many systems have been introduced to obtain the benefits of databases over the cloud. A user can take advantage of them by means of a personal computer using the web, or by a mobile device that can access the cloud database using 3G/4G services. To understand the infrastructure of cloud databases, the structure of a cloud database is demonstrated in Fig. 1 [5].

Fig.1. Structure of Cloud Database [5]

III. CLOUD COMPUTING

Cloud computing is a recent concept and one of the latest computer-industry buzzwords. The concept derives from the imagery of the "Internet cloud", in which a cloud is traditionally used to represent the Internet or some large networked environment. The idea conveyed by this imagery is that customer data and applications are stored and accessed "in the distance". As such, one definition offered for cloud computing is the "virtualization of resources that maintains and manages itself". To simplify the idea, cloud computing can be basically characterized as the distribution and utilization of the resources and facilities of a system to complete work without any concern about ownership or administration of those resources [6]. With cloud computing, the computer resources for completing work and their data are no longer stored on one's own PC; they are hosted elsewhere, to be made available from any location at any time [6].

According to [7], cloud computing is a developing area of distributed processing that provides many benefits, allowing companies to access their data using new technology more easily and quickly. When an organization contracts with a cloud service provider to store its programs and data, the provider makes it possible for its clients to get full access anytime and anywhere; still, some issues need to be considered, such as security and the possibility of data being viewed by others. Cloud computing is not a single kind of system; rather, it encompasses a range of underlying technologies and configuration choices. The strengths and weaknesses of the different cloud deployment policies, structures, service models, and provisioning methods should be considered by organizations evaluating services to satisfy their requirements.

_A. Features of Cloud Computing_

Cloud computing provides many features and facilities; here we discuss some of them. According to [7], cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable resources, such as servers, storage, and applications, that can be provisioned rapidly and made available with minimal management effort or provider interaction.

- _On-Demand Self-Service_: With cloud computing, organizations can have on-demand self-service for computing capabilities, for instance server time and network storage, when required, and through a single provider [7].
- _Broad Network Access_: All cloud services and facilities are accessible over the network and are available through standard mechanisms that promote use by heterogeneous thin or thick client platforms, such as mobile devices or portable workstations [8].
- _Pooling the Computer Resources_: The provider's resources are pooled to serve multiple customers through a multi-tenant model, with various physical and virtual resources assigned according to customer demand. While a range of benefits such as data storage is provided, the user does not need to control memory, network bandwidth, or virtual machines; it may, however, be possible for the subscriber to specify the country, state, or data center that provides the cloud services [7].
- _Rapid Elasticity_: Cloud capacity can be provided to the subscriber rapidly and elastically, allowing the subscriber to either expand or reduce services. The available capacity often appears unlimited to the subscriber and can be acquired in any quantity at any time [7, 8].
- _Measured Service_: Cloud systems automatically monitor, control, and optimize the availability and performance of the provided facilities through a metering capability appropriate to the type of service offered. Resource usage can be monitored, controlled, and reported, providing transparency to both the provider and the users of the services [7, 8].
- _Multi-Tenancy Services_: The cloud service can accommodate the requirements for policy-driven administration, segmentation, isolation, governance, service levels, and charging/payments for different types of customers [9].
IV. DATABASES IN CLOUD COMPUTING ENVIRONMENT

Cloud computing technology represents a new paradigm for hosting software applications. This paradigm streamlines the time-consuming processes of hardware provisioning, hardware procurement, and software deployment, and it has thereby transformed the way computational resources and services are commercialized and delivered to customers. Nowadays, cloud computing is growing significantly, and cloud providers increasingly offer new services and features that give their customers efficient and practical solutions to their problems. Thus, the cloud has become an attractive platform for software developers and enterprises to host their applications and systems. Nonetheless, the services offered by different cloud providers are largely incompatible with one another and do not follow any standardized model or interfaces. Accordingly, one of the major challenges in encouraging cloud adoption is cloud interoperability and portability [2].

In principle, cloud databases are currently considered an attractive solution for software developers who want to store the data of their applications in a scalable and highly available backend. These services are referred to as Database-as-a-Service (DBaaS). Cloud-based data storage services can be divided into two main categories: services that support traditional relational databases (RDB) (e.g., Amazon RDS, Google SQL, Microsoft Azure), and key/value pair data storage services (e.g., Amazon SimpleDB, Google Data Store), which are otherwise known as NoSQL databases. Essentially, RDB systems use the structured query language (SQL) as a standardized interface to access the data in a relational database. On the other side, NoSQL databases remain unstandardized, so there is no unified data access approach; every cloud provider has a different way of managing and accessing the database, which makes data portability between these systems a challenging task to achieve [2].

At the other extreme, applications can store their data using cloud-hosted relational database management systems (DBMS). For instance, customers of an infrastructure-as-a-service provider such as Amazon can deploy a DBMS in virtual machines and use it to provide database services to their applications. Alternatively, applications can use services such as Amazon's RDS or Microsoft SQL Azure in a similar manner. This approach is best suited to applications that can be supported by a single DBMS instance, or that can be sharded across multiple independent DBMS instances. High availability is also an issue, as the DBMS represents a single point of failure, a problem normally addressed using DBMS-level high-availability mechanisms. In spite of these limitations, this approach is widely used because it puts the well-understood advantages of relational DBMSs, such as SQL query processing and transaction support, at the service of the application. This is the approach we concentrate on in this paper. A cloud-hosted DBMS must have some means of persistently storing its database. One approach is to use a persistent storage service provided within the cloud and accessed over the network by the DBMS; an example of this is Amazon's Elastic Block Store (EBS), which provides network-accessible persistent storage volumes that can be attached to virtual machines [3]. Amazon also provides deployment services for databases such as MS SQL Server, MySQL, and Oracle in its own cloud, EC2 [10], as sketched below.
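As a hedged illustration of such a managed deployment, the boto3 snippet below provisions a small MySQL instance on Amazon RDS. Every identifier and credential is a placeholder, and a production configuration (networking, backups, instance class) would differ.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")  # placeholder region
rds.create_db_instance(
    DBInstanceIdentifier="demo-mysql",   # placeholder name
    Engine="mysql",                      # SQL Server and Oracle engines also exist
    DBInstanceClass="db.t3.micro",
    MasterUsername="admin",
    MasterUserPassword="change-me",      # placeholder credential
    AllocatedStorage=20,                 # storage in GiB
)
```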
V. POPULAR DATABASES USED IN CLOUD COMPUTING

Some of the most popular databases in cloud computing are mentioned below:

- StromDB
- MySQL
- PostgreSQL
- Google Cloud SQL
- MongoLab.

_A. StromDB_

StromDB is a free, open-source distributed real-time computation system. StromDB makes it easy to reliably process unbounded streams of data, doing for real-time processing what Hadoop did for batch processing. StromDB is simple, can be used with any programming language, and is enjoyable to work with. It has many use cases: real-time analytics, online machine learning, continuous computation, distributed RPC, ETL, and more. StromDB is fast: a benchmark clocked it at more than a million tuples processed per second per node. It is scalable and fault-tolerant, guarantees that your data will be processed, and is easy to set up and operate. StromDB integrates with the queueing and database technologies you already use. A Storm topology consumes streams of data and processes those streams in arbitrarily complex ways, repartitioning the streams between each stage of the computation as required; more details are given in its tutorial [11].

_B. MySQL_

MySQL is an open-source relational database management system. It is owned by Oracle Corporation and can be used under either the GNU General Public License or a standard commercial license acquired from Oracle. MySQL is a robust, multi-threaded, transactional DBMS. It is highly scalable and can be deployed across numerous servers. Because it can be used free of charge, it holds a significant share within the research community. While it is often considered unsuitable for domains with high security requirements, such as financial organizations or certain areas of government, MySQL has become the leading relational database in many areas of academia, including scientific research and the teaching of students [12].

_C. PostgreSQL_

Cloud databases allow service providers and organizations to offer versatile and highly adaptable database-as-a-service (DBaaS) environments while freeing DBAs and application developers from the rigors of setting up and administering modern, robust database environments. Postgres Plus Cloud Database simplifies the process of provisioning robust Postgres deployments while exploiting the advantages of cloud computing. When used with Postgres Plus Advanced Server, Cloud Database additionally provides an Oracle-compatible DBaaS, offering dramatic cost savings and competitive advantages [13]. Fig. 2 illustrates PostgreSQL performance in Amazon.

Fig.2. PostgreSQL Performance in Amazon [14]

_D. Google Cloud SQL_

The MySQL database has a cloud counterpart that can easily be deployed in the Google cloud, known as "Google Cloud SQL". It has all the capabilities and functionality of MySQL, with a few additional features and a few unsupported features, as described below.
Google Cloud SQL is easy to use, does not require any software installation or maintenance, and is ideal for small to medium-sized applications [15]. MySQL databases can be deployed in the cloud without difficulty: the Google Cloud Platform provides powerful databases that run fast, do not run out of space, and give your application the redundancy it requires for reliable storage [15].

_E. MongoLab_
MongoDB is a document-oriented open-source JSON database system, created by Geir Magnusson and Dwight Merriman at 10gen. Rather than a pure key/value store, it is designed to be a genuine object database. The data is stored as JSON-like documents with dynamic schemas. It offers the scalability of a key/value store together with rich features of relational databases, such as indexes and ad hoc queries [1].

VI. WIDELY USED DATABASE IN CLOUD COMPUTING (MYSQL)

MySQL is among the most widely used database systems throughout the world, especially where small and medium-sized businesses are trying to cut costs. In order to meet the level of service demanded by customers, it is critical that applications have the required availability and performance, regardless of the type of application or the workload a system carries. For measuring the performance of MySQL applications, the most commonly used approach is to measure transactions per second (TPS) [16].

_A. Advantages of MySQL Database in Cloud Computing_
There are several main advantages of the MySQL database in cloud computing [17]:
- _Availability_
It is very damaging to have a database go down during periods of high workload and sales. Cloud-based MySQL databases guard against this issue using modern technology and accessible, distributed resources.
- _Buy the database administration only_
Some cloud companies only offer MySQL database hosting through a cloud-based hosting account. Recently, companies have started offering databases as a service, allowing people to pay only for the databases and not for a hosting account they have no use for.
- _Easy to outsource maintenance_
Technology keeps advancing, but management's spending on IT department staff rarely scales in the same way. If you are already overburdened with system administration, moving parts of the infrastructure to the cloud allows you to offload maintenance and upgrade tasks to the cloud provider. You cannot be totally hands-off, but every little bit helps.
- _Scalability_
The scalability that comes with cloud-based MySQL databases cannot be matched by individual or dedicated machines. Nobody wants to ship in a rack of database servers for transient needs; cloud-based MySQL databases are ideal for such situations.

_B. Disadvantages of MySQL Database in Cloud Computing_
Below is a discussion of limitations of both SQL and MySQL [12]:
- _Null Data_
Storing incomplete or unknown temporal data in SQL is commonly done with a NULL. The DATE and TIME data types, as described by SQL-92, are each considered to be composed of three distinct integers with different acceptable ranges. For instance, DATE is the single data type assigned to a table attribute that stores a date (and not a time). SQL allows only all-or-nothing nullability.
That is, the value as a whole can be NULL, but individual parts of a date cannot.
- _Granularity_
Related to storing null data is storing data at different granularities. Since zero is meaningful in every time field, and because MySQL also uses zero as a null marker for each field in a time value, storing TIMEs or DATETIMEs of different granularities in a single column is impractical.
- _Overflow_
Within MySQL, a DATE can fall into one of three basic ranges: supported, legal, and illegal. "Supported" means accepted by the system and guaranteed to work. "Legal" and "illegal" are terms not clearly defined but extrapolated from other terminology used in the manual: "legal" means possibly accepted by the system but not guaranteed to work, and "illegal" means not accepted by the system.
- _Non-Gregorian Calendars_
MySQL uses the proleptic Gregorian calendar, meaning that all dates are fixed around the Gregorian calendar, and the Gregorian calendar is used to represent even those dates that occurred while the Julian calendar was in use. The same effect can be seen in the annual dates of Hanukkah: Hanukkah moves around the Gregorian calendar because it is based on the Jewish calendar. Within the Jewish calendar, Hanukkah holds a fixed position, but non-Jews tend to relate to Hanukkah in terms of the Gregorian calendar, which is why it appears to move from year to year.

VII. CHALLENGES TO DEVELOP CLOUD DATABASE

Cloud DBMSs ought to support the features of cloud computing in addition to database features for wider acceptability, which is a Herculean responsibility. There are several possible difficulties associated with cloud databases, which are displayed in Fig. 3 [1].

Fig. 3. Issues that can occur during DB deployment [1]

- _Scalability_
The rapid growth of databases in size is a consequence of involving large multimedia data, which requires novel scalable systems. Because users expect to easily scale the size of data in databases up and down to meet the requirements of their commercial aims, cloud systems must provide scalable database services to meet the expectations of their users. This is the most important feature of the cloud standard: it prescribes that services can be scaled up or down markedly without causing any disruption in the organization. It is a major challenge in the architecture of the system to implement databases in the cloud while guaranteeing that concurrent customers are supported and handled and that data can grow.
- _Fault Tolerance and High Availability_
It is vital to replicate information across wide geographical locations to provide high availability and robustness of information, as well as high flexibility in adapting to internal failures. The availability of a system can generally be defined as the degree of accessibility and usability of resources for individual users or the staff of organizations [18]. This is one of the most important issues that must be considered by individuals or organizations before starting to move to a cloud database. If an interruption occurs due to a failure in the cloud service, it may affect the availability of databases, temporarily or permanently, which may cause a serious loss of data, partially or completely.
Hardware failures, security deficiencies, and attacks such as DoS are serious threats to the availability of a cloud database system. In most cases, these types of failures are unpredictable and can seriously influence the performance of organizations' or individuals' activities, which may result in the corruption of data or the interruption of real-time services. The performance of the majority of database applications may be seriously affected by the unavailability or failure of a cloud service.
- _Integrity and Data Consistency_
In order to guarantee a high level of data integrity, it is vital to carefully control and monitor the users of the database, including the database administrator and technical staff who are legally permitted to access the system [18]. Keeping a transaction consistent in a database is also a very difficult task, even more so if the data changes very fast, particularly in the case of transactional data. Designers must consider the BASE (Basically Available, Soft state, Eventually consistent) properties of the database carefully. They must ensure that there is no risk of losing data integrity in their shift to cloud databases.
- _Interface for Query_
A cloud database is distributed. Querying a distributed database is an important challenge that cloud designers face: a distributed query needs to access particular nodes of the cloud database, so there should be a streamlined and well-organized query interface for interrogating the database.
- _Privacy and Security of Database_
There are some security concerns which an organization needs to consider before transferring a traditional database to a database on a cloud platform. These security considerations are the main and significant concern of the organizations, not the cloud service provider, as the outcome will ultimately affect the organization's operation. Specifically, if sensitive information is stored in local databases, it is important during the migration process to assure users of the security of the cloud database. In particular, the confidentiality and protection of data should be guaranteed to users. It must be assured that the data will not be illegally manipulated or stolen during the procedure of transferring from the internal database to cloud storage. To achieve this safe migration, a secure procedure should be carefully designed and implemented [18]. It is also essential to encrypt the data stored in the outsourced databases hosted in cloud storage, in order to achieve a high level of confidentiality. There are dangers involved in storing transactional data on a host that is not adequately secured. Sensitive information is encrypted before being stored in the cloud to prevent illegal access, and the ability to decrypt data in the cloud should be restricted per application. It is a serious challenge to guarantee the privacy and security of various databases on one system (a minimal client-side encryption sketch follows this list).
- _Data Portability_
Data portability is the ability to run an application prepared for a specific cloud provider in another cloud provider's settings and systems. Interoperability is the ability to write code that is flexible enough to work with various cloud providers, independent of their differences.
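To make the client-side encryption point above concrete, here is a minimal Python sketch using the `cryptography` package's Fernet construction; the field name and helper functions are illustrative assumptions, not part of any particular DBaaS API.

```python
# pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # kept by the data owner, never uploaded
box = Fernet(key)

def to_cloud(value: str) -> bytes:
    # Only this ciphertext is handed to the cloud database.
    return box.encrypt(value.encode())

def from_cloud(ciphertext: bytes) -> str:
    # Decryption happens on the client, so the provider never sees plaintext.
    return box.decrypt(ciphertext).decode()

row = {"customer_ssn": to_cloud("123-45-6789")}  # hypothetical sensitive field
```

Under this pattern the provider stores only ciphertext, which addresses confidentiality during and after migration, at the cost of losing server-side querying over the encrypted fields.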
VIII. REVIEW OF RELATED WORKS

Much research on cloud computing databases and related issues has been published in recent years. Some of it is discussed briefly in this section.

In a research paper by Vodomin and Androcec [19], the authors present a practical prototype of a migration tool for SQL databases, including MySQL, PostgreSQL, and Microsoft SQL Server. This research mainly contributes an investigation of issues that may arise in the process of migrating databases into cloud storage. The dissimilarities of storage models between commercial clouds are also discussed, which helps to identify the potential issues that may appear during migration between cloud storages.

Strauch et al. [20] introduced a methodology for moving applications to the cloud. Their methodology considers some significant aspects, such as differences in the granularity of interactions and data confidentiality. It is also necessary to allow the application to interact with remote data sources; all of these features are addressed in the proposed method. Furthermore, the authors developed a tool for decision support, application refactoring, and movement of data, which aids application developers in realizing the suggested methodology. Both the proposed methodology and the tool were evaluated by the authors using a case study in partnership with an IT enterprise.

In another research paper, by Abourezq and Idrissi [21], the researchers offered a benchmark of the main database solutions presented by service providers as DBaaS (DataBase-as-a-Service). They reviewed the characteristics of the solutions and their adaptability to Big Data applications.

In the paper written by Arora and Gupta [1], the state of the art in cloud databases and various architectures are reviewed and discussed. Furthermore, the challenges in the development of cloud databases, as well as some very common cloud databases, are discussed and assessed. The main goal of their paper was to review and discuss recent trends, and to explore and analyze the barriers and issues in the development of cloud database technologies.

Alomari et al., in their paper published in 2014 [2], focused on the challenges and issues of data transfer between various cloud data storages. The authors proposed a data model and an API for the modern generation of NoSQL databases in cloud storage. The implementation of their proposed framework involves three popular NoSQL systems: Google Datastore, Amazon SimpleDB, and MongoDB. The proposed framework was designed with a high level of flexibility and can easily be applied to other NoSQL systems. Moreover, the framework includes tools to provide support for adaptation, data transformation, and exchange. The authors also employed a case study to describe the structure and implementation of the suggested framework.

In another study, by Ferretti et al. [22], an alternative architecture is proposed which avoids intermediate components in order to achieve a level of availability and scalability similar to unencrypted cloud database services. Additionally, their proposed architecture ensures the consistency of data in an environment where different clients run SQL queries simultaneously and the configuration of the database can be changed.

Finally, in an article published by Shende and Chapke [23], the latest trends in cloud services provided for database management systems are discussed. The benefits and drawbacks of Database-as-a-Service are explored to allow users to make a decision about using it. This article also discusses the architecture of cloud-based database management systems.
IX. SUMMARY

This article introduced the basic knowledge and concepts of cloud databases and explained some of their important features. Organizations have started to adopt cloud computing for various aims, and a trend has begun of adopting cloud computing services for improved and faster accessibility of data instead of establishing a separate database server for each organization or company. The cloud database has now advanced to another dimension, Database-as-a-Service. This service helps organizations exploit the facilities provided by suppliers without any concern about maintaining the hardware and software tools: they receive services from a DBaaS provider and take advantage of the flexibility of a full-time available database. There are both advantages and drawbacks; however, the adoption of cloud databases has shown that the advantages outweigh the weaknesses, and cloud database services offer various favorable features.

REFERENCES
[1] Arora, I. and A. Gupta, Cloud databases: a paradigm shift in databases. International Journal of Computer Science Issues, 2012. 9(4): p. 77-83.
[2] Alomari, E., A. Barnawi, and S. Sakr, CDPort: a framework of data portability in cloud platforms. In Proceedings of the 16th International Conference on Information Integration and Web-based Applications & Services. 2014. ACM.
[3] Liu, R., A. Aboulnaga, and K. Salem, DAX: a widely distributed multitenant storage service for DBMS hosting. In Proceedings of the VLDB Endowment. 2013. VLDB Endowment.
[4] Agrawal, D., S. Das, and A.E. Abbadi, Data management in the cloud: challenges and opportunities. Synthesis Lectures on Data Management, 2012. 4(6): p. 1-138.
[5] Al Shehri, W., Cloud database as a service. International Journal of Database Management Systems, 2013. 5(2): p. 1.
[6] Scale, M.-S.E., Cloud computing and collaboration. Library Hi Tech News, 2009. 26(9): p. 10-13.
[7] Radack, S., Cloud computing: a review of features, benefits, and risks, and recommendations for secure, efficient implementations. National Institute of Standards and Technology, 2012.
[8] Puthal, D., B. Sahoo, S. Mishra, and S. Swain, Cloud computing features, issues, and challenges: a big picture. In Computational Intelligence and Networks (CINE), 2015 International Conference on. 2015. IEEE.
[9] Jula, A., E. Sundararajan, and Z. Othman, Cloud computing service composition: a systematic literature review. Expert Systems with Applications, 2014. 41(8): p. 3809-3824.
[10] Aboulnaga, A., et al., Deploying database appliances in the cloud. IEEE Data Eng. Bull., 2009. 32(1): p. 13-20.
[11] Marz, N., Storm: distributed and fault-tolerant realtime computation. In O'Reilly Strata Conference Making Data Work. 2012, O'Reilly Media, Inc.: Santa Clara, California.
[12] Vicknair, C., D. Wilkins, and Y. Chen, MySQL and the trouble with temporal data. In Proceedings of the 50th Annual Southeast Regional Conference. 2012. ACM.
[13] Postgres Plus, Cloud Database: Getting Started Guide. Retrieved 23 November 2012.
[14] Campbell, L., J. Edwards, and E. Calvo, RDBMS in the cloud: PostgreSQL on AWS. Amazon Web Services, 2013.
[15] Krishnan, S. and J.L.U. Gonzalez, Google Cloud SQL. In Building Your Next Big Thing with Google Cloud Platform. 2015, Springer. p. 159-183.
[16] Ahmed, M., M.M. Uddin, M.S. Azad, and S. Haseeb, MySQL performance analysis on a limited resource server: Fedora vs. Ubuntu Linux.
In Proceedings of the 2010 Spring Simulation Multiconference. 2010. Society for Computer Simulation International.
[17] Summers, A., Five advantages of running a SQL Server database in a cloud environment or virtual machine. 2013.
[18] Sakhi, I., Database security in the cloud. 2012.
[19] Vodomin, G. and D. Androcec, Problems during database migration to the cloud. In Central European Conference on Information and Intelligent Systems. 2015. Faculty of Organization and Informatics Varazdin.
[20] Strauch, S., et al., Migrating enterprise applications to the cloud: methodology and evaluation. International Journal of Big Data Intelligence, 2014. 1(3): p. 127-140.
[21] Abourezq, M. and A. Idrissi, Database-as-a-service for big data: an overview. International Journal of Advanced Computer Science and Applications (IJACSA), 2016. 7(1).
[22] Ferretti, L., M. Colajanni, and M. Marchetti, Supporting security and consistency for cloud database. In Cyberspace Safety and Security. 2012, Springer. p. 179-193.
[23] Shende, S.B. and P.P. Chapke, Cloud database management system (CDBMS). Compusoft, 2015. 4(1): p. 1462.

**Authors' Profiles**

**Harrison John Bhatti** received his Bachelor of Science in Computer Science (BCS) degree in 2003 and M.Sc. in Information Technology Management in the field of Cloud Computing and Virtualization in 2016 from Asia Pacific University of Technology and Innovation (APU), Kuala Lumpur, in collaboration with Staffordshire University, UK. Harrison John is currently pursuing a second Master's degree, in Industrial Management and Innovation, at Halmstad University, Sweden. His core research areas are Cloud Computing, Virtualization, Docker Containers, and Strategic Planning and Innovation.

**Babak Bashari Rad** received his B.Sc. in Computer Engineering in the subfield of Software in 1996 and M.Sc. in Computer Engineering in the field of Artificial Intelligence and Robotics in 2001 from the University of Shiraz. He received his Ph.D. in Computer Science from the University of Technology Malaysia in 2013. Dr. Babak is currently Program Leader of Postgraduate Studies in the School of Computing and a Senior Lecturer in the academic group of Computer Science and Software Engineering (CSSE), Asia Pacific University of Technology and Innovation (APU), Kuala Lumpur. His main research interests cover a broad range of areas in Computer Science and Information Technology, including Information Security and Forensics, Malware Detection, Machine Learning, Artificial Intelligence, Image Processing, Cloud Computing, and other relevant fields.

**How to cite this paper:** Harrison John Bhatti, Babak Bashari Rad, "Databases in Cloud Computing: A Literature Review", International Journal of Information Technology and Computer Science (IJITCS), Vol. 9, No. 4, pp. 9-17, 2017. DOI: 10.5815/ijitcs.2017.04.02

-----
{ "disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.5815/IJITCS.2017.04.02?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.5815/IJITCS.2017.04.02, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "GOLD", "url": "http://www.mecs-press.org/ijitcs/ijitcs-v9-n4/IJITCS-V9-N4-2.pdf" }
2,017
[ "Review" ]
true
2017-04-08T00:00:00
[ { "paperId": "10b13340aa6145b67d470c8989fa0339e7b1d075", "title": "Problems during Database Migration to the Cloud" }, { "paperId": "e38f7af107e55ab01fffa6e730337ef18544b3bd", "title": "Building Your Next Big Thing with Google Cloud Platform" }, { "paperId": "c7bcfbe38eae00e0d3c1d1615217cd084a6e5518", "title": "Cloud Computing Features, Issues, and Challenges: A Big Picture" }, { "paperId": "74cf3121e2f901b501cfcc868baa8d110781225c", "title": "Migrating enterprise applications to the cloud: methodology and evaluation" }, { "paperId": "1463029e078bda11a877ac533dde8a004cebbaca", "title": "CDPort: A Framework of Data Portability in Cloud Platforms" }, { "paperId": "e72714c051aae9d4e32002d2b406a6a5f7d58d5f", "title": "Cloud computing service composition: A systematic literature review" }, { "paperId": "7460d993bed5f8b624af3273fcd4873c952ceb7d", "title": "Cloud Database Management System (CDBMS)" }, { "paperId": "9be96860856b2eb37d75111eb2bd8134e8034bd4", "title": "Cloud Database Database as a Service" }, { "paperId": "b51b0c8de157121c8e00adcde03a1114b90a983e", "title": "DAX: A Widely Distributed Multi-tenant Storage Service for DBMS Hosting" }, { "paperId": "f13fe25b56891d696341ed4720bca199d03c7cfb", "title": "Data Management in the Cloud: Challenges and Opportunities" }, { "paperId": "55d611b4d79f39462f204a3e194457c2ef8b1017", "title": "Supporting Security and Consistency for Cloud Database" }, { "paperId": "3111d9a0c472a6b9912b8990033fd45e99bdc732", "title": "Cloud Computing: A Review of Features, Benefits, and Risks, and Recommendations for Secure, Efficient Implementations | NIST" }, { "paperId": "a66074da68167b4e2f1535b472c37f762db99d19", "title": "MySQL and the trouble with temporal data" }, { "paperId": "da08c17d228e05e9a72174c906354155cdd79d8d", "title": "MySQL performance analysis on a limited resource server: Fedora vs. Ubuntu Linux" }, { "paperId": "c6fa48334fb77707412634aaa0a04a7fc4716b4e", "title": "Cloud computing and collaboration" }, { "paperId": "a0e7e17762d0297f9a4cb7c714aa1ee86962c512", "title": "Database-as-a-Service for Big Data: An Overview" }, { "paperId": null, "title": "RDBMS in the Cloud: PostgreSQL on AWS" }, { "paperId": null, "title": "Five advantages of running a SQL Server database in a cloud environment or virtual machine" }, { "paperId": null, "title": "Cloud databases: a paradigm shift in databases" }, { "paperId": "d0ba7b29e9dca93d39c85180afdbc42d860bd025", "title": "Database security in the cloud" }, { "paperId": null, "title": "Postgres Plus" }, { "paperId": null, "title": "Storm: distributed and fault-tolerant realtime computation" }, { "paperId": null, "title": "Society for Computer Simulation International" }, { "paperId": "12a911fa2c9babc1d91368a8fde757c4f8a282c1", "title": "Deploying Database Appliances in the Cloud" }, { "paperId": "7fabc3a05d6246634ff81fe1c1b95e633d66492b", "title": "Copyright" }, { "paperId": null, "title": "Bhatti received his Bachelors of Science in Computer Science (BCS) degree in 2003 and M.Sc. of Information Technology Management in the field of Cloud Computing and Virtualization" }, { "paperId": null, "title": "Innovation from University of Halmstad, Sweden" } ]
8,803
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/02143a479ab76c1a8793d3669a25e72ead827869
[ "Computer Science" ]
0.837242
Symmetric-Key Based Proofs of Retrievability Supporting Public Verification
02143a479ab76c1a8793d3669a25e72ead827869
European Symposium on Research in Computer Security
[ { "authorId": "10770425", "name": "Chaowen Guan" }, { "authorId": "144222395", "name": "K. Ren" }, { "authorId": "3283592", "name": "Fangguo Zhang" }, { "authorId": "1682949", "name": "F. Kerschbaum" }, { "authorId": "46380550", "name": "Jia Yu" } ]
{ "alternate_issns": null, "alternate_names": [ "ESORICS", "Eur Symp Res Comput Secur" ], "alternate_urls": null, "id": "0bddd5d7-2897-495a-a961-465abe6e04de", "issn": null, "name": "European Symposium on Research in Computer Security", "type": "conference", "url": "http://www.wikicfp.com/cfp/program?id=923" }
null
# Symmetric-Key Based Proofs of Retrievability Supporting Public Verification

Chaowen Guan¹, Kui Ren¹, Fangguo Zhang¹ ² ³, Florian Kerschbaum⁴, and Jia Yu¹ ⁵

1 Department of Computer Science and Engineering, University at Buffalo, Buffalo, USA {chaoweng,kuiren}@buffalo.edu, isszhfg@mail.sysu.edu.cn
2 School of Information Science and Technology, Sun Yat-sen University, Guangzhou, China
3 Guangdong Key Laboratory of Information Security Technology, Guangzhou, China
4 SAP, Karlsruhe, Germany, florian.kerschbaum@sap.com
5 College of Information Engineering, Qingdao University, Qingdao, China

**Abstract.** Proofs of Retrievability enable a client to store his data on a cloud server so that he can later execute an efficient auditing protocol to check that the server possesses all of his data. During an audit, the server must maintain full knowledge of the client's data to pass, even though only a few blocks of the data need to be accessed. Since the first work by Juels and Kaliski, many PoR schemes have been proposed and some of them can support dynamic updates. However, all the existing works that achieve public verifiability are built upon traditional public-key cryptosystems, which imposes a relatively high computational burden on low-power clients (e.g., mobile devices). In this work we explore indistinguishability obfuscation for building a Proof-of-Retrievability scheme that provides public verification while the encryption is based on symmetric-key primitives. The resulting scheme offers light-weight storing and proving at the expense of longer verification. This could be useful in applications where outsourcing files is usually done by low-power clients and verifications can be done by well-equipped machines (e.g., a third-party server). We also show that the proposed scheme can support dynamic updates. Finally, to better assess our proposed scheme, we give a performance analysis and a comparison with several other existing schemes, which demonstrates that our scheme achieves better performance on the data owner side and the server side.

**Keywords: Cloud storage · Proofs of retrievability · Indistinguishability obfuscation**

## 1 Introduction

Nowadays, storage outsourcing (e.g., Google Drive, Dropbox, etc.) is becoming increasingly popular as one of the applications of cloud computing. It enables clients to access the outsourced data flexibly from any location. However, the storage provider (i.e., the server) is not necessarily trusted. This situation gives rise to a need for the data owner (i.e., the client) to be able to efficiently verify that the server indeed stores the entire data. More precisely, the client can run an efficient audit protocol with the untrusted server such that the server can pass the audit only if it maintains knowledge of the client's entire outsourced data. Formally, this implies two guarantees that the client wants from the server: Authenticity and Retrievability. Authenticity ensures that the client can verify the correctness of the data fetched from the server. Retrievability, on the other hand, provides assurance that the client's data on the server is intact and no data loss has occurred. Naturally, the client should not need to download the entire data from the server to verify the data's integrity, since this may be prohibitive in terms of bandwidth and time. Also, it is undesirable for the server to read all of the client's outsourced data during an audit protocol.
One method that achieves the above is called Proofs of Retrievability (PoR), initially defined and constructed by Juels and Kaliski [1]. PoR schemes can mainly be categorized into two classes: privately verifiable ones and publicly verifiable ones. Note that privately verifiable PoR systems normally involve only symmetric-key primitives, which are cheap for the data owner when encrypting and uploading its files. However, in such systems the guarantees of the data's authenticity and retrievability largely depend on the data owners themselves, because they need to perform verifications (e.g., audits) regularly in order to react as early as possible in case of data loss. Nowadays, users create and upload data everywhere using low-power devices such as mobile phones. Obviously, such a privately verifiable PoR system would inevitably impose an expensive burden on low-power data owners in the long run. In this scenario with low-power users, it is instead reasonable to have a well-equipped server (trusted or semi-trusted) perform auditing on behalf of the data owner, which requires publicly verifiable PoR systems. However, all of the existing PoR schemes that achieve public verifiability are constructed from traditional public-key cryptography, which implies more complex and expensive computations than simple symmetric-key cryptographic primitives. (This observation can also be made in outsourced computing schemes that support public verification [34–36].) That means a PoR scheme using public-key cryptographic primitives incurs relatively expensive overheads on low-capability clients. One might therefore want to construct a publicly verifiable PoR scheme without relying on traditional public-key cryptographic primitives.

One cryptographic primitive that can help to overcome this constraint is indistinguishability obfuscation (iO), which guarantees that obfuscations of any two distinct (equal-size) programs that implement the same functionality are computationally indistinguishable from each other. iO has attracted great attention since the recent breakthrough result of Garg et al. [2], who proposed the first candidate construction of an efficient indistinguishability obfuscator for general programs written as boolean circuits. Subsequently, Sahai and Waters [3] showed the power of iO as a cryptographic primitive: they used iO to construct deniable encryption, public-key encryption, and much more from pseudorandom functions. Most recently, by
The resulting PoR scheme offers light-weight outsourcing, because it requires only symmetric key operations for the data owner to upload files to the cloud server. Likewise, the server also requires less workload during an auditing compared to existing publicly verifiable PoR schemes. 2. We show that the proposed PoR scheme can support dynamic updates by applying the Merkle hash tree technique. We first build a modified B+ tree over the file blocks and the corresponding block verification messages σ. Then we apply the Merkle hash tree to this tree for ensuring authenticity and freshness. 3. Note that the current i construction candidate will incur a large amount of _O_ overhead for generating obfuscation, but it is only a one-time cost during the preprocessing stage of our system. Therefore its cost can be amortized over plenty of future operations. Except for this one-time cost, we show that our proposed scheme achieves good performance on the data owner side and the cloud server side by analysis and comparisons with other recent existing PoR schemes. Indistinguishability obfuscation indeed provides attractive and interesting features, but the current i candidate construction offers impractical generation _O_ and evaluation. Given the fact that the development of i is still in its nascent _O_ stages, in Appendix, we discuss several possible future directions in works on obfuscation in addition to those discussed in [2]. **1.1** **Related Work** **Proof of Retrievability and Provable Data Possession. The first PoR** scheme was defined and constructed by Juels and Kaliski [1], and the first Provable Data Possession (PDP) was concurrently defined by Ateniese et al. [7]. The main difference between PoR and PDP is the notion of security that they achieve. Concretely, PoR provides stronger security guarantees than PDP does. A successful PoR audit guarantees that the server maintains knowledge of all of the client’s outsourced data, while a successful PDP audit only ensures that ----- the server is retaining most of the data. That means, in a PDP system a server that lost a small amount of data can still pass an audit with significant probability. Some PDP schemes [8] indeed provide full security. However, those schemes requires the server to read the client’s entire data during an audit. If the data is large, this becomes totally impractical. A detailed comparison can be found in [9]. Since the introduction of PoR and PDP they have received much research attention. On the one hand, subsequent works [6, 10–12] for static data focused on the improvement of communication efficiency and exact security. On the other hand, the works of [13–15] showed how to construct dynamic PDP scheme supporting efficient updates. Although many efficient PoR schemes have been proposed since the work of Juels et al., only a few of them supports efficient dynamic update [16–18]. Observe that in publicly verifiable PoR systems, an external verifier (called auditor) is able to perform an auditing protocol with the cloud server on behalf of the data owner. However, public PoR systems do not provide any security guarantees when the user and/or the external verifier are dishonest. To address this problem Armknecht et al. recently introduced the notion of outsourced proofs _of retrievability (OPoR) [19]. In particular, OPoR protects against the collusion_ of any two parties among the malicious auditor, malicious users and the malicious cloud server. Armknecht et al. 
proposed a concrete OPoR scheme, named Fortress, which is mainly built upon the private PoR scheme in [6]. In order to be secure in the OPoR security model, Fortress also employs a mechanism that enables the user and the auditor to extract common pseudorandom bits from a time-dependent source without any interaction.

**Indistinguishability Obfuscation.** Program obfuscation aims to make computer programs "unintelligible" while preserving their functionality. The formal study of obfuscation was started by Barak et al. [20] in 2001. In their work, they first suggested a quite intuitive notion called virtual black-box obfuscation, for which they also showed impossibility. Motivated by this impossibility, they proposed another important notion of obfuscation called indistinguishability obfuscation (iO), which asks that obfuscations of any two distinct (equal-size) programs that implement the same functionality be computationally indistinguishable from each other. A recent breakthrough result by Garg et al. [2] presented the first candidate construction of an efficient indistinguishability obfuscator for general programs written as boolean circuits. The proposed construction was built on the multilinear map candidates [21, 22]. The work of Garg et al. [2] also showed how to apply indistinguishability obfuscation to the construction of functional encryption schemes for general circuits. In subsequent work, Sahai and Waters [3] formally investigated what can be built from indistinguishability obfuscation and showed its power as a cryptographic primitive. Since then, many new applications of general-purpose obfuscation have been explored [24–28]. Most recently, the works of Boneh et al. [5] and Ramchen et al. [4] re-explored the constructions of some existing cryptographic primitives through the lens of obfuscation, including broadcast encryption, traitor tracing and signing. Those proposed
The randomized proving and verifying algo-** _←_ rithms together define an Audit-protocol for proving file retrievability. During protocol execution, both algorithms take as input the public key pk and the file tag t output by Store. Prove algorithm also takes as input the processed file description M _[∗]_ that is output by Store, and Verify algorithm takes as input public verification key V K. At the end of the protocol, Verify outputs 0 or 1, with 1 indicating that the file is being stored on the server. We denote a run of two parties executing such protocol as: _{0, 1} ←_ (Verify(pk, V K, t) ⇌ **Prove(pk, t, M** _[∗]))._ **Correctness. For all keypairs (pk, sk) output by KeyGen, for all files M** _∈_ _{0, 1}[∗], and for all (M_ _[∗], t) output by Store(sk, M_ ), the verification algorithm accepts when interacting with the valid prover: (Verify(pk, V K, t) ⇌ **Prove(pk, t, M** _[∗])) = 1._ ----- **2.2** **Obfuscation Preliminaries** We recall the definition of indistinguishability obfuscation from [2, 3]. **Definition 1. Indistinguishability Obfuscation (i** _). A uniform PPT machine_ _O_ _iO is called an indistinguishability obfuscator for a circuit class {Cλ}λ∈N if the_ _following conditions are satisfied:_ _– For all security parameters λ ∈_ N, for all C ∈Cλ, for all inputs x, we have _that Pr[C_ _[′](x) = C(x) : C_ _[′]_ _i_ (λ, C)] = 1. _←_ _O_ _– For any (not necessarily uniform) PPT distinguisher (Samp, D), there exists_ _a negligible function negl(_ ) such that the following holds: if for all security _·_ _parameters λ ∈_ N, Pr[∀x, C0(x) = C1(x) : (C0; C1; τ ) ← _Samp(1[λ])] > 1 −_ _negl(λ), then we have_ _|Pr[D(τ, iO(λ, C0)) = 1 : (C0; C1; τ_ ) ← _Samp(1[λ])]−_ Pr[D(τ, iO(λ, C1)) = 1 : (C0; C1; τ ) ← _Samp(1[λ])]| ≤_ _negl(λ)._ **2.3** **Puncturable PRFs** A pseudorandom function (PRF) is a function F : with K $ such _K×M →Y_ _←K_ that the function F (K, ) is indistinguishable from random. A constrained PRF _·_ [29] is a PRF F (K, ) that is able to evaluate at certain portions of the input _·_ space and nowhere else. A puncturable PRF [3, 29] is a type of constrained PRF that enables the evaluation at all bit strings of a certain length, except for any polynomial-size set of inputs. Concretely, it is defined with two PPT algorithms (EvalF, PunctureF ) such that the following two properties hold: – Functionality Preserved under Puncturing. For every PPT algorithm with input 1[λ] outputs a set S 0, 1, for all x 0, 1 _S, we have_ _A_ _⊆{_ _}[n]_ _∈{_ _}[n]\_ Pr[EvalF (K{S}, x) = F (K, x) : K _←K$_ _, K{S} ←_ PunctureF (K, S)] = 1 – Pseudorandom at Punctured Points. For every pair of PPT algorithms (A1, A2) such that A1(1[λ]) outputs a set S ⊆{0, 1}[n] and a state σ, consider an experiment where K _←K$_ _, K{S} ←_ PunctureF (K, S). It holds that _|Pr[A2(σ, K{S}, S, F_ (K, S)) = 1)]− Pr[A2(σ, K{S}, S, Um(λ)·|S|) = 1]| ≤ _negl(λ)_ ## 3 Security Definitions The security definitions of Authenticity and Retrievability in [17, 18] are essentially equivalent to the security definition of Soundness in [6]. Note that the security definitions in [17, 18] are for dynamic PoR systems, while the one in [6] considers only static PoR systems. The only difference between a static PoR ----- scheme and a dynamic PoR scheme is that the latter one supports secure dynamic updates, including modification, deletion and insertion. This affects the access to oracles in the security game. 
Below we present the security definitions for static PoR systems in the same way as [17, 18] and then point out how to obtain the security definitions for dynamic PoR systems based on the static one. **3.1** **Security Definitions on Static PoR** **Authenticity. Authenticity requires that the client can always detect if any** message sent by the server deviates from honest behavior. More precisely, consider the following game between a challenger, a malicious server and an _C_ _S[�]_ honest server for the adaptive version of authenticity: _S_ – The challenger initializes the environment and provides with public para_S[�]_ meters. – The malicious sever _S[�] specifies a valid protocol sequence P = (op1, op2, · · ·,_ _oppoly(λ)) of polynomial size in the security parameter λ. The specified oper-_ ations opt can be either Store or Audit. C executes the protocol with both _S[�]_ and an honest server . _S_ If at execution of any opj, the message sent by _S[�] differs from that of the honest_ server and does not output reject, the adversary wins and the game results _S_ _C_ _S[�]_ in 1, else 0. **Definition 2. A static PoR scheme is said to satisfy adaptive Authenticity, if** _any polynomial-time adversary_ _wins the above security game with probability_ _S[�]_ _no more than negl(λ)._ **Retrievability. Retrievability guarantees that whenever a malicious server can** pass the audit test with non-negligible probability, the server must know the entire content of ; and moreover, can be recovered by repeatedly running _M_ _M_ the Audit-protocol between the challenger and the server . More precisely, _C_ _S[�]_ consider the following security game: – The challenger initializes the environment and provides with public para_S[�]_ meters. – The malicious server _S[�] specifies a protocol sequence P = (op1, op2, · · ·,_ _oppoly(λ)) of polynomial size in terms of the security parameter λ. The speci-_ fied operations opt can be either Store or Audit. Let M be the correct content value. – The challenger sequentially executes the respective protocols with . At the _C_ _S[�]_ end of executing P, let stC and st �S [be the final configurations (states) of the] challenger and the malicious server, respectively. – The challenger now gets black-box rewinding access to the malicious server in its final configuration st �S [. Starting from the configurations (][st][C][, st][ �]S [), the chal-] lenger runs the Audit-protocol repeatedly for a polynomial number of times with the server and attempts to extract out the content value as . _S[�]_ _M[′]_ ----- If the malicious server passes the Audit-protocol with non-negligible probability _S[�]_ and the extracted content value =, then this game outputs 1, else 0. _M[′]_ _̸_ _M_ **Definition 3. A static PoR scheme is said to satisfy Retrievability, if there exists** _an efficient extractor_ _such that for any polynomial-time_ _, if_ _passes the_ _E_ _S[�]_ _S[�]_ _Audit-protocol with non-negligible probability, and then after executing the Audit-_ _protocol with_ _for a polynomial number of times, the extractor_ _outputs content_ _S[�]_ _E_ _value_ = _only with negligible probability._ _M[′]_ _̸_ _M_ The above says that the extractor will be able to extract out the correct _E_ content value = if the malicious server can maintain a non-negligible _M[′]_ _M_ _S[�]_ probability of passing the Audit-protocol. This means the server must retain full knowledge of . 
_M_ **3.2** **Security Definitions on Dynamic PoR** The security definitions for dynamic PoR systems are the same as those for static PoR systems, except that the oracles which the malicious server has access to _S[�]_ are including Read, Write and Audit. Precisely, the security game for Authenticity is the same as the for static PoR schemes, except that the malicious server _S[�]_ can get access to Read, Write and Audit oracles. This means that the specified operations opt by _S[�] in the protocol sequence P = (op1, op2, · · ·, oppoly(λ)) can_ be either Read, Write or Audit. Similarly, the security game for Retrievability is the same as that for static PoR systems, except that the malicious server can _S[�]_ get access to Read, Write and Audit oracles. Note that the winning condition for both games remain unchanged. ## 4 Constructions In this section we first give the construction of a static publicly verifiable PoR system. Then we discuss how to extend this static PoR scheme to support efficient dynamic updates. Before presenting our proposed constructions, we analyze a trivial construction of a publicly verifiable PoR scheme using i . Let n be the number of file _O_ blocks, λ1 be the size of a file block (here assume every file block is equally large), λ2 be the size of a block tag σ and I be the challenge index set requested by the verifier. Since i can hide secret information, which is embedded into the _O_ obfuscated program, from the users, one might construct a scheme as: (1) set the tag for a file block mi as the output of a PRF F (k, mi) with secret key k; (2) embed key k into the verification program and obfuscate it; (3) this verification program simply checks the tags for the challenged file blocks to see if they are valid outputs of the PRF. Observe that this verification program takes as inputs a challenge index set, the challenged file blocks and the corresponding file tags. Therefore, the circuit for this verification program will be of size _O(poly(|I| · log n + |I| · λ1 + |I| · λ2)), where |I| is the size of index set I and_ _poly(x) is a polynomial in terms of x. Clearly, this method also costs much a lot_ of bandwidth due to the fact that it does not provide an aggregated proof. While in our construction we modify the privately verifiable PoR scheme in [6]. For consistency with the above analysis, assume that file blocks are not ----- further divided into sectors. Then the verification program takes as input a challenge index set I, an aggregation of the challenged file blocks μ and an aggregated σ[′]. Consequently the circuit for the verification program will have size O(poly(|I| · log n + λ1 + λ2)), which is much smaller than that in the trivial construction. Clearly, the trivial construction will lead to a significantly larger obfuscation of the verification program. Similarly, we analyze the circuit’s size when a file block is further split into _s sectors, as the scheme in [6] did. Let the size of a sector in a file block be λ3._ The circuit size in the trivial construction will remain unchanged, O(poly( _I_ _|_ _| ·_ log n + |I| · λ1 + |I| · λ2)). While the circuit in our construction will have size _O(poly(|I| · log n + s · λ3 + λ3)) ≈_ _O(poly(|I| · log n + λ1 + λ3)), which is still_ much smaller than that in the trivial construction. As we can see, exploiting i _O_ is not trivial although it is a powerful cryptographic primitive. 
**4.1** **Static Publicly Verifiable PoR Scheme** We modify Shacham and Waters’ privately verifiable PoR scheme in [6] and combine it with i to give a publicly verifiable PoR scheme. Recall that in the _O_ scheme in [6], a file F is processed using erasure code and then divided into n blocks. Also note that each block is split into s sectors. This allows for a tradeoff between storage overhead and communication overhead, as discussed in [6]. Before presenting the construction of the proposed static PoR scheme, we give a brief discussion on how we apply indistinguishability obfuscation to the PoR scheme in [6]. For doing that, we need to utilize a key technique introduced in [3], named punctured programs. At a very high-level, the idea of this technique is to modify a program (which is to be obfuscated) by surgically removing a key element of the program, without which the adversary cannot win the security game it must play, but in a way that does not change the functionality of the program. Note that, in Shacham and Waters’ PoR scheme, for each file block, σi is set as fprf (i) + [�]j[s]=1 _[α][j][m][ij][, where the secret key][ k][prf][ for PRF][ f][ is specific]_ for one certain file M . That means for different files, it uses different PRF key _kprf_ ’s. As to make it a punctured PRF that we want in the obfuscated program, we eliminate this binding between PRF key kprf and file M, and the same PRF key kprf will be used in storing many different files. Thus, the PRF key kprf will be randomly chosen in client KeyGen step, not in Store step. The security will be maintained after this modification, due to the fact that it still provides _σi with randomness without adversary getting the PRF key._ The second main change is related to the construction of a file tag t. Note that, in Shacham and Waters’ scheme, t = n∥c∥MACkmac(n∥c), where _c = Enckenc(kprf_ _∥α1∥· · · ∥αs). In our proposed scheme, the randomly selected_ elements α1, · · ·, αs will be removed. Instead, we use another PRF key fprf ′ to generate s pseudorandom numbers, which will reduce the communication cost by (s · ⌈log p⌉), where log p means each element αi ∈ Zp. As a consequence of these two changes, the symmetric key encryption component c is no longer needed and _σi will be made as fprf_ (i) + [�]j[s]=1 _[f][prf][ ′][(][j][)][ ·][ m][ij][.]_ ----- Let F1(k1, ·) be a puncturable PRF mapping ⌈log N _⌉-bit inputs to ⌈log Zp⌉. Here_ _N is a bound on the number of blocks in a file. Let F2(k2, ·) be a puncturable_ PRF mapping ⌈log s⌉-bit inputs to ⌈log Zp⌉. Let SSigssk(x) be the algorithm generating a signature on x. **KeyGen(). Randomly choose two PRF key k1** 1, k2 2 and a random _∈K_ _∈K_ signing keypair (svk, ssk) _←R_ SKg. Set the secret key sk = (k1, k2, ssk). Let the public key be svk along with the verification key VK which is an indistinguishability obfuscation of the program Check defined as below. **Store(sk, M** ). Given file M and secret key sk = (k1, k2, ssk), proceed as follows: 1. apply the erasure code to M to obtain M _[′];_ 2. split M _[′]_ into n blocks, and each block into s sectors to get {mij} for 1 _i_ _n, 1_ _j_ _s;_ _≤_ _≤_ _≤_ _≤_ 3. set the file tag t = n∥SSigssk(n) 4. for each i, 1 ≤ _i ≤_ _n, compute σi = F1(k1, i) +_ [�]j[s]=1 _[F][2][(][k][2][, j][)][ ·][ m][ij][;]_ 5. set as the outputs the processed file M _[′]_ = {mij}, 1 ≤ _i ≤_ _n, 1 ≤_ _j ≤_ _s,_ the corresponding file tag t and {σi}, 1 ≤ _i ≤_ _n._ **Verify(svk, V K, t). 
Given the tag t, parse t = n∥SSigssk(n) and use svk to verify** the signature on t; if the signature is invalid, reject and halt. Otherwise, pick a random l-element subset I from [1, n], and for each i _I, pick a random_ _∈_ element vi ∈ Zp. Send set Q = {(i, vi)} to the prover. Parse the prover’s response to obtain μ1, · · ·, μs, σ ∈ Z[s]p[+1]. If parsing fails, reject and halt. Otherwise, output VK(Q = {(i, vi)}i∈I _, μ1, · · ·, μs, σ)._ Check: Inputs: Q = {(i, vi)}i∈I _, μ1, · · ·, μs, σ_ Constants: PRF keys k1, k2 **if σ =** [�](i,vi)∈Q _[v][i][ ·][ F][1][(][k][1][, i][) +][ �]j[s]=1_ _[F][2][(][k][2][, j][)][ ·][ μ][j][ then][ output 1]_ **else output** _⊥_ **Prove(t, M** _[′]). Given the processed file M_ _[′], {σi}, 1 ≤_ _i ≤_ _n and an l-element_ set Q sent by the verifier, parse M _[′]_ = {mij}, 1 ≤ _i ≤_ _n, 1 ≤_ _j ≤_ _s and_ _Q = {(i, vi)}. Then compute_ � � _μj =_ _vimij for 1 ≤_ _j ≤_ _s,_ and _σ =_ _viσi,_ (i,vi)∈Q (i,vi) and send to the prove in response the values μ1, · · ·, μs and σ. **4.2** **PoR Scheme Supporting Efficient Dynamic Updates** A PoR scheme supporting dynamic updates means that it enables modification, deletion and insertion over the stored files. Note that, in the static PoR scheme, each σi associated with mij 1≤j≤s is also bound to a file block index i. If an update is executed in this static PoR scheme, it requires to change every σi corresponding to the involved file blocks, and the cost could probably be expensive. Let’s say the client needs to insert a file block Fi into position i. We can see that this insertion manipulation requires to update the indices in σj’s for all _i ≤_ _j ≤_ _n. On average, a single insertion incurs updates on n/2 σj’s._ ----- In order to offer efficient insertion, we need to disentangle σi from index i. Concretely, F1(k1, ·) should be erased in the computing of σi, which leads to a modified σi[′] [=][ �][s]j=1 _[F][2][(][k][2][, j][)][ ·][ m][ij][. However, this would make the scheme inse-]_ cure, because a malicious server can always forge, e.g., σi[′][/][2 =][ �][s]j=1 _[F][2][(][k][2][, j][)][ ·]_ (mij/2) for file block {mij/2}1≤j≤s with this σi[′][.] Instead, we build σi as F1(k1, ri)+ [�][s]j=1 _[F][2][(][k][2][, j][)]_ _[·]_ _[m][ij][, where][ r][i][ is a random]_ element from Zp. Clearly, we can’t maintain the order of the stored file blocks without associating σi with index i. To provide the guarantee that every upto-date file block is in the designated position, we use a modified B+ tree data structure with standard Merkle hash tree technique. Observe that, unlike Shacham and Waters’ scheme where the file is split into _n blocks after being erasure encoded, the construction here assumes that each file_ block is encoded ‘locally’. (Cash et al.’s work [17] also started with this point.) That is, instead of using an erasure code that takes the entire file as input, we use a code that works on small blocks. More precisely, the client divides the file _M into n blocks, i.e., M = (m1, m2, · · ·, mn), and then encodes each file block_ _mi individually into a corresponding codeword block ci = encode(mi). Next, the_ client performs the following PoR scheme to create σi for each ci. Auditing works as before: The verifier randomly selects l indices from [1, n] and l random values, and then challenges the server to respond with a proof that is computed with those l random values and corresponding codewords specified by the l indices. 
Note that, in this construction, each codeword ci will be further divided into s sectors, (ci1, ci2, · · ·, cis) during the creation of σi. A more detailed discussion about this and analysis of how to better define block size can be found in the appendices in [6, 17]. Let F1(k1, ·) be a puncturable PRF mapping ⌈log N _⌉-bit inputs to ⌈log Zp⌉. Here_ _N is a bound on the number of blocks in a file. Let F2(k2, ·) be a puncturable_ PRF mapping ⌈log s⌉-bit inputs to ⌈log Zp⌉. Let Enck/Deck be a symmetric key encryption/decryption algorithm, and SSigssk(x) be the algorithm generating a signature on x. **KeyGen(). Randomly choose puncturable PRF keys k1** 1 k2 2, _∈K_ _∈K_ a symmetric encryption key kenc _enc and a random signing keypair_ _∈K_ (svk, ssk) _←R_ SKg. Set the secret key sk = (k1, k2, kenc, ssk). Let the public key be svk along with the verification key VK which is an indistinguishability obfuscation of the program CheckU defined as below. **Store(sk, M** ). Given file M and secret key sk = (k1, k2, kenc, ssk), proceed as follows: 1. split M _[′]_ into n blocks and apply the erasure code to each block mi to obtain the codeword block m[′]i[, then divide each block][ m][′]i [into][ s][ sectors to] get {m[′]ij[}][ for 1][ ≤] _[i][ ≤]_ _[n,][ 1][ ≤]_ _[j][ ≤]_ _[s][;]_ 2. for each i, 1 ≤ _i ≤_ _n, choose a random element ri ∈_ Zp and compute _σi = F1(k1, ri) +_ [�]j[s]=1 _[F][2][(][k][2][, j][)][ ·][ m]ij[′]_ [;] 3. set c = Enckenc(r1∥· · · ∥rn) and the file tag t = n∥c∥SSigssk(n∥c); 4. set as the outputs the processed file M _[′]_ = {m[′]ij[}][, 1][ ≤] _[i][ ≤]_ _[n,][ 1][ ≤]_ _[j][ ≤]_ _[s][,]_ the corresponding file tag t and {σi}, 1 ≤ _i ≤_ _n._ ----- **Verify(svk, V K, t). Given the file tag t, parse t = n∥c∥SSigssk(n∥c) and use** _svk to verify the signature on t; if the signature is invalid, reject and halt._ Otherwise, pick a random l-element subset I from [1, n], and for each i _I,_ _∈_ pick a random element vi ∈ Zp. Sent set Q = {(i, vi)} to the prover. Parse the prover’s response to obtain μ1, · · ·, μs, σ ∈ Z[s]p[+1]. If parsing fails, reject and halt. Otherwise, output VK(Q = {(i, vi)}i∈I _, μ1, · · ·, μs, σ, t)._ CheckU: Inputs: Q = {(i, vi)}i∈I _, μ1, · · ·, μs, σ, t_ Constants: PRF keys k1, k2, symmetric encryption key kenc _n∥c∥SSigssk(n∥c) ←_ _t_ _r1, · · ·, rn ←_ _Deckenc_ (c) **if σ =** [�](i,vi)∈Q _[v][i][ ·][ F][1][(][k][1][, r][i][) +][ �]j[s]=1_ _[F][2][(][k][2][, j][)][ ·][ μ][j][ then][ output 1]_ **else output** _⊥_ **Prove(t, M** _[′]). Given the processed file M_ _[′], {σi}, 1 ≤_ _i ≤_ _n and an l-element_ set Q sent by the verifier, parse M _[′]_ = {m[′]ij[}][,][ 1][ ≤] _[i][ ≤]_ _[n,][ 1][ ≤]_ _[j][ ≤]_ _[s][ and]_ _Q = {(i, vi)}. Then compute_ � � _μj =_ _vim[′]ij_ [for 1][ ≤] _[j][ ≤]_ _[s,]_ and _σ =_ _viσi,_ (i,vi)∈Q (i,vi) and send to the prove in response the values μ1, · · ·, μs and σ. **Modified B+ Merkle tree. In our construction, we organize the data files** using a modified B+ tree, and then apply a standard Merkle Hash tree to provides guarantees of freshness and authenticity. In this modified B+ tree, each node has at most three entries. Each entry in leaf node is data file’s σ and is linked to its corresponding data file in the additional bottom level. The internal nodes will no longer have index information. Before presenting the tree’s construction, we first define some notations. 
We denote an entry's corresponding computed σ by label(·), the rank of an entry (i.e., the number of file blocks that can be reached from this entry) by rank(·), the descendants of an entry by child(·), and the left/right sibling of an entry by len(·)/ren(·).

– entry w in a leaf node: label(w) = σ, len(w) (if w is the leftmost entry, len(w) = 0) and ren(w) (if w is the rightmost entry, ren(w) = 0);
– entry v in an internal node or the root node: rank(v), child(v), len(v) and ren(v), where len(v) and ren(v) follow the rules above.

An example is illustrated in Fig. 1a. Following the definitions above, entry v_1 in root node R contains: (1) rank(v_1) = 3, because w_1, w_2 and w_3 can be reached from v_1; (2) child(v_1) = w_1∥w_2∥w_3; (3) len(v_1) = 0; (4) ren(v_1) = v_2. Entry w_2 in leaf node W_1 contains: (1) label(w_2) = σ_2; (2) len(w_2) = w_1; (3) ren(w_2) = w_3. Note that the arrows connecting the entries in leaf nodes with the F's indicate that each entry is associated with its corresponding file block; e.g., entry w_1 is associated with the first data block F_1 and label(w_1) = σ_1.

**Fig. 1. An example of a modified B+ tree.**

To search for a σ and its corresponding file block, we need two additional values for each entry, low(·) and high(·): low(·) gives the lowest-position data block that can be reached from an entry, and high(·) gives the highest-position data block that can be reached from an entry. Observe that these two values need not be stored for every entry in the tree; we can compute them on the fly using the ranks. For the current entry r, assume we know low(r) and high(r), and let child(r) = v_1∥v_2∥v_3. Then the low(v_i) and high(v_i) can be computed from the entries' rank values as follows: (1) low(v_1) = low(r) and high(v_1) = low(v_1) + rank(v_1) − 1; (2) low(v_2) = high(v_1) + 1 and high(v_2) = low(v_2) + rank(v_2) − 1; (3) low(v_3) = high(v_2) + 1 and high(v_3) = high(r).

Using the entries' rank values, we can reach the i-th data block (i.e., the i-th entry) in the leaf nodes. The search starts with entry v_1 in the root node; clearly, for the start entry of the tree, low(v_1) = 1. At each entry v during the search, if i ∈ [low(v), high(v)], we proceed along the pointer from v to its children; otherwise, we check the next entry on v's right side. We continue until we reach the i-th data block. For instance, say we want to read the 6-th data block in Fig. 1a. We start with entry v_1, and the search proceeds as follows (a short code sketch of this descent follows the example):

1. compute high(v_1) = low(v_1) + rank(v_1) − 1 = 3;
2. i = 6 ∉ [low(v_1), high(v_1)], so check the next entry, v_2;
3. compute low(v_2) = high(v_1) + 1 = 4 and high(v_2) = low(v_2) + rank(v_2) − 1 = 6;
4. i ∈ [low(v_2), high(v_2)], so follow the pointer leading to v_2's children;
5. get child(v_2) = w_4∥w_5∥w_6;
6. now in a leaf node, check each entry from left to right, and find that w_6 is the entry connected to the wanted data block.
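The sketch below implements this rank-based descent with hypothetical Entry/Node classes; low and high are computed on the fly exactly as in the rules above, and nothing beyond the ranks is stored.

```python
# Sketch of the rank-based search for the i-th data block (hypothetical
# node/entry classes; 'child' is None for leaf entries).
from dataclasses import dataclass, field
from typing import Optional, List

@dataclass
class Entry:
    rank: int = 1                    # number of file blocks reachable from here
    child: Optional["Node"] = None   # None for leaf entries
    label: Optional[int] = None      # sigma, present only on leaf entries

@dataclass
class Node:
    entries: List[Entry] = field(default_factory=list)  # at most three

def search(root: Node, i: int) -> Entry:
    """Descend using ranks only: low/high are derived on the fly."""
    node, low = root, 1
    while True:
        for e in node.entries:
            high = low + e.rank - 1
            if low <= i <= high:          # i is reachable from this entry
                if e.child is None:       # leaf entry: this is the block's entry
                    return e
                node = e.child            # descend; 'low' carries over unchanged
                break
            low = high + 1                # otherwise move to the right sibling
```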
Now it only remains to define the Merkle hash tree over this modified B+ tree. Note that in our modified B+ tree, each node has at most 3 entries. Let upper-case letters denote nodes and lower-case letters denote entries. For each entry, the hash value is computed as follows:

– Case 0: w is an entry in a leaf node; compute f(w) = h(label(w)) = h(σ);
– Case 1: v is an entry in an internal node whose descendant is node V′; compute f(v) = h(rank(v)∥f(V′)).

For each node (internal or leaf) consisting of entries v_1, v_2, v_3 from left to right, we define f(V) = h(f(v_1)∥f(v_2)∥f(v_3)). For instance, in Fig. 1a, the hash value for the root node is f(R) = h(f(v_1)∥f(v_2)∥f(v_3)), where f(v_i) = h(rank(v_i)∥f(W_i)) and f(W_i) = h(f(w_{(i−1)·3+1})∥f(w_{(i−1)·3+2})∥f(w_{(i−1)·3+3})).

With this Merkle hash tree built over the modified B+ tree, the client keeps track of the root digest. Every time after fetching a data block, the client fetches its corresponding σ as well. The client also receives the hash values associated with the other entries in the same nodes along the path from the root to the data block, and can then verify authenticity and freshness with the Merkle tree. Say the client needs to verify the authenticity and freshness of block F_3 in Fig. 1a, possessing the root digest f(R). The path from the root to F_3 is (R → W_1). For verification, besides σ_3, the client also receives f(w_1) and f(w_2) in node W_1 and f(v_2) and f(v_3) in node R.

**Update.** The main manipulations are updating the data block and updating the Merkle tree. Note that an update affects only the nodes along the path from the wanted data block to the root of the Merkle tree, so the running time for updating the Merkle tree is O(log n). Also, to update the Merkle tree, some hash values along the path from the data block to the root are needed from the server; clearly, the total size of those values is O(log n). Update operations include Modification, Deletion and Insertion. The update operations over our modified B+ tree mostly follow the procedures of a standard B+ tree; a slight difference lies in the Insertion operation when splitting a node, because our modified B+ tree carries no index information.

First, we discuss Modification and Deletion. To modify a data block, the client simply computes the data block's new corresponding σ and updates the Merkle tree with this σ to obtain a new root digest. Then the client uploads the new data block and the new σ. After receiving this new σ, the server just needs to update the Merkle tree along the path from the data block to the root. To delete a data block, the server simply deletes the data block indicated by the client and then updates the Merkle tree along the path from this data block to the root.
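The following sketch shows how a client could recompute the root digest from σ and the sibling hashes along a path, following the hashing rules f(w) = h(σ), f(v) = h(rank(v)∥f(V′)) and f(V) = h(f(v_1)∥f(v_2)∥f(v_3)). The same computation verifies a fetched block and derives the new root digest after a Modification. The helper names and the byte encoding of ranks are our own assumptions.

```python
# Sketch of recomputing the root digest from an authentication path
# (h is SHA-256 over the concatenations defined above; illustrative only).
import hashlib

def h(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

def root_from_path(sigma: bytes, leaf_siblings, path):
    """leaf_siblings: hashes of the other entries in the leaf node, in order,
    with None marking our entry's position. path: (rank, siblings) pairs for
    each internal level from the leaf's parent up to the root."""
    digest = h(sigma)                                   # f(w) = h(label(w))
    node = [x if x is not None else digest for x in leaf_siblings]
    digest = h(*node)                                   # f(W) over the leaf entries
    for rank, siblings in path:
        entry = h(str(rank).encode(), digest)           # f(v) = h(rank || f(V'))
        node = [x if x is not None else entry for x in siblings]
        digest = h(*node)                               # f(V) over the node entries
    return digest

# The client accepts the fetched block and sigma iff root_from_path(...)
# equals the locally stored root digest; after a Modification, the same
# routine run with the new sigma yields the new root digest.
```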
Next, we give the details of Insertion. If the leaf node where the new data block will be inserted is not full, the procedure is the same as Modification. Otherwise, the leaf node needs to be split, and then the entry that leads to this leaf node is also split into two entries, with one entry leading to each leaf node. Note that, unlike operations on a standard B+ tree, we do not copy the index of the third entry (i.e., the index of the newly generated node) to its parent node. Instead, we simply create a new entry with a pointer leading to the node and record the corresponding information as defined above. If the root node needs to be divided, the depth of the Merkle tree increases by 1. An example of updating is shown in Fig. 1b and c. Say the client wants to insert a new file block F_10 in the 7-th position. First, it locates the position in the way described above. Note that we can locate either the 6-th or the 7-th position; here we choose to locate the 6-th position and insert a new entry w_10 behind w_6 in leaf node W_2. (If choosing to locate the 7-th position, one should put the new entry before w_7.) Next, the information corresponding to this new file block F_10 is written into entry w_10, with a pointer pointing from w_10 to F_10, as shown in Fig. 1b.

Since this exceeds the maximum number of entries that a node can have, leaf node W_2 needs to be split into two leaf nodes, W′_2 and W_4, with two non-empty entries in each node (this conforms to the rules of updating a B+ tree), as shown in Fig. 1c. At the same time, a new entry v_4 is created in the root node R with a pointer leading from v_4 to leaf node W_4. Similarly, the root node R is split into two internal nodes, V_1 and V_2. Finally, a new root node R′ is built, which has two entries and two pointers leading to V_1 and V_2, respectively. Note that now the root node has entries r_1 and r_2, where r_1 is the start entry of this tree, meaning low(r_1) = 1. We also have rank(r_1) = rank(V_1) = rank(v_1) + rank(v_2) = 5 and rank(r_2) = 5.

**4.3 Security Proofs**

**Theorem 1.** The proposed static PoR scheme satisfies Authenticity as specified in Sect. 3.1, assuming the existence of secure indistinguishability obfuscators, existentially unforgeable signature schemes and secure puncturable PRFs.

**Theorem 2.** The proposed static PoR scheme satisfies Retrievability as specified in Sect. 3.1.

The detailed proof of Theorem 1 is given in the full version of this paper [23]. The proof of Theorem 2 is identical to that in [6], because in our scheme a file is processed with an erasure code before being divided into n blocks, exactly as in [6], where the proof was divided into two parts, Sects. 4.2 and 4.3.

## 5 Analysis and Comparisons

In this section, we analyse our proposed scheme and then compare it with two other recently proposed schemes. Our scheme requires the data owner to generate an obfuscated program during the preprocessing stage of the system. With the current obfuscator candidate, this indeed costs the data owner a somewhat large amount of overhead, but it is a one-time effort that can be amortized over the many operations that follow. Thus, we focus our analysis on the computation and communication overheads incurred during writing and auditing operations rather than those of the preprocessing step. Like the private PoR system in [6], the data owner can efficiently store files on the cloud server, and the cloud server incurs less overhead during an auditing protocol than in a public-key-based scheme. The cost on the client device is mainly incurred by operations over symmetric-key primitives, which are known to be much faster than public-key cryptographic primitives. The cost analysis on the server side is shown in Table 1, where β is the block size in bits, λ is the security parameter and n is the number of blocks. We compare our scheme with the state-of-the-art scheme [18], since a comparison between Shi et al.'s scheme and Cash et al.'s scheme is already given in [18].

**Table 1. Comparison with existing dynamic PoRs.**

| Scheme | Write cost on server | Write bandwidth | Auditing cost (server read) | Verifiability | Dynamic |
|---|---|---|---|---|---|
| Iris [16] | O(β) | O(β) | O(βλ√n) | Private | YES |
| Cash et al. [17] | O(βλ(log n)²) | O(βλ(log n)²) | O(βλ(log n)²) | Private | YES |
| Shi et al. [18] | O(β log n) + O(λ log n) | O(β) + O(λ log n) | O(βλ log n) | Public | YES |
| This paper | O(β) + O(λ log n) | O(β) + O(λ log n) | O(βλ) | Public | YES |
Note that Shi et al.'s scheme needs amortized cost O(β log n) for writing on the server side, because an erasure code must be recomputed over the entire data file after Θ(n) updates, while our scheme uses an erasure code that works on individual file blocks instead of taking the entire file as input (more details and discussion can be found in Sect. 4). That means that, in our system, modifying a block does not require changing the erasure coding of the entire file; the cost of writing is thus only proportional to the size of the block being written. On the other hand, during an auditing protocol, Shi et al.'s scheme incurs overhead O(βλ log n) on the server side, due to the features of the server-side storage layout. In their scheme, a single file is stored in three parts: a raw data part R, an erasure-coded copy of the entire file C, and a hierarchical log structure H that stores the up-to-date file blocks in erasure-coded format. Thus, during one auditing operation, Shi et al.'s scheme needs to check O(λ) random blocks from C and O(λ) random blocks from each filled level in H. In our scheme, by contrast, the server performs every write directly on the wanted block rather than storing updated blocks separately, so our scheme only requires O(λ) random blocks of one file to check authenticity during auditing. (Note that this O(λ) would usually have to be Ω(√n·β) if no pseudorandom permutation over the locations of the file blocks were performed, because otherwise corrupting a small number of blocks proportional to O(λ) might render the system insecure; please refer to [17] for more details.)

Note that the auditing protocol is most likely executed between a well-equipped verification machine and the server, and the operations on the server side only involve symmetric-key primitives. Therefore, it will not have a noticeable effect on the system's overall performance.

Clearly, the improvement in our work mainly results from iO's power: secret keys can be embedded into the obfuscated verification program without the secret keys being learnt by the user. However, the current obfuscator candidate [2] provides a construction running in impractical, albeit polynomial, time. (Note that it is reasonable and useful for the obfuscated program to be run on well-equipped machines.) Although iO's generation and evaluation are not fast at present [30], studies on implementing practical obfuscation are developing quickly [31]. It is plausible that obfuscation with practical performance will be achieved in the not too distant future, and any improvement in obfuscation will directly lead to an improvement in our schemes.

## 6 Conclusions

In this paper, we explore indistinguishability obfuscation to construct a publicly verifiable Proofs-of-Retrievability (PoR) scheme that is mainly built upon symmetric-key cryptographic primitives. We also show how to modify the proposed scheme to support dynamic updates using a combination of a modified B+ tree and a standard Merkle hash tree. Through analysis and comparison with other existing schemes, we show that our scheme is efficient on both the data owner side and the cloud server side.
Although generating the obfuscation consumes a somewhat large amount of overhead, it is a one-time effort during the preprocessing stage of the system; this cost can therefore be amortized over all future operations. Note also that any improvement in obfuscation will directly lead to an improvement in our schemes.

**Acknowledgments.** This work is supported in part by US National Science Foundation under grant CNS-1262277 and the National Natural Science Foundation of China (Nos. 61379154 and U1135001).

## A Discussions and Future Directions Towards iO

As pointed out in [2], the current obfuscation constructions run in impractical polynomial time, and it is an important objective to improve the efficiency of iO for use in real-life applications. Apon et al. also demonstrated the inefficiency of iO's generation and evaluation in [30]. In this section, we discuss three possible future directions for obfuscation, in addition to those in [2].

**A.1 Outsourced and Joint Generation of Indistinguishability Obfuscation**

Imagine the scenario in our proposed publicly verifiable PoR system, where users store their data on the same cloud server using the same PoR scheme but with
different secret keys. One naive approach with iO would be to require each user to generate his/her own individual obfuscated program for public verification. This means that each user must bear the prohibitively expensive overhead of iO generation on his/her own. Note that, for the same PoR scheme, the verification procedures are identical apart from each user's secret key. Note also that each user "embeds" his/her own secret keys into the obfuscated verification program in a way that prevents anyone else from learning anything about the embedded secret values. Hence, several users could jointly and securely generate one obfuscated verification program, where each user uses his/her own secret key as part of the input to the generation. One promising approach could be secure multiparty computation. Observe that this jointly generated obfuscated program performs almost the same computation as one with only a single user's secret key embedded. The only differences between the jointly generated obfuscation and the individual-user-generated obfuscation are that (1) the jointly generated obfuscation is implanted with more than one user's secret key, and (2) it needs one extra step to identify which user's secret key to use.

On the other hand, outsourced computing is useful in applications where relatively low-power devices need to compute expensive and time-consuming functions. Clearly, for relatively low-power individual computers, the overhead caused by the current iO construction candidate is impractical. Thus, it would be promising to find a specific way to efficiently outsource iO's generation.

**A.2 Reusability and Universality of Indistinguishability Obfuscation**

Reusability is related to iO's joint generation to some extent. In the scenario considered above, the jointly generated obfuscated program is embedded with a group of users' private keys. This means that the same obfuscated program can be used by verifiers on behalf of different users in this group.

Universality concerns an obfuscated program's functionalities. More concretely, a universal iO is supposed to support multiple functionalities. A straightforward example would be the obfuscation-based functional encryption scheme in [2]. Recall that in their construction, the secret key sk_f for a function f is an obfuscated program. For this obfuscated program to become universal, sk_f would need to be associated with more than one function. In this case, e.g., a universal obfuscated program sk_f can be associated with a class of similar functions f = (f_1, f_2, ..., f_k). This means that sk_f's holder can obtain f_1(m), f_2(m), ..., f_k(m) from an encryption of m.

Recently, Hohenberger et al. [32] showed that iO can endow other cryptographic primitives with universality. They employed iO to construct universal signature aggregators, which can aggregate across schemes in various algebraic settings (e.g., RSA, BLS). Prior to this universal signature aggregator, aggregation of signatures could only be built if all the signers used the same signing algorithm and shared parameters. In contrast, the universal signature aggregator enables the aggregation of users' signatures without requiring them to execute the same signing behaviour, which yields compressed authentication overhead.

**A.3 Obfuscation for Specific Functions**

The current iO construction candidate provides a way to obfuscate general circuits and runs in impractical polynomial time. An obfuscation designed for a particular simple function, achieving practical performance, such as computing the inner product of two vectors, would also be desirable (like Wee's work in STOC'05 [33]). That is, we may want to obfuscate such simple functions in a practical way that is specific to those functions. For example, a practical obfuscated program computing the inner product of two vectors, where one vector is an input to the program and the other is embedded into the program without the user learning it, could be useful in applications like computational biometrics. It is also quite likely that such a practical obfuscation for a specific function could be used as a building block to construct obfuscation supporting more complex functionalities, by combining it with other existing practical cryptographic primitives.
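As a purely illustrative sketch, the program below realizes the inner-product functionality discussed here, with one vector embedded. Plain code of course exposes the embedded vector; the point of an obfuscation would be to provide this same input/output behaviour while hiding w, so this is merely the circuit one would hand to an obfuscator, not an obfuscation.

```python
# Illustrative functionality only: a program with an embedded secret vector w
# that returns the inner product <w, x> mod p. A true obfuscation would hide w.
p = 2**61 - 1

def make_inner_product_program(w):
    def program(x):
        assert len(x) == len(w)
        return sum(wi * xi for wi, xi in zip(w, x)) % p
    return program

P = make_inner_product_program([3, 1, 4, 1, 5])   # w stays inside the closure
print(P([2, 7, 1, 8, 2]))                          # 6 + 7 + 4 + 8 + 10 = 35
```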
## References

1. Juels, A., Kaliski Jr., B.S.: PORs: proofs of retrievability for large files. In: ACM CCS, pp. 584–597 (2007)
2. Garg, S., Gentry, C., Halevi, S., Raykova, M., Sahai, A., Waters, B.: Candidate indistinguishability obfuscation and functional encryption for all circuits. In: FOCS, pp. 40–49 (2013)
3. Sahai, A., Waters, B.: How to use indistinguishability obfuscation: deniable encryption, and more. In: STOC, pp. 475–484 (2014)
4. Ramchen, K., Waters, B.: Fully secure and fast signing from obfuscation. In: ACM CCS, pp. 659–673 (2014)
5. Boneh, D., Zhandry, M.: Multiparty key exchange, efficient traitor tracing, and more from indistinguishability obfuscation. In: Garay, J.A., Gennaro, R. (eds.) CRYPTO 2014, Part I. LNCS, vol. 8616, pp. 480–499. Springer, Heidelberg (2014)
6. Shacham, H., Waters, B.: Compact proofs of retrievability. In: Pieprzyk, J. (ed.) ASIACRYPT 2008. LNCS, vol. 5350, pp. 90–107. Springer, Heidelberg (2008)
7. Ateniese, G., Burns, R., Curtmola, R., Herring, J., Kissner, L., Peterson, Z., Song, D.: Provable data possession at untrusted stores. In: ACM CCS, pp. 598–609 (2007)
8. Benabbas, S., Gennaro, R., Vahlis, Y.: Verifiable delegation of computation over large datasets. In: Rogaway, P. (ed.) CRYPTO 2011. LNCS, vol. 6841, pp. 111–131. Springer, Heidelberg (2011)
9. Küpçü, A.: Efficient Cryptography for the Next Generation Secure Cloud: Protocols, Proofs, and Implementation. Lambert Academic Publishing, Saarbrücken (2010)
10. Ateniese, G., Kamara, S., Katz, J.: Proofs of storage from homomorphic identification protocols. In: Matsui, M. (ed.) ASIACRYPT 2009. LNCS, vol. 5912, pp. 319–333. Springer, Heidelberg (2009)
11. Bowers, K.D., Juels, A., Oprea, A.: Proofs of retrievability: theory and implementation. In: The ACM Workshop on Cloud Computing Security, pp. 43–54 (2009)
12. Dodis, Y., Vadhan, S., Wichs, D.: Proofs of retrievability via hardness amplification. In: Reingold, O. (ed.) TCC 2009. LNCS, vol. 5444, pp. 109–127. Springer, Heidelberg (2009)
13. Ateniese, G., Di Pietro, R., Mancini, L.V., Tsudik, G.: Scalable and efficient provable data possession. In: SecureComm 2008, pp. 9:1–9:10. ACM, New York (2008)
14. Erway, C., Küpçü, A., Papamanthou, C., Tamassia, R.: Dynamic provable data possession. In: ACM CCS, pp. 213–222 (2009)
15. Wang, Q., Wang, C., Li, J., Ren, K., Lou, W.: Enabling public verifiability and data dynamics for storage security in cloud computing. In: Backes, M., Ning, P. (eds.) ESORICS 2009. LNCS, vol. 5789, pp. 355–370. Springer, Heidelberg (2009)
16. Stefanov, E., van Dijk, M., Juels, A., Oprea, A.: Iris: a scalable cloud file system with efficient integrity checks. In: ACSAC, pp. 229–238 (2012)
17. Cash, D., Küpçü, A., Wichs, D.: Dynamic proofs of retrievability via oblivious RAM. In: Johansson, T., Nguyen, P.Q. (eds.) EUROCRYPT 2013. LNCS, vol. 7881, pp. 279–295. Springer, Heidelberg (2013)
18. Shi, E., Stefanov, E., Papamanthou, C.: Practical dynamic proofs of retrievability. In: ACM CCS, pp. 325–336 (2013)
19. Armknecht, F., Bohli, J.M., Karame, G.O., Liu, Z., Reuter, C.A.: Outsourced proofs of retrievability. In: ACM CCS, pp. 831–843 (2014)
20. Barak, B., Goldreich, O., Impagliazzo, R., Rudich, S., Sahai, A., Vadhan, S., Yang, K.: On the (im)possibility of obfuscating programs. In: Kilian, J. (ed.) CRYPTO 2001. LNCS, vol. 2139, pp. 1–18. Springer, Heidelberg (2001)
21. Coron, J.-S., Lepoint, T., Tibouchi, M.: Practical multilinear maps over the integers. In: Canetti, R., Garay, J.A. (eds.) CRYPTO 2013, Part I. LNCS, vol. 8042, pp. 476–493. Springer, Heidelberg (2013)
22. Garg, S., Gentry, C., Halevi, S.: Candidate multilinear maps from ideal lattices. In: Johansson, T., Nguyen, P.Q. (eds.) EUROCRYPT 2013. LNCS, vol. 7881, pp. 1–17. Springer, Heidelberg (2013)
23. Guan, C., Ren, K., Zhang, F., Kerschbaum, F., Yu, J.: A symmetric-key based proofs of retrievability supporting public verification. Full version. http://ubisec.cse.buffalo.edu/files/PoR_from_iO.pdf
24. Barak, B., Bitansky, N., Canetti, R., Kalai, Y.T., Paneth, O., Sahai, A.: Obfuscation for evasive functions. In: Lindell, Y. (ed.) TCC 2014. LNCS, vol. 8349, pp. 26–51. Springer, Heidelberg (2014)
25. Brakerski, Z., Rothblum, G.N.: Virtual black-box obfuscation for all circuits via generic graded encoding. In: Lindell, Y. (ed.) TCC 2014. LNCS, vol. 8349, pp. 1–25. Springer, Heidelberg (2014)
26. Garg, S., Gentry, C., Halevi, S., Raykova, M.: Two-round secure MPC from indistinguishability obfuscation. In: Lindell, Y. (ed.) TCC 2014. LNCS, vol. 8349, pp. 74–94. Springer, Heidelberg (2014)
27. Goldwasser, S., Gordon, S.D., Goyal, V., Jain, A., Katz, J., Liu, F.-H., Sahai, A., Shi, E., Zhou, H.-S.: Multi-input functional encryption. In: Nguyen, P.Q., Oswald, E. (eds.) EUROCRYPT 2014. LNCS, vol. 8441, pp. 578–602. Springer, Heidelberg (2014)
28. Hohenberger, S., Sahai, A., Waters, B.: Replacing a random oracle: full domain hash from indistinguishability obfuscation. In: Nguyen, P.Q., Oswald, E. (eds.) EUROCRYPT 2014. LNCS, vol. 8441, pp. 201–220. Springer, Heidelberg (2014)
29. Boneh, D., Waters, B.: Constrained pseudorandom functions and their applications. In: Sako, K., Sarkar, P. (eds.) ASIACRYPT 2013, Part II. LNCS, vol. 8270, pp. 280–300. Springer, Heidelberg (2013)
30. Apon, D., Huang, Y., Katz, J., Malozemoff, A.J.: Implementing cryptographic program obfuscation. IACR Cryptol. ePrint Arch. 2014, 779 (2014)
31. Ananth, P., Gupta, D., Ishai, Y., Sahai, A.: Optimizing obfuscation: avoiding Barrington's theorem. In: ACM CCS, pp. 646–658 (2014)
32. Hohenberger, S., Koppula, V., Waters, B.: Universal signature aggregators. IACR Cryptol. ePrint Arch. 2014, 745 (2014)
33. Wee, H.: On obfuscating point functions. In: STOC, pp. 523–532 (2005)
34. Gennaro, R., Gentry, C., Parno, B.: Non-interactive verifiable computing: outsourcing computation to untrusted workers. In: Rabin, T. (ed.) CRYPTO 2010. LNCS, vol. 6223, pp. 465–482. Springer, Heidelberg (2010)
35. Parno, B., Raykova, M., Vaikuntanathan, V.: How to delegate and verify in public: verifiable computation from attribute-based encryption. In: Cramer, R. (ed.) TCC 2012. LNCS, vol. 7194, pp. 422–439. Springer, Heidelberg (2012)
36. Kerschbaum, F.: Outsourced private set intersection using homomorphic encryption. In: ASIACCS, pp. 85–86 (2012)
{ "disclaimer": "Notice: The following paper fields have been elided by the publisher: {'references', 'abstract'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1007/978-3-319-24174-6_11?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1007/978-3-319-24174-6_11, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "CLOSED", "url": "" }
2,015
[ "JournalArticle", "Conference" ]
false
2015-09-21T00:00:00
[]
17,829
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Physics", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Physics", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/0217a17ab73d370bbdbf3ecb95639bd0810b5690
[ "Computer Science", "Physics" ]
0.842535
Generation and Distribution of Quantum Oblivious Keys for Secure Multiparty Computation
0217a17ab73d370bbdbf3ecb95639bd0810b5690
Applied Sciences
[ { "authorId": "145451615", "name": "M. Lemus" }, { "authorId": "122259487", "name": "Mariana F. Ramos" }, { "authorId": "2053153085", "name": "P. Yadav" }, { "authorId": "144691583", "name": "N. Silva" }, { "authorId": "2113134", "name": "N. Muga" }, { "authorId": "151498130", "name": "André Souto" }, { "authorId": "47144625", "name": "N. Paunkovic" }, { "authorId": "144372606", "name": "P. Mateus" }, { "authorId": "143888528", "name": "A. Pinto" } ]
{ "alternate_issns": null, "alternate_names": [ "Appl Sci" ], "alternate_urls": [ "http://www.mathem.pub.ro/apps/", "https://www.mdpi.com/journal/applsci", "http://nbn-resolving.de/urn/resolver.pl?urn=urn:nbn:ch:bel-217814" ], "id": "136edf8d-0f88-4c2c-830f-461c6a9b842e", "issn": "2076-3417", "name": "Applied Sciences", "type": "journal", "url": "http://www.e-helvetica.nb.admin.ch/directAccess?callnumber=bel-217814" }
The oblivious transfer primitive is sufficient to implement secure multiparty computation. However, secure multiparty computation based on public-key cryptography is limited by the security and efficiency of the oblivious transfer implementation. We present a method to generate and distribute oblivious keys by exchanging qubits and by performing commitments using classical hash functions. With the presented hybrid approach of quantum and classical, we obtain a practical and high-speed oblivious transfer protocol. We analyse the security and efficiency features of the technique and conclude that it presents advantages in both areas when compared to public-key based techniques.
# applied sciences

_Article_

## Generation and Distribution of Quantum Oblivious Keys for Secure Multiparty Computation

**Mariano Lemus 1,2, Mariana F. Ramos 3,4, Preeti Yadav 1,2, Nuno A. Silva 3,4, Nelson J. Muga 3,4, André Souto 2,5,6,*, Nikola Paunković 1,2, Paulo Mateus 1,2 and Armando N. Pinto 3,4**

1 Departamento de Matemática, Instituto Superior Técnico, Av. Rovisco Pais 1, 1049-001 Lisboa, Portugal; mariano.lemus@tecnico.ulisboa.pt (M.L.); pri8.phy@gmail.com (P.Y.); npaunkov@math.tecnico.ulisboa.pt (N.P.); pmat@math.ist.utl.pt (P.M.)
2 Instituto de Telecomunicações, Av. Rovisco Pais 1, 1049-001 Lisboa, Portugal
3 Departamento de Eletrónica, Telecomunicações e Informática, Universidade de Aveiro, Campus Universitário de Santiago, 3810-193 Aveiro, Portugal; marianaferreiraramos@live.ua.pt (M.F.R.); nasilva@ua.pt (N.A.S.); muga@ua.pt (N.J.M.); anp@ua.pt (A.N.P.)
4 Instituto de Telecomunicações, Campus Universitário de Santiago, 3810-193 Aveiro, Portugal
5 Departamento de Informática, Faculdade de Ciências da Universidade de Lisboa, Campo Grande 016, 1749-016 Lisboa, Portugal
6 LASIGE, Faculdade de Ciências da Universidade de Lisboa, Campo Grande 016, 1749-016 Lisboa, Portugal
* Correspondence: ansouto@fc.ul.pt

Received: 15 May 2020; Accepted: 10 June 2020; Published: 12 June 2020

**Featured Application: Private data mining.**

**Abstract:** The oblivious transfer primitive is sufficient to implement secure multiparty computation. However, secure multiparty computation based on public-key cryptography is limited by the security and efficiency of the oblivious transfer implementation. We present a method to generate and distribute oblivious keys by exchanging qubits and by performing commitments using classical hash functions. With the presented hybrid approach of quantum and classical, we obtain a practical and high-speed oblivious transfer protocol. We analyse the security and efficiency features of the technique and conclude that it presents advantages in both areas when compared to public-key based techniques.

**Keywords:** secure multiparty computation; oblivious transfer; quantum communications

**1. Introduction**

In Secure Multiparty Computation (SMC), several agents compute a function that depends on their own inputs while keeping those inputs private [1]. Privacy is critical in the context of an information society, where data are collected from multiple devices (smartphones, home appliances, computers, street cameras, sensors, ...) and subjected to intensive analysis through data mining. This data collection and exploration paradigm offers great opportunities, but it also raises serious concerns. A technology able to protect the privacy of citizens, while simultaneously allowing society to profit from extensive data mining, is going to be of utmost importance. SMC has the potential to be that technology if it can be made practical, secure and ubiquitous.

Current SMC protocols rely on the use of asymmetric cryptography algorithms [2], which are significantly more computationally complex than symmetric cryptography algorithms [3]. Besides being more computationally intensive, asymmetric cryptography in its current standards can no longer be considered secure, due to the expected increase of computational
power that a large-scale quantum computer will bring [4]. Identifying these shortcomings in efficiency and security motivates the search for alternative techniques for implementing SMC without the need for public-key cryptography.

_1.1. Secure Multiparty Computation and Oblivious Transfer_

Consider a set of N agents and a multivariate function f(x_1, x_2, ..., x_N) = (y_1, y_2, ..., y_N). For i ∈ {1, ..., N}, an SMC service (see Figure 1) receives the input x_i from the i-th agent and outputs back the value y_i in such a way that no additional information is revealed about the remaining x_j, y_j, for j ≠ i. Additionally, this definition can be strengthened by requiring that for some number M < N of corrupt agents working together, no information about the remaining agents is revealed (secrecy). It can also be required that if at most M′ < N agents do not compute the function correctly, the protocol detects this and aborts (authenticity).

**Figure 1. In secure multiparty computation, N parties compute a function preserving the privacy of their own input. Each party only has access to their own input–output pair.**

Some of the most promising approaches towards implementing SMC are based on oblivious circuit evaluation techniques, such as Yao's garbled circuits for the two-party case [5] and the GMW or BMR protocols for the general case [2,6–8]. It has been shown that to achieve SMC it is enough to implement the Oblivious Transfer (OT) primitive and that, without additional assumptions, the security of the resulting SMC depends only on that of the OT [9]. In the worst case, this requires each party to perform one OT with every other party for each gate of the circuit being evaluated. This number can be reduced by weakening the security or by increasing the amount of exchanged data [10]. Either way, the OT cost of SMC represents a major bottleneck for its practical implementation. Finding fast and secure OT protocols is therefore a very relevant task in the context of implementing SMC.

Let Alice and Bob be two agents. A 1-out-of-2 OT service receives bits b_0, b_1 as input from Alice and a bit c as input from Bob, then outputs b_c to Bob. This is done in such a way that Bob gets no information about the other message, i.e., b_{c⊕1}, and Alice gets no information about Bob's choice, i.e., the value of c [11].
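For reference, the ideal 1-out-of-2 OT functionality is trivial to state in code; the entire difficulty lies in realizing it between mutually distrustful parties without the trusted third party that this sketch plays.

```python
# A trivial sketch of the ideal 1-out-of-2 OT functionality: a trusted party
# takes (b0, b1) from Alice and c from Bob and hands b_c to Bob. Real protocols
# must realize this behaviour with no trusted party at all.
def ideal_ot(b0: int, b1: int, c: int) -> int:
    assert b0 in (0, 1) and b1 in (0, 1) and c in (0, 1)
    return b1 if c else b0   # Bob learns b_c; Alice learns nothing about c

assert ideal_ot(0, 1, 1) == 1
```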
_1.2. State of the Art_

Classical OT implementations are based on the use of asymmetric keys and suffer from two types of problems. The first is efficiency: asymmetric cryptography relies on relatively complex key generation, encryption, and decryption algorithms [12] (Chapter 1) and [13] (Chapter 6). This limits the achievable OT rates, and since implementations of SMC require a very large number of OTs [3,10], it has hindered the development of SMC-based applications. The other serious drawback is that asymmetric cryptography based on integer factorization or discrete-logarithm problems is insecure in the presence of quantum computers, and therefore has to be progressively abandoned. There are strong research efforts aiming to find other hard problems that can support asymmetric cryptography [4]. However, the security of these novel solutions is still not fully understood.

A possible way to circumvent this problem is to use quantum cryptography to improve the efficiency and security of current techniques. Quantum solutions for secure key distribution, Bit Commitment (BC) and OT have already been proposed [14]. The former was proved to be unconditionally secure (assuming an authenticated channel) and realizable with current technology. Although it was shown to be impossible to achieve unconditionally secure quantum BC and OT [15–17], one can impose restrictions on the power of adversaries in order to obtain practically secure versions of these protocols [18,19]. These assumptions include physical limitations on the apparatuses, such as noisy or bounded quantum memories [20–22]. For instance, quantum OT and BC protocols have been developed and implemented under the noisy storage model (see [23–25]). Nevertheless, solutions based on hardware limitations may not last long, because as quantum technology improves, the rate of secure OT instances will decrease. Other solutions explore relativistic scenarios, using the fact that no information can travel faster than light [26–28]. However, at the moment, these solutions do not seem practical enough to allow the wide dissemination of SMC.

In this work, we explore the security and efficiency features that result from implementing oblivious transfer using a well-known quantum protocol [5] supported by a cryptographic-hash-based commitment scheme [29]. We call it a hybrid approach, since it mixes classical and quantum cryptography. We analyse the protocol's standalone security, as well as its composable security in the random oracle model. Additionally, we study its computational complexity and compare it with the complexity of alternative public-key based protocols. Furthermore, we show that, while unconditional information-theoretic security cannot be achieved, there is an advantage (both in terms of security and efficiency) in using quantum resources in computationally secure protocols, and as such, they are worth considering for practical tasks in the near future.

This paper is organized as follows. In Section 2, we present a quantum protocol to produce OT given access to a collision-resistant hash function, define the concept of oblivious keys, and explain how having pre-shared oblivious keys can significantly decrease the computational cost of OT during SMC. The security and efficiency of the protocol are discussed in Section 3. Finally, in Section 4 we summarize the main conclusions of this work.

**2. Methods**

_2.1. Generating the OTs_

In this section, we describe how to perform oblivious transfer by exchanging qubits. The protocol π_QOT shown in Figure 2 is the well-known quantum oblivious transfer protocol first introduced by Yao, which assumes access to secure commitments. The two logical qubit states |0⟩ and |1⟩ represent the computational basis, and the states |+⟩ = (|0⟩ + |1⟩)/√2 and |−⟩ = (|0⟩ − |1⟩)/√2 represent the Hadamard basis. We also define the states |(s_i, a_i)⟩ for s_i, a_i ∈ {0, 1} according to the following rule:

  |(0, 0)⟩ = |0⟩,  |(0, 1)⟩ = |+⟩,
  |(1, 0)⟩ = |1⟩,  |(1, 1)⟩ = |−⟩.

Note that these states can be physically instantiated using, for instance, a polarization-encoding fiber-optic quantum communication system, provided that a fast polarization encoding/decoding process and an algorithm to control random polarization drifts in optical fibers are available [30,31].
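The measurement statistics implied by this encoding are easy to simulate classically; the toy function below (our own construction) returns the prepared bit when the measurement basis matches the preparation basis, and a uniformly random bit otherwise.

```python
# The four states as a lookup table, plus the induced measurement rule:
# same basis -> deterministic outcome s; conjugate basis -> uniform bit.
import random

STATE = {(0, 0): "|0>", (1, 0): "|1>", (0, 1): "|+>", (1, 1): "|->"}

def measure(s: int, a: int, a_tilde: int) -> int:
    """Outcome of measuring |(s, a)> in basis a_tilde (0 = computational, 1 = Hadamard)."""
    return s if a == a_tilde else random.randint(0, 1)

# Matching bases reproduce the prepared bit with certainty.
assert all(measure(s, a, a) == s for (s, a) in STATE)
```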
**Protocol π_QOT**

**Parameters:** Integers n, m < n.
**Parties:** The sender Alice and the receiver Bob.
**Inputs:** Alice gets two bits b_0, b_1 and Bob gets a bit c.

_(Oblivious key distribution phase)_
1. Alice samples s, a ∈ {0, 1}^{n+m}. For each i ≤ n + m she prepares the state |φ_i⟩ = |(s_i, a_i)⟩ and sends |φ⟩ = |φ_1 φ_2 ... φ_{n+m}⟩ to Bob.
2. Bob samples ã ∈ {0, 1}^{n+m} and, for each i, measures |φ_i⟩ in the computational basis if ã_i = 0, otherwise in the Hadamard basis. Then, he computes the string s̃ = s̃_1 s̃_2 ... s̃_{n+m}, where s̃_i = 0 if the outcome of measuring |φ_i⟩ was 0 or +, and s̃_i = 1 if it was 1 or −.
3. For each i, Bob commits (s̃_i, ã_i) to Alice.
4. Alice chooses a random set of indices T ⊂ {1, ..., n + m} of size m and sends T to Bob.
5. For each j ∈ T, Bob opens the commitments associated with (s̃_j, ã_j).
6. Alice checks that s_j = s̃_j whenever a_j = ã_j, for all j ∈ T. If the test fails, Alice aborts the protocol; otherwise she sends a* = a|_T̄ to Bob and sets k = s|_T̄, where T̄ denotes the indices outside T.
7. Bob computes x = a* ⊕ ã|_T̄ and k̃ = s̃|_T̄.

_(Oblivious transfer phase)_
8. Bob defines the two sets I_0 = {i | x_i = 0} and I_1 = {i | x_i = 1}. Then, he sends to Alice the ordered pair (I_c, I_{c⊕1}).
9. Alice computes (e_0, e_1), where e_i = b_i ⊕ (⊕_{j∈I_{c⊕i}} k_j), and sends it to Bob.
10. Bob outputs b̃_c = e_c ⊕ (⊕_{j∈I_0} k̃_j).

**Figure 2. Quantum OT protocol based on secure commitments. The ⊕ over a family denotes the bit-wise XOR of all the elements in the family.**

Intuitively, this protocol works because the computational and Hadamard bases are conjugate. Performing a measurement in the preparation basis of a state, given by a_i, yields a deterministic outcome, whereas measuring in the conjugate basis, given by ā_i, results in a completely random outcome. By preparing and measuring in random bases, as in steps 1 and 2, approximately half of the measurement outcomes will be equal to the prepared states, and the other half will be uncorrelated. As Alice sends the information about the preparation bases to Bob in step 6, he learns which of his bits are correlated with Alice's. During steps 3 to 6, Bob commits the information about his measurement bases and outcomes to Alice, who then chooses a random subset of them to test for correlations. Passing this test (statistically) ensures that Bob measured his qubits as prescribed by the protocol, as opposed to performing a different (potentially joint) measurement. Such a strategy might extract additional information from Alice's strings, but would fail the specific correlation check in step 6. At step 8, Bob separates his non-tested measurement outcomes into two groups: I_0, where he measured in the same basis as the preparation one, and I_1, where he measured in the other basis. He then inputs his choice bit c by selecting the order in which he sends the two sets to Alice. During step 9, Alice encrypts her first and second input bits with the preparation bits associated with the first and second sets sent by Bob, respectively. This effectively hides Bob's input bit, because she is ignorant of the measurements that were not opened by Bob (by the security of the commitment scheme). Finally, Bob can decrypt only the bit encrypted with the preparation bits associated with I_0.

In real implementations of the protocol, one should consider imperfect sources, noisy channels, and measurement errors. Thus, in step 6, Alice should perform parameter estimation on the statistics of the measurements and pass whenever the error parameter is below some previously fixed value. Following this, Alice and Bob perform standard post-processing techniques of information reconciliation and privacy amplification before continuing to step 7. These techniques work even in the presence of a dishonest Bob: as long as he has some minimal amount of uncertainty about Alice's preparation string s, an adequate privacy amplification scheme can be used to maximize Bob's uncertainty about one of Alice's input bits. This comes at the cost of increasing the number of qubits shared per OT [32]. An example of these techniques applied in the context of the noisy storage model (where the commitment-based check is replaced by a time delay under noisy memories) can be found in [19].
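Building on the measurement rule sketched earlier, this toy simulation of the key-distribution phase (steps 1, 2, 6 and 7, with the commitments, the test set T and all post-processing omitted) produces the promised correlation structure: Alice's k agrees with Bob's k̃ exactly where x_i = 0.

```python
# Toy simulation of the oblivious-key-distribution phase (no commitments,
# no test set, no error correction). Purely illustrative.
import random

def okd_phase(length: int):
    s   = [random.randint(0, 1) for _ in range(length)]   # Alice's bits s_i
    a   = [random.randint(0, 1) for _ in range(length)]   # Alice's bases a_i
    a_t = [random.randint(0, 1) for _ in range(length)]   # Bob's bases
    # Bob's outcomes: prepared bit when bases match, a uniform bit otherwise.
    s_t = [si if ai == ati else random.randint(0, 1)
           for si, ai, ati in zip(s, a, a_t)]
    x   = [ai ^ ati for ai, ati in zip(a, a_t)]            # 0 iff bases matched
    return s, (s_t, x)                                     # k, (k_tilde, x)

k, (k_t, x) = okd_phase(1000)
assert all(k[i] == k_t[i] for i in range(1000) if x[i] == 0)
```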
_2.2. Oblivious Key Distribution_

In order to make the quantum implementation of OT more practical for SMC, we introduce the concept of oblivious keys. The protocol π_QOT can be separated into two phases: the oblivious key distribution phase, which consists of steps 1 to 7 and forms the π_OKD subprotocol, and the oblivious transfer phase, which comprises steps 8 to 10 and which we denote the π_OK→OT subprotocol. Note that after step 7 of π_QOT the subsets I_0, I_1 have not been revealed to Alice, so she has no information yet on how the correlated and uncorrelated bits between k and k̃ are distributed (recall that k and k̃ result from removing the tested bits from the strings s and s̃, respectively). On the other hand, after receiving Alice's preparation bases, Bob does know the distribution of correlated and uncorrelated bits between k and k̃, which is recorded in the string x (x_i = 0 if a_i = ã_i, otherwise x_i = 1). Note that until step 7 of the protocol, all computation is independent of the input bits b_0, b_1, c. Furthermore, from step 8 onwards, only the strings k, k̃, and x are needed to finish the protocol (in addition to the input bits). We call these three strings collectively an oblivious key, depicted in Figure 3. Formally, let Alice and Bob be two agents. Oblivious Key Distribution (OKD) is a service that outputs to Alice the string k = k_1 k_2 ... k_ℓ and to Bob the string k̃ = k̃_1 k̃_2 ... k̃_ℓ together with the bit string x = x_1 x_2 ... x_ℓ, such that k_i = k̃_i whenever x_i = 0, and k̃_i gives no information about k whenever x_i = 1. All the strings are chosen at random at every invocation of the service. A pair (k, (k̃, x)) distributed as above is what we call an oblivious key pair. Alice, who knows k, is referred to as the sender, and Bob, who holds k̃ and x, is the receiver. In other words, when two parties share an oblivious key, the sender holds a string k, while the receiver has only approximately half of the bits of k, but knows exactly which of those bits he has.

**Figure 3. Oblivious keys. Alice has the string k and Bob the string k̃. For each party, the boxes on the left and right represent the bits of their string associated with the indices i for which x_i equals 0 (left box) or 1 (right box). Alice knows the entire key; Bob only knows half of the key, but Alice does not know which half Bob knows.**

When two parties have previously shared an oblivious key pair, they can securely produce OT by performing the steps π_OK→OT of π_QOT. This is significantly faster than current implementations of OT without any previously shared resource, and it requires no quantum communication during SMC. Note that the agents can perform, previously or concurrently, an OKD protocol to share a sufficiently large oblivious key, which can then be partitioned and used to perform as many instances of OT as needed for SMC.
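Given a pre-shared oblivious key, the oblivious transfer phase reduces to a few XORs. The sketch below (single-bit messages, no error handling, toy key material generated inline) follows steps 8 to 10 literally.

```python
# Sketch of the oblivious-transfer phase (steps 8-10) from a pre-shared
# oblivious key (k, (k_tilde, x)). Illustrative only.
import random
from functools import reduce

def xor_all(bits):
    return reduce(lambda u, v: u ^ v, bits, 0)

def ot_from_oblivious_key(b0, b1, c, k, k_t, x):
    I0 = [i for i, xi in enumerate(x) if xi == 0]      # positions Bob knows
    I1 = [i for i, xi in enumerate(x) if xi == 1]
    first, second = (I0, I1) if c == 0 else (I1, I0)   # Bob sends (I_c, I_{c xor 1})
    e0 = b0 ^ xor_all(k[j] for j in first)             # Alice: e_i = b_i xor XOR k_j
    e1 = b1 ^ xor_all(k[j] for j in second)
    return (e0, e1)[c] ^ xor_all(k_t[j] for j in I0)   # Bob decrypts over I_0

# A toy oblivious key: k_i == k_tilde_i exactly where x_i == 0.
n = 64
k   = [random.randint(0, 1) for _ in range(n)]
x   = [random.randint(0, 1) for _ in range(n)]
k_t = [k[i] if x[i] == 0 else random.randint(0, 1) for i in range(n)]
for b0 in (0, 1):
    for b1 in (0, 1):
        for c in (0, 1):
            assert ot_from_oblivious_key(b0, b1, c, k, k_t, x) == (b0, b1)[c]
```

Bob recovers b_c because the pad over I_0 cancels (k and k̃ agree there), while the other message stays padded with bits he does not know.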
Fortunately, fast oblivious key exchange is possible if the parties have access to fast and reliable quantum communications and classical commitments. To use this QOT protocol, the commitment scheme must be instantiated. Consider the commitment protocol π_COMH shown in Figure 4, first introduced by Halevi and Micali. It uses a combination of universal and cryptographic hashing: the former ensures statistical uniformity of the commitments, and the latter hides the committed message. The motivation for choosing this protocol for this task will become more apparent in the following sections, as we discuss the security and efficiency characteristics of the composition of π_QOT with π_COMH, henceforth referred to as the π_HOK (Hybrid Oblivious Key) protocol for OT. The existence of a reduction from OT to commitments, while proven within quantum cryptography through the π_QOT protocol, is an open problem in classical cryptography. The existence of commitment schemes such as π_COMH, which do not rely on asymmetric cryptography, provides a way to obtain OT in the quantum setting while circumventing the disadvantages of asymmetric cryptography.

**Protocol π_COMH**

**Parameters:** Message length ñ and security parameter k. A universal hash family F = {f : {0, 1}^ℓ → {0, 1}^k}, with ℓ = 4k + 2ñ + 4. A collision-resistant hash function H.
**Parties:** The verifier Alice and the committer Bob.
**Inputs:** Bob gets a string m̃ of length ñ.

_(Commit phase)_
1. Bob samples r ∈ {0, 1}^ℓ, computes y = H(r), and chooses f ∈ F such that f(r) = m̃. Then, he sends (f, y) to Alice.

_(Open phase)_
2. Bob sends r to Alice.
3. Alice checks that H(r) = y. If this test fails, she aborts the protocol. Otherwise, she outputs f(r).

**Figure 4. Commitment protocol based on collision-resistant hash functions.**
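A deliberately simplified commitment in the spirit of π_COMH is sketched below: SHA-256 plays the role of H and, for brevity, the universal hash f is replaced by hashing r together with the message, so the statistical-hiding property of the real scheme is lost. It only illustrates the commit/open interface that π_QOT consumes.

```python
# Simplified hash-based commitment sketch (NOT the Halevi-Micali scheme):
# binding rests on collision resistance of SHA-256; the universal-hash step
# that gives statistical hiding in pi_COMH is omitted for brevity.
import hashlib, os

def commit(m: bytes):
    r = os.urandom(32)                               # random opening value
    y = hashlib.sha256(r + m).digest()               # commitment sent to Alice
    return y, (r, m)                                 # Bob keeps the opening

def open_commitment(y: bytes, r: bytes, m: bytes) -> bool:
    return hashlib.sha256(r + m).digest() == y

y, (r, m) = commit(b"measured (s_i, a_i)")
assert open_commitment(y, r, m)
```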
**3. Results and Discussion**

_3.1. Security_

In this section, we analyse the security of the proposed composition of protocols. The main result is encapsulated in the following theorem.

**Theorem 1.** The protocol π_HOK is secure as long as the hash function is collision resistant. Moreover, if the hash function models a Random Oracle, a simple modification of the protocol makes it universally composable secure.

**Proof.** The security proof relies on several well-established results in cryptography. First, notice that the π_HOK protocol is closely related to the standard quantum OT protocol π_QOT, which was proven statistically secure in Yao's original paper [33] and later universally composable in the quantum composability framework [34]. The difference between the two is that π_QOT uses ideal commitments, as opposed to the hash-based commitments in π_HOK. We start by showing that the protocol π_HOK is standalone secure. For this case, we only need to replace the ideal commitment of π_QOT with a standalone secure commitment protocol, such as that of Halevi and Micali [29], depicted as π_COMH. Since the latter is secure whenever the hash function is collision resistant, we conclude that π_HOK is secure whenever the hash function is collision resistant.

Finally, we provide the simple modification of π_HOK that makes it universally composable secure when the hash function models a Random Oracle. The modification is only required to improve upon the commitment protocol, as Yao's protocol with ideal commitments is universally composable [34]. Indeed, we need to consider a universally composable commitment scheme instead of π_COMH. This is achieved by the HMQ construction [35] which, given a standalone secure commitment scheme and a Random Oracle, outputs a universally composable commitment scheme that is perfectly hiding and computationally binding, that is, secure as long as collisions cannot be found. So we just need to replace π_COMH with the output of the HMQ construction when π_COMH and H are given as inputs and H models a Random Oracle.

Regarding the above theorem, we note that, for composable security, the HMQ construction mentioned in the proof formally requires access to a random oracle, which is an abstract object used for studying security and cannot be realized in the real world. Hence, we leave it as an additional security property, as hash functions are traditionally modelled as random oracles. Standalone security of the π_HOK protocol does not require the hash function to be a random oracle. The use of collision-resistant hash functions is acceptable in the quantum setting, as it has been shown that there exist functions for which a quantum computer has no significant advantage over a classical one in finding collisions [36].

One point to note about the security of π_OKD is that it is not susceptible to intercept-now-decrypt-later style attacks. Bob can attempt an attack in which he does not properly measure the qubits sent by Alice in step 2, and instead waits until Alice reveals the test subset in step 4 to measure honestly only those qubits. For that, he must be able to control the openings of the commitment scheme so that the commitments open to the values of his late measurement outcomes for those qubits. To do this, he must be able to find collisions for H before step 5. This means that attacking the protocol by finding collisions of the hash function is only effective if done in real time, that is, between steps 3 and 5 of the protocol. This is in contrast to asymmetric-cryptography-based OT, in which Bob can obtain both bits if he manages to overcome the computational security at a later stage.

Finally, we point out that the OT extension algorithms used during SMC often rely only on collision-resistant hash functions anyway [37]. If those protocols are used to extend the base OTs produced by π_HOK, we can effectively speed up the OT rates without introducing any additional computational complexity assumption.

_3.2. Efficiency_

Complexity-wise, the main problem with public-key based OT protocols is that they require public/private key generation, encryption, and decryption per transfer. In the case of RSA and ElGamal based algorithms, this has complexity O(n^2.58) (where N = 2^n is the size of the group), using Karatsuba multiplication and Barrett reduction for Euclidean division [38]. Post-quantum protocols are still being optimized, but recent results show RLWE key generation and encryption in time O(n² log n) [39].

To study the time complexity of the π_HOK protocol, consider first the complexity of π_COMH. It requires two calls to H and one call to the universal hash family F, ñ bit comparisons (if using the technique proposed in [29] to find the required f), and one additional evaluation of f. Cryptographic hash functions are designed so that their time complexity is linear in the size of the input, which in this case is ℓ = 4k + 2ñ + 4. To compute the universal hash, the construction in [29] requires ñk binary multiplications.
Thus, the running time of π_COMH is linear in the security parameter k. On the other hand, π_QOT has two security parameters: n, associated with the size of the keys used to encrypt the transferred bits, and m, associated with the security of the measurement test performed by Alice. The protocol requires n + m qubit preparations and measurements, n + m calls to the commitment scheme, and n bit comparisons. This leads to an overall time complexity of O(k(n + m)) for the π_HOK protocol, which is linear in all of its security parameters.

In realistic scenarios, however, error correction and privacy amplification must be implemented during π_OK→OT. For the former, LDPC codes [40] or the Cascade algorithm [41] can be used, and the latter can be done with universal hashing. For a given channel error parameter, these algorithms have time complexity linear in the size of the input string, which in our case is n. Hence, π_HOK remains efficient when channel losses and preparation/measurement errors are taken into account.

One of the major bottlenecks in the GMW protocol for SMC is the number of instances of OT required (it is worth noting that GMW uses 1-out-of-4 OT, which can be obtained efficiently from two instances of the 1-out-of-2 OT presented here [42]). A single Advanced Encryption Standard (AES) circuit can be evaluated with on the order of 10⁶ instances of OT. However, with current solutions, i.e., with computational implementations of OT based on asymmetric classical cryptography, one can
Conclusions** Motivated by the usefulness of SMC as a privacy-protecting data mining tool, and identifying its OT cost as its main implementation challenge, we have proposed a potential solution for practical implementation of OT as a subroutine SMC. The scheme consists on pre-sharing an oblivious key pair and then using it to compute fast OT during the execution of the SMC protocol. We call this approach hybrid because it uses resources traditionally associated with classical symmetric cryptography (cryptographic hash functions), as well as quantum state communication and measurements on conjugate observables, resources associated with quantum cryptography. The scheme is secure as far as the chosen hash function is secure against quantum attacks. In addition, we showed that the overall time complexity of πHOK is linear on all its security parameters, as opposed to the public-key based alternatives, whose time complexities are at least quadratic on their respective parameters. Finally, by comparing the state of current technology with the protocol requirements, we concluded that it has the potential to surpass current asymmetric cryptography based techniques. It was also noted that current experimental implementations of standard discrete-variable QKD can be adapted to perform πHOK. The same post-processing techniques of error correction and privacy amplification apply, however, fast hashing subroutines should be added for commitments during the parameter estimation step. Future work includes designing an experimental setup, meeting the implementation challenges, and experimentally testing the speed, correctness, and security of the resulting oblivious key pairs. This includes computing oblivious key rate bounds for realistic scenarios and comparing them with current alternative technologies. Real world key rate comparisons can help us understand better the position of quantum technologies in the modern cryptographic landscape. Regarding the use of quantum cryptography during the commitment phase; because of the impossibility theorem for unconditionally secure commitments in the quantum setting [17], one must always work with an additional assumption on top of needing quantum resources. The noisy storage model provides an example in which the commitments are achieved by noisy quantum memories [21,22,49]. The drawback of this particular assumption is the fact that advances in quantum storage technology work against the performance of the protocol, which is not a desired feature. The added cost of using quantum communication is a disadvantage. So far, to the knowledge of the ----- _Appl. Sci. 2020, 10, 4080_ 9 of 11 authors, there are no additional practical quantum bit commitment protocols that provide advantages in security or efficiency compared to classical ones once additional assumptions (such as random oracles, common reference strings, computational hardness, etc.,) are introduced. Nevertheless, we are optimistic that such protocols can be found in the future, perhaps by clever design, or by considering a different a kind of assumption outside of the standard ones. **Author Contributions: Conceptualization, P.M. and A.N.P.; methodology, M.F.R., N.A.S. and N.J.M.; validation,** M.L., N.P., and A.S.; formal analysis, M.L., P.Y., N.P., A.S. and P.M.; investigation, M.L., M.F.R., N.A.S. and N.J.M.; writing—original draft preparation, M.L., N.P., P.Y., M.F.R., N.A.S., N.J.M. and A.N.P.; writing—review and editing, M.L. and P.M.; visualization, M.F.R., N.A.S., N.J.M. 
**Author Contributions:** Conceptualization, P.M. and A.N.P.; methodology, M.F.R., N.A.S. and N.J.M.; validation, M.L., N.P. and A.S.; formal analysis, M.L., P.Y., N.P., A.S. and P.M.; investigation, M.L., M.F.R., N.A.S. and N.J.M.; writing—original draft preparation, M.L., N.P., P.Y., M.F.R., N.A.S., N.J.M. and A.N.P.; writing—review and editing, M.L. and P.M.; visualization, M.F.R., N.A.S., N.J.M. and A.N.P.; supervision, P.M. and A.N.P.; project administration, P.M., A.S. and A.N.P.; funding acquisition, P.M., A.S. and A.N.P. All authors have read and agreed to the published version of the manuscript.

**Funding:** This work is supported by the Fundação para a Ciência e a Tecnologia (FCT) through national funds, by FEDER, COMPETE 2020, and by the Regional Operational Program of Lisbon, under UIDB/50008/2020, UIDP/50008/2020, UID/CEC/00408/2013, POCI-01-0145-FEDER-031826, POCI-01-0247-FEDER-039728, PTDC/CCI-CIF/29877/2017, PD/BD/114334/2016, PD/BD/113648/2015, and CEECIND/04594/2017/CP1393/CT0006A.

**Conflicts of Interest:** The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

**References**

1. Lindell, Y.; Pinkas, B. Secure Multiparty Computation for Privacy-Preserving Data Mining. J. Priv. Confid. 2009, 59–98.
2. Laud, P.; Kamm, L. Applications of Secure Multiparty Computation; IOS Press: Amsterdam, The Netherlands, 2015; Volume 13.
3. Asharov, G.; Lindell, Y.; Schneider, T.; Zohner, M. More Efficient Oblivious Transfer Extensions. J. Cryptol. 2017, 30, 805–858.
4. Bernstein, D.J.; Lange, T. Post-quantum cryptography. Nature 2017, 549, 188.
5. Yao, A.C.C. How to generate and exchange secrets. In Proceedings of the 27th Annual Symposium on Foundations of Computer Science (SFCS 1986), Toronto, ON, Canada, 27–29 October 1986; pp. 162–167.
6. Goldreich, O.; Micali, S.; Wigderson, A. How to Play ANY Mental Game. In Proceedings of the Nineteenth Annual ACM Symposium on Theory of Computing, New York, NY, USA, 25–27 May 1987; ACM: New York, NY, USA, 1987; pp. 218–229.
7. Schneider, T.; Zohner, M. GMW vs. Yao? Efficient Secure Two-Party Computation with Low Depth Circuits. In Financial Cryptography and Data Security, Proceedings of the 17th International Conference, FC 2013, Okinawa, Japan, 1–5 April 2013; Revised Selected Papers; Sadeghi, A.R., Ed.; Springer: Berlin/Heidelberg, Germany, 2013; pp. 275–292.
8. Beaver, D.; Micali, S.; Rogaway, P. The round complexity of secure protocols. In Proceedings of the Twenty-Second Annual ACM Symposium on Theory of Computing, Baltimore, MD, USA, 14–16 May 1990; pp. 503–513.
9. Kilian, J. Founding Cryptography on Oblivious Transfer. In Proceedings of the Twentieth Annual ACM Symposium on Theory of Computing, Chicago, IL, USA, 2–4 May 1988; ACM: New York, NY, USA, 1988; pp. 20–31.
10. Harnik, D.; Ishai, Y.; Kushilevitz, E. How many oblivious transfers are needed for secure multiparty computation? In Proceedings of the Annual International Cryptology Conference, Santa Barbara, CA, USA, 19–23 August 2007; pp. 284–302.
11. Rabin, M.O. How To Exchange Secrets; Technical Report TR-81; Aiken Computation Laboratory, Harvard University: Cambridge, MA, USA, 1981.
12. Goldreich, O. Foundations of Cryptography, Volume I: Basic Techniques; Cambridge University Press: Cambridge, UK, 2001.
13. Paar, C.; Pelzl, J. Understanding Cryptography; Springer: Berlin/Heidelberg, Germany, 2010.
14. Broadbent, A.; Schaffner, C. Quantum cryptography beyond quantum key distribution. Des. Codes Cryptogr. 2016, 78, 351–382.
15. Shenoy-Hejamadi, A.; Pathak, A.; Radhakrishna, S. Quantum cryptography: Key distribution and beyond. Quanta 2017, 6, 1–47.
16. Lo, H.K.; Chau, H.F. Is Quantum Bit Commitment Really Possible? Phys. Rev. Lett. 1997, 78, 3410–3413.
17. Mayers, D. Unconditionally Secure Quantum Bit Commitment is Impossible. Phys. Rev. Lett. 1997, 78, 3414–3417.
18. Wehner, S.; Schaffner, C.; Terhal, B.M. Cryptography from Noisy Storage. Phys. Rev. Lett. 2008, 100, 220502.
19. Wehner, S.; Curty, M.; Schaffner, C.; Lo, H.K. Implementation of two-party protocols in the noisy-storage model. Phys. Rev. A 2010, 81, 052336.
20. Konig, R.; Wehner, S.; Wullschleger, J. Unconditional Security From Noisy Quantum Storage. IEEE Trans. Inf. Theory 2012, 58, 1962–1984.
21. Loura, R.; Almeida, Á.J.; André, P.; Pinto, A.; Mateus, P.; Paunković, N. Noise and measurement errors in a practical two-state quantum bit commitment protocol. Phys. Rev. A 2014, 89, 052336.
22. Almeida, Á.J.; Stojanovic, A.D.; Paunković, N.; Loura, R.; Muga, N.J.; Silva, N.A.; Mateus, P.; André, P.S.; Pinto, A.N. Implementation of a two-state quantum bit commitment protocol in optical fibers. J. Opt. 2015, 18, 015202.
23. Erven, C.; Ng, N.; Gigov, N.; Laflamme, R.; Wehner, S.; Weihs, G. An experimental implementation of oblivious transfer in the noisy storage model. Nat. Commun. 2014, 5, 3418.
24. Furrer, F.; Gehring, T.; Schaffner, C.; Pacher, C.; Schnabel, R.; Wehner, S. Continuous-variable protocol for oblivious transfer in the noisy-storage model. Nat. Commun. 2018, 9, 1450.
25. Ng, N.H.Y.; Joshi, S.K.; Chen Ming, C.; Kurtsiefer, C.; Wehner, S. Experimental implementation of bit commitment in the noisy-storage model. Nat. Commun. 2012, 3, 1326.
26. Lunghi, T.; Kaniewski, J.; Bussières, F.; Houlmann, R.; Tomamichel, M.; Wehner, S.; Zbinden, H. Practical Relativistic Bit Commitment. Phys. Rev. Lett. 2015, 115, 030502.
27. Verbanis, E.; Martin, A.; Houlmann, R.; Boso, G.; Bussières, F.; Zbinden, H. 24-Hour Relativistic Bit Commitment. Phys. Rev. Lett. 2016, 117, 140506.
28. Pitalúa-García, D.; Kerenidis, I. Practical and unconditionally secure spacetime-constrained oblivious transfer. Phys. Rev. A 2018, 98, 032327.
29. Halevi, S.; Micali, S. Practical and Provably-Secure Commitment Schemes from Collision-Free Hashing. In Advances in Cryptology—CRYPTO '96, Proceedings of the 16th Annual International Cryptology Conference, Santa Barbara, CA, USA, 18–22 August 1996; Koblitz, N., Ed.; Springer: Berlin/Heidelberg, Germany, 1996; pp. 201–215.
30. Pinto, A.N.; Ramos, M.F.; Silva, N.A.; Muga, N.J. Generation and Distribution of Oblivious Keys through Quantum Communications. In Proceedings of the 2018 20th International Conference on Transparent Optical Networks (ICTON), Bucharest, Romania, 1–5 July 2018; pp. 1–3.
31. Ramos, M.F.; Silva, N.A.; Muga, N.J.; Pinto, A.N. Reversal operator to compensate polarization random drifts in quantum communications. Opt. Express 2020, 28, 5035–5049.
32. Lindell, Y.; Pinkas, B. An efficient protocol for secure two-party computation in the presence of malicious adversaries. In Proceedings of the Annual International Conference on the Theory and Applications of Cryptographic Techniques, Barcelona, Spain, 20–24 May 2007; pp. 52–78.
33. Yao, A.C.C. Security of Quantum Protocols Against Coherent Measurements. In Proceedings of the Twenty-Seventh Annual ACM Symposium on Theory of Computing, Las Vegas, NV, USA, 29 May–1 June 1995; ACM: New York, NY, USA, 1995; pp. 67–75.
34. Unruh, D. Universally composable quantum multi-party computation. In Proceedings of the Annual International Conference on the Theory and Applications of Cryptographic Techniques, French Riviera, France, 30 May–3 June 2010; pp. 486–505.
35. Hofheinz, D.; Müller-Quade, J. Universally Composable Commitments Using Random Oracles. In Theory of Cryptography; Naor, M., Ed.; Springer: Berlin/Heidelberg, Germany, 2004; pp. 58–76.
36. Aaronson, S.; Shi, Y. Quantum Lower Bounds for the Collision and the Element Distinctness Problems. J. ACM 2004, 51, 595–605.
37. Asharov, G.; Lindell, Y.; Schneider, T.; Zohner, M. More Efficient Oblivious Transfer and Extensions for Faster Secure Computation. In Proceedings of the 2013 ACM SIGSAC Conference on Computer & Communications Security, Berlin, Germany, 4–8 November 2013; ACM: New York, NY, USA, 2013; pp. 535–548.
38. Menezes, A.J.; Katz, J.; Van Oorschot, P.C.; Vanstone, S.A. Handbook of Applied Cryptography; CRC Press: Boca Raton, FL, USA, 1996.
39. Ding, J.; Xie, X.; Lin, X. A Simple Provably Secure Key Exchange Scheme Based on the Learning with Errors Problem. IACR Cryptol. ePrint Arch. 2012, 2012, 688.
40. Martinez-Mateo, J.; Elkouss, D.; Martin, V. Key reconciliation for high performance quantum key distribution. Sci. Rep. 2013, 3, 1576.
41. Brassard, G.; Salvail, L. Secret-key reconciliation by public discussion. In Workshop on the Theory and Application of Cryptographic Techniques; Springer: Berlin/Heidelberg, Germany, 1993; pp. 410–423.
42. Naor, M.; Pinkas, B. Computationally secure oblivious transfer. J. Cryptol. 2005, 18, 1–35.
43. Chou, T.; Orlandi, C. The simplest protocol for oblivious transfer. In Proceedings of the International Conference on Cryptology and Information Security in Latin America, Guadalajara, Mexico, 23–26 August 2015; pp. 40–58.
44. Comandar, L.; Fröhlich, B.; Lucamarini, M.; Patel, K.; Sharpe, A.; Dynes, J.; Yuan, Z.; Penty, R.; Shields, A. Room temperature single-photon detectors for high bit rate quantum key distribution. Appl. Phys. Lett. 2014, 104, 021101.
45. Islam, N.T.; Lim, C.C.W.; Cahall, C.; Kim, J.; Gauthier, D.J. Provably secure and high-rate quantum key distribution with time-bin qudits. Sci. Adv. 2017, 3, e1701491.
46. Ko, H.; Choi, B.S.; Choe, J.S.; Kim, K.J.; Kim, J.H.; Youn, C.J. High-speed and high-performance polarization-based quantum key distribution system without side channel effects caused by multiple lasers. Photonics Res. 2018, 6, 214–219.
47. Wang, T.; Huang, P.; Zhou, Y.; Liu, W.; Ma, H.; Wang, S.; Zeng, G. High key rate continuous-variable quantum key distribution with a real local oscillator. Opt. Express 2018, 26, 2794–2806.
48. Pirandola, S.; Andersen, U.; Banchi, L.; Berta, M.; Bunandar, D.; Colbeck, R.; Englund, D.; Gehring, T.; Lupo, C.; Ottaviani, C.; et al. Advances in Quantum Cryptography. arXiv 2019, arXiv:1906.01645.
49. Loura, R.; Arsenović, D.; Paunković, N.; Popović, D.B.; Prvanović, S. Security of two-state and four-state practical quantum bit-commitment protocols. Phys. Rev. A 2016, 94, 062335.

© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
{ "disclaimer": "Notice: Paper or abstract available at https://arxiv.org/abs/1909.11701, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "CCBY", "status": "GOLD", "url": "https://www.mdpi.com/2076-3417/10/12/4080/pdf" }
2019
[ "JournalArticle" ]
true
2019-09-25T00:00:00
[ { "paperId": "0e9c40922a9f9bad45bf71afb9d7dbc66ef48565", "title": "Reversal operator to compensate polarization random drifts in quantum communications." }, { "paperId": "8ceda6f05d27ae88d8272f228bed78b4f0b3af13", "title": "Advances in quantum cryptography" }, { "paperId": "83721103a6fd5535e943b1b575cf70862c2322a8", "title": "Handbook of Applied Cryptography" }, { "paperId": "e3151b039935d2fba4933a1964514f2debb639f5", "title": "Practical and unconditionally secure spacetime-constrained oblivious transfer" }, { "paperId": "db827e86f965bd12cb08e515c8e6940063102184", "title": "Understanding Cryptography" }, { "paperId": "c98b82e5f2e1385e4f4ba9f9bdf23c09f60979fd", "title": "Generation and Distribution of Oblivious Keys through Quantum Communications" }, { "paperId": "f2a2c3cee9d1e7ebc4835577938e0ba638eb7c04", "title": "High key rate continuous-variable quantum key distribution with a real local oscillator." }, { "paperId": "e8815a601afc7d7a6246f45d98cc17214d79cc3b", "title": "High-speed and high-performance polarization-based quantum key distribution system without side channel effects caused by multiple lasers" }, { "paperId": "7faf5e22615c238cae2cf77b125d0e6ca6a3945e", "title": "Provably secure and high-rate quantum key distribution with time-bin qudits" }, { "paperId": "517943eea610e226c4aef8192cad506410ef1d9b", "title": "Physical implementation of oblivious transfer using optical correlated randomness" }, { "paperId": "25b4a7b04cc7c1ff46f56d4f29a2dcbd7f444f8f", "title": "Continuous-variable protocol for oblivious transfer in the noisy-storage model" }, { "paperId": "797f33b2208de43926407139d5bdc530f1aa3a09", "title": "More Efficient Oblivious Transfer Extensions" }, { "paperId": "0feae0e89270536cdb45138ee0993aefe53e8905", "title": "Quantum Cryptography: Key Distribution and Beyond" }, { "paperId": "cf0719b8d8c3c75c4212aa986600348f1fe0a3f2", "title": "Securing the Internet of Things in a Quantum World" }, { "paperId": "168ec39c9eb2b4440bdf2736ee5101fafdc811c2", "title": "Quantum computers ready to leap out of the lab in 2017" }, { "paperId": "5256995df636cecb7dee690e646ab2398cfbee82", "title": "Security of two-state and four-state practical quantum bit-commitment protocols" }, { "paperId": "5210b5a0e98a7138710e2d7d88b641f6997a4dde", "title": "24-Hour Relativistic Bit Commitment." }, { "paperId": "18f0f29bacb2441dd430271fc6ceaacca8ed68c4", "title": "Computationally Binding Quantum Commitments" }, { "paperId": "98fad0706f87c3a07d69aab77f995b4dc1dc8bd0", "title": "Quantum cryptography beyond quantum key distribution" }, { "paperId": "c52930652ef773a82572e8e7c88e4c6f384f16d8", "title": "The Simplest Protocol for Oblivious Transfer" }, { "paperId": "737e72cfb698d1aa756b044bfa64c5001b90064b", "title": "Practical Relativistic Bit Commitment." 
}, { "paperId": "ab34e3e90de34024a6727a0349d77a70118e8b2c", "title": "Noise and measurement errors in a practical two-state quantum bit commitment protocol" }, { "paperId": "7470baf6080b7a835edf92f8467637ab3873c8a9", "title": "Room temperature single-photon detectors for high bit rate quantum key distribution" }, { "paperId": "0166c8b5c6445043b94fc7b62d145d0c3c8b6483", "title": "More efficient oblivious transfer and extensions for faster secure computation" }, { "paperId": "acb5ace0bbb9741bcc757cb09ec4faf6572823b8", "title": "An experimental implementation of oblivious transfer in the noisy storage model" }, { "paperId": "9e003697fb9cc15aee9b97a8f42b88dbdf06eef3", "title": "Key Reconciliation for High Performance Quantum Key Distribution" }, { "paperId": "9fa0ee74353fd008f2fbb1f6d724437678cbf9dd", "title": "GMW vs. Yao? Efficient Secure Two-Party Computation with Low Depth Circuits" }, { "paperId": "206112b21f899101d090f78590f343d30b97fc33", "title": "Experimental implementation of bit commitment in the noisy-storage model" }, { "paperId": "e4dc30ba02cd3c29735348ca02c5d49fc1df9eb7", "title": "Implementation of two-party protocols in the noisy-storage model" }, { "paperId": "d531e2c1b879538d859a6d6a82d43e30e9d41c8e", "title": "Universally Composable Quantum Multi-party Computation" }, { "paperId": "574d0df466d7e1244029365a44aa7b579cfd7fee", "title": "Unconditional Security From Noisy Quantum Storage" }, { "paperId": "a6f644f6e739fa73ada11dc4c85b812b31f63d53", "title": "Secure Multiparty Computation for Privacy-Preserving Data Mining" }, { "paperId": "99291ce0b97a31c786560241fea62604332afbf5", "title": "Post-quantum cryptography" }, { "paperId": "253e994f744e9a0b9c4a305ebb246891ec0665e2", "title": "Cryptography from noisy storage." }, { "paperId": "84ab7f80160291e5f1cda11d519a74864a8c6eda", "title": "How Many Oblivious Transfers Are Needed for Secure Multiparty Computation?" }, { "paperId": "d4a8ecf10852322d5a162ba1a58687e9f5c16a19", "title": "An Efficient Protocol for Secure Two-Party Computation in the Presence of Malicious Adversaries" }, { "paperId": "a93556bf7fee012764e36f70495cbbc43bdf4875", "title": "Universally Composable Commitments Using Random Oracles" }, { "paperId": "4beae9ab6774cac4190ba8ceb998370317c65d09", "title": "Quantum lower bounds for the collision and the element distinctness problems" }, { "paperId": "11032f14bf3fdb71476518922d3af4e6cd8b4af8", "title": "Practical and Provably-Secure Commitment Schemes from Collision-Free Hashing" }, { "paperId": "9543136e66bf75eebc31adf98ad1a0ff99844fdf", "title": "Unconditionally secure quantum bit commitment is impossible" }, { "paperId": "ba1c8fd36590fca91a67945b02d907da94098586", "title": "Is Quantum Bit Commitment Really Possible?" 
}, { "paperId": "86bce9af17bae2cca7427b1f813cd046d3a7eb35", "title": "Security of quantum protocols against coherent measurements" }, { "paperId": "cd81f9ed8a3adfecf6a69fdf5e1dbf92d7844dcc", "title": "Secret-Key Reconciliation by Public Discussion" }, { "paperId": "e5302edfa2fa077525008333fcb56d9c2f3451ef", "title": "The round complexity of secure protocols" }, { "paperId": "df2473061df11b76cebb7400c50246d0b354390c", "title": "How to play ANY mental game" }, { "paperId": "29b0f06d18949fc7f3a38bb0022571aa15725dc7", "title": "How to generate and exchange secrets" }, { "paperId": "aab2764de08142e8c256f6014d0cd1ff600517ad", "title": "Founding Cryptography on Oblivious Transfer" }, { "paperId": "dc9701ffb5b2f1e345bf1d59065f9d2ee1afbedd", "title": "Implementation of a two-state quantum bit commitment protocol in optical fibers" }, { "paperId": "d9082271933927a6b574b22e8e0bb8ad1b4903e2", "title": "Applications of secure multiparty computation" }, { "paperId": "f104773bee7d9ac9e28f4591c8521008c48540e2", "title": "A Simple Provably Secure Key Exchange Scheme Based on the Learning with Errors Problem" }, { "paperId": "de4b461d1f1cc7f7044c92b49c586a2463b28a8e", "title": "Computationally Secure Oblivious Transfer" }, { "paperId": "e9fb87613db9138acc19682cbf109c3c37dad02b", "title": "Founding crytpography on oblivious transfer" } ]
12,187
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Engineering", "source": "s2-fos-model" }, { "category": "Physics", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/0219fc1452e1fe4236f49bfc7838f03a19ec7fba
[ "Computer Science" ]
0.903706
5G-Compatible IF-Over-Fiber Transmission Using a Low-Cost SFP-Class Transceiver
0219fc1452e1fe4236f49bfc7838f03a19ec7fba
IEEE Access
[ { "authorId": "104319582", "name": "M. Fernandes" }, { "authorId": "1720762567", "name": "Bruno T. Brandão" }, { "authorId": "1404994638", "name": "A. Lorences-Riesgo" }, { "authorId": "2105730276", "name": "P. Monteiro" }, { "authorId": "1394516775", "name": "F. Guiomar" } ]
{ "alternate_issns": null, "alternate_names": null, "alternate_urls": [ "http://ieeexplore.ieee.org/servlet/opac?punumber=6287639" ], "id": "2633f5b2-c15c-49fe-80f5-07523e770c26", "issn": "2169-3536", "name": "IEEE Access", "type": "journal", "url": "http://www.ieee.org/publications_standards/publications/ieee_access.html" }
With the rise of 5G and beyond, the ever-increasing data-rates demanded by mobile access are severely challenging the capacity of optical fronthaul networks. Despite its high reliability and ease of deployment, legacy digital radio-over-fiber (RoF) technologies face an upcoming bandwidth bottleneck in the short term. This has motivated a renewed interest in the development of analog RoF alternatives, owing to their high spectral efficiency. However, unlike its digital counterpart, analog RoF transmission requires a highly linear transceiver to guarantee signal fidelity. Typical solutions exploited in recent research works tend to adopt the use of bulky benchtop components, such as directly modulated lasers (DML) and photodiodes. Although this provides a convenient and quick path for proof-of-concept demonstrations, there is still a considerable gap between lab developments and commercial deployment. Most importantly, a key question arises: can analog-RoF transceivers meet the 5G requirements while being competitive in terms of cost and footprint? Following this challenge, in this work we exploit the use of a low-cost commercial off-the-shelf (COTS) small form-factor pluggable (SFP) transceiver, originally designed for digital transmission at 1 Gbps, which is properly adapted towards analog RoF transmission. Bypassing the digital electronics circuitry of the SFP, while keeping the original transmitter optical sub-assembly (TOSA) and receiver optical sub-assembly (ROSA), we demonstrate that high-performance 5G-compatible transmission can be performed by reusing the key built-in components of current low-cost SFP-class transceivers. Particularly, we demonstrate error vector magnitude (EVM) performances compatible with 5G 64QAM transmission both at 100MHz and 400MHz. Furthermore, employing a memory polynomial model for digital pre-distortion of the transmitted signal, we achieve 256QAM-compatible performance at 100MHz bandwidth, after 20 km fronthaul transmission.
Received February 1, 2022, accepted February 23, 2022, date of publication February 25, 2022, date of current version March 9, 2022. _Digital Object Identifier 10.1109/ACCESS.2022.3154784_ # 5G-Compatible IF-Over-Fiber Transmission Using a Low-Cost SFP-Class Transceiver MARCO A. FERNANDES 1,2, (Member, IEEE), BRUNO T. BRANDÃO 1,2, (Member, IEEE), ABEL LORENCES-RIESGO 1, PAULO P. MONTEIRO 1,2, (Senior Member, IEEE), AND FERNANDO P. GUIOMAR 1,2, (Member, IEEE) 1Instituto de Telecomunicações, 3810-193 Aveiro, Portugal 2Department of Electronics, Telecommunications, and Informatics (DETI), University of Aveiro, 3810-193 Aveiro, Portugal Corresponding author: Marco A. Fernandes (marcofernandes@av.it.pt) This work was supported in part by the European Regional Development Fund (FEDER), through the Regional Operational Programme of Centre (CENTRO 2020) of the Portugal 2020 framework; and in part by the Financial Support National Public [Fundação para a Ciência e Tecnologia (FCT)] through Projects Optical Radio Convergence Infrastructure for Communications and Power Delivering (ORCIP) (CENTRO-01-0145-FEDER-022141), Utilização de Tecnologias de Reflectometria no melhoramento do futuro Internet das Coisas e Sistemas Ciber-Físicos (RETIOT) (Programa Operacional Competitividade e Internacionalização (POCI)-01-0145-FEDER-016432), LANDmaRk (POCI-01-0145-FEDER-031527), and OptWire (PTDC/EEI-TEL/2697/2021). The work of Marco A. Fernandes and Bruno T. Brandão was supported by the Ph.D. fellowships from FCT under Grant 2020.07521.BD and Grant 2021.05867.BD. The work of Fernando P. Guiomar was supported by the ''la Caixa'' Foundation (ID 100010434), under Grant LCF/BQ/PR20/11770015.

**ABSTRACT** With the rise of 5G and beyond, the ever-increasing data rates demanded by mobile access are severely challenging the capacity of optical fronthaul networks. Despite its high reliability and ease of deployment, legacy digital radio-over-fiber (RoF) technologies face an upcoming bandwidth bottleneck in the short term. This has motivated a renewed interest in the development of analog RoF alternatives, owing to their high spectral efficiency. However, unlike its digital counterpart, analog RoF transmission requires a highly linear transceiver to guarantee signal fidelity. Typical solutions exploited in recent research works tend to adopt the use of bulky benchtop components, such as directly modulated lasers (DML) and photodiodes. Although this provides a convenient and quick path for proof-of-concept demonstrations, there is still a considerable gap between lab developments and commercial deployment. Most importantly, a key question arises: can analog-RoF transceivers meet the 5G requirements while being competitive in terms of cost and footprint? Following this challenge, in this work we exploit the use of a low-cost commercial off-the-shelf (COTS) small form-factor pluggable (SFP) transceiver, originally designed for digital transmission at 1 Gbps, which is properly adapted towards analog RoF transmission. Bypassing the digital electronics circuitry of the SFP, while keeping the original transmitter optical sub-assembly (TOSA) and receiver optical sub-assembly (ROSA), we demonstrate that high-performance 5G-compatible transmission can be performed by reusing the key built-in components of current low-cost SFP-class transceivers. Particularly, we demonstrate error vector magnitude (EVM) performances compatible with 5G 64QAM transmission both at 100 MHz and 400 MHz.
Furthermore, employing a memory polynomial model for digital pre-distortion of the transmitted signal, we achieve 256QAM-compatible performance at 100 MHz bandwidth, after 20 km fronthaul transmission.

**INDEX TERMS** 5G, memory polynomial, analog radio-over-fiber.

**I. INTRODUCTION**

The imminent rise of 5G and beyond radio communications, together with the progressive adoption of the centralized radio-access network (C-RAN) architecture [1], is bringing new challenges for optical transceivers. Future transceivers will have to cope with very tight requirements in terms of bandwidth, latency, and reliability [2]. Digital fronthauling based on the common public radio interface (CPRI) [3] specification has been adopted as the de facto standard for 4G-LTE signals, with its most recent version (eCPRI) dividing functionalities between the centralized unit and the distributed unit, thereby achieving improved latency and capacity [17]. However, for next-generation RANs, this fronthaul architecture must be able to provide data rates beyond a hundred Gbps [4] due to the larger bandwidth of 5G signals and the use of massive multiple input multiple output (MIMO) systems.

**FIGURE 1.** Schematic of two RAN concepts: i) employing D-RoF transmission (above), and ii) another depicting an A-RoF architecture (below).

This, together with the tight latency requirements defined for 5G signals, has triggered research on whether an analog fronthaul can be used instead [5]. Whereas analog fronthaul lacks the resilience of its digital counterpart, it reduces the requirements on the transceiver bandwidth and also minimizes the fronthaul latency [1]. The performance of analog radio-over-fiber (A-RoF) transceivers is impaired by several effects, including nonlinearities in the transceiver [6] and fiber dispersion, whose penalty can be enhanced by the laser chirp [7]. The presence of such undesired effects is accentuated when using low-cost transceivers, which are required for these applications. To mitigate fiber dispersion, the use of an intermediate frequency has been proposed for the mm-wave bands [8]. Mitigation of nonlinearities can be performed using several digital pre-distortion (DPD) techniques, such as memory polynomials [9], look-up tables [10], or neural networks [11]. The complexity of these techniques should be taken into account, and therefore lower-complexity solutions such as look-up tables or memory polynomials are preferred.

Due to the widespread dissemination of digital communications, the main efforts from the industry rely on developing low-cost packaged digital transceivers, while most analog solutions remain based on costly discrete components, which require further driving/adaptation so that they can be used for research purposes. In this work, we address this scarcity of analog RoF solutions, exploiting a workaround to obtain low-cost analog transceivers through the adaptation of a commercial digital small form-factor pluggable (SFP) transceiver to support the analog transmission of 5G signals. We address key implementation issues, such as the impact of crosstalk between transmitter and receiver ports, and then we proceed with experimentally testing the adapted low-cost transceiver in an optical analog fronthaul. In order to maximize the transceiver performance, we propose a low-complexity DPD method based on a memory polynomial.
We demonstrate that transmission over 20 km single-mode fiber (SMF) does not impose any major impairment on the signal quality, but nonlinear compensation is required to enable reception of a 256QAM signal with an EVM below the established 3.5% limit [12]. In summary, the main novel contributions provided in this work can be enumerated as follows: i) proposal and demonstration of a simple adaptation procedure over the electronic driving circuit of a low-cost digital SFP transceiver, enabling its compatibility with analog RoF transmission, while reusing the original packaging and TOSA/ROSA components; ii) detailed characterization of the adapted analog SFP transceiver at the component level (S-parameters) and system level (EVM performance in different 5G transmission scenarios); iii) performance enhancement of the adapted analog SFP transceiver using a low-complexity DPD model based on a memory polynomial; iv) experimental demonstration of the performance compatibility of the proposed analog SFP transceiver for the transmission of 5G signals with 100 MHz and 400 MHz bandwidth with 64QAM-modulated subcarriers, extended to 256QAM-compatible operation (100 MHz) enabled by a memory polynomial DPD.

**II. ANALOG RADIO-OVER-FIBER FOR 5G ACCESS**

Figure 1 depicts the concept of a typical RAN employing D-RoF transmission, together with the alternative architecture based on analog transmission. The first difference consists in the shift of equipment complexity from the distributed unit (DU) to the centralized unit (CU). This is mainly due to the digital-to-analog converter (DAC) and analog-to-digital converter (ADC) that, in the D-RoF scenario, reside in the DU, performing A/D conversion in the uplink and D/A conversion in the downlink, and that are relocated to the CU in the A-RoF case, performing D/A conversion in the downlink and A/D conversion in the uplink. This results in DUs that are simpler, cheaper, and with lower power consumption.

**FIGURE 2.** Simple adaptation to convert a commercial digital SFP into an analog RoF transceiver. Red lines represent the shunt wires soldered into the original PCB. (a) Modifications in the transmitter side and (b) in the receiver side.

Other main drivers for the adoption of A-RoF systems are their low latency, ease of implementation, and high spectral efficiency. Since in the A-RoF scenario the radio signals are directly transmitted, problems such as bandwidth multiplication are avoided. In typical D-RoF scenarios, this problem cannot be avoided, resulting in enormous data rates for typical 5G scenarios. In this scenario, considering the employment of CPRI (Option 8 / Split E), the data rate can be calculated using the following expression:

data rate = M × Sr × N × 2(I/Q) × Cw × C,   (1)

where M is the number of antennas per sector, Sr is the sampling rate, N is the number of bits per sample, 2(I/Q) is a multiplication factor for in-phase (I) and quadrature-phase (Q) data, Cw is the factor of the CPRI control word, and C is a coding factor. From the analysis of this expression, it is easy to conclude that, for transmitting a given signal, the required fronthaul data rate will be significantly expanded. For instance, considering the case study of this paper, for the transmission of a 100 MHz signal, if we consider M = 1, Sr = 1.5 × 100 MHz, N = 15, Cw = 16/15 and C = 66/64, the resulting data rate for the D-RoF scenario will be roughly 2.5 Gb/s, as verified in the sketch below.
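As a quick numerical sanity check of equation (1), the following snippet reproduces the quoted figures. Note that matching the stated ≈2.5 Gb/s (and the 158 Gbps figure that follows) requires treating the N = 15 bits as already covering an I/Q sample pair, i.e., evaluating the expression without the explicit factor of 2; that interpretation is our assumption, not something the text spells out:

```python
def cpri_rate(M, Sr, N, Cw=16/15, C=66/64, iq_factor=1):
    """CPRI fronthaul data rate per equation (1), in bit/s.
    iq_factor=2 applies the explicit 2(I/Q) term; the paper's quoted
    numbers are reproduced with iq_factor=1 (N taken per I/Q pair)."""
    return M * Sr * N * iq_factor * Cw * C

# 100 MHz channel, M = 1 antenna: ~2.475 Gb/s ("roughly 2.5 Gb/s")
print(cpri_rate(M=1, Sr=1.5 * 100e6, N=15) / 1e9)
# 16 aggregated 400 MHz component carriers: ~158.4 Gb/s
print(16 * cpri_rate(M=1, Sr=1.5 * 400e6, N=15) / 1e9)
```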
Considering the maximum bandwidth specified for 5G, i.e., 16 aggregated component carriers (CCs), each with 400 MHz (a total aggregated bandwidth of 6.4 GHz), the required data rate given by expression (1) with the aforementioned parameters is 158 Gbps, which would require the use of two high-end 100GBASE-LR4 transceivers (or even ER4-class, if the fronthaul is longer than 10 km), thus imposing a high cost and power consumption at the E/O ends and also a rather inefficient use of the optical spectrum (4 wavelengths per transceiver). Instead, if A-RoF is considered, no bandwidth expansion is imposed, and the 6.4 GHz radio signal can be generated by an analog transceiver with a similar operating bandwidth, thereby reducing the cost of the electronic components and also reducing the spectral occupancy of the optical signal. A similar calculation shows that a 4-sector device with more than 100 antenna modules supporting 400 MHz baseband channels would roughly require 10 Tbps, which is equivalent to about 400 optical OOK-DSB-50GHz-grid channels [16]. These high-capacity examples clearly expose the critical upscaling issues that are associated with digital fronthauling, which is driving a renewed interest in the development of A-RoF solutions for 5G and beyond.

Another tight requirement for 5G networks is ultra-low-latency communication, where the latency can be reduced to 1 ms in urgent and specific scenarios. This is another advantage of A-RoF systems, which offer a significant latency gain when compared to their digital counterpart. All these advantages are increasing the interest of the scientific community in analog RANs, with multiple works highlighting these advantages in field trials and real 5G networks [13]–[15].

Despite all the aforementioned advantages of analog transmission, it is difficult to find a commercial off-the-shelf (COTS) analog optical transceiver. In the next section, we propose a simple procedure to take a COTS digital transceiver and convert it to perform analog transmission.

**III. SFP ADAPTATION FOR ANALOG TRANSMISSION**

The transceivers under test are COTS SFP transceivers designed for digital transmission at 1–10 Gb/s, which are nowadays ubiquitous in fiber-optic access networks, and whose cost is typically below 50 €. Let us start by unpacking the SFP transceiver and analyzing its key components. In Figure 3a, we can identify three key parts of the digital SFP transceiver: i) the transmitter optical sub-assembly (TOSA), which performs electrical-to-optical (E/O) conversion using a directly modulated laser; ii) the receiver optical sub-assembly (ROSA), which performs optical-to-electrical (O/E) conversion through an amplified photodiode; and iii) the printed circuit board (PCB) that is responsible for electrically driving the TOSA and ROSA components. Since these transceivers are designed for digital fiber communications, a set of adaptations is required to enable the transmission of analog optical signals. Note, however, that the key optical transmission components, i.e., the TOSA and ROSA parts of the transceiver, are fundamentally responsible for the E/O and O/E conversion regardless of the properties of the transmitted/received signals, i.e., they are transparent to the type of transmission, and therefore can be kept in their original form, thus benefiting from the low cost and small form-factor integration of these components. The main adaptations to enable analog transmission with the SFP transceiver are then required at the RF driving level in the PCB.
**FIGURE 3.** Final prototype of the converted analog RoF transceiver. (a) Open package showing the simple shunt modifications applied to the original SFP board and (b) analog RoF transceiver enclosed in the original SFP package.

_A. ADAPTING THE ORIGINAL SFP ELECTRONICS FOR ANALOG TRANSMISSION_

In order to preserve the original form factor and pin-out of the standard SFP transceiver, we reuse the original digital board and perform a direct bypass of the digital electronics. Figure 2 shows a functional view of the modifications performed. As evidenced in Figure 2, the modifications simply consist in bypassing the digital part of the board (buffering, equalization, amplification, DC offset cancellation, and amplitude limitation) and directly wiring the input data pins to the output ones. The final physical layout of the modified transceiver can be observed in Figure 3, which evidences that these small modifications make it possible to obtain a low-cost analog RoF transceiver while keeping it in the original SFP package. Also note that, in this work, we have used an SFP evaluation board to provide access to the RF ports via SMA connectors.

**IV. CHARACTERIZATION OF THE ANALOG SFP**

Having successfully converted a digital transceiver into an analog one, we proceeded with the characterization of these transceivers. To this intent, S-parameter measurements were first performed to characterize the frequency response of the analog transceivers. The E/O and O/E frequency responses of the transmitter and receiver are illustrated in Figure 4. Note that the E/O frequency response of the TOSA is measured by using a calibrated photodiode at the receiver, while the corresponding O/E frequency response of the ROSA is obtained after de-embedding the frequency response of the TOSA.

**FIGURE 4.** (a) Transmitter and receiver E/O and O/E frequency responses, respectively. (b) Combined |S21| of the transmitter and receiver.

**FIGURE 5.** Electrical S11 and S22 responses of the transmitter and receiver, respectively.

From Figure 4a we see that the transmitter shows a considerable conversion loss of 40 dB on average for frequencies up to 3 GHz; beyond 3 GHz the loss increases gradually, exceeding 55 dB at 10 GHz. This illustrates the well-known poor conversion efficiency of directly modulated lasers. For the receiver, it can be seen that the O/E conversion gain shows a decreasing trend, from 45 dB near DC down to 40 dB at 7 GHz. After 7 GHz, the performance degrades severely. When observing the combined response of the transmitter and the receiver (Figure 4b), we see that for frequencies below 3 GHz there is a gain in the system, provided by the receiver TIA. However, when increasing the frequency to values above 7.5 GHz, the system insertion loss becomes too high for a practical scenario. The results of the impedance matching measurement at the transmitter and receiver RF ports are shown in Figure 5. The input reflection coefficient at the transmitter RF port (|S11|) shows poor impedance matching, with |S11| lying mostly between −4 dB and −8 dB in the tested frequency range. In contrast, the receiver results reveal two regions with good port impedance matching, with |S22| equal to or below −10 dB: the first one from DC up to 2.5 GHz and the second one from 5.5 GHz up to 6.5 GHz.
The clear dip below −30 dB in the |S11| of the transmitter can be attributed to a resonance in the input circuit confined between 8 GHz and 9 GHz. Overall, this S-parameter characterization shows the major impact of the transmitter conversion loss in the system. It is worth noting that this simple digital-to-analog adaptation procedure, basically consisting of four bypass wires, has been performed with the main aim of providing a proof-of-concept and easily reproducible demonstration. However, it should be noted that improved impedance matching and conversion efficiency might be achievable if the original SFP digital electronics are entirely replaced by an analog driving board, at the expense of a longer development time. Nevertheless, the premise of this work is to assess the performance limits of such a simple and fast digital-to-analog SFP conversion procedure.

**FIGURE 6.** Diagram of the setup used to measure the transceiver crosstalk.

_A. CROSSTALK MEASUREMENTS_

Crosstalk between the transmitter and receiver can be a performance bottleneck in A-RoF systems. Addressing this subject, we advanced the characterization process by measuring the crosstalk in our A-RoF transceiver. In order to quantify the impact of this problem on the subsequent tests, we designed a simple experiment consisting in measuring the received RF power with the laser on and off. The setup used to measure the crosstalk is depicted in Figure 6. If the transmitter and receiver are disconnected, no RF power should be received; however, a significant power level is still measured due to crosstalk. We have defined the crosstalk as the ratio between the power received with the laser off and the one received with the laser on:

ΔP = P_LaserOFF / P_LaserON.   (2)

Fixing the RF power at 0 dBm, we measured the value of ΔP while sweeping the RF frequency between 1 GHz and 10 GHz. These results are presented in Figure 7, which shows a clear dependency of the crosstalk on the RF carrier frequency. For higher frequencies (above 8 GHz), the crosstalk is so high that the received RF power is the same regardless of whether the optical link is connected or disconnected.

**FIGURE 7.** Dependency of the crosstalk on the RF frequency, with 0 dBm RF power at the transmitter.

Given the EVM-SNR relationship,

EVM (%) = 10^(−SNR (dB)/20) × 100,   (3)

if we assume all the crosstalk to be noise, we can estimate the impact that it will have on the EVM. Since we are aiming to transmit 5G signals over an RF carrier of 3.5 GHz, the EVM will always have a floor of 10%, which already does not comply with the 3GPP requirements for the 256QAM and 64QAM modulation formats [12]. These results render the joint utilization of the transmitter and receiver of the same transceiver more challenging, degrading the overall transmission performance. To overcome this problem and to ensure that transceiver crosstalk is avoided, in all of the remaining tests we have decided to use the TOSA and ROSA from different SFP packages. Although this choice implies some hardware inefficiency (one TOSA/ROSA is unutilized per SFP pair), it guarantees crosstalk-free operation.
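To make the EVM-floor argument explicit, the following snippet evaluates equation (3) under the assumption that the crosstalk power acts as pure noise. The −20 dB crosstalk ratio used below is our inference from the quoted 10% floor at the 3.5 GHz carrier; the text reports the floor, not the ratio itself:

```python
def evm_floor_pct(crosstalk_dB: float) -> float:
    """EVM floor from equation (3), treating the crosstalk ratio ΔP
    (laser off / laser on, in dB) entirely as noise: SNR(dB) = -ΔP(dB)."""
    snr_dB = -crosstalk_dB
    return 10 ** (-snr_dB / 20) * 100

# A 10% EVM floor corresponds to ΔP = -20 dB (inferred, not quoted).
print(evm_floor_pct(-20.0))  # -> 10.0
```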
**V. EXPERIMENTAL SETUP**

In order to emulate a real analog optical fronthaul, we have implemented the experimental setup shown in Figure 8. The setup is composed of an Arbitrary Waveform Generator (AWG) responsible for the generation of the baseband signal. This AWG has two differential output channels corresponding to the I and Q waveforms. This signal is then up-converted by the IQ mixer to an RF carrier of 3.5 GHz, corresponding to the standardized FR1 [2], before directly modulating the analog optical transmitter. The optical signal then travels through a given length of single-mode fiber (SMF). Before the optical receiver, there is a variable optical attenuator (VOA), which enables the control of the optical power at the photodiode. The optical receiver performs the O/E conversion, and the resulting electrical signal is received by the Vector Signal Analyzer (VSA). The VSA down-converts the signal and demodulates it using standard compensation algorithms.

**FIGURE 8.** Experimental setup for extracting and applying a DPD model in a low-cost optical fronthaul.

In order to maximize the performance of the low-cost transceivers, we propose the use of nonlinear DPD based on the following memory polynomial model,

z(n) = Σ_{k=0}^{K−1} Σ_{q=0}^{Q−1} a_kq · y(n − q) · |y(n − q)|^k,   (4)

where y is the transmitted signal without pre-distortion, z is the pre-distorted signal, K is the nonlinear order, and Q is the memory depth. The optimization of the memory polynomial coefficients, a_kq, follows the strategy described in Figure 8. First, the signals are generated in the AWG without DPD. The measured signals are then used to calculate the coefficients of the memory polynomial by comparison with the transmitted waveform. After calculating the coefficients, the signal is pre-distorted with the DPD model.
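As a concrete illustration of this extract-and-apply loop, here is a minimal sketch of a memory polynomial DPD per equation (4). The coefficient estimator is our own assumption (the text does not specify one), so a standard least-squares fit from the measured waveform back to the clean transmitted one (an indirect-learning-style approach) is used:

```python
import numpy as np

def mp_basis(y, K, Q):
    """Basis matrix of the memory polynomial in equation (4):
    one column per (k, q) term, y(n-q) * |y(n-q)|**k."""
    y = np.asarray(y, dtype=complex)
    cols = []
    for k in range(K):
        for q in range(Q):
            yq = np.roll(y, q)  # y(n-q); the first q samples wrap (edge effect)
            cols.append(yq * np.abs(yq) ** k)
    return np.column_stack(cols)

def fit_dpd(measured, transmitted, K=5, Q=1):
    """Least-squares coefficients a_kq mapping the measured (distorted)
    waveform back to the clean transmitted one."""
    Phi = mp_basis(measured, K, Q)
    a, *_ = np.linalg.lstsq(Phi, np.asarray(transmitted, dtype=complex), rcond=None)
    return a

def predistort(y, a, K=5, Q=1):
    """Apply equation (4): z(n) = sum_{k,q} a_kq * y(n-q) * |y(n-q)|**k."""
    return mp_basis(y, K, Q) @ a
```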
The experimental analysis is divided into the following stages: i) back-to-back (B2B) performance assessment of the TOSA/ROSA transceiver pair; ii) fronthaul performance assessment, in which we consider a fiber link composed of 20 km SMF; and iii) A-RoF performance enhancement.

**VI. EXPERIMENTAL 5G RESULTS**

_A. 5G FR1 TRANSMISSION_

1) OPTICAL B2B

The first test consisted in analyzing the performance of the fronthaul in a simple scenario without optical fiber. The VOA was used to set the optical power into the photodiode to −12 dBm. This test consisted in optimizing the RF power for the 256QAM 5G signal. These optimizations were done for various DPD models, varying the values of K and Q described in equation (4). The first implemented model consists only of a linear compensation with one sample of memory. Afterwards, we tested a DPD model without memory (Q = 1), while increasing the nonlinear order of the model (K) until obtaining the best performance. With the optimized value of K, we increased the memory of the model until finding its best value. The obtained results are plotted in Figure 9. As can be seen, without DPD the best EVM was obtained for an RF power of 0 dBm, with a value of 4.3%. Note that this value is above the 3.5% limit established by 3GPP for 256QAM transmission. The linear compensation, corresponding to the blue dashed line in the figure, presented results similar to those obtained without DPD. With all the nonlinear models implemented we obtained a considerable EVM reduction, with the best DPD model (Q = 1, K = 5) yielding an EVM of 3.2%. This model also proves better in terms of the RF power margin over which the EVM remains below 3.5%, providing approximately a 2 dB tolerance for transmitted power detuning. Although not shown here for brevity, increasing the nonlinear order beyond 5 resulted in a progressive loss of performance, likely triggered by a less accurate model extraction of higher-order nonlinearities.

**FIGURE 9.** Measured EVM in B2B for the cases of no DPD, linear DPD, and different nonlinear DPD based on memory polynomials, for a 100 MHz signal with a 3.5 GHz carrier.

**FIGURE 10.** Measured EVM after 20 km SMF for the cases of no DPD and different nonlinear DPD based on memory polynomials, for a 100 MHz signal with a 3.5 GHz carrier.

It is interesting to observe that the best absolute performance was obtained for a higher power than the optimum without DPD (3 dBm), which is a well-known advantage of nonlinear compensation in general: by mitigating nonlinearities, higher powers can be launched into the transmission system, thereby resulting in an improved SNR and/or power budget. It is worth noticing that all the considered DPD models in Figure 9 enable the successful transmission of a 5G 256QAM signal with an EVM below the 3GPP limit.

2) 20 km ANALYSIS

After studying the B2B performance, we added 20 km of SMF in order to get a more realistic fronthaul scenario. With this setup, there were two main goals. First, to verify if the model obtained in B2B is accurate enough to achieve the best transmission performance with a fiber fronthaul. This would bring a great advantage in practical terms, enabling the optimization of the DPD model for the optical transceivers in a controlled laboratory environment, without requiring individual optimization in different fronthaul networks. The other main goal of this test, which is also inherently related with the first, is to test whether a long fronthaul link composed of 20 km SMF would introduce memory effects in the system.

With these goals in mind, we have continued the tests using the best DPD model obtained in B2B (Q = 1, K = 5) and with Q = 2 and K = 5. For the memoryless model (Q = 1, K = 5), we have measured the performance applying the model obtained in B2B, as well as extracting a new model taking into account the 20 km in the system. It is worth noticing that the insertion of the fiber link required increasing the power at the photodiode to −10.5 dBm in order to achieve the best performance. The obtained results are shown in Figure 10. Without DPD the performance remains similar to the B2B case, with a minimum EVM of 4.3% when the RF power is 0 dBm. Through the analysis of Figure 10, it is possible to conclude that the model obtained in B2B is still valid up to an RF power of 2 dBm, achieving a minimum EVM of 3.3%. The benefits of extracting the model again are visible at higher powers, enabling an EVM of 3.2% for an RF power of 3 dBm. It is worth noting that the model obtained in B2B was still capable of presenting a very good performance, being only 0.1% worse than the one obtained with the 20 km fiber link. The next step was to analyze whether the 20 km SMF introduced memory into the system, so we extracted and applied a memory polynomial with Q = 2 and K = 5, which is signaled in the figure by a blue solid line. The system remains memoryless, since the performance degrades relative to the Q = 1, K = 5 scenario, yielding a minimum EVM of 3.3% at the optimum RF power of 3 dBm. Once again, with all the DPD models it was possible to obtain an EVM below the 3GPP limit for 256QAM. Considering equation (3), we can calculate the SNR gain obtained with DPD. An EVM of 4.3% corresponds to an SNR of 27.3 dB, whereas an EVM of 3.2% corresponds to an SNR of 29.9 dB. Therefore, we may conclude that DPD effectively provides an SNR improvement of approximately 3 dB, which is a remarkable gain.
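These EVM-to-SNR conversions follow directly from inverting equation (3); a two-line check reproduces the quoted figures:

```python
import math

def evm_to_snr_dB(evm_pct: float) -> float:
    """Invert equation (3): SNR(dB) = -20 * log10(EVM / 100)."""
    return -20 * math.log10(evm_pct / 100)

# 4.3% -> ~27.3 dB and 3.2% -> ~29.9 dB, i.e., a gain of ~2.6 dB,
# in line with the "approximately 3 dB" quoted above.
print(round(evm_to_snr_dB(4.3), 1), round(evm_to_snr_dB(3.2), 1))
```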
These improvements are clear when observing the spectrum of the signal with and without pre-distortion. In Figure 11 it is possible to observe the improvement in the signal spectrum at a power of 3 dBm brought by the usage of DPD. Without DPD there is a clear nonlinear phenomenon in the adjacent bands of the signal, commonly designated as "spectral regrowth". It is possible to see that the applied DPD model compensates almost completely for these bands. The signal spectrum obtained at the optimum power without DPD (0 dBm) is also shown in Figure 11. From the results, we may conclude that DPD has effectively compensated for the nonlinear distortions generated by the 3 dB power increase, thus resulting in an effective gain of 3 dB in SNR, which very nearly matches the observed gains in terms of EVM.

**FIGURE 11.** Measured spectra for the optical case without DPD (0 dBm), and for the optical case when performing DPD (3 dBm) before and after applying the model.

**FIGURE 12.** Measured EVM for different IF values with a 64QAM 400 MHz transmitted signal.

_B. 5G FR2 TRANSMISSION_

1) IF ANALYSIS

Since 3GPP has defined the mmWave range for FR2 transmission, to be able to transmit these signals with our adapted transceivers we need to convert them to an intermediate frequency (IF), leading to a system configuration that is typically designated as IF-over-fiber (IFoF). To find the best-suited IF for our system, we started by sending a 400 MHz 64QAM signal using different IFs with an RF power of 0 dBm and analyzed the measured EVM. The results obtained from this analysis are depicted in Figure 12. Despite not showing a clear tendency, the results show an EVM minimum of 5.5% for an IF of 3.5 GHz. Besides maximizing the A-RoF transceiver performance, this 3.5 GHz IF choice also has the advantage of enabling improved compatibility between the FR1 and FR2 transmission modes.

2) PERFORMANCE ASSESSMENT

After optimizing the value of the transmitted IF, we measured the performance achievable in an FR2 scenario with our setup. To this intent, we used the same 5G signal as before (64QAM, 400 MHz) at the optimum IF of 3.5 GHz. With this signal, we performed tests without fiber and with 20 km SMF. Figure 13 shows the results obtained when sweeping the transmitted RF power, without any DPD and with the best DPD model found (Q = 2, K = 3), for each scenario. It is observable from the presented results that there is a slight increase in the optimum RF power, which is related with having
From the results, we observe that, for both scenarios, simply optimizing the RF power driving the SFP is enough to achieve 64QAM transmission (where the 3GPP limit is 8% EVM). Without any DPD we obtained an EVM of 5.2% and 6.3%, in OB2B and with 20 km SMF, respectively. However, with the best DPD models found, the performance was improved to 4.8% in the OB2B scenario, and 5.5% with 20 km SMF. _C. DPD COMPLEXITY ANALYSIS_ An underlying problem with introducing advanced techniques for nonlinear DPD is the increased complexity in these systems. For this reason, we decided to perform a complexity analysis of the proposed memory polynomial DPD method. In order to quantify their complexity, we will use the number - K = 1: (5) where we consider that the absolute value of y(n _q) can be_ − computed as,[1] � |y(n − _q)| =_ _yr_ (n − _q)[2]_ + yi(n − _q)[2]_ � �� � 2 RMs _._ (6) Noting that the number of RMs grows linearly with the DPD memory, Q, we can start by analyzing the model complexity for the case of Q 1, with increasing polynomial order, K, = as shown in equations (8) to (11), as shown at the bottom of the page. Note that, when computing |y(n)|[k] for k > 1, we assume that the value of _y(n)_ has already been previously com| |[k][−][1] puted and stored in memory, and therefore there is only one extra real multiplication needed to evaluate _y(n)_ | |[k] = _y(n)_ _y(n)_ . | |[k][−][1]| | Finally, generalizing the above examples for any value of _K and Q, we obtain that the following analytical expression_ 1Note that, for simplicity, we neglect the complexity associated with the [√](.) operation, as its hardware implementation might follow different algorithms, namely resorting to the use of look-up tables. Nevertheless, it is worth noting that for any memory polynomial of order K > 1 and memory _Q, only Q square-root operations are actually required; i.e. once the value of_ |y(n − _q)| is first computed, it can be stored in memory for the subsequent_ evaluation of its |y(n − _q)|[k]_ products. ([a00,r _yr_ (n) − _a00,iyi(n)]_ � �� � 2 RMs +j [a00,r _yi(n) + a00,iyr_ (n)]) |y(n)|[0] � 2 RMs�� � ����=1 4 RMs (8) −→ - K = 2: - K = 3: - K = 4: [a10,r _yr_ (n) − _a10,iyi(n)]|y(n)|_ +j [a10,r _yi(n) + a10,iyr_ (n)]|y(n)| � �� � � �� � 3 RMs 3 RMs [a20,r _yr_ (n) − _a20,iyi(n)]|y(n)|[2]_ +j [a20,r _yi(n) + a20,iyr_ (n)]|y(n)|[2] � �� � � �� � 3 RMs 3 RMs [a30,r _yr_ (n) − _a30,iyi(n)]|y(n)|[3]_ � �� � 3 RMs +j [a30,r _yi(n) + a30,iyr_ (n)]|y(n)|[3] � �� � 3 RMs +2 from eq. (6) +4 from eq. (8) 12 RMs (9) −−−−−−−−−−−−−−−−−−→ +12 from eq. (9) +1 from |y(n)|[2] 19 RMs (10) −−−−−−−−−−−−−−−−−−−→ +19 from eq. (10) +1 from |y(n)|[3] 26 RMs (11) −−−−−−−−−−−−−−−−−−−→ ----- that fully describes the complexity (in number of RMs) of the memory polynomial model, [2] Technical Specification Group Services and System Aspects: Release 15 _Description, Standard 3GPP TR 21.915, 2019._ [3] Common Public Radio Interface (CPRI); Interface Specification, document CPRI Specification V7.0, 2015. [4] Technical Specification Group Radio Access Network: Study on CU-DU _Lower Layer Split for NR, Annex A: Fronthaul Bandwidth (Release 15),_ Standard 3GPP TR 38.816, 2017. [5] C. Ranaweera, E. Wong, A. Nirmalathas, C. Jayasundara, and C. Lim, ‘‘5G C-RAN with optical fronthaul: An analysis from a deployment perspective,’’ J. Lightw. Technol., vol. 36, no. 11, pp. 2059–2068, Jun. 1, 2018, [doi: 10.1109/JLT.2017.2782822.](http://dx.doi.org/10.1109/JLT.2017.2782822) [6] J. Wang, C. Liu, J. Zhang, M. Zhu, M. Xu, F. Lu, L. 
**VII. CONCLUSION**

In this paper, we addressed one of the main challenges in the upcoming next-generation RANs, namely the bandwidth bottleneck imposed by digital fronthauling in typical architectures. With the rise of 5G and the emergence of 6G specifications, it is necessary to search for alternative technologies that meet these unprecedented demands. Answering these requirements, and responding to the scarcity of low-cost analog optical transceivers, we have demonstrated a simple procedure to take a low-cost COTS digital SFP transceiver and modify it to perform analog transmission. With the simple modifications exposed in the paper, we were able to obtain an SFP-packaged analog transceiver capable of transmitting 100 MHz and 400 MHz 64QAM signals meeting the 3GPP EVM requirements. Moreover, a memory-polynomial-based pre-distortion technique has been shown to partially counteract the limitations inherent to the simple digital-to-analog adaptation procedure, enabling the EVM specifications to be met for transmitting a 100 MHz 256QAM signal over 20 km SMF. Although the proposed digital-to-analog adaptation of the SFP transceiver is not deemed a practical solution for the marketization of analog RoF transceivers, the results presented in this work demonstrate that it is possible to design high-performance analog RoF solutions using low-cost components that have found mature deployment in the low-end digital optics market. Furthermore, the analog-adaptation methodology and digital pre-distortion technique demonstrated in this work might provide useful insights for the research community, facilitating access to low-cost RoF solutions as an enabling technology to support the experimentation and prototyping of complex 5G and 6G optical access architectures in laboratory environments.

**REFERENCES**

[1] I. A. Alimi, A. L. Teixeira, and P. P. Monteiro, ''Toward an efficient C-RAN optical fronthaul for the future networks: A tutorial on technologies, requirements, challenges, and solutions,'' IEEE Commun. Surveys Tuts., vol. 20, no. 1, pp. 708–769, 1st Quart., 2018, doi: 10.1109/COMST.2017.2773462.
[2] Technical Specification Group Services and System Aspects: Release 15 Description, Standard 3GPP TR 21.915, 2019.
[3] Common Public Radio Interface (CPRI); Interface Specification, document CPRI Specification V7.0, 2015.
[4] Technical Specification Group Radio Access Network: Study on CU-DU Lower Layer Split for NR, Annex A: Fronthaul Bandwidth (Release 15), Standard 3GPP TR 38.816, 2017.
[5] C. Ranaweera, E. Wong, A. Nirmalathas, C. Jayasundara, and C. Lim, ''5G C-RAN with optical fronthaul: An analysis from a deployment perspective,'' J. Lightw. Technol., vol. 36, no. 11, pp. 2059–2068, Jun. 2018, doi: 10.1109/JLT.2017.2782822.
[6] J. Wang, C. Liu, J. Zhang, M. Zhu, M. Xu, F. Lu, L. Cheng, and G.-K. Chang, ''Nonlinear inter-band subcarrier intermodulations of multi-RAT OFDM wireless services in 5G heterogeneous mobile fronthaul networks,'' J. Lightw. Technol., vol. 34, no. 17, pp. 4089–4103, Sep. 2016, doi: 10.1109/JLT.2016.2584621.
[7] B. G. Kim, S. H. Bae, H. Kim, and Y. C. Chung, ''RoF-based mobile fronthaul networks implemented by using DML and EML for 5G wireless communication systems,'' J. Lightw. Technol., vol. 36, no. 14, pp. 2874–2881, Jul. 2018, doi: 10.1109/JLT.2018.2808294.
[8] S.-H. Cho, H. Park, H. S. Chung, K. H. Doo, S. Lee, and J. H. Lee, ''Cost-effective next generation mobile fronthaul architecture with multi-IF carrier transmission scheme,'' in Proc. Opt. Fiber Commun. Conf., Mar. 2014, pp. 1–3.
[9] J. Zhang, J. Wang, M. Xu, F. Lu, L. Chen, J. Yu, and G.-K. Chang, ''Memory-polynomial digital pre-distortion for linearity improvement of directly-modulated multi-IF-over-fiber LTE mobile fronthaul,'' in Proc. Opt. Fiber Commun. Conf., Mar. 2016, pp. 1–3.
[10] X. N. Fernando and A. B. Sesay, ''Look-up table based adaptive predistortion for dynamic range enhancement in a radio over fiber link,'' in Proc. IEEE Pacific Rim Conf. Commun., Comput. Signal Process. (PACRIM), Aug. 1999, pp. 26–29, doi: 10.1109/PACRIM.1999.799469.
[11] S. Liu, M. Xu, J. Wang, F. Lu, W. Zhang, H. Tian, and G.-K. Chang, ''A multilevel artificial neural network nonlinear equalizer for millimeter-wave mobile fronthaul systems,'' J. Lightw. Technol., vol. 35, no. 20, pp. 4406–4417, Oct. 2017, doi: 10.1109/JLT.2017.2717778.
[12] User Equipment (UE) Radio Transmission and Reception; Part 1: Range 1 Standalone (Release 15), Standard 3GPP TS 38.101-1, 2018.
[13] M. A. Fernandes, P. A. Loureiro, B. T. Brandão, A. Lorences-Riesgo, F. P. Guiomar, and P. P. Monteiro, ''Multi-carrier 5G-compliant DML-based transmission enhanced by bit and power loading,'' IEEE Photon. Technol. Lett., vol. 32, no. 12, pp. 737–740, Jun. 2020, doi: 10.1109/LPT.2020.2994045.
[14] A. Mufutau, F. Guiomar, M. Fernandes, A. Lorences-Riesgo, A. Oliveira, and P. Monteiro, ''Demonstration of a hybrid optical fiber–wireless 5G fronthaul coexisting with end-to-end 4G networks,'' J. Opt. Commun. Netw., vol. 12, pp. 72–78, Mar. 2020.
[15] M. Alzenad, M. Z. Shakir, H. Yanikomeroglu, and M.-S. Alouini, ''FSO-based vertical backhaul/fronthaul framework for 5G+ wireless networks,'' IEEE Commun. Mag., vol. 56, no. 1, pp. 218–224, Jan. 2018.
[16] Z. Zakrzewski, ''D-RoF and A-RoF interfaces in an all-optical fronthaul of 5G mobile systems,'' Appl. Sci., vol. 10, no. 4, p. 1212, Feb. 2020.
[17] Common Public Radio Interface; Interface Specification, document eCPRI Interface Specification V1.0, 2017.

MARCO A. FERNANDES (Member, IEEE) received the M.Sc. degree in electronics and telecommunications engineering from the University of Aveiro, in 2019. He is currently pursuing the Ph.D. degree in the MAP-tele doctoral program of the University of Aveiro, the University of Porto, and the University of Minho. During his M.Sc. degree, he worked with analog radio-over-fiber applied to 5G communications.
During his master's, he worked with advanced radio-over-fiber transmission, providing 5G solutions for the Optical Radio Convergence Infrastructure for Communications and Power Delivering (ORCIP, www.orcip.pt) testbed. He currently participates in multiple research projects, mainly involving high-capacity free-space optics (FSO) transmission and machine learning applications. He has authored or coauthored more than 15 scientific publications in leading international journals and conferences. He is an Optica Member. He received a Ph.D. grant from FCT in 2020. In 2021, he was a finalist for the OFC2021 Corning Student Award.
BRUNO T. BRANDÃO (Member, IEEE) received the M.Sc. degree in electronics and telecommunications engineering from the University of Aveiro, Portugal, in 2019. During his M.Sc. degree, he developed an analogue radio-over-fibre link, based on low-cost optical transceivers, for 4G and 5G communication support. Since 2019, he has been with the Telecommunications Ph.D. Program (MAP-tele), a joint venture between the Universities of Minho, Aveiro, and Porto, Portugal. In his Ph.D. studies, he is developing and implementing digital signal processing algorithms in reconfigurable hardware for real-time coherent communication systems. Alongside his Ph.D. studies, he is working on the RETIOT project under an M.Sc. fellowship, in which he is developing a distributed radio system for both radio communications and coherent radar applications. He also designed an RF front-end to serve as an interface between the remote user equipment and the radio-over-fibre link. This work was done under the scope of the ORCIP infrastructure (www.orcip.pt) at the Instituto de Telecomunicações.

ABEL LORENCES-RIESGO received the Ph.D. degree from the Chalmers University of Technology in 2017. From 2017 to 2019, he worked as a postdoctoral researcher at the Instituto de Telecomunicações, Aveiro. In 2019, he joined the Optical Communication Technology Laboratory, Paris Research Center, Huawei Technologies France, as a Senior Engineer. He has authored or coauthored more than 70 papers in leading international journals and conferences. His main research interests include fiber-optic communications and digital signal processing.

PAULO P. MONTEIRO (Senior Member, IEEE) is currently an Associate Professor at the University of Aveiro and a Senior Researcher at the Instituto de Telecomunicações, where he is also a Research Coordinator of optical communication systems (https://www.it.pt/Groups/Index/59). He has successfully supervised over 14 Ph.D. students and 24 master's students. He has participated in more than 26 research projects. He has authored/coauthored more than 18 patent applications, over 115 articles in journals, and 380 conference contributions. His main research interests include optical communications and reflectometry systems.

FERNANDO P. GUIOMAR (Member, IEEE) received the M.Sc. and Ph.D. degrees in electronics and telecommunications engineering from the University of Aveiro, Portugal, in 2009 and 2015, respectively. Since 2017, he has been a Senior Researcher at the Instituto de Telecomunicações, Aveiro, where his main research interests are focused within the area of fiber-based and free-space optical communication systems, including the development of digital signal processing algorithms, advanced modulation and coding, constellation shaping, and nonlinear modeling and mitigation. He has authored or coauthored more than 100 scientific publications in leading international journals and conferences. He is an OSA Member. In 2015, he received a Marie Skłodowska-Curie individual fellowship, jointly hosted by the Politecnico di Torino, Italy, and CISCO Optical GmbH, Germany. In 2016, he received the Photonics21 Student Innovation Award, distinguishing industrial-oriented research with high impact in Europe. In 2020, he was awarded a three-year Junior Leader Fellowship by the ''la Caixa'' Foundation.
{ "disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.1109/ACCESS.2022.3154784?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1109/ACCESS.2022.3154784, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "CCBY", "status": "GOLD", "url": "https://ieeexplore.ieee.org/ielx7/6287639/9668973/09721876.pdf" }
2022
[ "JournalArticle" ]
true
null
[ { "paperId": "cabd0425553bb2e5fd6d61992a473c6f4db8f162", "title": "Multi-Carrier 5G-Compliant DML-Based Transmission Enhanced by Bit and Power Loading" }, { "paperId": "22190aea929c806856b9467b9c911cdc36109b20", "title": "D-RoF and A-RoF Interfaces in an All-Optical Fronthaul of 5G Mobile Systems" }, { "paperId": "938192a90615fac58aa9b8d0ac4fabeeb28395ac", "title": "Demonstration of a hybrid optical fiber–wireless 5G fronthaul coexisting with end-to-end 4G networks" }, { "paperId": "95d73ad6062651c7d42e3a7f60efe43d604236c1", "title": "RoF-Based Mobile Fronthaul Networks Implemented by Using DML and EML for 5G Wireless Communication Systems" }, { "paperId": "8b9edb606ee1c3f86ab39dbf823889bdd6487786", "title": "5G C-RAN With Optical Fronthaul: An Analysis From a Deployment Perspective" }, { "paperId": "f6a3f3ef2a242e2c9d60bd7439fe1a2ef1f68ea5", "title": "A Multilevel Artificial Neural Network Nonlinear Equalizer for Millimeter-Wave Mobile Fronthaul Systems" }, { "paperId": "3d9a5775b900725c2268cf1bf62ddfdb547aa548", "title": "Nonlinear Inter-Band Subcarrier Intermodulations of Multi-RAT OFDM Wireless Services in 5G Heterogeneous Mobile Fronthaul Networks" }, { "paperId": "0cc8551881da255f2cfeebbbb973ec2d473c1b2d", "title": "Memory-polynomial digital pre-distortion for linearity improvement of directly-modulated multi-IF-over-fiber LTE mobile fronthaul" }, { "paperId": "6978786aa1de5db57eeb90d1edeb03a96eccbbc0", "title": "Cost-effective next generation mobile fronthaul architecture with multi-IF carrier transmission scheme" }, { "paperId": "4d91322b48291edb03f1fa6b26b2520062ed2d09", "title": "Look-up table based adaptive predistortion for dynamic range enhancement in a radio over fiber link" }, { "paperId": "afbc1ba078afee5588dd2b0ae90153eeb0733692", "title": "Toward an Efficient C-RAN Optical Fronthaul for the Future Networks: A Tutorial on Technologies, Requirements, Challenges, and Solutions" }, { "paperId": null, "title": "User Equipment (UE) Radio Transmission and Reception" }, { "paperId": null, "title": "FSObased vertical backhaul/fronthaul framework for 5G+ wireless networks" }, { "paperId": null, "title": "Release 15 Description" }, { "paperId": null, "title": "Technical Specification Group Radio Access Network: Study on CU-DU Lower Layer Split for NR, Annex A: Fronthaul Bandwidth (Release 15)" }, { "paperId": null, "title": "Common Public Radio Interface Interface Specification, document eCPRI Interface Specification V1" } ]
12,275
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Economics", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/021a44734d384688e277b852910ac80cb2ab8c34
[ "Computer Science" ]
0.870149
An Agent-Based Model Framework for Utility-Based Cryptoeconomies
021a44734d384688e277b852910ac80cb2ab8c34
arXiv.org
[ { "authorId": "38081739", "name": "Kiran Karra" }, { "authorId": "2225941210", "name": "Tom Mellan" }, { "authorId": "2226149183", "name": "Maria Silva" }, { "authorId": "1557361372", "name": "Juan P. Madrigal-Cianci" }, { "authorId": "2225941452", "name": "Axel Cubero Cortes" }, { "authorId": "1471432581", "name": "Zixuan Zhang" } ]
{ "alternate_issns": null, "alternate_names": [ "ArXiv" ], "alternate_urls": null, "id": "1901e811-ee72-4b20-8f7e-de08cd395a10", "issn": "2331-8422", "name": "arXiv.org", "type": null, "url": "https://arxiv.org" }
In this paper, we outline a framework for modeling utility-based blockchain-enabled economic systems using Agent Based Modeling (ABM). Our approach is to model the supply dynamics based on metrics of the cryptoeconomy. We then build autonomous agents that make decisions based on those metrics. Those decisions, in turn, impact the metrics in the next time-step, creating a closed loop that models the evolution of cryptoeconomies over time. We apply this framework as a case-study to Filecoin, a decentralized blockchain-based storage network. We perform several experiments that explore the effect of different strategies, capitalization, and external factors on agent rewards, which highlight the efficacy of our approach to modeling blockchain-based cryptoeconomies.
ISSN 2379-5980 (online) DOI 10.5195/LEDGER.20XX.XXX

RESEARCH ARTICLE

# An Agent-Based Model Framework for Utility-Based Cryptoeconomies

Kiran Karra, Tom Mellan, Maria Silva, Juan P. Madrigal-Cianci, Axel Cubero Cortes, Zixuan Zhang∗

**Abstract. In this paper, we outline a framework for modeling utility-based blockchain-enabled economic systems using Agent Based Modeling (ABM). Our approach is to model the supply dynamics based on metrics of the cryptoeconomy. We then build autonomous agents that make decisions based on those metrics. Those decisions, in turn, impact the metrics in the next time-step, creating a closed loop that models the evolution of cryptoeconomies over time. We apply this framework as a case-study to Filecoin, a decentralized blockchain-based storage network. We perform several experiments that explore the effect of different strategies, capitalization, and external factors on agent rewards, which highlight the efficacy of our approach to modeling blockchain-based cryptoeconomies.**

KEY WORDS: 1. Agent-Based Modeling. 2. Cryptoeconomics. 3. Digital Twin.

∗ Kiran Karra and Tom Mellan contributed equally to this work. All authors are research scientists at CryptoEconLab (https://cryptoeconlab.io).

### 1. Introduction

Cryptoeconomics is an interdisciplinary science that combines fields such as economics, cryptography, and computer science with the goal of designing and analyzing economic incentive structures for resource allocation in decentralized systems.[1] Accordingly, cryptoeconomic systems are often used to create new forms of digital currency, utilities, and markets. Because each system has its own goals and contexts in which it is applicable, cryptoeconomic incentive structures usually need to be customized for each individual application. In addition, these systems typically show features associated with Complex Systems.[1] This means that the long-term evolution of these systems cannot be easily inferred from local changes caused by individuals, which makes the task of customizing cryptoeconomic systems to support a concrete application more difficult.

Even though cryptoeconomics is a relatively young field,[2] some work has been done to address the complexities of designing and tuning decentralised economies. An exciting new approach is to use Agent-Based Modeling (ABM).[3] ABM is a computational modeling technique that has been used to study a wide variety of complex systems, including social systems,[4] economic systems,[5,6] and biological systems.[7] In ABM, the system is modeled as a collection of agents, each of which has its own set of rules and behaviors. The agents interact with each other and with the environment, and the system's behavior emerges from the interactions of the individual agents.[3]

Within the cryptoeconomics space, ABM has the potential to support practitioners in three main areas: (1) Study the cryptoeconomics and robustness of the blockchain to agent behavior. As an example, Struchkov et al.[8] used ABM to test how decentralised exchanges would respond to stressed market conditions and front-running, while Cocco et al.[9] used ABM to analyse the mining incentives in the Bitcoin network; (2) Explore the design space of blockchain networks. For instance, ABM has been applied to compare different token designs and their impact on prediction markets;[10] (3) Test new features and protocols.
Following the fair-launch allocation from Yearn Finance, a group of researchers[11] used ABMs to examine the concentration of voting-rights tokens after the launch, under different trading modalities.

This paper explores how ABM can be adapted to a particular type of decentralised system, namely utility-based decentralised networks. These networks employ their own currency to provide consumptive rights to the services or products offered by the network.[12] Thus, these systems mediate a marketplace of providers of a specific good and users that want to consume that good. Since the entire system depends on the good being traded, any tool attempting to model such a system needs to consider how changes in utility impact the system and its agents.

Therefore, we propose a framework for applying ABMs to utility-based cryptoeconomies. Our approach is complementary to other methods in the literature[13] and builds on the work of Zhang et al.[14] to enable multi-scale coupling between individual microeconomic preferences and protocol-specific supply dynamics. Rational users of cryptoeconomic systems will base their decisions on some aspects of the network they are involved in, which in turn affects the network. This natural feedback loop is well represented in our framework.

We apply the framework to Filecoin, a decentralised data storage network,[15] and conduct two experiments that uncover interesting aspects of the network. The first explores the agents' reward trajectories under different lending rates, while the other examines how the current cryptoeconomic mechanisms of Filecoin impact wealth distribution.

The rest of this paper is organized as follows. We begin in Section 2 by presenting the general framework for applying ABM to utility-based systems. This framework is then applied to the Filecoin network, by first developing a mathematical model of Filecoin's supply dynamics in Section 3. In Section 4, we describe the ABM that leverages this model to simulate a closed-loop interaction of programmable agents within the Filecoin economy. Section 5 follows with the two experiments that showcase the utility of our ABM framework for understanding the Filecoin system. We conclude in Section 6 by framing the results in the context of utility-based token economies and discussing future research paths.

### 2. ABM Framework for Utility Cryptoeconomies

Agent-based models (ABMs) are a tool for modeling complex systems[3] with a high degree of granularity. They consist of two primary components which interact with each other: a) the environment, which models the system under study, and b) agents, which take actions that affect the environment.

**Formal Definition**

We define the general framework of our deterministic, discrete-time ABM as follows. Let $\mathcal{E}$ denote the set of all possible environmental variables. For a given time $d \in \mathbb{N}$, let $E_d \in \mathcal{E}$ denote the environment at time $d$, and define the set $\boldsymbol{E} \ni \boldsymbol{E}_d := E_0 \times \cdots \times E_d$. In our specific setting, $E_d, E_{d+1}, \ldots$ correspond to the environments at day $d$, at day $d+1$, etc. In addition, let $A$ be the abstract set of agents and let $H$ denote the abstract set of actions. To each agent $a \in A$, there corresponds a given set of actions $h_a \in H$.
Furthermore, we define the update rules $f^{H}_{d+1}: A \times \boldsymbol{E}_d \mapsto H$, $f^{E}_{d+1}: H \times \mathcal{E} \mapsto \mathcal{E}$, and $f^{A}_{d+1}: \boldsymbol{E}_d \mapsto A$ as abstract functions that update the actions of each agent, the environment, and the agents, respectively. Given a set of agents $A_0 \subset A$ at time $d = 0$, where each agent $a_0 \in A_0$ is equipped with a set of actions $h_{a_0} \in H$, and an initial environment $E_0$, the ABM proceeds by iterating as follows:

$$h_{a,d+1} = f^{H}_{d+1}(a_d, \boldsymbol{E}_d) \quad \forall a_d \in A_d, \qquad (1)$$

$$E_{d+1} = f^{E}_{d+1}\Big(\bigcup_{a_d \in A_d} h_{a,d+1},\; E_d\Big), \qquad (2)$$

$$A_{d+1} = f^{A}_{d+1}(\boldsymbol{E}_d). \qquad (3)$$

**Environment**

The supply dynamics, defined as the factors which affect the supply of tokens in the cryptoeconomy, are modeled by the environment, $E$. Factors include the total supply of tokens, the rate at which new tokens are mined, and the rate at which tokens are taken out of circulation through means such as burning. This information is often used by investors and traders who want to understand the potential value of a cryptoeconomy and then make decisions about potential investments into that economy.

Three aspects of the environment must be defined: (1) Network performance metrics - what are the key metrics to be modeled and used as performance indicators when evaluating the results of the ABM simulation? (2) Inputs - when actors take part in the cryptoeconomy, what is the subset of actions that they can take that would affect the network metrics? (3) Outputs - what are the outputs that flow back to miners, which in turn affect miners' decisions about their actions in the next time step? For example, in a typical blockchain, rewards are issued to miners, and because the rate and trajectory of rewards affect agents' behavior, this is a key output. Additional outputs, such as a subset of the overall network metrics that miners can use to make rational decisions, can also be included.

**Agents**

Miners in blockchains are mapped to agents, $A_i$. Miners take actions that correspond to the inputs of the environment and make those decisions based on the outputs of the environment they are interacting with. The actions they take affect the network's performance metrics, which then affect the outputs that the agents are fed. Through this feedback loop, the dynamical nature of the cryptoeconomy is modeled.
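A minimal sketch of how the iteration in equations (1)-(3) can be realized as a simulation loop is given below. This is our own illustration rather than the authors' codebase; the class and function names, and the use of plain dictionaries for environments and actions, are assumptions.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Agent:
    """An agent whose policy plays the role of f^H: (agent, env history) -> action."""
    policy: Callable[["Agent", List[dict]], dict]

def run_abm(agents: List[Agent],
            update_env: Callable[[List[dict], dict], dict],                   # f^E
            update_agents: Callable[[List[dict], List[Agent]], List[Agent]],  # f^A
            env0: dict, n_days: int) -> List[dict]:
    history = [env0]  # the environment history E_0, E_1, ...
    for _ in range(n_days):
        # (1) every agent picks an action based on the environment history
        actions = [agent.policy(agent, history) for agent in agents]
        # (2) aggregate all actions and update the environment
        history.append(update_env(actions, history[-1]))
        # (3) update the agent population from the environment history
        agents = update_agents(history, agents)
    return history

# Trivial wiring: one agent onboards one unit of power per day.
agents = [Agent(policy=lambda a, h: {"onboard": 1.0})]
hist = run_abm(agents,
               update_env=lambda acts, env: {"power": env["power"] + sum(x["onboard"] for x in acts)},
               update_agents=lambda h, ags: ags,
               env0={"power": 0.0}, n_days=3)
```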
Fig. 1 shows a diagram of this proposed framework for mapping cryptoeconomies to an ABM.

Fig. 1. Proposed framework for mapping cryptoeconomies to an ABM. Miners are represented by agents and the environment is mapped to a supply dynamics model.

**Examples**

Three examples of cryptoeconomic projects where this framework can provide value include Helium,[16] Ethereum,[17] and Filecoin.[15]

Helium is a decentralized wireless network where miners provide wireless coverage in exchange for HNT tokens. An ABM utilizing the described framework can be used to understand, for example, how the rate of token distribution and population density may affect expected network coverage. This requires a mathematical model of how tokens are minted (the supply dynamics), given agent inputs (i.e., the wireless coverage they provide to the network). This can then be used to design new incentive structures to ensure a more even coverage distribution.

Ethereum is another example where the described ABM framework can be applied. A specific use case could be analyzing how user behavior (agents) could stress and affect the total circulating supply of ETH tokens after the "Shapella" upgrade, which enabled easier unlocking of staked ETH. In this setting, one could model, e.g., the propensity of a participant to unlock against additional staking inflows due to the ability to unlock. Here, the agents' actions (whether to stake or unstake) have a direct effect on the supply dynamics, and the outlined framework would enable one to quantify the various scenarios that may play out.

We now discuss the application of the ABM framework to Filecoin in more detail.

### 3. Modeling Filecoin Supply Dynamics

Filecoin is a distributed storage network based on the blockchain, where miners, referred to as storage providers (SPs), provide storage capacity for the network and earn units of the Filecoin cryptocurrency (FIL) by periodically producing cryptographic proofs that certify they are providing the promised storage capacity. In contrast to using Nakamoto-style proof of work to maintain consensus on the chain, Filecoin uses proof of storage: a miner's voting power — the probability that the network elects a miner to create a new block — is proportional to their current quality-adjusted storage in use relative to the rest of the network. The cryptoeconomics of Filecoin are designed to incentivize storage providers to participate and grow the collective utility of the data storage network. The following subsections describe various aspects of the Filecoin supply dynamics.

**Circulating Supply**

Filecoin's circulating supply $S_d$ is modeled at a daily ($d$) level of aggregation and has four parts:

$$S_{d+1} = \underbrace{M_d + V_d}_{\text{inflow}} - \underbrace{(L_d + B_d)}_{\text{outflow}}. \qquad (4)$$

These correspond to minted block rewards $M_d$, vested tokens $V_d$, locked tokens $L_d$, and burnt tokens $B_d$.

**Power Onboarding and Renewals**

The dynamics of $M_d$, $L_d$, and $B_d$ depend on the amount of storage power onboarded and renewed in the network.

In Filecoin, storage providers (SPs) participate by onboarding power onto the network, adding sectors for a committed duration. Power is measured in units of sectors, which can be either 32 GiB or 64 GiB in size. Each sector consists of a fraction of committed capacity (CC) and verified deal data (FIL+).[18] An SP can choose to renew CC sectors when they expire.

We model power in aggregate terms rather than at the sector level. This means that we model the network's storage power as split into two categories: 1) CC and 2) FIL+. This approximation is valid for the granularity that we are seeking to achieve with our modeling.

Filecoin has two methods to measure the power of the network: the network's raw byte power (RBP) and the network's quality-adjusted power (QAP). Network RBP is a measure of the raw storage capacity (in bytes) of the network — it does not distinguish between the kinds of data stored on the network. For example, empty or random data stored on the network is counted the same as a widely used dataset when computing network RBP. A second measure of network power is quality-adjusted power (QAP). QAP is a derived measurement that captures the amount of useful data being stored on the network.
Considering the aggregated approximation discussed above, we compute the quality-adjusted power $P^{QA}_d$ of the network on day $d$ as

$$P^{QA}_d = (1-\gamma)\cdot P^{RB}_d + 10\cdot\gamma\cdot P^{RB}_d, \qquad (5)$$

where $\gamma \in [0, 1]$ is the overall FIL+ rate of the network, and $P^{RB}_d$ is the raw byte power of the network on day $d$. Eq. (5) reveals that FIL+ power is given a 10x multiplier when computing the QA power of the network.[18]

An initial pledge collateral of FIL tokens is required in order to onboard or renew power, and the specific amounts and time-windows are discussed below. In exchange for onboarding and renewing power onto the network, and continually submitting storage proofs to the chain, SPs can receive block rewards in the form of FIL tokens.

**Rewards from minting**

Filecoin uses a hybrid minting model that has two components — simple minting and baseline minting. The total number of tokens minted by day $d$ is the sum of these two minting mechanisms:

$$M_d = M^{S}_d + M^{B}_d. \qquad (6)$$

Simple minting is defined by an exponential decay model,

$$M^{S}_d = M^{S}_\infty \cdot (1 - e^{-\lambda d}), \qquad (7)$$

which decays at a rate of $\lambda = \ln(2)/6\,\mathrm{yrs}$, corresponding to a 6-year half-life. $M^{S}_\infty$ takes a value of 30% of the maximum possible minting supply of 1.1B tokens. Tokens emitted via simple minting are independent of network power. This is similar to minting schemes present in other blockchains.[19]

The second component of minting in Filecoin is baseline minting, $M^{B}_d$. Baseline minting depends on network power and aims to align incentives with the growth of the network's utility. The minting function still follows an exponential decay; however, it now decays based on the effective network time, $\theta_d$. The equations describing this are:

$$M^{B}_d = M^{B}_\infty \cdot (1 - e^{-\lambda \theta_d}), \qquad \theta_d = \frac{1}{g}\ln\Big(\frac{g R^{\Sigma}_d}{b_0} + 1\Big), \qquad R^{\Sigma}_d = \sum_{d \in D} \min\{b_d,\, P^{RB}_d\}. \qquad (8)$$

From these definitions, we can compute the cumulative baseline minting by day $d$ from the cumulative capped RBP of the network:

$$M^{B}_d = M^{B}_\infty \cdot \Big(1 - e^{-\frac{\lambda}{g}\ln\big(\frac{g R^{\Sigma}_d}{b_0} + 1\big)}\Big) = M^{B}_\infty \cdot \Bigg(1 - \Big(\frac{g R^{\Sigma}_d}{b_0} + 1\Big)^{-\lambda/g}\Bigg). \qquad (9)$$

In this expression, $M^{B}_\infty$ takes a value of 70% of the maximum possible supply of 1.1B tokens. $R^{\Sigma}_d$ is the cumulative capped network RBP; it is the sum of the point-wise minimum of the network's RBP and the baseline storage function for each day:

$$b_d = b_0\, e^{g d}. \qquad (10)$$

The baseline storage function serves as a target for the network to hit in order to maximize the baseline minting rate. In this expression, $g = \log(2)/365$ is the baseline storage growth rate, which corresponds to a doubling of the baseline every year, and $b_0 = 2.88888888\,\mathrm{EiB}$ is the initial baseline storage.
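To illustrate how equations (5)-(10) fit together, the sketch below computes cumulative minting from a raw-byte-power trajectory. It is a simplified illustration under our own variable names and unit conventions (time in days, power in EiB), not the authors' simulation code.

```python
import numpy as np

LAMBDA = np.log(2) / (6 * 365)   # simple-minting decay rate per day (6-year half-life)
G = np.log(2) / 365              # baseline growth rate per day
B0 = 2.88888888                  # initial baseline storage, EiB
M_MAX = 1.1e9                    # maximum possible minting supply, FIL
M_S_INF, M_B_INF = 0.3 * M_MAX, 0.7 * M_MAX

def qap(rbp: np.ndarray, gamma: float) -> np.ndarray:
    """Quality-adjusted power from raw-byte power and FIL+ rate gamma, eq. (5)."""
    return (1 - gamma) * rbp + 10 * gamma * rbp

def minted_by_day(rbp_eib: np.ndarray) -> np.ndarray:
    """Cumulative minted FIL, M_d = M_d^S + M_d^B, for a daily RBP trajectory in EiB."""
    days = np.arange(len(rbp_eib))
    m_simple = M_S_INF * (1 - np.exp(-LAMBDA * days))                     # eq. (7)
    baseline = B0 * np.exp(G * days)                                      # eq. (10)
    r_sigma = np.cumsum(np.minimum(baseline, rbp_eib))                    # eq. (8)
    m_baseline = M_B_INF * (1 - (G * r_sigma / B0 + 1) ** (-LAMBDA / G))  # eq. (9)
    return m_simple + m_baseline

m = minted_by_day(np.full(365, 8.0))  # hypothetical flat 8 EiB RBP over one year
```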
**Vesting**

Vesting supply, which can contribute 0.9B tokens, is modelled daily, summing across the set of recipients $R$ as

$$V_d = \sum_{r \in R} V_{r,d}. \qquad (11)$$

Different recipients have different linear vesting schedules.

**Locked tokens**

Locked tokens in the network, $L_d$, are made up of storage collateral $L^{S}_d$ and vesting block rewards $L^{R}_d$:

$$L_d = L^{S}_d + L^{R}_d. \qquad (12)$$

The locked balance for vesting block rewards is modeled as

$$L^{R}_d = 0.75 \sum_{\tau \le d} \Delta M_{d-\tau} \cdot r^{R}_{\tau}, \qquad (13)$$

where $\Delta M_d$ is the daily minted rewards and $r^{R}$ is a vector specifying a linear release over 180 days. The locked storage collateral is modeled as having the following dynamics:

$$L^{S}_d = \sum_{\tau \le d} \Delta L^{S}_{d-\tau} \cdot r^{S}_{\tau}, \qquad (14)$$

where $\Delta L^{S}_d$ is the newly locked storage collateral and $r^{S}$ is a vector specifying a release schedule. Newly locked collateral tokens are given by

$$\Delta L^{S}_d = \Delta L^{SP}_d + \Delta L^{CP}_d, \qquad (15)$$

where the 'storage pledge' locked tokens are

$$\Delta L^{SP}_d = \max(20 \cdot \Delta M_d,\, 0), \qquad (16)$$

and the 'consensus pledge' locked tokens are

$$\Delta L^{CP}_d = \max\Bigg(\frac{0.3 \cdot S_d \cdot \Delta P^{QA}_d}{\max\{P^{QA}_d,\, b_d\}},\, 0\Bigg), \qquad (17)$$

where $\Delta P^{QA}_d$ denotes the new quality-adjusted power onboarded on day $d$.

**Burnt tokens**

Burnt tokens are modeled as consisting of termination fees $B^{T}_d$ and base fees from gas usage $B^{G}_d$:

$$B_d = B^{T}_d + B^{G}_d, \qquad (18)$$

where terminations are accounted for by aggregating agent decisions, and gas fees increase linearly as $B^{G}_d = \beta d$ at an average rate $\beta$.
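As a concrete reading of equations (15)-(17), the following sketch (ours, with purely hypothetical input values) computes the collateral an SP must lock to onboard a given amount of quality-adjusted power on a single day.

```python
def new_locked_collateral(delta_m: float, circ_supply: float, delta_qap: float,
                          network_qap: float, baseline: float) -> float:
    """Daily locked collateral, Delta L^S_d = storage pledge + consensus pledge.

    delta_m     -- daily minted rewards attributable to the onboarded power (eq. (16))
    circ_supply -- circulating supply S_d, in FIL
    delta_qap   -- newly onboarded QA power, same units as network_qap and baseline
    """
    storage_pledge = max(20.0 * delta_m, 0.0)                        # eq. (16)
    consensus_pledge = max(0.3 * circ_supply * delta_qap
                           / max(network_qap, baseline), 0.0)        # eq. (17)
    return storage_pledge + consensus_pledge

# Hypothetical numbers, purely for illustration:
pledge = new_locked_collateral(delta_m=0.02, circ_supply=450e6,
                               delta_qap=0.001, network_qap=20.0, baseline=12.0)
```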
### 4. ABM of Filecoin

Utilizing the framework developed in Section 2, we create an ABM of Filecoin. Storage providers (SPs) are mapped to agents, and the environment consists of the supply dynamics described in Section 3. We divide the environment into three logical modules: a) network state, b) forecasting, and c) the external environment. These components interact with each other in the following manner.

Agents determine the amount of power they will onboard and renew onto the network for day $d$. All agents' decisions are aggregated and passed into the network state module. Using the developed model of the supply dynamics, the network state is updated. By utilizing both historical network metrics and network forecasting information, agents can make rational decisions. Finally, an external environment simulates constraints that SPs are subject to in the real world, such as borrowing costs of pledge collateral. These components interact to create a closed-loop simulation of the Filecoin economy. Fig. 2 summarizes the components and dataflow of the Filecoin ABM.

Fig. 2. Summary of the Agent Based Model of the Filecoin network, with arrows indicating direction of data flow.

**Agents**

Agents directly influence the outcomes of the simulation, since their actions are aggregated and used as inputs to update the network state. We have developed three types of agents which use the network and forecasting information in different ways to make decisions regarding onboarding and renewing power (a sketch of one such decision rule is given after this list):

(1) DCAAgent - This is the dollar-cost-averaging agent; it does not use any forecasting information or historical network information to make decisions. The agent is configured to onboard a constant amount of power per day, together with the percentage of that power which corresponds to verified deals and the percentage of expiring power which should be renewed. This is a dollar-cost-averaging strategy and can be useful in understanding performance relative to more complex strategies.

(2) FoFRAgent - This agent utilizes the rewards/sector forecast provided by the network to internally forecast the FIL-on-FIL returns (FoFR) of onboarding sectors for various sector durations, where FoFR = rewards/pledge. This metric can additionally be generalized to introduce arbitrary cost structures. Because the pledge (Eq. (17)) depends on the network QAP, and agents must make decisions for a given timestep before the overall network QAP is aggregated for that day, it is approximated using the previous day's pledge. If the estimated FoFR for any of the tested sector durations exceeds a configurable threshold (which indirectly represents the risk profile of the agent), then the agent will onboard a configured amount of power. It will also renew a configured amount of power under the same condition.

(3) NPVAgent - This agent utilizes the rewards/sector forecast provided by the network to compute the net present value (NPV) of onboarding power for various sector durations. NPV is the present value of the expected rewards/sector less costs/sector. Present value is computed using the continuous discounting formula. The agent's discount rate, configured upon instantiation, is a proxy for the risk profile of the agent, with a higher discount rate representing a higher risk aversion. The agent will onboard and renew power at the sector duration which maximizes NPV, but will take no action on day $d$ if NPV < 0 for all durations tested.
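The NPVAgent's rule can be summarized in a few lines. This is our paraphrase of the description above (continuous discounting of forecast rewards net of costs), with our own function names and hypothetical example parameters.

```python
import numpy as np

def npv(daily_rewards: np.ndarray, daily_costs: np.ndarray, rate: float) -> float:
    """Net present value with continuous discounting e^{-rate * t}, t in years."""
    t = np.arange(len(daily_rewards)) / 365.0
    return float(np.sum((daily_rewards - daily_costs) * np.exp(-rate * t)))

def choose_duration(rewards: dict, costs: dict, rate: float):
    """Pick the sector duration (days) maximizing NPV; None if all NPVs are negative."""
    best = max(rewards, key=lambda dur: npv(rewards[dur], costs[dur], rate))
    return best if npv(rewards[best], costs[best], rate) >= 0 else None

# Hypothetical rewards/sector and cost forecasts for two candidate durations:
fc = {180: np.full(180, 0.010), 360: np.full(360, 0.008)}     # FIL per day
cost = {180: np.full(180, 0.004), 360: np.full(360, 0.004)}   # FIL per day
best = choose_duration(fc, cost, rate=0.25)                   # discount rate = risk proxy
```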
Agents use this information to make rational decisions regarding onboarding and renewing power. The purpose of this is to model ----- LEDGER VOL X (20XX) X X Fig. 3. Validating our supply dynamics and ABM through backtesting. (a) The mined FIL of the model to the historical data, and (b) The circulating supply computed by the model against historical data. realities that SPs have to face, when determining their strategy for being involved in the Filecoin network. Additional real-world complexities can also be modeled here. **Model Validation** We begin by validating the model of Filecoin’s supply dynamics that was developed and described #### in Section 3, using backtesting. Our approach is to instantiate one DCAAgent which onboards and renews the historical power that was onboarded onto the network for that day. This is in contrast to the typical use-case of an agent, which is making daily decisions about whether and #### how much power to onboard and renew. Then, the relevant statistics for circulating supply are calculated from the start of simulation to the present date. This is then compared against actual statistics from the Filecoin network retrieved from Spacescope.[22] Fig.3 shows the results of this experiment: a) shows the minted tokens, and b) shows the circulating supply. For each of these network statistics, the implemented model tracks the historical data with good accuracy. Slight #### differences are observed, and these can be attributed to not modeling certain intricacies of the Filecoin network, such as variable sector durations. ### 5. Experiments and Results #### In this section, we describe some experiments that showcase the utility of ABM in modeling blockchain networks, using the Filecoin network as a case study. **Sensitivity of Rewards to External Discount Rates** #### In this experiment, we explore how the cryptoeconomics of Filecoin and external factors such as borrowing rates affect agent rewards. We instantiate two subpopulations of NPVAgents, one subpopulation is configured to only onboard verified deals (which corresponds to FIL+ power), while the second is configured to only onboard storage capacity (CC power). Both subpopulations of agents are configured to have identical risk profiles by instantiating the agents with the same #### discount rates. Fig. 4 shows agent rewards trajectory, with different colors indicating different external discount rates. #### Fig. 4(a) shows that irrespective of external discount rates, FIL+ agents are more profitable than CC agents. This is a direct consequence of the cryptoeconomic mechanism in place in ----- LEDGER VOL X (20XX) X X Fig. 4. Experiment exploring the sensitivity of returns to external discount rates with two subpopulations of agents, FIL+ and CC. In (a) both exhibit the same risk profile, (b) the CC agent has 2x the risk aversion of the FIL+ agent. Filecoin to incentivize FIL+ data, through the 10x quality adjusted (QA) multiplier. Secondly, we see the effect of external borrowing rates on agent profitability. As expected, higher rewards are correlated with lower borrowing rates. However, the rewards trajectory does not change linearly with the borrowing rate and starts to oscillate as borrowing rates increase. This is an example of an interesting dynamic that emerges as a result of the agent based simulation. #### We extend this experiment by altering one aspect of the previous experimental setup - that is, we increase the risk aversion of the CC agent to be two-times the risk aversion of the FIL+ agent. 
We then examine the agent rewards trajectory, shown in Fig. 4(b). Because the FIL+ agents have the same risk as before, their rewards trajectories are identical. However, we notice #### that when the external discount rate is 30%, the risk-averse CC agent manages a more positive rewards trajectory than the non risk-averse CC agent. The effect of this disappears as the external borrowing rates decrease, however. **Wealth Concentration** #### In this experiment, we explore how the distribution of starting capital in the cryptoeconomic network affects the ability to get rewards from the network. Our experimental setup consists of five DCAAgents which are configured to represent different levels of capitalization. This is represented with a vector [a1, a2, a3, a4, a5], where the relative capitalization of ai is defined as _ci = ai/_ ∑[5]i=1 _[a][i][.]_ In Filecoin, onboarding power requires, in addition to pledge collateral, sealing of sectors via cryptographic proofs that require large computational resources. It is reasonable to assume that agents with larger capitalization will have more hardware resources to perform this than agents with smaller capitalization, thereby having a larger sealing throughput. To model this, we scale #### how much power an agent is able to onboard and renew, per day, by its relative capitalization. The mapping from capitalization to sealing throughput captures the idea of wealth concentration. #### To compare and interpret the results, the overall power onboarded is kept constant across the three experiments. We test three distributions of initial capital: (1) All agents have equal starting capital (20%) - this is considered the baseline and corre ----- LEDGER VOL X (20XX) X X Fig. 5. The trajectory of rewards for agents with various starting capitalizations, relative to the baseline distribution, where all agents are equally capitalized (20%). sponds to the the vector [1, 1, 1, 1, 1]. (2) One agent has 50% of the starting capital, and the remainder have 50/4 = 12.5% starting capital each and corresponds to the vector [4, 1, 1, 1, 1]. (3) The agent capitalization follows the distribution: [33%, 27%, 20%, 13%, 7%] and corresponds to the configuration vector [5, 4, 3, 2, 1]. #### Fig. 5 shows the reward trajectories of each agent, relative to the baseline case where each agent has 20% of the starting capital. We observe that relative to the max-capitalized agent, the rewards trajectories of other agents are on a decreasing trend. This is a consequence of the fact that both onboarding and renewals are a function of the agent capitalization. ### 6. Conclusion #### In this paper, we have outlined a framework for applying ABM to modeling utility based blockchain economies, and validated our framework with Filecoin as a case study. Our experiments shed light on some interesting aspects of Filecoin, including agent reward trajectories when #### taking into account external lending rates, and how the cryptoeconomic structure of Filecoin distributes wealth. The sensitivity experiments indicate that creating new, competitive lending markets with smart #### contracts leveraging programmable platforms such as FVM[23] can enable network growth and increase miner returns. The wealth concentration experiments indicate that starting capitalization has a significant effect on total rewards in the future. 
By explicitly modeling this effect with the supply dynamics, one can then design new incentive structures to either accentuate, maintain, or perhaps reverse the trend based on the goals of the project. Insights such as these, enabled by the ABM framework, can help designers and creators of cryptoeconomies to more efficiently achieve their goals. This indicates that ABM can be a valuable tool for researchers to better understand and design blockchain economies. #### In the future, we plan to explore additional aspects of blockchain economies that are well mapped to ABMs, such as the effect of information quality, availability, and lag on agent reward #### trajectories, and related network science questions. Another potential research direction is to include uncertainty by considering a probabilistic ABM, while balancing the computational constraints using methods such as Multi-level and Multi-Index Monte Carlo methods.[24–27] ----- LEDGER VOL X (20XX) X X ### Author Contributions KK developed the ABM codebase and helped devise experiments that were conducted with the #### framework. TM developed the mathematical models of the Filecoin economy and steered the project. They both contributed equally to manuscript preparation. TM and MS implemented the initial mathematical models, which were then ported to the ABM framework. JC provided formalism and consulting on ABM related topics. AC and ZZ helped putting the project in larger context and helped with manuscript preparation. ### Notes and References 1 Voshmgir, S., Zargham, M., et al. “Foundations of cryptoeconomic systems.” Research Institute for Cryptoe_conomics, Vienna, Working Paper Series/Institute for Cryptoeconomics/Interdisciplinary Research 1._ 2 Davidson, S., De Filippi, P., Potts, J. “Economics of Blockchain.” (2016) doi:10.2139/ssrn.2744751 URL ``` https://papers.ssrn.com/abstract=2744751. ``` 3 Macal, C. M., North, M. J. “Agent-based modeling and simulation.” In Proceedings of the 2009 winter _simulation conference (WSC) IEEE 86–98 (2009) ._ 4 Terano, T. “A Perspective on Agent-Based Modeling in Social System Analysis.” In G. S. Metcalf, K. Kijima, H. Deguchi (Eds.), Handbook of Systems Sciences Singapore: Springer 1–13 (2020)doi: [10.1007/978-981-13-0370-8 5-1 URL https://doi.org/10.1007/978-981-13-0370-8_5-1.](https://doi.org/10.1007/978-981-13-0370-8_5-1) 5 Chen, S.-H., Chang, C.-L., Du, Y.-R. “Agent-based economic models and econometrics.” _The_ _Knowledge_ _Engineering_ _Review_ **27.2** 187–219 (2012) doi:10.1017/ S0269888912000136 publisher: Cambridge University Press URL `https://www.` ``` cambridge.org/core/journals/knowledge-engineering-review/article/abs/ agentbased-economic-models-and-econometrics/DF3E4987809567A9B277F83ED6E22E00. ``` 6 Fagiolo, G., Guerini, M., Lamperti, F., Moneta, A., Roventini, A. “Validation of Agent-Based Models in Economics and Finance.” In C. Beisbart, N. J. Saam (Eds.), Computer Simulation Validation: Fundamental _Concepts, Methodological Frameworks, and Philosophical Perspectives Cham: Springer International Publish-_ ing Simulation Foundations, Methods and Applications 763–787 (2019)doi:10.1007/978-3-319-70766-2 31 [URL https://doi.org/10.1007/978-3-319-70766-2_31.](https://doi.org/10.1007/978-3-319-70766-2_31) 7 Eubank, S., et al. “Modelling disease outbreaks in realistic urban social networks.” Nature 429.6988 180–184 (2004). 8 Struchkov, I., Lukashin, A., Kuznetsov, B., Mikhalev, I., Mandrusova, Z. 
“Agent-Based Modeling of Blockchain Decentralized Financial Protocols.” In 2021 29th Conference of Open Innovations Association _(FRUCT) 337–343 (2021) doi:10.23919/FRUCT52173.2021.9435601 iSSN: 2305-7254._ 9 Cocco, L., Tonelli, R., Marchesi, M. “An Agent Based Model to Analyze the Bitcoin Mining Activity and a Comparison with the Gold Mining Industry.” Future Internet 11.1 8 (2019) doi:10.3390/fi11010008 number: 1 [Publisher: Multidisciplinary Digital Publishing Institute URL https://www.mdpi.com/1999-5903/11/1/](https://www.mdpi.com/1999-5903/11/1/8) ``` 8. ``` 10 Hulsemann, P., Tumasjan, A. “Walk this Way! Incentive Structures of Different Token Designs for¨ [Blockchain-Based Applications.” ICIS 2019 Proceedings URL https://aisel.aisnet.org/icis2019/](https://aisel.aisnet.org/icis2019/blockchain_fintech/blockchain_fintech/7) ``` blockchain_fintech/blockchain_fintech/7. ``` 11 Fernandez, J. D., Barbereau, T., Papageorgiou, O. “Agent-based Model of Initial Token Allocations: Evaluating Wealth Concentration in Fair Launches.” (2022) 2208.10271. 12 Benedetti, H., Abarzua, L., Caceres Fuentes, C. “Utility Tokens.” (2021) doi:10.2139/ssrn.4088568 URL´ ``` https://papers.ssrn.com/abstract=4088568. ``` [13 Akcin, O., Streit, R. P., Oommen, B., Vishwanath, S., Chinchali, S. https://eprint.iacr.org/2022/](https://eprint.iacr.org/2022/1492) ``` 1492 “A Control Theoretic Approach to Infrastructure-Centric Blockchain Tokenomics.” (2022) Cryptology ``` [ePrint Archive, Paper 2022/1492 URL https://eprint.iacr.org/2022/1492.](https://eprint.iacr.org/2022/1492) ----- LEDGER VOL X (20XX) X X 14 Zhang, Z., Zargham, M., Preciado, V. M. “On modeling blockchain-enabled economic networks as stochastic dynamical systems.” Applied Network Science 5.1 1–24 (2020). 15 Benet, J., Greco, N. “Filecoin: A decentralized storage network.” Protoc. Labs 1–36. 16 Haleem, A., Allen, A., Thompson, A., Nijdam, M., Garg, R. “A decentralized wireless network.” Helium _Netw 3–7._ 17 Buterin, V., et al. “A next-generation smart contract and decentralized application platform.” white paper **3.37 2–1 (2014).** [18 Labs, P. apr 28, 2023 “Filecoin Spec.” (2018) URL https://spec.filecoin.io.](https://spec.filecoin.io) 19 Nakamoto, S. “Bitcoin: A peer-to-peer electronic cash system.” Decentralized business review 21260. 20 Neal, R. M., et al. “MCMC using Hamiltonian dynamics.” Handbook of markov chain monte carlo 2.11 2 (2011). 21 Hoffman, M. D., Gelman, A., et al. “The No-U-Turn sampler: adaptively setting path lengths in Hamiltonian Monte Carlo.” J. Mach. Learn. Res. 15.1 1593–1623 (2014). [22 Labs, S. apr 28, 2023 “Spacescope.” (2018) URL https://spacescope.io.](https://spacescope.io) [23 Accessed: 2022-06-09 “Introducing the Filecoin Virtual Machine.” https://filecoin.io/blog/posts/](https://filecoin.io/blog/posts/introducing-the-filecoin-virtual-machine/) ``` introducing-the-filecoin-virtual-machine/. ``` 24 Giles, M. B. “Multilevel monte carlo methods.” Acta numerica 24 259–328 (2015). 25 Madrigal-Cianci, J. P., Kristensen, J. “Time-efficient Decentralized Exchange of Everlasting Options with Exotic Payoff Functions.” In 2022 IEEE International Conference on Blockchain (Blockchain) 427–434 (2022) doi:10.1109/Blockchain55522.2022.00066. 26 Madrigal-Cianci, J. P., Nobile, F., Tempone, R. “Analysis of a class of multilevel Markov chain Monte Carlo algorithms based on independent Metropolis–Hastings.” SIAM/ASA Journal on Uncertainty Quantification 11.1 91–138 (2023). 27 Qian, E., Peherstorfer, B., O’Malley, D., Vesselinov, V. 
V., Willcox, K. “Multifidelity Monte Carlo estimation of variance and sensitivity indices.” SIAM/ASA Journal on Uncertainty Quantification 6.2 683–706 (2018). -----
{ "disclaimer": "Notice: Paper or abstract available at https://arxiv.org/abs/2307.15200, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "CLOSED", "url": "https://arxiv.org/pdf/2307.15200" }
2023
[ "JournalArticle" ]
true
2023-07-27T00:00:00
[ { "paperId": "79d0267c870ed7ef88625c0571808220523a82e0", "title": "Analysis of a Class of Multilevel Markov Chain Monte Carlo Algorithms Based on Independent Metropolis-Hastings" }, { "paperId": "301bc3c7c0665488cbb31294acce73218a00e15a", "title": "A Control Theoretic Approach to Infrastructure-Centric Blockchain Tokenomics" }, { "paperId": "ad414c6ad72556d4cdd5930de3fec4a5a52a1ed2", "title": "Agent-based Model of Initial Token Allocations: Evaluating Wealth Concentration in Fair Launches" }, { "paperId": "e629c12ea77ca1ac79e6bc109e97c41da674307c", "title": "Time-efficient Decentralized Exchange of Everlasting Options with Exotic Payoff Functions" }, { "paperId": "90c80f9286abbc460bf5130adcf72868e80fe02b", "title": "Agent-Based Modeling of Blockchain Decentralized Financial Protocols" }, { "paperId": "1cb9f168dbf66cff10891f9ee29a31bfa86ef32f", "title": "On modeling blockchain-enabled economic networks as stochastic dynamical systems" }, { "paperId": "8ea45c6b86d702db129a32d2c50caf37c7b86cd3", "title": "Foundations of Cryptoeconomic Systems" }, { "paperId": "70a881f0c11bc5c17654269aa8df8c14a36eed28", "title": "Validation of Agent-Based Models in Economics and Finance" }, { "paperId": "6749c78e9f12012a9fb58194438a24bad641dfd7", "title": "An Agent Based Model to Analyze the Bitcoin Mining Activity and a Comparison with the Gold Mining Industry" }, { "paperId": "f0ae6eda680130ae4e0701febf6886c65f8b6b3f", "title": "Multifidelity Monte Carlo Estimation of Variance and Sensitivity Indices" }, { "paperId": "13fca50483e298e931e1b804e95089c26a773cd9", "title": "Economics of Blockchain" }, { "paperId": "d16a7375c98f888333ebe253062a767c784b9dfb", "title": "Agent-based economic models and econometrics" }, { "paperId": "e1103d528d874a9e8e84ca443fe3fd5c1ff9eb9e", "title": "The No-U-turn sampler: adaptively setting path lengths in Hamiltonian Monte Carlo" }, { "paperId": "3b0821ff22fdffc95b0caae1f9660773eb54dc52", "title": "Handbook of Markov Chain Monte Carlo" }, { "paperId": "b716f735fb9b229c8227495e596e2d52f2981c33", "title": "MCMC Using Hamiltonian Dynamics" }, { "paperId": "d9f25ae6216f47d8351705cab828219362d7099f", "title": "Agent-based modeling and simulation" }, { "paperId": "1413dbfbbae1b59d52656db3dc48a4ee278e082f", "title": "Modelling disease outbreaks in realistic urban social networks" }, { "paperId": "d512f19f7cdf5994df388d8bc5c5a828c311b870", "title": "Multilevel Monte Carlo Methods" }, { "paperId": "9103b4aa754ab20959a30473c7e9a54fe7d51248", "title": "Utility Tokens" }, { "paperId": "bc18522debf8b54c660ec21d404538e609333d56", "title": "A Perspective on Agent-Based Modeling in Social System Analysis" }, { "paperId": "a93cf4648862778128c10dfa7f48da1347b4cb82", "title": "Computer Simulation Validation - Fundamental Concepts, Methodological Frameworks, and Philosophical Perspectives" }, { "paperId": "5f6de94b4b150f200e1e3d4f3bf8390720b08680", "title": "Walk this Way! 
Incentive Structures of Different Token Designs for Blockchain-Based Applications" }, { "paperId": null, "title": "Spacescope" }, { "paperId": "0dbb8a54ca5066b82fa086bbf5db4c54b947719a", "title": "A NEXT GENERATION SMART CONTRACT & DECENTRALIZED APPLICATION PLATFORM" }, { "paperId": "4e9ec92a90c5d571d2f1d496f8df01f0a8f38596", "title": "Bitcoin: A Peer-to-Peer Electronic Cash System" }, { "paperId": null, "title": "“ Filecoin : A decentralized storage network" }, { "paperId": null, "title": "Introducing the Filecoin Virtual Machine" }, { "paperId": null, "title": "Filecoin Spec" }, { "paperId": null, "title": "Study the cryptoeconomics and robustness of the blockchain to agent behavior" } ]
10067
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Business", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/021bb3f039ede8bfe80eb9749a32b3e9c8bc6b29
[ "Computer Science" ]
0.897764
Flexible Integration of Blockchain with Business Process Automation: A Federated Architecture
021bb3f039ede8bfe80eb9749a32b3e9c8bc6b29
CAiSE Forum
[ { "authorId": "50696201", "name": "M. Adams" }, { "authorId": "2914969", "name": "S. Suriadi" }, { "authorId": "2116451209", "name": "Akhil Kumar" }, { "authorId": "143613819", "name": "A. Hofstede" } ]
{ "alternate_issns": null, "alternate_names": null, "alternate_urls": null, "id": null, "issn": null, "name": null, "type": null, "url": null }
null
# Flexible Integration of Blockchain with Business Process Automation: A Federated Architecture

Michael Adams[1], Suriadi Suriadi[1], Akhil Kumar[2], and Arthur H. M. ter Hofstede[1] 1 Queensland University of Technology, Brisbane, Australia _{mj.adams, s.suriadi, a.terhofstede}@qut.edu.au_ 2 Smeal College of Business, Penn State University, University Park, USA akhil@psu.edu

**Abstract. Blockchain technology enables various business transactions** to be performed in an immutable and transparent manner. Within the business process management community, blockchain technology has been positioned as a way to better support the execution of inter-organisational business processes, where the entities involved may not completely trust each other. However, the architectures proposed thus far in the literature for blockchain-enabled business process management can be described as "heavy-weight", since they promote the blockchain platform as the monolithic focal point of all business logic and process operations. We propose an alternative: a federated and flexible architecture that leverages the capabilities of blockchain, but without overloading the functionalities of the blockchain platform with those already extant in Business Process Management Systems (BPMSs). We illustrate its benefits, and demonstrate its feasibility, through the implementation of a prototype.

**Keywords: blockchain; process flexibility; business process automation;** business process management systems.

## 1 Introduction

A blockchain is a tamper-proof, replicated and distributed ledger [10] to which multiple parties can append transactional records in such a way that modification is prevented, in a technically-enforceable manner. Blockchain technology effectively guarantees that transactions, once recorded, become immutable [17], facilitating the execution of transactions across multiple, potentially untrusted parties without the need for a trusted intermediary. Naturally, blockchain opens up new opportunities to support the execution of cross-organisational business processes (i.e. those processes that necessitate interactions involving multiple discrete players) typically seen in many domains, such as supply chain management and manufacturing. In recent years, the Business Process Management (BPM) community has investigated ways to exploit blockchain for secure, cross-organisational process execution (see [6-8, 19] for some initial approaches). In this paper, we specifically focus on the alternative architectural designs that integrate blockchain technologies with business process management systems (BPMS) to support process executions involving multiple, independent parties. We call such a system a _blockchain-integrated BPMS_. A prominent architecture proposed for blockchain-integrated BPMS transforms a business process, expressed as one or more process models, into smart contracts (programmable transactions) that are then executed entirely upon a blockchain platform [6,19]. That is, all business rules, branching logic, instance data, resource allocation, access authorisations and process state management is deployed to and handled by the blockchain platform. Thus, the focal point of this architecture resides in the blockchain and the different parties involved in the business process must interact directly with this blockchain, both during process design time and runtime executions.
We shall refer to such an architecture as _blockchain-centric_. While blockchain-centric architectures may be appealing for some business process applications, and under certain threat assumptions and/or risk scenarios, it is not a universal solution. It is a heavy-weight architecture, with a rigidity that may not be necessary, or even desirable, in many other business process applications, for example where interactions between multiple parties are loosely-coupled and/or may involve asynchronous-type interactions. A heavyweight architecture also overloads a blockchain system with a host of supporting compilers, components and mechanisms required to wholly accommodate business process design and execution within a distributed ledger. In effect, this tight integration necessitates a duplication of the capabilities that already exist within core execution engines of BPMSs. Hence, we propose an alternative federated architecture that is more decentralised, cooperative and flexible, simpler to realise, and better suited to meet the needs of a wide variety of cross-organisational settings. The proposed architecture allows component parts to each perform their fit-for-purpose capabilities in a federated whole, rather than overloading components with functionalities that are better performed by others. This architecture is flexible in that it is not tied to any particular type of blockchain platform or BPMS. It supports the minimisation of append operations to a blockchain, which are known to be resource-intensive [18,21], and does not require the creation and propagation of multiple smart contracts per execution instance. This paper is structured as follows: a background discussion and related work are presented in Section 2, Section 3 establishes the need for a federated blockchain-integrated BPMS, and Section 4 describes the proposed architecture, while Section 5 illustrates its implementation. A comparative discussion of the federated and blockchain-centric approaches is presented in Section 6, followed by the conclusion of the paper.

## 2 Background and Related Work

The key advantages of blockchain, besides immutability, include: visibility (all authorised participants can view the transactions); validation (transactions are endorsed by peers through a designated consensus mechanism prior to being written to the chain); and resilience (a replicated ledger means there is no single point-of-failure). A blockchain system can be _permissioned_ (exercise membership control) or _permissionless_ (publicly accessible). For example, Ethereum[1] [3] is (by default) a permissionless blockchain platform, where any peer can join to read or submit transactions at any time. Moreover, there is no central entity to manage membership, although private and permissioned blockchains can also be configured. Permissioned blockchain systems are designed to better address concerns around transaction security, privacy and scalability [1]. Hyperledger Fabric[2] [4] is an example of a permissioned blockchain framework. Another key aspect of blockchain technology is the provision of so-called smart contracts [5,15], i.e. executable scripts that reside on the blockchain and automate the steps and rules corresponding to the business logic of the bespoke transactional operations.
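To make these notions concrete, the following minimal Python sketch (ours; not Ethereum or Hyperledger Fabric code, and all names are illustrative) models a permissioned, hash-chained ledger in which transactions must be endorsed by a quorum of known members before being appended. Tampering with any recorded block breaks the hash links, which is the property the immutability guarantee rests on.

```python
# Didactic toy ledger: membership control, peer endorsement, hash chaining.
import hashlib
import json
import time


def block_hash(block: dict) -> str:
    """Hash the block content, including the previous block's hash."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()


class PermissionedLedger:
    def __init__(self, members, endorsers_required=2):
        self.members = set(members)            # membership control (permissioned)
        self.endorsers_required = endorsers_required
        self.chain = [{"index": 0, "prev": "0" * 64, "tx": "genesis"}]

    def submit(self, member, tx, endorsements):
        # Only known members may submit (a permissionless chain would skip this).
        if member not in self.members:
            raise PermissionError(f"{member} is not a channel member")
        # Stand-in for a consensus mechanism: require N member endorsements.
        if len(set(endorsements) & self.members) < self.endorsers_required:
            raise ValueError("insufficient endorsements")
        prev = self.chain[-1]
        block = {"index": prev["index"] + 1, "prev": block_hash(prev),
                 "tx": tx, "ts": time.time()}
        self.chain.append(block)
        return block

    def verify(self):
        """Any tampering with an earlier block breaks the hash links."""
        return all(self.chain[i]["prev"] == block_hash(self.chain[i - 1])
                   for i in range(1, len(self.chain)))
```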
In recent research efforts towards integrating blockchain technology with BPM [6,10,19], the authors propose an architecture that tightly integrates business process execution with blockchain by encapsulating the entire business process logic into smart contracts. In this approach, a translator component takes a process specification as input and generates a set of corresponding smart contracts per process instance. In addition, a choreography monitor uses smart contracts to control a collaborative business process. A prototype has been developed for the Ethereum platform [19]. Architectural design issues of blockchain-based systems with an eye towards quality and performance attributes are addressed in [20] in the form of a taxonomy and flowchart. Other performance issues that have been addressed are availability [18] and latency [21]. Methods for optimising execution of business processes on an Ethereum blockchain by improving data structures and runtime components are discussed in [6] and demonstrated in a prototype called Caterpillar [8]. Approaches for implementing collaborative, data-aware business processes on blockchain using the business artifact paradigm are discussed in [2,7], focussing on a new business collaboration language. Sturm et al. [14] develop a generic approach to control-flow management within the blockchain by having one contract that handles choice and parallel structures. However, the control-flow capabilities are limited and data management is not discussed. There is also a plethora of approaches to inter-organisational process management that use platforms and environments other than blockchain, for example [9,11,13]. All these related approaches have helped to locate our work in context. However, our approach is different in that we believe that the essential functionality of a BPM system should not be migrated to the blockchain. Instead, we explore a lean approach (along the lines of [14]) wherein the BPM system can interface with the blockchain as a repository of reliable data and for executing key contractual terms through smart contracts.

1 https://www.ethereum.org/ 2 https://www.hyperledger.org/projects/fabric

## 3 Towards a Federated Blockchain-integrated BPMS

Consider the pharmaceutical use case scenario shown in Figure 1. In this cross-organisational process, a Pharmacy places an order for medical supplies with its Distributor, who in turn requests the production of the pharmaceuticals by the Manufacturer. Once the pharmaceuticals are manufactured, they are delivered to the Distributor who then sends them to the Pharmacy.

**Fig. 1. Pharmaceutical Supply Process - Multiple Ledgers** (the figure shows the three private processes, with different types of blockchain used between any pair of distributor and pharmacy, and between any pair of distributor and manufacturer, at different process instances, e.g. Blockchain Type B with Manufacturer 1.)

When this process is executed, there is a potential for conflict across different parties. For example, if the Distributor fails to deliver the ordered pharmaceuticals on time, the Distributor may blame the Manufacturer for being late with production, or the Distributor may dispute the date and time when it received the original order. Therefore, the use of blockchain in recording the process transactions can be beneficial.
Moreover, each organisation can exercise full control over their own private business process, and share information of only selected activities that involve cross-organisational interactions, as shown in Figure 1. There are many desirable features of this approach. Firstly, the parties in the process do not need to agree on a common inter-organisational process. They may even be on different blockchain platforms so long as they are compatible. Secondly, the lower transparency requirement will increase the willingness of the ----- Flexible Integration of Blockchain with Process Automation 5 parties to cooperate with each other. Thirdly, there is more scalability in such an arrangement since in general a pharmacy will deal with multiple distributors, and a distributor, in turn, with multiple manufacturers. Thus, this use case calls for a more flexible, decentralised, loosely-coupled and distributed approach based on platform heterogeneity, for both two-party and multi-party interactions, which minimises the need for interactions with the blockchain platform. **Towards a Federated Approach We propose a federated, blockchain inte-** grated BPMS architecture to address the issues identified above. Such an architecture should provide the following properties: – Separation of Concerns: A clear separation of capabilities should be maintained between business logic operations and distributed transactional execution records, with the aim of minimising the performance hit on blockchain operations and maximising the fit-for-purpose capabilities of the BPMS and blockchain platforms. – Platform Heterogeneity: The architecture should allow the use of more than one compatible blockchain platform within and across a composite set _of interacting process instances._ – Compartmentalisation of Interactions: A requirement that all interactions between any two participating parties need to be transparent to all parties involved should not be imposed. A blockchain-centric architecture may perhaps support this through the use of, for example, separate permissioned channels, but this should not be seen as a necessary realisation, and it still imposes the requirement that they share the same blockchain platform. – Single-party Interaction: The architecture should not assume that all interactions between a business process and a blockchain involve multi-party communication. Hence, it should support simple single-party interaction between an organisation’s business process and its corresponding blockchain. ## 4 Conceptual Architecture In our federated approach, each organisation hosts a discrete BPMS that encapsulates a service or middleware component through which it will delegate designated tasks, designed to perform a required inter-organisational activity, within a process execution instance. The service will then interact with a properly configured blockchain network. Each participating service in an inter-organisational process is granted authorisation to a discrete permissioned channel (or other authenticating, secure pipeline) on a blockchain network. A channel is a private overlay that partitions a blockchain network to provide data isolation and confidentiality [1]. Whenever a new block is written, an event notification is generated by the blockchain platform and then relayed to the BPMS through the service. The service will by default listen for events as they occur, but it may also be configured to periodically request the event history from past blocks, to accommodate those ----- 6 M. Adams et al. 
**Fig. 2. Conceptual internal architecture** deployments where connection to the blockchain network is not always available. The service will take one of three actions for each received event notification, depending on how the service has been configured for each event: (1) release a task that has been waiting for the event to occur; (2) launch a new process instance, using the event as a trigger; or (3) ignore the event. Hence, the only information exchanged between organisations is that required for work to be handed over and performed within each organisation (e.g. purchase order, invoice, contract, application). The state of a process instance can be inferred from the history of data associated with it on the blockchain, for example an order has been placed, a shipment was sent, a payment was made, etc. This eliminates the need for sharing additional information about exact process state on the blockchain, or any process definitions, business logic and rules, organisational data, or resource allocations that should remain private to their respective organisations. A transaction (such as placing a purchase order) submitted by one organisation to the blockchain will, within a short period, be written to a block on the blockchain after it is validated by other peer nodes on the network using a validation algorithm, and ordered along with other transactions into a block structure. The creation of a new block will trigger an event notification which may be used by another organisation to complete a task in one of its own processes or to commence a new process instance (see Section 5 for more details). An internal architecture of the proposed approach is given in Figure 2. The BPMS of an organisation will delegate the execution of certain tasks to the blockchain service (middleware component) using the appropriate API along with the requisite data. The subcomponents of the middleware are: – Smart Contract Invoker: Performs smart contract calls on the blockchain to either query the current instance data that has been written to the chain, ----- Flexible Integration of Blockchain with Process Automation 7 or requests the creation of a new transaction to store data to be shared with another organisation. – Event Listener: Listens and responds to events generated by the blockchain network each time a new transaction is created. An event may trigger the completion of a waiting task, or the launch of a new process instance via a call to the BPM engine’s API. – Task Cache: Stores tasks that are waiting for some event to occur on the blockchain, that is some specific data to be made available from another organisation (e.g. order received, invoice produced, etc). When the designated event occurs, that task can be further processed and/or completed, allowing its parent process instance to continue. – Authority Certificate Store: Stores the private and public keys authorising the service to access the channel to read from and submit to the ledger on behalf of its owner organisation. Each call of a smart contract must be accompanied by the appropriate certificates. ## 5 Implementation A prototype service that implements the conceptual architecture described in Section 4 has been realised in the YAWL business process management environment [16]. YAWL was selected because it is robust, fully open-source, and offers a service-oriented architecture, allowing an interactive blockchain service to be implemented independent of existing components. 
However, the generic federated architecture is not limited to the YAWL environment, but rather is applicable to any BPMS that supports the addition of service-oriented or middleware components for interacting with external networks and applications. Importantly, absolutely no changes were required to be made to the YAWL environment itself to enable support for communication and interaction with a blockchain network. The YAWL Blockchain Service, and its source code, can be freely downloaded from the YAWL repository[3]. For this prototype implementation, a Hyperledger Fabric blockchain network was chosen, because it is open-source, can be deployed freely, does not require crypto-currency payments for its operations and supports a permissioned network natively. Again, the architecture is not limited by this choice; other blockchain platforms may be used. The Blockchain Service has been developed as a YAWL custom service and so may have tasks assigned to it at design time via the process editor. At runtime, the engine delegates all such assigned tasks to the service for action, passing task input data and metadata to it via a specific process engine API. Communication between the YAWL Blockchain Service and the Hyperledger Fabric network is handled via the Java software development kit (SDK) for Hyperledger[4]. Each organisation maintains its own discrete YAWL environment and Blockchain Service. 3 https://github.com/yawlfoundation/yawl 4 https://github.com/hyperledger/fabric-sdk-java ----- 8 M. Adams et al. **5.1** **Event Handling** The architecture leverages the event generating capabilities of the blockchain platform to provide change-of-state announcements in the end-to-end process, in particular notifying parties in cross-organisational processes of actions that have been taken by others. Events of interest can be used to release a task that has been waiting for an action to occur, or can signify the triggering of a new process instance within an organisation. For a task that has been designated to wait for an action, a dedicated data structure, which specifies the event to wait for and the values that uniquely identify the event as related to the current end-to-end process instance, is included as an input parameter of the task. On being delegated the task at runtime, the Blockchain Service stores details of the task, and the specifics of the WAIT data values, and compares each incoming event with those parameters. When a match occurs, the task is updated with the values attached to the event, and released, i.e. returned to the core BPMS engine, allowing the process instance to continue. The other type of event of interest to the Blockchain Service signifies the triggering of a new case instance. For example, if a pharmacy raises a purchase order and submits it to a blockchain, the event produced by the blockchain when the order transaction is written to a block can be captured by the Blockchain Service of a pharmaceuticals distributor and used to trigger the creation of a new process to fulfil that order (see Section 5.2 below for more details). Events can be defined as process triggering events via a dynamically loaded configuration file, or via an input data variable for a task, or by using an administration tool. **5.2** **Illustrative Example** An example execution of typical interactions among the three organisations in the supply chain scenario of Section 3 is illustrated in Figure 3. 
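Before walking through the example, the sketch below illustrates the three-way event dispatch described in Section 5.1: release a matching waiting task, launch a new case for a trigger event, or ignore the event. It is written in Python for brevity, whereas the actual service is a Java YAWL custom service using the Hyperledger Fabric SDK; the class, field and event names here are illustrative, not taken from the YAWL code base.

```python
# Minimal sketch of the Blockchain Service's event dispatch (Section 5.1).
from dataclasses import dataclass, field


@dataclass
class WaitingTask:
    task_id: str
    wait_for: dict            # event values that identify this case instance


@dataclass
class BlockchainService:
    task_cache: list = field(default_factory=list)    # tasks awaiting an event
    trigger_events: set = field(default_factory=set)  # event types that start cases

    def on_block_event(self, event: dict):
        # (1) Release a waiting task whose WAIT values match the event.
        for task in list(self.task_cache):
            if all(event.get(k) == v for k, v in task.wait_for.items()):
                self.task_cache.remove(task)
                self.release_task(task, event)
                return
        # (2) Launch a new process instance if the event is a case trigger.
        if event.get("type") in self.trigger_events:
            self.launch_case(event)
            return
        # (3) Otherwise, ignore the event.

    def release_task(self, task, event):
        print(f"releasing {task.task_id} with data {event}")

    def launch_case(self, event):
        print(f"launching new case from {event['type']} event")


# Example: a distributor waits for the manufacturer's shipping notice.
svc = BlockchainService(trigger_events={"purchase_order"})
svc.task_cache.append(WaitingTask("ReceiveAndVerify",
                                  {"type": "shipped", "order": "PO-1"}))
svc.on_block_event({"type": "shipped", "order": "PO-1", "qty": 100})
svc.on_block_event({"type": "purchase_order", "order": "PO-2"})
```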
The processes have been somewhat simplified for clarity in the discussion below, and are depicted in the YAWL language. There are three interacting organisations: a Pharmacy that places orders for the supply of pharmaceuticals, a Distributor that fulfils those orders, and a _Manufacturer that fabricates and supplies pharmaceuticals for distribution. The_ Pharmacy interacts only with the Distributor, similarly the Manufacturer interacts only with the Distributor, and consequently the Distributor interacts with both. To ensure data isolation and confidentiality, two channels are created, one _Pharmacy_ _Distributor (called chPharmDist), the other Manufacturer_ _←→_ _←→_ _Distributor (chManuDist). Importantly, all internal processes remains private_ to each organisation, only the transactional data necessary to collaborate with another organisation is shared via the blockchain. While this scenario concerns a specific pharmacy-distributor-manufacturer, more generally a distributor would deal with a number of different pharmacies and manufacturers, and vice versa, all of which would potentially participate as peers within the blockchain platform and may play a role in the validation ----- Flexible Integration of Blockchain with Process Automation 9 **Fig. 3. Inter-organisational process interactions – supply chain example** consensus that occurs when a transaction is submitted to the chain. Further, it is of course also possible to have a single channel for all three parties if desired. To illustrate a complete sequence of interactions, with reference to Figure 3 and the numeric labelling within it: 1. A composite process instance begins with the pharmacy process, when a new order is generated and then sent, i.e. submitted and subsequently written to a new block on the chain via a task delegated to its YAWL Blockchain Service. 2. Since the permissioned channel chPharmDist is shared by the Pharmacy and the Distributor, the Distributor’s Blockchain Service detects the new _BlockEvent and interprets it as a trigger to launch a new instance of its_ internal ‘supply’ process. The transaction data sent with the event (i.e. the purchase order) is used as the originating data for the new instance. 3. The Distributor adds the order to a batch, then at a designated time submits the batch order to the Manufacturer via submission to the blockchain via the shared chManuDist channel shared by those two organisations. 4. The Manufacturer’s service receives the write BlockEvent, which triggers a new instance of its own ‘manufacture’ process, using the transaction data in the event (i.e. the batch order) as originating data. 5. Once the Manufacturer ships the order, the process archives the order details on the blockchain. 6. This BlockEvent triggers the release of the waiting Receive and Verify task in the Distributor’s process, allowing that process to continue. 7. Later, an invoice is produced by the Distributor and submitted to the blockchain via the chPharmDist channel. 8. The subsequent writing of the invoice to the chain causes a BlockEvent that triggers the release of the waiting Receive Invoice task in the Pharmacy’s process. 9. Eventually, the Pharmacy pays the invoice by submitting the payment transfer details to the chain. ----- 10 M. Adams et al. 10. 
The payment causes a BlockEvent that triggers the release of the waiting _Receive Payment task in the Distributor’s process._ Significantly, this example illustrates that secure inter-organisational process automation can be achieved using a federated architecture, and that the approach affords several concrete advantages when compared to the more heavyweight, blockchain-centric architectures: – Efforts to combine the three processes into one overarching, monolithic, endto-end process model are no longer required, negating the need for a great deal of collaboration between all parties, and the translation of the result into a set of factory smart contracts. The architecture also avoids the need for the creation, verification and storage of a new set of smart contracts for every instance of the inter-organisational process. – Because all business logic, branching rules, resources allocations, etc. are handled by the BPMS, the smart contracts here are not overloaded with procedural code, resulting in much simpler, faster to process transactions. In this example, the smart contracts define data structures for order, invoice and payment, and a trivial invoke function that either submits a transaction or performs a query over existing blocks. The data structures are used to (de)serialise JSON strings passed to/from the BPMS into block data. – There is no need to ‘centralise’ the process on the blockchain. Each organisation retains autonomy of its own processes, and the foci of operations are retained within the processes of each organisation’s BPMS. – There is no requirement for the creation and maintenance of the “intricate set of components” [19], prerequisite to the heavyweight architecture. Only a standard BPMS environment, a simple middleware service and vanilla blockchain network are needed. – Unlike many blockchain-centric architectures, there is no requirement for a central ‘mediator’ process to choreograph the interactions between each organisation’s processes. – There are no limitations placed on the types of process patterns supported. Any pattern supported by the process language used by the process execution environment (i.e. the BPMS) can be used in this approach, including those more complex patterns that are difficult, if not impossible, to transform into a smart contract, since all process executions are contained within the BPMS, rather than on the blockchain. – An unimpeachable audit trail is stored on the blockchain(s) and can be extrapolated for all inter-process activity instances between organisation pairs. ## 6 Discussion and Conclusion Many blockchain-centric approaches use a blockchain monolithically as an entire execution platform for business processes. Thus, a potentially large volume of data, including process definitions in the form of smart contracts, business rule definitions, datasets representing the work of a process instance, as well as its ----- Flexible Integration of Blockchain with Process Automation 11 constantly updating state information, is written to, read from, and executed on the blockchain. Depending on the smart contracts and business rules executed, such data could contain potentially-confidential internal data of an organisation, thus inadvertently and unnecessarily exposing private data to external parties. 
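In contrast to such monolithic contracts, the lean contracts described in Section 5.2 need only define data structures for order, invoice and payment plus a trivial invoke that either submits a (JSON-serialised) transaction or queries existing blocks. The rough Python stand-in below shows this shape; real contracts would target the chosen blockchain platform's chaincode API, and all names here are invented.

```python
# Sketch of a 'lean' contract: plain data structures plus a trivial invoke.
import json
from dataclasses import dataclass, asdict


@dataclass
class Order:
    order_id: str
    items: dict
    buyer: str


@dataclass
class Invoice:
    order_id: str
    amount: float


@dataclass
class Payment:
    order_id: str
    amount: float
    reference: str


class LeanContract:
    """No business logic, branching or resource rules live here; the BPMS keeps those."""

    def __init__(self, ledger):
        self.ledger = ledger  # any append-only store, e.g. the toy ledger sketched earlier

    def invoke(self, action: str, record=None, key: str = None):
        if action == "submit":
            # (De)serialise to JSON so the BPMS and the chain exchange plain strings.
            self.ledger.append(json.dumps(asdict(record), sort_keys=True))
        elif action == "query":
            return [json.loads(b) for b in self.ledger
                    if key is None or json.loads(b).get("order_id") == key]
        else:
            raise ValueError(f"unknown action {action}")
```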
As per our case example (Figure 3), it will require 20 large, custom contracts to be created (one for each task) in such blockchain-centric architectures versus 7 short, generic contracts that merely write important transactions to the blockchain in our approach. Additionally, each custom contract will require considerable effort for verification, and the blind trust of each organisation that the translation tool generates error-free smart contracts. Each update of a smart contract requires that each peer must compile, instantiate and validate it before it is committed to the blockchain, thus consuming resources and adding to the overhead of the blockchain’s performance. It is clear that blockchain is much more expensive as a medium for processing and storage than traditional media. Hence, it should be used as sparingly as possible by minimising both the size of smart contracts and the amount of data stored, while maintaining trust by means of a reliable audit trail. Extraneous processing and data should go to traditional platforms that offer better performance, flexibility and technology heterogeneity, and less visibility across parties. We are not convinced that it is necessary to reinvent the functionality of a BPM engine, which includes complex control flow management, data management and resource allocation, within a blockchain platform. As we have demonstrated in our implementation, it is less work to integrate blockchain into an application with our federated approach, when compared to the more heavyweight blockchain-centric architectures. To fully transfer all the features of an industrial strength BPM system onto a blockchain platform could amount to a very long, risky and expensive undertaking, especially when considering the non-trivial processes in real-world scenarios. Our prototype illustrates the advantages of dedicating the existing capabilities of BPMS for process automation, and those of blockchain as an immutable, distributed ledger, to automate secure, cross-organisational process interactions without the overheads necessitated by the heavyweight, blockchain-centric approach. We believe our proposal aligns better with the underlying philosophy of blockchain technology based on distributed autonomous organisations (DAOs) [12]. We have presented a conceptual architecture and an implementation that demonstrates the feasibility of the approach. The comparisons presented here are mostly qualitative; a more thorough empirical comparison through experiments and quantitative data is needed and it will form our future work. More work is also needed to optimise the distribution of on-chain and off-chain data, and to validate the applicability of the federated approach with different types of scenarios and use cases. ## References 1. Androulaki, E., Barger, A., Bortnikov, V., Cachin, C., et al.: Hyperledger Fabric: a distributed operating system for permissioned blockchains. In: Proceedings of the ----- 12 M. Adams et al. Thirteenth EuroSys Conference. p. 30. ACM (2018) 2. Astigarraga, T., Chen, X., Chen, Y., Gu, J., et al.: Empowering business-level blockchain users with a rules framework for smart contracts. In: 16th International Conference on Service-Oriented Computing. pp. 111–128. Springer (2018) 3. Buterin, V.: Ethereum: A next-generation smart contract and decentralized application platform (2014), https://github.com/ethereum/wiki/wiki/White-Paper 4. Cachin, C.: Architecture of the Hyperledger Blockchain Fabric. In: Distributed Cryptocurrencies and Consensus Ledgers. vol. 
310, p. 4 (2016) 5. Christidis, K., Devetsikiotis, M.: Blockchains and smart contracts for the internet of things. IEEE Access 4, 2292–2303 (2016) 6. Garc´ıa-Ba˜nuelos, L., Ponomarev, A., Dumas, M., Weber, I.: Optimized execution of business processes on blockchain. In: BPM. pp. 130–146. Springer, Cham (2017) 7. Hull, R., Batra, V.S., Chen, Y.M., Deutsch, A., et al.: Towards a shared ledger business collaboration language based on data-aware processes. In: 14th International Conference on Service-Oriented Computing. pp. 18–36. Springer (2016) 8. L´opez-Pintado, O., Garc´ıa-Ba˜nuelos, L., Dumas, M., Weber, I., Ponomarev, A.: Caterpillar: a business process execution engine on the ethereum blockchain. Software: Practice and Experience 49(7), 1162–1193 (2019) 9. Mendling, J., Hafner, M.: From inter-organizational workflows to process execution: Generating BPEL from WS-CDL. In: CoopIS’05. pp. 506–515. Springer (2005) 10. Mendling, J., Weber, I., van der Aalst, W., vom Brocke, J., et al.: Blockchains for business process management - challenges and opportunities. ACM Transactions on Management Information Systems 9(1), 4:1–4:16 (Feb 2018) 11. Narendra, N.C., Norta, A., Mahunnah, M., Ma, L., Maggi, F.M.: Sound conflict management and resolution for virtual-enterprise collaborations. Service Oriented Computing and Applications 10(3), 233–251 (Sep 2016) 12. Norta, A.: Creation of smart-contracting collaborations for decentralized autonomous organizations. In: International Conference on Business Informatics Research. pp. 3–17. Springer (2015) 13. Norta, A., Grefen, P., Narendra, N.C.: A reference architecture for managing dynamic inter-organizational business processes. Data & Knowledge Engineering 91, 52–89 (2014) 14. Sturm, C., Szalanczi, J., Sch¨onig, S., Jablonski, S.: A lean architecture for blockchain based decentralized process execution. In: International Conference on Business Process Management Workshops. pp. 361–373. Springer (2018) 15. Szabo, N.: Formalizing and securing relationships on public networks. First Monday **2(9) (1997)** 16. ter Hofstede, A., van der Aalst, W., Adams, M., Russell, N. (eds.): Modern Business Process Automation: YAWL and Its Support Environment. Springer (2010) 17. Underwood, S.: Blockchain beyond bitcoin. Communications of the ACM 59(11), 15–17 (2016) 18. Weber, I., Gramoli, V., Ponomarev, A., Staples, M., et al.: On availability for blockchain-based systems. In: 2017 IEEE 36th Symposium on Reliable Distributed Systems (SRDS). pp. 64–73 (Sept 2017) 19. Weber, I., Xu, X., Riveret, R., Governatori, G., et al.: Untrusted business process monitoring and execution using blockchain. In: BPM. pp. 329–347. Springer (2016) 20. Xu, X., Weber, I., Staples, M., Zhu, L., et al.: A taxonomy of blockchain-based systems for architecture design. In: ICSA. pp. 243–252. IEEE (April 2017) 21. Yasaweerasinghelage, R., Staples, M., Weber, I.: Predicting latency of blockchainbased systems using architectural modelling and simulation. In: ICSA. pp. 253–256. IEEE (April 2017) -----
{ "disclaimer": "Notice: The following paper fields have been elided by the publisher: {'references', 'abstract'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1007/978-3-030-58135-0_1?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1007/978-3-030-58135-0_1, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "CLOSED", "url": "" }
2020
[ "JournalArticle" ]
false
null
[]
7468
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Medicine", "source": "external" }, { "category": "Engineering", "source": "s2-fos-model" }, { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Business", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/021c89fecb5deff33c4bd7ab7f22800ad64ad22d
[ "Computer Science", "Medicine" ]
0.883362
IoT Data Qualification for a Logistic Chain Traceability Smart Contract
021c89fecb5deff33c4bd7ab7f22800ad64ad22d
Italian National Conference on Sensors
[ { "authorId": "145893525", "name": "Mohamed Ahmed" }, { "authorId": "3269191", "name": "C. Taconet" }, { "authorId": "103678321", "name": "Mohamed Ould" }, { "authorId": "1776634", "name": "S. Chabridon" }, { "authorId": "1695916", "name": "A. Bouzeghoub" } ]
{ "alternate_issns": null, "alternate_names": [ "SENSORS", "IEEE Sens", "Ital National Conf Sens", "IEEE Sensors", "Sensors" ], "alternate_urls": [ "http://nbn-resolving.de/urn/resolver.pl?urn=urn:nbn:ch:bel-142001", "http://www.mdpi.com/journal/sensors", "https://www.mdpi.com/journal/sensors" ], "id": "3dbf084c-ef47-4b74-9919-047b40704538", "issn": "1424-8220", "name": "Italian National Conference on Sensors", "type": "conference", "url": "http://www.e-helvetica.nb.admin.ch/directAccess?callnumber=bel-142001" }
In the logistic chain domain, the traceability of shipments in their entire delivery process from the shipper to the consignee involves many stakeholders. From the traceability data, contractual decisions may be taken such as incident detection, validation of the delivery or billing. The stakeholders require transparency in the whole process. The combination of the Internet of Things (IoT) and the blockchain paradigms helps in the development of automated and trusted systems. In this context, ensuring the quality of the IoT data is an absolute requirement for the adoption of those technologies. In this article, we propose an approach to assess the data quality (DQ) of IoT data sources using a logistic traceability smart contract developed on top of a blockchain. We select the quality dimensions relevant to our context, namely accuracy, completeness, consistency and currentness, with a proposition of their corresponding measurement methods. We also propose a data quality model specific to the logistic chain domain and a distributed traceability architecture. The evaluation of the proposal shows the capacity of the proposed method to assess the IoT data quality and ensure the user agreement on the data qualification rules. The proposed solution opens new opportunities in the development of automated logistic traceability systems.
# sensors

_Article_

## IoT Data Qualification for a Logistic Chain Traceability Smart Contract

**Mohamed Ahmed** [1,2,]*, **Chantal Taconet** [2,]*, **Mohamed Ould** [1], **Sophie Chabridon** [2] **and Amel Bouzeghoub** [2]

1 ALIS International, 4 Rue du Meunier, 95724 Roissy-en-France, France; Mohamed.Ould@alis-intl.com 2 Samovar, Télécom SudParis, Institut Polytechnique de Paris, 9 rue Charles Fourier, 91011 Evry-Courcouronnes CEDEX, France; Sophie.Chabridon@telecom-sudparis.eu (S.C.); Amel.Bouzeghoub@telecom-sudparis.eu (A.B.)

***** Correspondence: Mohamed.Ahmed@alis-intl.com (M.A.); Chantal.Taconet@telecom-sudparis.eu (C.T.)

**Citation: Ahmed, M.; Taconet, C.; Ould, M.; Chabridon, S.; Bouzeghoub, A. IoT Data Qualification for a Logistic Chain Traceability Smart Contract. Sensors 2021, 21, 2239.** [https://doi.org/10.3390/s21062239](https://doi.org/10.3390/s21062239)

Academic Editor: Muhamed Turkanović

Received: 29 January 2021 Accepted: 19 March 2021 Published: 23 March 2021

**Publisher's Note: MDPI stays neutral** with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright: © 2021 by the authors.** Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

**Abstract: In the logistic chain domain, the traceability of shipments in their entire delivery process** from the shipper to the consignee involves many stakeholders. From the traceability data, contractual decisions may be taken such as incident detection, validation of the delivery or billing. The stakeholders require transparency in the whole process. The combination of the Internet of Things (IoT) and the blockchain paradigms helps in the development of automated and trusted systems. In this context, ensuring the quality of the IoT data is an absolute requirement for the adoption of those technologies. In this article, we propose an approach to assess the data quality (DQ) of IoT data sources using a logistic traceability smart contract developed on top of a blockchain. We select the quality dimensions relevant to our context, namely accuracy, completeness, consistency and currentness, with a proposition of their corresponding measurement methods. We also propose a data quality model specific to the logistic chain domain and a distributed traceability architecture. The evaluation of the proposal shows the capacity of the proposed method to assess the IoT data quality and ensure the user agreement on the data qualification rules. The proposed solution opens new opportunities in the development of automated logistic traceability systems.

**Keywords: IoT; data quality; smart contract; traceability; logistic; sensor; blockchain; supply chain**

**1. Introduction**

In the logistic chain domain, multiple stakeholders need to exchange data about _shipments_ transiting from the shipper to the consignee. The data exchange purpose is to give visibility to all the stakeholders about the shipments progress in the logistic chain and trace the path as well as the transport conditions throughout the entire chain.
We refer to the data collected during shipments transit as traceability data, the system in charge of collecting, saving and sharing those data as traceability system and the whole process of data collection and processing as the traceability process. Traditional traceability systems handle traceability data in a central system hosted by one of the stakeholders, which constitutes a risk on the availability of the traceability data (single point of failure). The lack of transparency in the qualification process could also be a source of dispute on the correct application of data handling and qualification rules agreed by all the traceability process stakeholders. The advent of blockchain technology and smart contracts help develop new traceability systems. Such systems allow stakeholders to achieve the secure and transparent sharing of traceability data, using the blockchain secured and distributed ledger. In addition, smart contracts allow stakeholders to share data handling and decision-making rules, in order to ensure that the same agreed rules are applied by all the stakeholders. Increasingly, IoT devices are used to automatically collect field data. Those data are used both for traceability purpose and to take automatic decisions, such as the creation of shipment incidents, when one or more of the negotiated shipment transport conditions ----- _Sensors 2021, 21, 2239_ 2 of 25 are not respected. As a result, the human intervention is limited in the process, as well as process error probability. To automate the traceability process decision making, new traceability system architectures have been proposed in the literature combining smart contracts and IoT (see, e.g., [1–4]). However, in the existing smart contracts and IoT traceability systems literature, many of the provided architectures propose to integrate the IoT data directly into the smart contract (see, e.g., [1–4]). This could lead to unsound decisions taken by the smart contract based on erroneous data collected and sent directly to the smart contract by the IoT data sources. To overcome this issue, we propose to introduce an IoT data qualification process in smart-contracts and IoT-based traceability architectures. We proposed in [5] to enhance traceability architectures using blockchain, smart contracts and IoT, combined with a lightweight IoT data qualification process. However, in this previous work, the proposed qualification process covered only outlier measure detection, which is only one facet of data quality. Furthermore, we did not compute the qualification at different levels such as the measure and the sensor level. Moreover, the IoT data qualification process was centralized at one stakeholder’s site, and there was no guarantee for the other stakeholders on the correct execution of the agreed IoT data qualification rules. In addition, the stakeholder in charge of the IoT data qualification represented a single point of failure of the architecture on the IoT data qualification part. To overcome the above limitations, the main contributions of this article are threefold: _(i) The literature review of IoT data qualification highlights that the data quality of a system_ is assessed by means of several dimensions. Considering the logistic chain properties, the first contribution is to identify the most relevant IoT data qualification dimensions and provide measurement methods for each of them. 
(ii) To help the stakeholders to get an end-to-end visibility of the data quality and to identify the causes of quality issues, the second contribution aims at measuring the data quality at four levels: IoT data events, IoT data sources, shipments and IoT data sources-shipments associations. (iii) To ensure the stakeholders agreement on the traceability data, the data qualification rules, and the decisions taken based on the data, such as the creation of incidents, the third contribution consists in integrating the data qualification measurement methods in a traceability smart contract. The rest of the article is organized as follows. In Section 2, we present the logistic domain context and its requirements through an example use case. Section 3 highlights the main research questions addressed in the paper together with their motivations. Section 4 studies the works related to the IoT data quality and the use of the blockchain to assess this quality. In Section 5, we present the use of the selected IoT data quality dimensions to measure the data quality. Section 6 presents the architecture of the proposed traceability solution. The evaluation of our proposed IoT data quality assessment approach is presented in Section 7. Finally, we conclude in Section 8 and present some future works.

**2. Medical Equipment Cold Chain Use Case**

In this section, we present an emblematic business-to-business logistic chain example. Because of its specific constraints, the medical equipment cold chain is handled by specific transport means. We chose this use case for two reasons: (1) the requirement for transport monitoring; and (2) we worked with an ALIS customer specialized in the production of medical equipment and we were able to discuss with this customer about their traceability needs for this specific cold chain context. Some of the equipment, such as perishable medical diagnostic kits used in blood tests, needs to be transported under strict conditions with a temperature between a minimum of +2 and a maximum of +8 °C. Non-compliance with this temperature interval may render the medical diagnostic kits unusable. The stakeholders should be notified of any temperature non-compliance. At least three traceability stakeholders are involved in the traceability system of this medical equipment cold chain: a shipper (at the origin of the transport request), a carrier (in
The shipper is responsible for the shipment creation in the traceability system, with all the data required by the carrier for the good execution of the transport operation, such as the origin, destination, transport temperature thresholds and IoT data reception interval. In this scenario, we focus on the management of incidents that could be detected automatically by the traceability system, based on the data sent by the IoT data sources, such as the non-compliance with the negotiated transport temperature interval. The data received from the IoT data sources will be used to automatically create incidents in the traceability process if necessary. Hence, these data should not be integrated directly into the system. A data qualification process is required to ensure that the IoT data quality is good enough to ensure the proper incidents detection. For this purpose, the stakeholders should have the ability to set the required thresholds for the IoT data quality. Thus, the data that do not meet the quality thresholds requirements could not be used in the traceability process. The data qualification process has many advantages: it not only provides a quality degree to each shipment related IoT event and a performance measure of its associated data source but also helps the users to choose the most trustworthy data source and facilitates the detection of damaged ones in order to repair or replace them. **3. Research Questions and Motivations** Based on the above-mentioned use case, we can highlight six main research questions addressed in this article and their motivations: (1) How accurate are the data? In other words, do the data reflect the reality of the shipment transport operation? Measuring data accuracy avoids the use of unreliable data. (2) Are the data complete? Indeed, the existence of gaps in the collected data may affect the shipment traceability. (3) Are the data consistent? The consistency issue arises when the collected data assigned to a shipment comes from several sources with possibly discrepancies leading to incidents. In this case, an agreement could be defined to tolerate a minimum deviation between the data, for example, a gap of 0.5 _[◦]C in the temperature may be considered as acceptable. (4) Are the data timely_ valid? That is, are the data compliant with the receiving window agreed between the stakeholders? The non-respect of this interval may significantly affect the stakeholder’s visibility and the required transparency of ongoing transport operations. Each above question reflects a facet (dimension) of the quality process that this paper addresses and thus the main contribution of this paper is to propose quality measures for each dimension identified as relevant in our context namely: accuracy, completeness, consistency and currentness. These quality dimensions are defined in the next section. In addition, to the above quality dimensions questions, there is a concern about quality granularity. (5) How can the system provide different levels of quality: data events, IoT data sources and per shipment performances? This high precision quality monitoring facilitates the identification at the right time of the data sources that need to be repaired or removed. Finally, there is a question concerning transparency. (6) How can the data and the data quality measurement rules be shared securely among the stakeholders to ensure their agreement on the correct application of these rules? 
To address this issue, we propose to implement the above quality measures into a smart contract, in order to ensure the agreement of all the stakeholders on the correct application of the proposed quality measures.

**4. Related Works**

Data quality is not a recent research topic. The first data quality studies concerned databases. Many data quality aspects have been considered, such as the accuracy, consistency and reliability, to improve the quality of data inputs into databases and handle databases incompatibility and time critical delivery data [6]. With the advent of the IoT as new data sources, the existing data quality studied aspects needed to be extended to the specificities of those new data sources. The data collected from IoT data sources need to be controlled even more due to the limited capacity of these sources to ensure the security and the quality of their data. The "Never trust user input" principle should evolve to "Never trust things input", as stated by Karkouch et al. [7]. Moreover, the emergence of blockchain opens new opportunities for systems that involve multiple stakeholders. The logistic chain domain, which involves multiple stakeholders, provides relevant use cases for this technology [8], especially for traceability purpose [9]. The blockchain promotes the development of smart logistics [10], using smart contracts. Before providing a literature review, it is important first to define some terms used in the domain of data quality and their meaning in the logistic context.

_4.1. Data Quality Definitions_

Data quality dimensions are attributes representing a single aspect of the data quality, as stated by Richard Y. Wang [11]. In this work, we consider the following data quality dimensions: accuracy, completeness, consistency and currentness. The accuracy, as stated by ISO [12], refers to: "the degree to which data has attributes that correctly represent the true value of the intended attribute of a concept or event in a specific context of use". In our context, it is difficult to know if a received measurement reflects the real shipment situation, especially when the shipment transport operation is ongoing. However, we can define an accuracy measurement method based on the received measure and the measure source specifications. The completeness, according to ISO [12], corresponds to "the degree to which subject data associated with an entity has values for all expected attributes and related entity instances in a specific context of use". In our context, the completeness depicts the fact that all the expected events have been received by a data source or a shipment according to the update interval agreed by all the stakeholders. The consistency, according to ISO [12], refers to "The degree to which data has attributes that are free from contradiction and are coherent with other data in a specific context of use". It is also referred to as concordance in some works [13]. In our context, the consistency dimension corresponds to the degree of coherence between IoT data events sent by different IoT data sources and related to the same shipment. The currentness was defined by ISO [12] as: "The degree to which data has attributes that are of the right age in a specific context of use". It is also referred to as timeliness, currency, freshness, delay or contemporaneous, in some works [13,14].
In our context, an event is considered of the right age when it is received at the expected time according to the update interval agreed by the stakeholders and defined in the smart contract. _4.2. Related Works Study Criteria_ The combination of the blockchain smart contracts and the IoT helps in the development of trusted [15] and automated systems. However, the IoT data quality is a hindrance to the development and adoption of this new generation of systems. In this article, we present the works related to the IoT data quality issue according to three criteria: (C1) the quality dimensions; (C2) the quality levels; and (C3) the use of blockchain smart contracts for data quality management. ----- _Sensors 2021, 21, 2239_ 5 of 25 4.2.1. Quality Dimensions (C1) The IoT data quality issue has been addressed using data quality dimensions. For this purpose, the traditional data quality dimensions [11] have been used and adapted to the IoT context needs [13]. The definition of the IoT quality dimensions and their corresponding calculation methods facilitates their usage and application in the target IoT based systems. Due to the lack of works on IoT data using quality dimensions in the logistic chain context, we selected some representative related works from other domains. Many of the existing works show the interest of using those quality dimensions for IoT data quality handling. In each work, the authors selected the dimensions relevant to their domain and defined the corresponding measurement methods for the selected quality dimensions. Li et al. [16] defined and measured the currency, availability and validity metrics in a pervasive environment (IoT context) and the problem of data expiration (data no longer usable). It is worth noting that, in the traceability context, the data do not expire. It is important to get all the data for traceability purpose even though the data received late will have a poor currentness quality index. Sicari et al. [17] proposed a quality-aware and secured architecture handling: accuracy, currentness, completeness and other quality dimensions. A framework for determining the quality of heterogeneous information sources was proposed by Kuemper et al. [18], using the dimensions of accuracy and consistency. To ensure a real-time data allocation and data quality in multiple partitions collection and storage, Kolomvatsos [19] proposed a real time data pre-processing mechanism, using Fuzzy Logic and handling the accuracy dimension. In the domain of Ambient Assisted Living (AAL) systems, Kara et al. [20] proposed a quality evaluation model. Their approach is based on the definition and execution of quality metrics and the use of fuzzy logic to evaluate the metrics and decide of the data quality level. In the same precedent domain, Erazo-Garzon et al. [21] defined, measured and evaluated the quality of data collected from an intelligent pillbox, using seven data quality dimensions, among them the accuracy, the completeness, the currentness and the confidentiality. All the above works use some of or all our required IoT quality dimensions. However, their measurements methods do not meet our needs of dimensions definition and measurement at different levels: data event, data source and shipment. 4.2.2. Quality Levels (C2) In the logistic chain context, the stakeholders need to be provided with a full quality visibility at different levels of the manipulated objects. This is our second criterion (C2). 
This is helpful for data quality management and simplifies the investigation in case of discrepancies between the stakeholders' IoT data sources. Some works proposed data quality models to handle this issue. A generic data quality metamodel for data stream management was proposed by Karkouch et al. [22]; in the evaluation of their work, the authors used the accuracy and completeness dimensions. There is also the work of Fagúndez et al. [23] on a data quality model to assess sensor data quality in the health domain, using the dimensions of accuracy, completeness, freshness and consistency.

The above cited models do not meet our context needs. On the one hand, the data sources in our context are reused and assigned to different shipments in different transport operations. On the other hand, to meet criterion (C2), our proposition provides the stakeholders with full visibility of the data quality at the different object levels, using an adequate quality model.

4.2.3. Blockchain Smart Contracts for Data Quality Management (C3)

Traceability data need to be shared securely among the stakeholders, in order to ensure their agreement on the data quality and on the correct application of the agreed data calculation methods. This is our third criterion (C3). The following representative works from the literature propose IoT-blockchain based architectures to handle this issue.

In the domain of crowdsensing platforms, there are many recent works proposing to use the blockchain in order to improve the quality of the collected IoT data, such as the works of Gu et al. [24], Nguyen and Ali [25], Wei et al. [26], Cheng et al. [27], Huang et al. [28], Zou et al. [29] and Javaid et al. [30]. Their propositions are based essentially on users' reviews, reputation and reward mechanisms to incentivize the users to improve the quality of their provided data. Those mechanisms are not applicable in the logistic chain context, in which the stakeholders are known and responsible for the data they provide.

Casado-Vara et al. [31] proposed an IoT data quality framework based on the use of a blockchain, in the context of smart homes. The proposed solution is limited to the accuracy dimension and does not involve multiple stakeholders, each having its own data sources, as in our context. In the context of a fish farm, Hang et al. [32] proposed a blockchain-based architecture to ensure agriculture data integrity. Their proposed fish farm architecture includes an outlier filter that removes measurements beyond the expected values. This outlier filter is implemented outside the blockchain, using a Kalman filter algorithm. Leal et al. [14] proposed a framework for end-to-end traceability and data integrity in the domain of pharmaceutical manufacturing. They addressed the problem of temporal and multi-source variability using probability distribution methods.

In our logistic traceability context, we do not need to estimate sensor measurement data; we just report these data as they are sent by the sensors. If some data are missing or out of the expected ranges, this results in a quality incident on which the involved stakeholders need to agree. In our proposition, we implement the data quality measurement methods in a blockchain smart contract, in order to ensure secure sharing and the agreement of all the stakeholders on the correct application of the measurement methods and on the resulting data quality.

_4.3. Summary of the Related Works Study_
To enhance and secure the IoT data quality in the logistic chain, we propose in this article a data quality assessment architecture using the accuracy, completeness, consistency and currentness dimensions (C1) in a blockchain smart contract for logistic chain traceability. The proposed architecture provides the logistic chain stakeholders with data quality visibility at different levels (C2) and guarantees the stakeholders' agreement on the correct application of the quality rules (C3). Besides, the proposed architecture not only increases the integrated data quality but also the stakeholders' trust in, and adherence to, the resulting automatic decisions. Table 1 summarizes the selected related works and how they meet the three studied criteria.

**Table 1. Related works comparison summary.**

| Work | C1 (Quality Dimensions) | C2 (Quality Levels) | C3 (Use of Blockchain Smart Contracts for Data Quality Management) |
|---|---|---|---|
| **IoT** | | | |
| Li et al. [16] | Currentness and others | Data | N/A |
| Sicari et al. [17] | Accuracy, currentness, completeness and others | Data and stream window | N/A |
| Kuemper et al. [18] | Accuracy and consistency | Data and data source | N/A |
| Kolomvatsos et al. [19] | Accuracy | Data | N/A |
| Kara et al. [20] | Accuracy, completeness and others | Data | N/A |
| Erazo-Garzon et al. [21] | Accuracy, completeness, consistency (lack of measurement method), currentness and others | Data and data source | N/A |
| **IoT data quality models** | | | |
| Karkouch et al. [22] | Accuracy and completeness (in the evaluation) | Data and stream window | N/A |
| Fagúndez et al. [23] | Accuracy, completeness, freshness and consistency | Data and stream window | N/A |
| **Blockchain and IoT** | | | |
| Crowdsensing platforms: Gu et al. [24], Nguyen and Ali [25], Wei et al. [26], Cheng et al. [27], Huang et al. [28], Zou et al. [29] and Javaid et al. [30] | N/A | N/A | Data quality ensured through reviews, reputation and reward mechanisms implemented in blockchain smart contracts |
| **Blockchain, IoT and data qualification** | | | |
| Casado-Vara et al. [31] | Accuracy and consistency | N/A | Accuracy qualified outside the blockchain smart contract and consistency inside it |
| Hang et al. [32] | Accuracy (outlier filtering) | N/A | Outlier filtering outside the blockchain smart contract |
| Leal et al. [14] | Accuracy, consistency (multi-source variability) and currentness (contemporaneousness) | N/A | Data qualification outside and inside the blockchain smart contract |
| **Our proposition** | Accuracy, completeness, consistency and currentness | Data, data source, shipment and shipment-data source relationship (equivalent to stream window) | Data qualified using quality dimensions implemented in a blockchain smart contract |

**5. Data Qualification Using Data Quality Dimensions**

In this work, data qualification refers to the definition of data quality measurement methods and the application of those methods on every datum received and handled by the smart contract. We focus on the qualification of traceability IoT data. Because these data are automatically collected and used by the smart contract for the detection of incidents, their qualification is essential for building a reliable and automated traceability system.
Thanks to a data quality study adapted to the logistic chain domain, we identified: (i) relevant IoT data quality dimensions; and (ii) their respective measurement methods. The purpose of the IoT data quality model is to be implemented in the traceability smart contract, in order to assess the shipment data quality and, consequently, improve the incident creation process. Among the quality models proposed in the literature, the model by Karkouch et al. [22] is the closest to the above needs, and we decided to implement and extend this model for the logistic chain domain.

As depicted in Figure 1, we added the Shipment entity to collect the data quality at the shipment level, with its own IoTQualityDimension. Furthermore, to capture the data quality during the association of the IoTDataSource and the Shipment, we added the Assignment entity, which reflects this temporary relationship. We also highlight in Figure 1 all the model entities and attributes added for the quality assessment purpose. The main entity of this model is Shipment, which has its own IoTQualityDimension and its own IoTDataSource assigned to it through the Assignment entity. It is worth noting that the IoTQualityDimension has a weight attribute defining the importance of the dimension according to the stakeholders' needs.

In our context, we need to distinguish different application levels of each dimension, for quality visibility at every object level. The quality index resulting from a dimension application is calculated differently for each dimension-related entity in the schema. In some cases (detailed in the next sections), an IoTQualityDimension is not defined for some entities of the schema. For example, the completeness dimension is not defined for IoTDataEvent and IoTMeasure; it is used only for entities with an update time interval constraint, such as IoTDataSource and Shipment. Moreover, we introduce in this model a qualityConfidenceIndex, in order to provide users with an overview of the data quality for the main objects manipulated in our traceability system, which are the IoTDataSource, Shipment, Assignment and IoTDataEvent.
**Figure 1. IoT Data Quality Entity class diagram.** (The diagram relates the Shipment, IoTDataSource, Assignment, IoTDataEvent, IoTMeasure and IoTMeasureValue entities, the ShipmentIncident, ShipmentCondition and DataQualityIncident entities, and the abstract IoTQualityDimension with its four subclasses: IoTQualityAccuracy, IoTQualityCompleteness, IoTQualityConsistency and IoTQualityCurrentness.)

The calculation of the quality index takes into account the weight $W$ of the dimensions fixed by the users for the IoTDataSource and the Shipment. We calculate this quality index for the IoTDataSource and the Assignment as an average over their $n$ IoTDataEvents and $m$ IoTQualityDimensions:

$$qualityConfidenceIndex = \frac{\sum_{j=1}^{m} \left( W_j \cdot \frac{1}{n} \sum_{i=1}^{n} dimensionQualityIndex_{j,IoTDataEvent_i} \right)}{\sum_{j=1}^{m} W_j} \tag{1}$$

For the Shipment quality index calculation, we use the quality indexes of its related Assignment objects. Regarding the IoTDataEvent, we use the average quality of its related IoTMeasures. The methods used to calculate the IoTQualityDimensions are detailed in the next sections. The quality thresholds are set by the stakeholders to define the minimum accepted quality index. Values that do not respect this quality threshold are stored for traceability purposes but are not used for dynamic incident detection.
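To make the aggregation of Equation (1) concrete, the following minimal Java sketch computes the quality confidence index of a data source from per-event, per-dimension quality indexes. It is a sketch only: the class and method names are illustrative assumptions, not the actual smart contract code.

```java
import java.util.List;
import java.util.Map;

/** Minimal sketch of the Equation (1) aggregation (illustrative names, not the actual contract code). */
public final class QualityConfidence {

    /**
     * @param weights             dimension code -> weight W_j agreed by the stakeholders
     * @param perEventDimensionQI one map per received IoTDataEvent: dimension code -> quality index
     * @return the weighted quality confidence index in [0, 1]
     */
    public static double confidenceIndex(Map<String, Integer> weights,
                                         List<Map<String, Double>> perEventDimensionQI) {
        int n = perEventDimensionQI.size();      // number of IoTDataEvents
        double weightedSum = 0.0;
        double weightSum = 0.0;
        for (Map.Entry<String, Integer> dimension : weights.entrySet()) {
            double dimensionAverage = 0.0;       // average index of this dimension over the n events
            for (Map<String, Double> eventQI : perEventDimensionQI) {
                dimensionAverage += eventQI.getOrDefault(dimension.getKey(), 0.0);
            }
            dimensionAverage = (n == 0) ? 0.0 : dimensionAverage / n;
            weightedSum += dimension.getValue() * dimensionAverage; // W_j * dimension average
            weightSum += dimension.getValue();
        }
        return (weightSum == 0.0) ? 0.0 : weightedSum / weightSum;
    }
}
```

With a single event and the weights used later in the evaluation (4 for accuracy, 4 for consistency and 1 for currentness), this aggregation coincides with the event quality index of Equation (16).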
To monitor the compliance of the received data with both the quality thresholds and the Shipment transport conditions defined in the smart contract, we added to the model, respectively, the entities DataQualityIncident and ShipmentIncident. A DataQualityIncident results from a non-compliance with the agreed quality thresholds, and a ShipmentIncident results from a non-compliance with the agreed business transport conditions. For example, consider an IoTDataSource with an interval of possible values from 0 to 50 °C, monitoring a Shipment with business transport conditions of 2–8 °C. If this IoTDataSource sends a temperature value of 100 °C, this value is considered non-compliant with the quality thresholds and generates a DataQualityIncident. However, if the sent value is 20 °C, it is considered non-compliant with the business transport conditions and generates a ShipmentIncident.

In the next subsections, we detail how the dimensions are used to calculate the quality indexes at the different object levels.

_5.1. Accuracy_

The accuracy measurement method is based on the IoTDataSource specifications (the sensor measure precision value and the sensor minimum and maximum measurable values). Using this method, we can ensure that the received measurement is a possible normal value that can be sent by the concerned IoTDataSource. Therefore, the received measurement can be used by the traceability smart contract, for example, to create an incident if the received measurement is out of the ranges fixed by the shipper for this specific measurement. In the following subsection, we detail the accuracy calculation method depending on the object level.

Accuracy Levels

We identify five accuracy levels: the IoTMeasureValue accuracy AccMsrVal, the IoTMeasure accuracy AccMsr, the IoTDataEvent accuracy AccEvt, the IoTDataSource accuracy AccSrc and the Shipment accuracy AccShp.

The IoTMeasureValue accuracy, as indicated by its name, is related to only one value of the IoTMeasure. It is used to indicate whether a value of the IoTMeasure is in the range of relevant and acceptable values of this specific IoTMeasureValue, based on the IoTDataSource specifications. For example, consider an IoTMeasureValue $m$ with precision $p$, where $FTh_{min}$ and $FTh_{max}$ are, respectively, the minimum and the maximum possible values given by the IoTDataSource manufacturer. We calculate the IoTMeasureValue accuracy AccMsrVal using the following formula:

$$AccMsrVal = \begin{cases} 1 & \text{if } (m - p) \geq FTh_{min} \text{ and } (m + p) \leq FTh_{max} \\ \frac{m - FTh_{min}}{p} & \text{if } (m - p) < FTh_{min} \text{ and } m \geq FTh_{min} \\ \frac{FTh_{max} - m}{p} & \text{if } (m + p) > FTh_{max} \text{ and } m \leq FTh_{max} \\ 0 & \text{otherwise} \end{cases} \tag{2}$$

The IoTMeasure is composed of $n$ IoTMeasureValues; consequently, we calculate the IoTMeasure accuracy AccMsr as the average of all its IoTMeasureValue accuracies:

$$AccMsr = \frac{\sum_{i=1}^{n} AccMsrVal_i}{n} \tag{3}$$
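As an illustration of Equation (2), the following Java sketch qualifies a single measure value against the manufacturer specifications; the class and method names are illustrative assumptions.

```java
/** Sketch of the IoTMeasureValue accuracy of Equation (2) (illustrative names). */
public final class Accuracy {

    /**
     * @param m      the received measure value
     * @param p      the sensor precision given by the manufacturer
     * @param fthMin the minimum measurable value (FThmin)
     * @param fthMax the maximum measurable value (FThmax)
     * @return AccMsrVal in [0, 1]
     */
    public static double accMsrVal(double m, double p, double fthMin, double fthMax) {
        if ((m - p) >= fthMin && (m + p) <= fthMax) {
            return 1.0;                  // the whole uncertainty interval [m - p, m + p] is in range
        }
        if ((m - p) < fthMin && m >= fthMin) {
            return (m - fthMin) / p;     // the interval partially crosses the lower bound
        }
        if ((m + p) > fthMax && m <= fthMax) {
            return (fthMax - m) / p;     // the interval partially crosses the upper bound
        }
        return 0.0;                      // the value itself is out of the measurable range
    }
}
```

For example, with the sensor specifications used later in the evaluation (FThmin = 0 °C, FThmax = 50 °C, p = 0.5 °C), a received value of −20 °C yields an accuracy of 0, as in the examples of Table 2.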
The IoTDataEvent accuracy AccEvt corresponds to an overview of the accuracies of all its related IoTMeasures. This is useful in our context, where the IoTDataEvent is considered a coherent set of IoTMeasures. If this is not the case, the accuracy calculated at the IoTMeasure level can directly be used, and the IoTDataEvent accuracy can be ignored. For an IoTDataEvent with $n$ related IoTMeasures, the IoTDataEvent accuracy corresponds to the average of all the IoTDataEvent related IoTMeasure accuracies:

$$AccEvt = \frac{\sum_{i=1}^{n} AccMsr_i}{n} \tag{4}$$

The IoTDataSource accuracy AccSrc gives an overview of all the IoTDataSource related IoTDataEvent accuracies; it is related to the history of IoTDataEvents received from the IoTDataSource. In our context, we consider it important to take this history of IoTDataEvents into account in the calculation of the IoTDataSource accuracy, because it indicates the reliability of the IoTDataSource since it has been deployed and used in our traceability system. If the users are interested only in the IoTDataSource IoTMeasure accuracies, the accuracy calculated at the IoTMeasure level can be reused at the IoTDataSource level, in order to give them an IoTDataSource accuracy per IoTMeasure. The accuracy of an IoTDataSource corresponds to the accuracy average of all its related IoTDataEvents:

$$AccSrc = \frac{\sum_{i=1}^{n} AccEvt_i}{n} \tag{5}$$

Finally, the Shipment level accuracy aggregates all the Shipment related IoTDataSource accuracies for the specific time periods in which the IoTDataSources are assigned to the Shipment. Every Shipment is considered an independent transport operation that should have its own accuracy value. For a Shipment with $n$ Assignments to IoTDataSources, the accuracy AccShp corresponds to the average of all the Shipment-IoTDataSource Assignments. For each Assignment accuracy $AccAssign_i$, the number of IoTDataEvents $nEvtAssign$ to be considered in the accuracy calculation corresponds to the number of IoTDataEvents sent by the IoTDataSource for this specific Shipment Assignment relationship:

$$AccShp = \frac{\sum_{i=1}^{n} AccAssign_i}{n}, \quad \text{where } AccAssign_i = \frac{\sum_{j=1}^{nEvtAssign} AccEvt_j}{nEvtAssign} \tag{6}$$

_5.2. Completeness_

The completeness measurement method calculates the gap in the data reception for a specific object. It concerns the levels of the IoTDataSource, the Assignment and the Shipment.

5.2.1. Completeness Levels

At the IoTDataSource level, the completeness is calculated based on the source startTimestamp, the source measure interval $I$, the number $n$ of IoTDataEvents received from the IoTDataSource and the reception timestamp of the last IoTDataEvent, lastTimestamp, related to the IoTDataSource:

$$ComSrc = \begin{cases} 1 & \text{if } n \geq \frac{lastTimestamp - startTimestamp}{I} \\ \frac{n \cdot I}{lastTimestamp - startTimestamp} & \text{otherwise} \end{cases} \tag{7}$$
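As a sketch of the source-level completeness of Equation (7), the following Java method can be used; the timestamps are in seconds, the names are illustrative and the guard for an empty observation window is an added assumption.

```java
/** Sketch of the IoTDataSource completeness of Equation (7) (illustrative names). */
public final class Completeness {

    /**
     * @param n              number of IoTDataEvents received from the source
     * @param intervalSec    agreed update interval I, in seconds
     * @param startTimestamp source start timestamp, in seconds
     * @param lastTimestamp  reception timestamp of the last IoTDataEvent, in seconds
     * @return ComSrc in [0, 1]
     */
    public static double comSrc(long n, long intervalSec, long startTimestamp, long lastTimestamp) {
        long window = lastTimestamp - startTimestamp;
        if (window <= 0) {
            return 1.0;                                      // added assumption: nothing expected yet
        }
        double expected = (double) window / intervalSec;     // expected number of events
        if (n >= expected) {
            return 1.0;                                      // all the expected events were received
        }
        return (n * (double) intervalSec) / window;
    }
}
```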
The Assignment completeness ComAssign means that all the expected IoTDataEvents of the assigned IoTDataSource Src have been received by the Shipment during the IoTDataSource and Shipment association time period enshrined in the smart contract. Consequently, for the Shipment, the IoTDataEvent frequency is at least one IoTDataEvent per IoT update time interval $I$ defined in the smart contract. The ComAssign highlights for the stakeholders the capacity of each IoTDataSource to send all the expected data during its association with a Shipment. This helps the stakeholders decide on the reusability of the IoTDataSource for further Shipments in the case of a good completeness value or, otherwise, to take over the IoTDataSource in order to identify the source of the completeness problem.

The ComAssign evolves during the Shipment and IoTDataSource association time period, and it is recalculated for every new IoTDataEvent reception at the timestamp evtTimestamp, based on the current number $n$ of received IoTDataEvents, the Shipment update interval $I$ and the IoTDataSource-Shipment Assignment startAssignTime and endAssignTime timestamps:

$$ComAssign = \begin{cases} 1 & \text{if } n \geq \frac{evtTimestamp - startAssignTime}{I} \text{ and } evtTimestamp \in \, ]startAssignTime, endAssignTime] \\ & \text{or } n \geq \frac{endAssignTime - startAssignTime}{I} \text{ and } evtTimestamp > endAssignTime \\ \frac{n \cdot I}{endAssignTime - startAssignTime} & \text{if } evtTimestamp > endAssignTime \\ 0 & \text{otherwise} \end{cases} \tag{8}$$

At the Shipment level, the completeness ComShp gives an idea of the completeness trend of all the Shipment related IoTDataSources. It is calculated as the average ComAssign of the $nAssign$ IoTDataSources assigned to the Shipment:

$$ComShp = \frac{\sum_{i=1}^{nAssign} ComAssign_i}{nAssign} \tag{9}$$

5.2.2. Completeness Incidents

The completeness problem reflects missing IoTDataEvents. Many causes can be at the origin of missing IoTDataEvents: network errors, synchronization problems or device malfunctions [33]. If it is not handled, missing data seriously affect the reliability of the data collected through the IoTDataSource. We propose to generate a completeness incident if the completeness index of the object falls below the completeness threshold fixed by the stakeholders. The update-missing incident created by the smart contract also remains stored, in order to trace the history of data quality problems related to the event IoTDataSource.

_5.3. Consistency_

It is important to calculate the degree of coherence between IoTDataEvents and to alert the stakeholders in the case of incoherence detection. The stakeholders should then take a corrective action, such as identifying and removing the failing IoTDataSource, adapting new threshold values, etc. The main IoTDataSource in this work is the shipper shipment connected object. However, other IoTDataSources can be added by any of the Shipment transport stakeholders. When two or more IoTDataSources assigned to the Shipment monitor the same transport conditions, we calculate the consistency of those IoTDataSources by comparing their IoTMeasures.

The IoTMeasures comparison takes into account two tolerance thresholds: the time tolerance threshold Ttth and the measure tolerance threshold Mtth. Those two thresholds should be defined at the Shipment creation for every IoTDataSource assigned to the Shipment, through a mutual agreement between the stakeholders in charge of those IoTDataSources.

Consistency Levels

The consistency dimension concerns the levels of IoTDataEvent and Shipment. When an IoTDataEvent $Evt_i$ is received from IoTDataSource $Src_i$ at a timestamp $Rt_i$ and contains a list $Msr_i$ of IoTMeasures, we check whether there are other IoTDataEvents related to the Shipment and sent by other IoTDataSources; that is, for each IoTDataEvent $Evt_j$, received from IoTDataSource $Src_j$ at the timestamp $Rt_j$ and containing a list $Msr_j$ of IoTMeasures:

$$\begin{cases} Src_i \neq Src_j \\ |Rt_i - Rt_j| \leq Ttth \\ Msr_i \cap Msr_j \neq \emptyset \end{cases} \tag{10}$$

where the IoTMeasures are compared using their codes (see Figure 1). If there is only one IoTDataSource for the Shipment, or there are no IoTDataEvents verifying the above conditions, then there is no consistency calculation to do.
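The pairwise eligibility test of Equation (10) can be sketched as follows in Java; the minimal event view is an illustrative assumption.

```java
import java.util.Set;

/** Sketch of the consistency eligibility test of Equation (10) (illustrative names). */
public final class ConsistencyEligibility {

    /** Minimal event view: source id, reception timestamp and the codes of its measures. */
    public record EventView(String srcId, long receptionTimestamp, Set<String> measureCodes) {}

    /**
     * @param a    the newly received event Evt_i
     * @param b    a candidate event Evt_j already related to the shipment
     * @param ttth time tolerance threshold Ttth, in seconds
     * @return true if the pair (a, b) is eligible for the consistency comparison
     */
    public static boolean eligible(EventView a, EventView b, long ttth) {
        if (a.srcId().equals(b.srcId())) {
            return false;                                        // Src_i != Src_j
        }
        if (Math.abs(a.receptionTimestamp() - b.receptionTimestamp()) > ttth) {
            return false;                                        // |Rt_i - Rt_j| <= Ttth
        }
        // Msr_i ∩ Msr_j != ∅: the measures are compared using their codes
        return b.measureCodes().stream().anyMatch(a.measureCodes()::contains);
    }
}
```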
When eligible events exist, the IoTDataEvent consistency is calculated using the following method:

$$ConEvt_i = \begin{cases} 1 & \text{if } \forall m \in Msr_i \cap Msr_j, \; |Val_{m_i} - Val_{m_j}| \leq Mtth \\ \frac{NbConEvt_i}{NbEvt} & \text{otherwise} \end{cases} \tag{11}$$

where $Val_{m_i}$ is the value of $m$ in $Msr_i$, $Val_{m_j}$ is the value of $m$ in $Msr_j$, $NbConEvt_i$ is the number of events concordant with $Evt_i$ and $NbEvt$ is the total number of events verifying the above consistency conditions.

The Shipment consistency ConShp gives an overview of the Shipment data consistency between all the IoTDataSources related to the Shipment and monitoring the same transport conditions. It is calculated as the average of the Shipment related Assignment consistencies ConAssign:

$$ConShp = \frac{\sum_{i=1}^{n} ConAssign_i}{n}, \quad \text{where } ConAssign_i = \frac{\sum_{j=1}^{nEvtAssign} ConEvt_j}{nEvtAssign} \tag{12}$$

_5.4. Currentness_

In the logistic traceability context, the currentness dimension may not be critical. Indeed, the most important point is to detect incidents, even when the data are received late. However, currentness may reveal incidents concerning data acquisition. Thus, the stakeholders define the Shipment currentness threshold according to the use case.

5.4.1. Currentness Levels

We consider the following currentness levels: IoTDataEvent, IoTDataSource and Shipment. For an IoTDataEvent $Evt_i$, the currentness $CurEvt_i$ is calculated based on the previous IoTDataEvent reception timestamp $t_{i-1}$, the update interval $I$ defined in the smart contract, the expected next IoTDataEvent timestamp $t_{i+1}$, which is equal to $t_{i-1} + 2 \cdot I$, and the current IoTDataEvent reception timestamp $t_i$:

$$CurEvt_i = \begin{cases} 1 - \frac{|(t_{i-1} + I) - t_i|}{I} & \text{if } t_i \in \, ]t_{i-1}, t_{i+1}[ \\ 0 & \text{otherwise} \end{cases} \tag{13}$$

For the Shipments, the interval $I$ is a shipper requirement that should be met through the sending of an IoTDataEvent to the smart contract every time this interval has elapsed. Consequently, the currentness indicates not only the quality of the data but also the degree to which one of the most important shipper requirements defined in the smart contract, the Shipment update interval $I$, has been met. Furthermore, $CurEvt_i$ at the IoTDataSource level is calculated using the same method; it is worth noting that the IoTDataSource has its own update interval, which can be different from the Shipment update interval.

Regarding the IoTDataSource, the currentness corresponds to the degree to which the IoTDataSource has met the update interval time requirement in the entire history of its related IoTDataEvents, including the last received IoTDataEvent. The currentness dimension helps the users in the choice of the IoTDataSources to be assigned to the Shipment; users will always choose the IoTDataSource with the highest currentness among the available IoTDataSources. The IoTDataSource currentness CurSrc is calculated as the average of all the IoTDataSource related IoTDataEvents:

$$CurSrc = \frac{\sum_{i=1}^{n} CurEvt_i}{n} \tag{14}$$
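As a sketch of the event-level currentness of Equation (13), from which the source-level average of Equation (14) is derived, the following Java method can be used; the timestamps are in seconds and the names are illustrative.

```java
/** Sketch of the IoTDataEvent currentness of Equation (13) (illustrative names). */
public final class Currentness {

    /**
     * @param previousTs  reception timestamp t_{i-1} of the previous event, in seconds
     * @param currentTs   reception timestamp t_i of the current event, in seconds
     * @param intervalSec update interval I defined in the smart contract, in seconds
     * @return CurEvt_i in [0, 1]
     */
    public static double curEvt(long previousTs, long currentTs, long intervalSec) {
        long expectedTs = previousTs + intervalSec;         // the event is expected at t_{i-1} + I
        long upperBoundTs = previousTs + 2 * intervalSec;   // t_{i+1} = t_{i-1} + 2 * I
        if (currentTs > previousTs && currentTs < upperBoundTs) {
            return 1.0 - (double) Math.abs(expectedTs - currentTs) / intervalSec;
        }
        return 0.0;                                         // received outside ]t_{i-1}, t_{i+1}[
    }
}
```

An event received exactly at the expected time gets a currentness of 1, which decreases linearly towards 0 at the bounds of the tolerated interval.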
From the Shipment perspective, the currentness indicates the degree to which the shipper update time interval requirement has been met for the Shipment by all its related IoTDataSources, during the Shipment-IoTDataSource association time periods. To measure the currentness performance of the Shipment-IoTDataSource association, the currentness calculated for this association, CurAssign, is saved in the Assignment object. The CurAssign is useful when the Shipment stakeholders need to investigate a low Shipment currentness, as it helps identify the Shipment related IoTDataSource(s) responsible for the low currentness value. The Shipment currentness CurShp corresponds to the average $CurAssign_i$ of all its $n$ related Assignment objects. The CurAssign is calculated as the average $CurEvt_j$ of the $nEvtAssign$ IoTDataEvents received from the IoTDataSource for the Shipment during their Assignment association:

$$CurShp = \frac{\sum_{i=1}^{n} CurAssign_i}{n}, \quad \text{where } CurAssign_i = \frac{\sum_{j=1}^{nEvtAssign} CurEvt_j}{nEvtAssign} \tag{15}$$

5.4.2. Currentness Incidents

There are two currentness control points: the reception of the IoTDataEvent by the stakeholder IS (shipper IS, carrier IS or consignee IS) and the reception of the IoTDataEvent by the smart contract. The non-reception of the IoTDataEvent by the stakeholder IS leads to a missing update on the smart contract side. The IoTDataSource is configured to send an IoTDataEvent every $n$ seconds. If this interval has elapsed and no new IoTDataEvent has been received from the IoTDataSource, the situation is considered a missing update problem. The missing update is not critical if the IoT update interval $I_{sc}$ of the smart contract is larger than $n$ seconds, because the smart contract generally does not wait for a new IoTDataEvent as long as this update interval has not expired. In contrast, if the update interval is equal to $n$ seconds, the stakeholder IS notifies the smart contract in the case of missing data. Once notified, the smart contract assigns a missing update related incident to the IoTDataSource owner. The origins of this kind of incident are multiple; for example, the IoTDataSource is not able to connect to the IoT network, the IoTDataSource has an internal problem or there is an IoT cloud data platform problem.

**6. The Distributed Architecture of the Traceability System**

This section presents the architecture of the proposed traceability solution and its main components: the blockchain smart contract and the IoT data sources. To respond to the identified criteria of secured traceability data, data qualification and transparency, we propose a distributed, secured and trusted architecture based on the use of blockchain smart contracts, as depicted in Figure 2. The main components of this architecture are a smart contract shared by all the stakeholders and the IoT data sources. The arrows in this figure indicate the data transmission directions.

**Figure 2. Distributed architecture of the traceability system.** (The figure shows the shipper, carrier and consignee nodes hosting the shared smart contract, the stakeholders' information systems (IS), the shipment connected objects accompanying the shipments between factories and warehouses, and the LPWAN gateway and IoT Cloud Data Platform (IoTCDP) transmitting the collected data to the shipper IS.)

For the smart contract component, we chose to work with a Hyperledger Fabric blockchain [34]. It is a permissioned blockchain that presents many advantages in comparison to other blockchains, among them: a node architecture based on the notion of organization, to establish a trust model more adapted to the enterprise context; the support of the Go, JavaScript and Java languages for writing smart contracts; and a parameterized consensus protocol [5].
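To give an idea of what such a Java smart contract can look like on Hyperledger Fabric, here is a minimal chaincode skeleton using the fabric-chaincode-java contract API. It is only an illustrative sketch: the addIoTEvent method name appears in our implementation (see Section 7.1), but its signature and the state layout shown here are assumptions, not the code of the actual traceability contract.

```java
import org.hyperledger.fabric.contract.Context;
import org.hyperledger.fabric.contract.ContractInterface;
import org.hyperledger.fabric.contract.annotation.Contract;
import org.hyperledger.fabric.contract.annotation.Default;
import org.hyperledger.fabric.contract.annotation.Transaction;

/** Illustrative skeleton of a traceability chaincode (not the actual contract). */
@Contract(name = "TraceabilityContract")
@Default
public final class TraceabilityContract implements ContractInterface {

    /** Records a raw IoT data event in the world state, keyed by source id and timestamp. */
    @Transaction(intent = Transaction.TYPE.SUBMIT)
    public void addIoTEvent(Context ctx, String srcId, long timestamp, String measuresJson) {
        String key = "EVT:" + srcId + ":" + timestamp;
        // In the real contract, the event is first qualified with the quality
        // dimensions of Section 5 before any incident is created.
        ctx.getStub().putStringState(key, measuresJson);
    }

    /** Reads back a previously recorded event, e.g., for audit purposes. */
    @Transaction(intent = Transaction.TYPE.EVALUATE)
    public String getIoTEvent(Context ctx, String srcId, long timestamp) {
        return ctx.getStub().getStringState("EVT:" + srcId + ":" + timestamp);
    }
}
```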
The smart contract is installed on top of a blockchain involving all the stakeholders. This smart contract holds all the rules about the collected traceability data management, the incident management and the IoT data qualification. Those rules have been validated by all the traceability system stakeholders. IoT data sources are assigned to each shipment. They are responsible for the field data collection about the shipment transport conditions. It is worth noting that the three stakeholders depicted in Figure 2 are given as examples. As many stakeholders as needed may be added to this architecture. The addition of stakeholders is enabled by the underlying Hyperledger Fabric based architecture [34]. In addition, the maximum number of stakeholders in the logistic chain context is limited; for example, in our use case, this number is of the order of tens of stakeholders. The stakeholders to be added to this architecture are those who need to participate in the traceability process. They are added before the creation of any shipment transport operation in which they will be involved.

_6.1. The Smart Contract_

Smart contracts are "trusted distributed applications" [34]. They are secured by the underlying blockchain and the peers' consensus mechanism. In the transport traceability context, we need a distributed and secured application to share traceability data among all the traceability process stakeholders and to ensure their agreement on the shared data quality and on the incidents created based on these data. We proposed in [5] to implement a lightweight IoT data qualification application and a traceability smart contract handling the whole shipment transport operation process. The implemented smart contract allowed the stakeholders to define all the transport condition terms, update the transport status and the transport related milestone statuses, integrate IoT data about the shipment transport operation progress and create both manual and automatic transport related incidents. The contractual constraints, negotiated between the stakeholders, are enshrined in the smart contract and should be respected by all the stakeholders. Any gap between those constraints and the data provided by a stakeholder results in a non-compliance incident created automatically by the smart contract. The contractual constraints are communicated to the smart contract by the shipper system at the shipment creation time.

In this article, we extend the traceability proposal presented in [5] to overcome two important limitations: (i) the IoT data qualification is centralized on the shipper side, and there is a lack of guarantees for the other stakeholders on the correct execution of the agreed IoT data quality calculation rules; and (ii) the lightweight IoT data qualification module is limited to data outlier detection. Therefore, we propose in this work to enhance the IoT data qualification through the implementation of the quality model presented in Section 5 into the traceability smart contract. This allows ensuring the stakeholders' agreement on the correct application of the data qualification rules. The data qualification module is also improved by the integration of the accuracy, completeness, consistency and currentness dimensions. IoTDataEvents that do not conform to the defined IoT quality model constraints generate DataQualityIncidents visible by all the stakeholders. They are not discarded but saved in the blockchain for audit purposes.
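This qualify-then-decide flow can be sketched as plain Java decision logic; the thresholds mirror the model of Section 5, while the class and method names are illustrative assumptions.

```java
/** Sketch of the incident decision applied to a qualified event (illustrative names). */
public final class IncidentDecision {

    public enum Decision { NONE, DATA_QUALITY_INCIDENT, SHIPMENT_INCIDENT }

    /**
     * @param eventQI          weighted event quality index (see Equation (16))
     * @param qualityThreshold minimum quality index agreed by the stakeholders
     * @param value            the received measure value
     * @param condMin          minimum of the agreed transport conditions interval
     * @param condMax          maximum of the agreed transport conditions interval
     */
    public static Decision decide(double eventQI, double qualityThreshold,
                                  double value, double condMin, double condMax) {
        if (eventQI < qualityThreshold) {
            // The event is still saved in the blockchain for audit, but it is
            // not used for the business (transport conditions) decisions.
            return Decision.DATA_QUALITY_INCIDENT;
        }
        if (value < condMin || value > condMax) {
            // Compliant quality but non-compliant transport conditions.
            return Decision.SHIPMENT_INCIDENT;
        }
        return Decision.NONE;
    }
}
```

For instance, with the 0.7 threshold and the weights of Table 2, a −20 °C reading (event QI of 0.55) yields a DataQualityIncident, whereas a timely 10 °C reading (event QI of 1) yields a ShipmentIncident for a 2–8 °C transport interval.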
As examples of decisions taken automatically by the smart contract based on the received events, Table 2 shows some temperature event values received by the smart contract, their Quality Indexes (QI) and the corresponding decisions. For these examples, we consider multiple IoT data sources with a manufacturer temperature specification interval of [0 °C, 50 °C]. Those data sources are assigned to a shipment with a temperature transport conditions interval of [2 °C, 8 °C]. The quality dimensions' weights are set to 4 for accuracy, 4 for consistency and 1 for currentness. If the event QI is below the quality index threshold (0.7), a DataQualityIncident is generated for the event. The event QI is calculated as follows:

$$EventQI = \frac{4 \cdot AccuracyQI + 4 \cdot ConsistencyQI + 1 \cdot CurrentnessQI}{9} \tag{16}$$

where 9 is the sum of the dimensions' weights (4 + 4 + 1).

**Table 2. Smart contract decision examples.**

| Received Temperature Event Values | Accuracy QI | Consistency QI | Currentness QI | Event QI | Smart Contract Decision |
|---|---|---|---|---|---|
| 10 °C | 1 | 1 | 1 | 1 | Create a ShipmentIncident |
| −20 °C | 0 | 1 | 1 | 0.55 | Create an accuracy DataQualityIncident |
| −20 °C many times from the same source | 0 | 1 | 1 | 0.55 | Create an accuracy DataQualityIncident and, finally, a completeness DataQualityIncident at the Source-Shipment Assignment level |
| Event1 of −20 °C from source 1 and Event2 of 10 °C from source 2 | 0 for Event1 and 1 for Event2 | 0.5 for both | 1 for both | 0.33 for Event1 and 0.77 for Event2 | Create a consistency and accuracy DataQualityIncident from Event1 and a ShipmentIncident from Event2 |
| 10 °C received late from one source | 1 | 1 | 0 | 0.88 | Create a ShipmentIncident |

_6.2. The IoT Data Sources_

In the proposed traceability architecture, the IoT data can be received from many IoT data sources. Each stakeholder can decide to assign an IoT data source that it owns to a shipment in which it has a stakeholder role, at any time during the shipment progress in the logistic chain. The only condition is that the IoT data source and the shipment have already been created in the smart contract. The assignment of an IoT data source to a shipment is for a limited period. Every data source assigned to a shipment sends IoT data about the shipment transport conditions at a fixed time interval defined in the shipment smart contract instance. If a data-related incident is detected by the smart contract, it is automatically assigned to the IoT data source owner declared in the smart contract. The smart contract has a detailed description of the IoT data source specifications, collected at the data source creation in the smart contract. This is a requirement for the correct application of the data quality measures.

The shipper in our context has a principal IoT data source, which is the shipment connected object accompanying the shipment. The role of this object is to collect data about the shipment transport conditions throughout the transport operation. To send the collected data to the shipper IS (Information System), the connected object uses an LPWAN (Low Power Wide Area Network) gateway, which transmits the received messages to the IoT Cloud Data Platform (IoTCDP) before their reception in the shipper IS. The shipper IS sends the received messages to the shipper node, including the connected object id of the messages.
This connected object id is used by the smart contract to link the received IoT messages to the right shipment in the smart contract. In this context, the data are pushed by the IoT object; the pull/push of data from/to the connected object is out of the scope of our work. The shipment connected object collects data about the shipment pickup, transport and delivery conditions. Each stakeholder can declare other IoT data sources, such as IoT data sources related to factories, warehouses, transport vehicles, etc. In general, every data source that can automatically collect and send measurements about the shipments can be declared by a stakeholder as an IoT data source. Moreover, all the IoT data sources, except the shipment connected object, help collect data about the shipment conditions in a specific segment of the transport operation. Only the shipment connected object that accompanies the shipment collects data about the shipment transport conditions during the whole transport operation.

**7. Evaluation**

The objectives of this section are: (i) to evaluate the proposed quality measures; (ii) to evaluate the impact of the IoT data quality module on the number of created incidents; and (iii) to evaluate the impact of the IoT data quality module on the IoT data event insertion time in the blockchain.

We evaluated our proposed quality measures to assess their pertinence and performance. We also monitored the number of quality incidents created, to highlight the impact of the quality module. The number of shipment incidents was also monitored, to emphasize the impact of the quality module on the business decisions. The IoT data event insertion time in the blockchain was also measured in our tests, first with the quality module activated and then with the quality module deactivated, in order to evaluate the impact of our proposed quality module on the data event insertion time and to ensure the final users that this time remains acceptable while ensuring the quality.

_7.1. Smart Contract Architecture_

For the implementation, we used the same architecture as in our previous work on traceability using smart contracts and IoT [5]. It is an architecture based on the use of Hyperledger Fabric as the blockchain implementation, with three peers (stakeholders): a shipper, a carrier and a consignee. On top of this blockchain, we implemented our traceability and IoT data qualification smart contract. The smart contract used in this evaluation was developed on top of a Hyperledger Fabric blockchain, using the Fabric Java framework. We used in this evaluation a Virtual Machine (VM) with the characteristics depicted in Table 3.

**Table 3. Test VM characteristics.**

| Characteristic | Details |
|---|---|
| OS | Ubuntu 18.04.4 desktop amd64 |
| CPU | 4 CPU Intel(R) Core™ i7-8565U |
| RAM | 8 GB |
| Virtual Disk | 50 GB |

Furthermore, we set the Hyperledger Fabric block creation timeout to 1 s and the maximum number of transactions per block to 15. This means that, after the reception of a new transaction, the system triggers the block creation either after a wait time of 1 s or after a total of 15 new transactions is reached. In addition, we used in this evaluation the Raft consensus algorithm, with a unique ordering service node [5]. In the existing traceability smart contract [5], we added many new methods, such as createDataSource and assignDataSource. The createDataSource method inserts the data source given as input in the blockchain.
The assignDataSource method assigns an existing IoT data source to an existing shipment, using their IDs. Based on the quality measures proposed in this article, we updated the addIoTEvent method with the following new functionalities: (i) calculate the event quality measures; and (ii) update the quality measures of the IoT data source and of the shipments related to this IoT data source.

_7.2. Evaluation Experimental Choices_

Due to a lack of real data to evaluate the proposed architecture in our use case, we chose to simulate our use case data with a well-known dataset in the IoT domain. The Intel Berkeley dataset is a collection of sensor data, collected by an Intel research team in the Intel Berkeley Research lab between 28 February and 5 April 2004 [35]. An example of the dataset content is depicted in Table 4.

**Table 4. Samples of the Intel Berkeley dataset.**

| Date | Time | Event ID | Sensor ID | Temperature | Humidity | Light | Voltage |
|---|---|---|---|---|---|---|---|
| 12 March 2004 | 16:29:04.084098 | 39302 | 1 | 21.8308 | 43.5855 | 165.6 | 2.53812 |
| 14 March 2004 | 15:45:11.669786 | 44974 | 2 | 26.9464 | 41.814 | 264.96 | 2.54901 |
| 19 March 2004 | 19:01:21.094445 | 59766 | 3 | 21.9092 | 45.1103 | 39.56 | 2.44412 |
| ... | ... | ... | ... | ... | ... | ... | ... |

To adapt this dataset to our context, we considered every sensor as an IoT data source. This gives us 54 data sources to be handled. For the shipments, we used every 24 h of sensor data collection as a shipment, which results in 2052 shipments (54 sensors multiplied by 38, the number of data collection days) for the whole dataset. Furthermore, we considered only the temperature measures in this evaluation, because temperature is the main measure for our use case, but the module can handle any other measure type.

We began the evaluation phase by defining the users' quality threshold requirements for all the data sources and shipments. We used the same threshold for the data sources, the shipments and the four quality dimensions. We made a series of tests by varying the defined threshold from 0 (no quality constraints) to 1 (strict quality), to show the impact of those thresholds on the number of created quality and shipment incidents. In Table 5, we establish a classification of the data quality indexes for our dimensions and objects. This classification helps in the presentation and the analysis of the evaluation results.

**Table 5. Quality indexes and thresholds classification.**

| Data Quality Index and Threshold Interval | Label | Code |
|---|---|---|
| [0, 0.5) | Poor quality | P |
| [0.5, 0.7) | Low quality | L |
| [0.7, 0.9) | Good quality | G |
| [0.9, 1] | High quality | H |

We chose the following weights for the quality dimensions based on their importance for the use case in the context of the medical equipment cold chain: a weight of 4 for the accuracy, the completeness and the consistency, which are the most important for our users, and a weight of 1 for the currentness, which is not as critical as the other dimensions, as explained in Section 5. For the shipment incidents, we chose an accepted temperature interval of 20 to 25 °C, based on the work of Hui et al. [36]. Beyond this temperature interval, if the received event quality is compliant with the shipment quality threshold, the event results in a shipment incident created for all the shipments that have an active assignment relationship with the event data source. There was no information in the dataset about the sensors' precision value.
Consequently, we chose to set this value to 0.5 °C, which is a common value for temperature sensors. In the following evaluation results, we did not take into account sensor 5, from which we did not see any event. We also ignored some other events with the sensor IDs 55, 56 and 58, because according to the dataset reference the number of sensors was only 54, as well as events coming from the same sensor with the same event number (113,474 events in the dataset). There were also 355 events in the dataset that we could not parse correctly, due to data presentation errors, and 526 incomplete lines, from which we could not get all the required event data. This results in a total of 2,199,327 events integrated correctly in our quality tests, out of a total of 2,313,682 events present in the dataset.

We used the event timestamp in the dataset as the event reception timestamp in this evaluation. Moreover, we used this timestamp to order and identify the events, for shipment incident creation and closing purposes. This choice resulted in 10,299 duplicated events, because they had the same timestamp as previously received events from the same sensor. Furthermore, we use the quality threshold to define the stakeholders' requirement for the quality indexes of the events to be integrated in the data source or sent to the shipments. Every event with a quality index below the defined quality threshold results in a quality incident and is not used to create shipment incidents in case of non-compliance with the agreed transport conditions. If the quality incident is detected by the data source, the latter does not send the event to its related shipments.

_7.3. Results Concerning the Accuracy, Completeness and Currentness Dimensions_

Firstly, regarding the accuracy, for the sensors used to collect the Intel Berkeley dataset, a valid temperature value should be in the range of 0–50 °C according to [37]; otherwise, we consider the temperature as inaccurate. Regarding the completeness, we used the following parameters: the update interval of 31 s; the maximum timestamp among the already integrated event timestamps; and the IoT data source and shipment start timestamps. We set the IoT data source start timestamp at 28 February 2004 at 00:00:00 a.m.; for the shipments, the start timestamp is the shipment date with the start time set at 00:00:00 a.m. and the end time at 11:59:59 p.m. Concerning the currentness, we used the measure interval of 31 s given for the dataset. We used this same update interval for the data sources and the shipments. In our tests, we did not consider the difference that could exist between the event reception timestamp and the event production timestamp. This difference could affect the tests and needs to be addressed in future works.

Table 6 shows the classification of the quality results obtained for the sensors (data sources), regarding the different quality dimensions defined in this work and using multiple quality threshold values. Those results show that, among the 53 retained sensors, 42 have a good accuracy, 29 have a poor completeness and 29 have a low currentness.

**Table 6. Sources quality evaluation results.**

| Quality Threshold | Accuracy | Completeness | Currentness | Quality Index |
|---|---|---|---|---|
| 0, 0.5, 0.7, 0.9 and 1 | 0P 1L 43G 9H | 29P 22L 2G 0H | 1P 29L 20G 3H | 0P 38L 15G 0H |

Regarding the global sensor quality index, most sensors (38) have a low quality index.
If the quality threshold is set to a good quality value (e.g., 0.7), only 15 sensors are usable and, in the case of a high quality threshold (e.g., 0.9), there is no usable sensor in this dataset. Thanks to the quality module, all the events with a quality incident problem are not integrated into the shipments assigned to the event data source, which keeps the shipment event quality at the level fixed and agreed by all the stakeholders. For example, in the case of sensor 45, when we set the quality threshold at 1, 9% of the events received from this sensor were not integrated into the source related shipments, due to their quality problems.

In Table 7, we can clearly see the impact of the threshold choice on the percentage of quality incidents. This percentage represents the events that do not respect the agreed quality thresholds. The events are filtered at the data source level according to the selected quality threshold value.

**Table 7. Quality and shipment incident results according to the quality threshold.**

| Shipments Quality Threshold | Percentage of Quality Incidents | Percentage of Shipment Incidents |
|---|---|---|
| 0 | 0 | 0.21 |
| 0.5 | 25 | 0.4 |
| 0.7, 0.9 and 1 | 21 | 0.3 |

Consequently, the percentage of quality incidents drops from around 25% of the total received events for a threshold at 0.5 to around 21% when the quality threshold is greater than or equal to 0.7. The evolution of the percentage of shipment incidents is not linear, due to the evolution of the number of shipments depending on the selected quality threshold, as depicted in Table 8.

**Table 8. Shipment event number evolution.**

| Shipments Quality Threshold | Number of Shipments Without Any Event | Number of Shipments with at Least One Event |
|---|---|---|
| 0 | 421 | 1631 |
| 0.5, 0.7, 0.9 and 1 | 821 | 1231 |

Regarding the shipment quality results, it is important to note that there were 421 shipments for which we did not receive any event, no matter what the quality threshold value was. This number increases to 821 shipments when we set the quality threshold at 0.5, 0.7, 0.9 or 1, as depicted in Table 8. Consequently, we did not consider those shipments in the following shipment quality results, because all our quality dimension calculations are based on the event values and timestamps. Table 9 shows that the percentage of shipments with a high accuracy level increases as the shipment quality threshold increases, and the same holds for the currentness. The percentage of events with a poor completeness index increases, due to the events blocked by the quality threshold at the data source level.

**Table 9. Shipments quality evaluation results.**

| Quality Threshold | Accuracy (in %) | Completeness (in %) | Currentness (in %) | Quality Index (in %) |
|---|---|---|---|---|
| 0 | 26P 1L 1G 72H | 48P 29L 19G 4H | 18P 30L 40G 13H | 27P 16L 44G 13H |
| 0.5, 0.7, 0.9 and 1 | 0P 0L 0G 100H | 64P 19L 17G 1H | 12P 32L 41G 15H | 2P 47L 42G 10H |

The shipment quality index is also improved by the quality threshold increase; for example, we went from 27% of poor data quality shipments when the quality threshold was at 0 to only 2% when the quality threshold was at 0.5 or above.

_7.4. Results Concerning the Consistency Dimension_

For the consistency evaluation, we selected four groups of sensors placed in proximity zones, as depicted in Figure 3: {1, 2, 3}, {11, 12, 13}, {15, 16, 17} and {49, 50, 51}. For each group, we linked each sensor to all the shipments related to the sensors of the same group.

**Figure 3. Intel Berkeley sensors arrangement diagram.**
The total number of shipments related to the selected groups was 456 (12 sensors multiplied by 38 data collection days). There were 84 shipments related to those groups for which we did not receive any event from the sensors, whatever the quality threshold value. This number increases to 171 shipments when we set the quality threshold at 0.5, 0.7, 0.9 or 1, due to the event quality filtering at the data source level. Furthermore, we set in this evaluation the tolerance time interval to 31 s and the consistency tolerance temperature to 0.5 °C. This means that two events are considered eligible for the consistency test only when their timestamp difference is lower than 31 s, and they are considered concordant if their reported temperature difference is lower than 0.5 °C.

Table 10 summarizes the consistency evaluation results for the selected sensor groups. The group {1, 2, 3} has at least 76% of its shipments with a high consistency index. Those results show that the events reported by the group {1, 2, 3} were more concordant than those reported by the other groups.

**Table 10. Shipments consistency evaluation results.**

| Quality Threshold | Sensors Group | Consistency (in %) |
|---|---|---|
| 0 | {1, 2, 3} | 0P 3L 21G 76H |
| | {11, 12, 13} | 0P 0L 73G 27H |
| | {15, 16, 17} | 0P 0L 74G 26H |
| | {49, 50, 51} | 0P 0L 68G 32H |
| 0.5, 0.7, 0.9 and 1 | {1, 2, 3} | 0P 0L 15G 85H |
| | {11, 12, 13} | 0P 0L 62G 38H |
| | {15, 16, 17} | 0P 0L 88G 12H |
| | {49, 50, 51} | 0P 0L 81G 19H |

The consistency results for the selected groups were generally good to high, except for 3% of the shipments related to the group {1, 2, 3} when the quality threshold was at 0. This shows the impact of the quality threshold on the consistency quality results.

_7.5. Impact of the IoT Data Quality Module on the IoT Data Event Insertion_

For the smart contract IoT data quality evaluation, and due to our blockchain architecture response time (around 1 s per operation), we selected a sample of 3000 events from the dataset. This sample corresponds to the first 1000 events received from each of the sensors 1–3 on 28 February 2004.

The average response time of addIoTEvent using the 3000-event data sample was around 1.7 s, with an average standard deviation of 0.174 s. When we disabled the quality module, with the same data sample, the average response time of this method dropped to around 1.6 s, with an average standard deviation of 0.158 s. This result shows that our quality module adds only around 0.1 s to the event integration time. The additional quality module cost is acceptable regarding the data quality improvement brought by this module.

_7.6. Related Works Discussion_

As shown in Section 4, the works of Casado-Vara et al. [31], Hang et al. [32] and Leal et al. [14] are the closest to our work. Casado-Vara et al. [31] proposed a vote method to address the accuracy and consistency problems. Their vote method is based on game theory, to find a cooperative temperature among all the used temperature sensors. It is not applicable in our context, because we have different data sources owned by different stakeholders, and we need to report all the data sent by those data sources for audit purposes. In the case of a discrepancy between the stakeholders' data sources related to the same shipment, we need to trace this discrepancy and, if the quality index goes below the fixed quality threshold, a corresponding quality incident is created by the smart contract.
However, the vote method in [31] could be used in the very specific case of many shipments with similar data sources, the same shipper and the same carrier, from which we want to extract a global measure trend.

Hang et al. [32] proposed a Hyperledger Fabric based architecture. This blockchain implementation choice is perfectly adapted to our B2B use case, and we used the same in our proposed architecture. However, they only addressed the accuracy problem (outlier filtering), using the Kalman filter. Besides, the standard version of the Kalman filter did not meet our needs, because the outlier interval limits are not fixed and evolve according to the received data. This can be problematic when the Kalman filter goes into fail mode, as stated by Berman [38]. The usage of an assisted version of the Kalman filter needs to be explored in future work.

Leal et al. [14] proposed an Ethereum traceability-based architecture. Their Ethereum choice is justified by the solution monetization goal. However, in our use case, we chose to work with Hyperledger Fabric, which does not need any cryptocurrency management and has an organization architecture more adapted to our B2B logistic chain context in terms of data access level management. In addition, Leal et al. [14] addressed the accuracy, consistency and currentness problems using probability distribution methods, but they did not provide further details about the application and evaluation of those methods. Furthermore, the authors of [14] proposed to filter the data both inside and outside the blockchain, which is a good idea; we already have the inside-blockchain data filtering in our architecture, and we need to explore the addition of a first level of data filtering outside the blockchain in future works. The outside-blockchain filtering needs to be done carefully, because it should not prevent the blockchain from getting the required traceability data; although in some cases these data will be outliers, they need to be traced for further audit purposes.

_7.7. Conclusions on the Evaluation_

This evaluation section demonstrates the pertinence of the proposed IoT data quality module and the impact of this module on the data to be used in the traceability smart contract. The entire data qualification process is executed in a secured and distributed application, in which the users agree on every datum to be included, on its qualification process and on the decisions to be taken based on this datum.

It is worth noting that the choice of the quality thresholds has a huge impact on the data filtering process set at the data source level. The events with a quality index below the defined quality threshold will never be sent to the shipment. This leads directly to data loss at the shipment level. For this reason, stakeholders may prefer selecting a good quality threshold ([0.7, 0.9)) rather than a high one ([0.9, 1]). Although the proposed architecture evaluation shows encouraging results, this architecture still needs to be tested in a real-life scenario, with more data and stakeholders, to get more information about its real performance.

**8. Conclusions and Future Works**

In this article, we propose a distributed architecture and a smart contract to enhance the IoT data quality in the context of logistic traceability. The proposed architecture uses an IoT data quality model with four main data quality dimensions: accuracy, currentness, completeness and consistency.
We also propose an approach for the calculation of the selected data quality dimensions. The dimension calculation results are used in our traceability smart contract to set and control the data quality of events, data sources, shipments and shipment–data source associations. The proposed architecture ensures the stakeholders' agreement on the data quality calculation and application rules, and consequently their trust in the decisions taken automatically by the traceability smart contract. We evaluated our proposed IoT data quality assessment architecture on a publicly available dataset, and the results show the relevance of this architecture.

This work could be extended by evaluating the scalability of the proposed architecture when adding more stakeholders. The approach used to calculate the quality dimensions could be combined with algorithms such as DBSCAN [39] or an assisted version of the Kalman filter [40] to improve the quality index calculation. The blockchain data load could be alleviated by adding to this architecture a first level of data filtering on each stakeholder's side. The IoT data sources' security and interoperability also need to be addressed. Finally, the architecture needs to be evaluated in a real-life scenario to ensure its performance in the context of logistic chain traceability.

**Author Contributions: Conceptualization, M.A. and C.T.; methodology, M.A., C.T., S.C. and A.B.; software, M.A.; validation, C.T., M.O., S.C. and A.B.; resources, M.A., C.T., S.C. and A.B.; data curation, M.A.; writing—original draft preparation, M.A.; writing—review and editing, M.A., C.T., M.O., S.C. and A.B.; supervision, C.T., M.O., S.C. and A.B.; and project administration, C.T. and M.O. All authors have read and agreed to the published version of the manuscript.**

**Funding: This research was funded by ALIS.**

**Data Availability Statement: Publicly available datasets were analyzed in this study. These data can** [be found here: http://db.csail.mit.edu/labdata/labdata.html.](http://db.csail.mit.edu/labdata/labdata.html)

**Conflicts of Interest: The authors declare no conflict of interest.**

**References**

1. Hasan, H.; AlHadhrami, E.; AlDhaheri, A.; Salah, K.; Jayaraman, R. Smart contract-based approach for efficient shipment [management. Comput. Ind. Eng. 2019, 136, 149–159. [CrossRef]](http://doi.org/10.1016/j.cie.2019.07.022)
2. Bumblauskas, D.; Mann, A.; Dugan, B.; Rittmer, J. A blockchain use case in food distribution: Do you know where your food has [been? Int. J. Inf. Manag. 2020, 52, 102008. [CrossRef]](http://dx.doi.org/10.1016/j.ijinfomgt.2019.09.004)
3. Casino, F.; Kanakaris, V.; Dasaklis, T.K.; Moschuris, S.; Rachaniotis, N.P. Modeling food supply chain traceability based on blockchain technology. In Proceedings of the 9th IFAC Conference on Manufacturing Modelling, Management and Control MIM [2019, Berlin, Germany, 28–30 August 2019. [CrossRef]](http://dx.doi.org/10.1016/j.ifacol.2019.11.620.)
4. Wen, Q.; Gao, Y.; Chen, Z.; Wu, D. A Blockchain-based Data Sharing Scheme in The Supply Chain by IIoT. In Proceedings of the 2019 IEEE International Conference on Industrial Cyber Physical Systems (ICPS), Taipei, Taiwan, 6–9 May 2019; pp. 695–700. [[CrossRef]](http://dx.doi.org/10.1109/ICPHYS.2019.8780161)
5. Ahmed, M.; Taconet, C.; Ould, M.; Chabridon, S.; Bouzeghoub, A. Enhancing B2B supply chain traceability using smart contracts and IoT.
In Proceedings of the Hamburg International Conference of Logistics (HICL) 2020, Online, hosted by the Hamburg [University of Technology, 23–25 September 2020; Volume 29, pp. 559–589. [CrossRef]](http://dx.doi.org/10.15480/882.3110)

6. Lee, Y.W.; Strong, D.M.; Kahn, B.K.; Wang, R.Y. AIMQ: A methodology for information quality assessment. Inf. Manag. 2002, [40, 133–146. [CrossRef]](http://dx.doi.org/10.1016/S0378-7206(02)00043-5)
7. Karkouch, A.; Mousannif, H.; Al Moatassime, H.; Noël, T. Data quality in internet of things: A state-of-the-art survey. J. Netw. [Comput. Appl. 2016, 73, 57–81. [CrossRef]](http://dx.doi.org/10.1016/j.jnca.2016.08.002)
8. Suciu, G.; Nădrag, C.; Istrate, C.; Vulpe, A.; Ditu, M.; Subea, O. Comparative Analysis of Distributed Ledger Technologies. In [Proceedings of the 2018 Global Wireless Summit (GWS), Chiang Rai, Thailand, 25–28 November 2018; pp. 370–373. [CrossRef]](http://dx.doi.org/10.1109/GWS.2018.8686563)
9. Pournader, M.; Shi, Y.; Seuring, S.; Koh, S.L. Blockchain applications in supply chains, transport and logistics: A systematic [review of the literature. Int. J. Prod. Res. 2020, 58, 2063–2081. [CrossRef]](http://dx.doi.org/10.1080/00207543.2019.1650976)
10. Issaoui, Y.; Khiat, A.; Bahnasse, A.; Ouajji, H. Smart logistics: Study of the application of blockchain technology. In Proceedings of the 9th International Conference on Current and Future Trends of Information and Communication Technologies in Healthcare [(ICTH-2019), Coimbra, Portugal, 4–7 November 2019. [CrossRef]](http://dx.doi.org/10.1016/j.procs.2019.09.467)
11. Wang, R.Y.; Strong, D.M. Beyond Accuracy: What Data Quality Means to Data Consumers. J. Manag. Inf. Syst. 1996, 12, 5–33.
12. [ISO 25012: Quality of Data Product. Available online: https://iso25000.com/index.php/en/iso-25000-standards/iso-25012](https://iso25000.com/index.php/en/iso-25000-standards/iso-25012) (accessed on 26 December 2020).
13. [Liu, C.; Nitschke, P.; Williams, S.; Zowghi, D. Data quality and the Internet of Things. Computing 2019. [CrossRef]](http://dx.doi.org/10.1007/s00607-019-00746-z)
14. Leal, F.; Chis, A.E.; Caton, S.; González-Vélez, H.; García-Gómez, J.M.; Durá, M.; Sánchez-García, A.; Sáez, C.; Karageorgos, A.; Gerogiannis, V.C.; et al. Smart Pharmaceutical Manufacturing: Ensuring End-to-End Traceability and Data Integrity in Medicine [Production. Big Data Res. 2021, 24, 100172. [CrossRef]](http://dx.doi.org/10.1016/j.bdr.2020.100172)
15. Byabazaire, J.; O'Hare, G.; Delaney, D. Data Quality and Trust: Review of Challenges and Opportunities for Data Sharing in IoT. [Electronics 2020, 9, 2083. [CrossRef]](http://dx.doi.org/10.3390/electronics9122083)
16. Li, F.; Nastic, S.; Dustdar, S. Data Quality Observation in Pervasive Environments. In Proceedings of the 2012 IEEE 15th International Conference on Computational Science and Engineering, Paphos, Cyprus, 5–7 December 2012; pp. 602–609.
17. Sicari, S.; Rizzardi, A.; Miorandi, D.; Cappiello, C.; Coen-Porisini, A. A secure and quality-aware prototypical architecture for the [Internet of Things. Inf. Syst. 2016, 58, 43–55. [CrossRef]](http://dx.doi.org/10.1016/j.is.2016.02.003)
18. Kuemper, D.; Iggena, T.; Toenjes, R.; Pulvermueller, E. Valid.IoT: A Framework for Sensor Data Quality Analysis and Interpolation. In Proceedings of the 9th ACM Multimedia Systems Conference, MMSys '18, Amsterdam, The Netherlands, 12–15 June 2018; Association for Computing Machinery: New York, NY, USA, 2018; pp. 294–303.
[CrossRef]](http://dx.doi.org/10.1145/3204949.3204972)

19. Kolomvatsos, K. A distributed, proactive intelligent scheme for securing quality in large scale data processing. Computing 2019, [101, 1687–1710. [CrossRef]](http://dx.doi.org/10.1007/s00607-018-0683-9)
20. Kara, M.; Lamouchi, O.; Ramdane-Cherif, A. A Quality Model for the Evaluation AAL Systems. In Proceedings of the 7th International Conference on Current and Future Trends of Information and Communication Technologies in Healthcare (ICTH-2017), [Lund, Sweden, 18–20 September 2017. [CrossRef]](http://dx.doi.org/10.1016/j.procs.2017.08.354)
21. Erazo-Garzon, L.; Erraez, J.; Illescas-Peña, L.; Cedillo, P. A Data Quality Model for AAL Systems. In Information and Communication Technologies of Ecuador (TIC.EC); Fosenca, C.E., Rodríguez Morales, G., Orellana Cordero, M., Botto-Tobar, M., Crespo Martínez, E., Patiño León, A., Eds.; Springer International Publishing: Cham, Switzerland, 2020; pp. 137–152.
22. Karkouch, A.; Mousannif, H.; Al Moatassime, H.; Noel, T. A model-driven architecture-based data quality management framework for the internet of Things. In Proceedings of the 2016 IEEE 2nd International Conference on Cloud Computing Technologies and Applications (CloudTech), Marrakech, Morocco, 24–26 May 2016; pp. 252–259.
23. Fagúndez, S.; Fleitas, J.; Marotta, A. Data Stream Quality Evaluation for the Generation of Alarms in the Health Domain. J. Intell. [Syst. 2015, 24, 361–369. [CrossRef]](http://dx.doi.org/10.1515/jisys-2014-0166)
24. Gu, X.; Peng, J.; Yu, W.; Cheng, Y.; Jiang, F.; Zhang, X.; Huang, Z.; Cai, L. Using blockchain to enhance the security of fog-assisted crowdsensing systems. In Proceedings of the 2019 IEEE 28th International Symposium on Industrial Electronics (ISIE), Vancouver, [BC, Canada, 12–14 June 2019; pp. 1859–1864. [CrossRef]](http://dx.doi.org/10.1109/ISIE.2019.8781332)
25. Nguyen, D.; Ali, M.I. Enabling On-Demand Decentralized IoT Collectability Marketplace using Blockchain and Crowdsensing. [In Proceedings of the 2019 Global IoT Summit (GIoTS), Aarhus, Denmark, 17–21 June 2019; pp. 1–6. [CrossRef]](http://dx.doi.org/10.1109/GIOTS.2019.8766346)
26. [Wei, L.; Wu, J.; Long, C. A Blockchain-Based Hybrid Incentive Model for Crowdsensing. Electronics 2020, 9, 215. [CrossRef]](http://dx.doi.org/10.3390/electronics9020215)
27. Cheng, J.; Long, H.; Tang, X.; Li, J.; Chen, M.; Xiong, N. A Reputation Incentive Mechanism of Crowd Sensing System Based on Blockchain. In Proceedings of the 6th International Conference on Artificial Intelligence and Security (ICAIS 2020), Hohhot, [China, 17–20 July 2020; Springer: Singapore, 2020; pp. 695–706. [CrossRef]](http://dx.doi.org/10.1007/978-981-15-8086-4_65)
28. Huang, J.; Kong, L.; Dai, H.; Ding, W.; Cheng, L.; Chen, G.; Jin, X.; Zeng, P. Blockchain-Based Mobile Crowd Sensing in Industrial [Systems. IEEE Trans. Ind. Inform. 2020, 16, 6553–6563. [CrossRef]](http://dx.doi.org/10.1109/TII.2019.2963728)
29. Zou, S.; Xi, J.; Wang, H.; Xu, G. CrowdBLPS: A Blockchain-Based Location-Privacy-Preserving Mobile Crowdsensing System. [IEEE Trans. Ind. Inform. 2020, 16, 4206–4218. [CrossRef]](http://dx.doi.org/10.1109/TII.2019.2957791)
30. Javaid, A.; Zahid, M.; Ali, I.; Khan, R.; Noshad, Z.; Javaid, N. Reputation System for IoT Data Monetization using Blockchain.
In Proceedings of the 14th International Conference on Broad-Band Wireless Computing, Communication and Applications (BWCCA), Antwerp, Belgium, 7–9 November 2019; Lecture Notes in Networks and Systems Series; Springer: Cham, Switzerland, [2019; Volume 97, pp. 173–184. [CrossRef]](http://dx.doi.org/10.1007/978-3-030-33506-9_16)

31. Casado-Vara, R.; de la Prieta, F.; Prieto, J.; Corchado, J.M. Blockchain Framework for IoT Data Quality via Edge Computing. In Proceedings of the BlockSys'18: 1st Workshop on Blockchain-Enabled Networked Sensor Systems, Shenzhen, China, 4 November [2018; Association for Computing Machinery: New York, NY, USA, 2018; pp. 19–24. [CrossRef]](http://dx.doi.org/10.1145/3282278.3282282)
32. Hang, L.; Ullah, I.; Kim, D.H. A secure fish farm platform based on blockchain for agriculture data integrity. Comput. Electron. [Agric. 2020, 170, 105251. [CrossRef]](http://dx.doi.org/10.1016/j.compag.2020.105251)
33. Mary, I.P.S.; Arockiam, L. Imputing the missing data in IoT based on the spatial and temporal correlation. In Proceedings of the 2017 IEEE International Conference on Current Trends in Advanced Computing (ICCTAC), Bangalore, India, 2–3 March 2017; pp. 1–4.
34. Androulaki, E.; Barger, A.; Bortnikov, V.; Cachin, C.; Christidis, K.; De Caro, A.; Enyeart, D.; Ferris, C.; Laventman, G.; Manevich, Y.; et al. Hyperledger Fabric: A Distributed Operating System for Permissioned Blockchains. In Proceedings of the Thirteenth [EuroSys Conference—EuroSys '18, Porto, Portugal, 23–26 April 2018; pp. 1–15. [CrossRef]](http://dx.doi.org/10.1145/3190508.3190538)
35. [Madden, S. Intel Lab Data. Available online: http://db.csail.mit.edu/labdata/labdata.html (accessed on 15 December 2020).](http://db.csail.mit.edu/labdata/labdata.html)
36. Hui, Z.; Fred, B.; Ed, A.; Yongchao, Z.; Darryl, D.; Xiang, Z.; Maohui, L. Reducing building over-cooling by adjusting HVAC supply airflow setpoints and providing personal comfort systems. In Proceedings of the 15th Conference of the International Society of Indoor Air Quality & Climate (ISIAQ), Philadelphia, PA, USA, 22–27 July 2018.
37. [MPR/MIB User's Manual. Available online: http://www-db.ics.uci.edu/pages/research/quasar/MPR-MIB%20Series%20User%20Manual%207430-0021-06_A.pdf (accessed on 6 January 2021).](http://www-db.ics.uci.edu/pages/research/quasar/MPR-MIB%20Series%20User%20Manual%207430-0021-06_A.pdf)
38. Berman, Z. Outliers rejection in Kalman filtering—Some new observations. In Proceedings of the 2014 IEEE/ION Position, [Location and Navigation Symposium—PLANS 2014, Monterey, CA, USA, 5–8 May 2014; pp. 1008–1013. [CrossRef]](http://dx.doi.org/10.1109/PLANS.2014.6851466)
39. Ester, M.; Kriegel, H.P.; Sander, J.; Xu, X. A Density-Based Algorithm for Discovering Clusters in Large Spatial Databases with Noise. In Proceedings of the Second International Conference on Knowledge Discovery and Data Mining KDD'96, Portland, OR, USA, 2–4 August 1996; Simoudis, E., Han, J., Fayyad, U., Eds.; AAAI Press: Portland, OR, USA, 1996; pp. 226–231.
40. [Kalman, R.E. A New Approach to Linear Filtering and Prediction Problems. J. Basic Eng. 1960, 82, 35–45. [CrossRef]](http://dx.doi.org/10.1115/1.3662552)
{ "disclaimer": "Notice: Paper or abstract available at https://pmc.ncbi.nlm.nih.gov/articles/PMC8005206, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "CCBY", "status": "GOLD", "url": "https://www.mdpi.com/1424-8220/21/6/2239/pdf?version=1616563883" }
2021
[ "JournalArticle" ]
true
2021-03-01T00:00:00
[ { "paperId": "194575920b0e5c8241f3328e86fb1d1961098295", "title": "Smart Pharmaceutical Manufacturing: Ensuring End-to-End Traceability and Data Integrity in Medicine Production" }, { "paperId": "02de8b8ad8da33ae3c31cd497f604e0cd36df874", "title": "Data Quality and Trust: Review of Challenges and Opportunities for Data Sharing in IoT" }, { "paperId": "b44cbf45a571973780b638712f1bad9f13885e1b", "title": "Enhancing B2B supply chain traceability using smart contracts and IoT" }, { "paperId": "b9344230b8a0f59e2604c04b5939466a3c6314c8", "title": "A blockchain use case in food distribution: Do you know where your food has been?" }, { "paperId": "52086111ab99095fa5f6dda8832ea51b8b2e500c", "title": "CrowdBLPS: A Blockchain-Based Location-Privacy-Preserving Mobile Crowdsensing System" }, { "paperId": "17cfa4c5d53735dba4a4ed4239a9b4e2ddb0b57d", "title": "A secure fish farm platform based on blockchain for agriculture data integrity" }, { "paperId": "6f9d0b9bbec43810bff5d1aa5ce611e4ec74836e", "title": "A Blockchain-Based Hybrid Incentive Model for Crowdsensing" }, { "paperId": "145709e8c6b5a84c1b2164a4510a13a511099de4", "title": "Blockchain-Based Mobile Crowd Sensing in Industrial Systems" }, { "paperId": "03eab3fe6012f30649c189e72f9c71e1d3778ccc", "title": "A Data Quality Model for AAL Systems" }, { "paperId": "bdfd9dbc612de159e42efe1688b3694c0a8e2365", "title": "Reputation System for IoT Data Monetization Using Blockchain" }, { "paperId": "6eec7163206b1ce96142c997347d600892a6eaab", "title": "Smart contract-based approach for efficient shipment management" }, { "paperId": "b5680cb1a98e30d061cc6ba1bda8b5f2b6ddea71", "title": "Blockchain applications in supply chains, transport and logistics: a systematic review of the literature" }, { "paperId": "515193e0624c4b6583e48c098e16551cb7a12e0c", "title": "Data quality and the Internet of Things" }, { "paperId": "5f3628418c7feb81a410bb01d256d12b162d2d02", "title": "Enabling On-Demand Decentralized IoT Collectability Marketplace using Blockchain and Crowdsensing" }, { "paperId": "a1eee1ef44305cbf9cced3dd8bd20260cf930f07", "title": "Using blockchain to enhance the security of fog-assisted crowdsensing systems" }, { "paperId": "750e7b8fca5c29ee9597a66927b237a94e02a8dc", "title": "A Blockchain-based Data Sharing Scheme in The Supply Chain by IIoT" }, { "paperId": "365b86d2757ac753e8f8fa2d4a4d03649ae92299", "title": "A distributed, proactive intelligent scheme for securing quality in large scale data processing" }, { "paperId": "5b8e9b3546d1785341003fbc6146ad1882c86130", "title": "Blockchain framework for IoT data quality via edge computing" }, { "paperId": "cbdc83b5b1d096fcf362e63b595443d7286d856c", "title": "Comparative Analysis of Distributed Ledger Technologies" }, { "paperId": "c053f1163d56efec3207b69820fc2c3e0590b3ab", "title": "Reducing building over-cooling by adjusting HVAC supply airflow setpoints and providing personal comfort systems" }, { "paperId": "a111f604876b3f291578bf29b865cb34c74e501f", "title": "Valid.IoT: a framework for sensor data quality analysis and interpolation" }, { "paperId": "1b4c39ba447e8098291e47fd5da32ca650f41181", "title": "Hyperledger fabric: a distributed operating system for permissioned blockchains" }, { "paperId": "2e2a40965a593370e3ef681649053dcf56c0031a", "title": "Imputing the missing data in IoT based on the spatial and temporal correlation" }, { "paperId": "823f01704db0b8ad0ef5106a50925504be91cb4b", "title": "Data quality in internet of things: A state-of-the-art survey" }, { "paperId": 
"2facd1ce6754b44a660e90ea48110462210dd63e", "title": "A secure and quality-aware prototypical architecture for the Internet of Things" }, { "paperId": "adc43198003787ff3cf0425dfe645424f714b0c6", "title": "A model-driven architecture-based data quality management framework for the internet of Things" }, { "paperId": "1eefedddf29f648d0bf1bb0e21044218cc564b4a", "title": "Data Stream Quality Evaluation for the Generation of Alarms in the Health Domain" }, { "paperId": "e1f9bd356faf6dde103d061a72f0f7a338f86127", "title": "Outliers rejection in Kalman filtering — Some new observations" }, { "paperId": "3c967f91408413b7d8ab1d7ac1f85612474dc7f8", "title": "Data Quality Observation in Pervasive Environments" }, { "paperId": "4e82d1fe3066ad65970434c7b1d0a10c3443e94d", "title": "AIMQ: a methodology for information quality assessment" }, { "paperId": "5c8fe9a0412a078e30eb7e5eeb0068655b673e86", "title": "A Density-Based Algorithm for Discovering Clusters in Large Spatial Databases with Noise" }, { "paperId": "b057cc625984119d48846dbf08f30b565f8c263d", "title": "Beyond Accuracy: What Data Quality Means to Data Consumers" }, { "paperId": null, "title": "Quality of Data Product" }, { "paperId": "da87e8e8699fe05ee2c954b49e1b93fdeaed28e9", "title": "A Reputation Incentive Mechanism of Crowd Sensing System Based on Blockchain" }, { "paperId": "7d05797d0b7c96d1a7b43e50775c15b788a64c3e", "title": "Modeling food supply chain traceability based on blockchain technology" }, { "paperId": "7de2516008bb3d6dc8e4e4cc251d3bd837907078", "title": "Smart logistics: Study of the application of blockchain technology" }, { "paperId": "4261315e89f886c44e7a3f9b25aca7e4768cb798", "title": "A Quality Model for the Evaluation AAL Systems" }, { "paperId": "255a77422b1da74da05d1714b7875356187385bd", "title": "A New Approach to Linear Filtering and Prediction Problems" }, { "paperId": null, "title": "MPR / MIB User ’ s Manual" }, { "paperId": null, "title": "Intel Lab Data" } ]
23,393
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/021d12b8bd39c09f273ee6e30fc70916bf3e52ca
[ "Computer Science" ]
0.855219
A Distributed Path Query Engine for Temporal Property Graphs
021d12b8bd39c09f273ee6e30fc70916bf3e52ca
IEEE/ACM International Symposium on Cluster, Cloud and Internet Computing
[ { "authorId": "1491748089", "name": "S. Ramesh" }, { "authorId": "1491747900", "name": "Animesh Baranawal" }, { "authorId": "1761220", "name": "Yogesh L. Simmhan" } ]
{ "alternate_issns": null, "alternate_names": [ "Clust Comput Grid", "CCGRID", "IEEE/ACM Int Symp Clust Cloud Grid Comput", "IEEE/ACM International Symposium Cluster, Cloud and Grid Computing", "Cluster Computing and the Grid", "IEEE/ACM Int Symp Clust Cloud Internet Comput" ], "alternate_urls": null, "id": "57f970eb-366a-4bfa-aa06-2ff70d834806", "issn": null, "name": "IEEE/ACM International Symposium on Cluster, Cloud and Internet Computing", "type": "conference", "url": "http://www.buyya.com/ccgrid/" }
Property graphs are a common form of linked data, with path queries used to traverse and explore them for enterprise transactions and mining. Temporal property graphs are a recent variant where time is a first-class entity to be queried over, and their properties and structure vary over time. These are seen in social, telecom and transit networks. However, current graph databases and query engines have limited support for temporal relations among graph entities, no support for time-varying entities and/or do not scale on distributed resources. We address this gap by extending a linear path query model over property graphs to include intuitive temporal predicates that operate over temporal graphs. We design a distributed execution model for these temporal path queries using the interval-centric computing model, and develop a novel cost model to select an efficient execution plan from several. We perform detailed experiments of our $\mathcal{G}ranite$ distributed query engine using temporal property graphs as large as 52M vertices, 218M edges and 118M properties, and an 800-query workload, derived from the LDBC benchmark. We offer sub-second query latencies in most cases, which is 149×-1140× faster compared to industry-leading Neo4J shared- memory graph database and the JanusGraph/Spark distributed graph query engine. Further, our cost model selects a query plan that is within 10% of the optimal execution time in 90% of the cases. We also scale well, and complete 100% of the queries for all graphs, compared to only 32-92% by baseline systems.
## A Distributed Path Query Engine for Temporal Property Graphs *

#### Shriram Ramesh, Animesh Baranawal and Yogesh Simmhan

_Department of Computational and Data Sciences,_ _Indian Institute of Science, Bangalore 560012, India_ _Email: {shriramr, animeshb, simmhan}@iisc.ac.in_

**Abstract** Property graphs are a common form of linked data, with path queries used to traverse and explore them for enterprise transactions and mining. Temporal property graphs are a recent variant where time is a first-class entity to be queried over, and their properties and structure vary over time. These are seen in social, telecom, transit and epidemic networks. However, current graph databases and query engines have limited support for temporal relations among graph entities, no support for time-varying entities and/or do not scale on distributed resources. We address this gap by extending a linear path query model over property graphs to include intuitive temporal predicates and aggregation operators over temporal graphs. We design a distributed execution model for these temporal path queries using the interval-centric computing model, and develop a novel cost model to select an efficient execution plan from several. We perform detailed experiments of our Granite distributed query engine using both static and dynamic temporal property graphs as large as 52M vertices, 218M edges and 325M properties, and a 1600-query workload, derived from the LDBC benchmark. We often offer sub-second query latencies on a commodity cluster, which is 149×–1140× faster compared to the industry-leading Neo4J shared-memory graph database and the JanusGraph/Spark distributed graph query engine. Granite also completes 100% of the queries for all graphs, compared to only 32–92% workload completion by the baseline systems. Further, our cost model selects a query plan that is within 10% of the optimal execution time in 90% of the cases. Despite the irregular nature of graph processing, we exhibit a weak-scaling efficiency of ≥60% on 8 nodes and ≥40% on 16 nodes, for most query workloads.

*An extended version of the paper that appears in IEEE/ACM International Symposium on Cluster, Cloud and Internet Computing (CCGrid), 2020. [doi:10.1109/CCGrid49817.2020.00-43](https://dx.doi.org/10.1109/CCGrid49817.2020.00-43)

### 1 Introduction

Graphs are a natural model to represent and analyze linked data in various domains. Property graphs allow vertices and edges to have associated _key–value pair_ properties, besides the graph structure. This forms a rich information schema and has been used to capture knowledge graphs (concepts, relations) [1], social networks (person, forum, message) [2], epidemic networks (subject, infected status, location) [3], and financial and retail transactions (person, product, purchase) [4].

_Path queries_ are a common class of queries over property graphs [5, 6]. Here, the user defines a sequence of predicates over vertices and edges that should match along a path in the graph. E.g., in the property graph for a community of users in Figure 1, the vertices are labeled with their IDs, their colors indicate their type – blue for Person and orange for a Post, and they have a set of properties listed as Name:Value. The edges are relationships, with types such as Follows, Likes and Created.
We can define an example 3-hop path query: "[EQ1] Find a person (vertex type) who lives in the country 'UK' (vertex property) and follows (edge type) a person who follows another person who is tagged with the label 'Hiking' (vertex property)". This query would match Cleo→Alice→Bob, if we ignore the time intervals. Path queries are used to identify concept pathways in knowledge graphs, find friends in social networks, detect fake news, and suggest products on retail websites [5, 6, 7]. They also need to be performed rapidly, within ≈1 sec, as part of interactive requests from websites or exploratory queries by analysts. While graph databases are designed for transactional read and write workloads, we consider graphs that are updated infrequently but queried often. For these workloads, graph query engines load and retain property graphs in-memory to service requests with low latency, without the need for locking or consistency protocols [8, 9]. They may also create indexes to accelerate these searches [10, 11].

Property graphs can be large, with 10⁵–10⁸ vertices and edges, and 10's of properties on each vertex or edge. This can exceed the memory on a single machine, often dominated by the properties. This necessitates the use of distributed systems to scale to large graphs [12, 13].

**Challenges** Time is an increasingly common graph feature in a variety of domains [3, 14, 15, 16]. However, existing property graph data models fail to consider it as a first-class entity. Here, we distinguish between graphs with a _time interval_ or a _lifespan_ associated with their entities (properties, vertices, edges), and those where the entities themselves change over time and the history is available. We call the former _static temporal graphs_ and the latter _dynamic temporal graphs_. Yet another class is _streaming graphs_, where the topology and properties change in real-time, and queries are performed on this evolving structure [17, 18]; that is outside the scope of this article.

Figure 1: Sample Temporal Property Graph of a Community of Users (vertices include Alice, Bob, Cleo, Don and PicPost; edge types are Likes, Follows and Created).

E.g., in the temporal graph in Figure 1, the lifespan, [start, end), is indicated on the vertices, edges and properties. The start time is inclusive while the end time is exclusive. Other than the properties of Cleo, the remaining entities of the graph form a static temporal graph as they are each valid only for a single time range. But the value of the Country property of Cleo changes over time, making it a dynamic temporal graph.

This gap is reflected not just in the data model but also in the queries supported. We make a distinction between time-independent (TI) and time-dependent (TD) queries, both being defined on a temporal graph [19]. TI queries are those which can be answered by examining the graph at a single point in time (a snapshot), e.g., EQ1 executed on the temporal graph. In contrast, TD queries capture temporal relations between the entities across consecutive time intervals, e.g., "[EQ2] Find people tagged with 'Hiking' who liked a post tagged as 'Vacation', before the post was liked by a person named 'Don'", and "[EQ3] Find people who started to follow another person, after the latter stops following 'Don'". Treating time as just another property fails to express temporal relations such as ensuring time-ordering among the entities on the path.
While EQ2 and EQ3 should match the paths Bob→PicPost→Don and Alice→Bob→Don, respectively, such queries are hard, if not impossible, to express in current graph databases. This problem is exacerbated for path queries over _dynamic temporal graphs_. E.g., the query EQ1 over the dynamic temporal graph should _not_ match Cleo→Alice→Bob since at the time Cleo was living in 'UK', she was not following Alice.

While platforms which execute snapshot at a time [19, 20] can be adapted to support TI queries over temporal graphs, TD queries cannot be expressed meaningfully. Even those that support TD algorithms enforce strict temporal ordering [21], requiring that the time intervals along the path should be increasing or decreasing, but not both; this limits query expressivity. These motivate the need to support intuitive temporal predicates to concisely express such temporal relations, and flexible platforms to execute them. Lastly, the scalability of existing graph systems is also limited, with few property graph query engines that operate on distributed memory systems with low latency [8, 22], let alone on temporal property graphs.

We make the following specific contributions in this article:

- We propose a temporal property graph model, and intuitive temporal _predicates_ and _aggregation operators_ for path queries on them (§3).
- We design a distributed execution model for these queries using the interval-centric computing model (§4).
- We develop a novel cost model that uses graph statistics to select the best from multiple execution plans (§5).
- We conduct a detailed evaluation of the performance and scalability of Granite for 8 temporal graphs and up to 1600 queries, derived from the LDBC benchmark. We compare this against three configurations of Neo4J, and JanusGraph which uses Apache Spark (§6).

We discuss related work in Section 2 and our conclusions in Section 7. A prior version of this work appeared as a conference paper [23]. This article substantially extends it. Specifically, it introduces the temporal aggregation operator to the query model (Section 3.3) and implements it within the execution model; offers details, illustrations and complexity metrics for our query model, distributed execution model and query optimizations (Sections 3, 4 and 5); and provides a rigorous empirical evaluation, including two additional large dynamic temporal graphs, aggregation query workloads, weak scaling experiments, and results on the component times of query execution, besides more detailed analysis of the cost model benefits and baseline platform comparisons (Section 6).

### 2 Related Work

#### 2.1 Distributed and Temporal Graph Processing

There are several distributed graph processing platforms for running graph algorithms on commodity clusters and clouds [24]. These typically offer programming abstractions like Google Pregel's vertex-centric computing model [20] and its component-centric variants [25, 26] to design algorithms such as Breadth First Search, centrality scores and mining [27]. These execute using a Bulk Synchronous Parallel (BSP) model, and scale to large graphs and applications that explore the entire graph. They offer high-throughput batch processing that takes O(mins)–O(hours). We instead focus on exploratory and transactional path queries that are to be processed in O(secs). This requires careful use of distributed graph platforms and optimizations for fast responses.
There are also parallel graph platforms for HPC clusters and accelerators [28]. These optimize the memory and communication access to scale to graphs with billions of entities on thousands of cores [29]. They focus on high-throughput graph algorithms and queries over static graphs [30]. We instead target commodity hardware and cloud VMs with 10's of nodes and 100's of total cores, which are more accessible. We also address queries over temporal property graphs.

A few distributed abstractions and platforms support the design of temporal algorithms and their batch execution [19, 31, 32]. Most are limited to executing TI algorithms, snapshot at a time, and are unable to seamlessly model TD queries. Our prior work Graphite offers an interval-centric computing model (ICM) to represent TI and TD algorithms, but limits it to time-respecting algorithms [21]. We use it as the base framework for our proposed distributed path query engine, while relaxing the time-ordering, including indexing, and proposing different query execution plans for low-latency response. There are also some platforms that support incremental computing over streaming graph updates [33, 34]. We rather focus on materialized property graphs with temporal lifespans on their vertices, edges and properties that have already been collected in the past. In future, we will also consider incremental query processing over such streaming graphs.

#### 2.2 Property and Temporal Graph Querying

Query models over property graphs and associated query engines are popular for semantic graphs [30, 35, 36]. Languages like SPARQL offer a highly flexible declarative syntax, but are costly to execute in practice for large graphs [37, 38]. Others support a narrower set of declarative query primitives, such as finding paths, reachability and patterns over property graphs, but manage to scale to large graphs using a distributed execution model [39, 40]. However, none of these support time as a first-class entity, during query specification or execution.

There has been limited work on querying and indexing over specific temporal features of property graphs. Semertzidis, et al. [41] propose a model for finding the top-k graph patterns which exist for the longest period of time over a series of graph snapshots. They offer several indexing techniques to minimize the snapshot search space, and perform a brute-force pattern mining on the restricted set. This multi-snapshot approach limits the pattern to one that fully exists at a single time-point and recurs across time, rather than spans time-points. It is also limited to a single-machine execution, which limits scaling.

_TimeReach_ [42] supports conjunctive and disjunctive reachability queries on a series of temporal graph snapshots. It builds an index from strongly connected components (SCC) for each snapshot, condenses them across time, and uses this to traverse between vertices in different SCCs within a single hop. It assumes that the graph has few SCCs that do not change much over time. They also require the path to be reachable within a single snapshot rather than allow path segments to connect across time. Likewise, _TopChain_ [43] supports temporal reachability queries using an index labeling scheme. It unrolls the temporal graph into a static graph, with time expanded as additional edges, finds the chain-cover over it, and stores the top-k reachable chains from each vertex as labels. It uses this to answer time-respecting reachability, earliest arrival path and fastest path queries.
Paths can span time intervals. However, they do not support any predicates over the properties. Neither of these supports distributed execution.

There is also literature on approximate querying over graphs. Arrow [44] examines reachability queries on both non-temporal and temporal graphs using random walks. These are performed from both the source and the sink vertices, and an intersection of the two vertex sets gives the result. They use approximation by bounding the walk length based on the diameter of the graph and a tunable parameter which balances accuracy and query latency. Iyer, et al. [45] consider approximate pattern mining on large non-temporal graphs. They use statistical techniques to sample the graph edges and estimate the number of occurrences of a specific pattern in the graph. However, their approach cannot enumerate the actual vertices and edges forming the pattern.

_ChronoGraph_ [46] supports temporal traversal queries over interval property graphs, and is the closest to our work. They implement this by extending the Gremlin property graph query language with temporal properties. They propose optimizations to the Gremlin traversal operators, and parallelization and lazy traversals within a single machine, which are executed by the TinkerGraph engine. However, they do not support novel temporal operators such as the edge-temporal relationship that we introduce. They also do not use indexes or query planning to make the execution plan more efficient. Their optimizations are tightly coupled to the execution engine, which does not support distributed execution.

Lastly, there are several open-source and proprietary graph database systems [8, 47, 48] which provide general-purpose property graph storage and querying capabilities while allowing transactional access to graph data. However, these systems do not have first-class support for time-varying graphs, nor query models that can leverage the temporal dimension. This leads to temporal queries written in their native languages that are neither intuitive in expressing temporal notions nor efficient to execute, due to the lack of a time-aware query optimizer and execution engine.

In summary, these various platforms lack one or more of the following capabilities we offer: modeling time as a first-class graph and query concept; enabling temporal path queries that span time and match temporal relations across entities; and distributed execution on commodity clusters that scales to large graphs using a query optimizer that leverages the graph's structure, temporal features, and property values.

### 3 Temporal Graph and Query Models

#### 3.1 Temporal Concepts

The temporal property graph concepts used in this paper are drawn from our earlier work [21]. Time is a linearly ordered discrete domain $\Omega$ whose range is the set of non-negative whole numbers. Each instant in this domain is called a time-point and an atomic increment in time is called a time-unit. A time interval is given by $\tau = [t_s, t_e)$, where $t_s, t_e \in \Omega$, which indicates an interval starting from and including $t_s$ and extending to but excluding $t_e$. _Interval relations_ [49] are Boolean comparators between intervals: the _fully before_ relation is denoted by ≪, _starts before_ by ≺, _fully after_ by ≫, _starts after_ by ≻, _during_ by ⊂, _equals_ by =, _during or equals_ by ⊆, and _overlaps_ by ⊓.
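To make the interval comparators concrete, here is a minimal sketch of one plausible reading of the relations above over half-open intervals [ts, te); the function names and exact boundary semantics are our assumptions, not definitions from the paper.

```python
from typing import NamedTuple

class Interval(NamedTuple):
    """Half-open time interval [ts, te) over a discrete time domain."""
    ts: int
    te: int

def fully_before(a, b):  return a.te <= b.ts                       # a ≪ b
def starts_before(a, b): return a.ts < b.ts                        # a ≺ b
def fully_after(a, b):   return a.ts >= b.te                       # a ≫ b
def starts_after(a, b):  return a.ts > b.ts                        # a ≻ b
def during(a, b):        return b.ts <= a.ts and a.te <= b.te and a != b  # a ⊂ b
def equals(a, b):        return a == b                             # a = b
def during_or_equals(a, b): return b.ts <= a.ts and a.te <= b.te   # a ⊆ b
def overlaps(a, b):      return a.ts < b.te and b.ts < a.te        # a ⊓ b

# An EQ3-style check: did the left edge start after the right edge ended?
left, right = Interval(30, 50), Interval(5, 10)
print(fully_after(left, right))  # True: [30,50) is fully after [5,10)
```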
_⊆_ _⊓_ #### 3.2 Temporal Property Graph Model We formally define a temporal property graph as a directed graph G = (V, E, PV, PE). V is a set of typed vertices where each vertex ⟨vid, σ, τ _⟩∈_ _V_ is a tuple with a unique vertex ID, vid, a vertex type (or schema) σ, and the lifespan of existence of the vertex given by the interval, τ = [ts, te). E is a set of directed typed edges, with ⟨eid, σ, vidi, vidj, τ _⟩∈_ _E. Here, eid_ is a unique ID of the edge, σ its type, vidi and vidj are its source and sink vertices respectively, and τ = [ts, te) is its lifespan. We have a schema function : σ _K, that maps a given vertex or edge type σ to the set of_ _S_ _→_ _property keys (or names) it can have. PV is a set of vertex property values,_ where each ⟨vid, κ, val, τp⟩∈ _PV represents a value val for the key κ ∈_ _K_ for the vertex vid, with the value valid for the interval τp ⊆ _τ_ . A similar definition applies for edge property values ⟨eid, κ, val, τp⟩∈ _PE._ Further, the graph G must meet the uniqueness constraint of vertices and edges, i.e., a vertex or an edge with a given ID exist at most once and for a single continuous duration; referential integrity constraints, where the lifespan of an edge must be contained within the lifespan of its incident vertices; and constant edge association, which enforces that the vertices incident on an edge remain the same during the edge’s lifespan. These are defined in [50]. A static temporal property graph is a restricted version of the temporal property graph such that τp = τ for the vertex and edge properties, i.e., each property key has a static value that is valid for the entire vertex or edge lifespan, formally stated as: 7 ----- _∀⟨vid, κ, val, τp⟩∈_ _PV, ⟨vid, σ, τ_ _⟩∈_ _V =⇒_ _τp = τ_ and _∀⟨eid, κ, val, τp⟩∈_ _PE, ⟨eid, σ, vidi, vidj, τ_ _⟩∈_ _E =⇒_ _τp = τ Temporal property graphs with-_ out this restriction are called dynamic temporal property graphs, and allow keys for a vertex or an edge to have different values for non-overlapping time intervals, i.e., τp ⊆ _τ_ . E.g., Figure 1 is a dynamic temporal property graph as Cleo’s property values change over time, but omitting Cleo makes it a static temporal property graph. #### 3.3 Temporal Path Query An n-hop linear chain path query matches a path with n vertex predicates and n 1 edge predicates. The syntax rules for this query model and its _−_ predicates are given below, and illustrated for the example queries from earlier in Table 1. 
⟨path⟩ ::= ⟨ve-fragment⟩ ⟨ve-int-fragment⟩* ⟨v-predicate⟩
        | ⟨ve-fragment⟩ ⟨ve-int-fragment⟩* ⟨v-predicate⟩ ⊕⟨aggregate⟩
⟨ve-fragment⟩ ::= ⟨v-predicate⟩ ⊢ ⟨e-predicate⟩
⟨ve-int-fragment⟩ ::= ⟨ve-fragment⟩ | ⟨v-predicate⟩⟨etr-clause⟩ ⊢ ⟨e-predicate⟩
⟨v-predicate⟩ ::= ⟨predicate⟩
⟨e-predicate⟩ ::= ⟨predicate⟩⟨direction⟩
⟨direction⟩ ::= → | ← | ↔
⟨predicate⟩ ::= ⋆ | ⟨bool-predicate⟩ | ⟨prop-clause⟩ | ⟨time-clause⟩ | ⟨time-clause⟩ AND ⟨bool-predicate⟩
⟨bool-predicate⟩ ::= ⟨prop-clause⟩ | ⟨prop-clause⟩ OR ⟨bool-predicate⟩ | ⟨prop-clause⟩ AND ⟨bool-predicate⟩
⟨prop-clause⟩ ::= ve-key ⟨prop-compare⟩ value
⟨time-clause⟩ ::= ve-lifespan ⟨time-compare⟩ interval
⟨etr-clause⟩ ::= el-lifespan ⟨time-compare⟩ er-lifespan
⟨prop-compare⟩ ::= '==' | '!=' | ∋
⟨time-compare⟩ ::= ≺ | ≪ | ≻ | ≫ | ⊓ | ̸⊓
⟨aggregate⟩ ::= ⟨aggregate-op⟩ [ v-key | ⋆ ]
⟨aggregate-op⟩ ::= count | min | max

As we can see, the property and time clauses are the atomic elements of the predicate, and allow one to compare in/equality and containment between a property value and a given value, and a more flexible set of comparisons between a vertex/edge/property lifespan and a given interval (time-compare). These temporal clauses allow a wide variety of comparisons within the context of a single vertex or edge, and their properties. These clauses can be combined using Boolean AND and OR operators. Edge predicates can have an optional direction. The wildcard ⋆ matches all vertices or edges at a hop.

A novel and powerful temporal operator we introduce is the _edge time relationship_ (ETR). Unlike the time clause, this etr-clause allows comparisons across edge lifespans. Specifically, it is defined on an intermediate vertex in the path (ve-int-fragment), and allows us to compare the lifespans of its left (el-lifespan) and right (er-lifespan) edges in the path. The motivation for this operator comes from social network mining [6] and from identifying flows and frauds in transaction networks [4]. E.g., the queries EQ2 and EQ3 from Section 1 can be concisely captured using this.

We also support a novel temporal aggregate operator to group the result-set from the path query. The paths are grouped on the first vertex in the resulting temporal paths, and a specific aggregation is computed on a property at the last vertex of the path. The grouping is time-aware; specifically, it is based on the duration of the first vertex in the result path. E.g., if the result-set for a query contains $i = 1..m$ paths of length $n$ each, $v_1^i - e_1^i - v_2^i - e_2^i - \ldots - v_n^i$, and the first vertex $v_1^i$ in a result matches the query for the time period $\tau_i = [t_s^i, t_e^i)$, then we perform a "group by" of the result paths by the temporal vertex $\{v_1^i.vid, [t_s^i, t_e^i)\}$. For all the paths $j$ in a group, we perform an aggregation operation $\oplus$ on $v_n^j.prop$, where $prop$ is a property on the last vertex that is selected by the user and may be omitted for a count aggregation. We return the aggregated result $\{v_1^i.vid, [t_s^i, t_e^i), \oplus_j(v_n^j.prop)\}$ for each unique temporal vertex group [1].
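As an illustration of this temporal group-by, and before the EQ4 example that follows, here is a minimal sketch assuming result paths already carry their first temporal vertex and the value to be aggregated from their last vertex; Granite's actual implementation additionally uses TimeWarp to carve the groups into maximal contiguous intervals.

```python
from collections import defaultdict

AGG_OPS = {"count": len, "min": min, "max": max}

def temporal_aggregate(result_paths, op="count"):
    """Group result paths by their first temporal vertex (vid, [ts, te)) and
    aggregate a value taken from the last vertex of each path in the group."""
    groups = defaultdict(list)
    for path in result_paths:
        first_vid, lifespan = path["first"]          # e.g. ("Bob", (10, 30))
        groups[(first_vid, lifespan)].append(path["last_value"])
    return {key: AGG_OPS[op](vals) for key, vals in groups.items()}

# EQ4-like example: persons Bob follows, grouped by Bob's matching durations
paths = [
    {"first": ("Bob", (10, 30)), "last_value": "Alice"},
    {"first": ("Bob", (50, 100)), "last_value": "Alice"},
]
print(temporal_aggregate(paths))  # {('Bob', (10, 30)): 1, ('Bob', (50, 100)): 1}
```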
This can help answer queries such as "[EQ4] Count the number of persons followed by a person 'Bob' during his existence in the network". The answer to this for Figure 1 varies across time, taking value 1 during [10, 30) ∪ [50, 100) and 0 during [5, 10) ∪ [30, 50). Our Granite implementation supports count, min and max operations for ⊕, while others can also easily be added.

¹ The valid duration for the first vertex can be disjoint, in which case each maximal contiguous interval for that vertex vid forms a separate temporal group.

Table 1: Query Syntax Examples

| Example Query | Query Syntax |
|---|---|
| **EQ1** Find a person who lives in 'UK' and follows a person who follows another person who is tagged with 'Hiking' | Type == Person AND Country == UK ⊢ Type == Follows → Type == Person ⊢ Type == Follows → Type == Person AND Tag ∋ Hiking |
| **EQ2** Find people tagged with 'Hiking' who liked a post tagged as 'Vacation' before the post was liked by a person named 'Don' | Type == Person AND Tag ∋ Hiking ⊢ Type == Likes → Type == Post AND Tag ∋ Vacation, el-lifespan ≺ er-lifespan ⊢ Type == Likes ← Type == Person AND Name == Don |
| **EQ3** Find people who started to follow another person, after they stopped following 'Don' | Type == Person ⊢ Type == Follows → Type == Person, el-lifespan ≫ er-lifespan ⊢ Type == Follows → Type == Person AND Name == Don |
| **EQ4** Count the number of persons followed by a person 'Bob' during his existence in the network | Type == Person AND Name == Bob ⊢ Type == Follows → Type == Person ⊕ count [⋆] |

### 4 Distributed Query Engine

#### 4.1 Relaxed Interval Centric Computing

The high-level architecture of our distributed query engine, Granite, is shown in Figure 2a. Our query engine uses a distributed in-memory iterative execution model that extends and relaxes the _Interval-centric Computing Model (ICM)_ [21]. ICM adds a temporal dimension to Pregel's vertex-centric iterative computing model [20], and allows users to define their computation from the perspective of a single interval-vertex, i.e., the state and properties for a certain interval of a vertex's lifespan. In each iteration (superstep) of an ICM application, a user-defined compute function is called on each active interval-vertex, which operates on its prior state and on messages it receives from its neighbors, for that interval, and updates the current state. A TimeWarp function aligns the lifespans of the input messages to the lifespans of the partitioned interval states for an interval vertex. So each call to compute executes on the temporally intersecting messages and states for a vertex. Then, a user-defined scatter function is called on the out-edges of that interval-vertex, which allows them to send temporal messages containing, say, the updated vertex state to its neighboring vertices. The message lifespan is usually the intersection of the state and the edge lifespans. Messages are delivered in bulk at a barrier after the scatter phase, and the compute phase for the next iteration starts after that. Vertices receiving a message whose interval overlaps with its lifespan are activated for the overlapping period. This repeats across supersteps until no messages are generated after a superstep. The execution of the compute and scatter functions is data-parallel within a superstep, and their invocation on different interval vertices and edges can be done by concurrent threads.

We design Granite using the compute and scatter primitives offered by the Graphite implementation of ICM over Apache Giraph, as illustrated in Figure 2b. However, ICM enforces time-respecting behavior, i.e., the intervals between the messages and the interval-vertex state have to overlap for compute to be called on the messages; intervals between the states updated
The execution of compute and scatter functions are each data-parallel within a superstep, and their invocation on different interval vertices and edges can be done by concurrent threads. We design _ranite using the compute and scatter primitives offered_ _G_ the Graphite implementation of ICM over Apache Giraph, as illustrated in Figure 2b. However, ICM enforces time-respecting behavior, i.e., the intervals between the messages and the interval-vertex state have to overlap for compute to be called on the messages; intervals between the states updated 10 |Example Query|Query Syntax| |---|---| |Example Query|Query Syntax| |---|---| |EQ1 Find a person who lives in ‘UK’ and follows a per- son who follows another person who is tagged with ‘Hiking’|Type == Person AND Country == UK ⊢ Type == Follows → Type == Person Type == Follows ⊢ → Type == Person AND Tag Hiking ∋| |EQ2 Find people tagged with ‘Hiking’ who liked a post tagged as ‘Vacation’ before the post was liked by a person named ‘Don’|Type == Person AND Tag Hiking Type ∋ ⊢ == Likes → Type == Post AND Tag Vacation ∋ el-lifespan er-lifespan ≺ Type == Likes ⊢ ← Type == Person AND Name == Don| |EQ3 Find people who started to follow another person, after they stopped following ‘Don’|Type == Person Type == Follows ⊢ → Type == Person el-lifespan ≫ er-lifespan Type == Follows ⊢ → Type == Person AND Name == Don| |EQ4 Count the number of persons followed by a person ‘Bob’ during his existence in the network.|Type == Person AND Name == Bob Type ⊢ == Follows → Type == Person count [⋆] ⊕| ----- **Stats** **Master .Optimizer** **Receive & broadcast query to worker** - **Select query plan using cost model** - **Coordinate worker exec. for query** **plan ◊** **Return result set to client** **Init/** **Compute** **(Resume)** **vertex** **predicate** **evaluation.** **V** **V** Controls Messaging |Granite|Master .Optimizer Stats Receive & broadcast query to worker ◊Select query plan using cost model ◊Coordinate worker exec. for query plan ◊Return result set to client|Col3|Init/ Scatter Compute Evaluate edge (Resume) predicate, temporal vertex edge relation. predicate Send partial results evaluation. message to sink vertex.| |---|---|---|---| |ICM on Interval Property Graph Graphite VCM on Graph Apache Giraph Query Worker Worker Worker Worker Master||ICM on Interval Property Graph Graphite|| |||VCM on Graph Apache Giraph|| |Giraph Worker Partition Compute Graphite Interval Compute Evaluate Vertex Granite P Ar ce td ivic ea vte e r𝜋 ti𝑖ceo sn init / compute V V V scatter|Col2|Col3|Col4|Col5|Col6|Col7|Col8| |---|---|---|---|---|---|---|---| |||||V V V|||| ||scatter||||||| **Scatter** **Evaluate edge** **predicate, temporal** **edge relation.** **Send partial results** **message to sink vertex.** **Partition Compute** Evaluate Edge _MatchedPredicate verticesഥ𝜋𝑖_ on (b) Iterative query execution across ICM supersteps **V** HDFS (a) Architecture of Granite **Graphite** **Interval** **Compute** **Granite** **init /** **compute** **scatter** Figure 2: Architecture and ICM execution model of _ranite_ _G_ by the compute and the edge lifespans have to overlap for scatter to be called; and scatter sends messages only on edges whose lifespan overlaps with the updated states. But the temporal path queries do not need to meet these requirements, e.g., a query may need to navigate from a vertex to an adjacent vertex that occurs after it. The TimeWarp operator of ICM enforces this timerespecting behavior. 
So we relax ICM to allow non-time respecting behavior between compute, scatter and messages to meet the execution requirements of our path queries, while leveraging its other interval-centric features. #### 4.2 Distributed Execution Model In our execution model, each vertex predicate for a path query and the succeeding edge predicate, if any, are evaluated in a single ICM superstep. Specifically, the vertex predicates are evaluated in the compute function and the edge predicates in the scatter function. We use a specialized logic called init for evaluating the first vertex predicate in a query. This is shown in Figs. 2b and 3a. A Master receives the path query from the client, and broadcasts it to all Workers to start the first superstep (Figure 2a). Each Worker operates over a set of graph partitions with one thread per partition, and each thread calls the compute and scatter functions on every active vertex in its partition. The init logic is called on all vertices in the first superstep. It resets the vertex state for this new query and evaluates the first vertex predicate of the query. If the vertex matches, its state is updated with a _matched flag and scatter is invoked for each of its incident in or out edges,_ as defined in the query. Scatter evaluates the next edge predicate, and if it matches, sends the partial path result and the evaluated path length to the destination vertex as a message. If a match fails, this path traversal is pruned. 11 ----- In the next iteration, our compute logic is called for vertices receiving a message. This evaluates the next vertex predicate in the path and if it matches, it puts all the partial path results from the input messages in the vertex state, and scatter is called on each incident edge. If the edge matches the next edge predicate, the current vertex and edge are appended to each prior partial result and sent to the destination vertex. This repeats for as many supersteps as the path length. In the last superstep, the vertices receiving matching paths in their messages send it to the Master to return to the client. Figure 3a (Plan 1) illustrates this for a sample path query with vertex and edge predicates, V 1 _E1_ _V 2_ _E2_ _V 3. In superstep 1, init is called_ _−_ _−_ _−_ _−_ on all vertices to evaluate the vertex predicate V 1, and for the ones that match, scatter is called to evaluate the edge predicate E1. Those edges that match send a message to their remote vertex in superstep 2, where all vertices that receive a message invoke their compute logic to evaluate the vertex predicate V 2 of the second hop. This is (optionally) preceded by the TimeWarp operator on all the messages received by an interval vertex. Vertices that match V 2 call scatter on their edges to match the predicate _E2, and send messages if they too match. In the last superstep, vertices_ that receive messages evaluate the predicate V 3, and if there is a match, return that result path to the user. Each vertex in the last superstep may return multiple matching paths based on the messages received, and different vertices may return result paths to the Master. Scatter also evaluates the edge temporal relationship. Here, the scatter of the preceding edge passes its lifespan in the result message, and this is compared against the current edge’s lifespan by the next scatter to decide on a match. In the case of temporal aggregate queries, the result set is constructed in the last superstep, similar to the non-aggregate queries. 
Then, the first vertex in each result path, its associated lifespan, and the count or the property value of the last vertex to be aggregated are extracted and sent to the Master. The Master temporally groups the values for each distinct temporal vertex using the TimeWarp operator, and applies the aggregation operator on the values in each group.

For static temporal graphs, we do not use any interval-centric features of ICM, such as TimeWarp, and the entire lifespan of the vertex is treated as a single interval-vertex for execution, and likewise for edges. However, we do use the property graph model and state management APIs offered by the interval-vertex. For dynamic temporal graphs with time-varying properties, we leverage the interval-centric features of ICM. Specifically, we enable TimeWarp of message intervals with the vertex properties' lifespans so that compute is called on an interval vertex with messages temporally aligned and grouped against the property intervals. Scatter is called only for edges whose lifespans overlap with the matching interval-vertex, and its scope is limited to the period of overlap. The compute or scatter functions only access messages and properties that are relevant to their current interval, and both can be called multiple times, for different intervals, on the same vertex and edge.

(a) Query execution phases across different supersteps (SS) for two plans of the input query. (b) Splitting of the query and joining of results for EQ2, similar to Plan 2 in Figure 3a.
Figure 3: Query execution plans in Granite

#### 4.3 Distributed Query Execution Plans

Queries can be evaluated by splitting them into smaller path query segments that are independently evaluated left-to-right, and the results then combined. Each vertex predicate in the path query is a potential split point. E.g., a query V1−E1−V2−E2−V3 can be split at V2 into the segments V1−E1−V2 and V3−E2−V2; execution proceeds inwards, from the outer predicates (V1 and V3) to the split vertex (V2), which joins the results. This is illustrated in Figs. 3a (Plan 2) and 3b. A trivial split at the last vertex predicate V3 is the default execution of the query from left-to-right, shown in Figure 3a (Plan 1), while an alternative split at the first vertex predicate V1 evaluates this from right-to-left as V3−E2−V2−E1−V1.

Each split point and plan can be beneficial based on how many vertices and edges match the predicates in the graph. Intuitively, a good plan should evaluate the most discriminating predicate first (low selectivity, i.e., few vertices/edges match) to reduce the solution space early. A cost model, discussed in Section 5, attempts to select the best split point. We modify our Granite logic to handle the execution of two path segments concurrently, as the example after the sketch below walks through.
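Here is a minimal sketch of how split points yield the alternative execution plans enumerated by the optimizer; the representation of a query as an alternating predicate list and the helper name are illustrative assumptions, not the paper's code.

```python
def enumerate_plans(query):
    """Each vertex predicate V_k of a query V1-E1-...-Vn is a potential split
    point, yielding two segments that are evaluated inwards and joined at V_k."""
    n = (len(query) + 1) // 2              # number of vertex predicates
    plans = []
    for k in range(n):
        left = query[: 2 * k + 1]          # V1 .. Vk+1, evaluated left-to-right
        right = query[2 * k:][::-1]        # Vn .. Vk+1, evaluated right-to-left
        plans.append((k, left, right))     # k = n-1 is the default L-to-R plan
    return plans

# V1-E1-V2-E2-V3 has three candidate plans: split at V1, V2 or V3
query = ["V1", "E1", "V2", "E2", "V3"]
for k, left, right in enumerate_plans(query):
    print(f"split@V{k + 1}: left={left}, right={right}")
```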
#### 4.4 System Optimizations

**4.4.1** **Type-based Graph Partitioning**

Giraph by default does a hash-partitioning of the vertices of the graph by their vertex IDs onto workers. But we use knowledge of the entity schema types to create graph partitions hosting only a single vertex type. This helps us eliminate the evaluation of all vertices in a partition if its type does not match the vertex type specified in that hop of the query. This filtering is done before compute is called, in the partitionCompute of Giraph.

We first group vertices by type to form one typed partition each, e.g., Type A and Type B, as illustrated in Figure 4a. But these can have skewed sizes, and there may be too few types (hence partitions) to fully exploit the parallelism available on the workers and their threads. So we further perform a second-level topological partitioning of each typed partition into p sub-partitions using METIS [51]. This only considers the edges between vertices of the same type, i.e., within each typed partition, and uses the edge lifespan as their weight. This second-level partitioning can also reduce the network messaging cost between vertices of the same type. The sub-partitions from each typed partition are then distributed in a round-robin manner among all the workers. So if there are w workers, t types and p sub-partitions per type, each worker is expected to have (t × p)/w sub-partitions, with p/w of each type. Since each superstep typically evaluates a query predicate for a single vertex type, this ensures load balancing of the typed sub-partitions across all workers during a superstep execution.

In our experiments using the 100k:A-S graph, described in Section 6.1, we observe that using type-based partitioning at the first level instead of hash partitioning improves the average execution time for our query workloads by 5.8×. When we combine this with METIS partitioning in the second level, we see a further improvement of 32%. All the results we report later use this optimization.

Figure 4: Examples of system optimizations in Granite. (a) Two-level load-balanced partitioning of the input graph to Workers, by type and then by topology. (b) Message tree propagation during query evaluation.
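A minimal sketch of this two-level placement follows. The second-level topological partitioning is stubbed out with contiguous chunking (the hypothetical chunk helper); the actual system uses METIS with edge-lifespan weights, and the type map and sizes here are toy values.

```java
// Two-level placement: group by type, sub-partition, round-robin to workers.
import java.util.*;

public class TypedPartitioning {
    public static void main(String[] args) {
        Map<String, List<Integer>> byType = new LinkedHashMap<>();
        byType.put("Person", range(0, 8));   // toy vertex IDs per type
        byType.put("Post", range(8, 24));
        int p = 4, w = 2;                    // sub-partitions per type, workers

        List<List<List<Integer>>> workers = new ArrayList<>();
        for (int i = 0; i < w; i++) workers.add(new ArrayList<>());

        int next = 0;
        for (List<Integer> typed : byType.values())
            for (List<Integer> sub : chunk(typed, p))   // stand-in for METIS
                workers.get(next++ % w).add(sub);       // round-robin placement

        for (int i = 0; i < w; i++)                     // expect (t*p)/w = 4 each
            System.out.println("worker " + i + ": " + workers.get(i));
    }

    static List<Integer> range(int from, int to) {
        List<Integer> l = new ArrayList<>();
        for (int i = from; i < to; i++) l.add(i);
        return l;
    }

    // Split a typed partition into p contiguous sub-partitions.
    static List<List<Integer>> chunk(List<Integer> v, int p) {
        List<List<Integer>> subs = new ArrayList<>();
        int size = (v.size() + p - 1) / p;
        for (int i = 0; i < v.size(); i += size)
            subs.add(new ArrayList<>(v.subList(i, Math.min(i + size, v.size()))));
        return subs;
    }
}
```

With t = 2 types, p = 4 and w = 2, each worker receives 4 sub-partitions, 2 of each type, matching the p/w balance described above.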
**4.4.2** **Message Optimization**

Path results can have a lot of overlap, but each partial result path is separately maintained and sent in messages during query execution. This redundancy leads to large message sizes and more memory. Instead, we construct a result tree, where vertices/edges that match at a previous hop are higher up in the tree and subsequent vertex/edge matches are their descendants. E.g., assuming a full binary tree expansion for a path query with h hops and n = 2^(h−1) matching paths, this reduces the result size from O(h·n) to O(2n − 1). When execution completes, a traversal of this result tree gives the expanded result paths.

This is illustrated in Figure 4b. Here, vertices A, B and C match the vertex and edge predicates in the first hop and send their partial result to their neighbors. D receives the messages from A and B and evaluates itself for the second-hop predicate. But this evaluation is not unique to A or B; rather, it is shared across them. If D matches, rather than send a message with two sub-paths, A−D and B−D, we instead send a sub-tree, {A, B}−D, in the message to its neighbors. Similarly, E, which receives messages from B and C and matches the second predicate, sends a sub-tree {B, C}−E message. F receives two sub-trees as messages, evaluates itself for the third predicate, which matches, and sends a larger sub-tree, {{A, B}−D, {B, C}−E}−F, to its neighbor H. G is not a match and prunes its traversal, with no messages sent. H matches the last predicate successfully, and sends the final result tree with H as the root to the Master, which unrolls the tree to return the paths from H to every leaf as individual results to the client.
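The following self-contained sketch illustrates the result-tree idea with the example of Figure 4b; the ResultTree class is a hypothetical simplification of our message structure, storing only vertex labels.

```java
// Shared-prefix result tree; unrolling recovers the individual result paths.
import java.util.*;

public class ResultTree {
    final String vertex;
    final List<ResultTree> children = new ArrayList<>();  // earlier-hop sub-trees

    ResultTree(String vertex, ResultTree... subtrees) {
        this.vertex = vertex;
        children.addAll(Arrays.asList(subtrees));
    }

    // Unroll all root-to-leaf paths; each is listed leaf-first (hop order).
    List<List<String>> unroll() {
        List<List<String>> paths = new ArrayList<>();
        walk(new ArrayDeque<>(), paths);
        return paths;
    }

    private void walk(Deque<String> stack, List<List<String>> out) {
        stack.push(vertex);
        if (children.isEmpty()) out.add(new ArrayList<>(stack));
        else for (ResultTree c : children) c.walk(stack, out);
        stack.pop();
    }

    public static void main(String[] args) {
        // The example from Figure 4b: {{A,B}-D, {B,C}-E}-F-H
        ResultTree d = new ResultTree("D", new ResultTree("A"), new ResultTree("B"));
        ResultTree e = new ResultTree("E", new ResultTree("B"), new ResultTree("C"));
        ResultTree h = new ResultTree("H", new ResultTree("F", d, e));
        h.unroll().forEach(System.out::println);  // 4 paths share one 8-node tree
    }
}
```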
**4.4.3** **Memory Optimizations**

In our graph data model, all property keys and values, excluding time intervals, are strings. In Java, string objects are memory-heavy. Since keys often repeat for different vertices in the same JVM, we map every property key to a byte, and rewrite the query at the Master based on this mapping. Further, for property values that repeat, such as country, we use interning in Java, which replaces individual string objects with shared string objects. This works because the graph is read-only. Besides reducing the base memory usage for the graph by ≈5%, it also allows predicate comparisons based on pointer equivalence.

### 5 Query Planning and Optimization

A given path query can be executed using different distributed execution plans, each having a different execution time. The goal of the cost model is to quickly estimate the expected execution time of these plans and pick the optimal plan for execution. Rather than absolute accuracy of the query execution time, what matters is its ability to distinguish poor plans with high execution times from good plans with low execution times. We propose an analytical cost model that uses statistics about the temporal property graph, combined with estimates of the time spent in different stages of the distributed execution plan, to estimate the execution time for the different plans of a given query.

We first enumerate the possible plans, contributed by each split point in the path query. The graph statistics are then used to predict the number of vertices and edges that will be active at each superstep of query execution, and the number of vertices that will match the predicates in this superstep and activate the next hop of the query (superstep). Based on the number of active and matched vertices and edges, our cost model estimates the runtime for each superstep of the plan. Adding these up across supersteps returns the estimated execution time for a plan. Next, we discuss the statistics that we maintain, and the models to predict the vertex and edge counts, and the execution time.

#### 5.1 Graph Statistics

We maintain statistics about the temporal property graph to help estimate the vertices and edges matching a specific query predicate. Typically, relational databases maintain statistics on the frequency of tuples matching different value ranges, for a given column (property). A unique challenge for us is that the property values can be time-variant. Hence, for each property key present in the vertex and edge types, we maintain a 2D histogram, where the Y axis indicates the different value ranges for the property and the X axis the different time ranges. Each entry in the histogram has a count of vertices or edges that fall within that value range for that time range. E.g., Figure 5a (top) shows such a histogram for the Country property. Its Y axis lists different country values appearing in the vertices of the property graph, such as India, UK and USA. The X axis divides the lifespan of the graph into time intervals, say, [0, 50) in steps of 10. The cell values indicate the number of vertices that have these property values for those time intervals, in the entire graph. Here, 9 vertices have the Country property value India during the time interval [0, 10) and 10 vertices have it during [10, 20), and similarly for the other countries and time intervals.

Figure 5: Query planning. (a) 2D histogram of statistics for the Country property over time intervals [0, 50) in steps of 10, before (top) and after (bottom) tiling; e.g., India has frequencies 9, 10, 12, 9, 14 across the five intervals. (b) Interval tree built over the tiled statistics.

Table 2: Vertex and edge count estimates per superstep, and the execution time calculated by the model, for two execution plans of query EQ2 on the 100k:F-S graph.

| Plan | SS | a_i | f_i | m_i | ā_i | f̄_i | m̄_i | T_i (ms) |
|---|---|---|---|---|---|---|---|---|
| 1 | 1 | 100k | 3.7×10⁻² | 3.7k | 6.2M | 35M | 1.3M | 531 |
| 1 | 2 | 1.3M | 7.7×10⁻⁴ | 1k | – | – | – | 132 |
| 2 | 1 | 51M | 7.7×10⁻⁴ | 39k | 273k | 88M | 67k | 4147 |
| 2 | 2 | 67k | 3.7×10⁻² | 2.5k | – | – | – | 35 |

Formally, for a given property key κ, we define a histogram function H_κ : (val, τ) → ⟨f, δ_in, δ_out⟩ that returns an estimate of the frequency f of vertices or edges which have the property value val during a time interval τ, and the average in and out degrees δ of the matching vertices, which are maintained for a vertex property.

The granularity of the value and time ranges has an impact on the size of the statistics maintained and the accuracy of the estimated frequencies. We make several optimizations in this regard. We use Dynamic Programming (DP) to coarsen the ranges of the histogram along both axes to form a hierarchical tiling [52]. This ensures that the frequency variance among the individual value–time pairs in each tile is no more than a threshold. For example, in Figure 5a (bottom), the frequencies 9, 10, 12 and 9 for the property value India during the interval [0, 40) are close to each other and hence tiled, i.e., aggregated and replaced by their average value 10. Similarly, India and UK have the same frequency 14 for the interval [40, 50) and are tiled. This reduces the number of entries that are maintained in the histogram, i.e., the space complexity, while bounding its impact on the accuracy of the statistics.
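A simplified sketch of such a tiled histogram and its lookup is shown below. The Tile record and the summing of overlapping tile frequencies are our assumptions for illustration; the actual statistics use DP-based hierarchical tiling, and the degree values in this example are made up.

```java
// 2D statistics histogram over coarsened (value-set, time-range) tiles.
import java.util.*;

public class StatsHistogram {
    // One tile: a set of property values, a time range, the average frequency,
    // and average in/out degrees (kept for vertex properties).
    record Tile(Set<String> values, int start, int end,
                double freq, double dIn, double dOut) {}

    final List<Tile> tiles = new ArrayList<>();

    // H_kappa(val, tau): frequency and degree estimate for a value and interval.
    // Overlapping tiles' frequencies are summed here for simplicity.
    double[] lookup(String val, int qStart, int qEnd) {
        double f = 0, dIn = 0, dOut = 0;
        for (Tile t : tiles)
            if (t.values().contains(val) && t.start() < qEnd && qStart < t.end()) {
                f += t.freq(); dIn = t.dIn(); dOut = t.dOut();
            }
        return new double[]{f, dIn, dOut};
    }

    public static void main(String[] args) {
        StatsHistogram h = new StatsHistogram();  // tiles from Figure 5a (bottom)
        h.tiles.add(new Tile(Set.of("India"), 0, 40, 10, 2.1, 3.4));        // degrees illustrative
        h.tiles.add(new Tile(Set.of("India", "UK"), 40, 50, 14, 2.1, 3.4));
        System.out.println(Arrays.toString(h.lookup("India", 10, 30)));
    }
}
```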
For important properties like vertex and edge types, out-degree and in-degree, we pre-coarsen the time steps into, say, weeks, and for other properties into, say, months, to reduce the size of the histogram – the actual coarsening factor is decided based on how often the properties change in the graph. For properties with 1000s of enumerated values (e.g., Tag in Figure 1), we sort them based on their frequency, cluster them into similar frequencies, and perform tiling on these clusters. We retain a map between property values and clusters for these, which is used to rewrite the input query to replace the property values with these cluster IDs instead.

We use an interval tree to maintain each histogram, with each tile inserted into this tree based on its time range. The nodes of the tree have a set of tiles (property value ranges and their frequencies) that fall within its time interval. The invariant for all the nodes in the tree is that the interval of a parent node is after the left child (i.e., the start time and/or end time of the parent's interval is after the left child's interval), and before the right child. E.g., the interval tree in Figure 5b is constructed from the 2D histogram in Figure 5a (bottom). Every tile in the histogram becomes a node or part of a node in the interval tree. We insert a tile in the right sub-tree if its interval is greater than the parent node's interval, in the left sub-tree if it is lesser, and in the parent if it overlaps with it. To perform a lookup, we check if the lookup interval is greater than or less than the parent interval and prune the search space accordingly, similar to a binary search tree. Calling the function H performs a lookup in this interval tree, and matches within the set of property ranges.

The time complexity to construct each interval tree includes the time to aggregate the statistics from the graph, taking O(n·k), where n is the number of vertices in the graph and k the average number of property names or keys per vertex type. For each property key, the time taken is dominated by the tiling step that uses DP, and takes O(p³t³), where p is the number of (clustered) values for the property key, and t the number of (coarsened) time units they span [52]. The cost of building the interval tree is O(m·t), where m is the number of tiles in the coarsened histogram. The lookup time is O(p·t) in the worst case; for a balanced tree the expected lookup time is O(log m + k), where k is the number of intersecting intervals in the tree.

The raw size of the statistics for the graphs used in our experiments ranges from 4200–5600 kB for about 13–15 property keys.
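Below is a minimal sketch of this statistics interval tree, assuming tiles that carry only a frequency and no tree balancing; tiles merge into a node when their intervals overlap, following the insertion rule described above.

```java
// Interval tree over coarsened statistics tiles, with BST-style pruning.
import java.util.*;

public class StatsIntervalTree {
    static class Node {
        final int start, end;                            // time interval of this node
        final List<Double> tileFreqs = new ArrayList<>(); // frequencies of tiles here
        Node left, right;
        Node(int s, int e) { start = s; end = e; }
    }

    // Insert: right if strictly after, left if strictly before, else merge.
    static Node insert(Node n, int s, int e, double freq) {
        if (n == null) { n = new Node(s, e); n.tileFreqs.add(freq); return n; }
        if (s >= n.end) n.right = insert(n.right, s, e, freq);
        else if (e <= n.start) n.left = insert(n.left, s, e, freq);
        else n.tileFreqs.add(freq);
        return n;
    }

    // Lookup: sum frequencies of nodes overlapping [s, e), pruning sub-trees.
    static double lookup(Node n, int s, int e) {
        if (n == null) return 0;
        double f = 0;
        if (s < n.end && n.start < e)
            for (double t : n.tileFreqs) f += t;
        if (s < n.start) f += lookup(n.left, s, e);
        if (e > n.end) f += lookup(n.right, s, e);
        return f;
    }

    public static void main(String[] args) {
        Node root = insert(null, 0, 40, 10);   // India tile from Figure 5
        root = insert(root, 40, 50, 14);       // India/UK tile
        System.out.println(lookup(root, 10, 45));  // overlaps both tiles: 24.0
    }
}
```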
For each segment, we estimate a count of active and match 18 ----- ing vertices and edges in each superstep, given by the recurrence relation discussed next. Let P = [π1, π1, ..., πn] denote the sequence of n vertex predicates, π, and _n_ 1 edge predicates, π, for a given path query segment. Each predicate _−_ _π has a set of property clauses CP (π) = {⟨κ, val⟩} and a temporal clause_ _CT (π) = ⟨lifespan, τ_ _⟩, where κ is a property key, val is a value to compare_ its value against, and τ is the interval to compare that vertex/edge/property’s lifespan against; and similarly for π. These clauses themselves can be combined using AND and OR Boolean operators, as described in the query syntax earlier. Let σi (σi) denote the type of the vertex (edge) enforced by a clause of predicate πi (πi). Let Vσ (Eσ) denote the set of vertices (edges) of that type; if the vertex (edge) type is not specified in the predicate, these sets degenerate to all vertices (edges) in the graph. As shown in Figure 2b, each superstep is decomposed into 2 stages: calling init or compute on the active vertices to find the vertices matching the vertex predicate, and calling scatter on the active edges (i.e., in or out edges of the matching vertices) to identify the edges matching the edge predicates. These in turn help identify the active vertices for the next superstep of execution. Initially, all vertices of the graph are active, but if a type is specified in the starting vertex predicate, we can use the type-based partitioning to limit the active vertices to the ones having that vertex type. Let ai and mi denote the number of active and matched vertices, respectively, for vertex predicate πi with type σi, and ai and mi denote the _number of active and matched edges, respectively, for the edge predicate πi_ with type σi. These can be recursively defined as: _ai_ = � _|Vσ|_, if i = 1 (1) min(mi−1, |Vσ|), otherwise � _⟨fi, δin[i]_ _[, δ]out[i]_ _[⟩]_ = _Hκ(val, τ_ ) _⟨κ,val⟩∈CP (πi)_ _⟨lifespan,τ_ _⟩∈CT (πi)_ _mi_ = _ai ×_ _|V[f]σ[i]_ _|_ (2) _ai_ = _m[σ]i_ _in_ [+][ δ]out[i] [)] (3) _[×][ (][δ][i]_ � _⟨fi, −, −⟩_ = _Hκ(val, τ_ ) _⟨κ,val⟩∈CP (πi)_ _⟨lifespan,τ_ _⟩∈CT (πi)_ _fi_ _mi_ = _ai ×_ _|Vσ| × (δ[¯]in[σ]_ [+ ¯][δ]out[σ] [)] (4) In Equation 1, we set the active vertex count in the first superstep to be equal to the number of vertices of type σ. This reflects the localization 19 ----- of the search space in the init function to only vertices in the partitions matching that vertex type. For subsequent supersteps, the active vertex search space is upper-bounded by |Vσ| but is usually expected to be the number of matching edges in the previous superstep [2], which would send a message to activate these vertices and call its compute function. Next, in Equation 2, we use the graph statistics to find the fraction of vertices _fi_ _|Vσ|_ [that match the vertex predicate][ π][i][ (also called][ selectivity][) and] multiply this with the number of active vertices to estimate the matched vertices. This is the expected matched output count from init or compute. We use to find the selectivity by iterating through all clauses of a predi_H_ cate πi, get their frequency, average in degree and average out degree of the vertex matches for each along with any temporal clause, and then aggregate ( ) these frequencies. The aggregation between adjacent clauses can be ei_⊗_ ther AND or OR, and based on this, we apply the following aggregation logic for the frequencies and degrees. � _f_ = (f 1, f 2) = � _δ_ = (⟨fi, δi⟩, ...) 
f = f1 ⊗ f2 = min(f1, f2), if ⊗ = AND; max(f1, f2), if ⊗ = OR    (5)

δ = ⊗(⟨f1, δ1⟩, ⟨f2, δ2⟩, ...) = (Σ_i f_i × δ_i) / (Σ_i f_i)    (6)

Equation 5 returns the smaller of the frequencies when performing an AND, and the larger of the two with an OR; the former can be an over-estimate while the latter an under-estimate if the two properties are not statistically independent. Equation 6 finds the weighted average of the degrees of the vertices matching the predicates. Once the frequencies of the clauses are aggregated, we divide by the number of vertices of this vertex type to get the selectivity for the vertex predicate.

Then, in Equation 3, we identify the number of edges for which scatter will be triggered by multiplying the matched vertices with the sum of the in and out degrees of the matching vertices, δ. Lastly, in Equation 4 we estimate the number of edges matched by the edge predicate π̄i. Here, we get the edge selectivity using the frequency of edge matches returned by the graph statistics, normalized by the number of preceding vertices of type σ times the average of the in and out degrees of vertices of this type, δ̄. The edge selectivity is multiplied by the active edge count to get the matched edges expected from the scatter call. These edges send messages to their destination vertices, and this feeds into the active vertex count in superstep i + 1.

E.g., Table 2 shows the cost model and statistics in action for query EQ2 on graph 100k:F-S, which is described later in Section 6.1. It reports the counts for the active and matched vertices and edges (a, m) using Equations 1–4, and the frequency of the vertices and edges (f) as returned by the histogram, for each superstep of two different query plans. We see that a1 is higher for Plan 2 than Plan 1 since the plans start at different vertex types during the init phase, and this leads to different execution times for this phase (ι, discussed later). The frequency f1 in Plan 1 is equal to f2 in Plan 2, and likewise for f2 of Plan 1 and f1 of Plan 2. This is expected since the predicate evaluated in superstep 1 of Plan 1 is the same as that of superstep 2 in Plan 2. The cost model also estimates the messages sent, m̄1, to be 1.3M and 67k for the two plans. Since we assume that the property values are independent, the selectivities remain constant. a2 = m̄1 for both plans since we assume that each message from a superstep is sent to a unique vertex in the next superstep. The compute calls for Plan 2 are higher, while the scatter calls and messages for Plan 1 are higher. The execution time model discussed next helps decide which of these plans has a lower estimated latency.

The clauses for time can also have comparators like ≻, ≺, etc., and property clauses can have !=. These are supported by the histogram and cost model. E.g., we get the frequency for a ≺ operator by summing the frequencies of all values smaller than the given value, and for != by subtracting from the total frequency the frequency of values that equal the given value. All time-variant statistics are maintained in the histogram, while invariants such as the counts of vertices and edges of each type are maintained as part of global statistics for the graph.
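The recurrence in Equations 1–4 can be read as the following loop. The selectivities, degrees and type counts below are illustrative stand-ins for values returned by the histogram H, not measurements.

```java
// Per-superstep estimates of active/matched vertices and edges (Eqs. 1-4).
public class CountEstimates {
    public static void main(String[] args) {
        double[] vTypeCount = {100_000, 12_600_000}; // |V_sigma| per hop's vertex type
        double[] vSel       = {3.7e-2, 7.7e-4};      // f_i / |V_sigma| per vertex predicate
        double[] degSum     = {62, 0};               // delta_in + delta_out of matched vertices
        double[] eSel       = {0.21, 0};             // edge selectivity per edge predicate

        double aV = vTypeCount[0];                   // Eq. 1: all typed vertices active first
        for (int i = 0; i < vSel.length; i++) {
            double mV = aV * vSel[i];                // Eq. 2: matched vertices
            System.out.printf("SS %d: active=%.0f matched=%.0f%n", i + 1, aV, mV);
            if (i == vSel.length - 1) break;
            double aE = mV * degSum[i];              // Eq. 3: edges on which scatter runs
            double mE = aE * eSel[i];                // Eq. 4: matched edges -> messages
            aV = Math.min(mE, vTypeCount[i + 1]);    // Eq. 1: next superstep's active vertices
        }
    }
}
```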
#### 5.3 Execution Time Estimate

Given the estimates of the active/matched vertices/edges in each superstep, we incorporate them into execution time models for the different stages within a superstep to predict the overall execution time. We use micro-benchmarks to fit a linear regression model for the execution time functions I, M, S, CC, and IC, used below. These are unique to a cluster deployment of Granite, and can be reused across graphs and queries.

As shown in Figure 2b, the init function is called on the a1 active vertices in the first superstep, and generates m1 outputs that affect the states of the interval vertex. Its execution time estimate is given by the function ι = I(a1, m1). For subsequent supersteps i, the compute function is called similarly on the active vertices, ai, to generate the matched vertices mi. This has a slightly different execution logic since it has to process an estimated m̄_{i−1} input messages from the previous superstep and does not have to initialize data structures, unlike init. Its execution time estimate is ci = M(ai, mi, m̄_{i−1}). In a superstep i, scatter is called on the active edges and generates matched edges, with an estimated time of si = S(āi, m̄i). Besides these, there are per-superstep platform overheads: for iterating over vertices matching a given type, cci = CC(|Vσ|) in the partitionCompute phase, and a base overhead of ici = IC(ai) per active vertex for Graphite. Given these, the total estimated execution time of the cost model for a query path segment with n hops is:

T = (ι + s1 + cc1 + ic1) + Σ_{i=2..n} (ci + si + cci + ici)

In practice, these functions are determined by fitting simple linear regression models over query micro-benchmarks performed on the cluster on which the platform will be deployed. This is done once, and the functions are common for different graphs and query workloads on that cluster. E.g., Table 3 shows the coefficients of the linear equations that we fit for these functions, for the experiment setup in Section 6.2.

Table 3: Cost model coefficients for the linear regression fit of each execution phase, as used in our experiments (nσ denotes the count of vertices of type σ).

| Phase | Linear fit |
|---|---|
| Init | I(a1, m1) = 9.4e−5·a1 − 3.1e−5·m1 + 3.83 |
| Compute | M(ai, mi, m̄i−1) = 7.2e−5·ai + 3.3e−5·mi + 1.8e−5·m̄i−1 + 1.63 |
| Scatter | S(āi, m̄i) = 7.9e−5·āi + 0·m̄i − 3.81 |
| Interval Compute | IC(ai) = −5.1e−6·ai + 8.6e−2 |
| Partition Compute | CC(nσ) = −8.0e−6·nσ + 28.7 |

Also, Table 2 shows the estimated execution time Ti in each superstep i for the two execution plans, using these coefficients. Plan 1 takes less time than Plan 2, due to the latter taking 7.8× longer in superstep 1. This is caused by a high init execution time, ι, since it has to evaluate 51M vertices (a1) compared to only 100k in Plan 1. Since the total time is dominated by the init time, our cost model will choose Plan 1 for executing this query.

We exclude the time to perform the join and aggregation (for aggregate queries) from the cost model equation. This is based on our observation that this time is negligible (e.g., 20–30 ms in our experiments) compared to the overall execution time of a query (≈1000 ms) in most cases. In contrast, the execution times for init and the three compute functions together take about 900 ms. Further, the join and aggregate costs are proportional to the result set size. Even with a large result set size, there would inevitably be a large number of intermediate compute calls, and so the relative time taken by join and aggregate remains low. Avoiding their inclusion helps keep the model concise, with only the most significant costs included. The time taken to find the optimal split point for a query using the approach described in this section is 2–9 ms.
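As an illustration, the sketch below plugs the Table 3 coefficients into the per-phase functions and sums them per the equation for T. The superstep counts are hypothetical stand-ins for the outputs of Equations 1–4, and the last superstep's scatter term is omitted since no edges are activated in this toy example.

```java
// Plan time estimate from the fitted linear phase models of Table 3.
public class TimeEstimate {
    static double init(double a1, double m1)      { return 9.4e-5*a1 - 3.1e-5*m1 + 3.83; }
    static double compute(double a, double m, double mPrev) {
        return 7.2e-5*a + 3.3e-5*m + 1.8e-5*mPrev + 1.63;
    }
    static double scatter(double aE, double mE)   { return 7.9e-5*aE + 0*mE - 3.81; } // zero m-term as fitted
    static double intervalCompute(double a)       { return -5.1e-6*a + 8.6e-2; }
    static double partitionCompute(double nSigma) { return -8.0e-6*nSigma + 28.7; }

    public static void main(String[] args) {
        // Superstep 1: init + scatter + per-superstep overheads.
        double t = init(100_000, 3_700) + scatter(229_400, 48_000)
                 + partitionCompute(100_000) + intervalCompute(100_000);
        // Superstep 2 (last): compute + overheads only in this toy.
        t += compute(48_000, 37, 48_000)
           + partitionCompute(12_600_000) + intervalCompute(48_000);
        System.out.printf("estimated T = %.1f%n", t);
    }
}
```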
Figure 6: Modified LDBC temporal property graph schema used in the evaluation.

### 6 Results

#### 6.1 Workload

We use the social network benchmark from the Linked Data Benchmark Council (LDBC) [53] for our evaluation of Granite. It is a community-standard workload with realistic transactional path queries over a social network property graph. There are two parts to this benchmark: a social network graph generator and a suite of benchmark queries.

**Property Graph Datasets** The graph generator S3G2 [54] models a social network as a large correlated directed property graph with diverse distributions. Vertices and edges have a schema type and a set of properties for each type. Vertex types include person, message, comment, university, country, etc., while edge types are follows, likes, isLocatedIn, etc. The graph is generated for a given number of persons in the network, and a given degree distribution of the person–follows–person edge: Altmann (A), Discrete Weibull (DW), Facebook (F) or Zipf (Z).

We make two changes to the LDBC property graph generator. One, we denormalize the schema to embed some vertex types such as country, company, university and tag directly as properties inside person, forum, post and comment vertices. This simplifies the data model. Two, while LDBC vertices are assigned a creation timestamp that can fall within a 3-year period, we include an end time of ∞ to form a time interval. We also add lifespans to the edges incident on vertices based on their referential integrity constraints, and replace time-related properties like join date and post date with the built-in lifespan property instead. The vertex and edge lifespans are also inherited by their properties. Figure 6 shows this modified graph schema. However, this is still only a static temporal property graph.
To address this, we introduce temporal variability into the properties worksAt, country and hasInterest of the person vertex. For worksAt, we generate a new property every year using the LDBC distribution; the country is correlated with worksAt, and hence updated as well. We update the hasInterest property based on the list of tags for a forum that a person joins, at different time points.

Table 4 shows the vertex and edge counts, the number of vertices of each type and the total number of property values, for graphs we generate with 10⁴ (10k) or 10⁵ (100k) persons, with different distributions (DW, Z, A, F), and with static (S) and dynamic (D) properties. As we see, the Comments type dominates the number of vertices, with up to 400 comments per person over a 3-year period, followed by about 100 Posts per person. The most frequent edge types are forum_hasMember_person and comment_hasCreator_person, while each person Follows 10.2 other friends on average. Properties such as hasInterest for person and hasTag for comment take up the most space since they are multi-valued, with an average of 23 interests per person and 1.22 tags per comment.

Table 4: Characteristics of the graphs used in the experiments (-S graphs are static temporal graphs, -D are dynamic).

| Graph | Vertices | Edges | Persons | Posts | Comments | Forums | hasMember* | hasCreator† | Unrolled Properties# |
|---|---|---|---|---|---|---|---|---|---|
| 10k:DW-S | 5.5M | 20.8M | 8.9k | 1.1M | 4.3M | 82k | 3.3M | 4.3M | 35M |
| 100k:Z-S | 12.1M | 23.9M | 89.9k | 7.4M | 2.3M | 815k | 1.5M | 2.3M | 60M |
| 100k:A-S | 25.4M | 78.2M | 89.9k | 8.7M | 15.7M | 816k | 12.7M | 15.8M | 157M |
| 100k:F-S | 52.1M | 217.6M | 100k | 12.6M | 38.3M | 996k | 52.2M | 38.4M | 325M |
| 10k:DW-D | 6.6M | 29.3M | 10k | 1.4M | 5.1M | 100k | 7.2M | 5.1M | 30M |
| 100k:Z-D | 15.2M | 37.1M | 100k | 9.3M | 4.8M | 995k | 3.2M | 4.8M | 57M |
| 100k:A-D | 32.0M | 112.2M | 100k | 10.8M | 20.1M | 995k | 25.6M | 20.1M | 132M |
| 100k:F-D | 52.0M | 216.5M | 100k | 12.6M | 38.2M | 995k | 51.8M | 38.3M | 222M |

* forum_hasMember_person. † comment_hasCreator_person. # Unrolls multi-valued properties into individual ones.

Figure 7: Box-and-whiskers distribution plot of the result set count for the 100 instances of each non-aggregate query type, for (a) 10k:DW, (b) 100k:Z, (c) 100k:A and (d) 100k:F. Q1–Q7 are reported on the static graphs while Q8 is on the dynamic graphs. The median result set count is labeled.

**Query Workload** We select a subset of the query templates provided in the LDBC query workload [53] that conform to a linear path query, and adapt them for our temporal graphs. Table 5 describes the query templates. These are either from the Business Intelligence (BI) or the Interactive Workload (IW).
We also include two additional query templates, Q5 and Q6, to fully exercise our query model. Query template Q8 depends on worksAt, which is a dynamic property, and so it is only evaluated for the dynamic temporal graphs. Each template has some parameterized property or time value. We generate 100 query instances for each template by randomly selecting a value for the parameters, evaluating the query on the temporal graph, and ensuring that there is at least 1 valid result set in most cases. Query instances are generated for both the static and dynamic graphs.

Table 5: Description of the query workload used in the experiments.

| Query | LDBC ID | Hops | Property Predicates | Time Predicates | Has ETR Predicate? | Description of path to find (parameterized values in italics in the original) |
|---|---|---|---|---|---|---|
| Q1 | BI/Q9 | 3 | 4 | 1 | Yes | Two messages with different tags belong to the same forum, with a time ordering between the messages. |
| Q2 | BI/Q10 | 2 | 6 | 1 | No | A person with a given tag creates a message with the same tag after a given date. |
| Q3 | BI/Q16 | 3 | 6 | 1 | Yes | A person from a given country has commented on or liked a post before a person from another given country. |
| Q4 | BI/Q17 | 4 | 3 | 2 | Yes | Mutual friendships between three persons, but with a time-respecting order in which they befriend each other. |
| Q5 | – | 5 | 7 | 3 | Yes | A person posts a message with a given tag to a forum and, after a time offset, they post another message to the same forum with a different tag. |
| Q6 | – | 5 | 7 | 1 | Yes | A person with a specific gender replies to a post after another person replies to it. |
| Q7 | BI/Q23 | 4 | 5 | 3 | Yes | A person posts a message from outside their home country, then befriends another person, and that person then posts another message from outside their home country. |
| Q8 | IW/Q11 | 3 | 3 | 1 | Yes | Two persons working in different companies have a common friend at a time-point. |

In addition to these non-aggregate queries, we also create another workload that adds a count temporal aggregate operator to these query templates, i.e., it groups the results of the original query by the first vertex and its time intervals, and returns the count for each vertex–interval. This helps evaluate the performance of aggregate queries. For brevity, we limit these aggregate queries to the two largest graphs, 100k:A and 100k:F.

Figures 7a–7d show the distribution of the result set count for the (non-aggregate) queries on the different graphs in our workload. These illustrate the expressivity of our query model, and the ability to intuitively extend it to the time domain.
The query length varies between 2 and 5 hops, allowing us to evaluate the cost model and Granite's performance for different lengths. All the vertex types appear as predicates in our workload. The queries filter on both single-valued properties like country and lastName, and multi-valued properties like hasInterest and hasTag. All edge types except forum_hasModerator_person are used in the workload. 7 out of the 8 query types have an ETR predicate, and all the queries have at least 1 time predicate. They are diverse with respect to result sizes too, as shown in Figures 7a–7d, and the result counts span several orders of magnitude, from 10⁰–10⁴.

In our experiments, each query is given an execution budget of 600 secs, after which it is terminated and marked as failed. The average execution times are reported only for the successful queries. We verify the correctness of all queries on Granite and the baseline platforms. For the performance evaluations, the queries only return the count of the result sets, for timeliness.

#### 6.2 Experiment Setup

Our commodity cluster has 18 compute nodes, each with one Intel Xeon E5-2620 v4 CPU with 8 cores (16 HT) @ 2.10 GHz, 64 GB RAM and 1 Gbps Ethernet, running CentOS v7. For some shared-memory experiments on other baseline graph platforms, we also use a "big memory" head node with 2 similar CPUs and 512 GB RAM. Granite is implemented over our in-house Graphite v1.0 ICM platform [21], Apache Giraph v1.3.0, Hadoop v3.1.1 and Java v8. By default, our distributed experiments use 8 compute nodes in this cluster, run one Granite Worker JVM per machine with 8 threads per Worker, and have 50 GB RAM available to the JVM. The graphs are initially loaded into Granite from JSON files stored in HDFS, with their pre-computed cost model statistics, and the query workloads run on this distributed in-memory copy of the graph.

#### 6.3 Baseline Graph Platforms

We use the widely-used Neo4J Community Edition v3.2.3 [47] as a baseline graph database to compare against. This is a single-machine, single-threaded platform. We use three variants of it.
One variant specifies the workload queries using the community-standard Gremlin query language (N4J-Gr, in our plots), and another uses Neo4J's native Cypher language (N4J-Cy). Both these variants run on a single compute node with a 50 GB heap size. A third variant uses Cypher as well, but is allocated 8 × 50 = 400 GB of heap space on the head node (N4J-Cy-M). As graph platforms are often memory-bound, this configuration matches the total distributed memory available to our Granite setup by default. We build indexes on all properties in Neo4J.

There are few open-source distributed graph engines available. JanusGraph [8], a fork of Titan, is popular, and uses Apache Spark v2.4.0 as a distributed backend engine to run Gremlin queries (Spark, in our plots). It uses Apache Cassandra v2.2.10 to store and access the input graph. Spark runs on 8 compute nodes with 1 Worker each and 50 GB heap memory per Worker. Cassandra is deployed on 8 additional compute nodes. This is based on the recommended configuration for JanusGraph on Cassandra (https://docs.janusgraph.org/storage-backend/cassandra/). Spark initially loads the graph from Cassandra into its distributed memory present on its 8 compute nodes. This load time is not counted as part of the query execution time. So, effectively, only the 8 Spark nodes are used during query execution. For all baselines, we follow the standard performance tuning guidelines provided in their documentation (https://neo4j.com/docs/operations-manual/3.2/performance/ and https://docs.janusgraph.org/advanced-topics/hadoop/).

Since these platforms do not natively support temporal queries over dynamic temporal graphs, we transform the graphs into a static temporal graph using techniques described by Wu, et al. and used earlier by Graphite [21, 43]. This static property graph converts the time intervals on the vertices and edges of the original interval graph into an expanded set of vertices and edges that are valid for just a single discrete time point. This lets us adapt the query to operate on the static graph, albeit a bloated one. Also, temporal aggregation is not feasible internally on these platforms, so we perform the final aggregation at the client side for queries with an aggregate operator. JanusGraph/Spark is unable to load the two largest graphs in memory, and hence was not evaluated for the aggregate queries. The results from all platforms for all queries are verified to be identical.

#### 6.4 Effectiveness of Cost Model

We first evaluate the effectiveness of Granite's cost model in identifying the optimal split point for the distributed query execution. For each query type (template), we execute its 100 query instances using all their possible query plans, i.e., every possible split point is considered for each query. From the execution times of all plans for a query, we pick the smallest as its optimal plan. We compare this against the plan selected by our cost model, and report the % of excess execution time that our model-selected plan takes above the optimal plan. This is the effective time penalty when we select a sub-optimal plan.

Figure 8a shows a violin plot of the distribution of this % excess time over optimal, for the different fixed split points 1–4 executed for the 100 queries of type Q4 (non-aggregate) on graph 100k:A-S, compared to the plan selected by our cost model (CM) – the lower this value, the closer to optimal the performance. We see that the execution time varies widely across the plans, with some taking 8× longer than optimal. Also, some split points like 2 and 3 are in general better than the others, but among them, neither is consistently better.
Figure 8: Effectiveness of the cost model in picking the best plan for non-aggregate queries. (a) Distribution of queries that exceed the optimal plan's time by a % (Y axis), for each fixed plan and for the cost model, for the 100k:A-S graph on Q4 type queries. (b) Actual vs. cost-model-estimated execution time for all query instances of the 100k:A-S graph, with a correlation coefficient of ρ = 0.87. (c) Ratio of the estimated average execution cost of the other plans relative to the optimal plan, for all query types of the 100k:A-S graph. (d) Cost model accuracy: % of times the optimal plan, the 2nd best plan and other plans were selected by our model, for all graphs.

In contrast, our cost model plan has a low mean excess time of 2.9%, relative to the 12.2% and 6.9% excess times taken by these other split points. Also, it is not possible to a priori find a single fixed split point which is generally better than the rest, without running the queries using all split points. These motivate the need for an automated analytical cost model for query plan selection.

We analyze the accuracy of the cost model for 100k:A, the second largest graph, in more detail, and its impact on the execution cost. First, Figure 8b shows a scatter plot between the actual and the model-estimated execution times for the 100k:A-S static graph; the plot has ≈2500 points.

Table 6: % excess time spent over the optimal plan by the cost-model-selected plan, for different query percentiles of each query type.

(a) 100k:A-S

| %le | Q1 | Q2 | Q3 | Q4 | Q5 | Q6 | Q7 |
|---|---|---|---|---|---|---|---|
| 75 | 1.8 | 0 | 2.2 | 0 | 0 | 0 | 0 |
| 90 | 6.8 | 0 | 12.6 | 0 | 0 | 0 | 0 |
| 95 | 8.5 | 0 | 24.6 | 0 | 56 | 0 | 0 |
| 99 | 17.6 | 0 | 47.1 | 0 | 123 | 0 | 195 |

(b) 100k:A-D

| %le | Q1 | Q2 | Q3 | Q4 | Q5 | Q6 | Q7 | Q8 |
|---|---|---|---|---|---|---|---|---|
| 75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 90 | 0 | 0 | 42 | 0 | 7.1 | 0 | 0 | 59 |
| 95 | 2.4 | 0 | 124 | 66 | 8.8 | 0 | 0 | 112 |
| 99 | 3.6 | 0 | 198 | 191 | 12 | 0 | 0 | 277 |

(c) 100k:A-S (Temporal Aggregate)

| %le | Q1 | Q2 | Q3 | Q4 | Q5 | Q6 | Q7 |
|---|---|---|---|---|---|---|---|
| 75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 90 | 5.7 | 0 | 20 | 0 | 19 | 0 | 0 |
| 95 | 6.3 | 0 | 24 | 0 | 24 | 0 | 0 |
| 99 | 8.3 | 0 | 30 | 0 | 52 | 0 | 0 |

(d) 100k:A-D (Temporal Aggregate)

| %le | Q1 | Q2 | Q3 | Q4 | Q5 | Q6 | Q7 | Q8 |
|---|---|---|---|---|---|---|---|---|
| 75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 90 | 4 | 0 | 6 | 0 | 28 | 0 | 0 | 57 |
| 95 | 12 | 28 | 21 | 132 | 39 | 0 | 0 | 175 |
| 99 | 18 | 145 | 84 | 166 | 57 | 0 | 0 | 643 |

Figure 9: Cost model accuracy: % of times the optimal plan, the second best plan and the other plans were selected by our model, for (a) 100k:A-S, (b) 100k:A-D, (c) 100k:A-S with temporal aggregates, and (d) 100k:A-D with temporal aggregates.
Overall, we see a high correlation coefficient of ρ = 0.87. There is an over-estimation for Q7 (maroon) due to an inaccurate estimation of the number of matching edges in the second hop, and under-estimates for Q1 to Q5. But Q6 (purple) shows a high correlation of ρ = 0.94. Given these execution time inaccuracies of the model, we examine their effect on: (1) picking the optimal execution plan, and (2) the latency penalty when the model does not pick the optimal plan.

Figure 9 shows the fraction of times the cost model selects the optimal plan, the second best plan, and the rest of the plans, for the static and dynamic variants of 100k:A, and for non-aggregate and aggregate queries. We also have corresponding data in Table 6, which reports, for different query types (columns) and for different percentiles of their queries (rows), the % excess execution time over the optimal spent by the plan chosen by the cost model.

For the non-aggregate queries, the best or the second best plan was selected over 97% of the time across all queries, as seen in Figures 9a and 9b. For queries Q2, Q4, Q6 and Q7, the optimal plan was chosen 99% of the time. In Q2, this is due to a short query length of 2, which reduces the cumulative errors in the model, as well as a high difference in cost between the best and the second best plans. This is seen in Figure 8c, which gives the ratio of the 2nd, 3rd and 4th best plans relative to the optimal. For Q2, the best plan evaluates the person vertices first, which are 500× fewer than the message vertices evaluated first by the other plan. As a result, the optimal execution time is 10× smaller than the other, and the model easily selects the former plan. Similarly, Q6 also exhibits a high difference in cost between the optimal plan and the remaining three. But the top two plans for queries Q4 and Q7 have a similar cost. For Q4, starting at either end causes a high fan-out, and hence the plans that start at the two intermediate hops have a lower, but similar, cost. In such cases, as Figures 9a and 9b show, we may occasionally select the second best plan.

However, the consequence of choosing the second best plan on the actual execution latency is low when the top-2 plans have a similar model cost. In fact, for 100k:A-S, we see from Table 6a that the execution time of the model-selected plan is within 2% of the optimal execution time for the 75th percentile query within a query type, and within 13% for the 90th percentile query. It is only at the 95th percentile query that we see higher penalties of 8–56% for 3 of the 7 query types.
Even for the dynamic graph 100k:A-D, 6 of the 8 query types have negligible time penalties at the 90th percentile query in Table 6b, while two, Q3 and Q8, have higher penalties of 42–59%. The sub-optimal behavior happens when the execution model predicts a similar cost for the top-2 plans but selects the actual second-best, and the observed runtime for the second-best is much worse than the best. E.g., for the 100k:A-S graph, the difference in actual execution cost between the optimal and second best plans for query Q3 is 18%. This causes the model to select the second best plan ≈28% of the time, and causes ≈5% of the queries to take 25% or longer to execute than the optimal plan.

We see similar trends for the temporal aggregate queries as well, in Figures 9c and 9d, and Tables 6c and 6d. The model predicts the same costs for these aggregate queries since it ignores the aggregate operation and join costs, due to their negligible overheads. Despite that, these queries perform on par with or better than the equivalent queries without the aggregation step. In fact, this is broadly applicable to all the graphs, as observed in Figure 8d. It reports that, across all queries and graphs evaluated, our cost model picks the best (optimal) or the second best plan over 95% of the time.

In summary, the cost model is accurate when the query is of shorter length, and accurate enough to distinguish between the similar good plans and the rest when certain predicates have high cardinalities. So we predominantly pick a plan that is optimal, or has an execution time that is close to the optimal plan's. Thus, while our cost model is not perfect, it is accurate enough to discriminate between the better and the worse plans, and consequently reduce the actual query execution time.

#### 6.5 Comparison with Baselines

Figure 10 shows the average execution time on Granite and the baseline platforms (Y axis, log scale) for the different non-aggregate query types (X axis) on the static temporal graphs, and Figure 11 on the dynamic temporal graphs. Only queries that complete in the 600 sec time budget are plotted. As Table 7 shows, Janus/Spark did not run (DNR) for several larger graphs due to resource limits when loading the graph in memory from Cassandra. 32–79% of the queries did not finish (DNF) within the time budget on Neo4J for 100k:F-S, the largest graph. Granite completes all queries on all graphs, often within 1 sec. For the largest graph, 100k:F-S, Granite uses 16 nodes to ensure that the graph fits in distributed memory.

The bar plots show that Granite is much faster than the baselines, across all graphs and all query types, except for Q5 on the smallest graph, 10k:DW-S. On average, we are 149× faster than N4J-Cy-M, 192× faster than N4J-Cy, 154× faster than N4J-Gr and 1140× faster than Spark. Other than on the largest graph, Granite completes on average within 500 ms for all static graphs and most query types, and on average within 1000 ms for 100k:F-S and all the dynamic graphs.

Focusing on specific query types for the largest static temporal graph, 100k:F-S, Q2 takes the least time for Granite due to its short path length of 2. The left-to-right execution by the baseline platforms is the optimal query plan here, but we are still able to out-perform them due to the parallelism provided by partitioning. Granite takes ≈5 secs for Q3 due to the huge number of results, ≈5.9M on average. But this query does not even complete for N4J-Cy and Spark.
Figure 10: Comparison of the average execution time of Granite with baseline systems for non-aggregate query types, on static temporal graphs: (a) 10k:DW-S, (b) 100k:Z-S, (c) 100k:A-S, (d) 100k:F-S.

Granite's tree-based result structure is more compact, reducing memory and communication costs. Q4 for this graph is also 89–112× better on Granite than the baselines, with large result sizes of ≈72k on average. Here, there is a rapid fan-out of matching vertices followed by a fan-in as they fail to match downstream predicates, leading to high costs. Q7 queries are able to complete only on Granite and not on the baseline platforms. This query has an optimal split point of 1 or 2, which is not adopted by the baselines. In fact, the baselines use the worst possible left-to-right plan, which we see is 4× slower than the optimal for Granite.

Granite is also consistently better for the dynamic graphs. Similar to the static graphs, the only time that our average query time is slower than a baseline is for Q5 on 10k:DW-D. Here, the default left-to-right execution is near-optimal, and the query has a low traversal fan-out and < 10 results. So the baselines are in an ideal configuration, while Granite has overheads for distributed execution.

Figure 11: Comparison of the average execution time of Granite with baseline systems for non-aggregate query types, on dynamic temporal graphs: (a) 10k:DW-D, (b) 100k:Z-D, (c) 100k:A-D, (d) 100k:F-D.

Neo4J using Cypher, on the single compute node (N4J-Cy) and the big memory node (N4J-Cy-M), is the next best to Granite. The large memory variant gives similar performance to the regular memory one for the smaller graphs, but for larger graphs like 100k:A and 100k:F, it out-performs it. For the latter graph, N4J-Cy could not finish several query types. Though Neo4J uses indexes to help filter the vertices for the first hop, query processing for later hops involves a breadth-first traversal and pruning of paths based on the predicates. There are also complex joins between consecutive edges along the path to apply the temporal edge relation. These affect their execution times.
The Gremlin and Cypher variants of Neo4J are comparable in performance, with no strong performance skew either way. Interestingly, the Gremlin variant of Neo4J is able to run most query workloads for all graphs, albeit with slower performance. The Janus/Spark distributed baseline takes the most time for all these queries. This is despite omitting its initial graph RDD creation time (≈80 secs); Granite, in contrast, persists the graph in memory across queries. Despite using distributed machines, Spark is unable to load large graphs in memory and often fails to complete execution within the time budget. A similar challenge was seen even for alternative engines like Hadoop used by JanusGraph, and Spark was the best of the lot.

In the bar plots, we also show a black bar for the single-machine baselines, which is marked at the 1/8th execution time-point. This shows the theoretical time that would be taken by these platforms if they had perfect parallel scaling on 8 machines, though they do not support parallel execution. As we see, Granite is often able to complete its execution within that mark, showing that our distributed engine has scaling performance comparable to or better than highly optimized single-machine platforms, even if they had ideal scaling.

Figure 12: Comparison of the average execution time of Granite with baseline systems for temporal aggregate query types: (a) 100k:A-S, (b) 100k:F-S, (c) 100k:A-D, (d) 100k:F-D.

Lastly, we compare the performance of temporal aggregate queries for the two largest static and dynamic graphs, 100k:A and 100k:F. Their execution times on the different platforms are shown in Figure 12. For the static graphs, we observe from Figures 12a and 12b that Granite is much faster than all the baselines for most query types. On average, we are 165× faster than N4J-Cy-M, 175× faster than N4J-Cy and 95× faster than N4J-Gr. This is 10× faster even when compared with the perfect-scaling extrapolation for the baselines. These temporal aggregate queries are slower compared to their non-aggregate equivalents.
Specifically, for 100k:A-S, Granite takes 64% (≈315 ms) more on average, while the baseline platforms on average are 56% (N4J-Cy), 42% (N4J-Gr) and 78% (N4J-Cy-M) slower, which translates to ≈24–53 secs longer per query. The baselines' times increase considerably due to the additional overhead of sending the entire result set back to the client to perform the temporal aggregation, as opposed to just sending the total number of results for the non-aggregate queries. Since Granite does this natively in a distributed manner, we mitigate this cost.

Granite completes all these queries when executed using the plan selected by the cost model (Table 7). The baseline platforms are only able to complete, on average, 79% (N4J-Gr), 67% (N4J-Cy) and 82% (N4J-Cy-M) of the queries on the static graphs, and this is worse for the dynamic graphs, ranging from 33%–89%. Also, as Figures 12a and 12b show, we take under 1 sec to run all queries on 100k:A-S except Q7, and within 2.1 secs for all queries on the largest graph, 100k:F-S, except Q3 – query Q3 takes longer due to the large result count of 4.3M (Figure 7d). For dynamic graphs, we take under ≈3 secs for all queries on 100k:A-D except Q4 and Q7, and within 9.2 secs for all queries on the largest graph, 100k:F-D, except Q3 (Figures 12c and 12d). None of the baseline platforms could finish query type Q7 for 100k:F-S or 100k:F-D. This query starts and ends with the Post vertex type, which has a high cardinality. Also, these queries on the baseline platforms need to accumulate all the results for client-side aggregation. Both of these lead to memory pressure for the larger graphs.

Table 7: % of queries that complete within 600 seconds for different platforms on the temporal graphs

| Graph | Spark | N4J-Gr | N4J-Cy | N4J-Cy-M | Granite |
| --- | --- | --- | --- | --- | --- |
| _Static Graphs, Non-aggregate queries_ | | | | | |
| 10k:DW | 100 | 99 | 99 | 80 | 100 |
| 100k:Z | 93 | 90 | 100 | 100 | 100 |
| 100k:A | DNR | 100 | 90 | 98 | 100 |
| 100k:F | DNR | 66 | 21 | 68 | 100 |
| _Static Graphs, Temporal aggregate queries_ | | | | | |
| 100k:A | DNR | 98 | 65 | 99 | 100 |
| 100k:F | DNR | 60 | 68 | 65 | 100 |
| _Dynamic Graphs, Non-aggregate queries_ | | | | | |
| 10k:DW | DNR | 96 | 98 | 96 | 100 |
| 100k:Z | DNR | 100 | 90 | 98 | 100 |
| 100k:A | DNR | 97 | 46 | 46 | 100 |
| 100k:F | DNR | 30 | 20 | 75 | 100 |
| _Dynamic Graphs, Temporal aggregate queries_ | | | | | |
| 100k:A | DNR | 95 | 46 | 99 | 100 |
| 100k:F | DNR | 36 | 19 | 78 | 100 |

#### 6.6 Components of Execution Time

Next, we briefly examine where the time is spent in distributed execution.
As an exemplar, Figure 13 shows a stacked bar plot of the time taken by Q7 in different supersteps, and within different workers in a superstep, for the 100k:A-S graph. The stacks represent the time taken by the init/compute, scatter, and join phases of Granite, the interval compute parent phase of Graphite (ICM), the partition compute grand-parent phase of Giraph (VCM), and other residual time such as barrier synchronization and JVM garbage collection (GC), in each superstep. These times are averaged across all 100 instances of the query type. For deterministic execution, we select a fixed split point for the execution plan that is optimal for a majority of the queries, which, for Q7, is at the third vertex in the path.

For Q7, the first superstep time is dominated by the init logic, as the predicate operates on the Post vertex type, which has 8.7M vertices. Its scatter time is minimal as only 71k out-edges match out of 250k and are used to send messages. The overheads of interval compute are small, but partition compute takes longer at 140 ms. In the latter, the Giraph logic which we extend selects the active partitions based on the vertex type of the query predicate (Post, in the case of Q7), iterates through its active vertices, invokes interval compute on each with the incoming messages, and clears the message queue. The other time is non-trivial at 145 ms. This is caused by GC triggering due to memory pressure, taking 110 ms, with the rest going to the superstep barrier.

In superstep 2, the compute time is negligible at 1.5 ms as only 3.4k Person vertices are active across both branches of the query plan, but scatter takes 247 ms since 2.83M edges are processed along one branch of the plan – the Person vertex has a high out-edge degree – out of which 31k satisfy the predicate. About 100 ms is taken by partition and interval computes, for selecting and iterating over the relevant active vertices, and for performing TimeWarp and state initialization, while there is a GC overhead of 64 ms in other. In the last superstep, a small amount of time is taken for compute and to join the results.

Interestingly, the time taken by each phase is similar across the different workers in a superstep for this query. This indicates that the partitioning manages to balance the load for this query type. However, for other queries like Q4 (not shown for brevity), we observe that in some supersteps, scatter takes 79% longer for the slowest worker compared to the fastest due to a skew in the number of edges activated per worker. Also, queries like Q4 take less time in the first superstep but a larger time in superstep 2 due to a high fan-out, going from 36k edges processed in the first step to 1.48M edges in the second step. In others like Q3, the first superstep is dominated by scatter, since the initial vertex type Person has only 89k vertices with 770 of them matching, but these cause 950k edges to be processed, of which 122k match and trigger messaging.

Figure 13: Stacked bar plot of component execution times in each superstep, averaged over all queries of query type Q7 on the 100k:A-S graph. Header labels indicate average component time across Workers in a superstep.

Figure 14: Relative execution time (left axis, bars) and scaling efficiency % = t2/tw % (right axis, circles) for Worker counts w ∈ {4, 8, 16}, relative to w = 2, for Weak Scaling runs with (w × 6.25k):F-S graphs.
In summary, the different supersteps have high variability in execution times, and there is also variability in the time taken by each phase. Despite that, the cost model is able to discriminate and select near-optimal plans. The load is mostly balanced across workers in a superstep, though this depends on the query type. Much of the time is spent directly in processing the query using compute and scatter, with some additional overheads for the other phases.

#### 6.7 Weak Scaling

We evaluate the weak scaling capabilities of Granite using the static Facebook-distribution graphs. We use 4 different system resource sizes – 2, 4, 8 and 16 Workers, with 1 compute node per Worker – and the graph sizes increase proportionally to the Worker count – 12.5k:F-S, 25k:F-S, 50k:F-S and 100k:F-S. This attempts to keep the workload per Worker constant across the scaling configurations, with the per-Worker vertex and edge counts remaining within ±18% and ±23% of their mean, respectively. 100k:F-S is partitioned into 512 partitions (128 per vertex type), and the other graphs into 256 partitions (64 per vertex type). This ensures that we have enough partitions for the compute threads to process them in parallel across all Workers. We generate and use a 100-query workload for each query type, for each graph.

The left Y axis of Figure 14 (bars) shows, for each query type, the average relative execution time when using w = 4, 8, 16 Workers, compared to w = 2 Workers. The right Y axis (circles) shows the scaling efficiency = t2/tw %, i.e., the time taken on 2 Workers vs. the time taken on w Workers. With perfect weak scaling, the relative time should be constant and the efficiency 100%. The asymmetric nature of the graph data structure makes it rare to get ideal weak scaling. However, we do see that query types Q1, Q5, Q6 and Q7 offer ≥60% scaling efficiency on up to 8 Workers, and all queries but Q3 and Q4 have ≥40% efficiency on up to 16 Workers. Q3 and Q4 are unable to fully exploit the additional resources due to stragglers among their threads, which are often 10× slower due to uneven load. These two queries also have the largest result cardinality, which causes more messages to be sent over the network as the number of machines increases. As a result, they have poor scaling efficiency.
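The relative-time and efficiency numbers above follow directly from the measured per-Worker query times. The short Python sketch below reproduces that arithmetic; the timing values in it are illustrative placeholders, not measurements from these experiments.

```python
# Minimal sketch: computing the relative execution time (bars) and weak
# scaling efficiency (circles) plotted in Figure 14. The avg_time values
# are hypothetical, for illustration only.

def scaling_efficiency(t2: float, tw: float) -> float:
    """Efficiency % = t2 / tw * 100, for w Workers relative to 2 Workers."""
    return 100.0 * t2 / tw

# Hypothetical average query times (seconds) for one query type, measured
# on 2, 4, 8 and 16 Workers with proportionally larger graphs.
avg_time = {2: 1.00, 4: 1.20, 8: 1.55, 16: 2.40}

for w in (4, 8, 16):
    rel = avg_time[w] / avg_time[2]                      # relative time
    eff = scaling_efficiency(avg_time[2], avg_time[w])   # efficiency %
    print(f"w={w:2d}  relative time={rel:.2f}x  efficiency={eff:.0f}%")
```

With perfect weak scaling the relative time would stay at 1.00× and the efficiency at 100%; the illustrative t16 = 2.40 s gives ≈42%, i.e., just above the ≥40% bound reported above.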
### 7 Conclusions

In this article, we have motivated the need for querying over large temporal property graphs and the lack of such platforms. We have proposed an intuitive temporal path query model to express a wide variety of requirements over such graphs, and designed the Granite distributed engine to implement these at scale over the Graphite ICM platform. Our novel analytical cost model uses concise information about the graph to allow accurate selection of a distributed query execution plan from several choices. These are validated through rigorous experiments on 8 temporal graphs with a 1600-query workload, derived from the LDBC benchmark. Granite out-performs the baseline graph platforms and gives < 1 sec latency for most queries.

As future work, we plan to explore out-of-core execution models to scale beyond distributed memory, indexing techniques to accelerate performance, and more generalized temporal tree and reachability query models, and to compare performance with other research prototypes and metrics from the literature. Designing incremental query execution strategies over streaming property-graph updates is also a related and under-explored challenge. The Granite platform is also finding relevance in analyzing epidemiological networks that form temporal property graphs, constructed from, say, digital contact tracing for the COVID-19 pandemic. This may motivate the need for further query operators.

### Acknowledgements

The first author of this work was supported by the Maersk CDS M.Tech. Fellowship, and the last author was supported by the Swarna Jayanti Fellowship from DST, India. We thank Ravishankar Joshi from BITS-Pilani, Goa for his assistance with the experiments.
{ "disclaimer": "Notice: Paper or abstract available at https://arxiv.org/abs/2002.03274, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "GREEN", "url": "https://arxiv.org/pdf/2002.03274" }
2020
[ "JournalArticle" ]
true
2020-02-09T00:00:00
[ { "paperId": "b97084d8e61f062ba63338528701c0c1ec7f3da4", "title": "An Interval-centric Model for Distributed Computing over Temporal Graphs" }, { "paperId": "69b47761baff2b3d25fc73f5cc90c3e3518d2b4f", "title": "ChronoGraph: Enabling Temporal Graph Traversals for Efficient Information Diffusion Analysis over Time" }, { "paperId": "605bad962c56a74d4fe5e42b36271c2e67becaa3", "title": "Incrementalization of Vertex-Centric Programs" }, { "paperId": "2cec2e0c71d24744da6035b369f5c96084787c04", "title": "A Lightweight Communication Runtime for Distributed Graph Analytics" }, { "paperId": "24a441897a0fd3b781052cb4ced38532c031acd8", "title": "G-Miner: an efficient task-oriented graph mining system" }, { "paperId": "3f92a52c6823d7be1a7672be56333b3379a778b0", "title": "Temporal graph algebra" }, { "paperId": "cb40a5e6d4fc0290452345791bb91040aed76961", "title": "Fake News Detection on Social Media: A Data Mining Perspective" }, { "paperId": "3ff2f80eee5256aeebab1dced07defc73a41bd70", "title": "GraQL: A Query Language for High-Performance Attributed Graph Databases" }, { "paperId": "975668821b26eeae79265135be73337ad7e3be7a", "title": "GoDB: From Batch Processing to Distributed Querying over Property Graphs" }, { "paperId": "1c948041fd224135d780a552ce78be1503ee129d", "title": "Reachability and time-based path queries in temporal graphs" }, { "paperId": "0ba86604228b555475496e200f31878df3aabd6e", "title": "Never-Ending Learning" }, { "paperId": "d6adbafb07c7a5590516404208fe8776f1363eb3", "title": "GraphX: Graph Processing in a Distributed Dataflow Framework" }, { "paperId": "8b48dde8978253066a0b19ab10e3525a2ebbdad4", "title": "Distributed Programming over Time-Series Graphs" }, { "paperId": "92d797407719ecf859431caf943de13ca7c9a38b", "title": "How Well Do Graph-Processing Platforms Perform? 
An Empirical Performance Evaluation and Analysis" }, { "paperId": "6f7cd29a3dfdcb2f6880a022e13054542020c5ce", "title": "Chronos: a graph engine for temporal graph analysis" }, { "paperId": "05370a6cc820ffe5393fcc948d7d600b5949a217", "title": "GoFFish: A Sub-graph Centric Framework for Large-Scale Graph Analytics" }, { "paperId": "ad88f7b783a742ca2fe7b1f0cb54b7bbb40e61ce", "title": "Horton+: A Distributed System for Processing Declarative Reachability Queries over Partitioned Graphs" }, { "paperId": "688ef3247342fb1a2292503125bb3b1ccde6a1ea", "title": "Distributed Query Processing in an Ad-hoc Semantic Web Data Sharing System" }, { "paperId": "f0b21ae25f918818d032d9b6b326f334b3510caa", "title": "S3G2: A Scalable Structure-Correlated Social Graph Generator" }, { "paperId": "282bc59faefb734137d2ea978cb1eb5699e67c7c", "title": "Kineograph: taking the pulse of a fast-changing and connected world" }, { "paperId": "ead04453dc387f849472e1edf4f9ba5bed6571f7", "title": "Graph pattern matching revised for social network analysis" }, { "paperId": "2d867297dfe0d3ce2ed5b1d0f2dff88cac46ee94", "title": "Pregel: a system for large-scale graph processing" }, { "paperId": "1ad8410d0ded269af4a0116d8b38842a7549f0ae", "title": "Measuring User Influence in Twitter: The Million Follower Fallacy" }, { "paperId": "8232d17f69b491d92d944af0fb93b5a10632497c", "title": "Efficient Indices Using Graph Partitioning in RDF Triple Stores" }, { "paperId": "4468768ed9f544a8ddeccb49af3857dfcf5df359", "title": "Time-Aggregated Graphs for Modeling Spatio-temporal Networks" }, { "paperId": "7e2de7f6458ad745bb64daef2e5145ffaf2ad647", "title": "On Rectangular Partitionings in Two Dimensions: Algorithms, Complexity, and Applications" }, { "paperId": "df86d2a8c217776786bac9019d8b20029e4c0dd5", "title": "A Fast and High Quality Multilevel Scheme for Partitioning Irregular Graphs" }, { "paperId": "f4d642d674aa63aafc11562c2557cb4772946147", "title": "Maintaining knowledge about temporal intervals" }, { "paperId": "1d5d5a2b448fa93a131d44edba8bc5483c23ef3e", "title": "Top-<inline-formula><tex-math notation=\"LaTeX\">$k$</tex-math><alternatives> <inline-graphic xlink:href=\"semertzidis-ieq1-2823754.gif\"/></alternatives></inline-formula> Durable Graph Pattern Queries on Temporal Graphs" }, { "paperId": null, "title": "The LDBC social network benchmark (version 0.3.2)" }, { "paperId": "96b95da0ab88de23641014abff2a5c0b5fec00c9", "title": "O Bitcoin Where Art Thou? Insight into Large-Scale Transaction Graphs" }, { "paperId": "865dc9c42de72ae1581e63d52936e6ed5f41d919", "title": "TimeReach: Historical Reachability Queries on Evolving Graphs" }, { "paperId": "d7f449c199ce86d3b8039899caabb31b54ced7f2", "title": "The Parallel BGL : A Generic Library for Distributed Graph Computations" }, { "paperId": null, "title": "A distributed path query engine for temporal property graphs" }, { "paperId": null, "title": "JanusGraph" } ]
34427
en
[ { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Environmental Science", "source": "s2-fos-model" }, { "category": "Engineering", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/021d3d91a15cd3a247255a205d8e4228e04609a0
[]
0.917905
Improving Collaborative Intrusion Detection System Using Blockchain and Pluggable Authentication Modules for Sustainable Smart City
021d3d91a15cd3a247255a205d8e4228e04609a0
Sustainability
[ { "authorId": "2152962051", "name": "R. Gupta" }, { "authorId": "2202681571", "name": "Vedant Chawla" }, { "authorId": "144821177", "name": "R. K. Pateriya" }, { "authorId": "2099551831", "name": "P. Shukla" }, { "authorId": "2213062", "name": "S. Mahfoudh" }, { "authorId": "2195025023", "name": "Syed Bilal Hussain Shah" } ]
{ "alternate_issns": null, "alternate_names": null, "alternate_urls": [ "http://mdpi.com/journal/sustainability", "http://nbn-resolving.de/urn/resolver.pl?urn=urn:nbn:ch:bel-172127" ], "id": "8775599f-4f9a-45f0-900e-7f4de68e6843", "issn": "2071-1050", "name": "Sustainability", "type": "journal", "url": "http://www.e-helvetica.nb.admin.ch/directAccess?callnumber=bel-172127" }
The threat of cyber-attacks is ever increasing in today’s society. There is a clear need for better and more effective defensive tools. Intrusion detection can be defined as the detection of anomalous behavior either in the host or in the network. An intrusion detection system can be used to identify the anomalous behavior of the system. The two major tasks of intrusion detection are to monitor data and raise an alert to the system administrators when an intrusion takes place. The current intrusion detection system is incapable of tackling sophisticated attacks which take place on the entire network containing large number of nodes while maintaining a low number of login attempts on each node in the system. A collaborative intrusion detection system (CIDS) was designed to remove the inefficiency of the current intrusion detection system which failed to detect coordinated distributed attacks. The main problem in the CIDS is the concept of trust. Hosts in the network need to trust the data sent by other peers in the network. To bring in the concept of trust and implement the proof-of-concept, blockchain was used. Pluggable authentication modules (PAM) were also used to track login activity securely before an intruder could modify the login activity. To implement blockchain, an Ethereum-based private blockchain was used.
## sustainability _Article_ # Improving Collaborative Intrusion Detection System Using Blockchain and Pluggable Authentication Modules for Sustainable Smart City **Rajeev Kumar Gupta** **[1]** **, Vedant Chawla** **[2], Rajesh Kumar Pateriya** **[2], Piyush Kumar Shukla** **[3],** **Saoucene Mahfoudh** **[4]** **and Syed Bilal Hussain Shah** **[4,]*** 1 Computer Science and Engineering Department, Pandit Deendayal Energy University, Gandhinagar 382007, India 2 Computer Science and Engineering Department, Maulana Azad National Institute of Technology, Bhopal 462003, India 3 Computer Science & Engineering Department, University Institute of Technology, Rajiv Gandhi Proudyogiki Vishwavidyalaya (Technological University of Madhya Pradesh), Bhopal 462033, India 4 School of Engineering, Computing and Informatics, Dar Al-Hekma University, Jeddah 22246, Saudi Arabia ***** Correspondence: sshah@dah.edu.sa **Citation: Gupta, R.K.; Chawla, V.;** Pateriya, R.K.; Shukla, P.K.; Mahfoudh, S.; Shah, S.B.H. Improving Collaborative Intrusion Detection System Using Blockchain and Pluggable Authentication Modules for Sustainable Smart City. _[Sustainability 2023, 15, 2133. https://](https://doi.org/10.3390/su15032133)_ [doi.org/10.3390/su15032133](https://doi.org/10.3390/su15032133) Academic Editors: Dhananjay Singh, Paulo J. Sequeira Gonçalves, Pradeep Kumar Singh, Pradip Sharma and Pao-Ann Hsiung Received: 21 November 2022 Revised: 27 December 2022 Accepted: 16 January 2023 Published: 23 January 2023 **Copyright:** © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons [Attribution (CC BY) license (https://](https://creativecommons.org/licenses/by/4.0/) [creativecommons.org/licenses/by/](https://creativecommons.org/licenses/by/4.0/) 4.0/). **Abstract: The threat of cyber-attacks is ever increasing in today’s society. There is a clear need for** better and more effective defensive tools. Intrusion detection can be defined as the detection of anomalous behavior either in the host or in the network. An intrusion detection system can be used to identify the anomalous behavior of the system. The two major tasks of intrusion detection are to monitor data and raise an alert to the system administrators when an intrusion takes place. The current intrusion detection system is incapable of tackling sophisticated attacks which take place on the entire network containing large number of nodes while maintaining a low number of login attempts on each node in the system. A collaborative intrusion detection system (CIDS) was designed to remove the inefficiency of the current intrusion detection system which failed to detect coordinated distributed attacks. The main problem in the CIDS is the concept of trust. Hosts in the network need to trust the data sent by other peers in the network. To bring in the concept of trust and implement the proof-of-concept, blockchain was used. Pluggable authentication modules (PAM) were also used to track login activity securely before an intruder could modify the login activity. To implement blockchain, an Ethereum-based private blockchain was used. **Keywords: sustainable smart city; intrusion detection system; collaborative intrusion detection** system; authentication; blockchain **1. Introduction** According to the report of the Indian Computer Emergency Response Team (2021), more than 26,100 websites were victims of cyber-attacks in India in the year 2020 alone. 
This clearly indicates the need for better and more effective defensive tools. The role of blockchain in smart, sustainable cities is vital because it helps to foster the kind of trust necessary for smart cities. Blockchain should serve as the cornerstone for the development of a smart city and is a crucial assurance for the proper design and execution of the management strategy and planning scheme. A smart city naturally combines smart energy, smart transportation, smart government, and other services under the same umbrella. Decentralization and the availability of clear data place strict constraints on the big data service platform. Finding the problematic node among the hundreds of millions of nodes in a network is a time-consuming operation if a network encounters a problem or is the target of an attack. Most current Internet-of-Things networks are centralized. A huge server or centralized cloud is connected to hundreds of millions of nodes, which causes bottlenecks in cost and in computing and storage capacity. Blockchain-distributed technology can guarantee that even if one or more nodes are hacked, the total network data remains trustworthy and secure. Distributed computing makes use of point-to-point computing to handle the hundreds of billions of transactions that the Internet-of-Things generates. This significantly lowers the cost of computing and storage by utilizing the computing and storage capabilities of a large number of idle devices deployed in unused locations. In order to increase the degree of security for secure transmission and safe storage, additional protection mechanisms need to be implemented due to the privacy of the numerous people involved. Blockchain has been shown to be secure, dependable, and suitable for this purpose. The disaster recovery system cannot be enhanced due to the high expense of creating a data center and data storage. Therefore, a key issue at hand is how to lower storage costs while enhancing disaster recovery capabilities. Blockchain, which connects distributed and centralized services, can successfully stop an attack on the vital network infrastructure.

The main objective of intrusion detection is to observe anomalous behavior either in a network or in a host. In the current scenario, IDSs are not sophisticated enough to detect the wide variety of threats. Collaborative intrusion detection has at least some of the capabilities to detect some of those threats and send them for further processing. Based on the deployed location, IDSs can be categorized as host-based intrusion detection systems (HIDS) and network-based intrusion detection systems (NIDS). A HIDS monitors the characteristics of a particular node and the system events in a node for malicious activities, whereas a NIDS monitors the network by placing packet sniffers in the network at various points. These packet sniffers pick up the data and send the data to analysis units, which compare the present state of the system with that of an anomaly. Based on the approach of the detection, an IDS can again be classified into two types: signature-based IDSs and anomaly-based IDSs. Signature-based detection detects an attack by comparing stored signatures with the observed system or network events for possible occurrences. A signature (also known as a rule) is a pattern that describes a known attack or exploit.
Anomaly-based detection works by detecting large deviations between its pre-built normal profile and the observed events, and hence detects suspicious activity. A normal profile is frequently generated by observing the features of ordinary activity over time, and it might represent the regular behavior of users, network connections, and programs [1]. If an abnormal circumstance is discovered, an alert may be triggered. The main disadvantage of IDSs is that they cannot detect sophisticated attacks which take place across an entire network of nodes cumulatively, as they monitor only a single node or a single network. For example, if we have a series of stand-alone IDSs, they are incapable of detecting a distributed attack which takes place across multiple hosts in a network, because they do not have the ability to correlate the events which take place. To address this weakness, which becomes apparent during distributed attacks, the concept of CIDSs was introduced.

CIDSs generally consist of several monitor units and analysis units. The monitor units record the information and send it to the analysis units, which process the information and make decisions based on it. Based on architectural differences, CIDSs can again be classified into three categories as shown in Figure 1, namely: centralized, decentralized, and distributed [2]. A centralized CIDS is the most basic and simplest version. However, it is prone to a single point of failure (SPoF) and a performance bottleneck in cases of network overload. In distributed CIDSs, the SPoF disadvantage is somewhat removed, but drawbacks remain: information is lost at each level of the hierarchy, making the system somewhat unreliable. In a decentralized CIDS, each node behaves as both a monitor unit and an analysis unit. It is a P2P architecture which facilitates data sharing, correlation, and aggregation of data across all nodes. However, CIDSs also have disadvantages. The network cost incurred is very high, as all nodes are in constant communication with each other. Furthermore, the idea of trust is very important among these nodes. To remove the trust issue among the nodes, the concept of blockchain was introduced. This problem, along with CIDSs, is discussed in a detailed manner in the later stages. Figure 1 illustrates the architecture of a CIDS.

**Figure 1. Overview of a CIDS architecture.**

The main contributions of this paper are:

• The proposed system will be able to detect coordinated distributed attacks.
• Hosts in the network need to trust the data sent by other peers in the network.
To bring in the concept of trust and implement the proof-of-concept, blockchain was used.
• Pluggable authentication modules (PAM) were also used to track login activity securely before an intruder could modify the login activity.
• To implement blockchain, an Ethereum-based private blockchain was used.

This paper is organized as follows: Section 2 discusses the basics of blockchain along with the different components of blockchain, Section 3 discusses different existing intrusion detection systems, and Section 4 explains the proposed improved collaborative IDS which uses blockchain and pluggable authentication modules. Section 5 discusses the result analysis and Section 6 summarizes the entire work and gives directions for future work.

**2. Blockchain**

Blockchain can be defined as a distributed peer-to-peer network of blocks. Each block is linked to the previous block using a cryptographic hash. Blockchain technology has been applied to several fields such as healthcare, education, energy, etc. There are three types of blockchain ledgers which are currently in use: public, consortium, and private. Public blockchains (such as Ethereum) are accessible to anyone with internet access, and anyone can read the blockchain and maintain the blockchain ledger, i.e., there is no membership mechanism in place. Consortium blockchains (such as the Hyperledger Fabric) are maintained by an established body which grants access to others and has a pre-defined consortium of peers maintaining the chain.
Private blockchains are maintained by one entity that provides access to others, and there is no consensus process.

_2.1. Block Structure_

The most basic definition of blockchain is that it is a chain of blocks, with each block connected to the one before it with the help of a mathematical relationship. The block in itself is a container of data. The main premise underlying blockchain is that each block contains a unique self-identifying hash that ensures the chain's integrity. The hashes of the block index, data, timestamp, and, of course, the previous block's hash make up this self-identifying hash. The block also contains a record of the transactions, called a ledger, which took place during the time of the block's production. As each block references the one before it, there is a record of all transactions that took place prior to the current block's generation. Figure 2 shows the structure of the blockchain.

**Figure 2. Structure of a blockchain [3].**
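To make the self-identifying hash described above concrete, here is a minimal Python sketch. The field set (index, timestamp, data, previous hash) follows the description in this section, but the serialization and the choice of SHA-256 are illustrative assumptions, not the exact format of any particular chain.

```python
import hashlib
import json
import time

def block_hash(index: int, timestamp: float, data: str, prev_hash: str) -> str:
    """Self-identifying hash over the block's index, timestamp, data
    and the previous block's hash, as described in Section 2.1."""
    payload = json.dumps([index, timestamp, data, prev_hash])
    return hashlib.sha256(payload.encode()).hexdigest()

# A toy two-block chain: tampering with the genesis block's data would
# change its hash and break the prev_hash link stored in the next block.
genesis = {"index": 0, "timestamp": time.time(),
           "data": "genesis", "prev_hash": "0"}
genesis["hash"] = block_hash(0, genesis["timestamp"], "genesis", "0")

blk1 = {"index": 1, "timestamp": time.time(),
        "data": "tx: A->B 5", "prev_hash": genesis["hash"]}
blk1["hash"] = block_hash(1, blk1["timestamp"], blk1["data"], blk1["prev_hash"])
print(blk1["hash"])
```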
_2.2. Consensus_

Consensus algorithms allow the participants to reach an agreement about the state of the network without the presence of a central authority. Any blockchain model is only as effective as its consensus model. There are two major consensus algorithms in the blockchain world: proof-of-work and proof-of-stake. The proof-of-work algorithm is implemented by Bitcoin, whereas the proof-of-stake algorithm is implemented by Ethereum and is currently in deployment. Proof-of-work is founded on the premise that a participant establishes its identity by demonstrating that it worked. In the case of Bitcoin, each participant's purpose is to find a hash value that is less than a number set by the network as the difficulty level. This is an example of a computational puzzle where a brute-force, guess-and-check method is the most effective way to solve it. This process, known as mining, ensures that no single player has an edge in creating the next block. As a result, miners are not required to provide any authentication or a-priori knowledge. The chance of a block being modified successfully diminishes exponentially with the size of the blockchain. Proof-of-work, on the other hand, is subject to the 51 percent attack, in which a coalition with more than half of the possible mining power can insert blocks into the blockchain. To counter this, Ethereum built a new consensus algorithm called proof-of-stake. Proof-of-stake relies on a group of validators with a financial stake in the network voting on and proposing the next block in turn. The method chooses validators for block production in a pseudo-random manner, preventing advance knowledge of when a specific participant would create a block. The quantity of cryptocurrency, or stake, that a participant has determines his or her chances of being chosen as a validator. While there are several drawbacks to this method of implementation, it does address the 51 percent attack problem which proof-of-work had, and it is currently being developed by Ethereum.
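As a toy illustration of the brute-force, guess-and-check mining described above, the sketch below searches for a nonce. Requiring a fixed number of leading zero hex digits is a simplified stand-in for Bitcoin's actual "hash below the target" comparison, used here only to keep the example short.

```python
import hashlib

def mine(block_header: str, difficulty: int) -> int:
    """Brute-force search for a nonce whose SHA-256 hash starts with
    `difficulty` zero hex digits (a simplified difficulty target)."""
    nonce = 0
    target_prefix = "0" * difficulty
    while True:
        digest = hashlib.sha256(f"{block_header}{nonce}".encode()).hexdigest()
        if digest.startswith(target_prefix):
            return nonce
        nonce += 1

# Each extra zero digit multiplies the expected work by 16, which is why
# guess-and-check dominates: there is no shortcut to a valid nonce.
print("found nonce:", mine("example-header", difficulty=4))
```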
CIoTA can also distinguish between uncommon benign events and malevolent activity by harnessing the knowledge of the crowd. One downside of CIoTA is that each IoT model/firmware must have its own chain published. As a result, CIoTA is best suited to large industrial settings and smart cities in its current state. We intend to develop CIoTA in the future to support a variety of frameworks and increase its detection capability, for example, by investigating API flows rather than lower-level control flows. Ide et al. [8] presented a system (CollabDict) based on blockchain and the Gaussian mixture learning algorithm for collaborative anomaly detection. The major challenges which the author faced here were building the consensus, validating the data, and security of the data. However, the performance of the CollabDict is better than the fuses multitask learning algorithm. Kumari et al. [9] primarily examine the issue of harmful behaviors occurring in blockchain networks, and then attempt to remedy the problem using the clustering protocol. As a result, the authors keep a check on each node’s behavior pattern. Had the authors tried to perform manually for each node, it would have been practically impossible to do so for all the nodes. The K-means clustering approach was utilized to perform the clustering. However, with that algorithm, considerable improvisation was required. As a result, an adapted version of the k-means method was used. This in turn made the blockchain safer against any unlawful or unusual activity. However, the major disadvantage is that the authors used the mean value for each cluster, thus an inaccurate cluster head could be selected. Dey [10] employs game theory and supervised machine learning techniques to identify anomalous player behavior in a blockchain network. The author provides the probability for each attack based on the value of each transaction; however, the implementation was still in its early phases and hence required a lot of improvements in the defense mechanism. Signorini et al. [11] proposed BAD (blockchain anomaly detection). BAD, in particular, enables the detection of abnormal transactions and the prevention of their propagation. ----- _Sustainability 2023, 15, 2133_ 6 of 14 While forks can occur naturally in the blockchain life cycle owing to network delays, they can also be generated purposefully by attackers and used to commit fraud. Malicious acts are dispersed throughout the chain. By gathering data, BAD enables the avoidance of repeated attacks and builds a tamper-proof threat database that is distributed (thus preventing any single point of failure), trusted (the majority of the network collects and verifies any behavioral data), and private. Kanth et al. [5] implemented a collaborative intrusion detection system capable of recording login activity via a private blockchain-based ledger and hence it is immutable. In initial stages the authors were successfully capable of proving that blockchain-based CIDSs were a viable method to detect doorknob-rattling attacks and hence can prevent any act of an intruder trying to modify the activity records. The author also uses CPU utilization as a metric to accurately determine whether an intrusion is taking place or not. Steichen et al. [12] discusses security issues regarding private or consortium blockchains. In this paper, the authors discussed how an attacker can target individual nodes since the number of nodes undertaking blockchain-related tasks is generally restricted. 
**3. Literature Survey**

Kanth et al. [5] implemented a collaborative intrusion detection system capable of recording login activity via a private blockchain-based ledger, which is hence immutable. In the initial stages, the authors successfully proved that blockchain-based CIDSs are a viable method to detect doorknob-rattling attacks and can hence prevent an intruder from modifying the activity records. The author of [6] uses CPU utilization as a metric to accurately determine whether an intrusion is taking place or not.

Golomb et al. [7] introduce CIoTA, which is a blockchain-based solution for collaborative anomaly detection across a large number of IoT devices. While staying resilient to adversarial attacks, CIoTA continuously trains an anomaly detection model. CIoTA can also distinguish between uncommon benign events and malevolent activity by harnessing the knowledge of the crowd. One downside of CIoTA is that each IoT model/firmware must have its own chain published. As a result, CIoTA is best suited to large industrial settings and smart cities in its current state. The authors intend to develop CIoTA further to support a variety of frameworks and increase its detection capability, for example, by investigating API flows rather than lower-level control flows.

Ide et al. [8] presented a system (CollabDict) based on blockchain and the Gaussian mixture learning algorithm for collaborative anomaly detection. The major challenges the authors faced were building consensus, validating the data, and securing the data. However, the performance of CollabDict is better than that of the fused multitask learning algorithm. Kumari et al. [9] primarily examine the issue of harmful behaviors occurring in blockchain networks, and then attempt to remedy the problem using a clustering protocol. The authors keep a check on each node's behavior pattern; doing this manually for every node would have been practically impossible. The K-means clustering approach was utilized to perform the clustering; however, that algorithm required considerable adaptation, so an adapted version of the K-means method was used. This in turn made the blockchain safer against any unlawful or unusual activity. However, the major disadvantage is that the authors used the mean value for each cluster, so an inaccurate cluster head could be selected. Dey [10] employs game theory and supervised machine learning techniques to identify anomalous player behavior in a blockchain network. The author provides the probability of each attack based on the value of each transaction; however, the implementation was still in its early phases and hence required many improvements in the defense mechanism.

Signorini et al. [11] proposed BAD (blockchain anomaly detection). BAD, in particular, enables the detection of abnormal transactions and the prevention of their propagation. While forks can occur naturally in the blockchain life cycle owing to network delays, they can also be generated purposefully by attackers and used to commit fraud, and malicious acts are dispersed throughout the chain. By gathering data, BAD enables the avoidance of repeated attacks and builds a tamper-proof threat database that is distributed (thus preventing any single point of failure), trusted (the majority of the network collects and verifies any behavioral data), and private.

Steichen et al. [12] discuss security issues regarding private or consortium blockchains. In this paper, the authors discussed how an attacker can target individual nodes, since the number of nodes undertaking blockchain-related tasks is generally restricted. As a result, ChainGuard, which is built as an SDN module and identifies and intercepts excessively large flows at the network level, was proposed in this study. ChainGuard's implementation specifics were discussed and trials were carried out. The tests conducted by the authors indicate that ChainGuard can effectively resist DoS and DDoS assaults while allowing a restricted number of packets to cross the SDN network, and hence permits communication between benign blockchain nodes to continue in the case of an attack. Zhu et al. [13] discussed a novel approach to achieving the controllable blockchain CBDM, which is used to obtain storage efficiency in the cloud computing network and to reduce the risk of malicious attacks on the blockchain. Though not tested in a real environment, it provides significant scope for development of the prototype. Hu et al. [14] discussed the multi-microgrid system, for which they create a collaborative intrusion detection (CID) paradigm based on blockchain technology. It stores the CID goal in a blockchain and uses a consensus mechanism to create a multi-microgrid system correlation model. It also reduces the false-negative rate and considerably improves the DPoS consensus algorithm by continuously using multiple patterns. However, the major drawback here is that this method does not provide a higher level of true-positive rates and is also limited to fewer types of attacks. N. Alexopoulos et al. [15] use blockchain technology to improve CIDSs and also provide a combined architecture based on the CIDS and blockchain. This paper proposes a model which considerably reduces the overhead and the volume of the blockchain. However, the authors provided only a prototype, or a high-level view, and the model was not tested in a real environment. Li et al. [16] focused on signature-based collection in their study and proposed CBSigIDS, a general framework for a collaborative blockchained, signature-based IDS that used blockchains to help gradually share and construct a trusted signature database, inspired by previous blockchain applications. It improved the effectiveness of signature-based IDSs. However, a major drawback is that it was prone to advanced attacks, and the need for verification and updates in the blockchain resulted in diminishing performance of the overall network.

In recent years, many other IDSs have been proposed [17–21]. These IDSs can be used in any domain to identify intrusions or abnormalities, which can then lead to the development of a secure solution for smart cities. In smart cities, everything is connected to the internet, so a smart IDS can play a significant role in providing security for this created network. Aloqaily et al. [22] proposed an IDS for securing transportation. This IDS will help in vehicular service management to secure the network from attacks and ensure quality-of-service availability. Elrawy et al. [23] discussed the role of IDSs and the IoT in the smart environment. This article first discusses various existing works that have contributed to the smart environment using IoT sensors, and then discusses the existing IDSs used to provide security in an IoT context. Elsaeidy et al. [24] introduced a smart IDS to prevent distributed denial of service (DDoS) attacks in smart cities. This article used the restricted Boltzmann machines (RBMs) technique to design the IDS. Saba et al. [25]
The current IDS is not sophisticated enough to detect distributed, parallel attacks that take place throughout the nodes in the network instead of a single node. In the case of CIDSs, the ability to correlate events is crucial. The events occurring across all nodes in the network must be aggregated for further processing and raising of alerts. The concept of trust is crucial among the nodes. While the discussed approaches have their advantages, there was clearly a lack of scalable architecture in the case of the CIDS. The main aim of this paper is to demonstrate an approach through which a scalable architecture could be developed, and trust could be established among the nodes in the CIDS architecture. Blockchain was proposed to solve the trust issue in the case of the CIDS. **4. Proposed Methodology and Implementation** In the case of the doorknob-rattling scenario, there is a clear need for a CIDS as demonstrated by Alexopolous et al. [15]. The doorknob-rattling scenario can be further explained. In this case, suppose 50 stand-alone nodes in the network are using IDSs and tracking login attempts. The threshold for an individual machine could be set to 4. Instead of making 4 incorrect attempts on each node, the attacker would make a series of 2 incorrect attempts until he successfully logs on to a particular node. To utilize this, the attacker _Sustainability 2023, 15, x FOR PEER REVIEW uses the common list of user-ids and passwords available. If the system is using an IDS,8 of 17_ these activities would go unnoticed. However, in the case of a CIDS, these activities would clearly be noticed, and a clear spike would be seen as in Figure 3. **Figure 3. Figure 3.Doorknob-rattling attack [5]. Doorknob-rattling attack [5].** Table 2 indicates the parameters necessary for the building of a CIDS system. While Table 2 indicates the parameters necessary for the building of a CIDS system. While these are necessary, some of them are complementary to each other. That is, while satisfying these are necessary, some of them are complementary to each other. That is, while satis one of the requirements, there would be a high chance of violating the other. For example, fying one of the requirements, there would be a high chance of violating the other. For accountability means disclosing some of the information about the node in the system example, accountability means disclosing some of the information about the node in the while that would clearly defy the rules of privacy. Hence, there are clear trade-offs between system while that would clearly defy the rules of privacy. Hence, there are clear trade-offs one and another. between one and another. **Table 2. Requirements of a CIDS system [26].** Accountability Nodes must be responsible for the actions taken by them. Integrity Data cannot be manipulated once entered into the system. Resilience The system should be free from SPoF. ----- _Sustainability 2023, 15, 2133_ 8 of 14 **Table 2. Requirements of a CIDS system [26].** Accountability Nodes must be responsible for the actions taken by them. Integrity Data cannot be manipulated once entered into the system. Resilience The system should be free from SPoF. Consensus Nodes in the system must trust the data sent by other nodes. Scalability The system must be able to scale as the number of nodes increases. Overhead The overhead cost must be minimized to achieve scalability. Privacy Privacy must be a concern for the participants in the system. 
The main challenge here was to develop a CIDS framework which would decrease the overhead costs and would be scalable as the size of the network increases. Blockchain was used to implement the proof-of-concept described earlier; a private Ethereum-based ledger was used in our case. In order to log the login attempts made, pluggable authentication modules were used, so that these attempts could be securely recorded and then transferred to the blockchain. For this purpose, pam_exec.so was configured to run a shell script, login_success.sh, and pam_exec passed the login information to the shell script as environment variables. These variables were then written to log files, which were stored automatically. To send data to the blockchain, the cron utility in Linux scheduled the transfer of data from the log files directly to the blockchain by running a Python script at a continuous interval of 5 min. If a login was successful, data from the logs were sent to the blockchain immediately. This is because a scenario was imagined in which the attacker gains access to these log files and could tamper with or remove them in order to erase the proof of his presence on the node. To simulate an attack, continuous login attempts were made from different machines using SSH (secure shell) to check whether the attempts were being logged. To make continuous attacks from another host, cronjobs were again used to call shell scripts at an interval of 5 min; each script would make a series of wrong attempts trying to log in as different users on the target machine. Batching records over these 5-min intervals also reduces the overhead cost incurred every time a transaction is made on the blockchain. To make the system more secure, another parameter was used: the CPU utilization of the target machine. There may be a case where an attacker, after gaining access to the system, tries to run malicious programs. To detect this, CPU utilization was also logged and stored in log files. To measure CPU utilization, the command used was top | head -3 | tail -1, which gives the CPU utilization at a given instant. Our system was designed in such a way that if the CPU utilization exceeded a given threshold, this would be logged in the log files. After being logged, these files would be sent to the blockchain at an interval of every 5 min. The spike in the gas cost, which can be seen through the Ganache UI, would warn the system administrator of possible attacks taking place on the node.
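A minimal sketch of this CPU-utilisation logging step might look as follows. The batch-mode flags (-bn1), the parsing of top's idle field, and the file name are assumptions for illustration (the paper only states the pipeline top | head -3 | tail -1 and a 50% threshold), and top's output format varies with version and locale:

```python
import re
import subprocess
from datetime import datetime

THRESHOLD = 50.0      # percent; the threshold used on the paper's test machine
LOG_FILE = "cpu.log"  # file later shipped to the blockchain by the cron job

def cpu_utilization() -> float:
    """Read top's %Cpu(s) summary line (batch mode for non-interactive use)
    and derive utilization as 100 minus the reported idle percentage."""
    line = subprocess.run("top -bn1 | head -3 | tail -1", shell=True,
                          capture_output=True, text=True).stdout
    idle = float(re.search(r"([\d.]+)\s*id", line).group(1))
    return 100.0 - idle

util = cpu_utilization()
if util > THRESHOLD:  # only above-threshold readings are written out
    with open(LOG_FILE, "a") as f:
        f.write(f"{datetime.now().isoformat()} cpu={util:.1f}%\n")
```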
**5. Results Discussion**

_5.1. CPU Utilization_

A simple experiment was performed to confirm that our CIDS setup would be able to precisely record CPU utilization information. The system would record CPU utilization every minute and store the result; if the utilization exceeded a particular threshold, for example 50%, it would be recorded in a log file named cpu.log. To spike the CPU utilization, another program (prime) was run in the background, utilizing 10 threads, so that the results could be logged to the cpu.log file. Based on the usage of the current system, a threshold of 50% was set to trigger the sending of data to the log files. This threshold can be modified according to the usage of the system and user needs. Figure 4 illustrates the cpu.log recording all CPU utilization.

**Figure 4. Snapshot from cpu.log recording all CPU utilization.**

_5.2. Login Attempts_

After the CIDS was set up, the main objective was to capture different authentication requests, including the login attempts. Figure 5 shows the output of the log file when a person tries to become a super-user. This is determined by the $PAM_RUSER field because both sudo and su can change the context of the user.

**Figure 5. Output of auth.log when someone tries to become a super-user.**

Table 3 illustrates the results when the user tries to become a super-user using the sudo and su commands.

**Table 3. The attempts when the user tries to become a super-user using the sudo and su commands.**

| $PAM_USER | $PAM_TYPE | $PAM_SERVICE | $PAM_RUSER | Date | Success/Failure |
| --- | --- | --- | --- | --- | --- |
| vedant | auth | sudo | vedant | Tue 27 May 01:56:00 | Success |
| vedant | auth | su | vedant | Tue 27 May 01:59:00 | Success |
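Each such record is what the cron job later pushes to the ledger. The paper does not print the exact line format written by login_success.sh, so the key=value layout below is a hypothetical assumption; the PAM item names themselves ($PAM_USER, $PAM_TYPE, $PAM_SERVICE, $PAM_RUSER, $PAM_RHOST) are the ones shown in Tables 3 and 4:

```python
# Hypothetical log layout, one record per line, e.g.:
#   PAM_USER=vedant PAM_TYPE=auth PAM_SERVICE=sshd PAM_RHOST=192.168.87.3 RESULT=success
FIELDS = ["PAM_USER", "PAM_TYPE", "PAM_SERVICE", "PAM_RUSER", "PAM_RHOST", "RESULT"]

def parse_auth_log(path: str) -> list[dict]:
    """Turn the shell script's key=value lines into structured records,
    ready to be serialized into a blockchain transaction's data field."""
    records = []
    with open(path) as f:
        for line in f:
            rec = dict(tok.split("=", 1) for tok in line.split() if "=" in tok)
            records.append({k: rec.get(k, "") for k in FIELDS})
    return records

for record in parse_auth_log("auth.log"):
    print(record)
```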
Table 4 shows the use of an external host making use of the ssh command to log onto the CIDS remotely. This example makes use of the $PAM_RHOST field, containing information about the requesting host. In the current example, an external agent (192.168.87.4) successfully logged into the CIDS node as 'vedant' via the ssh vedant@192.168.87.3 command. This is a crucial use case, as the doorknob-rattling attack typically involves remote users attempting to penetrate the target network [27].

**Table 4. Login attempt evidence.**

| $PAM_USER | $PAM_TYPE | $PAM_SERVICE | $PAM_RHOST | Date | Success/Failure |
| --- | --- | --- | --- | --- | --- |
| vedant | auth | sshd | 192.168.87.3 | Wed 27 May 01:56:23 | Success |

Using the current method to record external login attempts, as evidenced by Table 4, a simulated doorknob-rattling attack was tried against one of the machines in the network. During the test, the intruder tried using different user accounts on a single machine. The output from the log files is shown in Figure 6. The attacking machine tried to penetrate the machine thrice using a secure shell (SSH) into each of the user accounts. Each of these attempts was recorded and sent to the blockchain as transactions [28–30].

**Figure 6. Doorknob-rattling attack in a ledger.**

Each transaction had a varying gas cost based on the number of attempts the intruder made to penetrate the machine. All records from the log files were pushed onto the blockchain either at the end of a specified time interval (in the case where all the attempts during the interval were failed login attempts) or immediately (if there were any successful logins) [31]. A brief summary of the doorknob-rattling attack events in the case of a single attacker is shown in Table 5. The transaction which took place in Ganache can be seen in Figure 7.

**Table 5. Summary of doorknob-rattling attack.**

| User | IP Address of Request | Number of Login Attempts |
| --- | --- | --- |
| user1 | 192.168.87.3 | 1 |
| user2 | 192.168.87.3 | 2 |
| user6 | 192.168.87.3 | 3 |
| user7 | 192.168.87.3 | 2 |

There was a total of eight login attempts over four different user accounts. Figure 7 shows the transaction which was submitted to the blockchain, together with the transaction hash and all the other details.

**Figure 7. Transactional data which were sent to the blockchain.**

Figure 8 shows a list of all the blocks which were mined during the entire process. It also shows the gas cost incurred during the entire attack.

**Figure 8. List of all the blocks mined and the respective gas costs.**
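As a rough sketch of how such a transaction might be submitted from the Python script, the snippet below embeds a batch of log lines in the data field of a self-transfer against a local Ganache node and reads back the gas used; since calldata is priced per byte, larger batches cost more gas, which is the signal watched in the Ganache UI. The endpoint, account handling, and the web3.py v6 snake_case API are assumptions for illustration, not details taken from the paper:

```python
from web3 import Web3  # pip install web3 (v6-style API assumed)

# Ganache's default JSON-RPC endpoint; dev-chain accounts are unlocked.
w3 = Web3(Web3.HTTPProvider("http://127.0.0.1:8545"))
account = w3.eth.accounts[0]

def push_log_batch(lines: list[str]) -> int:
    """Embed a batch of log lines in a transaction's data field and return
    the gas used; more attempts in a batch means more bytes and more gas."""
    tx_hash = w3.eth.send_transaction({
        "from": account,
        "to": account,  # self-transfer; only the embedded data matters
        "data": w3.to_hex(text="\n".join(lines)),
    })
    receipt = w3.eth.wait_for_transaction_receipt(tx_hash)
    return receipt["gasUsed"]

print(push_log_batch(["PAM_USER=user1 PAM_SERVICE=sshd RESULT=failure"]))
```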
The timestamps of these transactions show that the attacks were permanently recorded in the CIDS distributed ledger. The given sequence of events and the protection of the related data show that the nodes can be protected, and the system administrator would be made aware of the possible intrusion immediately. This also proves that Ethereum and Ganache work smoothly with Linux, and that the integration between them to achieve data ingest for intrusion detection is successful.

_5.3. Detecting an Anomaly: Thwarting a Doorknob-Rattling Attack_

The main aim of our CIDS architecture was to record data that could be used to detect anomalies, because data collection on its own is not enough: the data need to be processed and analysed to find potential threats. Subroutines were created and scheduled to run at specific time intervals using the cron utility, which made the traffic steady over a period of time. The cron utility in Linux was used to schedule the bash script anomaly.sh, which ensures that 20 to 30 login attempts are made by random users on the virtual machine at an interval of every five minutes. This was continued for several iterations. All these attacks were logged on our target machine and the data were sent to the blockchain. The main idea behind this approach was that an increase in the number of transactions would mean a higher gas cost. The gas cost of each transaction is then used to analyse whether an attack has actually taken place or not. Table 6 shows the transactions which took place during the attack and the total gas cost incurred. There were three instances where the number of attacks crossed the threshold; the time instances were 10:40, 11:20, and 11:45.
**Table 6. Transactions and their respective gas costs in an interval of five minutes.**

| Time | Number of Transactions | Total Gas Cost |
| --- | --- | --- |
| 10:20 | 1 | 22,280 |
| 10:25 | 1 | 22,280 |
| 10:40 | 18 | 44,184 |
| 10:45 | 1 | 22,280 |
| 10:50 | 4 | 26,152 |
| 11:00 | 4 | 23,576 |
| 11:15 | 2 | 23,576 |
| 11:20 | 8 | 26,152 |
| 11:25 | 1 | 22,280 |
| 11:30 | 1 | 22,280 |
| 11:35 | 26 | 54,488 |
| 11:45 | 1 | 22,280 |
| 11:50 | 4 | 26,152 |
| 11:55 | 1 | 22,280 |

The transactions were then plotted onto a graph with the number of transactions on the primary axis and the number of attempts on the secondary axis. The peaks show where the number of transactions crossed the threshold, as shown in Figure 9. Figure 10 shows the bar chart of gas costs in various time frames. Although the inability to detect all the attacks was problematic, this statistical method did correctly identify that there was an anomaly, which would lead a system administrator to investigate further.

**Figure 9. Graph showing number of attempts and corresponding gas costs at an interval of five minutes.**

**Figure 10. Bar chart of gas costs in various time frames.**
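A minimal sketch of this interval analysis is given below; the alert thresholds are illustrative assumptions (the paper does not state numeric alerting values), and the sample rows mirror a few entries of Table 6:

```python
# (interval start, number of transactions, total gas cost), as in Table 6.
intervals = [
    ("10:20", 1, 22_280), ("10:40", 18, 44_184),
    ("11:20", 8, 26_152), ("11:35", 26, 54_488), ("11:45", 1, 22_280),
]

COUNT_THRESHOLD = 5     # assumed alert threshold per five-minute interval
BASELINE_GAS = 22_280   # gas cost of a quiet interval (see Table 6)

for time, n_tx, gas in intervals:
    # A spike in the transaction count, or in the gas cost relative to the
    # quiet baseline, flags the interval for the system administrator.
    if n_tx > COUNT_THRESHOLD or gas > 1.5 * BASELINE_GAS:
        print(f"anomaly at {time}: {n_tx} transactions, {gas} gas")
```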
**6. Conclusions and Future Work**

The sharing of information between the nodes of a CIDS is extremely crucial in order to protect the system from attacks as a whole. Information sharing is especially important in a scenario where distributed attacks take place increasingly often. A CIDS, along with blockchain, appears to be highly suitable for the ingesting of data, especially in the case of building a smart sustainable city. This paper showed that commercial and open-source blockchain technologies may be used to create an information-sharing system that records both doorknob-rattling attacks (using pluggable authentication modules) and CPU utilization data as blockchain transactions. It also proves that a blockchain system can be used as a logging mechanism for multiple machines and hence can be used to aggregate data which can later be processed for intrusion detection. This research provides positive indications that blockchain technology could be used to solve the intrusion detection problem and build a CIDS at a very large scale. The most significant contribution made in this paper is that it provides an end-to-end proof-of-concept for a CIDS. It also showed, at an initial level, that attacks or intrusions can be detected using blockchain as the backbone of the CIDS framework. However, there is a need to consider the cost of setting up such a system and how sound it is. The proof-of-concept discussed in the literature was not implemented at an end-to-end level; the main aim of this paper was to build an IDS which could potentially be used to detect system abnormalities and intrusions. Several avenues are left to explore in further work. The main aim going forward would be to create a large-scale system which could detect anomalies, block them, and trigger alerts to the system administrator. Further research is also required to see how the overhead cost of running the blockchain client should be handled. Currently, Ganache (a private blockchain running at a particular node) is used for testing and carrying out transactions on the blockchain. Public or other test nets could be used to carry out system tests.

**Author Contributions:** Methodology, V.C.; Formal analysis, S.M.; Investigation, R.K.P.; Resources, R.K.G.; Writing—original draft, S.B.H.S.; Writing—review & editing, P.K.S. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Data Availability Statement:** The data that support the findings of this study are available on request from the corresponding author.

**Conflicts of Interest:** The authors declare no conflict of interest.

**References**

1. Jose, S. A Survey on Anomaly Based Host Intrusion Detection System. *J. Phys. Conf. Ser.* 2018, 1000, 1–11. https://doi.org/10.1088/1742-6596/1000/1/012049
2. Li, W. Surveying Trust-Based Collaborative Intrusion Detection: State-of-the-Art, Challenges and Future Directions. *IEEE Commun. Surveys Tuts.* 2021, 280–305. https://doi.org/10.1109/COMST.2021.3139052
3. What Is Hashing? Step-by-Step Guide-Under Hood of Blockchain. August 2017. Available online: https://blockgeeks.com/guides/what-is-hashing/ (accessed on 2 July 2022).
4. Salam, A.-E.; Mohammed, A.; Yousef, S.; Selvakumar, M.; Iznan, H. Intrusion Detection Systems Using Blockchain Technology: A Review, Issues and Challenges. *Comput. Syst. Sci. Eng.* 2021, 40, 87–112. https://doi.org/10.32604/csse.2022.017941
5. Kanth, V.; McAbee, A.; Tummala, M.; McEachen, J. Collaborative Intrusion Detection Leveraging Blockchain and Pluggable Authentication Modules. In Proceedings of the 53rd Hawaii International Conference on System Sciences, Maui, HI, USA, 7–10 January 2020.
6. Dreger, H.; Feldmann, A.; Paxson, V.; Sommer, R. Predicting the Resource Consumption of Network Intrusion Detection Systems. In International Workshop on Recent Advances in Intrusion Detection, 2008. Available online: https://link.springer.com/chapter/10.1007/978-3-540-87403-4_8 (accessed on 2 July 2022).
7. Golomb, T.; Mirsky, Y.; Elovici, Y. CIoTA: Collaborative IoT Anomaly Detection via Blockchain. arXiv 2018, arXiv:1803.03807.
8. Idé, T. Collaborative Anomaly Detection on Blockchain from Noisy Sensor Data. In Proceedings of the 2018 IEEE International Conference on Data Mining Workshops (ICDMW), Singapore, 17–20 November 2018; pp. 120–127.
9. Kumari, R.; Catherine, M. Anomaly Detection in Blockchain Using Clustering Protocol. *Int. J. Pure Appl. Math.* 2018, 118, 391–396.
10. Dey, S. Securing Majority-Attack in Blockchain Using Machine Learning and Algorithmic Game Theory: A Proof of Work. In Proceedings of the 2018 10th Computer Science and Electronic Engineering (CEEC), Colchester, UK, 19–21 September 2018; pp. 7–10.
11. Signorini, M.; Pontecorvi, M.; Kanoun, W.; Di Pietro, R. ADvISE: Anomaly Detection Tool for Blockchain Systems. In Proceedings of the 2018 IEEE World Congress on Services (SERVICES), San Francisco, CA, USA, 2–7 July 2018; pp. 65–66.
12. Steichen, M.; Hommes, S.; State, R. ChainGuard—A Firewall for Blockchain Applications Using SDN with OpenFlow. In Proceedings of the 2017 Principles, Systems and Applications of IP Telecommunications (IPTComm), Chicago, IL, USA, 25–28 September 2017; pp. 1–8.
13. Zhu, L.; Wu, Y.; Gai, K.; Choo, K.R. Controllable and Trustworthy Blockchain-Based Cloud Data Management. *Future Gener. Comput. Syst.* 2019, 91, 527–535. https://doi.org/10.1016/j.future.2018.09.019
14. Hu, B.; Zhou, C.; Tian, Y.C.; Qin, Y.; Junping, X. A Collaborative Intrusion Detection Approach Using Blockchain for Multimicrogrid Systems. *IEEE Trans. Syst. Man Cybern. Syst.* 2019, 49, 1–11. https://doi.org/10.1109/TSMC.2019.2911548
15. Alexopoulos, N.; Vasilomanolakis, E.; Ivánkó, N.R.; Mühlhäuser, M. Towards Blockchain-Based Collaborative Intrusion Detection Systems. In Proceedings of the Critical Information Infrastructures Security 12th International Conference, CRITIS 2017, Lucca, Italy, 8–13 October 2017; D'Agostino, G., Scala, A., Eds.; Springer: Cham, Switzerland, 2018.
16. Vasilomanolakis, E.; Karuppayah, S.; Mühlhäuser, M.; Fischer, M. Taxonomy and Survey of Collaborative Intrusion Detection. *ACM Comput. Surv.* 2015, 47, 1–33. https://doi.org/10.1145/2716260
17. Liang, C.; Shanmugam, B.; Azam, S.; Karim, A.; Islam, A.; Zamani, M.; Kavianpour, S.; Idris, N.B. Intrusion Detection System for the Internet of Things Based on Blockchain and Multi-Agent Systems. *Electronics* 2020, 9, 1120. https://doi.org/10.3390/electronics9071120
18. Ghaleb, F.; Saeed, F.; Al-Sarem, M.; Ali Saleh Al-rimy, B.; Boulila, W.; Eljialy, A.E.M.; Aloufi, K.; Alazab, M. Misbehavior-Aware On-Demand Collaborative Intrusion Detection System Using Distributed Ensemble Learning for VANET. *Electronics* 2020, 9, 1411. https://doi.org/10.3390/electronics9091411
19. Radoglou-Grammatikis, P.I.; Sarigiannidis, P.G.; Efstathopoulos, G.; Panaousis, E.A. A Novel Multivariate Intrusion Detection System for Smart Grid. *Sensors* 2020, 20, 5305. https://doi.org/10.3390/s20185305
20. Iwendi, C.; Anajemba, J.H.; Biamba, C.; Ngabo, D. Security of Things Intrusion Detection System for Smart Healthcare. *Electronics* 2021, 10, 1375. https://doi.org/10.3390/electronics10121375
21. Kotecha, K.; Verma, R.; Rao, P.V.; Prasad, P.; Mishra, V.K.; Badal, T.; Jain, D.; Garg, D.; Sharma, S. Enhanced Network Intrusion Detection System. *Sensors* 2021, 21, 7835. https://doi.org/10.3390/s21237835
22. Aloqaily, M.; Otoum, S.; Al Ridhawi, I.; Jararweh, Y. An Intrusion Detection System for Connected Vehicles in Smart Cities. *Ad Hoc Netw.* 2019, 90, 101842. https://doi.org/10.1016/j.adhoc.2019.02.001
23. Elrawy, M.F.; Awad, A.I.; Hamed, H.F. Intrusion Detection Systems for IoT-Based Smart Environments: A Survey. *J. Cloud Comput.* 2018, 7, 1–20. https://doi.org/10.1186/s13677-018-0123-6
24. Elsaeidy, A.; Munasinghe, K.S.; Sharma, D.; Jamalipour, A. Intrusion Detection in Smart Cities Using Restricted Boltzmann Machines. *J. Netw. Comput. Appl.* 2019, 135, 76–83. https://doi.org/10.1016/j.jnca.2019.02.026
25. Saba, T. Intrusion Detection in Smart City Hospitals Using Ensemble Classifiers. In Proceedings of the 13th International Conference on Developments in eSystems Engineering (DeSE), Liverpool, UK, 14–17 December 2020. https://doi.org/10.1109/DeSE51703.2020.9450247
26. Zhu, B.; Joseph, A.; Sastry, S. A Taxonomy of Cyber Attacks on SCADA Systems. In Proceedings of the 2011 International Conference on Internet of Things and 4th International Conference on Cyber, Physical and Social Computing, Washington, DC, USA, 19–22 October 2011; pp. 380–388.
27. Debar, H.; Dacier, M.; Wespi, A. Towards a Taxonomy of Intrusion-Detection Systems. *Comput. Netw.* 1999, 31, 805–822. https://doi.org/10.1016/S1389-1286(98)00017-6
28. Proffitt, T. How Can You Build and Leverage SNORT IDS Metrics to Reduce Risk? SANS Institute, 2013. Available online: https://www.sans.org/reading-room/whitepapers/tools/paper/34350 (accessed on 2 July 2022).
29. Hu, J. Host-Based Anomaly Intrusion Detection; Springer: Berlin/Heidelberg, Germany, 2010; pp. 235–255. https://doi.org/10.1007/978-3-642-04117-4_13
30. Khan, A.R.; Kashif, M.; Jhaveri, R.H.; Raut, R.; Saba, T.; Bahaj, S.A. Deep Learning for Intrusion Detection and Security of Internet of Things (IoT): Current Analysis, Challenges, and Possible Solutions. *Secur. Commun. Netw.* 2022, 2022, 1–13. https://doi.org/10.1155/2022/4016073
31. Parwani, D.; Dutta, A.; Shukla, P.K.; Tahiliyani, M. Various Techniques of DDoS Attacks Detection & Prevention at Cloud: A Survey. *J. Comput. Sci. Technol.* 2015, 8, 110–120.

**Disclaimer/Publisher's Note:** The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
{ "disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.3390/su15032133?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.3390/su15032133, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "CCBY", "status": "GOLD", "url": "https://www.mdpi.com/2071-1050/15/3/2133/pdf?version=1674464426" }
2,023
[]
true
2023-01-23T00:00:00
[ { "paperId": "1d3d953363a6a3070c3a87c940af432edba24e18", "title": "Deep Learning for Intrusion Detection and Security of Internet of Things (IoT): Current Analysis, Challenges, and Possible Solutions" }, { "paperId": "39b7dbf60eaace5503793ab39d2b48e25101e816", "title": "Enhanced Network Intrusion Detection System" }, { "paperId": "c1cf420379a410014a5307e46ef8307b3cecfb1c", "title": "Security of Things Intrusion Detection System for Smart Healthcare" }, { "paperId": "ae82780a0deb576c647393a20392867e3657173a", "title": "Misbehavior-Aware On-Demand Collaborative Intrusion Detection System Using Distributed Ensemble Learning for VANET" }, { "paperId": "a7fb6fd0076b3026d9bb37e1d52e8fb6b3a68853", "title": "ARIES: A Novel Multivariate Intrusion Detection System for Smart Grid" }, { "paperId": "e50d6a9f5ea1be9eb5c8bfd55213c193e553a195", "title": "Intrusion Detection System for the Internet of Things Based on Blockchain and Multi-Agent Systems" }, { "paperId": "94554103c275907d052fc87598c032ca041a74c9", "title": "An intrusion detection system for connected vehicles in smart cities" }, { "paperId": "2ef192b8fb19fd6b2366de6ee6869e49eb41bd08", "title": "Intrusion detection in smart cities using Restricted Boltzmann Machines" }, { "paperId": "809d947cf0f222ecb58e42a8be69c0dc00042880", "title": "A Collaborative Intrusion Detection Approach Using Blockchain for Multimicrogrid Systems" }, { "paperId": "d9b5194f3f959eda2e95df6a340254f52ced46f4", "title": "Controllable and trustworthy blockchain-based cloud data management" }, { "paperId": "a7eea1d6dfbcee9a434e9ab6c9a7f67f1506d3db", "title": "Intrusion detection systems for IoT-based smart environments: a survey" }, { "paperId": "6d7b1e32a3c8bb8cf4f51f4c0561146c437bc766", "title": "A Survey on Anomaly Based Host Intrusion Detection System" }, { "paperId": "00be303628cc63a79cf6ffd89402a07525dab4fe", "title": "CIoTA: Collaborative IoT Anomaly Detection via Blockchain" }, { "paperId": "e11d5a4edec55f5d5dc8ea25621ecbf89e9bccb7", "title": "Taxonomy and Survey of Collaborative Intrusion Detection" }, { "paperId": "62fa81980219842a6c5945cc2d3f5b142240e3eb", "title": "Towards a taxonomy of intrusion-detection systems" }, { "paperId": "2899af503b92d4feea21468549013fc63ad72618", "title": "Intrusion Detection Systems Using Blockchain Technology: A Review, Issues and Challenges" }, { "paperId": "13a12faf033612507e6364faeb2bc6aa6958d7d5", "title": "Anomaly Detection in Block chain Using Clustering Protocol" }, { "paperId": null, "title": "Various Techniques of DDoS Attacks Detection & Prevention at Cloud: A Survey" } ]
14,329
en
[ { "category": "Economics", "source": "s2-fos-model" }, { "category": "Business", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/021ecd84d3d3c6ddb6874023f60808dc635bc2c2
[]
0.885968
Is Bitcoin an emerging market? A market efficiency perspective
021ecd84d3d3c6ddb6874023f60808dc635bc2c2
Central European Economic Journal
[ { "authorId": "2237636921", "name": "Mateusz Skwarek" } ]
{ "alternate_issns": null, "alternate_names": [ "Central Eur Econ J" ], "alternate_urls": [ "https://content.sciendo.com/view/journals/ceje/ceej-overview.xml" ], "id": "40fcf287-61ca-4a3d-9606-874810723e41", "issn": "2543-6821", "name": "Central European Economic Journal", "type": null, "url": "https://content.sciendo.com/view/journals/ceej/ceej-overview.xml" }
Abstract Despite recent studies focused on comparing the dynamics of market efficiency between Bitcoin and other traditional assets, there is a lack of knowledge about whether Bitcoin and emerging markets efficiency behave similarly. This paper aims to compare the market efficiency dynamics between Bitcoin and the emerging stock markets. In particular, this study indicates whether the dynamics of Bitcoin market efficiency mimic those of emerging stock markets. Thus, the paper's contribution emerges from the combination of Bitcoin and emerging markets in the field of dynamics of market efficiency. The dynamics of market efficiency are measured using the Hurst exponent in the rolling window. The study uses daily data for the MSCI Emerging Markets Index and the Bitcoin market over the period 2011–2022. Our results show that there is at most a moderate correlation between the dynamics of Bitcoin and emerging stock markets’ efficiency over the entire study period. The strongest correlations occur mainly in periods of high economic policy uncertainty in the largest Bitcoin mining countries. Therefore, the association between Bitcoin market efficiency and emerging stock markets’ efficiency may strengthen with an increase in economic policy uncertainty. These findings may be useful for investors and portfolio managers in constructing better investment strategies.
**ISSN:** 2543-6821 (online) [Journal homepage: http://ceej.wne.uw.edu.pl](http://ceej.wne.uw.edu.pl)

## **Mateusz Skwarek**

# **Is Bitcoin an emerging market? A market efficiency perspective**

**To cite this article** Skwarek, M. (2023). Is Bitcoin an emerging market? A market efficiency perspective. Central European Economic Journal, 10(57), 219-236.

**DOI:** 10.2478/ceej-2023-0013

[To link to this article: https://doi.org/10.2478/ceej-2023-0013](https://doi.org/10.2478/ceej-2023-0013)

##### **Mateusz Skwarek**

Poznań University of Economics and Business, Institute of Accounting and Finance Management, al. Niepodległości 10, 61-875 Poznań, Poland, corresponding author: Mateusz.Skwarek@phd.ue.poznan.pl

### **Is Bitcoin an emerging market? A market efficiency perspective**

**Abstract** Despite recent studies focused on comparing the dynamics of market efficiency between Bitcoin and other traditional assets, there is a lack of knowledge about whether Bitcoin and emerging markets efficiency behave similarly. This paper aims to compare the market efficiency dynamics between Bitcoin and the emerging stock markets. In particular, this study indicates whether the dynamics of Bitcoin market efficiency mimic those of emerging stock markets. Thus, the paper's contribution emerges from the combination of Bitcoin and emerging markets in the field of dynamics of market efficiency. The dynamics of market efficiency are measured using the Hurst exponent in the rolling window. The study uses daily data for the MSCI Emerging Markets Index and the Bitcoin market over the period 2011–2022. Our results show that there is at most a moderate correlation between the dynamics of Bitcoin and emerging stock markets' efficiency over the entire study period. The strongest correlations occur mainly in periods of high economic policy uncertainty in the largest Bitcoin mining countries. Therefore, the association between Bitcoin market efficiency and emerging stock markets' efficiency may strengthen with an increase in economic policy uncertainty. These findings may be useful for investors and portfolio managers in constructing better investment strategies.

**Keywords** bitcoin | market efficiency | emerging stock markets | long-range dependence | Hurst exponent

**JEL Codes** G11, G14, G15

#### **1. Introduction**

Bitcoin is the largest (when it comes to capitalisation) and the most researched cryptocurrency in the context of informational efficiency (Urquhart, 2016; Bariviera, 2017; Kristoufek, 2018; Kumar & Zargar, 2019; Tran & Leirvik, 2020; Noda, 2021). The majority of these studies have confirmed that the Bitcoin market is the least inefficient among cryptocurrencies. Thus, Bitcoin seems to be the most mature market and representative cryptocurrency (in terms of researchers' and investors' attention). However, similarly to other cryptocurrency markets, many previous studies also indicate that the Bitcoin market is still inefficient (e.g. Kosc, Sakowski & Ślepaczuk, 2019). The market is efficient when investors are not able to earn abnormal returns based on their past values (Fama, 1970). In other words, the market prices include all information. However, changes in market conditions and behavioural biases may make market efficiency dynamic. For example, loss aversion may affect investor decision-making under business uncertainty (Kahneman & Tversky, 1979). The Adaptive Markets Hypothesis combines these components and assumes that investors learn from their mistakes.
After some time (change in market conditions), investors adapt to this new environment and then the market may be very close to efficiency. But market conditions vary over time, leading to behavioural biases of investors (e.g. overconfidence, overreaction) and, in effect, the dynamics of market efficiency can be observed (Lo, 2004). For example, Lim, Brooks and Kim (2008) find the dynamics of stock market efficiency during different market conditions. So it seems that changes in economic uncertainty related to different market conditions affect market efficiency. So far, there is no comprehensive answer to the question of whether the Bitcoin market efficiency is developing better than that of emerging markets. This paper aims to fill this gap by comparing the dynamics of market efficiency in the Bitcoin market with that of the emerging stock markets. This may help investors to allocate capital more efficiently. In particular, the survey of the relationship between emerging markets and Bitcoin over time could indicate whether a portfolio's chance of obtaining a given return is greater by including both markets. On the other hand, this analysis from a market efficiency perspective may show in which sub-periods there is a delay in the price's reaction to information and what the size of the price's deviation from the random walk process is. Thus, it could be used by investors in constructing better investment strategies. The latest research on cryptocurrency market efficiency has focused on the resilience of Bitcoin market efficiency to global shocks such as the Covid-19 pandemic. Phiri (2022) shows that the pandemic has affected the dynamics of Bitcoin market efficiency. A similar view is developed by Fernandes et al. (2022), who conclude that the response of Bitcoin market efficiency to Covid-19 is different from other markets. This evidence is supported by others (Wang et al., 2021; Diniz-Maganini, Diniz & Rasheed, 2021; Mensi et al., 2022) who also compare the dynamics of market efficiency in Bitcoin with developed markets and traditional investments. The above-mentioned studies do not include emerging stocks in this context. However, Lim et al. (2008) confirm that some emerging markets exhibit higher market inefficiency in times of financial crisis. Baur and McDermott (2010) note that large emerging markets react differently to economic shock (compared to developed stock markets). Thus, the dynamics of market efficiency of both the emerging and Bitcoin markets seem to be affected by unexpected events in different ways than developed stock markets. Because of this, a study on the resilience of Bitcoin and emerging markets' efficiency to economic shocks is needed. Therefore, this paper joins a discussion on the comparison of the dynamics of Bitcoin market efficiency with the dynamics of market efficiency in other traditional markets. Existing studies cover the relationship between Bitcoin and emerging markets in aspects other than the dynamics of market efficiency (Carrick, 2016; Bouri et al., 2017; Shahzad et al., 2019; Mizerka, Stróżyńska-Szajek & Mizerka, 2020; Bouri et al., 2020). Specifically, the studied areas include the dependence between Bitcoin and emerging markets returns, the co-movement of markets at different time horizons, and the predictability of asset returns from stock market returns.
The majority of these studies suggest that this association is weak and may be time-varying. The reaction of the correlation between the markets to economic shocks may affect the losses of their investors during these events (Baur & McDermott, 2010). In particular, they notice the reactions of emerging markets, and their difference from developed markets, during extreme events. However, there is a lack of empirical analysis of the relationship between the dynamics of Bitcoin market efficiency and the dynamics of emerging markets' efficiency over time. Therefore, the studies of the association between markets during economic shocks should be deepened. Several studies indicate some similarities between Bitcoin and emerging markets from a market efficiency perspective. Urquhart (2016), Bariviera (2017), and Takaishi and Adachi (2020) find that the Bitcoin market has become more efficient over time. This trend in market efficiency is also documented in the case of some emerging stock markets by Cajueiro and Tabak (2004), Sukpitak and Hengpunya (2016), and Hkiri et al. (2021). In this context, it can be concluded that some emerging markets and Bitcoin became more efficient in the years 2015–2016. However, in the Chinese and Bitcoin markets during the years 2014–2015, bubble-like price dynamics could be observed. According to Kristoufek (2018), the Bitcoin market is efficient only after price bubbles, that is, during low Bitcoin price dynamics. Some studies (e.g. Lim et al., 2008; Hull & McGroarty, 2014) also report that emerging markets exhibit higher inefficiency in certain periods. Motivated by these various results, this paper verifies whether both emerging and Bitcoin markets become more efficient over time. The research gap consists of several strands. Firstly, it concerns the comparison of Bitcoin with emerging stock markets' efficiency. In particular, the relative dynamics of market efficiency in both markets have not been analysed in times of different turmoils. Furthermore, so far, the relationship between Bitcoin and emerging markets' efficiency has not been linked to high economic policy uncertainty events in Bitcoin's largest mining countries during this period. Besides, to the best of the author's knowledge, the rolling correlation between the dynamics of market efficiency of Bitcoin and emerging markets has not been measured. It may help to compare the relative chance of profitable investment strategies in different markets at some time horizon. Thus, the following research question was asked: What is the relationship between Bitcoin and emerging stock markets from the market efficiency perspective? The main purpose of this article is to compare the dynamics of market efficiency between Bitcoin and the emerging stock markets. For this aim, the weak form of market efficiency is analysed over time by applying the Hurst exponent in the rolling window. Specifically, the dataset consists of daily closing prices of the MSCI Emerging Markets Index and the Bitcoin market from the period 2011 to 2022. Finally, the correlation coefficient is measured between the two time series of Hurst exponents on a rolling window. In effect, the dynamic relationship between the Bitcoin market efficiency and the emerging stock markets' efficiency is shown. Thus, it is indicated whether Bitcoin and emerging stock markets show some similarities in their efficiency.
Findings show that there is a moderate correlation between the market efficiency of Bitcoin and emerging stock markets in some periods (a strong value of the correlation is not confirmed in the additional tests). This is in contrast to previous research on the association between Bitcoin and emerging market returns. Bitcoin market efficiency and emerging markets' efficiency report the most common fluctuations in periods of large economic policy uncertainty. Specifically, the jumps in correlation values occurred in the following periods: the threat of a spillover of the euro area crisis in 2012, the threat of the US debt crisis at the end of 2013, Russia's aggression against Ukraine in February 2014, China's economic downturn in 2015, the USA – China trade tensions in the years 2018–2019, Covid-19 in 2020, and the Russia – Ukraine war in 2022. Thus, the results can be assigned to the events related to the high economic policy uncertainty of the largest Bitcoin mining countries (e.g. China, the USA). For example, the uncertainty related to the US presidential election results in 2016 (Trump's election), the USA – China trade policy tensions in the years 2018–2019, and the threat of Covid-19 may be associated with the jumps in the values of correlation. The identified economic shocks extend the conclusions of the existing research on the dynamics of Bitcoin market efficiency, because recent studies mainly verify the importance of Covid-19 for the dynamics of market efficiency as a global crisis. It can be supposed that the economic shocks of the largest Bitcoin mining countries would also have an impact on the dynamics of Bitcoin market efficiency and its relationship with the dynamics of emerging markets' efficiency. Therefore, investors should pay attention to the role of high economic policy uncertainty of these countries in the profitability of portfolio diversification which includes Bitcoin and emerging markets. The contribution of this study is at least threefold. Firstly, this paper adds to the previous literature by comparing the dynamics of market efficiency in Bitcoin and emerging stock markets. In the existing research, there is no clear evidence of the 'emerging' nature of the cryptocurrency market efficiency. However, researchers refer to cryptocurrencies as an emerging market (Alvarez-Ramirez, Rodriguez & Ibarra-Valdez, 2018; Khuntia & Pattanayak, 2018; Kumar & Zargar, 2019). Inappropriate classification of the Bitcoin market may cause investors to treat it as less risky than it is. Therefore, this study contributes to the possibility of a better allocation, as it shows the actual and relative level of both Bitcoin market maturity and the degree of predictability of the returns time series of the studied markets. Thus, it is important to verify whether 'emerging market' is the proper category in the case of Bitcoin. Secondly, the study extends a discussion on the resilience of market efficiency to economic shocks. The greater resilience of one asset's market efficiency compared to another's could be a potential attribute of a safe haven (Wang et al., 2021). In other words, a safe haven could be identified by the observation of negative predictability from the stock market to the (safe haven) asset, or by the fact that losses from one investment are compensated by gains from another (Shahzad et al., 2019). This is the first study to compare the market efficiency dynamics of emerging stock markets and Bitcoin in different periods of economic shocks.
Recent studies focus on the relationship between Bitcoin and emerging economies in the context of portfolio diversification opportunities (Bouri et al., 2017; Shahzad et al., 2019; Mizerka et al., 2020; Bouri et al., 2020). The low (or negative) correlation between Bitcoin and other assets may indicate benefits from portfolio diversification, especially during periods of market stress that may be characterised by a different herd behaviour of investors (because of different perceptions of the impact of a given shock on markets). However, there is still a research gap in this phenomenon from a market efficiency perspective. The findings show whether the chance to obtain profitable strategies based on historical quotations in one market may be higher than in another. If both markets are included in one portfolio, the low (or negative) correlation between the degrees of the predictability of returns in these markets may indicate potential safe-haven benefits for investment strategies based on market performance; that is, economic shock effects on both markets differ in terms of degree and/or nature of dependence (momentum/mean-reversion) in return time series at a given time. Thus, in terms of practical contribution, this study may help investors in developing better diversification investment strategies. For example, the largest positive correlations between Bitcoin and emerging markets' efficiency in a period of market stress confirm that investment strategies based on the historical returns obtained in these markets should rather assume a reduction of the share of these investment assets in the portfolio during some unexpected events. Thirdly, this paper contributes to the literature on the dynamics of Bitcoin market efficiency and its potential factors. Despite many recent studies on the dynamics of Bitcoin market efficiency, there is no comprehensive evidence on whether the dynamics of Bitcoin market efficiency are related to uncertainty (Wang et al., 2021; Diniz-Maganini et al., 2021; Mensi et al., 2022; Phiri, 2022; Fernandes et al., 2022; Mnif, Mouakhar & Jarboui, 2023). In addition, so far it has not been verified in the context of emerging stock markets. This research shows that high economic uncertainty potentially affects the changes in both Bitcoin and emerging stock markets' efficiency. In effect, investors should take the economic policy uncertainty of the largest emerging countries in Bitcoin mining into consideration. Thus, the novelty of this paper arises from the combination of the dynamics of Bitcoin and emerging markets' efficiency and uncertainty. This research makes a theoretical contribution by explaining the co-movements in the markets' efficiency dynamics of different investment assets through the sub-optimal investor reaction to high economic policy uncertainty events concerning the largest countries in terms of the capital flows between these markets. In other words, the reason for the increase in the correlation between markets' efficiency may be that investors are more subject to the representativeness heuristic in times of high economic uncertainty events. Thus, this study deepens understanding of the Adaptive Markets Hypothesis. The structure of the article is as follows: The first section is the introduction. The second part presents the literature background. Next, the data and methodology applied in this paper are described.
The third section reports the results. The fourth part of the article consists of additional analyses. The last sections are the discussion and the conclusion.

#### **2. Literature review**

Bitcoin is the most popular cryptocurrency and the largest in terms of market capitalisation, accounting for about 43% of the cryptocurrency market share (January 4, 2023). The purpose of the creation of Bitcoin was to be used as a payment system. In fact, some users treat Bitcoin as an alternative currency or even a store of value (Polasik et al., 2015). However, most participants in the cryptocurrency market perceive it as a speculative investment (Hileman & Rauchs, 2017, p. 24). One of the most discussed issues in the context of Bitcoin is its market efficiency. This popular topic has been studied for many years in the stock markets since Fama (1970) formulated the efficient market hypothesis. According to Fama (1970), the efficient market hypothesis (EMH) means it is impossible to use past prices to predict future prices (weak form). Thus, it refers to informational efficiency (Czekaj, Woś & Żarnowski, 2001, p. 30), which is an important global problem, because the growth in market efficiency may lead to a better allocation of capital (both from the global and individual investors' perspectives). The majority of early studies on Bitcoin market efficiency report that its price behaviour is nonrandom and characterised by dynamics. Urquhart (2016) and Bariviera (2017), applying the Hurst exponent, showed that the Bitcoin market was inefficient in the years 2010–2016/2017 and that lately there was a trend toward an efficient market. Other researchers (Aggarwal, 2019; Bouri et al., 2019; Jiang, Nie & Ruan, 2018; Kumar & Zargar, 2019; Takaishi & Adachi, 2020) also confirmed that the inefficiency of the Bitcoin market varies over time. Several of them found a long memory of Bitcoin returns, which signals a positive autocorrelation (e.g. Alvarez-Ramirez et al., 2018). Thus, the above-mentioned evidence suggested that Bitcoin may become more efficient over time. However, it seems to still be an inefficient market with the presence of long memory. Similar results can be found in the context of emerging markets (Cajueiro & Tabak, 2004; Sukpitak & Hengpunya, 2016; Hkiri et al., 2021). These studies confirmed that emerging markets have become more efficient over time. Hull and McGroarty (2014), however, noticed that the emerging markets' efficiency was time-varying and characterised by a long-memory process most of the time. Therefore, the following research question was addressed: What is the relationship between Bitcoin and emerging stock markets from the market efficiency perspective? Despite recent papers mainly contradicting Bitcoin market efficiency, many of them also focus on the factors of market efficiency (e.g. Brauneis & Mestel, 2018; Wei, 2018; Köchling, Müller & Posch, 2019; Khuntia & Pattanayak, 2020; Takaishi & Adachi, 2020; Noda, 2021; Phiri, 2022) or the relationship between the dynamics of market efficiency of Bitcoin and traditional financial assets (e.g. Al-Yahyaee, Mensi & Yoon, 2018; Plastun et al., 2019; Diniz-Maganini et al., 2021; Wang et al., 2021; Mensi et al., 2022) or other cryptocurrencies (e.g. Caporale, Gil-Alana & Plastun, 2018; Wei, 2018; Borowski & Matusewicz, 2019; Aslan & Sensoy, 2020; Noda, 2021; Assaf et al., 2022).
In this context, several researchers (Brauneis & Mestel, 2018; Wei, 2018; Noda, 2021) found that Bitcoin was the least inefficient compared to other cryptocurrencies. Therefore, taking into consideration that it is the most studied, least inefficient, and largest cryptocurrency, it seems to be the most suitable representative of the cryptocurrency market with which to examine the dynamics of market efficiency. Recent studies compare Bitcoin market efficiency to the efficiency of traditional assets such as gold, currencies, bonds, stock markets, and commodities (Al-Yahyaee et al., 2018; Plastun et al., 2019; Wang et al., 2021; Diniz-Maganini et al., 2021; Mensi et al., 2022; Chowdhury et al., 2023). Most of them indicate that the size of the Bitcoin market inefficiency is different from other markets. However, several studies confirm that these markets exhibit some similarities when it comes to market price reactions to economic shocks such as Covid-19. Mensi et al. (2022) and Wang et al. (2021) documented that the inefficiency of Bitcoin and the other studied markets of traditional financial assets increased during the time of Covid-19. Lim et al. (2008) also support these findings in the case of the reaction of emerging market efficiency to a financial shock. Thus, it can be expected that in times of market turmoil, the correlation between the market efficiency of traditional emerging markets and Bitcoin strengthens and the market efficiency deteriorates in both cases. Another interesting conclusion can be drawn in the context of the resistance of market efficiency to high economic policy uncertainty. For example, Wang et al. (2021), Diniz-Maganini et al. (2021), and Mensi et al. (2022) observed that during Covid-19 the increase in the Bitcoin market inefficiency was smaller than for the other studied markets, which could be an attribute of a safe haven (Wang et al., 2021). A similar conclusion was developed by Fernandes et al. (2022), who stated that the dynamics of cryptocurrency market efficiency are robust to unpredictable shocks such as Covid-19. In contrast to them, Phiri (2022) obtained findings that contradict the resistance of the dynamics of Bitcoin market efficiency to shocks. Recently, Rufino (2023) confirmed that Bitcoin market efficiency deteriorated during the pandemic period. This is supported by Mnif et al. (2023), who also reported that during unexpected events such as the Russia-Ukraine war, the Bitcoin market inefficiency increases. However, Chowdhury et al. (2023) noticed that during the Covid-19 period, the market efficiency of the S&P 500 changed more than did that of Bitcoin. Thus, the majority of results provide evidence that supports the greater resilience of Bitcoin market efficiency to economic shocks compared to the markets of traditional investment assets. There is still no consistency, however, concerning whether Bitcoin market efficiency is robust to unexpected events. So far, the studied factors of cryptocurrency market efficiency include liquidity (Brauneis & Mestel, 2018; Wei, 2018; Köchling et al., 2019; Takaishi & Adachi, 2020; Noda, 2021), halving (Phiri, 2022), market capitalisation (Brauneis & Mestel, 2018), and trading volume (Khuntia & Pattanayak, 2020). The latest research indicates that in the case of a speculative bubble or global crisis, there is an increased co-movement of the efficiency of Bitcoin and most cryptocurrencies (Assaf et al., 2022) or of traditional investments (Wang et al., 2021; Mensi et al., 2022).
This has not been verified yet, however, for the market efficiency of Bitcoin and emerging markets. Moreover, some researchers (Czarnecki, Grech & Pamuła, 2008; Hkiri et al., 2021) have noticed that the behaviour of the Hurst exponent of developing stock markets may be related to financial or political crises. Therefore, it can be assumed that, similar to other traditional markets, the dynamics of emerging markets' efficiency may be more related to the dynamics of Bitcoin market efficiency during times of high economic policy uncertainty events. This is supported by large capital flows between emerging markets and Bitcoin in the context of cryptocurrency mining (Statista, 2022). Only Plastun et al. (2019) examined emerging markets' efficiency and Bitcoin market efficiency. Specifically, they compared the market efficiency of two emerging markets (Russia and Ukraine) to Bitcoin market efficiency. Thus, they did not take China into account, which is the largest emerging market (the share of China in the MSCI Emerging Markets index was about 30% in late 2022) and one of the most important countries in the case of Bitcoin mining. Therefore, further studies which include the largest emerging markets in this field are needed. Plastun et al. (2019) also concluded that these markets exhibited different degrees of persistence in returns for days of the week in the years 2014–2018, which is contrary to the efficient market hypothesis. Moreover, the majority of other studies suggested that the association between emerging markets and Bitcoin is weak and may be time-varying (Carrick, 2016; Shahzad et al., 2019; Bouri et al., 2020). Along these lines, it can be expected that the relationship between emerging markets and Bitcoin from the perspective of the dynamics of market efficiency is weak for the whole period. On the other hand, Plastun et al. (2019) used the Hurst exponent for emerging markets in cross-section on different days of the week. This static approach does not include the dynamics of correlation and volatility clustering, or changes in the underlying process which drives Bitcoin prices (Aggarwal, 2019). Thus, the results obtained by Plastun et al. (2019) should be verified by applying a different approach, such as dynamic correlation (e.g. a sliding window). Besides, this evidence suggests that the relationship between emerging markets and Bitcoin efficiency may be time-varying. To sum up, most studies of cryptocurrencies' market efficiency have been conducted on the Bitcoin market. This cryptocurrency has the largest share of the market and is the least inefficient. Furthermore, previous research looking at Bitcoin from a market efficiency perspective has focused on its relationship with other cryptocurrencies, uncertainty, or traditional assets such as gold, currencies, commodities, and developed countries' stocks. These studies do not take the large capital flows between emerging countries and Bitcoin into account. In effect, no evidence includes the largest emerging stock markets from this perspective. However, the majority of them suggest that the association between emerging markets and Bitcoin is weak and may be time-varying (Carrick, 2016; Shahzad et al., 2019; Bouri et al., 2020).
In particular, some studies conducted from a market efficiency perspective find separately that both Bitcoin (Urquhart, 2016; Bariviera, 2017; Takaishi & Adachi, 2020) and the emerging markets (Cajueiro & Tabak, 2004; Sukpitak & Hengpunya, 2016; Hkiri et al., 2021) have become more efficient over time. Thus, the research question concerns the relationship between Bitcoin and emerging stock markets' efficiency.

#### **3. Data and methodology**

In line with Urquhart (2016), Bariviera (2017), Kristoufek (2018), and Jiang et al. (2018), logarithmic returns are calculated to provide time series for the analysis of market efficiency. To verify market efficiency, the Hurst exponent, a measure of long-range dependence, is adopted. Following Urquhart (2016) and Bariviera (2017), the Hurst exponent is calculated using rescaled range analysis (R/S). According to Kristoufek (2010), this method can be represented as an analysis of the rescaled range of a time series for different scales of a given length. In effect, the rescaled range depends on the length of the scale ($i$). Briefly, this relation is presented below:

$$R/S = a \cdot i^{H} \quad (1)$$

where $H$ is the Hurst exponent, $S$ is the standard deviation of the sums of departures of returns from the average in a given period, $R$ (range) is the difference between the maximum and minimum of the sums of deviations from the average in each subinterval of length $i$, and $a$ is a constant. When the above relationship imitates a linear trend on a double-logarithmic scale, the time series follows a random walk. So, if the Hurst exponent equals 0.5, the market is efficient. A value of the Hurst exponent above 0.5 means that the time series is persistent (long memory). On the other hand, a value of the Hurst exponent below 0.5 can be interpreted as a mean-reversion property of the time series. As pointed out by Kristoufek (2010), the standard deviations for rescaled range analysis are smaller compared to detrended fluctuation analysis (DFA), which is a very popular alternative in this case. However, he states that, in general, the results of both methods are quite similar. Furthermore, Kristoufek (2010) recommends applying a minimum scale of 16 observations and a minimum length of time series equal to 512 data points in the case of R/S. He argues that too-small scales can lead to an incorrect value of the standard deviation (bias), which is used to rescale the ranges during the estimation of the Hurst exponent, while too-large scales may cause the impact of extreme values to be underestimated. Thus, the minimum scale of 16 and the length of 512 for the time series are used. In effect, to show the dynamics of market efficiency, we calculate the Hurst exponent over a rolling window of 512 data points (a fixed size) with a one-day step. This is comparable to the two-year window exploited by Bariviera (2017). Similar to Polanco-Martínez (2019), to present a dynamic relationship between the two variables – the Hurst exponents of Bitcoin and of the MSCI Emerging Markets Index – rolling window correlation coefficients are estimated. Specifically, the Spearman rank correlation with p-values is exploited, because it is more robust to non-linear relationships in the analysed data series.
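To make the estimation procedure concrete, the following is a minimal Python sketch of the rescaled range estimator from equation (1) and of the 512-observation rolling window described above. It is not the authors' code; the function names and the power-of-two grid of scales are our own illustrative choices.

```python
import numpy as np

def hurst_rs(returns, min_scale=16):
    """Estimate the Hurst exponent of a return series via rescaled range (R/S) analysis."""
    x = np.asarray(returns, dtype=float)
    n = len(x)
    scales, rs_means = [], []
    scale = min_scale
    while scale <= n // 2:
        rs = []
        for start in range(0, n - scale + 1, scale):
            w = x[start:start + scale]
            z = np.cumsum(w - w.mean())   # cumulative deviations from the window mean
            r = z.max() - z.min()         # range R of the cumulative deviations
            s = w.std()                   # standard deviation S of the window
            if s > 0:
                rs.append(r / s)
        scales.append(scale)
        rs_means.append(np.mean(rs))
        scale *= 2                        # power-of-two grid of scales (our choice)
    # H is the slope of log(R/S) against log(scale), cf. equation (1)
    H, _ = np.polyfit(np.log(scales), np.log(rs_means), 1)
    return H

def rolling_hurst(returns, window=512):
    """Hurst exponent over a fixed-size rolling window with a one-day step."""
    x = np.asarray(returns, dtype=float)
    return np.array([hurst_rs(x[i:i + window]) for i in range(len(x) - window + 1)])
```

With a window of 512 observations and a minimum scale of 16, the fit uses the scales 16, 32, 64, 128, and 256 within each window.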
The reasons for applying the rolling window correlation are the presence of volatility clustering (Bariviera, 2017) and structural breaks (Jiang et al., 2018) in financial time series, which could signal nonlinear patterns in the dynamics of market efficiency. The correlations are based on the series of Hurst exponents. For example, this means that the dynamic correlation coefficient for the first of January 2021 refers to the behaviour of market returns in the previous period of two years plus the window size for the rolling correlation (251 or 126 data points). Finally, the robustness of the results is verified in several ways. As proposed by Polanco-Martínez (2019), the dynamic correlation is applied for different window sizes. On the one hand, a small length of the time series used to compute the correlation could influence the significance of the results. On the other hand, the use of larger window sizes may mean that one correlation value includes the impact of several 'unpredictable' events (which are rare), so it would be difficult to isolate the importance of one event for the studied association. Therefore, only two window sizes are used: 126 and 251 data points. Besides, Kendall's correlation is applied to verify whether the results are robust. Similar to Borowski and Matusewicz (2020), detrended fluctuation analysis (DFA) is also adopted to provide additional estimates of market efficiency. In contrast to rescaled range analysis, DFA exploits the squared fluctuation function, which is a measure of variability (instead of the range). Additionally, the overlapping rolling window of 512 observations with a minimum scale of 16 data points is used. In effect, the dynamic Spearman correlation is applied to the Hurst exponents based on the DFA method. The dataset consists of daily closing prices of Bitcoin and the MSCI Emerging Markets Index in the period September 13, 2011, through August 11, 2022. This period is limited by the availability of quotations from the Bitstamp exchange. Another reason for the length of this period is that it includes the largest changes in economic policy uncertainty, which allow us to study the price reaction to different levels of information uncertainty. Similar to Bariviera et al. (2017) and Takaishi and Adachi (2020), the Bitcoin data from Bitstamp (the world's longest-standing cryptocurrency exchange) are used and collected through the website http://api.bitcoincharts.com/v1/csv/. In this case, for each common business day, the day's closing price is exploited (following Aslan and Sensoy (2020)). The MSCI Emerging Markets Index prices are sourced from the *Wall Street Journal* [(https://www.wsj.com/market-data/quotes/index/XX/891800/historical-prices).](https://www.wsj.com/market-data/quotes/index/XX/891800/historical-prices) Information about economic policy uncertainty events for China, Russia, and the USA is downloaded from www.policyuncertainty.com (except for China's downturn in 2015). Table 1 presents estimates of basic descriptive statistics for Bitcoin and the MSCI Emerging Markets Index. It can be noticed that Bitcoin reports a higher maximum daily return of 48% and a larger maximum daily decrease of 66% compared to the MSCI Emerging Markets Index. Besides, both return series are left-skewed and leptokurtic. However, the left tail of the distribution of Bitcoin returns is much longer (-1.0666) than for the MSCI Emerging Markets Index (−0.5098).
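The statistics of Table 1 can be reproduced with standard tooling. A minimal sketch (assuming pandas, SciPy, and statsmodels; `describe_returns` is our own name), where `fisher=False` reports raw rather than excess kurtosis:

```python
import numpy as np
import pandas as pd
from scipy.stats import skew, kurtosis
from statsmodels.tsa.stattools import adfuller

def describe_returns(prices):
    """Descriptive statistics and ADF test for a pandas Series of daily closing prices."""
    r = np.log(prices).diff().dropna()          # logarithmic returns
    adf_stat, adf_pvalue, *_ = adfuller(r)
    return pd.Series({
        "Mean": r.mean(), "Median": r.median(),
        "Maximum": r.max(), "Minimum": r.min(),
        "Std. Dev.": r.std(),
        "Skewness": skew(r), "Kurtosis": kurtosis(r, fisher=False),
        "ADF": adf_stat, "ADF p-value": adf_pvalue,
        "Observations": len(r),
    })

# usage: describe_returns(btc_close); describe_returns(msci_close)
```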
Results of the ADF test imply that the returns of Bitcoin and the MSCI Emerging Markets Index are stationary. These findings suggest that Bitcoin returns are more volatile, and their distribution is more non-normal, than in the case of the emerging stock markets.

**Table 1.** Descriptive statistics for the logarithmic return series of Bitcoin (BTC) and the MSCI Emerging Markets Index from 13 September 2011 to 11 August 2022

| | BTC | MSCI Emerging Markets |
|---|---|---|
| Mean | 0.0029 | 1.78E-05 |
| Median | 0.0026 | 4.55E-04 |
| Maximum | 0.4848 | 5.58E-02 |
| Minimum | -0.6639 | -6.94E-02 |
| Std. Dev. | 0.0558 | 0.0101 |
| Skewness | -1.0666 | -0.5098 |
| Kurtosis | 23.3798 | 8.0321 |
| ADF | -11.764*** | -14.166*** |
| Observations | 2835 | 2835 |

Note: *** means a 1% significance level. Source: Own calculations

#### **4. Results**

Figure 1 shows the time series of the Hurst exponents for both studied markets over time. The red and blue lines denote the Hurst exponents for Bitcoin and the MSCI Emerging Markets Index, respectively. As seen in Figure 1, at most times there is a time-varying long memory in Bitcoin and the MSCI Emerging Markets Index, because both markets exhibit Hurst exponent values above 0.5 most of the time. In particular, most of the largest deviations of the Hurst exponent from 0.5 occur for Bitcoin. Thus, generally, the Bitcoin market seems to be more inefficient than the emerging stock markets. A similar conclusion can be drawn based on the results of another study (Plastun et al., 2019) in the context of a comparison of market efficiency between two emerging markets. Secondly, in the first half of the study period, the time series of the Hurst exponents shows a decreasing trend towards the value of 0.5 (an efficient market) for both the MSCI Emerging Markets Index and Bitcoin. This trend can be assigned to the announcements of Bitcoin regulations and recommendations of supervisory authorities, which frequently occurred in the years 2013–2018. In particular, this concerns mainly the largest Bitcoin mining countries, which are China and the USA (Statista, 2022). These 'regulatory' events might have given investors better access to information that had been undefined (uncertain) before it appeared. In effect, a decrease in Bitcoin market inefficiency can be observed, which is consistent with Urquhart (2016), Bariviera (2017), and Takaishi and Adachi (2020). During this period, there is also an improvement in market efficiency for the MSCI Emerging Markets Index. This confirms the findings of studies on market efficiency for some emerging stock markets (Cajueiro & Tabak, 2004; Sukpitak & Hengpunya, 2016; Hkiri et al., 2021). In Figure 1, one can see that the behaviour of the Hurst exponent in some subperiods seems to be very close for the emerging markets and Bitcoin, especially during the pandemic era from 2020–2022. To be more precise, at the beginning of the pandemic period, a meaningful increase in market inefficiency can be observed for both studied markets. However, the initial reaction of the Hurst exponent to the economic shock (the pandemic) is smaller for Bitcoin compared to the traditional assets, which are emerging stocks. These findings for the Bitcoin market are in line with Wang et al. (2021), Assaf et al. (2022), Mensi et al. (2022), and Chowdhury et al. (2023).
**Figure 1.** Hurst exponents of daily returns for Bitcoin and the MSCI Emerging Markets Index. Note: The date denotes the endpoints of the sliding windows. The red and blue lines mean Hurst exponents for Bitcoin and MSCI Emerging Markets Index, respectively. The dashed line denotes an efficient market – the value of the Hurst exponent is 0.5. Source: Own work

In effect, it can be expected that the strength of the relationship between Bitcoin and emerging stock markets may be time-varying and related to some global economic shocks. Therefore, a dynamic correlation between the dynamics of the market efficiency of Bitcoin and the MSCI Emerging Markets Index is presented in Figure 2.

**Figure 2.** Dynamic correlation between Bitcoin and MSCI Emerging Markets Index using R/S at different lengths of the rolling window: 251 (Panel A), 126 (Panel B). Note: Black and grey lines indicate correlation coefficients and p-values, respectively. The horizontal red line means p-values at 10%. Rolling window sizes are 251 (Panel A) and 126 Hurst exponents (Panel B). The date corresponds to the endpoints of the sliding windows for the correlation coefficient. Source: Own calculations

The time-varying correlation (Figure 2) shows that, in the context of the dynamics of market efficiency, the relationship between Bitcoin and the MSCI Emerging Markets Index is quite strong in some periods. The maximum Spearman coefficients are 0.83 (Panel A) and 0.81 (Panel B), which indicate a strong correlation. In particular, it can be observed that the significant (p-values less than 10%) and largest correlations occur mainly in several periods, e.g. mid-2014, the end of 2014, 2015, the end of 2016, early 2017, 2018–2019, 2020, early 2021, the end of 2021, and early 2022. The above periods can be assigned to events of high economic policy uncertainty in the largest Bitcoin mining countries. Specifically, China and the USA were the largest Bitcoin mining countries in the last few years (Statista, 2022). However, until 2015 most Bitcoin mining industries were located in Europe and the USA (Tovanich, Soulié & Isenberg, 2021). Apart from that, China accounts for almost a third of the MSCI Emerging Markets Index. Another large emerging economy is Russia. In the study period, there were several economic shocks concerning these countries: the threat of a spillover of the euro area crisis in 2012, the threat of the US debt crisis at the end of 2013, Russia's aggression against Ukraine in February 2014, China's economic downturn in 2015, the USA–China trade tensions in the years 2018–2019 (also related to the election of Donald Trump at the end of 2016), Covid-19 in 2020, and the Russia–Ukraine war in 2022. In particular, the turbulence in the eurozone may have caused investors to reduce the share of emerging economies in their investment portfolios. At the end of 2013, there was the threat of a debt crisis in the USA and China. China was America's largest foreign creditor in 2013. At that time, the US and Europe also had the largest share of Bitcoin mining. If Congress had not passed an increase in the national debt limit by October 17, foreign payments could have been stopped. As a result, there was a partial shutdown of the US government for 16 days, because Congress could not agree on a budget. Next, there was Russia's aggression against Ukraine in February 2014.
Another shock was related to uncertainty about the fact that Donald Trump won the election in late 2016. During the presidential campaign, he spoke about his future policy against the existing trade agreements with China. As a result, in January 2018, the USA set tariffs on China. This trade conflict intensified through 2019. The increase in the correlation value in the years 2020/2021 could be linked to the appearance of uncertainty related to the Covid-19 pandemic. In 2022, the threat of the Russia–Ukraine war could have affected investors in terms of increasing fear and herd behaviour. The common feature of the above-mentioned economic shocks is their unpredictability. Because of a lack of certain information about these events (e.g. the threat of a debt crisis, the Covid-19 vaccine, trade policy between the USA and China, and war), they could not be included in market prices by the rational expectations of investors. Furthermore, investors could over- or underestimate the importance of these shocks for the economy due to the presence of a high level of fear. Thus, the irrationality of Bitcoin investors could arise from an increase in economic policy uncertainty. Besides, the specific features of cryptocurrencies may also affect investors' behaviour. The computing power which concentrates in this 'cryptocurrency system' is not tied to one geographic territory. So, to estimate the distributed policy uncertainty of Bitcoin, investors may use heuristics based on information from its largest mining countries. Thus, it seems that events related to high economic policy uncertainty in the largest emerging economies (e.g. China) have a meaningful impact on the comovement in the dynamics of market efficiency of Bitcoin and the emerging stock markets. Specifically, in times of the highest economic policy uncertainty, the correlation value seems to strengthen. Although the association between Bitcoin and emerging stock markets' efficiency is quite strong in some periods, the sign of the correlation changes. This may be because, in the case of an economic shock, behavioural factors may be the main short-term determinants of the efficiency of both markets. However, in the long run, the market efficiency of emerging stocks may be determined more by fundamentals, in contrast to cryptocurrency market efficiency, which may still be mainly the effect of behavioural factors. Furthermore, the negative values of the correlation could signal the safe haven feature of Bitcoin in some periods. This is in line with Wang et al. (2021), Maganini, Diniz and Rasheed (2021), and Mensi et al. (2022). The results are robust in the context of adopting different sizes of the rolling window. Generally, using 251 (Panel A) and 126 (Panel B) observations as the length of the sliding window, the significant and largest correlations are obtained mainly in similar periods. However, the values of the correlation based on the '126' data points are more volatile due to the shorter time series used for their calculation.
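For concreteness, the dynamic correlations of Figure 2 can be computed along the following lines. This is a minimal sketch assuming SciPy, with our own function name, run once per window size (251 and 126):

```python
import numpy as np
from scipy.stats import spearmanr

def rolling_spearman(hx, hy, window=251):
    """Rolling-window Spearman rank correlation with p-values between two
    Hurst exponent series (e.g. Bitcoin vs. the MSCI Emerging Markets Index)."""
    coefs, pvals = [], []
    for i in range(len(hx) - window + 1):
        rho, p = spearmanr(hx[i:i + window], hy[i:i + window])
        coefs.append(rho)
        pvals.append(p)
    return np.array(coefs), np.array(pvals)

# usage: coefs_251, pvals_251 = rolling_spearman(h_btc, h_msci, window=251)
#        coefs_126, pvals_126 = rolling_spearman(h_btc, h_msci, window=126)
```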
#### **5. Additional analyses**

To analyse the relationship between Bitcoin and emerging stock markets' efficiency more deeply, additional tests were carried out. One of them is the calculation of the correlations on the first differences of the Hurst exponents. In this case, the Hurst exponents are also based on rescaled range analysis (R/S). The results are presented in Figure 3.

**Figure 3.** Dynamic correlation on the first differences of Hurst exponents between Bitcoin and MSCI Emerging Markets Index. Note: Black and grey lines indicate correlation coefficients and p-values, respectively. The horizontal red line means p-values at 10%. Rolling window sizes are 251 (Panel A) and 126 Hurst exponents (Panel B). The date corresponds to the endpoints of the sliding windows for the correlation coefficient. Source: Own calculation

The time-varying correlation (Figure 3) shows that, in the context of the dynamics of market efficiency, the relationship between Bitcoin and the MSCI Emerging Markets Index in the pandemic period is one of the strongest in the whole study period. However, the values of the association between Bitcoin and emerging markets suggest a weak or absent statistical correlation in the analysed period. The strength of this association differs from the correlation based on the levels (Figure 2). However, this could be expected, because the transformation of levels into first differences may result in the loss of some information. In Figure 3, it can be observed that there are four local maxima of the correlation. Therefore, three main subperiods with different trends in the studied association can be distinguished: the years 2014–2017, late 2017–2020, and from the end of 2020. The significant (p-values less than 10%) and largest correlations occur mainly in three smaller periods – 2014–early 2015, 2017–early 2018, and 2020–early 2021. These periods are covered by the previous findings (Figure 2). Figure 4 presents the dynamic correlation using Kendall's τ instead of the Spearman correlation.

**Figure 4.** Dynamic Kendall correlation between Bitcoin and MSCI Emerging Markets Index using R/S at different lengths of the rolling window: 251 (Panel A), 126 (Panel B). Note: Black and grey lines indicate correlation coefficients and p-values, respectively. The horizontal red line means p-values at 10%. Rolling window sizes are 251 (Panel A) and 126 Hurst exponents (Panel B). The date corresponds to the endpoints of the sliding windows for the correlation coefficient. Source: Own work

Notice that the results are qualitatively consistent with those reached by the Spearman correlation. In particular, the dynamics of the correlation coefficient are similar to those observed in the case of the Spearman correlation. The subperiods with the largest values of the studied association are the same as before (Figure 2). However, the maximum value of the correlation coefficient (0.61) is smaller compared to the 0.83 noted for the Spearman method using 251 data points as the length of the rolling window. This is also supported by the results obtained using the sliding window of 126 observations. Figure 5 presents the dynamic correlations based on Hurst exponents estimated using DFA (detrended fluctuation analysis). In Figure 5, some similarities can be seen in the dynamics of market efficiency using both methods – rescaled range analysis and detrended fluctuation analysis. Specifically, the significant (p-values less than 10%) and largest correlations between Bitcoin and the MSCI Emerging Markets Index from a market efficiency perspective can be observed mainly in similar subperiods – the end of 2014/early 2015, the end of 2015, the end of 2016/early 2017, 2018, the end of 2018 and 2019, the beginning of 2020, the end of 2020/the beginning of 2021, the end of 2021, and early 2022.
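Both robustness variants of this section can be sketched the same way; the following illustrative code (our own names, SciPy assumed) swaps Kendall's τ in for Spearman's ρ and applies the chosen correlation to first differences instead of levels:

```python
import numpy as np
from scipy.stats import kendalltau, spearmanr

def rolling_corr(hx, hy, window=251, method=kendalltau):
    """Rolling correlation (Kendall's tau by default) between two Hurst series."""
    stats, pvals = [], []
    for i in range(len(hx) - window + 1):
        c, p = method(hx[i:i + window], hy[i:i + window])
        stats.append(c)
        pvals.append(p)
    return np.array(stats), np.array(pvals)

# Kendall's tau on levels (Figure 4) and Spearman on first differences (Figure 3)
# tau, p_tau = rolling_corr(h_btc, h_msci, method=kendalltau)
# rho_d, p_d = rolling_corr(np.diff(h_btc), np.diff(h_msci), method=spearmanr)
```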
**Figure 5.** Dynamic correlation between Bitcoin and MSCI Emerging Markets Index using DFA at different lengths of the rolling window: 251 (Panel A), 126 (Panel B). Note: Black and grey lines indicate correlation coefficients and p-values, respectively. The horizontal red line means p-values at 10%. Rolling window sizes are 251 (Panel A) and 126 Hurst exponents (Panel B). The date corresponds to the endpoints of the sliding windows for the correlation coefficient. Source: Own calculations

Except for very rare cases (the periods 2016/2017 and 2018/2019), the signs and values of the correlation are very similar for both methods. However, DFA reached on average lower maximum correlation coefficients compared to R/S, while the studied relationship may be considered moderate in most periods of market stress for both methods. The reason for this can be that only DFA (contrary to R/S) uses a polynomial fit for detrending in subperiods, which may be more resistant to the non-stationarity of time series compared to R/S (Kristoufek, 2010). To show more precisely the periods in which the correlations are the strongest, the Spearman correlations based on the Hurst exponents from both the DFA and R/S methods, and on the first differences, are presented together in Figure 6.

**Figure 6.** Dynamic Spearman correlation between Bitcoin and MSCI Emerging Markets Index based on Hurst exponents using R/S, DFA and first differences of Hurst exponents in the rolling window of 251 observations. Note: Blue and red lines indicate correlation coefficients based on Hurst exponents using R/S and DFA, respectively. The green line means correlation coefficients based on the first differences of Hurst exponents using R/S. The grey colour indicates the range of the correlation values (minimum, maximum) relative to the time point (x-axis). The correlation coefficients located in the area between the two horizontal black dashed lines are statistically insignificant (p-values greater than 10%). The date corresponds to the endpoints of the sliding windows for the correlation coefficient. Source: Own calculation

Figure 6 shows that in some periods the dynamics of the Spearman correlation coefficients based on the Hurst exponents of both methods (DFA and R/S) are similar. In particular, the most similar dynamics and correlation values can be observed during the pandemic. Thus, the link between global economic shocks and the dynamics of the correlation seems to be the most confirmed. On the other hand, the correlation values in some cases differ in sign or magnitude (especially in the years 2018–2019). However, the dynamics of both correlations are very similar in periods of local economic shocks (e.g. from mid-2018 to 2019). Therefore, it cannot be unequivocally stated whether 'local' economic shocks contribute to the largest correlation values between Bitcoin and the emerging stock markets' efficiency. In general, however, the results confirm that the dynamics of the correlation between the emerging stock and Bitcoin markets' efficiency behave similarly during times of high uncertainty related to economic turmoil in the countries with the largest Bitcoin mining or to global economic shocks. This is also supported by the separate phases of the dynamic correlation trend for the first differences in market efficiency.
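The DFA estimator behind Figure 5 can be sketched as follows – a minimal illustration with our own function name, using linear local detrending (the simplest DFA variant), a minimum scale of 16, and the same log-log regression as in the R/S case:

```python
import numpy as np

def hurst_dfa(returns, min_scale=16):
    """Hurst exponent via detrended fluctuation analysis (DFA): squared
    fluctuations around a local polynomial (here linear) fit per segment."""
    x = np.asarray(returns, dtype=float)
    profile = np.cumsum(x - x.mean())         # integrated (profile) series
    n = len(profile)
    scales, flucts = [], []
    scale = min_scale
    while scale <= n // 4:
        f2 = []
        for start in range(0, n - scale + 1, scale):
            seg = profile[start:start + scale]
            t = np.arange(scale)
            trend = np.polyval(np.polyfit(t, seg, 1), t)   # local linear trend
            f2.append(np.mean((seg - trend) ** 2))         # squared fluctuation
        scales.append(scale)
        flucts.append(np.sqrt(np.mean(f2)))
        scale *= 2
    H, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return H
```

Applied in the same 512-observation rolling window, this yields the DFA-based Hurst series to which the dynamic Spearman correlation is then applied.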
The correlation of changes in the dynamics of market efficiency strengthens with the accumulation of uncertainty related to economic shocks.

#### **6. Discussion**

Generally, the results show that in times of unexpected events, the correlation between emerging stock and Bitcoin markets' efficiency is the strongest, although the strength of this association can be considered moderate. However, only in some subperiods is there a negative sign of the correlation, which may indicate Bitcoin's potential to be a safe haven in the context of market efficiency. This is in line with the view presented by others (Wang et al., 2021; Maganini, Diniz, & Rasheed, 2021; Mensi et al., 2022). Besides, this paper confirms the findings of previous studies, which found separately that market efficiency in the emerging markets and in Bitcoin is time-varying and characterised by a long-memory process most of the time, for example Bariviera (2017) and Hull and McGroarty (2014). Furthermore, our results indicate that the market efficiency dynamics of Bitcoin and emerging stock markets are different. In particular, the findings obtained by detrended fluctuation analysis show at most a moderate association between the dynamics of Bitcoin market efficiency and emerging markets' efficiency. This confirms the results reached by Plastun et al. (2019) for Russia and Ukraine. Thus, future studies should treat Bitcoin as a specific investment rather than as an emerging market from the perspective of the dynamics of market efficiency. This is contrary to the nomenclature presented by others (Alvarez-Ramirez et al., 2018; Kumar & Zargar, 2019). The findings suggest that the main events related to economic policy uncertainty may affect the dynamics of market efficiency of Bitcoin and emerging stock markets. These economic shocks mainly concern the largest Bitcoin mining countries and their major trading partners, as well as global economic threats such as Covid-19. Thus, investors should track the economic policy uncertainty of the largest Bitcoin mining 'geographic territory'. Furthermore, the different reactions of market efficiency in these markets to some economic shocks imply the potential to benefit from a diversification strategy using Bitcoin and the emerging markets in one investment portfolio during economic turmoil. Thus, the results may have an impact on a more efficient allocation of capital. Besides, our findings indicate that regulators of the Bitcoin market and the emerging markets should be cautious about the impact of their economic policy transparency on the reaction of these markets' investors. The research has shed light on the dependence between the dynamics of market efficiency of Bitcoin and emerging markets in the context of high economic policy uncertainty in major Bitcoin mining countries. Future studies should deepen this issue. Furthermore, because Bitcoin is more inefficient than emerging markets most of the time, its dynamics may be more dependent on behavioural factors (factors, such as investor emotions, that make investment decisions more difficult). This could be verified by future studies. Our results suggest that the largest emerging countries with a meaningful share in Bitcoin mining could play an essential role in this market during economic shocks. Therefore, a study of the dynamic relationship between Bitcoin market efficiency and emerging stock markets' efficiency across countries is needed.
Finally, the results showing a negative correlation hold only for some identified economic shocks, so it is still uncertain whether Bitcoin can be treated as a safe haven from a market efficiency perspective.

#### **7. Conclusion**

This paper attempts to compare the dynamics of market efficiency between Bitcoin and the emerging stock markets in the years 2011–2022. It clarifies whether Bitcoin can be treated as an emerging market or whether its market efficiency is more resistant to economic shocks than emerging markets' efficiency. To this end, the Hurst exponent is exploited as a measure of market efficiency. The Hurst exponent is calculated using rescaled range analysis. Besides, the sliding window is applied to show the dynamics of market efficiency. Finally, the rolling window correlation is utilised to show how the association between the studied variables varies over time. The contribution of this article to the literature is at least threefold. Firstly, it concerns drivers of Bitcoin market efficiency. So far, there has been a lack of knowledge of whether the dynamics of Bitcoin market efficiency are linked to economic policy uncertainty. Our results provide new insights into this issue, suggesting that future studies should focus on the dependence between the dynamics of Bitcoin market efficiency and economic policy uncertainty in the largest Bitcoin mining countries. Secondly, this paper adds to the literature by verifying the 'emerging' nature of cryptocurrency market efficiency. The findings report that the dynamics of Bitcoin and emerging stock markets' efficiency are different. Thirdly, this research presents the different relative reactions of the Bitcoin and emerging stock markets' efficiency to an economic shock (e.g. the pandemic). This study has several limitations. Firstly, Bitcoin's weekend prices are excluded from our sample, because the stock market exchanges are closed at the weekend. Secondly, it is difficult to precisely identify concrete shocks in economic policy uncertainty when interpreting the values of the correlation, because the length of the 'overlapping' sliding windows (for the correlation and Hurst exponent calculation) covers from 2.5 to 3 years. Another limitation is the utilisation of the aggregate index of the largest emerging stock markets. In this case, the impact of economic policy uncertainty on a particular stock market may be different. Probably, the shocks would be more apparent in the smallest emerging markets, because of lower policy stability compared to the largest economies.

#### **References**

Aggarwal, D. (2019). Do Bitcoins follow a random walk model? *Research in Economics*, 73, 15–22. https://doi.org/10.1016/j.rie.2019.01.002

Alvarez-Ramirez, J., Rodriguez, E., & Ibarra-Valdez, C. (2018). Long-range correlations and asymmetry in the Bitcoin market. *Physica A: Statistical Mechanics and Its Applications*, 492, 948–955. https://doi.org/10.1016/j.physa.2017.11.025

Al-Yahyaee, K. H., Mensi, W., & Yoon, S. (2018). Efficiency, multifractality, and the long-memory property of the Bitcoin market: A comparative analysis with stock, currency, and gold markets. *Finance Research Letters*, 27, 228–234. https://doi.org/10.1016/j.frl.2018.03.017

Aslan, A., & Sensoy, A. (2020). Intraday efficiency-frequency nexus in the cryptocurrency markets. *Finance Research Letters*, 35(C). https://doi.org/10.1016/j.frl.2019.09.013
Assaf, A., Bhandari, A., Charif, H., & Demir, E. (2022). Multivariate long memory structure in the cryptocurrency market: The impact of COVID-19. *International Review of Financial Analysis*, 82(C). https://doi.org/10.1016/j.irfa.2022.102132

Bariviera, A. F. (2017). The inefficiency of Bitcoin revisited: A dynamic approach. *Economics Letters*, 161, 1–4. https://doi.org/10.1016/j.econlet.2017.09.013

Bariviera, A. F., Basgall, M. J., Hasperué, W., & Naiouf, M. (2017). Some stylized facts of the Bitcoin market. *Physica A: Statistical Mechanics and its Applications*, 484(C), 82–90. https://doi.org/10.1016/j.physa.2017.04.159

Baur, D., & McDermott, T. (2010). Is gold a safe haven? International evidence. *Journal of Banking & Finance*, 34(8), 1886–1898. https://doi.org/10.1016/j.jbankfin.2009.12.008

Borowski, K., & Matusewicz, M. (2019). The day-of-the-week effect on the example of 82 cryptocurrencies. *Przedsiębiorstwo i Finanse*, 3(26), 31–50. Retrieved from http://cejsh.icm.edu.pl/cejsh/element/bwmeta1.element.ojs-issn-2084-1361-year-2019-issue-3-article-df281222-63ea-385f-a265-72dc5dc83783

Borowski, K., & Matusewicz, M. (2020). Calculating Hurst Exponent with the Use of the Siroky Method in Developed and Emerging Markets. *Finanse i Prawo Finansowe*, 3(27), 25–61. https://doi.org/10.18778/2391-6478.3.27.02

Bouri, E., Molnár, P., Azzi, G., Roubaud, D., & Hagfors, L. I. (2017). On the hedge and safe haven properties of Bitcoin: Is it really more than a diversifier? *Finance Research Letters*, 20(C), 192–198. https://doi.org/10.1016/j.frl.2016.09.025

Bouri, E., Gil-Alana, L. A., Gupta, R., & Roubaud, D. (2019). Modelling long memory volatility in the Bitcoin market: Evidence of persistence and structural breaks. *International Journal of Finance & Economics*, 24, 412–426. https://doi.org/10.1002/ijfe.1670

Bouri, E., Shahzad, S. J. H., Roubaud, D., Kristoufek, L., & Lucey, B. (2020). Bitcoin, gold, and commodities as safe havens for stocks: New insight through wavelet analysis. *The Quarterly Review of Economics and Finance*, 77, 156–164. https://doi.org/10.1016/j.qref.2020.03.004

Brauneis, A., & Mestel, R. (2018). Price discovery of cryptocurrencies: Bitcoin and beyond. *Economics Letters*, 165, 58–61. https://doi.org/10.1016/j.econlet.2018.02.001

Cajueiro, D., & Tabak, B. (2004). The Hurst exponent over time: testing the assertion that emerging markets are becoming more efficient. *Physica A: Statistical Mechanics and its Applications*, 336(3), 521–537. https://doi.org/10.1016/j.physa.2003.12.031

Caporale, G. M., Gil-Alana, L., & Plastun, A. (2018). Persistence in the cryptocurrency market. *Research in International Business and Finance*, 46(C), 141–148. https://doi.org/10.1016/j.ribaf.2018.01.002

Carrick, J. (2016). Bitcoin as a Complement to Emerging Market Currencies. *Emerging Markets Finance and Trade*, 52(10), 2321–2334. https://doi.org/10.1080/1540496X.2016.1193002

Chowdhury, M. A. F., Abdullah, M., Alam, M., Abedin, M. Z., & Shi, B. (2023). NFTs, DeFi, and other assets efficiency and volatility dynamics: An asymmetric multifractality analysis. *International Review of Financial Analysis*, 87(C). https://doi.org/10.1016/j.irfa.2023.102642
Czarnecki, Ł., Grech, D., & Pamuła, G. (2008). Comparison study of global and local approaches describing critical phenomena on the Polish stock exchange market. *Physica A: Statistical Mechanics and Its Applications*, 387, 6801–6811. https://doi.org/10.1016/j.physa.2008.08.019

Czekaj, J., Woś, M., & Żarnowski, J. (2001). *Efektywność giełdowego rynku akcji w Polsce. Z perspektywy dziesięciolecia*. Warszawa: Wydawnictwo Naukowe PWN.

Diniz-Maganini, N., Diniz, E. H., & Rasheed, A. A. (2021). Bitcoin's price efficiency and safe haven properties during the COVID-19 pandemic: A comparison. *Research in International Business and Finance*, 58, 101472. https://doi.org/10.1016/j.ribaf.2021.101472

Fama, E. F. (1970). Efficient Capital Markets: A Review of Theory and Empirical Work. *The Journal of Finance*, 25(2), 383–417. https://doi.org/10.2307/2325486

Fernandes, L. H. S., Bouri, E., Silva, J. W. L., Bejan, L., & de Araujo, F. H. A. (2022). The resilience of cryptocurrency market efficiency to COVID-19 shock. *Physica A: Statistical Mechanics and its Applications*, 607. https://doi.org/10.1016/j.physa.2022.128218

Hileman, G., & Rauchs, M. (2017). Global cryptocurrency benchmarking study. *Cambridge Centre for Alternative Finance*. Retrieved from https://www.jbs.cam.ac.uk/wp-content/uploads/2020/08/2017-04-20-global-cryptocurrency-benchmarking-study.pdf

Hkiri, B., Bejaoui, A., Gharib, C., & Al Nemer, H. A. (2021). Revisiting efficiency in MENA stock markets during political shocks: evidence from a multi-step approach. *Heliyon*, 7(9). https://doi.org/10.1016/j.heliyon.2021.e08028

Hull, M., & McGroarty, F. (2014). Do emerging markets become more efficient as they develop? Long memory persistence in equity indices. *Emerging Markets Review*, 18(C), 45–61. https://doi.org/10.1016/j.ememar.2013.11.001

Jiang, Y., Nie, H., & Ruan, W. (2018). Time-Varying Long-Term Memory in Bitcoin Market. *Finance Research Letters*, 25, 280–284. https://doi.org/10.1016/j.frl.2017.12.009

Kahneman, D., & Tversky, A. (1979). Prospect Theory: An Analysis of Decision under Risk. *Econometrica*, 47(2), 263–291. https://doi.org/10.2307/1914185

Khuntia, S., & Pattanayak, J. (2018). Adaptive market hypothesis and evolving predictability of Bitcoin. *Economics Letters*, 167, 26–28. https://doi.org/10.1016/j.econlet.2018.03.005

Khuntia, S., & Pattanayak, J. (2020). Adaptive Long Memory in Volatility of Intra-day Bitcoin Returns and the Impact of Trading Volume. *Finance Research Letters*, 32, 101077. https://doi.org/10.1016/j.frl.2018.12.025

Kosc, K., Sakowski, P., & Ślepaczuk, R. (2019). Momentum and contrarian effects on the cryptocurrency market. *Physica A: Statistical Mechanics and its Applications*, 523, 691–701. https://doi.org/10.1016/j.physa.2019.02.057

Kristoufek, L. (2010). Rescaled Range Analysis and Detrended Fluctuation Analysis: Finite Sample Properties and Confidence Intervals. *Czech Economic Review*, 4(3), 315–329.
Retrieved from https://www.researchgate.net/profile/LadislavKristoufek/publication/227360892_Rescaled_Range_Analysis_and_Detrended_Fluctuation_Analysis_Finite_Sample_Properties_and_Confidence_Intervals/links/0fcfd50ddb6e3bdcf5000000/Rescaled-Range-Analysis-and-Detrended-Fluctuation-Analysis-Finite-Sample-Properties-and-Confidence-Intervals.pdf

Kristoufek, L. (2018). On the Bitcoin market inefficiency and its evolution. *Physica A: Statistical Mechanics and its Applications*, 503, 257–262. https://doi.org/10.1016/j.physa.2018.02.161

Kumar, D., & Zargar, F. N. (2019). Informational inefficiency of Bitcoin: A study based on high-frequency data. *Research in International Business and Finance*, 47, 344–353. https://doi.org/10.1016/j.ribaf.2018.08.008

Köchling, G., Müller, J., & Posch, P. N. (2019). Price delay and market frictions in cryptocurrency markets. *Economics Letters*, 174(C), 39–41. https://doi.org/10.1016/j.econlet.2018.10.025

Lim, K. P., Brooks, R. D., & Kim, J. H. (2008). Financial crisis and stock market efficiency: Empirical evidence from Asian countries. *International Review of Financial Analysis*, 17(3), 571–591. https://doi.org/10.1016/j.irfa.2007.03.001

Lo, A. W. (2004). The Adaptive Markets Hypothesis: Market Efficiency from an Evolutionary Perspective. *The Journal of Portfolio Management*, 30(5), 15–29. Retrieved from https://www.researchgate.net/publication/228183756_The_Adaptive_Markets_Hypothesis_Market_Efficiency_from_an_Evolutionary_Perspective

Mensi, W., Sensoy, A., Vo, X. V., & Kang, S. H. (2022). Pricing efficiency and asymmetric multifractality of major asset classes before and during COVID-19 crisis. *The North American Journal of Economics and Finance*, 62(C). https://doi.org/10.1016/j.najef.2022.101773

Mizerka, J., Stróżyńska-Szajek, A., & Mizerka, P. (2020). The role of Bitcoin on developed and emerging markets – on the basis of a Bitcoin users graph analysis. *Finance Research Letters*, 35. https://doi.org/10.1016/j.frl.2020.101489

Mnif, E., Mouakhar, K., & Jarboui, A. (2023). Energy-conserving cryptocurrency response during the COVID-19 pandemic and amid the Russia–Ukraine conflict. *Journal of Risk Finance*, 24(2), 169–185. https://doi.org/10.1108/JRF-06-2022-0161

Noda, A. (2021). On the evolution of cryptocurrency market efficiency. *Applied Economics Letters*, 28(6), 433–439. https://doi.org/10.1080/13504851.2020.1758617

Phiri, A. (2022). Can wavelets produce a clearer picture of weak-form market efficiency in Bitcoin? *Eurasian Economic Review*, 12(3), 373–386. https://doi.org/10.1007/s40822-022-00214-8

Plastun, A., Kozmenko, S., Plastun, V., & Filatova, H. (2019). Market anomalies and data persistence: The case of the day-of-the-week effect. *Journal of International Studies*, 12(3), 122–130. https://doi.org/10.14254/2071-8330.2019/12-3/10

Polanco-Martínez, J. M. (2019). Dynamic relationship analysis between NAFTA stock markets using nonlinear, nonparametric, non-stationary methods. *Nonlinear Dynamics*, 97, 369–389. https://doi.org/10.1007/s11071-019-04974-y

Polasik, M., Piotrowska, A. I., Wisniewski, T. P., Kotkowski, R., & Lightfoot, G. (2015). Price Fluctuations and the Use of Bitcoin: An Empirical Inquiry. *International Journal of Electronic Commerce*, 20(1), 9–49. https://doi.org/10.1080/10864415.2016.1061413

Rufino, C. C. (2023). On the Volatility and Market Inefficiency of Bitcoin During the COVID-19 Pandemic. *DLSU Business & Economics Review*, 32(2), 23–32.
Retrieved from https://www.dlsu.edu.ph/wp-content/uploads/2023/04/2rufino-040323.pdf

Shahzad, S. J. H., Bouri, E., Roubaud, D., Kristoufek, L., & Lucey, B. (2019). Is Bitcoin a better safe-haven investment than gold and commodities? *International Review of Financial Analysis*, 63, 322–330. https://doi.org/10.1016/j.irfa.2019.01.002

Statista. (2022, January 12). *Bitcoin mining by country*. Retrieved from https://www.statista.com/statistics/1200477/bitcoin-mining-by-country/

Sukpitak, J., & Hengpunya, V. (2016). Efficiency of Thai stock markets: Detrended fluctuation analysis. *Physica A: Statistical Mechanics and its Applications*, 458(C), 204–209. https://doi.org/10.1016/j.physa.2016.03.076

Takaishi, T., & Adachi, T. (2020). Market Efficiency, Liquidity, and Multifractality of Bitcoin: A Dynamic Study. *Asia-Pacific Financial Markets*, 27, 145–154. https://doi.org/10.1007/s10690-019-09286-0

Tovanich, N., Soulié, N., & Isenberg, P. (2021, April). *Visual analytics of bitcoin mining pool evolution: on the road toward stability?* 3rd International Workshop on Blockchains and Smart Contracts, held in conjunction with the 11th IFIP International Conference on New Technologies, Mobility and Security, Paris, France, 1–5. https://doi.org/10.1109/NTMS49979.2021.9432675

Tran, V. L., & Leirvik, T. (2020). Efficiency in the markets of crypto-currencies. *Finance Research Letters*, 35. https://doi.org/10.1016/j.frl.2019.101382

Urquhart, A. (2016). The inefficiency of Bitcoin. *Economics Letters*, 148, 80–82. https://doi.org/10.1016/j.econlet.2016.09.019

Wang, J., & Wang, X. (2021). COVID-19 and financial market efficiency: Evidence from an entropy-based analysis. *Finance Research Letters*, 42(C). https://doi.org/10.1016/j.frl.2020.101888

Wei, W. C. (2018). Liquidity and market efficiency in cryptocurrencies. *Economics Letters*, 168, 21–24. https://doi.org/10.1016/j.econlet.2018.04.003
{ "disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.2478/ceej-2023-0013?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.2478/ceej-2023-0013, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "CCBY", "status": "HYBRID", "url": "https://sciendo.com/pdf/10.2478/ceej-2023-0013" }
2,023
[ "JournalArticle" ]
true
2023-01-01T00:00:00
[ { "paperId": "b016c9aa7731ee26cf581cfd6aaa1ac6e9a42f30", "title": "NFTs, DeFi, and other assets efficiency and volatility dynamics: An asymmetric Multifractality analysis" }, { "paperId": "3f789cecc80f6d377bf8c849602ea45bb98ef3f3", "title": "The resilience of cryptocurrency market efficiency to COVID-19 shock" }, { "paperId": "f6053e9032dafa604440b955e338d046decb2d85", "title": "Can wavelets produce a clearer picture of weak-form market efficiency in Bitcoin?" }, { "paperId": "9895af7968af5a01dd2f02750a4bb27495483856", "title": "Pricing efficiency and asymmetric multifractality of major asset classes before and during COVID-19 crisis" }, { "paperId": "9e87f3cd6b92cff97ee9b118902f230567e62308", "title": "Multivariate long memory structure in the cryptocurrency markets: The impact of COVID-19" }, { "paperId": "b9090fafd634482b7d9b78d922ce6ad3881a6887", "title": "Revisiting efficiency in MENA stock markets during political shocks: evidence from a multi-step approach" }, { "paperId": "63f68191f7b0249915c047dac6992f6bcff9cea8", "title": "Bitcoin’s price efficiency and safe haven properties during the COVID-19 pandemic: A comparison" }, { "paperId": "c3ea681c2d760f8dfa46b2602e9081cdbb53f5e0", "title": "COVID-19 and financial market efficiency: Evidence from an entropy-based analysis" }, { "paperId": "416f1b4a9615ae04b74a34327a867236049d4b16", "title": "Calculating Hurst Exponent with the Use of the Siroky Method in Developed and Emerging Markets" }, { "paperId": "ebb567a45ecff5f4f13de1197a157cee840a2c7a", "title": "Bitcoin, gold, and commodities as safe havens for stocks: New insight through wavelet analysis" }, { "paperId": "e549e886ab9c666acd3f0a6f996307168268d8de", "title": "The role of Bitcoin on developed and emerging markets – on the basis of a Bitcoin users graph analysis" }, { "paperId": "22379d0695ab678330424360dea3711ea3a799d1", "title": "Intraday efficiency-frequency nexus in the cryptocurrency markets" }, { "paperId": "003451d154fa358df6809d1c0e228ffbf3212476", "title": "Efficiency in the Markets of Crypto-Currencies" }, { "paperId": "d18e968c64d66e5908d32dc1b9ea723d9275c418", "title": "Market anomalies and data persistence: The case of the day-of-the-week effect" }, { "paperId": "58d9ebc0b7bffca6d28d78cba0007b64f7b28cbd", "title": "Dynamic relationship analysis between NAFTA stock markets using nonlinear, nonparametric, non-stationary methods" }, { "paperId": "9a149d283607e973cbc375a50e33528bcf42df42", "title": "Momentum and contrarian effects on the cryptocurrency market" }, { "paperId": "421f93e54b5fcf7e6c92966a7c2e82ab8d1fa4b0", "title": "Is Bitcoin a better safe-haven investment than gold and commodities?" }, { "paperId": "33d746db9f1ef2e80fa8a7c828fa09d75bb87a93", "title": "On the evolution of cryptocurrency market efficiency" }, { "paperId": "2ec77e703c24221bb74bf8317cd1214837658f15", "title": "Do bitcoins follow a random walk model?" 
}, { "paperId": "6c5efe78379aafa0843c3f28ba878189a91c5cfa", "title": "Market Efficiency, Liquidity, and Multifractality of Bitcoin: A Dynamic Study" }, { "paperId": "0c1a4fc61eb82a94a9c1568f7995d837355638f8", "title": "Informational inefficiency of Bitcoin: A study based on high-frequency data" }, { "paperId": "217895bb50976672fd9004ee9e3b22ce6f0e1e31", "title": "Efficiency, multifractality, and the long-memory property of the Bitcoin market: A comparative analysis with stock, currency, and gold markets" }, { "paperId": "9fc88fcfae6b3187c97cb0c332f5093500423245", "title": "Price Delay and Market Frictions in Cryptocurrency Markets" }, { "paperId": "6ea1864c4299ddee3189c2bdfeb924b283603402", "title": "Modelling long memory volatility in the Bitcoin market: Evidence of persistence and structural breaks" }, { "paperId": "b61d0259f2c02c945f2946679f0347e36285fc70", "title": "On Bitcoin markets (in)efficiency and its evolution" }, { "paperId": "408687691e3fc29950250da71d4be91697471704", "title": "Liquidity and market efficiency in cryptocurrencies" }, { "paperId": "0a7dc72b8e07e2622b47ecfedd9693ccfd0ba3dc", "title": "Adaptive market hypothesis and evolving predictability of bitcoin" }, { "paperId": "1d0ff715d8d15d48b7c577fef1bb6fd496b5667a", "title": "Price discovery of cryptocurrencies: Bitcoin and beyond" }, { "paperId": "747785ad0cf34f4f8abe99438b47474afa8a49f2", "title": "Long-range correlations and asymmetry in the Bitcoin market" }, { "paperId": "569fd1797fe6e29ac0876d1e4d4e08ef8bf9b0c9", "title": "Time-varying long-term memory in Bitcoin market" }, { "paperId": "3dae2f3c7ca5e9c60aa41df4e6dfba35f9586d83", "title": "Persistence in the Cryptocurrency Market" }, { "paperId": "719b219f9b5e2de92b2014b11289fd9d388fb046", "title": "The Inefficiency of Bitcoin Revisited: A Dynamic Approach" }, { "paperId": "572271b7f4d35459aaa4d8d84c3b26c9f2380765", "title": "Some Stylized Facts of the Bitcoin Market" }, { "paperId": "26d69459477acc088945ceeb331e6fb6bd8af07a", "title": "2017 Global Cryptocurrency Benchmarking Study" }, { "paperId": "becc76a130cd37d9441ff38ff9dc60c5218bff38", "title": "On the Hedge and Safe Haven Properties of Bitcoin: Is it Really More than a Diversifier?" }, { "paperId": "f69c079991496fdfcda933ba9744eacc6007b926", "title": "Efficiency of Thai stock markets: Detrended fluctuation analysis" }, { "paperId": "99d1e84f089518c9f74a9b4487a95caa794fde9c", "title": "The Inefficiency of Bitcoin" }, { "paperId": "685a0a1a54cb732c466cbae58ea174211297ac04", "title": "Price Fluctuations and the Use of Bitcoin: An Empirical Inquiry" }, { "paperId": "d561eb0a7d8858139c0ed390b35493d929bd946a", "title": "Do emerging markets become more efficient as they develop? 
Long memory persistence in equity indices" }, { "paperId": "2c33761ce13d10605ce507ec30861e770c60f840", "title": "Comparison study of global and local approaches describing critical phenomena on the Polish stock exchange market" }, { "paperId": "b376a646c744af2f340a3ee809e54987151c041a", "title": "Financial crisis and stock market efficiency: Empirical evidence from Asian countries" }, { "paperId": "eebc4f8e7b5437bd12cf80bb28be2c6a23247b5b", "title": "The Hurst exponent over time: testing the assertion that emerging markets are becoming more efficient" }, { "paperId": "26b8dfda9be1da51ffe0830381773bb6e68455ad", "title": "Efficient Capital Markets: A Review of Theory and Empirical Work" }, { "paperId": "d38668a1eba0be85c75f9ea0c3cd1c892bf3c422", "title": "Adaptive long memory in volatility of intra-day bitcoin returns and the impact of trading volume" }, { "paperId": null, "title": "Ambiguity Aversion and Comparative Ignorance" }, { "paperId": "51c36fc0431bd2c99b6e36792a8da8e58a2e0137", "title": "Institute for International Integration Studies Is Gold a Safe Haven? International Evidence Is Gold a Safe Haven? International Evidence Is Gold a Safe Haven? International Evidence" } ]
18,678
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Physics", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Physics", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/0223db3a9042664f4529dc235ad9f7a662dc15af
[ "Computer Science", "Physics" ]
0.898775
Discussion of Quantum Consensus Algorithms
0223db3a9042664f4529dc235ad9f7a662dc15af
arXiv.org
[ { "authorId": "2108025805", "name": "Lifu Zhang" }, { "authorId": "2051458279", "name": "Samuel Fulton" } ]
{ "alternate_issns": null, "alternate_names": [ "ArXiv" ], "alternate_urls": null, "id": "1901e811-ee72-4b20-8f7e-de08cd395a10", "issn": "2331-8422", "name": "arXiv.org", "type": null, "url": "https://arxiv.org" }
Leader election is a crucial process in many areas such as cloud computing, distributed systems, task orchestration, and blockchain. Oftentimes, in a distributed system, the network needs to choose a leader, which would be responsible for synchronization between different processors, data storage, information distribution, and more.
Boston University Department of Physics

## Discussion of Quantum Consensus Algorithms

#### Samuel Fulton, Lifu Zhang

**Abstract.** Leader election is a crucial process in many areas such as cloud computing, distributed systems, task orchestration, and blockchain. Oftentimes, in a distributed system, the network needs to choose a leader, which would be responsible for synchronization between different processors, data storage, information distribution, and more. In the case where the network is anonymous, no classical algorithm can solve the problem exactly. However, in the setting of quantum computers, this problem is readily solved. In this paper, we analyze the quantum consensus algorithm developed by Seiichiro Tani. We look at the inner workings of the algorithm and develop a circuit representation of the key steps. We review Mochon's fault-tolerant leader election algorithm. We then implement a simple leader election algorithm on a quantum computer.

### Introduction

In this paper, we will introduce distributed systems and consensus algorithms. We then zoom in and look at a specific class of distributed systems known as anonymous or symmetric distributed systems. From there, we introduce the classical approach to breaking symmetry and electing a leader. We show that there is no classical deterministic algorithm for leader election among an anonymous distributed system. We then introduce two quantum algorithms developed by Seiichiro Tani in [Tani et al. (2012)] for anonymous leader election. The quantum algorithms for anonymous leader election are more than computational speedups; they are deterministic solutions to a classically non-deterministic problem. We implement the second quantum leader election algorithm in Qiskit. Lastly, we analyze work by Mochon and Kitaev, [Mochon (2007)], on developing fault-tolerant leader election algorithms.

### Distributed Systems

We will start with a brief definition of consensus and distributed systems. We define a distributed system to be a collection of communicating processors all working towards a common goal. Let's consider a toy example. Imagine a distributed system of computers attempting to factor a large number $N$. Once one processor "believes" that it has factored the number, it proposes its prime factors, $P_i$, of $N$. The other processors in the distributed system check whether $\prod_i P_i = N$. If the majority of the processors agree that $\prod_i P_i = N$, then the $P_i$ are accepted as the factors of $N$. All of the processors can now move on to factoring a different number $M$, all the while remembering that the prime factors of $N$ have been decided. Other examples where consensus algorithms come into play include leader election, blockchain, load balancing, clock synchronization, and more. Three things characterize a distributed system: agreement, validity, and termination. As the name suggests, agreement means that all non-faulty processors must agree on the same value. In the case of factoring $N$, all non-faulty processors must either agree that the $P_i$ are the factors of $N$ or all agree that they are not. Validity is the assertion that under non-Byzantine conditions, the distributed system will never return an incorrect result. In the example of the processors attempting to factor $N$, this would mean that, given sufficient time, all non-faulty processors would find the same prime factors of $N$. Termination is the assertion that, given enough time, the processors are guaranteed to complete the task.
What do we mean by non-faulty processors? There are two types of faulty processors. The first is a processor that has experienced a crash. This processor stops responding to other processors in the distributed system. This is a common occurrence. However, since distributed systems only need the majority of processors to work, crash failures are readily dealt with. The second type of faulty processor experiences Byzantine failure. Byzantine failure occurs when a processor malfunctions in a way such that it sends incorrect data to the distributed system. An example of a Byzantine processor is a hacked processor. For consensus problems, Byzantine failures are the worst scenario. Fault-tolerant consensus algorithms address crash and Byzantine failure. The most widely used consensus algorithms in distributed and cloud computing systems are Paxos and its variants such as Raft. These algorithms are used for leader election and typically tolerate non-Byzantine failures.

### Quantum Consensus

Mazzarella categorizes quantum consensus into four classes in [Mazzarella et al. (2015)]: $\sigma$-expectation consensus, reduced-state consensus, symmetric-state consensus, and single-$\sigma$-measurement consensus. Consider a quantum network consisting of three qubits, and three observables of the form

$$\sigma^{(1)} = \sigma^z \otimes I \otimes I, \quad \sigma^{(2)} = I \otimes \sigma^z \otimes I, \quad \sigma^{(3)} = I \otimes I \otimes \sigma^z.$$

The system is in consensus concerning the expectation of $\sigma^z$ if

$$\mathrm{Tr}(\rho\,\sigma^{(1)}) = \mathrm{Tr}(\rho\,\sigma^{(2)}) = \mathrm{Tr}(\rho\,\sigma^{(3)}).$$

Note that quantum consensus is achieved by a quantum network rather than traditional computational resources. They are similar counterparts, though we must take account of probabilistic outcomes due to the stochastic nature of quantum mechanics. We will show how quantum entanglement can offer an advantage in terms of reaching an agreement in a distributed setting.

### Leader Election

As described in [Bro], "leader election is the simple idea of giving one thing (a process, host, thread, object, or human) in a distributed system some special powers. Those special powers could include the ability to assign work, the ability to modify a piece of data, or even the responsibility of handling all requests in the system." Leader election is extremely useful for improving efficiency. A leader can often bypass consensus algorithms and simply inform the system about changes that will be made. Leaders can help with consistency because they can see all of the changes that have been made to the system. By acting as a central data cache, a leader can improve consistency across the entire system.

Figure 1: An anonymous distributed system of identical processors.

A single leader does introduce some drawbacks. Namely, a single leader is a critical point of failure. If the leader crashes, the entire distributed system may halt. Furthermore, if a single leader experiences Byzantine failure, the entire system may waste time following incorrect protocols. However, many of these drawbacks are mitigated through the use of consensus algorithms. Oftentimes, the improved efficiency of leader election outweighs any drawbacks. In the next section, we will explore how leaders can be fairly elected.

### Leader Election Algorithm I

One interesting consensus problem is anonymous leader election. Anonymous leader election is used in the case where we have a collection of identical processors and wish to designate a leader. Figure 1 depicts an anonymous distributed system. In the system, processors do not have unique identification and all run the same protocol. Thus, the system is symmetric under all permutations. The symmetry of the system prevents non-probabilistic leader election. The classical approach outlined by Seiichiro Tani in [Tani et al. (2005)] is to install a coin flip in each processor. Each processor flips a coin; if heads, it is eligible for leader election. If the coin is tails, it is a follower. If multiple processors get heads, then the protocol is repeated with the eligible candidates. Note that this process is non-deterministic and has an expected run time of $O(\log(n))$, where $n$ is the number of processors. Let's compare this to a simplistic quantum leader election algorithm, which was proposed in [Tani et al. (2012)]. We will start with a two-processor scenario. For this algorithm, we have two processors $A$ and $B$. We generate the quantum state

$$W = \frac{1}{\sqrt{2}}(|01\rangle + |10\rangle).$$
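This two-processor state can be simulated directly. The following is a minimal Qiskit sketch (our own illustrative code, not the paper's implementation): it prepares $(|01\rangle + |10\rangle)/\sqrt{2}$ and samples it, so exactly one of the two parties reads $|1\rangle$ on every shot.

```python
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

# Prepare (|01> + |10>)/sqrt(2); qubit i would be sent to processor i
qc = QuantumCircuit(2)
qc.h(0)          # (|0> + |1>)/sqrt(2) on qubit 0
qc.x(1)          # flip qubit 1 to |1>
qc.cx(0, 1)      # |+>|1>  ->  (|01> + |10>)/sqrt(2)

counts = Statevector.from_instruction(qc).sample_counts(shots=1024)
print(counts)    # only '01' and '10' occur: exactly one leader every run
```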
### Leader Election

As described in [Bro], "leader election is the simple idea of giving one thing (a process, host, thread, object, or human) in a distributed system some special powers. Those special powers could include the ability to assign work, the ability to modify a piece of data, or even the responsibility of handling all requests in the system." Leader election is extremely useful for improving efficiency. A leader can often bypass consensus algorithms and simply inform the system about changes that will be made. Leaders can help with consistency because they can see all of the changes that have been made to the system. By acting as a central data cache, a leader can improve consistency across the entire system.

Figure 1: An anonymous distributed system of identical processors.

A single leader does introduce some drawbacks. Namely, a single leader is a critical point of failure. If the leader crashes, the entire distributed system may halt. Furthermore, if a single leader experiences Byzantine failure, the entire system may waste time following incorrect protocols. However, many of these drawbacks are mitigated through the use of consensus algorithms. Oftentimes, the improved efficiency of leader election outweighs any drawbacks. In the next section, we will explore how leaders can be fairly elected.

### Leader Election Algorithm I

One interesting consensus problem is anonymous leader election. Anonymous leader election is used in the case where we have a collection of identical processors and wish to designate a leader. Figure 1 depicts an anonymous distributed system. In the system, processors do not have unique identification and all run the same protocol. Thus, the system is symmetric under all permutations. The symmetry of the system prevents non-probabilistic leader election.

The classical approach outlined by Seiichiro Tani in [Tani et al. (2005)] is to install a coin flip in each processor. Each processor flips a coin; if heads, it is eligible for leader election. If the coin is tails, it is a follower. If multiple processors get heads, then the protocol is repeated with the eligible candidates. Note that this process is non-deterministic and has an expected run time of O(log(n)), where n is the number of processors.

Let's compare this to a simplistic quantum leader election algorithm, which was proposed in [Tani et al. (2012)]. We will start with a two-processor scenario. For this algorithm, we have two processors A and B. We generate the quantum state

W = (1/√2)(|01⟩ + |10⟩).

We send the first qubit to processor A and the second qubit to processor B. If processor A measures |1⟩, we know processor B must measure |0⟩, and vice versa. Whichever processor measures |1⟩ becomes the leader. For n processors we generate the state

W_n = (1/√n)(|10...0⟩ + ... + |0...01⟩).

Each processor receives a qubit. Whichever processor measures |1⟩ is the leader. This algorithm terminates after one run, which is in stark contrast to the non-deterministic classical leader election.
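A minimal Qiskit sketch of the two-processor case is shown below. The gate sequence used to prepare W is one standard choice (an assumption on our part, since the paper does not prescribe a preparation circuit), and we use `Statevector` rather than a hardware backend so the example stays self-contained.

```python
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

# Prepare W = (|01> + |10>)/sqrt(2); qubit 0 goes to processor A,
# qubit 1 goes to processor B.
qc = QuantumCircuit(2)
qc.h(0)       # (|00> + |10>)/sqrt(2)
qc.cx(0, 1)   # (|00> + |11>)/sqrt(2)
qc.x(1)       # (|01> + |10>)/sqrt(2)

probs = Statevector.from_instruction(qc).probabilities_dict()
print(probs)  # {'01': 0.5, '10': 0.5}: exactly one processor measures |1>

# Sampling a single shot decides the election: the processor whose qubit
# reads 1 becomes the leader; the outcomes 00 and 11 never occur.
print(Statevector.from_instruction(qc).sample_counts(shots=1))
```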
### Algorithm II

There is a more robust approach to quantum leader election proposed in [Tani et al. (2012)]. Consider the case with processors A and B. Each processor prepares the state

|R₁⟩ = |R₂⟩ = (1/√2)(|0⟩ + |1⟩).

We send |R₁⟩ and |R₂⟩ through the following circuit, where X is the Pauli matrix X corresponding to a bit flip, and the remaining gates are controlled-NOT gates.

Figure 2: Two-processor consistency-checking circuit acting on |R₁⟩, |R₂⟩ and the ancilla qubits |S₁⟩, |S₂⟩ (layers of controls and X gates).

The output of the circuit is the state

|R₁R₂S₁S₂⟩ = (|00⟩ + |11⟩)|11⟩ + (|01⟩ + |10⟩)|00⟩.

In words, S₁ and S₂ are both |1⟩ if |R₁⟩ and |R₂⟩ are equal. Processor A has the state |R₁S₁⟩ and processor B has the state |R₂S₂⟩. We note that |S₁⟩ and |S₂⟩ are entangled. After each processor measures its S state, the system will collapse to either (|00⟩ + |11⟩)|11⟩ or (|01⟩ + |10⟩)|00⟩.

If the system collapses to the (|01⟩ + |10⟩)|00⟩ state, processor A measures its |R₁⟩ state and processor B measures its |R₂⟩ state. Whichever processor measures |1⟩ is the leader. If the system collapses into the state (|00⟩ + |11⟩)|11⟩, both processors apply the unitary operation

U = (1/√2) [[1, −i], [−i, 1]]

to their R states:

U_A ⊗ U_B (|00⟩ + |11⟩) = U_A|0⟩ ⊗ U_B|0⟩ + U_A|1⟩ ⊗ U_B|1⟩
= (1/2) [(|0⟩ − i|1⟩) ⊗ (|0⟩ − i|1⟩) + (−i|0⟩ + |1⟩) ⊗ (−i|0⟩ + |1⟩)]
= (1/2) [|00⟩ − i|01⟩ − i|10⟩ − |11⟩ − |00⟩ − i|01⟩ − i|10⟩ + |11⟩]
= −i (|01⟩ + |10⟩).

Just like in the first case, processor A measures its |R₁⟩ state, and processor B measures its |R₂⟩ state. Whichever processor measures |1⟩ is the leader.

This algorithm is readily extended to the case with n nodes. Before we start we need one definition. We say a string x = x₁x₂...xₙ is consistent if all symbols xᵢ are equal. Each processor starts by generating the state |Rᵢ⟩ = (1/√2)(|0⟩ + |1⟩). The state of the system is

⊗ᵢ |Rᵢ⟩ = (1/√(2ⁿ)) Σ_{i=0}^{2ⁿ−1} |i⟩.

Each processor stores the consistency of the system in the qubit |Sᵢ⟩. The circuit for the process is shown in Fig. 3.

Figure 3: n-processor consistency-checking circuit: each |Rᵢ⟩ controls two layers of gates (controls and X gates) that write the consistency of R₁...Rₙ into the ancilla qubits |S₁⟩,...,|Sₙ⟩.

The global system becomes

|R₁...RₙS₁...Sₙ⟩ = (|0⟩^⊗n + |1⟩^⊗n)|1⟩^⊗n + Σ_{i=1}^{2ⁿ−2} |i⟩|0⟩^⊗n.

Each processor now measures its |S⟩ state. The system collapses to either

(|0⟩^⊗n + |1⟩^⊗n)|1⟩^⊗n  or  Σ_{i=1}^{2ⁿ−2} |i⟩|0⟩^⊗n.

If the system collapses to Σ_{i=1}^{2ⁿ−2} |i⟩|0⟩^⊗n, then the string R₁...Rₙ is inconsistent. Since the system is inconsistent, at least one Rᵢ = |0⟩ and at least one Rⱼ = |1⟩. Any processor that measures its |Rᵢ⟩ = |1⟩ is a leader candidate. Any processor that measures its |Rᵢ⟩ = |0⟩ is a follower. There are now at most (n − 1) leader candidates, for which the process is repeated.

In the event that the system collapsed to the (|0⟩^⊗n + |1⟩^⊗n)|1⟩^⊗n state, we need to apply a unitary such that the symmetry is broken. If the number of eligible processors n is even, then we apply the unitary

U = (1/√2) [[1, e^{−iπ/n}], [−e^{iπ/n}, 1]]

to each |Rᵢ⟩, which we will show breaks the symmetry. For symmetry to be preserved, the system |R₁...Rₙ⟩ must be in the state |0⟩^⊗n or |1⟩^⊗n. After each processor applies U to |Rᵢ⟩, the amplitude of |0⟩^⊗n is proportional to 1ⁿ + (e^{−iπ/n})ⁿ = 1 + e^{−iπ} = 0, and the amplitude of |1⟩^⊗n is proportional to (−e^{iπ/n})ⁿ + 1ⁿ = e^{iπ} + 1 = 0 for even n. Hence

Prob(|0⟩^⊗n) = 0  and  Prob(|1⟩^⊗n) = 0,

so after applying U the probability of being in a symmetric state is zero. After applying U, if a processor measures its |Rᵢ⟩ to be |1⟩, it is a leader candidate. Since the system is in an asymmetric state, at least one processor will lose eligibility, and at least one processor will remain eligible.

If the number of eligible processors n is odd, we cannot simply apply U. Instead, for each processor we need an additional register |Tᵢ⟩ initialized to |0⟩. Set Tᵢ = Rᵢ ⊕ Tᵢ, then apply Vₙ to Rᵢ ⊗ Tᵢ. Here √(Rₙ + 1)·Vₙ is defined as a 4 × 4 matrix whose entries are built from 1/√2, √Rₙ, and e^{±iπ/n}, where Rₙ and Iₙ are the real and imaginary parts of e^{iπ/n}, respectively (see Tani et al. (2012) for the explicit entries). This matrix is well defined since 0 < |Rₙ| < 1. With some calculation, Vₙ is shown to be unitary. Similar to the case where n is even, for symmetry to be preserved the system must be in one of the following states: |00⟩^⊗n, |01⟩^⊗n, |10⟩^⊗n, |11⟩^⊗n. However, after each processor applies Vₙ, a direct calculation shows that

Prob(|00⟩^⊗n) = Prob(|01⟩^⊗n) = Prob(|10⟩^⊗n) = Prob(|11⟩^⊗n) = 0.

Thus, the symmetry is broken. Each processor now measures |RᵢTᵢ⟩, and the processors with the largest value of RᵢTᵢ are candidates for the next round. Again, since symmetry was broken, at least one processor will lose eligibility, and at least one processor will remain eligible.
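Both steps above can be sanity-checked in a few lines. The sketch below is our own reconstruction: the control/X-gate pattern of Fig. 3 is realized here with multi-controlled-X gates for n = 3 (one plausible decomposition; Tani et al. do not prescribe this exact gate set), and the second part numerically confirms that the even-n unitary U kills the symmetric amplitudes (shown for n = 4).

```python
import numpy as np
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

n = 3  # number of processors

# Consistency check: S_i is flipped to |1> when R_1...R_n are all |1>
# (first mcx) or all |0> (X-conjugated mcx), i.e. exactly when the
# R string is consistent.
qc = QuantumCircuit(2 * n)
r, s = list(range(n)), list(range(n, 2 * n))
qc.h(r)                      # each R_i = (|0> + |1>)/sqrt(2)
for i in s:
    qc.mcx(r, i)             # marks the all-ones branch
qc.x(r)
for i in s:
    qc.mcx(r, i)             # marks the all-zeros branch
qc.x(r)                      # restore the R register

probs = Statevector.from_instruction(qc).probabilities_dict()
# The S register reads 111 only on the two consistent R strings.
print({k: round(v, 3) for k, v in probs.items()})

# Symmetry breaking for even n: amplitudes of |0...0> and |1...1>
# vanish after U = (1/sqrt(2)) [[1, e^{-i pi/n}], [-e^{i pi/n}, 1]].
m = 4
U = np.array([[1, np.exp(-1j * np.pi / m)],
              [-np.exp(1j * np.pi / m), 1]]) / np.sqrt(2)
amp_all0 = (U[0, 0] ** m + U[0, 1] ** m) / np.sqrt(2)
amp_all1 = (U[1, 0] ** m + U[1, 1] ** m) / np.sqrt(2)
print(abs(amp_all0), abs(amp_all1))  # both ~0
```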
### Quantum Consensus and Non-Biased Leader Election

Up to this point, we have been assuming no faulty processors. Maor Ganz considers the case with a group of n processors who do not trust each other and want to elect a leader. In his paper, [Ganz (2017)], Ganz considers an algorithm that gives an honest processor at least a 1/n − ε probability of winning. Using classical consensus, this problem was shown to be impossible by Mochon in [Mochon (2007)]. However, using quantum consensus, Mochon showed that in certain cases one can formulate an algorithm with arbitrarily small ε. This algorithm is based on a series of quantum coin flips in tournament style. In other words, processors are paired and a single quantum coin flip is used to eliminate a processor from each pair. The main difficulty is in creating fault-tolerant coin flipping.

There are two types of biased coin flipping: strong and weak coin flipping. A strong coin-flipping protocol with bias ε is a protocol in which neither party is capable of forcing the probability of any given flip to be greater than 1/2 + ε. In weak coin flipping, both parties, Alice and Bob, have a predetermined desired coin outcome. For example, a 1 can be thought of as Alice winning and a 0 can be thought of as Bob winning. In weak coin flipping, neither player can shift the probability of the coin flip towards their desired outcome to a value greater than 1/2 + ε. In the classical scenario, weak and strong coin flipping are essentially equivalent. However, as Mochon states, "in the quantum world the two are very different," [Mochon (2007)]. For this paper, we will only be concerned with weak coin flipping.

Mochon's algorithm, or rather his proof of the existence of an algorithm for weak coin flipping with arbitrarily small bias, is a significant result in quantum algorithms. However, as stated by Ganz, "the result has not been peer-reviewed, its novel techniques (and in particular Kitaev's point game formalism) have not been applied anywhere else, and an explicit protocol is missing," [Ganz (2017)]. With that said, the basic setup for weak coin flipping is quite similar to the setup for Algorithm II. Figure 4 illustrates the process. Alice starts with state |ψ_{A,0}⟩ on space A and Bob starts with state |ψ_{B,0}⟩ on space B. On every odd round Alice applies a unitary U_{A,i} and projection E_{A,i} to space A ⊗ M, and on every even round Bob applies a unitary U_{B,i} and projection E_{B,i} to space M ⊗ B. The basic idea is that by applying specific unitaries and projections, an honest player can decrease any bias to arbitrarily small values.

Figure 4: retrieved from [Aharonov et al. (2014)]

Mochon's paper proving this result is 80 pages, and we do not have the time to go into detail. However, this is an impressive result in quantum information and demonstrates some of the beauty of the field.
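To see the tournament structure (though none of the quantum guarantees), here is a classical Python simulation in which the weak coin flip is replaced by an ordinary pseudo-random flip; the `bias` parameter is a hypothetical knob of our own standing in for a dishonest party's advantage ε, not part of any published protocol.

```python
import random

def weak_coin_flip(a, b, bias=0.0):
    # Stand-in for a two-party weak coin flip: returns the winner.
    # In Mochon's setting a cheater can shift this probability by at
    # most `bias`; here it is just a simulation parameter.
    return a if random.random() < 0.5 + bias else b

def tournament_election(processors):
    # Pair processors and eliminate one per pair until a single leader
    # remains, mirroring the tournament construction described above.
    round_ = list(processors)
    while len(round_) > 1:
        random.shuffle(round_)
        nxt = [weak_coin_flip(round_[i], round_[i + 1])
               for i in range(0, len(round_) - 1, 2)]
        if len(round_) % 2:          # the odd processor out advances
            nxt.append(round_[-1])
        round_ = nxt
    return round_[0]

print(tournament_election(range(8)))  # any of 0..7, each with probability ~1/8
```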
### Implementation

By using several existing quantum software packages, we were able to simulate Quantum Leader Election Algorithm II. We used the packages listed below.

- Qiskit is an open-source SDK for working with quantum computers at the level of pulses, circuits, and application modules.
- ProjectQ is an open-source software framework for quantum computing. It provides tools for implementing and running quantum algorithms using either classical hardware or an actual quantum computer.
- SimulaQron allows distributed simulation of the nodes in a quantum internet network.

The source code of our implementation for simulating the Quantum Leader Election Algorithm can be found at `https://github.com/lifuzhang1108/quantum-consensus`.

**Pseudocode:**

1. Prepare one-qubit quantum registers R₁,...,R₆, T₁,...,T₆ and S₁,...,S₆.
2. For each processor, do the following:
3. If status = "eligible," set |Rᵢ⟩ = (|0⟩ + |1⟩)/√2 and set |S⟩ = |0⟩.
4. Apply the circuit in Figure 3.
5. Measure |S⟩.
6. If |S⟩ = |0⟩, measure |Rᵢ⟩. If |Rᵢ⟩ = |1⟩, status = "eligible"; if |Rᵢ⟩ = |0⟩, status = "ineligible."
7. If |S⟩ = |1⟩ and there is an even number of eligible processors, apply the unitary U to |Rᵢ⟩ and measure |Rᵢ⟩. If |Rᵢ⟩ = |1⟩, status = "eligible"; if |Rᵢ⟩ = |0⟩, status = "ineligible."
8. If |S⟩ = |1⟩ and there is an odd number of eligible processors, initialize Tᵢ and apply the unitary Vₙ to |RᵢTᵢ⟩. Measure all |RᵢTᵢ⟩. If RᵢTᵢ = max(R₁T₁,...,RₙTₙ), status = "eligible"; if RᵢTᵢ < max(R₁T₁,...,RₙTₙ), status = "ineligible."
9. Output status.

### Summary

Quantum computing provides tools for achieving consensus in a distributed system. It was shown by Tani that the classically non-deterministic anonymous leader election problem can be solved deterministically using quantum computers. As a proof-of-concept demonstration, we implemented the quantum algorithms in Qiskit and simulated a quantum network using SimulaQron; the algorithm can successfully elect a single leader among anonymous parties. Mochon demonstrated that quantum consensus algorithms can be used to fairly elect a leader even under Byzantine conditions. While these algorithms have not been used in practice, they offer excellent insight into both information theory and quantum mechanics.

### References

Leader election in distributed systems. https://d1.awsstatic.com.

Aharonov, Dorit, André Chailloux, Maor Ganz, Iordanis Kerenidis, and Loïck Magnin. 2014. A simpler proof of existence of quantum weak coin flipping with arbitrarily small bias. _SIAM Journal on Computing_, 45.

Ganz, Maor. 2017. Quantum leader election. _Quantum Information Processing_, 16:1–17.

Mazzarella, Luca, Alain Sarlette, and Francesco Ticozzi. 2015. Consensus for quantum networks: Symmetry from gossip interactions. _IEEE Transactions on Automatic Control_, 60(1):158–172.

Mochon, Carlos. 2007. Quantum weak coin flipping with arbitrarily small bias.

Tani, Seiichiro, Hirotada Kobayashi, and Keiji Matsumoto. 2005. Quantum leader election via exact amplitude amplification.

Tani, Seiichiro, Hirotada Kobayashi, and Keiji Matsumoto. 2012. Exact quantum algorithms for the leader election problem. _ACM Trans. Comput. Theory_, 4:1:1–1:24.
{ "disclaimer": "Notice: Paper or abstract available at https://arxiv.org/abs/2206.04710, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "CLOSED", "url": "http://arxiv.org/pdf/2206.04710" }
2,022
[ "JournalArticle" ]
true
2022-06-09T00:00:00
[ { "paperId": "c7824a6369e33dde3738f204797f965b37185751", "title": "A Simpler Proof of the Existence of Quantum Weak Coin Flipping with Arbitrarily Small Bias" }, { "paperId": "68074c123229f29fa742e354a2c0a8b6af8057aa", "title": "Consensus for Quantum Networks: Symmetry From Gossip Interactions" }, { "paperId": "c5d78f1656b6f42642173788fc431a65caad06d3", "title": "Quantum weak coin flipping with arbitrarily small bias" }, { "paperId": "338dc1c9be94c6c817ce987f7ba2a8919f143241", "title": "Exact Quantum Algorithms for the Leader Election Problem" }, { "paperId": "b8acfea8a2279ff72e4116a4e73ebd5cba0f5dc2", "title": "Quantum Leader Election via Exact Amplitude Amplification" } ]
8,190
en
[ { "category": "Medicine", "source": "external" }, { "category": "Medicine", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/0224fec689cae36bb85c4d63ae3c5a4b060abcd3
[ "Medicine" ]
0.922305
Infection-related hospitalization following ureteroscopic stone treatment: results from a surgical collaborative
0224fec689cae36bb85c4d63ae3c5a4b060abcd3
BMC Urology
[ { "authorId": "39726081", "name": "A. Cole" }, { "authorId": "49745989", "name": "J. Telang" }, { "authorId": "2110993326", "name": "Tae-Kyung Kim" }, { "authorId": "118276505", "name": "K. Swarna" }, { "authorId": "152322873", "name": "J. Qi" }, { "authorId": "3808902", "name": "C. Dauw" }, { "authorId": "9831716", "name": "B. Seifman" }, { "authorId": "6599584", "name": "M. Abdelhady" }, { "authorId": "152413146", "name": "W. Roberts" }, { "authorId": "31678075", "name": "J. Hollingsworth" }, { "authorId": "145918894", "name": "K. Ghani" } ]
{ "alternate_issns": null, "alternate_names": [ "BMC Urol" ], "alternate_urls": [ "http://www.pubmedcentral.nih.gov/tocrender.fcgi?journal=67", "https://bmcurol.biomedcentral.com/" ], "id": "2369a370-b3f9-4eec-9f80-5e9540ea27e2", "issn": "1471-2490", "name": "BMC Urology", "type": "journal", "url": "http://www.biomedcentral.com/bmcurol/" }
Background Unplanned hospitalization following ureteroscopy (URS) for urinary stone disease is associated with patient morbidity and increased healthcare costs. To this effect, AUA guidelines recommend at least a urinalysis in patients prior to URS. We examined risk factors for infection-related hospitalization following URS for urinary stones in a surgical collaborative. Methods Reducing Operative Complications from Kidney Stones (ROCKS) is a quality improvement (QI) initiative from the Michigan Urological Surgery Improvement Collaborative (MUSIC) consisting of academic and community practices in the State of Michigan. Trained abstractors prospectively record standardized data elements from the health record in a web-based registry including patient characteristics, surgical details and complications. Using the ROCKS registry, we identified all patients undergoing primary URS for urinary stones between June 2016 and October 2017, and determined the proportion hospitalized within 30 days with an infection-related complication. These patients underwent chart review to obtain clinical data related to the hospitalization. Multivariable logistic regression analysis was performed to determine risk factors for hospitalization. Results 1817 URS procedures from 11 practices were analyzed. 43 (2.4%) patients were hospitalized with an infection-related complication, and the mortality rate was 0.2%. Median time to admission and length of stay was 4 and 3 days, respectively. Nine (20.9%) patients did not have a pre-procedure urinalysis or urine culture, which was not different in the non-hospitalized cohort (20.5%). In hospitalized patients, pathogens included gram-negative (61.5%), gram-positive (19.2%), yeast (15.4%), and mixed (3.8%) organisms. Significant factors associated with infection-related hospitalization included higher Charlson comorbidity index, history of recurrent UTI, stone size, intra-operative complication, and procedures where fragments were left in-situ. Conclusions One in 40 patients are hospitalized with an infection-related complication following URS. Awareness of risk factors may allow for individualized counselling and management to reduce these events. Approximately 20% of patients did not have a pre-operative urine analysis or culture, and these findings demonstrate the need for further study to improve urine testing and compliance
## RESEARCH ARTICLE

## Open Access

# Infection-related hospitalization following ureteroscopic stone treatment: results from a surgical collaborative

### Adam Cole[1*], Jaya Telang[1], Tae-Kyung Kim[1], Kavya Swarna[1], Ji Qi[1], Casey Dauw[1], Brian Seifman[2], Mazen Abdelhady[3], William Roberts[1], John Hollingsworth[1] and Khurshid R. Ghani[1] on behalf of the Michigan Urological Surgery Improvement Collaborative

**Abstract**

**Background: Unplanned hospitalization following ureteroscopy (URS) for urinary stone disease is associated with patient morbidity and increased healthcare costs. To this effect, AUA guidelines recommend at least a urinalysis in patients prior to URS. We examined risk factors for infection-related hospitalization following URS for urinary stones in a surgical collaborative.**

**Methods: Reducing Operative Complications from Kidney Stones (ROCKS) is a quality improvement (QI) initiative from the Michigan Urological Surgery Improvement Collaborative (MUSIC) consisting of academic and community practices in the State of Michigan. Trained abstractors prospectively record standardized data elements from the health record in a web-based registry, including patient characteristics, surgical details, and complications. Using the ROCKS registry, we identified all patients undergoing primary URS for urinary stones between June 2016 and October 2017, and determined the proportion hospitalized within 30 days with an infection-related complication. These patients underwent chart review to obtain clinical data related to the hospitalization. Multivariable logistic regression analysis was performed to determine risk factors for hospitalization.**

**Results: 1817 URS procedures from 11 practices were analyzed. 43 (2.4%) patients were hospitalized with an infection-related complication, and the mortality rate was 0.2%. Median time to admission and length of stay was 4 and 3 days, respectively. Nine (20.9%) patients did not have a pre-procedure urinalysis or urine culture, which was not different in the non-hospitalized cohort (20.5%). In hospitalized patients, pathogens included gram-negative (61.5%), gram-positive (19.2%), yeast (15.4%), and mixed (3.8%) organisms. Significant factors associated with infection-related hospitalization included higher Charlson comorbidity index, history of recurrent UTI, stone size, intra-operative complication, and procedures where fragments were left in-situ.**

**Conclusions: One in 40 patients are hospitalized with an infection-related complication following URS. Awareness of risk factors may allow for individualized counselling and management to reduce these events. Approximately 20% of patients did not have a pre-operative urine analysis or culture, and these findings demonstrate the need for further study to improve urine testing and compliance.**

**Keywords: Ureteroscopy, Infection, Outcomes, Quality improvement, Urolithiasis**

*Correspondence: aicole@med.umich.edu

1 Department of Urology, University of Michigan, Ann Arbor, MI 48103, USA. Full list of author information is available at the end of the article.

© The Author(s) 2020. **Open Access** This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

**Background**

Ureteroscopy (URS) is now the most common treatment modality for treating upper urinary tract stones in North America [1, 2]. Due to technological advances and
widespread availability of equipment, URS is often performed in the outpatient setting [3]. Despite this, morbidity, especially infection-related complications, may occur in up to 5–18% of patients [4–8]. These often result in hospital admission and can have a significant impact on patients, providers, and payers [3, 9–11]. A hospital admission for sepsis can cost approximately $20,000 [12]. Therefore, efforts to mitigate infection-related complications following URS would be beneficial in reducing healthcare expenditures.

Prior studies investigating infection-related complications after URS have provided some insights on risk factors, which include stone, patient, and operative characteristics. However, most are single-institution series from tertiary referral or academic medical centers [4–10], which may limit generalizability of the results to the wider swathe of urologic patients commonly treated by diverse practitioners in community or multi-specialty group practices.

In the state of Michigan, we have developed a quality improvement (QI) initiative and a clinical registry, Reducing Operative Complications from Kidney Stones (ROCKS), to better understand processes of care, outcomes, and quality indicators for patients undergoing URS for urinary stones. A strength of this registry is its diversity of patients and practicing urologists. In our drive to improve outcomes for URS, we sought to better understand risk factors for infection-related hospitalization using data from this surgical collaborative. We also sought to assess care in relation to guideline-based practice. We hypothesize that there are modifiable factors which lead to infection-related morbidity. Identifying high-risk patients may allow for individualized counseling and the development of QI interventions that reduce adverse events and the associated patient morbidity and healthcare costs.

**Methods**

**Data source**

The Michigan Urological Surgery Improvement Collaborative (MUSIC) was established in 2011 in partnership with Blue Cross Blue Shield of Michigan. The ROCKS QI initiative within MUSIC comprises diverse community and academic urology practices in the state of Michigan and started in 2016.
For patients with urinary stones undergoing URS, trained abstractors prospectively record standardized data elements in a web-based registry, including patient and stone characteristics, surgical details, and complications. Patient data are entered into the registry 60 days after a URS procedure, and data entry is guided by standard variable definitions and collaborative-wide operating procedures. To ensure data quality, the coordinating center performs on-site data audits on a semi-annual basis.

**Patient selection and outcomes**

We identified all patients undergoing URS for primary treatment of urinary stones between June 2016 and October 2017. During this period, ROCKS consisted of 11 practices. To be included in the ROCKS registry, a patient had to be at least 18 years of age and have undergone unilateral URS for urinary stones. Patients who underwent bilateral URS, had an ipsilateral nephrostomy at the time of URS, or underwent URS after percutaneous renal surgery were ineligible. We identified all patients who were discharged after surgery and then subsequently hospitalized (at any institution) within 30 days of their procedure. An infection-related hospitalization was determined by chart review, based on the presence of SIRS criteria with or without bacteriuria. Patients admitted for other indications (pain, hematuria, etc.) were classified as a non-infectious hospitalization. Stone-free rate (SFR) was defined as the absence of any fragment on X-ray, CT, or ultrasound reports obtained within 60 days. Chart review was performed on all patients with infection-related hospitalizations, including urine culture pathogen data, length of stay, and timing from surgery.

**Statistical analyses**

We generated descriptive summary statistics of all patients in the analytic sample. Chi-square tests and Student's t tests were performed for categorical and continuous variables, respectively, to compare demographic and operative factors between the two groups. Significant pre-operative and operative variables were then used as covariates in a multivariable analysis to determine which factors were associated with higher odds of an infection-related hospitalization. Multivariable analysis was performed using a logistic regression model. The odds ratios and 95% confidence intervals were reported. Significant variables with fewer than 10 events were not included in the final multivariable model. All analyses were performed with SAS 9.4 (SAS Institute, Cary, NC) at a 5% significance level.
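For readers who want to reproduce this style of analysis outside SAS, a hedged Python sketch using statsmodels is given below. The data frame, column names, and synthetic values are hypothetical stand-ins for the registry variables and are for illustration only; they are not the study data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

np.random.seed(0)
n = 2000  # synthetic cohort roughly the size of the study sample

# Hypothetical column names standing in for the registry variables.
df = pd.DataFrame({
    "infection_admit": np.random.binomial(1, 0.024, n),
    "age":             np.random.normal(55, 15, n),
    "cci":             np.random.choice([0, 1, 2], n),
    "stone_size_mm":   np.random.gamma(4, 2, n),
    "recurrent_uti":   np.random.binomial(1, 0.05, n),
})

model = smf.logit(
    "infection_admit ~ age + C(cci) + stone_size_mm + recurrent_uti",
    data=df,
).fit(disp=False)

# Report odds ratios with 95% confidence intervals, as in Table 2.
or_table = np.exp(pd.concat([model.params, model.conf_int()], axis=1))
or_table.columns = ["OR", "2.5%", "97.5%"]
print(or_table)
```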
**Results**

A total of 1817 URS procedures in 1737 patients from 11 practices were analyzed. In total, 80 (4.4%) patients were hospitalized within 30 days of their URS. Forty-three (2.4%) patients were hospitalized with an infection-related complication (Fig. 1). Median time from surgery to admission was 4 days (range 0–30) and median length of stay was 3 days (range 1–33) for the patients admitted with an infection-related complication. The majority of admissions (74.4%) occurred within 7 days of surgery, and more than half of patients (55.8%) were admitted for longer than 2 days. One (2.3%) patient had a prior ureteroscopy within 1 month of the index surgery. Of patients with a positive urine culture during hospitalization (n = 26), isolated pathogens included 16 (61.5%) gram-negative, 5 (19.2%) gram-positive, 4 (15.4%) yeast, and 1 (3.8%) with gram-positive and -negative cocci. Only 9 of these 26 patients (34.6%) had a positive urinalysis (defined by positive nitrite) or urine culture prior to surgery. Three patients died during their hospitalization (mortality rate 0.2%).

Pre-operative, intra-operative, and post-operative characteristics among infection-related hospitalized and non-hospitalized patients are provided in Table 1. Significant factors for hospitalization with an infection-related complication on bivariate analysis were public insurance status, older age, higher Charlson Comorbidity Index (CCI), history of recurrent UTI (a registry variable based on a clinic note by the physician indicating a history of prior UTIs), spinal cord injury, urinary diversion, intra-operative complication, and larger stone size. Patients who were hospitalized were less likely to be on pre-operative alpha-blockers. There was no statistical difference in the proportion of patients who had an indwelling ureteral stent prior to URS. Of those hospitalized with an infection-related complication, 9 (20.9%) did not have a pre-procedure urinalysis or urine culture, compared to 355 (20.5%) in the non-hospitalized group (p = 0.95). Twelve (27.9%) patients who were hospitalized with an infectious indication had a positive urinalysis or urine culture prior to surgery, compared to 261 (15.4%) of the non-hospitalized patients (p = 0.08). None of the 12 patients with abnormal pre-operative urine studies who were hospitalized were treated with antibiotics prior to surgery.

Patients who were hospitalized for infectious reasons were more likely to have an intra-operative complication (7.0%). Complications included the inability to complete the procedure due to bleeding or perforation. There was no difference in the rate of ureteral stent placement, ureteral dilation, or use of a ureteral access sheath at the time of surgery between the infection-related hospitalization group and the non-admitted group. Those hospitalized with an infection-related complication were more likely to have lithotripsy with fragments left in situ at the conclusion of the operation.

On multivariable analysis (Table 2), significant risk factors associated with hospitalization for infection-related causes included higher CCI, history of recurrent UTI, increasing stone size, history of intra-operative complication, and lithotripsy with fragments left in-situ. The strongest risk factors were the presence of an intra-operative complication (OR 3.7) and history of recurrent UTI (OR 3.74).

**Discussion**

We found that in 11 diverse urology practices across the state of Michigan, 1 in 40 patients were hospitalized with an infection-related complication following URS for urinary stones. During admission the most commonly identified organisms were gram-negative; however, a small proportion of patients had yeast identified. Risk factors for an infection-related admission were higher Charlson comorbidity index, history of recurrent UTI, larger stone size, intra-operative complication, and cases where lithotripsy was performed with fragments left in-situ. Overall, 20% of all patients did not have a documented urinalysis or urine culture prior to URS. Collectively, these findings represent an opportunity for the development of QI initiatives to decrease the risk of infection and sepsis after URS, as well as better adherence to American Urological Association (AUA) guidelines.

Previous investigators have examined risk factors for infectious complications following URS. Zhong et al. examined 250 patients that underwent URS for stone treatment, and found an 8.1% incidence of systemic inflammatory response syndrome (SIRS) following the procedure.
Risk factors included stone size, smaller-caliber ureteral access sheath, higher irrigation flow rate, and presence of struvite calculi [8]. Other studies have also identified female gender [5, 6, 13], history of obstructive pyelonephritis [5, 6], positive pre-operative urine culture [5, 6], and prolonged ureteral stent dwell time [5] as risk factors for SIRS/sepsis, with rates of SIRS/sepsis from 0.30–8% [5–8, 14]. We also found similar risk factors for hospitalization related to infectious complications, including higher Charlson comorbidity index, history of recurrent UTI, intra-operative complication, and stone size. Interestingly, female gender and pre-operative ureteral stenting were not risk factors in this analysis. Female gender has been a reported risk factor in some series [5, 6, 13]; however, in other studies this was not a risk factor [7, 14], suggesting differences in study design. Perhaps a prospective study would be helpful.

**Table 1 Patient characteristics for patients undergoing ureteroscopy for urinary stones in MUSIC ROCKS stratified by post-operative course**

| Risk factor | Infection-related hospitalization (n = 43) | Non-hospitalized (n = 1737) | P value |
|---|---|---|---|
| _Pre-operative characteristics_ | | | |
| Public insurance | 25 (58.1%) | 669 (39.7%) | 0.01 |
| Mean age (SD) | 60.1 (15.8) | 54.4 (15.5) | 0.02 |
| Male gender | 19 (44.2%) | 867 (50.1%) | 0.44 |
| BMI > 30 | 24 (58.5%) | 788 (46.8%) | 0.13 |
| CCI ≥ 1 | 26 (60.5%) | 495 (28.5%) | < 0.01 |
| CCI ≥ 2 | 14 (32.6%) | 241 (13.9%) | < 0.01 |
| Presence of hydronephrosis on pre-operative imaging | 27 (67.5%) | 1048 (66.7%) | 0.92 |
| Largest stone size (mm), mean (SD) | 10.1 (6.5) | 7.8 (5.4) | < 0.01 |
| Solitary kidney | 2 (4.7%) | 25 (1.5%) | 0.13 |
| Horseshoe kidney | 1 (2.3%) | 6 (0.4%) | 0.16 |
| History of recurrent UTI | 9 (20.9%) | 88 (5.1%) | < 0.01 |
| Urinary diversion | 2 (4.7%) | 7 (0.4%) | 0.02 |
| Spinal cord injury | 2 (4.7%) | 3 (0.2%) | < 0.01 |
| Anti-platelet therapy | 13 (30.2%) | 345 (20.4%) | 0.12 |
| Pre-operative urinalysis/urine culture not performed | 9 (20.9%) | 355 (20.5%) | 0.95 |
| Positive pre-operative urinalysis/urine culture | 12 (27.9%) | 266 (15.4%) | 0.08 |
| Positive pre-operative urinalysis/urine culture treated | 0 (0%) | 59 (22.2%) | 0.07 |
| Urgent/emergent surgery | 1 (2.3%) | 146 (8.4%) | 0.15 |
| Peri-operative antibiotic use | 38 (95%) | 1513 (96.9%) | 0.49 |
| Alpha-blocker therapy prior to URS | 11 (26.2%) | 755 (45.1%) | 0.01 |
| Pre-stenting (ureteral stent in place) | 20 (46.5%) | 649 (37.5%) | 0.23 |
| _Stone location_ | | | |
| Renal | 17 (45.9%) | 502 (31.0%) | 0.07 |
| Ureter | 14 (37.8%) | 906 (56.0%) | |
| Both | 6 (16.2%) | 211 (13.0%) | |
| _Intra-operative characteristics_ | | | |
| Intra-operative complication | 3 (7.0%) | 33 (1.9%) | < 0.01 |
| Complication: bleeding | 2 (4.7%) | 14 (0.8%) | |
| Complication: perforation | 0 (0.0%) | 4 (0.2%) | |
| Complication: other | 1 (2.3%) | 15 (0.9%) | |
| Ureteral dilation | 6 (13.9%) | 340 (19.7%) | 0.44 |
| Ureteral access sheath use | 18 (41.9%) | 626 (36.6%) | 0.49 |
| Lithotripsy with fragments left in-situ | 30 (69.8%) | 716 (42.3%) | < 0.01 |
| Stenting during URS | 31 (72.1%) | 1248 (72.1%) | 0.99 |
| _Post-operative characteristics_ | | | |
| Discharged with antibiotics | 15 (36.6%) | 638 (39.2%) | 0.74 |
| Discharged with antibiotics and stent placed | 9 (20.9%) | 509 (29.3%) | 0.23 |
| Discharged with alpha-blocker | 27 (65.8%) | 911 (55.9%) | 0.21 |
| Stone-free rate | 19 (57.6%) | 579 (77.5%) | < 0.01 |

**Table 2 Multivariable logistic regression demonstrating association between patient characteristics and risk of infection-related hospitalization**

| Risk factor | OR | CI | P value |
|---|---|---|---|
| Age | 1.01 | 0.98–1.03 | 0.95 |
| Comorbidity (CCI 0 vs. 1) | 3.12 | 1.37–7.14 | < 0.01 |
| Comorbidity (CCI 0 vs. 2) | 2.72 | 1.16–6.37 | 0.02 |
| Stone size | 1.04 | 1.01–1.07 | 0.02 |
| History of recurrent UTI | 3.74 | 1.55–9.00 | < 0.01 |
| Insurance (public vs. private) | 1.57 | 0.75–3.25 | 0.23 |
| Alpha-blocker prior to URS | 0.51 | 0.24–1.06 | 0.07 |
| Complete fragment removal | 0.32 | 0.16–0.65 | < 0.01 |
| Intra-operative complication | 3.70 | 1.22–11.25 | 0.02 |
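As a quick sanity check on how Table 1 relates to Table 2, the unadjusted odds ratio for any single factor can be recomputed directly from the 2 × 2 counts. The snippet below (our own illustration, not part of the study's SAS code) does this for history of recurrent UTI and shows why the raw value differs from the covariate-adjusted OR of 3.74 reported in Table 2.

```python
# 2x2 table for history of recurrent UTI, taken from Table 1:
#                     recurrent UTI   no recurrent UTI
# infection admit           9               34
# non-hospitalized         88             1649
a, b, c, d = 9, 34, 88, 1649

unadjusted_or = (a * d) / (b * c)
print(round(unadjusted_or, 2))  # ~4.96, versus the adjusted OR of 3.74
# The multivariable model shrinks the estimate because part of the raw
# association is explained by correlated covariates such as CCI.
```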
Additionally, public insurance was associated with an increased risk of an infection-related hospitalization on univariate analysis. However, this association was not seen in our multivariable model, suggesting that the association of insurance and infection may be due to other factors.

Awareness of risk factors can allow for an individualized approach to pre-operative antibiotic selection, adoption of intra-operative technical factors such as considering a ureteral access sheath or limiting the irrigation flow rate, and post-operative antibiotic therapy in patients at risk for developing sepsis. Since there was a strong relationship between an intra-operative complication and subsequent hospitalization, patients who suffer this event could be considered for prolonged observation in the recovery room or even admission and observation. Likewise, patients with a history of recurrent UTI should be considered for pre-operative urine culture (not urinalysis) and be managed with culture-directed pre-operative antibiotics. While almost all patients received peri-operative antibiotics, more patients in the hospitalized group had an abnormal urine study prior to surgery, and none of these patients were treated with antibiotics. There are a very small number of patients in both groups with untreated positive urine cultures prior to ureteroscopy. This represents a focus for subsequent quality improvement initiatives with the goal of improving pre-operative testing and follow-up.

We found that in patients with a positive urine culture during hospitalization, only 34.6% had a positive UA or urine culture prior to surgery. It is possible this discordance lies in our definition of a positive urinalysis (nitrite positivity), which can be altered by medications such as pyridium. Additionally, any positive pre-operative culture, regardless of organism or colony count, is deemed positive. These represent limitations of our study; however, previous studies have also reported discordance between pre-operative, intra-operative, and post-operative urine cultures in patients undergoing stone surgery. Paonessa et al. examined pre-operative urine cultures and intra-operative stone cultures in patients undergoing percutaneous nephrolithotomy and found that 9.7% of patients with negative pre-operative urine cultures had positive stone cultures. In patients with both positive pre-operative urine and intra-operative stone cultures, the organisms differed in 13.3%, representing an overall discordance in almost a quarter of cases [15]. Marien et al. also demonstrated 27% discordant voided and upper tract urine cultures after decompression for obstructing stones [16].

The AUA Guidelines on Surgical Management of Stones advise clinicians to obtain a urinalysis prior to URS, and in patients with clinical or laboratory signs of infection, a urine culture should be obtained [17]. EAU Guidelines state a urine culture or urinary microscopy is mandatory before treatment [18]. In our cohort, approximately 20% of patients who were admitted with an infectious complication did not have a urinalysis or urine culture prior to surgery. This aspect of care, where patients are not managed in accordance with current guidelines, represents an area for improvement.
Interestingly, this rate was similar in the group of non-hospitalized patients. It would appear that obtaining a pre-operative urinalysis or urine culture did not alter the risk of hospitalization for an infection-related reason. One major limitation of our work is that we do not differentiate between urine culture and urinalysis in our registry. Additionally, the pre-operative screening requirements vary by center due to institutional protocols, workflow, staffing, and resources. Some institutions require a urine culture within 30 days of surgery, while others use urinalysis with reflex culture. Despite these differences, pre-operative urine studies were not obtained in approximately 20% of all patients, likely for a variety of reasons: urine studies may not have been ordered, urine studies were ordered but not performed by the patient, or they were performed at outside institutions but not available. Our findings warrant further investigation to address these quality-of-care gaps.

Lithotripsy with fragments left in-situ was associated with an increased risk of infection-related hospitalization in our cohort. This variable is determined by review of the operative notes by data abstractors based on key phrases, such as "all fragments were removed" or "all remaining fragments were 1 mm or less." The database does not detect the specific stone treatment technique, and it is difficult to ascertain if this indicates a dusting technique or a hybrid technique of basketing and dusting. It is possible that our results are confounded by patients with large stone burden. Also, while patients in the non-hospitalized group were more likely to be on pre-operative alpha-blockers, this was not significantly associated
Finally, the small number of hospitalization events may alter the fit of our multi-variable model. Our findings do have several implications. URS is among the most commonly performed urologic surgeries, and unplanned healthcare encounters following URS are not uncommon. We demonstrate suboptimal statewide compliance with guidelines regarding pre-operative urine screening. Efforts should be taken to comply with best practice statements, and this will be the subject of future QI initiatives in MUSIC. In particular we are considering collecting information on which specific pre-operative urine study was performed to determine whether urinalysis is insufficient as a screening tool to mitigate the risk of sepsis after URS. **Conclusion** We found that nearly 1 in 40 patients are hospitalized with an infection-related complication following URS for urinary stones in diverse practices in Michigan. Awareness of risk factors may allow for individualized counselling and management to reduce these events. Approximately 20% of patients did not have a pre-operative urine analysis or culture, and these findings demonstrate the need for further study to improve urine testing and compliance. **Abbreviations** URS: Ureteroscopy; AUA​: American Urological Association; ROCKS: Reducing Operative Complications from Kidney Stones; MUSIC: Michigan Urological Surgery Improvement Collaborative; QI: Quality improvement; SFR: Stone-free rate; UTI: Urinary tract infection; CCI: Charlson Comorbidity Index; OR: Odds ratio; SIRS: Systemic inflammatory response syndrome; UA: Urinalysis; EAU: European Association of Urology. **Acknowledgements** We would like to thank the significant contribution of the clinical champions, urologists and data abstractors in each participating MUSIC ROCKS practice. In addition, we would like to acknowledge the support provided by the Value Partnerships program at BCBSM. **Authors’ contributions** Made substantial contributions to the conception: AC, KG. Made substantial contributions to design of the work: AC, JT, TK, KS, JQ, Made substantial con‑ tributions to the acquisition and analysis of the data: AC, JT, TM, KS, JQ. Made substantial contributions to interpretation of the data: AC, CD, WR, JH, KG. Made substantial contributions to drafting and revising the manuscript: AC, CD, BS, MA, WR, JH, KG. All authors read and approved the final manuscript. **Funding** Michigan Urological Surgery Improvement Collaborative (MUSIC) is funded by Blue Cross Blue Shield of Michigan (BCBSM). Blue Cross Blue Shield of Michigan did not have a role in the design and conduct of the study; col‑ lection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication. **Availability of data and materials** The datasets generated and/or analyzed during the current study are not publicly available, and are managed by the MUSIC urology coordinating center. MUSIC urology was founded with the guiding principle to improve the urologic care across the entire state. We do not compare institutions within our registry for the purposes of maintaining confidentiality. As such, our data are internally maintained and not publically available. **Ethics approval and consent to participate** The MUSIC registry was issued a Notice of Determination of “Not Regulated” Status by the University of Michigan Institutional Review Board (IRBMED), ID: HUM00054438. 
This registry did not fit the definition of human subjects research requiring IRB approval because the program is focused on quality improvement versus research and the human subjects themselves. **Consent for publication** Not applicable. **Competing interests** The authors declare that they have no competing interests. **Author details** 1 Department of Urology, University of Michigan, Ann Arbor, MI 48103, USA. 2 Michigan Institute of Urology, West Bloomfield, MI 48322, USA. 3 Detroit Medical Center, Department of Urology, Detroit, MI 48201, USA. Received: 4 April 2020 Accepted: 15 September 2020 ----- **References** 1. Oberlin DT, Flum AS, Bachrach L, Matulewicz RS, Flury SC. Contempo‑ rary surgical trends in the management of upper tract calculi. J Urol. 2015;193:880–4. 2. Ordon M, et al. The surgical management of kidney stone disease: a population based time series analysis. J Urol. 2014;192:1450–6. 3. San Juan J, Hou H, Ghani KR, Dupree JM, Hollingsworth JM. Variation in spending around surgical episodes of urinary stone disease: findings from Michigan. J Urol. 2018;199:1277–82. 4. Mitsuzuka K, Nakano O, Takahashi N, Satoh M. Identification of factors associated with postoperative febrile urinary tract infection after ureter‑ oscopy for urinary stones. Urolithiasis. 2016;44:257–62. 5. Nevo A, Mano R, Baniel J, Lifshitz DA. Ureteric stent dwelling time: a risk factor for post-ureteroscopy sepsis. BJU Int. 2017;120:117–22. 6. Uchida Y, Takazawa R, Kitayama S, Tsujii T. Predictive risk factors for systemic inflammatory response syndrome following ureteroscopic laser lithotripsy. Urolithiasis. 2018;46:375–81. 7. Blackmur JP, et al. Analysis of factors’ association with risk of postoperative urosepsis in patients undergoing ureteroscopy for treatment of stone disease. J Endourol. 2016;30:963–9. 8. Zhong W, Leto G, Wang L, Zeng G. Systemic inflammatory response syndrome after flexible ureteroscopic lithotripsy: a study of risk factors. J Endourol. 2015;29:25–8. 9. Bloom J, Matthews G, Phillips J. Factors influencing readmission after elective ureteroscopy. J Urol. 2016;195:1487–91. 10. Du K, et al. Unplanned 30-day encounters after ureterorenoscopy for urolithiasis. J Endourol. 2018;32:1100–7. 11. Scales CD, et al. The impact of unplanned postprocedure visits in the management of patients with urinary stones. Surgery. 2014;155:769–75. 12. Arefian H, et al. Hospital-related cost of sepsis: a systematic review. J Infect. 2017;74:107–17. 13. Martov A, et al. Postoperative infection rates in patients with a nega‑ tive baseline urine culture undergoing ureteroscopic stone removal: a matched case–control analysis on antibiotic prophylaxis from the CROES URS global study. J Endourol. 2015;29:171–80. 14. Somani BK, et al. Complications associated with ureterorenoscopy (URS) related to treatment of urolithiasis: the Clinical Research Office of Endourological Society URS Global study. World J Urol. 2017;35:675–81. 15. Paonessa JE, Gnessin E, Bhojani N, Williams JC, Lingeman JE. Preopera‑ tive bladder urine culture as a predictor of intraoperative stone culture results: clinical implications and relationship to stone composition. J Urol. 2016;196:769–74. 16. Marien T, Mass AY, Shah O. Antimicrobial resistance patterns in cases of obstructive pyelonephritis secondary to stones. Urology. 2015;85:64–8. 17. Assimos D, et al. Surgical management of stones: American uro‑ logical association/endourological society guideline, PART I. J Urol. 2016;196:1153–60. 18. Türk C, et al. 
EAU guidelines on interventional treatment for urolithiasis. Eur Urol. 2016;69:475–82. 19. Sokhal A, et al. Do pre-operative alpha blockers facilitate ureteroscope insertion at the vesico-ureteric junction? An answer from a prostective case–controlled study. Eur Med J. 2017;2:82–6. 20. Alsaikhan B, Koziarz A, Lee J, Pace K. Preoperative alpha-blockers for ureteroscopy for ureteral stones: a systematic review and meta-analysis of randomized controlled trials. J Endourol. 2020;34:33–41. 21. Margel D, et al. Clinical implication of routine stone culture in percutane‑ ous nephrolithotomy—a prospective study. J Urol. 2006;67:26–9. **Publisher’s Note** Springer Nature remains neutral with regard to jurisdictional claims in pub‑ lished maps and institutional affiliations. -----
{ "disclaimer": "Notice: Paper or abstract available at https://pmc.ncbi.nlm.nih.gov/articles/PMC7607640, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "CCBY", "status": "GOLD", "url": "https://bmcurol.biomedcentral.com/track/pdf/10.1186/s12894-020-00720-4" }
2,020
[ "JournalArticle", "Review" ]
true
2020-11-03T00:00:00
[ { "paperId": "33e1185d60ea7e6205382bf0e0c34e34dca5de28", "title": "Preoperative alpha-blockers for ureteroscopy for ureteric stones: a systematic review and meta-analysis of randomized controlled trials." }, { "paperId": "f8050ba31a9d825e6e17d63a1b6ac10a7b250a95", "title": "Unplanned 30-Day Encounters After Ureterorenoscopy for Urolithiasis." }, { "paperId": "1b9e2f9b4d5de4087e800f2f1ee1d5ef2d3a9b89", "title": "Predictive risk factors for systemic inflammatory response syndrome following ureteroscopic laser lithotripsy" }, { "paperId": "631a8efe123b5637c3841d7e1ae5e223434d3178", "title": "Variation in Spending around Surgical Episodes of Urinary Stone Disease: Findings from Michigan" }, { "paperId": "c7f2fd44f52df94e1b90305ece9d88dfed9a9bf8", "title": "Ureteric stent dwelling time: a risk factor for post‐ureteroscopy sepsis" }, { "paperId": "b7f638363c21a56620d5374cbfe46960a9ecd4a0", "title": "Hospital-related cost of sepsis: A systematic review." }, { "paperId": "0258eee3a5a0168630716ee1540b96ad808f9f51", "title": "Surgical Management of Stones: American Urological Association/Endourological Society Guideline, PART I." }, { "paperId": "520b02ef54beb6ae44329e0db05e6bebc34fa1dc", "title": "Analysis of Factors' Association with Risk of Postoperative Urosepsis in Patients Undergoing Ureteroscopy for Treatment of Stone Disease." }, { "paperId": "32da42ebda1955581f19f294e7ee8835d332effe", "title": "Preoperative Bladder Urine Culture as a Predictor of Intraoperative Stone Culture Results: Clinical Implications and Relationship to Stone Composition." }, { "paperId": "705d5bfd5b6366e6b6a1e1830af88ff435a7bdfd", "title": "Complications associated with ureterorenoscopy (URS) related to treatment of urolithiasis: the Clinical Research Office of Endourological Society URS Global study" }, { "paperId": "da70f9ccc04c43df38f1d8a7df39ae3407f4de46", "title": "Identification of factors associated with postoperative febrile urinary tract infection after ureteroscopy for urinary stones" }, { "paperId": "b2763dec136065e5800f64abafc705897deb57fa", "title": "Factors Influencing Readmission after Elective Ureteroscopy." }, { "paperId": "01ccb68973043d4faf61f48bcc3b6eb5db2cc968", "title": "EAU Guidelines on Interventional Treatment for Urolithiasis." }, { "paperId": "1f1ea544f14135d571edfd655be402062715374e", "title": "Postoperative infection rates in patients with a negative baseline urine culture undergoing ureteroscopic stone removal: a matched case-control analysis on antibiotic prophylaxis from the CROES URS global study." }, { "paperId": "8b38d4835f39e6e2d44f53a76ddcf6b1636c8e2f", "title": "Systemic inflammatory response syndrome after flexible ureteroscopic lithotripsy: a study of risk factors." }, { "paperId": "7824449747113e519bff0b5279132ddebbb0ad58", "title": "The surgical management of kidney stone disease: a population based time series analysis." }, { "paperId": "3f44eb409ffa0ee009d0227cdc3dfbb372eb7753", "title": "The impact of unplanned postprocedure visits in the management of patients with urinary stones." }, { "paperId": "ada76768377758f114cf28a7ef936acfbfb591b0", "title": "Contemporary surgical trends in the management of upper tract calculi." }, { "paperId": null, "title": "Do pre‐operative alpha blockers facilitate ureteroscope insertion at the vesico‐ureteric junction? An answer from a prostective case–controlled study" }, { "paperId": "10dce973d02844eaefed790886946d4a38ed714f", "title": "Antimicrobial resistance patterns in cases of obstructive pyelonephritis secondary to stones." 
}, { "paperId": "f3ea80f42e37f5d01aad7a805975c7e562a7ecff", "title": "Clinical implication of routine stone culture in percutaneous nephrolithotomy--a prospective study." }, { "paperId": "34b9635d7779e219e9d60e0d3d33919ca9bc123c", "title": "Publisher's Note" }, { "paperId": null, "title": "Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations" } ]
7,884
en
[ { "category": "Computer Science", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/0227d8aca57de41a14d396350e12f28d4aabe480
[]
0.902386
Towards Improving Privacy and Security of Identity Management Systems Using Blockchain Technology: A Systematic Review
0227d8aca57de41a14d396350e12f28d4aabe480
Applied Sciences
[ { "authorId": "2193919526", "name": "H. Alanzi" }, { "authorId": "1404419908", "name": "M. Alkhatib" } ]
{ "alternate_issns": null, "alternate_names": [ "Appl Sci" ], "alternate_urls": [ "http://www.mathem.pub.ro/apps/", "https://www.mdpi.com/journal/applsci", "http://nbn-resolving.de/urn/resolver.pl?urn=urn:nbn:ch:bel-217814" ], "id": "136edf8d-0f88-4c2c-830f-461c6a9b842e", "issn": "2076-3417", "name": "Applied Sciences", "type": "journal", "url": "http://www.e-helvetica.nb.admin.ch/directAccess?callnumber=bel-217814" }
An identity management system (IDMS) manages and organizes identities and credentials information exchanged between users, identity providers (IDPs), and service providers (SPs) to ensure confidentiality and enhance privacy of users’ personal data. Traditional or centralized IDMS rely on a third party to store a user’s personal information, authenticate the user, and organize the entire process. This clearly constitutes threats to the privacy of the user, in addition to other issues, such as single point of failure (SPOF), user tracking, and data availability issues. Blockchain technology has many useful features that can contribute to solving traditional IDMS issues, such as decentralization, immutability, and anonymity. Blockchain represents an attractive solution for many issues related to traditional IDMS, including privacy, third-party control, data leakage, and SPOF, supported by Distributed Ledger Technology (DLT) security features and powerful smart contracts technology. The current study presents a systematic literature review and analysis for recently proposed solutions that adopt the traditional centralized approach, as well as solutions based on blockchain technology. The study also aims to provide a deep understanding of proposed IDMS solutions and best practices, and highlight the research gaps and open issues related to IDMSs and users’ privacy. In particular, the current research focuses on analyzing the blockchain-based solutions and illustrating their strengths and weaknesses, as well as highlighting the promising blockchain technology framework that can be utilized to enhance privacy and solve security issues in a centralized IDMS. Such a study is an important step towards developing efficient solutions that address the pressing needs in the field.
## applied sciences

_Systematic Review_

### Towards Improving Privacy and Security of Identity Management Systems Using Blockchain Technology: A Systematic Review

**Haifa Alanzi and Mohammad Alkhatib ***

Department of Computer Science, College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University, Riyadh 11564, Saudi Arabia

***** Correspondence: haifaalanzi1995@gmail.com (H.A.); mohkhatib83@gmail.com (M.A.)

**Citation:** Alanzi, H.; Alkhatib, M. Towards Improving Privacy and Security of Identity Management Systems Using Blockchain Technology: A Systematic Review. _Appl. Sci._ **2022**, _12_, 12415. [https://doi.org/10.3390/app122312415](https://doi.org/10.3390/app122312415)

Academic Editor: Gianluca Lax

Received: 6 October 2022; Accepted: 26 November 2022; Published: 4 December 2022

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons [Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/)](https://creativecommons.org/licenses/by/4.0/).

**Abstract: An identity management system (IDMS) manages and organizes identities and credentials** information exchanged between users, identity providers (IDPs), and service providers (SPs) to ensure confidentiality and enhance privacy of users' personal data. Traditional or centralized IDMS rely on a third party to store a user's personal information, authenticate the user, and organize the entire process. This clearly constitutes threats to the privacy of the user, in addition to other issues, such as single point of failure (SPOF), user tracking, and data availability issues. Blockchain technology has many useful features that can contribute to solving traditional IDMS issues, such as decentralization, immutability, and anonymity. Blockchain represents an attractive solution for many issues related to traditional IDMS, including privacy, third-party control, data leakage, and SPOF, supported by Distributed Ledger Technology (DLT) security features and powerful smart contracts technology. The current study presents a systematic literature review and analysis for recently proposed solutions that adopt the traditional centralized approach, as well as solutions based on blockchain technology. The study also aims to provide a deep understanding of proposed IDMS solutions and best practices, and highlight the research gaps and open issues related to IDMSs and users' privacy. In particular, the current research focuses on analyzing the blockchain-based solutions and illustrating their strengths and weaknesses, as well as highlighting the promising blockchain technology framework that can be utilized to enhance privacy and solve security issues in a centralized IDMS. Such a study is an important step towards developing efficient solutions that address the pressing needs in the field.

**Keywords: identity management; blockchain; distributed ledger technology; self-sovereign identity;** privacy

**1. Introduction**

Today, digital identities are essential for users on the internet to obtain services from electronic service providers (SPs).
Digital identity represents the user's personality in the digital world and carries the necessary data that allows the identity holder to access various resources on the internet provided by SPs [1]. Managing and protecting the user's identity, as well as related transactions and data, are critical tasks that need to be considered. The IDMS is an organizational process that aims to achieve these tasks and makes it easy for authorized users to access required services through their digital identity credentials. In addition, the IDMS seeks to provide necessary security services, such as privacy, confidentiality, and availability, to counter recently emerged cyberattacks and threats. There are three basic parties in an IDMS: the identity provider (IDP), the SP (or relying party, RP), and the user [2]. The digital identity of the user is created by the IDP, which is responsible for creating the digital identity and certifying it for the SP when the user needs to obtain a service; the IDP also provides the necessary authentication for the user. The SP provides the user with various resources after verifying their identity through the IDP. An IDMS becomes essential for modern applications and e-transactions to organize and manage identity information and credentials between the involved parties: the user, the SP, and the IDP. Furthermore, the IDMS is required to control the process of user authorization and support the role-based access system. The IDMS can be realized using centralized and decentralized approaches. A centralized IDM approach is the process of controlling and managing user identities and their relations using other central parties: an IDP and an SP. It is based on two primary operations, authentication and authorization, to provide an identity verification process and to increase access control (AC) security. However, a centralized IDMS suffers from potential risks that threaten users' privacy and decrease system transparency because of its reliance on centralization in controlling and managing users' data. The major risks associated with a centralized IDMS include issues related to user privacy, such as user behavior monitoring and third-party control, in addition to issues relevant to the availability of data, such as the single point of failure (SPOF) [3]. The decentralized blockchain infrastructure is one of the most important proposed solutions to the issues of centralized IDMS approaches, as a result of its powerful security features and promising technologies. The blockchain has multiple features that contribute to addressing the problems of the current central systems, such as distribution, peer-to-peer (P2P) operation, immutability, and others. Two important concepts were launched in 2013 that served to transform IDMs from centralization to decentralization: Ethereum and the smart contract. In smart contracts, transactions between parties can be conducted and tasks can be performed without the involvement of a third party, since a smart contract is a self-executing program that runs whenever its conditions are met. There are many features of blockchain technology that can enhance user privacy. Decentralization is the most important. In addition, avoiding complete dependence on a central authority reduces the risk of a SPOF. By using the blockchain, the user is protected from relying on third parties, and therefore, the possibility of tracking and studying their behavior is eliminated.
However, despite the blockchain's many advantages, it still faces some challenges, such as its scalability. A study comparing the various solutions offered by this technology is important, as the blockchain offers multiple features that can help solve the problems associated with centralizing identity management. Several issues have been addressed by the current systems in which blockchain technology has been applied, and prior research has compared various types of blockchain to uncover the most suitable method of decentralizing identity management. This research presents a systematic literature review of recent studies that have proposed blockchain-based solutions for centralized IDMSs across different domains. The aim of this study is to explore blockchain privacy and security solutions, study and compare those solutions, and analyze the results to highlight the current research gaps and best practices. These efforts seek to develop efficient blockchain-based solutions for IDMSs, which represent an essential need for current internet-based applications and businesses. The remaining sections of this paper are as follows. Section 2: Background; Section 3: Literature Review; Section 4: Method; Section 5: Results and Discussion; and finally, the conclusion is outlined in Section 6.

**2. Background**

_2.1. Overview of IDMSs_

Digital identities are needed to identify users when they request access to digital resources. To manage these digital identities, in addition to related information and credentials, an efficient IDMS is required. There are many identity management models that have been created and categorized based on the use of identity and the need for a cross-domain, such as an isolated user identity model, a federated identity model, and a user-centric model [1].

2.1.1. The Isolated User Identity Model (SILO) or Centralized Model

IDMSs have undergone multiple stages of development. First, there was the Isolated User Identity (SILO) model, which is the cornerstone: the simplest and most widely used model [4]. It is based on identity management between only two parties, the IDP and the user. The IDP in this system plays the role of the SP, as it allows the user to create a digital identity to obtain services provided in a specific field, which means that the user needs to create several digital identities to obtain services in multiple domains [5]. This is perhaps a major defect in this model, owing to the difficulty of managing multiple identities by the user, in addition to full dependence on the IDP, which may cause a violation of user privacy, such as the tracking of user movements.

2.1.2. Federated Identity Model

Another IDM was then created, the Federated IDMS [1]. It differs from the previous system as it is based on three parties instead of two: the IDP, the SP, and the user [6]. The IDP here is the party responsible for user identity creation, authentication, and the necessary credentials. In this model, the user depends on the IDP to issue credentials related to their identity and authenticate them to the SP. Therefore, there must be an element of trust between the IDP and the SP (Circle of Trust principle), which means that for every IDP in the system, there is a group of trusted SPs that the user can obtain services from [2,4]. Full dependence on the IDP, in addition to the IDP being fully informed of all user behaviors and relationships, is a threat to user privacy and may lead to the SPOF.
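To make this dependence concrete, the following minimal Python sketch (illustrative only; the class and method names are our own, not taken from any of the reviewed systems) models a federated single sign-on round trip. It shows why the IDP can log every SP a user visits, and why an IDP outage blocks all logins across the federation:

```python
import secrets

class IdentityProvider:
    """Central IDP: authenticates users and issues tokens for trusted SPs."""
    def __init__(self):
        self.users = {}           # username -> password
        self.trusted_sps = set()  # the Circle of Trust
        self.access_log = []      # the IDP sees every (user, SP) login

    def issue_token(self, username: str, password: str, sp_name: str) -> str:
        if self.users.get(username) != password or sp_name not in self.trusted_sps:
            raise PermissionError("authentication failed")
        self.access_log.append((username, sp_name))  # tracking happens here
        return secrets.token_hex(16)

class ServiceProvider:
    """SP: relies entirely on the IDP to authenticate users."""
    def __init__(self, name: str, idp: IdentityProvider):
        self.name, self.idp = name, idp
        idp.trusted_sps.add(name)

    def login(self, username: str, password: str) -> str:
        # If the IDP is down (SPOF), no SP in the federation can log anyone in.
        return self.idp.issue_token(username, password, self.name)

idp = IdentityProvider()
idp.users["alice"] = "pw"
shop, clinic = ServiceProvider("shop", idp), ServiceProvider("clinic", idp)
shop.login("alice", "pw")
clinic.login("alice", "pw")
print(idp.access_log)  # [('alice', 'shop'), ('alice', 'clinic')]: a full movement history
```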
These are serious problems in the centralized identity management approach, which depends on a central party, the IDP, to provide the required identity creation and authentication services.

2.1.3. User-Centric Model

This model is also referred to as the Open Trust Model, as all parties in the system are required to trust each other [1]. In this model, the user can select the attributes and credentials to be sent, in addition to having the ability to choose the IDP. It is very similar to the federated model, and it also has the same privacy concerns. The second law of identity (justifiable parties) is not satisfied in this model, and the sharing policy with the SP can be defined by the user, but it is still under the control of the IDP [5]. User privacy is violated in this model because of the IDP control.

2.1.4. Self-Sovereign Identity Model (SSI)

The abovementioned IDM models require full dependence on a third party, the IDP, to manage and control the identity, in addition to providing the credentials necessary for authentication. This represents a clear threat to the user's privacy, as all user behavior and movements are exposed to the IDP. To raise the level of user privacy in the field of digital identities, and to find a solution to the problems associated with the user's dependence on the IDP (problems related to the centralized approaches), a model based on the principle of decentralization has appeared in the field of IDM. The adoption of a decentralized IDM approach has been instigated by many researchers to find solutions regarding the privacy and SPOF problems in the previous centralized models. The Self-Sovereign Identity model (SSI) is an emerging decentralized IDMS that provides the user with the ability to control their identity, as well as its related data and transactions [7]. Unlike the three previously mentioned models of online identity (centralized, federated, and user-centric), SSI provides all three of the basic requirements: security, control, and portability. Therefore, the user is both the controller and the manager of the identity, and there are no external central control parties, which reduces the hacking risk. When a central IDP is hacked, the attacker obtains the data of all users who trust it; in SSI, by contrast, the attacker would need to hack each user individually, which necessitates higher costs, more time, and more effort. To develop an efficient decentralized IDM system capable of addressing problems related to privacy, SPOF, and other security issues, an appropriate infrastructure must be made available. Distributed Ledger Technology (DLT), also called blockchain, has been proposed by numerous research studies as an infrastructure by which to develop an IDM system and find effective solutions to the issues of security, privacy, and SPOF, as well as to give users the freedom to manage and exchange their data privately without the presence of or observation by controlling parties [5].

_2.2. Blockchain_

Blockchain was invented in 2008 by an unknown entity who went under the pseudonym Satoshi Nakamoto [8]. Blockchain technology is built on several technologies, which include the blockchain data structure, public key infrastructure (PKI), distributed ledger technology (DLT), and a consensus mechanism [9]. Blockchain technology has many characteristics that have contributed to its widespread adoption and significance today, the most important of them being the decentralization feature.
Using decentralization correctly is one of the most important steps towards solving the SPOF problem, which poses one of the biggest challenges to centralized systems. There is also a significant impact factor in the field of data protection associated with blockchain technology, since stored data cannot be deleted or modified once it has been written to the blockchain [10,11]. Blockchain is one of the most important decentralized technologies. It has become widespread in recent years and has been used in many domains, such as IoT [12–16]; supply chain [17–20]; AC and identity management in [21–26]; cloud IDM in [27]; ad-hoc networks (VANET) in [28–30]; healthcare in [31–33]; the internet of connected vehicles in [34,35]; and even undirected graph authentication, as discussed in [36]. Blockchain is a type of DLT which makes it very difficult to modify or hack any data and transactions stored on the blockchain platform, in a secure and tamper-proof way [5]. The main components of blockchain technology are:

- A block: A block of data which has a 32-bit randomly generated number (nonce) and a cryptographic hash, which is like a fingerprint of the block data. The first block of the chain is called the Genesis Block, and it does not contain a previous hash, because it is the original and the first block on the chain, and thus it is the only block with this feature [37].
- Miners: The blockchain technology requires miners to solve complex math algorithms to generate the cryptographic hash from the random nonce for each block created.
- Nodes: The nodes can be any electronic devices holding copies of all of the blockchain transactions.
- Chain: A group of blocks.
- Consensus protocol: The rules governing how operations are executed.

The blockchain distributes the data blocks over multiple nodes on the internet [2]. Therefore, it publishes and transmits data in the form of multiple blocks linked together. Each of the blocks contains the hash of the previous block, and that is why it is called a chain of blocks (blockchain): all the blocks are cryptographically linked to each other through the hash, so if anyone tries to tamper with one of the blocks, the hash of the block will no longer match up and the chain of blocks will be invalid, which is the immutable ledger feature (a short sketch of this check is given at the end of this subsection). Blockchain features such as decentralization, immutability, and individual control of data help to solve the most important issues of centralized IDMs by giving the user full control of their data to increase privacy by limiting third-party control, which is the main shortcoming of centralized IDM systems. The security and transparency features avoid the central authority issue, while no single entity owns the data. Another important feature is that the blocks on a blockchain cannot be modified, and that is a very important feature in the field of security, as it has a major role in reducing attacks [38]. A Distributed P2P Network is one blockchain feature where each device in the network is connected to all the other devices in the same network, and each device has a copy of the blockchain. Therefore, with each new block created in the chain, a copy of the block will be sent to all the peers under a cryptographic rule. This is a very important security feature where any system errors or tampering of any block will be detected, because the blockchain constantly checks all its peers to make sure that there are no issues. If any of the peers has a tampered block, the majority of the peers will compare the block and replace the tampered block with the original one. As a result of this feature, it is difficult to hack the block, since the hacker would have to tamper with more than 50% of the blocks at the same time in order to succeed [39,40]. In addition to the security features provided by blockchain technology, it eliminates the need for a third party to process transactions, and hence, supports decentralization via the use of smart contracts technology. A smart contract is a conditional transaction process in the blockchain that occurs when the condition is met (a self-executing program). Smart contracts provide many advantages, such as increasing performance, saving time, and, most importantly, increasing privacy compared to other traditional methods [41]. Smart contracts run on many blockchain platforms such as Hyperledger Fabric, Waves, Ethereum, and NEO.
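The hash-linking and tamper-detection behavior described above can be made concrete with a minimal sketch. The following Python fragment (standard library only; the block layout is our own simplification, not any particular blockchain client) builds a short chain and shows how changing one block invalidates it:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    # Hash the block's contents, excluding its own stored hash field.
    payload = {k: v for k, v in block.items() if k != "hash"}
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def make_block(data: str, prev_hash: str, nonce: int = 0) -> dict:
    block = {"data": data, "prev_hash": prev_hash, "nonce": nonce}
    block["hash"] = block_hash(block)
    return block

def chain_is_valid(chain: list) -> bool:
    # Each block must hash to its stored value and point at its predecessor's hash.
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block):
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

# The Genesis Block has no predecessor, so its prev_hash is a sentinel value.
chain = [make_block("genesis", prev_hash="0" * 64)]
chain.append(make_block("identity registration tx", chain[-1]["hash"]))
chain.append(make_block("identity verification tx", chain[-1]["hash"]))

print(chain_is_valid(chain))   # True
chain[1]["data"] = "tampered"  # Any change breaks the hash links downstream.
print(chain_is_valid(chain))   # False: peers holding honest copies would reject this
```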
Many IDM solutions have been designed without using DLT. As a result, there have been some issues related to central authority or third-party control, as in [3,6,42–44]. On the other hand, some research attempts have proposed solutions based on blockchain technology. However, proposed blockchain-based IDM systems have certain issues related to centralization when a private blockchain is used [13]; these pertain to private BC, central authority in [45], data availability in [46], and key management issues in [47]. There are many challenges in the field of user privacy in central identity management, such as relying on the third party to create, verify, and authenticate the identity and its attributes, in addition to the increased risk of user tracking, because the user needs the third party every time they want to obtain a service from the service provider. The SPOF is also one of the most important challenges facing central identity management. Integrating blockchain with identity management has many promising features that may help in solving and improving the system quality and user privacy. Decentralization, transparency, and immutability are among the most important characteristics that support this improvement, but there are also challenges that still need to be addressed, such as scalability of the blockchain system.

**3. Literature Review**

The current paper aims to present a comprehensive discussion and review for both traditional IDM systems that adopt the centralized approach, and the blockchain-based IDMSs that rely on the decentralized DLT to improve privacy and achieve self-sovereign identity concepts.

_3.1. Traditional IDMSs_

In [3], a study concerning Digital Identity and IDM Technologies, the author illustrated a variety of technologies used in the field of IDM. Among the several competing standards in the IDM field, the security assertion markup language (SAML) was the only applicable choice, as it had a high level of acceptance at that time. This is because it was part of the solution to the problem of single sign-on. Later, another technology emerged and received some attention in the community, called the WS-Federation. As users need to have multiple identities for different service providers, the multiple identities used can cause a degree of inconvenience to the user in terms of managing them. The author concluded that both are similar in functionality but use different names: IDP and service provider in SAML; security token service and relying party in WS-Federation. Microsoft CardSpace is a claim-based IDM system proposed by Microsoft to satisfy the seven laws of identity.
It gives the user the right to control their digital identities and choose the card after they have satisfied the SP policy through the identity selector. The identity selector is the intermediary between the user, the IDP, and the SP, as it retrieves the security policy after the user picks the card, completes the user authentication with the IDP on behalf of the user, and then forwards the security token to the SP to log the user in after receiving it from the IDP. The system guarantees the integrity of security tokens through an XML signature and preserves the confidentiality of the IDP and SP security policies by making transactions over an SSL/TLS channel. However, this model violates user privacy, as it requires presenting the user credentials to the identity selector. Another drawback of this model is that the user must carry out the authentication step every time before a token is issued [42]. Another research effort, conducted by the Liberty Alliance project, produced a single sign-on federated IDMS proposed in 2001. The project proposed several frameworks: the identity federation framework (ID-FF), the identity web services framework (ID-WSF), the identity service interface specification (ID-SIS), the Liberty identity assurance framework (LIAF), and the identity governance framework (IGF). The authentication and authorization frameworks were separated in the system. The user in the Liberty Alliance system was monitored by the IDP, as it knew all the service providers accessed by the user, which violated user privacy [6]. In [48], researchers introduced Shibboleth, a Federated IDMS, and its single sign-on framework, which does not, however, support single sign-off. The proposed system tries to increase user privacy by using a short-term, random ID to maintain anonymity. Unlike the previous project, the authentication and authorization frameworks can be combined. In Shibboleth, IDP discovery is performed by the SP using the WAYF technique, which can increase the risks to the user of connecting with a fake IDP after being redirected via a malicious SP. This also increases the risk of stolen credentials. The OpenID system is an open-source IDMS, released in 2005. It supports SSO and uses the concept of a global identifier to enable the user to contact any OpenID-enabled SP. The system does not use any proof of rightful possession, which makes it vulnerable to the risk of credential theft. In addition, it may create other risks, such as directing the user to a fake IDP via a malicious SP, and the risk of a man-in-the-middle (MITM) attack [44]. Reference [43] suggested two solutions in the implementation layer to improve the level of user authentication in a claim-based IDMS. A proof-of-authenticity method and a challenge-response method were suggested as solutions to the problem of the malicious IDP, which may cause considerable damage to the SP and the user. The authors suggested the proof-of-authenticity method as the first solution, which adds an additional authentication layer by having the SP create a random secret value (known only to the user and the SP) and send it to the user after each completed authentication. The challenge-response method is the second proposed solution, where the user has to accept a challenge sent by the SP and must respond with the expected result, computed by using a private signature key or a shared secret key between the SP and the user.
Both proposed solutions had a positive impact on solving the problem studied by the authors, where, in addition to enhancing user authentication, they also increased the level of privacy in the claim-based IDM system. The previously reviewed studies had many features that improve the quality and performance of the system, but they also had many challenges that violate user privacy, such as data disclosure [42], user monitoring and an increased risk of credentials being stolen [6], man-in-the-middle attacks, and fake parties [44]. These were in addition to the SPOF, which is one of the main issues associated with the centralized IDM approach. The next section presents state-of-the-art studies that adopted a decentralized approach for IDMS using blockchain technology.

_3.2. Blockchain-Based IDMSs_

In [13], the authors presented a new IDM approach based on a private blockchain, which aims to provide an efficient and simple protocol that meets all the needs of Internet of Things (IoT) organizations. The researchers implemented a Hyperledger Fabric for the smart homes model and wrote the chain codes using the Golang language. The main functions of the IDM systems are split into three phases to allow simultaneous execution: identity registration, identity verification, and identity revocation; the three phases employed smart contracts to interact with the blockchain. The author discussed how this approach would enhance IoT entity communications by including a consortium membership service and identity management protocol. The author chose to use a private blockchain in the model to achieve more security and better scalability; however, in terms of characteristics, it was more like centralization than decentralization, and that increased the risk of SPOF and central authority issues. The authors in [49] developed a decentralized IDM system prototype using the Hyperledger Indy blockchain as a proof-of-concept in the public transportation sector, based on self-sovereign identity principles. The proposed system can reduce the need for multiple travel cards for people who travel frequently and who use several modes of transportation within multiple jurisdictions. The system aims to give users full identity control by creating a direct identity layer based on the principles of decentralization, using a blockchain-based IDM system to provide a Single European Transport for users. The proposed system will provide the ability to create many decentralized identifiers for any person, in addition to creating a key pair for each user so they can securely share the data. In [45], researchers proposed a blockchain-based decentralized IDM system for the public sector in South Korea by providing a mobile application by which to create electronic identity cards, issued and managed by a national central authority. The user stores their driver license on their device and verifies their identity through the app by using a one-time QR code. The client server in the system is developed by using Hyperledger Fabric V1.0 to increase the privacy level. Amazon Web Services (AWS) is used in the system to provide a faster process and increase efficiency. Data for any identity in the system is linked to a central government agency in South Korea to complete the identification process. User data is stored in a database in the form of key-value pairs on a hash map, in addition to the chain code.
The developer also used a modern user interface to make users feel more comfortable using the system. The application is very effective in using blockchain, but it appears to be centralized, even with blockchain, as the national central authority is the data manager; the license requirement in the verification process might also be a disadvantage, because such an application is not appropriate for many e-commerce systems or for obtaining online services, as licenses or other types of formal documents will be involved. The authors of [46] used smart contracts to design a cross-domain self-sovereign identity management system. The system contains three types of smart contracts, each one built to perform a specific function. The services smart contract (SSC) is the first and the base contract in the system, which controls the publishing of a user identity contract; it is created and published when the SP joins the system. The second is the identity smart contract (ISC), which is requested by the user from the SP after they have been identified and verified, and their address is recorded in the SSC. The ISC is controlled by the user after it is published. The Recovery Smart Contract (RSC) is also created at the same time: an RSC is automatically created for each ISC to give the user the ability to recover their lost password with the help of a list of friends. The system, as proposed by the designer, performs better compared to three other systems using the same concept, but it also has a limitation in that it uses the address of the ISC as a universally unique identifier (UUID), which is not readable by users; and, as the system stores the full attribute information on the user device, this will decrease the availability of information when the user is offline.
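As a rough illustration of this three-contract design, the following Python sketch (a schematic model under our own naming, not the authors' contract code; in particular, the majority-vote recovery rule is our assumption, since [46] only specifies recovery via a list of friends) mimics how a service contract publishes identity contracts and how a recovery contract restores control:

```python
class ServiceSmartContract:
    """SSC: published when an SP joins; controls publishing of identity contracts."""
    def __init__(self, sp_name: str):
        self.sp_name = sp_name
        self.registered_identities = []  # addresses of published ISCs

    def publish_identity(self, owner: str, friends: list) -> "IdentitySmartContract":
        isc = IdentitySmartContract(owner, friends)
        self.registered_identities.append(isc.address)
        return isc

class IdentitySmartContract:
    """ISC: controlled by the user after publication; holds identity attributes."""
    _next_address = 0

    def __init__(self, owner: str, friends: list):
        IdentitySmartContract._next_address += 1
        # The ISC address doubles as a UUID; note it is not human readable.
        self.address = f"0x{IdentitySmartContract._next_address:040x}"
        self.owner = owner
        self.attributes = {}  # in [46], full attribute data lives on the user's device
        self.recovery = RecoverySmartContract(self, friends)  # RSC auto-created per ISC

class RecoverySmartContract:
    """RSC: lets pre-registered friends vote to recover control of the ISC."""
    def __init__(self, isc: IdentitySmartContract, friends: list):
        self.isc, self.friends, self.votes = isc, set(friends), set()

    def vote_recover(self, friend: str, new_owner: str):
        if friend in self.friends:
            self.votes.add(friend)
        if len(self.votes) > len(self.friends) // 2:  # assumed quorum: simple majority
            self.isc.owner = new_owner
            self.votes.clear()

ssc = ServiceSmartContract("university-portal")
isc = ssc.publish_identity("alice", friends=["bob", "carol", "dave"])
isc.recovery.vote_recover("bob", "alice-new-key")
isc.recovery.vote_recover("carol", "alice-new-key")
print(isc.owner)  # 'alice-new-key' once a majority of friends have voted
```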
In the study presented in [47], a hybrid methodology was proposed as part of the Impilo project for data management in healthcare, combining a central database and a decentralized infrastructure (blockchain). The new approach tries to place ownership and management of data on the patient side to increase the security of electronic health records while keeping them shareable at the same time. Patient information is stored in a central database during the validation process, and the transaction is stored on the blockchain. The system operation begins by logging into the Impilo app and storing the registration information in a new file, and then communicating with the DB to store the medical information. The blockchain will generate a new hash, communicate with both sides, and then store the transaction details on the chain if the verification process is correctly completed. In this approach, the decryption key of medical information in the database is the user login password; so, if an attacker knows the user login password, they will have access to all the user information, and this decreases the security of the database. In [50], researchers proposed a framework to solve the centralization problem of access control and its related privacy and ethical issues, and to give users full control of their IoT devices. The proposed framework is based on two main concepts: a blockchain and a machine learning algorithm. The researchers addressed two problems in IoT environment access control: centralized access control (AC) and security policy management. The proposed framework distributes the security policy (a set of guidelines and security rules) in the blockchain by using a smart contract instead of storing it in a server, as in a traditional AC, and improves it by using an online learning mechanism of machine learning algorithms to solve the problem of a non-contextual security policy. An online learning machine is used to detect any AC rules which do not satisfy the security policy, or which may lead to any security threat. The authors in [36] used a private Ethereum network to design a cryptographic authentication scheme. The authors developed a smart contract and published it on a private chain, and then evaluated the scheme's functions by using web3j and a proof-of-security model. The research introduced a transitively closed undirected graph authentication (TCUGA) scheme to update the certificates by the signatory, with no re-signing process needed, by using a trapdoor hash function and allowing the administrator to prove the certificate relationships "even when they are not in the same equivalence class" after they are received from the signatory. A permissioned blockchain-based IDM user authentication scheme was introduced in [33] to solve key management and authentication issues in e-health systems by using a key distribution mechanism based on personal biometrics. The proposed system contains four main members: the founder, the user (U), the registration center (RC), and the medical server (MS), in addition to the smart contract that provides access control functions. Its security rests on two hard mathematical problems: the computational Diffie-Hellman problem (CDHP) and the discrete logarithm problem (DLP). The proposed scheme is provided with a mutual authentication equation and achieves anonymity by keeping the user's identity hidden. The designer tested the proposed system and guaranteed the security requirements by using the Scyther tool, which is an automatic verification tool for security protocols. An attempt to solve traditional banking issues by developing a blockchain-based IDM and access control (BIMAC) framework was presented in [51]. The researchers used an MVC (Model-View-Controller) structure for this purpose. The implemented framework improved the user experience by allowing a user to log in to many bank accounts without the need to remember all their accounts and passwords. The prototype applied the concept of self-sovereign identity in the open banking field and provided an efficient authentication framework. In [28], the authors tried to solve the problem of traffic disruption caused by malicious vehicles through incorrect information propagation. As a way to maintain privacy, they suggested using a blockchain-based authentication scheme and asymmetric key encryption to secure vehicle communication. Additionally, elliptic curve cryptography was used to increase transaction pseudonymity. According to the study of [34], it has been found that when cooperating with unauthorized vehicles, it is possible to steal information, compromise privacy, and exploit a variety of threats in terms of security. The authors proposed a blockchain-based Internet of Vehicles (IoV) protocol, developed on the Ethereum platform, to improve the privacy of vehicle data and relationships with the help of blockchain technology. However, too much IoV information stored in the blockchain will affect the system's scalability.
In addition, the paper [35] discussed the increased difficulty of managing certificates for vehicular communications, along with the cost of anonymizing vehicle identities. This study proposes a blockchain-based pseudonym management solution which has the ability to reuse existing pseudonyms in order to simplify pseudonym management. Additionally, in [30], the authors attempted to enhance vehicle privacy and trust relationships. Through the use of blockchain technology, these authors proposed a blockchain-based anonymous reputation system (BARS), which is based on a reputation evaluation algorithm. A proof-of-concept IoT identity management system for a business case scenario was implemented by the authors in [12], to ensure the integrity of the data provenance records in the organization-networked IoT resources using blockchain and smart contracts. The Solidity language is used to code the proposed blockchain model, and it is deployed in Kaleido. The authors of [21] proposed a Hyperledger Fabric blockchain system to enhance Modbus, one of the Industrial Internet of Things (IIoT) protocols that faces many security challenges, such as SPOFs. On-chain authentication and authorization are supported by the designed decentralized identity system. By providing both security and scalability for Modbus connections, it can be used in a system with more than one organization. Self-sovereign identity, blockchain, and InterPlanetary File System (IPFS) technologies were used by [17] to improve food supply chains. By using SSI concepts, the study proposed a way to manage certifications throughout the supply chain. A certificate is issued by a certifying body and stored in IPFS, with only some key information being stored on the chain; verifiers need this information to verify whether a certificate is valid in the chain. To improve supply chain security, the authors in [18] also implemented a Hyperledger Fabric framework to ensure each registered device in the supply chain is tracked and to improve system security. Furthermore, reference [19] proposed a supply chain traceability system, though this proposed system tracks and validates both sides of the transaction. Additionally, reference [20] used a permissioned blockchain network in order to take advantage of smart contract features and to increase supply chain management security. The proposed framework provides the user with control over the data and increases identity protection by using cryptographic proof. In recent years, telehealth has become a necessity, especially since the COVID-19 pandemic started. In [31], the authors addressed the problem of trusting e-health application service providers and not knowing whether they comply with regulations to ensure privacy and security. Blockchain technology was used to provide authentication and identification processes to users and service providers across a variety of health domains. A smart contract was implemented in the proposed system using Ethereum. In edge computing, the privacy and security of user data are two of the most important factors that need to be considered. As discussed in [22], the authors used smart contracts as a means of presenting the Access Management System by using blockchain technology. In order to improve privacy in the human-centric Internet of Things (HIoT), the authors in [15] proposed a verifiable anonymous identity management system (VAIM), through which they improved blockchain identity management and enhanced the unlinkability of the system by using zero-knowledge proof (ZKP) algorithms.
User privacy has been affected by third-party dependencies in identity management systems in a variety of fields, including the Internet of Things. An SPOF is also one of the most important issues resulting from third-party control. Using Hyperledger Fabric, the authors in [13] implemented a smart-home-based scenario architecture to improve the quality and efficiency of home sensors and to address IoT centralization issues. The proposed architecture divides the functions of the system into three main parts: registration, authorization, and revocation. The authors tried to improve the scalability of the system by splitting the functions. The authors in [32] attempted to solve the problem of electronic health records information being exposed, which poses a threat to the privacy of the users and those whose records are accessed. The authors implemented a proof of concept through the use of Hyperledger Fabric's permissioned blockchain technology to ensure anonymity for the EHR data and to enhance privacy for patients. Using a DNS-like approach, the authors of [23] proposed a DNS-IDMs architecture that is implemented on Ethereum's permissioned ledger. In order to enhance the privacy of the user, users and service providers would be able to create identity attribute claims and verify them using the services of real-world identity attribute benefactors. By using blockchain transactions, users can also control and manage their identities. There are many security challenges associated with large-scale IoT systems due to centralization concepts, such as unauthorized access requests to IoT-enabled devices, which are an access control issue. To make the system more flexible and adaptable, reference [14] implemented a private blockchain POC prototype using Ethereum and smart contracts. BlendCAC was the name of the framework proposed by the authors. An Ethereum-based IDM cloud protocol was proposed by [27], an improved version of CIDM (Consolidated Identity Management). The proposed protocol attempts to solve the third-party reliance problem in traditional identity management systems. Smart contracts were used in the proposed system to increase data transmission privacy and to enhance system flexibility. The authors of [24] provided a method that allows users to sign transactions using a different Ethereum identity in order to enhance user untraceability, by granting the user the right to delete their data and allowing them to discard their identity afterward. The proposed method represents identity through a web3js-based implementation, and data erasure can be requested by the user or at the end of a service. It was proposed in [25] that attribute trust could be enhanced by using an Attribute Trust-enhancing Identity Broker (ATIB) architecture, in order to enhance the aggregation of system attributes by following the ten SSI principles. As part of the proposed proof of concept, the service provider's role would be enhanced with the help of the protocol manager, which is the main component in the proposed architecture and will be able to support the implementation of many identity and access protocols in the system. In [26], the authors proposed a method of integrating distributed identity provider technology (OLYMPUS) with blockchain technology, utilizing smart contract technology as a means of evolving distributed identity provider technology. It was proposed that this architecture would improve system security and enhance the privacy of users. As a result of a combination of a cryptographic authentication scheme and blockchain technology, reference [36] proposed a transitively closed undirected graph authentication scheme (TCUGA). The proposed scheme manages vertices and edges, and it can prove the absence of any edge between two vertices. A permissioned blockchain was used with attribute-based access control (ABAC) and an identity-based signature (IBS) in order to improve the security of an Internet of Things system [16]. In this paper, a cross-domain blockchain-based IoT access control system was proposed to address some of the challenges related to IoT systems, such as SPOFs, information leaks, and Distributed Denial of Service (DDoS).
As a result of a combination of a cryptographic authentication scheme and blockchain technology, reference [36] proposed a transitively closed undirected graph authentication scheme (TCUGA). The proposed scheme manages vertices and edges, and it can prove the absence of any edge between two vertices. A permissioned blockchain was used with attribute-based access control (ABAC) and an identity-based signature (IBS) in order to improve the security of an Internet of Things system [16]. In this paper, a cross-domain blockchain-based IoT access control system was proposed to address some of the challenges related to IoT systems, such as SPOFs, information leaks, and Distributed Denial of Service (DDoS). ----- _Appl. Sci. 2022, 12, 12415_ 11 of 20 By adopting an existing technology, the authors in [33] enhanced E-health identity authentication and solved some major security issues, including reply attack and an MITM attack. In order to provide a secure mutual authentication and key distribution system, the proposed authentication scheme is implemented in permissioned blockchains. A fine-grained AC scheme was proposed in [29] to enhance Vehicular Ad Hoc Network (VANET) data sharing. In order to increase data sharing security and decrease SPOFs, a combination of blockchain technology, IPFS, and ciphertext-based attribute encryption (CP-ABE) is proposed. A smart contract is also used in the proposed scheme in order to increase the scalability of the systems. In [52], a private blockchain was used to help the agricultural sector and farmers in India to ensure that their communication with their customers can take place directly with them without any intervention from third parties in the process. The proposed model was built on Hyperledger Fabric to enable direct communication between the farmer and the customer at the same time. **4. Method** To achieve the study’s key aim of exploring the use of a public blockchain platform to integrate the principle of decentralization with IDMS, we conducted a systematic review following the Preferred Reporting Items for Systematic Reviews and Meta-Analysis (PRISMA) guidelines, which help in analyzing the steps of the systematic review by identifying specific and clear research questions, and following a specific methodology to obtain answers through the use of a sample of research papers that are determined by of exclusion and inclusion criteria [53]. For this purpose, we selected previous studies that use blockchain technology on IDMs. Further elaboration on research and selection strategy explanations is given below: _4.1. Research Need Identification_ An objective of this systematic literature review is to examine how blockchain-based systems can be used to enhance privacy, as well as improve a system by eliminating or reducing centralization issues in trading systems, such as SPOF risks, central authority issues, and third-party control risks. _4.2. Research Questions_ Q1: What are the current issues that threaten user privacy and security in centralized IDMSs? Q2: Will decentralizing identity management by using distributed ledger technology solve user privacy problems, and if so, why? Q3: What are the blockchain-based technologies that may be utilized to enhance user privacy? Q4: What is the most efficient blockchain-based development platform for IDMSs? _4.3. Information Source and Database_ We selected multiple databases for the information sources, as shown in Table 1. 
The literature review was limited to research studies published between 2018 and 2022.

**Table 1. Information Source.**

| Database | Website |
| --- | --- |
| IEEE Xplore Digital Library | [https://www.ieee.org](https://www.ieee.org) |
| MDPI | [https://www.mdpi.com](https://www.mdpi.com) |

_4.4. Research String_

The research strings are described in Table 2.

**Table 2. Research String.**

| Database | Keywords | NO | Open Access | After Deleting Duplicate | After Reading Paper |
| --- | --- | --- | --- | --- | --- |
| IEEE | "Identity management systems AND blockchain" | 319 | 38 | 38 | 14 |
| IEEE | "Identity management system AND smart contract" | 101 | 11 | 2 | 0 |
| IEEE | "Ethereum AND identity management system" | 41 | 5 | 4 | 0 |
| MDPI | "Identity management systems AND blockchain" | 26 | 26 | 26 | 11 |
| MDPI | "Identity management system AND smart contract" | 7 | 7 | 1 | 0 |
| MDPI | "Ethereum AND identity management system" | 2 | 2 | 0 | 0 |

_4.5. Criteria Selection_

The study only included research written in the English language from 2018 until the present day. In addition, survey papers or systematic review papers were not considered. Instead, papers that proposed systems were considered, as shown in Table 3.

_4.6. Inclusion and Exclusion Criteria_

We followed the PRISMA flow diagram in the study selection process, as shown in Figure 1, and by following the inclusion and exclusion criteria of the current systematic review described in Table 3, the authors extracted approximately 496 studies relevant to blockchain-based IDM systems. Following the two main inclusion criteria, only 71 papers fulfilled the research aims. After downloading and reading the abstracts, 46 more papers were excluded during screening. Only 26 research articles were assessed and recognized against the research criteria. The current systematic review followed the PRISMA standards for data extraction and selection, as shown in Figure 1.

**Figure 1. PRISMA flow diagram of the study selection process.**

**Table 3. Criteria selection.**

| Inclusion Criteria | Exclusion Criteria |
| --- | --- |
| Written in English | Studies written in other languages |
| Studies from 2018 until now | Studies before 2018 |
| Original research paper | Survey, systematic review papers |
| Proposed solution implemented | Proposed solutions not implemented |
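The inclusion/exclusion step in Table 3 amounts to a simple predicate over each candidate study. The following Python sketch (field names are illustrative choices of our own, not a real screening tool) expresses it:

```python
from dataclasses import dataclass

@dataclass
class Study:
    title: str
    language: str
    year: int
    is_survey_or_review: bool
    solution_implemented: bool

def is_included(s: Study) -> bool:
    # Mirrors Table 3: English, 2018 or later, original paper, implemented solution.
    return (
        s.language == "en"
        and s.year >= 2018
        and not s.is_survey_or_review
        and s.solution_implemented
    )

candidates = [
    Study("Blockchain IDM for IoT", "en", 2020, False, True),   # kept
    Study("A survey of SSI systems", "en", 2021, True, True),   # excluded: survey
    Study("Pre-2018 federated IDM", "en", 2016, False, True),   # excluded: year
]
selected = [s for s in candidates if is_included(s)]
print([s.title for s in selected])  # ['Blockchain IDM for IoT']
```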
**5. Results and Discussion**

In this section, the research review will be discussed, and the results are presented in detail. The results are presented in multiple sub-sections according to the field to which they belong.

_5.1. Study Characteristics_

The current systematic review focused on developing blockchain-based solutions for privacy and security issues in IDMSs. To highlight important characteristics of the reviewed studies, we designed Tables A1 and A2 for the two databases considered in this study. Each table contains Title, Author with Year, Type, Publisher, the use of BC, and the use of SC. Due to the role blockchain types play in solving existing research problems, the tables indicate which type of blockchain was used in each research, in addition to the possibility of using smart contracts.

_5.2. Discussion and Result_

In this section, we present the information collected from the research papers after the systematic review. In Section 5.2.1, we review the domains in which blockchain technology was adopted to enhance the privacy and security of IDM; Section 5.2.2 then discusses the blockchain types and technologies that were applied to address different issues related to privacy and security, in order to highlight the best practices and efficient solutions, as well as to provide an understanding of the potential solutions that can be offered by blockchain technologies. Section 5.2.3 discusses the research and issues addressed via smart contracts technology, as it represents a cornerstone and powerful blockchain technology that can effectively contribute to developing efficient solutions for problems relevant to the privacy issue. Finally, in Section 5.2.4, the research questions are answered in detail.

5.2.1. Domain

The current systematic review surveyed the previously proposed IDMS solutions that adopted a decentralized approach. Previous literature has illustrated that the use of blockchain technology improved the security and privacy of IDMSs in many domains, such as IoT [12–16]; supply chain [17–20]; AC and identity management in [21–26]; cloud IDM in [27]; ad-hoc networks (VANET) in [28–30]; healthcare in [31–33]; the internet of connected vehicles in [34,35]; and even undirected graph authentication, as discussed in [36].

5.2.2. Issues and Blockchain Type

This section sheds light on the different blockchain types adopted in previous research and the security issues addressed by each type. This assists in understanding the potential solutions that can be addressed by particular blockchain types or technologies. The majority of the reviewed studies adopted access control and IDM to find solutions for system issues by using the Ethereum blockchain type. In [27], the IDMS adopted by cloud users relies too much on third-party services. Studies published in [24] and [18] suffered from third-party issues, especially trackability, and both used Ethereum in their solutions. In [20], the authors used an Ethereum-based IDM protocol as a solution for the U.S. beef cattle supply chain. By utilizing the Ethereum blockchain, the authors in [19] provided a solution for identifying the root cause of system problems. An Ethereum-based food supply chain system was proposed in [17]. Other studies have also used the Ethereum blockchain type to improve their systems, such as [14,23,29,31,34,36]. Other types of blockchain have also been used in some of the studies reviewed. A permissioned blockchain was used in [15] as a solution for the same third-party issue in a different domain. Trust relationships between SPs, users, and IDPs in ABC systems have many privacy concerns, and the authors in [26] tried to improve this by using Hyperledger technology. The latter blockchain type was used by [16] to solve three main issues: (1) single point of failure; (2) privacy information leak; (3) Distributed Denial of Service (DDoS) attack on the delegate node. In addition, [30] preserved a vehicle's identity privacy by using blockchain to prevent fake message distribution. Communication and computational overheads in healthcare systems were discussed by [33], using a permissioned blockchain to improve them.
The reviewed studies proposed solutions to enhance and improve centralized systems by using blockchain technology in different ways, but there are still open issues that need to be addressed and enhanced, such as the scalability of blockchain-based IDMS platforms, system usability, and privacy enhancement.

5.2.3. Smart Contract

Smart contracts are a very important concept in the field of blockchains. They provide many important features to enhance system functionality and to increase the speed of operations. In the current review, only seven research papers did not use smart contracts in their proposed solutions: [15,21,25,28,30,32,35]. On the other hand, 18 research papers adopted smart contracts to provide more efficient solutions for the privacy problems in IDMS: [12–14,16–20,22–24,26,27,29,31,33,34,36]. The analysis of statistics related to the previous research shows that there has been an increase in the number of publications over recent years that adopted blockchain technology in the field of IDMS, as depicted in Figure 2. In terms of the blockchain type, the analysis results presented in Figure 3 show that Ethereum has been more frequently used than the other types of blockchain. There are several reasons for this. The smart contract is one of the most important components of an Ethereum system's development and improvement. The Solidity Language is another important reason, along with the fact that Ethereum is involved in several applications, the most important of which is the DApp.

**Figure 2. Years and number of publications.** [bar chart of the number of reviewed publications per year, 2018–2022]

There has been a significant increase in identity control in the proposed blockchain-based systems because of the third-party limitations caused by the decentralization feature. The system is powerful and operates faster when it is using smart contracts, as they are self-executed codes, but there is some uncertainty about the security of the stored data. As a result, there have been many research papers on identity management systems that are trying to reduce the different risks and to mitigate cyberattacks encountered in this field.

**Figure 3. Blockchain Types.** [pie chart of the blockchain types used across the reviewed studies (generic blockchain, Ethereum, and Hyperledger Fabric), with shares of 24%, 36%, and 40%]

It can be seen from the research articles shown in Tables A1 and A2 that blockchain technology, the underlying technology for decentralized IDMSs, has been proposed as an effective solution for privacy and security issues in a variety of fields, such as IoT, supply chains, ad-hoc networks, cloud IDM, healthcare, the internet of connected vehicles, and access control. Previous research has illustrated that blockchain is a powerful technology and has many features that may effectively contribute to enhancing user privacy and increasing the level of self-control over personal data in the field of IDM and relevant applications.

5.2.4. Research Questions and Answers

Q1: What are the current issues that threaten user privacy and security in centralized IDMSs?

Central identity management systems suffer from certain privacy issues, as discussed in the previous section. One of the most important problems is centralization, since it relies upon one central party, which results in the high risk of an SPOF. Third-party control is considered one of the most important threats in centralized systems, since the user is under the control of a third party, which can compromise their privacy, such as monitoring their movements and studying their behavior.

Q2: Will decentralizing identity management by using distributed-ledger technology solve user privacy problems, and if so, why?

Decentralization of identity management by using distributed ledger technology addresses the problem of a SPOF because copies of the system are distributed over multiple peers. As the peers constantly compare and verify the validity of the copies, when one fails, the rest discover the error and recopy the system in the correct chain. Furthermore, the technology provides the smart contract, which plays a major role in limiting the control of third parties, as tasks are assigned to the smart contract, and the tasks are automatically executed without the intervention of any third parties.

Q3: What are the blockchain-based technologies that may be utilized to enhance user privacy?

Using the smart contract as an intermediary to carry out tasks between the parties enhances the privacy of the parties, since, for example, users can send tokens through the smart contract to a service provider, whose tokens have attributes certified by third parties. Since a smart contract acts as an intermediary, third parties and service providers cannot track user relations or actions. Additionally, the user can control how much data is shown in each token created for a service provider through a smart contract. The smart contract can also be used to track all the viewers of the token data by recording their addresses and the time they viewed it. So, yes, this technology enhances user privacy.
Since a smart contract acts as an intermediary, third parties and service providers cannot track user relations or actions. Additionally, the user can control how much data is shown in each token created for a service provider through a smart contract. The smart contract can also be used to track all the viewers of the token data by recording their addresses and the time they viewed it. So, yes, this technology enhances user privacy.

Q4: What is the most efficient blockchain-based development platform for IDMSs?

As a result of the research, most of the applications used the public blockchain (Ethereum) because it is open source and has smart contract technology. Furthermore, Ethereum works with a special currency called Ether and has a special programming language called Solidity.

**6. Conclusions**

In the domain of IDM, the adoption of distributed ledger technology has attracted attention due to its ability to enhance user privacy and address issues such as the SPOF and third-party control. The current work reviewed recent research papers in the area of identity management systems, both traditional ones and those which have adopted blockchain technology. Many articles covering IDM and blockchain technologies were reviewed in this research. Much of the reviewed research attempts to provide the user with increased identity control by trying to solve third-party control issues, address the SPOF, and avoid fake message distribution.

Furthermore, the review of previous research about IDMSs showed that there are still open issues relating to user privacy in the traditional centralized IDMSs, including third-party control and user movement monitoring or tracking, in addition to the problem of the SPOF. This prompted the need to search for an efficient solution that enhances user privacy in IDMSs and avoids other problems associated with the decentralized approach. Decentralized IDM using blockchain has many advantages, including solving the problem of third-party control by giving each user full control of their private information and activities, improving performance, and saving time by using smart contracts and other blockchain features. In addition, the use of a blockchain-based IDMS can avoid the SPOF and ensure that data and services are available to legitimate parties when needed. However, blockchain-based solutions that use a private type have some weaknesses related to privacy, and they inherit certain problems from the centralized approach. In addition, the use of weak authentication methods is a significant issue that needs to be addressed in recently proposed blockchain-based IDMSs.

The systematic literature review presented in this paper discussed and analyzed the recent solutions and current challenges in the field of IDM, while concentrating on the contributions made by using blockchain technology. This aims to provide a better understanding of the role and significance of adopting blockchain technologies in the field of IDM and the advances that can be achieved using this powerful technology. Moreover, the current review attempts to identify the research gaps and open issues, and to motivate future research works that may utilize the promising features of blockchain in improving user privacy and addressing other challenges in the field of IDM.
As part of our future work, we intend to implement a system prototype for a decentralized identity management system utilizing the Ethereum blockchain to solve the problems identified in this research and to assess its advantages and disadvantages.

**Author Contributions:** Writing—original draft preparation, H.A.; writing—review and editing, H.A., M.A.; supervision, M.A. All authors have read and agreed to the published version of the manuscript.

**Funding:** The authors extend their appreciation to the Deanship of Scientific Research at IMSIU for funding and supporting this work through the Graduate Student Research Support Program.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** The literature review was limited to research studies published in the IEEE and MDPI databases. https://www.ieee.org. https://www.mdpi.com.

**Acknowledgments:** The authors acknowledge the support from Imam Mohammad ibn Saud Islamic University (IMSIU) for this research. The authors extend their appreciation to the Deanship of Scientific Research at IMSIU for funding and supporting this work through the Graduate Student Research Support Program.

**Conflicts of Interest:** The authors declare no conflict of interest.

**Appendix A. Included Studies**

**Table A1.** MDPI included studies.

| Study NO | Title | Authors | Year | Type | Publisher | BC Used and Field | Smart Contract |
|---|---|---|---|---|---|---|---|
| [28] | EBAS: An Efficient Blockchain-Based Authentication Scheme for Secure Communication in Vehicular Ad Hoc Network | Xia Feng et al. | 2022 | article | MDPI | Blockchain; secure communication in VANETs | no |
| [12] | Developing an IoT Identity Management System Using Blockchain | Sitalakshmi Venkatraman et al. | 2022 | article | MDPI | Blockchain; IoT | yes |
| [21] | Modbus Access Control System Based on SSI over Hyperledger Fabric Blockchain | Santiago Figueroa-Lorenzo et al. | 2021 | article | MDPI | Hyperledger Fabric blockchain; Modbus access control | no |
| [17] | Blockchain and Self Sovereign Identity to Support Quality in the Food Supply Chain | Luisanna Cocco et al. | 2021 | article | MDPI | Ethereum blockchain; food supply chain | yes |
| [31] | Health-ID: A Blockchain-Based Decentralized Identity Management for Remote Healthcare | Ibrahim Tariq Javed et al. | 2021 | article | MDPI | Ethereum consortium blockchain; e-health | yes |
| [22] | Blockchain-Enabled Access Management System for Edge Computing | Yong Zhu et al. | 2021 | article | MDPI | Blockchain; edge computing | yes |
| [34] | A Blockchain-Based Authentication Protocol for Cooperative Vehicular Ad Hoc Network | A. F. M. Suaib Akhter et al. | 2021 | article | MDPI | Ethereum blockchain; internet of vehicles (IoV) | yes |
| [13] | A Lightweight Blockchain-Based IoT Identity Management Approach | Mohammed Amine Bouras et al. | 2021 | article | MDPI | Consortium blockchain-based identity management (implemented with Hyperledger Fabric); IoT | yes |
| [32] | A Privacy-Preserving Healthcare Framework Using Hyperledger Fabric | Charalampos Stamatellis et al. | 2020 | article | MDPI | Hyperledger Fabric's permissioned blockchain framework; healthcare | no |
| [23] | DNS-IdM: A Blockchain Identity Management System to Secure Personal Data Sharing in a Network | Jamila Alsayed Kassem et al. | 2019 | article | MDPI | Private Ethereum network (permissioned Ethereum ledger); IDM | yes |
| [14] | BlendCAC: A Smart Contract-Enabled Decentralized Capability-Based Access Control Mechanism for the IoT | Ronghua Xu et al. | 2018 | article | MDPI | Private Ethereum blockchain; AC in IoT devices | yes |
**Table A2.** IEEE included studies.

| Study NO | Title | Authors | Year | Type | Publisher | BC Used and Field | Smart Contract |
|---|---|---|---|---|---|---|---|
| [27] | EIDM: A Ethereum-Based Cloud User Identity Management Protocol | Shangping Wang et al. | 2019 | article | IEEE | Ethereum blockchain; cloud IDM | yes |
| [24] | Burnable Pseudo-Identity: A Non-Binding Anonymous Identity Method for Ethereum | Iván Gutiérrez-Agüero et al. | 2021 | article | IEEE | Ethereum; anonymous identity | yes |
| [35] | Pseudonym Management Through Blockchain: Cost-Efficient Privacy Preservation on Intelligent Transportation Systems | Shihan Bao et al. | 2019 | article | IEEE | Blockchain; internet of connected vehicles | no |
| [15] | VAIM: Verifiable Anonymous Identity Management for Human-Centric Security and Privacy in the Internet of Things | Gyeongjin Ra et al. | 2021 | article | IEEE | Permissioned blockchain; human internet of things (HIoT) | no |
| [25] | ATIB: Design and Evaluation of an Architecture for Brokered Self-Sovereign Identity Integration and Trust-Enhancing Attribute Aggregation for the Service Provider | Andreas Grüner et al. | 2021 | article | IEEE | Blockchain; IDM (attribute aggregation) | no |
| [26] | A Trusted Approach for Decentralised and Privacy-Preserving Identity Management | Rafael Torres Moreno et al. | 2021 | article | IEEE | Hyperledger Fabric; IDM | yes |
| [36] | A New Transitively Closed Undirected Graph Authentication Scheme for Blockchain-Based Identity Management Systems | Chao Lin et al. | 2018 | article | IEEE | Ethereum; undirected graphs | yes |
| [16] | Blockchain-Based IoT Access Control System: Towards Security, Lightweight, and Cross-Domain | Shuang Sun et al. | 2021 | article | IEEE | Hyperledger Fabric permissioned blockchain; IoT AC | yes |
| [33] | A Permissioned Blockchain-Based Identity Management and User Authentication Scheme for E-Health Systems | Xinyin Xiang et al. | 2020 | article | IEEE | Permissioned blockchain; e-health systems | yes |
| [29] | FADB: A Fine-Grained Access Control Scheme for VANET Data Based on Blockchain | Hui Li et al. | 2020 | article | IEEE | Ethereum; vehicular ad hoc networks (VANETs) | yes |
| [18] | A Blockchain-Based Framework for Supply Chain Provenance | Pinchen Cui et al. | 2019 | article | IEEE | Hyperledger Fabric permissioned blockchain; supply chain | yes |
| [19] | Smart Contract-Based Product Traceability System in the Supply Chain Scenario | Shangping Wang et al. | 2019 | article | IEEE | Ethereum; supply chain | yes |
| [30] | A Privacy-Preserving Trust Model Based on Blockchain for VANETs | Zhaojun Lu et al. | 2018 | article | IEEE | Blockchain; vehicular ad hoc networks (VANETs) | no |
| [20] | A Permissioned Distributed Ledger for the US Beef Cattle Supply Chain | Tanvir Ferdousi et al. | 2020 | article | IEEE | Permissioned blockchain network (Ethereum); beef cattle supply chain | yes |

**References**

1. L'Amrani, H.; Berroukech, B.; Ajhoun, R.; El Idrissi, Y. Identity Management Systems: Laws of Identity for Models' Evaluation. In Proceedings of the 2016 4th IEEE International Colloquium on Information Science and Technology (CiSt), Tangier, Morocco, 24–26 October 2016.
2. Liu, Y.; He, D.; Obaidat, M.; Kumar, N.; Khan, M.; Choo, K. Blockchain-based identity management systems: A review. _J. Netw. Comput. Appl._ 2020, 166, 102731. [CrossRef](http://doi.org/10.1016/j.jnca.2020.102731)
3. Agudo, I. Digital Identity and Identity Management Technologies. _Serb. Publ. InfoReview Joins UPENET Netw. CEPIS Soc. J. Mag._ 2010, 6.
4. Jøsang, A.; AlZomai, M.; Suriadi, S. Usability and Privacy in Identity Management Architectures.
In Proceedings of the Fifth Australasian Symposium on Grid Computing and e-Research (AusGrid 2007), the Fifth Australasian Information Security Workshop (Privacy Enhancing Technologies) (AISW 2007), and the Australasian Workshop on Health Knowledge Management and Discovery (HKMD 2007), Ballarat, VIC, Australia, 30 January–2 February 2007.
5. Panait, A.; Olimid, R.; Stefane, A. Identity Management on Blockchain—Privacy and Security Aspects. _Proc. Rom. Acad.-Ser. A Math. Phys. Tech. Sci. Inf. Sci._ 2021, 21, 45–52.
6. Alrodhan, W. _Privacy and Practicality of Identity Management Systems: Academic Overview_; Vdm Verlag Dr. Müller: Saarbrücken, Germany, 2011.
7. Lim, S.Y.; Tankam Fotsing, P.; Almasri, A.; Musa, O.; Mat Kiah, M.L.; Ang, T.F.; Ismail, R. Blockchain Technology the Identity Management and Authentication Service Disruptor: A Survey. _Int. J. Adv. Sci. Eng. Inf. Technol._ 2018, 8, 1735. Available online: http://insightsociety.org/ojaseit/index.php/ijaseit/article/view/6838 (accessed on 15 August 2022). [CrossRef]
8. Almeshal, T.A.; Alhogail, A.A. Blockchain for Businesses: A Scoping Review of Suitability Evaluations Frameworks. _IEEE Access_ 2021, 9, 155425–155442. [CrossRef](http://doi.org/10.1109/ACCESS.2021.3128608)
9. Zhu, X. Research on blockchain consensus mechanism and implementation. _IOP Conf. Ser. Mater. Sci. Eng._ 2019, 569, 042058. [CrossRef](http://doi.org/10.1088/1757-899X/569/4/042058)
10. Maldonado, F.C. _Introduction to Blockchain and Ethereum: Use Distributed Ledgers to Validate Digital Transactions in a Decentralized and Trustless Manner_; Packt Publishing: Birmingham, UK, 2018.
11. Joshi, J.; Nepal, S.; Zhang, Q.; Zhang, L. Blockchain—ICBC 2019. In Proceedings of the Second International Conference, held as Part of the Services Conference Federation, SCF 2019, San Diego, CA, USA, 25–30 June 2019; Springer: Cham, Switzerland, 2019.
12. Bao, Z.; Wang, Q.; Shi, W.; Wang, L.; Lei, H.; Chen, B. When Blockchain Meets SGX: An Overview, Challenges, and Open Issues. _IEEE Access_ 2020, 8, 170404–170420. [CrossRef](http://doi.org/10.1109/ACCESS.2020.3024254)
13. Bouras, M.A.; Lu, Q.; Dhelim, S.; Ning, H. A Lightweight Blockchain-Based IoT Identity Management Approach. _Future Internet_ 2021, 13, 24. [CrossRef](http://doi.org/10.3390/fi13020024)
14. Xu, R.; Chen, Y.; Blasch, E.; Chen, G. BlendCAC: A Smart Contract Enabled Decentralized Capability-Based Access Control Mechanism for the IoT. _Computers_ 2018, 7, 39. [CrossRef](http://doi.org/10.3390/computers7030039)
15. Ra, G.; Kim, T.; Lee, I. VAIM: Verifiable Anonymous Identity Management for Human-Centric Security and Privacy in the Internet of Things. _IEEE Access_ 2021, 9, 75945–75960. [CrossRef](http://doi.org/10.1109/ACCESS.2021.3080329)
16. Sun, S.; Du, R.; Chen, S.; Li, W. Blockchain-Based IoT Access Control System: Towards Security, Lightweight, and Cross-Domain. _IEEE Access_ 2021, 9, 36868–36878. [CrossRef](http://doi.org/10.1109/ACCESS.2021.3059863)
17. Cocco, L.; Tonelli, R.; Marchesi, M. Blockchain and Self Sovereign Identity to Support Quality in the Food Supply Chain. _Future Internet_ 2021, 13, 301. [CrossRef](http://doi.org/10.3390/fi13120301)
18. Cui, P.; Dixon, J.; Guin, U.; Dimase, D. A Blockchain-Based Framework for Supply Chain Provenance. _IEEE Access_ 2019, 7, 157113–157125. [CrossRef](http://doi.org/10.1109/ACCESS.2019.2949951)
19. Wang, S.; Li, D.; Zhang, Y.; Chen, J. Smart Contract-Based Product Traceability System in the Supply Chain Scenario. _IEEE Access_ 2019, 7, 115122–115133. [CrossRef](http://doi.org/10.1109/ACCESS.2019.2935873)
20. Ferdousi, T.; Gruenbacher, D.; Scoglio, C.M. A Permissioned Distributed Ledger for the US Beef Cattle Supply Chain. _IEEE Access_ 2020, 8, 154833–154847. [CrossRef](http://doi.org/10.1109/ACCESS.2020.3019000)
21. Figueroa-Lorenzo, S.; Añorga Benito, J.; Arrizabalaga, S. Modbus Access Control System Based on SSI over Hyperledger Fabric Blockchain. _Sensors_ 2021, 21, 5438. [CrossRef](http://doi.org/10.3390/s21165438)
22. Zhu, Y.; Huang, C.; Hu, Z.; Al-Dhelaan, A.; Al-Dhelaan, M. Blockchain-Enabled Access Management System for Edge Computing. _Electronics_ 2021, 10, 1000. [CrossRef](http://doi.org/10.3390/electronics10091000)
23. Alsayed Kassem, J.; Sayeed, S.; Marco-Gisbert, H.; Pervez, Z.; Dahal, K. DNS-IdM: A Blockchain Identity Management System to Secure Personal Data Sharing in a Network. _Appl. Sci._ 2019, 9, 2953. [CrossRef](http://doi.org/10.3390/app9152953)
24. Gutierrez-Aguero, I.; Anguita, S.; Larrucea, X.; Gomez-Goiri, A.; Urquizu, B. Burnable Pseudo-Identity: A Non-Binding Anonymous Identity Method for Ethereum. _IEEE Access_ 2021, 9, 108912–108923. [CrossRef](http://doi.org/10.1109/ACCESS.2021.3101302)
25. Gruner, A.; Muhle, A.; Meinel, C. ATIB: Design and Evaluation of an Architecture for Brokered Self-Sovereign Identity Integration and Trust-Enhancing Attribute Aggregation for Service Provider. _IEEE Access_ 2021, 9, 138553–138570. [CrossRef](http://doi.org/10.1109/ACCESS.2021.3116095)
26. Moreno, R.T.; Garcia-Rodriguez, J.; Bernabe, J.B.; Skarmeta, A. A Trusted Approach for Decentralised and Privacy-Preserving Identity Management. _IEEE Access_ 2021, 9, 105788–105804. [CrossRef](http://doi.org/10.1109/ACCESS.2021.3099837)
27. Wang, S.; Pei, R.; Zhang, Y. EIDM: A Ethereum-Based Cloud User Identity Management Protocol. _IEEE Access_ 2019, 7, 115281–115291. [CrossRef](http://doi.org/10.1109/ACCESS.2019.2933989)
28. Feng, X.; Cui, K.; Jiang, H.; Li, Z. EBAS: An Efficient Blockchain-Based Authentication Scheme for Secure Communication in Vehicular Ad Hoc Network. _Symmetry_ 2022, 14, 1230. [CrossRef](http://doi.org/10.3390/sym14061230)
29. Li, H.; Pei, L.; Liao, D.; Chen, S.; Zhang, M.; Xu, D. FADB: A Fine-Grained Access Control Scheme for VANET Data Based on Blockchain. _IEEE Access_ 2020, 8, 85190–85203. [CrossRef](http://doi.org/10.1109/ACCESS.2020.2992203)
30. Lu, Z.; Liu, W.; Wang, Q.; Qu, G.; Liu, Z. A Privacy-Preserving Trust Model Based on Blockchain for VANETs. _IEEE Access_ 2018, 6, 45655–45664. [CrossRef](http://doi.org/10.1109/ACCESS.2018.2864189)
31. Javed, I.T.; Alharbi, F.; Bellaj, B.; Margaria, T.; Crespi, N.; Qureshi, K.N. Health-ID: A Blockchain-Based Decentralized Identity Management for Remote Healthcare. _Healthcare_ 2021, 9, 712. [CrossRef](http://doi.org/10.3390/healthcare9060712)
32. Stamatellis, C.; Papadopoulos, P.; Pitropakis, N.; Katsikas, S.; Buchanan, W.J. A Privacy-Preserving Healthcare Framework Using Hyperledger Fabric. _Sensors_ 2020, 20, 6587. [CrossRef](http://doi.org/10.3390/s20226587) [PubMed]
33. Xiang, X.; Wang, M.; Fan, W. A Permissioned Blockchain-Based Identity Management and User Authentication Scheme for E-Health Systems. _IEEE Access_ 2020, 8, 171771–171783. [CrossRef](http://doi.org/10.1109/ACCESS.2020.3022429)
34. Akhter, A.F.M.S.; Ahmed, M.; Shah, A.F.M.S.; Anwar, A.; Kayes, A.S.M.; Zengin, A. A Blockchain-Based Authentication Protocol for Cooperative Vehicular Ad Hoc Network. _Sensors_ 2021, 21, 1273. [CrossRef](http://doi.org/10.3390/s21041273) [PubMed]
35. Bao, S.; Cao, Y.; Lei, A.; Asuquo, P.; Cruickshank, H.; Sun, Z.; Huth, M. Pseudonym Management Through Blockchain: Cost-Efficient Privacy Preservation on Intelligent Transportation Systems. _IEEE Access_ 2019, 7, 80390–80403. [CrossRef](http://doi.org/10.1109/ACCESS.2019.2921605)
36. Lin, C.; He, D.; Huang, X.; Khurram Khan, M.; Choo, K.-K.R. A New Transitively Closed Undirected Graph Authentication Scheme for Blockchain-Based Identity Management Systems. _IEEE Access_ 2018, 6, 28203–28212. [CrossRef](http://doi.org/10.1109/ACCESS.2018.2837650)
37. de Ponteves, H.; Eremenko, K.; Ligency Team. Blockchain A-Z™: Learn How To Build Your First Blockchain. September 2021. Available online: https://www.udemy.com/course/build-your-blockchain-az/#instructor-1 (accessed on 11 June 2022).
38. Shobanadevi, A.; Tharewal, S.; Soni, M.; Kumar, D.D.; Khan, I.R.; Kumar, P. Novel identity management system using smart blockchain technology. _Int. J. Syst. Assur. Eng. Manag._ 2022, 13 (Suppl. 1), 496–505. [CrossRef](http://doi.org/10.1007/s13198-021-01494-0)
39. Lastovetska, A. Blockchain Architecture Basics: Components, Structure, Benefits & Creation. 5 January 2021. Available online: https://mlsdev.com/blog/156-how-to-build-your-own-blockchain-architecture (accessed on 1 November 2022).
40. Buterin, V. The Meaning of Decentralization. Medium, 2017. Available online: https://medium.com/@VitalikButerin/the-meaning-of-decentralization-a0c92b76a274 (accessed on 26 October 2022).
41. Wüst, K. Do you need a Blockchain? In Proceedings of the Crypto Valley Conference on Blockchain Technology (CVCBT), Zug, Switzerland, 20–22 June 2018.
42. Alrodhan, W.; Mitchell, C. Improving the Security of CardSpace. _EURASIP J. Inf. Secur._ 2009, 2009, 1–8. [CrossRef](http://doi.org/10.1155/2009/167216)
43. Alrodhan, W.; Mitchell, C. Enhancing User Authentication in Claim-Based Identity Management. In Proceedings of the 2010 International Symposium on Collaborative Technologies and Systems, Chicago, IL, USA, 17–21 May 2010.
44. Dai, Z.; Zhou, W. _The Federated Identity and Access Management Architectures: A Literature Survey_; Deakin University, School of Information Technology: Geelong, VIC, Australia, 2005.
45. Sung, C.; Park, J. Understanding of blockchain-based identity management system adoption in the public sector. _J. Enterp. Inf. Manag._ 2021, 34, 1481–1505. [CrossRef](http://doi.org/10.1108/JEIM-12-2020-0532)
46. Niu, J.; Ren, Z. A self-sovereign identity management scheme using smart contracts. _MATEC Web Conf._ 2021, 336, 08005. [CrossRef](http://doi.org/10.1051/matecconf/202133608005)
47. Bouras, M.; Lu, Q.; Zhang, F.; Wan, Y.; Zhang, T.; Ning, H. Distributed Ledger Technology for eHealth Identity Privacy: State of the Art and Future Perspective. _Sensors_ 2020, 20, 483. [CrossRef](http://doi.org/10.3390/s20020483)
48. Ferdous, M.S.; Poet, R. A Comparative Analysis of Identity Management Systems.
In Proceedings of the 2012 International Conference on High Performance Computing & Simulation (HPCS), Madrid, Spain, 2–6 July 2012.
49. Stockburger, L.; Kokosioulis, G.; Mukkamala, A.; Mukkamala, R.; Avital, M. Blockchain-enabled Decentralized Identity Management: The Case of Self-sovereign Identity in Public Transportation. _Blockchain Res. Appl._ 2021, 2, 100014. [CrossRef](http://doi.org/10.1016/j.bcra.2021.100014)
50. Outchakoucht, A.; Es-Samaali, H. Dynamic Access Control Policy based on Blockchain and Machine Learning for the Internet of Things. _Int. J. Adv. Comput. Sci. Appl._ 2017, 8, 417–424. [CrossRef](http://doi.org/10.14569/IJACSA.2017.080757)
51. Liao, C.H.; Guan, X.Q.; Cheng, J.H.; Yuan, S.M. Blockchain-Based Identity Management and Access Control Framework for Open Banking Ecosystem. pp. 450–466. Available online: https://ssrn.com/abstract=4039865 (accessed on 5 October 2022).
52. Desabathina, N.V.M.; Merugu, S.; Gunjan, V.K.; Kumar, B.S. Agricultural Crowdfunding Through Blockchain. In _ICDSMLA 2020_; Kumar, A., Senatore, S., Gunjan, V.K., Eds.; Lecture Notes in Electrical Engineering; Springer: Singapore, 2022; Volume 783. [CrossRef](http://doi.org/10.1007/978-981-16-3690-5_155)
53. Tetzlaff, J.; Page, M.; Moher, D. PNS154 The PRISMA 2020 statement: Development of and key changes in an updated guideline for reporting systematic reviews and meta-analyses. _Value Health_ 2020, 23, S312–S313. [CrossRef](http://doi.org/10.1016/j.jval.2020.04.1154)

-----
{ "disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.3390/app122312415?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.3390/app122312415, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "CCBY", "status": "GOLD", "url": "https://www.mdpi.com/2076-3417/12/23/12415/pdf?version=1670316403" }
2,022
[ "Review" ]
true
2022-12-04T00:00:00
[ { "paperId": "e93cdaa30a2a08ef4cd1b02c016d40cca07c6395", "title": "EBAS: An Efficient Blockchain-Based Authentication Scheme for Secure Communication in Vehicular Ad Hoc Network" }, { "paperId": "e6fa0025972678f7911c7555e374738ba8da67ec", "title": "Blockchain-based identity management and access control framework for open banking ecosystem" }, { "paperId": "a02fbb024d20025ac6481106d1b43e81c992a93b", "title": "Novel identity management system using smart blockchain technology" }, { "paperId": "94666e9231860cad4c6314c4cbfd6e9fb14313ef", "title": "Blockchain and Self Sovereign Identity to Support Quality in the Food Supply Chain" }, { "paperId": "dda4fdfe5324d17aa37a1a3e1874c08e0169cc14", "title": "Agricultural Crowdfunding Through Blockchain" }, { "paperId": "315d1f7b6effbc6edfe35cb1c8072f3b3bed59fe", "title": "Understanding of blockchain-based identity management system adoption in the public sector" }, { "paperId": "91c4e342f6ce4e2086e19e82d2c790efde91225d", "title": "Modbus Access Control System Based on SSI over Hyperledger Fabric Blockchain" }, { "paperId": "e63a9d5738e4c3f9d6ea71f290b8d72099bf8be7", "title": "Health-ID: A Blockchain-Based Decentralized Identity Management for Remote Healthcare" }, { "paperId": "8ee6a504f7217b8797606e38b32ef9524b1eb9a8", "title": "Blockchain-Enabled Decentralized Identify Management: The Case of Self-Sovereign Identity in Public Transportation" }, { "paperId": "8516ef3c32c934fd8f731efd954043fae390fa88", "title": "Blockchain-Enabled Access Management System for Edge Computing" }, { "paperId": "c334ad5237bca17aa678e40fa1876e6f067bfa40", "title": "A Blockchain-Based Authentication Protocol for Cooperative Vehicular Ad Hoc Network" }, { "paperId": "713a182d25a452966f9a68cc71888bf8a6b2daf4", "title": "A Lightweight Blockchain-Based IoT Identity Management Approach" }, { "paperId": "4e5dae79996e1e6f77e726f00f0ba0aa15ae7a8c", "title": "A Privacy-Preserving Healthcare Framework Using Hyperledger Fabric" }, { "paperId": "a31f659a46cecb0652d4b54f6ef7a303b0d87795", "title": "FADB: A Fine-Grained Access Control Scheme for VANET Data Based on Blockchain" }, { "paperId": "ce99d3c263a4596bdb9298f760dd4def8d38fe5f", "title": "PNS154 THE PRISMA 2020 STATEMENT: DEVELOPMENT OF AND KEY CHANGES IN AN UPDATED GUIDELINE FOR REPORTING SYSTEMATIC REVIEWS AND META-ANALYSES" }, { "paperId": "496a450d5921c13a369fb885c8c19400d5d27c00", "title": "Identity Management on Blockchain - Privacy and Security Aspects" }, { "paperId": "d324c7d451061edb5f2d8e04afb051058bd5b4da", "title": "Distributed Ledger Technology for eHealth Identity Privacy: State of The Art and Future Perspective" }, { "paperId": "cfa7edded27c9342abe3cd1f35ee137332c0c4f2", "title": "A Blockchain-Based Framework for Supply Chain Provenance" }, { "paperId": "d91a912d8a959eb57ed494a0fac48441094b73a9", "title": "Smart Contract-Based Product Traceability System in the Supply Chain Scenario" }, { "paperId": "5e6ff0d5a5808045905c7e86ecdf0b6d5bc0364e", "title": "Research on blockchain consensus mechanism and implementation" }, { "paperId": "e9747eace33602d3f4b5f5c0fd35e33c31b022cc", "title": "DNS-IdM: A Blockchain Identity Management System to Secure Personal Data Sharing in a Network" }, { "paperId": "f9a498c61698444c7b0924a84bc6a31a7436f9b7", "title": "Pseudonym Management Through Blockchain: Cost-Efficient Privacy Preservation on Intelligent Transportation Systems" }, { "paperId": "11a5fa69fd443dce30eab25309c2727e45427834", "title": "Blockchain Technology the Identity Management and Authentication Service Disruptor: A 
Survey" }, { "paperId": "6cc50a2fb28f7c8f810a5e1c919686e2d6bf9ed2", "title": "A Privacy-Preserving Trust Model Based on Blockchain for VANETs" }, { "paperId": "83bc879120207d575fa92dbbc1d34f40351e6085", "title": "BlendCAC: A Smart Contract Enabled Decentralized Capability-Based Access Control Mechanism for the IoT" }, { "paperId": "ad9760ea1568263d4f670edc52e8d91875c95e42", "title": "Do you Need a Blockchain?" }, { "paperId": "54099b438aa0c825a009db810a5d9d86f1140311", "title": "A New Transitively Closed Undirected Graph Authentication Scheme for Blockchain-Based Identity Management Systems" }, { "paperId": "2fb015911e45a39153c2006b35001a86f75f9c31", "title": "A comparative analysis of Identity Management Systems" }, { "paperId": "2b925752f1085029c4124e5e0f360c431403902c", "title": "Privacy and Practicality of Identity Management Systems: Academic Overview" }, { "paperId": "79834e55efa027feb52d492cb346b4211104c7d4", "title": "Enhancing user authentication in claim-based identity management" }, { "paperId": "31f8d61d98c0192f44a1ccd85fbf23ec77e645f4", "title": "Usability and Privacy in Identity Management Architectures" }, { "paperId": "63a9681663778913982f8f028e170be5ff36f532", "title": "The Meaning of Decentralization" }, { "paperId": "73940007299f38fad090f45357c11d168a4f1de1", "title": "ICDSMLA 2020" }, { "paperId": "eead3bf6565da57f622af2f160018a6db4311856", "title": "A self-sovereign identity management scheme using smart contracts" }, { "paperId": "b2949146c822d2b0b3ad9f61e63f2f12df7b47ff", "title": "A Trusted Approach for Decentralised and Privacy-Preserving Identity Management" }, { "paperId": "69501a0df726a5061079df41703ccc3a6cd7e4ed", "title": "Blockchain for Businesses: A Scoping Review of Suitability Evaluations Frameworks" }, { "paperId": "cf80de04004d28c24ea2e551759e7e25ea288f37", "title": "VAIM: Verifiable Anonymous Identity Management for Human-Centric Security and Privacy in the Internet of Things" }, { "paperId": "67b86a0ed3ff564cae9e58d2cf1b2aee4ee141bd", "title": "ATIB: Design and Evaluation of an Architecture for Brokered Self-Sovereign Identity Integration and Trust-Enhancing Attribute Aggregation for Service Provider" }, { "paperId": "e907694d7bb6cdfda680d19066e0cf135b3a6a81", "title": "Burnable Pseudo-Identity: A Non-Binding Anonymous Identity Method for Ethereum" }, { "paperId": "46eee47d659397cc5aa0df54544c24ce78948ab4", "title": "Blockchain-Based IoT Access Control System: Towards Security, Lightweight, and Cross-Domain" }, { "paperId": null, "title": "Ligency Team. 
Blockchain A-Z™: Learn How To Build Your First Blockchain" }, { "paperId": null, "title": "Blockchain Architecture Basics: Components, Structure, Benefits & Creation" }, { "paperId": "811e4ec25ae9558b5daf057a8a3d60c603c3cbb7", "title": "A Permissioned Distributed Ledger for the US Beef Cattle Supply Chain" }, { "paperId": "68751a6ada0290576996ce34cf229e1a4a582625", "title": "A Permissioned Blockchain-Based Identity Management and User Authentication Scheme for E-Health Systems" }, { "paperId": "c60306cf289468331d8e4ec5b0b4a9d77eb58e38", "title": "When Blockchain Meets SGX: An Overview, Challenges, and Open Issues" }, { "paperId": "20f0822a8a354809c7b34181e80a8c99412c08f3", "title": "EIDM: A Ethereum-Based Cloud User Identity Management Protocol" }, { "paperId": "1ddae1695eb04990be95ff9038a218cfd4f14bb3", "title": "Blockchain – ICBC 2019" }, { "paperId": null, "title": "Introduction to Blockchain and Ethereum: Use Distributed Ledgers to Validate Digital Transactions in a Decentralized and Trustless Manner" }, { "paperId": "c3bd5f09b1f13c365c78101f65a77acb41dfcc20", "title": "Dynamic Access Control Policy based on Blockchain and Machine Learning for the Internet of Things" }, { "paperId": null, "title": "Identity Management Systems: Laws of Identity for Models (cid:48) Evaluation" }, { "paperId": "13ab07ab629844cae21f2ed86ff8e32bd355f1e2", "title": "Digital Identity and Identity Management Technologies" }, { "paperId": "8e63dfee41b2a8d2baccda16577e7a6857486bac", "title": "Improving the Security of CardSpace" }, { "paperId": null, "title": "The Federated Identity and Access Management Architectures: A Literature Survey" }, { "paperId": "667cc1ac858c058b6f55797d427bb5174f712dba", "title": "Journal of Network and Computer Applications" } ]
20,319
en
[ { "category": "Mathematics", "source": "external" }, { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Mathematics", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/02280183f81323acabd93adf831183a26abe12c0
[ "Mathematics", "Computer Science" ]
0.866114
A variant of Wiener’s attack on RSA
02280183f81323acabd93adf831183a26abe12c0
Computing
[ { "authorId": "1759823", "name": "A. Dujella" } ]
{ "alternate_issns": null, "alternate_names": null, "alternate_urls": null, "id": "48433268-b689-457a-8e57-36895fd4f04e", "issn": "0010-485X", "name": "Computing", "type": "journal", "url": "https://link.springer.com/journal/607" }
Wiener’s attack is a well-known polynomial-time attack on a RSA cryptosystem with small secret decryption exponent d, which works if d < n0.25, where n = pq is the modulus of the cryptosystem. Namely, in that case, d is the denominator of some convergent pm/qm of the continued fraction expansion of e/n, and therefore d can be computed efficiently from the public key (n, e). There are several extensions of Wiener’s attack that allow the RSA cryptosystem to be broken when d is a few bits longer than n0.25. They all have the run-time complexity (at least) O(D2), where d = Dn0.25. Here we propose a new variant of Wiener’s attack, which uses results on Diophantine approximations of the form |α − p/q| <  c/q2, and “meet-in-the-middle” variant for testing the candidates (of the form rqm+1 +  sqm) for the secret exponent. This decreases the run-time complexity of the attack to O(D log D) (with the space complexity O(D)).
## A variant of Wiener's attack on RSA

### Andrej Dujella

Abstract

Wiener's attack is a well-known polynomial-time attack on a RSA cryptosystem with small secret decryption exponent d, which works if d < n^{0.25}, where n = pq is the modulus of the cryptosystem. Namely, in that case, d is the denominator of some convergent p_m/q_m of the continued fraction expansion of e/n, and therefore d can be computed efficiently from the public key (n, e). There are several extensions of Wiener's attack that allow the RSA cryptosystem to be broken when d is a few bits longer than n^{0.25}. They all have the run-time complexity (at least) O(D^2), where d = Dn^{0.25}. Here we propose a new variant of Wiener's attack, which uses results on Diophantine approximations of the form |α − p/q| < c/q^2, and a "meet-in-the-middle" variant for testing the candidates (of the form rq_{m+1} + sq_m) for the secret exponent. This decreases the run-time complexity of the attack to O(D log D) (with the space complexity O(D)).

2000 Mathematics Subject Classification: Primary 94A60; Secondary 11A55, 11J70.
Key words: RSA cryptosystem, continued fractions, cryptanalysis

## 1 Introduction

The most popular public key cryptosystem in use today is the RSA cryptosystem, introduced by Rivest, Shamir, and Adleman [8]. Its security is based on the intractability of the integer factorization problem. The modulus n of a RSA cryptosystem is the product of two large primes p and q. The public exponent e and the secret exponent d are related by

    ed ≡ 1 (mod ϕ(n)),    (1)

where ϕ(n) = (p − 1)(q − 1). In a typical RSA cryptosystem, p and q have approximately the same number of bits, while e < n. The encryption and decryption algorithms are given by C = M^e mod n, M = C^d mod n.

To speed up the RSA decryption one may try to use a small secret decryption exponent d. The choice of a small d is especially interesting when there is a large difference in computing power between two communicating devices, e.g. in communication between a smart card and a larger computer. In this situation, it would be desirable that the smart card has a small secret exponent, while the larger computer has a small public exponent, to reduce the processing required in the smart card.

In 1990, Wiener [13] described a polynomial time algorithm for breaking a typical (i.e. p and q are of the same size and e < n) RSA cryptosystem if the secret exponent d has at most one-quarter as many bits as the modulus n. From (1) it follows that there is an integer k such that ed − kϕ(n) = 1. Since ϕ(n) ≈ n, we have that k/d ≈ e/n. Wiener's attack is usually described in the following form (see [2, 9]):

If p < q < 2p, e < n and d < (1/3) n^{1/4}, then d is the denominator of some convergent of the continued fraction expansion of e/n.

Indeed, under these assumptions it is easy to show that

    |e/n − k/d| < 1/(2d^2).

By the classical Legendre's theorem, k/d is some convergent p_m/q_m of the continued fraction expansion of e/n, and therefore d can be computed efficiently from the public key (n, e). Namely, the total number of convergents is of order O(log n), and each convergent can be tested in polynomial time.
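The convergent test just described is mechanical enough to sketch in a few lines. The following is a minimal Python sketch of the classical attack, not the paper's optimized implementation (which is in PARI and C++, see Section 4); the toy key in the usage example is an illustrative assumption, with the primes chosen so that d < (1/3)n^{1/4} holds.

```python
def convergents(e, n):
    """Yield the convergents p_m/q_m of the continued fraction of e/n."""
    p_prev, p_cur = 0, 1   # p_{-2}, p_{-1}
    q_prev, q_cur = 1, 0   # q_{-2}, q_{-1}
    while n:
        a = e // n                        # next partial quotient a_m
        p_prev, p_cur = p_cur, a * p_cur + p_prev
        q_prev, q_cur = q_cur, a * q_cur + q_prev
        yield p_cur, q_cur                # candidate (k, d) = (p_m, q_m)
        e, n = n, e % n

def wiener(e, n):
    """Return d if some convergent denominator of e/n decrypts correctly."""
    c = pow(2, e, n)                      # "ciphertext" of M = 2
    for _, d in convergents(e, n):
        if d > 1 and pow(c, d, n) == 2:   # decryption check on M = 2
            return d
    return None

# Toy usage: build a key with a deliberately small d, then recover it.
p, q = 104723, 104729                     # small primes with p < q < 2p
n, phi = p * q, (p - 1) * (q - 1)
d = 101                                   # d < (1/3) * n**0.25; gcd(d, phi) = 1
e = pow(d, -1, phi)                       # modular inverse (Python 3.8+)
assert wiener(e, n) == d
```

Each candidate denominator is tested here with the decryption check that Section 3 below calls Method II; Method I (recovering p and q from the candidate ϕ(n)) would work equally well.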
In 1997, Verheul and van Tilborg [12] proposed an extension of Wiener's attack that allows the RSA cryptosystem to be broken when d is a few bits longer than n^{0.25}. For d > n^{0.25} their attack needs to do an exhaustive search for about 2t + 8 bits (under reasonable assumptions on involved partial convergents), where t = log_2(d/n^{0.25}). In [4], we proposed a slight modification of the Verheul and van Tilborg attack, based on Worley's result on Diophantine approximations [14], which implies that all rationals p/q satisfying the inequality

    |α − p/q| < c/q^2,    (2)

for a positive real number c, have the form

    p/q = (r p_{m+1} ± s p_m) / (r q_{m+1} ± s q_m)    (3)

for some m ≥ −1 and nonnegative integers r and s such that rs < 2c. It has been shown recently in [5] that Worley's result is sharp, in the sense that the condition rs < 2c cannot be replaced by rs < (2 − ε)c for any ε.

In both mentioned extensions of Wiener's attack, the candidates for the secret exponent are of the form d = r q_{m+1} + s q_m. Then we test all possibilities for d. The number of possibilities is roughly the product of the number of possibilities for r and the number of possibilities for s, which is O(D^2), where d = Dn^{0.25}. More precisely, the number of possible pairs (r, s) in the Verheul and van Tilborg attack is O(D^2 A^2), where A = max{a_i : i = m+1, m+2, m+3}, while in our variant the number of pairs is O(D^2 log A) (and also O(D^2 log D)). Another modification of the Verheul and van Tilborg attack has been recently proposed by Sun, Wu and Chen [11]. It requires (heuristically) an exhaustive search for about 2t − 10 bits, so its complexity is also O(D^2). We cannot expect drastic improvements here, since, by a result of Steinfeld, Contini, Wang and Pieprzyk [10], there does not exist an attack in this class with subexponential running time.

Boneh and Durfee [3] and Blömer and May [1] proposed attacks based on Coppersmith's lattice-based technique for finding small roots of modular polynomial equations using the LLL-algorithm. The attacks work if d < n^{0.292}. The conjecture is that the right bound below which a typical version of RSA is insecure is d < n^{0.5}.

In the present paper, we propose a new variant of Wiener's attack. It also uses continued fractions and searches for candidates for the secret key in the form d = r q_{m+1} + s q_m. However, the searching phase of this variant is significantly faster. Its complexity is O(D log D), and it works efficiently for d < 2^{30} n^{0.25}. Although this bound is asymptotically weaker than the bounds in the above mentioned attacks based on the LLL-algorithm (note however that these bounds are not strictly proved since Coppersmith's theorem in the bivariate case is only a heuristic result - see also [6, 7]), for practical values of n (e.g. for 1024 bits) these bounds are of comparable size.

## 2 The Verheul and van Tilborg attack

In this section we briefly describe the Verheul and van Tilborg attack [12] and its modification from [4]. We assume that p < q < 2p and e < n. Then it is easy to see that

    |e/n − k/d| < 2.122 e / (n√n).    (4)

Let m be the largest (odd) integer satisfying p_m/q_m − e/n > 2.122 e / (n√n). Verheul and van Tilborg proposed to search for k/d among the fractions of the form (r p_{m+1} + s p_m) / (r q_{m+1} + s q_m). This leads to the system

    r p_{m+1} + s p_m = k,
    r q_{m+1} + s q_m = d.

The determinant of the system satisfies |p_{m+1} q_m − q_{m+1} p_m| = 1, and therefore the system has (positive) integer solutions:

    r = d p_m − k q_m,  s = k q_{m+1} − d p_{m+1}.

If r and s are small, then they can be found by an exhaustive search.
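For concreteness, that exhaustive search can be sketched as follows, in the same toy Python style as above. The convergent denominators q_m and q_{m+1} are assumed to have been computed from e/n already, and the bounds B_r and B_s are search parameters (the text derives concrete values for them next); the gcd filter anticipates the coprimality condition discussed in Section 4.

```python
from math import gcd

def brute_force_search(e, n, q_m, q_m1, B_r, B_s):
    """Test every candidate d = r*q_{m+1} + s*q_m with r < B_r, s < B_s.

    One modular exponentiation per pair (r, s): this is the O(B_r * B_s)
    search that the meet-in-the-middle idea of Section 3 speeds up.
    """
    c = pow(2, e, n)                      # Method II test value for M = 2
    for r in range(B_r):
        for s in range(B_s):
            if gcd(r, s) != 1:            # only coprime pairs matter (Section 4)
                continue
            d = r * q_m1 + s * q_m
            if d > 1 and pow(c, d, n) == 2:
                return d
    return None
```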
Let [a_0; a_1, a_2, . . .] be the continued fraction expansion of e/n and D = d/n^{0.25}. In [4], the following upper bounds for r and s were derived:

    r < max{ √(2.122 (a_{m+3} + 2)(a_{m+2} + 1)) · D, √(2.122 (a_{m+2} + 2)) · D },
    s < max{ 2 √(2.122 (a_{m+3} + 2)) · D, √(2.122 (a_{m+2} + 2)(a_{m+1} + 1)) · D }.

The modified attack proposed in [4] searches for k/d among the fractions of the forms (r p_{m+1} + s p_m)/(r q_{m+1} + s q_m), (r p_{m+2} − s p_{m+1})/(r q_{m+2} − s q_{m+1}) and (r p_{m+3} + s p_{m+2})/(r q_{m+3} + s q_{m+2}). This results in bounds for r and s which are (almost) independent of the partial quotients a_m. Hence, in both attacks the bounds for r and s are of the form O(D), but in the case of [4] the implied constants are much smaller (indeed, the table in Section 4 shows that with high probability we have r < 4D and s < 4D).

## 3 Testing the candidates

There are two principal methods for testing candidates for the secret exponent d.

Method I ([13]): Compute p and q, assuming d is the correct guess, using the following formulas:

    ϕ(n) = (de − 1)/k,
    p + q = n + 1 − ϕ(n),
    (q − p)^2 = (p + q)^2 − 4n,
    p = (p + q)/2 − (q − p)/2,  q = (p + q)/2 + (q − p)/2.

Method II ([9, Chapter 17]): Test the congruence

    (M^e)^d ≡ M (mod n)

for some random value of M, or simply for M = 2.

Both methods are very efficient. But in the situation where we have to test a huge amount of candidates for d of the form r q_{m+1} + s q_m, there is a significant difference between them. With Method I it seems that we cannot avoid testing separately all possible pairs (r, s). On the other hand, here we present a new idea, which is to apply "meet-in-the-middle" to Method II. We want to test whether

    2^{e(r q_{m+1} + s q_m)} ≡ 2 (mod n).    (5)

Note that m is (almost) fixed. Indeed, let m′ be the largest odd integer such that

    p_{m′}/q_{m′} > e/n + 2.122 e / (n√n).

Then m ∈ {m′, m′ + 1, m′ + 2} (see [4] for details). Let

    a = 2^{e q_{m+1}} mod n,  b = (2^{e q_m})^{−1} mod n.

Then we test the congruence

    a^r ≡ 2 b^s (mod n).    (6)

We can do it by computing a^r mod n for all r, sorting the list of results, and then computing 2 b^s mod n for each s one at a time, and checking if the result appears in the sorted list. This decreases the time complexity of the testing phase to O(D log D) (with the space complexity O(D)).
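A sketch of this meet-in-the-middle test, in the same style as the previous snippets; a Python dict plays the role of the sorted list or hash table, and any hit is only a candidate that should still be verified (e.g. with Method I), since accidental collisions are possible.

```python
def mitm_search(e, n, q_m, q_m1, B_r, B_s):
    """Find (r, s) with a^r = 2*b^s (mod n), i.e. congruence (6).

    Costs O(B_r + B_s) modular multiplications plus a lookup per s, and
    O(B_r) space, versus the O(B_r * B_s) exponentiations of brute force.
    """
    a = pow(2, e * q_m1, n)               # a = 2^{e*q_{m+1}} mod n
    b = pow(pow(2, e * q_m, n), -1, n)    # b = (2^{e*q_m})^{-1} mod n
    table = {}
    a_r = 1                               # running value of a^r mod n
    for r in range(B_r):
        table.setdefault(a_r, r)
        a_r = a_r * a % n
    rhs = 2 % n                           # running value of 2*b^s mod n
    for s in range(B_s):
        if rhs in table:
            r = table[rhs]
            return r * q_m1 + s * q_m     # candidate d; verify with Method I
        rhs = rhs * b % n
    return None
```

Section 4 below refines this picture: only a short hash of each a^r value needs to be stored, and the table can be organized to exploit the condition gcd(r, s) = 1.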
## 4 Implementation issues and improvements

The theoretic base for the extension of Wiener's attack is Worley's theorem on Diophantine approximations of the form (2). We have already mentioned a result from [5] which shows that Worley's result is in some sense the best possible. However, some improvements are possible if we consider unsymmetrical variants of Worley's result (with different bounds on r and s). Roughly speaking, in solutions of (2) in form (3), if r < s then we may take rs < c instead of rs < 2c. Due to such unsymmetrical results, a space-time tradeoff might be possible.

The following table shows the chance of success of our attack for various (symmetrical and unsymmetrical) bounds on r and s. We can see that, with the same bound for rs, better results are obtained for smaller bounds on r and larger bounds on s. In the implementations, this fact can be used to decrease the memory requirements (up to a factor of 16).

| bound for r | bound for s | chance of success |
|---|---|---|
| 4D | 4D | 98% |
| 2D | 2D | 89% |
| D | D | 65% |
| D | 4D | 86% |
| 4D | D | 74% |
| D/2 | 2D | 70% |
| 2D | D/2 | 47% |
| D/4 | 4D | 54% |
| 4D | D/4 | 28% |

In the implementation of the proposed attack, we can use hash functions instead of sorting. Furthermore, it is not necessary to store all bits of a^r mod n in the hash table. Indeed, values of a^r mod n are from the set {0, 1, . . . , n − 1}, and the number of r's is typically much smaller than n. Therefore, around 2 log_2 D stored bits will suffice in order to avoid too many accidental collisions. Note that a reasonable number of collisions is not a big problem here, since each such collision can be efficiently tested by Method I.

Hash tables can also be used to take into account the condition gcd(r, s) = 1. This condition was easy to use in brute-force testing of all possible pairs (r, s), but the direct application of our "meet-in-the-middle" variant seemingly ignores it. But if we create rows in the hash table according to divisibility properties of the exponents r modulo small primes, we may again take advantage of this condition and speed up the algorithm by up to 39%.

We have implemented several variants of the proposed attack in PARI and C++, and they work efficiently for values of D up to 2^{30}, i.e. for d < 2^{30} n^{0.25}. For larger values of D the memory requirements become too demanding for ordinary computers. The following table compares this bound with the bound on d in the best known attacks on RSA with small secret exponent based on the LLL-algorithm.

| log_2 n | log_2(2^{30} n^{0.25}) | log_2(n^{0.292}) |
|---|---|---|
| 512 | 158 | 150 |
| 768 | 222 | 224 |
| 1024 | 286 | 299 |
| 2048 | 542 | 598 |

The attack can also be slightly improved by using better approximations to k/d, e.g. e/(n + 1 − 2√n) instead of e/n. Namely,

    |e/(n + 1 − 2√n) − k/d| < 0.1221 e / (n√n).    (7)

Comparing (7) with (4), we see that by replacing e/n by e/(n + 1 − 2√n) we can gain a factor of 4 in the bounds for r and s (since √(2.122/0.1221) ≈ 4.2), so decreasing both time and memory requirements. With these improvements, for a 1024-bit RSA modulus n, the range in which our attack can be applied becomes comparable and competitive with the best known attacks based on the LLL-algorithm.

Acknowledgements. The author would like to thank Vinko Petričević for his help with the C++ implementation of the various variants of the attack described in this paper. The author was supported by the Ministry of Science, Education and Sports, Republic of Croatia, grant 037-0372781-2821.

## References

[1] J. Blömer, A. May, Low secret exponent RSA revisited, Cryptography and Lattices - Proceedings of CaLC 2001, Lecture Notes in Comput. Sci. 2146 (2001), 4–19.
[2] D. Boneh, Twenty years of attacks on the RSA cryptosystem, Notices Amer. Math. Soc. 46 (1999), 203–213.
[3] D. Boneh, G. Durfee, Cryptanalysis of RSA with private key d less than N^{0.292}, Advances in Cryptology - Proceedings of Eurocrypt '99, Lecture Notes in Comput. Sci. 1592 (1999), 1–11.
[4] A. Dujella, Continued fractions and RSA with small secret exponent, Tatra Mt. Math. Publ. 29 (2004), 101–112.
[5] A. Dujella, B. Ibrahimpašić, On Worley's theorem in Diophantine approximations, Ann. Math. Inform., to appear.
[6] J. Hinek, Low Public Exponent Partial Key and Low Private Exponent Attacks on Multi-prime RSA, Master's thesis, University of Waterloo, 2002.
[7] M. J. Hinek, M. K. Low, E. Teske, On some attacks on multi-prime RSA, Proceedings of SAC 2002, Lecture Notes in Comput. Sci. 2595 (2003), 385–404.
[8] R. L. Rivest, A. Shamir, L. Adleman, A method for obtaining digital signatures and public-key cryptosystems, Communications of the ACM 21 (1978), 120–126.
[9] N. Smart, Cryptography: An Introduction, McGraw-Hill, London, 2002.
[10] R. Steinfeld, S. Contini, H. Wang, J. Pieprzyk, Converse results to the Wiener attack on RSA, Public Key Cryptography - PKC 2005, Lecture Notes in Comput. Sci.
3386 (2005), 184–198.
[11] H.-M. Sun, M.-E. Wu, Y.-H. Chen, Estimating the Prime-Factors of an RSA Modulus and an Extension of the Wiener Attack, Applied Cryptography and Network Security, Lecture Notes in Comput. Sci. 4521 (2007), 116–128.
[12] E. R. Verheul, H. C. A. van Tilborg, Cryptanalysis of 'less short' RSA secret exponents, Appl. Algebra Engrg. Comm. Computing 8 (1997), 425–435.
[13] M. J. Wiener, Cryptanalysis of short RSA secret exponents, IEEE Trans. Inform. Theory 36 (1990), 553–558.
[14] R. T. Worley, Estimating |α − p/q|, J. Austral. Math. Soc. Ser. A 31 (1981), 202–206.

Andrej Dujella
Department of Mathematics
University of Zagreb
Bijenička cesta 30
10000 Zagreb, Croatia
E-mail address: duje@math.hr

-----
{ "disclaimer": "Notice: Paper or abstract available at https://arxiv.org/abs/0811.0063, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "CLOSED", "url": "" }
2,008
[ "JournalArticle" ]
false
2008-11-01T00:00:00
[ { "paperId": "9159e05170d7e2de593dca5f6318bd522b49f82a", "title": "A New Lattice Construction for Partial Key Exposure Attack for RSA" }, { "paperId": "7ed6709c43cc052fa779cbf2488a7d1dd2315986", "title": "Estimating the Prime-Factors of an RSA Modulus and an Extension of the Wiener Attack" }, { "paperId": "546db35cfd519822b9f635a10940fbccf08e6e2a", "title": "Partial Key Exposure Attacks on RSA up to Full Size Exponents" }, { "paperId": "5df59cf84251633a661a71bd2284a8914c4f3105", "title": "Converse Results to the Wiener Attack on RSA" }, { "paperId": "4c01e5b25b0dd9b86894c778051ab636fa429466", "title": "Cryptography: An Introduction" }, { "paperId": "efbee2d0be5391a2245539c23ce86794db82d1f2", "title": "Continued fractions and RSA with small secret exponent" }, { "paperId": "237624e4ebd4d23952680ef71be009ec4ad09389", "title": "On Some Attacks on Multi-prime RSA" }, { "paperId": "7dcaed1b0ac58978ba3c92f476d6cfe9edcf3e30", "title": "Low Secret Exponent RSA Revisited" }, { "paperId": "ccfafaf2ccf0e3607829d449b8cf3c444e85405d", "title": "Cryptanalysis of RSA with private key d less than N0.292" }, { "paperId": "8fa97e90498cf02bfd5e75b9f619795f452dde03", "title": "Cryptanalysis of ‘Less Short’ RSA Secret Exponents" }, { "paperId": "e1855c956271f84decba5972cf84e09e8932da89", "title": "Cryptanalysis of Short RSA Secret Exponents (Abstract)" }, { "paperId": "6408f394f424660f430fdd6fa32520fc5bdf0847", "title": "Estimating |α – p / q|" }, { "paperId": "b93923304173b662818588acc4554f7d46c9ed9c", "title": "On Worley's theorem in Diophantine approximations" }, { "paperId": "2d859e8937fe652558a60e82ffd39cd4ab835e31", "title": "On the security of some variants of rsa" }, { "paperId": null, "title": "Low Public Exponent Partial Key and Low Private Exponent Attacks on Multi-prime RSA" }, { "paperId": "e85c8789db8a9379c60a1621878222bcbe7a11a3", "title": "TWENTY YEARS OF ATTACKS ON THE RSA CRYPTOSYSTEM" }, { "paperId": "22d5ff0ad3dadaa713d07e00499587e76c80c3f2", "title": "Cryptanalysis of RSA with Private Key d Less Than N 0" }, { "paperId": null, "title": "A space-time tradeoff might be possible, by using unsymmetrical variants of Worley's result (with different bounds on r and s)" }, { "paperId": null, "title": "Andrej Dujella Department of Mathematics University of Zagreb Bijenička cesta 30" } ]
4,797
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Economics", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/02292be913a5c03ba6177bc9b7e85eaed6c26cf7
[ "Computer Science" ]
0.90421
Time is Money: Strategic Timing Games in Proof-of-Stake Protocols
02292be913a5c03ba6177bc9b7e85eaed6c26cf7
Conference on Advances in Financial Technologies
[ { "authorId": "2133346796", "name": "Caspar Schwarz-Schilling" }, { "authorId": "46459277", "name": "Fahad Saleh" }, { "authorId": "13227476", "name": "T. Thiery" }, { "authorId": "50617806", "name": "Jennifer Pan" }, { "authorId": "1737249", "name": "Nihar B. Shah" }, { "authorId": "9545425", "name": "B. Monnot" } ]
{ "alternate_issns": null, "alternate_names": [ "AFT", "Conf Adv Financial Technol" ], "alternate_urls": null, "id": "3f5bab6f-a198-4ad4-b2c1-f674f0d964d3", "issn": null, "name": "Conference on Advances in Financial Technologies", "type": "conference", "url": null }
We propose a model suggesting that honest-but-rational consensus participants may play timing games, and strategically delay their block proposal to optimize MEV capture, while still ensuring the proposal's timely inclusion in the canonical chain. In this context, ensuring economic fairness among consensus participants is critical to preserving decentralization. We contend that a model grounded in honest-but-rational consensus participation provides a more accurate portrayal of behavior in economically incentivized systems such as blockchain protocols. We empirically investigate timing games on the Ethereum network and demonstrate that while timing games are worth playing, they are not currently being exploited by consensus participants. By quantifying the marginal value of time, we uncover strong evidence pointing towards their future potential, despite the limited exploitation of MEV capture observed at present.
## Time is Money: Strategic Timing Games in Proof-of-Stake Protocols

Caspar Schwarz-Schilling[1], Fahad Saleh[2], Thomas Thiery[1], Jennifer Pan[3], Nihar Shah[3], and Barnabé Monnot[1]

1 Ethereum Foundation
```
{caspar.schwarz-schilling,thomas.thiery,barnabe.monnot}@ethereum.org
```
2 Wake Forest University
```
saleh@wfu.edu
```
3 Jump Crypto
```
{jpan,nshah}@jumptrading.com
```

**Abstract.** We propose a model suggesting that honest-but-rational consensus participants may play _timing games_, and strategically delay their block proposal to optimize MEV capture, while still ensuring the proposal's timely inclusion in the canonical chain. In this context, ensuring economic fairness among consensus participants is critical to preserving decentralization. We contend that a model grounded in honest-but-rational consensus participation provides a more accurate portrayal of behavior in economically incentivized systems such as blockchain protocols. We empirically investigate timing games on the Ethereum network and demonstrate that while timing games are worth playing, they are not currently being exploited by consensus participants. By quantifying the marginal value of time, we uncover strong evidence pointing towards their future potential, despite the limited exploitation of MEV capture observed at present.

### 1 Introduction

Consensus protocols are typically evaluated based on their ability to maintain liveness and safety [11], referring to the regular addition of new transactions to the output ledger in a timely manner, and to the security of confirmed transactions remaining in their positions within the ledger. However, beyond liveness and safety, blockchain protocols require fairness of economic outcomes amongst consensus participants to preserve decentralization. More specifically, a protocol should be designed to maximize the profitability of honest participation, wherein participants adhere to the prescribed rules. Otherwise, a deviating participant will outcompete their honest peers, leading to centralization of the validator set over time, with security implications for consensus itself.

However, the advent of Maximal Extractable Value (MEV) frustrates such fairness goals. Defined as the value that consensus participants, in their duties as block producers, accrue by selectively including, excluding, and ordering user transactions [14,6], MEV has equally substantial implications for the security of consensus protocols. For a system in which transaction fee rewards are dominant, consensus may become unstable due to increased variance in miner rewards [12]. Similarly, it was argued that a rational actor issuing a _whale transaction_ with an abnormally large transaction fee can convince peers to fork the current chain, further destabilizing consensus [22]. More broadly, understanding and mitigating the impact of MEV on the security and fairness of blockchain networks has become a central concern of protocol designers [30].

As the whale transaction highlights, potential MEV accrues over time as users submit transactions and the value of the set of pending transactions increases for the block producer. As a consequence, time is valuable to consensus participants, a feature obviated by the assumption of honest behavior in previous models of consensus. However, we argue that protocols which wish to preserve properties such as economic fairness amongst consensus participants must assume some share of honest-but-rational consensus participation.
In particular, the effects of MEV on the consensus participants' incentives must be better understood. In this paper, we investigate the possibility for block proposers to delay their block proposal as long as possible while ensuring they become part of the canonical chain, aiming to maximize MEV extraction. The reader may note that in Proof-of-Work (PoW)-based leader selection protocols, delaying a proposal bears the risk of losing to a competing block proposer. PoW protocols exhibit an inherent racing condition that prevents these types of strategic delay deviations, or at least makes them unprofitable in expectation. Thus, we investigate the implications of MEV on the incentives of consensus participants, particularly block proposers, in a Proof-of-Stake (PoS) context. More specifically, we consider propose-vote types of PoS protocols, where in each consensus round, one leader proposes a block, and a committee of consensus participants is selected in-protocol to vote on the acceptance of that block. This effectively grants block proposers a short-lived monopoly as the only valid proposer for some given round. During this time interval they can attempt to strategically deviate from their assigned block proposal time and delay the release of their block as long as possible in order to extract more MEV, while still ensuring that a sufficient share of attesters see the block in time to vote it into the canonical chain. This behavior leads to an environment in which honest validators earn less than their deviating counterparts, resulting in stake centralization and second-order effects for consensus stability.

_Related Work_ To the best of our knowledge, timing games have not been formally analyzed in previous literature on Proof-of-Stake. Selfish mining [19], studied in the context of Proof-of-Work, relies on appropriately timing the release of a block in order to waste the computation of honest miners and earn an outsize share of the rewards. Our timing games are also concerned with strategic behavior to capture a larger share of the total available rewards to consensus participants, yet do not feature the same dynamics as selfish mining in Proof-of-Work, since participants in many PoS-based consensus mechanisms are given a fixed time interval in which to perform their duties. The security of PoS-based mechanisms has been discussed in terms of chain growth [18] or focusing on the safety and liveness properties of hybrid protocols such as Gasper [10,24]. The economics literature has also examined Proof-of-Stake security with respect to particular attacks such as the double-spending attack [26] and 51% attacks [20]. Separately, incentive considerations in the presence of MEV led to the discovery of severe attacks on the Gasper consensus [28,25] and protocol changes to address such attacks [15,16,17].

_Our Contributions_ Our work models the value of time to consensus participants and explores the potential emergence of timing games in Proof-of-Stake protocols. By understanding the strategic behavior of consensus participants within this model, we gain insights into how these dynamics affect the robustness of consensus protocols to exogenous incentives, and ultimately fairness.
**–** Despite initial pessimism regarding the existence of equilibria in timing games [23], we formally show how to sustain equilibrium behavior, where it is individually irrational for proposers to deviate from a schedule enforced by attesters, and reward-sharing is fair among participants (Sections 2 and 3).
**–** We then investigate whether such timing games might occur in real-world systems (namely, the Ethereum network), using a large, granular data set recording the MEV offered to block proposers over time. We show incidental deviations from the honest protocol specification, highlighting the feasibility of timing games, yet we do not conclude on the existence of intentional timing games (Section 4).

### 2 Model

We model an infinite horizon game among block proposers and attesters. Time is partitioned into slots $n \in \mathbb{N}$, each of time length $\Delta > 0$. Each slot $n$ has a block proposer $n$ and a unit measure of attesters $A_n = \{A_{(i,n)}\}_{i \in [0,1]}$, where $A_{(i,n)}$ refers to the $i$th attester within slot $n$.⁴ The game evolves as follows:

**–** At the beginning of slot $n$, proposer $n$ acts by deciding whether to build on top of the block of proposer $n-1$ and also when to release their own block. More formally, proposer $n$ selects $\phi_n \in \{0,1\}$ and $t_n \ge n \cdot \Delta$, where $\phi_n = 1$ ($\phi_n = 0$) refers to proposer $n$ (not) building on top of the block of proposer $n-1$, and $t_n$ denotes the time at which proposer $n$ releases their own block. Note that we specify that proposer $n$ cannot release their block before the start of slot $n$ (i.e., $t_n \ge n \cdot \Delta$) but that they may release their block after the end of the slot.
**–** After proposer $n$ acts, all slot $n$ attesters act simultaneously. In particular, attester $A_{(i,n)}$ decides whether to attest to the block of proposer $n$ and also the time to release their attestation. More formally, attester $A_{(i,n)}$ selects $\nu_{(i,n)} \in \{0,1\}$ and $\tau_{(i,n)} \ge n \cdot \Delta$, where $\nu_{(i,n)} = 1$ ($\nu_{(i,n)} = 0$) refers to attester $A_{(i,n)}$ (not) attesting to proposer $n$'s block and $\tau_{(i,n)}$ refers to the time that they release their attestation. Notably, attester $A_{(i,n)}$ can attest to the block of proposer $n$ only if they receive the block before releasing their attestation. We let $\delta_{n,(i,n)} \sim \mathrm{Exp}(\theta^{-1})$ refer to the random time required for the block of proposer $n$ to reach attester $A_{(i,n)}$, where $\theta > 0$ denotes the average communication time across the network, and we assume that the slot length is at least double the average communication time across the network (i.e., $\Delta \ge 2\theta$). In turn, the action of attester $A_{(i,n)}$ is constrained by $\nu_{(i,n)} = 1 \implies \tau_{(i,n)} \ge t_n + \delta_{n,(i,n)}$.

⁴ Note that we have a continuum of attesters, rather than a discrete set. In Ethereum PoS, over 18,000 attesters emit a vote per slot (as of 2023-05-12).

**2.1** **Block proposers**

The pay-off for proposer $n$ is given as follows:

$$U^P(t_n, \phi_n) = \begin{cases} \alpha + \mu \cdot (t_n - t_{n^-})^+ & \text{if } \chi_n = 1 \\ 0 & \text{otherwise} \end{cases} \qquad (1)$$

where $\alpha, \mu > 0$ are exogenous constants, $t_{n^-}$ corresponds to the time of the most recent canonical block before slot $n$, and $\chi_n \in \{0,1\}$ corresponds to whether the block in slot $n$ is canonical on the blockchain. We introduce the conditions for a block to become canonical in our model in the following, and delay until Section 3.2 its interpretation with respect to established consensus models. Note that we assume that the reward of proposer $n$ increases linearly with time relative to the most recent canonical block so long as block $n$ eventually becomes canonical. This assumption reflects that proposer $n$ accrues incremental MEV over time by delaying the release of their block, but that they risk being skipped if they delay release for too long.
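To fix ideas, the following minimal Python sketch implements the pay-off in Eq. (1); the numeric values of $\alpha$ and $\mu$ are placeholders for illustration only (the model leaves them as exogenous constants).

```python
def proposer_payoff(t_n: float, t_prev_canonical: float, is_canonical: bool,
                    alpha: float = 0.04, mu: float = 0.0065) -> float:
    """Eq. (1): alpha + mu * (t_n - t_{n^-})^+ if block n becomes canonical,
    and zero otherwise. alpha and mu are exogenous constants of the model;
    the defaults here are arbitrary placeholder values."""
    if not is_canonical:
        return 0.0
    # (x)^+ is the positive part, so only waiting beyond the previous
    # canonical block's release time increases the reward.
    return alpha + mu * max(t_n - t_prev_canonical, 0.0)
```

The $(\cdot)^+$ operator is the positive part, so a longer wait relative to the previous canonical block strictly increases the reward whenever the block still makes it into the canonical chain.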
The time of the most recent canonical block, $t_{n^-}$, is endogenous, where slot $n^-$ refers to the most recent canonical slot and is thus given explicitly as follows:

$$n^- = \max\{k \in \mathbb{N} : \chi_k = 1,\ k \le n-1\} \qquad (2)$$

For a block to be canonical, we require both that it receives sufficiently many successful attestations and that the subsequent block producer builds on top of it. More formally, letting $\tilde{A}_n$ denote the successful attestations for block $n$, $\chi_n$ is given explicitly as follows:

$$\chi_n = \begin{cases} 1 & \text{if } \phi_{n+1} = 1,\ \tilde{A}_n \ge \gamma \\ 0 & \text{otherwise} \end{cases} \qquad (3)$$

where the number of successful attestations for block $n$ is given as the measure of attesters in slot $n$ voting for block $n$:

$$\tilde{A}_n = |\{i \in [0,1] : \nu_{(i,n)} = 1\}| \qquad (4)$$

**2.2** **Attesters**

Attester $(i,n)$ receives a pay-off if and only if two conditions are met:

**– Correctness:** A vote by attester $(i,n)$ is correct if their vote is consistent with the canonical blockchain. Recall that the vote of attester $(i,n)$ is given by $\nu_{(i,n)}$ and the eventual canonical status of the block is given by $\chi_n$; thus, this condition is equivalent to $\nu_{(i,n)} = \chi_n$.
**– Freshness:** A vote by attester $(i,n)$ is fresh if it was received by proposer $n+1$ soon enough that it could be included in the block in slot $n+1$ and the block in slot $n+1$ is eventually made canonical. We let $\delta_{(i,n),n+1} \sim \mathrm{Exp}(\theta^{-1})$ denote the random communication time between attester $(i,n)$ and proposer $n+1$, implying that the first part of this condition equates with $\tau_{(i,n)} + \delta_{(i,n),n+1} \le t_{n+1}$. Moreover, the second part of this condition equates with $\chi_{n+1} = 1$.

For exposition, we normalize the pay-off for attester $(i,n)$ to unity, implying that their pay-off function is given explicitly as follows:

$$U^A(\nu_{(i,n)}, \tau_{(i,n)}) = \begin{cases} 1 & \text{if } \nu_{(i,n)} = \chi_n,\ \tau_{(i,n)} + \delta_{(i,n),n+1} \le t_{n+1},\ \chi_{n+1} = 1 \\ 0 & \text{otherwise} \end{cases} \qquad (5)$$

### 3 Analysis

**3.1** **Equilibrium analysis**

There exists a multiplicity of Nash equilibria. In particular, attesters can coordinate to implement proposers acting at any particular time $\Delta^\star \in [0, \Delta]$ within the slot. Formally, we have the following result:

**Proposition 1. Multiple Equilibria**

_For any $\Delta^\star \in [0,\Delta]$, there exists an equilibrium as follows:_

_Proposer $n$ selects $t_n$ as follows:_

$$t_n = n \cdot \Delta + \Delta^\star \qquad (6)$$

_and selects $\phi_n$ as follows:_

$$\phi_n = \begin{cases} 1 & \text{if } t_{n-1} \le (n-1) \cdot \Delta + \Delta^\star \\ 0 & \text{otherwise} \end{cases} \qquad (7)$$

_Attester $(i,n)$ selects $\nu_{(i,n)}$ as follows:_

$$\nu_{(i,n)} = \begin{cases} 1 & \text{if (6) and (7) hold} \\ 0 & \text{otherwise} \end{cases} \qquad (8)$$

_and selects $\tau_{(i,n)}$ as follows:_

$$\tau_{(i,n)} = \begin{cases} t_n + \delta_{n,(i,n)} & \text{if (6) and (7) hold} \\ n \cdot \Delta & \text{otherwise} \end{cases} \qquad (9)$$

Proposition 1 arises because a proposer receives a zero pay-off unless her block earns sufficiently many attestations. In turn, if attesters coordinate on voting for a proposer's block only if the proposer releases her block at a particular time, then the proposer earns a strictly positive pay-off only if she releases her block at that particular time. Thus, since a proposer prefers a strictly positive pay-off to a zero pay-off, each proposer optimally releases her block at the release time on which attesters coordinate. As an aside, we emphasize that the referenced coordination by attesters is equilibrium behavior. In particular, an attester receives a strictly positive pay-off only if her attestation is correct, and her attestation is correct only if it agrees with the majority of attesters in her slot. As a consequence, when all other attesters vote in one direction, each attester optimally votes in that same direction to avoid a zero pay-off.
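As an illustration of why deviating is individually irrational, the self-contained sketch below evaluates the proposer's pay-off under the attester strategy (8): any release time other than $n \cdot \Delta + \Delta^\star$ loses all attestations and hence the block reward. The constants are again arbitrary placeholders, not values from the paper.

```python
DELTA = 12.0              # slot length (e.g., Ethereum's 12-second slots)
DELTA_STAR = 0.0          # within-slot release time enforced by attesters
ALPHA, MU = 0.04, 0.0065  # placeholder exogenous constants from Eq. (1)

def equilibrium_payoff(n: int, release_time: float) -> float:
    # Under Eq. (8), attesters vote only for on-schedule blocks, so any
    # deviation yields zero attestations and the block is not canonical.
    canonical = (release_time == n * DELTA + DELTA_STAR)
    prev_canonical = (n - 1) * DELTA + DELTA_STAR  # Eq. (6) held in slot n-1
    if not canonical:
        return 0.0
    return ALPHA + MU * max(release_time - prev_canonical, 0.0)

print(equilibrium_payoff(5, 5 * DELTA + DELTA_STAR))  # alpha + mu * DELTA > 0
print(equilibrium_payoff(5, 5 * DELTA + 3.0))         # 0.0: the delayed block is skipped
```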
_Proof._ We begin by establishing that (8)–(9) are optimal responses for any attester $(i,n)$. Formally, we take as given that all attesters other than $(i,n)$ follow the equilibrium actions (8)–(9) and also that all proposers follow the equilibrium actions (6)–(7); in that context, we demonstrate that (8)–(9) maximize (5) and thus these are equilibrium actions for each attester $(i,n)$.

If (6) and (7) hold, then $\phi_n = 1$ follows directly for all $n \in \mathbb{N}$. Moreover, if all attesters other than $(i,n)$ follow (8), then (6) and (7) imply $\nu_{(-i,n)} = 1$, which implies $\tilde{A}_n = 1$ for all $n \in \mathbb{N}$. Then, since (6) and (7) imply $\phi_n = 1$ for all $n \in \mathbb{N}$ and also $\tilde{A}_n = 1 \ge \gamma$, (3) therefore implies $\chi_n = 1$ for all $n \in \mathbb{N}$. In turn, since $\nu_{(i,n)} \ne \chi_n$ implies the lowest possible pay-off in (5), we have that $\nu_{(i,n)} = \chi_n = 1$ whenever (6) and (7) hold. If (6) and (7) do not hold, then (8) implies $\nu_{(-i,n)} = 0$, which implies $\tilde{A}_n = 0$ for all $n \in \mathbb{N}$. Moreover, (3) implies $\chi_n = 0$ for all $n \in \mathbb{N}$. In turn, since $\nu_{(i,n)} \ne \chi_n$ implies the lowest possible pay-off, we have that $\nu_{(i,n)} = \chi_n = 0$ whenever the conjunction of (6) and (7) does not hold. Thus, $\nu_{(i,n)} = 1$ is an optimal response if (6) and (7) hold and $\nu_{(i,n)} = 0$ is an optimal response otherwise, thereby establishing (8) as the equilibrium action for any attester $(i,n)$.

To establish (9) as an optimal response for attester $(i,n)$, note that (5) pointwise decreases in $\tau_{(i,n)}$ and thus it is optimal to set $\tau_{(i,n)}$ as low as possible subject to feasibility. In general, $\tau_{(i,n)} \ge n \cdot \Delta$, but $\nu_{(i,n)} = 1 \implies \tau_{(i,n)} \ge t_n + \delta_{n,(i,n)} = n \cdot \Delta + \Delta^\star + \delta_{n,(i,n)} > n \cdot \Delta$. As such, whenever $\nu_{(i,n)} = 0$, then $\tau_{(i,n)} = n \cdot \Delta$, whereas whenever $\nu_{(i,n)} = 1$, then $\tau_{(i,n)} = t_n + \delta_{n,(i,n)}$. Then, as per our proof of (8), (6) and (7) imply $\nu_{(i,n)} = 1$, which implies $\tau_{(i,n)} = t_n + \delta_{n,(i,n)}$, whereas if either (6) or (7) does not hold, then $\nu_{(i,n)} = 0$, which implies $\tau_{(i,n)} = n \cdot \Delta$, which thereby establishes (9).

We conclude by demonstrating that (6)–(7) are an optimal response for any proposer $n$. More formally, we take as given that all attesters follow the equilibrium actions (8)–(9) and also that all proposers other than proposer $n$ follow the equilibrium actions (6)–(7); in this context, we establish that (6)–(7) maximize the proposer pay-off (1) and thus these are equilibrium actions for each proposer $n$. Due to (8), any deviation in (6) or (7) implies $\nu_{(i,n)} = 0$ for all $(i,n)$, which further implies $\tilde{A}_n = 0$. Then, under such a deviation, (3) implies $\chi_n = 0$, which implies a zero pay-off as per (1). Finally, since pay-offs are bounded below by zero, not deviating from (6) and (7) necessarily produces a higher pay-off than any such deviation, and thus (6) and (7) are equilibrium actions.

**3.2** **Model justification**

The model presented in Section 2 is an idealized description of a blockchain consensus mechanism. A sequence of proposers is selected, each of which is given the right to produce a block for the slot they are assigned to in the sequence. Once the block is released, a set of attesters assigned for the current slot gets to vote for the presence or absence of the block.
When the proposer chooses to build on the previous block, they affirm its place in the canonical chain. There is no block tree: either the current proposer recognizes the block produced by the proposer before them as part of the canonical chain ($\phi = 1$), or they recognize that the previous proposer failed to produce a block which is part of the canonical chain ($\phi = 0$). With the assumption of a continuum of attesters, at equilibrium, sufficiently many votes reach the following proposer, allowing them to make the call on whether or not the previous proposer's block is canonical.

This model resembles the Streamlet protocol [13]. A proposer submits a block for consideration to the rest of the network. If a $\gamma = 2/3$ share of attesters vote the block in, the block is notarized. If attesters do not, e.g., because the block is unavailable, the chain height is not increased, but the next slot starts, giving the opportunity to the next block producer to submit a block for consideration. Leaders extend the longest chain of notarized blocks they have seen. The model also bears resemblance to the proposed (block, slot) fork choice rule of the Ethereum Gasper protocol [1], specifically the dynamically available chain produced by the protocol, when $\gamma = 1/2$. In this model of the fork choice, attesters submit a vote attesting to the presence or absence of a block at some given slot. The canonicity of a block is however complicated by the LMD-GHOST rule for block weight accumulation. Obtaining more than half of the attesters' votes may then be neither a sufficient nor a necessary condition to be part of the canonical chain.

Generally, we formulate the hypothesis that most Proof-of-Stake-based leader selection protocols will be exposed to timing games. As long as duties are assigned according to an absolute (wall-clock) time schedule, there exists no pressure to complete duties in a timely manner comparable to the random arrival process of leaders in Proof-of-Work. For instance, PBFT-based finalization protocols such as Tendermint [21] or HotStuff [31] do not perform a view change until some timeout is reached, which a leader may use to time their release appropriately. While a sufficiently decentralized committee of validators is an existing feature of these protocols, our model further highlights its role in enforcing timeliness at equilibrium, as described in Section 3.1.

### 4 An empirical case study: Ethereum

Following a formal analysis of the coordination game between proposers and attesters, we now investigate the occurrence of such strategic timing games in real-world systems. To this end, we examine Ethereum, an ideal candidate for the empirical analysis of potential timing games, owing to its mature MEV market structure and the availability of accessible, informative data points. We show that timing games are indeed worth playing. However, we find that proposers do not delay their block release with the intention to capture more MEV. Instead, we find that delays are mostly due to latency in their signing processes. Thus, we can conclude that timing games are rational to engage in, but do not yet occur to their full possible extent.

-----

**4.1** **Consensus mechanism**

The Ethereum consensus mechanism is a composite of two protocols: variants of LMD GHOST [29] and Casper FFG [9], often referred to together as Gasper [10]. In this paper, we focus exclusively on Ethereum's _available chain_ that is built roughly following LMD GHOST.
This is because timing games only occur on the _available chain_. Within this protocol, time progresses in 12-second slots [3]. For each slot, one consensus participant, referred to as a validator, is selected as the block proposer. According to the honest validator specifications [4], which define the rules for honest protocol participation, a block should be released at the beginning of the slot (0 seconds into the slot). Furthermore, the protocol selects a committee of attesters from the validator set who vote on what they consider to be the latest canonical block as soon as they hear a valid block for their assigned slot, or 4 seconds into the slot, whichever comes first [4]. We refer to this 4-second mark as the _attestation deadline_. This dynamic, in which block proposers must release their block early enough for attesters to receive it via the peer-to-peer network before the attestation deadline, results from the attestation deadline serving as a coordination Schelling point [27]. It is worth noting that the honest validator specification prescribes block proposers to release their block at the beginning of the slot, while attesters only attest 4 seconds into the slot (unless a valid block is heard prior to the attestation deadline). This opens up room for block proposers to release their block strategically—i.e., as late as possible while ensuring they accumulate a sufficient share of attestations.

**4.2** **Block production process**

To assess the potential benefits of timing games for block proposers, it is important to comprehend the value of time and the process by which MEV opportunities are captured in the block proposing process. In Ethereum, the MEV market structure evolved and matured significantly over time, turning the block production process into an intricate interplay between specialized actors [30]. This division of labor enables validators to profit from MEV without engaging in the complex process of identifying MEV opportunities themselves. Instead, validators can outsource the task of building a maximally profitable block to an out-of-protocol block auction process known as MEV-Boost [5].

_Searchers_ look for MEV opportunities (e.g., arbitrages), and submit bundles of transactions alongside bids to express their order preference to _block builders_. Block builders, in turn, specialize in packing maximally profitable blocks using searcher bundles and other available user transactions before submitting their blocks with bids to _relays_. Relays act as trust facilitators between block proposers and block builders, validating blocks received from block builders and forwarding only valid headers to validators. This ensures validators cannot steal the content of a block builder's block, but can still commit to proposing this block by signing the respective block header. In the long run, Ethereum's plans include enshrining this currently out-of-protocol mechanism into the protocol [7,8] to eliminate relay trust assumptions. It is worth noting that MEV-Boost is an opt-in protocol, and validators can always choose to revert to local block building.

Finally, when a validator is selected to propose a block in a given slot, they request the highest-bidding block header from the relay, sign it, and return the signed block header to the relay, which then releases the block to the peer-to-peer network.
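The proposer-side sequence just described can be summarized in a short schematic. This is an illustration of the flow only, not the actual mev-boost client API; `relay` and `sign_header` are hypothetical stand-ins.

```python
def propose_block_via_relay(relay, slot: int, sign_header) -> None:
    """Schematic of the proposer/relay interaction described above
    (hypothetical objects, not the real builder API)."""
    header = relay.get_header(slot)      # highest-bidding eligible block header
    signed = sign_header(header)         # proposer commits to the builder's block
    payload = relay.get_payload(signed)  # relay reveals the full block contents
    relay.publish(payload)               # block is released to the p2p network
```

Until the payload request, the proposer only ever sees block headers, which is what prevents them from stealing the builder's block contents.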
In summary, searchers find MEV opportunities and express their transaction-ordering preferences within a block via bids. Block builders aim to build maximally profitable blocks using searcher bundles and user transactions, then submit their block content and bids to relays. Validators ultimately request the highest-paying block header, sign it, and return it to relays, which release the signed block to the peer-to-peer network. Due to competition at all levels in this block production process (except for the block-proposing monopoly), the block proposer is able to capture most of the MEV via this block auction.

**MEV-Boost block auction** Here, we granularly outline the sequence of events that take place during the block construction of MEV-Boost block auctions on the Ethereum network. Figure 1 illustrates these events along with their corresponding timestamps, and is intended to serve as a reference for the remainder of this empirical analysis. The auction for the block of slot n begins in slot n − 1 (at t = −12000 ms), during which builders submit blocks alongside bids to relays. This competitive process between block builders determines the right to construct the block for slot n and secures potential MEV-derived profits (block building profit equates to extracted MEV minus bid value). For each bid, the relay logs the timestamps of events at which the bid was received by the relay (receivedAt). After some validity checks are completed by the relay, the bid is made available to the proposer (eligibleAt). When the proposer chooses to propose a block⁵, the proposer requests getHeader to receive the highest-bidding, eligible block header from the relay. Upon receiving the header associated with the winning bid, the proposer signs it and thereby commits to proposing this block built by the respective builder in slot n. The signed block header is sent to the relay, along with a request to get the full block content from the relay (getPayload). Finally, the relay receives the signed block header (signedAt) and publishes the full block contents to the peer-to-peer network and proposer. As soon as peers see the new block, validators assigned to the slot can attest to it. This cycle completes one round of consensus, repeating every slot.

Fig. 1: Logical representation of the block production process for slot n. Builder bids begin streaming in during slot n − 1, after which the proposer and relay interact through requests and responses.

**4.3** **Data sets**

The analysis utilizes data provided by the ultra sound relay from March 4, 2023, to April 11, 2023. This covers just under 185,000 slots, interspersed between slot 5,965,398 and slot 6,282,397, and includes all bids placed by block builders through this relay. There were over 800 bids per slot, for a total of over 150 million bids. The winning block originated from the ultra sound relay for nearly 85,000 of these slots, and so we measure timestamps and other properties for those slots when investigating winning bids specifically. Finally, we augmented the winning slots with various on-chain measures from the execution layer (EL) and consensus layer (CL), such as attestations and aggregations, using a combination of analytical tools like Dune and direct observation of the peer-to-peer network.

**4.4** **Are timing games worth playing?**

**Marginal value of time** Timing games offer potential for substantial profit due to the increased MEV opportunities they provide. First, we assess whether timing games are worth playing for proposers, by estimating the incremental MEV gained per second.
We utilize all bids submitted by builders from the ultra sound relay to examine the relationship between the timestamp at which the relay received a bid submitted by a builder (the receivedAt timestamp relative to the slot boundary) and the bid value, residualized against slot fixed effects to account for differences between low- and high-MEV regimes and other unobservable forms of heterogeneity. We then fit a regression line to this relationship, obtaining a slope coefficient of 0.0065 ETH per second, which represents our estimate for the marginal value of time. Figure 2 depicts the linear increase in median bid values over the slot duration on a point-by-point basis, and the distribution of bid receipt times, indicating that most bids are submitted between four seconds before the slot boundary and one second after. This analysis shows there exists a positive marginal value of time, indicating that a rational block proposer would participate in timing games.

⁵ An honest participant will request the block header shortly before slot n such that the block can be released on time, at the beginning of slot n (t = 0 ms).

Fig. 2: Analysis of bid values and their distribution over slot duration. The histogram (in blue) shows the distribution of bid counts across time in seconds. The dark green line represents the median bid value in Ether (ETH) for each time bin (with its associated IQR in green), residualized against the slot fixed effects that are estimated in a linear regression of bid on timestamp (dashed red line). The x-axis shows time in milliseconds relative to the slot boundary, the left y-axis displays the residualized bid value in ETH, and the right y-axis displays the count of bids.
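A sketch of this estimation follows, assuming a pandas DataFrame `bids` with columns `slot`, `t_s` (receivedAt in seconds relative to the slot boundary), and `value_eth`. The column names and the within-slot demeaning shortcut for the slot fixed effects are our own illustrative assumptions, not the paper's actual pipeline.

```python
import pandas as pd
import statsmodels.api as sm

def marginal_value_of_time(bids: pd.DataFrame) -> float:
    """Estimate the ETH-per-second slope of bid value, absorbing slot fixed
    effects via the within transformation (demeaning both variables per slot)."""
    demean = lambda col: bids[col] - bids.groupby("slot")[col].transform("mean")
    y, x = demean("value_eth"), demean("t_s")
    model = sm.OLS(y, sm.add_constant(x)).fit()
    return model.params["t_s"]  # the paper reports roughly 0.0065 ETH/s
```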
**4.5** **Are block proposers playing timing games?**

Having shown that timing games are worth playing, we turn our attention to whether proposers are currently taking advantage of the opportunity to accumulate more MEV by committing to a bid later than foreseen by the honest validator specifications.

**Characterizing late block signing behavior** First, we investigate whether block headers and associated bids are signed by proposers later than the slot boundary (t = 0), the time stipulated by the honest protocol specifications to broadcast their block to the network. We observe that winning bids are signed by proposers approximately 774 ms after the slot boundary (t(111573) = 575.5, p < 1 × 10⁻²⁰, using a two-tailed paired Student t-test) and about 513 ms after the relay made the bid eligible (t(111573) = 472.6, p < 1 × 10⁻²⁰, using a two-tailed paired Student t-test). Figure 3a displays the distribution of timings for winning bids, based on ultra sound relay timestamps for bid reception from the builder (receivedAt, median = 157 ms), eligibility for proposer signing (eligibleAt, median = 260 ms), and the actual signing by proposers (signedAt, median = 774 ms).

To better understand the reasons behind late-signing behavior by proposers, we map validator public keys to their staking entities and CL clients (see Figures 3b and 3c, respectively). Validator to staking entity mappings were obtained via a combination of open source data sets⁶, and validator to client mappings were obtained using blockprint [2], an open source tool assigning client labels to validators based on their attestation packing on the Ethereum beacon chain. We found that staking entities such as Kraken (t = 38.9, p < 1 × 10⁻²⁰) and Coinbase (t = 67.6, p < 1 × 10⁻²⁰), as well as proposers using the Lodestar client (t = 44.9, p < 1 × 10⁻²⁰), sign block headers significantly later than other block proposer types (results were obtained using two-tailed unpaired Student t-tests). Notably, additional analyses are required to differentiate the interdependencies between validator entities and clients to better understand their roles in late signing behavior. This analysis confirms that proposers are signing blocks significantly later than expected, but it does not yet clarify the underlying reasons, which could include participation in timing games or increased latency for independent reasons, e.g., longer signing processes.

⁶ Dune Spellbooks: https://dune.com/spellbook; Mevboost.pics Open Data: https://mevboost.pics/data.html

Fig. 3: Analysis of event timestamps and their distributions among validator clients and entities. (3a) Multiple Kernel Density Estimation (KDE) distributions of event timestamps from the relay data, showing the probability density functions for three event types: receivedAt (blue), eligibleAt (green), and signedAt (light green). (3b–3c) Violin plots comparing the distribution of signedAt event timestamps for the top 7 validator entities and clients. The x-axis represents time in milliseconds (ms) relative to the slot boundary, while the y-axis displays validator clients and entities, ordered by the mean signedAt time. The width of each violin plot signifies the kernel density estimation of the signedAt event timestamps, demonstrating the distribution and frequency of the events within each group.

We subsequently collected data for 1241 slots (slot 6,200,251 to 6,204,957 on April 11, 2023) and used the time difference between getHeader and getPayload calls by proposers as an approximation to estimate the duration of the signing process (see Figure 1). Figure 4 shows that the median difference between the getHeader and getPayload calls is 418 ms. Interestingly, this delay, attributable to the signing process, accounts for 75.42% of the overall latency. This percentage was determined by calculating the difference between the signing time and the moment the bid was deemed eligible by the relay on a slot-by-slot basis, using the formula

$$\frac{\mathrm{median}(\texttt{getPayload} - \texttt{getHeader})}{\mathrm{median}(\texttt{signedAt} - \texttt{eligibleAt})} \times 100.$$

We conclude that late signing behavior is primarily attributed to latency caused by the signing process, rather than intentional delays to incorporate more MEV in blocks. This finding aligns with the hypothesis that large US-based staking entities, such as Coinbase and Kraken, may prefer utilizing sophisticated remote secure signing mechanisms, resulting in a lengthier signing process compared to other parties.

Fig. 4: Estimating the latency induced by the signing process. (4a) Histogram of getHeader and getPayload call timestamps relative to the slot boundary. The histogram displays the density of events occurring at different times into the slot (in milliseconds) for getHeader (yellow) and getPayload (blue) calls. (4b) Histogram of the time difference between getHeader and getPayload calls. The histogram shows the density of time differences (in milliseconds) between getHeader and getPayload calls. Vertical lines represent the 50th (solid), 90th (dashed), and 99th (dotted) percentiles of the distribution.
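This decomposition is straightforward to compute from per-slot timestamps; a minimal sketch follows (the input handling is illustrative):

```python
import numpy as np

def signing_latency_share(get_header, get_payload, eligible, signed) -> float:
    """Share of overall proposer-side delay attributable to the signing step:
    median(getPayload - getHeader) / median(signedAt - eligibleAt) * 100.
    All inputs are arrays of per-slot timestamps in milliseconds."""
    signing = np.median(np.asarray(get_payload) - np.asarray(get_header))
    overall = np.median(np.asarray(signed) - np.asarray(eligible))
    return 100.0 * signing / overall
```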
**4.6** **The impact of latency on the peer-to-peer network**

Our prior results indicate that validators are not engaging in timing games to accrue more MEV. Nonetheless, we assess the implications of late signing of consensus messages on the peer-to-peer network. Specifically, we examine the relationship between the relay timestamps and the timings at which blocks (1) are first seen by the rest of the peer-to-peer network and (2) begin collecting attestations and aggregations. The consensus layer data was obtained through nodes run by the Ethereum Foundation, for 2643 slots (slots 6,357,601 to 6,363,807 on May 3 and 4, 2023). Figure 5a shows the sequence of these event timestamps over the course of a slot. We subsequently assess the correlations between each of these event pairs, as depicted in Figure 5b. Our analysis reveals high correlations between the time at which blocks are signed by proposers (i.e., the signedAt relay timestamp) and the time at which blocks (correlation coefficient = 0.986) and attestations (correlation coefficient = 0.971) are initially observed by the peer-to-peer network. These findings underscore the significance of proposers signing blocks promptly, as it considerably impacts the downstream processes at the consensus layer in the network.

Fig. 5: Analysis of relay and consensus layer timestamps. (5a) Box plot of the time differences between relay and consensus timestamps. The box plots display the distribution of time differences for receivedAt, eligibleAt, and signedAt events, as well as blocks, attestations, and aggregations first seen by the peer-to-peer network. The boxes represent the interquartile range (IQR) from the first quartile (Q1, 25th percentile) to the third quartile (Q3, 75th percentile), while the whiskers extend to the minimum and maximum values within three times the IQR. The horizontal lines within the boxes represent the median values. (5b) Bar plot of Pearson correlation coefficients for each pair of event timestamps. The bars represent the mean correlation coefficient for each relationship, while the error bars represent the 95% confidence intervals obtained via bootstrapping.

Next, we evaluate the impact of latency induced by late signing behavior on attestations collected by winning blocks proposed to the peer-to-peer network. We examine the relationship between the time at which blocks are signed by proposers (signedAt) and the share of attestations included by blocks in their respective target slot, referred to as slot n in Figure 1. As a reminder, attestations collected on a given slot n are only included on-chain one (slot n + 1) or more slots later. In our analysis, we focus on the attestations included in the subsequent slot and compute a metric, next-slot shares. This metric refers to the percentage of attestations for the winning block in a given slot that appear in the next block (slot n + 1), out of the total number of attestations in the next slot that refer to any block in the target slot. Our hypothesis is that if a block is signed too late by a proposer, it will not propagate early enough for attesters to vote for it before their attestation deadline (t = 4000 ms; see Figure 1 and [4]). Hence, in such settings attesters vote for another block (e.g., the parent block), and this will be reflected in the next-slot shares metric.
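The next-slot shares metric can be stated compactly as a ratio of attestation counts; a sketch with hypothetical inputs:

```python
from collections import Counter

def next_slot_share(votes_in_next_block, winning_root) -> float:
    """`votes_in_next_block`: iterable of beacon block roots voted on by the
    slot-n attestations that were included in block n+1 (hypothetical input).
    Returns the share of those votes going to slot n's winning block."""
    counts = Counter(votes_in_next_block)
    total = sum(counts.values())
    return counts[winning_root] / total if total else float("nan")
```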
Figure 6 shows that latency does indeed cause a steep drop-off in the share of attestations received by the winning block. We observe that the share value stays close to one as long as the block is signed within the first two seconds of the slot. Once the two-second threshold is crossed, there is a substantial drop-off and many winning blocks earn fewer than half of the next-slot attestations, which continues to rapidly decrease towards zero as we approach the theoretical t = 4000 ms attestation deadline. These results demonstrate the impact of latency on the rest of the peer-to-peer network and highlight the importance of signing and broadcasting blocks on time to prevent missed slots and reorganizations. We previously documented a private incentive for proposers to delay their block release, according to the steady increase of MEV as time progresses through the slot. Such malicious behavior is not prevalent, as 75% and 98% of all blocks are seen by our nodes after two and four seconds into the slot, respectively. Yet, our analysis reveals on-chain evidence whenever latency degrades consensus formation.

Fig. 6: Effects of block signing times on next-slot attestations. This scatter plot features the x-axis displaying the time at which proposers signed the winning block relative to the slot n boundary, and the y-axis illustrating the share of next-slot (n + 1) attestations for the winning block. Each point on the graph corresponds to the time (in milliseconds) at which the winning block was signed within the slot and the average share of attestations it received, included in the next slot, across all winning blocks signed at that specific time.

### 5 Discussion

In this paper, we present an argument that consensus participants are subject to exogenous incentives, primarily MEV, that exist outside the consensus mechanism itself. This highlights the imperative for blockchain protocols to ensure economic fairness among all consensus participants. Specifically, it necessitates a design where honest and honest-but-rational consensus participation become indistinguishable, and honesty within the protocol is the most profitable strategy. This approach ensures that honest-but-rational participants have no incentive to deviate from honest consensus participation.

We present a model that highlights the time-dependent value for consensus participants and probes into the strategic timing considerations that block proposers face. Our model uncovers a spectrum of equilibria wherein attesters can enforce any deadline for block proposals to achieve canonical status, thereby emphasizing the crucial role of Schelling points as coordination mechanisms. For instance, in the Ethereum network we observe the emergence of such Schelling points through the default settings of client software. The widespread use of these default settings among consensus participants generally ensures their effectiveness.

We support our theoretical findings with observations of the Ethereum network. Our analysis demonstrates that timing games are indeed worth playing for block proposers, enabling them to capture additional MEV by delaying their block proposals beyond the timeframe prescribed by the honest validator specification. However, we observe that current instances of delayed block proposals are primarily due to latency in the block signing process, rather than a conscious strategy to maximize profits. The apparent lack of maximal MEV capture by honest proposers could be attributed to either a lack of common knowledge or existing social norms around this practice. It is clear, however, that these are not sustainable safeguards for maintaining economic fairness. The implications of timing games are manifold and significant.
An honest-but-rational participant who engages in timing games will outperform honest participants, leading to a centralization of stake over time. Hypothetically, this could culminate in a breach of consensus security. In a more practical sense, it may encourage individual stakers to delegate their stake to professional entities adept at these practices, negatively impacting the network's decentralization. Moreover, timing games can overload the messaging system within a short time span, potentially causing cascading failures at the peer-to-peer layer, particularly within client systems.

Essentially, timing games are facilitated by the monopolistic right that block proposers possess for a single round of consensus. Introducing competition in block proposing, similar in effect to the exogenous randomness in Proof-of-Work (PoW) systems, emerges as a potential solution. However, the challenge lies in deterministically selecting a winning proposer without reverting to peer-to-peer latency races, which are themselves centralizing. Alternatively, an on-chain heuristic for timely block proposals could incentivize timely participation, yet the allure of MEV rewards might still outweigh any in-protocol consensus rewards. Tackling the root cause of timing games remains an open challenge.

In the Ethereum context, a late-block reorging mechanism has been adopted in the fork choice, effectively imposing a 4-second deadline for block proposers. This constraint significantly limits the extent to which block delays are possible. Looking ahead, the adoption of (block, slot) type attestations is likely, further refining the protocol. However, it remains challenging to address the root cause of timing games, as it is deeply intertwined with the fundamental workings of Proof-of-Stake (PoS). Although limiting the length of the proposer's interval is feasible, completely eliminating the monopolistic market structure of block proposers proves to be a difficult task. Consequently, it may prove valuable to find a more general abstraction for PoS protocols and further explore the implications of consensus participants being exposed to incentives outside of consensus itself, such as MEV. More generally, assuming honest-but-rational, as opposed to honest, consensus participation should prove significant in designing economically fair blockchain protocols.

### Acknowledgments

The authors acknowledge helpful discussions and comments from Francesco d'Amato and Anders Elowsson. We also appreciate the significant contributions of Mike Neuder in obtaining the necessary data for this study.

### References

1. (block, slot) fork choice. https://github.com/ethereum/consensus-specs/pull/2197, accessed: 2023-10-05
2. blockprint, https://github.com/sigp/blockprint, accessed: 2023-10-05
3. Ethereum consensus specifications - beacon chain, https://github.com/ethereum/consensus-specs/blob/dev/specs/phase0/beacon-chain.md, accessed: 2023-10-05
4. Ethereum consensus specifications - honest validator, https://github.com/ethereum/consensus-specs/blob/dev/specs/phase0/validator.md, accessed: 2023-10-05
5. MEV-Boost, https://github.com/flashbots/mev-boost, accessed: 2023-10-05
6. Babel, K., Daian, P., Kelkar, M., Juels, A.: Clockwork finance: Automated analysis of economic security in smart contracts. arXiv preprint arXiv:2109.04347 (2021)
7. Buterin, V.: Proposer/block builder separation-friendly fee market designs, https://ethresear.ch/t/proposer-block-builder-separation-friendly-fee-market-designs/9725, accessed: 2023-10-05
8. Buterin, V.: Two-slot proposer/builder separation, https://ethresear.ch/t/two-slot-proposer-builder-separation/10980, accessed: 2023-10-05
9. Buterin, V., Griffith, V.: Casper the friendly finality gadget. arXiv:1710.09437 [cs.CR] (2019), https://arxiv.org/abs/1710.09437
10. Buterin, V., Hernandez, D., Kamphefner, T., Pham, K., Qiao, Z., Ryan, D., Sin, J., Wang, Y., Zhang, Y.X.: Combining GHOST and Casper. arXiv preprint arXiv:2003.03052 (2020)
11. Cachin, C., Vukolić, M.: Blockchain consensus protocols in the wild. arXiv preprint arXiv:1707.01873 (2017)
12. Carlsten, M., Kalodner, H., Weinberg, S.M., Narayanan, A.: On the instability of Bitcoin without the block reward. In: Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. pp. 154–167 (2016)
13. Chan, B.Y., Shi, E.: Streamlet: Textbook streamlined blockchains. In: Proceedings of the 2nd ACM Conference on Advances in Financial Technologies. pp. 1–11 (2020)
14. Daian, P., Goldfeder, S., Kell, T., Li, Y., Zhao, X., Bentov, I., Breidenbach, L., Juels, A.: Flash Boys 2.0: Frontrunning, transaction reordering, and consensus instability in decentralized exchanges. CoRR abs/1904.05234 (2019), http://arxiv.org/abs/1904.05234
15. D'Amato, F., Neu, J., Tas, E.N., Tse, D.: No more attacks on proof-of-stake Ethereum? arXiv preprint arXiv:2209.03255 (2022)
16. D'Amato, F., Zanolini, L.: Recent latest message driven GHOST: Balancing dynamic availability with asynchrony resilience. arXiv preprint arXiv:2302.11326 (2023)
17. D'Amato, F., Zanolini, L.: A simple single slot finality protocol for Ethereum. arXiv preprint arXiv:2302.12745 (2023)
18. Dembo, A., Kannan, S., Tas, E.N., Tse, D., Viswanath, P., Wang, X., Zeitouni, O.: Everything is a race and Nakamoto always wins. In: Proceedings of the 2020 ACM SIGSAC Conference on Computer and Communications Security. pp. 859–878 (2020)
19. Eyal, I., Sirer, E.G.: Majority is not enough: Bitcoin mining is vulnerable. Communications of the ACM 61(7), 95–102 (2018)
20. John, K., Rivera, T., Saleh, F.: Economic implications of scaling blockchains: Why the consensus protocol matters. NYU Stern Working Paper (2023)
21. Kwon, J.: Tendermint: Consensus without mining. Draft v. 0.6, fall 1(11) (2014)
22. Liao, K., Katz, J.: Incentivizing blockchain forks via whale transactions. In: Financial Cryptography and Data Security: FC 2017 International Workshops, WAHC, BITCOIN, VOTING, WTSC, and TA, Sliema, Malta, April 7, 2017, Revised Selected Papers 21. pp. 264–279. Springer (2017)
23. Monnot, B.: Timing games in proof-of-stake, https://ethresear.ch/t/timing-games-in-proof-of-stake/13980, accessed: 2023-10-05
24. Neu, J., Tas, E.N., Tse, D.: Ebb-and-flow protocols: A resolution of the availability-finality dilemma. In: 2021 IEEE Symposium on Security and Privacy (SP). pp. 446–465. IEEE (2021)
25. Neu, J., Tas, E.N., Tse, D.: Two more attacks on proof-of-stake GHOST/Ethereum. In: Proceedings of the 2022 ACM Workshop on Developments in Consensus. pp. 43–52 (2022)
26. Saleh, F.: Blockchain without waste: Proof-of-stake. Review of Financial Studies 34(3), 1156–1190 (2021)
27. Schelling, T.C.: The Strategy of Conflict: with a new Preface by the Author. Harvard University Press (1980)
28. Schwarz-Schilling, C., Neu, J., Monnot, B., Asgaonkar, A., Tas, E.N., Tse, D.: Three attacks on proof-of-stake Ethereum. In: Financial Cryptography and Data Security: 26th International Conference, FC 2022, Grenada, May 2–6, 2022, Revised Selected Papers. pp. 560–576. Springer (2022)
29. Sompolinsky, Y., Zohar, A.: Secure high-rate transaction processing in Bitcoin. In: International Conference on Financial Cryptography and Data Security. pp. 507–527. Springer (2015)
30. Yang, S., Zhang, F., Huang, K., Chen, X., Yang, Y., Zhu, F.: SoK: MEV countermeasures: Theory and practice (2022). https://doi.org/10.48550/ARXIV.2212.05111, https://arxiv.org/abs/2212.05111
31. Yin, M., Malkhi, D., Reiter, M.K., Gueta, G.G., Abraham, I.: HotStuff: BFT consensus with linearity and responsiveness. In: Proceedings of the 2019 ACM Symposium on Principles of Distributed Computing. pp. 347–356 (2019)

-----
{ "disclaimer": "Notice: Paper or abstract available at https://arxiv.org/abs/2305.09032, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "GREEN", "url": "http://arxiv.org/pdf/2305.09032" }
2023
[ "JournalArticle" ]
true
2023-05-15T00:00:00
[ { "paperId": "3d6d7416b8249c2dc6f47ec127274ff0d63c4a9b", "title": "A Simple Single Slot Finality Protocol For Ethereum" }, { "paperId": "e04584b4e478c93f2137d83b768d1a1caaa7c50b", "title": "Recent Latest Message Driven GHOST: Balancing Dynamic Availability With Asynchrony Resilience" }, { "paperId": "71495ab57440c34578f5a01813699b98fb3f984d", "title": "Two More Attacks on Proof-of-Stake GHOST/Ethereum" }, { "paperId": "8b647a12f0ff831db7e6489b1166710841d66a88", "title": "Goldfish: No More Attacks on Ethereum?!" }, { "paperId": "36382648cb6df633c981e91cc557c90575781c30", "title": "Three Attacks on Proof-of-Stake Ethereum" }, { "paperId": "647fa7aaa4eac4fafa56ac86ebafc5cf5944a8eb", "title": "Clockwork Finance: Automated Analysis of Economic Security in Smart Contracts" }, { "paperId": "d4b4f0c642358004982e75eb85da9d6fbcb11602", "title": "Economic Implications of Scaling Blockchains: Why the Consensus Protocol Matters" }, { "paperId": "ea17b3d47f1d95a1c4215da19cd7efeefd7f5b1a", "title": "Streamlet: Textbook Streamlined Blockchains" }, { "paperId": "7ebc49a3bc102c24a46bf79ce94b3213711fe2d3", "title": "Ebb-and-Flow Protocols: A Resolution of the Availability-Finality Dilemma" }, { "paperId": "03d1b883e9d8474212094e5764646bc6450cf565", "title": "Blockchain Without Waste: Proof-of-Stake" }, { "paperId": "0a6b4af44cba14f11add0df09e5da33af0ff18e1", "title": "Everything is a Race and Nakamoto Always Wins" }, { "paperId": "5c89ae258f209fc2ff3464b4c01e6e88eae10852", "title": "Combining GHOST and Casper" }, { "paperId": "50dd47f615068b0eac1eeed60c91e95549aed3d4", "title": "HotStuff: BFT Consensus with Linearity and Responsiveness" }, { "paperId": "393ab84a86631d5fda128c3aac0bf5476da07791", "title": "Flash Boys 2.0: Frontrunning, Transaction Reordering, and Consensus Instability in Decentralized Exchanges" }, { "paperId": "fdbebd67c8a9671efabf4e53d6267789cd91d96c", "title": "Casper the Friendly Finality Gadget" }, { "paperId": "26a286e447cd78227eddc801a5d9816e0215834b", "title": "Blockchain Consensus Protocols in the Wild" }, { "paperId": "6d818bf1b05a07b79e73f5b29b4c5a34ad2bc456", "title": "Incentivizing Blockchain Forks via Whale Transactions" }, { "paperId": "557d145a725e3b29336f004194e1e6bbb17fe8e9", "title": "On the Instability of Bitcoin Without the Block Reward" }, { "paperId": "728b60c04afb5b87853b59265e49f430dbf631db", "title": "Secure High-Rate Transaction Processing in Bitcoin" }, { "paperId": "86bf53b9c87265e42152d40a9d5ce6a5cd02f055", "title": "Countermeasures" }, { "paperId": null, "title": "Proposer/block builder separation-friendly fee market designs, https://ethresear.ch/t/proposer-blockbuilder-separation-friendly-fee-market-designs/9725" }, { "paperId": null, "title": "Majority is not enough: Bitcoin mining is vulnerable" }, { "paperId": "df62a45f50aac8890453b6991ea115e996c1646e", "title": "Tendermint : Consensus without Mining" }, { "paperId": null, "title": "The Strategy of Conflict: with a new Preface by the Author" }, { "paperId": null, "title": "Time is Money: Strategic Timing Games in Proof-of-Stake Protocols" }, { "paperId": null, "title": "Two-slot proposer/builder separation" }, { "paperId": null, "title": "Mev-boost" }, { "paperId": null, "title": "Timing games in" }, { "paperId": null, "title": "Ethereum consensus specifications - beacon chain" }, { "paperId": null, "title": "blockprint" } ]
12907
en
[ { "category": "Medicine", "source": "external" }, { "category": "Environmental Science", "source": "s2-fos-model" }, { "category": "Political Science", "source": "s2-fos-model" }, { "category": "Medicine", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/022ad80bb2cf8922a5d0b62cb56c4b7beb08450c
[ "Medicine" ]
0.910237
Mitigation Planning and Policies Informed by COVID-19 Modeling: A Framework and Case Study of the State of Hawaii
022ad80bb2cf8922a5d0b62cb56c4b7beb08450c
International Journal of Environmental Research and Public Health
[ { "authorId": "2110859392", "name": "Thomas H. Lee" }, { "authorId": "2120238704", "name": "Bobby Do" }, { "authorId": "2165752670", "name": "Levi Dantzinger" }, { "authorId": "47889503", "name": "Joshua R. Holmes" }, { "authorId": "143824683", "name": "M. Chyba" }, { "authorId": "80974909", "name": "Steven D Hankins" }, { "authorId": "69452483", "name": "E. Mersereau" }, { "authorId": "118840200", "name": "Kenneth S. Hara" }, { "authorId": "33931275", "name": "V. Fan" } ]
{ "alternate_issns": null, "alternate_names": [ "Int J Environ Res Public Health" ], "alternate_urls": null, "id": "3096eb5c-d18c-4877-94cd-28edd3a9c357", "issn": "1660-4601", "name": "International Journal of Environmental Research and Public Health", "type": "journal", "url": "http://www.mdpi.com/journal/ijerph/" }
In the face of great uncertainty and a global crisis from COVID-19, mathematical and epidemiologic COVID-19 models proliferated during the pandemic. Yet, many models were not created with the explicit audience of policymakers, the intention of informing specific scenarios, or explicit communication of assumptions, limitations, and complexities. This study presents a case study of the roles, uses, and approaches to COVID-19 modeling and forecasting in one state jurisdiction in the United States. Based on an account of the historical real-world events through lived experiences, we first examine the specific modeling considerations used to inform policy decisions. Then, we review the real-world policy use cases and key decisions that were informed by modeling during the pandemic including the role of modeling in informing planning for hospital capacity, isolation and quarantine facilities, and broad public communication. Key lessons are examined through the real-world application of modeling, noting the importance of locally tailored models, the role of a scientific and technical advisory group, and the challenges of communicating technical considerations to a public audience.
International Journal of **_[Environmental Research](https://www.mdpi.com/journal/ijerph)_** **_and Public Health_** _Article_ # Mitigation Planning and Policies Informed by COVID-19 Modeling: A Framework and Case Study of the State of Hawaii **Thomas H. Lee** **[1,2], Bobby Do** **[1], Levi Dantzinger** **[1], Joshua Holmes** **[1], Monique Chyba** **[3]** **, Steven Hankins** **[4],** **Edward Mersereau** **[5], Kenneth Hara** **[6]** **and Victoria Y. Fan** **[1,7,]*** 1 Thompson School of Social Work & Public Health, University of Hawaii at Manoa, Honolulu, HI 96822, USA; tlee@hawaiidata.org (T.H.L.); bdo7@hawaii.edu (B.D.); levidantzinger@gmail.com (L.D.); jrholmes@hawaii.edu (J.H.) 2 Hawaii Data Collaborative, Honolulu, HI 96813, USA 3 Department of Mathematics, College of Natural Sciences, University of Hawaii at Manoa, Honolulu, HI 96822, USA; chyba@hawaii.edu 4 John A. Burns School of Medicine, University of Hawaii at Manoa, Honolulu, HI 96813, USA; hankinss@hawaii.edu 5 Behavioral Health Administration, Hawaii Department of Health, Honolulu, HI 96813, USA; phac@hawaii.edu 6 Hawaii Department of Defense, Honolulu, HI 96816, USA; kenneth.s.hara@hawaii.gov 7 Center for Global Development, Washington, DC 20036, USA ***** Correspondence: vfan@hawaii.edu **Citation: Lee, T.H.; Do, B.;** Dantzinger, L.; Holmes, J.; Chyba, M.; Hankins, S.; Mersereau, E.; Hara, K.; Fan, V.Y. Mitigation Planning and Policies Informed by COVID-19 Modeling: A Framework and Case Study of the State of Hawaii. Int. J. _Environ. Res. Public Health 2022, 19,_ [6119. https://doi.org/10.3390/](https://doi.org/10.3390/ijerph19106119) [ijerph19106119](https://doi.org/10.3390/ijerph19106119) Academic Editor: Fernando Augusto Lima Marson Received: 29 March 2022 Accepted: 12 May 2022 Published: 18 May 2022 **Publisher’s Note: MDPI stays neutral** with regard to jurisdictional claims in published maps and institutional affil iations. **Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons [Attribution (CC BY) license (https://](https://creativecommons.org/licenses/by/4.0/) [creativecommons.org/licenses/by/](https://creativecommons.org/licenses/by/4.0/) 4.0/). **Abstract: In the face of great uncertainty and a global crisis from COVID-19, mathematical and** epidemiologic COVID-19 models proliferated during the pandemic. Yet, many models were not created with the explicit audience of policymakers, the intention of informing specific scenarios, or explicit communication of assumptions, limitations, and complexities. This study presents a case study of the roles, uses, and approaches to COVID-19 modeling and forecasting in one state jurisdiction in the United States. Based on an account of the historical real-world events through lived experiences, we first examine the specific modeling considerations used to inform policy decisions. Then, we review the real-world policy use cases and key decisions that were informed by modeling during the pandemic including the role of modeling in informing planning for hospital capacity, isolation and quarantine facilities, and broad public communication. Key lessons are examined through the real-world application of modeling, noting the importance of locally tailored models, the role of a scientific and technical advisory group, and the challenges of communicating technical considerations to a public audience. 
**Keywords: COVID-19; pandemic; modeling; epidemiology; isolation and quarantine; media and communication; public health planning; governance; hospital; pandemic preparedness**

**1. Introduction**

The health and economic toll of the global coronavirus disease 2019 (COVID-19) pandemic posed unprecedented challenges for how public health authorities respond to and mitigate a public health emergency and crisis globally. Yet, the local public health response to COVID-19 was challenging for many reasons, including the widespread uncertainty as well as ever-evolving science and knowledge of a new disease, the wide-ranging mitigation measures and health and socioeconomic impacts, the disproportionate impact on vulnerable populations, the highly charged political context and polarized communication challenge, and the lack of preparedness and capacity for public sector response, among others [1–3]. However, in many countries, COVID-19 was mitigated through local or subnational efforts and responses in addressing the multidimensional impacts of COVID-19. As a result, local policymakers and public health authorities were challenged to make timely decisions and deploy a variety of tools to best respond to the emergency.

-----

In many countries, one key tool used and deployed by policymakers was mathematical modeling and epidemiologic forecasting of infectious diseases. There was widespread use of a variety of models that forecasted and made predictions about the spread of COVID-19 and its subsequent health impacts. The COVID Forecast Hub (https://COVID19forecasthub.org/ (accessed on 11 May 2022)) curated a partial list of more than 40 models, predominantly from universities and research institutes and with a wide range of predictions on case counts, hospitalizations, intensive care unit use, and deaths. The landscape of models can be dizzying for a non-technical user, including policymakers.

Policymakers can use mathematical and epidemiologic modeling to inform a variety of pressing policy, programmatic, and planning questions and decisions for mitigating COVID-19. These decisions can be grouped into two main categories along the slopes of the COVID-19 curve—the surge up and the decline down. Key questions for policymakers seeking to make decisions informed by the best available science and evidence through modeling include:

• Given the nature of COVID-19 to surge exponentially, are there sufficient health care resources in our jurisdiction or state, including hospital beds, ventilators, personal protective equipment, medication, isolation and quarantine facilities, contact tracers, and other health workers? Will capacity be sufficient, and if not, when will they run out?
• Does the state need to enforce stronger mitigation measures, including, at its extreme, “shutdown”, i.e., mass quarantine and isolation for the state?
• As COVID-19 ostensibly declines, what policy decisions should authorities undertake to reopen and relax measures? What mitigation measures need to be maintained and what measures can be dispensed, including testing, tracing, isolation, masking, distancing, and vaccination?

This study presents a real-world historical case study of the roles, uses, and approaches to COVID-19 modeling and forecasting for policy decisions and policy use cases, drawing from the historical perspectives in one state jurisdiction in the United States.
This study does not present a micro-level analysis of detailed modeling and its mathematical specifications, but rather provides a macro-level historical and policy perspective on the ways in which modeling informs policymaking. The methodology and data used for this case study rely on a review of the historical facts and real-world events through the lived experiences of the authors of this paper, who are members of the Hawaii Pandemic Applied Modeling Work Group (HiPAM) (https://www.hipam.org (accessed on 11 May 2022)), the Hawaii Department of Defense, the Hawaii Emergency Management Agency (HI-EMA), or the Hawaii Department of Health, in a variety of roles during the COVID-19 pandemic from March 2020 to May 2022. This case study is intended to help future policymakers seeking to navigate this complex landscape of models and draw upon practical lessons learned on how to make appropriate evidence-based decisions using models.

As such, this paper is structured as follows. The first part of this case study focuses on the major technical considerations of mathematical and epidemiologic models and how models were selected given real-world limitations of time and resources. The second part of this case study reviews and summarizes the real-world policy use cases and key policy decisions informed by modeling during the pandemic surge and decline, including the role of modeling in informing planning for hospital capacity and isolation and quarantine facilities, and deploying a broad public communication strategy that navigates the complexities and pitfalls of modeling. We then reflect on the key lessons and discussion from the use cases that may be relevant for other jurisdictions seeking to use modeling to inform decision making.

**2. COVID-19 Models Used to Inform Policymakers in Hawaii**

_2.1. Model Selection_

A "model" refers to a mathematical or logical representation of the biology and epidemiology of disease transmission and its associated processes [4]. To date, there are more than 40 COVID-19 models available. In many locations around the world, there was a need for timely public health decisions pressing against a lack of available time and resources and a severe dearth of expertise such as epidemiologists in the jurisdiction (as was the case in Hawaii). Thus, it was neither feasible nor practical for policymakers and their technical support teams to comprehensively review all models in order to make decisions. Instead, policymakers were continuously forced to make strategic decisions to select and use tools that made the best of the information at hand. Nevertheless, even the selection of tools in order to make policy decisions requires technical expertise and communication savvy in order to wade through the science and complexity and minimize the creation of additional confusion.

The first part of this case study focuses on the major criteria for selecting a model. Beginning in April of 2020, the HI-EMA tasked a technical team (including a physician, a lead epidemiologic adviser, and a technical analyst), which in turn sought guidance from a newly developed HiPAM work group to review and use models to inform a variety of specific policy decisions, described in the second part of this case study. The four models that were ultimately chosen and used for informing Hawaii public authorities for decision making were based on a selective rather than comprehensive review of models.
The four models used by the technical team were the following: the University of Washington Institute for Health Metrics and Evaluation (IHME) model [5], the Imperial College London model [6], the Epidemic Calculator [7], and the University of Basel model [8]. The dimensions for reviewing and selecting these models are described in Section 2.2. Upon review of the documentation and source code of these models, if available, in 2020, the technical team identified some of their key assumptions. These assumptions were crucial in understanding the limitations or applicability of a given model to a particular jurisdiction, and in this case, the state of Hawaii. These assumptions and limitations are discussed in Section 2.3.

_2.2. Criteria for Model Selection_

There are several criteria that could be considered for selecting a model. In this case study, the technical team in Hawaii was prompted with questions from policymakers relying on wide media coverage of two models in particular—the University of Washington and the Imperial College London models. Yet, as the technical team discovered, these two models were not completely suitable or customizable for the situation in the local state jurisdiction. The technical team then identified two more models (the Epidemic Calculator and the University of Basel models) and reviewed these four models based on publicly available documentation (noted in the aforementioned references), and in some cases, data visualizations and source code, along five key dimensions. The COVID-19 modeling hub was not yet available in the early part of the pandemic, and thus the models chosen were selective and purposive.

The key dimensions used to select and use these four models were the following: (1) model objective, (2) interactivity and local parameter customizability, (3) age distribution, (4) type of model, and (5) open source (see Table 1). Given limitations of time and resources, the technical team made purposive decisions on which models to consider and use in 2020, and compared and contrasted the models along these dimensions. These dimensions were argued to be relevant for decision making in Hawaii based on the issues of the assumptions and limitations of the models. While these are not comprehensive of all considerations, they reflect the historical events in the Hawaii case.

**Model Objective.** Each model had a different objective. The IHME model intended to estimate COVID-19 hospital impacts, whereas the Imperial College London model sought to illustrate how public health measures such as physical distancing and protecting vulnerable populations affected the spread of COVID-19. Understanding the objective of the model is an important but incomplete aspect of its appropriate use.

**Local Parameter Customizability.** Some models allowed for interactivity and customizability of the model parameters. The Epidemic Calculator had sliders to allow a user to modify parameters driving the transmission and clinical dynamics underpinning the model (e.g., the population size and the basic reproduction number R0) and to add an intervention to decrease transmission by a specified amount from a given day. The Basel model allowed the user to modify various model parameters, age-group-specific parameters, and isolation measures, and to add multiple interventions to reduce transmission. In contrast, the IHME model had limited local parameter customizability.
Although it generated state-specific estimates, it did not allow for state-specific parameters to be incorporated. Moreover, although the IHME model used a wide variety of data sources, not all states had their data reflected in the model. In the case of Hawaii, the IHME model initially did not appear to utilize data from Hawaii but instead utilized average estimates of time from hospitalization to death from other states, despite widely different demographic, epidemiologic, and socioeconomic considerations.

**Age Distribution.** Age is well documented as one of the largest and most significant risk factors for COVID-19, with older adults at increased risk of being hospitalized and dying due to COVID-19 [9,10]. Each state has a different age distribution and demographic and population age structure, so it is important for models to account for age to project the case, hospitalization, and fatality numbers more accurately. During their rapid review, the technical team identified the University of Basel model as a Susceptible Infected Recovered/Susceptible Exposed Infected Recovered (SIR/SEIR) compartment-based model that accounted for age distributions, allowing the user to adjust the age distribution and age-group-specific parameters to reflect the population of interest.

**Type of Model.** Models can be broadly categorized into two types—mechanistic and statistical. Mechanistic models make assumptions about how the actual process of COVID-19 disease transmission occurs and include the SEIR compartmental models and their modified variants (a minimal sketch of such a model follows below). In contrast, statistical models fit curves using existing data, the main example being the IHME model, which early on used existing data from China and Italy to predict what would happen in the United States and elsewhere. This means that while statistical models can forecast what will happen in the near future, mechanistic models can make assumptions on the transmission dynamics of COVID-19 and forecast longer-term scenarios based on different interventions and policy changes [11].

**Table 1. Landscape of selected models for informing COVID-19 control and mitigation, 2020.**

| Model | Objective of Model | Localized Customizability | Local Age Distribution | Type of Model | Open Source |
|---|---|---|---|---|---|
| IHME [5] | Estimate hospital impacts | No | Unknown [1] | Statistical | No |
| Imperial College London [6] | Assess public health measures on spread | No [2] | No [2] | Mechanistic | No [2] |
| Epidemic Calculator [7] | Estimate change in epi curve after reduction in transmission | Yes | No | Mechanistic | Yes |
| University of Basel [8] | Planning tool with features such as imported cases and age groups | Yes | Yes | Mechanistic | Yes |

1 The IHME model was closed source, so it was unknown how local age distribution was taken into account. 2 The source code was not available when the original Report 9 was released. The updated source code was eventually made available much later with limited documentation, making localized use of the model difficult.

Incorrectly utilizing a statistical model to create long-term scenarios can produce results that "may suffer from the fallacy of Farr's law, a similar non-mechanistic method in which epidemics are assumed to follow a normal distribution shifted and scaled to fit data" [12]. This was a common and widespread criticism of the IHME model, as simply fitting a curve to historical data and extrapolating into the future can produce dramatic over- or underestimates of the epidemic's impact [13].
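To make the mechanistic type concrete, the following is a minimal sketch of the kind of SEIR compartment model referenced above. It is illustrative only and is not the code of any of the four models reviewed; the default incubation and infectious periods, the seed size, and the time step are placeholder assumptions.

```python
# Minimal discrete-time SEIR sketch (illustrative only; parameter defaults
# are placeholder assumptions, not values from any model cited in the text).

def seir(pop, r0, days, incubation=5.0, infectious=7.0, seed=10):
    """Return the number of infectious people on each simulated day."""
    sigma = 1.0 / incubation   # rate of leaving the Exposed compartment
    gamma = 1.0 / infectious   # rate of leaving the Infectious compartment
    beta = r0 * gamma          # transmission rate implied by R0
    s, e, i, r = pop - seed, 0.0, float(seed), 0.0
    infectious_by_day = []
    for _ in range(days):      # forward Euler step with dt = 1 day
        s_to_e = beta * s * i / pop   # new exposures
        e_to_i = sigma * e            # onset of infectiousness
        i_to_r = gamma * i            # recovery or removal
        s, e, i, r = s - s_to_e, e + s_to_e - e_to_i, i + e_to_i - i_to_r, r + i_to_r
        infectious_by_day.append(i)
    return infectious_by_day
```

A statistical model, by contrast, would fit a curve directly to observed counts and extrapolate it forward, with no transmission mechanism whose parameters an analyst could adjust to represent an intervention.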
However, mechanistic models also have shortcomings. Parameters available to a model are finite, meaning any output will be inherently flawed. In addition, the assignment of values to the parameters available in each model may only be viable with respect to a given historical situation, but relatively meaningless considering even a small shift in the makeup or habits of the population, or any immediate major policy change in the future, including upon seeing the results of the model. Therefore, it is important to recognize and establish degrees of uncertainty within the parameters themselves for mechanistic models. Models without such extended boundaries should be caveated generously or avoided completely.

**Open Source.** Models that are open source, defined as having the source code made publicly available for use and modification, are models that enable users to "open up the hood of the car" or "look into the sausage-making machine". This transparency in model assumptions and limitations should be interpreted by an epidemiologist for policymakers to ensure appropriate planning. Most importantly, open source incurs little to no additional cost and offers support to states with limited technical and epidemiologic capacity. For example, the IHME model was not open source, which made it very challenging to assess even basic assumptions such as how it incorporates age-specific distributions. Policymakers should approach model interpretations cautiously and not make assumptions about the data.

_2.3. Model Assumptions and Limitations_

This section reviews the key assumptions of these models identified by the technical team, which were used to inform the applicability of a given model to a particular jurisdiction, and in this case, the state of Hawaii. Table 2 presents explicitly some of the assumptions in the four models. Ultimately, the technical team chose to use the Epidemic Calculator and University of Basel model for several key decisions in 2020, as described in Section 3. In Hawaii in 2020, the Hawaii Data Collaborative, in partnership with local hospitals and HiPAM, built on and modified the open-source Epidemic Calculator model to show how policy measures on reopening and resuming travel could impact the spread of COVID-19. Later, beginning in 2021, HiPAM relied on a locally customized model.

**Table 2. Selected model assumptions for informing COVID-19 control and mitigation, 2020.**

IHME [5]
- Key Assumption #1 (Asymptomatic vs. Symptomatic): As the model is not open source, it is unapparent to what extent asymptomatic vs. symptomatic infection is taken into account.
- Underestimate or overestimate on total severity (cases, deaths): As the model is not open source, it is unapparent to what extent asymptomatic vs. symptomatic infection is considered.
- Key Assumption #2 (Age Distribution): Uses actual data and is based on results for specific age distributions (for China and Italy) applied and adapted to other populations.
- Underestimate or overestimate on total severity (cases, deaths): As the model is not open source, it is unapparent how the age-specific distributions are incorporated and applied.
- Other assumptions: Assumes changes in transmission are reflected through mobility of the population.

Imperial College [6]
- Key Assumption #1 (Asymptomatic vs. Symptomatic): Does not appear to distinguish between asymptomatic and non-hospitalized symptomatic individuals.
- Underestimate or overestimate on total severity (cases, deaths): Same as for Epidemic Calculator (see below).
- Key Assumption #2 (Age Distribution): Agent-based model has individuals that reflect the population's age distribution.
- Underestimate or overestimate on total severity (cases, deaths): Not applicable.

Epidemic Calculator [7]
- Key Assumption #1 (Asymptomatic vs. Symptomatic): Does not appear to distinguish between asymptomatic and non-hospitalized symptomatic individuals.
- Underestimate or overestimate on total severity (cases, deaths): May underestimate total severity, as asymptomatic individuals are more likely to spread COVID-19 because they are unaware they are infected and/or infectious.
- Key Assumption #2 (Age Distribution): Does not take age or age distributions into account, and the reference population or data used to benchmark (e.g., China) is unclear.
- Underestimate or overestimate on total severity (cases, deaths): May overestimate hospitalizations and fatalities if the population is younger, as increased age significantly increases risk [1].

University of Basel [8]
- Key Assumption #1 (Asymptomatic vs. Symptomatic): Does not appear to distinguish between asymptomatic and non-hospitalized symptomatic individuals.
- Underestimate or overestimate on total severity (cases, deaths): Same as for Epidemic Calculator (see above).
- Key Assumption #2 (Age Distribution): Divides the population into age groups with age-group-specific parameters (such as how severe, critical, and fatal the infection is).
- Underestimate or overestimate on total severity (cases, deaths): Depends on whether the user correctly selects the age distribution and age-group-specific parameters of the geographic location of interest.
- Other assumptions: Puts imported cases into the Exposed compartment, which can be interpreted as the cases coming from outside all being incubating/recently infected and not symptomatic.

1 The United States has a younger age distribution compared to China, so models that use aggregate estimates of mortality for China may overestimate mortality for the United States unless age-specific mortality distributions are accounted for.

**3. Policy Use Cases of Applying Models to Specific State Policy Decisions**

In this section of the policy case study, we provide a historical account of how models were used to inform key policy decisions in the jurisdiction, both in terms of managing capacity during a surge and reopening amidst a decline. The policy case study reflects actual lived experiences along with (now historical) observations and perspectives of the technical team and the HiPAM work group, who were providing information to policymakers seeking to make decisions informed by modeling. Thus, the detailed micro-level analyses for each use case are not presented herein; rather, the specific translation of evidence to knowledge and communication with a variety of stakeholders including policymakers, media, and the public are described.

_3.1. Using Models for Managing Resources and Capacity in a Surge_

Many states did not have an established epidemic or pandemic response plan for COVID-19, let alone a plan for how to use modeling for informing policy decisions. Amidst this context, a pressing and overarching question was how quickly COVID-19 would spread in their state or community. As such, the IHME model was utilized because it provided early state-specific estimates. It was one of the only models at the time that gave a hard deadline by which a state's bed surge capacity might be reached because of the speed with which COVID-19 spreads and leads to hospitalization. It was also widely disseminated in the news and prompted several policymakers to inquire whether decisions could be made based on what was circulated in the media. Policymakers rarely have a background in infectious disease or epidemiology, and the wide coverage of the models in media does not guarantee their appropriate use.
Thus, this case study reflects the occasion in which some policymakers had the foresight and humility to seek out information and inputs from technical experts for the three use cases described herein:

_•_ **Use Case 1:** Determining whether there was adequate hospital bed capacity and adequate PPE in the state.
_•_ **Use Case 2:** Assessing the need for isolation and quarantine facilities from the surge of the second wave in the fall of 2020.
_•_ **Use Case 3:** The role of public communication during the Delta surge in the summer of 2021, and the Omicron surge in the fall of 2021.

3.1.1. Use Case 1: Adequacy of Hospital Bed and Personal Protective Equipment Capacity

The IHME model was initially used to plan for ensuring adequate bed capacity and to decide whether to put up additional acute care facilities. In Hawaii, policymakers pondered challenging decisions of whether to retrofit existing hotel rooms or outfit a convention center. Either option would require collaboration with the US Army Corps of Engineers with an expensive price tag. This policy decision required COVID-19 case and hospitalization projections specific to Hawaii. In the beginning of the pandemic, with no other available guidance or tools as well as limited or no epidemiologic advisors, policymakers turned to the web-accessible IHME model for guidance on when Hawaii would be hit with a "surge" of cases.

However, at the onset of COVID-19 in the US, many states had yet to fully understand how the virus was spreading through their individual communities and how measures such as requiring face mask use in public would affect the spread [14]. Through the month of March and early April, many states did not yet have case and fatality counts high enough to get a sense of the trend of COVID-19 within their state. The IHME model used the hospitalization-to-death ratio from the seven locations within the US with the most cases to create a weighted average for this ratio and applied it to states with fewer than five fatalities, which included Hawaii. This resulted in Hawaii expecting to see a surge in cases and hospitalizations that was projected to overwhelm the local healthcare system.

Yet, when the technical team and the HiPAM work group utilized a basic SEIR model with Hawaii-specific parameters, no surge was estimated within the same time frame that IHME was predicting. The modeling team in Hawaii understood that Hawaii's unique geography and early mitigation efforts, most significantly restricting air and sea travel, drastically reduced the Rt below the value of 2.2 that was used by most models; the illustrative comparison after this paragraph shows how strongly such a reduction changes a projected surge. The technical team stated all the limitations and assumptions of their early model to the decision makers in HI-EMA and others. Based on the recommendation of state-specific data and use of a locally tailored epidemiologic model, the decision was made to not retrofit the Hawaii Convention Center into an acute care facility at that time, and to re-evaluate at a future date. Ultimately, Hawaii was never hit with a surge at the level predicted by the IHME model, i.e., the predictive validity of the IHME model was poor for Hawaii. Due to the information provided by the technical team, the state of Hawaii made a policy decision that avoided millions of USD in costs.
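The sensitivity at issue can be reproduced with the `seir` sketch given in Section 2.2. The population figure below is approximate, and the post-mitigation reproduction number of 1.3 is an invented illustration, not the technical team's actual estimate.

```python
# Peak infectious load under two reproduction numbers (illustrative only).
# 2.2 is the default cited in the text; 1.3 is an assumed post-mitigation value.
hawaii_pop = 1_455_000   # approximate state population, for illustration

for r0 in (2.2, 1.3):
    peak = max(seir(hawaii_pop, r0, days=365))
    print(f"R0 = {r0}: peak infectious ~ {peak:,.0f}")
```

Under these placeholder inputs, the lower reproduction number cuts the projected peak several-fold and pushes it much later, which is the qualitative point the locally parameterized model conveyed.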
While the IHME model influenced policymakers' and emergency management leaders' decisions about imposing public health measures to stop COVID-19 spread (e.g., through closing businesses and halting travel), it is important to note that the actual death totals from COVID-19 were outside the IHME model's 95% confidence interval 70% of the time [15]. This fact is notable given the importance of deaths as a measurement of COVID-19 spread during the early days of the pandemic [16]. Yet, predictive validity is an ex post consideration for model selection. Policymakers must make the best possible decisions without knowing the future or the predictive validity of any given model. Thus, in the historical case study, the selection of models best used for a given jurisdiction was based on the factors noted in Section 2.

Allocation, logistics, and utilization of personal protective equipment (PPE) during the initial response to COVID-19 was another use of COVID-19 models by policymakers. Hospital administrators and policymakers need to accurately account for burn rates of PPE (e.g., masks, surgical gowns, and facemasks) to request appropriate funding from their funding sources; a back-of-envelope sketch of this arithmetic follows below. The technical team used the University of Basel model for informing PPE planning. For respirator stockpiling in particular, estimates of need were essential to avoid over-stocking, which could diminish supply in other areas of need, and under-stocking, which would have had severe consequences.
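The following is a minimal sketch of that burn-rate arithmetic. Every input below (the census projection, per-patient-day consumption rates, and safety margin) is a placeholder assumption chosen for illustration, not a figure from the Basel model or from Hawaii's response.

```python
# Back-of-envelope PPE burn-rate estimate (all inputs are placeholder assumptions).
projected_covid_census = [40, 55, 70, 85, 95, 100, 100]  # hypothetical daily inpatient census
n95_per_patient_day = 12      # assumed respirators consumed per COVID inpatient-day
gown_per_patient_day = 20     # assumed gowns consumed per COVID inpatient-day
safety_margin = 1.25          # assumed buffer against supply-chain disruption

patient_days = sum(projected_covid_census)
n95_needed = patient_days * n95_per_patient_day * safety_margin
gowns_needed = patient_days * gown_per_patient_day * safety_margin
print(f"{patient_days} projected inpatient-days -> "
      f"~{n95_needed:,.0f} respirators and ~{gowns_needed:,.0f} gowns for the week")
```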
3.1.2. Use Case 2: Isolation and Quarantine Capacity Planning

In the second surge that Hawaii experienced in the fall of 2020, the models adapted and used through the HiPAM work group were used to communicate the forecasted number of cases, hospitalizations, and deaths, primarily through behind-the-scenes communications to senior state policymakers, including the Governor's office and the county Mayors, among others. Whereas the models used for hospital bed planning were informing HI-EMA as a key state agency, the departure of the epidemiologic advisor to HI-EMA in July of 2020 resulted in the HiPAM work group stepping in to serve as the go-to local institutional contact for modeling, supported by the Hawaii Data Collaborative and the Hawaii Department of Health Behavioral Health Administration. HiPAM had formed in April of 2020, bringing together health professionals, data scientists, mathematicians, and agency staff to convene around an agenda on COVID-19 modeling. Given the limitations in resources in a small remote state, there was a need to pool resources and efforts together to reduce duplication and confusion. The interdisciplinary HiPAM work group was structured on past work of the HiPAM chair, who had previous experience using work groups at a think tank in Washington, DC (the Center for Global Development). Based on the work of the HI-EMA epidemiologic advisor and technical team with support from HiPAM, a need for an ongoing forecast for the state was identified. By July of 2020, HiPAM launched a publicly accessible online two-week COVID-19 forecast.

As the local response evolved, including increasing capacity for testing, tracing, and isolation and quarantine, the models were also used to inform isolation and quarantine capacity, with a particular emphasis on vulnerable populations including homeless individuals, Native Hawaiian and Other Pacific Island communities, and individuals with co-occurring mental illness and substance use challenges. The Hawaii Department of Health's Behavioral Health Administration (BHA) was designated to lead isolation and quarantine beginning in August 2020 as Hawaii experienced its second surge. The BHA was also the sole DOH unit to establish the standalone Temporary Quarantine & Isolation Center specifically for homeless individuals and later for medically needy individuals [17]. As the BHA leadership actively participated in HiPAM, BHA leadership sought inputs and guidance from HiPAM models to monitor the adequacy of isolation and quarantine bed capacity against real-time case counts. Models from HiPAM were also used to estimate the adequacy of shelter capacity for homeless populations.

There was a policy need for a simple benchmark to identify whether there was adequate isolation and quarantine capacity and, specifically, enough beds procured by the State of Hawaii. With limited time and resources available to conduct a detailed epidemiologic and demographic study, there was a need to identify in a simple manner how many people might need isolation and quarantine services. Eligibility for isolation and quarantine services in a government-procured hotel was determined in part based on whether an individual was able to safely isolate at home and whether the individual lived in a shared bedroom with someone. In Hawaii, the percentage of the population living in a shared bedroom was identified to be nearly 10%. When applied to the total number of active COVID-19 cases at any given time, this benchmark helped to inform planning for the total beds procured by the State of Hawaii for isolation and quarantine operational activities in 2020. Although 'active cases' as a concept was challenging to independently measure, due to a lack of capacity for verification of individuals released from isolation and quarantine, the rolled-up cumulative case count over the last 14 days was used as a proxy for the active case count for the state, to which the ten percent was applied to estimate the need for isolation and quarantine outside of one's home; the sketch below restates this benchmark.
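The benchmark reduces to a few lines of arithmetic. The daily case series below is invented for illustration; the ten percent shared-bedroom share is the figure reported above.

```python
# Isolation & quarantine bed benchmark (case series is invented for illustration).
daily_new_cases = [90, 110, 120, 150, 160, 170, 200,
                   210, 190, 180, 175, 160, 150, 140]  # last 14 reported days
shared_bedroom_share = 0.10   # ~10% of residents live in a shared bedroom (from text)

active_case_proxy = sum(daily_new_cases[-14:])  # trailing 14-day cumulative count
beds_needed = shared_bedroom_share * active_case_proxy
print(f"~{active_case_proxy} proxy active cases -> plan for ~{beds_needed:.0f} I&Q beds")
```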
3.1.3. Use Case 3: Broad Public and Media Communications

In the Delta and Omicron surges in 2021, in the summer and winter, respectively, HiPAM took a direct public communications strategy to communicate the results of the model and forecast. Rather than use only backdoor communication with senior policymakers and government authorities, HiPAM emphasized direct communications with the media, similar to the weatherman, as well as the release of regular and timely reports sent to all key policymakers and news outlets in the state, supplementing the online web tool hosting the two-week advance forecast, which had been launched in July of 2020. By 2021, a locally developed and customized model led by mathematicians (Chyba et al.) became the de facto and well-accepted model used by HiPAM for the state, and the other models by the University of Basel and IHME were abandoned by 2021 [18]. The Chyba et al. model fulfilled the key considerations for models, including local parameter customizability, local age distribution, use for assessing different policy scenarios, and being customizable and potentially open source because it was developed in-house.

Developing local in-house mathematical and epidemiologic modeling capacity is extremely challenging and dependent upon the availability of scientific experts willing to engage in real-world policy challenges; it was spearheaded by funding from the National Science Foundation competitively awarded to Chyba et al. [18]. The direct public dissemination of the model results to the state in 2021 and 2022 was vastly different from the 2020 approach of behind-the-scenes information provided to senior leaders. Nevertheless, this public communication strategy also had challenges and risks in terms of the ways in which the modeling results and information were communicated and the kinds of questions and concerns posed by the media, policymakers, and the public.

The media and policymakers, for example, repeatedly asked HiPAM representatives challenging questions about the specific policy guidance that should be issued based on the modeling results. Yet, in order to ensure and maintain the scientific credibility of the HiPAM models, HiPAM repeatedly emphasized its role as a scientific body that focused on high-quality models based on best-available evidence and ever-evolving science. It had to remind the media, the public, and policymakers that while HiPAM's information was important, policymakers would need to use multiple sources of information in order to make decisions. In doing so, HiPAM reinforced its role as a scientific body and not as a body making policy recommendations or actions. Maintaining a scientifically neutral and unbiased position was essential for ensuring HiPAM's public credibility and recognition.

A second key question repeatedly raised by the public, the media, and policymakers was about the time horizon of the model. There was a longstanding desire for understanding the forecast or projection of COVID-19 well into the future by several months. HiPAM, however, maintained a stance of emphasizing a two-week forecast horizon, noting that anything longer than that would be subject to change. Seeing the real-world mistakes of models communicated at the national level, HiPAM made a deliberate choice to focus on a limited time horizon and repeatedly emphasized the ways in which individual and policy actions could easily influence the forecast beyond two weeks.

A third key challenge of direct communications was the emphasis on the dynamic nature of the modeling results. HiPAM repeatedly noted that upon release of the forecast, the projection would immediately change, because knowledge and information about the situation would result in changes in individual behavior as well as policy changes and action. Unlike weather forecasting, dissemination of a COVID-19 forecast changes the forecast itself. This difficulty in communicating the dynamic nature of modeling was challenging throughout the pandemic, even with seasoned media reporters and engaged policymakers and legislators.

The media and communications engagement was also broad, spanning numerous engagements in print, television, radio, social media, and state and county government hearings and fora, raising public awareness of the COVID-19 modeling writ large beyond the closed circles of policymakers. The media strategy led to wide acceptance, recognition, and use of the COVID-19 modeling not only by state agencies but also by other health care provider organizations, including the local hospital association.
The credibility and validity of the HiPAM model work was emphasized primarily through maintaining a neutral stance on any specific policy recommendation while focusing on the specific technical result or information that the model provided. The interactive communication loop also enabled immediate feedback from the policymakers, the media, and the public, who could ask questions, including about the potential impacts of a particular policy or intervention scenario. These communication engagements also enabled policymakers to request and clarify potential scenario analyses for any given policy.

_3.2. Using Models for Reopening Amidst Decline_

Models were also used to inform policy decisions for reopening, and these decisions were equally pressing due to the economic impacts of COVID-19. In the case of Hawaii, one major question facing policymakers seeking to reopen was how and when travel volumes, both domestic and international, could increase. Some of the early mechanistic models only accounted for a population whose total size stayed the same, as well as for how COVID-19 would progress under certain mitigation scenarios pre-programmed into the model. Moreover, there continues to be uncertainty and evolving understanding about the basic scientific facts and assumptions of COVID-19 (e.g., the extent of screening for asymptomatic transmission [19] and the infection fatality rate [20]), making policy decisions difficult.

As travel volumes return to higher levels, models that factor in imports of new cases can provide more accurate estimates of travel impacts on overall disease spread. Teams engaged in epidemic forecasting can estimate metrics for different travel volume scenarios and demonstrate how the range of new cases depends on how many imported cases are brought into their community. There are many assumptions built into the various COVID-19 models, such as whether symptomatic travelers will restrict themselves from traveling and whether they will be identified at the port of departure. Arguably, one of the largest considerations for developing travel scenarios is that of asymptomatic and pre-symptomatic cases—how assumptions about these parameters are incorporated into a given model, the distribution of COVID-19 cases which are asymptomatic or pre-symptomatic, and the rate of spread from these cases [21–23].

Reopening strategies based on one or multiple tests have been suggested without any numerical estimations of possible infected travelers slipping through; a sketch of such an estimate follows below. Modeling can provide policymakers with an educated guess when comparing reopening strategies based on the frequency and type of tests. Testing, contact tracing, and isolation and quarantine represent major public health tools for policymakers responding to COVID-19. States can consider how tests such as body temperature and symptom screens, as well as standard polymerase chain reaction (PCR) tests, can be linked to travel policies, and use models to help estimate the potential impacts and consequences of different testing strategies.
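A first-order estimate of travelers slipping through follows directly from arrival volume, prevalence among travelers, and test sensitivity. All three inputs below are placeholder assumptions chosen for illustration.

```python
# Expected infected travelers slipping past pre-travel screening (placeholder inputs).
daily_arrivals = 8_000    # assumed daily traveler volume
prevalence = 0.005        # assumed infection prevalence among arriving travelers
sensitivity = 0.85        # assumed per-test probability of detecting an infection

infected_arrivals = daily_arrivals * prevalence
miss_one_test = infected_arrivals * (1 - sensitivity)
# Treating a second test as independent is optimistic: misses are correlated
# (e.g., travelers early in incubation tend to be missed by both tests).
miss_two_tests = infected_arrivals * (1 - sensitivity) ** 2
print(f"1 test: ~{miss_one_test:.1f} infected travelers/day slip through; "
      f"2 tests: ~{miss_two_tests:.1f}/day")
```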
Most models at present do not account for health impacts beyond the immediate COVID-19 health impacts, such as those pertaining to mental health, reductions in the use of other essential health services, or long-term care facilities and other congregate settings. Mental health and substance use, already important public health issues prior to COVID-19, have become exacerbated by secondary and tertiary impacts due to COVID-19 [24]. COVID-19 will continue to impact communities directly and indirectly for decades to come. Policymakers will need to shift from the use of models that focus on hospital capacity and reopening to models identifying long-term health and economic impacts of COVID-19, such as mental health, access to non-COVID-19 health care services, education, and other dimensions of the social determinants of health. Most of the COVID-19 models used at present have not directly incorporated these long-term impacts. Bayesian modeling and synthetic population models can be, and have begun to be, used to examine these longer-term health impacts and policy implications, as this type of modeling accounts for additional differences in a population such as economic status and race.

**4. Discussion**

Epidemiologic models used for COVID-19 are numerous and complex, requiring subject matter experts to appropriately utilize data, interpret, and communicate results. This study used a case study approach to provide a historical account of the events and reflect on the lessons learned from one jurisdiction in the United States. The historical events reflected in this case study demonstrate the real-world challenges that policymakers and subject matter experts face when deciding which model to use, including demonstrating how, even with accurate data, utilization of an inappropriate model or considerations has the potential to lead to inappropriate interpretation of results. COVID-19 models vary by their designed intent, and understanding these differences, including their differences in geographic application and applicability to specific policy decisions, is necessary for policymakers to better utilize them in making decisions [25–27].

There are several key lessons that can be drawn from this case study documenting the historical application of mathematical and epidemiologic models for key policy decisions. First, COVID-19 modeling in Hawaii benefited from the incorporation of state-specific data, which was historically argued to directly result in cost savings from decreased unnecessary spending, particularly in the case of the hospital capacity planning in the early part of the pandemic in 2020. This model incorporated two of the most important factors that assist local leaders in modeling local issues: age distribution and customization specific to Hawaii [9,10]. It also helped to inform isolation and quarantine planning and the adequacy of facilities available to meet demand and need in the fall of 2020, as well as helped to inform the media, the public, and policymakers of the potential magnitude of the Delta and Omicron surges in 2021 to 2022.

Second, regardless of model selection, it is essential that model outputs be interpreted directionally, not as a forecast of hard, immutable numbers, and with a clearly delineated time horizon. The numerous known and unknown factors, and combinations thereof, impacting the spread of COVID-19 mean that no model, no matter the level of sophistication, can concurrently and accurately account for all factors. Further, the nature of the models, easily influenced by actions of individuals and policies today, makes the models dynamic and uncertain rather than static, despite a desire for static and definitive answers.
Therefore, numbers produced regarding cases, hospitalizations, deaths, etc. should be communicated and understood as a possible scenario should current trends continue into the future with no change in policy or human behavior, an assumption that becomes impossible as soon as a forecast is released and disseminated.

Third, when the above factors are properly considered and both model outputs—projected trends and subsequent reductions by means of interventions—are combined, it is essential that warnings are heeded and action be taken as soon as possible. Based on appropriate interpretation of a model, policymakers can be advised whether a policy intervention may avert critical thresholds such as hospital and ICU capacity. With these crucial timings in mind, policymakers can then use other models to help more accurately understand how an intervention may impact the Rt in a given population and, subsequently, what scenarios of intervention combinations and efficacies might result in faster control or even elimination of COVID-19 [28]. Hesitation in implementation or inadequate interventions can have dramatic effects on disease spread, such as the delayed and scattered approach to mask wearing early on in the pandemic [29]. However, failure to grasp model limitations can also result in hasty and expensive overreactions.

Fourth, these models require a firm grasp of epidemiologic concepts. As such, policymakers are advised to seek out and involve an interdisciplinary scientific advisory group as early as possible to translate modeled outcomes into actionable context. Because of the complexity of models, the significant unpredictable impact of human behavior, and the potential for misinterpretation, it can be argued that these models do more harm than good. Rather than dismiss the use of models because of their complexity, policymakers should incorporate a scientific and technical advisory group or 'brain trust' as early as possible to help inform and navigate the difficult policy decisions that can have positive impacts on their constituents and communities. In the case study herein, the brain trust was a diverse team that provided input from various areas of expertise (e.g., epidemiology, data science, behavioral health, and mathematics). The role of the local university working in close collaboration and partnership with the community and government authorities is essential.

While it remains to be seen what the long-run impact of the communication of COVID-19 modeling and forecasting in the state of Hawaii will be, it cannot be denied that there is value in having the mathematical and epidemiologic capacity of scientific experts who are willing and able to communicate and contribute to real-world policy challenges in service of the public and community during an extended period of confusion and crisis.

The work in Hawaii of using a brain trust may be compared with the work in many countries around the world that used COVID-19 modeling and forecasting to inform decision making. In Hawaii, the creation of HiPAM included a range of local experts from epidemiology, public health, data science, and mathematics who were able to contribute to modeling and forecasting locally. Other countries such as Ireland, the United Kingdom, New Zealand, and several others had technical advisory groups that provided inputs and information to policymakers, who ultimately made the policy calls and decisions.
For example, in New Zealand, a COVID-19 technical advisory group comprising medical, public health, and academic advisors provided advice to the ministry of health. In Australia, the COVID-19 Expert Database hosted by the Australian Academy of Science provided a mechanism for governments and decision makers to have easier access to expertise in COVID-19. The UK also established a Scientific Advisory Group for Emergencies as the entity responsible for providing scientific advice to UK decision makers while not representing official government policy.

More research is needed to examine how to create and then institutionalize these bodies with blended technical expertise, savvy communication skills, and linkages to policymakers making decisions. What is the optimal composition of these bodies? To what extent are these bodies linked and connected to decision-making? What is the role and balance of neutrality in balancing scientific facts relative to policy recommendations? The work in Hawaii serves as a basis for the hypothesis that the composition of a body that draws from a wide range of expertise beyond clinical medicine or public health fields can help to bridge challenges of mathematics, applied or real-world epidemiology, behavioral health, and data science. Existing public health institutions such as those pertaining to health technology assessment or epidemic intelligence are also relevant and should be considered before new bodies are duplicated, creating more institutional fragmentation, siloization, and duplication of effort. Further, there is a need for communication practitioners who can help to translate and communicate complex ideas into simple concepts for policymakers and the public. We would also hypothesize that whether forecasting and modeling are sidelined or are integral to decision-making depends on leadership and governance in formally supporting, building, seeking guidance from, and incorporating information from technical advisory groups.

During a time when there is controversy and doubt about science, the ways in which scientific and technical advisory bodies monitor ever-evolving science and evidence, and how such bodies intersect and communicate with policymakers who then make decisions for policies, programs, and practice, merit further study. There is some research in the fields of political science and public administration which examines the ways in which policy decisions and actions are determined and implemented. Political science theories have been applied to understand how political actors make policy actions and political decisions, including Kingdon's Multiple Streams Model [30] and Reich's work on political economy [31]. Work by Walt et al. noted that rigorous health policy research methods leave much to be desired for understanding the policy process [32]. In particular, a major limitation of this historical case study is its focus on a single jurisdiction—the state of Hawaii—one that lacks a comparison or historical "counterfactual" for what might have happened in the absence of this work on modeling in the state. We argue that drawing on historical perspectives and chronology from the lived experiences of those engaged in real-world implementation and operations (albeit informed by modeling and evidence) is a research methodology.
This case study also did not explicitly examine cases of misappropriation and misuse of models in Hawaii, or cases in which the modeling outputs were ignored or otherwise not used for specific policy actions. Determining what constitutes misuse and misappropriation is beyond the scope of this paper, but we acknowledge that the complexity of models makes inappropriate or poor application quite possible, if not the default. Future research would be valuable to examine the different ways in which modeling informed or did not inform key policy decisions in multiple states and jurisdictions, and the variations in communication about modeling.

**5. Conclusions**

This article has emphasized the role of localizing knowledge that can be translated and used to inform local decisions. With tremendous uncertainty about a novel disease, the need for thoughtful application of scientific knowledge is ever more pressing. Although the specific use cases and the policy window and moment for critical decisions described herein have now passed, the lessons from this case study may be relevant for jurisdictions seeking to make smarter decisions informed by modeling. The knowledge and experience gained through these lived experiences may be applicable to island countries and states with age, ethnicity, and other sociodemographic distributions similar to Hawaii. The knowledge and experience from this case study may also help to inform jurisdictions experiencing limitations in resources, time, and scientific expertise for COVID-19 modeling in informing policymaking.

**Author Contributions:** Conceptualization, T.H.L. and V.Y.F.; methodology, T.H.L., B.D., L.D., J.H., M.C. and V.Y.F.; resources, S.H., E.M., K.H. and V.Y.F.; writing—original draft preparation, T.H.L.; writing—review and editing, T.H.L., B.D., L.D., J.H., M.C., S.H., E.M., K.H. and V.Y.F.; supervision, project administration and funding acquisition, E.M., K.H. and V.Y.F. All authors have read and agreed to the published version of the manuscript.

**Funding:** T.H. Lee and V.Y. Fan gratefully acknowledge extramural funds from the Hawaii State Department of Health Behavioral Health Administration Alcohol and Drug Abuse Division (ADADMOA-SP-19-01, ASO Log No. 15-074). M. Chyba gratefully acknowledges extramural funds from the National Science Foundation, award #2030789. V.Y. Fan and M. Chyba gratefully acknowledge support from the Coronavirus State Fiscal Recovery Funds via the Governor's Office Hawaii Department of Defense and the University of Hawaii at Manoa Provost's Office.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Acknowledgments:** The authors gratefully acknowledge support from the members of the Hawaii Pandemic Applied Modeling Work Group (Adriann Gin, Baseem Missaghi, Brian Wu, Curtis Toma, David Chow, Francis Chan, Istvan Szapudi, Janet Berreman, Kendrick Leong, Kiyoshi Shiraishi, Lee Altenberg, Marguerite Butler, Nick Redding, Noah Hafner, Peter Fuleky, Roy Esaki, Rukiyah Walker, Tiana Tran, Tom Blamey), ACES Lab, Applied Research Laboratory, Margo Edwards, Harry Kim, Aimee Grace, Vassilis Syrmos, Velma Kameoka, Michael Bruno, David Lassner, and John Valera. The authors also gratefully acknowledge the public, the media, and the federal, state, and county policymakers for their interest in this work. Any omissions or errors are our own.
**Conflicts of Interest:** The authors declare no conflict of interest.

**References**

1. Logan, R.I.; Castañeda, H. Addressing Health Disparities in the Rural United States: Advocacy as Caregiving among Community Health Workers and Promotores de Salud. Int. J. Environ. Res. Public Health 2020, 17, 9223. [CrossRef](http://doi.org/10.3390/ijerph17249223) [PubMed]
2. Shelus, V.S.; Frank, S.C.; Lazard, A.J.; Higgins, I.C.A.; Pulido, M.; Richter, A.P.C.; Vandegrift, S.M.; Vereen, R.N.; Ribisl, K.M.; Hall, M.G. Motivations and Barriers for the Use of Face Coverings during the COVID-19 Pandemic: Messaging Insights from Focus Groups. Int. J. Environ. Res. Public Health 2020, 17, 9298. [CrossRef](http://doi.org/10.3390/ijerph17249298) [PubMed]
3. Zhang, X.; Warner, M.E. COVID-19 Policy Differences across US States: Shutdowns, Reopening, and Mask Mandates. Int. J. Environ. Res. Public Health 2020, 17, 9520. [CrossRef](http://doi.org/10.3390/ijerph17249520)
4. Dubé, C.; Garner, G.; Stevenson, M.; Sanson, R.; Estrada, C.; Willeberg, P. The use of epidemiological models for the management of animal diseases. Conf. OIE 2007, 1, 13–23.
5. IHME COVID-19 Health Service Utilization Forecasting Team; Murray, C.J. Forecasting the impact of the first wave of the COVID-19 pandemic on hospital demand and deaths for the USA and European Economic Area countries. medRxiv 2020, 2020, 20074732. [CrossRef](http://doi.org/10.1101/2020.04.21.20074732)
6. Ferguson, N.; Laydon, D.; Nedjati Gilani, G.; Imai, N.; Ainslie, K.; Baguelin, M.; Bhatia, S.; Boonyasiri, A.; Cucunuba Perez, Z.; Cuomo-Dannenburg, G. Impact of Non-Pharmaceutical Interventions (NPIs) to Reduce COVID19 Mortality and Healthcare Demand; Imperial College London: London, UK, 2020. [CrossRef](http://doi.org/10.25561/77482)
7. Goh, G. Epidemic Calculator. Available online: https://gabgoh.github.io/COVID/index.html (accessed on 9 June 2020).
8. Noll, N.B.; Aksamentov, I.; Druelle, V.; Badenhorst, A.; Ronzani, B.; Jefferies, G.; Neher, R.A. COVID-19 Scenarios: An interactive tool to explore the spread and associated morbidity and mortality of SARS-CoV-2. medRxiv 2020, 2020, 20091363. [CrossRef](http://doi.org/10.1101/2020.05.05.20091363)
9. Bialek, S.; Boundy, E.; Bowen, V.; Chow, N.; Cohn, A.; Dowling, N.; Ellington, S.; Gierke, R.; Hall, A.; MacNeil, J.; et al. Severe Outcomes Among Patients with Coronavirus Disease 2019 (COVID-19)—United States, February 12–March 16, 2020. MMWR Morb. Mortal. Wkly. Rep. 2020, 69, 343–346. [CrossRef](http://doi.org/10.15585/mmwr.mm6912e2)
10. Garg, S.; Kim, L.; Whitaker, M.; O'Halloran, A.; Cummings, C.; Holstein, R.; Prill, M.; Chai, S.J.; Kirley, P.D.; Alden, N.B.; et al. Hospitalization Rates and Characteristics of Patients Hospitalized with Laboratory-Confirmed Coronavirus Disease 2019—COVID-NET, 14 States, March 1–30, 2020. MMWR Morb. Mortal. Wkly. Rep. 2020, 69, 458–464. [CrossRef](http://doi.org/10.15585/mmwr.mm6915e3)
11. Holmdahl, I.; Buckee, C. Wrong but Useful—What COVID-19 Epidemiologic Models Can and Cannot Tell Us. N. Engl. J. Med. 2020, 383, 303–305. [CrossRef](http://doi.org/10.1056/NEJMp2016822)
12. Jewell, N.P.; Lewnard, J.A.; Jewell, B.L. Caution Warranted: Using the Institute for Health Metrics and Evaluation Model for Predicting the Course of the COVID-19 Pandemic. Ann. Intern. Med. 2020, 173, 226–227. [CrossRef](http://doi.org/10.7326/M20-1565)
13. Brown, G.; Cavanaugh, J.; Miller, A.; Oleson, J.; Pentella, M.; Perencevich, E.; Sewell, D. Critique of the IHME Model for COVID-19 Projections. Available online: https://governor.iowa.gov/sites/default/files/documents/IDPH%20Whitepaper%201%20-%20Critique%20of%20IHME%20Model%20for%20COVID-19%20Projections%20%281%29.pdf (accessed on 11 April 2021).
14. Lyu, W.; Wehby, G.L. Community Use Of Face Masks And COVID-19: Evidence From A Natural Experiment Of State Mandates In The US. Health Aff. 2020, 39, 1419–1425. [CrossRef](http://doi.org/10.1377/hlthaff.2020.00818) [PubMed]
15. Marchant, R.; Samia, N.I.; Rosen, O.; Tanner, M.A.; Cripps, S. Learning as We Go: An Examination of the Statistical Accuracy of COVID19 Daily Death Count Predictions. arXiv 2020, arXiv:2004.04734. [CrossRef](http://doi.org/10.1101/2020.04.11.20062257)
16. Subbaraman, N. Why daily death tolls have become unusually important in understanding the coronavirus pandemic. Nature 2020. [CrossRef](http://doi.org/10.1038/d41586-020-01008-1) [PubMed]
17. Fan, V.Y.; Fontanilla, T.M.; Yamaguchi, C.T.; Geib, S.M.; Holmes, J.R.; Kim, S.; Do, B.; Lee, T.H.; Talagi, D.K.P.; Sutton, Y.; et al. Experience of isolation and quarantine hotels for COVID-19 in Hawaii. J. Travel Med. 2021, 28, taab096. [CrossRef](http://doi.org/10.1093/jtm/taab096) [PubMed]
18. Kunwar, P.; Markovichenko, O.; Chyba, M.; Mileyko, Y.; Koniges, A.; Lee, T. A study of computational and conceptual complexities of compartment and agent based models. N. Heterog. Media 2022, 17, 11546. [CrossRef](http://doi.org/10.3934/nhm.2022011)
19. Park, M.; Cook, A.R.; Lim, J.T.; Sun, Y.; Dickens, B.L. A Systematic Review of COVID-19 Epidemiology Based on Current Evidence. J. Clin. Med. 2020, 9, 967. [CrossRef](http://doi.org/10.3390/jcm9040967)
20. Basu, A. Estimating The Infection Fatality Rate Among Symptomatic COVID-19 Cases In The United States. Health Aff. 2020, 39, 1229–1236. [CrossRef](http://doi.org/10.1377/hlthaff.2020.00455)
21. Gandhi, M.; Yokoe, D.S.; Havlir, D.V. Asymptomatic Transmission, the Achilles' Heel of Current Strategies to Control COVID-19. N. Engl. J. Med. 2020, 382, 2158–2160. [CrossRef](http://doi.org/10.1056/NEJMe2009758)
22. He, X.; Lau, E.H.Y.; Wu, P.; Deng, X.; Wang, J.; Hao, X.; Lau, Y.C.; Wong, J.Y.; Guan, Y.; Tan, X.; et al. Temporal dynamics in viral shedding and transmissibility of COVID-19. Nat. Med. 2020, 26, 672–675. [CrossRef](http://doi.org/10.1038/s41591-020-0869-5)
23. Oran, D.P.; Topol, E. Prevalence of Asymptomatic SARS-CoV-2 Infection. Ann. Intern. Med. 2020, 173, 362–367. [CrossRef](http://doi.org/10.7326/M20-3012)
24. Hartnett, K.P.; Kite-Powell, A.; Devies, J.; Coletta, M.A.; Boehmer, T.K.; Adjemian, J.; Gundlapalli, A.V. Impact of the COVID-19 Pandemic on Emergency Department Visits—United States, January 1, 2019–May 30, 2020. MMWR Morb. Mortal. Wkly. Rep. 2020, 69, 699–704. [CrossRef](http://doi.org/10.15585/mmwr.mm6923e1) [PubMed]
25. Ouerfelli, N.; Vrinceanu, N.; Coman, D.; Cioca, A.L. Empirical Modeling of COVID-19 Evolution with High/Direct Impact on Public Health and Risk Assessment. Int. J. Environ. Res. Public Health 2022, 19, 3707. [CrossRef](http://doi.org/10.3390/ijerph19063707) [PubMed]
26. Ganasegeran, K.; Jamil, M.F.A.; Appannan, M.R.; Ch'Ng, A.S.H.; Looi, I.; Peariasamy, K.M. Spatial Dynamics and Multiscale Regression Modelling of Population Level Indicators for COVID-19 Spread in Malaysia. Int. J. Environ. Res. Public Health 2022, 19, 2082. [CrossRef](http://doi.org/10.3390/ijerph19042082) [PubMed]
27. Rypdal, K.; Bianchi, F.M.; Rypdal, M. Intervention Fatigue is the Primary Cause of Strong Secondary Waves in the COVID-19 Pandemic. Int. J. Environ. Res. Public Health 2020, 17, 9592. [CrossRef](http://doi.org/10.3390/ijerph17249592)
28. Siegenfeld, A.F.; Bar-Yam, Y. The impact of travel and timing in eliminating COVID-19. Commun. Phys. 2020, 3, 1–8. [CrossRef](http://doi.org/10.1038/s42005-020-00470-7)
29. Howard, J.; Huang, A.; Li, Z.; Tufekci, Z.; Zdimal, V.; van der Westhuizen, H.-M.; von Delft, A.; Price, A.; Fridman, L.; Tang, L.-H.; et al. An evidence review of face masks against COVID-19. Proc. Natl. Acad. Sci. USA 2021, 118, e2014564118. [CrossRef](http://doi.org/10.1073/pnas.2014564118)
30. Kingdon, J.W. Agendas, Alternatives, and Public Policies; Little, Brown: Boston, MA, USA, 1984.
31. Reich, M.R. Political economy analysis for health. Bull. World Health Organ. 2019, 97, 514. [CrossRef](http://doi.org/10.2471/BLT.19.238311)
32. Walt, G.; Shiffman, J.; Schneider, H.; Murray, S.F.; Brugha, R.; Gilson, L. 'Doing' health policy analysis: Methodological and conceptual reflections and challenges. Health Policy Plan. 2008, 23, 308–317. [CrossRef](http://doi.org/10.1093/heapol/czn024)
{ "disclaimer": "Notice: Paper or abstract available at https://pmc.ncbi.nlm.nih.gov/articles/PMC9140577, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "CCBY", "status": "GOLD", "url": "https://www.mdpi.com/1660-4601/19/10/6119/pdf?version=1652854362" }
2022
[ "JournalArticle", "Review" ]
true
2022-05-01T00:00:00
[ { "paperId": "b71197b3384e5ce4cd75b1a2fba999416692e76a", "title": "Empirical Modeling of COVID-19 Evolution with High/Direct Impact on Public Health and Risk Assessment" }, { "paperId": "d3f99b2ed1d23d37277228937fe96cf92570f5f2", "title": "Spatial Dynamics and Multiscale Regression Modelling of Population Level Indicators for COVID-19 Spread in Malaysia" }, { "paperId": "bb5fcc2c315f2d8570bdfe0a8198e2e19acb1146", "title": "A study of computational and conceptual complexities of compartment and agent based models" }, { "paperId": "7a630f287498fba98ec1f9bdde82a3fef6e83ba9", "title": "Experience of isolation and quarantine hotels for COVID-19 in Hawaii" }, { "paperId": "01db2ee5d371a513a911cc5144c76dd735b896ec", "title": "Prevalence of Asymptomatic SARS-CoV-2 Infection" }, { "paperId": "c3b6e328dca9c135a50edd6b33f541a5a7405143", "title": "An evidence review of face masks against COVID-19" }, { "paperId": "17ff703ad8ce6e2bea88584193adceafef4c137e", "title": "Motivations and Barriers for the Use of Face Coverings during the COVID-19 Pandemic: Messaging Insights from Focus Groups" }, { "paperId": "48560927862f49035fb08a5942a31154a570e850", "title": "COVID-19 Policy Differences across US States: Shutdowns, Reopening, and Mask Mandates" }, { "paperId": "9556743ebd32eaf70aa5add82200b45ff6431490", "title": "Addressing Health Disparities in the Rural United States: Advocacy as Caregiving among Community Health Workers and Promotores de Salud" }, { "paperId": "f0d1df1c588eec17f881730b0087b776d49a60f8", "title": "Intervention Fatigue is the Primary Cause of Strong Secondary Waves in the COVID-19 Pandemic" }, { "paperId": "b8a0ff6de3d9469d67fdbfd402ca0d3c972666ad", "title": "The impact of travel and timing in eliminating COVID-19" }, { "paperId": "1a2e2784eb0df80643c7f1adac6079ea5e0254b3", "title": "Community Use Of Face Masks And COVID-19: Evidence From A Natural Experiment Of State Mandates In The US." }, { "paperId": "a0a9184e748a58fe25e051bab4a6c338b358e825", "title": "Impact of the COVID-19 Pandemic on Emergency Department Visits — United States, January 1, 2019–May 30, 2020" }, { "paperId": "0e7c2681bc8b0350f563663774fbe19dce3c6b27", "title": "Wrong but Useful - What Covid-19 Epidemiologic Models Can and Cannot Tell Us." }, { "paperId": "0c5a3cbb9069454e95dc38099e979ae80b815525", "title": "Estimating The Infection Fatality Rate Among Symptomatic COVID-19 Cases In The United States." 
}, { "paperId": "526625cf5000ab29d70641c70564d1f49b42946a", "title": "COVID-19 Scenarios: an interactive tool to explore the spread and associated morbidity and mortality of SARS-CoV-2" }, { "paperId": "9712e670200fbc081c1115a5789086a59b0edd50", "title": "Forecasting the impact of the first wave of the COVID-19 pandemic on hospital demand and deaths for the USA and European Economic Area countries" }, { "paperId": "e836dabc407abecd889905eca49f2ed07a702e58", "title": "Asymptomatic Transmission, the Achilles’ Heel of Current Strategies to Control Covid-19" }, { "paperId": "9f138bc04b187093dffd7a041ad35495ce8aa5f8", "title": "Hospitalization Rates and Characteristics of Patients Hospitalized with Laboratory-Confirmed Coronavirus Disease 2019 — COVID-NET, 14 States, March 1–30, 2020" }, { "paperId": "4fce764b00de92faae0af5386a55350db9a46dc8", "title": "Caution Warranted: Using the Institute for Health Metrics and Evaluation Model for Predicting the Course of the COVID-19 Pandemic" }, { "paperId": "1e1b26fb478f39cca24208072bc12a7519548d46", "title": "Why daily death tolls have become unusually important in understanding the coronavirus pandemic" }, { "paperId": "4dd6fd8bb608ec3e9d6835578f0d16fe320b605d", "title": "Learning as We Go: An Examination of the Statistical Accuracy of COVID19 Daily Death Count Predictions" }, { "paperId": "4c970b611ff10313bd88b7c3e197c45c06a67feb", "title": "A Systematic Review of COVID-19 Epidemiology Based on Current Evidence" }, { "paperId": "0813446cd9dcf350a7212280b8db7ee3fd05b970", "title": "Severe Outcomes Among Patients with Coronavirus Disease 2019 (COVID-19) — United States, February 12–March 16, 2020" }, { "paperId": "a5a77eb6c25592824e322c47e4c2de04676bd8ae", "title": "Temporal dynamics in viral shedding and transmissibility of COVID-19" }, { "paperId": "0874966fc2b887ece8ce43d4dd09ec180cb11615", "title": "Report 9: Impact of non-pharmaceutical interventions (NPIs) to reduce COVID19 mortality and healthcare demand" }, { "paperId": "88479b44437b28f96cb357a0ce15a71da77fe0d6", "title": "Political economy analysis for health" }, { "paperId": "ed860409ad73160875dcb38e972d302f7b336dfe", "title": "‘Doing’ health policy analysis: methodological and conceptual reflections and challenges" }, { "paperId": "df03e1844e385324d61af7ca3587d4411ee81ae4", "title": "THE USE OF EPIDEMIOLOGICAL MODELS FOR THE MANAGEMENT OF ANIMAL DISEASES" }, { "paperId": "8488b9d69fa47093b6cf77562473d0333ece1896", "title": "Agendas, alternatives, and public policies" }, { "paperId": null, "title": "Critique of the IHME Model for COVID-19 Projections" }, { "paperId": null, "title": "Team IC 19 Health Service Utilization Forecasting" }, { "paperId": null, "title": "Epidemic Calculator" } ]
15,361
en
[ { "category": "Economics", "source": "external" }, { "category": "Economics", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/022baa2ab01c85223ced326773cfec28142ff784
[ "Economics" ]
0.894598
Gauging Market Responses to Monetary Policy Communication
022baa2ab01c85223ced326773cfec28142ff784
The Review
[ { "authorId": "69896945", "name": "Kevin L. Kliesen" }, { "authorId": "50153875", "name": "Brian Levine" }, { "authorId": "49372646", "name": "Christopher J. Waller" } ]
{ "alternate_issns": null, "alternate_names": [ "International Conference on Remote Engineering and Virtual Instrumentation", "Rev", "REV", "Int Conf Remote Eng Virtual Instrum" ], "alternate_urls": null, "id": "86ecfb72-3296-4092-bd46-03d007d47bd0", "issn": "1941-532X", "name": "The Review", "type": "conference", "url": null }
The modern model of central bank communication suggests that central bankers prefer to err on the side of saying too much rather than too little. The reason is that most central bankers believe that clear and concise communication of monetary policy helps achieve their goals. For the Federal Reserve, this means to achieve its goals of price stability, maximum employment, and stable long-term interest rates. This article examines the various dimensions of Fed communication with the public and financial markets and how Fed communication with the public has evolved over time. We use daily and intraday data to document how Fed communication affects key financial market variables. We find that Fed communication is associated with changes in prices of financial market instruments such as Treasury securities and equity prices. However, this effect varies by type of communication, by type of instrument, and by who is doing the speaking.
The modern model of central bank communication suggests that central bankers prefer to err on the side of saying too much rather than too little. The reason is that most central bankers believe that clear and concise communication of monetary policy helps achieve their goals. For the Federal Reserve, this means to achieve its goals of price stability, maximum employment, and stable long-term interest rates. This article examines the various dimensions of Fed communication with the public and financial markets and how Fed communication with the public has evolved over time. We use daily and intraday data to document how Fed communication affects key financial market variables. We find that Fed communication is associated with changes in prices of financial market instruments such as Treasury securities and equity prices. However, this effect varies by type of communication, by type of instrument, and by who is doing the speaking. (JEL E52, E58, E61, G10)

Federal Reserve Bank of St. Louis Review, Second Quarter 2019, 101(2), pp. 69-91. https://doi.org/10.20955/r.101.69-91

**KEYNES:** Arising from Professor Gregory's questions, is it a practice of the Bank of England never to explain what its policy is?
**HARVEY:** Well, I think it has been our practice to leave our actions to explain our policy.
**KEYNES:** Or the reasons for its policy?
**HARVEY:** It is a dangerous thing to start to give reasons.
**KEYNES:** Or to defend itself against criticism?
**HARVEY:** As regards criticism, I am afraid, though the Committee may not all agree, we do not admit there is a need for defense; to defend ourselves is somewhat akin to a lady starting to defend her virtue.

Exchange between John Maynard Keynes and Bank of England Deputy Governor Sir Ernest Harvey, December 5, 1929.[1]

Kevin L. Kliesen is a business economist and research officer, Brian Levine was a senior research associate, and Christopher J. Waller is executive vice president and director of research at the Federal Reserve Bank of St. Louis. © 2019, Federal Reserve Bank of St. Louis. The views expressed in this article are those of the author(s) and do not necessarily reflect the views of the Federal Reserve System, the Board of Governors, or the regional Federal Reserve Banks. Articles may be reprinted, reproduced, published, distributed, displayed, and transmitted in their entirety if copyright notice, author name(s), and full citation are included. Abstracts, synopses, and other derivative works may be made only with prior written permission of the Federal Reserve Bank of St. Louis.

## INTRODUCTION

Central bank communication has come a long way since the Bank of England's motto ostensibly was "Never explain, never apologize."[2] Today, the motto of central bankers might instead be "Can you hear me now?" The modern model of central bank communication suggests that central bankers prefer to err on the side of saying too much rather than too little. In this vein, central bank communication takes many forms, from economic forecasts and official reports, to speeches, interviews, testimonies before governmental bodies, and policy statements and press conferences immediately after policy meetings. In the United States, enhancements in central bank communication are most pronounced in the realm of speeches and other remarks (e.g., television interviews) by Federal Reserve (hereafter, Fed) governors and Reserve Bank presidents.
These forms of communication have become more prominent since the recession and Financial Crisis. In an era of increased communication by Federal Open Market Committee (FOMC) participants, one may ask whether additional information is useful for financial market participants who carefully monitor monetary policy developments. Indeed, some economists and analysts have argued that Fed officials talk too much.[3] There are many nuances to this argument, but the primary claim is that more information increases the probability of market mispricing. Shin (2017) discusses some of these issues.

There are at least two counterarguments to the market mispricing view. The first, as enunciated by Kocherlakota (2017), is that the price of an independent central bank is a set of independent voices to insure against groupthink. The second counterargument is that the pricing of financial instruments in markets is more efficient with more, not less, information. Regardless, central bank communication is important because individuals' economic decisions are based on expectations of future policies. Thus, clear communication of its policies and actions may help the Fed achieve its mandated goals of stable prices, maximum employment, and moderate long-term interest rates.

The purpose of this article is twofold. The first part examines the various dimensions of Fed communication with the public and financial markets. This includes documenting how communication with the public has evolved over time. The second part empirically analyzes the economic effects of Fed communication on key financial market variables. Our analysis uses daily and intraday data. We find that Fed communication can affect prices of financial market instruments such as equities and Treasury securities. However, this effect varies by type of communication, by type of instrument, and by who is doing the speaking. We also find that larger financial market reactions tend to be associated with communication from the Fed Chair, non-Chair Fed governors, and FOMC meetings without an associated press conference. We further find that financial market reactions following press conferences after FOMC meeting statements are not significant.

## HOW DOES THE FED COMMUNICATE?

As the exchange between John Maynard Keynes and Bank of England Deputy Governor Sir Ernest Harvey demonstrated, the principles of central bank communication have evolved over time.

### Table 1
**Types of Fed Communication**

| Type | Communicator | Frequency | Release timing |
|---|---|---|---|
| Policy statement | FOMC | 8 times per year | After each FOMC meeting, ~2 PM EST |
| Minutes | FOMC | 8 times per year | 3 weeks after each FOMC meeting, ~2 PM EST |
| Press conference | Chair | 8 times per year* | After designated FOMC meeting, ~2:30 PM EST |
| Summary of Economic Projections | FOMC | 4 times per year | After designated FOMC meeting, ~2 PM EST |
| Monetary Policy Report to Congress | Chair | 2 times per year | ~February and July of each year |
| Speeches and other public remarks | FOMC | Continuous† | NA |
| Statement of Longer-Run Goals and Policy Strategy | FOMC | 1 time per year | Reaffirmed each January |
| Policy Normalization Principles and Plans‡ | FOMC | Updated periodically | After associated FOMC meeting, ~2 PM EST |

NOTE: Table reflects the present-day FOMC procedure. The timing and frequency of each event has changed over the past 20 years. ~Indicates times are approximations and may differ slightly from event to event.
*During the period analyzed, press conferences were held only four times per year. Beginning in January 2019, press conferences are held after every FOMC meeting. †Excludes FOMC "blackout periods," which begin the second Saturday preceding an FOMC meeting and end the Thursday following the meeting. ‡Initially released in September 2014. An addendum was adopted in March 2015 and augmented in June 2017. For a history of revisions, see https://www.federalreserve.gov/monetarypolicy/timeline-policy-normalization-principles-and-plans.htm.

A modern comparison describing the evolution of Fed communication was noted in 2003 by then Fed Governor Janet Yellen when she said that the FOMC "had journeyed from 'never explain' to a point where sometimes the explanation is the policy."[4] Some have termed this policy "open-mouth operations."[5] Although views may differ between policymakers and across central banks, the fundamental principles of central bank communication are founded on the dual notions that increased transparency enhances the effectiveness of policy and the accountability of policymakers in a democratic society.[6] In this article, we focus on Fed communication, though the principles and practices are similar among many of the world's central banks.

When analyzing central bank communication, the following questions come to mind: First, who should do the talking; second, what should the central bank talk about; and, third, who should the central bank talk to? There is a vast economic literature that attempts to answer these questions. One notable early effort was a cross-country study by Blinder et al. (2001), who surveyed communication methods and tactics, among other things. A subsequent article by Blinder et al. (2008) argued that there was large variation in strategies but no consensus on the best-practice approach to communicating monetary policy to the public. Woodford (2001) was an early proponent of using communication to influence market expectations. This view influenced several subsequent Fed officials, most notably former Fed Chairman Ben Bernanke.[7] Finally, in the aftermath of the Financial Crisis of 2008, several event studies were published that analyzed the FOMC's unconventional policy actions on prices of financial market instruments, macroeconomic outcomes, and the expectations about future monetary policy actions.[8]

In sum, the academic literature offers more support for the modern view of central bank communication: More is generally better. Table 1 lists the primary methods that the Fed uses to communicate its policies, procedures, and policy expectations to the public.[9] These methods include the policy statement released at the end of each of the eight regularly scheduled FOMC meetings, the minutes released three weeks after each meeting, and the Chair's quarterly press conference, along with speeches, testimonies, and media interviews by Fed governors and Reserve Bank presidents.
Some of these innovations are long standing, such as the FOMC minutes, while others are more recent, such as the Chair's press conferences.[10] Given the prominence of FOMC policy statements as a communication instrument, the following discussion will first briefly focus on their history and role.

### Policy Statements: Length and Readability

The Fed's principal medium of communication is the policy statement released after each FOMC meeting. The policy statement has evolved over time. From 1967 to 1992, the FOMC issued a "Record of Policy Actions" (ROPA), which was initially released with a 90-day lag.[11] Beginning under Chairman Alan Greenspan, the FOMC began to issue policy statements immediately after the February 4, 1994, meeting. The first policy statement was rather short, at 99 words, and made no mention of the intended federal funds target rate. Instead, the inaugural statement indicated that the Committee decided to "increase slightly the degree of pressure on reserve positions" in financial markets. In taking this action, the FOMC noted that they expected an "associated small increase in short-term money market interest rates."[12]

Following the release of the inaugural statement, the FOMC released a post-meeting statement four additional times in 1994. Three post-meeting statements were released in 1995, including the statement released after the July 6, 1995, meeting, which was the first instance that the FOMC specifically mentioned the federal funds rate. The FOMC continued to issue post-meeting statements over the next few years, but only at meetings where a policy change occurred. However, beginning with the May 18, 1999, meeting, statements were released after each FOMC meeting.[13] The public focus on the policy statement was such that the financial press developed a "briefcase barometer."[14]

The post-meeting FOMC statements have evolved over time. Prior to the Financial Crisis, the post-meeting policy statement mostly focused on the state of the economy and the Committee's rationale for raising or lowering the policy rate or reasons why the policy rate was not changed. In general, less was said about the future path of interest rate changes. The policy statement evolved to take on a larger role in communicating the stance of monetary policy during the Financial Crisis after the federal funds rate reached the zero lower bound (ZLB) on December 16, 2008.[15]

Figure 1 shows that the word count of the policy statements began to increase steadily in 2007 during the early stages of the Financial Crisis. The word count continued to increase during the adoption of quantitative easing (QE) policies that both increased the size of the balance sheet and changed its composition. Prior to the ZLB period, the number of words in each statement averaged 223. During the ZLB period, the count was more than twice as much, averaging 580 words. After the nominal federal funds target rate reached the ZLB in December 2008, the Fed provided the largest amount of monetary accommodation through balance sheet adjustments and other unconventional policies.[16]

### Figure 1
**FOMC Statement Word Count**
(Line chart of the number of words in each statement, 1994-2017, with the QE1, QE2, MEP, and QE3 episodes marked.)
NOTE: Shaded area indicates the period of the FOMC's unconventional monetary policy with interest rates at the effective ZLB.
MEP, Maturity Extension Program. Under the MEP, the Fed sold or redeemed shorter-term Treasury securities and used the proceeds to buy longer-term Treasury securities, thereby extending the average maturity of the securities in the Fed's portfolio. Updated through 2017.
SOURCE: Board of Governors of the Federal Reserve System.

But as the U.S. economy transitioned from recession to a slower-than-average recovery, the Fed's policy approach also changed. The new approach focused instead on influencing the public's expectations of the future direction and level of the federal funds target rate. This approach, in its current form, is referred to as forward guidance.[17] For example, following the August 9, 2011, meeting, the policy statement stated the following:

The Committee currently anticipates that economic conditions—including low rates of resource utilization and a subdued outlook for inflation over the medium run—are likely to warrant exceptionally low levels for the federal funds rate at least through mid-2013.

In this case, the FOMC's intent was to signal to the public that its policy rate would remain low for a long time in order to spur the economy's recovery. This signal was meant to be taken as a public commitment, what Campbell et al. (2012) termed "Odyssean" policy. Using language from Greek mythology, Odyssean policy is meant to convey a public commitment not to change policy for a certain period—in this case, for more than two years. Instead, the public appeared to view this statement as a forecast, what Campbell et al. (2012) termed "Delphic" policy. In effect, the Delphic statement strongly suggested that, in the FOMC's view, the economic weakness would persist for more than two years. However, at the June 2011 meeting two months earlier, the Summary of Economic Projections (SEP) indicated that real gross domestic product (GDP) would increase by 3.5 percent in 2012 and by 3.9 percent in 2013 (each measure is the midpoint of the central tendency).[18] Thus, by August, the Committee appeared to have concluded that it, like most private sector forecasters, had been much too optimistic about the pace of real GDP growth during the early stages of the expansion. Indeed, by the January 2012 meeting, forecasts for real GDP growth in 2012 and 2013 had been marked down to 1.7 percent and 2.5 percent, respectively.

### Figure 2
**FOMC Statement Complexity**
(Line chart of the Flesch-Kincaid reading grade level of each statement, 1994-2017, with the QE1, QE2, MEP, and QE3 episodes marked.)
NOTE: Shaded area indicates the period of the FOMC's unconventional monetary policy with interest rates at the effective ZLB. MEP, Maturity Extension Program. Under the MEP, the Fed sold or redeemed shorter-term Treasury securities and used the proceeds to buy longer-term Treasury securities, thereby extending the average maturity of the securities in the Fed's portfolio. Updated through December 2017.
SOURCE: Board of Governors of the Federal Reserve System (FOMC statements) and Educational Testing Service (word count).

To accomplish the Fed's goals and objectives in a slow-growth economy, the post-meeting statement changed in two dimensions. The first change, as noted above, was that the length increased.
The statements included more discussion of the economic situation and its implication for the near-term direction of policy (changes in the federal funds target rate).[19] Second, the statements incorporated more complex economic terms and analysis. This is shown in Figure 2, which uses text evaluation software to measure the Flesch-Kincaid reading grade level of the policy statement. A higher grade level is assumed to reflect increased complexity of the statement. Prior to the ZLB period, the median grade level was 13.5, indicating comprehension accessible to someone reading at a college undergraduate level. But by late 2013, when the FOMC was in the midst of increasing the size of its balance sheet through asset purchases, the grade level rose to 20, which is commensurate with a graduate school reading level. For the entire ZLB period, the grade level rose to 16 (median), but then fell to 15 (median) during the post-ZLB period.[20] Researchers find that the readability of central bank policy statements and remarks is an important factor in how they are received by financial markets. Not surprisingly, clearer statements lead to lower volatility.[21]
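The word-count and readability metrics behind Figures 1 and 2 are straightforward to reproduce. Below is a minimal sketch in Python, not the authors' code: it applies the published Flesch-Kincaid grade formula, 0.39(words/sentences) + 11.8(syllables/words) − 15.59, with a crude vowel-group syllable heuristic; the sample text is an illustrative fragment, not an official FOMC statement.

```python
import re

def count_syllables(word: str) -> int:
    """Crude heuristic: count groups of consecutive vowels."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_kincaid_grade(text: str) -> float:
    """Flesch-Kincaid grade level:
    0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words) - 15.59)

# Illustrative fragment only, not an actual FOMC release.
statement = ("The Committee decided to maintain the target range for the "
             "federal funds rate. Inflation has moved closer to the "
             "Committee's longer-run objective.")

print("Word count:", len(re.findall(r"[A-Za-z']+", statement)))
print("Flesch-Kincaid grade:", round(flesch_kincaid_grade(statement), 1))
```

A production-quality version would use a dictionary-based syllable counter, since the vowel-group heuristic miscounts words such as "rate," but the qualitative ranking of statement complexity is usually similar.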
This section has highlighted how the FOMC changed the length and composition of the policy statement during the period of unconventional monetary policy. But the policy statement is only one form of central bank communication. Speeches and other public remarks are another form of communication that policymakers have deployed to increase the public's knowledge of the prevailing monetary policy regime. The next two sections will delve into monetary policy communication strategies by Fed officials, both old and new.

### Public Remarks by Fed Officials

Fed officials have long used other forms of public communication besides policy statements.[22] Public remarks can take many forms, including formal speeches, Congressional testimonies, interviews with the financial media, or published articles and commentaries. Sometimes, Fed officials do not comment on monetary policy issues that may be discussed at recent or upcoming FOMC meetings. In those instances, policymakers may instead choose to focus on other issues, such as local economic conditions, economic education, community development, or banking and financial market regulation.

The ZLB period witnessed an unprecedented rate of spoken and written communication with the public by Fed governors and Reserve Bank presidents. Figure 3 shows the annual number of public remarks by the Fed Chair, non-Chair governors, and Reserve Bank presidents since 1998.[23] From 1998 to 2004, the total number of public remarks by Reserve Bank presidents remained roughly constant at about 150 per year. A slightly different pattern occurred with governors and the Fed Chair. Total remarks over this period steadily fell, but then rebounded, so that the numbers of public remarks in 2004 were close to the 1998 totals. Beginning in 2005, the total number of public remarks by Reserve Bank presidents began to increase, reaching a peak in 2013 of a little more than 220 public remarks. Interestingly, though, the FOMC Chair and governors delivered public remarks slightly less frequently over the ZLB period.

### Figure 3
**Number of Public Remarks by Type of Fed Official**
(Line chart of total remarks per year, 1998-2017, for Bank Presidents, Non-Chair Governors, and the FOMC Chair; the start of the ZLB period (December 2007) is marked.)
NOTE: Through 2017.
SOURCE: Board of Governors of the Federal Reserve System, the 12 Federal Reserve Banks, Bloomberg, and authors' calculations.

Some of the reduced frequency of public remarks by members of the Board of Governors (excluding the Chair) reflects the fact that the Board has rarely operated with a full complement of governors (seven). From 1998 to 2017, there have been only four years when there were seven governors present at the last formal meeting of the year. Indeed, at the end of 2017, there were only four governors at the December meeting. At the March 2018 meeting, the number of governors had dwindled to three.

Speeches have become important communication events. Chairman Greenspan's new economy speech in 1995 and his "irrational exuberance" speech in 1996 were among his more notable speeches. Chairman Ben Bernanke also gave notable speeches during his tenure. Two that stand out are his "Deflation: Making Sure 'It' Doesn't Happen Here" speech in 2002 and his global saving glut speech in 2005.

Days with multiple Fed communication events have become more numerous over time—particularly since the Financial Crisis. Figure 4 shows that the increase in multiple Fed communication events on the same day stems from an increase in more than one Reserve Bank president speaking on the same day. For example, in 2017, there were 60 days when more than one Reserve Bank president spoke. In 2004, it was about half as much. By contrast, in 2017 there were only three days when more than one Fed governor spoke publicly on the same day. This is down sharply from 2003, when there were 19 days when multiple Fed governors spoke on the same day.[24]

### Figure 4
**Number of Times More Than One Bank President or Governor Spoke on the Same Day, 1998-2017**
(Bar chart of annual frequency for Bank Presidents and Governors.)
SOURCE: Board of Governors of the Federal Reserve System, Bloomberg, and authors' calculations.

In separate analysis, we looked at the annual number of public remarks by Reserve Bank presidents from January 1998 to December 2017. We separated the sample into roughly two 10-year periods: January 1998 to August 2008 (pre-Financial Crisis) and September 2008 to December 2017 (post-Financial Crisis). The number of public remarks by Reserve Bank presidents increased in all but three Fed Districts (Chicago, New York, and Richmond). The average increase in volume across these nine Districts was 46 percent. We did not examine whether the nature of the remarks by Reserve Bank presidents has changed over time. We did, however, analyze the number of speeches and public remarks given by presidents of the Fed Bank of St. Louis since January 1929. We have documented this in the boxed insert.

### Other Forms of Fed Communication

In the past several years, chiefly under the Bernanke regime, the FOMC has adopted several new forms of communication to further increase transparency.
As noted earlier, the Chair's quarterly press conference, beginning under Chairman Bernanke's term in January 2012, is one key innovation. Current Chairman Jerome Powell expanded on this innovation, announcing that press conferences will be held after every FOMC meeting beginning in January 2019. Other innovations include the FOMC's "Statement of Longer-Run Goals and Monetary Policy Strategy," "Policy Normalization Principles and Plans," and "Summary of Economic Projections" (SEP). These are also listed in Table 1. The first two are meant to provide clarity on the Fed's dual mandate and balance sheet, respectively, while the SEP conveys projections for four key macroeconomic variables. In addition, the SEP conveys each FOMC participant's assessment of appropriate monetary policy, as indicated by their federal funds rate projections over short-, medium-, and longer-term horizons.

### Grading Fed Communication

The Hutchins Center on Fiscal and Monetary Policy at Brookings conducted a survey of academics and private sector Fed watchers to assess the effectiveness of different forms of Fed communication.[25] Survey participants viewed the FOMC policy statement, speeches by the FOMC Chair, and quarterly press conferences as the most useful forms of Fed communication. On net, academics generally found these forms of communication more useful than did the private sector economists and Fed watchers.

One of the key communication innovations during the Bernanke tenure was the public release of individual FOMC participants' expectations of the future level of the federal funds rate. Once a quarter, with the release of the SEP, each FOMC participant—anonymously—indicates their preference for the level of the federal funds rate at the end of the current year, at the end of the next two to three years, and over the "longer run." These projections are often termed the FOMC "dot plots." According to the survey, both academics and those in the private sector found the dot plots of limited use as an instrument of Fed communication (more "useless" than "useful"). One-third of the respondents found the dot plots "useful or extremely useful," 29 percent found them "somewhat useful," and 38 percent found them "useless or not very useful."

The limited usefulness of the dot plots probably reflects many factors. First, each participant's projection is conditioned on the highly restrictive assumption of "appropriate monetary policy." Each participant's appropriate monetary policy stance is conditioned on their view of the outlook for real GDP growth, inflation, and the unemployment rate over the medium term. Moreover, the range of participants' views may not dovetail with the policy path outlined in the FOMC statement, which can further complicate the communicated outlook and diminish the tool's effectiveness. The regular presence of dissents suggests that appropriate policy can differ sharply across the Committee.

Second, the participants may have other vastly different assumptions that influence their outlook, such as the equilibrium real interest rate, the future path of crude oil prices, the foreign exchange value of the dollar, or their outlook for foreign economic growth.
For these reasons and more, FOMC participants persistently over-projected the federal funds target rate path during the early years of the current expansion (see the earlier discussion of the 2011-12 SEP projections). These persistent one-sided forecast errors may have impaired the credibility of the dot plots to the extent that the projections were important inputs in establishing expectations about future monetary policy.

Finally, the Brookings study revealed that survey participants believe that Reserve Bank presidents' speeches are slightly less useful than the dot plots, but still more useful than Fed reports to Congress, such as the semi-annual Monetary Policy Report.[26] This finding is perhaps striking given that the number of public remarks by Reserve Bank presidents has been trending up over time, especially during the ZLB period, while the number of public remarks by the Chair and non-Chair governors has been trending down.

## EMPIRICAL ANALYSIS

The final section of the article assesses how financial market participants respond to various forms of Fed communication. Admittedly, this is a difficult empirical exercise for many reasons. First, public remarks by senior Fed officials are often context- and perspective-dependent. Each individual brings their own perspective, model of the economy, and view of the monetary policy transmission mechanism. These views naturally inform their assessments of appropriate monetary policy going forward, which are then conveyed in public remarks. For their part, financial market participants may become familiar with a given policymaker's view or assume a given outcome for a particular FOMC meeting. If so, markets may react only to views that are sufficiently different from expectations. Past research has demonstrated that monetary policy surprises can have significant effects on high-frequency asset prices.[27] We acknowledge the importance of monetary policy surprises, but use a different approach to assess the significance of Fed communication events.

Second, when attempting to gauge the significance of public remarks, markets do not usually assign equal weights to all FOMC participants. Certainly, markets carefully parse remarks by the Chair, who is typically viewed as the public voice of the FOMC and the one who sets the policy agenda. Moreover, while the Chair's views often convey the consensus view of the Committee, the Chair nonetheless also has a policy preference. Although the Chair's preference invariably prevails, dissents still occur periodically. Indeed, Reserve Bank presidents sometimes use their public remarks, or dissents, with the intention of signaling future policy preferences or advocating for alternative frameworks.[28] Still, markets may discount the views of the presidents, on average, because they believe their views unnecessarily distort market signals or future policy intentions. For example, Lustenberger and Rossi (2017) claim that remarks by Reserve Bank presidents worsen the accuracy of private sector forecasts.

With these caveats in mind, we adopt a two-pronged empirical exercise. The first exercise uses daily data to examine whether Fed communication events are associated with significant movements in key financial market variables. Admittedly, this approach has some drawbacks. First, daily financial market data tend to be more volatile compared with monthly or quarterly data.
Second, this volatility arises, in part, because financial markets trade on many types of information, such as macroeconomic data or global financial or geopolitical developments. Thus, while Fed communication comprises one set of information the market uses to price assets, there are potentially many other sources of information that the market uses that we can't readily account for. Our intent is to assess market reactions to Fed communication events and not to model changes in asset price movements at a high frequency.

The second empirical exercise uses intraday data at 5-minute frequencies. Using intraday data allows us to more closely match the timing of Fed communication events with the responses in financial markets. This is the approach adopted by most of the aforementioned event studies. Our intent is to determine if the empirical results using the daily data are consistent with those from the intraday data. Before presenting the results, we provide a detailed description of our data sources and approach.

### Data Sources and Approach

We study the effects of seven types of Fed communication events: FOMC meeting statements;[29] FOMC minutes; Fed Chair press conferences; public remarks by the Fed Chair, non-Chair Fed governors, and Reserve Bank presidents; and unconventional monetary policy announcements.[30] Five of the seven categories are included in the Brookings study. It is important to note that there is an overlap between FOMC meetings and six of the seven unconventional policy announcements we include.[31] Initially, our data set included public remarks made after market hours and on weekends. Consistent with some of the literature, we initially moved an after-hours communication event to the following trading day to gauge the market's reaction to the remark. However, this approach ended up producing large reactions that were probably not tied to the public remark itself. For example, many key data releases are often issued before the market opens.[32] In this case, it is difficult to determine whether the market is responding to the public remarks by a Fed official or to economic data releases that may be a surprise.[33]

### Empirical Analysis: Daily Data

We create a series of dummy variables for the Fed communication events. Because the Brookings study found that survey participants viewed the Fed Chair press conferences as a useful form of communication, we identify regularly scheduled FOMC meetings with and without an associated press conference. In recent years, FOMC press conferences have occurred after the March, June, September, and December meetings. Since the liftoff from the ZLB at the December 2015 meeting, increases in the FOMC's federal funds target rate have occurred at meetings with an associated press conference by the Fed Chair. Our sample period is January 6, 1998, to December 29, 2017.
There are nine types of communication events:

- Non-press conference FOMC meeting statements
- Press conference FOMC meeting statements
- Releases of FOMC minutes
- Remarks by the FOMC Chair
- Remarks by all other Fed governors
- Remarks by Reserve Bank presidents
- Days when there are multiple Fed communication events (e.g., speeches)
- Unconventional policy actions (e.g., large-scale asset purchases)
- Key macroeconomic data releases (e.g., industrial production)

We evaluate the market reaction for three financial instruments: the absolute value of the daily change in the yield on 2-year Treasury notes, the yield on 10-year Treasury notes, and the Chicago Board Options Exchange equity market volatility index (VIX). Changes in 2-year Treasury yields are widely viewed as being sensitive to expected changes in FOMC policy. The 10-year Treasury yield is the most liquid, long-term, risk-free interest rate in the financial markets. It is also sensitive to changes in inflation expectations and longer-term expectations about short-term interest rates. Finally, the VIX, which is often termed the market's "fear gauge," is sometimes viewed as signaling changes in economic uncertainty. This exercise can be represented by the following equation:

$$\Delta Y_{i,t} = \alpha + \beta_1 \Delta Y_{i,t-1} + \beta_2 NPC_{i,t} + \beta_3 PC_{i,t} + \beta_4 MIN_{i,t} + \beta_5 CHAIR_{i,t} + \beta_6 GOV_{i,t} + \beta_7 PRES_{i,t} + \beta_8 MULT_{i,t} + \beta_9 UNCONV_{i,t} + \beta_{10} MACRO_{i,t},$$

where $\Delta Y_{i,t}$ represents the absolute value of the daily change in financial variable i (either the 2-year Treasury yield, 10-year Treasury yield, or VIX) on day t. The independent variables include a constant, a one-day lag of the dependent variable, and a series of dummy variables (specified earlier in this section) that take the value of 1 if that event occurs on day t and are zero if the event does not occur on day t.

We analyze daily data with three ordinary least-squares regressions. We use the absolute value of the daily changes because some communication events will cause yields to increase or decrease, while others will generate no market response. Using absolute values is a more effective way to gauge the effects of communication events on financial market activity.[34] We also include another dummy variable (MACRO) on days when key economic statistics are released. The motivation for this is that the market trades on information contained in these reports. Our economic statistic dummy variable takes the value of 1 when the following monthly economic reports are released (and is zero on all other days): the consumer price index, monthly employment situation, industrial production, retail sales, the Institute for Supply Management Report on Manufacturing, and the three GDP releases (advance, second, and third estimates).
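To make the specification concrete, the following is a minimal sketch of one of the three daily regressions (here, the 2-year yield; the 10-year yield or the VIX would be swapped in for the other two). It is not the authors' code: the file names and columns are hypothetical, and it assumes each event type has already been collected as a list of dates.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical inputs: a daily 2-year yield file plus one date list per event type.
yields = pd.read_csv("dgs2.csv", parse_dates=["date"])  # columns: date, yield
event_files = {"npc": "non_pc_meetings.csv", "pc": "pc_meetings.csv",
               "minutes": "minutes.csv", "chair": "chair_remarks.csv",
               "gov": "governor_remarks.csv", "pres": "president_remarks.csv",
               "mult": "multiple_speakers.csv", "unconv": "unconventional.csv",
               "macro": "macro_releases.csv"}

df = yields.sort_values("date").copy()
df["abs_dy"] = df["yield"].diff().abs()    # absolute value of the one-day change
df["abs_dy_lag"] = df["abs_dy"].shift(1)   # one-day lag of the dependent variable

# 0/1 dummies: 1 on days the event occurs, 0 otherwise.
for name, path in event_files.items():
    dates = pd.read_csv(path, parse_dates=["date"])["date"]
    df[name] = df["date"].isin(dates).astype(int)

model = smf.ols("abs_dy ~ abs_dy_lag + npc + pc + minutes + chair + gov"
                " + pres + mult + unconv + macro", data=df.dropna()).fit()
print(model.summary())   # coefficients and p-values, as reported in Table 2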
Table 2 shows the results of our analysis using daily data. Daily data allow us to make a few noteworthy observations. First, for the change in the 2-year Treasury yield, markets react significantly (at the 1 or 5 percent level) to Fed Chair and Fed governor communication events and also to FOMC statements at non-press conference meetings. Table 2 further indicates that changes in 2-year Treasury securities do not react significantly on days when one Reserve Bank president speaks, but they do react significantly on days when there are multiple Fed speakers. (Recall from Figure 4 that the number of days with multiple Fed speakers has increased since the Financial Crisis.) Finally, 2-year yields also react significantly to macroeconomic data releases. Unconventional policy actions are marginally significant (at the 8 percent level). With the exception of days with multiple Fed speakers, the signs of the coefficients on the significant variables are positive.

### Table 2
**Federal Reserve Communication Events and Financial Market Responses Using Daily Data**

| Independent variables | 2-year Treasury | 10-year Treasury | VIX |
|---|---|---|---|
| Constant | 0.023 (0.000)** | 0.035 (0.000)** | 0.641 (0.000)** |
| Lagged dependent variable | 0.248 (0.000)** | 0.132 (0.000)** | 0.321 (0.000)** |
| Non-press conference FOMC meetings | 0.013 (0.001)** | 0.006 (0.187) | 0.059 (0.588) |
| Press conference FOMC meetings | 0.004 (0.594) | 0.001 (0.880) | 0.092 (0.626) |
| FOMC minutes | 0.004 (0.154) | 0.004 (0.309) | –0.078 (0.315) |
| FOMC Chair remarks | 0.006 (0.001)** | 0.005 (0.012)* | 0.018 (0.787) |
| Fed governor remarks | 0.004 (0.003)** | 0.001 (0.347) | 0.020 (0.645) |
| Fed president remarks | 0.000 (0.775) | 0.000 (0.877) | 0.093 (0.111) |
| Multiple Fed speakers | –0.004 (0.032)* | –0.004 (0.018)* | –0.068 (0.265) |
| Unconventional policy actions | 0.036 (0.080) | 0.044 (0.021)* | 0.626 (0.176) |
| Macroeconomic data releases | 0.009 (0.000)** | 0.009 (0.000)** | 0.077 (0.031)* |
| Adjusted R-squared | 0.089 | 0.040 | 0.109 |
| Durbin-Watson statistic | 2.107 | 2.043 | 2.264 |

NOTE: Columns are the dependent variables. p-values listed in parentheses. The sample period is January 6, 1998, to December 29, 2017. * and ** indicate significance at the 5 percent and 1 percent levels, respectively. Dependent variables are expressed as the absolute value of their one-day changes.

The second and third sets of regressions in Table 2 show results for the change in the 10-year Treasury yield and in the VIX. Traders of longer-term Treasury securities react broadly similarly to Fed communication events and data releases as traders of 2-year Treasury securities. For instance, 10-year yields react significantly to the Fed Chair's remarks, on days when there are multiple Fed speakers, and to macroeconomic data releases; the coefficients generally have the same signs and magnitudes as those from the regression using 2-year yields. However, there are some differences between the 2-year and 10-year responses. For example, the change in the 10-year yield is significantly associated with unconventional policy actions. Moreover, 10-year yields do not react significantly to remarks by Fed governors or to non-press conference FOMC statements.

Column 3 presents the results for the change in the VIX. Equity market volatility does not react significantly to Fed communication events. The closest variable of significance (p = 0.11) is remarks by Reserve Bank presidents. Equity market volatility does, however, react significantly to macroeconomic data releases. Finally, in all three regressions, the constant and the lagged dependent variable are significant at the 1 percent level.

Figure 5 provides some visual evidence for the behavior of equity market volatility around FOMC meetings: From January 1994 to December 2017, the VIX begins to rise about a week before an FOMC meeting. The VIX then drops relatively sharply (nearly 3 percent) on the day the FOMC statement is released. This finding suggests that equity markets appear to be increasingly uncertain about the meeting outcome, or its effects on financial markets, in the run-up to FOMC meetings.[35] Likewise, we see a noticeable reduction in market volatility after the policy announcement (statement), perhaps indicating a decline in uncertainty and a clearer understanding of the Fed's reaction function. Finally, other than the lagged dependent variable, the dummy variable that accounts for the release of key economic reports is the only other independent variable that is statistically significant.

### Figure 5
**Relative Changes in the VIX Near FOMC Announcement Days**
(Line chart; vertical axis: VIX indexed to 100 on the FOMC announcement day; horizontal axis: trading days from -10 to +10 around the announcement.)
NOTE: Sample includes all regularly scheduled FOMC meetings between January 1994 and December 2017.
SOURCE: Haver Analytics and authors' calculations.
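The Figure 5 profile can be illustrated with a short sketch like the following; it is not the authors' code, and the input files are hypothetical. Each meeting's VIX path is rescaled to 100 on the announcement day, and the paths are then averaged across meetings.

```python
import pandas as pd

vix = pd.read_csv("vix.csv", parse_dates=["date"]).set_index("date")["vix"]
fomc_days = pd.read_csv("fomc_dates.csv", parse_dates=["date"])["date"]

paths = []
trading_days = vix.index
for day in fomc_days:
    if day not in trading_days:
        continue
    pos = trading_days.get_loc(day)
    if pos < 10 or pos + 10 >= len(trading_days):
        continue                                   # need a full +/-10-day window
    window = vix.iloc[pos - 10: pos + 11]
    paths.append(100 * window.values / vix.iloc[pos])  # index: 100 on FOMC day

avg_path = pd.DataFrame(paths, columns=range(-10, 11)).mean()
print(avg_path)  # average relative VIX from 10 days before to 10 days after
```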
We now turn to the second approach of our empirical exercise, namely, examining the effects of communication events on financial market outcomes using intraday data.

### Empirical Analysis: Intraday Data

We use intraday data to estimate the effects of Fed communication on key financial market variables. Many researchers have used intraday data to gauge market reactions to monetary policy surprises or to the Fed's announcements of unconventional policies after the Financial Crisis. These event studies, as they are often called, are intended to measure the financial market's response to news at intervals measured in minutes. Our analysis of the market's response to Fed communication events generally follows the form and practice of the event study literature.

Event studies can be criticized for many reasons. First, the studies gauge only the initial announcement responses rather than the responses across time. Second, the results can be sensitive to the choice of window size—that is, responses evaluated over a 1-minute window versus a 5- or 10-minute window. Third, responses could be affected by non-announcement effects, such as from economic data releases or geopolitical events. In view of these concerns, we tested several different window sizes for robustness and used minute-by-minute asset price data for S&P 500 stock prices and 10-year Treasury futures prices.[36] For FOMC meeting statements, FOMC minutes, and unconventional policy announcements (non-speaker events), a window of plus or minus 15 minutes is used. For FOMC press conferences and other public remarks (speaker events), a window of 15 minutes before to 60 minutes after the event is used. We do not find that the interpretation of the results meaningfully changes when the event window is adjusted.[37]
For non-speaker events, where the event window is +/–15 minutes, the mean absolute change can be represented as MACNonj -Speaker = N[1] i∑N=1 ⎛⎝[⎜] YYii,,jjtt+−1515 −1⎞⎠[⎟] [*] [100,] where Yi,t[j] [ represents, for each non-speaker event category ][j][, the asset price associated with ] observation i at time t, and N represents the total number of observations for each nonspeaker event category j over the sample. Likewise, for speaker events, where the event window is –15/+60 minutes, the mean absolute change can be represented as MACSpeakerj = N[1] i∑N=1 ⎛⎝[⎜] YYii,,jjtt+−1560 −1⎞⎠[⎟] [*] [100,] using the same notation as before. The cumulative change is calculated similarly, but we are now summing (instead of averaging) over our sample, and we are not taking the absolute value beforehand. We represent this as CCNonj -Speaker = ∑iN=1 ⎡⎢⎢⎣⎛⎝[⎜] YYii,,jjtt+−1515 −1⎞⎠[⎟] [*] [100] ⎤ ⎥ ⎥⎦ and CCSpeakerj = ∑iN=1 ⎢⎢⎣⎡⎛⎝[⎜] YYii,,jjtt+−1560 −1⎞⎠[⎟] [*] [100] ⎤ ⎥ ⎥⎦ for non-speaker and speaker events, respectively, again using the same notation as before. The results are shown in Figure 6A, Figure 6B, and Figure 7. The grouping on the left side of Figure 6A shows the mean absolute changes in the S&P 500 index in response to Fed communication events not associated with an individual Fed official (non-speaker events), while the grouping on the right side of Figure 6A shows those is response to events with public remarks by a Fed official (speaker events).[39] On the left side, we find that stock prices react most strongly to unconventional policy actions—indeed, twice as strong as the next-largest event (FOMC meeting statements). This finding appears consistent with the event study litera­ ture cited earlier. On the right side, stock prices react the most to the Chairs’ press conferences and their remarks. In contrast, stock price changes in response to Fed communication events by Reserve Bank presidents and Governors are similar in magnitude. Figure 6B shows the same calculation for 10-year Treasury bond futures prices. The results in Figure 6B are broadly similar to those in Figure 6A. In particular, responses to unconven­ tional policies are substantially larger than to other forms of Fed communication, such as FOMC meeting statements. As with stock prices, bond markets appear to react more strongly **84** **S** **d Q** **t** **2019** **F d** **l R** **B** **k f St L** **i REVIEW** ----- **Kliesen, Levine, Waller** ### Figure 6A **Mean Absolute Changes in S&P 500 Index** Percent 0.70 0.7 0.6 0.5 0.4 0.35 0.35 0.33 0.3 0.28 0.26 0.25 0.21 0.2 0.16 0.1 0.0 NOTE: Underlying data are expressed as a percent change in the S&P 500 index over an event window of –15/+15 minutes (non-speaker events) or of –15/+60 minutes (speaker events). The absolute values of these percent changes are then averaged, for each event category, over the full sample. Non-event days are days with no Fed communication event. Non-speaker events and speaker events have different non-event day controls because they are associated with different window sizes. 
The results are shown in Figure 6A, Figure 6B, and Figure 7. The grouping on the left side of Figure 6A shows the mean absolute changes in the S&P 500 index in response to Fed communication events not associated with an individual Fed official (non-speaker events), while the grouping on the right side of Figure 6A shows those in response to events with public remarks by a Fed official (speaker events).[39] On the left side, we find that stock prices react most strongly to unconventional policy actions—indeed, twice as strong as the next-largest event (FOMC meeting statements). This finding appears consistent with the event study literature cited earlier. On the right side, stock prices react the most to the Chairs' press conferences and their remarks. In contrast, stock price changes in response to Fed communication events by Reserve Bank presidents and governors are similar in magnitude.

### Figure 6A
**Mean Absolute Changes in S&P 500 Index**
(Bar chart, in percent, by event category; non-speaker events on the left, speaker events on the right.)
NOTE: Underlying data are expressed as a percent change in the S&P 500 index over an event window of –15/+15 minutes (non-speaker events) or of –15/+60 minutes (speaker events). The absolute values of these percent changes are then averaged, for each event category, over the full sample. Non-event days are days with no Fed communication event. Non-speaker events and speaker events have different non-event day controls because they are associated with different window sizes.

Figure 6B shows the same calculation for 10-year Treasury bond futures prices. The results in Figure 6B are broadly similar to those in Figure 6A. In particular, responses to unconventional policies are substantially larger than to other forms of Fed communication, such as FOMC meeting statements. As with stock prices, bond markets appear to react more strongly to meeting statements than to the release of FOMC minutes.

### Figure 6B
**Mean Absolute Changes in 10-Year Treasury Futures Prices**
(Bar chart, in percent, by event category; non-speaker events on the left, speaker events on the right.)
NOTE: Underlying data are expressed as a percent change in 10-year Treasury futures price over an event window of –15/+15 minutes (non-speaker events) or of –15/+60 minutes (speaker events). The absolute values of these percent changes are then averaged, for each event category, over the full sample. Non-event days are days with no Fed communication event. Non-speaker events and speaker events have different non-event day controls because they are associated with different window sizes.

The right side of Figure 6B shows that the bond market's responses to the Chair's press conferences and the Chair's remarks are appreciably larger than to non-Chair Fed governors and Reserve Bank presidents.

Figure 7 plots the cumulative changes for FOMC meeting statements, minutes, press conferences, and unconventional monetary policy announcements. We exclude other events for illustrative purposes, as they exhibit very high cumulative change values. Similar to the findings in Figures 6A and 6B, unconventional policies are associated with large stock and bond market responses during our sample. The cumulative change in stock prices associated with FOMC press conferences is also relatively large and positive. However, for FOMC meeting statements and the release of FOMC minutes, the cumulative response of stock prices is negative, with the response of the latter more than double the former. The response of bond futures prices to FOMC meeting statements is of the same magnitude as the minutes, but, again, far smaller than to unconventional policies.

### Figure 7
**Cumulative Changes for Fed Communication Events**
(Bar chart, in percentage points, for the S&P 500 and 10-year Treasury futures across four event categories: Meetings, Minutes, Press Conferences, and Unconventional.)
NOTE: Underlying data are expressed as a percent change in the index (S&P 500) or price (10-year Treasury futures) over the event window. These percent changes are then summed, by category, over the full sample. For FOMC meetings, minutes, and unconventional policy measures, the window is –/+15 minutes. For press conferences, the window is –15/+60 minutes. For illustrative purposes, other public remarks were removed from the figure because of very high cumulative change values.
For Chair press conferences, the near-zero cumulative change is not a function of the bond futures market ignoring this information; rather, it is the result of large, positive price reactions negating large, negative price reactions over the sample.

In summary, the empirical analysis presented in this article suggests that stock and bond markets respond to a variety of Fed communication events, especially FOMC meeting statements, FOMC press conferences, and remarks by the Fed Chair.

## CONCLUSION

Clear and concise communication of monetary policy helps the Fed achieve its congressionally mandated goals of price stability, maximum employment, and stable long-term interest rates. It does so by helping to reduce uncertainty about the future direction of policy. This helps to reduce distortions in market pricing, thereby improving the efficient allocation of resources by firms, households, and governments. This article has examined the various dimensions of Fed communication with the public and financial markets. This includes documenting how the Fed's communication with the public has evolved over time. Using both daily and intraday data, our empirical analysis documents how Fed communication affects key financial market variables. We find that Fed communication is associated with changes in prices of financial market instruments such as Treasury securities and equity prices. However, this effect varies by type of communication, by type of instrument, and by who is doing the speaking. Perhaps not surprisingly, we find that the largest financial market reactions tend to be associated with communication by Fed Chairs rather than by other Fed governors and Reserve Bank presidents and with FOMC meeting statements rather than FOMC minutes.

## NOTES

1 The occasion was a hearing of the Committee on Finance and Industry. According to Ahamed (2009), this was a select committee to investigate the British banking system in the aftermath of the 1929 collapse in stock prices and the poor performance of the British economy. See Ahamed (2009, pp. 371-72).

2 Ahamed (2009, p. 371).

3 For example, see Cochrane (2017), Cogan and Shultz (2017), and Derby (2017).

4 From a 2003 speech by Governor Yellen, as quoted in Holmes (2013). Holmes argues that central bankers, both in the United States and elsewhere, have increasingly (even before the Financial Crisis) moved away from traditional instruments, such as interest rates or exchange rates, toward "communicative experiments" designed to influence public sentiments and expectations.

5 For an early discussion of this phenomenon applied to the Reserve Bank of New Zealand and the FOMC, see Guthrie and Wright (2000) and Thornton (2004), respectively.

6 See Blinder et al. (2001).

7 See Bernanke, Reinhart, and Sack (2004). A synthesis of Bernanke's views was presented in a 2013 speech, "Communication and Monetary Policy."

8 See, for example, Neely (2015) or Bauer and Rudebusch (2014).

9 We define the public as anyone who uses expectations about future monetary policy actions as an input into their decisionmaking process.

10 Current Chairman Jerome Powell expanded on this innovation, announcing that press conferences will be held after every FOMC meeting beginning in January 2019.

11 For example, the ROPA for the January 15, 1970, meeting was released on April 15, 1970, a three-month lag.
The FOMC ceased publication of the ROPA after the December 22, 1992, meeting. Beginning in 1993, the ROPA was effectively folded into the FOMC minutes and released with a much shorter lag. For more historical detail, see https://www.federalreserve.gov/monetarypolicy/fomc_historical.htm.

12 This statement, and subsequent policy statements, can be found on the Board of Governors of the Federal Reserve System website: https://www.federalreserve.gov/monetarypolicy/fomc_historical_year.htm.

13 Wynne (2013) provides a short history of the FOMC's communication practices.

14 See Gavin and Mandal (2000).

15 The ZLB is the period when the target range for the intended federal funds rate was 0 percent to 0.25 percent. The ZLB period ended at the December 2015 FOMC meeting.

16 The monetary easing commenced in August 2007, when the Board of Governors voted to reduce the discount rate by 50 basis points. See https://www.stlouisfed.org/financial-crisis/full-timeline.

17 Wynne (2013) documented that the Fed used forward-looking language to shape expectations before the Financial Crisis. For example, in 2003, the FOMC noted that "policy accommodation can be maintained for a considerable period" in its post-meeting statement. Most economists and policymakers, though, would probably agree that the use was most pronounced during the ZLB era. The FOMC's forward guidance policy was influenced importantly by Woodford (2001) and Eggertsson and Woodford (2003).

18 The midpoint of the central tendency excludes the three highest and three lowest projections for each variable in each year.

19 Moreover, following the November 3, 2010, meeting, the policy statements crafted under the leadership of Chairman Bernanke began to emphasize the economy's current performance and expected outcome relative to the Fed's "statutory mandate" of price stability and maximum employment. This was a departure from the Greenspan era, when the statement rarely, if ever, mentioned the Fed's statutory mandate. The November 2010 statement was also noteworthy because it announced the second round of the large-scale asset purchase program (QE2).

20 The average Flesch-Kincaid scores during this period were very close to the reported medians.

21 See also Jansen (2011). Others have found similar findings for other major central bank communications. See Coenen et al. (2017), Haldane (2017), and Ehrmann and Talmi (2017).

22 Meltzer (2009) documents a 1962 FOMC meeting where communication with the public was discussed. Then-Chairman Martin favored increased communication with the public as a way to counter academic critics of Fed policy who he believed were mistaken in their analysis. However, Martin opposed regular (quarterly) policy reviews because there were instances where the FOMC would not wish to explain its decision. See discussion on p. 337 of Meltzer (2009).

23 The source of this repository is Bloomberg. More detail on this source, and its limitations, is provided in the empirical analysis section.

24 As noted above, the declining number of Fed governors speaking on the same day reflects to some extent the dwindling number of years when there was a full complement of governors (seven) serving on the FOMC.

25 See Olson and Wessel (2016).
26 See https://www.federalreserve.gov/monetarypolicy/mpr_default.htm.

27 See Fawley and Neely (2014).

28 See Bullard (2016), Evans (2017), and Kashkari (2017).

29 Conference calls and unscheduled FOMC meetings were excluded from the analysis.

30 For simplicity, we only focus on announcements directly related to a large-scale asset purchase program. These include the following: QE1 announcement and expansion, QE2 announcement, Maturity Extension Program announcement and expansion, and QE3 announcement and expansion.

31 The initial QE1 announcement, which was made on November 25, 2008, did not coincide with an FOMC meeting.

32 For example, the release of nonfarm payroll employment, CPI inflation, and GDP (advance, second, and third estimates) all occur before or at the market open.

33 As previously mentioned, our database for public remarks comes from Bloomberg; it begins in 1998. For consistency, we start all Fed communication event categories at this date, where applicable. Only public remarks made during market hours are included in the event study. If Bloomberg did not provide a time for an event, and this time could not be identified by other sources, the event was removed from the sample. We considered merging Bloomberg's repository with other databases, but since there was not a consistent time horizon or speaker overlap, we did not proceed with this approach. In particular, we examined databases from the Board of Governors of the Federal Reserve System and the Federal Reserve Bank of St. Louis's "FOMC Speak." The Board's database does not include public remarks made by Bank presidents, while "FOMC Speak" only begins in 2010. Merging either database with Bloomberg's would result in an upward estimate of governors' remarks (for the former scenario) or an upward estimate of remarks over the 2010-17 period (for the latter scenario), which would also affect Figure 3. Nevertheless, we acknowledge that the Bloomberg database is only a proxy for public remarks when presenting this analysis.

34 Our dependent variable is very similar to the approach used by Andersson (2010), who used intraday data to analyze financial market responses to Federal Reserve and European Central Bank monetary policy decisions.

35 Andersson (2010) studied intraday volatility in the bond futures market and in the equity market (S&P 500 index) around FOMC statement releases from April 1999 to May 2006. He found that intraday volatility rises sharply at the time of the release of FOMC meeting statements.

36 TickWrite is the source for the intraday data used in this analysis.

37 The one exception is for press conferences, where expanding our event window noticeably increased the market reaction relative to other events. One possible explanation is that press conferences are often more than an hour long. However, a closer inspection reveals that the press conferences driving this jump in magnitude are those on June 22, 2011, and June 19, 2013. The latter was noteworthy because this is when Chairman Bernanke discussed the so-called taper tantrum that had developed in the markets in response to his Congressional testimony a month earlier. In that testimony, he raised the possibility of the FOMC beginning to taper asset purchases later that year.
38 It is not our intent to examine whether stock and bond prices may react differently to Fed communication events. We refer the reader to numerous studies on the effects of these dynamics in the interactions with monetary policy actions. For example, see Campbell and Ammer (1993), Bernanke and Kuttner (2005), Andersen et al. (2007), and Connolly, Stivers, and Sun (2005).

39 The non-event day controls in Figures 6A and 6B are constructed to have similar response windows to the events they are compared with. For example, in Figure 6A, we use a rolling event window of 30 and 75 minutes to calculate a benchmark for non-speakers and speakers, respectively. Windows that either include an event or overlap days are removed before calculating the benchmark mean absolute changes. We follow the same procedure for Figure 6B. The authors thank Chris Neely for helpful comments in this regard.

## REFERENCES

Ahamed, Liaquat. Lords of Finance: The Bankers Who Broke the World. Penguin Books, 2009 (paperback version).

Andersen, Torben G.; Bollerslev, Tim; Diebold, Francis X. and Vega, Clara. "Real-Time Price Discovery in Global Stock, Bond and Foreign Exchange Markets." Journal of International Economics, November 2007, 73(2), pp. 251-77; https://doi.org/10.1016/j.jinteco.2007.02.004.

Andersson, Magnus. "Using Intraday Data to Gauge Financial Market Responses to Federal Reserve and ECB Monetary Policy Decisions." International Journal of Central Banking, June 2010, 6(2), pp. 117-46.

Bauer, Michael D. and Rudebusch, Glenn D. "The Signaling Channel for Federal Reserve Bond Purchases." International Journal of Central Banking, September 2014, 10(3), pp. 233-89.

Bernanke, Ben S. "Deflation: Making Sure 'It' Doesn't Happen Here." Remarks before the National Economists Club, Washington, D.C., November 21, 2002; https://www.federalreserve.gov/boarddocs/speeches/2002/20021121/.

Bernanke, Ben S. "The Global Saving Glut and the U.S. Current Account Deficit." Remarks at the Sandridge Lecture, Virginia Association of Economists, Richmond, Virginia, March 10, 2005; https://www.federalreserve.gov/boarddocs/speeches/2005/200503102/.

Bernanke, Ben S. and Kuttner, Kenneth N. "What Explains the Stock Market's Reaction to Federal Reserve Policy?" Journal of Finance, May 2005, 60(3), pp. 1221-257; https://doi.org/10.1111/j.1540-6261.2005.00760.x.

Bernanke, Ben S.; Reinhart, Vincent R. and Sack, Brian P. "Monetary Policy Alternatives at the Zero Bound: An Empirical Assessment." Brookings Papers on Economic Activity, January 2004; https://www.brookings.edu/wp-content/uploads/2004/06/2004b_bpea_bernanke.pdf.

Blinder, Alan; Ehrmann, Michael; Fratzscher, Marcel; De Haan, Jakob and Jansen, David-Jan. "Central Bank Communication and Monetary Policy: A Survey of Theory and Evidence." ECB Working Paper No. 898, European Central Bank, May 2008; https://www.ecb.europa.eu/pub/pdf/scpwps/ecbwp898.pdf.

Blinder, Alan; Goodhart, Charles; Hildebrand, Philipp; Lipton, David and Wyplosz, Charles.
"How Do Central Banks Talk?" Geneva Report on the World Economy, No. 3, International Center for Monetary and Banking Studies, 2001.

Bullard, James. "A New Characterization of the U.S. Macroeconomic and Monetary Policy Outlook." Remarks delivered at the Society of Business Economists Annual Dinner, London, United Kingdom, June 30, 2016; https://www.stlouisfed.org/from-the-president/speeches-and-presentations/2016/a-new-characterization.

Campbell, Jeffrey R.; Evans, Charles L.; Fisher, Jonas D.M. and Justiniano, Alejandro. "Macroeconomic Effects of Federal Reserve Forward Guidance." Brookings Papers on Economic Activity, Spring 2012, pp. 1-63; https://doi.org/10.1353/eca.2012.0004.

Campbell, John Y. and Ammer, John. "What Moves the Stock and Bond Markets? A Variance Decomposition for Long-Term Asset Returns." Journal of Finance, March 1993, 48(1), pp. 3-37; https://doi.org/10.1111/j.1540-6261.1993.tb04700.x.

Cochrane, John H. "Taylor for Fed." The Grumpy Economist Blog, October 20, 2017; https://johnhcochrane.blogspot.com/2017/10/taylor-for-fed.html.

Coenen, Günter; Ehrmann, Michael; Gaballo, Gaetano; Hoffmann, Peter; Nakov, Anton; Nardelli, Stefano; Persson, Eric and Strasser, Georg. "Communication of Monetary Policy in Unconventional Times." ECB Working Paper No. 2080, European Central Bank, June 2017; https://www.ecb.europa.eu/pub/pdf/scpwps/ecb.wp2080.en.pdf.

Cogan, John F. and Shultz, George P. "The Fed Chief America Needs." Commentary, Wall Street Journal, October 18, 2017.

Connolly, Robert; Stivers, Chris and Sun, Licheng. "Stock Market Uncertainty and the Stock-Bond Return Relation." Journal of Financial and Quantitative Analysis, March 2005, 40(1), pp. 161-94; https://doi.org/10.1017/S0022109000001782.

Derby, Michael S. "Fed's Fischer Laments 'The Cacophony' of Too Much Fed Speak." WSJ Pro, April 25, 2017.

Eggertsson, G. and Woodford, Michael. "The Zero Bound on Interest Rates and Optimal Monetary Policy." Brookings Papers on Economic Activity, 2003, 34(1), pp. 139-235; https://www.brookings.edu/wp-content/uploads/2003/01/2003a_bpea_eggertsson.pdf.

Ehrmann, Michael and Talmi, Jonathan. "Starting from a Blank Page? Semantic Similarity in Central Bank Communication and Market Volatility." ECB Working Paper No. 2023, European Central Bank, February 2017; https://www.ecb.europa.eu/pub/pdf/scpwps/ecbwp2023.en.pdf?561f7f26fea2285effb36dae431188d1.

Evans, Charles L. "Rationale for My Dissent at the December 2017 Meeting." Federal Reserve Bank of Chicago, December 15, 2017; https://www.chicagofed.org/publications/speeches/2017/12-15-2017-evans-rationale-for-dissent-december-fomc.

Fawley, Brett W. and Neely, Christopher J.
"The Evolution of Federal Reserve Policy and the Impact of Monetary Policy Surprises on Asset Prices." Federal Reserve Bank of St. Louis Review, First Quarter 2014, 96(1), pp. 73-109; https://files.stlouisfed.org/files/htdocs/publications/review/2014/q1/fawley.pdf.

Gavin, William T. and Mandal, Rachel J. "Inside the Briefcase: The Art of Predicting the Federal Reserve." Federal Reserve Bank of St. Louis Regional Economist, July 2000, pp. 4-9; https://www.stlouisfed.org/publications/regional-economist/july-2000/inside-the-briefcase-the-art-of-predicting-the-federal-reserve.

Greenspan, Alan. "Remarks before the Economic Club of Chicago." Chicago, Illinois, October 19, 1995; https://fraser.stlouisfed.org/scribd/?item_id=8549&filepath=/files/docs/historical/greenspan/Greenspan_19951019.pdf.

Greenspan, Alan. "The Challenge of Central Banking in a Democratic Society." Remarks at the Annual Dinner and Francis Boyer Lecture of the American Enterprise Institute for Public Policy Research, Washington, D.C., December 5, 1996; https://fraser.stlouisfed.org/scribd/?item_id=8581&filepath=/files/docs/historical/greenspan/Greenspan_19961205.pdf.

Guthrie, Graeme and Wright, Julian. "Open Mouth Operations." Journal of Monetary Economics, October 2000, 46(2), pp. 489-516; https://doi.org/10.1016/S0304-3932(00)00035-0.

Haldane, Andrew G. "A Little More Conversation, A Little Less Action." Remarks at the Macroeconomics and Monetary Policy Conference, Federal Reserve Bank of San Francisco, March 2017; https://www.bankofengland.co.uk/speech/2017/a-little-more-conversation-a-little-less-action.

Holmes, Douglas R. Economy of Words: Communicative Imperatives in Central Banks. University of Chicago Press, 2013; https://doi.org/10.7208/chicago/9780226087764.001.0001.

Jansen, David-Jan. "Does the Clarity of Central Bank Communication Affect Volatility in Financial Markets? Evidence from Humphrey-Hawkins Testimonies." Contemporary Economic Policy, 2011, 29, pp. 494-509; https://doi.org/10.1111/j.1465-7287.2010.00238.x.

Kashkari, Neel. "Why I Dissented a Third Time." Federal Reserve Bank of Minneapolis, December 18, 2017; https://www.minneapolisfed.org/news-and-events/messages/why-i-dissented-a-third-time.

Kocherlakota, Narayana.
"Maybe Central Banks Are Too Independent." Bloomberg View, August 7, 2017; https://www.bloomberg.com/view/articles/2017-08-07/maybe-central-banks-are-too-independent.

Lustenberger, Thomas and Rossi, Enzo. "Does Central Bank Transparency and Communication Affect Financial and Macroeconomic Forecasts?" Swiss National Bank Working Paper, December 2017; https://www.snb.ch/n/mmr/reference/working_paper_2017_12/source/working_paper_2017_12.n.pdf.

Meltzer, Allan H. A History of the Federal Reserve. Volume 2, Book 1: 1951-1969. University of Chicago Press, 2009.

Neely, Christopher J. "Unconventional Monetary Policy Had Large International Effects." Journal of Banking and Finance, March 2015, 52, pp. 101-11; https://doi.org/10.1016/j.jbankfin.2014.11.019.

Olson, Peter and Wessel, David. "Federal Reserve Communications: Survey Results." Hutchins Center on Fiscal and Monetary Policy at Brookings, November 2016; https://www.brookings.edu/wp-content/uploads/2016/11/fed-communications-survey-results.pdf.

Shin, Hyun Song. "Can Central Banks Talk Too Much?" Speech at the European Central Bank conference "Communications Challenges for Policy Effectiveness," Frankfurt, November 14, 2017.

Thornton, Daniel L. "The Fed and Short-Term Rates: Is It Open Market Operations, Open Mouth Operations or Interest Rate Smoothing?" Journal of Banking and Finance, 2004, 28(4), pp. 475-98; https://doi.org/10.1016/S0378-4266(02)00409-0.

Woodford, Michael. "Monetary Policy in the Information Economy." Presented at the Federal Reserve Bank of Kansas City symposium "Economic Policy for the Information Economy," August 30–September 1, 2001; https://www.kansascityfed.org/publicat/sympos/2001/papers/S02wood.pdf.

Wynne, Mark A. "A Short History of FOMC Communication." Federal Reserve Bank of Dallas Economic Letter, September 2013, 8(8); https://www.dallasfed.org/research/eclett/2013/el1308.cfm#n8.
{ "disclaimer": "Notice: The following paper fields have been elided by the publisher: {'references'}. Paper or abstract available at https://api.unpaywall.org/v2/10.20955/r.101.69-91?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.20955/r.101.69-91, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "BRONZE", "url": "https://files.stlouisfed.org/research/publications/review/2019/04/15/gauging-market-responses-to-monetary-policy-communication.pdf" }
2,019
[]
true
null
[]
19,709
en
[ { "category": "Law", "source": "s2-fos-model" }, { "category": "Computer Science", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/022f64f1cbb0b6dd859736f162cff1130501ec1b
[]
0.888863
Dispute Resolution Mechanism for Smart Contracts
022f64f1cbb0b6dd859736f162cff1130501ec1b
Masaryk University Journal of Law and Technology
[ { "authorId": "94054909", "name": "M. Kasatkina" } ]
{ "alternate_issns": [ "1802-5951" ], "alternate_names": [ "Masaryk Univ J Law Technol", "Masaryk University journal of law and technology", "Masaryk Univ j law technol" ], "alternate_urls": null, "id": "9e7f3023-e1fa-4121-9043-ef2a46353fbf", "issn": "1802-5943", "name": "Masaryk University Journal of Law and Technology", "type": "journal", "url": null }
Disputes regarding smart contracts are inevitable, and parties will need means for dealing with smart contract issues. This article highlights the need for dispute resolution mechanisms for smart contracts. The author provides analysis of the possible mechanisms to solve disputes arising from smart contracts, namely dispute resolution by traditional arbitration institutions and blockchain arbitration. Article acknowledges the benefits and challenges of both mechanisms. In the light of this, the author concludes about instituting a hybrid approach aimed at resolving disputes that will not stymie efficiencies of smart contracts.
_DOI 10.5817/MUJLT2022-2-2_

# DISPUTE RESOLUTION MECHANISM FOR SMART CONTRACTS

_by_

## MARINA KASATKINA[*]

_Disputes regarding smart contracts are inevitable, and parties will need means for dealing with smart contract issues. This article highlights the need for dispute resolution mechanisms for smart contracts. The author provides analysis of the possible mechanisms to solve disputes arising from smart contracts, namely dispute resolution by traditional arbitration institutions and blockchain arbitration. Article acknowledges the benefits and challenges of both mechanisms. In the light of this, the author concludes about instituting a hybrid approach aimed at resolving disputes that will not stymie efficiencies of smart contracts._

## KEY WORDS

_Smart Contracts, Blockchain Technology, Digital Disputes, Dispute Resolution Mechanism, Off-chain, On-chain._

## 1. INTRODUCTION

With the rapid development of new technologies occurring during the fourth industrial revolution, new types of disputes with significant specifics are gradually beginning to form. A special category among them belongs to disputes arising from smart contracts based on blockchain technology. Smart contracts are not really "contracts" in the true sense of the word, understood by most as negotiated terms in an arms-length transaction (or "meeting of the minds").[1] Enforcement is automatic, and the code is immutable. Therefore, smart contracts on the blockchain present a different set of challenges due to the inflexibility of code-based executions. It has to be noted that there is a close interaction between the real world and the software transaction world. Smart contracts inherently interfere with real-world people or institutions, which would result in legal issues due to the nature of our societies.[2] Because virtual experiences lead to specific actions in the real world, disputes are inevitable. Possible scenarios in which disputes may arise include a change of circumstances creating undesirable results for one party, or the absence of legal capacity to enter into the smart contract. Smart contracts may not be accurately coded to encompass the parties' original intentions. Moreover, coders may be sued for liability as a result of inaccurate smart contracts, or hackers may be prosecuted for interfering with or manipulating smart contracts.[3] In this respect, the potential need for a dispute resolution mechanism is inevitable. But nowadays, there exists no well-defined system of rules applicable to smart contracts. All these aspects show that there is room for identifying dispute resolution mechanisms for smart contracts. Generally speaking, there are two possible ways to resolve such disputes. According to the first approach, they are subject to review by traditional courts. The second approach assumes that arbitration institutions lend themselves to resolving disputes arising out of smart contracts.

* m.kasatkina@maastrichtuniversity.nl, Ph.D. candidate, Maastricht University, Netherlands

1 Schmitz, A. J. and Rule, C. (2019) Online Dispute Resolution for Smart Contracts. Journal of Dispute Resolution. University of Missouri School of Law Legal Studies Research Paper, 2019 (11). Available from: https://ssrn.com/abstract=3410450 [Accessed 12 April 2022].
They, in turn, are divided into two groups: a) "off-chain" arbitration, meaning dispute resolution by traditional arbitration institutions guided by the usual rules; b) "on-chain" arbitration, which envisages the creation of innovative applications based on blockchain technology and designed to resolve disputes arising in a digital decentralized environment (blockchain arbitration).[4]

My focus in this article is on the possible mechanisms to solve disputes arising from smart contracts. I have two aims: first, to outline a framework for dispute resolution by traditional arbitration institutions and blockchain arbitration, and second, based on the advantages and disadvantages of both mechanisms, to introduce a new hybrid approach to blockchain dispute resolution that combines both on- and off-blockchain components.

## 2. ANALYSIS OF THE POSSIBLE DISPUTE RESOLUTION MECHANISMS

The first question to ask when considering dispute resolution mechanisms is whether traditional courts could adjudicate disputes arising from smart contracts. In this respect, the following should be mentioned. Firstly, a smart contract is code, which is understandable to programmers, not lawyers and judges. Courts may be substantially challenged in interpreting smart contracts written in a coded language that is not understandable to a human observer. Furthermore, a court could not intervene to prevent or reverse an automatic contract, since the execution of smart contracts does not allow for modifications.[5] As James Grimmelmann notes,

_"…as long as the code does what it is supposed to and blockchain nodes achieve consensus, the intent and actions of one's counterpart do not matter; once triggered, the contract moves forward as defined at the time of its writing, regardless of either party's change in circumstances, misunderstandings, or otherwise."[6]_

In this regard, it is important to distinguish between two main models of smart contracts: external and internal.[7] External smart contracts are those that are governed by traditional, natural language contracts, with the smart, code-driven part of the contract merely automating the performance of terms as appropriate (e.g. payment, shipment, etc.). If there is any disagreement between the parties, the traditional, non-code version of the contract prevails. An external smart contract must be clear about which version of the contract prevails in order to successfully put the natural-language terms first and foremost. However, when such clarity is lacking in multi-language contracts, the UNIDROIT Principles stipulate that preference should be given to the contract that was originally drawn up.

2 Clément, M. (2019) Smart Contracts and the Courts. In: DiMatteo, L., Cannarsa, M. and Poncibò, C. (eds.) The Cambridge Handbook of Smart Contracts, Blockchain Technology and Digital Platforms. Cambridge University Press, pp. 271–287.

3 Zaslowsky, D. (2018) What to Expect When Litigating Smart Contract Disputes. [online] Available from: https://www.law360.com/articles/1028009/what-to-expect-when-litigatingsmart-contract-disputes [Accessed 02 May 2022].

4 International Chamber of Commerce (2018). ICC Dispute Resolution Bulletin. Issue 1. Available from: https://www.hoganlovells.com/~/media/hogan-lovells/pdf/2018/2018_12_13_icc_robots_arbitrator.pdf [Accessed 02 May 2022].

5 Rodrigues, U. (2018) Law and the Blockchain. Iowa Law Review, 104. Available from: https://ilr.law.uiowa.edu/print/volume-104-issue-2/law-and-the-blockchain/ [Accessed 02 May 2022].
Presumably, the same can apply to smart contracts; if the code was written first and the natural-language contract second, the code prevails. Inversely, one may say that code is not a "human" language of any kind and therefore should be interpreted as an appendix to the natural language contract, but not the main, binding part of any agreement. This approach may work in certain contexts; however, given that the code creates an outcome automatically, its interpretive value seems more relevant to the main body of most external smart contracts.[8]

In internal smart contracts, the code is supreme and any natural-language portion of the agreement is secondary. Therefore, while the natural-language portion of the contract may help courts understand the parties' intent, they will still have to interpret code to understand what consensus was reached. While this has been raised as a problem for courts wishing to exert power over smart contracts, the use of expert witnesses who can read and inform the court what the code "says" can quickly and easily remedy this issue (e.g. bringing a programmer to the stand to testify what the outcome of the code, as written, would be).[9] Thus, regardless of the specific type of smart contract, the inflexibility of code-based executions presents potential challenges.

Secondly, the anonymous nature of smart contracts and the fluidity of online identities make it difficult to determine the identities of the parties. The aforementioned anonymity is gained by the use of public-key encrypted identities and VPNs. Nodes that contain the blockchain and all of its information are located all over the world. Transactions in the blockchain are fully networked and present only in cyberspace. The nodes hold imperfect partial copies of the blockchain; no particular node holds the entire blockchain.[10] And the decentralized nature of smart contracts prevents courts from establishing jurisdiction and determining the choice of law based on traditional rules. For all of these reasons, it can be concluded that smart contract disputes should not be resolved by any national court. This leads to the demand for resolving smart contract disputes with cross-jurisdictional, extra-legal, and efficient remedies.

6 Grimmelmann, J. (2019) All Smart Contracts are Ambiguous. Journal of Law & Innovation, 2 (1). Available from: https://www.law.upenn.edu/live/files/9782-grimmelmann-all-smartcontracts-are-ambiguous [Accessed 02 May 2022].

7 Chamber of Digital Commerce. (2018) Smart Contracts: Is the Law Ready? Available from: https://www.theblockchaintest.com/uploads/resources/CDC%20-%20Smart%20Contract-Is%20the%20Law%20Ready%20-%202018%20-%20Sep.pdf [Accessed 02 May 2022].

8 Sillanpaa, T. (2020) Freedom to (Smart) Contract: The Myth of Code and Blockchain Governance Law. IALS Student Law Review, 7 (2). Available from: https://journals.sas.ac.uk/lawreview/issue/view/582 [Accessed 02 May 2022].

9 Ibid.

10 Kaal, W. A. and Calcaterra, C. (2018) Crypto Transaction Dispute Resolution. Business Lawyer. Available from: http://dx.doi.org/10.2139/ssrn.2992962 [Accessed 05 May 2022].
Therefore, international arbitration presents a well-suited alternative for smart contract disputes, as the two have many common features, such as functioning in a decentralized manner, flexibility, and confidentiality of proceedings. Nowadays, there exist two main approaches for dealing with smart contract issues, namely "on-chain" and "off-chain" arbitration.[11]

### 2.1 "OFF-CHAIN" ARBITRATION (DISPUTE RESOLUTION BY TRADITIONAL ARBITRATION INSTITUTIONS)

According to this approach, smart contracts can operate within the existing contract law framework, and disputes arising from them are subject to the arbitration institutions.[12] In this regard, a special arbitration center dealing with the resolution of digital disputes is being created, or a specialized board in the existing arbitration institutions is being formed. Generally speaking, an "off-chain" dispute resolution system could be characterized as a combination of traditional forms of the dispute resolution process, lacking a mechanism for the automatic enforcement of the award.

For instance, on 8 November 2018, the _Court of Arbitration of the Polish Blockchain and New Technology Chamber of Commerce_ (hereinafter "Court of Arbitration") was opened, whose purpose is to resolve disputes related to digital technologies.[13] It is Europe's first and the world's second (after Japan) arbitral tribunal specializing in blockchain. The Court of Arbitration applies the provisions of the Rules of the Court of Arbitration of the Polish Blockchain and New Technology Chamber of Commerce (hereinafter "Rules").[14] According to paragraph 3 of the Rules, the Court of Arbitration has jurisdiction over a dispute if the parties conclude a written agreement (arbitration agreement) in the following forms:

a) a clause included in letters exchanged between the parties or declarations made by the parties by means of remote communication that enable the content of such declarations to be recorded; or

b) a reference made in a written agreement to a document containing a provision on submitting disputes to resolution by the Court of Arbitration.

The dispute resolution process is carried out according to the standard arbitration procedure with certain exceptions.

11 Szczudlik, K. (2019) _"On-chain" and "off-chain" arbitration: Using smart contracts to amicably resolve disputes._ [online] Available from: https://newtech.law/en/on-chain-and-off-chain-arbitration-using-smart-contracts-to-amicably-resolve disputes [Accessed 02 May 2022].

12 De Filippi, P. and Wright A. (2018) Blockchain and the Law: The Rule of Code. Cambridge, MA: Harvard University Press, 300; Holden R. and Malani A. (2018) Can Blockchain Solve the Holdup Problem in Contracts? University of Chicago Coase-Sandor Institute for Law & Economics. Working Paper, 846.

13 _The Court of Arbitration of the Polish Blockchain and New Technology Chamber of Commerce._ [online] Available from: https://blockchaincourt.org/ [Accessed 02 May 2022].

14 The Court of Arbitration of the Polish Blockchain and New Technology Chamber of Commerce (2019). The Rules of the Court of Arbitration of the Polish Blockchain and New Technology Chamber of Commerce. Available from: https://blockchaincourt.org/wp-content/uploads/2019/07/The-Rules-of-the-Court-of-Arbitration-ENG.pdf [Accessed 04 May 2022].
Firstly, the number of arbitrators for resolution of the dispute could be 5 or 7, in contrast to "traditional" arbitration (paragraph 19 of the Rules), where the number of arbitrators is limited (1 or 3). Secondly, an award made by the Court of Arbitration shall be pronounced at the same hearing at which the trial is closed. When pronouncing the award, the presiding arbitrator shall state orally the main reasons upon which such award is based (paragraph 45 of the Rules). Traditional arbitration, by contrast, ends without the announcement of the decision, which is sent to the parties later.

This approach also includes the creation of specialized boards in the existing arbitration institutions. For example, in 2018, the _Arbitration center of the Russian Union of Industrialists and Entrepreneurs (RSPP)_ announced the formation of a new Panel on disputes in the digital economy. The panel was created to resolve disputes arising from transactions involving automatic execution, including using information systems based on a distributed registry (blockchain); disputes arising from the issuance, accounting and circulation of digital rights; and disputes over transactions made using and (or) in relation to digital financial assets.[15] Due to the absence of special rules, the proceedings on such disputes are conducted according to the Rules of the arbitration center at the RSPP 2018.[16]

The above-mentioned approach to the disputes arising from smart contracts is considered the mainstream view, although it is often criticized in the legal literature.[17] Instead, it is proposed to create special methods of dispute resolution based on the technology itself: blockchain arbitration.

15 _Arbitraznyu zentr pri RSPP._ [online] Available from: https://arbitration rspp.ru/about/structure/boards/digital-disputes/ [Accessed 04 May 2022].

16 Ibid.

17 Schmitz, A. and Rule C. (2019) Online Dispute Resolution for Smart Contracts. Journal of Dispute Resolution, 2, pp. 103–125.

### 2.2 "ON-CHAIN" ARBITRATION (BLOCKCHAIN ARBITRATION)

This group includes projects that provide for the creation of new mechanisms specifically designed to resolve disputes arising from smart contracts. "On-chain" arbitration contains solutions in which the equivalent of a traditional arbitration decision is automatically executed by a smart contract without the involvement of any third parties. For instance, this could be realized with the help of certain assets, which, upon the occurrence of a defined condition, are transferred from one party to the other.[18] This approach contemplates smart contracts as distinct legal tools, rather than digital alternatives to traditional legal contracts. From this perspective, blockchain technologies and smart contracts may create new legal systems, or a new Lex Cryptographia.[19] Several characteristics of blockchain-based technologies and smart contracts, such as their anonymity, automatic execution, and tamper-resistance, mean that

_"existing legal infrastructure cannot address legal challenges presented by crypto transaction disputes".[20]_

Instead, these disputes require a "distributed jurisdiction" created through a process of institutional innovations. Currently, there exist more than 20 projects that use blockchain to automate dispute resolution.
All these projects could be divided into two groups: a) special on-line arbitration (CodeLegit, Cryptonomica, Juris, Mattereum, SAMBA); b) crowdsourced dispute resolution (Aragon, BitCad, CrowdJury, Confideal, Jur, Kleros, Oath). In this article, I examine the most noteworthy projects, which have already been tested by end users.

18 Szczudlik, K. (2019) _"On-chain" and "off-chain" arbitration: Using smart contracts to amicably resolve disputes._ [online] Available from: https://newtech.law/en/on-chain-and-off-chain-arbitration-using-smart-contracts-to-amicably-resolve disputes [Accessed 02 May 2022].

19 De Filippi, P. and Wright A. (2018) Blockchain and the Law: The Rule of Code. Cambridge, MA: Harvard University Press, 300.

20 Kaal, W. A. and Calcaterra, C. (2018) Crypto Transaction Dispute Resolution. Business Lawyer. Available from: http://dx.doi.org/10.2139/ssrn.2992962 [Accessed 05 May 2022].

### 2.2.1 SPECIAL ON-LINE ARBITRATION

This group includes platforms that enable the creation of a special arbitration combining the advantages of international commercial arbitration and blockchain technology. They presume the automation of certain elements of the proceedings. However, the mechanism of their action is in many ways similar to international arbitration, as the rules of many such projects are based on the UNCITRAL Arbitration Rules. In this case, the decision made by the arbitrators is executed in the traditional way or is automatically executed with a smart contract.

For instance, the Juris project presents a blockchain-based development system operating on the basis of the Juris Protocol Mediation and Arbitration.[21] A prerequisite for considering a dispute is the existence of an arbitration agreement, integrated into a smart contract via a coded clause. In case of a dispute between the parties, a user initiates a protocol by filing a complaint (Formal Complaint). The system suspends further execution of the smart contract and notifies the other party about the dispute. After that, the following three procedures are possible:

1) Self Mediation – through which the parties get access to a number of tools specially designed for self-regulated dispute resolution with the help of Self-Enforced Library Functions (or Self layer). These tools enable the execution of basic operations that alter the outcome of a smart contract implementation (such as contract cancellation and asset transfer). If the dispute cannot be resolved this way, the parties can escalate to the second stage.

2) SNAP (Simple Neutral Arbitrator Poll) means the consideration of the dispute by independent arbitrators. Results of the voting are reported to the parties. Based on this information, the parties still may try to resolve the dispute by using the Self layer or by turning to the third tool.

3) PANEL (Juris Peremptory Agreement for Neutral Expert Litigation) is the analogue of traditional arbitration proceedings based on the UNCITRAL Arbitration Rules. The dispute is reviewed by three arbitrators selected on the basis of their reputation and compliance with the requirements specified by the parties while entering into the contract. After hearing the parties and considering evidence, the arbitrators within 30 days make a decision that is binding and subject to automatic execution by smart contract.

21 Kerpelman, A. J. (2018) Introducing the Juris Protocol: Human-Powered Dispute Resolution for Blockchain Smart Contracts. [online] Available from: https://medium.com/jurisproject/introducing-the-juris-protocol-human-powered-disputeresolution-for-blockchain-smart-contracts-bc574b50d8e1 [Accessed 05 May 2022].
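The three-stage escalation just described amounts to a simple state machine. The sketch below is a schematic illustration under our own naming assumptions (Stage, Dispute, escalate); it is not the actual Juris protocol code.

```python
# Schematic sketch of a Juris-style three-stage escalation; names are illustrative.
from enum import Enum, auto

class Stage(Enum):
    SELF_MEDIATION = auto()  # parties use self-enforced library functions (Self layer)
    SNAP = auto()            # non-binding poll of independent arbitrators
    PANEL = auto()           # binding three-arbitrator proceeding (UNCITRAL-style)
    RESOLVED = auto()

class Dispute:
    def __init__(self):
        # Filing a formal complaint suspends the smart contract and opens stage 1.
        self.stage = Stage.SELF_MEDIATION

    def escalate(self):
        """Move to the next stage if the current one failed to settle the dispute."""
        order = [Stage.SELF_MEDIATION, Stage.SNAP, Stage.PANEL]
        if self.stage in order[:-1]:
            self.stage = order[order.index(self.stage) + 1]

    def settle(self):
        self.stage = Stage.RESOLVED  # at any stage, agreement ends the dispute

d = Dispute()
d.escalate()   # self-mediation failed -> SNAP advisory poll
d.escalate()   # poll did not settle it -> binding PANEL
print(d.stage) # Stage.PANEL; the award would then auto-execute via smart contract
```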
Another project based on blockchain technology is Mattereum, which presents a layer of legal, technological, and commercial infrastructure that governs on-chain rights control and transfer for tangible, intangible, and digital assets. Mattereum supports a decentralized commercial law system, the Smart Property Register, that executes through automated smart contracts that ensure property rights, as well as dispute resolution and enforcement. This register facilitates "on-chain property transfer" through a smart contract that in effect becomes a "legal contract" without the need for legislative support.[22] A distinctive feature of this project is the "Ricardian Contracts" on which the contract protocol is based.[23] Ricardian Contracts are cryptographically verified documents signed with a digital signature and available for reading both in electronic and text form. The project involves the creation of a decentralized arbitration court, meeting the requirements of the New York Convention on Recognition and Enforcement of Foreign Arbitral Awards of 1958 (hereinafter referred to as the New York Convention). Therefore, awards of such a decentralized commercial arbitration court will be enforced by national courts in nearly all of the countries in the world.[24]

22 Allen, D., Lane, A. M. and Poblet, M. (2019) The Governance of Blockchain Dispute Resolution. Harvard Negotiation Law Review, 25, pp. 75–101.

23 Zagaynova, M. (2018) _Obzor ICO proekta Mattereum._ [online] Available from: https://ffc.media/ru/overviews/ico-mattereum-project-review/ [Accessed 21 June 2022].

24 Allen, D., Lane, A. M. and Poblet, M. (2019) The Governance of Blockchain Dispute Resolution. Harvard Negotiation Law Review, 25, pp. 75–101.
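The Ricardian idea sketched above (a single set of terms that is both human-readable and machine-verifiable) can be illustrated with a minimal example. The terms, parameters, and function names below are illustrative assumptions, not Mattereum's implementation.

```python
# Minimal sketch of Ricardian-style pairing: the natural-language terms are hashed,
# and the hash is bound to the executable parameters, so either form can later be
# verified against the other. All values here are illustrative.
import hashlib

prose_terms = "Seller delivers 10 units by 2023-01-31; buyer pays 1 ETH on delivery."
code_params = {"units": 10, "deadline": "2023-01-31", "price_eth": 1}

record = {
    "terms_hash": hashlib.sha256(prose_terms.encode()).hexdigest(),
    "params": code_params,
}

def verify(prose: str, rec: dict) -> bool:
    """True iff the prose presented in a dispute matches the hash the code executed under."""
    return hashlib.sha256(prose.encode()).hexdigest() == rec["terms_hash"]

print(verify(prose_terms, record))      # True: text and code are bound together
print(verify("altered terms", record))  # False: tampering with the prose is detectable
```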
A separate point must be made about OpenBazaar Dispute Resolution (notary). It is a distributed program that provides an on-line trading platform for any type of merchandise using cryptocurrencies.[25] It is a distributed network where the parties and transactions are anonymous.[26] A core element of the OpenBazaar dispute resolution mechanism concerns the possibility of appealing to a notary who becomes an arbitrator and determines the dispute based on the evidence presented. Notaries in the OpenBazaar system are randomly chosen to provide anonymity for keeping the system secure. An important feature of OpenBazaar's approach is connected with the ability of the parties to choose notary pools with expertise in certain fields of law. Besides, OpenBazaar has an appeal system that includes randomly selecting new notaries from the pool chosen by the parties earlier.

25 Sanchez, Dr W. _Dispute Resolution in OpenBazaar._ [online] Available from: http://docs.openbazaar.org/03.-OpenBazaar-Protocol/ [Accessed 21 June 2022].

26 Kaal, W. A. and Calcaterra, C. (2018) Crypto Transaction Dispute Resolution. Business Lawyer. Available from: http://dx.doi.org/10.2139/ssrn.2992962 [Accessed 05 May 2022].

### 2.2.2 CROWDSOURCED DISPUTE RESOLUTION

This group includes projects that provide for the establishment of fundamentally new, unique platforms based on blockchain technology and specifically designed to resolve disputes arising from smart contracts. Their essence is an attempt to create a quasi-judicial system, where the judges (members of the jury) are registered users of the relevant platform, elected through the method of generating random numbers and remaining anonymous to the parties. Each of the judges votes separately; after the voting is completed, the system counts the votes and determines the outcome of the dispute. Then the decision is automatically executed using a smart contract. Another important characteristic of such projects is the use of codes of non-state regulation in the dispute resolution process.[27]

It has to be noted that crowdsourced dispute resolution is not new. For example, more than twenty years ago iCourthouse pioneered the notion of online crowdsourcing in civil cases, and ten years ago eBay India's Community Court leveraged the best judgement of other eBay users to decide whether a contested eBay review should be deleted. The following examples of crowdsourced dispute resolution on the blockchain go even further with this model, however, by tokenizing the process. In other words, jurors vote with funds (generally cryptocurrency), which they lose if they are on the losing side. In contrast, jurors on the winning side generally gain some reward. This creates a market for accurate crowdsourced resolution outcomes.[28]

One example is Oath, a project based on the Ethereum platform. The model of OATH's dispute resolution is related to the idea of a jury trial. When entering into a smart contract, the parties can use the provided dispute resolution protocol (Smart Arbitration Plan). In the case of a dispute, the protocol is converted into a Smart Arbitration Case. After that, the parties set the parameters for resolving the dispute: the number of jurors (any odd number in the range from 11 to 101) and the percentage of votes required to make a decision (from 51 to 100 %). Jurors are selected randomly from the users of the blockchain platform. The decision is made solely on the basis of common sense, based on the study of the terms of the contract, witness statements and other evidence. The decision can be appealed within 5 days from the date of its issuance by repeating the procedure but with other jurors.[29]

Like Oath, Kleros promises inexpensive and transparent, online dispute resolution using crowdsourcing theory. The mechanism is similar to Oath, advocating for an opt-in court platform that uses "crowdsourced jurors". First, smart contracts have to designate Kleros as their arbitrator in cases of dispute, including the type of court (Kleros is developing an ecosystem of specialized courts) and the number of jurors to be involved (idem).

27 Zasemkova O. (2020) Methods of Resolving Disputes Arising from Smart Contracts. Lex Russica, 73 (4), p. 20.

28 Rule, C. and Nagarajan, C. (2011) Crowdsourcing Dispute Resolution Over Mobile Devices. In: Poblet, M. (ed.) Mobile Technologies for Conflict Management. Law, Governance and Technology Series, vol 2. Dordrecht: Springer, pp. 93–100.
When a dispute arises, Kleros randomly assigns the dispute to a jury of crowdsourced, self-selected experts, who analyze the evidence and vote for a verdict. Jurors are penalized for communicating with each other, and must "justify" their votes so that the parties can later understand their decisions. A smart contract then transfers the money to the winning party. Oracles are engaged to provide real-world data to assist dispute resolution.[30]

A similar platform is Jur.io, which advertises itself as a free service to users for creating and securing smart contracts and resolving contract disputes within 24 hours. Accordingly, Jur's key promise seems to be speed and security in smart contracting.[31] Its unique feature is the opportunity for users to create their own hub (a "specialized oracle") which operates on special rules for users in particular industries.[32] Additionally, the Jur platform provides tools for signing contracts, and creating and reselling contract templates.[33]

29 _OATH Protocol. Blockchain Alternative Dispute Resolution Protocol. Version 2.6.0._ Available from: https://oaths.io/files/OATH-Whitepaper-EN.pdf [Accessed 15 June 2022].

30 Allen, D., Lane, A. M. and Poblet, M. (2019) The Governance of Blockchain Dispute Resolution. Harvard Negotiation Law Review, 25, pp. 75–101.

31 _JUR.Io – platforma kotoray pomozet razreshit finansovye spory mezdy investorami i srartupami._ (2018) [online] Available from: https://invest4all.ru/obzory-i-otchyoty/obzory-kraudsejlovico/jur-io-platforma-kotoraya-pomozhet-razreshit-finansovye-spory-mezhdu-investorami-istartapami [Accessed 15 June 2022].

32 Ibid.

33 Ibid.

It is worth pointing out that the above-mentioned platforms have a dispute resolution mechanism with the following characteristics: (i) adjudicator expertise in dispute resolution and law; (ii) independence (neutral and anonymous adjudicators); (iii) impartiality (random selection of judges without vested interests); and (iv) transparency (all procedures are documented and rationalized).[34]
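The tokenized jury mechanics described in this subsection can be illustrated with a toy tally function. The sketch below follows the juror counts and voting thresholds quoted for Oath, but all names and the stake-redistribution rule are our own illustrative assumptions, not the code of any actual platform.

```python
# Toy sketch of tokenized jury voting (Oath/Kleros-style): jurors stake funds on a
# verdict; jurors on the losing side forfeit their stakes, which are split among
# jurors on the winning side. Purely illustrative.

def tally(votes: dict, stakes: dict, threshold: float = 0.51):
    """votes: juror -> 'claimant'/'respondent'; stakes: juror -> staked tokens."""
    n = len(votes)
    assert n % 2 == 1 and 11 <= n <= 101, "odd jury of 11-101 members"
    for_claimant = sum(1 for v in votes.values() if v == "claimant")
    winner = "claimant" if for_claimant / n >= threshold else "respondent"
    losers_pot = sum(s for j, s in stakes.items() if votes[j] != winner)
    winners = [j for j, v in votes.items() if v == winner]
    reward = losers_pot / len(winners)  # coherent jurors split the forfeited stakes
    payouts = {j: (reward if votes[j] == winner else -stakes[j]) for j in votes}
    return winner, payouts

votes = {f"juror{i}": ("claimant" if i < 7 else "respondent") for i in range(11)}
stakes = {j: 10.0 for j in votes}
print(tally(votes, stakes))  # claimant wins 7/11 >= 51%; dissenters forfeit stakes
```

This payoff structure is what creates the "market for accurate crowdsourced resolution outcomes" mentioned above: a juror maximizes expected reward by voting with the likely majority on the merits.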
## 3. SHORTCOMINGS OF THE TRADITIONAL ARBITRATION INSTITUTIONS AND BLOCKCHAIN ARBITRATION

There are several drawbacks associated with "off-chain" arbitration. Firstly, courts could only force the parties to execute a secondary transaction or otherwise pay remedies for a smart contract that created damages for one of the parties. Courts are not able to change the terms of the given smart contract that was executed according to its parameters and added to the blockchain, because they could not change the existing code. Because of these inherent limitations, courts are not able to render resolutions to disputes arising from blockchain-based smart contracts.

Secondly, it is worth mentioning that high price is another disadvantage of traditional arbitration institutions. In particular, Tang Z. S. states that the average online consumer contract value is USD 60, whereas an exemplary UK provider of ODR services charges between GBP 25 and GBP 850 for a resolution of consumer disputes. Therefore, even the lowest charge of GBP 25 will be disproportionately expensive compared with the average value of the consumer disputes.[35] Moreover, traditional arbitration institutions are characterized by a slow speed of dispute resolution. However, in the online environment, people would often like to get a quick decision. As to the capacity of traditional dispute resolution to handle numerous online disputes, it should be pointed out that when the number of disputes runs into the millions, human-powered dispute resolution cannot handle the scale.[36]

34 Allen, D., Lane, A. M. and Poblet, M. (2019) The Governance of Blockchain Dispute Resolution. Harvard Negotiation Law Review, 25, pp. 75–101.

35 Tang, Z. S. (2015) Electronic consumer contracts in the conflict of laws. 2nd ed. Oxford: Hart Publishing, p. 373.

36 Dimov, D. (2017) Crowdsourced Online Dispute Resolution. [online] Ph.D. Leiden University.

Therefore, traditional arbitration mechanisms could not be the only possible recourse for smart contract disputes.[37]

The first drawback of "on-chain" arbitration concerns the enforceability of awards. In other words, arbitral awards rendered through online arbitration may not be recognized and enforced under the New York Convention because, pursuant to Article 2 of the New York Convention, it applies only to agreements "in writing".[38] However, online arbitral agreements would appear to satisfy the writing requirements of the convention. The reason is that, under most national legislation, electronic writings are considered equivalent to traditional writings.[39] Nevertheless, it remains uncertain whether an award issued pursuant to an arbitration agreement contained in the code of a smart contract would be capable of being enforced.

The second drawback is the lack of trust in the procedures caused by non-face-to-face communication. People who do not trust each other may act tentatively and keep important information to themselves. As a result, disputants participating in ODR processes may not disclose all the relevant information to online arbitrators.[40] Moreover, criminals may exploit the information security vulnerabilities of the ODR platform in order to obtain unauthorized access to information related to the dispute and the disputants. That is why the ODR provider should use information security practices.[41]

The third drawback concerns the parties, who may not be familiar or comfortable with the relevant technology. Besides, it should be noted that the legal qualification of arbitrators may be crucial for parties who want to choose arbitrators with the special technical knowledge to adjudicate certain disputes.

37 Kaal, W. A. and Calcaterra, C. (2018) Crypto Transaction Dispute Resolution. Business Lawyer. Available from: http://dx.doi.org/10.2139/ssrn.2992962 [Accessed 05 May 2022].

38 _Convention on the Recognition and Enforcement of Foreign Arbitral Awards, 10 June 1958._ Available from: http://www.newyorkconvention.org/11165/web/files/original/1/5/15432.pdf [Accessed 23 June 2022].

39 Cortes, P. (2010) Online Dispute Resolution for Consumers in the European Union. Routledge Research in IT and E-commerce Law. London: Routledge, Taylor & Francis Group. Available from: https://www.econstor.eu/bitstream/10419/181972/1/391038.pdf [Accessed 23 June 2022].

40 Ibid.

41 Lodder, A. R. and Zeleznikow, J. (2005) Developing an Online Dispute Resolution Environment: Dialogue Tools and Negotiation Support Systems in a Three-Step Model. Harvard Negotiation Law Review, 10, pp. 287–337.
In addition, the described method of dispute resolution is obviously devoid of a standard of efficiency, since there is no possibility to limit in advance the range of checks used by arbitrators, who may not respect the accumulated experience in resolving similar cases. As a result, decentralized court decisions will become more and more resource-intensive over time, as the parties will try to anticipate all possible circumstances in the program code. In other words, the parties will have to discuss each dispute from the very beginning, without any knowledge of previous cases.

Besides, problems arise with the method of selecting the arbitrators as well as with the way they make their decisions. Arbitrators are selected randomly, but from a certain group of specialists in the blockchain area, which is currently not very big. For that reason, there is a risk that the arbitrators will not be independent of the parties.

To sum up, neither of these two alternative mechanisms alone can provide an adequate environment for resolving disputes arising from smart contracts. Therefore, in the next section, I introduce the design and implementation of a hybrid approach for digital disputes.

## 4. HYBRID APPROACH

In light of the shortcomings of the available dispute resolution mechanisms for the crypto economy, it is possible to institute a hybrid approach: the creation of an independent, decentralized platform that integrates both approaches to the smart contract dispute resolution problem. This framework recognizes internal mechanisms of the smart contract system that will regulate disputes depending on the precise nature of the case and the particular circumstances.

Parties should incorporate a mandatory dispute settlement clause directly in the smart contract code. Such a clause may include the following provisions:

a) automatic adoption of interim measures (for example, suspension of performance of obligations under the smart contract, or blocking of funds);
b) rules and deadlines for the constitution of the arbitration;
c) procedure and deadlines for dispute resolution;
d) procedure for the execution of arbitral awards, i.e., technical standards that allow smart contracts to be reversed;
e) an agreement between the parties to resolve disputes using on-chain resolution platforms; the lack of such an agreement should lead to the dispute being resolved with an on-chain system;
f) a clause regulating dispute resolution. For instance, by including an ICC Arbitration Clause in a contract, the parties agree that their dispute will be resolved by arbitration and that the arbitration proceedings will be governed by the procedural rules in the ICC Rules of Arbitration, given the finality and binding effect of an arbitral award for the parties.

Even if the dispute was resolved with "on chain" mechanisms, the interested party should still have the right to appeal to off-chain arbitration. In such cases, decisions reached by way of blockchain arbitration should not rise to the level of "off chain" arbitration. To be specific, off-chain arbitration should remain available in the following cases:

- disputes where one party is a consumer (taking into account the level of consumer protection existing in the EU and its Member States);
- complex disputes (i.e., where it is necessary to examine additional evidence, to order an expert examination or to hear witness testimony);
- disputes where the procedure may lead to the disclosure of commercial secrets;
- disputes where fundamental rights are at stake.

This last condition is due to the impossibility of predicting, at the moment of drafting the contract, what kinds of disputes may arise between the parties in the interpretation and performance of the contract. Therefore, it should remain possible for the parties to have the dispute considered through traditional arbitration.

Generally speaking, on-chain resolution platforms could be used for resolving minor disputes (with a small value at stake), for instance cross-border consumer disputes. Moreover, they could be used for technical disputes, such as gas or share price determination and construction schedule disputes. In other words, an "on chain" arbitration system could act as an expert to resolve factual issues, such as whether contract performance complied with technical specifications, to calculate the market value of shares or commodities, or to calculate damages. In these cases, the parties may agree that the "on chain" arbitration award will be binding.

The ability of the parties to resolve disputes in online forms is of high importance due to several benefits. Firstly, online procedures are fast, whereas off-chain arbitration is not able to cope with the huge number of online disputes. Secondly, the absence of on-chain resolution would negate key blockchain benefits and would undermine the evolution of the crypto economy. However, on-chain arbitration requires adaptation to the existing legal regulation, primarily to the requirement of the New York Convention that an arbitration agreement be in writing. Otherwise, smart contracts run the risk of not being enforced under the New York Convention, unless they have an equivalent traditional word-format contract signed by both parties. In this regard, it seems appropriate to have a hybrid version of smart contracts, whereby a text-based version of the same legal force exists in addition to the smart contract expressed in code.

All these considerations are compelling and favor a hybrid approach. Given the current legal framework, fully "on chain" arbitration will not become a reality in the nearest future. The prospects of a hybrid approach are much more realistic: it reflects the complex nature of blockchain technologies and the diversity of smart contracts used in a dynamically competitive environment. On the one hand, the possibility of using "on chain" arbitration will lead to speedy, less costly awards, to the benefit of parties in various specific sectors; thus, the essence of a smart contract is preserved in comparison with a traditional contract. On the other hand, "off chain" arbitration seems unavoidable in certain cases, given the legal realities of the modern world. A minimal sketch of how the routing logic of such a hybrid clause could be expressed is given below.
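The sketch below models, in Python, the escalation rules just described. It is an illustrative outline only, not part of the author's proposal or of any existing platform; the dispute attributes and the "minor dispute" threshold are assumptions chosen purely for the example.

```python
from dataclasses import dataclass

@dataclass
class Dispute:
    amount: float             # value at stake
    consumer_party: bool      # one party is a consumer
    complex_evidence: bool    # expert examination / witness testimony needed
    trade_secrets: bool       # procedure may disclose commercial secrets
    fundamental_rights: bool  # fundamental rights are at stake

# Illustrative threshold below which a dispute counts as "minor";
# the figure is an assumption, not taken from the article.
MINOR_DISPUTE_LIMIT = 1000.0

def route(dispute: Dispute) -> str:
    """Decide whether a dispute goes to on-chain or off-chain arbitration,
    following the hybrid criteria listed in the text above."""
    if (dispute.consumer_party or dispute.complex_evidence
            or dispute.trade_secrets or dispute.fundamental_rights):
        return "off-chain arbitration"
    if dispute.amount <= MINOR_DISPUTE_LIMIT:
        return "on-chain arbitration (binding by agreement)"
    # Default: start on-chain, but preserve the right of appeal off-chain.
    return "on-chain arbitration with off-chain appeal"

print(route(Dispute(500.0, False, False, False, False)))  # minor, on-chain
print(route(Dispute(500.0, True, False, False, False)))   # consumer, off-chain
```

A production clause would encode these criteria in the contract itself and would need a tamper-proof way of establishing facts such as consumer status before routing the dispute.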
## 5. CONCLUSION

All in all, building effective dispute resolution into smart contracts will be a crucial step in achieving a level of certainty in crypto transactions and in facilitating the broader evolution of the crypto economy. The different mechanisms described above for resolving smart contract disputes demonstrate various possibilities, opting for human-driven or crowdsourced resolution systems. The development and introduction of new technologies should be convenient for the participants, diminishing their risks and making it possible to protect their rights more quickly. Besides, the use of technology could be advantageous for the justice system, which could be relieved of the burden of deciding certain kinds of disputes.

The hybrid approach that I suggest in this article addresses problems that neither the "on chain" nor the "off chain" approach can address separately. I argue that, for the reasons given, hybrid solutions are more adequate within the framework of the Internet Age. The world is rapidly changing, and laws will have to adapt to this rising tide. As such, the growth of smart contracts will require adaptation by the legal profession and modification of approaches to dispute resolution. In doing so, though, contract law should operate according to its traditional canons and categories, through a modification and supplementation of existing rules and procedures.[42] These technologies should be seen as an improvement of existing contractual structures in terms of their effectiveness; they cannot change the essence of the dispute resolution relationship between the parties. Without a doubt, using a hybrid architecture can substantially improve the resolution of disputes arising from smart contracts while retaining existing traditional legal rules and principles. However, there is room for further specification of the individual conditions of "on chain" and "off chain" arbitration.

42 Pardolesi, R. and Davola, A. (2019) What Is Wrong in the Debate About Smart Contracts. _SSRN Electronic Journal_. Available from: https://www.researchgate.net/publication/331834837_What_Is_Wrong_in_the_Debate_About_Smart_Contracts [Accessed 23 June 2022].

## LIST OF REFERENCES

[1] Allen, D., Lane, A. M. and Poblet, M. (2019) The Governance of Blockchain Dispute Resolution. _Harvard Negotiation Law Review_, 25, pp. 75–101.
[2] _Arbitraznyu zentr pri RSPP_ [Arbitration Centre at the RSPP]. [online] Available from: https://arbitration-rspp.ru/about/structure/boards/digital-disputes/ [Accessed 04 May 2022].
[3] Chamber of Digital Commerce. (2018) Smart Contracts: Is the Law Ready? Available from: https://www.theblockchaintest.com/uploads/resources/CDC%20-%20Smart%20Contract%20Is%20the%20Law%20Ready%20-%202018%20-%20Sep.pdf [Accessed 02 May 2022].
[4] Clément, M. (2019) Smart Contracts and the Courts. In: DiMatteo, L., Cannarsa, M. and Poncibò, C. (eds.) _The Cambridge Handbook of Smart Contracts, Blockchain Technology and Digital Platforms_. Cambridge University Press, pp. 271–287.
[5] _Convention on the Recognition and Enforcement of Foreign Arbitral Awards_, 10 June 1958. Available from: http://www.newyorkconvention.org/11165/web/files/original/1/5/15432.pdf [Accessed 23 June 2022].
[6] Cortes, P. (2010) _Online Dispute Resolution for Consumers in the European Union_. Routledge Research in IT and E-commerce Law. London: Routledge, Taylor & Francis Group. Available from: https://www.econstor.eu/bitstream/10419/181972/1/391038.pdf [Accessed 23 June 2022].
[7] _The Court of Arbitration of the Polish Blockchain and New Technology Chamber of Commerce_. [online] Available from: https://blockchaincourt.org/ [Accessed 02 May 2022].
[8] The Court of Arbitration of the Polish Blockchain and New Technology Chamber of Commerce (2019). _The Rules of the Court of Arbitration of the Polish Blockchain and New Technology Chamber of Commerce_. Available from: https://blockchaincourt.org/wp-content/uploads/2019/07/The-Rules-of-the-Court-of-Arbitration-ENG.pdf [Accessed 04 May 2022].
[9] Dimov, D. (2017) Crowdsourced Online Dispute Resolution. [online] Ph.D. thesis, Leiden University.
[10] De Filippi, P. and Wright, A. (2018) _Blockchain and the Law: The Rule of Code_. Cambridge, MA: Harvard University Press, 300.
[11] Grimmelmann, J. (2019) All Smart Contracts are Ambiguous. _Journal of Law & Innovation_, 2 (1). Available from: https://www.law.upenn.edu/live/files/9782-grimmelmann-all-smart-contracts-are-ambiguous [Accessed 02 May 2022].
[12] International Chamber of Commerce (2018). ICC Dispute Resolution Bulletin. Issue 1. Available from: https://www.hoganlovells.com/~/media/hogan-lovells/pdf/2018/2018_12_13_icc_robots_arbitrator.pdf [Accessed 02 May 2022].
[13] _JUR.Io – platforma, kotoraya pomozhet razreshit finansovye spory mezhdu investorami i startapami_ [JUR.Io – a platform that will help resolve financial disputes between investors and startups]. (2018) [online] Available from: https://invest4all.ru/obzory-i-otchyoty/obzory-kraudsejlov-ico/jur-io-platforma-kotoraya-pomozhet-razreshit-finansovye-spory-mezhdu-investorami-i-startapami [Accessed 15 June 2022].
[14] Kaal, W. A. and Calcaterra, C. (2018) Crypto Transaction Dispute Resolution. _Business Lawyer_. Available from: http://dx.doi.org/10.2139/ssrn.2992962 [Accessed 05 May 2022].
[15] Kerpelman, A. J. (2018) _Introducing the Juris Protocol: Human-Powered Dispute Resolution for Blockchain Smart Contracts_. [online] Available from: https://medium.com/jurisproject/introducing-the-juris-protocol-human-powered-dispute-resolution-for-blockchain-smart-contracts-bc574b50d8e1 [Accessed 05 May 2022].
[16] Lodder, A. R. and Zeleznikow, J. (2005) Developing an Online Dispute Resolution Environment: Dialogue Tools and Negotiation Support Systems in a Three-Step Model. _Harvard Negotiation Law Review_, 10, pp. 287–337.
[17] OATH Protocol. Blockchain Alternative Dispute Resolution Protocol. Version 2.6.0. Available from: https://oaths.io/files/OATH-Whitepaper-EN.pdf [Accessed 15 June 2022].
[18] Pardolesi, R. and Davola, A. (2019) What Is Wrong in the Debate About Smart Contracts. _SSRN Electronic Journal_. Available from: https://www.researchgate.net/publication/331834837_What_Is_Wrong_in_the_Debate_About_Smart_Contracts [Accessed 23 June 2022].
[19] Rodrigues, U. (2018) Law and the Blockchain. _Iowa Law Review_, 104. Available from: https://ilr.law.uiowa.edu/print/volume-104-issue-2/law-and-the-blockchain/ [Accessed 02 May 2022].
[20] Rule, C. and Nagarajan, C. (2011) Crowdsourcing Dispute Resolution Over Mobile Devices. In: Poblet, M. (ed.) _Mobile Technologies for Conflict Management_. Law, Governance and Technology Series, vol 2. Dordrecht: Springer, pp. 93–100.
[21] Sanchez, Dr W. _Dispute Resolution in OpenBazaar_. [online] Available from: http://docs.openbazaar.org/03.-OpenBazaar-Protocol/ [Accessed 21 June 2022].
[22] Schmitz, A. J. and Rule, C. (2019) Online Dispute Resolution for Smart Contracts. _Journal of Dispute Resolution_. University of Missouri School of Law Legal Studies Research Paper, 2019 (11). Available from: https://ssrn.com/abstract=3410450 [Accessed 12 April 2022].
[23] Schmitz, A. and Rule, C. (2019) Online Dispute Resolution for Smart Contracts. _Journal of Dispute Resolution_, 2, pp. 103–125.
[24] Sillanpaa, T. (2020) Freedom to (Smart) Contract: The Myth of Code and Blockchain Governance Law. _IALS Student Law Review_, 7 (2). Available from: https://journals.sas.ac.uk/lawreview/issue/view/582 [Accessed 02 May 2022].
[25] Szczudlik, K. (2019) _"On-chain" and "off-chain" arbitration: Using smart contracts to amicably resolve disputes_. [online] Available from: https://newtech.law/en/on-chain-and-off-chain-arbitration-using-smart-contracts-to-amicably-resolve-disputes [Accessed 02 May 2022].
[26] Tang, Z. S. (2015) _Electronic Consumer Contracts in the Conflict of Laws_. 2nd ed. Oxford: Hart Publishing, p. 373.
[27] Zagaynova, M. (2018) _Obzor ICO proekta Mattereum_ [A review of the Mattereum ICO project]. [online] Available from: https://ffc.media/ru/overviews/ico-mattereum-project-review/ [Accessed 21 June 2022].
[28] Zasemkova, O. (2020) Methods of Resolving Disputes Arising from Smart Contracts. _Lex Russica_, 73 (4), p. 20.
[29] Zaslowsky, D. (2018) What to Expect When Litigating Smart Contract Disputes. [online] Available from: https://www.law360.com/articles/1028009/what-to-expect-when-litigating-smart-contract-disputes [Accessed 02 May 2022].
{ "disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.5817/mujlt2022-2-2?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.5817/mujlt2022-2-2, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "GOLD", "url": "https://journals.muni.cz/mujlt/article/download/18580/26353" }
2,022
[]
true
2022-09-30T00:00:00
[ { "paperId": "79538bf4a093fdc52fe6cea592045bcc23fc900a", "title": "Freedom to (Smart) Contract: The Myth of Code and Blockchain Governance Law" }, { "paperId": "9697cd944da32cbc5e8758fe56f589740e89bb34", "title": "Methods of Resolving Disputes Arising from Smart Contracts" }, { "paperId": "3379525cc9ce816fa06fedcc7edc74840fda48de", "title": "Smart Contracts and the Courts" }, { "paperId": "0731a6799b22a5a92b5c49d2227b0939156d1421", "title": "Blockchain and the Law: The Rule of Code" }, { "paperId": "4589fa535d8f3c6c086f7977f5d8ba28aa771e62", "title": "Online Dispute Resolution for Smart Contracts" }, { "paperId": "921fe5b1d9f66845c878289d942e1c4229619cc8", "title": "What Is Wrong in the Debate About Smart Contracts" }, { "paperId": "044dd0452e813aac098d1de11e899cb9ab071b53", "title": "The Governance of Blockchain Dispute Resolution" }, { "paperId": "f922f790930b5795f525af54f21c8932529fe8a9", "title": "All Smart Contracts Are Ambiguous" }, { "paperId": "a3095f085b07903fe4b9018f3cc6a2e9ef42c670", "title": "Law and the Blockchain" }, { "paperId": "2f40c7c8b8ea5573a7e616c7d7a23b4024f71927", "title": "Crowdsourced Online Dispute Resolution" }, { "paperId": "166996fc47f25e1a3918d28166a5ac907f43ebc5", "title": "Crypto Transaction Dispute Resolution" }, { "paperId": "14f8a7d1993977c485cc3e32fa690c99ac96606d", "title": "Online Dispute Resolution for Consumers in the European Union" }, { "paperId": "e11136635fff95aac5bf8c8ab7be0cd1b1ab2def", "title": "Electronic Consumer Contracts in the Conflict of Laws" }, { "paperId": "026fe761d9c2171d9c7dc5cafaf882f9ab19e014", "title": "Developing an Online Dispute Resolution Environment: Dialogue Tools and Negotiation Support Systems in a Three-Step Model" }, { "paperId": null, "title": "Arbitraznyu zentr pri RSPP" }, { "paperId": "99741b9fca17a05abc21aa7d14bd43e3107d2fdf", "title": "The Cambridge Handbook of Smart Contracts, Blockchain Technology and Digital Platforms" }, { "paperId": null, "title": "“On-chain” and “off-chain” arbitration: Using smart contracts to amicably resolve disputes" }, { "paperId": null, "title": "ICC Dispute Resolution Bulletin. Issue 1" }, { "paperId": null, "title": "Obzor ICO proekta Mattereum" }, { "paperId": null, "title": "Smart Contracts: Is the Law Ready? Available from" }, { "paperId": null, "title": "Introducing the Juris Protocol: Human-Powered Dispute Resolution for Blockchain Smart Contracts" }, { "paperId": null, "title": "What to Expect When Litigating Smart Contract Disputes" }, { "paperId": null, "title": "JUR.Io – platforma kotoray pomozet razreshit finansovye spory mezdy investorami i srartupami" }, { "paperId": "67076814f2b7335cea04f31b1f33815ab34c8117", "title": "Crowdsourcing Dispute Resolution over Mobile Devices" }, { "paperId": null, "title": "The Rules of the Court of Arbitration of the Polish Blockchain and New Technology Chamber of Commerce" }, { "paperId": null, "title": "Convention on the Recognition and Enforcement of Foreign Arbitral Awards, 10 June 1958" }, { "paperId": null, "title": "Blockchain Alternative Dispute Resolution Protocol. Version 2.6.0" } ]
12,282
en
[ { "category": "Medicine", "source": "external" }, { "category": "Biology", "source": "s2-fos-model" }, { "category": "Medicine", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/02306c29e74698aa4e2ac5e75c872514872ffbe5
[ "Medicine" ]
0.879291
Genome-wide analysis of genes encoding core components of the ubiquitin system during cerebral cortex development
02306c29e74698aa4e2ac5e75c872514872ffbe5
Molecular Brain
[ { "authorId": "4799055", "name": "A. Bouron" }, { "authorId": "3657994", "name": "M. Fauvarque" } ]
{ "alternate_issns": null, "alternate_names": [ "Mol Brain" ], "alternate_urls": [ "http://www.molecularbrain.com/" ], "id": "239c22f5-f478-44e5-877f-8fb0e2daedb4", "issn": "1756-6606", "name": "Molecular Brain", "type": "journal", "url": "https://molecularbrain.biomedcentral.com/" }
Ubiquitination involves three types of enzymes (E1, E2, and E3) that sequentially attach ubiquitin (Ub) to target proteins. This posttranslational modification controls key cellular processes, such as the degradation, endocytosis, subcellular localization and activity of proteins. Ubiquitination, which can be reversed by deubiquitinating enzymes (DUBs), plays important roles during brain development. Furthermore, deregulation of the Ub system is linked to the pathogenesis of various diseases, including neurodegenerative disorders. We used a publicly available RNA-seq database to perform an extensive genome-wide gene expression analysis of the core components of the ubiquitination machinery, covering Ub genes as well as E1, E2, E3 and DUB genes. The ubiquitination network was governed by only Uba1 and Ube2m , the predominant E1 and E2 genes, respectively; their expression was positively regulated during cortical formation. The principal genes encoding HECT (homologous to the E6-AP carboxyl terminus), RBR (RING-in-between-RING), and RING (really interesting new gene) E3 Ub ligases were also highly regulated. Pja1 , Dtx3 (RING ligases) and Stub1 (U-box RING) were the most highly expressed E3 Ub ligase genes and displayed distinct developmental expression patterns. Moreover, more than 80 DUB genes were expressed during corticogenesis, with two prominent genes, Uch-l1 and Usp22, showing highly upregulated expression. Several components of the Ub system overexpressed in cancers were also highly expressed in the cerebral cortex under conditions not related to tumour formation or progression. Altogether, this work provides an in-depth overview of transcriptomic changes during embryonic formation of the cerebral cortex. The data also offer new insight into the characterization of the Ub system and may contribute to a better understanding of its involvement in the pathogenesis of neurodevelopmental disorders.
## RESEARCH (Open Access)

# Genome-wide analysis of genes encoding core components of the ubiquitin system during cerebral cortex development

### Alexandre Bouron[1,2*] and Marie-Odile Fauvarque[1]

*Correspondence: alexandre.bouron@cea.fr. 2 Genetics and Chemogenomics Lab, Building C3, CEA, 17 rue des Martyrs, 38054 Grenoble Cedex 9, France. Full list of author information is available at the end of the article.

**Abstract**

Ubiquitination involves three types of enzymes (E1, E2, and E3) that sequentially attach ubiquitin (Ub) to target proteins. This posttranslational modification controls key cellular processes, such as the degradation, endocytosis, subcellular localization and activity of proteins. Ubiquitination, which can be reversed by deubiquitinating enzymes (DUBs), plays important roles during brain development. Furthermore, deregulation of the Ub system is linked to the pathogenesis of various diseases, including neurodegenerative disorders. We used a publicly available RNA-seq database to perform an extensive genome-wide gene expression analysis of the core components of the ubiquitination machinery, covering Ub genes as well as E1, E2, E3 and DUB genes. The ubiquitination network was governed by only _Uba1_ and _Ube2m_, the predominant E1 and E2 genes, respectively; their expression was positively regulated during cortical formation. The principal genes encoding HECT (homologous to the E6-AP carboxyl terminus), RBR (RING-in-between-RING), and RING (really interesting new gene) E3 Ub ligases were also highly regulated. _Pja1_, _Dtx3_ (RING ligases) and _Stub1_ (U-box RING) were the most highly expressed E3 Ub ligase genes and displayed distinct developmental expression patterns. Moreover, more than 80 DUB genes were expressed during corticogenesis, with two prominent genes, _Uch-l1_ and _Usp22_, showing highly upregulated expression. Several components of the Ub system overexpressed in cancers were also highly expressed in the cerebral cortex under conditions not related to tumour formation or progression. Altogether, this work provides an in-depth overview of transcriptomic changes during embryonic formation of the cerebral cortex. The data also offer new insight into the characterization of the Ub system and may contribute to a better understanding of its involvement in the pathogenesis of neurodevelopmental disorders.

**Keywords: Rodent, Brain, Cerebral cortex, Ubiquitin, Ubiquitination, Deubiquitinating enzymes**

**Introduction**

Ubiquitination is a multistep process during which ubiquitin (Ub), a versatile and highly conserved 76 amino-acid polypeptide, is covalently conjugated to target substrates. It is one of the most common posttranslational modifications of proteins [1] and requires the sequential action of three types of enzymes: Ub-activating (E1) enzymes, Ub-conjugating (E2) enzymes and Ub ligases (E3) [2]. Ubiquitination is counterbalanced by the action of deubiquitinating enzymes (or deubiquitinases, DUBs), which can reverse the conjugation of Ub to substrates. Mammalian DUBs are classified into seven categories: Ub-specific proteases (USPs), Ub carboxyl-terminal hydrolases (UCHs), otubain proteases (OTUs), Machado-Joseph disease protein domain proteases (MJDs or Josephins), JAB1/MPN/Mov34 metallopeptidases (JAMMs), the motif-interacting-with-Ub-containing novel DUB family (MINDY) and ZUP1 [3, 4]. Ubiquitination is often described as a quality control system devoted to protein homeostasis,
since it targets damaged or misfolded proteins for degradation via the Ub–proteasome system. However, ubiquitination has a wider physiological importance, because the conjugation of Ub can modify the activity of its targets, changing their subcellular localization or their involvement in the formation of multiple protein complexes. Therefore, ubiquitination is involved in the regulation of various key cellular processes, such as endocytosis, cell signalling, autophagy and DNA repair [5].

A large number of proteins in the brain are ubiquitinated. For instance, an analysis of the ubiquitome (the set of ubiquitinated proteins) revealed 921 targets in the rat brain, including numerous pre- and postsynaptic actors [6]. The Ub pathway controls multiple neuronal processes, such as neuron migration, growth and synaptic transmission [7–10]. The Ub system also governs fundamental mechanisms controlling memory reorganization [11]. Moreover, alterations of the Ub pathway are thought to contribute to neurodevelopmental, cognitive and age-related neurodegenerative diseases [8, 12]. It is thus of paramount importance to understand how the Ub system participates in normal brain formation and development.

The aim of this study was to provide an extensive and detailed overview of the expression pattern of genes encoding major factors mediating ubiquitination and deubiquitination during the formation of the cerebral cortex in mice. The data presented rely on a published RNA-seq database [13] covering 4 stages of cortical development corresponding to the beginning (embryonic Day 11, E11), the peak (E13), and the end of neurogenesis (E17), followed by the beginning of the maturation process and neuronal circuit assembly (postnatal Day 1, PN1) [14, 15]. These four periods cover stages of profound cell division (E11–E13) followed by stages characterized by the growth and morphological differentiation of postmitotic neurons and the establishment of neural networks (E17–PN1) [14, 15]. The analysis of gene expression patterns during development will help to understand the extraordinary complexity of the Ub conjugation/deconjugation system.
This study not only presents a description of transcription profile changes during embryonic cerebral cortex formation but also provides an in-depth overview of the core components of the Ub system as discerned through published data.

**Materials and methods**

The analysis was based on a published RNA-seq gene expression dataset covering E11, E13, E17 and PN1 [13], reflecting the period of cerebral cortex formation in the mouse brain. The complete dataset is freely accessible from the GEO repository under accession number GSE154677. Throughout this study, the results are expressed in transcripts per million (TPM) and given as the mean ± standard error of the mean (SEM).

**Results and discussion**

The data obtained from the genome-scale profiling of gene expression are organized into two parts: the first part covers the ubiquitination process, and the second part is devoted to the genes encoding DUBs, a class of enzymes that participate in the de novo synthesis of Ub and are responsible for the deubiquitination of substrates.

**Ubiquitination**

This first part is subdivided into 4 sections covering the following topics: (1) genes involved in the synthesis of Ub and Ub-like proteins, followed by the genes encoding (2) Ub-activating (E1) enzymes; (3) Ub-conjugating (E2) enzymes; and (4) Ub ligases (E3).

**_Ubiquitin genes_**

In mammals, two classes of genes are involved in the de novo synthesis of Ub: the monomeric Ub-ribosomal fusion genes _Uba52_ and _Rps27a_ (_Uba80_) and the stress-inducible poly-Ub genes _Ubb_ and _Ubc_ [16]. In the immature cerebral cortex, _Ubb_ and _Rps27a_ were the most highly expressed Ub genes (Fig. 1A). _Ubb_ transcripts accounted for 30–40% of the total Ub transcripts, which is in close agreement with previously reported data [17] indicating a high abundance of total _Ubb_ transcripts in the brain. _Ubb_ and _Rps27a_ followed dissimilar expression patterns: transcripts of _Rps27a_ predominated at the onset of corticogenesis (E11–E13) before their number significantly decreased, whereas _Ubb_ was the most highly expressed Ub gene at the end of corticogenesis (E17–PN1). Notably, the number of transcripts (expressed in transcripts per million, TPM) was on the order of 700–1700, reflecting a very high abundance of _Ubb_, _Rps27a_ and _Uba52_ transcripts at all stages (Fig. 1A). In comparison, the TPM values of _H2afz_ (encoding histone 2A, member z), one of the most abundant and ubiquitously expressed cellular proteins [18], were 1740 (on E11) and 400 (on PN1). _H2afz_ was the 66th most highly expressed gene on E11, whereas _Rps27a_ was the 68th most highly expressed. On PN1, _Ubb_, the most prominent Ub gene (with TPM values of 1400), was the 60th most highly expressed gene in the immature cerebral cortex. These data are in line with the known high abundance of the Ub protein in biological samples, in which it has been shown to comprise up to 5% of total protein [19]. These 4 genes generate single (_Uba52_ and _Rps27a_) or multiple copies (9 for _Ubc_ and 3 for _Ubb_) of Ub. Therefore, to better assess the contribution of these genes to the total pool of Ub, we calculated the theoretical production of Ub molecules, assuming a similar translation efficiency for the four transcripts [20]. The _Ubc_ gene accounted for nearly 65% of the total pool of Ub in the embryonic cortex, followed by _Ubb_, _Rps27a_ and _Uba52_, accounting for 18%, 11% and 6%, respectively. The arithmetic behind this estimate is illustrated below.
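The following short Python sketch reproduces this back-of-the-envelope calculation. The TPM values used here are placeholders chosen only to illustrate the method (the per-gene TPM values appear in Fig. 1A of the article, not in the text); weighting by Ub copy number then yields approximately the proportions reported above.

```python
# Ub coding units per transcript, as stated in the text: Ubc carries 9
# tandem Ub repeats, Ubb 3, while Uba52 and Rps27a each encode one Ub.
UB_COPIES = {"Ubc": 9, "Ubb": 3, "Rps27a": 1, "Uba52": 1}

def ub_pool_shares(tpm):
    """Theoretical percentage contribution of each gene to the Ub pool,
    assuming equal translation efficiency for the four transcripts."""
    weighted = {gene: tpm[gene] * UB_COPIES[gene] for gene in UB_COPIES}
    total = sum(weighted.values())
    return {gene: round(100 * w / total, 1) for gene, w in weighted.items()}

# Placeholder TPM values (illustrative only, not the published numbers);
# they approximately reproduce the reported 65/18/11/6% split.
example_tpm = {"Ubc": 450, "Ubb": 400, "Rps27a": 700, "Uba52": 380}
print(ub_pool_shares(example_tpm))
# {'Ubc': 64.0, 'Ubb': 19.0, 'Rps27a': 11.1, 'Uba52': 6.0}
```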
These data are in line with a previous report showing that _UBC_ accounts for 64% of the total Ub pool in HeLa cells [20]. Thus, the poly-ubiquitin gene _Ubc_ is the major cellular contributor of Ub molecules. In the mouse brain, 60% of the Ub pool is in the free form (i.e., not attached to target substrates) [19]. The level of free Ub is important for neuronal function and survival [16, 19]: morphology, neurite outgrowth and synaptic development are impaired in cultured neurons isolated from the brains of _Ubb_-knockout mice [21].

A recent study reported 52 Ub pseudogenes in humans [22]. Moreover, some of these genes, such as human _Ubb_ pseudogene 4 (_Ubbp4_), _Rps27a_ pseudogene 16 (_Rps27ap16_), and _Uba52_ pseudogene 8, encode proteins [22]. Here, the expression of the following murine Ub pseudogenes was examined: _Gm1821_ (_Ubb_ pseudogene, or _Ubb-ps_), _Rps27a-ps1_, _Rps27a-ps2_ and _Gm7866_ (_Uba52-ps_). Only transcripts of the pseudogene _Gm1821_ were found, with TPM values of ~15, nearly 100-fold less abundant than _Ubb_ transcripts. In conclusion, our data indicated a high expression level of the three Ub-encoding genes _Ubb_, _Rps27a_ and _Uba52_; in comparison, the expression of Ub pseudogenes was negligible.

**_Ubiquitin-like genes_**

Similar to Ub, various Ub-like proteins can be covalently conjugated to target substrates via an enzymatic cascade involving E1, E2, and E3 enzymes [23, 24]. The expression of this set of genes was analysed, and the results are reported in Fig. 1B. The following 10 genes were identified: _Sumo1-3_, _Nedd8_, _Isg15_, _Ubd_ (_Fat10_), _Ufm1_, _Atg8_ (_Map1lc3b_), _Atg12_, and _Urm1_. No transcript for _Sumo2_ or _Ubd_ was found (TPM < 2) [25]. All the other genes were, however, significantly expressed, with _Atg8_, _Sumo3_ and _Nedd8_ displaying the highest levels of expression. For this set of genes, the TPM values ranged from 70 to ~200 (Fig. 1B), approximately tenfold lower than those of the Ub genes (Fig. 1A). _Atg8_ was the major gene, and its expression was highly upregulated during development, with the number of transcripts increasing by a factor of 3 between E11 and E17, suggesting activation of Atg8-dependent physiological processes, such as autophagy, at the end of cortical development (Fig. 1B). A recent proteomic analysis showed that the levels of conjugated and free NEDD8 (or ISG15) in the mouse brain were at least 60- and 20-fold lower, respectively, than those of Ub [6], which is consistent with the profoundly lower expression levels of these genes, particularly _Isg15_, whose expression was negligible.

**_Ub-activating (E1) enzymes_**

Ubiquitination is a three-step enzymatic reaction. During the initial step, Ub is activated in an adenosine triphosphate-dependent manner by a Ub-activating (E1) enzyme before being transferred to a Ub-conjugating (E2) enzyme [2, 26]. Figure 1C shows the expression profiles of the two genes _Uba1_ and _Uba6_ (_Ube1l2_), encoding mammalian Ub-activating E1 enzymes, and of seven genes encoding E1 enzymes for Ub-like proteins (_Uba2-3_, _Uba5_, _Uba7_, _Nae1_, and _Sae1_) [24]. The _Uba1_ gene was by far the most prominently expressed E1 gene in the cerebral cortex (Fig. 1C). Its TPM values were ~200 on E11 and ~390 on E17, showing a nearly twofold increase in transcript abundance during embryonic development.
Abundant expression of UBA1 may be a common feature of many cell types, as the UBA1 protein is among the top 2% of the most highly expressed proteins in HeLa cells [18], reflecting the crucial requirement for this E1 enzyme in Ub-dependent cell processes. In the cerebral cortex, _Uba1_ was the 597th and 343rd most highly expressed gene on E11 and PN1, respectively, confirming the relatively high abundance of _Uba1_ transcripts. The _Uba1_ gene product is abundant in the nucleus and cytoplasm [27], whereas the product of the other mammalian E1 gene, _Uba6_, is found only in the cytoplasm, which may ensure much more specific and restricted functions. _Uba6_ was expressed at very low levels, with TPM values ranging from ~9 on E11 to ~4 on PN1. In the cerebral cortex, the ratios of _Uba1_:_Uba6_ transcript abundance were > 20:1 and 90:1 on E11 and PN1, respectively. This differential expression is consistent with proteomic data showing that the relative abundance ratio of the UBA1 and UBA6 proteins is > 10:1 in HeLa cells [18], further suggesting a restricted function for UBA6 compared to that of UBA1. Collectively, the expression profile data show that _Uba1_ is the primary E1 gene in the cerebral cortex of mice. Due to its central role in Ub homeostasis, UBA1 is likely to regulate a wide range of neurobiological processes [28].

Far below the level of expression observed for _Uba1_, a set of six genes encoding E1 enzymes for Ub- and Ub-like proteins (_Uba2-3_, _Uba5_, _Uba7_, _Nae1_, and _Sae1_) exhibited TPM values < 100. _Uba2_ encodes an E1 enzyme specific for the Ub-like molecule SUMO, and its product is thought to form heterodimers with SAE1 [24, 26]. Interestingly, the expression of both genes (_Uba2_ and _Sae1_) was downregulated during development, with the abundance of their transcripts reduced by nearly 50% between E11 and PN1 (Fig. 1C). Notably, no transcript for the Ub-like E1 gene _Atg7_ was detected. Despite its low level of expression in the developing cortex, _Uba6_ plays important roles in neuronal development, dendritic spine architecture, and mouse behaviour, and its deficiency is lethal [29]. Moreover, _Uba6_ is required for neuronal viability in primary hippocampal neuronal cultures. Collectively, our data identify the highly regulated _Uba1_ and _Uba2_ genes as the major E1 and E1-like enzyme genes, respectively, expressed during cortical brain development.

**_Ub-conjugating (E2) enzymes_**

The analysis of the genes encoding Ub- and Ub-like protein-conjugating E2 enzymes [30, 31] is shown in Fig. 1D, E. Transcripts for _Ube2d4_, _Ube2e2_, _Cdc34b_, _Atg10_ and _Ube2u_ were not found. This observation reinforces the validity of our results, since _Ube2u_ transcripts have been detected specifically in tissues of the urogenital tract [32]. All the other genes (35 of 40) were expressed at significant levels, particularly _Ube2m_ (_Ubc12_), encoding a NEDD8-conjugating enzyme, with TPM values increasing from ~270 on E11 to ~320 on PN1 (Fig. 1D). The most highly expressed genes encoding Ub-conjugating E2 enzymes were _Ube2c_ and _Ube2r1_ (_cdc34_) (Fig. 1D) and, to a lesser extent, _Ube2q1_, _Ube2ql1_, and _Ube2z_ (_Use1_) (Fig. 1E). Expression of _Ube2c_ was repressed during embryonic development, with high levels of transcripts evident in the neurogenic period (E11–E13) and a marked decrease from E17 onwards. Overall, the TPM values of the _Ube2c_ gene decreased by a factor of 25 between E11 (> 150 TPM) and PN1 (~7 TPM) (Fig. 1D).
This decline made _Ube2c_ the most downregulated gene in the Ub pathway during corticogenesis. The UBE2C protein is an exclusive partner of APC/C E3 ligases and controls cell cycle progression. Its mRNA and protein levels are low in quiescent cells but greatly increase and peak during mitosis [31, 33]. _Ube2c_ mRNA was thought to be barely detectable in tissues except under oncogenic conditions, with high levels in various cancers, such as brain and breast cancers [33]. The data presented in Fig. 1D show that high levels of _Ube2c_ mRNA are found in nontumorous tissue under conditions not related to cancer onset or progression, similar to many development-specific genes whose re-expression is associated with carcinogenesis. Further studies are required to verify whether the UBE2C protein is a marker of neurogenesis in the brain.

_Ube2r1_ was another prominently expressed Ub-conjugating E2 enzyme-encoding gene. Similar to that of _Ube2c_, the expression of _Ube2r1_ was repressed, with the abundance of transcripts reduced by a factor of 4 between E11 and PN1 (from 126 to 34 TPM) (Fig. 1D). The E2 enzyme CDC34 (encoded by _Ube2r1_) is the primary E2 for cullin-RING E3 ligases (CRLs). Two members of the _Ube2q_ gene family were expressed at moderate levels: _Ube2q1_ and _Ube2ql1_. The expression of the _Ube2q1_ gene was constant, showing no sign of developmental regulation. Western blot and immunohistochemical experiments have shown the presence of UBE2Q1 proteins in the rat brain cortex, mainly in neurons [34]. UBE2Q1 has been postulated to play an anti-apoptotic role, at least in pathological states such as traumatic brain injury [34]. The expression of _Ube2ql1_ was not detected before E13, and it peaked in the E17–PN1 period, with TPM values increasing from ~8 to 85 TPM from E13 to E17. This 11-fold increase in transcript abundance made _Ube2ql1_ the second most strongly induced gene (in the Ub system) during embryonic development. In HeLa cells, UBE2QL1 exhibits a dual function: it is required for the efficient clearance of damaged lysosomes by lysophagy and maintains lysosome integrity [35].

The expression of minor genes (TPM values < 40) encoding Ub- and Ub-like protein-conjugating E2 enzymes is reported in Additional file 1: Fig. S1A, B. The vast majority of these genes were expressed at constant levels, except _Ube2g2_, _Ube2s_, _Ube2t_ and _Ube2l6_, which were downregulated. In particular, the expression of _Ube2l6_ was profoundly repressed, with TPM values decreasing from ~18 to ~2 (a ~ninefold reduction in transcript abundance) (Additional file 1: Fig. S1B), making it one of the most markedly downregulated genes observed in this study. The fold-change figures quoted throughout this section are simple ratios of stage-wise TPM values; the short sketch below illustrates the arithmetic.
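This is a methodological aside rather than part of the original analysis pipeline: the helper below recomputes two of the fold changes quoted above from their approximate TPM values.

```python
def fold_change(tpm_a: float, tpm_b: float) -> tuple[float, str]:
    """Fold change between two developmental stages and its direction."""
    if tpm_b >= tpm_a:
        return tpm_b / tpm_a, "up"
    return tpm_a / tpm_b, "down"

# Approximate TPM pairs quoted in the text above.
examples = {
    "Ube2r1 (E11 -> PN1)": (126, 34),  # repressed ~4-fold
    "Ube2ql1 (E13 -> E17)": (8, 85),   # induced ~11-fold
}
for label, (start, end) in examples.items():
    fc, direction = fold_change(start, end)
    print(f"{label}: ~{fc:.0f}-fold {direction}")
```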
Mutations in the _Ube2a_ gene lead to neurodevelopmental disorders such as X-linked syndromic mental retardation, yet the precise roles _Ube2a_ plays in brain formation are unknown. In the rodent brain, the _Ube2a_ gene was expressed at low levels, with few transcripts evident throughout cortical development (Additional file 1: Fig. S1A). It has been proposed that some of the cellular effects of UBE2A involve the E3 Ub ligase Parkin. This might be the case in adults, but this enzyme was not expressed during embryonic development of the cerebral cortex, suggesting the involvement of other E3 partners (see below, "RING E3 Ub ligases"). UBE2A also exerts some of its actions via the E3 Ub ligases RAD18 and RNF20. The UBE2A/RAD18 complex is at least partially responsible for the pathogenesis of mental retardation (in association with the proliferating cell nuclear antigen, PCNA [12]). The _Rad18_ gene was expressed uniquely on E11 and E13, but the abundance of its transcripts was very low (< 6 TPM), suggesting that its presence is required for brain development exclusively during neurogenesis.

This transcriptomic analysis provides a detailed overview of the expression patterns of E2 genes, which are central players in the Ub pathway. Overall, these genes were expressed at low levels during embryonic development of the mouse cerebral cortex. As in HeLa cells or Swiss 3T3 cells, _Ube2m_ (encoding UBE2M/UBC12, involved in neddylation) was the prominent E2 gene. However, the pattern of expression of the Ub- and Ub-like protein-conjugating E2 genes in the cerebral cortex did not completely overlap with that reported in cell lines. For instance, UBE2I/UBC9 (the SUMO-conjugating E2) and UBE2N are abundant E2 proteins in cell lines [18]. Together with UBE2V1, UBE2N represents > 50% of the Ub-dedicated E2 enzymes in HeLa cells, and it is associated with UBE2V2 in Swiss 3T3 cells [18]. UBE2L3 is another abundant E2, expressed at levels twofold higher than all HECT and RBR E3 ligase genes in HeLa cells combined [18]. This expression pattern differs profoundly from that of the cerebral cortex, where the _Ube2l3_ and _Ube2n_ genes were expressed at moderate levels (TPM values of 20–30) (Additional file 1: Fig. S1). The E2 enzyme UBE2L3 works in concert with the E3 ligase UBE3A (also known as E6-associated protein or E6-AP) [36]. Mutations and genetic defects in the _Ube3a_ gene are associated with Angelman syndrome, a neurodevelopmental disease [37]. The profile of E2 enzyme expression in nontumorous tissue is thus likely to differ from that in immortalized cells. Our data clearly illustrate that the pattern of E2 gene expression is temporally regulated; this pattern may, however, differ from one brain area to another.

**_Ub ligases (E3)_**

Ub ligase (E3) enzymes exert two crucial functions: they recognize a specific substrate and enable the final transfer of Ub to that substrate [38]. In this study, E3 Ub ligases are grouped into 3 families according to [39]: the HECT (homologous to the E6-AP carboxyl terminus), RBR (RING-in-between-RING), and RING (really interesting new gene) E3 families. Depending on the E3 ligase, the transfer of Ub from the E2 enzyme to the target substrate can occur directly or via a two-step process. RING E3 ligases enable a direct transfer of Ub from the E2 to the target, whereas ubiquitination involving HECT ligases develops via a two-step process in which the Ub carried by the E2 enzyme binds first to the HECT domain before being transferred to the target protein [38, 39]. Although RBR E3 ligases have two RING domains and could be categorized as a subclass of RING-type ligases, they are described as RING-HECT hybrids that catalyse ubiquitination not directly, like RING-type ligases, but via a two-step reaction, like HECT-type ligases, during which Ub is transferred to the RING2 domain and then to the target [38–41].

_HECT Ub ligases_. The genes were subdivided into 3 groups: Nedd, Herc and other HECT ligases [42]. Transcripts of twenty-four HECT genes were found (Fig. 2), with four genes (_Ube2cbp_, _Hace1_, _Herc6_, _Hecw2_) below the detection limit. The group of HECT E3 Ub ligases was dominated by the high abundance of _Nedd4_ transcripts.
The TPM values of _Nedd4_ decreased from 249 (on E11) to 84 (on PN1), reflecting a ~threefold reduction in transcript abundance throughout corticogenesis. _Nedd4_ was the most highly expressed HECT E3 gene during neurogenesis (E11–E13) and the second most highly expressed at the end of corticogenesis (E17–PN1), after _Hectd3_, the other prominent HECT gene. Our data are in line with the first study reporting the isolation of a set of _Nedd4_ cDNA clones and the corresponding mRNA expression [43]; that study also showed a gradual decrease in mRNA in the brain during embryonic development. _Nedd4l_, which is closely related to _Nedd4_, the founding and most ancient member of the Nedd4 family, was expressed from E13 onwards at a very low level (TPM values < 9). These data illustrate the temporal regulation of these E3 Ub ligases, which play essential roles in neuronal cell fate determination and survival, neurite outgrowth, axon guidance and branching [44].

Compared to the Nedd group, E3 Ub ligases of the Herc group were expressed at much lower levels, with only _Herc1_ and _Herc2_ showing TPM values reaching a maximum of 23, on E17 (Fig. 2). In the third category (i.e., the HECT E3 Ub ligases other than Nedd and Herc), _Hectd3_ was the predominant gene, with highly regulated expression during corticogenesis: its TPM values increased by a factor of 3 from E11 to PN1 (from ~30 to ~94 TPM) (Fig. 2). _Huwe1_ was the third most highly expressed gene of this subfamily of HECT E3 ligases, with TPM values of 43–60. Interestingly, knocking down HUWE1 expression in cortical tissue with an siRNA resulted in an increase in the fraction of proliferating cells in the developing brain and a blockade of neuronal differentiation [45]. These results show that HUWE1 controls neural differentiation and proliferation. All the other genes in this subgroup were expressed at low levels, with TPM values < 30 (Fig. 2). The HECT E3 Ub ligase UBE3A has been well characterized because mutations in the _Ube3a_ gene cause Angelman syndrome, a neurodevelopmental disease [37]. However, _Ube3a_ was expressed at low levels throughout the cortex (TPM values of 16–25, Fig. 2). Notably, the highest abundance of HECT Ub ligase gene transcripts was noted at the peak of neurogenesis (E13) and then declined.

_RING E3 Ub ligases_. RING E3 ligases constitute the largest family of E3 Ub ligases [8, 38]. For instance, a previous analysis of the mouse genome identified 398 putative E3 enzymes [46]. In the following sections, RING E3 Ub ligase genes are classified into three main subgroups: (1) single-subunit RING E3 ligases, (2) multisubunit RING E3 ligases and (3) U-box RING E3 ligases.

A. Single-subunit RING E3 ligases: A list compiled by [47] was used for the analysis of the major E3 ligase subgroups: _Cbl_, _Deltex_, _Goliath_, _IAP_, _Listerin_, _Makorin_, _MARCH_, _Neuralized_, _Pellino_, _Pex_, _Polycomb_, _Praja_, _RBR_, _Siah_, _Traf_, _Trim_ and _Ubr_. Notable heterogeneity in expression levels was observed among these genes. The most highly expressed subgroup genes included the _Deltex_, _Goliath_, _Makorin_, _March_, _Neuralized_, _Praja_, _Polycomb_, _Traf_ (Fig. 3) and _Trim_ genes (Fig. 4). The expression levels of the other, minor gene groups (_Cbl_, _IAP_, _Listerin_, _Pellino_, _Pex_, _Siah_, and _Ubr_) are presented in Table 1. The TPM values of all the genes in this set were < 37, except for the _Ubr7_ gene, whose TPM value was ~60 on E11 and E13.
_Deltex E3 Ub ligases_. _Dtx3_ and _Dtx4_ were the major _deltex_ genes, with TPM values increasing from 179 to 351 (_Dtx3_) and from 40 to 91 (_Dtx4_) (Fig. 3A), revealing strong positive regulation during corticogenesis. However, the most highly regulated gene in this group was _Dtx1_: no transcripts were detected on E11, but the gene was strongly induced later, with TPM values increasing from 9 (on E13) to 60 (on PN1), a nearly sevenfold increase (Fig. 3A). Deltex E3 ligases have been principally studied in the context of tumorigenesis and tumour cell invasion [48], but very little is known about the roles played by Deltex E3 Ub ligases in the developing or adult brain in mammals. The marked enhancement of _deltex_ gene expression supports the notion that they play key roles in neuronal growth and differentiation in mammals.

_Goliath E3 Ub ligases_. Twenty-nine orthologous genes of the Drosophila Goliath E3 Ub ligases have been identified in mice (https://flybase.org/reports/FBgg0000104.html). However, nine of these genes were not expressed: _Rnf43_, _Rnf128_ (_Grail_), _Rnf133_, _Rnf148_, _Rnf150_, _Znrf3_, _Znrf4_, _Zswim2_ and _4930595M18Rik_. In the expressed gene subgroup, _Rnf44_ (68–115 TPM), _Rnf167_ (72–125 TPM), and _Rnf126_ (46–70 TPM) were the major genes (Fig. 3B). Notably, _Rnf215_ was the most highly regulated gene, with the abundance of its transcripts decreasing from 74 to 27 TPM from E11 to PN1, a 2.7-fold decrease during corticogenesis. The contribution of RNF44 to brain formation and function is unknown. The E3 ligase RNF167 plays important roles in neuronal cells. Although principally found in lysosomes, a fraction of RNF167 is present at the cell surface, where it participates in the ubiquitination of AMPA receptors. This ubiquitination modulates the number of AMPA receptors at the cell surface as well as synaptic currents; RNF167 is therefore an important physiological modulator of glutamatergic neurotransmission [49]. This RNF167-dependent ubiquitination of AMPA receptors was recently shown to be mediated by the E2 enzymes UBE2D1 and UBE2N [50]. RNF126, another prominent factor in this subgroup, has been shown to be involved in Friedreich ataxia, a severe genetic neurodegenerative disease characterized by reduced expression of the essential mitochondrial protein frataxin; the E3 Ub ligase RNF126 specifically mediates frataxin ubiquitination, which induces its degradation [51]. Our results point towards a role played by Goliath E3 Ub ligases in neuronal function from early embryonic stages.

_Makorin E3 Ub ligases_. All three _makorin_ genes (_Mkrn1-3_) were expressed in the immature cerebral cortex (Fig. 3A). _Mkrn1_ was the predominant gene, with TPM values increasing from 32 (on E11) to 116 (on E17). Consistent with our findings, _Mkrn1_ was originally identified as a highly expressed gene during mouse embryonic development, with a high level of mRNA expression in the developing brain [52]. Low levels of MKRN1 proteins are found in the brain despite the relatively high mRNA abundance, due to the autoubiquitination properties of this E3 Ub ligase, which induces its own proteasomal degradation [53]. Experiments performed with Xenopus embryos showed that Mkrn2 proteins inhibit neurogenesis by acting downstream of phosphatidylinositol 3-kinase (PI3K) and Akt [54]. Thus, Mkrn proteins clearly play major roles in the developing nervous system.
_MARCH E3 Ub ligases_. The family of membrane-associated RING-CH (MARCH) proteins comprises eleven E3 Ub ligases (MARCH-1 to -11) [55]. Four of the eleven _March_ genes analysed in this study were not expressed: _March-1_, _-3_, _-10_ and _-11_. The major gene in the group was _March9_. Its TPM value was approximately 60 on E13 and was profoundly increased on E17 (192 TPM) and PN1 (217 TPM), corresponding to a > 3.5-fold increase (Fig. 3A). On E17 and PN1, _March9_ transcripts represented more
Altogether, these results support the idea that Praja1 proteins play important roles in brain development and regulation of cell apoptosis. _Polycomb complexes Polycomb-containing complexes_ possess E3 Ub ligase activity due to its RING1A (Ring1) or RING1B (Rnf2) member. The abundance of _Rnf2 transcripts did not vary during corticogen-_ esis, whereas a reduction in _Ring1 transcripts was_ observed between E11 and E17, with TPM values decreasing from 80 to 50 (Fig. 3A). ----- **Table 1 It gives the list of the genes encoding RING E3 ligases that were found to be weakly expressed during the formation of the** cerebral cortex **E3 families** **Genes** **TPM values (mean values)** **Expression** **pattern** **E11** **E13** **E17** **PN1** Cbl _Cbl_ 17.2 18.7 15.2 10.8 ↘ _Cblb_ 4.9 4.5 5.1 3.1 = _Cblc_ n.d n.d n.d n.d _Cbll1_ 20.7 23.5 30.2 23.0 ↗ IAP _Birc2_ 23.5 29.7 24.1 17.0 ↗ _Birc3_ n.d n.d n.d n.d _Xiap_ 18.4 19.9 17.8 16.0 = _Birc7_ n.d n.d n.d n.d Listerin _Ltn1_ 12.7 14.8 14.3 12.0 = Pellino _Peli1_ 11.0 15.6 24.2 14.7 ↗ _Peli2_ n.d 3.1 3.2 3.2 = _Peli3_ n.d 3.2 13.8 14.5 ↗ Pex _Pex2_ 22.9 22.7 16.6 16.0 ↘ _Pex10_ 17.3 20.6 17.1 11.6 ↘ _Pex12_ 12.4 16.0 19.1 15.3 ↗ Siah _Siah1a_ 14.1 16.5 23.2 22.5 ↗ _Siah1b_ 36.6 31.0 13.0 9.2 ↘ _Siah2_ 5.7 4.8 9.4 11.2 ↗ _Siah3_ n.d 2.7 n.d n.d Ubr _Ubr1_ 4.4 5.4 7.3 5.6 = _Ubr2_ 8.0 10.0 12.2 9.9 = _Ubr3_ 7.9 8.4 11.4 11.8 ↗ _Ubr4_ 19.0 22.4 27.8 27.0 ↗ _Ubr7_ 61.4 58.3 35.2 27.7 ↘ They all displayed TPM values < 65 n.d.: not detected, below the detection threshold. Depending on the gene, the abundance of transcripts increased, decreased or was nearly constant (↗, ↘ and =, respectively) during corticogenesis _Traf E3 Ub ligases Transcripts of four_ _Traf genes_ were identified, but their levels varied, with the predominant being _Traf4 with TPM values of 120–180_ (Fig. 3A). In contrast to the other _Traf members,_ _Traf4 expression was increased during corticogen-_ esis. TRAF4 proteins are essential for neural crest development and neural folding in Xenopus [61]. In mice, TRAF4 deficiency can induce defects in neural tube closure [62]. This protein also participates in the control of myelination [62]. However, transcripts of two _Traf genes (Traf1 and_ _Traf5) were not detected_ in the present study. _TRIM E3 Ub ligases Proteins of the tripartite motif_ (TRIM) family are engaged in multiple cellular processing through their E3 Ub ligase activity. Absent in yeast, TRIM proteins are required for activation of mammalian autophagy and critical for the regulation of innate immunity [41]. Several families and subfamilies of TRIM proteins have been identified (C-I to C-XI), in addition to a group of unclassified TRIM proteins lacking a RING-finger domain [41]. More than eighty genes were analysed in this study. Taken together, the data revealed that transcripts of thirty-two genes encoding RING-finger domaincontaining TRIM and only one TRIM without a RING-finger domain were found (Fig. 4). Six _Trim_ genes were very highly expressed: _Trim28,_ _Trim32,_ _Trim35, Trim46, Trim59 and Trim67. The latter was_ both the most highly expressed _Trim gene and one_ of the most highly upregulated genes analysed in the present study. No transcripts were detected before E13, and the TPM values increased from 12 (on E13) to 253 (on PN1). Overall, the transcript abundance increased by a factor of 21 during embryonic development. 
The most significant increase was noted between E13 and E17, indicating that TRIM67 is a dispensable ligase during neurogenesis but is crucial for postmitotic cell functions and the maturation of the cerebral cortex (Fig. 4). These data are in line with a previous report showing that TRIM67 proteins are highly expressed in the developing and mature brain but are not found in nonneuronal tissues [63]. TRIM67 protein expression peaked late, in the embryonic and perinatal stages, indicating that it is involved in neuronal development after the proliferative period. Deletion of the _Trim67_ gene causes malformations in several brain regions associated with cognitive and behavioural impairments [63]. The molecular role played by TRIM67 in brain development, as well as the nature of its substrates, is however unknown.

_Trim35_ expression was not regulated to the same extent as that of _Trim67_, but _Trim35_ was nevertheless expressed at all ages. _Trim35_ TPM values increased from ~ 170 to 230 from E11 to PN1, reflecting a 35% augmentation in transcript abundance (Fig. 4). The third most prominent gene in this family was _Trim28_. Its expression was downregulated, with TPM values decreasing from ~ 240 to 120 between E11 and PN1. The repression of _Trim28_ expression was evident after E13, indicating that TRIM28 (KAP1 or TIF1b) exerts important effects during the proliferative period. TRIM28 is an epigenetic corepressor protein highly expressed in both the developing and adult brain [64]. Its absence in mice is embryonically lethal (on approximately E5.5). TRIM28 has been proposed to be a SUMO E3 ligase [65]. In murine and human brains, TRIM28 functions as a transcriptional regulator of neurodevelopmental gene programmes important for brain development [64].

The other main _Trim_ genes were _Trim32_, _Trim46_, and _Trim59_. The expression of _Trim32_ and _Trim46_ was upregulated: the abundance of their transcripts increased markedly after E13. For instance, the TPM values increased by factors of 2.6 and 8.7 for _Trim32_ and _Trim46_, respectively, between E11 and PN1 (Fig. 4). _Trim46_ was thus one of the most induced genes (an ~ ninefold increase). Accumulation of TRIM32 proteins in neural cells favours their commitment to the neuronal lineage [66]. Following its translocation to the nucleus, TRIM32 targets c-Myc for proteasomal degradation, which initiates neuronal differentiation [66]. TPM values for _Trim59_, another highly regulated gene, decreased by a factor of ~ 9 (from 149 to 17 TPM) (Fig. 4). These changes in transcript abundance were primarily identified after the peak of neurogenesis (on E13); the mRNA levels were much higher during the proliferative periods of corticogenesis. TRIM59 proteins are abundantly expressed in certain organs, such as the spleen, stomach and ovary, but they are also found at lower levels in the brain, lung, kidney, muscle and intestine [67].

Again, it is interesting to note the high expression level of factors known to regulate carcinogenesis. For instance, TRIM28, TRIM32, and TRIM59, which have been found to be aberrantly overexpressed in certain cancers, were highly abundant in this study. Specifically, TRIM28 has been associated with proteins of the melanoma-associated antigen (MAGE) family and favours the progression of carcinogenesis via suppression of autophagy [68]. Notably, many E3 Ub ligases, such as MARCH and TRIM proteins, known for the roles they play in immune responses, were highly expressed in the developing cerebral cortex.
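As a side note on how the trends reported here and in Table 1 can be summarized: each call combines a detection criterion with the direction of change between E11 and PN1. The short Python sketch below illustrates one way to derive such calls from mean TPM values; the 2-TPM detection floor and the ±30% band used for the "=" call are assumptions chosen for this illustration, not the criteria applied in the study itself.

```python
# Illustrative only: deriving trend calls (up / down / nearly constant / n.d.)
# and fold changes from mean TPM values at the four stages analysed here.
# The detection floor (2 TPM) and the +/-30% "constant" band are assumed
# values for this sketch, not the thresholds used in the original analysis.

STAGES = ("E11", "E13", "E17", "PN1")
DETECTION_FLOOR = 2.0  # assumed TPM below which a gene is reported as n.d.

def classify(tpm):
    """Return 'n.d.', 'up', 'down' or '=' for one gene's TPM trajectory."""
    values = [tpm[stage] for stage in STAGES]
    if max(values) < DETECTION_FLOOR:
        return "n.d."
    first, last = values[0], values[-1]
    if first < DETECTION_FLOOR:
        return "up"                      # induced during corticogenesis
    ratio = last / first
    if 0.7 <= ratio <= 1.3:
        return "="                       # nearly constant expression
    return "up" if ratio > 1.3 else "down"

def fold_change(tpm):
    """Max/min ratio over the detectable stages."""
    detected = [tpm[s] for s in STAGES if tpm[s] >= DETECTION_FLOOR]
    return max(detected) / min(detected)

# Example with the Ubr7 values listed in Table 1:
ubr7 = dict(zip(STAGES, (61.4, 58.3, 35.2, 27.7)))
print(classify(ubr7), round(fold_change(ubr7), 1))  # -> down 2.2
```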
B. Multisubunit RING E3 ligases: Three families of multimeric RING E3 ligases were considered in the present report: (1) cullin RING ligases, (2) the APC/C E3 ligase, and (3) the Fanconi anaemia complex.

B.1 Cullin RING ligases (CRLs)

Cullin RING ligases (CRLs) represent the largest family of E3 Ub ligases. They are complex molecular entities with several independent subunits. CRLs (CRL1-9) comprise a cullin (Cul) scaffold associated with a RING-box protein and an adaptor protein. They also require a substrate recognition element, an interchangeable subunit that designates the target protein to be ubiquitinated. CRL3 is a notable exception, because the same molecular entity (a Broad complex, Tramtrack, Bric-a-brac (BTB) domain protein) serves as both adaptor and substrate receptor [39]. Several cullins, RING-box proteins and adaptor proteins, together with hundreds of substrate recognition proteins, generate a wide range of combinations, giving rise to a multitude of functionally distinct CRLs [69]. Table 2 presents an overview of the multisubunit structure of CRLs and their modularity.

**Table 2** Overview of the multi-subunit structure of CRLs and their modularity

| Type of CRL | Cullin scaffold | RING-finger protein | Adaptor protein | Substrate recognition protein |
|---|---|---|---|---|
| CRL1 | CUL1 | ROC1 (Rbx1) | Skp1 | F-box |
| CRL2 | CUL2 | ROC1 (Rbx1) | Elongin B/Elongin C | VHL-box |
| CRL3 | CUL3 | ROC1 (Rbx1) | BTB | |
| CRL4A | CUL4A | ROC1 (Rbx1) | DDB1 | DCAF |
| CRL4B | CUL4B | ROC1 (Rbx1) | DDB1 | DCAF |
| CRL5 | CUL5 | ROC2 (Rbx2) | Elongin B/Elongin C | SOCS-box |
| CRL7 | CUL7 | ROC1 (Rbx1) | Skp1 | Fbw8 |
| CRL9 | CUL9 (PARC) | ? | ? | ? |

B.1.1 Cullin scaffold proteins

Transcripts of nine cullin genes (_Cul1-3_, _4a_, _4b_, _5_, _7_, and _Cul9_ or _Parc_) were detected, with _Cul7_ being the predominant member of this group. _Cul7_ TPM values slightly decreased from 103 to 82 from E11 to PN1 (Fig. 5A). The CUL7 protein, present only in chordates, participates in the control of embryonic development. CUL7-knockout mice displayed neonatal lethality, and mutations in the human _Cul7_ gene have been linked to the growth retardation disorder 3-M syndrome [69]. The expression level of the human _Cul7_ gene is increased in glioblastoma tissues compared to normal brain tissues. Furthermore, _Cul7_ facilitates the proliferation, invasion and migration of glioma cells by activating the NF-κB pathway [70]. These effects are consistent with the current view that CUL7 is an oncogene [71]. Here, however, _Cul7_ expression was elevated in healthy (nontumorigenic) brain tissue. The neuronal functions of the scaffold protein CUL7 are not clear, but it has been found to be highly abundant in the developing rat brain [72, 73]. Only two F-box proteins are known to interact with CUL7: FBXW8 and FBXW11 [71]; the expression of the corresponding genes showed opposite patterns: the abundance of _Fbxw8_ transcripts decreased from 16 to 10 TPM, whereas the abundance of _Fbxw11_ transcripts increased from 12 to 19 TPM from E11 to PN1. CUL7 is, together with the F-box protein FBXW8, associated with the Golgi apparatus in neuronal cells and is required for the growth of dendrites (but not axons) of neurons in the mammalian brain [72]. CUL7 is also found at synaptic sites, controlling the degradation of Eag1, a potassium channel of the plasma membrane that participates in the regulation of membrane excitability [73]. Due to its high expression level and synaptic localization, CUL7 may be an important modulator of neuronal excitability in the brain. Within the CRL7 E3 ligase complex, the scaffold protein CUL7 is associated with the adaptor protein Skp1 and the RING finger protein ROC1, which contains an E2 enzyme-binding domain. The expression of the _Skp1_ and _Rbx1_ genes is discussed below. The other _Cul_ genes showed low TPM values (approximately 10–20 TPM). The expression of the minor _Cul9_ gene was developmentally regulated: its transcript abundance increased by a factor of ~ 8 between E11 and PN1 (from ~ 2 to 19 TPM) (Fig. 5A). It was one of the most upregulated genes analysed in this study.

**Fig. 5** Expression of genes encoding cullin scaffolds, adaptor proteins and RING finger proteins. Transcripts of cullin (_Cul_) (A), adaptor protein (B) and RING finger protein (C) genes are shown. D, E and F show the patterns of expression of the major _Fbxw_, _Fbxl_ and _Fbxo_ genes; they all had TPM values ≥ 40. The minor _Fbxw_, _Fbxl_ and _Fbxo_ genes are shown in Additional file 1: Fig. S2A–C

B.1.2 Adaptor proteins

Adaptor proteins are attached to the cullin scaffold. The following four genes were identified in this study: _Skp1_ (_Skp1a_), _Elob_ (elongin B), _Eloc_ (elongin C) and _Ddb1_. Three of these genes were highly expressed, with TPM values ≥ 180: _Skp1a_, _Elob_, and _Ddb1_ (Fig. 5B). They displayed distinct patterns of expression: _Elob_ was expressed at a constant level, whereas _Ddb1_ expression was negatively regulated (Fig. 5B). Transcripts of _Ddb1_, the major gene in this subgroup, were drastically reduced during embryonic development, with the TPM value decreasing from 351 to 162 (Fig. 5B).

B.1.3 RING finger proteins

The RING finger proteins function as docking sites for E2 enzymes. The TPM values of the _Rbx1_ and _Rbx2_ (_Rnf7_) genes were nearly identical on E11 (89 and 86, respectively) (Fig. 5C). These genes displayed differing expression patterns: the abundance of _Rbx1_ transcripts increased (with a peak on E17 at 109 TPM), whereas that of _Rbx2_ decreased (to 43 TPM on PN1). Overall, _Rbx1_ was the predominant RING finger protein-encoding gene in the cerebral cortex during development (Fig. 5C).

B.1.4 Substrate receptors

In this section, we analysed the expression of genes encoding F-box proteins. These are important components of the Skp1-Cullin 1-F-box complexes (SCF E3 Ub ligases). F-box proteins play critical roles as substrate receptors and have been classified into three groups: FBXW, FBXL and FBXO proteins [74]. The analysis of the genes (_Fbxw_, _Fbxl_ and _Fbxo_) followed this classification. The most highly expressed genes (with TPM values ≥ 40) are shown in Fig. 5D–F. The expression of the minor F-box genes (TPM values < 40) is presented in Additional file 1: Fig. S2.

_Fbxw genes:_ The major genes of this subgroup were _Fbxw2_, _Fbxw5_ and _Fbxw9_ (Fig. 5D). They were all upregulated during corticogenesis, particularly _Fbxw9_, with TPM values increasing ~ fourfold from 22 (on E11) to 93 TPM (on PN1). _Fbxw5_ was the most highly expressed _Fbxw_ gene; its TPM values increased from 98 to 193, a ~ twofold increase in transcript abundance (Fig. 5D). This is another example of a member of the Ub system known for its contribution to tumorigenesis [75] that is highly expressed in nontumorous brain tissue. However, the roles of FBXW5 proteins in the brain are unknown. The minor _Fbxw_ genes displayed TPM values ranging from 2 to 20. Similar to the major _Fbxw_ genes, the expression of the minor _Fbxw_ genes was positively regulated during corticogenesis, except for _Fbxw8_ (Additional file 1: Fig. S2A).
Of note, two _Fbxw_ genes were not expressed (_Fbxw10_ and _Fbxw12_).

_Fbxl genes:_ The major _Fbxl_ genes were _Fbxl6_, _Fbxl16_ and _Fbxl19_, together with the predominant _Fbxl14_ gene (Fig. 5E). The latter had elevated TPM values that decreased during corticogenesis (from 202 to 98 TPM). The protein FBXL14 has been reported to associate with Hes1 (hairy and enhancer of split 1), a repressor of proneural genes. Furthermore, the loss or overexpression of FBXL14, respectively, stabilizes Hes1 or decreases its protein levels [76]. In stem cells, FBXL14 controls the proteasomal degradation of Hes1, which favours neuronal differentiation [76]. The temporal pattern of expression of the _Hes1_ gene is reported in Fig. 9B. Both genes, _Fbxl14_ and _Hes1_, were downregulated during corticogenesis. The other major _Fbxl_ genes had, however, a different pattern of expression, with an abundance of transcripts increasing significantly during embryogenesis. The expression of the _Fbxl16_ gene was strongly induced, with TPM values increasing from 19 (on E13) to 130 (on PN1), a nearly sevenfold increase (Fig. 5E). The temporal pattern of expression of the minor _Fbxl_ genes is shown in Additional file 1: Fig. S2B. No transcripts of the following six _Fbxl_ genes were detected: _Fbxl4_, _Fbxl7_, _Fbxl8_, _Fbxl13_, _Fbxl17_, and _Fbxl21_.

_Fbxo genes:_ Amongst the six genes of this group with TPM values > 40 (_Ccnf_ (_Fbxo1_), _Fbxo5_, _21_, _41_, _44_, and _45_), two _Fbxo_ genes predominated: _Fbxo5_ and _Fbxo21_ (Fig. 5F). They had, however, differing expression patterns: the expression of _Fbxo5_ was strongly reduced (TPM decreasing from 84 to 5, a 17-fold reduction in transcript abundance), whereas the expression of _Fbxo21_ was induced during corticogenesis (TPM values increasing from 44 to 79, a 1.8-fold increase) (Fig. 5F). Although expressed at lower levels (TPM values ranging from 46 to 3), _Ccnf_ was also strongly downregulated; the abundance of _Ccnf_ transcripts decreased nearly 15-fold from E11 to PN1 (Fig. 5F). _Ccnf_ and _Fbxo5_ seemed to play critical roles at the onset of neurogenesis. Fbxo5 proteins have been shown to control cell proliferation [77]. The expression of _Fbxo41_ was strongly induced: no transcripts were detected on E11, but the abundance of transcripts increased from 4 TPM (on E13) to 45 TPM (on PN1) (Fig. 5F), an 11-fold increase. Fbxo45 proteins are found exclusively in the brain [78, 79], and _Fbxo45_ mRNA is detected as early as E12 [79]. The loss of Fbxo45 is postnatally lethal and is associated with abnormal embryonic neural development [78]. Fbxo45 proteins play important roles in the brain by regulating neurotransmission [79]. _Fbxo45_ gene expression was significantly enhanced at the end of neurogenesis (Fig. 5F). It is, however, important to note that Fbxo45 fails to associate with Cul1 and does not form an SCF complex but instead associates with a RING finger-type Ub ligase [80]. The expression of the minor _Fbxo_ genes is shown in Additional file 1: Fig. S2C. The numbers of transcripts of seven _Fbxo_ genes were below the detection threshold (_Fbxo15_, _Fbxo16_, _Fbxo24_, _Fbxo36_, _Fbxo39_, _Fbxo40_, and _Fbxo23_).

B.2. The anaphase-promoting complex/cyclosome (APC/C) E3 ligase

The E3 Ub ligase anaphase-promoting complex/cyclosome (APC/C) is well known for its control of the cell cycle because it regulates mitotic progression and exit. It is also highly abundant in postmitotic neurons, where it plays a role in dendrite and axon arborization and in synaptogenesis [9].
The APC/C E3 ligase is a multi-subunit complex displaying a structure similar to that of the Skp1/Cul1/F-box protein Ub ligases. Both are composed of three fixed subunits (a catalytic RING protein, a scaffold protein and an adaptor protein) and another component conferring substrate specificity (an F-box protein for SCF, and Cdh1 or Cdc20 for APC/C) [81]. The three sub-complexes of the APC/C ligase consist of a catalytic core (with APC2, APC10, or APC11), a scaffolding platform (APC1, APC4, APC5, or APC15), and a substrate recognition module (or tetratricopeptide repeat lobe, TPR) (consisting of APC3, APC6, APC7, APC8, APC12, APC13, or APC16). In addition, CDC20 and CDH1 are coactivators (also considered substrate receptors) essential for the activity of the APC/C ligase [82, 83].

The expression of fourteen APC-encoding genes (_Anapc_) and two coactivator-encoding genes (_Cdc20_ and _Cdh1_) was analysed. _Anapc2_ (100–115 TPM) and _Anapc11_ (~ 49 TPM) were the major genes of the catalytic core. Transcripts of the other component (_Anapc10_) were near the detection level (2–3 TPM) (Fig. 6A). With TPM values ranging from 128 to 107 from E11 to PN1, _Anapc5_ was the predominant gene of the scaffolding platform. _Anapc1_ and _Anapc4_ presented comparably low levels of expression (TPM values of 20–40), whereas _Anapc15_ was expressed at even lower levels (TPM values of 6–12) (Fig. 6A). _Anapc6_ (_Cdc16_) and _Anapc8_ (_Cdc23_) were the most highly expressed genes of the substrate recognition module (Fig. 6A). Taken together, the transcripts of this subgroup had low or moderate abundance, with TPM values ranging from 12 to 64. As shown in Fig. 6A, the genes of the APC/C subgroup displayed nearly constant transcript numbers throughout cortical formation, suggesting basic functions in cell physiology.

Notably, the extremely elevated expression of the _Cdc20_ gene at early stages of cortical development was followed by a sharp decrease on E17. Its TPM values declined from ~ 350–310 on E11–E13 to 44–18 on E17–PN1. Overall, the abundance of _Cdc20_ transcripts was reduced by a factor of ~ 20, showing that this gene exhibited a marked temporal pattern of expression. The high abundance of _Cdc20_ transcripts corresponded to periods of cell production (E11–E13), strongly suggesting a role for _Cdc20_ in cell proliferation. The expression of the other coactivator-encoding gene, _Cdh1_ (_Fzr1_), was not developmentally regulated: constant levels of _Cdh1_ transcripts were found during embryonic development (TPM values of 89–90). Cdh1 proteins are required for neurogenesis in vivo [84]. It seemed, however, that the regulation of coactivator gene _Cdc20_ expression was a central determinant of the functionality of the APC/C E3 Ub ligase in the embryonic cerebral cortex. The APC/C E3 Ub ligase ubiquitinates its substrates in conjunction with a limited set of E2 Ub-conjugating enzymes: UBE2S, UBCH10 (UBE2C) and, to a lesser extent, UBCH5 (UBE2D1) [82, 83]. As shown in Fig. 1D, _Ube2c_ was the most highly expressed of these three E2 enzymes. Interestingly, _Ube2c_ and _Cdc20_ displayed similar patterns of expression (Fig. 6B). The decline in _Cdc20_ expression mirrored the marked repression of _Ube2c_ expression.

B.3. Fanconi anaemia (FA) E3 ligases

The classification of the components of the Fanconi anaemia (FA) complex was established according to [85] and the Fanconi anaemia mutation database (https://www2.rockefeller.edu/fanconi/).
The FA complex is commonly described as a machine recruited to DNA lesions and playing a role in DNA repair. FANCL is the only protein of the FA complex displaying ligase activity. However, no transcripts of its gene (_Fancl_) were found, suggesting poor or no activity under physiological developmental conditions. Of note, _Ube2t_, encoding the E2 working in concert with FA E3 ligases, was also expressed at very low levels, with TPM values decreasing from 17 to 4 between E11 and E17, and no _Ube2t_ transcripts were detected on PN1 (Fig. 1D).

C. U-box RING E3 ligases:

U-box RING E3 enzymes form another prominent class of E3 Ub ligases. They are characterized by a peculiar protein domain named the U-box and are structurally related to the RING finger family [86, 87]. U-box E3 Ub ligases are scaffolds that recruit a Ub-charged E2 and its colocalized substrate. Interestingly, mammalian U-box E3 Ub proteins interact with molecular chaperones or cochaperones such as Hsp90, Hsp70, DnaJc7, EKN1, CRN, and VCP [88]. U-box E3 Ub ligases can be found as monomers (i.e., UBE4) or homodimers (CHIP and PRPF19) [89]. Some U-box E3 Ub proteins have been identified as E4 enzymes due to their involvement in the assembly of poly-Ub chains on substrates that are first ubiquitinated by a non-U-box E3 Ub enzyme.

Nine genes were analysed: _Stub1_ (_Chip_), _Prpf19_ (_Prp19_), _Ube4a_ (_Ufd2b_), _Ube4b_ (_Ufd2a_), _Ppil2_ (_Cyc4_), _Ubox5_ (_Uip5_), _Wdsub1_, _Act1_ (_Traf3ip2_) and _Aff4_ (Fig. 7). Except for _Act1_, transcripts of all U-box E3 Ub-encoding genes were found. No clear developmental regulation pattern was observed for the U-box ligases except for _Prpf19_ (_Prp19_), the second most highly expressed U-box E3 Ub gene. The _Prpf19_ TPM values decreased from 250 to 180 from E11 to PN1, a ~ 30% reduction in transcript abundance during corticogenesis. The functions of the _Prpf19_ gene product are unknown, but it is an essential protein, since mouse _Prpf19_-null mutants show lethality [90].

With TPM values of 300–350, the _Stub1_ (_Chip_) gene was the predominant gene of this group and one of the most highly expressed E3 Ub ligase genes. This high expression highlights its physiological relevance during brain formation and development. The U-box E3 Ub ligase CHIP can tag misfolded or damaged proteins for subsequent proteasomal degradation. A previously performed proteomic analysis identified hundreds of potential CHIP substrates in HEK 293 cells [91]. The very high level of _Stub1_ (_Chip_) expression underscores the physiological importance of CHIP in protein quality control and the clearance of abnormal proteins throughout embryonic development. The UFD2a protein (encoded by _Ube4b_/_Ufd2a_) has been found to be highly abundant in some brain areas, such as the cerebrum and cerebellum, of 8-week-old C57Bl6 mice [92]. Furthermore, immunohistochemical data have indicated that, in the cerebral cortex, the UFD2a protein is localized mainly in the cytoplasm of neurons. Kaneko et al. [92] proposed that UFD2a contributes to the ubiquitination of specific substrates related to neuronal function. The high abundance of UFD2a proteins previously observed in the adult mouse brain contrasts with the low abundance of _Ube4b_ (_Ufd2a_) transcripts in the embryonic brain.

_RBR Ub ligases_ The RBR Ub ligase family includes a few E3 Ub ligases. Fourteen RBR E3 genes were analysed (_Arih1_, _Arih2_, _Ankib1_, _Park2_, _Rbck1_, _Rnf14_, _Rnf19a_, _Rnf19b_, _Rnf31_, _Rnf144a_, _Rnf144b_, _Rnf216_, and _Rnf217_) (Fig. 8).
The numbers of _Park2_ and _Rnf144b_ transcripts were lower than the detection limit. Notably, _Park2_ encodes Parkin, a protein controlling mitophagy via the ubiquitination of mitochondrial proteins. Mitochondria can, however, be recycled via a Ub-independent pathway involving the specific autophagy receptors FUNDC1, BNIP3, and NIX [93]. Interestingly, the RNA-seq dataset indicated that, albeit sometimes at low levels, the _Fundc1_, _Bnip3_ and _Nix_ (_Bnip3l_) genes were all expressed, with TPM values of ~ 10 (_Fundc1_), ~ 11 (_Bnip3_), and ~ 50 (_Nix_/_Bnip3l_). Based on these results, it is proposed that, in the embryonic cerebral cortex, mitophagy is initiated independently of Parkin, via a Ub-independent process. As previously pointed out by [94], most of the data describing the regulation of mitophagy have been obtained with cells overexpressing Parkin and through the use of mitochondrial-depolarizing agents, which may not be relevant under basal conditions of mitochondrial clearance.

_Rnf14_ (ring finger protein 14, also known as _Triad2_), encoding the transcriptional regulator RNF14, was the major RBR E3 gene expressed during cortex development. Its expression was positively regulated, with TPM values increasing from 27 to 113 from E11 to PN1 (Fig. 8A), a more than fourfold increase in _Rnf14_ transcript abundance. RNF14 is an oncoprotein that promotes cell cycle progression and proliferation by inducing cyclin D1 expression [95]. In the developing (nontumorigenic) cerebral cortex, the expression of the cyclin D1 gene (_Ccnd1_) decreased from 333 to 32 TPM from E11 to PN1, a ~ tenfold reduction in _Ccnd1_ transcript abundance. This decrease revealed an inverse relationship between the expression of the _Ccnd1_ and _Rnf14_ genes that contradicts the suggestion that RNF14 exerts a positive effect on cyclin D1 expression, at least under physiological conditions. The biological functions of RNF14 in the brain are unknown; however, our data indicate that RNF14 likely plays important roles in the cerebral cortex, particularly after the cessation of cell production (on E17 and onwards), during the maturation of neurons and the establishment of synaptic networks.

The other major RBR E3 gene in the embryonic cerebral cortex, although expressed at low levels, was _Rbck1_, encoding RanBP-type and C3HC4-type zinc finger containing 1 (HOIL-1 or HOIL-1L). Its TPM values were on the order of 45 at the onset of development and 60 at the latest stage, indicating a modest upregulation during corticogenesis (Fig. 8B). A recent report showed that RBCK1 plays a role in the linear ubiquitin assembly complex (LUBAC) [96], which comprises the adaptor protein SHARPIN and two RBR E3 ligases: HOIP and HOIL-1L. HOIP is the main E3 catalytic centre of LUBAC and is necessary for linear ubiquitination. The gene encoding HOIP (_Rnf31_) was expressed at low levels (TPM values of 18–27) (Fig. 8B). HOIL-1L, the second, minor ligase of LUBAC, exerts a regulatory role in the complex by negatively regulating LUBAC activity [96]. LUBAC is recruited to different protein aggregates associated with neurodegenerative diseases. LUBAC-dependent linear ubiquitination decreases the toxic potential of misfolded protein species and promotes their removal via the proteasome [97]. The linear ubiquitination catalysed by HOIP is antagonized by the DUB OTULIN [97]. Low HOIP and HOIL-1L levels in mice cause early embryonic lethality (on approximately E10.5) [98].
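To make the quantitative argument explicit, the inverse _Ccnd1_/_Rnf14_ relationship discussed above amounts to fold changes of opposite sign between E11 and PN1. The minimal sketch below recomputes this from the mean TPM values quoted in the text; expressing the change as a log2 fold change is a convention adopted for this example, not a statistic reported in the study.

```python
import math

# Mean TPM values quoted in the text for the E11 -> PN1 comparison.
tpm_e11 = {"Rnf14": 27.0, "Ccnd1": 333.0}
tpm_pn1 = {"Rnf14": 113.0, "Ccnd1": 32.0}

for gene in ("Rnf14", "Ccnd1"):
    ratio = tpm_pn1[gene] / tpm_e11[gene]
    log2fc = math.log2(ratio)  # sign encodes the direction of regulation
    trend = "upregulated" if log2fc > 0 else "downregulated"
    print(f"{gene}: {ratio:.2f}-fold, log2FC = {log2fc:+.2f} ({trend})")

# Rnf14: 4.19-fold, log2FC = +2.07 (upregulated)
# Ccnd1: 0.10-fold, log2FC = -3.38 (downregulated) -> opposite signs
```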
The other RBR E3 genes were also expressed at low levels, and their patterns of expression were not developmentally regulated, except for _Rnf19b_, whose number of transcripts increased by a factor of ~ 5 between E11 and PN1, highlighting a putative function in neuronal development that remains to be discovered (Fig. 8B).

**Deubiquitinating enzymes (DUBs)**

This section covers the gene expression of seven families of DUBs: USPs, UCHs, OTUs, MJDs, JAMMs, MINDYs and ZUP1 [3, 4]. DUBs are Ub hydrolases responsible for the deubiquitination process. Additionally, some of them, such as USP5, UCH-L3, USP9X, USP7, and Otulin, participate in the processing of Ub precursors [99].

**_Ub-specific proteases (USPs)_**

With approximately fifty members, USPs constitute the largest subfamily of DUBs [4]. In a recent report analysing the expression of _Usp_ genes in the rat cerebellar cortex, only 32 USP-encoding genes were retained for analysis [100]. In the present study, we first selected fifty-four genes for analysis, but five of these genes (_Usp50_, _39_, _53_, _54_ and _Pan2_) show no catalytic activity [18] and were therefore not analysed further. Ultimately, forty-nine genes were analysed. The transcripts of forty-five _Usp_ genes were quantified, meaning that transcripts of four _Usp_ genes were undetected (_Usp13_, _Usp17_, _Usp18_, and _Usp26_). Large heterogeneity in gene expression was observed. For the sake of clarity, the five most highly expressed _Usp_ genes are shown as a group in Fig. 9A, and the other members, with much lower transcript abundances, are shown in Additional file 1: Fig. S3A–B. Of note, USP9X has been shown to play roles in neurodevelopment. However, only a moderate level of gene expression was observed, with TPM values ranging from 20 to 30 (Additional file 1: Fig. S3A). Furthermore, [101] found an age-related upregulation of USP9X protein expression in the mouse brain, with much higher protein levels in the adult brain, suggesting that USP9X could play important roles postnatally rather than during embryonic development.

With TPM values increasing from ~ 115 to ~ 390 between E11 and PN1, _Usp22_ was the major gene and the most highly upregulated _Usp_ gene (Fig. 9A). It was also one of the five most highly expressed DUB genes throughout corticogenesis. Its expression was developmentally regulated and showed a marked increase on E17. At this point, _Usp22_ was the most highly expressed _Usp_ gene and the second most highly expressed DUB gene after _Uch-l1_ (see below). Our data are in line with previous reports showing a high abundance of USP22 in the mouse embryonic brain [102, 103] and further indicate the specific expression of its gene in the cortex. USP22 proteins are critically required for embryogenesis, since their loss leads to early embryonic lethality (at approximately E10.5) [103, 104]. USP22 proteins interfere with SOX2 and Hes1 activity, as well as that of other targets. _Sox2_ is a pluripotency gene, and _Hes1_ represses the expression of proneural genes, contributing to the regulated maintenance of neural stem/progenitor cells. In embryonic stem cells, an inverse correlation has been identified between SOX2 and USP22 protein levels [105]. Specifically, USP22 occupies the _Sox2_ promoter and represses _Sox2_ transcription [105]. Hes1, which undergoes fast turnover due to its degradation by the proteasome, is deubiquitinated and stabilized by USP22 [102].
We therefore measured the expression of the _Sox2_ and _Hes1_ genes to gain further insight into USP22-dependent regulatory mechanisms during corticogenesis. Interestingly, both genes were highly expressed when _Usp22_ expression was the lowest. In contrast, the profound increase in _Usp22_ expression coincided with a marked reduction in _Sox2_ and _Hes1_ transcript abundance (Fig. 9B). Thus, a clear inverse correlation between _Usp22_ (which promotes neuronal differentiation) and _Sox2_ and _Hes1_ (genes necessary for the maintenance of neural stem/progenitor cells) was identified.

_Usp1_ was another highly expressed _Usp_ gene, for which a high abundance of transcripts was found on E11 and E13 (~ 160–130 TPM), corresponding to periods of intense cell division. _Usp1_ was the second most highly expressed DUB gene on E11. The TPM values were ~ 40–30 on E17 and PN1 (Fig. 9A). This decrease in _Usp1_ expression by E17 indicated that the gene may play a role in proliferation but not in the growth or maturation of neurons. In osteosarcoma cells, USP1 knockdown triggers osteogenic differentiation, whereas USP1 overexpression enhances proliferation, suggesting that, in this cell type, USP1 is involved in the maintenance of a stem cell state [106]. A similar finding was obtained with glioblastoma cells [107]. The transcription of _Usp1_ is regulated in a cell cycle-dependent manner, with transcription peaking during the S phase. The transcriptomic data clearly support the notion that _Usp1_ is highly regulated during embryonic cortical development, showing high mRNA expression levels during the stages of cell division and neurogenesis. Deletion of the _Usp1_ gene has been associated with 80% perinatal lethality, and the surviving Usp1-deficient mice exhibited growth retardation [108].

In addition to _Usp1_ and _Usp22_, _Usp5_, _Usp19_ and _Usp21_ were the other major _Usp_ genes expressed in the embryonic cortical wall (Fig. 9A). However, these genes displayed no clear pattern of developmental expression. Similar to most DUBs, USP19 is a soluble cytosolic protein, but one prominent USP19 isoform possesses a C-terminal transmembrane domain, which enables its translocation to the endoplasmic reticulum (ER). USP19 seems to function in ER-associated degradation (ERAD). ER stress induction upregulates USP19 expression, and its biological relevance has been studied in muscle cells, where it plays a role in metabolic regulation and controls muscle mass [109]. Little is known regarding the neurobiological functions of _Usp19_ and _Usp21_. The _Usp21_ gene was one of the most highly expressed _Usp_ genes during corticogenesis. Previous experiments conducted with embryonic stem cells showed that USP21 proteins control the balance between stem cell self-renewal and differentiation [110]. Notably, _Usp5_ was continuously highly expressed throughout embryonic development. Its protein product, USP5, is primarily located in the cytosol and nucleoplasm, where it recognizes poly-Ub chains not conjugated to target proteins and contributes to maintaining the pool of free Ub monomers by removing Ub from the proximal end of these unanchored chains [111]. USP5, an important contributor to the processing of Ub precursors [99], has been studied extensively in relation to cancer, but this DUB is widely expressed.
For example, USP5 has been shown to play a role in inflammatory and neuropathic pain by regulating the cell surface abundance of the Cav3.2 protein, a T-type voltage-gated Ca2+ channel that plays important roles in nociception [112]. USP5 counterbalances the action of the E3 Ub ligase WWP1 [112]. Our data emphasize that certain components of the Ub system that are generally associated with cancers, such as USP5, are also highly expressed during development in nontumorous tissue.

As shown in Additional file 1: Fig. S3A–B, many _Usp_ genes were expressed in the cortical tissue throughout embryonic cortical development. For 10 _Usp_ genes (_Usp4_, _7_, _9_, _10_, _14_, _24_, _28_, _30_, _36_, and _38_), the abundance of transcripts was nearly constant throughout corticogenesis, indicating that their expression was not developmentally regulated. In addition to _Usp1_, the expression of twelve genes was downregulated: _Usp3_, _8_, _21_, _25_, _37_, _39_, _40_, _44_, _45_, _49_, _51_ and _54_ (Additional file 1: Fig. S3A–B). Notably, _Usp25_ and _Usp44_ were only expressed at the beginning of corticogenesis. The expression of other _Usp_ genes was upregulated, although to moderate levels. Interestingly, the expression of four _Usp_ genes was induced at the end of corticogenesis (_Usp2_, _29_, _43_, and _53_), with no transcripts detected before E17. This finding points to a potential role of these DUBs in neuronal growth and the establishment of neural circuits, whereas the _Usp25_ and _Usp44_ gene products exert their biological effects during the neurogenesis period.

**_Ub carboxyl-terminal hydrolases (UCHs)_**

Four _Uch_ genes, _Uch-l1_, _Uch-l3_, _Uch-l5_ and _Bap1_ (BRCA1-associated protein 1), were expressed during corticogenesis, although at considerably different levels (Fig. 9C). With TPM values ranging from ~ 155 (on E11) to ~ 550 (on PN1), _Uch-l1_ clearly showed the highest expression levels in this subfamily, at least at the two latest stages. Its expression was markedly upregulated, with the abundance of transcripts increasing by factors of 3.3 and 3.5 on E17 and PN1, respectively, compared to the abundance on E11. This observation is in line with the fact that UCH-L1 (also named PGP 9.5) is one of the most abundant brain proteins, representing up to 1–5% of total soluble brain proteins [113]. Isolated from brain extracts, UCH-L1 was originally described as a neuronal marker [113]. In a previous study of the brain, _Uch-l1_ mRNA was detected at early stages of embryonic development [114] and was found in progenitor cells and neurons [115]. UCH-L1 has been postulated to facilitate neurogenesis and determine the morphology of progenitor cells [115]. The precise roles UCH-L1 plays in neuronal physiology are, however, poorly understood, but UCH-L1 dysfunction has been associated with several age-related neurodegenerative processes, such as Alzheimer's and Parkinson's diseases [116].

A high and constant abundance of _Bap1_ transcripts was observed (TPM values of ~ 200), with no evidence of developmental regulation. _Bap1_ was the most highly expressed _Uch_ gene on E11–E13, whereas _Uch-l1_ was the major gene expressed at the end of corticogenesis (E17–PN1) (Fig. 9C). The protein BAP1 was originally described as a nuclear DUB exhibiting tumour-suppressing properties. It regulates transcription and the DNA repair response. Additionally, BAP1 modulates intracellular Ca2+ signalling by deubiquitinating (and stabilizing) inositol 1,4,5-trisphosphate (IP3) receptors, prominent Ca2+ release channels of the ER [117].
Hence, BAP1 displays a basal prosurvival function by inhibiting the unfolded protein response induced by glucose deprivation [118]. Our data, together with results found in the literature, suggest that the protein product of _Bap1_, a highly expressed gene, plays important roles during the production, survival and differentiation of neural cells in the cortical wall during embryonic development.

**_MJD_**

The TPM values of the MJD genes _Atxn3_, _Josd1_ and _Josd2_ [4, 119] ranged from 4 to 60, revealing low to moderate transcript abundance (Fig. 9D). _Josd2_ was the main MJD gene. Its expression was repressed throughout cortical development.

**_Otubain proteases (OTUs)_**

The analysis encompassed fifteen genes, and for two of them, no transcript was found: _Otud6a_ and _Otud7a_ (_Cezanne2_). In addition to the _A20_ gene (_Tnfaip3_), which was expressed exclusively (and at a low level) on E17 and PN1, all the other _Otu_ genes were expressed at low to moderate levels, on the order of 5 to 30 TPM, at all time points, with no clear pattern of developmental regulation (Fig. 10A), except for _Otud1_: its transcript abundance increased nearly sixfold from E11 to PN1 (TPM values ranging from 4 to 23) (Fig. 10A). The gene _Otub1_ was the only member of this family showing relatively high levels of expression (TPM values of 120–140). OTUB1 has been described as one of the most abundant DUBs in cells, with ubiquitous tissue expression [18]. To date, the neuronal functions of _Otub1_ have been poorly characterized. OTUB1 is found in the brain and is expressed in neurons but not in microglia or astrocytes [120]. OTUB1 attenuates the apoptosis of neuronal cells after intracerebral haemorrhage [120]. Moreover, it is co-enriched with α-synuclein [121], the major component of Lewy bodies, which constitute a hallmark of Parkinson's disease. The pathogenicity of OTUB1 has been underscored by [122], who showed that OTUB1 is an amyloidogenic protein that could contribute to the development of Parkinson's disease.

**_JAB1/MPN/MOV34 metalloproteases (JAMMs)_**

First, we focused our analysis on the following seven JAMM genes: _Cops5_ (_Csn5_), _Psmd14_, _Brcc3_, _Mpnd_, _Mysm1_, _Stambp_ (_Amsh_), and _Stambpl1_, which all encode DUBs with enzymatic activity. These genes were all expressed, except _Stambpl1_ (Fig. 10B). _Cops5_ and _Mpnd_ were the major genes of this group, with TPM values of ~ 60 and ~ 70–90, respectively. For the other members, namely, _Psmd14_, _Brcc3_, _Mysm1_, and _Stambp_ (_Amsh_), a low but continuous abundance of transcripts was observed (< 15 TPM) (Fig. 10B); however, among these genes, _Stambp_ (_Amsh_) was strongly and positively regulated, as indicated by a fourfold increase in transcript abundance between E11 and PN1 (from 3 to 14 TPM). Many JAMMs fail to display catalytic activity and are thus classified as pseudoenzymes [4]. The following pseudoenzyme genes were selected for analysis: _Cops6_ (_Csn6_), _Eif3f_, _Eif3h_, _Prpf8_, and _Psmd7_. As shown in Fig. 10C, they were all highly expressed. For instance, the TPM values of the three major genes _Cops6_, _Eif3f_, and _Prpf8_ were on the order of 190–200. These transcripts were thus 3- to 10-fold more abundant than the transcripts of the JAMM genes _Cops5_, _Psmd14_, _Brcc3_, _Mpnd_, _Mysm1_, or _Stambp_.
This finding indicates that the protein products of the _Cops6_, _Eif3f_, and _Prpf8_ genes may exert important nonenzymatic biological functions in the cells of the cortical wall during embryonic development.

**_Motif-interacting with Ub-containing novel DUB family (MINDY)_**

MINDY is a family of DUBs with four members: FAM63A (MINDY-1), FAM63B (MINDY-2), FAM188A (MINDY-3) and FAM188B (MINDY-4) [123]. The expression of these genes was investigated (Fig. 10D), and the TPM values were found to be on the order of ~ 2–10 for _Fam63b_ and _Fam188a_ and ~ 20–30 for _Fam63a_. No transcript of the _Fam188b_ gene was found. Compared to the other DUB families, this group of genes showed the lowest transcript abundance. The neuronal functions of the members of the MINDY family are currently unknown.

**_ZUP1_**

ZUP1 (or ZUFSP, zinc finger with UFM1-specific peptidase domain) has been identified as a seventh family of human DUBs [124]. The murine _Zufsp_ gene was expressed at extremely low levels (TPM values of 2–4, not shown). Nothing is known about the biological roles played by _Zufsp_ in the rodent brain. In humans, the protein ZUFSP, which is mainly localized in the nucleus, is thought to be a putative DNA repair and/or replication factor involved in Ub signalling at DNA lesions [124].

**Conclusions**

The contribution of the Ub system has been studied using various lines of embryonic stem cells and their differentiation into neural precursor cells. In contrast, in this study, no cell lines were employed, and data were extracted from an RNA-seq database [13], allowing us to detect transcriptomic changes in the core components of the Ub system during the formation of the cerebral cortex in mice. This strategy permitted us to describe the transcriptomic landscape of the whole tissue. This approach also revealed the large repertoire of functional components of the Ub system in embryogenesis. One important result is that the expression of Ub genes, notably _Ubb_ and _Rps27a_, was extremely high: these two genes were among the 100 most highly expressed genes of the cortical wall. Our findings illustrate that the intricate ubiquitination network was governed by the E1 gene _Uba1_, which was 20- to 90-fold more highly expressed than the other E1 gene, _Uba6_. The most prominent E2 gene was _Ube2m_, encoding a Nedd8-conjugating E2 enzyme. The major Ub-conjugating E2 gene was _Ube2c_, the expression of which was profoundly downregulated during embryonic development. A large diversity of E3 Ub ligase gene transcripts was detected, with distinct temporal patterns of expression. _Pja1_ and _Trim67_ (RING E3-encoding genes), _Stub1_ (a U-box E3-encoding gene), and _Nedd4_ (a HECT E3-encoding gene) were the most prominent E3 genes. A previous report analysed the expression of thirty DUB-encoding genes in the rat cerebellum, out of approximately one hundred DUB genes [100], and thirty DUBs have also been independently described as being involved in the nervous system [119]. In this study, an extensive genome-wide gene expression analysis of the core components of the Ub machinery showed that more than 80 DUB genes were expressed during the formation of the cerebral cortex. This outcome provides a comprehensive survey of the large diversity of DUB gene expression and further indicates some important candidate products that may play major roles in cortex development. For instance, _Uch-l1_ was one of the most highly expressed genes. It was also positively regulated during corticogenesis.
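The rankings summarized in these conclusions can, in principle, be recomputed from the public dataset (GEO accession GSE154677; see "Availability of data and materials"). The sketch below assumes the processed expression matrix has been exported locally as a tab-separated file with one row per gene and one mean-TPM column per stage; the file name and column names are placeholders, not those of the actual GEO submission.

```python
import pandas as pd

# Assumed layout: a local TSV with a 'gene' column and mean-TPM columns
# E11, E13, E17 and PN1. Both the file name and the column names are
# hypothetical; adapt them to the actual GSE154677 files.
tpm = pd.read_csv("GSE154677_mean_tpm.tsv", sep="\t", index_col="gene")
stages = ["E11", "E13", "E17", "PN1"]

# Are the Ub genes discussed above among the 100 most highly expressed genes?
overall = tpm[stages].mean(axis=1).sort_values(ascending=False)
print(overall.head(100).index.intersection(["Ubb", "Rps27a"]))

# Flag the most strongly regulated genes (e.g. Uch-l1, Ube2c) by the
# PN1/E11 ratio; genes undetected on E11 would need separate handling.
fold_change = (tpm["PN1"] / tpm["E11"]).sort_values(ascending=False)
print(fold_change.head(10))
```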
This study was based on a bulk transcriptomic analysis that did not discriminate between cell types or cell lineages within the whole tissue sample. Moreover, certain cells, such as neurons, are highly polarized, with several subcellular compartments (i.e., dendrites, cell body, axon) with distinct biological functions. Cellular polarity requires precise spatial targeting of the factors participating in the Ub pathway. In most (if not all) instances, the mechanisms governing the spatial targeting of Ub components are unknown. Despite these limitations, this study provides novel insights into the complex transcriptomic changes occurring during cerebral cortex formation. One point of interest of the present work is the identification of several components of the Ub system known to be overexpressed in cancers that, under physiological conditions, correspond to developmental genes highly expressed in the embryonic cerebral cortex, unrelated to tumour formation or progression: for instance, _Ube2c_ (an E2 gene) and _Trim28_, _Trim32_, and _Trim59_ (E3 genes). The data collected may be used as a starting point for future functional studies of the rodent brain.

**Supplementary Information**

The online version contains supplementary material available at https://doi.org/10.1186/s13041-022-00958-z.

**Additional file 1.** Supplementary Figure 1: expression of the minor genes (TPM values < 40) encoding Ub- (A) and Ub-like (B) protein-conjugating E2 enzymes. Supplementary Figure 2: expression of the minor _Fbxw_ (A), _Fbxl_ (B) and _Fbxo_ (C) genes (TPM values < 40). Supplementary Figure 3: expression of the minor _Usp_ genes.

**Acknowledgements**

We thank Dr Helen Walden for her help with the Fanconi anaemia complex and Dr Claudio Joazeiro for his comments on a preliminary version of this manuscript. We also wish to thank Dr Sophie Lemoine and Dr Corinne Blugeon for their help with the transcriptomic analysis.

**Author contributions**

Data curation and analysis, AB; writing, AB; review and editing, AB, MOF. Both authors read and approved the final manuscript.

**Funding**

The work received support from the Centre National de la Recherche Scientifique (CNRS), the Commissariat à l'Energie Atomique et aux Energies Alternatives (CEA), and the Université Grenoble Alpes (UGA). This project received funding from GRAL, a programme of the Chemistry Biology Health (CBH) Graduate School of Université Grenoble Alpes (ANR-17-EURE-0003).

**Availability of data and materials**

The complete dataset is freely accessible in the GEO repository under accession number GSE154677.

**Declarations**

**Ethics approval and consent to participate**

Not applicable.

**Consent for publication**

Not applicable.

**Competing interests**

The authors declare that they have no competing interests.

**Author details**

1 Université Grenoble Alpes, Inserm, CEA, UMR 1292, 38000 Grenoble, France. 2 Genetics and Chemogenomics Lab, Building C3, CEA, 17 rue des Martyrs, 38054 Grenoble Cedex 9, France.

Received: 3 June 2022 Accepted: 2 August 2022

**References**

1. Gross S, Rahal R, Stransky N, Lengauer C, Hoeflich KP. Targeting cancer with kinase inhibitors. J Clin Investig. 2015;125(5):1780–9.
2. Komander D, Rape M. The ubiquitin code. Annu Rev Biochem. 2012;81:203–29.
3. Komander D, Clague MJ, Urbe S. Breaking the chains: structure and function of the deubiquitinases. Nat Rev Mol Cell Biol. 2009;10(8):550–63.
4. Clague MJ, Urbe S, Komander D. Breaking the chains: deubiquitylating enzyme specificity begets function. Nat Rev Mol Cell Biol. 2019;20(6):338–52.
5. Damgaard RB. The ubiquitin system: from cell signalling to disease biology and new therapeutic opportunities. Cell Death Differ. 2021;28(2):423–6.
6. Na CH, Jones DR, Yang Y, Wang X, Xu Y, Peng J. Synaptic protein ubiquitination in rat brain revealed by antibody-based ubiquitome analysis. J Proteome Res. 2012;11(9):4722–32.
7. DiAntonio A, Hicke L. Ubiquitin-dependent regulation of the synapse. Annu Rev Neurosci. 2004;27:223–46.
8. Yamada T, Yang Y, Bonni A. Spatial organization of ubiquitin ligase pathways orchestrates neuronal connectivity. Trends Neurosci. 2013;36(4):218–26.
9. Kawabe H, Brose N. The role of ubiquitylation in nerve cell development. Nat Rev Neurosci. 2011;12(5):251–68.
10. Todi SV, Paulson HL. Balancing act: deubiquitinating enzymes in the nervous system. Trends Neurosci. 2011;34(7):370–82.
11. Lee SH, Choi JH, Lee N, Lee HR, Kim JI, Yu NK, Choi SL, Lee SH, Kim H, Kaang BK. Synaptic protein degradation underlies destabilization of retrieved fear memory. Science. 2008;319(5867):1253–6.
12. Sewduth RN, Baietti MF, Sablina AA. Cracking the monoubiquitin code of genetic diseases. Int J Mol Sci. 2020;21(9):3036.
13. Hasna J, Bohic S, Lemoine S, Blugeon C, Bouron A. Zinc uptake and storage during the formation of the cerebral cortex in mice. Mol Neurobiol. 2019;56(10):6928–40.
14. Jabaudon D. Fate and freedom in developing neocortical circuits. Nat Commun. 2017;8:16042.
15. Kriegstein AR, Noctor SC. Patterns of neuronal migration in the embryonic cortex. Trends Neurosci. 2004;27(7):392–9.
16. Hallengren J, Chen PC, Wilson SM. Neuronal ubiquitin homeostasis. Cell Biochem Biophys. 2013;67(1):67–73.
17. Ryu KY, Garza JC, Lu XY, Barsh GS, Kopito RR. Hypothalamic neurodegeneration and adult-onset obesity in mice lacking the Ubb polyubiquitin gene. Proc Natl Acad Sci U S A. 2008;105(10):4016–21.
18. Clague MJ, Heride C, Urbe S. The demographics of the ubiquitin system. Trends Cell Biol. 2015;25(7):417–26.
19. Park CW, Ryu KY. Cellular ubiquitin pool dynamics and homeostasis. BMB Rep. 2014;47(9):475–82.
20. Bianchi M, Giacomini E, Crinelli R, Radici L, Carloni E, Magnani M. Dynamic transcription of ubiquitin genes under basal and stressful conditions and new insights into the multiple UBC transcript variants. Gene. 2015;573(1):100–9.
21. Ryu HW, Park CW, Ryu KY. Restoration of cellular ubiquitin reverses impairments in neuronal development caused by disruption of the polyubiquitin gene Ubb. Biochem Biophys Res Commun. 2014;453(3):443–8.
22. Dubois ML, Meller A, Samandi S, Brunelle M, Frion J, Brunet MA, Toupin A, Beaudoin MC, Jacques JF, Levesque D, et al. UBB pseudogene 4 encodes functional ubiquitin variants. Nat Commun. 2020;11(1).
23. Cappadocia L, Lima CD. Ubiquitin-like protein conjugation: structures, chemistry, and mechanism. Chem Rev. 2018;118(3):889–918.
24. Schulman BA, Harper JW. Ubiquitin-like protein activation by E1 enzymes: the apex for downstream signalling pathways. Nat Rev Mol Cell Biol. 2009;10(5):319–31.
25. Wagner GP, Kin K, Lynch VJ. A model based criterion for gene expression calls using RNA-seq data. Theory Biosci. 2013;132(3):159–64.
26. Barghout SH, Schimmer AD. E1 enzymes as therapeutic targets in cancer. Pharmacol Rev. 2021;73(1):1–58.
27. Lambert-Smith IA, Saunders DN, Yerbury JJ. The pivotal role of ubiquitin-activating enzyme E1 (UBA1) in neuronal health and neurodegeneration. Int J Biochem Cell Biol. 2020;123:105746.
28. Groen EJN, Gillingwater TH. UBA1: at the crossroads of ubiquitin homeostasis and neurodegeneration. Trends Mol Med. 2015;21(10):622–32.
29. Lee PC, Dodart JC, Aron L, Finley LW, Bronson RT, Haigis MC, Yankner BA, Harper JW. Altered social behavior and neuronal development in mice lacking the Uba6-Use1 ubiquitin transfer system. Mol Cell. 2013;50(2):172–84.
30. van Wijk SJL, Timmers HTM. The family of ubiquitin-conjugating enzymes (E2s): deciding between life and death of proteins. FASEB J. 2010;24(4):981–93.
31. Ye YH, Rape M. Building ubiquitin chains: E2 enzymes at work. Nat Rev Mol Cell Biol. 2009;10(11):755–64.
32. van Wijk SJ, de Vries SJ, Kemmeren P, Huang A, Boelens R, Bonvin AM, Timmers HT. A comprehensive framework of E2-RING E3 interactions of the human ubiquitin-proteasome system. Mol Syst Biol. 2009;5:295.
33. Xie CL, Powell C, Yao M, Wu JM, Dong QH. Ubiquitin-conjugating enzyme E2C: a potential cancer biomarker. Int J Biochem Cell Biol. 2014;47:113–7.
34. Wan C, Chen J, Hu B, Zou H, Li A, Guo A, Jiang J. Downregulation of UBE2Q1 is associated with neuronal apoptosis in rat brain cortex following traumatic brain injury. J Neurosci Res. 2014;92(1):1–12.
35. Koerver L, Papadopoulos C, Liu B, Kravic B, Rota G, Brecht L, Veenendaal T, Polajnar M, Bluemke A, Ehrmann M, et al. The ubiquitin-conjugating enzyme UBE2QL1 coordinates lysophagy in response to endolysosomal damage. EMBO Rep. 2019;20(10):e48014.
36. Nuber U, Schwarz S, Kaiser P, Schneider R, Scheffner M. Cloning of human ubiquitin-conjugating enzymes UbcH6 and UbcH7 (E2-F1) and characterization of their interaction with E6-AP and RSP5. J Biol Chem. 1996;271(5):2795–800.
37. Kishino T, Lalande M, Wagstaff J. UBE3A/E6-AP mutations cause Angelman syndrome. Nat Genet. 1997;15(1):70–3.
38. Zheng N, Shabek N. Ubiquitin ligases: structure, function, and regulation. Annu Rev Biochem. 2017;86:129–57.
39. Morreale FE, Walden H. Types of ubiquitin ligases. Cell. 2016;165(1):248.
40. Dove KK, Klevit RE. RING-between-RING E3 ligases: emerging themes amid the variations. J Mol Biol. 2017;429(22):3363–75.
41. Deshaies RJ, Joazeiro CA. RING domain E3 ubiquitin ligases. Annu Rev Biochem. 2009;78:399–434.
42. Wang Y, Argiles-Castillo D, Kane EI, Zhou A, Spratt DE. HECT E3 ubiquitin ligases: emerging insights into their biological roles and disease relevance. J Cell Sci. 2020;133(7).
43. Kumar S, Tomooka Y, Noda M. Identification of a set of genes with developmentally down-regulated expression in the mouse brain. Biochem Biophys Res Commun. 1992;185(3):1155–61.
44. Donovan P, Poronnik P. Nedd4 and Nedd4-2: ubiquitin ligases at work in the neuron. Int J Biochem Cell Biol. 2013;45(3):706–10.
45. Zhao X, Heng JI, Guardavaccaro D, Jiang R, Pagano M, Guillemot F, Iavarone A, Lasorella A. The HECT-domain ubiquitin ligase Huwe1 controls neural differentiation and proliferation by destabilizing the N-Myc oncoprotein. Nat Cell Biol. 2008;10(6):643–53.
46. Hou X, Zhang W, Xiao Z, Gan H, Lin X, Liao S, Han C. Mining and characterization of ubiquitin E3 ligases expressed in the mouse testis. BMC Genomics. 2012;13:495.
47. Li W, Bengtson MH, Ulbrich A, Matsuda A, Reddy VA, Orth A, Chanda SK, Batalov S, Joazeiro CA. Genome-wide and functional annotation of human E3 ubiquitin ligases identifies MULAN, a mitochondrial E3 that regulates the organelle's dynamics and signaling. PLoS ONE. 2008;3(1):e1487.
48. Wang L, Sun X, He J, Liu Z. Functions and molecular mechanisms of Deltex family ubiquitin E3 ligases in development and disease. Front Cell Dev Biol. 2021;9:706997.
49. Lussier MP, Herring BE, Nasu-Nishimura Y, Neutzner A, Karbowski M, Youle RJ, Nicoll RA, Roche KW. Ubiquitin ligase RNF167 regulates AMPA receptor-mediated synaptic transmission. Proc Natl Acad Sci U S A. 2012;109(47):19426–31.
50. Ghilarducci K, Cabana VC, Desroches C, Chabi K, Bourgault S, Cappadocia L, Lussier MP. Functional interaction of ubiquitin ligase RNF167 with UBE2D1 and UBE2N promotes ubiquitination of AMPA receptor. FEBS J. 2021;288(16):4849–68.
51. Benini M, Fortuni S, Condo I, Alfedi G, Malisan F, Toschi N, Serio D, Massaro DS, Arcuri G, Testi R, et al. E3 ligase RNF126 directly ubiquitinates frataxin, promoting its degradation: identification of a potential therapeutic target for Friedreich ataxia. Cell Rep. 2017;18(8):2007–17.
52. Gray TA, Hernandez L, Carey AH, Schaldach MA, Smithwick MJ, Rus K, Marshall Graves JA, Stewart CL, Nicholls RD. The ancient source of a distinct gene family encoding proteins featuring RING and C(3)H zinc-finger motifs with abundant expression in developing brain and nervous system. Genomics. 2000;66(1):76–86.
53. Miroci H, Schob C, Kindler S, Olschlager-Schutt J, Fehr S, Jungenitz T, Schwarzacher SW, Bagni C, Mohr E. Makorin ring zinc finger protein 1 (MKRN1), a novel poly(A)-binding protein-interacting protein, stimulates translation in nerve cells. J Biol Chem. 2012;287(2):1322–34.
54. Yang PH, Cheung WK, Peng Y, He ML, Wu GQ, Xie D, Jiang BH, Huang QH, Chen Z, Lin MC, et al. Makorin-2 is a neurogenesis inhibitor downstream of phosphatidylinositol 3-kinase/Akt (PI3K/Akt) signal. J Biol Chem. 2008;283(13):8486–95.
55. Ohmura-Hoshino M, Goto E, Matsuki Y, Aoki M, Mito M, Uematsu M, Hotta H, Ishido S. A novel family of membrane-bound E3 ubiquitin ligases. J Biochem. 2006;140(2):147–54.
56. De Angelis RF, De Gassart A, Pforr C, Cano F, N'Guessan P, Combes A, Camossetto V, Lehner PJ, Pierre P, Gatti E. MARCH9-mediated ubiquitination regulates MHC I export from the TGN. Immunol Cell Biol. 2017;95(9):753–64.
57. Lin H, Li S, Shu HB. The membrane-associated MARCH E3 ligase family: emerging roles in immune regulation. Front Immunol. 2019;10:1751.
58. Valnegri P, Huang J, Yamada T, Yang Y, Mejia LA, Cho HY, Oldenborg A, Bonni A. RNF8/UBC13 ubiquitin signaling suppresses synapse formation in the mammalian brain. Nat Commun. 2017;8(1):1271.
59. Cubillos-Rojas M, Schneider T, Bartrons R, Ventura F, Rosa JL. NEURL4 regulates the transcriptional activity of tumor suppressor protein p53 by modulating its oligomerization. Oncotarget. 2017;8(37):61824–36.
60. Shin J, Mishra V, Glasgow E, Zaidi S, Chen J, Ohshiro K, Chitti B, Kapadia AA, Rana N, Mishra L, et al. PRAJA is overexpressed in glioblastoma and contributes to neural precursor development. Genes Cancer. 2017;8(7–8):640–9.
61. Kalkan T, Iwasaki Y, Park CY, Thomsen GH. Tumor necrosis factor-receptor-associated factor-4 is a positive regulator of transforming growth factor-beta signaling that affects neural crest formation. Mol Biol Cell. 2009;20(14):3436–50.
62. Blaise S, Kneib M, Rousseau A, Gambino F, Chenard MP, Messadeq N, Muckenstrum M, Alpy F, Tomasetto C, Humeau Y, et al. In vivo evidence that TRAF4 is required for central nervous system myelin homeostasis. PLoS ONE. 2012;7(2):e30917.
63. Boyer NP, Monkiewicz C, Menon S, Moy SS, Gupton SL. Mammalian TRIM67 functions in brain development and behavior. eNeuro. 2018;5(3).
64. Brattas PL, Jonsson ME, Fasching L, Nelander Wahlestedt J, Shahsavani M, Falk R, Falk A, Jern P, Parmar M, Jakobsson J. TRIM28 controls a gene regulatory network based on endogenous retroviruses in human neural progenitor cells. Cell Rep. 2017;18(1):1–11.
65. Liang Q, Deng H, Li X, Wu X, Tang Q, Chang TH, Peng H, Rauscher FJ 3rd, Ozato K, Zhu F. Tripartite motif-containing protein 28 is a small ubiquitin-related modifier E3 ligase and negative regulator of IFN regulatory factor 7. J Immunol. 2011;187(9):4754–63.
66. Hillje AL, Worlitzer MM, Palm T, Schwamborn JC. Neural stem cells maintain their stemness through protein kinase C zeta-mediated inhibition of TRIM32. Stem Cells. 2011;29(9):1437–47.
67. Zhao X, Liu Q, Du B, Li P, Cui Q, Han X, Du B, Yan D, Zhu X. A novel accessory molecule Trim59 involved in cytotoxicity of BCG-activated macrophages. Mol Cells. 2012;34(3):263–70.
68. Hatakeyama S. TRIM family proteins: roles in autophagy, immunity, and carcinogenesis. Trends Biochem Sci. 2017;42(4):297–311.
69. Sarikas A, Hartmann T, Pan ZQ. The cullin protein family. Genome Biol. 2011;12(4):220.
70. Xu J, Zhang Z, Qian M, Wang S, Qiu W, Chen Z, Sun Z, Xiong Y, Wang C, Sun X, et al. Cullin-7 (CUL7) is overexpressed in glioma cells and promotes tumorigenesis via NF-kappaB activation. J Exp Clin Cancer Res. 2020;39(1):59.
71. Shi L, Du D, Peng Y, Liu J, Long J. The functional analysis of Cullin 7 E3 ubiquitin ligases in cancer. Oncogenesis. 2020;9(10):98.
72. Litterman N, Ikeuchi Y, Gallardo G, O'Connell BC, Sowa ME, Gygi SP, Harper JW, Bonni A. An OBSL1-Cul7Fbxw8 ubiquitin ligase signaling mechanism regulates Golgi morphology and dendrite patterning. PLoS Biol. 2011;9(5):e1001060.
73. Hsu PH, Ma YT, Fang YC, Huang JJ, Gan YL, Chang PT, Jow GM, Tang CY, Jeng CJ. Cullin 7 mediates proteasomal and lysosomal degradations of rat Eag1 potassium channels. Sci Rep. 2017;7:40825.
74. Kipreos ET, Pagano M. The F-box protein family. Genome Biol. 2000;1(5):REVIEWS3002.
75. Puklowski A, Homsi Y, Keller D, May M, Chauhan S, Kossatz U, Grunwald V, Kubicka S, Pich A, Manns MP, et al. The SCF-FBXW5 E3-ubiquitin ligase is regulated by PLK4 and targets HsSAS-6 to control centrosome duplication. Nat Cell Biol. 2011;13(8):1004–9.
76. Chen F, Zhang C, Wu H, Ma Y, Luo X, Gong X, Jiang F, Gui Y, Zhang H, Lu F. The E3 ubiquitin ligase SCF(FBXL14) complex stimulates neuronal differentiation by targeting the Notch signaling factor HES1 for proteolysis. J Biol Chem. 2017;292(49):20100–12.
77. Uddin S, Bhat AA, Krishnankutty R, Mir F, Kulinski M, Mohammad RM. Involvement of F-BOX proteins in progression and development of human malignancies. Semin Cancer Biol. 2016;36:18–32.
78. Saiga T, Fukuda T, Matsumoto M, Tada H, Okano HJ, Okano H, Nakayama KI. Fbxo45 forms a novel ubiquitin ligase complex and is required for neuronal development. Mol Cell Biol. 2009;29(13):3529–43.
79. Tada H, Okano HJ, Takagi H, Shibata S, Yao I, Matsumoto M, Saiga T, Nakayama KI, Kashima H, Takahashi T, et al. Fbxo45, a novel ubiquitin ligase, regulates synaptic activity. J Biol Chem. 2010;285(6):3840–9.
80. Zhou W, Wei W, Sun Y. Genetically engineered mouse models for functional studies of SKP1-CUL1-F-box-protein (SCF) E3 ubiquitin ligases. Cell Res. 2013;23(5):599–619.
81. Fasanaro P, Capogrossi MC, Martelli F. Regulation of the endothelial cell cycle by the ubiquitin-proteasome system. Cardiovasc Res. 2010;85(2):272–80.
82. Yamano H. APC/C: current understanding and future perspectives. F1000Research. 2019;8:725.
APC/C: current understanding and future perspectives. F1000Research. 2019;8:725. 83. Schrock MS, Stromberg BR, Scarberry L, Summers MK. APC/C ubiquitin ligase: functions and mechanisms in tumorigenesis. Semin Cancer Biol. 2020;67(Pt 2):80–91. 84. Delgado-Esteban M, Garcia-Higuera I, Maestre C, Moreno S, Almeida A. APC/C-Cdh1 coordinates neurogenesis and cortical size during devel‑ opment. Nat Commun. 2013;4:2879. 85. Walden H, Deans AJ. The Fanconi anemia DNA repair pathway: structural and functional insights into a complex disorder. Annu Rev Biophys. 2014;43:257–78. 86. Hatakeyama S, Nakayama KI. U-box proteins as a new family of ubiqui‑ tin ligases. Biochem Bioph Res Co. 2003;302(4):635–45. 87. Cyr DM, Hohfeld J, Patterson C. Protein quality control: U-box containing E3 ubiquitin ligases join the fold. Trends Biochem Sci. 2002;27(7):368–75. 88. Hatakeyama S, Matsumoto M, Yada M, Nakayama KI. Interaction of U-box-type ubiquitin-protein ligases (E3s) with molecular chaperones. Genes Cells. 2004;9(6):533–48. 89. Nordquist KA, Dimitrova YN, Brzovic PS, Ridenour WB, Munro KA, Soss SE, Caprioli RM, Klevit RE, Chazin WJ. Structural and functional charac‑ terization of the monomeric U-box domain from E4B. Biochemistry. 2010;49(2):347–55. 90. Marin I. Ancient origin of animal U-box ubiquitin ligases. BMC Evol Biol. 2010;10:331. 91. Bhuripanyo K, Wang YY, Liu XP, Zhou L, Liu RC, Duong D, Zhao B, Bi YT, Zhou H, Chen G et al. Identifying the substrate proteins of U-box E3s E4B and CHIP by orthogonal ubiquitin transfer. Sci Adv. 2018; 4(1). 92. Kaneko C, Hatakeyama S, Matsumoto M, Yada M, Nakayama K, Nakayama KI. Characterization of the mouse gene for the U-box-type ubiquitin lipase UFD2a. Biochem Bioph Res Co. 2003;300(2):297–304. 93. Grumati P, Dikic I. Ubiquitin signaling and autophagy. J Biol Chem. 2018;293(15):5404–13. 94. Jacomin AC, Taillebourg E, Fauvarque MO. Deubiquitinating enzymes related to autophagy: new therapeutic opportunities? Cells. 2018; 7(8). 95. Wang P, Dai X, Jiang W, Li Y, Wei W. RBR E3 ubiquitin ligases in tumori‑ genesis. Semin Cancer Biol. 2020;67(Pt 2):131–44. 96. Fuseya Y, Fujita H, Kim M, Ohtake F, Nishide A, Sasaki K, Saeki Y, Tanaka K, Takahashi R, Iwai K. The HOIL-1L ligase modulates immune signal‑ ling and cell death via monoubiquitination of LUBAC. Nat Cell Biol. 2020;22(6):663–73. 97. van Well EM, Bader V, Patra M, Sanchez-Vicente A, Meschede J, Furth‑ mann N, Schnack C, Blusch A, Longworth J, Petrasch-Parwez E et al. A protein quality control pathway regulated by linear ubiquitination. EMBO J. 2019; 38(9). 98. Peltzer N, Darding M, Montinaro A, Draber P, Draberova H, Kupka S, Rieser E, Fisher A, Hutchinson C, Taraborrelli L, et al. LUBAC is essential for embryogenesis by preventing cell death and enabling haemat‑ opoiesis. Nature. 2018;557(7703):112–7. 99. Grou CP, Pinto MP, Mendes AV, Domingues P, Azevedo JE. The de novo synthesis of ubiquitin: identification of deubiquitinases acting on ubiquitin precursors. Sci Rep. 2015;5:12836. 100. Anckar J, Bonni A. Regulation of neuronal morphogenesis and posi‑ tioning by ubiquitin-specific proteases in the cerebellum. PLoS ONE. 2015;10(1): e0117076. 101. Xu J. Age-related changes in Usp9x protein expression and DNA meth‑ ylation in mouse brain. Brain Res Mol Brain Res. 2005;140(1–2):17–24. 102. Kobayashi T, Iwamoto Y, Takashima K, Isomura A, Kosodo Y, Kawakami K, Nishioka T, Kaibuchi K, Kageyama R. Deubiquitinating enzymes regulate Hes1 stability and neuronal differentiation. FEBS J. 2015;282(13):2411–23. 103. 
Koutelou E, Wang L, Schibler AC, Chao HP, Kuang X, Lin K, Lu Y, Shen J, Jeter CR, Salinger A et al. USP22 controls multiple signaling pathways that are essential for vasculature formation in the mouse placenta. Development. 2019; 146(4). 104. Lin Z, Yang H, Kong Q, Li J, Lee SM, Gao B, Dong H, Wei J, Song J, Zhang DD, et al. USP22 antagonizes p53 transcriptional activation by deubiq‑ uitinating Sirt1 to suppress cell apoptosis and is required for mouse embryonic development. Mol Cell. 2012;46(4):484–94. 105. Sussman RT, Stanek TJ, Esteso P, Gearhart JD, Knudsen KE, McMahon SB. The epigenetic modifier ubiquitin-specific protease 22 (USP22) regulates embryonic stem cell differentiation via transcriptional repression of sex-determining region Y-box 2 (SOX2). J Biol Chem. 2013;288(33):24234–46. 106. Williams SA, Maecker HL, French DM, Liu J, Gregg A, Silverstein LB, Cao TC, Carano RA, Dixit VM. USP1 deubiquitinates ID proteins to preserve a mesenchymal stem cell program in osteosarcoma. Cell. 2011;146(6):918–30. 107. Lee JK, Chang N, Yoon Y, Yang H, Cho H, Kim E, Shin Y, Kang W, Oh YT, Mun GI, et al. USP1 targeting impedes GBM growth by inhibiting stem cell maintenance and radioresistance. Neuro Oncol. 2016;18(1):37–47. 108. Kim JM, Parmar K, Huang M, Weinstock DM, Ruit CA, Kutok JL, D’Andrea AD. Inactivation of murine Usp1 results in genomic instability and a Fanconi anemia phenotype. Dev Cell. 2009;16(2):314–20. 109. Wing SS. Deubiquitinating enzymes in skeletal muscle atrophy-an essential role for USP19. Int J Biochem Cell Biol. 2016;79:462–8. 110. Pei D. Deubiquitylating Nanog: novel role of USP21 in embryonic stem cell maintenance. Signal Transduct Target Ther. 2017;2:17014. 111. Ning F, Xin H, Liu J, Lv C, Xu X, Wang M, Wang Y, Zhang W, Zhang X. Structure and function of USP5: insight into physiological and patho‑ physiological roles. Pharmacol Res. 2020;157: 104557. 112. Garcia-Caballero A, Gadotti VM, Stemkowski P, Weiss N, Souza IA, Hodg‑ kinson V, Bladen C, Chen L, Hamid J, Pizzoccaro A, et al. The deubiquit‑ inating enzyme USP5 modulates neuropathic and inflammatory pain by enhancing Cav3.2 channel activity. Neuron. 2014;83(5):1144–58. 113. Wilkinson KD, Lee KM, Deshpande S, Duerksen-Hughes P, Boss JM, Pohl J. The neuron-specific protein PGP 9.5 is a ubiquitin carboxyl-terminal hydrolase. Science. 1989;246(4930):670–3. 114. Schofield JN, Day IN, Thompson RJ, Edwards YH. PGP9.5, a ubiqui‑ tin C-terminal hydrolase; pattern of mRNA and protein expression during neural development in the mouse. Brain Res Dev Brain Res. 1995;85(2):229–38. ----- 115. Sakurai M, Ayukawa K, Setsuie R, Nishikawa K, Hara Y, Ohashi H, Nishimoto M, Abe T, Kudo Y, Sekiguchi M, et al. Ubiquitin C-terminal hydrolase L1 regulates the morphology of neural progenitor cells and modulates their differentiation. J Cell Sci. 2006;119(Pt 1):162–71. 116. Bishop P, Rocca D, Henley JM. Ubiquitin C-terminal hydrolase L1 (UCH L1): structure, distribution and roles in brain function and dysfunction. Biochem J. 2016;473(16):2453–62. 117. Bononi A, Giorgi C, Patergnani S, Larson D, Verbruggen K, Tanji M, Pel‑ legrini L, Signorato V, Olivetto F, Pastorino S, et al. BAP1 regulates IP3R3mediated Ca(2+) flux to mitochondria suppressing cell transformation. Nature. 2017;546(7659):549–53. 118. Dai F, Lee H, Zhang Y, Zhuang L, Yao H, Xi Y, Xiao ZD, You MJ, Li W, Su X, et al. BAP1 inhibits the ER stress gene regulatory network and modulates metabolic stress response. Proc Natl Acad Sci U S A. 2017;114(12):3192–7. 119. 
Ristic G, Tsou WL, Todi SV. An optimal ubiquitin-proteasome pathway in the nervous system: the role of deubiquitinating enzymes. Front Mol Neurosci. 2014;7:72. 120. Xie L, Li A, Shen J, Cao M, Ning X, Yuan D, Ji Y, Wang H, Ke K. OTUB1 attenuates neuronal apoptosis after intracerebral hemorrhage. Mol Cell Biochem. 2016;422(1–2):171–80. 121. Xia Q, Liao L, Cheng D, Duong DM, Gearing M, Lah JJ, Levey AI, Peng J. Proteomic identification of novel proteins associated with Lewy bodies. Front Biosci. 2008;13:3850–6. 122. Kumari R, Kumar R, Kumar S, Singh AK, Hanpude P, Jangir D, Maiti TK. Amyloid aggregates of the deubiquitinase OTUB1 are neurotoxic, sug‑ gesting that they contribute to the development of Parkinson’s disease. J Biol Chem. 2020;295(11):3466–84. 123. Abdul Rehman SA, Kristariyanto YA, Choi SY, Nkosi PJ, Weidlich S, Labib K, Hofmann K, Kulathu Y. MINDY-1 is a member of an evolutionarily conserved and structurally distinct new family of deubiquitinating enzymes. Mol Cell. 2016;63(1):146–55. 124. Kwasna D, Abdul Rehman SA, Natarajan J, Matthews S, Madden R, De Cesare V, Weidlich S, Virdee S, Ahel I, Gibbs-Seymour I, et al. Discovery and characterization of ZUFSP/ZUP1, a distinct deubiquitinase class important for genome stability. Mol Cell. 2018;70(1):150–64. **Publisher’s Note** Springer Nature remains neutral with regard to jurisdictional claims in pub‑ lished maps and institutional affiliations. -----
{ "disclaimer": "Notice: Paper or abstract available at https://pmc.ncbi.nlm.nih.gov/articles/PMC9380329, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "CCBY", "status": "GOLD", "url": "https://molecularbrain.biomedcentral.com/counter/pdf/10.1186/s13041-022-00958-z" }
2,022
[ "JournalArticle", "Review" ]
true
2022-08-16T00:00:00
[ { "paperId": "e5bb5a26ee9153ca09c8ab39050dcf4ad7f62010", "title": "Functions and Molecular Mechanisms of Deltex Family Ubiquitin E3 Ligases in Development and Disease" }, { "paperId": "7214f8e5d30b2d5e155691b9eaa0135861a46969", "title": "Functional interaction of ubiquitin ligase RNF167 with UBE2D1 and UBE2N promotes ubiquitination of AMPA receptor" }, { "paperId": "cd219831de40cc36539f3d273b73edec489e081a", "title": "The ubiquitin system: from cell signalling to disease biology and new therapeutic opportunities" }, { "paperId": "08aea2619ffc7e585bbf4ad193fb67a5c823b0f8", "title": "E1 Enzymes as Therapeutic Targets in Cancer" }, { "paperId": "f4b402405a230a4481f66db04367b0f65b3fd206", "title": "The functional analysis of Cullin 7 E3 ubiquitin ligases in cancer" }, { "paperId": "3ede35d399b20c158eb3da46654f3dcf63784841", "title": "RBR E3 ubiquitin ligases in tumorigenesis." }, { "paperId": "1b1c256ad65d78a46884b7d36478b6a00359a114", "title": "The HOIL-1L ligase modulates immune signalling and cell death via monoubiquitination of LUBAC" }, { "paperId": "e1aeb071a9c6d8ceff9bbc63406e98fe735a2183", "title": "Cracking the Monoubiquitin Code of Genetic Diseases" }, { "paperId": "700f7c6c26141e30cb91ed2a94937cbd2c7b3348", "title": "The pivotal role of Ubiquitin-activating enzyme E1 (UBA1) in neuronal health and neurodegeneration." }, { "paperId": "ae3d57ecaea2fc10237152c2ab0e94279651f7ae", "title": "Cullin-7 (CUL7) is overexpressed in glioma cells and promotes tumorigenesis via NF-κB activation" }, { "paperId": "9afb032bdbbd520a04d2a28722b0dc9899008dad", "title": "HECT E3 ubiquitin ligases – emerging insights into their biological roles and disease relevance" }, { "paperId": "ccc50605fca2b8348eceb597f116f8f0ee61a7e8", "title": "UBB pseudogene 4 encodes functional ubiquitin variants" }, { "paperId": "8c491425087a1e8ed1e695683ba949219fec848b", "title": "APC/C ubiquitin ligase: functions and mechanisms in tumorigenesis." }, { "paperId": "321530efeaba7cc9fa5eb23b317d0a7d2a930819", "title": "Amyloid aggregates of the deubiquitinase OTUB1 are neurotoxic, suggesting that they contribute to the development of Parkinson's disease" }, { "paperId": "709a7fe874a114a8c6eaef010df22cc3e951a5ed", "title": "Structure and function of USP5: Insight into physiological and pathophysiological roles." }, { "paperId": "f9a77002177403d781576aaa4a615d3ee15b2314", "title": "The ubiquitin‐conjugating enzyme UBE2QL1 coordinates lysophagy in response to endolysosomal damage" }, { "paperId": "87fab1ec24a0f8573705e992817efab7667ac8cb", "title": "The Membrane-Associated MARCH E3 Ligase Family: Emerging Roles in Immune Regulation" }, { "paperId": "7573b3fba1117ca9d50bfa322409ed20eaf62d31", "title": "APC/C: current understanding and future perspectives" }, { "paperId": "1cdb5ff4c43772c8da4160c9cbbf89a9b3c9b19a", "title": "Zinc Uptake and Storage During the Formation of the Cerebral Cortex in Mice" }, { "paperId": "874ee9ad1981d43567c7187809908668f23a8c14", "title": "A protein quality control pathway regulated by linear ubiquitination" }, { "paperId": "979e678f53ecad79d68c5e8729d7426c9c40251e", "title": "Breaking the chains: deubiquitylating enzyme specificity begets function" }, { "paperId": "3d244aacc7ff2667b33b827db2012179864d0a5f", "title": "USP22 controls multiple signaling pathways that are essential for vasculature formation in the mouse placenta" }, { "paperId": "c9cf89e7642b3da75a33d482d26a330212677479", "title": "Deubiquitinating Enzymes Related to Autophagy: New Therapeutic Opportunities?" 
}, { "paperId": "d0510e7a011d3257b5c816b6e8d2c09cb0026454", "title": "Mammalian TRIM67 Functions in Brain Development and Behavior" }, { "paperId": "ca68595bf945b22156e2f9b1b074973017c4cf4f", "title": "Discovery and Characterization of ZUFSP/ZUP1, a Distinct Deubiquitinase Class Important for Genome Stability" }, { "paperId": "1eb9c18190345b1be5dd8a0013e3bbf29c4d04b2", "title": "LUBAC is essential for embryogenesis by preventing cell death and enabling haematopoiesis" }, { "paperId": "ab283aed475305e44f0ef5db38ae766d2df051db", "title": "Identifying the substrate proteins of U-box E3s E4B and CHIP by orthogonal ubiquitin transfer" }, { "paperId": "e746befcb34fb62ba614c0a39f6ef261fd5ccd1c", "title": "Ubiquitin signaling and autophagy" }, { "paperId": "15250730127427b824a0f2d067d0da2f0ccec33c", "title": "RING-Between-RING E3 Ligases: Emerging Themes amid the Variations." }, { "paperId": "a40e6ed9811a106337f5ceaf0973562859fca9f3", "title": "RNF8/UBC13 ubiquitin signaling suppresses synapse formation in the mammalian brain" }, { "paperId": "e69310387cd5d7940960a11681328f516fe5956d", "title": "MARCH9‐mediated ubiquitination regulates MHC I export from the TGN" }, { "paperId": "806b06ef786edcedf1570c714678077c61f1a157", "title": "Fate and freedom in developing neocortical circuits" }, { "paperId": "7ba20f1ffa00acba47a9f7d48530850b23a196de", "title": "PRAJA is overexpressed in glioblastoma and contributes to neural precursor development" }, { "paperId": "43ce931c66dd7bedd00a0c1178d30e20f6f3f239", "title": "Ubiquitin Ligases: Structure, Function, and Regulation." }, { "paperId": "b13bd118e7c7d8c681012eeb23944ab47865e21c", "title": "NEURL4 regulates the transcriptional activity of tumor suppressor protein p53 by modulating its oligomerization" }, { "paperId": "6482873df64a490c03e12353ac7a40d9c1bbd6d4", "title": "BAP1 regulates IP3R3-mediated Ca2+ flux to mitochondria suppressing cell transformation" }, { "paperId": "cc70e10bcedf714eafbe892ffb1aa689d6f02292", "title": "Deubiquitylating Nanog: novel role of USP21 in embryonic stem cell maintenance" }, { "paperId": "bd7104c4cca4260dd43c7e5c823685336b8d7498", "title": "TRIM Family Proteins: Roles in Autophagy, Immunity, and Carcinogenesis." }, { "paperId": "c12ec8449b6f45662f3a2743ca2cb8b18cf51688", "title": "BAP1 inhibits the ER stress gene regulatory network and modulates metabolic stress response" }, { "paperId": "e897253e028a0637cfd4e822efeaa981de30f06b", "title": "Ubiquitin-like Protein Conjugation: Structures, Chemistry, and Mechanism" }, { "paperId": "c1ef19883567eb1a735bf7979fce6f062691b0b8", "title": "E3 Ligase RNF126 Directly Ubiquitinates Frataxin, Promoting Its Degradation: Identification of a Potential Therapeutic Target for Friedreich Ataxia" }, { "paperId": "77569da96d2ccbe6a969d1488bd2dd2bd52446c5", "title": "Cullin 7 mediates proteasomal and lysosomal degradations of rat Eag1 potassium channels" }, { "paperId": "9524d00faf53b53459447294f8d3827fc3528b81", "title": "TRIM28 Controls a Gene Regulatory Network Based on Endogenous Retroviruses in Human Neural Progenitor Cells." }, { "paperId": "e81cba8231c650e65ca22e09ff95870e4084874d", "title": "Deubiquitinating enzymes in skeletal muscle atrophy-An essential role for USP19." 
}, { "paperId": "351cafd4582a53fe006e3664b242267d5d3933f2", "title": "OTUB1 attenuates neuronal apoptosis after intracerebral hemorrhage" }, { "paperId": "d053b5ad410489efc193f333112b988a77b555a8", "title": "Ubiquitin C-terminal hydrolase L1 (UCH-L1): structure, distribution and roles in brain function and dysfunction" }, { "paperId": "6effad3ba483e7996465d07e017db86f1f63480d", "title": "MINDY-1 Is a Member of an Evolutionarily Conserved and Structurally Distinct New Family of Deubiquitinating Enzymes" }, { "paperId": "1dedae5007832dedb60a491bfac559e9811e275b", "title": "Types of Ubiquitin Ligases" }, { "paperId": "1ad545f66256176b53c18a9136dccc9e2a9f9a76", "title": "Involvement of F-BOX proteins in progression and development of human malignancies." }, { "paperId": "76d0c6fe797444a8e63aa63254c3d667de2957a7", "title": "Dynamic transcription of ubiquitin genes under basal and stressful conditions and new insights into the multiple UBC transcript variants." }, { "paperId": "57264f204e21a558088a9adb47540e4c63aceee7", "title": "UBA1: At the Crossroads of Ubiquitin Homeostasis and Neurodegeneration" }, { "paperId": "8068195a216b357b40a6cb460bd99a0d8ad98772", "title": "The de novo synthesis of ubiquitin: identification of deubiquitinases acting on ubiquitin precursors" }, { "paperId": "94b10d376283e1699e87c865ac3d40d0354c9300", "title": "The demographics of the ubiquitin system." }, { "paperId": "4150e439eab9c8681268deca6ab4504f5e5eca5a", "title": "Deubiquitinating enzymes regulate Hes1 stability and neuronal differentiation" }, { "paperId": "4f7e7ee9d8eaaecdd99f18773a4136c97446d054", "title": "Targeting cancer with kinase inhibitors." }, { "paperId": "b6d81ee46a752067888fa0aa52a0f0eb94323285", "title": "Regulation of Neuronal Morphogenesis and Positioning by Ubiquitin-Specific Proteases in the Cerebellum" }, { "paperId": "cb5cd5e43fc9518c1a7e260a32379f056a8c1034", "title": "Restoration of cellular ubiquitin reverses impairments in neuronal development caused by disruption of the polyubiquitin gene Ubb." }, { "paperId": "3f0bbc89970ff0f708920d4d41154fc10b1ccfd4", "title": "The Deubiquitinating Enzyme USP5 Modulates Neuropathic and Inflammatory Pain by Enhancing Cav3.2 Channel Activity" }, { "paperId": "484fa8d90cd34dd4c0509cfc4c71172e3a8d7993", "title": "Cellular ubiquitin pool dynamics and homeostasis" }, { "paperId": "ae142c9b88d09a35efd1311b0b44c60155e7e87b", "title": "An optimal ubiquitin-proteasome pathway in the nervous system: the role of deubiquitinating enzymes" }, { "paperId": "c7795dc786298fd24b1a145399a1582a44b81d17", "title": "The Fanconi anemia DNA repair pathway: structural and functional insights into a complex disorder." }, { "paperId": "6e9876806a4f48adb1c7aed6dded69ab03071fdd", "title": "Neurobehavioural effects of developmental toxicity" }, { "paperId": "5956f58b9b340846cf82e1c745e6b639ad295851", "title": "Ubiquitin-conjugating enzyme E2C: a potential cancer biomarker." 
}, { "paperId": "471cb1ac76a6f8903eb38e708cf57862fed36047", "title": "Downregulation of UBE2Q1 is associated with neuronal apoptosis in rat brain cortex following traumatic brain injury" }, { "paperId": "fadb6c833ed2cd00d96f98013244320838e6127e", "title": "APC/C-Cdh1 coordinates neurogenesis and cortical size during development" }, { "paperId": "92a97e9ffc6e43fc50b687ab7cb8c391fcf7dbef", "title": "The Epigenetic Modifier Ubiquitin-specific Protease 22 (USP22) Regulates Embryonic Stem Cell Differentiation via Transcriptional Repression of Sex-determining Region Y-box 2 (SOX2)*" }, { "paperId": "d01501b743ce2ce2637c2d79fa368fd5a7233467", "title": "Neuronal Ubiquitin Homeostasis" }, { "paperId": "c89ec9c0034d0e79a053073c8f4f34bb339a1396", "title": "Altered social behavior and neuronal development in mice lacking the Uba6-Use1 ubiquitin transfer system." }, { "paperId": "0b32f919568991e440f92d2685ce3063e1764a24", "title": "A model based criterion for gene expression calls using RNA-seq data" }, { "paperId": "142fb42d200bb840462f5f6c7b5554f85b350ef3", "title": "Spatial organization of ubiquitin ligase pathways orchestrates neuronal connectivity" }, { "paperId": "846ca56a7ae19680a4e6544be19d175f399fc194", "title": "Genetically engineered mouse models for functional studies of SKP1-CUL1-F-box-protein (SCF) E3 ubiquitin ligases" }, { "paperId": "81659835f9b09ac4abc3ec8dd2a23dfdc483f81c", "title": "Nedd4 and Nedd4-2: ubiquitin ligases at work in the neuron." }, { "paperId": "0d44c103b7050b9a39bf07172326c5eaa8400acf", "title": "Ubiquitin ligase RNF167 regulates AMPA receptor-mediated synaptic transmission" }, { "paperId": "1cb02df5084781ec988fa2d73b8a1e53199af473", "title": "Mining and characterization of ubiquitin E3 ligases expressed in the mouse testis" }, { "paperId": "29c412d2aed1a78d33806a57679588226dc9551c", "title": "Mining and characterization of ubiquitin E3 ligases expressed in the mouse testis" }, { "paperId": "8c471432d5a7926bdab78d15ceee0378442ef988", "title": "Synaptic protein ubiquitination in rat brain revealed by antibody-based ubiquitome analysis." }, { "paperId": "91aaac570b2142db5a4a8c6e8b7b9a3f6758fce5", "title": "A novel accessory molecule Trim59 involved in cytotoxicity of BCG-activated macrophages" }, { "paperId": "5b2953bfd8ea399abbd4d0c58a7da91d859914e6", "title": "The ubiquitin code." }, { "paperId": "72c81d60057c35d34bf6d87395692f46a97c73d1", "title": "USP22 antagonizes p53 transcriptional activation by deubiquitinating Sirt1 to suppress cell apoptosis and is required for mouse embryonic development." 
}, { "paperId": "6845d753ba4e9d7d239fe47884f5a7871ca6c7b1", "title": "In Vivo Evidence That TRAF4 Is Required for Central Nervous System Myelin Homeostasis" }, { "paperId": "cfb083749fefbbc49afe7570f520117a93b18c82", "title": "Iron toxicity in neurodegeneration" }, { "paperId": "3bd32cddfd7d73bbb40e178dd5b6f9a9af321978", "title": "Makorin Ring Zinc Finger Protein 1 (MKRN1), a Novel Poly(A)-binding Protein-interacting Protein, Stimulates Translation in Nerve Cells*" }, { "paperId": "585f26ca3943a4b238d192f9ece631343ee9c826", "title": "Tripartite Motif-Containing Protein 28 Is a Small Ubiquitin-Related Modifier E3 Ligase and Negative Regulator of IFN Regulatory Factor 7" }, { "paperId": "6d288f5ddac9c345af8cedfaaf11f63d9deb5dda", "title": "USP1 Deubiquitinates ID Proteins to Preserve a Mesenchymal Stem Cell Program in Osteosarcoma" }, { "paperId": "1f44a87617d2a35dd41822090c1d2ae39c1caa0a", "title": "Neural Stem Cells Maintain Their Stemness through Protein Kinase C ζ‐Mediated Inhibition of TRIM32" }, { "paperId": "0d7829a89bf927e95edd4d7af8a571be454104f6", "title": "The SCF–FBXW5 E3-ubiquitin ligase is regulated by PLK4 and targets HsSAS-6 to control centrosome duplication" }, { "paperId": "28c3cb1dd49cffb3ca5e5d43bc2dd7b2a5180d29", "title": "Balancing act: deubiquitinating enzymes in the nervous system" }, { "paperId": "22929bafb6748508a0c3a9487b90962f2d169dc5", "title": "The role of ubiquitylation in nerve cell development" }, { "paperId": "e6ac047c89940e731791542c6cc22087e0ddd835", "title": "An OBSL1-Cul7Fbxw8 Ubiquitin Ligase Signaling Mechanism Regulates Golgi Morphology and Dendrite Patterning" }, { "paperId": "9a1d270baff22f4ed145b7e6cfd539f24b484295", "title": "The cullin protein family" }, { "paperId": "c8261e5508c0b2d95e9419dd97ef2185a534f8a8", "title": "Ancient origin of animal U-box ubiquitin ligases" }, { "paperId": "fbb329dd2e56cc61bc65c613a127eca7f9fb1adb", "title": "The family of ubiquitin‐conjugating enzymes (E2s): deciding between life and death of proteins" }, { "paperId": "a79a40260468441d0c56d819ab2ccdd0f83885f6", "title": "Structural and functional characterization of the monomeric U-box domain from E4B." }, { "paperId": "1038b0cff47071f527f69d5714278d8bffd2fe73", "title": "Regulation of the endothelial cell cycle by the ubiquitin-proteasome system." }, { "paperId": "7f2d7d4f14540a4accf18f0815b77f6b3eee46e7", "title": "Fbxo45, a Novel Ubiquitin Ligase, Regulates Synaptic Activity*" }, { "paperId": "e8324db196f5eef7c5f93d73b087269eabd6f7e6", "title": "Building ubiquitin chains: E2 enzymes at work" }, { "paperId": "0f6045c62ca785ff117b95bb77d500855a6869c7", "title": "A comprehensive framework of E2–RING E3 interactions of the human ubiquitin–proteasome system" }, { "paperId": "502e7ab3db4438511bd316d0bab6efec474cb684", "title": "Breaking the chains: structure and function of the deubiquitinases" }, { "paperId": "5eed0dc4886c43ba9b90db1b9a6a039412dfe828", "title": "Tumor necrosis factor-receptor-associated factor-4 is a positive regulator of transforming growth factor-beta signaling that affects neural crest formation." }, { "paperId": "bad3f08c363426e09b0089582deb2cecfa364e86", "title": "RING domain E3 ubiquitin ligases." 
}, { "paperId": "44db31c41ab0845a121901fe8ca24a5709b8ef3b", "title": "Ubiquitin-like protein activation by E1 enzymes: the apex for downstream signalling pathways" }, { "paperId": "39e6b7b4a563db11144bb0a7011c159bd6a4ac76", "title": "Fbxo45 Forms a Novel Ubiquitin Ligase Complex and Is Required for Neuronal Development" }, { "paperId": "5250c7a201c0aed2ea2893a3e047785168a316d5", "title": "Inactivation of murine Usp1 results in genomic instability and a Fanconi anemia phenotype." }, { "paperId": "6511fab711aae31dd4aefa5243cabe0972759946", "title": "The HECT-domain ubiquitin ligase Huwe1 controls neural differentiation and proliferation by destabilizing the N-Myc oncoprotein" }, { "paperId": "610daf80fe883271e0b5d451a1ee41ccbfa71ee9", "title": "Proteomic identification of novel proteins associated with Lewy bodies." }, { "paperId": "a69069067161c28dbb821046af02eec33fe40291", "title": "Makorin-2 Is a Neurogenesis Inhibitor Downstream of Phosphatidylinositol 3-Kinase/Akt (PI3K/Akt) Signal*" }, { "paperId": "17cf9c5da0c75344be782f84b08f7a479efb7208", "title": "Hypothalamic neurodegeneration and adult-onset obesity in mice lacking the Ubb polyubiquitin gene" }, { "paperId": "92955849e0401f762ec2e88c9e82b42839049022", "title": "Synaptic Protein Degradation Underlies Destabilization of Retrieved Fear Memory" }, { "paperId": "286b4740717f21abcc48d2f5a13b287f19295ee4", "title": "Genome-Wide and Functional Annotation of Human E3 Ubiquitin Ligases Identifies MULAN, a Mitochondrial E3 that Regulates the Organelle's Dynamics and Signaling" }, { "paperId": "3fdeb142fa7d79648f146caa0758f7ab04707fea", "title": "A novel family of membrane-bound E3 ubiquitin ligases." }, { "paperId": "edbbdaced3b7665315d3e5c79571e06af2843fe6", "title": "Ubiquitin C-terminal hydrolase L1 regulates the morphology of neural progenitor cells and modulates their differentiation" }, { "paperId": "c2208857e4a6d29aa65d370cb2eca5ad0dc6d171", "title": "Age-related changes in Usp9x protein expression and DNA methylation in mouse brain." }, { "paperId": "ef326d9e76f9a152d54f649f06dbb3e70c71b4aa", "title": "Patterns of neuronal migration in the embryonic cortex" }, { "paperId": "0473b85a9d2deb40ca455a8e6e523e19eae740b6", "title": "Ubiquitin-dependent regulation of the synapse." }, { "paperId": "16fca93d7d9d216271d933753564203965a133a4", "title": "Interaction of U‐box‐type ubiquitin‐protein ligases (E3s) with molecular chaperones" }, { "paperId": "fb6be405704e37aaf8edc521fcf19f7f8ce80044", "title": "U-box proteins as a new family of ubiquitin ligases." }, { "paperId": "cb0ed3da02549d2740e37773f01257dc91740e8b", "title": "Characterization of the mouse gene for the U-box-type ubiquitin ligase UFD2a." }, { "paperId": "411639b4cb9253fe486b2471ab3dedaac723a1e6", "title": "Protein quality control: U-box-containing E3 ubiquitin ligases join the fold." }, { "paperId": "741988991b95cf84bd968f0835cd4bfad1d2bc77", "title": "The F-box protein family" }, { "paperId": "ea0296945068cfa177ed84a30db31ceaed411c8e", "title": "The ancient source of a distinct gene family encoding proteins featuring RING and C(3)H zinc-finger motifs with abundant expression in developing brain and nervous system." 
}, { "paperId": "85bc9e16b847648c6329167b2ccb6eabc009125a", "title": "UBE3A/E6-AP mutations cause Angelman syndrome" }, { "paperId": "22896ca2db2bdefd1778e9af02bcdeedb02adab6", "title": "Cloning of Human Ubiquitin-conjugating Enzymes UbcH6 and UbcH7 (E2-F1) and Characterization of Their Interaction with E6-AP and RSP5 (*)" }, { "paperId": "a60832cbef8ac6904058c9c24c3bcdc8a8bf4ec0", "title": "PGP9.5, a ubiquitin C-terminal hydrolase; pattern of mRNA and protein expression during neural development in the mouse." }, { "paperId": "b394eff16abda393c9968842ebc8f9b0744dceb4", "title": "Identification of a set of genes with developmentally down-regulated expression in the mouse brain." }, { "paperId": "b193d1e5d99bed859ab3d62c748ecf2a085f0a39", "title": "The neuron-specific protein PGP 9.5 is a ubiquitin carboxyl-terminal hydrolase." }, { "paperId": null, "title": "The E3 ubiquitin ligase SCF(FBXL14) complex stimulates neuronal differentiation by targeting the Notch signaling factor HES1 for prote‐ olysis" }, { "paperId": "4c6c581316a825c69ed4a22ac67c8a78dca07d6a", "title": "USP1 targeting impedes GBM growth by inhibiting stem cell maintenance and radioresistance." }, { "paperId": "34b9635d7779e219e9d60e0d3d33919ca9bc123c", "title": "Publisher's Note" }, { "paperId": null, "title": "Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations" } ]
30,868
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/023095b6c75a66623876dbc6bca0dfe6b78291f3
[ "Computer Science" ]
0.906266
On the composition of authenticated byzantine agreement
023095b6c75a66623876dbc6bca0dfe6b78291f3
Symposium on the Theory of Computing
[ { "authorId": "1682750", "name": "Yehuda Lindell" }, { "authorId": "1783286", "name": "Anna Lysyanskaya" }, { "authorId": "1693109", "name": "T. Rabin" } ]
{ "alternate_issns": null, "alternate_names": [ "Symp Theory Comput", "STOC" ], "alternate_urls": null, "id": "8113a511-e0d9-4231-a1bc-0bf5d0212a4e", "issn": null, "name": "Symposium on the Theory of Computing", "type": "conference", "url": "http://acm-stoc.org/" }
null
# On the Composition of Authenticated Byzantine Agreement

Yehuda Lindell* (Dept. of Computer Science, Weizmann Institute of Science, Rehovot 76100, ISRAEL; lindell@wisdom.weizmann.ac.il), Anna Lysyanskaya (MIT LCS, 200 Technology Square, Cambridge, MA 02139 USA; anna@theory.lcs.mit.edu), Tal Rabin (IBM T.J. Watson Research, PO Box 704, Yorktown Heights NY 10598, USA; talr@watson.ibm.com)

* This work was carried out while the first and second authors were visiting IBM Research.

## ABSTRACT

A fundamental problem of distributed computing is that of simulating a (secure) broadcast channel, within the setting of a point-to-point network. This problem is known as Byzantine Agreement and has been the focus of much research. Lamport et al. showed that in order to achieve Byzantine Agreement in the standard model, more than 2/3 of the participating parties must be honest. They further showed that by augmenting the network with a public-key infrastructure, it is possible to obtain secure protocols for any number of faulty parties. This augmented problem is called "authenticated Byzantine Agreement". In this paper we consider the question of concurrent, parallel and sequential composition of authenticated Byzantine Agreement protocols. We present surprising impossibility results showing that:

1. Authenticated Byzantine Agreement cannot be composed in parallel or concurrently (even twice), if 1/3 or more of the parties are faulty.
2. Deterministic authenticated Byzantine Agreement protocols that run for r rounds and tolerate 1/3 or more faulty parties, can only be composed sequentially less than 2r times.

In contrast, we present randomized protocols for authenticated Byzantine Agreement that compose sequentially for any polynomial number of times. We exhibit two such protocols: in the first protocol, the number of faulty parties may be any number less than 1/2. On the other hand, the second protocol can tolerate any number of faulty parties, but is limited to the case that the overall number of parties is O(log k), where k is a security parameter. Finally, we show that when the model is further augmented so that unique and common session identifiers are assigned to each concurrent session, then any polynomial number of authenticated Byzantine Agreement protocols can be concurrently executed, while tolerating any number of faulty parties.

## 1. INTRODUCTION

The Byzantine Generals (Byzantine Agreement¹) problem is one of the most researched areas in distributed computing. Numerous variations of the problem have been considered under different communication models, and both positive results, i.e. protocols, and negative results, i.e. impossibility and lower bounds on efficiency and resources, have been established.

¹ These two problems are essentially equivalent.
The reason for this vast interest is the fact that the Byzantine Generals problem is the algorithmic implementation of a broadcast channel within a point-to-point network. In addition to its importance as a stand-alone primitive, broadcast is a key tool in the design of secure protocols for multiparty computation.

Despite the importance of this basic functionality and the vast amount of research that has been directed towards it, our understanding of the algorithmic issues is far from complete. As is evident from our results, there are still key questions that have not yet been addressed. In this paper, we provide solutions to some of these questions.

The problem of Byzantine Generals is (informally) defined as follows: There are n parties, one of which is the General who holds an input x. In addition, there is an adversary who controls up to t of the parties and can arbitrarily deviate from the designated protocol specification. The (honest) parties need to agree on a common value. Furthermore, if the General is not faulty, then this common value must be his original input x.

Pease et al. [15, 13] provided a solution to the Byzantine Generals problem in the standard model, i.e. the information-theoretic model with point-to-point communication lines (and no setup assumptions). For their solution, the number of faulty parties, t, must be less than n/3. Furthermore, they complemented this result by showing that the requirement for t < n/3 is in fact inherent. That is, no protocol which solves the Byzantine Generals problem in the standard model can tolerate a third or more faulty parties.

The above bound on the number of faulty parties in the standard model is a severe limitation. It is therefore of great importance to introduce a different (and realistic) model in which it is possible to achieve higher fault tolerance. One possibility involves augmenting the standard model such that messages sent can be authenticated. By authentication, we mean the ability to ascertain that a message was in fact sent by a specific party, even when not directly received from that party. This can be achieved using a trusted preprocessing phase in which a public-key infrastructure for digital signatures (e.g. [18, 11]) is set up. (We note that this requires that the adversary be computationally bounded. However, there exist preprocessing phases which do not require any computational assumptions; see [16].) Indeed, Pease et al. [15, 13] use such an augmentation and obtain a protocol for the Byzantine Generals problem which can tolerate any number of faulty parties (this is very dramatic considering the limitation to 1/3 faulty in the standard model). The Byzantine Generals problem in this model is referred to as authenticated Byzantine Generals.

A common use of Byzantine Generals is to substitute a broadcast channel. Therefore, it is clear that the settings in which we would want and need to run it involve many invocations of the Byzantine Generals protocol. The question of whether these protocols remain secure when executed concurrently, in parallel or sequentially is thus an important one. However, existing work on this problem (in both the standard and authenticated models) focused on the security and correctness of protocols in a single execution only. It is easy to see that the unauthenticated protocol of Pease et al.
[15], and other protocols in the standard model, do compose concurrently (and hence in parallel and sequentially). However, this is not the case with respect to authenticated Byzantine Generals. The first to notice that composition in this model is problematic were Gong, Lincoln and Rushby [12], who also suggest methods for overcoming the problem. Our work shows that these suggestions and any others are futile; in fact composition in this model is impossible (as long as 1/3 or more of the parties are faulty). (We note that by composition, we refer to stateless composition; see Section 2.3 for a formal discussion.)

Our Results. Our first theorem, stated below, shows that authenticated Byzantine Generals protocols, both deterministic and randomized, cannot be composed in parallel (and thus concurrently). This is a surprising and powerful statement with respect to the issue of enhancing the standard model by the addition of authentication. The theorem shows that this enhancement does not provide the ability to overcome the impossibility result when composition is required. That is, if there is a need for parallel composition, then the number of faulty players cannot be n/3 or more, and hence the authenticated model provides no advantage over the standard model.

Theorem 1. No protocol for authenticated Byzantine Agreement that composes in parallel (even twice) can tolerate n/3 or more faulty parties.

Regarding the question of sequential composition, we show different results. We first prove another (weaker) lower bound for deterministic protocols:

Theorem 2. Let Π be a deterministic protocol for authenticated Byzantine Agreement that terminates within r rounds of communication. Then, Π can be sequentially composed at most 2r − 1 times.

In contrast, for randomized protocols we obtain positive results and present a protocol which can be composed sequentially (any polynomial number of times), and which tolerates t < n/2 faulty parties. The protocol which we present is based on a protocol of Fitzi and Maurer [8] that tolerates t < n/2 faulty parties, and is in the standard model augmented with an ideal three-party broadcast primitive. We show that this primitive can be replaced by an authenticated protocol for three parties that can be composed sequentially (and the resulting protocol also composes sequentially). Thus, we prove:

Theorem 3. Assume that there exists a signature scheme that is existentially secure against chosen message attacks. Then, there exists a randomized protocol for authenticated Byzantine Generals with a bounded number of rounds, that tolerates t < n/2 faulty parties and composes sequentially any polynomial number of times.

We also present a randomized Byzantine Generals protocol that tolerates any number of faulty parties, and composes sequentially any polynomial number of times. However, the number of messages sent in this protocol is exponential in the number of parties. Therefore, it can only be used when the overall number of parties is logarithmic in the security parameter of the signature scheme.

On the Use of Unique Session Identifiers. As will be apparent from the proofs of the lower bounds (Theorems 1 and 2), what prevents agreement in this setting is the fact that honest parties cannot tell in which execution of the protocol a given message was authenticated. This allows the adversary to "borrow" messages from one execution to another, and by that attack the system.
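To make the "borrowing" concrete, the following toy sketch (ours, not from the paper) shows that a signature over a bare message verifies identically in every execution, so an honest verifier cannot tell which execution it came from. It assumes the third-party `cryptography` package for Ed25519 signatures.

```python
# Toy illustration (not from the paper) of cross-execution "borrowing":
# a signature over a bare message says nothing about which execution it
# belongs to. Assumes the third-party `cryptography` package.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

sk_C = Ed25519PrivateKey.generate()   # party C's key from the preprocessing
pk_C = sk_C.public_key()

msg = b"round-1 value: 0"
sigma = sk_C.sign(msg)                # C signs in execution 1

# The adversary replays (msg, sigma) inside execution 2; verification
# succeeds there just as well. verify() raises InvalidSignature only on
# forgery, so this call passes silently.
pk_C.verify(sigma, msg)
print("signature accepted in execution 2 as well")
```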
In Section 5, we show that if we further augment the authenticated model so that unique and common indices are assigned to each execution, then security under many concurrent executions can be achieved (for any number of faulty parties). Thus, on the one hand, our results strengthen the common belief that session identifiers are necessary for achieving authenticated Byzantine Generals. On the other hand, we show that such identifiers cannot be generated within the system. Typical suggestions for generating session identifiers in practice include having the General choose one, or having the parties exchange random strings and set the identifier to be the concatenation of all these strings. However, Theorem 1 rules out all such solutions (notice that just coming up with a common identifier involves reaching agreement). Rather, one must assume the existence of some trusted external means for coming up with unique and common indices. This seems to be a very difficult, if not impossible, assumption to realize in many natural settings.

A natural question to ask here relates to the fact that unique and common session identifiers are anyway needed in order to carry out concurrent executions. In particular, parties need to be able to allocate messages to protocol executions, and this requires a way of distinguishing executions from each other. Indeed, global session identifiers solve this problem. However, it also suffices for each party to allocate local identifiers for itself. That is, when a party begins a new execution, it chooses a unique identifier sid and informs all parties to concatenate sid to any message they send him within this execution. It is then guaranteed that any message sent by an honest party to another honest party will be directed to the execution it belongs to. We thus conclude that for the purposes of carrying out concurrent executions, global identifiers are not needed.

Implications for Secure Multiparty Computations. As we have stated above, one important use for Byzantine Generals protocols is to substitute the broadcast channel in a multiparty protocol. In fact, most known solutions for multiparty computations assume a broadcast channel, claiming that it can be substituted by a Byzantine Generals protocol without any complications. Our results therefore imply that multiparty protocols that rely on authenticated Byzantine Generals to replace the broadcast channel cannot be composed in parallel or concurrently.

Another important implication of our result is due to the fact that any secure protocol for solving general multiparty tasks can be used to solve Byzantine Generals. Therefore, none of these protocols can be composed in parallel or concurrently, unless more than 2/3 of the parties are honest or a physical broadcast channel is available.

Our Work vs. Composition of Secure Multiparty Protocols. There has been much work on the topic of protocol composition in the context of multiparty computation [1, 14, 2, 5, 3]. Much of this work has focused on zero-knowledge and concurrent zero-knowledge protocols [10, 6, 17, 4]. For example, Goldreich and Krawczyk [10] show that there exist protocols that are zero-knowledge when executed stand-alone, and yet do not compose in parallel (even twice). However, protocols that compose do exist (see, for example, Goldreich [9] and references therein). In contrast, we show that it is impossible to obtain any protocol that will compose twice in parallel.
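As a companion to the previous sketch, the following minimal example (again ours, with hypothetical helper names `sign_with_sid`/`verify_with_sid`) shows why tagging each signed message with the session identifier sid, as in the identifier scheme discussed above, makes a "borrowed" signature fail verification in any other session.

```python
# Minimal sketch (ours, not the paper's protocol) of binding signatures to a
# session identifier. Assumes the third-party `cryptography` package.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

sk = Ed25519PrivateKey.generate()
pk = sk.public_key()

def sign_with_sid(signing_key, sid: bytes, msg: bytes) -> bytes:
    # Signing sid||msg binds the signature to one execution.
    return signing_key.sign(sid + b"||" + msg)

def verify_with_sid(verify_key, sid: bytes, msg: bytes, sig: bytes) -> bool:
    try:
        verify_key.verify(sig, sid + b"||" + msg)
        return True
    except InvalidSignature:
        return False

sigma = sign_with_sid(sk, b"sid-1", b"vote=0")
print(verify_with_sid(pk, b"sid-1", b"vote=0", sigma))  # True: same session
print(verify_with_sid(pk, b"sid-2", b"vote=0", sigma))  # False: replay caught
```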
## 2. DEFINITIONS

## 2.1 Computational Model

We consider a setting involving n parties, P1, ..., Pn, that interact in a synchronous point-to-point network. In such a network, each pair of parties is directly connected, and it is assumed that the adversary cannot modify messages sent between honest parties. In this setting, each party is formally modeled by an interactive Turing machine with n − 1 pairs of communication tapes. The communication of the network proceeds in synchronized rounds, where each round consists of a send phase followed by a receive phase. In the send phase of each round, the parties write messages onto their output tapes, and in the receive phase, the parties read the contents of their input tapes.

This paper refers to the authenticated model, where some type of trusted preprocessing phase is assumed. This is modeled by all parties also having an additional setup-tape that is generated during the preprocessing phase. Typically, in such a preprocessing phase, a public-key infrastructure of signature keys is generated. That is, each party receives its own secret signing key, and in addition, public verification keys associated with all other parties. (This enables parties to use the signature scheme to authenticate messages that they receive, and is thus the source of the name "authenticated".) However, we stress that our lower bound holds for all preprocessing phases (even those that cannot be efficiently generated).
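The synchronous round structure described above can be pictured with the following sketch (ours, not part of the model's formalism; the `Party` class and its toy behaviour are illustrative stand-ins).

```python
# A minimal sketch (ours) of a synchronized round: a send phase in which
# every party writes its outgoing messages, then a receive phase in which
# the addressed messages are delivered unmodified over point-to-point links.
class Party:
    def __init__(self, pid, peers):
        self.pid, self.peers = pid, peers

    def send(self, round_no):
        # Toy behaviour: message every directly connected peer.
        return {q: f"hello {q} from {self.pid} (round {round_no})"
                for q in self.peers}

    def receive(self, round_no, inbox):
        print(self.pid, "received", inbox)

def run_round(parties, round_no):
    outboxes = {p.pid: p.send(round_no) for p in parties}      # send phase
    for p in parties:                                          # receive phase
        inbox = {s: out[p.pid] for s, out in outboxes.items() if p.pid in out}
        p.receive(round_no, inbox)

ids = ["P1", "P2", "P3"]
run_round([Party(i, [j for j in ids if j != i]) for i in ids], round_no=1)
```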
Validity: If G is honest, then al l honest parties output x. We denote such a protocol by BGn;t . In the setting of Byzantine Agreement it is not straight forward to formulate the validity prop erty. Intuitively, it should capture that if enough honest parties b egin with the same input value then they will output that value. By \hon est," we mean the parties that follow the prescrib ed proto col exactly, ignoring the issue that the �rst step of the party might b e to change its lo cal input. Definition 2. (Byzantine Agreement): Let P1 ; : : : ; Pn be n parties, with associated inputs x1 ; : : : ; xn . In addition of the parties P1 ; : : : ; Pn , where the corruption strategy de there is an adversary who may corrupt up to t of the parties. Then, a protocol solves the Byzantine Agreement problem if the fol lowing two properties hold (except with negligible prob ability): 1. Agreement: Al l honest parties output the same value. 2. Validity: If max(n � t; bn=2c + 1) of the parties have the same input value x and fol low the protocol speci�cation, then al l honest parties output x. We note that for the information-theoretic setting, the valid ity requirement is usually stated so that it must hold only when more than two thirds of the parties have the same p ends on the adversary's view (i.e., the adversary is adap tive). Since the adversary controls these parties, it receives their entire views and determines the messages that they send. In particular, these messages need not b e according to the proto col execution, but rather can b e computed by the adversary as an arbitrary function of its view. We note that our imp ossibility results hold even against a static ad ----- input value, b ecause in the information-theoretic setting, n � t - 2n=3. Authenticated Byzantine Proto cols: In the mo del for authenticated Byzantine Generals/Agreement, some trusted prepro cessing phase is run b efore any executions b egin. In this phase, a trusted party distributes keys to every partic ipating party. Formally, Definition 3. (Authenticated Byzantine Generals and Agreement): A protocol for authenticated Byzantine Gener als/Agreement is a Byzantine Generals/Agreement protocol with the fol lowing augmentation: � Each party has an additional setup-tap e. � Prior to any protocol execution, an ideal (trusted) party � Run the preprocessing phase associated with � and ob tain the strings s1 ; : : : ; sn . Then, for every j, set the setup-tap e of P . j to equal sj � Repeat the fol lowing process a polynomial number of times sequential ly (resp., in paral lel). 1. The adversary A chooses an input vector x ; : : : ; xn . . 1 2. Fix the input tape of every honest Pj to be xj and the random-tape to be a uniformly (and indepen dently) chosen random string. 3. Invoke al l parties for an execution of � (using the strings generated in the preprocessing phase above). The execution is such that for i 2 I, the messages chooses a series of strings s1 ; : : : ; sn according to some distribution, and sets party Pi (for every i = 1; : : : ; n). 's setup-tap e to equal si Fol lowing the above preprocessing stage, the protocol is run in the standard communication model for Byzantine Gener als/Agreement protocols. As we have mentioned, a natural example of such a prepro cessing phase is one where the strings s1 ; : : : ; sn constitute a sent by party Pi are determined by A (who also sees Pi 's view). On the other hand, al l other parties fol low the instructions as de�ned in �. 
We stress that the prepro cessing phase is executed only once and all executions use the strings distributed in this phase. Furthermore, we note that De�nition 4 implies that all hon est parties are oblivious of the other executions that have taken place (or that are taking place in parallel). This is implicit in the fact that in each execution the parties are invoked with no additional state information, b eyond the contents of their input, random and key tap es. On the other hand, the adversary A can co ordinate b etween the executions, and its view at any given time includes all the 2 messages received in all other executions. Before pro ceeding, we show that any Byzantine Generals (or Agreement) proto col in the standard model comp oses concurrently. Proposition 2.1. Any protocol � for Byzantine Gener als (or Agreement) in the standard mo del, remains secure under concurrent composition. Proof: We reduce the security of � under concurrent comp osition to its security for a single execution. Assume by contradiction that there exists an adversary A who runs N concurrent executions of �, such that with non-negligible probability, in one of the executions the outputs of the par ties are not according to the requirements of the Byzantine 0 public-key infrastructure. That is, the trusted party cho oses key-pairs (pk1 ; sk1 ); : : : ; (pkn ; skn ) from a secure signature scheme, and sets the contents of party Pi 's tap e to equal si = (pk 1 ; : : : ; pki�1 ; ski ; pki+1 ; : : : ; pkn ). That is, all par ties are given their own signing key and the veri�cation keys of all the other parties. We remark that the ab ove-de�ned prepro cessing phase is very strong. First, it is assumed that it is run completely by a trusted party. Furthermore, there is no computational b ound on the p ower of the trusted party generating the keys. Nevertheless, our imp ossibility results hold even for such a prepro cessing phase. ## 2.3 Composition of Protocols This pap er deals with the security of authenticated Byzan tine Agreement proto cols, when the proto col is executed many times (rather than just once). We de�ne the comp osi tion of proto cols to b e stateless. This means that the honest parties act up on their view in a single execution only. In par ticular, this means that the honest parties do not store in memory their views from previous executions or co ordinate b etween di�erent executions o ccurring at the current time. Furthermore, in stateless comp osition, there is no unique session identi�er that is common to all participating par ties. (See the Intro duction for a discussion on session iden ti�ers and their role.) We note that although the parties are stateless, the adversary is allowed to maliciously co ordinate b etween executions and record its view from previous exe cutions. Formally, comp osition is captured by the following pro cess: Definition 4. (sequential and parallel comp osition): Let Generals. We construct an adversary A who internally in corp orates A and attacks a single execution of �. Intuitively, 0 A simulates all executions apart from the one in which A 0 succeeds in its attack. Formally, A b egins by cho osing an th index i 2R f1; : : : ; N g. Then, for all but the i 0 execution of the proto col, A plays the roles of the honest parties in an 0 interaction with A (this simulation is internal to A ). On externally inter th the other hand, for the i 0 execution, A acts with the honest parties and passes messages b etween them and A (which it runs internally). 
The key p oint in the pro of is that the honest parties hold no secret information (and do not co ordinate b etween executions). Therefore, the 0 simulation of the concurrent setting by A th for A is perfect. Thus, with probability 1=N, the i execution is the one in 0 which A succeeds. However, this means that A succeeds 0 P1 ; : : : ; Pn be parties for an authenticated Byzantine Gener in breaking the proto col for a single execution (where A 's als/Agreement protocol �. Let I � [n] be an index set such that for every i 2 I, the adversary A controls the party Pi . success probability equals 1=N times the success probability of A.) This contradicts the stand-alone security of �. 2 The analogous de�nition for the comp osition of unauthenticated Byzantine Generals/Agreement is derived from De�nition 4 by re moving the reference to the prepro cessing stage and setup-tap es. Over time, indices are added to I as the adversary chooses to corrupt additional parties, with the restriction that jI j � t. Then, the sequential (resp., parallel) comp osition of � in volves the fol lowing process: ----- ## 3. IMPOSSIBILITY RESULTS In this section we present two imp ossibility results regard ing the comp osition of authenticated Byzantine Agreement proto cols. Recall that we are concerned with stateless com p osition. First, we show that it is imp ossible to construct an authenticated Byzantine Agreement proto col that comp oses in parallel (or concurrently), and is secure when n=3 or more parties are faulty. This result is analogous to the Fischer et al. [7] lower b ound for Byzantine Agreement in the standard mo del (i.e., without authentication). We stress that our result do es not merely show that authenticated Byzantine Agreement proto cols do not necessarily comp ose; rather, we show that one cannot construct proto cols that will comp ose. Since there exist proto cols for unauthenticated Byzantine Agreement that are resilient for any t < n=3 faulty parties and comp ose concurrently, this shows that the advantage gained by the prepro cessing step in authenticated Byzantine Agreement proto cols is lost when comp osition is required. Next, we show a lower b ound on the numb er of rounds re quired for deterministic authenticated Byzantine Agreement that comp oses sequentially. (Note that the imp ossibility of paral lel comp osition holds even for randomized proto cols.) We show that if an authenticated Byzantine Agreement pro to col that tolerates n=3 or more faulty parties is to comp ose sequentially r times, then there are executions in which it runs for more than r =2 rounds. Thus, the numb er of rounds in the proto col is linear in the numb er of times it is to com p ose. This rules out any practical proto col that will comp ose for a (large) p olynomial numb er of times. Intuition. Let us �rst provide some intuition into why the added p ower of the prepro cessing step in authenticated Byzantine Agreement do es not help when comp osition is re quired. (Recall that in the stand-alone setting, there exist authenticated Byzantine Agreement proto cols that tolerate any numb er of faulty parties. On the other hand, under par allel comp osition, more than 2n=3 parties must b e honest.) An instructive step is to �rst see how authenticated Byzan tine Agreement proto cols typically utilize the prepro cess ing step, in order to increase fault tolerance. A public-key infrastructure for signature schemes is used and this helps in achieving agreement for the following reason. 
Consider three parties A, B and C participating in a standard (unauthenticated) Byzantine Agreement protocol. Furthermore, assume that during the execution A claims to B that C sent it some message x. Then, B cannot differentiate between the case that C actually sent x to A, and the case that C did not send this value and A is faulty. Thus, B cannot be sure that A really received x from C. Indeed, such a model has been called the "oral message" model, in contrast to the "signed message" model of authenticated Byzantine Agreement [13]. On the other hand, the use of signature schemes helps to overcome this exact problem: if C had signed the value x and sent this signature to A, then A could forward the signature to B. Since A cannot forge C's signature, this would then constitute a proof that C indeed sent x to A. Therefore, utilizing the unforgeability property of signatures, it is possible to achieve Byzantine Agreement for any number of faulty parties.

However, the above intuition holds only in a setting where a single execution of the agreement protocol takes place. Specifically, if a number of executions were to take place, then A may send B a value x along with C's signature on x, yet B would still not know whether C signed x in this execution, or in a different (concurrent or previous) execution. Thus, the mere fact that A produces C's signature on a value does not provide proof that C signed this value in this execution. As we will see in the proof, this is enough to render the public-key infrastructure useless under some types of composition. We remark that it is possible to achieve concurrent composition using state in the form of unique and common session identifiers. However, as we have mentioned, there are many scenarios where this does not seem to be achievable (and many others where it is undesirable).

Theorem 1. No protocol for authenticated Byzantine Agreement that composes in parallel (even twice) can tolerate n/3 or more faulty parties.

Proof: The proof of Theorem 1 is based on some of the ideas used by Fischer et al. [7] in their proof that no unauthenticated Byzantine Agreement protocol can tolerate n/3 or more faulty parties. We begin by proving the following lemma:

Lemma 3.1. There exists no protocol for authenticated Byzantine Agreement for three parties that composes in parallel (even twice) and can tolerate one faulty party.

Proof: Assume, by contradiction, that there exists a protocol Π that solves the Byzantine Agreement problem for three parties A, B and C, where one may be faulty. Furthermore, Π remains secure even when composed in parallel twice. Exactly as in the proof of Fischer et al. [7], we define a hexagonal system S that intertwines two independent copies of Π. That is, let A1, B1, C1 and A2, B2, C2 be independent copies of the three parties participating in Π. By independent copies, we mean that A1 and A2 are the same party A with the same key tape, that runs in two different parallel executions of Π, as defined in Definition 4. The system S is defined by connecting party A1 to C2 and B1 (rather than to C1 and B1); party B1 to A1 and C1; party C1 to B1 and A2; and so on, as in Figure 1.

Figure 1: Combining two copies of Π in a hexagonal system S. (The figure shows the six parties A1, B1, C1, A2, B2, C2 arranged in a hexagon.)

In the system S, parties A1, B1 and C1 have input 0, while parties A2, B2 and C2 have input 1. Note that within S, all parties follow the instructions of Π exactly.
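For concreteness, the wiring of S can be written down as an adjacency map. This encoding is our own; the party names, edges and inputs follow Figure 1 and the text above.

```python
# The hexagonal system S from Lemma 3.1: two independent copies of the
# triangle A-B-C spliced into a single 6-cycle.  Each party still
# "believes" it is running the 3-party protocol with its usual two
# neighbours.
hexagon = {
    "A1": ("B1", "C2"),   # A1 talks to B1 and C2 (instead of C1)
    "B1": ("A1", "C1"),
    "C1": ("B1", "A2"),
    "A2": ("C1", "B2"),
    "B2": ("A2", "C2"),
    "C2": ("B2", "A1"),
}

# Inputs in S, as in the proof: the Pi_1 copies get 0, the Pi_2 copies get 1.
inputs = {"A1": 0, "B1": 0, "C1": 0, "A2": 1, "B2": 1, "C2": 1}

# Sanity check: S is a single 6-cycle, so every party has exactly two
# neighbours and the neighbour relation is symmetric.
for p, nbrs in hexagon.items():
    assert len(nbrs) == 2
    assert all(p in hexagon[q] for q in nbrs)
```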
We stress that S is not a Byzantine Agreement setting (where the parties are joined in a complete graph on three nodes), and therefore the definitions of Byzantine Agreement tell us nothing directly about what the parties' outputs should be. However, S is a well-defined system, and this implies that the parties have well-defined output distributions. The proof proceeds by showing that if Π is a correct Byzantine Agreement protocol, then we arrive at a contradiction regarding the output distribution in S.

We begin by showing that B1 and C1 output 0 in S. We denote by rounds(Π) the upper bound on the number of rounds of Π (when run in a Byzantine Agreement setting).

Claim 3.2. Except with negligible probability, parties B1 and C1 halt within rounds(Π) steps and output 0 in the system S.

Proof: We prove this claim by showing that there exists a faulty party (or adversary) A who participates in two parallel copies of Π and simulates the system S, with respect to B1 and C1's view. The faulty party A (and the other honest parties participating in the parallel execution) work within a Byzantine Agreement setting where there are well-defined requirements on their output distribution. Therefore, by analyzing their output in this parallel-execution setting, we are able to make claims regarding their output in the system S.

Let A1, B1 and C1 be parties running an execution of Π, denoted Π1, where B1 and C1 both have input 0. Furthermore, let A2, B2 and C2 be running a parallel execution of Π, denoted Π2, where B2 and C2 both have input 1. Recall that B1 and B2 are independent copies of the party B with the same key tape (as defined in Definition 4); likewise for C1 and C2. Now, let A be an adversary who controls both A1 in Π1 and A2 in Π2 (recall that the faulty party can coordinate between the different executions). Party A's strategy is to maliciously generate an execution in which B1's and C1's view in Π1 is identical to their view in S. A achieves this by redirecting edges of the two parallel triangles (representing the parallel execution), so that the overall system has the same behavior as S; see Figure 2.

Before proceeding, we present the following notation: let msg_i(A1, B1) denote the message sent from A1 to B1 in the i-th round of the protocol execution. We now formally show how the adversary A works. A invokes parties A1 and A2 upon inputs 0 and 1 respectively. We stress that A1 and A2 follow the instructions of protocol Π exactly. However, A provides them with their incoming messages and sends their outgoing messages for them. The only malicious behavior of A is in the redirection of messages to and from A1 and A2. A full description of A's code is as follows (we recommend the reader refer to Figure 2 in order to clarify the following):

1. Send outgoing messages of round i: A obtains messages msg_i(A1, B1) and msg_i(A1, C1) from A1 in Π1, and messages msg_i(A2, B2) and msg_i(A2, C2) from A2 in Π2 (these are the round-i messages sent by A1 and A2 to the other parties; as we have mentioned, A1 and A2 compute these messages according to the protocol definition and based on their view).
   - In Π1, A sends B1 the message msg_i(A1, B1) and sends C1 the message msg_i(A2, C2) (and thus the (A1, C1) directed edge is replaced by the directed edge (A2, C1)).
   - In Π2, A sends B2 the message msg_i(A2, B2) and sends C2 the message msg_i(A1, C1) (and thus the (A2, C2) directed edge is replaced by the directed edge (A1, C2)).

2. Obtain incoming messages of round i: A receives messages msg_i(B1, A1) and msg_i(C1, A1) from B1 and C1 in round i of Π1, and messages msg_i(B2, A2) and msg_i(C2, A2) from B2 and C2 in round i of Π2.
   - A passes A1 in Π1 the messages msg_i(B1, A1) and msg_i(C2, A2) (and thus the (C1, A1) directed edge is replaced by the directed edge (C2, A1)).
   - A passes A2 in Π2 the messages msg_i(B2, A2) and msg_i(C1, A1) (and thus the (C2, A2) directed edge is replaced by the directed edge (C1, A2)).

Figure 2: Redirecting edges of Π1 and Π2 to make a hexagon. (The figure shows the two triangles Π1 and Π2 with the redirected edges.)

Specifically, the (A1, C1) and (A2, C2) edges of Π1 and Π2 respectively are removed, and the (A1, C2) and (A2, C1) edges of S are added in their place. A is able to make such a modification because it only involves redirecting messages to and from parties that it controls (i.e., A1 and A2).

We now claim that B1 and C1's view in Π1 is identical to B1 and C1's view in S. This holds because in the parallel execution of Π1 and Π2, all parties follow the protocol definition (including A1 and A2). The same is true in the system S, except that party A1 is connected to B1 and C2 instead of to B1 and C1; likewise, A2 is connected to B2 and C1 instead of to B2 and C2. However, by the definition of A, the messages seen by all parties in the parallel execution of Π1 and Π2 are exactly the same as the messages seen by the parties in S (e.g., the messages seen by C1 in Π1 are those sent by B1 and A2, exactly as in S). Therefore, the views of B1 and C1 in the parallel execution maliciously controlled by A are identical to their views in S. (In fact, the views of all the parties in the parallel execution with A are identical to their views in the system S. However, in order to prove Claim 3.2, we need only analyze the views of B1 and C1.)

We note the crucial difference between this proof and that of Fischer et al. [7]: there, the faulty party A is able to simulate the entire A1-C2-B2-A2 segment of the hexagon system S by itself; thus, in a single execution of Π with B1 and C1, party A can simulate the hexagon. Here, due to the fact that the parties B2 and C2 have secret information that A does not have access to, A is unable to simulate their behavior itself. Rather, A needs to redirect messages from the parallel execution of Π in order to complete the hexagon.

By the assumption that Π is a correct Byzantine Agreement protocol that composes twice in parallel, we have that, except with negligible probability, in Π1 both B1 and C1 halt within rounds(Π) steps and output 0. The fact that they both output 0 is derived from the fact that B1 and C1 are an honest majority with the same input value 0. Therefore, they must output 0 in the face of any adversarial A; in particular, this holds with respect to the specific adversary A described above. Since the views of B1 and C1 in S are identical to their views in Π1, we conclude that in the system S they also halt within rounds(Π) steps and output 0 (except with negligible probability). This completes the proof of the claim.
Using analogous arguments, we obtain the following two claims:

Claim 3.3. Except with negligible probability, parties A2 and B2 halt within rounds(Π) steps and output 1 in the system S.

In order to prove this claim, the faulty party is C, and it works in a similar way to A in the proof of Claim 3.2 above. (The only difference is regarding the edges that are redirected.)

Claim 3.4. Except with negligible probability, parties A2 and C1 halt within rounds(Π) steps and output the same value in the system S.

Similarly, this claim is proven by taking the faulty party to be B, who follows a similar strategy to A in the proof of Claim 3.2 above.

Combining Claims 3.2, 3.3 and 3.4, we obtain a contradiction: on the one hand, C1 must output 0 in S (Claim 3.2) and A2 must output 1 in S (Claim 3.3); on the other hand, by Claim 3.4, parties A2 and C1 must output the same value. This concludes the proof of the lemma.

Theorem 1 is derived from Lemma 3.1 in the standard way [15, 13], by showing that if there exists a protocol that is correct for any n ≥ 3 and n/3 faulty parties, then one can construct a protocol for 3 parties that can tolerate one faulty party. This is in contradiction to Lemma 3.1, and thus Theorem 1 is implied.

The following corollary, referring to concurrent composition, is immediately derived from the fact that parallel composition (where the scheduling of the messages is fixed and synchronized) is merely a special case of concurrent composition (where the adversary controls the scheduling).

Corollary 1. No protocol for authenticated Byzantine Agreement that composes concurrently (even twice) can tolerate n/3 or more faulty parties.

Sequential Composition of Deterministic Protocols. We now show that there is a significant limitation on deterministic Byzantine Agreement protocols that compose sequentially. Specifically, any protocol which terminates within r rounds can be composed sequentially at most 2r − 1 times. The lower bound is derived by showing that for any deterministic protocol Π, r rounds of the hexagonal system S (see Figure 1) can be simulated in 2r sequential executions of Π. As we have seen in the proof of Theorem 1, the ability to simulate S results in a contradiction to the correctness of the Byzantine Agreement protocol Π. However, a contradiction is only derived if the system S halts. Nevertheless, since Π terminates within r rounds, the system S also halts within r rounds. We conclude that the protocol Π can be sequentially composed at most 2r − 1 times. We remark that in actuality one can prove a more general statement: for any deterministic protocol, r rounds of 2 parallel executions of the protocol can be perfectly simulated in 2r sequential executions of the same protocol. (More generally, r rounds of k parallel executions of a protocol can be simulated in k·r sequential executions.) Thus, essentially, the deterministic sequential lower bound is derived by reducing it to the parallel composition case of Theorem 1. That is,

Theorem 2. Let Π be a deterministic protocol for authenticated Byzantine Agreement that concludes after r rounds of communication. Then, Π can be sequentially composed at most 2r − 1 times.
## 4. SEQUENTIALLY COMPOSABLE RANDOMIZED PROTOCOLS

In this section we present two results. The first is a protocol which tolerates any t < n/2 faulty parties and has polynomial communication complexity (i.e., bandwidth). The second is a protocol that can tolerate any number of faulty parties but is exponential in the number of participating parties. The building block for both of the above protocols will be a randomized (sequentially composable) protocol, ABG_{3,1}, for authenticated Byzantine Generals between 3 parties, tolerating one faulty party. Recall that ABG_{n,t} denotes an authenticated Byzantine Generals protocol for n parties that tolerates up to t faults. We first present the protocol ABG_{3,1} and then show how it can be used to achieve the above-described results.

## 4.1 Sequentially Composable ABG_{3,1}

For this protocol we assume three parties: the general G and the recipients P1, P2. The General has an input value x. According to Definition 1, parties P1 and P2 need to output the same value x', and, if G is not faulty, then x' = x. As is evident from the proofs of the impossibility, what hinders a solution is that faulty parties can import messages from previous executions, and there is no means to distinguish between those and the current messages. Thus, if some freshness could be introduced into the signatures, then this would foil the adversary's actions. Yet agreeing on such freshness would put us in a circular problem. Nevertheless, the case of three parties is different: here there are only two parties who need to receive each signature. Furthermore, it turns out that it suffices if the parties who are receiving a signature can jointly agree on a fresh string. Fortunately, two parties can easily agree on a new fresh value: they simply exchange messages and set the fresh string to equal the concatenation of the exchanged values. Now, in the protocol which follows for three parties, we require that whenever a party signs a message, it uses freshness generated by the two remaining parties. We note that in the protocol only the General, G, signs a message, and therefore only it needs a public key. The protocol is described in Figure 3. For simplicity, we assume that the signature scheme is defined such that σ_pk(z) also contains the value z.

As we will wish to incorporate Protocol 3 into a protocol with n parties, we state a broader claim for the composition than for a simple three-party setting.

Lemma 4.1. Assume that the signature scheme Σ is existentially secure against adaptive chosen-message attacks. Then, Protocol 3 is a secure protocol for ABG_{3,1} that can be composed sequentially within a system of n parties, in which t may become faulty, for any t < n.
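Since Figure 3 is not reproduced above, the sketch below illustrates only the freshness mechanism just described, not the full Protocol 3 or its step numbering: the recipients contribute random label halves, and the General signs its input together with both halves, so a signature from another execution fails verification. It uses the third-party `cryptography` package for Ed25519; all names here are ours.

```python
import secrets
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

sk_G = Ed25519PrivateKey.generate()      # only the General needs a key
pk_G = sk_G.public_key()

# P1 and P2 each pick a random label half and exchange them; the fresh
# label is the concatenation of the two halves.
l1, l2 = secrets.token_bytes(16), secrets.token_bytes(16)

# The General signs (x, l1, l2) rather than x alone.
x = b"0"
sigma = sk_G.sign(x + l1 + l2)

# A recipient accepts only signatures bound to this execution's label.
pk_G.verify(sigma, x + l1 + l2)          # accepted
try:
    old_l1, old_l2 = secrets.token_bytes(16), secrets.token_bytes(16)
    pk_G.verify(sigma, x + old_l1 + old_l2)   # labels from another run
except InvalidSignature:
    print("signature replayed under stale labels is rejected")
```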
However, in this case, F do es not have the as so ciated signing-key. Nevertheless, it do es have access to the signing oracle asso ciated with pk (which is Pj 's public veri�cation-key). Therefore, F executes these signatures by accessing its oracle. In particular, for lab els `1 ; `2 that it receives during the simulation, it queries the signature oracle for �pk (x; `1 ; ` 2 ). � Corruptions: If at any p oint, A corrupts a party Pl 6= Pj , then F hands A the signing-key that is asso ciated with Pl (this is the only secret information that Pl has). On the other hand, if at any p oint A corrupts Proof: We prove the theorem by contradiction. Assume Pj , then F ab orts (and do es not succeed in forging). that a series of ABG3;1 proto cols are run sequentially, such that in some (or all) of them, the adversary succeeded in foiling agreement with non-negligible probability. We will show that in such a case, we can construct a forger F for the signature scheme who succeeds with non-negligible prob ability. This will then b e in contradiction to the security of the signature scheme. As there are n parties and the adversary can control up to t of them, there may b e executions where two or three of the parties are corrupted. However, in such a case, agree ment holds vacuously. On the other hand, any execution in which all three parties are honest must b e correct. There fore, agreement can only b e foiled in the case that exactly one participating party is corrupted. We �rst claim that when A plays the General in an ex ecution, it cannot foil the agreement. This is b ecause P1 Pj Throughout the ab ove-describ ed simulation, F monitors each execution and waits for an execution in which exactly one party is corrupt and the agreement is foiled. If no such ex ecution o ccurs, then F ab orts. Otherwise, in the �rst foiled execution, F checks if the uncorrupted Pj is the general in this execution. If not, then F ab orts (without succeeding in generating a forgery). Otherwise, we have an execution in which Pj is the general and agreement is foiled. In such a case, F succeeds in generating a forgery as follows. As we have mentioned, agreement can only b e foiled if exactly one party is faulty. Since by assumption Pj is not corrupted, we have that one of the recipients P1 or P2 are corrupted; without loss of generality, let P1 b e the corrupted party. (We note that F plays the roles of b oth honest parties and P2 in the simulation.) Now, since the agreement was and P2 's views of the messages sent by A (playing G) are foiled, we know that P2 do es not output P j 's input value identical. Furthermore, their decision making pro cess based on their view is deterministic. Therefore, they must output the same value. We stress that this is irresp ective of how many executions have passed (and is also not dep endent at all on the security of the signature scheme b eing used). Thus, it must b e the case that the foiled execution is one where the general is an honest party. As we have mentioned, we build a forger F for the signature scheme � who uses A. The forger F receives as input a public veri�cation key pk, and access to a signing oracle asso ciated with this key. F b egins by cho osing at random one of the parties, say x, which means that it defaulted in Step 5. This can only happ en if P2 received two valid signatures on the lab el ` which it sent Pj in this execution. Now, P2 clearly received b ecause P2 a correct signature m on Pj 's input using the lab el ` from Pj itself. 
(In fact, by the simulation, this signature is generated by F accessing its signature oracle.) However, in addition, 0 P2 0 m must have received a valid signature m constitutes Pj from P1, where 's signature on a string that contains lab el 0 ` and a di�erent message x . With overwhelming probabil ity the lab el ` did not app ear in any previous execution, is honest and cho oses its p ortion of the lab el Pj , and asso ciating the veri�cation-key pk with this party. at random. Thus, previously in the simulation, the signing ----- oracle was never queried with a string containing `. Further 0 more, by the assumption that x 6= x, the oracle query by F in this execution was di�erent to the string up on which 0 m 0 is a signature. We conclude that m is a valid signature on a message, and that F did not query the signing oracle 0 with this message. Therefore, F outputs m successful forgery. and this is a setting, the proto col can only b e carried out for n = log k parties (where k is a security parameter). We stress that the fact that the numb er of parties must b e logarithmic in the security parameter is due to two reasons. First, we wish the proto col to run in p olynomial time. Second, we use a signature scheme and this is only secure for p olynomial-time adversaries, and a p olynomial numb er of signatures. Our proto col is constructed by presenting a transforma tion that takes a sequentially comp osable ABG proto col for n�1 parties which tolerates n�3 faulty parties, ABGn�1;n�3, and pro duces a sequentially comp osable ABG proto col for It remains to analyze the probability that F succeeds in this forgery. First, it is easy to see that when F do es not ab ort, the simulation of the sequential executions is perfect, and that A's view in this simulation is identical to a real exe . cution. Furthermore, the probability that Pj is the identity n parties which tolerates n � 2 faulty parties, ABGn;n�2 of the (uncorrupted) general in the �rst foiled agreement Then, given our proto col for broadcast among three parties , we can apply our equals 1=n exactly. The fact that Pj is chosen ahead of which tolerates one faulty party, ABG3;1 time makes no di�erence b ecause the simulation is p erfect. Therefore, the choice of Pj by F do es not make any di�er transformation and obtain ABGn;n�2 for any n. The idea for the transformation is closely related to the ideas b ehind the proto col for Byzantine Generals for three parties. The solution for the three-party broadcast assumes two-party broadcast (which is trivial). Using two-party broad cast, agreement on a fresh lab el can b e reached. Having agreed on this lab el, the two p oint communications with the General are suÆcient. Each party sends its claimed fresh lab el to the General, and the General includes the two received lab els inside any signature that it pro duces. Our general transformation will work in the same manner. We ence to the b ehavior of A. We conclude that F succeeds in forging with probability 1=n times the probability that A succeeds in foiling agreement (which is non-negligible). This contradicts the security of the signature scheme. ## 4.2 Sequentially Composable ABGn;n=2 Fitzi and Maurer [8] present a proto col for the Byzan tine Generals problem that tolerates any t < n=2 faulty parties. Their proto col is for the information-theoretic and unauthenticated mo del. However, in addition to the p oint to-p oint network, they assume that every triplet of parties is connected with an ideal (3-party) broadcast channel. 
As we have shown in Section 4.1, given a public-key infrastructure for signature schemes, it is possible to implement secure broadcast among three parties that composes sequentially. Thus, a protocol for ABG_{n,n/2} is derived by substituting the ideal 3-party broadcast primitive in the protocol of Fitzi and Maurer [8] with Protocol 3. Since Protocol 3 and the protocol of Fitzi and Maurer [8] both compose sequentially, the resulting protocol also composes sequentially.

Theorem 3. Assume that there exists a signature scheme that is existentially secure against chosen-message attacks. Then, there exists a randomized protocol for authenticated Byzantine Generals that tolerates t < n/2 faulty parties and composes sequentially any polynomial number of times.

As we show in Section 5, it is possible to execute many copies of an authenticated Byzantine Generals protocol concurrently by allocating each execution a unique identifier that is common and known to all parties. Now, inside the Fitzi-Maurer protocol we can allocate unique indices to each invocation of the ABG_{3,1} protocol. We can therefore run the ABG_{3,1} protocols in parallel (rather than sequentially), improving the round complexity of the resulting protocol. In particular, our protocol is of the same round complexity as the underlying Fitzi-Maurer protocol. (We stress that the fact that the ABG_{3,1} subprotocols can be executed in parallel within the ABG_{n,n/2} protocol does not imply that the ABG_{n,n/2} protocol itself can compose in parallel. Rather, by our impossibility result, we know that it indeed cannot be composed in parallel.)
## 4.3 Sequentially Composable ABG_{n,t} for any t

In this section we describe a protocol for the Byzantine Generals problem for n parties which can tolerate any number of faulty parties. However, this protocol is exponential in the number of participating parties. Therefore, in our setting the protocol can only be carried out for n = log k parties (where k is a security parameter). We stress that the fact that the number of parties must be logarithmic in the security parameter is due to two reasons. First, we wish the protocol to run in polynomial time. Second, we use a signature scheme, and this is only secure for polynomial-time adversaries and a polynomial number of signatures.

Our protocol is constructed by presenting a transformation that takes a sequentially composable ABG protocol for n−1 parties which tolerates n−3 faulty parties, ABG_{n−1,n−3}, and produces a sequentially composable ABG protocol for n parties which tolerates n−2 faulty parties, ABG_{n,n−2}. Then, given our protocol for broadcast among three parties which tolerates one faulty party, ABG_{3,1}, we can apply our transformation and obtain ABG_{n,n−2} for any n.

The idea for the transformation is closely related to the ideas behind the protocol for Byzantine Generals for three parties. The solution for the three-party broadcast assumes two-party broadcast (which is trivial). Using two-party broadcast, agreement on a fresh label can be reached. Having agreed on this label, the two point-to-point communications with the General are sufficient: each party sends its claimed fresh label to the General, and the General includes the two received labels inside any signature that it produces. Our general transformation works in the same manner. We use the ABG_{n−1,n−3} protocol to have all parties (apart from the General) agree on a random label. Then each party privately sends this label to the General, who then includes all labels in its signatures. Thus, we prove:

Theorem 4. Assume that there exists a signature scheme that is existentially secure against chosen-message attacks, for adversaries running in time poly(k). Then, there exists a Byzantine Generals protocol for O(log k) parties that tolerates any number of faulty parties and composes sequentially.

The formal description of the protocol and the proof of the theorem are omitted due to lack of space in this abstract.

## 5. AUTHENTICATED BYZANTINE AGREEMENT USING UNIQUE IDENTIFIERS

In this section we consider an augmentation to the authenticated model in which each execution is assigned a unique and common identifier. We show that in such a model it is possible to achieve Byzantine Agreement/Generals that composes concurrently, for any number of faulty parties. We stress that in the authenticated model itself it is not possible for the parties to agree on unique and common identifiers without some external help. This is because agreeing on a common identifier amounts to solving the Byzantine Agreement problem, and we have proven that this cannot be achieved for t ≥ n/3 when composition is required. Therefore, these identifiers must come from outside the system (and as such, assuming their existence is an augmentation to the authenticated model).

Intuitively, the existence of unique identifiers helps in the authenticated model for the following reason. Recall that our lower bound is based on the ability of the adversary to borrow signed messages from one execution to another. Now, if each signature also includes the session identifier, then the honest parties can easily distinguish between messages signed in this execution and messages signed in a different execution. It turns out that this is enough. That is, we give a transformation from almost any Byzantine Agreement protocol based on signature schemes to a protocol that composes concurrently when unique identifiers exist. By "almost any protocol" we mean that this transformation applies for any protocol that uses the signature scheme for signing and verifying messages only. This is the natural use of the signature scheme, and all known protocols indeed work in this way.

More formally, our transformation works as follows. Let Π be a protocol for authenticated Byzantine Agreement. We define a modified protocol Π(id) that works as follows (a code sketch follows the list):

- Each party is given the identifier id as auxiliary input.
- If a party P_i has an instruction in Π to sign a given message m with its secret key sk_i, then P_i signs upon id ∘ m instead (where ∘ denotes concatenation).
- If a party P_i has an instruction in Π to verify a given signature σ on a message m with a public key pk_j, then P_i verifies that σ is a valid signature for the message id ∘ m.
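As a concrete rendering of this transformation, the wrapper below prefixes every sign/verify call with the session identifier. It is a sketch under our own naming, with Ed25519 from the third-party `cryptography` package standing in for the abstract signature scheme.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

class SessionSigner:
    """Wraps a party's signing key so that Pi signs id ∘ m, not m."""
    def __init__(self, sk, session_id: bytes):
        self.sk, self.sid = sk, session_id

    def sign(self, m: bytes) -> bytes:
        return self.sk.sign(self.sid + m)

def session_verify(pk, session_id: bytes, m: bytes, sig: bytes) -> bool:
    """Accept sig only if it is valid for id ∘ m under this session."""
    try:
        pk.verify(sig, session_id + m)
        return True
    except InvalidSignature:
        return False

sk = Ed25519PrivateKey.generate()
pk = sk.public_key()
sig = SessionSigner(sk, b"id-1").sign(b"vote:0")
assert session_verify(pk, b"id-1", b"vote:0", sig)       # same session
assert not session_verify(pk, b"id-2", b"vote:0", sig)   # borrowed signature
```

(In practice one would also length-prefix the identifier so that id ∘ m parses unambiguously; the theorem itself only requires that the identifiers be unique.)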
We now state our theorem:

Theorem 5. Let Π be a secure protocol for authenticated Byzantine Agreement which uses an existentially unforgeable signature scheme. Furthermore, this scheme is used for generating and verifying signatures only. Let the protocol Π(id) be obtained from Π as described above, and let id_1, ..., id_ℓ be a series of ℓ unique strings. Then, the protocols Π(id_1), ..., Π(id_ℓ) all solve the Byzantine Agreement problem, even when run concurrently.

We conclude by noting that it is not at all clear how such an augmentation to the authenticated model can be achieved in practice. In particular, requiring the on-line participation of a trusted party who assigns identifiers to every execution is clearly impractical. (Furthermore, such a party could just be used to directly implement broadcast.) However, we do note one important scenario where Theorem 5 can be applied. As we have mentioned, secure protocols often use many invocations of a broadcast primitive. Furthermore, in order to improve round efficiency, in any given round many broadcasts may be simultaneously executed. The key point here is that within the secure protocol, unique identifiers can be allocated to each broadcast (by the protocol designer). Therefore, authenticated Byzantine Agreement can be used. Of course, this does not change the fact that the secure protocol itself will not compose in parallel or concurrently. However, it does mean that its security is guaranteed in the stand-alone setting, and a physical broadcast channel is not necessary.

## 6. OPEN PROBLEMS

Our work leaves open a number of natural questions. First, an unresolved question is whether or not it is possible to construct randomized protocols for authenticated Byzantine Generals that sequentially compose, for any n and any number of faulty parties. Second, it is unknown whether or not it is possible to construct a deterministic protocol that terminates in r rounds and sequentially composes ℓ times, for some 2 ≤ ℓ ≤ 2r − 1. Another question that arises from this work is to find a realistic computational model for Byzantine Agreement that does allow parallel and concurrent composition for n/3 or more faulty parties.

## Acknowledgments

We would like to thank Oded Goldreich for pointing out a simpler proof of Theorem 5, and Matthias Fitzi for discussions about [8].

## 7. REFERENCES

[1] D. Beaver. Secure multiparty protocols and zero-knowledge proof systems tolerating a faulty minority. Journal of Cryptology, 4:75-122, 1991.
[2] R. Canetti. Security and composition of multiparty cryptographic protocols. Journal of Cryptology, 13(1):143-202, 2000.
[3] R. Canetti. Universally composable security: A new paradigm for cryptographic protocols. In 42nd FOCS, pages 136-145, 2001.
[4] R. Canetti, J. Kilian, E. Petrank, and A. Rosen. Black-box concurrent zero-knowledge requires Omega(log n) rounds. In 33rd STOC, pages 570-579, 2001.
[5] Y. Dodis and S. Micali. Parallel reducibility for information-theoretically secure computation. In Crypto '00, pages 74-92, 2000. LNCS No. 1880.
[6] C. Dwork, M. Naor, and A. Sahai. Concurrent zero-knowledge. In 30th STOC, pages 409-418, 1998.
[7] M. Fischer, N. Lynch, and M. Merritt. Easy impossibility proofs for distributed consensus problems. Distributed Computing, 1(1):26-39, 1986.
[8] M. Fitzi and U. Maurer. From partial consistency to global broadcast. In 32nd STOC, pages 494-503, 2000.
[9] O. Goldreich. Concurrent zero-knowledge with timing revisited. In 34th STOC, 2002.
[10] O. Goldreich and H. Krawczyk. On the composition of zero-knowledge proof systems. SIAM J. Computing, 25(1):169-192, 1996.
[11] S. Goldwasser, S. Micali, and R. L. Rivest. A digital signature scheme secure against adaptive chosen-message attacks. SIAM J. Computing, 17(2):281-308, Apr. 1988.
[12] L. Gong, P. Lincoln, and J. Rushby. Byzantine agreement with authentication: Observations and applications in tolerating hybrid and link faults. In Dependable Computing for Critical Applications, 1995.
[13] L. Lamport, R. Shostak, and M. Pease. The Byzantine generals problem. ACM Trans. Prog. Lang. and Systems, 4(3):382-401, 1982.
[14] S. Micali and P. Rogaway. Secure computation. In Crypto '91, pages 392-404, 1991. LNCS No. 576.
[15] M. Pease, R. Shostak, and L. Lamport. Reaching agreement in the presence of faults. Journal of the ACM, 27(2):228-234, 1980.
[16] B. Pfitzmann and M. Waidner. Information-theoretic pseudosignatures and Byzantine agreement for t >= n/3. Technical Report RZ 2882 (#90830), IBM Research, 1996.
[17] R. Richardson and J. Kilian. On the concurrent composition of zero-knowledge proofs. In Eurocrypt '99, pages 311-326, 1999. LNCS No. 1592.
[18] R. L. Rivest, A. Shamir, and L. M. Adleman. A method for obtaining digital signatures and public-key cryptosystems. Communications of the ACM, 21(2):120-126, 1978.
{ "disclaimer": "Notice: The following paper fields have been elided by the publisher: {'references', 'abstract'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1145/509907.509982?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1145/509907.509982, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "GREEN", "url": "http://www.cs.brown.edu/research/pubs/pdfs/2002/Lindell-2002-CAB.pdf" }
2,002
[ "JournalArticle" ]
true
2002-05-19T00:00:00
[]
18,021
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Mathematics", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/02318c72964c65db5dc32b3997968c673c3bef9a
[ "Computer Science" ]
0.8153
Private-Key Algebraic-Coded Cryptosystems
02318c72964c65db5dc32b3997968c673c3bef9a
Annual International Cryptology Conference
[ { "authorId": "2093398212", "name": "T. Rao" }, { "authorId": "72321938", "name": "Kil-Hyun Nam" } ]
{ "alternate_issns": null, "alternate_names": [ "Int Cryptol Conf", "Annu Int Cryptol Conf", "CRYPTO", "International Cryptology Conference" ], "alternate_urls": null, "id": "212b6868-c374-4ba2-ad32-19fde8004623", "issn": null, "name": "Annual International Cryptology Conference", "type": "conference", "url": "http://www.iacr.org/" }
null
# PRIVATE-KEY ALGEBRAIC-CODED CRYPTOSYSTEMS *

T. R. N. Rao, The Center for Advanced Computer Studies, University of Southwestern Louisiana, Lafayette, Louisiana 70504
Kil-Hyun Nam **, National Defense College, Seoul, Korea

ABSTRACT

Public-key cryptosystems using very large distance algebraic codes have been studied previously. Private-key cryptosystems using simpler codes have also been the subject of some study recently. This paper proposes a new approach to private-key cryptosystems which allows the use of very simple codes, such as distance-3 and distance-4 Hamming codes. This new approach gives not only very efficient encoding/decoding and very high information rates, but also appears to be secure even under chosen-plaintext attacks.

Keywords: cryptosystems, public-key cryptosystems, private-key cryptosystems, algebraic codes, crypt-complexity, chosen-plaintext attack, Joint Encryption and Error-control Coding

* This research is supported by a grant from National Security Agency, grant # MDA 904-84-H-005.
** This author's research was performed while he was with the Center for Advanced Computer Studies at University of Southwestern Louisiana.

## 1. INTRODUCTION

McEliece introduced a public-key cryptosystem based on algebraic coding theory using t-error-correcting Goppa codes [McEliece '78]. But the McEliece Public-key Cryptosystem (MPBC) requires large block lengths with capabilities to correct a large number of errors (n ≈ 1000 bits, t ≈ 50 bits) to be effective. This involves very large computational (encryption and decryption) overhead to be practical in computer communications. Private-key Algebraic-coded Cryptosystems (PRAC) were suggested by Rao [Rao '84b] using the same techniques as MPBC, but keeping the public generator matrix private. PRAC provides better security with simpler error-correcting codes and hence requires relatively low computational overhead. However, we show that PRAC can be broken easily by a chosen-plaintext attack. Both MPBC and PRAC are classified as Algebraic-Coded Cryptosystems (ACC) here. This paper introduces a new approach to PRAC, which requires simple error-correcting codes (i.e., distance-3 codes) and also provides a much higher security level.

## 1.1. McEliece Public-key Cryptosystems (MPBC)

Encryption

Let G be a k×n generator matrix of a linear code over GF(2) capable of t-error correction. The rate of the code is k/n. We can select a random k×k nonsingular matrix S, called the scrambler, and a random n×n permutation matrix P. Having G, S and P, we can compute the public generator matrix G' such that G' = SGP, which is combinatorially equivalent to G. Then the encryption is done by:

C = MG' + Z,

where C : ciphertext of length n, M : plaintext message of length k, Z : random error vector of length n with weight t. Note that vectors are set in italics, and weight means Hamming weight.

Decryption

The decryption is very straightforward.
From the encryption equation G' = SGP,

C = MG' + Z = MSGP + Z = M'GP + Z, where M' = MS.

Hence, we can recover M as given by the following steps.

Step 1. Compute C':  C' = C P^T = M'G + Z P^T = M'G + Z', where Z' = Z P^T. (Note: Z' has the same weight as Z, since P and P^T are permutation matrices.)

Step 2. Decoding and error correction (Patterson algorithm [McEliece '77]).

Step 3. Recover the plaintext M:  M = M' S^{-1}.

Cryptanalysis of MPBC

As suggested by McEliece in his paper [McEliece '78], there could be two kinds of basic attacks for the cryptanalyst to try.

(a) Factoring S, G and P from G'

Since the number of codes which are combinatorially equivalent to a given code is astronomical, it is a hopeless task to find out the exact keys S, G and P used for G'. However, the cryptanalyst needs only some equivalent code. For the given G', the cryptanalyst can obtain S_i, G_i and P_i satisfying the equation S_i G_i P_i = G', where G_i is a generator in systematic form. G_i is obtained from G' by elementary row operations (row canonical reduction) and column operations. G', G_i and G are all said to be combinatorially equivalent. Whereas G corresponds directly to a Goppa code, which has well-understood and well-known decoding algorithms, no such correspondence would be possible for G_i. Trial-and-error manipulation to obtain a G_i coinciding with an equivalent Alternant code generator would require an astronomically large work factor.

(b) Recovering M from C directly without keys

Another approach involves solving a set of k unknowns from n simultaneous equations for all possible Z values. Let M and C be a plaintext pair:

M = m_1 m_2 m_3 ... m_k
C = c_1 c_2 c_3 ... c_k ... c_n
Z = z_1 z_2 z_3 ... z_k ... z_n
G' = [G_ij'], i = 1, ..., k; j = 1, ..., n (a t-error-correcting algebraic code).

Then, for j = 1, ..., n:

c_j = m_1 G_1j' + m_2 G_2j' + ... + m_k G_kj' + z_j.

To solve the k unknowns (m_1, m_2, ..., m_k), k^3 operations are required, because k equations are sufficient to solve the system if the code is a maximal distance separable (MDS) code. Otherwise, at most k' = n − d + 1 equations are required to solve for the k unknowns [Pless '82]. Since t is smaller than n − k, it is possible that the cryptanalyst could select k equations containing no errors from the n equations. Therefore, the cryptanalyst could repeatedly solve equations by selecting arbitrary sets of k of the n simultaneous equations, under the assumption of no errors in the selected equations, until a meaningful plaintext is obtained. The probability of no errors in k equations is

P_k = ∏_{i=0}^{k−1} (n − t − i)/(n − i),

and the average number of repetitions is P_k^{-1}. Hence, the average work factor T is:

T = k^3 · P_k^{-1}.

However, this does not include the work factor to check whether the plaintext M obtained by solving the equations is correct (i.e., meaningful) or not. It is assumed that the plaintexts are from a source such as natural language or a programming language, which contains an enormous amount of redundancy [Denning '82]. Redundancy in M helps to determine the validity of the plaintext derived.

## 1.2. Private-key Algebraic-coded Cryptosystems (PRAC)

To increase the information rate and to reduce the computational (encryption and decryption) overhead of MPBC, Private-key Algebraic-coded Cryptosystems (PRAC) were suggested [Rao '84b]. PRAC can provide better security with simpler error-correcting codes and hence requires relatively low computational overhead compared to MPBC. PRAC keeps G' private, as well as S, P and G, to provide a higher security level. A known-plaintext attack on PRAC is feasible by solving matrices for each column vector of G' independently, but this method requires a very large set of known (M, C) pairs. Hence, this attack can be foiled by periodic change or modification of the keys by the cryptographer. However, the analysis given below shows that PRAC still requires large t to be secure from a chosen-plaintext attack.
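For a feel of the numbers in attack (b), the snippet below evaluates P_k and T at roughly the MPBC parameters quoted in the introduction; the exact (n, k) pair is our illustrative choice.

```python
from math import comb

# Attack (b): probability that k randomly chosen equations out of n
# avoid all t error positions, and the average work factor
# T = k^3 / P_k.
n, k, t = 1024, 524, 50
P_k = comb(n - t, k) / comb(n, k)   # equals the product formula above
T = k**3 / P_k
print(f"P_k = {P_k:.3e}, average work factor T = {T:.3e}")
```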
PRAC can provide better security with . simpler error correcting codes, hence, require relatively low computational over- head compared to MPBC. PRAC keeps G' private `as well` `as` S, P and G to provide higher security level. **A** known-plaintext attack to PFL4C is feasible by solving matrices for each column vector of G' independently but this method requires a very large set of known **_( M , C ) pairs. Hence, this attack can be foiled_** by periodic change or modification of the keys by the cryptographer. However, the analysis given below shows that PRAC still requires large t to be secure from a chosen- plaintext attack. ----- **Chosen- Plaint ext** **Attack** The cryptanalyst is required to go through two steps. Step 1 : Solve for G’ from a large set of _( M , C ) pairs._ Step **2** : Determine _M_ from `C using` G’ obtained in Step 1 (same **work** fac- tor `as in MPBC).` It can be safely assumed that a chosen plaintext of the form _M_ = (00 - - `.010 . . .O)` with only one `1 in` ith position (for i = 1, . . . ,k) is not allowed by the cryptosystem. However, a chosen-plaintext attack may proceed `as fol-` lows. Let **M1 and** `M 2 are two plaintext differing in one p i t i o n only, that` is, _M , -_ **_M ,_** = `(00 . . . 010` . . . 0) ##### ith position for i = 1, . . . ,k then, _C, -_ `C,` = **_gj’_** + ( 2 1 - `22)` **(Es. 1)** where **_gi’_** `is` the ith row vector of G’. The Hamming weight of _(2, -_ _2,)_ is at most 2t. Since t is much smaller than n, the majority of the `bits of` the vector C, - C2 correspond directly **with** **_9;’_** . We can let C1 - C2 represent one estimate of **_gj’_** . By repeating the step several times a number of estimates of **g;’** can be obtained. From these esti- mates of **_g j ’_** and by majority voting for each position, the vector **_gj’_** `can be` correctly determined. `This step repeated for` all i = 1’2,. . . .k will give us G’, which can be used to break the code by step 2. This step **2** will require a re- latively small work factor because t is small. However, a chosen-plaintext attack of the above nature can succeed only when t n - is small and it **will** not if t - - 2’ ----- **2.** **MODIFIED CRYPTOSYSTEMS** **2.1.** **Introduction** Our intent here is to obtain private-key cryptosystems using simple alge- braic codes such `as Hamming codes` or distance **_5_** `BCH codes. Furthermore,` we would still want the _Z_ vector to have a weight t sufficiently large to prc- vide good security. By a clever design we will show that we could obtain t ###### e L. Obviously it would not be possible unless we change or modiy the origi- **2** nal encryption method. Here we develop such a modification and show that it is indeed possible to use simple (i-e., short distance) algebraic codes for `PRAC which are very` secure from chosen-plaintext attacks. Clearly a system that is secure from such `an attack` is **also** secure from other attacks including known-plaintext attacks. 
### 2.2. Encryption of Modified PRAC

This approach uses a minimum-distance-3 code generator G (as an example) and uses specific error patterns for the random error vector Z, of which the average Hamming weight is approximately n/2. The encryption method is modified as follows. Let

G' = SG,

where S is a k×k nonsingular matrix, G is a k×n distance-3 code generator matrix, and G' is the k×n encryption matrix. Then

C = (MG' + Z)P,   (Eq. 2)

where M is the plaintext of length k, C is the ciphertext of length n, P is an n×n permutation matrix, and Z is a random ATE (Method 1) or an entry of the syndrome-error table (Method 2). (Methods 1 and 2 are described below.) Since the security of PRAC crucially depends on the weight of Z, the selection of Z is very important. We introduce two kinds of error patterns.

Method 1: Use adjacent t errors for Z.

Definition 1: Adjacent t Errors (ATE). An ATE is a vector of length n with t (≤ n/2) adjacent errors; i.e., an ATE consists of n − t 0's and t consecutive 1's. An ATE must not be a codeword. A random ATE can be used for Z. There exist exactly n − t + 1 ATEs for the given n and t (and n ATEs for cyclic codes).

Method 2: Use a predetermined set of vectors (syndrome-error table).

A predetermined set of vectors, consisting of one vector from each coset of the standard-array decoding table, can be used for Z. Each coset has a distinct syndrome, and there are exactly 2^{n−k} cosets [Blahut '83, Lin '83]. Therefore, we could select any set of vectors, one from each of the 2^{n−k} cosets. The set is predetermined in the sense that the decryptor knows the syndrome-error table used for Z. Fig. 1 shows an example of a standard array and syndrome-error table; one vector from each coset (boxed in the original figure) is selected as a Z vector.

Coset (leader first)                                     | Syndrome
000000 001110 010101 011011 100011 101101 110110 111000 | 000
000001 001111 010100 011010 100010 101100 110111 111001 | 001
000010 001100 010111 011001 100001 101111 110100 111010 | 010
000100 001010 010001 011111 100111 101001 110010 111100 | 100
001000 000110 011101 010011 101011 100101 111110 110000 | 110
010000 011110 000101 001011 110011 111101 100110 101000 | 101
100000 101110 110101 111011 000011 001101 010110 011000 | 011
001001 000111 011100 010010 101010 100100 111111 110001 | 111

Fig. 1. Standard array for the (6, 3, 3) code.

G and P are secret encryption keys, and the syndrome-error table is also secret in Method 2.

### 2.3. Decryption of Modified Cryptosystems

From the encryption algorithm (Eq. 2),

C = (MG' + Z)P = MSGP + ZP = M'GP + ZP, where M' = MS.

Decryption can be done using the secret keys S^{-1}, H^T (with GH^T = 0) and P, through the following steps.

Step 1. Obtain C':  C' = C P^T = M'G + Z.

Step 2. Find the error pattern and recover M':  C' H^T = M' G H^T + Z H^T = Z H^T (the syndrome). Identify the error pattern (use the syndrome-error table look-up for Method 2), and recover M' by correcting for the error pattern.

Step 3. Recover the plaintext M:  M = M' S^{-1}.

Note: It appears that this approach requires long keys (S, P, G, and the syndrome-error table for Method 2). However, the keys could be generated by using a pseudo-random number generator algorithm. In that case the user may require only short seeds for the keys S, P and the syndrome-error table. This problem is not addressed here; it would be a topic for future work.
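Putting Sections 2.2 and 2.3 together, here is a toy round-trip over GF(2) with the (6, 3, 3) code of Fig. 1. The scrambler, permutation and (partial) syndrome-error table are our own illustrative key choices, not values from the paper.

```python
import numpy as np

G = np.array([[1,0,0,0,1,1],
              [0,1,0,1,0,1],
              [0,0,1,1,1,0]])                  # systematic (6,3,3) generator
H = np.array([[0,1,1,1,0,0],
              [1,0,1,0,1,0],
              [1,1,0,0,0,1]])                  # parity check, G H^T = 0
S     = np.array([[1,1,0],[0,1,0],[1,0,1]])    # secret scrambler S
S_inv = np.array([[1,1,0],[0,1,0],[1,1,1]])    # S^-1 mod 2
perm  = np.array([3,0,5,1,4,2])                # secret permutation P
Gp = S @ G % 2                                 # G' = SG (kept private)

# Secret syndrome-error table (Method 2): one vector per coset.
# Only the cosets exercised by this demo are filled in.
table = {(0,0,0): np.zeros(6, dtype=int),
         (1,1,1): np.array([0,0,1,0,0,1])}

def encrypt(M, Z):
    return ((M @ Gp + Z) % 2)[perm]            # C = (MG' + Z)P

def decrypt(C):
    Cp = np.empty_like(C); Cp[perm] = C        # Step 1: C' = C P^T
    syn = tuple(Cp @ H.T % 2)                  # Step 2: syndrome = Z H^T
    Mp = (Cp - table[syn]) % 2                 # strip the error pattern
    return Mp[:3] @ S_inv % 2                  # Step 3 (G systematic): M = M' S^-1

M = np.array([1, 0, 1])
Z = table[(1, 1, 1)]                           # a table entry as the error
C = encrypt(M, Z)
assert np.array_equal(decrypt(C), M)
print(C, "->", decrypt(C))
```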
### 2.4. Application to JOEEC

Recently, Joint Encryption and Error-control Coding (JOEEC) was suggested [Rao '84a]. This approach combines the data encryption and error-control coding steps into one step to gain speed and efficiency in implementation. The modified cryptosystems could also be implemented as JOEEC by using higher-distance codes, but the application of this approach to JOEEC is presently being studied.

## 3. CRYPTANALYSIS OF MODIFIED CRYPTOSYSTEMS

The encryption algorithm (Eq. 2) can be rewritten as follows:

C = (MG' + Z)P = MG'' + ZP,

where G'' = G'P = [g_i''] for i = 1, ..., k, and g_i'' is a row vector. The following lemmas help us to establish the high level of security provided by this new approach.

Lemma 1: The number of P's that transform ATEs into non-ATEs is at least (n − b − 1)! if 2 < t ≤ n/2, where n is the length of the ATE, t is the length of the adjacent errors, and b = ⌊n/t⌋.

Outline of Proof: Let the vector V be an ATE of length n. We select the set of positions {1, 2, t, 2t, ..., bt} from V, where b = ⌊n/t⌋, and reorder these positions as an ordered set B = (1, t, 2t, ..., bt, 2). (In the original figure, V is drawn as an ATE and V' as a non-ATE with B embedded in it.) We consider a permutation map of the vector V to a vector V' with B embedded in V'. The purpose is to make V' a non-ATE.
Therefore, we have to look for a different method to cryptanalysb and it could be `as follows.` Let `Cj and` c k be two distinct ciphertexts obtained for the same plaintext `M.` Then `Cj =` _MG"_ + Zip `Ck` = _MG"_ + ZkP ###### cj - c, = (Zj -&)P The above step provides `one value` for _(Zj - zk)P._ This step needs to be re- peated until all possible pairs of 2's are used. The number of distinct 2's is given by ###### N = 2L for the Method 1, **2** ###### > - n for the Method 2; ``` N *-N ``` `and the number` of possible distinct values of **(Zi** `-zj)P is` -. **2** **An** expression for **gin** by a computation `as described in Section` 1.2.1 `is` given by **_C,-Cz_** = **gin + (Z1-ZJP** **_g i n_** = ~ **1** `C Z -` - (2, - zJP. (Es. 4 ) Hence, every possible value of (Zi - Zj)P should be tested for `(2, - Z,)P` of Eq. **_4._** Since the correctness of each row vector of G", **_g i ,_** can not be verified in- dependently, the complete solution of G" should be obtained and verified. This involves on the average work factor, T given by ``` k ###### T ?&] 1 N2 - ``` Substituting for N, T can be shown to be **(nu).** Thus we establish the fol- lowing. ----- ###### Claim : To determine G from a chosen- plaintext attack (as discussed above) `has a work factor` T = fl ( n"). It can be easily shown that the above step, namely, the determination of G" is the really dominant factor. Determination of P and `Z vectors` are straight forward after that. `As of` now, the analysis and procedure ex- plained seem to be the only possible approach to break the code and it requires an enormous work factor _0_ **_( n 2 k ) ._** **4.** **CONCLUSION** We have introduced a new approach to the private-key algebraic-coded cryptosystems requiring only simple codes such `as` distance **3** codes. These systems will be very efficient because of high information rates and low over- head for encoding and decoding logic. The chosen-plaintext attack given here appears to be the only plausible approach for cryptanalyst. It requires a work factor R **(,a2&)** and is therefore, computationally secure even for small **_k w a . It will be_** a chalIenge to find alternate methods of attack which can be successful. ###### REFERENCES plahut **'831** Richard E. Blahut, _Theory and Implementation_ `of Error Correct-` _ing Code, Addison-Wesley, 1983._ penning **'821** Dorothy E. Denning, _Cryptography and Data Security,_ Addison Wesley, **1982.** bin **'831** Shu Lin, Daniel J. Costello, Jr., _Error_ _Control Coding: Fundarnentab_ _and Applications, Prentice-Hall,_ **1983.** WcEliece '771 McEliece R. J. "The theory of Information and coding," **_(vol._** **_9_** `of` _the encyclopedia_ of _mathematics and_ **_its_** _Applications) Reading,_ Mass Addison-Wesley, 1977. ----- Coding Theory," DSN Progress _Report,_ Jet Propulsion Laboratory, **CA.,** Jan. & Feb. 1978, pp **42** - 44. Peterson **'721** W. Wesley Peterson and E. J. Weldon, Jr., _Error-Correcting_ _Codes, Second edition, The MIT_ Press, **1972.** ###### pi.. '84a] T.R.N. Rao, "Joint Encryption and Error Correction Schemes," _Proc._ _11th Inti. Symp. on_ _Cornp. Arch., Ann_ Arbor, Mich., May 1984. ###### pm '84bl T.R.N. Rao, "Cryptosystems Using Algebraic Codes," Inti. Conf. _on Computer Systems_ `6 Signal` _Processing, Bangalore, India,_ Dec. 1984. -----
{ "disclaimer": "Notice: The following paper fields have been elided by the publisher: {'references', 'abstract'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1007/3-540-47721-7_3?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1007/3-540-47721-7_3, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "BRONZE", "url": "https://link.springer.com/content/pdf/10.1007%2F3-540-47721-7_3.pdf" }
1,986
[ "JournalArticle", "Conference" ]
true
null
[]
6,407
en
[ { "category": "Education", "source": "s2-fos-model" }, { "category": "Economics", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/0231f3a936aa6807a7d215c8ef49bb96affd5f3f
[]
0.913903
The Equity and Inclusion in Higher Education: A Proposed Model for Open Data
0231f3a936aa6807a7d215c8ef49bb96affd5f3f
Social Science Research Network
[ { "authorId": "133763011", "name": "Carla Hamida" }, { "authorId": "2081117", "name": "A. Landi" }, { "authorId": "1739188795", "name": "Ziyi Liu" } ]
{ "alternate_issns": null, "alternate_names": [ "SSRN, Social Science Research Network (SSRN) home page", "SSRN Electronic Journal", "Soc Sci Res Netw", "SSRN", "SSRN Home Page", "SSRN Electron J", "Social Science Electronic Publishing presents Social Science Research Network" ], "alternate_urls": [ "www.ssrn.com/", "https://fatcat.wiki/container/tol7woxlqjeg5bmzadeg6qrg3e", "https://www.wikidata.org/wiki/Q53949192", "www.ssrn.com/en", "http://www.ssrn.com/en/", "http://umlib.nl/ssrn", "umlib.nl/ssrn" ], "id": "75d7a8c1-d871-42db-a8e4-7cf5146fdb62", "issn": "1556-5068", "name": "Social Science Research Network", "type": "journal", "url": "http://www.ssrn.com/" }
Recently, governmental institutions and private industries in power have been pushed to be more transparent so that more people can have ownership of their data. Another type of institution with a large amount of power over data are educational institutions. Colleges and Universities around the globe store a significant amount of data on millions of students, such as financial aid, grades, dropout or graduation, successes after graduation. Each institution is rated with respect to these items and more, and potential students are making decisions to go to the school based on these ratings. Therefore, it is imperative for students, who invest their time and their money into the school of their choice, to know the truth. In 2017, the College Transparency Act and the Student Right to Know Before You Go Act were passed, which were created to push transparency for data in higher education. The openness of data in higher education will be beneficial to prospective students. The push for these two bills coincided with the bitcoin bubble. In the past three years, experts in economics, medicine, and supply chain management have been researching methods on how to implement blockchains to create optimal and decentralized data systems. In this paper, we propose a model for open data in higher education inspired by the Bitcoin, which uses blockchain. When used together with InterPlanetary File System, a peer-to-peer distributed file system, we can create a decentralized platform that increases accessibility of data and autonomy of prospective students.
RESEARCH ASSOCIATION for INTERDISCIPLINARY STUDIES (RAIS), DOI: 10.5281/zenodo.3267518, JUNE 2019

## **The Equity and Inclusion in Higher Education: A Proposed Model for Open Data**

### **Carla Hamida [1], Amanda Landi [2], Ziyi Liu [3]**

*[1] Bard College at Simon’s Rock, Great Barrington, USA (Indonesia), chamida16@simons-rock.edu*
*[2] Bard College at Simon’s Rock, Great Barrington, USA, alandi@simons-rock.edu*
*[3] Bard College at Simon’s Rock, Great Barrington, USA (China), ziyiliu16@simons-rock.edu*

ABSTRACT: Recently, governmental institutions and private industries in power have been pushed to be more transparent so that more people can have ownership of their data. Another type of institution with a large amount of power over data are educational institutions. Colleges and Universities around the globe store a significant amount of data on millions of students, such as financial aid, grades, dropout or graduation, successes after graduation. Each institution is rated with respect to these items and more, and potential students are making decisions to go to the school based on these ratings. Therefore, it is imperative for students, who invest their time and their money into the school of their choice, to know the truth. In 2017, the College Transparency Act and the Student Right to Know Before You Go Act were passed, which were created to push transparency for data in higher education. The openness of data in higher education will be beneficial to prospective students. The push for these two bills coincided with the bitcoin bubble. In the past three years, experts in economics, medicine, and supply chain management have been researching methods on how to implement blockchains to create optimal and decentralized data systems. In this paper, we propose a model for open data in higher education inspired by the Bitcoin, which uses blockchain. When used together with InterPlanetary File System, a peer-to-peer distributed file system, we can create a decentralized platform that increases accessibility of data and autonomy of prospective students.

KEYWORDS: open data, higher education, blockchain, IPFS, transparency

### **Introduction**

In today’s society, data is currency. Many stepped into the market by collecting data, e.g., Google, Facebook, the National Security Agency. Others, still, monetized controlling the access and use of the data, e.g., Facebook, government spending budgets. Open data is defined by the Open Data Institute as data that anyone can access, use, or share (Open Data Institute 2017). Across the globe, nonprofit organizations are pushing for empowering citizens with data. For example, the Open Data Charter was founded in 2015, and it is a collaboration of more than 70 governments, experts, and organizations whose sole goal is to make governmental data more available and accessible to citizens of the world. The Open Data Charter proposed six principles, and they are meant “for improved governance and citizen engagement” and “for inclusive development and innovation” (Open Data Charter 2015). While governments and private for-profit companies certainly play a role in the monitoring and controlling of data publication, education institutions make huge profits from their management of student data. Transparency and accountability are imperative in higher education.

-----
Prospective students need accurate information with respect to financial aid, program success statistics, job-obtained-after-graduation data, demographic statistics, and other forms of cost such as living and food. Educational researchers, accreditation teams, and governments investing financial aid need granulated data on student success so that inclusivity of marginalized groups can be improved (Koch 2018). However, transparency does not mean simply listing summarized data online. In fact, every college or university that receives federal aid from the United States is legally required to submit raw data regarding their demographics and financial aid reports annually (Schneider 2017). This information is available to the public, as it is on The Integrated Postsecondary Education Data System (IPEDS) website. However, navigating through the website itself is a hassle, and large chunks of data must be downloaded in order to attain the raw data for each institution. We need for these institutions to publish simplified aggregate data in order to fully understand how much change has been made.

-----

In Section 1 of this paper, we expand on why transparency and accessibility are required of higher education institutions. In Section 2, we discuss the major issue of privacy of student data with respect to making data more granular. In Section 3, we explain the current blockchain technology so that in Section 4 we can propose our solution to the issue of privacy as a stumbling block to complete transparency of higher education data. Finally, in Section 5 we conclude our paper and state paths for future work.

**Section 1. Transparency in Higher Education**

In 2019, there is still a lack of representation of various people in higher education institutions. Organizational change is slow, and it only happens effectively when all members involved in and affected by the change see the value in implementation of the change (Berg and Hanson 2018). Although there have been efforts to inspire this change, such as Affirmative Action, which first appeared in the Supreme Court in the 1978 case Regents of California v. Bakke, and scholarships for underrepresented people in higher education (West 1998), one reason for such little change in the last several decades is the still present sexism, racism, classism, and homophobia among the student body as well as the admissions process. Another huge reason is the lack of access to data that tracks minority students, providing educational researchers the ability to determine weaknesses in a program and allowing curricular developers the chance to improve courses. In 2017, the STEM Research and Education Effectiveness and Transparency Act was passed. The purpose of the bill is to promote inclusion of marginalized groups, specifically women, in participation of research in STEM. Section 2 Article 2 of the bill emphasizes the need to continually collect “information on student outcomes using all available data, including dropout rates, enrollment in graduate programs, internships or apprenticeships, and employment” so the development of marginalized groups in STEM can be tracked (US House 2017). We often read that universities are becoming more inclusive on the news, e.g., (Association of American Colleges & Universities 2015), (Esters), and (Smith 2018). However, many people either do not have access, or have little access to, the actual data informing the demographics at universities.
Moreover, the data available may not break demographics down into specific fields and undergraduate v. graduate programs. Universities such as Harvard and Cambridge publish annual reports on their demographics. Even after these universities publish annual reports, it is still inconvenient for readers to open each annual report to compare the progress between these higher education institutions. Given the continued existence of institutional marginalization, there is a great need to create and implement new policies and solutions. We need to implement a more optimal allocation of resources that can provide real impact to young lives. Understanding the issues within the higher education system, and how these issues affect students, could be done systematically if all the information was collected on one network in an accessible manner.

**Section 2. Transparency v. Privacy: Efforts to Protect Student Data**

Despite our need to publish accurate and granular data, we still need to protect the identities of the students represented in the data (ensure anonymity). While there is concern for the misuse of existing data in higher education, it does not mean that is a reason to abandon the idea of sharing. Rather, it means that we need to build systems and establish unambiguous policies in place to protect the data. In 2017, the College Transparency Act was introduced; the bill requires that the National Center for Education Statistics create a data system that analyzes financial costs and student enrollment patterns, customizes information for users accessing the data system, and has the ability to link with other federal data systems (US Senate 2017). In addition to the College Transparency Act, in November 2017, the Student Right to Know Before You Go Act was introduced by Senators Marco Rubio, Mark Warner, and Ron Wyden (US House 2017). The purpose of this bill is to publish granular and uncomplicated data on higher education institutions in order for prospective students to make informed decisions when applying to colleges and universities, while maintaining privacy standards. The bill requires the data

-----

platform use encryption technology that “includes the use of secure multi-party computation, which generates statistical data based on information provided by colleges and universities as well as loan and income information from government agencies like the IRS” in order to keep published information anonymous (Ortega 2017). The push for more open data in higher education has not only come from Congress. Private organizations such as the Data Quality Campaign have been advocating the need to make student data more accessible since 2005. In 2014, the Data Quality Campaign and the Consortium for School Networking established the “Student Data Principles.” There are 10 principles that promote the openness of data in higher education and the use of data to create inclusion in the academic world. The technological platforms being used for higher education data need to be advanced enough to meet the needs in the future. As seen with IPEDS, handling student data can be complex since different governmental organizations and schools have unique ways of collecting their data. Given that the data could also be misused, those holding the complete and raw data are responsible to ensure that identities remain anonymous.
We need to establish a coherent system in which information can spread across the network and each party has the ability to access the appropriate information while being able to update the network systematically with complete information. Handling a large amount of information often leads to complications with storage. In the past, establishing such a network was a more difficult task. In the present, however, decentralized and transparent data systems have been created and implemented. One such data system we next discuss is blockchain technology.

**Section 3. The Model**

***Section 3.1. Blockchain***

Blockchain was originally designed to store Bitcoin transactions (Zheng 2017). At the basic level, it is a list of blocks that contain certain information. Figure 1 illustrates a simple blockchain model.

1. Index: the index of the block
2. Timestamp: the time when the particular block was created
3. BPM: pulse rate, an example of the kind of data that can be stored in a block
4. Hash [1]: a unique hash of the block, which is calculated based on all the information stored in the block
5. PrevHash: the hash of the previous block, used to link the blocks together

Figure 1. Simple Blockchain Model (Coral Health 2018)

[1] Hashes are calculated with a hash function such as SHA or RIPEMD. The function takes an input string, performs a series of operations, and outputs another string of fixed length, the hash (Madeira 2019).

-----

Blockchain has several important characteristics. First of all, blockchain is immutable, which means that once a block is added to the chain, it cannot be changed. If a block is tampered with, its hash will be different from the PrevHash stored in the next block. Therefore, no one can secretly change the data stored on the chain. Moreover, it does not allow a single point of authority, which means no single party in the network has complete control over the data stored on the blockchain. Before a new block is added to the chain, an algorithm checks that the block satisfies all agreed-upon features, and this also ensures that all parties on the network have the same chain (Zheng 2017). This makes blockchain a decentralized technology, which is extremely useful for increasing data transparency.
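To make the block structure in Figure 1 concrete, the following is a minimal Python sketch of such a hash-linked chain, assuming the five fields listed above; SHA-256 and plain dictionaries are illustrative choices, not the implementation behind Figure 1.

```python
import hashlib
import time

def block_hash(index: int, timestamp: str, bpm: int, prev_hash: str) -> str:
    """Hash calculated over all the information stored in a block."""
    record = f"{index}{timestamp}{bpm}{prev_hash}"
    return hashlib.sha256(record.encode()).hexdigest()

def new_block(prev: dict, bpm: int) -> dict:
    """Create the next block, linked to the previous one via PrevHash."""
    index = prev["Index"] + 1
    timestamp = time.strftime("%Y-%m-%d %H:%M:%S")
    return {
        "Index": index,
        "Timestamp": timestamp,
        "BPM": bpm,  # example payload, as in Figure 1
        "PrevHash": prev["Hash"],
        "Hash": block_hash(index, timestamp, bpm, prev["Hash"]),
    }

# Build a three-block chain starting from a genesis block.
genesis = {"Index": 0, "Timestamp": "0", "BPM": 0, "PrevHash": "",
           "Hash": block_hash(0, "0", 0, "")}
chain = [genesis]
for bpm in (72, 68):
    chain.append(new_block(chain[-1], bpm))

# Tampering with block 1 breaks the link: its recomputed hash no longer
# matches the PrevHash stored in block 2.
chain[1]["BPM"] = 120
b = chain[1]
print(block_hash(b["Index"], b["Timestamp"], b["BPM"], b["PrevHash"])
      == chain[2]["PrevHash"])  # False
```

The failed final check is exactly the immutability property described above: changing any stored block invalidates every later PrevHash link.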
To store data with blockchain, there are two options: on-chain and off-chain. Since blockchain was originally designed to store Bitcoin transactions, its protocols and large transaction fees mean that only a small amount of data can be stored on-chain, usually in the range of kilobytes or less, with one kilobyte equal to approximately 500 words (Marx 2018). Therefore, a reasonable solution is storing the hash of the data on-chain, while storing the actual files and the corresponding block hashes (TX-ID) off-chain, as shown in Figure 2. There are two common options for storing data off-chain. The first one is a traditional database or cloud storage. However, there are several problems with this first option. Once the files are uploaded to the cloud or inserted into a database, they are once again controlled by one central point of authority, such as Google, Microsoft, or Oracle. Not only could transparency be lost, but if the company decided to close down the storage service, the data itself could be lost as well (Marx 2018). This leads to the second option: decentralized storage. In decentralized storage, data is distributed across many nodes on the network, and files are broken apart and stored on various nodes. No single node has the entire file, and the breakdown of one node will not affect the others, so files have a much lower chance of being lost permanently (Marx 2018). One such project is the InterPlanetary File System (IPFS).

Figure 2. Storing data with Blockchain (Marx 2018)

***Section 3.2. IPFS***

Just like the Hyper Text Transfer Protocol (HTTP), which the internet is based on today, IPFS is an internet protocol. However, unlike the location-addressed HTTP, where users get information from central servers according to the IP address, IPFS uses content addressing and a peer-to-peer (P2P) network in which users can share files directly with others in the network (Curran 2018). This is illustrated in Figure 3. This gives IPFS several advantages. Since HTTP is location-based addressing, if the server is down or the webpages are deleted, the files are not available anymore and useful information could be lost

-----

(FortKnoxster 2017). Also, HTTP is centralized, so access to data can be slow depending on where the server is, or even restricted; for example, Google, YouTube, and Facebook are all blocked by the government of China (Carson 2015).

Figure 3. HTTP vs. IPFS (Curran 2018)

On the other hand, IPFS uses content-based addressing, meaning that the link of each file is composed of its unique hash, and every node in the network can choose to keep the files it is interested in. Each file is broken into multiple IPFS objects and linked together by an empty object, as illustrated in Figure 4 (Fazil 2019). This makes sharing and downloading files much faster, since users not only can get data from the closest node which has a copy of it, but can also download parts of a file from different nodes at the same time instead of downloading the entire file from a single server (Curran 2018). Moreover, since IPFS is decentralized, all files on the network are publicly visible and cannot be blocked. Thus, transparency is preserved. A real-life example happened in Turkey in 2017. Turkish authorities blocked access to Wikipedia throughout Turkey, but activists created a copy of Wikipedia on IPFS and made it available again (Dale 2017).

Figure 4. IPFS model (Fazil 2019)

**Section 4. Discussion of the Model and Open Data**

Our proposed model is to use blockchain together with IPFS to create a completely decentralized application that holds important college data. The issue of publishing data while worrying that

-----

private information could get into the wrong hands can be resolved using the model we propose. All the files will be stored on IPFS, and the immutable, permanent IPFS links will be placed into the blockchain, as shown in Figure 5 (FortKnoxster 2017). Since IPFS tracks version history, using blockchains and IPFS can ensure that annual data will be preserved permanently on the network. In addition, every modification to the data will be visible to the public. Since the consensus algorithm will check the information to be added on the chain, sensitive information is highly unlikely to be published or accessed by parties outside of the network. Therefore, IPFS is a good candidate for our purpose: increasing the transparency of college data. This model can benefit three parties involved in higher education: the students, universities/colleges, and the government.
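As a rough sketch of this store-on-IPFS, hash-on-chain flow (cf. Figures 2 and 5), the following Python fragment mimics content addressing: the address of a file is derived purely from its bytes, and chunk addresses are linked under one root object, loosely mirroring the IPFS object model of Figure 4. The chunk size and the bare SHA-256 hex digest are simplifying assumptions; real IPFS produces multihash CIDs over a Merkle DAG, not plain digests.

```python
import hashlib

def content_address(data: bytes) -> str:
    """Content-based address: the same bytes hash to the same address,
    no matter which node on the network stores them."""
    return hashlib.sha256(data).hexdigest()

def split_into_objects(data: bytes, chunk_size: int = 1024):
    """Break a file into chunks and link them under one root object,
    loosely following the IPFS object model in Figure 4."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    links = [content_address(c) for c in chunks]
    root = content_address("".join(links).encode())  # object holding the links
    return root, links

annual_report = b"Enrollment, demographics and financial aid figures, 2019 ..." * 100
root_address, chunk_addresses = split_into_objects(annual_report)

# Only the short, immutable root address would be recorded on-chain
# (as in Figure 5); the file itself lives off-chain on IPFS peers.
print(root_address)
print(len(chunk_addresses))  # several chunks, each independently addressable
```

Anyone who re-hashes the published bytes recomputes the same address, which is what lets the short on-chain link certify the much larger off-chain file.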
Figure 5. Use of IPFS with Blockchain (Coral Health 2018)

Prospective students can see previous versions of the college annual data and discover differences, improvements or setbacks on the network. Thus, having access to more holistic information, these students can make well-informed decisions about their future. Universities and the government will have access to organized and clearly presented data. This will make it easier for them to analyze trends, discover issues, and fix problems within higher education. Moreover, governments can determine which universities and colleges will be a positive investment for placing their government aid. An effective data system will lead to effective decision making for all parties involved.

**Section 5. Conclusion**

Federal organizations, in general, need to promote the existence and importance of the availability of data. There is no use in making data more accessible to the public without active citizen engagement, because only involvement can push for development. An ideal future next step is to implement this data network through a decentralized application built with blockchain and IPFS. When building this application, we can learn from research or existing implementations of blockchains or IPFS in different fields. Since the use of this decentralized data system is flexible, we strongly encourage governments to store other federal data on the network, which would increase comparability of data and minimize the time in finding problems.

**Acknowledgements**

We would like to thank our close friends and family for the support they have given us throughout the year.

-----

**References**

Association of American Colleges & Universities. n.d. “Indicators of Equity in Higher Education in the United States.” Accessed May 4, 2019. https://www.aacu.org/aacu-news/newsletter/indicators-equity-higher-education-united states.

Berg, E.A. and Hanson, M. 2018. “Putting the ‘Evidence’ in Evidence-Based: Utilizing Institutional Research to Drive Gateway-Course Reform” in *Improving Teaching, Learning, Equity, and Success in Gateway Courses: New Directions for Higher Education*, Number 180. Koch, A.K. (Ed.). John Wiley & Sons.

Carson, Biz. 2015. “9 Incredibly Popular Websites that are Still Blocked in China.” *Business Insider*. Accessed April 10, 2019. https://www.businessinsider.com/websites-blocked-in-china-2015-7.

Coral Health. 2018. “Learn to securely share files on the blockchain with IPFS!” *Medium*. Accessed April 2, 2019. https://medium.com/@mycoralhealth/learn-to-securely-share-files-on-the-blockchain-with-ipfs219ee47df54c.

Curran, Brian. 2018. “What is Interplanetary File System IPFS? Complete Beginner’s Guide.” *Blockonomi*. Accessed April 2, 2019. https://blockonomi.com/interplanetary-file-system/

Dale, Brady. 2017. “Turkey Can’t Block This Copy of Wikipedia.” *Observer*. Accessed April 2, 2019. https://observer.com/2017/05/turkey-wikipedia-ipfs/

Esters, Lorenzo L. “Making an Impact in Higher Education Equity.” *Strada Education Network*. Accessed May 4, 2019. http://www.stradaeducation.org/making-an-impact-in-higher-education-equity/.

Fazil, Usman. 2019. “IPFS: A Distributed File Store.” *Block360*. Accessed April 2, 2019. https://block360.io/ipfs-a distributed-file-store/.

FortKnoxster. 2017. “How the IPFS Concept Can Change the Internet and the Storage Distribution.” *Medium*. Accessed April 2, 2019.
https://medium.com/fortknoxster/how-the-ipfs-concept-can-change-the-internet-and-the-storagedistribution-c6c13283f12d.

Gandhi, Rohith. 2018. “InterPlanetary File System(IPFS) — Future of the Web.” *Medium*. Accessed April 2, 2019. https://medium.com/coinmonks/interplanetary-file-system-ipfs-future-of-the-web-c45c955e384c

National Center for Education Statistics (NCES). Integrated Postsecondary Education Data System. Accessed April 10, 2019. https://nces.ed.gov/ipeds/use-the-data.

Koch, A.K. (Ed.). 2018. *Improving Teaching, Learning, Equity, and Success in Gateway Courses: New Directions for Higher Education*, Number 180. John Wiley & Sons.

Madeira, Antonio. 2019. “How Does a Hashing Algorithm Work.” *CryptoCompare*. Accessed April 10, 2019. https://www.cryptocompare.com/coins/guides/how-does-a-hashing-algorithm-work/.

Marx, Lukas. 2018. “Storing Data on the Blockchain: The Developers Guide.” *Malcoded*. Accessed April 2, 2019. https://malcoded.com/posts/storing-data-blockchain.

Open Data Charter. 2015. “Principles.” Open Data Charter. Accessed July 21, 2018. https://opendatacharter.net/principles/

Open Data Institute. 2017. “What is Open Data and Why Should We Care?” Accessed May 2, 2019. https://theodi.org/article/what-is-open-data-and-why-should-we-care/.

Ortega, Jennifer. 2017. “Student Right to Know Before You Go Bill Introduced.” *EDUCAUSE*. Accessed May 4, 2019. https://er.educause.edu/blogs/2017/12/student-right-to-know-before-you-go-bill-introduced

Schneider, M. 2017. Reforms to Increase Transparency in Higher Education: Testimony before the House Subcommittee on Higher Education.

Smith, Ashley A. 2018. “States Attempt Closing Racial Gaps to Improve Graduation.” *Inside Higher Ed*. Accessed May 4, 2019. https://www.insidehighered.com/news/2018/08/21/states-showing-some-progress-closing-racial-equity gaps.

Student Data Principles. “The Principles.” Accessed April 15, 2019. https://studentdataprinciples.org/the-principles/

US House. 115th Congress, 1st Session. *H.R. 4375, STEM Research and Education Effectiveness and Transparency Act.* Washington: Government Printing Office, 2017. Passed in 2017.

US House. 115th Congress, 1st Session. *H.R. 4479, Student Right to Know Before You Go Act of 2017.* Washington: Government Printing Office, 2017. Introduced in 2017.

US Senate. 115th Congress, 1st Session. *S. 1121, College Transparency Act.* Washington: Government Printing Office, 2017. Introduced in 2017.

West, M. S. 1998. The Historical Roots of Affirmative Action. *Berkeley La Raza Law Journal*, *10*(2): 607.

Zheng, Zibin. 2017. “An Overview of Blockchain Technology: Architecture, Consensus, and Future Trends.” *6th IEEE International Congress on Big Data*. 10.1109/BigDataCongress.2017.85.

-----
{ "disclaimer": "Notice: The following paper fields have been elided by the publisher: {'references'}. Paper or abstract available at https://api.unpaywall.org/v2/10.2139/ssrn.3434064?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.2139/ssrn.3434064, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "GREEN", "url": "http://rais.education/wp-content/uploads/2019/07/012HC.pdf" }
2,019
[]
true
2019-06-30T00:00:00
[]
5,243
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Political Science", "source": "s2-fos-model" }, { "category": "Computer Science", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/02323347b6dc92c516ef6bc456db5aa647627895
[ "Computer Science" ]
0.906707
The Democracy to Come? An Enquiry Into the Vision of Blockchain-Powered E-Voting Start-Ups
02323347b6dc92c516ef6bc456db5aa647627895
Frontiers in Blockchain
[ { "authorId": "122026692", "name": "M. Imperial" } ]
{ "alternate_issns": null, "alternate_names": [ "Front Blockchain" ], "alternate_urls": null, "id": "17d7865f-0af7-472c-b174-60948bf06d11", "issn": "2624-7852", "name": "Frontiers in Blockchain", "type": null, "url": "https://www.frontiersin.org/journals/blockchain#" }
This research sets out to analyze the message promoted by start-up enterprises that apply blockchain technologies for the purpose of e-voting [blockchain-powered e-voting (BPE)], and their perceived effects of this technological solution on democratic outcomes. Employing Norman Fairclough’s critical discourse analysis (CDA), I examined the written output of seven BPE start-ups (Agora, DemocracyEarth, Follow My Vote, Polys, Voatz, Votem, and VoteWatcher), as displayed in their websites. The close attention of CDA to power relations brought out relevant topics of discussion for analysis. Notably, these included: voting as an expression of democracy; technological determinism; individual versus communitarian understandings of democracy; the prominence of neoliberalism and the economic sphere; and technological literacy. Findings from the literature suggest that the assumptions of BPE start-ups about a blockchain-powered democracy diverge from widely accepted understandings of democracy. BPE start-ups envision a democracy determined by positions and institutions of power, by the technologically able, and by economic interests. This research argues that this conception of democracy disempowers voters from any form of decision-making regarding how democracy is run beyond their expression in the form of a vote decided by these established powers. The widespread addresses to existing elites to enable BPE, as well as what is left unsaid about community, collective rights and the not so technologically literate population, imply that BPE developers display concern for one particular expression among the many diverse and heterogeneous understandings of democracy, while disregarding outstanding privacy, security and accountability concerns associated to implementations of the technology for BPE. This work is a contribution to much needed research on technology and democracy’s deepening intersections, at a time of rapid technological innovation and turbulent democratic scepticism.
_Edited by: Marta Poblet, RMIT University, Australia_

_Reviewed by: Vanessa Teague, Australian National University, Australia; Jake Goldenfein, The University of Melbourne, Australia_

_*Correspondence: Miranda Imperial, mci30@cam.ac.uk; miranda.imperial@gmail.com_

_Specialty section: This article was submitted to Blockchain for Good, a section of the journal Frontiers in Blockchain_

_Received: 24 July 2020; Accepted: 19 March 2021; Published: 09 April 2021_

_Citation: Imperial M (2021) The Democracy to Come? An Enquiry Into the Vision of Blockchain-Powered E-Voting Start-Ups. Front. Blockchain 4:587148. [doi: 10.3389/fbloc.2021.587148](https://doi.org/10.3389/fbloc.2021.587148)_

# The Democracy to Come? An Enquiry Into the Vision of Blockchain-Powered E-Voting Start-Ups

_Miranda Imperial*_

_Department of Sociology, University of Cambridge, Cambridge, United Kingdom_

This research sets out to analyze the message promoted by start-up enterprises that apply blockchain technologies for the purpose of e-voting [blockchain-powered e-voting (BPE)], and their perceived effects of this technological solution on democratic outcomes. Employing Norman Fairclough’s critical discourse analysis (CDA), I examined the written output of seven BPE start-ups (Agora, DemocracyEarth, Follow My Vote, Polys, Voatz, Votem, and VoteWatcher), as displayed in their websites. The close attention of CDA to power relations brought out relevant topics of discussion for analysis. Notably, these included: voting as an expression of democracy; technological determinism; individual versus communitarian understandings of democracy; the prominence of neoliberalism and the economic sphere; and technological literacy. Findings from the literature suggest that the assumptions of BPE start-ups about a blockchain-powered democracy diverge from widely accepted understandings of democracy. BPE start-ups envision a democracy determined by positions and institutions of power, by the technologically able, and by economic interests. This research argues that this conception of democracy disempowers voters from any form of decision-making regarding how democracy is run beyond their expression in the form of a vote decided by these established powers. The widespread addresses to existing elites to enable BPE, as well as what is left unsaid about community, collective rights and the not so technologically literate population, imply that BPE developers display concern for one particular expression among the many diverse and heterogeneous understandings of democracy, while disregarding outstanding privacy, security and accountability concerns associated to implementations of the technology for BPE. This work is a contribution to much needed research on technology and democracy’s deepening intersections, at a time of rapid technological innovation and turbulent democratic scepticism.

Keywords: blockchain, critical discourse analysis, democracy, e-voting, start-up, technological determinism, technological literacy

-----

## INTRODUCTION

The possibilities of use of blockchain technologies in the public sector have been thoroughly reviewed recently (Berryhill et al., 2018; Thomason et al., 2019).
Besides any possible incorporation to the public sector of financial applications of blockchains, a field that is spearheading blockchain adoption in the private sector (Arslanian and Fischer, 2019), governments appear to be aware of the “transformative and potentially disruptive nature of this emerging technology” (Berryhill et al., 2018, p. 20), and several hundred initiatives are underway (Berryhill et al., 2018, pp. 20–22), many of them taking advantage of public-private partnerships as a means to jumpstart access to the technology (Berryhill et al., 2018, pp. 22–23). According to the American Council for Technology-Industry Advisory Council (American Council for Technology Industry Advisory Council (ACT-IAC), 2017) most areas of the public sector could benefit from the use of blockchains (Berryhill et al., 2018; Thomason et al., 2019). But it is perhaps the application of blockchains for voting that has become the most advocated (Allen et al., 2019), although outstanding security risks remain (National Academies of Sciences, Engineering, and Medicine, 2018; Park et al., 2020) that could hinder its widespread application and that have resulted in early failures (Juels et al., 2018; Goodman and Halderman, 2020). Many abstract controversies regarding blockchain-powered e-voting (BPE) systems are likely to be tempered in the landscape emerging in the aftermath of the current COVID-19 pandemic, where “new normal” (Beck, 1992, p. 79) regulations are likely to make current methods of synchronous, in-person voting exceedingly cumbersome or unadvisable (Blessing et al., 2020). On the other hand, the need for alternatives to in-person voting is likely to intensify the scrutiny over privacy and security issues with those systems (National Academies of Sciences, Engineering, and Medicine, 2018; Park et al., 2020), as the recent November 2020 United States presidential election, with its widespread–although not necessarily well-founded–allegations of voting irregularities starkly highlighted. Beyond technological, privacy and security aspects, there is a paucity of reports on the relationship between BPE intended applications and current understandings of social relations, as can be expected from a relatively new technology that is beginning to enter a widespread adoption phase. This is particularly true regarding the implications that adopting this new technology may have for power relations. In this work, I center on the study of the power relations covert in communication between technology developers and users by analyzing how BPE start-ups communicate the means and ends of their products, their views on how their technology can impact democracy, and how they are shaped by the power relations implicit to the development of BPE. To that end, I will first review the relevant sociological literature on: (i) the impact of new technologies on society, particularly those related to technological determinism and its criticism; and (ii) current understandings of democracy and how they are impacted by alternative modes of voting, as these will become relevant in the subsequent empirical analysis of the messages BPE start-ups are advancing to promote their products and technology.

## LITERATURE REVIEW

Technological Determinism

Scholarly attempts to discern the relationship between new technologies and society have tended to fall into the trap of technological determinism, that is, describing a purely causal relationship between technology and its–generally positive–important effects on society.
There is a “strong tendency, especially when technologies are new, to view them as causal agents, entering societies as active forces of change that humans have little power to resist,” communication around them becoming “deterministic” (Baym, 2015, p. 26). Examples abound in the literature (McLuhan, 1964; Fischer, 1992). Technological determinism is an “optimistic theory” that either fails to recognize the misgivings of technology, or believes negative outcomes stemming from technology can be eliminated by new innovation (Markus, 1994). An informed discussion of technological determinism is crucial to a discourse analysis of communication of BPE, since descriptions of the technology are determinant in promoting its adoption. It is important to analyze the messaging around it, its developers’ intentions and assumptions, conveyed in messaging: whether they conceive of BPE as an innovation to induce radical, positive change unto the world, and whether they adhere to logics similar to those of technological determinism. Such technologically “utopian” ideas (in Nye, 1997) tend to see technologies as “natural societal developments,” as “improvements to daily life,” and as “forces that will transform reality for the better” (Baym, 2015, p. 28). The yardstick of democratic ideals that most Western liberal societies adhere to–and that the world now adheres to through the West’s colonialist imposition of its aspirations–is something which BPE addresses, albeit covertly, in their messaging. Technologically deterministic narratives have been associated with the promotion of democratic ideals in the literature before. Some academics have deemed the creation of the Internet a “renaissance of democracy” (Agre, 1994, cited in Curran, 2016), and a “revitalized democracy characterized by a more active informed citizenry” (Corrado, 1996, in Levin, 2002, p. 81). Similarly, Castells has stated that “dictatorships could be overthrown with the bare hands of the people” thanks to the power of a technology: the Internet (2012, p. 1). This is not a new thought, as much has been written about social media’s promotion of platforms and media where “participatory” agency is created (Fuchs, 2013, p. 26). Others, like Jenkins (2008, p. 137) and Shirky (2009, p. 107) have further remarked how the Internet goes beyond economic facilitation and empowers “consumer participation” (Jenkins, 2008, p. 137) and “participatory engagement” (Deuze, 2007, p. 95), enabling conversation and action. I draw on this theory because the conveyance of blockchain’s new and improved privacy, anonymity and efficiency capabilities might well fall into technologically deterministic writing, or at

-----
250) as, through it “voters will have a voice that reaches directly to the highest levels of both parties and the government” and it will “bring accountability directly to bear on elected officials” (Rash, 1997, in Levin, 2002, p. 82). It remains to be seen whether communication around blockchain’s adoption for e-voting purposes will fall into these technologically utopian commentaries or whether it will bring about different implications for the democratic aims to be sought after in modern societies. ## Criticism of Technological Determinism Critics of this kind of optimism over the web 2.0 regard it as spreading an ideology serving corporate interests (Van Dijck and Nieborg, 2009; Fuchs, 2011). This line of criticism of academics overly relying on technological innovation for its explanation of societal trends, have also developed different ideas on the side, beyond circular narratives of “technology shap(ing) technology” and society (Ellul, 1964, pp. 85–94; Winner, 1977, pp. 57–73, in MacKenzie and Wajcman, 1989). Many of these social theories are important for this research, as they, unlike technologically deterministic accounts, do not fail to explore power relations emerging from the creation and adoption of new technologies. For some social scientists, the consequences of technology are not just innovation in hardware and software or economic benefits, rather “they apply to all areas of social life” (Schroeder, 2007, p. 9). Social construction of technology approaches developed by Wiebe Bijker and Trevor Pinch (Bijker et al., 1987; Bijker, 1995), to the contrary from technological determinism, see “technology and society as continually influencing one another” (Baym, 2015, p. 26). Placing agency in people’s hands as creators of technology (Nye, 1997, p. 151), these theories are more fitted to observing decisions taken by technology developers, as they are “seen as dependent on their social contexts which are, in turn, shaped [. . .] by communication” (Baym, 2015, p. 44). This is why this approach is particularly relevant to an analysis of BPE. There has further been much written about the “prosumer,” with the “blurring of the line that separates producer from consumer” (Toffler, 1980, p. 267), showing how individuals, even beyond the creators themselves, can shape technology in their everyday life usage. However, the fact that BPE is not a platform for conversation might make these theories less relevant to this particular avenue of research. I adopt Winner’s thesis that “technologies [.] can be inherently political” (Winner, 1989, p. 33), since they “can be designed, consciously or unconsciously, to open certain social options and close others” (MacKenzie and Wajcman, 1989, p. 4; see Winner, 1989, p. 32). This is an important idea that will emerge as pertinent to an analysis of BPE. Ultimately, Mackenzie and Wacjman’s thesis, that “it is mistaken to think of technology and society as separate spheres influencing each other: technology and society are mutually constitutive” (MacKenzie and Wajcman, 1989, p. 23) will inform the analysis of my research. ## Blockchain-Powered E-Voting and Democracy I decided to explore democracy as an analytical benchmark since blockchain applications for e-voting deal with voting as a concept, and voting is crucial to democratic aims, values and ends, albeit not the one key defining feature of democracy, as we will see below. The relationship between voting and democracy is delimited by the concept and fundamentals of democracy. 
Different definitions of democracy exist, with some recent understandings of the term taking into consideration data and algorithms (“linked democracy,” Poblet et al., 2019) or blockchain technologies (“cryptodemocracy,” Allen et al., 2019). Within the context of this research, Bernard Crick’s remarks seem appropriate: [A]ll can participate if they care (and care they should), but they must then mutually respect the equal rights of fellow citizens within a regulatory legal order that defines, protects, and limits those rights. This is what most people today [. . .]. ordinarily mean by democracy–let us call it “modern democracy,” ideally a fusion [. . .] of the idea of power of the people and the idea of legally guaranteed individual rights. The two should, indeed, be combined, but they are distinct ideas, and can prove mutually contradictory in practice (Crick, 2002, p. 13). At the root of common conceptions of democracy, citizens can freely exercise their opinions on how to be governed via vote, either to choose representatives or to give their opinion on an issue, and they will abide by the decision of the majority. However, individual voters can have very little influence on the outcome of an election, and this can discourage them from voting. As early noted by Condorcet (McLean and Hewitt, 1994, pp. 245–246), or Hegel (1991), this can discourage voting and pose hindrances to participation and commitment to democratic ideals. Notably, I observe an underlying tension in defining democracy that Crick has masterfully outlined: that between the participatory, communal and emotional “power of the people,” versus “individual rights.” Both of these are needed for a functioning and normatively “good” democracy according to Crick’s widely accepted definition. Therefore, I will take this definition and these two elements as the benchmark against which BPE start-ups’ assumptions about democracy will be judged. In his 1957 attempt at modeling political decision-making in democracies, Anthony Downs highlighted the high opportunity cost of voting, which thereby led to a paradox: by voting, rational citizens do not maximize the expected utility. Therefore, why do they vote? (1957, pp. 244–246). Followers of Downs have ----- supported instrumental theories of the rationality of voting (Grossback et al., 2007; Noel, 2010). Critics of this rational choice framework hold that voters wish to voice their opinions and their ideas through their voting. This is at the root of the expressive theory of voting, advanced by Brennan and Buchanan (1984); Brennan and Lomasky (1993), and Hillman (2010), that “sees the vote as expressing support for one or other electoral options, rather like cheering at a football match” (Brennan and Hamlin, 1998, p. 149). Despite all of the above, most people believe they have a moral obligation to vote (Mackie, 2015), although the exact reasons why this belief arises are controversial (Brennan, 2016). In fact, expressivist theorists believe there is no duty to vote (Brennan and Lomasky, 1993) in countries where voting is a right rather than a duty. Besides the above, is there a moral obligation regarding how citizens vote? Influential theorists in the history of democracy, such as Mill and Rousseau have argued that voters should cast their vote for the common good, beyond self-interest (Mansbridge, 1990). 
Along similar lines, expressivist theorists claim that voters become attached, in a morally significant way, to the ideas defended–and implemented in case of victory– by their candidate (Brennan and Lomasky, 1993, p. 186). If the conclusion is that voting is morally important, these views would vary greatly from Downs’ rationalist approach. Further, by virtue of ascribing a need and a morality to voting, voting becomes, by necessity, something crucial for a community, as its importance is greater than that of one choice in many in an individual’s day. These considerations are fundamental for this research, as I will be analyzing with which connotations blockchain e-voting producers speak about and approach voting. Whether voting is considered an individual or a collective action, and its importance, will be essential in relation to the democratic outcomes these technology developers wish for. Within voting, we must look at e-voting more concretely, as this is the technological innovation that BPE is advancing. There is a large body of work regarding voting methods. Current research suggests that no voting method outperforms the rest in all situations (reviewed by List, 2013). E-voting is just the last of a series of “convenience voting” solutions implemented over time (Gronke et al., 2008). In e-voting, “voters are provided a method of signing into a secure website [. . .] and cast their votes using a web browser” (Gronke et al., 2008, p. 441), although a more comprehensive definition should reflect that, in e-voting, the voter’s intention is recorded electronically rather than on paper. Gibson et al. (2016) have recently highlighted the many challenges of e-voting, more demanding than those of, for example, e-banking: (a) authentication, (b) anonymity/privacy; (c) verifiability/auditability. Most importantly, voter coercion can be an issue in remote e-voting. Proponents of e-voting and advancements in voting technologies, like Krimmer (2012), believe that ICT radically changes the framework of representation, as it allows new and more direct interaction with representatives. It also provides solutions to old problems (like voting from far away or remote places). Finally, it offers the promise of increasing turnout both through its facilitation of the voting process (Krimmer, 2012) and by increasing participation of the youth in the political process (McAllister, 2016), an effect that is not observed with late adopters (Richey and Zhu, 2015). A recent example of these benefits can be found in Indonesia, with the world’s fourth largest population (close to 300 million) spread over 17,000 islands, and where democratic voting processes are severely hampered by weak and insufficient electoral infrastructures. The application of a BPE technology has shown much promise of improvement over traditional voting systems (Van Niekerk, 2019). Many of these potential advantages are presently offset by outstanding limitations of current technologies centering on security–vote preservation and certification–and privacy–authentication and anonymity–issues (Gibson et al., 2016; National Academies of Sciences, Engineering, and Medicine, 2018; Park et al., 2020). Owing to these, some electoral systems–such as Switzerland (Kuenzi, 2019)–have halted any further development of BPE systems, whereas others, such as Russia, appear to keep on pushing for BPE, despite initial, security-related setbacks (Kapilkov, 2020). 
In this work, I argue that a vision of technology and democracy through power relations is crucial to uncover what might be hidden behind assumptions made by the communication of innovation. I show that BPE start-ups, rather than focusing on procedural matters, such as appealing to the efficiency and ease of their offered BPE product, choose to focus on the democratic achievements of their technology, and on its creation for the betterment of democracy. Through critical discourse analysis (CDA), I investigated the perceptions of BPE start-ups on technological determinism vis-à-vis democratic ends and outcomes, relating back to understandings of power relations and to democracy, an elusive, heterogeneous concept where tensions between communal participative, bottom-down perspectives coexist with institutionalized practices granting an individualized set of rights. All of this will help us address how new technology for democracy (in the form of BPE) confronts this intersection. This research centers on the application of blockchains to e-voting, a form of voting online mediated by the blockchain. Voting is intrinsically related to democracy, and thus, to citizen participation and representation in government. Representation is all important in democracy, but it is biased by inequality. In fact, rampant inequality is at the basis of the ills of present democratic systems (Fitzi et al., 2018; International IDEA, 2019). Power relations here are crucial, and must be explored when dealing with the emergence of a new technology (intrinsically linked with economic rationale, profit accumulation, . . .) and its relationship to democracy. As summarized above, there is a dearth of theoretical studies linking BPE and democracy, however, such a paucity can be in itself a consideration for “power relations” (Robins and Webster, 1988, p. 52). Through a CDA of how developers of this new technology market and communicate their products I aim at uncovering assumed power relations (who this technology is directed to, who has the right to use it, . . .). I also aim at contributing to the academic literature linking BPE technologies and democracy by analyzing how BPE developers view the state of democratic values today and where democracy is headed. ----- ## MATERIALS AND METHODS Conceptual Framework The fact that BPE is a technology in a stage of recent development and being newly adopted by consumers means that there is little content addressing and describing it besides that crafted by its creators. Therefore, I embarked on examining textual content produced by BPE start-ups, describing their products and the reasons for using them. I believed the assumptions comprehended in what is written (and what is not), in tone, directed audience and structure, would be indicative of the thoughts and stance of the developers of this technology, that so directly seeks to impact democratic outcomes. A preliminary examination of these materials made it clear that texts provided by BPE start-ups focused on democracy itself, the technology’s ultimate achievements, rather than on implementation, deployment logistics or technological functionality. Surprisingly, little attention appeared to be paid to technology adoption by final users, the voters, suggesting an underlying assumption that “code is law” (Lessig, 2006; De Filippi and Hassan, 2016), that people can use the technology. 
This involved a complexity in the textual analysis that would be best approached through the use of CDA as the methodology of analysis because it includes rich, detailed, in-depth textual analysis of a carefully selected number of sources (Fairclough, 1995; Wodak and Meyer, 2009). The intrinsic interest of this research approach was compounded by the fact that there is a significant gap in CDA approximations to blockchain applications for any industry. I anticipated the assumptions around power relations made within texts marketing these technologies to be especially covert by the use of technical jargon in the form of complex technological lexis. All these reasons set the scope of my research around the analysis of final, published texts of BPE for textual, discursive and societal features (Fairclough, 1995). Critical discourse analysis as an analytical method relies on broader academic theory surrounding discourse analysis and its identification of the embodiment of power relations in language (Fairclough, 1989, 1992). Past research around discourse analysis has put forward the idea that “our use of language in particular (is) bound up with causes and effects which we may not be aware of under normal conditions” (Fairclough, 1995, p. 54). Language is perceived to be more than just an innate and natural function enabling communication. Proponents of discourse analysis have theorized that language conveys a set of assumptions and understandings about the world that are “historically and culturally specific and relative” (Gill, 2000, p. 173). Because of this importance of contextual factors, the discursive “is a space for dispersion, it is an open field of relationships” (Foucault, 1968, p. 10). Discourse analysis further brings about a clearer understanding of “assumptions” (Altheide and Schneider, 2013, p. 2) that might appear within the construction of concepts (Gill, 1996, p. 144) in the language employed. This study of assumptions presents itself as essential to perceive factors such as the “limits and the forms of expressibility,” as well as of “conservation: [. . .] those which are repressed and censured” (Foucault, 1968, p. 14). The study of what is not said in a text, as well as the apparent purpose of the content and the people it is addressed to, distinguish discursive forms of textual analysis from other methods. This is the main reason why CDA was chosen to conduct detailed, in-depth research of assumptions in the texts advanced by BPE start-ups. The small number (seven) of search-engine discoverable start-ups for BPE existing in the market suggested that an in-depth critical appraisal of their online materials according to discourse analysis was feasible and should be done. ## Research Design Within available practices of discourse analysis, CDA as theorized by Fairclough (1995), appeared to be particularly fit to analyze the power relations established between the views of technological developers as reflected in text form, and democratic goals and values. CDA goes further than observing the “content, organization and functions of texts” (Gill, 2000, p. 187) and interprets that all social interactions are mediated through language (Fairclough, 1995). Its capacity for “relationality” (Fairclough, 1989, p. 3) and “transdisciplin(arity)” (Fairclough, 1989, p. 3), make it well suited to find implicit connections between expressions as diverse thematically as those concerning technological innovation and democracy. 
Furthermore, CDA goes beyond other discourse analysis genres (such as narrative analysis) in being particularly "sensitive to power relations" (Fairclough, 1992). Unlike other discourse analysis frameworks, it picks up "power, ideology and inequality issues" (Blommaert and Bulcaen, 2000, p. 447) in a close reading of texts. Fairclough's CDA (1992, 1995) revolves around three areas of analysis: the Textual (consisting of discourse-as-text and micro language choices), the Discursive (looking at context, speech acts and intertextuality) and the Societal (viewing discourse-as-social-practice, within ideological and hegemonic processes). Altogether these are particularly perceptive of power relations implied within the forms of communication of texts. Academic research exploring technology and democracy that has forgone power relations in the past has been criticized for this omission, as I have observed above. The research fields of technology and democracy and their interaction, having much to do with representation of voices, which voices matter most and who has the power to design creative technological outcomes that end up "mattering," show that a method, such as CDA, that prioritizes power relations is needed for this research. Critical discourse analysis has intrinsic limitations due to the influence of context and researcher bias. Many of these have been highlighted in the literature (Wodak and Meyer, 2009; Wodak, 2014, p. 311). Despite these limitations, CDA is still the only method providing the necessary amount of detail and "self-reflection" for an emergent topic that deeply requires "new responses and new thoughts" (Wodak and Meyer, 2009, p. 32), such as BPE. In this instance, CDA can help uncover associations between topics according to context. It also shows that "there is no neat separation between the meanings in language and in the social world" (Taylor, 2013, p. 78). This is important to the nature of this research: rather than assessing the benefits or shortcomings of BPE in itself, it is what BPE technology developers express about BPE that will be analyzed. Thus, it is only through CDA that the needed, acute drawing out of social relations can be undertaken. Along these lines, criticisms of CDA as producing interpretation rather than fact feed into a "notion of truth" (Taylor, 2013, p. 82), dichotomizing facts and interpretation into binary opposites. This criticism hardly applies here, given that, as mentioned above, I will be focusing on production and construction of meaning far beyond the mere reporting of facts, and exploring how technological innovators describe their products. To use CDA empirically, I adopted Gill's (2000, pp. 188–189) systematization of the steps to follow in order to undertake discourse analysis, as she provides a good foundation on how to approach a broad research question. This was useful in countering the aforementioned lack of order or clarity of process in conducting CDA with texts.

## Sample Selection

A Google web search engine exploration for BPE start-ups was carried out during the month of April 2019. After a thorough search for the most prominent start-ups, and scouring through some online media articles discussing different emerging BPE start-ups, seven start-ups with an Internet presence that used the Web to communicate about their BPE products in English were chosen.
This small number of start-ups was well-suited for CDA and ensured the possibility to fully examine all of the content in their websites, including attached PDFs, white papers and blog posts concerning their products. The chosen start-ups were Agora, Democracy Earth, Follow My Vote, Polys, Voatz, Votem, and VoteWatcher (Table 1), and all the materials from the web pages that were the subject of my analysis were collected during the months of April and May 2019. As of May 1st, 2020, the relevant contents had not been changed.

## Analytical Procedures

Since web materials did not require transcription, I moved on to "skeptically read and interrogate the text" (Gill, 2000, p. 189), familiarizing myself with the content and keeping the research objectives in mind. Following Fairclough, language was appraised as Textual, Discursive or Societal. According to these categories, texts were analyzed and annotated for implicit and explicit references having to do with technology's role in the government of society and in democratic values, and technology and power/agency. This involved comparing variability in data (frequency and presence of different elements, placement on the web, visibility, rhetorical force, . . .) as well as forming hypotheses about what I uncovered.

## RESULTS AND DISCUSSION

The CDA conducted covers the bulk of the publicly available information offered in their web pages by seven BPE start-ups: Agora, Democracy Earth, Follow My Vote, Polys, Votem, Voatz, and VoteWatcher (Table 1), following Fairclough's three-dimensional canonical model: textual, discursive, and societal (Fairclough, 1995). Due to spatial constraints and to the copious amount of suitable materials, mainly themes of interest that recur throughout the seven start-ups will be highlighted. However, some points of nuance and division among these that contribute toward general conclusions will be included. Some challenges to the CDA are worthy of mention, notably, the abundance of data in the seven websites, as well as the general lack of discursive data due to the "newness" of blockchain for e-voting. Despite these limitations, analysis of BPE start-ups' content allowed me to scrutinize the vision that technology entrepreneurs have for the future in relation to existing power relations. The content of the analysis largely focused on discourses of change, governance, technological ability and links between democracy and productivity, and between democracy and capitalism.

## Failing Methods, Technological Solutions: Democracy Reduced to Voting

On the whole, the seven start-ups studied introduced widespread claims of problems that plague current democratic organizations. Most of them highlighted flaws in current voting systems and their subsequent hindrance to democracy. Agora, VoteWatcher and Follow My Vote, especially, outlined the problems existing with current voting technologies. Paper ballots and Electronic Voting Machines were denounced as being "slow, costly and exposed to many vulnerabilities that can inhibit free and fair elections" (Agora, 2019b, p. 4). Textual analysis here identified the use of a highly descriptive lexis, including adjectives or descriptive phrases loaded with negative connotations referring to existing voting technologies.
Similarly, vocabulary such as "voting machines used in the 2012 election were over a decade old, running outdated software that took only 15 min to hack into?" (VoteWatcher, 2019) was directly followed by "We are using the latest operating system with the most up to date software" (VoteWatcher, 2019) as an effective juxtaposition. Like Agora, VoteWatcher displayed the new technology as necessary: the flaws of past technological advancements in voting prescribe the need for a new technology that solves all of these issues. Follow My Vote went as far as giving a granular, page-long analysis of different voting machines' use cases (Follow My Vote, 2019c), from which the following takeaway was drawn:

Several things can be learned from these system failures. First, the machines must be physically sound. Second, the programming must not have holes that can be exploited. Third, it's not best practice to have extremely simple and guessable passwords hardwired into a voting machine.

This statement presents data in a simplified, matter-of-fact manner. The repeated use of the modal verb "must," as well as the simple and very direct wording of the list (with very factual adverbs like "First. . .," "Second" and "Third" introducing the "things (that) can be learned from these system failures") are important to carry meaning forward. This enumeration prescribed what a voting technological ideal was for Follow My Vote. Through dichotomy and drawing out particular existing issues in voting technologies, start-ups were more effective at pushing the importance of their technology onto readers' perceptions, while purposefully ignoring unresolved privacy, security and overall accountability issues that have been repeatedly associated with these technologies and recently summarized by Park et al. (2020).

TABLE 1 | Blockchain-powered e-voting start-ups chosen for study.

| Startup | Country | Description |
| --- | --- | --- |
| Agora (https://www.agora.vote) | Switzerland | Initiated in 2017 (Swiss Lab & Foundation for Digital Democracy, Leonardo Gammar). Wide media exposure after their technology was used in a recent general election in Sierra Leone. |
| Democracy Earth (https://democracy.earth) | International | Open-source, collaborative enterprise, started in Argentina by Santiago Siri and Pia Mancini (the Net liquid democracy political party). In 2015 they joined other developers and hacktivists to start the Democracy Earth Foundation, an international collective united by the vision that distributed ledger technologies can reverse some of the ills of democracy today: "low participation, polarized endogamy and eroded trust in governing institutions" (https://words.democracy.earth/about). |
| Follow My Vote (https://followmyvote.com) | United States | A "non-partisan public benefit corporation [. . .], founded on the principles of freedom, as a tribute to the Founding Fathers of the United States [. . .] to promote truth and freedom by empowering individuals to communicate effectively and implement non-coercive solutions to societal problems." It aims at "improving the integrity standards of voting systems used in elections worldwide" through the use of blockchain technology (https://followmyvote.com/about-us/). The brainchild of Adam Ernest, Nathan Hourt and Will Long, it is based in Longmont, CO, United States. |
| Polys (https://polys.me) | Russia | An "online voting platform based on blockchain technology and backed with transparent crypto algorithms." Promoted by the Kaspersky Software Lab (Moscow). |
| Voatz (https://voatz.com) | United States | Founded in 2015 and devoted to the development of blockchain-powered mobile voting systems that allow voters to cast their e-vote from their mobile phones. Based in Boston, MA, United States. |
| Votem (https://votem.com) | United States | Offering a "revolutionary mobile voting platform designed to securely cast votes in elections across the globe." It was started in 2014 by Pete Martin and it is based in Cleveland, OH, United States. |
| VoteWatcher (http://votewatcher.com) | United States | A voting platform launched by Blockchain Technologies (MA, United States), "a voting system for the 21st century [. . .] focused on transparency and efficiency and all of the code is open source or available for inspection. It runs on off-the-shelf hardware. Simple paper ballots make it easy for the voter. Detailed election records are posted online and on the blockchain. Every step in the process is highly auditable." |

Also significant to broader research is, perhaps, a societal analysis of what remained unsaid in BPE start-ups' content dealing with the flaws of modern voting. Causal links drawn between the failures of democracy and the necessary solutions that BPE offered imply that: (a) how voting is currently conducted is the main problem existing for democracy, and (b) a reform of how voting is done will be the answer. This line of argument thus ignored other pressing issues outside of the boundaries of voting, such as the irruption of populism, the rise of democratic discontent, or corruption by elected representatives. Importantly in my CDA, I observed that many of the existing power relations in society were ignored by the seven start-ups scrutinized. Imbalances of power such as the aforementioned were disregarded by a reductionist line of argument that reduces democracy to one of its many expressions: voting. This subject, the conflation of democracy with voting, appeared recurrently throughout my analysis.

## Efficiency, Capitalism–Monetary Concerns? Beyond the Democratic?

As aforementioned, existing problems in technologically aided voting were a focal point of most of the analyzed websites. BPE start-ups presented these chaotic methods against the orderly, scientific promise of the blockchain. Such expressions could be found within the start-ups' mission and vision sections of their websites. For Democracy Earth, this was "the need to make our shared home a place of peaceful coexistence" (Democracy Earth, 2019b, p. 2). For Follow My Vote, it was "to promote truth and freedom by empowering individuals to communicate effectively and implement non-coercive solutions to societal problems" (Follow My Vote, 2019a). Polys wanted to "change the way people vote" (Polys, 2019b, p. 2). However, despite stating these ideas, expressed by abstract nouns, an emphasis on changes to proceduralism over form and ideas was a common thread that could be observed across different start-ups' written expression. Within the textual dimension of CDA, I encountered many instances where the democratic process was referred to in a highly detached, scientific manner. The little concern for emotion and emotional language showed BPE start-ups' focus on productivity, efficiency and securitization of the means for democracy.
But no reference to ideas of communitarianism, equality or justice conveyed in democratic thought (e.g., Rousseau) was made. This emphasis on such means of democracy for success was repeated throughout the texts. Certain lexical choices employed throughout the corpus displayed this, e.g., "electoral procedure" (Agora, 2019b, p. 4). This noun, "procedure," conveys a potential for mechanical, technological operationalization, to enact more effective and productive action, in order to facilitate democratic outcomes. It could be argued that a focus on operationalizing means and efficiency may be associated with scientific language because BPE start-ups were, at their core, proposing a technological product. However, the appearance of lexis like "incentivizes" (Democracy Earth, 2019a), "voting systems" (Democracy Earth, 2019b, p. 2), "Governance as a service" (Democracy Earth, 2019a), . . . indicates something different. The societal dimension of CDA links language signaling more efficient and cost-effective operations to capitalism and the economic sphere. What this indicates in relation to power relations is that BPE start-ups operate beyond ends purely focused on democratic outcomes, and hints at the economic forming a large part of how their ideal voting "procedures" are to be developed and deployed. The languages of technology and capitalism converged in this frame. Beyond occasional expressions of the more emotional values behind democratic theory: "While money is the language of self-interest, votes express the shared views of a community. Political currency is not strictly meant for trade but for social choice" (Democracy Earth, 2019b, p. 8), start-ups were primarily concerned with both democracy and the economic, as exemplified by the following Democracy Earth quotation:

Although politics and economics are often perceived as different realms, history teaches that money means power and power means votes. In order to effectively promote democracy it is essential to address both (Democracy Earth, 2019b, p. 15).

As I noted previously, this acute concern with the antidemocratic flaws of existing voting procedures, coupled with ideas of vote "incentivization" and efficiency, displays a stark reality. The current state of the economy is a capitalist one, whereby start-ups present business models whose main purpose is to develop a profit-making mechanism for themselves and for their investors. Discursively, I found a similar trend: investors were a crucial warrant of accountability for the subjects of my analysis. Though most websites avoided referring to them, venture capital funds such as Fenbushi Capital (powering Agora) were mentioned. Papacharissi (2014) identified the crucial role of affect and the emotional for democracy, especially in times of election and electoral campaigning. BPE start-ups, however, neglected this dimension and presented a radically opposite view: they considered the pure act of voting as the expression of democracy. By doing so, they proposed a mechanistic, operational ideal, and ultimately showed a highly polarized view within the political scenario. The observed connection between the development of new technologies for democracy and economic motivations requires further exploration, and opens up grounds for research in future work.

## Who? Audiences and the Issue of Voice

Discursively, CDA allows for a meticulous insight into the treatment of voice in BPE start-ups' literature.
A common trend that I will outline here is that, interestingly, the perceived existing "flaws" of democracy singled out previously tended to be framed in the texts through the lens of the individual subject. Quotations such as "Every eligible individual should be able to actively participate in democracy by easily and safely voting when, how, and where they want" (Votem, 2019b, p. 3) were dotted throughout the texts, thus centering the problem and potential solution around the rights of individual citizens. The promise behind BPE start-ups' services appeared as an improvement for the individual citizen within a democracy. Quotes such as the aforementioned, and similar ones, like "Follow My Vote's mission is to promote truth and freedom by empowering individuals to communicate effectively and implement non-coercive solutions to societal problems" (Follow My Vote, 2019a), focused on the individual as the main subject existing vis-à-vis democratic institutions and being addressed by the radical positive changes of the blockchain. This notion has important implications for power relations. BPE start-ups generally, and Votem and Follow My Vote more specifically, conveyed that they conceive of voting and democratic outcomes as, ultimately, an atomized, individual choice, echoing rational choice theory models such as that proposed by Downs (1957). The start-ups under scrutiny thus showed little interest in comprehending or adopting more communitarian understandings of democracy and participation. This point reiterates and aligns well with the previous finding of a lack of emotional and social participation in the ideal future of democracy that BPE start-ups foresaw. Rather, these start-ups sought to transform democratic practice for the better by guaranteeing the fulfillment of democratic ideals to individuals and individuals alone. This, again, represented a way of avoiding communitarian ideas of democracy. In doing so, BPE start-ups appeared to very much side with the current status quo, a status quo that is being questioned by citizens in the wake of the last global economic crisis (Ancelovici et al., 2016). In addition, and very importantly for the discursive sphere of the CDA, there is the question of who (the audience) these messages are subliminally designed to be read by. I explored who it was that the BPE start-ups were attempting to reach with their literature, who it was that would be interested enough to read and consider their output. In several instances, there was written content in their websites that was under lockdown for the general Internet public, and could not be accessed unless you were a client or in touch with the business (e.g., Voatz, with a mostly locked-down page). Voatz openly marketed itself toward electoral administrators, with "Are you an election administrator interested in trying out Voatz at your next federal, state or local election? Contact Us" (Voatz, 2019) as the question introducing their contact form. This makes it clear that the expected audience of the product was people already involved in the implementation of democratic practices. Other websites presented similar focal points. Agora "work(s) together with vote administrators and politically neutral third-party organizations to implement fair and trusted voting systems" (Agora, 2019a) and alluded to their authority as making their "consensus framework" run right, over choosing to assign this role to, say, impartial voters.
Follow My Vote attempted to include voters in its rhetoric more than other, more institutionally minded websites (perhaps due to the fact that its code is open-source). It did so with expressions such as:

No one except for election officials really knows what happens to your vote once you cast it, so it's not surprising that more and more research is showing that citizens don't vote because they don't believe their votes count. Understandably, these frustrated voters are losing confidence in our democratic system (Follow My Vote, 2019a).

However, and despite this claim, there were areas of the website where authority and decision-making power were not as radically shared. Quotations such as "after all, in an election, it's not who votes that counts, it's who counts the votes!" (Follow My Vote, 2019a) emphasize this, as do web sections such as "Benefits for Candidates" and "Benefits for Registrars," only countered by a single "Benefits to Voters." Therefore, I argue that it is possible to trace a democratic asymmetry inherent in these start-ups' "democratic" ends. This analysis challenges the extent to which the products that BPE start-ups introduced are that ground-breaking, or even "democratic," given that existing institutions continued to be thoroughly involved with guaranteeing the running of democratic outcomes, and average citizens continued to be excluded from decisions surrounding electoral processes. The Voatz website stated the product would benefit "overseas" and "military" voters who found it difficult to participate in elections before (Voatz, 2019); however, the discursive aspects of my analysis show that the text (the solution to their troubles) clearly does not include them as active participants in the creation of opportunity for their involvement. What this signals, accordingly, is an inherent power asymmetry in the way these start-ups address audiences. By restricting BPE implementation to existing institutions and not involving citizens in the process, the start-ups under scrutiny fell into representing a product that has not been democratically implemented and decided upon, but rather, one that would be imposed by existing institutions, the same existing institutions that are being attacked by present-day critics of the democratic status quo. Overall, the acute emphasis these start-ups placed on individualistic interpretations of democratic processes, together with their dialog with existing institutions and representatives in power, displays a replication of present-day power relations. This questions to what extent BPE is a ground-breaking force with a great potential to democratize governance.

## Technological Determinism

A common aspect in the message put forward by BPE start-ups is the emphasis on the positive overtones of the relationship between technology and normative goodness. An example is given in the following quote:

With internet growth reaching over 3 billion lives [. . .] there's no reason stopping mankind from building a borderless commons that can help shape the next evolutionary leap for democratic governance at any scale (Democracy Earth, 2019b, p. 3).

This fits into societal discursive analysis and displays a concern for the power relations embedded at the crux of technology and democracy. Democracy Earth also stated that "The next Silicon Valley is not in a faraway land or on any land at all, but a new frontier of the internet itself rising as the one true open, free and sovereign network of peers" (Democracy Earth, 2019b, p. 24).
Both the direct causal link between "internet growth" and "shap(ing) the next evolutionary leap for democratic governance" shown in the first quote, and the hyperbolic language, "the one true open, free and sovereign network of peers," employed in the second, act similarly. These passages depict a causal relationship between technology and democracy, with technology being expressed as the solution to democratization and giving voice to a population. The intrinsic relationship established between e-voting start-ups and technology and its widespread benefits is reminiscent of the technologically deterministic discourses of technological hype and utopia surrounding the origin of the web and the web 2.0, stemming from Silicon Valley (Castells, 1998, 2012). Therefore, whilst it is in the start-ups' interest to put forward a claim where their technological innovation is seen as indispensable for the future of our democratic values ("it is impossible to envision the future of democracy where digital elections are not the global standard"; Votem, 2019b, p. 3), there are overarching discourses identifying broader ideas about the world at play. One such allusion was displayed in the following example: "the mere existence of risk should not preclude technological progress" (Votem, 2019b, p. 3). The intrinsic relationship between technological innovation and capitalism was alluded to through connotation, through words evoking investment, such as "risk." This is representative of the power embedded in the funding and expertise required to develop these technologies. This, and its relationship with a better future, says a lot about who they envision as bringers of a promising future and what kind of skills and resources are needed for this. Much of my discussion around technology in the textual dimension of CDA follows this enduring societal discourse too. My analysis highlights that start-ups believed in "empowering" voters through their technology:

We are tapping on delivering a human right that can effectively empower individuals that will have to face the coming challenges of automation (Democracy Earth, 2019b, p. 20).

Follow My Vote's mission is to promote truth and freedom by empowering individuals to communicate effectively and implement non-coercive solutions to societal problems (Follow My Vote, 2019b,c, p. 2).

This indicates that BPE platforms view voters as a largely disempowered commons that can achieve the empowerment that democracy should bring about through technological innovation (as argued above) powered by capitalism and Venture Capital.

## Technological Literacy

Finally, voice and technology meet around the issue of technological literacy. There was a multiplicity of statements indicating the existence of complex power relations between technological innovation and the promise of democratic, fair and equal political futures. Several pressing problems were singled out in these start-ups' literature, problems that must be solved to uphold the feasibility and realization of democratic government. One of these is participation. Sentences like:

Democracy can't function if almost half of citizens aren't voting; and in this regard Follow My Vote is striving to restore the democratic tradition (Follow My Vote, 2019a).

are evocative of many meanings directly linked to power relations surrounding technology and democracy. The uses of language here did not include conditional verb tenses: rather, it was stated, in the present tense, that democracy "can't function" as it currently is.
This statement hints at the fact that it is technological ability and competence that allow Follow My Vote to "striv(e) to restore the democratic tradition" and that will find the solutions necessary for democracy to be "fixed." Thus, the sentence "Follow My Vote is striving to restore the democratic tradition" indicates, through a progressive verb form implying continuity, that Follow My Vote is attempting this via their technology. This particular use of language may well imply a trend seen throughout this CDA, textually, discursively and societally: that technological ability gives developers the agency to perform this "striving to restore the democratic tradition" through their creative means and solutions. This conclusion ties up with the academic line of inquiry identified as technological determinism. Connotations of technology being able to solve a perceived democratic deficit demonstrate this. However, it is through a discursive analysis of the quote above that the implications of these start-ups' narratives for technological literacy can be perceived. The discursive aspects of CDA raise many questions pertaining to intertextuality and audiences. The quote above suggests the Follow My Vote technology has the potential to "restore the democratic tradition." However, it also poses important concerns regarding the agency of others to contribute to the preservation of democracy. These queries are elicited because the agency of citizens with no technological skill or knowledge remained unaddressed and unaccounted for. There was a perceived scarcity of statements about the participation and contribution of citizens without technological skills, beyond actively voting for politicians and institutions within the boundaries set by these (as seen in my audience analysis in section "Who? Audiences and the Issue of Voice"). Follow My Vote assumed that democracy would be preserved if most or all citizens were voting, and thus assigned voters, who would employ the technology developed by the start-up, a passive role in shaping democracy's functional future. The text's connotations imply that voters with no technological skills or without institutional power are consigned to voting and nothing more. On the other hand, technologically skilled individuals, as well as the institutions they seek to collaborate with, can actively shape the governmental outcomes of society. Among all the start-ups analyzed, Polys stood out as claiming to provide a platform where "no specific training or IT literacy is needed" and emphasizing that it "is a flexible application and can be easily customized for your particular needs" (Polys, 2019b, p. 3). They also emphasized the importance that complementary information and knowledge have in order to realize the promise of a more democratic society that technology can potentially bring about. By stating that "any attempts to improve the electoral system and democracy with the help of new technologies are meaningless without raising the level of voter awareness" (Polys, 2019a), Polys demonstrated a care and a need for additional qualities. Beyond individualism and beyond technological literacy, it is the actual normative values that are important to maintain democracy. This demonstrates, on the part of Polys specifically, an acute emphasis on access to democracy in a way which, for them, is virtually made harder by bureaucracy and the impositions of an inefficient system.
Regarding technology, Polys commented: "People make democracy–an online voting system is just an instrument for facilitating honest, transparent elections" (Polys, 2019a). Further and similar to this, Democracy Earth argued that "No technology will ever be able to satisfy democratic aspirations if it can only be understood by an elite" (Democracy Earth, 2019b, p. 8), and offered Facebook and Google as examples of monopolistic technological companies that do not manage to ensure the rights to privacy and transparency of their user bases. Finally, Votem also emphasized the ease of the process: "With just a few taps, you can give voters a more powerful way to have their say from their mobile phone or secure web browser. . ." (Votem, 2019a). As referenced above, it is interesting to note how these start-ups acknowledged the need for convenient, user-friendly technology for everyone in order to achieve democratic outcomes. However, most of the societal discourses leading to an improvement of democratic outcomes idealized technology and presented it as able to channel democracy in the right way, as aforementioned, in a highly technologically deterministic mode. They barely considered the fact that this technology might not initially be reachable by everyone. In fact, they disregarded discourses dealing with the digital divide (Warschauer, 2003) on the basis that most of the population is connected to the internet (International Telecommunications Union (ITU), 2018). As a result, BPE start-ups made little reference to promoting the skills that are necessary to participate in the creation of "solutions" to democracy through the use of their platforms. This is clearly a paradox in wide-reaching projects such as those I have analyzed.

## CONCLUSION

In analyzing texts made available by BPE start-ups in their web pages, omissions related to widespread concerns regarding current implementations of BPE technologies stand out. These concerns have been recently summarized as follows:

(1) Blockchain technology does not solve the fundamental security problems suffered by all electronic voting systems. (. . .) (2) Electronic, online, and blockchain-based voting systems are more vulnerable to serious failures than available paper-ballot-based alternatives. (. . .) (3) Adding new technologies to systems may create new potential for attacks (Park et al., 2020, p. 19).

The above considerations led the authors to conclude that "blockchain-based voting methods fail to live up to their apparent promise" (Park et al., 2020, p. 19). Perhaps unsurprisingly for start-ups that wish to promote their products, these issues were not touched upon in their texts. This is especially poignant in the case of Voatz, perhaps the most secretive of the BPE start-ups analyzed (see above). An independent analysis of their BPE server carried out by Specter et al. (2020) through reverse engineering of their mobile app uncovered extreme security and privacy vulnerabilities that should preclude its use in electoral processes. Likewise, allusions and mentions of community and empowering voters can be spotted sparingly, presenting ideas of radical change to the existing power relations governing democracy that have made it fail. However, CDA findings overwhelmingly suggest that, rather than wishing to profoundly alter existing power relations to transform and revitalize democracy, BPE start-ups are ready to make little change beyond switching the ways in which citizens vote, essentially promoting the adoption of their technological solutions.
Their understanding of democracy challenges existing definitions of the term. Those employed throughout this work emphasize a care for voters' rights, as well as a concern for upholding them after and beyond election (voting) time. For this to happen, a combination of the "power of the people," along with a legal enshrinement of individual rights, is necessary. This tension between communal, bottom-up power and institutional, reified power appears in academic research about democracy. Nevertheless, BPE start-ups do not address it and, instead, reduce the extent of democracy to its most procedural expression (voting) and address the blockchain's benefits to individual voters, forgoing any mention of the positives to life as a community within democracy. This raises some doubts as to what community and assembly in participation would look like (if at all) under BPE start-ups' ideal of democracy. Further to this, textual features show that BPE start-ups display a consideration for economics, with constant reference to their products' potential to increase incentive and efficiency. Altogether, they present an image whereby the economy and its individualistic concerns under capitalism take precedence over communitarian, emotional understandings of democratic association. Finally, the recurring idea that technology has a complete and utter capacity for transforming democracy for the better is found as a common trend throughout, and strongly echoes academic research trends on technological determinism in the 1990s. BPE start-ups' belief is so strong that it relegates any activities to promote democracy on the part of voters with no technological background to just voting. This seriously problematizes power relations, entrenched in prior positions and stagnant at a moment when world history and political history advance much faster than ever before (Virilio, 1986). Overall, it can be concluded that BPE start-ups' conceptions of democracy from an individualized, atomized, economic perspective, solely enabled by technological operations, are antidemocratic according to current definitions. They show a considerable lack of concern for the communal, emotional domain that has been crucial in present-day participative criticism of democracy seeking to reduce the democratic deficit (e.g., the Spanish Indignados movement; Errejón and Mouffe, 2016). In summary, while BPE platforms have a potential to solve problems related to the mechanics of voting, it is unlikely that, in their present design, they will contribute to revitalizing democracy or advancing democratizing aims. These conclusions are relevant because the main frame employed by the selected BPE start-ups emphasizes a broad and normative message of democratic improvement through BPE technology, beyond the specific qualities of their product. The use of Fairclough's CDA allowed delimitation of the wide network of power relations involved in choosing certain framings of the products over others. The authority and expertise asserted by the texts describing technological products, such as those analyzed, create a convoluted relationship between the creators of the technology, the targeted "buyers" (e.g., governments) and the ultimate "users" of the technology (citizens). My conclusions suggest that this relationship is one where the final users of the technology are referred to assiduously, but where the functionality of the technology in less technologically literate households is not even considered.
As a continuation of this research, it would be interesting to conduct a parallel CDA on governments' views and understanding of the use of new technologies, such as BPE, for the future of democratic systems. Setting up a thorough, attentive and informed dialog between both sides would help us gain a better appreciation of where views coming from technology and views coming from government overlap, and whether they are more mindful of other areas within the broad conception of democracy sustained here. To conclude, this work opens up several avenues for future research, including those on public perceptions of politics on online platforms and their impact on participation and voting, online modes of civic engagement with partisan politics and their democratic outcomes, and techno-politics and cyberactivism for a new democratic culture. It may also address and interrogate internet-mediated channels of communal participation in local and national politics and their consequences for current democracy, and might benefit from more overarching methodological understandings–grounded theory, mixed methods qualitative research, ethnographies, quantitative methods, . . .–to achieve more concrete and sound conclusions within this field of research.

## DATA AVAILABILITY STATEMENT

The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author/s.

## AUTHOR CONTRIBUTIONS

MI designed and carried out the research and wrote the manuscript.

## ACKNOWLEDGMENTS

MI is the recipient of a "La Caixa" post-graduate fellowship. MI thanks Dr. Dylan Mulvin (LSE) for early advice and guidance, the participants in the "Crisis and Challenges of Democracy" workshop (CES, University of Coimbra, Portugal) for feedback on a preliminary version of this manuscript, and two knowledgeable reviewers for their critical insights into current limitations of BPE technologies regarding fundamental privacy, security and accountability rights.

## REFERENCES

Agora (2019a). Agora Web. Available online at: https://agora.vote (accessed July 1, 2019).
Agora (2019b). Agora Whitepaper. Available online at: https://www.agora.vote/s/Agora_Whitepaper.pdf (accessed July 1, 2019).
Agre, P. (1994). Networking and Democracy. The Network Observer 1.4. Available online at: http://polaris.gseis.ucla.edu/pagre/tno/april-1994.html (accessed July 1, 2019).
Allen, D. W. E., Berg, C., and Lane, A. M. (2019). Cryptodemocracy: How Blockchain Can Radically Expand Democratic Choice. Lanham, MD: Lexington Books.
Altheide, D. L., and Schneider, C. J. (2013). Qualitative Media Analysis, 2nd Edn. London: SAGE.
American Council for Technology Industry Advisory Council (ACT-IAC) (2017). Blockchain Primer: Enabling Blockchain Innovation in the U.S. Federal Government, ACT-IAC Whitepaper. Available online at: https://www.actiac.org/act-iac-white-paper-enabling-blockchain-innovation-us-federal-government (accessed July 1, 2019).
Ancelovici, M., Dufour, P., and Nez, H. (2016). Street Politics in the Age of Austerity: From the Indignados to Occupy. Amsterdam: Amsterdam University Press.
Arslanian, H., and Fischer, F. (2019). The Future of Finance: The Impact of FinTech, AI, and Crypto on Financial Services. Switzerland: Palgrave Macmillan.
Baym, N. K. (2015). Personal Connections in the Digital Age, 2nd Edn. Cambridge, MA: Polity Press.
Beck, U. (1992). Risk Society: Towards a New Modernity. London: Sage.
Berryhill, J., Bourgery, T., and Hanson, A. (2018). Blockchains Unchained: Blockchain Technology and its Use in the Public Sector, OECD Working Papers on Public Governance, No. 28. Paris: OECD Publishing.
Bijker, W. E. (1995). Of Bicycles, Bakelites, and Bulbs: Toward a Theory of Sociotechnical Change. Cambridge, MA: MIT Press.
Bijker, W. E., Hughes, T. P., and Pinch, T. (1987). The Social Construction of Technological Systems: New Directions in the Sociology and History of Technology. Cambridge, MA: MIT Press.
Blessing, J., Gomez, J., Patiño, M., and Nguyen, T. (2020). Security Survey and Analysis of Vote-by-Mail Systems. arXiv [Preprint]. Available online at: https://arxiv.org/pdf/2005.08427.pdf (accessed June 30, 2020).
Blommaert, J., and Bulcaen, C. (2000). Critical Discourse Analysis. Annu. Rev. Anthropol. 29, 447–466.
Brennan, G., and Buchanan, J. (1984). Voter Choice. Am. Behav. Sci. 2, 185–201.
Brennan, G., and Hamlin, A. (1998). Expressive voting and electoral equilibrium. Public Choice 95, 149–175.
Brennan, G., and Lomasky, L. (1993). Democracy and Decision: The Pure Theory of Electoral Preference. Cambridge, MA: Cambridge University Press.
Brennan, J. (2016). "The Ethics and Rationality of Voting," in The Stanford Encyclopedia of Philosophy, ed. E. N. Zalta (Stanford, CA: Stanford University). doi: 10.4159/harvard.9780674497764.c8
Castells, M. (1998). The Information Age: Economy, Society and Culture, 3 Vols. Malden, MA: Blackwell Publishing.
Castells, M. (2012). Networks of Outrage and Hope: Social Movements in the Internet Age. London: Polity Press.
Corrado, A. (1996). "Elections in Cyberspace: Prospects and Problems," in Elections in Cyberspace: Toward a New Era in American Politics, eds A. Corrado and C. M. Firestone (Washington, DC: Aspen Institute).
Crick, B. (2002). Democracy: A Very Short Introduction. Oxford: Oxford University Press.
Curran, J. (2016). "The Internet of Dreams: Reinterpreting the Internet," in Misunderstanding the Internet, eds J. Curran, N. Fenton, and D. Freedman (London: Routledge), 1–47. doi: 10.4324/9781315695624-1
De Filippi, P., and Hassan, S. (2016). Blockchain Technology as a Regulatory Technology: From Code Is Law to Law Is Code. First Monday 21, 12. doi: 10.5210/fm.v21i12.7113
Democracy Earth (2019a). Democracy Earth Web. Available online at: https://democracy.earth/ (accessed July 1, 2019).
Democracy Earth (2019b). Democracy Earth Social Smart Contract Whitepaper. Available online at: https://github.com/DemocracyEarth/paper/blob/master/The%20Social%20Smart%20Contract.pdf (accessed July 1, 2019).
Deuze, M. (2007). Media Work. Cambridge, UK: Polity Press.
Diamond, L. (2010). Liberation Technology. J. Democracy 21, 61–83.
Downs, A. (1957). An Economic Theory of Democracy. New York, NY: Harper.
Ellul, J. (1964). The Technological Society. New York, NY: Vintage.
Errejón, I., and Mouffe, C. (2016). Podemos: In the Name of the People. London: Lawrence & Wishart.
Fairclough, N. (1989). Language and Power. London: Longman.
Fairclough, N. (1992). Discourse and Social Change. Cambridge, UK: Polity Press.
Fairclough, N. (1995). Critical Discourse Analysis: The Critical Study of Language. London: Longman.
Fischer, C. S. (1992). America Calling: A Social History of the Telephone to 1940. Berkeley, CA: University of California Press.
Fitzi, G., Mackert, J., and Turner, B. S. (2018). Populism and the Crisis of Democracy, 3 Vols. London: Routledge.
Follow My Vote (2019a). Follow My Vote Web. Available online at: https://followmyvote.com (accessed July 1, 2019).
Follow My Vote (2019b). Follow My Vote: The Future of Voting. Available online at: https://followmyvote.com/the-future-of-voting/ (accessed July 1, 2019).
Follow My Vote (2019c). Follow My Vote: Voting Systems Vulnerabilities. Available online at: https://followmyvote.com/voting-system-vulnerabilities/ (accessed July 1, 2019).
Foucault, M. (1968). Politics and the Study of Discourse. Ideol. Conscious. 3, 7–26.
Fuchs, C. (2011). Foundations of Critical Media and Information Studies. New York, NY: Routledge.
Fuchs, C. (2013). "Social Media and Capitalism," in Producing the Internet: Critical Perspectives of Social Media, ed. T. Olsson (Gothenburg: Nordicom), 25–44.
Garnham, N. (1994). "Whatever Happened to the Information Society?," in Management of Information and Communication Technologies: Emerging Patterns of Control, ed. R. Mansell (London: ASLIB), 42–51.
Gibson, J. P., Krimmer, R., Teague, V., and Pomares, J. (2016). A Review of E-voting: The Past, Present and Future. Ann. Telecommun. 71, 279–286. doi: 10.1007/s12243-016-0525-8
Gill, R. (1996). "Discourse Analysis: Practical Implementation," in Handbook of Qualitative Research Methods for Psychology and the Social Sciences, ed. J. T. E. Richardson (London: BPS-Blackwell), 141–156.
Gill, R. (2000). "Discourse Analysis," in Qualitative Researching with Text, Image and Sound: A Practical Handbook, eds M. W. Bauer and G. Gaskell (London: SAGE), 172–190.
Goodman, R., and Halderman, J. A. (2020). Internet Voting is Happening Now. Slate 15:2020.
Gronke, P., Galanes-Rosenbaum, E., Miller, P. A., and Toffey, D. (2008). Convenience Voting. Annu. Rev. Pol. Sci. 11, 437–455.
Grossback, L. J., Peterson, D. A. M., and Stimson, J. A. (2007). Mandate Politics. Cambridge, UK: Cambridge University Press.
Grossman, L. K. (1996). The Electronic Republic: Reshaping American Democracy for the Information Age. New York, NY: Viking.
Hegel, G. W. F. (1991). Elements of the Philosophy of Right. Cambridge, UK: Cambridge University Press.
Hillman, A. L. (2010). Expressive Behavior in Economics and Politics. Eur. J. Pol. Econ. 26, 403–418. doi: 10.1016/j.ejpoleco.2010.06.004
International IDEA (2019). The Global State of Democracy 2019: Addressing the Ills, Reviving the Promise. Stockholm: International IDEA. doi: 10.31752/idea.2019.31
International Telecommunications Union (ITU) (2018). Measuring the Information Society Report 2018, Executive Summary. Available online at: https://www.itu.int/en/ITU-D/Statistics/Documents/publications/misr2018/MISR2018-ES-PDF-E.pdf (accessed August 1, 2019).
Jenkins, H. (2008). Convergence Culture: Where Old and New Media Collide. New York, NY: New York University Press.
Juels, A., Eyal, I., and Naor, O. (2018). Blockchains won't fix internet voting security – and could make it worse. Conversation 18:2018.
Kapilkov, M. (2020). Russia Pilots Federal Voting on Waves Blockchain. Cointelegraph 19:2020.
Krimmer, R. (2012). The Evolution of E-voting: Why Voting Technology Is Used and How It Affects Democracy. Ph.D. Thesis. Tallinn: Tallinn University of Technology.
Kuenzi, R. (2019). These Are the Arguments That Sank E-voting in Switzerland. Swissinfo 2:2019.
Lessig, L. (2006). Code: And Other Laws of Cyberspace, Version 2.0. New York, NY: Basic Books.
Levin, Y. (2002). Politics After the Internet. Public Interest 149, 80–94.
List, C. (2013). "Social Choice Theory," in The Stanford Encyclopedia of Philosophy, ed. E. N. Zalta (Stanford, CA: Stanford University).
MacKenzie, D., and Wajcman, J. (eds) (1989). The Social Shaping of Technology, 2nd Edn. Buckingham, UK: Open University.
Mackie, G. (2015). "Why It's Rational to Vote," in Rationality, Democracy, and Justice: The Legacy of Jon Elster, eds C. López-Guerra and J. Maskivker (Cambridge, UK: Cambridge University Press), 21–49. doi: 10.1017/cbo9781107588165.005
Mansbridge, J. J. (ed.) (1990). Beyond Self-Interest. Chicago, IL: University of Chicago Press.
Markus, M. L. (1994). Finding the Happy Medium: Explaining the Negative Effects of Electronic Communication on Social Life at Work. ACM T. Inform. Syst. 12, 119–149. doi: 10.1145/196734.196738
McAllister, I. (2016). Internet Use, Political Knowledge and Youth Electoral Participation in Australia. J. Youth Stud. 19, 1220–1236. doi: 10.1080/13676261.2016.1154936
McLean, I., and Hewitt, F. (1994). Condorcet: Foundations of Social Choice and Political Theory. Cheltenham, UK: Edward Elgar.
McLuhan, M. (1964). Understanding Media: The Extensions of Man. New York, NY: McGraw-Hill.
National Academies of Sciences, Engineering, and Medicine (2018). Securing the Vote: Protecting American Democracy. Washington, DC: The National Academies Press. doi: 10.17226/25120
Noel, H. (2010). Ten Things Political Scientists Know that You Don't. The Forum 8:12. doi: 10.2202/1540-8884.1393
Nye, D. E. (1997). Narratives and Spaces: Technology and the Construction of American Culture. New York, NY: Columbia University Press.
Papacharissi, Z. (2014). Affective Publics: Sentiment, Technology, and Politics. New York, NY: Oxford University Press.
Park, S., Specter, M., Narula, N., and Rivest, R. L. (2020). Going from Bad to Worse: From Internet Voting to Blockchain Voting. Semantic Scholar [Preprint]. Available online at: https://www.semanticscholar.org/paper/Going-from-Bad-to-Worse%3A-From-Internet-Voting-to-Park-Narula/d60e045731228b1118918cbd8cc41f2e19ac143f (accessed June 30, 2020).
Poblet, M., Casanovas, P., and Rodríguez-Doncel, V. (2019). Linked Democracy: Foundations, Tools, and Applications. Switzerland: Springer Open.
Polys (2019a). Polys Web. Available online at: https://polys.me (accessed July 1, 2019).
Polys (2019b). Polys Whitepaper. Available online at: https://polys.blob.core.windows.net/site/Polys_whitepaper.pdf (accessed July 1, 2019).
Poster, M. (2001). What's the Matter with the Internet. Minneapolis, MN: University of Minnesota Press.
Rash, W. Jr. (1997). Politics on the Nets: Wiring the Political Process. San Francisco, CA: W.H. Freeman.
Richey, S., and Zhu, J. (2015). Internet Access Does Not Improve Political Interest, Efficacy, and Knowledge for Late Adopters. Pol. Commun. 32, 396–413. doi: 10.1080/10584609.2014.944324
Robins, K., and Webster, F. (1988). "Cybernetic Capitalism: Information, Technology, Everyday Life," in The Political Economy of Information, eds V. Mosko and J. Wasko (Madison, WI: University of Wisconsin Press), 45–75.
Schroeder, R. (2007). Rethinking Science, Technology, and Social Change. Palo Alto, CA: Stanford University Press.
Shirky, C. (2009). Here Comes Everybody: The Power of Organizing Without Organizations. London: Penguin.
Specter, M., Koppel, J., and Weitzner, D. (2020). The Ballot is Busted Before the Blockchain: A Security Analysis of Voatz, the First Internet Voting Application Used in U.S. Federal Elections. Available online at: https://internetpolicy.mit.edu/securityanalysisofvoatz_public/ (accessed November 27, 2020).
Taylor, S. (2013). What Is Discourse Analysis? London: Bloomsbury.
Thomason, J., Bernhardt, S., Kansara, T., and Cooper, N. (2019). Blockchain Technology for Global Social Change. Hershey, PA: IGI Global.
Toffler, A. (1980). The Third Wave. New York, NY: Bantam.
Toffler, A., and Toffler, H. (1995). Creating a New Civilization: The Politics of the Third Wave. Atlanta, GA: Turner.
Van Dijck, J., and Nieborg, D. (2009). Wikinomics and its Discontents: A Critical Analysis of Web 2.0 Business Manifestos. New Media Soc. 5, 855–874. doi: 10.1177/1461444809105356
Van Niekerk, M. (2019). How Blockchain Strengthened Indonesian Democracy (And Could Do The Same Elsewhere). Forbes, May 23, 2019. Available online at: https://www.forbes.com/sites/worldeconomicforum/2019/05/23/how-blockchain-strengthened-indonesian-democracy-and-could-the-same-elsewhere/#6d32cf252c3a (accessed November 28, 2020).
Virilio, P. (1986). Speed and Politics: An Essay on Dromology. New York, NY: Columbia University.
Voatz (2019). Voatz Web. Available online at: https://voatz.com (accessed July 1, 2019).
Votem (2019a). Votem Web. Available online at: https://votem.com (accessed July 1, 2019).
Votem (2019b). Votem Proof of Vote Whitepaper. Available online at: https://github.com/votem/proof-of-vote/blob/master/proof-of-vote-whitepaper.pdf (accessed July 1, 2019).
VoteWatcher (2019). VoteWatcher Web. Available online at: https://votewatcher.com (accessed July 1, 2019).
Warschauer, M. (2003). Technology and Social Inclusion: Rethinking the Digital Divide. Cambridge, MA: MIT Press.
Winner, L. (1977). Autonomous Technology: Technics-out-of-Control as a Theme in Political Thought. Cambridge, MA: MIT Press.
Winner, L. (1989). "Do Artifacts Have Politics?," in The Social Shaping of Technology, eds D. MacKenzie and J. Wajcman (Buckingham, UK: Open University), 28–40.
Wodak, R. (2014). "Critical Discourse Analysis," in The Routledge Companion to English Studies, eds C. Leung and B. V. Street (London: Routledge), 302–317.
Wodak, R., and Meyer, M. (2009). "Critical Discourse Analysis: History, Agenda, Theory, and Methodology," in Methods for Critical Discourse Analysis, eds R. Wodak and M. Meyer (London: SAGE), 1–33.

Conflict of Interest: The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Copyright © 2021 Imperial. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
{ "disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.3389/fbloc.2021.587148?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.3389/fbloc.2021.587148, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "CCBY", "status": "GOLD", "url": "https://www.frontiersin.org/articles/10.3389/fbloc.2021.587148/pdf" }
2,021
[ "JournalArticle" ]
true
2021-04-09T00:00:00
[ { "paperId": "64e4b6f346ecefcf2a6dc6d6ec1a7fb69a467bff", "title": "Agora" }, { "paperId": "d60e045731228b1118918cbd8cc41f2e19ac143f", "title": "Going from bad to worse: from Internet voting to blockchain voting" }, { "paperId": "7c2e557b73c6e35918dbd0c6b2f0e7c258a63aa7", "title": "Security Survey and Analysis of Vote-by-Mail Systems" }, { "paperId": "c819aa103ab314e07e3195ca78cb94e91edc5035", "title": "SUMMARY: The Global State of Democracy 2019: Addressing the Ills, Reviving the Promise" }, { "paperId": "358aacb6585533dbbd7c9d9edf132d1a102e965d", "title": "The Global State of Democracy 2019: Addressing the Ills, Reviving the Promise" }, { "paperId": "bcd1120eb19d1005bbd107629a256ac1d4848a22", "title": "Critical Discourse Analysis" }, { "paperId": "a6e68731e074bba0809e0f1bf33211fab816ec48", "title": "Correction to: The Future of Finance" }, { "paperId": "b46498b49b1c57fc172a2f6c16fc358d26f5a1ac", "title": "Blockchain Technology for Global Social Change" }, { "paperId": "9341439ce1ef804745f45c54d29faf4894ef22d1", "title": "Linked Democracy" }, { "paperId": "884cf84d7ca40cf21d9e241182b9e7723eb819be", "title": "Discourse Analysis" }, { "paperId": "448d003323009d5c2f00b8f6540cef11b6b5c595", "title": "Populism and the Crisis of Democracy" }, { "paperId": "c4ee3016d74645f68c53f3b7c42edbe1568922f3", "title": "Securing the Vote" }, { "paperId": "6d7edb69dd35d75d4e3797fbd36f78f56626c85f", "title": "Do Artifacts Have Politics?" }, { "paperId": "b6d70017be9f13e1406f32b2dbdf53e5d702209b", "title": "Language and Power" }, { "paperId": "2b00630d77423fca5204ec91a6f8c6b280df0a6e", "title": "Blockchain Technology as a Regulatory Technology: From Code is Law to Law is Code" }, { "paperId": "81ac8abee548ab937b68b3bd03d27e57b2705f5a", "title": "A review of E-voting: the past, present and future" }, { "paperId": "4db268e806c077f00c7e5de5bf576da7d453f933", "title": "Internet use, political knowledge and youth electoral participation in Australia" }, { "paperId": "8981142c5fe8f5350e7af234579fc115130fd359", "title": "The internet of dreams" }, { "paperId": "b818c6b15d50dc92b2a4b9aa9b890a36ea4bd9dc", "title": "Internet Access Does Not Improve Political Interest, Efficacy, and Knowledge for Late Adopters" }, { "paperId": "5b752d7cf98bf39fd6183ee19a28c7b59a467640", "title": "Rationality, Democracy, and Justice: Why It’s Rational to Vote" }, { "paperId": "2900d2e581445f80eb3ecb75b587473e6dcedd08", "title": "Affective Publics: Sentiment, Technology, and Politics" }, { "paperId": "f9247a56c2a88372e62f288505fc74858444e081", "title": "NETWORKS OF OUTRAGE AND HOPE. SOCIAL MOVEMENTS IN THE INTERNET AGE, Manuel Castells, Cambridge, Polity Books, 2012, 200 pages" }, { "paperId": "6e75e39143e6a9e815fa6a4e33a293f9b208efb1", "title": "The Evolution of E-voting: Why Voting Technology is Used and How it Affects Democracy." 
}, { "paperId": "9bddc8909096e0b42372a91d1dd880ed3ee561f5", "title": "Foundations of Critical Media and Information Studies" }, { "paperId": "fc40c483e9641570651d7a52b95e70a4e6849954", "title": "Expressive behavior in economics and politics" }, { "paperId": "e0411b7d700be9220ab97522b32713cef483de12", "title": "Ten Things Political Scientists Know that You Don't" }, { "paperId": "c34b6a75becaac5f7eabed8b1b676806468ce15e", "title": "Liberation Technology" }, { "paperId": "764306bbe00cdd72f8499d1e7f994b037759188c", "title": "Wikinomics and its discontents: a critical analysis of Web 2.0 business manifestos" }, { "paperId": "baa56de8baeeb9936a4bc8f1033cc1e9d503622e", "title": "The Risk Society: Towards a new modernity" }, { "paperId": "0e0f61eccd6b7c4f6557a7a28a17ac3ea44c1315", "title": "Here Comes Everybody: The Power of Organizing Without Organizations" }, { "paperId": "c1146e3da50846519731d7bf24e01b9df61d3129", "title": "Rethinking Science, Technology, and Social Change" }, { "paperId": "101a822431ac3566abc045a0dbe3c5d3850f5566", "title": "Convergence Culture: Where Old and New Media Collide" }, { "paperId": "932f077b69006b951a9e7eda7407281c61b43074", "title": "Mandate Politics" }, { "paperId": "af46eae8afb0838019958fba220158a7705e71ed", "title": "Technology and Social Inclusion. Rethinking the Digital Divide" }, { "paperId": "53e02324935b45df6a31c79888a56cb680635ae4", "title": "Code and Other Laws of Cyberspace" }, { "paperId": "8d19aa993deb672b118f3ff6d0dda16e096d73d2", "title": "The Social Shaping of Technology (2nd ed.)" }, { "paperId": "fc46b74c477643dbbeef3f9ea7498728e7d3f29c", "title": "Democracy: A Very Short Introduction" }, { "paperId": "38939cc515ac0abf6581c884fb6215754062f58d", "title": "The Information Age: Economy, Society and Culture, 3 vols ‐ Vol. 1: The Rise of the Network Society" }, { "paperId": "5801ee5b383624a8e79689e3f3210f18b0e7b33a", "title": "Democracy" }, { "paperId": "55fe27dd0fcb64464c55834f13e3b3a275099612", "title": "The ‘Third Wave’" }, { "paperId": "98c02040fc5f8332ff32c0d447e40d483bb8bc2e", "title": "CRITICAL DISCOURSE ANALYSIS" }, { "paperId": "a137d5157f16b2359df7194ea8c5c5517d7badb9", "title": "Expressive voting and electoral equilibrium" }, { "paperId": "8e778dd7b2b6ab815d917954965e21097ca1bc92", "title": "Democracy and Decision: The Pure Theory of Electoral Preference" }, { "paperId": "7b50810157e299a92bdf1872f4935b82f723c98c", "title": "Politics on the nets: wiring the political process" }, { "paperId": "2882f1830e4e6f37c5bc30496a5542bf06d06dd8", "title": "Of Bicycles, Bakelites, and Bulbs: Toward a Theory of Sociotechnical Change" }, { "paperId": "c8a263a051cd2eccadb7d0c51871883e0e4a335b", "title": "Condorcet: Foundations of Social Choice and Political Theory" }, { "paperId": "ec69dd0c01b254f65a9573bee1e5baa9bcc4d264", "title": "Democracy and Decision: The Pure Theory of Electoral Preference. By Geoffrey Brennan and Loren Lomasky. New York: Cambridge University Press, 1993. 237p. $44.95." }, { "paperId": "80b43e1c43a04491b6dc2763190cdf6e8f4e4ed8", "title": "Personal Connections in the Digital Age" }, { "paperId": "f28cc2ba39a71d9a915f5ddfc33a422f1e5ae126", "title": "Finding a happy medium" }, { "paperId": "2d10897a990b306e7203f387e981daa47060739d", "title": "America Calling: A Social History of the Telephone to 1940" }, { "paperId": "633557d9e954b95d9a7da2d6faea1a8b786f69aa", "title": "America calling: A social history of the telephone to 1940 by Claude S. 
Fischer University of California Press, Berkeley, CA, 1992, 424 pp, $25.00" }, { "paperId": "7389a18d0c18d12dac817871e46206b57f2f89d2", "title": "Megatrends or megamistakes?: whatever happened to the information society?" }, { "paperId": "eabbd687a5258ab39975b690ff389e961375f18e", "title": "Hegel: Elements of the Philosophy of Right" }, { "paperId": "be1eb642f5d6d7cf2d3f4e9b9a6274df6596ae83", "title": "Beyond self-interest" }, { "paperId": "f5a1804de8b7cd550190a10f9b5120b74a434e28", "title": "The social construction of technological systems: new directions in the sociology and history of technology" }, { "paperId": "d58ca9814f94799451f1a8a9d9ff5d575815cd27", "title": "Speed and Politics: An Essay on Dromology" }, { "paperId": "3ed5797d655ab5e45ef416532236f2aee74e5ed5", "title": "Voter Choice" }, { "paperId": null, "title": "Conflict of Interest: The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest" }, { "paperId": "d6a5afd47a5bddc669399dc299c11ab8ac3368c2", "title": "The Ballot is Busted Before the Blockchain: A Security Analysis of Voatz, the First Internet Voting Application Used in U.S. Federal Elections" }, { "paperId": null, "title": "Russia Pilots Federal Voting on Waves Blockchain." }, { "paperId": null, "title": "Internet Voting is Happening Now" }, { "paperId": "47d632f1083801c4411a87378a24ad038a24bda1", "title": "”Networks of Outrage and Hope. Social Movements in the Internet Age”." }, { "paperId": null, "title": "Votem Proof of Vote Whitepaper" }, { "paperId": null, "title": "Voatz" }, { "paperId": null, "title": "These Are the Arguments That Sank E-voting in Switzerland." }, { "paperId": null, "title": "The Future of Finance: The Impact of FinTech, AI, and Crypto on Financial Services" }, { "paperId": null, "title": "Cryptodemocracy: How Blockchain Can Radically Expand Democratic Choice" }, { "paperId": null, "title": "Follow My Vote (2019a)" }, { "paperId": null, "title": "Polys" }, { "paperId": null, "title": "VoteWatcher" }, { "paperId": null, "title": "Blockchains Unchained: Blockchain Technology and its Use in the Public Sector" }, { "paperId": null, "title": "Measuring the Information Society Report 2018, Executive Summary" }, { "paperId": null, "title": "Blockchains won’t fix internet voting security– and could make it worse." }, { "paperId": null, "title": "Blockchain Primer: Enabling Blockchain Innovation in the U.S. Federal Government" }, { "paperId": "9d67bf0ee1c58c8aaa82ed71d8a38bccd000c06e", "title": "Street Politics in the Age of Austerity" }, { "paperId": "848f86ba7996e3730b3ca486d3f320c50b150c15", "title": "Podemos: In the name of the people" }, { "paperId": null, "title": "StreetPoliticsintheAgeofAusterity: From the Indignados to Occupy" }, { "paperId": "984e25e67cbddfccdac7dcca320b621a0e64a81c", "title": "An Economic Theory of Democracy" }, { "paperId": "6594dc608fb02d1d96b8074c8cff65afddfd9b11", "title": "Social media and capitalism" }, { "paperId": "73f801d960e994768eca44df83d0b7ad19b613f5", "title": "Convenience Voting" }, { "paperId": null, "title": "Wasko (Madison, WI: University of Wisconsin press), 45–75" }, { "paperId": "129add587072f63a35729e324469c77025411215", "title": "Social Construction" }, { "paperId": "6e8c43edc9b1a30bd955921592db41b5379ed81c", "title": "What's the Matter with the Internet?" 
}, { "paperId": "e74e34dc50982ebc8df913498061f4bebb6c490b", "title": "Code and Other Laws of Cyberspace" }, { "paperId": "e34404b00f0d3d91e53eba5c7b85a99eae6033c5", "title": "Narratives And Spaces: Technology and the Construction of American Culture" }, { "paperId": "a4a7698ebd88ab0b8f2183371901323ddd66ceaf", "title": "Qualitative Media Analysis" }, { "paperId": null, "title": "The Electronic Republic: Reshaping American Democracy for the Information Age" }, { "paperId": null, "title": "Elections in Cyberspace: Prospects and Problems" }, { "paperId": "70fd2e37b5869499374e486c49ccbf9be88b407c", "title": "Critical Discourse Analysis: The Critical Study of Language" }, { "paperId": "93f9f42f347989ae79b0624c8a2376767ebe0b76", "title": "Creating a new civilization : the politics of the Third Wave" }, { "paperId": "d0e3b23f1b4701d269ec4325c4f7f358c61441ea", "title": "Democracy and Decision" }, { "paperId": "4abd90f1c03b88c675e341c028ad2a64abf90f55", "title": "Discourse and social change" }, { "paperId": "b5c44f899edde1b5676d63d632ce912e54bec46e", "title": "The social shaping of technology" }, { "paperId": null, "title": "Cybernetic Capitalism: Information" }, { "paperId": "ce11759cc4047695a847c7def648e5ad9ef0dbf7", "title": "Social Choice Theory" }, { "paperId": "ee2f5b689495f2eaca6efb6ca9cabb726e6dbf99", "title": "Autonomous Technology: Technics-out-of-Control as a Theme in Political Thought" }, { "paperId": "02a83c02fde6852681304d430649a87122627b57", "title": "CHAPTER 6. CONCLUSIONS" }, { "paperId": null, "title": "UnderstandingMedia:TheExtensionsofMan" }, { "paperId": "66496d400a82b87cb706bff8e9b9ca3e86d599a1", "title": "Elements of the Philosophy of Right" }, { "paperId": null, "title": "Frontiers in Blockchain | www." }, { "paperId": null, "title": "Blockchain-Powered E-Voting Start-Ups" }, { "paperId": null, "title": "Networking and Democracy. The Network Observer 1.4" }, { "paperId": null, "title": "Follow My Vote: Voting Systems Vulnerabilities" }, { "paperId": null, "title": "Follow My Vote: The Future of Voting" }, { "paperId": null, "title": "Votem Web" }, { "paperId": null, "title": "How Blockchain Strengthened Indonesian Democracy (And Could Do The Same Elsewhere)" } ]
21,545
en
[ { "category": "Economics", "source": "s2-fos-model" }, { "category": "Business", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/02324eda5842b7144859126bf152495e7c8a415d
[]
0.881789
International Capital Flows, Dynamic Changes in Cryptocurrency and Noble Metal Markets
02324eda5842b7144859126bf152495e7c8a415d
BCP Business &amp; Management
[ { "authorId": "2068168295", "name": "Ruize Sun" } ]
{ "alternate_issns": null, "alternate_names": null, "alternate_urls": null, "id": null, "issn": null, "name": null, "type": null, "url": null }
In 2022, with the implementation of tightening monetary policies by the FOMC, the US dollar is experiencing a dramatic appreciation within a very short period. Though numerous studies have demonstrated the connection between the traditional currency market, the cryptocurrency market, and the precious metal market, few studies explore the relationships between the three markets under a special political environment. This paper selects the USDCNY exchange rate, gold and silver, and bitcoin as representatives of the three markets and then tests the volatility response of returns on gold and silver and returns on bitcoin to changes in the return on the USDCNY exchange rate. By employing the impulse response function and an ARMA-GARCHX model, the paper verifies that changes in the exchange rate significantly exacerbate the volatility of returns on gold and silver and on bitcoin, which suggests high risk and uncertainty in the cryptocurrency and precious metal markets in a complex and extreme political environment. Investors and speculators should adopt prudent investment strategies in such an environment.
BCP Business & Management **AFTEM 2022** Volume **32** (2022)

# **International Capital Flows, Dynamic Changes in Cryptocurrency and Noble Metal Markets**

## Ruize Sun*

School of Business, University of Leicester, Leicester, LE2 7RH, UK

*Corresponding author: rs673@student.le.ac.uk

**Abstract.** In 2022, with the implementation of tightening monetary policies by the FOMC, the US dollar is experiencing a dramatic appreciation within a very short period. Though numerous studies have demonstrated the connection between the traditional currency market, the cryptocurrency market, and the precious metal market, few studies explore the relationships between the three markets under a special political environment. This paper selects the USDCNY exchange rate, gold and silver, and bitcoin as representatives of the three markets and then tests the volatility response of returns on gold and silver and returns on bitcoin to changes in the return on the USDCNY exchange rate. By employing the impulse response function and an ARMA-GARCHX model, the paper verifies that changes in the exchange rate significantly exacerbate the volatility of returns on gold and silver and on bitcoin, which suggests high risk and uncertainty in the cryptocurrency and precious metal markets in a complex and extreme political environment. Investors and speculators should adopt prudent investment strategies in such an environment.

**Keywords:** Exchange Rate, Precious Metal, Bitcoin, Volatility, ARMA-GARCHX.

## **1. Introduction**

Aiming to suppress the price pressure generated by high inflation, the Federal Open Market Committee (FOMC) planned a series of aggressive monetary contractions, announcing interest-rate rises at FOMC meetings scheduled for March, May, July, September, and November. By 28 July, the FOMC had raised rates by 25 basis points, 50 basis points, and 75 basis points in March, May, and June, respectively. With the tightening monetary policies, predictably, the US dollar is experiencing a soaring appreciation. Since the Chinese central bank kept the interest rate constant in 2022, this paper uses CNY as a proxy to reflect the real-time value of the USD. As shown in Figure 1, the USDCNY exchange rate was maintained at around 6.34 before the first interest-rate rise in March. Corresponding with the rate rises, the exchange rate increased dramatically in March, April, and May, and then stabilized in June at around 6.72. With the skyrocketing appreciation of the dollar in such a short period, not just the commodity sector but all financial sectors are experiencing a huge shock through various channels; for instance, the price volatility, risk, and expected return of financial assets might change greatly with a sudden appreciation of the dollar.

Figure 1. The exchange rate between USD and CNY

Even though the Bretton Woods System has become a distant memory, precious metals are still closely connected with the USD, as their prices are normally quoted in USD. Generally, precious metals, especially gold and silver, are recognized as both a kind of commodity and a special currency.
From the commodity aspect, gold and silver are scarce and have great intrinsic value, and they are also important industrial raw materials; from the currency aspect, gold and silver have been perceived as a symbol of wealth for thousands of years, which indicates they have the capacity for value storage; moreover, they are used as mediums of exchange or to quote the value of goods. Because of the dual attributes of precious metals, the USD prices of gold and silver might be sensitive to changes in the USD exchange rate and various macroeconomic factors. Beckers and Soenen discovered that the USD price of gold rises with the depreciation of the USD relative to other foreign currencies [1]; this relationship was also verified by Sjaastad and Scacciavillani through a comparison of the USD price of gold with the DM (Deutsche Mark) price of gold when the USDDM exchange rate decreased [2]. They further found that the volatility of gold prices is generated mainly by floating exchange rates among the major currencies. However, Pukthuanthong and Roll, using a VAR model and Granger causality analysis, show that the negative relationship between changes in the gold price and changes in the value of the quoting currencies is trivial, while the volatilities of the USD, JPY, DM, and GBP prices of gold are similar [3]. Via an asymmetric power GARCH model, Tully and Lucey confirm that the dollar effective exchange rate is the most influential factor for the mean return on gold, while the exchange rate fails to explain the variance and fluctuation of the gold price [4]. Meanwhile, the interest rate cannot explain either the mean or the variance of the return on gold, demonstrating that the gold price has limited relation to the interest rate. Interestingly, the coefficient of the one-lag autoregressive conditional variance is significant at the 1% level and has the largest absolute value among all the coefficients, implying the high volatility persistence of the gold price. This result is identical to the finding of Hammoudeh and Yuan, who also verify the high volatility persistence of returns on gold and silver [5]. On the contrary, however, the impact of the interest rate on gold and silver is significant and dampening, for both the mean and the variance, similar to the conclusion of Hashim et al. [6]. Consequently, the interest rate and the USD exchange rate have an impact on both the prices and the volatility of gold and silver; however, the strength of this impact is unstable and might shift over time. Bitcoin was first introduced by Nakamoto in 2008, designed as a new substitute for conventional currency. However, the nature of bitcoin still confuses scholars and economists today because it is ambiguous and overlaps with various financial fields. Dyhrberg describes bitcoin as "the asset between gold and traditional currency", since bitcoin shares many similar attributes with gold [7]. Like gold, bitcoin is scarce, as the total amount of bitcoins is decided by the algorithm, but it lacks intrinsic value; in this respect, bitcoin is closer to conventional currency, like the dollar. Additionally, bitcoin has some features distinct from both gold and traditional currency: first, it is decentralized, with no related departments or organizations monitoring its market.
Although decentralization endows bitcoin with very high liquidity, its risk is also high because of the high possibility of fraud and manipulation; moreover, the value or credit of bitcoin is not guaranteed by any legal regime or bloc, which implies that its value is determined entirely by the market, and this feature brings high instability and volatility to its price. Geuder et al. have demonstrated that cryptocurrencies are a special kind of speculative asset, as their prices are significantly associated with bubble behavior [8]. Baur et al. and Corbet et al. respectively confirm that bitcoin is isolated and has limited connection with other financial assets, such as gold and oil futures, which suggests that bitcoin fails as a hedge asset but is more suitable for risk diversification [9,10]. The latest research from Kwon, exploring the tail behaviors of bitcoin, gold, and the dollar, illustrates a significant negative correlation between bitcoin and the dollar while rejecting the similarity between bitcoin and gold, according to their different tail features [11]. This conclusion may emphasize the currency and investment attributes of bitcoin. Consequently, these features indicate that bitcoin is mainly a speculative asset rather than a traditional currency or commodity; hence its price is highly uncertain and volatile with fluctuations in other financial factors, such as the interest rate and exchange rate.

From previous research, the general interactions between the exchange market, the precious metal market, and the cryptocurrency market are clear; however, little research tests the interaction between those three markets within unusual scenarios. In a specific period, the interaction pattern and volatility linkage among the exchange rate, bitcoin, and gold can be quite different from normal times. For both investors and policymakers, risk management is always important; therefore, scholars must clarify the risk of assets in such unusual periods to avoid irrational investment or policy making. Hence, the main purpose of this paper is to examine the risk sensitivity of returns on gold, silver, and bitcoin to changes in the return on the USDCNY exchange rate within the monetary contraction period of 2022. By employing the impulse response function and an ARMA-GARCHX model, the author discovers that changes in the return on the USDCNY exchange rate have a significant impact on both the means and the volatility of returns on gold, silver, and bitcoin. The conclusion further suggests that investors should be more prudent when investing in financial assets in such a period. The rest of this paper is organized as follows: Part 2 is the research design, including data sources, unit root tests, and the identification strategy; Part 3 reports empirical results; Part 4 is the discussion; and Part 5 is the conclusion.

## **2. Research Design**

### **2.1 Data sources**

As the FOMC started implementing tightening monetary policies in 2022, to avoid interference from the previous period, this paper collects data from 4 January to 28 July 2022, with 137 observations for each variable. The four selected variables are the logarithms of the daily closing prices of gold, silver, bitcoin, and the USDCNY exchange rate. To ensure the continuity of the time series with the exchange rate, the weekend trading prices of gold, silver, and bitcoin have been excluded from the sample.
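As an illustration of this preparation step, the following is a minimal sketch in Python, assuming the daily closing prices have been exported to a CSV file with hypothetical column names (`date`, `gold`, `silver`, `btc`, `usdcny`); it is not the author's code.

```python
import numpy as np
import pandas as pd

# Load daily closing prices; the file name and column names are illustrative assumptions.
prices = pd.read_csv("closing_prices.csv", parse_dates=["date"], index_col="date")

# Keep only dates on which the USDCNY rate traded, which drops the weekend
# observations of gold, silver, and bitcoin, as described above.
prices = prices.dropna(subset=["usdcny"])

# Log prices and their first differences (continuously compounded returns).
log_prices = np.log(prices)
returns = log_prices.diff().dropna()

print(returns.describe())
```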
### **2.2 Unit Root Tests**

**2.2.1 ADF test**

When conducting a time-series examination, a necessary condition is that the sequence is stationary. If the sequence is nonstationary and drifts randomly, it will fail to cluster around its expectation, and past information and innovations will interfere with the result permanently. Dickey and Fuller proposed a method to test for a unit root process in a sequence, also known as the DF test [12].

$$ p_t = \phi_0 + \phi_1 p_{t-1} + \varepsilon_t \tag{1} $$

Equation (1) is a standard one-lag autoregressive model; the stationarity condition is that the coefficient $\phi_1 < 1$. Using the ordinary least squares estimate, the coefficient $\phi_1$ and the residual variance are:

$$ \hat{\phi}_1 = \frac{\sum_{t=1}^{T} p_{t-1} p_t}{\sum_{t=1}^{T} p_{t-1}^2}, \qquad \hat{\sigma}^2 = \frac{1}{T-1} \sum_{t=1}^{T} \left( p_t - \hat{\phi}_1 p_{t-1} \right)^2 \tag{2} $$

With $p_0 = 0$ and $T$ the sample size, the DF test statistic is:

$$ DF = \frac{\hat{\phi}_1 - 1}{SE(\hat{\phi}_1)} = \frac{\sum_{t=1}^{T} p_{t-1} e_t}{\hat{\sigma} \sqrt{\sum_{t=1}^{T} p_{t-1}^2}} \tag{3} $$

The hypotheses are $H_0: \phi_1 = 1$ and $H_1: \phi_1 < 1$; when the p-value is small enough to reject the null hypothesis, there is no unit root in the sequence. However, many sequences in finance and economics cannot be described as random drifting processes and are instead fitted with an autoregressive integrated moving average (ARIMA) model; the regression for such a sequence is:

$$ X_t = \beta X_{t-1} + \sum_{j=1}^{p-1} \phi_j \, \Delta X_{t-j} + \varepsilon_t \tag{4} $$

When $\beta = 1$, equation (4) is an AR(p-1) model of $\Delta X_t$; when $\beta < 1$, equation (4) is an AR(p) model of $X_t$. The adjusted DF (ADF) test statistic is:

$$ ADF = \frac{\hat{\beta} - 1}{SE(\hat{\beta})} \tag{5} $$

The hypotheses are $H_0: \beta = 1$ and $H_1: \beta < 1$; rejecting the null hypothesis indicates that there is no unit root process.

**2.2.2 Test results**

Table 1 exhibits the results of the ADF tests on the samples and their first-order differences. The original sample sequences fail to reject the null hypothesis of the ADF test, indicating that a random drifting process exists in those sequences, while the t-values of the first-order differences are significant at the 1% level and demonstrate the stationarity of the logarithmic yield rates.

Table 1. ADF test

| | Variable | t-statistic | p-value |
| --- | --- | --- | --- |
| Price | Gold | -1.831 | 0.6895 |
| | Silver | -1.970 | 0.6174 |
| | BTC | -2.079 | 0.5579 |
| | Exchange rate | -1.804 | 0.7030 |
| Yield | Gold | -9.571 | 0.0000*** |
| | Silver | -8.317 | 0.0000*** |
| | BTC | -7.623 | 0.0000*** |
| | Exchange rate | -7.618 | 0.0000*** |

### **2.3 Identification strategy**

As a variable can be influenced not only by other independent variables but also by its own past values, an autoregressive model can be built to describe this process:

$$ X_t = \phi_0 + \sum_{j=1}^{p} \phi_j X_{t-j} + \varepsilon_t \tag{6} $$

Here $\phi_0$ is a constant, $\phi_j$ is the j-lag autoregressive coefficient, determined only by the lag order and not by the time t, and $\varepsilon_t$ is the innovation of $X_t$, which is white noise with an independent and identical distribution, usually standard normal N(0, 1). Since the sequence is a weakly stationary process, the expectation of each $X_t$ is the same; therefore, $X_t$ can be perceived as the result of both the expectation and the accumulation of innovations from past time. Generally, such a sequence can be described by a moving average (MA) model:

$$ X_t = \theta_0 + \sum_{j=1}^{q} \theta_j \varepsilon_{t-j} + \varepsilon_t \tag{7} $$

Here $\theta_0$ is the expectation of the variable X, $\theta_j$ is the coefficient of the j-lag innovation, and $\varepsilon_{t-j}$ is the innovation of $X_{t-j}$, which is white noise with an independent and identical distribution, like $\varepsilon_t$.
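Before turning to the ARMA specification, here is a minimal sketch of the ADF tests behind Table 1 using the `statsmodels` library; it reuses the hypothetical `log_prices` and `returns` frames from the earlier sketch and is illustrative rather than the author's code.

```python
from statsmodels.tsa.stattools import adfuller

# Run the ADF test on log-price levels and on first differences (yields),
# mirroring the two halves of Table 1.
for name in ["gold", "silver", "btc", "usdcny"]:
    for label, series in [("price", log_prices[name]), ("yield", returns[name])]:
        t_stat, p_value, *_ = adfuller(series.dropna())
        print(f"{name} ({label}): t = {t_stat:.3f}, p = {p_value:.4f}")
```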
Combining the AR(p) model and the MA(q) model to describe the sequence gives the autoregressive moving average (ARMA) model:

$$ X_t = \phi_0 + \sum_{j=1}^{p} \phi_j X_{t-j} + \sum_{l=1}^{q} \theta_l \varepsilon_{t-l} + \varepsilon_t \tag{8} $$

Let $B$ be the backshift operator, with $B^j X_t = X_{t-j}$; the ARMA(p, q) model can therefore be rewritten as an infinite-order MA process:

$$ X_t - \sum_{j=1}^{p} \phi_j X_{t-j} = \phi_0 + \sum_{l=1}^{q} \theta_l \varepsilon_{t-l} + \varepsilon_t \tag{9} $$

$$ \phi(B) X_t = \phi_0 + \theta(B) \varepsilon_t, \qquad \phi(B) = 1 - \sum_{j=1}^{p} \phi_j B^j, \quad \theta(B) = 1 + \sum_{l=1}^{q} \theta_l B^l \tag{10} $$

$$ X_t = \mu + \frac{\theta(B)}{\phi(B)} \varepsilon_t, \qquad \mu = \frac{\phi_0}{\phi(1)} \tag{11} $$

$$ X_t = \mu + \left( \sum_{j=0}^{\infty} \psi_j B^j \right) \varepsilon_t \tag{12} $$

With $\psi_0 = 1$, equation (12) is also known as the Wold representation of the ARMA(p, q) process, and $\psi_l = \partial X_{t+l} / \partial \varepsilon_t$ is the impulse response function of ARMA(p, q): a unit innovation $\varepsilon_t = 1$ brings an additional $\psi_l$ to $X_{t+l}$. Considering multiple variables in the AR(p) framework gives the vector autoregressive (VAR) model:

$$ r_t = \phi_0 + \sum_{j=1}^{p} \Phi_j r_{t-j} + a_t \tag{13} $$

$$ \phi_{ab,\,t-j} = \mathrm{corr}\left( r_{a,t},\, r_{b,t-j} \right) \tag{14} $$

Here $r_t$ is the vector of k variables at time t, and $\Phi_j$ is the j-lag cross-correlation matrix between the vectors $r_t$ and $r_{t-j}$, whose single element is given by equation (14); when a = b, the cross-correlation coefficient is the j-lag autoregressive coefficient of the variable $r_{a,t}$. $\phi_0$ is a 1×k constant matrix, and $a_t$ is the innovation vector. Similar to the AR(p) model, the characteristic function of VAR(p) is:
Further, in order to explore the contribution of exogenous variables to the volatility of financial assets, GARCH model with additional distributed lag term (GARCHX) could be better than standard GARCH model, and formula of GARCHX is: 236 ----- BCP Business & Management **AFTEM 2022** Volume **32** (2022) ### 𝜎 𝑡2 = 𝛼 0 + ∑ 𝑚𝑙=1 𝛼 𝑙 𝑎 𝑡−𝑙2 + ∑ 𝑠𝑗=1 𝛽 𝑗 𝜎 𝑡−𝑗2 + ∑ 𝑘𝑖=1 𝜌 𝑖 𝑋 𝑖 (21) And 𝑋 𝑖 is exogenous which contributes volatility to variance, and 𝜌 𝑖 is the related coefficient of term 𝑋 𝑖 . By combining ARMA model and GARCHX model, the joint model of ARMA-GARCHX is: ### 𝑌 𝑡 = 𝜙 0 + ∑ 𝑃𝑗=1 𝜙 𝑗 𝑌 𝑡−𝑗 + ∑ 𝑞𝑙=1 𝜃 𝑙 𝜀 𝑡−𝑙 + 𝑎 𝑡 𝑎 𝑡 = 𝜎 𝑡 𝜀 𝑡 {𝜎 𝑡2 = 𝛼 0 + ∑ 𝑚𝑙=1 𝛼 𝑙 𝑎 𝑡−𝑙2 + ∑ 𝑠𝑗=1 𝛽 𝑗 𝜎 𝑡−𝑗2 + ∑ 𝑘𝑖=1 𝜌 𝑖 𝑋 𝑖 ### (22) ## **3. Empirical results ** ### **3.1 VAR ** For VAR order selection, the paper anticipates 12 lagged orders for the model and makes lags length test via STATA, and the results are shown in Table 2. For various information criteria, the test marks illustrate that 0 lag is the best option for the model, which implies the malfunction of information criteria to select an appropriate lag for the model. For likelihood-ratio statistic, lag-4, lag- 7, lag-8, lag-9, lag-12 are all significant at 5% level, and the recommendation from varsoc confirms the lag-12 is the optimal selection for the model, hence, VAR (12) model will be built. Table 2 VAR model identification Lag LL LR df p FPE AIC HQIC SBIC 0 1584.38 1.0e- -25.49* - -25.399* 16* 25.4531* 1 1588.99 9.2108 16 0.904 1.2e-16 - -25.1214 -24.8513 25.3062 2 1597.73 17.495 16 0.354 1.4e-16 - -24.8566 -24.3705 25.1893 3 1605.79 16.106 16 0.446 1.5e-16 - -24.5806 -23.8784 25.0611 4 1622.02 32.462 16 0.009 1.5e-16 - -24.4365 -23.5182 25.0648 5 1629.62 15.208 16 0.509 1.8e-16 - -24.1533 -23.0189 24.9294 6 1637.49 15.745 16 0.471 2.0e-16 - -23.8744 -22.5239 24.7983 7 1651.21 27.429 16 0.037 2.1e-16 - -23.6897 -22.1231 24.7614 8 1664.77 27.121 16 0.040 2.3e-16 - -23.5025 -21.7199 24.7221 9 1679.76 29.973 16 0.018 2.4e-16 - -23.3383 -21.3396 24.7057 10 1686.52 13.531 16 0.634 2.8e-16 - -23.0416 -20.8268 24.5568 11 1698.68 24.31 16 0.083 3.1e-16 - -22.8317 -20.4008 24.4948 12 1712.43 27.513* 16 0.036 3.4e-16 - -22.6477 -20.0007 24.4586 237 ----- BCP Business & Management **AFTEM 2022** Volume **32** (2022) ### The solutions of characteristic polynomial of VAR (12) model have been illustrated in Figure 2. The visual results verify the roots of matrix inside the unit circle and thus manifesting the stability of VAR system. Figure 2 VAR stability Setting return on gold, silver, and bitcoin as response variables and return on exchange rate as impulse variables, the results of IRF are shown in Figure 3. Obviously, with one unit change in exchange rate, the other variables all have visible oscillation to respond to the impulse, and maximum amplitude for bitcoin, gold, and silver are 0.6%, 0.1%, and 0.2% respectively. That shall be a reasonable result, whereby gold and silver are usually perceived as hedge assets and have resistance against shocks. On the contrary, as a speculative asset, bitcoin is more likely affected by a positive or negative shock, consistent with the research of Geuder et al. and Glaster et al. [8, 15]. The impact of impulse decays around 20 lags and vanishes gradually, fitting the basic feature of time-series clustering sequence. 
Figure 3 Impulse and response 238 ----- BCP Business & Management **AFTEM 2022** Volume **32** (2022) ### **3.2 ARMA-GARCHX estimation results ** Before establishing ARMA-GARCHX model, it is essential to select the order for AR model and MA model. Calculating the autocorrelative function and partial auto-correlative function of returns on gold, silver, and bitcoin, the results are visualized in Figure 4, 5 lagged, 19 lagged and 35 lagged PACFs and 5 lagged ACF of bitcoin are significant at 5% level and reject to be white noisy; however, all the lagged PACFs and ACFs of gold are insignificant and cannot reject white noisy hypothesis; for silver, only 19 lagged PACF is significant, while all the ACFs of it are insignificant as well. Consequently, the possible ARMA models for bitcoin, silver, and gold are ARMA (5,5), AR (19), and ARMA (0,0), and more ARMA (0,0) is also a kind of white noisy. Noticeably, with empirical test, the AR (19) model is not effective to adapt the sequence of silver and the program fails to estimate specific coefficients of this model, so the sequence of return on silver also could be considered as a white noisy. For GARCH model, there is still no theoretical method to select the order of model appropriately, but according to the experience, GARCH (1,1) model is usually available and efficient, hence, anticipating the GARCH (1,1) model is reasonable to describe the volatility of the assets. Figure 4 PACF and ACF 239 ----- BCP Business & Management **AFTEM 2022** Volume **32** (2022) ### The results of ARMA-GARCHX model are exhibited in Table 3. Since at least one coefficient of ARCH and GARCH of each item are significant at 10% level, the effectiveness of ARMA-GARCHX model is verified. Surprisingly, the related coefficients of exchange rate for gold, silver and bitcoin are 155.18, 206.34, and 60.92, all significant at 5% level. This finding has not just meaningful in statistics, but more economic, as it demonstrates the gold, silver and bitcoin are quite sensitive to the value of dollar, in the environment with aggressive monetary policies. Table 3 ARMA - GARCH estimation results, variance equation AG BTC Variables [GOLD ] Coef. Std. err Coef. Std. err Coef. Std. err Exchange 155.1860 [**] 78.8116 206.3454 [***] 55.4349 60.9215 [***] 20.8007 rate ARCH (- 0.1331 [*] 0.0803 0.1536 [***] 0.0517 0.0450 0.0560 1) GARCH 0.0047 0.2207 0.7392 [***] 0.1513 0.5707 [***] 0.1510 (-1) Constant - 9.6370 [***] 0.3473 - 9.5033 0.3680 - 5.8147 [***] 0.1786 ## **4. Conclusion ** ### By employing VAR model and ARMA-GARCHX model, the paper successfully verifies that gold, silver, and bitcoin will be volatile fiercely, corresponding the change of exchange rate from two channels. First, the result from VAR model and impulse response function suggests that the change of exchange rate has a constant long-term influence on expectations of return on silver and gold, and bitcoin; then, via ARMA-GARCHX model, the change of exchange rate will exacerbate the volatility of return on gold & silver and bitcoin. The coefficients of exchange rate for gold, silver, and bitcoin are 155.18, 206.34, and 60.92, all significant at 5% level. This fact might warn that the potential risk of the precious metal market and cryptocurrency market, which relate to conventional currency market, could raise magnificently when the monetary environment becomes complex and extreme. Exposed to the shock, investor shall apply more prudent investment strategies to avoid volatility and uncertainty. 
## **References ** [1] Beckers S, Soenen L. Gold: More attractive to non‐US than to US investors? Journal of Business Finance & Accounting, 1984, 11(1): 107-112. [2] Sjaastad L A, Scacciavillani F. The price of gold and the exchange rate. Journal of International Money and Finance, 1996, 15(6):879-897. [3] Pukthuanthong K, Roll R. Gold and the Dollar (and the Euro, Pound, and Yen. Journal of Banking & Finance, 2011, 35(8):2070-2083. [4] Tully E, Lucey B M. A power GARCH examination of the gold market. Research in International Business and Finance, 2007, 21. [5] Hammoudeh S, Yuan Y. Metal volatility in presence of oil and interest rate shocks. Energy Economics, 2008, 30(2):606-620. [6] Hashim S L, Ramlan H, Razali N H, et al. Macroeconomic variables affecting the volatility of gold price. Journal of Global Business and Social Entrepreneurship (GBSE), 2017, 3(5): 97-106. [7] Dyhrberg, Haubo A. Bitcoin, gold and the dollar – A GARCH volatility analysis. Finance Research Letters, 2015, 16:85-92. [8] Geuder J, Kinateder H, Wagner N F. Cryptocurrencies as financial bubbles: The case of Bitcoin. Finance Research Letters, 2019, 31:179-184. 240 ----- BCP Business & Management **AFTEM 2022** Volume **32** (2022) [9] Baur D G, Dimpfl T, Kuck K. Bitcoin, gold and the US dollar – A replication and extension. Finance Research Letters, 2018, 25. [10] Corbet S, Meegan A, Larkin C, et al. Exploring the dynamic relationships between cryptocurrencies and other financial assets. Economics Letters, 2017, 165:28-34. [11] Ji H K. Tail behavior of Bitcoin, the dollar, gold and the stock market index. Journal of International Financial Markets Institutions and Money, 2020, 67:101202. [12] Fuller D W A. Distribution of the Estimators for Autoregressive Time Series with a Unit Root. Journal of the American Statal Association, 1979, 79(366):355-367. [13] Engle R E. Autoregressive Conditional Heteroskedasticity with Estimates of the Variance of United Kingdom Inflation. Econometrica, 1982, 50(50). [14] Bollerslev T. Generalized autoregressive conditional heteroskedasticity. Eeri Research Paper, 1986, 31(3):307-327. [15] Glaser F, Zimmermann K, Haferkorn M, et al. Bitcoin-asset or currency? revealing users' hidden intentions[J]. Revealing Users' Hidden Intentions (April 15, 2014). ECIS, 2014. 241 -----
{ "disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.54691/bcpbm.v32i.2894?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.54691/bcpbm.v32i.2894, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "CCBY", "status": "HYBRID", "url": "https://bcpublication.org/index.php/BM/article/download/2894/2859" }
2,022
[]
true
2022-11-22T00:00:00
[ { "paperId": "9c1637b5adcd3e69731b48e0dcb329d9d4a83687", "title": "Tail behavior of Bitcoin, the dollar, gold and the stock market index" }, { "paperId": "d23fafcfa0f448f063397a18e53af9e16b35c8ae", "title": "Cryptocurrencies as financial bubbles: The case of Bitcoin" }, { "paperId": "3b539f3d38dab6a25aea93f918ea1d0d8dac6c05", "title": "Exploring the Dynamic Relationships between Cryptocurrencies and Other Financial Assets" }, { "paperId": "6eb7bb60ded45918cc3e3821c58b277cb53b7009", "title": "Bitcoin, gold and the US dollar – A replication and extension" }, { "paperId": "268f2253c50dd1a897d2d5a767a8c1f65aab80fd", "title": "Bitcoin, gold and the dollar – A GARCH volatility analysis" }, { "paperId": "3c7d998b88bf48c88cf693625d2852706e7cb8e4", "title": "Bitcoin - Asset or Currency? Revealing Users' Hidden Intentions" }, { "paperId": "779b6486afb3c315fe8675c6dd8c60889f110192", "title": "Gold and the Dollar (and the Euro, Pound, and Yen)" }, { "paperId": "8009664766bce3ede9cf44ff5e7cca1c60ab3c2e", "title": "Metal volatility in presence of oil and interest rate shocks" }, { "paperId": "f694fa64d5bd760b044cc664e0f377c9b435756a", "title": "A power GARCH examination of the gold market" }, { "paperId": "86c50fb4f22e46a55c6aa329dd356b981b12745e", "title": "The price of gold and the exchange rate" }, { "paperId": "584c7954ebb89d6155fa50e5bcf44098fb881faa", "title": "Generalized autoregressive conditional heteroskedasticity" }, { "paperId": "f3beebc8ecbcbba3eeefa5bbd9c9191c92b7edfd", "title": "GOLD: MORE ATTRACTIVE TO NON-U.S. THAN TO U.S. INVESTORS?" }, { "paperId": "2ee6cb87fc81ecd78d161c4a92c9dfce00c8961c", "title": "Autoregressive conditional heteroscedasticity with estimates of the variance of United Kingdom inflation" }, { "paperId": "5cbbb5deb4d92dc0504fb7f2af0f6fe7da355d98", "title": "Distribution of the Estimators for Autoregressive Time Series with a Unit Root" }, { "paperId": null, "title": "Macroeconomic variables affecting the volatility of gold price" } ]
8,456
en
[ { "category": "Economics", "source": "external" }, { "category": "Business", "source": "s2-fos-model" }, { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Economics", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/023383352dccd15dc24113f894f33efd983da33c
[ "Economics" ]
0.909747
Empirical Research on the Fama-French Three-Factor Model and a Sentiment-Related Four-Factor Model in the Chinese Blockchain Industry
023383352dccd15dc24113f894f33efd983da33c
Sustainability
[ { "authorId": "134903717", "name": "Ziyang Ji" }, { "authorId": "2113941384", "name": "Victor I. Chang" }, { "authorId": "121319087", "name": "H. Lan" }, { "authorId": "2126737803", "name": "Ching-Hsien Robert Hsu" }, { "authorId": "144586410", "name": "Raul Valverde" } ]
{ "alternate_issns": null, "alternate_names": null, "alternate_urls": [ "http://mdpi.com/journal/sustainability", "http://nbn-resolving.de/urn/resolver.pl?urn=urn:nbn:ch:bel-172127" ], "id": "8775599f-4f9a-45f0-900e-7f4de68e6843", "issn": "2071-1050", "name": "Sustainability", "type": "journal", "url": "http://www.e-helvetica.nb.admin.ch/directAccess?callnumber=bel-172127" }
As one of the most significant components of financial technology (FinTech), blockchain technology arouses the interest of numerous investors in China, and the number of companies engaged in this field is rising rapidly. The emotion of investors has an effect on stock returns, which is a hot topic in behavioral finance. Blockchain is an essential part of FinTech, and with the fast development of this technology, investors' sentiment varies as well. The online information that directly reflects investors' mood can be mined and quantified to construct a sentiment index. For a better understanding of how well certain factors explain the returns of stocks related to blockchain companies in the Chinese stock market, the Fama-French three-factor model (FFTFM) is introduced in this paper. Furthermore, sentiment could be a new independent variable to enhance the explanatory power of the FFTFM. A comparison between those two models reveals that the sentiment factor raises the explanatory power. The results also indicate that the Chinese blockchain industry does not exhibit the size effect or the book-to-market effect.
## sustainability _Article_

# Empirical Research on the Fama-French Three-Factor Model and a Sentiment-Related Four-Factor Model in the Chinese Blockchain Industry

**Ziyang Ji** **[1]**, **Victor Chang** **[2]**, **Hao Lan** **[1]**, **Ching-Hsien Robert Hsu** **[3,4,5,]*** and **Raul Valverde** **[6]**

1 International Business School Suzhou, Xi'an Jiaotong-Liverpool University, Suzhou 215123, China; ji.ziyang@outlook.com (Z.J.); Hao.Lan@xjtlu.edu.cn (H.L.)
2 School of Computing, Engineering and Digital Technologies, Teesside University, Middlesbrough TS1 3BX, UK; V.Chang@tees.ac.uk
3 Department of Computer Science and Information Engineering, Asia University, Taichung 400-439, Taiwan
4 Department of Medical Research, China Medical University, Taichung 400-439, Taiwan
5 School of Mathematics and Big Data, Foshan University, Foshan 528000, China
6 John Molson School of Business, Concordia University, Montreal, QC G1X 3X4, Canada; raul.valverde@concordia.ca
***** Correspondence: chh@cs.ccu.edu.tw

Received: 2 April 2020; Accepted: 11 June 2020; Published: 24 June 2020

**Abstract:** As one of the most significant components of financial technology (FinTech), blockchain technology arouses the interest of numerous investors in China, and the number of companies engaged in this field is rising rapidly. The emotion of investors has an effect on stock returns, which is a hot topic in behavioral finance. Blockchain is an essential part of FinTech, and with the fast development of this technology, investors' sentiment varies as well. The online information that directly reflects investors' mood can be mined and quantified to construct a sentiment index. For a better understanding of how well certain factors explain the returns of stocks related to blockchain companies in the Chinese stock market, the Fama-French three-factor model (FFTFM) is introduced in this paper. Furthermore, sentiment could be a new independent variable to enhance the explanatory power of the FFTFM. A comparison between those two models reveals that the sentiment factor raises the explanatory power. The results also indicate that the Chinese blockchain industry does not exhibit the size effect or the book-to-market effect.

**Keywords:** financial technology (FinTech); blockchain; Fama-French three-factor model; sentiment index

**1. Introduction**

Financial technology (FinTech) consists of several technologies, such as blockchain, cloud computing, big data, and machine learning. Blockchain is an advanced technology extracted from bitcoin, which was first promoted by Nakamoto [1]. As one of the most innovative and important components of FinTech, it can now tackle challenges such as digital currency, asset securitization, cross-border payment and settlement, and insurance management. As part of FinTech, blockchain has produced a series of extremely promising applications because of its characteristics, such as decentralization, immutability, and anonymity. Blockchain can not only play a role in FinTech, but can also be applied to diverse industries, such as the supply chain, intellectual property, real estate, and the Internet of Things (IoT). Blockchain technology is highly valued in China. Even the People's Bank of China (PBOC) began planning to issue a CBDC (Central Bank Digital Currency) based on blockchain technology, and the overall design has been basically completed [2].
Mu et al. claim that the People's Bank of China owns the most blockchain patents among central banks in the world [3]. Due to the innovative nature of this technology and the high level of interest, the number of companies in this field is also increasing. It is necessary to assess the value of these firms to understand this industry better. This paper makes use of the FFTFM (Fama-French Three-Factor Model) to analyze the stocks of Chinese blockchain firms and to detect the existence of the size effect and the book-to-market ratio effect (BM effect) in this field. Capital asset pricing has been a popular topic attracting numerous researchers for a long time. Markowitz first proposed portfolio theory to balance risk and return [4]. Sharpe, Lintner, and Mossin built the capital asset pricing model in the 1960s, and this model considers the market return as the unique variable explaining returns. Fama and French proposed the FFTFM, showing that adding a size factor and a book-to-market ratio factor to the CAPM (Capital Asset Pricing Model) enhances the explanatory power [5]. After the model was released, Chinese researchers began to utilize the FFTFM to analyze Chinese stock market performance and found that it obtains better results than the CAPM [5–8]. Some studies pay attention to particular stock markets, for instance, stocks belonging to the A-share market and the Growth Enterprise Market [9,10]. Some Chinese researchers focus on a particular industry via the FFTFM, such as the real estate industry, the electric power industry, the steel industry, and the bank industry [11–14]. It can be noticed that many studies emphasize traditional industries [12,14], whereas the blockchain industry is an innovation, and research implementing the FFTFM in this field is lacking. Blockchain holds a noticeable position, especially when it comes to concepts like FinTech. There is a saying that "one day in the blockchain industry, one year in real life", which reveals the extremely rapid changes in this field. Blockchain technology was first applied in the financial field. Since it challenges centralization and simplifies a series of transaction processes, it is recognized as a particularly useful tool of FinTech, which arouses a lot of interest. The rapid development also has an impact on investors' expectations of and sentiment toward blockchain companies, including but not limited to the firms that use blockchain as FinTech, and the relationship between emotion and stock returns is an indispensable topic of behavioral finance. Behavioral finance researchers study the impact of capital market participants' psychological and behavioral characteristics on capital markets, based on the assumptions of limited arbitrage opportunities and bounded rationality. "Emotion" in psychology is the expression of external attitudes generated by individual cognitive processes. Investor sentiment in behavioral finance is caused by investors' limited rationality and can be interpreted as investors' expectation bias, subjective preferences, investment beliefs, and speculative needs. When investor sentiment affects enough investment demand, it will cause the stock price to deviate from its value. According to empirical studies, investor sentiment has an essential impact on financial behaviors such as stock price and return fluctuations, stock market anomalies, and corporate investment decisions and earnings management [15,16].
Liu and Zhang summarize that the Chinese stock market is mainly composed of individual investors with relatively weak investment skills, keen subjective awareness, and low risk-perception ability [17]. Investors are more inclined to pursue short-term capital gains and are keen on short-term investment projects to gain speculative profits. This determines that investor sentiment has a more powerful influence in China than in mature capital markets. Against this background, this paper investigates the influence of Internet information on stock performance by mining and quantifying Internet public opinion information. The sentiment factor is then added into the traditional FFTFM for research. The data are collected from the China Stock Market Accounting Research platform and the China Research Data Services platform. The sentiment factor results from the Guba public comments on each stock by online users. Stocks related to blockchain technology, including but not limited to the listed firms that treat blockchain as FinTech, are grouped to construct portfolios with different characteristics for the research. Comparing the results of the FFTFM and the improved four-factor model shows that the sentiment factor provides a better explanation of the returns of Chinese blockchain stocks. It is also noticed that the size effect and BM effect cannot be found in this industry, and the portfolios constructed from big-size companies and low book-to-market-ratio companies gain the best returns. Due to the creativity and the bright prospects of blockchain technology, this paper focused on Chinese listed firms related to blockchain technology and demonstrated their valuation via the FFTFM, which is supposed to describe the risk-yield characteristics of the blockchain industry in China. Many scholars have studied China's asset pricing based on the FFTFM. This paper also drew on the FFTFM to empirically research the Chinese A-share market, to try to explain the stock performance of Chinese A-share blockchain firms and verify whether the industry has scale effects, value premium effects, and profitability effects. We concluded that the BM effect does not exist in the Chinese blockchain industry. We also built a sentiment factor using data mining to improve the traditional FFTFM in order to present better explanatory power for this industry.

**2. Literature Review**

_2.1. Fama-French Model Research_

Fama and French found that the beta value of the CAPM could not explain the differences in excess returns, so they proposed a three-factor model that uses three factors, namely the market factor, the scale factor, and the value factor, for better explanatory power of excess returns [4]. In order to explore whether the model can be applied to the stock markets of other countries, Fama and French studied stock returns and pricing factors in different countries and claimed that the FFTFM is better than the CAPM [18]. Carhart believed that the FFTFM could not explain the differences in excess returns well and added a new variable, the momentum factor, to construct the Fama-French four-factor model [19]. Xu and Xiong used A-share listed companies as samples from 2004 to 2005 and found that the four-factor model's explanatory ability is improved but still cannot fully explain fluctuations in stock yields [20].
Xue and Guan conducted empirical research through a four-factor model and found that only a few funds can perform slightly better than the whole market index [21]. This paper aimed to construct a model based on the FFTFM with a sentiment factor and to obtain more explanatory power, a purpose similar to the Carhart model's. Fama and French analyzed profitability and investment information and added them to the FFTFM to construct a new Fama-French five-factor model (FFFFM) [22]. However, the FFFFM is still an imperfect model. Fama and French found that the FFFFM mainly has two defects: the first is that the model lacks the ability to describe the average returns of small stocks, and the second is that the HML (High Minus Low) factor is a redundant factor. Racicot et al. studied the FFFFM with traditional illiquidity measures and found weaknesses in this model, especially for the endogenous illiquidity measures [23]. The robust instrumental variables (RIV) algorithm conducted by GMM (Generalized Method of Moments) was taken into consideration for correction. Racicot et al. transferred the FFFFM into a dynamic specification and used Kalman filtering and a recursive robust instrumental variables (IV) algorithm to estimate alpha and beta [24]. They noticed that illiquidity is a significant factor in the Kalman filter approach and that the market risk premium is the only effective factor in a dynamic context based on the GMM approach. Sembiring applied the Fama-French model in the Indonesian securities market under market overreaction conditions and found that the market, size, and value factors adequately explain portfolio returns [25]. Cox and Britten utilized the FFFFM in the Johannesburg securities market and concluded that the size and value factors are significant, but the market factor presents a negative relationship [26]. Bangash, Khan, and Jabeen, through empirical research on the Pakistani equity market, disagreed that the size pattern performs well [27]. In China, scholars not only use the FFTFM to investigate the Chinese stock market but also replace indicators based on the actual situation of the Chinese stock market. Tian, Wang, and Zhang compared the FFTFM between the securities markets of China and the United States [28]. They concluded that the FFTFM could explain the excess returns of the two countries' investment portfolios, whereas its applicability in the Chinese and American stock markets differs. Chinese market risks are more significant than other factors, and SMB (Small Minus Big) has explanatory power for small-cap stocks. Yang and Fan indicated that the FFTFM is applicable to the stock markets of both developed and developing countries [29]. Several researchers apply the FFTFM in the Chinese securities market to test the effectiveness of the whole market [9,30,31]. Liu, Zhu, and Li argued that the FFTFM is suitable for China's Growth Enterprise Market [10]. Yin added a sentiment factor into the Fama-French model and found that the prices of small-cap stocks, stocks with high P/E (Price/Earnings) ratios, and high-priced stocks are more sensitive to investor sentiment [32]. Hu, Tu, and Zhu believe that the Fama-French five-factor stock pricing model is suitable for use in China's stock market [33]. Yuan and Cong found that the FFTFM is suitable for the listed companies in the Chinese HA-DA-QI (Harbin-Daqing-Qiqihar) region [34].
For stock returns in particular Chinese industries, some researchers use the FFTFM to explain the returns. You found a market factor effect and a BM effect in the real estate industry, and Cheng and Fang conducted empirical research on stock returns in the auto industry [11,35]. For listed energy companies, Li and Zhao pointed out that the FFTFM applies to the prediction of market returns of China's listed power companies and that the FFTFM could also be used in the Chinese steel industry [12,13]. Gou, Wang, and Zhu drew the same conclusion that small-cap stocks exhibit the size effect, and stocks with high book-to-market ratios exhibit the BM effect [14,36].

_2.2. Conventional Investor Sentiment Research_

Behavioral finance has aroused numerous academics' interest, and they hope to use investor sentiment to find principles for better decision making. Baur, Quintero, and Stevens used stock market data from 1986 to 1988 to explore the linkage between investor sentiment and the 1987 securities market crash [37]. Mehra and Sah summarized three conditions under which investor sentiment affects stock prices in the arbitrage market: firstly, there is a systematic fluctuation of investor sentiment; secondly, investors make decisions based on emotions; thirdly, investors ignore the subjective influence brought by emotions [38]. Brown and Cliff collected data and compiled an investor sentiment index [39]. Their study found that lagged market returns have a significant impact on investor sentiment, but, in turn, investor sentiment is not efficient in predicting market returns. Cheng and Liu used a blue-chip index to reflect the bullish situation and found that the stock market's mid-term sentiment was more affected than its short-term sentiment [40]. Wang, Zhao, and Fang claimed that investor sentiment drives share prices in the early stage of IPOs (Initial Public Offerings), which causes listed companies to exploit investor sentiment to maximize profitability [41]. Wen et al. used Shanghai securities market data to construct an investor sentiment index to study the characteristics of investor behavior under different emotions [42].

_2.3. Investor Sentiment Research Based on Internet Information_

Currently, how to use social network information to predict economic behavior has gradually become a research hotspot in various fields. Tetlock used a Wall Street Journal column as the basis for investor sentiment analysis and analyzed the relationship between investor sentiment and stock market returns [43]. He believes that large fluctuations in investor sentiment cause an increase in trading volume, and pessimistic forecasts lead to falls in stock prices. Chen et al. indicated that online information helps investors make better financial decisions [44]. Meng, Meng, and Hu constructed an investor sentiment index using factor analysis, based on data from the CSSCI (Chinese Social Sciences Citation Index), Sina Weibo text, and Baidu's keyword recommendation system [45]. Luo, Wang, and Fang considered an investor sentiment index when establishing the CAPM, with the index constructed from sentiment analysis of stock forum posts [46]. A relationship between investor sentiment and the stock index was found based on an ordinary linear regression model. Xu used text analysis and machine learning to construct a new investor sentiment indicator system based on Sina stock evaluation information and a long-term survey [47].
**3. Data**

_3.1. Stock and Financial Data_

This paper selected listed blockchain companies in the Shanghai and Shenzhen stock markets. A Special Treatment (ST) company cannot be considered a regular listed company because of its business difficulties, so only non-ST companies' data were included. Because the data of listed blockchain companies included in the Wind blockchain index have been complete since 2016, and to ensure the authenticity and comprehensiveness of the samples, monthly stock data from June 2016 to June 2019 were collected from the China Stock Market and Accounting Research database and the Chinese Research Data Services platform (CNRDS). After screening, there were 50 sample companies. A large sample interval with sufficient and recent data can provide practical and meaningful results.

These stocks included, but were not limited to, some listed companies that use blockchain as a FinTech. Whether a company is involved in the blockchain industry was determined from the components of Wind's blockchain industry index: if a firm was a component of the index, it was considered for the research. There is an overlap between the stocks belonging to the blockchain industry index and the stocks belonging to the FinTech index. The traditional Fama-French model reclassifies the portfolios at the end of June each year, but to better reflect performance, this paper regrouped the portfolios monthly. The reason is that the blockchain industry in China is an emerging industry, and more and more companies are embracing this innovative technology, including, but not limited to, FinTech firms. In 2016, there were merely eight listed firms related to blockchain technology according to the Wind database, whereas there were more than 150 firms in 2019 [48]. More data details are given in Sections 3.1.1, 3.1.2 and 3.1.4.

3.1.1. Stock Returns

Stock returns were measured as monthly returns after cash dividend reinvestment. Transaction costs were not considered. Stock returns are the basic element for constructing the SMB and HML factors [49].

3.1.2. Market Returns

Monthly market returns were taken from a market index comprehensively calculated from the returns of the China A-share market, the B-share market, and China's Growth Enterprise Market. This is due to the complexity of the blockchain industry in China: the listed companies have different characteristics and are distributed across different stock markets, and some of them are FinTech firms. This comprehensive index can therefore more objectively reflect the overall price trend of the market and provide investors with more valuable indicators. It reflects the situation of the whole market, which is also essential data for calculating beta [49].

3.1.3. Risk-Free Rate

In practice, there is no absolutely risk-free interest rate. Researchers choose financial products with better liquidity and less default risk to represent the risk-free interest rate, such as the national debt rate and the bank savings deposit rate.
The Chinese banking system is dominated by state-owned banks with little default risk and no market segmentation issues, and any individual or corporate institution can deposit in a bank [50]. This paper selected the one-year savings deposit interest rate and converted it into a monthly return rate using the continuous compounding method.

3.1.4. Size

The size of a listed company is determined by its market capitalization, obtained by multiplying the stock price by the number of shares outstanding. Under China's unique conditions, stocks are divided into tradable shares and nontradable shares. Due to the special historical background of the restructuring of state-owned enterprises, the existence of nontradable shares is a major feature of China's securities market, and nontradable shares cannot be circulated on the secondary market [51]. Therefore, only tradable shares were used, and market capitalization was calculated from the tradable shares as well.

3.1.5. Value

The value of a company is measured as the ratio of book equity to the market value of equity. The financial statements of listed companies do not directly present this number, but it can be obtained from the price-to-book value. This paper took the reciprocal of the PB (Price-to-Book) value on the last trading day of each month.

3.1.6. Sentiment Data

There are various resources for constructing investor sentiment factors. Zhang and Liu summarized that simple sentiment indices mainly adopt a direct survey method or a data mining method, while compound indicators are constructed by selecting multiple single objective indicators, or a combination of single objective indicators and subjective indicators [52]. This paper used the data mining method, and Guba comments were used to represent investors' sentiment. Unlike news reports from newspapers or traditional news websites, Guba is a free medium, and the content of its posts is mainly the expression of investors' subjective wishes, which are relatively random and irregular. For example, Guba comments may contain a few simple words, or some irrelevant expressions and meaningless text. These noises affect the accuracy of the sentiment judgment on posts. Comments on blockchain companies, including the firms that treat this technology as FinTech, were collected for further analysis.

There are several platforms for analyzing text and extracting emotional tendencies from content, for example, Cloud Natural Language from Google, the Baidu AI (Artificial Intelligence) platform, and the Yuyi data platform. In order to keep the data consistent, the Guba media data analysis database from the Chinese Research Data Services platform was used in this paper. According to the Guba database description, the platform uses a supervised learning model to judge each post's sentiment. The application of supervised learning to post classification in the database includes the following steps:

1. Define the categories (positive, negative, and neutral) in advance, manually label the content of the posts, and obtain the positive or negative tendency. A score of 1 is positive, −1 is negative, and 0 is neutral.
2. Automatically obtain data from a dataset with category information. This part of the data is called "training data".
3. Introduce a supervised learning algorithm, the support vector machine, to learn the classification model on the training dataset.
4. Use the classification model to predict the categories of a test dataset automatically.

It is noticeable that the Guba comments data from the Chinese Research Data Services platform only cover 2008 to 2018, so the 2019 comments data were collected from the China Stock Market and Accounting Research database.

**4. Methodology**

_4.1. Build Portfolios According to Size and Value_

The stocks are grouped monthly along two dimensions, size and value.

4.1.1. Size

There are various ways to determine which group a company should be assigned to. Fama and French divided the stocks from three American stock exchanges into a small or big group based on the median of the size factor [5]. This paper used the FFTFM division method. After sorting the tradable market capitalization in ascending order, the stocks were evenly divided into two groups: small (S) and big (B).

4.1.2. Value

Fama and French sorted the book-to-market ratio in ascending order and divided the sample data into a low (L) group, a medium (M) group, and a high (H) group, according to the proportions of 30%, 40%, and 30%, which are called the growth group, medium group, and value group [5]. Since the number of blockchain companies changes rapidly, including but not limited to the firms that take blockchain as FinTech, the number of stocks in a certain group might be zero under the traditional Fama-French division method. To deal with this issue, the stocks were evenly classified into two groups, a high group (H) and a low group (L). In the future, there will be more companies using blockchain technology and more companies embracing FinTech. By then, this weakness can be compensated for.

4.1.3. Portfolios

After the above two classifications, each stock had two indicators: size and value. The stocks are cross-combined to build four portfolios based on those two dimensions: S/L, S/H, B/L, and B/H. The research on the Fama-French pricing model was based on the data of these four portfolios. The details of these four portfolios are:

1. Portfolio S/L: stocks that belong to both the small-size group and the low book-to-market ratio group.
2. Portfolio S/H: stocks that belong to both the small-size group and the high book-to-market ratio group.
3. Portfolio B/L: stocks that belong to both the big-size group and the low book-to-market ratio group.
4. Portfolio B/H: stocks that belong to both the big-size group and the high book-to-market ratio group.

4.1.4. Construction of Independent Variables

Ri

Ri is the return of the portfolio, calculated using the ratio of each stock's tradable market value to the sum of the tradable market values in the portfolio as weights.

Market Risk Premium Factor (Rm-Rf)

The market risk premium is obtained by subtracting the risk-free interest rate from the market rate of return. As mentioned above, the market return (Rm) is a monthly return of the A-share, B-share, and China Growth Enterprise markets. It is a comprehensive monthly return after cash dividend reinvestment, obtained using a market-capitalization-weighted average. The risk-free rate is the coupon rate of one-year bank savings deposits, with the annual risk-free interest rate converted into a monthly one.
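As a concrete illustration of the monthly double sort and the value-weighted portfolio return Ri described above, the following is a minimal sketch in Python with pandas. The frame `df` and its column names (`month`, `stock`, `ret`, `mktcap`, `bm`) are hypothetical names introduced for illustration only, not part of the original study's code.

```python
import numpy as np
import pandas as pd

# Hypothetical panel: one row per stock-month with columns
# 'month', 'stock', 'ret' (monthly return after dividend reinvestment),
# 'mktcap' (tradable-share market capitalization) and 'bm' (book-to-market).

def label_portfolios(month_df: pd.DataFrame) -> pd.DataFrame:
    out = month_df.copy()
    # Median split on size: Small (S) vs. Big (B).
    out["size_grp"] = np.where(out["mktcap"] <= out["mktcap"].median(), "S", "B")
    # Median split on book-to-market: Low (L) vs. High (H).
    out["bm_grp"] = np.where(out["bm"] <= out["bm"].median(), "L", "H")
    out["portfolio"] = out["size_grp"] + "/" + out["bm_grp"]
    return out

labeled = df.groupby("month", group_keys=False).apply(label_portfolios)

# Value-weighted monthly return of each portfolio: each stock is weighted by
# its tradable market value relative to the portfolio's total tradable value.
port_ret = (
    labeled.groupby(["month", "portfolio"])
    .apply(lambda g: (g["ret"] * g["mktcap"]).sum() / g["mktcap"].sum())
    .unstack()  # columns: 'S/L', 'S/H', 'B/L', 'B/H'
)
```

Regrouping is applied inside every month's slice, which matches the paper's monthly (rather than annual) reclassification.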
SMB

The SMB factor is obtained by comparing the average portfolio return of small-size companies with the average portfolio return of big-size companies. This factor measures the difference in returns due to the size of the listed companies. The construction method is to sort all companies by market capitalization from low to high, select the first 50% of the stocks to form a small market value group and the last 50% to form a large market value group, calculate the return of each group, and then take the difference between the two return rates. Repeating this process every month yields the SMB factor sequence. The specific calculation formula is as follows:

$$SMB_t = \frac{(S/L) + (S/H)}{2} - \frac{(B/L) + (B/H)}{2}$$

HML

The HML factor is obtained by comparing the monthly return of the high book-to-market ratio portfolios with the monthly return of the low book-to-market ratio portfolios. This factor measures the difference in returns due to different book-to-market ratios of listed companies. The construction method sorts all companies by book-to-market ratio from high to low, selects the top 50% of the stocks to form a high book-to-market group and the bottom 50% to form a low book-to-market group, and then calculates the market-weighted return of the two groups. Repeating this process every month yields the HML factor sequence. The specific calculation formula is as follows:

$$HML_t = \frac{(S/H) + (B/H)}{2} - \frac{(S/L) + (B/L)}{2}$$

Sentiment Factor

Antweiler and Frank introduced a method to measure the effect of investors' sentiment [53]. Bu et al. proposed a measure of investor sentiment that integrates the bullish and bearish expectations of investors based on Guba comments and the naive Bayesian method [54]. This paper referred to this method to construct the model for analyzing negative or positive sentiment based on the Guba comment website. The formulation is:

$$Sent_t = \frac{M_t^{pos} - M_t^{neg}}{M_t^{pos} + M_t^{neg}}$$

Here, $M_t^c = \sum_{i \in D(t)} w_i x_i^c$ is the weighted count of comments with emotion $c$ during the period $D(t)$, where $c$ is positive, negative, or neutral. The indicator $x_i^c$ equals 1 if comment $i$ belongs to category $c$ and 0 otherwise. "pos" represents positive emotions, "neg" represents negative emotions, and "neu" represents neutral emotions. The sentiment index $Sent_t$ lies between −1 and 1 and indicates investor expectations. Every stock has a "Sent" value every month. According to the portfolio to which each stock belongs (S/L, S/H, B/L, B/H), the "Sent" value of each portfolio is obtained.

Fama-French Model

After collecting and processing the data above, the traditional Fama-French model can be presented:

$$E(R_i) - R_f = b_i[E(R_m) - R_f] + s_i E(SMB) + h_i E(HML)$$

The regression equation of the model is expressed as follows:

$$R_i - R_{ft} = \alpha_i + \beta_i(R_{mt} - R_{ft}) + s_i SMB_t + h_i HML_t + \varepsilon_{it}$$

Based on this model and following the idea of the FFTFM, the sentiment index reflecting investor sentiment was constructed according to the above sentiment analysis method.
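Under the same assumptions as the previous sketch, the factor series can be computed directly from the portfolio returns and the labeled Guba posts. This is only a sketch: it simplifies the sentiment formula by using equal weights per comment ($w_i = 1$), and the frame names (`port_ret`, `posts`) are hypothetical, carried over from the sketch above.

```python
import pandas as pd

# SMB_t: average return of the small-size portfolios minus the big-size ones.
smb = (port_ret["S/L"] + port_ret["S/H"]) / 2 - (port_ret["B/L"] + port_ret["B/H"]) / 2

# HML_t: average return of the high book-to-market portfolios minus the low ones.
hml = (port_ret["S/H"] + port_ret["B/H"]) / 2 - (port_ret["S/L"] + port_ret["B/L"]) / 2

# Sent_t = (M_pos - M_neg) / (M_pos + M_neg), here with equal comment weights.
# `posts` is a hypothetical frame with 'month', 'stock' and 'label' in {1, -1, 0}.
def sent_index(g: pd.DataFrame) -> float:
    pos = (g["label"] == 1).sum()
    neg = (g["label"] == -1).sum()
    return (pos - neg) / (pos + neg) if (pos + neg) > 0 else 0.0

stock_sent = posts.groupby(["month", "stock"]).apply(sent_index)
# The stock-level scores are then averaged within each portfolio every month
# to obtain the portfolio-level sentiment series used in the regressions.
```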
Then, adding the sentiment factor to the model finally gives a four-factor model:

$$R_i - R_{ft} = \alpha_i + \beta_i(R_{mt} - R_{ft}) + s_i SMB_t + h_i HML_t + sent_i Sentiment_{it} + \varepsilon_{it}$$

$R_i - R_{ft}$ is the excess return on portfolio "i": the difference between the weighted return of portfolio "i" and the risk-free rate during the same period "t".

$R_{mt} - R_{ft}$ is the difference between the market return and the risk-free rate during the same period "t".

$SMB_t$ is the difference between the portfolio returns of the small companies and the portfolio returns of the big companies during the period "t".

$HML_t$ is the difference between the return of portfolios of high book-to-market ratio companies and the return of portfolios of low book-to-market ratio companies during the period "t".

$Sentiment_{it}$ is the sentiment score of portfolio "i" during the period "t".

**5. Analysis of the Model**

_5.1. Descriptive Analysis_

Based on the collected data and the indicators constructed before, we could perform a descriptive statistical analysis. The related data processing was conducted in Python. Table 1 shows the basic monthly returns of the four portfolios from 2016 to 2019, and it can be noticed that the portfolios with low book-to-market ratios had a positive average return. The portfolio with the lowest average rate of return was B/H, at −0.01115. Portfolio B/L had the highest average rate of return, 0.01505, and the lowest standard deviation, indicating the smallest fluctuation in performance. The S/L portfolio had the highest standard deviation, which means that the small-size companies with low book-to-market ratios showed the largest variation. Table 2 shows the correlations among the parameters and the portfolios' returns. The correlation coefficient between SMB and the market premium was 0.19, indicating that the two were positively correlated, while that between HML and the market premium was −0.1, showing a negative correlation.

**Table 1. Return of each portfolio's descriptive analysis.**

| Test Focus | R_SL | R_SH | R_BL | R_BH | Rm-Rf | SMB | HML |
|---|---|---|---|---|---|---|---|
| count | 37 | 37 | 37 | 37 | 37 | 37 | 37 |
| mean | 0.007166 | −0.01992 | 0.01505 | −0.01115 | −0.12047 | −0.00833 | −0.02664 |
| std | 0.124866 | 0.091218 | 0.108523 | 0.081383 | 0.044723 | 0.049832 | 0.05708 |
| min | −0.19904 | −0.14461 | −0.15265 | −0.12005 | −0.20996 | −0.11855 | −0.1435 |
| 25% | −0.06269 | −0.07561 | −0.07093 | −0.08007 | −0.14663 | −0.03881 | −0.06217 |
| 50% | −0.02073 | −0.04207 | 0.005983 | −0.01998 | −0.11772 | −0.00401 | −0.0235 |
| 75% | 0.035004 | 0.017523 | 0.081635 | 0.0212 | −0.09386 | 0.016986 | 0.013356 |
| max | 0.395968 | 0.278853 | 0.327864 | 0.245898 | 0.024608 | 0.101769 | 0.089053 |

**Table 2. Correlation among parameters and portfolios' return.**

| Test Variables | rm-rf | SMB | HML | R_SL | R_SH | R_BL | R_BH |
|---|---|---|---|---|---|---|---|
| rm-rf | 1 | 0.19 | −0.1 | 0.56 | 0.68 | 0.54 | 0.67 |
| SMB | 0.19 | 1 | −0.082 | 0.54 | 0.34 | −0.13 | 0.16 |
| HML | −0.1 | −0.082 | 1 | −0.59 | −0.12 | −0.59 | −0.16 |
| R_SL | 0.56 | 0.54 | −0.59 | 1 | 0.79 | 0.72 | 0.79 |
| R_SH | 0.68 | 0.34 | −0.12 | 0.79 | 1 | 0.75 | 0.92 |
| R_BL | 0.54 | −0.13 | −0.59 | 0.72 | 0.75 | 1 | 0.78 |
| R_BH | 0.67 | 0.16 | −0.16 | 0.79 | 0.92 | 0.78 | 1 |
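These summary statistics and correlations follow directly from the assembled monthly series. As a minimal sketch, assuming a hypothetical frame `factors` whose columns are the four portfolio returns and the three factors, pandas produces both tables in two calls:

```python
import pandas as pd

# `factors` is a hypothetical monthly DataFrame with columns
# ['R_SL', 'R_SH', 'R_BL', 'R_BH', 'Rm-Rf', 'SMB', 'HML'].
desc = factors.describe()   # count, mean, std, min, quartiles, max (Table 1)
corr = factors.corr()       # pairwise Pearson correlations (Table 2)
print(desc.round(6))
print(corr.round(2))
```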
_5.2. Market Risk Factor_

Generally, the trend of the market risk factor is the same as the trend of the average return of the four portfolios. Among them, the market risk factor better reflects the changing direction of the S/H and B/H portfolios. Changes in the other combinations can also be seen, but there are some discrepancies in specific periods. The portfolios S/L and B/L exceeded the market factor considerably during the period from February 2018 to April 2018. This shows that the market risk factor is one of the essential variables for explaining the difference in stock returns, but it is not enough to rely on the market risk factor alone to explain the changes. This also indirectly reflects the defects of the CAPM model, as shown in Figure 1.

**Figure 1. Tendencies of return rate of four portfolios and market risk factor.**

_5.3. Size Factor_

Figures 2 and 3 show the comparison of portfolios with different company sizes, given the same book-to-market ratio. The orange line presents the portfolios constructed from the big companies, while the blue line displays the ones built from the small companies. From June 2016 to February 2019, the trend of the monthly average return of the portfolio with large-size listed companies was consistent with the direction of the one constructed from small-size firms. After February 2019, the average monthly yield on stocks of small-size listed companies was higher than that of the portfolios of large-size listed companies. This may be because there were fewer companies engaged in the blockchain industry before 2019, and the size of the company could not be an essential factor affecting the portfolio yield. After 2019, there were more than 50 companies engaged in the blockchain industry, which more clearly reflected the difference in yields caused by size. This trend may be because blockchain is still a new technology, and the number of companies, including the firms that treat blockchain as FinTech, was relatively small in the early stage.
**Figure 2. The return of portfolio S/L (Small-and-Low) and B/L (Big-and-Low).**

**Figure 3. The return of portfolio S/H (Small-and-High) and B/H (Big-and-High).**

_5.4. Related Test_

ADF (Augmented Dickey–Fuller) Test

Generally, the first step when studying time series data is to perform a stationarity test. The Fama-French model is based on the returns of stocks, which form a time series dataset. In addition to visual inspection, the more commonly used statistical test method is the augmented Dickey–Fuller (ADF) test, an extended form of the Dickey–Fuller test. The ADF test is also known as the unit root test. If the test statistic obtained is less than the critical value at the 10%, 5%, or 1% level, then the null hypothesis can be rejected with 90%, 95%, or 99% certainty, accordingly. Since the difference between the FFTFM and the improved four-factor model is the addition of a new independent variable, "sentiment", the selected stocks and the classification method remain unchanged. Therefore, only the stationarity of the returns of each portfolio needs to be tested. The ADF test was conducted in Python using the "statsmodels" package, and the results are presented in Table 3. As shown in Table 3, the returns of all eight portfolios passed the ADF test. Their t-values were −5.4165, −5.0223, −4.3714, and −5.2175, respectively, and all p-values were equal to zero. As the null hypothesis was rejected, there was no unit root in any of the time series, and the stationary data could be taken into further research.

**Table 3. Results of the augmented Dickey–Fuller (ADF) test.**

| Portfolios | 1% Critical Value | 5% Critical Value | 10% Critical Value | t-Value | p-Value |
|---|---|---|---|---|---|
| S/L (S/L including sentiment) | −3.6267 | −2.9460 | −2.6117 | −5.4165 | 0.000 |
| S/H (S/H including sentiment) | −3.6327 | −2.9485 | −2.6130 | −5.0223 | 0.000 |
| B/L (B/L including sentiment) | −3.6392 | −2.9512 | −2.6144 | −4.3714 | 0.000 |
| B/H (B/H including sentiment) | −3.6327 | −2.9485 | −2.6130 | −5.2175 | 0.000 |
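As a minimal sketch of this step, assuming a hypothetical frame `excess_ret` with one column of excess returns per portfolio, the statsmodels `adfuller` function reports the test statistic, p-value, and critical values of the kind shown in Table 3:

```python
from statsmodels.tsa.stattools import adfuller

# `excess_ret` is a hypothetical DataFrame with one excess-return column
# per portfolio ('S/L', 'S/H', 'B/L', 'B/H').
for name, series in excess_ret.items():
    t_value, p_value, _, _, crit, _ = adfuller(series.dropna())
    print(f"{name}: t = {t_value:.4f}, p = {p_value:.4f}, "
          f"critical values (1%/5%/10%) = {crit['1%']:.4f} / "
          f"{crit['5%']:.4f} / {crit['10%']:.4f}")
```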
_5.5. Autocorrelation_

Autocorrelation refers to correlation among the random error terms, and it can harm the effectiveness of a multilinear regression model. So, whether using the traditional FFTFM or the four-factor model including the new sentiment parameter, it is necessary to test whether this situation exists. The first detection method was the standard Durbin–Watson (DW) test, and the results of the test are presented in Table 4. The range of the DW statistic is from 0 to 4: a value close to 0 indicates positive autocorrelation of the error terms, while a value close to 4 indicates negative autocorrelation. If the DW value falls between dL (the lower critical value of d) and dU (the upper critical value of d), it cannot be judged whether there is autocorrelation. Moreover, if the DW value lies between dU and 4 − dU, one can conclude with greater confidence that there is no autocorrelation. A table of DW values must be consulted to obtain the upper limit (dU) and lower limit (dL) under different situations in order to check for autocorrelation accurately.

According to Table 4, there was no autocorrelation in most portfolios. Still, the DW values of portfolios S/L and B/H in the traditional FFTFM could not confirm whether they passed the test, and the S/L portfolio in the improved four-factor model was likewise inconclusive. Therefore, the Breusch–Godfrey LM (Lagrange multiplier) test was considered, and the probabilities of chi2 (chi-square) were 0.0818 and 0.0847, respectively. The null hypothesis of the Breusch–Godfrey LM test is that there is no autocorrelation. The results of the Breusch–Godfrey LM test also appear in Table 4, and it can be seen that the probabilities of all portfolios imply that the null hypothesis is acceptable. There was no autocorrelation in any of the multilinear regression models.

**Table 4. Results of the autocorrelation test.**

| Test Variables and Focus | Durbin-Watson Statistic | Durbin-Watson Critical Value (Upper) | chi2 | Prob > chi2 |
|---|---|---|---|---|
| S/L | 2.356 | 1.655 | 1.342 | 0.2467 |
| S/H | 2.082 | 1.655 | 0.152 | 0.6970 |
| B/L | 2.082 | 1.655 | 0.152 | 0.6970 |
| B/H | 2.356 | 1.655 | 1.342 | 0.2467 |
| S/L (add sentiment) | 2.330 | 1.723 | 1.164 | 0.2807 |
| S/H (add sentiment) | 1.996 | 1.723 | 0.014 | 0.9057 |
| B/L (add sentiment) | 2.108 | 1.723 | 0.180 | 0.6712 |
| B/H (add sentiment) | 2.129 | 1.723 | 0.181 | 0.6703 |
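A minimal sketch of these two checks, assuming `y` holds one portfolio's excess returns and `X` the factor columns (both hypothetical names), could look as follows:

```python
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson
from statsmodels.stats.diagnostic import acorr_breusch_godfrey

# Fit the portfolio regression, then inspect its residuals.
model = sm.OLS(y, sm.add_constant(X)).fit()

dw = durbin_watson(model.resid)  # values near 2 suggest no autocorrelation
lm_stat, lm_pvalue, f_stat, f_pvalue = acorr_breusch_godfrey(model, nlags=1)
print(f"DW = {dw:.3f}, Breusch-Godfrey chi2 p-value = {lm_pvalue:.4f}")
```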
_5.6. Multicollinearity_

Multicollinearity means that there is a linear correlation among the independent variables. It manifests itself as one independent variable being a linear combination of one or several other independent variables, and it hurts the regression model. Perfect multicollinearity can result in the non-existence of the parameter estimates, while near-perfect multicollinearity causes the ordinary least squares estimator to cease to be effective. At the same time, the parameter estimates and the significance tests would not make sense, and reliable predictions could not be obtained under multicollinearity. The value of the variance inflation factor (VIF) can be used for the multicollinearity test: a greater VIF value means a higher probability of multicollinearity between independent variables. The results of the multicollinearity test are shown in Table 5. All independent variables in every regression model passed the test, and there was no multicollinearity.

**Table 5. Variance inflation factor (VIF) results.**

Three-factor model (S/L, S/H, B/L, B/H):

| Variable | Tolerance | VIF |
|---|---|---|
| Rm-Rf | 0.889 | 1.124 |
| SMB | 0.95 | 1.053 |
| HML | 0.941 | 1.063 |

Four-factor model:

| Portfolio | Variable | Tolerance | VIF |
|---|---|---|---|
| S/L | Rm-Rf | 0.889 | 1.124 |
| S/L | SMB | 0.95 | 1.053 |
| S/L | HML | 0.941 | 1.063 |
| S/L | S_SL | 0.882 | 1.133 |
| S/H | Rm-Rf | 0.951 | 1.052 |
| S/H | SMB | 0.946 | 1.058 |
| S/H | HML | 0.982 | 1.018 |
| S/H | S_SH | 0.973 | 1.027 |
| B/L | Rm-Rf | 0.957 | 1.045 |
| B/L | SMB | 0.902 | 1.109 |
| B/L | HML | 0.742 | 1.347 |
| B/L | S_BL | 0.727 | 1.375 |
| B/H | Rm-Rf | 0.956 | 1.046 |
| B/H | SMB | 0.933 | 1.072 |
| B/H | HML | 0.976 | 1.024 |
| B/H | S_BH | 0.96 | 1.041 |

_5.7. Heteroscedasticity_

An essential hypothesis of ordinary least squares regression, guaranteeing reliable parameter estimation, is that all error terms have the same variance. If the error terms have different variances, heteroscedasticity exists in the linear regression model. There are several test methods for heteroscedasticity, such as the White test, the Park test, the Gleiser test, and the Goldfeld–Quandt test, as well as direct subjective judgment based on graphs. In this paper, the White test was used for heteroscedasticity, and the results are presented in Table 6. The null hypothesis of the White test is that there is homoscedasticity, and the alternative hypothesis is that there is unrestricted heteroscedasticity. According to Table 6, the probabilities of all portfolios, whether in the three- or four-factor model, were higher than 0.05 overall. This implies that the null hypothesis is accepted and there was no heteroscedasticity in any portfolio.

**Table 6. Results of the White test.**

| Test Variables and Focus | chi2 | Prob > chi2 |
|---|---|---|
| S/L | 9.70 | 0.3751 |
| S/H | 16.50 | 0.0572 |
| B/L | 16.50 | 0.0572 |
| B/H | 9.70 | 0.3751 |
| S/L (add sentiment) | 12.78 | 0.5438 |
| S/H (add sentiment) | 24.33 | 0.0518 |
| B/L (add sentiment) | 21.67 | 0.0856 |
| B/H (add sentiment) | 17.26 | 0.2426 |

After conducting the above tests, we could conclude that there was no autocorrelation, multicollinearity, or heteroscedasticity. Further regression analysis could then be performed.
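A minimal sketch of these last two checks, under the same hypothetical `X` and fitted `model` as in the previous sketch:

```python
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.stats.diagnostic import het_white

X_const = sm.add_constant(X)

# VIF per regressor: values near 1 indicate the absence of multicollinearity.
vif = {col: variance_inflation_factor(X_const.values, i)
       for i, col in enumerate(X_const.columns) if col != "const"}
print("VIF:", vif)

# White test on the residuals: a p-value above 0.05 supports homoscedasticity.
lm_stat, lm_pvalue, f_stat, f_pvalue = het_white(model.resid, X_const)
print(f"White test chi2 p-value = {lm_pvalue:.4f}")
```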
_5.8. Regression Analysis of the FFTFM_

5.8.1. Goodness of Fit of the FFTFM

The sample data used in the empirical research were obtained through actual observation and are authentic reflections of the facts. Therefore, after introducing the sample data into the model, the model must be able to describe these objective facts well before it can be considered meaningful. The degree to which the model approximates the sample is called the "goodness of fit". In multiple regression analyses, the determination coefficient R² is usually used to judge the goodness of fit of the equation. The R² indicates what percentage of the variation in the dependent variable is explained by the independent variables. The value of R² lies between 0 and 1: the closer R² is to 1, the better the model fits the sample data, while an R² close to 0 means the model fits poorly. The regression was conducted in Python, and the goodness-of-fit results are shown in Table 7. Durbin–Watson statistics are also included in the table, indicating no autocorrelation. The FFTFM performed relatively well on stocks of China's blockchain industry, which could provide a reference for other FinTech companies. The S/L group performed best in terms of explaining portfolio returns, explaining 77.0% of the changes in the stock return. Portfolio B/H had the worst result, at only 45.8%. These results illustrate that more factors with explanatory power for portfolio returns could be included in the model, as shown in Table 7.

**Table 7. Goodness-of-fit test of traditional FFTFM (Fama-French Three-Factor Model).**

| | S/L | S/H | B/L | B/H |
|---|---|---|---|---|
| R² | 0.770 | 0.509 | 0.653 | 0.458 |
| Adjusted R² | 0.749 | 0.464 | 0.621 | 0.409 |
| Durbin-Watson statistic | 2.356 | 2.082 | 2.082 | 2.356 |

5.8.2. Significance Test of the Model

The goodness of fit can only reflect the results of the FFTFM on the selected data; it cannot describe the overall relationship among the factors. Therefore, it was necessary to perform a significance test on the model to test how well it approximates the trend in yields. The universal test for the significance of the whole model is the F-test, as shown in Table 8.

**Table 8. Results of F test.**

| | S/L | S/H | B/L | B/H |
|---|---|---|---|---|
| F test (3,33) | 36.81 | 11.38 | 20.68 | 9.309 |
| Prob (F) | 0.0000 | 0.0000 | 0.0000 | 0.000132 |

Test hypotheses:

**Hypothesis 1 (H1).** _All coefficients of the regression model are zero, which indicates that the linear relationship of the FFTFM is not significant and the model is meaningless._

**Hypothesis 2 (H2).** _At least one of the coefficients is not zero. This shows that the FFTFM has a significant linear relationship, and the model has explanatory power for portfolio returns._

According to Table 8, at a given significance level of 5%, the F statistics of the four portfolios were all greater than the critical value (F0.05(3, 33) = 2.89). The null hypothesis H1 was therefore rejected, which illustrates that at least one of the regression coefficients was significantly different from 0. It can be concluded that the linear relationship of the FFTFM was significant. Besides, the probability value corresponding to the F statistic of each portfolio was equal to 0, which also shows that the overall linear relationship of the FFTFM was highly significant. In short, the FFTFM can reflect well the overall characteristics of the returns of portfolios constructed from companies related to blockchain technology. It may also provide a baseline for other companies that use blockchain as FinTech.

5.8.3. Significance Test of Coefficients

In the previous section, the F-test of the FFTFM was performed. The results showed that all four portfolios passed the F-test, which indicates that the factors of the FFTFM (Rm-Rf, SMB, and HML) jointly had a significant effect on stock returns. However, this does not mean that each element of the model (Rm-Rf, SMB, or HML) had a considerable effect on the yield alone. Therefore, it was necessary to test the significance of each coefficient in the model. This paper used the t-test to analyze the impact of each single factor on the stock return in the model.

As an explanatory variable shared by the capital asset pricing model and the FFTFM, testing the coefficient b of the excess market rate of return (Rm-Rf) can reveal whether market risk factors have a significant effect on stock returns. According to Table 9, the coefficients of all portfolios were positive, which indicates that the market risk factor was positively correlated with the stock return. Besides, the coefficients of all portfolios passed the t-test at the 1% significance level, and their t-values exceeded those of the size factor and book-to-market ratio factor coefficients. This shows that market risk factors had a significant impact on stock returns. However, this differs from the study of Fama and French [5], who concluded that market risk factors have only weak explanatory power.

**Table 9. Results of t-test.**

| | S/L | S/H | B/L | B/H |
|---|---|---|---|---|
| βi | 1.1939 | 1.2942 | 1.2942 | 1.1939 |
| t-test | (−5.009) *** | (5.086) *** | (5.086) *** | (5.009) *** |
| si | 1.0496 | 0.3943 | −0.6057 | 0.0496 |
| t-test | (4.916) *** | (1.730) * | (−2.657) ** | (0.232) |
| hi | −1.1229 | −0.0612 | −1.0612 | −0.1229 |
| t-test | (−6.102) *** | (−0.312) | (−5.402) *** | (−0.668) |
| αi | 0.0057 | 0.0136 | 0.0136 | 0.0057 |
| t-test | (0.184) | (0.409) | (0.409) | (0.184) |

*** Indicates the coefficient passing the t-test at the 1% level. ** Indicates the coefficient passing the t-test at the 5% level. * Indicates the coefficient passing the t-test at the 10% level.
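As a minimal sketch of how the coefficient estimates, t-statistics, R², and F statistics in the tables above could be produced, assuming a hypothetical monthly frame `data` holding one portfolio's excess return and the factor columns:

```python
import statsmodels.api as sm

# `data` is a hypothetical monthly frame with columns 'exret' (portfolio
# excess return), 'mkt' (Rm - Rf), 'SMB', 'HML' and 'sent' (sentiment index).

# Three-factor specification.
res3 = sm.OLS(data["exret"], sm.add_constant(data[["mkt", "SMB", "HML"]])).fit()

# Four-factor specification with the sentiment index added.
res4 = sm.OLS(data["exret"], sm.add_constant(data[["mkt", "SMB", "HML", "sent"]])).fit()

# Coefficients, t-statistics, R-squared, and the F statistic are read directly
# from the fitted results (repeated per portfolio: S/L, S/H, B/L, B/H).
print(res3.rsquared, res4.rsquared)
print(res4.params)
print(res4.tvalues)
print(res4.fvalue, res4.f_pvalue)
```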
5.8.4. Book-to-Market Ratio Factor

The coefficient of the book-to-market ratio factor (HML) was tested to determine whether there was a significant relationship between HML and portfolio returns. As shown in Table 9, only S/L and B/L passed the 1% significance test; the portfolios constructed from companies with a high book-to-market ratio performed worse and did not pass the t-test. Fama and French concluded that when a stock has a low book-to-market ratio, which is called a growth stock, the HML factor in the model generally has a negative slope or a decreasing positive slope; when a stock has a high book-to-market ratio, which is called a value stock, the HML factor generally has an increasing slope [5]. However, the empirical research on the Chinese companies in the blockchain industry, including the companies that take blockchain as FinTech, did not follow this rule, and the HML factor for firms with a high book-to-market ratio could not pass the t-test.

5.8.5. Size Factor

This part examines the linear relationship between the independent variable SMB and portfolio returns. The results in Table 9 show that the regression coefficients of the company size factor SMB on S/L, S/H, B/L, and B/H were 1.0496, 0.3943, −0.6057, and 0.0496, respectively, and the corresponding t-values were 4.916, 1.730, −2.657, and 0.232. According to these results, the SMB factor of the three portfolios S/L, S/H, and B/H had a positive correlation with the excess return of the portfolio, while the SMB factor of the B/L portfolio had a negative relationship with its performance. According to the t-test results of the SMB factor, the p-value of the S/L portfolio was less than 1%, that of B/L was less than 5%, and that of S/H was less than 10%. Therefore, the correlation coefficients passed the test significantly, indicating that the SMB factor had a significantly higher positive correlation for small-scale blockchain stock portfolios. The t-value of the coefficient of portfolio B/H was less than the critical value, and the p-value was greater than the 10% confidence level, which means that the correlation between the size factor and the return of the B/H portfolio was not significant. Since the samples were companies using blockchain, which is one of the components of FinTech, this might provide a reference for other FinTech firms.

5.8.6. An Improved Four-Factor Model Based on the Fama-French Model

After collecting and processing the data on Guba comments about each blockchain company, the score of investors' emotions could be obtained. People generally regard blockchain as an innovative technology, especially when it comes to FinTech. The sentiment score is a daily series, so it was turned into a monthly sequence. Moreover, the monthly scores of the firms in the same portfolio were weight-averaged to obtain the portfolio's monthly score.
Subsequently, the four portfolios, S/L, S/H, B/L, and B/H, each have their own sentiment index every month, which can be added to the traditional Fama-French model to construct a new four-factor model. The equation is shown below:

$$R_i - R_{ft} = \alpha_i + \beta_i(R_{mt} - R_{ft}) + s_i SMB_t + h_i HML_t + sent_i Sentiment_{it} + \varepsilon_{it}$$

5.8.7. Goodness of Fit and F Test

Table 10 illustrates the results of the F test and the goodness of fit of the four-factor Fama-French pricing model.

**Table 10. The results of goodness of fit and F test of four-factor pricing model.**

| Test Focus | S/L | S/H | B/L | B/H |
|---|---|---|---|---|
| R² | 0.771 | 0.529 | 0.689 | 0.514 |
| Adjusted R² | 0.743 | 0.47 | 0.65 | 0.453 |
| F test (4,32) | 27 | 8.976 | 17.7 | 8.447 |
| Prob (F) | 0.0000 | 0.0000 | 0.0000 | 0.0000 |

The F test results, which measure the significance of the regression equations, show that with 37 monthly observations and four explanatory variables, the p-values of the F test for the S/L, S/H, B/L, and B/H portfolios were all equal to 0. All four portfolios passed the F test at a 99% confidence level, indicating that the four sets of regression equations were highly significant. From this, it was concluded that the independent variable market factor (Rm-Rf), the size factor (SMB), the value factor (HML), and the sentiment factor (Sentiment) jointly had significant effects on the dependent variable, the portfolio yield.

R² can be used to test how well the regression equation fits the sample observations: the closer R² is to 1, the better the regression fit. The results in Table 10 show that the determination coefficient of S/L was 0.771, that of the S/H combination was 0.529, that of B/L was 0.689, and that of B/H was 0.514. From this analysis, the portfolio with the best fit was the small-scale, low book-to-market one, and the worst fit was the portfolio constructed from big-scale, high book-to-market companies. The fit of the four groups was not very good, so there should be other factors in the market with higher explanatory power besides these factors.

5.8.8. Parameters' Analysis

The first step is the analysis of the regression coefficients and the significance of market returns with respect to the returns of each portfolio. Market portfolio returns are proxy variables for systematic risk; the beta reflects the sensitivity of a single asset or portfolio to market changes. Table 11 shows that the market risk premium coefficients β of the four portfolios were 1.2245, 1.3174, 1.289, and 1.1852, respectively. The β coefficients were all greater than 0, indicating that the return on the portfolios moved in the same direction as the return of the whole market. The p-values of the t-tests of the four stock portfolios were all less than 1%, and the null hypothesis could be rejected at a 99% confidence level. Therefore, it is believed that the linear relationship between the market factor (Rm-Rf) and the portfolio return was significant.

Secondly, the linear relationship between the independent variable SMB and the portfolio return rate was tested. The results in Table 11 reveal that the regression coefficients of the company size factor (SMB) on S/L, S/H, B/L, and B/H were 1.06, 0.4275, −0.4984, and 0.1167, respectively. The t-values were 4.878, 1.871, −2.203, and 0.56, and the p-values of the t-tests were 0, 0.070, 0.035, and 0.579.
This illustrates that the SMB factors of portfolios S/L, S/H, and B/H had a positive correlation with the excess return, while the SMB factor of B/L had a negative correlation with its return. According to the t-test results of the SMB factor, the p-value of the S/L portfolio was less than 1%, and the p-value of S/H was less than 10%. Therefore, the correlation coefficients passed the test significantly, indicating that the size factor (SMB) affected small-scale blockchain stocks: those portfolios had a significantly higher positive correlation.

**Table 11. Results of regression of four-factor pricing model.**

| Test Focus | S/L | S/H | B/L | B/H |
|---|---|---|---|---|
| αi | 0.006 | 0.0126 | 0.0075 | −0.012 |
| t-test | 0.189 | 0.383 | 0.235 | −0.382 |
| βi | 1.2245 | 1.3174 | 1.289 | 1.1852 |
| t-test | 4.894 *** | 5.19 *** | 5.267 *** | 5.166 *** |
| si | 1.06 | 0.4275 | −0.4984 | 0.1167 |
| t-test | 4.878 *** | 1.871 * | −2.203 ** | 0.56 |
| hi | −1.1042 | −0.0487 | −0.8535 | −0.155 |
| t-test | −5.793 *** | −0.249 | −3.922 *** | −0.872 |
| senti | 0.0351 | 0.1054 | 0.2469 | 0.1493 |
| t-test | 0.461 | 1.172 | 1.922 * | 1.906 * |

*** Indicates the coefficient passing the t-test at the 1% level. ** Indicates the coefficient passing the t-test at the 5% level. * Indicates the coefficient passing the t-test at the 10% level.

The SMB factor in the B/L portfolio also passed the t-test, at the 5% confidence level. Still, the t-value of B/H was less than the critical value, indicating that the correlation between the size factor of the B/H portfolio and the portfolio's return was not significant.

Thirdly, the correlation coefficients showing the BM effect in the four portfolios were −1.1042, −0.0487, −0.8535, and −0.155, in turn. The t-values were −5.793, −0.249, −3.922, and −0.872, respectively, with corresponding p-values of 0, 0.805, 0, and 0.39. The regression results showed that the HML factors of all portfolios had a negative correlation with their returns, which differs from the study of Fama and French [5]. According to the t-test results, the t-values of the value factor (HML) in the S/L and B/L portfolios were greater than the critical value, and these HML factors passed the t-test at the 1% confidence level. The t-values of HML in portfolios S/H and B/H showed that those HML factors did not pass the test, which means that the HML factor in portfolios composed of listed firms with high book-to-market ratios had a very weak relationship with the return of the portfolios.

Finally, the fourth factor, "sentiment", was tested. The results in Table 11 reflect that the "Sentiment" coefficients in the four portfolios were 0.0351, 0.1054, 0.2469, and 0.1493, respectively, which implies that the sentiment factors in all portfolios had a positive relationship with the return. In brief, the more optimistic the investors, the higher the returns of the stocks. However, when checking the results in more detail, it is not difficult to conclude that only the sentiment factors in the portfolios comprised of big-size firms could pass the t-test at a 10% confidence level. The t-values of the sentiment factor in the S/L and S/H portfolios were 0.461 and 1.172, respectively, so there was no significant relationship between investor sentiment and the return of the portfolios constructed from small-size firms.

**6. Conclusions**

This paper is relevant to the topic of regression for FinTech, demonstrating an effective valuation theory applied to companies embracing FinTech.
It focused on the blockchain companies in China, including those that treat blockchain as FinTech, and used the FFTFM to study these firms. It also introduced a new sentiment factor collected from public comments for better explanatory power. From the above descriptive analysis and regression analysis, several conclusions can be drawn.

_6.1. Feasibility of the FFTFM and an Improved Four-Factor Model_

The FFTFM and the improved Fama-French model with the added sentiment factor both passed the F test. The market factor, size factor, and book-to-market ratio factor in the FFTFM had the explanatory power to describe and review portfolio returns. In the improved model, the sentiment factor could also explain the return of the portfolios effectively. It is noticeable that the explanatory power of the four portfolios in the FFTFM increased when the sentiment factor was added, rising from 0.770, 0.509, 0.653, and 0.458 to 0.771, 0.529, 0.689, and 0.514, respectively. This reveals that the sentiment factor had a positive effect on the model. All coefficients of the sentiment factors in the different portfolios were positive, indicating that more optimism brings a higher return. However, there are still some minor flaws that caused the eight portfolios' R-squared values to fall short of expectations: the best goodness-of-fit value was 77.1%, and more explanatory variables could be used to review the return of the portfolios. Additionally, to guarantee the reliability of the regression results, it was indispensable to test for autocorrelation, multicollinearity, and heteroscedasticity, and we showed that all regressions were acceptable. Subsequently, although all portfolios passed the F test, the independent variables in each portfolio were also checked for significance. Under the FFTFM, two portfolios (S/L and B/L) had independent variables that all passed the significance test, and under the four-factor model, one portfolio (B/L) did.

_6.2. Influence of the Market Risk Premium Factor_

The blockchain industry in China, including, but not limited to, the firms that take blockchain as FinTech, has a positive relationship with the whole market environment. The coefficients of the independent variable market risk premium factor in all eight portfolios were greater than 1, which implies that the investment portfolios could release nonsystematic risk and contribute to portfolio returns. The blockchain industry is an emerging market in China, and several companies have recently begun to embrace this technology, including the companies that use blockchain as FinTech. Along with the development of the blockchain industry, numerous investors are attracted to this industry and plan their investments in it as well. This causes higher return volatility than the whole market.

_6.3. The Non-Existence of the Size Effect and Book-to-Market Ratio Effect in the Chinese Blockchain Industry_

The size effect means that small listed firms have significantly higher average returns than large ones. Banz first found this effect, and Fama and French verified its existence [5,55]. However, several researchers hold opposite views about the size effect. Goyal and Welch concluded that this effect is caused by the deviation of sample selection rather than the size of the companies [56]. Dimson and Marsh believe that big companies can achieve a higher return than small firms [57]. Schwert claimed that the size effect is gradually disappearing [58].
In this empirical research, the conclusion can be drawn that there is no size effect in the Chinese blockchain industry, which includes the firms using this technology as FinTech; portfolios with big companies bring a higher return.

The BM effect indicates that the return of stocks has a positive relationship with the company's book-to-market ratio: a higher book-to-market ratio brings a higher stock return. Fama and French also believed in the existence of the BM effect [5]. Chinese researchers have drawn different conclusions about the BM effect in the Chinese securities market. Xu argued that there is a significant BM effect in the Chinese stock market [59]. Gu and Ding conducted an empirical study of the growth effect of China's securities market and concluded that the BM effect is non-existent [60]. According to the analysis of the Fama-French model above, the BM effect does not exist in the Chinese blockchain industry. Portfolios built from the low book-to-market ratio companies earn more returns than others.

There are various factors that affect investment in the stock market, and investors have been trying to obtain higher investment returns. Many scholars have also studied the effective factors in this field and have proved that the FFTFM can be applied to Western developed securities markets. Compared with these mature stock markets, the Chinese stock market developed late. Therefore, whether the Chinese market can effectively meet the conditions for using the three-factor model has been under discussion. In recent years, with the continuous development of technologies such as big data, methods for measuring investor sentiment have also advanced. Online forums are an important window for investors to express their sentiments, and this article took this factor into account to improve the model's explanatory power. The empirical analysis of this article showed that there is neither a size effect nor a BM effect in the Chinese blockchain industry. Companies attracting more positive and optimistic attention can bring higher returns to investors. These findings can help investors choose high-return companies in this field. We also need to admit that the method of sentiment analysis is still relatively simple, and the accuracy of text sentiment measurement needs to be improved. The extent to which the information in online forums can affect investors' decisions needs further research.

**Author Contributions:** Conceptualization, Z.J. and V.C.; methodology, Z.J., V.C., and H.L.; software, Z.J.; validation, V.C., R.V., and C.-H.R.H.; formal analysis, Z.J. and V.C.; investigation, Z.J. and H.L.; resources, V.C., H.L., R.V., and C.-H.R.H.; data curation, Z.J.; writing—original draft preparation, Z.J.; writing—review and editing, V.C., H.L., R.V., and C.-H.R.H.; visualization, Z.J.; supervision, V.C. and H.L.; project administration, V.C.; funding acquisition, Z.J., V.C., R.V., and C.-H.R.H. All authors have read and agreed to the published version of the manuscript.

**Funding:** This work was partially supported by VC Research (VCR 0000042) and the National Natural Science Foundation of China (Grant No. 61872084).

**Conflicts of Interest:** The authors declare no conflict of interest.

**References**

1. Nakamoto, S. Bitcoin: A Peer-To-Peer Electronic Cash System. Available online: https://bitcoin.org/bitcoin.pdf (accessed on 1 September 2019).
2. Mu, J. Opportunities, Challenges and Prospects of the Central Bank to Implement the Legal Digital Currency DCEP. Economist 2020, 31, 95–105.
3. Mu, C.C.; Di, G.; Lv, Y.; Qian, Y.C.; Qing, S.D. Blockchain Research Group of Digital Currency Research Institute of PBC. China Financ. 2020, 70, 28–29.
4. Markowitz, H.M. Portfolio selection. J. Financ. 1952, 7, 77–91.
5. Fama, E.F.; French, K.R. The Cross-Section of Expected Stock Returns. J. Financ. 1992, 47, 427–465. [CrossRef](http://dx.doi.org/10.1111/j.1540-6261.1992.tb04398.x)
6. Fan, L.Z.; Yu, S.D. A Three-factor Model of Chinese Stock Market. J. Syst. Eng. 2002, 17, 537–546.
7. Yang, X.; Chen, Z.H. Empirical Study on Three-factor Asset Pricing Model in Chinese Stock Market. Quant. Tech. Econ. 2003, 20, 137–141.
8. Deng, C.R.; Ma, Y.K. Empirical Study on Three-factor Model in China's Securities Market. Chin. J. Manag. 2005, 2, 591–596.
9. Zhao, P.; Zhou, M. Research on the Applicability of Fama-French Three-Factor Model of Securities Industry in China's A-share Market. Times Financ. 2018, 27, 156–157.
10. Liu, T.J.; Zhu, J.Q.; Li, Y. An Empirical Analysis of the Applicability of Fama-French Three-factor Model in China's Growth Enterprise Market. J. Jiaozuo Univ. 2013, 27, 64–66.
11. You, D. An Empirical Study of Fama-French Three-Factor Model in China's A-Share Real Estate Sector. Mod. Bus. Trade Ind. 2008, 20, 199–200.
12. Li, Z.H.; Zhao, A.C. Research on the Return Rate Prediction of Electric Power Industry in Chinese Stock Market. J. Hebei Univ. Econ. Bus. (Compr. Ed.) 2018, 18, 34–39.
13. Guo, Z.X. Fama-French three-factor model and five-factor model empirical test for A-shares steel companies. Hebei Enterp. 2019, 30, 33–36.
14. Zhu, M. Empirical Test of Listed Bank Shares in China Based on Fama-French Three-Factor Model. Hebei Financ. 2019, 26, 17–21.
15. Liu, W.Q.; Liu, X.X. Individual/institutional investor sentiment and stock returns: Study based on Shanghai A-share market. J. Manag. Sci. China 2014, 17, 70–87.
16. Ba, S.S.; Zhu, H. Margin Trading, Short Selling, Investor Sentiment and Stock Market Volatility. Stud. Int. Financ. 2016, 32, 82–96.
17. Liu, Y.C.F.; Zhang, Z.C. A Summary of Research on Investor Sentiment in Chinese Stock Market. China Econ. 2018, 32, 78–79.
18. Fama, E.F.; French, K.R. Size, Value and Momentum in International Stock Returns. J. Financ. Econ. 2012, 10, 457–472. [CrossRef](http://dx.doi.org/10.1016/j.jfineco.2012.05.011)
19. Carhart, M.M. On persistence in mutual fund performance. J. Financ. 1997, 52, 57–82. [CrossRef](http://dx.doi.org/10.1111/j.1540-6261.1997.tb03808.x)
20. Xu, H.Y.; Xiong, C. Applicability of Carhart's four-factor model in China's stock market. Bus. Cult. 2009, 2, 128.
21. Xue, S.Z.; Guan, X.Y. An Empirical Study on the Evaluation of Open-ended Fund Performance Based on "Four-Factors" Model. Financ. Teach. Res. 2011, 26, 63–65.
22. Fama, E.F.; French, K.R. A five-factor asset pricing model. J. Financ. Econ. 2015, 116, 1–22. [CrossRef](http://dx.doi.org/10.1016/j.jfineco.2014.10.010)
23. Racicot, F.E.; Rentz, W.F.; Tessier, D.; Théoret, R. The Conditional Fama-French Model and Endogenous Illiquidity: A Robust Instrumental Variables Test. PLoS ONE 2019, 14, 1–26. [CrossRef](http://dx.doi.org/10.1371/journal.pone.0221599)
24. Racicot, F.E.; Rentz, W.F.; Kahl, A.L.; Mesly, O. Examining the dynamics of illiquidity risks within the phases of the business cycle. Borsa Istanb. Rev. 2019, 19, 117–131. [CrossRef](http://dx.doi.org/10.1016/j.bir.2018.12.001)
25. Sembiring, F.M. Three-Factor and Five-Factor Models: Implementation of Fama and French Model on Market Overreaction Conditions. J. Financ. Bank. Rev. (JFBR) 2018, 3, 77–83.
26. Cox, S.; Britten, J. The Fama-French five-factor model: Evidence from the Johannesburg Stock Exchange. Invest. Anal. J. 2019, 48, 240–261.
27. Bangash, R.; Khan, F.; Jabeen, Z. Size, Value and Momentum in Pakistan Equity Market: Size and Liquidity Exposures. Glob. Soc. Sci. Rev. 2018, 3, 376–394.
28. Tian, L.H.; Wang, G.Y.; Zhang, W. Three-factor model pricing: How is China different from the United States? Stud. Int. Financ. 2014, 30, 37–45.
29. Yang, J.H.; Fan, L.B. Global Market Integration: Fama-French Three-Factor Model Test in a Global Scenario. Technol. Econ. 2017, 36, 109–119.
30. Feng, S.J.; Liu, Z. An Empirical Study of Three-factor Pricing Model in China Stock Market. Econ. Res. Guide 2018, 20, 145–148.
31. Yan, X.; Pu, T. An Empirical Study of Fama-French Model in China's Stock Market: A Case Study of Shanghai 50 Index. Times Financ. 2017, 11, 181–182.
32. Yin, L.Y. An Empirical Study of the Impact of Investor Sentiment on Stock Returns: Based on the Fama-French Three-Factor Model. Friends Account. 2018, 6, 51–56.
33. Hu, M.A.; Tu, X.P.; Zhu, D.Y. Is the Fama-French five-factor model more explanatory? Co-Oper. Econ. Sci. 2017, 10, 82–85.
34. Yuan, J.Y.; Cong, Z. Research on the Stock Return of Hadaqi Listed Companies Based on Fama-French Three-Factor Model. J. Heilongjiang Inst. Technol. 2015, 29, 51–53.
35. Cheng, S.Y.; Fang, H. Empirical Research on China's Auto Sector Stock Yield Based on Fama-French Three-Factor Model. China Price 2019, 1, 71–73.
36. Gou, D.N.; Wang, W.J. Empirical Test of China's Listed Bank Shares Based on Fama-French Three-Factor Model. Stat. Decis. 2016, 21, 158–161.
37. Baur, M.; Quintero, S.; Stevens, E. The 1986–88 stock market: Investor sentiment or fundamentals? Manag. Decis. Econ. 1995, 17, 319–329.
38. Mehra, R.; Sah, R. Mood fluctuations, projection bias, and volatility of equity prices. J. Econ. Dyn. Control 2002, 26, 869–887.
39. Brown, G.; Cliff, M. Investor Sentiment and Asset Valuation. SSRN Electron. J. 2001.
40. Cheng, K.; Liu, R. Research on the Interaction between Investor Sentiment and Stock Market. Shanghai Econ. Rev. 2005, 11, 88–95.
41. Wang, C.F.; Zhao, W.; Fang, Z.M. Measure of IPO investor sentiment and its relationship with IPO price behavior. Syst. Eng. 2007, 7, 1–6.
42. Wen, F.H.; Yang, X.; Gong, X.; Huang, C.X.; Yang, X.G. An Infectious Analysis of Sino-US Investor Sentiment in the Context of the Financial Crisis. Syst. Eng. Theory Pract. 2015, 35, 623–629.
43. Tetlock, P. Giving Content to Investor Sentiment: The Role of Media in the Stock Market. J. Financ. 2007, 62, 1139–1168.
44. Chen, X.H.; Peng, W.L.; Tian, M.Y. Research on stock price and trading volume prediction based on investor sentiment. J. Syst. Sci. Math. Sci. 2016, 36, 2294–2306.
45. Meng, X.J.; Meng, X.L.; Hu, Y. Research on Investor Sentiment Index Based on Text Mining and Baidu Index. Macroeconomics 2016, 1, 144–153.
46. Luo, K.; Wang, C.F.; Fang, Z.M. Research on the Regionalization of Capital Asset Pricing under the Influence of Investors' Emotion—Based on the Emotional Analysis of Stock Forum Posts. Oper. Res. Manag. Sci. 2017, 26, 129–136.
47. Xu, T.Y. Research on the Impact of Investor Sentiment on the Stock Market in Social Media. Shanghai Manag. Sci. 2018, 40, 67–74.
48. The number of blockchain companies has increased by 7159%. China Econ. Wkly. 2019, 11, 7.
49. Fama, E.F.; French, K.R. Size and book-to-market factors in earnings and returns. J. Financ. 1995, 50, 131–155.
50. Wu, X.F.; Liu, Y.F.; Lei, M. An Empirical Analysis of Equity Premium in China's Capital Market. Econ. Probl. 2007, 336, 85–88.
51. Qu, Z.X. Research on Value Evaluation Method of Non-tradable Shares of Listed Companies. Business 2012, 9, 48.
52. Zhang, Z.C.; Liu, Y.C.F. The Construction and Effectiveness Test of Investor Sentiment Index in China's Stock Market. Commer. Times 2018, 7, 156–158.
53. Antweiler, W.; Frank, M.Z. Is all that talk just noise? The information content of internet stock message boards. J. Financ. 2004, 59, 1259–1294.
54. Bu, H.; Xie, Z.; Li, J.H.; Wu, J.J. The Impact of Investor Sentiment Based on Stock Evaluation on the Stock Market. J. Manag. Sci. China 2018, 21, 86–101.
55. Banz, R.W. The Relationship between Return and Market Value of Common Stocks. J. Financ. Econ. 1981, 9, 3–18.
56. Goyal, A.; Welch, I. Predicting the Equity Premium; manuscript; Yale University: New Haven, CT, USA, 1999.
57. Dimson, E.; Marsh, P. Murphy's Law and Market Anomalies. J. Portf. Manag. 1998, 25, 53–69.
58. Schwert, G.W. Anomalies and market efficiency. In Handbook of the Economics of Finance; Elsevier: Amsterdam, The Netherlands, 2003; pp. 939–974.
59. Xu, Z.H. An Empirical Analysis of China's Stock Market Scale Effect and Book-to-Market Ratio Effect. J. Financ. Dev. Res. 2011, 11, 75–78.
60. Gu, J.; Ding, Y. An Empirical Study on the Growth Effect of China's Securities Market. Econ. Rev. 2003, 2, 101–105.

© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
{ "disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.3390/su12125170?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.3390/su12125170, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "CCBY", "status": "GOLD", "url": "https://www.mdpi.com/2071-1050/12/12/5170/pdf?version=1593505965" }
2020
[]
true
2020-06-24T00:00:00
[ { "paperId": "67f6a970ea6e3c4ee036d62c44131455048601e9", "title": "The conditional Fama-French model and endogenous illiquidity: A robust instrumental variables test" }, { "paperId": "2a932183ee916d7344dc0f89269648e29cb3760b", "title": "The Fama-French five-factor model: Evidence from the Johannesburg Stock Exchange" }, { "paperId": "af456311cae5d703c19bc06c53c3bb6ead2baa2a", "title": "Examining the dynamics of illiquidity risks within the phases of the business cycle" }, { "paperId": "2850a882b61e903d15f32bfcc0d010572cad49fa", "title": "Three-Factor and Five-Factor Models: Implementation of Fama and French Model on Market Overreaction Conditions" }, { "paperId": "b37a7765c4526cac1983952aecf372828e5cac61", "title": "A Five-Factor Asset Pricing Model" }, { "paperId": "be388fb9059c223c97be2bea43d467d598ef4bf0", "title": "Size, Value, and Momentum in International Stock Returns" }, { "paperId": "1198070456cb65844c3766e78dce0b6d2ff9b31f", "title": "The Role of Investability Restrictions on Size, Value, and Momentum in International Stock Returns" }, { "paperId": "c596b215d73617d15528d39a4046560f9dd0c30c", "title": "NASA technology assessment using real options valuation" }, { "paperId": "1fe8a5e84939833f4b9f8c9bce311c70fb0192a9", "title": "Mood fluctuations, projection bias, and volatility of equity prices" }, { "paperId": "52bef0f2fbaead322e7788e9686e837986a9b1ba", "title": "Investor Sentiment and Asset Valuation" }, { "paperId": "747bcf00e45a51a1001fa6fc796502c9aab5974e", "title": "Is All that Talk Just Noise? The Information Content of Internet Stock Message Boards" }, { "paperId": "e20a66dae3fe8496eb76cb94d3f1d48bf85fe804", "title": "Asset Pricing at the Millennium" }, { "paperId": "c6ea69c387da7257429d3411b7ce2c7cb7cb7f67", "title": "Murphy's Law and Market Anomalies" }, { "paperId": "db0a181095577ef68e8e450d69e618c3ae2d25cb", "title": "The 1986-88 Stock Market: Investor Sentiment or Fundamentals?" }, { "paperId": "0065386cafc3ed92ace622e1cd8f9f51583cff33", "title": "Size and Book-to-Market Factors in Earnings and Returns" }, { "paperId": "f299a3b308717f6a17b15ee9509ddd70c3df69dd", "title": "The Cross‐Section of Expected Stock Returns" }, { "paperId": "0c27155b0d5c100db4e86e6dbad1af05b93b6c2f", "title": "The relationship between return and market value of common stocks" }, { "paperId": "433561f47f9416a6500c8350414fdd504acd2e5e", "title": "Bitcoin Proof of Stake: A Peer-to-Peer Electronic Cash System" }, { "paperId": null, "title": "Blockchain Research Group of Digital Currency Research Institute of PBC" }, { "paperId": null, "title": "Opportunities, Challenges and Prospects of the Central Bank to Implement the Legal Digital Currency DCEP" }, { "paperId": "39a0f436cbd7b98830b80b58f7e7b115673ef295", "title": "Portfolio Selection" }, { "paperId": null, "title": "The number of blockchain companies has increased by 7159%" }, { "paperId": null, "title": "Fama-French three-factor model and five-factor model empirical test for A-shares steel companies" }, { "paperId": null, "title": "Empirical Test of Listed Bank Shares in China Based on Fama-French Three-Factor Model. 
Hebei Financ" }, { "paperId": "83d542f739abbc8aa8f57d3ecc1065ad8e779791", "title": "Size, Value and Momentum in Pakistan Equity Markets: Size and Liquidity Exposures" }, { "paperId": null, "title": "The Construction and Effectiveness Test of Investor Sentiment Index in my country’s Stock Market" }, { "paperId": null, "title": "An Empirical Study of the Impact of Investor Sentiment on Stock Returns: Based on the Fama-French Three-Factor Model" }, { "paperId": null, "title": "An Empirical Study of Three-factor Pricing Model in China Stock Market" }, { "paperId": null, "title": "Research on the Return Rate Prediction of Electric Power Industry in Chinese Stock Market" }, { "paperId": null, "title": "Research on the Impact of Investor Sentiment on the Stock Market in Social Media" }, { "paperId": null, "title": "Research on the Applicability of Fama-French Three-Factor Model of Securities Industry Sustainability 2020" }, { "paperId": null, "title": "Global Market Integration: Fama-French Three-Factor Model Test in a Global Scenario" }, { "paperId": null, "title": "An Empirical Study of Fama-French Model in China’s Stock Market: A Case Study of Shanghai 50 Index" }, { "paperId": null, "title": "Research on the Regionalization of Capital Asset Pricing under the Influence of Investors’ Emotion—Based on the Emotional Analysis of Stock Forum Posts" }, { "paperId": null, "title": "Empirical Test of China’s Listed Bank Shares" }, { "paperId": null, "title": "Research on stock price and trading volume prediction based on investor sentiment" }, { "paperId": null, "title": "Research on Investor Sentiment Index Based on Text Mining and Baidu Index" }, { "paperId": null, "title": "Margin Trading, Short Selling, Investor Sentiment and Stock Market Volatility" }, { "paperId": null, "title": "An Infectious Analysis of Sino-US Investor Sentiment in the Context of the Financial Crisis" }, { "paperId": "00e68210d0125365b8c01e22870b5e3ef76bdd72", "title": "Individual /institutional investor sentiment and stock returns: Study based on Shanghai A-share market" }, { "paperId": null, "title": "Three-factor model pricing: How is China different from the United States?" }, { "paperId": "f5fb24b271bbdad21a1c54a17ce27c69bbd19407", "title": "Handbook of the Economics of Finance" }, { "paperId": null, "title": "Research on Value Evaluation Method of Non-tradable Shares of Listed Companies" }, { "paperId": null, "title": "An Empirical Analysis of China’s Stock Market Scale Effect and Book-to-Market Ratio Effect" }, { "paperId": null, "title": "An Empirical Study on the Evaluation of Openended Fund Performance Based on “Four-Factors” Model" }, { "paperId": "4e9ec92a90c5d571d2f1d496f8df01f0a8f38596", "title": "Bitcoin: A Peer-to-Peer Electronic Cash System" }, { "paperId": null, "title": "Measure of IPO investor sentiment and its relationship with IPO price behavior" }, { "paperId": "ae074faa8712719d5dce6b98b02750abc118c823", "title": "Giving Content to Investor Sentiment: The Role of Media in the Stock Market" }, { "paperId": "69b9a65c6589a3e2827bb0c885132b169f7808ba", "title": "Chapter 15 Anomalies and market efficiency" }, { "paperId": null, "title": "Predicting the Equity Premium; manuscript; Yale" }, { "paperId": null, "title": "This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license" } ]
21,047
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Engineering", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/023de6262322bae283469df6b9c1c056759315c4
[ "Computer Science" ]
0.872681
An Improved Genetic Algorithm for Safety and Availability Checking in Cyber-Physical Systems
023de6262322bae283469df6b9c1c056759315c4
IEEE Access
[ { "authorId": "11032852", "name": "Z. Wang" }, { "authorId": "2122754956", "name": "Yanan Jin" }, { "authorId": "150311209", "name": "Shasha Yang" }, { "authorId": "1864210", "name": "Jianmin Han" }, { "authorId": "49301778", "name": "Jianfeng Lu" } ]
{ "alternate_issns": null, "alternate_names": null, "alternate_urls": [ "http://ieeexplore.ieee.org/servlet/opac?punumber=6287639" ], "id": "2633f5b2-c15c-49fe-80f5-07523e770c26", "issn": "2169-3536", "name": "IEEE Access", "type": "journal", "url": "http://www.ieee.org/publications_standards/publications/ieee_access.html" }
Cross-IoT infrastructure access frequently occurs when performing tasks in the distributed computing infrastructure of a cyber-physical system (CPS). Access control technologies that secure access across IoT infrastructures usually establish user-attribute/role-permission relationships automatically. How to efficiently determine whether an automatically authorized access control state satisfies the safety and availability requirements of a system is a huge challenge. Existing work often focuses on a single aspect, safety or availability, while ignoring the differences between permissions and the differences between users. In this paper, we first propose a fine-grained personalization policy that takes into account the specificity of permissions/users and describes the safety, availability, and efficiency requirements of an access control system in CPS. Second, we define the Personalization Policy Checking (PPC) problem, which asks whether a given personalization policy is satisfied in an access control state. We give the computational complexity of the PPC problem in different subcases and show that it is NP-complete in general. Third, we design a binary genetic search algorithm whose improvements mainly include continuously updating and selecting the best chromosomes in the population for iteration, and exploring and determining the optimal crossover and mutation probabilities, thereby improving the convergence efficiency of the algorithm. Finally, simulation results show the effectiveness of our proposed algorithm, which is especially suitable for cases where computational overhead matters even more than accuracy in a large-scale CPS system.
Received March 2, 2021, accepted April 2, 2021, date of publication April 12, 2021, date of current version April 19, 2021.

_Digital Object Identifier 10.1109/ACCESS.2021.3072635_

# An Improved Genetic Algorithm for Safety and Availability Checking in Cyber-Physical Systems

ZHENG WANG 1,2, YANAN JIN3, SHASHA YANG4, JIANMIN HAN 5, AND JIANFENG LU 5
1Faculty of Science and Engineering, The Open University of China, Beijing 100039, China
2Department of Computer Science, Zhejiang Radio & Television University Haiyan College, Jiaxing 314300, China
3School of Information Management and Statistics, Hubei University of Economics, Wuhan 430205, China
4Xingzhi College, Zhejiang Normal University, Jinhua 321004, China
5Department of Computer Science and Engineering, Zhejiang Normal University, Jinhua 321004, China

Corresponding author: Yanan Jin (jinyanan@yhcrt.com)

This work was supported in part by the National Natural Science Foundation of China under Grant 62072411 and Grant 61872323, in part by the Zhejiang Provincial Natural Science Foundation of China under Grant LR21F020001, in part by the Zhejiang Provincial Department of Education under Grant Y202043497, and in part by the Social Development Project of Zhejiang Provincial Public Technology Research under Grant 2017C33054.

**ABSTRACT** Cross-IoT infrastructure access frequently occurs when performing tasks in the distributed computing infrastructure of a cyber-physical system (CPS). Access control technologies that secure access across IoT infrastructures usually establish user-attribute/role-permission relationships automatically. How to efficiently determine whether an automatically authorized access control state satisfies the safety and availability requirements of a system is a huge challenge. Existing work often focuses on a single aspect, safety or availability, while ignoring the differences between permissions and the differences between users. In this paper, we first propose a fine-grained personalization policy that takes into account the specificity of permissions/users and describes the safety, availability, and efficiency requirements of an access control system in CPS. Second, we define the Personalization Policy Checking (PPC) problem, which asks whether a given personalization policy is satisfied in an access control state. We give the computational complexity of the PPC problem in different subcases and show that it is NP-complete in general. Third, we design a binary genetic search algorithm whose improvements mainly include continuously updating and selecting the best chromosomes in the population for iteration, and exploring and determining the optimal crossover and mutation probabilities, thereby improving the convergence efficiency of the algorithm. Finally, simulation results show the effectiveness of our proposed algorithm, which is especially suitable for cases where computational overhead matters even more than accuracy in a large-scale CPS system.

**INDEX TERMS** Access control, personalization policy, genetic algorithm, cyber-physical system.

**I. INTRODUCTION**

A cyber-physical system (CPS) is a controllable, credible, scalable, and heterogeneous distributed system of cyber-physical equipment. It acquires information from the IoT perception environment and processes that information through deeply integrated computing, communication, and control capabilities to complete a given task [1], [2].
CPS can bring huge economic benefits and is widely used in digital medical instruments and systems adopting automatic acquisition and control technology, distributed energy systems, aerospace and aircraft control, industrial control, etc. [3]–[5]. CPS has attracted great interest from industry and researchers.

The associate editor coordinating the review of this manuscript and approving it for publication was Po Yang.

In the CPS environment, if users in local nodes, or in nodes across the IoT infrastructure, access sensitive data without authorization, huge losses will occur [2], [6]. For cyber-physical systems, safety faces increasing challenges, because illegal access may also come from various networks and physical interfaces in an increasing number of non-local IoT infrastructures [7]–[9]. Due to the heterogeneity of different IoT infrastructures, traditional access control is less effective at protecting sensitive data across IoT infrastructures. In the field of distributed cyber-physical systems, research on access control is becoming more and more important for CPS designers and users. The autonomy, heterogeneity, and distribution of CPS nodes make access control focus mainly on multi-entity access control between different trust domains, while taking into account geographic location and resource ownership [10], [11]. The subjects and objects of access control are highly dynamic in the CPS environment, and there exists a huge number of terminals and users. Therefore, the authorization relationships between users and permissions cannot be specified in advance, and system authorization can only be performed automatically [11], [12]. However, whether such an automatically authorized access control state satisfies the safety and availability requirements of the access control system must be determined by corresponding access control policies. Therefore, the study of access control policies across IoT infrastructures in the CPS environment has both theoretical significance and application value.

Access control policies restrict the assignment of permissions to ensure the safety and availability of the access control system [13], [14]. However, there are still shortcomings in existing research on access control policies. (1) Access control policies often focus on security or availability and cannot effectively balance the two [11]. Multiple CPS nodes, or even cloud nodes, may be involved when performing tasks. These autonomous nodes have their own role-permission relationships and may not be able to accurately satisfy task requirements. The redundant permissions generated by ensuring availability bring security risks to the system; if security is strictly ensured, permissions may be insufficient, affecting the smooth execution of tasks. (2) Existing access control policies consider a large number of access permissions with negligible impact on task execution, which increases the scale of the problem and reduces the efficiency of access control decision-making. Ignoring the difference in natural importance between permissions and treating important permissions as ordinary permissions also brings unpredictable risks to the system. (3) Determining whether a certain access control policy is satisfied in a system state is the key issue for efficient access control decision-making. However, this problem is difficult to solve, especially for access control systems authorized across autonomous domains in CPS environments.
This is because the access control subjects and objects involved in the execution of a task may come from different CPS nodes with large numbers of users and permissions, and it is necessary to determine whether the access control state composed of these nodes satisfies goals and constraints that take weights into account. This greatly increases the computational complexity. For example, an access control policy may require a certain number of mutually disjoint user groups that can each perform the task independently, with the weight of the permissions owned by any single user below a certain threshold. It can be seen that existing access control policies struggle to effectively ensure safety and availability in CPS, and that improving the decision-making efficiency of access control over large amounts of data is intractable.

For this reason, this paper introduces the concept of weights for users and permissions, which expresses the importance of permissions/users in terms of the attributes of operations, the sensitivity of objects, and user attributes. Subsequently, we propose a refined personalization policy based on weights to improve the efficiency of access control decision-making while enhancing the safety and availability of the system. Then, we analyze the computational complexity of the problem of whether a given access control state satisfies the requirements of a personalization policy. To address this problem in the general case, we design an efficient solution based on the idea of the genetic algorithm. Generally, a given access control policy is the minimum requirement of the system. For example, suppose there are three groups of mutually disjoint users in an access control system, each group having all the permissions to perform a sensitive task, but the access control policy requires only two groups of mutually disjoint users to ensure the availability of the system. Then verifying that the policy is satisfied only requires finding two mutually disjoint sets of users. This solution is therefore more effective when the parameters required by the policy are smaller than the actual parameters of the system. Briefly, the main contributions of this paper can be summarized as follows:
- We propose a personalization policy that considers the different natural importance of permissions and users. This policy describes the safety, availability, and efficiency requirements of the access control system in a fine-grained way.
- We give a formal definition of the PPC problem, which determines whether an access control state in a CPS environment satisfies a given personalization policy, and present the computational complexity analysis of the PPC problem in different subcases. In particular, we show that this problem is NP-complete in the general case.
- We design a Binary Genetic Search (BGS) algorithm that puts the efficiency of solving PPC problems first. This algorithm improves the selection operation and the crossover and mutation probabilities of the genetic algorithm.
- Simulation results further demonstrate the effectiveness of the BGS algorithm, which is especially suitable for cases where computational overhead matters even more than accuracy in a large-scale CPS system.

The rest of this paper is organized as follows. In Section II, we start with an overview of previous literature. Section III presents the formal definition of the personalization policy and the PPC problem, and studies the computational complexity of its various subcases.
We present an algorithm for the PPC problem in Section IV. In Section V, we implement the proposed algorithm. We conclude this paper in Section VI.

**II. RELATED WORK**

The unique technical requirements and constraints of CPS mean that existing research on automatic authorization in access control focuses on methods for discovering attribute-permission associations in attribute-based access control (ABAC), and on providing flexible control and management through the user-role and role-permission mapping mechanisms of role-based access control (RBAC) [11].

ABAC regards attributes as the key element of access control, which effectively solves the problem of large-scale, dynamic, and private fine-grained access control in CPS. ABAC first establishes the attribute set and describes the access control policy, and then responds to access control requests and updates the access control policy during execution [12]. RBAC guarantees flexible control and management of objects through a dual authorization mapping mechanism, and provides inter-domain role mapping and constraint verification methods for cross-entity access control in CPS [15], [16]. When constructing attribute sets and permission mappings, top-down or bottom-up role-engineering or attribute-engineering methods are usually used to mine roles or attributes and then authorize users. However, the automatically authorized access control state may not necessarily satisfy the safety and availability requirements of the access control system.

Access control policies, which restrict permission assignment to ensure safety and availability in access control systems, have been a major research topic for several decades [17]. The research on access control policies originated from the safety analysis of access control systems, which determines whether an access control system can reach a state in which an unsafe access is allowed [18]. The earliest work focused on the safety of the access control system; its purpose is to ensure safety when performing tasks and to prevent abuse of authority. Separation of duty (SOD) policies are typical policies used to ensure safety [19]: they prevent any set of users smaller than a certain threshold from being fully authorized to perform a sensitive task [15], [16], [20]. Excessive pursuit of system safety may lead to unavailability of the system; for example, an access control state that satisfies strict safety requirements may not contain the full set of permissions needed to perform a task. Therefore, subsequent research also focuses on the availability of the access control system. A resiliency policy requires that, with any s users absent, there still exist d mutually disjoint sets of users, each of size at most t, such that each set has all permissions in P to perform the task, thereby ensuring the availability of the system [21], [22].

The problem of determining whether a certain access control state satisfies a given access control policy in the CPS environment is difficult to solve. For example, the problem of checking whether an access control state satisfies a resiliency policy is intractable (NP-hard) in the general case, and is in the Polynomial Hierarchy (in coNP^NP) [21]. Although in this paper we have comprehensively optimized the description of the policy to make it easier to solve while enhancing its safety and availability effects, the policy proposed here takes into account the weights of users and permissions, which obviously increases the difficulty of analyzing the problem.
The policy checking problem is difficult to solve in the general case. Existing work on access control policy checking reduces the system scale through preprocessing and then solves the problem with a satisfiability (SAT) solver [22]. However, due to the massive data scale of the CPS environment, implementing this scheme requires great system overhead. Genetic algorithms have proved effective for many problems, especially NP-complete problems [23]–[29], because for this type of problem the fitness value of the optimal solution can be computed. The optimization goal of a genetic algorithm is to make the solution set converge to the optimal solution with higher efficiency. For example, [24] proposed a multi-granularity genetic algorithm that adopts a multi-granularity space strategy based on a random tree, which accelerates the searching speed of the algorithm in the multi-granular space. In [25], optimized crossover and mutation operations were devised to make the algorithm converge more quickly when solving the multi-processor scheduling problem in cloud data centers. Aiming at the policy checking problem, this paper optimizes the genetic algorithm in many aspects to achieve the desired solution quality.

In summary, existing access control policies describe the safety or availability of the access control system, but do not strike a good balance between these two aspects and are difficult to apply in a distributed CPS environment. This paper proposes an access control policy applicable in the CPS environment, and defines and analyzes the computational complexity of the weighted policy checking problem. The analysis of the genetic algorithm shows that it can efficiently obtain an approximate solution to the problem; therefore, this paper improves the algorithm to obtain better efficiency and accuracy.

**III. PROBLEM FORMULATION**

The individuality of each permission/user means that it has a different nature and importance. This is a key aspect that should be introduced into access control policies for the CPS environment, but it has been ignored. In this section, we propose a personalization policy that takes into account the specificity of permissions/users and is used to ensure the safety, availability, and efficiency of the access control system.

_A. PERSONALIZATION POLICY_

The personalization policy considers the particularity of permissions, which differ in nature and importance. In a financial institution's access control system, for example, the permission write asset data is more important than the permission read asset data. The weight is a value attached to a permission/user representing its importance, and we introduce it into the personalization policy. Here, we present an example to motivate the new notation for permission/user weights used to optimize the access control policy. Let us assume that the permission set is {p1, p2, p3, p4}; permissions p1 and p2 are assigned to u1, permissions p3 and p4 are assigned to u2, permissions p1 and p3 are assigned to u3, and permissions p2 and p4 are assigned to u4. It is obvious that both {u1, u2} and {u3, u4} are solutions, and each solution has all the permissions needed to perform the task. However, it may not make sense to choose {u1, u2} if permissions p1 and p2 are so important that the weight of u1 exceeds
This is because that it is easier to put the system at unpredictable risk if a user has too important permissions. Furthermore, certain permissions that may be more critical for system can only be owned by special users, other users cannot be authorized in the process of performing sensitive tasks. Safety is an important factor that we consider, and availability also needs to be considered because it is related to the smooth execution of the task. For example, in the previous example, there are two mutually disjoint user sets to perform sensitive task, this means that even if any one of the users is absent, the task can still be executed. There are a lot of resources in the CPS environment. If the access permissions of these resources are all taken into account in the access control system, it will bring great system overhead and affect the efficiency of access control decisionmaking. Therefore, in order to enhance the availability of the system, we do not consider non-essential permissions into the access control system. We use weights to indicate the importance of these resource access permissions to the system. We set a threshold according to the importance of the task, and do not add permissions with a weight less than a certain threshold to the access control policy. This is because the abuse of these permissions with lower weights has a tolerable impact on the smooth execution of tasks, and the deficiencies of these permissions can be resolved through temporary authorization. The weight of permissions/users is a value between 0-1 that weighs the importance of permissions/users from the attributes of operations, the sensitivity of objects, and user attributes [30]. In this section, formal definition of the weight of permission and methods of calculating them is not discussion. We assume that the weight of permissions is determined by the system and the weight of users is the sum of the weighted user’s permissions. The personalization policy is defined as follows. _Definition 1 (Personalization Policy): Given a set U of_ _users, a set P of permissions, the personalization policy sat-_ _isfy the following constraints:_ - Safety constraint: A safety constraints is denoted as _PP⟨ω, UF_ (pf )⟩, where ω ≥ 0. pf is very important to _the system and can only be assigned to users in the user_ _set UF_ _.We say that PP⟨U_ _, P, ω, UF_ (pf )⟩ _is satisfied if_ _and only if the following conditions hold:_ - ∃pf ∈ _P(UF_ ) and pf /∈ _P(Un) where Un = U −_ _UF_ _, P(UF_ ) denotes all permissions assigned to the _users set UF_ _._ - ∃ui ∈ _UP and UP =_ [�]W (uj)<ω _[u][j][, where u][j][ ∈]_ _[U]_ _and W_ (uj) denotes the weight of the uj. - Efficiency **_constraint:_** _A_ _efficiency_ _constraints_ _is_ _denoted as PP⟨ω0⟩, where 1 ≥_ _ω0 ≥_ 0, We say that _PP⟨U_ _, P, ω0⟩_ _is satisfied if and only if the following_ _conditions hold:_ - ∃pi ∈ _PP and PP =_ [�](W (pj)>ω0) _[p][j][ −]_ _[P][F]_ _[, where]_ _pj_ _P and PF_ _pf denotes the permissions set_ ∈ = [�] _of all pf ._ - Available constraint: A available constraints is denoted _as PP⟨U_ _, P, κ⟩, where 0 ≤_ _κ ≤_ _n are positive integer._ _We say that PP⟨κ⟩_ _is satisfied if and only if the following_ _conditions hold:_ - ∃{P(Ui), P(Uj)}, and P(Ui) = P(Uj) = PP, Ui ∩ _Uj = φ, and Ui, Uj ⊆_ _UP. where 0 ≥_ _i, j ≥_ _κ,_ _i_ _j._ ̸= In order to distinguish different types of permissions and user groups. 
In order to distinguish the different types of permissions and user groups, we define PP, UP, PF, and UF as pivotal permissions, pivotal users, fixed authorized permissions, and fixed authorized users, respectively, as shown in Definition 1. We define PN = P/PF as the non-fixed authorized permissions, permissions with a weight less than ω0 as general permissions, denoted PG, and users with a weight greater than ω as dangerous users, denoted UD.

To specify a subcase of the personalization policy, we combine the three constraints and write the policy followed by its list of constraints, for instance PP⟨P, U, κ, ω0, ω, {Uf1(pi), ..., Ufn(pj)}⟩. An access control state satisfies such a personalization policy if and only if the fixed authorized permissions {pi, ..., pj} belong only to the fixed authorized users {Uf1, ..., Ufn}, respectively; there exist at least κ mutually disjoint sets of users such that each set has all pivotal permissions; and the total weight of the permissions authorized to each user is no more than ω.

Suppose we are given the personalization policy PP⟨P, U, κ, ω0, ω, {Mike(Ratify)}⟩. This policy requires that the fixed authorized permission ratify be assigned only to the user Mike. If κ = 2 and ω0 = 0, the policy requires that all permissions except ratify be assigned to at least two mutually disjoint sets of users. If κ = 2 and ω0 = 0.35, the excluded permissions are not only ratify but also all permissions with a weight less than 0.35. If ω = 1.2, the weight of each user in each of the mutually disjoint user groups must be no more than 1.2. If we set ω = ∞, the weight of users is unrestricted.

_Example 1: Given the access control state shown in Figure 1, all permissions in a fund publishing task are P = {input, issue, view, ratify}, weighted 0.7, 0.5, 0.3, and 0.9, respectively. All users are U = {Alice, Bob, Ed, Mike, Harry, Jack}._

As shown in Figure 1, the personalization policy PP⟨P, U, 2, 0, 1.2, {Mike(Ratify)}⟩ is satisfied, because U1 = {Alice, Ed} and U2 = {Bob, Jack} both have all pivotal permissions and the weight of each user is no more than 1.2. However, PP⟨P, U, 2, 0, 1, {Mike(Ratify)}⟩ is not satisfied, because the weight of a user in U2 exceeds 1 and no alternative pair of mutually disjoint user sets keeps every user's weight within 1. PP⟨P, U, 3, 0, ∞, {Mike(Ratify)}⟩ is not satisfied, because this access control state has only two mutually disjoint sets of pivotal users holding all pivotal permissions. But PP⟨P, U, 3, 0.35, ∞, {Mike(Ratify)}⟩ is satisfied, because the state has three mutually disjoint sets of pivotal users holding the pivotal permissions input and issue; the weight of the permission view is less than 0.35, meaning it is not important for the task, so it is not considered by the access control system.
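Continuing the sketch, the toy state below is a hypothetical stand-in for Figure 1, which is not reproduced in this text: the weights follow Example 1, while the user-permission assignment is our own assumption chosen to be consistent with the four claims above. The exhaustive search is exponential and only meant for such toy instances.

```python
from itertools import combinations

# Weights from Example 1; the assignment itself is assumed (Figure 1 is
# not reproduced here) and merely chosen to match the claims of Example 1.
weight = {"input": 0.7, "issue": 0.5, "view": 0.3, "ratify": 0.9}
state = {
    "Alice": {"input", "view"},   # weight 1.0
    "Ed":    {"issue"},           # weight 0.5
    "Bob":   {"input", "issue"},  # weight 1.2
    "Jack":  {"view"},            # weight 0.3
    "Harry": {"input", "issue"},  # weight 1.2
    "Mike":  {"ratify"},          # weight 0.9, the fixed authorized user
}

def k_disjoint_covers(candidates, pivotal, k):
    """Exhaustive search for k mutually disjoint user sets, each jointly
    holding every pivotal permission (exponential; toy instances only)."""
    if k == 0:
        return True
    cand = sorted(candidates)
    for r in range(1, len(cand) + 1):
        for group in combinations(cand, r):
            held = set().union(*(state[u] for u in group))
            if pivotal <= held and \
               k_disjoint_covers(candidates - set(group), pivotal, k - 1):
                return True
    return False

def check(kappa, omega0, omega):
    pivotal = {p for p, w in weight.items() if w > omega0} - {"ratify"}
    # Example 1 reads "no more than omega", hence <= rather than strict <.
    users = {u for u in state
             if sum(weight[p] for p in state[u]) <= omega}
    return k_disjoint_covers(users, pivotal, kappa)

print(check(2, 0, 1.2))             # True:  {Alice, Ed} and {Bob, Jack}
print(check(2, 0, 1.0))             # False: within weight 1, only Alice holds input
print(check(3, 0, float("inf")))    # False: only two users hold view
print(check(3, 0.35, float("inf"))) # True:  {Alice, Ed}, {Bob}, {Harry}
```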
Obviously, if the parameters ω is given, then the number of users in each sets is no less than ⌈W (PP)/ω⌉, where W (PP) is weight of all pivotal permissions. Such as in the Example 1, if given ω = 0.8 then ⌈W (PP)/ω⌉= 2, it means the number of users in each sets is no less than 2. _B. PERSONALIZATION POLICY CHECKING PROBLEM_ In access control system, U represents all users and P represents all permissions, assignment relationship between the user and the permission is represented as UP _U_ _P. How_ ⊆ × to efficiently determine whether the existing access control state UP satisfies a given access control policy is the key to the access control decision. For this reason, we now give a formal definition of the problem and analyze its computational complexity. _Definition 2 (Personalization_ _Policy_ _Checking_ _(PPC)_ _Problem): Given a personalization policy PP and an access_ _control state UP, UP satisfies PP is denoted as the satPP(UP)._ _Determining whether satPP(UP) is true is called Personaliza-_ _tion Policy Checking Problem._ In special cases, the parameters of personalization policy PP are not always fully consider. For example, a personalization policy in the subcase PPC⟨κ = 1⟩ has the form PP⟨P, U _, 1, ω0, ω, {Uf 1(pi), . . ., Ufn(pj)}⟩_ which means determines whether there exists a set of users have all pivotal permissions in P and weight of each user no more than ω. The subcase PPC⟨ω = ∞⟩ determines whether exist _κ sets of users and each set has all pivotal permissions in P._ The computational complexity results for PPC problem and it’s various subcases are given as following theorem. **FIGURE 2. Computational complexity results for PPC problem in various** subcases. _Theorem 1: The computational complexity of PPC prob-_ _lem and its subcases is shown in Figure 2._ We study the computational complexity of PPC problem in various subcases. The following lemma shows that the _PPC⟨κ = 1⟩, PPC⟨ω = ∞⟩, PPC⟨_ ⟩ are NP-complete. _Lemma 1: PPC⟨κ = 1⟩_ _is NP-complete_ _Proof: We prove that the PPC⟨κ = 1⟩_ is an NP problem: given a solution of the PPC⟨κ = 1⟩ problem, it can be verified in polynomial time whether the solution is correct. Next, we convert the NP-complete weighted set covering decision problem [31] to PPC⟨κ = 1⟩ problem in Polynomial time, and show PPC⟨κ = 1⟩ is NP-complete. In the weighted set covering problem, given a finite set S, a family F = {S1, . . ., Sm} of subsets of S, and a budget B, the goal is to determine whether the weight of each Si is less than B, where the union of Si is S. Given an instance of the weighted set cover problem, we now construct an instance of PPC⟨κ = 1⟩ in the following way: We create permissions p1, . . ., pm for each element in S, let ω = B, m is the cardinality of the set S. we create PP⟨P, U _, 1, ω0, ω, {Uf 1(pi), . . ., Ufn(pj)}⟩_ and create an access control state: For each different subset _Si(1 ≤_ _i ≤_ _m) in F, create a user ui, so that all per-_ missions and their weight values in Si are assigned to ui. Then whether PP⟨P, U _, 1, ω0, ω, {Uf 1(pi), . . ., Ufn(pj)}⟩_ is true if and only if there is a union of subsets in F that covers S, and the weight of any set in the subset is less than B. ----- Therefore, the PPC problem when κ = 1 is NP-complete problem. _Lemma 2: PPC⟨ω = ∞⟩_ _is NP- complete_ _Proof:_ We prove that the PPC⟨ω = ∞⟩ is an NP problem: given a solution of the PPC⟨ω = ∞⟩ problem, it can be verified in polynomial time whether the solution is correct. 
Next, we reduce the NP-complete DOMATIC NUMBER problem [32] to PPC⟨ω = ∞⟩. Given a graph G(V, E), the DOMATIC NUMBER problem asks whether V can be partitioned into κ mutually disjoint sets V1, V2, ..., Vκ such that each Vi is a dominating set for G. V′ is a dominating set for G(V, E) if, for every node u in V − V′, there is a node v in V′ such that (u, v) ∈ E. An instance of PPC⟨ω = ∞⟩ asks whether an access control state UP satisfies a policy PP⟨P, U, κ, ω0, ∞, {Uf1(pi), ..., Ufn(pj)}⟩. Given a graph G = (V, E), we construct an instance of PPC⟨ω = ∞⟩ as follows: we construct an access control state UP with n users u1, u2, ..., un for the n nodes in G and n permissions p1, p2, ..., pn, where v(ui) denotes the node corresponding to user ui. In UP, user ui is authorized for permission pj if and only if either i = j or (v(ui), v(uj)) ∈ E. Let P denote the set {p1, p2, ..., pn}. A dominating set in G corresponds to a set of users that together have all permissions in P. UP satisfies PP⟨P, U, κ, ω0, ∞, {Uf1(pi), ..., Ufn(pj)}⟩ if and only if V contains κ mutually disjoint dominating sets. Therefore, the PPC problem with ω = ∞ is NP-complete.

_Lemma 3: PPC⟨⟩ is NP-complete._
_Proof:_ An instance consists of an access control state UP and a policy PP⟨P, U, κ, ω0, ω, {Uf1(pi), ..., Ufn(pj)}⟩. UP satisfies the policy if and only if there exist at least κ mutually disjoint sets of users such that each set has all pivotal permissions and the total weight of the permissions authorized to each user is no more than ω. If these κ sets are given, they can be verified in polynomial time; therefore, PPC⟨⟩ is in NP. Since a subcase of PPC⟨⟩ is NP-complete, PPC⟨⟩ is NP-complete.
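The NP-membership halves of Lemmas 1–3 all rest on the same observation: a claimed witness of κ candidate user sets can be checked in time polynomial in the numbers of users and permissions. A minimal sketch of such a verifier follows (illustrative Python with our own names, not code from the paper).

```python
def verify_witness(state, weight, groups, pivotal, omega):
    """Check that `groups` (a list of user sets) witnesses the policy:
    the sets are mutually disjoint, each jointly holds all pivotal
    permissions, and every chosen user's weight is within omega.
    Runs in polynomial time, which is the 'in NP' part of Lemmas 1-3."""
    seen = set()
    for g in groups:
        if seen & set(g):                 # disjointness violated
            return False
        seen |= set(g)
        held = set()
        for u in g:
            if sum(weight[p] for p in state[u]) > omega:
                return False              # per-user weight bound violated
            held |= state[u]
        if not pivotal <= held:           # coverage violated
            return False
    return True
```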
**IV. THE BINARY GENETIC SEARCH ALGORITHM FOR PPC**

The fact that the PPC problem is intractable, as shown in Theorem 1, means that there exist difficult problem instances that take exponential time in the worst case. Therefore, we propose a Binary Genetic Search (BGS) algorithm to approximately solve PPC problems, inspired by the idea of the genetic algorithm. First, the algorithm performs preprocessing to reduce the system scale. Second, it executes the optimized genetic algorithm and the search algorithm within the T seconds of system tolerance time. During this time, if the number of mutually disjoint user sets found in the first half of the population satisfies the parameter κ of the policy, the algorithm stops and outputs true. If not, it saves the mutually disjoint user groups found so far, randomly generates new chromosomes, and continues to iterate until κ groups are found. If the running time exceeds the system tolerance time of T seconds, it remains uncertain whether the policy is satisfied, and the output is false. This algorithm has a time complexity of O(lmn), where l, m, and n denote the number of actually performed iterations, the size of the population, and the number of all available users, respectively. The main notations used in this paper are shown in Table 1. Algorithm 1 shows the process of BGS for the PPC problem.

**TABLE 1. Main notations used in this algorithm.**

**Algorithm 1 BGS for PPC**
Data: UP[m][n], W(pi), PP, Pm, Pc, T
Result: O[m][n], Str
1 Preprocessing();
2 while runtime < T seconds and κ < max do
3   OGA();
4   Search();
5   if κ ≥ max then
6     Str = True;
7     exit(0);
8   end
9 end
10 if κ < max and runtime ≥ T seconds then
11   Str = False;
12 end
13 return Str;

This algorithm is optimized based on the idea of the genetic algorithm and has the characteristics of rapid convergence and evolution toward the optimal solution. At the same time, because the PPC problem is NP-complete, it can be determined in polynomial time whether an obtained solution is optimal. The algorithm is divided into three parts, as shown in Algorithm 1: the first part is preprocessing, as shown in Algorithm 2; the second part performs the optimized genetic algorithm, as shown in Algorithm 3; and the third part finds mutually disjoint user groups, as shown in Algorithm 4.
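A compact Python rendering of the Algorithm 1 driver may help fix the control flow. It is a sketch, not the authors' C implementation; the three phases are passed in as callables standing for Algorithms 2–4.

```python
import time

def bgs(preprocess, oga, search, kappa, budget_seconds):
    """Driver mirroring Algorithm 1: prune and encode, then alternate the
    optimized genetic algorithm (OGA) and the disjoint-group search until
    kappa groups are found or the tolerance time T runs out."""
    population = preprocess()        # Algorithm 2: static pruning + encoding
    disjoint = []                    # the solution set O of disjoint groups
    deadline = time.monotonic() + budget_seconds
    while time.monotonic() < deadline:
        elite = oga(population)      # Algorithm 3: evolve, return elite half
        search(elite, disjoint)      # Algorithm 4: grow the disjoint groups
        if len(disjoint) >= kappa:
            return True              # the policy is shown to be satisfied
    return False                     # undetermined within tolerance time T
```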
Next, we optimize the genetic algorithm to discover the user group containing all the pivotal permissions. The genetic algorithm coding rules are as follows: Given an access control state UPPP, UP represents a set of m pivotal users, and PP represents a set of n pivotal permissions. We use m-bit chromosomes to represent m users. When the i-th chromosome is 1, it means that user ui is selected. _B. OPTIMIZATION GENETIC ALGORITHM_ In this section, we introduce the optimized genetic algorithm (OGA). The core idea of the OGA function is to carry out genetic iterations according to the optimal crossover and mutation probabilities determined by experiments, updated optimal half of the population after Each iteration completes, and continue iterating with this population. Until the fitness of the first half of the population is the same and it is equal to the maximum value of fitness, and the value of relative fitness is also in a reasonable range. This means that the user set selected by each chromosome in the first half of the population covers all pivotal permissions. The execution steps of the optimized genetic algorithm (OGA) function are as follows, and Algorithm 3 gives the detailed execution process. step i Select a population of m points x1, . . ., xm to represent the users set at random. step ii Compute fitness: Compute the fitness and relative fitness of the role set using the evaluation function respectively. step iii Replacement: Sort the m points according to the fitness value from large to small, sort the points with the same fitness according to the relative fitness, and then replace the latter half with the front half. step iv Mutate: For each point xi that m/2 < i ≤ _m in the_ population and for each bit in xi, with probability pm, alter its value. step v Crossover: For each yj in the pair points xi and x(i+1) from the xm/2, . . ., xm, with probability pc, exchange _xi.yj with x(i+1).yj._ step vi Stop: If the front half of the population has the same fitness and equal to the maximum fitness, at the same time, the value of relative fitness is also in a reasonable range, stop. _C. FINDING MUTUALLY DISJOINT USER GROUPS_ This section finds whether there are κ groups of mutually disjoint user groups in the solution E[i] of the optimized ----- **Algorithm 3 OGA Function Algorithm for PPC** Problem **Data: UpPp[m][n], W** (pi), PP, Pm, Pc **Result: E[i]** **1 Rand(E[m]);** **2 while E[0].fit ̸= E[m/2 −** 1].fit ̸= Ps and _E[m/2 −_ 1].relfit ≥ _W_ (Pi) ∗ 2 do **3** **foreach i ∈** [0, n) do **4** _E[i].fit ←_ _Ps −_ _wep −_ 100wmp ; **5** _E[i].relfit ←_ _W_ (E[i]) ; **6** **end** **7** Sort(E[m].fit,E[m].relfit) ; **8** **foreach i ≤** _m/2 do_ **9** _E[m/2 + i] ←_ _E[i];_ **10** **end** **11** **foreach i > m/2 do** **12** **foreach j ∈** [0, n) do **13** **if rand() < pm then** **14** _E[i].U_ [j] ← (E[i].¯U [j]); **15** **end** **16** **end** **17** **foreach i%2** 0 do == **18** **foreach j ∈** [0, n) do **19** **if rand() < pc then** **20** _E[i].U_ [j] ↔ _E[i + 1].U_ [j]; **21** **end** **22** **end** **23** **end** **24** **end** **25 end** **26 return:E[i];** genetic algorithm. This paper proposes two methods. The first one is to encode the obtained solution E[i] and use the optimized genetic algorithm to solve it again. The second method is to find whether there are κ groups mutually disjoint user groups. This method is an approximate solution method. 
_C. FINDING MUTUALLY DISJOINT USER GROUPS_

This section finds whether there are κ mutually disjoint user groups among the solutions E[i] of the optimized genetic algorithm. This paper proposes two methods. The first is to encode the obtained solution E[i] and use the optimized genetic algorithm to solve it again. The second is to search directly for κ mutually disjoint user groups; this is an approximate method. The process is as follows: if an obtained solution E[i] is a subset of a solution already in the mutually disjoint solution set O[k], it replaces that solution; if it does not intersect any solution in O[k], it is added to the solution set. For the simulation experiments, we use the second method, which is shown in Algorithm 4.

**Algorithm 4 Search Function Algorithm for PPC Problem**
Data: E[i]
Result: k
1 foreach i < m/2 and k ≤ max do
2   if E[i] ⊆ O[k] then
3     O[k] ← E[i];
4   end
5   if E[i] ∩ O[k] == ∅ then
6     O[++k] ← E[i];
7   end
8 end
9 return k;

The BGS algorithm is optimized based on the idea of the genetic algorithm, with the following improvements. First, during execution, the optimal half of the population obtained by evolution is always retained, and iteration continues with this population; evolution based on the best solutions has a high probability of producing better solutions. Second, the mutation and crossover probabilities are determined through experiments; in the experiments, the mutation probability is an integer multiple of the reciprocal of the population size. Third, the crossover operation selects chromosomes with the closest fitness. These improvements greatly improve the efficiency with which the algorithm converges to the optimal solution.
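Algorithm 4's bookkeeping is simple enough to restate directly in Python (again an illustrative sketch with our own names):

```python
def search(elite_groups, disjoint):
    """Sketch of Algorithm 4: grow the list `disjoint` of mutually disjoint
    user groups. A new solution replaces an existing one it is a subset of
    (a tighter cover); if it intersects none of them, it is appended."""
    for group in elite_groups:            # each group is a set of users
        replaced = False
        for i, kept in enumerate(disjoint):
            if group <= kept:
                disjoint[i] = group       # subset: replace the larger group
                replaced = True
                break
        if not replaced and all(not (group & kept) for kept in disjoint):
            disjoint.append(group)        # disjoint from all: new group
    return len(disjoint)
```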
**V. IMPLEMENTATION AND EVALUATION**

In order to verify the effectiveness of the BGS algorithm, we have implemented it and performed several experiments using randomly generated instances. The implementation of our algorithm was written in C. Experiments were carried out on a PC with an Intel(R) Core(TM) i5-8500T CPU running at 2.11 GHz and with 4 GB of memory, running Windows 10. In order to get closer to a real access control environment, we add two interference permissions that are not related to the task. It is assumed that the fixed authorization permissions satisfy the policy requirements and are pruned when generating instances. For each instance, 10 randomly generated test cases are run; the average times of the test results are used to generate the runtime graphs, and the number of satisfaction over the ten instances is used to generate the other graph.

The evaluation function is used to evaluate the solutions. The fitness function is defined as Ps − wep − 100wmp; for more details, please refer to [23]. The relative fitness function is defined as follows.

_Definition 3 (Evaluation Function of Relative Fitness): Relative fitness of E[i] is defined as:_

E[i].relfit = Σ_{j=0}^{Usize} WPaf(E[i].U[j]), (if E[i].U[j] = 1)

_where WPaf(E[i].U[j]) represents the weight of the permissions only owned by E[i].U[j]._

_A. EFFECTIVENESS OF MUTATION AND CROSSOVER_

**FIGURE 3. The runtime and number of satisfaction for different probability of mutation and crossover.**

Figure 3 shows the average CPU times and number of satisfaction under different probabilities of crossover and mutation for two test cases: (1) Usize = 60, permissions = 12 and κ = 3; (2) Usize = 105, permissions = 7 and κ = 5, with population size m = 280 and system tolerance time t = 30. The x-axis denotes the probability of mutation, and we fix its value as 1/Usize, . . ., 8/Usize respectively. It can be clearly seen from Figure 3(a) and (c) that the average CPU time is lowest when we choose the parameter Pm = 3/Usize with fixed Pc. This means that it is easier to obtain a solution quickly by simultaneously mutating 3 bits in a chromosome. The average CPU time increases with larger Pc for fixed Pm, because a smaller Pc saves CPU time. As shown in Figure 3(b)(d), the number of satisfaction is largest when we choose Pm = 3/Usize or Pm = 5/Usize with fixed Pc, and when Pc is close to 0.5 with fixed Pm. Based on these observations, we choose the parameters Pm = 3/Usize and Pc = 0.5 for the remaining experiments.

Figure 3(c)(d) shows longer CPU times and a higher number of satisfaction than Figure 3(a)(b). This is because when the ratio of users to permissions is large, it is easier to obtain mutually disjoint user groups, and the CPU time consumed is reduced. Figure 3(c)(d) is also clearer than Figure 3(a)(b) in the curve trends of CPU time and the number of satisfaction. This is because if the ratio of users to permissions is small, the number of mutually disjoint user groups in the system is also small. In this case, it is difficult for the system to obtain a solution that satisfies the policy, and there may even be no solution that satisfies the requirements of the policy. Therefore, if the ratio of users to permissions is small, the running times and the numbers of satisfaction of different random instances vary widely.

_B. RUNTIME AND NUMBER OF SATISFACTION FOR THE BGS ALGORITHM_

**FIGURE 4. The runtime and number of satisfaction for different users and the parameters κ of policy.**

Figure 4 shows the results of running the experiments for four test cases: (a) Usize : Psize = 5 : 1; (b) Usize : Psize = 10 : 1; (c) Usize : Psize = 15 : 1; (d) Usize : Psize = 20 : 1. The runtime and number of satisfaction depend on the total number of users Usize, the pivotal authorized permissions Psize, and the parameter κ of the personalization policy.

In Figure 4, as the parameter κ increases for fixed Usize, the number of satisfaction decreases and the overall CPU time increases. This is because the larger the parameters required by the policy, the more difficult they are for the system to satisfy. As the total number of users Usize increases for a fixed parameter κ, the number of satisfaction decreases and the overall CPU time increases; this change is not obvious when the value of κ is small, but it becomes obvious when κ is large, as shown in Figure 4(g)(h). This is because as the policy parameter κ increases, it is more difficult for the system to obtain a solution that satisfies the policy, and the running time of some instances may reach the system tolerance time. The number of satisfaction also increases with larger Usize : Psize for fixed Usize and κ. The reason is that the larger the value of Usize : Psize, the more mutually disjoint sets of users there are. In Figure 4(f)(h), as Usize increases, the number of satisfaction decreases when the parameter κ is more than 3. The reason is that the BGS algorithm stops when the CPU time exceeds the system tolerance time of 30 seconds. Therefore, if we want to obtain a better number of satisfaction, we can increase the tolerance time of the system.

Consequently, for the case where the system tolerance time is more important, we can make the BGS algorithm obtain the best possible solution within the system tolerance time. The BGS algorithm is able to solve the PPC problem even in a larger-scale system.
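To make the evaluation functions used above concrete, here is a small Python sketch of the fitness from [23] and the relative fitness from Definition 3. The data layout (a user-to-permissions map and a permission-to-weight map) and the reading of "permissions only owned by E[i].U[j]" as permissions covered by exactly one selected user are our assumptions:

```python
from collections import Counter

def fitness(selected, user_perms, pivotal, weight):
    """Fitness = Ps - w_ep - 100 * w_mp (see [23]).

    selected   -- set of selected users
    user_perms -- dict: user -> set of permissions the user holds
    pivotal    -- set of pivotal permissions required by the task
    weight     -- dict: permission -> weight W(p)
    """
    covered = set()
    for u in selected:
        covered |= user_perms[u]
    Ps = sum(weight[p] for p in pivotal)              # total pivotal weight
    w_mp = sum(weight[p] for p in pivotal - covered)  # missing pivotal perms
    w_ep = sum(weight[p] for p in covered - pivotal)  # extra/interference perms
    return Ps - w_ep - 100 * w_mp

def rel_fitness(selected, user_perms, weight):
    """Relative fitness (Definition 3): total weight of permissions
    held by exactly one of the selected users."""
    counts = Counter(p for u in selected for p in user_perms[u])
    return sum(weight[p] for p, c in counts.items() if c == 1)
```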
**VI. CONCLUSION**

In this paper, we have proposed a personalization policy that reflects the particularity of permissions and users and describes the safety, availability and efficiency requirements of an access control system in a fine-grained way. We have introduced the definition of the PPC problem and studied the computational complexity of its various subcases. We have shown that most instances of the PPC problem are intractable. In particular, we have proposed a BGS algorithm to solve PPC problems. This algorithm greatly improves the efficiency of converging to the optimal solution of the PPC problem within the tolerance time of the system.

**REFERENCES**

[1] M. Wankhade and S. V. Kottur, ‘‘Security facets of cyber physical system,’’ in Proc. 3rd Int. Conf. Smart Syst. Inventive Technol. (ICSSIT), Tirunelveli, India, Aug. 2020, pp. 359–363.
[2] Y. Javed, M. Felemban, T. Shawly, J. Kobes, and A. Ghafoor, ‘‘A partition-driven integrated security architecture for cyberphysical systems,’’ _Computer_, vol. 53, no. 3, pp. 47–56, Mar. 2020.
[3] T. Wang, D. Zhao, S. Cai, W. Jia, and A. Liu, ‘‘Bidirectional prediction-based underwater data collection protocol for end-edge-cloud orchestrated system,’’ _IEEE Trans. Ind. Informat._, vol. 16, no. 7, pp. 4791–4799, Jul. 2020.
[4] T. Wang, H. Luo, X. Zeng, Z. Yu, A. Liu, and A. K. Sangaiah, ‘‘Mobility based trust evaluation for heterogeneous electric vehicles network in smart cities,’’ _IEEE Trans. Intell. Transp. Syst._, vol. 22, no. 3, pp. 1797–1806, Mar. 2021.
[5] X. Liu, M. S. Obaidat, C. Lin, T. Wang, and A. Liu, ‘‘Movement-based solutions to energy limitation in wireless sensor networks: State of the art and future trends,’’ _IEEE Netw._, vol. 35, no. 2, pp. 188–193, Mar. 2021.
[6] J. Lu, C. Tang, X. Li, and Q. Wu, ‘‘Designing socially-optimal rating protocols for crowdsourcing contest dilemma,’’ _IEEE Trans. Inf. Forensics Security_, vol. 12, no. 6, pp. 1330–1344, Jun. 2017.
[7] X. Chen, C. Li, D. Wang, S. Wen, J. Zhang, S. Nepal, Y. Xiang, and K. Ren, ‘‘Android HIV: A study of repackaging malware for evading machine-learning detection,’’ _IEEE Trans. Inf. Forensics Security_, vol. 15, pp. 987–1001, 2019.
[8] T. Wang, Y. Lu, J. Wang, H.-N. Dai, X. Zheng, and W. Jia, ‘‘EIHDP: Edge-intelligent hierarchical dynamic pricing based on cloud-edge-client collaboration for IoT systems,’’ _IEEE Trans. Comput._, early access, Feb. 19, 2021, doi: 10.1109/TC.2021.3060484.
[9] J. Lu, Y. Xin, Z. Zhang, X. Liu, and K. Li, ‘‘Game-theoretic design of optimal two-sided rating protocols for service exchange dilemma in crowdsourcing,’’ _IEEE Trans. Inf. Forensics Security_, vol. 13, no. 11, pp. 2801–2815, Nov. 2018.
[10] J. Giraldo, E. Sarkar, A. A. Cardenas, M. Maniatakos, and M. Kantarcioglu, ‘‘Security and privacy in cyber-physical systems: A survey of surveys,’’ _IEEE Des. Test_, vol. 34, no. 4, pp. 7–17, Aug. 2017.
[11] J. H. Huh, R. B. Bobba, T. Markham, D. M. Nicol, J. Hull, A. Chernoguzov, H. Khurana, K. Staggs, and J. Huang, ‘‘Next-generation access control for distributed control systems,’’ _IEEE Internet Comput._, vol. 20, no. 5, pp. 28–37, Sep./Oct. 2016.
[12] Z. Xu and S. D. Stoller, ‘‘Mining attribute-based access control policies,’’ _IEEE Trans. Dependable Secure Comput._, vol. 12, no. 5, pp. 533–545, Sep. 2015.
[13] A. Margheri, M. Masi, R. Pugliese, and F. Tiezzi, ‘‘A rigorous framework for specification, analysis and enforcement of access control policies,’’ _IEEE Trans._
Softw. Eng., vol. 45, no. 1, pp. 2–33, Jan. 2019._ [14] P. Yang and L. Xu, ‘‘The Internet of Things (IoT): Informatics methods for IoT-enabled health care,’’ J. Biomed. Informat., vol. 87, pp. 154–156, Nov. 2018. [15] M. U. Aftab, Z. Qin, N. W. Hundera, O. Ariyo, N. T. Son, and T. V. Dinh, ‘‘Permission-based separation of duty in dynamic role-based access control model,’’ Symmetry, vol. 11, no. 5, p. 669, May 2019. [16] X. Ma, R. Li, Z. Lu, J. Lu, and M. Dong, ‘‘Specifying and enforcing the principle of least privilege in role-based access control,’’ _Concurrency Comput., Pract. Exper., vol. 23, no. 12, pp. 1313–1331,_ Mar. 2011. [17] M. Narouei, H. Takabi, and R. Nielsen, ‘‘Automatic extraction of access control policies from natural language documents,’’ IEEE Trans. Depend_able Secure Comput., vol. 17, no. 3, pp. 506–517, Jun. 2020._ [18] M. A. Harrison, W. L. Ruzzo, and J. D. Ullman, ‘‘Protection in operating systems,’’ Commun. ACM, vol. 19, no. 8, pp. 461–471, Aug. 1976. [19] D. D. Clark and D. R. Wilson, ‘‘A comparison of commercial and military computer security policies,’’ in Proc. IEEE Symp. Secur. Privacy, Oakland, CA, USA, Apr. 1987, pp. 184–194. [20] T. Zhu, D. Ye, W. Wang, W. Zhou, and P. Yu, ‘‘More than privacy: Applying differential privacy in key areas of artificial intelligence,’’ IEEE Trans. Knowl. Data Eng., early access, Aug. 4, 2021, [doi: 10.1109/TKDE.2020.3014246.](http://dx.doi.org/10.1109/TKDE.2020.3014246) [21] J. Crampton, G. Gutin, and R. Watrigant, ‘‘Resiliency policies in access control revisited,’’ in Proc. 21st ACM Symp. Access Control Models Tech_nol., Shanghai, China, Jun. 2016, pp. 101–111._ [22] N. Li, Q. Wang, and M. Tripunitara, ‘‘Resiliency policies in access control,’’ ACM Trans. Inf. Syst. Secur., vol. 12, no. 4, pp. 20:1–20:34, Apr. 2009. [23] J. Lu, Z. Wang, D. Xu, C. Tang, and J. Han, ‘‘Towards an efficient approximate solution for the weighted user authorization query problem,’’ IEICE Trans. Inf. Syst., vol. E100.D, no. 8, pp. 1762–1769, Aug. 2017. [24] C. Li, S. Xia, Z. Chen, and G. Wang, ‘‘A multi-granularity genetic algorithm,’’ in Proc. IEEE Int. Conf. Big Knowl. (ICBK), Beijing, China, Nov. 2019, pp. 145–151. [25] Y. Xiong, S. Huang, M. Wu, J. She, and K. Jiang, ‘‘A Johnson’s-rule-based genetic algorithm for two-stage-task scheduling problem in data-centers of cloud computing,’’ IEEE Trans. Cloud Comput., vol. 7, no. 3, pp. 597–610, Jul./Sep. 2019. [26] L. Cui, J. Zhang, L. Yue, Y. Shi, H. Li, and D. Yuan, ‘‘A genetic algorithm based data replica placement strategy for scientific applications in clouds,’’ _IEEE Trans. Services Comput., vol. 11, no. 4, pp. 727–739, Jul./Aug. 2018._ [27] A. Zhdanov, ‘‘Generation of static YARA-signatures using genetic algorithm,’’ in Proc. IEEE Eur. Symp. Secur. Privacy Workshops (EuroS&PW), Stockholm, Sweden, Jun. 2019, pp. 220–228. [28] B. Cao, S. Fan, J. Zhao, P. Yang, K. Muhammad, and M. Tanveer, ‘‘Quantum-enhanced multiobjective large-scale optimization via parallelism,’’ Swarm Evol. Comput., vol. 57, Sep. 2020, Art. no. 100697. [29] G. Lin, S. Wen, Q.-L. Han, J. Zhang, and Y. Xiang, ‘‘Software vulnerability detection using deep neural networks: A survey,’’ Proc. IEEE, vol. 108, no. 10, pp. 1825–1848, Oct. 2020. [30] X. Ma, R. Li, and Z. Lu, ‘‘Role mining based on weights,’’ in Proc. 15th _ACM Symp. Access Control Models Technol. (SACMAT), Pittsburgh, PA,_ USA, Jun. 2010, pp. 65–74. [31] J. Lu, J. B. D. Joshi, L. Jin, and Y. 
Liu, ‘‘Towards complexity analysis of user authorization query problem in RBAC,’’ _Comput. Secur._, vol. 48, pp. 116–130, Feb. 2015.
[32] G. J. Chang, ‘‘The domatic number problem,’’ _Discrete Math._, vol. 125, nos. 1–3, pp. 115–122, Feb. 1994.

ZHENG WANG received the M.S. degree from the Department of Computer Science and Engineering, Zhejiang Normal University, in 2017. He is currently a Teaching Assistant with Zhejiang Radio & Television University Haiyan College. His current research interests include optimizing intelligent algorithms and network security.

-----

YANAN JIN received the Ph.D. degree in computer application technology from the Huazhong University of Science and Technology, in 2011. In 2012, he was a Visiting Researcher with Concordia University, Montréal, Canada. He is currently an Associate Professor with the School of Information Management and Statistics, Hubei University of Economics, Wuhan, China. His major research interests include web data mining and recommender systems.

SHASHA YANG received the M.S. degree from the Department of Computer Science and Engineering, Zhejiang Normal University, in 2020. She is currently a Teaching Assistant with the Xingzhi College, Zhejiang Normal University. Her research interests include mobile crowdsensing, incentive mechanisms, and game theory.

JIANMIN HAN received the B.S. degree from the Department of Computer Science and Technology, Daqing Petroleum Institute, in 1992, and the Ph.D. degree from the Department of Computer Science and Technology, East China University of Science and Technology, in 2009. He is currently a Professor with the Department of Computer Science and Engineering, Zhejiang Normal University. His research interests include privacy preservation and game theory.

JIANFENG LU received the Ph.D. degree in computer application technology from the Huazhong University of Science and Technology, in 2010. In 2013, he was a Visiting Researcher with the University of Pittsburgh, Pittsburgh, USA. He is currently a Professor with the Department of Computer Science and Engineering, Zhejiang Normal University. His research interests include algorithmic game theory and incentive mechanisms with applications to mobile crowdsensing and federated learning.

-----
{ "disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.1109/ACCESS.2021.3072635?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1109/ACCESS.2021.3072635, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "CCBY", "status": "GOLD", "url": "https://ieeexplore.ieee.org/ielx7/6287639/9312710/09400839.pdf" }
2,021
[ "JournalArticle" ]
true
null
[ { "paperId": "4f198d0d756cfeb0a96bb4c3610e95a7499ecd69", "title": "Mobility Based Trust Evaluation for Heterogeneous Electric Vehicles Network in Smart Cities" }, { "paperId": "ed6d17b572f55922b0e65754df7793f14e495c0a", "title": "Movement-Based Solutions to Energy Limitation in Wireless Sensor Networks: State of the Art and Future Trends" }, { "paperId": "b5f28dabd7c71f0061054d9083b840d9a2820969", "title": "EIHDP: Edge-Intelligent Hierarchical Dynamic Pricing Based on Cloud-Edge-Client Collaboration for IoT Systems" }, { "paperId": "e60be5a82e8e08dd22f426cdd65e58462b76154c", "title": "Quantum-enhanced multiobjective large-scale optimization via parallelism" }, { "paperId": "7d5b94f9d2cee5823757352155d0234454f09a0e", "title": "More Than Privacy: Applying Differential Privacy in Key Areas of Artificial Intelligence" }, { "paperId": "4739b897bb6508046ec6c2d55699da941e1fafd9", "title": "Security Facets of Cyber Physical System" }, { "paperId": "a8bab484dd26027cdc85cf3c06818f4bab52277d", "title": "Bidirectional Prediction-Based Underwater Data Collection Protocol for End-Edge-Cloud Orchestrated System" }, { "paperId": "24220139c25d5e1191e66ed148382dfdcda1486e", "title": "Software Vulnerability Detection Using Deep Neural Networks: A Survey" }, { "paperId": "a21ab33bb33634ff97dc4466a07afbbcc3e53e8c", "title": "Automatic Extraction of Access Control Policies from Natural Language Documents" }, { "paperId": "9146c9e3dce71db6813cd35a93b7c366fcc69b6a", "title": "A Multi-granularity Genetic Algorithm" }, { "paperId": "6cc5587c8b732a933dfd4a37fa78df8cec6c5652", "title": "A Johnson's-Rule-Based Genetic Algorithm for Two-Stage-Task Scheduling Problem in Data-Centers of Cloud Computing" }, { "paperId": "1c3fc4d76d2034c798d770225b501c1be81029d2", "title": "Generation of Static YARA-Signatures Using Genetic Algorithm" }, { "paperId": "6def59617ec2969f69bccee0e794ff0e63a67901", "title": "Permission-Based Separation of Duty in Dynamic Role-Based Access Control Model" }, { "paperId": "e3e3b01df76c05424aa1e7c0ce6b7049e9eafc2d", "title": "A Partition-Driven Integrated Security Architecture for Cyberphysical Systems" }, { "paperId": "ecabe700cb9b6d48990f8aafa5d1384a9d76bc24", "title": "The Internet of Things (IoT): Informatics methods for IoT-enabled health care" }, { "paperId": "843c1f0d1721f513cf91054a9321978536d5bd5b", "title": "Android HIV: A Study of Repackaging Malware for Evading Machine-Learning Detection" }, { "paperId": "df333a3bcdc4b278a8dd089e6731aa3455f5cf74", "title": "A Genetic Algorithm Based Data Replica Placement Strategy for Scientific Applications in Clouds" }, { "paperId": "45648b4b01cfa81cd3206feaee5b75a39e9cb50d", "title": "Game-Theoretic Design of Optimal Two-Sided Rating Protocols for Service Exchange Dilemma in Crowdsourcing" }, { "paperId": "602cd4722141a97046c691d52308f40cfd7927d2", "title": "Towards an Efficient Approximate Solution for the Weighted User Authorization Query Problem" }, { "paperId": "12f3cbdd93ef47b36594fa567106c5bc0c6ff0f7", "title": "Designing Socially-Optimal Rating Protocols for Crowdsourcing Contest Dilemma" }, { "paperId": "248516d294bf7ee4d83816318ab42a473f2408b6", "title": "Security and Privacy in Cyber-Physical Systems: A Survey of Surveys" }, { "paperId": "d4498c69b91aa4cdfc206abe811db862347fb435", "title": "A Rigorous Framework for Specification, Analysis and Enforcement of Access Control Policies" }, { "paperId": "eff72911641db59b7d299f1c04bff7cf5e6012c5", "title": "Next-Generation Access Control for Distributed Control Systems" }, { "paperId": 
"508d9edd7c7004fd291ff136ba562c4d3d876958", "title": "Resiliency Policies in Access Control Revisited" }, { "paperId": "edd434f7a4f3e40f734fb8623b34fe4822f35554", "title": "Towards complexity analysis of User Authorization Query problem in RBAC" }, { "paperId": "8fb08b72bba6388b400bafe21dee67600e3f1646", "title": "Mining Attribute-Based Access Control Policies" }, { "paperId": "8f2dfd370068ec5fe03a4aa2d0325bad13c1fbda", "title": "Specifying and enforcing the principle of least privilege in role‐based access control" }, { "paperId": "d6a5c19f70508d808c29cdeb1f7db09d5c111147", "title": "Role mining based on weights" }, { "paperId": "a0e672ba3414c80deefe042e200222fc49c5f1ce", "title": "Resiliency policies in access control" }, { "paperId": "5b33cffc2ee476f7db9b06cd85dbb0ac22da03f5", "title": "The domatic number problem" }, { "paperId": "f97356ffef4cab0adc41e57f7c5b8df53ba481db", "title": "A Comparison of Commercial and Military Computer Security Policies" }, { "paperId": "d5002dcbc414472e38dedb3329b149d3d7d857eb", "title": "On protection in operating systems" }, { "paperId": null, "title": "He is currently a Teaching Assistant with Zhejiang Radio & Television University Haiyan College. His current research interests include optimizing intelligent algorithms and network security" }, { "paperId": "f97561037d53527f6d254110866cd93fa057f192", "title": "A Comparison of Commercial and Military Computer Security Policies" }, { "paperId": null, "title": "vi Stop" }, { "paperId": null, "title": "step v Crossover: For each y j in the pair points x i and x ( i + 1) from the x m / 2 , . . . , x m , with probability p c , exchange x i . y j with x ( i + 1)" }, { "paperId": null, "title": "step C. FINDING MUTUALLY DISJOINT USER GROUPS This section finds whether there are κ groups of mutually disjoint user groups in the" }, { "paperId": null, "title": "step iv Mutate: For each point x i that m / 2 < i ≤ m in the population and for each bit in x i , with probability p m , alter its value" }, { "paperId": null, "title": "step ii Compute fitness: Compute the fitness and relative fitness of the role set using the evaluation function respectively" } ]
14,642
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/023e83b3dce4a29c18934916f08cc3d2d40d535a
[ "Computer Science" ]
0.92317
New Directions in Social Authentication
023e83b3dce4a29c18934916f08cc3d2d40d535a
[ { "authorId": "3183839", "name": "Sakshi Jain" }, { "authorId": "144516687", "name": "N. Gong" }, { "authorId": "31289371", "name": "Sreya Basuroy" }, { "authorId": "2573527", "name": "Juan Lang" }, { "authorId": "143711382", "name": "D. Song" }, { "authorId": "143615345", "name": "Prateek Mittal" } ]
{ "alternate_issns": null, "alternate_names": null, "alternate_urls": null, "id": null, "issn": null, "name": null, "type": null, "url": null }
null
# New Directions in Social Authentication

## Sakshi Jain sjain2@linkedin.com

## Juan Lang juanlang@google.com

## Neil Zhenqiang Gong neilz.gong@berkeley.edu

## Dawn Song dawnsong@cs.berkeley.edu

## Sreya Basuroy basuroy@princeton.edu

## Prateek Mittal pmittal@princeton.edu

**_Abstract_—Web services are increasingly adopting auxiliary authentication mechanisms to supplement the security provided by conventional password verification. In the domain of social network based web-services, Facebook has pioneered the use of _social authentication_ as an auxiliary authentication mechanism. If Facebook detects a user login under suspicious circumstances, then users are asked to verify information about their friends (in addition to verifying their passwords). However, recent work has shown that Facebook's social authentication is insecure.**

**In this work-in-progress, we propose to rethink the design of social authentication. Our key insight is that online social network (OSN) operators are privy to large amounts of private data generated by users, including information about users' online interactions. Based on this insight, we architect a system for social authentication that asks users to verify information about their social contacts and their interactions. Our system leverages information protected by privacy policies of OSNs to resist attacks, such as questions based on private user interactions including exchanging messages and poking social contacts.**

**We implemented our system prototype as a Facebook application, and performed a preliminary user study to evaluate feasibility of the approach. Our initial experiments have been encouraging; we find that users have high rates of recall for information generated in the context of OSN interactions. Overall, our work provides a promising new direction for the secure and usable deployment of social authentication.**

I. INTRODUCTION

Web services today such as Facebook rely on user provided passwords for authentication. However, a critical security issue in this paradigm is the compromise of passwords [1]. For example, passwords could be compromised because of password database leakage, phishing attacks, dictionary attacks, or password reuse across multiple websites. To supplement the security provided by conventional passwords, websites are increasingly deploying auxiliary authentication mechanisms. Auxiliary authentication aims to prevent attackers from taking over user accounts despite having access to their correct passwords.

In the domain of social network based web services, Facebook has pioneered the use of social authentication as an auxiliary authentication mechanism. Facebook monitors user accounts for suspicious activity.
For instance, if a user logs into Facebook from very distant locations within a very short span of time, then in addition to requiring the user password, Facebook verifies the user by presenting a friend photo and challenging the user to name the friend [2]. Indeed, Facebook's approach has been inspired by similar proposals from the academic community [3]. Interestingly, most deployed and proposed systems have primarily focused on the paradigm of users identifying their friends in depicted photos. A critical vulnerability in this paradigm is the use of fast-improving face recognition algorithms. In fact, recent works have demonstrated successful attacks on photo-based social authentication through theoretical modeling as well as empirical evaluation [4], [5]. _Thus, an open question facing our community is whether social authentication in the current form can provide a strong foundation for supplementing the security of password based authentication._

**Our work:** We propose to rethink the design of social authentication based on the insight that online social network (OSN) operators are privy to large amounts of private data generated by users. We believe that the space of social knowledge is much larger than photographs of friends. For instance, users in online social networks are associated with rich node attributes such as users' schools, employers, faces, and locations. Moreover, users interact with each other in online social networks. Such interactions include poking friends and exchanging private messages with friends.

In this work-in-progress, we aim to study how to leverage the rich space of social knowledge to design mechanisms for social authentication that are both secure and usable. Towards this end, we introduce a general architecture and a system for social authentication that is able to incorporate the social knowledge available to OSN operators. Our system challenges users to verify information that is dynamically generated in the context of OSN usage, such as information about users' social contacts and their interactions. Note that our approach does not rely on users to preselect static "security questions" and can thus be leveraged on demand.

We propose to group the challenges that can be generated using social knowledge into three categories: _node_, _pseudo-edge_, and _edge_ questions. They are constructed from node attributes specific to a single user, common node attributes of linked users (friends), and attributes of user interactions, respectively. Under this categorization of social knowledge, Facebook's photo-based authentication mechanism is an example of a node question since faces are users' node attributes. Moreover, questions based on private user interactions such as exchanging private messages are examples of edge questions. To resist attacks against social authentication, our approach relies on privacy policies applicable to user data that are enforced by OSN operators.

One of the key challenges in generalizing the concept of social authentication is usability, i.e., are users able to recall information that is organically and dynamically generated with their OSN usage? To study this question, we implemented a preliminary prototype of our architecture as a Facebook application. We performed a user study by recruiting 90 participants from Amazon Mechanical Turk to test our prototype.
Our initial results have been encouraging; our study provides preliminary support to the idea that users have a non-trivial ability to recall information pertaining to their interactions on online social networks. As a part of future work, we plan to (a) conduct a larger-scale user study to further our understanding of the usability of social authentication, (b) develop theoretical models to quantify the security of the approach, and (c) engage with OSN operators to impact system design. Overall, our work opens up promising new directions for research in secure and usable social authentication mechanisms.

II. MOTIVATION

Facebook designed and implemented an auxiliary authentication mechanism called social authentication [2] for its users using photos of friends posted on the social network. When Facebook detects suspicious activity on a user's account, e.g., if a user logged into Facebook from very distant locations within a small span of time, in addition to the user's password, it presents photo challenges to the user. In these photo challenges, Facebook shows 3 tagged photos of a friend with 6 options, and the user has to select the correct friend name that corresponds to the tags in the photos shown. If the user accurately answers at least 5 out of 7 instances of photo challenges, he or she is allowed access to the website.

However, recent works [4], [5] have discussed various security issues with photo-based social authentication. For instance, Kim et al. [4] pointed out that photo-based social authentication is not secure against the user's friends, who could also recognize the person in the photo. Polakis et al. [5] designed an automated attack which exploits face recognition techniques to demonstrate the feasibility of carrying out a large-scale real-world attack against photo-based social authentication. As a defense, Polakis et al. [6] recently proposed to transform faces and show distorted faces in the photos. They showed that these distorted friend faces, while easy for a user to recognize, are robust against face recognition attacks and image comparison attacks, where attackers collect publicly available photos to compare and identify the individuals in displayed photos. In conclusion, photo-based social authentication constantly finds itself in an arms race with face recognition algorithms, which are fast improving.

In this work, we ask the question: can we leverage information from a user's social network other than the photos?

Fig. 1: Proposed architecture for social authentication systems

Indeed, the space of social knowledge is much larger than just photos. For instance, users in OSNs usually create profiles which include diverse information types such as education, age, employment, and location. Moreover, OSNs offer various modes of interaction amongst users; for example, users can poke their friends and exchange private messages on Facebook, Twitter allows a user to follow another user, Google+ allows its users to create circles and categorize their connections, and LinkedIn allows users to write recommendations and endorse their social contacts for some skills. Can these social data be leveraged to design social authentication? How difficult or easy is it to generate challenges based on these data? How secure and usable would such systems be? Would it be more secure than photo-based social authentication? Would it have implications for users' privacy? Can we categorize the plethora of information available in social networks in some way in order to perform a security analysis of them?
We believe that photo-based social authentication is one aspect of knowledge-based social authentication mechanisms, and there lies a large space of social knowledge yet unexplored. In this work-in-progress, we lay the basic framework for exploring the use of other social knowledge and take the first step towards answering some of the questions asked.

III. ARCHITECTURE OF SOCIAL AUTHENTICATIONS

We denote an OSN as a graph G = (V, E), where each node corresponds to a user registered on that OSN and an edge corresponds to two users being friends on the social network. OSNs store various types of personal information about users themselves as well as their activities on the website. We divide these information types into two categories, i.e., node attributes and edge attributes. Node attributes correspond to details specific to each user, independent of their interaction with others. Some common node attributes across social networks include a user's name, photo, education, and location. Edge attributes, on the other hand, include data corresponding to interactions among various users. The schema of this information type largely depends on the various platforms provided by the social network for user engagement. Some examples of such data include messages exchanged between users, pokes by friends, and posts written on a friend's wall.

**Architecture Overview:** A social authentication system comprises challenges or questions posed to the user. We propose a schematic architecture for a social authentication system as follows. The system iterates over k trials to authenticate a user u. In each trial, a question is selected from the question database and is displayed to the user via an _authentication interface_. All questions follow a common schema, where the user is provided information about an attribute, node or edge, and is asked to identify the associated friend. The user u inputs his/her answer (i.e., the name of a friend) to the question, and the _answer matching module_ checks if the user-provided answer can be matched to the correct friend.

**Question Database:** The questions in the database are generated using the node and edge attributes available for the specific social network. We divide the set of questions into three main categories.

_Node questions:_ Questions where the user is provided data about some node attribute of a friend and is asked to recognize the corresponding friend. For instance, "Name your friend in the displayed photos" or "Name a friend who is currently studying at UC Berkeley".

_Pseudo-edge questions:_ Questions where the user is provided information about some node attribute which is common between the user and a friend. The user is then asked to recognize the friend. For instance, "Who went to the same school with you?" is a pseudo-edge question because it involves the school (node attribute) common to the user and his/her friend.

_Edge questions:_ Questions where the user is provided information about some interaction with a friend, and the user is asked to recognize the friend. For instance, "Name a friend you recently exchanged a message with" is an edge question.

Facebook's face-recognition challenges fall under the node questions category since faces are node attributes; a short sketch of the three categories follows below.
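As a reading aid, the three question categories can be expressed in a few lines of Python. The graph layout (`friends`, `attrs`, `interactions`) and the concrete question strings are illustrative assumptions, not the paper's implementation, and each helper assumes at least one qualifying friend exists:

```python
import random

def node_question(user, friends, attrs):
    """Node question: reveal a node attribute of one friend."""
    f = random.choice(friends[user])
    return f"Name the friend who currently studies at {attrs[f]['school']}", f

def pseudo_edge_question(user, friends, attrs):
    """Pseudo-edge question: reveal a node attribute shared by user and friend."""
    shared = [f for f in friends[user]
              if attrs[f].get('school') == attrs[user].get('school')]
    f = random.choice(shared)
    return "Name a friend who went to the same school as you", f

def edge_question(user, interactions):
    """Edge question: reveal a private interaction on the edge (u, f)."""
    f, kind = random.choice(interactions[user])  # e.g. ('f42', 'message')
    return f"Name the friend you recently exchanged a {kind} with", f
```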
**Authentication Interface & Answer Matching:** The authentication interface displays the challenges and receives the user's inputs. There could be multiple ways of obtaining answers from the user, each providing varied usability and security trade-offs. For example, one way is to show n options of friend names as radio buttons, and the user chooses the correct one amongst them. Facebook's current photo-based social authentication system receives answers in this way, where n = 6. Another way is to ask the user to type in the name of the correct friend by providing just the photos of both correct and incorrect friends as options. The user in this case needs to recognize the correct friend from the photos and write the selected friend's name in the textbox. The name entered by the user in this case can be matched to the correct one using fuzzy matching, to account for spelling mistakes for improved usability. One can also imagine providing a dropdown menu of friends' names to select from, with or without providing any photo options.

Each of the above techniques has its pros and cons when evaluated against security and usability metrics. We suspect that the first method is very usable since it allows the user to click on an option; however, the security of such a method is lower bounded by 1/n. Although we compromise on usability for the second method, its security is strictly better than providing radio buttons, since the attacker would have to recognize the correct friend and type in the name. Quantitatively evaluating the security is however quite tricky in this case.

**Model Selection and Evaluation:** Given the proposed general model for a social authentication system, there are multiple parameters that need analysis. For example, how difficult is it to come up with the question database for a particular social network? Is such a model feasible? Would users remember answers to such questions? How should the answer choices look? Does any particular category of questions provide better security or usability to users? In order to answer some of these questions and to test the feasibility of such a system, we build a prototype authentication interface for Facebook and perform a user study for a preliminary analysis of the proposed system. We particularly chose Facebook as our platform since it is the most popular online social network (OSN), with more than 1 billion users worldwide [7]. Also, Facebook provides an API to build apps using information from a user's social graph. In the following two sections, we detail our analysis of the feasibility and usability of the proposed system. We also briefly discuss the security implications of the various types of questions in Section V.

Fig. 2: Example of an edge question from our prototype for Facebook.

IV. USER STUDY DESIGN

_A. Preliminary Study_

We designed a user study to understand the usability of our newly proposed model, to measure how well users perform when posed with questions about their social network, and to help design a more extensive authentication mechanism model. To this effect, we recruited 90 participants to take a survey and performed a quantitative study based on the observations.

_1) Methodology:_ We invited participants via Amazon Mechanical Turk to take a survey about their Facebook account. Any participant above 18 years of age owning a Facebook account was allowed to take the survey. Each participant is directed to a Facebook application URL and asked to log in with his Facebook credentials. Once logged in, Facebook takes the participant to our application, called 'Soc-auth'. Soc-auth requests the following permissions from the user before proceeding: {user-groups, user-photos, friends-about-me, friends-education-history, friends-photos, read-mailbox}.
Once the participant provides the required permissions, Soc-auth poses 4 different questions to the participant, followed by a survey about basic personal information and a feedback form. For each question, client-side JavaScript queries Facebook for the appropriate user information and checks the correctness of the answer provided by the user. We chose to implement all the logic at the client side to protect the confidentiality of user information, since the above-mentioned permissions provide the app access to sensitive data, including the inbox. To protect the privacy of the user, we only store whether the user answered a question correctly. Each participant was compensated with $5 paid via Amazon Mechanical Turk.

We recruited 90 participants in total from Amazon Mechanical Turk over a course of 7 days. These participants had a wide range of ages (18 - 45+). 42% of the participants fell in the (18-24) bracket, 39% in the (25-34) bracket, and the remaining 18% were above 35. We also saw a wide range of educational backgrounds. About 19% had or are pursuing high school degrees, 57% had or are pursuing bachelor degrees, and 24% had or are pursuing advanced degrees.

Our goal in this experiment is to understand the feasibility of a model which uses the user's social network to generate authentication questions. To this effect, we chose 4 different questions to ask each user. Questions were selected based on the most popular sources of activity on Facebook and the security of the question. We first inspected the Facebook Graph API[1], which is a tool provided by Facebook to represent the nodes and edges of its social graph. By analyzing a node or user, we determined the most common interactions or edges they share with other nodes and designed the questions to ask about these attributes. Furthermore, according to a survey about people's Facebook activity conducted by the Pew Research Center [8], the top 3 most frequent activities are commenting, liking, and exchanging messages. While users may post statuses or comment on friends' posts frequently, this behavior is easily viewable by both known and unknown attackers and does not constitute a secure question. Hence, we ask questions about the next most frequent set of activities that are not public, such as private messages and pokes exchanged. The questions and their corresponding categories are shown in Table I.

1 https://developers.facebook.com/docs/graph-api/

TABLE I: Questions used in the Facebook prototype for user study and their corresponding categories

|Question schema|Description|Category|
|---|---|---|
|Q1|Type in the complete name of the person with a square box around his/her face in the following picture|Node|
|Q2|Given the following 5 Facebook friends as options, type the complete name of the friend you went to the same school with|Pseudo-edge|
|Q3|Given the following 5 Facebook friends as options, type the complete name of the friend who poked you on Facebook|Edge|
|Q4|Given the following 5 Facebook friends as options, type the complete name of the friend with whom you exchanged a message on Facebook|Edge|

Question Q1 presents a user with a photo from his album and asks the user to type in the name of the tagged person. This is a node question since answering this question correctly would require the user to recognize a friend's face (a node attribute) correctly. Question Q2 presents a user with the profile photos of five of his friends and asks the user to type in the name of the friend with whom he went to the same school.
This is a pseudo-edge question since it requires knowledge about the node attributes (i.e., school) of both the user and the correct friend. Questions Q3 and Q4 are edge questions, each of which presents a user with five options and asks the user to type in the correct name. Specifically, Q3 asks the user to choose the friend who poked him recently on Facebook, and Q4 asks who recently exchanged a message with the user on Facebook. Questions Q3 and Q4 are asked only when the user has at least one friend who poked/messaged him in the last year. This design choice is made to ensure that the interaction is recent enough for the user to remember the friend. Figure 2 shows an example of Q4.

To generate options for each question, we randomly choose one correct option and 4 incorrect options. Note that the user is not just asked to select the correct friend but to type in the name of the friend in a text box, thereby increasing security. To match the answer provided by the user with the correct friend's name displayed on Facebook, we adopt the Damerau-Levenshtein edit distance for fuzzy matching. The input answer is considered correct if the edit distance is no more than 12%, which roughly means that we tolerate one out of 8 characters being removed, replaced, or added.
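The 12% tolerance rule can be sketched as follows. For self-containment we hand-roll the restricted (optimal string alignment) variant of the Damerau-Levenshtein distance and measure the tolerance against the length of the correct name; both choices are our reading, which may differ in detail from the authors' implementation:

```python
def dl_distance(a, b):
    """Restricted Damerau-Levenshtein (optimal string alignment) distance."""
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i
    for j in range(len(b) + 1):
        d[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
            if (i > 1 and j > 1 and a[i - 1] == b[j - 2]
                    and a[i - 2] == b[j - 1]):
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transposition
    return d[len(a)][len(b)]

def answer_matches(typed, correct, tolerance=0.12):
    """Accept the answer if the edit distance is within 12% of the name length."""
    typed, correct = typed.strip().lower(), correct.strip().lower()
    return dl_distance(typed, correct) <= tolerance * len(correct)
```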
We believe that _∼_ this gap is because an interaction with a friend in the form of a message or a poke would make it more likely that the friend is a close friend implying it would be easier for the user to remember his/her name. On the other hand, a user might not be familiar with friends or acquaintances (but friends on Facebook) tagged in some photos,[2] resulting in low reliability of Q1. Note that from these observations, we cannot firmly deduce that edge questions are more reliable than node or pseudo-edge questions since we have used specific examples for each category of question. It is possible that some instances of node questions perform better than a poorly chosen instance of edge question. However, since there is no universal set of edge, node, and pseudo-node questions, this is difficult to evaluate at this point. _B. Next Steps_ Based on the observations from the first study, we are designing a more extensive and larger scale study to quantitatively evaluate the benefits of the proposed model as a part of future work. Since the previous study only asked 1 node, 1 pseudo-edge, and 2 edge questions, the results are limited to the specific question asked within each category. Instead, we plan to design and analyze a broader set of questions per category. Examples of node questions other than faceidentification could include asking the user to identify a friend from his hometown, college, employer, or Facebook groups that he is a member of. The pseudo-edge category can be expanded to questions like “Name a friend who attended the same high school or college as you.”, or “Name a friend who is going to a given Facebook event with you.” Similarly, the edge questions can be expanded to more than exchanging pokes and messages. For example, users can be asked to identify a friend who sent them a friend request or tagged them in a photo recently. Each question may have a different memory recall time and applicability based on the user’s engagement of Facebook; it would be interesting to examine whether one particular type of questions are more usable. Furthermore, we want to quantify the usability and security of the existing face-based authentication model used by Facebook and compare with our model. The photo test question in the previous study was similar to the one used by Facebook, except for the number of images of the friend displayed in the question and the answering matching mode. Thus to create a more direct comparison, we plan to design a separate question to simulate the photo-based challenge as shown by Facebook. Finally, we’d like to evaluate the ease of use of various answering methodologies while maintaining their security properties. We plan to compare the radio button 2These tags could be provided by other users. option, vanilla text box with no options, and text box with photos of friends without their names shown as options. Moreover, we plan to construct a formal security model to quantify the security of different categories of questions and different answering matching modes, and compare them quantitatively. V. DISCUSSION In this section, we briefly discuss the security and privacy implications of the proposed model. **Security:** Online social networks often provide users with fine-grained privacy settings. We assume a user u sets his/her node attributes (e.g., users’ faces, schools, and employers) to be accessible to at least his/her friends. The incentives for users to do so could be to let their friends know who they are. In fact, Dey et al. 
_B. Next Steps_

Based on the observations from the first study, we are designing a more extensive and larger-scale study to quantitatively evaluate the benefits of the proposed model as a part of future work. Since the previous study only asked 1 node, 1 pseudo-edge, and 2 edge questions, the results are limited to the specific question asked within each category. Instead, we plan to design and analyze a broader set of questions per category. Examples of node questions other than face identification could include asking the user to identify a friend from his hometown, college, employer, or Facebook groups that he is a member of. The pseudo-edge category can be expanded to questions like "Name a friend who attended the same high school or college as you." or "Name a friend who is going to a given Facebook event with you." Similarly, the edge questions can be expanded to more than exchanging pokes and messages. For example, users can be asked to identify a friend who sent them a friend request or tagged them in a photo recently. Each question may have a different memory recall time and applicability based on the user's engagement with Facebook; it would be interesting to examine whether one particular type of question is more usable.

Furthermore, we want to quantify the usability and security of the existing face-based authentication model used by Facebook and compare it with our model. The photo test question in the previous study was similar to the one used by Facebook, except for the number of images of the friend displayed in the question and the answer matching mode. Thus, to create a more direct comparison, we plan to design a separate question to simulate the photo-based challenge as shown by Facebook. Finally, we'd like to evaluate the ease of use of various answering methodologies while maintaining their security properties. We plan to compare the radio button option, the vanilla text box with no options, and the text box with photos of friends without their names shown as options. Moreover, we plan to construct a formal security model to quantify the security of different categories of questions and different answer matching modes, and compare them quantitatively.

V. DISCUSSION

In this section, we briefly discuss the security and privacy implications of the proposed model.

**Security:** Online social networks often provide users with fine-grained privacy settings. We assume a user u sets his/her node attributes (e.g., users' faces, schools, and employers) to be accessible to at least his/her friends. The incentive for users to do so could be to let their friends know who they are. In fact, Dey et al. [9] showed that 47% of Facebook users leave such node attributes publicly accessible by default. However, we consider edge attributes (e.g., pokes and private messages exchanged between two users) of an edge (u, f) to be accessible only to user u and the linked friend f. Indeed, such edge attributes in Facebook are only accessible to the two involved users.

Under this privacy setting, the sets of users who can access the attributes that are core to the three types of questions (i.e., node, pseudo-edge, and edge questions) are different. Specifically, let u be the user and f be the selected friend about whom a question q is being asked. If q is a node question, the node attribute used in q is accessible at least to all the friends of f and to f. If, on the other hand, q is a pseudo-edge question, the common node attribute involved in q is only accessible to the common friends of u and f, if they set their node attributes to be only visible to their friends in their privacy settings. Lastly, if q is an edge question, the corresponding edge attribute is accessible only to u and f. The different privacy settings for node and edge attributes are the fundamental reason why the three types of questions manifest different levels of security.

We will take the Sybil attack [10] as an example to further illustrate the security levels. In a Sybil attack, the attacker creates fake accounts on the social network and tries to befriend the victim and its friends to get access to their information. If the authentication challenge is a node question, like Facebook's photo-based challenge, the attacker has all the necessary information to solve the challenge once he has connected himself to the victim's friends on the social graph. If the authentication challenge is a pseudo-edge question, the attacker needs to befriend the victim's friends and the victim, which succeeds with a lower probability. Edge questions are robust to this kind of Sybil attack because interactions are private to the victim and the friend involved. We believe edge questions can be significantly more promising in providing security and are worth exploring in new versions of social authentication services. Theoretical modeling of the three types of questions and performing security experiments on publicly available social graphs are left for future work.

**Privacy implications:** Social authentication mechanisms might also raise concerns about leakage of private user information. For each of the three types of questions, some information about the node or edge attributes is revealed in order to frame the challenge. An example from our prototype is the message question: the attacker, even without answering the question, would learn that the user exchanged private messages with one of the friends from the options. Similarly, in Facebook's photo-based questions, the user's friends and their photos are revealed during the challenge. One can argue against the privacy-leakage concern since these challenges are only used after the user has been confirmed via the primary authentication interface (passwords). Moreover, we plan to evaluate users' privacy concerns in social authentication via user studies.

VI. RELATED WORK

In this section, we review prior work on social authentication mechanisms, which we divide into two categories: _trustee-based social authentication_ and _knowledge-based social authentication_.
In trustee-based social authentication [11], [12], [13], [14], [15], the user or the service provider pre-selects a few friends of the user as trustees, who aid the user in the authentication process. Knowledge-based social authentication [3], [2], [4], [5], [6] utilizes a user's friends' information for authentication, and thus knowledge-based social authentication relies on the user's knowledge about their friends. The friends are not directly involved in knowledge-based social authentication. Knowledge-based social authentication mechanisms are mainly used as auxiliary authentication mechanisms, while trustee-based social authentication mechanisms are used as a backup authentication service. Our work belongs to knowledge-based social authentication.

**Trustee-based social authentication:** Brainard et al. [11] proposed to use somebody you know, i.e., friends of users, in authentication systems. Originally, Brainard et al. combined trustee-based social authentication with other authenticators (e.g., passwords) as a two-factor authentication mechanism. Later, trustee-based social authentication was adapted to be used as a backup authenticator [13], [14], [12]. For instance, Schechter et al. [12] designed and built a prototype of a trustee-based social authentication system which was integrated into Microsoft's Windows Live ID system. Facebook announced its trustee-based social authentication system called Trusted Friends in October 2011 [13], and it was redesigned as Trusted Contacts [14] in May 2013. Gong and Wang [15] proposed a probabilistic security model to quantify the security of trustee-based social authentication, and their security model can guide the design of more secure trustee-based social authentication.

**Knowledge-based social authentication:** Yardi et al. [3] were the first to propose a photo-based authentication system, called _Lineup_, to test if a user belongs to a group (e.g., interest groups in Facebook) that he/she tries to access. Specifically, when a user tries to access a group, Lineup presents a photo and asks the user to input the names of subjects in the photo, assuming that if the user has permission to access the group, he/she should know the subjects. To determine if the answer given by the user is correct or not, Lineup uses tagged photos to obtain ground-truth answers. Furthermore, Yardi et al. discussed Denial of Service (DoS) and network outlier attacks. In DoS attacks, an attacker could spam the system with a large number of photos with wrong tags, so that legitimate users input "incorrect" answers even if they know the subjects. In network outlier attacks, an attacker can recognize his/her friends that are in the group and whose tagged photos are presented. Later, Facebook adopted and implemented this photo-based authentication mechanism [2] to verify users when suspicious user activity is detected.

VII. CONCLUSION AND FUTURE WORK

In this work, we propose to revisit the design space of social authentication challenges by exploiting the vast amount of data generated on social networks. Specifically, we present a general architecture for social authentication that incorporates a large space of social knowledge and makes it possible to compare different design strategies under the same framework.
We introduce a categorization of the design space of questions that can be generated from a social graph, i.e., _node_, _pseudo-edge_, and _edge_ questions.

As a proof-of-concept for our proposed model, we implement a prototype as a Facebook application and perform a user study with 90 Amazon Mechanical Turk workers. The results of the study are encouraging and support the feasibility and usability of such a model. Our work thus opens up promising new directions in knowledge-based social authentication by exploiting a larger design space.

**Acknowledgement:** This work is supported by the NSF under Grants No. 1409915 and 1409415.

REFERENCES

[1] D. Balfanz, R. Chow, O. Eisen, M. Jakobsson, S. Kirsch, S. Matsumoto, J. Molina, and P. van Oorschot, "The future of authentication," _IEEE Security & Privacy_, 2012.
[2] Facebook's knowledge-based social authentication, http://blog.facebook.com/blog.php?post=486790652130.
[3] S. Yardi, N. Feamster, and A. Bruckman, "Photo-based authentication using social networks," in WOSN, 2008.
[4] H. Kim, J. Tang, and R. Anderson, "Social authentication: Harder than it looks," in FC, 2012.
[5] I. Polakis, M. Lancini, G. Kontaxis, F. Maggi, S. Ioannidis, A. D. Keromytis, and S. Zanero, "All your face are belong to us: Breaking Facebook's social authentication," in ACSAC, 2012.
[6] I. Polakis, P. Ilia, F. Maggi, M. Lancini, G. Kontaxis, S. Zanero, S. Ioannidis, and A. D. Keromytis, "Faces in the distorting mirror: Revisiting photo-based social authentication," in CCS, 2014.
[7] Facebook company info, http://newsroom.fb.com/company-info/.
[8] K. Hampton, L. S. Goulet, C. Marlow, and L. Rainie, http://www.pewinternet.org/2012/02/03/part-2-facebook-activity/.
[9] R. Dey, Z. Jelveh, and K. Ross, "Facebook users have become much more private: A large-scale study," in SESOC, 2012.
[10] J. R. Douceur, "The Sybil attack," in IPTPS, 2002.
[11] J. Brainard, A. Juels, R. L. Rivest, M. Szydlo, and M. Yung, "Fourth-factor authentication: Somebody you know," in CCS, 2006.
[12] S. Schechter, S. Egelman, and R. W. Reeder, "It's not what you know, but who you know," in CHI, 2009.
[13] Facebook's Trusted Friends, https://www.facebook.com/notes/facebook-security/national-cybersecurity-awareness-month-updates/10150335022240766.
[14] Facebook's Trusted Contacts, https://www.facebook.com/notes/facebook-security/introducing-trusted-contacts/10151362774980766.
[15] N. Z. Gong and D. Wang, "On the security of trustee-based social authentications," _IEEE Transactions on Information Forensics and Security (TIFS)_, vol. 9, no. 8, 2014.

-----
{ "disclaimer": "Notice: The following paper fields have been elided by the publisher: {'references', 'abstract'}. Paper or abstract available at https://api.unpaywall.org/v2/10.14722/USEC.2015.23002?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.14722/USEC.2015.23002, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "GREEN", "url": "http://home.engineering.iastate.edu/%7Eneilgong/papers/socauth.pdf" }
2,015
[]
true
null
[]
8,489
en
[ { "category": "Business", "source": "s2-fos-model" }, { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Economics", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/023fb7dd8bdf0812ffbf351657c08c8920f7d512
[]
0.881474
Applicability of Blockchain Technology to The Normal Accounting Cycle
023fb7dd8bdf0812ffbf351657c08c8920f7d512
Applied Finance and Accounting
[ { "authorId": "115576142", "name": "W. K. Peprah" }, { "authorId": "2155969672", "name": "Reynaldo P. Abas Jr." }, { "authorId": "108686680", "name": "A. Ampofo" } ]
{ "alternate_issns": [ "2374-2429" ], "alternate_names": [ "Appl Finance Account" ], "alternate_urls": [ "http://redfame.com/journal/index.php/afa/issue/view/24" ], "id": "6f9da3f2-7f87-411a-be06-4f8dd8d80439", "issn": "2374-2410", "name": "Applied Finance and Accounting", "type": null, "url": "http://redfame.com/journal/index.php/afa/index" }
Blockchain technology is a distributed, unchangeable ledger that makes recording transactions and managing assets in a business network much easier, and is now a type of accounting software concerned with the transfer of asset ownership and the maintenance of an accurate financial ledger. Despite the numerous benefits of blockchain technology, there is no study on the applicability of blockchain technology to the normal accounting cycle in emerging economies in Africa. Thus, this paper provides general insights on how blockchain technology may be used in the normal accounting cycle in West Africa. The study adopted a qualitative research method and a content analysis research design to understand the extent to which business leaders in West Africa are aware of, understand, and utilize blockchain technology in the processing of accounting transactions through to the preparation of financial statements. Results indicate that West African business leaders are well aware of, understand, and apply blockchain technology in the normal accounting cycle, and that it provides cost savings, digital identity, and security. The study recommends further investigations into how to address scalability when dealing with recurrent and large transactions.
Vol. 8, No. 1, August 2022 ISSN 2374-2410 E-ISSN 2374-2429 Published by Redfame Publishing URL: http://afa.redfame.com
# Applicability of Blockchain Technology to The Normal Accounting Cycle
Williams Kwasi Peprah[1], Reynaldo P. Abas Jr.[2] & Akwasi Ampofo[3]
1 Valley View University, School of Business, Oyibi, Accra, Ghana. E-mail: williams.peprah@vvu.edu.gh. ORCID: 0000-0002-6802-2586
2 Adventist University of the Philippines, Department of Accountancy, College of Business. E-mail: rpabasjr@aup.edu.ph
3 University of Connecticut, School of Business, Accounting Department. E-mail: aaampofo@vt.edu
Received: November 22, 2021; Accepted: December 27, 2021; Available online: February 22, 2022
doi:10.11114/afa.v8i1.5492 URL: https://doi.org/10.11114/afa.v8i1.5492
**Abstract**
Blockchain technology is a distributed, unchangeable ledger that makes recording transactions and managing assets in a business network much easier, and is now a type of accounting software (Adams et al., 2018; Demirkan et al., 2020) concerned with the transfer of asset ownership and the maintenance of an accurate financial ledger. Despite the numerous benefits of blockchain technology, there is no study on the applicability of blockchain technology to the normal accounting cycle in emerging economies in Africa. Thus, this paper provides general insights on how blockchain technology may be used in the normal accounting cycle in West Africa. The study adopted a qualitative research method and a content analysis research design to understand the extent to which business leaders in West Africa are aware of, understand, and utilize blockchain technology in the processing of accounting transactions through to the preparation of financial statements. Results indicate that West African business leaders are well aware of, understand, and apply blockchain technology in the normal accounting cycle, and that it provides cost savings, digital identity, and security. The study recommends further investigations into how to address scalability when dealing with recurrent and large transactions.
**Keywords:** Blockchain Technology, Normal Accounting Cycle, Accounting
**1. Introduction**
Blockchain technology is a distributed, unchangeable ledger that makes recording transactions and managing assets in a business network much easier, and is now a type of accounting software (Adams et al., 2018; Demirkan et al., 2020) concerned with the transfer of asset ownership and the maintenance of an accurate financial ledger. Accounting is primarily concerned with the communication and measurement of accounting transactions and the analysis of such information (Dai & Vasarhelyi, 2017). A large part of the profession involves determining or quantifying property rights and obligations in order to best allocate financial resources. For accountants, blockchain clarifies asset ownership and the existence of obligations, and offers the potential for significant efficiency gains. Despite the extensive literature on blockchain technology and its potential advantages, no study has yet been conducted on its potential applicability in the normal accounting cycle. Thus, this paper aims to provide general insights on how blockchain technology may be used in the normal accounting cycle. According to Supriadi (2020), blockchain technology can improve the accounting profession by lowering the cost of maintaining and reconciling ledgers and ensuring perfect clarity regarding asset ownership and transaction history.
Blockchain technology has the potential to assist accountants in gaining clarity over their organizations' available resources and liabilities, while also freeing up time to focus on analysis, valuation, and planning rather than bookkeeping (Pimentel & Boulianne, 2020). Blockchain technology will increase the amount of transaction-level accounting being performed, but not by accountants (George & Patatoukas, 2021). Instead, successful accountants will analyze the accurate economic interpretation of blockchain records, reconciling the record with financial reality and valuation. By removing reconciliations and providing confidence over transaction history, blockchain may also enable accounting to expand its scope, considering aspects that are currently thought too complex or unreliable to quantify, such as the worth of a company's data (Zhang et al., 2020).
Blockchain technology can be used to automate bookkeeping and reconciling tasks. This could enhance accountants' efforts in particular areas while strengthening those focused on generating value elsewhere. In business, and particularly in accounting, the latest accounting software packages, such as SAP, Oracle, QuickBooks, and Sunplus, are among the accounting information systems commonly used by most companies to process fast, convenient, and reliable financial transactions and to generate financial statements. Given the many successive developments in the information technology environment, the expansion of its use in the business environment, and its direct impact on the practice of accounting, there is a need to examine the opportunity to make use of advanced technologies in accounting data systems, which represent the first basis of accounting work in the various sectors in which it operates (Alsaqa et al., 2019). Meanwhile, aside from the aforementioned accounting information systems, the potential of using blockchain has become one of the major recent trends, as evidenced by several research studies (Kwilinski, 2019; Dai & Vasarhelyi, 2017; Kokina et al., 2017).
Blockchain technology has received much press coverage in the last few years (Kokina et al., 2017). Much has been said recently about Bitcoin, blockchains, and distributed ledger technologies (DLT); Satoshi Nakamoto, the famed Bitcoin developer, is credited with initiating these debates (Appelbaum & Nehmer, 2020). Smith and Castonguay (2020) stated that the potential for blockchain to be used as an accounting tool opens the door to far-reaching implications in the areas of financial reporting, assurance, and corporate governance, extending the benefits beyond the internal control environment. Accounting and assurance are two professions in which blockchain technology can make a substantial impact and fundamentally alter current paradigms. Blockchain technology functions such as data integrity protection, rapid sharing of pertinent information, and programmable, automatic process controls may assist in constructing a new accounting environment (Wei et al., 2020). The technology could potentially be utilized to provide automated assurance, enhancing the agility and precision of the current auditing paradigm (Dai & Vasarhelyi, 2017). Meanwhile, Orcutt (2018) reports that engineers recognized that blockchain might be used to track non-monetary assets: in 2013, Vitalik Buterin, then 19 years of age, founded Ethereum, a cryptocurrency platform that would track financial transactions and the status of computer programs known as smart contracts.
**2. Accounting Cycle**
Ballada (2021) described the accounting cycle as a collection of successive processes or procedures that carry out the accounting process. The steps of the cycle and their objectives are as follows:
1) Identifying the events to be recorded. This step gathers data about transactions or events from source documents.
2) Recording transactions in the journal. This documents the economic impact of transactions on the firm in a journal, a format that allows transfer to the accounts.
3) Posting journal entries to the ledger. This transfers data from the journal to the ledger for classification.
4) Constructing a trial balance. This provides a listing for verifying that the ledger's debits and credits are equal.
5) Preparing the worksheet, including adjusting entries. This simplifies the preparation of the financial statements.
6) Preparing the financial statements. This information is beneficial to decision-makers.
7) Journalizing and posting adjusting entries. These entries account for accruals, expired deferrals, estimates, and other events identified on the worksheet.
8) Journalizing and posting closing entries. This closes the temporary accounts and transfers profit to the owner's equity.
9) Preparing a post-closing trial balance. This verifies the equality of debits and credits after the closing entries.
10) Journalizing and posting reversing entries. This simplifies the recording of certain routine transactions in the subsequent accounting period.
A few definitions are helpful for a better understanding of these steps. Journalizing is the process of chronologically recording commercial transactions in a journal in terms of debits and credits (Ballada, 2021); this procedure initiates the entry of transactions into the books of accounts. Posting is the process of transferring commercial transactions from journals to ledgers, and a ledger is a record that contains a summary of all journal entries. In other words, journalizing comes before posting.
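As a minimal illustration (not part of the original study; the account names and amounts are ours), the following Python sketch walks through journalizing (Step 2), posting (Step 3), and the trial balance check (Step 4) of the cycle described above:

```python
# Minimal sketch of accounting-cycle Steps 2-4: journalize, post, trial balance.
from collections import defaultdict

# Step 2: record transactions in the journal as (date, account, debit, credit).
journal = [
    ("2022-01-05", "Cash",           1000, 0),
    ("2022-01-05", "Owner's Equity",    0, 1000),
    ("2022-01-09", "Supplies",        200, 0),
    ("2022-01-09", "Cash",              0, 200),
]

# Step 3: post journal entries to the ledger (running totals per account).
ledger = defaultdict(lambda: {"debit": 0, "credit": 0})
for _, account, debit, credit in journal:
    ledger[account]["debit"] += debit
    ledger[account]["credit"] += credit

# Step 4: the trial balance verifies that total debits equal total credits.
total_debits = sum(a["debit"] for a in ledger.values())
total_credits = sum(a["credit"] for a in ledger.values())
assert total_debits == total_credits, "ledger out of balance"
print(dict(ledger["Cash"]))   # {'debit': 1000, 'credit': 200}
```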
**3. Methodology**
This research was qualitative and applied content analysis techniques to examine the applicability of blockchain technology in the normal accounting cycle. The content analysis research design is used to establish replicable and accurate findings by evaluating and coding textual material (Drisko & Maschi, 2016; Peprah et al., 2018), systematically assessing texts such as documents, oral communication, and visuals from websites and books (Krippendorff, 2019).
**4. Results and Discussion**
As shown in Table 1 below, and based on the content analysis technique, the researchers summarize the applicability of blockchain technology in the normal accounting cycle and its cost savings. Blockchain technology in the normal accounting cycle secures a digital identity per transaction in a way that minimizes bookkeeping and the reconciliation of transactions. The financial statement presentation format may be customized and integrated into the blockchain to give standard accounting reporting.

Table 1. Applicability of Blockchain Technology in the Normal Accounting Cycle

| Normal Accounting Cycle | Applicability of Blockchain Technology | Potential Benefits |
|---|---|---|
| Step 1. Identification of events to be recorded. | Blockchain technology can pre-program transactions to "self-execute" per an agreed contract, called a smart contract. For example, a buyer and a seller may agree on specific criteria for their business transactions that may be programmed through blockchain technology, minimizing or eliminating the efforts of bookkeepers. This may also reduce or eliminate the issuance of formal accountable forms like invoices and statements of accounts, resulting in money savings. Moreover, the transaction between seller and buyer may be guaranteed as authentic, since digital identities are required to use blockchain technology, and a transaction will not be added to the blockchain unless approved by all members of the chain. Further, blockchain technology assures the security of transactions because of its cryptographic features, which are very difficult for unauthorized persons to tamper with. | Cost savings / digital identity / security |
| Step 2. Transactions are recorded in the journal. | Blockchain technology would help the bookkeeper save time on journalizing recurring transactions, so more time may be spent on value-adding activities such as analyzing transactions and preparing special reports for management. For entities employing several bookkeepers/accountants, blockchain technology might be a cost-saving solution for processing bulky and recurring transactions in a given time. | Cost savings |
| Step 3. Journal entries are posted to the ledger. | See the discussion of Step 2 above. | Cost savings |
| Step 4. Preparation of a trial balance. | This step may no longer be necessary, as the transactions in Steps 1-3 are already validated by the members of the network/chain. The main purpose of the trial balance is to check the equality of debits and credits; since every transaction is processed electronically and according to sequential criteria, the trial balance may be abolished, saving time and money. | Cost savings |
| Step 5. Preparation of the worksheet, including adjusting entries. | The preparation of the worksheet may be foregone, as with the trial balance in Step 4 above. With regard to adjusting entries, blockchain technology can handle this task by following the usual process of recording and validating through the chain. | Cost savings |
| Step 6. Preparation of financial statements. | This task may be accommodated by blockchain technology by customizing the formats of the financial statements required by the financial reporting framework applicable to an entity. | Cost savings |
| Step 7. Adjusting journal entries are journalized and posted. | See the discussion of Step 2 above. | Cost savings |
| Step 8. Closing journal entries are journalized and posted. | See the discussion of Step 2 above. | Cost savings |
| Step 9. Preparation of a post-closing trial balance. | See the discussion of Step 4 above. | Cost savings |
| Step 10. Reversing journal entries are journalized and posted. | See the discussion of Step 2 above. | Cost savings |
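As a rough, hypothetical illustration of the "self-execute" idea in the Step 1 row of Table 1 (no real smart-contract platform is used; `member_approves` merely stands in for whatever validation a real chain performs), the following Python sketch releases a pre-agreed entry only when the agreed criterion is met and every member of the chain approves it:

```python
# Hypothetical sketch: a pre-programmed, self-executing transaction rule.
def member_approves(member, entry):
    # Placeholder validation: each node re-checks that debit equals credit.
    return entry["debit"][1] == entry["credit"][1]

def smart_contract(order, members):
    # Agreed criterion: record the purchase only when delivery is confirmed.
    if not order["delivered"]:
        return None
    entry = {"debit": ("Inventory", order["price"]),
             "credit": ("Cash", order["price"])}
    # Mirroring Table 1: the entry is added only if all members approve it.
    if all(member_approves(m, entry) for m in members):
        return entry
    return None

print(smart_contract({"delivered": True, "price": 500}, ["node1", "node2"]))
```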
**5. Conclusion and Recommendations**
Organizations may examine the possible benefits of adopting blockchain technology in terms of cost savings, digital identity, and security, particularly in this age of digitization. The prevalence of cryptocurrencies and other kinds of electronic evidence may point to the adoption of blockchain technology in the accounting industry, particularly throughout the typical accounting cycle. The study points to the fact that blockchain technology has come to reduce the normal accounting cycle required by the accounting standards in the preparation of financial statements. Nonetheless, despite the potential benefits of blockchain technology, it is important to evaluate its drawbacks. For bookkeepers or accountants to use blockchain technology in their current and future jobs, they must understand its technicalities; certain computing skills must be established to comprehend the basic technological requirements for using blockchain technology. Another issue to overcome is blockchain technology's scalability when dealing with recurrent and large transactions. The reliability of the data must be maintained at all times and closely monitored by technical personnel directly involved in the upkeep of the blockchain. Finally, more research on the implications of blockchain technology in the normal accounting cycle, particularly in Steps 4-10, is strongly recommended, given that the existing literature and studies focus solely on the distributed ledger implications of blockchain technology.
**References**
Adams, R., Kewell, B., & Parry, G. (2018). Blockchain for good? Digital ledger technology and sustainable development goals. In Handbook of Sustainability and Social Science Research (pp. 127-140). Springer, Cham.
Alsaqa, Z. H., Hussein, A. I., & Mahmood, S. M. (2019). The impact of blockchain on accounting information systems. Journal of Information Technology Management. https://doi.org/10.22059/jitm.2019.74301
Appelbaum, D., & Nehmer, R. A. (2020). Auditing cloud-based blockchain accounting systems. Journal of Information Systems, 34(2). https://doi.org/10.2308/isys-52660
Ballada, W., & Ballada, S. (2021). Basic Financial Accounting and Reporting. DomDane Publishers and Made Easy Books.
Dai, J., & Vasarhelyi, M. A. (2017). Toward blockchain-based accounting and assurance. Journal of Information Systems, 31(3), 5-21. https://doi.org/10.2308/isys-51804
Demirkan, S., Demirkan, I., & McKee, A. (2020). Blockchain technology in the future of business cyber security and accounting. Journal of Management Analytics, 7(2), 189-208.
Drisko, J. W., & Maschi, T. (2016). Content Analysis. Oxford University Press.
George, K., & Patatoukas, P. N. (2021). The blockchain evolution and revolution of accounting. In Information for Efficient Decision Making: Big Data, Blockchain and Relevance (pp. 157-172).
Kokina, J., Mancha, R., & Pachamanova, D. (2017). Blockchain: Emergent industry adoption and implications for accounting. Journal of Emerging Technologies in Accounting, 14(2), 91-100.
Krippendorff, K. (2019). Content Analysis: An Introduction to Its Methodology. SAGE Publications, Inc.
Kwilinski, A. (2019). Implementation of blockchain technology in accounting sphere. Academy of Accounting and Financial Studies Journal, 23, 1-6.
Orcutt, M. (2018). Blockchain. MIT Technology Review, 121(3), 18-23.
Peprah, W. K., Afriyie, A. O., Abandoh-Sam, J. A., & Afriyie, E. O. (2018). Dollarization 2.0 a cryptocurrency: Impact on traditional banks and fiat currency.
International Journal of Academic Research in Business and Social Sciences, 8(6), 341-349.
Pimentel, E., & Boulianne, E. (2020). Blockchain in accounting research and practice: Current trends and future opportunities. Accounting Perspectives, 19(4), 325-361.
Smith, S. S., & Castonguay, J. J. (2020). Blockchain and accounting governance: Emerging issues and considerations for accounting and assurance professionals. Journal of Emerging Technologies in Accounting, 17(1), 119-131.
Supriadi, I. (2020). The effect of applying blockchain to the accounting and auditing. Ilomata International Journal of Tax and Accounting, 1(3), 161-169.
Wei, P., Wang, D., Zhao, Y., Tyagi, S. K. S., & Kumar, N. (2020). Blockchain data-based cloud data integrity protection mechanism. Future Generation Computer Systems, 102, 902-911.
Zhang, Y., Xiong, F., Xie, Y., Fan, X., & Gu, H. (2020). The impact of artificial intelligence and blockchain on the accounting profession. IEEE Access, 8, 110461-110477.
**Copyrights**
Copyright for this article is retained by the author(s), with first publication rights granted to the journal. This is an open-access article distributed under the terms and conditions of the Creative Commons Attribution license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
{ "disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.11114/afa.v8i1.5492?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.11114/afa.v8i1.5492, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "GOLD", "url": "https://redfame.com/journal/index.php/afa/article/download/5492/5693" }
2,022
[]
true
2022-02-22T00:00:00
[ { "paperId": "fb990d334fd6e2a0a41e6b7ddb1113fe8d1a35ee", "title": "Blockchain in Accounting Research and Practice: Current Trends and Future Opportunities*" }, { "paperId": "95875321d1f95d60f8a73504e2a61c1fa96cc7df", "title": "The Effect of Applying Blockchain to The Accounting and Auditing" }, { "paperId": "b20f2936c062881eb02f4e4e0b9ffd0eb940d201", "title": "The Impact of Artificial Intelligence and Blockchain on the Accounting Profession" }, { "paperId": "206cfd7b124bc80f685a1187784d1ba188bbda9a", "title": "The Blockchain Evolution and Revolution of Accounting" }, { "paperId": "3cc0cd9a47a08a175150c2e3901b6069bda953f5", "title": "Blockchain technology in the future of business cyber security and accounting" }, { "paperId": "7c1dc3b5a1f6f331c782a08277caf26111a4b62d", "title": "Blockchain data-based cloud data integrity protection mechanism" }, { "paperId": "390e429a86a5a88c8bcec5ae33fb38c21284ddc4", "title": "Blockchain and Accounting Governance: Emerging Issues and Considerations for Accounting and Assurance Professionals" }, { "paperId": "24f820574ea153c7a8a2788d54748d96747f5ceb", "title": "Entries" }, { "paperId": "784e3b742b53c448dc88ed5048da87052a8fd591", "title": "Auditing Cloud-Based Blockchain Accounting Systems" }, { "paperId": "bc9e888c876a33b841915fcc1842cd4533d5f82f", "title": "The Impact of Blockchain on Accounting Information Systems" }, { "paperId": "2dd69343163deb265cf3404e781f4b5783467914", "title": "Dollarization 2.0 a Cryptocurrency: Impact on Traditional Banks and Fiat Currency" }, { "paperId": "210eb61e631e244675751e53845d55ff32a096bf", "title": "Blockchain: Emergent Industry Adoption and Implications for Accounting" }, { "paperId": "a6ed0c2467e56f569ce17885e9d5c1dae396b38d", "title": "Toward Blockchain-Based Accounting and Assurance" }, { "paperId": null, "title": "Basic Financial Accounting and Reporting" }, { "paperId": "8290128f83a8bba902898f63338c57634a89dcaa", "title": "Assurance" }, { "paperId": "29dc3f218fffb59992ed545f42f86dbcd95137ff", "title": "Implementation of Blockchain Technology in Accounting Sphere" }, { "paperId": "8f3711a7a4bed59c6002f9a6db2a6625443f2d59", "title": "Blockchain for Good? Digital Ledger Technology and Sustainable Development Goals" }, { "paperId": null, "title": "Block chain" }, { "paperId": "25075e27b0df6f2be5a8c519171bdabd1c3ed817", "title": "Content Analysis: An Introduction to Its Methodology" }, { "paperId": null, "title": "Worksheet preparation, including correcting entries. This simplifies the process of preparing financial statements" }, { "paperId": null, "title": "Constructing a trial balance" }, { "paperId": null, "title": "Identifying the events that will be recorded" }, { "paperId": null, "title": "The ledger is updated using journal entries. This procedure is intended to transmit data from the journal to the ledger for classification" }, { "paperId": null, "title": "Journal entries for the closing journal are journalized and posted. This results in the closure of temporary accounts and the transfer of profit to the owner's equity" }, { "paperId": null, "title": "Journal entries for adjustments are journalized and posted. This column is used to track accruals, deferrals that have expired, estimations, and other occurrences from the worksheet" }, { "paperId": null, "title": "The journal is used to record transactions. This tries to document the economic impact of transactions on the firm in a journal, a format that allows transfer to the accounts" } ]
4,290
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Engineering", "source": "s2-fos-model" }, { "category": "Environmental Science", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/0240a106144dbee52e5e527f8e2f0cd21fc3126b
[ "Computer Science" ]
0.82466
Publicly Verifiable Spatial and Temporal Aggregation Scheme Against Malicious Aggregator in Smart Grid
0240a106144dbee52e5e527f8e2f0cd21fc3126b
Applied Sciences
[ { "authorId": "2152829737", "name": "Lei Zhang" }, { "authorId": "2155698444", "name": "Jing Zhang" } ]
{ "alternate_issns": null, "alternate_names": [ "Appl Sci" ], "alternate_urls": [ "http://www.mathem.pub.ro/apps/", "https://www.mdpi.com/journal/applsci", "http://nbn-resolving.de/urn/resolver.pl?urn=urn:nbn:ch:bel-217814" ], "id": "136edf8d-0f88-4c2c-830f-461c6a9b842e", "issn": "2076-3417", "name": "Applied Sciences", "type": "journal", "url": "http://www.e-helvetica.nb.admin.ch/directAccess?callnumber=bel-217814" }
We propose a privacy-preserving aggregation scheme under a malicious attack model, in which the aggregator may forge householders' billing or a neighborhood's aggregate data, or collude with compromised smart meters to reveal target householders' fine-grained data. The scheme can generate, spatially, the total consumption in a neighborhood at a timestamp and, temporally, a householder's billing over a series of timestamps. The proposed encryption scheme, which imposes masking keys from a pseudo-random function (PRF) between pairwise nodes on partitioned data, ensures the confidentiality of individual fine-grained data and fends off the power theft of at most n-2 smart meters (n is the group size of smart meters in a neighborhood). Compared with the public key encryption methods of most related works, the simple and lightweight combination of a PRF with modular addition is not only customized to the specific needs of the smart grid, but also facilitates any node's verification of a local or global aggregation at low cost. Such publicly verifiable scenarios are very important for self-sufficient, remote places, which can only afford renewable energy and can manage their own energy price according to the energy consumption circumstances in the neighborhood.
# applied sciences
_Article_
## Publicly Verifiable Spatial and Temporal Aggregation Scheme Against Malicious Aggregator in Smart Grid
**Lei Zhang 1 and Jing Zhang 2,1,***
1 College of Computer Science and Technology, Harbin Engineering University, Harbin 150001, China; lei_power@hrbeu.edu.cn
2 School of Information Science and Engineering, Jinan University, Jinan 250022, China
* Correspondence: ise_zhangjing@ujn.edu.cn
Received: 16 November 2018; Accepted: 14 January 2019; Published: 31 January 2019
**Abstract:** We propose a privacy-preserving aggregation scheme under a malicious attack model, in which the aggregator may forge householders' billing or a neighborhood's aggregate data, or collude with compromised smart meters to reveal target householders' fine-grained data. The scheme can generate, spatially, the total consumption in a neighborhood at a timestamp and, temporally, a householder's billing over a series of timestamps. The proposed encryption scheme, which imposes masking keys from a pseudo-random function (PRF) between pairwise nodes on partitioned data, ensures the confidentiality of individual fine-grained data and fends off the power theft of at most n-2 smart meters (n is the group size of smart meters in a neighborhood). Compared with the public key encryption methods of most related works, the simple and lightweight combination of a PRF with modular addition is not only customized to the specific needs of the smart grid, but also facilitates any node's verification of a local or global aggregation at low cost. Such publicly verifiable scenarios are very important for self-sufficient, remote places, which can only afford renewable energy and can manage their own energy price according to the energy consumption circumstances in the neighborhood.
**Keywords:** smart metering; spatial and temporal aggregation; privacy protection; internal attack; pseudo-random function
**1. Introduction**
With the development of the Advanced Metering Infrastructure (AMI), Smart Metering, an important research subject in the Smart Grid (SG), plays an increasingly important role and is closely associated with people's daily life [1,2]. Aggregated fine-grained metering data attract householders and power suppliers. Power suppliers can accurately calculate, forecast, and regulate the power distribution/price of the next period in real time while detecting fraudulent reports. Based on billing details and the current power price, householders can adjust their appliance consumption patterns to reduce the power bill at peak times; however, accessing a householder's metering information may cause security and privacy concerns, revealing, for example, daily routines and the types of appliances used [1,2]. Hence, in SG systems, one of the challenges faced by power big data is how to design an aggregation mechanism that balances the use of power data against individual privacy protection [2]. Protecting such sensitive private data from privacy threats requires limiting the authority of utility company employees [2]. Namely, the Supplier Billing System (SBS, sub-suppliers) should know only the total consumption of each customer, while the Energy Management System (EMS, demand prediction division) should know only the total consumption of customers in a certain region for each time period. To achieve these goals, smart metering systems often introduce the Meter
Data Management System (MDMS), which stores the measured values of smart meters (SMs) and aggregates them before sending the aggregation to the SBS and EMS [2]. With the appearance of the MDMS, another concern arises, namely the malicious actions of householders and regional MDMS employees. A malicious householder may collude with a regional MDMS employee to report false consumption to the SBS department; attackers may steal or forge power usage and consumption information. In addition, a regional MDMS employee may submit a fraudulent aggregation for a neighborhood. A World Bank report finds that each year over 6 billion dollars are lost to energy theft and fraudulent reporting in the United States; in 2009, the FBI reported a wide and organized attempt that may have cost up to $400 million annually, and power suppliers suffered great monetary losses [3]. To fend off this type of attack, it is desirable that suppliers or the public be able to detect fraudulent profiles from malicious aggregators or dishonest householders [4].
Privacy-preserving metering protocols have been discussed in many works [5-24]. They mainly focus on homomorphic aggregation [6-17,20,23,24], by which aggregators can only obtain the fine-grained aggregate data within a certain region, or a householder's billing over a period, while individual privacy is protected. However, most of them can only resist single external or semi-trusted attacks [11,13,14,19], and how to fend off internal attackers (e.g., aggregators or householders) is an open problem. Internal attackers can legally collect and store users' power consumption information; therefore, they pose a higher threat than external attackers [18]. Most existing works [5-10,18,20,21,23,24] on additive homomorphism, multiplicative homomorphism, and their combination with other cryptography endeavor to address the problem. Most of them improve Paillier encryption [6-9] by combining it with other cryptographic primitives, such as stream ciphers [5,19,23] and modular addition [7,24], to prevent power suppliers/operators from intercepting individual user data and to detect fraud by dishonest users. To ensure the integrity of transmitted messages and fend off attacks such as man-in-the-middle and denial-of-service attacks against the SG, signature and authentication methods are proposed in References [8,15-17].
Lu et al. [6] proposed a privacy-preserving, multi-dimensional metering aggregation scheme in a neighborhood-wide grid with Paillier encryption, bilinear pairing, and computational Diffie-Hellman (DH) methods. To resist internal attackers possessing private keys, Xiao [8] introduced a spatial and temporal aggregation and authentication scheme by randomizing Paillier encryption with Lagrange interpolation. Their protocol requires $O(n^2)$ bytes of interaction between the individual meters as well as relatively expensive cryptography on the meters (public key encryption). Chen [9] also improved Paillier encryption and proposed a privacy-preserving aggregation scheme resisting at most t compromised servers in a control center with a threshold protocol. Dimitriou et al. [20] provided a publicly verifiable aggregation scheme against dishonest users that attempt to provide fraudulent data.
Any user node in the community can prove the accuracy of its computation by a zero-knowledge proof that two messages encrypted with different public keys correspond to the same plaintext. In contrast, we show that our scheme incurs lower overhead to resist fraudulent reports from internal nodes. Erkin et al. [23] adopted a stream cipher (e.g., RC4) to generate pseudo-random keys as masking keys between nodes, preventing internal nodes from possessing private keys. During the aggregation within a neighborhood, all masking random keys cancel out and the aggregate value is revealed without compromising individual privacy, based on the security properties of the Paillier encryption and the stream cipher. We follow its pseudo-random function and combine it with modular addition. The main difference from our work is that they impose the random keys from the PRF on the plaintext before encrypting it with Paillier cryptography and send the encrypted message to all nodes. We set a security parameter k representing the number of communicating nodes in a neighborhood and improve the encryption method by replacing the costly Paillier encryption with this simple and lightweight combination. More significantly, we add a publicly verifiable property to detect fraudulent profiles from malicious aggregators or dishonest user nodes.
Castelluccia et al. [19] protected individual data by imposing masking keys from RC4 on the plaintext data under a multi-level wireless sensor network model. However, their protection protocol cannot resist malicious aggregators, as the session keys are generated by the sink acting as the aggregator. We extend their PRF method to a peer-to-peer system model and propose a privacy-preserving scheme against malicious internal attacks. In addition, traditional modular addition was adopted in [7,24] by partitioning individual plaintext data into n shares and exchanging them between nodes (n is the number of users in a neighborhood). Flavio et al. [7] adopted Paillier encryption and modular addition, in which every user node partitions its meter reading into n shares and transmits the shares, encrypted with different public keys, to the aggregator, which aggregates the data under the same public key before sending the aggregation back to the users. Finally, the aggregator collects the plaintext sums to obtain the final aggregation. The method is privacy-preserving; however, during each spatial aggregation, three message exchanges are required between every user and the aggregator. Thus, the number of homomorphic encryptions per user increases linearly with n, and the communication overhead is $O(n^2)$ messages [20]. Jia et al. [24] also generated partitioned data with modular addition and imposed them on the coefficients of a high-order polynomial. The values of the polynomial at different points are transmitted to the aggregator, which recovers the coefficients of the polynomial with the private key and obtains the aggregation; the scheme is thus under the semi-trusted model, and the aggregator must be trusted. In addition, the computation overhead is relatively high as k increases: every node performs the $x^k$ polynomial evaluation before the matrix multiplication, which greatly increases the computation overhead.
Ohara et al. [4] summarized the functional requirements of smart metering against internal attackers: calculating billing and obtaining statistics for energy management.
We follow these statistical function requirements and the spatial and temporal scenarios of References [8,23] against malicious MDMSs/aggregators or dishonest users:
(1) Spatial aggregation. A neighborhood-wide grid corresponds to a group of householders, each equipped with an SM. They submit their encrypted meterings to the MDMS at each timestamp (e.g., every 15 min). The latter aggregates them homomorphically before sending the aggregation to the EMS. During this aggregation, the individual data remain confidential to the MDMS and the EMS.
(2) Temporal aggregation. A single SM submits its power consumption over a series of timestamps to the MDMS for billing purposes. In this scenario, the SBS charges the householders over serial timestamps.
Throughout this paper, we refer to the building area network (BAN) region as a neighborhood, to the regional MDMS as the regional gateway (GW), and to the regional SBS as the control center (CC), respectively. The main contributions can be summarized as follows:
(1) We design and implement a distributed temporal and spatial aggregation scheme in the SG, in which every node sends to and receives encrypted messages from k pairwise nodes distributively. The scheme provides spatial aggregation in a neighborhood at a fine-grained time scale (e.g., 15 min) and individual temporal aggregation (e.g., monthly) over a series of timestamps for billing purposes.
(2) The proposed encryption scheme minimizes the computation and communication overhead by replacing the costly public key cryptography adopted in most of the literature with a combination of modular addition and a PRF.
(3) The novel feature is that the masking keys are imposed on the partitioned data, and the latter are produced by traditional modular addition. Since the modular addition is performed by the node itself, other nodes cannot obtain the true partitioned data; the masking key is known only to the pairwise nodes; and the combination ensures the confidentiality of individual data against any node, including the CC, the aggregator, and at most n-2 compromised nodes in a neighborhood.
(4) To detect malicious aggregators or dishonest users, we propose a publicly verifiable aggregation method. In this way, any user node in a neighborhood can receive the communication flow and verify the accuracy of the local aggregations from other nodes, or of the total aggregation from the aggregator, without compromising individual fine-grained data.
(5) The publicly available aggregation also helps householders regulate, in time, their current consumption and their consumption demand for the next time period: by comparing their own consumption with that of other nodes and checking whether there is redundant power, householders can decide to store more energy or to sell excess power to the power supplier or to other nodes. These scenarios are especially important for self-sufficient, remote places, particularly in developing countries, which can only afford renewable energy sources such as wind turbines, solar panels, and carbon-based fuels [23].
The paper is organized as follows: in Section 2, we provide related preliminaries and formalize the system and attack models. In Section 3, we introduce our proposed aggregation scheme and its correctness analysis. Security notions and proofs are given in Section 4, followed by a performance evaluation and comparison in Section 5. The conclusion is drawn in Section 6.
**2. Preliminaries and Models**
For ease of reading, we summarize the main notations of the paper in Table 1.

**Table 1. Notations in the scheme.**

| Symbol | Meaning |
|---|---|
| HSM/SM | HAN smart meter / user / user node |
| $N$ | The number of users in a BAN neighborhood |
| $k$ | The number of pairwise nodes for every user |
| $K$ | Keystream based on the stream cipher |
| $M$ | RSA modulus (large prime) |
| $x^i_{(j,d)}$ | $user_i$'s partitioned data sent to $user_j$ at timestamp $d$ |
| $x^i_d$ | $user_i$'s data at timestamp $d$ |
| $r^i_{(j,d)}$ | $user_i$'s pairwise key with $user_j$ at timestamp $d$ |
| $E^i_{(j,d)}$ | The encrypted form of $x^i_{(j,d)}$ |
| $sk_i$ | The secret key between the CC and every node |
| $ind_i[s]$ $(s = 1, \cdots, k)$ | $user_i$'s pairwise node set in serial timestamps $T$ |
| $LS(j,d)$ | $user_j$'s locally spatial aggregation at timestamp $d$ |
| $LT^i_{(j,d)}$ $(j \in ind_i[s])$ | $user_j$'s locally temporal aggregation for $user_i$ in $T$ |
| $AT(i,T)$ | $user_i$'s temporal aggregation in $T$ |
| $AS_d$ | Spatial aggregation in a neighborhood at timestamp $d$ |

_2.1. Additively Homomorphic Encryption Based on the Keystream_
Our security property partly comes from the stream cipher: the keystream generated by the pseudo-random function satisfies the security properties of additively homomorphic encryption. The basic idea [19] is as follows.
Encryption: $c = Enc_K(m) = (m + K) \bmod M$, where $K$ is a randomly generated keystream value, $m$ is the plaintext, and $m, K \in [0, M-1]$.
Decryption: $Dec_K(c) = (c - K) \bmod M$.
Additively homomorphic property: given $c_1 = Enc_{K_1}(m_1)$ and $c_2 = Enc_{K_2}(m_2)$, the aggregated ciphertext is $c = (c_1 + c_2) \bmod M = Enc_K(m_1 + m_2)$, where $K = (K_1 + K_2) \bmod M$.
_2.2. Pseudo-Random Keystream Generator — RC4_
As a popular PRF generator, RC4 can generate a keystream from a secret key shared between communicating nodes. This secret key is pre-computed during system initialization. As with any stream cipher, the generated keystream can be used for encryption by combining it with the plaintext; however, our scheme replaces the bit-wise Exclusive-OR (XOR) operation typically found in stream ciphers with a modular addition operation (+). To generate the keystream, RC4 uses two algorithms, the key-scheduling algorithm (KSA) and the pseudo-random generation algorithm (PRGA) [5,14].
KSA: the KSA initializes a permutation, keyed with a variable-length key of between 40 and 2048 bits, for the PRGA.
PRGA: once the permutation initialization of the KSA has been completed, the stream of bytes is generated using the PRGA.
**Algorithm 1: Key-scheduling algorithm (KSA)**
Input: key (the secret session key); i, j (two 8-bit index pointers)
Output: S (a permutation of all 256 possible bytes)
1. for (i = 0; i <= 255; ++i)
2.   S[i] = i;
3. end
4. j = 0;
5. for (i = 0; i <= 255; ++i)
6.   j = (j + S[i] + key[i mod keylength]) mod 256;
7.   t = S[i]; S[i] = S[j]; S[j] = t;  // swap S[i] and S[j]
8. end
**Algorithm 2: Pseudo-random generation algorithm (PRGA)**
Input: S (the permutation produced by the KSA); i = 0; j = 0
Output: Z (one pseudo-random keystream byte per iteration)
1. while keystream bytes are required
2.   i = (i + 1) mod 256;
3.   j = (j + S[i]) mod 256;
4.   t = S[i]; S[i] = S[j]; S[j] = t;  // swap S[i] and S[j]
5.   Z = S[(S[i] + S[j]) mod 256];
6. end
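As a minimal runnable rendering of Algorithms 1 and 2 (the function and variable names, and the toy modulus standing in for the scheme's RSA modulus M, are ours), the following Python sketch derives a keystream with RC4 and checks the additively homomorphic masking of Section 2.1:

```python
# Minimal sketch: RC4 KSA + PRGA (Algorithms 1 and 2) feeding the
# modular-addition encryption of Section 2.1, c = (m + K) mod M.
def rc4_keystream(key: bytes, nbytes: int):
    # KSA: initialize the permutation S with the session key.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # PRGA: output nbytes pseudo-random bytes.
    out, i, j = [], 0, 0
    for _ in range(nbytes):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return out

M = 2**31 - 1                      # toy modulus (a large prime)

def mask(session_key: bytes, d: int) -> int:
    # Derive the masking key K for timestamp d from the RC4 keystream.
    ks = rc4_keystream(session_key, 4 * (d + 1))[-4:]
    return int.from_bytes(bytes(ks), "big") % M

# Homomorphic property: Enc_K1(m1) + Enc_K2(m2) decrypts under K1 + K2.
m1, m2 = 120, 75
K1, K2 = mask(b"session-key-ij", 0), mask(b"session-key-ik", 0)
c = ((m1 + K1) + (m2 + K2)) % M
assert (c - (K1 + K2)) % M == (m1 + m2) % M
```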
_2.3. System Model_
In our system model, we consider a typical SG communication architecture [8,9,11,15-17], as shown in Figure 1. It is based on the SG network model presented by the National Institute of Standards and Technology (NIST) and consists of six domains: the power plant, the transmission domain, the distribution domain, a CC, a residential GW, and the user domain. We mainly focus on how to report and aggregate the users' privacy-preserving data to the CC. Hence, the system model divides each BAN into a number of household area networks (HANs), each equipped with an SM; every BAN includes a GW and a number of users.
CC: It acts as the SBS and EMS in reality. It needs to monitor how much power is consumed at which timestamp in one BAN (neighborhood), how much power should be reserved for the next time period, the cumulative consumption for individual billing on a monthly basis, and how much power is being distributed to a specified neighborhood. In this paper, the CC is curious about individual fine-grained data and may attempt to obtain them by all available means, so it is assumed to be a semi-trusted entity.
GW: A powerful entity, acting as the local MDMS, that represents a locality (e.g., a region within a building). It is responsible for aggregating real-time spatial data in a neighborhood and individual temporal data over a series of timestamps, and then transmitting the aggregation to the CC. The employment of the GW relieves the CC of aggregation work and greatly reduces communication latency. However, the cost of potentially malicious attacks against users or power suppliers is not ignorable, as discussed earlier; we therefore assume the GW is a malicious entity. A BAN GW represents a locality (e.g., a region within a neighborhood). To facilitate the communication between the BAN GW and the CC, WiMAX and other broadband wireless technologies can be adopted. We consider a scenario in which one BAN neighborhood covers a hundred or more HANs, so the longest distance from the BAN GW to a HAN may exceed a hundred miles, for which WiMAX may be the more suitable technology.
Household Smart Meter (HSM): A bidirectional communication entity deployed at the householder's premises. The modern SM is given a certain level of autonomy via trusted elements and the ability to collect, store, aggregate, and encrypt usage data. Hence, it has two interfaces: one for reading the householder's power consumption and another acting as a communication gateway. Even if we assume the SM is tamper-resistant, it is not as powerful as a GW, so it may be vulnerable to being compromised by the GW to infer the target users' data.
**Figure 1. System model under consideration.**
_2.4. Communication Model_
As can be seen in Figure 1, all SMs in a neighborhood are connected to each other via WiFi, which constructs the publicly verifiable foundation. Each user randomly selects k pairwise nodes in one round, ensuring that if $user_i$ chooses $user_j$, then $user_j$ chooses $user_i$, and the keys between them are mutually opposite. The value k, as a security parameter, can take any value from 2 to n, depending on the specific application circumstances: the higher the value of k, the higher the complexity, and vice versa; a smaller k lowers the complexity but leaves the scheme more vulnerable to attack.
_2.5. Data Model_
Let $x^i_d$ be the meter reading of the $i$th ($1 \le i \le N$) user node at the $d$th ($1 \le d \le T$) fine-grained timestamp, where $N$ is the number of users in a BAN (a neighborhood-wide grid) and $T$ is a billing period. At each fine-grained time index $d$, the spatially aggregated utility usage of the neighborhood grid (over the entire BAN) can be expressed as:

$$AS(d) = \sum_{i=1}^{n} x^i_d; \quad d = 1, 2, \ldots, T \quad (1)$$

At the end of a billing period ($d = T$), the temporally aggregated utility usage for the $i$th user is expressed as:

$$AT(i, T) = \sum_{d=1}^{T} x^i_d; \quad i = 1 \text{ to } N \quad (2)$$
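As a minimal sketch of this data model (the readings below are illustrative only), Equations (1) and (2) can be computed directly from an N x T matrix of plaintext values:

```python
# Minimal sketch of Equations (1) and (2): spatial aggregate AS(d) over the
# neighborhood at a timestamp, and temporal aggregate AT(i, T) for one user.
N, T = 4, 3
x = [[10, 12, 9], [7, 8, 11], [5, 6, 4], [13, 9, 10]]   # x[i][d]

def AS(d):                      # Equation (1): total neighborhood usage at d
    return sum(x[i][d] for i in range(N))

def AT(i):                      # Equation (2): user i's billing total over T
    return sum(x[i][d] for d in range(T))

print([AS(d) for d in range(T)])   # [35, 35, 34]
print(AT(0))                       # 31
```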
_2.6. Security Requirement and Attack Model_
Within the system model, there are four types of actors involved in the meter data reporting process: the $i$th user (self), the other users in the same neighborhood (BAN), the GW, and the CC. The CC requires the spatially aggregated fine-grained neighborhood usage data to optimize power delivery efficiency, and the temporally aggregated user-specific usage data for billing purposes. Hence, we stipulate the following security/privacy requirements:
**_Requirement R1._** Fine-grained individual utility data are private and should not be disclosed to the CC, the GW, or other users.
**_Requirement R2._** The temporal aggregation for an individual user and the spatial aggregation in one neighborhood cannot be tampered with by the malicious aggregator or by other internal nodes.
To this end, we envision a secure and reliable communication model comprising a publicly verifiable method, customized to verifying the correctness of the aggregate values in the SG. Our attack model is therefore based on a malicious aggregator that attempts to tamper with the aggregate value in a neighborhood or the billing value of individual users, or to infer the fine-grained meterings of an individual user by colluding with up to n-2 compromised nodes. Following the above security requirements, different compositions of attackers and actions can be grouped into the following attack types:
(1) External attack. External attackers may compromise the meterings of target users by eavesdropping on the communication flow between nodes through various eavesdropping malware.
(2) Malicious attack. False aggregation report: the aggregator may maliciously alter or drop any individual's data, or tamper with the aggregate data sent to the CC; any malicious user node may provide a false local aggregation to the GW. Collusion with compromised nodes: the aggregator may collude with compromised users in an attempt to infer the uncompromised users' data.
(3) Semi-trusted internal attack. The curious CC or any user node can also acquire data from the public communication flow, such as the messages from the user nodes to the GW or from the GW to the CC, and may try to infer a target user's fine-grained data from it.
An attack is an arrangement that enables unauthorized parties to gain access to private data or to tamper with secured data (even by the user itself) without being detected. In this work, we assume the SMs are tamper-resistant [7,20,23] and can perform the measurement and reporting operations normally, but we do not exclude the possibility of a node tampering with its own local aggregation values.
**3. Proposed Scheme**
_3.1. Initialization Phase_
3.1.1. Initializing the Pairwise Number k and the Session Keys
For every billing period, the CC randomly generates the pairwise number k for every node in one neighborhood and broadcasts it to all SMs.
We generate session keys between every pair of nodes with the computational Diffie-Hellman (DH) key exchange protocol; these serve as the initial keys for RC4 to generate the keystream between pairwise nodes. Once a node joins a neighborhood of size n, it generates a DH public key $g^a \pmod{M}$, keeps the secret key $a$ ($g$ and $M$ are DH parameters), and then broadcasts the public key. Through this computational Diffie-Hellman (CDH) key exchange, any two pairwise nodes can establish their session key, of the form $g^{ab}$.
3.1.2. Modular Addition
$user_i$ partitions its own data $x^i_d$ into k shares, denoted $x^i_{(j,d)}$ ($1 \le j \le k$), and sends them to its pairwise nodes. However, the partitioned data can easily be guessed, especially by brute-force search, as the consumption value at each timeslot is very small. For this reason, we impose extra noise (masking keys), known only to the pairwise nodes themselves, on the partitioned data to further secure the individual data.
3.1.3. Noise Addition
The masking keys, serving as extra noise, are generated by the pairwise nodes with the PRF at every timestamp. The PRF can be implemented with RC4; the specific process is described in Section 2.2.
_3.2. Encryption and Aggregation_
3.2.1. Data Encryption
(1) Partition of individual data. Each node randomly partitions its individual data into k shares and sends them to its k pairwise nodes along with the masking keys. The partition satisfies:

$$x^i_d = \sum_{j \in ind_i[s]} x^i_{(j,d)} \quad (1 \le s \le k) \quad (3)$$

(2) Generation of pairwise nodes and masking keys. Each node randomly chooses k nodes in one round as its pairing nodes, such that if $user_i$ selects $user_j$, then $user_j$ also selects $user_i$. With the session key between them, the two pairwise nodes generate a common key from RC4; $user_i$ adds $r^i_{(j,d)}$ to $x^i_{(j,d)}$, and $user_j$ adds $r^j_{(i,d)}$, which satisfies:

$$r^j_{(i,d)} = -r^i_{(j,d)} \quad (i \in ind_j[s];\ j \in ind_i[s]) \quad (4)$$

For $user_i$, the generated noise set at timestamp d can be denoted $r^i_{(ind_i[s],d)}$ ($s = 1, 2, \ldots, k$). Note that, in order to facilitate the temporal aggregation, the pairwise key generated by an SM at the $T$th timestamp should cancel the keys of the preceding timestamps, i.e.:

$$r^i_{(j,T)} = -\sum_{d=1}^{T-1} r^i_{(j,d)} \pmod{M} \quad (j \in ind_i[s]) \quad (5)$$

(3) Encryption process. At timestamp d, $user_i$ adds the pairwise noise to the partitioned data to generate the encrypted messages $E^i_{(j,d)} = x^i_{(j,d)} + r^i_{(j,d)}$ ($j \in ind_i[s]$), which it sends to its k pairwise nodes separately, while also receiving the encrypted messages they send.
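As a minimal sketch of Equations (3) and (4) for a single pair of users (the parameter values and names are illustrative, not from the paper), each reading is split into random shares modulo M, each share is masked with a pairwise key, and the opposite masks cancel when the masked shares are combined:

```python
# Minimal sketch of Sec. 3.2.1: share partitioning (Eq. (3)) plus pairwise
# masking with r^j_{(i,d)} = -r^i_{(j,d)} (Eq. (4)), all modulo M.
import random

M = 2**31 - 1

def partition(x, k):
    # k random shares summing to x modulo M (modular addition, Sec. 3.1.2).
    shares = [random.randrange(M) for _ in range(k - 1)]
    shares.append((x - sum(shares)) % M)
    return shares

def encrypt_shares(x, pairwise_keys):
    # pairwise_keys[j] = r^i_{(j,d)}; E^i_{(j,d)} = (x^i_{(j,d)} + r^i_{(j,d)}) mod M
    k = len(pairwise_keys)
    return {j: (s + r) % M
            for (j, r), s in zip(pairwise_keys.items(), partition(x, k))}

# user_i and user_j hold opposite masks, so masks cancel in any full sum.
r_ij = random.randrange(M)
E_i = encrypt_shares(42, {"user_j": r_ij})
E_j = encrypt_shares(17, {"user_i": (-r_ij) % M})
print((E_i["user_j"] + E_j["user_i"]) % M == (42 + 17) % M)   # True
```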
Figure 2 illustrates an example of spatial and temporal aggregation among pairwise users in multi-region groups.
**Figure 2. Example of multi-region spatial and temporal aggregation.**
For any SM node j, the encrypted data sent from one of its pairwise nodes i over a series of T timestamps are stored in matrix form as follows:

$$\begin{pmatrix} E^1_{(j,1)} & E^2_{(j,1)} & \cdots & E^i_{(j,1)} & \cdots & E^n_{(j,1)} \\ E^1_{(j,2)} & E^2_{(j,2)} & \cdots & E^i_{(j,2)} & \cdots & E^n_{(j,2)} \\ \vdots & \vdots & & \vdots & & \vdots \\ E^1_{(j,T)} & E^2_{(j,T)} & \cdots & E^i_{(j,T)} & \cdots & E^n_{(j,T)} \end{pmatrix}$$

$$E^i_{(j,d)} = \begin{cases} x^i_{(j,d)} + r^i_{(j,d)} & j \in ind_i[s] \\ 0 & j \notin ind_i[s] \end{cases} \quad (6)$$

3.2.2. Storage and Aggregation
(1) Spatial aggregation. Upon receiving the encrypted data of timeslot d from all its pairwise nodes, $user_i$ aggregates them into the local spatial value $LS(i,d)$:

$$LS(i, d) = \sum_{j \in ind_i[s]} \left( x^j_{(i,d)} + r^j_{(i,d)} \right) \bmod M \quad (1 \le s \le k) \quad (7)$$

Every user sends its local spatial aggregation $LS(i,d)$ to the GW at every timestamp. Upon receiving the local spatial aggregations $LS(i,d)$ from the nodes, the GW adds them together, and the pairwise keys cancel out. The total spatial aggregation is:

$$AS_d = \sum_{i=1}^{n} LS(i, d) \bmod M \quad (8)$$
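As a minimal end-to-end sketch of this spatial path (ours; for simplicity every node here pairs with every other node, i.e., k = n - 1, and the readings are illustrative), the masks of Equation (4) cancel in Equations (7) and (8):

```python
# Minimal sketch of spatial aggregation (Eqs. (7)-(8)): opposite pairwise
# masks, locally summed shares, and a GW total equal to the plaintext sum.
import random

M = 2**31 - 1
n = 4
x = [23, 41, 7, 19]                 # plaintext readings at timestamp d

# Pairwise masks r[i][j] = -r[j][i] mod M (Eq. (4)).
r = [[0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        r[i][j] = random.randrange(M)
        r[j][i] = (-r[i][j]) % M

# Each user i splits x[i] into n-1 shares and sends E = (share + mask) mod M.
E = [[0] * n for _ in range(n)]     # E[i][j]: what i sends to j
for i in range(n):
    others = [j for j in range(n) if j != i]
    shares = [random.randrange(M) for _ in others[:-1]]
    shares.append((x[i] - sum(shares)) % M)
    for j, s in zip(others, shares):
        E[i][j] = (s + r[i][j]) % M

LS = [sum(E[j][i] for j in range(n)) % M for i in range(n)]   # Eq. (7)
AS_d = sum(LS) % M                                            # Eq. (8)
print(AS_d == sum(x) % M)                                     # True
```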
(2). Temporal Aggregation

Every user node receives the encrypted data from its pairwise nodes and stores it as a matrix of T rows and n columns, as in Equation (6). In every billing period T, the user node aggregates every column of Equation (6) into a local temporal aggregation, in which the pairwise keys cancel out:

LT^i_(j,T) = Σ_{d=1}^{T} E^i_(j,d) mod M = Σ_{d=1}^{T} (x^i_(j,d) + r^i_(j,d)) mod M  (j ∈ ind_i[s])    (9)

Once the CC issues a temporal aggregation request for user_i to the GW, the pairwise nodes of user_i report their local temporal aggregations LT^i_(j,T) to the GW. The GW aggregates them into the temporal aggregation and transmits it to the CC; the aggregation process is as follows:

AT(i, T) = Σ_{j∈ind_i[s]} LT^i_(j,T) mod M, 1 ≤ s ≤ k    (10)

We assume j ∈ ind_i[s]; i ∈ ind_j[s]. Figure 3 shows the communication process between the pairwise nodes and the GW at timestamp d.

**Figure 3. Communication process between the pairwise nodes and the GW (SM_i generates x_d^i, randomly selects k pairwise nodes, generates the k pairwise keys r^i_(j,d) and its pairwise set S(i, d), partitions x_d^i into the shares x^i_(j,d), forms E^i_(j,d) as the sum of x^i_(j,d) and r^i_(j,d), and exchanges {E^i_(j,d), d} with SM_j (step ①); both compute the local spatial aggregation LS(i, d) at every timestamp and the local temporal aggregation LT^i_(j,T) every billing period and report them to the GW (step ②), which aggregates all LS(i, d) (1 ≤ i ≤ n) into AS_d and all LT^i_(j,T) (i ∈ ind_j[s]) into AT(j, T)).**

3.2.3. Decryption Process

In this way, the aggregation process is actually the decryption process, in which the random keys cancel out and the individual consumption over a billing period, or the spatial aggregation in a neighborhood, is revealed. Hence, the combination of simple modular addition with noise addition avoids the costly encryption and decryption operations of public-key cryptography.
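A minimal sketch of the temporal path (Eqs. (9)–(10)), again reusing the earlier helpers. It assumes, per Eq. (5), that a pair's masks sum to zero over the billing period, which is enforced here by choosing the T-th mask as minus the sum of the first T − 1 masks.

```python
def period_masks(session_key, i, j, T, modulus=M):
    """Masks of sender i toward j for d = 1..T; the T-th mask cancels the
    sum of the first T-1 masks, enforcing Eq. (5)."""
    ms = [pairwise_mask(session_key, i, j, d, modulus) for d in range(1, T)]
    ms.append((-sum(ms)) % modulus)
    return ms

def local_temporal(column, modulus=M):
    """LT^i_(j,T): node j sums the ciphertext column it stored for node i (Eq. (9))."""
    return sum(column) % modulus

def temporal_aggregate(LT_values, modulus=M):
    """AT(i, T): the GW sums the k local temporal aggregations (Eq. (10))."""
    return sum(LT_values) % modulus
```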
_3.3. Correctness Analysis_

We now prove the correctness of our encryption scheme in terms of spatial and temporal aggregation.

3.3.1. Spatial Aggregation

AS_d = Σ_{i=1}^{n} LS(i, d) mod M
     = Σ_{i=1}^{n} Σ_{j∈ind_i[s]} (x^j_(i,d) + r^j_(i,d)) mod M, 1 ≤ s ≤ k
     = (Σ_{i=1}^{n} Σ_{j∈ind_i[s]} x^j_(i,d) + Σ_{i=1}^{n} Σ_{j∈ind_i[s]} r^j_(i,d)) mod M
     = (Σ_{j=1}^{n} Σ_{i∈ind_j[s]} x^j_(i,d) + ½ (Σ_{i=1}^{n} Σ_{j∈ind_i[s]} r^j_(i,d) + Σ_{j=1}^{n} Σ_{i∈ind_j[s]} r^j_(i,d))) mod M
     = (Σ_{j=1}^{n} x_d^j + ½ (Σ_{i=1}^{n} Σ_{j∈ind_i[s]} r^j_(i,d) − Σ_{i=1}^{n} Σ_{j∈ind_i[s]} r^j_(i,d))) mod M
     = Σ_{j=1}^{n} x_d^j mod M    (11)

We prove the correctness of our spatial aggregation by permuting the rows and columns of the data matrix shown in Figure 2. Equation (11) shows that the spatial aggregation in a neighborhood equals the sum of the local spatial aggregations, i.e., the sum of the individual data.

3.3.2. Temporal Aggregation

AT(i, T) = Σ_{j∈ind_i[s]} LT^i_(j,T) mod M, 1 ≤ s ≤ k
         = Σ_{j∈ind_i[s]} Σ_{d=1}^{T} E^i_(j,d) mod M
         = Σ_{d=1}^{T} Σ_{j∈ind_i[s]} (x^i_(j,d) + r^i_(j,d)) mod M
         = Σ_{d=1}^{T} x_d^i mod M    (12)

Equation (12) shows that the temporal aggregation for one user node equals the sum of the local temporal aggregations from its pairwise nodes, i.e., the sum of its individual data over a series of T timestamps. This further proves the correctness of our temporal aggregation.
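A toy end-to-end check of Equations (11) and (12), built from the sketches above under the same simplifying assumption of full pairing (k = n − 1): the GW recovers exactly the spatial sum per timestamp and each user's temporal sum, even though every individual value stays masked in transit.

```python
import random

def simulate(n=5, T=4):
    """Verify Eqs. (11)-(12) on random toy data (full pairing, k = n - 1)."""
    random.seed(7)
    users = list(range(n))
    X = {(i, d): random.randrange(100) for i in users for d in range(1, T + 1)}
    keys = {(i, j): 9_000 + 10 * i + j for i in users for j in users if i < j}
    masks = {(i, j): period_masks(keys[min(i, j), max(i, j)], i, j, T)
             for i in users for j in users if i != j}
    store = {(j, i): [] for i in users for j in users if i != j}  # column of i at j
    for d in range(1, T + 1):
        for i in users:
            others = [u for u in users if u != i]
            for share, j in zip(partition(X[i, d], n - 1), others):
                store[j, i].append((share + masks[i, j][d - 1]) % M)
        LS = {j: sum(col[d - 1] for (recv, snd), col in store.items() if recv == j) % M
              for j in users}
        assert sum(LS.values()) % M == sum(X[i, d] for i in users)       # Eq. (11)
    for i in users:
        AT = temporal_aggregate(local_temporal(store[j, i]) for j in users if j != i)
        assert AT == sum(X[i, d] for d in range(1, T + 1))               # Eq. (12)

simulate()
```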
**4. Security Notions**

_4.1. Security Proof_

In this section, we elaborate the security properties of our scheme. In particular, based on the security requirements and attack model discussed in Section 2.6, we prove that our scheme can ensure the confidentiality of an individual user's fine-grained meterings, as well as aggregation integrity, i.e., that the local and total aggregations cannot be tampered with by malicious individual user nodes or by the aggregator. We first construct the Individual Metering Indistinguishability (IMI) security game to represent the adversary's actions.

**Definition 1. (IMI security game).**

_Setup: The challenger runs the initialization algorithm, first initializing a group of size n, and then gives the system parameter k to the adversary._

_Queries: The adversary can not only capture the meters' encrypted reports but can also issue encryption and compromise queries, subject to the constraints below._

_Encrypt: The adversary A chooses user_i and specifies x_d^i to ask for the ciphertext. The challenger returns the ciphertext E(x_d^i)._

_Compromise: The adversary A specifies an integer q ∈ {0, 1, · · ·, n}. If q = 0, the challenger grants the adversary the aggregator's capability; otherwise, it returns user_q's messages._

_Challenge: We denote by {C} the set of uncompromised users. The adversary randomly selects two meterings x_d^{i0} and x_d^{i1} (i ∈ {C}) at timestamp d. The challenger flips a random bit b ∈ {0, 1} uniformly and returns E(x_d^{ib}) to the adversary._

_Guess: The adversary outputs a guess b′ ∈ {0, 1}, and A wins if b = b′ with non-negligible advantage._

**Definition 2. (IMI security).**

_The proposed temporal and spatial aggregation scheme is IMI-secure if no probabilistic polynomial-time adversary A has more than a negligible advantage in the IMI security game. The advantage of A is:_

Adv_A = |Pr[b = b′] − 1/2| = 0    (13)

**Theorem 1. The proposed encryption scheme is IMI-secure.**

_The intuition behind the theorem is that no adversary can distinguish the encrypted individual meterings, so the scheme cannot leak any individual user's consumption at the d-th timestamp._

**Proof:**

Setup: The challenger initiates the whole system. The challenger generates a group of scale n and the pairs number k, and then gives the parameters (n, k) to the adversary.

Queries:

(1). Spatial aggregation

Encrypt: A issues the encryption query (i, d, x_d^i) to the challenger. The challenger generates the pairwise keys r^i_(j,d) (j ∈ ind_i[s]) between the pairwise nodes and imposes them on the randomly partitioned data x^i_(j,d) (j ∈ ind_i[s]) to generate the encrypted measures, formed as E(x_d^i) = x^i_(j,d) + r^i_(j,d) mod M (j ∈ ind_i[s]).

Compromise: A may compromise the aggregator or up to n − 1 users in any pairwise set in order to acquire more messages about the target users. However, the compromise is restricted when it meets uncompromised users.

Challenge: To simplify the proof without losing generality, we consider the extreme circumstance that |C| = 2; if the theorem holds in this circumstance, then it holds for |C| > 2. We assume user j is the only uncompromised user in ind_i[s] (1 ≤ s ≤ k). The adversary selects the two meterings and gives (i, j, d, x_d^{i0}, x_d^{i1}) to the challenger; the challenger flips a random bit b ∈ {0, 1} uniformly and returns E(x_d^i) when b = 0 and E(x_d^j) when b = 1, where:

E(x_d^i) = Σ_{l∈ind_i[s]} (x^i_(l,d) + r^i_(l,d)) mod M, 1 ≤ s ≤ k
         = (x^i_(j,d) + r^i_(j,d) + Σ_{c∈{ind_i[s]−j}} (x^i_(c,d) + r^i_(c,d))) mod M    (14)

E(x_d^j) = Σ_{l∈ind_j[s]} (x^j_(l,d) + r^j_(l,d)) mod M, 1 ≤ s ≤ k
         = (x^j_(i,d) + r^j_(i,d) + Σ_{c∈{ind_j[s]−i}} (x^j_(c,d) + r^j_(c,d))) mod M    (15)

From Equations (14) and (15), the adversary A cannot solve the two equations at the d-th timestamp to obtain the exact x^i_(j,d), even if it knows r^j_(i,d) = −r^i_(j,d), as the two equations contain three unknown variables; it is therefore even less possible for A to acquire x_d^i and x_d^j, which ensures the scheme's security.
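A standalone toy version of the challenge phase illustrates the intuition: because the unknown pairwise mask is (pseudo)random over Z_M, the challenge ciphertext is distributed independently of the hidden bit, so any guessing strategy wins with probability about 1/2. The strategy below is an arbitrary illustrative choice, not a real attack.

```python
import random

M = 2 ** 32
trials, wins = 100_000, 0
for _ in range(trials):
    x0, x1 = 13, 9_999                     # the two candidate meterings
    b = random.randrange(2)                # challenger's hidden bit
    r = random.randrange(M)                # fresh, unknown pairwise mask
    c = ((x1 if b else x0) + r) % M        # challenge ciphertext E(x_d^{ib})
    wins += (1 if c > M // 2 else 0) == b  # an arbitrary guessing strategy
print(wins / trials)                       # ~0.5, i.e., negligible advantage
```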
(2). Temporal aggregation

E(Σ_{d=1}^{T} x_d^i) = Σ_{d=1}^{T} Σ_{l∈ind_i[s]} (x^i_(l,d) + r^i_(l,d)) mod M, 1 ≤ s ≤ k
  = (Σ_{d=1}^{T} (x^i_(j,d) + r^i_(j,d)) + Σ_{d=1}^{T} Σ_{c∈{ind_i[s]−j}} (x^i_(c,d) + r^i_(c,d))) mod M
  = ((x^i_(j,T) + r^i_(j,T)) + Σ_{d=1}^{T−1} (x^i_(j,d) + r^i_(j,d)) + Σ_{d=1}^{T} Σ_{c∈{ind_i[s]−j}} (x^i_(c,d) + r^i_(c,d))) mod M
  = (x^i_(j,T) + Σ_{d=1}^{T−1} x^i_(j,d) + Σ_{d=1}^{T} Σ_{c∈{ind_i[s]−j}} (x^i_(c,d) + r^i_(c,d))) mod M    (16)

E(Σ_{d=1}^{T} x_d^j) = (x^j_(i,T) + Σ_{d=1}^{T−1} x^j_(i,d) + Σ_{d=1}^{T} Σ_{c∈{ind_j[s]−i}} (x^j_(c,d) + r^j_(c,d))) mod M    (17)

In Equations (16) and (17), the two equations with four unknown variables make it impossible for the adversary A to acquire x^i_(j,d) or x^j_(i,d).

Hence, the encrypted aggregation method ensures the indistinguishability of individual fine-grained meterings as long as there is at least one uncompromised user in the pairwise set. Our security properties rest on the randomness of the modular addition and of the stream cipher used to blind the individual meterings.

_4.2. Security Analysis_

We can also show that our proposed solution withstands the other attacks discussed in Section 2.6 and ensures the integrity of the aggregated data, whether total or local.

(1). Eavesdropping resistance

Our proposed scheme supports an open communication flow. Whether it is an internal node with access to the communication flow in a community or an external eavesdropper, they can only obtain the encrypted individual data (x^i_(j,d) + r^i_(j,d)), the local aggregation values (LS(i, d), LT^i_(j,T)), or the total aggregation values (AS_d, AT(i, T)) sent by the GW to the CC. None of these reveal the fine-grained data. We have proved that even if all but one node is compromised, the target metering still cannot be leaked. Hence, the proposed encryption method satisfies security requirement R1.

(2). False commands from the GW

The GW may attempt to obtain a target user's meterings by issuing false billing commands in the name of the CC, even if it cannot compromise the user's pairwise nodes; it tries to extract valuable information from them at any timestamp. Even so, it can only obtain indistinguishable individual meterings, due to Equations (14)–(17). We cannot exclude the possibility that all pairwise nodes of user_i at a timestamp are compromised by the malicious aggregator or external attackers; in this case, the target user_i's privacy is exposed. The probability that user_i does not select any honest node is (1 − k/(n − 1))^{n−1−|c|}. Obviously, the larger the value of |c| and the smaller the value of k, the bigger this probability. Taking it as large as plausible, assume n = 1000, k = 30, and |c| = 500 (50% of nodes compromised); then the probability is 2.47 × 10^{−7}. Such a small probability implies that it is almost impossible for a user not to select any honest node in one timestamp. Even if we fix a longer pairing period of T = 1 month, it would take 38.51 years to acquire the individual data.
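The reported collusion probability can be reproduced directly from the reconstructed expression; a two-line check, assuming nothing beyond the parameters stated above:

```python
# probability that user i selects no honest pairwise node when |c| of n
# users are compromised (Section 4.2, false-command analysis)
n, k, c = 1000, 30, 500
p = (1 - k / (n - 1)) ** (n - 1 - c)
print(f"{p:.2e}")   # 2.47e-07, matching the value reported in the text
```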
_4.3. Publicly Verifiable Property_

Security requirement R2, given earlier, is satisfied by the publicly verifiable property. We provide a public communication flow between the nodes in a neighborhood to ensure the integrity of the aggregated data: any internal node in the community can publicly verify the accuracy of the local aggregations from the other nodes and of the total aggregation from the GW, without compromising the individual fine-grained data. The public verification process comprises two parts:

4.3.1. Spatial Verification

Based on the public communication flow, any node in the neighborhood can obtain the encrypted messages, formed as x^i_(j,d) + r^i_(j,d), from the pairwise nodes and compute the local aggregations LS(i, d) and LT^i_(j,T); thus the total aggregations AS_d and AT(i, T) for the neighborhood can be computed and compared with the result reported by the GW. If the result is questionable, the user can report directly to the CC. With such supervision, the CC can detect the fraudulent behavior of a malicious GW.

4.3.2. Temporal Verification

The public verification method for the spatial aggregation is equally effective for temporal verification. For any node, each of its pairwise nodes in the neighborhood obtains its encrypted messages, formed as x^i_(j,d) + r^i_(j,d), over a billing period before computing its local temporal aggregation; thus its total temporal aggregation can be computed and verified by summing up the local temporal aggregations from all its pairwise nodes. In this way, the billed user itself, or any user node, can verify the accuracy of the billing from the GW without revealing individual fine-grained data. Hence, a malicious and fraudulent profile of the GW can be detected and reported to the CC in time.

**5. Performance Evaluation**

We evaluate the performance of the proposed aggregation scheme to assess its overheads. The performance metrics used in our empirical evaluation are defined as follows:

(1) Computation overhead: a node's runtime under the proposed scheme, in terms of spatial and temporal aggregation.
(2) Communication overhead: the size of a message transmitted between the nodes and the GW (number of bits).
(3) Security parameter k: we analyze the impact of different values of k on the two overheads.

We compare these results against several existing works [23,24] using performance metrics based on Friendly ARM [25] and the library in [17]. By comparing with them, we intend to illustrate the computing and communication advantages of our combination of PRF and modular addition methods, which are adopted respectively in the schemes of [23] and [24]. Each experiment consists of 50 independent trials, and the averaged results of these trials are reported. The computation time required for the primitive operations is listed in Table 2.

**Table 2. Average time for functions.**

| Notation | Description | Time Cost |
|---|---|---|
| Cadd | Addition | ≈0.038 ms |
| Chash | Hash (100 randoms) | ≈0.85 ms |
| Cmul | Multiplication | ≈0.013 ms |
| Chenc | Homomorphic encryption | ≈2.7 ms |
| Chdec | Homomorphic decryption | ≈0.61 ms |
| Cma | Hash/Modular addition | ≈0.0023 ms |
| Cprf | Pseudorandom function | ≈0.074 ms |

We fix the number of users at 1 million and the number of CCs at 10, and let the number of GWs range from 1 to 20. Let n denote the possible number of users in a group, ranging from 1 to 5000. We present the impact of different numbers of users per GW and of different values of k (ranging from 1 to 100) on the performance. For simplicity, we also assume that all SMs function normally.

_5.1. Computation Overhead_

(1). Spatial aggregation

Let Cma and Cprf denote the cost of a modular addition operation and of a key-generation operation with the PRF, respectively; let Cadd and Cmul denote the cost of an addition and a multiplication operation, respectively; and let Cenc and Cdec denote the cost of a homomorphic encryption and decryption operation, respectively. In our spatial aggregation scheme, for every node, partitioning the individual data into k partitions costs one Cma, generating k pairwise keys costs k·Cprf, and receiving k encrypted messages and adding them up costs k·Cadd; the computation overhead per node is therefore Cma + k·Cprf + k·Cadd, and the total computation overhead at the aggregator is (n − 1)·Cadd for aggregating the data from n nodes.

In Erkin et al.'s scheme [23], at the d-th time step, the hash function costs Chash, generating k masking random keys costs k·Cprf, computing the total masking key costs 2k·Cadd, and encrypting the individual data costs Cenc, so the total computation overhead is Chash + k·Cprf + 2k·Cadd + Cenc.

In Jia et al.'s scheme [24], at the d-th time step, the additive secret sharing costs Css, the k hash functions cost k·Chash, and the k-order polynomial operation x^k together with the k matrix multiplication operations costs (k² + 2k)·Cmul, so the total computation overhead is Css + k·Chash + (k² + 2k)·Cmul.

We provide the comparison of individual spatial computation overheads in Table 3.
**Table 3. Individual spatial computation overhead comparison (ms).**

| Scheme | Computation Overhead per Smart Meter |
|---|---|
| Scheme in [23] | Chash + k·Cprf + 2k·Cadd + Cenc |
| Scheme in [24] | Css + k·Chash + (k² + 2k)·Cmul |
| Our scheme | Cma + k·Cprf + k·Cadd |

As described in the related work, the scheme in Reference [23] sets all nodes as communication nodes instead of selecting a limited number of communication nodes as in our scheme and [22]; for a fair comparison, however, we assume that k communication nodes are selected, on the same experimental platform as the other schemes. Even under this relaxation, we can still show that ours is superior in terms of computation and communication cost through the following performance evaluation.

Figure 4 plots the comparison of the spatial computation overhead between our scheme and the schemes in References [23,24] as the value of k increases. It shows that all three schemes' computation overheads grow with k. The computation overheads of Reference [23] and ours are lower than that of the scheme in Reference [24], in which the polynomial operation x^k and the k matrix multiplication operations generate substantial computation overhead as k grows, costing significantly more than ours and Erkin et al.'s scheme [23]. Ours is slightly lower than the scheme in [23], and both are close to O(k)·Cprf.

**Figure 4. Variation of the spatial computation overhead per node with the value of k (comparing Erkin et al.'s scheme [23], Jia et al.'s scheme [24], and the proposed scheme).**
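The Table 3 formulas can be evaluated numerically from the Table 2 timings; the snippet below is an illustrative recomputation only (the Css timing of scheme [24] is not reported in Table 2 and is therefore omitted, and Chash is the reported cost for 100 randoms, so absolute values are indicative rather than exact reproductions of Figure 4).

```python
# per-meter spatial computation overhead (ms) from the Table 2 timings
C = dict(add=0.038, hash=0.85, mul=0.013, enc=2.7, ma=0.0023, prf=0.074)

def ours(k):  return C['ma'] + k * C['prf'] + k * C['add']
def erkin(k): return C['hash'] + k * C['prf'] + 2 * k * C['add'] + C['enc']
def jia(k):   return k * C['hash'] + (k ** 2 + 2 * k) * C['mul']  # Css omitted

for k in (10, 30, 50):
    print(k, round(ours(k), 2), round(erkin(k), 2), round(jia(k), 2))
```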
(2). Temporal aggregation

In the proposed scheme, each node chooses the same pairwise nodes in every billing period so as to satisfy Equation (5), so the total temporal computation overhead over T serial time slots for every node is T·(k·Cprf + k·Cadd + Cma) + T·Cadd.

In Erkin et al.'s scheme [23], each node sends T fine-grained utility readings, one in each of the T time steps, so the overhead per node is T·(Chash + k·Cprf + 2k·Cadd + Cenc) + T·Cmul. In fact, the temporal aggregation overhead of the scheme in Reference [23] is higher than this: with its modification of the Paillier encryption, the spatial and temporal aggregations are not synchronized. To compensate, every user must add an additional random key R(i, T+1) = r^n · Π_{d=1}^{T} h_d^{R(i,d)} at the T-th timestamp, which costs considerable overhead. Our scheme, by contrast, incurs no extra cost and requires no third-party involvement.

We set the fine-grained reporting interval to 15 minutes and the billing period to T = 2880 (roughly one month). Figure 5 plots the comparison of the two schemes in terms of the temporal computation overhead over one billing period, for k ranging from 0 to 50. From Figure 5, we can see that the temporal computation overhead per node grows with increasing k in both schemes; however, our proposed scheme increases only slightly compared with the scheme in Reference [23], as the latter spends substantial overhead on Paillier encryption, while our scheme achieves the same privacy-protection effect as asymmetric encryption using simple and low-cost modular addition.

**Figure 5. Variation of the temporal computation overhead per node with the value of k (comparing Erkin et al.'s scheme [23] and the proposed scheme).**
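The per-period totals follow from the spatial formulas by the factor T; an illustrative recomputation, reusing the timing dictionary `C` from the previous sketch (with the same caveats on the Chash timing, so the values are indicative):

```python
# per-node temporal computation overhead over a billing period (seconds)
T = 2880  # fifteen-minute slots in roughly one month

def ours_T(k):  return T * (k * C['prf'] + k * C['add'] + C['ma']) + T * C['add']
def erkin_T(k): return T * (C['hash'] + k * C['prf'] + 2 * k * C['add'] + C['enc']) + T * C['mul']

for k in (10, 30, 50):
    print(k, round(ours_T(k) / 1000, 1), round(erkin_T(k) / 1000, 1))
```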
_5.2. Communication Overhead_

We assume the packet format is the same as that in TinyOS [26]. The timestamp occupies 128 bits. The prime numbers p and q needed in the Paillier encryption are 512 bits each, and elements of Z*_n occupy 1024 bits. We further assume that the plaintext data occupies 32 bits; the random value from the stream cipher then occupies the same width as the plaintext data, a Paillier ciphertext occupies 4096 bits, and the hash function output with timestamp occupies 256 bits.

For simplicity, we denote by {|X|, |R|, |E|, |H|} the plaintext data size, the masking random key (noise) size, the Paillier ciphertext size, and the size of the hash-function random value, respectively.

5.2.1. Spatial Communication Overhead per Node

To generate the spatial aggregation, every node sends its local aggregation to the GW after adding up the encrypted messages from all k pairs. The data sent per node can be denoted as {LS(i, d) ∥ t}; its size is |X| + k·|R| + 128 bits (a partitioned share is |X|/k bits, so k partitions take |X| bits; a noise key takes |R| bits, so k noise keys take k·|R| bits), giving a total packet size of |X| + k·|R| + 128 bits.

For the scheme in Reference [23], the spatial aggregation packet per node takes the form {E ∥ H ∥ R ∥ t}; its size is |E| + |X| + k·|R| + |H| + 128 bits.

Every user node in Reference [24] generates k results; the data takes the form {(y1 ∥ y2 ∥ · · · ∥ yk) ∥ t}, in which each y_i involves the computation of data sharing and a hash random value, so its size is k·(k·|H| + |X|/k + 128) bits.

We provide the comparison of individual spatial communication overheads in Table 4.

**Table 4. Individual spatial communication overhead comparison (bits).**

| Scheme | Communication Overhead per Smart Meter |
|---|---|
| Scheme in [23] | |E| + |X| + k·|R| + |H| + 128 |
| Scheme in [24] | k·(k·|H| + |X|/k + 128) |
| Our scheme | |X| + k·|R| + 128 |
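Under the stated bit-width assumptions (|X| = 32, |R| = 32, |E| = 4096, |H| = 256), the Table 4 formulas evaluate as follows; the temporal totals of Section 5.2.2 then follow by the multipliers noted in the comment.

```python
# per-node spatial packet sizes (bits) from the Table 4 formulas
X, R, E, H = 32, 32, 4096, 256

def ours_bits(k):  return X + k * R + 128
def erkin_bits(k): return E + X + k * R + H + 128
def jia_bits(k):   return k * (k * H + X / k + 128)

for k in (10, 30, 50):
    print(k, ours_bits(k), erkin_bits(k), int(jia_bits(k)))
# temporal totals (Section 5.2.2): ours costs k * ours_bits(k) per billing
# period, while the scheme in [23] sends T * erkin_bits(k).
```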
We plot the comparison of individual communication overhead between our scheme and the other two schemes [23,24] during spatial aggregation in Figure 6. Clearly, all three schemes' individual overheads grow with increasing k. The packet width per node in the scheme of Reference [24] grows significantly faster than in the other two schemes, especially when k is relatively large, and its communication overhead approaches O(k²), due to the x^k polynomial operation per node before the matrix multiplication operation. Our scheme's growth rate is close to that of the scheme in Reference [23], which is always slightly higher than ours, due to the relatively larger public-key ciphertext width.

**Figure 6. Variation of the spatial communication overhead per node with the value of k (comparing Erkin et al.'s scheme [23], Jia et al.'s scheme [24], and the proposed scheme).**

5.2.2. Temporal Communication Overhead per Node

Figure 7 shows the comparison between our scheme and the scheme in [23] in terms of the temporal communication overhead per node, with k ranging from 0 to 600 and T ranging from 0 to 6000 minutes.

**Figure 7. Variation of the temporal communication overhead per node with the values of k and T ((a) Erkin et al.'s scheme [23]; (b) the proposed scheme).**
In Figure 7, our scheme reduces the packet size sent per node by almost three orders of magnitude compared with the scheme in [23], owing to the latter's high public-key encryption overhead. During temporal aggregation, if the cost of exchanging randoms between communication nodes is negligible, then in [23] every node sends its serial encrypted packets, formed as {E ∥ H ∥ R ∥ t} (1 ≤ t ≤ T), to the aggregator, so the packet size is T·(|E| + |X| + k·|R| + |H| + 128) bits. In our scheme, one node's temporal aggregation is computed synchronously by its k communication nodes before being reported to the aggregator, and each sends a local temporal aggregation packet of |X| + k·|R| + 128 bits to the aggregator every T timeslots, so aggregating one node's temporal consumption over T serial time slots costs k·(|X| + k·|R| + 128) bits. Hence, when k ≪ T, our overhead is always significantly lower than that of the scheme in Reference [23]. As described above, we reduce the number of communication nodes in Reference [23] to k, and the performance evaluation shows that the proposed combination of modular addition and masking keys from a PRF saves considerable computation and communication overhead compared with traditional public-key encryption, without compromising individual privacy.

**6. Conclusions**

In this paper, we resolved three issues concerning the privacy-protecting aggregation of smart meterings, customized to the SG. Firstly, the combination of simple modular addition and a PRF that we designed achieves the same effect as the most closely related works at lower overhead, namely fending off malicious internal attacks without compromising individual fine-grained data. Secondly, we innovatively proposed a publicly verifiable platform, by which every node in a neighborhood can verify the local aggregation from every node and the total aggregation from the GW, and detect fraudulent profiles from malicious internal nodes or dishonest user nodes. Thirdly, every node randomly chooses k nodes, rather than all nodes, as pairwise nodes to communicate with, which significantly saves communication and computation overhead, and the independence from the number of users provides scalability and high efficiency under the big-data circumstances of the SG. The performance evaluation shows that the proposed scheme is applicable to the security and privacy protection of the SG and has practical significance.

**Author Contributions: L.Z. and J.Z. designed the hierarchical architecture model, attack models, communication models, and encryption methods together; J.Z. optimized the communication models; and L.Z. wrote the paper.**
**Funding: This research was funded by the National Natural Science Foundation of China (NSFC) (2017–2020, No. 51679058).**

**Conflicts of Interest: The authors declare no conflict of interest.**

**References**

1. Wang, W.; Lu, Z. Cyber security in the Smart Grid: Survey and challenges. Comput. Netw. 2013, 57, 1344–1371.
2. Ambrosin, M.; Hosseini, H.; Mandal, K. Despicable me(ter): Anonymous and fine-grained metering data reporting with dishonest meters. In Proceedings of the 2016 IEEE Conference on Communications and Network Security (CNS 2016), Philadelphia, PA, USA, 17–19 October 2016; pp. 163–171.
3. Krebs, B. FBI: Smart Meter Hacks Likely to Spread. Available online: http://krebsonsecurity.com/2012/04/fbi-smart-meter-hackslikely-to-spread/ (accessed on 7 April 2012).
4. Ohara, K.; Sakai, Y.; Yoshida, F.; Iwamoto, M.; Ohta, K. Privacy-preserving smart metering with verifiability for both billing and energy management. In Proceedings of the 2nd ACM Workshop on ASIA Public-Key Cryptography (ASIAPKC'14), Kyoto, Japan, 3–6 June 2014; pp. 23–32.
5. Lincoln, K.; Philip, K.; Christopher, M. The use of RC4 encryption for smart meters. In Proceedings of the 2014 International Conference on Sustainable Research and Innovation, Nairobi, Kenya, 7–9 May 2014; pp. 58–62.
6. Lu, R.; Liang, X.; Li, X.; Lin, X.; Shen, X. EPPA: An efficient and privacy-preserving aggregation scheme for secure smart grid communications. IEEE Trans. Parallel Distrib. Syst. 2012, 23, 1621–1631.
7. Garcia, F.D.; Jacobs, B. Privacy-friendly energy-metering via homomorphic encryption. In Security and Trust Management (STM 2010); LNCS 6710; Springer: Berlin/Heidelberg, Germany, 2011; pp. 226–238.
8. Wang, X.; Mu, Y.; Chen, R. An efficient privacy-preserving aggregation and billing protocol for smart grid. Secur. Commun. Netw. 2016, 9, 4536–4547.
9. Chen, L.; Lu, R.; Cao, Z. PDAFT: A privacy-preserving data aggregation scheme with fault tolerance for smart grid communications. Peer-to-Peer Netw. Appl. 2015, 8, 1122–1132.
10. He, D.; Kumar, N.; Zeadally, S. Efficient and privacy-preserving data aggregation scheme for smart grid against internal adversaries. IEEE Trans. Smart Grid 2017, 8, 2411–2419.
11. Bao, H.; Lu, R. A new differentially private data aggregation with fault tolerance for smart grid communications. IEEE Internet Things J. 2015, 2, 248–258.
12. Kursawe, K.; Danezis, G.; Kohlweiss, M. Privacy-friendly aggregation for the smart-grid. In Proceedings of the International Symposium on Privacy Enhancing Technologies, Cambridge, UK, 27–29 July 2011; pp. 175–191.
13. Shi, Z.; Sun, R.; Lu, R.; Chen, L.; Chen, J.; Shen, X.S. Diverse grouping-based aggregation protocol with error detection for smart grid communications. IEEE Trans. Smart Grid 2015, 6, 2856–2868.
14. Gupta, S.S.; Maitra, S.; Paul, G.; Sarkar, S. (Non-)random sequences from (non-)random permutations—Analysis of RC4 stream cipher. J. Cryptol. 2014, 27, 67–108.
15. Mahmood, K.; Chaudhry, S.A.; Naqvi, H.; Shon, T.; Ahmad, H.F. A lightweight message authentication scheme for smart grid communications in power sector. Comput. Electr. Eng. 2016, 52, 114–124.
16. Li, H.; Lu, R.; Zhou, L.; Yang, B.; Shen, X. An efficient Merkle-tree-based authentication scheme for smart grid. IEEE Syst. J. 2014, 8, 655–663.
17. Chim, T.W.; Yiu, S.M.; Li, V.O.K. PRGA: Privacy-preserving recording gateway-assisted authentication of power usage information for smart grid. IEEE Trans. Depend. Secur. Comput. 2015, 12, 85–97.
18. Fan, C.I.; Huang, S.Y.; Lai, Y.L. Privacy-enhanced data aggregation scheme against internal attackers in smart grid. IEEE Trans. Ind. Inform. 2014, 10, 666–675.
19. Castelluccia, C.; Mykletun, E.; Tsudik, G. Efficient aggregation of encrypted data in wireless sensor networks. In Proceedings of the 2nd International Conference on Mobile and Ubiquitous Systems: Networking and Services (MOBIQUITOUS'05), San Diego, CA, USA, 17–21 July 2005; pp. 109–117.
20. Dimitriou, T.; Awad, M.K. Secure and scalable aggregation in the smart grid resilient against malicious entities. Ad Hoc Netw. 2016, 50, 58–67.
21. Rahman, M.A.; Manshaei, M.H.; Al-Shaer, E.; Shehab, M. Secure and private data aggregation for energy consumption scheduling in smart grids. IEEE Trans. Depend. Secur. Comput. 2017, 14, 221–234.
22. Shamir, A. How to share a secret. Commun. ACM 1979, 22, 612–613.
23. Erkin, Z.; Tsudik, G. Private computation of spatial and temporal power consumption with smart meters. In Proceedings of the 10th International Conference on Applied Cryptography and Network Security (ACNS'12), Singapore, 26–29 June 2012; pp. 561–577.
24. Jia, W.; Zhu, H.; Cao, Z.; Dong, X.; Xiao, C. Human-factor-aware privacy-preserving aggregation in smart grid. IEEE Syst. J. 2014, 8, 598–607.
25. FriendlyARM. Available online: http://www.friendlyarm.net/ (accessed on 17 August 2011).
26. Ahlswede, R.; Csiszár, I. Common randomness in information theory and cryptography I: Secret sharing. IEEE Trans. Inf. Theory 1993, 39, 1121–1132.

© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
{ "disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.3390/APP9030490?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.3390/APP9030490, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "CCBY", "status": "GOLD", "url": "https://www.mdpi.com/2076-3417/9/3/490/pdf?version=1548927505" }
2,019
[]
true
2019-01-31T00:00:00
[ { "paperId": "c93662c31635020079f18e4bc41b9eac7b19549b", "title": "Efficient and Privacy-Preserving Data Aggregation Scheme for Smart Grid Against Internal Adversaries" }, { "paperId": "2789fde75cddbfd59dc9582add23d50bbd95d88b", "title": "Secure and Private Data Aggregation for Energy Consumption Scheduling in Smart Grids" }, { "paperId": "a921c477112716f5f0749fb4907dd76286ec1600", "title": "An efficient privacy-preserving aggregation and billing protocol for smart grid" }, { "paperId": "17a14af6522d344a044c2567658d1754e005a29a", "title": "Secure and scalable aggregation in the smart grid resilient against malicious entities" }, { "paperId": "f6370a080c9463b0413e62ddce8ab458801edfdc", "title": "Despicable me(ter): Anonymous and fine-grained metering data reporting with dishonest meters" }, { "paperId": "4862b22ffcca2c7e578b6a94c1251ffd4c752679", "title": "A lightweight message authentication scheme for Smart Grid communications in power sector" }, { "paperId": "e08b1f7a7f9a933ea03a4b6063ec7425de77c655", "title": "Diverse Grouping-Based Aggregation Protocol With Error Detection for Smart Grid Communications" }, { "paperId": "a55399ff53108b10db3265d32d2e8b1f08841b10", "title": "A New Differentially Private Data Aggregation With Fault Tolerance for Smart Grid Communications" }, { "paperId": "e9fe07ff6b4fbe7d9113d7909578de71333269b5", "title": "Privacy-preserving smart metering with verifiability for both billing and energy management" }, { "paperId": "0013d84b59ab11392862d47a270723e46dc455cb", "title": "Human-Factor-Aware Privacy-Preserving Aggregation in Smart Grid" }, { "paperId": "f77a386ce173edde475d1f98818f560ff87d69a8", "title": "An Efficient Merkle-Tree-Based Authentication Scheme for Smart Grid" }, { "paperId": "bd5bc5eaaf5686e8525c36df8bba9d7e190d1aad", "title": "The use of RC4 Encryption for Smart Meters" }, { "paperId": "b33495fe50e37034b731cdacd76059b65344f70d", "title": "PDAFT: A privacy-preserving data aggregation scheme with fault tolerance for smart grid communications" }, { "paperId": "f01c9a4010f3b4d03360834399d385d8bee52a27", "title": "Privacy-Enhanced Data Aggregation Scheme Against Internal Attackers in Smart Grid" }, { "paperId": "0b458ce6c0d6d7fd20499e5b64a46132d7c380f2", "title": "Cyber security in the Smart Grid: Survey and challenges" }, { "paperId": "d7b856e16f1f5ad328d2d3e4602190937b7c46ab", "title": "(Non-)Random Sequences from (Non-)Random Permutations—Analysis of RC4 Stream Cipher" }, { "paperId": "a996987a529f6c436f1c348af2ff4c1983b18b23", "title": "EPPA: An Efficient and Privacy-Preserving Aggregation Scheme for Secure Smart Grid Communications" }, { "paperId": "528d3b56227b2a91c55116db36d00598ca8a6511", "title": "Private Computation of Spatial and Temporal Power Consumption with Smart Meters" }, { "paperId": "8104944dab19c8c16656238686f0e90e9bb461a8", "title": "Privacy-Friendly Aggregation for the Smart-Grid" }, { "paperId": "1bbc6fae2cf1d6dba0954384b7393651fcc61d8a", "title": "Privacy-Friendly Energy-Metering via Homomorphic Encryption" }, { "paperId": "6f5ba057d6fb21b474c3d626e91807d42befe51c", "title": "Efficient aggregation of encrypted data in wireless sensor networks" }, { "paperId": "5553970601d94c090eb2d3431b653106bf271c4c", "title": "Common randomness in information theory and cryptography - I: Secret sharing" }, { "paperId": "88abb2cda4f2a57499a717966ac4fbe9a993027a", "title": "How to share a secret" }, { "paperId": "1c339243e44c835b2ceeb8fb2be183e1bdd45567", "title": "PRGA: Privacy-Preserving Recording & Gateway-Assisted Authentication of Power 
Usage Information for Smart Grid" }, { "paperId": null, "title": "FBI : Smart Meter Hacks Likely to Spread" }, { "paperId": null, "title": "FriendlyARM" }, { "paperId": null, "title": "Smart Meter Hacks Likely to Spread" } ]
21,608
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Business", "source": "s2-fos-model" }, { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Economics", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/0241483aa2623ab398de10982b958d01e6f5e453
[ "Computer Science" ]
0.831905
Incentive-Driven Information Sharing in Leasing Based on a Consortium Blockchain and Evolutionary Game
0241483aa2623ab398de10982b958d01e6f5e453
Journal of Theoretical and Applied Electronic Commerce Research
[ { "authorId": "1390607004", "name": "Hanlei Cheng" }, { "authorId": "48514961", "name": "J. Li" }, { "authorId": "2115405226", "name": "Jing Lu" }, { "authorId": "1778780", "name": "Sio-Long Lo" }, { "authorId": "2057445669", "name": "Zhiyu Xiang" } ]
{ "alternate_issns": null, "alternate_names": [ "J Theor Appl Electron Commer Res" ], "alternate_urls": [ "http://www.jtaer.com/" ], "id": "890beb40-ba59-4681-9bb6-88ed97b7decb", "issn": "0718-1876", "name": "Journal of Theoretical and Applied Electronic Commerce Research", "type": "journal", "url": "http://www.scielo.cl/scielo.php?lng=en&pid=0718-1876&script=sci_serial" }
Blockchain technology (BCT) provides a new way to mitigate the default risks of lease contracts resulting from the information asymmetry in leasing. The conceptual architecture of a consortium blockchain-based leasing platform (CBLP) is first proposed to facilitate information sharing between small and medium-sized enterprises (SMEs, the “lessees”) and leasing firms (LFs, the “lessors”). Then, based on evolutionary game theory (EGT), this study builds a two-party game model and analyzes the influences of four types of factors (i.e., information sharing, credit, incentive–penalty, and risk) on SMEs’ contract compliance or default behaviors with/without blockchain empowerment. The primary findings of this study are as follows: (1) SMEs and LFs eventually evolve to implement the ideal “win–win” strategies of complying with the contract and adopting BCT. (2) The large residual value of the leased asset can tempt SMEs to conduct a default action of unauthorized asset disposal, while leading LFs to access the CBLP to utilize information shared on-chain. (3) When the maintenance service is outsourced instead of being provided by lessors, the maintenance fee is not a core determinant affecting the equilibrium state. (4) There is a critical value concerning the default penalty on-chain to incentivize the involved parties to keep their commitments. (5) The capability of utilizing information, storage overhead, and security risk should all be taken into consideration when deciding on the optimal strategies for SMEs and LFs. This study provides comprehensive insights for designing an incentive mechanism to encourage lessees and lessors to cooperatively construct a sustainable and trustworthy leasing environment.
_Article_

# Incentive-Driven Information Sharing in Leasing Based on a Consortium Blockchain and Evolutionary Game

**Hanlei Cheng** [1], **Jian Li** [1,2], **Jing Lu** [3,4], **Sio-Long Lo** [1,]* and **Zhiyu Xiang** [4]

1 Faculty of Innovation Engineering, Macau University of Science and Technology, Macau 999078, China
2 School of Advanced Manufacturing, Guangdong University of Technology, Jieyang 522000, China
3 Department of Computer Science and Technology, Hubei University of Education, Wuhan 430205, China
4 Blockchain Laboratory, YGSoft Incorporation, Zhuhai 519085, China
* Correspondence: sllo@must.edu.mo

**Citation:** Cheng, H.; Li, J.; Lu, J.; Lo, S.-L.; Xiang, Z. Incentive-Driven Information Sharing in Leasing Based on a Consortium Blockchain and Evolutionary Game. J. Theor. Appl. Electron. Commer. Res. 2023, 18, 206–236. https://doi.org/10.3390/jtaer18010012

Academic Editor: Jani Merikivi. Received: 28 September 2022; Revised: 16 December 2022; Accepted: 24 January 2023; Published: 29 January 2023.

**Copyright:** © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

**Abstract: Blockchain technology (BCT) provides a new way to mitigate the default risks of lease contracts resulting from the information asymmetry in leasing. The conceptual architecture of a consortium blockchain-based leasing platform (CBLP) is first proposed to facilitate information sharing between small and medium-sized enterprises (SMEs, the "lessees") and leasing firms (LFs, the "lessors"). Then, based on evolutionary game theory (EGT), this study builds a two-party game model and analyzes the influences of four types of factors (i.e., information sharing, credit, incentive–penalty, and risk) on SMEs' contract compliance or default behaviors with/without blockchain empowerment. The primary findings of this study are as follows: (1) SMEs and LFs eventually evolve to implement the ideal "win–win" strategies of complying with the contract and adopting BCT. (2) The large residual value of the leased asset can tempt SMEs to conduct a default action of unauthorized asset disposal, while leading LFs to access the CBLP to utilize information shared on-chain. (3) When the maintenance service is outsourced instead of being provided by lessors, the maintenance fee is not a core determinant affecting the equilibrium state. (4) There is a critical value concerning the default penalty on-chain to incentivize the involved parties to keep their commitments. (5) The capability of utilizing information, storage overhead, and security risk should all be taken into consideration when deciding on the optimal strategies for SMEs and LFs. This study provides comprehensive insights for designing an incentive mechanism to encourage lessees and lessors to cooperatively construct a sustainable and trustworthy leasing environment.**

**Keywords: small and medium-sized enterprises; leasing; blockchain technology; evolutionary game theory; information sharing**
**1. Introduction**

Small and medium-sized enterprises (SMEs) typically encounter capital constraints when buying heavy machinery and industrial equipment for manufacturing, such as forklifts, trucks, hoists, etc. [1]. To cut back on capital expenses, leasing an asset from the Original Equipment Manufacturer (OEM) or a Leasing Firm (LF) is a common and economical option for meeting the demand for equipment [2]. Leasing has become a popular financing instrument [3]. In general, a lessee (e.g., an SME) selects the required equipment, and a lessor then either directly leases out an asset it manufactures (if the lessor is the OEM) or purchases the requested asset from the OEM in order to lease it out (if the lessor is an LF, such as a financial institution or a firm specializing in leasing), with the lessee paying rent to the lessor in exchange for using the asset [4]. When the leasing service period expires, the lessee may opt to retain, renew, or return the leased equipment, depending on the lease's contractual provisions. The leasing business emphasizes the separation of the ownership and use rights of the leased asset [5].

Compared with purchasing, although leasing is more flexible and cost-efficient for a lessee, the current leasing business encounters several challenges, such as a lack of knowledge about the lessee's credit history, an inability to fully track assets in real time, and a failure to discover default behavior arising from information asymmetry. Blockchain technology (BCT) stores data in a tamper-proof ledger that can be shared P2P among many nodes without the aid of a trusted third party [6], which helps transmit data efficiently and accurately among multiple organizations, particularly in the area of equipment leasing (asset management). Hence, in general, the lessee and lessor are encouraged to participate in information sharing on the blockchain. Nonetheless, it is challenging for lessees or lessors (particularly SMEs) to develop or participate in a blockchain-based application system (especially a consortium permissioned blockchain system), due to budget constraints on the BCT membership fee, heavy data storage (computation) charges, and other barriers [7]. Meanwhile, considering that stakeholders may maliciously handle sensitive data, participants might be reluctant to share critical information with other parties [8] unless sufficient incentives nudge the lessee. However, previous researchers have not performed a quantitative exploration of stakeholders' BCT-participation behaviors in the context of leasing, where there is a trade-off among the factors of information sharing, credit, incentive–penalty, and risk. On the other hand, although some research exists on blockchain applied to leasing, the interactive behaviors between lessees and lessors have rarely received adequate attention. Hence, our study aims to bridge this gap by addressing the following research questions:

(1) How can the blockchain technically drive information sharing and storage between the SME (the "lessee") and the LF (the "lessor")?
(2) How can excellent lessees be incentivized to share more information, while rational lessees and lessors both maximally benefit from the leasing business empowered by BCT?
(3) How can the lessee and lessor adjust their behavioral strategies so that all parties' payoffs reach equilibrium through continuous trial-and-error learning?
To address these questions, we need to accomplish the following research objectives. First, a conceptual architecture of a consortium blockchain-based leasing platform (CBLP) is devised, suitable for information sharing among SMEs and LFs in a P2P distributed network. Secondly, we employ evolutionary game theory (EGT) to formulate a game model, taking fully into consideration the four kinds of factors (i.e., information sharing, credit, incentive–penalty, and risk) that affect the leasing strategies of the two game parties (lessee and lessor). Finally, the main impacting results are discussed in depth, and policy implications are provided on that basis.

In addition, compared with other similar works investigating BCT strategies using evolutionary games, the novelty and contributions of this study can be summarized as follows:

(1) Our evolutionary game model is developed for the blockchain-based leasing business (specifically the operating lease) in manufacturing and pays particular attention to how the SME's leasing behavior (i.e., making the rental payments, reverting the leased asset, maintenance responsibility, and asset monitoring) dynamically changes with the BCT adoption/non-adoption strategy. This study can mitigate the shortcomings of today's leasing management.
(2) We provide a more comprehensive analysis demonstrating how the four factors of information sharing, credit, incentive–penalty, and risk dynamically impact the lessee's compliance with the LC and the lessor's decision-making on BCT adoption. More importantly, we carefully consider the technical barriers faced by organizational players when implementing BCT, such as on-chain and off-chain storage overheads, leasing transaction verification overheads, and credit assessment in BCT.
(3) Based on the game analysis, our experimental results can help LFs (the "lessor") comprehensively understand how SMEs (the "lessee") meet the obligations in the LC, and they offer implications to policymakers when designing a proper incentive mechanism for leases.

The remainder of this paper is structured as follows: Section 2 provides a theoretical background on the leasing business, BCT, leasing empowered by BCT, and evolutionary games integrated with BCT. Section 3 exhibits the CBLP composed of SMEs and LFs. Section 4 states the problem description. Section 5 builds the game model. Section 6 provides a mathematical analysis of the model's stability. Section 7 conducts a numerical simulation. Some recommendations and policy implications are provided in Section 8.

**2. Literature Review**

The following subsections give an overview of the crucial terminology (i.e., leasing and BCT), the state of the art of BCT-based leasing, and BCT strategies using evolutionary game theory, which together serve as a solid theoretical background for this study. After that, the main challenges in conventional leasing are highlighted.

_2.1. Definition of Leasing_

SMEs typically need to decide between leasing and buying an expensive heavy asset (such as real estate, transportation equipment, industrial equipment, etc.). In recent years, leasing assets has become a popular financing tool for SMEs to solve capital problems in the supply chain [9].
According to the Accounting Standard IAS 17, "a lease is an agreement whereby the lessor conveys to the lessee in return for payment or series of payment the right to use an asset for an agreed period of time" (see, e.g., European Commission, 2012) [10]. From an accounting perspective, there are two types of lease: a capital lease and an operating lease [11]. In a capital lease, the lessor transfers ownership of the asset to the lessee at the end of the lease. In contrast, an operating lease only grants the lessee the right to use the asset and requires the asset to be reverted to the lessor, such that the lessor will either re-lease the asset under another LC or sell it to release the residual value. At present, the operating lease dominates the leasing market.

Concerning the determinants of default actions on the LC, Kaposty et al. [12] defined an LC as having defaulted when the lessee becomes insolvent or the lessor terminates the contract due to an overdue payment owed by the lessee. The latter case is considered in this study. Difficulty in repossessing the leased asset is also one of the consequences of defaulting [13]. Altman et al. [14] discovered that a lessee with poor creditworthiness defaults more easily, resulting in higher default losses. On the other hand, Kysucky and Norden [15] revealed that reducing information asymmetry between the lessee and lessor could motivate the lessee to maintain its reputation in order to obtain future leases. In addition, an exhaustive inspection of asset maintenance and disposals plays an essential role in contract defaults [12]. However, the current leasing system lacks the ability to reliably record real-time information (including documents) about the leased asset's operational activity, which hampers the lessor's ability to ensure the lessee's compliance with the LC.

_2.2. Blockchain Technology (BCT)_

Blockchain technology (BCT) was initially proposed by Satoshi Nakamoto; it enables transactions to be encapsulated in data blocks and appended to a ledger as a chain structure [16]. It allows distributed and mutually distrustful nodes to validate transactions through consensus mechanisms while utilizing cryptography (i.e., public–private key encryption and hash functions) to ensure data integrity. By its nature, blockchain technology makes transactions synchronous, non-reversible, immutable, and traceable in distributed databases, enabling organizations to store and share reliable data without double-checking [17]. Blockchain technology can effectively solve the problem of information asymmetry [18] and monitor the asset's operation in real time, which helps the lessor to alleviate the default risks posed by a low-credit lessee [19]. Therefore, it is beneficial to encourage SMEs (the "lessee") to use BCT and energetically participate in information sharing.

BCT is generally classified into the permissionless blockchain and the permissioned blockchain [20], depending on whether or not nodes must be granted access to participate in the blockchain network [14,17]. The permissionless blockchain, also called the public blockchain, is open access, allowing any node to participate in the consensus procedure; examples include Bitcoin and Ethereum.
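The tamper-evidence property described above comes from hash-linking every block to its predecessor. The following minimal Python sketch is illustrative only (field names such as `lease_id` are hypothetical, and real platforms add consensus, digital signatures, and Merkle trees on top); it shows how editing any historical record breaks every subsequent link:

```python
import hashlib
import json
import time

def block_hash(block: dict) -> str:
    """Deterministically hash a block's contents (order-stable JSON)."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

class MiniLedger:
    """A minimal append-only hash-linked ledger illustrating tamper evidence."""
    def __init__(self):
        genesis = {"index": 0, "tx": [], "prev_hash": "0" * 64, "ts": 0}
        self.chain = [genesis]

    def append(self, transactions: list) -> None:
        prev = self.chain[-1]
        block = {
            "index": prev["index"] + 1,
            "tx": transactions,
            "prev_hash": block_hash(prev),  # link to the predecessor block
            "ts": time.time(),
        }
        self.chain.append(block)

    def verify(self) -> bool:
        """Recompute every link; any edit to an old block breaks the chain."""
        return all(
            self.chain[i]["prev_hash"] == block_hash(self.chain[i - 1])
            for i in range(1, len(self.chain))
        )

ledger = MiniLedger()
ledger.append([{"lease_id": "LC-001", "event": "rent_paid", "amount": 8}])
ledger.append([{"lease_id": "LC-001", "event": "asset_inspected"}])
assert ledger.verify()
ledger.chain[1]["tx"][0]["amount"] = 0  # attempt to tamper with history
assert not ledger.verify()              # the broken hash link exposes it
```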
The permissioned blockchain can be further categorized into the private permissioned blockchain, in which whitelisted participants from one organization are selected in advance to join the invitation-only network, and the consortium permissioned blockchain, which is operated under the control of several authorized organizations and allows identifiable participants to execute certain on-chain actions; examples include Quorum, Hyperledger Fabric, and Corda. The consortium blockchain is becoming popular with enterprises, where a group of companies collaboratively uses the blockchain to improve business processes. Any organization can apply to join the blockchain network, but only authorized organizations granted membership are allowed to write or read information on-chain [21]. This study is dedicated to introducing a consortium permissioned blockchain jointly maintained by the OEM, SMEs, LFs, third-party maintenance centers (MCs), and regulators. At present, blockchain has been widely used in many industries for sharing information, such as the supply chain [22], energy [23], healthcare [24], industrial manufacturing [25], smart cities [26], and online education [27].

_2.3. BCT Application in Leasing_

Currently, there is relatively limited research on the application of BCT in the leasing business. Most research focuses on how BCT fosters information exchange throughout the lifecycle of the leasing process and improves the efficiency and transparency of leased asset management. For instance, to address the issues of lengthy negotiation cycles and cumbersome financing procedures caused by information asymmetry, IBM proposed a crane leasing model based on the IBM Blockchain Platform that requires the identity of the leased crane to be registered on the blockchain and leasing transactions to be recorded on the leasing blockchain [28]. Leased physical aircraft can be tokenized via blockchain, facilitating asset management [13]. In addition, several researchers are particularly interested in BCT-based car leasing. Auer et al. [29] developed a prototype blockchain-IoT-based car-leasing platform, demonstrating that the blockchain can facilitate collaboration among stakeholders to some extent while relying on an appropriate balance among factors such as security, authenticity, traceability, and scalability. Their work also emphasizes deciding whether to store car-renting events on- or off-chain to support scalability, as agreed by Faber et al. [30]. To address the problem of inefficiency in delivering and searching records, Agyekum et al. [31] used Ethereum to construct a car-leasing platform that enables the transfer of ownership of a leased car by invoking a transaction on the blockchain, helping the regulator clearly monitor every leasing transaction.

The aforementioned cases imply the following potential benefits of the blockchain applied in leasing: (1) Stakeholders (such as lessors) spend less time verifying the authenticity of leasing information on-chain, since the blockchain records lease contracts and financial transactions in a non-editable way, which reduces the credit investigation cost [32]. (2) Smart contracts deployed on a distributed ledger can help automate lease payments or ownership transfers, speeding up the processing of rental transactions [33].
(3) All historical events associated with the leased assets' operation and maintenance, as well as the provenance-related financing activities, are objectively recorded by multiple nodes on-chain, which guarantees asset traceability and data credibility in the leasing business [34]. Hence, BCT helps SMEs and LFs effectively choose a suitable potential lessor/lessee with whom to sign the LC.

_2.4. Evolutionary Game Theory (EGT)_

Evolutionary game theory (EGT) was developed to explore the behavior of large populations of boundedly rational agents who repeatedly engage in strategic interactions under incomplete information [35]. In contrast with classic game theory, EGT has the advantage of analyzing how a game player dynamically changes their own strategic decisions over time by learning from and adapting to the other's strategic decisions [36]. A core concept in EGT is the evolutionary stable strategy (ESS), which, if adopted by all players, cannot be invaded by alternative strategies [37]; a minimal numerical sketch of this dynamic is given at the end of this section.

Several researchers have applied EGT to leasing issues. For example, to study the safety supervision of tower crane operation, Chen et al. [38] established a tripartite evolutionary game model, which reveals that when the penalty resulting from poor crane supervision is greater than the total safety investment cost, the stakeholders will apply strict supervision strategies to the asset. In addition, some scholars have conducted similar work on deciding whether to adopt BCT using evolutionary game theory, although most of it focuses on the supply chain (finance). For instance, Su et al. [39] constructed a tripartite game model to explore the evolutionary stability strategies regarding BCT among core enterprises (CEs), SMEs, and financial institutions (FIs), discovering that relatively large default losses can help ensure that SMEs repay receivables on time. Tang et al. [40] used an evolutionary game to demonstrate that BCT can effectively facilitate information sharing in supply chain collaborations. Sun et al. [41] established an evolutionary game model revealing that BCT impacts credit risk, which plays a vital role in deciding whether financial institutions accept factoring applications in supply chain finance. Song et al. [42] analyzed a tripartite game model of an agricultural supply chain, discovering that blockchain operation costs significantly affect the behaviors of governments and agricultural enterprises.

Based on the above literature analysis, it can be found that most research focuses on proposing a blockchain-based leasing scheme or on using EGT to study SMEs' BCT strategies in the supply chain. Nevertheless, our work not only provides a consortium blockchain-based leasing platform (CBLP) to streamline information sharing between lessees and lessors, but also contributes a two-player evolutionary game model analyzing the lessee's leasing behaviors (i.e., complying with or defaulting on the LC) together with the decision of whether to adopt BCT for information sharing. This study will shed light on the long-term development of the leasing industry.
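As referenced above, the following minimal Python sketch illustrates replicator dynamics for a generic symmetric two-strategy game. The payoff numbers are hypothetical, chosen only so that both pure strategies are candidate ESSs and the interior rest point acts as a basin boundary, mirroring the saddle-point structure derived later in this paper:

```python
import numpy as np

# Payoff matrix of a symmetric 2-strategy game (row = own strategy).
# Hypothetical numbers: the interior rest point is at x* = 0.6.
A = np.array([[3.0, 1.0],
              [2.0, 2.5]])

x = 0.5    # initial share of the population playing strategy 0
dt = 0.01
for _ in range(5000):
    p = np.array([x, 1 - x])
    fitness = A @ p                    # expected payoff of each strategy
    avg = p @ fitness                  # population-average payoff
    x += dt * x * (fitness[0] - avg)   # replicator equation dx/dt = x(f0 - f_avg)

# Starting below the interior rest point (0.5 < 0.6), the population
# tips toward strategy 1, so x converges to 0; starting above, to 1.
print(f"long-run share of strategy 0: {x:.3f}")
```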
**3. Description of Consortium Blockchain-Based Leasing Platform (CBLP)**

Since the evolutionary model constructed in this study is of significant relevance to information storage (i.e., on- and off-chain) and consensus mechanisms (i.e., Raft with credit evaluation and transaction verification), this section first presents the conceptual architecture of the proposed consortium blockchain-based leasing platform (CBLP) and then concisely explains the transaction verification process.

_3.1. Conceptual Architecture of CBLP_

This research first provides a consortium blockchain-based leasing platform integrated with RFID devices [43], which incorporates stakeholders such as the OEM, SMEs, LFs, MCs, and regulators. The platform is built using Hyperledger Fabric (HLF) [44], an enterprise-grade permissioned blockchain platform facilitating information sharing among multiple organizations [45,46]. In general, an authorized node is required to pay fees to access the consortium blockchain [47]. In the distributed ledger, only organizations with valid IDs can process transactions. More specifically, all authorized lessees and lessors have their PKI-CA certificates and unique Decentralized Identifiers (DIDs) registered on a blockchain with restricted access [48,49]. By scanning the RFID sensor tags encapsulated in the DID, the running condition of the leased asset is recorded as a transaction, which is then turned into a "block" and appended to the ledger. This means that the lessee cannot refute or alter the historical logs of equipment operation and maintenance. This data reliability is conducive to efficiently managing the whole life cycle of the physical leased asset [50], which can help to reduce inspection costs during the leasing period. It is also critical for each participant to be certain about asset traceability in cases of fraud, damage, dispossession, or misdisposition.

On the other hand, due to the large data sets shared by many various stakeholders, there is a need to consider data storage and scalability challenges when assessing to what extent SMEs are willing to adopt BCT [51]. Our work leverages a hybrid on-chain and off-chain storage mechanism to store and access information (especially complex data) on the CBLP [52,53]. Specifically, the encrypted raw data (e.g., files) are stored on an off-chain cloud storage provider (CSP) or a distributed storage system (e.g., the InterPlanetary File System, IPFS) [54]. The off-chain data are linked with their specific metadata via a hash pointer, which is stored as a transaction validated to the ledger and can be used to audit that the off-chain data were not modified [55]. The architecture of the CBLP with on- and off-chain information storage mechanisms is shown in Figure 1.
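A minimal sketch of this hybrid storage idea, assuming a simple content-addressed store (Python dictionaries stand in for the CSP/IPFS back end and the validated ledger; identifiers such as `LC-001` and `FL-42` are hypothetical):

```python
import hashlib

off_chain_store = {}  # stand-in for a CSP bucket or an IPFS node
on_chain_ledger = []  # stand-in for validated CBLP transactions

def put_off_chain(raw: bytes) -> str:
    """Store (already encrypted) raw data off-chain, keyed by content hash."""
    cid = hashlib.sha256(raw).hexdigest()
    off_chain_store[cid] = raw
    return cid

def anchor_on_chain(cid: str, meta: dict) -> None:
    """Record only the compact hash pointer plus metadata on-chain."""
    on_chain_ledger.append({"pointer": cid, **meta})

def audit(cid: str) -> bool:
    """Re-hash the off-chain payload; a mismatch means it was modified."""
    return hashlib.sha256(off_chain_store[cid]).hexdigest() == cid

doc = b"<encrypted maintenance log for leased forklift FL-42>"
cid = put_off_chain(doc)
anchor_on_chain(cid, {"lease_id": "LC-001", "type": "maintenance_log"})
assert audit(cid)  # untampered payload verifies against its on-chain pointer
```

The design point is that only the small, fixed-size pointer is replicated by every node, while the bulky payload lives off-chain yet remains auditable.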
**Figure 1.** The architecture of CBLP with an on- and off-chain information storage mechanism.

_3.2. Raft Consensus Based on Credit_

Since various transactions are executed by triggering the relevant smart contracts (e.g., the lease contract, asset ownership transfer contract, lease payment contract, and data sharing contract) across multiple organizational nodes, a consensus protocol is used by the CBLP. It plays a crucial role in ensuring that transactions are recorded in an agreed order on-chain. Meanwhile, to prevent untrusted consortium stakeholders from uploading false information or changing private data content on-chain in order to appear compliant with the contract, we use a Raft consensus algorithm combined with "credit" incentives to maintain the blockchain. That is, an SME (Org-lessee node A) and an LF (Org-lessor node B) adopt the "execute–order–validate" flow to record transactions on-chain [45]. This is the most fundamental transaction process of Hyperledger Fabric at present and will not be elaborated on further due to the limitations of this paper's length; it is depicted in Figure 2.

**Figure 2.** Transaction flow in Hyperledger Fabric with the Raft consensus algorithm.

Notably, the Raft consensus protocol follows the "leader and follower" architecture [56,57] to implement the ordering service, where leader nodes are dynamically elected from the Consenter Set, and a node's credit value determines whether it can join the Consenter Set [58]. When the credit value exceeds a predetermined threshold, the node can join the Consenter Set as a consensus node. On the contrary, when the credit is lower than the minimum threshold, the node will face a penalty imposed by the CBLP. In addition, we propose that the credit value is increased or decreased according to the contract compliance or default behavior of the (SME) Org-lessee node in the leasing business; the more frequently it conforms to the lease contract, the higher its credit value and the greater the likelihood that the node will be selected as the "Leader" in the Consenter Set to package transactions into a new block and finalize it.
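The credit-gating idea can be sketched as follows. This is a simplification in Python: the thresholds and credit deltas are assumed values, actual Raft leader election in Hyperledger Fabric is vote-based, and only the credit-based eligibility and ranking proposed here are modeled:

```python
JOIN_THRESHOLD = 10.0  # assumed credit needed to enter the Consenter Set
MIN_THRESHOLD = 2.0    # assumed floor below which the CBLP imposes a penalty

credits = {"sme_a": 12.0, "sme_b": 9.0, "lf_1": 15.0}

def update_credit(node: str, complied: bool, delta: float = 1.0) -> None:
    """Raise credit on LC compliance; cut it more sharply on default."""
    credits[node] += delta if complied else -3 * delta

def consenter_set() -> list:
    """Nodes whose credit clears the join threshold may take part in ordering."""
    return [n for n, c in credits.items() if c >= JOIN_THRESHOLD]

def elect_leader() -> str:
    """Pick the highest-credit consenter as the round's leader candidate."""
    return max(consenter_set(), key=credits.get)

def penalized() -> list:
    """Nodes below the minimum threshold face a platform penalty."""
    return [n for n, c in credits.items() if c < MIN_THRESHOLD]

update_credit("sme_b", complied=True)  # sme_b pays rent on time...
update_credit("sme_b", complied=True)  # ...twice, crossing the join threshold
print(consenter_set())  # ['sme_a', 'sme_b', 'lf_1']
print(elect_leader())   # 'lf_1' (highest credit leads this round)
```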
Credit assessed for the SME cannot, on its own, be treated as an indicator of the lessee's reputation and contribution to the leasing business; still, it can help the enterprise earn more recognition and achieve more lease financing opportunities from lessors, encouraging each lessee to share information on the CBLP [58].

In summary, based on the right balance between the above on- and off-chain storage costs and the credit incentive mechanisms, nodes will choose to actively comply with the LC and upload authentic information on-chain by joining the consortium blockchain for larger gains, resulting in the eventual emergence of a Nash equilibrium.

**4. Problem Description**

In this section, we first describe the studied problem. The basic lease scenario explored in this study is then provided to better understand the game model. Afterward, the critical parameters are elucidated and presented in Table 1.
**Table 1.** Parameters and their explanation under the conventional/blockchain-based lease mode.

| Mode | Party | Notation | Definition |
| --- | --- | --- | --- |
| Under the conventional lease mode | SME (Org-lessee node A) | $R$ | Total rental payments to the LF under the terms of the lease |
| | | $r_1$ | Return rate of the SME on the lease |
| | | $r_3$ | Reinvestment rate of the SME after the contract's default |
| | | $f$ | Maintenance fee for the leased asset during the lease period |
| | | $p_1$ | Default penalty of the SME under the conventional lease |
| | | $\sigma$ | Incentive given to the SME by the LF due to LC compliance |
| | LF (Org-lessor node B) | $r_2$ | Return rate of the LF on the lease |
| | | $C_t$ | Marginal credit investigation costs of the LF |
| | | $C_0$ | Original acquisition cost of the leased asset |
| | | $C_1$ | Monitoring cost of the asset's operation under the conventional lease |
| | | $v_s$ | Residual value of the leased asset at the end of the lease |
| | | $\varepsilon$ | Loss rate of the LF caused by the contract default |
| Under the blockchain-based lease mode | SME (Org-lessee node A) | $C_b$ | Membership cost of the SME joining the consortium blockchain |
| | | $\Delta v_c$ | Increased credit value of the SME due to LC compliance on-chain |
| | | $I$ | Fixed reward when mining a block on-chain |
| | | $p_2$ | Default penalty of the SME on-chain |
| | | $Z_A$ | Quantity of information shared by the SME on-chain |
| | | $u_A$ | Relative computing power provided by the SME on-chain |
| | LF (Org-lessor node B) | $g$ | Synergy gain on the lease business empowered by the blockchain |
| | | $C_2$ | Monitoring cost of the asset's operation under the blockchain-based lease |
| | | $Z_B$ | Quantity of information shared by the LF on-chain |
| | | $u_B$ | Relative computing power provided by the LF on-chain |
| | SME and LF | $\phi$ | Coefficient of information transmission efficiency on-chain |
| | | $\omega$ | Validation cost coefficient of confirming transactions on-chain |
| | | $\lambda$ | Storage cost coefficient of information stored off-chain (CSP/IPFS) |
| | | $\eta$ | Security risk coefficient of sharing information on-chain |

_4.1. Description of Problem_

The game model involves two types of players in a lease: the lessee (i.e., an SME) and the lessor (i.e., an LF). Under a conventional lease, the SME is responsible for paying fees for the right to use an asset leased from the LF, and generally, the SME as a lessee must maintain the asset to ensure that it remains in an operational condition [59]. The LF will pay the credit investigation cost to evaluate whether an SME can pay its rent on time. Meanwhile, it is difficult for the parties to immediately share information (including historical default records) and for the LF to monitor the leased equipment/assets in real time. Applying BCT can solve the aforementioned issues [29]. If the LF requires the SME to join the consortium so as to upload information (such as historical asset operation documents or payment performance) to the CBLP, not only can a credit review be instantly conducted, but the ownership and provenance of the leased asset can also be tracked in real time throughout the asset's operation. Moreover, a smart contract can be automatically executed to make the lease payment as agreed in the lease contract (LC), which yields synergetic gains in the leasing process and improves the efficiency of asset management empowered by the blockchain [60]. Although sharing and processing a greater quantity of information on-chain can significantly improve the precision of each party's decisions, it is essential to take into consideration the corresponding data storage overheads, security exposure, and transaction verification costs when practically using BCT.
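To make the automated payment-and-penalty logic concrete, here is a toy Python stand-in for such a contract. It is not real Hyperledger Fabric chaincode; the figures reuse $R = 8$ and $p_2 = 6$ from the parameter values adopted later in Table 5, and the escrow deduction mirrors the automatic penalty transfer assumed in Section 5 (Assumption A6):

```python
from dataclasses import dataclass, field

@dataclass
class LeaseContract:
    """Toy stand-in for an on-chain lease payment contract."""
    rent_due: float        # periodic rental payment R
    penalty: float         # on-chain default penalty p2
    deposit: float         # lessee's token deposit held in escrow
    ledger: list = field(default_factory=list)

    def pay_rent(self, period: int, amount: float) -> None:
        if amount >= self.rent_due:
            self.ledger.append(("rent_paid", period, amount))
        else:
            # Shortfall: deduct the penalty from the escrowed deposit,
            # as a stand-in for an automatic on-chain penalty transfer.
            self.deposit -= self.penalty
            self.ledger.append(("default_penalized", period, self.penalty))

lc = LeaseContract(rent_due=8.0, penalty=6.0, deposit=20.0)
lc.pay_rent(period=1, amount=8.0)  # compliant payment is simply recorded
lc.pay_rent(period=2, amount=0.0)  # missed payment triggers the penalty
print(lc.deposit)  # 14.0
print(lc.ledger)
```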
Once the SME and LF adopt the CBLP, they are subject to harsher punishments for defaulting behavior, which undermines the enterprise's reputation across the whole network. In addition, a blockchain-based credit evaluation mechanism can enhance the effective management of the leased asset, which is also conducive to mining reliable data blocks in the distributed ledger. Therefore, this study models the problem by considering the SME and LF as individual decision-makers choosing between "complying with or defaulting on the LC" and "accessing or not accessing the CBLP", and it comprehensively analyzes the dynamic influence of the four factors (i.e., information sharing, credit, incentive–penalty, and risk) on the choice of strategy by using evolutionary games.

_4.2. Basic Lease Scenarios_

The game model is constructed based on an operating lease of heavy equipment (e.g., forklifts, trucks, and hoists) in the manufacturing supply chain. Generally, such equipment is relatively expensive to purchase, and leasing it is a better option for SMEs. The LF (the "lessor") first acquires an asset from an OEM and lends the asset to the SME (the "lessee") for a specific term in exchange for periodic rental payments. Once the LC is signed, the SME has the right to use the asset, whereas ownership of the leased asset remains with the LF. Therefore, the SME must comply with the contract, not only paying the full rent on time but also reverting the asset to the LF at the maturity date of the lease; otherwise, the SME will pay a penalty for its default. In addition, the LC specifies that the lessee takes responsibility for the maintenance and outsources it to the OEM or MCs rather than the lessor (the LF in this case).

_4.3. Model Parameters_

(1) Rental payments ($R$): the monthly/quarterly rent that the SME (the "lessee") pays to the LF (the "lessor").
(2) Return rate ($r_1$, $r_2$): the yield that can be earned when completing the investment activity on the lease.
(3) Reinvestment rate ($r_3$): the yield that the lessee expects to earn when it does not pay, or defers, the full rental price, which can be put into other investments for extra gains.
(4) Maintenance fee ($f$): the cost of carrying out maintenance actions to ensure that the leased asset is in a proper operating condition. In this study, the LC states that the maintenance service must be provided by MCs and completed until the lease termination; the maintenance fee is not embedded in the rental payment.
(5) Loss rate ($\varepsilon$): the loss that could result from the lessee's defaulting behavior; for instance, if the lessee defaults by not returning the leased asset at the end of the lease, the asset cannot be re-leased to the next lessee upon termination of the previous LC.

The relevant parameters' notations and definitions are shown in Table 1.

**5. Model Formulation**

In this section, an evolutionary game model between the SME (the "lessee") and the LF (the "lessor") is developed. Before the mathematical payoff matrix is constructed, some assumptions are first provided.

_5.1. Basic Assumptions_

**Assumption 1 (A1): Rational Participants Assumption.** All participants in the game are boundedly rational [61]. Nodes with constrained computing power may not be able to perfectly utilize all of the information on-chain owing to hardware faults or network congestion.
Under asymmetric information, each node will constantly select the optimal strategy to maximize its interests while being affected by multiple factors, and it will eventually reach a state of equilibrium [62,63].

**Assumption 2 (A2): Strategy Selection Assumption.** Assume that the SME (the "lessee") chooses to "comply with or default on the LC" with the respective probabilities $x$ and $1-x$, $x \in [0,1]$. The LF (the "lessor") chooses to "access or not access the CBLP", utilizing the information shared by stakeholders, with the respective probabilities $y$ and $1-y$, $y \in [0,1]$.

**Assumption 3 (A3): Default Behavior Assumption.** The SME's unilateral default behavior is assumed to consist of two primary actions: one is not making the full rental payment by the due date, and the other is failing to return the leased asset to the lessor when the LC expires (for simplicity, this study considers the two actions to occur simultaneously when modeling).

**Assumption 4 (A4): Information Sharing Assumption.** The quantity of information shared by a player is $Z_i$, and the ability of each player to process and utilize the information on-chain [40] is $u_j$, which depends on its computing power. Hence, the amounts of effective information on-chain that the SME and the LF obtain from each other are, respectively, $u_A Z_B$ and $u_B Z_A$.

**Assumption 5 (A5): Credit Assumption.** Assume that each SME is assigned a credit value $\Delta v_c$, which increases with LC compliance performance. The higher the credit value owned by the SME, the greater the possibility of the enterprise becoming the "Leader" in the Raft consensus protocol (Section 3.2) that validates the lease transactions in each consensus round, thus improving the lessee's reputation and recognition on-chain.

**Assumption 6 (A6): Incentive–Penalty Assumption.** After the SME participates in information sharing, if the enterprise breaches the LC or tries to tamper with the existing LC to legalize its default behavior on-chain, this has a profoundly negative effect on the leasing business; hence, the SME's default penalty intensity is larger under the CBLP, that is, $p_2 > p_1$. The penalty can be deducted from the lessee's token deposit by executing a smart contract that transfers the corresponding amount [64].

**Assumption 7 (A7): (Technology) Risk Assumption.** BCT can improve end-to-end data transmission efficiency ($\phi$), but meanwhile, each player has to bear on-chain data validation costs ($\omega$) and off-chain storage costs ($\lambda$) due to the blockchain's storage limitations. A player may also suffer security risks ($\eta$), such as data leakage and network attacks [65]. For practical relevance, the benefits of sharing information are assumed to outweigh the relevant costs after joining the blockchain network, i.e., $\phi > \omega + \lambda + \eta$.

**Assumption 8 (A8): Other Cost Assumption.**

**A8.1:** The blockchain can record the SME's credit history as tamper-resistant and traceable data [66], and if the SME joins the blockchain network, the marginal credit investigation cost ($C_t$) will shrink and asymptotically approach zero.

**A8.2:** In leased asset management empowered by BCT, the LF can continuously monitor the condition of the leased asset at lower cost [67]; as such, $C_1 > C_2$.

_5.2. Payoff Matrix_

In this model, there are four different strategies.
Considering that the model is complicated to understand, this subsection thoroughly explains each player's payoff under each strategy, based on the above lease scenario and assumptions.

(1) Strategy I: $S_1$ = {Comply, Access}

Since the SME (the "lessee") fully abides by the LC and the LF accesses the CBLP, taking advantage of information sharing on-chain, the SME obtains the reward $I$ for successfully mining a block, the effective information utilization $[(\phi-\omega-\lambda-\eta)Z_A + u_A Z_B]$, the return on the lease $Rr_1$, the incentive $\sigma$ given by the LF (the "lessor"), plus the credit value; however, the SME has to pay the rental $R$, the maintenance fee $f$, and the consortium membership cost $C_b$. On the other hand, the LF obtains the rent $R$, the return on the lease $Rr_2$, the effective information utilization $[(\phi-\omega-\lambda-\eta)Z_B + u_B Z_A]$, the synergy gain $g$ on the lease empowered by BCT, plus the residual value $v_s$ of the leased asset after receiving it back from the SME, while bearing the cost of purchasing the asset from the OEM at price $C_0$, the monitoring cost $C_2$, and the consortium membership cost $C_b$. Therefore, in Strategy I, the payoffs of the SME and LF are formulated as in Equations (1) and (2), respectively:

$$P_A^{S_1} = I + [(\phi-\omega-\lambda-\eta)Z_A + u_A Z_B] + R(r_1-1) + \sigma + \Delta v_c - f - C_b \quad (1)$$

$$P_B^{S_1} = R(r_2+1) + v_s + [(\phi-\omega-\lambda-\eta)Z_B + u_B Z_A] + g - C_0 - C_2 - C_b \quad (2)$$

(2) Strategy II: $S_2$ = {Default, Access}

Due to its default actions (Section 4.2), the SME uses the rent for reinvestment and disposes of the leased asset, which has been exhaustively used for manufacturing, at the end of the lease. This gives the SME the chance to earn the extra reinvestment return $Rr_3$ and to sell the leased asset at the market value of the residual $v_s$. Adopting BCT provides the SME with the effective information utilization $[(\phi-\omega-\lambda-\eta)Z_A + u_A Z_B]$. However, to keep the leased asset operating effectively without impacting production, the SME still needs to pay the maintenance fee to the MC (instead of the LF) and will be punished with $p_2$ as a result of the default actions. Meanwhile, although the LF can obtain the effective information utilization empowered by BCT, the SME's default behavior not only causes the LF to miss the rental payment $R$, but also leads to it losing the further earnings $R\varepsilon$ from re-leasing to other lessees, because the leased asset is out of its control when the rental period is complete. The asset acquisition cost $C_0$, monitoring cost $C_2$, and consortium membership cost $C_b$ still occur. Therefore, in Strategy II, the payoffs of the SME and LF are formulated as in Equations (3) and (4), respectively:

$$P_A^{S_2} = Rr_3 + v_s + [(\phi-\omega-\lambda-\eta)Z_A + u_A Z_B] - f - p_2 - C_b \quad (3)$$

$$P_B^{S_2} = [(\phi-\omega-\lambda-\eta)Z_B + u_B Z_A] - R\varepsilon - C_0 - C_2 - C_b \quad (4)$$

(3) Strategy III: $S_3$ = {Comply, Not-access}

When the SME actively keeps to the stipulations of the LC, the SME earns the investment return $Rr_1$ on the lease and is rewarded by the LF with the incentive $\sigma$. However, the SME needs to pay the maintenance fee $f$ to ensure that the leased asset is in good condition.
For the lessor, the LF not only obtains the benefit $Rr_2$ from the leasing activity, but it also retains the value of the leased asset $v_s$ at the end of the LC. Nonetheless, the insufficient integrity of the SME's credit records forces the LF to incur the credit audit expense $C_t$ before making a decision on the lease. The costs ($C_0$, $C_1$) of acquiring and monitoring the leased asset are ineluctable. Therefore, in Strategy III, the payoffs of the SME and LF are formulated as in Equations (5) and (6), respectively:

$$P_A^{S_3} = R(r_1-1) + \sigma - f \quad (5)$$

$$P_B^{S_3} = R(r_2+1) + v_s - C_0 - C_t - C_1 \quad (6)$$

(4) Strategy IV: $S_4$ = {Default, Not-access}

Following Strategies II and III above, the SME will always earn the reinvestment return $Rr_3$ and the residual value $v_s$, but it may also suffer the default punishment $p_1$. In addition, although the SME resells the leased asset upon defaulting, the enterprise has to take responsibility for its maintenance $f$ to ensure that the leased asset remains in an operational condition for manufacturing. If the SME breaches the LC without joining the blockchain network, the LF does not obtain any returns and is still charged the credit auditing cost $C_t$, the asset acquisition cost $C_0$, and the monitoring cost $C_1$. Therefore, in Strategy IV, the payoffs of the SME and LF are formulated as in Equations (7) and (8), respectively:

$$P_A^{S_4} = Rr_3 + v_s - f - p_1 \quad (7)$$

$$P_B^{S_4} = -R\varepsilon - C_0 - C_t - C_1 \quad (8)$$

Consequently, the payoff matrix of the two-party game is shown in Table 2, where each cell lists the SME's and the LF's payoffs as defined in Equations (1)–(8).

**Table 2.** Evolutionary game payoff matrix of the SME and LF.

| SME (Org-lessee node A) \ LF (Org-lessor node B) | Access | Not-Access |
| --- | --- | --- |
| Comply | $\left(P_A^{S_1},\ P_B^{S_1}\right)$ | $\left(P_A^{S_3},\ P_B^{S_3}\right)$ |
| Default | $\left(P_A^{S_2},\ P_B^{S_2}\right)$ | $\left(P_A^{S_4},\ P_B^{S_4}\right)$ |

**6. Model Stability Analysis**

This section first constructs the replicator dynamic equations for the SME and the LF, then discusses in depth how the two rational players reach an equilibrium state by iteratively changing strategies. A mathematical sensitivity analysis of each type of factor (i.e., information sharing, credit, incentive–penalty, and risk) is ultimately provided.

_6.1. Replicator Dynamic System_

Based on the above evolutionary game payoff matrix, we can calculate the expected returns of the SME (the "lessee") and the LF (the "lessor") when they choose different strategies, and then construct the replicator dynamic equations for each subject.

6.1.1. Replication Dynamic Equation of the SME

Let $E_x$ and $E_{1-x}$ denote the expected returns of the SME's complying with and defaulting on the LC, respectively, and let $\overline{E}_x$ denote the average return.
Then:

$$E_x = y\left[I + [(\phi-\omega-\lambda-\eta)Z_A + u_A Z_B] + R(r_1-1) + \sigma + \Delta v_c - f - C_b\right] + (1-y)\left[R(r_1-1) + \sigma - f\right] \quad (9)$$

$$E_{1-x} = y\left[Rr_3 + v_s + [(\phi-\omega-\lambda-\eta)Z_A + u_A Z_B] - f - p_2 - C_b\right] + (1-y)\left(Rr_3 + v_s - f - p_1\right) \quad (10)$$

$$\overline{E}_x = xE_x + (1-x)E_{1-x} \quad (11)$$

The replication dynamics equation (RDE) [68] of the SME is denoted as follows:

$$F(x) = \frac{dx}{dt} = x\left(E_x - \overline{E}_x\right) = x(1-x)\left(E_x - E_{1-x}\right) = x(1-x)\left[y(I + \Delta v_c + p_2 - p_1) + (r_1 - r_3 - 1)R + \sigma + p_1 - v_s\right] \quad (12)$$

Let $F(x) = 0$; we obtain the stationary points of the differential equation as follows:

$$x_1^* = 0, \quad x_2^* = 1 \quad (13)$$

$$y^* = \frac{(1 + r_3 - r_1)R + v_s - \sigma - p_1}{I + \Delta v_c + p_2 - p_1} \quad (14)$$

Based on Equation (14), we can discover the following, as shown in Figure 3.

**Figure 3.** (**a**) The dynamic trend of the SME's strategy in the case of $y > y^*$; (**b**) the dynamic trend of the SME's strategy in the case of $y < y^*$.

When $y = y^*$, the LF accesses the CBLP to use the information on-chain with a probability of $y^*$, and $\frac{\partial F(x)}{\partial x} = 0$ always holds. That is, the state is always stable, regardless of the value of $x$; moreover, any change in other exogenous variables will not alter the stability of the state. Here, $x$ is the equilibrium point, and all states are stable.

When $y \neq y^*$, the system needs to satisfy two requirements to obtain evolutionary stability, i.e., $F(x^*) = 0$ and $F'(x^*) < 0$. Then:

- In the case of $y > y^*$, $F'(1) < 0$, and $x = x_2^* = 1$ is an evolutionary stable strategy (ESS). When the probability of the LF accessing the CBLP is larger than $y^*$, the SME will converge with the equilibrium strategy of "comply with the LC", and the number of SMEs that abide by the contract will gradually increase.
- In the case of $y < y^*$, $F'(0) < 0$, and $x = x_1^* = 0$ is an evolutionary stable strategy (ESS).
This implies that more SMEs will eventually evolve into a stable state of defaulting on the LC, since LFs struggle to distinguish forged credit records without BCT [69].

According to Equation (14), when the probability of the LF choosing to access the CBLP (and requiring the SME to join the consortium) is small, the SME tends to breach the LC. Moreover, the "comply with or default on the LC" decision of the SME has nothing to do with the information sharing ($Z_i$, $u_j$), the asset maintenance fee ($f$), or the consortium membership fee ($C_b$). In contrast, the determinant of the decision is the size of the gap between the lease reinvestment earnings (i.e., $(1 + r_3 - r_1)R$) that the SME would gain by defaulting and the reward (i.e., $\sigma$) it would receive for compliance.

In addition, Equation (14) gives some further insights. The residual value of the leased asset ($v_s$) is positively correlated with the threshold probability $y^*$ at which the LF chooses to access the CBLP. This is because if the LC stipulates that the lessor (i.e., the LF) ensures that the residual value of the leased asset is immutably recorded on-chain, this mitigates the uncertainty of the residual value risk, aiding the lessor in retaining ownership of the asset at the end of the lease. Meanwhile, the default margin penalty ($\frac{p_2 - p_1}{p_1}$) imposed on the SME is negatively correlated with $y^*$. When $\frac{p_2 - p_1}{p_1}$ decreases, the probability of the LF choosing to access the CBLP increases; the main reason is that a relatively small penalty ($p_2$) set up on-chain can effectively reduce the default risk of the SME, which makes the LF more willing to access the CBLP. In addition, owing to compliance behavior, the higher credit ($\Delta v_c$) achieved by the SME will stimulate the LF to stick with the conventional leasing mode.

6.1.2. Replication Dynamic Equation of the LF

Let $E_y$ and $E_{1-y}$ denote the expected returns of the LF's accessing and not accessing the CBLP to utilize the information shared on-chain, respectively, and let $\overline{E}_y$ denote the average return. Then:

$$E_y = x\left[R(r_2+1) + v_s + [(\phi-\omega-\lambda-\eta)Z_B + u_B Z_A] + g - C_0 - C_2 - C_b\right] + (1-x)\left[[(\phi-\omega-\lambda-\eta)Z_B + u_B Z_A] - R\varepsilon - C_0 - C_2 - C_b\right] \quad (15)$$

$$E_{1-y} = x\left[R(r_2+1) + v_s - C_0 - C_t - C_1\right] + (1-x)\left[-R\varepsilon - C_0 - C_t - C_1\right] \quad (16)$$

$$\overline{E}_y = yE_y + (1-y)E_{1-y} \quad (17)$$

The replication dynamics equation (RDE) [68] of the LF is denoted as follows:

$$F(y) = \frac{dy}{dt} = y\left(E_y - \overline{E}_y\right) = y(1-y)\left(E_y - E_{1-y}\right) = y(1-y)\left[gx + (\phi-\omega-\lambda-\eta)Z_B + u_B Z_A + C_t + C_1 - C_2 - C_b\right] \quad (18)$$

Let $F(y) = 0$; we obtain the stationary points of the differential equation as follows:

$$y_1^* = 0, \quad y_2^* = 1 \quad (19)$$

$$x^* = \frac{C_b + C_2 - (\phi-\omega-\lambda-\eta)Z_B - u_B Z_A - C_t - C_1}{g} \quad (20)$$

Based on Equation (20), we can discover the following, as shown in Figure 4.

**Figure 4.** (**a**) The dynamic trend of the LF's strategy in the case of $x > x^*$; (**b**) the dynamic trend of the LF's strategy in the case of $x < x^*$.

When $x = x^*$, the SME complies with the LC with a probability of $x^*$, and $\frac{\partial F(y)}{\partial y} = 0$ always holds.
The state is always stable no matter how the value of always established. The state is always stable no matter how the value of y changes.𝑦 changes. In this In case,this case, y is the equilibrium point, and all states are stable. 𝑦 is the equilibrium point, and all states are stable. WhenWhen𝑥�𝑥 x ̸= x[∗], the system needs to satisfy two requirements to obtain evolutionary [∗], the system needs to satisfy two requirements to obtain evolutionary stability, i.e., stability, i.e., F𝐹(𝑦(y[∗][∗]) = 0) = 0 and and F𝐹′(𝑦[′](y[∗][∗])) < 0 < 0. Then:. Then: - _•_ In the case of In the case of𝑥> 𝑥 x > x[∗], [∗],𝐹 F[�](0) < 0[′](0) < 0.. 𝑦= 𝑦 y = y�∗1[∗]= 0[=][ 0 is an evolutionary stable strategy (ESS).] is an evolutionary stable strategy (ESS). When the probability of SME compliance is larger than When the probability of SME compliance is larger than𝑥 x[∗], the LF converges to the [∗], the LF converges to the equilibrium strategy of “not accessing the CBLP ”, and thereby the SME does not need equilibrium strategy of “not accessing the CBLP ”, and thereby the SME does not to join the consortium blockchain to share information. need to join the consortium blockchain to share information. - _•_ In the case of In the case of𝑥< 𝑥 x < x[∗], [∗],𝐹 F[�](1) < 0[′](1) < 0.. 𝑦= 𝑦 y = y�∗2[∗]= 1[=][ 1 is an evolutionary stable strategy (ESS).] is an evolutionary stable strategy (ESS). When the probability of SME compliance is less than x[∗], the LF will converge with When the probability of SME compliance is less than 𝑥[∗], the LF will converge with the equilibrium strategy of “access the CBLP ” to participate in information sharing the equilibrium strategy of “access the CBLP ” to participate in information sharing on-chain to complete the lease. on-chain to complete the lease. According to Equation (20), we can find that considering the long-term cooperation, According to Equation (20), we can find that considering the long-term cooperation, when the SME is more likely to abide by the LC, the LF will decide not to access the CBLP when the SME is more likely to abide by the LC, the LF will decide not to access the CBLP due to the limited synergy and information utilization benefits obtained. due to the limited synergy and information utilization benefits obtained. In addition, Equation (20) gives some further insights that the asset maintenance In addition, Equation (20) gives some further insights that the asset maintenance cost cost ( _f_ ) will not affect the SME’s decision to comply or default, since once an outsourced (𝑓) will not affect the SME’s decision to comply or default, since once an outsourced maintenance action begins, it will not be interrupted until the lease expires. Both the maintenance action begins, it will not be interrupted until the lease expires. Both the consortium membership fee (consortium membership fee (correlated with the x. When the CBLP sets up higher costs for stakeholders to join the𝐶�) and asset monitoring cost on-chain (Cb) and asset monitoring cost on-chain (𝐶�) are positively corre-C2) are positively lated with the 𝑥. When the CBLP sets up higher costs for stakeholders to join the consor consortium, to improve the lease willingness of the LF, the SME is more inclined to comply tium, to improve the lease willingness of the LF, the SME is more inclined to comply with with the LC and provide genuine lease information. When the leased asset is fully inspected the LC and provide genuine lease information. 
When the leased asset is fully inspected under the CBLP, the SME will not easily default by deferring the lease payment or failing to return the leased asset after signing the LC on-chain. Notably, when the LF has a higher ability to absorb the high-quality information that the SME shares on-chain, this can expedite the SME's default behavior. This seems to be a paradox, contrary to the LF's decision-making in terms of accessing or not accessing the CBLP. The reason is that when BCT empowers more information synergy for the LF, the LF is more willing to access the CBLP, which compels the SME to bear the consortium membership cost and the data storage and verification overheads. To compensate for the potential losses that may be suffered, the SME will take risks and opt for default, decreasing the likelihood of compliance. The whole process finally forms an unstable circle.

_6.2. Analysis of Equilibrium Stability and ESS_

Based on the above analysis, the game system has five local equilibrium points: $(0,0)$, $(1,0)$, $(0,1)$, $(1,1)$, and $(x^*, y^*)$.

To find the evolutionary stability strategy (ESS), the local stability analysis of the Jacobian matrix is employed [70,71]; we thereby take the first-order derivatives of Equations (12) and (18), respectively, obtaining the following Jacobian matrix $J$:

$$J = \begin{bmatrix} \dfrac{\partial F(x)}{\partial x} & \dfrac{\partial F(x)}{\partial y} \\[2ex] \dfrac{\partial F(y)}{\partial x} & \dfrac{\partial F(y)}{\partial y} \end{bmatrix} \quad (21)$$

where:

$$\frac{\partial F(x)}{\partial x} = (1-2x)\left[y(I + \Delta v_c + p_2 - p_1) + (r_1 - r_3 - 1)R + \sigma + p_1 - v_s\right] \quad (22)$$

$$\frac{\partial F(x)}{\partial y} = x(1-x)(I + \Delta v_c + p_2 - p_1) \quad (23)$$

$$\frac{\partial F(y)}{\partial x} = y(1-y)g \quad (24)$$

$$\frac{\partial F(y)}{\partial y} = (1-2y)\left[gx + (\phi-\omega-\lambda-\eta)Z_B + u_B Z_A + C_t + C_1 - C_2 - C_b\right] \quad (25)$$

Next, we can calculate the trace $\mathrm{tr}J$ and determinant $\det J$ of the Jacobian matrix $J$:

$$\mathrm{tr}J = \frac{\partial F(x)}{\partial x} + \frac{\partial F(y)}{\partial y} \quad (26)$$

$$\det J = \left(\frac{\partial F(x)}{\partial x}\right)\left(\frac{\partial F(y)}{\partial y}\right) - \left(\frac{\partial F(x)}{\partial y}\right)\left(\frac{\partial F(y)}{\partial x}\right) \quad (27)$$

Thus, $\mathrm{tr}J$ and $\det J$ at each equilibrium point are shown in Table 3.
**Table 3.** The analysis table for judging the stability of the equilibrium points.

| Equilibrium Point | $\mathrm{tr}J$ | $\det J$ |
| --- | --- | --- |
| $E_1(0,0)$ | $[(r_1-r_3-1)R + \sigma + p_1 - v_s] + [(\phi-\omega-\lambda-\eta)Z_B + u_B Z_A + C_t + C_1 - C_2 - C_b]$ | $[(r_1-r_3-1)R + \sigma + p_1 - v_s] \cdot [(\phi-\omega-\lambda-\eta)Z_B + u_B Z_A + C_t + C_1 - C_2 - C_b]$ |
| $E_2(0,1)$ | $[(r_1-r_3-1)R + I + \Delta v_c + p_2 + \sigma - v_s] + [C_b + C_2 - (\phi-\omega-\lambda-\eta)Z_B - u_B Z_A - C_t - C_1]$ | $[(r_1-r_3-1)R + I + \Delta v_c + p_2 + \sigma - v_s] \cdot [C_b + C_2 - (\phi-\omega-\lambda-\eta)Z_B - u_B Z_A - C_t - C_1]$ |
| $E_3(1,0)$ | $[(1+r_3-r_1)R + v_s - \sigma - p_1] + [g + (\phi-\omega-\lambda-\eta)Z_B + u_B Z_A + C_t + C_1 - C_2 - C_b]$ | $[(1+r_3-r_1)R + v_s - \sigma - p_1] \cdot [g + (\phi-\omega-\lambda-\eta)Z_B + u_B Z_A + C_t + C_1 - C_2 - C_b]$ |
| $E_4(1,1)$ | $-[I + \Delta v_c + p_2 + \sigma - v_s - (1+r_3-r_1)R] - [g + (\phi-\omega-\lambda-\eta)Z_B + u_B Z_A + C_t + C_1 - C_2 - C_b]$ | $[I + \Delta v_c + p_2 + \sigma - v_s - (1+r_3-r_1)R] \cdot [g + (\phi-\omega-\lambda-\eta)Z_B + u_B Z_A + C_t + C_1 - C_2 - C_b]$ |
| $E_5(x^*, y^*)$ | $0$ | $H$ ¹ |

¹ $H = -x^*(1-x^*)(I + \Delta v_c + p_2 - p_1)\, y^*(1-y^*)\, g$.

A local stability analysis of the five equilibria was performed to investigate the relationship between the signs of $\mathrm{tr}J$ and $\det J$ and the evolutionary stability at each equilibrium point. When a local equilibrium point satisfies the conditions that the trace $\mathrm{tr}J < 0$ and the determinant $\det J > 0$ of the Jacobian matrix $J$, it is an evolutionary stable strategy (ESS) [72]. If $\mathrm{tr}J > 0$ and $\det J > 0$, the equilibrium point is unstable; if $\det J < 0$, it is a saddle point.
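As a numerical cross-check of this classification rule, the following Python sketch evaluates the Jacobian of Equations (22)–(25) at the four pure-strategy equilibria, using the baseline parameter values later listed in Table 5; the printed labels reproduce the judgments summarized in Table 4 below:

```python
import numpy as np

# Baseline parameter values (taken from Table 5 later in the paper).
I, dvc, p2, p1 = 1.2, 1.0, 6.0, 3.0
R, r1, r3, sigma, vs = 8.0, 0.2, 0.3, 5.0, 0.8
g, phi, om, lam, eta = 1.5, 0.6, 0.2, 0.2, 0.2
ZB, uB, ZA, Ct, C1, C2, Cb = 3.0, 0.5, 5.0, 0.08, 0.6, 0.4, 4.0

K = (phi - om - lam - eta) * ZB + uB * ZA + Ct + C1 - C2 - Cb

def jacobian(x, y):
    """Jacobian of the replicator system, Equations (22)-(25)."""
    phi_y = y * (I + dvc + p2 - p1) + (r1 - r3 - 1) * R + sigma + p1 - vs
    return np.array([
        [(1 - 2 * x) * phi_y, x * (1 - x) * (I + dvc + p2 - p1)],
        [y * (1 - y) * g,     (1 - 2 * y) * (g * x + K)],
    ])

for pt in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    J = jacobian(*pt)
    tr, det = np.trace(J), np.linalg.det(J)
    label = "ESS" if (tr < 0 and det > 0) else "unstable/saddle"
    print(pt, f"trJ={tr:.2f}, detJ={det:.2f} -> {label}")
```

Under these values, only $(0,0)$ and $(1,1)$ print as ESS, in line with Table 4.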
Nonetheless, the stationary point $(x^*, y^*)$ is meaningful only if it satisfies $0 \le x^* \le 1$ and $0 \le y^* \le 1$. Considering $g > 0$ and $0 \le \frac{C_b + C_2 - (\phi-\omega-\lambda-\eta)Z_B - u_B Z_A - C_t - C_1}{g} \le 1$, we obtain:

Condition 1:
$$\begin{cases} C_b + C_2 - (\phi-\omega-\lambda-\eta)Z_B - u_B Z_A - C_t - C_1 > 0 \\ g > 0 \\ (\phi-\omega-\lambda-\eta)Z_B + u_B Z_A + C_t + C_1 + g - C_2 - C_b > 0 \end{cases} \quad (28)$$

Similarly, according to Assumption 6 (A6), we know that $p_2 > p_1$, hence $I + \Delta v_c + p_2 - p_1 > 0$; then $0 \le \frac{(1+r_3-r_1)R + v_s - \sigma - p_1}{I + \Delta v_c + p_2 - p_1} \le 1$, demonstrating that:

Condition 2:
$$\begin{cases} (1+r_3-r_1)R + v_s - \sigma - p_1 > 0 \\ I + \Delta v_c + p_2 - p_1 > 0 \\ I + \Delta v_c + p_2 + \sigma - v_s - (1+r_3-r_1)R > 0 \end{cases} \quad (29)$$

Based on Condition 1 and Condition 2, we can use the signs of $\mathrm{tr}J$ and $\det J$ to judge the stability of the equilibrium points of the evolutionary game. The results are shown in Table 4.

**Table 4.** The analysis of the evolutionary stability of the system equilibrium points.

| Equilibrium Point $E_i$ | Sign of $\mathrm{tr}J$ | Sign of $\det J$ | Judgment |
| --- | --- | --- | --- |
| $E_1(0,0)$ | $<0$ | $>0$ | ESS |
| $E_2(0,1)$ | $>0$ | $>0$ | Unstable point |
| $E_3(1,0)$ | $>0$ | $>0$ | Unstable point |
| $E_4(1,1)$ | $<0$ | $>0$ | ESS |
| $E_5(x^*, y^*)$ | $0$ | $<0$ | Saddle point |

According to Table 4, we can find that $E_2(0,1)$ and $E_3(1,0)$ are unstable points. $E_5(x^*, y^*)$ is a saddle point, revealing that the evolutionary stability is affected by the values of $x^*$ and $y^*$. The game system has two ESS equilibrium points: $E_1(0,0)$ and $E_4(1,1)$. This indicates that the game's ultimate evolutionary strategies are "Strategy I: $S_1$ = {Comply, Access}" and "Strategy IV: $S_4$ = {Default, Not-access}", meaning that both SMEs and LFs converge at the locations $E_1$ and $E_4$ in Figure 5.

**Figure 5.** Dynamics evolution schematic diagram of the SME and the LF.

That is, when both parties' decisions lie in the region $E_1E_2E_5E_3$, the game evolves to the point $E_1(0,0)$; i.e., the SME breaches the LC, and the LF does not access the CBLP or require the SME to access the consortium blockchain. When both parties' decisions lie in the region $E_2E_4E_3E_5$, the game evolves into the ideal stable state $E_4(1,1)$; i.e., the SME abides by the LC, and the LF requires the SME to access the consortium blockchain. The probability of each evolutionary outcome can be represented by the areas of the regions $E_1E_2E_5E_3$ and $E_2E_4E_3E_5$ [73], whose sizes depend on the coordinates of the point $E_5$ (the saddle point $(x^*, y^*)$), where:

$$S_{E_1E_2E_5E_3} = \frac{1}{2}(x^* + y^*), \qquad S_{E_2E_4E_3E_5} = \frac{1}{2}\left[(1-x^*) + (1-y^*)\right]$$

The possibility that the SME will conform to the contract and be required to access the CBLP increases as the region $E_2E_4E_3E_5$ expands.

_6.3. Sensitivity Analysis in the Evolutionary Game_

The choices made by the game subjects are influenced by the exogenous variables in the model. By taking derivatives of $S_{E_2E_4E_3E_5}$ (abbreviated hereafter as $S$) while holding the other parameters constant, it is possible to determine how each parameter affects the game's evolutionary results.
$$x^* = \frac{C_b + C_2 - (\phi-\omega-\lambda-\eta)Z_B - u_B Z_A - C_t - C_1}{g} \quad (30)$$

$$y^* = \frac{(1+r_3-r_1)R + v_s - \sigma - p_1}{I + \Delta v_c + p_2 - p_1} \quad (31)$$

$$S_{E_2E_4E_3E_5} = S = \frac{1}{2}\left[\left(1 - \frac{C_b + C_2 - (\phi-\omega-\lambda-\eta)Z_B - u_B Z_A - C_t - C_1}{g}\right) + \left(1 - \frac{(1+r_3-r_1)R + v_s - \sigma - p_1}{I + \Delta v_c + p_2 - p_1}\right)\right] \quad (32)$$

The evolutionary game results are primarily related to the four main determinants: information sharing, credit, incentive–penalty, and risk. The sensitivity analysis of the influence of the four factors on $S_{E_2E_4E_3E_5}$ is described below.

6.3.1. Impact of Information Sharing on S

Taking the derivatives of Equation (32) with respect to $Z_A$, $Z_B$, and $u_B$:

$$\frac{\partial S}{\partial Z_A} = \frac{u_B}{2g} > 0 \quad (33)$$

$$\frac{\partial S}{\partial Z_B} = \frac{\phi-\omega-\lambda-\eta}{2g} > 0 \quad (34)$$

$$\frac{\partial S}{\partial u_B} = \frac{Z_A}{2g} > 0 \quad (35)$$

$S$ is an increasing function of $Z_A$ and $Z_B$, since Equation (34) holds under Assumption 7 (A7). That is, as the amount of information shared on-chain ($Z_i$) increases, $S$ will gradually increase, indicating that the possibility of evolving to the stable state $E_4(1,1)$ grows with the quantity of information shared on-chain. It further means that this parameter has a favorable impact on the probability of SMEs deciding to comply with the contract and being required to join the consortium. The more accurate the information shared on the blockchain network, the easier it is to create a transparent and reliable leasing environment, and the more SMEs will actively disclose high-quality information to ensure that leasing transactions are executed smoothly.

$S$ is an increasing function of $u_B$. The more effectively the LF can use the data on-chain, the more likely the LF is to access the CBLP to clearly monitor the SME's compliance with the LC, thus increasing the likelihood of rental payment on time.

6.3.2. Impact of Credit on S

Taking the derivative of Equation (32) with respect to $\Delta v_c$:

$$\frac{\partial S}{\partial \Delta v_c} = \frac{(1+r_3-r_1)R + v_s - \sigma - p_1}{2(I + \Delta v_c + p_2 - p_1)^2} > 0 \quad (36)$$
The likelihood that previously defaulting SMEs will start to keep their contracts, and that the SME is motivated to access the blockchain, increases with the incentives the LF provides to them. To encourage a node to choose the on-chain strategy and to encourage more SMEs to become consortium nodes, the LF can appropriately boost the compliance reward of the SMEs when creating the incentive strategy. No matter whether the SME defers the rental payment or refuses to return the leased asset, both default behaviors will lead to the lessee being charged a penalty, which irrevocably damages the SME's reputation on-chain. Hence, once the SME joins the consortium, the likelihood of compliance rises as the default penalties rise. The LF will also keep using the on-chain strategy to observe how the SMEs choose their payment strategies. Therefore, the penalties for SMEs should be suitably enhanced to guarantee prompt rental payment.

6.3.4. Impact of Risk on S

Taking the derivatives of Equation (32) with respect to φ, ω, λ, and η:

$$\frac{\partial S}{\partial \phi} = \frac{Z_B}{2g} > 0 \tag{41}$$

$$\frac{\partial S}{\partial \omega} = -\frac{Z_B}{2g} < 0 \tag{42}$$

$$\frac{\partial S}{\partial \lambda} = -\frac{Z_B}{2g} < 0 \tag{43}$$

$$\frac{\partial S}{\partial \eta} = -\frac{Z_B}{2g} < 0 \tag{44}$$

S is an increasing function of φ. When more high-quality data are effectively distributed and shared by each subject on-chain, more subjects will join the consortium to complete the leasing transactions, as more return is generated. In addition, S is a decreasing function of ω, λ, and η. If the participants bear more consensus verification and storage costs, and take on greater security risks, they will be more reluctant to join the consortium and the probability of default will increase. To sum up, different types of factors have different effects on decision-making.

**7. Numerical Experiments and Implications**

This section presents some simulation results. We first use VENSIM PLE to build a system dynamics (SD) model to analyze the causal relationships among the variables and strategies. Then, MATLAB_R2021b is employed to examine the efficacy of the evolutionary stable strategies (ESSs) and to demonstrate the preceding mathematical sensitivity analysis of each factor. Lastly, some implications of the results are given.

The game system has intermediate variables (including Ex, E1−x, Ey, E1−y) and a range of exogenous variables (as presented in Table 1). We set initial values for the exogenous variables involved in the model, as shown in Table 5. In this study, it is assumed that all exogenous variables are positive, and the return of each strategy of each game subject is guaranteed to be positive.

**Table 5. Initial values of the simulation parameters.**

| R | r1 | r2 | r3 | f | vs | p1 | p2 | Cb | Ct | C1 | C2 | Δvc |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 8 | 0.2 | 0.25 | 0.3 | 0.05 | 0.8 | 3 | 6 | 4 | 0.08 | 0.6 | 0.4 | 1 |
| I | ZA | ZB | uA | uB | ε | σ | g | φ | ω | λ | η | / |
| 1.2 | 5 | 3 | 0.5 | 0.5 | 0.15 | 5 | 1.5 | 0.6 | 0.2 | 0.2 | 0.2 | / |

_7.1. System Dynamics Model Experiment_

We establish the SD model of the two-party evolutionary game system as depicted in Figure 6. The arrow tails in Figure 6 are connected to the independent variables in the associated equations, and the arrowheads are connected to the dependent variables. We set the simulation parameters to INITIAL TIME = 0, FINAL TIME = 10, and TIME STEP = 0.0078125.
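The paper's VENSIM and MATLAB models are not reproduced here, so the following Python sketch integrates one replicator dynamic that is merely consistent with the saddle point of Equations (30)–(31): dx/dt vanishes on y = y* and dy/dt vanishes on x = x*. The right-hand sides are therefore an assumption standing in for the exact payoff expressions of the game model; the default parameters are the Table 5 values, and the step size matches the TIME STEP above.

```python
# Sketch of the two-party replicator dynamics (an assumed form consistent with
# Equations (30)-(31), not the authors' model code). Defaults are Table 5 values.
BASE = dict(R=8, r1=0.2, r3=0.3, vs=0.8, p1=3, p2=6, Cb=4, Ct=0.08, C1=0.6,
            C2=0.4, dvc=1, I=1.2, ZA=5, ZB=3, uB=0.5, sigma=5, g=1.5,
            phi=0.6, omega=0.2, lam=0.2, eta=0.2)

def simulate(x0, y0, t_final=20.0, dt=0.0078125, **overrides):
    """Euler-integrate dx/dt = x(1-x)fx(y), dy/dt = y(1-y)fy(x); return final (x, y)."""
    p = {**BASE, **overrides}
    # fx vanishes at y = y* (Eq. 31); fy vanishes at x = x* (Eq. 30).
    fx = lambda y: (p["I"] + p["dvc"] + p["p2"] - p["p1"]) * y - (
        (1 + p["r3"] - p["r1"]) * p["R"] + p["vs"] - p["sigma"] - p["p1"])
    fy = lambda x: p["g"] * x - (
        p["Cb"] + p["C2"]
        - (p["phi"] - p["omega"] - p["lam"] - p["eta"]) * p["ZB"]
        - p["uB"] * p["ZA"] - p["Ct"] - p["C1"])
    x, y = x0, y0
    for _ in range(int(t_final / dt)):
        x, y = x + dt * x * (1 - x) * fx(y), y + dt * y * (1 - y) * fy(x)
    return x, y

print(simulate(0.5, 0.5))    # drifts toward the ideal ESS (1, 1), cf. Figure 7c
print(simulate(0.1, 0.1))    # drifts toward the other ESS (0, 0)
```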
**Figure 6. System dynamics model for the consortium blockchain-based leasing strategies.**

It can be seen from Figure 7a,b that when the initial states of both sides are pure strategies (i.e., (0, 0), (0, 1), (1, 0), and (1, 1)), no party in the system is willing to change the current state to break the equilibrium. For instance, the initial state (x = 0, y = 0) or (x = 1, y = 1) will be unchanged if there is no interruption during the evolution. However, this does not mean that these equilibrium states are stable; once one or both parties take the initiative to make a small change, the equilibrium state will be broken. Although the SME's compliance probability x and the LF's CBLP access probability y (for 0.0001) evolve with small mutations, they quickly shift to a new strategy once they find that doing so will yield a higher expected return, thus adjusting the strategy through a mutation of the parties to bring the system into a new equilibrium. In addition, through simulating the model, we also discover that the ultimate equilibrium state is (1, 1) when the initial state is x = 0.5, y = 0.5, as shown in Figure 7c.

In fact, when x = 0, no matter how y ranges from 0 to 1, the system will reach the equilibrium state (0, 0). When x = 1, no matter how y ranges from 0 to 1, the system will reach the equilibrium state (1, 1). Similarly, when y = 0, no matter how x ranges from 0 to 1, the system will reach the equilibrium state (0, 0), and when y = 1, no matter how x ranges from 0 to 1, the system will reach the equilibrium state (1, 1).

**Figure 7. (a) The dynamic diagram to strategy (0, 0); (b) the dynamic diagram to strategy (1, 1); (c) the dynamic diagram to strategy of x = 0, y = {0.1, 0.3, 0.5, 0.7, 0.9}. *** Considering that dx/dt and dy/dt have to remain informative (an initial state of exactly 0 or 1 is a fixed point of the replicator dynamics), when performing the simulation we take the initial state (x = 0.0001, y = 0.0001), which is close to 0. Similarly, the initial state (x = 0.9999, y = 0.9999) is set as 1.

_7.2. Effect of Parameter Changes on Evolutionary Stable Strategies_

We initiate the probabilities of x and y with values ranging from 0 to 1 in steps of 0.1. It can be seen from Figure 8 that almost all curves converge at (0, 0) and (1, 1), which is consistent with the preceding discussion in Section 6.2. The following subsections further discuss the impacts of the four factors on evolution. Here, we assume the initial strategy probability for each participant is 0.5.
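The Figure 8 experiment can be sketched with the simulate() helper above (again under the assumed dynamics): sweep both initial probabilities over the interior grid and record the ESS each trajectory approaches, noting that exact 0 or 1 starts are fixed points.

```python
# Sketch of the Figure 8 sweep over initial states, reusing simulate() above.
for i in range(1, 10):
    for j in range(1, 10):
        x, y = simulate(i / 10, j / 10, t_final=100.0)
        print(f"x0={i / 10:.1f}, y0={j / 10:.1f} -> ({x:.2f}, {y:.2f})")
```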
**Figure 8. The dynamic diagram of SMEs and LFs.**

7.2.1. Evolution Impacted by Information Sharing

The initial quantity of information sharing (Z_i) for each participant was set to 1, 3, 5, 7, and 9. As shown in Figure 9a, there are two critical values: when Z_i is greater than 5 or less than 3, the probabilities of both parties converge at 1 or 0, respectively, and the system evolves to the states (1, 1) and (0, 0), accordingly. When the computing power u_j of each organization is less than 0.3, x and y both converge at 0, and the system evolves to the state (0, 0) (see Figure 9b). When u_j is greater than 0.5, it results in an evolutionary state of (1, 1), indicating that the parties with higher computation power are more willing to be incentivized to join the consortium blockchain to share information.
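These two sweeps can be sketched with the simulate() helper above; Z_i is applied to both Z_A and Z_B (although Z_B only enters the assumed dynamics through the coefficient φ − ω − λ − η, which is 0 at the Table 5 values), and u_j is applied to u_B.

```python
# Sketch of the Section 7.2.1 sweeps (cf. Figure 9), reusing simulate() above.
for Z in [1, 3, 5, 7, 9]:
    print("Z_i =", Z, "->", simulate(0.5, 0.5, ZA=Z, ZB=Z))
for u in [0.1, 0.3, 0.5, 0.7, 0.9]:
    print("u_j =", u, "->", simulate(0.5, 0.5, uB=u))
```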
**Figure 9. System evolution of Z_i and u_j: (a) system evolution of Z_i = {1, 3, 5, 7, 9}; (b) system evolution of u_j = {0.1, 0.3, 0.5, 0.7, 0.9}.**

7.2.2. Evolution Impacted by Credit

In order to further study how different levels of credit affect the decision-making of SMEs and LFs, we simulate the factor "credit value" in the range from 1 to 10, with a step size of 2, while keeping the other parameters at their initial values. Figure 10 indicates that all curves gradually converge to x = 1, y = 1, indicating that higher credit helps to motivate the SME to fulfill the contract and join the blockchain to share its information. However, it can also be found that the SME's strategy of choosing to keep the contract is more influenced and motivated by "credit" than its strategy of joining the blockchain.
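A corresponding sketch of the credit sweep, again reusing the simulate() helper above: in the assumed dynamics, Δv_c enters only the SME's compliance equation, which is consistent with the observation that credit influences contract-keeping more than blockchain-joining.

```python
# Sketch of the Section 7.2.2 credit sweep (cf. Figure 10), reusing simulate().
for dv in [1, 3, 5, 7, 9]:
    print("dvc =", dv, "->", simulate(0.5, 0.5, dvc=dv))  # all drift toward (1, 1)
```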
**Figure 10. System evolution of Δv_c = {1, 3, 5, 7, 9}.**

7.2.3. Evolution Impacted by Incentive–Penalty

We award an incentive I (e.g., a kind of "gas" fee with blockchain) to the SMEs who publish a valid block during the financing transaction confirmation process. When we dynamically adjust the incentive value I from 1.2 to 9.6, we are surprised to find that the system always evolves to the equilibrium point (1, 1) in Figure 11a. This means that no matter how many block rewards are paid to the SME for being a block verifier/miner, the SME resolutely adheres to conforming to the contract and enters the blockchain, whereas with a higher I, the SME is more proactive in participating in information sharing on-chain.

**Figure 11. (a) System evolution of I = {1.2, 2.4, 4.8, 7.2, 9.6}; (b) system evolution of p_2 = {2, 4, 6, 8, 10}.**

We set the penalty on-chain (p_2) to 2, 4, 6, 8, and 10, revealing the evolution curves of x(t) and y(t) following the change in p_2, as shown in Figure 11b. The figure shows that when the punishment intensity is relatively small, for instance p_2 = 2, the SME tends to breach the contract, and when the punishment intensity increases to p_2 = 8, the SME tends to actively comply with the contract. In other words, the penalty has a threshold effect on the SME's strategy selection regarding joining the blockchain, which is outside of the initial expectations: for example, a high penalty erodes the incentive to participate in information sharing.

7.2.4. Evolution Impacted by Risk

We set the risk cost of the consensus on-chain (ω) and storage off-chain (µ) to 0.2, 0.4, 0.6, 0.8, and 1.0, and the corresponding impacts on the two parties' strategies were analyzed.
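The incentive, penalty, and risk sweeps of Sections 7.2.3 and 7.2.4 can be sketched the same way with the simulate() helper above. Under the assumed dynamics the qualitative pattern matches the text (every tested I converges to (1, 1); a small p_2 drifts to default; the risk threshold lies between ω = 0.2 and 0.4), though the exact thresholds depend on the assumed payoff form.

```python
# Sketch of the Sections 7.2.3-7.2.4 sweeps (cf. Figures 11 and 12).
for I in [1.2, 2.4, 4.8, 7.2, 9.6]:          # block reward (Figure 11a)
    print("I =", I, "->", simulate(0.5, 0.5, I=I))
for pen in [2, 4, 6, 8, 10]:                 # on-chain penalty (Figure 11b)
    print("p2 =", pen, "->", simulate(0.5, 0.5, p2=pen))
for w in [0.2, 0.4, 0.6, 0.8, 1.0]:          # consensus risk cost (Figure 12)
    print("omega =", w, "->", simulate(0.5, 0.5, omega=w))
```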
As shown in Figure 12, the critical value of the initial risk cost is between 0.2 and 0.4. When the risk is less than 0.2, the SME's and LF's probabilities x and y both converge at 1. Vice versa, when the risk level is greater than 0.4, the system evolves to the point (0, 0). Similar results can be inferred for the influence of the security risk (η).

**Figure 12. System evolution of ω = {0.2, 0.4, 0.6, 0.8, 1}.**

_7.3. Implications of the Results_

Based on the above replicator dynamic analysis and simulation results, this study provides some implications:

(1) The results reveal that the residual value of the leased asset is a decisive factor supporting the lessor's access strategy. Before signing the LC, it is necessary to estimate the asset's residual value; if the value is relatively large at the termination of the lease, LFs (lessors) have a high probability of actively adopting BCT to efficiently prove their ownership of the leased asset on-chain. Thus, from the perspective of reducing the risks of leased-asset default, a blockchain-based leasing service provided by the lessor is more beneficial for an operating lease than for a capital lease.

(2) Most leasing businesses tend to treat maintenance as a non-core activity and commonly outsource it to a third-party MC [10], as assumed in this study (Section 4). The results indicate that when the maintenance fee is not embedded in the rental payment, the maintenance charge is not a determinant impacting the lessee's decisions regarding compliance with, or defaulting on, the LC. Hence, before the lessor decides whether to adopt BCT, it is necessary to take the in-house or outsourced maintenance question into consideration.

(3) To encourage lessees and lessors to evolve to the ideal equilibrium state, an incentive mechanism should be designed to motivate all parties to cooperatively construct a sustainable and more trustworthy leasing environment.
More high-quality information should be shared on-chain, and stakeholders should also improve their capability to effectively utilize the data on- and off-chain [74]. In contrast to the fixed rewards resulting from block mining, an incentive associated with incremental or deductible credit value for consensus actions tends to inspire lessees' willingness to comply with the contract under the BCT-based leasing business. An appropriate default penalty should be set up on-chain that can deter the lessee from defaulting and encourage it to make rental payments on time and return the leased asset as agreed in the LC. When making strategic decisions to join the consortium to share information, participants (particularly lessees) are more sensitive to the technology risk factor to which they are subject. To reduce the cost of building and maintaining the blockchain system to support the leasing business (e.g., on-chain and off-chain storage costs, verification costs, etc.), it is advised and helpful to embed blockchain-as-a-service (BaaS) in our CBLP in the future [75], which will also enhance SMEs' willingness to share more valuable information on-chain, achieving a win–win outcome in the leasing business.

**8. Conclusions and Future Works**

_8.1. Conclusions_

BCT provides a new idea for leasing to address, to some degree, the challenges of information asymmetry and the traceability of leased assets. Hence, there is great significance in designing an incentive mechanism to encourage lessees and lessors to join the consortium blockchain and actively share information on-chain. This study first proposes a conceptual architecture of the consortium blockchain-based leasing platform (CBLP), then constructs a dynamical evolutionary game model between the SME (the "lessee") and the LF (the "lessor"). Our primary findings are as follows:

(1) With long-term cooperation, the two parties (lessee and lessor) eventually evolve to adopt strategies in which the lessee is more inclined to conform to the LC and the lessor becomes more proactive in accessing the CBLP as a consortium node to share information on-chain.

(2) According to the basic lease scenarios that we assumed, two default actions are explored: (i) overdue rental payment; (ii) asset disposal against the LC. For the former default action, we found that the larger proceeds gained from reinvesting the rental payment will cause the lessee to default, and at this time, the lessor will tend to adopt BCT to mitigate the overdue-payment default risk. In addition, the residual value of the leased asset has a positive impact on the exposure at default, and the lessee will be more likely to default by not returning the leased asset to the lessor due to the temptation of the high profit achieved from asset disposal at the end of the lease. Meanwhile, the lessee's default on asset disposals will result in the lessor being more inclined to adopt BCT to ensure a timely claim of repossession of the leased asset.

(3) Although blockchain can guarantee data reliability (e.g., maintenance events) [76], maintenance cost is not a determinant of the equilibrium state once the maintenance service is outsourced. On the contrary, in-house maintenance provided by the lessor may affect the two parties' strategic decisions.
(4) When the lessee and lessor have incentives to participate in sharing or utilizing more information on-chain, the lessee will eventually evolve to conform to the LC, which will benefit the lessor and the leasing industry. Setting up a changeable credit value associated with the lessee's LC performance to compete for a block accounting right via a consensus mechanism [77] is an effective way to incentivize the lessee to comply with the LC, while this method does little to incentivize the lessor to adopt BCT. In addition, only when the default penalty on-chain exceeds a critical value can it work to incentivize lessees to correctly fulfill their obligations in the LC [78]; once the penalty falls below this critical value, it will in turn increase the default risk. The technology risks and relevant costs concerning CBLP deployment play a vital role in encouraging the consortium to participate in information sharing on-chain, which is consistent with what we expected in reality.

In summary, this study enables lessees and lessors to build a trustworthy cooperative relationship on the consortium blockchain-based leasing platform, while also assisting lessors or regulators in taking effective measures to incentivize lessees to comply with the lease contract and share more information on-chain to enhance the management of default risks in the leasing industry.

_8.2. Limitations and Future Directions_

Considering that the practical application of BCT in the leasing industry is rare, it is difficult to obtain real data. Thus, this study focuses on mathematical modeling and numerical simulation. The conclusions of this study can be further demonstrated and enriched via empirical analysis of specific cases. Meanwhile, there are still some avenues to be explored in the future. For example, it is meaningful to explore "tripartite-win" strategies among lessees, lessors, and OEMs (or third-party MCs). Additionally, blockchain smart contracts play an essential role in financing transactions [79]. Further research can offer new insights into the cost reduction and value transfer of using smart contracts [80] to motivate the related parties to share information.

**Author Contributions: Conceptualization, H.C.; methodology, H.C. and J.L. (Jing Lu); software, H.C. and J.L. (Jian Li); validation, H.C., J.L. (Jing Lu) and Z.X.; formal analysis, H.C. and J.L. (Jian Li); data curation, H.C. and J.L. (Jian Li); writing-original draft, H.C.; writing-review and editing, S.-L.L.; visualization, H.C. and Z.X.; supervision, S.-L.L. All authors have read and agreed to the published version of the manuscript.**

**Funding: This research was partly funded by the Department of Science and Technology of Guangdong Province [Grant number 2020A0505090004] and the Macau Science and Technology Development Funds [Grant numbers 0061/2020/A2 and 0158/2019/A3].**

**Institutional Review Board Statement: Not applicable.**

**Informed Consent Statement: Not applicable.**

**Data Availability Statement: Not applicable.**

**Acknowledgments: The authors are grateful to the editors and the anonymous referees for their constructive and thorough comments, which helped to improve our paper.**
**Conflicts of Interest: The authors declare no conflict of interest.** **Abbreviations** OEM Original Equipment Manufacturer SMEs Small and Medium-Sized Enterprises LFs Leasing Firms MCs Maintenance Centers LC Lease Contract CPL Capital Lease OPL Operating Lease EGT Evolutionary Game Theory ESS Evolutionary Stable Strategy BCT Blockchain Technology RDE Replication Dynamics Equation CBLP Consortium Blockchain-Based Leasing Platform HLF Hyperledger Fabric BaaS Blockchain as a Service CSP Cloud Storage Provider IPFS InterPlanetary File System ----- _J. Theor. Appl. Electron. Commer. Res. 2023, 18_ 234 **References** 1. Mol-Gómez-Vázquez, A.; Hernández-Cánovas, G.; Köeter-Kant, J. Economic and Institutional Determinants of Lease Financing [for European SMEs: An Analysis across Developing and Developed Countries. J. Small Bus. Manag. 2020, 1–22. [CrossRef]](http://doi.org/10.1080/00472778.2020.1800352) 2. Li, J.; Wang, H.; Deng, Z.; Zhang, W.; Zhang, G. Leasing or Selling? The Channel Choice of Durable Goods Manufacturer [Considering Consumers’ Capital Constraint. Flex. Serv. Manuf. J. 2022, 34, 317–350. [CrossRef]](http://doi.org/10.1007/s10696-021-09429-4) 3. [Eisfeldt, A.L.; Rampini, A.A. Leasing, Ability to Repossess, and Debt Capacity. Rev. Financ. Stud. 2009, 22, 1621–1657. [CrossRef]](http://doi.org/10.1093/rfs/hhn026) 4. Van Loon, P.; Delagarde, C.; Van Wassenhove, L.N.; Miheliˇc, A. Leasing or Buying White Goods: Comparing Manufacturer [Profitability versus Cost to Consumer. Int. J. Prod. Res. 2020, 58, 1092–1106. [CrossRef]](http://doi.org/10.1080/00207543.2019.1612962) 5. [Smith, C.W.; Wakeman, L.M. Determinants of Corporate Leasing Policy. J. Financ. 1985, 40, 895–908. [CrossRef]](http://doi.org/10.1111/j.1540-6261.1985.tb05016.x) 6. Zheng, B.-K.; Zhu, L.-H.; Shen, M.; Gao, F.; Zhang, C.; Li, Y.-D.; Yang, J. Scalable and Privacy-Preserving Data Sharing Based on [Blockchain. J. Comput. Sci. Technol. 2018, 33, 557–567. [CrossRef]](http://doi.org/10.1007/s11390-018-1840-5) 7. Dedeoglu, V.; Jurdak, R.; Dorri, A.; Lunardi, R.C.; Michelin, R.A.; Zorzo, A.F.; Kanhere, S.S. Blockchain Technologies for IoT; Springer: Berlin/Heidelberg, Germany, 2020; pp. 55–89. ISBN 9789811387753. 8. Saberi, S.; Kouhizadeh, M.; Sarkis, J.; Shen, L. Blockchain Technology and Its Relationships to Sustainable Supply Chain [Management. Int. J. Prod. Res. 2019, 57, 2117–2135. [CrossRef]](http://doi.org/10.1080/00207543.2018.1533261) 9. Wang, W.; Feng, L.; Li, Y.; Xu, F.; Deng, Q. Role of Financial Leasing in a Capital-Constrained Service Supply Chain. Transp. Res. _[Part E Logist. Transp. Rev. 2020, 143, 102097. [CrossRef]](http://doi.org/10.1016/j.tre.2020.102097)_ 10. Lease Accounting Framework and the Development of International Accounting Standards|SpringerLink. Available online: [https://link.springer.com/chapter/10.1007/978-3-030-71633-2_2 (accessed on 29 November 2022).](https://link.springer.com/chapter/10.1007/978-3-030-71633-2_2) 11. Murthy, D.N.P.; Jack, N. Extended Warranties, Maintenance Service and Lease Contracts; Springer Series in Reliability Engineering; Springer: London, UK, 2014; ISBN 978-1-4471-6439-5. 12. Kaposty, F.; Klein, P.; Löderbusch, M.; Pfingsten, A. Loss given default in SME leasing. Rev. Manag. Sci. 2022, 16, 1561–1597. [[CrossRef]](http://doi.org/10.1007/s11846-021-00486-5) 13. Kuhle, P. Building A Blockchain-Based Decentralized Digital Asset Management System for Commercial Aircraft Leasing. Comput. _[Ind. 2021, 21, 103393. 
[CrossRef]](http://doi.org/10.1016/j.compind.2020.103393)_ 14. Altman, E.I.; Brady, B.; Resti, A.; Sironi, A. The Link between Default and Recovery Rates: Theory, Empirical Evidence, and [Implications. J. Bus. 2005, 78, 2203–2228. [CrossRef]](http://doi.org/10.1086/497044) 15. Kysucky, V.; Norden, L. The Benefits of Relationship Lending in a Cross-Country Context: A Meta-Analysis. Manag. Sci. 2016, 62, [90–110. [CrossRef]](http://doi.org/10.1287/mnsc.2014.2088) 16. [Nakamoto, S. Bitcoin: A Peer-to-Peer Electronic Cash System. Decentralized Bus. Rev. 2008, 21260. Available online: https:](https://bitcoin.org/bitcoin.pdf) [//bitcoin.org/bitcoin.pdf (accessed on 27 January 2023).](https://bitcoin.org/bitcoin.pdf) 17. Politou, E.; Casino, F.; Alepis, E.; Patsakis, C. Blockchain Mutability: Challenges and Proposed Solutions. IEEE Trans. Emerg. Top. _[Comput. 2021, 9, 1972–1986. [CrossRef]](http://doi.org/10.1109/TETC.2019.2949510)_ 18. Viriyasitavat, W.; Hoonsopon, D. Blockchain Characteristics and Consensus in Modern Business Processes. J. Ind. Inf. Integr. 2019, _[13, 32–39. [CrossRef]](http://doi.org/10.1016/j.jii.2018.07.004)_ 19. Yu, Y.; Huang, G.; Guo, X. Financing Strategy Analysis for a Multi-Sided Platform with Blockchain Technology. Int. J. Prod. Res. **[2021, 59, 4513–4532. [CrossRef]](http://doi.org/10.1080/00207543.2020.1766718)** 20. Belotti, M.; Boži´c, N.; Pujolle, G.; Secci, S. A Vademecum on Blockchain Technologies: When, Which, and How. IEEE Commun. _[Surv. Tutor. 2019, 21, 3796–3838. [CrossRef]](http://doi.org/10.1109/COMST.2019.2928178)_ 21. Singh, S.K.; Jenamani, M.; Dasgupta, D.; Das, S. A Conceptual Model for Indian Public Distribution System Using Consortium [Blockchain with On-Chain and off-Chain Trusted Data. Inf. Technol. Dev. 2021, 27, 499–523. [CrossRef]](http://doi.org/10.1080/02681102.2020.1847024) 22. Dutta, P.; Choi, T.-M.; Somani, S.; Butala, R. Blockchain Technology in Supply Chain Operations: Applications, Challenges and [Research Opportunities. Transp. Res. Part E Logist. Transp. Rev. 2020, 142, 102067. [CrossRef]](http://doi.org/10.1016/j.tre.2020.102067) 23. Andoni, M.; Robu, V.; Flynn, D.; Abram, S.; Geach, D.; Jenkins, D.; McCallum, P.; Peacock, A. Blockchain Technology in the [Energy Sector: A Systematic Review of Challenges and Opportunities. Renew. Sustain. Energy Rev. 2019, 100, 143–174. [CrossRef]](http://doi.org/10.1016/j.rser.2018.10.014) 24. Tandon, A.; Dhir, A.; Islam, A.K.M.N.; Mäntymäki, M. Blockchain in Healthcare: A Systematic Literature Review, Synthesizing [Framework and Future Research Agenda. Comput. Ind. 2020, 122, 103290. [CrossRef]](http://doi.org/10.1016/j.compind.2020.103290) 25. Leng, J.; Ruan, G.; Jiang, P.; Xu, K.; Liu, Q.; Zhou, X.; Liu, C. Blockchain-Empowered Sustainable Manufacturing and Product [Lifecycle Management in Industry 4.0: A Survey. Renew. Sustain. Energy Rev. 2020, 132, 110112. [CrossRef]](http://doi.org/10.1016/j.rser.2020.110112) 26. Xie, J.; Tang, H.; Huang, T.; Yu, F.R.; Xie, R.; Liu, J.; Liu, Y. A Survey of Blockchain Technology Applied to Smart Cities: Research [Issues and Challenges. IEEE Commun. Surv. Tutor. 2019, 21, 2794–2830. [CrossRef]](http://doi.org/10.1109/COMST.2019.2899617) 27. Loukil, F.; Abed, M.; Boukadi, K. Blockchain Adoption in Education: A Systematic Literature Review. Educ. Inf. Technol. 2021, 26, [5779–5797. [CrossRef]](http://doi.org/10.1007/s10639-021-10481-8) 28. Wang, J.; Wu, P.; Wang, X.; Shou, W. The Outlook of Blockchain Technology for Construction Engineering Management. Front. 
_[Eng. Manag. 2017, 67–75. [CrossRef]](http://doi.org/10.15302/J-FEM-2017006)_ 29. Auer, S.; Nagler, S.; Mazumdar, S.; Mukkamala, R.R. Towards Blockchain-IoT Based Shared Mobility: Car-Sharing and Leasing as [a Case Study. J. Netw. Comput. Appl. 2022, 200, 103316. [CrossRef]](http://doi.org/10.1016/j.jnca.2021.103316) ----- _J. Theor. Appl. Electron. Commer. Res. 2023, 18_ 235 30. Faber, B.; Michelet, G.C.; Weidmann, N.; Mukkamala, R.R.; Vatrapu, R. BPDIMS: A Blockchain-Based Personal Data and Identity Management System. In Proceedings of the 52nd Hawaii International Conference on System Sciences, Maui, HI, USA, 8–11 January 2019; pp. 6855–6864. 31. Obour Agyekum, K.O.-B.; Xia, Q.; Boateng Sifah, E.; Amofa, S.; Nketia Acheampong, K.; Gao, J.; Chen, R.; Xia, H.; Gee, J.C.; Du, X.; et al. V-Chain: A Blockchain-Based Car Lease Platform. In Proceedings of the 2018 IEEE International Conference on Internet of Things (iThings) and IEEE Green Computing and Communications (GreenCom) and IEEE Cyber, Physical and Social Computing (CPSCom) and IEEE Smart Data (SmartData), HICSS, Halifax, NS, Canada, 30 July–3 August 2018; pp. 1317–1325. 32. Zheng, K.; Zheng, L.J.; Gauthier, J.; Zhou, L.; Xu, Y.; Behl, A.; Zhang, J.Z. Blockchain Technology for Enterprise Credit Information [Sharing in Supply Chain Finance. J. Innov. Knowl. 2022, 7, 100256. [CrossRef]](http://doi.org/10.1016/j.jik.2022.100256) 33. Fiorentino, S.; Bartolucci, S. Blockchain-Based Smart Contracts as New Governance Tools for the Sharing Economy. Cities 2021, _[117, 103325. [CrossRef]](http://doi.org/10.1016/j.cities.2021.103325)_ 34. Delle Foglie, A.; Panetta, I.C.; Boukrami, E.; Vento, G. The Impact of the Blockchain Technology on the Global Sukuk Industry: [Smart Contracts and Asset Tokenisation. Technol. Anal. Strateg. Manag. 2021, 1–15. [CrossRef]](http://doi.org/10.1080/09537325.2021.1939000) 35. Wang, J.; Zhou, Z.; Botterud, A. An Evolutionary Game Approach to Analyzing Bidding Strategies in Electricity Markets with [Elastic Demand. Energy 2011, 36, 3459–3467. [CrossRef]](http://doi.org/10.1016/j.energy.2011.03.050) 36. Wang, J.; Peng, X.; Du, Y.; Wang, F. A Tripartite Evolutionary Game Research on Information Sharing of the Subjects of Agricultural [Product Supply Chain with a Farmer Cooperative as the Core Enterprise. Manag. Decis. Econ. 2022, 43, 159–177. [CrossRef]](http://doi.org/10.1002/mde.3365) 37. Apaloo, J.; Brown, J.S.; Vincent, T.L. Evolutionary Game Theory: ESS, Convergence Stability, and NIS. Evol. Ecol. Res. 2009, 11, 489–515. 38. Chen, Y.; Zeng, Q.; Zheng, X.; Shao, B.; Jin, L. Safety Supervision of Tower Crane Operation on Construction Sites: An Evolutionary [Game Analysis. Saf. Sci. 2022, 152, 105578. [CrossRef]](http://doi.org/10.1016/j.ssci.2021.105578) 39. Su, L.; Cao, Y.; Li, H.; Tan, J. Blockchain-Driven Optimal Strategies for Supply Chain Finance Based on a Tripartite Game Model. _[J. Theor. Appl. Electron. Commer. Res. 2022, 17, 67. [CrossRef]](http://doi.org/10.3390/jtaer17040067)_ 40. Tang, Q.; Zhang, Z.; Yuan, Z.; Li, Z. The Game Analysis of Information Sharing for Supply Chain Enterprises in the Blockchain. _[Supply Chain 2022, 2, 13. [CrossRef]](http://doi.org/10.3389/fmtec.2022.880332)_ 41. Sun, R.; He, D.; Su, H. Evolutionary Game Analysis of Blockchain Technology Preventing Supply Chain Financial Risks. J. Theor. _[Appl. Electron. Commer. Res. 2021, 16, 155. [CrossRef]](http://doi.org/10.3390/jtaer16070155)_ 42. Song, L.; Luo, Y.; Chang, Z.; Jin, C.; Nicolas, M. 
Blockchain Adoption in Agricultural Supply Chain for Better Sustainability: A [Game Theory Perspective. Sustainability 2022, 14, 1470. [CrossRef]](http://doi.org/10.3390/su14031470) 43. Figueroa-Lorenzo, S.; Añorga, J.; Arrizabalaga, S. Methodological Performance Analysis Applied to a Novel IIoT Access Control [System Based on Permissioned Blockchain. Inf. Process. Manag. 2021, 58, 102558. [CrossRef]](http://doi.org/10.1016/j.ipm.2021.102558) 44. Xu, X.; Sun, G.; Luo, L.; Cao, H.; Yu, H.; Vasilakos, A.V. Latency Performance Modeling and Analysis for Hyperledger Fabric [Blockchain Network. Inf. Process. Manag. 2021, 58, 102436. [CrossRef]](http://doi.org/10.1016/j.ipm.2020.102436) 45. Androulaki, E.; Barger, A.; Bortnikov, V.; Cachin, C.; Christidis, K.; De Caro, A.; Enyeart, D.; Ferris, C.; Laventman, G.; Manevich, Y. Hyperledger Fabric: A Distributed Operating System for Permissioned Blockchains. arXiv 2018, arXiv:1801.10228. 46. Rizzardi, A.; Sicari, S.; Miorandi, D.; Coen-Porisini, A. Securing the Access Control Policies to the Internet of Things Resources [through Permissioned Blockchain. Concurr. Comput. Pract. Exp. 2022, 34, e6934. [CrossRef]](http://doi.org/10.1002/cpe.6934) 47. Steinhoff, S.; Stathakopoulou, C.; Pavlovic, M.; Vukoli´c, M. BMS: Secure Decentralized Reconfiguration for Blockchain and BFT Systems. arXiv 2021, arXiv:210903913. 48. Liu, Y.; Lu, Q.; Paik, H.-Y.; Xu, X.; Chen, S.; Zhu, L. Design Pattern as a Service for Blockchain-Based Self-Sovereign Identity. IEEE _[Softw. 2020, 37, 30–36. [CrossRef]](http://doi.org/10.1109/MS.2020.2992783)_ 49. Viriyasitavat, W.; Xu, L.D.; Sapsomboon, A.; Dhiman, G.; Hoonsopon, D. Building Trust of Blockchain-Based Internet-of-Thing [Services Using Public Key Infrastructure. Enterp. Inf. Syst. 2022, 16, 2037162. [CrossRef]](http://doi.org/10.1080/17517575.2022.2037162) 50. Kuhn, M.; Funk, F.; Zhang, G.; Franke, J. Blockchain-Based Application for the Traceability of Complex Assembly Structures. _[J. Manuf. Syst. 2021, 59, 617–630. [CrossRef]](http://doi.org/10.1016/j.jmsy.2021.04.013)_ 51. Herrera-Joancomartí, J.; Pérez-Solà, C. Privacy in Bitcoin Transactions: New Challenges from Blockchain Scalability Solutions. In Proceedings of the Modeling Decisions for Artificial Intelligence; Springer International Publishing: Cham, Switzerland, 2016; pp. 26–44. 52. Miyachi, K.; Mackey, T.K. HOCBS: A Privacy-Preserving Blockchain Framework for Healthcare Data Leveraging an on-Chain [and off-Chain System Design. Inf. Process. Manag. 2021, 58, 102535. [CrossRef]](http://doi.org/10.1016/j.ipm.2021.102535) 53. Solaiman, E.; Wike, T.; Sfyrakis, I. Implementation and Evaluation of Smart Contracts Using a Hybrid On-and Off-blockchain [Architecture. Concurr. Comput. Pract. Exp. 2021, 33, e5811. [CrossRef]](http://doi.org/10.1002/cpe.5811) 54. Wu, H.; Peng, Z.; Guo, S.; Yang, Y.; Xiao, B. VQL: Efficient and Verifiable Cloud Query Services for Blockchain Systems. IEEE _[Trans. Parallel Distrib. Syst. 2021, 33, 1393–1406. [CrossRef]](http://doi.org/10.1109/TPDS.2021.3113873)_ 55. Udokwu, C.; Norta, A. Deriving and Formalizing Requirements of Decentralized Applications for Inter-Organizational Collabo[rations on Blockchain. Arab. J. Sci. Eng. 2021, 46, 8397–8414. [CrossRef]](http://doi.org/10.1007/s13369-020-05245-4) 56. Surjandari, I.; Yusuf, H.; Laoh, E.; Maulida, R. Designing a Permissioned Blockchain Network for the Halal Industry Using [Hyperledger Fabric with Multiple Channels and the Raft Consensus Mechanism. J. Big Data. 2021, 8, 10. 
[CrossRef]](http://doi.org/10.1186/s40537-020-00405-7) ----- _J. Theor. Appl. Electron. Commer. Res. 2023, 18_ 236 57. Fu, W.; Wei, X.; Tong, S. An Improved Blockchain Consensus Algorithm Based on Raft. Arab. J. Sci. Eng. 2021, 46, 8137–8149. [[CrossRef]](http://doi.org/10.1007/s13369-021-05427-8) 58. Zhang, Y.; Zhang, L.; Liu, Y.; Luo, X. Proof of Service Power: A Blockchain Consensus for Cloud Manufacturing. J. Manuf. Syst. **[2021, 59, 1–11. [CrossRef]](http://doi.org/10.1016/j.jmsy.2021.01.006)** 59. Nechaev, A.S.; Zakharov, S.V.; Barykina, Y.N.; Vel’m, M.V.; Kuznetsova, O.N. Forming Methodologies to Improving the Efficiency [of Innovative Companies Based on Leasing Tools. J. Sustain. Financ. Invest. 2022, 12, 536–553. [CrossRef]](http://doi.org/10.1080/20430795.2020.1784681) 60. Li, Z.; Zhong, R.Y.; Tian, Z.-G.; Dai, H.-N.; Barenji, A.V.; Huang, G.Q. Industrial Blockchain: A State-of-the-Art Survey. Robot. _[Comput.-Integr. Manuf. 2021, 70, 102124. [CrossRef]](http://doi.org/10.1016/j.rcim.2021.102124)_ 61. Li, B.; Li, H.; Sun, Q.; Chen, X. Evolutionary Game Analysis between Businesses and Consumers under the Background of [Internet Rumors. Concurr. Comput. Pract. Exp. 2022, 34, e5897. [CrossRef]](http://doi.org/10.1002/cpe.5897) 62. Weibull, J.W. Evolutionary Game Theory; MIT Press: Cambridge, MA, USA, 1997; ISBN 0-262-73121-5. 63. Xu, B.; Li, Q. Research on Supervision Mechanism of Big Data Discriminatory Pricing on the Asymmetric Service Platform—Based [on SD Evolutionary Game Model. J. Theor. Appl. Electron. Commer. Res. 2022, 17, 63. [CrossRef]](http://doi.org/10.3390/jtaer17040063) 64. Kang, H.; Dai, T.; Jean-Louis, N.; Tao, S.; Gu, X. Fabzk: Supporting Privacy-Preserving, Auditable Smart Contracts in Hyperledger Fabric; IEEE: Piscataway, NJ, USA, 2019; pp. 543–555. 65. Bhushan, B.; Sinha, P.; Sagayam, K.M.; Andrew, J. Untangling Blockchain Technology: A Survey on State of the Art, Security [Threats, Privacy Services, Applications and Future Research Directions. Comput. Electr. Eng. 2021, 90, 106897. [CrossRef]](http://doi.org/10.1016/j.compeleceng.2020.106897) 66. Liu, J.; Zhang, H.; Zhen, L. Blockchain Technology in Maritime Supply Chains: Applications, Architecture and Challenges. Int. J. _[Prod. Res. 2021, 1–17. [CrossRef]](http://doi.org/10.1080/00207543.2021.1930239)_ 67. Chang, V.; Baudier, P.; Zhang, H.; Xu, Q.; Zhang, J.; Arami, M. How Blockchain Can Impact Financial Services—The Overview, [Challenges and Recommendations from Expert Interviewees. Technol. Forecast. Soc. Chang. 2020, 158, 120166. [CrossRef]](http://doi.org/10.1016/j.techfore.2020.120166) 68. Li, Q.; Wang, Y.; Li, K.; Chen, L.; Wei, Z. Evolutionary Dynamics of the Last Mile Travel Choice. Phys. Stat. Mech. Appl. 2019, 536, [122555. [CrossRef]](http://doi.org/10.1016/j.physa.2019.122555) 69. Qiao, Y.; Lan, Q.; Zhou, Z.; Ma, C. Privacy-Preserving Credit Evaluation System Based on Blockchain. Expert Syst. Appl. 2022, 188, [115989. [CrossRef]](http://doi.org/10.1016/j.eswa.2021.115989) 70. [Friedman, D. On Economic Applications of Evolutionary Game Theory. J. Evol. Econ. 1998, 8, 15–43. [CrossRef]](http://doi.org/10.1007/s001910050054) 71. [Nowak, M.A.; Sigmund, K. Evolutionary Dynamics of Biological Games. Science 2004, 303, 793–799. [CrossRef] [PubMed]](http://doi.org/10.1126/science.1093411) 72. Ritzberger, K.; Weibull, J.W. Evolutionary Selection in Normal-Form Games. Econom. J. Econom. Soc. 1995, 63, 1371–1399. [[CrossRef]](http://doi.org/10.2307/2171774) 73. Kang, K.; Zhao, Y.; Zhang, J.; Qiang, C. 
Evolutionary Game Theoretic Analysis on Low-Carbon Strategy for Supply Chain [Enterprises. J. Clean. Prod. 2019, 230, 981–994. [CrossRef]](http://doi.org/10.1016/j.jclepro.2019.05.118) 74. Yadav, S.; Singh, S.P. Blockchain Critical Success Factors for Sustainable Supply Chain. Resour. Conserv. Recycl. 2020, 152, 104505. [[CrossRef]](http://doi.org/10.1016/j.resconrec.2019.104505) 75. Jie, S.; Zhang, P.; Alkubati, M.; Yubin, B.; Ge, Y. Research Advances on Blockchain-as-a-Service: Architectures, Applications and Challenges. Digit. Commun. Netw. 2021, 8, 466–475. 76. Mohril, R.S.; Solanki, B.S.; Lad, B.K.; Kulkarni, M.S. Blockchain Enabled Maintenance Management Framework for Military [Equipment. IEEE Trans. Eng. Manag. 2021, 69, 3938–3951. [CrossRef]](http://doi.org/10.1109/TEM.2021.3099437) 77. Zhu, S.; Cai, Z.; Hu, H.; Li, Y.; Li, W. ZkCrowd: A Hybrid Blockchain-Based Crowdsourcing Platform. IEEE Trans. Ind. Inform. **[2019, 16, 4196–4205. [CrossRef]](http://doi.org/10.1109/TII.2019.2941735)** 78. Holden, R.; Malani, A. Can Blockchain Solve the Hold-up Problem in Contracts? Cambridge University Press: Cambridge, UK, 2021; ISBN 1-00-900479-4. 79. Natanelov, V.; Cao, S.; Foth, M.; Dulleck, U. Blockchain Smart Contracts for Supply Chain Finance: Mapping the Innovation [Potential in Australia-China Beef Supply Chains. J. Ind. Inf. Integr. 2022, 30, 100389. [CrossRef]](http://doi.org/10.1016/j.jii.2022.100389) 80. Oh, J.; Choi, Y.; In, J. A Conceptual Framework for Designing Blockchain Technology Enabled Supply Chains. Int. J. Logist. Res. _[Appl. 2022, 1–19. [CrossRef]](http://doi.org/10.1080/13675567.2022.2052824)_ **Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual** author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content. -----
{ "disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.3390/jtaer18010012?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.3390/jtaer18010012, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "CCBY", "status": "GOLD", "url": "https://www.mdpi.com/0718-1876/18/1/12/pdf?version=1674979153" }
2,023
[ "JournalArticle" ]
true
2023-01-29T00:00:00
[ { "paperId": "2a5e3cd25d4a5d10c75537a9b69cb01e3912547a", "title": "Blockchain-Driven Optimal Strategies for Supply Chain Finance Based on a Tripartite Game Model" }, { "paperId": "fc23a434a7472082ef4920938b6d0c1262f4fdc6", "title": "Blockchain technology for enterprise credit information sharing in supply chain finance" }, { "paperId": "b445bced95aab4b73afdfe0243c4fbafadec7e5c", "title": "Research on Supervision Mechanism of Big Data Discriminatory Pricing on the Asymmetric Service Platform - Based on SD Evolutionary Game Model" }, { "paperId": "3d34e0af4358de6a70f96b0b6133135364661942", "title": "Blockchain smart contracts for supply chain finance: Mapping the innovation potential in Australia-China beef supply chains" }, { "paperId": "be8836dab5f92d5a63bc7ad72ddc91d476fa94b9", "title": "The Game Analysis of Information Sharing for Supply Chain Enterprises in the Blockchain" }, { "paperId": "851ab39c0b1bc14bc905fb6f48d027955e3a5183", "title": "VQL: Efficient and Verifiable Cloud Query Services for Blockchain Systems" }, { "paperId": "74067550d9ae597bab0046f098a491db1425f3ec", "title": "A conceptual framework for designing blockchain technology enabled supply chains" }, { "paperId": "e24e9d576448ba97fcb752dda8f09b5c1d2d6ac7", "title": "Securing the access control policies to the Internet of Things resources through permissioned blockchain" }, { "paperId": "a3846bafe94156d63f5ee9f7ee492d08c674a38f", "title": "Building trust of Blockchain-based Internet-of-Thing services using public key infrastructure" }, { "paperId": "c8bfd7a7c7500addb70cf9c3661569e8f2690dc0", "title": "Privacy-preserving credit evaluation system based on blockchain" }, { "paperId": "666d5c4bafc6b17288146277bccb85206f57596f", "title": "Blockchain Adoption in Agricultural Supply Chain for Better Sustainability: A Game Theory Perspective" }, { "paperId": "ff83766d73b62f34b96c2caf243312ac518e57bd", "title": "Towards blockchain-IoT based shared mobility: Car-sharing and leasing as a case study" }, { "paperId": "1c1486ac0f89baea07c66d7fa58223fb02553e84", "title": "Safety supervision of tower crane operation on construction sites: An evolutionary game analysis" }, { "paperId": "4c918e373c8fe9f2a293d7d0de3488caf427acfe", "title": "Evolutionary Game Analysis of Blockchain Technology Preventing Supply Chain Financial Risks" }, { "paperId": "856aad8a001944deb982be3b9d1e250d763c79e5", "title": "Leasing or selling? 
The channel choice of durable goods manufacturer considering consumers’ capital constraint" }, { "paperId": "7edae9e82a5d05d4d9d7a0b43027a415910459bd", "title": "Blockchain-based smart contracts as new governance tools for the sharing economy" }, { "paperId": "72c849e9b3cdf44bfd3ebb6a38d7f029b3478b6a", "title": "Loss given default in SME leasing" }, { "paperId": "807735c594c6f5140b35319da23528ddb3abf77f", "title": "Blockchain Enabled Maintenance Management Framework for Military Equipment" }, { "paperId": "845596c2ca836ab7c0589de32d55f12884bd611b", "title": "Industrial Blockchain: A state-of-the-art Survey" }, { "paperId": "c29b43e0e3cd15e6c6131b864986b9bfcfc07d22", "title": "Methodological performance analysis applied to a novel IIoT access control system based on permissioned blockchain" }, { "paperId": "50836ce628def307a7f3dc0e76571d6e14778611", "title": "The impact of the Blockchain technology on the global Sukuk industry: smart contracts and asset tokenisation" }, { "paperId": "1a893681c3ae37cf4f77c125679d73f10c8e57aa", "title": "A tripartite evolutionary game research on information sharing of the subjects of agricultural product supply chain with a farmer cooperative as the core enterprise" }, { "paperId": "134bafb2f0e5e0f051f1e242fa60c07f7f57a67d", "title": "Blockchain technology in maritime supply chains: applications, architecture and challenges" }, { "paperId": "c6475211b6b36bfd41cff1a31f4ae7ab99d014d5", "title": "Blockchain adoption in education: a systematic literature review" }, { "paperId": "40a1b5f940fbfc58b4adecec08977850729141ac", "title": "Blockchain-based application for the traceability of complex assembly structures" }, { "paperId": "4c4b2fe43c10e129a021a02712e3eecb100c500f", "title": "Proof of service power: A blockchain consensus for cloud manufacturing" }, { "paperId": "0d9e0cbef607991de72dd1d8c567d242a8bf4a86", "title": "Building A blockchain-based decentralized digital asset management system for commercial aircraft leasing" }, { "paperId": "4b014c993c0ba68c5f3c9000c514c4baa043a8d0", "title": "Deriving and Formalizing Requirements of Decentralized Applications for Inter-Organizational Collaborations on Blockchain" }, { "paperId": "528b5a671ddbb2a17f652be43eefd2a83a255c46", "title": "An Improved Blockchain Consensus Algorithm Based on Raft" }, { "paperId": "f956e4ed93e5b3eebed6568885deee873f684fa9", "title": "Research advances on blockchain-as-a-service: architectures, applications and challenges" }, { "paperId": "2e5af24126293387fdf9e9ae96bc9aab3871908b", "title": "A conceptual model for Indian public distribution system using consortium blockchain with on-chain and off-chain trusted data" }, { "paperId": "5ef707d794c63527137e177e1800e27fac61f6bf", "title": "Untangling blockchain technology: A survey on state of the art, security threats, privacy services, applications and future research directions" }, { "paperId": "bcd8b89a9e7a1fe4ab6ea1486f2aae5ccebbd071", "title": "Role of financial leasing in a capital-constrained service supply chain" }, { "paperId": "5e04786ee99ba2dd4bcdbdbc18f0189ccac0b065", "title": "Blockchain-empowered sustainable manufacturing and product lifecycle management in industry 4.0: A survey" }, { "paperId": "a9e212ef2e0abaca7878194ab831e23d5b5296df", "title": "Blockchain technology in supply chain operations: Applications, challenges and research opportunities" }, { "paperId": "36097e3078e33c51c30f06df508b026a3fcb2b65", "title": "Designing a Permissioned Blockchain Network for the Halal Industry using Hyperledger Fabric with multiple 
channels and the raft consensus mechanism" }, { "paperId": "5baec0e8f6de64ddb33db85fcf4ea38ac10a8527", "title": "Economic and institutional determinants of lease financing for European SMEs: An analysis across developing and developed countries" }, { "paperId": "64aa9e9bbd1a37e67fc5dc1d0d3ac796bcc854be", "title": "Evolutionary game analysis between businesses and consumers under the background of Internet rumors" }, { "paperId": "b4ad7ce1b6a0fefcc2eced1ff0e4943caa613f95", "title": "Forming methodologies to improving the efficiency of innovative companies based on leasing tools" }, { "paperId": "44ff32b8bbfa0b4bdda40162afcbe12af6e5b1b5", "title": "zkCrowd: A Hybrid Blockchain-Based Crowdsourcing Platform" }, { "paperId": "1d7a9791c06a97631b04518266e0eeddc59469f9", "title": "Financing strategy analysis for a multi-sided platform with blockchain technology" }, { "paperId": "441dab9d65a7b1c4658dff33c094330a733210d6", "title": "Implementation and evaluation of smart contracts using a hybrid on‐ and off‐blockchain architecture" }, { "paperId": "a7a0e0e85625485218ff382ebcbf76ca20bd9960", "title": "Design Pattern as a Service for Blockchain-Based Self-Sovereign Identity" }, { "paperId": "69d3bca2a2ec91801c6542cc63476e0660151b36", "title": "Evolutionary dynamics of the last mile travel choice" }, { "paperId": "84f228f6ab0b5710ed88deb3af2e1986a32af59f", "title": "Evolutionary game theoretic analysis on low-carbon strategy for supply chain enterprises" }, { "paperId": "c4ed360660a44a7623154fc18266ce013a5f0c44", "title": "Blockchain Mutability: Challenges and Proposed Solutions" }, { "paperId": "ea8a68d7f356c9e0a6f74706e429e8b41250b784", "title": "A Vademecum on Blockchain Technologies: When, Which, and How" }, { "paperId": "2aa44a41ae725f07952f240d5cb6eefe6daff385", "title": "Leasing or buying white goods: comparing manufacturer profitability versus cost to consumer" }, { "paperId": "cf58f0c53a2bd2c0a1ee7bec77920c3f80036ac2", "title": "Blockchain characteristics and consensus in modern business processes" }, { "paperId": "7117fdbfc3f3a4561ddc7843a54eec31b7373f18", "title": "A Survey of Blockchain Technology Applied to Smart Cities: Research Issues and Challenges" }, { "paperId": "60be2610dba19761d6458bbac27527b744b0109e", "title": "Blockchain technology in the energy sector: A systematic review of challenges and opportunities" }, { "paperId": "2e82b8539af92b4af1f5c1c59dcce9d31dcefccc", "title": "Blockchain technology and its relationships to sustainable supply chain management" }, { "paperId": "ec6cbabde4283b20bf65fac9b1b04bfda468566b", "title": "Scalable and Privacy-Preserving Data Sharing Based on Blockchain" }, { "paperId": "1b4c39ba447e8098291e47fd5da32ca650f41181", "title": "Hyperledger fabric: a distributed operating system for permissioned blockchains" }, { "paperId": "64d67a34a2fb537f56bb8ea43998553f3de92701", "title": "The Benefits of Relationship Lending in a Cross-Country Context: A Meta-Analysis" }, { "paperId": "dfa03b4ffe5dfb9cb0b419c12dbd9faa609ff739", "title": "Extended Warranties, Maintenance Service and Lease Contracts: Modeling and Analysis for Decision-Making" }, { "paperId": "cf766eb8fbe8bd9d5f9716593f8ce3eb6b4e805c", "title": "An evolutionary game approach to analyzing bidding strategies in electricity markets with elastic de" }, { "paperId": "f9c1aaab91aaf6bd27649539417cc6b8fddc6233", "title": "Leasing, Ability to Repossess, and Debt Capacity" }, { "paperId": "2feedd23395f882e40dc46f864f2a536c0fe0234", "title": "Evolutionary Dynamics of Biological Games" }, { "paperId": 
"5a435a7774f8d73836dd86546808abcc876ba3b6", "title": "The Link between Default and Recovery Rates: Theory, Empirical Evidence and Implications" }, { "paperId": "42ef29fa9b184295459b51b5854e56ed27df3496", "title": "On economic applications of evolutionary game theory" }, { "paperId": "fc11bbcaea9eaaf4b5a00e3c7518867f832ed849", "title": "Evolutionary Selection in Normal-Form Games" }, { "paperId": "403d78d0193442d6c5f56d3f29521ab583a862b7", "title": "Determinants of Corporate Leasing Policy" }, { "paperId": "28ff211bf98028a4acf3d1495c11efc7a7c50801", "title": "hOCBS: A privacy-preserving blockchain framework for healthcare data leveraging an on-chain and off-chain system design" }, { "paperId": "961a69342dbac1523d0eb09557d1a00de79d6c57", "title": "Latency performance modeling and analysis for hyperledger fabric blockchain network" }, { "paperId": null, "title": "Can Blockchain Solve the Hold-up Problem in Contracts?" }, { "paperId": "bce3e557e24247f99b42370c5c718b44a3107307", "title": "Blockchain critical success factors for sustainable supply chain" }, { "paperId": "466c6f989490a3819075908336b27ca722f6337f", "title": "Technological Forecasting & Social Change" }, { "paperId": "cdb46e45d8b96cef76ed1c081275165e2f8cc726", "title": "Evolutionary game theory: ESS, convergence stability, and NIS" }, { "paperId": "9e84bd933c267b09422d9f25d53fed92846e18f2", "title": "Computers in Industry" } ]
36,384
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Mathematics", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/024384b677b58c1f48697245b748ec291812b563
[ "Computer Science" ]
0.880916
Universal Re-encryption for Mixnets
024384b677b58c1f48697245b748ec291812b563
The Cryptographer's Track at RSA Conference
[ { "authorId": "2779068", "name": "P. Golle" }, { "authorId": "2836467", "name": "M. Jakobsson" }, { "authorId": "1687161", "name": "A. Juels" }, { "authorId": "3213341", "name": "P. Syverson" } ]
{ "alternate_issns": null, "alternate_names": [ "Cryptogr Track RSA Conf", "The Cryptographers’ Track at the RSA Conference", "CT-RSA" ], "alternate_urls": null, "id": "7d878997-4b28-4d42-97e4-1146b7c090bc", "issn": null, "name": "The Cryptographer's Track at RSA Conference", "type": "conference", "url": "http://www.wikicfp.com/cfp/program?id=620" }
null
# Universal Re-encryption for Mixnets

**Abstract.** We introduce a new cryptographic technique that we call universal re-encryption. A conventional cryptosystem that permits re-encryption, such as ElGamal, does so only for a player with knowledge of the public key corresponding to a given ciphertext. In contrast, universal re-encryption may be performed without knowledge of public keys. We demonstrate an asymmetric cryptosystem with universal re-encryption that is half as efficient as standard ElGamal in terms of both computation and storage. While technically and conceptually simple, universal re-encryption leads to new types of functionality in mixnet architectures. Conventional mixnets are often called upon to enable players to communicate with one another through channels that are externally anonymous, i.e., that hide information permitting traffic analysis. Universal re-encryption permits a mixnet of this kind to be constructed in which servers hold no public or private keying material, and may therefore dispense with the cumbersome requirements of key generation, key distribution, and private-key management. We describe two practical mixnet constructions, one involving asymmetric input ciphertexts, and another with hybrid-ciphertext inputs.

**Key words:** anonymity, mix networks, private channels, public-key cryptography, universal re-encryption

## 1 Introduction

A mix network or mixnet is a cryptographic construction that invokes a set of servers to establish private communication channels [5]. One type of mix network accepts as input a collection of ciphertexts, and outputs the corresponding plaintexts in a randomly permuted order. The main privacy property desired of such a mixnet is that the permutation matching inputs to outputs should be known only to the mixnet, and no one else. In particular, an adversary should be unable to guess which input ciphertext corresponds to an output plaintext any more effectively than by guessing at random.

One common variety of mixnet known as a re-encryption mixnet relies on a public-key encryption scheme, such as ElGamal [11], that allows for re-encryption of ciphertexts. For a given public key, a ciphertext $C'$ is said to represent a re-encryption of $C$ if both ciphertexts decrypt to the same plaintext. In a re-encryption mixnet, the inputs are submitted encrypted under the public key of the mixnet. (The corresponding private key is held in distributed form among the servers.) The batch of input ciphertexts is processed sequentially by each mix server. The first server takes the set of input ciphertexts, re-encrypts them, and outputs the re-encrypted ciphertexts in a random order. Each server in turn takes the set of ciphertexts output by the previous server, and re-encrypts and mixes them. The set of ciphertexts produced by the last server may be decrypted by a quorum of mix servers to yield plaintext outputs. Privacy in this mixnet construction derives from the fact that the ciphertext pair $(C, C')$ is indistinguishable from a pair $(C, R)$ for a random ciphertext $R$ to any adversary without knowledge of the private key.

In this paper, we propose a new type of public-key cryptosystem that permits _universal re-encryption_ of ciphertexts. We introduce the term universal re-encryption to mean re-encryption without knowledge of the public key under which a ciphertext was computed. Like standard re-encryption, universal re-encryption transforms a ciphertext $C$ into a new ciphertext $C'$ with the same corresponding plaintext.

The novelty in our proposal is that re-encryption neither requires nor yields knowledge of the public key under which a ciphertext was computed[1]. When applied to mix networks, our universal re-encryption technique offers new and interesting functionality. Most importantly, mix networks based on universal re-encryption dispense with the cumbersome protocols that traditional mixnets require in order to establish and maintain a shared private key. We discuss more benefits and applications of universal mixnets in the next section.

It is possible to construct a universal mixnet based on universal re-encryption roughly as follows. Every input to the mixnet is encrypted under the public key of the recipient for whom it is intended. Thus, unlike standard re-encryption mixnets, universal mixnets accept ciphertexts encrypted under the individual public keys of receivers, rather than encrypted under the unique public key of the mix network. These ciphertexts are universally re-encrypted and mixed by each server. The output of a universal mixnet is a set of ciphertexts. Recipients can retrieve from the set of output ciphertexts those addressed to them, and decrypt them.

**Organization**

The rest of the paper is organized as follows. In the next section, we give an overview of the main properties that distinguish universal mixnets from standard mixnets, and give one example of a new application made possible by universal mixnets. This is followed in section 3 by a formal definition of semantic security for universal re-encryption, as well as a proposal for creating a public-key cryptosystem with universal re-encryption based on ElGamal. In section 4, we describe our construction for an asymmetric universal mixnet. We define and prove the security properties of our system in section 5. In section 6, we propose a hybrid variant of our universal mixnet construction that combines public-key and symmetric encryption to handle long messages efficiently. We conclude in section 7.

## 2 Universal Mixnets: Properties and Applications

To motivate the constructions of this paper, we list here some of the main properties that set apart universal mixnets from traditional re-encryption mixnets. We also give one example of a new application made possible by universal mixnets: anonymization of RFID tags.

**Universal mixnets hold no keying material.** A universal mixnet operates without a monolithic public key and thus dispenses at the server level with the complexities of key generation, key distribution, and key maintenance. This allows a universal mixnet to be set up more efficiently and with greater flexibility than a traditional re-encryption mixnet. A universal mixnet can be rapidly re-configured: servers can enter and leave arbitrarily, even in the middle of a round of processing, without going through any setup. A mix server that crashes or otherwise disappears in the midst of the mixing process can thus be easily replaced by another server.

**Universal mixnets guarantee forward anonymity.** The absence of shared keys means that universal mixnets offer perfect forward-anonymity. Even if all mix servers become corrupted, the anonymity of previously mixed batches is preserved (provided that servers do not store the permutations or re-encryption factors they used to process their inputs). In contrast, if the keying material of a standard mix is revealed, an adversary with transcripts from previous mix sessions can compromise the privacy of users.

1 We note that universal re-encryption has been independently devised by Danezis [7], although with a somewhat different application than we consider here.

**Universal mixnets do not support escrow capability.** The flip-side of perfect forward-anonymity is that it is not possible to escrow the privacy offered by a universal mixnet in a straightforward fashion. Escrow is only achievable in a universal mix as long as every server involved in the mixing remembers how it permuted its inputs and is willing to reveal that permutation. This may be a drawback from the perspective of law enforcement. In comparison, escrow is possible in a traditional mix, provided that the shared key can be reconstructed. This requires the participation of only a quorum of servers, not all of them.

**Efficiency.** We present in this paper a public-key cryptosystem with universal re-encryption that is half as efficient as standard ElGamal: it requires exactly twice as much storage, and also twice as much computation for encryption, re-encryption, and decryption. In this regard, the universal mixnet constructions we propose in this paper are practical. The drawback of a universal mixnet, as we discuss in detail below, is that receivers must attempt to decrypt all output items in order to identify the messages intended for them.

**2.1 Anonymizing RFID tags**

An interesting new application made possible by universal mixnets is the anonymization of radio-frequency identification (RFID) tags. An RFID tag is a small device that is used to locate and identify physical objects. RFID tags have very limited processing ability (insufficient to perform any re-encryption of data), but they allow devices to read and write to their memory [20, 21]. Communication with RFID tags is performed by means of radio, and the tags themselves often obtain power by induction. Examples of uses of RFID tags include the theft-detection tags attached to consumer items in stores and the plaques mounted on car windshields for automated toll payment. Due to the projected decrease in the cost of RFID tags, their use is likely to extend in the near future to a wide range of general consumer items, including possibly even banknotes [26, 16].

This raises concerns of an emerging privacy threat. Most RFID tags emit static identifiers. Thus, an adversary with control of a large base of readers for RFID tags may be able to track the movement of any object in which an RFID tag is embedded, and hence learn the whereabouts of the owner of that object. In order to prevent tracking of RFID tags, one could let some set of (honest-but-curious) servers perform re-encryption of the information that is publicly readable from RFID tags. The resulting system is surprisingly similar to a mix network, in which the permutation of ciphertexts is replaced by the movement of the RFID tags.

A traditional mix network, however, only partially solves the problem of tracking. The difficulty lies in the fact that the data contained in different RFID tags may be encrypted under different public keys, depending on who possesses the authority to access that data. For example, while the data contained in tags used for automated toll payment may be encrypted under the public key of the transit agency, the data contained in tags attached to merchandise in a department store may be encrypted under the public key of that department store. To re-encrypt RFID tag data, a traditional mix network would need knowledge of the key under which that data was encrypted.

The public key associated with an RFID tag could be made readable, but then the public key itself becomes an identifier permitting a certain degree of tracking. This is particularly the case if a user carries a collection of tags, and may therefore be identified by means of a constellation of public keys.

Universal mixnets offer a means of addressing the problem of RFID-tag privacy. If the data contained in RFID tags is encrypted with a cryptosystem that permits universal re-encryption, then this data can be re-encrypted without knowledge of the public key. Thus universal re-encryption may offer heightened privacy in this setting by permitting agents to perform re-encryption without knowledge of public keys. While there have been previous designs using mixes for the purposes of privacy protection for low-power devices (e.g., [19]), universal re-encryption permits significant protocol and management simplification.

## 3 Universal Re-encryption

A conventional randomized public-key cryptosystem comprises a triple of algorithms, $CS = (KG, E, D)$, for key generation, encryption, and decryption respectively. We assume, as is often the case for discrete-log-based cryptosystems, that system parameters and underlying algebraic structures for $CS$ are published in advance by a trusted party. These are generated according to a common security parameter $k$. System parameters include or imply specifications of $\mathbf{M}$, $\mathbf{C}$, and $\mathbf{R}$, respectively a message space, ciphertext space, and set of encryption factors. In more detail:

- The key-generation algorithm $(PK, SK) \leftarrow KG$ outputs a random key pair.
- The encryption algorithm $C \leftarrow E(m, r, PK)$ is a deterministic algorithm that takes as input a message $m \in \mathbf{M}$, an encryption factor $r \in \mathbf{R}$ and a public key $PK$, and outputs a ciphertext $C \in \mathbf{C}$.
- The decryption algorithm $m \leftarrow D(SK, C)$ takes as input a private key $SK$ and ciphertext $C \in \mathbf{C}$ and outputs the corresponding plaintext.

A critical security property for providing privacy in a mix network is that of _semantic security_. Loosely speaking, this property stipulates the infeasibility of learning any information at all about a plaintext from a corresponding ciphertext [12]. For a more formal definition, we consider an adversary that is given a public key $PK$, where $(PK, SK) \leftarrow KG$. This adversary chooses a pair $(m_0, m_1)$ of plaintexts. Corresponding ciphertexts $(C_0, C_1) = (E(m_0, r_0, PK), E(m_1, r_1, PK))$ for $r_0, r_1 \in_U \mathbf{R}$ are computed, where $\in_U$ denotes uniform, random selection. For a random bit $b$, the adversary is given the pair $(C_b, C_{1-b})$, and tries to guess $b$. The cryptosystem $CS$ is said to be semantically secure if the adversary can guess $b$ with advantage at most negligible in $k$, i.e., with probability at most negligibly larger than $1/2$.

For a re-encryption mix network, an additional component known as a re-encryption algorithm, denoted by $Re$, is required in $CS$. This algorithm re-randomizes the encryption factor in a ciphertext. In a standard cryptosystem, this means that $C' \leftarrow Re(C, r, PK)$ for $C, C' \in \mathbf{C}$, $r \in \mathbf{R}$, and a public key $PK$. Observe that re-encryption, in contrast to encryption, may be executed without knowledge of a plaintext. The notion of semantic security may be naturally extended to apply to the re-encryption operation by considering an adversary that chooses ciphertexts $(C_0, C_1)$ under $PK$. The property of semantic security under re-encryption, then, means the following: given respective re-encryptions $(C'_b, C'_{1-b})$ in a random order, the adversary cannot guess $b$ with non-negligible advantage in $k$. Provided that $Re$ yields the same distribution of ciphertexts as $E$ (given $r \in_U \mathbf{R}$) or that the two distributions are indistinguishable, it may be seen that basic semantic security implies semantic security under re-encryption.

Bellare et al. [3] define another useful property possessed by the ElGamal cryptosystem. Known as "key-privacy," this property may be loosely stated as follows. Given a ciphertext encrypted under a public key randomly selected from a published pair $(PK_0, PK_1)$, an adversary cannot determine which key corresponds to the ciphertext with non-negligible advantage. Key-privacy is one feature of the security property we develop in this paper for universal re-encryption.

As already explained, a universal cryptosystem permits re-encryption without knowledge of the public key corresponding to a given ciphertext. Let us denote such a cryptosystem by $UCS = (UKG, UE, URe, UD)$, where $UKG$, $UE$, and $UD$ are key generation, encryption, and decryption algorithms. These are defined as in a standard cryptosystem. The difference between a universal cryptosystem $UCS$ and a standard cryptosystem resides in the re-encryption algorithm $URe$. The algorithm $URe$ takes as input a ciphertext $C$ and re-encryption factor $r$, but no public key $PK$. Thus, we have $C' \leftarrow URe(C, r)$ for $C, C' \in \mathbf{C}$, $r \in \mathbf{R}$.

To define universal semantic security under re-encryption, i.e., with respect to $URe$, it is necessary to consider an adversarial experiment that is a variant on the standard one for semantic security. We define an experiment $uss$ as follows for a (stateful) adversarial algorithm $\mathcal{A}$. This experiment terminates on issuing an output bit. As above, we assume an appropriate implicit parameterization of $UCS$ under security parameter $k$. The idea behind the experiment is as follows. The adversary is permitted to construct universal ciphertexts under two randomly generated keys, $PK_0$ and $PK_1$. These ciphertexts are then re-encrypted. The aim of the adversary is to distinguish between the two re-encryptions. The adversary should be unable to do so with non-negligible advantage.

Experiment $\mathbf{Exp}^{uss}_{\mathcal{A}}(UCS, k)$:
- $PK_0 \leftarrow UKG$; $PK_1 \leftarrow UKG$;
- $(m_0, m_1, r_0, r_1) \leftarrow \mathcal{A}(PK_0, PK_1, \text{"specify ciphertexts"})$;
- if $m_0, m_1 \notin \mathbf{M}$ or $r_0, r_1 \notin \mathbf{R}$ then output '0';
- $C_0 \leftarrow UE(m_0, r_0, PK_0)$; $C_1 \leftarrow UE(m_1, r_1, PK_1)$;
- $r'_0, r'_1 \in_U \mathbf{R}$;
- $C'_0 \leftarrow URe(C_0, r'_0)$; $C'_1 \leftarrow URe(C_1, r'_1)$;
- $b \in_U \{0, 1\}$;
- $b' \leftarrow \mathcal{A}(C'_b, C'_{1-b}, \text{"guess"})$;
- if $b = b'$ then output '1'; else output '0';

We say that $UCS$ is semantically secure under re-encryption if for any adversary $\mathcal{A}$ with resources polynomial in $k$, the probability $\Pr[\mathbf{Exp}^{uss}_{\mathcal{A}}(UCS, k) = \text{'1'}] - 1/2$ is negligible in $k$. The experiment $uss$ captures the idea that the keys associated with ciphertexts are concealed by the re-encryption process in $UCS$. Thus, even an adversary with the opportunity to compose the ciphertexts undergoing re-encryption cannot make use of differences in public keys in order to defeat the semantic security of the cryptosystem.

**3.1 Universal re-encryption based on ElGamal**

We present a public-key cryptosystem with universal re-encryption that may be based on the ElGamal cryptosystem implemented over any suitable algebraic group. The basic idea is simple: we append to a standard ElGamal ciphertext a second ciphertext on the identity element. By exploiting the algebraic homomorphism of ElGamal, we can use the second ciphertext to alter the encryption factor in the first ciphertext. As a result, we can dispense with knowledge of the public key in the re-encryption operation. As already noted, this construction is half as efficient as standard ElGamal.

Let $E[m]$ loosely denote ElGamal encryption of a plaintext $m$ (under some key). In a universal cryptosystem, a ciphertext on a message $m$ consists of a pair $[E[m]; E[1]]$. ElGamal possesses a homomorphic property, namely that $E[a] \times E[b] = E[ab]$ for group operator $\times$. Thanks to this property, the second component can be used to re-encrypt the first without knowledge of the associated public key. To provide more detail, let $\mathcal{G}$ denote the underlying group for the ElGamal cryptosystem; let $q$ denote the order of $\mathcal{G}$. (Here the security parameter $k$ is implicit in the choice of $\mathcal{G}$.) Let $g$ be a published generator for $\mathcal{G}$. The universal cryptosystem is as follows. Note that we assume random selection of encryption and re-encryption factors in this description.

- **Key generation (UKG):** Output $(PK, SK) = (y = g^x, x)$ for $x \in_U \mathbb{Z}_q$.
- **Encryption (UE):** Input comprises a message $m$, a public key $y$, and a random encryption factor $r = (k_0, k_1) \in \mathbb{Z}_q^2$. The output is a ciphertext $C = [(\alpha_0, \beta_0); (\alpha_1, \beta_1)] = [(m y^{k_0}, g^{k_0}); (y^{k_1}, g^{k_1})]$. We write $C = UE_{PK}(m, r)$ or $C = UE_{PK}(m)$ for brevity.
- **Decryption (UD):** Input is a ciphertext $C = [(\alpha_0, \beta_0); (\alpha_1, \beta_1)]$ under public key $y$. Verify $\alpha_0, \beta_0, \alpha_1, \beta_1 \in \mathcal{G}$; if not, the decryption fails, and a special symbol $\perp$ is output. Compute $m_0 = \alpha_0 / \beta_0^x$ and $m_1 = \alpha_1 / \beta_1^x$. If $m_1 = 1$, then the output is $m = m_0$. Otherwise, the decryption fails, and a special symbol $\perp$ is output. Note that this ensures a binding between ciphertexts and keys: a given ciphertext can be decrypted only under one given key.
- **Re-encryption (URe):** Input is a ciphertext $C = [(\alpha_0, \beta_0); (\alpha_1, \beta_1)]$ with a random re-encryption factor $r' = (k'_0, k'_1) \in \mathbb{Z}_q^2$. Output is a ciphertext $C' = [(\alpha'_0, \beta'_0); (\alpha'_1, \beta'_1)] = [(\alpha_0 \alpha_1^{k'_0}, \beta_0 \beta_1^{k'_0}); (\alpha_1^{k'_1}, \beta_1^{k'_1})]$, where $k'_0, k'_1 \in_U \mathbb{Z}_q$.

Observe that the ciphertext size and the computational costs for all algorithms are exactly twice those of the basic ElGamal cryptosystem. The properties of standard semantic security and also universal semantic security under re-encryption (as characterized by experiment $uss$) may be shown straightforwardly to be reducible to the Decision Diffie-Hellman (DDH) assumption [4] over the group, in much the same way as the semantic security of ElGamal [25]. Thus, one possible choice of $\mathcal{G}$ is the subgroup of order $q$ of $\mathbb{Z}_p^*$, where $p$ and $q$ are primes such that $q \mid p - 1$. An alternative, with the advantage of more compact ciphertext representation, is a group of prime order $q$ defined over an appropriately selected elliptic curve such that the DDH assumption is believed to be hard. Throughout the remainder of the paper, we work with the ElGamal implementation of universal re-encryption, and let $g$ denote a published generator for the choice of underlying group $\mathcal{G}$.
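To make the algebra above concrete, here is a minimal Python sketch of the four algorithms together with the re-encrypt-and-shuffle usage pattern developed in the next section. Everything numeric is an illustrative assumption: the tiny safe prime $p = 2879$ (so $q = 1439$), the generator 4, and the encoding of plaintexts as group elements offer no security and exist only to make the round trip runnable.

```python
import random
import secrets

# Toy group parameters (assumption: illustrative only, NOT secure).
P = 2879   # small safe prime, P = 2*Q + 1
Q = 1439   # prime order of the subgroup of squares mod P
G = 4      # generator of the order-Q subgroup (a square mod P)

def rand_exp():
    # Nonzero exponent in Z_Q; zero would produce the degenerate (1, 1)
    # second component flagged in the RFID discussion of Section 4.
    return secrets.randbelow(Q - 1) + 1

def ukg():
    """UKG: output (PK, SK) = (y = g^x, x)."""
    x = rand_exp()
    return pow(G, x, P), x

def ue(m, y):
    """UE: C = [(m*y^k0, g^k0); (y^k1, g^k1)] for random (k0, k1)."""
    k0, k1 = rand_exp(), rand_exp()
    return ((m * pow(y, k0, P) % P, pow(G, k0, P)),
            (pow(y, k1, P), pow(G, k1, P)))

def ure(c):
    """URe: re-encrypt using only the second component -- no public key."""
    (a0, b0), (a1, b1) = c
    k0, k1 = rand_exp(), rand_exp()
    return ((a0 * pow(a1, k0, P) % P, b0 * pow(b1, k0, P) % P),
            (pow(a1, k1, P), pow(b1, k1, P)))

def ud(c, x):
    """UD: return the plaintext, or None (the paper's ⊥) on key mismatch."""
    (a0, b0), (a1, b1) = c
    if a1 * pow(b1, -x, P) % P != 1:  # second component must open to 1
        return None                   # pow(., -x, P) needs Python >= 3.8
    return a0 * pow(b0, -x, P) % P

# Round trip plus one universal mixing round: re-encrypt, then shuffle.
pk_a, sk_a = ukg()
pk_b, sk_b = ukg()
m_a, m_b = pow(G, 7, P), pow(G, 11, P)  # plaintexts must be group elements
board = [ue(m_a, pk_a), ue(m_b, pk_b)]
board = [ure(c) for c in board]
random.shuffle(board)
# Receiver A trial-decrypts everything (the linear cost noted in Section 2).
received = [m for c in board if (m := ud(c, sk_a)) is not None]
assert received == [m_a]
print("receiver A recovered:", received)
```

The assertion passes because the second ciphertext component opens to the identity only under the intended key, which is exactly the ciphertext-to-key binding noted in the decryption algorithm above.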
## 4 Universal Mix Network Construction

We use the following scenario to introduce our universal mixnet construction. We consider a number of senders who wish to send messages to recipients in such a way that the communication is concealed from everyone but the sender and recipient themselves. In other words, we wish to establish channels between senders and receivers that are externally anonymous.

We assume that every recipient has an ElGamal private/public key pair $(x, y = g^x)$ in some published group $\mathcal{G}$. We also assume that every sender knows the public key of all the receivers with whom she intends to communicate. (Alternatively, the sender may have a "blank" ciphertext for this party. By this we mean an encryption using $UE$ of the identity element in $\mathcal{G}$ under the public key of the recipient. A "blank" may be filled in without knowledge of the corresponding public key through exploitation of the underlying algebraic homomorphism in ElGamal.) The communication protocol proceeds as follows:

1. **Submission of inputs.** Senders post to a bulletin board messages that are universally encrypted under the public key of the recipient for whom they are intended. Every entry on the bulletin board thus consists of a pair of ElGamal ciphertexts $(E[m]; E[1])$ under the public key of the recipient. Recall that the semantic security of ElGamal ensures the concealment of plaintexts. In other words, for plaintexts $m$ and $m'$, a universal ciphertext $(E[m]; E[1])$ is indistinguishable from another $(E[m']; E[1])$ to any entity without knowledge of the corresponding private key.
2. **Universal mixing.** Any server can be called upon to mix the contents of the bulletin board. This involves two operations: (1) the server re-encrypts all the universal ciphertexts on the bulletin board using $URe$, and (2) the server writes the resulting new ciphertexts back to the bulletin board in random order, overwriting the old ones. It is also desirable that a server that mixes the inputs be able to prove that it operated correctly. This can be done using a number of existing mixing schemes, e.g., [1, 2, 10, 13, 15, 17], and will be discussed in greater detail below.
3. **Retrieval of the outputs.** Potential recipients must try to decrypt every encrypted message output by the universal mixnet. Successful decryptions correspond to messages that were intended for that recipient. The others (corresponding to decryption output $\perp$) are discarded by the party attempting to perform the decryption. Recall that our construction of universal encryption based on ElGamal ensures a binding between ciphertexts and keys, so that a given ciphertext can be decrypted only under one given key.

**Properties of the basic protocol:**

1. The universal mixnet holds no keying information. Public and private keys are managed exclusively by the players providing input ciphertexts and receiving outputs from the mix.
2. The universal mixnet guarantees only external anonymity. It does not provide anonymity for senders with respect to receivers. Indeed, a receiver can trace a message intended for her throughout the mixing process, since that message is encrypted under her public key. If ciphertexts are not posted anonymously, this means that the receiver can identify the players who have posted messages for her. This restriction to external anonymity is of little consequence for the applications we focus on, namely protection against traffic analysis, but should be borne in mind for other applications.
3. The chief drawback of universal mixnets is the overhead that they impose on receivers. Because the public keys corresponding to individual output ciphertexts are unknown, it may be necessary for a receiver to attempt to decrypt each output ciphertext in order to find the right one, i.e., the ciphertext corresponding to her private key. Thus, a universal mixnet imposes an overhead on receivers that is linear in the input batch size. (We discuss ways below and in section 6 to reduce this overhead somewhat.)

**Low-volume anonymous messaging: anonymizing bulletin boards.** For simplicity, we have described above the operation of a universal mixnet in which inputs are submitted, mixed and finally retrieved. This sequence of events is characteristic of all mixes. Unlike regular mixes, however, universal mixes allow for repeated interleaving of the submission, mixing and retrieval steps. What makes this possible is that the decryption is performed by the recipients of the message rather than by the mixnet, so that existing messages posted to the bulletin board are at all times indistinguishable from new messages. New inputs may be constantly added to the existing content of the bulletin board, and outputs retrieved, provided there is at least one round of mixing between every submission and retrieval to ensure privacy.

This suggests a generalization of the private communication protocol described above, in which the bulletin board maintains at all times a pool of unclaimed messages. In other words, universal mixing lends itself naturally to the construction of an anonymizing bulletin board. Senders may add messages and receivers retrieve them at any time, provided there is always at least one round of mixing between each posting and retrieval. This protocol appears well suited to guarantee anonymity from external observers in a system in which few messages are exchanged. The privacy of the protocol relies on the existence of a steady pool of undelivered messages rather than on a constant flow of new messages. The former condition appears much easier to satisfy than the latter in cases when the total number of exchanged messages is small. This pooling of messages affords good anonymity protection, without the usual lack of verifiability of correct performance that vexes such schemes.[2]

A potential drawback of a bulletin board based on universal mixing is that one must download the full contents in order to be assured of obtaining all of the messages addressed to oneself. This becomes problematic if the number of messages on the bulletin board is permitted to grow indefinitely. To mitigate this problem, it is possible to have recipients remove the messages they have received.[3] An anonymizing bulletin board based on universal mixing has the important privacy-protecting feature that removal of a particular message does not reveal which entity posted that message. Another important observation, as described in the next section, is that only a portion of each message on a bulletin board need be downloaded to allow a recipient to determine which messages are intended for her. This further restricts the work required by a receiver.

**RFID-tag privacy.** Universal re-encryption may be used to enhance the privacy of RFID tags. The idea is to permit powerful computing agents external to RFID tags to universally re-encrypt the tag data (recall that the tags lack the computing power necessary to do the re-encryption themselves). Thus, for example, a consumer walking home with a bag of groceries containing RFID tags might have the ciphertexts on these tags re-encrypted by computing agents that are provided as a public service by shops and banks along the way. In this case, the tags in the bag of groceries will periodically change appearance, helping to defeat any tracking attempt.

Application of universal mixnets to RFID-tag privacy is different in some important respects from realization of an anonymous bulletin board. As re-encryption naturally occurs for RFID tags on an individual basis, re-encryption in this setting may be regarded as realizing an _asynchronous_ mixnet. There is also a special security consideration in this setting. Suppose that the ciphertext on an RFID tag is of the form $(\alpha, \beta); (1, 1)$ (where '1' represents the identity element for $\mathcal{G}$). Then the ciphertext on the tag will not change upon re-encryption. Thus, it is important to prevent an active adversary from inserting such a ciphertext onto an RFID tag so as to be able to trace it and undermine the privacy of the possessor. In particular, on processing ciphertexts, re-encryption agents should check that they do not possess this degenerate form. Of course, an adversary in this environment can always corrupt ciphertexts. Note, however, that even a corrupted ciphertext $(\alpha', \beta'); (\gamma, \delta)$ will be rendered unrecognizable to an adversary provided that $\gamma, \delta \neq 1$.

2 So-called pool mixes typically use processing delays in asynchronous settings to hide timing information. They were first described by Lance Cottrell in the nineties [6]. See [23] for a further discussion of pool mixes, and [9] for an approach to verifying correct functioning of pool mixes.

3 To ensure that messages are only removed by the intended recipient, a proof of knowledge of the corresponding decryption key is required. Note that such a proof can be performed without disclosing the public key associated with the required decryption key. For ciphertext $C = [(\alpha_0, \beta_0); (\alpha_1, \beta_1)]$, this may take the form of a non-interactive zero-knowledge proof of knowledge of an exponent $x$ such that $\alpha_1 = \beta_1^x$, essentially a Schnorr signature [22].

## 5 Security

In this section, we define two security properties of universal mixnets:

- **Correctness:** The mixnet is correct if the set of outputs it produces is a permutation of the set of inputs.
- **Communication privacy:** The mixnet guarantees communication privacy if, when Alice sends a message to Bob and Cathy sends a message to Dario, an observer cannot tell whether Alice (resp. Cathy) sent a message to Bob or Dario.

**Correctness.** Correctness for universal mixnets follows directly from the definition of correctness for standard mixnets. Like standard mix servers, universal servers must prove that they have performed the mixing operation correctly. For this, it is possible to draw on essentially any of the proof techniques presented in the literature on mixnets, as nearly all apply to ElGamal ciphertexts. For example, to achieve universal verifiability, it is possible to employ the proof techniques in [10, 17, 15]. A small technical consideration, which may be dealt with straightforwardly, is the form of input ciphertexts. Input ciphertexts in most mix network constructions consist of a single ElGamal ciphertext, while in our construction, an input consists of a universal ciphertext, and thus two related ElGamal ciphertexts.

**Communication privacy.** We define next the property of communication privacy. In order to state this definition formally, we abstract away some of the operations of the mixnet by defining them in terms of oracle operations. We do this so as to focus our exposition on our universal construction, rather than underlying primitives, particularly as our construction can make use of a broad range of choices of such primitives. We define three oracles:

- **An oracle MIX.** It universally re-encrypts all ciphertexts on the bulletin board $BB$ and outputs back to $BB$ the new set of ciphertexts in a randomly permuted order. In practice, any mix network with public verifiability may be substituted for our oracle MIX.
- **An oracle POST that permits message posting.** This oracle requires a poster to submit a message, encryption factors and ciphertext. The oracle verifies that the message, encryption factors and ciphertext are elements of the appropriate groups. The oracle permits posting if the ciphertext is a valid encryption of the message with the given encryption factors. Note that the oracle POST may be regarded as simulating a proof of knowledge of the plaintext and the encryption factor and a verification thereof. In practice, it could be instantiated with standard discrete-log-based proofs of knowledge, e.g., [8], in either their interactive or non-interactive forms.
- **An oracle RETRIEVE that permits message retrieval.** The oracle takes a private key and ciphertext from a user. The oracle verifies that the private key and ciphertext are elements of the appropriate groups. The user is allowed to remove the ciphertext if it is encrypted under the private key. Recall that our construction of universal encryption based on ElGamal ensures a binding between ciphertexts and keys, so that a given ciphertext can be decrypted only under one given key. The oracle RETRIEVE, like POST, abstracts away a proof of knowledge of the plaintext.

We define communication privacy in terms of an experiment $\mathbf{Exp}^{comm-priv}$ defined as follows. The adversary may make an arbitrary number of calls to any of the oracles RETRIEVE, MIX, or POST and may order these calls as desired. We enumerate the first several steps here for reference in our proof.

Experiment $\mathbf{Exp}^{comm-priv}_{\mathcal{A}}(UCS, k)$:
1. $PK_0 \leftarrow UKG$; $PK_1 \leftarrow UKG$;
2. $(m_0, m_1) \leftarrow \mathcal{A}(PK_0, PK_1, \text{"specify plaintexts"})$;
3. $b \in_U \{0, 1\}$;
4. $C'_0 = UE_{PK_b}(m_b)$ and $C'_1 = UE_{PK_{1-b}}(m_{1-b})$ appended to $BB$;
5. MIX invoked;
6. $\mathcal{A}(BB)$;
7. $L \leftarrow \{C \in BB \text{ s.t. } C \text{ is a valid ciphertext under } PK_0\}$;
8. $b' \leftarrow \mathcal{A}(L, \text{"guess } b\text{"})$;
if $b = b'$ then output '1'; else output '0';

An intuitive description of this experiment is as follows. Alice and Bob wish each to transmit a single message to one of Cathy and Dario, who possess public keys $PK_0$ and $PK_1$ respectively. Our aim is to ensure that the adversary cannot tell whether Alice is sending a message to Cathy or Dario, and likewise to whom Bob is transmitting. The adversary is given the special (strong) power of determining which plaintexts, $m_0$ and $m_1$, are to be received by Cathy and Dario. The adversary observes Alice posting ciphertext $C'_0$ and Bob posting ciphertext $C'_1$, but does not know which ciphertext is for Cathy and which is for Dario. The bulletin board is then subjected to a mixing operation so as to conceal the communication pattern. The adversary may subsequently control when and how the mix network is invoked, and may place its own ciphertexts on the bulletin board. Finally, at the end of the experiment, the adversary is given a list $L$ of all ciphertexts encrypted under $PK_0$, i.e., all the messages that Cathy retrieves. This list $L$ will include the one such message posted by Alice or Bob in addition to all messages encrypted under $PK_0$ and posted by the adversary. The task of the adversary is to guess whether it was Alice who sent a message to Cathy (case $b = 0$) or Bob (case $b = 1$).

**Definition 1. (Communication privacy)** We say that a universal mixnet for $UCS$ possesses communication privacy if for any adversary $\mathcal{A}$ that is polynomial time in $k$, we have that $\Pr[\mathbf{Exp}^{comm-priv}_{\mathcal{A}}(UCS, k) = 1] - 1/2$ is negligible in $k$.

**Theorem 1.** Our universal mixnet possesses communication privacy provided that $UCS$ has universal semantic security under re-encryption. For our described construction involving ElGamal, privacy may consequently be reduced to the DDH assumption over $\mathcal{G}$.

**Proof:** Assume that we have an adversary $\mathcal{A}$ for which $\Pr[\mathbf{Exp}^{comm-priv}_{\mathcal{A}}(UCS, k) = 1] - 1/2$ is non-negligible in $k$. We build a new adversary $\mathcal{A}'$ which uses $\mathcal{A}$ as a subroutine and for which $\Pr[\mathbf{Exp}^{uss}_{\mathcal{A}'}(UCS, k) = \text{'1'}] - 1/2$ is non-negligible in $k$ (i.e., $\mathcal{A}'$ breaks the universal semantic security of the underlying encryption scheme). $\mathcal{A}'$ operates as follows:

- At the beginning of the experiment $\mathbf{Exp}^{uss}$, $\mathcal{A}'$ is given two public keys $PK_0$ and $PK_1$. $\mathcal{A}'$ gives these two keys to $\mathcal{A}$. This simulates step 1 of $\mathbf{Exp}^{comm-priv}$.
- When $\mathcal{A}$ calls one of the oracles POST, MIX or RETRIEVE, $\mathcal{A}'$ can trivially simulate the oracle for the requested operation for $\mathcal{A}$.
- In step 2 of experiment $\mathbf{Exp}^{comm-priv}$, $\mathcal{A}$ specifies plaintexts $m_0$ and $m_1$. $\mathcal{A}'$ selects random encryption factors $r_0$ and $r_1$ and computes $C_0 = UE_{PK_0}(m_0, r_0)$ and $C_1 = UE_{PK_1}(m_1, r_1)$. $\mathcal{A}'$ submits these in the second step of experiment $\mathbf{Exp}^{uss}$. $\mathcal{A}'$ then receives as input from experiment $\mathbf{Exp}^{uss}$ two new ciphertexts $C'_0$ and $C'_1$.
- In step 4 of $\mathbf{Exp}^{comm-priv}$, $\mathcal{A}'$ posts $C'_0$ and $C'_1$ to the bulletin board.
- In step 7 of $\mathbf{Exp}^{comm-priv}$, $\mathcal{A}'$ must identify the set of outputs encrypted under $PK_0$. Note that $\mathcal{A}'$ can easily identify among the outputs that correspond to inputs originally submitted by $\mathcal{A}$ those encrypted under $PK_0$, since it controls the oracles POST and MIX. The only difficulty is for $\mathcal{A}'$ to decide which of $C'_0$ and $C'_1$ is encrypted under $PK_0$ and which under $PK_1$. Since $\mathcal{A}'$ doesn't know that, it arbitrarily assigns $C'_0$ to the list $L$ of ciphertexts encrypted under $PK_0$.

In the last step of the simulation, $\mathcal{A}'$ assigns $C'_0$ arbitrarily to $L$. We claim that if $\mathcal{A}$ can distinguish between the case where this assignment to $L$ is correct and the case where it is incorrect, then $\mathcal{A}$ can be used to break universal semantic security in $\mathbf{Exp}^{uss}$. This may be achieved with a small modification of our simulation as follows: (1) $\mathcal{A}'$ lets $C'_0 = C_0$ and $C'_1 = C_1$, but invokes $\mathbf{Exp}^{uss}$ on the pair $(C'_0, C'_1)$ during the mixing operation in step 5, and (2) $\mathcal{A}'$ submits to $\mathbf{Exp}^{uss}$ the bit $b'$ yielded by $\mathcal{A}$ at the end of the experiment. Let us assume, therefore, that the assignment to $L$ is correct. Given this, when $\mathcal{A}$ outputs its guess $b'$, $\mathcal{A}'$ then outputs the same bit $b'$ as its guess for the experiment $\mathbf{Exp}^{uss}$. It is clear now that when $\mathcal{A}$ guesses correctly, so does $\mathcal{A}'$. This concludes our proof. □

**Security of UCS and chosen-ciphertext attacks.** The cryptosystem $UCS$ we employ here inherits the semantic security property of the underlying ElGamal cipher under the DDH assumption. This property is critical to our definition of communications privacy. Our model for communications privacy makes one simplifying assumption that must be noted, though: we assume that the adversary does not learn any information about plaintexts. For this reason, we do not require adaptive chosen-ciphertext (CCA) security of our cryptosystem. In fact, we cannot achieve CCA security in the strictest sense in our system: in order to permit re-encryption, ciphertexts must be malleable. Note, however, that because of the need to demonstrate knowledge of the plaintext and encryption factors in the POST operation, it is infeasible for an adversary to re-post a message or to post a new message with a related plaintext. On the other hand, there may be circumstances in which an adversary may indeed learn information about plaintexts in our system. To show this in a formal sense, however, it would be necessary to modify our universal cryptosystem so as to achieve CCA security with _benign malleability_, as defined by Shoup [24]. In Shoup's terminology, we would need to require an induced compatible relation of plaintext equivalence by formatting plaintexts with appropriate padding. We omit detailed discussion of this topic, however, in this paper. An adversary that can gain significant information about received messages can, after all, break the basic privacy guarantees of the system.

## 6 Hybrid universal mixing

We describe next a variant mixnet called a hybrid universal mixnet. This type of mixnet combines symmetric and public-key encryption to accommodate potentially very long messages (all of the same size) in an efficient manner. We refer the interested reader to [18, 14] for definitions and examples of hybrid mixnets.

Our definition of a universal hybrid mix considers a weaker threat model than above with respect to correctness. Our universal hybrid mix cannot be verified to correctly execute the protocol because of the use we make of symmetric encryption. Thus, we restrict our security model to mix servers subject only to passive adversarial corruption. Such servers are also known as honest-but-curious. They follow the protocol correctly but try to learn as much information as possible from its execution.

For efficiency, inputs $m$ are submitted to a hybrid mix encrypted under an initial symmetric (rather than public) key. We denote by $\epsilon_k[m]$ the symmetric-key encryption of $m$ under key $k$. Each mix server $S_i$ consecutively re-encrypts the output of the previous mix under a new random symmetric key $k_i$. If there are $n$ mix servers, the final output of the mix is therefore $\epsilon_{k_n}[\epsilon_{k_{n-1}}[\ldots \epsilon_{k_1}[\epsilon_k[m]] \ldots]]$. The symmetric keys $k, k_1, \ldots, k_n$ must be conveyed alongside the encrypted message to enable decryption by the final recipient. These keys are themselves encrypted as universal ciphertexts under the public key of the recipient. Universal encryption provides a very efficient way of transmitting encryptions of the symmetric keys in a way that does not compromise privacy.

Let us now give a more detailed definition of our hybrid universal mixnet. Our construction imposes an upper bound $n$ on the maximum number of times that the mixing operation is performed by the mixnet on any given ciphertext. The protocol consists of the following steps (a small sketch of the layering bookkeeping appears after the remark below):

1. **Submission of inputs.** An input ciphertext takes the form $(\epsilon_{k_0}[m],\ E[1],\ (E[k_0], E[1], \ldots, E[1]))$, where $\epsilon_{k_0}[m]$ denotes symmetric-key encryption of $m$ under key $k_0$. This is followed by an encryption of 1, and by a vector of ciphertexts on keys, where only the first element is filled in (with $k_0$), leaving the remaining $n - 1$ elements as encryptions of 1.
2. **Universal mixing.** The $i$-th server to perform the mixing operation does the following for each of the ciphertexts on the bulletin board:
   - generates a random symmetric key $k_i$;
   - adds a new layer of symmetric encryption to $m$ under key $k_i$;
   - uses the second element, $E[1]$, to compute an encryption of $k_i$, call this $E[k_i]$;
   - rotates the elements of the vector one step leftwards, then substituting the first element with $E[k_i]$; and
   - re-encrypts the second element and each element of the vector.
   When it has thus processed all its inputs in this manner, the server outputs them back to the bulletin board in a random order.
3. **Retrieval of the outputs.** At the end of $d \leq n$ mixing operations, the final output of the mixnet assumes the form $(\epsilon_{k_d}[\epsilon_{k_{d-1}}[\ldots \epsilon_{k_0}[m]] \ldots],\ E[1],\ (\{E[1]\}^{n-d}, E[k_0], \ldots, E[k_d]))$, where $\{E[1]\}^{n-d}$ denotes $n - d$ ElGamal ciphertexts on the identity element. As before, recipients try to decrypt every output of the mixnet and discard those outputs for which the decryption fails. Only the second element, $E[1]$, however, has to be decrypted in order for a party to determine whether the ciphertext is intended for her.

**Remark:** In principle, it is possible to use the "blank" ciphertext $E[1]$ to append ciphertexts on as many symmetric keys as desired, and thus re-encrypt indefinitely. The reason for restricting the number of "blank" ciphertexts to exactly $n$ is to preserve a uniform length, without which an adversary can distinguish among ciphertexts that have undergone differing numbers of re-encryptions. A drawback of this approach is that a ciphertext re-encrypted more than $n$ times will become undecipherable by the receiver. Given enough messages, it is alternatively possible to permit messages to grow in size according to their "ages", i.e., the number of re-encryptions they have undergone, and to pool them accordingly.
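The layering and key-vector bookkeeping above is easy to mistranscribe, so here is a small self-contained Python sketch of it, under two loud assumptions: the symmetric cipher is a toy SHA-256 counter-mode keystream standing in for a real cipher, and the key vector holds raw keys rather than the universal ElGamal ciphertexts $E[k_i]$ of the actual construction (the earlier sketch shows how those would be produced). The orientation of the rotation is likewise one concrete choice; only the nesting logic is the point here.

```python
import hashlib
import os
import random

N_LAYERS = 3   # assumed bound n on the number of mixing operations

def xor_layer(key: bytes, data: bytes) -> bytes:
    """Toy SHA-256 counter-mode keystream; XOR makes it self-inverse."""
    stream = b""
    ctr = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return bytes(d ^ s for d, s in zip(data, stream))

def submit(msg: bytes):
    """Sender: one symmetric layer under k0; the key vector is padded
    with blanks (None stands in for the re-encryptable E[1] slots)."""
    k0 = os.urandom(16)
    return xor_layer(k0, msg), [None] * (N_LAYERS - 1) + [k0]

def mix_server(batch):
    """One server pass: add a layer, shift the key vector, shuffle."""
    out = []
    for body, keys in batch:
        ki = os.urandom(16)
        body = xor_layer(ki, body)
        keys = keys[1:] + [ki]        # drop one blank, append the new key
        out.append((body, keys))
    random.shuffle(out)
    return out

def open_message(body, keys):
    """Recipient: strip layers from the newest key back to k0."""
    for k in reversed([k for k in keys if k is not None]):
        body = xor_layer(k, body)
    return body

batch = [submit(b"attack at dawn"), submit(b"all quiet today")]
for _ in range(N_LAYERS - 1):         # at most n-1 further layers fit
    batch = mix_server(batch)
body, keys = batch[0]
print(open_message(body, keys))       # one of the two original messages
```

After $d$ passes the fixed-length vector holds blanks followed by $k_0, \ldots, k_d$ in order, mirroring (up to the orientation choice) the uniform-length output form in step 3 above.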
## 7 Conclusion

Universal re-encryption represents a simple modification to the basic ElGamal cryptosystem that permits re-randomization of ciphertexts without knowledge of the corresponding public key. This provides a valuable tool, as we show, for the construction of privacy-preserving architectures that dispense with the complications and risks of distributed key setup and management. The costs for the basic universal cryptosystem are only twice those of ordinary ElGamal. On the other hand, the problem of receiver costs in a universal mixnet presents a compelling line of further research. In the construction we have proposed, a receiver must perform a linear number of decryptions to identify messages intended for her. A method for reducing this cost would be appealing from both a technical and practical standpoint.

## References

1. M. Abe. Mix-networks on permutation networks. In K-Y. Lam, E. Okamoto, and C. Xing, editors, ASIACRYPT '99, volume 1716 of Lecture Notes in Computer Science, pages 258–273. Springer-Verlag, 1999.
2. M. Abe and F. Hoshino. Remarks on mix-networks based on permutation networks. In PKC '01, pages 317–324. Springer-Verlag, 2001. LNCS no. 1992.
3. M. Bellare, A. Boldyreva, A. Desai, and D. Pointcheval. Key-privacy in public-key encryption. In C. Boyd, editor, ASIACRYPT '01, pages 566–582, 2001. LNCS no. 2248.
4. D. Boneh. The Decision Diffie-Hellman problem. In ANTS '98, pages 48–63. Springer-Verlag, 1998. LNCS no. 1423.
5. D. Chaum. Untraceable electronic mail, return addresses, and digital pseudonyms. Communications of the ACM, 24(2):84–88, 1981.
6. L. Cottrell. Mixmaster & remailer attacks, 1995. http://www.obscura.com/~loki/remailer/remailer-essay.html.
7. G. Danezis, 2002. Personal communication.
8. A. de Santis, G. di Crescenzo, G. Persiano, and M. Yung. On monotone formula closure of SZK. In FOCS '94, pages 454–465. IEEE Press, 1994.
9. E. Franz, A. Graubner, A. Jerichow, and A. Pfitzmann. Comparison of commitment schemes used in mix-mediated anonymous communication for preventing pool-mode attacks. In C. Boyd and E. Dawson, editors, ACISP '98, pages 111–122. Springer-Verlag, 1998. LNCS no. 1438.
10. J. Furukawa and K. Sako. An efficient scheme for proving a shuffle. In J. Kilian, editor, CRYPTO '01, volume 2139 of Lecture Notes in Computer Science, pages 368–387. Springer-Verlag, 2001.
11. T. El Gamal. A public key cryptosystem and a signature scheme based on discrete logarithms. IEEE Transactions on Information Theory, 31:469–472, 1985.
12. S. Goldwasser and S. Micali. Probabilistic encryption. J. Comp. Sys. Sci., 28(1):270–299, 1984.
13. M. Jakobsson and A. Juels. Millimix: Mixing in small batches, June 1999. DIMACS Technical Report 99-33.
14. M. Jakobsson and A. Juels. An optimally robust hybrid mix network. In PODC '01, pages 284–292. ACM Press, 2001.
15. M. Jakobsson, A. Juels, and R. Rivest. Making mix nets robust for electronic voting by randomized partial checking. In D. Boneh, editor, USENIX '02, pages 339–353, 2002.
16. A. Juels and R. Pappu. Squealing euros: Privacy protection in RFID-enabled banknotes. In R. Wright, editor, Financial Cryptography 2003, 2003. To appear.
17. A. Neff. A verifiable secret shuffle and its application to e-voting. In P. Samarati, editor, ACM CCS '01, pages 116–125. ACM Press, 2001.
18. M. Ohkubo and M. Abe. A length-invariant hybrid mix. In T. Okamoto, editor, ASIACRYPT '00, volume 1976 of Lecture Notes in Computer Science, pages 178–191. Springer-Verlag, 2000.
19. M. Reed, P. Syverson, and D. Goldschlag. Protocols using anonymous connections: mobile applications. In Security Protocols '97, pages 13–23. Springer-Verlag, 1997. LNCS 1361.
20. S. Sarma. Towards the five-cent tag. Technical Report MIT-AUTOID-WH-006, MIT Auto ID Center, 2001. Available from http://www.autoidcenter.org/.
21. S. Sarma. Radio-frequency identification systems. In B. Kaliski, editor, CHES '02. Springer-Verlag, 2002. To appear.
22. C.-P. Schnorr. Efficient signature generation by smart cards. Journal of Cryptology, 4(3):161–174, 1991.
23. A. Serjantov, R. Dingledine, and P. Syverson. From a trickle to a flood: active attacks on several mix types. In Information Hiding '02, pages 36–52. Springer-Verlag, 2002. LNCS no. 2578.
24. V. Shoup. A proposal for an ISO standard for public key encryption (version 2.1), 20 December 2001. Manuscript.
25. Y. Tsiounis and M. Yung. On the security of ElGamal-based encryption. In Workshop on Practice and Theory in Public Key Cryptography (PKC '98), pages 117–134. Springer, 1998. LNCS no. 1431.
26. J. Yoshida. Euro bank notes to embed RFID chips by 2005. EE Times, 19 December 2001. Available at http://www.eetimes.com/story/OEG20011219S0016.
{ "disclaimer": "Notice: The following paper fields have been elided by the publisher: {'abstract', 'references'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1007/978-3-540-24660-2_14?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1007/978-3-540-24660-2_14, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "CLOSED", "url": "" }
2,004
[ "JournalArticle" ]
false
2004-02-23T00:00:00
[]
12,719
en
[ { "category": "Engineering", "source": "external" }, { "category": "Engineering", "source": "s2-fos-model" }, { "category": "Environmental Science", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/0244a3e459fe7b78debc2ee71bbf73514ca013bd
[ "Engineering" ]
0.820646
Distributed control and energy storage requirements of networked Dc microgrids
0244a3e459fe7b78debc2ee71bbf73514ca013bd
[ { "authorId": "145219251", "name": "W. Weaver" }, { "authorId": "143767900", "name": "R. Robinett" }, { "authorId": "3129393", "name": "G. Parker" }, { "authorId": "40213827", "name": "D. Wilson" } ]
{ "alternate_issns": null, "alternate_names": null, "alternate_urls": null, "id": null, "issn": null, "name": null, "type": null, "url": null }
null
# Distributed Control and Energy Storage Requirements of Networked Dc Microgrids

Wayne W. Weaver^a, Rush D. Robinett, III^a, Gordon G. Parker^a, David G. Wilson^b

^a Michigan Technological University, 1400 Townsend Dr., Houghton, Michigan, 49931, USA
^b Sandia National Laboratories, P.O. Box 5800, Albuquerque, New Mexico, 87185, USA

**Abstract**

Microgrids are a key technology to help improve the reliability of electric power systems and increase the integration of renewable energy sources. Interconnection and networking of smaller microgrids into larger systems have potential for even further improvements. This paper presents a novel approach to distributed droop control and energy storage in networked dc microgrids. Distributed control is necessary to prevent single points of failure, along with flexibility and adaptability to changing energy resources. The results show that, for systems with random sources and fast update rates, a networked microgrid structure can minimize the required energy storage.

_Keywords:_ microgrid, distributed control, energy storage, optimization, power electronics

_Email addresses:_ wwweaver@mtu.edu (Wayne W. Weaver), rdrobine@mtu.edu (Rush D. Robinett, III), ggparker@mtu.edu (Gordon G. Parker), dwilso@sandia.gov (David G. Wilson)

_Preprint submitted to Control Engineering Practice, July 6, 2015_

**1. Introduction**

A microgrid is a collection of energy resources on a common network. These resources include generation, conversion, loads and storage devices [1]. The model of centralized generation is gradually being replaced by a distributed generation model [2]. The emerging technologies in renewable and distributed generation can have lower emissions and cost. The microgrid concept gives a solution for integration of a large number of distributed generations without causing disruption in the utility network. Microgrids also allow for local control of the distributed generation units and provide the flexibility to operate autonomously during disturbances in the utility network to increase reliability [3, 4, 5]. In addition, the interconnection and networking of groups of microgrids can reduce the energy storage requirements. However, the interconnections and power flow control between microgrids increase costs, complexity and failure modes [6].

One of the main challenges for microgrid design and control is that generation capacity is very close to load demand. In addition, with the stochastic nature of most renewable energy sources there is a need for energy storage [7, 8, 9]. Energy storage can mitigate both long-term and short-term system transients. For example, a long-term transient would be the generation variations over hours and days from a wind turbine or photo-voltaic array due to weather patterns. Short-term transients could include step changes in load or faults in the system where the response is on the order of seconds or fractions of a second. Therefore, a proper energy storage strategy will include devices that can respond at the proper bandwidth of system transients.

Within microgrids, there are many approaches to the control and optimization of each element. A centralized approach is able to reach higher levels of performance at the cost of single points of failure and lack of flexibility. A distributed and de-centralized approach allows a very flexible system that can adapt to changing system structures and situations. A typical approach to distributed control is droop control [10, 11]. In dc microgrids, droop control is equivalent to creating a virtual impedance between the source and the bus such that the total load current is distributed to the sources based on the weighted sum of droop settings. The standard way to implement droop control is through duty cycle control of the dc/dc converter interface to the bus.

Since many renewable sources are dc, such as photovoltaics, they require additional power conversion to connect to an ac system. In addition, most electronic loads require a dc power conversion step and many energy storage technologies, such as batteries or super-capacitors, are also dc. Therefore, a dc system is a viable option for power distribution in microgrids [12, 13]. However, a dc microgrid with a high penetration of renewable sources can require large energy storage capacity to maintain the system and to mitigate variations in the sources [14].

This paper presents an alternative approach that uses the local energy storage device at the source to actuate the droop control in local and networked microgrids. The duty cycles of the converters are updated on a periodic interval to only match the source voltage to the bus. This approach allows the requirements for energy storage in capacity and bandwidth to be studied and designed with variations in the renewable energy sources and load. The novelty of this approach is that the system is not actuated through the typical approach of feed-back control of the duty cycle of the dc-dc converter, but through feed-back control of the storage devices in the system. Further, the feed-back process of the energy storage actuation is implemented through a distributed droop control. The duty cycles of the dc-dc converters are only updated on periodic intervals through a feed-forward process.

The paper will first present the electrical system model of the dc-dc boost converter, energy storage devices and microgrid structure. Next, the controls are developed for the feed-forward control of the duty cycles and the feed-back control of the energy storage devices. Then, the distributed droop control is shown. Finally, the feed-forward, feed-back and distributed droop are demonstrated through simulation of several operational scenarios.

**2. Dc-dc converter and microgrid model with energy storage**

In dc microgrids the interface to the distribution bus is through dc to dc converters. If the bus voltage is higher than the sources, the interface converter will be in the form of a boost converter. In this paper, the sources will be paired with an energy storage device and therefore the boost converter will be bi-directional.

_2.1. Dc-dc Converter Model_

A bidirectional dc-dc converter circuit is shown in Fig. 1. The converter is implemented with a power pole of two power MOSFETs, which enable forward blocking and reverse current [15].

Figure 1: A bi-directional dc-dc converter.

In this configuration the top switch state is defined as $q$ while the bottom switch is $1 - q$; the dynamic equations for the converter are

$$L \frac{di_L}{dt} = v_s - i_L R_L - q(t)\, v_C \quad (1)$$

$$C_B \frac{dv_C}{dt} = q(t)\, i_L - i_{Load}. \quad (2)$$

Formally, when $i_L > 0$ this converter is a boost converter and when $i_L < 0$ it is a buck converter. However, in this paper we will refer to the circuit as a boost, even though it can have bidirectional current. The time average of the switch state is found from

$$\lambda = \frac{1}{T_{sw}} \int_{t - T_{sw}}^{t} q(t)\, dt \quad (3)$$

where $T_{sw}$ is the switching period.
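As a quick numerical check of (3), the sketch below samples one switching period of an ideal PWM switch waveform and recovers the commanded duty ratio as the time average of $q(t)$; the period, sample step, and duty value are illustrative assumptions.

```python
# Discrete check of (3): lambda is the one-period time average of q(t).
Tsw, dt = 1e-4, 1e-7        # switching period and sample step [s]
duty = 0.35                 # commanded duty ratio (illustrative)
samples = int(Tsw / dt)
q = [1 if (i * dt) % Tsw < duty * Tsw else 0 for i in range(samples)]
lam = sum(q) * dt / Tsw     # discrete form of (3)
print(lam)                  # -> 0.35 (up to one-sample quantization)
```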
The dynamic average model of the converter [16] is

$$L \frac{di_L}{dt} = v_s - i_L R_L - \lambda v_C \tag{4}$$

$$C_B \frac{dv_C}{dt} = \lambda i_L - i_{Load}. \tag{5}$$

This model of the converter ensures two-quadrant operation because of the choice of the switches in Fig. 1, as $v_C > v_s$ and the current $i_L$ is bidirectional.

_2.2. Source and storage model_

Consider the bus interface boost converter shown in Fig. 2, which has two voltage sources. The voltage source $v_v$ represents an energy source such as a generator or photo-voltaic panel. The source $v_u$ represents an energy storage device such as a battery or capacitor. Both voltage sources include a series equivalent resistance $R_v$.

Figure 1: A bi-directional dc-dc converter.

Both the source and storage model in Fig. 2 represent a Thevenin equivalent of a source and a second-stage converter such that the output terminal voltages are controllable. Both the source and storage converters will contribute current to the inductor of the bus interface boost converter. The total inductor current is

$$i_L = \frac{v_u - v_L}{R_u} + \frac{v_v - v_L}{R_v} \tag{6}$$

and the node voltage $v_L$ is

$$v_L = v_v \frac{R_u}{R_u + R_v} + v_u \frac{R_v}{R_u + R_v} - i_L \frac{R_u R_v}{R_u + R_v}. \tag{7}$$

It is seen in (7) that the total voltage is a sum of two series sources and a line impedance. The voltages and resistances in (7) can be lumped into the new variables

$$v = v_v \frac{R_u}{R_u + R_v} \tag{8}$$

$$u = v_u \frac{R_v}{R_u + R_v} \tag{9}$$

$$R_L = \frac{R_u R_v}{R_u + R_v}. \tag{10}$$

Figure 2: A boost converter model with two parallel voltage sources, where $v_v$ represents an energy source and $v_u$ represents an energy storage device.

The boost converter with energy source and storage devices can be modeled as a series combination, in the new variables, of voltage sources and a Thevenin resistance, as shown in the boost converter models of Fig. 3, with the average-mode [16] dynamic model

$$L \frac{di_L}{dt} = -\lambda v_B - i_L R_L + u + v \tag{11}$$

$$C_B \frac{dv_B}{dt} = \lambda i_L - \frac{v_B}{R_B} \tag{12}$$

where $\lambda$ represents the duty cycle of the converter.

_2.3. Microgrid model_

A simple dc microgrid model is shown in Fig. 3, where N boost converters have the series model derived in section 2.2 for the source and local energy storage devices.

Figure 3: Microgrid of equivalent series boost converters.

The model of the microgrid in Fig. 3 has the dynamic equations

$$L_i \frac{di_{L,i}}{dt} = -\lambda_i v_B - i_{L,i} R_{L,i} + u_i + v_i, \quad i = 1..N \tag{13}$$

$$C_B \frac{dv_B}{dt} = \sum_{i=1}^{N} \lambda_i i_{L,i} - \frac{v_B}{R_B} + u_B \tag{14}$$

where $u_B$ represents a centralized bus energy storage device. In implementation, $u_B$ would be an energy storage device, such as a battery or ultra-capacitor, interfaced to the bus through a dc to dc power converter. However, for this study it is important to represent the energy storage devices as ideal sources that are controlled to respond to system dynamics. In this way the storage requirements, such as total energy delivered or absorbed and response bandwidth, can be determined. The microgrid load is modeled as a simple bus resistor and capacitor $R_B$ and $C_B$, so each microgrid will have $N + 1$ states. The next challenge is to control the boost converters in such a way that the load current is shared between the sources; a minimal numerical sketch of the model (13)-(14) follows.
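The sketch below integrates the open-loop averaged microgrid model (13)-(14) for N = 2 identical sources with fixed, equal duty cycles and no storage action. All parameter values are illustrative assumptions chosen for this sketch, not the paper's simulation settings.

```python
import numpy as np

# Assumed parameters for an N = 2 source microgrid (illustrative only)
N = 2
L_i  = np.array([1e-3, 1e-3])     # converter inductances (H)
RL_i = np.array([0.1, 0.1])       # converter series resistances (ohm)
lam  = np.array([0.5, 0.5])       # fixed duty cycles
v_i  = np.array([48.0, 48.0])     # source voltages (V)
u_i  = np.zeros(N)                # local storage voltages (inactive here)
CB, RB, uB = 1e-3, 10.0, 0.0      # bus capacitance, load resistance, bus storage

iL = np.zeros(N)
vB = 96.0
dt = 1e-6
for _ in range(50_000):
    diL = (-lam * vB - iL * RL_i + u_i + v_i) / L_i   # Eq. (13)
    dvB = (np.sum(lam * iL) - vB / RB + uB) / CB      # Eq. (14)
    iL, vB = iL + dt * diL, vB + dt * dvB

print(f"bus voltage ~ {vB:.1f} V, inductor currents ~ {np.round(iL, 2)} A")
```

With identical sources and duty cycles the load current is shared equally; the control problem addressed in the remainder of the paper is to enforce a prescribed sharing when the sources differ and vary.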
In addition, because of highly variable sources $v_i$, which is indicative of renewable energy sources, storage at the local converters and central bus is required to maintain a nominal operating point of the bus voltage. In general, the optimal distribution, capacity and bandwidth of the energy storage devices is not well understood, especially for a distributed control architecture.

_2.4. Networked microgrid model_

Each microgrid as depicted in Fig. 3 is a self-contained power system. However, due to the lack of diversity and inertia in the microgrid, its stability, reliability and flexibility may not be optimal. If multiple microgrids are interconnected, then it may be possible to improve the reliability, flexibility and stability of the overall system. However, energy storage remains a crucial element in the system. In addition, as the network of microgrids becomes larger, a distributed control architecture becomes even more attractive, given that there is no need for a communication infrastructure or central controller. However, it is important to understand the stability margins and energy storage requirements in such a system.

Given a known set of sources and loads, there are numerous permutations of networked microgrids. In general, a networked microgrid will have a bus bar voltage vector **V**, an injected current vector **I** and an interconnection admittance matrix **Y**. Then, the following matrix relationship can be written

$$\mathbf{I} = \mathbf{Y}\mathbf{V} \tag{15}$$

where all injected current and voltage vectors are $k \times 1$, with k representing the number of bus bars. The bus admittance matrix **Y** is symmetric in most cases and is a function of line admittances and shunt load resistances of the microgrid. The solution of the nodal equations in (15) follows the methodology discussed in [17]. In this paper, a simple common-bus interconnected microgrid structure will be used.

Consider the networked microgrid configuration shown in Fig. 4, where there are N sub-microgrids connected to a common bus. Each microgrid has a bus storage element $u_{B,k}$, an RC load, and M source converters. Power flow for interconnection of the buses is achieved through an interconnection converter from bus k to network n, denoted $k \to n$, with local energy storage $u_{k\to net}$. In the configuration shown in Fig. 4, the network bus net must be at a higher voltage than any of the lower microgrids 1..N. In this configuration, power can directly flow from one microgrid to another. The networked microgrid of Fig. 4 will have the system of equations

$$L_{k,m} \frac{di_{L,k,m}}{dt} = -\lambda_{k,m} v_{B,k} - i_{L,k,m} R_{L,k,m} + u_{k,m} + v_{k,m}, \quad k = 1..N, \; m = 1..M \tag{16}$$

$$C_{B,k} \frac{dv_{B,k}}{dt} = \sum_{m=1}^{M} \lambda_{k,m} i_{L,k,m} - i_{L,k\to net} - \frac{v_{B,k}}{R_{B,k}} + u_{B,k}, \quad k = 1..N \tag{17}$$

Figure 4: Network of k microgrids connected to a common bus. Microgrid 1 has i source converters and microgrid k has m source converters.

$$L_{k\to net} \frac{di_{L,k\to net}}{dt} = -\lambda_{k\to net} v_{B,net} - i_{L,k\to net} R_{L,k\to net} + u_{k\to net} + v_{B,k}, \quad k = 1..N \tag{18}$$

$$C_{B,net} \frac{dv_{B,net}}{dt} = \sum_{i=1}^{N} \lambda_{i\to net}\, i_{L,i\to net} - \frac{v_{B,net}}{R_{B,net}} + u_{B,net}. \tag{19}$$

**3. Feed-forward duty cycle and feed-back energy storage control**

For the control development, consider the microgrid structure in Fig. 4 with 2 sub-microgrids (N = 2), where both microgrids have 2 boost converter sources (M = 2). Then the dynamic system equations (16)-(19) will have $N(2 + M) + 1 = 9$ states.
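A small bookkeeping sketch confirms the state count; the state names here are this sketch's own labels, chosen to match the notation introduced below.

```python
# Enumerate the states of (16)-(19) for N = 2 sub-microgrids, M = 2 sources each
N, M = 2, 2
states = (
    [f"iL_{k}_{m}" for k in range(1, N + 1) for m in range(1, M + 1)]  # source-converter currents
    + [f"iL_{k}_net" for k in range(1, N + 1)]                         # interconnection currents
    + [f"vB_{k}" for k in range(1, N + 1)] + ["vB_net"]                # bus voltages
)
assert len(states) == N * (2 + M) + 1  # = 9
```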
In this paper, the duty cycles, $\lambda$, will only be updated on the interval $\Delta t_\lambda$, and held constant otherwise. This zero-order-hold approach will mimic the effect of a discrete digital control system that may have limited computational power. Then, by using the zero-order-hold approach, the effects of digital control and information flow on the performance of the system can be studied. In this paper, it is desired to study a distributed control that will only require local information at the converter to determine the control action. However, a centralized approach based on system-wide information can also be used [3, 9, 6]. The updated duty cycle commands are obtained from the steady-state solution of (16) and (18), with $u = 0$, such that

$$\lambda_{k,m} = \frac{1}{v_{B,k}} \left( v_{k,m} + i_{L,k,m} R_{L,k,m} \right), \quad k = 1..N, \; m = 1..M \tag{20}$$

$$\lambda_{k\to net} = \frac{1}{v_{B,net}} \left( v_{B,k} + i_{L,k\to net} R_{L,k\to net} \right), \quad k = 1..N. \tag{21}$$

The duty cycle update strategy for the source converters in (20) and the interface converters in (21) matches the high-side terminal voltage of the converter, the bus voltage, to the low-side source voltage minus the energy storage voltage. This forces the energy storage device back to a zero output condition. The feed-forward controls in (20) and (21) were chosen to remove the current load from the converter's energy storage device. Between updates of the feed-forward duty cycle command, the energy storage device actuates and enforces the reference command values. Since the boost converter duty cycle is held constant between feed-forward updates, and not driven through feed-back, the problem of non-minimum phase in boost converters is eliminated [18].
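The zero-order-hold update is simple enough to state directly in code. The following hedged sketch implements (20)-(21) from locally measured quantities; the function and variable names are this sketch's own conventions.

```python
# Zero-order-hold feed-forward duty update, Eqs. (20)-(21): recompute every
# dt_lambda seconds from local measurements and hold constant in between.
def update_duty_cycles(v_km, iL_km, RL_km, vB_k, iL_knet, RL_knet, vB_net):
    lam_km = (v_km + iL_km * RL_km) / vB_k           # Eq. (20): source converter
    lam_knet = (vB_k + iL_knet * RL_knet) / vB_net   # Eq. (21): interconnection converter
    return lam_km, lam_knet

dt_lambda = 0.1   # update interval (s), the value used in the paper's simulations
lam, lam_net = update_duty_cycles(48.0, 9.4, 0.1, 100.0, 1.0, 0.1, 200.0)
print(lam, lam_net)   # ~0.489 and ~0.5005 for these example measurements
```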
To model the system in more general terms, a change of notation for the states is

$$\begin{aligned}
(x_{11}, x_{12}, x_{21}, x_{22}) &= (i_{L,1,1},\, i_{L,1,2},\, i_{L,2,1},\, i_{L,2,2}) \\
(x_{13}, x_{23}) &= (i_{L,1\to net},\, i_{L,2\to net}) \\
(x_1, x_2, x_3) &= (v_{B,1},\, v_{B,2},\, v_{B,net}).
\end{aligned} \tag{22}$$

Then the state equations can be written in the compact form

$$\mathbf{M}\dot{\mathbf{x}} = \mathbf{R}\mathbf{x} + \mathbf{u} + \mathbf{v} = \left(\mathbf{R} + \tilde{\mathbf{R}}\right)\mathbf{x} + \mathbf{u} + \mathbf{v} \tag{23}$$

where

$$\mathbf{x} = [x_{11}, x_{12}, x_{21}, x_{22}, x_{13}, x_{23}, x_1, x_2, x_3]^{T} \tag{24}$$

$$\mathbf{v} = [v_{11}, v_{12}, v_{21}, v_{22}, 0, 0, 0, 0, 0]^{T} \tag{25}$$

$$\mathbf{u} = [u_{11}, u_{12}, u_{21}, u_{22}, u_{1\to 3}, u_{2\to 3}, u_{B1}, u_{B2}, u_{B3}]^{T}. \tag{26}$$

The matrices from (23) for the example in Fig. 4 are

$$\mathbf{M} = \mathrm{diag}\left(L_{11}, L_{12}, L_{21}, L_{22}, L_{13}, L_{23}, C_{B1}, C_{B2}, C_{B3}\right) \tag{27}$$

$$\mathbf{R} = \mathrm{diag}\left(-R_{L,11}, -R_{L,12}, -R_{L,21}, -R_{L,22}, -R_{L,13}, -R_{L,23}, -\tfrac{1}{R_{B1}}, -\tfrac{1}{R_{B2}}, -\tfrac{1}{R_{B3}}\right) \tag{28}$$

$$\tilde{\mathbf{R}} = \begin{bmatrix} \mathbf{0}_{6\times 6} & \boldsymbol{\Lambda} \\ -\boldsymbol{\Lambda}^{T} & \mathbf{0}_{3\times 3} \end{bmatrix}, \qquad
\boldsymbol{\Lambda} = \begin{bmatrix} -\lambda_{11} & 0 & 0 \\ -\lambda_{12} & 0 & 0 \\ 0 & -\lambda_{21} & 0 \\ 0 & -\lambda_{22} & 0 \\ 0 & 0 & -\lambda_{13} \\ 0 & 0 & -\lambda_{23} \end{bmatrix} \tag{29}$$

and the resistive matrices are shown in (28) and (29). Note that $\tilde{\mathbf{R}}$ is skew-symmetric, which is used below.

_3.1. Hamiltonian surface shaping power flow control_

A key element of the proposed distributed droop control is a PI control strategy based on a Hamiltonian Surface Shaping Power Flow Control (HSSPFC) approach [19]. The first step is to define an error state for (23)

$$\tilde{\mathbf{x}} = \mathbf{x}_{ref} - \mathbf{x} \tag{30}$$

where the reference state and control vectors are defined by

$$\mathbf{M}\dot{\mathbf{x}}_{ref} = \left(\mathbf{R} + \tilde{\mathbf{R}}\right)\mathbf{x}_{ref} + \mathbf{u}_{ref} + \mathbf{v}. \tag{31}$$

The reference vector $\mathbf{x}_{ref}$ is the set of nominal state variables. Some of the nominal values, such as the nominal bus voltage, are set by the system designer. The other reference states, such as the converter currents, are set through the droop control calculations, as will be discussed in section 4.

The next step is to define the Hamiltonian as

$$H = \frac{1}{2}\tilde{\mathbf{x}}^{T}\mathbf{M}\tilde{\mathbf{x}} + \frac{1}{2}\left(\int_{0}^{t}\tilde{\mathbf{x}}\, d\tau\right)^{T} \mathbf{K}_{I} \left(\int_{0}^{t}\tilde{\mathbf{x}}\, d\tau\right) \tag{32}$$

which is positive definite about $\tilde{\mathbf{x}} = 0$ for $\mathbf{M}$ and $\mathbf{K}_I$ positive definite; this is the static stability condition. The time derivative of (32) is

$$\begin{aligned}
\dot{H} &= \tilde{\mathbf{x}}^{T}\mathbf{M}\dot{\tilde{\mathbf{x}}} + \tilde{\mathbf{x}}^{T}\mathbf{K}_{I}\int_{0}^{t}\tilde{\mathbf{x}}\, d\tau \\
&= \tilde{\mathbf{x}}^{T}\left[\mathbf{M}\dot{\mathbf{x}}_{ref} - \mathbf{M}\dot{\mathbf{x}}\right] + \tilde{\mathbf{x}}^{T}\mathbf{K}_{I}\int_{0}^{t}\tilde{\mathbf{x}}\, d\tau \\
&= \tilde{\mathbf{x}}^{T}\left[\left(\mathbf{R}+\tilde{\mathbf{R}}\right)\mathbf{x}_{ref} + \mathbf{u}_{ref} + \mathbf{v} - \left(\mathbf{R}+\tilde{\mathbf{R}}\right)\mathbf{x} - \mathbf{u} - \mathbf{v}\right] + \tilde{\mathbf{x}}^{T}\mathbf{K}_{I}\int_{0}^{t}\tilde{\mathbf{x}}\, d\tau \\
&= \tilde{\mathbf{x}}^{T}\left(\mathbf{R}+\tilde{\mathbf{R}}\right)\tilde{\mathbf{x}} + \tilde{\mathbf{x}}^{T}\left(\mathbf{u}_{ref} - \mathbf{u}\right) + \tilde{\mathbf{x}}^{T}\mathbf{K}_{I}\int_{0}^{t}\tilde{\mathbf{x}}\, d\tau \\
&= \tilde{\mathbf{x}}^{T}\mathbf{R}\tilde{\mathbf{x}} + \tilde{\mathbf{x}}^{T}\Delta\mathbf{u} + \tilde{\mathbf{x}}^{T}\mathbf{K}_{I}\int_{0}^{t}\tilde{\mathbf{x}}\, d\tau
\end{aligned} \tag{33}$$

since $\tilde{\mathbf{x}}^{T}\tilde{\mathbf{R}}\tilde{\mathbf{x}} = 0$. Now, select a proportional-integral (PI) controller as

$$\Delta\mathbf{u} = -\mathbf{K}_{P}\tilde{\mathbf{x}} - \mathbf{K}_{I}\int_{0}^{t}\tilde{\mathbf{x}}\, d\tau \tag{34}$$

which gives

$$\mathbf{u} = \mathbf{u}_{ref} - \Delta\mathbf{u} \tag{35}$$

and

$$\dot{H} = -\tilde{\mathbf{x}}^{T}\left(\mathbf{K}_{P} - \mathbf{R}\right)\tilde{\mathbf{x}} < 0 \tag{36}$$

where $\mathbf{K}_P$ and $\mathbf{K}_I$ are positive definite controller gain matrices. Equation (36) provides a guideline for picking the gains of the PI controller to maintain stability and performance. For (36), $\dot{H}(\tilde{\mathbf{x}} = 0) = 0$ only. However, this only proves stability for the state variables $\tilde{\mathbf{x}}$. Since the control dynamics of $\int_0^t \tilde{\mathbf{x}}\, d\tau$ are not included, further analysis is needed to prove asymptotic stability.

_3.2. Feed-back control dynamics stability_

The stability of the feed-back PI control dynamics can be found from the higher-order derivatives of the Hamiltonian in (32) [20].

**Theorem.** _Assume there exists a Lyapunov function V(_**x**_) of the dynamical system_ $\dot{\mathbf{x}} = f(\mathbf{x})$_. Let_ $\Omega$ _be a non-empty set of the state vectors such that_

$$\mathbf{x} \in \Omega \;\Rightarrow\; \dot{V}(\mathbf{x}) = 0. \tag{37}$$

_If the first k − 1 derivatives of V(_**x**_), evaluated on the set_ $\Omega$_, are zero,_

$$\frac{d^{i}V(\mathbf{x})}{dt^{i}} = 0 \quad \forall\, \mathbf{x} \in \Omega, \quad i = 1, 2, \ldots, k-1 \tag{38}$$

_and the k-th derivative is negative definite on the set_ $\Omega$_,_

$$\frac{d^{k}V(\mathbf{x})}{dt^{k}} < 0 \quad \forall\, \mathbf{x} \in \Omega \tag{39}$$

_then the system_ **x**_(t) is asymptotically stable if k is an odd number._

The feed-forward and feed-back PI control law is

$$\mathbf{u} = \mathbf{u}_{ref} - \Delta\mathbf{u} = \mathbf{M}\dot{\mathbf{x}}_{ref} - \mathbf{R}\mathbf{x}_{ref} - \mathbf{v} + \mathbf{K}_{P}\tilde{\mathbf{x}} + \mathbf{K}_{I}\int_{0}^{t}\tilde{\mathbf{x}}\, d\tau \tag{40}$$

where $\mathbf{u}_{ref}$ is found from the solution of (31). Then the system state trajectories are

$$\mathbf{M}\dot{\mathbf{x}} = \mathbf{R}\mathbf{x} + \mathbf{v} + \mathbf{u} = \mathbf{R}\mathbf{x} + \mathbf{v} + \mathbf{M}\dot{\mathbf{x}}_{ref} - \mathbf{R}\mathbf{x}_{ref} - \mathbf{v} + \mathbf{K}_{P}\tilde{\mathbf{x}} + \mathbf{K}_{I}\int_{0}^{t}\tilde{\mathbf{x}}\, d\tau. \tag{41}$$

The deviation in the state trajectories is

$$\mathbf{M}\dot{\tilde{\mathbf{x}}} = \left[\mathbf{R} - \mathbf{K}_{P}\right]\tilde{\mathbf{x}} - \mathbf{K}_{I}\int_{0}^{t}\tilde{\mathbf{x}}\, d\tau \tag{42}$$

then

$$\dot{\tilde{\mathbf{x}}} = \mathbf{M}^{-1}\left[\left(\mathbf{R} - \mathbf{K}_{P}\right)\tilde{\mathbf{x}} - \mathbf{K}_{I}\int_{0}^{t}\tilde{\mathbf{x}}\, d\tau\right]. \tag{43}$$

The second time derivative of the Hamiltonian is

$$\begin{aligned}
\ddot{H} &= -2\tilde{\mathbf{x}}^{T}\left(\mathbf{K}_{P} - \mathbf{R}\right)\dot{\tilde{\mathbf{x}}} \\
&= -2\tilde{\mathbf{x}}^{T}\left(\mathbf{K}_{P} - \mathbf{R}\right)\mathbf{M}^{-1}\left[\left(\mathbf{R} - \mathbf{K}_{P}\right)\tilde{\mathbf{x}} - \mathbf{K}_{I}\int_{0}^{t}\tilde{\mathbf{x}}\, d\tau\right] \\
&= 0 \quad \text{for} \quad \tilde{\mathbf{x}} = 0.
\end{aligned} \tag{44}$$

The third-order time derivative of the Hamiltonian from (32) is

$$\begin{aligned}
\dddot{H} &= -2\dot{\tilde{\mathbf{x}}}^{T}\left(\mathbf{K}_{P} - \mathbf{R}\right)\mathbf{M}^{-1}\left[\left(\mathbf{R} - \mathbf{K}_{P}\right)\tilde{\mathbf{x}} - \mathbf{K}_{I}\int_{0}^{t}\tilde{\mathbf{x}}\, d\tau\right]
- 2\tilde{\mathbf{x}}^{T}\left(\mathbf{K}_{P} - \mathbf{R}\right)\mathbf{M}^{-1}\left[\left(\mathbf{R} - \mathbf{K}_{P}\right)\dot{\tilde{\mathbf{x}}} - \mathbf{K}_{I}\tilde{\mathbf{x}}\right] \\
&= -2\left(\mathbf{M}^{-1}\left[\left(\mathbf{R} - \mathbf{K}_{P}\right)\tilde{\mathbf{x}} - \mathbf{K}_{I}\int_{0}^{t}\tilde{\mathbf{x}}\, d\tau\right]\right)^{T}\left(\mathbf{K}_{P} - \mathbf{R}\right)\mathbf{M}^{-1}\left[\left(\mathbf{R} - \mathbf{K}_{P}\right)\tilde{\mathbf{x}} - \mathbf{K}_{I}\int_{0}^{t}\tilde{\mathbf{x}}\, d\tau\right] \\
&\quad - 2\tilde{\mathbf{x}}^{T}\left(\mathbf{K}_{P} - \mathbf{R}\right)\mathbf{M}^{-1}\left[\left(\mathbf{R} - \mathbf{K}_{P}\right)\mathbf{M}^{-1}\left(\left(\mathbf{R} - \mathbf{K}_{P}\right)\tilde{\mathbf{x}} - \mathbf{K}_{I}\int_{0}^{t}\tilde{\mathbf{x}}\, d\tau\right) - \mathbf{K}_{I}\tilde{\mathbf{x}}\right] \\
&= -2\left(\mathbf{M}^{-1}\mathbf{K}_{I}\int_{0}^{t}\tilde{\mathbf{x}}\, d\tau\right)^{T}\left(\mathbf{K}_{P} - \mathbf{R}\right)\mathbf{M}^{-1}\mathbf{K}_{I}\int_{0}^{t}\tilde{\mathbf{x}}\, d\tau \;<\; 0
\end{aligned} \tag{45}$$

and
$$\dddot{H} = 0 \quad \text{for} \quad \int_{0}^{t}\tilde{\mathbf{x}}\, d\tau = 0. \tag{46}$$

Therefore the control dynamics are asymptotically stable. Further proof of control dynamic stability can also be found in [21, 20].

**4. Droop control**

When two or more sources inject power into a common bus or grid, a load-sharing control scheme is required. If each source tries to control the bus voltage, or frequency in ac systems, then large circulating currents can result. A load-sharing scheme can be centralized if an interconnected communication system is present and signals can be passed between sources to balance the load current. However, this can create a single point of failure. Droop control is a common technique for distributed control of electrical sources in a microgrid, where the control implements a virtual impedance such that the load current is distributed between the sources proportional to the droop settings $V_{ref,i}$ and $R_{d,i}$ [14]. The equivalent boost converter under droop control is shown in Fig. 5, where $V_{ref,i}$ and $R_{d,i}$ are parameter settings for the control.

To implement the control in a source, and to control the bus voltages through the actuation of the energy storage devices, the reference for the bus current injection in the i-th converter in Fig. 3 is defined as

$$i_{ref,i} = \frac{V_{ref,i} - v_B}{R_{d,i}}. \tag{47}$$

An error signal is created from the reference inductor current as

$$e_i = i_{ref,i} - i_i. \tag{48}$$

A control law to drive the error of (48) to zero is needed. However, it is important to point out that the control inputs to this model are the energy storage devices $u_i$ for the source converters and $u_{B,k}$ for the bus storage elements. This droop controller implements a decentralized version of the centralized feed-forward control of section 3.1 and utilizes the decoupled feed-back control. The droop control will be actuated by the storage elements, while the duty cycles $\lambda_i$ will be considered constant during a discrete epoch. The duty cycles will then be updated at set intervals as discussed in section 3.

For the source converter the current injected into the bus is

$$i_i = i_{L,i}\lambda_i \tag{49}$$

with the error signal for the boost converters as

$$e_i = i_{ref,i} - i_{L,i}\lambda_i. \tag{50}$$

A proportional-integral control law for the droop control actuated by the energy storage device is

$$u_{k,m} = R_{L,k,m}\, i_{ref,k,m} + \lambda_i v_{B,k} - v_{k,m} + K_{p,i}\, e_i + K_{i,i} \int e_i\, dt \tag{51}$$

where $u_{k,m} \in \mathbf{u}$ from (40). The droop control law for the bus storage devices is

$$u_{B,j} = \frac{V_{ref,i} - v_{B,k}}{R_{d,i}}. \tag{52}$$

The power from the boost storage devices is

$$p_{k,m} = u_{k,m}\, i_{L,k,m}. \tag{53}$$

The power from the bus storage devices is

$$p_{B,j} = u_{B,j}\, v_{B,k}. \tag{54}$$

The energy supplied from a storage device is

$$w_u = \int p_u\, dt. \tag{55}$$

Figure 5: Equivalent terminal characteristics of a boost converter in a dc microgrid under voltage droop control.

Figure 6: Equivalent terminal characteristics of the interconnected bus boost converter.

The boost converters that interconnect the buses in the microgrid also need a distributed control law to maintain bus voltages. Droop control can also be applied to these converters; however, the voltage must be referenced to the lower-voltage bus. The terminal characteristics of the interconnecting converter are shown in Fig. 6, where $v_{B,n} > v_{B,k}$.
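Before moving to the simulations, the following hedged Python sketch shows how one step of the storage-actuated droop loop of this section might be implemented. The numeric settings mirror those used in section 5; the function and variable names are this sketch's own, not the paper's.

```python
# One control step of Eqs. (47), (50), (51): the local storage voltage u
# enforces the droop current reference while the duty cycle is held constant.
Vref, Rd = 100.0, 0.5        # droop settings V_ref,i and R_d,i
Kp, Ki = 50.0, 200.0         # PI gains, as chosen in section 5
RL_km = 0.1                  # assumed converter line resistance (ohm)

def storage_control(vB, iL, v_km, lam, e_int, dt):
    """Return the storage command u_{k,m} and the updated error integral."""
    i_ref = (Vref - vB) / Rd          # Eq. (47): droop current reference
    e = i_ref - iL * lam              # Eq. (50): injected-current error
    e_int += e * dt
    # Eq. (51): PI law actuated through the local storage voltage
    u = RL_km * i_ref + lam * vB - v_km + Kp * e + Ki * e_int
    return u, e_int

u, e_int = storage_control(vB=97.5, iL=9.8, v_km=48.0, lam=0.49, e_int=0.0, dt=1e-4)
```

The feed-forward update of `lam` would run separately, every $\Delta t_\lambda$ seconds, per (20)-(21).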
**5. Simulation examples**

To demonstrate the distributed droop control approach to networked dc microgrids, a model was built and simulated in Wolfram Mathematica, Wolfram SystemModeler and Modelica [22]. The system shown in Fig. 4 has 2 sub-microgrids (N = 2), and both microgrids have 2 boost converter sources (M = 2). The sub-microgrids 1 and 2 have a nominal bus voltage of 100 V and the network bus 3 has a nominal voltage of 200 V. All voltage sources ($v_{11}, v_{12}, v_{21}, v_{22}$) have an average dc voltage of 48 V. However, all these voltages also have uniform random white noise, sampled on 1 second intervals, superimposed on the dc voltage. This random noise input tests the performance of the proposed controller under non-ideal conditions that would be an extreme worst-case scenario for a field-deployed microgrid. In microgrid applications the renewable sources are stochastic, but would be better behaved, with lower bandwidth and magnitude variations in the transients, than what is tested in this paper [23]. Therefore, the simulations shown in this section demonstrate that the proposed control is a viable solution for field-deployed microgrids.

The sources in microgrid 1 represent well-behaved devices and are representative of dispatchable sources, such as diesel generators, with very little voltage variation, in this example less than 2 V. The sources in microgrid 2 represent highly variable sources and are representative of renewable sources, such as photovoltaic and wind, with large voltage variation, in this example up to 45 V. The resistive loads on each sub-microgrid are $R_{B,1} = R_{B,2} = 10\ \Omega$ and the load on the network bus is $R_{B,3} = 100\ \Omega$. The droop voltage settings $V_{ref,i}$ have been set to the respective bus nominal voltages. The boost converter droop resistances are set to $R_{d,i} = 0.5\ \Omega$, and the bus storage droop resistances are 2 Ω. The PI control gains were chosen to meet the requirements defined in section 3 and were picked to be $K_P = 50$, $K_I = 200$ for all controllers in the system, which satisfies (36).

The structure and parameters of this example system were chosen to be indicative of many applications of networked microgrids. Microgrid applications such as military forward operating bases and electric ships have a zone-based architecture [24]. Each zone of a microgrid can be a self-contained power distribution microgrid of sources, loads and storage. Another way to view the zonal microgrid is as a network of smaller microgrids [25]. However, by interconnecting the zones or networked microgrids, greater reliability and potentially lower energy storage requirements can be achieved.

_5.1. Constant load example_

A short 10 s simulation example was run with constant loads and random voltage sources. The results from microgrid 2 are shown in Figs. 7-12. The random white noise voltage sources are shown in Fig. 7. The boost energy storage device voltages are shown in Fig. 8, with the duty cycles shown in Fig. 9. The boost energy storage device currents and powers are shown in Fig. 10 and Fig. 11, respectively. The resulting bus 2 voltage is seen in Fig. 12. The rate at which all duty cycles are updated in this simulation example is 0.1 s. As can be seen in Figs. 7-12, the bus voltages quickly return to their reference values, set through the droop control, when the source voltage variations occur. It is seen in Fig. 12 that the bus voltage has an approximate average of 97.487 V, which is less than the nominal 100 V due to droop control.
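A rough consistency check of this operating point is possible under assumptions made here, not in the paper: neglecting line resistances, the bus storage contribution, and the power exported to the network bus, the two source converters in microgrid 2 each inject $(V_{ref} - v_B)/R_d$, which must supply the bus load $v_B/R_{B,2}$:

$$2\,\frac{100\ \mathrm{V} - v_B}{0.5\ \Omega} = \frac{v_B}{10\ \Omega} \;\Rightarrow\; v_B = \frac{400}{4.1} \approx 97.6\ \mathrm{V},$$

close to the simulated 97.487 V average; the small gap is plausibly the neglected export to bus 3 and the line losses.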
It is also seen that the bus voltage varies little when the source voltages vary, because the boost converter energy storage devices act to maintain the desired droop characteristics. When the duty cycle is updated every 0.1 s, the energy storage device voltage returns to zero along with the power output.

Figure 7: Boost converter source voltages.

Figure 8: Boost converter storage device voltages.

Figure 9: Boost converter duty cycles.

_5.2. Step change in load example_

To demonstrate the distributed droop control in networked microgrids, the same model from the previous section was modified such that the resistive load on bus 2 is

$$R_{B,2}(t) = \begin{cases} 10\ \Omega, & \text{if } t < 10\ \mathrm{s} \\ 1\ \Omega, & \text{if } t \geq 10\ \mathrm{s}. \end{cases} \tag{56}$$

The step change in load from (56) causes a transient in the system which draws more current from the sources. Through the droop control, the extra load current is shared between the boost converters and the bus energy storage devices. The amount of current each device contributes is negotiated through the bus voltages by means of the droop control strategy. The following examples will show the performance of the system when each bus is isolated versus when they are interconnected.

_5.2.1. Isolated microgrids_

In the first simulation with the step change in load at bus 2, the buses are isolated from each other. This is accomplished through a very large droop setting on the interconnecting converters. In this case, no energy is shared between buses. The resulting sub-microgrid bus voltages are seen in Fig. 13, where the bus 2 voltage droops from 97.5 V down to 81.6 V, while the bus 1 voltage remains constant. The energy from the local energy storage devices in microgrid 1 is shown in Fig. 14. The energy from the local energy storage devices in microgrid 2 is shown in Fig. 15. In addition, Fig. 16 shows that bus storage device 2 must output energy when the load steps. It is also seen in Fig. 16 that bus storage device 3 is always providing energy to bus 3, since in the isolated configuration it is the only source of energy.

_5.2.2. Networked microgrids_

Next, the interconnecting converters were controlled to maintain droop resistances of 0.5 Ω, thus allowing energy exchange between buses. It is seen in Fig. 17 and Fig. 18 that all three bus voltages droop in response to the increased load. It is important to point out that the bus 2 voltage now droops only to approximately 85.6 V, i.e., 4 V less of a droop than in the previous isolated case.

The energy from the local boost energy storage devices in microgrids 1 and 2 is shown in Fig. 19 and Fig. 20, respectively. The energy from the bus storage devices is shown in Fig. 21. The energy from the interconnecting boost converters is shown in Fig. 22. It is seen in Fig. 22 that the storage devices in the interconnection converters neither contribute nor draw any significant amount of energy, at least in comparison to the source and bus storage devices.
This indicates that the interconnection storage device may not be necessary in future iterations of a networked microgrid design.

The performance and energy storage requirements of the system are greatly affected by the networked interconnection. Table 1 provides a comparison of the total energy used, as well as the maximum and average bus voltage droop, from the networked and isolated simulation examples.

Table 1: Total Energy and Voltage Droop Comparison

| | Networked | Isolated |
|---|---|---|
| $\sum W_u$ (kJ) | 79.75 | 119.68 |
| max $\Delta V_B$ (V) | 12.36 | 16.42 |
| avg $\Delta V_B$ (V) | 6.35 | 5.56 |

Table 1 shows that in the networked configuration a total of only 79.75 kJ of energy was needed, while in the isolated case 119.68 kJ was needed. It is also seen in Table 1 that the average voltage droop changes very little between cases. These results lead to the conclusion that, if this system were to be built, smaller energy storage devices could be used if the sub-microgrids can be networked together.

_5.2.3. Duty cycle feed-forward update rates_

Lastly, a series of simulations was performed to determine the effect of the duty cycle update rate on the total energy required. In these simulations the models from sections 5.2.1 and 5.2.2 were swept with duty cycle update rates of $\Delta t_\lambda = 0.1$ s to 1 s in steps of 0.001 s. The total energy supplied by all the energy storage devices in the system for the networked and isolated cases is shown in Fig. 23. The results in Fig. 23 show that for small update periods, the energy required in the isolated case is greater than in the networked case. As the time between updates increases, the isolated grid storage utilization approaches that of the networked grid. A possible explanation is that as the update period increases, the advantages of the interconnectivity degrade and the grid operates as a set of isolated microgrids. It is interesting to note that for both the interconnected and isolated cases the storage usage decreases as the duty cycle update period increases. Based on the results of sections 5.2.1 and 5.2.2, the bus voltage droop for the interconnected topology was lower than for the grid's isolated operation. Extending that result to the update period study, the decrease in storage usage for longer update periods comes with a reduction in bus voltage performance.

**6. Conclusions**

This paper has presented a novel approach to droop control actuated by the local energy storage device. The novelty of this approach lies in the actuation of the system through the energy storage devices. The duty cycles are updated at periodic intervals through feed-forward control to match the high-side and low-side source voltages. This approach allows the storage needs for capacity and bandwidth response to be identified for the given distributed droop control approach. The results show that the networked microgrids need less overall energy storage in response to transients at certain duty cycle update rates. However, as the update rates decrease, the storage requirements lessen and become the same for a networked system versus isolated systems.

**7. Acknowledgment**

Sandia National Laboratories is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

**References**

[1] R. H. Lasseter, Microgrids, in: IEEE Power Engineering Society Winter Meeting, Vol. 1, IEEE, 2002, pp. 305-308.

[2] K. A. Nigim, W.-J.
Lee, Micro grid integration opportunities and challenges, in: IEEE Power Engineering Society General Meeting, IEEE, 2007, pp. 1-6.

[3] E. Trinklein, G. Parker, W. Weaver, R. Robinett, L. Babe, C.-W. Ten, W. Bower, S. Glover, S. Bukowski, Scoping study: Networked microgrids, Tech. Rep. SAND2014-17718, Sandia National Labs (2014).

[4] V. Ramirez, R. Ortega, O. Bethoux, A. Sánchez-Squella, A dynamic router for microgrid applications: Theory and experimental results, Control Engineering Practice 27 (0) (2014) 23-31.

[5] M. Erol-Kantarci, B. Kantarci, H. Mouftah, Reliable overlay topology design for the smart microgrid network, IEEE Network 25 (5) (2011) 38-43.

[6] R. D. Robinett, D. G. Wilson, S. Y. Goldsmith, Collective control of networked microgrids with high penetration of variable resources part i: Theory, in: IEEE International Conference on Cyber Technology in Automation, Control, and Intelligent Systems, 2012, pp. 1-4.

[7] A. Chaouach, R. M. Kamel, R. Andoulsi, K. Nagasaka, Multiobjective intelligent energy management for a microgrid, IEEE Transactions on Industrial Electronics 60 (2013) 1688-1699.

[8] M. H. Nazari, M. Ilic, J. Lopes, Small-signal stability and decentralized control design for electric energy systems with a large penetration of distributed generators, Control Engineering Practice 20 (9) (2012) 823-831.

[9] D. G. Wilson, R. D. Robinett, S. Y. Goldsmith, Renewable energy microgrid control with energy storage integration, in: International Symposium on Power Electronics, Electrical Drives, Automation and Motion, 2012, pp. 158-163.

[10] B. K. Johnson, R. H. Lasseter, F. L. Alvarado, R. Adapa, Expandable multiterminal dc systems based on voltage droop, IEEE Transactions on Power Delivery 8 (4) (1993) 1926-1932.

[11] A. Nagliero, R. Mastromauro, D. Ricchiuto, M. Liserre, M. Nitti, Gain scheduling-based droop control for universal operation of small wind turbine systems, in: IEEE International Symposium on Industrial Electronics, IEEE, 2011, pp. 1459-1464.

[12] J. Guerrero, P. C. Loh, T.-L. Lee, M. Chandorkar, Advanced control architectures for intelligent microgrids - part ii: Power quality, energy storage, and ac/dc microgrids, IEEE Transactions on Industrial Electronics 60 (4) (2013) 1263-1270.

[13] A. M. Dizqah, A. Maheri, K. Busawon, P. Fritzson, Standalone dc microgrids as complementarity dynamical systems: Modeling and applications, Control Engineering Practice 35 (0) (2015) 102-112.

[14] W. W. Weaver, R. D. Robinett, G. G. Parker, D. G. Wilson, Energy storage requirements of dc microgrids with high penetration renewables under droop control, International Journal of Electrical Power & Energy Systems 68 (0) (2015) 203-209.

[15] N. Mohan, T. Undeland, Power Electronics: Converters, Applications, and Design, John Wiley & Sons, 2007.

[16] P. T. Krein, J. Bentsman, R. M. Bass, B. L. Lesieutre, On the use of averaging for the analysis of power electronic systems, IEEE Transactions on Power Electronics 5 (2) (1990) 182-190.

[17] W. W. Weaver, P. T. Krein, Game-theoretic control of small-scale power systems, IEEE Transactions on Power Delivery 24 (3) (2009) 1560-1567.

[18] Z. Chen, W. Gao, J. Hu, X. Ye, Closed-loop analysis and cascade control of a nonminimum phase boost converter, IEEE Transactions on Power Electronics 26 (4) (2011) 1237-1252.

[19] R. D. Robinett III, D. G. Wilson, Nonlinear Power Flow Control Design: Utilizing Exergy, Entropy, Static and Dynamic Stability, and Lyapunov Analysis, Springer, 2011.

[20] H.
Schaub, J. Junkins, Analytical mechanics of space systems, AIAA, 2003.

[21] R. Robinett, G. G. Parker, H. Schaub, J. Junkins, Lyapunov optimal saturated control for nonlinear systems, Journal of Guidance, Control and Dynamics 20 (6) (1997) 1083-1088.

[22] P. Fritzson, Introduction to modeling and simulation of technical and physical systems with Modelica, Wiley.com, 2011.

[23] A possibilistic-probabilistic tool for evaluating the impact of stochastic renewable and controllable power generation on energy losses in distribution networks, a case study, Renewable and Sustainable Energy Reviews 15 (1) (2011) 794-800.

[24] D. Wilson, J. Neely, M. Cook, S. Glover, J. Young, R. Robinett, Hamiltonian control design for dc microgrids with stochastic sources and loads with applications, in: IEEE International Symposium on Power Electronics, Electrical Drives, Automation and Motion, 2014, pp. 1264-1271.

[25] R. Robinett, D. Wilson, S. Goldsmith, Collective control of networked microgrids with high penetration of variable resources part i: Theory, in: IEEE International Conference on Cyber Technology in Automation, Control, and Intelligent Systems, 2012, pp. 1-4.

Figure 10: Boost converter storage device currents.

Figure 11: Boost converter storage device power.

Figure 12: Bus 2 voltage.

Figure 13: Bus 1 and 2 voltages with step change in load on bus 2 in isolated microgrid.

Figure 14: Energy supplied by boost storage devices $u_{1,j}$ in isolated microgrid.

Figure 15: Energy supplied by boost storage devices $u_{2,j}$ in isolated microgrid.

Figure 16: Energy from the bus storage devices in isolated microgrid.

Figure 17: Bus 3 voltage with step change in load on bus 2 in networked microgrid.

Figure 18: Bus 1 and 2 voltages with step change in load on bus 2 in networked microgrid.
Figure 19: Energy supplied by boost storage devices $u_{1,j}$ in networked microgrid.

Figure 20: Energy supplied by boost storage devices $u_{2,j}$ in networked microgrid.

Figure 21: Energy from the bus storage devices in networked microgrid.

Figure 22: Energy from the bus interface converter storage devices $u_{1\to 3}$ and $u_{2\to 3}$ in networked microgrid.

Figure 23: Total energy supplied by energy storage devices as a function of the duty cycle feed-forward update rate $\Delta t_\lambda$ (s).
{ "disclaimer": "Notice: The following paper fields have been elided by the publisher: {'abstract', 'references'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1016/J.CONENGPRAC.2015.06.008?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1016/J.CONENGPRAC.2015.06.008, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "publisher-specific-oa", "status": "BRONZE", "url": "https://www.sciencedirect.com/science/article/am/pii/S0967066115001185" }
2,015
[]
true
2015-11-01T00:00:00
[]
14,721
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Business", "source": "s2-fos-model" }, { "category": "Environmental Science", "source": "s2-fos-model" }, { "category": "Engineering", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/0246366d638b26f74404861edb9f4071bee082bb
[ "Computer Science" ]
0.882219
Moving from 5G in Verticals to Sustainable 6G: Business, Regulatory and Technical Research Prospects
0246366d638b26f74404861edb9f4071bee082bb
International Conference on Cognitive Radio Oriented Wireless Networks and Communications
[ { "authorId": "1381676255", "name": "Marja Matinmikko-Blue" }, { "authorId": "2741612", "name": "Seppo Yrjölä" }, { "authorId": "2120804", "name": "Petri Ahokangas" } ]
{ "alternate_issns": [ "2166-5370" ], "alternate_names": [ "CROWNCOM", "Int Conf Cogn Radio Oriented Wirel Netw Commun", "CROWNC" ], "alternate_urls": null, "id": "464a5ecb-f65b-4e71-a90d-3310a7d6ea8e", "issn": "2166-5419", "name": "International Conference on Cognitive Radio Oriented Wireless Networks and Communications", "type": "conference", "url": "http://ieeexplore.ieee.org/xpl/conhome.jsp?punumber=1001755" }
null
# Moving from 5G in Verticals to Sustainable 6G: Business, Regulatory and Technical Research Prospects

Marja Matinmikko-Blue[1[0000-0002-0094-6344]], Seppo Yrjölä[2[0000-0003-2053-9700]], and Petri Ahokangas[3[0000-0002-2351-8473]]

1 Centre for Wireless Communications, University of Oulu, Finland
2 University of Oulu, Finland and Nokia, Finland
3 Oulu Business School, Martti Ahtisaari Institute, University of Oulu, Finland

```
marja.matinmikko@oulu.fi
```

**Abstract. Mobile communication research is increasingly addressing the use of 5G in verticals, which has led to the emergence of local and often private 5G networks. At the same time, research on 6G has started, with a bold goal of building a strong linkage between 6G and the United Nations Sustainable Development Goals (UN SDGs). Both of these developments call for a highly multi-disciplinary approach covering the inter-related perspectives of business, regulation and technology. This paper summarizes recent advances in using 5G to serve vertical sectors' needs and describes a path towards sustainable 6G considering business, regulation and technology viewpoints. By focusing on key trends, the research summarizes four alternative scenarios for the future business of 6G and considers related regulatory and technology aspects. Our findings highlight the importance of understanding the complex relations of the business, regulation and technology perspectives, and the role of ecosystems, both in 5G in verticals and ultimately in the development of sustainable 6G, in bringing together stakeholders to solve long-term sustainability problems.**

**Keywords: Business Strategy, Regulation, Scenario Planning, Sustainability, 5G, 6G.**

## 1 Introduction

5G deployments are underway on a global scale, with the first applications focusing on offering high-capacity mobile broadband services. The promise of 5G to boost the digitalization of various vertical industries is gradually gaining increasing attention, and the emergence of local 5G networks [Matinmikko et al. 2017; Matinmikko et al. 2018] is starting to take place in some countries. Local 5G networks allow different stakeholders to use their own local connectivity platforms without having to rely on mobile network operators. These developments are occurring in complex multi-stakeholder ecosystems where regulatory, business, and technical perspectives are highly intertwined. The emergence of using 5G in the various verticals brings together the ICT sector and the vertical sector in question, each with their own structures and rules for operations, calling for an ecosystem-level focus. Especially, the availability of spectrum for local networks fully depends on the country of operations, emphasizing the importance of regulatory decisions.

At the same time, research on the sixth generation (6G) of mobile communication networks has started globally, aiming at first deployments in the 2030s. The first 6G White Paper published in 2019 presented a joint 6G research vision as the group work of 70 experts globally [Latva-aho & Leppänen, 2019]. The paper depicted the future 6G networks as an intelligent system of systems that combines communication services with a set of other services including imaging, sensing, and locationing services, opening a myriad of new application areas. A set of continuation 6G White Papers published in 2020 [6G Flagship White Papers 2020], prepared in collaboration with 250 international experts, went more into detail and presented, e.g.,
alternative future scenarios for the business of 6G [Yrjölä et al., 2020], and developed a tight linking between 6G and the United Nations Sustainable Development Goals (UN SDGs) [Matinmikko-Blue et al., 2020a]. Some of the developed future 6G business scenarios have taken sustainability as the starting point, stressing that the whole development of the future mobile communication networks should aim at helping society at large in its attempts to meet the sustainable development goals [Latva-aho & Leppänen 2019; Yrjölä et al. 2020a; Yrjölä et al. 2020b; Matinmikko-Blue et al. 2020a].

To make sense of moving from 5G in verticals towards 6G, we must envision future 6G systems targeting 2030 holistically, considering the interaction between the business, regulation and technology perspectives when identifying future research prospects. The alternative futures of 6G will be shaped by growing societal requirements like inclusivity, sustainability, resilience, and transparency – a highly complex area that will call for major changes in industrialized societies in the long run, see [Latva-aho et al. 2019; Matinmikko-Blue et al. 2020a]. The business perspective specifically needs to consider sustainability [Kuhlman & Farrington, 2010; Evans et al. 2017] in a way that combines the economic (e.g., profit, business stability, financial resilience, viability), societal (e.g., individuals', communities', regulative values) and environmental (e.g., renewables, low emissions, low waste, biodiversity, pollution prevention) perspectives.

As an emerging field, 6G business scenarios and strategies have not been widely discussed in the literature to date. However, vision papers on future communication needs, enabling technologies, the role of artificial intelligence (AI), and emerging applications have recently been published [Viswanathan & Mogensen, 2020; Saad, Bennis & Chen, 2019; Letaief et al. 2019]. Furthermore, the discussion has lately expanded to 6G indicators of value and performance [Ziegler & Yrjölä, 2020], the role of regulation and spectrum sharing [Matinmikko-Blue et al., 2020a], the antecedents of multi-sided transactional platforms [Yrjölä, 2020], antecedents of the 6G ecosystem [Ahokangas et al. 2020a] and exploratory scenarios of 6G business [Yrjölä et al., 2020].

Building on the above discussion, this paper provides an overview of 5G in verticals towards sustainable 6G from the business, regulation and technology perspectives and presents related research prospects. The paper summarizes future scenarios for sustainable 6G business strategies in the timeframe 2030-2035, originally documented in [Yrjölä et al., 2020], and related strategic options. The rest of this paper is organized as follows. Chapter 2 summarizes the state of the art of 5G in verticals from the business, regulation and technology perspectives. Chapter 3 presents an overview of sustainable 6G. Future business scenarios for sustainable 6G and related strategic options are presented in Chapter 4. Finally, future outlook and conclusions are provided in Chapter 5.

## 2 State of the Art of 5G in Verticals

5G has been set high in national agendas to speed up the digitalization of various sectors of society in many countries. This chapter presents recent developments in the use of 5G networks to serve the needs of different vertical sectors, such as industry, energy, and health, and their public sector counterparts, from the interrelated business, technology and regulation perspectives.
**2.1 Business Perspective**

The business perspective plays an important role in understanding the opportunities that a new technology can offer. The identification of the opportunity space for 5G business in verticals requires discussing four inter-related key themes: 1) the convergence of connectivity and data platforms and related ecosystems, 2) the enablers, barriers and limitations to scalability and replicability of 5G solutions and business models, 3) the legitimation of the new roles and business models within the verticals, and 4) the economic, societal and environmental sustainability of 5G solutions and business models. As vertical 5G networks are often considered local networks, the platform-based business models utilized by different stakeholders face several challenges related to the aforementioned themes.

Mobile communication networks have long been seen as platforms [Pujol et al, 2016] or ecosystems [Basole and Karla, 2011]. However, with the deployment of 5G networks, the mobile connectivity platforms operated by mobile network operators (MNOs) are increasingly becoming converged with the data platforms of various cloud service providers, giving rise to novel kinds of platform ecosystems. In industry verticals, the Industry 4.0 platforms, as a specific type of data platform, also play an important role. Extant literature identifies centralized, hybrid and fragmented types of converged connectivity and data platforms for industry verticals [Ahokangas et al., 2020b]. In this kind of vertical context, a key feature of the converged platforms is the degree of openness achieved for different stakeholders of the ecosystem. Related to openness, the complexity, complementarity and interdependence of the converged connectivity and data platforms can be clarified by looking at the various components, interfaces, data and algorithms utilized in these platforms [Yrjölä et al., 2019] in connection to the connectivity (5G or other), content (e.g., information or data), context (location- or use-case-specific data) or commerce (offering made available via a platform) business models utilized [Iivari, et al., 2020]. The vertical business model for local 5G operators presented by [Ahokangas et al., 2019] builds specifically on the idea of providing tailored end-to-end services in restricted geographical areas, such as industry sites, to the users locally. Vertical business models form a vertically structured ecosystem around the activity. The presented oblique business model and corresponding oblique ecosystem in turn build on mass-tailored end-to-end services with stricter requirements for segmentation [Ahokangas et al., 2019].

The different types of converged connectivity and data platforms, and the business models identified for them, have varying potential for scalability and replicability. A scalable business model is agile and provides exponentially increasing returns to scale in terms of growth from additional resources applied [Nielsen and Lund, 2018], whereas a replicable business model can be copied to several markets simultaneously with minimum variations [Aspara et al., 2010]. For a firm running a vertical business model, scalability is based on the firm's capability to understand customer-specific needs and fulfill them, but is limited by the size of the cases, their volume and timeline.
For a firm running an oblique business model, scalability is based on the volume of unmet local needs and is limited by the access to and availability of the local infrastructures needed for providing the service [Ahokangas et al., 2019].

Within converged connectivity and data platform ecosystems, different stakeholders have varying roles and can act as service providers. This raises the issue of legitimacy, meaning that the activities of the stakeholder providing the service are legal and fit with the institutionalized practices within the industry in question [Marano et al., 2020]. Achieving legitimacy for local vertical-specific 5G services and service providers through the deployment of local 5G networks is, however, an open question in many countries. Indeed, disruptive innovations such as 5G have been found to cause regulatory, incumbent and social "pushbacks", and these can be expected also for vertical 5G services, as legitimacy is a precondition for successful value creation and capture on a technology [Biloslavo et al., 2020].

The above discussion points out several challenges for reaching sustainable business models in 5G verticals. "A business model for sustainability helps describing, analyzing, managing and communicating (i) a company's sustainable value proposition to its customers, and all other stakeholders, (ii) how it creates and delivers this value, (iii) and how it captures economic value while maintaining or regenerating natural, social, and economic capital beyond its organizational boundaries" [Schaltegger et al., 2016, p. 6]. Building vertical 5G business opportunities thus calls for fulfilling the requirements of scalability, replicability, and sustainability in a legitimate way in a platform ecosystem comprising connectivity and data services.

**2.2 Regulation Perspective**

The serving of the different verticals with 5G networks is not only addressed by the current MNOs; increasing attention is being paid to local and often private 5G networks [Matinmikko et al. 2017; Matinmikko et al., 2018] that can be operated independently of the MNOs. Their emergence is highly dependent on the regulations that govern both the electronic communications market and the specific verticals, leading to a complex environment in which to operate. Regulations at national, regional and international levels define the operational conditions, and there is wide variation between the national approaches but also some level of harmonization, such as on the spectrum for 5G.

Prior work on regulatory developments for local 5G networks [Matinmikko et al., 2018; Vuojala et al., 2019; Lemstra, 2018; Ahokangas et al., 2020b] has considered access regulation, pricing regulation, competition regulation, privacy and data protection, and the authorization of networks and services. Especially, the authorization of networks and services, defining the ways in which rights to use radio frequencies are granted, is critical for the establishment of local private 5G networks. Without the timely availability of a sufficient amount of spectrum suitable for operations in the given environment, it is not possible to deploy the local networks. Specific spectrum options for local 5G networks are analyzed in detail in [Vuojala et al., 2019], including unlicensed access, secondary licensing, spectrum trading/leasing, virtual networks or local licensing. Local licensing has emerged as a new spectrum access model in 5G to allow different stakeholders to deploy local networks in addition to the MNOs.
A study of the recent 5G spectrum award decisions in the 3.5 GHz band presented in [Matinmikko-Blue et al., 2019] shows a big divergence in the spectrum award decisions taken by regulators in different countries globally.

The 5G regulatory situation in Europe is discussed in [Lemstra, 2018], where two contrasting scenarios for the future telecommunication market are presented: an evolutionary and a revolutionary scenario. The evolutionary scenario continues the MNO market dominance, which is likely to occur under the current European regulatory framework. The revolutionary scenario introduces new virtual MNOs that serve specific industry sectors, which calls for additional policy and regulatory measures. The mobile communication market is at a turning point with the emergence of locally operated 5G networks run by different stakeholders, especially aiming at serving the verticals' specialized local needs.

**2.3 Technology Perspective**

Previous-generation mobile technologies have been largely deployed by national (or multi-national) incumbent MNOs for public use, given the high levels of investment required for the infrastructure and for acquiring exclusive radio spectrum. Furthermore, the management and operational costs of the networks have been significant, and mobile technologies have required large and complex system integration from global infrastructure vendors with specialized capabilities.

In addition to improved performance characteristics in capacity, speed and latency, the novel 5G architecture is bringing additional flexibility for traditional MNOs as well as local operators in system deployments. Key technologies expected to transform 5G for verticals include localization and decomposition of network functions, software-defined networking and network virtualization, among others [Morgado et al., 2018]. A critical aspect of the local private industrial 5G networks is the ability to create customized network slices, where instances of virtual network resources and applications can be delivered to a new breed of services tailored to specific customer or tenant needs, with service-level-agreed performance on demand. Furthermore, the software-based network architecture enables efficient sharing of common network infrastructure and resources by different tenants. Abstracting the slice functionality and exposing it through open interfaces to third-party service provisioning enables a service-dominant model for connectivity and the underlying network resources, e.g., computing, data and intelligence. The evolution towards cloud-native infrastructure abstraction, both in the core and the radio access, empowers technology vendors and service providers to deploy and operate flexible and portable processes and applications in dynamic multi-vendor cloud environments. The cloud embedded in the edge of the network provides tools for optimized performance and economics, both for the virtualized network functions and for any other performance-critical enterprise or vertical service, and can become a control point of the local connectivity and intelligence. Edge cloud use cases considered in 5G include, e.g., cloud radio access network (Open RAN, Virtual RAN), edge security, network and service automation enhancing the network itself, industrial automation, massive-scale Internet of Things (IoT), and augmented intelligence with augmented reality (AR)/virtual reality (VR). Another critical aspect is the spectrum. Operation at higher carrier frequencies represents a challenge in terms of deployment.
The availability of suitable spectrum for serving the verticals cannot be based on a dedicated-spectrum paradigm but requires sharing in different domains.

Figure 1 summarizes the presented business, regulation and technology perspectives for 5G in verticals.

**Fig. 1. Business, regulation and technology perspectives for 5G in verticals.**

## 3 Towards Sustainable 6G

In parallel with the on-going development and deployment of 5G in verticals, research on the next-generation, namely 6G, systems has already started in different parts of the world, see [Latva-aho & Leppänen, 2019] and [6G Flagship White Papers, 2020].
New value-add is seen to come from real-time and trustworthy communications, the use of local data and intelligence, and the commoditization of 6G resources as its competitive advantages, including extreme capacity and security, transaction and innovation platformization, and ubiquitous access. The expected business consequences of scalability may be related to the long tail of services, dataflow architecture, automation, and open collaboration between stakeholders; in terms of replicability, to deliberately designing modularity and complementarity within platforms; and in terms of sustainability, to empowering users and communities and utilizing sharing-economy mechanisms in the markets.

Overall, governments and industries are under high pressure from the sustainability targets arising from the UN SDGs to renew their operations, and the achievement of the goals provides new business opportunities, especially for ICT solutions. These data and connectivity solutions can significantly help industries to improve their resource efficiency and reduce waste, but the solutions themselves need to be developed in alignment with the sustainability goals as well. Digital convergence across industries and multi-level 6G platforms and ecosystems is creating a complex strategic environment that can lead to incomparable and distinct opportunities, as well as emergent problems. The regulations governing the use of future telecommunication systems and the relevant industry-specific regulations together create a complex environment, especially around the use of data and connectivity platforms for different purposes. In particular, unanswered questions remain about ecosystemic business models in the context of sustainability. According to our recent findings [Yrjölä et al., 2020a; Yrjölä et al., 2020b], business ecosystems that aim to bring together stakeholders to solve systemic sustainability problems will require an open, ecosystem-focused value configuration and a decentralized power configuration, where traditional stakeholder roles change and new roles emerge. The focus needs to be on the long tail of specialized user requirements that crosses a variety of industries, where the related needs can be met with different resource configurations.

Spectrum continues to be the key resource for 6G systems, as for any wireless networks throughout the times, and the availability of suitable spectrum continues to be significantly restricted due to the existing incumbent spectrum usage, see [Matinmikko-Blue et al., 2020b]. Spectrum availability is a good example of the complex relations between the business, regulation and technology perspectives. The availability of spectrum is a regulatory decision, which defines the business opportunities and yet is restricted by technical aspects. Potential operations of future 6G systems in the new higher frequency bands in the upper millimeter-wave (mmW) and terahertz (THz) regions pose significant technical, regulatory and deployment-related challenges. Therefore, future 6G is not restricted only to higher frequency bands but can also be used in the existing bands for mobile communications. What the economically feasible operational models are, how to protect existing incumbent users of the feasible bands, and how to implement THz radio links continue to be open topics for 6G.
The technology vision work on the global scale for systems towards 2030 and beyond has started at the International Telecommunication Union Radiocommunication sector (ITU-R) with the development of a report on future technology trends. The need for new indicators to characterize the performance of future 6G networks is evident [Latva-aho & Leppänen, 2019; Matinmikko-Blue et al., 2020a; Pouttu et al., 2020], especially for defining and measuring resource efficiency, and particularly energy efficiency. Also, the network architecture of 6G needs to be re-thought compared with prior generations of networks, see [Taleb et al., 2020]. Figure 2 provides a summary emphasizing the need to develop sustainable 6G in line with the UN SDGs from the business, regulation and technology perspectives.

**Fig. 2. Business, regulation and technology perspectives for sustainable development of 6G.**

## 4 Business Scenarios and Strategic Options for 6G

Next, we proceed to new business scenarios developed for 6G and related strategic options developed through a set of virtual future-oriented white paper expert group workshops organized by 6G Flagship at the University of Oulu in 2020, documented and analyzed in [Yrjölä et al., 2020a; Yrjölä et al., 2020b].

**4.1** **Methodology**

The alternative scenarios for the future business of 6G summarized in this paper were created using the anticipatory action learning (AAL) research method [Stevenson, 2012] within 6G Flagship's white paper preparation [6G Flagship, 2020]. The process involved a series of online workshops in January-April 2020 where a group of experts from research, standardization and development, the telecommunication industry, government, and verticals joined to collaboratively create future business scenarios for 6G.

First, the key change drivers for future 6G business were identified, resulting in 153 forces [Yrjölä et al., 2020a]. Using these drivers, a set of dimensions and endpoints was selected to form the basis for the scenario development, as shown in Figure 3. Value creation and value configuration were selected as the main business dimensions, with different endpoints emphasizing closed and open alternatives.

| End 1 | Dimension | End 2 |
| --- | --- | --- |
| Incumbent customer lock-in | Value creation | Novel service providers |
| Supply-driven, proprietary | Value configuration | Open ecosystem driven |

**Fig. 3. Selected business scenario logic and dimensions.**

We also used the simple rules strategy framework presented in [Eisenhardt & Sull, 2001], a strategic management tool to develop strategies around identified business opportunities and to describe the main processes. It provides a highly practical approach with guidelines in the following six rule categories, introduced in [Eisenhardt & Sull, 2001] and applied to the mobile communication market in [Ahokangas et al., 2013]: 1) nature-of-opportunity rules; 2) how-to rules for conducting business and processes in a unique way; 3) boundary rules to decide which opportunities to pursue; 4) priority rules to identify and rank the opportunities; 5) timing rules to synchronize emerging opportunities and other parts of the company; and 6) exit rules for selecting things to be ended.

Next, we introduce the four developed business scenarios using the dimensions of Figure 3, including Sustainable edge, Telco brokers, MNO6.0 and Over-the-top, as summarized in Figure 4 and presented in [Yrjölä et al., 2020a].
We also briefly summarize the strategies, expressed as simple rules, that were created for the most plausible MNO6.0 scenario and the most preferred Sustainable edge scenario.

**4.2** **6G Business Scenarios**

A set of business scenarios was developed in 6G Flagship's white paper process in 2020, documented in [Yrjölä et al., 2020a] and summarized in the following. Figure 4 summarizes the four developed business scenarios following the scenario logic of Figure 3.

In the first scenario, the Sustainable Edge Value Creation scenario in the upper-right corner of Figure 4, the value creation is customer attraction-driven, and the value configuration is open ecosystem-focused. This scenario is built on a decentralized open value configuration and ecosystem-driven business models where novel stakeholders take over customer ownership and networks. Changing stakeholder roles include webscale companies, over-the-top (OTT) companies and device vendors being responsible for business-to-consumer (B2C) customers, and local private cloud-native networks serving business-to-business (B2B) customers. The role of traditional MNOs has changed into that of a wholesale connectivity service provider. Open source principles have become widely spread, leading to technology and innovation ownership beyond traditional technology providers through open application programming interfaces (API) and novel resource brokerage. This scenario includes new stakeholder roles also in the form of local communities and special interest groups operating various edge resources in specific locations, such as campuses and remote areas, to promote local innovation. New applications come with 6G technology that act as digital value platforms expanding our experiences towards digital computer-generated virtual worlds. The current focus on global-scale solutions changes towards local solutions that balance local demand with local supply and support circular economies. Especially the manufacturing vertical will move towards local decentralized manufacturing supporting a new crowdsourcing-based production ecosystem.

[Figure 4 is a two-by-two scenario matrix built on the dimensions of Figure 3: it spans evolutionary versus revolutionary development and incumbents versus novel service providers (customer attraction-driven value creation and localized services), maps the quadrants to the protective, networked, competitive and empowered worldviews, and positions the four scenarios, Over-the-top (most probable), MNO6.0 (most plausible), Telco Brokers, and Sustainable Edge (most preferable), with keywords such as telcos driving technology innovation and the end-to-end value chain, telcos owning the primary customer relationship, customer data and the service platform ecosystem, the platform as a broker between customers and OTTs, platform-based ecosystemic business models, an innovation "engineering" platform, Industry 5.0, and resilient smart cities.]

**Fig. 4. Summary of developed 6G business scenarios.**

In the second scenario, the Telco Broker Value Creation by Incumbents and Open Ecosystem Value Configuration scenario shown in the lower-right corner of Figure 4, the main drivers for value creation remain the existing MNOs, while the value configuration is based on an open ecosystem focus. The MNOs are in charge of customer relationships and use a service platform ecosystem to capture value. The technology providers' role is to develop the required technologies and provide network infrastructure via platform-based ecosystemic business models. The innovation ecosystem is broadened by the decoupling of technology platforms.
Industry 5.0 (I5.0) has emerged as a key vertical for collaborative human-machine interaction with robotization across services and industries. Real-time data and a high level of digital automation allow the industries to focus on the servitization of products. The speed of operations gets more and more rapid within the increasingly reprogrammable and reconfigurable world, where the design focus gets more and more short-term.

In the third scenario, the MNO6.0 Value Creation scenario shown in the lower-left corner of Figure 4, value creation is driven by the incumbent MNOs, and the value configuration is closed and supply-focused. The role of MNOs is strong, and they drive technological innovation and own the customer relationships. The existing dominant MNO market position with a strong customer base acts as the opportunity for business, and the focus is on how to cost-efficiently increase the capacity to meet the growing demand. Technology developments in dynamic network slicing allow increasing flexibility, shorter time-to-market, and cost optimization. With the MNO market dominance, the use of 6G in verticals is heavily dependent on MNOs' business decisions. Key technology developments in the form of automated network slicing, operations in higher frequency bands and new machine learning-inspired tools will be used to optimize network operations in a predictive manner, allowing new applications. These networks will have been assembled with a public-private-partnership funding model, with a view to resiliency and sustainability.

In the fourth scenario, the Over-the-Top Value Creation scenario shown in the upper-left corner of Figure 4, value creation is customer attraction- and lock-in-driven, and the value configuration is closed and supply-focused. The MNO dominance is replaced by OTTs that have taken over the customer relationships with the help of their access to customer data. The role of operators is to control the standardized and commoditized connectivity technologies and manage the value chains. The role of edge computing is to act as a new control point for serving the verticals. Networks are programmable and make use of digital twins that represent replicas of complex physical systems to help in optimizing these systems. The ecosystem gets increasingly complicated, with the different resources and assets needed to meet the versatile needs brought together by a set of stakeholders including physical infrastructure providers, equipment providers, and data providers under a complex regulatory framework defined by policymakers. Countries with more permitting rules act as resource pools and offer cheap labor, natural resources, and data.

The four developed scenarios were then assessed in terms of their probability, plausibility and preferability. The most probable scenario was the Over-the-top scenario, while the most plausible scenario was the MNO6.0 scenario. The most preferable scenario was the Sustainable edge scenario, which can be seen to take a bold step towards the achievement of the UN SDGs, representing revolutionary and demand-driven transformations.

The developed business scenarios for 6G indicate that, from the economic perspective, user experiences will be increasingly local and customized, delivered by local supply models supporting spatial circular economies. New societal service delivery models will appear through community-driven networks and public-private partnerships, and the role of 6G will be substantial in vertical industries.
The developed scenarios revealed interesting societal observations, including increasing tensions between the competitive, protective, networked and empowered worldviews. The role of power configurations keeps increasing and may shift from a multi-polarized world to a poly-nodal world. The pressure on companies and governments to meet the UN SDGs is evident in the business scenarios for 6G, and the role of 6G as a provider of services towards environmental impact will be important. 6G, with a set of new technologies, will help in the monitoring and steering of the circular economy to promote a truly sustainable data economy. The developed scenarios also show that 6G development faces privacy and security issues related to business and regulation, including different aims of governance stemming from governmental, company or end-user perspectives. There, the ecosystem-level configurations related to users, decentralized and community-driven business models and platforms, and the related user empowerment become increasingly important to support the role of local 6G services.

**4.3** **Strategic Options for 6G as Simple Rules**

Next, we summarize the developed strategic options for the two selected scenarios using the simple rules framework from [Eisenhardt & Sull, 2001] that was applied to characterize MNOs' strategic choices in [Ahokangas et al., 2013].

For the most plausible MNO6.0 scenario, the baseline for building the simple rules is the use of MNOs' wide existing customer base with growing capacity needs, through investments to strengthen customer lock-in and the dominant market position in connectivity, enhanced with customer data and by holding on to spectrum. The goal is to maintain the dominant market position through gaining access to new wideband spectrum. Automation of network operations and the ability to dynamically create large numbers of network slices on demand will help to increase flexibility, shorten time-to-market, and optimize costs. Resources and services will be traded in automated marketplaces. The MNOs could become wholesale platform providers for other operators, which would further strengthen their market position. Regulation plays a key role in maintaining the MNO market dominance, which calls for close contact with the regulator. In the MNO6.0 scenario, the MNOs would never give up their spectrum and customer data.

For the most preferred Sustainable Edge scenario, the simple rules are built on the use of new, local, and specialized demand, challenging incumbent MNOs in narrow business segments by specializing in governmental, municipal, vertical, or enterprise customers, and on vertical differentiation driven by increasing requirements for sustainability in specific industry segments like education, healthcare, and manufacturing. These challenger operators think and act locally, close to the customer, and promote resource sharing in different forms, such as spectrum and virtualized cloud infrastructures. Sustainability requirements in verticals are a major business opportunity, providing vertical differentiation in specific segments like education, healthcare, manufacturing, energy, and media and entertainment. The sustainable edge service provider supports the circular economy and promotes sharing economy principles in network deployment. These locally operated networks have opportunities to scale up from local operations to a multi-locality business.
Local and private networks provide several benefits in terms of security and data control, separation from public networks, access to high-quality services in specific locations, increased flexibility, scalability and customization, and trustworthy reliability and latency. Furthermore, the networks can be deployed as standalone sub-networks or integrated with MNO networks. This requires the establishment of multi-sided platform-based regulations to govern the privacy and security of users.

## 5 Future Outlook and Conclusions

Mobile communication research is increasingly addressing the use of 5G in verticals, which has led to the emergence of local 5G network deployment models. Research on 6G has also started, with a bold goal of building a strong linkage with the United Nations Sustainable Development Goals (UN SDGs). These developments call for a highly multi-disciplinary approach covering the business, regulation and technology perspectives, and our research is addressing these interrelated themes. This paper has provided an overview of the recent developments in 5G in verticals towards the development of sustainable 6G. We have highlighted the importance of the triangle of business, regulation and technology perspectives in the development of new wireless technologies and their deployments, and we have summarized the advancements with a focus on local 5G networks for serving the verticals' needs towards meeting the sustainable development goals.

From the business perspective, a business model for sustainability can help in describing, analyzing, managing and communicating 1) a company's sustainable value proposition to its customers and other stakeholders, 2) how it creates and delivers this value, and 3) how it captures economic value while maintaining or regenerating natural, social, and economic capital beyond its organizational boundaries. The development of new vertical-specific 5G business opportunities calls for filling in the requirements of scalability, replicability, and sustainability in a legitimate way in a platform ecosystem of connectivity and data services. Digital convergence across industries and multi-level 6G platforms and ecosystems will create a complex environment where ecosystemic business models for sustainability and the evolution of the related regulations become important. Business ecosystems that aim to bring together stakeholders to solve systemic sustainability problems will require an open, ecosystem-focused value configuration and a decentralized power configuration, focusing on the long tail of specialized user requirements that crosses a variety of industries. Future research prospects are particularly related to the new business ecosystems, ecosystemic business models and changing stakeholder roles that support sustainability.

From the regulation perspective, the serving of different verticals with 5G and future 6G networks introduces local and often private wireless networks to complement the current mobile network operators (MNOs). The regulatory environment for 5G in verticals is very complex, encompassing rules from both the electronic communications market and the specific verticals. Especially, the way rights to use radio frequencies are granted is critical for the establishment of local 5G and 6G networks. The divergence in spectrum awards between countries is increasing with 5G, directly influencing the business opportunities in those countries. There are research prospects in finding the best practices among these decisions by analyzing their impact.
From the technology perspective, 5G and the future 6G architecture are expected to bring additional modularity and flexibility for traditional MNOs as well as for new local operators in system deployments. Key technologies to enable an open general-purpose 6G architecture include a distributed heterogeneous cloud-native architecture, localization and decomposition of network functions, software defined networking and network virtualization, among others. A critical aspect for the local private industrial networks is their ability to create customized network slices that allow the delivery of services tailored to specific customer needs with service-level-agreed performance on demand. The availability of spectrum for serving the verticals and operations in higher carrier frequencies present a major technical deployment challenge; the availability of spectrum for serving the verticals on a shared basis is important. New research prospects are especially in the 6G domain, in order to find new indicators for 6G that take sustainability into account, as well as the new network architecture that 6G needs.

This study has identified a further need for foresight research that explores the interrelated business, regulation and technology perspectives in the context of 5G in verticals and on the road to sustainable 6G, with a special focus on how 6G can become a truly general-purpose technology instead of simply an enabling technology, to support countries and organizations in the journey towards the achievement of the UN SDGs. Especially, the verticals burdened by increasing requirements for sustainability will be in a key position to realize the benefits of using the new technologies.

## References

1. 6G Flagship. White Papers. Available online: https://www.6gchannel.com/6g-white-papers/ (accessed on 24 July 2020) (2020).
2. Ahokangas, P., Matinmikko, M., Yrjölä, S., Okkonen, H., Casey, T. "Simple rules" for mobile network operators' strategic choices in future cognitive spectrum sharing networks. IEEE Wireless Communications 20(2), 20-26 (2013).
3. Ahokangas, P., Matinmikko-Blue, M., Yrjölä, S., Seppänen, V., Hämmäinen, H., Jurva, R., & Latva-aho, M. Business models for local 5G micro operators. IEEE Transactions on Cognitive Communications and Networking 5(3), 730-740 (2019).
4. Ahokangas, P., Yrjölä, S., Matinmikko-Blue, M., Seppänen, V. Transformation towards 6G ecosystem. In: Proceedings of 2nd 6G Wireless Summit, Levi, Finland (2020a).
5. Ahokangas, P., Matinmikko-Blue, M., Yrjölä, S. & Hämmäinen, H. Future vertical 5G platform ecosystems: Case study of a 5G enabled digitalized port stakeholders' new interactions and value configurations. In: Proceedings of International Telecommunications Society online conference, Gothenburg, Sweden (2020b).
6. Aspara, J., Hietanen, J., Tikkanen, H. Business model innovation vs replication: financial performance implications of strategic emphases. Journal of Strategic Marketing 18(1), 39-56 (2010).
7. Basole, R. C., Karla, J. On the evolution of mobile platform ecosystem structure and strategy. Business & Information Systems Engineering 3(5), 313 (2011).
8. Biloslavo, R., Bagnoli, C., Massaro, M., Cosentino, A. Business model transformation toward sustainability: the impact of legitimation. Management Decision (2020).
9. Dreborg, K.H. Essence of backcasting. Futures 28(9), 813-828 (1996).
10. Eisenhardt, K.M., Sull, D.N. Strategy as simple rules. Harvard Business Review 79(1), 107-116 (2001).
11. Evans, S., Vladimirova, D., Holgado, M., van Fossen, K., Yang, M., Silva, E.A., Barlow, C.Y. Business Model Innovation for Sustainability: Towards a Unified Perspective for Creation of Sustainable Business Models. Bus. Strateg. Environ. 26, 597-608 (2017).
12. Iivari, M., Ahokangas, P., Matinmikko-Blue, M., Yrjölä, S. Opening closed business ecosystems boundaries with digital platforms: empirical case of a port. In: Ziouvelou, X. & McGroarty, F. (Eds.), Emerging Ecosystem-Centric Business Models for Sustainable Value. IGI Global (2020).
13. Kuhlman, T., Farrington, J. What is sustainability? Sustainability 2(11), 3436-3448 (2010).
14. Latva-aho, M., Leppänen, K. (Eds). Key drivers and research challenges for 6G ubiquitous wireless intelligence. 6G Research Visions 1, University of Oulu, Finland (2019).
15. Lemstra, W. Leadership with 5G in Europe: Two contrasting images of the future, with policy and regulatory implications. Telecommunications Policy 42(8), 587-611 (2018).
16. Letaief, K.B., Chen, W., Shi, Y., Zhang, J., Zhang, Y.A. The Roadmap to 6G: AI Empowered Wireless Networks. IEEE Communications Magazine 57(8), 84-90 (2019).
17. Marano, V., Tallman, S., & Teegen, H. J. The liability of disruption. Global Strategy Journal 10(1), 174-209 (2020).
18. Matinmikko, M., Latva-aho, M., Ahokangas, P., Yrjölä, S., Koivumäki, T. Micro operators to boost local service delivery in 5G. Wireless Personal Communications 95(1), 69-82 (2017).
19. Matinmikko, M., Latva-aho, M., Ahokangas, P., & Seppänen, V. On regulations for 5G: Micro licensing for locally operated networks. Telecommunications Policy 42(8), 622-635 (2018).
20. Matinmikko-Blue, M., Yrjölä, S., Seppänen, V., Ahokangas, P., Hämmäinen, H., & Latva-aho, M. Analysis of spectrum valuation elements for local 5G networks: Case study of 3.5 GHz band. IEEE Transactions on Cognitive Communications and Networking 5(3), 741-753 (2019).
21. Matinmikko-Blue, M., Aalto, S., Asghar, M.I., Berndt, H., Chen, Y., Dixit, S., Jurva, R., Karppinen, P., Kekkonen, M., Kinnula, M., Kostakos, P., Lindberg, J., Mutafungwa, E., Ojutkangas, K., Rossi, E., Yrjölä, S., Öörni, A. (Eds). White Paper on 6G Drivers and the UN SDGs. 6G Research Visions 2, University of Oulu, Finland (2020a).
22. Matinmikko-Blue, M., Yrjölä, S., Ahokangas, P. Spectrum Management in the 6G Era: Role of Regulations and Spectrum Sharing. In: Proceedings of 2nd 6G Wireless Summit, Levi, Finland (2020b).
23. Morgado, A., Huq, K. M. S., Mumtaz, S., & Rodriguez, J. A survey of 5G technologies: regulatory, standardization and industrial perspectives. Digital Communications and Networks 4(2), 87-97 (2018).
24. Nielsen, C., & Lund, M. Building scalable business models. MIT Sloan Management Review 59(2), 65-69 (2018).
25. Pouttu, A. (Ed.). 6G White Paper on Validation and Trials for Verticals towards 2030's. 6G Research Visions 4, University of Oulu, Finland (2020).
26. Pujol, F., Elayoubi, S. E., Markendahl, J., Salahaldin, L. Mobile telecommunications ecosystem evolutions with 5G. Communications & Strategies (102), 109 (2016).
27. Saad, W., Bennis, M., Chen, M. A Vision of 6G Wireless Systems: Applications, Trends, Technologies, and Open Research Problems. IEEE Network (2019).
28. Schaltegger, S., Hansen, E.G., Lüdeke-Freund, F. Business models for sustainability: origins, present research, and future avenues. Organization & Environment 29(1), 3-10 (2016).
29. Stevenson, T. Anticipatory action learning: conversations about the future. Futures 34, 417-425 (2012).
30. Schoemaker, P. Scenario Planning: A Tool for Strategic Thinking (1995).
31. Schwartz, P. The Art of the Long View (1991).
32. Stewart, C. Integral scenarios: reframing theory, building from practice. Futures 40, 160-172 (2007).
33. Taleb, T., Aguiar, R. L., Yahia, I. G. B., Chatras, B., Christensen, G., Chunduri, U., Clemm, A., Costa, X., Dong, L., Elmirghani, J., Yosuf, B., Foukas, X., Galis, A., Giordani, M., Gurtov, A., Hecker, A., Huang, C.-W., Jacquenet, C., Kellerer, W., ..., Zorzi, M. White Paper on 6G Networking. 6G Research Visions 6, University of Oulu, Finland (2020).
34. United Nations. Global indicator framework for the Sustainable Development Goals and targets of the 2030 Agenda for Sustainable Development (2018).
35. Viswanathan, H., Mogensen, P.E. Communications in the 6G Era. IEEE Access 8, 57063-57074 (2020).
36. Vuojala, H., Mustonen, M., Chen, X., Kujanpää, K., Ruuska, P., Höyhtyä, M., Matinmikko-Blue, M., Kalliovaara, J., Talmola, P., Nyström, A.-G. Spectrum access options for vertical network service providers in 5G. Telecommunications Policy 44(4), 101903 (2019).
37. Yrjölä, S., Ahokangas, P. & Matinmikko-Blue, M. Novel platform ecosystem business models for future wireless communications services and networks. In: Proceedings of NFF 2019, Vaasa, Finland (2019).
38. Yrjölä, S., Ahokangas, P., Matinmikko-Blue, M. (Eds). White Paper on Business of 6G. 6G Research Visions 3, University of Oulu, Finland (2020a). http://urn.fi/urn:isbn:9789526226767
39. Yrjölä, S. How could Blockchain transform 6G towards open ecosystemic business models? In: Proceedings of IEEE ICC 2020 Workshop on Blockchain for IoT and CPS, Dublin, Ireland (2020).
40. Yrjölä, S., Ahokangas, P., Matinmikko-Blue, M. Sustainability as a Challenge and Driver for Novel Ecosystemic 6G Business Scenarios. Sustainability 12(21) (2020b).
41. Ziegler, V., Yrjölä, S. 6G Indicators of Value and Performance. In: Proceedings of 2nd 6G Wireless Summit, Levi, Finland (2020).
{ "disclaimer": "Notice: The following paper fields have been elided by the publisher: {'references', 'abstract'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1007/978-3-030-73423-7_13?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1007/978-3-030-73423-7_13, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "CLOSED", "url": "" }
2,020
[ "JournalArticle" ]
false
null
[]
11,494
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Mathematics", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/0247616396cf94cb16350019864f3847dad97d32
[ "Computer Science" ]
0.825968
Combining Public Key Encryption with Schnorr Digital Signature
0247616396cf94cb16350019864f3847dad97d32
[ { "authorId": "20843836", "name": "Laura Savu" } ]
{ "alternate_issns": null, "alternate_names": null, "alternate_urls": null, "id": null, "issn": null, "name": null, "type": null, "url": null }
This article presents a new signcryption scheme which is based on the Schnorr digital signature algorithm. The new scheme represents my personal contribution to the signcryption area. I have implemented the algorithm in a program, and the steps of the algorithm, the results and some examples are provided here. The paper also contains the presentation of the original signcryption scheme, based on the ElGamal digital signature, and discusses the practical applications of signcryption in real life. The purpose of the study is to combine public key encryption with the Schnorr digital signature in order to obtain lower computational and communicational costs. The signcryption primitive is a better approach than the Encrypt-then-Sign or Sign-then-Encrypt methods regarding the costs. All these algorithms offer the possibility to transmit a message over an insecure channel providing both authenticity and confidentiality.
**_Journal of Software Engineering and Applications, 2012, 5, 102-108_** http://dx.doi.org/10.4236/jsea.2012.52016 Published Online February 2012 (http://www.SciRP.org/journal/jsea)

# Combining Public Key Encryption with Schnorr Digital Signature

#### Laura Savu

Department of Information Security, Faculty of Mathematics and Computer Science, University of Bucharest, Bucharest, Romania. Email: laura.savu@microsoft.com

Received December 10th, 2011; revised January 14th, 2012; accepted February 7th, 2012

### ABSTRACT

This article presents a new signcryption scheme which is based on the Schnorr digital signature algorithm. The new scheme represents my personal contribution to the signcryption area. I have implemented the algorithm in a program, and the steps of the algorithm, the results and some examples are provided here. The paper also contains the presentation of the original signcryption scheme, based on the ElGamal digital signature, and discusses the practical applications of signcryption in real life. The purpose of the study is to combine public key encryption with the Schnorr digital signature in order to obtain lower computational and communicational costs. The signcryption primitive is a better approach than the Encrypt-then-Sign or Sign-then-Encrypt methods regarding the costs. All these algorithms offer the possibility to transmit a message over an insecure channel providing both authenticity and confidentiality.

**Keywords: Signcryption; Schnorr; Encryption; Digital Signature; Security; Confidentiality; ElGamal; RSA; ECC**

### 1. Introduction

Signcryption is the primitive that was proposed by Yuliang Zheng in 1997; it combines public key encryption with digital signature in a single logical step, obtaining a lower cost for both communication and computation [1]. Data confidentiality and data integrity are two of the most important functions of modern cryptography. Confidentiality can be achieved using encryption algorithms or ciphers, whereas integrity can be provided by the use of authentication techniques. Encryption algorithms fall into one of two broad groups: private key encryption and public key encryption. Likewise, authentication techniques can be categorized into private key authentication algorithms and public key digital signatures. While both private key encryption and private key authentication admit very fast computation with minimal message expansion, public key encryption and digital signatures generally require heavy computation, such as exponentiations involving very large integers, together with message expansion proportional to security parameters (such as the size of a large composite integer or the size of a large finite field).

Signcryption comes with the intention that the primitive should satisfy

Cost(Signature & Encryption) < Cost(Signature) + Cost(Encryption)

This inequality can be interpreted in a number of ways:

- A signcryption scheme should be more computationally efficient than a naive combination of public-key encryption and digital signatures.
- A signcryption scheme should produce a signcryption "ciphertext" which is shorter than a naive combination of a public-key encryption ciphertext and a digital signature.
- A signcryption scheme should provide greater security guarantees and/or greater functionality than a naive combination of public-key encryption and digital signatures [1].

More recently, the significance of signcryption in real-world applications has gained recognition by experts in data security.
Since 2007, a technical committee within the International Organization for Standardization (ISO/IEC JTC 1/SC 27) has been developing an international standard for signcryption techniques [2].

The shared secret key between the parties makes possible an unlimited number of applications. Among these applications, one can first think of the following three:

- Secure and authenticated key establishment,
- Secure multicasting, and
- Authenticated key recovery.

A number of signcryption-based security protocols have been proposed for the aforementioned networks and similar environments. These include:

- Secure ATM networks,
- Secure routing in mobile ad hoc networks,
- Secure voice over IP (VoIP) solutions,
- Encrypted email authentication by firewalls,
- Secure message transmission by proxy, and
- Mobile grid web services.

There are also various applications of signcryption in electronic commerce, where its security properties are very useful. Analyzing this security scheme from an application-oriented point of view, it can be observed that a great amount of electronic commerce can take advantage of signcryption to provide efficient security solutions in the following areas:

- Electronic payment,
- Electronic toll collection systems,
- Authenticated and secured transactions with smart cards, etc.

My personal contribution to the article is represented by the Schnorr Signcryption scheme which is introduced here. The Schnorr Signcryption scheme is made up of a combination between a public key encryption scheme and a digital signature scheme. At the base of the scheme presented here stands the Schnorr digital signature. A Schnorr signature is a digital signature produced by the Schnorr signature algorithm. Its security is based on the intractability of certain discrete logarithm problems. It is considered the simplest digital signature scheme to be provably secure in a random oracle model. It is efficient and generates short signatures.

A signcryption scheme typically consists of five algorithms, Setup, KeyGenS, KeyGenR, Signcrypt, Unsigncrypt:

- Setup takes as input a security parameter 1^k and outputs any common parameters param required by the signcryption scheme. This may include the security parameter 1^k, the description of a group G and a generator g for that group, choices for hash functions or symmetric encryption schemes, etc.
- Key Generation S (KeyGenS) generates a pair of keys for the sender.
- Key Generation R (KeyGenR) generates a pair of keys for the receiver.
- Signcryption (SC) is a probabilistic algorithm.
- Unsigncryption (USC) is a deterministic algorithm.
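As a reading aid, this five-algorithm decomposition can be summarized as a programming interface. The following C# sketch is only illustrative: the paper prescribes no API, and every type and member name below is invented here.

```csharp
using System;

// Illustrative only: the paper specifies five algorithms but no programming
// interface; every name below is an assumption made for this sketch.
public class KeyPair
{
    public byte[] PublicKey;
    public byte[] PrivateKey;
}

public interface ISigncryption
{
    // Setup: derive the common parameters "param" (group description,
    // generator, hash choices, ...) from a security parameter 1^k.
    byte[] Setup(int securityParameter);

    // KeyGenS / KeyGenR: independent key pairs for sender and receiver.
    KeyPair KeyGenSender(byte[] param);
    KeyPair KeyGenReceiver(byte[] param);

    // Signcrypt (SC): probabilistic, so it draws fresh randomness per call.
    byte[] Signcrypt(byte[] param, byte[] skSender, byte[] pkReceiver, byte[] message);

    // Unsigncrypt (USC): deterministic; returns the message, or null to
    // signal the failure symbol.
    byte[] Unsigncrypt(byte[] param, byte[] pkSender, byte[] skReceiver, byte[] ciphertext);
}
```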
A signcryption scheme is a combination between a public key encryption algorithm and a digital signature scheme. A public key encryption scheme consists of three polynomial-time algorithms (EncKeyGen, Encrypt, Decrypt).

**EncKeyGen**—Key generation is a probabilistic algorithm that takes as input a security parameter 1^k and outputs a key pair (skenc, pkenc), written (skenc, pkenc) ← EncKeyGen(1^k). The public encryption key pkenc is widely distributed, while the private decryption key skenc should be kept secret. The public key defines a message space M and a ciphertext space C.

**Encrypt**—Encryption is a probabilistic algorithm that takes a message m ∈ M and the public key pkenc as input and outputs a ciphertext C ∈ C, written C ← Encrypt(pkenc, m).

**Decrypt**—Decryption is a deterministic algorithm that takes a ciphertext C ∈ C and the private key skenc as input and outputs either a message m ∈ M or the failure symbol ⊥, written m ← Decrypt(skenc, C).

The article is structured in seven parts, as follows. Signcryption and the definitions of its properties are contained in the first part; the introduction also presents the practical applications of signcryption in real life. The second part exposes the original signcryption primitive introduced by Yuliang Zheng, which combines public key encryption and a derivation of the ElGamal digital signature algorithm. Part three contains the presentation of the new signcryption scheme, Schnorr Signcryption, as a result of the combination of public key encryption and the Schnorr digital signature algorithm. The step-by-step implementation of the Schnorr Signcryption scheme in a source code program is reflected in the fourth part. Starting with the fifth part begins the analysis of the security models for Schnorr Signcryption. The two-user security model is presented in the sixth part and the multi-user security model is presented in the seventh part. In each of these models there is exposed another classification for security: insider security and outsider security.

### 2. Related Work

#### 2.1. ElGamal Signcryption

The original signcryption scheme introduced by Yuliang Zheng in 1997 is created on a derivation of the ElGamal digital signature standard, combined with a public key encryption scheme. Based on the discrete logarithm problem, the ElGamal signcryption cost is 58% less in average computation time and 70% less in message expansion. Here is the detailed presentation of the five algorithms that make up the ElGamal signcryption scheme.

1) Setup

Signcryption parameters:
p = a large prime number, public to all;
q = a large prime factor of p − 1, public to all;
g = an integer with order q modulo p, in [1, ..., p − 1], public to all;
hash = a one-way hash function;
KH = a keyed one-way hash function, KHk(m) = hash(k, m);
(E, D) = the algorithms which are used for encryption and decryption of a private key cipher.

Alice sends a message to Bob.

2) KeyGen sender

Alice has the pair of keys (Xa, Ya):
Xa = Alice's private key, chosen randomly from [1, ..., q − 1];
Ya = Alice's public key = g^Xa mod p.

3) KeyGen receiver

Bob has the pair of keys (Xb, Yb):
Xb = Bob's private key, chosen randomly from [1, ..., q − 1];
Yb = Bob's public key = g^Xb mod p.

4) Signcryption

In order to signcrypt a message m to Bob, Alice has to accomplish the following operations:
Choose x randomly from [1, ..., q − 1] and calculate k = hash(Yb^x mod p).
Split k into k1 and k2 of appropriate length.
Calculate r = KHk2(m) = hash(k2, m).
Calculate s = x/(r + Xa) mod q, if SDSS1 is used.
Calculate s = x/(1 + Xa·r) mod q, if SDSS2 is used.
Calculate c = Ek1(m) = the encryption of the message m with the key k1.
Alice sends to Bob the values (r, s, c).

5) Unsigncryption

In order to unsigncrypt a message from Alice, Bob has to accomplish the following operations:
Calculate k using r, s, g, p, Ya and Xb:
k = hash((Ya·g^r)^(s·Xb) mod p), if SDSS1 is used;
k = hash((g·Ya^r)^(s·Xb) mod p), if SDSS2 is used.
Split k into k1 and k2 of appropriate length.
Calculate m using the decryption algorithm, m = Dk1(c).
Accept m as a valid message only if KHk2(m) = r.

Using the two schemes SDSS1 and SDSS2, two signcryption schemes have been created, SCS1 and SCS2, respectively. The two signcryption schemes share the same communication overhead, (|hash(*)| + |q|). SCS1 involves one less modular multiplication in signcryption than SCS2; both have a similar computational cost for unsigncryption [1].
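For readers who want to trace the SCS1 flow end to end, here is a self-contained toy round-trip in C#. Everything concrete in it is an assumption made for illustration: the tiny parameters p = 23, q = 11, g = 2, SHA-256/HMAC-SHA256 standing in for hash and KH, and a XOR keystream standing in for the private-key cipher (E, D). It is a sketch of the structure, not Zheng's reference implementation.

```csharp
using System;
using System.Linq;
using System.Numerics;
using System.Security.Cryptography;
using System.Text;

// Toy round-trip of Zheng's SCS1 as described above (.NET 5+). The tiny
// parameters and the hash/cipher instantiations are illustrative assumptions.
class Scs1Demo
{
    static readonly BigInteger p = 23, q = 11, g = 2;   // q | p - 1, ord(g) = q

    static byte[] Hash(BigInteger v) => SHA256.HashData(v.ToByteArray());

    static BigInteger KeyedHashModQ(byte[] k2, byte[] m) // r = KH_k2(m), used mod q
    {
        using var h = new HMACSHA256(k2);
        return new BigInteger(h.ComputeHash(m), isUnsigned: true) % q;
    }

    static byte[] Xor(byte[] key, byte[] data)           // toy stand-in for (E, D)
        => data.Select((b, i) => (byte)(b ^ key[i % key.Length])).ToArray();

    static void Main()
    {
        BigInteger xa = 4, xb = 5;                       // long-term private keys
        BigInteger Ya = BigInteger.ModPow(g, xa, p);     // Ya = g^xa mod p
        BigInteger Yb = BigInteger.ModPow(g, xb, p);     // Yb = g^xb mod p

        // Signcrypt (Alice)
        byte[] msg = Encoding.UTF8.GetBytes("hello Bob");
        BigInteger x = new Random(42).Next(1, (int)q);   // ephemeral secret
        byte[] k = Hash(BigInteger.ModPow(Yb, x, p));    // k = hash(Yb^x mod p)
        byte[] k1 = k.Take(16).ToArray(), k2 = k.Skip(16).ToArray();
        BigInteger r = KeyedHashModQ(k2, msg);
        // s = x / (r + xa) mod q; the inverse is taken by Fermat since q is
        // prime. (The negligible corner case r + xa = 0 mod q is ignored.)
        BigInteger s = x * BigInteger.ModPow((r + xa) % q, q - 2, q) % q;
        byte[] c = Xor(k1, msg);                         // Alice sends (r, s, c)

        // Unsigncrypt (Bob): k = hash((Ya * g^r)^(s*xb) mod p)
        BigInteger t = Ya * BigInteger.ModPow(g, r, p) % p;
        byte[] kb = Hash(BigInteger.ModPow(t, s * xb % q, p));
        byte[] m2 = Xor(kb.Take(16).ToArray(), c);
        bool ok = KeyedHashModQ(kb.Skip(16).ToArray(), m2) == r;
        Console.WriteLine($"recovered '{Encoding.UTF8.GetString(m2)}', valid = {ok}");
    }
}
```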
Accept m as a valid message only if KHk2(m) = r. Using the two schemes SDSS1 and SDSS2, two signcryption schemes have been created, SCS1 and SCS2, respectively. The two signcryption schemes share the same communication overhead, (|hash(*)| + |q|). SCS1 involves one less modular multiplication in signcryption then SCS2, both have a similar computational cost for unsigncryption [1]. #### 2.2. Rsa Signcryption Rivest introduced for the first time in 1978 the publickey encryption scheme and digital signature scheme [3]. The RSA transform has been the basis of dozens of public-key encryption schemes and digital signature schemes, which have proven to be very successful and have been very widely deployed in industry. They are widely used in the design of public-key encryption and digital signature schemes. The RSA transform was introduced by Rivest, Shamir, and Adleman in 1978 [3]. The exact definition of the problem depends upon the distribution from which the two prime numbers p and q are drawn. For our purposes, this is defined by a probabilistic, polynomial-time RSA parameter generation algorithm RSAGen, which takes as in put a security parameter 1^ k and outputs two primes (p, q) with the property that N = pq is a k-bit integer [4]. **Signcrypt (** _fS_ 1, _fR m,_ ) Bind pkS||pkR r  0,1 _d_ | |m c  H (bind, m||r) d  m||r w  c s  G (bind, c) ○ d C  fR ( _fS_ []1 **(w||s))** Return C **Unsigncrypt (** _fS fR,_ 1,C ) Bind  pkS||pkR (w||s)  fS ( _fR1,C_ ) m||r  G (bind, w) © s If H (bind, m||r) = w, return m Else return ⊥ #### 2.3. Elliptic Curve Cryptography Signcryption [The first signcryption scheme was introduced by Yuliang](http://en.wikipedia.org/wiki/Yuliang_Zheng) [Zheng in 1997 [1]. Zheng also proposed an elliptic curve-](http://en.wikipedia.org/wiki/Yuliang_Zheng) based signcryption scheme that saves 58% of computational and 40% of communication costs when it is compared with the traditional elliptic curve-based signature[then-encryption schemes [5].](http://en.wikipedia.org/wiki/Signcryption#cite_note-1) Here is presented the scheme for an elliptic curve based signcryption algorithm introduced by Mohsen Toorani and Ali Asghar Beheshti Shirazi in [6]. **Signcryption (Alice)** Choosing r in [1, n − 1] R = rG = (xR, yR) K = rU = (xK, yK) s = r[]1 (H (M) + xRdA) (mod n) e = H (M||s) C = (M||e) © xK **Unsigncryption (Bob)** K = dB R = (xK, yK) (M||e’) = C © xK e’ = H(M||s) If e <> e’ then rejects M’ Else u = s[]1 H(M) v = s[]1 xR uG + vU = (x’R, y’R) Signature verification: Is xR = x’R ? The elliptic curve-based schemes are usually based on difficulty of Elliptic Curve Discrete Logarithm Problem (ECDLP) that is computationally infeasible under certain circumstances [7]. The elliptic curve-based systems can attain to a desired security level with significantly smaller keys than those of required by their exponential-based C i h © 2012 S iR **_JSEA_** ----- Combining Public Key Encryption with Schnorr Digital Signature 105 counterparts. This can enhance the speed and leads to efficient use of power, bandwidth, and storage that are the basic limitations of resource-constrained devices [8]. Throughout the years, there have been proposed many other signcryption schemes, each with its own problems and limitations, while offering different level of security services and computational costs. ### 3. 
### 3. Implementation of the New Signcryption Scheme

A Schnorr signature is a digital signature produced by the Schnorr signature algorithm. Its security is based on the intractability of certain discrete logarithm problems. It is considered the simplest digital signature scheme to be provably secure in a random oracle model [9].

**_Choosing parameters_**

All users of the signature scheme agree on a group G with generator g of prime order q in which the discrete log problem is hard.

**_Key generation_**

Choose a private signing key x. The public verification key is y = g^x.

**_Signing_**

To sign a message M:
Choose a random k.
Let r = g^k.
Let e = H(M||r), where || denotes concatenation and r is represented as a bit string. H is a cryptographic hash function H : {0,1}* → Zq.
Let s = k − x·e.
The signature is the pair (s, e).

**_Verifying_**

Let rv = g^s·y^e.
Let ev = H(M||rv).
If ev = e, then the signature is verified.

**_Demonstration of correctness_**

It can be observed that ev = e if the signed message equals the verified message:
rv = g^s·y^e = g^(k − x·e)·g^(x·e) = g^k = r,
and hence ev = H(M||rv) = H(M||r) = e. It has been considered that k < q, and the assumption is that the hash function is collision-resistant.

Public elements: G, g, q, y, s, e, r. Private elements: k, x. [10]

A Schnorr Signcryption scheme is based on the Schnorr digital signature algorithm. Here is the detailed presentation of the five algorithms that make up the Schnorr signcryption scheme.

1) Setup

Schnorr Signcryption parameters:
p = a large prime number, public to all;
q = a large prime factor of p − 1, public to all;
g = an integer with order q modulo p, in [1, ..., p − 1], public to all;
hash = a one-way hash function;
KH = a keyed one-way hash function, KHk(m) = hash(k, m);
(E, D) = the algorithms which are used for encryption and decryption of a private key cipher.

Alice sends a message to Bob.

2) KeyGen sender

Alice has the pair of keys (Xa, Ya):
Xa = Alice's private key, chosen randomly from [1, ..., q − 1];
Ya = Alice's public key = g^(−Xa) mod p.

3) KeyGen receiver

Bob has the pair of keys (Xb, Yb):
Xb = Bob's private key, chosen randomly from [1, ..., q − 1];
Yb = Bob's public key = g^(−Xb) mod p.

4) Signcryption

In order to signcrypt a message m to Bob, Alice has to accomplish the following operations:
Choose x randomly from [1, ..., q − 1] and calculate k = hash(Yb^x mod p).
Split k into k1 and k2 of appropriate length.
Calculate r = KHk2(m) = hash(k2, m).
Calculate s = x + (r·Xa) mod q.
Calculate c = Ek1(m) = the encryption of the message m with the key k1.
Alice sends to Bob the values (r, s, c).

5) Unsigncryption

In order to unsigncrypt a message from Alice, Bob has to accomplish the following operations:
Calculate k using r, s, g, p, Ya and Xb:
k = hash(((g^s·Ya^r)^Xb)^(−1) mod p),
where the final modular inversion is the one performed in the appendix implementation; it makes Bob's value agree with Alice's k = hash(Yb^x mod p).
Split k into k1 and k2 of appropriate length.
Calculate m using the decryption algorithm, m = Dk1(c).
Accept m as a valid message only if KHk2(m) = r.
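The following self-contained C# program walks once through the scheme just described, using the parameters of the example in Section 5 (p = 23, q = 11, g = 2, Xa = 4, Xb = 5). As in the earlier sketch, SHA-256, HMAC-SHA256 and a XOR keystream are stand-ins chosen here for hash, KH and (E, D), which the paper does not fix, and the receiver-side modular inversion mirrors the appendix implementation.

```csharp
using System;
using System.Linq;
using System.Numerics;
using System.Security.Cryptography;
using System.Text;

// Toy round-trip of the Schnorr signcryption scheme above (.NET 5+), keeping
// the paper's key convention Y = g^(-X) mod p. Illustrative assumptions only.
class SchnorrSigncryptDemo
{
    static readonly BigInteger p = 23, q = 11, g = 2;

    static BigInteger InvModP(BigInteger v) => BigInteger.ModPow(v % p, p - 2, p); // p prime

    static byte[] Hash(BigInteger v) => SHA256.HashData(v.ToByteArray());

    static BigInteger KeyedHashModQ(byte[] k2, byte[] m)
    {
        using var h = new HMACSHA256(k2);
        return new BigInteger(h.ComputeHash(m), isUnsigned: true) % q;
    }

    static byte[] Xor(byte[] key, byte[] data)
        => data.Select((b, i) => (byte)(b ^ key[i % key.Length])).ToArray();

    static void Main()
    {
        BigInteger xa = 4, xb = 5;
        BigInteger Ya = InvModP(BigInteger.ModPow(g, xa, p)); // Ya = g^(-xa) mod p = 13
        BigInteger Yb = InvModP(BigInteger.ModPow(g, xb, p)); // Yb = g^(-xb) mod p = 18

        // Signcryption (Alice)
        byte[] msg = Encoding.UTF8.GetBytes("hello Bob");
        BigInteger x = 3;                                     // ephemeral; fixed to mirror the example
        byte[] k = Hash(BigInteger.ModPow(Yb, x, p));         // k = hash(Yb^x mod p), here hash(13)
        byte[] k1 = k.Take(16).ToArray(), k2 = k.Skip(16).ToArray();
        BigInteger r = KeyedHashModQ(k2, msg);                // r = KH_k2(m), reduced mod q here
        BigInteger s = (x + r * xa) % q;                      // s = x + r*Xa mod q; no inverse needed
        byte[] c = Xor(k1, msg);                              // Alice sends (r, s, c)

        // Unsigncryption (Bob): k = hash(((g^s * Ya^r)^Xb)^(-1) mod p)
        BigInteger t = BigInteger.ModPow(g, s, p) * BigInteger.ModPow(Ya, r, p) % p;
        byte[] kb = Hash(InvModP(BigInteger.ModPow(t, xb, p)));
        byte[] m2 = Xor(kb.Take(16).ToArray(), c);
        bool ok = KeyedHashModQ(kb.Skip(16).ToArray(), m2) == r;
        Console.WriteLine($"recovered '{Encoding.UTF8.GetString(m2)}', valid = {ok}");
    }
}
```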
Analyzing the two presented signcryption schemes, it can be observed that in the case of Schnorr signcryption the computation of s, which is s = x + (r·Xa) mod q, is less costly than the formula used in the ElGamal-based algorithm, where s = x/(r + Xa) mod q. Another difference is at the level of the unsigncryption step, as k is computed differently: k = hash(((g^s·Ya^r)^Xb)^(−1) mod p) for Schnorr, and k = hash((Ya·g^r)^(s·Xb) mod p) for ElGamal.

### 4. Security Models for the Schnorr Signcryption Scheme

The first attempt to produce security models for signcryption was given by Steinfeld and Zheng [11]. A family of security models for signcryption in both two-user and multi-user settings was presented by An [12] in their work on signcryption schemes built from black-box signature and encryption schemes. Defining the security of signcryption in the public-key setting is more involved than the corresponding task in the symmetric setting [13], due to the asymmetric nature of the former. The asymmetry of keys makes a difference in the notions of both authenticity and privacy on two major fronts, which are addressed in this chapter. The first difference for Schnorr signcryption is that the security of the signcryption needs to be defined in the multi-user setting, where issues with users' identities need to be addressed. On the other hand, authenticated encryption in the symmetric setting can be fully defined in a much simpler two-user setting. The case of Schnorr settings not only makes a difference between the multi-user and two-user settings but also makes a difference in the adversary's position depending on its knowledge of the keys. There are two definitions for the security of signcryption depending on whether the adversary is an "outsider" (a third party who only knows the public information) or an "insider" (a legal user of the network, either the sender or the receiver, or someone that knows the secret key of either the sender or the receiver). In the first case the security model is named "outsider security" and in the latter "insider security".

#### 4.1. Two-User Security Model

In the symmetric setting, there is only one specific pair of users who
1) share a single key;
2) trust each other;
3) "know who they are";
4) only care about being protected from "the rest of the world."

In contrast, in the Schnorr signcryption setting, each user independently publishes its public keys, after which it can send/receive messages to/from any other user. In particular,
1) each user should have an explicit identity (associated with its public key);
2) each signcryption has to explicitly contain the (presumed) identities of the sender S and the receiver R;
3) each user should be protected from every other user.

The security goal is to provide both authenticity and privacy of the communicated data. In the symmetric setting, since the sender and the receiver share the same secret key, the only security model that makes sense is one in which the adversary is modeled as a third party or an outsider who does not know the shared secret key. For the Schnorr signcryption setting, the sender and the receiver do not share the same secret key but each has his/her own secret key. Due to this asymmetry of the secret keys, the data needs to be protected not only from an outsider but also from an insider who is a legal user of the system (the sender or the receiver themselves, or someone who knows either the sender's secret key or the receiver's secret key) [4].
#### 4.2. Multi-User Security Model

A central difference between the multi-user model and the two-user model is the extra power of the adversary. In the multi-user model, the attacker may choose receiver (resp. sender) public keys when accessing the attacked users' signcryption (resp. unsigncryption) oracles. For signcryption schemes that share some functionality between the signature and the encryption components, as is the case for Zheng's Signcryption scheme and the Schnorr Signcryption scheme, the extra power of the adversary in the multi-user model may be much more significant, and a careful case-by-case analysis is required to establish the security of such schemes in the multi-user model. As in the two-user setting, the multi-user setting also has two types of models depending on the identity of the attacker: an insider model and an outsider model.

### 5. Experimental Results

Here is provided an example from the execution of the program on small numbers.

Example: p = 23, q = 11, g = 2, x = 3
XA = 4 => YA = 13
XB = 5 => YB = 18
k = 13 => hash(k) = vTB6PsMp4Qos/4+4dICCPaEU+PQ=
k1 = vTB6PsMp4Qos/w==
k2 = j7h0gII9oRT49A==
hash(k2, m) = E2726583242AB5CCE58AE1151DB126208F17932F
hash(k2, m) in base 10 = 1292783042124763369608714420962730428414981280559
(hash(k2, m) in base 10) mod p = 3
s = x + (r·XA) mod q = (3 + 3·4) mod 11 = 4
Unsigncrypt: k = 13

The key values of this example can be checked mechanically; see the sketch after Table 1.

In Table 1 is presented the cost evaluation for the signature generation and verification in the ElGamal and Schnorr signcryption schemes. The improvement in cost achieved by the proposed scheme is important, as at this step it is not necessary to calculate the modular inverse.

- Texp: the time for a modular exponentiation computation.
- Tm: the time for a modular multiplication computation.
- Tinv: the time for a modular inverse computation.
- Th: the time for a one-way hash function f(·) computation.

**Table 1. The comparison between the proposed Schnorr Signcryption scheme and the initial Yuliang Zheng Signcryption scheme.**

| | The Proposed Schnorr Signcryption Scheme | The Initial Yuliang Zheng Signcryption Scheme |
| --- | --- | --- |
| Computation cost for signature generation | Th + Tm | Th + Tm + Tinv |
| Computation cost for verifying converted signature | Th + Tm + Texp | Th + Tm + Tinv + Texp |
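The following small C# fragment re-derives the group-element values of the example above. The base64 and hex quantities (hash(k), k1, k2 and hash(k2, m)) depend on the exact hash functions and encodings of the original program, so only YA, YB, k and s are checked here, with r = 3 taken from the example.

```csharp
using System;
using System.Numerics;

// Mechanical check of the small-number example in Section 5 (r = 3 is taken
// from the example; hash-dependent quantities are not reproduced).
class ExampleCheck
{
    static void Main()
    {
        BigInteger p = 23, q = 11, g = 2, x = 3, xa = 4, xb = 5, r = 3;
        BigInteger Inv(BigInteger v) => BigInteger.ModPow(v, p - 2, p); // p prime

        BigInteger Ya = Inv(BigInteger.ModPow(g, xa, p)); // 13, matching YA above
        BigInteger Yb = Inv(BigInteger.ModPow(g, xb, p)); // 18, matching YB above
        BigInteger k  = BigInteger.ModPow(Yb, x, p);      // 13, matching k above
        BigInteger s  = (x + r * xa) % q;                 // (3 + 3*4) mod 11 = 4

        Console.WriteLine($"Ya={Ya} Yb={Yb} k={k} s={s}"); // Ya=13 Yb=18 k=13 s=4
    }
}
```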
Dent and Y. L. Zheng, “Practical Signcryption, a Volume in Information Security and Cryptography,” SpringerVerlag, Berlin, 2010. [5] Y. Zheng and H. Imai, “How to Construct Efficient Signcryption Schemes on Elliptic Curves,” Information Proc_essing Letters, Vol. 68, No. 5, 1998, pp. 227-233._ ### Appendix [doi:10.1016/S0020-0190(98)00167-7](http://dx.doi.org/10.1016/S0020-0190(98)00167-7) [6] M. Toorani and A. A. B. Shirazi, “Cryptanalysis of an Elliptic Curve-Based Signcryption Scheme,” International _Journal of Network Security, Vol. 10, No. 1, 2010, pp._ 51-56. [7] D. Hankerson, A. Menezes and S. Vanstone, “Guide to Elliptic Curve Cryptography,” Springer-Verlag, New York, 2004. [8] M. Toorani and A. A. B. Shirazi, “LPKI—A Lightweight Public Key Infrastructure for the Mobile Environments,” _Proceedings of the 11th IEEE International Conference_ _on Communication Systems, Guangzhou, 19-21 Novem-_ ber 2008, pp. 162-166. [9] C. P. Schnorr, “Efficient Identification and Signatures for Smart Cards,” In: G. Brassard, Ed., Advances in Cryptol_ogy—Crypto’89,_ _Lecture Notes in Computer Science_ _No_ 435, Springer-Verlag, 1990. pp. 239-252. [10] C.-P. Schnorr, “Efficient Signature Generation by Smart Cards,” _Journal of Cryptology, Vol. 4, No. 3, 1991, pp._ [161-174. doi:10.1007/BF00196725](http://dx.doi.org/10.1007/BF00196725) [11] R. Steinfeld and Y. Zheng, “A Signcryption Scheme Based on Integer Factorization,” In: J. Pieprzyk, E. Okamoto and J. Seberry, Eds., _Information Security Work-_ _shop, Lecture Notes in Computer Science, Vol. 1975,_ Springer, Berlin, 2000, pp. 308-322. [12] J. H. An, Y. Dodis and T. Rabin, “On the Security of Joint Signatures and Encryption,” In: L. Knudsen, Ed., _Ad-_ _vances in Cryptology—Eurocrypt 2002, Lecture Notes in_ _Computer Science, Vol. 2332, Springer, Berlin, 2002, pp._ 83-107. [13] M. Bellare and C. Namprempre, “Authenticated Encryption: Relations among Notions and Analysis of the Generic Composition Paradigm,” In: T. Okamoto, Ed., _Ad-_ _vances in Cryptology—Asiacrypt 2000,_ _Lecture Notes in_ _Computer Science, Vol. 1976, Springer, Berlin, 2000, pp._ 531-545. I created a source code program that verifies my algorithm. Executing this program I could generate examples. The step-by-step implementation of the algorithm is as follows: 1) Calculate Ya and Yb double powA = Math.Pow(g, xA); int pow_intA = Convert.ToInt32(powA); C i h © 2012 S iR **_JSEA_** ----- 108 Combining Public Key Encryption with Schnorr Digital Signature int invA = modInverse(pow_intA, p); 2) Calculate k int yB = Convert.ToInt32(textBox11.Text); int x = Convert.ToInt32(textBox18.Text); int p = Convert.ToInt32(textBox4.Text); string cheie = (BigInteger.ModPow(yB, x, p)). 
ToString(); 3) Calculate hash(k) string HashDeCheie = _calculateHash(cheie); textBox13.Text = HashDeCheie; 4) Split k in two keys k1 and k2 with the same lenght byte[] k = Convert.FromBase64String(textBox13.Text); byte[] k1 = new byte[k.Length/2]; byte[] k2 = new byte[k.Length - k.Length/2]; Buffer.BlockCopy(k, 0, k1, 0, k.Length/2); Buffer.BlockCopy(k, k.Length/2, k2, 0, k.Length - k.Length/2); byte[] test = new byte[k.Length]; k1.CopyTo(test, 0); k2.CopyTo(test, k1.Length); 5) Calculate r using k2; r = hash (k2, m) BigInteger p = BigInteger.Parse(textBox4.Text); System.Text.ASCIIEncoding encoding = new System.Text.ASCIIEncoding(); byte[] keyByte = encoding.GetBytes(key); HMACSHA1 hmacsha1 = new HMACSHA1(keyByte); byte[] messageBytes =encoding.GetBytes(message); byte[] hashmessage = hmacsha1.ComputeHash(messageBytes); 6) Calculate r using k2; transform the value obtained from hash in base 10 textBox19.Text = fn16to10(textBox15.Text).ToIntString(); 7) Calculate the modulo p of the number obtained in base 10 BigInteger nr = BigInteger.Parse(textBox19.Text); BigInteger p = BigInteger.Parse(textBox4.Text); BigInteger rest = 0; BigInteger.DivRem(nr, p, out rest); 8) Calculate s BigInteger q = Convert.ToInt32(textBox5.Text); BigInteger r = Convert.ToInt32(textBox20.Text); BigInteger XA = Convert.ToInt32(textBox9.Text); BigInteger X = Convert.ToInt32(textBox18.Text); BigInteger prod = BigInteger.Multiply(r, XA); BigInteger sum = X + prod; BigInteger rest; BigInteger.DivRem(sum, q, out rest); 9) Encrypt m using the k1 10) Calculate k BigInteger rez2 = BigInteger.Pow(rez1, XB); B igInteger invK = modInverseBI(rez2, p) C i h © 2012 S iR **_JSEA_** -----
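For convenience, the fragments above can be consolidated into a self-contained program. The following C# sketch is our illustration, not code from the paper: the `ModInverse` helper (computed via Fermat's little theorem, valid because p is prime) stands in for the program's `modInverse` routine, and r is taken directly from the Section 5 example rather than recomputed from the hash. It reproduces the numeric values listed above.

```csharp
using System;
using System.Numerics;

class SigncryptionExample
{
    // Assumed helper: modular inverse via Fermat's little theorem (valid because p is prime).
    static BigInteger ModInverse(BigInteger a, BigInteger p) =>
        BigInteger.ModPow(a, p - 2, p);

    static void Main()
    {
        BigInteger p = 23, q = 11, g = 2;
        BigInteger xA = 4, xB = 5;  // long-term private keys of A and B
        BigInteger x = 3;           // sender's secret exponent
        BigInteger r = 3;           // r = hash(k2, m) mod p, taken from the Section 5 example

        // Public keys: y = (g^x)^(-1) mod p, matching step 1 of the appendix.
        BigInteger yA = ModInverse(BigInteger.ModPow(g, xA, p), p); // 13
        BigInteger yB = ModInverse(BigInteger.ModPow(g, xB, p), p); // 18

        // Shared key k = yB^x mod p, matching step 2 of the appendix.
        BigInteger k = BigInteger.ModPow(yB, x, p);                 // 13

        // Signature component s = (x + r * xA) mod q, matching step 8 of the appendix.
        BigInteger s = (x + r * xA) % q;                            // 4

        Console.WriteLine($"yA = {yA}, yB = {yB}, k = {k}, s = {s}");
        // Expected output: yA = 13, yB = 18, k = 13, s = 4
    }
}
```

Running the sketch prints exactly the values of the Section 5 example, confirming that the public keys are modular inverses of g^x mod p and that computing s itself requires no modular inverse, which is the source of the cost saving in Table 1.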
{ "disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.4236/JSEA.2012.52016?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.4236/JSEA.2012.52016, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "CCBY", "status": "HYBRID", "url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=17484" }
2012
[]
true
2012-02-27T00:00:00
[ { "paperId": "0bb7d3807f7a7066a85c297bd258da37d2defdfb", "title": "Practical Signcryption" }, { "paperId": "377182c155e985538ad5caddf758e936c2de78c8", "title": "Cryptanalysis of an Elliptic Curve-based Signcryption Scheme" }, { "paperId": "091b7275abdb297ff7aedfa46e710352fcd21a0d", "title": "LPKI - A lightweight public key Infrastructure for the mobile environments" }, { "paperId": "80b388f07313e609a0fbd7dadfbadc69ae3b653e", "title": "Protection" }, { "paperId": "3b4db693363eba0e4a7bc6903129841d6e1b3c93", "title": "The International Organization for Standardization." }, { "paperId": "e328960b8e36e96a163f852abcda0e48949824a6", "title": "A Signcryption Scheme Based on Integer Factorization" }, { "paperId": "512e08451eb0d805c77b86e5821560f3b7dec565", "title": "Authenticated Encryption: Relations among Notions and Analysis of the Generic Composition Paradigm" }, { "paperId": "bc533d2f27381d81d8e0cd3f445c54556e938816", "title": "The State of Elliptic Curve Cryptography" }, { "paperId": "d21cde8bc0b11caef550a3e0e5b4cb2077347570", "title": "How to Construct Efficient Signcryption Schemes on Elliptic Curves" }, { "paperId": "072e8123e534331625f52111cb5b7c0441bee8aa", "title": "Digital Signcryption or How to Achieve Cost(Signature & Encryption) << Cost(Signature) + Cost(Encryption)" }, { "paperId": "8d69c06d48b618a090dd19185aea7a13def894a5", "title": "Efficient Identification and Signatures for Smart Cards (Abstract)" }, { "paperId": "2642738e9977c08d4085ce1c6530d63545383d30", "title": "Efficient signature generation by smart cards" }, { "paperId": null, "title": "BigInteger prod = BigInteger.Multiply(r, XA)" }, { "paperId": null, "title": "Calculate k BigInteger rez2 = BigInteger.Pow(rez1" }, { "paperId": null, "title": "BigInteger sum = X + prod" }, { "paperId": null, "title": "Calculate k int yB = Convert.ToInt32(textBox11" }, { "paperId": null, "title": "BlockCopy(k, 0, k1, 0, k.Length/2)" }, { "paperId": null, "title": "string cheie = (BigInteger.ModPow(yB, x, p))" }, { "paperId": null, "title": "Encrypt m using the k1" }, { "paperId": null, "title": "DivRem(nr, p, out rest)" }, { "paperId": null, "title": "B igInteger invK = modInverseBI(rez2, p)" }, { "paperId": null, "title": "On the Security of Joint Signatures and Encryption" }, { "paperId": null, "title": "ASCIIEncoding encoding = new System" }, { "paperId": null, "title": "Calculate r using k2" } ]
8447
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/024aa4597ede5823b301593385cb892df14180da
[ "Computer Science" ]
0.895508
Evaluating Countermeasures for Verifying the Integrity of Ethereum Smart Contract Applications
024aa4597ede5823b301593385cb892df14180da
IEEE Access
[ { "authorId": "2117668992", "name": "Suhwan Ji" }, { "authorId": "2111404615", "name": "Dohyung Kim" }, { "authorId": "2050475", "name": "Hyeonseung Im" } ]
{ "alternate_issns": null, "alternate_names": null, "alternate_urls": [ "http://ieeexplore.ieee.org/servlet/opac?punumber=6287639" ], "id": "2633f5b2-c15c-49fe-80f5-07523e770c26", "issn": "2169-3536", "name": "IEEE Access", "type": "journal", "url": "http://www.ieee.org/publications_standards/publications/ieee_access.html" }
Blockchain technology, which provides digital security in a distributed manner, has evolved into a key technology that can build efficient and reliable decentralized applications (called DApps) beyond the function of cryptocurrency. The characteristics of blockchain such as immutability and openness, however, have made DApps more vulnerable to various security risks, and thus it has become of great significance to validate the integrity of DApps before they actually operate upon blockchain. Recently, research on vulnerability in smart contracts (a building block of DApps) has been actively conducted, and various vulnerabilities and their countermeasures were reported. However, the effectiveness of such countermeasures has not been studied well, and no appropriate methods have been proposed to evaluate them. In this paper, we propose a software tool that can easily perform comparative studies by adding existing/new countermeasures and labeled smart contract codes. The proposed tool demonstrates verification performance using various statistical indicators, which helps to identify the most effective countermeasures for each type of vulnerability. Using the proposed tool, we evaluated state-of-the-art countermeasures with 237 labeled benchmark codes. The results indicate that for certain types of vulnerabilities, some countermeasures show evenly good performance scores on various metrics. However, it is also observed that countermeasures that detect the largest number of vulnerable codes typically generate much more false positives, resulting in very low precision and accuracy. Consequently, under given constraints, different countermeasures may be recommended for detecting vulnerabilities of interest. We believe that the proposed tool could effectively be utilized for a future verification study of smart contract applications and contribute to the development of practical and secure smart contract applications.
Received June 2, 2021, accepted June 16, 2021, date of publication June 21, 2021, date of current version June 30, 2021. _Digital Object Identifier 10.1109/ACCESS.2021.3091317_ # Evaluating Countermeasures for Verifying the Integrity of Ethereum Smart Contract Applications SUHWAN JI 1, DOHYUNG KIM 1,2, AND HYEONSEUNG IM 1,2 1Interdisciplinary Graduate Program in Medical Bigdata Convergence, Kangwon National University, Chuncheon 24341, South Korea 2Department of Computer Science and Engineering, Kangwon National University, Chuncheon 24341, South Korea Corresponding authors: Dohyung Kim (d.kim@kangwon.ac.kr) and Hyeonseung Im (hsim@kangwon.ac.kr) This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2019R1F1A1063272, 2020R1F1A1048395, and 2020R1A4A3079947). **ABSTRACT Blockchain technology, which provides digital security in a distributed manner, has evolved into** a key technology that can build efficient and reliable decentralized applications (called DApps) beyond the function of cryptocurrency. The characteristics of blockchain such as immutability and openness, however, have made DApps more vulnerable to various security risks, and thus it has become of great significance to validate the integrity of DApps before they actually operate upon blockchain. Recently, research on vulnerability in smart contracts (a building block of DApps) has been actively conducted, and various vulnerabilities and their countermeasures were reported. However, the effectiveness of such countermeasures has not been studied well, and no appropriate methods have been proposed to evaluate them. In this paper, we propose a software tool that can easily perform comparative studies by adding existing/new countermeasures and labeled smart contract codes. The proposed tool demonstrates verification performance using various statistical indicators, which helps to identify the most effective countermeasures for each type of vulnerability. Using the proposed tool, we evaluated state-of-the-art countermeasures with 237 labeled benchmark codes. The results indicate that for certain types of vulnerabilities, some countermeasures show evenly good performance scores on various metrics. However, it is also observed that countermeasures that detect the largest number of vulnerable codes typically generate much more false positives, resulting in very low precision and accuracy. Consequently, under given constraints, different countermeasures may be recommended for detecting vulnerabilities of interest. We believe that the proposed tool could effectively be utilized for a future verification study of smart contract applications and contribute to the development of practical and secure smart contract applications. **INDEX TERMS Blockchain, countermeasure, Ethereum, smart contract, vulnerability.** **I. INTRODUCTION** Since Bitcoin [1], which was designed using blockchain, was introduced, blockchain technology has evolved and interests in its applications have greatly been increasing. With the ability to provide digital security in a distributed manner, blockchain has been used to develop a variety of decentralized applications across the industry. In particular, such decentralized applications, called DApps, often operate upon the Ethereum Virtual Machine (EVM), and they are built using smart contracts which are a piece of code that enables DApps to interact with the underlying Ethereum blockchain [2]. 
Despite the great advantages of using blockchain technology, however, it has ironically been revealed that smart contracts are vulnerable to various security risks due to the blockchain's essential features, such as transparency and immutability [3]–[6]. For example, if a smart contract incurs a wrong transaction (by mistake or through malicious attacks) and the result is once written to the blockchain, then the transaction can hardly be corrected. Rather, the blockchain should be destroyed (or hardforked). For that reason, it is of considerable importance to test the integrity and safety of smart contract applications before they are actually used in conjunction with the blockchain. As a result of recent research on vulnerabilities in smart contracts, representative vulnerabilities were introduced [7], [8], and various countermeasures were proposed [5], [6], [8]–[10]. However, the effectiveness of the countermeasures has not been properly studied. Most performance evaluations have been performed using unlabeled data. Hence, their performance comparisons are often not conclusive: when using unlabeled data, even if a countermeasure detected some vulnerabilities, it is not clear whether they were actual bugs or false positives, nor how many vulnerabilities each countermeasure missed. For example, recently, the authors of [10] used 47,518 smart contracts for comparative studies, but reported the test results without confirming that vulnerabilities actually exist in the smart contracts. Only 69 contracts having 115 vulnerabilities in total were used as labeled data, so it remains unclear which countermeasures are most effective for each type of vulnerability. Besides, a comparative study itself can be a cumbersome and time-consuming task, since different countermeasures may require different environments to run, and their analysis results, produced in different formats, must additionally be arranged. In this paper, a new software tool is designed to facilitate validation of existing/new countermeasures, into which the user can easily add new countermeasures and labeled benchmark datasets. In particular, it automatically analyzes the results of executing the countermeasures on the available benchmark datasets, and shows their performance using tables and graphs under various performance measures to facilitate easy comparison. To this end, the proposed tool is implemented using OS-level virtualization and operates within a Docker container, allowing it to operate independently of the underlying system and eliminating the need for the user to perform separate installation/execution for each countermeasure. As a result, new countermeasures (as well as labeled benchmark smart contract codes) can easily be included and evaluated in the proposed tool, and their performance can effectively be cross-checked using various metrics. Using the proposed tool and 237 labeled smart contract codes, we evaluate the representative existing countermeasures in the literature. The evaluation results show that, in general, the countermeasures identifying more vulnerable code produce a much larger number of false positives, resulting in very low precision and accuracy. The effectiveness of countermeasures against 'Access Control', 'Denial of Service', and 'Front-Running' is questionable. The F1-scores of all countermeasures are less than 25%.
Vulnerable codes with 'Integer Overflow/Underflow' and 'Timestamp Dependence' can be completely detected. However, the performance of the countermeasures needs to be further improved in order to reduce false positives. As for the vulnerabilities of 'Reentrancy' and 'Unchecked Low Level Call', we confirm that there are effective countermeasures that show both high precision and high recall values. We believe that the proposed tool will contribute to a future verification study of smart contracts and the development of practical and secure smart contract applications. The main contributions are summarized as follows:
- The nine representative vulnerabilities discussed in the Decentralized Application Security Project (or DASP) Top 10 of 2018 [7] and their state-of-the-art countermeasures are revisited.
- The limitations of the current countermeasures are discussed from the perspective of practicality, and an effective software tool that can evaluate the performance of the countermeasures with great convenience is designed.
- The proposed tool is implemented using an OS-level virtualization technique, and is open to the public via https://github.com/93suhwan/uscv.
- The proposed tool eliminates the need to manage a separate installation/execution environment for each countermeasure and provides easy comparative analysis, helping to identify the most effective countermeasures for each type of vulnerability.
- Using the proposed tool and 237 labeled data, we conduct a comparative study of the representative existing countermeasures in the literature, and their performance is represented using various performance measures.

The rest of the paper is organized as follows. Section II introduces Ethereum smart contracts and their vulnerabilities. Section III summarizes the state-of-the-art countermeasures for the vulnerabilities of smart contracts. In Section IV, we introduce the design of the proposed tool and discuss the results of evaluating the performance of the existing countermeasures using the proposed tool. Finally, Section VI concludes the paper.

**II. PRELIMINARIES**

_A. BLOCKCHAIN AND ETHEREUM SMART CONTRACTS_

The first blockchain was introduced in 2008 by a pseudonymous person or group known as Nakamoto [1]. Essentially, a blockchain is a list of blocks that record information. Since blocks in the chain are connected using a cryptographic hash (more specifically, each block contains the cryptographic hash of the previous block in the chain), any information in a block can be changed only if all of its subsequent blocks can also be modified. However, since such modifications require the consent of the majority of the network, malicious change of the blockchain is consequently almost impossible. Since a blockchain can work as a distributed, verifiable public ledger that records transactions, its first application was a cryptocurrency, named Bitcoin [1]. In order to add a new block to the blockchain, Bitcoin uses a consensus mechanism called Proof-of-Work (PoW), where nodes in the network compete to generate the right block by solving a cryptographic puzzle. Extending Bitcoin, Ethereum allows storing computer code that can be used to implement unforgeable decentralized applications [2], and it is now a platform for building and running various kinds of DApps. ----- A smart contract is a piece of code that enables DApps to interact with the blockchain, and it actually runs on a quasi-Turing-complete virtual machine, called EVM.
EVM is considered a sort of distributed machine that executes smart contracts embedding the DApp logic by consuming Ether (Gas in EVM). Since the blockchain has the property of immutability, once a smart contract is deployed on the blockchain, it cannot be modified, like other transactions. Therefore, it is significant to test the integrity and safety of smart contracts before they are actually used upon the blockchain. Otherwise, the blockchain must be destroyed or hardforked if serious errors in the deployed smart contracts are found afterwards.

_B. VULNERABILITIES OF SMART CONTRACTS_

This section briefly reviews the nine representative vulnerabilities discussed in the DASP Top 10 of 2018 [7].

- Reentrancy (Vre): Before a contract is completed (i.e., before any effects are resolved), the contract is executed recursively or other contracts are invoked, leaving the state inconsistent. Below is an example scenario that exploits a reentrancy vulnerability.

function withdraw(){
  - Transfer tokens to someone.
  - Update balance.
}

1. The attacker invokes the function withdraw in succession.
2. The second function call is done before balance has been updated for the first function call.
3. balance is updated only for the second function call.

- Access Control (Vac): A contract's private values or functions are accessed abnormally due to an insecure visibility setting. Below is an example that describes an access control vulnerability: this function does not check whether it was already called and the state has already been initialized.

function initState(){
  owner = msg.sender
}

1. The function can be called abnormally via a delegatecall.
2. Then, the value of owner could be manipulated.

- Integer Overflow/Underflow (Vio): Solidity uses variables of unsigned int type. If programmers process variables of unsigned int type as if they were variables of signed int type, an overflow or underflow can occur. If such errors happen, for example, a wrong amount of tokens can be withdrawn. Below is an example that shows an integer underflow.

function withdraw(uint amount){
  if(balance - amount > 0){
    - Withdraw tokens.
  }
}

1. Suppose balance = 0 and amount = 1.
2. The value of (balance - amount) can be interpreted as positive since balance is a value of unsigned int.

- Unchecked Low Level Call (Vuc): When errors happen in low level functions in Solidity, a boolean value set to false is returned, but the code keeps running. Therefore, the result of such low level functions should be checked to confirm successful execution. Below is an example that shows an unchecked low level call vulnerability.

function withdraw(uint amount){
  - balance is updated (i.e., balance -= amount).
  - Transfer tokens (as many as amount) by calling a send function.
}

1. If the send function call fails, balance is managed incorrectly.

- Denial of Service (DoS, Vdos): When DoS attacks are launched, smart contracts can become unavailable. Various types of DoS implementation have been reported, including increasing the gas necessary, abusing access control, and behaving maliciously. Below is an example that shows a sort of DoS. Computation at each block is limited by the upper bound on the amount of gas in Ethereum. If the function (doSomething), called by the attacker, has heavy code that consumes too much gas, other transactions cannot be included in the block.

function doSomething(){
  for(uint i = 0; i < N; i++){
    - Heavy code.
  }
}

1. Attackers call doSomething.
2. Too much gas is consumed using the heavy code in doSomething.
3. Other transactions cannot be included in the block since the gas limit for the block is reached.

- Bad Randomness (Vbr): Generation of a random number is required in several applications such as games and lotteries. However, it is tricky to implement random number generation on the Ethereum public blockchain since 1) Ethereum is a deterministic Turing machine without embedded true randomness, and 2) all the data (block variables) used for generating a random number is open to the public, even to attackers. Hence, an attacker can predict the sources of randomness to some extent and replicate them to attack a function relying on the random value. Obviously, instead of using block variables open to the public, a random value can be created using timestamps. However, as discussed below for timestamp dependence, the timestamps can be manipulated by miners, resulting in another type of attack. Below is an example that exploits a bad randomness vulnerability.

function coinFlip(bool guess){
  value = a hash value generated using a block number
  side = value / given denominator
  if (side == guess){
    - Win the game.
  }
}

1. Attackers can always win using the function exploit below.
2. Copy the function coinFlip and get the result in advance (A).
3. Call the function coinFlip based on the result (B).

function exploit(bool guess){
  (A) value = a hash value generated using the same block number
      side = value / given denominator
  (B) if(side == guess){
        coinFlip(guess);
      } else{
        coinFlip(!guess);
      }
}

- Front-Running (Vfr): Miners perform calculations while being compensated with gas. The more gas (the higher the fees), the more quickly the transactions are computed. Since the public Ethereum is transparent, pending transactions are visible to anyone. Hence, attackers can preempt the results of an already calculated transaction by copying the transaction at a higher fee. Below is an example scenario that exploits a front-running vulnerability.

1. Information sent in a transaction Ta (Ether, recipient address) is public.
2. Time elapses until Ta is confirmed.
3. Ta is read by an attacker before it is confirmed.
4. The attacker's transaction Tb, which is generated by copying Ta, is placed before Ta.
5. The attacker can steal the result of computing Ta.

- Timestamp Dependence (Vtd): The timestamp of a block is determined by the miner (who reports the time at which the mining occurs). However, it can be manipulated by the miner (the timestamp can be changed within 15 seconds). Hence, a fake time can be advertised by malicious miners, which allows the output of the contract to be changed.

- Short Address (Vsa): When a contract receives data of smaller-than-expected size, the missing portion is padded with zeros in EVM. For example, if the user address, signature, and the amount of tokens to be withdrawn are 0x12345600, 0xabcdef12, and 32 (0x00000020), respectively, EVM concatenates all the values in the order of signature, address, and token amount, resulting in 0xabcdef121234560000000020. If an attacker specifies a short address such as 0x123456 instead of 0x12345600, in the previous case, EVM generates the value 0xabcdef1212345600000020, and two zeros are padded at the end, resulting in 0xabcdef121234560000002000. The resulting value can be misinterpreted as having to withdraw as many tokens as 0x00002000.
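As an aside we add here (not part of the original paper), the unsigned wraparound behind the Vio example earlier in this section can be reproduced in any language with fixed-width unsigned integers. The following C# sketch mirrors the vulnerable `balance - amount > 0` guard and shows why it passes even when `amount` exceeds `balance`:

```csharp
using System;

class UnderflowDemo
{
    // Mirrors the vulnerable guard: if (balance - amount > 0) { withdraw }
    static bool GuardPasses(uint balance, uint amount) =>
        unchecked(balance - amount) > 0; // wraps around instead of going negative

    static void Main()
    {
        uint balance = 0, amount = 1;
        // 0u - 1u wraps to 4294967295, so the "positive" check passes.
        Console.WriteLine(unchecked(balance - amount)); // 4294967295
        Console.WriteLine(GuardPasses(balance, amount)); // True: the withdrawal proceeds
        // A safe guard compares before subtracting (the pattern enforced by
        // safe-math style libraries in Solidity).
        Console.WriteLine(balance >= amount);            // False
    }
}
```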
**III. COUNTERMEASURES USING STATIC AND DYNAMIC ANALYSIS**

In this section, we briefly examine 11 publicly available, open-sourced, representative countermeasures for Ethereum smart contracts with a command-line interface (CLI). Table 1 summarizes the characteristics of the considered countermeasures, such as the main methods, input, and DASP Top 10 vulnerabilities supported. Among the main methods used by the countermeasures, static analysis refers to any kind of method for examining and analyzing the code without actually executing it, whereas dynamic analysis refers to methods for testing and evaluating the code by running it with test cases. Typical static analysis includes abstract interpretation, control-flow analysis, data-flow analysis, symbolic execution, etc., whereas dynamic analysis includes code coverage, memory error detection, fault localization, security analysis, etc. Static analysis is faster but less precise than dynamic analysis. In addition, static analysis finds properties that hold for all execution paths, whereas dynamic analysis finds those for one or more execution paths, but can detect subtle or complex vulnerabilities that static analysis may not detect. Below we review each countermeasure in alphabetical order. -----

**TABLE 1. Overview of the countermeasures considered in our proposed tool. We considered only publicly available, open-sourced countermeasures with a CLI. Year denotes the publication year of the first relevant conference, workshop, or journal paper, if any. Vulnerabilities denote either those that can be detected by the given countermeasure (that is, the countermeasure implements a detector for the specified vulnerability) or its functionalities if the countermeasure is a testing tool, linter, or profiler. Vac: Access control; Vdos: Denial of service; Vfr: Front-running; Vio: Integer overflow/underflow; Vre: Reentrancy; Vtd: Timestamp dependence; Vuc: Unchecked low level call.**

_A. ECHIDNA_

Echidna [11], [12] is an open-source, easy-to-use, property-based fuzz testing tool for Ethereum smart contracts, developed and used by Trail of Bits. Instead of using a predefined set of rules to detect vulnerabilities, it supports user-defined properties for property-based testing [30], arbitrary assertion checking, and estimation of maximum gas usage. That is, it automatically generates tests to detect violations of user-defined properties and assertions, and allows us to prevent vulnerabilities caused by out-of-gas conditions. Echidna uses the Slither static analysis tool [22], which we discuss below, in the preprocessing step to compile and analyze smart contracts, and it uses information from Slither to improve fuzz testing. Currently, Echidna can also test contracts compiled with Vyper (https://vyper.readthedocs.io/en/stable/) and supports smart contract development frameworks such as Truffle (https://www.trufflesuite.com/) and Embark (https://framework.embarklabs.io/).

_B. ETHLINT_

Ethlint [13], formerly known as Solium, is a customizable, stand-alone linter for Solidity smart contracts. It provides a predefined set of various style and security rules, which the user can configure, for example, by choosing which rules to apply to the code or by passing options to the rules to modify their behavior.
Ethlint was originally designed to strictly adhere to the Solidity style guide (https://solidity.readthedocs.io/en/develop/styleguide.html), but it now allows the user not only to customize the predefined rules but also to write and distribute, via NPM, new plugins for their own rules. It can also automatically fix the detected style and security issues, although no benchmark results are available for this feature.

_C. MANTICORE_

Manticore [14], [15] is an open-source dynamic symbolic execution framework not only for Ethereum smart contracts but also for native binaries. It consists of the Core Engine, implementing a generic platform-independent symbolic execution engine; the Native and Ethereum Execution Modules for symbolic execution of binaries and smart contracts, respectively; the Satisfiability Modulo Theories (SMT) module; and a Python API for supporting customized analyses and interacting with external solvers such as Z3 (https://github.com/Z3Prover/z3), Yices (https://yices.csl.sri.com/), and CVC4 (https://cvc4.github.io/). Currently, Manticore supports various built-in vulnerability detectors, such as for problematic uses of delegatecall, integer overflows, reentrancy bugs, uses of potentially insecure instructions, reachable external calls, reachable selfdestruct instructions, uninitialized memory and storage usage, invalid instructions, and unused internal transaction return values. The main downside of using Manticore is its long execution time; it is much slower than other static analysis tools (it took about 24 minutes on average, while other tools took from a few seconds to a few minutes, in experiments using 47,518 contracts) [10].

_D. MYTHRIL_

Mythril [16], [17] is an open-source, interactive security analysis tool for Ethereum smart contracts, which also supports other EVM-compatible blockchains such as Quorum (https://consensys.net/quorum/), VeChain
We also remark that while Oyente currently reports a call stack depth attack vulnerability, it is no longer possible as of the EIP 150 hardfork. _F. SECURIFY_ Securify [20], [21] is a security analysis tool for Ethereum smart contracts, which currently supports more than 37 vulnerabilities including reentrancy, locked Ether, transaction order dependence, and unrestricted write. Together with an input contract, it takes as input a set of security patterns written in a specialized domain-specific language. More specifically, a security property is encoded into a set of compliance and violation patterns, each of which ensures that a contract satisfies and violates the given property, respectively. Such patterns are checked using the Soufflé Datalog solver [34] against the semantic facts obtained from the contract by applying static analysis such as data- and control-flow analysis. In contrast to symbolic execution-based tools such as Mythril [16] and Oyente [18], which do not guarantee to explore every program path, Securify analyzes every contract behavior, thus avoiding false negatives. Securify aims to guarantee that if a contract matches a compliance (resp. violation) pattern, then it definitely complies with (resp. violates) the corresponding security property. However, as discussed in [35], most of the security patterns proposed in [20] are not sound and can produce both false positives and false negatives. _G. SLITHER_ Slither [22], [23] is an open-source Solidity static analysis framework written in Python 3, which supports automated detection of about 45 vulnerabilities and code optimizations that the compiler misses, and visualization of the information about contract details, enhancing developers’ code comprehension. Given a Solidity contract source code, Slither takes as input its abstract syntax tree generated by the Solidity compiler, and recovers its inheritance graph, control flow graph, and list of expressions. Then, Slither transforms the contract code into an intermediate representation called SlithIR, which uses static single assignment form [36] to facilitate the analysis, and applies the usual program analysis techniques such as data-flow analysis and taint tracking. The authors of [22] compares Slither with other static analysis tools such as Securify [20], SmartCheck [24], and Solhint [26] with respect to their capability to detect reentrancy vulnerabilities using 1,000 contracts obtained from Etherscan (https://etherscan.io/), and show that Slither outperforms the other tools for detecting reentrancy vulnerabilities with respect to performance, robustness, and accuracy. _H. SMARTCHECK_ SmartCheck [24], [25] is an efficient static analysis tool for Ethereum smart contracts to detect security vulnerabilities and other code issues. It uses an XML-based intermediate representation (IR) to which Solidity source code is translated. Potential vulnerabilities are then detected by applying XPath [37] patterns on the generated IR. Although SmartCheck is very fast when compared with other analysis tools [10], since it only performs relatively simple lexical and syntactic analysis, it cannot detect some severe bugs requiring more advanced techniques such as taint analysis. It has also shown that SmartCheck produces a large number of false positives in the experiment on the reentrancy vulnerability detection using 1,000 contracts [22]. An online version of SmartCheck with more security patterns than the GitHub version is available at https://tool.smartdec.net/. _I. 
_I. SOLHINT_

Solhint [26] is an open-source linter for Solidity smart contracts, similar to Ethlint [13]. It can be used not only to validate whether the Solidity code complies with the style guide and best coding practices but also to detect syntax-related security vulnerabilities. In addition, the user can customize the predefined rule sets and add new rules if necessary. Solhint has been shown to be fast and robust, but to produce a large number of false positives in the experiment on reentrancy vulnerability detection [22].

_J. SOL-PROFILER_

Sol-profiler [27] is a CLI tool that helps the user to visualize and review Solidity smart contracts by listing various properties of every contract method. More specifically, for each method, it specifies the contract, interface, or library
To avoid such time-consuming tasks, we provide a software tool that can - easily be extended with existing/new countermeasures and labeled smart contract codes, - facilitate comparison of the countermeasures by automatically analyzing their verification outputs in terms of various performance measures and arranging the results in tables and graphs, and thus - help the user to identify the most effective countermeasures for each vulnerability. **FIGURE 1. Overall structure of the proposed evaluation tool.** Fig. 1 shows the overall structure of our proposed evaluation tool. In the proposed tool, each countermeasure is offered in the form of a Docker image using OS-level virtualization and operates within the Docker container, making it easy to meet all operational requirements. This approach helps to effectively manage the use of computational resources (CPU, memory) in the system, since each countermeasure is containerized only while actually analyzing the target code. A set of different versions of the Solidity compiler is provided as a single Docker image and can be used in different containers where countermeasures operate. This design is effective because it eliminates the need to update all Docker images of the existing countermeasures when a new version of the compiler is required to convert a new target code into binary code. The analyzer module analyzes the verification outputs generated by each countermeasure and demonstrates the verification performance using various statistical indicators. More precisely, it preprocesses the verification outputs of each countermeasure to check if some vulnerabilities were found in each code in the benchmark dataset. Then, for each countermeasure and its verification outputs, the analyzer module automatically computes various performance measures such as the numbers of true positives, false positives, true negatives, and false negatives, precision, recall, accuracy, F1-score, and the area under the curve (AUC), as shown in Table 2. Finally, it organizes the analysis results and presents them using tables and graphs as shown in Fig. 2, 3, and 4, making it easy to conduct comparative studies for each type of vulnerability. In particular, since the verification performance is represented using various performance measures, users can identify the most effective countermeasure according to their own interests. (Here we note that, different users can place higher importance on different measures. For example, some users may give higher priority to countermeasures that maximize the number of true positives than those that have the minimum number of false positives, while others may prefer the opposite.) This feature can be useful if the proposed tool is used to verify smart contract codes in practice. In addition to selective application of each countermeasure, an effective subset of countermeasures can automatically be selected/recommended depending on the target vulnerabilities, user interests, and constraints. ----- **TABLE 2. Performance measures used in the proposed tool. The value of** precision, recall, accuracy, and F1-score ranges from 0 to 100, and that of AUC ranges from 0 to 1. **FIGURE 2. Example results of using the proposed tool.** _B. 
_B. COMPARATIVE STUDY USING THE PROPOSED TOOL_

Using the proposed tool, we evaluated the performance of the state-of-the-art countermeasures with 237 pieces of labeled code collected from the SWC registry (Smart Contract Weakness Classification and Test Cases) [40], the SmartBugs SB curated dataset [41], VeriSmart-benchmarks [39], the Zeus dataset [42], and the eThor dataset [43]. Each code either has a single type of vulnerability or is known to be secure, i.e., without any vulnerability. The number of pieces of code for each vulnerability is arranged in Table 3.

**TABLE 3. The number of smart contracts for testing each countermeasure for each type of vulnerability. Secure denotes smart contracts having no vulnerability.**

**TABLE 4. The number of true positives (TP).**

**TABLE 5. The number of false positives (FP).**

The proposed tool arranges the evaluation results in a unified manner as shown in Fig. 2 and produces graphs for each measure as shown in Figs. 3 and 4, which allows an easy comparative study and cross-validation among the countermeasures. Tables 4–9 respectively show the TP, FP, precision, recall, accuracy, and F1-score of each countermeasure for each type of vulnerability for the dataset described in Table 3. We omit the TN and FN as they are easily obtained from the FP and TP, respectively. In the tables, '-' means that the countermeasure does not support detection of the corresponding vulnerability. In Table 4, TOTAL represents the number of smart contracts having the corresponding vulnerability, whereas in Table 5, it represents the number of those not having the corresponding vulnerability. We additionally show the F1-score of each countermeasure for each type of vulnerability in Fig. 3, which takes both precision and recall into consideration and thus is a more appropriate metric for imbalanced datasets. As our dataset is highly imbalanced, the F1-score is much lower than the accuracy in every case, but it is much more useful than the accuracy for comparing the performance of various countermeasures.

**FIGURE 3. F1-score of each countermeasure for each vulnerability.**

**TABLE 6. Precision (%).**

**TABLE 7. Recall (%).**

**TABLE 8. Accuracy (%).**

**TABLE 9. F1-score (%).**

Overall, for AC, every countermeasure shows a low detection rate of vulnerable code. That is, for all countermeasures, more than 80% of vulnerable codes are not detected. Mythril detects three of the 18 vulnerable codes, showing the largest TP value. However, considering the FP value together, Slither, which has a slightly smaller TP value, can be more effective since it shows 100% precision and better accuracy. Only Mythril and Oyente work for DoS and FR, respectively. One out of six vulnerable codes with DoS is detected by Mythril, and two out of four vulnerable codes with FR are detected by Oyente. Both countermeasures produce more false positives (compared to the true positives), resulting in small values for both precision and recall. These results can be interpreted as meaning that neither countermeasure makes a sufficiently meaningful contribution to DoS or FR detection. VeriSmart successfully detects all vulnerable codes with IO and reports a 100% recall value. However, it also generates 55 false positives, resulting in a precision of 50%. In general, a large number of false positives requires additional manual examination, which can be a large overhead.
In that sense, Manticore, which detects six out of the 55 vulnerable codes without generating any false positives (100% precision), may be preferred. Oyente may be the most effective tool for detecting RE, as shown by the highest F1-score (Fig. 3) and good performance on all measures (refer to Tables 6, 7, and 8). More specifically, 29 out of 31 vulnerable codes are detected while only six false positives are produced (among 206 secure codes). Slither may be considered competitive in that it completely detects every vulnerable code (even though it produces 32 false positives, showing 49.2% precision and 100% recall). It is confirmed that both Slither and Solhint completely detect the five vulnerable codes having TD. However, Slither can be recommended preferentially since it produces about half as many false positives as Solhint. As for UC, all vulnerable codes are detected by SmartCheck, which produces 14 false positives, resulting in a precision of 78.8% and an F1-score of 88.1%. Slither is also effective in detecting UC: it detects 45 out of 52 vulnerable codes while generating only one false positive, showing good performance on all measures. In Fig. 4, we also show the performance of the countermeasures using their ROC curves and AUC values. Since our dataset is imbalanced, the AUC is also an important measure to consider. Note that the AUC values in Fig. 4 do not necessarily coincide with the F1-scores in Fig. 3. Following the general guidelines in [44], we consider a countermeasure to be acceptable, excellent, and outstanding if its AUC value is greater than or equal to 0.7, 0.8, and 0.9, respectively. In this regard, there is no effective countermeasure for AC, DoS, and FR; Oyente is excellent and VeriSmart is outstanding at identifying IO; Oyente and Slither are outstanding for RE; Slither and Solhint are outstanding for TD; and finally, Slither and SmartCheck are outstanding for UC. The evaluation results can be summarized as follows:
- In general, countermeasures that identify many vulnerable codes also tend to be less precise and accurate, since they also produce many more false positives.
- There is no effective countermeasure yet for detecting the 'Access Control', 'Denial of Service', and 'Front-Running' vulnerabilities.
- Vulnerable codes with 'Integer Overflow' and 'Timestamp Dependence' are completely detected. However, the performance of the countermeasures needs to be further improved in order to reduce false positives.
- As for the 'Reentrancy' and 'Unchecked Low Level Call' vulnerabilities, effective countermeasures with both high precision and high recall values are identified.

**V. DISCUSSION**

_A. RESULTS FROM OUR COMPARATIVE STUDY_

Similar to ours, previous studies such as [9], [10], [22], [28] have also empirically compared various countermeasures using real-world smart contracts and discussed their performance. In [9], four countermeasures (Mythril, Securify, SmartCheck, and Oyente) were evaluated using 10 representative smart contracts, and the results suggested that SmartCheck was statistically the most effective in terms of accuracy and ROC, while Mythril had the fewest false positives. This result is consistent with our evaluation results in the case of UC. However, due to the limited number of test codes, the effectiveness of Oyente against IO did not seem to be well understood. The authors of [22] proposed Slither and conducted performance comparisons with Securify, SmartCheck, and Solhint for RE detection using two famous contracts (DAO and SpankChain), which are vulnerable to RE, and 1,000 unlabeled contracts. They reported that Slither overwhelmed the other three countermeasures in terms of accuracy, execution time, and robustness, which matches our evaluation results. The performance of Oyente, which was not covered in their work, is newly verified in our work, and we find that Oyente could be more effective than Slither in that it generates a much smaller number of false positives for RE detection. In [28], the authors introduced VeriSmart, a new method for IO detection, and compared its performance with that of Manticore, Mythril, Osiris, and Oyente using 60 labeled vulnerable contracts. Their evaluation showed that VeriSmart successfully detected all vulnerable codes with a negligible false positive rate (0.41%). In our study using an increased number of codes, VeriSmart's effectiveness was also confirmed; however, it incurred a much higher false positive rate. In [10], a comprehensive evaluation of nine representative countermeasures was performed using a dataset of 69 labeled vulnerable contracts and 47,518 unlabeled contracts. The authors reported that Mythril was the most accurate countermeasure, showing 27% accuracy, when the vulnerabilities were considered altogether. However, this report differs slightly from our findings: in our work, which used more labeled codes and measured various evaluation metrics for each vulnerability, Mythril is not recommended due to its low precision and recall. Obviously, our experimental results are partially consistent with and complementary to those of the previous studies mentioned above. Here, we note that in our work, more countermeasures are evaluated with many more labeled codes, and their performance is shown with various measures. Hence, we believe that our comparative study could provide more reliable insights into the state-of-the-art countermeasures and help developers choose countermeasures that better suit their purpose under given conditions.
The authors of [22] proposed Slither and conducted performance comparisons with Securify, SmartCheck, and Solhint in RE detection using two famous contracts (DAO and SpankChain), which are vulnerable to RE, and 1,000 unlabeled contract data. They reported that Slither overwhelmed the other three countermeasures in terms of accuracy, execution time, and robustness, which is the same as in our evaluation results. The performance of Oyente, which was not covered in their work, is newly verified in our work, and it is discussed that Oyente could be more effective than Slither in that it generates a much smaller number of false positives for RE detection. In [28], the authors introduced VeriSmart, a new method for IO detection, and compared its performance with that of Manticore, Mythril, Osiris, and Oyente using 60 labeled vulnerable contracts. Their evaluation showed that VeriSmart successfully detected all vulnerable codes with a negligible false positive rate (0.41%). In our study using an increased number of codes, VeriSmart’s effectiveness could also be confirmed. However, in our study, it incurred a much higher false positive rate. In [10], comprehensive evaluation for the nine representative countermeasures were performed using a dataset of 69 labeled vulnerable contracts and 47,518 unlabeled contracts. The authors reported that Mythril was the most accurate countermeasure, showing 27% accuracy, when considered the vulnerabilities altogether. However, this report is slightly different from our findings. In our work, which used more labeled codes and measured various evaluation metrics for each vulnerability, Mythril is not recommended due to its low value of precision and recall. Obviously, our experimental results are partially consistent and complementary with those from previous studies mentioned above. Here, we note that in our work, more countermeasures are evaluated with much more labeled codes, and their performance is shown with various measures. Hence, we believe that our comparative study could provide more reliable insights into the state-of-the-art countermeasures and ----- **FIGURE 4. ROC curve and AUC of each countermeasure for each vulnerability.** help developers choose countermeasures that better suit their purpose under given conditions. _B. THREATS TO VALIDITY_ The limitations of our evaluation are summarized as follows. In this work, we collected more labeled data than previous studies to derive more reliable evaluation results. However, as in other related work, a few smart contracts may have incorrect labels, as it is very challenging to manually examine the code. Moreover, our dataset is imbalanced in that the number of safe smart contracts is much larger than the number of vulnerable smart contracts. In particular, among the 237 smart contracts, only six, four, and five vulnerable contracts for DoS, FR and TD are included, respectively. Hence, in most cases, the detection accuracy of countermeasures against each vulnerability is highly reported. Since new datasets and countermeasures can be easily added to our tool, however, we believe that our tool can contribute to achieving more accurate evaluation results with more data in the future. **VI. CONCLUSION** In this paper, we revisited smart contracts using the Ethereum blockchain technology and summarized various vulnerability issues of smart contract applications. A number of countermeasures were briefly introduced and discussed. 
To assess the effectiveness of the countermeasures, we designed and implemented a software tool that facilitates comparative evaluations of the countermeasures. Using the tool and 237 labeled benchmark codes, we evaluated state-of-the-art vulnerability detection schemes. The evaluation results indicate that countermeasures that exhibit a larger TP value often generate a much larger number of false positives, resulting in very low precision and accuracy. In addition, among the state-of-the-art countermeasures, Oyente and Slither are the most effective for RE detection; Slither could be recommended for detection of TD and UC; and VeriSmart could be recommended for IO detection. Using our tool, researchers can easily conduct performance comparisons between their own countermeasure and other state-of-the-art schemes with a variety of performance metrics. As for practitioners, they can exploit our tool to find various vulnerabilities within their smart contract applications. Since in our tool smart contracts can be examined by a number of countermeasures simultaneously, vulnerabilities can be identified easily. We believe that our proposed tool will be effective in a future verification study of smart contracts and will contribute to the development of practical and secure smart contract applications.

**APPENDIX. USAGE OF THE PROPOSED TOOL**

The proposed software tool is open to the public via the website https://github.com/93suhwan/uscv. This section details how to use it. -----
ADDING NEW COUNTERMEASURES AND DATA_ When new countermeasures are proposed, they can easily be integrated into our proposed tool and evaluated with the embedded benchmark data. To this end, the file named ‘‘addScheme.sh’’ is provided. By specifying meta-information on a new countermeasure as arguments, the new scheme can simply be included into the system. $ addScheme.sh -l/n [dirName/imageName]\ -e [cmd] -o [option]\ -M [word] (e.g.,) >> addScheme.sh -l dockerfiles/ ----- smartcheck \ -e smartcheck -o p \ -a SOLIDITY_TX_ORIGIN \ -d SOLIDITY_OVERPOWERED_ROLE \ -i SOLIDITY_VAR|SOLIDITY_UINT_CANT\ -t SOLIDITY_EXACT_TIME \ -u SOLIDITY_UNCHECKED_CALL The installed countermeasures can also be removed from the tool as follows. $ removeScheme.sh [countermeasureName] (e.g.,) >> removeScheme.sh mythril When labeled codes are newly collected, they can be added to our tool and used for the analysis. $ addData.sh -d/f [dirName/fileName] \ -c [vulType] (e.g.,) >> addData.sh -f example.sol -c AC **REFERENCES** [1] S. Nakamoto, ‘‘Bitcoin: A peer-to-peer electronic cash system,’’ Tech. Rep., 2008. [2] G. Wood, ‘‘Ethereum: A secure decentralised generalised transaction ledger,’’ Ethereum Project Yellow Paper, vol. 151, pp. 1–32, 2014. [3] N. Atzei, M. Bartoletti, and T. Cimoli, ‘‘A survey of attacks on Ethereum smart contracts (SoK),’’ in Principles of Security and Trust, M. Maffei and M. Ryan, Eds. Berlin, Germany: Springer, 2017, pp. 164–186. [4] X. Li, P. Jiang, T. Chen, X. Luo, and Q. Wen, ‘‘A survey on the security of blockchain systems,’’ Future Gener. Comput. Syst., vol. 107, pp. 841–853, Jun. 2020. [5] S. Sayeed, H. Marco-Gisbert, and T. Caira, ‘‘Smart contract: Attacks and protections,’’ IEEE Access, vol. 8, pp. 24416–24427, 2020. [6] H. Chen, M. Pendleton, L. Njilla, and S. Xu, ‘‘A survey on Ethereum systems security: Vulnerabilities, attacks, and defenses,’’ ACM Comput. _Surv., vol. 53, no. 3, pp. 1–3, Jun. 2020._ [7] NCC Group. Decentralized Application Security Project (or DASP) Top _10. Accessed: Mar. 25, 2021. [Online]. Available: https://dasp.co/_ [8] M. di Angelo and G. Salzer, ‘‘A survey of tools for analyzing Ethereum smart contracts,’’ in Proc. IEEE Int. Conf. Decentralized Appl. Infrastruc_tures (DAPPCON), Apr. 2019, pp. 69–78._ [9] R. M. Parizi, A. Dehghantanha, K.-K. R. Choo, and A. Singh, ‘‘Empirical vulnerability analysis of automated smart contracts security testing on blockchains,’’ in Proc. 28th Annu. Int. Conf. Comput. Sci. Softw. Eng. _(CASCON), 2018, pp. 103–113._ [10] T. Durieux, J. A. F. Ferreira, R. Abreu, and P. Cruz, ‘‘Empirical review of automated analysis tools on 47,587 Ethereum smart contracts,’’ in Proc. _ACM/IEEE 42nd Int. Conf. Softw. Eng. (ICSE), New York, NY, USA,_ Jun. 2020, pp. 530–541. [11] G. Grieco, W. Song, A. Cygan, J. Feist, and A. Groce, ‘‘Echidna: Effective, usable, and fast fuzzing for smart contracts,’’ in Proc. 29th ACM SIGSOFT _Int. Symp. Softw. Test. Anal., New York, NY, USA, Jul. 2020, pp. 557–560._ [12] Trail of Bits. Echidna: A Fast Smart Contract Fuzzer. Accessed: Mar. 25, 2021. [Online]. Available: https://github.com/crytic/echidna [13] Ethlint. Accessed: Mar. 25, 2021. [Online]. Available: https://github.com/duaraghav8/Ethlint [14] M. Mossberg, F. Manzano, E. Hennenfent, A. Groce, G. Grieco, J. Feist, T. Brunson, and A. Dinaburg, ‘‘Manticore: A user-friendly symbolic execution framework for binaries and smart contracts,’’ in Proc. _34th IEEE/ACM Int. Conf. Automated Softw. Eng. (ASE), Nov. 2019,_ pp. 1186–1189. [15] Trail of Bits. Manticore. 
SUHWAN JI received the B.S. and M.S. degrees in computer science from Kangwon National University, South Korea, in 2017 and 2019, respectively, where he is currently pursuing the Ph.D. degree majoring in AI and software with the Interdisciplinary Graduate Program in Medical Bigdata Convergence. His research interests include programming languages, machine learning, and blockchain.

DOHYUNG KIM received the B.S. degree in information and computer engineering from Ajou University, Suwon, South Korea, in February 2004, and the Ph.D. degree in computer science from the Korea Advanced Institute of Science and Technology (KAIST), Daejeon, South Korea, in August 2014. From 2014 to 2017, he was a Postdoctoral Researcher and a Research Professor with the Department of Computer Engineering, Sungkyunkwan University. In 2018, he was an Assistant Professor with the Department of Software and Computer Engineering, Ajou University. He is currently an Assistant Professor with the Department of Computer Science and Engineering, Kangwon National University. His research interests include the design and analysis of computer networking and wireless communication systems, especially for future Internet architectures.

HYEONSEUNG IM received the B.S. degree in computer science from Yonsei University, South Korea, in 2006, and the Ph.D. degree in computer science and engineering from the Pohang University of Science and Technology (POSTECH), South Korea, in 2012. From 2012 to 2015, he was a Postdoctoral Researcher with the Laboratory for Computer Science, Université Paris-Sud, and the Tyrex Team, Inria, France. He is currently an Associate Professor with the Department of Computer Science and Engineering, Kangwon National University, South Korea. His research interests include programming languages, logic in computer science, big data analysis and management, machine learning, smart healthcare, blockchain, and information security.
{ "disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.1109/ACCESS.2021.3091317?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1109/ACCESS.2021.3091317, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "CCBYNCND", "status": "GOLD", "url": "https://ieeexplore.ieee.org/ielx7/6287639/9312710/09461797.pdf" }
2,021
[ "JournalArticle" ]
true
null
[]
14,468
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Mathematics", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/024b9198a432d51f0a5245bd60072e147fa2f9e1
[ "Computer Science" ]
0.856956
Inverting Cryptographic Hash Functions via Cube-and-Conquer
024b9198a432d51f0a5245bd60072e147fa2f9e1
Journal of Artificial Intelligence Research
[ { "authorId": "3326894", "name": "O. Zaikin" } ]
{ "alternate_issns": null, "alternate_names": [ "JAIR", "J Artif Intell Res", "The Journal of Artificial Intelligence Research" ], "alternate_urls": null, "id": "aef12dca-60a0-4ca3-819b-cad26d309d4e", "issn": "1076-9757", "name": "Journal of Artificial Intelligence Research", "type": "journal", "url": "http://www.jair.org/" }
MD4 and MD5 are fundamental cryptographic hash functions proposed in the early 1990s. MD4 consists of 48 steps and produces a 128-bit hash given a message of arbitrary finite size. MD5 is a more secure 64-step extension of MD4. Both MD4 and MD5 are vulnerable to practical collision attacks, yet it is still not realistic to invert them, i.e., to find a message given a hash. In 2007, the 39-step version of MD4 was inverted by reducing to SAT and applying a CDCL solver along with the so-called Dobbertin’s constraints. As for MD5, in 2012 its 28-step version was inverted via a CDCL solver for one specified hash without adding any extra constraints. In this study, Cube-and-Conquer (a combination of CDCL and lookahead) is applied to invert step-reduced versions of MD4 and MD5. For this purpose, two algorithms are proposed. The first one generates inverse problems for MD4 by gradually modifying the Dobbertin’s constraints. The second algorithm tries the cubing phase of Cube-and-Conquer with different cutoff thresholds to find the one with the minimum runtime estimate of the conquer phase. This algorithm operates in two modes: (i) estimating the hardness of a given propositional Boolean formula; (ii) incomplete SAT solving of a given satisfiable propositional Boolean formula. While the first algorithm is focused on inverting step-reduced MD4, the second one is not area-specific and is therefore applicable to a variety of classes of hard SAT instances. In this study, 40-, 41-, 42-, and 43-step MD4 are inverted for the first time via the first algorithm and the estimating mode of the second algorithm. Also, 28-step MD5 is inverted for four hashes via the incomplete SAT solving mode of the second algorithm. For three hashes out of them, it is done for the first time.
## Inverting Cryptographic Hash Functions via Cube-and-Conquer

**Oleg Zaikin** zaikin.icc@gmail.com
_ISDCT, Irkutsk, Russia_

### Abstract

MD4 and MD5 are seminal cryptographic hash functions proposed in the early 1990s. MD4 consists of 48 steps and produces a 128-bit hash given a message of arbitrary finite size. MD5 is a more secure 64-step extension of MD4. Both MD4 and MD5 are vulnerable to practical collision attacks, yet it is still not realistic to invert them, i.e. to find a message given a hash. In 2007, the 39-step version of MD4 was inverted by reducing to SAT and applying a CDCL solver along with the so-called Dobbertin's constraints. As for MD5, in 2012 its 28-step version was inverted via a CDCL solver for one specified hash without adding any extra constraints. In this study, Cube-and-Conquer (a combination of CDCL and lookahead) is applied to invert step-reduced versions of MD4 and MD5. For this purpose, two algorithms are proposed. The first one generates inversion problems for MD4 by gradually modifying the Dobbertin's constraints. The second algorithm tries the cubing phase of Cube-and-Conquer with different cutoff thresholds to find the one with the minimal runtime estimation of the conquer phase. This algorithm operates in two modes: (i) estimating the hardness of a given propositional Boolean formula; (ii) incomplete SAT solving of a given satisfiable propositional Boolean formula. While the first algorithm is focused on inverting step-reduced MD4, the second one is not area-specific and is therefore applicable to a variety of classes of hard SAT instances. In this study, 40-, 41-, 42-, and 43-step MD4 are inverted for the first time via the first algorithm and the estimating mode of the second algorithm. 28-step MD5 is inverted for four hashes via the incomplete SAT-solving mode of the second algorithm. For three hashes out of them, this is done for the first time.

### 1. Introduction

A cryptographic hash function maps a message of arbitrary finite size to a hash of a fixed size. Such a function should have the following additional properties: (i) preimage resistance; (ii) second preimage resistance; (iii) collision resistance (Menezes, van Oorschot, & Vanstone, 1996). The first property means that it is computationally infeasible to invert the cryptographic hash function, i.e. to find any message that matches a given hash. According to the second one, given a message and its hash, it is computationally infeasible to find another message with the same hash. The third property means that it is computationally infeasible to find two different messages with the same hash. A proper cryptographic hash function must have all three types of resistance. Cryptographic hash functions are really pervasive in the modern digital world. Examples of their applications include verification of data integrity, passwords, and signatures.

It is well known that the resistance of a cryptographic hash function can be analyzed by algorithms for solving the Boolean satisfiability problem (SAT) (Bard, 2009). SAT in its decision form is to determine whether a given propositional Boolean formula is satisfiable or not (Biere, Heule, van Maaren, & Walsh, 2021b). This is one of the most well-studied NP-complete problems (Cook, 1971; Garey & Johnson, 1979). Over the last 25 years, numerous crucial scientific and industrial problems have been successfully solved by SAT. In almost all these cases, CDCL solvers, i.e.
ones which are based on the Conflict-Driven Clause Learning algorithm (Marques-Silva & Sakallah, 1999), were used. Cube-and-Conquer is an approach for solving extremely hard SAT instances (Heule, Kullmann, Wieringa, & Biere, 2011), for which CDCL solvers alone are not enough. According to this approach a given, problem is split into subproblems on the cubing phase via a lookahead solver (Heule & van Maaren, 2021). Then on the conquer phase the subproblems are solved via a CDCL solver. Several hard mathematical problems from number theory and combinatorial geometry have been solved by Cube-and-Conquer recently, e.g., the Boolean Pythagorean Triples problem (Heule, Kullmann, & Marek, 2016). However, the authors of this study are not aware of any successful application of this approach to cryptanalysis problems. This study is aimed at filling this gap by analyzing the preimage resistance of the cryptographic hash functions MD4 and MD5 via Cube-and-Conquer. MD4 was proposed in 1990 (Rivest, 1990). It consists of 48 steps and produces a 128-bit hash given a message of arbitrary finite size. Since 1995 it has been known to be not collision resistant (Dobbertin, 1996). Despite this vulnerability, MD4 is still used to compute password-derived hashes in some operating systems of the Windows family due to backwards compatibility requirements. Since MD4 still remains preimage resistant and second preimage resistant, its step-reduced versions have been studied in this context recently. In 1998, the Dobbertin’s constraints on intermediate states of MD4 registers were proposed, which reduce the number of preimages, but at the same time significantly simplify the inversion (Dobbertin, 1998). This breakthrough approach made it possible to easily invert 32-step MD4. In 2007, SAT encodings of slightly modified Dobbertin’s constraints were constructed, and as a result 39-step MD4 was inverted via a CDCL solver (De, Kumarasubramanian, & Venkatesan, 2007) for one very regular hash (128 1s). Note, that it is a common practice to invert very regular hashes such as all 1s or all 0s. Since 2007, several unsuccessful attempts have been made to invert 40-step MD4. The second studied cryptographic hash function, MD5, is a more secure 64-step version of MD4 proposed in 1992 (Rivest, 1992). Thanks to elegant yet efficient designs, MD4 and MD5 have become one of the most influential cryptographic functions with several notable successors, such as RIPEMD and SHA-1. MD5 is still used in practice to verify the integrity of files and messages. Since 2005, MD5 has been known to be not collision resistant (Wang & Yu, 2005). Because of a more secure design, the Dobbertin’s constraints are not applicable to MD5 (Aoki & Sasaki, 2008). 26-step MD5 was inverted in 2007 (De et al., 2007), while for 27- and 28-step MD5 it was done for the first time in 2012 (Legendre, Dequen, & Krajecki, 2012). In both papers CDCL solvers were applied, at the same time no additional constraints were added. In (Legendre et al., 2012), 28-step MD5 was inverted for only one hash 0x01234567 0x89abcdef 0xfedcba98 0x76543210. This hash is a regular binary sequence (it is symmetric, and the numbers of 0s and 1s are equal), but at the same time it is less regular than 128 1s mentioned above. The same result was presented later in two papers of the same authors. Unfortunately, none of these three papers explained the non-existence of results for 128 1s and 128 0s. Since 2012 no further progress in inverting step-reduced MD5 has been made. 
This paper proposes Dobbertin-like constraints, a generalization of Dobbertin's constraints. Two algorithms are proposed. The first one generates Dobbertin-like constraints until a preimage of a step-reduced MD4 is found. The second algorithm does sampling to find a cutoff threshold for the cubing phase of Cube-and-Conquer with the minimal runtime estimation of the conquer phase. This algorithm operates in two modes: (i) estimating the hardness of a given formula; (ii) incomplete SAT solving of a given formula. The first algorithm is MD4-specific, while the second algorithm in its estimating mode is general, so it can be applied to any SAT instance (including unsatisfiable ones). Yet the incomplete SAT-solving mode is aimed only at satisfiable SAT instances, preferably with many solutions. With the help of the first algorithm and the estimating mode of the second algorithm, 40-, 41-, 42-, and 43-step MD4 are inverted for four hashes: 128 1s, 128 0s, the one from (Legendre et al., 2012), and a random hash. The first algorithm is not applicable to MD5 because of its more secure design. The estimating mode of the second algorithm is not applicable either, because for an unconstrained inversion problem the cubing phase produces too hard subproblems. Therefore only the incomplete solving mode of the second algorithm is applied to invert step-reduced MD5. In particular, 28-step MD5 is inverted for the same four hashes. All the experiments were run on a personal computer.

In summary, the contributions of this paper are:

- Dobbertin-like constraints, a generalization of Dobbertin's constraints.
- An algorithm that generates Dobbertin-like constraints and the corresponding inversion problems to find preimages of a step-reduced MD4.
- A general algorithm for finding a cutoff threshold with the minimal runtime estimation of the conquer phase of Cube-and-Conquer.
- For the first time, 40-, 41-, 42-, and 43-step MD4 are inverted.
- For the first time, 28-step MD5 is inverted for the two most regular hashes (128 1s and 128 0s) and a random non-regular hash.

The paper is organized as follows. Preliminaries on SAT and cryptographic hash functions are given in Section 2. Section 3 proposes Dobbertin-like constraints and the algorithm aimed at inverting step-reduced MD4. The Cube-and-Conquer-based algorithm is proposed in Section 4. The considered inversion problems for step-reduced MD4 and MD5, as well as their SAT encodings, are described in Section 5. Experimental results on inverting step-reduced MD4 and MD5 are presented in Sections 6, 7, and 8. Section 9 outlines related work. Finally, conclusions are drawn.

This paper builds on earlier work (Zaikin, 2022) but extends it significantly in several directions. First, the algorithm for generating Dobbertin-like constraints for MD4 is improved by cutting off impossible values of the last bits in the modified constraint. As a result, in most cases the considered step-reduced versions of MD4 are inverted about 2 times faster than in (Zaikin, 2022). Second, the incomplete SAT-solving mode of the Cube-and-Conquer-based algorithm is proposed. Third, all considered step-reduced versions of MD4 are inverted for four hashes, compared to two hashes in (Zaikin, 2022). Finally, 28-step MD5 is inverted, while in (Zaikin, 2022) only step-reduced MD4 was studied.

### 2. Preliminaries

This section gives preliminaries on SAT, Cube-and-Conquer, cryptographic hash functions, MD4, Dobbertin's constraints, and MD5.
**2.1 Boolean Satisfiability**

The Boolean satisfiability problem (SAT) (Biere et al., 2021b) is to determine whether a given propositional Boolean formula is satisfiable or not. A formula is satisfiable if there exists a truth assignment that satisfies it; otherwise it is unsatisfiable. SAT is historically the first NP-complete problem (Cook, 1971). A propositional Boolean formula is in Conjunctive Normal Form (CNF) if it is a conjunction of clauses. A clause is a disjunction of literals, where a literal is a Boolean variable or its negation.

The Davis–Putnam–Logemann–Loveland (DPLL) algorithm is a complete backtracking SAT-solving algorithm (Davis, Logemann, & Loveland, 1962). A decision tree is formed, where each internal node corresponds to a decision variable, while edges correspond to variables' values. Unit Propagation (UP) (Dowling & Gallier, 1984) is used to reduce the tree after assigning a decision variable. UP iteratively applies the unit clause rule: if there is only one remaining unassigned literal in a clause, and all other literals are assigned to False, then this literal is assigned to True. If an unsatisfied clause is encountered, a conflict is declared, and chronological backtracking is performed.

Lookahead is another complete SAT-solving algorithm (Heule & van Maaren, 2021). It improves DPLL by the following heuristic. When a decision variable should be chosen, each unassigned variable is assigned to True followed by UP, and the resulting reduction is measured; then the same is done for the False assignment. A failed literal denotes a literal for which a conflict is found during UP. If both literals of a variable are failed, then unsatisfiability of the CNF is proven. If there is exactly one failed literal l for some variable, then l is forced to be assigned to False. This rule is known as failed literal elimination. If for a variable both literals are not failed, the reduction measure for this variable is calculated as a combination of literal measures. A variable with the largest reduction measure is picked as a decision variable. Thus lookahead allows one to choose good decision variables and in addition simplifies the CNF by the described reasoning. Lookahead SAT solvers are strong on random k-SAT formulae.

In contrast to DPLL, in Conflict-Driven Clause Learning (CDCL), when a conflict happens, its reason is found and non-chronological backtracking is performed (Marques-Silva & Sakallah, 1999). To forbid the conflict, a conflict clause is formed based on the reason and added to the CNF. Conflict clauses are used not only to limit the exploration of the decision tree, but also to choose decision variables. This complete algorithm is much more efficient than DPLL. Also, it is stronger than lookahead on non-random instances. That is why most modern complete SAT solvers are based on CDCL.

Problems from various areas (verification, cryptanalysis, combinatorics, bioinformatics, etc.) can be efficiently reduced to SAT (Biere et al., 2021b). When a cryptanalysis problem is reduced to SAT and solved by SAT solvers, this is called SAT-based cryptanalysis or logical cryptanalysis (Cook & Mitchell, 1996; Massacci & Marraro, 2000). It is in fact a special type of algebraic cryptanalysis (Bard, 2009). In the last two decades, SAT-based cryptanalysis has been successfully applied to stream ciphers, block ciphers, and cryptographic hash functions.
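As an aside, the unit clause rule is easy to state in code. The following minimal sketch (illustrative only, not part of the paper's tooling) applies UP to clauses given as lists of nonzero integers in the DIMACS convention, where a negative integer denotes a negated variable:

```python
def unit_propagate(clauses, assignment):
    """Repeatedly apply the unit clause rule; return None on conflict.

    clauses: list of clauses, each a list of nonzero ints (DIMACS style).
    assignment: dict mapping variable -> bool, extended in place.
    """
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            unassigned = []
            satisfied = False
            for lit in clause:
                var, val = abs(lit), lit > 0
                if var in assignment:
                    if assignment[var] == val:
                        satisfied = True  # clause already satisfied
                        break
                else:
                    unassigned.append(lit)
            if satisfied:
                continue
            if not unassigned:
                return None  # every literal is false: conflict
            if len(unassigned) == 1:  # unit clause: forced assignment
                lit = unassigned[0]
                assignment[abs(lit)] = lit > 0
                changed = True
    return assignment

# (x1 or x2) and (not x1 or x2) and (not x2 or x3), with x1 decided True:
print(unit_propagate([[1, 2], [-1, 2], [-2, 3]], {1: True}))
# -> {1: True, 2: True, 3: True}
```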
**2.2 Cube-and-Conquer**

If a given SAT instance is too hard for a sequential SAT solver, it makes sense to solve it in parallel (Balyo & Sinz, 2018). If only complete algorithms are considered, then there are two main approaches to parallel SAT solving: portfolio (Hamadi, Jabbour, & Sais, 2009) and divide-and-conquer (Böhm & Speckenmeyer, 1996). According to the portfolio approach, many different sequential SAT solvers (or maybe different configurations of the same solver) solve the same problem simultaneously. In the divide-and-conquer approach, the problem is decomposed into a family of simpler subproblems that are solved by sequential solvers.

Cube-and-Conquer (Heule et al., 2011; Heule, Kullmann, & Biere, 2018) is a divide-and-conquer SAT-solving approach that combines lookahead with CDCL. In the cubing phase, a modified lookahead solver splits a given formula into cubes. In the conquer phase, a subformula is formed by joining each cube with the original formula. Finally, a CDCL solver is run on the subformulas. If the original formula is unsatisfiable, then all the subformulas are unsatisfiable. Otherwise, at least one subformula is satisfiable. Since cubes can be processed independently, the conquer phase can be easily parallelized.

As was mentioned in the previous subsection, lookahead is a complete algorithm. When used in the cubing phase of Cube-and-Conquer, a lookahead solver is forced to cut off some branches, thus producing cubes. Therefore, such a solver produces a decision tree whose leaves are either refuted ones (with no possible solutions) or cubes. There are two main cutoff heuristics that decide when a branch becomes a cube. In the first one, a branch is cut off after a given number of decisions (Hyvärinen, Junttila, & Niemelä, 2010). According to the second one, it happens when the number of variables in the corresponding subproblem drops below a given threshold (Heule et al., 2011). In the present study the second cutoff heuristic is used since it usually shows better results on hard instances.

**2.3 Cryptographic Hash Functions**

A hash function h is a function with the following properties (Menezes et al., 1996).

1. Compression: h maps an input x of arbitrary finite size to an output h(x) of fixed size.
2. Ease of computation: for any given input x, h(x) is easy to compute.

An unkeyed cryptographic hash function h is a hash function that has the following potential properties (Menezes et al., 1996).

1. Collision resistance: it is computationally infeasible to find any two inputs x and x′ such that x ≠ x′ and h(x) = h(x′).
2. Preimage resistance: for any given output y, it is computationally infeasible to find any of its preimages, i.e. any input x′ such that h(x′) = y.
3. Second-preimage resistance: for any given input x, it is computationally infeasible to find x′ such that x′ ≠ x and h(x) = h(x′).
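As a toy illustration of the preimage resistance property (not from the original paper), the following Python snippet brute-forces a preimage of a hash truncated to 16 bits; the same loop over a full 128-bit hash would require on the order of 2^128 evaluations, which is computationally infeasible:

```python
import hashlib
import itertools

# Toy preimage search against a *truncated* hash. With only 16 bits of
# the digest kept, about 2^16 tries suffice on average; with all 128 bits
# of MD5, the same loop would need about 2^128 tries.
target = hashlib.md5(b"some message").digest()[:2]  # keep 16 bits only

for i in itertools.count():
    candidate = str(i).encode()
    if hashlib.md5(candidate).digest()[:2] == target:
        print("found a 16-bit preimage:", candidate)
        break
```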
This study is focused on practical preimage attacks on step-reduced cryptographic hash functions MD4 and MD5. In the rest of the paper, a practical inversion of a cryptographic hash function implies a practical preimage attack and vise versa. **2.4 MD4** The Message Digest 4 (MD4) cryptographic hash function was proposed by Ronald Rivest in 1990 (Rivest, 1990). Given a message of arbitrary finite size, padding is applied to obtain a message that can be divided into 512-bit blocks. Then a 128-bit hash is produced by iteratively applying the MD4 compression function to the blocks in accordance to the Merkle-Damgard construction. Consider the compression function in more detail. Given a 512-bit input, it produces a 128-bit output. The function consists of three rounds, sixteen steps each, and operates by transforming data in four 32-bit registers A, B, C, D. If a message block is the first one, then the registers are initialized with the following constants, respectively: 0x67452301; ``` 0xefcdab89; 0x98badcfe; 0x10325476. Otherwise, registers are initialized with an output ``` produced by the compression function on the previous message block. The message block _M is divided into sixteen 32-bit words. In each step, one register’s value is updated by_ mixing one message word with values of all four registers and an additive constant. This transformation is partially made via a nonlinear function, which is specific for each round. Additive constants are also round-specific. As a result, in each round all sixteen words take part in such updates. Finally, registers are incremented by the values they had after the current block initialization, and the output is produced as a concatenation of A, B, C, D. The nonlinear functions and additive constants are presented in Table 1. Table 1: Characteristics of MD4 rounds. Round Nonlinear function Additive constant 1 _F_ (x, y, z) = (x _y)_ ( _x_ _z)_ `0x0` _∧_ _∨_ _¬_ _∧_ 2 _G(x, y, z) = (x_ _y)_ (x _z)_ (y _z)_ `0x5a827999` _∧_ _∨_ _∧_ _∨_ _∧_ 3 _H(x, y, z) = x_ _y_ _z_ `0x6ed9eba1` _⊕_ _⊕_ The full description of MD4 compression function can be found in (Rivest, 1990). Algorithm 1 presents the function when it processes the first message block. In the pseudocode some steps are omitted, but the remaining ones are enough to understand MD4-related results of the present paper. Here ≪ _r stands for the circular shifting to the left by r bits_ position. 6 ----- **Algorithm 1 MD4 compression function on the first 512-bit message block.** **Input: 512-bit message block M** . **Output: Updated values of registers A, B, C, D.** 1: AA _A_ `0x67452301` _←_ _←_ 2: BB _B_ `0xefcdab89` _←_ _←_ 3: CC _C_ `0x98badcfe` _←_ _←_ 4: DD _D_ `0x10325476` _←_ _←_ 5: A ← (A + F (B, C, D) + M [0]) ≪ 3 _▷_ ROUND 1 starts, Step 1 6: . . . _▷_ Steps 2-12 7: A ← (A + F (B, C, D) + M [12]) ≪ 3 _▷_ Step 13 8: D ← (D + F (A, B, C) + M [13]) ≪ 7 _▷_ Step 14 9: C ← (C + F (D, A, B) + M [14]) ≪ 11 _▷_ Step 15 10: B ← (B + F (C, D, A) + M [15]) ≪ 19 _▷_ Step 16 11: A ← (A + G(B, C, D) + M [0] + 0x5a827999) ≪ 3 _▷_ ROUND 2 starts, Step 17 12: D ← (D + G(A, B, C) + M [4] + 0x5a827999) ≪ 5 _▷_ Step 18 13: C ← (C + G(D, A, B) + M [8] + 0x5a827999) ≪ 9 _▷_ Step 19 14: B ← (B + G(C, D, A) + M [12] + 0x5a827999) ≪ 13 _▷_ Step 20 15: . . . _▷_ Steps 21-32 16: A ← (A + H(B, C, D) + M [0] + 0x6ed9eba1) ≪ 3 _▷_ ROUND 3 starts, Step 33 17: . . . 
▷ Steps 34-47
18: B ← (B + H(C, D, A) + M[15] + 0x6ed9eba1) ≪ 15 ▷ Step 48
19: A ← A + AA ▷ Increment A by the initial value
20: B ← B + BB ▷ Increment B by the initial value
21: C ← C + CC ▷ Increment C by the initial value
22: D ← D + DD ▷ Increment D by the initial value

In 1995, a practical collision attack on MD4 was proposed (Dobbertin, 1996). In 2005, it was theoretically shown that on a very small fraction of messages MD4 is not second preimage resistant (Wang, Lai, Feng, Chen, & Yu, 2005). In 2008, a theoretical preimage attack on MD4 was proposed (Leurent, 2008). Despite the found vulnerabilities, MD4 is still used to compute password-derived hashes in some operating systems of the Windows family, including Windows 10, due to backwards compatibility issues.

**2.5 Dobbertin's constraints for MD4**

Since MD4 is still preimage resistant and second preimage resistant in practice, its step-reduced versions have been studied recently. In 1998, Hans Dobbertin introduced additional constraints for MD4 (Dobbertin, 1998). Further they are called Dobbertin's constraints. Consider a constant 32-bit word K. The Dobbertin's constraints for 32-step MD4 are as follows: A = K in steps 13, 17, 21, 25; D = K in steps 14, 18, 22, 26; C = K in steps 15, 19, 23, 27 (numbering from 1). It means that in the first round, 3 out of 12 constraints are applied, while the remaining 9 are applied in the second round. Algorithm 2 shows how the steps from Algorithm 1 are changed when Dobbertin's constraints are applied. Comments for the constrained steps are marked in bold.

**Algorithm 2 MD4 compression function on the first 512-bit message block with applied Dobbertin's constraints.**

**Input:** 512-bit message block M. **Output:** Updated values of registers A, B, C, D.

1: AA ← A ← 0x67452301
2: BB ← B ← 0xefcdab89
3: CC ← C ← 0x98badcfe
4: DD ← D ← 0x10325476
5: A ← (A + F(B, C, D) + M[0]) ≪ 3 ▷ ROUND 1 starts, Step 1
6: . . . ▷ Steps 2-12
7: A ← (A + F(B, C, D) + M[12]) ≪ 3 ← K ▷ **Step 13, A=K**
8: D ← (D + F(A, B, C) + M[13]) ≪ 7 ← K ▷ **Step 14, D=K**
9: C ← (C + F(D, A, B) + M[14]) ≪ 11 ← K ▷ **Step 15, C=K**
10: B ← (B + F(C, D, A) + M[15]) ≪ 19 ▷ Step 16
11: A ← (A + G(B, C, D) + M[0] + 0x5a827999) ≪ 3 ← K ▷ **ROUND 2 starts, Step 17, A=K**
12: D ← (D + G(A, B, C) + M[4] + 0x5a827999) ≪ 5 ← K ▷ **Step 18, D=K**
13: C ← (C + G(D, A, B) + M[8] + 0x5a827999) ≪ 9 ← K ▷ **Step 19, C=K**
14: B ← (B + G(C, D, A) + M[12] + 0x5a827999) ≪ 13 ▷ Step 20
15: . . . ▷ Steps 21-32
16: A ← (A + H(B, C, D) + M[0] + 0x6ed9eba1) ≪ 3 ▷ ROUND 3 starts, Step 33
17: . . . ▷ Steps 34-47
18: B ← (B + H(C, D, A) + M[15] + 0x6ed9eba1) ≪ 15 ▷ Step 48
19: A ← A + AA ▷ Increment A by the initial value
20: B ← B + BB ▷ Increment B by the initial value
21: C ← C + CC ▷ Increment C by the initial value
22: D ← D + DD ▷ Increment D by the initial value

Consider step 17. A = C = D = K due to the constrained steps 13, 14, and 15, while B is unknown. Since G is the majority function, G(x, y, y) = y for any x and y.
So_ we have _K = (A + G(B, C, D) + M_ [0] + 0x5a827999) ≪ 3 = (K + G(B, K, K) + M [0] + 0x5a827999) ≪ 3 = (K + K + M [0] + 0x5a827999) ≪ 3 Then it follows _M_ [0] = (K ≪ 29) − 2K − `0x5a827999` For example, if K = 0xffffffff, then _M_ [0] = 0xffffffff 2 `0xffffffff` `0x5a827999 =` _−_ _·_ _−_ ``` 0xffffffff 0x5a827999 = 0xa57d8668 ``` _−_ _−_ 8 ----- Thus if A is equal to a constant word in step 17, M [0] becomes a constant as well. The same holds for M [4], M [8], M [1], M [5], M [9], M [2], M [6], and M [10] due to constrained steps 18, 19, 21, 22, 23, 25, 26, 27, respectively. In the pseudocode, only 6 constrained steps are shown, but for the remaining ones the picture is the same. Finally, the Dobbertin’s constraints turn 9 message words out of 16 into constants. Therefore the constrained compression function maps 0, 1 onto 0, 1 while the original one maps 0, 1 onto _{_ _}[224]_ _{_ _}[128]_ _{_ _}[512]_ 0, 1 . As a result, for any given hash and a randomly chosen K the number of preim_{_ _}[128]_ ages (messages) is significantly reduced, maybe even to 0. The Dobbertin’s constraints is an example of streamlined constraints (Gomes & Sellmann, 2004). Such constraints are not implied by the formula, so they can remove some (or even all) solutions, but have a good chance of leaving at least one solution. The Dobbertin’s constraints were originally proposed for 32-step MD4 and they do not guarantee that for a certain pair (hash,K), any preimage will remain when the constraints are applied with constant K. What they do guarantee is that the corresponding system of equations will become much smaller and easier to solve. So the idea is to try different _K until a preimage is found. The point is that even if a few such simplified problems are_ to be solved, it may anyway be much easier than solving the original problem. The same holds for more than 32 steps because all the constraints are applied before the 32nd step. In other words, adding more unconstrained steps does not reduce the number of solutions. In (Dobbertin, 1998), the Dobbertin’s constraints were used to invert 32-step MD4 by randomly choosing values of K and B (on step 28) until a consistent system is formed and a preimage is found. In case of 32 steps, a constant value B in addition to K on step 28 implies values of the remaining 7 message words. This is not the case for more than 32 steps. In 2000, modified Dobbertin’s constraints were applied to invert MD4 when the second round is omitted (Kuwakado & Tanaka, 2000). In 2007, a SAT-based implementation of slightly modified Dobbertin’s constraints (where the constraint on step 13 was omitted) made it possible to invert 39-step MD4 (De et al., 2007). Since 2007, several unsuccessful attempts have been made to invert 40-step MD4, see, e.g., (Legendre et al., 2012). The present study is aimed at inverting 40-, 41-, 42-, and 43-step MD4. **2.6 MD5** MD5 was proposed in 1992 by Ronald Rivest as a slightly slower, but at the same time much more secure extension of MD4 (Rivest, 1992). The main changes in MD5 compared to MD4 are as follows. 1. Addition of the fourth round of 16 steps with its own round function, so MD5 consists of 64 steps; 2. Replacement of the second round’s nonlinear function by a new function; 3. Usage of a unique additive constant in each of the 64 steps; 4. Addition of output from the previous step. The nonlinear functions are as follows: - Round 1. F (x, y, z) = (x _y)_ ( _x_ _z)_ _∧_ _∨_ _¬_ _∧_ 9 ----- - Round 2. 
G(x, y, z) = (x _z)_ (y _z)_ _∧_ _∨_ _∧¬_ - Round 3. H(x, y, z) = x _y_ _z_ _⊕_ _⊕_ - Round 4. I(x, y, z) = y (x _z)_ _⊕_ _∨¬_ For the first time a practical collision attack on MD5 was presented in 2005 (Wang & Yu, 2005). In 2009, a theoretical preimage attack was proposed (Sasaki & Aoki, 2009). It is known that the Dobbertin’s constraints are not applicable to MD5 (Aoki & Sasaki, 2008). This is because of changes 2-4 mentioned above. In particular, when applying to MD5, these constraints remove all solutions (preimages), so simplicity of the obtained reduced problem does not help. In 2007, 26-step MD5 was inverted in (De et al., 2007). In 2012, it was done for 27-, and 28-step MD5 (Legendre et al., 2012). In both papers SAT-based cryptanalysis via CDCL solvers was applied, yet no additional constraints were added to the corresponding CNFs. MD5 is still used in practice as a checksum to verify data integrity. Another application is the storage of passwords’ hashes in operating systems. ### 3. Dobbertin-like Constraints for Inverting Step-reduced MD4 As it was mentioned in the previous section, the progress in inverting step-reduced MD4 was mainly due to the Dobbertin’s constraints. This section first proposes their generalization — Dobbertin-like constraints. Then an algorithm for inverting MD4 via Dobbertin-like constraints is proposed. **3.1 Dobbertin-like Constraints** Suppose that given a constant word K, only 11 of 12 Dobbertin’s constraints hold, while in the remaining corresponding step only b, 0 _b_ 32 bits of the register are equal to the _≤_ _≤_ corresponding b bits of K, at the same time the remaining 32 _b bits in the register take_ _−_ values, opposite to those in K. Denote these constraints as Dobbertin-like constraints. It is clear that the Dobbertin’s constraints is a special case of Dobbertin-like constraints when _b = 32._ Denote an inversion problem for step-reduced MD4 with applied Dobbertin-like constraints as MD4inversion(y, s, K, p, L), where: - y is a given 128-bit hash; - s is the number of MD4 steps (starting from the first one); - K is a 32-bit constant word used in the Dobbertin’s constraints; - p 13, 14, 15, 17, 18, 19, 21, 22, 23, 25, 26, 27 is the specially constrained step; _∈{_ _}_ - L is a 32-bit word such that if Li = 0, 0 ≤ _i ≤_ 31, then i-th bit of the register value modified in step p is equal to Ki. Otherwise, if Li = 1, it is equal to ∼Ki. In other words, the 32-bit word L serves as a bit mask and controls how similar the specially constrained register is to the constant word K. To make this definition more 10 ----- clear, three examples are given below. Hereinafter 0hash and 1hash mean 128 0s and 128 1s (i.e. 4 words 0x00000000 and 0xffffffff, respectively). **Example 3.1 (MD4inversion(0hash, 32, 0x62c7Ec0c, 21, 0x00000000)). The problem is to** invert 0hash produced by 32-step MD4. Since L = 0x00000000, then the specially constrained register (in step 21) has value K, so all 12 Dobbertin’s constraints are applied as usual with K = 0x62c7Ec0c. A similar inversion problem (up to choice of K) in fact was solved in (Dobbertin, 1998). **Example 3.2 (MD4inversion(1hash, 39, 0xfff00000, 12, 0xffffffff)). The problem is to** invert 1hash produced by 39-step MD4. Since L = 0xffffffff, the specially constrained register (in step 12) has value _K = 0x000fffff. In the remaining 11 Dobbertin’s steps_ _∼_ the registers have value K = 0xfff00000. **Example 3.3 (MD4inversion(1hash, 40, 0xffffffff, 12, 0x00000003)). 
The problem is to** invert 1hash produced by 40-step MD4. Since L = 0x00000003, the first 30 bits of the specially constrained register (in step 12) are equal to those in K, while the last two bits have values ∼K30 and ∼K31, respectively. It means that this register has value 0xfffffffc, while in the remaining 11 Dobbertin’s steps the registers have value K = 0xffffffff. **3.2 Inversion Algorithm** Dobbertin-like constraints can be used for finding preimages of a step-reduced MD4 according to the following idea. For a given hash y, step s, random K, an inversion problem is formed with L = 0x00000000. Thus all 12 Dobbertin’s constraints are applied. The inversion problem is solved and, if a preimage is found, nothing else should be done. Otherwise, if it is proven that no preimages exist in the current inversion problem, then a new one is formed with L = 0x00000001. In this case the specially constrained register’s value is just 1 bit shy of being K. This inversion problem is solved as well. If still no preimage exists, then L is further modified: 0x00000002, 0x00000003, and so on. The intuition here is that the Dobbertin’s constraints lead to a system of equations that is either consistent (with very few solutions) or quite “close” to a consistent one. In the latter case trying different values of L helps to form a consistent system and find its solution. Algorithm 3 follows the described idea. In the pseudocode a complete algorithm is _A_ used, which for a formed inversion problem returns all the preimages if they exist. Note that it is not guaranteed that Algorithm 3 finds any preimage for a given hash. However, as it will be shown in sections 6 and 7, in practice Algorithm 3 is able to find preimages for step-reduced MD4. Moreover, it usually does it in just few iterations (from 1 to 3) of the while loop. Complete algorithms of various types can be used to solve inversion problems formed in Algorithm 3. In particular, wide spectrum of constraint programming (Rossi, van Beek, & Walsh, 2006) solvers are potential candidates. In preliminary experiments, state-of-the-art sequential and parallel CDCL SAT solvers were tried to invert 40-step MD4, but even on the first iterations (where all 12 Dobbertin’s constraints are added) CNFs turned out to be too hard for them. That is why it was decided to use Cube-and-Conquer SAT solvers, which are more suitable for extremely hard SAT instances. The next section describes how 11 ----- **Algorithm 3 Algorithm for inverting step-reduced MD4 via Dobbertin-like constraints.** **Input: Hash y; the number of MD4 steps s; constant K; step p with specially constrained** register; a complete algorithm _._ _A_ **Output: Preimages for hash y.** 1: preimages _←{}_ 2: i 0 _←_ 3: while i < 2[32] **do** 4: _L_ DecimalToBinary(i) _←_ 5: _preimages_ (MD4inversion(y, s, K, p, L)) _←A_ 6: **if preimages is not empty then** 7: **break** 8: _i_ _i + 1_ _←_ 9: return preimages a given problem can be properly split into simpler subproblems on the cubing phase of Cube-and-Conquer. ### 4. Finding Cutoff Thresholds for Cube-and-Conquer Recall (see Subsection 2.2) that in Cube-and-Conquer the following cutoff threshold n is meant in the cubing phase: the number of variables in a subformula, formed by adding a cube to the original CNF and applying UP. It is crucial to properly choose this threshold. 
If it is too high, then the cubing phase is performed in no time, but very few extremely hard (for a CDCL solver) cubes might be produced; if it is too low, then the cubing phase will be extremely time consuming, and also there will be too huge number of cubes. Earlier two algorithms aimed at finding a cutoff threshold with minimal estimated runtime of Cube-and-Conquer were proposed. Subsection 7.2 of the tutorial (Heule, 2018a) proposed Algorithm A as follows: Optimizing the heuristics requires selecting useful subproblems of the hard formula. This can be done as follows: First determine the depth for which the number of refuted nodes is at least 1000. Second, randomly pick about 100 subproblems (cubes) of the partition on that depth. Second, solve these 100 subproblems and select the 10 hardest ones for the optimization. Later Algorithm B was proposed in (Bright, Cheung, Stevens, Kotsireas, & Ganesh, 2021): The cut-off bound was experimentally chosen by randomly selecting up to several hundred instances from each case and determining a bound that minimizes the sum of the cubing and conquering times. This section proposes a new algorithm that is inspired by algorithms A and B. First the algorithm is presented and then its novelty is described. To find a cutoff threshold with minimal estimated runtime, first it is needed to preselect promising values of n. On the one hand, the number of refuted leaves should be quite 12 ----- significant since it may indicate that at least some subformulas have become really simpler compared to the original problem. On the other hand, the total number of cubes should not be too large. An auxiliary Algorithm 4 follows this idea. Given a lookahead solver, a CNF, and a cutoff threshold, the function LookaheadWithCut runs the solver on the CNF with the cutoff threshold (see Subsection 2.2) and outputs cubes and the number of refuted leaves. **Algorithm 4 Preselect promising thresholds for the cubing phase of Cube-and-Conquer.** **Input: CNF F; lookahead solver ls; starting threshold nstart; threshold decreasing step** _nstep; maximal number of cubes maxc; minimal number of refuted leaves minr._ **Output: Stack of promising thresholds and corresponding cubes.** 1: function PreselectThresholds(F, ls, nstart, nstep, maxc, minr) 2: _stack_ _←{}_ 3: _n ←_ _nstart_ 4: **while n > 0 do** 5: _c, r_ LookaheadWithCut(ls, _, n)_ _▷_ Get cubes and number of refuted _⟨_ _⟩←_ _F_ 6: **if Size(c) > maxc then** 7: **break** _▷_ Break if too many cubes 8: **if r** _minr then_ _≥_ 9: _stack.push(_ _n, c_ ) _▷_ Add threshold and cubes _⟨_ _⟩_ 10: _n_ _n_ _nstep_ _▷_ Decrease threshold _←_ _−_ 11: **return stack** When promising values of the threshold are preselected, it is needed to estimate the hardness of the corresponding conquer phases. It can be done by choosing a fixed number of cubes by simple random sampling (Starnes, Yates, & Moore, 2010) among those produced in the cubing phase. If all corresponding subproblems from the sample are solved by a CDCL solver in a reasonable time, then an estimated total solving time for all subproblems can be easily calculated. This idea is implemented as Algorithm 5. On the first stage, promising thresholds are preselected by Algorithm 4, while on the second stage the one with minimal runtime estimation of the conquer phase is chosen among them. 
Given a CDCL solver, a CNF, a cube, and a time limit in seconds, the function SolveCube runs the CDCL solver with the time limit on the CNF, and returns the runtime in seconds and an answer whether the CNF is satisfiable or not. The algorithm operates in two possible modes. In the estimating mode, the algorithm terminates upon reaching a time limit by the CDCL solver on any subproblem in random samples. In the incomplete SAT-solving mode, the algorithm terminates upon finding a satisfying assignment. The first mode is aimed at estimating the hardness of a given CNF, while the second one is aimed at finding a satisfying assignment of a satisfiable CNF. The proposed algorithm has the following features. 1. A stack is used to preselect promising thresholds on the first stage in order to start the second stage with solving the simplest subproblems (with lowest n). In practice it allows obtaining some estimation quickly and then improve it. 13 ----- **Algorithm 5 Finding a cutoff threshold with minimal estimated runtime of the conquer** phase. **Input: CNF** ; lookahead solver ls; threshold decreasing step nstep; maximal number of _F_ cubes maxc; minimal number of refuted leaves minr; sample size N ; CDCL solver cs; CDCL solver time limit maxcst; the number of CPU cores cores; operating mode mode. **Output: A threshold nbest with runtime estimation ebest and cubes cbest; whether a satis-** fying assignment is found isSAT . 1: isSAT `Unknown` _←_ 2: nstart ← Varnum(F) − _nstep_ 3: ⟨nbest, ebest, cbest⟩←⟨nstart, +∞, {}⟩ 4: stack ← PreselectThresholds(F, ls, nstart, nstep, maxc, minr) _▷_ First stage 5: while stack is not empty do _▷_ Second stage: estimate thresholds 6: _n, c_ _stack.pop()_ _▷_ Get a threshold and cubes _⟨_ _⟩←_ 7: _sample_ SimpleRandomSample(c, N ) _▷_ Select N random cubes _←_ 8: _runtimes_ _←{}_ 9: **for each cube from sample do** 10: _t, st_ SolveCube(cs, _, cube, maxcst)_ _▷_ Add cube and solve _⟨_ _⟩←_ _F_ 11: **if t > maxcst and mode = estimating then** _▷_ If CDCL was interrupted, 12: **break** _▷_ stop processing sample. 13: **else** 14: _runtimes.add(t)_ _▷_ Add proper runtime 15: **if st = True then** _▷_ If SAT, 16: _isSAT_ `True` _←_ 17: **if mode = solving then** _▷_ and if SAT solving mode, 18: **return ⟨nbest, ebest, cbest, isSAT** _⟩_ _▷_ return SAT immediately. 19: **if Size(runtimes) < N then** _▷_ If at least one interrupted in sample, 20: **break** _▷_ stop main loop. 21: _e_ Mean(runtimes) Size(c)/cores _▷_ Calculate runtime estimation _←_ _·_ 22: **if e < ebest then** 23: _⟨nbest, ebest, cbest⟩←⟨n, e, c⟩_ _▷_ Update best threshold 24: return ⟨nbest, ebest, cbest, isSAT _⟩_ 2. The runtime of the cubing phase is not estimated because it is assumed that this is negligible compared to the conquer phase. 3. In the estimating mode, if on the second stage a CDCL solver fails solving some subproblem within time limit, the algorithm terminates. This is done because in this case it is impossible to calculate a meaningful estimation for the threshold. Another reason is that subproblems from the next thresholds (with higher n) will likely be even harder. 4. It is possible that satisfying assignments are found when solving subproblems from random samples. Indeed, if a given CNF is satisfiable, then cubes which imply satisfying assignments might be chosen to samples. 14 ----- 5. 
In the estimating mode, even if a satisfying assignment is found when solving some subproblem from samples, the algorithm does not terminate because in this case the main goal is to calculate a runtime estimation. 6. In the estimating mode it is a general algorithm that is able to estimate the hardness of an arbitrary CNF. 7. In the incomplete SAT-solving mode, a solution can be found only for a satisfiable CNF, and even in this case this is not guaranteed because of the time limit for the CDCL solver (see (Kautz, Sabharwal, & Selman, 2021)). 8. In fact, the runtime estimation is a stochastic costly black-box objective function (see, e.g., (Audet & Hare, 2017; Semenov, Zaikin, & Kochemazov, 2021)) that takes an integer value n as input. The algorithm minimizes this objective function. Since all details of Algorithm 5 are given, it now can be compared to Algorithms A and B (see the beginning of this section). It is clear that the idea is the same in all three algorithms — for a certain value of the cutoff threshold, a sample of cubes is formed, the corresponding subproblems are solved, and finally a runtime estimation is calculated. However, there are several major differences which are listed below. 1. Algorithms A and B were described informally and briefly, while Algorithm 5 is presented formally and in detail. 2. In opposite to Algorithms A and B, Algorithm 5 takes into account the situation when some subproblems from a sample are so hard that they can not be solved in reasonable time by a CDCL solver. 3. Algorithms A and B assume that on the conquer phase subproblems are solved incrementally, while Algorithm 5 assumes that each subproblem is solved by an independent call of a CDCL solver. The main difference is the second one. This feature of Algorithm 5 is extremely important in application to cryptanalysis problems, which are considered in the rest of the present paper. The reason is that in this case subproblems in a sample usually differ much (by thousands and even millions of times) in CDCL solver’s runtime. A possible explanation why this feature was not taken into account in both Algorithms A and B is that they were applied to combinatorial and geometric problems, where subproblems’ hardness in a sample is usually uniform. Importance of the third feature follows from the second one — incremental solving pays off in case of the uniform hardness, otherwise it can significantly slow down the solving process. When a cutoff threshold is found by Algorithm 5, the conquer phase operates as follows (see Subsection 2.2). First, subproblems are created by adding cubes to the original CNF in the form of unit clauses. Second, the subproblems are solved by the same CDCL solver that was used to find the threshold for the cubing phase. In the present study the goal is to find all solutions of a considered inversion problem. That is why, given a subproblem, the CDCL solver finds all its satisfying assignments. In opposite to the cubing phase, here the runtime of the CDCL solver is not limited. 15 ----- ### 5. Considered Inversion Problems and Their SAT Encodings This section describes the considered inversion problems for step-reduced MD4 and MD5, as well as their SAT encodings. Following all earlier attempts to invert step-reduced MD4 via SAT (see, e.g., (De et al., 2007; Legendre et al., 2012; Lafitte, Jr., & Heule, 2014; Gribanova & Semenov, 2018)), the padding is omitted (see Subsection 2.4) and only one 512-bit message block is considered. 
It means that in fact a step-reduced MD4 compression function is considered when it operates on the first block, like it was shown in Algorithm 1. The final incrementing is also omitted since it should be done only after 48-th step. Note that these restrictions does not make inversion problems easier since the compression function is the main component of MD4 function from the resistance point view. Inversion of step-reduced MD5 is considered in similar way. **5.1 Considered hashes** The following four hashes are chosen for inversion: 1. 0x00000000 0x00000000 0x00000000 0x00000000; 2. 0xffffffff 0xffffffff 0xffffffff 0xffffffff; 3. 0x01234567 0x89abcdef 0xfedcba98 0x76543210; 4. 0x62c7Ec0c 0x751e497c 0xd49a54c1 0x2b76cff8. Recall that 0hash and 1hash mean the first and the second hash from the list, respectively. These two hashes are chosen for inversion because it is a common practice in the cryptographic community. The reason is that inverting some hash, that looks just like a random word, is suspicious. Indeed, one can take a random message, produce its hash and declare that this very hash is inverted. On the other hand, if a hash has a regular structure, this approach does not work. All 0s and all 1s are two hashes with the most regular structure, that is why they are usually chosen. For the first time 32-step MD4 was inverted for 0hash (Dobbertin, 1998), while in 39-step case it was done for 1hash (De et al., 2007), and later for 0hash (Legendre et al., 2012). As for the cryptographic hash functions SHA-0 and SHA-1, their 23-step (out of 80) versions for the first time were inverted for _0hash (Legendre et al., 2012)._ The third hash from the list was used to invert 28-step MD5 in (Legendre et al., 2012). Hereinafter this hash is called symmhash. It is symmetrical — the last 64 bits are the first 64 bits in reverse order, but at the same time it is less regular than 1hash or 0hash. The same result for 28-step MD5 was described in later two papers of the same authors. Unfortunately, none of these three papers explained the non-existence of the results for _1hash and 0hash._ The fourth hash from the list is chosen randomly. The goal is to show that the proposed approach is applicable not only to hashes with regular structure. This hash is further called _randhash._ 16 ----- **5.2 Step-reduced MD4** While in (De et al., 2007) only K = 0x00000000 was used as a constant in the Dobbertin’s constraints, in the present study both K = 0x00000000 and K = 0xffffffff are tried in Dobbertin-like constraints. The constraint in step 12 is chosen for the modification (so _p = 12, see Subsection 3.1) since in (De et al., 2007) the constraint for this very step was_ entirely omitted. Eight step-reduced versions of MD4, from 40 to 47 steps, as well as the full MD4 are studied. Hence there are 9 4 2 = 72 MD4-related inversion problems in _×_ _×_ total. None of these 72 inversion problems have been solved so far. Consider 40-step MD4. Since K has two values and y has four values, Algorithm 3 should be run on eight inputs. As a result, according to the notation from Subsection 3.1, the following eight inversion problems are formed in the corresponding first iterations of Algorithm 3: 1. MD4inversion(0hash, 40, 0x00000000, 12, 0x00000000); 2. MD4inversion(0hash, 40, 0xffffffff, 12, 0x00000000); 3. MD4inversion(1hash, 40, 0x00000000, 12, 0x00000000); 4. MD4inversion(1hash, 40, 0xffffffff, 12, 0x00000000). 5. MD4inversion(symmhash, 40, 0x00000000, 12, 0x00000000); 6. 
MD4inversion(symmhash, 40, 0xffffffff, 12, 0x00000000); 7. MD4inversion(randhash, 40, 0x00000000, 12, 0x00000000); 8. MD4inversion(randhash, 40, 0xffffffff, 12, 0x00000000). For illustrative purpose, consider the first case: invert 0hash produced by 40-step MD4 with Dobbertin’s constraints and K = 0x00000000. If no preimage exists for this inversion problem, then on the second iteration of Algorithm 3 L is increased by 1, so the inversion problem MD4inversion(0hash, 40, 0x00000000, 12, 0x00000001) is formed and so on. **5.3 Step-reduced MD5** Inversion of only 28-step MD5 compression function is considered in this study for the four hashes presented above. Recall that in opposite to MD4, no additional constraints that reduce the number of preimages are added. Note that for all hashes but symmhash the inversion problems have not been solved earlier. **5.4 SAT Encodings** It is possible to construct CNFs that encode MD4 and MD5 via the following automatic tools: CBMC (Clarke, Kroening, & Lerda, 2004); SAW (Carter, Foltzer, Hendrix, Huffman, & Tomb, 2013); Transalg (Semenov, Otpuschennikov, Gribanova, Zaikin, & Kochemazov, 2020); CryptoSAT (Lafitte, 2018). In the present paper, the CNFs are constructed via Transalg of version 1.1.5[1]. This tool takes a description of an algorithm as an input 1. https://gitlab.com/transalg/transalg 17 ----- and outputs a CNF that implements the algorithm. The description must be formulated in a domain specific C-like language called TA language. The TA language supports the following basic constructions used in procedural languages: variable declarations; assignment operators; conditional operators; loops, function calls. Additionally it supports various integer operations and bit operations including bit shifting and comparison that is quite handy when describing a cryptographic algorithm. A TA program is a list of functions in TA language. All the constructed CNFs and the corresponding TA programs are available online[2]. All these CNFs can be easily reconstructed by giving the TA programs to Transalg as inputs. In a CNF that encodes step-reduced MD4, the first 512 variables correspond to a message, the last 128 variables correspond to a hash, while the remaining auxiliary variables are needed to encode how the hash is produced given the message. The first 512 variables are further called message variables, while the last 128 ones — hash variables. The Tseitin transformations are used in Transalg to introduce auxiliary variables (Tseitin, 1970). Characteristics of the constructed CNFs are given in Table 2. Table 2: Characteristics of CNFs that encode the considered step-reduced MD4 and MD5. Function Variables Clauses Literals MD4-40 7025 70 809 317 307 MD4-41 7211 73 158 329 330 MD4-42 7397 75 507 341 353 MD4-43 7583 77 856 353 376 MD4-44 7769 80 205 365 399 MD4-45 7955 82 554 377 422 MD4-46 8141 84 903 389 445 MD4-47 8327 87 252 401 468 MD4-48 8513 89 601 413 491 MD5-28 7471 54 672 216 362 The CNF that encodes 40-step MD4 has 7 025 variables and 70 809 clauses. Then every step adds 186 variables and 2 349 clauses, so as a result a CNF that encodes the full (48-step) MD4 has 8 513 variables and 89 601 clauses. Note that these CNFs encode the functions themselves, so all message and hash variables are unassigned. To obtain a CNF that encodes an inversion problem for a given 128-bit hash, 128 corresponding one-literal clauses are to be added, so all hash variables become assigned. The problem is to find values of the message variables. 
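To make this last step concrete, here is a minimal sketch (an illustration, not part of Transalg; the helper name is hypothetical) of how the 128 hash variables of such a CNF are fixed by appending one-literal clauses in the usual DIMACS sign convention; the variable-numbering convention is the one stated above, namely that the last 128 variables are the hash variables. Cubes are attached to a CNF in the conquer phase in exactly the same way.

```python
def fix_hash_bits(clauses, num_vars, hash_bits):
    """Append one-literal clauses that assign the 128 hash variables.

    clauses:   list of clauses, each a list of signed DIMACS literals
    num_vars:  total number of variables in the CNF; by the encoding
               convention above, the last 128 variables are hash variables
    hash_bits: the target hash as 128 bits (0 or 1)"""
    assert len(hash_bits) == 128
    first_hash_var = num_vars - 127  # hash variables occupy the last 128 indices
    for offset, bit in enumerate(hash_bits):
        var = first_hash_var + offset
        clauses.append([var] if bit else [-var])
    return clauses
```

For 28-step MD5, for instance, this turns the 54 672-clause CNF into one with 54 800 clauses, matching the numbers reported below (the ordering of bits within the 128 hash variables is an assumption of this sketch).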
The Dobbertin’s constraints are added as another 384 one-literal clauses (32 clauses for each constraint). As a result, a CNF that encodes the inversion of 40-step MD4 with all 12 Dobbertin’s constraints has 7 025 variables and 71 321 clauses, while that for the 48-step version consists of 8 513 variables and 90 113 clauses. Note that Dobbertin-like constraints (see Subsection 3.1) are also added as 384 one-literal clauses — the only difference is in values of the corresponding 32 variables that encode the specially constrained register. 2. https://github.com/olegzaikin/EnCnC 18 ----- The CNF that encodes 28-step MD5 has 7 471 variables and 54 672 clauses. A CNF that encodes an inversion problem has 7 471 variables and 54 800 clauses since only 128 one-literal clauses for hash variables are added. ### 6. Inverting 40-step MD4 via Dobbertin-like Constraints This section describes experimental setup, simplification, and results for 40-step MD4. Assume that several given hashes for a step-reduced MD4 are to be inverted. Then Algorithm 3 and the estimating mode of Algorithm 5 can be used in the following combinations: 1. The estimating mode of Algorithm 5 is run on a CNF that encodes the inversion problem for an arbitrary hash among given ones, yet the Dobbertin’s constraints are fully applied, i.e. L = 0x00000000. When the best cutoff threshold is found, Algorithm 3 is iteratively run using a Cube-and-Conquer solver with the found threshold as algorithm on all given hashes. It means that the threshold found for one hash _A_ and L = 0x00000000 is used for all other hashes and values of L. 2. For each hash, its own best threshold is found for L = 0x00000000 and is used for all other values of L. In Algorithm 3, is again a Cube-and-Conquer solver with the _A_ found threshold. 3. For each hash and each value of L its own threshold is found. Therefore is the _A_ estimating mode of Algorithm 5 followed by a Cube-and-Conquer solver. In this study, the second combination is used since for any value of L the same amount of one-literal clauses is added to a CNF. **6.1 Experimental Setup** Algorithm 3 was implemented in Python, while Algorithm 5 and the conquer phase of Cube-and-Conquer were implemented in C++ as a parallel SAT solver Estimate-andCube-and-Conquer (EnCnC). The implementation is available online[3]. All experiments in this paper were held on a personal computer equipped with the 12core CPU AMD 3900X and 48 Gb of RAM. The implementations are multithreaded, so all 12 CPU cores were employed in all runs. In case of Algorithm 5, values of a cutoff threshold _n and then subproblems from samples are processed in parallel. In case of the conquer_ phase, subproblems are processed in parallel. The input parameters’ values of Algorithm 3 in case of 40-step MD4 were discussed in Section 5. As for Algorithm 5, the following input parameters’ values were used: - March cu lookahead solver (Heule et al., 2011) since it has been recently successfully applied to several hard problems (Heule et al., 2016; Heule, 2018b). - nstep = 5. It was chosen in preliminary experiments. If this parameter is equal to 1, then a better threshold usually can be found, but at the same time Algorithm 5 3. https://github.com/olegzaikin/EnCnC 19 ----- becomes quite time-consuming. On the other hand, if nstep is quite large, e.g., 50, then as a rule almost all most promising thresholds are just skipped. - maxc = 2 000 000. 
On the considered CNFs, March cu reaches 2 000 000 cubes in about 30 minutes, so this value of maxc looks reasonable. Higher values were also tried, but they did not give any improvement.
- minr = 1 000. If it is less than 1 000, then subproblems are too hard because they are not simplified enough by lookahead. At the same time, a higher value of this parameter did not allow collecting enough promising thresholds.
- N = 1 000. First N = 100 was tried, but it led to overly optimistic estimations which were several times lower than the real solving time. On the other hand, N = 10 000 is too time-consuming and gives just a modest improvement in accuracy compared to N = 1 000. The accuracy of the obtained estimations is discussed later in Subsection 7.1.
- Kissat CDCL solver of version sc2021 (Biere, Fleury, & Heisinger, 2021a). The reason is that Kissat and its modifications won SAT Competitions 2020 and 2021.
- maxst = 5 000 seconds. It is a standard time limit in SAT Competitions (see, e.g., (Balyo, Froleyks, Heule, Iser, Järvisalo, & Suda, 2021)), so modern CDCL solvers are designed to show all their power within this time.
- cores = 12.
- mode = estimating. Here the goal is not just to find one preimage, but rather to find all preimages for a given inversion problem (up to the added Dobbertin-like constraints).

It should be noted that in both Algorithm 5 and the conquer phase of Cube-and-Conquer, subproblems were solved by Kissat in the non-incremental mode, i.e. it solved them independently from each other.

**6.2 Simplification**

In the case of 40-step MD4, two parameters were varied for each of the four considered hashes (see Section 5.1). The first one is the value of the Dobbertin's constant K (see Section 3): 0x00000000 and 0xffffffff. The second one is the simplification type applied to a CNF. The motivation behind varying the second parameter is as follows. First, it is crucial to simplify a CNF before giving it to a lookahead solver. Second, in preliminary experiments it was found that the simplification type can significantly alter the effectiveness of Cube-and-Conquer on the considered problems.

The CDCL solver CaDiCaL of version 1.5.0 (Froleyks & Biere, 2021) was used to simplify the CNFs. This solver uses inprocessing, i.e. a given CNF is simplified during the CDCL search (Biere, 2011). The more conflicts a CDCL solver has generated so far, the more simplified (in terms of the number of variables) the CNF becomes. A natural simplification measure in this case is the number of generated conflicts. In the experiments related to 40-step MD4, the following limits on the number of generated conflicts were tried: 1, 10 thousand, 100 thousand, 1 million, 10 million. Note that 1 conflict

20 -----

as the limit in some cases gives the same result as UP (see Subsection 2.1), while in the remaining cases the corresponding CNF is slightly smaller. For example, consider the problem MD4inversion(1hash, 40, 0xffffffff, 12, 0x00000000). Table 3 presents characteristics of six CNFs which encode this problem. The original (unsimplified) CNF is described by the number of variables, clauses, and literals. For those simplified by CaDiCaL, the runtime on 1 CPU core is also given.

Table 3: CNFs that encode MD4inversion(1hash, 40, 0xffffffff, 12, 0x00000000). The best values are marked with bold.
Simplif. type | Variables | Clauses | Literals | Simplif. runtime
no (original CNF) | 7025 | 71 321 | 317 819 | -
1 conflict | 3824 | 33 371 | 138 820 | 0.02 sec
10 thousand conflicts | 2969 | 27 355 | 116 618 | 0.31 sec
100 thousand conflicts | 2803 | 23 121 | 94 250 | 4.29 sec
1 million conflicts | 2756 | **22 391** | **90 412** | 1 min 19 sec
10 million conflicts | **2054** | 24 729 | 110 267 | 33 min

It is clear that at first the number of variables, clauses, and literals decreases, but then 10 million conflicts yields a lower number of variables, while the number of clauses and literals is higher than with 1 million conflicts. For other hashes and values of L the picture is similar.

In preliminary experiments the limits of 1 thousand and 100 million conflicts were also tried. However, it turned out that the first variant is usually similar to 1 conflict in the number of variables and clauses. The second variant was in all cases similar to 10 million conflicts in the number of variables, though the number of clauses was a bit lower. Yet generating 100 million conflicts is quite time-consuming — it takes about 1 day on average. On the other hand, 10 million conflicts are generated in about half an hour. That is why these two simplification types, 1 thousand and 100 million conflicts, were omitted.

**6.3 Experiments**

Of course, more parameters can be varied for each hash in addition to K and the simplification type mentioned in the previous subsection. One of the most natural is the CDCL solver used in Cube-and-Conquer. For example, a cryptanalysis-oriented solver can be chosen (Soos, Nohl, & Castelluccia, 2009; Nejati & Ganesh, 2019; Kochemazov, 2021). Moreover, internal parameters of the chosen CDCL solver can be varied as well.

Recall that there are 4 hashes and 5 simplification types, while K has 2 values. Therefore, in total 4 × 5 × 2 = 40 CNFs were constructed with fully applied Dobbertin's constraints (L = 0x00000000) for MD4-40. On each of them the first iteration of Algorithm 3 was run. It turned out that Algorithm 5 could not find any estimation for the 20 CNFs with K = 0x00000000. The reason is that in all these cases Kissat was interrupted due to the time limit even for the simplest (lowest) values of the cutoff threshold n. On the other hand, for K = 0xffffffff much more positive results were achieved. For 0hash, symmhash, and randhash, estimations for all simplification types were successfully calculated, and the

21 -----

best one was 1 conflict in all these cases. On the other hand, for 1hash no estimations were found for 1 conflict and 10 thousand conflicts, while the best estimation was gained for 1 million conflicts. The results are presented in Table 4. For each pair (simplification type, hash) the best estimation for 12 CPU cores, the corresponding cutoff threshold, and the number of cubes are given. Here “-” means that no estimation was obtained because Kissat was interrupted on the simplest threshold. Runtimes of Algorithm 5 are not presented there, but on average it took about 2 hours for K = 0x00000000 and about 3 hours for K = 0xffffffff.

Table 4: Runtime estimations for 40-step MD4. The best estimations are marked with bold.
Hash | Simplif. conflicts | e_best | n_best | cubes
0 | **1** | **15 h 33 min** | **3290** | **303 494**
0 | 10 thousand | 21 h 43 min | 2530 | 210 008
0 | 100 thousand | 52 h 32 min | 2485 | 107 657
0 | 1 million | 22 h 19 min | 2400 | 148 518
0 | 10 million | 34 h 27 min | 1895 | 69 605
1 | 1 | - | - | -
1 | 10 thousand | - | - | -
1 | 100 thousand | 81 h 31 min | 2535 | 362 429
1 | **1 million** | **42 h 43 min** | **2510** | **182 724**
1 | 10 million | 991 h 12 min | 1890 | 1 671 849
symm | **1** | **19 h 16 min** | **3395** | 80 491
symm | 10 thousand | 29 h 47 min | 2725 | 181 267
symm | 100 thousand | 22 h 44 min | 2615 | 60 403
symm | 1 million | 21 h 11 min | 2530 | 151 567
symm | 10 million | 59 h 28 min | 1945 | 189 744
rand | **1** | **14 h 27 min** | **3400** | **75 823**
rand | 10 thousand | 227 h 54 min | 2660 | 1 098 970
rand | 100 thousand | 20 h 22 min | 2540 | 159 942
rand | 1 million | 17 h 33 min | 2455 | 225 854
rand | 10 million | 81 h 3 min | 1915 | 242 700

Figure 1 depicts how the objective function was minimized on the inversion problem for 0hash. Here 10k stands for 10 thousand conflicts, 1m for 1 million conflicts, and so on. The figures for the remaining three hashes can be found in Appendix A.

In Section 4 it was mentioned that in the estimating mode of Algorithm 5 it is possible to find satisfying assignments of a given satisfiable CNF. That is exactly what happened for symmhash — a satisfying assignment was found for the CNF simplified by 100 thousand conflicts. It means that a preimage for symmhash generated by 40-step MD4 was found in just a few hours during the search for good thresholds for the cubing phase. However, the goal was to find all preimages of the considered inversion problems (up to the chosen value of L). That is why, using the cubes produced with the help of the best cutoff thresholds, the conquer phase was run on all four inversion problems: 1-conflict-based for 0hash, symmhash,

22 -----

and randhash; 1-million-conflicts-based for 1hash. As a result, all subproblems were solved successfully. The subproblems' solving times in the case of 0hash are shown in Figure 2.

Figure 1: Minimization of the objective function on 40-step MD4, 0hash. The intersection of two dotted lines shows the best estimation. (Plot: estimation versus the number of cubes; one curve per simplification type: 1, 10k, 100k, 1m, 10m.)

Figure 2: Kissat runtimes on subproblems from the conquer phase applied to MD4inversion(0hash, 40, 0xffffffff, 12, 0x00000000). (Dotted lines mark the mean and median runtimes.)

23 -----
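For intuition about how estimations like those in Table 4 can be produced, here is a minimal sketch of a sample-based estimator. The exact formula used by Algorithm 5 is given in Section 4 and is not reproduced here; the function names, the uniform-sampling choice, and the scaling by the number of cores are assumptions of this sketch.

```python
import random
import statistics

def estimate_wall_time(cubes, solve_with_limit, sample_size=1000, cores=12):
    """Estimate the wall-clock conquer-phase time for one cutoff threshold.

    cubes: all cubes produced by the lookahead solver for this threshold
    solve_with_limit(cube) -> CDCL runtime in seconds, or None if the
        solver was interrupted by the time limit (maxst)"""
    sample = random.sample(cubes, min(sample_size, len(cubes)))
    times = [solve_with_limit(cube) for cube in sample]
    if any(t is None for t in times):
        return None  # the sample contains too-hard subproblems: no estimation
    # Scale the mean sampled runtime to the full set of cubes and assume
    # an even distribution of the subproblems over the CPU cores.
    return statistics.mean(times) * len(cubes) / cores
```

With the Table 4 numbers for 0hash (303 494 cubes, estimation 15 h 33 min on 12 cores), an estimator of this shape corresponds to a mean sampled runtime of roughly 2.2 seconds, which is consistent with the mean of 2.84 seconds later observed over all subproblems in Table 5.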
For 0hash and 1hash, no satisfying assignments were found; therefore, the corresponding inversion problems have no solutions. On the other hand, satisfying assignments were found for the hashes symmhash and randhash. The found thresholds, estimations, and the real runtimes are presented in Table 5. In the header, sol stands for the number of solutions. Note that the best estimation e_best was calculated only for L = 0x00000000, so for other values of L it is given as “-”. The right three columns present subproblems' statistics: mean solving time, maximum solving time, and standard deviation of the times (in seconds). The minimum solving time is not reported since it was equal to 0.007 seconds in all cases.

Table 5: Estimated and real runtimes (on 12 CPU cores) of the conquer phase for inversion problems related to 40-step MD4. The best estimations from Table 4 are presented.

Hash | L | e_best | real time | sol | mean | max | sd
0 | 0x00000000 | 15 h 33 min | 20 h 9 min | 0 | 2.84 sec | 1 h | 13.68
0 | 0x00000001 | - | 19 h 25 min | 0 | 2.61 sec | 29 min | 10.22
0 | 0x00000002 | - | 34 h 27 min | 1 | 5.7 sec | 38 min | 20.62
1 | 0x00000000 | 42 h 43 min | 48 h 29 min | 0 | 11.5 sec | 26 min | 30.23
1 | 0x00000001 | - | 59 h 7 min | 0 | 4.08 sec | 17 min | 11.4
1 | 0x00000002 | - | 28 h 1 min | 1 | 7.7 sec | 17 min | 18.68
symm | 0x00000000 | 19 h 16 min | 20 h 45 min | 2 | 11.24 sec | 18 min | 21.1
rand | 0x00000000 | 14 h 27 min | 15 h 48 min | 1 | 9.08 sec | 38 min | 21.59

The next iteration of Algorithm 3 (with L = 0x00000001) was executed for 0hash and 1hash. Note that the same simplification and cutoff threshold as for L = 0x00000000 were applied to the corresponding CNFs. The conquer phase again did not find any satisfying assignment. Finally, preimages for both hashes were found on the third iteration (L = 0x00000002), see Table 5. All found preimages are presented in Table 6. The obtained results will be discussed in the next section.

Table 6: Found preimages for 40-step MD4.

Hash | Preimage
0 | 0xe57d8668 0xa57d8668 0xa57d8668 0xbc8c857b 0xa57d8668 0xa57d8668 0xa57d8668 0xcb0a1178 0xa57d8668 0xa57d8668 0xa57d8668 0x307bc4e7 0xad02e703 0xe1516b23 0x981c2a75 0xc08ea9f7
1 | 0xe57d8668 0xa57d8668 0xa57d8668 0x1d236482 0xa57d8668 0xa57d8668 0xa57d8668 0x97a13204 0xa57d8668 0xa57d8668 0xa57d8668 0x991ede3 0x301e2ac3 0x5bed2a3d 0xe167a833 0x890d22f0
symm | 0xa57d8668 0xa57d8668 0xa57d8668 0xc8cf2f7c 0xa57d8668 0xa57d8668 0xa57d8668 0x61915bc1 0xa57d8668 0xa57d8668 0xa57d8668 0x2c017cc4 0xda6acfa2 0x55e9f993 0x50d83f7b 0x2d7d47a6
symm | 0xa57d8668 0xa57d8668 0xa57d8668 0x154f3b86 0xa57d8668 0xa57d8668 0xa57d8668 0x95b7616d 0xa57d8668 0xa57d8668 0xa57d8668 0xf3ca15df 0x7eb66f5e 0x446dc43f 0x7d8e2888 0xafe37a76
rand | 0xa57d8668 0xa57d8668 0xa57d8668 0xbb809ab0 0xa57d8668 0xa57d8668 0xa57d8668 0xab67285f 0xa57d8668 0xa57d8668 0xa57d8668 0x85517639 0xc3eab3d 0x6edfba39 0xa1512693 0xaa686ac9

### 7. Inverting 41-, 42-, and 43-step MD4 via Dobbertin-like Constraints

This section presents results on inverting 41-, 42-, and 43-step MD4. Finally, all MD4-related results are discussed.

Recall that in the previous section on inverting 40-step MD4, Algorithm 3 was run on 40 CNFs: for each of the 4 hashes, 2 values of K and 5 simplification types were tried. Note that K = 0x00000000 did not allow solving any 40-step-related problem. As for simplification types, for 3 hashes out of 4 the best estimations were obtained on 1-conflict-based CNFs, while for the remaining one 1 million conflicts was the best. Following these results, in this section only K = 0xffffffff is used, as well as only the two mentioned simplification types. Therefore, only 8 CNFs were constructed for 41-step MD4, and the same for 42-, 43-, and 44-step MD4. Also, it turned out that the best 40-step-related estimations were achieved when at most 303 494 cubes were produced, see Table 4. That is why in this section the value of maxc is reduced from 2 000 000 to 500 000. The remaining input parameters of Algorithm 5 are the same.

24 -----

The same approach was applied as in the previous section: for each pair (steps, hash), first the best cutoff threshold was found via Algorithm 5 for a CNF with added Dobbertin's constraints (L = 0x00000000), and then Algorithm 3 used the found threshold to run Cube-and-Conquer as a complete algorithm on each iteration. For 44 steps, no estimations were obtained.
On the other hand, for 41, 42, and 43 steps, estimations were successfully calculated, and they turned out to be comparable with those for 40 steps. Moreover, Algorithm 5 found preimages for two problems: 41 steps and 1hash; 42 steps and 0hash. In Section 4 it was discussed that such a situation is possible if a given CNF is satisfiable. The found estimations for 43-step MD4 are presented in Table 7. For all hashes, 1 conflict was the best. For 41 steps, 1 conflict was better on 0hash and 1hash, while on the remaining two hashes the 1-million-conflicts-based simplification was the winner. On 42-step MD4, 1 conflict was the best for all hashes except 1hash.

Table 7: Runtime estimations for 43-step MD4. The best estimations are marked with bold.

Hash | Simplif. conflicts | e_best | n_best | cubes
0 | **1** | **15 h 26 min** | **3 390** | **103 420**
0 | 1 million | - | - | -
1 | **1** | **39 h 10 min** | **3 395** | **98 763**
1 | 1 million | 52 h 5 min | 2 575 | 121 969
symm | **1** | **37 h 51 min** | **3 395** | **81 053**
symm | 1 million | 50 h 7 min | 2 555 | 253 489
rand | **1** | **49 h 13 min** | **3 385** | **120 619**
rand | 1 million | 86 h 23 min | 2 565 | 246 972

25 -----

Figure 3 depicts how the objective function was minimized on the inversion problem for 1hash in the case of 43 steps. Figures for the remaining three 43-step-related inversion problems can be found in Appendix A.

Figure 3: Minimization of the objective function on the inversion problem MD4inversion(1hash, 43, 0xffffffff, 12, 0x00000000). The intersection of two dotted lines shows the best estimation among all simplification types. (Plot: estimation versus the number of cubes; curves for the 1-conflict and 1-million-conflicts simplifications.)

Using the found cutoff thresholds, Algorithm 3 was run on all inversion problems with L = 0x00000000, and, as a result, for 43 steps preimages were found for all four hashes. For 41 and 42 steps, preimages were found on the first or the second iteration of Algorithm 3. The results are presented in Table 8. Here values 0 and 1 of L stand for 0x00000000 and 0x00000001, respectively, while sd stands for standard deviation in seconds. It can be seen that at least some inversion problems turned out to be easier than those for 40-step MD4. This phenomenon is discussed in the next subsection. The subproblems' solving times in the case of 43 steps and 1hash are shown in Figure 4. In Table 9, the found preimages for 43-step MD4 are presented. The corresponding tables for 41 and 42 steps can be found in Appendix B.

**7.1 Discussion**

**Correctness** The correctness of the found preimages was verified by the reference implementations from (Rivest, 1990). This verification can be easily reproduced, since MD4 is hard to invert but the direct computation is extremely fast. First, the additional actions (padding, incrementing, see Section 5), as well as the corresponding number of final steps, should be removed. Then the found preimages should be given as inputs to the compression function.

26 -----

Table 8: Estimated and real runtimes (on 12 CPU cores) of the conquer phase for inversion problems related to 41-, 42-, and 43-step MD4.
Steps | Hash | L | e_best | real time | sol | mean | max | sd
41 | 0 | 0 | 8 h 40 min | 10 h 11 min | 0 | 6.4 sec | 17 min | 16.77
41 | 0 | 1 | - | 21 h 23 min | 1 | 12.41 sec | 14 h 23 min | 421.41
41 | 1 | 0 | 37 h | 45 h 10 min | 3 | 9.78 sec | 52 min | 44.73
41 | symm | 0 | 19 h 54 min | 20 h 10 min | 0 | 12.08 sec | 17 min | 24.28
41 | symm | 1 | - | 20 h 15 min | 4 | 11.57 sec | 17 min | 23.66
41 | rand | 0 | 16 h 6 min | 17 h 25 min | 1 | 10.05 sec | 43 min | 31.07
42 | 0 | 0 | 19 h 36 min | 22 h 32 min | 3 | 11.68 sec | 19 min | 25.51
42 | 1 | 0 | 25 h 15 min | 29 h 19 min | 0 | 10.91 sec | 1 h 14 min | 45.61
42 | 1 | 1 | - | 39 h | 1 | 16.38 sec | 2 h 18 min | 86.32
42 | symm | 0 | 28 h 20 min | 29 h 35 min | 1 | 12.25 sec | 32 min | 19.98
42 | rand | 0 | 21 h 16 min | 21 h 30 min | 0 | 10.22 sec | 15 min | 18.51
42 | rand | 1 | - | 20 h 35 min | 3 | 9.34 sec | 13 min | 16.71
43 | 0 | 0 | 15 h 26 min | 17 h 14 min | 2 | 7.23 sec | 16 min | 16.6
43 | 1 | 0 | 39 h 10 min | 42 h 16 min | 1 | 18.64 sec | 39 min | 29.88
43 | symm | 0 | 37 h 51 min | 41 h 55 min | 1 | 22.59 sec | 34 min | 46.44
43 | rand | 0 | 49 h 13 min | 51 h 21 min | 1 | 18.51 sec | 46 min | 30.41

Figure 4: Kissat runtimes on subproblems from the conquer phase applied to MD4inversion(1hash, 43, 0xffffffff, 12, 0x00000000). (Dotted lines mark the mean and median runtimes.)

**Simplification** According to the estimations, in most cases the 1-conflict-based simplification is better than the more advanced simplifications. On the other hand, if only this simplification type had been chosen, then the inversion problem for 1hash produced by 40-step MD4 would have remained unsolved. The non-effectiveness of the advanced simplifications is an interesting phenomenon which is worth investigating in the future.

27 -----

Table 9: Found preimages for 43-step MD4.

Hash | Preimage
0 | 0xa57d8668 0xa57d8668 0xa57d8668 0xf48a97a3 0xa57d8668 0xa57d8668 0xa57d8668 0xd330e8ed 0xa57d8668 0xa57d8668 0xa57d8668 0x37c9ca21 0xe1df551f 0x7f49d66a 0x135a1c93 0x9e744bdb
0 | 0xa57d8668 0xa57d8668 0xa57d8668 0xb289afa0 0xa57d8668 0xa57d8668 0xa57d8668 0xaf2c850e 0xa57d8668 0xa57d8668 0xa57d8668 0x19c5ce09 0xcae6b29e 0xb2595b20 0xab3a433d 0xf6cdee42
1 | 0xa57d8668 0xa57d8668 0xa57d8668 0x82ef987a 0xa57d8668 0xa57d8668 0xa57d8668 0xe18fbc3b 0xa57d8668 0xa57d8668 0xa57d8668 0x558f3513 0xbf09004d 0x8fb490dd 0x502eca9 0xbd0e1a80
symm | 0xa57d8668 0xa57d8668 0xa57d8668 0xd1c33d35 0xa57d8668 0xa57d8668 0xa57d8668 0xc8519181 0xa57d8668 0xa57d8668 0xa57d8668 0x8157aaf2 0xd7bdc37b 0xe52f3348 0xf17901d9 0x7e2de5a4
rand | 0xa57d8668 0xa57d8668 0xa57d8668 0x24f0e099 0xa57d8668 0xa57d8668 0xa57d8668 0xe57e4c54 0xa57d8668 0xa57d8668 0xa57d8668 0x8fbbadcd 0xc0326ae6 0xe0e6a048 0x6217a3b9 0x15ee5a3b

**Classes of subproblems** Figures 2 and 4 show that in the conquer phase about 25% of the subproblems are extremely easy (the runtime is less than 0.1 second) and there is a clear gap between these subproblems and the remaining ones. Since this gap is much lower than the mean and median runtime, it seems promising to solve all extremely easy subproblems beforehand and apply the corresponding reasoning to the remaining subproblems; a sketch of this two-pass idea is given after the next paragraph.

**Accuracy of estimations** The obtained estimations can be treated as accurate, since they are close to the real solving times (see Tables 5 and 8). On average, the real time on inversion problems with L = 0x00000000 is 11% higher than the estimated time, while in the worst case, for 40-step MD4 and 0hash, the real time is 30% higher. As for the real time on inversion problems with L = 0x00000001 and L = 0x00000002, the picture is different. In some cases, the real time is still close to the estimated time for L = 0x00000000. However, for MD4inversion(0hash, 41, 0xffffffff, 12, 0x00000001) the real time is 2.5 times higher, while the standard deviation is also very high. It can be concluded that heavy-tail behavior occurs in this case (Gomes & Sabharwal, 2021). These results might indicate that it is better to find a separate cutoff threshold for each value of L, which corresponds to the 3rd combination of Algorithm 3 and Algorithm 5 described at the beginning of Section 6. Note that for those problems where their own thresholds were used, i.e. when L = 0x00000000, the heavy-tail behavior does not occur.
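The two-pass idea from the “Classes of subproblems” paragraph can be sketched as follows. This is not part of EnCnC: solve_subproblem is a hypothetical callback, and the 0.1-second cutoff is simply the gap observed in Figures 2 and 4.

```python
def two_pass_conquer(subproblems, solve_subproblem, easy_limit=0.1):
    """First knock out the extremely easy subproblems, then solve the rest.

    solve_subproblem(sub, limit=None) -> ('SAT' | 'UNSAT' | 'TIMEOUT', runtime);
    with limit=None the call is never interrupted."""
    remaining = []
    for sub in subproblems:
        status, _ = solve_subproblem(sub, limit=easy_limit)  # pass 1
        if status == 'TIMEOUT':
            remaining.append(sub)
    # Roughly 75% of subproblems are expected to survive pass 1; any
    # special reasoning would be applied only to them in pass 2.
    for sub in remaining:
        solve_subproblem(sub)
    return remaining
```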
**Hardness of inversion problems** It might seem counterintuitive that for 40-43 steps the hardness of the inversion problems is in fact more or less similar. Recall that when Dobbertin's constraints are applied, the values of 9 32-bit message words (out of 16), with indices 0, 1, 2, 4, 5, 6, 8, 9, 10, are derived automatically (in a CNF this is done by UP), so only 7 words remain unknown (see Subsection 2.5). It means that in the CNF 224 message bits are unknown, compared to 512 message bits when Dobbertin's constraints are not added.

28 -----

The same holds true for Dobbertin-like constraints. On the 40th step, the register value is updated via a nonlinear function that takes as input an unknown word M[14] along with registers' values. That is why the 40th step gives a leap in hardness compared to 39 steps. On the next 8 steps, message words with the following indices are used for updating: 1, 9, 5, 13, 3, 11, 7, 15. It means that on steps 41, 42, and 43 the nonlinear function operates with the known (constant) words M[1], M[9], and M[5], respectively. Therefore, steps 41-43 do not add any hardness; rather, additional connections between registers' values are added. As for the remaining steps 44-48, unknown message words are used for updating, so each of these steps gives a new leap in hardness. That is why no estimations were calculated for 44 steps earlier in this section — these inversion problems are much harder.

**Partially constant preimages** In all found preimages for steps 40 and 43, 9 out of 16 32-bit message words are equal to 0xa57d8668. These are the automatically derived message words, which depend on K. Recall that K = 0xffffffff was used in all cases. However, in some preimages for 41 and 42 steps, M[0] = 0x257d8668 while all remaining 8 message words are equal to 0xa57d8668. The reason is that in these cases the preimages were found not in the first iteration of Algorithm 3, so on the 13th step the constant was not K, but rather its slightly modified value.

### 8. Inverting Unconstrained 28-step MD5

As was mentioned in Subsection 2.6, Dobbertin's constraints are not applicable to MD5. That is why in this study 28-step MD5 is inverted without adding any additional constraints, as was done in (Legendre et al., 2012). Recall that in this case, for an arbitrary hash, there are about 2^384 preimages (a 512-bit message block is mapped to a 128-bit hash, so on average 2^512/2^128 = 2^384 messages map to each hash), but it is not easy to find any of them. Algorithm 5 in its estimating mode is not applicable to MD5 either, because the cubing phase gives too-hard subproblems for an unconstrained inversion problem, so no runtime estimation can be calculated in reasonable time. On the other hand, since the considered inversion problem has a huge number of solutions, the incomplete solving mode of Algorithm 5 suits it well.

First, a CNF that encodes 28-step MD5 was constructed based on the encoding from Subsection 5.3. The CNF has 7 471 variables and 54 672 clauses. The same four hashes were considered for inversion as for MD4: 0hash, 1hash, symmhash, randhash. Therefore, 4 CNFs were constructed by adding the corresponding 128 one-literal clauses to the original CNF.
Then these CNFs were simplified by CaDiCaL such that at most 1 conflict was generated. Characteristics of the simplified CNFs are presented in Table 10. Table 10: Characteristics of simplified CNFs that encode inversion problems for 28-step MD5. Hash Variables Clauses Literals _0_ 6 814 50 572 199 596 _1_ 6 844 50 749 200 153 _symm_ 6 842 50 737 200 114 _rand_ 6 842 50 741 200 110 29 ----- The SAT solver EnCnC (see the beginning of Subsection 6.1) was run on these CNFs in the incomplete solving mode. The following input parameters’ values were used: - March cu. - nstep = 10. - minr = 0. - N = 1 000. - Kissat sc2021. - maxst = 5 000 seconds. - cores = 12. - mode = solving. The key parameter here is maxc (maximal number of generated cubes), for which the following values were tried: 2 000 000; 1 000 000; 500 000; 250 000; 125 000; 60 000. Note that the default value of maxc in EnCnC is 1 000 000. Recall that in the incomplete solving mode, EnCnC stops if a satisfying assignment is found; if a CDCL solver is interrupted due to a time limit on some subproblem, EnCnC continues working. The corresponding 6 versions of EnCnC with different values of maxc were run on the CNFs with the wall-clock time limit of 1 day. In Table 11, the wall-clock solving times are presented. Also, the same data is presented in Figure 5. Table 11: Wall clock time for 28-step MD5 on a 12-core CPU. Here “-” means that the solver was interrupted due to the time limit of 1 day. The best results are marked with bold. Solver _0hash_ _1hash_ _symmhash_ _randhash_ EnCnC-maxc=2m 1 h 47 min 1 h 41 min 39 min 1 h 36 min EnCnC-maxc=1m 42 min 53 min 13 min 59 min EnCnC-maxc=500k 48 min 32 min 22 min **15 min** EnCnC-maxc=250k 38 min 4 min 41 min 37 min EnCnC-maxc=125k 16 min 35 min **6 min** 20 min EnCnC-maxc=60k **4 min** **3 min** 14 min 1 h 32 min P-MCOMSPS - - - Treengeling - - - Additionally, two complete parallel SAT solvers were tried. The first one, P-MCOMSPS, is the winner of the Parallel track in SAT Competition 2021 (Vallade, Frioux, Oanea, Baarir, Sopena, Kordon, Nejati, & Ganesh, 2021). It is a portfolio solver built upon the widely-used Painless framework (Frioux, Baarir, Sopena, & Kordon, 2017). The second one, treengeling (Biere, 2016), is a Cube-and-Conquer solver. It was chosen to compare EnCnC 30 ----- Figure 5: Runtimes of EnCnC in the incomplete solving mode on four MD5-28-related inversion problems. with a competitor built upon a similar strategy. Besides this, treengeling won several prizes in SAT Competitions and SAT Races. Let us discuss the results. Based on average runtime, the best version of EnCnC is EnCnC-maxc=125k, while the worst is EnCnC-maxc=2m. Nevertheless, all versions managed to find satisfying assignments for all 4 CNFs within the time limit. On 23 runs out of 24, versions of EnCnC did it during solving the first 12 subproblems from the first random sample (for the lowest values of the cutoff threshold). It means that Kissat did not reach the time limit of 5 000 seconds in these cases. The only exception is EnCnCmaxc=60k on randhash, where on all 12 first subproblems Kissat was interrupted due to the time limit, and then a satisfying assignment was found in one of the next 12 subproblems from the same sample. As for competitors, they could not solve anything within the time limit. In Table 12, the preimages found by EnCnC-maxc=2m are presented. It should be noted that preimages for 0hash, 1hash, and randhash have not been published so far. 
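These results are straightforward to check independently. Below is a compact Python sketch of the step-reduced MD5 compression function, in the same spirit as the MD4 sketch in Section 5. This is not the reference implementation: padding and the final incrementing are omitted, matching the inversion problems above, and the word order and endianness conventions used for the tables are assumptions that may require adjustment.

```python
import math

MASK = 0xFFFFFFFF
# MD5 additive constants: K[i] = floor(|sin(i + 1)| * 2^32)
K = [int(abs(math.sin(i + 1)) * 2**32) & MASK for i in range(64)]
S = [7, 12, 17, 22] * 4 + [5, 9, 14, 20] * 4 + [4, 11, 16, 23] * 4 + [6, 10, 15, 21] * 4

def rol(x, s):
    """Cyclic left shift of a 32-bit word."""
    return ((x << s) | (x >> (32 - s))) & MASK

def md5_compress(M, steps=28):
    """Step-reduced MD5 compression on one 512-bit block (16 32-bit words)."""
    a, b, c, d = 0x67452301, 0xEFCDAB89, 0x98BADCFE, 0x10325476
    for i in range(steps):
        if i < 16:
            f, g = (b & c) | (~b & d), i
        elif i < 32:
            f, g = (d & b) | (((~d) & MASK) & c), (5 * i + 1) % 16
        elif i < 48:
            f, g = b ^ c ^ d, (3 * i + 5) % 16
        else:
            f, g = c ^ (b | ((~d) & MASK)), (7 * i) % 16
        f = (f + a + K[i] + M[g]) & MASK
        a, d, c, b = d, c, b, (b + rol(f, S[i])) & MASK
    return [a, b, c, d]  # register values after `steps` steps, no incrementing
```

Feeding a row of Table 12 to md5_compress(..., steps=28) and comparing the result against the corresponding target hash is then the whole verification.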
The found preimages were verified by the reference implementation from (Rivest, 1992). It can be easily reproduced in the same way that was discussed in Subsection 7.1. ### 9. Related Work In fact, SAT-based cryptanalysis was proposed in 1996 (Cook & Mitchell, 1996), but for the first time it was applied to solve a real cryptanalysis problem in 2000 (Massacci & Marraro, 2000). In particular, a reduced version of the block cipher DES was analyzed via a SAT solver. Since that publication, SAT-based cryptanalysis has been successfully applied to analyze various block ciphers, stream ciphers, and cryptographic hash functions. 31 ----- Table 12: Preimages found by EnCnC-maxc=2m for 28-step MD5. Hash Preimages 0xd825e4fb 0xa73fcaa9 0x660cd53d 0xb9308515 0x4677d4e0 0xcadcee62 0x40722cb3 0xf41a4b12 _0_ 0xac2fdec3 0x9cbcb4a3 0xffcca30f 0x9a0e2026 0x475763e5 0x30ce233b 0xbef0cd57 0x1a6b39d 0xdfe6feeb 0xc4437a85 0x11af5182 0xe3b13f03 0x5103e1fc 0xea231da2 0xc3b513d1 0xb95fa9d7 _1_ 0x7a2a331c 0x2ddf2607 0x699a2dae 0xc1827561 0xfe80aeed 0xcf45b09a 0x5b596c8f 0xd0265347 0x54032182 0x2a1693f1 0x1053aef3 0x9f4d7c87 0x9f0d5ba1 0xb43a63f8 0x4310aa89 0x9df4e0d8 _symm_ 0xada73cbf 0x63fd55c2 0x49f1f4a0 0x5e05beff 0x6c149122 0x54a25f8e 0x12ef4bb0 0x78482fb4 0x120686db 0xad5834c6 0x7d660963 0x71c408fe 0x17cf4511 0x75df78de 0x544ae232 0x13745ecc _rand_ 0x9190f8a2 0x4878ab8d 0x43229cc7 0x5013f2de 0xd49b395a 0xa151b704 0x5f1dd4ec 0xc860dfb5 SAT-based cryptanalysis via CDCL solvers has been earlier applied to cryptographic hash functions as follows. For the first time it was done in (Jovanovic & Janicic, 2005) to construct benchmarks with adjustable hardness. In (Mironov & Zhang, 2006), a practical collision attack on MD4 was performed. 39-step MD4 was inverted in (De et al., 2007; Legendre et al., 2012; Lafitte et al., 2014; Gribanova, Zaikin, Kochemazov, Otpuschennikov, & Semenov, 2017; Gribanova & Semenov, 2018). In (Gladush, Gribanova, Kondratiev, Pavlenko, & Semenov, 2022), the hardness of practical preimage attacks on 43-, 45-, and 47-step MD4 was estimated. In (Gribanova & Semenov, 2020), an MD4-based function was constructed and the full (48-step) version of this function was inverted. As for MD5, in (Mironov & Zhang, 2006) and later in (Gribanova et al., 2017), practical collision attacks on MD5 were performed. In (De et al., 2007), 26-step MD5 was inverted, while in (Legendre et al., 2012) it was done for 27- and 28-step MD5. For the first time a collision for SHA-1 was found in (Stevens, Bursztein, Karpman, Albertini, & Markov, 2017) (in this very case it was done partially by a CDCL solver). Step-reduced versions of SHA-0, SHA-1, SHA-256, SHA-3, BLAKE-256, and JH-256 were inverted in (Nossum, 2012; Legendre et al., 2012; Homsirikamol, Morawiecki, Rogawski, & Srebrny, 2012; Nejati, Liang, Gebotys, Czarnecki, & Ganesh, 2017). An algebraic fault attack on SHA-1 and SHA-2 was performed in (Nejati, Hor´acek, Gebotys, & Ganesh, 2018), while that on SHA-256 was done in (Nakamura, Hori, & Hirose, 2021). The following hard mathematical problems have been solved via Cube-and-Conquer: the Erd˝os discrepancy problem (Konev & Lisitsa, 2015); the Boolean Pythagorean Triples problem (Heule et al., 2016); Schur number five (Heule, 2018b); Lam’s problem (Bright et al., 2021). In (Weaver & Heule, 2020), new minimal perfect hash functions were found. Note that these hash functions are not cryptographic ones and find their application in lookup tables. 
In the present paper, for the first time significant cryptanalysis problems were solved via Cube-and-Conquer. The present paper presents a general Cube-and-Conquer-based algorithm for estimating hardness of SAT instances. Usually this is done by other approaches: the tree-like space complexity (Ans´otegui, Bonet, Levy, & Many`a, 2008); supervised machine learning (Hutter, Xu, Hoos, & Leyton-Brown, 2014); the popularity–similarity model (Almagro-Blanco & Gir´aldez-Cru, 2022); backdoors (Williams, Gomes, & Selman, 2003). 32 ----- Backdoors are closely connected with Cube-and-Conquer. Informally, backdoor is a subset of variables of a given formula, such that by varying all possible values of the backdoor’s variables simpler subproblems are obtained which can be solved independently via a CDCL solver (Williams et al., 2003; Kilby, Slaney, Thi´ebaux, & Walsh, 2005). In fact, such a set of values can be considered a cube, while choosing a proper backdoor and varying all corresponding values is a special way to generate cubes on the cubing phase of Cube-and-Conquer. For given SAT instance and backdoor, hardness of the instance can be estimated by processing a (relatively small) sample of subproblems (Semenov, Zaikin, Bespalov, & Posypkin, 2011). The search for a backdoor with a minimal hardness was reduced to minimization of a costly stochastic black-box functions in application to SATbased cryptanalysis in (Semenov & Zaikin, 2015; Kochemazov & Zaikin, 2018; Zaikin & Kochemazov, 2021; Semenov, Pavlenko, Chivilikhin, & Kochemazov, 2022). In the present paper, a similar function is minimized to find a cutoff threshold of the cubing phase of Cube-and-Conquer rather than a backdoor. ### 10. Conclusions and Future Work This paper proposed two algorithms. Given a hash, the first algorithm gradually modifies one of twelve Dobbertin’s constraints for MD4 until a preimage for a given hash is found. Potentially, this algorithm is applicable to some other cryptographic hash functions. The second algorithm can operate with a given CNF in two modes. In the estimating mode, values of the cutoff threshold of the cubing phase of Cube-and-Conquer are varied, and the CNF’s hardness for each value is estimated via sampling. The threshold with the best estimation can be naturally used to choose a proper computational platform and solve the instance if the estimation is reasonable. This mode is general, so it can be applied to estimate the hardness and solve hard SAT instances from various classes. In the incomplete SAT solving mode, the second algorithm is a SAT solver, oriented on satisfiable CNFs with many satisfying assignments. The preimage resistance of two seminal cryptographic hash functions, MD4 and MD5, was analyzed. In case of MD4, a combination of the first algorithm and the estimating mode of the second algorithm was used. As a result, 40-, 41-, 42-, and 43-step MD4 were inverted for the first time. In opposite to MD4, MD5 served as an example of a cryptographic hash function for which no problem-specific constraints are added. 28-step MD5 was inverted for two most regular hashes (128 1s and 128 0s) for the first time via the incomplete SAT solving mode of the second algorithm. In other words, the first practical SAT-based preimage attacks on the mentioned step-reduced MD4 and MD5 were proposed. In the future it is planned to apply the proposed algorithms to analyze other cryptographic hash functions. 
Also, we are going to investigate two MD4-related phenomena which were observed during the experiments. The first one is the non-effectiveness (in most cases) of advanced simplification in application to the constructed CNFs. The second one is the evident division of subproblems in the conquer phase into extremely simple ones and the remaining ones.

33 -----

### Appendix A. Estimations for Step-reduced MD4

The following figures depict how the objective function was minimized on 40- and 43-step MD4.

Figure 6: Minimization of the objective function on 40- and 43-step MD4. Panels: (a) MD4-40, 1hash; (b) MD4-43, 0hash; (c) MD4-40, symmhash; (d) MD4-43, symmhash; (e) MD4-40, randhash; (f) MD4-43, randhash. (Each panel plots the estimation versus the number of cubes, with one curve per simplification type.)

34 -----
0xa57d8668 0xa57d8668 0xa57d8668 0xecdab667 0xa57d8668 0xa57d8668 0xa57d8668 0xe3844a01 0xa57d8668 0xa57d8668 0xa57d8668 0xa3205929 0xfad1ea59 0xd2cae4d2 0x52149d55 0xc82cffbf 0xa57d8668 0xa57d8668 0xa57d8668 0xae60af85 0xa57d8668 0xa57d8668 0xa57d8668 0x8bcd69e3 0xa57d8668 0xa57d8668 0xa57d8668 0x59b8bf6 0x7755a76 0xfbe0b515 0xf9a31765 0x14d516a6 0xa57d8668 0xa57d8668 0xa57d8668 0xa9210d09 0xa57d8668 0xa57d8668 0xa57d8668 0xba9694ea 0xa57d8668 0xa57d8668 0xa57d8668 0x6a8157fe 0xd6566aae 0xbacb3d6c 0x1ec4854d 0x22357d65 0x257d8668 0xa57d8668 0xa57d8668 0xd8f77148 0xa57d8668 0xa57d8668 0xa57d8668 0x88275d15 _1_ 0xa57d8668 0xa57d8668 0xa57d8668 0xcf6b92d0 0x4a8e498d 0x3beb0878 0xb55e027 0x87b4d62c 0xa57d8668 0xa57d8668 0xa57d8668 0xd1dce7ea 0xa57d8668 0xa57d8668 0xa57d8668 0xcbc2a90 _symm_ 0xa57d8668 0xa57d8668 0xa57d8668 0xd9834f6d 0x5267d5d6 0x41a9cf18 0x71469663 0xbd507731 _rand_ 0x257d8668 0xa57d8668 0xa57d8668 0xbd7389e6 0xa57d8668 0xa57d8668 0xa57d8668 0x3eb8ae3a 0xa57d8668 0xa57d8668 0xa57d8668 0x162c323e 0xa4056a04 0x9da74aac 0xfee2c77 0x8b25de8e 0x257d8668 0xa57d8668 0xa57d8668 0xc1748842 0xa57d8668 0xa57d8668 0xa57d8668 0xd7e32a57 0xa57d8668 0xa57d8668 0xa57d8668 0x21c5baab 0x552a7372 0xa21b2963 0x2fe88ffb 0xadfddb3 0x257d8668 0xa57d8668 0xa57d8668 0xc455558f 0xa57d8668 0xa57d8668 0xa57d8668 0xff87976a 0xa57d8668 0xa57d8668 0xa57d8668 0x3e82e858 0x46ad9cde 0x76f3b1d0 0x31aadb79 0x45cc1c91 35 ----- ### References Almagro-Blanco, P., & Gir´aldez-Cru, J. (2022). Characterizing the temperature of SAT formulas. Int. J. Comput. Intell. Syst., 15 (1), 69. Ans´otegui, C., Bonet, M. L., Levy, J., & Many`a, F. (2008). Measuring the hardness of SAT instances. In AAAI, pp. 222–228. Aoki, K., & Sasaki, Y. (2008). Preimage attacks on one-block MD4, 63-step MD5 and more. In SAC, pp. 103–119. Audet, C., & Hare, W. (2017). Derivative-Free and Blackbox Optimization. Springer Series in Operations Research and Financial Engineering. Springer International Publishing. Balyo, T., Froleyks, N., Heule, M., Iser, M., J¨arvisalo, M., & Suda, M. (Eds.). (2021). Pro_ceedings of SAT Competition 2021: Solver and Benchmark Descriptions. Department_ of Computer Science, University of Helsinki. Balyo, T., & Sinz, C. (2018). Parallel Satisfiability. In Handbook of Parallel Constraint _Reasoning, pp. 3–29. Springer._ Bard, G. V. (2009). Algebraic Cryptanalysis (1st edition). Springer Publishing Company, Incorporated. Biere, A. (2011). Preprocessing and inprocessing techniques in SAT. In HVC 2011. Biere, A. (2016). Splatz, Lingeling, Plingeling, Treengeling, YalSAT Entering the SAT Competition 2016. In Proc. of SAT Competition 2016 – Solver and Benchmark De_scriptions, pp. 44–45._ Biere, A., Fleury, M., & Heisinger, M. (2021a). CaDiCaL, Kissat, Paracooba entering the SAT Competition 2021. In SAT Competition 2021 – Solver and Benchmark Descrip_tions, pp. 10–13._ Biere, A., Heule, M., van Maaren, H., & Walsh, T. (Eds.). (2021b). Handbook of Satisfiability _- Second Edition, Vol. 336 of Frontiers in Artificial Intelligence and Applications. IOS_ Press. B¨ohm, M., & Speckenmeyer, E. (1996). A fast parallel SAT-solver - efficient workload balancing. Ann. Math. Artif. Intell., 17 (3-4), 381–400. Bright, C., Cheung, K. K. H., Stevens, B., Kotsireas, I. S., & Ganesh, V. (2021). A SATbased resolution of Lam’s problem. In AAAI, pp. 3669–3676. Carter, K., Foltzer, A., Hendrix, J., Huffman, B., & Tomb, A. (2013). SAW: the software analysis workbench. In HILT, pp. 15–18. Clarke, E., Kroening, D., & Lerda, F. (2004). 
A tool for checking ANSI-C programs. In _Tools and Algorithms for the Construction and Analysis of Systems (TACAS 2004)._ Cook, S. A. (1971). The complexity of theorem-proving procedures. In STOC, pp. 151–158. ACM. Cook, S. A., & Mitchell, D. G. (1996). Finding hard instances of the satisfiability problem: A survey. In Satisfiability Problem: Theory and Applications. 36 ----- Davis, M., Logemann, G., & Loveland, D. W. (1962). A machine program for theoremproving. Commun. ACM, 5 (7), 394–397. De, D., Kumarasubramanian, A., & Venkatesan, R. (2007). Inversion attacks on secure hash functions using SAT solvers. In SAT, pp. 377–382. Dobbertin, H. (1996). Cryptanalysis of MD4. In FSE, pp. 53–69. Dobbertin, H. (1998). The first two rounds of MD4 are not one-way. In FSE, pp. 284–292. Dowling, W. F., & Gallier, J. H. (1984). Linear-time algorithms for testing the satisfiability of propositional horn formulae. J. Log. Program., 1 (3), 267–284. Frioux, L. L., Baarir, S., Sopena, J., & Kordon, F. (2017). PaInleSS: A framework for parallel SAT solving. In SAT 2017, pp. 233–250. Froleyks, N., & Biere, A. (2021). Single clause assumption without activation literals to speed-up IC3. In FMCAD. Garey, M. R., & Johnson, D. S. (1979). Computers and Intractability: A Guide to the Theory _of NP-Completeness. W. H. Freeman._ Gladush, A., Gribanova, I., Kondratiev, V., Pavlenko, A., & Semenov, A. (2022). Measuring the effectiveness of SAT-based guess-and-determine attacks in algebraic cryptanalysis. In Parallel Computational Technologies, pp. 143–157. Gomes, C. P., & Sabharwal, A. (2021). Exploiting runtime variation in complete solvers. In Handbook of Satisfiability - Second Edition, Vol. 336 of Frontiers in Artificial In_telligence and Applications, pp. 463–480. IOS Press._ Gomes, C. P., & Sellmann, M. (2004). Streamlined constraint reasoning. In CP, pp. 274–289. Gribanova, I., & Semenov, A. A. (2018). Using automatic generation of relaxation constraints to improve the preimage attack on 39-step MD4. In MIPRO, pp. 1174–1179. Gribanova, I., & Semenov, A. A. (2020). Constructing a set of weak values for full-round MD4 hash function. In MIPRO, pp. 1212–1217. Gribanova, I., Zaikin, O., Kochemazov, S., Otpuschennikov, I., & Semenov, A. (2017). The study of inversion problems of cryptographic hash functions from MD family using algorithms for solving Boolean satisfiability problem. In Mathematical and Information _Technologies, pp. 98–113._ Hamadi, Y., Jabbour, S., & Sais, L. (2009). ManySAT: a parallel SAT solver. J. Satisf. _Boolean Model. Comput., 6_ (4), 245–262. Heule, M. (2018a). Cube-and-Conquer Tutorial. https://github.com/marijnheule/CnC/. Heule, M. (2018b). Schur number five. In AAAI, pp. 6598–6606. Heule, M., Kullmann, O., & Biere, A. (2018). Cube-and-conquer for satisfiability. In Hand_book of Parallel Constraint Reasoning, pp. 31–59. Springer._ Heule, M., Kullmann, O., & Marek, V. W. (2016). Solving and verifying the Boolean Pythagorean triples problem via Cube-and-Conquer. In SAT, pp. 228–245. Heule, M., Kullmann, O., Wieringa, S., & Biere, A. (2011). Cube and conquer: Guiding CDCL SAT solvers by lookaheads. In HVC, pp. 50–65. 37 ----- Heule, M. J. H., & van Maaren, H. (2021). Look-ahead based SAT solvers. In Handbook _of Satisfiability - Second Edition, Vol. 336 of Frontiers in Artificial Intelligence and_ _Applications, pp. 183–212. IOS Press._ Homsirikamol, E., Morawiecki, P., Rogawski, M., & Srebrny, M. (2012). Security margin evaluation of SHA-3 contest finalists through SAT-based attacks. 
In CISIM, pp. 56–67. Hutter, F., Xu, L., Hoos, H. H., & Leyton-Brown, K. (2014). Algorithm runtime prediction: Methods & evaluation. Artificial Intelligence, 206, 79–111. Hyv¨arinen, A. E. J., Junttila, T. A., & Niemel¨a, I. (2010). Partitioning SAT instances for distributed solving. In LPAR, pp. 372–386. Jovanovic, D., & Janicic, P. (2005). Logical analysis of hash functions. In FroCoS, pp. 200–215. Kautz, H. A., Sabharwal, A., & Selman, B. (2021). Incomplete algorithms. In Handbook _of Satisfiability - Second Edition, Vol. 336 of Frontiers in Artificial Intelligence and_ _Applications, pp. 213–232. IOS Press._ Kilby, P., Slaney, J. K., Thi´ebaux, S., & Walsh, T. (2005). Backbones and backdoors in satisfiability. In AAAI, pp. 1368–1373. Kochemazov, S. (2021). Exploring the limits of problem-specific adaptations of SAT solvers in SAT-based cryptanalysis. In Parallel Computational Technologies, pp. 149–163. Kochemazov, S., & Zaikin, O. (2018). ALIAS: A modular tool for finding backdoors for SAT. In SAT, pp. 419–427. Konev, B., & Lisitsa, A. (2015). Computer-aided proof of Erd˝os discrepancy properties. _Artif. Intell., 224, 103–118._ Kuwakado, H., & Tanaka, H. (2000). New algorithm for finding preimages in a reduced version of the md4 compression function. IEICE TRANS, Fundamentals, A, 83 (1), 97–100. Lafitte, F. (2018). CryptoSAT: a tool for SAT-based cryptanalysis. IET Information Secu_rity, 12_ (6), 463–474. Lafitte, F., Jr., J. N., & Heule, D. V. (2014). Applications of SAT solvers in cryptanalysis: Finding weak keys and preimages. J. Satisf. Boolean Model. Comput., 9 (1), 1–25. Legendre, F., Dequen, G., & Krajecki, M. (2012). Encoding hash functions as a SAT problem. In ICTAI, pp. 916–921. Leurent, G. (2008). MD4 is not one-way. In FSE, pp. 412–428. Marques-Silva, J. P., & Sakallah, K. A. (1999). GRASP: A search algorithm for propositional satisfiability. IEEE Trans. Computers, 48 (5), 506–521. Massacci, F., & Marraro, L. (2000). Logical cryptanalysis as a SAT problem. J. Autom. _Reasoning, 24_ (1/2), 165–203. Menezes, A., van Oorschot, P. C., & Vanstone, S. A. (1996). Handbook of Applied Cryptog_raphy. CRC Press._ Mironov, I., & Zhang, L. (2006). Applications of SAT solvers to cryptanalysis of hash functions. In SAT, pp. 102–115. 38 ----- Nakamura, K., Hori, K., & Hirose, S. (2021). Algebraic fault analysis of SHA-256 compression function and its application. Inf., 12 (10), 433. Nejati, S., & Ganesh, V. (2019). Cdcl(crypto) SAT solvers for cryptanalysis. In CASCON. Nejati, S., Hor´acek, J., Gebotys, C. H., & Ganesh, V. (2018). Algebraic fault attack on SHA hash functions using programmatic SAT solvers. In CP, pp. 737–754. Nejati, S., Liang, J. H., Gebotys, C. H., Czarnecki, K., & Ganesh, V. (2017). Adaptive restart and CEGAR-based solver for inverting cryptographic hash functions. In VSTTE. Nossum, V. (2012). SAT-based preimage attacks on SHA-1. Master’s thesis, University of Oslo, Department of Informatics. Rivest, R. L. (1990). The MD4 message digest algorithm. In CRYPTO, pp. 303–311. Rivest, R. L. (1992). The MD5 message-digest algorithm. RFC, 1321, 1–21. Rossi, F., van Beek, P., & Walsh, T. (Eds.). (2006). Handbook of Constraint Programming, Vol. 2 of Foundations of Artificial Intelligence. Elsevier. Sasaki, Y., & Aoki, K. (2009). Finding preimages in full MD5 faster than exhaustive search. In Joux, A. (Ed.), EUROCRYPT, pp. 134–152. Semenov, A., Zaikin, O., & Kochemazov, S. (2021). Finding effective SAT partitionings via black-box optimization. 
In Black Box Optimization, Machine Learning, and No-Free _Lunch Theorems, pp. 319–355. Springer International Publishing._ Semenov, A. A., Otpuschennikov, I. V., Gribanova, I., Zaikin, O., & Kochemazov, S. (2020). Translation of algorithmic descriptions of discrete functions to SAT with applications to cryptanalysis problems. Log. Methods Comput. Sci., 16 (1). Semenov, A. A., Pavlenko, A., Chivilikhin, D., & Kochemazov, S. (2022). On probabilistic generalization of backdoors in Boolean satisfiability. In AAAI. Semenov, A. A., & Zaikin, O. (2015). Using Monte Carlo method for searching partitionings of hard variants of Boolean satisfiability problem. In PaCT. Semenov, A. A., Zaikin, O., Bespalov, D., & Posypkin, M. (2011). Parallel logical cryptanalysis of the generator A5/1 in BNB-grid system. In Parallel Computing Technologies, pp. 473–483. Soos, M., Nohl, K., & Castelluccia, C. (2009). Extending SAT solvers to cryptographic problems. In SAT, pp. 244–257. Starnes, D., Yates, D., & Moore, D. (2010). The Practice of Statistics. W. H. Freeman. Stevens, M., Bursztein, E., Karpman, P., Albertini, A., & Markov, Y. (2017). The first collision for full SHA-1. In CRYPTO, pp. 570–596. Tseitin, G. S. (1970). On the complexity of derivation in propositional calculus. In Studies in _constructive mathematics and mathematical logic, part II, Seminars in mathematics,_ Vol. 8, pp. 115–125. V.A. Steklov Mathematical Institute, Leningrad. Vallade, V., Frioux, L. L., Oanea, R., Baarir, S., Sopena, J., Kordon, F., Nejati, S., & Ganesh, V. (2021). New concurrent and distributed Painless solvers: P-MCOMSPS, PMCOMSPS-COM, P-MCOMSPS-MPI, and P-MCOMSPS-COM-MPI. In SAT Com_petition 2021 – Solver and Benchmark Descriptions, pp. 40–41._ 39 ----- Wang, X., Lai, X., Feng, D., Chen, H., & Yu, X. (2005). Cryptanalysis of the hash functions MD4 and RIPEMD. In EUROCRYPT, pp. 1–18. Wang, X., & Yu, H. (2005). How to break MD5 and other hash functions. In Cramer, R. (Ed.), EUROCRYPT, pp. 19–35. Weaver, S. A., & Heule, M. (2020). Constructing minimal perfect hash functions using SAT technology. In AAAI. Williams, R., Gomes, C. P., & Selman, B. (2003). Backdoors to typical case complexity. In _IJCAI, pp. 1173–1178._ Zaikin, O. S., & Kochemazov, S. E. (2021). On black-box optimization in divide-and-conquer SAT solving. Optimization Methods and Software, 36 (4), 672–696. Zaikin, O. (2022). Inverting 43-step MD4 via Cube-and-Conquer. In IJCAI-ECAI, p. in press. 40 -----
{ "disclaimer": "Notice: Paper or abstract available at https://arxiv.org/abs/2212.02405, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "CCBY", "status": "GOLD", "url": "https://doi.org/10.1613/jair.1.15244" }
2022
[ "JournalArticle" ]
true
2022-12-05T00:00:00
[ { "paperId": "2bfa20069212d04c1662b960f52d8a404678b461", "title": "A SAT Solver and Computer Algebra Attack on the Minimum Kochen-Specker Problem (Student Abstract)" }, { "paperId": "6094988c9ebe2621e6cccc7bc6f52248117e19d0", "title": "Frontiers in Artificial Intelligence and Applications" }, { "paperId": "9ccd3a54cdf81c9afd5bdaa94dd2d166d619ea37", "title": "Characterizing the Temperature of SAT Formulas" }, { "paperId": "9b39397b07bef4594e2c16ef0785dfff362d546b", "title": "Better Decision Heuristics in CDCL through Local Search and Target Phases" }, { "paperId": "daca018a1564f76df66b40d455bb5d2e11573fa7", "title": "Inverting 43-step MD4 via Cube-and-Conquer" }, { "paperId": "fd2df3e9941353581aaaf0151fec27332370022e", "title": "On Probabilistic Generalization of Backdoors in Boolean Satisfiability" }, { "paperId": "a37a79d09d12de03d01435bd29abf06416f48e21", "title": "Single Clause Assumption without Activation Literals to Speed-up IC3" }, { "paperId": "d643ab573da72863f23fd41439b7a05e69f30aad", "title": "Algebraic Fault Analysis of SHA-256 Compression Function and Its Application" }, { "paperId": "b7bd88de7ad9057c407818261e4bc0759ddef9b9", "title": "Exploring the Limits of Problem-Specific Adaptations of SAT Solvers in SAT-Based Cryptanalysis" }, { "paperId": "429b90bbaa51d77a85218b4ab1511b3e504aee27", "title": "Incomplete Algorithms" }, { "paperId": "b0cc5cb047b4172a7584796b5844eaa109eeade1", "title": "Exploiting Runtime Variation in Complete Solvers" }, { "paperId": "74d57bf83d227253d224944fba2991c1fd4a3887", "title": "Look-Ahead Based SAT Solvers" }, { "paperId": "cb392d0364323c2b7b6ade3ec5878829f3db7364", "title": "A SAT-based Resolution of Lam's Problem" }, { "paperId": "b8612d7866be434e8b6b7c6f4c1d891af8e683c5", "title": "Constructing a Set of Weak Values for Full-round MD4 Hash Function" }, { "paperId": "fa7fa280009fef4e37a92e6fb8fe5616f439d991", "title": "CDCL(Crypto) SAT solvers for cryptanalysis" }, { "paperId": "ce1e092f8c30c1e51b279335e5e34af82dfe9d7d", "title": "Constructing Minimal Perfect Hash Functions Using SAT Technology" }, { "paperId": "055326dd234e552d99d35fb8050048befc760e6b", "title": "On black-box optimization in divide-and-conquer SAT solving" }, { "paperId": "5f27124b4767e1c10ca89ee7e66c37c529152d1d", "title": "The Resolution of Keller’s Conjecture" }, { "paperId": "61b87dd4da58a5297c166281564dc3a475db4a50", "title": "C. Audet and W. Hare: Derivative-free and blackbox optimization. 
Springer series in operations research and financial engineering" }, { "paperId": "83721103a6fd5535e943b1b575cf70862c2322a8", "title": "Handbook of Applied Cryptography" }, { "paperId": "5cb07d365d3d8748e8859242616aae2f1a1ba918", "title": "CryptoSAT: a tool for SAT-based cryptanalysis" }, { "paperId": "f21a2281efaebfe84896dcec5cc7cf50afb0ac70", "title": "Algebraic Fault Attack on SHA Hash Functions Using Programmatic SAT Solvers" }, { "paperId": "ce0a8071295381cf0fd5afc7a50b9c93895fc8f6", "title": "ALIAS: A Modular Tool for Finding Backdoors for SAT" }, { "paperId": "ceb4e06ad017a2a8135fe0b633a53f7c1dccab21", "title": "Translation of Algorithmic Descriptions of Discrete Functions to SAT with Applications to Cryptanalysis Problems" }, { "paperId": "aeaec30319e3fe46b1ca0d07128474b9858e0a2d", "title": "On Cryptographic Attacks Using Backdoors for SAT" }, { "paperId": "fbce880aa2d637fc1d1258b8e49bdbe4bcb90f90", "title": "Using automatic generation of relaxation constraints to improve the preimage attack on 39-step MD4" }, { "paperId": "db9c1c061dfbf30f325e69565d6f5b0ca99172b1", "title": "Schur Number Five" }, { "paperId": "1f8fb24ce9fdff1c4b3aae52c54a7c0d22e5b8ce", "title": "Proceedings of SAT Competition 2017: Solver and Benchmark Descriptions" }, { "paperId": "8d6870ab6c3abbb7b952fde8ffb2e57da8bf2747", "title": "PaInleSS: A Framework for Parallel SAT Solving" }, { "paperId": "f648783368d2010858070efe778a6a253e656295", "title": "The First Collision for Full SHA-1" }, { "paperId": "fe5093abefd1ded617159182381c99415d9c910b", "title": "Handbook of Petroleum Refining" }, { "paperId": "61d4b2a48bf4bd73853abccf9c30e1dcf8f73a47", "title": "Adaptive Restart and CEGAR-Based Solver for Inverting Cryptographic Hash Functions" }, { "paperId": "10d8f3ef07b15ee728061b9830d848896cdcb9ca", "title": "Encoding Cryptographic Functions to SAT Using TRANSALG System" }, { "paperId": "1be4e96f3ae691fe7f4fbc2a480c820884a0a7ca", "title": "Solving and Verifying the Boolean Pythagorean Triples Problem via Cube-and-Conquer" }, { "paperId": "c602c0296a7c0f06d34ce72370307175a9908e87", "title": "Using Monte Carlo Method for Searching Partitionings of Hard Variants of Boolean Satisfiability Problem" }, { "paperId": "90dccd01df4e1d727fa60b2138e153506e6832a4", "title": "Applications of SAT Solvers in Cryptanalysis: Finding Weak Keys and Preimages" }, { "paperId": "223a9ae6eefd65f7992eb4fa667196840f90d62e", "title": "Computer-aided proof of Erdős discrepancy properties" }, { "paperId": "84ceeba430725f3c674d31740a24a465da8e2ebc", "title": "SAW: the software analysis workbench" }, { "paperId": "9ab2c60ed17e936223677abbb067f226ddee0709", "title": "From a logical approach to internal states of Hash functions how SAT problem can help to understand SHA-⋆ and MD⋆" }, { "paperId": "16367cdea1bc84a3ccafed980e06bca397faf536", "title": "Encoding Hash Functions as a SAT Problem" }, { "paperId": "14e1f20764d6733499e0a07062de42433522bddf", "title": "Algorithm runtime prediction: Methods & evaluation" }, { "paperId": "5cf85100d09c743cbb4b87d0d02c41e8eb3729f0", "title": "Security margin evaluation of SHA-3 contest finalists through SAT-based attacks" }, { "paperId": "18ed960ea0146e674e91275a02abc67ff443f099", "title": "Cube and Conquer: Guiding CDCL SAT Solvers by Lookaheads" }, { "paperId": "6aa30550fe4fb7ef608fc71da766de0c2754b9cc", "title": "Preprocessing and Inprocessing Techniques in SAT" }, { "paperId": "a829753d623f6adc0de2ca73d5474f6d429170e3", "title": "Parallel Logical Cryptanalysis of the Generator A5/1 in BNB-Grid System" }, { 
"paperId": "feca806dc20be4a72a57b45b9ce7ebfcded08038", "title": "A machine program for theorem-proving" }, { "paperId": "1da2db87c1f4627d2213fe596070e9a60a613e95", "title": "Advanced Meet-in-the-Middle Preimage Attacks: First Results on Full Tiger, and Improved Results on MD4 and SHA-2" }, { "paperId": "508f96b85440dad366be3067fb5e537ffd948a6d", "title": "Partitioning SAT Instances for Distributed Solving" }, { "paperId": "e36075def919440a084f59516403a4ce88702ab2", "title": "Preimage Attacks on One-Block MD4, 63-Step MD5 and More" }, { "paperId": "fbae47bf980dd14adc0dea81ec3dbf90a8aa67c9", "title": "Extending SAT Solvers to Cryptographic Problems" }, { "paperId": "e8da2dc0b6481f30b2b4ffd5bad6b835616e3f37", "title": "ManySAT: a Parallel SAT Solver" }, { "paperId": "1dc4a98755b542437e6e0df316282a47134aa702", "title": "Finding Preimages in Full MD5 Faster Than Exhaustive Search" }, { "paperId": "577cd1ef41f6f2f12f391516cb87029b86ad5aa7", "title": "Measuring the Hardness of SAT Instances" }, { "paperId": "31e1077691e174fdcb3de4d2b326bfd013459b55", "title": "Backdoor Trees" }, { "paperId": "7c5f4b9e9a486d60cc04eac9d395ff640b0433eb", "title": "MD4 is Not One-Way" }, { "paperId": "04347767ad76a974bd019f1174c1ef1c03f4fed2", "title": "Tradeoffs in the Complexity of Backdoor Detection" }, { "paperId": "f072e529551ca1307957fd84300c56384ab271a6", "title": "Inversion Attacks on Secure Hash Functions Using satSolvers" }, { "paperId": "f4f4eeabc6863a3d3571ffa67e5df19fcd9f02f2", "title": "Applications of SAT Solvers to Cryptanalysis of Hash Functions" }, { "paperId": "4aa5a98543ffdf74f475c971b34ef3747d552c26", "title": "Logical Analysis of Hash Functions" }, { "paperId": "f13245779a5d5f7eb419992de69261059249b335", "title": "Backbones and Backdoors in Satisfiability" }, { "paperId": "9ed6d2833ae315921f754e40802916340b148701", "title": "How to Break MD5 and Other Hash Functions" }, { "paperId": "45a2ce8717638071073477dbd0b535b7c87b220b", "title": "Cryptanalysis of the Hash Functions MD4 and RIPEMD" }, { "paperId": "66966f6cc6fc7e5b018ff6e157c991eaa1a3f1b9", "title": "Streamlined Constraint Reasoning" }, { "paperId": "5f5335a25d2b4ef0fb099ae917ff98d8afea97e3", "title": "A Tool for Checking ANSI-C Programs" }, { "paperId": "b84109df224e4f19963d6ffd99d11c9d0ec89db2", "title": "Backdoors To Typical Case Complexity" }, { "paperId": "745f8201364a5c1e22c0ff54b28e1b2eeed34822", "title": "The Practice of Statistics" }, { "paperId": "71efcfb787d75e3718aee49270943d7c514972f7", "title": "Logical Cryptanalysis as a SAT Problem" }, { "paperId": "cf6c60f8875e69216056d652f734da5d0f253a5c", "title": "New Algorithm for Finding Preimages in a Reduced Version of the MD4 Compression Function(Special Section on Cryptography and Information Security)" }, { "paperId": "38be6e613f2c30d21bffe8b468bc0cd46edba0d0", "title": "GRASP: A Search Algorithm for Propositional Satisfiability" }, { "paperId": "b7dde73e8867e9c0f2c0825fb614f3dbba23b9ea", "title": "The First Two Rounds of MD4 are Not One-Way" }, { "paperId": "f3ee74ac186d75938635b1673e5b9972a01c4f17", "title": "Tools and Algorithms for the Construction and Analysis of Systems" }, { "paperId": "e8d8f6eb224942cf294bd499efe21f752f5907a5", "title": "A fast parallel SAT-solver — efficient workload balancing" }, { "paperId": "e8783ef34f73773fb679aef58d987839313e1b3c", "title": "Cryptanalysis of MD4" }, { "paperId": "8861608e6c6b42f8883aec62b98997477229eeb8", "title": "Foundations of Artificial Intelligence" }, { "paperId": "e9ce5ad132f753624a017dc036f45eff45839265", "title": "The MD4 
Message-Digest Algorithm" }, { "paperId": "3a500741f6d989f8e672d8aadfd979678b87d09f", "title": "A Certified Digital Signature" }, { "paperId": "d2712ce067a604c61a28778babebeced19b6bf8e", "title": "A Design Principle for Hash Functions" }, { "paperId": "7e33296dfff963d595d2121f14a7a0bd5c187188", "title": "Linear-Time Algorithms for Testing the Satisfiability of Propositional Horn Formulae" }, { "paperId": "048ce334f8b749edcf6c327a8172552cbcced865", "title": "The complexity of theorem-proving procedures" }, { "paperId": null, "title": "Inverting Cryptographic Hash Functions via Cube-and-Conquer — Re-sults and Source code" }, { "paperId": "020f9c70c1b19d95ebcff9e3e00a2eca4f74bf4e", "title": "contrast of" }, { "paperId": null, "title": "Measuring the effectiveness of SAT-based guess-and-determine attacks in algebraic cryptanalysis" }, { "paperId": "88f708e1fae27707df1f786b5f53ab13a4c736e0", "title": "Black Box Optimization, Machine Learning, and No-Free Lunch Theorems" }, { "paperId": "3fe726775b69e763a0e0f5cef0adb1c9e0aa8b55", "title": "Evaluating the Hardness of SAT Instances Using Evolutionary Optimization Algorithms" }, { "paperId": "e954c49711559956144a148a427a534464230c36", "title": "Finding Effective SAT Partitionings Via Black-Box Optimization" }, { "paperId": null, "title": "CaDiCaL, Kissat, Paracooba entering the SAT Competition 2021" }, { "paperId": null, "title": "New concurrent and distributed Painless solvers: P-MCOMSPS, PMCOMSPS-COM, P-MCOMSPS-MPI, and P-MCOMSPS-COM-MPI" }, { "paperId": null, "title": "2018a). Cube-and-Conquer Tutorial. https://github.com/marijnheule/CnC" }, { "paperId": "93adadb9a64b39f19fcd1e7551da2b0eb9be1a0c", "title": "Cube-and-Conquer for Satisfiability" }, { "paperId": "533afa1a39800a966040a124ed930e8d96b53efe", "title": "Parallel Satisfiability" }, { "paperId": "1fd6a36125d944522c4adbfadb6b9abe6613ac27", "title": "Derivative-Free and Blackbox Optimization" }, { "paperId": "e6d3fbf35cdffe39b203eab15ade8c055fc79e0a", "title": "Splatz , Lingeling , Plingeling , Treengeling , YalSAT Entering the SAT Competition 2016" }, { "paperId": "684b5498d99dfef1fe1beac1a6b3864f0088d13b", "title": "Computers And Intractability A Guide To The Theory Of Np Completeness" }, { "paperId": "480dbd4ca40a1e94ca6a724b7e61f39ae296a075", "title": "Inverting Thanks to SAT Solving - An Application on Reduced-step MD*" }, { "paperId": "880abfd4c5421973fd0b25325bedf3248f5065a9", "title": "SAT-based preimage attacks on SHA-1" }, { "paperId": null, "title": "Algebraic Cryptanalysis (1st edition)" }, { "paperId": "027282e993a97cad07797a980c32b0bc83f61989", "title": "Fundaments of Branching Heuristics" }, { "paperId": "9e3ea009e70de20efcd4614e79b9a0aabbb621e0", "title": "The First Two Rounds of MD 4 are Not One-Way Extended Abstract" }, { "paperId": "cd8c2397f984313d5450f8d3c8aca7a8c3f5009e", "title": "Finding Hard Instances of the Satissability Problem: a Survey" }, { "paperId": "e860216dba46960dec044ab477e8cd823cd608c3", "title": "Finding hard instances of the satisfiability problem: A survey" }, { "paperId": "674c0d017a03c9716d490517ad1a58c72f8d9c5d", "title": "On the Complexity of Derivation in Propositional Calculus" }, { "paperId": "ccfe55b17272e2ebf855844e96d5ac2aefa4d841", "title": "The Study of Inversion Problems of Cryptographic Hash Functions From MD Family Using Algorithms for Solving Boolean Satisfiability Problem" }, { "paperId": null, "title": "Second-preimage resistance : for any given input x , it is computationally infeasible to find x (cid:48) such that x (cid:48) 
(cid:54) = x, h ( x ) = h ( x (cid:48) )" }, { "paperId": null, "title": "Algorithm 5 can be easily modified to be oriented on finding only one solution in the conquer phase" }, { "paperId": null, "title": "The proposed runtime estimation is a stochastic costly black-box objective function (Audet & Hare, 2017; Semenov, Zaikin, & Kochemazov, 2021). The algorithm minimizes this objective function" }, { "paperId": null, "title": "For each hash and each value of L its own threshold is found. Therefore A is the estimating mode of Algorithm 5 followed by a Cube-and-Conquer solver" }, { "paperId": null, "title": "the estimating mode" }, { "paperId": null, "title": "the incomplete SAT solving mode, a solution can be found only for a satisfiable CNF, and even in this case this is not guaranteed because of the time limit for the CDCL solver" } ]
32,295
en
[ { "category": "Political Science", "source": "external" }, { "category": "Political Science", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/024c65380b37cfbcc1fc5a595dd16fa20a382507
[ "Political Science" ]
0.964944
The performance of power and citizenship: David Cameron meets the people
024c65380b37cfbcc1fc5a595dd16fa20a382507
International journal of cultural studies
[ { "authorId": "145562991", "name": "P. Lunt" } ]
{ "alternate_issns": null, "alternate_names": [ "Int j cult stud", "Int J Cult Stud", "International Journal of Cultural Studies" ], "alternate_urls": [ "http://ics.sagepub.com/", "http://www.sagepub.com/journals/Journal200946/title", "https://journals.sagepub.com/home/ics" ], "id": "19c52db0-c559-4319-93e6-289771c58034", "issn": "1367-8779", "name": "International journal of cultural studies", "type": "journal", "url": "http://www.sagepub.com/journal.aspx?pid=196" }
How do citizens respond to and engage with the performance of political power in the context of mainstream media? Through an analysis of two television programmes aired during the UK Brexit referendum campaign of 2016, a picture emerges of citizenship as the performative disruption of the performance of power. In the programmes the then UK prime minister, David Cameron, met members of the public for a mediated discussion of key issues in the Brexit referendum. Their interactions are analysed here as a confrontation between the performance of citizenship and power reflecting activist modalities of disruptive citizenship played out in the television studio. The article ends with reflections on questions about political agency as individualistic forms of disruptive political autonomy.
**The performance of power and citizenship: David Cameron meets the people** Peter Lunt School of Media, Communication and Sociology University of Leicester, UK [Pl108@le.ac.uk](mailto:Pl108@le.ac.uk) **Abstract** How do citizens respond to and engage with the performance of political power in the context of mainstream media? Through an analysis of two television programmes aired during the UK Brexit referendum campaign of 2016, a picture emerges of citizenship as the performative disruption of the performance of power. In the programmes the then UK Prime Minister, David Cameron, met members of the public for a mediated discussion of key issues in the Brexit referendum. Their interactions are analysed here as a confrontation between the performance of citizenship and power reflecting activist modalities of disruptive citizenship played out in the television studio. The article ends with reflections on questions about political agency as individualistic forms of disruptive political autonomy. **Keywords: political discourse; debate; civility; autonomy; performance** **Introduction** In this article I examine the mediated juxtaposition and interrelation of the performance of power and citizenship in the context of two television programmes aired during the UK Brexit referendum campaign of 2016. The Prime Minister of the day, David Cameron, head of the campaign to remain in the European Union (EU), appeared on the shows, one at the launch of the campaign and one just days before the actual referendum. The shows were adaptations of the popular BBC current affairs panel discussion programme Question Time, on which Cameron fielded questions from members of the public, moderated by a programme host. In the first programme he was also interviewed by a political journalist in front of the television audience before taking questions. There are several reasons why these shows are significant in relation to the intersection of political communication and the theme of this special issue, ‘Citizenship and performance’. First, as popular culture, the programmes are part of the diversity of forms of political communication ranging from set piece party political broadcasts, political interviews, televised debates, talk shows and appearances by politicians on current affairs and popular daytime television programmes (Craig, 2016). Second, the mediated engagement between the Prime Minister and members of the public raises questions about the role of the media in public engagement that crosses boundaries between public discourse and politics. Third, as relatively unscripted public exchanges these engagements are performative as ‘individuals, organizations, and parties moved “instinctively” to hook their actions into the background culture in a lively and compelling manner, working to create an impression of sincerity and authenticity rather than one of calculation and artificiality, to achieve verisimilitude’ (Alexander et al., 2006: 1). The analysis illustrates that the performance of power by the Prime Minister was a construction of personal authenticity and political authority, and that the performance of citizenship by lay participants was a disruption of the performance of power in the form of individualized dissent (Ruiz, 2014; 2016). This article provides an analysis of the two television programmes drawing on dramaturgy (Goffman, 1959) and performance studies, followed by an analysis of the genealogy of both the performance of power and citizenship.
The article ends with a discussion of the meaning and modality of the performance of citizenship as a subjectivity constructed as autonomy. The Brexit referendum was a major political event that stood in a complex relation to traditional party political affiliation and engaged the public in a relatively open debate between sides representing the answer to a single question: whether to remain in the EU or to leave. Cameron, it has been widely acknowledged, as an ex-public relations man, was a consummate political performer across a range of media contexts (Craig, 2016). In a similar way to former US President Barack Obama, he had developed a style of political leadership that sought to overcome the excesses of spin and media management characteristic of the Clinton and Blair years (Craig, 2016). Until the EU referendum campaign, Cameron appeared as a highly skilled media performer accomplished at managing a variety of communication contexts such as press conferences, interrogative interviews with political journalists and set piece speeches such as the annual party conference. He was equally at home meeting the people in mediated town hall meetings or sitting on the sofa of current affairs television shows as he was when debating in the Chamber of the House of Commons. Craig (2016) argues that such multiply skilled performances across varied communication genres and contexts aim to manage, if not resolve, tensions between authenticity and performance, between the public politician and the private individual, between factual broadcasting and entertainment, and between legitimacy based on expertise and public popularity. Such a leadership style also aims to avoid or overcome the public cynicism that potentially results from the visibility of techniques of media management and spin that draws attention to the strategies of political communication rather than substantive claims and policy commitments (Cappella and Jamieson, 1997). Two television programmes were aired on free-to-air channels during the Brexit campaign in which Cameron came face to face with members of the public. The programmes were an extension of a series of similar encounters with members of the public that he made during his time as Prime Minister of a coalition government between his Conservative Party and the Liberal Democrats from 2010 to 2015. These ‘meetings’, called PM Direct, were made available on YouTube, supported by transcripts available on the website of the Prime Minister’s office, and were held in workplaces (e.g. Caterpillar, EasyJet and Rolls Royce). In these contexts, Cameron stood, often shirt-sleeved, amongst the employees or members who were seated at floor level between him and a single fixed camera and on a bank of chairs behind him. This production format created a space in which Cameron was framed by the audience as he delivered intense, short statements of the key points of his campaign agenda in response to questions from members of the audience. Unlike similar examples of political discussion programmes, the shows were unmoderated and Cameron managed the questions from the floor as well as being the only ‘guest’ on the show. An important feature of such encounters was that members of the audience were restricted to asking questions and there were no follow-up or supplementary questions. This lack of interactivity allowed Cameron the opportunity to treat questions as cues, to which he responded by delivering well-rehearsed statements of his policies or campaign agenda.
The Conservative Party adapted the PM Direct format for the 2015 General Election campaign. The context moved from workplace meetings to spaces in which greater control could be exercised over access and the production format of the events, and in which the audience acted as cheerleaders creating an excited emotional climate as Cameron pronounced. However, these occasions were constructed to create the impression of being public meetings. There is a long tradition in UK parliamentary election campaigns in which candidates hold public meetings in their constituencies in which they meet members of the public and address their questions and concerns. Such occasions are often robust and boisterous exchanges in which political discourse meets vernacular, committed expressions of politics. In contrast, in the versions of PM Direct aired during the 2015 General Election campaigns, another feature of the shows was the generation of the emotional climate of an election rally in which the ‘audience’ reacted positively and emotionally; Cameron was fired up and the audience was fired up. This was a simulation of the traditional campaign stump, but as a highly controlled and disciplined occasion in which enthusiastic party members created a sense of spontaneity as a background to Cameron’s mini speeches. The Conservative Party won an unexpected majority in the 2015 General Election, and one of the election promises had been to hold a referendum on Britain’s membership of the EU, which led to the referendum in 2016 when the two programmes analysed here were broadcast. **Background to the EU referendum** In a political campaign, notwithstanding the increasing importance of digital communication technologies, television remains a key site for performative embedding of campaign messages and engagement with national audiences. The communication styles of political leaders have adapted to make use of the diverse forms and contexts of communication, balanced by disciplined campaigning and media management strategies. In the three weeks of Brexit campaigning between 2 and 22 June 2016, 15 mainstream television programmes were aired across several genres. These included political interviews conducted by well-known journalists, debates between leading representatives of the Remain and Leave campaigns, audience discussion programmes with members of the public, and the programmes examined here: variants of Question Time in which key campaigners faced questions from members of the public. The BBC played a central role in staging 10 of the 15 television programmes during the campaign; ITV held three events, Sky News two and Channel 4 one. Recent commentators (Chadwick, 2013; Craig, 2016) have suggested that after a period of hyperbole about digital campaigning there is growing recognition that television is finding its place in contemporary campaigns, partly through innovations in programme forms and partly by complementing and intersecting with digital and social media campaigns. In the UK, a referendum, once triggered by an Act of Parliament, is managed not by government or political parties but by campaign groups that are chosen by the Electoral Commission to act as the official voice of the two campaigns representing the two sides of the referendum decision – in this case, Remain or Leave the EU. Bids are invited by groups that wish to represent each side of the referendum and the two groups chosen attract public campaign funds.
The campaign for Remain was modelled on the Conservative Party campaign of the 2015 General Election. Having governed as part of a coalition with the Liberal Democrats since 2010, the Conservatives won an unexpected parliamentary majority in 2015. The results recorded increased support for both Conservatives and Labour, the Liberal vote collapsed, and, notably, there was a dramatic increase in nationalist votes for both the Scottish National Party (SNP) and the UK Independence Party (UKIP). The Daily Telegraph offered an insightful analysis of the successful Conservative election campaign organized by Lynton Crosby that demanded discipline from members of the Conservative Party, a campaign agenda that focused on economic policy, negative campaigning against their main rivals – Labour and the Liberal Democrats – focused on the party leaders (Ed Miliband and Nick Clegg), and David Cameron fronting the campaign in presidential style (Swinford, 2015). The deployment of Cameron ‘front and centre’ formed a key part of the Remain campaign strategy as it had in the 2015 Conservative General Election campaign. This was partly justified by his high opinion poll ratings, with 41 per cent approval during this period (Boffey, 2015), although these were moderated by perceptions of Cameron as uncommitted and unemotional, and by negative reactions to his upper-class social background. Craig (2016) discusses the strategy adopted to overcome these public perceptions by deploying Cameron’s high-level media skills to make a direct appeal to the broader electorate. For example, Cameron handled interviews such as that by the BBC’s political journalist Andrew Marr by skilfully challenging the host’s framing of Conservative policy, answering only the questions he wanted to answer and refusing to be drawn into areas that might be problematic (Craig, 2016). The challenge facing Cameron and his advisors was to find ways of bringing his undoubted rhetorical and presentational skills into contact with a broader public to popularize his leadership. Consequently, skilful performance in political interviews was supplemented by a mixed communication strategy that kept Cameron in the public eye and aimed to soften his public image and spread his popular appeal. **The television programmes** **_Sky News_** Cameron kickstarted the Remain campaign with an appearance on a _Sky News_ special programme on 2 June 2016. The show began with an interview conducted by Faisal Islam, a political journalist, in front of a live television audience, followed by a moderated Q&A session with members of the studio audience hosted by Sky newsreader and presenter, Kay Burley. Islam opened his questioning by asking Cameron to stick to the facts about migration and to outline the figures for net migration during his leadership. This was challenging ground for Cameron because he had made a feature of his critique of the Brexit campaign by stating that it was based on false claims, anticipating subsequent debates about fake news and post-truth political discourse, and here he was being asked about the failure of his government to meet promises made in the General Election campaign to reduce migration to tens of thousands a year. Cameron gave a straight answer by admitting that 600,000 more people had entered the UK than had moved to other countries since he had come to power.
When pressed as to whether he had broken a manifesto promise, he provided an intriguing justification by shifting the ground from a manifesto ‘promise’ to an ‘ambition’, suggesting that the relatively better performance of the UK economy during his period of office compared to Continental Europe had led to the creation of many new jobs that had attracted workers from abroad. A strategic advantage of this answer is that it shifted the focus onto Cameron’s central campaigning agenda, the economic benefits of EU membership and the risks of leaving. He argued that the target “remains the right ambition for Britain” and that trying to cut immigration by leaving the EU and pulling out of the single market would be “madness” because of the economic damage it would cause. A number of further questions followed from Islam, most significantly challenging Cameron’s references to the First World War to illustrate the potential security dangers of Brexit, which Islam suggested was an example of “fearmongering”. After further questions that were less challenging, the show changed gear (and genre), morphing into a mediated popular press conference or political talk show (Craig, 2016). Burley moderated the Q&A session between members of the audience and Cameron. This combination of production formats was a challenge as the robust exchange with a professional journalist was followed immediately by a context that required softer skills to engage members of the public. What became immediately apparent was that Cameron continued with his strategy from PM Direct of treating questions as triggers or cues for campaign sound bites or as an expression of concern or lack of understanding of social or political issues. He saw his role as combining the provision of public information and a therapeutic alleviation of public fears and concerns. Rather than seeing indignation about the EU as an expression of substantial political critique addressing substantive political questions, his stance was that it reflected ignorance and anxiety. For example, one participant, identified as a businessman, asked Cameron to reflect on the “personal damage the scaremongering has done to your legacy.” Cameron appealed to personal authenticity in the shape of his core political commitments: “I don’t accept it is scaremongering. I am genuinely worried about Britain leaving the single market.” He then linked his campaign focus on the economic risks of leaving the EU to his political authority: “Frankly, I think the job of the prime minister is to warn about potential dangers as well as to talk about the upsides and the opportunities there are by being a member of this organization.” In addition, Cameron emphasized his reliance on and trust in a variety of experts who supported the claim that leaving the EU would be to the economic detriment of the UK, linking this to his political authority: “But if I didn’t listen to the IMF, to the OECD, to the TUC, to the CBI, to the governor of the Bank of England – if I didn’t listen to any of these people, I would not be doing my job and I would not be serving this country.” As the campaign unfolded, Brexit campaigners were able to characterize such claims to authority as representing the interests of the great and the good: in other words, the establishment. Aversion to and criticism of the establishment is a key plank of populist political discourse (Jagers and Walgrave, 2007), which Cameron opened himself up to by invoking a consensus of expertise in favour of Remain.
Another questioner expressed concerns that during the Brexit campaign the Prime Minister shared a platform with the Mayor of London, whom he had strongly opposed as the Labour candidate in the mayoral election of the previous year. Criticizing the Mayor’s apparent support or legitimation of terrorist groups, Cameron responded: We had a lively election campaign in London, I didn’t think it was the right choice some of the people he shared a platform with. The right thing for the PM to do is to work together. Sadiq and I disagree about many things; we’ll try and work together and on this issue of Europe we agree. We buried our disagreements and appeared on a platform. From Cameron’s perspective, the contingencies of a referendum necessarily realigned politics across party lines and, as leader of the Conservative Party and Prime Minister, he would now find himself opposing the arguments and positions of some of his own party colleagues and working for the Remain campaign, which included many liberal or left-leaning organizations, public figures, and politicians. From the perspective of members of the public, however, the dissociation of Cameron from his role as leader of the Conservative Party and Prime Minister was not taken lightly. For example, one participant suggested the referendum was a vote of confidence in the government and in Cameron himself. Intriguingly, some political commentators ridiculed participants for such questions, but the difficulty of separating political commitments and allegiances from the question of membership of the EU was and remains non-trivial. The limits of Cameron’s communication strategy on this programme were well illustrated by his exchange with a literature student on the Sky News programme. The student, identified as Soraya Bouazzaoui, stated her concerns: “The entire campaign was nothing but scaremongering; no valid facts; no pros and cons and that everything I’ve seen makes voting into the EU look worse.” The campaign, in other words, was high on persuasion and low on fact and argument, and significantly, this intelligent, informed, articulate member of the public was thinking of the referendum as a choice between ‘voting in’ and ‘voting out’. Referenda are, however, usually deployed following parliamentary agreement on legislation that has significant constitutional implications, which is then put to the public for their assent. Cameron’s original strategy was to negotiate significant changes to the UK’s position in Europe, to get parliamentary approval for the changes, and then to put these new conditions to a referendum. In this scenario, the question put to the public would have been to ask them whether they agreed or disagreed with the new conditions for UK membership of the EU. However, Cameron was only able to negotiate adjustments to the UK’s conditions of membership and, in this context, the referendum was drafted as an in/out vote giving equal weight to both sides and triggering more existential questions about membership of the EU. However, as is evident in Cameron’s performance on this show, the Remain campaign avoided substantive political questions of migration and sovereignty to focus on economic policy. Having expressed her concerns about the conduct of the Brexit campaigns, Bouazzaoui put her substantive question about the reassurances that Cameron had repeatedly made that remaining in the EU would make the UK safer in response to terrorist threats.
Referring to concerns raised by Middle East states about Turkey’s relations to and perceived support for terrorist groups, she questioned whether being in the EU meant that there were no risks in foreign policy. Cameron’s response was characteristic, saying that he would address the two issues that Bouazzaoui had raised: First, the positive case for staying. I think there is a positive case. I think we’ll be better off as a country, with more jobs. I think we’ll keep our country moving forward, we’ll get things done in the world, whether it’s tackling climate change or indeed standing up to Islamic terrorism … and also, we’ll be safer; strength in numbers. This is a graphic illustration of Cameron’s approach to questions from members of the public as a cue to deliver his campaign messages. However, the question was a legitimate and serious one, and Cameron’s response clearly frustrated Bouazzaoui, who interrupted him: That’s not answering my question. Let me finish now, because I’ve seen you interrupt many people before. Let me finish. I’m an English Literature student, I know waffling when I see it, OK. I’m sorry, but you’re not answering my question – how can you reassure people who want to vote out that we are safe from extremism when we are willing to work with a government like Turkey who want to be part of the EU when they are under heavy accusation? Cameron’s discomfort was evident and he tried to get back on track by saying that he had “got it” and addressed the question of Turkey’s potential accession to the EU: There is no prospect of Turkey joining the EU in decades. They applied in 1987, they have to complete 35 chapters. One has been completed so far. At this rate they will join in the year 3000. There are lots of reasons to vote one way or vote the other way. Turkey is not going to join the EU any time soon, every country, every parliament, has a veto. There are lots of things to worry about in this referendum campaign. I absolutely think that is not a prospect, it’s not going to happen. This exchange illustrates a number of aspects of the interaction between the performance of power and of citizenship in this programme. It demonstrates Cameron’s strategy of taking questions as cues to which he responds with rehearsed campaign speeches. The passage also demonstrates that an important aspect of the performance of citizenship in this context is refusing the subject position of the audience to Cameron’s pronouncements, to bring power to account by insisting on the relevance of answers, and disrupting the performance of power. Cameron’s strategy of treating questions as expressions of concern that he takes as needing reassurance, information or contradiction leaves this participant, members of the audience, and by extension, the public, frustrated. The programme demonstrated that the public were not in agreement with the Remain campaign’s focus on the economic consequences of leaving the EU, and that substantive political questions related to migration, the legal framework of the EU, sovereignty, the impact of migration on public services and the efficacy of the government’s austerity policies were all implicated in deciding how to vote in the referendum. Furthermore, Cameron’s deflection of questions and his skilled practice of turning to his own agenda raised serious questions about both his claims to authenticity and political authority, on which his enviable popularity ratings had been based up to this point.
Press reaction to the programme was ambivalent. There was recognition that Cameron had got his agenda across despite the distraction of a hostile interview and having to manage the relationship with members of the public. In contrast, there was an acknowledgement that the interactions with members of the public were less convincing and seemed to illustrate a gap between the campaign agenda and public concerns. Interestingly, this did not split neatly along the political affiliation of the papers – for example, Michael Deacon, writing in The Telegraph: ‘The studio audience didn’t think much of him, and he knew it. It was no disaster. But if you wondered why Mr Cameron didn’t fancy a proper debate: now you know’ (Deacon, 2016). **The BBC** Shortly before the EU referendum Cameron appeared on a BBC programme to meet the people in an adaptation of the Question Time format, moderated by resident host David Dimbleby. This version of the programme differed in significant ways from the standard version of the show on which members of the studio audience are selected by the host, guided by the production team, to ask questions to a panel representing the main political parties plus celebrity guests. In the programme commissioned for the referendum, there was no panel and, instead, David Cameron fielded all the questions. The producers and the host had learned from the Sky News programme and dealt with Cameron’s tendency not to answer questions and to shift topic onto his campaign agenda by clustering questions thematically. Consequently, although Cameron shifted topic in response to the questions, he found himself back on the same ground in the next question. The effect of this was exacerbated by the programme format, which differed from the panel version of the programme, in which different members of the panel voice alternative responses to audience questions and contest these among each other before the host turns back to the audience for supplementary questions and comments. In contrast, in this version of the programme one question to the Prime Minister was rapidly followed by another. The first cluster of questions addressed the impact on the political culture of the Brexit campaign, asking, for example, whether it had “soured the political climate in the UK” by amplifying antagonisms. In response, Cameron attempted to draw a distinction between political commitment, passion and aggression, arguing that the committed use of reason, argument and rhetoric is essential to politics. He then invited the audience to contemplate what distinguishes reasonable/appropriate from unreasonable/inappropriate arguments and sentiments in political discourse and public debate. In this he positioned himself on the ‘right’ side of these oppositions, claiming that his politics combined authentic personal commitment with political authority backed by arguments and claims supported by evidence. His opponents, by implication, were represented as political opportunists prepared to say anything to win, and consequently lacking both personal authenticity and political authority (Craig, 2016). These reflections on civility in public and political discourse are all very interesting, but Cameron sidestepped the point that the questions were addressed to the conduct of his campaign as much as to the Brexit campaign and to the use of negative campaigning to discredit opponents. Nevertheless, Cameron pressed ahead, aiming to justify the comparison between himself and his political opponents.
He focused on Nigel Farage, leader of the populist UKIP and a key figure in the campaign to leave the EU, although not part of the official ‘Leave’ group. Cameron referred to a Brexit campaign poster by UKIP that used a photograph of refugees crossing the border into Bosnia-Herzegovina with the headline ‘Breaking Point’. He argued that Farage was “wrong in fact and wrong in motivation”, and that the aim of Brexit campaigners was an “attempt to frighten and divide people.” In the campaign, Brexit campaigners, particularly Boris Johnson, were able to turn this argument against Cameron by pointing to inconsistencies in his position on Europe, thereby challenging the authenticity of his position and characterizing his focus on the potential economic ills of leaving the EU as ‘project fear’. At this point, the host intervened to ask, “Has your side been guilty of that?” articulating a commitment to impartiality as a moderator of the broader public debate. The theme continued, including a question that challenged Cameron on the ‘Brexit budget’ prepared by the Chancellor to demonstrate the effects of leaving Europe on taxation and public spending. Cameron’s reply suggested that his concerns were authentic, expressing his “genuine concern for the economic impact of leaving the EU” and citing, once more, the support of independent experts. Following the exchange on the conduct of the campaigns was a series of questions and answers on Cameron’s own future: would he resign if the country voted to leave? Would he call a general election if the vote was to leave the EU? These questions reflected the central role that Cameron played in the campaign, and although he tried to argue that the campaign was not about him, he shifted to his main agenda that we should remain for the sake of the economy, jobs, safety, security, and because being part of Europe strengthened the UK: “It comes down to a question of the economy and we need to work together – to grow the economy and beat terrorists.” How did Cameron find himself in such a difficult, compromised performative context? In the language of the history of the present (Foucault, 1977, 1984; Garland, 2014), a line can be traced back to his previous ‘meetings’ with members of the public in PM Direct allied to a leadership style that aimed to combine personal authenticity with political authority, and a disciplined approach to campaigning that included a presidential style with Cameron at the centre, negative campaigning against rivals and a focus on economic policy. The field of emergence for this configuration of leadership and campaigning styles was partly due to the demands of the UK coalition government of the Conservative and Liberal Democrat parties between 2010 and 2015, which demanded efforts at public communication as policies did not always clearly follow party lines. During this time there was also increasing public support for nationalist and populist parties that required renewed forms of popular communication from established political parties. However, the PM Direct events did not create a stage that afforded the opportunity for authenticated engagement with members of the public but instead were ‘managed shows’ (Thompson, 1995) in which Cameron and his team selected the places and audiences and set the rules of interaction. In contrast, as we have seen, the two television shows in which he met the people in the Brexit campaign were managed by the broadcasters and gave opportunities for the performance of disruptive citizenship.
Instead of a controlled context that afforded the illusion of public engagement while allowing Cameron to deliver his campaign message, he found himself involved in a contested performative space. So how did members of the public find themselves occupying space in the television studio and challenging the performance of power? The difficulties experienced by Cameron and the opportunity afforded to citizens were a function of the production format of the programme as a mise en scène for the performances of power and citizenship. The two programmes included significant variations on the _Question Time_ format, a popular political panel talk show in which guest members of the public put topical social and political questions to a panel. _Question Time_ is a microcosm of the role of factual programming in public service broadcasting, assuming a politics of pluralism, represented by the different panel members who stand for the main political parties as competitive interest groups and, therefore, representing a particular idea of public accountability understood as a fair and balanced representation of the views of different competing interests in the political sphere (Karppinen, 2007). The transformation of the programme format in which the panel was replaced by Cameron represents a shift from the idea of communication of politics in which different positions are put in front of the public (democratic pluralism) with commentary from expertise (elite democracy), to the appropriation of public broadcasting as a vehicle for a political campaign. In the traditional formulation, the responsibility of public service broadcasting is to create contexts in which competing interests have equal opportunities to state their arguments and to provide an expert commentary on those views. In contrast, placing Cameron in the place of the panel made the Prime Minister the single recipient of questions, transforming the programme into a popular version of the press conference in comparison to the panel format adopted in _Question Time_, which constituted a debate between different political positions. Episodes of _Question Time_ usually proceed in a sequence in which the host invites a question from the studio audience and then asks panel members to answer. In this sense, _Question Time_ is characterized by contestation, argument and often conflict between panel members as they debate alternative answers to the questions under the direction and scrutiny of the programme host. The host then goes back to the audience for supplementary questions and reactions, and finally the person who asked the question gives their reactions and reflections. In contrast, Cameron’s previous mediated town hall PM Direct ‘meetings’ were a pale reflection of the dynamics of _Question Time_. There was no variety of responses to questions, just Cameron’s, and no display of divergent views or contestation in front of the audience. The rhythm of exchanges and arguments and the emotional flow of the programmes were altered considerably by these changes, becoming a series of Q&A rather than a question followed by a robust exchange and opportunity for further audience engagement. In terms of the flow of emotions, instead of a dispersed exchange of feelings, views and political commitments, the direction of sentiment was focused on Cameron. **Reflections and conclusions** How are we to think about the performance of citizenship in these programmes and the implications for understanding political subjectivity and agency?
These disruptive encounters appear to be the work of individuals asserting their rights to visibility in public and to challenge the performance of power. In terms of asserting communicative rights, participants communicate in a performative practice similar to Isin and Ruppert’s (2015) account of digital citizenship as a rights-claiming practice. However, these appear to be political acts undertaken by individuals expressing their autonomy by occupying a space (mainstream television studios) in which the performance of power is made visible and realized through the interaction between the performance of power and citizenship. The lay participants on these programmes are not there to press individual claims, nor are they there to represent an emerging collective or connective (Bennett and Segerberg, 2013); they are there as citizens, as individuals aiming to have their say, bring power to account and disrupt attempts to persuade. This is a form of what Dayan (2009) calls ‘monstration’ and takes up his argument that as television broadcasting converges with digital culture it can be understood by analogy to the bulletin board on which individual citizens post their messages to the public. Such performances of citizenship are agentic, skilful deployments of material and symbolic resources in staged interactions articulated as individualized forms of dissent (Ruiz, 2014, 2016). The television discussion programmes in which Cameron met the people reflect ‘the ways in which citizens – from protestors in Occupy movements … to participants [in] street performances reclaim public space as a place to play out, both expressively and deliberatively, struggles for recognition and new political subjectivities’ (Rovisco and Ong, 2016: 3). In this sense, although appearing on television, their actions reflected recent activism and protest in what Gerbaudo (2016) terms ‘the digital popular’. The invasion of the television studio and programme space reflects the transgressions of space by the Occupy movement. This is an incursion into mainstream media culture deploying some of the tactics of new protest movements in the name of individual citizens. Along with the invasion of public spaces, new social movements experiment with radical forms of democracy by making use of the resources of digital media in reclaiming the square (Rovisco and Ong, 2016). In so doing, they engage with activities within the square that reflect conceptions of direct democracy and civic virtue (Dagger, 1997). Just as digital and social media provide social movements with new resources that ameliorate their lack of access to mainstream media, so, here, performative disruption seeks to influence through visibility and public impact and by providing models of alternative political practices. Perhaps what is at stake here is that the disruption of political communication and the occupation of the places of media production appear to express the position of individual citizens intervening in public communication and debate. Such activism appears to suggest a trajectory from personal concerns that are projected into the public sphere through performance as a social practice that inserts personal issues and commitments into mediated public life.
There are other examples of a trajectory from personal commitment and action to political debate, such as when individuals protest about the environmental implications of global systems of production and distribution through individual and localized practices of consumer activism (Lekakis, 2013). One way to make sense of this is as a combination of political autonomy and democracy as a social practice (Gray, 2000). Or, following Dagger (1997), to argue that participants perform individualized dissent related to questions of public interest in civically virtuous expressions of individual political autonomy as: … social actors, embedded in collective representations and working through symbolic and material means, implicitly orient towards others as if they were actors on a stage seeking identification with their experiences and understandings from their audiences. (Alexander et al., 2006: 2) These programmes stage an encounter between performances of power and citizenship (Goffman, 1961) that instantiates the blurring of boundaries in contemporary political culture, reflecting tropes of digital citizenship in their disruption of power (Isin and Ruppert, 2015) and the digital popular as sites of occupation that engage new political subjectivities (Gerbaudo, 2016). In this article I have explored the way that these trends spill over into mainstream, linear media to disrupt the performance of power in a staged encounter by autonomous, individual political subjects (Gray, 2000). Traditional differentiations between the state and the body politic (Habermas, 1987), between questions of politics and values (Rawls, 1993), and between civil and uncivil discourses and actions (Mouffe, 2005), are all potentially blurred in the current conjuncture typified by the example of when Cameron met the people. **References** Alexander JC, Giesen B and Mast JL (eds) (2006) _Social Performance: Symbolic Action, Cultural Pragmatics, and Ritual_. Cambridge: Cambridge University Press. Bennett WL and Segerberg A (2013) _The Logic of Connective Action_. Cambridge: Cambridge University Press. Boffey D (2015) David Cameron maintains high approval rating, despite Labour’s poll lead. _The Guardian_, 15 February. Available at: www.theguardian.com/politics/2015/feb/14/opinium-poll-david-cameron-maintains-approval-rating (accessed on 31 July 2018). Cappella JN and Jamieson KH (1997) _Spiral of Cynicism: The Press and the Public Good_. New York: Oxford University Press. Chadwick A (2013) _The Hybrid Media System: Politics and Power_. Oxford: Oxford University Press. Craig G (2016) _Performing Politics_. Cambridge: Polity Press. Dagger R (1997) _Civic Virtues: Rights, Citizenship, and Republican Liberalism_. Oxford: Oxford University Press. Dayan D (2009) Sharing and showing: Television as monstration. _The Annals of the American Academy of Political and Social Science_ 625, The end of television? Its impact on the world (so far), September: 19–31. Deacon M (2016) ‘I know waffling when I see it!’ David Cameron takes a Brexit roasting. _The Telegraph_, 2 June. Available at: www.telegraph.co.uk/news/2016/06/02/i-know-waffling-when-i-see-it-david-cameron-takes-a-brexit-roast/ (accessed on 31 July 2018). Foucault M (1977) _Discipline and Punish: The Birth of the Prison_. New York: Pantheon. Foucault M (1984) Nietzsche, genealogy, history. In: Rabinow P (ed) _The Foucault Reader_. New York: Pantheon, 76–100. Garland D (2014) What is a ‘history of the present’? On Foucault’s genealogies and their critical preconditions.
_Punishment & Society_ 16(4): 365–384.

Gerbaudo P (2016) Occupying the digital-popular. In: Rovisco M and Ong J (eds) _Taking the Square: Mediated Dissent and Occupations of Public Space_. London: Rowman & Littlefield, pp. 37–54.

Goffman E (1959) _The Presentation of Self in Everyday Life_. Harmondsworth: Penguin.

Goffman E (1961) _Encounters: Two Studies in the Sociology of Interaction_. Indianapolis: Bobbs-Merrill.

Gray J (2000) _Two Faces of Liberalism_. Cambridge: Polity Press.

Habermas J (1987) _The Theory of Communicative Action, Volume 2: Lifeworld and System: A Critique of Functionalist Reason_. Cambridge: Polity Press.

Isin E and Ruppert E (2015) _Being Digital Citizens_. London: Rowman & Littlefield.

Jagers J and Walgrave S (2007) Populism as political communication style: An empirical study of political parties' discourse in Belgium. _European Journal of Political Research_ 46: 319–345.

Karppinen K (2007) Against naïve pluralism in media politics: On the implications of the radical-pluralist approach to the public sphere. _Media, Culture & Society_ 29(3): 495–508.

Lekakis EJ (2013) _Coffee Activism and the Politics of Fair Trade and Ethical Consumption in the Global North_. London: Palgrave Macmillan.

Mouffe C (2005) _On the Political_. London: Routledge.

Rawls J (1993) _Political Liberalism_. New York: Columbia University Press.

Rovisco M and Ong J (eds) (2016) _Taking the Square: Mediated Dissent and Occupations of Public Space_. London: Rowman & Littlefield.

Ruiz P (2014) _Articulating Dissent: Protest and the Public Sphere_. London: Pluto.

Ruiz P (2016) Identity, place and politics: From picket lines to occupation. In: Rovisco M and Ong J (eds) _Taking the Square: Mediated Dissent and Occupations of Public Space_. London: Rowman & Littlefield, pp. 15–36.

Swinford S (2015) Election 2015: How David Cameron's Conservatives won. _The Telegraph_, 8 May. Available at: www.telegraph.co.uk/news/general-election-2015/11592230/Election-2015-How-David-Camerons-Conservatives-won.html (accessed on 31 July 2018).

Thompson J (1995) _The Media and Modernity: A Social Theory of the Media_. Cambridge: Polity Press.
{ "disclaimer": "Notice: The following paper fields have been elided by the publisher: {'references'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1177/1367877919849960?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1177/1367877919849960, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "other-oa", "status": "GREEN", "url": "https://figshare.le.ac.uk/articles/journal_contribution/The_performance_of_power_and_citizenship_David_Cameron_meets_the_people/10243721/1/files/18491303.pdf" }
2,019
[ "JournalArticle" ]
true
2019-07-23T00:00:00
[]
9,769
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Medicine", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Environmental Science", "source": "s2-fos-model" }, { "category": "Engineering", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/024c9e9327edd1f6ed69eb9850cfaf5c2b266be9
[ "Computer Science", "Medicine" ]
0.846175
Cloud Servers: Resource Optimization Using Different Energy Saving Techniques
024c9e9327edd1f6ed69eb9850cfaf5c2b266be9
Italian National Conference on Sensors
[ { "authorId": "5956296", "name": "Mohammad Hijji" }, { "authorId": "107848585", "name": "B. Ahmad" }, { "authorId": "23148907", "name": "Gulzar Alam" }, { "authorId": "2275954260", "name": "Ahmed M. Alwakeel" }, { "authorId": "31240499", "name": "Mohammed M. Alwakeel" }, { "authorId": "2191399741", "name": "L. A. Alharbi" }, { "authorId": "31111836", "name": "Ahd Aljarf" }, { "authorId": "2115774945", "name": "M. Khan" } ]
{ "alternate_issns": null, "alternate_names": [ "SENSORS", "IEEE Sens", "Ital National Conf Sens", "IEEE Sensors", "Sensors" ], "alternate_urls": [ "http://nbn-resolving.de/urn/resolver.pl?urn=urn:nbn:ch:bel-142001", "http://www.mdpi.com/journal/sensors", "https://www.mdpi.com/journal/sensors" ], "id": "3dbf084c-ef47-4b74-9919-047b40704538", "issn": "1424-8220", "name": "Italian National Conference on Sensors", "type": "conference", "url": "http://www.e-helvetica.nb.admin.ch/directAccess?callnumber=bel-142001" }
Currently, researchers are working to contribute to the emerging fields of cloud computing, edge computing, and distributed systems. The major area of interest is to examine and understand their performance. The globally leading companies, such as Google, Amazon, ONLIVE, Gaikai, and eBay, are truly concerned about the impact of energy consumption. These cloud computing companies use huge data centers, consisting of virtual computers that are positioned worldwide and require exceptionally high power costs to maintain. The increased requirement for energy consumption in IT firms has posed many challenges for cloud computing companies pertinent to power expenses. Energy utilization depends upon numerous aspects, for example, the service level agreement, the techniques for choosing virtual machines, the applied optimization strategies and policies, and the kinds of workload. The present paper tries to answer the challenges related to energy saving with the assistance of dynamic voltage and frequency scaling techniques for gaming data centers, and to evaluate the dynamic voltage and frequency scaling technique against non-power-aware and static threshold detection techniques. The findings will show service suppliers how to meet quality of service and quality of experience constraints while fulfilling the service level agreements. For this purpose, the CloudSim platform is applied to implement a scenario in which game traces are employed as the workload for analyzing the procedures. The findings evidence that a selection of good-quality techniques can help gaming servers conserve energy expenditure and sustain the best quality of service for consumers located worldwide. The originality of this research lies in examining which procedure performs best (for example, dynamic, static, or non-power-aware). The findings validate that less energy is utilized by applying the dynamic voltage and frequency method, along with fewer service level agreement violations and better quality of service and experience, in contrast to the static threshold consolidation or non-power-aware techniques.
# sensors

_Article_

## Cloud Servers: Resource Optimization Using Different Energy Saving Techniques

**Mohammad Hijji** **[1,]***, **Bilal Ahmad** **[2]**, **Gulzar Alam** **[3]**, **Ahmed Alwakeel** **[1]**, **Mohammed Alwakeel** **[1]**, **Lubna Abdulaziz Alharbi** **[1]**, **Ahd Aljarf** **[4]** and **Muhammad Umair Khan** **[5,]***

1 Faculty of Computers & Information Technology, University of Tabuk, Tabuk 71491, Saudi Arabia
2 Warwick Manufacturing Group, University of Warwick, Coventry CV4 7AL, UK
3 School of Computing, Ulster University, Belfast BT15 1AP, UK
4 College of Computers & Information Systems, Umm Al Qura University, Mecca 21955, Saudi Arabia
5 School of Computing, Gachon University, Seongnam-si 13120, Korea
***** Correspondence: m.hijji@ut.edu.sa (M.H.); mumairkhan@gachon.ac.kr (M.U.K.)

**Citation:** Hijji, M.; Ahmad, B.; Alam, G.; Alwakeel, A.; Alwakeel, M.; Abdulaziz Alharbi, L.; Aljarf, A.; Khan, M.U. Cloud Servers: Resource Optimization Using Different Energy Saving Techniques. _Sensors_ 2022, _22_, 8384. [https://doi.org/10.3390/s22218384](https://doi.org/10.3390/s22218384)

Academic Editor: Sung-Bae Cho

Received: 21 September 2022; Accepted: 26 October 2022; Published: 1 November 2022

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons [Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).](https://creativecommons.org/licenses/by/4.0/)

**Abstract:** Currently, researchers are working to contribute to the emerging fields of cloud computing, edge computing, and distributed systems. The major area of interest is to examine and understand their performance. The globally leading companies, such as Google, Amazon, ONLIVE, Gaikai, and eBay, are truly concerned about the impact of energy consumption. These cloud computing companies use huge data centers, consisting of virtual computers that are positioned worldwide and require exceptionally high power costs to maintain. The increased requirement for energy consumption in IT firms has posed many challenges for cloud computing companies pertinent to power expenses. Energy utilization depends upon numerous aspects, for example, the service level agreement, the techniques for choosing virtual machines, the applied optimization strategies and policies, and the kinds of workload. The present paper tries to answer the challenges related to energy saving with the assistance of dynamic voltage and frequency scaling techniques for gaming data centers, and to evaluate the dynamic voltage and frequency scaling technique against non-power-aware and static threshold detection techniques. The findings will show service suppliers how to meet quality of service and quality of experience constraints while fulfilling the service level agreements. For this purpose, the CloudSim platform is applied to implement a scenario in which game traces are employed as the workload for analyzing the procedures. The findings evidence that a selection of good-quality techniques can help gaming servers conserve energy expenditure and sustain the best quality of service for consumers located worldwide.
The originality of this research lies in examining which procedure performs best (for example, dynamic, static, or non-power-aware). The findings validate that less energy is utilized by applying the dynamic voltage and frequency method, along with fewer service level agreement violations and better quality of service and experience, in contrast to the static threshold consolidation or non-power-aware techniques.

**Keywords:** cloud computing; distributed systems; data centers; virtual machines; energy saving

### 1. Introduction

Virtualization techniques partition a physical server into many isolated, single-performance computing environments by implementing a layer such as a hypervisor or virtual machine manager on top of the hardware or operating system. In such an environment, every single-performance computer, such as a virtual machine, runs independently, with its own operating system and other relevant applications, without mutual interference. Virtualization was not widely adopted earlier due to challenges such as separate hardware resources, limited memory, and inadequate networks [1–3]. Virtualization has since emerged with advancements in technology, such as enhancements in hardware, cloud computing, IT networks, etc. [4,5]. The research community and practitioners started to work on the effective operation of virtualization as users' demands and the use of cloud data centers for performing their tasks and applications increased [6,7]. Issues arose, such as overloaded and idle servers, the fact that if one server fails, all of its virtual machines are affected, the protection of virtual machines, hardware failure, etc. These issues were resolved with the advent of virtual machine migration, which originated from process migration [8]. A great part of cloud computing operations is enabled by virtual machine migration, such as server consolidation, hardware maintenance, and energy and flow management [9–11].

Numerous cloud computing models have been developed in which control and management of computing resources are provided. This helps businesses and clients use resources according to their design needs [12–14]. As an alternative to spending increasing amounts on acquiring information technology infrastructure and dealing with hardware and software maintenance and updates, companies can outsource their computational requirements to the cloud. Large data centers have developed that consist of thousands of processing nodes and expend massive volumes of electric power. According to a recent survey, information technology accounts for 25% of the total cost of managing and using data centers [15,16]. Energy consumption is overwhelming not only due to idle computing resources but also because of the ineffective management of these computational hardware and software resources. Servers commonly operate at up to 50% of full capacity, which leads to additional costs from over-provisioning and a higher total cost of acquisition [17]. Energy management can be used to leverage resources through virtualization techniques and technology [18,19]. It permits cloud providers to create many virtual machine instances on a single physical server to enhance the efficient management and utilization of computational resources. This increases the return on investment.

Amiri et al. [20] recommended an SDN (Software-Defined Network) model for choosing DCs (data centers) for new gaming sessions.
They used a hierarchy-based model for transport/response delay with bandwidth status, using the Lagrange algorithm and logarithmic techniques. Similarly, they used a new approach to reduce end-to-end latency in a cloud-based gaming data center environment [21]. Cai et al. [22] conducted a comprehensive survey on cloud gaming covering various facets such as the platforms used for cloud gaming, various optimization techniques, and commercial services for cloud gaming. Further, they explored the experience factor for gamers and energy utilization along with network metrics. Chen et al. [23] proposed an approach for describing the energy usage of virtual machines using measurement attributes such as performance, execution time, power (utilization and effectiveness), and energy usage. Therefore, reducing cloud-related costs and improving energy savings require appropriate optimization techniques that also enhance the user's gaming experience.

The GreenCloud architecture aims to reduce data center power consumption while guaranteeing performance from the users' perspective. GreenCloud architecture enables comprehensive online monitoring, live virtual machine migration, and VM placement optimization. For experimentation, the CloudSim framework is used. CloudSim is a free, open-source, Java-based framework for simulating cloud computing infrastructure and services. This framework is utilized to model and simulate a cloud computing setting to perform tests and produce results. Further, it supports various functionalities such as the creation of cloud-based entities, relations among entities, event processing, job and task queues, and the implementation of broker policies [24,25].

The major contributions of the proposed research are as follows:

- To investigate how resource optimization can be performed in gaming data centers;
- To utilize a real-time gaming workload;
- To measure service quality during online gaming using two of its features, i.e., energy consumption and the SLA (Service Level Agreement);
- To test and implement DVFS (Dynamic Voltage and Frequency Scaling), non-power-aware, and static threshold virtual machine consolidation techniques for improving service quality.

The remainder of the paper is organized as follows: Section 2 explains the literature review; Section 3.1 presents challenges related to the migration of a single virtual machine; Section 3.2 addresses the challenges related to the migration of a dynamic virtual machine; Section 4 discusses the system methodology; Section 5 describes the performance analysis and discussion; and Section 6 presents the conclusions and future work.

### 2. Literature Review

Nathuji and Schwan [26] did initial work on the application of power management in virtualized data centers by proposing an architecture called a data center resource management system, split into two categories of policies: local and global. Then, [27] worked on power management in virtualized heterogeneous environments, formulating a sequential optimization problem and addressing it through the concept of limited lookahead control. This research work aims to increase resource providers' profits by reducing power consumption. Similarly, [28] researched the issue of scheduling multi-tier web applications on virtualized heterogeneous systems to decrease power consumption while maintaining performance.
Further, [29] recommended a method for the efficient allocation of power to virtual machines across the complete environment of a virtualized heterogeneous computing system. [30] used continuous optimization to solve the difficulty of power-aware dynamic placement of applications in a virtualized heterogeneous environment. [31] worked on the allocation of available power budgets among servers of virtualized server farms in heterogeneous environments to decrease the mean response time; furthermore, they used the proposed model to detect the optimal power allocation.

Jung et al. [32] analyzed the issue of dynamic consolidation of virtual machines running multi-tier web applications while using live migration. However, the proposed method is only implemented on individual web application setups and cannot be used as a service system for multitenant infrastructure. Similarly, [33] worked on the same issue of capacity planning and resource allocation by proposing three controllers operating at the longest, shorter, and shortest time scales; every controller operates at a different time scale. Kumar et al. [34] developed a method for dynamic virtual machine consolidation based on estimation stability. Further, they mentioned that the resource demands of applications are estimated by utilizing a time-varying probability density function. They stated that the values can be obtained by utilizing offline profiling of applications and calibration; however, offline profiling is impractical for infrastructure-as-a-service systems. Likewise, [35] researched the similar issue of dynamic consolidation of virtual machines running applications, using machine learning algorithms to optimize the combined energy consumption. However, this method was applied to high-performance computing and is not appropriate for varied workloads.

Arshad et al. [36] proposed an algorithm based on energy-efficiency heuristics, utilizing virtual machine consolidation to reduce excessive energy consumption in cloud data server environments. They built a model for relocating virtual machines from one physical host to another with the aim of lowering energy consumption. Moura et al. [37] used an interval-valued fuzzy logic approach to overcome the vagueness and inaccuracy of resource usage, saving energy with the lowest performance degradation. They increased energy effectiveness by 2.3% in cloud computing simulation environments. Similarly, Shaw et al. looked at the application of reinforcement learning to address the virtual machine consolidation issue, i.e., the distribution of virtual machines throughout cloud data centers, to enhance the management of resources. They improved energy efficiency by 25% and lowered service violations by 63%. Liu et al. [38] proposed a method to overcome the problem of virtual machine consolidation to optimize energy utilization. They presented a new algorithm to choose the optimal solution for energy usage optimization, accomplishing an average energy saving of 42%. Further, Gharehpasha et al. [39] presented an approach combining the sine and cosine algorithms with the salp swarm algorithm for the best possible virtual machine placement; their research also aims to decrease energy utilization in the cloud data center environment while reducing SLA violations.
Hussain et al. [40] developed a schedule-based algorithm to decrease energy usage in the heterogeneous virtual machine cloud environment. Finally, Katal et al. [41] conducted a thorough survey on energy efficiency in cloud computing data center environments. They discussed various methods to lower the power usage in data centers, including hardware-component-level methods for decreasing the usage by components.

In contrast to the above literature findings, the central research field has consisted of single servers and exclusive tasks. However, huge cloud computing platforms such as Gaikai and Amazon EC2 now run servers that serve versatile applications and are distributed worldwide. There is thus a research gap in gaming, particularly for large-scale multi-player games with consumers located remotely; likewise, less evidence has been found regarding energy saving in the context of large data centers in single-objective applications. The notion of virtualization has been employed by researchers using local regression and robust migration algorithms. The findings suggest that low latency and good service quality can be attained in huge data servers with this virtualization technique, but a trade-off remains between the quality of service and the quality of experience [42]. Table 1 compares different optimization techniques by applied method, category, and the problem each resolves.

**Table 1.** Different Optimization Techniques.

| Method | Categories | Technique | Resolves |
|---|---|---|---|
| Data Centre Resource Management [26,27] | Local and Global Policies | Virtualization | Sequential optimization, addressed through the concept of limited lookahead control |
| Scheduling for multi-tier web applications [28] | Virtualizing heterogeneous systems | Virtualization | Decreases power consumption while maintaining performance for multi-tier web applications |
| Power-aware dynamic placement of applications [30] | Dynamic Virtualization | Continuous Optimization | Power-aware dynamic placement of applications in a virtualized heterogeneous environment |
| Dynamic virtual machine consolidation [34] | Dynamic VM consolidation based on estimation stability | Resource demand estimation utilizing a time-varying probability density function | Resolves resource optimization for small applications |
| Dynamic Voltage and Frequency Scaling (DVFS)—Proposed | Single and multi-server | DVFS, based on workload | Saves power and resolves resource optimization issues based on workload for servers placed locally and globally |

### 3. Challenges

The main challenges are explored in two domains: (1) migration of a single virtual machine and (2) migration of a dynamic virtual machine.

### 3.1. Migration of a Single Virtual Machine

Virtual machines offer benefits to the system consumption, workload, and flexibility of the data center. However, challenges remain, such as waste of resources, network congestion, and consolidation, which can cause server hardware failures. Single virtual machine migration is used by researchers to define a data center with particular properties [43,44]. Similarly, [45] worked to increase the average server utilization, experimenting on historical data to predict future server demands and migrating virtual machines according to those future needs.
Single virtual machine migration is used by researchers to define a data center with particular properties [43,44]. Similarly, [45] worked to increase the server average utilization and experiments historical data to predict the future servers’ demands, as well as migrating the virtual on the historical data to predict the future servers’ demands, as well as migrating the vir-machine in conditions of future needs. tual machine in conditions of future needs. Unstable length and long latency are the key challenges of migrating virtual machines ### in wide-area networks. Therefore, [Unstable length and long latency are the key challenges of migrating virtual ma-46] get significantly responsive in wide area network mi chines in wide-area networks. Therefore, [46] get significantly responsive in wide area gration by proposing a three-phase solution. Most importantly, virtual machine migration network migration by proposing a three-phase solution. Most importantly, virtual ma-is widely utilized to conserve power using the consolidation of idle desktop virtual [47]. chine migration is widely utilized to conserve power using the consolidation of idle desk-Moreover, researchers have developed algorithms with the objective of decreasing power top virtual [47]. Moreover, researchers have developed algorithms with the objective of mode transition latency [48]. decreasing power mode transition latency [48]. ### 3.2. Migration of a Dynamic Virtual Machine _3.2. Migration of a Dynamic Virtual Machine_ ### Virtual machine migration (VMM) is the movement of some or all parts of virtual Virtual machine migration (VMM) is the movement of some or all parts of virtual ### machine data from one place to a different place, with live migration having no interruption machine data from one place to a different place, with live migration having no interrup ### of the provided services. VMM is organized in two ways: live migration and non-live tion of the provided services. VMM is organized in two ways: live migration and non-live ### migration. In non-live migration, the virtual machine is suspended earlier migration and migration. In non-live migration, the virtual machine is suspended earlier migration and ### conditional on whether the virtual machine needs to remain the running services later conditional on whether the virtual machine needs to remain the running services later ### migration or not. If it is suspended, then the states will be moved into the target site. migration or not. If it is suspended, then the states will be moved into the target site. ### In the case of migration, all the connections are restored after virtual machine continu In the case of migration, all the connections are restored after virtual machine contin ### ation because no open network connection is preserved, as shown in Figure 1. uation because no open network connection is preserved, as shown in Figure 1. **Figure 1. Figure 1.Non-Live Migration. Non-Live Migration.** Live migration is the movement of a virtual machine operating on one physical host Live migration is the movement of a virtual machine operating on one physical host _Sensors 2022, 22, x FOR PEER REVIEW to a different host devoid of interrupting the usual operations or triggering any stoppage to a different host devoid of interrupting the usual operations or triggering any stoppage6 of 14_ or other undesirable causes for the end user, as shown in Figure 2. or other undesirable causes for the end user, as shown in Figure 2. 
In live migration, memory data migration and network connection continuity are two problems. However, a few further challenges are associated with the migration of dynamic virtual machines, such as the consideration of multiple hosts and multiple virtual machines [49]. Other challenges include memory data migration, storage data migration, and network connection connectivity [42].

### 4. System Methodology

The overall system methodology is shown in Figure 3. It consists of the software layer of the system, which is tied to local as well as global management modules. Local managers represent individual nodes as a component of the VMM. Their main purpose is to continuously monitor all the nodes' CPU utilization, then adjust all resources that are needed by a virtual machine, and finally decide about the migration timing and placement of a virtual machine, as shown in point 4 of Figure 3.
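The local/global split described above, and itemized in the bullets after Figure 3, can be summarized in a few lines of code. The sketch below is our own illustration of the idea (the class and method names are invented, not an API from the paper): local managers report per-node CPU utilization, and the global manager flags over- and underloaded hosts as migration sources.

```java
import java.util.ArrayList;
import java.util.List;

/** Illustrative sketch of the local/global manager split (requires Java 16+ for records). */
public class ManagerSketch {
    record HostReport(int hostId, double cpuUtilization) {}

    /** Local manager: observes one node and reports its CPU utilization. */
    static HostReport observe(int hostId, double usedMips, double totalMips) {
        return new HostReport(hostId, usedMips / totalMips);
    }

    /** Global manager: collects all reports and flags hosts whose VMs should move. */
    static List<Integer> selectMigrationSources(List<HostReport> reports,
                                                double upper, double lower) {
        List<Integer> sources = new ArrayList<>();
        for (HostReport r : reports) {
            // Overloaded hosts shed VMs; underloaded hosts are drained so the
            // node's power mode can be switched (point 5 of Figure 3).
            if (r.cpuUtilization() > upper || r.cpuUtilization() < lower) {
                sources.add(r.hostId());
            }
        }
        return sources;
    }

    public static void main(String[] args) {
        List<HostReport> reports = List.of(
                observe(1, 950, 1000),   // overloaded node
                observe(2, 120, 1000),   // underloaded node
                observe(3, 500, 1000));  // healthy node
        System.out.println(selectMigrationSources(reports, 0.9, 0.2)); // prints [1, 2]
    }
}
```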
Each node is divided into the number of VMs represented as 5 that are managed by their local manager for mi divided into the number of VMs represented as 5 that are managed by their local manager for gration presented by 4. The global manager issues commands for the optimisation of the VM as migration presented by 4. The global manager issues commands for the optimisation of the VMsignments shown in 3. assignments shown in 3. ### The global manager represents a master node to gather information from all local • managers to preserve the total layout of the consumption of related resources, as shown in point 2 of Figure 3. The global manager provided instructions for the optimization of virtual machine • positioning, as shown in point 3 of Figure 3. • The main function of VMMs is to resize and migrate the virtual machines and shift the power modes of the nodes, as presented in point 5 of Figure 3. 5. Performance Analysis and Discussion Some tests have been conducted on CloudSim simulation settings to determine differ- ent characterizations of resource optimization. All these tests were executed on the same datasets by applying “Eclipse Luna and Java IDE Developers. 283 MB; 144,793 DOWN- LOADS”. Different optimization techniques have been used, namely dynamic voltage and frequency techniques, non-power awareness, and static virtualization techniques. These tests have been designed and carried out on a data set from World of WarCraft that is a mas- sively multiplayer online games (MMOs) game that is multi-location multi-environment. Test environments consist of multiple avatars over 3.5 years collected from an online cloud environment. This helps to test the limits of resource optimization for cloud environments for different features, such as energy optimization. service level agreement, service level ----- _Sensors 2022, 22, 8384_ 7 of 13 ### agreement violations, virtualization, host timing, etc. Virtualization techniques will be used for the management of load for virtual machines (VMs) that are over or underloaded in the system, and relocation of these will be performed based on techniques such as minimum migration time (MMT), maximum correlation (MC), and minimum utilization (MU). DVFS, non-power aware (NPA), and static threshold virtualization technique (STVM) techniques will be compared in the same environment. For STVM techniques, defined resources are used in terms of random-access memory (RAM), bandwidth, storage, and input-output file size, whereas in dynamic technique, resources are allocated based on central processing unit voltage and frequency fluctuations. Different evaluation metrics will be used to gauge the performance of the proposed system. Initially, the tests are divided into different techniques for example DVFS, NPA, and STVM. The reason for dividing them into sub-techniques is to see how the proposed system will behave under different conditions. Test environment and workload are standard for all methods. All these proposed methods will be measured against certain defined parameters such as energy consumption, VM selection time, VM relocation time, host selection meantime, and service level agreement violations. These matrices will help to determine which technique will perform better under static and dynamic workloads in the proposed test environment. The comparison method will also help to determine which technique performs better for energy saving and resource optimization for small and large servers placed globally. 
A test has been carried out to distinguish how dynamic frequency scaling will behave with non-power-aware techniques for the same workload. The results in Figure 4 are plotted using the reality check method. The results show that the non-power-aware method consumes more power compared to the dynamic voltage and frequency methods. DVFS shows a linear trend for energy consumption and less consumption of power. The DVFS method results in increased profits and minimum SLAs per host compared to the NPA technique. However, using NPA with the same host numbers and fixed millions _Sensors 2022, 22, x FOR PEER REVIEW of instructions per second (MIPS) consumes more energy in the setup, emitting higher8 of 14_ ### CO2 emissions. **_Energy Comparion_** **100%** **80%** **60%** **40%** **20%** **0%** DVFS NPA **Hosts** **-20%** **Figure 4.Figure 4. Illustrations of Energy Utilization in a Data Center.Illustrations of Energy Utilization in a Data Center.** ### A similar test is further extended, and the static threshold virtualization technique (STVM) has been added to determine the energy consumption. In these experimental results, as shown in Figure 5, three virtualization techniques were used to relocate the **_Energy Comparion_** **100%** **80%** **60%** **40%** **20%** **0%** DVFS NPA **Hosts** **-20%** ----- _Sensors 2022, 22, 8384_ 8 of 13 **0%** DVFS NPA ### virtual machines for overloaded and underloaded hosts. This relocation of virtual machinesHosts is done using minimum migration time (MMT), minimum correlation (MC), and maximum-20% utilization (MU) in a static threshold environment. **Figure 4. Illustrations of Energy Utilization in a Data Center.** **Figure 5.Figure 5. Evaluation of Energy Utilization in the Recommended System.Evaluation of Energy Utilization in the Recommended System.** ### In STVM, higher, and lesser threshold boundaries are specified for any test envi-In STVM, higher, and lesser threshold boundaries are specified for any test environ- ronment. In the static threshold technique, MC has less energy consumption comparedment. In the static threshold technique, MC has less energy consumption compared to the to the MU or MMT method. When compared with the dynamic voltage and frequencyMU or MMT method. When compared with the dynamic voltage and frequency tech niques, the results are different. Static threshold behaves better for small workloads as ### techniques, the results are different. Static threshold behaves better for small workloads upper and lower limits are definable for required parameters. In comparison to the dy ### as upper and lower limits are definable for required parameters. In comparison to the namic workload environment, DVFS again proves to have less service level agreement ### dynamic workload environment, DVFS again proves to have less service level agreement violation (SLAV) and maintains higher SLAs, resulting in a better quality of service and ### violation (SLAV) and maintains higher SLAs, resulting in a better quality of service and better user experience compared to the NPA method. It can also be concluded that STVM virtual machine relocation methods are supported with smaller workloads, which verifies the theoretical concept. All three techniques are used to compare the execution times for three techniques for different levels of hosts with the same configuration setup in Figure 6. Virtual machine selection, relocation, and host selection time remained similar for DVFS and NPA. 
**0%** DVFS NPA ### virtual machines for overloaded and underloaded hosts. This relocation of virtual machinesHosts is done using minimum migration time (MMT), minimum correlation (MC), and maximum-20% ----- p _Sensors 2022, 22, 8384_ All three techniques are used to compare the execution times for three techniques for 9 of 13 different levels of hosts with the same configuration setup in Figure 6. Virtual machine selection, relocation, and host selection time remained similar for DVFS and NPA. **_DVFS_** **_0.03_** **_NPA_** **_0.025_** **_MMT_** **_0.02_** **_MC_** **_MU_** **_0.015_** **_0.01_** **_0.005_** **_0_** **VM Selection** **VM Relocation** **Host Selection** **Mean Time** **Mean Time** **Mean Time** **Figure 6. Virtual Machine Performance Time for Every Host.** **Figure 6. Virtual Machine Performance Time for Every Host.** ### MC has the highest VM selection time in a static environment, and MC takes more MC has the highest VM selection time in a static environment, and MC takes more ### time for VM relocation when compared with other techniques. In a static environment, all time for VM relocation when compared with other techniques. In a static environment, all ### three techniques have similar host selection meantime because of defined threshold limits three techniques have similar host selection meantime because of defined threshold limits ### as compared to a dynamic environment. The results also support the theoretical concept as compared to a dynamic environment. The results also support the theoretical concept ### that no relocation of VMs is done for DVFS, and resource optimization is done using centralthat no relocation of VMs is done for DVFS, and resource optimization is done using cen- processing unit (CPU) voltage and frequency methods.tral processing unit (CPU) voltage and frequency methods. If a proper virtualization technique is selected, downtime in the network can be re-If a proper virtualization technique is selected, downtime in the network can be ### reduced for overloaded and underloaded environments. The results in Figureduced for overloaded and underloaded environments. The results in Figure 7 show that 7 show that in the STVM method, MC has the lowest number of virtual machines that are migrated,in the STVM method, MC has the lowest number of virtual machines that are migrated, whereas maximum utilization has the highest number of migrations. NPA and DVFS dowhereas maximum utilization has the highest number of migrations. NPA and DVFS do _Sensors 2022, 22, x FOR PEER REVIEW not carry any VM migrations, which second the theoretical concept of dynamic voltage andnot carry any VM migrations, which second the theoretical concept of dynamic voltage 10 of 14_ ### frequency scaling and non-power aware techniques.and frequency scaling and non-power aware techniques Service level agreement and service level agreement degradation were administered for all three techniques. DVFS has a minimum service level degradation when compared to the rest of the techniques. NPA has the highest number of SLAV. If better service quality is required, fewer SLAV methods need to be selected. The MMT technique needs to be selected for a better user experience, as this has a minimum number of SLAVs and SLAs for the static threshold environment, as shown in Figure 8. **Figure 7.Figure 7. Sum of Virtual Machine Migrations.Sum of Virtual Machine Migrations.** ### Service level agreement and service level agreement degradation were administered for all three techniques. 
DVFS has a minimum service level degradation when compared to the rest of the techniques. NPA has the highest number of SLAV. If better service quality is required, fewer SLAV methods need to be selected. The MMT technique needs to be for all three techniques. DVFS has a minimum service level degradation when compared to the rest of the techniques. NPA has the highest number of SLAV. If better service quality is required, fewer SLAV methods need to be selected. The MMT technique needs to be selected for a better user experience, as this has a minimum number of SLAVs and SLAs for the static threshold environment, as shown in Figure 8. ----- _Sensors 2022, 22, 8384_ 10 of 13 ### selected for a better user experience, as this has a minimum number of SLAVs and SLAs for the static threshold environment, as shown in Figure 8. **Figure 7. Sum of Virtual Machine Migrations.** ### selected for a better user experience, as this has a minimum number of SLAVs and SLAs **Figure 8.Figure 8. Analysis of the Service Level Agreement Violation.Analysis of the Service Level Agreement Violation.** ### In dynamic environments, DVFS has less energy consumption associated with NPAIn dynamic environments, DVFS has less energy consumption associated with NPA methods. In a static environment, MMT has the highest number of host shutdowns, as VMsmethods. In a static environment, MMT has the highest number of host shutdowns, as are selected and relocated for loaded hosts to save resources and energy. MMT, therefore,VMs are selected and relocated for loaded hosts to save resources and energy. MMT, also has less mean and standard deviation time in a static environment compared to othertherefore, also has less mean and standard deviation time in a static environment com pared to other virtual machine relocation techniques. ### virtual machine relocation techniques. Therefore, the overall detailed analysis of the proposed system is shown in Figure 9. ### Therefore, the overall detailed analysis of the proposed system is shown in Figure 9. So, depending on whether the test environment is dynamic or static, resource optimiza ### So, depending on whether the test environment is dynamic or static, resource optimization, tion, service quality, and better user experience can be achieved if proper methods are ### service quality, and better user experience can be achieved if proper methods are selected selected for loaded hosts in a cloud environment. Proper selection of optimization tech ### for loaded hosts in a cloud environment. Proper selection of optimization techniques niques will help in energy and resource optimization for large-scale servers that are placed _Sensors 2022, 22, x FOR PEER REVIEW will help in energy and resource optimization for large-scale servers that are placed and11 of 14_ and operating globally. ### operating globally. **_Detailed Analysis of the Proposed System_** **DVFS** **NPA** **900** **800** **MMT** **700** **MC** **600** **MU** **500** **400** **300** **200** **100** **0** **Energy** **Concumption** **Host Shutdown** **Mean Time** **Before Host** **St Dev Time** **Shutdown** **Before Host** **Shutdown** **Figure 9.Figure 9. Overall detailed evaluation of the developed system.Overall detailed evaluation of the developed system.** **DVFS** **NPA** **MMT** **MC** **MU** **100** **0** **Energy** **Concumption** **Host Shutdown** **Mean Time** **6 Conclusions** ----- _Sensors 2022, 22, 8384_ 11 of 13 ### 6. 
### 6. Conclusions

Different simulation experiments were designed using the CloudSim simulation environment to test resource optimization in cloud gaming servers. These experiments suggest different resource optimization techniques for large and small servers. Gaming datasets are versatile in nature and consist of different audio, video, avatars, locations, etc. This data versatility helps to challenge resource optimization in terms of energy consumption, execution time, virtual machine relocation, and service level agreement violations for different user levels. From the results, it is evident that different resource optimization techniques need to be selected for under- and overloaded hosts depending on the servers and the type of user data. If the data being processed have defined limits, then the static threshold technique should be used with one of the virtualization techniques discussed above. In a dynamic environment with multiple users and a large pool of resources, dynamic resource optimization behaves better. Therefore, for large servers, DVFS saves more energy, has fewer service level agreement violations, and results in a better quality of service and experience. In the future, this work will be enhanced to explore new energy-saving techniques and compare them with the current methods. This work will also be extended to other domains of computing, for example, the Internet of Things (IoT), Big Data, and Artificial Intelligence (AI).

**Author Contributions:** Conceptualization, M.H., B.A. and M.U.K.; methodology, M.H., B.A., G.A. and M.U.K.; software, B.A.; validation, M.H., B.A. and G.A.; formal analysis, M.H., B.A., A.A. (Ahmed Alwakeel), A.A. (Ahd Aljarf) and M.A.; investigation, M.A., L.A.A., A.A. (Ahmed Alwakeel) and A.A. (Ahd Aljarf); data curation, M.H., B.A. and A.A. (Ahd Aljarf); writing—original draft preparation, M.H., B.A. and G.A.; writing—review and editing, M.H., B.A., G.A., M.A., L.A.A., A.A. (Ahmed Alwakeel), A.A. (Ahd Aljarf) and M.U.K.; visualization, B.A. and G.A.; supervision, M.H.; project administration, M.H. and B.A.; funding acquisition, M.H. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Data will be available upon request through the correspondence email.

**Acknowledgments:** We acknowledge the support of the University of Tabuk, Saudi Arabia.

**Conflicts of Interest:** The authors declare no conflict of interest.

### References

1. Huber, N.; von Quast, M.; Hauck, M.; Kounev, S. Evaluating and Modeling Virtualization Performance Overhead for Cloud Environments. CLOSER 2011, 11, 563–573.
2. Khan, F.; Ahmad, S.; Gürüler, H.; Cetin, G.; Whangbo, T.; Kim, C.-G. An Efficient and Reliable Algorithm for Wireless Sensor Network. Sensors 2021, 21, 24. [CrossRef](http://doi.org/10.3390/s21248355) [PubMed]
3. Khan, F.; Khan, A.W.; Shah, K.; Qasim, I.; Habib, A. An Algorithmic Approach for Core Election in Mobile Ad-hoc Network. J. Internet Technol. 2019, 20, 4.
4. Wang, L.; Tao, J.; Kunze, M.; Castellanos, A.C.; Kramer, D.; Karl, W. Scientific Cloud Computing: Early Definition and Experience. In Proceedings of the 2008 10th IEEE International Conference on High Performance Computing and Communications, Dalian, China, 25–27 September 2008; pp. 825–830. [CrossRef](http://doi.org/10.1109/HPCC.2008.38)
5. Cheng, L.; Tachmazidis, I.; Kotoulas, S.; Antoniou, G. Design and evaluation of small–large outer joins in cloud computing environments. J. Parallel Distrib. Comput. 2017, 110, 2–15. [CrossRef](http://doi.org/10.1016/j.jpdc.2017.02.007)
6. Wu, J.; Guo, S.; Li, J.; Zeng, D. Big Data Meet Green Challenges: Greening Big Data. IEEE Syst. J. 2016, 10, 873–887. [CrossRef](http://doi.org/10.1109/JSYST.2016.2550538)
7. Khan, F.; Gul, T.; Ali, S.; Rashid, A.; Shah, D.; Khan, S. Energy aware cluster-head selection for improving network life time in wireless sensor network. Sci. Inf. Conf. 2018, 857, 581–593.
8. Osman, S.; Subhraveti, D.; Su, G.; Nieh, J. The design and implementation of Zap: A system for migrating computing environments. ACM SIGOPS Oper. Syst. Rev. 2002, 36, 361–376. [CrossRef](http://doi.org/10.1145/844128.844162)
9. Dillon, T.; Wu, C.; Chang, E. Cloud Computing: Issues and Challenges. In Proceedings of the 2010 24th IEEE International Conference on Advanced Information Networking and Applications, Perth, Australia, 20–23 April 2010; pp. 27–33. [CrossRef](http://doi.org/10.1109/AINA.2010.187)
10. Bianchini, R.; Rajamony, R. Power and energy management for server systems. Computer 2004, 37, 68–76. [CrossRef](http://doi.org/10.1109/MC.2004.217)
11. Jiang, J.W.; Lan, T.; Ha, S.; Chen, M.; Chiang, M. Joint VM placement and routing for data center traffic engineering. In Proceedings of the 2012 Proceedings IEEE INFOCOM, Orlando, FL, USA, 25–30 March 2012; pp. 2876–2880. [CrossRef](http://doi.org/10.1109/INFCOM.2012.6195719)
12. Buyya, R.; Yeo, C.S.; Venugopal, S.; Broberg, J.; Brandic, I. Cloud computing and emerging IT platforms: Vision, hype, and reality for delivering computing as the 5th utility. Future Gener. Comput. Syst. 2009, 25, 599–616. [CrossRef](http://doi.org/10.1016/j.future.2008.12.001)
13. Khan, F.; Tarimer, I.; Taekeun, W. Factor Model for Online Education during the COVID-19 Pandemic Using the IoT. Processes 2022, 10, 7. [CrossRef](http://doi.org/10.3390/pr10071419)
14. Khan, F.; Zahid, M.; Gürüler, H.; Tarımer, İ.; Whangbo, T. An Efficient and Reliable Multicasting for Smart Cities. Mathematics 2022, 10, 3686. [CrossRef](http://doi.org/10.3390/math10193686)
15. In the data center, power and cooling costs more than the IT equipment it supports. Electron. Cool. 2007, 13, 24. Available online: https://www.electronics-cooling.com/2007/02/in-the-data-center-power-and-cooling-costs-more-than-the-it-equipment-it-supports/ (accessed on 31 May 2022).
16. Khan, F.; Khan, A.W.; Khan, S.; Qasim, I.; Habib, A. A secure core-assisted multicast routing protocol in mobile ad-hoc network. J. Internet Technol. 2020, 21, 375–383.
17. Fan, X.; Weber, W.-D.; Barroso, L.A. Power provisioning for a warehouse-sized computer. ACM SIGARCH Comput. Archit. News 2007, 35, 13–23. [CrossRef](http://doi.org/10.1145/1273440.1250665)
18. Barham, P. Xen and the Art of Virtualization. In Proceedings of the 19th ACM Symposium on Operating Systems Principles, Bolton Landing, NY, USA, 19–22 October 2003; ACM Press: New York, NY, USA, 2003.
19. Ahmad, S.; Mehmood, F.; Khan, F.; Whangbo, T.K.
Architecting Intelligent Smart Serious Games for Healthcare Applications: A Technical Perspective. Sensors 2022, 22, 810. [CrossRef](http://doi.org/10.3390/s22030810)
20. Amiri, M.; Osman, H.A.; Shirmohammadi, S. Resource optimization through hierarchical SDN-enabled inter data center network for cloud gaming. In Proceedings of the 11th ACM Multimedia Systems Conference, New York, NY, USA, 27 May 2020; pp. 166–177. [CrossRef](http://doi.org/10.1145/3339825.3391868)
21. Amiri, M.; Osman, H.A.; Shirmohammadi, S.; Abdallah, M. Toward Delay-Efficient Game-Aware Data Centers for Cloud Gaming. ACM Trans. Multimed. Comput. Commun. Appl. 2016, 12, 71. [CrossRef](http://doi.org/10.1145/2983639)
22. Cai, W.; Shea, R.; Huang, C.Y.; Chen, K.T.; Liu, J.; Leung, V.C.M.; Hsu, C.H. A Survey on Cloud Gaming: Future of Computer Games. IEEE Access 2016, 4, 7605–7620. [CrossRef](http://doi.org/10.1109/ACCESS.2016.2590500)
23. Chen, Q.; Grosso, P.; van der Veldt, K.; de Laat, C.; Hofman, R.; Bal, H. Profiling Energy Consumption of VMs for Green Cloud Computing. In Proceedings of the 2011 IEEE Ninth International Conference on Dependable, Autonomic and Secure Computing, Sydney, Australia, 12–14 December 2011; pp. 768–775. [CrossRef](http://doi.org/10.1109/DASC.2011.131)
24. Calheiros, R.N.; Ranjan, R.; Beloglazov, A.; de Rose, C.A.; Buyya, R. CloudSim: A toolkit for modeling and simulation of cloud computing environments and evaluation of resource provisioning algorithms. Softw. Pract. Exp. 2011, 41, 23–50. [CrossRef](http://doi.org/10.1002/spe.995)
25. Khan, F.; Abbas, S.; Khan, S. An efficient and reliable core-assisted multicast routing protocol in mobile Ad-Hoc network. Int. J. Adv. Comput. Sci. Appl. 2016, 7, 231–242. [CrossRef](http://doi.org/10.14569/IJACSA.2016.070533)
26. Nathuji, R.; Schwan, K. Virtualpower: Coordinated power management in virtualized enterprise systems. ACM SIGOPS Oper. Syst. Rev. 2007, 41, 265–278. [CrossRef](http://doi.org/10.1145/1323293.1294287)
27. Kusic, D.; Kephart, J.O.; Hanson, J.E.; Kandasamy, N.; Jiang, G. Power and performance management of virtualized computing environments via lookahead control. Clust. Comput. 2009, 12, 1–15. [CrossRef](http://doi.org/10.1007/s10586-008-0070-y)
28. Srikantaiah, S.; Kansal, A.; Zhao, F. Energy aware consolidation for cloud computing. In Proceedings of the 2008 Conference on Power Aware Computing and Systems, San Diego, CA, USA, 7 December 2008; p. 10. Available online: https://www.usenix.org/legacy/event/hotpower08/tech/full_papers/srikantaiah/srikantaiah_html/ (accessed on 20 October 2022).
29. Cardosa, M.; Korupolu, M.R.; Singh, A. Shares and utilities based power consolidation in virtualized server environments. In Proceedings of the 2009 IFIP/IEEE International Symposium on Integrated Network Management, New York, NY, USA, 1–5 June 2009; pp. 327–334. [CrossRef](http://doi.org/10.1109/INM.2009.5188832)
30. Verma, A.; Ahuja, P.; Neogi, A. pMapper: Power and Migration Cost Aware Application Placement in Virtualized Systems. In Middleware 2008; Springer: Berlin/Heidelberg, Germany, 2008; pp. 243–264. [CrossRef](http://doi.org/10.1007/978-3-540-89856-6_13)
31. Gandhi, A.; Harchol-Balter, M.; Das, R.; Lefurgy, C. Optimal power allocation in server farms. ACM Sigmetrics Perform. Eval. Rev. 2009, 37, 157–168.
[CrossRef](http://doi.org/10.1145/2492101.1555368)
32. Jung, G.; Joshi, K.R.; Hiltunen, M.A.; Schlichting, R.D.; Pu, C. A Cost-Sensitive Adaptation Engine for Server Consolidation of Multitier Applications. In Middleware 2009; Springer: Berlin/Heidelberg, Germany, 2009; pp. 163–183. [CrossRef](http://doi.org/10.1007/978-3-642-10445-9_9)
33. Zhu, X.; Young, D.; Watson, B.J.; Wang, Z.; Rolia, J.; Singhal, S.; McKee, B.; Hyser, C.; Gmach, D.; Gardner, R.; et al. 1000 Islands: Integrated Capacity and Workload Management for the Next Generation Data Center. In Proceedings of the 2008 International Conference on Autonomic Computing, Chicago, IL, USA, 2–6 June 2008; pp. 172–181. [CrossRef](http://doi.org/10.1109/ICAC.2008.32)
34. Kumar, S.; Talwar, V.; Kumar, V.; Ranganathan, P.; Schwan, K. vManage: Loosely coupled platform and virtualization management in data centers. In Proceedings of the 6th International Conference on Autonomic Computing, New York, NY, USA, 15–19 June 2009; pp. 127–136. [CrossRef](http://doi.org/10.1145/1555228.1555262)
35. Berral, J.L.; Goiri, I.; Nou, R.; Ferran, J.; Guitart, J.; Gavalda, R.; Torres, J. Towards energy-aware scheduling in data centers using machine learning. In Proceedings of the 1st International Conference on Energy-Efficient Computing and Networking, New York, NY, USA, 14–15 October 2010; pp. 215–224. [CrossRef](http://doi.org/10.1145/1791314.1791349)
36. Arshad, U.; Aleem, M.; Srivastava, G.; Lin, J.C.-W. Utilizing power consumption and SLA violations using dynamic VM consolidation in cloud data centers. Renew. Sustain. Energy Rev. 2022, 167, 112782. [CrossRef](http://doi.org/10.1016/j.rser.2022.112782)
37. Moura, B.M.P.; Schneider, G.B.; Yamin, A.C.; Santos, H.; Reiser, R.H.S.; Bedregal, B. Interval-valued Fuzzy Logic approach for overloaded hosts in consolidation of virtual machines in cloud computing. Fuzzy Sets Syst. 2022, 446, 144–166. [CrossRef](http://doi.org/10.1016/j.fss.2021.03.001)
38. Liu, X.; Wu, J.; Chen, L.; Zhang, L. Energy-aware virtual machine consolidation based on evolutionary game theory. Concurr. Comput. Pract. Exp. 2022, 34, e6830. [CrossRef](http://doi.org/10.1002/cpe.6830)
39. Gharehpasha, S.; Masdari, M.; Jafarian, A. Power efficient virtual machine placement in cloud data centers with a discrete and chaotic hybrid optimization algorithm. Clust. Comput. 2021, 24, 1293–1315. [CrossRef](http://doi.org/10.1007/s10586-020-03187-y)
40. Hussain, M.; Wei, L.-F.; Lakhan, A.; Wali, S.; Ali, S.; Hussain, A. Energy and performance-efficient task scheduling in heterogeneous virtualized cloud computing. Sustain. Comput. Inform. Syst. 2021, 30, 100517. [CrossRef](http://doi.org/10.1016/j.suscom.2021.100517)
41. Katal, A.; Dahiya, S.; Choudhury, T. Energy efficiency in cloud computing data center: A survey on hardware technologies. Clust. Comput. 2022, 25, 675–705. [CrossRef](http://doi.org/10.1007/s10586-021-03431-z)
42. Zhang, F.; Liu, G.; Fu, X.; Yahyapour, R. A Survey on Virtual Machine Migration: Challenges, Techniques, and Open Issues. IEEE Commun. Surv. Tutor. 2018, 20, 1206–1243. [CrossRef](http://doi.org/10.1109/COMST.2018.2794881)
43. Duggan, M.; Duggan, J.; Howley, E.; Barrett, E. A network aware approach for the scheduling of virtual machine migration during peak loads. Clust. Comput. 2017, 20, 2083–2094. [CrossRef](http://doi.org/10.1007/s10586-017-0948-7)
44. Nathan, S.; Bellur, U.; Kulkarni, P.
Towards a comprehensive performance model of virtual machine live migration. In Proceedings of the Sixth ACM Symposium on Cloud Computing, Kohala Coast, HI, USA, 27–29 August 2015; pp. 288–301. 45. Bobroff, N.; Kochut, A.; Beaty, K. Dynamic placement of virtual machines for managing sla violations. In Proceedings of the 2007 10th IFIP/IEEE International Symposium on Integrated Network Management, Munich, Germany, 21–25 May 2007; pp. 119–128. 46. Zhang, W.; Lam, K.T.; Wang, C.L. Adaptive live vm migration over a wan: Modeling and implementation. In Proceedings of the 2014 IEEE 7th International Conference on Cloud Computing, Anchorage, AK, USA, 27 June–2 July 2014; pp. 368–375. 47. Bila, N.; de Lara, E.; Joshi, K.; Lagar-Cavilla, H.A.; Hiltunen, M.; Satyanarayanan, M. Jettison: Efficient idle desktop consolidation with partial VM migration. In Proceedings of the 7th ACM European Conference on Computer Systems, Bern, Switzerland, 10–13 April 2012; pp. 211–224. 48. Liu, H.; He, B. Vmbuddies: Coordinating live migration of multi-tier applications in cloud environments. IEEE Trans. Parallel _[Distrib. Syst. 2014, 26, 1192–1205. [CrossRef]](http://doi.org/10.1109/TPDS.2014.2316152)_ 49. Beloglazov, A.; Buyya, R. Optimal online deterministic algorithms and adaptive heuristics for energy and performance efficient [dynamic consolidation of virtual machines in cloud data centers. Concurr. Comput. Pract. Exp. 2012, 24, 1397–1420. [CrossRef]](http://doi.org/10.1002/cpe.1867) -----
{ "disclaimer": "Notice: Paper or abstract available at https://pmc.ncbi.nlm.nih.gov/articles/PMC9659174, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "CCBY", "status": "GOLD", "url": "https://www.mdpi.com/1424-8220/22/21/8384/pdf?version=1667292344" }
2022
[ "JournalArticle" ]
true
2022-11-01T00:00:00
[ { "paperId": "736f9cc4bd0adad85682481ab74c411518e72610", "title": "Development of a Model for Spoofing Attacks in Internet of Things" }, { "paperId": "edc2118d69cd42e51751a38337b2650f9ae0a0d8", "title": "Utilizing power consumption and SLA violations using dynamic VM consolidation in cloud data centers" }, { "paperId": "b5b9f3424d73a80ef010c3e5928c5c355cc14a8e", "title": "Factor Model for Online Education during the COVID-19 Pandemic Using the IoT" }, { "paperId": "6dc63f26c474942c8a504aa033a5f8aba0586812", "title": "Architecting Intelligent Smart Serious Games for Healthcare Applications: A Technical Perspective" }, { "paperId": "c2f87230368f3f66274e67d80a1afebe51aa19d6", "title": "Energy‐aware virtual machine consolidation based on evolutionary game theory" }, { "paperId": "77a81471325bbd4c3e0528a53585eef455c3f0d5", "title": "An Efficient and Reliable Algorithm for Wireless Sensor Network" }, { "paperId": "95f36a84877001dabe83e3cf701e0db838b527c0", "title": "Energy efficiency in cloud computing data center: a survey on hardware technologies" }, { "paperId": "2b65c5dff56f9e0fc45f9cbf15a2b2ebcb8698a9", "title": "Energy and performance-efficient task scheduling in heterogeneous virtualized cloud computing" }, { "paperId": "2efb9f335df498215368a22e32d931a2f0344a04", "title": "Interval-valued Fuzzy Logic approach for overloaded hosts in consolidation of virtual machines in cloud computing" }, { "paperId": "5df07bb2b87a8e5b568c2c0973c21b6903122c97", "title": "Power efficient virtual machine placement in cloud data centers with a discrete and chaotic hybrid optimization algorithm" }, { "paperId": "b8db853865fafa782900fc6896b97c5ee7d3217d", "title": "Resource optimization through hierarchical SDN-enabled inter data center network for cloud gaming" }, { "paperId": "c4a42142419e5d841e354162856d136d0114bfa9", "title": "A Secure Core-Assisted Multicast Routing Protocol in Mobile Ad-Hoc Network" }, { "paperId": "b6e4b4fbec34c832dc1941e59ac4d39611e86e0c", "title": "An Algorithmic Approach for Core Election in Mobile Ad-hoc Network" }, { "paperId": "2c1247d7dd6c319c134708c05374788316041dcd", "title": "Energy Aware Cluster-Head Selection for Improving Network Life Time in Wireless Sensor Network" }, { "paperId": "0036975a5e51251e3ccde743ecc3aa18a9ae7623", "title": "A Survey on Virtual Machine Migration: Challenges, Techniques, and Open Issues" }, { "paperId": "ebccfd28ff291a790ef4f313a2640435529ae0da", "title": "Design and evaluation of small-large outer joins in cloud computing environments" }, { "paperId": "54937a64210e0946b678618e160a6646fbeac59f", "title": "A network aware approach for the scheduling of virtual machine migration during peak loads" }, { "paperId": "9e0eb64b8085d9b73166775317c7d0e02d80b9b0", "title": "A Survey on Cloud Gaming: Future of Computer Games" }, { "paperId": "f4900599936427fd1c7714cae15ff00590396a7c", "title": "Big Data Meet Green Challenges: Greening Big Data" }, { "paperId": "5e4fedddbfb7fc57ece9fbd77cf3061285291ebb", "title": "Towards a comprehensive performance model of virtual machine live migration" }, { "paperId": "acec0a12a1946279a35b79828cab4f4cb13761fd", "title": "VMbuddies: Coordinating Live Migration of Multi-Tier Applications in Cloud Environments" }, { "paperId": "5723fc37b2a41250faaefdda5ec5b46d314de666", "title": "Adaptive Live VM Migration over a WAN: Modeling and Implementation" }, { "paperId": "85011e8d503e28c098249e935a1ad52a673dc9f0", "title": "Optimal online deterministic algorithms and adaptive heuristics for energy and performance efficient dynamic 
consolidation of virtual machines in Cloud data centers" }, { "paperId": "5ba59323988d329f824175f5a99e587979e08422", "title": "Jettison: efficient idle desktop consolidation with partial VM migration" }, { "paperId": "532b2033ed22b81333cdc5a60c4a545849388ca9", "title": "Joint VM placement and routing for data center traffic engineering" }, { "paperId": "4a82222f4234f980f1ce3a476824ed379cd9c530", "title": "Profiling Energy Consumption of VMs for Green Cloud Computing" }, { "paperId": "30a82a63a339c1e69aac36b23900544fe9ec97bb", "title": "CloudSim: a toolkit for modeling and simulation of cloud computing environments and evaluation of resource provisioning algorithms" }, { "paperId": "04bc9c61c81c9125e39a6c52bee56f794e463a6f", "title": "Cloud Computing: Issues and Challenges" }, { "paperId": "8f54b85fcd5074482a74f32a96e808b5be3a9da7", "title": "Towards energy-aware scheduling in data centers using machine learning" }, { "paperId": "022bc8718b12a4667efbf24babcfaa7032e74397", "title": "A Cost-Sensitive Adaptation Engine for Server Consolidation of Multitier Applications" }, { "paperId": "fe6901f657ea8457cf30a5360916f2f14de58870", "title": "vManage: loosely coupled platform and virtualization management in data centers" }, { "paperId": "ad3705596de246d50745db402cbd046cc482b8d7", "title": "Optimal power allocation in server farms" }, { "paperId": "2a4a6255e8218e29bad093cc58a39fec19d92dd0", "title": "Shares and utilities based power consolidation in virtualized server environments" }, { "paperId": "3e19046c665867bbe557685da60738a40738010a", "title": "Energy aware consolidation for cloud computing" }, { "paperId": "2e7647a07fe21c18ab5b7037de3038157338f1db", "title": "pMapper: Power and Migration Cost Aware Application Placement in Virtualized Systems" }, { "paperId": "b254ad4b9a2735a72e13761ed9076b15147ccbde", "title": "Scientific Cloud Computing: Early Definition and Experience" }, { "paperId": "563f7d19eb0e7e1a7197fb498bf47263b91f8f9c", "title": "Power and performance management of virtualized computing environments via lookahead control" }, { "paperId": "f707ae4755139d83da0ca0482defa0413ce897b5", "title": "1000 Islands: Integrated Capacity and Workload Management for the Next Generation Data Center" }, { "paperId": "69e7ae7d28285e1e85056970b0aeded89d366480", "title": "VirtualPower: coordinated power management in virtualized enterprise systems" }, { "paperId": "b70283748f5edb07efed762f1645481e7b0f88ca", "title": "Dynamic Placement of Virtual Machines for Managing SLA Violations" }, { "paperId": "b44e3a9b0afb925b8cd6477495981d7b73d1d0af", "title": "Power provisioning for a warehouse-sized computer" }, { "paperId": "642029f803ee3bd68303df50d663cea3f4473122", "title": "Power and energy management for server systems" }, { "paperId": "3af27466b0b648aaedd25ea7d087e3954329e0dd", "title": "Xen and the art of virtualization" }, { "paperId": "ddcf503f1ea0d0e0a43cef7bce7819556a11e84f", "title": "An Efficient and Reliable Multicasting for Smart Cities" }, { "paperId": "f13a16d437c9c83984e6f06ab0b157f424f7d7eb", "title": "Towards Delay-Efficient Game-Aware Data Centers For Cloud Gaming" }, { "paperId": "2940e7bba0cf534b5caf6a8f7545b5012d1cdf6d", "title": "An Efficient and Reliable Core-Assisted Multicast Routing Protocol in Mobile Ad-Hoc Network" }, { "paperId": "76aab02968d76197d7373e2f724c76f6463ebdec", "title": "Evaluating and Modeling Virtualization Performance Overhead for Cloud Environments" }, { "paperId": null, "title": "In the data center , power and cooling costs more than the it equipment it 
supports" }, { "paperId": "8fc928bb430d3f72ac876ca156042ad1860acacd", "title": "Article in Press Future Generation Computer Systems ( ) – Future Generation Computer Systems Cloud Computing and Emerging It Platforms: Vision, Hype, and Reality for Delivering Computing as the 5th Utility" }, { "paperId": null, "title": "In the data center, power and cooling costs more than the it equipment it supports" }, { "paperId": "6437561175be151f2a2ce5747443564b54f7140c", "title": "Proceedings of the 5th Symposium on Operating Systems Design and Implementation" } ]
13322
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Engineering", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/024e06bc350d5bb5599bd58a89ee18c3fc3ed122
[ "Computer Science" ]
0.897986
A Data Security Enhanced Access Control Mechanism in Mobile Edge Computing
024e06bc350d5bb5599bd58a89ee18c3fc3ed122
IEEE Access
[ { "authorId": "2106945079", "name": "Yichen Hou" }, { "authorId": "144529280", "name": "S. Garg" }, { "authorId": "2117271124", "name": "Lin Hui" }, { "authorId": "1581170567", "name": "DushanthaNalin K. Jayakody" }, { "authorId": "2068300460", "name": "Rui Jin" }, { "authorId": "2237935030", "name": "M. S. Hossain" } ]
{ "alternate_issns": null, "alternate_names": null, "alternate_urls": [ "http://ieeexplore.ieee.org/servlet/opac?punumber=6287639" ], "id": "2633f5b2-c15c-49fe-80f5-07523e770c26", "issn": "2169-3536", "name": "IEEE Access", "type": "journal", "url": "http://www.ieee.org/publications_standards/publications/ieee_access.html" }
Mobile edge computing, with characteristics of position awareness, mobility support, low latency, decentralization, and distribution, has received widespread attention from industry and academia and has been applied to areas such as intelligent transportation, smart cities, and real-time big data analysis. However, it also brings new security threats, especially data security threats during data access that lead to unauthorized access, alteration, and disclosure of data, affecting the confidentiality and integrity of the data. Therefore, access control, as an important method of ensuring the security of user data during data access, has begun to be applied to mobile edge computing. However, existing access control mechanisms suffer from coarse granularity, poor flexibility and accuracy, and a lack of consideration of internal attacks, and therefore cannot meet the data security needs of practical mobile edge computing applications. In this paper, a data security enhanced Fine-Grained Access Control mechanism (FGAC) is proposed to ensure data security during data access in mobile edge computing. In FGAC, a dynamic fine-grained trusted user grouping scheme based on attributes and metagraph theory is first designed. Secondly, the scheme is combined with the traditional role-based access control mechanism to assign roles to users based on user group credibility. Then, user re-authentication based on attribute matching further verifies whether a user is allowed to perform the requested access operations, achieving fine-grained data protection. Experimental results show that FGAC can effectively identify malicious users and adjust groups accordingly, while achieving fine-grained access control and assuring data security during the data access process in mobile edge computing.
Received June 30, 2020, accepted July 18, 2020, date of publication July 23, 2020, date of current version August 5, 2020. _Digital Object Identifier 10.1109/ACCESS.2020.3011477_

# A Data Security Enhanced Access Control Mechanism in Mobile Edge Computing

YICHEN HOU 1, SAHIL GARG 2,3 (Member, IEEE), LIN HUI 1, DUSHANTHA NALIN K. JAYAKODY 3 (Senior Member, IEEE), RUI JIN 4, AND M. SHAMIM HOSSAIN 5 (Senior Member, IEEE)
1College of Mathematics and Informatics, Fujian Normal University, Fuzhou 350117, China
2École de Technologie Supérieure (ETS), Montreal, QC H3C 1K3, Canada
3School of Computer Science and Robotics, Tomsk Polytechnic University, 634050 Tomsk, Russia
4College of Engineering, Mathematics, and Physical Sciences, University of Exeter, Exeter EX4 4QF, U.K.
5Department of Software Engineering, College of Computer and Information Sciences, King Saud University, Riyadh 11543, Saudi Arabia
Corresponding author: Lin Hui (linhui@fjnu.edu.cn)
This work was supported in part by the Competitive Enhancement Program of the Tomsk Polytechnic University, Russia No VIU-ISHITR-180/2020; and in part by the Researchers Supporting Project number (RSP-2020/32), King Saud University, Riyadh, Saudi Arabia. The associate editor coordinating the review of this manuscript and approving it for publication was Md Zakirul Alam Bhuiyan.

**ABSTRACT** Mobile edge computing, with characteristics of position awareness, mobility support, low latency, decentralization, and distribution, has received widespread attention from industry and academia and has been applied to areas such as intelligent transportation, smart cities, and real-time big data analysis. However, it also brings new security threats, especially data security threats during data access that lead to unauthorized access, alteration, and disclosure of data, affecting the confidentiality and integrity of the data. Therefore, access control, as an important method of ensuring the security of user data during data access, has begun to be applied to mobile edge computing. However, existing access control mechanisms suffer from coarse granularity, poor flexibility and accuracy, and a lack of consideration of internal attacks, and therefore cannot meet the data security needs of practical mobile edge computing applications. In this paper, a data security enhanced Fine-Grained Access Control mechanism (FGAC) is proposed to ensure data security during data access in mobile edge computing. In FGAC, a dynamic fine-grained trusted user grouping scheme based on attributes and metagraph theory is first designed. Secondly, the scheme is combined with the traditional role-based access control mechanism to assign roles to users based on user group credibility. Then, user re-authentication based on attribute matching further verifies whether a user is allowed to perform the requested access operations, achieving fine-grained data protection. Experimental results show that FGAC can effectively identify malicious users and adjust groups accordingly, while achieving fine-grained access control and assuring data security during the data access process in mobile edge computing.

**INDEX TERMS** Mobile edge computing, access control, data security, data confidentiality, data integrity, metagraph theory.

**I. INTRODUCTION**

In recent years, with the development of intelligent mobile terminals such as smartphones, tablets, and various Internet of Things devices, and of mobile communication technologies such as 5G, mobile applications such as face recognition, augmented reality, virtual reality, and live webcasting have become increasingly rich.
Due to constraints such as size, many mobile devices still have relatively scarce computing, storage, network, and energy resources and cannot meet application requirements. To this end, scholars have proposed Mobile Cloud Computing (MCC) [1], which expands the physical resources of a device by migrating tasks to cloud data centers to meet the resource requirements of all kinds of applications. However, with the rapid growth of mobile devices and applications, the mobile cloud computing model has become overly centralized, and the number of server connections is extremely large, which puts huge pressure on servers and the network, resulting in server downtime and excessive network delays that seriously affect the user experience [2]. In view of these problems, the traditional centralized computing model needs to be further optimized and improved, and it is developing towards flattening and marginalization.

**FIGURE 1. Architecture of mobile edge computing.**

In this context, as an emerging technology, Mobile Edge Computing (MEC) [3], [4] integrates the mobile access network with various network services and has become an inevitable product of this development trend. By migrating servers from cloud data centers to the mobile network edge, MEC reduces the physical distance between the mobile terminal and the server. On the one hand, this reduces transmission delay and eases the pressure on the backbone network; on the other hand, it relieves the heavy load otherwise concentrated on central servers. As shown in Fig. 1, a typical MEC system is divided into four layers: the mobile terminal layer, the edge network layer, the edge data center layer, and the core infrastructure layer [5], [6]. In MEC, edge terminal equipment is responsible for data perception and reception and performs some preliminary data processing. The wireless network connects to the edge network, which integrates a variety of communication networks to interconnect the mobile terminals and the sensor networks and to upload data to the edge data center. The edge data center is deployed at the edge of the network and is connected to the cloud center; it performs data fusion processing and, according to the processing results, feeds back information, provides related services, or transfers the processed data to the core infrastructure. Data storage, processing, and access operations are performed at the core infrastructure layer. The MEC architecture built in this way can provide a platform for data analysis in intelligent transportation, smart cities, vehicular networks, and so on.

With the vigorous development of technologies such as 5G, the Internet of Things, and artificial intelligence [7], new service models and services based on mobile edge computing [9] will show explosive growth and generate ''massive'' data [10]. This also brings new security threats to mobile edge computing [11], [12], especially data security threats during data access. These security threats lead to unauthorized access, alteration, and disclosure of data [13], affecting the confidentiality and integrity [8] of the data. Therefore, access control, as an important method of ensuring the security of user data during data access, has begun to be applied to mobile edge computing.
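To make the four-layer architecture described above (Fig. 1) concrete, the following is a minimal sketch of the terminal-to-core data flow. All class and function names are illustrative assumptions, not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class SensedData:
    payload: bytes
    preprocessed: bool = False

def mobile_terminal_layer(raw: bytes) -> SensedData:
    """Perceive/receive data and perform preliminary processing."""
    return SensedData(payload=raw, preprocessed=True)

def edge_network_layer(data: SensedData) -> SensedData:
    """Interconnect terminals and sensors; upload data to the edge data center."""
    return data  # pure transport in this sketch

def edge_data_center_layer(data: SensedData) -> tuple[SensedData, str]:
    """Fuse and process data near the network edge; feed back a service result
    or forward the processed data onward."""
    return data, "service-response"

def core_infrastructure_layer(data: SensedData) -> None:
    """Long-term storage, heavy processing, and access operations."""
    pass

# End-to-end flow: terminal -> edge network -> edge data center -> core.
data = mobile_terminal_layer(b"\x01\x02")
data = edge_network_layer(data)
data, _ = edge_data_center_layer(data)
core_infrastructure_layer(data)
```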
At present, the access control mechanisms used in mobile edge computing are mainly divided into two categories: Role-Based Access Control (RBAC) and Attribute-Based Access Control (ABAC) [14]. However, existing mechanisms suffer from coarse granularity, poor flexibility and accuracy, and a lack of consideration of internal attacks, and thus cannot meet the data security needs of practical MEC applications. To enhance data security, in particular data confidentiality and data integrity, during the data access process, a data security enhanced Fine-Grained Access Control mechanism (FGAC) is proposed. The contributions of this work include:

(1) Combining traditional RBAC with a metagraph-theory-based user grouping strategy and user attributes, FGAC provides a novel role- and attribute-based access control mechanism that achieves fine-grained assurance of data confidentiality and integrity through fine-grained grouping and access-rights settings for users.

(2) In order to realize the fine-grained grouping of users, a dynamic fine-grained trusted user grouping scheme based on user attributes and metagraph theory is proposed. The scheme divides user groups according to the attribute relevance between users and uses metagraph theory to establish trust relationships based on the access behavior between users. At the same time, a user group update module is designed to achieve dynamic adjustment of the users within each group.

(3) In order to reduce the probability of internal attacks and achieve fine-grained data protection, user re-authentication based on attribute matching is proposed. The new authentication mechanism further verifies the match between user attributes and accessed-data attributes after the user passes preliminary identity verification, restricts malicious unauthorized access by authorized users, and realizes fine-grained protection of data.

**II. RELATED WORK**

In order to achieve more secure, efficient, and dynamic access control that meets various application requirements, researchers have recently combined RBAC and ABAC [14] and proposed some improved solutions. Kuhn et al. [17] combined attribute-based and role-based access control schemes for the first time to achieve effective distributed access control and to support dynamic role assignment and permission management. Wang et al. [18] proposed a novel attribute-encryption-based RBAC scheme that provides more flexible access control by introducing user attributes into RBAC to implement attribute-based user role and permission assignment. Mon and Naing [20] provided an attribute- and role-based access control method and formulated corresponding access policies to keep personal data private in the cloud. Barkha and Sahani [21] designed a context-based role activation and permission revocation method; it effectively overcomes the shortcomings of traditional RBAC and ABAC and achieves context awareness, fine granularity, and other advantages. For the SaaS model of cloud computing, Geetha and Anbarasi [22] proposed a role-based and attribute-based Web service access control mechanism that ensures the security of service composition by ranking the possible chains of services based on the user's role and the sensitivity of the related data. Yu et al. [23] combined an attribute encryption algorithm with an FAHP-based user trust evaluation method and proposed an attribute- and user-trust-based RBAC to implement fine-grained dynamic authorization of access control.
Although the existing research results can provide a certain degree of data access security, implementing these schemes generates a lot of additional overhead, so they cannot be directly applied to mobile terminals with limited resources. At the same time, these solutions lack the flexibility to meet the fine-grained data security requirements associated with the different scenarios and multiple services of mobile edge computing, and with the need to ensure that multiple categories of users access different data. Besides, the lack of consideration of internal attacks also makes these methods impossible to apply directly in practice. Therefore, introducing an internal attack defense mechanism and designing a fine-grained, flexible, and accurate access control mechanism that resists internal attacks will be a powerful guarantee for improving the security of mobile edge computing data.

**III. ATTACK MODEL**

In FGAC, all users are divided into different groups, and each user accesses data resources according to the role assigned based on the user group's credibility. We consider collusion attacks and self-promotion attacks initiated by internal attackers. Attackers can increase their access to important resources through collaboration, thereby threatening data security. The specific attacks are defined as follows (a toy simulation of these behaviors is sketched after the list):

- Collusion attack: Multiple attackers cooperate and provide false information to increase the reputation values of malicious users and reduce the reputation values of normal users, thereby affecting the users' security levels.
- Self-promotion attack: An attacker tries to falsely increase its own reputation by providing false information or exploiting calculation loopholes, thereby improving its security level.
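The following is a toy illustration of how dishonest feedback distorts reputation. The simple running-average update rule used here is an assumption made purely for illustration; FGAC's actual updates follow Eqs. (10)-(12) introduced later.

```python
# Colluders rate each other maximally (collusion / self-promotion) and
# badmouth the honest user; under a naive averaging rule both reputations
# drift apart, which is exactly the distortion FGAC tries to resist.

def update(rep: float, feedback: float, alpha: float = 0.1) -> float:
    """Illustrative running-average reputation update (not from the paper)."""
    return (1 - alpha) * rep + alpha * feedback

honest_rep, colluder_rep = 0.5, 0.5
for _ in range(50):
    colluder_rep = update(colluder_rep, 1.0)  # false positive feedback
    honest_rep = update(honest_rep, 0.0)      # false negative feedback

print(round(colluder_rep, 2), round(honest_rep, 2))  # drifts toward 1.0 vs 0.0
```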
**IV. A DATA SECURITY ENHANCED FINE-GRAINED ACCESS CONTROL MECHANISM (FGAC)**

Because existing access control mechanisms have problems such as coarse-grained access control strategies, poor flexibility and accuracy, and a lack of internal attack considerations, and thus cannot meet the data security access requirements of practical mobile edge computing applications, this section proposes a data-security-oriented fine-grained access control mechanism, FGAC. Table 1 shows the main symbols used in this paper and their meanings.

**TABLE 1. Main symbols.**

The overall architecture of FGAC is shown in Fig. 2 and mainly contains two modules: user role assignment and permission assignment. The user role assignment module divides all users into different groups according to the evaluation results of user attribute relevance and then assigns roles to each user group according to the user group's credibility. The permission assignment module re-authenticates users based on attribute matching degree and assigns appropriate permissions to them. FGAC converts the user-role-permission relationship into a user-user group-role-permission relationship: it divides users into different groups according to the users' attribute values and access requirements, assigns corresponding roles and permissions to each user group, and additionally validates the user role by performing re-authentication based on attribute matching degree. It can thereby screen more suitable users for access operations and meet the different access needs of users while ensuring user data security. The constituent elements of FGAC are defined as follows:

1) Users: a collection of data access requesters, denoted as U and defined as

$$U = \{u_1, u_2, \ldots, u_n\}, \; n \in \mathbb{N}; \quad \forall i, j: \; i \neq j \Rightarrow u_i \neq u_j. \quad (1)$$

2) Attribute relevance (AR): the similarity between users' attribute sets. The higher the attribute correlation between users, the closer their functions, access data preferences, and security levels are, and the easier they are to classify into one user group.

3) User group (G): a group formed according to the evaluation results of user attribute relevance. The user group serves as a transition connecting users and roles, forming a user-user group-role authorization method. Users in the same user group have similar functions, security levels, access requirements, and so on.

4) User group credibility: a measure of how trustworthy a user group is. Each user has a security level, and users in the same user group have similar security levels. User group credibility is determined by the minimum security level of the users in the group.

**FIGURE 2. Overall architecture of FGAC.**

5) User group role (Roles): a role is a collection of responsibilities and access rights. In FGAC, role assignment is performed for user groups, and different roles are assigned to user groups with different credibility. At the same time, the user roles within a group are divided into A1-level and A2-level roles according to security level: the highest-level A1 role is responsible for updating the users in the group, while the other roles are responsible for access operations and cannot change the user group's permissions. A role and the role set are denoted as r and R, respectively, and defined as

$$r_i = \{u_{i1}, u_{i2}, \ldots, u_{ik}\}, \; k \in \mathbb{N}; \qquad R = \{r_1, r_2, \ldots, r_m\}, \; m \in \mathbb{N}. \quad (2)$$

6) Permissions: the specific access permissions for different information content. Data owners add attributes to resources and data according to their requirements, thereby restricting access by unauthorized users. Operations are the specific access modes a user can perform, such as readable, modifiable, or denied access.

7) Attribute matching degree (AM): after verifying the user role, the data owner further restricts access and can thereby screen more suitable users for access operations to ensure the security of its own data. The data owner not only requires the user to hold the relevant role to obtain access qualification but also further authenticates the accessing user: the matching degree between the user's attributes and the attributes of the accessed data must be greater than a set threshold before the user is allowed to access the related data.
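The sketch below models the elements just defined as plain data structures. Field names and default values are illustrative assumptions; only the group-credibility rule (minimum member security level, per definition 4) is taken directly from the text.

```python
from dataclasses import dataclass, field

@dataclass
class User:
    uid: str
    attributes: dict[str, float]       # UAS: specialty, access preferences, ...
    security_level: int = 1            # adjusted from the comprehensive reputation
    reputation: float = 0.5            # R_sum, updated after each interaction

@dataclass
class UserGroup:
    members: list[User] = field(default_factory=list)

    def credibility(self) -> int:
        # Definition 4): group credibility is the minimum security level
        # among the users in the group.
        return min(u.security_level for u in self.members)

@dataclass
class Role:
    name: str
    credibility_threshold: int         # groups meeting this threshold get the role

@dataclass
class DataResource:
    owner: str
    sensitivity: int                   # data sensitivity hierarchy (sh)
    required_attributes: dict[str, float]  # attribute weights gamma_j^y
    matching_threshold: float          # Ts_y used during re-authentication
```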
_A. USER GROUPING SCHEME BASED ON ATTRIBUTES AND METAGRAPHS_

In this scheme, the data first needs to be divided into different levels according to the data sensitivity hierarchy (sh). The data sensitivity hierarchy is determined by the data owner; the higher the hierarchy, the greater the need for confidentiality and data security. Secondly, according to the evaluation results of attribute relevance (AR) between users, all users are divided into different groups using metagraph theory [16], [19].

Assume that each user has a set of attributes including specialty, access data preference, security level, etc., denoted as UAS = {uas_1, uas_2, ..., uas_k}. The attribute relevance AR_(i,j) evaluated by user j for user i is calculated as

$$AR_{(i,j)} = R_{(i,j)} \times \tau \times \left( \frac{1}{n} \times \sum_{int=1}^{n} \frac{\left| UAS_i^{int} \cap UAS_j^{int} \right|}{\left| UAS' \right|} \right), \quad \text{s.t. } \frac{\left| UAS_i^{int} \cap UAS_j^{int} \right|}{\left| UAS' \right|} > w, \quad (3)$$

where UAS' is the attribute set used in the current interaction, UAS_i^int and UAS_j^int are the attribute sets used in each interaction between users i and j, respectively, n is the total number of interactions between users i and j, w is the threshold on the proportion of attribute intersection, and R_(i,j) is the reputation of j towards i stored in i's local reputation database. τ is a time factor that determines how much interaction time affects R_(i,j), defined as

$$\tau = \tau_{i:j,T_n} \times \theta_{T_n}, \quad (4)$$

where θ_{T_n} indicates the frequency of historical interactions between users i and j up to time T_n, and τ_{i:j,T_n} is a weighting factor that determines the degree of influence of the distribution of the historical interactions of users i and j on R_(i,j) up to T_n. They are calculated as

$$\theta_{T_n} = 1 - e^{-\frac{\sum_{sh=1}^{|SH|} N_{sh}}{m \times n}}, \quad (5)$$

$$\tau_{i:j,T_n} = \sum_{l=1}^{n} \left( \frac{T_l}{m} \times \frac{l}{n} \right), \quad (6)$$

where N_sh is the number of historical interactions performed by users i and j at data sensitivity hierarchy sh, and m and n are the number of time slots and the period T, respectively.

The user grouping method based on metagraph theory is defined as follows:

1) Construct the metagraph S = < X, E > as a graph specified by its generating set X (the user set) and a set of edges E defined on the generating set.

2) The generating set X represents the users; an edge between meta-nodes (users) represents the trust relationship between them. For example, an edge e = < V_e, W_e > ∈ E indicates that there is a trust relationship between user V_e and user W_e.

3) The weight of an edge e = < V_e, W_e > ∈ E is represented by a pair < ar; wr >, where ar is the attribute correlation between user V_e and user W_e, and wr is the trust relationship between user V_e and user W_e, with value range [0, 1].

**FIGURE 3. User grouping based on attributes and metagraphs.**

As an example, consider the metagraph S = < X, E > in Fig. 3, with generating set X = {x_1, x_2, x_3, x_4, x_5, x_6, x_7} and edge set E = {e_1, e_2, e_3, e_4}, where e_1 = < x_1, x_4 >, e_2 = < x_4, x_6 >, e_3 = < x_3, x_5 >, and e_4 = < x_5, x_7 >. First, X is divided into 4 groups (G_1, G_2, G_3, G_4) according to the attribute correlation between users, where G_1 = {x_1}, G_2 = {x_2, x_3}, G_3 = {x_4, x_5}, and G_4 = {x_6, x_7}. Then, the trust relationships between users are established according to their historical interactions. For example, e_1 < 0.4; 0.7 > indicates that the attribute correlation between user x_1 and user x_4 is 0.4 and that there is a trust relationship between them whose value is 0.7.
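A minimal sketch of the attribute relevance evaluation of Eqs. (3)-(6) follows. Variable names mirror the paper's symbols; the paper leaves implicit how interactions that fail the threshold w are handled, so this sketch simply omits them from the average, and the per-slot value T_l (Eq. (6)) is treated as a given weight.

```python
import math

def theta_Tn(N_sh_per_level: list[int], m: int, n: int) -> float:
    """Eq. (5): interaction-frequency factor theta_{T_n}."""
    return 1.0 - math.exp(-sum(N_sh_per_level) / (m * n))

def tau_weight(T: list[float], m: int, n: int) -> float:
    """Eq. (6): distribution weighting factor tau_{i:j,T_n} over n slots."""
    return sum((T[l - 1] / m) * (l / n) for l in range(1, n + 1))

def attribute_relevance(interactions: list[tuple[set, set]],
                        uas_prime_size: int,
                        reputation: float,   # R_(i,j) from the local database
                        tau: float,          # Eq. (4): tau_{i:j,T_n} * theta_{T_n}
                        w: float) -> float:
    """Eq. (3): AR_(i,j) = R_(i,j) * tau * mean attribute-overlap ratio,
    subject to each interaction's overlap ratio exceeding w."""
    n = len(interactions)
    ratios = []
    for uas_i, uas_j in interactions:
        ratio = len(uas_i & uas_j) / uas_prime_size
        if ratio > w:            # constraint: |intersection| / |UAS'| > w
            ratios.append(ratio)
    if not ratios:
        return 0.0
    return reputation * tau * (sum(ratios) / n)
```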
1) TRUST RELATIONSHIP BETWEEN USERS

According to the evaluation results of attribute relevance, all users are divided into different groups using metagraph theory. Assuming that user u and user u' belong to different groups, the trust relationship between them is expressed as TR(u, u') and is calculated as follows.

(1) When user u and user u' have interacted directly, the direct trust relationship TR^direct_(u,u') between u and u' is calculated as

$$TR^{direct}_{(u,u')} = \frac{1}{|SH|} \times \sum_{sh=i}^{|SH|} \left( \frac{SI^{sh}}{TI^{sh}} \times \xi_{sh} \right), \quad (7)$$

$$\xi = E(\gamma_t), \qquad \gamma_t = \frac{\sum_{j=i}^{|SH|} IA_j}{\sum_{j=1}^{|SH|} IA_j}, \quad t = 1 \ldots N_{slot}, \quad (8)$$

where i is the lowest data sensitivity level, and SI^sh and TI^sh are the number of successful data interactions at sensitivity hierarchy sh and the total number of interactions, respectively. ξ is a weighting factor that determines the degree to which the sensitivity hierarchy (sh) affects TR^direct_(u,u') when the two users interact. γ_t is the ratio between the number of interactions whose sensitivity hierarchy is higher than the currently required hierarchy i and the total number of interactions over all sensitivity hierarchies. IA_j is the number of times the sensitivity hierarchy in a historical interaction is confirmed as j, and N_slot is the number of time slots.

(2) When users u and u' have not interacted directly, assume DirR = {dir_rec_i | i = 1...m} is the set of direct recommenders. A direct recommender u_j has interacted directly with user u' and holds a direct trust evaluation result about u'. The indirect trust relationship TR^indirect_(u,u') between u and u' is then calculated as

$$TR^{indirect}_{(u,u')} = \frac{1}{m} \times \sum_{j=1, u_j \in DirR}^{m} \left( \frac{sl_j}{sl_{max}} \times TR^{direct}_{(u,u_j)} \right), \quad (9)$$

where sl_max is the maximum security level among the direct recommenders in DirR.

Each user then updates the reputation values of the users it has interacted with according to the calculated trust relationship values. Assuming that user i sends an access request to user j, hoping that j provides the corresponding service, the credibility value from j to i is calculated as

$$R_{(i,j)} = UQ_i \times TR_{(i,j)}, \quad (10)$$

where TR_(i,j) is the trust relationship between the current users i and j, and UQ_i is the user qualification of user i in its user group. Each user may have a different status and influence within a group; the higher a user's UQ in the group, the more likely its behavior is to meet the group's standards. Let ḡ be the group; the UQ of user ū in ḡ is defined as

$$UQ = \kappa_1 \times \frac{1}{|\bar{g}|} \times \sum_{u \in \bar{g}, u \neq \bar{u}} AR(\bar{u}, u) + \kappa_2 \times \frac{1}{|\bar{g}|} \times \sum_{u \in \bar{g}, u \neq \bar{u}} TR(\bar{u}, u), \quad (11)$$

with TR(ū, u) = ρ_1 × TR^direct_(ū,u) + ρ_2 × TR^indirect_(ū,u), κ_1 + κ_2 = 1, and ρ_1 + ρ_2 = 1.

Because user i interacts with multiple users, according to the changes in the trust relationships between the data owners and user i and the reputation updates after each interaction, the comprehensive reputation value R^sum_i of user i is calculated as

$$R^{sum}_i = \frac{1}{k_n} \sum_{n=1}^{k_n} SL_j^{k_n} \times \lambda_{sl} \times R_{(i,j)}, \quad (12)$$

where k_n is the total number of interactions between user i and other users, SL_j^{k_n} is the security level of the data owner j during the k_n-th interaction of user i, and λ_sl is the weight given to the reputation value of user i provided by data owners of different security levels. Assuming that the security level is divided into n levels, the security level of user i is assigned according to its comprehensive reputation value: when R^sum_i ∈ [TS_j, TS_{j+1}] is satisfied, the security level of user i is j + 1, where TS_j and TS_{j+1} are the reputation bounds corresponding to the different security levels.
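The sketch below condenses Eqs. (7)-(12) into plain functions. Names mirror the paper's symbols; aggregation details that the extraction leaves ambiguous (e.g., skipping sensitivity levels with no interactions) are handled with simple, clearly marked assumptions.

```python
def direct_trust(SI: dict[int, int], TI: dict[int, int],
                 xi: dict[int, float], sh_min: int, SH: int) -> float:
    """Eq. (7): success ratio per sensitivity level sh_min..SH, weighted
    by xi_sh (Eq. (8)) and averaged over |SH| levels."""
    return sum((SI[sh] / TI[sh]) * xi[sh]
               for sh in range(sh_min, SH + 1)
               if TI.get(sh, 0) > 0) / SH   # assumption: empty levels skipped

def indirect_trust(recommenders: list[tuple[int, float]], sl_max: int) -> float:
    """Eq. (9): recommendations as (sl_j, TR_direct(u, u_j)) pairs from DirR,
    each weighted by the recommender's security level sl_j / sl_max."""
    if not recommenders:
        return 0.0
    return sum((sl / sl_max) * tr for sl, tr in recommenders) / len(recommenders)

def reputation(UQ_i: float, TR_ij: float) -> float:
    """Eq. (10): R_(i,j) = UQ_i * TR_(i,j)."""
    return UQ_i * TR_ij

def security_level(R_sum: float, TS: list[float]) -> int:
    """Map the comprehensive reputation R_sum (Eq. (12)) to a level:
    R_sum in [TS_j, TS_{j+1}) yields level j + 1."""
    for j in range(len(TS) - 1):
        if TS[j] <= R_sum < TS[j + 1]:
            return j + 1
    return len(TS)
```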
2) USER GROUP UPDATE

After the initial grouping of users, assume that user x belongs to user group g. After some access operations, changes in user x's attributes may mean that x no longer meets the requirements of user group g. At this time, user x needs to be comprehensively evaluated to determine whether it still meets group g's requirements.

(1) If the following constraints are met, the original grouping remains unchanged and user x still belongs to user group g:

$$\frac{1}{|G| - 1} \times \sum_{u \in G, u \neq x} AR(u, x) > \theta, \qquad \frac{1}{|G| - 1} \times \sum_{u \in G, u \neq x} TR(u, x) > \theta', \qquad R^{sum}_x > C_G, \quad (13)$$

with TR(u, x) = ρ_1 × TR^direct_(u,x) + ρ_2 × TR^indirect_(u,x) and ρ_1 + ρ_2 = 1, where C_G is the reputation threshold set by the current user group G, and θ and θ' are the thresholds on attribute relevance and trust relationship set by group G, respectively.

(2) If user x does not meet the constraints set by user group g, the user group update module (GUM) is used to regroup user x.

**FIGURE 4. User group update module.**

The user group update module (GUM) mainly provides two functions, as shown in Fig. 4. The first is the redistribution of user groups. This function first integrates the constraints set by all user groups into a list, then calculates the relevant values for user x according to the constraints set by each user group, and finally compares the calculation results with the constraints in the list to assign user x to the corresponding group. The second is the changing of constraints, i.e., of the conditions set for each group in the redistribution list. If a user group has not changed much within a certain period of time, this function periodically updates the constraints set by the user group; if the users within a group have changed so much that the originally set constraints no longer reflect the group's status, the group can immediately submit a constraint update to the user group update module, replacing the group's constraints in the redistribution list.

_B. USER AUTHENTICATION BASED ON ATTRIBUTE MATCHING DEGREE_

When a user requests access to certain data and the data owner has verified that the user's role is qualified to access the data, the owner further authenticates the user by calculating the attribute matching degree. Assume that UcA = {uca_i | i = 1...n} is the set of user attributes corresponding to the data attribute requirements. When user x sends an access request to data owner z, indicating that it wants to access data y, the attribute matching degree of user x and data y is calculated as

$$AM_{(x,y)} = \sum_{j=1, a_j \in UcA}^{n} \gamma_j^y \times uca_j, \quad (14)$$

where γ_j^y is a weighting factor, set by the data owner, that determines the importance of the j-th attribute required by data y. Finally, data owner z compares the attribute matching degree AM_(x,y) of user x and data y with the attribute matching degree threshold Ts_y set for data y. If AM_(x,y) ≥ Ts_y, user x is granted the relevant permissions and is allowed to perform the access operation.
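A minimal sketch of the group-membership check of Eq. (13) and the attribute-matching re-authentication of Eq. (14) follows. It assumes the pairwise AR and TR values against the other group members have already been computed; function names are illustrative.

```python
def still_belongs(group_ARs: list[float], group_TRs: list[float],
                  R_sum_x: float, theta: float, theta_p: float,
                  C_G: float) -> bool:
    """Eq. (13): user x stays in group G only if its mean attribute
    relevance exceeds theta, its mean trust relationship exceeds theta',
    and its comprehensive reputation exceeds the group threshold C_G."""
    n = len(group_ARs)   # the |G| - 1 pairwise values against other members
    return (sum(group_ARs) / n > theta
            and sum(group_TRs) / n > theta_p
            and R_sum_x > C_G)

def attribute_matching(user_attrs: dict[str, float],
                       required: dict[str, float]) -> float:
    """Eq. (14): AM_(x,y) = sum_j gamma_j^y * uca_j over the attributes
    required by data y (gamma are owner-set weights)."""
    return sum(weight * user_attrs.get(name, 0.0)
               for name, weight in required.items())

def authorize(user_attrs: dict[str, float],
              required: dict[str, float], Ts_y: float) -> bool:
    """Grant the access operation only when AM_(x,y) >= Ts_y."""
    return attribute_matching(user_attrs, required) >= Ts_y
```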
Users in the same user group have similar functions, similar security levels, and access requirements, etc. Therefore, role assignment is performed for the entire user group, only the user group When the credibility is greater than the threshold set by the role, users in the user group can obtain the corresponding role. - User authorization strategy When a user wants to access a certain item of data, the data owner will often further set the access rights for the item of data according to his requirements, not just the role constraints. After verifying that the user role is qualified to access the data, the data owner will re-authenticate the user based on the attribute matching degree, and calculate the matching degree between the user attribute and the data attribute. Only when the matching degree of the two attributes is greater than the threshold set by the data owner can the user obtain the corresponding authority, and then access the data for related operations. This can ensure the security of the data owner’s data, and prevent users with relevant roles and attributes who do not meet the requirements from accessing relevant data. The specific implementation process of FGAC is shown in Fig. 5, and the access control process is described as follows: (1) User u sends an access request to a certain data; (2) The data owner performs an authorization check on the access request of user u, first verifying whether the role owned by user u is in the set of roles defined in the data and determining whether user u is qualified to access the data. If the role of user u is in the set of accessible roles of this item of data, step (3) is performed; otherwise, the access request of user u is denied; (3) After the user role is verified, the user re-authentication based on the attribute matching degree is then performed to calculate the matching degree between the user u attribute and the data attribute. If the attribute matching degree of the two meets the threshold defined by the data, the user is granted the corresponding permission to allow user u to perform the access operation; otherwise, the access request of user u is denied; (4) After the user, u’s visit is over, first update the trust relationship between users according to the user’s access behavior, and then update the user’s reputation value to adjust the user’s security level. **FIGURE 5. FGAC implementation process.** **V. SIMULATION VERIFICATION AND ANALYSIS** The experiments in this section mainly verify and analyze the user security and authorization fine-grained aspects. In the Windows 7 environment, the configuration is i7-5500U CPU, 8.0GB memory, 1TB hard disk, and simulation verification using MATLAB2017b. In the experiment, we assume that there are 100 mobile terminal users, among which a certain number of malicious users. Malicious users are not always performing malicious visits, while normal users’ visits are always benign. Among the parameters used in this paper, κ1 and κ2 are the weighting factors of equation (11). We set κ1 and κ2 to 0.4 and 0.6 respectively, which determine the degree of influence of attribute relevance and the trust relationship between users on user qualifications(UQ); ρ1 and ρ2 are the weighting factors in equation (11) and equation (13). We set ρ1 and ρ2 to 0.6 and 0.4 respectively, which determine the degree of influence of the direct and indirect trust relationship between users on the trust relationship(TR). ----- **FIGURE 6. User’s reputation changes.** _A. 
**V. SIMULATION VERIFICATION AND ANALYSIS**

The experiments in this section mainly verify and analyze user security and the fine-grainedness of authorization. The simulations were run with MATLAB R2017b in a Windows 7 environment on a machine with an i7-5500U CPU, 8.0 GB of memory, and a 1 TB hard disk. In the experiments, we assume that there are 100 mobile terminal users, a certain number of whom are malicious. Malicious users do not always perform malicious visits, while normal users' visits are always benign. Among the parameters used in this paper, κ1 and κ2 are the weighting factors of equation (11); we set them to 0.4 and 0.6, respectively, and they determine the degree of influence of attribute relevance and of the trust relationship between users on user qualification (UQ). ρ1 and ρ2 are the weighting factors in equations (11) and (13); we set them to 0.6 and 0.4, respectively, and they determine the degree of influence of the direct and indirect trust relationships on the trust relationship (TR).

**FIGURE 6. User's reputation changes.**

_A. USER SAFETY ANALYSIS_

User security is determined by the user's security level, and the security level is adjusted by updating the trust relationships between users and the users' reputation values after each interaction. The trust relationship between users reflects their historical interactions at the different data sensitivity hierarchies. In Fig. 6(a), it is assumed that two users are in the same user group with equal reputation values. To prevent the malicious user from being excluded from the user group (which would trigger a group update) during the observation, we set the user group reputation threshold C_G = 0. The results in the figure show that the reputation values of the two users diverge significantly over time: on the one hand, when a normal user interacts with other users, its benign behavior causes its reputation value to keep increasing; on the other hand, when a malicious user interacts, its malicious behavior makes its reputation value keep decreasing, which matches our expectation. Fig. 6(b) shows the changes in the reputation value and security level of a user with a high reputation value as its proportion of malicious behavior keeps increasing. As the results show, even a user who performed well in earlier historical interactions sees its reputation value drop as its malicious behavior increases later on, and the user's security level is gradually adjusted from the high level ''1'' to the lower level ''4''; the user's safety is thus re-evaluated.

Besides, based on the historical interactions between users, we compare and evaluate FGAC, TARAS [15], and RBE in terms of user identification accuracy and successful acceptance rate, because they are all role-based access control mechanisms, and TARAS in particular grants users permissions based on an estimate of the dynamic trust relationship between users, similarly to the FGAC mechanism. A minimal computation of these two metrics is sketched after the list.

- User identification accuracy (UIA): the accuracy of identifying normal users and malicious users.
- Successful acceptance rate (SAR): the ratio of the number of accepted access requests that do not meet the security requirements to the total number of access requests.
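The two evaluation metrics reduce to simple ratios; the sketch below assumes the counts come from a simulated interaction log, and its names are illustrative.

```python
def user_identification_accuracy(correct: int, total: int) -> float:
    """UIA: fraction of users (normal and malicious) identified correctly."""
    return correct / total

def successful_acceptance_rate(insecure_accepted: int,
                               total_requests: int) -> float:
    """SAR: accepted requests that did not meet the security requirements,
    as a fraction of all access requests (lower is better)."""
    return insecure_accepted / total_requests
```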
1) USER IDENTIFICATION ACCURACY

First, we compared the user identification accuracy of FGAC and TARAS under proportions of 20% and 30% malicious users with an attack probability of 1, where the attack probability determines how likely a malicious user is to attack: the greater the probability, the higher the frequency of malicious-user attacks. Fig. 7(a) and Fig. 7(b) show the accuracy of identifying normal and malicious users when the proportion of malicious users is 20% and 30%, respectively. As the figures show, as the proportion of malicious users increases, the user identification accuracy of both schemes decreases. At the same time, for a fixed proportion of malicious users (20% or 30%), after a long period of observation and comprehensive evaluation of users, the user identification accuracy of both schemes increases, and the accuracy of the FGAC scheme is higher.

**FIGURE 7. Average UIA with different proportions of malicious users.**

Although both schemes restrict the access of malicious users by setting thresholds, FGAC combines the division of user groups based on attribute correlation with the establishment of trust relationships and sets trust thresholds per user group. The range of users within a group is small and their characteristics are similar, so the users in a group can provide more accurate evaluation references, which improves the accuracy of evaluating users' security levels and makes it easier to detect malicious users and adjust the user groups. Therefore, FGAC's user identification accuracy is slightly higher than that of TARAS. We also compared the user identification accuracy of the two schemes under different malicious-user attack probabilities with the proportion of malicious users fixed at 20%. Fig. 8(a) and Fig. 8(b) show the accuracy of identifying normal and malicious users when the attack probability of malicious users is 30% and 70%, respectively. As the figures show, as the attack probability increases, the likelihood of malicious users being exposed increases accordingly, so the user identification accuracy of both schemes increases. At the same time, regardless of the increase in time or in attack probability, the user identification accuracy of FGAC remains higher than that of TARAS. The reason is that the attribute-relevance-based user group division in FGAC places users with similar security levels into the same group; if there is a malicious user in a group and the proportion of its malicious behavior increases, FGAC can identify the malicious user in time by establishing trust relationships between users and setting a user group trust threshold.

**FIGURE 8. Average UIA with different attack probabilities.**

2) SUCCESSFUL ACCEPTANCE RATE

Fig. 9 compares the successful acceptance rates of the three schemes FGAC, TARAS, and RBE. As the figure shows, as the number of interactions and the proportion of malicious users increase, the successful acceptance rates of all three schemes increase. In general, the successful acceptance rates of FGAC and TARAS are better than that of RBE. As shown in Fig. 9(b), when the proportion of malicious users is 0-20%, the overall successful acceptance rates of the two schemes differ little. As the proportion of malicious users continues to increase, TARAS's successful acceptance rate rises, while FGAC's changes little and remains relatively stable. This is because the establishment of trust relationships between users makes the adjustment of users' security levels more accurate, so that more credible users can be selected during data access. Besides, the attribute-matching-based user re-authentication proposed in FGAC can screen out the users who best match the access requirements based on the users' true attributes and reduce the probability of collusion attacks, which also improves the security of the data access process and keeps FGAC's successful acceptance rate low.

**FIGURE 9. Average SAR.**

_B. AUTHORIZED FINE-GRAINED VERIFICATION_

Authorized fine-grained verification mainly determines whether more fine-grained access control is achieved than with the traditional RBAC model. In the simulation experiment, 7 users are set up, and each user's attribute set includes ID, name, department, job title, work experience, the annual number of operating tables, and security level.
The security level is determined by the user's comprehensive reputation value. Table 2 lists the detailed information of each user. After the preliminary experiment settings, the user group credibility thresholds corresponding to the roles are shown in Table 3, and Table 4 lists the attribute requirements set for data Data_1 and Data_2. The user access results are shown in Table 5.

**TABLE 2. User information.**

**TABLE 3. The credibility of the user group corresponding to the role.**

**TABLE 4. Data attribute requirements.**

**TABLE 5. Access results.**

If users Staff_0 and Staff_3 request access to data Data_1 at the same time, it is first verified whether the two users' roles meet the requirements of Data_1. Here, both users hold the role Role_2, which is consistent with what Data_1 requests. Other attributes are then further verified. Users Staff_0 and Staff_3 are both director physicians of the Department of Neurology, but their work experience and annual numbers of operating tables differ. The matching degrees between the users' attributes and the data attributes can then be calculated according to equation (14). Assume that in Data_1 the weight of work experience is 0.4 and the weight of the annual number of operating tables is 0.6. According to the calculation, user Staff_0 better matches the requirements of Data_1, so user Staff_0 is allowed to perform the access operation and user Staff_3's access request is denied. In addition, if users Staff_6 and Staff_5 request access to data Data_2 at the same time, both users' roles meet Data_2's requirements. Although user Staff_6 and user Staff_5 both belong to internal medicine, user Staff_5 belongs to respiratory medicine, which better matches the requirements of Data_2; after the attribute matching calculation, user Staff_5 is allowed to perform the access operation.

In the traditional RBAC model, users Staff_0 and Staff_3 would both simply be assigned the role Role_2, so in the subsequent data access process the two would have the same permissions. The FGAC scheme proposed in this paper adds the user re-authentication module based on attribute matching degree: according to the match between each user's attribute values and the data attributes, users Staff_0 and Staff_3 end up with different permissions even though they hold the same role, thus enabling more fine-grained authorization that ensures the security of user data.
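The Staff_0 versus Staff_3 comparison can be re-run numerically with Eq. (14) and the stated weights (0.4 for work experience, 0.6 for the annual number of operating tables). The normalized attribute values below are hypothetical, since Table 2's raw numbers are not reproduced here; only the weights come from the text.

```python
weights = {"work_experience": 0.4, "operating_tables": 0.6}
staff_0 = {"work_experience": 0.6, "operating_tables": 0.9}   # hypothetical
staff_3 = {"work_experience": 0.8, "operating_tables": 0.5}   # hypothetical

def am(user: dict[str, float]) -> float:
    """Eq. (14) with the owner-set weights for Data_1."""
    return sum(weights[k] * user[k] for k in weights)

print(am(staff_0))  # 0.78 -> higher matching degree, access granted
print(am(staff_3))  # 0.62 -> lower matching degree, access denied
```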
**VI. CONCLUSION**

Aiming at the problems that existing access control policies have coarse granularity, poor flexibility and accuracy, and a lack of internal attack considerations, and therefore cannot meet the data security access requirements of practical MEC applications, this paper proposes a data security enhanced Fine-Grained Access Control mechanism (FGAC) based on user grouping. First, attribute relevance between users is evaluated, and a dynamic fine-grained trusted user grouping scheme is designed based on the evaluation results and metagraph theory. Then, combined with role-based access control, the scheme assigns roles based on the credibility of user groups and further verifies users based on attribute matching, so as to achieve fine-grained protection of data and reduce the risk of internal attacks. Experimental results show that FGAC can effectively limit the access of malicious users and update user groups in time, and that it ensures the security of users' data by implementing more fine-grained access control. For future work, we intend to introduce blockchain technology into the access control mechanism in mobile edge computing to further address data security issues in the data access process.

**REFERENCES**

[1] J. X. Zhai, ''Research on authentication protocol in mobile cloud computing,'' Ph.D. dissertation, Dept. Elect. Eng., Harvard Univ., JiangSu, China, 2019.
[2] Y. Chen, D. J. Xu, and L. Xiao, ''Survey on network security based on blockchain,'' Telecommun. Sci., vol. 34, no. 3, pp. 10–16, Mar. 2018.
[3] K. Kaur, S. Garg, G. Kaddoum, M. Guizani, and D. N. K. Jayakody, ''A lightweight and privacy-preserving authentication protocol for mobile edge computing,'' in Proc. IEEE Global Commun. Conf. (GLOBECOM), Waikoloa, HI, USA, Dec. 2019, pp. 1–6.
[4] N. Abbas, Y. Zhang, A. Taherkordi, and T. Skeie, ''Mobile edge computing: A survey,'' IEEE Internet Things J., vol. 5, no. 1, pp. 450–465, Feb. 2018.
[5] X. Y. Ma, ''Research on trusted cooperative mechanism based on edge computing,'' M.S. thesis, Beijing Univ. Posts Telecommun., Beijing, China, 2019.
[6] X. Li, S. Liu, F. Wu, S. Kumari, and J. J. P. C. Rodrigues, ''Privacy preserving data aggregation scheme for mobile edge computing assisted IoT applications,'' IEEE Internet Things J., vol. 6, no. 3, pp. 4755–4763, Jun. 2019.
[7] X. Guo, H. Lin, Z. Li, and M. Peng, ''Deep-reinforcement-learning-based QoS-aware secure routing for SDN-IoT,'' IEEE Internet Things J., vol. 7, no. 7, pp. 6242–6251, Jul. 2020.
[8] T. Wang, M. Z. A. Bhuiyan, G. Wang, L. Qi, J. Wu, and T. Hayajneh, ''Preserving balance between privacy and data integrity in edge-assisted Internet of Things,'' IEEE Internet Things J., vol. 7, no. 4, pp. 2679–2689, Apr. 2020.
[9] Y. Mao, C. You, J. Zhang, K. Huang, and K. B. Letaief, ''A survey on mobile edge computing: The communication perspective,'' IEEE Commun. Surveys Tuts., vol. 19, no. 4, pp. 2322–2358, 4th Quart., 2017.
[10] W. S. Shi, H. Sun, J. Cao, Q. Zhang, and W. Liu, ''Edge computing-an emerging computing model for the Internet of everything era,'' J. Comput. Res. Develop., vol. 54, no. 5, pp. 907–924, Feb. 2017.
[11] H. Li, G. Shou, Y. Hu, and Z. Guo, ''Mobile edge computing: Progress and challenges,'' in Proc. 4th IEEE Int. Conf. Mobile Cloud Comput., Services, Eng. (MobileCloud), Oxford, U.K., Mar. 2016, pp. 83–84.
[12] S. Wang, X. Zhang, Y. Zhang, L. Wang, J. Yang, and W. Wang, ''A survey on mobile edge networks: Convergence of computing, caching and communications,'' IEEE Access, vol. 5, pp. 6757–6779, Mar. 2017.
[13] H. Tao, M. Z. A. Bhuiyan, M. A. Rahman, G. Wang, T. Wang, M. M. Ahmed, and J. Li, ''Economic perspective analysis of protecting big data security and privacy,'' Future Gener. Comput. Syst., vol. 98, pp. 660–671, Mar. 2019.
[14] J. L. Zhang, Y. C. Zhao, B. Chen, F. Hu, and K. Zhu, ''Survey on data security and privacy-preserving for the research of edge computing,'' J. Commun., vol. 39, no. 3, pp. 1–21, Mar. 2018.
[15] B. Gwak, J.-H. Cho, D. Lee, and H. Son, ''TARAS: Trust-aware role-based access control system in public Internet-of-Things,'' in Proc. 17th IEEE Int. Conf. Trust, Secur. Privacy Comput. Commun., New York, NY, USA, Aug. 2018, pp. 74–85.
[16] Y. Zhu, B. Li, H. Fu, and Z. Li, ''Core-selecting secondary spectrum auctions,'' IEEE J. Sel. Areas Commun., vol. 32, no. 11, pp. 2268–2279, Nov. 2014.
YICHEN HOU received the bachelor's degree in software engineering from Xinyang Normal University, China, in 2018. She is currently pursuing the master's degree with the School of Mathematics and Information, Fujian Normal University. Her research interests include blockchain, access control, and network security.

SAHIL GARG (Member, IEEE) received the Ph.D. degree from the Thapar Institute of Engineering and Technology, Patiala, India, in 2018. He is currently a Postdoctoral Research Fellow at École de technologie supérieure, Université du Québec, Montréal, Canada. He has many research contributions in the areas of machine learning, big data analytics, security and privacy, the Internet of Things, and cloud computing. He has over 60 publications in highly ranked journals and conferences, including 40+ top-tier journal papers and 20+ reputed conference articles. He was awarded the IEEE ICC Best Paper Award in 2018 at Kansas City, Missouri. He is currently a Managing Editor of Springer's Human-centric Computing and Information Sciences (HCIS) journal. He is also an Associate Editor of the IEEE Network Magazine, IEEE Systems Journal, Elsevier's Applied Soft Computing, Elsevier's Future Generation Computer Systems (FGCS), and Wiley's International Journal of Communication Systems (IJCS). In addition, he serves as the Workshops and Symposia Officer for the IEEE ComSoc Emerging Technology Initiative on Aerial Communications. He has guest-edited a number of special issues in top-cited journals, including IEEE T-ITS, IEEE TII, IEEE IoT Journal, IEEE Network, and Future Generation Computer Systems (Elsevier). He serves/served as workshop chair/publicity co-chair for several IEEE/ACM conferences, including IEEE INFOCOM, IEEE GLOBECOM, IEEE ICC, and ACM MobiCom. He is a member of ACM.

LIN HUI received the Ph.D. degree in computing system architecture from the College of Computer Science, Xidian University, China, in 2013. He is currently a Professor with the College of Mathematics and Informatics, Fujian Normal University, Fuzhou, China, where he is also an M.E. Supervisor.
He has published more than 50 papers in international journals and conferences. His research interests include mobile cloud computing systems, blockchain, and network security.

DUSHANTHA NALIN K. JAYAKODY (Senior Member, IEEE) received the M.Sc. degree (Hons.) in electronics and communications engineering from Eastern Mediterranean University, Turkey (under the University Graduate Scholarship), and the Ph.D. degree in electronics and communications engineering from University College Dublin, Ireland, under the supervision of Prof. M. Flanagan (Science Foundation Ireland Grant). From 2014 to 2016, he held postdoctoral positions at the Coding and Information Transmission Group, University of Tartu, Estonia, and the University of Bergen, Norway. Since 2016, he has been a Professor with the School of Computer Science and Robotics, National Research Tomsk Polytechnic University, Russia. He has held various visiting positions at Texas A&M University, Qatar, the University of Jyväskylä, Finland, and the National Institute of Technology, Trichy, India. He has served as a Session Chair or Technical Program Committee Member for various international conferences, such as IEEE PIMRC 2014-2020, IEEE WCNC 2014-2020, and IEEE VTC 2015-2019.

RUI JIN received the bachelor's degree in computer science from the University of Science and Technology Beijing. She is currently pursuing the Ph.D. degree in computer science with the University of Exeter, U.K. Her research interests include network security, machine learning, and mobile edge computing.

M. SHAMIM HOSSAIN (Senior Member, IEEE) is currently a Professor with the Department of Software Engineering, College of Computer and Information Sciences, King Saud University, Riyadh, Saudi Arabia. He is also an Adjunct Professor with the School of Electrical Engineering and Computer Science, University of Ottawa, Canada. He has authored and coauthored more than 260 publications, including refereed journal papers (200+ SCI/ISI-indexed papers, 100+ IEEE/ACM Transactions/journal papers, 10+ ESI highly cited papers, 1 hot paper), conference papers, books, and book chapters. Recently, he co-edited a book on "Connected Health in Smart Cities", published by Springer. He has served as co-chair, general chair, workshop chair, publication chair, and TPC member for over 12 IEEE and ACM conferences and workshops. He is currently the co-chair of the 3rd IEEE ICME Workshop on Multimedia Services and Tools for Smart-Health (MUST-SH 2020). He is a recipient of a number of awards, including the Best Conference Paper Award, the 2016 ACM Transactions on Multimedia Computing, Communications and Applications (TOMM) Nicolas D. Georganas Best Paper Award, and the 2019 King Saud University Scientific Excellence Award (Research Quality). He is on the editorial boards of the IEEE Transactions on Multimedia, IEEE Network, IEEE Multimedia, IEEE Wireless Communications, IEEE Access, the Journal of Network and Computer Applications (Elsevier), and the International Journal of Multimedia Tools and Applications (Springer). He also presently serves as a lead guest editor of IEEE Network, ACM Transactions on Internet Technology, ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM), and Multimedia Systems Journal.
Previously, he served as a guest editor of IEEE Communications Magazine, IEEE Network, the IEEE Transactions on Information Technology in Biomedicine (currently JBHI), the IEEE Transactions on Cloud Computing, International Journal of Multimedia Tools and Applications (Springer), Cluster Computing (Springer), Future Generation Computer Systems (Elsevier), Computers and Electrical Engineering (Elsevier), Sensors (MDPI), and International Journal of Distributed Sensor Networks. He is a senior member of the ACM.
{ "disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.1109/access.2020.3011477?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1109/access.2020.3011477, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "CCBY", "status": "GOLD", "url": "https://ieeexplore.ieee.org/ielx7/6287639/8948470/09146646.pdf" }
2020
[ "JournalArticle" ]
true
null
[ { "paperId": "ad42f67fda2b83b46ab0d07d137c49c148c1ebc9", "title": "Deep-Reinforcement-Learning-Based QoS-Aware Secure Routing for SDN-IoT" }, { "paperId": "e240ed44143e68d8a61954a5df230b11657e0f5f", "title": "Preserving Balance Between Privacy and Data Integrity in Edge-Assisted Internet of Things" }, { "paperId": "9de4028b8e63a9ab7b7dd13ec4114c04a282a7f5", "title": "Economic perspective analysis of protecting big data security and privacy" }, { "paperId": "0cc5e894cdba0f33c56877e54e1ca493b000f432", "title": "A Lightweight and Privacy-Preserving Authentication Protocol for Mobile Edge Computing" }, { "paperId": "5ec0565869e9c024c3ac98da9f59f1a2f4d744bb", "title": "Privacy Preserving Data Aggregation Scheme for Mobile Edge Computing Assisted IoT Applications" }, { "paperId": "a8029d5a0d45c9f1ce1ea4ca46564b00eaf18f42", "title": "TARAS: Trust-Aware Role-Based Access Control System in Public Internet-of-Things" }, { "paperId": "c117ee3eef61ad6dc2313b38a6caf0efcfbef745", "title": "A Role-Based Access Control System Using Attribute-Based Encryption" }, { "paperId": "e7f84b1d7f8378ffaadbf85c33bacc8bcd9e28dd", "title": "Mobile Edge Computing: A Survey" }, { "paperId": "dead20583beaea14ec2d9a5451c2d28c13d0be5a", "title": "Flexible attribute enriched role based access control model" }, { "paperId": "4095249c5a04dc38e91a2b7c7af5cdcb08b5acbe", "title": "Role and attribute based access control model for web service composition in cloud environment" }, { "paperId": "1be70c1cc40865c281b3f59e973f9bd8a8cf06c8", "title": "A Survey on Mobile Edge Networks: Convergence of Computing, Caching and Communications" }, { "paperId": "e9b28ed88dd50cc80bde3c7cb835b1d7b0feea68", "title": "Edge Computing—An Emerging Computing Model for the Internet of Everything Era" }, { "paperId": "8c5293da3ad1a463cb9694edfbf1bf19b8cbd698", "title": "A Survey on Mobile Edge Computing: The Communication Perspective" }, { "paperId": "638c4339628b8b4cf34136afda4072bbb3e4296d", "title": "Mobile Edge Computing: Progress and Challenges" }, { "paperId": "52135387c4cbc881b2220d7d10f53874129ebc44", "title": "Core-Selecting Secondary Spectrum Auctions" }, { "paperId": "1e22da07e6694af27e3adb517dc938b219660185", "title": "GTrust: a group based trust model" }, { "paperId": "14ffb5fb85c8a7fd11254a5601821891f084d31e", "title": "The privacy-aware access control system using attribute-and role-based access control in private cloud" }, { "paperId": "3c647cbcbebe293712ca9678b1377d50b581a922", "title": "Adding Attributes to Role-Based Access Control" } ]
13,592
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/025150bc246547d7eabeae51326b6d24c154d4b3
[ "Computer Science" ]
0.849105
Verifier-based Password Authenticated 3P-EKE Protocol using PCLA Keys
025150bc246547d7eabeae51326b6d24c154d4b3
[ { "authorId": "9380768", "name": "A. Raghuvamshi" }, { "authorId": "52538632", "name": "Premchand Parvataneni" } ]
{ "alternate_issns": null, "alternate_names": null, "alternate_urls": null, "id": null, "issn": null, "name": null, "type": null, "url": null }
This paper endeavors to present a novel framework for the generic structure of a verifier-based password-authenticated Three-Party Encrypted Key Exchange (3P-EKE) protocol, which yields a more efficient protocol than the ones known before. A previous framework presented by Archana and Premchand is secure against many types of attacks, such as password guessing, replay, pre-play, and man-in-the-middle attacks. Unfortunately, that protocol does not solve the problem of a server compromise. These observations serve as inspiration to search for another framework. The framework we offer produces a more efficient 3P-EKE protocol and, in addition, provides clear insight into the attacks that remain unsolved in the previous framework. Moreover, it allows a direct change from a class of threshold private-key encryption to a hybrid (symmetric and asymmetric) one without significant overhead.
Published Online June 2016 in MECS (http://www.mecs-press.org/) DOI: 10.5815/ijcnis.2016.06.07

# Verifier-based Password Authenticated 3P-EKE Protocol using PCLA Keys

## Archana Raghuvamshi

Adikavi Nannaya University / CSE Department, Rajahmundry, 533296, India
E-mail: archana_anur@yahoo.in

## Premchand Parvataneni

Osmania University / CSE Department, Hyderabad, 500007, India
E-mail: profpremchand.p@gmail.com

**_Abstract—This paper endeavors to present a novel framework for the generic structure of a verifier-based password-authenticated Three-Party Encrypted Key Exchange (3P-EKE) protocol, which yields a more efficient protocol than the ones known before. A previous framework presented by Archana and Premchand is secure against many types of attacks, such as password guessing, replay, pre-play, and man-in-the-middle attacks. Unfortunately, that protocol does not solve the problem of a server compromise. These observations serve as inspiration to search for another framework. The framework we offer produces a more efficient 3P-EKE protocol and, in addition, provides clear insight into the attacks that remain unsolved in the previous framework. Moreover, it allows a direct change from a class of threshold private-key encryption to a hybrid (symmetric and asymmetric) one without significant overhead._**

**_Index Terms—Verifier-based protocols, Password-based Authentication, Three-Party Encrypted Key Exchange Protocol (3P-EKE), Public-Key Cryptosystem Based on Logarithmic Approach (PCLA)._**

I. INTRODUCTION

A vital job of cryptography is to guard the confidentiality of messages transferred over an unsecured network. To provide security, messages can be encrypted using a key (secret information) so that an intruder cannot decode them. However, encrypting the messages alone may not be a sufficient solution, because an intruder may take a more active role on an open, reachable network. The key may need to change from session to session in order to establish secure communication over the unsecured open network. Consequently, password-authenticated two-party encrypted key exchange (2P-EKE) protocols are used to exchange a session key based on a low-entropy password. In such a network, each party who wants to communicate needs to memorize a low-entropy password, which implies a high password-maintenance burden. Due to this drawback, password-authenticated three-party encrypted key exchange (3P-EKE) protocols remain in demand to date. According to Ding and Horster [1], many such 3P-EKE protocols suffer from one of three types of password guessing attacks: an intruder can guess the correct password by trying continuously until he succeeds, which is known as a password guessing attack. An ideal password-authenticated key exchange protocol should satisfy security requirements such as mutual authentication, resistance to password guessing attacks, session key (SK) security, resistance to trivial, pre-play, replay, and man-in-the-middle attacks, server spoofing security, perfect forward secrecy, backward secrecy, known-key security, etc. Based on the low-entropy passwords shared between a user and a server, password-authenticated key exchange (PAKE) protocols are classified into two types:

_A._ _Symmetric model_

As the name implies, a symmetric (identical) low-entropy password is shared between a Trusted Party (server) and a user for establishing a secure session key.
If the Trusted Party is compromised, an intruder will succeed in performing an attack on the legitimate user.

_B._ _Asymmetric model_

As the name implies, the password distribution is asymmetric in nature; i.e., the user shares different knowledge (a verifier) of the low-entropy password with the Trusted Party for establishing a secure session key. If the Trusted Party is compromised, the password table does not reveal direct information about the passwords. In this way, server spoofing is avoided.

Hence, the need of the hour is the design of a novel framework that establishes a secure session key with low computational overhead, proves secure against attacks like password guessing and server spoofing, and also provides mutual authentication, backward secrecy, and forward secrecy. This paper endeavors to propose such a framework for establishing a secure session key based on an asymmetric model by using PCLA keys. PCLA is a new public-key cryptosystem based on the logarithmic approach proposed by Archana et al. in 2012 [2]. Further, the rest of the paper is organized as follows: related work is discussed in Section II. In Section III, we list the notations used in the proposed protocol. The framework of the proposed protocol is described in Section IV. The security analysis of the proposed protocol is given in Section V. Finally, we make concluding remarks in Section VI.

II. RELATED WORK

The Diffie-Hellman (1976) [3] key exchange protocol suffers from a man-in-the-middle attack due to the lack of authentication. To assure good access control, many applications require robust client authentication; in such scenarios, password-authenticated key exchange (PAKE) protocols have their own identity. Bellovin and Merritt (1992) [4] first proposed a password-based authenticated encrypted key exchange protocol for the two-party setting. However, due to server compromise (server hacking; e.g., in 2012, more than a million LinkedIn passwords were stolen), this protocol could no longer be considered secure. Hence, to eliminate this problem, they proposed an improvement over it known as the Augmented EKE protocol (1993) [5], in which the server stores verifiers of the passwords instead of the actual passwords, which protects against a server compromise but does not solve the problem of off-line dictionary attacks. Subsequently, Gong et al. (1993) [6] proposed a three-party password-based authenticated key exchange protocol using a server's public key, where the clients bear the risk of verifying and safely keeping the public key. Many improvements have been proposed by various researchers in terms of security and computational efficiency [7, 8, 9, 10]. Abdalla et al. (2005) [11] proposed a provably secure one-time password-based authentication and key exchange (OPKeyX) technology for grid computing, where a user changes the password from one session to the next to eliminate the problem of password sniffing. Lin et al. (2008) [12] proposed an efficient verifier-based password-authenticated key exchange protocol using elliptic curve cryptography. Unfortunately, Yang et al. (2011) [13] showed the flaws of Lin et al.'s protocol and proposed an improvement over the efficient verifier-based password-authenticated key exchange protocol via elliptic curves. A novel ECC-3PEKE protocol was proposed by Chang et al. (2004) [14], which proved to be practical, efficient, and secure. However, Yoon et al.
(2008) [15] identified an undetectable online password guessing attack on it and proposed an improvement over the ECC-3PEKE protocol. Subsequently, the PSRJ protocol was proposed by Padmavathy et al. (2009) [16], which is also an improvement over the ECC-3PEKE protocol; they claimed that their protocol achieves better computational complexity and is also secure against dictionary attacks. Later, Chang et al. (2009) [17] discussed why Yoon and Yoo's protocol is still insecure. R. Padmavathy (2010) [18] cryptanalyzed the PSRJ protocol and, to overcome the attack, proposed an improvement over it using reduced modular exponentiation operations. Subsequently, an impersonation attack on the ECC-3PEKE protocol was shown by Shirisha Tallapally (2010) [19]. Next, Archana et al. (2012) [20] showed a detectable online password guessing attack on the PSRJ protocol. Also, Kulkarni et al. (2007) [21] proposed a novel key exchange protocol based on verifier-based password authentication for three parties, where each client, instead of storing the password itself, computes a one-way hash function of each password and stores the corresponding result in the server's password table. Subsequently, Shaban et al. (2008) [22] proposed an improvement over Kulkarni et al.'s protocol in terms of computational complexity, reducing the rounds from 7 to 4 without using symmetric encryption/decryption. Unfortunately, Archana et al. (2015) [23] cryptanalyzed Shaban et al.'s protocol by showing a detectable online password guessing attack. Kulkarni et al.'s protocol is proved secure against dictionary attacks, but it is computationally more expensive than our proposed protocol. A previous framework presented by Archana et al. "in press" [24] is secure against many types of attacks, such as password guessing, replay, pre-play, and man-in-the-middle attacks; unfortunately, it does not solve the problem of a server compromise. These observations serve as inspiration to search for another framework that eliminates the problems that may occur in the previous framework.

III. NOTATIONS

The list of notations along with their descriptions used in this paper is given in Table 1. In particular, Ida, Idb, and Idtp are the identities of client-A, client-B, and the Trusted Party TP respectively, which are known publicly.

Table 1. List of Notations

IV. FRAMEWORK FOR PROPOSED PROTOCOL

This section endeavors to propose a novel verifier-based password-authenticated 3P-EKE protocol using PCLA keys. PCLA is a new public-key cryptosystem based on the logarithmic approach proposed by Archana et al.; more details about this algorithm are given in the reference paper [2]. The proposed protocol is divided into three stages:

- Initialization Stage
- Key Agreement Stage
- Key Computation Stage

_A._ _Initialization Stage_

At this first stage, the clients who want to communicate with each other have to register with the Trusted Party in advance. The procedure for registration is as follows:

**Step 0:** Client-A and Client-B compute the verifiers VA = H(Ida, Idtp, Pwda) and VB = H(Idb, Idtp, Pwdb) by choosing low-entropy random passwords Pwda and Pwdb respectively. Client-A and client-B then send the verifiers VA and VB to the Trusted Party through a secure channel.

**i.e., Client-A → Trusted Party: {VA}, and Client-B → Trusted Party: {VB}.**

The Trusted Party stores the verifiers in its password-verifier table. The detail of the initialization stage is depicted in Fig.1.

Fig.1. Initialization Stage
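To make Step 0 concrete, the following sketch models the registration with SHA-256 standing in for the abstract hash H (the paper does not fix a concrete hash), and a Python dictionary standing in for the Trusted Party's verifier table; the identifiers and field separator are illustrative.

```python
# A sketch of the registration step (Step 0), assuming SHA-256 as the hash H
# and '|' as a field separator; both are assumptions, not fixed by the paper.
import hashlib

def verifier(user_id: str, tp_id: str, password: str) -> str:
    """V = H(Id_user, Id_tp, Pwd): only this digest reaches the Trusted Party."""
    return hashlib.sha256(f"{user_id}|{tp_id}|{password}".encode()).hexdigest()

# The Trusted Party stores verifiers, never the raw passwords.
verifier_table = {
    "client_A": verifier("client_A", "TP", "pwd_a"),  # sent over a secure channel
    "client_B": verifier("client_B", "TP", "pwd_b"),
}

# Later, the Trusted Party re-derives the verifier from a submitted password
# and compares it with the stored one (the first-level check of Step KA2).
assert verifier_table["client_A"] == verifier("client_A", "TP", "pwd_a")
```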
_B._ _Key Agreement Stage_

In this second stage of the protocol, the actual procedure for session key agreement begins. The protocol executes as per the following steps.

**Step KA1:** Client-A generates two random numbers REa, ra ∈R Zp, computes Katp = Ma^ra mod p, where Ma = g^REa mod p, and sends the credentials {Ida, Idb, Idtp, {EPKeypu(Pwda), VA}, EPwda(Ma ⊕ ra), htp(Pwda ⊕ Ma), FKatp(Ma)} to the Trusted Party.

**i.e., Client-A → Trusted Party: {Ida, Idb, Idtp, {EPKeypu(Pwda), VA}, EPwda(Ma ⊕ ra), htp(Pwda ⊕ Ma), FKatp(Ma)}.**

Similarly, client-B generates two random numbers REb, rb ∈R Zp, computes Kbtp = Mb^rb mod p, where Mb = g^REb mod p, and sends the credentials {Ida, Idb, Idtp, {EPKeypu(Pwdb), VB}, EPwdb(Mb ⊕ rb), htp(Pwdb ⊕ Mb), FKbtp(Mb)} to the Trusted Party.

**i.e., Client-B → Trusted Party: {Ida, Idb, Idtp, {EPKeypu(Pwdb), VB}, EPwdb(Mb ⊕ rb), htp(Pwdb ⊕ Mb), FKbtp(Mb)}.**

**Step KA2:** Upon receiving the credentials from client-A and client-B, the Trusted Party decrypts EPKeypu(Pwda) and EPKeypu(Pwdb) using its PCLA private key Keypr, i.e., DPKeypr(EPKeypu(Pwda)) and DPKeypr(EPKeypu(Pwdb)), and obtains the low-entropy passwords Pwda and Pwdb respectively. The Trusted Party then computes H(Ida, Idtp, Pwda) and H(Idb, Idtp, Pwdb), retrieves the verifiers VA and VB from its table, and checks whether the values are equal. If not, it terminates the protocol for the current session. If yes, client-A and client-B are verified at the first level, and the Trusted Party continues with the remaining procedure of the protocol. The Trusted Party retrieves Pwda ⊕ Ma and Pwdb ⊕ Mb from htp(Pwda ⊕ Ma) and htp(Pwdb ⊕ Mb) using the trapdoor [25] 'tp', and computes Ma = (Pwda ⊕ Ma) ⊕ Pwda and Mb = (Pwdb ⊕ Mb) ⊕ Pwdb respectively. Next, it obtains ra = (Ma ⊕ ra) ⊕ Ma and rb = (Mb ⊕ rb) ⊕ Mb from the credentials EPwda(Ma ⊕ ra) and EPwdb(Mb ⊕ rb) by decrypting them with the low-entropy passwords Pwda and Pwdb, i.e., DPwda(EPwda(Ma ⊕ ra)) and DPwdb(EPwdb(Mb ⊕ rb)). The Trusted Party then performs the second level of verification: after computing Katp = Ma^ra mod p and Kbtp = Mb^rb mod p, it calculates FKatp(Ma) and FKbtp(Mb) and compares each computed value with the received one. If they are not identical, it terminates the protocol for the current session. If both are identical, the verification of client-A and client-B passes, and the Trusted Party continues with the remaining procedure of the protocol. Now, the Trusted Party chooses a random exponent REtp ∈R Zp, computes Mb^REtp mod p and Ma^REtp mod p, encrypts these values with its PCLA private key, and sends the credentials to client-A and client-B simultaneously.

**i.e., Trusted Party → Client-A: {EPKeypr(Mb^REtp mod p)}, and Trusted Party → Client-B: {EPKeypr(Ma^REtp mod p)}.**

The detail of the key agreement stage is depicted in Fig.2.

Fig.2. Key Agreement Stage
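The sketch below walks through the arithmetic of Step KA1 and its check in Step KA2 under simplifying assumptions: the PCLA encryption, the password-based cipher EPwd, and the trapdoor hash htp are omitted, the keyed function F is modeled with SHA-256, and the group parameters are toy values.

```python
# A sketch of the modular arithmetic and XOR masking in Steps KA1/KA2.
# The encryption and trapdoor layers of the protocol are deliberately omitted;
# only the recoverability of r_a and the F-based check are demonstrated.
import secrets, hashlib

p = (1 << 127) - 1          # toy Mersenne prime; real deployments use larger groups
g = 5

def F(key: int, msg: int) -> str:
    """Stand-in for the keyed one-way function F_K(.) used as an integrity proof."""
    return hashlib.sha256(f"{key}|{msg}".encode()).hexdigest()

# --- Client-A, Step KA1 ---
RE_a = secrets.randbelow(p - 2) + 1      # random exponent RE_a
r_a  = secrets.randbelow(p - 2) + 1      # random value r_a
M_a  = pow(g, RE_a, p)                   # M_a   = g^RE_a  mod p
K_atp = pow(M_a, r_a, p)                 # K_atp = M_a^r_a mod p
masked_r = M_a ^ r_a                     # the value protected by E_Pwda(.)
proof = F(K_atp, M_a)                    # F_Katp(M_a)

# --- Trusted Party, Step KA2 (after recovering M_a via the trapdoor) ---
r_a_recovered = masked_r ^ M_a           # r_a = (M_a XOR r_a) XOR M_a
K_check = pow(M_a, r_a_recovered, p)
assert F(K_check, M_a) == proof          # second-level verification passes
```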
_C._ _Key Computation Stage_

In order to compute a secure session key, this stage begins with the verification of the Trusted Party in a smart way.

**Step KC1:** Upon receiving the credentials from the Trusted Party, client-A decrypts EPKeypr(Mb^REtp mod p) using the PCLA public key of the Trusted Party, i.e., DPKeypu(EPKeypr(Mb^REtp mod p)), to get Mb^REtp mod p. In this way, client-A authenticates the Trusted Party. Similarly, client-B also authenticates the Trusted Party in the same way. Now, client-A computes the mutual session key SK = (Mb^REtp)^REa mod p = ((g^REb)^REtp)^REa mod p together with FSK(Ida, SK), and sends the latter to client-B. Similarly, client-B computes the mutual session key SK = (Ma^REtp)^REb mod p = ((g^REa)^REtp)^REb mod p together with FSK(Idb, SK), and sends the latter to client-A.

**i.e., Client-A → Client-B: {FSK(Ida, SK)}, and Client-B → Client-A: {FSK(Idb, SK)}.**

**Step KC2:** Upon receiving the incoming credentials FSK(Ida, SK) and FSK(Idb, SK) from client-A and client-B respectively, the two clients verify each other and can confirm that the mutual session key is SK = (Mb^REtp)^REa mod p = (Ma^REtp)^REb mod p. The detail of the key computation stage is illustrated in Fig.3.

Fig.3. Key Computation Stage
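A minimal sketch of the key computation stage: it verifies numerically that the two clients derive the same SK, since ((g^REb)^REtp)^REa = ((g^REa)^REtp)^REb = g^(REa·REb·REtp) mod p. The parameters are toy values, and the PCLA encryption of the exchanged terms is omitted.

```python
# A sketch of the session key equality in Steps KC1/KC2: the Trusted Party's
# exponent RE_tp links the two clients' exponents, so both derive the same SK.
import secrets

p = (1 << 127) - 1   # toy prime, as in the previous sketch
g = 5

RE_a, RE_b, RE_tp = (secrets.randbelow(p - 2) + 1 for _ in range(3))
M_a, M_b = pow(g, RE_a, p), pow(g, RE_b, p)

# Trusted Party -> client-A: M_b^RE_tp; Trusted Party -> client-B: M_a^RE_tp
to_A = pow(M_b, RE_tp, p)
to_B = pow(M_a, RE_tp, p)

SK_at_A = pow(to_A, RE_a, p)   # ((g^RE_b)^RE_tp)^RE_a mod p
SK_at_B = pow(to_B, RE_b, p)   # ((g^RE_a)^RE_tp)^RE_b mod p
assert SK_at_A == SK_at_B      # both clients hold the same session key
```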
- **_First Scenario: Client-A and Client-B use the_** public key Keypu of Trusted Party to hide the corresponding passwords. Only Trusted Party knows the private key Keypr to decrypt it. Hence, for an intruder Eve-E, it is not possible to get the passwords of client-A & client-B. - **_Second Scenario: Client-A and Client-B use the_** trapdoor ‗tp‘ to hide the random exponents REa in Ma & Pwda and REb in Mb & Pwdb. Since only Trusted Party knows the trapdoor ‗tp‘ and passwords Pwda & Pwdb he can very well authenticate Client-A and Client-B after receiving the messages sent in step KA1 of the protocol. - **_Third_** **_Scenario:_** Trusted Party sends {EPKeypr(MbREtp mod p)} to client-A & {EPKeypr(MaREtp mod p)} to client-B in _step KA2 of_ the protocol. This message can be used to authenticate Trusted Party. - **_Fourth Scenario: Client-A and Client-B derive a_** key from MbREtp and MaREtp respectively, as mentioned in _step KC1 of the protocol. With the_ help of FSK(Idb, SK) & FSK(Ida, SK) both client-A and client-B can authenticate each other respectively as mentioned in step KC2 of the protocol. Hence, the mutual authentication is provided by the proposed protocol. _D._ _Provides backward secrecy_ The proposed protocol can provide backward secrecy, where compromise of Pwda will not lead to the compromise of NewPwda. Assume, a client-A suspects a ‗leak of information‘ to Eve-E, then immediately client-A request to Trusted Party to change its password from Pwda to NewPwda. Let us assume, subsequently _Eve-E_ intercepted the password change request, i.e., {Ida, Idtp, EPKeypu(SK), H(Ida, Idtp, Pwda)  H(Ida, Idtp, NewPwda) SK, htp(H(Ida, Idtp, NewPwda)} sent to Trusted Party by client-A. However, in this process Eve-E cannot compute SK by using Pwda, hence, he cannot compute NewPwda from the intercepted message {Ida, Idtp, EPKeypu(SK), H(Ida, Idtp, Pwda)  H(Ida, Idtp, NewPwda) SK, htp(H(Ida, Idtp, NewPwda)}. Hence, the backward secrecy is provided by the proposed protocol. _E._ _Provides the forward secrecy_ The session key is computed as follows: SK= (MbREtp)REa (mod p)=(MaREtp)REb (mod p). If the Eve-E gets {EPKeypr(MbREtp mod p)} or {EPKeypr(MaREtp mod p)}, then in order to obtain the session key, she should know the public key of Trusted Party and REb or REa. The session keys generated in different sessions are independent since REa and REb are randomly chosen by client-A and clientB respectively. This indicates that _Eve-E cannot obtain_ previous session keys even if she obtains the session key used in this run. Hence, the forward secrecy is provided by the proposed protocol. VI. CONCLUSION In this paper, we proposed a novel verifier-based password authenticated 3P-EKE protocol using PCLA keys, which provides perceptive justification about the existing attacks that do not solve in the previous framework. That is, our proposed protocol is proved to be secure against offline dictionary attacks and server spoofing attack. Further, we have also proved that our protocol provides mutual authentication, backward secrecy and also forward secrecy. REFERENCES [1] Y. Ding and P. Horster. ―Undetectable online password guessing attacks,‖ ACM Operating Systems Review vol.29, pp.77-86, 1995. [2] Archana Raghuvamshi, P.Premchand and P.Venkateswara Rao. ―PCLA: A New Public-key Cryptosystem Based on Logarithmic Approach‖, International Journal of Computer Science Issues(IJCSI), vol.9,no.1, pp.355-359, 2012. [3] W. Diffie and M. E. Hellman. 
―New directions in cryptography‖, IEEE Transactions on Information Theory, vol.22, no.6, pp.644–654, 1976. [4] S. M. Bellovin and M. Merritt, ―Encrypted key exchange: Password-based protocols secure against dictionary ----- attacks‖, IEEE Symposium on Security and Privacy, IEEE Computer Society Press, pp.72–84 May 1992. [5] S. M. Bellovin and M. Merritt, ―Augmented encrypted key exchange: A password-based protocol secure against dictionary attacks and password file compromise‖, ACM CCS, ACM Press vol.93, pp.244–250, November 1993. [6] L. Gong, M. Lomas, R. Needham, and J. Saltzer, ―Protecting poorly chosen secrets from guessing attacks‖, _IEEE Journal on Selected Areas in Communications,_ vol.11,no.5,pp. 648-656, 1993. [7] W.M. Li, and Q.Y. Wen, ―Efficient verifier-based password-authentication key exchange protocol via elliptic curves‖, _Proceedings of 2008 International_ _Conference_ _on_ _Computer_ _Science_ _and_ _Software_ _Engineering, pp. 1003-1006, 2008._ [8] E.J. Yoon, and K.Y. Yoo, ―Robust User Password Change Scheme based on the Elliptic Curve Cryptosystem‖, _Fundamenta Informaticae, pp 483-492, 2008._ [9] Zeng, Yong and Ma, Jianfeng, ―An improvement on a password authentication scheme over insecure networks‖ _Journal of Computational Information Systems, vol.5,_ no.4, pp.1331-1336, 2009. [10] Chunling Liu, Yufeng Wanga and Qinxi Bai, ―A New Three-party Key Exchange Protocol Based on DiffieHellman,‖ I.J. Wireless and Microwave Technologies, vol. 1, no.4, pp. 65-69, 2011. [11] M. Abdalla, O. Chevassut, and D. Pointcheval. ―One-time verifier-based encrypted key exchange‖, PKC LNCS, Springer, vol. 3386, pp.47–64, January 2005. [12] W.M. Lin, and Q.Y. Wen, ―Efficient verifier-based password-authentication key exchange protocol via elliptic curves‖, _Proceedings of 2008 International_ _Conference_ _on_ _Computer_ _Science_ _and_ _Software_ _Engineering, pp.1003-1006, 2008._ [13] Junhan YANG and Tianjie CAO, “A Verifier-based Password-Authenticated Key Exchange Protocol via Elliptic Curves‖, Journal of Computational Information Systems, Binary Information Press, pp.548-553, 2011. [14] Chin-Chen Chang and Ya-fen Chang, ―A novel three party encrypted key exchange protocol‖, Elsevier, Computer Standards & Interfaces, vol.26 pp.471 – 476, 2004. [15] [Eun-Jun Yoon, and Kee-Young Yoo, ―Improving the](https://www.researchgate.net/researcher/33971269_Eun-Jun_Yoon) novel three-party encrypted key exchange protocol‖, Elsevier, _Computer Standards and Interfaces, vol. 30,_ pp.309-314, 2008. [16] R.Padmavathy, Tallapally Shirisha, M.Rajkumar, and Jayadev Gyani, ―Improved analysis on Chang and Chang Password Key Exchange Protocol‖, IEEE International Conference on Advances in Computing, Control, and Telecommunication Technologies, pp.781-783, 2009. [17] Ya-Fen Chang, Wei-Cheng Shiao, and Chung-Yi Lin, ―Comments on Yoon and Yoo‘s Three-party Encrypted Key Exchange Protocol‖, International Conference on Advanced Information Technologies (AIT), 2009. [18] R. Padmavathy, ―Improved Three Party Eke Protocol‖, Information Technology and Control, Vol.39, No.3, pp.220-226, 2010. [19] Shirisha Tallapally, ―Impersonation Attack on EKE Protocol‖, International Journal of Network Security & Its Applications (IJNSA), vol.2, no. 2, pp. 114-121, 2010. 
[20] Archana Raghuvamshi, P.Venkateshwara Rao, and Prof.P.Premchand, ―Cryptanalysis of Authenticated Key Exchange 3P-EKE Protocol and its Enhancement‖, IEEEInternational Conference on Advances in Engineering, Science and Management (ICAESM -2012), pp.659-666, March 30, 31, 2012. [21] S. Kulkarni, D. Jena, and S.K. Jena, "A Novel Secure Key Agreement Protocol using Trusted Third Party", Computer Science and Security Journals (IJCSS), vol.1, no.1, pp. 11 – 18, 2007. [22] Dina Nabil Shaban, Maged H. Ibrahim, and Zaki B.Nossair, **“Enhanced** Verifier-Based Password Authenticated Key Agreement Protocol For ThreeParties‖, Journal of Engineering Sciences, vol. 36, no. 6, pp.1513- 1522, 2008. [23] Archana Raghuvamshi and Premchand Parvataneni. ―Cryptanalysis of Verifier-Based Password-Authenticated Key Agreement Protocol for Three Parties‖, Research Journal of Recent Sciences. Vol. 4, pp. 5-8, Feb 2015. [24] Archana Raghuvamshi and Premchand Pavataneni, ―Design of a Robust, Computation-Efficient and Secure 3P-EKE Protocol using Analogous Message Transmission‖, International Journal of Computer Network and Information Security (IJCNIS), In Press. [25] Y. Gertner, T. Malkin, and O. Reingold, ―On the impossibility of basing trapdoor functions on trapdoor predicates‖, Proceedings of the 42nd IEEE Symposium on foundations of Computer Science, Las Vegas, Nevada,, pp. 126 – 135, October 2001. **Authors’ Profiles** **Archana Raghuvamshi is presently** working as an Assistant Professor in Dept. of CSE, UCOE, Adikavi Nannaya University, Rajahmundry. She is having 13+ year of teaching experience. She received her Bachelor‘s Degree BSc (M.S.Cs), Master‘s Degrees M.C.A and M.Tech(CSE) from Osmania University, Hyderabad. She did course work in ADS and WMN in IITM (Indian Institute of Technology, Madras). She is perusing Ph.D. (CSE) in JNTUK, Kakinada. She published four research papers in IEEE Digital library and another six research papers in various peer reviewed International Journals. Her research interest includes Cryptography and Information Security, Security in Cloud Computing etc. Ms. Archana Raghuvamshi is a, 1. Professional Member of ACM 2. Member of Professional Body IAENG 3. Member of IACSIT 4. Associate Member of theIRED **Prof.** **Premchand** **Parvataneni** is presently working as a professor in Department of Computer Science and Engineering at University College of Engineering, Osmania University, Hyderabad (Telangana). He received his Bachelor‘s Degree B.Sc (Engg.) from RIT, Jamshedpur. He received his Master‘s M.E (CE) from AU (Andhra University), Visakhapatnam. He received his Ph.D.(CSSE) from AU. He has published more than 50 publications in various International Journals and Conference proceedings. His research Interest includes Cryptography and Network Security, Image Processing, Software Engineering etc. Prof.Premchand is having 40+ years of teaching experience ----- in various Universities. He was as a Director in AICTE, New Delhi. And also, he has been held for the various positions like Head, Chairman of BOS, Additional Controller of Examinations in the Professional wing, Osmania University, Hyderabad. **How to cite this paper:** Archana Raghuvamshi, Premchand Parvataneni,"Verifier-based Password Authenticated 3PEKE Protocol using PCLA keys", International Journal of Computer Network and Information Security(IJCNIS), Vol.8, No.6, pp.59-66, 2016.DOI: 10.5815/ijcnis.2016.06.07 -----
{ "disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.5815/IJCNIS.2016.06.07?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.5815/IJCNIS.2016.06.07, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "GOLD", "url": "http://www.mecs-press.org/ijcnis/ijcnis-v8-n6/IJCNIS-V8-N6-7.pdf" }
2016
[]
true
2016-06-08T00:00:00
[ { "paperId": "04e9a7d9625b38452aad458dfd9bf6e6e8358dfe", "title": "Design of a Robust, Computation-Efficient and Secure 3P-EKE Protocol using Analogous Message Transmission" }, { "paperId": "175e547be9705b621b06c3a231ba254aad84d599", "title": "PCLA: A new public-key cryptosystem based on logarithmic approach" }, { "paperId": "9b565623e6662ec68983d16e22c87a3399cd90ad", "title": "Cryptanalysis of authenticated key exchange 3P-EKE protocol and its enhancement" }, { "paperId": "4ad98e966fe095e49597599669a8c093a16dc656", "title": "A New Three-party Key Exchange Protocol Based on Diffie-Hellman" }, { "paperId": "ed98c3b046b5895ecc7e8d58c608e08ab01caf30", "title": "Verifier-based password authenticated key exchange protocol via elliptic curve" }, { "paperId": "5d4d6ff7a52c3c3f97c5ac26e045b26533cd684b", "title": "IMPERSONATION ATTACK ON EKE PROTOCOL" }, { "paperId": "53a44301e06bdfd9e17a510b183be4ffabc361cf", "title": "Improved Analysis on Chang and Chang Password Key Exchange Protocol" }, { "paperId": "92ea1592743d40c9819c9880d9b8c6c16cd967ed", "title": "Efficient Verifier-Based Password-Authentication Key Exchange Protocol via Elliptic Curves" }, { "paperId": "a568d98af58850a26ef1def71b53e981e23cf5fb", "title": "ENHANCED VERIFIER-BASED PASSWORD AUTHENTICATED KEY AGREEMENT PROTOCOL FOR THREE-PARTIES" }, { "paperId": "df1e6550400c3d5f38a340618309a253569a5d0a", "title": "Improving the novel three-party encrypted key exchange protocol" }, { "paperId": "2b3bf2ce47d4baf857fea8104205aaeec8d8d57b", "title": "Robust User Password Change Scheme based on the Elliptic Curve Cryptosystem" }, { "paperId": "7a8985c16a0551c464bbfcb46fbc6d2d48377137", "title": "A password authentication scheme over insecure networks" }, { "paperId": "16dde8415501f1a4f5d63b292343088836ebe995", "title": "One-Time Verifier-Based Encrypted Key Exchange" }, { "paperId": "686a5383620fd2be566ad06893d6e5f12a286fbc", "title": "A novel three-party encrypted key exchange protocol" }, { "paperId": "97fe9e35e87a45811e7e0e9b05e7822ae575148c", "title": "Undetectable on-line password guessing attacks" }, { "paperId": "8e39cad74131ad5c1f09d44ef3b2b65a3ae29e35", "title": "Augmented encrypted key exchange: a password-based protocol secure against dictionary attacks and password file compromise" }, { "paperId": "6b479047219fc565a478efbe95572806cd03a7a1", "title": "Protecting Poorly Chosen Secrets from Guessing Attacks" }, { "paperId": "0c0fbe79e49c4859f4d63052d27074049733e092", "title": "Encrypted key exchange: password-based protocols secure against dictionary attacks" }, { "paperId": "ba624ccbb66c93f57a811695ef377419484243e0", "title": "New Directions in Cryptography" }, { "paperId": "8fa5b000d25aed21a3f2288c6571b827a644b626", "title": "IMPROVED THREE PARTY EKE PROTOCOL" }, { "paperId": null, "title": "A Novel Secure Key Agreement Protocol using Trusted Third Party" }, { "paperId": "f346674cb9fc26d243a0b63eb43385e9efaaf860", "title": "Authors' Profiles" }, { "paperId": null, "title": "She received her Bachelor‘s Degree BSc (M.S.Cs), Master‘s Degrees M.C.A and M.Tech(CSE) from Osmania University, Hyderabad" } ]
8,421
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/0254fed1ad7983ac9d834822d66a553845e0f7de
[ "Computer Science" ]
0.919658
Privacy-Aware and Highly-Available OSN Profiles
0254fed1ad7983ac9d834822d66a553845e0f7de
2010 19th IEEE International Workshops on Enabling Technologies: Infrastructures for Collaborative Enterprises
[ { "authorId": "1796529", "name": "Rammohan Narendula" }, { "authorId": "1766169", "name": "Thanasis G. Papaioannou" }, { "authorId": "1751802", "name": "K. Aberer" } ]
{ "alternate_issns": null, "alternate_names": null, "alternate_urls": null, "id": null, "issn": null, "name": null, "type": null, "url": null }
null
# Privacy-aware and highly-available OSN profiles

Rammohan Narendula, Thanasis G. Papaioannou, and Karl Aberer
School of Computer and Communication Sciences, EPFL, Switzerland
Email: firstname.lastname@epfl.ch

**_Abstract—The explosive growth of online social networks (OSNs) and their wide popularity suggest the impact of OSNs on today's Internet. At the same time, the concentration of a vast amount of personal information within a single administrative domain causes critical privacy concerns. As a result, privacy-conscious users feel disempowered with today's OSNs. In this paper, we report on ongoing research work and introduce a privacy-aware decentralized OSN called porkut. Our system exploits trust relationships in the social network for decentralized storage of OSN profiles and their content. By taking users' geographical locations and online time statistics into account, it also addresses availability and storage performance issues. We finally advocate indexing of social network content and present an approach for indexing in a privacy-preserving manner._**

**_Keywords-online social network; privacy-preserving index; connected dominating set; trust_**

I. INTRODUCTION

Online social networks (e.g. Facebook.com, Orkut.com) have recently seen explosive growth. Facebook received 130 million visitors in a single month in 2008 [1] and currently has more than 200 million users. As a result, these OSNs have become storehouses of an unprecedented amount of data in the form of messages, photos, links, and personal information. Facebook has grown to be the world's largest photo-sharing service, surpassing even dedicated photo-sharing online applications (e.g. Flickr). It is also the largest instant messaging service on the web. Researchers argue that the future Internet will be very much influenced by social networks regarding the location of content and knowledge, and the user interactions [2]. However, most of the current social networks operate on infrastructure administered by a single authority (big-brother), such as Google, Facebook, etc. These organizations mine the personal data hosted inside users' profiles and exploit it for targeted advertisements, in order to be compensated for their huge investments in infrastructure. At sign-up time, users consciously or unconsciously permit the organizations to share their personal information with third parties in whatever form the organizations choose [3]. In addition, the leakage of personal information from OSNs can be associated with user activity on non-OSN sites as well [4]. However, the exponential growth of the OSNs suggests that users are ready to trade privacy for the utility of the services offered. As a result, there is almost negligible motivation for the OSN operators to address the privacy concerns of the users.

In order to address the privacy concerns of OSN users, the research community has resorted to the P2P paradigm for OSN content management. Replacing the big-brother with a community of users enables OSN users to have complete control over their profile content. In this paper, we present an initial design of such a system, referred to as porkut, where users organize a social network over a P2P overlay with privacy-preserving data access. We briefly outline the system architecture and mainly focus on the distributed storage layer.
Specifically, we propose a decentralized mechanism for users to manage their own online social network on top of resources collectively contributed by themselves. Such a design is motivated by several goals: a) it eliminates the requirement for a single big-brother who can exploit the users' profile data for his own interest without the users' consent; b) it preserves the privacy of individuals' social profile content, as they have complete control over who can access which parts of the content; c) it exploits the trust relationships among users in the social network to improve the content availability and the storage performance. Both issues are non-trivial in a P2P setting. Three approaches with different goals for improving storage performance are introduced, while maintaining high content availability. A user's profile content is hosted only on a set of self-defined trusted nodes that enforce access control on the content. This set of trusted nodes is selected intuitively, keeping the availability and performance goals in mind. Other issues, such as the structure of the profile content, the format of the access control policies, trusted identity management, and other data integrity issues, are beyond our scope.

In addition, the system constructs a privacy-preserving index of the social network content that enables privacy-aware searching. We argue that such an index enables content discovery among friends in OSNs and helps system users discover new friends (based on content, such as common interests) within the OSN application and establish new social connections. This is an add-on feature over existing OSNs like Facebook that do not allow content-based search. Such an index is hosted over the P2P overlay in a distributed hash table (DHT). Users can specify their privacy objectives during content publishing, and thus content existence and ownership are only revealed according to their preferences. This index could be used to serve advertisements on searches and distribute the revenues to the users according to their published content. This way, users can benefit from their content without compromising their privacy. In contrast, the current OSN applications exploit users' content for their own monetary gains.

The rest of the paper is organized as follows. In Section II, a brief description of the system is provided. The storage layer is discussed in Section III. The privacy-preserving indexing is described in Section IV. In Section V, we discuss the related work and, finally, in Section VI, we conclude this paper and outline our future work.

II. SYSTEM OVERVIEW

As mentioned earlier, the porkut system exploits the trust relationships among friends and social network connections to improve the availability and search performance of the system. We assume that a user of porkut runs the client on his office or personal laptop/computer. Hence, for the rest of the paper, we use the terms user and node interchangeably. A user u's profile content is hosted only on a set of self-defined trusted nodes, which enforce access control on the content on behalf of the user. This set of trusted nodes for a user is referred to as his trusted proxy set (TPS). The TPS members for a user are properly selected with respect to the availability and performance goals. We observe that every user in an OSN has friends scattered over a limited set of geographical locations (e.g. his home town, working location, home country, location of a previous institute, etc.).
Moreover, we observe that each user's online timings are predictable to a large extent (e.g. his office hours, or being completely offline on weekends). Exploiting these facts, we populate this set of trusted nodes in such a way that, at any given time, one node in the set is online to satisfy the profile access requests, while, at the same time, the content is located at a node falling within a geographical neighborhood of the users that frequently ask for it. The computation of the set TPS based on a user's social graph is explained in the next section. Each user u is identified by a unique identifier denoted by UIdu. Note that a TPS is a set of UIds. The porkut system employs a distributed hash table (DHT) hosted on the resources contributed by the users. This DHT is used for storing the privacy-preserving index of the profile content and other meta information, e.g. the current IP address of a user. A user u and his TPS mapping is stored in the DHT in the form of a (key, value) pair, with key being UIdu and value being the members of the TPS. Using cryptographic signatures, it is trivial to test the authenticity of such an entry in the DHT. This user-to-TPS mapping in the DHT is used for contacting the nodes where the profile of a particular user is stored. We assume that, with a reasonable replication factor, one can ensure that the data items stored inside the DHT are highly available in spite of node churn. As trusted storage is not required by the system design, such a DHT could be hosted at a highly available cloud storage or in publicly available OpenDHT-like services [5]. The porkut storage architecture is illustrated in Figure 1. Therein, the user u1 has 5 friends in the OSN, namely u2 to u6. The set TPS = {u1, u2, u4} is shown in the figure, and a mapping between u1 and TPS[u1] is inserted into the DHT. The user social graph is represented as an online time graph, which is explained in the next section.

Fig. 1. The porkut storage layer (the DHT stores the entry (u1, TPS[u1]) with TPS[u1] = {u1, u2, u4})
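As a rough illustration of the user-to-TPS mapping described above, the sketch below models the DHT as a dictionary and the authenticity check with an HMAC; the text calls for cryptographic signatures, so a real deployment would use a public-key signature scheme rather than the shared secret assumed here.

```python
# A sketch of the (UIdu -> TPS[u]) entries kept in the DHT. The dict stands in
# for the DHT, and an HMAC stands in for the cryptographic signature the text
# mentions; both substitutions are simplifications for illustration.
import hmac, hashlib

dht = {}  # key -> value store standing in for the DHT

def put_tps(user_id: str, tps: list, key: bytes) -> None:
    payload = f"{user_id}:{','.join(sorted(tps))}".encode()
    dht[user_id] = {"tps": tps,
                    "tag": hmac.new(key, payload, hashlib.sha256).hexdigest()}

def get_tps(user_id: str, key: bytes) -> list:
    entry = dht[user_id]
    payload = f"{user_id}:{','.join(sorted(entry['tps']))}".encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(entry["tag"], expected):
        raise ValueError("tampered TPS entry")
    return entry["tps"]

secret = b"u1-demo-key"                    # hypothetical key material
put_tps("u1", ["u1", "u2", "u4"], secret)  # the TPS[u1] from Figure 1
assert get_tps("u1", secret) == ["u1", "u2", "u4"]
```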
III. STORAGE LAYER

In this section, we discuss the storage mechanism of the porkut system, and mainly address the construction of the set TPS(u) for a user u from his social graph. The social network graph is denoted as G(U, R), where U is the set of users, represented by the vertices in the graph, and R is the set of friendship relations, represented by edges. For example, an edge between two vertices u1 and u2 models the fact that users u1 and u2 are friends. We assume that friendship relationships are symmetric; this is the default assumption in current OSN applications, e.g. Orkut, Facebook. We use the notation NG(u) to represent the set of neighbors (i.e. friends on the OSN) of user u in the social graph G, and NG[u] to represent NG(u) ∪ {u}. We assume that each user u in the social network is characterized by two parameters: his geographical location and his online time period. For instance, the location can be set to the country/city where the user is currently located. We exploit the location information of the friends of a user in order to place data as close as possible to the nodes that most frequently access the data for getting profile updates etc. Therefore, data is stored on nodes falling within a certain geographical proximity of its most-frequent access points. This is quantified by the metric access cost C_{u1}^{u2} between two geographical locations/users/nodes u1 and u2, which is defined as the cost of the communication link between them (i.e. the unit cost for transferring data between these two nodes). This could be measured, for example, in terms of the RTT between the two nodes. The online time period represents the usual time that the user is online in the social network. This is the time window in which the user contributes his resources (i.e. bandwidth, storage, and processing power) for the social network operation. The node can only reply to the data access requests (for the data it hosts) that are generated during this time window; beyond this time window, the user is offline. We denote the location and online time period parameters for a user u as Lu and OTu respectively. Given two users u1 and u2's locations and online time settings, we argue that they can contact each other, and thus exchange data, if and only if their online time intervals overlap, which we represent by the condition OT_{u1} ∩ OT_{u2} ≠ ∅.
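The overlap condition OT_{u1} ∩ OT_{u2} ≠ ∅ can be implemented directly once online time windows are expressed on a common clock; the sketch below reduces windows to sets of hours, and the UTC conversions in the example are chosen purely for illustration.

```python
# A sketch of the online-time overlap check, with each window given as
# (start_hour, end_hour) on a common reference clock (e.g. UTC). Windows
# that wrap past midnight are expanded into two plain ranges first.
def hours(window):
    """Expand an (start, end) window into the set of hours it covers."""
    start, end = window
    if start <= end:
        return set(range(start, end))
    return set(range(start, 24)) | set(range(0, end))  # wraps past midnight

def can_exchange(ot1, ot2):
    """Two users can contact each other iff their online hours intersect."""
    return bool(hours(ot1) & hours(ot2))

# Illustrative UTC windows for an 8am-5pm local day in two time zones:
print(can_exchange((7, 16), (3, 11)))   # True: the windows overlap
print(can_exchange((7, 16), (20, 2)))   # False: no common online hour
```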
To increase the availability, we can use all the nodes in TPS[u] as mount points. In this case, M_v would be the primary mount point and the remaining ones would be secondary. In the rest of the discussion, we assume that content accesses are done from the primary mount point. Given the above, the purpose of the following algorithms is to compute a storage configuration for user u, which is given by:

• the set TPS[u], and
• ∀v ∈ N_G(u), the mount point M_v, where M_v ∈ TPS[u].

_B. Computing the storage configuration_

Computing the storage configuration for a user u involves two steps: i) constructing the online time graph, and ii) computing the storage configuration from this graph based on some criterion. For simplicity, we assume that geographical locations are considered at the granularity of a country, assuming an OSN user has friends scattered over several countries. First, we construct the online time graph (denoted by OG_u) for user u. This graph will be used to compute TPS(u).

_Definition 1 (Online time graph):_ The online time graph for a user u, denoted by OG_u, is defined as (N_G[u], E), where N_G[u] is the set of vertices and E is the set of edges, such that ∀v1, v2 ∈ N_G[u], there exists an edge (v1, v2) ∈ E iff (v1 ∈ T[u] ∨ v2 ∈ T[u]) ∧ (OT_{v1} ∩ OT_{v2} ≠ ∅).

Next, we specify the following two conditions on the graph OG_u, which are necessary and sufficient in order to compute a valid storage configuration:

1) OG_u must be connected. Only then can every user in the set N_G[u] access u's content.
2) The sub-graph induced by the set T[u], i.e., the graph OG_u[T[u]], must also be connected, in order to allow content synchronization across TPS members to pass through only trusted nodes. (As long as the first condition is met, nodes from the set T[u] can be removed one by one until the resulting induced graph becomes connected.)

We suppose that each user constructs OG_u offline, locally, from the set of friendship relations that he has in the social network and their online time (OT) specifications. The construction of OG_u is explained with the following example. Assume a user u1 with neighbors u2 to u7 in the OSN, and their locations set as follows: L_{u1} is Switzerland, L_{u2} and L_{u3} are India, and the rest are US-West. Assume OT is set to 8am to 5pm local time for all users. Let T[u1] = {u1, u2, u4, u6}. The resulting OG_{u1} is shown in Figure 2. Note that OG_u[T[u]] is expected to be connected for a reasonable number of trusted friends with overlapping online times (given 120 friends per user on Facebook and 100 on Orkut, on average [2]). Otherwise, another node v ∈ OG_u with v ∉ T[u] has to be employed in the TPS construction as well. However, profile data stored at v then has to be encrypted with a key shared by the T[u] members. This approach would be particularly useful in the bootstrap phase of the social network.

[Figure omitted] Fig. 2. The graph OG_{u1}
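As an illustration of Definition 1 and the two conditions above, the following sketch builds OG_u and checks connectivity. It reuses the `overlaps` helper from the previous sketch and is our simplified reading, not the paper's code:

```python
from itertools import combinations

def build_og(u, friends, trusted, ot):
    """Online time graph OG_u as an adjacency dict over N_G[u].

    friends: N_G(u); trusted: T[u] (includes u); ot: node -> (start, end).
    An edge (v1, v2) exists iff at least one endpoint is trusted and
    their online times overlap (Definition 1).
    """
    nodes = set(friends) | {u}
    og = {v: set() for v in nodes}
    for v1, v2 in combinations(nodes, 2):
        if (v1 in trusted or v2 in trusted) and overlaps(ot[v1], ot[v2]):
            og[v1].add(v2)
            og[v2].add(v1)
    return og

def connected(graph, nodes):
    """BFS connectivity check of the sub-graph induced by `nodes`."""
    nodes = set(nodes)
    if not nodes:
        return True
    seen, frontier = set(), [next(iter(nodes))]
    while frontier:
        v = frontier.pop()
        if v in seen:
            continue
        seen.add(v)
        frontier.extend(graph[v] & nodes)
    return seen == nodes

# Condition 1: connected(og, og.keys()); condition 2: connected(og, trusted).
```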
In the next subsections, we describe three algorithms with different cost minimization objectives for TPS generation and user-to-mount-point mappings. If two TPS members are not directly connected in OG_u, synchronization has to happen through another node v ∈ T[u]. In this case, a profile replica is stored at node v as well; however, v is still not considered a member of the TPS, as it is not a mount point for any neighbor.

_1) Minimize the access cost (MAC):_ The MAC approach prioritizes only the access cost for each friend in a user's social network. Hence, for every user v in OG_u, it assigns as the mount point the nearest (i.e., with minimum access cost) trusted node connected to v, i.e.,

∀v ∈ OG_u: M(v) = v′ such that C_v^{v′} ≤ C_v^{i}, ∀i ∈ T[u].

Then, TPS(u) = {v : v ∈ T(u) ∧ ∃v′ ∈ N_G(u) : M(v′) = v}, i.e., the set TPS(u) contains all members of T(u) that are assigned as mount points for friends of u. In OG_{u1} (Figure 2), assume that the access cost between Switzerland and India is C_{India}^{Switzerland} = 1 and between Switzerland and US-West is C_{US-West}^{Switzerland} = 2. The resulting storage configuration for the MAC approach is shown in Figure 3.

[Figure omitted] Fig. 3. MAC approach

_2) Minimize the number of replicas (MNR):_ The MNR approach determines the number of replicas to be maintained for a user so as to minimize the storage and replica management overhead. In addition, it applies an optimization step in order to minimize the access costs as well. Our approach exploits the fact that the set TPS can be modeled as the minimum connected dominating set (MCDS) of the graph OG_u, with the additional constraint that the members of the MCDS must belong to T[u]. Hereby, we modify a greedy algorithm from [8] to solve this variant of the MCDS problem.

Algorithm 1: The MNR algorithm
1: Mark all v ∈ OG_u as white
2: Mark u as black
3: Mark all neighbors of u in OG_u as grey
4: while ∃ a white node in OG_u do
5:   Select a grey v′ ∈ T(u) such that v′ has the highest number of white neighbors in OG_u
6:   Mark v′ as black and its neighbors as grey
7: end while
8: TPS[u] is the set of all black nodes in OG_u
9: for all grey nodes v in OG_u do
10:   M_v = v′ such that C_v^{v′} ≤ C_v^{i}, ∀i ∈ TPS[u]
11: end for

[Figure omitted] Fig. 4. MNR approach

_3) Minimize the storage cost:_ This approach quantifies the storage cost of a given storage configuration x = (M, TPS[u]) and, by exploring the entire solution space, picks the storage configuration with the minimum effective cost. The storage cost is measured in terms of the total cost incurred for accessing and updating the profile content by a user's friends, in addition to that of replica synchronization among all TPS members. We do not consider the access cost incurred by non-friend users, even though the system allows such users to access the profile content on a case-by-case basis, according to the access control settings.

Let n_a^v be the number of times a user v accesses user u's profile content, with each access involving s_a^v units of data on average; similarly, n_u^v and s_u^v represent the number of updates and the update sizes, respectively. Note that an update is performed on M_v and must then be pushed to the other members of the TPS as well. We assume that these parameters are approximated from statistics collected over a certain period. To this end, the user u selects the configuration x that minimizes its storage cost, i.e.,

argmin_x Σ_{v ∈ N_G(u)} [ n_a^v · s_a^v · C_{M_v}^{v} + n_u^v · s_u^v · C_{M_v}^{v} + Σ_{v′ ∈ TPS[u] ∖ {M_v}} n_u^v · s_u^v · C_{v′}^{M_v} ]

We refrain from further discussion of this approach for brevity reasons.
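Algorithm 1 translates almost line by line into code. The sketch below is a best-effort rendering: it assumes condition 2 holds (so while white nodes remain there is always a grey trusted node to select), and it reuses the adjacency-dict representation of OG_u from the earlier sketch.

```python
def mnr(og, u, trusted, cost):
    """Greedy MCDS variant from Algorithm 1.

    og: adjacency dict of OG_u; trusted: T(u); cost(v, t): access cost
    C_v^t between v and candidate mount point t.
    Returns (TPS[u], {grey node -> mount point}).
    """
    color = {v: "white" for v in og}
    color[u] = "black"
    for n in og[u]:
        color[n] = "grey"
    while any(c == "white" for c in color.values()):
        # Assumes OG_u[T[u]] is connected (condition 2), so a grey
        # trusted candidate always exists here.
        candidates = [v for v in trusted if color.get(v) == "grey"]
        v_best = max(candidates,
                     key=lambda v: sum(color[n] == "white" for n in og[v]))
        color[v_best] = "black"
        for n in og[v_best]:
            if color[n] == "white":
                color[n] = "grey"
    tps = {v for v, c in color.items() if c == "black"}
    # Lines 9-11: each remaining (grey) friend mounts at its cheapest
    # TPS member.
    mounts = {v: min(tps, key=lambda t: cost(v, t))
              for v, c in color.items() if c == "grey"}
    return tps, mounts
```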
_C. Handling updates in the social graph and TPS_

As social relations evolve, there will be updates in a user's social graph. Moreover, a breach of trust, or of the social contract to host and enforce access control on behalf of others, may result in updates to the set TPS. Once a node v is removed from a user u's TPS, it is no longer contacted for u's content. All users in N_G(u) for which v is the mount point are informed of this change. Such nodes are mapped to a new temporary mount point (say, the node u itself) until one of the three aforementioned algorithms is run to assign them new mount points. We assume the user periodically invokes the TPS computation process to accommodate the updates made to the OG graph due to changes in the set T(u) or in friendship relationships.

Since revocations can happen from the set TPS, users must choose TPS members carefully. Such a revocation can happen either because one of the three aforementioned algorithms excludes an existing member from the set TPS, or because a breach of the social contract is noticed. However, we believe that mutual social contracts (i.e., reciprocative hosting of data between users) restrict users from maliciously exploiting their hosted data after their removal from the TPS. Handling additions to the set TPS is simple: user u copies the replica of the profile to the new member, which from then on serves access requests.

When a new social relationship is made by user u, we assign as the default mount point for the new member the node u itself, or another TPS node that has an overlapping online time interval. Later, the new friend can be assigned a different mount point based on the result of the execution of the above algorithms. When there is a change in the location of some trusted nodes, the graph OG_u may become disconnected. Noticing this, node u should set itself as the mount point of the disconnected nodes. We suggest that u adjusts its online time frame OT_u in order to make the TPS graph connected in this case.

_D. Replica synchronization_

We propose that, after every update, the concerned mount point pushes the update to the other TPS members during their online time frames. Note that OG_u[T[u]] is connected, and assume that each TPS member is informed of the other members by the user u during TPS creation. Until recent updates reach a mount point, it continues to serve access requests with outdated content, which is acceptable, as porkut aims at eventual consistency among replicas with tolerable temporary inconsistencies.
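To make the push-based synchronization concrete, here is a small sketch of the idea: updates applied at a mount point are queued per TPS member and delivered whenever that member is online, yielding eventual consistency. The queueing and delivery details are our own simplification, not taken from the paper.

```python
from collections import defaultdict, deque

class Replica:
    """A TPS member's copy of a profile, synchronized lazily."""
    def __init__(self):
        self.profile = {}

    def apply(self, update):
        self.profile.update(update)

class MountPoint(Replica):
    def __init__(self, peers):
        super().__init__()
        self.peers = peers                 # the other TPS members
        self.pending = defaultdict(deque)  # peer -> queued updates

    def local_update(self, update):
        # Apply locally, then queue the update for every other replica.
        self.apply(update)
        for p in self.peers:
            self.pending[p].append(update)

    def push(self, online_peers):
        # Deliver queued updates to peers that are currently online;
        # offline peers keep serving stale content until they reconnect.
        for p in online_peers:
            while self.pending[p]:
                p.apply(self.pending[p].popleft())
```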
_E. Accessing a user's profile_

A user u's profile content is available to his friends in the social network directly through their mount points. New nodes that are not assigned any mount point can reach the TPS members via the DHT index and access the content after appropriate authorization. However, as already mentioned, the exact organization of the profile content, the request format, and the access control policies are beyond our scope.

IV. PRIVACY PRESERVING INDEXING

We advocate privacy-aware indexing of the social networking content of users in the system. Such an index facilitates content discovery among friends on the OSN and allows users with specialized, interesting content to reach new potential friends. Furthermore, this index allows for short-lived friendship relations for the exchange of a particular content item.

_A. Privacy objectives_

porkut's indexing service addresses various levels of privacy, which are described below:

• No privacy: Content with no privacy requirements is freely accessible by any social network participant.
• Owner privacy: The owner of a particular content item (i.e., the user in whose profile the content exists) should not be determinable with certainty from the index entry for the content.
• Content and owner privacy: In addition to owner privacy, the index entry should not allow someone to determine with certainty whether a particular content item exists in the system or not.

_B. Index creation_

A conventional DHT-based index has entries in the form of (key, value) pairs, where a content identifier (i.e., a search term on the index) maps to the key and the user profile identifier (UId) maps to the value field. In order to achieve content and owner privacy, the porkut indexing mechanism uses k-anonymization techniques [9], and (key, value) pairs are replaced by (key[], value[]) pairs, i.e., a list of keys is now mapped to a list of values. We call such an index entry a (c, o)-entry, where c is the size of the key list and o is the size of the value list. A user inspecting a (c, o)-entry cannot identify which of the content items exist in the system. By analogy, the conventional index entries are referred to as (1, 1)-entries. When a user creates an index entry for a content item, he mixes the item identifier with c − 1 randomly chosen yet meaningful item identifiers, and the owner identifier with o − 1 randomly chosen user identifiers, thus creating a (c, o)-entry from a (1, 1)-entry. Each user uses a dictionary of content items which, for example, can be constructed from all of his accessible content items in the social network. This dictionary is used as input to the content anonymization technique. Content entries that require no privacy use c = 1, o = 1. When only owner privacy is needed, c = 1, o > 1 are employed. Using c > 1, o > 1 results in index entries that support both content and owner privacy.

Once a user constructs a (c, o)-entry, he publishes it into the DHT anonymously by employing a Crowds-like source anonymization technique [10], where the crowd is the set of the o users in the index entry. At the end of the anonymous routing, a (c, o)-entry is inserted into the DHT as c separate (1, o)-entries, each having one of the c keys as a pivot. The detailed privacy-preserving index construction and its evaluation for a P2P system are described in [11]. A user retrieves from the DHT the list of UIds associated with his searched key. Then, for each of the UIds, he contacts one (again, k-anonymized) of its corresponding TPS members for the content item that he looks for. Our index allows strangers (i.e., non-friend users) to contact each other based on interesting content. Authentication and authorization follow this step.
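The anonymization step is easy to sketch. In the illustration below (the function names and use of `random.sample` are ours), a (1, 1)-entry is turned into a (c, o)-entry and then flattened into the c separate (1, o) entries that are inserted into the DHT:

```python
import random

def make_co_entry(item_id, owner_id, dictionary, all_users, c, o):
    """Blend the real item/owner with c-1 decoy items and o-1 decoy users."""
    keys = [item_id] + random.sample(
        [d for d in dictionary if d != item_id], c - 1)
    values = [owner_id] + random.sample(
        [x for x in all_users if x != owner_id], o - 1)
    random.shuffle(keys)
    random.shuffle(values)
    return keys, values

def dht_entries(co_entry):
    # A (c, o)-entry is stored as c separate (1, o) entries, one per key.
    keys, values = co_entry
    return [(k, values) for k in keys]
```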
V. RELATED WORK

There is significant related work on privacy issues in social networks. The possibility of involuntary personal information leakage in current social networks is highlighted in [12], e.g., by means of certain OSN features like annotating or tagging user photos, and its effects are demonstrated in [4]. The Lockr system [13] improves the privacy of centralized and decentralized content sharing systems. It allows users to control their own social information by decoupling the social networking information from other OSN functionality using social attestations, which act like capabilities. However, these social attestations are used only for authentication, and authorization is enforced using separate authorization policies. Persona [14] uses attribute-based encryption to realize privacy-preserving OSNs. The attributes a user has (e.g., friend, family member, colleague) determine what data he can access.

The NOYB approach [3] adopts a novel approach to preserving content privacy. The authors observe that if users address their privacy issues themselves by hosting encrypted content on OSNs, they could be expelled from the OSN by the OSN operator. Hence, they propose to replace the items of a user's profile content with "fake" items randomly picked from a dictionary. NOYB encrypts the index of the user's item in this dictionary and uses the ciphered index to pick the substitute. On the other hand, flyByNight [15] encrypts the users' content that is hosted on the OSN.

Recently, the issue of using decentralized infrastructures for organizing OSNs in a privacy-preserving manner was addressed by the research community [1], [7], [16]. PeerSoN [16] adopts encryption mechanisms for content storage and access control enforcement. It uses a two-tier architecture in which the first tier is a DHT, used as a common storage by all participants. The second tier consists of peers and contains the user data. The DHT stores the meta-data required to find users. Peers connect to each other directly, exchange the content, and then disconnect. [7] addresses privacy in OSNs by storing profile content in a P2P storage infrastructure. Each user in the OSN defines his own view ("matryoshka") of the system. In this view, nodes are organized in concentric rings, with the nodes at each ring trusted by the nodes in its immediate inner ring and the user node being the center of all rings. The user's profile data is stored encrypted at the innermost ring and is accessed by other users through multi-hop anonymous communication across this set of concentric rings. In the DHT, an entry for a user with the list of nodes in the outermost ring is added. Thus, [7] achieves both content privacy (using encryption) and anonymity of searcher and hosting nodes, yet limited content discovery and profile availability, as opposed to our approach.

In [1], a decentralized OSN, Vis-à-Vis, is proposed, where a user's profile content is stored at his own machine, called a virtual individual server (VIS). VISs self-organize into P2P overlays, one overlay per social group that has access to content stored on a VIS. Three different storage environments are considered: cloud alone, P2P storage on top of desktops, and a hybrid storage; their availability, cost, and privacy trade-offs were studied. In the desktop-only storage model, a socially-informed replication scheme was proposed, where a user replicates his content to his friend nodes and delegates access control to them. However, a user normally trusts only a fraction of his friends to the extent of delegating access control enforcement, as considered in our porkut approach along with online time information. Our earlier work [6] considered access control delegation in P2P systems in terms of trust transitivity. Tribler [17] is a P2P file sharing application which exploits friendship relationships, tastes, and preferences of users to increase the performance of file sharing. However, in Tribler, users host their own profiles, and therefore profile placement for high availability and low access or consistency cost is not considered. Finally, LifeSocial [18] is a P2P-hosted OSN where users employ public-private key pairs to encrypt profile data, which is stored in a distributed way and indexed in a DHT. Friends can read a user's profile based on a symmetric key that is encrypted with their public keys. However, data privacy and profile availability are not considered in [18].

VI. CONCLUSION AND FUTURE WORK

In this paper, we presented the initial design of porkut, a privacy-preserving decentralized OSN. We emphasized satisfying high availability and lookup efficiency for scattered OSN profiles.
The users' geographical locations and online time statistics were exploited in deciding the user's profile storage points. Three algorithms with different cost minimization objectives were presented for selecting the set of nodes that host OSN profiles while preserving high availability. As future work, we plan to deploy the porkut system and study its performance, availability, and privacy characteristics in detail.

ACKNOWLEDGEMENT

This work was funded by the Swiss Nano-Tera OpenSense project (Nano-Tera ref. 839 401).

REFERENCES

[1] A. Shakimov, A. Varshavsky, L. P. Cox, and R. Cáceres, "Privacy, cost, and availability tradeoffs in decentralized OSNs," in Proc. of the WOSN, 2009.
[2] A. Mislove, M. Marcon, K. P. Gummadi, P. Druschel, and B. Bhattacharjee, "Measurement and analysis of online social networks," in Proc. of the 7th Internet Measurement Conference, 2007.
[3] S. Guha, K. Tang, and P. Francis, "NOYB: privacy in online social networks," in Proc. of the WOSP, Seattle, WA, USA, 2008.
[4] B. Krishnamurthy and C. E. Wills, "On the leakage of personally identifiable information via online social networks," in Proc. of the WOSN, 2009.
[5] S. Rhea, B. Godfrey, B. Karp, J. Kubiatowicz, S. Ratnasamy, S. Shenker, I. Stoica, and H. Yu, "OpenDHT: a public DHT service and its uses," SIGCOMM Comput. Commun. Rev., vol. 35, no. 4, pp. 73–84, 2005.
[6] N. Rammohan, Z. Miklos, and K. Aberer, "Towards access control aware P2P data management systems," in Proc. of the 2nd International Workshop on Data Management in Peer-to-Peer Systems, 2009.
[7] L. A. Cutillo, R. Molva, and T. Strufe, "Privacy preserving social networking through decentralization," in Proc. of the WONS, 2009.
[8] L. Ruan, H. Du, X. Jia, W. Wu, Y. Li, and K.-I. Ko, "A greedy approximation for minimum connected dominating sets," Theoretical Computer Science, vol. 329, no. 1-3, pp. 325–330, 2004.
[9] L. Sweeney, "k-anonymity: a model for protecting privacy," Int. J. Uncertain. Fuzziness Knowl.-Based Syst., vol. 10, no. 5, pp. 557–570, 2002.
[10] M. K. Reiter and A. D. Rubin, "Crowds: anonymity for web transactions," ACM Trans. Inf. Syst. Secur., vol. 1, no. 1, 1998.
[11] R. Narendula, T. G. Papaioannou, and K. Aberer, "Panacea: Tunable privacy for access controlled data in peer-to-peer systems," 2010, EPFL Technical Report 148337. http://infoscience.epfl.ch/record/148337.
[12] I.-F. Lam, K.-T. Chen, and L.-J. Chen, "Involuntary information leakage in social network services," in Proc. of the 3rd International Workshop on Security, 2008.
[13] A. Tootoonchian, S. Saroiu, Y. Ganjali, and A. Wolman, "Lockr: better privacy for social networks," in Proc. of the CoNEXT, 2009.
[14] R. Baden, A. Bender, N. Spring, B. Bhattacharjee, and D. Starin, "Persona: an online social network with user-defined privacy," in Proc. of the ACM SIGCOMM, 2009.
[15] M. M. Lucas and N. Borisov, "flyByNight: mitigating the privacy risks of social networking," in Proc. of the WPES, 2008.
[16] S. Buchegger, D. Schiöberg, L.-H. Vu, and A. Datta, "PeerSoN: P2P social networking: early experiences and insights," in Proc. of the ACM EuroSys Workshop on Social Network Systems, 2009.
[17] J. A. Pouwelse, P. Garbacki, J. Wang, A. Bakker, J. Yang, A. Iosup, D. H. J. Epema, M. Reinders, M. R. van Steen, and H. J. Sips, "Tribler: a social-based peer-to-peer system," Concurr. Comput.: Pract. Exper., vol. 20, no. 2, pp. 127–138, 2008.
[18] K. Graffi, P. Mukherjee, B. Menges, D. Hartung, A. Kovacevic, and R.
Steinmetz, "Practical security in P2P-based social networks," in Proc. of the IEEE LCN, October 2009.
{ "disclaimer": "Notice: The following paper fields have been elided by the publisher: {'references', 'abstract'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1109/WETICE.2010.40?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1109/WETICE.2010.40, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "GREEN", "url": "http://infoscience.epfl.ch/record/148642/files/cops.pdf" }
2,010
[ "JournalArticle" ]
true
2010-06-28T00:00:00
[]
9,130
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Engineering", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/02562dc2b524dceb6a535c2baddc1e8d88463ad9
[ "Computer Science" ]
0.881622
Practical Fully-Decentralized Secure Aggregation for Personal Data Management Systems
02562dc2b524dceb6a535c2baddc1e8d88463ad9
International Conference on Statistical and Scientific Database Management
[ { "authorId": "2123019524", "name": "Julien Mirval" }, { "authorId": "1714085", "name": "Luc Bouganim" }, { "authorId": "40576371", "name": "I. S. Popa" } ]
{ "alternate_issns": null, "alternate_names": [ "Stat Sci Database Manag", "SSDBM", "Statistical and Scientific Database Management", "Int Conf Stat Sci Database Manag" ], "alternate_urls": null, "id": "070a8b94-9234-4a39-bff0-23e0c6b74464", "issn": null, "name": "International Conference on Statistical and Scientific Database Management", "type": "conference", "url": "http://www.ssdbm.org/" }
Personal Data Management Systems (PDMS) are flourishing, boosted by legal and technical means like smart disclosure, data portability and data altruism. A PDMS allows its owner to easily collect, store and manage data, directly generated by her devices, or resulting from her interactions with companies or administrations. PDMSs unlock innovative usages by crossing multiple data sources from one or many users, thus requiring aggregation primitives. Indeed, aggregation primitives are essential to compute statistics on user data, but are also a fundamental building block for machine learning algorithms. This paper proposes a protocol allowing for secure aggregation in a massively distributed PDMS environment, which adapts to selective participation and PDMSs characteristics, and is reliable with respect to failures, with no compromise on accuracy. Preliminary experiments show the effectiveness of our protocol which can adapt to several contexts with varying PDMSs characteristics in terms of communication speed or CPU resources and can adjust the aggregation strategy to the estimated selective participation.
# Practical Fully-Decentralized Secure Aggregation for Personal Data Management Systems

### Julien Mirval
##### julien.mirval@cozycloud.cc
Cozy Cloud; Inria-Saclay; UVSQ, Université Paris-Saclay, France

### Luc Bouganim
##### luc.bouganim@inria.fr
Inria-Saclay; UVSQ, Université Paris-Saclay, France

### Iulian Sandu-Popa
##### iulian.sandu-popa@uvsq.fr
UVSQ, Université Paris-Saclay; Inria-Saclay, France

#### ABSTRACT

Personal Data Management Systems (PDMS) are flourishing, boosted by legal and technical means like smart disclosure, data portability and data altruism. A PDMS allows its owner to easily collect, store and manage data, directly generated by her devices, or resulting from her interactions with companies or administrations. PDMSs unlock innovative usages by crossing multiple data sources from one or many users, thus requiring aggregation primitives. Indeed, aggregation primitives are essential to compute statistics on user data, but are also a fundamental building block for machine learning algorithms. This paper proposes a protocol allowing for secure aggregation in a massively distributed PDMS environment, which adapts to selective participation and PDMS characteristics, and is reliable with respect to failures, with no compromise on accuracy. Preliminary experiments show the effectiveness of our protocol, which can adapt to several contexts with varying PDMS characteristics in terms of communication speed or CPU resources, and can adjust the aggregation strategy to the estimated selective participation.

#### CCS CONCEPTS

• Computer systems organization → Architectures; • Information systems → Data management systems.

#### KEYWORDS

Privacy, secure aggregation, decentralized, machine learning.

**ACM Reference Format:** Julien Mirval, Luc Bouganim, and Iulian Sandu-Popa. 2021. Practical Fully-Decentralized Secure Aggregation for Personal Data Management Systems. In 33rd International Conference on Scientific and Statistical Database Management (SSDBM 2021), July 6–7, 2021, Tampa, FL, USA. ACM, New York, NY, USA, 6 pages. https://doi.org/10.1145/3468791.3468821

#### 1 INTRODUCTION

The new privacy-protection regulations (e.g., GDPR) and smart disclosure initiatives in the last decade have boosted the development and adoption of Personal Data Management Systems (PDMSs) [2].
A PDMS (e.g., Cozy Cloud, Nextcloud, Solid) is a data platform allowing users to easily collect, store and manage, in a single place, data directly generated by user devices (e.g., quantified-self data, smart home data, photos) and data resulting from user interactions (e.g., social interaction data, health, bank, telecom). Users can then leverage the power of their PDMS to benefit from their personal data for their own good and in the interest of the community [7]. Consequently, the PDMS paradigm leads to an important shift in the personal data ecosystem, since data becomes massively distributed at the user side. It also holds the promise of unlocking innovative usages. An individual can now cross her data from different data silos, e.g., health records and physical activity data. Moreover, individuals can cross data within large communities of users, e.g., to compute statistics for epidemiological studies or to train a machine learning (ML) model for recommender systems or automatic classification of user data. However, these exciting perspectives should not eclipse the security issues (user data must be kept private) and the right of any PDMS user to consent, or not, to participating in each computation.

Aggregation primitives (e.g., sum or average) are obviously essential to compute basic statistics on user data, but are also a fundamental building block for machine learning algorithms. Thus, to enable such new usages, we need scalable, privacy-preserving protocols implementing data aggregation primitives with selective (i.e., consenting) participants. Ideally, the proposed protocol should provide an accurate result that fully takes advantage of the high-quality data available in PDMSs. Efficiency (i.e., protocol latency and total load of the system) is of prime importance, and the protocol should adapt to several contexts: the PDMSs could be limited by their communication speed or by their computation power. Finally, given the scale of such decentralized aggregation, such protocols must also be robust to node failures. To summarize, our goal is to propose an aggregation protocol for basic aggregate functions that fulfills the following properties:

- fully decentralized and highly scalable with the number of participants;
- privacy-preserving, i.e., it protects the confidentiality of user data;
- accurate, i.e., it does not require a trade-off between accuracy and privacy;
- adaptable, i.e., it can adapt to a large spectrum of computation selectivity values (reflecting the subset of contributor nodes) and system configurations (network and cryptographic latency);
- reliable, i.e., it handles node failures or voluntary disconnections.

The rest of this paper is organized as follows. After discussing the related works w.r.t. the required properties in Section 2, we introduce the considered architecture and threat model in Section 3.
Sections 4 and 5 focus on the proposed protocol and preliminary results. Section 6 concludes with future issues.

#### 2 RELATED WORKS

Secure aggregation has been an intense research area for many years, and many approaches have been proposed: secure multi-party computation (SMC) and (fully) homomorphic threshold encryption (HTE), (local) differential privacy, and gossip-based protocols. However, the existing solutions are not adapted to the PDMS context and fail to cover all the required properties listed above.

HTE- and SMC-based solutions [4, 6, 9, 10] generally target applications in which central servers orchestrate and coordinate the participating nodes (e.g., federated learning). Such solutions are not scalable with a large number of participants in a fully decentralized setting such as the PDMS context (e.g., the server load is linear [9] or quadratic [6] in the number of participants).

Local differential privacy (LDP) has gained significant momentum in recent years, addressing problems such as machine learning [14] or basic statistics based on range queries [8]. However, LDP requires more noise than classical DP [1], either affecting accuracy or requiring a large number of participants to reduce the impact of noise, contradicting adaptability to selective participation.

Gossip-based protocols are scalable, fully decentralized, reliable, and have an adjustable accuracy. Unfortunately, classical gossip-based protocols do not protect user privacy. In [5], participants collectively learn a machine learning model in a privacy-preserving way by gossiping differentially private models, impacting accuracy. In [12], participants introduce noise in the first iterations and gradually remove it in subsequent iterations. This approach makes such solutions unreliable w.r.t. node failures. Finally, we are not aware of gossip protocols tolerating selective participation, and trivial adaptations produce inaccurate results.

#### 3 SYSTEM OVERVIEW AND THREAT MODEL

In this section, we first introduce the system architecture and the related concepts. We then present the threat model considered for the proposed secure protocol.

#### 3.1 System Architecture

**P2P network.** We envision a fully distributed peer-to-peer (P2P) system relying only on PDMSs, thus requiring an efficient communication overlay. Distributed hash tables (DHTs) are structured overlays which enable logarithmic scalability with the number of nodes. Our protocol is currently built on top of the Chord DHT [13]. Each node has an Id obtained by hashing a static property of the node, and stores a finger table (FT) to route Chord messages. The FT is a table with a number of entries equal to the size of the Id space in bits. If X is a node Id, the i-th entry of the FT contains the IP address of the node whose Id is closest to, but lower than, X + 2^i. Routing is done by searching the FT for the entry closest to the target address and transmitting the message recursively until it reaches its target, with a worst case of O(log(N)) message complexity, where N is the number of DHT nodes.

**Computation model.** An aggregate computation can be triggered by any node, called the querier. The querier broadcasts the computation, and each node consents (or not) to contribute; a consenting node is called a contributor. The ratio between the number of contributors and the total number of nodes defines the selectivity σ, 0 < σ ≤ 1. Finally, each node (contributor or not) is a potential data processor, and is then called an aggregator.
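As a concrete illustration of the finger-table routing just described, here is a minimal sketch of a single routing step on the Chord ring. It follows the classic closest-preceding-finger rule; the helper names and the modular-interval arithmetic are ours, not taken from the paper:

```python
M = 16  # Id space size in bits (2**M identifiers); an assumption

def between(x, a, b):
    """True iff x lies in the half-open ring interval (a, b] mod 2**M."""
    a, b, x = a % 2**M, b % 2**M, x % 2**M
    return (a < x <= b) if a < b else (x > a or x <= b)

def next_hop(node_id, finger_table, target_id):
    """Pick the finger closest to the target without overshooting it.

    finger_table[i] holds the Id of the node responsible for
    node_id + 2**i. Returns node_id itself when no finger precedes
    the target, i.e. the message has (almost) arrived.
    """
    for f in reversed(finger_table):  # largest jumps first
        if between(f, node_id, target_id):
            return f
    return node_id
```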
#### 3.2 Threat Model

We consider the honest-but-curious threat model, in which an attacker can access, without altering, the data manipulated by the attacked nodes, then called leaking nodes. The rationale behind the honest-but-curious model is that a PDMS can hold the entire digital life of its owner and therefore needs to be highly protected against privacy threats. Recent works [3] indicate that Trusted Execution Environments (TEEs) are prime candidates to offer this protection since they guarantee that executed code and manipulated data cannot be observed. In our context, this property allows sharing data between PDMS nodes without breaking data confidentiality. We thus consider that each PDMS is protected by a secure hardware solution, such as Intel SGX or ARM TrustZone, providing a TEE.

Such hardware protection makes attacks difficult to produce, but since no security measure is unbreakable, we consider that some PDMS owners have succeeded in tampering with their PDMS. Since attackers may collude and thus, de facto, control more than one PDMS, the worst-case attack is represented by the maximum number of colluding nodes controlled by a single "attacker", i.e., C leaking nodes. Additionally, the TEE of each PDMS is equipped with a trustworthy certificate. Thus, any node can verify the authenticity of other participants by checking their certificates. This prevents Sybil attacks (i.e., forging nodes to master a large portion of the system). Finally, attackers can also observe the communications between the nodes, thus requiring secure communication channels (e.g., TLS) to protect sensitive data exchanges. Our objective is to provide a protocol that fully protects, with high and tunable probability, the confidentiality of the contributors' data and of all the intermediary results; the final result itself is not confidential. Also, we consider that being a contributor to a given computation is not sensitive information.

#### 4 PROPOSED PROTOCOL

In the protocol overview, we analyze the properties listed in Section 1 and present the main ideas and techniques behind each property and its impact on the protocol. Due to space constraints, we cannot describe the proposed protocol in detail, and thus discuss some identified key elements of the protocol in the form of questions and answers, considering first the privacy aspects and then the efficiency perspective.

#### 4.1 Protocol Overview

**Scalability:** The DHT achieves de facto a fully decentralized and efficient architecture for inter-node communication. Achieving a scalable aggregation process requires multiple aggregators, arranged in a tree structure. Building and broadcasting this aggregation tree can be very costly since the tree itself can be large. We thus employ a divide-and-conquer approach to parallelize the tree construction and diffusion, and use the finger table structure to minimize communications. Finally, we reduce the knowledge (and thus the diffusion) of the tree to the minimal part strictly necessary to perform the aggregation: basically, each node of the tree only knows its parent(s) and its children.

**Privacy and accuracy:** We use a secret sharing scheme (without threshold) in which each contributor splits its data into s shares, making them unreadable unless someone collects all s shares. s is computed such that the probability for an attacker controlling C nodes to obtain all s shares is below a security threshold α (e.g., α = 10^−6). The i-th share has the value x_i = x + ε_i, such that Σ_{i=1}^{s} ε_i = 0, where x is the private value. This way, shares from different contributors can be aggregated separately, and if no share is missing (reliability is discussed below), the final result will be equal to the exact sum of all private data. Hence, our protocol also provides, by construction, accurate results. Note that the protocol works for complex values of x, such as an array or a matrix, which is useful for advanced aggregations, e.g., training a naive Bayes ML model.
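The share construction is easy to illustrate. The sketch below works over plain integers for readability; a hardened implementation would sample the ε_i over a finite group. Note that, by construction, the s shares of one value sum to s·x, so (our observation) the querier's final sum over all trees must be divided by s, consistent with the paper's remark about dividing share counts by s:

```python
import secrets

def make_shares(x, s, bound=1 << 32):
    """Split x into s shares x_i = x + eps_i with sum(eps_i) == 0.

    Plain integers for readability; no subset of s-1 shares reveals x.
    """
    eps = [secrets.randbelow(2 * bound) - bound for _ in range(s - 1)]
    eps.append(-sum(eps))          # force the noise terms to cancel
    return [x + e for e in eps]

shares = make_shares(42, s=3)
assert sum(shares) == 3 * 42       # s shares of one value sum to s*x
```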
**Adaptability:** The number of aggregators (i.e., the tree fan-out and its height) is tuned as a function of the number of contributors, the communication costs (i.e., the latency to send a message between two nodes), and the processing costs (i.e., the asymmetric cryptographic costs to secure a communication or to sign or verify a signature, which are, by far, the most important processing costs). This allows the protocol to always offer near-optimal performance (i.e., aggregation latency) and achieve adaptability w.r.t. the computation selectivity and the PDMS characteristics. Furthermore, our protocol can also be conveniently configured to offer the desired trade-off between the latency and the total cost of the aggregation, which are conflicting optimization objectives, as discussed in Section 5.

**Reliability:** Handling failures and disconnections is mainly implemented at two levels. First, the aggregators in the last level of the tree (just above the leaves) execute a synchronization protocol to make sure that contributors have sent all s shares before disconnecting, and they remove the shares of the contributors that have sent fewer than s shares. This ensures that the aggregation result stays accurate despite contributor failures. Second, a list of backup aggregators is created before the tree creation; its size depends on the observed node failure/disconnection ratio. In case an aggregator fails, it is automatically replaced with a backup node during the aggregation process (parents monitor their children). This allows the protocol to be robust to node failures and avoids losing aggregation subtree results.

#### 4.2 Privacy Issues

_What is the impact of the secret sharing on the aggregation tree?_ Considering s shares for each contributor and partial results leads to building s separate aggregation trees, with exactly the same structure, to avoid inferences by an attacker on any of the intermediate results. The final sum of the shares is done by the querier (tree root). A simple means to construct such trees is to consider that each node of the tree is a group of s nodes (see Figure 1 with s = 2). The protocol to build the tree is described in Section 4.3, considering that, at each step, s nodes are selected instead of one. To make this selection efficient, each node in the DHT maintains a cache with the addresses of the s − 1 successor nodes that will form the aggregator group.

_How is the number of shares computed?_ An attacker could cleverly locate her controlled nodes in the DHT to obtain the s shares of a group (typically by controlling a node and its s − 1 successors). We avoid this attack by reusing the concept of imposed location that we proposed in [11]: the node Id in the DHT is computed by hashing the public key from the PDMS certificate (see Section 3.2). The nodes are then uniformly distributed in the DHT space, and the PDMS owner (here, the attacker) cannot influence this placement: the uniform distribution also applies to leaking nodes. As a consequence, s can be easily computed, and is minimal when s = ⌈log(α)/log(C/N)⌉.
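Under this uniform-placement argument, the probability that an attacker holding C of N nodes controls all s members of a group is about (C/N)^s, and requiring this to stay below α gives the closed form above; a two-line check:

```python
import math

def num_shares(alpha, C, N):
    # Smallest s with (C/N)**s <= alpha: the chance an attacker holding
    # C of N uniformly placed nodes owns a whole s-node group.
    return math.ceil(math.log(alpha) / math.log(C / N))

print(num_shares(1e-6, 10_000, 1_000_000))  # -> 3 shares suffice
```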
_Do contributors/aggregators have to check the correctness of the received query?_ Basically, the answer is yes. Indeed, a trivial attack would be to impersonate s aggregators (at the bottom of the tree) and ask a set of contributors for their shares, following the same protocol. If no control is done, the contributor cannot distinguish a real query from a fake query. To avoid such an attack, every aggregator must check the signature of the incoming query using the public key of the sender, having previously checked the validity of the sender's certificate. Since all the nodes are honest-but-curious, they must follow the protocol and thus cannot create a specific query that would lead to the disclosure of certain data.

#### 4.3 Efficiency Issues

_What is the divide-and-conquer approach to build the aggregation tree?_ Assuming the querier knows the height h and the fan-out f of the aggregation tree, it starts creating a tree by assigning the whole DHT to its successor(s). Recursively, each aggregator in the tree (i.e., a parent node) is assigned a DHT region that it will subdivide and delegate to other aggregators in that region. When an aggregator oversees a DHT region, it looks for f nodes that are (almost) evenly spaced across the region. The node responsible for finding peers is a parent aggregator, while the selected nodes are child aggregators. Each child then becomes the parent of the region between itself and the next sibling. This process goes on until the height h is reached. At the last tree level, the tree leaves (i.e., the contributors) are found by using a localized DHT broadcast in the respective region. Contributors willing to participate reply with their private data, after establishing a secure channel with their aggregator parent. The aggregators at level d aggregate the data they receive before sending them to the previous level of the tree, down to the root (i.e., the querier), which performs the final aggregation to obtain the result.

Figure 1 illustrates this process with two nodes per group (blue and red), using letters to represent a group. The fan-out is 4 and the height is 3 (excluding the querier Q). Q selects his successor, A, who is responsible for the whole DHT and is the root of the tree. A uses its finger table to contact C, E and B. C recursively contacts D. The second level of the tree is built. Then B, E, C and D recursively contact the nodes for the next level (the figure only shows what happens with E, for readability). When the leaves are contacted, they send one share to each aggregator of the group (i.e., blue and red); the shares are summed up separately in each aggregation tree and finally summed up by Q.
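The recursive region splitting can be sketched as follows. This is a simplification of the construction just described (one node per tree position instead of a group of s, and a hypothetical `pick_node` standing in for the DHT lookup of the live node closest to a point):

```python
def build_tree(parent_id, region, level, h, f, pick_node):
    """Divide-and-conquer tree construction over an Id-space region.

    region: (lo, hi) slice of the Id space overseen by parent_id.
    Assumes the region is much larger than the fan-out f.
    """
    if level == h:
        # Leaves (contributors) are found by a localized broadcast,
        # which we elide here.
        return {"agg": parent_id, "children": []}
    lo, hi = region
    step = (hi - lo) // f
    children = []
    for i in range(f):
        # f (almost) evenly spaced sub-regions, each with its own child
        sub = (lo + i * step, lo + (i + 1) * step if i < f - 1 else hi)
        child = pick_node(sub[0])
        children.append(build_tree(child, sub, level + 1, h, f, pick_node))
    return {"agg": parent_id, "children": children}
```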
Note that it will add it without splitting the data into shares since its parent cannot guess this addition. To compute an average, we need to count the number of contributors and thus, the aggregator will add _s to the count of share contributions: each aggregator accounts the_ number of shares it received, and the total will be finally divided by _s to obtain the number of contributors. Consequently, aggregators_ do not appear as leaves in the aggregation tree. Note that this is not the case for backup nodes which must have the possibility to appear as leaves of the tree in case they wish to contribute. _How are the tree fan-out f and height_ _h computed? At one extreme,_ a binary tree (f = 2) distributes the query load on a maximum number of aggregator nodes but increases the communications costs, including the creation of many secure channels to transfer the intermediate results. At the other extreme, a tree limited to a unique aggregator (f = σ × N ) minimizes the communications and thus the number of secure channels (1 per contributor). It minimizes the total system load induced by the query but concentrates most of that load on this unique aggregator (that becomes overloaded by asymmetric crypto operations for the communication decryption). An "ideal" aggregation tree would be completely balanced, with the same fan-out all along the tree. Moreover, this fan-out (and thus the height of the tree) would be cleverly chosen to optimize the query latency without impacting too much the total load. Note that this depends on the PDMS characteristics, i.e., communication speed or computation power. Finally, the tree height is simply computed based on the number of contributors (σ × _N_ ) and the tree fan-out. σ can be estimated, for instance by contacting all nodes within a region of the DHT, and checking the ratio of nodes willing to participate. Since nodes are uniformly distributed in the DHT thanks to the hash of their public key, choosing a sample of the population should give a good estimation of σ . #### 5 PRELIMINARY RESULTS As in most evaluations of distributed systems [13], we implemented a simulator allowing varying any parameter: number of nodes N, of colluding nodes C, security threshold α, selectivity σ, and β, a ratio defined below. Our simulator captures two metrics: (i) for the network utilization, we consider the number of exchanged messages as the most important metric (compared to, e.g., the message size); (ii) for the PDMS resource utilization, the simulator counts the _asymmetric cryptographic operations which are, by far, the most_ expensive operations. The output of the simulator is the protocol latency and total work. They depend on β, the relative cost of one asymmetric cryptographic operation denoted crypt and the latency when sending a message between two PDMSs denoted com. Specifically, β = crypt/(crypt + com) with 0 ≤ _β ≤_ 1. However, note that the two extremes values of β are not realistic, i.e., β = 0 when crypt = 0 or com = +∞, β = 1 when com = 0 or crypt = +∞. Our protocol is adaptive to σ and β, thus called Adaptive in this section. To measure the impact of these two parameters on the aggregation costs, we compare the Adaptive protocol to two other simplified versions. First, Full tree is a classical aggregation tree that does not adapt to the query selectivity, i.e., it considers σ = 1: a tree is created recursively until all nodes are included, but only those willing to contribute will send back shares. 
#### 5 PRELIMINARY RESULTS

As in most evaluations of distributed systems [13], we implemented a simulator that allows varying any parameter: the number of nodes N, the number of colluding nodes C, the security threshold α, the selectivity σ, and β, a ratio defined below. Our simulator captures two metrics: (i) for network utilization, we consider the number of exchanged messages as the most important metric (compared to, e.g., the message size); (ii) for PDMS resource utilization, the simulator counts the asymmetric cryptographic operations, which are, by far, the most expensive operations. The output of the simulator is the protocol latency and total work. They depend on β, the relative cost of one asymmetric cryptographic operation, denoted crypt, and the latency when sending a message between two PDMSs, denoted com. Specifically, β = crypt/(crypt + com), with 0 ≤ β ≤ 1. However, note that the two extreme values of β are not realistic, i.e., β = 0 when crypt = 0 or com = +∞, and β = 1 when com = 0 or crypt = +∞.

Our protocol is adaptive to σ and β, and is thus called Adaptive in this section. To measure the impact of these two parameters on the aggregation costs, we compare the Adaptive protocol to two other, simplified versions. First, Full tree is a classical aggregation tree that does not adapt to the query selectivity, i.e., it considers σ = 1: a tree is created recursively until all nodes are included, but only those willing to contribute send back shares. Second, Single level considers that β = 0, i.e., the communication cost is so high that we must minimize it, thus concentrating all the computation on a single group, which collects the shares from all participants and sends the result to the querier.

We consider a network with N = 1,000,000 nodes, a quite large attack level (C = 10,000) and a high security threshold (α = 10^−6), and compare the above protocols in relative terms, i.e., dividing the latency/total work of Full tree/Single level by that of Adaptive. We first confirmed that the adaptive protocol is scalable: with increasing values of N, we obtained a logarithmic increase of the latency, thanks to the DHT and the divide-and-conquer approach. We also verified that the number of colluding nodes C has a small impact on the protocol latency for reasonable values of C w.r.t. N, in accordance with the considered threat model. Thus, in the rest of this section, we focus on the adaptability feature of our protocol and leave the evaluation of its reliability for future work. We vary the selectivity σ (keeping β = 0.5) and the PDMS characteristics β (keeping σ = 0.01). The results are presented in Figures 2 and 3 (log scale on the Y axis for all graphs, and on the X axis for selectivity only).

[Figures omitted] Figure 2: Latency and total work relative to the Adaptive strategy, varying σ. Figure 3: Latency and total work relative to the Adaptive strategy, varying β.

Let us first focus on the Single level protocol, studied to show the impact of an extreme strategy, i.e., concentrating all the load on a single (group of) node(s). As expected, Single level always provides a better total work than Adaptive and Full tree. However, its latency increases linearly with the number of participants, rapidly leading to prohibitive costs. Practically, Single level is competitive only if the selectivity is extremely high (i.e., tens to a few hundreds of contributors) or β = 0 (i.e., an unrealistic setting). Executions based on aggregation trees (Full tree or Adaptive) are much more scalable in handling many contributors, by distributing the workload. Note that for maximal selectivity, both approaches have exactly the same latency, as their structure is identical. However, Full tree becomes more costly for both latency and total work as soon as the selectivity is below 1. Indeed, the adaptive fan-out and tree depth of Adaptive can reduce the latency by up to a factor of 3, and especially the total work by up to two orders of magnitude, which indicates the importance of adapting the aggregation structure to the computation and system settings.

In the last part of our experimental evaluation, we study the Adaptive protocol in more detail. In particular, we evaluate the impact of the tree fan-out on the latency and the total work of the protocol with different values of β, while keeping σ = 0.01. The results are presented in Figure 4 (log scale on the X axis for both graphs).
As above, and to increase readability, we report relative values for both the latency and the total work, i.e., the ratio between the latency value (or the total work value) and its minimum observed value. As expected, increasing the fan-out decreases the total work, as the aggregation tree includes fewer nodes. This reduces the total amount of communication (and hence the number of secure channels), but concentrates the cryptographic load on a few nodes, generally leading to a higher latency. However, we observed an exception to this behavior for small fan-out values. In this case, the communication overhead required to construct the tree leads to sub-optimal latency. With small values of β (i.e., when the communication cost is larger than the cryptographic cost), this overhead is more prominent. On the contrary, once the fan-out increases, smaller values of β result in a decreased latency, as the cryptographic operations, which are dominant, are relatively cheaper.

[Figure omitted] Figure 4: Relative latency and relative total work w.r.t. the minimum value, varying the fan-out

Our results confirm that there is a sweet spot for the fan-out depending on the PDMS and network characteristics. The results also indicate that, depending on the application requirements, the fan-out can be adjusted to obtain a better trade-off between latency and total work. For example, training a machine learning model on users' data may be less restricted in terms of latency than a real-time traffic analysis. For instance, when β = 0.6, choosing a fan-out of 8 leads to a total work only 3% higher than the optimal value, while the latency is 32% larger than the optimal value.

#### 6 CONCLUSION AND FUTURE WORKS

In this short paper, we made the first steps towards the design of an aggregation protocol providing interesting properties: highly scalable, privacy-preserving, adaptable to selective participation and to several system settings, with a tree-like structure enabling robustness to failures; all this without compromise on the result quality. This protocol could be a building block to compute statistics over large communities of PDMS users or even to train ML algorithms. There is still a long way to go before providing all the required properties with efficient and secure protocols. Our next steps are to focus on the reliability aspect, selectivity estimation, and performance enhancements in the case of ML algorithms manipulating large datasets and requiring many iterations over users' data. This is an exciting research agenda with innovative usages in perspective.

#### REFERENCES

[1] Mário S. Alvim, Konstantinos Chatzikokolakis, Catuscia Palamidessi, and Anna Pazii. 2018. Local Differential Privacy on Metric Spaces: Optimizing the Trade-Off with Utility. In IEEE CSF. 262–267. https://doi.org/10.1109/CSF.2018.00026
[2] Nicolas Anciaux, Philippe Bonnet, Luc Bouganim, Benjamin Nguyen, Philippe Pucheral, Iulian Sandu Popa, and Guillaume Scerri. 2019. Personal data management systems: The security and functionality standpoint. Information Systems 80 (2019), 13–35.
[3] Nicolas Anciaux, Luc Bouganim, Philippe Pucheral, Iulian Sandu Popa, and Guillaume Scerri. 2019.
Personal Database Security and Trusted Execution Environments: A Tutorial at the Crossroads. Proc. VLDB Endow. 12, 12 (2019), 1994–1997.
[4] Yoshinori Aono, Takuya Hayashi, Lihua Wang, Shiho Moriai, et al. 2017. Privacy-preserving deep learning via additively homomorphic encryption. IEEE Transactions on Information Forensics and Security 13, 5 (2017), 1333–1345.
[5] Aurélien Bellet, Rachid Guerraoui, Mahsa Taziki, and Marc Tommasi. 2018. Personalized and private peer-to-peer machine learning. In International Conference on Artificial Intelligence and Statistics. PMLR, 473–481.
[6] Keith Bonawitz, Vladimir Ivanov, Ben Kreuter, Antonio Marcedone, H. Brendan McMahan, Sarvar Patel, Daniel Ramage, Aaron Segal, and Karn Seth. 2017. Practical secure aggregation for privacy-preserving machine learning. In ACM CCS. 1175–1191.
[7] EU Commission. 25 October 2020. Proposal for a Regulation on European data governance (Data Governance Act), COM/2020/767. [eur-lex].
[8] Graham Cormode, Tejas Kulkarni, and Divesh Srivastava. 2019. Answering Range Queries Under Local Differential Privacy. PVLDB 12, 10 (2019). https://doi.org/10.14778/3339490.3339496
[9] Henry Corrigan-Gibbs and Dan Boneh. 2017. Prio: Private, robust, and scalable computation of aggregate statistics. In NSDI. 259–282.
[10] David Froelicher, Juan Ramón Troncoso-Pastoriza, Joao Sa Sousa, and Jean-Pierre Hubaux. 2020. Drynx: Decentralized, secure, verifiable system for statistical queries and machine learning on distributed datasets. IEEE Transactions on Information Forensics and Security 15 (2020), 3035–3050.
[11] Julien Loudet, Iulian Sandu Popa, and Luc Bouganim. 2019. SEP2P: Secure and Efficient P2P Personal Data Processing. In EDBT.
[12] Yilin Mo and Richard M. Murray. 2016. Privacy preserving average consensus. IEEE Trans. Automat. Control 62, 2 (2016), 753–765.
[13] Ion Stoica, Robert Morris, David Karger, M. Frans Kaashoek, and Hari Balakrishnan. 2001. Chord: A scalable peer-to-peer lookup service for internet applications. ACM SIGCOMM 31, 4 (2001), 149–160.
[14] Kai Zheng, Wenlong Mou, and Liwei Wang. 2017. Collect at Once, Use Effectively: Making Non-interactive Locally Private Learning Possible. In ICML, Vol. 70.
{ "disclaimer": "Notice: The following paper fields have been elided by the publisher: {'references'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1145/3468791.3468821?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1145/3468791.3468821, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "other-oa", "status": "GREEN", "url": "https://hal.archives-ouvertes.fr/hal-03329878/file/3468791.3468821.pdf" }
2021
[ "JournalArticle", "Book", "Conference" ]
true
2021-07-06T00:00:00
[]
8,564
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Engineering", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/025659bfc224e9420db7c3c50027478355a32fb2
[ "Computer Science" ]
0.874027
An Improved Deep Learning Model for DDoS Detection Based on Hybrid Stacked Autoencoder and Checkpoint Network
025659bfc224e9420db7c3c50027478355a32fb2
Future Internet
[ { "authorId": "144871713", "name": "Amthal K. Mousa" }, { "authorId": "2112605919", "name": "M. N. Abdullah" } ]
{ "alternate_issns": null, "alternate_names": null, "alternate_urls": [ "http://nbn-resolving.de/urn/resolver.pl?urn=urn:nbn:ch:bel-156830", "https://www.mdpi.com/journal/futureinternet" ], "id": "c3e5f1c8-9ba7-47e5-acde-53063a69d483", "issn": "1999-5903", "name": "Future Internet", "type": "journal", "url": "http://www.e-helvetica.nb.admin.ch/directAccess?callnumber=bel-156830" }
The software defined network (SDN) collects network traffic data and proactively manages networks. SDN’s programmability makes it excellent for developing distributed applications, cybersecurity, and decentralized network control in multitenant data centers. This exceptional architecture is vulnerable to security concerns, such as distributed denial of service (DDoS) attacks. DDoS attacks can be very serious due to the fact that they prevent authentic users from accessing, temporarily or indefinitely, resources they would normally expect to have. Moreover, there are continuous efforts from attackers to produce new techniques to avoid detection. Furthermore, many existing DDoS detection methods now in use have a high potential for producing false positives. This motivates us to provide an overview of the research studies that have already been conducted in this area and point out the strengths and weaknesses of each of those approaches. Hence, adopting an optimal detection method is necessary to overcome these issues. Thus, it is crucial to accurately detect abnormal flows to maintain the availability and security of the network. In this work, we propose hybrid deep learning algorithms, which are the long short-term memory network (LSTM) and convolutional neural network (CNN) with a stack autoencoder for DDoS attack detection and checkpoint network, which is a fault tolerance strategy for long-running processes. The proposed approach is trained and tested with the aid of two DDoS attack datasets in the SDN environment: the DDoS attack SDN dataset and Botnet dataset. The results show that the proposed model achieves a very high accuracy, reaching 99.99% in training, 99.92% in validation, and 100% in precision, recall, and F1 score with the DDoS attack SDN dataset. Also, it achieves 100% in all metrics with the Botnet dataset. Experimental results reveal that our proposed model has a high feature extraction ability and high performance in detecting attacks. All performance metrics indicate that the proposed approach is appropriate for a real-world flow detection environment.
## future internet _Article_ # An Improved Deep Learning Model for DDoS Detection Based on Hybrid Stacked Autoencoder and Checkpoint Network **Amthal K. Mousa * and Mohammed Najm Abdullah** Computer Engineering Department, University of Technology-Iraq, Baghdad P.O. Box 10071, Iraq; mohammed.n.abdullah@uotechnology.edu.iq *** Correspondence: amthal.k.mousa@uotechnology.edu.iq** **Citation: Mousa, A.K.; Abdullah,** M.N. An Improved Deep Learning Model for DDoS Detection Based on Hybrid Stacked Autoencoder and Checkpoint Network. Future Internet **[2023, 15, 278. https://doi.org/](https://doi.org/10.3390/fi15080278)** [10.3390/fi15080278](https://doi.org/10.3390/fi15080278) Academic Editor: Izzat Alsmadi Received: 20 July 2023 Revised: 11 August 2023 Accepted: 17 August 2023 Published: 19 August 2023 **Copyright:** © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons [Attribution (CC BY) license (https://](https://creativecommons.org/licenses/by/4.0/) [creativecommons.org/licenses/by/](https://creativecommons.org/licenses/by/4.0/) 4.0/). **Abstract: The software defined network (SDN) collects network traffic data and proactively manages** networks. SDN’s programmability makes it excellent for developing distributed applications, cybersecurity, and decentralized network control in multitenant data centers. This exceptional architecture is vulnerable to security concerns, such as distributed denial of service (DDoS) attacks. DDoS attacks can be very serious due to the fact that they prevent authentic users from accessing, temporarily or indefinitely, resources they would normally expect to have. Moreover, there are continuous efforts from attackers to produce new techniques to avoid detection. Furthermore, many existing DDoS detection methods now in use have a high potential for producing false positives. This motivates us to provide an overview of the research studies that have already been conducted in this area and point out the strengths and weaknesses of each of those approaches. Hence, adopting an optimal detection method is necessary to overcome these issues. Thus, it is crucial to accurately detect abnormal flows to maintain the availability and security of the network. In this work, we propose hybrid deep learning algorithms, which are the long short-term memory network (LSTM) and convolutional neural network (CNN) with a stack autoencoder for DDoS attack detection and checkpoint network, which is a fault tolerance strategy for long-running processes. The proposed approach is trained and tested with the aid of two DDoS attack datasets in the SDN environment: the DDoS attack SDN dataset and Botnet dataset. The results show that the proposed model achieves a very high accuracy, reaching 99.99% in training, 99.92% in validation, and 100% in precision, recall, and F1 score with the DDoS attack SDN dataset. Also, it achieves 100% in all metrics with the Botnet dataset. Experimental results reveal that our proposed model has a high feature extraction ability and high performance in detecting attacks. All performance metrics indicate that the proposed approach is appropriate for a real-world flow detection environment. **Keywords: DDoS detection; distributed denial of service; software defined networking; SDN;** network security **1. 
Introduction**

Software defined networking (SDN) is a novel approach to the networking paradigm which separates control decisions from the forwarding hardware. The primary objective is to make it as simple as possible for software developers to rely on the resources provided by the network for storage and computation [1]. The SDN comprises switches that support OpenFlow, a controller, and a secure channel between the controller and the switches [2]. SDN focuses on four main features [3]:

- Separation of the data plane from the control plane.
- A centralized management system and network perspective.
- Open connections between the devices in the control plane and the data plane.
- The network can be programmed by an outside administration.

SDN has two main assets: the centralization of control and the ability to control the whole network through software. Those two assets are attractive features for attackers. Thus, several security challenges affect the SDN, including the distributed denial of service (DDoS) attack, man-in-the-middle attack, side channel attack, application manipulation, diversion of traffic, application exploitation, traffic sniffing, password guessing or brute force, and network manipulation [4]. Recently, the DDoS attack has become one of the most serious attacks because it can make the controller inaccessible. The processing and communication capacity of the controller are overloaded when DDoS attacks target the SDN controller, because of the unnecessary flows the controller must produce for the attack packets. The capacity of the switch flow table becomes full, causing the network performance to decline to a critical threshold [5].

Machine learning (ML) and powerful deep learning (DL) are two of the most common techniques to protect a network from DoS/DDoS attacks. This work proposes a novel DL-based DDoS attack detection model for SDN, evaluates it, and compares the findings to recent related papers. Part of the motivation is to give an overview of the research studies already conducted in this area and point out the strengths and weaknesses of each approach. Moreover, DDoS attacks are a serious problem for SDN networks: traditional defense methods may fail to find and stop them, because attackers now flood the SDN with different types of traffic (high and low rates) that slow down the SDN controller and make it inaccessible to legitimate users. Additionally, many recent DDoS detection methods have a high potential for producing false alarms, which are time-consuming to analyze and cause alert fatigue. Consequently, techniques that can lessen false positives as well as increase the accuracy of DDoS detection are required.

This study proposes a CNN-LSTM model structured as a stacked autoencoder with a checkpoint network to achieve high-accuracy DDoS detection. We demonstrate how this particular structure can enhance performance, accurately estimate attacks, and remarkably suppress false alarms. Additionally, we provide details of the datasets and hyperparameter values, and we produce a comparative analysis of the proposed approach against some recently published work. The main contributions of this work are as follows:

- Propose a deep-stacked autoencoder-based CNN-LSTM for detecting DDoS attacks on a network.
This model can extract features effectively in an unsupervised learning approach.
- Utilize a checkpoint network model: a fault tolerance strategy for long-running processes that permits the definition of checkpoints for the model weights at certain locations and improves inference accuracy in real time.

After this introductory section, Section 2 provides a background of DDoS attacks and the detection mechanism in the SDN. Section 3 presents an overview of the most recent related studies on DDoS detection. The proposed system structure appears in Section 4, and the experimental results of the proposed model classifiers for DDoS attack detection in SDN appear in Section 5, along with a comparison to some relevant research. The paper closes with the conclusion in Section 6.

**2. Concept of SDN and the Detection Mechanism of DoS/DDoS Attacks**

The development of technology to detect and mitigate distributed denial of service attacks in SDN environments [6] presents a significant obstacle to these attacks. A distributed denial of service attack sends many packets to the target network. Unmatched flows are considered new if the target and source IP addresses of the forwarded packets are fake and the switches cannot locate these packets in their flow table entries. The switch then forwards the mismatched packet to the SDN controller [7]. Finding the appropriate routes for these packets lies within the purview of the SDN controller. Many disguised DDoS flows hide in legitimate traffic. These flows continually consume the controller's resources, which eventually become unavailable for incoming packets. As a direct consequence of this attack, the SDN controller goes offline, which causes the entire network to enter a down state. Even if a backup controller is available, this security flaw still exists [8]. The characteristics of a DDoS attack in a software-defined networking system are subtly distinct from those of an attack on a traditional network. The following conclusions were reached after researching the DDoS attack techniques utilized against the SDN controller [9]:

- In traditional networks, DDoS attackers go after the servers at the endpoints over one or more network links. In SDN, the controller itself is hit with a DDoS attack.
- In SDN, the main goal of a DDoS attack is to make the controller's resources unavailable through a single point of failure.
- The IP addresses of packets in traditional networks are real, so DDoS attackers typically target the terminal server. To conduct a DDoS assault in SDN, the attacker attempts to counterfeit the destination IP addresses, involving the controller in constant processing of fresh flows until the controller's resources are made unavailable.
- In traditional networks, when a DDoS attack occurs, the server stops providing services to actual users. In SDN, if a DDoS attack occurs, the controller loses contact with the data plane and cannot provide services for moving data packets.

Traditional ways to find DDoS attacks use stochastic analysis and the randomness of network traffic to find unusual intrusions. When the detection software finds an attack event, traffic rate-limiting and filtering are used to lessen the damage. But if mitigation strategies are used carelessly, they will affect legitimate traffic.
Even though the victim is not receiving a lot of traffic, a poor response like this can make it difficult for regular users to get online. So, the detection technique must be capable of determining when a DDoS attack occurs and distinguishing between attack traffic and normal traffic. The current trend in DDoS detection is to use machine learning to classify and detect malicious traffic. These techniques can learn the attributes of the underlying data intelligently, without needing to be told what is normal and what is dangerous. Even though machine learning-based techniques show promise, most focus on offline traffic analysis and have trouble keeping up with how DDoS attacks change over time [10]. Lastly, the detection method should try to reduce false alarms, which can hurt sources that are not doing anything wrong. The defense system thus stops attack traffic while ensuring that legitimate traffic reaches the end users reliably [11].

**3. Related Works**

Recent DDoS detection research utilizing machine learning approaches has achieved promising results. These systems can intelligently understand the underlying data properties without explicitly specifying normal and harmful behaviors, bypassing the limits of conventional detection schemes. The DDoS detection problem is a binary classification problem in which the observed traffic is either normal or attack traffic. Moreover, detection techniques have increasingly used deep learning in recent years to find DDoS attacks, and several approaches have been presented. Of various recent notable works in this field, some utilized convolutional networks, some utilized recurrent neural networks (especially LSTM and bidirectional LSTM), and some used an autoencoder (an unsupervised learning approach) to discover non-linear characterizations from the input data and then apply a classification algorithm to differentiate malicious traffic from genuine traffic.

In 2017, Yuan, Li, and Li [12] developed a deep learning algorithm named "DeepDefense", a model that uses deep learning to detect DDoS attacks. To carry out their research, they used CNN, several distinct variants of RNN (such as LSTM and the gated recurrent unit neural network (GRUNN)), and the random forest (RF) method. The study included a comparative analysis between several deep learning methodologies and between deep learning and machine learning algorithms (RF was selected). DeepDefense put four deep learning models into action: LSTM, CNN-LSTM, GRU, and 3-LSTM, and the results of these models were compared with one another. With an accuracy of 98.410% and an area under the curve (AUC) score of 99.450%, the top deep learning model, 3-LSTM, was able to identify DDoS attacks. Shone et al. [13] found that stacking two autoencoders allowed for the learning of more complex feature-based correlations. For intrusion detection, they combined the stacked autoencoder with a random forest classifier. They asserted that the soft-max layer was less effective than traditional classifiers. In 2019, Pektaş and Acarman [14] presented a model-based deep learning method that utilized CNN and LSTM to learn the spatial-temporal characteristics of network flows. It used two datasets for training and testing: the ISCX2012 dataset [15] and CICIDS2017 [16]. The results show that, on ISCX2012, the model achieved 0.9669 in precision, 0.9649 in recall, 0.9657 in F1-score, and 0.9666 in accuracy.
The model also returned good results when using CICIDS2017, where it achieved 0.9797 in precision, 0.9765 in recall, 0.9780 in F1-score, and 0.9772 in accuracy.

Some studies developed hybrid deep learning models. Gadze et al., 2021 [17] proposed a model that combined two types of deep learning, LSTM and CNN, to detect an attack. Mininet generated the dataset dynamically, utilizing OpenFlow switches and Floodlight as an external controller. Based on the findings, RNN LSTM achieved an accuracy of 89.63%, outperforming linear-based models like SVM (86.85%) and Naive Bayes (82.61%). Their model had an accuracy of 99.4%, while the linear-model-based KNN technique had an even higher accuracy. In addition, the model functioned most effectively when the data were split in a 70/30 train/test ratio. Singh and Jang-Jaccard (2022) [18] created a hybrid autoencoder model dubbed MSCNN-LSTM-AE. This model found anomalies in network traffic by utilizing a combination of a multi-scale convolutional neural network (MSCNN) and LSTM. The MSCNN autoencoder was employed first to evaluate the spatial characteristics of the dataset; next, an LSTM-based autoencoder network identified the temporal features of the latent space features learned from the MSCNN-AE. The authors evaluated their work on the UNSW-NB15 [19], NSL-KDD [20], and CICDDoS2019 datasets. The accuracy score for their model (MSCNN-LSTM-AE) came in at 93.76%, while the recall score was 92.26%. Elubeyd and Yiltas-Kaplan [21] presented a hybrid deep learning approach for detecting and countering DoS/DDoS attacks in SDNs. They selected a hybrid model that included a 1D CNN, a dense neural network (DNN), and a gated recurrent unit (GRU), taking advantage of their individual strengths to synergistically address the intricacies of the problem. The model achieved good results when using CICDDoS 2019, where it achieved 0.9981 in accuracy, 0.9996 in precision, 0.999 in recall, and 0.9993 in F1-score.

Some recent studies used a stacked autoencoder to improve DDoS detection accuracy. Yaser et al., 2022 [22] proposed a novel approach for detecting DDoS attacks, which involved integrating deep learning with feedforward neural networks in the form of autoencoders. The training and evaluation of the model were analyzed using two datasets, initially through a static approach and subsequently through an iterative technique. They developed the autoencoding model through a layer-by-layer stacking of the input layer and the hidden layer of self-encoding models, wherein each self-encoding model employed a hidden layer. They assessed the performance of their model by employing a three-fold data partitioning strategy comprising training, testing, and validating subsets. The test result showed that the model yielded superior accuracy for the static dataset. Specifically, for the ISCXIDS-2012 dataset, the model attained a maximum accuracy of 99.35% during training, 99.3% during validation, 99.78% for precision, 99.99% for recall, and 99.87% for F1-score. The UNSW-2018 dataset exhibited high levels of accuracy, with values of 99.95% for training and 99.94% for validation, and 99.99% for recall, precision, and F1-score. Jiang et al., 2018 [23] presented a new method (DLGraph) for detecting malware based on deep learning along with graph embedding. Their deep learning architecture comprised two stacked denoising autoencoders (SDA).
One SDA learned the latent structure of function-call graphs in programs; the other SDA learned a latent representation of Windows API calls made by programs. They utilized the node2vec technique when embedding a function-call graph in a feature space. The experimental results on three distinct datasets demonstrated that the proposed DLGraph method obtained high levels of accuracy and exceeded the closely related DL4MD method, gaining 99.14% in accuracy for dataset 1, 99.36% for dataset 2, and 99.31% for dataset 3. Table 1 shows the comparison between these related works.

**Table 1. Comparison of related works in terms of methods, performance measures, and achievement.**

| Ref | Model | Achievement |
|---|---|---|
| Yuan, Li [12] | LSTM, GRU, CNN-LSTM, and 3-LSTM | 3-LSTM outperformed the other models, reaching 98.410% accuracy and a 99.450% AUC |
| Shone et al. [13] | Stacked autoencoder combined with a random forest classifier | Asserted that the soft-max layer was less effective than traditional classifiers |
| Pektaş and Acarman [14] | LSTM and CNN | ISCX2012: 0.9669 precision, 0.9649 recall, 0.9657 F1-score, 0.9666 accuracy; CICIDS2017: 0.9797 precision, 0.9765 recall, 0.9780 F1-score, 0.9772 accuracy |
| Gadze et al., 2021 [17] | LSTM and CNN | 99.4% accuracy, compared with RNN LSTM (89.63%), SVM (86.85%), and Naive Bayes (82.61%) |
| Singh and Jang-Jaccard, 2022 [18] | Hybrid autoencoder model dubbed MSCNN-LSTM-AE | 93.76% accuracy and 92.26% recall |
| Elubeyd and Yiltas-Kaplan [21] | Hybrid deep learning (1D CNN, dense neural network (DNN), and gated recurrent unit (GRU)) | CICDDoS 2019: 0.9981 accuracy, 0.9996 precision, 0.999 recall, 0.9993 F1-score |
| Yaser et al., 2022 [22] | LSTM-autoencoder | ISCXIDS-2012: 99.35% training accuracy, 99.3% validation accuracy, 99.78% precision, 99.99% recall, 99.87% F1-score; UNSW-2018: 99.95% training and 99.94% validation accuracy, 99.99% recall, precision, and F1-score |
| Jiang et al., 2018 [23] | DLGraph based on two stacked denoising autoencoders (SDA) | 99.14% accuracy for dataset 1, 99.36% for dataset 2, and 99.31% for dataset 3 |

**4. Proposed Model Structure**

Our approach uses autoencoders, a method that is now popular in deep learning. An autoencoder is an unsupervised neural network-based feature extraction method that learns the best feasible factors to reproduce its input faithfully at its output. One of its many appealing features is its potential to provide a non-linear and more efficient generalization than the principal component analysis (PCA).
It achieves this result by backpropagation with input-equivalent target values. To rephrase, it tries to figure out how to predict its own input as closely as possible. The typical architecture of an autoencoder consists of three layers: an input layer, an output layer, and a hidden layer, where the hidden layer's dimensions are lower than those of the input [24]. Figure 1 shows the traditional (single) autoencoder.

**Figure 1. Single autoencoder [24].**

In the proposed method, we utilize a deep autoencoder. Unlike traditional autoencoders, deep autoencoders consist of two typical deep-belief networks, one for encoding and one for decoding, with four or five shallow layers each. Deep learning can be applied to autoencoders through a stacked autoencoder, in which many hidden layers build depth and the hidden layers reflect fundamental concepts. As a result of this increased depth, computing costs are reduced, the amount of training data required decreases, and accuracy improves. The output of one hidden layer serves as the input to a later, more advanced step. First-order features are often learned from unprocessed data by the first layer of a stacked autoencoder. Second-order features, based on trends in the presence of first-order traits, are typically learned by the second layer. Subsequent layers build up an understanding of higher-order characteristics. Figure 2 shows the structure of the proposed deep autoencoder model.

**Figure 2. The general structure of the proposed deep autoencoder.**

The model consists of one input layer, one convolutional layer (Conv1D), two LSTM layers, one max pooling layer, and one dense layer at the output. The first layer is the input layer, which receives input Xi and passes it through numerous hidden layers that encode and decode it (the encoder and decoder blocks). The encoding process compresses the attributes to make them smaller than the input data, and the decoding process restores these attributes in reverse order to produce the final output at the deepest layer. When processed, the output feature vector is virtually identical to the input. The convolutional layer and LSTM are combined with an autoencoder to generate a robust DDoS attack classifier. LSTM is excellent at understanding the context of Internet packets, identifying long- and short-term dependencies, and identifying trends in DDoS attack sequences. LSTM is particularly proficient at categorizing processes such as time series and learning from experience.
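A minimal Keras sketch of this layer inventory is given below. The layer widths, activations, kernel size, and the single sigmoid output unit are illustrative assumptions rather than the exact hyperparameters used in the experiments, and the unsupervised pretraining of the stacked autoencoder is omitted for brevity:

```python
# Sketch of a CNN-LSTM classifier matching the stated layer inventory:
# Conv1D -> max pooling -> two LSTM layers -> dense output.
from tensorflow.keras import layers, models

n_features = 23  # e.g., the number of features in the DDoS attack SDN dataset

inputs = layers.Input(shape=(n_features, 1))
x = layers.Conv1D(32, kernel_size=3, padding="same", activation="relu")(inputs)
x = layers.MaxPooling1D(pool_size=2)(x)          # compress local patterns
x = layers.LSTM(64, return_sequences=True)(x)    # sequence context
x = layers.LSTM(32)(x)                           # compact latent representation
outputs = layers.Dense(1, activation="sigmoid")(x)  # 0 = normal, 1 = attack

model = models.Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
```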
After the encoding is complete, based on the output of the hidden layer, the output layer is decoded and reconstructed according to Equation (2) to produce an output of the same size as the input layer.

The purpose of the autoencoder section is to map an input x ∈ [0, 1]^d to a latent representation y ∈ [0, 1]^d′, where the mapping is performed by the function

yi = s(W xi + b) (1)

Through

zi = s(W′ yi + b′) (2)

this concealed representation is mapped back into a reconstruction of the same shape as the input x. Here, s represents a non-linear function, such as the sigmoid function. The first component is the encoder, while the second is the decoder. The model's parameters minimize the average reconstruction error. Figure 3 shows the training model with the proposed deep autoencoder scheme.

**Figure 3. The training model of the proposed deep autoencoder.**
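As a quick numeric illustration of Equations (1) and (2), the following sketch encodes and decodes a single vector with random (untrained) weights; the dimensions and the sigmoid non-linearity are arbitrary choices for the example:

```python
# Numeric illustration of Equations (1) and (2): encode x with y = s(Wx + b),
# decode with z = s(W'y + b'). Dimensions and random weights are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
d, d_latent = 8, 3                      # input and latent dimensionality

def s(v):                               # sigmoid non-linearity, as in the text
    return 1.0 / (1.0 + np.exp(-v))

x = rng.random(d)                       # x in [0, 1]^d
W, b = rng.normal(size=(d_latent, d)), rng.normal(size=d_latent)
W_p, b_p = rng.normal(size=(d, d_latent)), rng.normal(size=d)

y = s(W @ x + b)                        # Equation (1): latent representation
z = s(W_p @ y + b_p)                    # Equation (2): reconstruction
print("reconstruction error:", np.mean((x - z) ** 2))
```

Training the autoencoder amounts to adjusting W, b, W′, and b′ so that this reconstruction error, averaged over the dataset, is minimized.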
Another significant application of checkpoints involves the real-time publicaof trained models to enhance the accuracy of inference, commonly referred to as onlinetion of snaps of trained models to enhance the accuracy of inference, commonly referred training. For example, we can employ an interim model obtained by checkpointing forto as online training. For example, we can employ an interim model obtained by checkprediction serving. This method allows the model to continue training on more recentpointing for prediction serving. This method allows the model to continue training on datasets, ensuring the freshness of the inference model. We can also utilize checkpointsmore recent datasets, ensuring the freshness of the inference model. We can also utilize for transfer learning, a technique where an intermediate structure state is an initial pointcheckpoints for transfer learning, a technique where an intermediate structure state is an for training toward a distinct objective [initial point for training toward a distinct objective [10]. Figure 4 shows the training loop 10]. Figure 4 shows the training loop with a checkpoint network.with a checkpoint network. **Figure 4. The training looping with checkpoint network [10].** **Figure 4. The training looping with checkpoint network [10].** The evaluation of the model occurs at the conclusion of each epoch and the weights ----- _Future Internet 2023, 15, 278_ 8 of 16 The evaluation of the model occurs at the conclusion of each epoch, and the weights corresponding to the highest accuracy and lowest loss during that specific epoch are retained and saved. In the event that the weights in the model during a specific epoch fail to yield the optimal accuracy or loss, as determined by the user-defined criteria, the weights will not be preserved. However, the training process will persist, commencing from the aforementioned condition. **5. Results and Discussion** This section discusses the experimental results of our proposed method. We use several performance metrics for evaluation. We use two datasets to test the performance of the proposed model in detecting DDoS attacks. Then, we compare these results with some recent related works using the same datasets and with some other machine learning algorithms. _5.1. Datasets_ Most datasets are imperfect, and the row samples employed to cover the application manners are insufficient in these cases. The most common DDoS datasets involve CI-CIDS2019, CICIDS2017, KDDCUP99, ISCX2012, Kyoto 2006+, and NSL-KDD, which researchers have significantly utilized for intrusion detection. In this work, we chose two datasets to validate our proposed DDoS classifier, which are: 1- DDoS attack SDN dataset (Mendeley Data): This set is an SDN-specific dataset created by the Mininet emulator and utilized by machine learning and deep learning algorithms for traffic classification. The project begins by constructing 10 Mininet topologies with switches connected to a single Ryu controller. It simulates a network for benign TCP, UDP, and ICMP traffic and malicious traffic, which consists of a TCP Syn attack, UDP Flood attack, and ICMP assault. The data collection contains 23 features, of which some are from the switches, and others are calculated, such as the Packet count, Switch-id, duration sec, byte count, Destination IP, Source IP, Port number, etc. 2- Botnet dataset (UNSW_2018_IoT_Botnet): Even though several datasets have been proposed for detecting intrusions, most datasets are not updated and do not reflect actual data. 
The Canadian Institute for Cybersecurity addressed these issues by developing the Intrusion Detection Evaluation Dataset, ISCX-IDS 2012, and [25] generated by monitoring network activity for seven days. The labeled dataset consists of approximately 1,512,000,000 packets with 20 features. The primary characteristics of this dataset are discussed in [25] and include real, normal, and malicious streams comprising FTP, HTTP, IMAP, POP3, SMTP, and SSH protocols collected using real devices. All data are categorized and marked. The collected datasets contain a variety of intrusion kinds (Infiltrating, DoS, DDoS, and Brute Force SSH). _5.2. Performance Evaluation Metrics_ A performance evaluation is the process of measuring the of a classification model after assigning cases to their various predetermined labels. The performance evaluation considers measures including accuracy, recall, precision, F1-score, and confusion matrix [26]. These metrics are described as follows: 1. Accuracy: The ratio of all correct predictions over the total number of packets in the dataset [24]: Tp + Tn Accuracy(Ac) = (3) Tp + Tn + Fp + Fn where Tp is the “True Positive,” which describes the rate where the actual instance of a certain label is categorized as that label; Tn is the “True Negative,” which is the attack instance’s value that is classified as an attack; Fp is the “False Positive,” which is the number incorrectly classified for a certain class label, i.e., the instance categorized ----- _Future Internet 2023, 15, 278_ 9 of 16 value as an additional class label for a given dataset; Fn is the “False Negative,” which is the value of normal traffic which is classified as an attack. 2. Recall: Recall is the number of correctly predicted positive records over all the positive records: a metric that can detect DDoS attack traffic compared to normal traffic [24]. Tp Recall = (4) Tp + Fn 3. Precision: Precision is the proportion of actual positive instances that were correctly predicted, i.e., it is a metric that can detect DDoS attack traffic among normal traffic [24]. Tp Precision = (5) Tp + Fp 4. F1-score: The F1-score is the balance between the recall and the precision [24]. F1 score = 2 [Recall][ ×][ precision] (6) _−_ _×_ Recall + precision 5. Confusion Matrix: The data classification results appear in a table format. The accuracy of a classification model is evaluated by applying it to test data for which the results have already been determined. The study uses it to show the distribution of the expected outcomes, despite its poor suitability for anything beyond binary _Future Internet 2023, 15, x FOR PEER REVIEW classification [24]._ 10 of 17 _5.3. Results and Discussion of Base Classifier Models_ 5.3.1. Mendeley DDoS Attack SDN Dataset The dataset utilized in this study is a software defined networking (SDN) dataset The dataset utilized in this study is a software defined networking (SDN) dataset generated by implementing ten distinct topologies within the Mininet framework, generated by implementing ten distinct topologies within the Mininet framework, wherein wherein switches are interconnected to a singular Ryu controller. The network simulation switches are interconnected to a singular Ryu controller. The network simulation encom encompasses benign traffic, including TCP, UDP, and ICMP, as well as malicious traffic, passes benign traffic, including TCP, UDP, and ICMP, as well as malicious traffic, which which comprises TCP Syn, UDP Flood, and ICMP attacks. 
The dataset contains 23 fea comprises TCP Syn, UDP Flood, and ICMP attacks. The dataset contains 23 features, in tures, including extracted data from switches and calculated variables. At first, we extract cluding extracted data from switches and calculated variables. At first, we extract packet packet fields from the DDoS attack SDN dataset. The number of samples used is 100000. fields from the DDoS attack SDN dataset. The number of samples used is 100,000. The The extracted feature appears in Figure 5. extracted feature appears in Figure 5. **Figure 5. Extracted feature from the original packets.** **Figure 5. Extracted feature from the original packets.** The extracted features are Packet_count, Switch-id, byte_count, duration_sec (repre The extracted features are Packet_count, Switch-id, byte_count, duration_sec (repre senting the duration in seconds), duration_nsec (representing the duration in nanoseconds), senting the duration in seconds), duration_nsec (representing the duration in nanosec and the overall duration obtained by summing duration_sec and duration_nsec. Addi onds), and the overall duration obtained by summing duration_sec and duration_nsec. tionally, the characteristics contain Source IP and Destination IP. The numerical identifier Additionally, the characteristics contain Source IP and Destination IP. The numerical iden assigned to a specific communication endpoint within a computer network is commonly tifier assigned to a specific communication endpoint within a computer network is com referred to as a port number. The variable “tx_bytes” represents the quantity of bytes monly referred to as a port number. The variable “tx_bytes” represents the quantity of that have been moved via the switch port, whereas “rx_bytes” denotes the quantity of bytes that have been moved via the switch port, whereas “rx_bytes” denotes the quantity bytes that have been received on the switch port. The “dt” field represents the numerical of bytes that have been received on the switch port. The “dt” field represents the numeri ----- _Future Internet 2023, 15, 278_ 10 of 16 representation of both the date and time. This field is utilized to monitor the flow of a particular process at regular intervals of 30 s. The calculated features encompass the term “packet per flow”, which refers to the count of packets transmitted during a single flow. Similarly, “byte per flow” represents the count of bytes transmitted during a single flow. “Packet Rate” denotes the number of packets sent per second and may be computed by splitting the packet for each flow by the monitoring interval. Additionally, the number of “Packet_ins” messages and the total flow inputs in the switch are relevant factors in this context. The variables tx_kbps and rx_kbps represent the rates at which data are transferred and received, respectively. Port bandwidth refers to the cumulative value of both the transmitted kilobits per second (tx_kbps) and received kilobits per second (rx_kbps). The final column denotes the class label, which serves as an indicator to determine if the traffic class is normal or malicious. In the classification scheme utilized, benign traffic is assigned a label of 0, whereas malicious traffic is assigned a label of 1. A network simulation is conducted over a duration of 250 min, resulting in the collection of 104,345 rows of data. The simulation is executed repeatedly within a specified time frame, allowing for the accumulation of further data. 
We split the dataset into 80% for training and 20% for testing. Attack traffic is labeled with 1, whereas normal traffic is labeled with 0, and we train the model for 500 epochs to study the effect of the checkpoint strategy on improving classification accuracy. The classification results appear in Table 2 and Figures 6 and 7.

**Table 2. Results of tests using the Mendeley dataset of the proposed model.**

| Accuracy (%) | Validation Accuracy (%) | Precision Normal/Attack (%) | Recall Normal/Attack (%) | F1-Score Normal/Attack (%) |
|---|---|---|---|---|
| 99.99 | 99.923 | 100 / 100 | 100 / 100 | 100 / 100 |

where Normal is the normal traffic, and Attack is DDoS attack traffic.

**Figure 6. A screenshot of the output terminal: (a) training and validation accuracy results; (b) metrics results.**

**Figure 7. Classification results of the CNN-LSTM-autoencoder model. (a) Accuracy results of training and validation per epoch, (b) loss per epoch, and (c) confusion matrix.**

Table 2 and Figure 6 show the final results of training, and Figure 7a,b represent the training/test loss and training/test accuracy after 500 epochs, respectively. Figure 7c shows the confusion matrix of the proposed model. These results prove that the model achieves very high classification results and stability, with close results between training and validation: it gains 99.99% in training accuracy, 99.923% in validation accuracy, and 100% in precision, recall, and F1-score. From Figure 7a, the model training and validation losses are very low, about 3.98 × 10⁻⁴ for training and 3.3 × 10⁻³ for validation. From Figure 7b, the model reaches the best train/validation accuracy at epoch 19. From Figure 7c, the model achieves a significant degree of prediction accuracy, reaching about 100% in correctly detecting attack and normal traffic flows. Moreover, the proposed model raises fewer false alarms, since it shows a False Positive Rate (FPR) of 0.00086 and a False Negative Rate (FNR) of 0.00071. These results show the importance of using a checkpoint network and many epochs, which can strongly affect accuracy, as shown in Figure 6.

5.3.2. UNSW_2018_IoT_Botnet Dataset

The Bot-IoT dataset was developed in 2018 and published in 2019 by the University of New South Wales (UNSW). It is a contemporary and authentic dataset for training machine learning models to effectively identify and mitigate Botnet attacks within Internet of Things (IoT) networks. The dataset comprises 72 million instances, consisting of three dependent and forty-three independent features. The dataset encompasses various cyber-attacks, such as OS and Service Scan, DoS, DDoS, Data exfiltration, and Keylogging; additionally, the DoS and DDoS attacks are further categorized based on the specific protocol. At first, we extract packet fields from the dataset; the number of samples used is 100,000. The extracted features appear in Figure 8.

**Figure 8. Extracted features from the original UNSW 2018 dataset packets.**

The UNSW 2018 dataset involves pkSeqID (the row identifier); proto (the textual representation of the transaction protocols present in the network flow); saddr (the source IP address); sport (the source port number); daddr (the destination IP address); dport (the destination port number); seq (the argus sequence number); stddev (the standard deviation of the aggregated records); min (the minimum duration of the aggregated records); state number (the numerical representation of the feature state); mean (the average duration of the aggregated records); drate (the destination-to-source packets per second); srate (the source-to-destination packets per second); max (the maximum duration of the aggregated records); attack (the class label, where 0 represents normal traffic and 1 represents attack traffic); category (the category of traffic); subcategory (the subcategory of traffic); and dbytes (the destination-to-source byte count).

Hence, like the previous dataset, we split the data into 80% for training and 20% for testing. Attack traffic is labeled 1, normal traffic is labeled 0, and we train the model for 500 epochs. The classification results appear in Table 3 and Figures 9 and 10.

**Table 3. Results of tests using the UNSW 2018 dataset of the proposed model.**

| Accuracy (%) | Validation Accuracy (%) | Precision Normal/Attack (%) | Recall Normal/Attack (%) | F1-Score Normal/Attack (%) |
|---|---|---|---|---|
| 100 | 100 | 100 / 100 | 100 / 100 | 100 / 100 |

where Normal is the normal traffic, and Attack is DDoS attack traffic.

**Figure 9. Screenshot of the output terminal showing (a) training and validation accuracy results; (b) metrics results.**

**Figure 10. Classification results of the CNN-LSTM-autoencoder model. (a) Accuracy results of training and validation per epoch, (b) loss per epoch, and (c) confusion matrix.**

Table 3 and Figure 9 show the final results of training, and Figure 10a,b represent the training/test loss and training/test accuracy after 500 epochs, respectively. Figure 10c shows the confusion matrix of the proposed model. These results prove that the model achieves very high classification results and stability, with close results between training and validation, gaining 100% in all metrics. From Figure 10a, the model training and validation losses are very low, about 3.98 × 10⁻⁴ for training and 3.3 × 10⁻³ for validation. From Figure 10b, the model reaches the best train/validation accuracy at epoch 19, which shows the importance of using a checkpoint network and many epochs; these can significantly affect accuracy, as shown in Figure 11.

**Figure 11. Best accuracy results of training and validation at epoch 19.**

For further checking, we utilize the standard deviation metric to quantify the extent to which the attribute value of a feature deviates from its mean value. The standard deviation categorization aids in the identification of features that deviate from the average value by highlighting values that are both above and below the mean. Figure 12 shows the standard deviation results for the proposed model on the two datasets.

**Figure 12. The standard deviation metrics for the proposed model using (a) the Mendeley dataset and (b) the UNSW 2018 dataset.**

As shown in Figure 12 and Table 4, the proposed model has a lower variance. As a result, the proposed approach outperforms in terms of accuracy and reliability, and the learning curves are smoother, indicating that the proposed model is consistent: it is not only more accurate, but also more robust and consistent.

**Table 4. Accuracy and standard deviation of the proposed model on both datasets.**

| Dataset | Accuracy (%) | Standard Deviation (%) | Validation Accuracy (%) | Standard Deviation (%) |
|---|---|---|---|---|
| Mendeley | 99.99 | 0.0118198 | 99.923 | 0.000954 |
| UNSW 2018 | 100 | 0.9250918 | 100 | 0.023263 |

5.3.3. Comparison of Results with Some Machine Learning and Deep Learning Algorithms

This section compares the proposed CNN-LSTM-autoencoder model with the LSTM model and three other ML algorithms: the K-nearest neighbors algorithm (KNN), SVM, and XGBoost. We use the same datasets and number of epochs to determine the difference in performance between the proposed model and these models. The results appear in Table 5.

**Table 5. Results of tests using the Mendeley dataset for ML models.**

| Model | Accuracy (%) | Val. Accuracy (%) | Precision Normal/Attack (%) | Recall Normal/Attack (%) | F1-Score Normal/Attack (%) |
|---|---|---|---|---|---|
| LSTM | 92.6 | 79.433 | 80 / 80 | 6 / 62 | 11 / 70 |
| KNN | - | - | 97 / 95 | 97 / 95 | 97 / 95 |
| SVM | - | 95 | 98 / 91 | 94 / 96 | 96 / 94 |
| XGBoost | 99.55 | 99.54 | - | - | - |

where Normal is the normal traffic, and Attack is DDoS attack traffic.

From Table 5, the results show the lower performance of the LSTM model on the DDoS attack SDN dataset: it gains 92.6% in training accuracy and 79.433% in validation accuracy. It also achieves very low recall, 6% for normal and 62% for attack, and the precision is the same for both normal and attack (about 80%), so the model gains poor F1-scores: 11% for normal and 70% for attack. Table 5 shows good results for the KNN model on the DDoS attack SDN dataset: 97% for normal and 95% for attack in precision, and, for recall, 97% for normal and 95% for attack. The F1-score is likewise good, at 97% for normal and 95% for attack. However, these results are below those of the proposed CNN-LSTM-autoencoder model. The SVM also gains good results: 98% for normal and 91% for attack in precision, and, for recall, 94% for normal and 96% for attack. The F1-score is then good, at 96% for normal and 94% for attack. XGBoost achieves higher accuracy, reaching up to 99.54%.

Comparing the deep learning model (LSTM) with the proposed CNN-LSTM-autoencoder model, the proposed model is more accurate and stable and achieves higher accuracy in fewer epochs. Compared with the machine learning models, the proposed CNN-LSTM-autoencoder model is also more accurate than all of them on the DDoS attack SDN dataset.

5.3.4. Comparison of Results with Published (Base) Works

We compared the results of the proposed system with some recent related works using the DDoS attack SDN dataset and the UNSW2018 BoT-IoT dataset (Table 6). The obtained results showed that the model achieves very high classification results. For the UNSW2018 dataset, the proposed model achieves 100% in all metrics and a very low loss, about 3.98 × 10⁻⁴ for training and 3.3 × 10⁻³ for validation. Table 6 proves that our model outperforms Yaser et al. [22] in all metrics using the same dataset (UNSW 2018). Ivanova et al. [27] and Prasad et al. [28] had models that achieved an accuracy of 99.99%, whereas our model achieves 100% accuracy. Our model outperforms their models in all metrics: although they had high accuracy, they showed lower precision, recall, and F1-score for normal flows. So, our model can accurately detect and recognize normal and abnormal flows, since it shows 100% in all metrics.

**Table 6. Comparison results between the proposed CNN-LSTM-autoencoder model and some recent works.**

| Ref. | Dataset | Algorithm | Accuracy (%) | Val. Accuracy (%) | Precision Normal/Attack (%) | Recall Normal/Attack (%) | F1-Score Normal/Attack (%) |
|---|---|---|---|---|---|---|---|
| Proposed model | UNSW2018 | CNN-LSTM autoencoder | 100 | 100 | 100 / 100 | 100 / 100 | 100 / 100 |
| Yaser et al. [22] | UNSW2018 | LSTM-autoencoder | 99.95 | 99.94 | 95 / 99 | 94 / 99 | 95 / 99 |
| Ivanova et al. [27] | UNSW2018 | Optimized feed-forward neural network | 99.99 | 99.99 | 82.55 / 99.99 | 66.35 / 99.99 | 73.57 / 99.87 |
| Prasad et al. [28] | UNSW2018 | VMFCVD | 99.99 | 99.99 | 87.72 / 99.99 | 82.55 / 99.99 | 81.97 / 99.99 |
| Proposed model | DDoS attack SDN dataset (Mendeley) | CNN-LSTM autoencoder | 99.99 | 99.923 | 100 / 100 | 100 / 100 | 100 / 100 |
| Ahuja et al. [29] | DDoS attack SDN dataset (Mendeley) | CNN | 98.74 | - | 98.75 / 98.73 | 98.9 / 98.55 | 98.83 / 98.64 |
| Ahuja et al. [29] | DDoS attack SDN dataset (Mendeley) | LSTM | 95.60 | - | 96.20 / 94.90 | 95.64 / 95.56 | 95.92 / 95.23 |
5.3.4. Comparison of Results with Published (Baseline) Works

We compared the results of the proposed system with some recent related works using the DDoS attack SDN dataset and UNSW2018 BoT-IoT (Table 6). The obtained results showed that the model achieves very high classification results. For the UNSW2018 dataset, the proposed model achieves 100% in all metrics and a very low loss, about 3.98 × 10⁻⁴ for training and 3.3 × 10⁻³ for validation. Table 6 proves that our model outperforms Yaser et al. [22] in all metrics using the same dataset (UNSW 2018). Ivanova et al. [27] and Prasad et al. [28] had models that achieved an accuracy of 99.99%, whereas our model achieves 100% accuracy. Our model outperforms their models in all metrics. Although their models had high accuracy, they showed lower precision, recall, and F1-score for normal flows. Our model can accurately detect and recognize normal and abnormal flows, since it shows 100% in all metrics.

**Table 6. Comparison results between the proposed CNN-LSTM-autoencoder model and some recent works.**

| Ref. | Dataset | Algorithm | Accuracy (%) | Val. Accuracy (%) | Precision Normal (%) | Precision Attack (%) | Recall Normal (%) | Recall Attack (%) | F1-Score Normal (%) | F1-Score Attack (%) |
|---|---|---|---|---|---|---|---|---|---|---|
| Proposed model | UNSW2018 | CNN-LSTM-autoencoder | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 |
| Yaser et al. [22] | UNSW2018 | LSTM-autoencoder | 99.95 | 99.94 | 95 | 99 | 94 | 99 | 95 | 99 |
| Ivanova et al. [27] | UNSW2018 | optimized feed-forward neural network | 99.99 | 99.99 | 82.55 | 99.99 | 66.35 | 99.99 | 73.57 | 99.87 |
| Prasad et al. [28] | UNSW2018 | VMFCVD | 99.99 | 99.99 | 87.72 | 99.99 | 82.55 | 99.99 | 81.97 | 99.99 |
| Proposed model | DDoS attack SDN (Mendeley) | CNN-LSTM-autoencoder | 99.99 | 99.923 | 100 | 100 | 100 | 100 | 100 | 100 |
| Ahuja et al. [29] | DDoS attack SDN (Mendeley) | CNN | 98.74 | - | 98.75 | 98.73 | 98.9 | 98.55 | 98.83 | 98.64 |
| Ahuja et al. [29] | DDoS attack SDN (Mendeley) | LSTM | 95.60 | - | 96.20 | 94.90 | 95.64 | 95.56 | 95.92 | 95.23 |
| Ahuja et al. [29] | DDoS attack SDN (Mendeley) | CNN-LSTM | 99.48 | - | 99.43 | 99.55 | 99.66 | 99.26 | 99.54 | 99.40 |
| Ahuja et al. [29] | DDoS attack SDN (Mendeley) | SVC-SOM | 95.45 | - | 96.71 | 93.75 | 95.40 | 95.51 | 96.05 | 94.62 |
| Ahuja et al. [29] | DDoS attack SDN (Mendeley) | SAE-MLP | 99.75 | - | 99.96 | 99.69 | 99.77 | 99.94 | 99.87 | 99.82 |
| Yaser et al. [22] | Generated SDN dataset | LSTM-autoencoder | 97.62 | 97.68 | 98 | 88 | 92 | 97 | 95 | 93 |

where Normal is the normal traffic, and Attack is DDoS attack traffic.

Meanwhile, for the DDoS attack SDN dataset, our system gains an accuracy of up to 99.99% in training and 99.923% in validation and achieves 100% in precision, recall, and F1-score. Also, the model training and validation losses are very low, about 3.98 × 10⁻⁴ for training and 3.3 × 10⁻³ for validation. Our model outperforms all models proposed by Ahuja et al. [29] in all factors. Experimental results reveal that our proposed model has a high feature extraction ability and high performance in detecting attacks. All performance metrics indicate that the proposed approach is the most appropriate choice to apply to a real-world flow detection environment.

**6. Conclusions**

Network virtualization imposes new risks and exploitable attacks in addition to those currently on traditional networks. The DDoS attack group is one of the most aggressive attack types in recent years, devastating the entire network infrastructure. To defend against DDoS attacks, within the scope of this project we developed and deployed a DDoS detection system based on deep learning to detect multi-vector attacks within an SDN environment. The proposed approach has a success rate of 99.99% in training and 99.923% in validation and 100% for all metrics (precision, recall, and F1-score) for identifying individual DDoS attacks in all DDoS datasets, with an extremely low false-positive rate compared to other efforts, and it categorizes the traffic into normal and attack groups. One of our future goals is to test the proposed model as a real-time classifier in an SDN environment under real-time DDoS traffic and normal traffic to address its accuracy and time of detection, using an emulator such as Mininet or a real SDN environment. In addition, our goal is to lessen the strain placed on the controller by putting in place a network intrusion detection system that can identify not only DDoS attacks but also others.

**Author Contributions: Conceptualization, A.K.M. and M.N.A.; methodology, A.K.M.; formal analysis, A.K.M. and M.N.A.; investigation, A.K.M.; writing—original draft preparation, A.K.M.; supervision, M.N.A. All authors have read and agreed to the published version of the manuscript.**

**Funding: This research received no external funding.**

**Data Availability Statement: Data derived from public domain resources.**

**Conflicts of Interest: The authors declare no conflict of interest.**

**References**

1. Urrea, C.; Benítez, D. Software-Defined Networking Solutions, Architecture and Controllers for the Industrial Internet of Things: A Review. Sensors 2021, 21, 6585.
2. Nadeau, T.D.; Gray, K. SDN: Software Defined Networks; O'Reilly Media: Newton, MA, USA, 2013.
3. Feamster, N.; Rexford, J.; Zegura, E. The Road to SDN: An Intellectual History of Programmable Networks. ACM SIGCOMM Comput. Commun. Rev. 2014, 44, 87–98.
4. Pradhan, A.; Mathew, R. Solutions to Vulnerabilities and Threats in Software Defined Networking (SDN). Procedia Comput. Sci. 2020, 171, 2581–2589.
5. Silva, F.S.D.; Silva, E.; Neto, E.P.; Lemos, M.; Neto, A.J.V.; Esposito, F. A Taxonomy of DDoS Attack Mitigation Approaches Featured by SDN Technologies in IoT Scenarios. Sensors 2020, 20, 3078.
6. Abdulkarem, H.S.; Alethawy, A.D. DDoS Attack Detection and Mitigation at SDN Environment. Iraqi J. Inf. Commun. Technol. 2021, 4, 1–9.
7. Tan, L.; Pan, Y.; Wu, J.; Zhou, J.; Jiang, H.; Deng, Y. A New Framework for DDoS Attack Detection and Defense in SDN Environment. IEEE Access 2020, 8, 161908–161919.
8. Choudhary, A.R. OpenFlow Switch Controller as a Policy-Based System. Issues Inf. Syst. 2021, 22, 320–334.
9. Wang, T.; Chen, H.; Cheng, G.; Lu, Y. SDNManager: A Safeguard Architecture for SDN DoS Attacks Based on Bandwidth Prediction. Secur. Commun. Netw. 2018, 2018, 7545079.
10. Lakshmanan, V.; Robinson, S.; Munn, M. Machine Learning Design Patterns; O'Reilly Media, Inc.: Newton, MA, USA, 2020; Chapter 4; ISBN 9781098115784.
11. Doshi, K.; Yilmaz, Y.; Uludag, S. Timely Detection and Mitigation of Stealthy DDoS Attacks via IoT Networks. arXiv 2020, arXiv:2006.08064. Available online: http://arxiv.org/abs/2006.08064 (accessed on 1 May 2023).
12. Yuan, X.; Li, C.; Li, X. DeepDefense: Identifying DDoS Attack via Deep Learning. In Proceedings of the 2017 IEEE International Conference on Smart Computing (SMARTCOMP), Hong Kong, China, 29–31 May 2017; pp. 1–8.
13. Shone, N.; Ngoc, T.N.; Phai, V.D.; Shi, Q. A Deep Learning Approach to Network Intrusion Detection. IEEE Trans. Emerg. Top. Comput. Intell. 2018, 2, 41–50.
14. Pektaş, A.; Acarman, T. A Deep Learning Method to Detect Network Intrusion through Flow-Based Features. Int. J. Netw. Manag. 2018, 29, e2050.
15. IDS 2012 Dataset; Canadian Institute for Cybersecurity, UNB. Available online: https://www.unb.ca/cic/datasets/ids.html (accessed on 11 December 2022).
16. IDS 2017 Dataset; Canadian Institute for Cybersecurity, UNB. Available online: https://www.unb.ca/cic/datasets/ids-2017.html (accessed on 11 December 2022).
17. Gadze, J.D.; Bamfo-Asante, A.A.; Agyemang, J.O.; Nunoo-Mensah, H.; Opare, K.A.-B. An Investigation into the Application of Deep Learning in the Detection and Mitigation of DDOS Attack on SDN Controllers. Technologies 2021, 9, 14.
18. Singh, A.; Jang-Jaccard, J. Autoencoder-Based Unsupervised Intrusion Detection Using Multi-Scale Convolutional Recurrent Networks. arXiv 2022, arXiv:2204.03779.
19. The UNSW-NB15 Dataset; UNSW Research. Available online: https://research.unsw.edu.au/projects/unsw-nb15-dataset (accessed on 12 December 2022).
20. NSL-KDD Dataset; Canadian Institute for Cybersecurity, UNB. Available online: https://www.unb.ca/cic/datasets/nsl.html (accessed on 6 December 2019).
21. Elubeyd, H.; Yiltas-Kaplan, D. Hybrid Deep Learning Approach for Automatic DoS/DDoS Attacks Detection in Software-Defined Networks. Appl. Sci. 2023, 13, 3828.
22. Yaser, A.L.; Mousa, H.M.; Hussein, M. Improved DDoS Detection Utilizing Deep Neural Networks and Feedforward Neural Networks as Autoencoder. Future Internet 2022, 14, 240.
23. Jiang, H.; Turki, T.; Wang, J.T. DLGraph: Malware Detection Using Deep Learning and Graph Embedding. In Proceedings of the 2018 17th IEEE International Conference on Machine Learning and Applications (ICMLA), Orlando, FL, USA, 17–20 December 2018; IEEE: Piscataway, NJ, USA, 2018.
24. Elsayed, M.S.; Le-Khac, N.-A.; Dev, S.; Jurcut, A.D. Network Anomaly Detection Using LSTM Based Autoencoder. In Proceedings of Q2SWinet'20, Alicante, Spain, 16–20 November 2020.
25. Shiravi, A.; Shiravi, H.; Tavallaee, M.; Ghorbani, A.A. Toward Developing a Systematic Approach to Generate Benchmark Datasets for Intrusion Detection. Comput. Secur. 2012, 31, 357–374.
26. Tonkal, Ö.; Polat, H.; Başaran, E.; Cömert, Z.; Kocaoğlu, R. Machine Learning Approach Equipped with Neighbourhood Component Analysis for DDoS Attack Detection in Software-Defined Networking. Electronics 2021, 10, 1227.
27. Ivanova, V.; Tashev, T.; Draganov, I. Detection of IoT Based DDoS Attacks by Network Traffic Analysis Using Feedforward Neural Networks. Int. J. Circuits Syst. Signal Process. 2022, 16, 653–662.
28. Prasad, A.; Chandra, S. VMFCVD: An Optimized Framework to Combat Volumetric DDoS Attacks Using Machine Learning. Arab. J. Sci. Eng. 2022, 47, 9965–9983.
29. Ahuja, N.; Singal, G.; Mukhopadhyay, D.; Kumar, N. Automated DDOS Attack Detection in Software Defined Networking. J. Netw. Comput. Appl. 2021, 187, 103108.

**Disclaimer/Publisher's Note:** The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
{ "disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.3390/fi15080278?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.3390/fi15080278, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "CCBY", "status": "GOLD", "url": "https://www.mdpi.com/1999-5903/15/8/278/pdf?version=1692439584" }
2,023
[ "JournalArticle", "Review" ]
true
2023-08-19T00:00:00
[ { "paperId": "4cb78da23ad10b8c967b4f2b88d661eef251e538", "title": "Hybrid Deep Learning Approach for Automatic Dos/Ddos Attacks Detection in Software Defined Networks" }, { "paperId": "8f689a70f8803a9bbb24fa6287976035c0b7883c", "title": "Improved DDoS Detection Utilizing Deep Neural Networks and Feedforward Neural Networks as Autoencoder" }, { "paperId": "3906dc23018d4af33f5286b6828b1f5760e7765b", "title": "VMFCVD: An Optimized Framework to Combat Volumetric DDoS Attacks using Machine Learning" }, { "paperId": "e61531a73ee56bba9d713eb3c6945944ad82aaa1", "title": "Detection of IoT based DDoS Attacks by Network Traffic Analysis using Feedforward Neural Networks" }, { "paperId": "171008558ed39a467879635c9472b7a024b8d70a", "title": "Software-Defined Networking Solutions, Architecture and Controllers for the Industrial Internet of Things: A Review" }, { "paperId": "d337ce71613e6d6c74fe6b88822852b5782681b4", "title": "Automated DDOS attack detection in software defined networking" }, { "paperId": "aa1d3d17c48673ee968d729dd443811d93aac128", "title": "Machine Learning Approach Equipped with Neighbourhood Component Analysis for DDoS Attack Detection in Software-Defined Networking" }, { "paperId": "80a5a2b2e37f7dd8a7aa1b92d98da216bfc53f58", "title": "DDOS ATTACK DETECTION AND MITIGATION AT SDN ENVIROMENT" }, { "paperId": "4befc58cb4bbde0c86db40d44e8344bec77f2053", "title": "An Investigation into the Application of Deep Learning in the Detection and Mitigation of DDOS Attack on SDN Controllers" }, { "paperId": "53f74493cb86cb733d0210b6bfa3accfd23e9506", "title": "A Taxonomy of DDoS Attack Mitigation Approaches Featured by SDN Technologies in IoT Scenarios" }, { "paperId": "fe8f6bc91042ba9f18f6f284e6f3b984bf8265e7", "title": "A deep learning method to detect network intrusion through flow‐based features" }, { "paperId": "78f7c6818c97383b5ae4b61664ffc6cde7974466", "title": "A Deep Learning Approach to Network Intrusion Detection" }, { "paperId": "7968cabb6a0e45ddc56ae8af5bc3e7881baa966d", "title": "SDNManager: A Safeguard Architecture for SDN DoS Attacks Based on Bandwidth Prediction" }, { "paperId": "4e62c6c212e51e2033654bd1f2cc26f394d5dd3a", "title": "The road to SDN: an intellectual history of programmable networks" }, { "paperId": "846fcf30dc75f04886092891e754791e9704f69f", "title": "Toward developing a systematic approach to generate benchmark datasets for intrusion detection" }, { "paperId": null, "title": "OpenFlow switch controller as a policy-based system" }, { "paperId": "6fdc744218dba580e8104c3be0111eef9369c8b2", "title": "A New Framework for DDoS Attack Detection and Defense in SDN Environment" }, { "paperId": "a13a0f244eb633a8e870b69b56be3c41ec97078f", "title": "Solutions to Vulnerabilities and Threats in Software Defined Networking (SDN)" } ]
16,812
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/025bde8270278a539aa25dfc08ee9e73b569f0d0
[ "Computer Science" ]
0.908816
DocCert: Nostrification, Document Verification and Authenticity Blockchain Solution
025bde8270278a539aa25dfc08ee9e73b569f0d0
International Conference on Blockchain Computing and Applications
[ { "authorId": "1776184", "name": "Monther Aldwairi" }, { "authorId": "2258548115", "name": "Mohamad Badra" }, { "authorId": "2074769883", "name": "Rouba Borghol" } ]
{ "alternate_issns": null, "alternate_names": [ "BCCA", "Int Conf Blockchain Comput Appl" ], "alternate_urls": null, "id": "6cbed6ad-fb46-4c9a-8477-75227ff3a47e", "issn": null, "name": "International Conference on Blockchain Computing and Applications", "type": "conference", "url": "https://ieeexplore.ieee.org/xpl/conhome/1839124/all-proceedings" }
Many institutions and organizations require nostrification and verification of qualification as a prerequisite for hiring. The idea is to recognize the authenticity of a copy or digital document issued by an institution in a foreign country and detect forgeries. Certificates, financial records, health records, official papers and others are often required to be attested from multiple entities in distinct locations. However, in this digital era where most applications happen online, and document copies are uploaded, the traditional signature and seal methods are obsolete. In a matter of minutes and with a simple photo editor, a certificate or document copy may be plagiarized or forged. Blockchain technology offers a decentralized approach to record and verify transactions without the need for huge infrastructure investment. In this paper, we propose a blockchain based nostrification system, where awarding institutions generate a digital certificate, store in a public but permissioned blockchain, where students and other stakeholders may verify. We present a thorough discussion and formal evaluation of the proposed system.
# DocCert: Nostrification, Document Verification and Authenticity Blockchain Solution

Monther Aldwairi, _College of Computer and Information Technology, Jordan University of Science and Technology_, Irbid, Jordan, munzer@just.edu.jo

Mohamad Badra, _College of Technological Innovation, Zayed University_, Dubai, UAE, mohamd.badra@zu.ac.ae

Rouba Borghol, _Science and Liberal Arts Dept., Rochester Institute of Technology_, Dubai, UAE, rbbcad@rit.edu

**_Abstract—_** Many institutions and organizations require nostrification and verification of qualifications as a prerequisite for hiring. The idea is to recognize the authenticity of a copy or digital document issued by an institution in a foreign country and detect forgeries. Certificates, financial records, health records, official papers and others are often required to be attested by multiple entities in distinct locations. However, in this digital era, where most applications happen online and document copies are uploaded, the traditional signature and seal methods are obsolete. In a matter of minutes and with a simple photo editor, a certificate or document copy may be plagiarized or forged. Blockchain technology offers a decentralized approach to record and verify transactions without the need for huge infrastructure investment. In this paper, we propose a blockchain-based nostrification system, where awarding institutions generate a digital certificate and store it in a public but permissioned blockchain, where students and other stakeholders may verify it. We present a thorough discussion and formal evaluation of the proposed system.

**_Keywords— nostrification, antiforgery, plagiarism, document authentication, blockchain._**

I. INTRODUCTION

Oftentimes, job applicants are required to certify their documents, attest certificates, or equalize a degree or course. The attestation process is lengthy, time consuming, costly and cumbersome, especially when the candidate graduated a long time ago or graduated from a foreign country to which he no longer has access. The equivalency process requires those attested certificates and transcripts and awards an equivalent and recognized degree. The same process applies to international trade agreements, customs forms, birth certificates, etc. The process may involve universities, schools, notaries, embassies, departments, ministries, education boards, etc. It takes several months and may be costly if you live abroad. However, after all of that trouble, the final attested or sealed documents may still be easily forged digitally. Therefore, not much verification is achieved after this lengthy and costly process [1]. Below is a sample of foreign degree equivalency process requirements.

1. Certified copy of degree (and/or all previous degrees) in English, or a translated version from an official service. Copies must be attested by: a. ministry of education in the issuing country, b. ministry of exterior in the issuing country, c. embassy of the country seeking equivalency, d. ministry of exterior in the country of equivalency, and e. ministry of education in the country of equivalency.
2. Copy of transcript or diploma indicating dates of admission and completion.
3. Copy of passport with visa, entry and exit stamps.
4. Equivalency fees.
5. All original documents.

In most of the above documents, notaries' services may be required. Public and private notaries are authorized by the judicial system to attest documents and certify their originality.
Online notaries have been using cameras to verify identity and attest documents. However, the admissibility in court of digital signatures continues to be challenged. Courts often accept digitally signed documents only when a copy of the originally signed document is presented [2]. We believe blockchain is a game changer and would present a perfect solution for the online nostrification issue [3].

Blockchain has been made popular with the wide spread of Bitcoin. Bitcoin is one of the earliest and most popular cryptocurrencies. It is a digital currency that can be exchanged between people without the need for a central bank or authority. It benefits from peer-to-peer networks and blockchain technologies to keep an anonymous record of all Bitcoins [4]. Blockchain is a shared immutable ledger for securely recording transaction history. A blockchain could be public or private. As indicated by the name, the blocks are chained, with each block storing one or more transactions. Transactions are kept indefinitely and the blockchain may be queried to verify any transaction, which makes it ideal for nostrification.

There have been few attempts to use blockchain for nostrification. EduCTX was one of the first attempts to use blockchain as a higher education credit platform. It is supposed to serve as a centralized repository of all students' records and completed courses [5]. In this paper we propose to use blockchain in a novel manner to implement a secure, shared, authenticated and public repository of student records. Students may access their records, and so can universities and any other participant who wishes to verify a record. Security and privacy are of the utmost importance, and therefore access to records is authenticated.

The rest of the paper is organized as follows. Section II explains blockchain in more detail. Section III surveys the literature, covers the related work and points out advantages and disadvantages of existing approaches. Section IV discusses the proposed approach and Section V presents the formal evaluation.

II. BLOCKCHAIN

Blockchain maintains a shared record including full details of every single transaction over a network. Blockchain is based on peer-to-peer networks, making it distributed and not controlled by any third party [4]. A transaction is any exchange of assets between participants and is represented by a block. Each block tracks and stores data, and those blocks are chained together chronologically. Blocks are not editable, which means once a transaction is committed to the blockchain it can no longer be modified. A transaction is reversed by creating a new block, which maintains a timeline of events and changes to the data. Each block contains the transaction data, timestamp, unique hash and the hash of the previous block. The latter maintains the chain and the timestamp ensures timeliness.

Unlike databases that are files stored on a single system, blockchain is decentralized, and identical copies of the shared ledger are distributed across all participating nodes. This distributed nature of the ledger reduces the chances of data tampering. If a party chooses to add a block to his copy of the ledger, it will be inconsistent with all other blockchain participants [6].
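The hash chaining described above can be illustrated with a minimal Python sketch; the block fields follow the description (transaction data, timestamp, previous hash, own hash), while the field names and hashing details are illustrative rather than any particular blockchain's format.

```python
import hashlib
import json
import time

def block_hash(block: dict) -> str:
    # Hash the block's contents, excluding its own hash field.
    payload = {k: v for k, v in block.items() if k != "hash"}
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def make_block(transactions, previous_hash: str) -> dict:
    # Each block stores transaction data, a timestamp, the previous block's
    # hash, and its own hash, the fields listed above.
    block = {"transactions": transactions, "timestamp": time.time(),
             "previous_hash": previous_hash}
    block["hash"] = block_hash(block)
    return block

genesis = make_block(["genesis"], previous_hash="0" * 64)
b1 = make_block(["university X attests diploma 42"], previous_hash=genesis["hash"])

# Tampering with an earlier block breaks the chain: its recomputed hash no
# longer matches the pointer stored in the next block.
genesis["transactions"] = ["forged record"]
print(block_hash(genesis) == b1["previous_hash"])  # False
```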
Before any block is added to the chain, a consensus of the majority of the endorsing nodes must be reached. Consensus may be reached by solving a cryptographic puzzle called "proof-of-work", which is the case in many cryptocurrencies. Proof-of-stake, on the other hand, requires validators to hold a cryptocurrency in escrow with a trusted service, while proof-of-elapsed-time randomizes block waiting times within a trusted environment. Solo/NoOps requires no consensus: the validator applies transactions directly, which may lead to divergent chains or ledgers. Finally, Byzantine Fault Tolerance (BFT) achieves consensus in a peer-to-peer network even while some nodes are malicious or faulty [7].

In blockchain for business, we have a shared ledger, where every participant has his own copy. Those ledgers are permissioned, and proper credentials are required to access the ledger. The ledger is immutable, in that no participant may tamper with a transaction after it was agreed upon. Transactions cannot be altered, deleted or inserted back in time. Smart contracts are a set of business rules in chaincode format; when executed, a block/transaction is created. The shared ledger has the final say on an asset's ownership and provenance. Contrary to cryptocurrencies that emphasize anonymity, blockchain for business is a private permissioned network that values identity and permissions over anonymity [8].

Turkanovic et al. explained that higher education institutions (HEI) keep their students' completed course records in databases that are structured and only available to the institution's staff [4]. This leaves students with limited access, where they can only view or print their documents. Moreover, these student documents are stored in different standards, which contributes to the problem of transferring student documents to another HEI [9]. Correspondingly, if a student wants to apply for a job in a foreign country, he/she has to translate and nostrificate their academic certificate, which is complex and time consuming. In addition, if a student loses his/her academic certificate, he/she has to visit their HEI and ask for a new copy. Andrejs Rauhvargers discussed qualifications frameworks for recognizing qualifications in European higher education. The paper details the recognition of foreign higher education qualifications [9].

Blockchain is ideal for keeping records of students' diploma certificates, transcripts, courses, grades, achievements, skills and research experience. All of these may be securely registered in a shared ledger that can be accessed by many institutions or stakeholders. This will help reduce fraud, forgery and false claims [10]. All of the above data can be logged in the form of timely transactions into the shared ledger. The data in the blockchain is permissioned, associated with a student ID and organization ID, and stored securely in the blockchain. Using blockchain means performance may be sacrificed for secure recordkeeping of transactions. Nonetheless, the blockchain would be much more efficient as opposed to the manual attestation process described earlier [11].

Using blockchain in education might be a new concept, but it surely is very beneficial. It will make it much easier for students to have all of their completed courses certified, verified and in one place. Not only will this facilitate attestation and verification of qualifications, but it will also help in cases of credit transfer between institutions. It will be very easy for any workplace anywhere in the world to subscribe to the blockchain and verify graduate credentials, of course with the applicant's consent [12].

III. RELATED WORK

There are a few research papers concerned with blockchain for nostrification.
In this section we summarize each paper and present a critical analysis.

Wibke et al. used blockchain technology to store and handle educational data [13]. They offer the possibility to store different types of immutable educational information on blockchain technology. A total of 58.1% of the education technologies were based on Ethereum, 3.2% on Bitcoin, 9.7% on EOS, and 1.6% on NEM; 1.6% used a private blockchain, 4.8% could use more than one blockchain and 6.5% used other blockchain technologies. Their results provide a deeper understanding of blockchain technology in education and serve as a signal to educational stakeholders by underlining the importance of blockchain technology in education.

EduCTX is a blockchain-based higher education credit platform from the University of Maribor [5]. The EduCTX platform is anticipated to use ECTX tokens as academic credits. It rests on peer-to-peer networks where the peers of the network are HEIs and the users of the platform are students and various other organizations. These ECTX tokens represent a student's credit amount for completed courses. Every student will have an EduCTX wallet for the collection of ECTX tokens that will be transferred by his/her HEI. The transferred information is stored in the blockchain alongside the sender's identity with the HEI's official name, the recipient (the student, who is represented anonymously), the token (course credit value) and the course identification. Therefore, students can access and present their completed courses by directly providing their blockchain address. EduCTX is still a prototype based on the Ark blockchain platform, and its real-world performance cannot yet be evaluated. EduCTX enables organizations and students to check the academic records of a student (potential employee) in a transparent way. Moreover, since the system is based on a blockchain platform, it maintains the possibility of fraud detection and prevention. On the other hand, in the case of a student losing his/her private key, they have to visit their home HEI and request a new blockchain address, which is time consuming and almost similar to the current approach for certification. Moreover, it is expected that users and organizations have to protect and back up their private keys, signatures and stamps to be secure, because this platform is yet to have an additional level of protection against impersonation.

Gresch et al. from the University of Zurich proposed a blockchain-based architecture for transparent certificate handling [14]. The work used a questionnaire to shed light on the wide spread of people with fake diplomas, and how ineffective the current accreditation system is. The system identifies three stakeholders: the certificate issuer, companies and institutes wanting to verify diplomas, and the graduates or applicants who submitted the diploma. The system has two stages. First, the issuing organization has to create the digital diploma with a one-way hash function, and the hash will be stored in a smart contract. Second, the verifier company verifies the authenticity of the document without contacting the university. A prototype was built using an Ethereum blockchain and deployed on the University of Zurich BlockChain (UZHBC). They concluded that granting an organization the ability to issue certificates is one of the most critical aspects of the blockchain. In addition, they only stored the diploma hash on the blockchain for privacy concerns, as opposed to storing an encrypted diploma and risking losing the data forever if the key is lost.
Azael Capetillo proposed a blockchain education long-standing model for academic institutions [15]. The paper described the technology of storing student records, which can be shared openly with third parties, offering a safe and lasting record. The technology is robust against data damage or loss, and those third parties can verify student records directly by accessing the university blockchain. Two applications of blockchain in education are mentioned in the paper. The first is smart contracts, used to form an autonomous learning experience by drawing an analogy from the financial applications of blockchain. The second is the use of blockchain to offset the cost of learning using peer-to-peer networks, offering a financial prize for students providing services to the university.

Mike Sharples and John Domingue proposed blockchain and Kudos, a distributed system for educational record, reputation and reward [16]. The University of Nicosia was the first higher education institution to issue academic certificates whose authenticity can be verified through the Bitcoin blockchain. They proposed to use Bitcoin payments as a reward for academic achievements such as peer review or assessments. They then proposed an "educational reputation currency", called Kudos. Each recognized educational institution, innovative organization, and intellectual worker is given an initial award of educational reputation currency; the initial award might be based on some existing metric: Times Higher Education World Reputation Rankings for universities, H-index for academics, Amazon author rank for published authors, etc. An institution could allocate some of its initial fund of Kudos to staff whose reputation it wishes to promote. Each person and institution stores its fund of reputation in a virtual wallet on a universal educational blockchain. They used Ethereum smart contracts to implement OpenLearn badges on a private blockchain, where students enroll on courses and the institution awards them badges.

Wolfgang et al. proposed blockchain in the context of education and certification [17]. The blockchain technology supports counterfeit protection of certificates, easy verification of certificates even if the certification authority no longer exists, and automation of monitoring processes for certificates with a time-limited validity. It ensures higher efficiency and improved security for certification authorities through digitization of current processes, issuing and registering of certificates in a blockchain, as well as automatic monitoring of certificates. It comprises a blockchain including smart contracts, a public storage holding profile information of certification authorities, a document management system managing the actual payload of certificates tracked by the blockchain, and the parties involved in the system, namely accreditation and certification authorities, certifiers, learners and employers.

John Rooksby and Kristiyan Dimitrov from the University of Glasgow implemented Ethereum-based blockchain technology for a permanent and tamper-proof grading system [18]. The system was able to store student information on courses enrolled, grades and their final degree. It supported a university-specific cryptocurrency called Kelvin Coin. Payment of the cryptocurrency can be made by smart contract to the top performing student in a course.
However, there were some drawbacks in implementing the system. Scenario-based and focus group evaluation methods were used to address the advantages and disadvantages derived from the system. Because universities rely on trust and confidentiality, the blockchain system was found to be not trustworthy. The blockchain system was a globally scoped idea; however, universities tend to set their own boundaries, at least at the institutional level. Moreover, using smart contracts to store grades in the blockchain was problematic due to the fact that there is no formal algorithm for calculating grades. Unfortunately, the Ethereum blockchain system needed to change the way the administrative system of the university works. Finally, the prototype of the blockchain system was found to prioritize transparency over efficiency.

Cheng et al. [19] proposed a system that uses Ethereum to generate digital certificates and confirm the eligibility of graduation certificates. The system functions as follows. The HEI enters the student's certificate and academic records into the system. The system verifies all the data. The student receives a quick response (QR) code, an inquiry number and an electronic file of their certificate. Whenever a student wants to apply for a job or for higher education, he/she has only to send the e-certificate alongside the QR code to the respective organization. The organization can retrieve the student's certificate and academic records once the credentials are verified. Moreover, the QR code is used to assess whether the certificate has been tampered with or forged.

There have been several industry projects concerned with student records and online digital badges. Many projects capitalized on the opportunity of digital diplomas as a countermeasure to fake diplomas. Those projects offered technologies to manage the complete educational past of students by gathering all digital badges awarded by different academic organizations. Sony Global Education, for example, has announced the development of a new blockchain for storing academic records [20]. Their platform allows secure sharing of exam results and academic proficiency levels with third-party evaluating organizations. Mozilla Foundation Open Badges are a digital record of different accomplishments encoded into an image, with associated infrastructure for verification [21]. IMS Global Learning Consortium was managing this central open-source repository of badges, with over 1500 participating organizations, until 2017. More recently, Mozilla migrated all users to Badgr, as a replacement for Open Badges as the standard for verifying credentials [22]. Finally, Acclaim and IBM offer digital badges as a form of organizations recognizing individuals' skills and competencies [23]. Contrary to all of the above industry efforts based on central repositories of badges, BCDiploma is using blockchain to provide security, immutability and ease of use for certifying diplomas and other achievements [24].

All of the above research agreed that counterfeit certificates, credentials and documents are a major problem that can be solved with blockchain. Recording a student's complete academic history (completed courses, skills and qualifications) in one trusted and secure blockchain is a perfect solution [25]. Yet, all of them tried to tweak existing cryptocurrency blockchains to store certificates and award badges (reputation), and that resulted in low usability. Cryptocurrency blockchains and smart contracts are not customized for student records.
We propose a permissioned and custom blockchain for business, designed specifically for storing student records, or any other document for that matter.

IV. PROPOSED APPROACH

In this section, we propose an efficient solution that is based on a Merkle tree to provide nostrification and verification of qualifications and to guarantee data integrity of the certificates through non-repudiation. A Merkle hash tree [26] is a data structure used to efficiently verify data integrity and authenticity. As illustrated in Fig. 1, each non-leaf node in the tree, from the bottom up until reaching the root node, holds the hash of the concatenated hashes of its sub-nodes; for example, h12 = h(h1 | h2). The hash held by the root is the root hash, which can be shared in a trusted way for verification purposes; for example, hroot = h(h12 | h34). In [27], a hash calendar is proposed to include the generated root hashes to verify the integrity of the contents of large data structures.

In our proposed solution, we propose forming a Merkle tree where the leaves are documents. The first objective is to provide a periodic publication of the root hash in the blockchain to enhance transparency and protection against any modification of the hashed content, and to provide proof of existence of contents. Each document is certified by its issuer, so we include in the blockchain either the hash of that document, or the hash root of a set of documents issued by the same issuer when multiple documents are to be included. The included hash is authenticated by being digitally signed by the same issuer.

Fig. 1. Example of Merkle tree with 4 leaves (depth = 2).

Interested parties can authenticate the existence of any document, and the verification is legally acceptable. The verification process relies on the authentication path of a given node in the tree to validate the hashed content held by the node, without full Merkle tree traversal [28]. The authentication path of a node in the tree consists of the set of siblings on the path from that node to the root. The content of a document can be authenticated using the hashed content held by the root node and the corresponding authentication path. For example, with reference to Fig. 1, to verify h1 the verifier needs the value of hroot and the authentication path h2 and h34. Hence, the verifier computes h'12 = h(h1 | h2) and h'root = h(h'12 | h34), and then compares h'root with hroot for equality. Choosing the hash function or algorithm [29] depends on many factors such as speed, digest length, number of rounds, collisions and ease of implementation both in hardware and software [30].

_A._ _Transaction Structure_

When a transaction is generated by our entities, it should include the hash root and a set of hash values, where each hash is the digest of a document belonging to the user:

hroot | set of hash values (i.e., h1, h2, …)

_B._ _Nostrification's Generation of Document's Qualification_

The proposed system is very versatile and can be applied to any document and authentication process. In this section, we describe our solution using three different scenarios. In the first one, we describe the case where the user has several documents issued by the same entities, whereas in the second case, the user has several documents issued by different entities. The third scenario is concerned with the case of one document that will be certified by a hierarchy of different institutions.
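A minimal Python sketch of the Merkle construction and authentication-path verification described above follows; SHA-256 is assumed here (the paper leaves the hash choice open), and duplicating the last node on odd-sized levels is one common convention, not taken from the paper.

```python
import hashlib

def h(data: bytes) -> bytes:
    # SHA-256 for illustration; the paper discusses hash choice separately.
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Compute the root of a Merkle tree whose leaves are document digests."""
    level = list(leaves)
    while len(level) > 1:
        if len(level) % 2:                 # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def verify(leaf, path, root):
    """Recompute the root from a leaf and its authentication path.

    path: list of (sibling_digest, sibling_is_on_the_left) pairs, leaf to root.
    """
    node = leaf
    for sibling, is_left in path:
        node = h(sibling + node) if is_left else h(node + sibling)
    return node == root

# Fig. 1 example with four document digests h1..h4:
h1, h2, h3, h4 = (h(d) for d in (b"doc1", b"doc2", b"doc3", b"doc4"))
h12, h34 = h(h1 + h2), h(h3 + h4)
hroot = h(h12 + h34)
assert merkle_root([h1, h2, h3, h4]) == hroot
# Verifying h1 needs only h2 and h34, as in the paper's example:
assert verify(h1, [(h2, False), (h34, False)], hroot)
```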
The proposed nostrification solution supports three cases or operating scenarios; because of space limitations, we present cases 1 and 3.

**_Case 1_**

In this case, the issuer (entity) of the documents will form a Merkle tree where each leaf is the hash of a document (Fig. 2). Next, the entity will generate the authentication path for each intermediate node in the tree, the digest of each document, and the hash root of the tree. Then the entity will sign the hash root and publish it along with the hash value of each document in the blockchain. The entity will next issue the documents to the user after stamping each document. The stamp consists of adding to each document the identifier of the transaction that is already stored in the blockchain.

**_Case 3_**

This case is similar to the two previous cases; however, each tree is dedicated to one document only, and each layer of the tree is associated with an organization that will certify the document. Each organization has a private and a public key. When we want to nostrify a document, we start by selecting all organizations that will certify the document. Then, we sign the hash of the document with the private key of the first organization. Finally, we create the right node of the layer with the pair constituted by the signature and the location of the public key of the organization, required to verify the signature. The parent node of the layer is created by hashing the result of the concatenation of the hash from the left node and the signature from the right node. The process is then repeated for each organization. When a layer has been created for each organization, we calculate the last hash, which becomes the hash root, and we have the Merkle tree (Fig. 3).

Fig. 2. Nostrification of several documents issued by the same entities.

As everyone is able to get the public keys of organizations from the locations included in the tree, it is very simple to verify the authenticity of a document certified by any number of organizations.

Fig. 3. Nostrification of a single document by different entities.

_C._ _Nostrification's Verification of Document's Qualification_

For any document shown by the user, a third party willing to verify it shall first query the blockchain to extract the root hash and the set of hash values, using the transaction identifier stored on the document presented by the user. Then, the third party generates the digest of the presented document and compares it for equality with one of the hash values stored in the extracted set. Then, the third party regenerates the hash root and compares the result for equality with the hash root downloaded from the blockchain. Finally, it verifies the signature on the hash root that was generated by the document issuer, and if the signature is valid, the third party approves the document. In case a third party is willing to verify more than one document nostrified by more than one entity, the user should send the transaction identifier of the most recently nostrified document, which includes a hash root and a set of hash values. This latter set includes the hash of the most recently nostrified document and the hash of any other document belonging to the user that was nostrified prior to it.
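The third-party verification steps just described can be sketched as follows; the in-memory `chain` dictionary and the keyed-hash `verify_signature` stub stand in for a real blockchain query and a real public-key signature check, so this is an assumption-laden illustration rather than the authors' implementation.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    level = list(leaves)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def verify_signature(signature: bytes, root: bytes, issuer_key: bytes) -> bool:
    # Placeholder: a real deployment would verify an asymmetric signature
    # with the issuer's public key; a keyed hash stands in here.
    return signature == h(issuer_key + root)

# Stand-in for the issuer's published transaction (Case 1): the hash root,
# the document hashes in leaf order, and the issuer's signature over the root.
issuer_key = b"issuer-demo-key"
doc_hashes = [h(d) for d in (b"diploma", b"transcript", b"visa page")]
root = merkle_root(doc_hashes)
chain = {"tx-42": {"root": root, "hashes": doc_hashes,
                   "sig": h(issuer_key + root)}}

def third_party_verify(document: bytes, tx_id: str) -> bool:
    """Subsection C steps: look up the transaction by its identifier, check
    membership of the document digest, regenerate the root, verify the signature."""
    record = chain[tx_id]
    if h(document) not in record["hashes"]:
        return False
    if merkle_root(record["hashes"]) != record["root"]:
        return False
    return verify_signature(record["sig"], record["root"], issuer_key)

print(third_party_verify(b"diploma", "tx-42"))  # True
print(third_party_verify(b"forged", "tx-42"))   # False
```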
V. IMPLEMENTATION AND SECURITY ANALYSIS

In this section, we present a detailed analysis of the proposed approach's security and demonstrate its effectiveness in providing long-term integrity protection, proof of existence, authenticity, non-repudiation and privacy. Additionally, we evaluate the efficiency in terms of processor, memory and time usage.

We start with one of the most popular attacks on data integrity, False Data Injection (FDI). In FDI, attackers intentionally change the data in such a way that the receiver will be unable to detect the forged data. Blockchain by its design is secured against tampering and revision, which makes it very difficult for the adversary to inject or add a forged or malformed document into the blockchain. In addition, the signature of the issuer over the document makes it much more difficult, even impossible, to inject malformed data into the blockchain.

In addition to long-term data integrity and proof of existence, our solution ensures authenticity and non-repudiation of origin, because the hash root will be signed by the last document's issuer when we have several documents from different entities, and by the documents' issuer when those documents were issued by the same issuer. It is worth noting that the issued documents will remain valid even if the issuer's certificate expires or is revoked. In fact, our solution leverages blockchain properties to provide long-term integrity of documents.

Privacy concerns usually arise in many applications, particularly in a public distributed database like the blockchain. The privacy concerns are mostly related to the publication of the user's documents. In our solution, privacy is preserved, since the digests of the documents are stored in the blockchain, but not the documents themselves. Hence, the adversary needs to crack the digest in order to find out the original document.

Our approach maintains the above security services while reducing the computation overhead. In fact, instead of generating a signature for each document issued by the same entity to the user, the entity will only need to sign the hash root. Our solution does introduce a very limited computation overhead related to the hash function that is applied to generate the hash of each document required to compute the hash root. However, compared to the computational overhead of asymmetric encryption, the hash function's overhead is usually negligible.

The system proof of concept was implemented using Python v3 and the Flask webserver, and Ganache was used to create the blockchain test server. To evaluate the efficiency of the system, we measure the CPU usage, memory consumption, and time for adding a document and for the nostrification process. The PyCharm IDE was used for the measurements, averaged over five runs.
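A rough sketch of how such measurements can be scripted with the Python standard library is shown below; the `add_document_stub` is a hypothetical placeholder for the real call against the Ganache test chain, and CPU usage, which the authors also report, would need an extra tool such as psutil and is omitted here.

```python
import time
import tracemalloc

def measure(fn, *args, runs: int = 5):
    """Average wall-clock time and peak memory over several runs,
    mirroring the five-run averaging used for Table I."""
    times, peaks = [], []
    for _ in range(runs):
        tracemalloc.start()
        t0 = time.perf_counter()
        fn(*args)
        times.append(time.perf_counter() - t0)
        peaks.append(tracemalloc.get_traced_memory()[1])  # (current, peak)
        tracemalloc.stop()
    return sum(times) / runs, max(peaks) / 1e6  # seconds, MB

def add_document_stub():
    # Placeholder for the real 'add document' transaction against Ganache.
    time.sleep(0.01)

avg_s, peak_mb = measure(add_document_stub)
print(f"avg time: {avg_s:.4f} s, peak memory: {peak_mb:.2f} MB")
```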
In Table I, we can observe that adding a user takes relatively more time because of the deployment of the contract on the blockchain.

TABLE I. SIMULATED PERFORMANCE

| | CPU | Memory (MB) | Add Time (s), 1 user & 4 documents | Time (s), Nostrification |
|---|---|---|---|---|
| Case1 | 2% | 37 | 7.5 | 0.001 |
| Case3 | 5% | 49 | 0.22 | 0.028 |

Our system proof of concept allows everyone to verify the authenticity of a document after accessing both the authentication path and the hash root, which are stored in a smart contract that is publicly available on a blockchain. The smart contracts we use are based on the same cryptographic technology used by cryptocurrencies; therefore, they have the same level of security. All details of deployments or updates of contracts are written into transactions to help find the data at any time. In particular, this allows storing data like the username and the user's Merkle tree. When the issuing institution adds a user along with its documents to the blockchain, a contract is deployed, and a transaction initializes it with the data. At any time, the issuing institution can add several documents to the user profile, in which case the user's Merkle tree is updated, and a new transaction is also needed to update the data stored on the blockchain.

VI. CONCLUSIONS

In this paper, we proposed a blockchain-based nostrification system, where awarding institutions generate a digital certificate and store it in a public but permissioned blockchain, where students and other stakeholders may verify it. We presented a thorough discussion and formal evaluation of the proposed system. In addition, we implemented a prototype of the solution supporting three use-cases. The formal analysis shows resistance to all sorts of common attacks, with excellent performance in terms of CPU and memory usage, as well as negligible blockchain programming and query times.

ACKNOWLEDGMENT

This project was supported in part by Zayed University Research incentive grant #R22018.

REFERENCES

[1] Guang Chen, Bing Xu, Manli Lu and Nian-Shing Chen, "Exploring blockchain technology and its potential applications for education," Smart Learning Environments, 5:1, 2018.
[2] Ernie Brickell, Jan Camenisch, and Liqun Chen, "Direct anonymous attestation," in Proceedings of the 11th ACM Conference on Computer and Communications Security (CCS '04), ACM, New York, NY, USA, 2004, pp. 132-145.
[3] Nelson Bore, Samuel Karumba, Juliet Mutahi, Shelby Solomon Darnell, Charity Wayua, and Komminist Weldemariam, "Towards Blockchain-enabled School Information Hub," in Proceedings of the Ninth International Conference on Information and Communication Technologies and Development (ICTD '17), ACM, New York, NY, USA, 2017, Article 19, 4 pages.
[4] Satoshi Nakamoto, "Bitcoin: A Peer-to-Peer Electronic Cash System," white paper, October 31, 2008. Available: https://bitcoin.org/en/bitcoin-paper [Last Access 20/5/2019].
[5] Turkanovic, M., Holbl, M., Kosic, K., Hericko, M., & Kamisalic, A. (2018). EduCTX: A Blockchain-Based Higher Education Credit Platform. IEEE Access, 6, 5112-5127.
[6] Swan, M. Blockchain: Blueprint for a New Economy. O'Reilly Media, Inc., 2015.
[7] Vincent Gramoli, "From blockchain consensus back to Byzantine consensus," Future Generation Computer Systems, 2017.
[8] Underwood, Sarah (2016). "Blockchain beyond Bitcoin." Communications of the ACM, 59, 15-17. doi:10.1145/2994581.
[9] Gavin Heron, Pam Green Lister (2014). "Influence of National Qualifications Frameworks in Conceptualising Feedback to Students." Social Work Education, 33:4, pp. 420-434.
[10] Blockcerts, The open standard for blockchain credentials, 2016. Available: https://www.blockcerts.org/ [Last Access 20/5/2019].
[11] Grech, A., & Camilleri, A. F. (2017). Blockchain in Education. JRC Science for Policy Report. https://doi.org/10.2760/60649
[12] Guang Chen, Bing Xu, Manli Lu and Nian-Shing Chen, "Exploring blockchain technology and its potential applications for education," Smart Learning Environments, 5:1, 3 January 2018.
[13] Lévy, W., Stumpf-Wollersheim, J., & Welpe, I. M. (2018). Disrupting Education Through Blockchain-Based Education Technology? SSRN Electronic Journal. doi:10.2139/ssrn.3210487.
[14] Gresch, J., Rodrigues, B., Scheid, E., Kanhere, S.S., Stiller, B. (2019). The Proposal of a Blockchain-Based Architecture for Transparent Certificate Handling. In: Abramowicz, W., Paschke, A. (eds) Business Information Systems Workshops, BIS 2018. Lecture Notes in Business Information Processing, vol 339. Springer, Cham.
[15] Azael Capetillo, "Blockchained education: Challenging the longstanding model of academic institutions." Available: http://www.iacee.org/docs/P10Blockchained_education_Challenging_the_longstanding_model_of_academic_institutions._52.pdf [Last Access 20/5/2019].
[16] Sharples, M., & Domingue, J. (2016). The Blockchain and Kudos: A Distributed System for Educational Record, Reputation and Reward. Adaptive and Adaptable Learning, Lecture Notes in Computer Science, pp. 490-496. doi:10.1007/978-3-319-45153-4_48.
[17] Wolfgang Gräther, Sabine Kolvenbach, Rudolf Ruland, Julian Schütte, Christof Torres, Florian Wendland, "Blockchain for Education: Lifelong Learning Passport," in Proceedings of the 1st ERCIM Blockchain Workshop 2018 (ERCIM-Blockchain 2018: Blockchain Engineering: Challenges and Opportunities for Computer Science Research), Amsterdam, Netherlands, 8-9 May 2018. doi:10.18420/blockchain2018_05.
[18] John Rooksby and Kristiyan Dimitrov, "Trustless Education? A Blockchain System for University Grades," New Value Transactions: Understanding and Designing for Distributed Autonomous Organizations, Workshop at DIS 2017, 10 June 2017, Edinburgh.
[19] J. Cheng, N. Lee, C. Chi and Y. Chen, "Blockchain and smart contract for digital certificate," 2018 IEEE International Conference on Applied System Invention (ICASI), Chiba, 2018, pp. 1046-1051. doi:10.1109/ICASI.2018.8394455.
[20] "Sony Global Education Develops Technology Using Blockchain for Open Sharing of Academic Proficiency and Progress Records," Sony Global Headquarters, February 22, 2016. Available: http://www.sony.net/SonyInfo/News/Press/201602/160222E/index.html [Last Access 20/5/2019].
[21] Open Badges Standard, 2014. Available: http://www.badgealliance.org/open-badges-standard/ [Last Access 20/5/2019].
[22] Badgr, 2019. Available: https://badgr.com/ [Last Access 20/5/2019].
[23] Acclaim Digital Badges, 2018. Available: https://www.youracclaim.com/ [Last Access 20/5/2019].
[24] BCDiploma, 2019. Available: https://www.bcdiploma.com/index.html [Last Access 20/5/2019].
[25] D.J. Skiba, "The potential of Blockchain in education and health care." Nurs. Educ. Perspect. 38(4), 220-221 (2017). doi:10.1097/01.NEP.0000000000000190.
[26] R.C. Merkle, "A Digital Signature based on a Conventional Encryption Function," in Proc. CRYPTO, 1987, Springer Verlag.
[27] A. Buldas, A. Kroonmaa, R. Laanoja, "Keyless signatures infrastructure: How to build global distributed hash-trees," in H. Riis Nielson and D. Gollmann (Eds.), NordSec 2013, LNCS 8208, 2013, pp. 313-320.
[28] M. Jakobsson, T. Leighton, S. Micali, M. Szydlo, "Fractal Merkle tree representation and traversal," in Cryptographer's Track at RSA Conference, 2003, pp. 314-326.
[29] M. Aldwairi, D. Alansari, "n-Grams exclusion and inclusion filter for intrusion detection in Internet of Energy big data systems." Trans. Emerging Tel. Tech. 2022; 33:e3711.
[30] M. Aldwairi, Y. Flaifel, K. Mhaidat, "Efficient Wu-Manber Pattern Matching Hardware for Intrusion and Malware Detection," International Conference on Electrical, Electronics, Computers, Communication, Mechanical and Computing (EECCMC), 28-29 January 2018, Tamil Nadu, India.
{ "disclaimer": "Notice: Paper or abstract available at https://arxiv.org/abs/2310.09136, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "GREEN", "url": "https://arxiv.org/pdf/2310.09136" }
2,023
[ "JournalArticle", "Conference" ]
true
2023-10-13T00:00:00
[ { "paperId": "70be9c86d51ea0f3b8932484a084f65837c014d5", "title": "Blockchained education: challenging the long-standing model of academic institutions" }, { "paperId": "3c6e884fb6adae5bf90dc56acb3a64bb9ece5ae4", "title": "Efficient Wu-Manber Pattern Matching Hardware for Intrusion and Malware Detection" }, { "paperId": "546d8aad284e72be2e02802abbc52ae2bb196c4c", "title": "Trustless education? A blockchain system for university grades1" }, { "paperId": "713e07e26178b03586b438ccd1035472086b4714", "title": "n-Grams exclusion and inclusion filter for intrusion detection in Internet of Energy big data systems" }, { "paperId": "82410757efad7a5db47676ab4b5bfb3ccff6553b", "title": "The Proposal of a Blockchain-Based Architecture for Transparent Certificate Handling" }, { "paperId": "3aae59b3d9ef1b9f4808d8095c4ba1c99b8dcaad", "title": "Disrupting Education Through Blockchain-Based Education Technology?" }, { "paperId": "50f725695a38b38e43f83757c289e7f6239398db", "title": "Blockchain and smart contract for digital certificate" }, { "paperId": "ce99ac88a0baefd749ca4d7db8e5ed07dbaf51bf", "title": "Exploring blockchain technology and its potential applications for education" }, { "paperId": "ea54e5e33145a511c88cd04727e8f5ff39cb0212", "title": "Towards Blockchain-enabled School Information Hub" }, { "paperId": "d6853c98da0e95757dfc8fe31da4cea5f47503cc", "title": "Blockchain in education" }, { "paperId": "9df21c23ccd29676889aea3d1e42e7fea76b4536", "title": "EduCTX: A Blockchain-Based Higher Education Credit Platform" }, { "paperId": "7d7bbd407359122decdc5fd90afd58fd11b68476", "title": "From blockchain consensus back to Byzantine consensus" }, { "paperId": "9a446ba6ca75630407aa7d5deee1b0a2d0ec7ed8", "title": "The Potential of Blockchain in Education and Health Care." 
}, { "paperId": "efe573cbfa7f4de4fd31eda183fefa8a7aa80888", "title": "Blockchain beyond bitcoin" }, { "paperId": "0200d453f5c995c87761e50976ed07692e257a30", "title": "The Blockchain and Kudos: A Distributed System for Educational Record, Reputation and Reward" }, { "paperId": "97fddbbfd681bce9eeb8e0a013353b4d5b2ba0db", "title": "Blockchain: Blueprint for a New Economy" }, { "paperId": "c7864e71ae2e4c8b3cc8daa91a651b4c09c90835", "title": "Influence of National Qualifications Frameworks in Conceptualising Feedback to Students" }, { "paperId": "25a17e483599215949cb3961fd945f6867d3bcae", "title": "Keyless Signatures' Infrastructure: How to Build Global Distributed Hash-Trees" }, { "paperId": "60739cb3e92415cff6f76a34374045d3f5cb6f27", "title": "Direct anonymous attestation" }, { "paperId": "a616473a13cf6f3a9deba8cf30847dd83d59979b", "title": "Fractal Merkle Tree Representation and Traversal" }, { "paperId": "5bcd990b11e068234c3a13b021f3266bb45a2964", "title": "A Digital Signature Based on a Conventional Encryption Function" }, { "paperId": "120de9ed3ad57dfa2f9297aaad25f63a708be89a", "title": "Blockchain for Education: Lifelong Learning Passport" }, { "paperId": null, "title": "Last Access 20" }, { "paperId": null, "title": "Acclaim Digital badges" }, { "paperId": null, "title": "Sony Global Education Develops Technology Using Blockchain for Open Sharing of Academic Proficiency and Progress Records" }, { "paperId": null, "title": "The open standard for blockchain credentials" }, { "paperId": null, "title": "Open Badges Standard," }, { "paperId": "4e9ec92a90c5d571d2f1d496f8df01f0a8f38596", "title": "Bitcoin: A Peer-to-Peer Electronic Cash System" }, { "paperId": "ee1b2191e6de66a8c45d3cbafda96a7262780b61", "title": "Bitcoin: A Peer-to-Peer Electronic Cash System" }, { "paperId": null, "title": "Satoshi" }, { "paperId": null, "title": "Blockchain Engineering: Challenges and Opportunities for Computer Science Research" } ]
9,121
en
[ { "category": "Economics", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/025db54d117d024615422b3cee16c1d9f46e15ce
[]
0.929581
Could the Issuance of CBDC Reduce the Likelihood of Banking Panic?
025db54d117d024615422b3cee16c1d9f46e15ce
Journal of Central Banking Theory and Practice
[ { "authorId": "2218880205", "name": "Soraya Ben Souissi" }, { "authorId": "103423786", "name": "M. Nabi" } ]
{ "alternate_issns": null, "alternate_names": [ "J Central Bank Theory Pract" ], "alternate_urls": null, "id": "b665ae3f-05b5-443a-98de-ecbe4779954d", "issn": "1800-9581", "name": "Journal of Central Banking Theory and Practice", "type": "journal", "url": "http://www.degruyter.com/view/j/jcbtp" }
Abstract This paper delves into the relationship between the issuance of Central Bank Digital Currencies (CBDC) and the likelihood of banking panic. The issuance of CBDC acts as a disturbing shock that incentivizes depositors to withdraw all or part of their deposits from commercial banks and swap them for the CBDC offered by the central bank. We determine a variety of tools that central banks can use so that the issuance of CBDC acts as a stabilizing factor for the banking system (by reducing the likelihood of banking panic).
_Journal of Central Banking Theory and Practice, 2023, 2, pp. 83-101_ _Received: 08 February 2022; accepted: 13 June 2022_ ## Soraya BEN SOUISSI [*], Mahmoud Sami NABI [**] # Could the Issuance of CBDC Reduce the Likelihood of Banking Panic?[1] **Abstract: This paper delves into the relationship between the issuance of Central Bank Digital Currencies (CBDC) and the likelihood of banking panic. The issuance of CBDC acts as a disturbing shock that incentivizes depositors to withdraw all or part of their deposits from commercial banks and swap them for the CBDC offered by the central bank. We determine a variety of tools that central banks can use so that the issuance of CBDC acts as a stabilizing factor for the banking system (by reducing the likelihood of banking panic).** **Keywords: Central bank digital currency, liquidity, financial stability.** **JEL classifications: E31, E42, G11.** ## 1. Introduction _UDK: 336.711:004_ _DOI: 10.2478/jcbtp-2023-0015_ _* University of Carthage, LEGI-Tunisia Polytechnic School and FSEG Nabeul, Tunisia. E-mail: soraya.souissi.ss@gmail.com_ _** University of Carthage, LEGI-Tunisia Polytechnic School and FSEG Nabeul, Tunisia; ERF, Economic Research Forum, Egypt. E-mail (Corresponding author): mahmoudsami.nabi@ept.rnu.tn_ The determinants and impacts of Central Bank Digital Currency (CBDC) issuance are the subject of an increasing number of research papers and experimental projects by central banks. Unlike crypto-currencies, which are not backed by any sovereign authority, CBDC is considered a new form of central bank currency. This new form of sovereign currency is expected to contribute to faster, easier, cheaper and more secure financial transactions. The effects of issuing this new currency are not yet well understood. While some researchers and financiers (e.g. Davoodalhosseini, 2018; Panetta, 2018; Cooper, Esser and Allen, 2019; Kaczmarek, 2022) emphasize its benefits, others are more sceptical. The main argument against the issuance of CBDC relates to the financial stability issue that might be exacerbated by bank runs. For example, Genberg (2020) argues that the issuance of CBDC could threaten the intermediation function of commercial banks. Besides, it is not clear whether CBDC will replace cash or be considered a financial asset. This paper tries to contribute to this nascent literature by investigating the impacts of CBDC issuance on financial stability. Our paper is in line with Kim and Kwon (2019) and Brunnermeier and Niepelt (2019), which analyse the conditions under which CBDC issuance does not affect financial stability. It studies the effects of CBDC issuance on financial stability through a simplified model based on Kim and Kwon's paper with three main modifications: space, time, and investment choice. In our model, CBDC does not exist initially and is issued at the end of the first period. This shock incentivizes depositors to withdraw their deposits from the commercial banks and swap them (totally or partially) for CBDC. We then study the effect of this event on financial stability and the possible options to preserve it. We show that avoiding the bank run is possible if the central bank transfers the CBDC into loans for the commercial banks, in an attempt to preserve the stability of the reserve-deposit ratio. [1] The authors declare that the current research has not benefited from any source of funding and that there are no conflicts of interest with third parties.
In addition, we show that this is not the only possible option. Indeed, the central bank could also intervene by restricting access to CBDC accounts, either by limiting the available amount or by imposing a substitution fee. The next option is to suspend the convertibility of bank deposits into CBDC (à la Diamond and Dybvig, 1983). For this option, we show that the proportion of lenders converting their deposits into CBDC shall be kept below an endogenously determined bank panic cut-off. The remainder of this paper is organized as follows. We begin with a literature review on the economics of CBDC. The second section presents the theoretical model which is used to analyse the effects of CBDC issuance on financial stability. The third section is devoted to the analysis of the equilibrium without CBDC issuance. In the fourth section, we study the impacts of CBDC issuance on financial stability. Finally, we determine various other options that could limit the impacts of CBDC issuance on financial stability. ## 2. The literature review There are emerging studies analysing the determinants and impacts of central bank digital currency (CBDC) issuance. Auer and Böhme (2020) focus on the economic and institutional drivers of CBDC projects. They suggest a CBDC project index taking higher values in countries where mobile phone use is widespread and innovation capacity is developed. Cooper et al. (2019) and Vučinić and Luburić (2022) show that CBDC can accelerate financial inclusion by facilitating the interoperability of payment systems, improving their efficiency, and reducing financial costs and risks. These studies show that the impacts of CBDC are diverse and not all positive. Indeed, this digital currency can have a destabilizing impact on banking intermediation and a negative effect on financial and digital equality. For Panetta (2018), if CBDC are used as a means of payment, they will have a positive effect on financial inclusion. The main idea is that a proportion of consumers who do not have bank accounts could use them without incurring the cost of holding bank accounts. From the perspective of central banks, the use of CBDC is expected to reduce the cost of using cash. If CBDC are used as a store of value, they will be considered as costless assets for economic agents (who will no longer have to bear the fees due to the management of their deposit accounts). CBDC would be more suitable than bank deposits if they are issued as liquid, risk-free assets with a rate of return. But this option is not free of impacts on financial stability, since it could generate bank runs and affect the intermediation role of commercial banks. In this context, Vučinić (2020) shows that FinTech could have an adverse systemic impact on financial stability through microfinancial and macrofinancial risks. CBDC can have this same impact on financial stability, since they are part of fintech. Bindseil (2020) analyses the effect of CBDC creation in two forms: by replacing banknotes and by replacing bank deposits. It concludes that the first form has a neutral effect on financial stability while the second form does not. Some other studies tried to analyse the impacts of CBDC on financial stability by using general equilibrium models or by analysing the neutrality conditions of the introduction of this new currency.
Brunnermeier and Niepelt (2019) develop a general model of money, liquidity, and financial frictions and attempt to define the equivalence conditions between different monetary systems. The exchange equivalence between a private and a public currency is studied. The authors analyse whether CBDC issuance affects the allocations and equilibrium prices. They show that the issuance shall be accompanied by measures that guarantee wealth and liquidity neutrality. Besides, they show that a substitution operation accompanied by open-market operations and transfers has no effects on wealth and liquidity. Kim and Kwon (2019) propose an OLG model with agents that move between locations, in which CBDC competes with bank deposits and is accessible to all agents in all locations. It shows that an increase in CBDC deposits leads to a reduction in commercial bank deposits and to an increase in the probability of banking panic. The authors argue that the financial system could keep its stability if the central bank uses CBDC deposits to extend credit to commercial banks. Mersch (2017) confirms this idea regarding the destabilizing effect of the introduction of CBDC, mainly through the increase in the risk of deposit flight. Bitter (2020) studies how the introduction of CBDC affects the likelihood of aggregate bank runs. It shows that CBDC does not affect aggregate output and prices in a steady state. However, it changes the composition of household savings, bank funding and capital investment. Besides, central banks can accommodate CBDC in their balance sheet via options such as loans to banks and corporate asset purchases. The author concludes that these two CBDC policies have a stabilizing effect on the economy during crises. Brunnermeier and Niepelt (2019) and Kim and Kwon (2019) suggest different approaches to measures that reduce the negative impacts of CBDC on financial stability. The former considers the neutrality of the equilibrium allocations, while the latter focuses on a cut-off threshold that triggers banking panic. Nevertheless, the two studies converge to quite similar conclusions. Indeed, they show that for the CBDC issuance not to affect financial stability, the central bank should activate specific instruments. Brunnermeier and Niepelt suggest open market operations and clearing transfers, whereas Kim and Kwon (2019) propose the refinancing of commercial banks by the central bank. In the same vein, Kumhof and Noone (2018) outline the following four principles that must be followed in order to control the impacts of CBDC issuance on financial stability: i) payment of an adjustable interest rate on CBDC; ii) distinction between reserves and (non-convertible) CBDC; iii) no convertibility of bank deposits into CBDC held in commercial banks; iv) issuance of CBDC against eligible securities. The interest rate paid on CBDC should be adjustable so that it can be used as a monetary policy tool to maintain financial stability, price stability, and parity between bank deposits and CBDC. ## 3. The model We consider a two-period, three-date (t = 0, 1, 2) economy. There is a [0,1] continuum of agents with a unit mass. Agents live for two periods. In the first period they are young, and from t = 1 onward they become old. Half of the agents are lenders while the other half are borrowers. The preferences of agents are described by the following utility function: (1) where c_j is consumption in period j and β is a stochastic discount factor.
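The functional form of equation (1) is not reproduced in this extraction. As a point of reference only, a minimal sketch of the standard two-period expected-utility specification that the surrounding definitions suggest (consumption c_1 and c_2, stochastic discount factor β, and an unspecified period-utility u) would be:

```latex
% Hedged sketch only: the paper's equation (1) is elided in this extraction.
% A standard two-period form consistent with the surrounding text, with
% u(.) an increasing, concave period-utility function, is:
\begin{equation}
  U(c_1, c_2) \;=\; \mathbb{E}\!\left[\, u(c_1) + \beta\, u(c_2) \,\right],
  \qquad u' > 0,\; u'' < 0 .
\end{equation}
```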
Lenders have an initial endowment x > 0 of the consumable good when they are young and no endowment when they are old. Borrowers have no endowment when they are young and an endowment y > 0 when they are old. We assume that βx > y. At time t = 0, there is a continuum of old agents with a unit mass. They have an initial money endowment M > 0, and from this time on there is no injection or withdrawal of money. At the beginning of the first period, agents receive their endowments. The young lenders will use their allocation to purchase goods and services and invest the rest as deposits in commercial banks, which will be remunerated at the end of the first period. Young borrowers contact commercial banks to get loans in the first period, which they will repay with interest at the beginning of the second period. Finally, we consider that cash exists in the economy and is used by agents to make transactions. We assume that the central bank chooses to issue the CBDC at t = 1 as a liquid and non-risky asset which is accessible directly to agents. The central bank keeps the CBDC accounts and pays a remuneration (r^c) that competes with bank deposits. The central bank purchases government securities at a rate R_c. The model has a finite number of commercial banks that live forever. They hold reserves, collect deposits and grant loans. Each bank announces its repayment schedule at t = 0 and the interest rate that will be charged on deposits for each unit deposited, according to the type of lender. After the issuance of the CBDC during the first period (t = 1), a random fraction π of young lenders, called "swappers", decide to invest part or all of their commercial bank deposits in CBDC. Thus, a lender may have a diversified portfolio of commercial bank deposits and CBDC deposits. Swappers contact their banks to withdraw their deposits. We denote by F(π) the distribution function of the random variable π and by f(π) its continuously differentiable density function. ## 4. The financial equilibrium without CBDC ### 4.1. Agents' problem At the beginning of the first period, lenders receive their initial endowments, consume, and decide on the amount to deposit. Commercial banks decide on the interest rate that will remunerate each type of lender: r^s(π) if it is a swapper and r(π) if it is not. Once the repayment schedule is announced, banks accept deposits and set the interest rate R applied to loans. Each lender chooses the deposit level d^b that maximizes his expected utility given the announced payment scheme. At the beginning of the first period (t = 0), the lender invests all his capital in a commercial bank. His expected utility is: (2) The volume of deposits that maximizes this utility is given by: (3) On the other hand, a borrower observes the competitive interest rate R given at time t = 0 and determines the amount of credit he will apply for in order to maximise his expected utility, whose expression is: (4) The borrower chooses the optimal amount of credit, whose expression is: (5) ### 4.2. Commercial banks' problem Commercial banks hold reserves, collect deposits and grant credit. They hold reserves z for any positive amount of deposits d^b in commercial banks. They grant credit for the remaining amount: (6) Let γ be the reserve-deposit ratio decided by the central bank. The reserves are remunerated at a rate expressed in terms of p_t, where p_t is the inverse of the price level at time t = 1, 2.
To simplify, we consider the case of a stationary equilibrium where p_{t+1} = p_t. ### 4.3. Equilibrium without CBDC The gross interest rate is given by R > 1. At the equilibrium, the total amount of deposits made by the young lenders, reduced by the amount of bank reserves, should allow the commercial banks to cover the demand for credit. Consequently, using equation (5), we obtain the market equilibrium condition: (7) We can then derive the expression of the nominal interest rate in equilibrium in a regime where only fiat money exists: (8) ## 5. Effects of CBDC issuance on financial stability We now consider that the central bank decides to issue the CBDC and pays an interest rate r^c. In the following sections, we study the effects of this new introduction on the behaviour of agents and the initial equilibrium values. ### 5.1. Agents and commercial banks problem When the central bank announces the issuance of CBDC during a specified period t = 1, the young lender faces three strategies: - keep the full deposit in the commercial bank: d^b = d; - withdraw all deposits from the commercial bank and transfer them to a CBDC account at the central bank: d^c = d; - withdraw a proportion θ of the deposits to convert it into CBDC, where θ ∈ (0,1). If the lender decides to invest in a CBDC account at the central bank, he is qualified as a swapper. The expression of the utility is then given by: (9) The lender will always choose the optimal deposit level that maximizes his utility: (10) For borrowers, nothing is altered, and the amount of credit they apply for is always the same. Once γ is chosen and credits are granted, lenders who choose to invest in CBDC (swappers), whose proportion equals π, will make withdrawals in an amount that equals: (11) This withdrawal amount is paid by the commercial bank in the form of banknotes, since the latter does not convert deposits into CBDC. It is assumed that there is a fraction α(π) of bank reserves intended for swappers, where α ∈ [0,1]. ### 5.2. Equilibrium in case of CBDC investment Considering that commercial banks make no profit at the equilibrium, the values of r^s(π), r(π), r^c, α(π) and γ should be chosen in order to maximize the utility, which is expressed as follows: (12) subject to: (13) (14) (15) The optimal solution must satisfy equations (13) and (14) as equalities. In this case, and under the assumptions of footnote 2, we can determine the maximal level of bank reserves that can be withdrawn by swappers: (16) This fraction of reserves cannot exceed the maximum value of 1. Therefore, we can define a bank panic cut-off point: the value of π at which all bank reserves are liquidated by agents switching to CBDC, and above which commercial banks can no longer satisfy the liquidity withdrawal demand. Considering equation (16), we have: (17) This value is interpreted as the cut-off value of the probability of migration to CBDC that can generate a deposit run. If the amount of CBDC deposit per individual is low (θ is small), the cut-off value will be high, signifying low exposure to bank panic. In particular, there is no banking panic when θ is small enough for the cut-off to reach its upper bound. It is already clear that this restriction emerges as one of the tools that the central bank can use to mitigate the negative effects of the CBDC issuance on financial stability. As shown in Graph 1, below this conversion threshold the bank panic cut-off is high enough that the stability of the financial system can be preserved.
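The closed-form cut-off in (17) is elided in this extraction, so the sketch below only illustrates the qualitative behaviour described above and reported around Graphs 1-3: a cut-off that falls as the conversion ratio θ rises (tending to infinity as θ approaches zero), rises with the reserve-deposit ratio γ, and is only weakly sensitive to the gross loan rate R. The functional form used here is an assumed stand-in, not the paper's formula.

```python
import numpy as np
import matplotlib.pyplot as plt

def panic_cutoff(theta, gamma, R):
    """Illustrative stand-in for the elided cut-off (17).

    Chosen only to reproduce the comparative statics stated in the text:
    decreasing in the conversion ratio theta, increasing in the
    reserve-deposit ratio gamma, weakly sensitive to the loan rate R.
    NOT the paper's equation.
    """
    return gamma / (theta * (1.0 - gamma)) * R ** -0.05

theta = np.linspace(0.05, 1.0, 100)
for gamma in (0.1, 0.2, 0.3):          # vary the reserve-deposit ratio (cf. Graph 2)
    plt.plot(theta, np.minimum(panic_cutoff(theta, gamma, 1.05), 1.0),
             label=f"gamma = {gamma}")
plt.xlabel("CBDC conversion ratio theta")
plt.ylabel("bank panic cut-off (capped at 1)")
plt.legend()
plt.show()
```

Any calibration here is arbitrary; the point is only the monotonicity pattern that the authors' own simulations display.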
[2] Under the pressure of bank competition and under the condition of no bank panic, which will be made explicit later, following Kim and Kwon (2019). **Graph 1: Cut-off variation as a function of the CBDC conversion ratio** Source: Authors' simulations The bank panic threshold depends not only on θ but also on the reserve-deposit ratio γ. In Graph 2, we vary this ratio for a given interest rate R. We observe that as γ increases, the bank panic threshold increases for a given CBDC conversion ratio. This means that the more reserves the commercial banks hold to satisfy the liquidity needs of their customers (who want to convert their deposits into CBDC), the longer it takes for the banking panic to trigger. This same threshold of banking panic is weakly sensitive to a variation in the interest rate R applicable to bank loans, if we keep the reserve-deposit ratio of commercial banks at the same level (Graph 3). **Graph 2: Impact of the variation in the reserve-deposit ratio** Source: Authors' simulations **Graph 3: Impact of interest rate changes** Source: Authors' simulations However, what is more interesting is the analysis of the determinants of the highest admissible conversion level (see Section 6) for a given swap proportion. From another viewpoint, condition (17) means that if depositors wish to convert all their deposits into CBDC (θ = 1), the cut-off probability of switching to CBDC will be at its lowest level, which itself depends on the interest rate charged on loans and on the reserve-deposit ratio. The higher this ratio is, the more reserves commercial banks have available to pay swappers, and the longer it takes for the banking panic phenomenon to appear. On the other hand, if deposits are not converted into CBDC, or if this proportion is close to zero, the cut-off value of the banking panic tends towards infinity. Thus, a bank panic is less likely to occur. If the probability of lenders switching to CBDC remains below the bank panic cut-off, then commercial banks have enough reserves to honour all liquidity demands. In the presence of banking competition and in a market characterized by stability, i.e. the absence of a banking panic, the interest rate applied to deposits is the same for swappers and non-swappers and is given by the expression: (18) However, if the cut-off is exceeded, the commercial banks use all their bank reserves to pay the liquidity requests, and we then face a situation of bank run. In this case, the interest rates charged to swappers and non-swappers are no longer equivalent and we have:
Otherwise, reserves will fall to zero and the proportion of lenders leaving banks will approach the limit for the banking panic. Following Kim and Kwon (2019), we define the optimal reserve-deposit ratio. Let us define the function as: (21) The optimal reserve-deposit ratio can be expressed as: (22) Considering equation (17), we can then rewrite equation (22) as: (23) The function is decreasing and concave in and for all . If, then (23) is satisfied only by . If, then (23) has two solutions and the interior solution solves the optimization problem. Results of the optimal choice of can be summarized as follows: the optimal reserve-deposit ratio is given by ; with and . ----- ### 5.3. The general equilibrium in case of CBDC issuance **_Proposition 1_** In equilibrium, the CBDC issuance increases the nominal interest rate which is given by: (24) **Proof. At the equilibrium, the total amount of bank deposits must cover the** bank's reserves and the granted loans. In other words: (25) We then have the following equilibrium condition: (26) It is clear from equation (26) that if the proportion of bank deposits converted into CBDC increases, then the nominal interest rate also increases. Consequently, there will be less lending by commercial banks, for a given deposit-to-reserve ratio. Therefore, if (absence of CBDC), the nominal interest rate is at its minimum threshold compared to interest rate, in the presence of CBDC. In other words, the issuance of CBDC will be more expensive for the borrowers. The increase in the interest rate could have an impact on the volume of granted loans. However, the decrease is not only due to the increase in the nominal interest rate, but also to the declining volume of private deposits since there will be a run-off of lenders` deposits to CBDC accounts at the central bank. ## 6. Limiting the negative effects of CBDC on financial stability In this section, the various strategies for dealing with the potential effects of CBDC issuance on financial stability are analysed. We have seen previously, that following this issuance, which occurs at, lenders could withdraw their deposits from commercial banks and convert it into CBDC accounts at the central bank. Commercial banks must hold enough reserves to meet this need for liquidity. Beyond a certain limit withdrawals could generate a bank panic and reduce the amount of granted loans. To overcome this panic, Kim and Kwon (2019) propose that the central bank use CBDC deposits to extend credits to commercial banks, which could then use the new reserves to pay lenders. Bitter (2020) shows ----- that under two different scenarios: loans to banks and corporate asset purchases, CBDC issuance does not destabilize the economy. On the contrary, it could improve financial stability by postponing the emergence of bank run equilibrium. The authors opts for the principle of managing the issuance of CBDC through the interest rate. In the same context, Gross and Schiller (2021) show that the central bank can decrease the remuneration of CBDC in order to reduce the volume held. However, the authors show that if CBDC are not interest-bearing, the central bank cannot govern the demand for its digital currency. ### 6.1. Intervention on the volume of CBDC Here, we study the possibility of central bank intervention through the volume of issued CBDC . This intervention instrument can be applied to cases where CBDC are issued with or without interest. 
**_Proposition 2_** To prevent a bank run and provide the required liquidity to lenders, conversion to CBDC should not exceed the highest level given by: (27) **Proof.** We saw in the previous section that when the central bank issues its new form of money at t = 1, a proportion π of young lenders (swappers) will choose to invest a volume of their deposits in CBDC. We showed that there is a threshold at which a banking panic phenomenon arises, leading to financial instability. Let us consider an overlapping-generations model, extending the time horizon to infinity. In this context, it is possible for the central bank to prevent this banking panic with new generations being born in later periods, by constraining the volume of CBDC to be converted to a ceiling based on observations from previous periods. Accordingly, and in order to avoid a banking panic, the demand for withdrawals of deposits to be converted into CBDC has to be at most equal to the proportion of reserves left for swappers. Since the central bank has already observed the proportion of swappers at t = 1, it has to implement a new strategy to avoid a banking panic arising from the conversion of deposits into CBDC. This implies that: (28) Replacing the relevant terms by their expressions in (17), we can easily deduce the expression of the ceiling. The highest level of conversion to CBDC is established by replacing the deposit rate in (28) by its expression in equation (18). Therefore, if the proportion of swappers in the preceding period is observed, and knowing the cut-off point for the bank panic, the central bank can limit the issuance of CBDC so that it does not exceed the limit defined in equation (28). The highest volume of CBDC to be issued is a decreasing function of the observed swapper ratio, given a defined reserve-deposit ratio, as shown in Graph 4. In case the bank reserves of a period are fully liquidated, the proportion of swappers at that time is equal to the bank panic threshold. The expression (28) then turns into: (29) **Graph 4: Relation between CBDC issuance and bank run** Source: Authors' simulations Then the maximum volume of CBDC issued by the central bank during a period following a banking panic will depend on the volume of CBDC issued in the previous period. We agree with the findings of Panetta (2018) that the strategy of limiting the volume of CBDC can reduce the risk of a bank run, while developing an approximate expression of this maximum volume. If the central bank chooses to adopt a strategy of limiting the volume of CBDC, it will not be able to issue an additional quantity of this new money if the demand for it increases. In this case, the adjustment shall be made through the interest rate applied to CBDC. ### 6.2. Intervention through commercial banks' suspension of convertibility We saw previously that the central bank can intervene through the limitation of CBDC issuance. Reducing the probability of deposit flight is not limited to the central bank; it can also be done by commercial banks using the convertibility suspension tool, as analysed in Diamond and Dybvig (1983). In this model, a deposit contract between the commercial bank and the lender fixes the deposit remuneration. Besides, the withdrawal of liquidity by the agents is done sequentially until the bank reserves are exhausted. We have already defined the amount of deposits that will be withdrawn from the bank in the form of cash to be converted into CBDC.
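As an illustration of the sequential-service mechanism just described (and formalized below in Proposition 3), the following sketch simulates a queue of withdrawal requests served in order until the reserve fraction earmarked for swappers is exhausted, at which point convertibility is suspended. All quantities are made-up placeholders; the suspension threshold derived in the paper is elided in this extraction, so the simulation simply checks cumulative payouts against the reserve pool.

```python
# Illustrative sketch of Diamond-Dybvig-style suspension of convertibility.
# Parameter values are arbitrary placeholders, not the paper's calibration.

def serve_withdrawals(requests, swapper_reserves):
    """Serve withdrawal requests first-come-first-served; suspend
    convertibility once cumulative payouts would exhaust the fraction
    of reserves earmarked for swappers."""
    served, paid = [], 0.0
    for j, amount in enumerate(requests):
        if paid + amount > swapper_reserves:
            print(f"Convertibility suspended before request {j} "
                  f"(reserves left: {swapper_reserves - paid:.2f})")
            break
        paid += amount
        served.append(j)
    return served, paid

# A queue of swappers, each withdrawing theta * d of an identical deposit d.
theta, d, n_swappers = 0.5, 10.0, 12
queue = [theta * d] * n_swappers        # identical requests, served in order
alpha, z = 0.8, 40.0                    # fraction alpha of reserves z for swappers
served, paid = serve_withdrawals(queue, alpha * z)
print(f"Served {len(served)} of {n_swappers} requests; paid out {paid:.2f}")
```

Raising the conversion volume (theta * d) lowers the number of requests that can be served before suspension, which is exactly the monotonicity the proposition's critical limit captures.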
We also assume that the central bank sets a maximum individual threshold for conversion to CBDC that is the same for all lenders. We assume that liquidity demanders can make liquidity withdrawals of deposits in a sequential manner and in a well-defined order. Furthermore, we assume that banks accept each agent's withdrawal request given only his position in the queue and without any additional information about the behaviour of agents who are ranked after him. Finally, we assume that lenders are served in order. We denote by V_1 the remuneration of deposits that will be withdrawn. This remuneration depends on the lender's position in the line-up at period t. We define the total number of deposit withdrawal demands and the number of withdrawal demands served before individual j, the latter being a fraction of the former. Thus, we have: (30) **_Proposition 3_** The bank deposit convertibility is suspended as soon as the number of served withdrawals reaches the critical limit below, in order to avoid the exhaustion of the fraction of reserves intended for swappers and thus the emergence of a bank run. (31) **Proof.** In order to determine the expression of this limit, we propose the following reasoning. We know that each lender will invariably hold the same total deposit amount. Once the deposit remunerations are decided and the volume of CBDC allowed for conversion is chosen by the central bank, commercial banks must set the number of withdrawals served such that: (32) For all withdrawal sequences that occur before reaching this point, we are in an equilibrium situation with CBDC where agents seek to maximize their utility described in equation (8). Yet, we know that at the equilibrium, each lender j will choose the level of deposit that maximises his utility, i.e.: (33) By substituting this individual deposit expression into equation (32), we can then derive the expression for the critical limit. If the number of withdrawals by lenders for conversion into CBDC reaches this level, the banks no longer pay any remuneration, and the agents have an incentive to keep their bank deposits until the end of the first period to receive their remuneration. **This critical limit for the number of withdrawals is a decreasing function of CBDC volume. An increase in the CBDC conversion volume generates a lower threshold of convertibility suspension.** In a situation where lenders choose to convert their deposits into CBDC, banks may choose to serve them sequentially according to their position in the line-up until their reserves are exhausted. At that point, the commercial banks pay no further remuneration and agents have an incentive to keep their deposits at their banks until receiving their final remuneration. ## 7. Conclusion The issuance of CBDC by central banks is generating a lot of interest, and new research is emerging to study its various economic impacts. Several authors have shown that the introduction of CBDC could leave financial stability unaffected if it is accompanied by central bank measures such as open market operations or the granting of credit to commercial banks to guarantee the stability of their reserves. In this paper, we analysed the impacts of CBDC issuance on financial stability through a simplified model inspired by Kim and Kwon (2019). CBDC are assimilated to non-risky, liquid financial assets competing with bank deposits. The issuance of this new money takes place over a period of time after lenders have already invested in bank deposits.
We enable lenders to convert part or all of their bank deposits into CBDC, and we analyse the impacts of such behaviour on the likelihood of banking panic. We show that, under certain conditions, two strategies are possible to avoid the negative effects of CBDC issuance on financial stability. The central bank could limit the volume of issued CBDC to a predetermined threshold in order to avoid the occurrence of a bank run. Commercial banks could also limit the convertibility of deposits into cash as soon as the banking panic cut-off is reached. Although our results are consistent with existing work on the impacts of CBDC issuance on financial stability, we offer original recommendations in relation to the options available for the central bank to mitigate these negative impacts. ## References 1. Auer, R. and Böhme, R. (2020), "CBDC architectures, the financial system, and the central bank of the future", VOXEU - Center for Economic Policy Research. 2. Bindseil, U. (2020), "Tiered CBDC and the financial system", European Central Bank, Working Paper Series, No. 2351. 3. Bitter, L. (2020), "Banking crises under a Central Bank Digital Currency (CBDC)", Beiträge zur Jahrestagung des Vereins für Socialpolitik 2020: Gender Economics. 4. Brunnermeier, M.K. and Niepelt, D. (2019), "On the equivalence of private and public money", Journal of Monetary Economics, vol. 106, October 2019, pp. 27-41. 5. Brunnermeier, M.K. and Sannikov, Y. (2016), "The I Theory of Money", NBER Working Paper Series, Working Paper 22533. 6. Cooper, B., Esser, A. and Allen, M. (2019), "The use cases of central bank digital currency for financial inclusion: A case for mobile money", The Centre for Financial Regulation and Inclusion. 7. Davoodalhosseini, S.M.R. (2018), "Central Bank Digital Currency and Monetary Policy", Bank of Canada, Staff Working Paper 2018-36. 8. Diamond, D. and Dybvig, P. (1983), "Bank runs, Deposit Insurance, and Liquidity", Journal of Political Economy, vol. 91, No. 3. 9. Genberg, H. (2020), "Digital transformation: Some Implications for Financial and Macroeconomic Stability", Macroeconomic Stabilization in the Digital Age, Asian Development Bank Institute, pp. 68-85. 10. Gross, J. and Schiller, J. (2021), "A Model for Central Bank Digital Currencies: Implications for Bank Funding and Monetary Policy", Social Science Research Network. 11. Kaczmarek, P. (2022), "Central Bank Digital Currency: Scenarios of Implementation and Potential Consequences for Monetary System", Journal of Central Banking Theory and Practice, Central Bank of Montenegro, vol. 11(3), pages 137-154. 12. Kim, Y. and Kwon, O. (2019), "Central Bank Digital Currency and Financial Stability", Bank of Korea Working Paper. 13. Kumhof, M. and Noone, C. (2018), "Central bank digital currencies - design principles and balance sheet implications", Bank of England, Staff Working Paper No. 725. 14. Mersch, Y. (2017), "Digital base money: an assessment from the ECB perspective", speech, Helsinki, 16 January 2017. 15. Panetta, F. (2018), "21st Century Cash: Central Banking, technological innovation and digital currencies", Bocconi University. 16. Vučinić, M. (2020), "Fintech and financial stability: Potential influence of Fintech on financial stability, risks and benefits", Journal of Central Banking Theory and Practice, Central Bank of Montenegro, vol. 9(2), pages 43-66. 17. Vučinić, M. and Luburić, R.
(2022), "Fintech, Risk-Based Thinking and Cyber Risk", Journal of Central Banking Theory and Practice, Central Bank of Montenegro, vol. 11(2), pages 27-53.
{ "disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.2478/jcbtp-2023-0015?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.2478/jcbtp-2023-0015, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "CCBY", "status": "GOLD", "url": "https://sciendo.com/pdf/10.2478/jcbtp-2023-0015" }
2,023
[ "JournalArticle" ]
true
2023-05-01T00:00:00
[ { "paperId": "e51c5aae3d405ef4a71b853fd49690df082956e8", "title": "Central Bank Digital Currency: Scenarios of Implementation and Potential Consequences for Monetary System" }, { "paperId": "c9179af62222c2badd45cd18242903fb2d3a530c", "title": "Fintech, Risk-Based Thinking and Cyber Risk" }, { "paperId": "b061a57abfada65588a3f4d5a8f44fc36de1f548", "title": "Fintech and Financial Stability Potential Influence of FinTech on Financial Stability, Risks and Benefits" }, { "paperId": "5aa2691b106ea71b10c218a915f23b3fabc33f66", "title": "Tiered CBDC and the Financial System" }, { "paperId": "64e80e34fc664d9d7d86bae35a398a164c1c3590", "title": "On the Equivalence of Private and Public Money" }, { "paperId": "0debfa9cb8631487c8946d50a4658b76ed1bb33d", "title": "Central Bank Digital Currency and Financial Stability" }, { "paperId": "1ca2403d065e0715c560978012972b7e71c21f47", "title": "Central Bank Digital Currency and Monetary Policy" }, { "paperId": "665efab907e67c5a7ed457c1f5ad70bc0c030717", "title": "Central Bank Digital Currencies - Design Principles and Balance Sheet Implications" }, { "paperId": "97396fbef37644f319c4ba035886137df55959f8", "title": "The I Theory of Money" }, { "paperId": "5d2a97be97ed70c86addf5229ae2ffa5b39227e0", "title": "Bank Runs, Deposit Insurance, and Liquidity" }, { "paperId": "b365b874e5fc8f1bd860014912acd47e656db0dd", "title": "Digital Base Money: a few considerations from a central bank’s perspective" } ]
8,290
en
[ { "category": "Medicine", "source": "external" }, { "category": "Medicine", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/025e07ce9e6fd253b53c18e097d0ea6df186490f
[ "Medicine" ]
0.897687
Knowledge structure and emerging trends on osteonecrosis of the femoral head: a bibliometric and visualized study
025e07ce9e6fd253b53c18e097d0ea6df186490f
Journal of Orthopaedic Surgery and Research
[ { "authorId": "2119019381", "name": "Haiyang Wu" }, { "authorId": "2150057421", "name": "Kunming Cheng" }, { "authorId": "2077571148", "name": "Linjian Tong" }, { "authorId": "2115663919", "name": "Yulin Wang" }, { "authorId": "2109351991", "name": "Weiguang Yang" }, { "authorId": "104642162", "name": "Zhiming Sun" } ]
{ "alternate_issns": null, "alternate_names": [ "J Orthop Surg Res" ], "alternate_urls": [ "http://www.josr-online.com/", "http://www.josr-online.com/home" ], "id": "0cd86699-aba1-4cf4-b9d4-b95bb87da7f6", "issn": "1749-799X", "name": "Journal of Orthopaedic Surgery and Research", "type": "journal", "url": "https://josr-online.biomedcentral.com/" }
Background Osteonecrosis of the femoral head (ONFH) is a common disabling disease with considerable social and economic impacts. Although extensive studies related to ONFH have been conducted in recent years, a specific bibliometric analysis on this topic has not yet been performed. Our study attempted to summarize the comprehensive knowledge map, development landscape, and future directions of ONFH research with the bibliometric approach. Methods All publications concerning ONFH published from 2001 to 2020 were identified from Web of Science Core Collection. Key bibliometric indicators were calculated and evaluated using CiteSpace, VOSviewer, and the online bibliometric analysis platform. Results A total of 2594 publications were included. Our analysis revealed a significant exponential growth trend in the annual number of publications over the past 20 years ( R 2  = 0.9663). China, the USA, and Japan were the major contributors both from the quality and quantity points of view. Correlation analysis indicated that there was a high positive correlation between the number of publications and gross domestic product ( r  = 0.774), and a moderate positive correlation between publications and demographic factor ( r  = 0.673). All keywords were categorized into four clusters including Cluster 1 (etiology and risk factors study); Cluster 2 (basic research and stem cell therapy); cluster 3 (hip-preserving study); and Cluster 4 (hip replacement study). Stem cell therapy-related research has been recognized as an important research hotspot in this field. Several topics including exosomes, autophagy, biomarkers, osteogenic differentiation, microRNAs, steroid-induced osteonecrosis, mesenchymal stem cells, double-blind, early-stage osteonecrosis, and asymptomatic osteonecrosis were considered as research focuses in the near future. Conclusion Over the past two decades, increasing attention has been paid to global ONFH-related research. Our bibliometric findings provide valuable information for researchers to understand the basic knowledge structure, identify the current research hotspots, potential collaborators, and future research frontiers in this field.
https://doi.org/10.1186/s13018-022-03068-7 ## RESEARCH ARTICLE ## Open Access # Knowledge structure and emerging trends on osteonecrosis of the femoral head: a bibliometric and visualized study ### Haiyang Wu[1*†], Kunming Cheng[2†], Linjian Tong[1], Yulin Wang[1], Weiguang Yang[1] and Zhiming Sun[1,3*] **Abstract** **Background: Osteonecrosis of the femoral head (ONFH) is a common disabling disease with considerable social and economic impacts. Although extensive studies related to ONFH have been conducted in recent years, a specific bibliometric analysis on this topic has not yet been performed. Our study attempted to summarize the comprehensive knowledge map, development landscape, and future directions of ONFH research with the bibliometric approach.** **Methods: All publications concerning ONFH published from 2001 to 2020 were identified from the Web of Science Core Collection. Key bibliometric indicators were calculated and evaluated using CiteSpace, VOSviewer, and the online bibliometric analysis platform.** **Results: A total of 2594 publications were included. Our analysis revealed a significant exponential growth trend in the annual number of publications over the past 20 years (R² = 0.9663). China, the USA, and Japan were the major contributors both from the quality and quantity points of view. Correlation analysis indicated that there was a high positive correlation between the number of publications and gross domestic product (r = 0.774), and a moderate positive correlation between publications and demographic factor (r = 0.673). All keywords were categorized into four clusters including Cluster 1 (etiology and risk factors study); Cluster 2 (basic research and stem cell therapy); Cluster 3 (hip-preserving study); and Cluster 4 (hip replacement study). Stem cell therapy-related research has been recognized as an important research hotspot in this field. Several topics including exosomes, autophagy, biomarkers, osteogenic differentiation, microRNAs, steroid-induced osteonecrosis, mesenchymal stem cells, double-blind, early-stage osteonecrosis, and asymptomatic osteonecrosis were considered as research focuses in the near future.** **Conclusion: Over the past two decades, increasing attention has been paid to global ONFH-related research. Our bibliometric findings provide valuable information for researchers to understand the basic knowledge structure, identify the current research hotspots, potential collaborators, and future research frontiers in this field.** **Keywords: Osteonecrosis of the femoral head, Bibliometric analysis, Hotspots, VOSviewer, CiteSpace** *Correspondence: nfykdxwhy@126.com; szhm618@163.com †Haiyang Wu and Kunming Cheng contributed equally to the study. 1 Graduate School of Tianjin Medical University, No. 22 Qixiangtai Road, Tianjin 300070, China 3 Department of Orthopaedic Surgery, Tianjin Huanhu Hospital, No. 6 Jizhao Road, Jinnan District, Tianjin 300350, China Full list of author information is available at the end of the article **Introduction** Osteonecrosis of the femoral head (ONFH) is a common progressive disease typically characterized by reduction in vascular supply, bone metabolism disorder, and necrosis of the subchondral bone, eventually resulting in bone collapse of the femoral head [1]. It can be classified as traumatic and non-traumatic ONFH on the basis of diverse etiologies. Although the pathophysiology of this process has not yet been clearly elucidated, corticosteroid
use, alcoholism, smoking, inherited coagulation disorders, as well as systemic lupus erythematosus are typically considered to be high-risk factors for non-traumatic ONFH [1, 2]. In the USA alone, more than 20,000 new patients are diagnosed with non-traumatic ONFH each year, contributing to approximately 10% of the total number of total hip arthroplasties (THA) performed annually [3]. Another epidemiological study estimated that around 8.12 million Chinese people aged 15 years or over were affected by this condition, among whom 55.75% of females and 26.35% of males reported corticosteroid use [4]. Therefore, ONFH represents a major challenge in the orthopedic arena due to its high morbidity and disability rate, especially among young and middle-aged people. Many tactics for the treatment of ONFH depend on the severity of the condition, and the staging system formulated by the Association Research Circulation Osseous (ARCO) is one of the commonly used staging methods in clinical practice [2]. According to the changes in the intraosseous blood supply in different phases of disease progression, corresponding surgical and nonsurgical treatment strategies are recommended to prevent, or at least delay, the progression toward the stage of femoral head collapse, in which THA is unavoidable [2, 5]. Despite the availability of numerous hip-preserving surgical methods including core decompression, osteotomy, and bone grafting, there still exists controversy regarding whether these treatment modalities are meaningful and valuable, as more than 80% of patients with ONFH finally require THA [6, 7]. Additionally, other controversies derive from ONFH research, such as the pathogenesis, the optimal classification system, the practicability of pharmacological treatments, optimal treatment protocols and surgical timing of THA, predictors of outcomes, and so on [2, 3]. Motivated by these concerns, ONFH has piqued the interest of researchers worldwide, and a large number of related papers on this challenging topic have been published. To our knowledge, although some systematic reviews focusing on a specific subfield of ONFH research have been published, the global knowledge structure and research trends in this area have not been systematically studied yet.
Notably, the appearance of the bibliometric method has compensated for the shortcomings of literature reviews in a complementary fashion. Bibliometrics, first defined by Pritchard in 1969, is a visualization method to quantitatively assess the contribution of a research field by using mathematical and statistical approaches [8]. It is also regarded as an important approach to reveal research trends and predict research hotspots in a certain field [9, 10]. Over recent years, the application of bibliometric analysis has become very extensive in the biomedical sciences, due to the explosion in the quantity of scientific publications and the availability of several freeware bibliometric tools [11, 12]. In the field of hip surgery, one recent study explored the global trends and hotspots of scientific research on femoroacetabular impingement from 2000 to 2019, based on 2471 original articles indexed in Web of Science (WOS) [13]. Our research team and other groups have also investigated the publications on hip fracture [14], developmental dysplasia of the hip [15], and THA [16] by using bibliometric methods and assessed the co-authorship and co-citation networks in these areas. However, to date, no studies have applied the bibliometric method to analyze the global research trends on ONFH. In view of this, the aims of this study were to (1) identify the current status of the ONFH domain, including the distribution of annual outputs and the major players such as countries, institutions, and individuals; (2) analyze the cooperation networks at the level of countries, institutions, and authors; (3) summarize the main research directions and hotspots; and (4) propose research frontiers and potential hotspots in the near future. **Methods** **Source of bibliometric data and search strategies** Based on previous studies [14, 17, 18], the Science Citation Index Expanded (SCI-Expanded) of the Web of Science Core Collection (WOSCC) was selected as the main data source. The scientific literature was searched based on the titles (TI), abstracts (AB), and author keywords (AK) with the following search strategy: "femoral head necrosis" OR [(osteonecrosis OR necrosis) NEAR/2 ("femoral head")] OR ONFH. The proximity operator "NEAR/2" was used to combine search terms, which means two terms may be separated by a maximum of two words in any order (e.g., osteonecrosis NEAR/2 femoral head would have identified "osteonecrosis of the femoral head" and "osteonecrosis of femoral head"). A timespan of 20 years was set, and thus only literature published from 2001 to 2020 was included. Publication language was restricted to English, and only original articles and reviews were eligible for this bibliometric analysis. All data utilized in this work were downloaded from public databases and, therefore, ethics committee approval or informed consent was not required. **Data export and extraction** Considering that the database is regularly updated, all searches were done on a single day to avoid this potential bias. By using the "export" function in WoSCC, "full records and cited references" of retrieved records were exported as "tab delimited text (.txt)" to bibliometric tools for additional processing. Then the detailed data on general information including annual publications, countries, institutions, authors, source journals, funding sources, research areas, number of citations, and Hirsch index (H-index) were extracted.
The above procedure was completed by two investigators independently, and any disagreements were solved through discussion or, if necessary, by the senior author. Moreover, journal impact factors (JIF) and quartile ranks were collected from the 2020 Journal Citation Reports. The detailed literature search and selection process are shown in Fig. 1. **Fig. 1** Flow diagram of the literature search and selection process **Bibliometric analysis** To obtain a more comprehensive analysis, three bibliometric tools, including an online platform and two software packages, were used to perform this study. First, the online bibliometric analysis platform (available at: https://bibliometric.com/) was used to analyze academic cooperation networks between countries. Then, VOSviewer 1.6.16 and CiteSpace V 5.7 R2 software were further used for mapping and visualizing bibliometric networks of scientific publications. VOSviewer, a freely available Java-based software developed by van Eck and Waltman at Erasmus University, is one of the frequently used bibliometric tools for quantitatively analyzing the academic literature [11]. In this study, VOSviewer was used to visualize the following network maps of ONFH research: network maps of co-citation authors and journals, and co-occurrence analysis of keywords. Specifically, a co-citation network means that two items appear together in the bibliography of a third citing item, while a co-occurrence network represents relationships built according to the quantity of publications in which items occur together [8, 11]. Generally speaking, the visualization maps mainly consist of nodes and links with different colors. Nodes in the visualization map represent the analyzed elements such as authors, journals, or keywords, and the size of the nodes indicates the number of citations or occurrences [14]. The links between nodes reflect the relationship of co-citation or co-occurrence. An important parameter, total link strength (TLS), was used to quantitatively evaluate the strength of links [11, 14]. Detailed descriptions of the maps can be found in the software manual at https://www.vosviewer.com/documentation. Apart from that, we also employed another bibliometric software, called CiteSpace, which was developed by Professor Chaomei Chen of Drexel University, to perform further bibliometric analysis [12]. In the present study, CiteSpace was applied to analyze research cooperation relationships of authors and institutions, the timeline view map of co-citation references, and references with the strongest citation bursts. CiteSpace is capable of generating different types of visualization maps, such as the network map, the cluster view map, and the timeline view map [12]. Overall, all these visualization maps are also comprised of nodes and lines representing different meanings. Betweenness centrality (BC) is an important indicator that can identify the relative importance of a node within the networks, and nodes with the highest BC values (≥ 0.1) are usually known as hub nodes and are usually marked with purple rings [17, 19].
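To make the co-occurrence, total link strength, and betweenness-centrality concepts above concrete, here is a small illustrative sketch (not part of the study's actual VOSviewer/CiteSpace workflow) that builds a keyword co-occurrence network from per-publication keyword lists, weights links by the number of publications in which two keywords co-occur, sums them into a TLS per node, and computes betweenness centrality with networkx. The keyword lists are invented placeholders.

```python
from itertools import combinations
import networkx as nx

# Invented author-keyword lists for four hypothetical publications.
papers = [
    ["osteonecrosis", "femoral head", "core decompression"],
    ["osteonecrosis", "femoral head", "total hip arthroplasty"],
    ["osteonecrosis", "stem cell therapy", "core decompression"],
    ["stem cell therapy", "osteogenic differentiation", "femoral head"],
]

G = nx.Graph()
for keywords in papers:
    # Each unordered keyword pair within one paper adds 1 to the link weight.
    for a, b in combinations(sorted(set(keywords)), 2):
        w = G.get_edge_data(a, b, {"weight": 0})["weight"]
        G.add_edge(a, b, weight=w + 1)

# Total link strength (TLS): the sum of a node's link weights.
tls = {n: sum(d["weight"] for _, _, d in G.edges(n, data=True)) for n in G}
bc = nx.betweenness_centrality(G)

for node in sorted(G, key=tls.get, reverse=True):
    print(f"{node:28s} TLS={tls[node]:2d}  BC={bc[node]:.3f}")
```

In this toy network, "femoral head" and "osteonecrosis" end up with the highest TLS and BC, mirroring how hub nodes emerge in the real maps.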
More detailed software utilization skills and information about the visualization maps can be found in the CiteSpace manual (available at http://cluster.ischool.drexel.edu/~cchen/citespace/CiteSpaceManual.pdf). **Statistical analyses** R software (v3.6.3), SPSS (IBM SPSS Statistics 21, Inc., Chicago, IL, USA), and Microsoft Excel 2019 were used for descriptive analysis, statistical evaluation, data fitting, and plotting graphs. We computed the growth rate of publications over time with the calculation formula described previously by Guo et al. [20]. Pearson's correlation coefficient test was used to assess the correlation between continuous variables, and correlations were considered significant when the p value was < 0.05. **Results** **Publication outputs and trends** Following the aforementioned screening strategy, a total of 2594 documents including 2394 original articles and 200 reviews related to ONFH were identified covering the period 2001-2020. Figure 2 presents the specific numbers of annual documents about ONFH. The model fitting curve revealed a significant exponential growth trend in the annual number of publications over the past 20 years (y = 44.943e^(0.088x), R² = 0.9663). From 2001 to 2020, the average growth rate of scientific outputs was 33.41%. **Analysis of countries/regions and funding agencies** All publications were distributed among 71 countries/regions. China had published the most publications with 1077 (41.52%) articles/reviews, followed by the USA [392 (15.11%)], Japan [278 (10.72%)], South Korea [167 (6.44%)], and Germany [123 (4.74%)] (Table 1). **Fig. 2** The specific numbers of annual documents regarding ONFH
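The paper reports the exponential fit above and the Pearson correlations below, but not the code behind them (the authors used R, SPSS, and Excel). A rough Python equivalent of the two computations, with made-up annual counts and rough GDP figures standing in for the real data, might look like this:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import pearsonr

# Placeholder annual publication counts for 2001-2020 (NOT the study's data).
years = np.arange(1, 21)                        # x = 1 .. 20
counts = 44.943 * np.exp(0.088 * years)         # synthetic series near the fit
counts += np.random.default_rng(0).normal(0, 5, size=20)

# Exponential model y = a * exp(b * x), as in the reported fit.
def model(x, a, b):
    return a * np.exp(b * x)

(a, b), _ = curve_fit(model, years, counts, p0=(40.0, 0.1))
residuals = counts - model(years, a, b)
r2 = 1 - np.sum(residuals**2) / np.sum((counts - counts.mean())**2)
print(f"y = {a:.3f} * exp({b:.3f} x), R^2 = {r2:.4f}")

# Pearson correlation, e.g. publications vs. GDP by country (placeholders:
# the publication counts come from Table 1, the GDP figures are rough
# 2020 values in USD trillions, not the study's inputs).
pubs = np.array([1077, 392, 278, 167, 123], dtype=float)
gdp = np.array([14.7, 20.9, 5.0, 1.6, 3.8])
r, p = pearsonr(pubs, gdp)
print(f"Pearson r = {r:.3f}, p = {p:.3f}")
```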
International collaboration between countries in this domain was also analyzed. As demonstrated in Fig. 3A, extensive collaboration was observed between productive countries. For instance, China collaborated closely with the USA, Australia, Germany, and South Korea. The USA, South Korea, Greece, Japan, and Italy demonstrated active cooperation as well. In addition, the annual number of publications in the top 10 prolific countries from 2001 to 2020 is illustrated in Fig. 3B. Figure 3C lists the top 10 most active funding agencies in this field: four of them were from Japan, three from China, and the remaining three from the USA.

**Table 1 Top 20 most productive countries related to ONFH research**

| Ranking | Country | Publications, n | % of 2594 | H-index | Average citations per document |
|---|---|---|---|---|---|
| 1 | China | 1077 | 41.52 | 45 | 11.61 |
| 2 | USA | 392 | 15.11 | 55 | 29.35 |
| 3 | Japan | 278 | 10.72 | 33 | 16.28 |
| 4 | South Korea | 167 | 6.44 | 32 | 19.89 |
| 5 | Germany | 123 | 4.74 | 27 | 18.29 |
| 6 | UK | 102 | 3.93 | 28 | 24.54 |
| 7 | France | 73 | 2.81 | 23 | 25.14 |
| 8 | India | 59 | 2.27 | 15 | 12.63 |
| 9 | Turkey | 56 | 2.16 | 13 | 11.84 |
| 10 | Italy | 49 | 1.89 | 20 | 19.69 |
| 11 | Greece | 47 | 1.81 | 23 | 26.72 |
| 12 | Canada | 41 | 1.58 | 19 | 28.2 |
| 13 | Switzerland | 36 | 1.39 | 15 | 34.31 |
| 14 | Belgium | 31 | 1.20 | 18 | 36.13 |
| 15 | Australia | 27 | 1.04 | 12 | 27.52 |
| 16 | Spain | 27 | 1.04 | 10 | 11.85 |
| 17 | Israel | 24 | 0.93 | 12 | 38.13 |
| 18 | Brazil | 22 | 0.85 | 8 | 10.45 |
| 19 | Austria | 20 | 0.77 | 11 | 22.75 |
| 20 | Iran | 20 | 0.77 | 7 | 10.45 |

Ranking: according to the number of total publications

**Fig. 3** **A** International collaboration of countries in this domain. The area occupied per country is proportional to the number of documents. Line thickness reflects the closeness between countries, and a thicker line represents a stronger collaboration. **B** The annual number of publications in the top 10 prolific countries from 2001 to 2020. From 2010 onward, China surpassed the USA for the first time and has remained ahead since then. **C** The top 10 most active funding agencies in ONFH-related research. **D** The total number of publications, H-index, and average citations per item of the top 10 most prolific countries in this field

**Analysis of the most prolific institutions**

The top 10 most prolific institutions are laid out in Fig. 3D. All of them were Asian institutions, including 7 Chinese institutions, 2 Japanese institutions, and 1 Korean institution. Among them, Shanghai Jiao Tong University had the largest number of publications (77), followed by Kyushu University (58), Guangzhou University of Chinese Medicine (51), and Xi'an Jiaotong University (51). As for the other parameters, the H-index of Osaka University exhibited the highest value (18), followed closely by Shanghai Jiao Tong University (17), and Seoul National University had the highest average number of citations (23.05). A network visualization map of institution cooperation was generated by CiteSpace and is illustrated in Fig. 4A.

**Analysis of the most influential authors**

As shown in Fig. 4B, Zhang CQ from Shanghai Jiao Tong University contributed the highest number of papers, followed by Zhao DW from Dalian University and Motomura G from Kyushu University. Figure 4C illustrates the cooperation network map of authors; none of the included authors had a BC value of more than 0.1. In addition, the co-citation network among authors was constructed using VOSviewer. As displayed in Fig. 4D, only authors with a minimum of 100 citations were included. There were 55 nodes, 5 clusters, and 1439 links in the network map. Among them, Mont MA from Sinai Hospital of Baltimore occupied the largest node, with the most citations and the highest TLS.

**Fig. 4** **A** The cooperation network map of institutions generated by CiteSpace. Each node represents an institution, and the node size is proportional to the number of publications by that institution. Nodes with the highest BC values (≥ 0.1) are usually known as hub nodes and are marked with purple rings. The connecting line between nodes indicates a cooperation relationship, and the value of the link strength is also displayed on the lines. **B** The total number of publications, H-index, and average citations per item of the top 10 productive authors in this field. **C** The cooperation network map of productive authors generated by CiteSpace. The graphical explanations are the same as in A. **D** Author co-citation analysis by VOSviewer. Each node represents a different author, and the node size is proportional to the quantity of citations. The thickness of the connecting line between nodes indicates the link strength of a co-citation relationship, which can be weighted by a quantitative indicator, the TLS

**Analysis of the higher-impact journals**

The top 10 most prolific journals are listed in Table 2. _International Orthopaedics_ (JIF 3.075) published the greatest number of papers (123), followed by _Clinical Orthopaedics and Related Research_ (JIF 4.291) and _Journal of Arthroplasty_ (JIF 4.757), with 103 and 81 publications, respectively. According to the JIF, the JCR splits journals belonging to the same discipline into four equal parts, with the top 25% classified as Q1, the next 25–50% as Q2, and so on. More than half of the top 10 journals were categorized as Q1 or Q2. Figure 5 shows the network visualization map of the journal co-citation analysis; only journals with more than 200 citations were depicted.
Of the 60 journals satisfying the criteria, the top 5 co-cited journals were _Clinical Orthopaedics and Related Research_, _Journal of Bone and Joint Surgery American Volume_, _Bone & Joint Journal_, _Journal of Arthroplasty_, and _International Orthopaedics_.

**Table 2 Top 10 journals with most publications in the field of ONFH research**

| Ranking | Journal title | Output | % of 2594 | JIF (2020) | Quartile in category (2020) |
|---|---|---|---|---|---|
| 1 | International Orthopaedics | 123 | 4.74 | 3.075 | Q2 |
| 2 | Clinical Orthopaedics and Related Research | 103 | 3.97 | 4.291 | Q1 |
| 3 | Journal of Arthroplasty | 81 | 3.12 | 4.757 | Q1 |
| 4 | Journal of Bone and Joint Surgery American Volume | 79 | 3.05 | 5.284 | Q2 |
| 5 | Archives of Orthopaedic and Trauma Surgery | 69 | 2.66 | 3.067 | Q3 |
| 6 | Bone & Joint Journal | 67 | 2.58 | 5.082 | Q1 |
| 7 | Hip International | 54 | 2.08 | 2.135 | Q4 |
| 8 | Medicine | 53 | 2.04 | 1.889 | Q3 |
| 9 | BMC Musculoskeletal Disorders | 50 | 1.93 | 2.355 | Q3/Q4 |
| 10 | Journal of Orthopaedic Surgery and Research | 48 | 1.85 | 2.359 | Q2 |

**Fig. 5** The network visualization map of journal co-citation analysis by VOSviewer. Each node represents a different journal, and the node size is proportional to the quantity of citations. Other graphical explanations are the same as in Fig. 4D

**Analysis of highly cited references**

Reference co-citation analysis is one of the most attractive functions of CiteSpace and is often applied to determine the research foci of a given field. As shown in Fig. 6 and Table 3, all the nodes in the reference co-citation network map could be grouped into 13 major clusters. In CiteSpace, the weighted mean silhouette value (S value) and the modularity value (Q value) are two indicators used to evaluate the significance of a clustering, and it is generally believed that S > 0.7 and Q > 0.3 indicate that the clusters are convincing. In this study, the mean S value equals 0.7481 and Q equals 0.7794, indicating the rationality of this clustering strategy.

**Fig. 6** The timeline view map of reference co-citation analysis. For each cluster, the position of each node shows the time of publication of the document, and the node size represents the number of citations
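For intuition about what the Q value measures, here is a minimal sketch on a toy co-citation graph. The edges are hypothetical and this is not CiteSpace's clustering algorithm; it only shows modularity being evaluated against the Q > 0.3 rule of thumb quoted above.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

# Toy co-citation graph: two tightly knit reference groups joined by one link.
G = nx.Graph([("r1", "r2"), ("r1", "r3"), ("r2", "r3"),
              ("r4", "r5"), ("r5", "r6"), ("r4", "r6"), ("r3", "r4")])

communities = greedy_modularity_communities(G)
Q = modularity(G, communities)
print(len(communities), round(Q, 2))  # 2 clusters, Q ≈ 0.36 (> 0.3, a convincing split)
```

The silhouette value plays an analogous role for cluster homogeneity, scoring how well each member fits its own cluster relative to the others.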
Moreover, as shown in Fig. 7, the top 25 references with the strongest citation bursts were identified in terms of their burst values.

**Table 3 Main clusters of co-cited references**

| Cluster ID | Size | Silhouette | Label | Mean (year) |
|---|---|---|---|---|
| #0 | 32 | 0.965 | Risk factor | 2015 |
| #1 | 31 | 0.9 | Glucocorticoid-induced osteonecrosis | 2010 |
| #2 | 25 | 0.881 | Autologous transplantation | 2006 |
| #3 | 21 | 0.932 | Non-weight bearing | 1998 |
| #4 | 21 | 0.972 | Intertrochanteric osteotomy | 1998 |
| #5 | 21 | 0.934 | Fibular grafting | 2001 |
| #6 | 20 | 0.915 | Prognostic value | 1999 |
| #7 | 19 | 0.893 | Stem cell therapy | 2015 |
| #8 | 19 | 0.788 | t-786c enos polymorphism | 2002 |
| #9 | 17 | 0.846 | Gene expression | 2007 |
| #10 | 17 | 0.967 | Concentrated autologous bone marrow | 2010 |
| #11 | 14 | 1 | Propylene fumarate | 2004 |
| #12 | 14 | 0.951 | Early collapse | 2002 |

**Fig. 7** Top 25 references with the strongest citation bursts in the ONFH field. The red segment represents the begin and end year of the burst duration

**Analysis of the most concerned keywords**

In this study, keywords that occurred at least 10 times were extracted from the 2594 publications and analyzed by VOSviewer. After deleting meaningless keywords and merging keywords with the same meaning, a total of 319 keywords were identified. Based on the research categories of these keywords, the VOSviewer software was able to divide all keywords into several major clusters with different colors. As shown in the network visualization map of Fig. 8A, all the included keywords were classified into the following four clusters: Cluster 1 (etiology and risk factors study, green nodes); Cluster 2 (basic research and stem cell therapy, red nodes); Cluster 3 (hip-preserving study, blue nodes); and Cluster 4 (hip replacement study, purple nodes). In addition to this, we also provided an overlay visualization map of the keywords co-occurrence analysis in Fig. 8B.

**Fig. 8** **A** Network visualization map of the co-occurrence network of keywords using VOSviewer. Each node represents a certain keyword. Node and font size represent the number of keyword occurrences. Keywords with close correlation are assigned to one cluster with the same color. **B** Overlay visualization map of the keywords analysis in the ONFH field. The color of each node shows the average appearing year (AAY) of the keyword, according to the color gradient shown at the bottom right. Blue-purple nodes reflect keywords that appeared relatively early, while dark red nodes represent recent occurrences
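Conceptually, the co-occurrence network behind Fig. 8 reduces to pairwise counting over each paper's keyword list. A minimal sketch, with hypothetical keyword lists (VOSviewer does the real extraction, merging, and clustering):

```python
from collections import Counter
from itertools import combinations

# Hypothetical per-paper keyword lists.
papers = [
    ["osteonecrosis", "risk factor", "corticosteroid"],
    ["osteonecrosis", "stem cell therapy", "core decompression"],
    ["osteonecrosis", "risk factor", "alcohol"],
]

cooccurrence = Counter()
for kws in papers:
    # Every unordered keyword pair within one paper adds one co-occurrence.
    for a, b in combinations(sorted(set(kws)), 2):
        cooccurrence[(a, b)] += 1

print(cooccurrence.most_common(3))  # ('osteonecrosis', 'risk factor') counted twice
```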
**Discussion**

**Worldwide research tendency of ONFH from 2001 to 2020**

The number of publications in a certain field reflects the productivity and development of the topic over the years. ONFH-related research drew increasing attention among scholars from 2001 to 2020. The global number of publications in this field gradually increased from 58 in 2001 to 297 in 2020, and 71.05% of them were published in the last 10 years. One important reason for this growth is that the incidence of ONFH has been increasing worldwide, and this devastating condition has become an increasingly prominent issue globally [1, 2, 21, 22]. Based on current trends, Mont et al. [23] reported that the total number of individuals affected by ONFH is estimated to reach 20 million worldwide within the next 10 years. Besides, the increase in annual publications is inseparable from the advances in basic research and clinical trials in recent years [24, 25].

**Knowledge structure of ONFH-related publications**

**_Countries_**

It is not difficult to see that the research centers of this field are mainly concentrated in East Asia, North America, and Western Europe. The results of correlation analysis indicated that part of the discrepancy in the quantity of publications across different countries can be explained by economic or population factors. The H-index is a bibliometric indicator that simultaneously measures the quality (mainly depending on citations) and quantity (number of documents) of the publications of a journal, author, or country [19, 26]. In this study, the USA, China, and Japan were the top three countries with the highest H-index. Therefore, this result further proved that China, the USA, and Japan were the major contributors from both the quality and quantity points of view.
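For reference, the H-index of any citation list can be computed directly from Hirsch's definition; a minimal sketch with made-up citation counts:

```python
def h_index(citations):
    """Largest h such that at least h publications have at least
    h citations each (Hirsch, 2005)."""
    h = 0
    for rank, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4
print(h_index([25, 8, 5, 3, 3]))  # 3 — one blockbuster paper does not raise h much
```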
**_Institutions_**

In terms of research institutions, the most prolific were Shanghai Jiao Tong University, Kyushu University, Guangzhou University of Chinese Medicine, and Xi'an Jiaotong University. Nevertheless, inter-institutional cooperation levels were relatively low, and cooperation was primarily conducted among Asian research institutions. Furthermore, none of the institutions had a BC value greater than 0.1, indicating that no single institution occupied the absolutely central position in the collaboration network [17, 19]. In view of this, strengthening cooperative networks between different research institutions and teams will be important for future studies, whether for basic scientific research or clinical trials.

**_Authors_**

An analysis of the most influential authors is helpful for scholars to learn about existing partnerships and identify potential cooperative subjects at home and abroad. As shown in Fig. 4B, Zhang CQ, Zhao DW, and Motomura G were the top three contributors in this field. Zhang CQ and colleagues mainly focused on the application of free vascularized fibula grafting for the treatment of ONFH [27]. Apart from that, a study conducted by their research team, which reported the potential preventative effect of exosomes secreted by induced pluripotent stem cell-derived mesenchymal stem cells (iPS-MSC-Exos) on ONFH via promoting local angiogenesis, also received great attention [28]. Co-citation analysis is usually considered a better method to evaluate the academic influence of a journal or a scholar [17]. Accordingly, the co-citation network among authors was constructed using VOSviewer. Mont MA from Sinai Hospital of Baltimore occupied the largest node, with the most citations and the highest TLS. Further analysis found that several high-quality reviews regarding the diagnosis, classification systems, and treatment of ONFH published by Mont et al. have achieved a high number of citations [29, 30].

**_Journals_**

As for the journal analysis, _International Orthopaedics_, _Clinical Orthopaedics and Related Research_, and _Journal of Arthroplasty_ were the top three journals with the most publications. Although China is the largest publishing country, none of the top 10 journals is Chinese, indicating that China should strengthen international journals in this field so as to attract more scientific publications and spread its academic perspective. Notably, to address this issue, the Chinese government has continuously increased its investment in the construction of first-class academic journals in recent years [31].

**An overview of research focuses and frontiers**

**_Reference analysis_**

Reference co-citation analysis is often applied to determine the research foci of a given field. All the publications and their reference data were used to create homogeneous clusters: references that were tightly connected were placed into the same cluster, and otherwise into different clusters. Our findings demonstrated that there were 13 major clusters in the co-citation network map. The largest cluster was "risk factor" (#0) [1, 2, 32]. Figure 6 shows the timeline view of the major clusters, which illustrates the temporal and evolutionary characteristics of each cluster. The development of cluster 3, cluster 4, and cluster 6 occurred earliest, whereas cluster 0 (risk factor) and cluster 7 (stem cell therapy) were the recent research topics in the field of ONFH, which reflects a shift in research focus. Apart from that, burst detection of references is another approach to track and capture research hotspots. References with the strongest citation bursts, indicating that they received special attention during a period, are generally acknowledged as the research basis of the frontiers in a certain field. As shown in Fig. 7, the strongest burst starting from 2015 was from the paper published by Mont MA and colleagues [30], followed by Moya-Angeler et al. [3] in 2015 and Mont MA et al. [29] in 2006. It can also be observed that the first reference with a citation burst emerged in 2004, due to an article from 2002, and its burst continued for 4 years. Of note, the bursts of several references after 2015 are still ongoing, suggesting that these topics have gained considerable attention in recent years and deserve further attention in future periods. It is worth noting that most of these references involved stem cell therapy [33–35].
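CiteSpace's burst detection is based on Kleinberg's burst-detection algorithm; the following deliberately naive stand-in (with invented yearly citation counts) only conveys the underlying idea of a burst as a bounded period of unusually intense citation activity, and is not the algorithm CiteSpace runs.

```python
# Hypothetical yearly citation counts for one reference.
counts = {2004: 1, 2005: 2, 2006: 9, 2007: 11, 2008: 10, 2009: 3, 2010: 2}

def naive_bursts(counts, factor=1.5):
    """Flag years whose count exceeds `factor` times the overall mean —
    a crude proxy for a citation burst."""
    mean = sum(counts.values()) / len(counts)
    return [year for year, c in sorted(counts.items()) if c > factor * mean]

print(naive_bursts(counts))  # [2006, 2007, 2008]: a burst beginning in 2006
```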
**_Keywords analysis_**

Generally, the author keywords of an article are the most representative terms used to give a brief overview of its research theme, and co-occurrence analysis of keywords is a common bibliometric method to present the knowledge content and structure visually and to uncover the evolution process and hot topics of a field [14]. Based on the research categories of these keywords, the VOSviewer software was able to divide all keywords into several major clusters with different colors. As shown in the network visualization map of Fig. 8A, all the included keywords were classified into four main research clusters: Cluster 1 (etiology and risk factors study); Cluster 2 (basic research and stem cell therapy); Cluster 3 (hip-preserving study); and Cluster 4 (hip replacement study). In addition to this, the VOSviewer software can color all the included keywords according to their AAY. It can be seen that prior to 2015, ONFH research mainly focused on "hip replacement study" in Cluster 4 and "hip-preserving study" in Cluster 3, whereas keywords belonging to Cluster 2 ("basic research and stem cell therapy") had the latest AAY of all clusters. Stem cells possess the ability to self-renew and differentiate into various cell types, such as osteoblasts and endothelial cells, to promote angiogenesis as well as bone regeneration [33]. In the meantime, they can also secrete a broad range of biological factors, including multiple cytokines, growth factors, and exosomes, to promote new blood vessel formation and rebuild the blood supply in necrotic regions [35]. As far as we know, multiple rigorous randomized controlled trials (RCTs) on the efficacy of stem cell therapy for early-stage ONFH have been initiated or are currently in progress [23]. Yet at the same time, their clinical value remains to be further elucidated. Thus, stem cell therapy has become one of the most promising areas for ONFH.

Additionally, in Cluster 2, the keywords with the latest AAY, such as exosomes, autophagy, biomarkers, osteogenic differentiation, microRNAs, steroid-induced osteonecrosis, and mesenchymal stem cells, may have great potential to become hot topics in the near future. For example, miRNAs are small non-coding RNAs that broadly regulate gene expression by specifically binding to complementary sequences in the 3'-untranslated regions of their target RNAs. In recent years, the field of miRNAs has emerged as a focus of ONFH research and has received extensive attention from researchers in China and other countries [33]. Some scholars have used the microarray method to compare miRNA expression in patients with ONFH and in patients with femoral neck fracture. Of the 17 miRNAs identified with differential expression, 12 were up-regulated and 5 were down-regulated, suggesting that aberrant miRNA expression might be involved in the pathogenesis of ONFH and could thus serve as a diagnostic marker for ONFH [36]. Additionally, accumulating evidence demonstrates that multiple miRNAs could act as novel therapeutic targets for the prevention and treatment of ONFH by regulating osteogenic and adipogenic differentiation in MSCs [33, 35]. In terms of steroid-induced osteonecrosis, it is worth emphasizing that with the ongoing global spread of coronavirus disease 2019 (COVID-19), and despite great strides in management, corticosteroids remain the mainstay treatment for moderate to severe acute respiratory syndrome (SARS), and with this arise challenges such as steroid-induced ONFH, especially in patients with long-term or high-dose use [37]. Some scholars have noted the potential risk and called for judicious use of corticosteroids in COVID-19 patients, in particular not recommending them for routine use [38]. Aside from Cluster 2, several topics with a relatively late AAY in the other clusters, including double-blind trials [39], early-stage osteonecrosis [40, 41], and asymptomatic osteonecrosis [42], also deserve further attention.
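The AAY referred to throughout this discussion is simply the mean publication year over a keyword's occurrences; a minimal sketch with invented data:

```python
from statistics import mean

# Hypothetical map: keyword -> publication years of the papers using it.
occurrences = {
    "exosomes": [2017, 2019, 2020, 2020],
    "core decompression": [2003, 2007, 2011, 2015],
}

# Average appearing year (AAY), used by VOSviewer to color overlay maps:
# recent keywords trend red, older ones blue-purple.
aay = {kw: mean(years) for kw, years in occurrences.items()}
print(aay)  # exosomes: 2019.0, core decompression: 2009.0
```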
**Limitation**

Despite the rigorous bibliometric analysis of this study, there were still several inevitable shortcomings. For example, we only analyzed bibliometric data from the WOSCC database, which potentially missed several relevant publications recorded in other databases, such as Scopus and PubMed [43]. Moreover, since only English publications were considered, it is unavoidable that several important publications in non-English languages were omitted. As for the keyword clustering analysis, it might not be appropriate to combine keywords with the same meaning into one node, as different keywords might belong to clinical or basic research, respectively. Finally, the latest publications from 2021 were not incorporated, since they have lacked sufficient time to accumulate considerable citations; this might in part affect our conclusions, given the rapid updating of research hotspots and frontiers.

**Conclusion**

Overall, the ascending trend in the annual number of publications indicates that ONFH has attracted a great deal of interest from researchers worldwide, especially in the last 10 years. China, the USA, and Japan were the major contributors, and part of the discrepancy in the quantity of publications across different countries can be explained by economic or population factors. The most prolific institution was Shanghai Jiao Tong University. Professor Zhang CQ and Mont MA were the most influential authors, with the highest numbers of publications and citations, respectively. According to the keywords analysis, all the selected keywords could be categorized into four major clusters. Stem cell therapy-related research has been recognized as an important research hotspot in this field. It is recommended to pay more attention to topics including exosomes, autophagy, biomarkers, osteogenic differentiation, microRNAs, steroid-induced osteonecrosis, mesenchymal stem cells, double-blind trials, early-stage osteonecrosis, and asymptomatic osteonecrosis, which have great potential to continue to be research foci in the near future.

**Abbreviations** ONFH: Osteonecrosis of the femoral head; THA: Total hip arthroplasty; WOSCC: Web of Science Core Collection; GDP: Gross domestic product; JIF: Journal impact factor; TLS: Total link strength; BC: Betweenness centrality; AAY: Average appearing year; MSCs: Mesenchymal stem cells.

**Acknowledgements** The authors thank Dr. Zhou Yan of Tianjin Medical University and the "home-for-researchers" company (https://www.home-for-researchers.com/) for their help in polishing our English writing.

**Author's contributions** All authors contributed to the study conception and design. LT, YW, and WY collected the data and prepared the material. Data collection and analysis were performed by HW and KC. The first draft of the manuscript was written by HW and KC. ZS revised the work. All authors read and approved the final manuscript.

**Funding** This work was supported by the Tianjin Municipal Health Bureau (Grant Number 14KG115) and the Key Program of the Natural Science Foundation of Tianjin (Grant Number 20JCZDJC00730).

**Availability of data and materials** All the data can be downloaded from the Web of Science Core Collection.

**Declarations**

**Ethics approval and consent to participate** Ethical approval was not required for this study, as all data were downloaded from public databases and did not involve any human or animal participants.

**Consent for publication** All authors agreed with the content and gave explicit consent to submit, and they obtained consent from the responsible authorities at the institute/organization where the work was carried out.

**Competing interests** Haiyang Wu, Kunming Cheng, Linjian Tong, Yulin Wang, Weiguang Yang, and Zhiming Sun declare that they have no competing interests.

**Author details** 1 Graduate School of Tianjin Medical University, No. 22 Qixiangtai Road, Tianjin 300070, China. 2 Department of Intensive Care Unit, The Second Affiliated Hospital of Zhengzhou University, Zhengzhou 450014, Henan, China. 3 Department of Orthopaedic Surgery, Tianjin Huanhu Hospital, No. 6 Jizhao Road, Jinnan District, Tianjin 300350, China.

Received: 19 August 2021 Accepted: 15 March 2022

**References**
1. Tan B, Li W, Zeng P, et al. Epidemiological study based on China osteonecrosis of the femoral head database. Orthop Surg. 2021;13:153–60.
2. Zhao D, Zhang F, Wang B, et al. Guidelines for clinical diagnosis and treatment of osteonecrosis of the femoral head in adults (2019 version). J Orthop Transl. 2020;21:100–10.
3. Moya-Angeler J, Gianakos AL, Villa JC, et al. Current concepts on osteonecrosis of the femoral head. World J Orthop. 2015;6:590–601.
4. Zhao DW, Yu M, Hu K, et al.
Prevalence of nontraumatic osteonecrosis of the femoral head and its associated risk factors in the Chinese population: results from a nationally representative survey. Chin Med J (Engl). 2015;128:2843–50. 5. Wu CT, Yen SH, Lin PC, et al. Long-term outcomes of Phemister bone grafting for patients with non-traumatic osteonecrosis of the femoral head. Int Orthop. 2019;43:579–87. 6. Johnson AJ, Mont MA, Tsao AK, et al. Treatment of femoral head osteonecrosis in the United States: 16-year analysis of the Nationwide Inpatient Sample. Clin Orthop Relat Res. 2014;472:617–23. 7. Liu L, Gao F, Sun W, et al. Investigating clinical failure of core decompression with autologous bone marrow mononuclear cells grafting for the treatment of non-traumatic osteonecrosis of the femoral head. Int Orthop. 2018;42:1575–83. 8. Wu H, Tong L, Wang Y, et al. Bibliometric analysis of global research trends on ultrasound microbubble: a quickly developing field. Front Pharmacol. 2021;12:646626. 9. Li C, Ojeda-Thies C, Xu C, et al. Meta-analysis in periprosthetic joint infection: a global bibliometric analysis. J Orthop Surg Res. 2020;15:251. 10. Li C, Wang L, Perka C, et al. Clinical application of robotic orthopedic surgery: a bibliometric study. BMC Musculoskelet Disord. 2021;22:968. 11. van Eck NJ, Waltman L. Software survey: VOSviewer, a computer program for bibliometric mapping. Scientometrics. 2010;84:523–38. 12. Synnestvedt MB, Chen C, Holmes JH. CiteSpace II: visualization and knowledge discovery in bibliographic databases. AMIA Annu Symp Proc. 2005;2005:724–8. 13. Tang F, Dai WB, Li XL, et al. Publication trends and hot spots in femoroacetabular impingement research: a 20-year bibliometric analysis. J Arthroplasty. 2021;36:2698–707. 14. Wu H, Li Y, Tong L, et al. Worldwide research tendency and hotspots on hip fracture: a 20-year bibliometric analysis. Arch Osteoporos. 2021;16:73. 15. Wu H, Wang Y, Tong L, et al. The global research trends and hotspots on developmental dysplasia of the hip: a bibliometric and visualized study. Front Surg. 2021;8:671403. 16. Zhang W, Tang N, Li X, et al. The top 100 most cited articles on total hip arthroplasty: a bibliometric analysis. J Orthop Surg Res. 2019;14:412. 17. Gao J, Xing D, Dong S, et al. The primary total knee arthroplasty: a global analysis. J Orthop Surg Res. 2020;15:190. 18. Giannoudis PV, Chloros GD, Ho YS. A historical review and bibliometric analysis of research on fracture nonunion in the last three decades. Int Orthop. 2021;45:1663–76. 19. Wu H, Zhou Y, Xu L, et al. Mapping knowledge structure and research frontiers of ultrasound-induced blood-brain barrier opening: a scientometric study. Front Neurosci. 2021;15:706105. 20. Guo Y, Hao Z, Zhao S, et al. Artificial intelligence in health care: bibliometric analysis. J Med Internet Res. 2020;22:e18228. 21. Sodhi N, Acuna A, Etcheson J, et al. Management of osteonecrosis of the femoral head. Bone Joint J. 2020;102:122–8. 22. Cooper C, Steinbuch M, Stevenson R, et al. The epidemiology of osteonecrosis: findings from the GPRD and THIN databases in the UK. Osteoporos Int. 2010;21:569–77. 23. Mont MA, Zywiel MG, Marker DR, et al. The natural history of untreated asymptomatic osteonecrosis of the femoral head: a systematic literature review. J Bone Joint Surg Am. 2010;92:2165–70. 24. Mao L, Jiang P, Lei X, et al. Efficacy and safety of stem cell therapy for the early-stage osteonecrosis of femoral head: a systematic review and metaanalysis of randomized controlled trials. Stem Cell Res Ther. 2020;11:445. 25. 
Huang C, Wen Z, Niu J, Lin S, Wang W. Steroid-induced osteonecrosis of the femoral head: novel insight into the roles of bone endothelial cells in pathogenesis and treatment. Front Cell Dev Biol. 2021;9:777697. 26. Hirsch JE. An index to quantify an individual’s scientific research output. Proc Natl Acad Sci U S A. 2005;102:16569–72. 27. Zhang CQ, Gao YS, Zhu ZH, et al. Why we choose free vascularized fibular grafting for osteonecrosis of the femoral head? Microsurgery. 2011;31:417–8. 28. Liu X, Li Q, Niu X, et al. Exosomes secreted from human-induced pluripotent stem cell-derived mesenchymal stem cells prevent osteonecrosis of the femoral head by promoting angiogenesis. Int J Biol Sci. 2017;13:232–44. 29. Mont MA, Jones LC, Hungerford DS. Nontraumatic osteonecrosis of the femoral head: ten years later. J Bone Joint Surg Am. 2006;88:1117–32. 30. Mont MA, Cherian JJ, Sierra RJ, et al. Nontraumatic osteonecrosis of the femoral head: where do we stand today? A ten-year update. J Bone Joint Surg Am. 2015;97:1604–27. 31. Zu GA. WJG sets an example of internationalization for other Chinese academic journals. World J Gastroenterol. 2010;16:2707–9. 32. Wu H, Shang R, Cai X, et al. Single ilioinguinal approach to treat complex acetabular fractures with quadrilateral plate involvement: outcomes using a novel dynamic anterior plate-screw system. Orthop Surg. 2020;12:488–97. 33. Hao C, Yang S, Xu W, et al. MiR-708 promotes steroid-induced osteonecrosis of femoral head, suppresses osteogenic differentiation by targeting SMAD3. Sci Rep. 2016;6:22599. ----- 34. Houdek MT, Wyles CC, Packard BD, et al. Decreased osteogenic activity of mesenchymal stem cells in patients with corticosteroid-induced osteonecrosis of the femoral head. J Arthroplasty. 2016;31:893–8. 35. Hernigou P, Flouzat-Lachaniette CH, Delambre J, et al. Osteonecrosis repair with bone marrow cell therapies: state of the clinical art. Bone. 2015;70:102–9. 36. Yuan HF, Von Roemeling C, Gao HD, et al. Analysis of altered microRNA expression profile in the reparative interface of the femoral head with osteonecrosis. Exp Mol Pathol. 2015;98:158–63. 37. Gálvez-Romero JL, Palmeros-Rojas O, Real-Ramírez FA, et al. Cyclosporine A plus low-dose steroid treatment in COVID-19 improves clinical outcomes in patients with moderate to severe disease: a pilot study. J Intern Med. 2021;289:906–20. 38. Zhang S, Wang C, Shi L, et al. Beware of steroid-induced avascular necrosis of the femoral head in the treatment of COVID-19-experience and lessons from the SARS epidemic. Drug Des Dev Ther. 2021;15:983–95. 39. Hauzeur JP, De Maertelaer V, Baudoux E, et al. Inefficacy of autologous bone marrow concentrate in stage three osteonecrosis: a randomized controlled double-blind trial. Int Orthop. 2018;42:1429–35. 40. Yue J, Gao H, Guo X, et al. Fibula allograft propping as an effective treatment for early-stage osteonecrosis of the femoral head: a systematic review. J Orthop Surg Res. 2020;15:206. 41. Hua KC, Yang XG, Feng JT, et al. The efficacy and safety of core decompression for the treatment of femoral head necrosis: a systematic review and meta-analysis. J Orthop Surg Res. 2019;14:306. 42. Wei QS, Hong GJ, Yuan YJ, et al. Huo Xue Tong Luo capsule, a vasoactive herbal formula prevents progression of asymptomatic osteonecrosis of femoral head: a prospective study. J Orthop Transl. 2018;18:65–73. 43. Wu H, Sun Z, Tong L, et al. Bibliometric analysis of global research trends on male osteoporosis: a neglected field deserves more attention. Arch Osteoporos. 
2021;16:154. **Publisher's Note** Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
{ "disclaimer": "Notice: Paper or abstract available at https://pmc.ncbi.nlm.nih.gov/articles/PMC8960091, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "CCBY", "status": "GOLD", "url": "https://josr-online.biomedcentral.com/track/pdf/10.1186/s13018-022-03068-7" }
2,022
[ "JournalArticle" ]
true
2022-03-28T00:00:00
[ { "paperId": "505776b938f60eecd327a3d8e3563c5bfaed0073", "title": "Steroid-Induced Osteonecrosis of the Femoral Head: Novel Insight Into the Roles of Bone Endothelial Cells in Pathogenesis and Treatment" }, { "paperId": "d845899e33ccd1a56c53a130f08a7bd29fa6246c", "title": "The Global Research Trends and Hotspots on Developmental Dysplasia of the Hip: A Bibliometric and Visualized Study" }, { "paperId": "66f503781cd203bd9e7f1dc744a8acd955f0535e", "title": "Bibliometric analysis of global research trends on male osteoporosis: a neglected field deserves more attention" }, { "paperId": "4e71222a7c12c7df5310e538e054b7e7dc657c89", "title": "Mapping Knowledge Structure and Research Frontiers of Ultrasound-Induced Blood-Brain Barrier Opening: A Scientometric Study" }, { "paperId": "7b25720059f573000f59eecf78b5809c418d19a6", "title": "Clinical application of robotic orthopedic surgery: a bibliometric study" }, { "paperId": "5f45eee62a0778f34dbbaa10fbf42ba403c32b8d", "title": "Bibliometric Analysis of Global Research Trends on Ultrasound Microbubble: A Quickly Developing Field" }, { "paperId": "fc24e5ff6525ca1d8cdf7ea6ee47712ef7993174", "title": "A historical review and bibliometric analysis of research on fracture nonunion in the last three decades" }, { "paperId": "71920364958c3d9a8b32c3084c2e94e1fa733f7b", "title": "Worldwide research tendency and hotspots on hip fracture: a 20-year bibliometric analysis" }, { "paperId": "3a064dcea528939e0ca4055afefb8e106ec9d2b1", "title": "Publication Trends and Hot Spots in Femoroacetabular Impingement Research: A 20-Year Bibliometric Analysis." }, { "paperId": "fb514be4d63ae41fcb05e2cdbcf222a03f5f1f00", "title": "Beware of Steroid-Induced Avascular Necrosis of the Femoral Head in the Treatment of COVID-19—Experience and Lessons from the SARS Epidemic" }, { "paperId": "31cb39d93748309295b0b4db9f0a2147bf2b5464", "title": "Epidemiological Study Based on China Osteonecrosis of the Femoral Head Database" }, { "paperId": "0c866c3f7bfcef1839bf41dd4cdde80f99608fce", "title": "Cyclosporine A plus low‐dose steroid treatment in COVID‐19 improves clinical outcomes in patients with moderate to severe disease: A pilot study" }, { "paperId": "eed028c1f96ff0d45cb3524b8dbd391e501404dc", "title": "Efficacy and safety of stem cell therapy for the early-stage osteonecrosis of femoral head: a systematic review and meta-analysis of randomized controlled trials" }, { "paperId": "f17aff43609f02c542abbe8989e02881e0b5ed75", "title": "Fibula allograft propping as an effective treatment for early-stage osteonecrosis of the femoral head: a systematic review" }, { "paperId": "17d180dae73468d8c6ca37176f6d965ad24ff0d2", "title": "The primary total knee arthroplasty: a global analysis" }, { "paperId": "08c1943d4d11486157432b0b2cd97c8e538883b8", "title": "Meta-analysis in periprosthetic joint infection: a global bibliometric analysis" }, { "paperId": "9f031630e63769594b8e5e35a9476a9614c23142", "title": "Single Ilioinguinal Approach to Treat Complex Acetabular Fractures with Quadrilateral Plate Involvement: Outcomes Using a Novel Dynamic Anterior Plate–Screw System" }, { "paperId": "2bbc7e46425d8dabb2c2ebbf28dbbb0462d3b5e3", "title": "Artificial Intelligence in Health Care: Bibliometric Analysis" }, { "paperId": "d07a582c5207c70972627aa5e1ffb94d26d18717", "title": "Guidelines for clinical diagnosis and treatment of osteonecrosis of the femoral head in adults (2019 version)" }, { "paperId": "083f4fbbfe2ada334e6aae348594645394d5ef74", "title": "The top 100 most cited articles on total hip 
arthroplasty: a bibliometric analysis" }, { "paperId": "1b25cff2ff1be73ead0d4527fd9dd5005788d049", "title": "The efficacy and safety of core decompression for the treatment of femoral head necrosis: a systematic review and meta-analysis" }, { "paperId": "30461c5ec1fcde2a29d59517dbfa4cc5ed8946dd", "title": "Huo Xue Tong Luo capsule, a vasoactive herbal formula prevents progression of asymptomatic osteonecrosis of femoral head: A prospective study" }, { "paperId": "f86d81143f243a1fe99afe85f05d835efb232387", "title": "Inefficacy of autologous bone marrow concentrate in stage three osteonecrosis: a randomized controlled double-blind trial" }, { "paperId": "27afdb535a2e7516adeaf0b8950c91b89c9b17bd", "title": "Long-term outcomes of Phemister bone grafting for patients with non-traumatic osteonecrosis of the femoral head" }, { "paperId": "6cfc74f4cf3eb5daa644ef277bd8524d79731285", "title": "Investigating clinical failure of core decompression with autologous bone marrow mononuclear cells grafting for the treatment of non-traumatic osteonecrosis of the femoral head" }, { "paperId": "bb6189f8113a2eea9c6b2bc30b5aacf4b8328a69", "title": "Publisher's Note" }, { "paperId": "99d5833dc2b1093fbb9987d521ada2930e749e02", "title": "Exosomes Secreted from Human-Induced Pluripotent Stem Cell-Derived Mesenchymal Stem Cells Prevent Osteonecrosis of the Femoral Head by Promoting Angiogenesis" }, { "paperId": "a1d466daa94aa17b4a674c86622e643605e16dc1", "title": "Decreased Osteogenic Activity of Mesenchymal Stem Cells in Patients With Corticosteroid-Induced Osteonecrosis of the Femoral Head." }, { "paperId": "121016d380f14ce696d909b4748c73356f3d5827", "title": "MiR-708 promotes steroid-induced osteonecrosis of femoral head, suppresses osteogenic differentiation by targeting SMAD3" }, { "paperId": "489d706a40201afb180058e5d81b731f3687aacb", "title": "Prevalence of Nontraumatic Osteonecrosis of the Femoral Head and its Associated Risk Factors in the Chinese Population: Results from a Nationally Representative Survey" }, { "paperId": "420bba2ca599c8d0a2d591f1cb825750162a4122", "title": "Nontraumatic Osteonecrosis of the Femoral Head: Where Do We Stand Today? A Ten-Year Update." }, { "paperId": "5cfaf64b0931b978a4a1738d871b148b8b0e0ccd", "title": "Current concepts on osteonecrosis of the femoral head." }, { "paperId": "b23ad5150713b61190be59462a855ceff0b66552", "title": "Analysis of altered microRNA expression profile in the reparative interface of the femoral head with osteonecrosis." }, { "paperId": "db49fefe6389e14e6773db37c7f7ed407883cf92", "title": "Treatment of Femoral Head Osteonecrosis in the United States: 16-year Analysis of the Nationwide Inpatient Sample" }, { "paperId": "a60da185617baf7adbb699b0fbaeba31dcba3b0e", "title": "Why we choose free vascularized fibular grafting for osteonecrosis of the femoral head?" }, { "paperId": "c5b7af49b7a40d248449eb0acf743ab72547b83d", "title": "The natural history of untreated asymptomatic osteonecrosis of the femoral head: a systematic literature review." }, { "paperId": "4fe772fadd6f63df784541947d5fd2cc9a47859c", "title": "WJG sets an example of internationalization for other Chinese academic journals." 
}, { "paperId": "edea882e8294537c76e16339ce32c2dd96753025", "title": "Software survey: VOSviewer, a computer program for bibliometric mapping" }, { "paperId": "a0f16815404e86b2b4a732e5a090fca7be64c18a", "title": "The epidemiology of osteonecrosis: findings from the GPRD and THIN databases in the UK" }, { "paperId": "a7f5aff22f72fc30c4f3c771c3ec92b281dfd5bd", "title": "Nontraumatic osteonecrosis of the femoral head: ten years later." }, { "paperId": "c177ba654583987567377ed9d17a58f26b3580a3", "title": "F1000Prime recommendation of An index to quantify an individual's scientific research output." }, { "paperId": "f5153b4371f362c6fef3aa6395db5fc0ee4b96da", "title": "Management of osteonecrosis of the femoral head." }, { "paperId": "e455ec7012de2d49be108a52b7152caa02189e1a", "title": "Osteonecrosis repair with bone marrow cell therapies: state of the clinical art." }, { "paperId": "1cdb5cc417c041acb85dd5d1b8f8b59fee38fa71", "title": "CiteSpace II: Visualization and Knowledge Discovery in Bibliographic Databases" }, { "paperId": null, "title": "Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations" } ]
11,840
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Engineering", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/0260f2fe0b329bbacd07ecaf3731bf48e7c8ebc3
[ "Computer Science" ]
0.895421
Secured Decentralized Confidential Data Distributed in the Disruption-Tolerant Military Network
0260f2fe0b329bbacd07ecaf3731bf48e7c8ebc3
[ { "authorId": "2056988783", "name": "Aniruddha Singh Chauhan" }, { "authorId": "66266387", "name": "Nikita Umare" } ]
{ "alternate_issns": null, "alternate_names": null, "alternate_urls": null, "id": null, "issn": null, "name": null, "type": null, "url": null }
null
**_https://dx.doi.org/10.22161/ijaers/3.10.11 ISSN: 2349-6495(P) | 2456-1908(O)_**

# Secured Decentralized Confidential Data Distributed in the Disruption-Tolerant Military Network

## Aniruddha Singh Chauhan[1], Prof. Nikita Umare[2]

1ME 3rd Sem. WCC Student, Abha Gaikwad-Patil College of Engineering, Nagpur, India
2Department of CSE/WCC, Abha Gaikwad-Patil College of Engineering, Nagpur, India

**_Abstract— Disruption-tolerant network technologies are becoming successful solutions that allow wireless devices carried by soldiers to communicate with each other and access confidential information or commands reliably by exploiting external storage nodes. Some of the most challenging issues in this scenario are the enforcement of authorization policies and the policy updates for secure data retrieval. Ciphertext-policy attribute-based encryption is a promising cryptographic solution to these access control issues. However, the problem of applying CP-ABE in decentralized DTNs introduces several security and privacy challenges with regard to attribute revocation, key escrow, and coordination of attributes issued from different authorities. We propose a secure data retrieval scheme using IDEA for decentralized DTNs where multiple key authorities manage their attributes independently. We demonstrate how to apply the proposed mechanism to securely and efficiently manage the confidential data distributed in the disruption-tolerant military network._**

**_Keywords—Access control, attribute-based encryption (ABE), disruption-tolerant network (DTN)._**

**I.** **INTRODUCTION**

Mobile nodes in military environments such as a battlefield or a hostile region are likely to suffer from intermittent network connectivity and frequent partitions. Disruption-tolerant network (DTN) technologies are becoming successful solutions that allow wireless devices carried by soldiers to communicate with each other in this extreme networking environment. Typically, when there is no end-to-end connection between source and destination pairs, the messages from the source node may need to wait in the intermediate nodes for a substantial amount of time until the connection is eventually established. Storage nodes in DTNs store or replicate data in a way such that only authorized mobile nodes can access the necessary information quickly and efficiently. Many military applications require increased protection of confidential data, including access control methods that are cryptographically enforced and provide differentiated access service such that data access policies, which are defined per user attributes or roles, are managed by the key authorities, and users access confidential information or commands reliably by exploiting external storage nodes. Some of the most challenging issues in this scenario are the enforcement of authorization policies and the policy updates for secure data retrieval. Ciphertext-policy attribute-based encryption (CP-ABE) is a promising cryptographic solution to these access control issues. However, the problem of applying CP-ABE in decentralized DTNs introduces several security and privacy challenges with regard to attribute revocation, key escrow, and coordination of attributes issued from different authorities. We propose a secure data retrieval scheme using CP-ABE for decentralized DTNs where multiple key authorities manage their attributes independently. We demonstrate how to apply the proposed mechanism to securely and efficiently manage the confidential data distributed in the disruption-tolerant military network.

**II.** **PROBLEM DEFINITION**

Military applications require increased protection of confidential data, including access control methods. In many cases, it is desirable to provide differentiated access services such that data access policies are defined over user attributes or roles, which are managed by the key authorities.

_Fig.1: System Flow_

**III.** **PROPOSED METHOD**

1) Key Authorities: They are key generation centres that generate public/secret parameters for CP-ABE. The key authorities consist of a central authority and multiple local authorities. We assume that there are secure and reliable communication channels between the central authority and each local authority during the initial key setup and generation phase. Each local authority manages different attributes and issues the corresponding attribute keys to users. They grant differential access rights to individual users based on the users' attributes. The key authorities are assumed honest-but-curious: they will honestly execute the assigned tasks in the system, yet they would like to learn as much information about the encrypted contents as possible.

2) Storage node: This entity stores data from senders and provides corresponding access to users. It may be mobile or static; we also assume the storage node to be semi-trusted, that is, honest-but-curious.

3) Sender: This entity owns confidential messages or data (e.g., a commander) and wishes to store them in the external data storage node for ease of sharing or for reliable delivery to users in the extreme networking environment. A sender is responsible for defining an (attribute-based) access policy and enforcing it on its own data by encrypting the data under the policy before storing it at the storage node.

4) User: This mobile node wants to access the data stored at the storage node (e.g., a soldier). If a user possesses a set of attributes satisfying the access policy of the encrypted data defined by the sender, and is not revoked in any of the attributes, then he will be able to decrypt and obtain the data.

Since the key authorities are semi-trusted, they should be deterred from accessing the plaintext of the data in the storage node; meanwhile, they should still be able to issue secret keys to users. In order to realize this contradictory requirement, the central authority and the local authorities engage in an arithmetic 2PC protocol with master secret keys of their own and issue independent key components to users during the key issuing phase. The 2PC protocol prevents them from knowing each other's master secrets, so that none of them can generate the whole set of secret keys of users individually. Thus, we assume that the central authority does not collude with the local authorities (otherwise, they could guess the secret keys of every user by sharing their master secrets).
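To picture the policy enforcement described above, the sketch below evaluates the boolean structure of an access policy against a user's attribute set. The policy encoding is hypothetical, and real CP-ABE enforces this check cryptographically inside the ciphertext rather than in application code; this is only a model of the check.

```python
# Hypothetical policy encoding: ("and", a, b), ("or", a, b),
# or a plain attribute string.
def satisfies(policy, attrs):
    if isinstance(policy, str):
        return policy in attrs
    op, left, right = policy
    if op == "and":
        return satisfies(left, attrs) and satisfies(right, attrs)
    if op == "or":
        return satisfies(left, attrs) or satisfies(right, attrs)
    raise ValueError(f"unknown operator: {op}")

policy = ("and", "Battalion 1", "Region 2")
print(satisfies(policy, {"Battalion 1", "Region 1"}))  # False
print(satisfies(policy, {"Battalion 1", "Region 2"}))  # True
```

Note that the union of two users' attribute sets can satisfy a policy that neither set satisfies alone, which is precisely why a CP-ABE scheme must bind attributes to a single key — the collusion-resistance requirement discussed next.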
**Problems in Proposed Method:**

1) Collusion resistance: If multiple users collude, they may be able to decrypt a ciphertext by combining their attributes, even if none of the users can decrypt the ciphertext alone [11]–[13]. For example, suppose there exist a user with attributes {"Battalion 1", "Region 1"} and another user with attributes {"Battalion 2", "Region 2"}. They may succeed in decrypting a ciphertext encrypted under the access policy of ("Battalion 1" AND "Region 2"), even though neither of them can decrypt it individually. We do not want these colluders to be able to decrypt the secret information by combining their attributes. We also consider collusion attacks among curious local authorities attempting to derive users' keys.

2) Backward and forward secrecy: In the context of ABE, backward secrecy means that any user who comes to hold an attribute (that satisfies the access policy) should be prevented from accessing the plaintext of the previous data exchanged before he held the attribute. On the other hand, forward secrecy means that any user who drops an attribute should be prevented from accessing the plaintext of the subsequent data exchanged after he drops the attribute, unless the other valid attributes that he holds satisfy the access policy.

3) Key escrow: In CP-ABE, the key authority generates the private keys of users by applying the authority's master secret keys to the users' associated sets of attributes. Thus, the key authority can decrypt every ciphertext addressed to specific users by generating their attribute keys. If adversaries deployed in hostile environments compromise the key authority, this could be a potential threat to data confidentiality or privacy, especially when the data is highly sensitive. Key escrow is an inherent problem even in multiple-authority systems, as long as each key authority has the full privilege to generate its own attribute keys on its own.

**IV.** **RESEARCH CONTRIBUTION**

DTN technologies are becoming successful solutions that allow wireless devices carried by soldiers to communicate with each other and access confidential information or commands by exploiting external storage nodes. Some of the most challenging issues in this scenario are the enforcement of authorization policies and the policy updates for secure data retrieval. Ciphertext-policy attribute-based encryption is a promising cryptographic solution to the access control issues. However, the problem of applying CP-ABE in decentralized DTNs introduces several security and privacy challenges with regard to attribute revocation, key escrow, and coordination of attributes issued from different authorities. Hence, we propose a secure data retrieval scheme using the IDEA algorithm together with 3DES and MD5, known as the Crypto Hybrid Algorithm, for decentralized DTNs where multiple key authorities manage their attributes independently. We demonstrate how to apply the proposed mechanism to securely and efficiently manage the confidential data distributed in the disruption-tolerant military network.

**3DES with MD5 ALGORITHM**

Use of multiple-length keys leads us to the Triple-DES algorithm, in which DES is applied three times. Triple DES is simply another mode of DES operation. It takes three 64-bit keys, for an overall key length of 192 bits. In Private Encryption, you simply type in the entire 192-bit key rather than entering each of the three keys individually. Triple DES then breaks the user-provided key into three subkeys, padding the keys if necessary so they are each 64 bits long. The procedure for encryption is the same as regular DES, but it is repeated three times — hence the name Triple DES. The data is encrypted with the first key, decrypted with the second key, and finally encrypted again with the third key.
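The encrypt-decrypt-encrypt construction just described, together with the MD5 comparison proposed later in this section, can be sketched with pycryptodome's single-DES primitive. The keys and message below are illustrative, the single-block ECB usage is for demonstration only, and this is not the paper's implementation.

```python
from hashlib import md5
from Crypto.Cipher import DES  # pip install pycryptodome

def des3_ede_encrypt_block(block, k1, k2, k3):
    """Encrypt one 8-byte block as E_k3(D_k2(E_k1(block))) —
    the encrypt-decrypt-encrypt construction described above."""
    step1 = DES.new(k1, DES.MODE_ECB).encrypt(block)
    step2 = DES.new(k2, DES.MODE_ECB).decrypt(step1)
    return DES.new(k3, DES.MODE_ECB).encrypt(step2)

k1, k2, k3 = b"key1AAAA", b"key2BBBB", b"key3CCCC"  # hypothetical 8-byte keys
ciphertext = des3_ede_encrypt_block(b"8bytemsg", k1, k2, k3)

# MD5 digest sent alongside, so the receiver can detect transmission
# errors by recomputing and comparing; note that a bare hash offers no
# protection against a deliberate attacker, only against corruption.
digest = md5(ciphertext).hexdigest()
print(ciphertext.hex(), digest)
```

Setting k1 = k2 reproduces the weak-key degeneration noted below: the first two stages cancel and the construction collapses to single DES under k3.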
Triple DES, also known as 3DES, consequently runs three times slower than standard DES, but is much more secure if used properly. The procedure for decrypting something is the same as the procedure for encryption, except that it is executed in reverse. Like DES, data is encrypted and decrypted in 64-bit chunks. Unfortunately, there are some weak keys that one should be aware of: if all three keys, the first and second keys, or the second and third keys are the same, then the encryption procedure is essentially the same as standard DES. This situation is to be avoided because it is the same as using a slow version of regular DES. Note that although the input key for DES is 64 bits long, the actual key used by DES is only 56 bits in length. The least significant (rightmost) bit in each byte is a parity bit and should be set so that there is always an odd number of 1s in every byte. These parity bits are ignored, so only the seven most significant bits of each byte are used, resulting in a key length of 56 bits. This means that the effective key strength for Triple DES is actually 168 bits, because each of the three keys contains 8 parity bits that are not used during the encryption process. A commonly used technique on the Internet is to provide an MD5 hash string so the receiver can check whether the file has been transmitted without any modifications.

IDEA — the other cipher in our hybrid scheme — encrypts a 64-bit block of plaintext to a 64-bit block of ciphertext. It uses a 128-bit key. The algorithm consists of eight identical rounds and a "half-round" final transformation. There are 2^16 possible 16-bit blocks, 0000000000000000 through 1111111111111111, and each operation on the set of possible 16-bit blocks forms an algebraic group: bitwise XOR is bitwise addition modulo 2, and addition modulo 2^16 is the usual group operation. Some spin must be put on the elements — the 16-bit blocks — to make sense of multiplication modulo 2^16 + 1, however: 0 (i.e., 0000000000000000) is not an element of the multiplicative group.

In particular, the following requirements must be supported by the key management scheme in order to facilitate the data aggregation and dissemination process:

1. Data aggregation is possible only if intermediate nodes have access to encrypted data so that they can extract measurement values and apply aggregation functions to them. Therefore, nodes that send data packets toward the base station must encrypt them with keys available to the aggregator nodes.

2. Data dissemination implies broadcasting a message from the aggregator to its group members. If an aggregator shares a different key (or set of keys) with each of the sensors within its group, then it will have to make multiple transmissions, encrypted each time with a different key, in order to broadcast a message to all of the nodes. But transmissions must be kept as low as possible because of their high energy-consumption rate.

3. Confidentiality: In order to protect sensed data and communication exchanges between sensor nodes, it is important to guarantee the secrecy of messages. In the sensor network case, this is usually achieved by the use of symmetric cryptography, as asymmetric or public-key cryptography is in general considered too expensive. However, while encryption protects against outside attacks, it does not protect against inside attacks/node compromises, as an attacker can use recovered cryptographic key material to successfully eavesdrop, impersonate, or participate in the secret communications of the network. Furthermore, while confidentiality guarantees the security of communications inside the network, it does not prevent the misuse of information reaching the base station. Hence, confidentiality must also be coupled with the right control policies so that only authorized users can have access to confidential information.

4. Integrity and authentication: Integrity and authentication are necessary to enable sensor nodes to detect modified, injected, or replayed packets (a minimal check of this kind is sketched below). While it is clear that safety-critical applications require authentication, it is still wise to use it even for the rest of the applications, since otherwise the owner of the sensor network may get the wrong picture of the sensed world, thus making inappropriate decisions.
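This scheme pairs encryption with a plain MD5 digest; as a hedged illustration of requirement 4, the sketch below instead uses a keyed MAC (HMAC, a standard substitute not used in this paper), which detects deliberate modification as well as accidental corruption. The shared key is hypothetical.

```python
import hmac, hashlib

# Hypothetical individual key shared between a sensor node and the base station.
key = b"individual-key-node-17"

def authenticate(packet: bytes) -> bytes:
    # Append a keyed MAC so receivers can detect modified or injected
    # packets. A bare MD5 digest can be recomputed by any attacker;
    # a MAC requires knowledge of the shared key.
    return packet + hmac.new(key, packet, hashlib.sha256).digest()

def verify(data: bytes) -> bool:
    packet, tag = data[:-32], data[-32:]
    expected = hmac.new(key, packet, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)

msg = authenticate(b"temp=21.5")
print(verify(msg))                                # True
forged = b"temp=99.9" + msg[len(b"temp=21.5"):]   # altered payload, old tag
print(verify(forged))                             # False
```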
information reaching the base station Hence,confidentiality Bitwise XOR is bitwise addition modulo 2, and addition must also be coupled with the right control policies so that modulo 216 is the usual group operation. Some spin must be only authorized users can have access to confidential put on the elements – the 16-bit blocks – to make sense of information multiplication modulo 216 + 1, however. 0 (i.e., 4. Integrity and Authentication: Integrity and authentication 0000000000000000) is not an element of the multiplicative is necessary to enable sensor nodes to detect modified, group. injected, or replayed packets. While it is clear that safety critical applications require authentication, it is still wise to use it even for the rest of applications since otherwise the owner of the sensor network may get the wrong picture of the sensed world thus making inappropriate decisions. ----- **_https://dx.doi.org/10.22161/ijaers/3.10.11 ISSN: 2349-6495(P) | 2456-1908(O)_** However, authentication alone does not solve the problem of node takeovers as compromised nodes can still authenticate themselves to the network. Hence, authentication mechanisms should be “collective” and aim at securing the entire network. First, we focused on the establishment of trust relationship among wireless sensor nodes, and presented a key management protocol for sensor networks. The protocol includes support for establishing four types of keys per Fig 7.2 Packets Vs Time Graph sensor node: individual keys shared with the base station, pairwise keys shared with individual neighboring nodes, cluster keys **V.** **CONCULSION** shared with a set of neighbors, and a group key shared with The corresponding attribute group keys are updated and all the nodes in the network. We showed how the keys delivered to the valid attribute group members securely could be distributed so that the protocol can support in (including the user). In addition, all of the components network processing and efficient dissemination, while encrypted with a secret key in the cipher text are re restricting the security impact of a node compromise to the encrypted by the storage node with a random, and the cipher immediate network neighborhood of the compromised node. text components corresponding to the attributes are re Applying the protocol makes it hard for an adversary to encrypted with the updated attribute group keys. Even if the disrupt the normal operation of the network. user has stored the previous cipher text exchanged before he In Hybrid Cryptosystem System, security is combination of obtains the attribute keys and the holding attributes satisfy more algorithm than base paper but still requires less time to the access policy, he cannot decrypt the pervious cipher Verify and process. While they are not present in the base text. paper. Hybrid Cryptosystem to enhance the security we use combination of algos **REFERENCES** 1) Idea algo. [1] J. Burgess, B. Gallagher, D. Jensen, and B. N. Levine, 2) MD5 “Maxprop: Routing for vehicle-based disruption 3) ECB (ELECTRONIC CODE BOOK) tolerant networks,” in _Proc. IEEE INFOCOM, 2006,_ 4) Hashing code _pp. 1–11._ [2] M. Chuah and P. Yang, “Node density-based adaptive **Comparative Result analysis** routing scheme for disruption tolerant networks,” in In my Base Paper we have used CP-ABE systems i.e. _Proc. IEEE MILCOM, 2006, pp.1–6._ Cipher text-policy attribute-based encryption which is a [3] M. M. B. Tariq, M. Ammar, and E. 
Zequra, “Mesage promising cryptographic solution to the access control ferry route design for sparse ad hoc networks with issues. While its communication Cost is higher than New mobile nodes,” in Proc. ACM _MobiHoc, 2006, pp. 37–_ Hybrid Cryptography Technique. Comparative results can _48._ see in Graph as: [4] S. Roy andM. Chuah, “Secure data retrieval based on ciphertext policy attribute-based encryption (CP-ABE) system for the DTNs,” Lehigh CSE Tech. Rep., 2009. [5] M. Chuah and P. Yang, “Performance evaluation of content-based information retrieval schemes for DTNs,” in Proc. IEEE MILCOM,2007, pp. 1–7. [6] M. Kallahalla, E. Riedel, R. Swaminathan, Q. Wang, and K. Fu,“Plutus: Scalable secure file sharing on untrusted storage,” in _Proc.Conf. File Storage_ _Technol., 2003, pp. 29–42._ [7] [7] L. Ibraimi, M. Petkovic, S. Nikova, P. Hartel, and _Fig.7.1: Communication cost in CP-ABE System_ W. Jonker, “Mediated ciphertext-policy attribute-based encryption and its application,”in _Proc. WISA, 2009,_ Number of conversion and verification time is more in base LNCS 5932, pp. 309–323. paper CP-ABE System then Hybrid Encryption by Using [8] [8] N. Chen, M. Gerla, D. Huang, and X. Hong, Idea Algorithm and MD5. “Secure, selective group broadcast in vehicular networks using dynamic attribute based encryption,”in _Proc. Ad Hoc Netw. Workshop, 2010, pp. 1–8._ ----- **_https://dx.doi.org/10.22161/ijaers/3.10.11 ISSN: 2349-6495(P) | 2456-1908(O)_** [9] D. Huang and M. Verma, “ASPE: Attribute-based secure policy enforcement n vehicular ad hoc networks,” Ad Hoc Netw.,vol. 7, no. 8,pp. 1526–1535, 2009. [10] A. Lewko and B. Waters, “Decentralizing attributebased encryption,”Cryptology ePrint Archive: Rep. 2010/351, 2010. [11] A. Sahai and B. Waters, “Fuzzy identity-based encryption,” in Proc.Eurocrypt, 2005, pp. 457–473. [12] V. Goyal, O. Pandey, A. Sahai, and B. Waters, “Attribute-based encryption for fine-grained access control of encrypted data,” in _Proc. ACM Conf._ _Comput. Commun. Security, 2006, pp. 89–98._ [13] J. Bethencourt, A. Sahai, and B. Waters, “Ciphertextpolicy attributebased encryption,” in _Proc. IEEE_ _Symp. Security Privacy, 2007, pp. 321–334_ -----
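The hybrid construction outlined above (a 64-bit block cipher for confidentiality plus an MD5 digest for an integrity check) can be illustrated in a few lines. The snippet below is a minimal, hypothetical sketch rather than the paper's implementation: PyCryptodome's 3DES stands in for IDEA, which PyCryptodome does not provide, and the key and message are placeholders.

```python
# Minimal sketch of the hybrid "encrypt, then attach an MD5 digest" idea.
# Assumptions: PyCryptodome is installed; 3DES in ECB mode stands in for the
# IDEA cipher listed in the hybrid scheme (PyCryptodome ships no IDEA).
import hashlib
from Crypto.Cipher import DES3
from Crypto.Random import get_random_bytes
from Crypto.Util.Padding import pad, unpad

def encrypt_with_digest(key: bytes, plaintext: bytes):
    """Encrypt in 64-bit blocks and return (ciphertext, MD5 of plaintext)."""
    cipher = DES3.new(key, DES3.MODE_ECB)           # 64-bit block cipher
    ciphertext = cipher.encrypt(pad(plaintext, 8))  # pad to the block size
    return ciphertext, hashlib.md5(plaintext).hexdigest()

def decrypt_and_verify(key: bytes, ciphertext: bytes, digest: str) -> bytes:
    """Decrypt and compare the MD5 digest, as the receiver would."""
    plaintext = unpad(DES3.new(key, DES3.MODE_ECB).decrypt(ciphertext), 8)
    if hashlib.md5(plaintext).hexdigest() != digest:
        raise ValueError("MD5 digest mismatch: data modified in transit")
    return plaintext

# Three 8-byte keys; adjust_key_parity sets the odd-parity bit in every key
# byte, which is exactly the parity convention described in the text.
key = DES3.adjust_key_parity(get_random_bytes(24))
ct, tag = encrypt_with_digest(key, b"sensor reading: 21.5 C")
assert decrypt_and_verify(key, ct, tag) == b"sensor reading: 21.5 C"
```

Note that ECB mode and MD5 appear here only because the hybrid scheme lists them; neither is recommended for new designs.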
{ "disclaimer": "Notice: The following paper fields have been elided by the publisher: {'abstract', 'references'}. Paper or abstract available at https://api.unpaywall.org/v2/10.22161/IJAERS/3.10.11?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.22161/IJAERS/3.10.11, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "BRONZE", "url": "https://doi.org/10.22161/ijaers/3.10.11" }
2016
[]
true
2016-10-01T00:00:00
[]
4,669
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Engineering", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/02620aa4121370fb8727553bb72e8bab1b95450f
[ "Computer Science" ]
0.856484
Adaptive Restructuring of Merkle and Verkle Trees for Enhanced Blockchain Scalability
02620aa4121370fb8727553bb72e8bab1b95450f
Internet of Things
[ { "authorId": "2267503406", "name": "Oleksandr Kuznetsov" }, { "authorId": "2282961530", "name": "Dzianis Kanonik" }, { "authorId": "2282962463", "name": "Alex Rusnak" }, { "authorId": "2282967712", "name": "Anton Yezhov" }, { "authorId": "2283137914", "name": "Oleksandr Domin" } ]
{ "alternate_issns": [ "2199-1073" ], "alternate_names": [ "Internet Thing" ], "alternate_urls": [ "https://www.sciencedirect.com/journal/internet-of-things", "http://www.springer.com/series/11636" ], "id": "2989732e-2668-4b47-9c29-326646a60273", "issn": "2542-6605", "name": "Internet of Things", "type": null, "url": "https://www.journals.elsevier.com/internet-of-things" }
null
# Adaptive Restructuring of Merkle and Verkle Trees for Enhanced Blockchain Scalability Oleksandr Kuznetsov[ 1,2*], Dzianis Kanonik [1], Alex Rusnak [1], Anton Yezhov[ 1], Oleksandr Domin[1 ] 1 Proxima Labs, 1501 Larkin Street, suite 300, San Francisco, USA 2 Department of Political Sciences, Communication and International Relations, University of Macerata, Via Crescimbeni, 30/32, 62100 Macerata, Italy [*Corresponding author. E-mail(s): kuznetsov@karazin.ua](mailto:kuznetsov@karazin.ua) [Contributing authors: alex@proxima.one](mailto:alex@proxima.one) **Abstract:** The scalability of blockchain technology remains a pivotal challenge, impeding its widespread adoption across various sectors. This study introduces an innovative approach to address this challenge by proposing the adaptive restructuring of Merkle and Verkle trees, fundamental components of blockchain architecture responsible for ensuring data integrity and facilitating efficient verification processes. Unlike traditional static tree structures, our adaptive model dynamically adjusts the configuration of these trees based on usage patterns, significantly reducing the average path length required for verification and, consequently, the computational overhead associated with these processes. Through a comprehensive conceptual framework, we delineate the methodology for adaptive restructuring, encompassing both binary and non-binary tree configurations. This framework is validated through a series of detailed examples, demonstrating the practical feasibility and the efficiency gains achievable with our approach. Moreover, we present a comparative analysis with existing scalability solutions, highlighting the unique advantages of adaptive restructuring in terms of simplicity, security, and efficiency enhancement without introducing additional complexities or dependencies. This study's implications extend beyond theoretical advancements, offering a scalable, secure, and efficient method for blockchain data verification that could facilitate broader adoption of blockchain technology in finance, supply chain management, and beyond. As the blockchain ecosystem continues to evolve, the principles and methodologies outlined herein are poised to contribute significantly to its growth and maturity. **Keywords:** Blockchain Scalability, Merkle and Verkle Trees, Adaptive Restructuring, Data Verification, Efficiency Optimization, Blockchain Architecture **1. Introduction** The advent of blockchain technology has heralded a new era in digital transactions, offering unparalleled security, transparency, and decentralization [1]. At its core, blockchain leverages cryptographic principles to create a distributed ledger system, where data integrity and transaction veracity are maintained across a network of nodes without the need for a central authority [2]. This innovative approach has found applications far beyond its initial cryptocurrency origins, extending into finance, supply chain management, healthcare, and more [3]. However, as blockchain technology ventures into more complex and demanding applications, it encounters a fundamental challenge that threatens its broader adoption: scalability [4]. The ----- scalability issue primarily revolves around the capacity of a blockchain network to handle a large volume of transactions quickly and efficiently. Current blockchain architectures, while robust and secure, are hampered by their inherent design, which leads to bottlenecks in transaction processing and data verification [5,6]. 
These limitations not only increase transaction costs but also extend the time required to achieve consensus across the network, thereby reducing the system's overall throughput. **1.1. The Blockchain Paradigm and the Challenge of Scalability** In the burgeoning landscape of blockchain technology, Ethereum stands out as a beacon of innovation and application diversity. The network's capacity to support a wide array of decentralized applications (dApps) and smart contracts has positioned it at the forefront of blockchain development. This prominence is underscored by the data depicted in Figure 1, which reveals a significant uptick in the total number of unique Ethereum addresses over the past year, surging from 219.26 million to 254.66 million [7]. **Figure 1: Growth in Total Unique Ethereum Addresses** However, the rapid expansion of the Ethereum network brings to light the pressing challenges associated with managing an ever-growing blockchain ecosystem. The core issue revolves around the efficient verification and management of data within the blockchain's infrastructure, a task that becomes increasingly complex as the network scales. The juxtaposition of the network's growth against the slight decrease in daily active Ethereum addresses, as shown in Figure 2, further complicates this challenge [8]. Despite a year-over-year decrease of 1.70% in daily active addresses, the fact remains that a mere 0.17% of all unique addresses engage with the network on a daily basis. This discrepancy between the total number of addresses and the proportion of active participants underscores a critical aspect of blockchain management: the network's activity is highly concentrated within a relatively small segment of the overall user base. This concentration of activity presents a unique set of challenges in optimizing the blockchain's state tree. The state tree must be updated frequently to reflect the transactions and interactions occurring within the network, a process that is predominantly influenced by the small fraction of actively participating addresses. The need to efficiently manage and verify these updates, without compromising the integrity or performance of the network, is paramount. ----- **Figure 2: Daily Active Ethereum Addresses** Given this backdrop, our research aims to address the pressing need for an optimized approach to managing the blockchain's state tree, particularly within the context of Ethereum's rapidly expanding network. The goal of our study is to explore innovative methods for restructuring Merkle and Verkle Trees adaptively, thereby enhancing the efficiency of data verification processes. By focusing on dynamic adjustments to tree configurations in response to usage patterns, we seek to minimize verification path lengths and reduce the computational overhead associated with maintaining data integrity. This research endeavor not only aims to bolster the scalability of blockchain systems but also to contribute to the ongoing discourse on optimizing blockchain infrastructure for the next generation of decentralized applications. **1.2. State of the art** In the realm of blockchain scalability, Kottursamy et al. (2023) [9] introduce a novel blockchain architecture termed Mutable Block with Immutable Transaction (MBIT), aiming to enhance scalability through a trapdoor cryptographic hash function for quantifying unspent coins. 
While their approach significantly reduces verification and confirmation times, it primarily focuses on transaction efficiency without addressing the broader scalability challenges related to blockchain's data structure and state management. Li et al. (2023) [10] propose PRI, a Payment Channel Hubs (PCH) solution enhancing privacy, reusability, and interoperability for blockchain scalability. Despite its innovative approach to solving the deposit lock-in problem and supporting multi-party participation, PRI's reliance on trusted hardware and its limited scope in addressing the fundamental architectural scalability issues of blockchain remain unaddressed. Nasir et al. (2022) [11] provide a systematic review of scalable blockchains, identifying key strategies for enhancing blockchain capabilities and analyzing scalability solutions. Their work highlights the multifaceted nature of blockchain scalability but leaves a gap in practical implementation strategies for optimizing blockchain data structures, particularly in the context of state management and verification processes. ----- Sanka and Cheung (2021) [12] offer a comprehensive review of blockchain scalability issues and solutions, emphasizing the need for efficient consensus mechanisms and system throughput improvements. While their analysis sheds light on the scalability challenges, the exploration of adaptive data structures for optimizing blockchain's underlying architecture is not thoroughly explored. Sharma et al. (2023) [13] introduce BLAST-IoT, a blockchain-assisted scalable trust model for the Internet of Things (IoT), focusing on secure dissemination and storage of trust information. Their model addresses scalability in the context of IoT devices but does not extend to the broader blockchain scalability challenges, particularly in relation to adaptive restructuring of blockchain data structures. Wang and Wu (2024) [14] present Lever-FS, a validation framework for intensive blockchain validation, achieving scalability through optimistic execution and dispute resolution. While their work advances the scalability of validation processes, it does not directly tackle the optimization of blockchain's state tree structure for overall network efficiency. Wang et al. (2023) [15] propose a scalable, efficient, and secured consensus mechanism for Vehicle-to-Vehicle (V2V) energy trading, leveraging blockchain technology. Their consensus mechanism addresses scalability in the specific context of V2V energy trading but does not address the broader application of scalable data structures within the blockchain. Xiao et al. (2024) [16] develop CE-PBFT, a high availability consensus algorithm for large-scale consortium blockchain, focusing on improving system throughput and reducing latency. While their algorithm enhances consensus efficiency, the exploration of adaptive and scalable blockchain data structures remains an area for further research. Yu et al. (2023) [17] introduce OverShard, a full sharding approach for scaling blockchain, which significantly improves throughput and reduces confirmation latency. However, the application of sharding to optimize blockchain's state tree and the exploration of adaptive restructuring techniques are not fully addressed. Zhen et al. (2024) [18] propose a dynamic state sharding blockchain architecture for scalable and secure crowdsourcing systems, addressing the scalability and security of blockchain in crowdsourcing applications. 
While their architecture offers improvements in throughput and security, the potential for adaptive restructuring of blockchain data structures to further enhance scalability is not explored. Transitioning from the broader challenges of blockchain scalability, we delve into the specific realm of tree-based data structures within blockchain technology. These structures, notably Merkle and Verkle trees, are pivotal for ensuring data integrity, enhancing verification processes, and optimizing storage within blockchain systems. The following literature review highlights significant advancements and identifies gaps that our research aims to fill. Ayyalasomayajula and Ramkumar (2023) [19] explore the optimization of Merkle Tree structures, comparing linear and subtree implementations. Their findings favor the subtree method for its efficiency in handling large-scale databases. However, their work primarily focuses on theoretical advantages without addressing the practical challenges of integrating these optimized structures into existing blockchain frameworks. ----- Jeon et al. (2023) [20] introduce a hardware-accelerated approach for generating reusable Merkle Trees for Bitcoin blockchain headers, significantly reducing execution time and power consumption. While their solution enhances the efficiency of block candidate generation, it is tailored to Bitcoin and does not explore the adaptability of their approach to other blockchain architectures or the potential for further optimization in tree structure. Jing, Zheng, and Chen (2021) [21] provide a comprehensive review of Merkle Tree's technical principles and applications across various fields. Their work underscores the versatility and potential of Merkle Trees but stops short of proposing innovative methods for dynamic restructuring or optimization of these trees in response to the evolving needs of blockchain systems. Knollmann and Scheideler (2022) [22] present a self-stabilizing protocol for the Hashed Patricia Trie, a distributed data structure enabling efficient prefix search. Their protocol addresses selfstabilization in distributed systems but does not explore the scalability implications of their data structure within the broader context of blockchain technology. Lin and Chen (2023) [23] propose a file verification scheme based on Verkle Trees, highlighting the efficiency and security benefits over traditional Merkle Trees. While their work demonstrates the potential of Verkle Trees in file verification, the exploration of these trees for broader blockchain scalability and optimization remains limited. Liu et al. (2021) [24] offer systematic insights into Merkle Trees, emphasizing their role in blockchain data verification and retrieval. Their discussion on the advantages and applications of Merkle Trees provides a solid foundation but lacks a detailed exploration of innovative approaches to enhance these trees' efficiency and scalability in blockchain systems. Mardiansyah, Muis, and Sari (2023) [25] introduce the Multi-State Merkle Patricia Trie (MSMPT) for high-performance data structures in multi-query processing. Their work addresses performance and efficiency in lightweight blockchain but does not delve into the scalability challenges of more complex blockchain systems. Mitra, Tauz, and Dolecek (2023) [26] propose the Graph Coded Merkle Tree to mitigate Data Availability Attacks in blockchain systems. 
While their approach offers a novel solution to a specific problem, the broader application of their design for general blockchain scalability and data structure optimization is not addressed. Zhao et al. (2024) [27] focus on minimizing block incentive volatility through Verkle tree-based dynamic transaction storage. Their innovative approach addresses a crucial aspect of blockchain economics but does not explore the structural optimization of Verkle Trees for enhanced scalability and efficiency in blockchain systems. Our research fills these gaps by proposing adaptive restructuring techniques for Merkle and Verkle Trees, aiming to enhance blockchain scalability, optimize data verification and storage processes, and provide a flexible framework adaptable to various blockchain architectures and applications. **1.3. Our contribution** This work introduces a novel approach to optimizing tree-based data structures within blockchain technology, focusing on adaptive restructuring techniques for Merkle and Verkle Trees. Our contributions are twofold: First, we propose a dynamic restructuring algorithm that enhances the scalability and efficiency of blockchain systems by optimizing the verification and storage processes. Second, we extend the applicability of these optimized tree structures beyond traditional ----- blockchain applications, demonstrating their versatility in various blockchain architectures and scenarios. Through rigorous analysis and experimentation, our research addresses the critical scalability challenges faced by blockchain technology, offering a scalable, efficient, and adaptable solution. **1.4. Article structure** The structure of this article is designed to provide a comprehensive overview of our research and findings. Section 2 conceptualizes the problem of blockchain scalability and the role of tree-based data structures in addressing this challenge. Section 3 introduces our idea for optimizing trees in blockchain, detailing the theoretical foundation of our approach. Section 4 evaluates the efficiency of adaptive Merkle trees through analytical and empirical methods. Section 5 describes the algorithm for Merkle Tree restructuring, followed by Section 6, which presents examples of the algorithm's execution in various scenarios. Section 7 delves into the specifics of path encoding in adaptive Merkle Trees, and Section 8 explores the enhancement of Verkle Trees through adaptive restructuring. The discussion in Section 9 synthesizes our results, comparing them with existing solutions and highlighting our contribution to the field. Finally, the conclusion in Section 10 summarizes our research contributions and outlines future directions for this promising area of study. **2. Conceptualizing the Problem** The core issue addressed in this research is the optimization of tree structures in blockchain systems for efficient and cost-effective verification. Currently, blockchain data is stored in balanced trees, with Merkle paths for data verification being approximately equal in length and complexity across all data. This uniformity results in a consistent verification cost and complexity, regardless of the frequency of data use. Figure 3 depicts a balanced Merkle Tree, a fundamental data structure used in blockchain for ensuring data integrity. Each leaf node (A-P) represents a block of data with a unique hash value, while the non-leaf nodes (AB, CD, etc.) are hashes of their respective child nodes. 
The root node (ABCDEFGHIJKLMNOP) encompasses the entire tree's hash, providing a single point of reference for the entire dataset's integrity. The Merkle Tree's structure ensures that any alteration in a single data block can be quickly detected by recalculating the hashes up the tree to the root. However, this balanced structure, while efficient in evenly distributing the data, does not account for the frequency of data access or modification (frequency is indicated in brackets). As a result, frequently used data and rarely accessed data have the same level of complexity and cost in terms of verification, leading to inefficiencies in resource utilization. ----- **Figure 3: Balanced Merkle Tree Structure** Figure 4 highlights the Merkle Path (nodes B, CD, EFGH, IJKLMNOP) for verifying the integrity of leaf node A (with a high frequency of 0.2041). The Merkle Path is marked in red, indicating the nodes whose hashes are required to verify A's integrity up to the root. The leaf node A and the root are highlighted in green, while the intermediate nodes (AB, ABCD, ABCDEFGH) involved in hash calculations are in yellow. The verification process involves recalculating and comparing the hashes from node A up to the root, ensuring data integrity. However, this method, while straightforward, applies the same verification complexity to all data, regardless of usage frequency. This "one-size-fits-all" approach is suboptimal, especially for data that is accessed and modified frequently, as it incurs unnecessary computational overhead. **Figure 4: Merkle Path (red nodes B, CD, EFGH, IJKLMNOP) for Leaf Node A** In Figure 5, the Merkle Path for verifying leaf node G (with a frequency of 0.0612) is shown. The path (nodes H, EF, ABCD, IJKLMNOP) is marked in red, with node G and the root in green, and the intermediate nodes (GH, EFGH, ABCDEFGH) in yellow. The verification process for G ----- follows the same principle as for A, recalculating hashes along the red path to validate the data's integrity. **Figure 5: Merkle Path (red nodes H, EF, ABCD, IJKLMNOP) for Leaf Node G** The Figure 6 demonstrates the Merkle Path for leaf node P (with a frequency of 0.0102), with the path (nodes O, MN, IJKL, ABCDEFGH) in red, P and the root in green, and intermediate nodes (OP, MNOP, IJKLMNOP) in yellow. The process for verifying P's integrity mirrors that of A and G, emphasizing the consistent approach across the tree. **Figure 6: Merkle Path (red nodes O, MN, IJKL, ABCDEFGH) for Leaf Node P** This consistency in verification, while ensuring uniform security and integrity checks, does not account for the varying frequencies of data access and modification. It leads to a rigid and sometimes inefficient system, especially in a dynamic environment like blockchain, where data access patterns can vary significantly. Thus, the current Merkle Tree verification process, as illustrated in these figures, is a rather primitive and blunt approach. It treats all data equally, irrespective of its usage frequency, leading ----- to potential inefficiencies in computational resources. Our proposed solution aims to revolutionize this process by introducing adaptive Merkle Trees. These trees will optimize verification paths based on data usage frequency, significantly reducing the complexity and cost of verifying frequently accessed data. This innovative approach promises to enhance the efficiency and scalability of blockchain systems, tailoring the verification process to the dynamic needs of the network. 
By differentiating between frequently and infrequently accessed data, adaptive Merkle Trees can allocate computational resources more effectively, ensuring faster and more costefficient data verification. This method not only optimizes the blockchain's performance but also aligns with the evolving nature of blockchain usage, where certain data nodes may become hotspots of activity. **3. Our Idea for Optimizing Trees in Blockchain** Figure 7 represents an innovative adaptation of the traditional Merkle Tree, incorporating principles of Shannon-Fano and Huffman statistical coding. Unlike the balanced Merkle Tree, this adaptive structure is intentionally unbalanced to optimize the verification process based on the frequency of data usage. Each leaf node (A-P) still represents a block of data with a unique hash value, but their placement in the tree now correlates with the probability of their usage. In this adaptive Merkle Tree, the most frequently used data nodes (A, B, C, D) are positioned closer to the root, significantly shortening the path required for their verification. This strategic placement reduces the computational complexity and time required for verifying frequently accessed data. Conversely, less frequently used data nodes (M, N, O, P) are placed further from the root, reflecting their lower probability of access. **Figure 7: Adaptive Merkle Tree** ----- The structure of this tree is a direct application of Shannon-Fano and Huffman coding principles, where the most common elements are given shorter codes (or paths in the case of a Merkle Tree). This approach ensures that the average path length for verification is minimized, aligning the computational effort with the actual usage patterns of the data within the blockchain. In the Figure 8, the Merkle Path for leaf node A (highlighted in green) is significantly shorter than in a balanced Merkle Tree. The path (marked in red) includes nodes DHG and CJLONFBEMPKI, with intermediate calculations (in yellow) at node ADHG. This optimized path reflects the high frequency of usage for node A, making the verification process faster and more cost-effective. The integrity of node A can be verified with fewer computational steps, demonstrating the efficiency of the adaptive Merkle Tree in handling frequently used data. For leaf node G (Figure 9), the Merkle Path includes nodes H, D, and CJLONFBEMPKI, with intermediate calculations at nodes HG and ADHG. This path, while longer than that for node A, is still optimized based on the usage frequency of G. The adaptive tree structure ensures that the verification process remains efficient, even for nodes with moderate usage. This approach balances the need for data integrity with computational efficiency, tailoring the verification complexity to the usage pattern of each node. **Figure 8: Optimized Merkle Path for High-Frequency Leaf Node A** ----- **Figure 9: Adaptive Merkle Path for Moderately Used Leaf Node G** The Merkle Path for leaf node P (Figure 10), a less frequently used node, is longer, including nodes M, K, I, E, B, CJLONF, and ADHG. The path reflects P's lower usage frequency, with more intermediate calculations (nodes MP, MPK, MPKI, EMPKI, BEMPKI, and CJLONFBEMPKI) required for verification. While this makes the verification process for P more resource-intensive, it is justified by the node's infrequent use. This example illustrates how the adaptive Merkle Tree allocates computational resources more efficiently, focusing on optimizing the paths for more frequently used nodes. 
**Figure 10: Extended Merkle Path for Low-Frequency Leaf Node P**

Thus, the adaptive Merkle Tree approach significantly enhances the efficiency of data verification in blockchain systems. For high-frequency nodes like A, the verification process is streamlined, requiring fewer computational steps and resources. This optimization can lead to a verification process that is up to twice as fast and cost-effective compared to a balanced Merkle Tree. Conversely, for nodes with lower usage frequencies, like P, the longer verification path is a reasonable trade-off, considering their infrequent access.

**4. Efficiency of adaptive Merkle trees**

In this work, we delve into the comparative complexity of data integrity verification between the conventional balanced Merkle Tree and the proposed adaptive Merkle Tree model. The balanced Merkle Tree's average path length is determined by

$$k \approx \log_m n,$$

where $m$ represents the maximum allowable number of child nodes per node (the arity of the tree), and $n$ is the count of unique symbols within the alphabet. Conversely, the adaptive Merkle Tree's average path length mirrors the average length of a Huffman code, calculated as the weighted sum of all code lengths, with the probabilities of the corresponding symbols serving as weights:

$$k_A = \sum_{i=1}^{n} p_i l_i,$$

where $p_i$ is the probability of the $i$-th symbol and $l_i$ is the length of the code for the $i$-th symbol. The theoretical minimum average length of a Huffman code, given a specific probability distribution, can be derived from the entropy formula:

$$k_A \geq H = -\sum_{i=1}^{n} p_i \log_m (p_i).$$

Thus, the efficiency of a Huffman code increases as its average code length approaches the entropy of the distribution. For the binary tree example ($m = 2$) discussed, the Huffman code's average length is approximately 3.49 bits per symbol, closely approximating the entropy of the symbol probability distribution, which is about 3.46 bits per symbol. These figures suggest that the Huffman code from our example is remarkably close to the theoretical minimum average code length defined by entropy. Ideally, if the code were perfectly optimal, its average length would equal the entropy. Transforming these assessments into a comparison of the complexity of data integrity verification in both the classical balanced and the proposed adaptive Merkle Tree yields:

- For a balanced binary tree, the average Merkle path length is $k \approx 4$;
- For an adaptive binary Merkle Tree, the average path length is $k_A \approx 3.49$, indicating an efficiency gain of approximately 13%.

This gain is reflected in the reduced average number of hash computations required for verifying the integrity of leaf data. The efficiency gain increases with the growing disparity between the probabilities of leaf data. In the extreme case, where one leaf has a 100% probability and all others have 0%, the maximum efficiency gain of up to 100% can be observed. Although this represents a hypothetical scenario, it is intriguing to model real adaptive Merkle Trees, including non-binary types, and assess the effectiveness of our proposed solution. In Ethereum, Patricia trees are utilized, and our aim is to extend our approach to this case as well. The figures above are straightforward to reproduce, as the short sketch that follows shows.
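To make the Section 4 arithmetic concrete, the sketch below builds a Huffman code over the sixteen leaf probabilities of the running example (the same distribution tabulated later in Table 6) and compares the average code length with the entropy. It is an illustrative check, not part of the protocol; the heap-based construction is the textbook Huffman algorithm.

```python
# Sketch: average Huffman code length k_A versus entropy H for the 16-leaf
# distribution of the running example (leaves A..P). The expected output is
# k_A = 3.49 and H = 3.46, the figures quoted in the text.
import heapq
import math

probs = [0.2041, 0.1531, 0.1224, 0.1020, 0.0816, 0.0714, 0.0612, 0.0510,
         0.0408, 0.0306, 0.0204, 0.0204, 0.0102, 0.0102, 0.0102, 0.0102]

def huffman_code_lengths(p):
    """Return the code length l_i of every symbol in an optimal binary code."""
    # Heap entries: (subtree probability, tie-breaker, symbol indices inside).
    heap = [(pi, i, [i]) for i, pi in enumerate(p)]
    heapq.heapify(heap)
    lengths = [0] * len(p)
    counter = len(p)
    while len(heap) > 1:
        p1, _, s1 = heapq.heappop(heap)
        p2, _, s2 = heapq.heappop(heap)
        for i in s1 + s2:  # every merge adds one bit to these symbols' codes
            lengths[i] += 1
        heapq.heappush(heap, (p1 + p2, counter, s1 + s2))
        counter += 1
    return lengths

lengths = huffman_code_lengths(probs)
k_a = sum(pi * li for pi, li in zip(probs, lengths))  # k_A = sum of p_i * l_i
h = -sum(pi * math.log2(pi) for pi in probs)          # H = -sum of p_i * log2(p_i)
print(f"k_A = {k_a:.2f}, H = {h:.2f}")                # k_A = 3.49, H = 3.46
```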
Furthermore, algorithms for the gradual restructuring of balanced trees into an unbalanced form are of particular interest. We propose a protocol for such gradual restructuring, which utilizes newly added nodes to replace high-frequency nodes in the existing tree. These high-frequency nodes are relocated within the tree to positions that correspond to their usage probability, allowing us to incrementally modify the tree's configuration and enhance the efficiency of blockchain integrity checks without a complete overhaul.

**5. Algorithm for Merkle Tree Restructuring**

The restructuring of a Merkle Tree, aimed at optimizing its efficiency for blockchain applications, necessitates adherence to two primary criteria:

- Minimization of Average Path Length: The restructuring process must account for the usage frequency of each leaf, ensuring that the average path length, $k_A$, approaches the theoretical minimum, i.e., the average entropy, $H$. The deviation between $k_A$ and $H$ is assessed through the average discrepancy

$$\Delta = k_A - H, \qquad (1)$$

with each elemental discrepancy defined as

$$\Delta_i = p_i \left( l_i + \log_m (p_i) \right), \qquad (2)$$

where

$$\Delta = \sum_{i=1}^{n} \Delta_i .$$

This requirement mandates the availability of a list of probabilities, $p_i$, and path lengths, $l_i$, for each leaf during the restructuring process, updating only as necessary.

- Minimization of Altered Paths: The algorithm should limit modifications to a minimal subset of nodes, reflecting the reality that only a few accounts are activated in any given transaction, including complex smart transactions. This approach ensures that inactive accounts retain their positions and paths within the tree, preserving the integrity of user data and access pathways. To adhere to this criterion, the algorithm must maintain a list of leaves (nodes) eligible for restructuring, focusing solely on those affected by current transactions.

**Restructuring Algorithm (A Single Iteration)**

Input:
- A tree (or tree fragment) with its root, intermediate nodes, and leaves (the bottom layer of the tree nodes).
- The probability distribution (frequencies) of the tree's leaves.
- A set (list) of leaves available for restructuring.
- A new leaf and/or a new probability distribution for all tree leaves.

Output:
- A restructured tree (or tree fragment) optimized according to the criterion of minimizing the average discrepancy ($\Delta$).

Algorithm Steps:
1. Utilize the set (list) of leaves available for restructuring to formulate all possible restructuring alternatives for the tree (or tree fragment).
2. Evaluate the average discrepancy ($\Delta$) for each alternative.
3. Select the alternative with the lowest average discrepancy ($\Delta$).
4. Adopt the selected alternative as the algorithm's output.

The most challenging aspect of this algorithm is Step 1, which involves generating all possible restructuring alternatives for the tree. This process is crucial for identifying the most efficient tree configuration that minimizes the average path length while accommodating the dynamic nature of blockchain transactions. To demonstrate the algorithm's functionality amidst the increasing number of alternatives, several illustrative examples will be provided, showcasing its application in various scenarios.

**6. Examples of Merkle Tree Restructuring Algorithm Execution**

To illustrate the algorithm's application, let's consider the case of a binary tree where one new leaf is added at each iteration. Initially, we form a list of alternatives for all leaves in the previous tree configuration. Each leaf can be transformed into an intermediate node with two child nodes: one from the previous configuration and one new (added) leaf. A minimal code sketch of this enumerate-and-select step is given below, before we walk through the examples.
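The following sketch implements one iteration of the algorithm for this add-one-leaf, binary case. The tree representation, helper names, and tie-breaking behavior are illustrative choices, not prescribed by the scheme.

```python
# Sketch of one restructuring iteration for the add-one-leaf, binary (m = 2)
# case. A tree is either a leaf name (str) or a pair (left, right).
import math

def leaf_depths(tree, depth=0, out=None):
    """Collect the path length l_i of every leaf in the tree."""
    out = {} if out is None else out
    if isinstance(tree, str):
        out[tree] = depth
    else:
        left, right = tree
        leaf_depths(left, depth + 1, out)
        leaf_depths(right, depth + 1, out)
    return out

def discrepancy(tree, probs, m=2):
    """Average discrepancy (1): Delta = sum of p_i * (l_i + log_m p_i)."""
    depths = leaf_depths(tree)
    return sum(p * (depths[x] + math.log(p, m)) for x, p in probs.items())

def attach(tree, target, new_leaf):
    """Turn leaf `target` into an internal node with children (target, new)."""
    if isinstance(tree, str):
        return (tree, new_leaf) if tree == target else tree
    return (attach(tree[0], target, new_leaf),
            attach(tree[1], target, new_leaf))

def restructure_once(tree, probs, available, new_leaf):
    """Enumerate the alternatives, score each by Delta, return the best."""
    alternatives = [attach(tree, leaf, new_leaf) for leaf in available]
    return min(alternatives, key=lambda t: discrepancy(t, probs))

# First iteration of Example 1 below: tree (A, B), new leaf C, and the new
# probabilities A = 1/2, B = 1/4, C = 1/4.
best = restructure_once(("A", "B"), {"A": 0.5, "B": 0.25, "C": 0.25},
                        available=["A", "B"], new_leaf="C")
print(best)  # ('A', ('B', 'C')), the zero-discrepancy tree of Fig. 11, c
```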
**6.1 Example 1: Restructuring a Binary Tree by Adding One Leaf**

Suppose we have a small binary ($m = 2$) balanced tree consisting of two nodes A and B, with probabilities 7/8 and 1/8, respectively (see Fig. 11, a).

**Figure 11: Binary Tree Restructuring (First Iteration), panels a)-c)**

Assume that at the next moment, a new leaf C is created, with the probabilities now equal to: A (1/2), B (1/4), C (1/4). Our goal is to add this new leaf C in such a way as to minimize the average discrepancy (1). Here and subsequently, we assume that all branches from the previous tree configuration are available for addition.

**6.1.1. First Iteration**

On the first iteration, we have two alternatives for adding the new leaf to the previous tree configuration. These alternatives are presented in Fig. 11 and Table 1. The first alternative (see Fig. 11, b) corresponds to adding node C (1/4) to the branch with node A (1/2). As we can see from Table 1, this increases the discrepancy $\Delta$. The second alternative (see Fig. 11, c) is more preferable, as the discrepancy (1) here is significantly lower (it equals zero), i.e., node C (1/4) should be added to the branch with node B (1/4).

Table 1. Discrepancy Values for Two Alternative Ways of Tree Restructuring

First alternative (Fig. 11, b):

| Leaf | $p_i$ | $l_i$ | $p_i l_i$ | $-p_i \log_2 (p_i)$ | $\Delta_i$ |
|---|---|---|---|---|---|
| A | ½ | 2 | 1 | ½ | ½ |
| B | ¼ | 1 | ¼ | ½ | -¼ |
| C | ¼ | 2 | ½ | ½ | 0 |
| $\Delta$ | | | | | ¼ |

Second alternative (Fig. 11, c):

| Leaf | $p_i$ | $l_i$ | $p_i l_i$ | $-p_i \log_2 (p_i)$ | $\Delta_i$ |
|---|---|---|---|---|---|
| A | ½ | 1 | ½ | ½ | 0 |
| B | ¼ | 2 | ½ | ½ | 0 |
| C | ¼ | 2 | ½ | ½ | 0 |
| $\Delta$ | | | | | 0 |

(The totals follow from $\Delta = k_A - H$: for the first alternative $k_A = 1.75$ and $H = 1.5$, so $\Delta$ = ¼.)

Therefore, by the criterion of minimizing (1), we select the second alternative, i.e., the tree presented in Fig. 11, c. From the perspective of path length, this option is optimal, as its average discrepancy (1) equals zero. Essentially, this indicates that we have achieved an ideal structure for this probability distribution. Continuing from the initial iteration of the Merkle Tree restructuring algorithm, let us delve into subsequent iterations to further elucidate the process and its outcomes.

**6.1.2. Second Iteration**

Let us assume that in the second iteration, a new leaf, D, is introduced, leading to a new probability distribution among the leaves: A (1/2), B (1/8), C (1/4), and D (1/8). Building upon the tree's previous state (refer to Fig. 11, c), we are presented with three distinct restructuring alternatives (illustrated in Fig. 12):

a) Integrating the new node D (1/8) into the branch containing node A (1/2);
b) Integrating the new node D (1/8) into the branch containing node B (1/8);
c) Integrating the new node D (1/8) into the branch containing node C (1/4).

For each alternative, we calculate the average discrepancy, as detailed in Table 2. The most favorable alternative, marked by a zero discrepancy, is highlighted in the table.

**Figure 12: Binary Tree Restructuring (Second Iteration), panels a)-c)**

Table 2. Comparison of Alternatives (Second Iteration)

| | $k_A$ | $H$ | $\Delta$ |
|---|---|---|---|
| Fig. 12, a | 2 | 1.75 | 0.25 |
| **Fig. 12, b** | **1.75** | **1.75** | **0** |
| Fig. 12, c | 1.875 | 1.75 | 0.125 |

Accordingly, the second alternative (see Fig. 12, b) emerges as the most preferable, characterized by a zero average discrepancy, indicating an optimal restructuring choice under the given criteria. These rows can be recomputed directly, as the short sketch below shows.
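The sketch below recomputes Table 2 from formula (1). The leaf depths for each alternative are read off Fig. 12 and hard-coded here as an assumption for illustration.

```python
# Recomputing Table 2: k_A, H, and Delta for the three second-iteration
# alternatives, with leaf depths taken from Fig. 12.
import math

probs = {"A": 1/2, "B": 1/8, "C": 1/4, "D": 1/8}
alternatives = {
    "Fig. 12, a": {"A": 2, "B": 2, "C": 2, "D": 2},  # D attached beside A
    "Fig. 12, b": {"A": 1, "B": 3, "C": 2, "D": 3},  # D attached beside B
    "Fig. 12, c": {"A": 1, "B": 2, "C": 3, "D": 3},  # D attached beside C
}

H = -sum(p * math.log2(p) for p in probs.values())   # entropy, H = 1.75
for name, depth in alternatives.items():
    k_a = sum(probs[x] * depth[x] for x in probs)    # k_A = sum of p_i * l_i
    print(f"{name}: k_A = {k_a:.3f}, Delta = {k_a - H:.3f}")
# Fig. 12, a: k_A = 2.000, Delta = 0.250
# Fig. 12, b: k_A = 1.750, Delta = 0.000
# Fig. 12, c: k_A = 1.875, Delta = 0.125
```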
**6.1.3. Third Iteration**

Advancing to the third iteration, let's hypothesize the addition of another new leaf, E, resulting in the following probability distribution: A (1/2), B (1/8), C (1/8), D (1/8), E (1/8). The tree structure that achieves a zero discrepancy, indicative of an optimal configuration minimizing the average path length, is depicted in Fig. 13.

**Figure 13: Binary Tree Restructuring (Third Iteration)**

Achieving a zero discrepancy signifies that we have attained an optimal tree structure, effectively minimizing the average path length.

**6.1.4. Fourth Iteration**

During the fourth iteration, we face the task of incorporating an additional leaf, resulting in a new probability distribution: A (1/2), B (1/4), C (1/16), D (1/16), E (1/16), F (1/16). Ideally, a tree structure with a zero discrepancy ($\Delta = 0$) would align with the configuration depicted in Figure 14.a. However, this ideal structure cannot be achieved by simply adding a leaf to the previous configuration (as shown in Figure 13).

**Figure 14: Binary Tree Restructuring (Fourth Iteration), panels a)-b)**

Utilizing the tree from Figure 13, we identify five potential alternatives (refer to Table 3), none of which lead to the desired configuration seen in Figure 14.a. In Table 3, we present a comparison of all possible alternatives based on the average discrepancy value (1). It becomes evident that the last three alternatives are equivalent in terms of their potential outcomes, and thus, the choice of restructuring can be made arbitrarily. We decide to employ a lexicographical ordering rule, selecting alternative "C" as illustrated in Figure 14.b. Although this new structure is not optimal, it achieves the minimum average discrepancy of 0.125 among all possible restructuring scenarios.

Table 3. Comparison of Alternatives (Fourth Iteration)

| | $k_A$ | $H$ | $\Delta$ |
|---|---|---|---|
| Restructuring Branch with Leaf A | 2.4375 | 2 | 0.4375 |
| Restructuring Branch with Leaf B | 2.3125 | 2 | 0.3125 |
| Restructuring Branch with Leaf C | 2.125 | 2 | 0.125 |
| Restructuring Branch with Leaf D | 2.125 | 2 | 0.125 |
| Restructuring Branch with Leaf E | 2.125 | 2 | 0.125 |

**6.1.5. Iterations 5-10**

For further illustration, let's assume that each subsequent iteration introduces one additional leaf, with the probability distribution for the tree's leaves as specified in Table 4. This table also indicates the number of available restructuring alternatives and (in parentheses) the number of alternatives with the minimum discrepancy value. The last column provides the minimum discrepancy value (1) across all alternatives. Figures 15-20 showcase the restructuring outcomes based on the chosen optimal alternative.
Table 4. List of Leaves and Their Probabilities per Iteration, Number of Restructuring Alternatives, and Minimum Discrepancy Value

| Iteration number | List of Leaves and Their Probabilities | Number of Restructuring Alternatives | Minimum Discrepancy Value |
|---|---|---|---|
| 5 | A (1/2), B (1/4), C (1/16), D (1/16), E (1/32), F (1/16), G (1/32) | 6 (1) | 0.125 |
| 6 | A (1/4), B (1/4), C (1/16), D (1/8), E (1/16), F (1/8), G (1/16), H (1/16) | 7 (6) | 0.25 |
| 7 | A (1/4), B (1/4), C (1/16), D (1/8), E (1/32), F (1/8), G (1/32), H (1/16), I (1/16) | 8 (1) | 0.1875 |
| 8 | A (1/4), B (1/4), C (1/16), D (1/8), E (1/32), F (1/16), G (1/32), H (1/16), I (1/16), J (1/16) | 9 (2) | 0.125 |
| 9 | A (1/8), B (1/4), C (1/16), D (1/8), E (1/16), F (1/8), G (1/32), H (1/16), I (1/16), J (1/16), K (1/32) | 10 (2) | 0.185 |
| 10 | A (1/8), B (1/4), C (1/16), D (1/8), E (1/32), F (1/8), G (1/32), H (1/16), I (1/16), J (1/16), K (1/32), L (1/32) | 11 (2) | 0.185 |

**Figure 15: Binary Tree Restructuring (Iteration 5)**
**Figure 16: Binary Tree Restructuring (Iteration 6)**
**Figure 17: Binary Tree Restructuring (Iteration 7)**
**Figure 18: Binary Tree Restructuring (Iteration 8)**
**Figure 19: Binary Tree Restructuring (Iteration 9)**
**Figure 20: Binary Tree Restructuring (Iteration 10)**

The table reveals that, despite an increase in the number of alternatives with each iteration, the number of most preferable restructuring options remains limited. Moreover, despite fluctuations in the probabilities of individual leaves, we consistently approach an optimal tree structure. To further enhance the efficiency of tree restructuring, we propose expanding the algorithm's parameters. This involves forming potential alternatives not just by adding leaves but by considering various scenarios for swapping the positions of leaf pairs. This approach is particularly relevant in blockchain transactions, which typically involve at least two accounts, thus allowing for the repositioning of leaves by exchanging their locations. This expansion of the algorithm's capabilities demonstrates our commitment to optimizing tree structures for improved verification efficiency, paving the way for more dynamic and efficient blockchain architectures.

**6.2. Example 1.1: Binary Tree Restructuring Through Leaf Node Swapping**

To demonstrate the algorithm's functionality, let's revisit the outcome of the sixth iteration from the previous example, which represented the worst case in terms of average discrepancies, i.e., it was the most suboptimal structure, depicted in Figure 16. We now aim to improve this structure by swapping the positions of leaf pairs to minimize the discrepancy ($\Delta$). Instead of considering all possible leaf pairs, we focus only on those leaves whose elementary discrepancies ($\Delta_i$) are non-zero. Essentially, the criterion $\Delta_i \neq 0$ indicates that the $i$-th leaf is "out of place." For the graph in Figure 16, we have (the sketch after this list recomputes these values and scores the candidate swaps):

- Leaves: A, B, C, D, E, F, G, H;
- Probabilities ($p_i$): 0.25, 0.25, 0.06, 0.13, 0.06, 0.13, 0.06, 0.06;
- Path Lengths ($l_i$): 2, 3, 4, 3, 4, 4, 4, 2;
- Discrepancies ($\Delta_i$): 0, 0.25, 0, 0, 0, 0.125, 0, -0.125.
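A minimal sketch of this swap evaluation follows, using the exact probabilities behind the rounded list above (1/4, 1/4, 1/16, 1/8, 1/16, 1/8, 1/16, 1/16); the helper names are illustrative.

```python
# Sketch: find "out of place" leaves (Delta_i != 0) and score position swaps
# for the Example 1.1 data.
import math
from itertools import combinations

probs = {"A": 1/4, "B": 1/4, "C": 1/16, "D": 1/8,
         "E": 1/16, "F": 1/8, "G": 1/16, "H": 1/16}
depths = {"A": 2, "B": 3, "C": 4, "D": 3, "E": 4, "F": 4, "G": 4, "H": 2}

def delta_i(p, l):
    """Elemental discrepancy (2): Delta_i = p_i * (l_i + log2 p_i)."""
    return p * (l + math.log2(p))

def total_delta(depth_map):
    return sum(delta_i(probs[x], depth_map[x]) for x in probs)

candidates = [x for x in probs if abs(delta_i(probs[x], depths[x])) > 1e-12]
print("out of place:", candidates)        # ['B', 'F', 'H']

for x, y in combinations(candidates, 2):  # try swapping each candidate pair
    swapped = dict(depths, **{x: depths[y], y: depths[x]})
    print(f"swap {x}<->{y}: Delta = {total_delta(swapped):.4f}")
# swap B<->F: Delta = 0.3750
# swap B<->H: Delta = 0.0625
# swap F<->H: Delta = 0.1250
```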
Thus, leaves B, F, and H are candidates for swapping positions, yielding three alternatives:

- Swapping B and F results in $\Delta$ = 0.375;
- Swapping B and H results in $\Delta$ = 0.0625;
- Swapping F and H results in $\Delta$ = 0.125.

Clearly, the best alternative for our example is to swap leaves B and H. Visually, this corresponds to the graph shown in Figure 16.1, identical to Figure 16 but with B and H swapped. After this restructuring, we observe the following distribution of elementary discrepancies:

- Leaves: A, H, C, D, E, F, G, B;
- Probabilities ($p_i$): 0.25, 0.0625, 0.0625, 0.125, 0.0625, 0.125, 0.0625, 0.25;
- Path Lengths ($l_i$): 2, 3, 4, 3, 4, 4, 4, 2;
- Discrepancies ($\Delta_i$): 0, -0.0625, 0, 0, 0, 0.125, 0, 0.

Now, the only viable alternative is to swap leaves H and F, which results in a zero average discrepancy, indicating an optimal structure in terms of minimizing the average path length in the binary tree. The outcome of this optimization is depicted in Figure 16.2.

**Figure 16.1: Graph Optimization Post-Sixth Iteration (Swapping Leaves B and H)**
**Figure 16.2: Graph Optimization Post-Sixth Iteration (Swapping Leaves H and F)**

It's important to note that non-binary trees are often used in practical scenarios. For instance, Ethereum's blockchain utilizes a Patricia-Merkle tree, which can have up to 16 child nodes, i.e., $m = 16$. Let's demonstrate the algorithm's operation on a simple example of a non-binary ($m = 4$) tree.

**6.3. Example 2.1: Restructuring a Non-Binary Tree by Adding a Single Leaf**

In this example, we explore a non-binary tree where each node can have up to four children ($m = 4$), closely mirroring a simplified real-world scenario of Patricia-Merkle trees in the Ethereum blockchain. Let's assume the initial state of the tree is as shown in Figure 11.a, similar to the previous example. Also, let the newly added leaves and their probability distributions follow the pattern established in Example 1. These changes in probabilities are summarized in Table 5, which also lists the number of alternatives and the minimum discrepancy value (1). Figures 21-30 depict the corresponding tree graphs.
Table 5. List of Leaves and Their Probabilities per Iteration, Number of Restructuring Alternatives, and Minimum Discrepancy Value

| Iteration Number | List of Leaves and Their Probabilities | Number of Alternatives | Minimum Discrepancy Value (1) |
|---|---|---|---|
| 1 | A (1/2), B (1/4), C (1/4) | 3 (1) | 0.25 |
| 2 | A (1/2), B (1/8), C (1/4), D (1/8) | 4 (1) | 0.125 |
| 3 | A (1/2), B (1/8), C (1/8), D (1/8), E (1/8) | 4 (3) | 0.25 |
| 4 | A (1/2), B (1/4), C (1/16), D (1/16), E (1/16), F (1/16) | 6 (1) | 0.375 |
| 5 | A (1/2), B (1/4), C (1/16), D (1/16), E (1/32), F (1/16), G (1/32) | 7 (1) | 0.34375 |
| 6 | A (1/4), B (1/4), C (1/16), D (1/8), E (1/16), F (1/8), G (1/16), H (1/16) | 7 (1) | 0.25 |
| 7 | A (1/4), B (1/4), C (1/16), D (1/8), E (1/32), F (1/8), G (1/32), H (1/16), I (1/16) | 9 (1) | 0.21875 |
| 8 | A (1/4), B (1/4), C (1/16), D (1/8), E (1/32), F (1/16), G (1/32), H (1/16), I (1/16), J (1/16) | 10 (1) | 0.15625 |
| 9 | A (1/8), B (1/4), C (1/16), D (1/8), E (1/16), F (1/8), G (1/32), H (1/16), I (1/16), J (1/16), K (1/32) | 10 (1) | 0.21875 |
| 10 | A (1/8), B (1/4), C (1/16), D (1/8), E (1/32), F (1/8), G (1/32), H (1/16), I (1/16), J (1/16), K (1/32), L (1/32) | 12 (1) | 0.21875 |

**Figure 21: Tree Restructuring (Iteration 1)**
**Figure 22: Tree Restructuring (Iteration 2)**
**Figure 23: Tree Restructuring (Iteration 3)**
**Figure 24: Tree Restructuring (Iteration 4)**
**Figure 25: Tree Restructuring (Iteration 5)**
**Figure 26: Tree Restructuring (Iteration 6)**
**Figure 27: Tree Restructuring (Iteration 7)**
**Figure 28: Tree Restructuring (Iteration 8)**
**Figure 29: Tree Restructuring (Iteration 9)**
**Figure 30: Tree Restructuring (Iteration 10)**

The calculations of discrepancies in Table 5 show that the configuration of the restructured trees tends towards optimality by minimizing the average path length. Now, let's demonstrate the algorithm's operation in the mode of swapping positions between pairs of nodes, as in Example 1.1.

**6.4. Example 2.2: Restructuring a Non-Binary Tree Through Leaf Pair Swapping**

Let's delve into the most challenging scenario from Example 2, which resulted in the highest discrepancy value. This scenario corresponds to the tree graph obtained after the fourth iteration, as depicted in Figure 24. We will demonstrate how swapping the positions of leaf pairs can enhance this structure. To form a set of alternatives for the graph in Figure 24, we observe:

- Leaves: A, B, C, D, E, F;
- Probabilities ($p_i$): 0.50, 0.25, 0.06, 0.06, 0.06, 0.06;
- Path Lengths ($l_i$): 1, 2, 1, 1, 2, 2;
- Discrepancies ($\Delta_i$): 0.25, 0.25, -0.0625, -0.0625, 0, 0.

Focusing on pairs with differing path lengths, we identify:

- Swapping positions between leaves A and B, $\Delta$ = 0.625;
- Swapping positions between leaves B and C, $\Delta$ = 0.1875;
- Swapping positions between leaves B and D, $\Delta$ = 0.1875.

The last two alternatives are equivalent and halve the discrepancy (1), thus optimizing the final tree structure (see Figure 24.1).

**Figure 24.1: Graph Optimization After the Fourth Iteration (Swapping Positions Between Leaves B and C)**

This approach allows for the dynamic restructuring of trees, minimizing the divergence between the current and optimal graph structures. By combining different rules (adding new leaves and swapping positions of existing leaves), we can achieve highly efficient structures that minimize the average path length. Through this methodology, we underscore the algorithm's capability to swiftly adapt tree structures, ensuring an optimal configuration that aligns closely with the theoretical minimum discrepancy. This adaptability is crucial for maintaining efficient data verification processes in blockchain technologies, where the dynamic nature of transactions necessitates a flexible yet robust system for ensuring data integrity. The same swap evaluation generalizes directly to non-binary trees, as the short sketch below shows for $m = 4$.
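For an $m$-ary tree the only change is the logarithm base in formulas (1) and (2). The sketch below, with the Figure 24 depths hard-coded as an assumption, reproduces the Example 2.2 numbers:

```python
# Sketch: discrepancy and swap scoring for an m-ary tree (here m = 4),
# applied to the Example 2.2 data; depths are read off Figure 24.
import math

m = 4
probs = {"A": 1/2, "B": 1/4, "C": 1/16, "D": 1/16, "E": 1/16, "F": 1/16}
depths = {"A": 1, "B": 2, "C": 1, "D": 1, "E": 2, "F": 2}

def total_delta(depth_map):
    # Delta = sum of p_i * (l_i + log_m p_i); log base m replaces base 2.
    return sum(p * (depth_map[x] + math.log(p, m)) for x, p in probs.items())

print(f"initial Delta = {total_delta(depths):.4f}")  # 0.3750, as in Table 5

for x, y in [("A", "B"), ("B", "C"), ("B", "D")]:    # pairs with l_x != l_y
    swapped = dict(depths, **{x: depths[y], y: depths[x]})
    print(f"swap {x}<->{y}: Delta = {total_delta(swapped):.4f}")
# swap A<->B: Delta = 0.6250
# swap B<->C: Delta = 0.1875
# swap B<->D: Delta = 0.1875
```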
The proposed algorithm exemplifies a significant advancement in optimizing tree structures for blockchain applications, particularly in scenarios where non-binary trees, such as the Patricia-Merkle trees used in Ethereum, are prevalent. By judiciously applying leaf swapping and addition strategies, we can significantly enhance the efficiency of these cryptographic structures, paving the way for more scalable and cost-effective blockchain operations. In conclusion, let's explore another example of a tree with $m = 16$, which can be considered as restructuring a fragment of the Patricia-Merkle tree in the Ethereum blockchain.

**6.5. Example 2.3: Restructuring a Patricia-Merkle Tree Fragment Through Leaf Pair Swapping**

Imagine we have a fragment of the Patricia-Merkle tree with leaves (and probabilities) assigned as follows (see Figure 31): A (0.003906), B (0.0625), C (0.16529), D (0.0625), E (0.0625), F (0.0625), G (0.0625), H (0.0625), I (0.003906), J (0.0625), K (0.0625), L (0.0625), M (0.0625), N (0.003906), O (0.0625), P (0.000244), Q (0.0625), R (0.01), S (0.0625), T (0.000244).

For this tree configuration, we have an average discrepancy $\Delta$ = 0.1297. By swapping the positions of leaves A (0.003906) and E (0.0625), we achieve the lowest discrepancy $\Delta$ = 0.0711 among all possible alternatives. This tree is depicted in Figure 32. Continuing to apply the algorithm, we swap the positions of leaves P (0.000244) and R (0.01), resulting in a discrepancy $\Delta$ = 0.0614 and a graph as shown in Figure 33.

**Figure 31: Fragment of a Patricia-Merkle Tree with $m = 16$**
**Figure 32: Result of the First Optimization of the Patricia-Merkle Tree with $m = 16$**
**Figure 33: Result of the Second Optimization of the Patricia-Merkle Tree with $m = 16$**

Thus, even a small number of algorithm iterations allows for a significant reduction in the discrepancy (1) and optimization of the tree structure, reducing the average path length. This example underscores the potential of our restructuring algorithm to enhance the efficiency of Patricia-Merkle trees in blockchain applications. By judiciously swapping the positions of leaf pairs, we can significantly improve the tree's structure, aligning it closer to the optimal configuration. This process not only minimizes the average path length but also contributes to the overall efficiency and scalability of blockchain operations, particularly in systems like Ethereum where Patricia-Merkle trees play a crucial role in data integrity verification.

**7. Path Encoding in the Adaptive Merkle Tree**

The integration of adaptive Merkle Trees into existing blockchain systems like Ethereum presents a paradigm shift in data integrity verification. This shift, while promising significant efficiency gains, also necessitates substantial modifications to current protocols. In Ethereum's existing structure, an account's address directly determines its encoding path in the Patricia-Merkle Tree. This encoding, defined by a series of nibbles (four-bit blocks), uniquely maps each address from the root to a specific leaf in the vast tree structure. The current system's design allows for the seamless integration of new addresses into this expansive tree. Adopting an adaptive approach fundamentally alters this scenario. Instead of a balanced structure, we would deal with a highly unbalanced tree where frequently used leaves are positioned closer to the root, and less probable leaves are relegated to lower levels.
Implementing this directly in the existing Patricia-Merkle Tree structure is not feasible. However, creating a new tree during a protocol update in Ethereum could allow for the incorporation of this adaptive approach, radically changing the concept of path encoding in the tree. The challenge arises in reconciling existing addresses with new path encodings. In the new structure, a random address would no longer be tied to a specific path encoding but would merely determine the leaf's value, not its path from the root. This concept is illustrated in Figures 34 and 35, where Figure 34 shows the simplified path encoding in a balanced tree with corresponding addresses, and Figure 35 depicts these addresses in an adaptive tree with Huffman code-based path encodings. A practical solution to address compatibility issues in the adaptive tree is the tabular storage of two structures: "account address – path encoding." This approach allows for the adaptation of the tree based on the usage probabilities of addresses, leading to significant savings in verification complexity and cost. Simultaneously, it preserves the existing mechanism for generating random addresses, including the ability to transfer funds to not-yet-created accounts.

**Figure 34: Path Encoding in a Balanced Merkle Tree**
**Figure 35: Adaptive Merkle Tree with Huffman Code-Based Path Encoding**

Figure 34 illustrates the path encoding mechanism within a traditional balanced Merkle Tree, as utilized in current blockchain systems like Ethereum. Each account address is directly linked to a unique path encoded by a series of nibbles, efficiently mapping the journey from the tree's root to the respective leaf. This representation underscores the systematic and predictable nature of path encoding in a balanced tree structure, highlighting the ease with which new addresses can be integrated into the expansive tree.

Figure 35 depicts the transformative approach of an adaptive Merkle Tree, where path encodings are based on Huffman codes. This figure contrasts sharply with Figure 34, showcasing a more dynamic and usage-frequency-oriented structure. In this adaptive model, the path encoding is no longer a straightforward derivative of the account address but is instead determined by the frequency of data access, leading to a highly unbalanced but efficient tree structure. This figure effectively demonstrates the shift from a uniform to a tailored approach in path encoding, aligning more closely with the actual usage patterns within the blockchain network.

**Table 6: Correlation between Account Addresses and Path Encodings in Adaptive Merkle Tree**

| Leaves | Leaf probabilities | Path Encoding in a Balanced Merkle Tree (Account Addresses) | Huffman Code-Based Path Encoding in Adaptive Merkle Tree |
|---|---|---|---|
| A | 0.2041 | 0000 | 00 |
| B | 0.1531 | 0001 | 110 |
| C | 0.1224 | 0010 | 100 |
| D | 0.1020 | 0011 | 010 |
| E | 0.0816 | 0100 | 1110 |
| F | 0.0714 | 0101 | 1011 |
| G | 0.0612 | 0110 | 0111 |
| H | 0.0510 | 0111 | 0110 |
| I | 0.0408 | 1000 | 11111 |
| J | 0.0306 | 1001 | 10100 |
| K | 0.0204 | 1010 | 111101 |
| L | 0.0204 | 1011 | 101010 |
| M | 0.0102 | 1100 | 1111000 |
| N | 0.0102 | 1101 | 1010111 |
| O | 0.0102 | 1110 | 1010110 |
| P | 0.0102 | 1111 | 1111001 |
| Average code length | | 4 | **3.49 (13% more efficient)** |

In code, this correspondence reduces to a simple lookup table, as sketched below.
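A minimal sketch of the "account address – path encoding" table follows: it stores the Table 6 correspondence as a plain mapping and falls back to the balanced, address-derived path for leaves without an adaptive entry. The fallback rule and names are illustrative assumptions, not part of the proposed protocol.

```python
# Sketch: tabular "account address -> path encoding" storage for the adaptive
# tree. Entries mirror Table 6; addresses without an adaptive entry fall back
# to the balanced encoding derived from the address itself.
balanced = {  # leaf -> balanced 4-bit path (the "address" in this toy model)
    "A": "0000", "B": "0001", "C": "0010", "D": "0011",
    "E": "0100", "F": "0101", "G": "0110", "H": "0111",
    "I": "1000", "J": "1001", "K": "1010", "L": "1011",
    "M": "1100", "N": "1101", "O": "1110", "P": "1111",
}
adaptive = {  # leaf -> Huffman-code path from Table 6
    "A": "00", "B": "110", "C": "100", "D": "010",
    "E": "1110", "F": "1011", "G": "0111", "H": "0110",
    "I": "11111", "J": "10100", "K": "111101", "L": "101010",
    "M": "1111000", "N": "1010111", "O": "1010110", "P": "1111001",
}
probs = {"A": 0.2041, "B": 0.1531, "C": 0.1224, "D": 0.1020, "E": 0.0816,
         "F": 0.0714, "G": 0.0612, "H": 0.0510, "I": 0.0408, "J": 0.0306,
         "K": 0.0204, "L": 0.0204, "M": 0.0102, "N": 0.0102, "O": 0.0102,
         "P": 0.0102}

def path_for(leaf: str) -> str:
    """Adaptive path if one has been assigned, else the balanced path."""
    return adaptive.get(leaf, balanced[leaf])

def avg_len(table):
    """Probability-weighted average path length of an encoding table."""
    return sum(p * len(table[x]) for x, p in probs.items())

print(round(avg_len(balanced), 2), round(avg_len(adaptive), 2))  # 4.0 3.49
```

Keeping both columns side by side is exactly what makes the migration incremental: an address can be served from the balanced encoding until its usage statistics justify assigning it an adaptive path.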
**8. Enhancing Verkle Trees Through Adaptive Restructuring**

The advent of Verkle trees represents a significant leap forward in the optimization of blockchain storage and verification processes. By combining the succinctness of vector commitments with the hierarchical structure of Merkle trees, Verkle trees offer a promising solution to scalability and efficiency challenges in blockchain systems. This section delves into the potential applications of our adaptive restructuring approach to Verkle trees, exploring how dynamic adjustments to tree configurations can further enhance their efficiency and applicability in blockchain technologies.

**8.1. Application of Adaptive Trees in Verkle Tree Technology**

Verkle trees, a novel data structure, merge the benefits of Merkle trees with vector commitments, providing a compact, efficient means of storing and verifying blockchain state. They stand poised to revolutionize data storage in blockchain by significantly reducing the size of proofs required for state verification. Our approach, centered on adaptive restructuring, introduces a method to dynamically adjust Verkle tree configurations based on usage patterns, thereby optimizing both storage efficiency and verification speed.

Adaptive restructuring in the context of Verkle trees involves the dynamic adjustment of tree branches and nodes based on the frequency and patterns of data access and updates. This method leverages statistical analysis to predict which parts of the tree are accessed more frequently, allowing for a more efficient organization of data. By applying Huffman or Shannon-Fano coding principles, we can ensure that the most accessed elements are closer to the root, thereby reducing the path length for common operations.
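One simple way to drive such restructuring, sketched below under our own assumptions (the decay rate, drift metric, and threshold are illustrative, not taken from the paper), is to keep a decayed access counter per key and trigger a rebuild once the observed access distribution drifts far enough from the one the current layout was built for.

```python
from collections import defaultdict

class AccessTracker:
    # Exponentially decayed access counts; signal a rebuild when the
    # tracked distribution drifts from the layout's baseline (L1 distance).
    def __init__(self, decay=0.99, drift_threshold=0.25):
        self.decay, self.threshold = decay, drift_threshold
        self.counts = defaultdict(float)
        self.baseline = {}

    def record(self, key):
        for k in self.counts:          # decay old observations
            self.counts[k] *= self.decay
        self.counts[key] += 1.0

    def should_rebuild(self):
        total = sum(self.counts.values()) or 1.0
        drift = sum(abs(self.counts[k] / total - self.baseline.get(k, 0.0))
                    for k in set(self.counts) | set(self.baseline))
        return drift > self.threshold

    def snapshot(self):
        # Call after rebuilding the subtree with Huffman ordering.
        total = sum(self.counts.values()) or 1.0
        self.baseline = {k: v / total for k, v in self.counts.items()}
```

In use, `record()` runs on every access, and when `should_rebuild()` fires, the affected subtree is reordered by frequency and `snapshot()` fixes the new baseline; the O(n) decay loop is acceptable for a sketch but would be amortized in practice.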
**8.2. Technology and Advantages**

- Reduced Proof Sizes: By optimizing the structure of Verkle trees to reflect access patterns, we can significantly reduce the size of proofs required for verifying transactions. This is because frequently accessed data can be positioned closer to the root, making it quicker and less resource-intensive to generate and verify proofs.
- Enhanced Verification Speed: Adaptive restructuring can lead to a more efficient verification process. Shorter paths for frequently accessed data mean that less computational effort is required to verify transactions, enhancing the overall throughput of the blockchain network.
- Dynamic Scalability: As blockchain systems evolve, so do their storage and access patterns. Adaptive restructuring allows Verkle trees to dynamically adjust to these changes, ensuring that the data structure remains optimized for current usage trends. This adaptability is crucial for maintaining high performance as the system scales.
- Cost Efficiency: By optimizing the path lengths for data access and verification, the proposed approach can also reduce the cost associated with these operations. In blockchain systems where transaction costs are a significant concern, such as Ethereum, this can lead to substantial savings for users and applications.
- Application in Sharding: Verkle trees are particularly well-suited for sharded blockchain architectures. Adaptive restructuring can enhance the efficiency of cross-shard communication by optimizing the storage and retrieval of shard-specific data, further improving the scalability of sharded networks.

Thus, the integration of adaptive restructuring techniques with Verkle tree technology presents a promising avenue for enhancing blockchain efficiency. By dynamically optimizing data storage and access patterns, we can achieve significant improvements in proof size, verification speed, and overall system scalability. This approach not only addresses current scalability and efficiency challenges but also provides a flexible framework that can adapt to future developments in blockchain technology. As we continue to explore the potential of adaptive Verkle trees, it becomes increasingly clear that this innovative approach could play a pivotal role in the next generation of blockchain systems. A rough quantitative illustration of the proof-size effect is sketched below.
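Since a Verkle proof carries roughly one commitment per level of the path, the expected number of levels is a reasonable proxy for proof cost. The sketch below compares a balanced 16-ary layout against a frequency-ordered (b-ary Huffman) layout under a Zipf-like access distribution; the distribution, account count, and branching factor are our own illustrative assumptions.

```python
import heapq
import math

def expected_depth_bary_huffman(probs, b):
    # Expected leaf depth of a b-ary Huffman tree built over access
    # probabilities -- a proxy for per-level commitment count in a proof.
    n = len(probs)
    pad = (1 - n) % (b - 1)  # dummies so every merge takes exactly b nodes
    heap = [(p, i) for i, p in enumerate(probs)] + [(0.0, n + j) for j in range(pad)]
    heapq.heapify(heap)
    tick, cost = len(heap), 0.0
    while len(heap) > 1:
        w = sum(heapq.heappop(heap)[0] for _ in range(b))
        cost += w  # each merge pushes its members one level deeper
        heapq.heappush(heap, (w, tick))
        tick += 1
    return cost

n, b = 4096, 16  # illustrative account count and branching factor
weights = [1.0 / (r + 1) for r in range(n)]  # Zipf-like access frequencies
total = sum(weights)
balanced = math.ceil(math.log(n, b))  # every balanced proof walks log_b(n) levels
adaptive = expected_depth_bary_huffman([w / total for w in weights], b)
print(f"balanced: {balanced} levels, adaptive: {adaptive:.2f} expected levels")
```

On this synthetic workload the adaptive layout needs noticeably fewer expected levels than the balanced one, which is precisely the mechanism behind the reduced proof sizes and faster verification claimed above.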
**9. Discussion**

In this work, we have embarked on a comprehensive exploration of optimizing tree structures within the blockchain ecosystem, addressing the critical challenge of scalability that plagues current blockchain technologies. Our investigation spans from conceptualizing the inherent problems associated with traditional Merkle trees to proposing and validating an innovative approach for adaptive restructuring of these trees to enhance efficiency and scalability in blockchain systems.

The blockchain paradigm, while revolutionary, faces significant scalability challenges, primarily due to the inherent limitations of its underlying data structures and consensus mechanisms. Traditional Merkle trees, despite their widespread adoption for ensuring data integrity and facilitating efficient verifications, contribute to these scalability issues due to their static nature and the increasing cost of operations as the blockchain grows. Existing solutions to blockchain scalability, such as sharding and layer 2 protocols, offer partial remedies by distributing the workload or offloading transactions. However, these approaches often introduce complexity or compromise on decentralization and security. Our review of the state of the art highlights a gap in dynamically optimizing the data structures themselves to directly address the root causes of inefficiency.

**9.1. Our Contribution**

Our primary contribution lies in the introduction of adaptive Merkle trees, a novel concept that leverages dynamic restructuring based on usage patterns to optimize path lengths and reduce the computational overhead associated with data verification and integrity checks. By applying principles from Huffman and Shannon-Fano coding to the organization of tree nodes, we ensure that frequently accessed data is more accessible, thereby reducing the average path length and associated costs.

Through rigorous analysis and examples, we demonstrated the efficiency gains achievable with adaptive Merkle trees. Our algorithm for Merkle tree restructuring, detailed in Section 5, provides a systematic approach for dynamically adjusting tree structures, significantly improving upon the static nature of traditional Merkle trees. Extending our concept to Verkle trees, we showcased how adaptive restructuring could be applied to this advanced data structure, further enhancing its efficiency and making it even more suitable for large-scale blockchain applications. This application not only underscores the versatility of our approach but also its potential to contribute to the next generation of blockchain technologies.

**9.2. Comparison with Existing Solutions**

In the quest to address blockchain scalability, several innovative solutions have been proposed and implemented across various platforms. Each of these solutions presents unique advantages and challenges. Below (Table 7), we provide a comparative analysis of these solutions, including our adaptive restructuring approach, to highlight their relative strengths and limitations.

**Table 7: Comparison of Scalability Solutions in Blockchain Technology**

| Solution Type | Examples | Advantages | Disadvantages |
|---|---|---|---|
| Sharding | Ethereum 2.0, Zilliqa | - Distributes workload across multiple chains. - Enhances transaction throughput. | - Increases complexity. - Potential security risks due to smaller validator sets. |
| Layer 2 Protocols | Lightning Network, Plasma | - Offloads transactions from the main blockchain. - Facilitates faster and cheaper transactions. | - Can introduce centralization points. - Complex to manage and integrate. |
| State Channels | Raiden Network, Celer Network | - Enables off-chain transaction channels. - Instantaneous transaction settlement. | - Requires on-chain settlement for disputes. - Limited to participants in the channel. |
| Sidechains | Liquid Network, POA Network | - Allows for customizable blockchains linked to the main chain. - Facilitates specific use cases and scalability. | - Security is often reliant on the main chain. - Interoperability challenges. |
| Adaptive Merkle Trees | Our Approach | - Dynamically optimizes data structure based on usage. - Reduces average path length and verification costs. | - Requires initial restructuring and maintenance. - Concept is newer and less tested in real-world scenarios. |

The comparative analysis underscores the diversity of approaches to tackling blockchain scalability, each with its unique trade-offs. Sharding and Layer 2 protocols, while promising significant throughput improvements, introduce additional layers of complexity and potential security concerns. State channels and sidechains offer more specialized solutions but are limited by their applicability and integration challenges. Our approach, adaptive restructuring of Merkle and Verkle trees, stands out by directly optimizing the underlying data structure of the blockchain. This method offers a fundamental improvement in efficiency without introducing external dependencies or significantly altering the blockchain's operational principles. While it necessitates initial efforts for restructuring and ongoing maintenance, the benefits of reduced path lengths and lower verification costs present a compelling case for its adoption.
Moreover, being a relatively new concept, it opens up extensive opportunities for further research and development to fully realize its potential and address any emerging challenges. Thus, our work contributes a novel perspective to the field of blockchain research, opening new avenues for the development of more scalable and efficient blockchain systems. By addressing scalability at the data structure level, we provide a foundational solution that can be integrated with other scalability and efficiency-enhancing techniques, offering a comprehensive approach to overcoming one of the most significant barriers to blockchain adoption.

**10. Conclusion**

The exploration of adaptive restructuring in Merkle and Verkle trees within this study presents a novel approach to addressing the enduring challenge of blockchain scalability. By dynamically adjusting the structure of these trees based on usage patterns, we propose a method that significantly reduces the average path length for verification processes, thereby enhancing the efficiency and scalability of blockchain systems.

Our contribution to the field of blockchain technology is twofold. Firstly, we introduce a conceptual framework for the adaptive restructuring of Merkle trees, which lays the groundwork for practical implementations in existing blockchain infrastructures. Secondly, through a series of detailed examples, we demonstrate the feasibility and benefits of our approach, highlighting its potential to optimize verification processes and reduce associated costs.

Comparative analysis with existing scalability solutions reveals that while many approaches offer improvements in transaction throughput and efficiency, they often introduce additional complexity or security concerns. In contrast, adaptive restructuring directly targets the underlying data structure of the blockchain, offering foundational improvements without compromising on security or introducing external dependencies.

The implications of our research extend beyond theoretical advancements. By providing a scalable and efficient method for data verification, adaptive restructuring has the potential to facilitate broader adoption of blockchain technology across various sectors, including finance, supply chain management, and beyond. It opens up new avenues for blockchain applications that require high throughput and efficient data integrity verification.

In conclusion, the adaptive restructuring of Merkle and Verkle trees represents a significant step forward in the quest for blockchain scalability. It offers a unique blend of efficiency, security, and practicality, making it a promising solution for the next generation of blockchain systems. As the blockchain ecosystem continues to evolve, the principles and methodologies outlined in this study will undoubtedly contribute to its growth and maturity, paving the way for more scalable, efficient, and versatile blockchain architectures.
{ "disclaimer": "Notice: Paper or abstract available at https://arxiv.org/abs/2403.00406, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "GREEN", "url": "http://arxiv.org/pdf/2403.00406" }
2024
[ "JournalArticle" ]
true
2024-03-01T00:00:00
[ { "paperId": "c6f8f769b41cee1cbedf2d2cf9b77f93b64e4120", "title": "Minimizing Block Incentive Volatility Through Verkle Tree-Based Dynamic Transaction Storage" }, { "paperId": "5902810bbbcfa673f365de85cc492879654ebffb", "title": "Fast intensive validation on blockchain with scale-out dispute resolution" }, { "paperId": "561124f31bd25e111424789993ab44911146e4c6", "title": "Optimization of Merkle Tree Structures: A Focus on Subtree Implementation" }, { "paperId": "9e17bfe2de77407c7166fa160900f7be862596d9", "title": "A dynamic state sharding blockchain architecture for scalable and secure crowdsourcing systems" }, { "paperId": "399066925c3cb0f8de15a90ed4b7631df16f2256", "title": "A scalable, efficient, and secured consensus mechanism for Vehicle-to-Vehicle energy trading blockchain" }, { "paperId": "903d15c229e3b5787a2924cc5c47edc1568d965d", "title": "OverShard: Scaling blockchain by full sharding with overlapping network and virtual accounts" }, { "paperId": "cf360b7e1c5cd0c50556c3bf85baa5561cda28ec", "title": "A novel blockchain architecture with mutable block and immutable transactions for enhanced scalability" }, { "paperId": "121c2c8b926ff8156eaadfe60678866d74bdfcf1", "title": "A File Verification Scheme Based on Verkle Trees" }, { "paperId": "29f6cc329fe4f9b6f41a8b7be2b4ab283d253f74", "title": "Hardware Accelerated Reusable Merkle Tree Generation for Bitcoin Blockchain Headers" }, { "paperId": "e6a079c2e64cd246c67001cf8c95fa211d59e0e9", "title": "BLAST-IoT: BLockchain Assisted Scalable Trust in Internet of Things" }, { "paperId": "26b26ca0b95d865603889313e64ace6617c112b3", "title": "PRI: PCH-based privacy-preserving with reusability and interoperability for enhancing blockchain scalability" }, { "paperId": "78bb887d4dc5609e3603468e89b0179682e2a9bd", "title": "Blockchain for Modern Applications: A Survey" }, { "paperId": "57562777be3443b7bde45689a29c02fa03fba2ee", "title": "A systematic review of blockchain scalability: Issues, solutions, analysis and future research" }, { "paperId": "9006824870cd8fd818d2c42d1d7bc3db38751314", "title": "Merkle Tree: A Fundamental Component of Blockchains" }, { "paperId": "0f33216c3f5ab6a215d553d4258980a0617b650c", "title": "Review and Investigation of Merkle Tree’s Technical Principles and Related Application Fields" }, { "paperId": "b2b434ad786220ef5fb4b10b99bab0a5fd2a5b9e", "title": "Types of Blockchain" }, { "paperId": "34689080e758df6ba9ff41098cbba734be5470ec", "title": "Cryptography in Blockchain" }, { "paperId": "0d63e5bf40c61aeb51608567f181135ba1a8bb0c", "title": "A Self-Stabilizing Hashed Patricia Trie" }, { "paperId": "680b11975830d4f46afb142387e373de2324524b", "title": "CE-PBFT: A high availability consensus algorithm for large-scale consortium blockchain" }, { "paperId": "5aed6b0c6ca5b6bbe62557b9631e331ab99d99c2", "title": "Multi-State Merkle Patricia Trie (MSMPT): High-Performance Data Structures for Multi-Query Processing Based on Lightweight Blockchain" }, { "paperId": "d4ed93fa85263bfaf0998cd5e6e12dd6f32a5bc5", "title": "Graph Coded Merkle Tree: Mitigating Data Availability Attacks in Blockchain Systems Using Informed Design of Polar Factor Graphs" }, { "paperId": null, "title": "IEEE Draft Standard for Blockchain-based Digital Asset Classification" }, { "paperId": "afb9d253720281f261bb967395f613f637b7a81e", "title": "Scalable blockchains - A systematic review" }, { "paperId": "eb2225e14bc47053b9433cc352d31876bb6950b4", "title": "Blockchain Technology for IoT Applications" }, { "paperId": null, "title": "Blockchain Technology and 
Application: Third CCF China Blockchain Conference, CBCC 2020, Jinan, China, December 18-20, 2020, Revised Selected Papers. vol. 1305. Singapore" }, { "paperId": null, "title": "Daily Active Addresses" } ]
18,465
en
[ { "category": "Business", "source": "external" }, { "category": "Environmental Science", "source": "s2-fos-model" }, { "category": "Business", "source": "s2-fos-model" }, { "category": "Economics", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/0263cf4e40d6bc60fe421ed73763e4fbed70c0fd
[ "Business" ]
0.838219
Can Decentralization Drive Green Innovation? A Game Theoretical Analysis of Manufacturer Encroachment Selection with Consumer Green Awareness
0263cf4e40d6bc60fe421ed73763e4fbed70c0fd
[ { "authorId": "33586843", "name": "Dan-Ni Cao" }, { "authorId": "2118398094", "name": "Jin Li" }, { "authorId": "2109615284", "name": "Gege Liu" }, { "authorId": "37698352", "name": "R. Mei" } ]
{ "alternate_issns": null, "alternate_names": null, "alternate_urls": null, "id": null, "issn": null, "name": null, "type": null, "url": null }
With the increase of public environmental awareness and the growth of e-commerce, sustainable development promotes the manufacturer to increasingly participate in green innovation and make full use of the online sales channel to enhance competitiveness. Despite decentralized encroachment being widely adopted in business reality, the current literature has commonly paid more attention to centralized encroachment. To complement related research, a dual-channel green supply chain composed of a manufacturer (its retail subsidiary) and a retailer is investigated. We focus on what encroachment strategy (centralization vs. decentralization) drives the green innovation and analyze the impact of consumer green awareness and product substitutability on the manufacturer’s encroachment strategy, green innovation efforts and supply chain performance. Under each encroachment strategy, we build a Stackelberg game model and derive the equilibrium outcome. Then, we theoretically analyze the effects of consumer green awareness and product substitutability on green innovation and each party’s profitability. Our comparative analysis shows what encroachment strategy drives green innovation and what encroachment strategy benefits both parties and social welfare. Numerical studies are also conducted to support the analytical results. Our key findings reveal that decentralization improves the green innovation and achieves a both-win situation for the manufacturer and the retailer. Besides that, decentralization can reduce the environmental damage and increase social welfare as well.
# processes _Article_

## Can Decentralization Drive Green Innovation? A Game Theoretical Analysis of Manufacturer Encroachment Selection with Consumer Green Awareness

**Dan Cao** [1], **Jin Li** [2], **Gege Liu** [2] **and Ran Mei** [3,]*

1 School of Business Administration, Zhejiang Gongshang University, Hangzhou 310018, China; cd@mail.zjgsu.edu.cn
2 School of Management and E-Business, Modern Business Research Center, Key Research Institute, Zhejiang Gongshang University, Hangzhou 310018, China; jinli@mail.zjgsu.edu.cn (J.L.); lgg979492@163.com (G.L.)
3 School of Economics and Management, Tongji University, Shanghai 200092, China
***** Correspondence: 1610334@tongji.edu.cn

**Citation:** Cao, D.; Li, J.; Liu, G.; Mei, R. Can Decentralization Drive Green Innovation? A Game Theoretical Analysis of Manufacturer Encroachment Selection with Consumer Green Awareness. _Processes_ 2021, 9, 990. https://doi.org/10.3390/pr9060990

Academic Editor: Anet Režek Jambrak

Received: 10 May 2021; Accepted: 31 May 2021; Published: 3 June 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

**Abstract: With the increase of public environmental awareness and the growth of e-commerce,** sustainable development promotes the manufacturer to increasingly participate in green innovation and make full use of the online sales channel to enhance competitiveness. Despite decentralized encroachment being widely adopted in business reality, the current literature has commonly paid more attention to centralized encroachment. To complement related research, a dual-channel green supply chain composed of a manufacturer (its retail subsidiary) and a retailer is investigated. We focus on what encroachment strategy (centralization vs. decentralization) drives the green innovation and analyze the impact of consumer green awareness and product substitutability on the manufacturer's encroachment strategy, green innovation efforts and supply chain performance. Under each encroachment strategy, we build a Stackelberg game model and derive the equilibrium outcome. Then, we theoretically analyze the effects of consumer green awareness and product substitutability on green innovation and each party's profitability. Our comparative analysis shows what encroachment strategy drives green innovation and what encroachment strategy benefits both parties and social welfare. Numerical studies are also conducted to support the analytical results. Our key findings reveal that decentralization improves the green innovation and achieves a both-win situation for the manufacturer and the retailer. Besides that, decentralization can reduce the environmental damage and increase social welfare as well.

**Keywords: green innovation; manufacturer encroachment; consumer green awareness; substitutability**

**1. Introduction**
In recent years, environmental damage and deterioration issues, such as air and water pollution, the greenhouse effect, climate change, landfill waste, acid rain and noise pollution, have drawn wide concern from countries around the world. These environmental hazards are increasingly threatening human health and survival. To address these environmental issues, both governments and firms all over the world have taken action. According to the UK Climate Change Act schedules, a target of an 80% reduction in greenhouse gas emissions by 2050 has been set. The government in China plans for a 40–45% reduction in its carbon emissions per unit of GDP by 2025 compared to its 2005 level [1]. Many well-known firms, including Wal-Mart, M&S, Tesco and Debenhams, take various measures to reduce their own and their suppliers' carbon footprint in raw material sourcing, production, logistics and retailing [2].

The green supply chain can be defined as enhancing the overall environmental protection awareness of the supply chain and promoting the improvement of economic and social benefits by utilizing resources in the whole process from production to sales [3]. In addition to traditional supply chain management activities, green supply chain management (GSCM) puts more emphasis on reducing the harm of its operation to the environment and human health [4]. Many researchers all over the world are devoted to investigating GSCM and green innovation involving various factors, such as green planning, green product design, green manufacturing, by-product use and reverse logistics [5,6]. In the context of environmental protection, green (environmentally friendly) products with appropriate quality and the lowest negative effect on the environment have been manufactured [7]. In addition, many manufacturers are willing to make green innovation efforts to improve the supply chain greenness, e.g., producing new green products, developing green technologies and using new clean energy to reduce pollutants. For example, Pepsi Cola, a giant beverage manufacturer, makes use of advanced green technology to replace corrugated materials with reusable plastic containers to reduce environmental pollution [8]. Green innovation activities can establish a better social image for supply chain members and improve their core competitiveness by reducing their harm to the environment.

Compared with non-green products, green products are environmentally friendly but are also sold at a higher price. The reason is that, for a manufacturer, it will cost more to produce green products, which makes green products more expensive [9]. Intuitively, the manufacturer implements green innovation activities in the production of green products only when their benefits exceed the production costs. The key to the problem is whether consumers are willing to pay a sufficiently high premium to offset the additional costs. Fortunately, there is plenty of evidence showing that environmentally aware consumers prefer green products and are willing to pay a higher price for them than for non-green ones. StarKist tuna reports that consumers are willing to buy dolphin-safe tuna at a $0.21 higher price per can than the regular one [10].
A survey by the Bureau of Energy in Taiwan reveals that 50% of the respondents in nine developed countries prefer eco-labelled products, which are produced with the aim of supporting consumer decision-making for environmentally friendly products by providing transparency and enhancing trust in the environmental identities of products [11], and 24% of them would like to pay a premium price for these green products [12]. In this sense, consumers' stronger willingness to pay for green products will enhance the demand and then incentivize the manufacturer to adopt more green innovations.

With the fierce market competition and rapid growth of e-commerce business, to stay competitive and increase demand and profits, many traditional manufacturers have established online direct sale channels. For example, Samsung and LG sell their mobile phones through online stores as well as their offline retailers, which is known as a dual-channel supply chain [13]. Specifically, the direct channel often causes a conflict between the manufacturer and the retailer, which is referred to as manufacturer encroachment [14]. So far, there are two encroachment strategies: centralized encroachment and decentralized encroachment. Under centralized encroachment, the manufacturer centrally makes all decisions (e.g., pricing) for her subsidiary in the direct channel. In contrast, the manufacturer under decentralized encroachment will grant more decision-making power to her retail subsidiary. As an example, Sony Corporation allows its retail subsidiary, StylingLife Holdings (a holding company for Sony's group of retail businesses), to manage its own retail businesses independently [15]. Centralized encroachment usually leads to excessive vertical competition and inflexible trade between the manufacturer and the retailer [16,17]. Decentralized encroachment is expected to improve the dual-channel interactions and address these concerns. Arya et al. [18] show that decentralized encroachment for a manufacturer can soften the retail competition by setting a transfer price above marginal cost for her downstream subsidiary, which increases the wholesale price and eventually benefits the manufacturer. In this paper, we will focus on these two types of manufacturer encroachment strategies in the presence of green innovation.

An incentive has arisen to incorporate green innovation into the study of manufacturer encroachment, where a manufacturer makes green innovation efforts to produce green products and sell them in dual channels. In the mobile phone industry, Nokia uses materials with no toxic flame retardants to produce mobile phones and accessories and then sells them through both retailers and its own physical/online stores [19]. For green products, some key factors should be investigated, such as consumer green awareness and the manufacturer's green innovation efforts. As mentioned earlier, consumer green awareness will affect the demand and pricing decisions. The manufacturer makes green innovation efforts to improve the environmental performance and attract more customers, but this also incurs more manufacturing and design/production costs. Therefore, the manufacturer attempts to decide an optimal level of green innovation efforts to balance the gains from green-aware consumer demand and the losses from green innovation investment.
On the other hand, since the manufacturer’s encroachment usually causes the channel conflicts, the manufacturer also needs to determine the pricing through a deliberate trade-off between the profits in both channels. Moreover, to examine the effect of the channel conflicts between the manufacturer and the retailer, we will consider the product substitutability across the two channels. The primary goal of this paper is to study what encroachment strategy (centralization vs. decentralization) drives the green innovation and analyze the impact of consumer green awareness and product substitutability on the manufacturer’s encroachment strategy, green innovation efforts and supply chain performance. Based on extant literature, these issues are not yet fully addressed. To fill this important research gap, we build a Stackelberg game model in a two-echelon supply chain with a green manufacturer and a retailer. The manufacturer produces green products and directly sells them to end consumers through her subsidiary, which may be centralized or decentralized with the manufacturer. In these settings, the subsidiary will compete with the retailer selling substitutable green products. We contribute to the literature in the following three ways. First, in terms of modelling, we contribute to the dual-channel literature by incorporating green innovation, competition and consumer green awareness into game-theoretic models of manufacturer’s encroachment. The analytical results show that as consumer green awareness increases, the manufacturer is motivated to improve green innovation efforts, which in turn benefits both the manufacturer and the retailer. Nevertheless, the higher product substitutability will reduce the level of the manufacturer’s green innovation efforts. Accordingly, each firm’s profit will decrease unless the encroachment cost exceeds a threshold. Second, unlike most existing studies considering centralized encroachment only, this paper contributes to the GSCM literature by allowing the manufacturer to choose between centralized and decentralized encroachment. Third, our research of this paper shows how the manufacturer makes a choice between centralized and decentralized encroachments when making green innovation efforts in dual channels. Our main findings reveal that decentralized encroachment outperforms centralized encroachment in the level of green innovation efforts and both members’ profitability. Moreover, decentralized encroachment can also reduce the environmental damage and improve the social welfare. The structure of this paper is organized as follows. Section 2 reviews relevant literature. Section 3 describes the basic assumptions and supply chain model for a manufacturer using centralized and decentralized encroachment. In Section 4, we derive the equilibriums under centralized and decentralized encroachments. Section 5 compares these two strategies. In Section 6, numerical studies are performed to verify the main findings. Finally, Section 7 summarizes the paper and puts forward suggestions for operations management and future research. **2. Literature Review** There are three streams of literature related to our study: green supply chain management, dual-channel supply chain and manufacturer encroachment. These are briefly reviewed below. With the development of green supply chain management, many scholars have begun to study green supply chains from different perspectives. For example, Liu et al. 
[20] used game theory to study the influence of the green preference coefficient and competitiveness on the decision-making of supply chain members in the situations of no competition, manufacturer competition and manufacturer-retailer competition. Green et al. [21] discovered that green supply chain management could promote economic development, improve the ecological environment and enhance the competitiveness of the manufacturer to some extent. Ghosh and Shah [22] studied the influence of the consumer green preference coefficient on the decision-making of supply chain members in the two-level green supply chain under manufacturer-led, retailer-led and Nash equilibrium conditions. He et al. [23] explored the effects of consumer preference characteristics on the green innovation efforts of the food supply chain, and they found that the change in consumer preference characteristics is an important factor motivating supply chain members to make green innovation efforts. Liu et al. [24] pointed out that the market demand for green products is not only related to product price but also to consumers' low-carbon preference. Lee [25] suggested that the supply chain members must participate in green innovation activities at the same time to achieve a win-win scenario in the CLSC. However, the above literature shows that the studies on the green supply chain mainly concentrate on product pricing, green innovation and consumers' green preference. Moreover, the dual-channel setting is not involved in the above literature.

The existing dual-channel green supply chain research has mainly focused on pricing, green production issues and channel competition, rarely considering consumer green awareness. Cai [26] studied the channel selection and coordination of dual-channel supply chains and concluded that the operating costs of channels, the substitutability of channels and the overall profit of the supply chain will affect the channel selection decisions of suppliers and retailers. Heydari et al. [27] employed a Stackelberg game method to research the optimal pricing decision and coordination strategies for a green supply chain considering the introduction of an online channel. Li et al. [8] presented a model to analyze green production and the pricing of the members in decentralized and centralized decision scenarios employing the Stackelberg game method. The results demonstrated that when customers' loyalty to the retail channel and the green cost meet certain conditions, it is beneficial for both sides to develop the direct channel. Different from the above research, our work will focus on the analysis of consumers' green preference and products' substitutability in a dual-channel green supply chain under different manufacturer encroachment strategies.

Many researchers claimed that, due to the aggravation of market competition, encroachment will reduce the retailers' profit [28], but some other scholars disagreed with this; for example, by reducing double marginalization, both the manufacturers and the retailers will benefit [29,30]. In the situation of quantity competition, Arya et al. [18] pointed out that an appropriate transfer price set by the manufacturers can convey a less aggressive message to the retailers, which can effectively reduce the retailers' profit loss.
Yoon [13] pointed out that manufacturer encroachment does not always pose a threat to the retailers; when the manufacturers participate in retail, they will lower their costs and set lower wholesale prices for the retailer. However, manufacturer encroachment does cause channel conflict. When the demand information is asymmetric [31], the retailers can hardly benefit from the direct sales channel of the suppliers. Ha et al. [32] pointed out that when the quality is endogenous and the manufacturers have enough flexibility in adjusting the quality, the encroachment is unfavorable for the retailers. This paper is also closely related to the study of Arya et al. [18]. In their model, the retailer has an advantage in sales because the retailer is in more direct contact with consumers than the manufacturer. The manufacturer will choose a strategy without encroachment when the cost is too high. In addition, we study the decision-making of the green manufacturer under encroachment. We focus on which encroachment strategy drives green innovation and analyze the impacts of consumer green awareness and channel competitiveness on each party's profitability and total social welfare.

In summary, this paper is different from the previous literature in three aspects. First, we take the consumer green awareness and channel competitiveness into account in the dual-channel green supply chain model. Second, we analyze whether the decentralization strategy with transfer pricing can alleviate the retailer's revenue loss under the manufacturer's encroachment. The results show that decentralization with an appropriate transfer price in a dual-channel green supply chain can drive green innovation and benefit both the retailer and the manufacturer. Third, we analyze and compare the two encroachment strategies' decision-making, profits, environmental damage and social welfare.

**3. Model Formulation**

_3.1. Model Description_

We consider a dual-channel green supply chain consisting of a manufacturer (he/him) and a retailer (she/her). The manufacturer has one upstream (manufacturing and wholesale) subsidiary and a downstream (retail) subsidiary. The upstream manufacturing subsidiary makes green innovation (such as developing and using green technologies) and provides differentiated green products, namely, product d and product r, to the downstream subsidiary and the retailer, respectively. The two downstream parties then sell final products to end markets. They engage in a Cournot competition in the retail market. The manufacturer can choose between two encroachment strategies: centralized and decentralized encroachment. Under centralized encroachment, the manufacturer produces green products and sells them directly to the consumers in addition to wholesaling them to the retailer. Under decentralized encroachment, the manufacturer sells green products to its subsidiary at a transfer price, where the subsidiary can make its own sale decisions to maximize its profit. This paper focuses on the impact of consumer green awareness and product competition on the manufacturer's green innovation and the choice of encroachment strategies. To facilitate the exposition, the superscripts C and D stand for equilibria under centralized and decentralized encroachment. We use the subscripts r, d and m to represent the retailer, subsidiary, and manufacturer, respectively. The decision variables and parameters used in this paper are shown in Table 1.
**Table 1. Notations list.**

| Notation | Description |
|---|---|
| i | Index of firms: manufacturer (i = m), subsidiary (i = d) or retailer (i = r) |
| j | Index of centralized (j = C) or decentralized (j = D) encroachment |
| a | Market base (the intercept of the demand function) |
| λ | The coefficient of demand sensitivity to green innovation per unit product |
| c | The manufacturer's direct sales cost per unit product, 0 ≤ c < a |
| θ | Green innovation effort made by the manufacturer |
| w_r | Manufacturer's wholesale price for the retailer |
| w_d | Manufacturer's transfer price for her subsidiary |
| q_i | The product quantity of firm i = d, r, with q_i > 0 |
| p_i | The product's retail price of firm i = d, r, with p_i > w_i > 0 |
| Π_i | Profit function of firm i = m, d, r |

Under centralized encroachment, the manufacturer and the retailer compete in quantities in the market, both of which are rational and pursue the maximization of their interests. The game sequence of centralized encroachment is as follows: At stage 1, the manufacturer determines the wholesale price and the level of her green innovation efforts. At stage 2, the manufacturer and the retailer jointly choose their respective sales volumes. With decentralized encroachment, the manufacturer leaves the retail decision to its sales subsidiary, and the subsidiary and the retailer compete in quantities in the market to maximize their own interests. The game order of decentralized encroachment is as follows: At stage 1, the manufacturer determines the wholesale price of the retailer, the transfer price of the subsidiary and the level of green innovation efforts. Then, at stage 2, the subsidiary and the retailer decide on their respective sales quantities at the same time.

_3.2. Assumptions_

To establish the appropriate models, we make the following assumptions:

- The manufacturer and the retailer compete in the market in quantities, while environmentally aware consumers are willing to pay a higher price for products with higher green quality. To characterize these features, we follow similar widely adopted demand functions [8,33–35] to depict the retail prices for products d and r as follows:

  p_d = a − q_d − kq_r + λθ,  p_r = a − q_r − kq_d + λθ   (1)

  where a is the initial market potential, θ stands for the manufacturer's efforts in green innovation, p_i is the retail price per unit of green product i = d, r, and q_i is the order quantity of green product i = d, r. The cross-price sensitivity coefficient k is less than the own-price sensitivity coefficient, i.e., 0 ≤ k ≤ 1 [33,34]. In the extreme case, the value k = 0 reflects that the two competing products are completely independent. λ represents the green preference coefficient of the consumers. The larger it is, the higher the price that consumers are willing to pay for green products.
- Compared with non-green products, the manufacturer will invest more in green innovation, such as low-carbon storage technology, solar technology and new energy vehicle technology. This is often in the form of fixed costs. For tractability, we consider a second-order cost function h = uθ²/2 to represent the green innovation investment, where u > 0 is the cost coefficient of the investment. It can be seen that the investment cost of the green product is a convex function of green innovation efforts, that is, the cost of green input increases with the level of the green innovation effort, which is consistent with practical industry operations.
  This function is commonly used in the literature [35].
- In addition to the cost of green investment, the manufacturer will also incur the cost of producing products. To avoid the trivial, we assume the manufacturer's unit production cost is not related to product green innovation and set it to zero. We use c (0 ≤ c < a) to represent the manufacturer's unit selling cost, while the retailer's unit selling cost is normalized to zero, indicating that the retailer has a cost advantage in retailing. The retailer's sales advantage comes from a better understanding of customers' preferences and more direct contact with the customers [30].
- To make our paper realistic and without loss of generality, we assume the green investment cost is sufficiently high, i.e., u > 0 (see the Proofs of Lemmas 1 and 2).
- We assume that the manufacturers and retailers are completely rational, pursuing the maximization of their respective interests. In the research of many scholars, firms often seek to maximize their profits as their objectives [36].

**4. The Equilibrium Results**

_4.1. Centralized Encroachment_

Under centralized encroachment, the manufacturer first decides the retailer's wholesale price (w_r) and the level of green innovation efforts (θ), and then the manufacturer and the retailer choose their retail quantities (q_d and q_r) to maximize their own profits. Given this decision sequence, to ensure sub-game perfection, the game is solved using backward induction. Given the manufacturer's wholesale price and green innovation efforts, the manufacturer solves the following profit-maximization problem:

Max_{q_d} Π_m(q_d, q_r, θ, w_r) = w_r·q_r + (p_d − c)·q_d − (u/2)θ²   (2)

In Equation (2), w_r·q_r represents the manufacturer's wholesale profit and (p_d − c)·q_d denotes the manufacturer's retail profit. Similarly, the profit-maximization problem of the retailer can be given by

Max_{q_r} Π_r(q_d, q_r, w_r) = (p_r − w_r)·q_r   (3)

Solving Equations (2) and (3) simultaneously, the retail quantities of the manufacturer (q_d) and the retailer (q_r) can be obtained as follows:

q_d^C(θ, w_r) = [2(a − c) − ak + (2 − k)λθ + kw_r] / (4 − k²)   (4)

q_r^C(θ, w_r) = [2a − 2w_r − (a − c)k + (2 − k)λθ] / (4 − k²)   (5)

Intuitively, Equations (4) and (5) reveal that the direct selling cost depresses the quantities in the direct channel but stimulates the quantities in the wholesale channel. Thus, the existence of the direct selling cost helps to alleviate manufacturer encroachment to some extent. On the other hand, it implies that the retailer can still survive in the market despite the manufacturer reaching the end consumers through the direct sale channel. As a matter of fact, instead of replacing the retailer, the manufacturer seeks to maximize the total profits in the two channels.
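Reaction functions (4) and (5) follow mechanically from the two stage-2 first-order conditions and can be re-derived symbolically. The following sympy snippet is our own verification aid, not part of the original analysis:

```python
import sympy as sp

a, c, k, lam, th, wr, qd, qr = sp.symbols("a c k lambda theta w_r q_d q_r")

pd = a - qd - k*qr + lam*th   # demand system (1), direct channel
pr = a - qr - k*qd + lam*th   # demand system (1), retail channel

# Stage-2 first-order conditions: (2) in q_d (the -u*theta^2/2 term drops
# out of the derivative) and (3) in q_r.
foc_d = sp.diff(wr*qr + (pd - c)*qd, qd)
foc_r = sp.diff((pr - wr)*qr, qr)
sol = sp.solve([foc_d, foc_r], [qd, qr], dict=True)[0]

# Both differences simplify to zero, confirming Equations (4) and (5).
print(sp.simplify(sol[qd] - (2*(a - c) - a*k + (2 - k)*lam*th + k*wr)/(4 - k**2)))
print(sp.simplify(sol[qr] - (2*a - 2*wr - (a - c)*k + (2 - k)*lam*th)/(4 - k**2)))
```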
Anticipating Equations (4) and (5), the manufacturer decides the wholesale price (w_r) and green innovation efforts (θ) simultaneously to maximize the sum of retail and wholesale profits:

Max_{w_r, θ} Π_m(q_d^C(θ, w_r), q_r^C(θ, w_r), θ, w_r)   (6)

Substituting Equations (4) and (5) into Equation (6) and solving the first-order conditions reveals the manufacturer's equilibrium wholesale price and green innovation efforts. Then, we put them back to get the equilibrium results under centralized encroachment, which are summarized as follows.

**Lemma 1.** *Under centralized encroachment, when c < ⌢c = au(2 − k)(4 + k)/[(8 − k²)u − (2 − k)λ²], the equilibrium green innovation efforts, price, retail quantities and profits are the following:*

θ^C = λ[(2 − k)(6 − k)a − (8 − 4k + k²)c] / [2(8 − 3k²)u − (2 − k)(6 − k)λ²],

w_r^C = {[4(2 − k²)a + (a − c)k³]u − 2(2 − k)cλ²} / [2(8 − 3k²)u − (2 − k)(6 − k)λ²],

q_d^C = {[(2 − k)(4 + k)a − (8 − k²)c]u + (2 − k)cλ²} / [2(8 − 3k²)u − (2 − k)(6 − k)λ²],

q_r^C = {4[(1 − k)a + ck]u − 2cλ²} / [2(8 − 3k²)u − (2 − k)(6 − k)λ²],

Π_m^C = {[(2 − k)(6 − k)a² − 2(8 − 4k + k²)ac + (8 + k²)c²]u − 2c²λ²} / {2[2(8 − 3k²)u − (2 − k)(6 − k)λ²]}, and

Π_r^C = 4{2[(1 − k)a + ck]u − cλ²}² / [2(8 − 3k²)u − (2 − k)(6 − k)λ²]².

*Otherwise, when c ≥ ⌢c, the manufacturer will not use this encroachment strategy.*

Lemma 1 shows that whether the strategy of centralized encroachment can be used depends on the encroachment cost. Only when the encroachment cost is less than a threshold can the manufacturer encroach into the end market. Otherwise, she has to rely on the retailer to sell her green products.
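Closed forms like these are easy to mistype, so a quick numerical cross-check is worthwhile: pick an admissible parameter point (the values below are our own choice, satisfying c < ⌢c), maximize the stage-1 profit by brute force, and compare with Lemma 1.

```python
import numpy as np
from scipy.optimize import minimize

a, c, k, lam, u = 10.0, 1.0, 0.5, 1.0, 5.0  # illustrative, with c well below the threshold

def profit_m(x):
    wr, th = x
    D = 4 - k**2
    qd = (2*(a - c) - a*k + (2 - k)*lam*th + k*wr) / D  # reaction (4)
    qr = (2*a - 2*wr - (a - c)*k + (2 - k)*lam*th) / D  # reaction (5)
    pd = a - qd - k*qr + lam*th
    return wr*qr + (pd - c)*qd - u*th**2/2              # objective (2)

res = minimize(lambda x: -profit_m(x), x0=np.array([a/2, 0.0]))

den = 2*(8 - 3*k**2)*u - (2 - k)*(6 - k)*lam**2
th_C = lam*((2 - k)*(6 - k)*a - (8 - 4*k + k**2)*c) / den
wr_C = ((4*(2 - k**2)*a + (a - c)*k**3)*u - 2*(2 - k)*c*lam**2) / den
print(res.x, "vs", [wr_C, th_C])  # the numerical optimum matches the closed forms
```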
Next, we shall discuss the effect of consumer green awareness.

**Proposition 1.** *Under centralized encroachment, the equilibrium green innovation efforts (θ^C), wholesale price (w_r^C), sales quantities (q_i^C) and profits (Π_m^C and Π_r^C) all increase with the consumer green awareness.*

From Proposition 1, the greater the green preference of the consumers, i.e., the stronger the consumer environmental awareness, the more inclined they are to buy green products, and this stimulates the demand for products with higher greenness. Certainly, the manufacturer is incentivized to make more green innovation efforts. Then, the increasing green innovation efforts incur higher research and development costs, and thus the manufacturer will raise the wholesale prices for green products to maximize her profits, and the retailer will set higher retail prices accordingly. As a result, both the manufacturer and the retailer will benefit from increasing consumer green awareness.

**Proposition 2.** *Under centralized encroachment, the level of green innovation efforts (θ^C) decreases with the product substitutability (k).*

Proposition 2 suggests that as the competition between the manufacturer and the retailer increases, the manufacturer tends to reduce green innovation efforts. The reason is that increasing competition will force the manufacturer to reduce her prices. To maximize profits, the manufacturer will choose to reduce the green innovation investment to save on production costs. In other words, the costs of increasing green innovation are greater than the benefits, which thereby curbs the manufacturer's green innovation efforts.

**Proposition 3.** *Under centralized encroachment, as the product substitutability, k, increases:*
*(a) If c < c1, both players' profits will decrease;*
*(b) If c1 < c < c2, the manufacturer's profit will decrease while the retailer's profit will increase, where c1 and c2 are provided in Appendix A;*
*(c) If c2 < c < ⌢c, both players' profits will increase.*

It can be seen from Proposition 3 that as the product substitutability increases, the change in both players' profits critically depends on the unit encroachment cost. The threshold of the encroachment cost is affected by several factors, such as the consumer green awareness, the cost coefficient of green innovation efforts, the product substitutability, and market demand. From Proposition 3(a), when the manufacturer's encroachment is sufficiently cost-efficient (c < c1), both players suffer from increasing product substitutability. This is intuitive because the manufacturer easily falls into face-to-face competition with the retailer when the encroachment cost is low. From Proposition 3(b), when the encroachment cost is in an intermediate range (c1 < c < c2), increasing product substitutability hurts the manufacturer but benefits the retailer. The reason is that, compared with a low encroachment cost, the higher encroachment cost will curb the manufacturer's direct sales and soften the competition with the retailer, which boosts the demand in the wholesale market. As the market becomes more competitive, the retailer benefits more from the increasing wholesale demand. However, from the manufacturer's perspective, she still suffers from the increasing competition because the growing benefit from the wholesale market is not enough to cover the loss caused by the reduction of direct channel sales. Proposition 3(c) suggests that both players always prefer the higher product substitutability when the encroachment is highly costly. The reason, again, is that the manufacturer's encroachment in this case is so difficult that she turns to extend the wholesale market. This effect benefits both the manufacturer and the retailer, especially when the products between the two channels become more substitutable and more competitive.

_4.2. Decentralized Encroachment_

We now move on to study the decentralized encroachment. Under decentralization, the manufacturer first sets the wholesale price for the retailer, the transfer price for the subsidiary and the green innovation efforts. Then, the retailer and the subsidiary compete in the market to determine their respective retail quantities. We solve the game by backward induction. Given the manufacturer's wholesale and transfer prices, the retailer's retail decision under decentralized encroachment is the same as that in Equation (3) under centralized encroachment. The subsidiary solves the following profit-maximization problem:

Max_{q_d} Π_d(q_d, q_r, w_d) = (p_d − w_d − c)·q_d   (7)

By solving Equations (3) and (7) simultaneously, the retail quantities, q_d and q_r, of the subsidiary and the retailer can be given by

q_d^D(θ, w_r, w_d) = [2(a − c) − 2w_d − ak + kw_r + (2 − k)λθ] / (4 − k²),   (8)

q_r^D(θ, w_r, w_d) = [(2 − k)a − 2w_r + ck + kw_d + (2 − k)λθ] / (4 − k²).   (9)

As expected, the subsidiary's retail quantities decrease with the transfer price and the encroachment cost, while the retailer's retail quantities increase with these parameters. Compared with Equations (4) and (5) under centralized encroachment, in addition to the encroachment cost, the transfer price imposes a double-marginalization effect on the direct channel, which conveys a less aggressive posture to the retailer and thus may increase his market share. Given the responses in Equations (8) and (9), the manufacturer decides the wholesale price (w_r), transfer price (w_d) and green innovation efforts to maximize the sum of retail and wholesale profits:

Max_{w_r, w_d, θ} Π_m(q_d^D(θ, w_r, w_d), q_r^D(θ, w_r, w_d), θ, w_r, w_d).   (10)
(10) _wr,wd,θ_ [∏]m Substituting Equations (8) and (9) into Equation (10), the first-order conditions of Equation (10) are solved to obtain the optimal wholesale price, transfer price and green innovation efforts. Then, we substitute them back and derive the equilibrium results for decentralized encroachment, as indicated in the following lemma. **Lemma 2. Under decentralized encroachment, when c <** _⌣c =_ 2au(2−k) _, the equilibrium green_ 4u−λ[2] _innovation efforts, prices, quantities and profits are the following:_ _θ[D]_ = _λ[(3 −2k)a−(2−k)c ]_ 2u(2 −k[2]) −λ[2](3−2k) [,] _wr[D]_ = 22[2uau((22 − −kk[2][2])) − −λcλ[2][2](3(2−−2kk))] [,][ w]d[D] = 2[k2[u2u(2( −a−k[2]ak) −+ckλ[2])(−3c−λ2[2]k])] [,] _qd[D]_ = 2[22uu((22 − a−k[2]2)c −−akλ[2])+(3−cλ2[2]k)] [,] _qr[D]_ = 2[22uu(2( −a−kak[2])+ −ckλ)[2]−(3c−λ[2]2k)] [,] Πm[D] [=] 2u[(3 −4[22uk)(2a[2] −−k2[2]ac) −(2λ−[2]k()+3−22ck[2])]]−c[2]λ[2] _, and Πr[D]_ = 4[2[2uu(2( −a−kak[2])+ −ckλ)[2]−(3c−λ[2]2]k[2])][2][ .] _⌣_ _Otherwise, when c ≥_ _c, the manufacturer will not use the decentralized encroachment strategy._ Similar to the equilibriums in Lemma 1 under centralized encroachment, Lemma 2 clearly demonstrates that the manufacturer still chooses to encroach into the end market only when the direct selling cost is below a threshold. A difference lies in the threshold _⌢_ _⌣_ value shifting from _c to_ _c . We can verify:_ _⌣_ _⌢_ _auk(2_ _k)[2u(k + 2)_ 3λ[2]] _−_ _−_ _c_ _c =_ (11) _−_ _−_ (λ[2] 4u)[u(k[2] 8) + (2 _k)λ[2]]_ _[<][ 0.]_ _−_ _−_ _−_ This comparison implies that the manufacturer can use the centralized encroachment strategy in a greater range than the decentralized one. This is because the transfer price between subsidiaries under decentralization strengthens the double marginalization in direct channel, which in turn increases the difficulty of the manufacturer encroachment. ----- _Processes 2021, 9, 990_ 10 of 24 **Proposition 4. Under decentralized encroachment, the green innovation efforts (θ[D]), wholesale** _price (wr[D][), transfer price (][w]d[D][), retail quantities (][q]r[D]_ _[and][ q]d[D][) and profits (][Π]r[D]_ _[and][ Π]m[D][) all increase]_ _with the consumer green awareness._ Proposition 4 shows that consumer green awareness has a positive effect on the decisions under decentralized encroachment, which is consistent with the results of centralized encroachment. More specifically, as consumer green awareness increases, the manufacturer will make more green innovation efforts to produce greener products and charge higher prices for the retailer and the subsidiary. Moreover, both the retailer and the subsidiary can sell greener products in the end markets. Because of aforesaid reasons, both the manufacturer and the retailer end up earning more profits. We next discuss the effect of product substitutability and have the following propositions. **Proposition 5. Under decentralized encroachment, the green innovation efforts (θ[D]) decrease with** _the products substitutability (k)._ This proposition is similar to Proposition 2, indicating that the more intense the competition, the less is the green innovation effort made by the manufacturer. As such, increasing product competition will induce the manufacturer to reduce green innovation and produce products with lower greenness to save the costs. **Proposition 6. 
Under decentralized encroachment, as the product substitutability, k, increases:** _⌣_ _(a)_ _If c <_ _c, the manufacturer’s profit will decrease;_ _⌣_ _(b)_ _If c < c3, the retailer’s profit will decrease; if c3 < c <_ _c, it will increase, where c3 is_ _provided in Appendix A._ Proposition 6 exhibits that the effect of product substitutability on each player’s profitability is related with the encroachment cost. These results under decentralized encroachment are similar to those in Proposition 3 under centralized encroachment except for different thresholds. In particular, as the product substitutability increases, when the encroachment cost is small enough (c < c3), the manufacturer can make full use of the direct channel to initiate stronger competition with the retailer so that both parties are _⌣_ worse off. When the encroachment cost is in the middle range (c3 < c < _c ), it is not_ easier for the manufacturer to sell through direct channel, and she in turn relies more on the retailer to rake in more profit, which instead benefits the retailer. When the encroachment _⌢_ cost is sufficiently large (c3 < c < _c ), the direct channel is suppressed, but the wholesale_ channel is efficiently expended. In this case, the retailer achieves higher profitability from extending the wholesale market. It is worthy to note that the manufacturer is always hurt due to increasing competition. The reasons are as follows: First, for the case of low encroachment cost, when the product substitutability increases, as mentioned above, the manufacturer will rely more on the direct channel but limit the wholesale market. This channel conflict leads to the manufacturer’s profit loss. Second, as shown in Equation (11), the manufacturer’s decentralized encroachment needs to lower the threshold of the encroachment cost, which makes it impossible for the manufacturer to increase her profit by softening the channel competition. Interestingly, we find that the threshold of the retailer’s benefit under decentralized encroachment is lower than that under centralized encroachment, i.e., c3 < c1. It indicates that the retailer under decentralized encroachment can benefit from stronger competition in a more efficient way in comparison with centralized encroachment. The underlying reason is again the double marginalization caused by the transfer price under decentralized encroachment. This double-marginalization effect in direct channel acts as a part of encroachment costs and thus reduces the encroachment threshold values. ----- _Processes 2021, 9, 990_ 11 of 24 **5. Centralized vs. Decentralized Encroachment** With the equilibrium outcomes under centralized and decentralized encroachments on hand, we compare them and have the following proposition. **Proposition 7. (a) The transfer price under decentralized encroachment is set above marginal cost,** _i.e., wd[D]_ _[>][ 0;]_ _(b) The manufacturer’s retail quantity is lower under decentralized encroachment, i.e., qd[D]_ _< q[C]d_ _[,]_ _while the retailer’s retail quantity is higher under decentralized encroachment, i.e., qr[D]_ _[>][ q]r[C][;]_ _(c) The level of green innovation efforts is higher under decentralized encroachment, i.e., θ[D]_ _> θ[C];_ _(d) The wholesale price is higher under decentralized encroachment, i.e.,wr[D]_ _[>][ w]r[C][.]_ Proposition 7(a) demonstrates that the transfer price under decentralized encroachment is set above marginal cost. In fact, the manufacturer signals to the retailer that she is less aggressive in the retail competition. 
As reflected in Proposition 7(b), this posture is detrimental to the manufacturer’s retail market, but it enhances the demand in the wholesale realm. By convincing the retailer that the manufacturer will not take a more competitive strategy, decentralized encroachment induces the manufacturer to make more green innovation efforts in Proposition 7(c) and charge higher wholesale price in Proposition 7(d). Due to the greener products, environmentally aware consumers are willing to pay higher prices (we can check that pd[D] _[>][ p][C]d_ [and][ p]r[D] _[>][ p]r[C][). Although higher]_ pricing reduces the loss imposed in retail profits, there are two significant aforementioned benefits in the wholesale market: higher selling price and more retail quantities. Thus, a stronger boost in wholesale profit (wr[D][q]r[D] _[>][ w]r[C][q]r[C][) may provide a benefit for both players.]_ We have the following proposition. **Proposition 8. Under decentralized encroachment, both the manufacturer and the retailer benefit,** _i.e., Πm[D]_ _[>][ Π][C]m[,][ Π]r[D]_ _[>][ Π]r[C][.]_ Proposition 8 shows that the strategy of decentralized encroachment is beneficial to both the manufacturer and the retailer. From Proposition 7, the manufacturer’s wholesale quantity increases while her retail quantity decreases under decentralized encroachment. The resulting total profit increases, indicating that the manufacturer’s profit in the wholesale channel is enough to cover the loss of the retail channel. Therefore, when producing green products, the manufacturer prefers to adopt the strategy of decentralized encroachment. In addition to evaluate the economic goals, we further investigate the societal and environmental performance. In this paper, the social welfare is considered in the comparisons of centralized and decentralized encroachments. Typically, consumer surplus, known as net income of consumers, refers to the difference between the willingness of all consumers to pay for a certain number of products and the actual total price paid. Consumer surplus measures the extra benefits that consumers believe they have gained [37]. Based on the above explanation, firms (the manufacturer and the retailer) in supply chain seek economic benefits and provide green products to increase consumer surplus. However, their operations also produce emissions and pollutions, which are detrimental to the environment. To reflect these effects and consistent with previous literature [37–39], social welfare consists of total profits of all supply chain parties, consumer surplus and environmental damage impact. Under the encroachment structure j = C, D, each element is calculated as follows. (1) Supply chain profit SC[j]. The supply chain profit in this study is equal to the total profits of the two stakeholders, i.e., the manufacturer and the retailer. It is 2 _SC[j]_ = Πm[j] [+][ Π]r[j] [= (][a][ −] _[q]d[j]_ _[−]_ _[kq]r[j]_ [+][ λθ] _[j][ −]_ _[c][)][q]d[j]_ [+ (][a][ −] _[q]r[j]_ _[−]_ _[kq]d[j]_ [+][ λθ] _[j][)][q]r[j]_ _[−]_ _[u][(][θ]2[j][)]_ . (12) ----- _Processes 2021, 9, 990_ 12 of 24 (2) Consumer Surplus CS[j]. Consumers buy green products and enjoy the surpluses, which equal to the price increment between the maximum acceptable price and the actual price. It can be given by _CS[j]_ = [1]2 [(][q]d[j] [+][ q]r[j] [)]2. (13) (3) Environmental damage ED[j]. From Proposition 7(c), decentralized encroachment provides the higher level of green innovation efforts than centralized one. 
To avoid trivial results and similar to [5], we take the green innovation efforts (θ[D]) under decentralized encroachment as a benchmark, where the environmental damage is normalized to zero (ED[D] = 0) relative to that under centralized encroachment. We use the parameter d to denote the environmental damage cost of per unit product reduction of green innovation efforts made by the manufacturer, which measures the level of environmental impact because of greenness decline. A larger d shows a higher degree of the manufacturer’s production damage to the environment. The environmental damage under centralized encroachment can be formulated by _ED[C]_ = [1] _d_ [+][ q]r[C][)]2. (14) 2 _[d][(][θ][D][ −]_ _[θ][C][)(][q][C]_ We use a quadratic damage function to characterize decreasing marginal returns due to the fact that additional production will produce more environmental pollution. This treatment is also widely used in the extant literature [37–40]. To sum up, social welfare under each encroachment strategy j = C, D can be given as follows. _SW_ _[j]_ = SC[j] + CS[j] _ED[j]._ (15) _−_ By comparing the social welfare under centralized and decentralized encroachments, we have the following proposition. **Proposition 9. Compared with centralized encroachment:** _(a) Environmental damage under decentralized encroachment is lower, i.e., ED[D]_ _< ED[C];_ _(b) If d ≤_ _d[∗], the social welfare under decentralized encroachment is lower, i.e., SW_ _[D]_ _≤_ _SW[C]; if_ _d > d[∗], the social welfare under decentralized encroachment is higher, i.e., SW_ _[D]_ _> SW[C], where d[∗]_ _is provided in Appendix A._ Proposition 9(a) indicates that the environmental damage under decentralized encroachment is lower than that under centralized encroachment. This thanks to the higher level of green innovation efforts led by decentralized encroachment. Proposition 9(b) shows that when the cost coefficient of environmental damage is sufficiently large, decentralized encroachment has the potential to prominently reduce the environmental damage and thereby leads to higher total social welfare. Therefore, from the perspective of environmental and societal performances, the government can make policies, such as reward and punishment mechanisms, to incentive the manufacturers to adopt decentralized encroachment to produce greener products. **6. Numerical Studies** In this section, we use some numerical examples to discuss the impacts of some key factors on the member’s profit improvement and social welfare due to centralized vs. decentralized encroachment. These key factors include consumer green awareness (λ), product substitutability (k) and cost factor of environment damage (d). Similar to the extant literature [7,20,41,42], in order to simplify the calculation and make the value in the corresponding interval, we set the parameter values as follows: a = 10, µ = 4, c = 2. We can verify that the setting of these parameter values meets the basic assumptions of our model, such as positive demands and profits. ----- _Processes 2021, 9, 990_ 13 of 24 _6.1. Effect of Consumer Green Awareness (λ) on Profits_ We first analyze the effect of consumer green awareness on profits. To this end, we set _k = 0.8 and vary λ from 0–1. For ease of expressions, we denote firm i’s profit improvement_ between decentralized and centralized encroachments by ∆Πi = Πi[D] _i_ [,][ i][ =][ m][,][ r][. 
In] _[−]_ [Π][C] this setting, we illustrate the effect of consumer green awareness on each player’s profit improvement in Figure 1 where (a) and (b) show the effect of consumer green awareness on the manufacturer’s and the retailer’s profit, respectively. **Figure 1. The effect of λ on each party’s profit improvement.** From Figure 1, we can see that as λ increases, the profits of both the manufacturer and the retailer under either centralized or decentralized encroachment will increase. These results verify the rightness of Propositions 1 and 4. More importantly, the profit improvements of both the manufacturer and the retailer due to decentralized encroachment increase with λ. In other words, both supply chain members can benefit more from decentralized encroachment when consumer green awareness grows. Therefore, firms are incentivized to adopt the preferred decentralization strategy under the environment of advocating green consumptions. _6.2. Effect of Product Substitutability (k) on Profits_ To examine the effects of the product substitutability (k), we choose c = 0.1 and 5 to represent the low and high levels of the encroachment cost. Let λ = 1 and fix the other parameters as before. Figures 2 and 3 plot the results, where (a) and (b) show the results when c = 0.1 and c = 5, respectivelyFirst, for the case of low encroachment cost (c = 0.1), we observe that the profits of both the manufacturer and the retailer under each regime decrease with k. These results are consistent with Propositions 3 and 6. For high encroachment cost (c = 5), the manufacturer’s profit under each regime also decreases with _⌣_ _k. The reason is_ _c < c2, indicating that when the encroachment cost is sufficiently high,_ the manufacturer will not choose to use the direct channel. To ensure the manufacturer’s _⌣_ encroachment, we still set the high encroachment cost lower than the threshold, i.e., _c > 5._ Figure 3b shows that the retailer’s profit under decentralized encroachment will increase as k becomes sufficiently large. This implies that the encroachment cost is very high such that c > c3. These results are also in line with Proposition 6. ----- _Processes 2021, 9, 990_ 14 of 24 **Figure 2. The effect of k on the manufacturer’s profit improvement.** **Figure 3. The effect of k on the retailer’s profit improvement.** Second, for the case of low encroachment cost (c = 0.1), it can be seen that as k increases, both players’ profit improvements due to decentralized encroachment first increase and then decrease. To explain this, note that the manufacturer can efficiently use her direct channel in this setting. The benefit of decentralized encroachment lies in allowing the manufacturer to convey reduced competitiveness to increase the wholesale demand. When k = 0, the two channels are independent, and thus, there is no difference between these two strategies. As k increases, the competition becomes fiercer, and thus, the decentralized encroachment can reap the wholesale profit more. However, if k is very large, the manufacturer under decentralized encroachment will reduce the transfer price to deal with the stiff competition, which makes decentralized encroachment close to the centralized one. Specifically, in the extreme case of k = 1, strong competition will induce the manufacturer to exclude the retailer, which leads to equal profits between these two strategies. 
Third, when the encroachment cost is high (c = 5), it is not easy for the manufacturer to encroach into the retail market, and thus, the manufacturer will rely more on the wholesale market. Figures 2b and 3b show that both players’ profit improvements due to decentralized encroachment increase with k. This means that due to the role of softened competition played by decentralized encroachment, when the retailer benefits from the increasing wholesale market, the manufacturer can also benefit from increasing dependence on it for the case of high encroachment cost. In summary, we observe that each member’s profit improvement due to decentralized encroachment is always positive. These results verify that Proposition 8 holds as well. ----- _Processes 2021, 9, 990_ 15 of 24 _6.3. Effect of λ, k and d on Social Welfares_ We next compare the social welfares between centralized and decentralized encroachments and study how the parameters affect their difference. For convenience, we define ∆SW = SW _[D]_ _SW[C]. Figure 4 is drawn to show these results, where (a) shows the effect_ _−_ of λ when k = 0.8 and d = 10, (b) shows the effect of k when λ = 2 and d = 10, and (c) illustrates the effect of d when λ = 2 and k = 0.8. First, as λ increases, social welfare under either centralized or decentralized encroachment also increases. This implies that higher consumer green awareness not only benefits the firms in the supply chain but only benefits both the environment and the society. Moreover, the higher the consumer green awareness, the more the social welfare improvement due to decentralized encroachment. Therefore, in addition to promoting green consumption, policy makers also need to encourage the manufacturer to adopt decentralized direct channels in sales. **Figure 4. The effect of λ, k and d on social welfare’s improvement.** Second, Figure 4b shows increasing product substitutability (k) has a negative effect on social welfare under each regime. This is because channel competition reduces both supply chain members’ profits as indicated in Figures 2 and 3. However, it is interesting to find that as k increases, the social welfare’s improvement under decentralized encroachment first increases and then decreases. We have the following reasons. When the encroachment cost is not high (c = 2), as reflected in Figures 2 and 3, the profit improvement of both players also first increases and then decreases with k. For k sufficiently large, the level of green innovation efforts under decentralized encroachment will decrease and closely reach to that under centralized encroachment. Moreover, increasing competition under centralized encroachment is helpful to restore the demand, which in turn leads to more consumer surplus. That is why the social welfare’s improvement will decrease as k is large ----- _Processes 2021, 9, 990_ 16 of 24 enough. At the extremes, when k is close to 1, centralized encroachment is preferred in social welfare. Finally, the environmental damage coefficient has a positive effect on social welfare’s improvement in a linear way. Consistent with Proposition 9, when d exceeds a threshold, the manufacturer under decentralized encroachment will make more green innovation efforts to increase the product greenness, and subsequently reduce the harm to the environment. As a result, the total social welfare under decentralized encroachment will also increase accordingly. **7. 
Conclusions** The rapid development of green supply chain management and e-commerce enables a large number of supply chain enterprises to sell green products. In addition, many traditional manufacturers have established online direct sale channels to increase demand and profits. In this dual-channel supply chain, the upstream manufacturer’s direct sale channel encroaches into the retailer’s markets, which inevitably affects the downstream retailer’s and consumers’ decision-making. As a result, manufacturer encroachment may reduce the retailer’s market and hurt the retailer [32,43,44]. Because of induced lower wholesale price and increased demand, the encroachment may also benefit the manufacturer and the retailer [18,45,46]. So far, the manufacturer has two types of encroachment strategies, namely, centralized and decentralized encroachments. Prior studies commonly pay more attention on centralized encroachment, where the manufacturer makes centralized retail decisions on behalf of his subsidiary. In contrast, under decentralized encroachment, the manufacturer charges a transfer price to his subsidiary and permits the subsidiary to make its own retail decisions. Despite decentralized encroachment being widely adopted in business reality, the related studies on it are still limited. We make a major contribution to studying what encroachment strategy drives green innovation and analyze the effects of consumer green awareness and product substitutability on the manufacturer’s choice between centralized and decentralized encroachments. Our analysis and main findings are summarized as follows: (1) Under each encroachment strategy, increasing consumer green awareness incentives the manufacturer to put in more efforts in green innovation. This also benefits both the manufacturer and the retailer. (2) Under each encroachment strategy, as the channel competition between the manufacturer and the retailer intensifies, the manufacturer will reduce her green innovation efforts. (3) Under centralized encroachment, when the encroachment cost is relatively low, higher product substitutability will hurt both the manufacturer and the retailer. In contrast, when the encroachment cost is high, both parties can benefit from the increasing product substitutability. (4) The effect of product substitutability on profits under decentralized encroachment has a similar pattern to that under centralized encroachment. A difference is that due to the less threshold value of encroachment cost, the manufacturer under decentralization is always worse off as the product substitutability increases. (5) Decentralization is more efficient to drive the manufacturer’s green innovation than centralization. Moreover, decentralization benefits both the manufacturer and the retailer in profitability. (6) Because of a higher level of green innovation efforts, decentralization reduces the environmental damage in comparison with centralization. When the environment damage cost is sufficiently high, decentralization is also preferred in social welfare. According to the above results, we can provide meaningful managerial insights and policy suggestions for firms and policymakers. In the context of supply chain sustainability and profitability, when the firms participate in green innovation, decentralization is a promising alternative to conventional centralization strategy. 
Although centralization is proved to be perfect for eliminating double marginalization [12,23,47], our study demonstrates that decentralization with an appropriate transfer price above marginal cost can convey softened retail competition between the manufacturer and the retailer, which best balances the profitability from the wholesale and retail markets. This leads to higher green innovation and consequently benefits both parties. Therefore, the government can devise ----- _Processes 2021, 9, 990_ 17 of 24 various policies, such as monetary subsidies and/or taxations for environmentally friendly activities [38,48–50], to inspire the adoption of decentralization among supply chain firms in green innovations. We acknowledge that our models have some limitations. Therefore, the following studies may need to be further explored in the future. First, the competition between green products and ordinary products can be considered in the multi-channel supply chains. Secondly, research under asymmetric information can be incorporated in the model. Third, contracts/policies can be considered, such as government subsidies and revenue sharing. **Author Contributions: Conceptualization and methodology, J.L. and G.L.; formal analysis, D.C. and** R.M.; writing—original draft preparation, J.L., G.L. and R.M.; writing—review and editing, D.C., J.L. and G.L.; supervision, D.C., J.L. and R.M. All authors have read and agreed to the published version of the manuscript. **Funding: This research received no external funding.** **Institutional Review Board Statement: Not applicable.** **Informed Consent Statement: Not applicable.** **Data Availability Statement: The data used to support the findings of this study are available** upon request. **Acknowledgments: This project was co-sponsored by the National Social Science Foundation of** China (19BGL194), the Zhejiang Provincial Natural Science Foundation of China (LY20G020006 and LQ19G030007) and the Zhejiang Gongshang University on-line and off-line Hybrid Teaching Reform Project (1010XJ2919103). **Conflicts of Interest: The authors declare no conflict of interest.** **Appendix A. Proofs** **Proof of Lemma 1. Under centralized encroachment, the game is solved by backward** induction. At the second stage, the second derivatives of the manufacturer and the retailer are given by _∂_ ∏[2]m = _[∂]_ [∏][2]r = 2 < 0. _−_ _∂[2]qd_ _∂[2]qr_ Hence, ∏m and ∏r are concave in qd and qr, respectively. Then, the first-order conditions associated with Equations (2) and (3) jointly yield unique equilibrium quantities as a function of the wholesale price and product green degree, as those in Equations (4) and (5). At the first stage, the Hessian matrix of the manufacturer’s profit function in Equation (6) is given as follows: 6k[2]−16 _λ(k[3]−4k[2]+8)_ (k[2]−4)[2] (k[2]−4)[2] _λ(k[2]−2k−4)_ _u(4k+8−2k[2]−k[3])+λ[2](2k−4)_ (k−2)(k+2)[2] (k−2)(k+2)[2]  .  _H =_   The leading principal minors are the following: _| H1| =_ (6kk[2][2]−−416)[2][ and][ |][ H][2][|][ =][ −] _[λ][2][(][k][2][−][8][k]([+]k[2][12]−[)+]4)[2][2][u][(][3][k][2][−][8][)]_ . The Hessian matrix is negative definite if | H1| < 0 and | H2| > 0. In order to satisfy the two aforementioned conditions, the equation u > [3]4 _[λ][2][ must be ensured. Therefore,]_ the condition u > 34 _[λ][2][ is considered to maintain the concavity of the manufacturer’s]_ profit function. By solving the first-order derivatives of _[∂]∂[Π]w[m]r_ = 0 and _[∂][Π]∂θ[m]_ = 0, we obtain the unique optimal wr[C] [and][ θ][C][. 
Substituting them into Equations (4) and (5), we have the quantities of] _q[C]d_ [and][ q]r[C][.] ----- _Processes 2021, 9, 990_ 18 of 24 _q[C]d_ [=] [(2−k2)((84−+3kk)[2]a)−u−(8(−2−k[2]k))(c]u6−+(k2)λ−[2]k)cλ[2], qr[C] [=] 2(84−[( 13k−[2]k)u)a−+(ck2−)]ku)(−62−cλk)[2]λ[2][ .] We define: _⌢_ _au(2_ _k)(4 + k)_ _−_ _c =_ (8 _k[2])u_ (2 _k)λ[2][ .]_ _−_ _−_ _−_ _⌢_ Note that q[C]d [(] _c ) = 0 and_ _∂q[C]d_ (8 − _k[2])u −_ (2 − _k)λ[2]_ = _−_ _∂c_ 2(8 3k[2])u (2 _k)(6_ _k)λ[2][ <][ 0.]_ _−_ _−_ _−_ _−_ _⌢_ _⌢_ Hence, we know that q[C]d _[>][ 0 for][ c][ <]_ _c . Similarly, we can verify that when c <_ _c,_ demand and profits in equilibrium are positive, which is the necessary conditions for Nash equilibrium. Substituting them back, we can obtain all the equilibrium outcomes as _⌢_ indicated in Lemma 1, for c < _c ._ _⌢_ When c ≥ _c, the direct selling cost is so high that the manufacturer chooses to close_ her direct channel. In this case, the manufacturer just relies on the retailer to sell her products and does not use the encroachment strategy. This completes the proof. □ **Proof of Proposition 1. The first order conditions of q[C]d** [,][ q]r[C][,][ w]r[C][,][ θ][C][,][ Π][C]m[,][ Π]r[C] [with respect] to λ are respectively shown as follows: _dq[C]d_ �k[2]+2k − 8)(8 _c −_ 12a+8ak − 4ck − _ak[2]_ + ck[2][�] = [2][λ][u], _dλ_ [(2 _k)(6_ _k)λ[2]_ 2u(8 3k[2])][2] _−_ _−_ _−_ _−_ _dqr[C]_ �k − 1)(8 _c −_ 12a+8ak − 4ck − _ak[2]_ + ck[2][�] = [8][λ][u], _dλ_ [(2 _k)(6_ _k)λ[2]_ 2u(8 3k[2])][2] _−_ _−_ _−_ _−_ _dwr[C]_ = [2][λ][u][(][4] _[k][2][ −]_ _[k][3][ −]_ [8][)(][8] _[c][ −]_ [12][a][+][8][ak][ −] [4][ck][ −] _[ak][2][ +][ ck][2][�]_, _dλ_ [(2 _k)(6_ _k)λ[2]_ 2u(8 3k[2])][2] _−_ _−_ _−_ _−_ _dθ[C]_ =, _[−][(][8]_ _[c][ −]_ [12][a][+][8][ak][ −] [4][ck][ −] _[ak][2][ +][ ck][2][)[][k][2][λ][2][ +][ u][(][16][ −]_ [6][k][2][)+][λ][2][(][12][ −] [8][k][)]] _dλ_ [(2 _k)(6_ _k)λ[2]_ 2u(8 3k[2])][2] _−_ _−_ _−_ _−_ _dΠ[C]m_ = [2][λ][u] [(][8] _[c][ −]_ [12][a][+][8][ak][ −] [4][ck][ −] _[ak][2][ +][ ck][2][�][2]_, _dλ_ [(2 _k)(6_ _k)λ[2]_ 2u(8 3k[2])][2] _−_ _−_ _−_ _−_ _dΠr[C]_ �a − _ak + ck)](8_ _c −_ 12a+8ak − 4ck − _ak[2]_ + ck[2][�] = [32][λ][u][(][k][ −] [1][)[][c][λ][2][ −] [2][u] . _dλ_ [(2 _k)(6_ _k)λ[2]_ 2u(8 3k[2])][3] _−_ _−_ _−_ _−_ As 0 < k < 1 and u > [3][λ]4 [2] [, the following conditions are satisfied:] 8c 12a+8ak 4ck _ak[2]_ + ck[2] _< 0,_ _cλ[2]_ 2u(a _ak + ck) < 0_, and _−_ _−_ _−_ _−_ _−_ (2 _k)(6_ _k)λ[2]_ 2u(8 3k[2]) < 0. _−_ _−_ _−_ _−_ _dq[C]d_ _m_ _r_ Hence, we have _dλ_ _[>][ 0,][ dq]dλ[Cr]_ _[>][ 0,][ dw]dλ[Cr]_ _[>][ 0,][ d]d[θ]λ[C]_ _[>][ 0,][ d]d[Π]λ[C]_ _> 0 and_ _[d]d[Π]λ[C]_ _> 0. This_ completes the proof. □ **Proof of Proposition 2. The first order derivative of θ[C]** with respect to k are shown as follows: _dθ[C]_ = 4λH1 _dk_ [(2 _k)(6_ _k)λ[2]_ 2u(8 3k[2])][2][,] _−_ _−_ _−_ _−_ where H1 = 16cu − 32αu+4cλ[2]+6ck[2]u − _ck[2]λ[2]+44αku −_ 32cku+2ckλ[2] _−_ 12αk[2]u. The first and second order derivative of H1 with respect to k are shown as follows: ----- _Processes 2021, 9, 990_ 19 of 24 _dH1_ = 4u(11 _a_ 8c 6ak + 3ck) + 2cλ[2](1 _k),_ _−_ _−_ _−_ _dk_ _d[2]_ _H1_ = [2cλ[2] + 12u(2 _a_ _c)] < 0._ _−_ _−_ _dk[2]_ So, as k increases, the first order derivative of H1 will decrease. When k = 1, _dHdk1_ = 20u(a − _c) > 0. Hence,_ _[dH]dk[1]_ _[>][ 0, which means that as][ k][ increase,][ H][1][ will increase.]_ Specifically, when k = 1, H1 = 5c(λ[2] _−_ 2cu)< 0. Therefore, we conclude that H1 < 0 and _[d]dk[θ][C]_ _< 0. This completes the proof. 
□_ **Proof of Proposition 3. The first order derivative of Πr[C]** [with respect to][ k][ are shown] as follows: _dΠr[C]_ _−16[cλ[2]_ _−_ 2u(a − _αk + ck)]H2_ = _dk_ [(2 _k)(6_ _k)λ[2]_ 2u(8 3k[2])][3][,] _−_ _−_ _−_ _−_ where H2 = 16au[2] _−_ 4cλ[4] _−_ 16cu[2] _−_ 4aλ[2]u+12cλ[2]u+6ak[2]u[2] _−_ 6ck[2]u[2] + ckλ[4] _−_ 12aku[2] _−_ 2akλ[2]u +6ckλ[2]u + ak[2]λ[2]u _ck[2]λ[2]u._ _−_ We can see that (2 − _k)(6 −_ _k)λ[2]_ _−_ 2u(8 − 3k[2]) < 0 and cλ[2] _−_ 2u(a − _αk + ck�_ _< 0._ _r_ So, the sign of _[d][Π][C]_ _dk_ [is opposite to that of][ H][2][.] The first-order condition of H2 with respect to c is: _dH2_ = 2(8 + 3k[2]) u[2] + (12 + 6k _k[2])u_ (4 _k)λ[4]_ _< 0._ _−_ _−_ _−_ _−_ _dc_ It means H2 decreases with c. We define _au[2(3 k[2]_ 6k + 8�u (4 + 2k _k[2])λ[2]]_ _−_ _−_ _−_ _c1 =_ 2(3k[2] + 8)u[2] (12 + 6k _k[2])λ[2]u+(4_ _k)λ[4][ .]_ _−_ _−_ _−_ When c = c1, H2 = 0. _⌢_ Comparing _c and c1, we have_ _⌢_ _c −_ _c1 =_ � 2au[4k(3 _k2 −_ 8)u2+2(k3 + k2λ2 − 12k2+12k − 8λ2 +16)λ2u � (k[2] + 2kλ[2] 10k 4λ[2] + 16)λ[4]] _−_ _−_ _−_ [u(k[2] 8) + (2 _k)λ[2]][2(3_ _k[2]_ +8)u[2]+(k[2] 6k 12)λ[2]u+(4 _k)λ[4]]_ _[>][ 0.]_ _−_ _−_ _−_ _−_ _−_ _⌢_ It follows that if c < c1, H2 > 0; if c1 < c < _c, H3 < 0. Hence, we know if c < c1,_ _dΠr[C]_ _⌢_ Πr[C] _dk_ _< 0; if c1 < c <_ _c, ddk_ _[>][ 0.]_ The first-order derivative of Π[C]m [with respect to][ k][ are shown as follows:] _dΠ[C]m_ = _[−][2][[][c][λ][2][ −]_ [2][u][(][a][ −] _[ak][ +][ ck][)]][H][3]_ _dk_ 16[2u(k[2] 2)+λ[2](3 2k)] [2][,] _−_ _−_ where H3 = 16cu − 16au − 4cλ[2]+6aku + ckλ[2]. We can see that the sign of _[d][Π][C]m_ 2au(8−3k) _dk_ [depends on that of][ H][3][. When][ c][ =][ c][2][ =] 16u+(k−4)λ[2][,] _⌢_ _H3 = 0. Next, we compare_ _c and c2 in the following:_ _⌢_ _auk[2(3k[2]_ 8)u + (k[2] 8k + 12)λ[2]] _−_ _−_ _c −_ _c2 =_ (16u + kλ[2] 4λ[2])[(k[2] 8)u + (2 _k)λ[2]]_ _[>][ 0.]_ _−_ _−_ _−_ Moreover, _[dH]dc[3]_ = 16u − _λ[2](4 −_ _k) > 0. This indicates that H3 increases with c. When_ _⌢_ Π[C]m _c < c2, H3 < 0; when c2 < c <_ _c, H3 > 0. Hence, when c < c2, ddk_ _< 0; when_ _⌢_ Π[C]m _c2 < c <_ _c, ddk_ _> 0._ ----- _Processes 2021, 9, 990_ 20 of 24 Now, comparing c1 and c2, we have _au[6ku_ (4 _k)λ[2]][2(3_ _k[2]_ 8)u+(k[2] 8k + 12�λ[2]] _−_ _−_ _−_ _−_ _c1_ _c2 =_ _−_ [16u + (k 4)λ[2]][2(3 _k[2]_ +8)u[2]+(k[2] 6k 12)λ[2]u+(4 _k)λ[4]]_ _[<][ 0.]_ _−_ _−_ _−_ _−_ _⌢_ This follows that c1 < c2 < _c ._ _m_ _r_ In summary, based on above results, we have: if c < c1, _[d][Π]dk[C]_ _< 0 and_ _[d]dk[Π][C]_ _< 0;_ _m_ _r_ _⌢_ Π[C]m _r_ if c1 < c < c2, _[d][Π]dk[C]_ _< 0 and_ _[d]dk[Π][C]_ _> 0; if c2 < c <_ _c, ddk_ _> 0 and_ _[d]dk[Π][C]_ _> 0. This_ completes the proof. □ **Proof of Lemma 2. With decentralized encroachment, we again use backward induction to** solve the game. At the second stage, the second conditions of the profit functions are _∂_ ∏[2]m = _[∂]_ [∏][2]r = 2 < 0. _−_ _∂[2]qd_ _∂[2]qr_ Therefore, ∏m and ∏r are concave in qd and qr, respectively, which guarantees the uniqueness of the optimal retail quantities as shown in Equations (8) and (9). Then, at the first stage, the Hessian matrix obtained from the manufacturer’s profit function is calculated as follows: 2(3k[2]−8) 2k(2−k[2]) _λ(k[2]−2k−4)_ (k[2]−4)[2] (k[2]−4)[2] (k−2)(k+2)[2] 2k(2−k[2]) 4(k[2]−2) _k[2]λ_ (k[2]−4)[2] (k[2]−4)[2] (k−2)(k+2)[2] _λ(k[2]−2k−4)_ _k[2]λ_ _u(−k[2]−4k−4)+2λ[2]_ (k−2)(k+2)[2] (k−2)(k+2)[2] (k+2)[2]   . 
   _H =_       The leading principal minors are _| H1| =_ 2((k3[2]k−[2]−4)8[2])[,] _|H2| =_ 4((k2[2] −−4k)[2][2]) and _|H3|_ = 2[2 u((kk[2]−−22)()+k+λ[2]2()3[2]−2k)] . The Hessian matrix is negative definite if | H1| _<_ 0, _| H2| > 0, |H3|_ _<_ 0. In order to meet these mentioned conditions, the equation 2[2 _u(k[2]_ 2) + λ[2](3 2k)] < 0 must be established. Therefore, the condition _−_ _−_ 2[2 _u(k[2]_ 2) + λ[2](3 2k)] < 0 is considered to maintain the concavity of the profit func_−_ _−_ tion for manufacturer. Since u > [3][λ]4 [2] [,][ ∏][m][ is jointly concave in][ w][r][ and][ w][d][ and][ θ][. Solving] the first order derivatives _[∂]∂[Π]w[m]r_ = 0, _[∂]∂[Π]w[m]d_ = 0, _[∂][Π]∂θ[m]_ = 0, we derive the corresponding equilibrium prices and product green degree as follows. _wr[D]_ = 22[2uau((22 − −kk[2][2])) − −λcλ[2][2](3(2−−2kk))] [,][ w]d[D] = 2[k2[u2u(2( −a−k[2]ak) −+ckλ[2])(−3c−λ2[2]k])] [,] _θ[D]_ = _λ[(3 −2k)a−(2−k)c ]_ 2u(2 −k[2]) −λ[2](3−2k) [.] Substituting them into Equations (8) and (9), the equilibrium quantities are given by: 2u(2 _a_ 2c _ak) + cλ[2]_ 2u(a _ak + ck)_ _cλ[2]_ _−_ _−_ _−_ _−_ _qd[D]_ = 2[2u(2 _k[2])_ _λ[2](3_ 2k)] [,][ q]r[D] = 2[2u(2 _k[2])_ _λ[2](3_ 2k)] [.] _−_ _−_ _−_ _−_ _−_ _−_ We define a threshold _⌣c so that qdD[(][c][ =]_ _⌣c ) = 0, where_ _⌣c =_ 2au4u(−2−λ[2]k[ .]) Note that _[∂]∂[q]cd[D]_ = − 2[2u(2−k4[2]u)−−λλ[2][2](3−2k)] _[<][ 0.]_ _⌣_ _⌣_ It follows that qd[D] _[>][ 0 for][ c][ <]_ _c . In a similar way, we can prove that when c <_ _c,_ demand and profits in equilibrium are positive, which is the necessary conditions for Nash _⌣_ equilibrium. Using substitution, for c < _c, the equilibrium outcomes are derived as_ shown in Lemma 2. ----- _Processes 2021, 9, 990_ 21 of 24 _⌣_ When c ≥ _c, the manufacturer will close the direct channel because of very high_ encroachment cost. In other words, the manufacturer will turn to the single wholesale channel and not use decentralized encroachment strategy. This completes the proof. □ **Proof of Proposition 4. The first order derivatives of qd[D][,][ q]r[D][,][ w]r[D][,][ θ][D][,][ Π]m[D]** [and][ Π]r[D] [with] respect to λ are shown as follows: _dqdλd[D]_ = 2λu[2(u2(−2k−)[(k[2]3) −−λ2[2]k()3a−−2(k2)]−[2]k)c], _[dq]dλr[D]_ = 2λu[2(u1(−2k−)[(k[2]3) −−λ2[2]k()3a−−2(k2)]−[2]k)c], _dwdλd[D]_ = 2kλ[u2(u1(−2−k)[(k[2])3− −λ2[2]k()3a−−2(k2)]−[2]k)c], _[dw]dλr[D]_ = 2λu[(22u −(2k−[2])[(k[2])3− −λ2[2]k(3)a−−2(k2)]−[2]k)c], _dθ[D]_ [(3 −2k)a−(2−k)c][(3 −2k)λ[2]+2(2 −k[2])u] _dλ_ = [2u(2−k[2])−λ[2](3−2k)] [2], _ddΠλm[D]_ = [λ2uu [((23− −k[2]2)k−)aλ−[2]((32−−2kk))]c][2][2][,] _ddΠλr[D]_ = 2λu(1 −k)[(3 −[2u2(k2)−a−k[2]()2−−λk)[2]c(][3−2 u2(ka)]−[3]ak+ck) −cλ[2]] . _d_ _r_ _r_ _m_ Similar to the Proposition 1, we have _[dq][D]_ _dλ_ _[>][ 0,][ dq]dλ[D]_ _[>][ 0,][ dw]dλ[D]_ _[>][ 0,][ d]d[θ]λ[D]_ _[>][ 0,][ d]d[Π]λ[D]_ _[>][ 0,]_ _dΠr[D]_ _dλ_ _[>][ 0. This completes the proof.][ □]_ **Proof of Proposition 5. The first order derivative of θ[D]** with respect to k are shown as follows: _dθ[D]_ _−λH3_ = _dk_ [2u(2 _k[2])_ _λ[2](3_ 2k)] [2][,] _−_ _−_ _−_ where H3= 8au − 4cu − _cλ[2]_ _−_ 2ck[2]u − 12aku+8cku+4ak[2]u. We get: _[dH]dk[3]_ = 4u(2c − 3a+2ak − _ck),_ _[d]dk[2]_ _[H][2][3]_ = 4u(2a − _c) > 0._ Similar to the Proposition 2, we have H3 > 0 and _[d]dk[θ][D]_ _< 0. This completes the proof._ **Proof of Proposition 6. The first-order condition of Πm[D]** [with respect to][ k][ is the following:] � _dΠm[D]_ = . 
_−_ [[][2] _[u][(][a][ −]_ _[ak][ +][ ck][)]_ _[−][c][λ][2][][][2]_ _[au][(][2][ −]_ _[k][)][ −]_ [(][4][u][ −] _[λ][2][)][c]_ _dk_ 2[2u(2 _k[2])_ _λ[2](3_ 2k)][2] _−_ _−_ _−_ It follows that _[d][Π]dkm[D]_ _< 0, for c <_ _⌣c =_ 2au4u(−2−λ[2]k[ .]) Similarly, the first-order condition of Πr[D] [with respect to][ k][ is in the following.] _dΠr[D]_ = [[][2] _[u][(][a][ −]_ _[ak][ +][ ck][)][ −]_ _[c][λ][2][]][ H][4]_ _dk_ [2u(2 _k[2])_ _λ[2](3_ 2k)] [3][,] _−_ _−_ _−_ where H4 = cλ[4] _−_ 4au[2]+4cu[2] + aλ[2]u − 3cλ[2]u − 2ak[2]u[2]+2ck[2]u[2]+4aku[2] _−_ 2ckλ[2]u. _r_ We can see that the sign of _[d][Π]dk[D]_ depends on that of H4. We define a threshold c3 so that H5(c = c3) = 0, where c3 = 2(k[2]au+[22) uu([2]k−[2]−(22kk++32))λ−[2]uλ+[2]]λ[4][ .] Comparing c3 and _⌣c, we have c3 −_ _⌣c =_ (4auu−(2λku[2])[−2λ(k[2][2])[+22u)(uk[2][2]−−(22)+(k+33−)λ2[2]ku)+λ[2]λ][4]] _[<][ 0.]_ The first-order condition of H4 with respect to c is: _dH4_ = 2(k[2] + 2)u[2] (2k + 3)uλ[2] + λ[4] _> 0._ _−_ _dc_ ----- _Processes 2021, 9, 990_ 22 of 24 _⌣_ This indicates H4 increases with c. Thus, if c < c3, H4 < 0; if c3 < c < _c, H4 > 0._ _r_ _⌣_ Πr[D] It follows that if c < c3, _[d][Π]dk[D]_ _< 0; if c3 < c <_ _c, ddk_ _> 0. This completes the proof. □_ **Proof of Proposition 7. (a) From lemma 2, wd[D]** = 2[k2[u2u(2( −a−k[2]ak) −+ckλ[2])(−3c−λ2[2]k])] _[>][ 0.]_ (b) Using qd[D] [and][ q]r[D] [from Lemma 2 and][ q][C]d [and][ q]r[C] [from Lemma 1,] _k(2_ _k)[_ 3λ[2]+2u(2 + k)][cλ[2] 2u(a _ak + ck)]_ _−_ _−_ _−_ _−_ _qd[D]_ _d_ [=] _[−]_ _[q][C]_ 2[λ[2](k[2] 8k + 12) + 2u(3k[2] 8)][2u(k[2] 2)+λ[2](3 2k)] _[<][ 0,]_ _−_ _−_ _−_ _−_ _k[2][�]λ[2]_ 2u)[cλ[2] 2u(a _ak + ck)]_ _−_ _−_ _−_ _qr[D]_ _r_ [=] _[−]_ _[q][C]_ 2[λ[2](k[2] 8k + 12) + 2u(3k[2] 8)][2u(k[2] 2)+λ[2](3 2k)] _[>][ 0.]_ _−_ _−_ _−_ _−_ Hence, qd[D] _< q[C]d_ [and][ q]r[D] _[>][ q]r[C][.]_ (c) Using θ[D] from Lemma 2 and θ[C] from Lemma 1, _k[2]λ�k_ 1)[cλ[2] 2u(a _ak + ck)]_ _−_ _−_ _−_ _θ[D]_ _θ[C]_ = _−_ [λ[2](k[2] 8k + 12) + 2u(3k[2] 8)][2u(k[2] 2)+λ[2](3 2k)] _[>][ 0.]_ _−_ _−_ _−_ _−_ Hence, θ[D] _> θ[C]._ (d) Using wr[D] [from Lemma 2 and][ w]r[C] [from Lemma 1,] _k[2][2u�k[2]_ 2)+λ[2](2 _k)][cλ[2]_ 2u(a _ak + ck)]_ _−_ _−_ _−_ _−_ _wr[D]_ _r_ [=] _[−]_ _[w][C]_ 2[λ[2](k[2] 8k + 12) + 2u(3k[2] 8)][2u(k[2] 2)+λ[2](3 2k)] _[>][ 0.]_ _−_ _−_ _−_ _−_ Hence, wr[D] _[>][ w]r[C][. This completes the proof.][ □]_ **Proof of Proposition 8. Using Πm[D]** [and][ Π]r[D] [from Lemma 2 and][ Π][C]m [and][ Π]r[C] [from Lemma 1,] _k[2][cλ[2]_ 2u(a _ak + ck)]_ [2] _−_ _−_ Πm[D] _m_ [=] _[−]_ [Π][C] 4[λ[2](k[2] 8k + 12) + 2u(3k[2] 8)][2u(k[2] 2)+λ[2](3 2k)] _[>][ 0,]_ _−_ _−_ _−_ _−_ Πr[D] _r_ [=][ k][2][(][λ][2][ −] [2][u][)[][c][λ][2][ −] [2][u][(][a][ −] _[ak][ +][ ck][)]][2][[(]_ _[k][2][ −]_ [16][k][ +][ 24][)][λ][2][+][2][u][(][7][k][2][ −] [16][)]] _> 0._ _[−]_ [Π][C] 4[λ[2](k[2] 8k + 12) + 2u(3k[2] 8)][2][2u(k[2] 2)+λ[2](3 2k)] [2] _−_ _−_ _−_ _−_ Hence, Πm[D] _[>][ Π][C]m_ [and][ Π]r[D] _[>][ Π]r[C][. This completes the proof.][ □]_ **Proof of Proposition 9. (a) From (14), we know: ED[D]** = 0, ED[C] = [1]2 _[d][(][θ][D][ −]_ _[θ][C][)(][q][C]d_ [+][ q]r[C][)][2][.] Since θ[D] _θ[C]_ _> 0 from Proposition 7(c), we obtain ED[D]_ _< ED[C]._ _−_ (b) From (15), for j = C, D, SW _[j]_ = SC[j] + CS[j] _ED[j]. From Proposition 8, we can_ _−_ see SC[D] = Πm[D] [+][ Π]r[D] _[>][ SC][C][ =][ Π][C]m_ [+][ Π]r[C][. 
Using][ q]d[D] [and][ q]r[D] [from Lemma 2 and][ q][C]d [and] _qr[C]_ [from Lemma 1,] _k�cλ[2]_ 2u�a _ak + ck)][u(k[2]_ + k 4) + (3 2k)λ[2][�] _−_ _−_ _−_ _−_ _−_ _qm[D]_ [+][ q]r[D] _m_ _r_ [=] _[−]_ _[q][C]_ _[−]_ _[q][C]_ [λ[2](k[2] 8k + 12) + 2u(3k[2] 8)][2u(k[2] 2)+λ[2](3 2k)] _[<][ 0.]_ _−_ _−_ _−_ _−_ Hence, from Equation (13), we have CS[D] _< CS[C]. Comparing the social welfare_ under centralized and decentralized encroachments, 2 _SW_ _[D]_ _−_ _SW[C]_ = SC[D] _−_ _SC[C]_ + CS[D] _−_ _CS[C]_ + [1] _d_ [+][ q]r[C][)] . 2 _[d][(][θ][D][ −]_ _[θ][C][)(][q][C]_ We define a threshold d[∗] such that SW _[D]_ = SW[C] for d = d[∗], where _d[∗]_ = [2][(][CS][C][ −] _[CS][D][ +][ SC][C][ −]_ _[SC][D][)]_ . (θ[D] _−_ _θ[C])(q[C]d_ [+][ q][Cr][ )][2] ----- _Processes 2021, 9, 990_ 23 of 24 Therefore, we have if d _d[∗], SW_ _[D]_ _SW[C]; if d > d[∗], SW_ _[D]_ _> SW[C]. This completes_ _≤_ _≤_ the proof. □ **References** 1. Wang, K.; Wei, Y.M.; Huang, Z. Potential Gains from Carbon Emissions Trading in China: A DEA Based Estimation on Abatement Cost Savings. Omega 2016, 63, 48–59. 2. Ramanathan, U.; Bentley, Y.; Pang, G. The Role of Collaboration in the UK Green Supply Chains: An Exploratory Study of the [Perspectives of Suppliers, Logistics and Retailers. J. Clean. Prod. 2014, 70, 231–241. [CrossRef]](http://doi.org/10.1016/j.jclepro.2014.02.026) 3. [Sarkis, J. A Strategic Decision Framework for Green Supply Chain Management. J. Clean. Prod. 2003, 11, 397–440. [CrossRef]](http://doi.org/10.1016/S0959-6526(02)00062-8) 4. Linton, J.D.; Klassen, R.; Jayaraman, V. Sustainable Supply Chains: An Introduction. J. Oper. Manag. 2007, 25, 1075–1082. [[CrossRef]](http://doi.org/10.1016/j.jom.2007.01.012) 5. Hong, Z.; Guo, X. Green Product Supply Chain Contracts Considering Environmental Responsibilities. Omega 2019, 83, 155–166. [[CrossRef]](http://doi.org/10.1016/j.omega.2018.02.010) 6. Basiri, Z.; Heydari, J. A Mathematical Model for Green Supply Chain Coordination with Substitutable Products. J. Clean. Prod. **[2017, 145, 232–249. [CrossRef]](http://doi.org/10.1016/j.jclepro.2017.01.060)** 7. Hong, P.; Jagani, S.; Kim, J.; Youn, S.H. Managing Sustainability Orientation: An Empirical Investigation of Manufacturing Firms. _[Int. J. Prod. Econ. 2019, 211, 71–81. [CrossRef]](http://doi.org/10.1016/j.ijpe.2019.01.035)_ 8. Li, B.; Hou, P.-W.; Chen, P.; Li, Q.-H. Pricing Strategy and Coordination in a Dual Channel Supply Chain with a Risk-Averse [Retailer. Int. J. Prod. Econ. 2016, 178, 154–168. [CrossRef]](http://doi.org/10.1016/j.ijpe.2016.05.010) 9. Conrad, K. Price Competition and Product Differentiation when Consumers Care for the Environment. Environ. Resour. Econ. **[2005, 31, 1–19. [CrossRef]](http://doi.org/10.1007/s10640-004-6977-8)** 10. Reinhardt, F.L. Environmental Product Differentiation: Implications for Corporate Strategy. Calif. Manag. Rev. 1998, 40, 43–73. [[CrossRef]](http://doi.org/10.2307/41165964) 11. Thøgersen, J. Eco-labeling is one among a number of policy tools that are used in what. In New Tools for Environmental Protection: _Education, Information, and Voluntary Measures; Dietz, T., Stern, P.C., Eds.; National Academy Press: Washington, DC, USA, 2002;_ pp. 83–104. 12. Ranjan, A.; Jha, J.K. Pricing and Coordination Strategies of a Dual-Channel Supply Chain Considering Green Quality and Sales [Effort. J. Clean. Prod. 2019, 218, 409–424. [CrossRef]](http://doi.org/10.1016/j.jclepro.2019.01.297) 13. [Yoon, D. Supplier Encroachment and Investment Spillovers. Prod. Oper. Manag. 
2016, 25, 1839–1854. [CrossRef]](http://doi.org/10.1111/poms.12580) 14. Li, T.T.; Xie, J.X.; Zhao, X.B. Supplier Encroachment in Competitive Supply Chains. Int. J. Prod. Econ. 2015, 165, 120–131. [[CrossRef]](http://doi.org/10.1016/j.ijpe.2015.03.023) 15. [Sony Sells a Portion of Its Shares in StylingLife Holdings. Available online: https://www.sony.net/SonyInfo/News/Press/2006](https://www.sony.net/SonyInfo/News/Press/200612/06-114E/index.html) [12/06-114E/index.html (accessed on 27 April 2021).](https://www.sony.net/SonyInfo/News/Press/200612/06-114E/index.html) 16. Kalnins, A. An Empirical Analysis of Territorial Encroachment Within Franchised and Company-Owned Branded Chains. _[Mark. Sci. 2004, 23, 476–489. [CrossRef]](http://doi.org/10.1287/mksc.1040.0082)_ 17. Vinhas, A.S.; Anderson, E. How Potential Conflict Drives Channel Structure: Concurrent (Direct and Indirect) Channels. _[J. Mark. Res. 2005, 42, 507–515. [CrossRef]](http://doi.org/10.1509/jmkr.2005.42.4.507)_ 18. Arya, A.; Mittendorf, B.; Yoon, D.-H. Friction in Related-Party Trade When a Rival Is Also a Customer. Manag. Sci. 2008, 54, [1850–1860. [CrossRef]](http://doi.org/10.1287/mnsc.1080.0906) 19. Patra, P. Distribution of Profit in a Smart Phone Supply Chain under Green Sensitive Consumer Demand. J. Clean. Prod. 2018, 192, [608–620. [CrossRef]](http://doi.org/10.1016/j.jclepro.2018.04.144) 20. Liu, Z.L.; Anderson, T.D.; Cruz, J.M. Consumer Environmental Awareness and Competition in Two-Stage Supply Chains. _[Eur. J. Oper. Res. 2012, 218, 602–613. [CrossRef]](http://doi.org/10.1016/j.ejor.2011.11.027)_ 21. Green, K.W.G., Jr.; Zelbst, P.J.; Meacham, J.; Bhadauria, V.S. Green Supply Chain Management Practices: Impact on Performance. _[Supply Chain Manag. Int. J. 2012, 17, 290–305. [CrossRef]](http://doi.org/10.1108/13598541211227126)_ 22. Ghosh, D.; Shah, J. A Comparative Analysis of Greening Policies Across Supply Chain Structures. Int. J. Prod. Econ. 2012, 135, [568–583. [CrossRef]](http://doi.org/10.1016/j.ijpe.2011.05.027) 23. He, J.; Lei, Y.; Fu, X. Do Consumer’s Green Preference and the Reference Price Effect Improve Green Innovation? A Theoretical [Model Using the Food Supply Chain as a Case. Int. J. Environ. Res. Public Health 2019, 16, 5007. [CrossRef] [PubMed]](http://doi.org/10.3390/ijerph16245007) 24. Liu, X.; Du, W.; Sun, Y. Green Supply Chain Decisions Under Different Power Structures: Wholesale Price vs. Revenue Sharing [Contract. Int. J. Environ. Res. Public Health 2020, 17, 7737. [CrossRef] [PubMed]](http://doi.org/10.3390/ijerph17217737) 25. Lee, D. Who Drives Green Innovation? A Game Theoretical Analysis of a Closed-Loop Supply Chain under Different Power [Structures. Int. J. Environ. Res. Public Health 2020, 17, 2274. [CrossRef]](http://doi.org/10.3390/ijerph17072274) 26. [Cai, G. Channel Selection and Coordination in Dual-Channel Supply Chains. J. Retail. 2010, 86, 22–36. [CrossRef]](http://doi.org/10.1016/j.jretai.2009.11.002) 27. Heydari, J.; Govindan, K.; Aslani, A. Pricing and Greening Decisions in a Three-Tier Dual Channel Supply Chain. Int. J. Prod. Econ. **[2019, 217, 185–196. [CrossRef]](http://doi.org/10.1016/j.ijpe.2018.11.012)** 28. Fein, A.J.; Anderson, E. Patterns of Credible Commitments: Territory and Brand Selectivity in Industrial Distribution Channels. _[J. Mark. 1997, 61, 19–34. [CrossRef]](http://doi.org/10.1177/002224299706100202)_ ----- _Processes 2021, 9, 990_ 24 of 24 29. Tsay, A.A.; Agrawal, N. Channel Conflict and Coordination in the E-Commerce Age. 
Production and Operations Management. _[Prod. Oper. Manag. 2009, 13, 93–110. [CrossRef]](http://doi.org/10.1111/j.1937-5956.2004.tb00147.x)_ 30. [Arya, A.; Mittendorf, B.; Sappington, D.E.M. The Bright Side of Supplier Encroachment. Mark. Sci. 2007, 26, 651–659. [CrossRef]](http://doi.org/10.1287/mksc.1070.0280) 31. [Li, Z.; Gilbert, S.M.; Lai, G. Supplier Encroachment Under Asymmetric Information. Manag. Sci. 2014, 60, 449–462. [CrossRef]](http://doi.org/10.1287/mnsc.2013.1780) 32. [Ha, A.; Long, X.; Nasiry, J. Quality in Supply Chain Encroachment. Manuf. Serv. Oper. Manag. 2015, 18, 280–298. [CrossRef]](http://doi.org/10.1287/msom.2015.0562) 33. Cai, G.G.; Zhang, Z.G.; Zhang, M. Game Theoretical Perspectives on Dual Channel Supply Chain Competition with Price [Dis-Counts and Pricing Schemes. Int. J. Prod. Econ. 2009, 117, 80–96. [CrossRef]](http://doi.org/10.1016/j.ijpe.2008.08.053) 34. Chen, J.; Zhang, H.; Sun, Y. Implementing Coordination Contracts in a Manufacturer Stackelberg Dual-Channel Supply Chain. _[Omega 2012, 40, 571–583. [CrossRef]](http://doi.org/10.1016/j.omega.2011.11.005)_ 35. [Swami, S.; Shah, J. Channel Coordination in Green Supply Chain Management. J. Oper. Res. Soc. 2013, 64, f336–f351. [CrossRef]](http://doi.org/10.1057/jors.2012.44) 36. Li, W.N.; Elsadany, A.A.; Zhou, W.; Zhu, Y.L. Global Analysis, Multi-Stability and Synchronization in a Competition Model of [Public Enterprises with Consumer Surplus. Chaos Solitons Fractals 2021, 143, 110604. [CrossRef]](http://doi.org/10.1016/j.chaos.2020.110604) 37. [Pal, R.; Saha, B. Pollution Tax, Partial Privatization and Environment. Resour. Energy Econ. 2015, 40, 19–35. [CrossRef]](http://doi.org/10.1016/j.reseneeco.2015.01.004) 38. Bian, J.; Zhao, X. Tax or Subsidy? An analysis of Environmental Policies in Supply Chains with Retail Competition. _[Eur. J. Oper. Res. 2020, 283, 901–914. [CrossRef]](http://doi.org/10.1016/j.ejor.2019.11.052)_ 39. Spence, M. Product Differentiation and Welfare. Am. Econ. Rev. 1976, 66, 407–414. 40. Krass, D.; Nedorezov, T.; Ovchinnikov, A. Environmental Taxes and the Choice of Green Technology. Prod. Oper. Manag. 2013, 22, [1035–1055. [CrossRef]](http://doi.org/10.1111/poms.12023) 41. Poyago-Theotoky, J. The Organization of R&D and Environmental Policy. J. Econ. Behav. Organ. 2007, 62, 63–75. 42. Ouchida, Y.; Goto, D. Do Emission Subsidies Reduce Emission? In the Context of Environmental R&D Organization. Econ. Model. **2014, 36, 511–516.** 43. Bian, J.; Guo, X.; Li, K.W. Decentralization or Integration: Distribution Channel Selection under Environmental Taxation. _[Transp. Res. Part E Logist. Transp. Rev. 2018, 113, 170–193. [CrossRef]](http://doi.org/10.1016/j.tre.2017.09.011)_ 44. Zhang, J.; Li, S.; Zhang, S.; Dai, R. Manufacturer Encroachment with Quality Decision under Asymmetric Demand Information. _[Eur. J. Oper. Res. 2019, 273, 217–236. [CrossRef]](http://doi.org/10.1016/j.ejor.2018.08.002)_ 45. Chiang, W.K.; Chhajed, D.; Hess, J.D. Direct Marketing, Indirect Profits: A Strategic Analysis of Dual-Channel Supply-Chain [Design. Manag. Sci. 2003, 49, 1–20. [CrossRef]](http://doi.org/10.1287/mnsc.49.1.1.12749) 46. Sun, X.; Tang, W.; Chen, J.; Li, S.; Zhang, J. Manufacturer Encroachment with Production Cost Reduction under Asymmetric [Information. Transp. Res. Part E Logist. Transp. Rev. 2019, 128, 191–211. [CrossRef]](http://doi.org/10.1016/j.tre.2019.05.018) 47. Li, J.; Liang, J.; Shi, V.; Wang, Q. 
The Benefit of Manufacturer Encroachment Considering Consumer’s Environmental Awareness and Product Competition. Ann. Oper. Res. 2021, 3, 1–19. 48. Li, J.; Wang, F.; He, Y. Electric Vehicle Routing Problem with Battery Swapping Considering Energy Consumption and Carbon [Emissions. Sustainability 2020, 12, 10537. [CrossRef]](http://doi.org/10.3390/su122410537) 49. Zhang, C.; Su, W.; Zeng, S.; Balezentis, T.; Herrera-Viedma, E. A Two-Stage Subgroup Decision-Making Method for Processing [Large-Scale Information. Expert Syst. Appl. 2021, 171, 114586. [CrossRef]](http://doi.org/10.1016/j.eswa.2021.114586) 50. Li, J.; Yi, L.; Shi, V.; Chen, X. Supplier Encroachment Strategy in the Presence of Retail Strategic Inventory: Centralization or [Decentralization? Omega 2021, 98, 102213. [CrossRef]](http://doi.org/10.1016/j.omega.2020.102213) -----
{ "disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.3390/PR9060990?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.3390/PR9060990, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "CCBY", "status": "GOLD", "url": "https://www.mdpi.com/2227-9717/9/6/990/pdf?version=1622787550" }
2,021
[]
true
2021-06-03T00:00:00
[ { "paperId": "2328b57078cd120ae0952b2a79358a231ac81f63", "title": "The benefit of manufacturer encroachment considering consumer's environmental awareness and product competition" }, { "paperId": "697d5b1b14ff52f2fbc1266207d12bbff06fb3a5", "title": "Global Analysis, Multi-stability and Synchronization in a Competition Model of Public Enterprises with Consumer Surplus" }, { "paperId": "1e872ff46f1fc62612f68804c6fea6c821ae4477", "title": "Electric Vehicle Routing Problem with Battery Swapping Considering Energy Consumption and Carbon Emissions" }, { "paperId": "cec3687b05da9e0c4887c07546999e0c906541b9", "title": "Green Supply Chain Decisions Under Different Power Structures: Wholesale Price vs. Revenue Sharing Contract" }, { "paperId": "55323624dc86d421d26fd240328d4769dda4b2fe", "title": "Tax or subsidy? An analysis of environmental policies in supply chains with retail competition" }, { "paperId": "9f5334c853d025fa71c7f04fb331e79ee083d847", "title": "Who Drives Green Innovation? A Game Theoretical Analysis of a Closed-Loop Supply Chain under Different Power Structures" }, { "paperId": "562d9341a20626203ea3b8afdc528387eb16b103", "title": "Supplier encroachment strategy in the presence of retail strategic inventory: Centralization or decentralization?" }, { "paperId": "ff7b2373c8a96f4e84e3bb422486e9ca76d60eab", "title": "Do Consumer’s Green Preference and the Reference Price Effect Improve Green Innovation? A Theoretical Model Using the Food Supply Chain as a Case" }, { "paperId": "d11c4bbe06e7c4264be4c71180254a85b2942363", "title": "Pricing and greening decisions in a three-tier dual channel supply chain" }, { "paperId": "bad44789619df2346a1661008e11f89c4b18bfd4", "title": "Manufacturer encroachment with production cost reduction under asymmetric information" }, { "paperId": "af68cedfaae647a95e395960dd491680b31b40be", "title": "Managing sustainability orientation: An empirical investigation of manufacturing firms" }, { "paperId": "e3eeeca6af78d9f50427868192c51f189ed16fc7", "title": "Pricing and coordination strategies of a dual-channel supply chain considering green quality and sales effort" }, { "paperId": "b4d84193bf2dcee743b7dee208839d4677b19608", "title": "Manufacturer encroachment with quality decision under asymmetric demand information" }, { "paperId": "374c856e57c2813fd31ed8eef2cab8a0c7dba387", "title": "Distribution of profit in a smart phone supply chain under Green sensitive consumer demand" }, { "paperId": "408cdd8b64c365401d4cdd6ce9018e3e9a2d2e69", "title": "Green product supply chain contracts considering environmental responsibilities" }, { "paperId": "4179435871c7b97d5aa55d08705771e0655c4aae", "title": "Decentralization or integration: Distribution channel selection under environmental taxation" }, { "paperId": "4d00dd9b5d236a3d2027d035ea23ff641f7eb967", "title": "A mathematical model for green supply chain coordination with substitutable products" }, { "paperId": "4a8d537bd58dbe6e9b6f61d841acf5c3a4b86d15", "title": "Supplier Encroachment and Investment Spillovers" }, { "paperId": "36e7f367ad73e3e07cb541d2ac1050d119341348", "title": "Potential gains from carbon emissions trading in China: A DEA based estimation on abatement cost savings" }, { "paperId": "c6a30ed4b957f678d8945133efa37925192be71e", "title": "Pricing strategy and coordination in a dual channel supply chain with a risk-averse retailer" }, { "paperId": "2062063fd5e93e03728417cf74dd06fb869fe6e4", "title": "Quality in Supply Chain Encroachment" }, { "paperId": "68b468872c453007dc54da5b808d48220cb125aa", "title": 
"Supplier encroachment in competitive supply chains" }, { "paperId": "435fd655139ea20bb0465a1ecdd7472930a2d7d4", "title": "Pollution tax, partial privatization and environment" }, { "paperId": "44773d46e2e2476f0a16b5e1c847aa335a975b80", "title": "The role of collaboration in the UK green supply chains: an exploratory study of the perspectives of suppliers, logistics and retailers" }, { "paperId": "9fc1c3f111afb58f53f5f5f9cc24a41d8cd40a63", "title": "Environmental Taxes and the Choice of Green Technology" }, { "paperId": "b61386529f4b92796bd56c4d397af0332a5c9192", "title": "Channel coordination in green supply chain management" }, { "paperId": "db5d7918993d624eee5424c743c09a75d5e011a9", "title": "Implementing coordination contracts in a manufacturer Stackelberg dual-channel supply chain" }, { "paperId": "500c00519cd6429b0ab241d437544b74745797aa", "title": "Consumer environmental awareness and competition in two-stage supply chains" }, { "paperId": "5fd91d42c57e11c6a87f17d61291146e87c239d3", "title": "Green supply chain management practices: impact on performance" }, { "paperId": "e428991eac03f67729f8fa613df80693958ce18d", "title": "Supplier Encroachment under Asymmetric Information" }, { "paperId": "17b8835b6849aea0c6117eeba5d272d45e8006ed", "title": "A comparative analysis of greening policies across supply chain structures" }, { "paperId": "9a9eb26e86328afd7c9d5e3285bb70f1714223d3", "title": "Channel Selection and Coordination in Dual-Channel Supply Chains" }, { "paperId": "d07a4201f6f57560ef6f8e46899a63d42c370f9a", "title": "Friction in Related-Party Trade When a Rival Is Also a Customer" }, { "paperId": "b86c6dcc9b6a211dc7cfacb102ff8cd61a11dab9", "title": "Sustainable supply chains: An introduction" }, { "paperId": "764a46f67285c2bda64f1e86c0b6dcf579700d50", "title": "The Bright Side of Supplier Encroachment" }, { "paperId": "d221bb2443c9d92f23f7938a1ed4e167b126b5a5", "title": "How Potential Conflict Drives Channel Structure: Concurrent (Direct and Indirect) Channels" }, { "paperId": "a55066d3dd052c90f762378484cc6c81668503c0", "title": "An Empirical Analysis of Territorial Encroachment Within Franchised and Company-Owned Branded Chains" }, { "paperId": "f7a52ca6bff0a790955828d415b701ca55baf0fb", "title": "Channel Conflict and Coordination in the E‐Commerce Age" }, { "paperId": "035438e4dea47bb61cdf1285ed153013167bdf66", "title": "New Tools for Environmental Protection: Education, Information and Voluntary Measures" }, { "paperId": "368163bc8d31fbfe3ab258c495b567c3d5ad6ed2", "title": "Price Competition and Product Differentiation When Consumers Care for the Environment" }, { "paperId": "0a649f841e000b45fbdb5ade06b04d7872a6af24", "title": "A STRATEGIC DECISION FRAMEWORK FOR GREEN SUPPLY CHAIN MANAGEMENT" }, { "paperId": "9b1ff8cb50ebd89d1fa394da441636b952b3f3b4", "title": "Environmental Product Differentiation: Implications for Corporate Strategy" }, { "paperId": "13b4322bbb2d804e119dcdf7e08513b8e2ec4cf2", "title": "Patterns of Credible Commitments: Territory and Brand Selectivity in Industrial Distribution Channels" }, { "paperId": "ab147883440385cbd5cb5062b515b98e6b777203", "title": "A Two-stage subgroup Decision-making method for processing Large-scale information" }, { "paperId": "2aa7d84d4ab04ffe84c800ba8ebcef3ba562111c", "title": "Do emission subsidies reduce emission? 
In the context of environmental R&D organization" }, { "paperId": "cd140febf18fb013c6979d02264c0a56fc864190", "title": "Game theoretical perspectives on dual-channel supply chain competition with price discounts and pricing schemes" }, { "paperId": "de6c2ac9e27039effd09ed4a95824e0295ee7d2c", "title": "The organization of R&D and environmental policy" }, { "paperId": "c0d90a6b6b2e9701cc9900daf0a2dcfa9c6ed40b", "title": "Direct Marketing, Indirect Profits: A Strategic Analysis of Dual - Channel Supply - Chain Design" }, { "paperId": "0453ea6c2dbf048112d6b216b66cf437be6fc568", "title": "Design, planning, scheduling, and control problems of flexible manufacturing systems" }, { "paperId": "6bb149a41f6133ceca1d9983eca918cd2abead4a", "title": "Product Differentiation and Welfare" }, { "paperId": null, "title": "Sony Sells a Portion of Its Shares in StylingLife Holdings" }, { "paperId": null, "title": "Do Emission Subsidies Reduce Emission ?" }, { "paperId": null, "title": "Supply Chain Management Practices: Impact on Performance. Supply Chain Manag" }, { "paperId": null, "title": "Indirect Profits: A Strategic Analysis of Dual-Channel Supply-Chain Design" }, { "paperId": null, "title": "Eco-labeling is one among a number of policy tools that are used in what" } ]
27,675
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Engineering", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/0265955b9565abd870ab9d874305a84ef397c743
[ "Computer Science" ]
0.819012
Fast Multi-precision Multiplication for Public-Key Cryptography on Embedded Microprocessors
0265955b9565abd870ab9d874305a84ef397c743
Journal of Cryptology
[ { "authorId": "70931341", "name": "M. Hutter" }, { "authorId": "1764501", "name": "Erich Wenger" } ]
{ "alternate_issns": null, "alternate_names": [ "J Cryptol" ], "alternate_urls": [ "https://www.iacr.org/jofc/jofc.html", "http://www.springeronline.com/sgw/cda/frontpage/0,11855,5-0-70-1009426-detailsPage=journal|description|description,00.html?referer=www.springeronline.com/journal/00145/about" ], "id": "de5467ac-3f75-47f8-8397-1c10f6f9fc09", "issn": "0933-2790", "name": "Journal of Cryptology", "type": "journal", "url": "https://link.springer.com/journal/145" }
null
# Fast Multi-precision Multiplication for Public-Key Cryptography on Embedded Microprocessors

Michael Hutter and Erich Wenger

Institute for Applied Information Processing and Communications (IAIK), Graz University of Technology, Inffeldgasse 16a, 8010 Graz, Austria
_{Michael.Hutter,Erich.Wenger}@iaik.tugraz.at_

**Abstract.** Multi-precision multiplication is one of the most fundamental operations on microprocessors to allow public-key cryptography such as RSA and Elliptic Curve Cryptography (ECC). In this paper, we present a novel multiplication technique that increases the performance of multiplication by sophisticated caching of operands. Our method significantly reduces the number of needed load instructions, which is usually one of the most expensive operations on modern processors. We evaluate our new technique on an 8-bit ATmega128 microcontroller and compare the result with existing solutions. Our implementation needs only 2,395 clock cycles for a 160-bit multiplication, which outperforms related work by a factor of 10% to 23%. The number of required load instructions is reduced from 167 (needed for the best known hybrid multiplication) to only 80. Our implementation scales very well even for larger integer sizes (required for RSA) and limited register sets. It further fully complies with existing multiply-accumulate instructions that are integrated in most of the available processors.

**Keywords:** Multi-precision Arithmetic, Microprocessors, Elliptic Curve Cryptography, RSA, Embedded Devices.

## 1 Introduction

Multiplication is one of the most important arithmetic operations in public-key cryptography. It consumes most of the resources and execution time of modern microprocessors (up to 80% for Elliptic Curve Cryptography (ECC) and RSA implementations [6]). In order to increase the performance of multiplication, most effort by researchers and developers has gone into reducing the number of instructions or minimizing the amount of memory-access operations. Common multiplication methods are the schoolbook and Comba [4] techniques, which are widely used in practice. They require at least 2n² load instructions to process all operands and to calculate the necessary partial products. In 2004, Gura et al. [6] presented a new method that combines the advantages of these methods (hybrid multiplication). They reduced the number of load instructions to only 2⌈n²/d⌉, where the parameter d depends on the number of available registers of the underlying architecture. They reported a performance gain of about 25% compared to the classical Comba multiplication. Their 160-bit implementation needs 3,106 clock cycles on an 8-bit ATmega128 microcontroller. Since then, several authors applied this method [7,12,14,15,17] and proposed various enhancements to further improve the performance. Most of the related work reported between 2,593 and 2,881 clock cycles on the same platform.

In this paper, we present a novel multiplication technique that reduces the number of needed load instructions to only 2n²/e, where e > d. We propose a new way to process the operands which allows efficient caching of the required operands. In order to evaluate the performance, we use the ATmega128 microcontroller and compare the results with related work. For a 160-bit multiplication, 2,395 clock cycles are necessary, which is an improvement by a factor of 10% compared to the best reported implementation of Scott et al.
[14] (which needs 2,651 clock cycles) and by a factor of about 23% compared to the work of Gura et al. [6]. We further compare our solution for different integer sizes (160, 192, 256, 512, 1,024, and 2,048 bits) and register sizes (e = 2, 4, 8, 10, and 20). It shows that our solution needs about 15% fewer clock cycles for any chosen integer size. Our solution also scales very well for different register sizes without significant loss of performance. Besides this, the method fully complies with common architectures that support multiply-accumulate instructions using a (Comba-like) triple-register accumulator.

The paper is organized as follows. In Section 2, we describe related work on this topic and give performance numbers for different multiplication techniques. Section 3 describes different multi-precision multiplication techniques used in practice. We describe the operand-scanning, product-scanning, and hybrid methods and compare them with our solution. In Section 4, we present the results of our evaluations. We describe the ATmega128 architecture and give details about the implementation. Summary and conclusions are given in Section 5.

## 2 Related Work

In this section, we describe related work on multi-precision multiplication over prime fields. Most of the work given in the literature makes use of the hybrid-multiplication technique [6], which provides the best performance on most microprocessors. This technique was first presented at CHES 2004, where the authors reported a speed improvement of up to 25% compared to the classical Comba multiplication technique [4] on 8-bit platforms. Their implementation requires 3,106 clock cycles for a 160-bit multiplication on an ATmega128 [1]. Several authors adopted the idea and applied the method to different devices and environments, e.g., sensor nodes. Wang et al. [18] and Ugus et al. [16] made use of this technique and implemented it on the MICAz motes, which feature an ATmega128 microcontroller. Results for the same platform have also been reported by Liu et al. [11] and Szczechowiak et al. [15] in 2008, who provide software libraries (TinyECC and NanoECC) for various sensor-mote platforms. Among the first to improve the implementation of Gura were Uhsadel et al. [17]. They were able to reduce the number of needed clock cycles to only 2,881. Further improvements have also been reported by Scott et al. [14]. They introduced additional registers (so-called carry catchers) and could increase the performance to 2,651 clock cycles. Note that they fully unrolled the execution sequence to avoid additional clock cycles for loop instructions. Similar results were obtained by Kargl et al. [7] in 2008, who reported 2,593 clock cycles for an unrolled 160-bit multiplication on the ATmega128. In 2009, Lederer et al. [9] showed that the needed number of addition and move instructions can be reduced by simply rearranging the instructions during execution of the hybrid-multiplication method. Similar findings have also been reported recently by Liu et al. [12], who reported the fastest looped version of the hybrid multiplication, needing 2,865 clock cycles in total.

## 3 Multi-precision Multiplication Techniques

In the following subsections, we describe common multiplication techniques that are often used in practice. We describe the operand-scanning, product-scanning, and hybrid multiplication methods.¹

¹ Note that we do not consider multiplication methods such as Karatsuba-Ofman or FFT in this paper since they are considered to require more resources and memory accesses on common microcontrollers than the given methods [8].
The methods differ in how they process the operands and in how many load and store instructions are necessary to perform the calculation. Most of these methods suffer from the fact that they load the same operands not once but several times throughout the algorithm, which results in additional and unnecessary clock cycles. We present a new multiplication technique that improves on existing solutions by efficiently reducing the load instructions through sophisticated caching of operands.

Throughout the paper, we use the following notation. Let a and b be two m-bit large integers that can be written as multiple-word array structures A = (A[n−1], ..., A[2], A[1], A[0]) and B = (B[n−1], ..., B[2], B[1], B[0]). Further, let W be the word size of the processor (e.g., 8, 16, 32, or 64 bits) and n = ⌈m/W⌉ the number of words needed to represent the integers a or b. We denote the result of the multiplication by c = ab and represent it in a double-size word array C = (C[2n−1], ..., C[2], C[1], C[0]).

**3.1 Operand-Scanning Method**

Among the simplest ways to perform large-integer multiplication is the operand-scanning method (often referred to as the schoolbook or row-wise multiplication method). The multiplication can be implemented using two nested loops. The outer loop loads the operand A[i] at index i = 0...n−1 and keeps the value constant inside the inner loop of the algorithm. Within the inner loop, the multiplicand B[j] is loaded word by word and multiplied with the operand A[i]. The partial product is then added to the intermediate result of the same column, which is usually buffered in a register or stored in data memory.

**Fig. 1. Operand-scanning multiplication of 8-word large integers a and b**

Figure 1 shows the structure of the algorithm on the left side. The individual row levels can be clearly discerned. On the right side of the figure, all n² partial products are displayed in the form of a rhombus. Each point in the rhombus represents a multiplication A[i] × B[j]. The rightmost corner of the rhombus starts with the lowest indices i, j = 0, and the leftmost corner ends with the highest indices i, j = n−1. By following all multiplications from the right to the lower-mid corner of the rhombus, it can be observed that the operand A[i] stays constant for any index i ∈ [0, n). The same holds true for the operand B[j] with j ∈ [0, n) when following all multiplications from the right to the upper-mid corner of the rhombus. Note that this is also valid for the left-hand side of the rhombus.

For the operand-scanning method, it can be seen that the partial products are calculated from the upper-right side to the lower-left side of the rhombus (we marked the processing of the partial products with a black arrow). In each row, n multiplications have to be performed. Furthermore, 2n load operations and n store operations are required to load the multiplicand and the intermediate result C[i+j] and to store the result C[i+j] ← C[i+j] + A[i] × B[j]. Thus, 3n² + 2n memory operations are necessary for the entire multi-precision multiplication. Note that this number decreases to n² + 3n for architectures that can maintain the intermediate result in available working registers.
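To make the row-wise processing concrete, the following is a minimal Python sketch of the operand-scanning method (our illustration, not code from the paper). Words are W-bit digits stored least-significant first, and the comments flag where the memory traffic counted above occurs.

```python
# Operand-scanning (schoolbook) multiplication sketch on W-bit words.
# The inner loop touches memory three times per partial product
# (load C[i+j], load B[j], store C[i+j]), which is where the 3n^2
# term in the memory-operation count comes from.

W = 8                            # word size, e.g. an 8-bit AVR
MASK = (1 << W) - 1

def operand_scanning_mul(A, B):
    n = len(A)
    C = [0] * (2 * n)            # double-size result kept in memory
    for i in range(n):           # outer loop: row i, A[i] held in a register
        carry = 0
        for j in range(n):       # inner loop: load B[j] and C[i+j], store C[i+j]
            t = C[i + j] + A[i] * B[j] + carry
            C[i + j] = t & MASK
            carry = t >> W
        C[i + n] = carry         # last carry of the row
    return C

def words_to_int(x):
    return sum(w << (W * k) for k, w in enumerate(x))

# Quick self-check against Python's native big integers.
A, B = [0x12, 0x34, 0x56, 0x78], [0x9A, 0xBC, 0xDE, 0xF0]
assert words_to_int(operand_scanning_mul(A, B)) == words_to_int(A) * words_to_int(B)
```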
**3.2 Product-Scanning Method**

Another way to perform a multi-precision multiplication is the product-scanning method (also referred to as the Comba [4] or column-wise multiplication method). There, each partial product is processed in a column-wise approach. This has several advantages. First, since all operands of each column are multiplied and added consecutively (within a multiply-accumulate approach), a final word of the result is obtained for each column. Thus, no intermediate results have to be stored or loaded throughout the algorithm. In addition, the handling of carry propagation is very easy because the carry can simply be added to the result of the next column using a simple register-copy operation. Second, only five working registers are needed to perform the multiplication: two registers for the operand and multiplicand and three registers for accumulation.² This makes the method very suitable for low-resource devices with limited registers.

² We assume the allocation of three registers for the accumulator, whereas 2 + ⌈log₂(n)/W⌉ registers are actually needed to maintain the sum of partial products.

**Fig. 2. Product-scanning multiplication of 8-word large integers a and b**

Figure 2 shows the structure of the product-scanning method. Looking at the rhombus, it shows that by processing the partial products in a column-wise instead of a row-wise approach, only one store operation is needed to store the final word of the result. For the entire multi-precision operation, 2n² load operations are necessary to load the operands A[i] and B[j], and 2n store operations are needed to store the result. Therefore, 2n² + 2n memory operations are needed.
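A companion Python sketch of the product-scanning approach (again ours, for illustration) makes the contrast explicit: every column is summed completely, one result word is stored, and the carry simply stays in the accumulator.

```python
# Product-scanning (Comba) multiplication sketch. The variable acc plays
# the role of the triple-word accumulator (ACC2, ACC1, ACC0): one store
# per column, with the carry kept by a shift instead of extra memory ops.

W = 8
MASK = (1 << W) - 1

def product_scanning_mul(A, B):
    n = len(A)
    C = [0] * (2 * n)
    acc = 0
    for k in range(2 * n - 1):                     # one column per result word
        for i in range(max(0, k - n + 1), min(k, n - 1) + 1):
            acc += A[i] * B[k - i]                 # two loads per partial product
        C[k] = acc & MASK                          # single store per column
        acc >>= W                                  # carry into the next column
    C[2 * n - 1] = acc & MASK
    return C

A, B = [0x12, 0x34, 0x56, 0x78], [0x9A, 0xBC, 0xDE, 0xF0]
val = lambda x: sum(w << (W * k) for k, w in enumerate(x))
assert val(product_scanning_mul(A, B)) == val(A) * val(B)
```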
**3.3 Hybrid Method**

The hybrid multiplication method [6] combines the advantages of the operand-scanning and product-scanning methods. It can be implemented using two nested loop structures, where the outer loop follows a product-scanning approach and the inner loop performs a multiplication according to the operand-scanning method. The main idea is to minimize the number of load instructions within the inner loop. For this, the accumulator has to be increased to a size of 2d + 1 registers. The parameter d defines the number of rows within a processed block. Note that the hybrid multiplication is equal to the product-scanning method if the parameter d is chosen as d = 1, and it is equal to the operand-scanning method if d = n.

**Fig. 3. Hybrid multiplication of 8-word large integers a and b (d = 4)**

Figure 3 shows the structure of the hybrid multiplication for d = 4. It shows that the partial products are processed in the form of individual blocks (we marked the processing sequence of the blocks from 1 to 4). Within one block, all operands are processed row by row according to the operand-scanning approach. Note that these blocks use operands with a very limited range of indices. Thus, several load instructions can be saved in cases where enough working registers are available. However, the outer loop of the hybrid method processes the blocks in a column-wise approach, so between two consecutive blocks no operands can be shared, and all operands have to be loaded from memory again. This becomes clear by looking at the processing of Blocks 1-3. Blocks 2 and 3 do not share any operands that possess the same indices. Therefore, all operands that have already been loaded for Block 1 and that could be reused in Block 3 have to be loaded again after the processing of Block 2, which requires additional and unnecessary load instructions. In total, however, the hybrid method needs 2⌈n²/d⌉ + 2n memory-access instructions, which provides good performance on devices that feature a large register set.

**3.4 Operand-Caching Method**

We present a new method to perform multi-precision multiplication. The main idea is to reduce the number of memory accesses to a minimum by efficiently caching operands. We show that by spending a certain number of store operations, a significant number of load instructions can be saved by reusing operands that have already been loaded into working registers.

The method basically follows the product-scanning approach but divides the calculation into several rows. In fact, the product-scanning method provides the best performance if all needed operands can be maintained in working registers. In such a case, only 2n load instructions and 2n store instructions would be necessary. However, the product-scanning method becomes inefficient if not enough registers are available or if the integer size is too large to cache a significant number of operands. Hence, several load instructions are necessary to reload and overwrite the operands in registers.

In light of this fact, we propose to separate the product-scanning method into r = ⌊n/e⌋ individual rows. The size e of each row is chosen such that all needed words of one operand can be cached in the available working registers.

**Fig. 4. Operand-caching multiplication of 8-word large integers a and b (e = 3)**

Figure 4 shows the structure of the proposed method for the parameter e = 3. That means 3 registers are reserved to store 3 words of operand a, and 3 registers are reserved to store 3 words of operand b. Thus, we assume f = 2e + 3 = 9 available registers, including a triple-word accumulator. The calculation is now separated into r = ⌊8/3⌋ = 2 rows, i.e., r0 and r1, and consists of one remaining block, which we further denote as the initialization block binit. This block calculates the partial products which are not processed by the rows. All rows are further separated into four parts. Parts 1 and 4 use the classical product-scanning approach. Parts 2 and 3 perform an efficient multiply-accumulate operation on already cached operands. The algorithm starts with the calculation of binit and processes the individual rows afterwards (starting from the smallest to the largest row, i.e., from the top to the bottom of the rhombus). Furthermore, all partial products are generated from right to left. In the following, we describe the algorithm in more detail.

**Initialization Block binit.** This block (located in the upper-mid of the rhombus) performs the multiplication according to the classical product-scanning method. The integer size of the binit multiplication is (n − re), i.e., 8 − 6 = 2 in our example, which is by definition smaller than e. Because of that, all operands can be loaded and maintained within the available registers, resulting in only 4(n − re) memory-access operations. Note that the calculation of binit is only required if there exist remaining partial products, i.e., n mod e ≠ 0. If n mod e = 0, the calculation of binit is skipped.
Furthermore, consider the special case when n < e, where only binit has to be performed, skipping the processing of rows (trivial case).

**Processing of Rows.** In the following, we describe the processing of each row p = r−1 ... 0. Each row consists of four parts.

**Part 1.** This part starts with a product-scanning multiplication. All operands for that row are first loaded into registers, i.e., A[i] with i = pe ... e(p+1)−1 and B[j] with j = 0 ... e−1. The sum of all partial products A[i] × B[j] is then stored as an intermediate result to the memory location C[i] (same index range as A[i]). Therefore, 2e load instructions and e store instructions are needed.

**Part 2.** The second part processes n − e(p+1) columns using a multiply-accumulate approach. Since all operands A[i] were already loaded and used in Part 1, only one word B[j] has to be loaded from one column to the next. The operands A[i] are kept constant throughout the processing of Part 2. Besides the needed load instructions for B[j], we have to load and update the intermediate result of Part 1 with the result obtained in Part 2. Thus, 2(n − e(p+1)) load and n − e(p+1) store instructions are required for that part.

**Part 3.** The third part performs the same operation as described in Part 2, except that the already loaded operands B[j] are kept constant and one word A[i] is loaded for each column.

**Fig. 5. Processing of Part 2 and 3 of the row r1**

Figure 5 shows the processing of Parts 2 and 3 of row r1 (p = 0). For each column, two load instructions are necessary (marked in grey). All other operands have been loaded and cached in previous parts. Operands which are not required for further processing are overwritten by new operands, e.g., B[1]...B[4] in Part 2 of our example.

**Part 4.** The last part calculates the remaining partial products. In contrast to Part 1, no load instructions are required since all operands have already been loaded in Part 3. Hence, only e memory-access operations are needed to store the remaining words of the (intermediate) result c.

Table 1 summarizes the memory-access complexity of the initialization block and the individual parts of a row p.

**Table 1. Memory-access complexity of binit and each part of row p = 0 ... r−1**

| Component | Load Instr. | Store Instr. | Total |
|---|---|---|---|
| binit | 2(n − re) | 2(n − re) | 4(n − re) |
| Part 1 | 2e | e | 3e |
| Part 2 | 2(n − e(p+1)) | n − e(p+1) | 3(n − e(p+1)) |
| Part 3 | 2(n − e(p+1)) | n − e(p+1) | 3(n − e(p+1)) |
| Part 4 | 0 | e | e |

By summing up all load instructions, we get

$$2(n - re) + \sum_{p=0}^{r-1} (4n - 4pe - 2e) = 2n + 4rn - 2er^2 - 2er \le \frac{2n^2}{e}. \quad (1)$$

The total number of store operations can be evaluated by

$$2(n - re) + \sum_{p=0}^{r-1} (2n - 2pe) = 2n + 2rn - er^2 - er \le \frac{n^2}{e} + n. \quad (2)$$
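As a quick sanity check on Eqs. (1)-(2), the following Python snippet (ours, not from the paper) tallies the per-part counts from Table 1 and compares them with the closed forms; note that the 2n²/e and n²/e + n expressions are met with equality whenever e divides n.

```python
# Numeric check of the load/store totals in Eqs. (1)-(2), using the
# per-component counts from Table 1. Purely illustrative bookkeeping.

def access_counts(n, e):
    r = n // e
    loads = stores = 2 * (n - r * e)            # b_init (skipped if e | n)
    for p in range(r):
        loads += 2 * e                          # Part 1
        stores += e
        loads += 2 * (n - e * (p + 1))          # Part 2
        stores += n - e * (p + 1)
        loads += 2 * (n - e * (p + 1))          # Part 3
        stores += n - e * (p + 1)
        stores += e                             # Part 4 (stores only)
    return loads, stores

for n, e in [(20, 10), (20, 7), (24, 8), (32, 5)]:
    r = n // e
    loads, stores = access_counts(n, e)
    assert loads == 2*n + 4*r*n - 2*e*r*r - 2*e*r    # Eq. (1), closed form
    assert stores == 2*n + 2*r*n - e*r*r - e*r       # Eq. (2), closed form
    if n % e == 0:                                   # bounds are tight here
        assert loads == 2 * n * n // e and stores == n * n // e + n
```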
Table 2 lists the complexity of different multi-precision multiplication techniques. It shows that the hybrid method needs 2⌈n²/d⌉ load instructions, whereas the operand-caching technique needs about 2n²/e. Since the total number of available registers f equals 2e + 3 for the operand-caching technique (2e registers for the operands and three registers for the accumulator) and 3d + 2 for the hybrid method (d + 1 registers for the operands and 2d + 1 registers for the accumulator), we obtain

$$2e + 3 = 3d + 2 \;\Rightarrow\; e = \frac{3d - 1}{2} \quad \text{and} \quad e > d. \quad (3)$$

If we compare the total number of memory-access instructions for the hybrid and the operand-caching method and express both runtimes using f, we get

$$2\left\lceil \frac{3n^2}{f-2} \right\rceil + 2n > \frac{6n^2}{f-3} + n. \quad (4)$$

Note that there are more parameters to consider. The number of additions of the operand-caching method is 3n², and the number of additions of the hybrid method is n²(2 + d/2) (upper bound). Also, the pseudocode of Gura et al. [6] for the hybrid multiplication method is inefficient in the special case of n mod d ≠ 0.

**Table 2. Memory-access complexity of different multiplication techniques**

| Method | Load Instructions | Store Instructions | Memory Instructions |
|---|---|---|---|
| Operand Scanning | 2n² + n | n² + n | 3n² + 2n |
| Product Scanning [4] | 2n² | 2n | 2n² + 2n |
| Hybrid [6] | 2⌈n²/d⌉ | 2n | 2⌈n²/d⌉ + 2n |
| **Operand Caching** | **2n²/e** | **n²/e + n** | **3n²/e + n** |

## 4 Results

We used the 8-bit ATmega128 microcontroller for evaluating the new multiplication technique. The ATmega128 is part of the megaAVR family from Atmel [1]. It has been widely used in embedded systems, automotive environments, and sensor-node applications. The ATmega128 is based on a RISC architecture and provides 133 instructions [2]. The maximum operating frequency is 16 MHz. The device features 128 kB of flash memory and 4 kB of internal SRAM. There exist 32 8-bit general-purpose registers (R0 to R31). Three 16-bit registers can be used for memory addressing, i.e., R26:R27, R28:R29, and R30:R31, which are denoted as X, Y, and Z. Note that the processor also allows pre-decrement and post-increment functionalities that can be used for efficient addressing of operands. The ATmega128 further provides a hardware multiplier that performs an 8 × 8-bit multiplication within two clock cycles. The 16-bit result is stored in the registers R0 (lower word) and R1 (higher word).

We used register R25 to store a zero value. Furthermore, we reserved R22, R23, and R24 as accumulator registers. Thus, 20 registers, i.e., R2...R21, can be used to store and cache the words of the operands (e = 10 registers for each operand a and b). All implementations have been done using a self-written code generator that allows the generation of (looped and unrolled) assembly code.

In order to demonstrate the performance of our method, we implemented all multiplication techniques described in Section 3. For comparison reasons, we decided to implement a 160 × 160-bit multiplication, as has been done by most of the related work. Note that for RSA and ECC, larger integer sizes are recommended in practice [10,13].
The Standards for Efficient Cryptography (SEC) have already removed the recommended secp160r1 elliptic curve from their standard since SEC version 2 of 2010 [3].

Table 3 summarizes the instruction counts for the operand-scanning, product-scanning, hybrid, and operand-caching implementations. The operand-scanning and product-scanning methods have been implemented without using all the available registers (as they usually would be implemented). For the hybrid multiplication, we applied d = 4 because it allows a better optimization regarding the necessary addition operations compared to a multiplication with d = 5. The carry-propagation problem has been solved by implementing a similar approach to that proposed by Liu et al. [12]. Thus, 200 MOVW instructions have been necessary to handle the carry propagation accordingly. For a fair comparison, all methods have been optimized for speed and provide unrolled instruction sequences. Furthermore, we implemented all accumulators as ring buffers to reduce the necessary MOV instructions. After each partial-product generation, the indices of the accumulator registers are shifted so that no MOV instructions are necessary to copy the carry.

**Table 3. Unrolled instruction counts for a 160-bit multiplication on the ATmega128**

| Method | LD | ST | MUL | ADD | MOVW | Others | Clock Cycles |
|---|---|---|---|---|---|---|---|
| Operand Scanning | 820 | 440 | 400 | 1,600 | 2 | 464 | 5,427 |
| Product Scanning | 800 | 40 | 400 | 1,200 | 2 | 159 | 3,957 |
| Hybrid (d=4) | 200 | 40 | 400 | 1,250 | 202 | 109 | 2,904 |
| **Operand Caching (e=10)** | **80** | **60** | **400** | **1,240** | **2** | **68** | **2,395** |

The best results have been obtained for the operand-caching technique. By trading an additional 20 store instructions, up to 120 load instructions could be saved compared with the best reference values (hybrid implementation). Note that load, store, and multiply instructions on the ATmega128 are more expensive than other instructions since they require two clock cycles instead of only one. For the operand-caching multiplication, almost the same number of load and store instructions is required. In total, 2,395 clock cycles are needed to perform the multiplication. Compared to the hybrid implementation, a speed improvement of about 18% could be achieved.

We also compared the performance of the implemented multi-precision methods for different integer sizes. Table 4 shows the results for integer sizes from 160 up to 2,048 bits.³ The operand-caching technique provides the best performance for any integer size. It is therefore well suited for large integer sizes, as is the case for RSA. On average, a speed improvement of about 15% could be achieved compared to the hybrid method. Figure 6 shows the corresponding performance chart on a double-logarithmic scale.

**Table 4. Comparison of multiplication methods for different integer sizes (clock cycles)**

| Size [bit] | Op. Scan. | Prod. Scan. | Hybrid Method | Operand Caching |
|---|---|---|---|---|
| 160 | 5,427 | 3,957 | 2,904 | 2,395 |
| 192 | 7,759 | 5,613 | 4,144 | 3,469 |
| 256 | 13,671 | 9,789 | 7,284 | 6,123 |
| 512 | 53,959 | 38,013 | 28,644 | 24,317 |
| 1,024 | 214,407 | 149,757 | 113,604 | 96,933 |
| 2,048 | 854,791 | 594,429 | 452,484 | 387,195 |

**Fig. 6. Comparison chart**

**Table 5. Performance of operand-caching multiplication for different integer sizes and available registers (clock cycles)**

| Size [bit] | e=2 | e=4 | e=8 | e=10 | e=20 |
|---|---|---|---|---|---|
| 160 | 3,915 | 2,965 | 2,513 | 2,395 | 2,205 |
| 192 | 5,611 | 4,255 | 3,577 | 3,469 | 3,207 |
| 256 | 9,915 | 7,531 | 6,339 | 6,123 | 5,671 |
| 512 | 39,291 | 29,915 | 25,227 | 24,317 | 22,451 |
| 1,024 | 156,411 | 119,227 | 100,635 | 96,933 | 89,529 |
| 2,048 | 624,123 | 476,027 | 401,979 | 387,195 | 357,581 |

**Fig. 7. Performance chart**

³ Note that due to a fully unrolled implementation, such large integer multiplications might be impractical due to the huge amount of code.
**Table 6. Comparison with related work (160-bit multiplication on the ATmega128)**

| Method | LD | ST | MUL | ADD | MOVW | Others | Clock Cycles |
|---|---|---|---|---|---|---|---|
| *Hybrid* | | | | | | | |
| Gura et al. [6] (d=5) | 167 | 40 | 400 | 1,360 | 355 | 197 | 3,106 |
| Uhsadel et al. [17] (d=5) | 238 | 40 | 400 | 986 | 355 | 184 | 2,881 |
| Scott et al. [14] (d=4)ᵃ | 200 | 40 | 400 | 1,263 | 70 | 38 | 2,651 |
| Liu et al. [12] (d=4) | 200 | 40 | 400 | 1,194 | 212 | 179 | 2,865 |
| *Operand Caching* | | | | | | | |
| **with looping**ᵃᶜ (e=9) | **92** | **66** | **400** | **1,252** | **41** | **276** | **2,685** |
| **unrolled**ᵇᶜ (e=10) | **80** | **60** | **400** | **1,240** | **2** | **68** | **2,395** |

ᵃ binit, Part 1, and Part 4 unrolled; Part 2 and Part 3 looped.
ᵇ Fully unrolled implementation without the overhead of loop instructions.
ᶜ Without PUSH/POP/CALL/RET.

Table 5 and Figure 7 show the performance for different integer sizes in relation to the parameter e. The parameter e is defined by the number of available registers to store the words of one operand, i.e., e = (f − 3)/2, where f = 2e + 3 denotes the total number of available registers (including the triple-size register for the accumulator). It shows that for e > 10, no significant improvement in speed is obtained. The performance decreases for smaller e and larger integer sizes. However, if we compare our solution (160-bit multiplication with the smallest parameter e = 2, i.e., f = 7 registers) with the product-scanning method (needing f = 5 registers), we obtain 3,915 clock cycles for the operand-caching method and 3,957 clock cycles for the product-scanning method. It therefore provides good performance even for a smaller set of available registers. For the special case e = 20, where all 20 words of one 160-bit operand can be maintained in registers (the ideal case for product scanning), the number of clock cycles reaches nearly the optimum of 2,160 clock cycles, i.e., 4n = 80 memory-access instructions, n² = 400 multiplications, and 3n² = 1,200 additions.

We compare our result with related work in Table 6. For a fair comparison, we also implemented an operand-caching version that does not unroll the algorithm but includes additional loop instructions. It shows that the operand-caching method provides the best performance. Compared to Gura et al. [6], 23% fewer clock cycles are needed for a 160-bit multiplication. A 10% improvement could be achieved compared to the best solution reported in the literature [14]. Note that most of the related work needs between 167 and 238 load instructions, which mostly explains the higher number of needed clock cycles.

## 5 Conclusions

We presented a novel multiplication technique for embedded microprocessors. The multiplication method reduces the number of necessary load instructions through sophisticated caching of operands. Our solution follows the product-scanning approach but divides the processing into several parts. This allows the scanning of sub-products where most of the operands are kept within the register set throughout the algorithm. In order to evaluate our solution, we implemented several multiplication techniques using different integer sizes on the ATmega128 microcontroller. Using operand-caching multiplication, we require 2,395 clock cycles for a 160-bit multiplication. This result improves the best reported solution by a factor of 10% [14]. Compared to the hybrid multiplication of Gura et al. [6], we achieved a speedup of 23%. Our evaluation further showed that our solution scales very well for the different integer sizes used for ECC and RSA.
We obtained an improvement of about 15% for bit sizes between 256 and 2,048 bits compared to a reference implementation of the hybrid multiplication. It is also worth noting that our multiplication method is perfectly suitable for processors that support multiply-accumulate (MULACC) instructions, such as ARM or the dsPIC family of microcontrollers. It also fully complies with architectures which support instruction-set extensions for MULACC operations, such as those proposed by Großschädl and Savaş [5].

**Acknowledgements.** The work has been supported by the European Commission through the ICT program under contract ICT-2007-216646 (European Network of Excellence in Cryptology - ECRYPT II) and under contract ICT-SEC-2009-5-258754 (Tamper Resistant Sensor Node - TAMPRES).

## References

1. Atmel Corporation: 8-bit AVR Microcontroller with 128K Bytes In-System Programmable Flash (August 2007), http://www.atmel.com/dyn/resources/prod_documents/doc2467.pdf
2. Atmel Corporation: 8-bit AVR Instruction Set (May 2008), http://www.atmel.com/dyn/resources/prod_documents/doc0856.pdf
3. Certicom Research: Standards for Efficient Cryptography, SEC 2: Recommended Elliptic Curve Domain Parameters, Version 2.0 (January 2010), http://www.secg.org/
4. Comba, P.: Exponentiation cryptosystems on the IBM PC. IBM Systems Journal 29(4), 526–538 (1990)
5. Großschädl, J., Savaş, E.: Instruction Set Extensions for Fast Arithmetic in Finite Fields GF(p) and GF(2^m). In: Joye, M., Quisquater, J.-J. (eds.) CHES 2004. LNCS, vol. 3156, pp. 133–147. Springer, Heidelberg (2004)
6. Gura, N., Patel, A., Wander, A., Eberle, H., Shantz, S.C.: Comparing Elliptic Curve Cryptography and RSA on 8-bit CPUs. In: Joye, M., Quisquater, J.-J. (eds.) CHES 2004. LNCS, vol. 3156, pp. 119–132. Springer, Heidelberg (2004)
7. Kargl, A., Pyka, S., Seuschek, H.: Fast Arithmetic on ATmega128 for Elliptic Curve Cryptography. Cryptology ePrint Archive, Report 2008/442 (October 2008), http://eprint.iacr.org/
8. Koç, Ç.K.: High Speed RSA Implementation. Technical report, RSA Laboratories, RSA Data Security, Inc., Redwood City (1994)
9. Lederer, C., Mader, R., Koschuch, M., Großschädl, J., Szekely, A., Tillich, S.: Energy-Efficient Implementation of ECDH Key Exchange for Wireless Sensor Networks. In: Markowitch, O., Bilas, A., Hoepman, J.-H., Mitchell, C.J., Quisquater, J.-J. (eds.) Information Security Theory and Practice. LNCS, vol. 5746, pp. 112–127. Springer, Heidelberg (2009)
10. Lenstra, A., Verheul, E.: Selecting Cryptographic Key Sizes. Journal of Cryptology 14(4), 255–293 (2001)
11. Liu, A., Ning, P.: TinyECC: A Configurable Library for Elliptic Curve Cryptography in Wireless Sensor Networks. In: International Conference on Information Processing in Sensor Networks - IPSN 2008, St. Louis, Missouri, USA, April 22-24, pp. 245–256 (2008)
12. Liu, Z., Großschädl, J., Kizhvatov, I.: Efficient and Side-Channel Resistant RSA Implementation for 8-bit AVR Microcontrollers. In: Workshop on the Security of the Internet of Things - SOCIOT 2010, Tokyo, Japan, November 29. IEEE Computer Society, Los Alamitos (2010)
13. National Institute of Standards and Technology (NIST): SP800-57 Part 1: DRAFT Recommendation for Key Management: Part 1: General (May 2011), http://csrc.nist.gov/publications/drafts/800-57/Draft_SP800-57-Part1-Rev3_May2011.pdf
14. Scott, M., Szczechowiak, P.: Optimizing Multiprecision Multiplication for Public Key Cryptography. Cryptology ePrint Archive, Report 2007/299 (2007), http://eprint.iacr.org/
15. Szczechowiak, P., Oliveira, L.B., Scott, M., Collier, M., Dahab, R.: NanoECC: Testing the Limits of Elliptic Curve Cryptography in Sensor Networks. In: Verdone, R. (ed.) EWSN 2008. LNCS, vol. 4913, pp. 305–320. Springer, Heidelberg (2008)
16. Ugus, O., Hessler, A., Westhoff, D.: Performance of Additive Homomorphic EC-ElGamal Encryption for TinyPEDS. In: GI/ITG KuVS Fachgespräch Drahtlose Sensornetze, RWTH Aachen, UbiSec 2007 (July 2007)
17. Uhsadel, L., Poschmann, A., Paar, C.: Enabling Full-Size Public-Key Algorithms on 8-bit Sensor Nodes. In: 4th European Workshop on Security and Privacy in Ad-hoc and Sensor Networks, ESAS 2007, Cambridge, UK, July 2-3 (2007)
18. Wang, H., Li, Q.: Efficient Implementation of Public Key Cryptosystems on Mote Sensors (Short Paper). In: Ning, P., Qing, S., Li, N. (eds.) ICICS 2006. LNCS, vol. 4307, pp. 519–528. Springer, Heidelberg (2006)

## A Algorithm for Operand-Caching Multiplication

The following pseudocode shows the algorithm for multi-precision multiplication using the operand-caching method. Variables that are located in data memory are denoted by Mx, where x represents the name of the integer a or b. The parameter e describes the number of locally usable registers RA[e−1, ..., 0] and RB[e−1, ..., 0]. The triple-word accumulator is denoted by ACC = (ACC2, ACC1, ACC0).

**Require:** word size n, parameter e, n ≥ e, n-word integers a, b, and a 2n-word result c.
**Ensure:** c = ab.

    r ← ⌊n/e⌋
    ACC ← 0
    {b_init}
    RA[e−1, ..., 0] ← MA[n−1, ..., re]
    RB[e−1, ..., 0] ← MB[n−re−1, ..., 0]
    for i = 0 to n−re−1 do
        for j = 0 to i do
            ACC ← ACC + RA[j] ∗ RB[i−j]
        end for
        MC[re+i] ← ACC0
        (ACC1, ACC0) ← (ACC2, ACC1);  ACC2 ← 0
    end for
    for i = 0 to n−re−2 do
        for j = i+1 to n−re−1 do
            ACC ← ACC + RA[j] ∗ RB[n−re−j+i]
        end for
        MC[n+i] ← ACC0
        (ACC1, ACC0) ← (ACC2, ACC1);  ACC2 ← 0
    end for
    MC[2n−re−1] ← ACC0
    ACC0 ← 0
    for p = r−1 to 0 do                  {Row Loop}
        RA[e−1, ..., 0] ← MA[(p+1)e−1, ..., pe]
        RB[e−1, ..., 0] ← MB[e−1, ..., 0]
        for i = 0 to e−1 do              {Part 1}
            for j = 0 to i do
                ACC ← ACC + RA[j] ∗ RB[i−j]
            end for
            MC[pe+i] ← ACC0
            (ACC1, ACC0) ← (ACC2, ACC1);  ACC2 ← 0
        end for
        for i = 0 to n−(p+1)e−1 do       {Part 2}
            RB[e−1, ..., 0] ← MB[e+i], RB[e−1, ..., 1]
            for j = 0 to e−1 do
                ACC ← ACC + RA[j] ∗ RB[e−1−j]
            end for
            ACC ← ACC + MC[(p+1)e+i]
            MC[(p+1)e+i] ← ACC0
            (ACC1, ACC0) ← (ACC2, ACC1);  ACC2 ← 0
        end for
        for i = 0 to n−(p+1)e−1 do       {Part 3}
            RA[e−1, ..., 0] ← MA[(p+1)e+i], RA[e−1, ..., 1]
            for j = 0 to e−1 do
                ACC ← ACC + RA[j] ∗ RB[e−1−j]
            end for
            ACC ← ACC + MC[n+i]
            MC[n+i] ← ACC0
            (ACC1, ACC0) ← (ACC2, ACC1);  ACC2 ← 0
        end for
        for i = 0 to e−2 do              {Part 4}
            for j = i+1 to e−1 do
                ACC ← ACC + RA[j] ∗ RB[e−j+i]
            end for
            MC[2n−(p+1)e+i] ← ACC0
            (ACC1, ACC0) ← (ACC2, ACC1);  ACC2 ← 0
        end for
        MC[2n−1−pe] ← ACC0
        ACC0 ← 0
    end for
    Return c.

## B Example: 160-Bit Operand-Caching Multiplication

**Fig. 8. Operand-caching multiplication for n = 20 and e = 7**
{ "disclaimer": "Notice: The following paper fields have been elided by the publisher: {'abstract', 'references'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1007/s00145-020-09351-2?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1007/s00145-020-09351-2, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "BRONZE", "url": "https://link.springer.com/content/pdf/10.1007/978-3-642-23951-9_30.pdf" }
2011
[ "JournalArticle" ]
true
2011-09-28T00:00:00
[]
12,244
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Engineering", "source": "s2-fos-model" }, { "category": "Environmental Science", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/0266f264f35f3c0049ffe3b517170d081a1824e1
[ "Computer Science" ]
0.865562
Leveraging User-Diversity in Energy-Efficient Edge-Facilitated Collaborative Fog Computing
0266f264f35f3c0049ffe3b517170d081a1824e1
IEEE Access
[ { "authorId": "2055852524", "name": "Antoine Paris" }, { "authorId": "1423728195", "name": "Hamed Mirghasemi" }, { "authorId": "1798136", "name": "I. Stupia" }, { "authorId": "1698047", "name": "L. Vandendorpe" } ]
{ "alternate_issns": null, "alternate_names": null, "alternate_urls": [ "http://ieeexplore.ieee.org/servlet/opac?punumber=6287639" ], "id": "2633f5b2-c15c-49fe-80f5-07523e770c26", "issn": "2169-3536", "name": "IEEE Access", "type": "journal", "url": "http://www.ieee.org/publications_standards/publications/ieee_access.html" }
With the increasing number of heterogeneous resource-constrained devices populating the current wireless ecosystem, enabling ubiquitous computing at the edge of the network requires moving part of the computing burden back to the edge to reduce user-side latency and relieve the backhaul network. Motivated by this challenge, this work investigates edge-facilitated collaborative fog computing to augment the computing capabilities of individual devices while optimizing for energy-efficiency. Collaborative-computing is modeled using the Map-Reduce framework, consisting in two computing rounds and a communication round. The computing load is optimally distributed among devices, taking into account their diversity in terms of computing and communication capabilities. Devices local parameters such as CPU frequency and RF transmit power are also optimized for energy-efficiency. The corresponding optimization problem is shown to be convex and optimality conditions are obtained through Lagrange duality theory. A waterfilling-like interpretation for the size of the computing load assigned to each device is given. Numerical experiments demonstrate the benefits of the proposed collaborative-computing scheme over various other schemes in several respects. Most notably, the proposed scheme exhibits increased probability of successfully dealing with more demanding computations in time, along with significant energy-efficiency gains. Both improvements come from the scheme ability to advantageously leverage devices diversity.
Received June 11, 2021, accepted July 1, 2021, date of publication July 5, 2021, date of current version July 13, 2021. _Digital Object Identifier 10.1109/ACCESS.2021.3094888_ # Leveraging User-Diversity in Energy-Efficient Edge-Facilitated Collaborative Fog Computing ANTOINE PARIS, (Member, IEEE), HAMED MIRGHASEMI, (Member, IEEE), IVAN STUPIA, (Member, IEEE), AND LUC VANDENDORPE, (Fellow, IEEE) Institute for Information and Communication Technologies, Electronics, and Applied Mathematics (ICTEAM), Catholic University of Louvain (UCLouvain), 1348 Louvain-la-Neuve, Belgium Department of Electrical Engineering (ELEN), Catholic University of Louvain (UCLouvain), 1348 Louvain-la-Neuve, Belgium Communication System Group (CoSy), Catholic University of Louvain (UCLouvain), 1348 Louvain-la-Neuve, Belgium Corresponding author: Antoine Paris (antoine.paris@uclouvain.be) This work was supported by the Fonds de la Recherche Scientifique (F.R.S.-FNRS) through the Excellence Of Science (EOS) Program (MUlti-SErvice WIreless NETwork) under Project 30452698. The work of Antoine Paris was supported by the F.R.S.-FNRS. **ABSTRACT With the increasing number of heterogeneous resource-constrained devices populating the** current wireless ecosystem, enabling ubiquitous computing at the edge of the network requires moving part of the computing burden back to the edge to reduce user-side latency and relieve the backhaul network. Motivated by this challenge, this work investigates edge-facilitated collaborative fog computing to augment the computing capabilities of individual devices while optimizing for energy-efficiency. Collaborative-computing is modeled using the Map-Reduce framework, consisting in two computing rounds and a communication round. The computing load is optimally distributed among devices, taking into account their diversity in terms of computing and communication capabilities. Devices local parameters such as CPU frequency and RF transmit power are also optimized for energy-efficiency. The corresponding optimization problem is shown to be convex and optimality conditions are obtained through Lagrange duality theory. A waterfilling-like interpretation for the size of the computing load assigned to each device is given. Numerical experiments demonstrate the benefits of the proposed collaborative-computing scheme over various other schemes in several respects. Most notably, the proposed scheme exhibits increased probability of successfully dealing with more demanding computations in time, along with significant energy-efficiency gains. Both improvements come from the scheme ability to advantageously leverage devices diversity. **INDEX TERMS Wireless collaborative computing, map-reduce, energy-efficiency, joint computation and** communications optimization, fog computing. **I. INTRODUCTION** The current trends in communication and networking suggest that the future wireless ecosystem will be populated by a massive number of heterogeneous devices (i.e., in terms of computing and communication capabilities): from relatively powerful smartphones and laptops to ultra-low-power sensors, actuators and other connected ‘‘things’’ [1]–[3]. At the same time, emerging applications like virtual and augmented reality, context-aware computing, autonomous driving, Internet of Things (IoT) and so forth, require more and more computing capabilities while aiming for smaller and smaller latency [4]. 
All in all, recent years have seen the focus moving from communications as an objective per se to communications as a way to enhance the computing capabilities of energy-limited devices [5]–[7]. This paradigm shift started with Mobile Cloud Computing (MCC) [8], [9] first, and with Multi-access Edge Computing (MEC) [10]–[13] later on. While MCC proved effective at enabling ubiquitous computing on resource-constrained devices while prolonging their battery life, MEC has the advantage of offering smaller computing latencies while reducing the pressure on the backhaul network. This makes MEC both more suitable for the ultra-low-latency applications emerging from the recent 5G developments and more able to cope with the ever-growing number of connected devices and their ever-growing computing demand. Compared to MCC, the inherent spatial distribution of MEC also has the advantage of offering some level of decentralization.

In contexts where MCC latency is unacceptable and no MEC servers are nearby, or when the use of third-party-owned MCC/MEC is deliberately ruled out for privacy reasons, fog computing offers an even more decentralized alternative. Fog computing is formally defined in [14] as "a huge number of heterogeneous wireless devices that **communicate and potentially cooperate among them and with the network to perform processing tasks without the intervention of third parties**". Those distributed computing resources can also be exploited to enhance the computing capabilities of individual devices. Like MEC, fog computing benefits from reduced user-side latency and reduces the pressure on the backhaul network: two features recognized as key enablers for ubiquitous artificial intelligence (AI) at the edge of the network [15]. Achieving this, however, requires moving part of the computing burden from the powerful server side to the resource-constrained user side. Accommodating relatively complex processing tasks on such limited devices requires (i) enabling device collaboration to pool computing capabilities and (ii) taking care of device resource management, in terms of both computing and communication resources. It is also worth noting that fog computing can be integrated in a multi-tier architecture along with MEC and MCC [16]. In this paper, we jointly optimize for energy-efficiency the computation and communication resources of a set of heterogeneous and resource-constrained mobile devices taking part in collaborative fog computing.

_A. APPLICATION SCENARIO_

As already mentioned, enabling computationally demanding intelligent mobile systems at the edge of the network requires offloading part of the computing burden to mobile devices to reduce user-side latency and relieve the backhaul network [15]. A recent comprehensive survey on "Deep Learning in Mobile and Wireless Networking" [17] discusses fog computing to support those two objectives in the context of machine learning (ML) or deep learning (DL) inference, two key components of ubiquitous AI. ML/DL models, however, often contain tens to hundreds of millions of parameters. As such, it might be prohibitive or even impossible for a single mobile device limited in computing, memory, and battery capacity to process (or even simply store) the full ML/DL model needed to perform the inference with a reasonable latency [18].
Though it is possible to reduce the size of an ML/DL model using various model compression techniques such as pruning, weight quantization, and so forth, this always comes at the cost of accuracy [19]. Enabling device collaboration to augment individual processing capabilities is another solution, one that does not sacrifice accuracy [18]. Combined with proper resource management and allocation to preserve devices' battery life, this collaborative inference approach should allow the processing of reasonably large ML/DL models. It is worth noting that this kind of distributed/collaborative inference is envisioned as a key enabler of ubiquitous AI in future 6G networks [20]. This example application scenario, also known as "model-split" inference [21] and illustrated and detailed in Fig. 1, thus consists in distributing an ML/DL model pre-trained off-device in the cloud to perform collaborative on-device inferences on local input data later on.

_B. RELATED WORKS AND MOTIVATIONS_

This paper extends our previous work [22], primarily with more realistic energy consumption models, both for devices' local computations and for communications between devices. Although this change might appear subtle at first, it more than doubles the number of degrees of freedom in the collaborative-computing scheme, making it both more challenging to optimize and more interesting. While the optimization problem in [22] was relatively easy to solve and even admitted a semi-closed-form solution, it offered only limited insight into the structure of the optimal solution. In contrast, the optimization problem in this work is more challenging and does not admit a closed-form solution. Yet, we are now able to offer a waterfilling-like interpretation of the optimal computing load assigned to each device, thus gaining insights on the structure of the optimal solution that we were not able to offer in our previous work. Naturally, the additional degrees of freedom also allow improving the performance of the collaborative-computing scheme (see the numerical experiments in Sec. V for a comparison).

The system model used in this paper is inspired by previous works on wireless distributed computing (WDC) [23]–[28]. Most notably, we also use the Map-Reduce distributed computing framework [29] with an access point (AP) or base station (BS) facilitating the communications between devices (i.e., communications are edge-facilitated). While Map-Reduce was originally introduced by Google as a programming model for processing very large data sets in parallel on several hundreds or thousands of machines in a reasonable amount of time, the framework quickly attracted a wider variety of applications, e.g., machine learning, physical simulations, digital media processing tasks, etc. [30]. Around the same time, and with the advent of mobile devices, Map-Reduce also started to be considered a viable framework to distribute computing tasks in mobile systems [31], [32]. Together, these evolutions of the Map-Reduce framework's fields of application make it a very good choice for the collaborative ML/DL inference application scenario discussed in the last section and, more generally, for the development of ubiquitous AI in future 6G networks [15], [20]. The vast majority of these works, however, study WDC from a network-coding theory point of view, focusing on coded distributed computing (CDC) and discussing the trade-offs between the computation and communication loads incurred by the collaboration.
In short, the conclusion is that increasing the devices' computing load makes it possible to leverage network-coding opportunities during the exchange of intermediate computation results, hence reducing the communication load. Considering the inherent energy-limited nature of mobile devices as the main bottleneck to WDC, the focus in this work is shifted towards the allocation of computing and communication resources and the optimization of the collaboration to minimize the devices' energy consumption. The two approaches are, however, not mutually exclusive and could be combined in future collaborative-computing schemes to further improve the global energy-efficiency of the system.

**FIGURE 1. Collaborative fog computing ML/DL inference scenario (adapted from the edge-based app-level mobile data analysis approach illustrated in [17]). The ML/DL model is first trained off-device in the cloud using offline datasets. The ML/DL model weights, noted w, are then offloaded to the edge of the network for future on-device inference. Each device n ∈ [N] wants to perform some inference φ(dn, w) on its local input data dn using the model weights w. However, this operation might be prohibitive or even impossible to carry out on a single mobile device due to memory, computing-capability, or battery limitations. Distributing the ML/DL model weights w across multiple devices to enable collaborative inference is envisioned as a potential solution to this issue [18]. It is also worth noting again that this fog computing inference scenario significantly reduces backhaul network traffic and user-side latency compared to the cloud-based scenario in which inferences are performed in the cloud [17].**

Most existing works on WDC consider the set of collaborating devices to be homogeneous in terms of computing and communication capabilities (with recent notable exceptions – still focused on CDC – like [24] and [28]). Under these conditions, the computing load is thus uniformly distributed across all devices. Motivated by current trends in the wireless ecosystem, we here consider the set of devices to be heterogeneous instead. As a consequence, it might no longer be optimal to uniformly distribute the computing load (e.g., the ML/DL model weights, see Fig. 1) across mobile devices. To allow our collaborative-computing scheme to take into account – and leverage – device diversity, our model thus allows an arbitrary partition of the computing load. Compared to previous works focused on CDC, we also make the latency constraint accompanying the computing task an explicit constraint.

Various other cooperative-computing schemes were also studied in the literature; see, e.g., [33]–[38]. Reference [33] discusses cooperative computing and cooperative communications in the context of MEC systems, wherein a user can partially (or totally) offload a computing task to both a MEC server and a so-called helper device that can then (i) perform some local computations for the user device (i.e., cooperative computing), (ii) further offload part or all of the task to the MEC server (i.e., cooperative communication), or (iii) both. The system model and problem formulation used in this work also owe a lot to [33], especially with regard to the devices' energy consumption models. Reference [34] also devises an energy-efficient cooperative-computing scheme in which a mobile device can partially or totally offload a computing task to a surrounding idle device acting as a helper.
In the context of Mobile Wireless Sensor Networks (MWSNs), [35] augments this framework by optimally selecting the helper device among a set of $N$ surrounding devices. In [36], a wireless-powered cooperative-computing scheme wherein a user device can offload computations to $N$ helper devices is described. In [37], the authors describe the Mobile Device Cloud (MDC), i.e., a framework in which power balancing is performed among a cluster of mobile devices, and empirically optimize the collaboration to maximize the lifetime of the set of devices. Finally, in [38], an energy-efficient and incentive-aware network-assisted (i.e., coordinated by the edge of the network) device-to-device (D2D) collaboration framework is presented.

_C. OBJECTIVE AND CONTRIBUTIONS_
Given the heterogeneous nature of mobile devices and their limitations in terms of memory, computing and communication capabilities and battery life, this work aims to provide insights into the following question: **how to distribute a computing load (e.g., a ML/DL model) across a heterogeneous set of resource-constrained wireless devices to complete a given set of computing tasks (e.g., ML/DL inferences) in the most energy-efficient way, while satisfying a given deadline?**

More precisely, the contributions of this paper can be summarized as follows:
- we propose an $N$-devices edge-facilitated collaborative fog computing scheme based on the Map-Reduce distributed computing framework [29] and formulate a joint computation and communication resources optimization problem with energy-efficiency as the objective;
- we gain engineering insights into the structure of the optimal solution by leveraging Lagrange duality theory and offer a waterfilling-like interpretation for the size of the computing load assigned to each device;
- through numerical experiments, we compare the performance of the proposed scheme with various other schemes using fewer degrees of freedom in the optimization (such as the one proposed in [22]) to analyze the relative benefits of each set of variables being optimized and to show that the proposed scheme advantageously exploits device diversity.

_D. ORGANIZATION OF THE PAPER_
Section II starts by describing in detail the collaborative computing model and the energy and time consumption models for both local computation and edge-facilitated communications. Next, Sec. III formulates the joint computation and communication resources allocation problem and analyzes its feasibility. Section IV then reformulates the problem, proves its convexity in this new formulation and leverages Lagrange duality theory to gain insights into the optimal solution of the problem. Section V benchmarks the performance of the optimal collaborative-computing scheme against various other schemes through numerical experiments. Finally, Sec. VI discusses the results obtained in this work, their limits, and opportunities for future research.

**II. SYSTEM MODEL**
As already illustrated in Fig. 1, we consider a heterogeneous set of $N$ wireless devices indexed by $n \in [N]$ sharing a common AP or BS. Each device $n$ wants to perform a given computing task $\phi(d_n, w)$ within a given latency $\tau$, with $d_n$ some $D$-bit local input data to device $n$ and $w$ some $L$-bit data common to all $N$ devices. As detailed in Sec. I-A and in Fig. 1, $w$ could be a ML/DL model that was pre-trained in the cloud, while $\phi(d_n, w)$ could represent an inference performed using this model $w$ on an input $d_n$. Motivated by this example application, it is assumed that $L \gg D$.
As a consequence of the large size of $w$, it might be impossible, or prohibitive in terms of energy consumption, for an individual device to complete the computing task $\phi(d_n, w)$ within the deadline $\tau$. Devices thus pool together and collaborate to augment their individual computing capabilities. The collaborative computing model used in this work, i.e., Map-Reduce [29], is described in Sec. II-A. Next, because we are here concerned with optimizing the energy-efficiency of the collaboration, Secs. II-B and II-C describe the models used to quantify the time needed and the energy consumed by the different phases of the collaboration.

The AP/BS is responsible for coordinating and optimizing the collaboration. This makes our collaborative-computing scheme edge-facilitated (or network-assisted) and fits the fog computing definition given in the introduction. To allow for offline optimization of the collaborative-computing scheme, we assume that the AP/BS has perfect non-causal knowledge of the uplink channels (i.e., devices to AP/BS) during the communication phase, and perfect knowledge of the computing and communication capabilities of all devices. Although unrealistic with regard to the channels, this simplifying assumption allows us to provide a first performance evaluation of the proposed collaborative fog computing scheme in a best-case scenario.

_A. COLLABORATIVE COMPUTING MODEL_
The computing tasks $\{\phi(d_n, w)\}_{n=1}^N$ (e.g., ML/DL inferences) are shared between the $N$ devices according to the Map-Reduce framework [29]. First, we assume that the $L$-bit data $w$ (e.g., the ML/DL model weights) can be arbitrarily partitioned into $N$ smaller $l_n$-bit data $w_n$ (one for each device $n$), with $l_n \in \mathbb{R}_{\ge 0}$¹ and
$$\sum_{n=1}^{N} l_n = L. \qquad (1)$$
As opposed to previous works focusing on CDC [23]–[28], we are not assuming any redundancy in the computing loads $\{w_n\}_{n=1}^N$ assigned to the devices, that is, $w_i \cap w_j = \emptyset$ for all $i \neq j$.² Also, the sizes $\{l_n\}_{n=1}^N$ of the assigned computing loads $\{w_n\}_{n=1}^N$ are optimized for energy-efficiency, taking into account the diversity of the devices, instead of being uniform (e.g., $l_n = L/N$ or a multiple thereof for all $n$) and fixed ahead of time.

Assuming relatively large downlink rates, and because the focus is on the energy consumption of the mobile devices (rather than the energy consumption of the AP), we neglect the time and energy needed to transmit $w_n$ from the AP to device $n$, for all $n \in [N]$. To make collaborative computing possible, we also assume that the $D$-bit local input data $\{d_n\}_{n=1}^N$ were shared between the mobile devices through the AP in a prior phase that we neglect in this work, because $D$ is assumed to be relatively small compared to $L$ [23], [24].

¹In practice, $l_n$ should be an integer multiple of the size of the smallest possible division of $w$. In this work, we relax this practical consideration to avoid dealing with integer programming later on. Note that $l_n = 0$ is also possible, in which case device $n$ does not participate in the collaboration.
²In addition to creating network-coding opportunities during the communication phase, thereby decreasing the communication load, redundant computing loads can also provide some level of protection against straggler and faulty devices. Nevertheless, to keep the focus on the fundamental nature of the energy-efficient collaborative-computing problem, we assume (i) no straggler devices, (ii) no faulty devices and (iii) no network coding during the communication phase.

1) MAP
During the first phase of the Map-Reduce framework, namely the Map phase, each device $n$ produces the intermediate computation results (e.g., partial inference results using a subset $w_n$ of the ML/DL model weights $w$)
$$g_n(d_1, w_n),\; g_n(d_2, w_n),\; \ldots,\; g_n(d_N, w_n),$$
where $g_n$ is the Map function executed at device $n$. The size of the intermediate computation result $g_n(d_m, w_n)$ produced by device $n$ for device $m$ is assumed to be proportional to the size $l_n$ of its assigned computing load $w_n$ and is given by $\beta l_n$. Each device $n$ thus computes intermediate computation results for all the other devices, i.e., $g_n(d_m, w_n)$ for all $m \neq n$, and for itself, i.e., $g_n(d_n, w_n)$, using the part $w_n$ of $w$ received from the AP. The Map phase is illustrated in the colored and framed columns of Fig. 2.
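To make the data flow of the Map phase concrete, the following minimal Python sketch tracks the per-device traffic implied by the model above. The function name and the example numbers are ours (illustrative only, not taken from the paper); the bookkeeping itself follows the definitions of $\beta$, $\alpha = (N-1)\beta$ and $\beta L$ given in this section and the next.

```python
import numpy as np

def map_phase_traffic(l, beta):
    """Bookkeeping for the (uncoded) Map phase of the model above.

    l    : array of assigned computing loads l_n in bits (sum(l) == L)
    beta : output/input proportionality factor of the Map functions

    Returns the alpha*l_n bits each device must transmit during the
    Shuffle phase and the beta*L bits each device gathers for Reduce.
    """
    l = np.asarray(l, dtype=float)
    n_devices = l.size
    alpha = (n_devices - 1) * beta               # alpha = (N-1)*beta
    tx_bits = alpha * l                          # Shuffle uplink traffic per device
    rx_bits = beta * l.sum() * np.ones(n_devices)  # every device combines beta*L bits
    return tx_bits, rx_bits

# Illustrative usage with made-up sizes (3 devices, L = 1 Mb total):
tx, rx = map_phase_traffic(l=[4e5, 3e5, 3e5], beta=1e-4)
```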
2) SHUFFLE
Next, devices exchange intermediate computation results with each other in the so-called Shuffle phase. As already mentioned, coded shuffling [23]–[28] is not considered in this work. In this simplified Shuffle phase, each device $n$ thus directly transmits the intermediate computation results $g_n(d_m, w_n)$ to device $m$ via the AP, for all $m \neq n$. Device $n$ thus needs to transmit a total of $(N-1)\beta l_n$ bits of intermediate computation results to the AP. To ease notations in the rest of the paper, we define $\alpha = (N-1)\beta$. The Map phase can thus be seen as a data compression phase, reducing the size of $w_n$ from $l_n$ bits to $\alpha l_n$ bits of intermediate computation results before transmission in the Shuffle phase. The intermediate computation results exchanged during the Shuffle phase are indicated in bold in Fig. 2.

3) REDUCE
Finally, during the Reduce phase, each device $m$ combines a total of $\sum_{n=1}^N \beta l_n = \beta L$ bits of intermediate computation results $\{g_n(d_m, w_n)\}_{n=1}^N$ produced by all the collaborating devices to obtain $\phi(d_m, w)$ as
$$h_m\big(g_1(d_m, w_1),\, g_2(d_m, w_2),\, \ldots,\, g_N(d_m, w_N)\big),$$
where $h_m$ is the Reduce function executed at device $m$. This last operation, which can be thought of as combining all the partial inference results produced by all the devices to obtain the final inference $\phi(d_m, w)$, is illustrated in the colored rows of Fig. 2.

**FIGURE 2.** Illustration of the Map-Reduce collaborative-computing model. The computing tasks $\{\phi(d_n, w)\}_{n=1}^N$ are shared across $N$ devices. During the Map phase, each device $n$ produces intermediate computation results $\{g_n(d_m, w_n)\}_{m=1}^N$ (see framed columns). Next, during the Shuffle phase, the intermediate computation results in bold on the figure are transmitted via the AP to the devices for which they have been computed. The AP is said to facilitate the communications. Finally, during the Reduce phase, each device $m$ combines the intermediate values $\{g_n(d_m, w_n)\}_{n=1}^N$ to obtain $\phi(d_m, w)$ (see colored rows).

We denote by $t_n^{\mathrm{MAP}}$, $t_n^{\mathrm{SHU}}$ and $t_n^{\mathrm{RED}}$ the amount of time needed to perform the Map, Shuffle and Reduce phases, respectively, at device $n$. Because the Map and Shuffle phases must be over at every device before the Reduce phase starts (as all the intermediate computation results need to be available), we have the following constraint
$$t_n^{\mathrm{MAP}} + t_n^{\mathrm{SHU}} \le \tau - \max_n\{t_n^{\mathrm{RED}}\}, \quad n \in [N]. \qquad (2)$$

_B. LOCAL COMPUTING MODEL_
During the Map phase, each device $n$ receives $l_n$ bits to process. The number of CPU cycles needed to process one bit of input data at device $n$ is assumed to be given by a constant $c_n$. As opposed to our previous work [22], devices are now assumed to be able to perform dynamic frequency scaling (DFS), i.e., a device can adjust its CPU frequency on the fly depending on its needs. Then, denoting by $\kappa_n$ the effective capacitance coefficient of device $n$ (which depends on the chip architecture), the energy needed for computation during the Map phase can be modeled as [11], [12], [33]
$$E_n^{\mathrm{MAP}} = \frac{\kappa_n c_n^3 l_n^3}{(t_n^{\mathrm{MAP}})^2}, \quad n \in [N] \qquad (3)$$
with the following constraint
$$c_n l_n \le t_n^{\mathrm{MAP}} f_n^{\max}, \quad n \in [N] \qquad (4)$$
where $f_n^{\max}$ is the maximum CPU frequency of device $n$. Motivated by the fact that $D \ll L$, and to avoid integer variables in our optimization problem later on, the energy and time needed to process the local input data $\{d_n\}_{n=1}^N$ during the Map phase have been neglected in both (3) and (4). Similarly, the energy needed at device $n$ to combine the $\beta L$ bits of intermediate computation results during the Reduce phase can be modeled as
$$E_n^{\mathrm{RED}} = \frac{\kappa_n c_n^3 (\beta L)^3}{(t_n^{\mathrm{RED}})^2}, \quad n \in [N] \qquad (5)$$
with the following constraint
$$c_n \beta L \le t_n^{\mathrm{RED}} f_n^{\max}, \quad n \in [N]. \qquad (6)$$
Because increasing $t_n^{\mathrm{RED}}$ is always favorable for energy-efficiency, and because the Reduce phase cannot start before the Map and Shuffle phases are over, one can see that we will always have the same $t_n^{\mathrm{RED}} = t^{\mathrm{RED}}$ across all $N$ devices. As a consequence, constraint (6) becomes
$$t^{\mathrm{RED}} \ge \beta L \max_n\left\{\frac{c_n}{f_n^{\max}}\right\} \qquad (7)$$
while constraint (2) becomes
$$t_n^{\mathrm{MAP}} + t_n^{\mathrm{SHU}} \le \tau - t^{\mathrm{RED}}, \quad n \in [N]. \qquad (8)$$
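As a quick numerical illustration of this local computing model, the sketch below evaluates (3) and (4) directly (helper names are ours). It makes the key property of DFS visible: for a fixed load $l_n$, the Map energy decays as $1/(t_n^{\mathrm{MAP}})^2$, so stretching the Map phase saves energy until the deadline or the frequency cap binds.

```python
def map_energy(kappa, c, l_bits, t_map):
    """Map-phase energy under DFS, Eq. (3): E = kappa * (c*l)^3 / t^2.
    The implied CPU frequency is f = c*l/t and must satisfy f <= f_max (Eq. (4))."""
    return kappa * (c * l_bits) ** 3 / t_map ** 2

def min_map_time(c, l_bits, f_max):
    """Smallest feasible Map duration from Eq. (4): c*l <= t * f_max."""
    return c * l_bits / f_max

# Illustrative numbers (ours): doubling the available time quarters the energy.
kappa, c, l, f_max = 1e-28, 1e3, 1e6, 2e9
t_min = min_map_time(c, l, f_max)
e_fast = map_energy(kappa, c, l, t_min)        # run at f_max
e_slow = map_energy(kappa, c, l, 2 * t_min)    # DFS at half the frequency
assert abs(e_slow - e_fast / 4) < 1e-12 * e_fast
```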
_C. EDGE-FACILITATED DEVICES COMMUNICATIONS_
During the Shuffle phase, devices exchange intermediate computation results through the AP. This exchange thus involves both an uplink communication (devices to AP) and a downlink communication (AP to devices). In most applications, however, it is reasonable to assume that the downlink rates are much larger than the uplink rates. For this reason, and because we are primarily interested in the energy consumed by the resource-constrained devices, we neglect the time needed for the downlink communications in this work. We also assume that all the devices can communicate in an orthogonal manner to the AP (e.g., through frequency division multiple access techniques, or through interference alignment [39]).

Let $h_n$ denote the wireless channel power gain from device $n$ to the AP during the Shuffle phase, $p_n$ the RF transmit power of device $n$, $B$ the communication bandwidth and $N_0$ the noise power spectral density at the AP. The achievable uplink rate of device $n$ is then given by³
$$r_n(p_n) = B \ln\left(1 + \frac{p_n h_n}{N_0 B}\right) \qquad (9)$$
in nats/second.⁴ Denoting by $P_n^c$ the constant energy consumption of the communication circuits at device $n$, the energy consumed during the Shuffle phase is thus given by
$$E_n^{\mathrm{SHU}} = t_n^{\mathrm{SHU}}(p_n + P_n^c) \qquad (10)$$
with the following constraints
$$\alpha l_n \le t_n^{\mathrm{SHU}} r_n(p_n), \quad n \in [N] \qquad (11)$$
and
$$p_n \le p_n^{\max}, \quad n \in [N] \qquad (12)$$
where $p_n^{\max}$ is the maximum RF transmit power at device $n$.

³The noise power $N_0 B$ can be multiplied by the SNR gap $\Gamma$ to account for practical modulation and coding schemes. This additional factor is left out here for the sake of clarity.
⁴Nats and bits are used interchangeably in this paper (with the proper correction factor applied when needed) to avoid carrying $\ln(2)$ factors in the derivations later on.
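The communication model (9)–(12) translates directly into code. A hedged sketch follows (helper names are ours); note that, consistent with footnote 4, the rate is in nats/second since `np.log` is the natural logarithm.

```python
import numpy as np

def uplink_rate(p, h, B, N0):
    """Achievable uplink rate r_n(p) = B * ln(1 + p*h/(N0*B)), Eq. (9), in nats/s."""
    return B * np.log(1.0 + p * h / (N0 * B))

def shuffle_energy(t_shu, p, P_circuit):
    """Shuffle-phase energy E = t*(p + P_c), Eq. (10)."""
    return t_shu * (p + P_circuit)

def shuffle_feasible(l, alpha, t_shu, p, h, B, N0):
    """Check the rate constraint (11): alpha*l_n <= t_shu * r_n(p)."""
    return alpha * l <= t_shu * uplink_rate(p, h, B, N0)
```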
**III. PROBLEM FORMULATION**
Putting everything together, the energy-efficient collaborative fog computing problem can be formulated as follows:
$$\text{(P1)}:\; \underset{\mathbf{l},\, \mathbf{t}^{\mathrm{MAP}},\, \mathbf{t}^{\mathrm{SHU}},\, t^{\mathrm{RED}},\, \mathbf{p}}{\text{minimize}} \;\; \sum_{n=1}^N E_n^{\mathrm{MAP}} + E_n^{\mathrm{SHU}} + E_n^{\mathrm{RED}}$$
$$\text{subject to } (1), (4), (7), (8), (11), (12); \quad l_n, t_n^{\mathrm{MAP}}, t_n^{\mathrm{SHU}}, p_n, t^{\mathrm{RED}} \ge 0, \; n \in [N]$$
where $\mathbf{l}$, $\mathbf{t}^{\mathrm{MAP}}$, $\mathbf{t}^{\mathrm{SHU}}$ and $\mathbf{p}$ are $N$-length vectors containing the corresponding variables. Interestingly, this problem can be reformulated as follows: how can we send a total of $L$ bits at a given rate of $L/\tau$ bits/second through $N$ parallel special channels, each consisting of a ''computing channel'' in series with a wireless communication channel, in the most energy-efficient way? This is illustrated in Fig. 3. This interpretation was already mentioned in [34] for a single channel (i.e., for $N = 1$) and is here generalized to multiple parallel channels.

**FIGURE 3.** Another interpretation of the energy-efficient collaborative fog computing problem: how can we send a total of $L$ bits at a given rate of $L/\tau$ bits/second through $N$ parallel special channels consisting of a ''computing channel'' in series with a wireless communication channel in the most energy-efficient way?

_A. FEASIBILITY_
Before solving Problem (P1), we first seek to determine its feasibility condition, i.e., the condition that ensures that the system is able to meet the deadline.

_Lemma 1 (Feasibility): Problem (P1) is feasible if and only if the task size $L$ satisfies_
$$L \le L_{\max} = \sum_{n=1}^N \frac{\tau - \beta L \max_n\{c_n / f_n^{\max}\}}{1 + \dfrac{\alpha f_n^{\max}/c_n}{r_n(p_n^{\max})}} \cdot \frac{f_n^{\max}}{c_n}. \qquad (13)$$
_Proof:_ The maximum computing capacity of the system $L_{\max}$ is obtained by solving the following optimization problem
$$L_{\max} \triangleq \underset{\mathbf{l},\, \mathbf{t}^{\mathrm{MAP}},\, \mathbf{t}^{\mathrm{SHU}},\, t^{\mathrm{RED}},\, \mathbf{p}}{\text{maximize}} \;\; \sum_{n=1}^N l_n \quad \text{subject to } (4), (7), (8), (11), (12); \;\; l_n, t_n^{\mathrm{MAP}}, t_n^{\mathrm{SHU}}, p_n, t^{\mathrm{RED}} \ge 0, \; n \in [N].$$
For the maximum computing capacity to be achieved, constraints (8), (12) and (7) must be met with equality, that is, the entire time $\tau$ is used by all devices, all devices transmit at their maximum RF transmit power $p_n^{\max}$, and the Reduce phase executes as fast as possible. Next, the two constraints (4) and (11) on $l_n$ can be rewritten as a single constraint as follows
$$l_n \le \min\left\{\frac{t_n^{\mathrm{MAP}} f_n^{\max}}{c_n},\; \frac{1}{\alpha}\, t_n^{\mathrm{SHU}} r_n(p_n^{\max})\right\}. \qquad (14)$$
At the optimum, this constraint is satisfied with equality and, given the relationship between $t_n^{\mathrm{MAP}}$ and $t_n^{\mathrm{SHU}}$, we have
$$\frac{\alpha f_n^{\max}}{c_n}\, t_n^{\mathrm{MAP}} = t_n^{\mathrm{SHU}} r_n(p_n^{\max}), \qquad (15)$$
which intuitively means that the number of bits of intermediate values produced by the Map phase at full speed must equal the number of bits that can be transmitted at full speed during the Shuffle phase. Then, using the satisfied constraints (8) and (7) together, we have
$$t_n^{\mathrm{MAP}} + t_n^{\mathrm{SHU}} = \tau - \beta L \max_n\left\{\frac{c_n}{f_n^{\max}}\right\}, \qquad (16)$$
which allows us to finally obtain
$$t_n^{\mathrm{MAP}} = \frac{\tau - \beta L \max_n\{c_n/f_n^{\max}\}}{1 + \dfrac{\alpha f_n^{\max}/c_n}{r_n(p_n^{\max})}} \qquad (17)$$
and
$$t_n^{\mathrm{SHU}} = \frac{\tau - \beta L \max_n\{c_n/f_n^{\max}\}}{1 + \dfrac{r_n(p_n^{\max})}{\alpha f_n^{\max}/c_n}}. \qquad (18)$$
The maximum computing load $L_{\max}$ is thus given by
$$L_{\max} = \sum_{n=1}^N \frac{\tau - \beta L \max_n\{c_n/f_n^{\max}\}}{1 + \dfrac{\alpha f_n^{\max}/c_n}{r_n(p_n^{\max})}} \cdot \frac{f_n^{\max}}{c_n}. \qquad (19)$$
We see through (17) and (18) that, at full capacity, the time for the Map and Shuffle phases is shared according to the ratio of (i) the maximum rate at which the Map phase can produce intermediate computation results, $\alpha f_n^{\max}/c_n$, and (ii) the maximum rate at which the Shuffle phase can transmit intermediate computation results, $r_n(p_n^{\max})$. Below full capacity, these time intervals can adjust, taking into account the energy-efficiency of both phases.
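For illustration, the closed-form $L_{\max}$ of Lemma 1 can be evaluated directly. The sketch below (function name ours) vectorizes (13) over a heterogeneous device set; note that $\beta L$ enters as a given product (the size of the intermediate results for the task at hand), so feasibility of a task of size $L$ is simply the check $L \le L_{\max}$.

```python
import numpy as np

def max_computing_load(tau, beta_L, alpha, c, f_max, p_max, h, B, N0):
    """Maximum computing load L_max of Lemma 1 / Eq. (13).

    c, f_max, p_max, h are length-N arrays of device parameters;
    beta_L is the product beta*L for the task under consideration."""
    r_max = B * np.log(1.0 + p_max * h / (N0 * B))   # r_n(p_n^max), Eq. (9)
    t_red = beta_L * np.max(c / f_max)               # fastest possible Reduce, Eq. (7)
    t_budget = tau - t_red                           # time left for Map + Shuffle
    # t_n^MAP share of the budget (Eq. (17)) times the Map speed f_n/c_n:
    per_device = t_budget / (1.0 + (alpha * f_max / c) / r_max) * (f_max / c)
    return per_device.sum()

def is_feasible(L, *args):
    """Feasibility check of Lemma 1: the task fits iff L <= L_max."""
    return L <= max_computing_load(*args)
```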
**IV. OPTIMAL SOLUTION**
Inspired by [33], we introduce a new set of variables $E_n = t_n^{\mathrm{SHU}} p_n$, i.e., the RF energy consumed by the Shuffle phase, and substitute $p_n$ with $E_n / t_n^{\mathrm{SHU}}$ to convexify Problem (P1). With this new variable, constraints (11) and (12) can be rewritten as
$$\alpha l_n \le t_n^{\mathrm{SHU}}\, r_n\!\left(\frac{E_n}{t_n^{\mathrm{SHU}}}\right) \qquad (20)$$
and
$$E_n \le t_n^{\mathrm{SHU}} p_n^{\max} \qquad (21)$$
respectively, for all $n \in [N]$. Problem (P1) thus becomes
$$\text{(P2)}:\; \underset{\mathbf{l},\, \mathbf{t}^{\mathrm{MAP}},\, \mathbf{t}^{\mathrm{SHU}},\, t^{\mathrm{RED}},\, \mathbf{E}}{\text{minimize}} \;\; \sum_{n=1}^N \frac{\kappa_n c_n^3 l_n^3}{(t_n^{\mathrm{MAP}})^2} + E_n + t_n^{\mathrm{SHU}} P_n^c + \frac{\kappa_n c_n^3 (\beta L)^3}{(t^{\mathrm{RED}})^2}$$
$$\text{subject to } (1), (4), (7), (8), (20), (21); \quad l_n, t_n^{\mathrm{MAP}}, t_n^{\mathrm{SHU}}, E_n, t^{\mathrm{RED}} \ge 0, \; n \in [N].$$
We now prove the convexity of this new formulation.

_Lemma 2 (Convexity): Problem (P2) is convex._
_Proof:_ As this is a minimization problem, we start by showing the convexity of the objective function. The function $x^3$ is convex for $x \ge 0$. Its perspective function, $x^3/y^2$, is thus also convex for $y > 0$. The term associated with the energy consumed by the Map phase is thus jointly convex with respect to $l_n \ge 0$ and $t_n^{\mathrm{MAP}} > 0$. Next, the terms associated with the energy consumed by the Shuffle phase are linear and hence convex by definition. Finally, the function $1/x^2$ is convex with respect to $x > 0$, which makes the term associated with the energy consumed by the Reduce phase a convex function as well. As convexity is preserved under addition, the objective function of Problem (P2) is convex.

We then show that the set defined by the constraints is a convex set. The equality constraint (1) is affine and thus defines a hyperplane. Next, inequalities (4), (7), (8), and (21) are either linear or affine and thus define a polyhedron. The only remaining constraint (omitting trivial positivity constraints on all variables) is then constraint (20). For constraint (20) to define a convex set, its right-hand side must be a concave function. The function $r_n(x)$ is concave with respect to $x \ge 0$. Its perspective function, $y\, r_n(x/y)$, is thus also concave with respect to $x \ge 0$ and $y > 0$. Because the intersection of a hyperplane, a polyhedron and a convex sublevel set remains a convex set, the set defined by the constraints of Problem (P2) is also convex. □

Problem (P2) can easily be solved using a software package for convex optimization such as cvxopt [40]. This would not, however, offer any interpretation of the results. To this effect, we seek to gain insights into the optimal solution of Problem (P2) mathematically using Lagrange duality theory.
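Since (P2) is convex (Lemma 2), any generic nonlinear or convex solver can produce the numerical solution. The paper points to cvxopt [40]; purely as an illustration of how the formulation maps to code, the sketch below feeds the same objective and constraints to SciPy's general-purpose SLSQP solver. All helper names, the variable ordering and the initialization heuristic are ours, and numerical safeguards are deliberately minimal.

```python
import numpy as np
from scipy.optimize import minimize

def solve_p2(L, tau, beta_L, alpha, kappa, c, f_max, p_max, h, Pc, B, N0):
    """Illustrative numerical solution of Problem (P2).
    Variable vector x = [l (N), t_map (N), t_shu (N), E (N), t_red (1)]."""
    N = len(c)
    eps = 1e-9

    def unpack(x):
        return x[:N], x[N:2*N], x[2*N:3*N], x[3*N:4*N], x[-1]

    def objective(x):                      # objective of (P2)
        l, tm, ts, E, tr = unpack(x)
        return np.sum(kappa * (c * l)**3 / tm**2 + E + ts * Pc
                      + kappa * c**3 * beta_L**3 / tr**2)

    def rate_gap(x):                       # Eq. (20): t_shu*r(E/t_shu) - alpha*l >= 0
        l, tm, ts, E, tr = unpack(x)
        return ts * B * np.log(1 + (E / ts) * h / (N0 * B)) - alpha * l

    cons = [
        {'type': 'eq',   'fun': lambda x: np.sum(unpack(x)[0]) - L},              # Eq. (1)
        {'type': 'ineq', 'fun': lambda x: unpack(x)[1]*f_max/c - unpack(x)[0]},   # Eq. (4)
        {'type': 'ineq', 'fun': lambda x: unpack(x)[4] - beta_L*np.max(c/f_max)}, # Eq. (7)
        {'type': 'ineq', 'fun': lambda x: tau - unpack(x)[4]
                                          - unpack(x)[1] - unpack(x)[2]},         # Eq. (8)
        {'type': 'ineq', 'fun': rate_gap},                                        # Eq. (20)
        {'type': 'ineq', 'fun': lambda x: unpack(x)[2]*p_max - unpack(x)[3]},     # Eq. (21)
    ]
    # Naive starting point: uniform load, time split in thirds, full-power Shuffle.
    x0 = np.concatenate([np.full(N, L/N), np.full(N, tau/3), np.full(N, tau/3),
                         np.full(N, tau/3) * p_max, [tau/3]])
    bounds = [(eps, None)] * (4*N) + [(eps, tau)]
    return minimize(objective, x0, bounds=bounds, constraints=cons, method='SLSQP')
```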
We thus let $\lambda \in \mathbb{R}$, $\beta_n \ge 0$, $\mu_n \ge 0$ be the Lagrange multipliers associated with constraints (1), (8) and (20), respectively. The partial Lagrangian is then given by
$$L(\mathbf{x}, \boldsymbol{\mu}, \boldsymbol{\beta}, \lambda) = \sum_{n=1}^N \left[\frac{\kappa_n c_n^3 l_n^3}{(t_n^{\mathrm{MAP}})^2} + E_n + t_n^{\mathrm{SHU}} P_n^c + \frac{\kappa_n c_n^3 (\beta L)^3}{(t^{\mathrm{RED}})^2}\right] + \sum_{n=1}^N \left[\mu_n\!\left(\alpha l_n - t_n^{\mathrm{SHU}} r_n\!\left(\frac{E_n}{t_n^{\mathrm{SHU}}}\right)\right) + \beta_n\!\left(t_n^{\mathrm{MAP}} + t_n^{\mathrm{SHU}} + t^{\mathrm{RED}} - \tau\right)\right] + \lambda\!\left(L - \sum_{n=1}^N l_n\right), \qquad (22)$$
where the optimization variables and Lagrange multipliers have been aggregated in the corresponding vectors to ease notations. The dual function is then given by
$$\text{(DF)}:\; g(\boldsymbol{\mu}, \boldsymbol{\beta}, \lambda) = \min_{\mathbf{x}}\; L(\mathbf{x}, \boldsymbol{\mu}, \boldsymbol{\beta}, \lambda) \quad \text{s.t. } (4), (7), (21); \;\; t_n^{\mathrm{MAP}}, t_n^{\mathrm{SHU}}, t^{\mathrm{RED}} \in [0, \tau]; \;\; l_n, E_n \ge 0, \; n \in [N].$$
As the dual function provides a lower bound on the optimal value of the primal problem, we then seek to maximize it to obtain the best possible lower bound. The dual problem is given by
$$\text{(D1)}:\; \underset{\boldsymbol{\mu}, \boldsymbol{\beta}, \lambda}{\text{maximize}}\; g(\boldsymbol{\mu}, \boldsymbol{\beta}, \lambda) \quad \text{subject to } \mu_n, \beta_n \ge 0, \; n \in [N].$$
Problem (P2) is convex (Lemma 2) and satisfies Slater's condition if it is strictly feasible (in the sense given in Lemma 1). Strong duality thus holds and Problem (P2) can be solved by solving the dual problem (D1).

_A. DERIVATION OF THE DUAL FUNCTION_
Before solving the dual problem (D1), we seek to evaluate the dual function $g(\boldsymbol{\mu}, \boldsymbol{\beta}, \lambda)$ for all $\boldsymbol{\mu}, \boldsymbol{\beta}, \lambda$ by solving Problem (DF). To this effect, we first decompose Problem (DF) into $2N + 1$ sub-problems as follows
$$\underset{l_n,\, t_n^{\mathrm{MAP}}}{\text{minimize}}\;\; \frac{\kappa_n c_n^3 l_n^3}{(t_n^{\mathrm{MAP}})^2} + (\alpha\mu_n - \lambda)\, l_n + \beta_n t_n^{\mathrm{MAP}} \quad \text{subject to } 0 \le l_n \le \frac{t_n^{\mathrm{MAP}} f_n^{\max}}{c_n}; \;\; t_n^{\mathrm{MAP}} \le \tau \qquad (23)$$
$$\underset{E_n,\, t_n^{\mathrm{SHU}}}{\text{minimize}}\;\; E_n + (P_n^c + \beta_n)\, t_n^{\mathrm{SHU}} - \mu_n t_n^{\mathrm{SHU}}\, r_n\!\left(\frac{E_n}{t_n^{\mathrm{SHU}}}\right) \quad \text{subject to } 0 \le E_n \le t_n^{\mathrm{SHU}} p_n^{\max}; \;\; t_n^{\mathrm{SHU}} \le \tau \qquad (24)$$
$$\underset{t^{\mathrm{RED}}}{\text{minimize}}\;\; \sum_{n=1}^N \frac{\kappa_n c_n^3 (\beta L)^3}{(t^{\mathrm{RED}})^2} + \beta_n t^{\mathrm{RED}} \quad \text{subject to } \beta L \max_n\left\{\frac{c_n}{f_n^{\max}}\right\} \le t^{\mathrm{RED}} \le \tau. \qquad (25)$$
It is interesting to note that Problems (23) and (24) correspond to the Map and Shuffle phases at device $n$, respectively, while Problem (25) corresponds to the Reduce phase.

_Lemma 3 (Solution of Problem (23)): For any $\mu_n, \beta_n \ge 0$ and $\lambda \in \mathbb{R}$, the optimal solution of Problem (23) satisfies_
$$l_n^* = M_n^*\, t_n^{\mathrm{MAP}*} \qquad (26)$$
_with $M_n^*$, the effective processing rate (in bits/second) of device $n$, defined as_
$$M_n^* \triangleq \begin{cases} 0 & \lambda - \alpha\mu_n \le 0 \\[4pt] \sqrt{\dfrac{\lambda - \alpha\mu_n}{3\kappa_n c_n^3}} & \lambda - \alpha\mu_n \in \left(0,\, 3\kappa_n c_n (f_n^{\max})^2\right) \\[4pt] \dfrac{f_n^{\max}}{c_n} & \lambda - \alpha\mu_n \ge 3\kappa_n c_n (f_n^{\max})^2 \end{cases} \qquad (27)$$
_and $t_n^{\mathrm{MAP}*}$ given by_
$$t_n^{\mathrm{MAP}*} \begin{cases} = 0 & \rho_{1,n} < 0 \\ \in [0, \tau] & \rho_{1,n} = 0 \\ = \tau & \rho_{1,n} > 0 \end{cases} \qquad (28)$$
_with $\rho_{1,n} = 2\kappa_n (c_n M_n^*)^3 - \beta_n + \gamma_{2,n}\, \frac{f_n^{\max}}{c_n}$ and_
$$\gamma_{2,n} = \begin{cases} 0 & M_n^* < f_n^{\max}/c_n \\ \lambda - \alpha\mu_n - 3\kappa_n c_n (f_n^{\max})^2 & M_n^* = f_n^{\max}/c_n. \end{cases} \qquad (29)$$
_Proof: See Appendix A._

_Lemma 4 (Solution of Problem (24)): For any $\mu_n, \beta_n \ge 0$, the optimal solution of Problem (24) satisfies_
$$E_n^* = p_n^*\, t_n^{\mathrm{SHU}*} \qquad (30)$$
_with $p_n^*$, the RF transmit power used during the Shuffle phase at device $n$, defined as_
$$p_n^* \triangleq \begin{cases} 0 & B\mu_n \le \frac{B N_0}{h_n} \\[4pt] B\left(\mu_n - \frac{N_0}{h_n}\right) & B\mu_n \in \left(\frac{B N_0}{h_n},\, \frac{B N_0}{h_n} + p_n^{\max}\right) \\[4pt] p_n^{\max} & B\mu_n \ge \frac{B N_0}{h_n} + p_n^{\max} \end{cases} \qquad (31)$$
_and $t_n^{\mathrm{SHU}*}$ given by_
$$t_n^{\mathrm{SHU}*} \begin{cases} = 0 & \rho_{2,n} < 0 \\ \in [0, \tau] & \rho_{2,n} = 0 \\ = \tau & \rho_{2,n} > 0 \end{cases} \qquad (32)$$
_with $\rho_{2,n} = \mu_n r_n(p_n^*) - P_n^c - \beta_n - \mu_n \dfrac{p_n^* h_n / N_0}{1 + \frac{p_n^* h_n}{N_0 B}} + \delta_{2,n}\, p_n^{\max}$ and_
$$\delta_{2,n} = \begin{cases} 0 & p_n^* < p_n^{\max} \\[4pt] \mu_n \dfrac{h_n/N_0}{1 + \frac{p_n^{\max} h_n}{N_0 B}} - 1 & p_n^* = p_n^{\max}. \end{cases} \qquad (33)$$
_Proof: See Appendix B._

_Lemma 5 (Solution of Problem (25)): For any $\beta_1, \ldots, \beta_N \ge 0$, the optimal solution of Problem (25) satisfies_
$$t^{\mathrm{RED}*} = \begin{cases} \beta L \max_n\left\{\dfrac{c_n}{f_n^{\max}}\right\} & \sum_{n=1}^N \beta_n > \dfrac{2\sum_{n=1}^N \kappa_n c_n^3}{\left(\max_n\{c_n/f_n^{\max}\}\right)^3} \\[8pt] \sqrt[3]{\dfrac{2(\beta L)^3 \sum_{n=1}^N \kappa_n c_n^3}{\sum_{n=1}^N \beta_n}} & \sum_{n=1}^N \beta_n \le \dfrac{2\sum_{n=1}^N \kappa_n c_n^3}{\left(\max_n\{c_n/f_n^{\max}\}\right)^3}. \end{cases} \qquad (34)$$
_Proof: See Appendix C._
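Because Lemmas 3 and 4 give the inner solutions in closed form, they are cheap to evaluate inside any dual-ascent loop. A minimal sketch (names ours) implementing the clipped expressions (27) and (31); the clipping thresholds coincide exactly with the case boundaries of the two lemmas.

```python
import numpy as np

def effective_rate(lmbda, mu, alpha, kappa, c, f_max):
    """Optimal effective processing rate M_n^* of Lemma 3, Eq. (27).
    sqrt((lambda - alpha*mu)/(3*kappa*c^3)), floored at 0 and capped at f_max/c."""
    water = np.maximum(lmbda - alpha * mu, 0.0)
    M = np.sqrt(water / (3.0 * kappa * c**3))
    return np.minimum(M, f_max / c)

def shuffle_power(mu, h, B, N0, p_max):
    """Optimal RF transmit power p_n^* of Lemma 4, Eq. (31).
    Interior stationary point B*(mu - N0/h), clipped to [0, p_max]."""
    p = B * mu - B * N0 / h
    return np.clip(p, 0.0, p_max)
```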
_B. MAXIMIZATION OF THE DUAL FUNCTION AND INTERPRETATION_
The dual function being concave but non-differentiable, we could now maximize it using the subgradient-based ellipsoid method, as was done for example in [33]. However, in addition to being impractical for solving the actual problem (when compared to the use of a convex optimization solver like cvxopt [40]), this method does not offer any additional insight into the structure of the optimal solution. Instead, we intuitively look at what happens when we maximize the dual function $g(\boldsymbol{\mu}, \boldsymbol{\beta}, \lambda)$ taking into account the results of Lemmas 3, 4 and 5.

To ease the analysis, we start with $\lambda = 0$ and $\mu_n = 0$ for all $n$. In this case, $l_n^* = 0$ for all devices (see (26) and (27)) and the penalty term $L - \sum_{n=1}^N l_n^* = L$ associated with $\lambda$ appearing in the dual function is thus strictly positive. Intuitively, this implies that the task has not been fully distributed across the devices, violating constraint (1). It is thus possible to increase the value of the dual function through this positive penalty term by increasing the value of $\lambda$. Because $l_n^*$ is proportional to $\sqrt{\lambda - \alpha\mu_n}$ through $M_n^*$, this increases the number of bits $l_n^*$ processed by each device. Moreover, because $l_n^*$ is also inversely proportional to $\sqrt{\kappa_n c_n^3}$ through $M_n^*$, less energy-efficient devices (i.e., the ones with larger values of $\kappa_n c_n^3$) get fewer bits to process. The value of $\lambda$ can be increased in this way until the penalty term $L - \sum_{n=1}^N l_n^*$ equals 0 (i.e., until the task is fully distributed across the devices).

Next, because $\mu_n = 0$ for all devices as of now, $p_n^* = 0$ (see (31)) and the penalty term $\alpha l_n^* - t_n^{\mathrm{SHU}*} r_n(p_n^*) = \alpha l_n^*$ associated with $\mu_n$ appearing in the dual function is strictly positive for all devices. Intuitively, this implies that the rate constraint (20) is violated for all devices. It is thus possible to increase the value of the dual function through this penalty term by increasing the value of $\mu_n$. Increasing $\mu_n$ has a double effect: (i) it decreases the value of $l_n^*$, because $l_n^*$ is proportional to $\sqrt{\lambda - \alpha\mu_n}$, and (ii) it increases the value of $p_n^*$, because $p_n^*$ is directly proportional to $\mu_n$. Combined, these two effects work towards satisfying the rate constraint (20). For devices with a very bad channel or a very low maximum RF transmit power $p_n^{\max}$, $\mu_n$ could increase so much that $\lambda - \alpha\mu_n$ becomes negative, meaning that the number of bits to be processed $l_n^*$ falls to 0 (see (26) and (27) again). At this point, there is an iterative interplay between $\lambda$ and $\{\mu_n\}_{n=1}^N$ in which both successively increase to maximize the dual function until both constraints (1) and (20) are satisfied and a maximum has been reached.

It is now possible to give a waterfilling-like interpretation of the structure of the optimal computing load partition $\{l_n^*\}_{n=1}^N$ through the effective processing rates $\{M_n^*\}_{n=1}^N$ given in (27). First, $\lambda$ acts as a kind of global (i.e., shared across all devices) water level for $\{l_n^*\}_{n=1}^N$ through the effective processing rate $M_n^*$. This water level has to be sufficiently high for the tasks to be fully executed in time. Then, $\alpha\mu_n$ can be seen as the base of the water vessel of device $n$. Following the above discussion, this base $\alpha\mu_n$ mainly depends on the communication capabilities and energy-efficiency of device $n$ (i.e., $h_n$ and $p_n^{\max}$). This is illustrated in Fig. 4. Finally, the actual water content of each vessel, i.e., $\lambda - \alpha\mu_n$, is divided by $3\kappa_n c_n^3$. This term, related to the computing energy-efficiency of device $n$, can be interpreted as a ''pressure'' applied to the water vessel of each device. The less energy-efficient device $n$ is, the larger $3\kappa_n c_n^3$ becomes and the more pressure is applied to its water vessel, hence reducing the corresponding water level and $l_n^*$. This is illustrated in Fig. 5.
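The waterfilling picture also suggests a simple numerical procedure: for fixed $\{\mu_n\}$ and per-device Map durations, the total assigned load $\sum_n M_n(\lambda)\, t_n^{\mathrm{MAP}}$ is non-decreasing in $\lambda$, so the water level satisfying constraint (1) can be found by bisection. The sketch below (names ours) only illustrates this single step; in the full dual ascent, $\lambda$ and $\{\mu_n\}$ would of course be updated jointly.

```python
import numpy as np

def waterfill_lambda(L, t_map, mu, alpha, kappa, c, f_max, lo=0.0, hi=1.0):
    """Bisection on the global water level lambda so that sum_n l_n = L,
    with l_n = M_n(lambda) * t_n^MAP and M_n given by Eq. (27)."""
    if L > np.sum((f_max / c) * t_map):
        raise ValueError("L exceeds what the devices can process within t_map")

    def total_load(lmbda):
        water = np.maximum(lmbda - alpha * mu, 0.0)
        M = np.minimum(np.sqrt(water / (3.0 * kappa * c**3)), f_max / c)
        return float(np.sum(M * t_map))

    while total_load(hi) < L:      # grow the bracket until it covers L
        hi *= 2.0
    for _ in range(100):           # plain bisection on the monotone map
        mid = 0.5 * (lo + hi)
        if total_load(mid) < L:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```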
This]_ term, related to the computing energy-efficiency of device n can be interpreted as a ‘‘pressure’’ applied to the water vessel of each device. The less energy-efficient device n is, the larger 3κnc[3]n [becomes and the more pressure is applied to its water] vessel, hence reducing the corresponding water level and ln[∗][.] This is illustrated in Fig. 5. **V. NUMERICAL RESULTS** In this section, the performances of the optimal collaborativecomputing scheme (denoted Opt in what follows) are benchmarked against various other schemes through numerical experiments. The schemes used for comparison are - Blind: the task allocation (i.e., choosing the value of _ln for each device n) doesn’t take into account the het-_ erogeneity of the devices; the scheme is blind to device diversity (both in terms of computing and communicating capabilities). In this case, the variable ln is set to L/N for each device n. This corresponds to what is **FIGURE 4.** First part of the waterfilling-like interpretation of the optimal effective processing rate Mn[∗] for a single device n. λ acts as a global (i.e., shared by all devices) water level that has to be sufficiently high for the tasks to be fully executed in time while αµn acts as the base of the water vessel of device n and depends on device n communication capabilities and energy-efficiency (i.e., channel conditions and maximum RF transmit power). **FIGURE 5.** Second part of the waterfilling-like interpretation of the optimal effective processing rate Mn[∗] for a single device n. Each water vessel is ‘‘compressed’’ by an applied ‘‘pressure’’ 3κncn[3] that represents the computing efficiency of device n. Less efficient devices will then see their effective processing rate reduced by a factor that depends on their computing energy-efficiency. done in most works on CDC assuming homogeneous devices [23], [25], [26]. - NoDFS: the CPU frequency of each device n is fixed to its maximum value fn[max] rather than being optimized for energy-efficiency. In this case, the variable tn[MAP] is set to cnln/fn[max] for each device n while tn[RED] (now different for each device) is set to cnβL/fn[max]. This scheme is close to the one proposed in our previous work [22]. - Blind-NoDFS: this scheme combines the two previous cases. In this case, ln = L/N and tn[MAP] = _fn[max]cn_ _NL_ for each device n while tn[RED] = cnβL/fn[max]. The only optimization left here concerns the Shuffle phase and the variables tn[SHU] and En. - NoOpt: in this scheme, nothing is optimized. This is basically Blind-NoDFS with α _N[L]_ [=][ t]n[SHU]rn � _tn[SHU]En_ � and En = tn[SHU]p[max]n . The parameters used in the following numerical experiments are given in Table 1. The ranges for the parameters were selected to comply with devices consumption models used in the literature on MEC systems, e.g., [11]–[13], [33]. _A. MAXIMUM COMPUTING LOAD AND OUTAGE_ _PROBABILITY_ To show that the proposed scheme indeed enhances the computing capabilities of individual devices, we start by comparing the maximum computing load of Opt and Blind, ----- **TABLE 1. Parameters used in the numerical experiments.** noted Lmax[Opt] [and][ L]max[Blind][, respectively. Other schemes are not] included here as Lmax[NoDFS] = Lmax[Opt] [and][ L]max[Blind-NoDFS] = Lmax[Blind] [=] _Lmax[NoOpt]. For Opt, the maximum computing load Lmax[Opt]_ [can be] readily obtained using Lemma 1. For Blind, we introduce the following Lemma. 
Other schemes are not included here, as $L_{\max}^{\mathrm{NoDFS}} = L_{\max}^{\mathrm{Opt}}$ and $L_{\max}^{\mathrm{Blind\text{-}NoDFS}} = L_{\max}^{\mathrm{Blind}} = L_{\max}^{\mathrm{NoOpt}}$. For Opt, the maximum computing load $L_{\max}^{\mathrm{Opt}}$ can be readily obtained using Lemma 1. For Blind, we introduce the following lemma.

_Lemma 6 (Maximum Computing Load of Blind): The maximum computing load achievable by the Blind scheme is given by_
$$L_{\max}^{\mathrm{Blind}} = N \min_n \left\{ \frac{\tau - \beta L \max_n\{c_n/f_n^{\max}\}}{1 + \dfrac{\alpha f_n^{\max}/c_n}{r_n(p_n^{\max})}} \cdot \frac{f_n^{\max}}{c_n} \right\}. \qquad (35)$$
_Proof:_ Obtaining $L_{\max}^{\mathrm{Blind}}$ requires solving the following linear program
$$L_{\max}^{\mathrm{Blind}} \triangleq \underset{l,\, \mathbf{t}^{\mathrm{MAP}},\, \mathbf{t}^{\mathrm{SHU}}}{\text{maximize}}\;\; N l \quad \text{subject to } c_n l \le t_n^{\mathrm{MAP}} f_n^{\max}\; \forall n; \;\; \alpha l \le t_n^{\mathrm{SHU}} r_n(p_n^{\max})\; \forall n; \;\; t^{\mathrm{RED}} \ge \beta L \max_n\left\{\frac{c_n}{f_n^{\max}}\right\}; \;\; t_n^{\mathrm{MAP}} + t_n^{\mathrm{SHU}} \le \tau - t^{\mathrm{RED}},\; n \in [N]; \;\; l, t_n^{\mathrm{MAP}}, t_n^{\mathrm{SHU}} \ge 0,\; n \in [N].$$
A reasoning similar to the one used in Lemma 1, omitted here for the sake of space, can then be used to obtain the analytical expression given above. □

Values of $L_{\max}^{\mathrm{Opt}}$ and $L_{\max}^{\mathrm{Blind}}$ for different values of the allowed latency $\tau$ and various numbers of devices $N$ are plotted in Fig. 6. As expected, both $L_{\max}^{\mathrm{Opt}}$ and $L_{\max}^{\mathrm{Blind}}$ grow with the allowed latency $\tau$. However, $L_{\max}^{\mathrm{Opt}}$ grows with $\tau$ much faster than $L_{\max}^{\mathrm{Blind}}$ does. Next, one can see that increasing the number of devices $N$ for a given allowed latency $\tau$ is always more profitable for Opt than for Blind. Furthermore, the benefit of further increasing the number of devices $N$ remains constant for Opt but quickly saturates for Blind. Both observations can be explained by the fact that Opt is able to leverage device diversity by optimally exploiting the different computing and communication capabilities of the devices while Blind, as per its name, is not.

**FIGURE 6.** Maximum computing loads $L_{\max}^{\mathrm{Opt}}$ and $L_{\max}^{\mathrm{Blind}}$ averaged over 1,000,000 random instances of the problem. Note that $L_{\max}^{\mathrm{Opt}}$ for $N = 10$ is hidden by $L_{\max}^{\mathrm{Blind}}$ for $N = 30$, $40$ and $50$.

**FIGURE 7.** Empirical outage probability $P_{\mathrm{out}}^{\mathrm{Opt}}$ and $P_{\mathrm{out}}^{\mathrm{Blind}}$ for $L = 10$ Mb, averaged over 1,000,000 random instances of the problem.

Another way of looking at the maximum computing loads of the different schemes is through what we define as the ''outage probability'' of the system. In this context, the outage probability is defined, for a random heterogeneous set of devices and a given allowed latency $\tau$, as the probability that the maximum computing load that can be processed by the system is lower than the actual computing load $L$, i.e.,
$$P_{\mathrm{out}}^* = \Pr\left(L \ge L_{\max}^*\right). \qquad (36)$$
For a given task size $L$, this probability can be empirically computed by averaging over a large number of randomly generated sets of devices. For $L = 10$ Mb, both $P_{\mathrm{out}}^{\mathrm{Opt}}$ and $P_{\mathrm{out}}^{\mathrm{Blind}}$ are depicted in Fig. 7 as a function of the allowed latency $\tau$ and for several values of $N$. This plot again demonstrates the benefits of leveraging device diversity to distribute the task among the devices. In contrast, we see that Blind suffers from device diversity. Indeed, for larger values of the allowed latency $\tau$, increasing the number of devices $N$ penalizes Blind by increasing its outage probability $P_{\mathrm{out}}^{\mathrm{Blind}}$. Intuitively, this comes from the fact that increasing the number of devices $N$ increases the probability of having a very weak device limiting the whole system. Mathematically, the lower tail of the distribution of $L_{\max}^{\mathrm{Blind}}$ grows larger and larger with $N$, making the distribution more and more skewed towards small values of $L_{\max}^{\mathrm{Blind}}$. This also explains why this trend was not visible in Fig. 6, as it only shows the mean of the distribution of $L_{\max}^{\mathrm{Blind}}$.
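The empirical outage probability (36) is straightforward to estimate by Monte Carlo simulation, as done for Fig. 7. A hedged sketch follows; the device-sampling and $L_{\max}$ callbacks are placeholders to be supplied by the user (e.g., Eq. (13) for Opt or Eq. (35) for Blind), and the name is ours.

```python
def outage_probability(L, tau, n_trials, draw_devices, lmax_fn):
    """Monte Carlo estimate of P_out = Pr(L >= L_max), Eq. (36).

    draw_devices : callable returning one random heterogeneous device set
                   (encapsulates the randomness, e.g., channel gains)
    lmax_fn      : callable mapping (device set, tau) to the scheme's
                   maximum computing load L_max
    """
    outages = sum(L >= lmax_fn(draw_devices(), tau) for _ in range(n_trials))
    return outages / n_trials
```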
In addition, it appears that the benefit on $P_{\mathrm{out}}^{\mathrm{Blind}}$ of allowing a looser deadline (for a given $N$) saturates as the value of $\tau$ increases beyond a certain point that varies with the number of devices $N$. Again, and for the same reason, this trend was not visible in Fig. 6 and cannot be explained by looking at the mean of $L_{\max}^{\mathrm{Blind}}$ only. This saturation effect appears when the mode of the distribution of $L_{\max}^{\mathrm{Blind}}$ becomes larger than the value of the actual computing load $L$ used to compute $P_{\mathrm{out}}^{\mathrm{Blind}}$. Past this point, the benefits on $P_{\mathrm{out}}^{\mathrm{Blind}}$ of further pushing the mode to larger values by increasing $\tau$ become smaller and smaller. Coming back to the example application of collaborative on-device ML/DL inference, this indicates that Opt enables inferences with larger ML/DL models (i.e., more accurate/complex inferences) for the same latency or, the other way around, similar inferences with a smaller latency.

_B. ENERGY CONSUMPTION_
We now look at the energy consumed by the different collaborative-computing schemes. Fig. 8 depicts the total energy consumed per bit processed for different numbers of devices $N$, while Fig. 9 depicts the energy consumed by each phase of the collaboration (i.e., Map, Shuffle and Reduce). In both figures, $L = 1$ Mb, $\beta L = 0.1$ kb and $\tau = 100$ ms. Each point is the result of an average over 100 feasible (for each scheme) instances of the problem, i.e., instances for which $L \le L_{\max}^{\mathrm{Opt}}, L_{\max}^{\mathrm{Blind}}$. Note that the parameters have also been chosen to allow comparison between the schemes, i.e., to ensure that feasible instances arise with reasonable probability for all schemes.

**FIGURE 8.** Comparison of the total energy consumed by the different schemes as a function of the number of devices $N$. Note that the energy consumption is the same for both NoOpt (yellow curve) and Blind-NoDFS (black curve).

First, one can observe in Fig. 8 that the energy consumed by Blind-NoDFS and NoOpt is actually the same. This stems from the fact that it is always optimal (from an energy-efficiency point of view) for constraint (20) to be met with equality. Indeed, the opposite would mean that the device is investing either too much time $t_n^{\mathrm{SHU}}$ (hence increasing the energy consumption of the communication circuits, $t_n^{\mathrm{SHU}} P_n^c$) or too much RF energy $E_n$ with regard to the number of bits $\alpha l_n$ that need to be transmitted in the Shuffle phase. For the same reason, constraint (21) is almost always satisfied with equality as well, meaning that devices participating in the Shuffle phase transmit at the maximum RF power allowed, i.e., $p_n^{\max}$. Schemes Blind-NoDFS and NoOpt are thus equivalent, and both transmit at the maximum RF transmit power and at the maximum rate. These two observations are valid for all the other schemes as well. In addition, the energy per bit consumed by both Blind-NoDFS and NoOpt is roughly constant with the number of devices. In contrast, the energy consumed by the other schemes decreases with $N$, as the diversity across the devices is exploited for energy-efficiency. Interestingly, optimizing $\{t_n^{\mathrm{MAP}}\}_{n=1}^N$ and $t^{\mathrm{RED}}$ only (in Blind) is more beneficial than optimizing $l_n$ only (in NoDFS), even though the number of bits assigned to each device for processing by Blind is uniform across the devices, and thus blind to diversity. Combining both schemes in Opt leads to a gain in energy-efficiency with respect to NoOpt reaching two orders of magnitude for large values of $N$.
Fig. 9 breaks down the energy consumption of the different schemes into three components: $E^{\mathrm{MAP}}$, $E^{\mathrm{SHU}}$ and $E^{\mathrm{RED}}$. Note that NoOpt, being equivalent to Blind-NoDFS, has been omitted to avoid cluttering the plot. First, it appears that the energy consumption of the Map phase largely dominates the energy consumption of the Shuffle and Reduce phases for small values of $N$.⁵ As the number of devices $N$ increases, this difference decreases for all schemes leveraging diversity across the devices (i.e., all but Blind-NoDFS). In contrast, the energy consumed by the Shuffle phase increases with the number of devices $N$, no matter the scheme used. This figure also shows that there is not much (if anything) to gain from optimization in the Shuffle phase. Next, one can see that the energy efficiency of the Reduce phase increases with $N$ when $t^{\mathrm{RED}}$ can be optimized (i.e., when devices can perform DFS). This decrease with $N$ is, however, slower than what we observed for the Map phase. For Blind, this can be explained by the fact that priority in the optimization is given to the more energy-intensive Map phase. For Opt, this comes from the fact that, as opposed to the Map phase, all devices have to perform the Reduce phase. Finally, for NoDFS and Blind-NoDFS, each device has to perform the Reduce phase at full speed, causing $E^{\mathrm{RED}}$ to increase with $N$.

⁵Note that this statement is strongly dependent on the energy consumption model and the parameters used for the numerical experiments. As an example, increasing the number of bits transmitted during the Shuffle phase, $\alpha l_n$, through the total size of the intermediate computation results $\beta L$ would directly result in an increase of $E^{\mathrm{SHU}}$ by the same factor.

**FIGURE 9.** Breakdown of the energy consumed by the three phases of the collaboration as a function of the number of nodes $N$. Note that the energy consumption for the Reduce phase is the same for both NoDFS and Blind-NoDFS.

**FIGURE 10.** Comparison of the total energy consumed by the different schemes as a function of the allowed latency $\tau$. Note that the energy consumption is the same for both NoOpt (yellow curve) and Blind-NoDFS (black curve).

_C. ENERGY-LATENCY TRADE-OFF_
Figs. 10 and 11 depict the total energy consumption per bit and the energy consumed per bit by each phase, respectively, for the different schemes and for different values of the allowed latency $\tau$. For both figures, $L = 1$ Mb, $\beta L = 0.1$ kb and $N = 10$. Each point is the result of an average over 100 feasible (for each scheme) instances of the problem, i.e., instances for which $L \le L_{\max}^{\mathrm{Opt}}, L_{\max}^{\mathrm{Blind}}$. Again, note that the parameters have also been chosen to allow comparison between the schemes, i.e., to ensure that feasible instances arise with reasonable probability for all schemes.

Interestingly, Figs. 10 and 11 closely resemble Figs. 8 and 9, implying that the effect of increasing the number of devices $N$ is roughly equivalent to the effect of increasing the allowed latency $\tau$. The underlying mechanisms, however, are different. For schemes in which devices are able to perform DFS (i.e., Opt and Blind), increasing $\tau$ enables the devices to further decrease their CPU frequency, hence saving energy. For NoDFS, increasing $\tau$ enables the system to increase the number of bits assigned to the most energy-efficient devices, hence reducing the load on less energy-efficient devices and again saving energy.

_D. NUMBER OF PARTICIPATING DEVICES_
Finally, Fig. 12 shows the average fraction of devices participating in the collaboration, i.e., devices with $l_n > 0$ that thus participate in the Map and Shuffle phases, as a function of the computing load $L/L_{\max}$. For Blind (and Blind-NoDFS), this fraction is of course constant and equal to 1, as $l_n = L/N$ for all $n$. For Opt, this fraction starts at around 0.6 for very small computing loads and quickly reaches 1 for computing loads above 0.2. In contrast, for NoDFS, the fraction of devices participating in the Map and Shuffle phases closely follows the fraction $L/L_{\max}$. To explain these radically different behaviors, we look at the energy consumed by the Map phase at each device $n$ for both schemes. For Opt first, Eq. (3) indicates that $E_n^{\mathrm{MAP}}$ is a cubic function of $l_n$. For NoDFS, injecting $t_n^{\mathrm{MAP}} = c_n l_n / f_n^{\max}$ in (3) shows that $E_n^{\mathrm{MAP}}$ becomes a linear function of $l_n$. This explains why the computing load is more evenly spread across devices for Opt than for NoDFS.

**FIGURE 11.** Breakdown of the energy consumed by the three phases of the collaboration as a function of the allowed latency $\tau$. Note that the energy consumption for the Reduce phase is the same for both NoDFS and Blind-NoDFS.

**FIGURE 12.** Average fraction of devices participating in the collaboration, i.e., devices with $l_n^* > 0$ that thus participate in the Map and Shuffle phases, as a function of the computing load $L/L_{\max}$.

**VI. DISCUSSION AND FUTURE WORKS**
This work built upon our previous work [22] to further highlight the benefits of leveraging device diversity, whether in terms of computing or communication capabilities, to enhance the individual computing capabilities of the devices while increasing the energy-efficiency of the system as a whole. It also provides new insights into the structure of the optimal solution through a waterfilling-like interpretation. As mentioned in the introduction, this makes collaborative computing another potentially viable architecture to be used in conjunction with MEC and MCC to enable ubiquitous computing on heterogeneous devices. However, further validation with more realistic and practical assumptions is needed. Interference between devices during the Shuffle phase, for example, was neglected in this work. As the interference level is expected to increase with the number of devices participating in the Shuffle phase, taking interference into account in the communication model could have a significant impact on the number of devices participating in the collaboration.
These additional degrees of freedom could enable additional energy savings and increased system-wise performance. This would however come at the cost of a complexified optimization problem, and a sweet spot between optimization complexity and efficiency gains should thus be found. **APPENDIX A** **PROOF OF LEMMA 3** Problem (23) being convex, the optimal solution satisfies the KKT conditions. The Lagrangian of problem (23) is given by _L1,n =_ ([κ]tn[MAP][n][c]n[3][l]n)[3][2][ +][ (][αµ][n][ −] _[λ][)][l][n][ +][ β][n][t]n[MAP]_ − _γ1,nln_ + γ2,n �ln − _[t]n[MAP]cnfn[max]_ � − _γ3,ntn[MAP]_ � � + γ4,n _tn[MAP]_ − _τ_ (37) with γ1,n, γ2,n, γ3,n, γ4,n ≥ 0 the Lagrange multipliers. The KKT conditions are then given by _∂L1,n_ _n[l]n[2]_ = 3 _[κ][n][c][3]_ (38) _∂ln_ (tn[MAP])[2][ +][ αµ][n][ −] _[λ][ −]_ _[γ][1][,][n][ +][ γ][2][,][n][ =][ 0]_ _∂L1,n_ _n[l]n[3]_ _fn[max]_ _∂tn[MAP]_ = −2 ([κ]tn[MAP][n][c][3] )[3][ +][ β][n][ −] _[γ][2][,][n]_ _cn_ − _γ3,n + γ4,n = 0_ (39) _δ1,nEn = 0_ (47) � � _δ2,n_ _En −_ _tn[SHU]p[max]n_ = 0 (48) _δ3,ntn[SHU]_ = 0 (49) � � _δ4,n_ _tn[SHU]_ − _τ_ = 0. (50) We first obtain (30), (31) and (33) using condition (45) and complementary slackness conditions (47) and (48). Substituting (30) in (46) and defining ρ2,n = δ4,n − _δ3,n, we then_ obtain (32) using complementary slackness conditions (49) and (50). **APPENDIX C** **PROOF OF LEMMA 5** Problem (25) being convex, the optimal solution satisfies the KKT conditions. The Lagrangian of problem (25) is given by �N _κnc[3]n[T][ 3]_ _L3 =_ _n=1_ (t [RED])[2][ +][ β][n][t] [RED] � � _cnβL_ � � � � + ϵ1 maxn _fn[max]_ − _t_ [RED] + ϵ2 _t_ [RED] − _τ_ (51) with ϵ1, ϵ2 ≥ 0 the Lagrange multipliers. The KKT conditions are then given by **APPENDIX B** **PROOF OF LEMMA 4** Problem (24) being convex, the optimal solution satisfies the KKT conditions. The Lagrangian of problem (24) is given by � _En_ � _L2,n = En + tn[SHU]P[c]n_ [−] _[µ][n][t]n[SHU]rn_ _tn[SHU]_ + βntn[SHU] − _δ1,nEn + δ2,n(En −_ _tn[SHU]p[max]n_ ) − _δ3,ntn[SHU]_ � � + δ4,n _tn[SHU]_ − _τ_ (44) with δ1,n, δ2,n, δ3,n, δ4,n ≥ 0 the Lagrange multipliers. The KKT conditions are then given by _∂L2,n_ _∂En_ = 1 − _δ1,n + δ2,n −_ _µn_ _hn_ _N0_ 0 (45) 1 _En_ _hn_ = + _tn[SHU]_ _BN0_ _En_ _hn_ _∂L2,n_ _tN[SHU]_ _N0_ � _En_ � _∂tn[SHU]_ = µn 1 + _tn[SHU]En_ _BNhn0_ − _µnrn_ _tn[SHU]_ + P[c]n [+][ β][n] − _δ2,np[max]n_ − _δ3,n + δ4,n = 0_ (46) with the complementary slack conditions with the complementary slackness conditions _N_ � _βn = 0_ _n=1_ (52) �3 _N_ � _κnc[3]n_ [+] _n=1_ _∂L3_ � _T_ _∂t_ [RED][ =][ ϵ][2][ −] _[ϵ][1][ −]_ [2] _t_ [RED] _ϵ1_ _γ2,n_ _γ1,nln = 0,_ (40) �ln − _[t]n[MAP]cnfn[max]_ � = 0, (41) _γ3,ntn[MAP]_ = 0, (42) � � _γ4,n_ _tn[MAP]_ − _τ_ = 0. (43) with the complementary slackness conditions � _cnβL_ _fn[max]_ − = � � _ϵ2_ _t_ [RED] − _τ_ = 0. (54) � � _t_ [RED] 0 (53) − = � max _n_ We first obtain (26), (27) and (29) using condition (38) and complementary slackness conditions (40) and (41). Substituting (26) in (39) and defining ρ1,n = γ4,n − _γ3,n, we then_ obtain (28) using complementary slackness conditions (42) and (43). Condition (52) together with complementary slackness conditions (53) (54) allow us to obtain (34). ----- **ACKNOWLEDGMENT** The authors would also like to thank their colleague Emre Kilcioglu for proofreading and comments, and the anonymous reviewers for their constructive criticism. **REFERENCES** [1] M. Chiang and T. Zhang, ‘‘Fog and IoT: An overview of research opportunities,’’ IEEE Internet Things J., vol. 3, no. 
[2] T. Qiu, N. Chen, K. Li, M. Atiquzzaman, and W. Zhao, ''How can heterogeneous Internet of Things build our future: A survey,'' IEEE Commun. Surveys Tuts., vol. 20, no. 3, pp. 2011–2027, 3rd Quart., 2018.
[3] B. M. Rashma, S. Macherla, A. Jaiswal, and G. Poornima, ''Handling heterogeneity in an IoT infrastructure,'' in Advances in Machine Learning and Computational Intelligence, S. Patnaik, X.-S. Yang, and I. K. Sethi, Eds. Singapore: Springer, 2021, pp. 635–643.
[4] S. Kumar, P. Tiwari, and M. Zymbler, ''Internet of Things is a revolutionary approach for future technology enhancement: A review,'' J. Big Data, vol. 6, no. 1, pp. 1–21, Dec. 2019.
[5] A. Mammela and A. Anttonen, ''Why will computing power need particular attention in future wireless devices?'' IEEE Circuits Syst. Mag., vol. 17, no. 1, pp. 12–26, 1st Quart., 2017.
[6] F. Pereira, R. Correia, P. Pinho, S. I. Lopes, and N. B. Carvalho, ''Challenges in resource-constrained IoT devices: Energy and communication as critical success factors for future IoT deployment,'' Sensors, vol. 20, no. 22, p. 6420, Nov. 2020.
[7] S. Barbarossa, S. Sardellitti, and P. D. Lorenzo, ''Communicating while computing: Distributed mobile cloud computing over 5G heterogeneous networks,'' IEEE Signal Process. Mag., vol. 31, no. 6, pp. 45–55, Nov. 2014.
[8] H. T. Dinh, C. Lee, D. Niyato, and P. Wang, ''A survey of mobile cloud computing: Architecture, applications, and approaches,'' Wireless Commun. Mobile Comput., vol. 13, no. 18, pp. 1587–1611, Dec. 2013.
[9] C. Arun and K. Prabu, ''Applications of mobile cloud computing: A survey,'' in Proc. Int. Conf. Intell. Comput. Control Syst. (ICICCS), 2017, pp. 1037–1041.
[10] Y. C. Hu, M. Patel, D. Sabella, N. Sprecher, and V. Young, ''Mobile edge computing—A key technology towards 5G,'' ETSI White Paper, vol. 11, no. 11, pp. 1–16, 2015.
[11] Y. Mao, C. You, J. Zhang, K. Huang, and K. B. Letaief, ''A survey on mobile edge computing: The communication perspective,'' IEEE Commun. Surveys Tuts., vol. 19, no. 4, pp. 2322–2358, 4th Quart., 2017.
[12] P. Mach and Z. Becvar, ''Mobile edge computing: A survey on architecture and computation offloading,'' IEEE Commun. Surveys Tuts., vol. 19, no. 3, pp. 1628–1656, 3rd Quart., 2017.
[13] A. Filali, A. Abouaomar, S. Cherkaoui, A. Kobbane, and M. Guizani, ''Multi-access edge computing: A survey,'' IEEE Access, vol. 8, pp. 197017–197046, 2020.
[14] L. M. Vaquero and L. Rodero-Merino, ''Finding your way in the fog: Towards a comprehensive definition of fog computing,'' ACM SIGCOMM Comput. Commun. Rev., vol. 44, no. 5, pp. 27–32, Oct. 2014.
[15] Y. Shi, K. Yang, T. Jiang, J. Zhang, and K. B. Letaief, ''Communication-efficient edge AI: Algorithms and systems,'' IEEE Commun. Surveys Tuts., vol. 22, no. 4, pp. 2167–2191, Jul. 2020.
[16] E. El Haber, T. M. Nguyen, and C. Assi, ''Joint optimization of computational cost and devices energy for task offloading in multi-tier edge-clouds,'' IEEE Trans. Commun., vol. 67, no. 5, pp. 3407–3421, May 2019.
[17] C. Zhang, P. Patras, and H. Haddadi, ''Deep learning in mobile and wireless networking: A survey,'' IEEE Commun. Surveys Tuts., vol. 21, no. 3, pp. 2224–2287, 3rd Quart., 2019.
[18] R. Stahl, Z. Zhao, D. Mueller-Gritschneder, A. Gerstlauer, and U. Schlichtmann, ''Fully distributed deep learning inference on resource-constrained edge devices,'' in Embedded Computer Systems: Architectures, Modeling, and Simulation, D. Pnevmatikatos, M. Pelcat, and M. Jung, Eds. Cham, Switzerland: Springer, 2019, pp. 77–90.
[19] N. D. Lane, S. Bhattacharya, A. Mathur, P. Georgiev, C. Forlivesi, and F. Kawsar, ''Squeezing deep learning into mobile and embedded devices,'' IEEE Pervas. Comput., vol. 16, no. 3, pp. 82–88, Jul. 2017.
[20] K. B. Letaief, W. Chen, Y. Shi, J. Zhang, and Y.-J. A. Zhang, ''The roadmap to 6G: AI empowered wireless networks,'' IEEE Commun. Mag., vol. 57, no. 8, pp. 84–90, Aug. 2019.
[21] J. Park, S. Samarakoon, A. Elgabli, J. Kim, M. Bennis, S.-L. Kim, and M. Debbah, ''Communication-efficient and distributed learning over wireless networks: Principles and applications,'' 2020, arXiv:2008.02608. [Online]. Available: http://arxiv.org/abs/2008.02608
[22] A. Paris, H. Mirghasemi, I. Stupia, and L. Vandendorpe, ''Energy-efficient edge-facilitated wireless collaborative computing using map-reduce,'' in Proc. IEEE 20th Int. Workshop Signal Process. Adv. Wireless Commun. (SPAWC), Jul. 2019, pp. 1–5.
[23] S. Li, Q. Yu, M. A. Maddah-Ali, and A. S. Avestimehr, ''A scalable framework for wireless distributed computing,'' IEEE/ACM Trans. Netw., vol. 25, no. 5, pp. 2643–2654, Oct. 2017.
[24] M. Kiamari, C. Wang, and A. S. Avestimehr, ''Coding for edge-facilitated wireless distributed computing with heterogeneous users,'' in Proc. 51st Asilomar Conf. Signals, Syst., Comput., Oct. 2017, pp. 536–540.
[25] S. Li, M. A. Maddah-Ali, Q. Yu, and A. S. Avestimehr, ''A fundamental tradeoff between computation and communication in distributed computing,'' IEEE Trans. Inf. Theory, vol. 64, no. 1, pp. 109–128, Jan. 2018.
[26] F. Li, J. Chen, and Z. Wang, ''Wireless MapReduce distributed computing,'' 2018, arXiv:1802.00894. [Online]. Available: http://arxiv.org/abs/1802.00894
[27] S. Li, M. A. Maddah-Ali, and A. S. Avestimehr, ''Coding for distributed fog computing,'' IEEE Commun. Mag., vol. 55, no. 4, pp. 34–40, Apr. 2017.
[28] F. Xu and M. Tao, ''Heterogeneous coded distributed computing: Joint design of file allocation and function assignment,'' in Proc. IEEE Global Commun. Conf. (GLOBECOM), Dec. 2019, pp. 1–6.
[29] J. Dean and S. Ghemawat, ''MapReduce: Simplified data processing on large clusters,'' in Proc. 6th Symp. Operating Syst. Design Implement. (OSDI), San Francisco, CA, USA, 2004, pp. 137–150.
[30] S. Chen and W. S. Schlosser, ''Map-reduce meets wider varieties of applications,'' Intel, Pittsburgh, PA, USA, Tech. Rep. IRP-TR-08-05, 2008.
[31] R. P. Elespuru, S. Shakya, and S. Mishra, ''MapReduce system over heterogeneous mobile devices,'' in Software Technologies for Embedded and Ubiquitous Systems, S. Lee and P. Narasimhan, Eds. Berlin, Germany: Springer, 2009, pp. 168–179.
[32] A. Dou, V. Kalogeraki, D. Gunopulos, T. Mielikainen, and V. H. Tuulos, ''Misco: A MapReduce framework for mobile systems,'' in Proc. 3rd Int. Conf. Pervas. Technol. Rel. Assistive Environ. (PETRA). New York, NY, USA: ACM, 2010, pp. 1–8.
[33] X. Cao, F. Wang, J. Xu, R. Zhang, and S. Cui, ''Joint computation and communication cooperation for energy-efficient mobile edge computing,'' IEEE Internet Things J., vol. 6, no. 3, pp. 4188–4200, Jun. 2019.
[34] C. You and K. Huang, ''Exploiting non-causal CPU-state information for energy-efficient mobile cooperative computing,'' IEEE Trans. Wireless Commun., vol. 17, no. 6, pp. 4104–4117, Jun. 2018.
[35] Z. Sheng, C. Mahapatra, V. C. M. Leung, M. Chen, and P. K. Sahu, ''Energy efficient cooperative computing in mobile wireless sensor networks,'' IEEE Trans. Cloud Comput., vol. 6, no. 1, pp. 114–126, Jan. 2018.
[36] D. Wu, F. Wang, X. Cao, and J. Xu, ''Wireless powered user cooperative computation in mobile edge computing systems,'' 2018, arXiv:1809.01430. [Online]. Available: http://arxiv.org/abs/1809.01430
[37] A. Mtibaa, A. Fahim, K. A. Harras, and M. H. Ammar, ''Towards resource sharing in mobile device clouds: Power balancing across mobile devices,'' SIGCOMM Comput. Commun. Rev., vol. 43, no. 4, pp. 51–56, Aug. 2013.
[38] L. Pu, X. Chen, J. Xu, and X. Fu, ''D2D fogging: An energy-efficient and incentive-aware task offloading framework via network-assisted D2D collaboration,'' IEEE J. Sel. Areas Commun., vol. 34, no. 12, pp. 3887–3901, Dec. 2016.
[39] K. Yang, Y. Shi, and Z. Ding, ''Low-rank optimization for data shuffling in wireless distributed computing,'' in Proc. IEEE Int. Conf. Acoust., Speech Signal Process. (ICASSP), Apr. 2018, pp. 6343–6347.
[40] M. S. Andersen, J. Dahl, and L. Vandenberghe, CVXOPT: A Python Package for Convex Optimization.

ANTOINE PARIS (Member, IEEE) received the B.Sc. and M.Sc. degrees in electrical engineering from UCLouvain, Louvain-la-Neuve, Belgium, in 2016 and 2018, respectively. He is currently an F.R.S.-FNRS Research Fellow at the Institute of Information and Communication Technologies, Electronics and Applied Mathematics (ICTEAM), University of Louvain. His research interests include collaborative computing, fog computing, and wireless sensor networks.

HAMED MIRGHASEMI (Member, IEEE) received the B.Sc. and M.Sc. degrees in electrical engineering from the Sharif University of Technology, Tehran, Iran, in 2006 and 2009, respectively, and the Ph.D. degree from Telecom ParisTech, Paris, France, in 2014. He is currently a Postdoctoral Researcher with UCLouvain, Louvain-la-Neuve, Belgium. His research interests include information theory, stochastic optimization, and deep learning.

IVAN STUPIA (Member, IEEE) received the Ph.D. degree from the University of Pisa, Italy, in 2009. In 2011, he joined the Institute of Information and Communication Technologies, Electronics and Applied Mathematics (ICTEAM), University of Louvain. He was involved in various European and national projects on wireless communications in different fields of application, such as cellular systems, wireless sensor networks, security, and aeronautical communications. His academic experience is corroborated by more than 50 publications in international journals and proceedings of international conferences. His general research interests include the areas of wireless communications and signal processing, with special emphasis on the application of advanced mathematical tools to the design of self-adaptive/self-organizing wireless networks, energy harvesting and wireless power transfer for Internet of Things (IoT) services, and the green and cost-effective design of wireless networks.

LUC VANDENDORPE (Fellow, IEEE) was born in Mouscron, Belgium, in 1962. He received the degree (summa cum laude) in electrical engineering and the Ph.D. degree in applied science from the Catholic University of Louvain (UCLouvain), Louvain-la-Neuve, Belgium, in 1985 and 1991, respectively. Since 1985, he has been with the Communications and Remote Sensing Laboratory, UCL, where he first worked in the field of bit-rate reduction techniques for video coding. In 1992, he was a Visiting Scientist and a Research Fellow with the Telecommunications and Traffic Control Systems Group, Delft University of Technology, The Netherlands, where he worked on spread spectrum techniques for personal communications systems.
From October 1992 to August 1997, he was a Senior Research Associate with the Belgian NSF, UCL, and an invited Assistant Professor. He is currently a Full Professor with the Institute for Information and Communication Technologies, Electronics, and Applied Mathematics, UCLouvain. His research interests include digital communication systems and more precisely resource allocation for OFDM(A)-based multicell systems, MIMO and distributed MIMO, sensor networks, UWB-based positioning, and wireless power transfer. He is or has been a TPC Member for numerous IEEE conferences, such as VTC, GLOBECOM, SPAWC, ICC, PIMRC, and WCNC. He was an Elected Member of the Signal Processing for Communications Committee, from 2000 to 2005, and the Sensor Array and Multichannel Signal Processing Committee of the Signal Processing Society, from 2006 to 2008 and from 2009 to 2011. He was the Chair of the IEEE Benelux Joint Chapter on communications and vehicular technology, from 1999 to 2003. He was the Co-Technical Chair for IEEE ICASSP 2006. He served as an Editor for synchronization and equalization of IEEE TRANSACTIONS ON COMMUNICATIONS, from 2000 to 2002, and an Associate Editor for IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, from 2003 to 2005, and IEEE TRANSACTIONS ON SIGNAL PROCESSING, from 2004 to 2006.
{ "disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.1109/ACCESS.2021.3094888?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1109/ACCESS.2021.3094888, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "CCBYNCND", "status": "GOLD", "url": "https://ieeexplore.ieee.org/ielx7/6287639/9312710/09475030.pdf" }
2020
[ "JournalArticle" ]
true
2020-03-31T00:00:00
[ { "paperId": "41703df035d404823d4d6833030c0111c528309e", "title": "Challenges in Resource-Constrained IoT Devices: Energy and Communication as Critical Success Factors for Future IoT Deployment" }, { "paperId": "186b8261221e8a87af1bad44fcb7c3e159be3e0c", "title": "Communication-Efficient and Distributed Learning Over Wireless Networks: Principles and Applications" }, { "paperId": "47a8355e76c3675c481f135cce3a5911c74aeac3", "title": "Communication-Efficient Edge AI: Algorithms and Systems" }, { "paperId": "3edf3fb8f8296374ed815a6f742ec79bb12379b0", "title": "Internet of Things is a revolutionary approach for future technology enhancement: a review" }, { "paperId": "0e09d91a2a2f2745cd1161ab4c03b61dbd8b72ae", "title": "Heterogeneous Coded Distributed Computing: Joint Design of File Allocation and Function Assignment" }, { "paperId": "11650dd5388f32372cdf30b6907f73696beb52bc", "title": "The Roadmap to 6G: AI Empowered Wireless Networks" }, { "paperId": "e34825220509aa620fcee5d467ae599522747a67", "title": "Fully Distributed Deep Learning Inference on Resource-Constrained Edge Devices" }, { "paperId": "cdc609c16afd59c7a60c23e580d13fdb7d92a24f", "title": "Energy-Efficient Edge-Facilitated Wireless Collaborative Computing using Map-Reduce" }, { "paperId": "ab480bcce180003181726e8b8ee54102171f63eb", "title": "Joint Optimization of Computational Cost and Devices Energy for Task Offloading in Multi-Tier Edge-Clouds" }, { "paperId": "9e568a6b04a4381999a606a0ab542ee726a2336b", "title": "Wireless Powered User Cooperative Computation in Mobile Edge Computing Systems" }, { "paperId": "cdb2a32822a21dba84568038a817e71979036766", "title": "Joint Computation and Communication Cooperation for Energy-Efficient Mobile Edge Computing" }, { "paperId": "4fc5a04ffa85a9be13779b2f8bfbe5298189adbd", "title": "Low-Rank Optimization for Data Shuffling in Wireless Distributed Computing" }, { "paperId": "8a5d0579590465494c9aba58a857af43b190b6a6", "title": "Deep Learning in Mobile and Wireless Networking: A Survey" }, { "paperId": "3c1d360b06f1273163985e72ab9a45bae36cee4a", "title": "How Can Heterogeneous Internet of Things Build Our Future: A Survey" }, { "paperId": "318a48935f0036db36520ce1c1f78798ee9793af", "title": "Wireless MapReduce Distributed Computing" }, { "paperId": "5e00a8bd1bb34c809688b3ec76975046b15e45a1", "title": "Coding for edge-facilitated wireless distributed computing with heterogeneous users" }, { "paperId": "d53533b4f24504ecb04612a0d08c46db966caa2f", "title": "Squeezing Deep Learning into Mobile and Embedded Devices" }, { "paperId": "6e2ecd04dd799b2b0cef1627eeab0347d320d120", "title": "Applications of mobile cloud computing: A survey" }, { "paperId": "90b31e90d4b7248580a4d41c1a504a16b3834702", "title": "Exploiting Non-Causal CPU-State Information for Energy-Efficient Mobile Cooperative Computing" }, { "paperId": "51aa27a83c888718e30374369ac9cd54927dd0e0", "title": "Coding for Distributed Fog Computing" }, { "paperId": "afd9dadac8d3354615e26b2038887ecdbc9e33e5", "title": "Mobile Edge Computing: A Survey on Architecture and Computation Offloading" }, { "paperId": "872734bfa9f84d4e06b16911ba9ca4b9456d7a52", "title": "Why Will Computing Power Need Particular Attention in Future Wireless Devices?" 
}, { "paperId": "8c5293da3ad1a463cb9694edfbf1bf19b8cbd698", "title": "A Survey on Mobile Edge Computing: The Communication Perspective" }, { "paperId": "c141e32b6b3b52583cc78091550548c7a6222f8f", "title": "D2D Fogging: An Energy-Efficient and Incentive-Aware Task Offloading Framework via Network-assisted D2D Collaboration" }, { "paperId": "a93e7aa3705a6b02306dfd113274ff2c006497d1", "title": "Caching at the wireless edge: design aspects, challenges, and future directions" }, { "paperId": "4562404f32057d884a1a04e132513bba0871ce87", "title": "A Scalable Framework for Wireless Distributed Computing" }, { "paperId": "aa245959d84734f92d1e8f179417eb7226868e62", "title": "Fog and IoT: An Overview of Research Opportunities" }, { "paperId": "d19059600752f284f5f715675cb401ecc3ca7879", "title": "A Fundamental Tradeoff Between Computation and Communication in Distributed Computing" }, { "paperId": "d588c79ff7a6dd0377f44ca78534869a9a5b2c5d", "title": "Communicating While Computing: Distributed mobile cloud computing over 5G heterogeneous networks" }, { "paperId": "ede4ffff2968c84ea26bf64f6f26f670b8ab3824", "title": "Finding your Way in the Fog: Towards a Comprehensive Definition of Fog Computing" }, { "paperId": "6da3d71dc601fd9cd6b4e84bc947de5474c5873b", "title": "A survey of mobile cloud computing: architecture, applications, and approaches" }, { "paperId": "329ec922c147bd64ec3cb8d06261d9ca800b0cdc", "title": "Towards resource sharing in mobile device clouds: power balancing across mobile devices" }, { "paperId": "f72847ed440f0ce372880066b24544dab3d6dbe5", "title": "Misco: a MapReduce framework for mobile systems" }, { "paperId": "54c34971dc27ab3bb03bf4adaba182b71cc72125", "title": "MapReduce System over Heterogeneous Mobile Devices" }, { "paperId": "c35bc2704272dda5fe58f53fe2c134839016d6d9", "title": "Handling Heterogeneity in an IoT Infrastructure" }, { "paperId": "066afd1ed7940d794ae19e62645ca66053ce6678", "title": "Energy Efficient Cooperative Computing in Mobile Wireless Sensor Networks" }, { "paperId": "a8e0b0b2fd084b9c42438fc3c9dbdf497fc4ea66", "title": "Multi-access Edge Computing: A Survey" }, { "paperId": null, "title": "-FNRS Research Fellow at the Institute of Information and Communication Technologies" }, { "paperId": null, "title": "‘‘Mobile edge computing—A key technology towards 5G,’’" }, { "paperId": "627be67feb084f1266cfc36e5aed3c3e7e6ce5f0", "title": "MapReduce: simplified data processing on large clusters" }, { "paperId": null, "title": "Map-Reduce Meets Wider Varieties of Applications" }, { "paperId": "3239ccaf508fa33cb22439e89cb746a55e395ee3", "title": "Software Technologies for Embedded and Ubiquitous Systems" }, { "paperId": null, "title": "CVXOPT: A Python Package for Convex Optimization" } ]
22,289
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/026730ead96211ee8e954e9df5c220191fa2d329
[ "Computer Science" ]
0.883916
Comparative Performance of Machine Learning Algorithms for Cryptocurrency Forecasting
026730ead96211ee8e954e9df5c220191fa2d329
Indonesian Journal of Electrical Engineering and Computer Science
[ { "authorId": "101995247", "name": "Nor Azizah Hitam" }, { "authorId": "2717236", "name": "A. R. Ismail" } ]
{ "alternate_issns": null, "alternate_names": [ "Indones J Electr Eng Comput Sci" ], "alternate_urls": null, "id": "bb21160f-0f31-4e34-abab-715b95a870a2", "issn": "2502-4752", "name": "Indonesian Journal of Electrical Engineering and Computer Science", "type": "journal", "url": "http://www.iaescore.com/journals/index.php/IJEECS" }
Machine Learning is part of Artificial Intelligence that has the ability to make future forecastings based on the previous experience. Methods has been proposed to construct models including machine learning algorithms such as Neural Networks (NN), Support Vector Machines (SVM) and Deep Learning. This paper presents a comparative performance of Machine Learning algorithms for cryptocurrency forecasting. Specifically, this paper concentrates on forecasting of time series data. SVM has several advantages over the other models in forecasting, and previous research revealed that SVM provides a result that is almost or close to actual result yet also improve the accuracy of the result itself. However, recent research has showed that due to small range of samples and data manipulation by inadequate evidence and professional analyzers, overall status and accuracy rate of the forecasting needs to be improved in further studies. Thus, advanced research on the accuracy rate of the forecasted price has to be done.
**Indonesian Journal of Electrical Engineering and Computer Science** Vol. 11, No. 3, September 2018, pp. 1121-1128, ISSN: 2502-4752, DOI: 10.11591/ijeecs.v11.i3.pp1121-1128

# Comparative Performance of Machine Learning Algorithms for Cryptocurrency Forecasting

**Nor Azizah Hitam, Amelia Ritahani Ismail** Department of Computer Science, International Islamic University Malaysia (IIUM), Kuala Lumpur, Malaysia

**Article Info** **_Article history:_** Received May 28, 2018; Revised Jun 5, 2018; Accepted Jun 11, 2018. **_Keywords:_** Artificial Intelligence; Machine Learning; Support Vector Machines; Neural Networks; Deep Learning. **_Corresponding Author:_** Amelia Ritahani Ismail, Department of Computer Science, International Islamic University Malaysia (IIUM), Kuala Lumpur, Malaysia. E-mail: amelia@iium.edu.my

**ABSTRACT** Machine learning is the part of artificial intelligence that has the ability to make future forecasts based on previous experience. Methods have been proposed to construct models using machine learning algorithms such as Neural Networks (NN), Support Vector Machines (SVM) and deep learning. This paper presents a comparative performance study of machine learning algorithms for cryptocurrency forecasting; specifically, it concentrates on forecasting time series data. SVM has several advantages over the other models in forecasting, and previous research revealed that SVM provides results close to the actual values while also improving the accuracy of the result itself. However, recent research has shown that, due to small sample ranges and data manipulation stemming from inadequate evidence and professional analyzers, the overall status and accuracy rate of such forecasts need to be improved in further studies. Thus, further research on the accuracy rate of the forecast price has to be done. _Copyright © 2018 Institute of Advanced Engineering and Science. All rights reserved._

**1.** **INTRODUCTION** Forecasting future values or prices of experimental time series plays a vital role in almost all fields of study, including economics, science and engineering, finance, business, meteorology and telecommunication [1]. Cryptocurrency is an alternative medium of exchange comprising over 1441 (as of January 2018) decentralized crypto coin types. Applying machine learning algorithms to cryptocurrency is a new field with limited research. In general, such a system can be applied to any supervised machine learning problem, and in return it will provide a description relevant to samples both in and out of the dataset. There are numerous types of cryptocurrency, including Bitcoin, Litecoin, Ethereum, Nem, Ripple, Iota, Stellar and others; the cryptographic foundation of each crypto coin is what makes it vital. Because the exchange rates of cryptocurrencies are notoriously volatile, we attempt to model an algorithm that can be used in trading numerous cryptocurrencies. To show the accuracy of the prices predicted by the proposed methodology, several datasets are used as explanatory examples; the cryptocurrencies compared are Bitcoin, Ethereum, Litecoin, Nem, Ripple and Stellar. This paper uses the mean absolute percentage error (MAPE) to evaluate the proposed models. The outline of this paper is as follows. Section 1 introduces some basic notions of cryptocurrencies and machine learning algorithms.
Section 2 discusses the types of cryptocurrency, the two largest alternative blockchain technologies, Litecoin (LTC) and Ethereum (ETH), and the purpose behind each development. Section 3 presents machine learning algorithms and the three most widely used ones: Artificial Neural Networks (ANN), Support Vector Machines (SVM) and deep learning. Section 4 explains the experiments and their results for all models.

**1.1.** **Cryptocurrency** Litecoin (LTC) and Ethereum (ETH) are among the largest alternative blockchain technologies, known as altcoins, and were invented after Bitcoin (BTC). Altcoins may have been developed for different purposes but use a common methodology based on a decentralized P2P network, under the assumption of no network failure and no Internet interruption [2-5]. Research in the cryptocurrency field is still limited. Most research in this field focuses on a single cryptocurrency rather than on broader areas such as technological advancement, government participation in market regulation, and market development [6]. This section covers six types of cryptocurrency, beginning with Bitcoin, Ethereum, Litecoin, Nem and Ripple, followed by Stellar. In the succeeding section, we review previous studies on machine learning, Support Vector Machines (SVM), Artificial Neural Networks (ANNs) and deep learning applied to forecasting. Bitcoin, introduced in 2008, is a peer-to-peer (P2P) payment cash system and a non-regulated digital currency with no legal tender status. It is a cryptocurrency because cryptographic functions secure the creation of coins and the transfer of money. In recent years, Bitcoin has become the best-known currency in terms of trading volume, which makes it the most promising financial medium for investors [7]. Each transaction is locked, as the identities of the sender and the receiver and the volume of the transaction are all encrypted [6]. Ethereum is a decentralized blockchain-based technology that runs a Turing-complete language to build and execute smart contracts or distributed systems [8-9]. The value token of the network is called ether (ETH). Ethereum was introduced by Vitalik Buterin in 2013 and funded a year later with US$18 million worth of bitcoins raised through an online public crowd sale [8]. Ether has no bound on its circulation and can be traded on cryptocurrency exchanges; it is not meant to be a payment system, its intention being merely to be used within the Ethereum network [1, 9]. Litecoin (LTC), invented by Charles Lee, was released in October 2011 and uses a technology similar to Bitcoin's. Its block generation time is four times shorter (2.5 minutes instead of 10 minutes per block), its maximum supply of 84 million coins is four times higher than Bitcoin's, and it has adopted a different hashing algorithm [9-10]. Litecoin is considered the 'silver standard' of crypto coins and has become the second most accepted coin among both miners and exchanges [9]. It uses the Scrypt hashing algorithm, in contrast to SHA-256, and was developed to better the Bitcoin network's transaction confirmation speed, using an algorithm resilient to the advancement of hardware mining technologies. NEM is a blockchain notarization platform, also known as a peer-to-peer platform, that provides services such as an online payment and messaging system.
With its jointly owned notarization, NEM became the first public/private blockchain combination [8]. Ripple, an open-source digital currency produced by Jed McCaleb and his partner Chris Larsen, is a distributed peer-to-peer network payment medium that is controlled and managed by a single organization and offers an alternative security mechanism [6, 8]. The development of Ripple is based on a Byzantine consensus protocol, and the maximum number of Ripple coins is 100 million [8]. Stellar, like Ripple, offers an entirely different security instrument and is likewise implemented on a Byzantine consensus protocol. Stellar has implemented new technology for processing financial transactions, including open source code, distributed operation and unlimited ownership [6, 11].

**1.2.** **Machine Learning** To succeed in trading, mastering analysis is very important. Future values can be analyzed in two different ways: technical analysis and fundamental analysis. Technical analysis uses trading information from the market, such as price and trading volume, to forecast the future price, while fundamental analysis uses information from outside the market, such as the economic situation, interest rates and geopolitical issues, to forecast the future direction [11]. Many investors focus on technical analysis, some focus on fundamental analysis, and some focus on the overlap between the two. This paper presents technical analysis carried out by applying machine learning algorithms. Machine learning has been established as a serious rival to classical statistical models in the forecasting world for more than two decades [1], [12]. The two most widely used algorithms for forecasting price movement are Artificial Neural Networks (ANNs) and Support Vector Machines (SVM), and each has its own pattern of learning [11, 13]. ANNs have been widely used for prediction in securities; a number of issues with ANNs, including the selection of parameters and of the training set, have been discussed by researchers [14]. According to [1], the embedding formulation suggests that when a historical dataset S is available, one-step forecasting can be considered supervised learning. Supervised learning is the task of deriving a function from training data consisting of a set of input variables and output variables, the outputs being considered dependent on the inputs. One-step forecasting can be applied when a mapping model exists [1]. In one-step forecasting, the n previous values of the series are available, so forecasting can be performed as a generic regression problem. Following the general approach to modelling an input/output mapping, which relies on the availability of experimental pairs (the training set), the training set is built from the historical series S by creating the [(N - n - 1) x n] input data matrix X, whose rows are the temporal patterns (y_{t-n}, ..., y_{t-1}) for t = n+1, ..., N-1, and the [(N - n - 1) x 1] output vector of the corresponding next values,

y = (y_{n+1}, y_{n+2}, ..., y_{N-1})^T. (1)

For the sake of simplicity, a lag time of d = 0 is assumed. Henceforth, we will refer to the i-th row of X, which is essentially a temporal pattern of the series, as the (reconstructed) state of the series at time t - i + 1. The approximator f̂ then returns the prediction of the value of the time series at time t + 1 as a function of the n previous values [1].

Figure 1. Proposed methodology (diagram omitted; in the figure, a rectangular box containing z^-1 represents a unit delay operator, i.e., y_{t-1} = z^-1 y_t).
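The embedding just described is easy to make concrete. The sketch below is a minimal illustration assuming NumPy and an arbitrary toy price series; the function name make_supervised and all the numbers are invented for the example. It recasts one-step forecasting as a generic regression problem by building the lagged input matrix and the next-value target.

```python
import numpy as np

def make_supervised(series: np.ndarray, n: int):
    """Recast one-step forecasting as regression: each row of X holds the
    n previous values of the series, and y holds the next value."""
    X = np.array([series[t - n:t] for t in range(n, len(series))])
    y = series[n:]
    return X, y

# Toy daily closing prices (illustrative numbers only).
prices = np.array([100.0, 101.5, 99.8, 102.3, 103.1, 104.0, 102.7])
X, y = make_supervised(prices, n=3)
print(X.shape, y.shape)  # (4, 3) (4,): 4 input/output training pairs
```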
**1.2.1.** **Support Vector Machine (SVM)** The Support Vector Machine (SVM) method, or classifier, was introduced as an induction principle that can avoid over-fitting the data during assimilation of the training dataset [15], and it is known as the most flexible technique for constructing explicit and accurate boundaries [16], [17]. SVM works very well in various applications, provides fast training results and is easy to use [18]. Over time, SVM has been applied to problems ranging from pattern recognition to fault diagnosis [15, 19]. It gives nonlinear and solid solutions by applying kernel functions that map the input space into a higher-dimensional feature space [20]. SVM has many benefits, including strong generalization and good performance on small datasets, and it has proved useful in many fields, including pattern classification [14]. SVM produces a classification hyperplane that separates two classes of data with maximum margin. The standard SVM model is as follows:

min_{w, b, ξ} (1/2)||w||^2 + C Σ_{i=1}^{m} ξ_i, subject to y_i (w · x_i + b) ≥ 1 - ξ_i, ξ_i ≥ 0, i = 1, ..., m. (2)

Another important point of discussion is the choice among the types of SVM. SVM offers linear and nonlinear models. Linear SVMs outperform nonlinear ones in terms of speed and execution time, but they underperform on complex datasets that contain many training examples but few features, while nonlinear SVMs, although losing some explanatory power, seem to perform steadily across various problems and have become the preferred choice over linear SVMs [18].

**1.2.2.** **Artificial Neural Networks (ANNs)** A common neural network that performs deep learning in its hidden layers is called an artificial neural network [21]. A standard ANN comprises an input layer, hidden layers and an output layer [22]. It is a highly interconnected system of interrelated and interacting processing nodes, or neurons [23], which works like a human brain and processes information through the interaction of a number of straightforward processing elements [23]. There are input and output neurons in this environment: input neurons are triggered by instruments sensing the environment, other neurons are triggered through weighted connections from previously activated neurons, and some neurons can affect the environment by activating actions [24]. Depending on the problem and on how the neurons are linked, such behavior may require long connecting chains of computational stages, where each stage revises the aggregate activation of the network.

**1.2.3.** **Deep Learning (DL)** Deep learning is considered a diverse set of methods in neural networks [25], aimed primarily at obtaining the most precise result across many processing stages [24]. DL is capable of producing compelling results based on multi-layer feature extraction [25]. The models explained in this section apply a non-linear function to the hidden units and enable a richer model that is capable of learning more abstract representations when modules are stacked on top of each other to form a deep network [26]. The goal of a deep network is to design structures at the lower layers that separate the factors of variation in the input data and to chain the representations at the higher layers; the drawback of training with multiple hidden layers lies in how the error signal is backpropagated [26].
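To illustrate how linear and nonlinear SVMs of the kind discussed above can be trained and compared on a common feature set, here is a minimal sketch using scikit-learn; the synthetic features, labels and parameter choices are assumptions for illustration, as the paper does not state which implementation it used.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# Toy features (e.g., open/high/low prices) and toy up/down labels.
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.3, size=200) > 0).astype(int)

for kernel in ("linear", "rbf"):
    # Scaling matters for SVMs; the RBF kernel maps inputs into a
    # higher-dimensional feature space, as discussed in the text.
    clf = make_pipeline(StandardScaler(), SVC(kernel=kernel, C=1.0))
    clf.fit(X[:150], y[:150])
    print(kernel, "accuracy:", clf.score(X[150:], y[150:]))
```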
Table 1. Variable description

| Variable | Description |
|---|---|
| Open Price | The first price of a given cryptocurrency in a day's trading |
| Close Price | The price of the last transaction for a given cryptocurrency at the end of a day's trading |
| High Price | The highest price that was paid for a cryptocurrency during a day's trading |
| Low Price | The lowest price a cryptocurrency reached in a day's trading |

**2.** **PROPOSED METHODOLOGY** In this paper, we consider time series data based on five years of daily history as inputs for all models; the exact span may vary with the availability of datasets from the source. The data are prepared from the daily open, close, high and low prices of daily trading for all six types of cryptocurrency, downloaded from the market capitalization database, and range from 2013 through 2018.

**2.1.** **Data Description** The main purpose of this paper is to obtain the most accurate forecast price using the methods mentioned above. Bitcoin (BTC) is the first digital currency in the market capitalization list, and its data run from March 2013 through January 2018. The training data for Bitcoin run from 28 March 2013 to 16 January 2017, followed by Ethereum from 7 August 2015 to 16 January 2017, Litecoin from 28 April 2013 to 16 January 2017, Nem from 1 April 2015 to 16 January 2017, Ripple from 4 August 2013 to 16 January 2017, and Stellar from 5 August 2014 to 16 January 2017. The testing data for all selected types of cryptocurrency run from 17 January 2017 through 16 January 2018. Table 2 shows the training and testing datasets in our time series data: the first segment is the training set (the number of values is given under #Observations), and several classifiers are then used to predict the test data in the second segment (364 values in each testing set).

Table 2. The training and testing dataset in our time series data

| Cryptocurrency | Training from | Training to | #Observations | Test from | Test to | #Observations |
|---|---|---|---|---|---|---|
| Bitcoin (BTC, XBT) | 28-Mar-13 | 16-Jan-17 | 1388 | 17-Jan-17 | 16-Jan-18 | 364 |
| Ether or "Ethereum" (ETH) | 7-Aug-15 | 16-Jan-17 | 526 | 17-Jan-17 | 16-Jan-18 | 364 |
| Litecoin (LTC) | 28-Apr-13 | 16-Jan-17 | 1358 | 17-Jan-17 | 16-Jan-18 | 364 |
| Nem (XEM) | 1-Apr-15 | 16-Jan-17 | 657 | 17-Jan-17 | 16-Jan-18 | 364 |
| Ripple (XRP) | 4-Aug-13 | 16-Jan-17 | 1262 | 17-Jan-17 | 16-Jan-18 | 364 |
| Stellar (XLM) | 5-Aug-14 | 16-Jan-17 | 896 | 17-Jan-17 | 16-Jan-18 | 364 |
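A chronological split like the one in Table 2 can be expressed in a few lines. The sketch below assumes pandas and a hypothetical CSV file with a date column; btc_daily.csv and the column names are invented for the example.

```python
import pandas as pd

# Hypothetical daily OHLC file with a 'date' column (names are assumed).
df = pd.read_csv("btc_daily.csv", parse_dates=["date"]).sort_values("date")

# Train/test boundaries taken from the Bitcoin row of Table 2.
train = df[(df["date"] >= "2013-03-28") & (df["date"] <= "2017-01-16")]
test = df[(df["date"] >= "2017-01-17") & (df["date"] <= "2018-01-16")]
print(len(train), "training rows,", len(test), "test rows")
```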
**3.** **RESULTS AND ANALYSIS** The results section begins with performance measures for each cryptocurrency type according to classifier; these serve as a control for the rest of the discussion. The analysis is separated into two experiments: (i) performance measures by various classifiers, and (ii) forecast cryptocurrency values produced by the machine learning algorithms versus the actual values. Table 3 shows the performance accuracy of four classifiers on the cryptocurrency market capitalization data. The maximum value is 95.5%, which means that any alphas over 95.5% have a p-value of 0.01 or less.

Table 3. Performance measures by various classifiers (performance accuracy in %; the best result per cryptocurrency is in bold)

| Classifier | Bitcoin | Ethereum | Litecoin | Nem | Ripple | Stellar |
|---|---|---|---|---|---|---|
| SVM | 78.90 | **95.50** | **82.40** | 47.70 | 70.00 | 58.70 |
| ANNs | **79.40** | 78.00 | 75.80 | **77.80** | 81.40 | 89.80 |
| DL | 61.90 | 69.40 | 62.80 | 57.20 | 60.90 | 70.70 |
| BoostedNN | 81.20 | 81.60 | 72.20 | 77.40 | **81.50** | **92.80** |

Several different classifiers were trained with the same set of features, and the datasets were evaluated using classification accuracy. The classifiers generated by the different methods are all compared on the same dataset, so training and testing are fair for all of them. The results for the classifiers with the best performance on the test set are reported. The results show that the SVM classifier works best for Ethereum, followed by Litecoin, while ANN works best for Bitcoin, followed by Nem; Ripple and Stellar achieve their best performance accuracy with BoostedNN. Among all of them, however, the SVM classifier performs best, with a performance accuracy of 95.5%. For comparability, the same datasets and the same 364-day period were chosen for all classifiers. Performance can be seen in Figures 2-7: the SVM significantly outperformed the other classifiers. This result is further explored using the mean absolute percentage error (MAPE). The SVM's MAPE of 0.31% is the lowest, so the SVM is considered a reliable forecasting model for these six selected cryptocurrencies.

Figures 2-7 (plots omitted) show that the SVM forecasts are comparable to the actual values of Bitcoin, Litecoin, Ripple, Ethereum, Nem and Stellar, respectively, for the period from 17/1/2017 to 16/1/2018.
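The MAPE measure used above is straightforward to compute. The sketch below is a generic illustration with invented numbers, not the paper's evaluation script.

```python
import numpy as np

def mape(actual, forecast) -> float:
    """Mean absolute percentage error, in percent."""
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return float(np.mean(np.abs((actual - forecast) / actual)) * 100.0)

# Toy actual vs. forecast closing prices (illustrative numbers only).
print(round(mape([100, 102, 98, 105], [99.7, 102.4, 98.1, 104.6]), 2))
```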
**4.** **CONCLUSION** This paper focuses on the comparative performance of machine learning algorithms on six cryptocurrencies. The review of cryptocurrency covered six major coins: Bitcoin, Ethereum, Litecoin, Nem, Ripple and Stellar. Previous studies on machine learning, Support Vector Machines (SVM), Artificial Neural Networks (ANNs) and deep learning for forecasting were then explored. First, performance measures were computed to obtain the accuracy of the classifiers on the selected cryptocurrencies, with the results shown in Table 3: SVM outperformed the other classifiers with an accuracy of 95.5%. It became clear that the quality of the training data and the size of the dataset play an important role in successful prediction. Second, the cryptocurrency values forecast by machine learning were compared with the actual values. From this comparative analysis, SVM produced values comparable to the actual values for all cryptocurrencies over the period from 17/1/2017 to 16/1/2018. The result was further explored using the mean absolute percentage error (MAPE), and SVM had the lowest MAPE. Thus, SVM is considered a reliable forecasting model for the selected cryptocurrencies. In future work, the algorithm will be improved with respect to the accuracy of the forecast price, and the SVM will be further optimized to bring its forecasts as close as possible to the actual cryptocurrency values.

**REFERENCES**
[1] Bontempi, G., Taieb, S. B., & Le Borgne, Y.-A. (2013). "Machine Learning Strategies for Time Series Forecasting", 62-77.
[2] Huckle, S., & White, M. (2016). "Socialism and the blockchain". Future Internet, 8(4). https://doi.org/10.3390/fi8040049
[3] Bitcoin. Bitcoin Developer Guide. Available online: https://bitcoin.org/en/developer-guide#block-chain (accessed on 24 January 2018).
[4] Ethereum. Ethereum Project. Available online: https://www.ethereum.org/ (accessed on 24 January 2018).
[5] Litecoin. Litecoin - Open Source P2P Digital Currency. Available online: https://litecoin.org/ (accessed on 24 January 2018).
[6] Farell, R. (2015). "An Analysis of the Cryptocurrency Industry". Wharton Research Scholars Journal, Paper 130. Retrieved from http://repository.upenn.edu/wharton_research_scholars/130
[8] Krause, D. (2017). Bitcoin - A Favourable Instrument for Diversification? A Quantitative Study on the Relations between Bitcoin and Global Stock Markets.
[9] Lee, D., Chuen, K., Guo, L., Wang, Y., & Chian, L. K. (2017). Cryptocurrency: A New Investment Opportunity?, 1-54.
[10] Heid, A. (2013). "Analysis of the Cryptocurrency Marketplace". Retrieved February 15, 2014.
[11] Application, F. A., & Guidelines, G. (2013). Ashesi University College. Office, 1-4.
[12] Chaigusin, S. (2014). "An Application of Decision Tree for Stock Trading Rules: A Case of the Stock Exchange of Thailand". Proceedings of the Eurasia Business Research Conference, June.
[13] Ahmed, N. K., Atiya, A. F., El Gayar, N., & El-Shishiny, H. (2010). "An empirical comparison of machine learning models for time series forecasting". Econometric Reviews, 29(5), 594-621. https://doi.org/10.1080/07474938.2010.481556
[14] Patel, J., Shah, S., Thakkar, P., & Kotecha, K. (2015). "Predicting stock and stock price index movement using Trend Deterministic Data Preparation and machine learning techniques". Expert Systems with Applications, 42(1), 259-268. https://doi.org/10.1016/j.eswa.2014.07.040
[15] Kongsilp, W., Mateus, C., Huang, M., Ting-ting, Z., Wan-yi, C., Maita, A. R. C., … de Carvalho, A. F. (2015). "Prediction of Stock Trading Signal Based on Support Vector Machine". Engineering Computations, 32(1), 445-463.
[16] Zhang, L., & Wang, J. (2015). "Optimizing parameters of support vector machines using team-search-based particle swarm optimization". Engineering Computations, 32(5), 1194-1213. https://doi.org/10.1108/EC-12-2013-0310
[17] Basudhar, A., & Missoum, S. (2010). "An improved adaptive sampling scheme for the construction of explicit boundaries". Structural and Multidisciplinary Optimization, 42(4), 1-13.
[18] Lin, K., Basudhar, A., & Missoum, S. (2012). "Parallel construction of explicit boundaries using support vector machines". Engineering Computations, 30(1), 132-148.
[19] Huerta, R., Corbacho, F., & Elkan, C. (2013). "Nonlinear support vector machines can systematically identify stocks with high and low future returns". Algorithmic Finance, 2(1), 45-58. https://doi.org/10.3233/AF-13016
[20] Baccarini, L. M. R., Rocha e Silva, V. V., de Menezes, B. R., & Caminhas, W. M. (2011). "SVM practical industrial application for mechanical faults diagnostic". Expert Systems with Applications, 38(6), 6980-6984.
[21] Hacib, T., Acikgoz, H., Le Bihan, Y., Mekideche, M. R., Meyer, O., & Pichon, L. (2010). "Support vector machines for measuring dielectric properties of materials". COMPEL: The International Journal for Computation and Mathematics in Electrical and Electronic Engineering, 29(4), 1081-1089. https://doi.org/10.1108/03321641011044497
[22] Nivetha, R. Y. (2017). "Developing a Prediction Model for Stock Analysis", 4-6. https://doi.org/10.1109/ICTACC.2017.11
[23] Borodo, S. M., Shamsuddin, S. M., & Hasan, S. (2016). "Big data platforms and techniques". Indonesian Journal of Electrical Engineering and Computer Science (IJEECS), 1(1), 191-200. https://doi.org/10.11591/ijeecs.v1.i1.pp191-200
[24] Lu, C. J. (2010). "Integrating independent component analysis-based denoising scheme with neural network for stock price prediction". Expert Systems with Applications, 37(10), 7056-7064. https://doi.org/10.1016/j.eswa.2010.03.012
[25] Schmidhuber, J. (2014). "Deep Learning in Neural Networks: An Overview", 1-88.
[26] Borodo, S. M., Shamsuddin, S. M., & Hasan, S. (2016). "Big data platforms and techniques". Indonesian Journal of Electrical Engineering and Computer Science (IJEECS), 1(1), 191-200. https://doi.org/10.11591/ijeecs.v1.i1.pp191-200
[27] Längkvist, M., Karlsson, L., & Loutfi, A. (2014). "A review of unsupervised feature learning and deep learning for time series modeling". Pattern Recognition Letters, 42(C), 11-24. https://doi.org/10.1016/j.patrec.2014.01.008
{ "disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.11591/IJEECS.V11.I3.PP1121-1128?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.11591/IJEECS.V11.I3.PP1121-1128, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "CCBYNC", "status": "HYBRID", "url": "http://ijeecs.iaescore.com/index.php/IJEECS/article/download/13469/9222" }
2018
[]
true
2018-09-01T00:00:00
[ { "paperId": "06ab46ea8d8f29abcf5e910b321d8ab69a1207fa", "title": "Socialism and the Blockchain" }, { "paperId": "0a7f19c6eccef7523acbf204f91a60e97f95f60d", "title": "Optimizing parameters of support vector machines using team-search-based particle swarm optimization" }, { "paperId": "0ac9579e715aded4f20caf1485a1e8c21678668d", "title": "Prediction of Stock Trading Signal Based on Support Vector Machine" }, { "paperId": "af3b7e1e2921aa68f6e00e07589c0d0f585e9f76", "title": "A review of unsupervised feature learning and deep learning for time-series modeling" }, { "paperId": "193edd20cae92c6759c18ce93eeea96afd9528eb", "title": "Deep learning in neural networks: An overview" }, { "paperId": "78852ae4a9b2a14252999032e379e30d014971b3", "title": "Parallel construction of explicit boundaries using support vector machines" }, { "paperId": "ee7dfe9335163564555d270baf538093ec407a02", "title": "Nonlinear support vector machines can systematically identify stocks with high and low future returns" }, { "paperId": "bf77e0c9f8c30684be7b5e31183d373327f977eb", "title": "SVM practical industrial application for mechanical faults diagnostic" }, { "paperId": "7afcb40f4969ba18d756b74ff61ddbbc6a40b180", "title": "Integrating independent component analysis-based denoising scheme with neural network for stock price prediction" }, { "paperId": "38bcea3941f7264b9b9bad0eb10c700d4bc02904", "title": "Support vector machines for measuring dielectric properties of materials" }, { "paperId": "ecab2d2946b26988a208c4593a41dd01b0da15e4", "title": "An improved adaptive sampling scheme for the construction of explicit boundaries" }, { "paperId": "1cd54d46f6fad73a61d961765a0e962ca9bc9fcb", "title": "Bitcoin a favourable instrument for diversification? : A quantitative study on the relations between Bitcoin and global stock markets" }, { "paperId": null, "title": "Developing a Prediction" }, { "paperId": "ec5f9fa3a1422df6cf7df431efc502d39a29e2db", "title": "Big Data Platforms and Techniques" }, { "paperId": "a68100519819845d990d496e9d8827de1394825b", "title": "An Analysis of the Cryptocurrency Industry" }, { "paperId": "81be22bd4cf4e7ebb6ff5aa2a415e6b9aaf712fc", "title": "Predicting stock and stock price index movement using Trend Deterministic Data Preparation and machine learning techniques" }, { "paperId": null, "title": "Bitcoin Developer Guide Ethereum Project Litecoin — Open Source P 2 P Digital Currency" }, { "paperId": null, "title": "An Application of Decision Tree for Stock Trading Rules : A Case ofthe Stock Exchange of Thailand" }, { "paperId": "99a39ea1b9bf4e3afc422329c1d4d77446f060b8", "title": "Machine Learning Strategies for Time Series Forecasting" } ]
6,811
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/026a1ec02e15aacd176148eedb487bbc08edf905
[ "Computer Science" ]
0.864079
Binding and Normalization of Binary Sparse Distributed Representations by Context-Dependent Thinning
026a1ec02e15aacd176148eedb487bbc08edf905
Neural Computation
[ { "authorId": "2851989", "name": "D. Rachkovskij" }, { "authorId": "33568664", "name": "E. Kussul" } ]
{ "alternate_issns": null, "alternate_names": [ "Neural Comput" ], "alternate_urls": [ "http://ieeexplore.ieee.org/servlet/opac?punumber=6720226", "http://www.mitpressjournals.org/loi/neco", "https://www.mitpressjournals.org/loi/neco" ], "id": "69b9bcdd-8229-4a00-a6e0-00f0e99a2bf3", "issn": "0899-7667", "name": "Neural Computation", "type": "journal", "url": "http://cognet.mit.edu/library/journals/journal?issn=08997667" }
null
Neural Computation (2001) v. 13 n. 2, pp. 411-452

# Binding and Normalization of Binary Sparse Distributed Representations by Context-Dependent Thinning

Dmitri A. Rachkovskij, V. M. Glushkov Cybernetics Center, Pr. Acad. Glushkova 40, Kiev 03680, Ukraine, dar@infrm.kiev.ua (preferable contact method)

Ernst M. Kussul, Centro de Instrumentos, Universidad Nacional Autonoma de Mexico, Apartado Postal 70186, 04510 Mexico D.F., Mexico, ekussul@servidor.unam.mx

Keywords: distributed representation, sparse coding, binary coding, binding, variable binding, representation of structure, structured representation, recursive representation, nested representation, compositional distributed representations, connectionist symbol processing.

**Abstract**

Distributed representations have often been criticized as inappropriate for encoding data with a complex structure. However, Plate's Holographic Reduced Representations and Kanerva's Binary Spatter Codes are recent schemes that allow on-the-fly encoding of nested compositional structures by real-valued or dense binary vectors of fixed dimensionality. In this paper we consider the procedures of Context-Dependent Thinning, which were developed for the representation of complex hierarchical items in the architecture of Associative-Projective Neural Networks. These procedures provide binding of items represented by sparse binary codevectors (with a low probability of 1s). Such an encoding is biologically plausible and allows a high storage capacity in the distributed associative memory where the codevectors may be stored. In contrast to known binding procedures, Context-Dependent Thinning preserves the same low density (or sparseness) of the bound codevector for a varied number of component codevectors. Besides, a bound codevector is not only similar to another bound codevector with similar component codevectors (as in other schemes), but it is also similar to the component codevectors themselves. This allows the similarity of structures to be estimated simply by the overlap of their codevectors, without retrieval of the component codevectors. It also allows easy retrieval of the component codevectors. Examples of algorithmic and neural-network implementations of the thinning procedures are considered. We also present representation examples for various types of nested structured data (propositions using role-filler and predicate-arguments representation schemes, trees, directed acyclic graphs) using sparse codevectors of fixed dimension. Such representations may provide a fruitful alternative to the symbolic representations of traditional AI, as well as to localist and microfeature-based connectionist representations.

1. Introduction

The problem of representing nested compositional structures is important for connectionist systems, because hierarchical structures are required for an adequate description of real-world objects and situations. In fully local representations, an item (entity, object) of any complexity level is represented by a single unit (node, neuron), or by a set of units that has no units in common with other items. Such representations are similar to symbolic ones and share their drawbacks. These drawbacks include the limitation of the number of representable items by the number of available units in the pool and, therefore, the impossibility of representing the combinatorial variety of real-world objects. Besides, a unit corresponding to a complex item represents only its name and pointers to its components (constituents).
Therefore, in order to determine the similarity of complex items, they should be unfolded down to the base-level (indecomposable) items. The attractiveness of distributed representations was emphasized by the paradigm of cell assemblies (Hebb, 1949) that influenced the work of Marr (1969), Willshaw (1981), Palm (1980), Hinton, McClelland & Rumelhart (1986), Kanerva (1988), and many others. In fully distributed representations, an item of any complexity level is represented by its configuration pattern over the whole pool of units. For binary units, this pattern is the subset of units which are in the active state. If the subsets corresponding to various items intersect, then the number of such subsets is much greater than the number of units in the pool, providing an opportunity to solve the problem of the information capacity of representations. If similar items are represented by similar subsets of units, the degree of intersection of the corresponding subsets can serve as the measure of their similarity. The potentially high information capacity of distributed representations provides hope for representing a combinatorially growing number of recursive compositional items in a reasonable number of bits. Representing composite items by concatenation of the activity patterns of their component items would increase the dimensionality of the coding pool. If the component items are encoded by pools of equal dimensionality, one could instead try to represent composite items as a superposition of the activity patterns of their components; the resulting coding pattern would have the same dimensionality. However, another problem arises here, known as the "superposition catastrophe" (e.g. von der Malsburg, 1986) as well as "ghosts" and "false" or "spurious" memories (e.g. Feldman & Ballard, 1982; Hopfield, 1982; Hopfield, Feinstein, & Palmer, 1983). A simple example looks as follows. Let there be component items a, b, c and composite items ab, ac, cb. Let us represent any two of the composite items, e.g. ac and cb. For this purpose, we superimpose the activity patterns corresponding to the component items a and c, and c and b. The ghost item ab also becomes represented in the result, though it is not needed. In the "superposition catastrophe" formulation, the problem is that there is no way of telling which two items (ab and ac, or ab and cb, or ac and cb) make up the representation of the composite pattern abc, where the patterns of all three component items are activated. The supposition that distributed representations (or assemblies) have no internal structure (Legendy, 1970; von der Malsburg, 1986; Feldman, 1989) held back their use for the representation of complex data structures. The problem is to represent in a distributed fashion not only the information on the set of base-level components making up a complex hierarchical item, but also the information on the combinations in which they meet, the grouping of those combinations, and so on. That is, mechanisms are needed for binding together the distributed representations of certain items at various hierarchical levels. One approach to binding is based on the temporal synchronization of constituent activation (Milner, 1974; von der Malsburg, 1981, 1985; Shastri & Ajjanagadde, 1993; Hummel & Holyoak, 1997). Though this mechanism may be useful within a single level of composition, its capability to represent and store complex items with multiple levels of nesting is questionable.
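The ghost-pattern effect in the a, b, c example above can be reproduced numerically. The following sketch is an illustrative construction (the pool size, code density and NumPy representation are arbitrary choices, not taken from the paper): superimposing the codes of ac and cb fully contains the never-presented pattern ab.

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 1000, 100  # pool size and number of active units per item

def random_code():
    v = np.zeros(N, dtype=bool); v[rng.choice(N, M, replace=False)] = True; return v

a, b, c = random_code(), random_code(), random_code()
composite = (a | c) | (c | b)      # superimpose the codes of ac and cb
ghost = a | b                      # ab was never represented...
print(bool(np.all(ghost <= composite)))  # ...yet its pattern is fully present: True
```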
Here we will consider binding mechanisms based on the activation of specific coding-unit subsets corresponding to a group (combination) of items, mechanisms that are closer to the so-called conjunctive coding approach (Smolensky, 1990; Hummel & Holyoak, 1997). The "extra units" considered by Hinton (1981) represent various combinations of active units of two or more distributed patterns. The extra units can be considered binding units encoding various combinations of distributedly encoded items by distinct distributed patterns. Such a representation of bound items is generally described by tensor products (Smolensky, 1990) and requires an exponential growth of the number of binding units with the number of bound items. However, as was already mentioned, for recursive structures it is desirable that the dimensionality of the patterns representing composite items be the same as that of the component items' patterns. Besides, the most important property of distributed representations, their similarity for similar structures, should be preserved. The problem was discussed by Hinton (1990), and a number of mechanisms for the construction of his reduced descriptions have been proposed. Hinton (1990), Pollack (1990), and Sperduti (1994) obtain the reduced description of a composite pattern as a result of training a multilevel perceptron with the backpropagation algorithm; however, their patterns are low-dimensional. Plate (1991, 1995) binds high-dimensional patterns with gradual (real-valued) elements on the fly, without an increase in dimensionality, using the operation of circular convolution. Kanerva (1996) uses bitwise XOR to bind binary vectors with equal probability of 0s and 1s. Binary distributed representations are especially attractive because binary bitwise operations are enough to handle them, providing an opportunity for significant simplification and acceleration of algorithmic implementations. Sparse binary representations (with a small fraction of 1s) are of special interest. The sparseness of codevectors allows a high storage capacity of the distributed associative memories (Willshaw, Buneman, & Longuet-Higgins, 1969; Palm, 1980; Lansner & Ekeberg, 1985; Amari, 1989) which can be used for their storage, and still further acceleration of software and hardware implementations (e.g. Palm & Bonhoeffer, 1984; Kussul, Rachkovskij, & Baidyk, 1991a; Palm, 1993). Sparse encoding also has neurophysiological correlates (Foldiak & Young, 1995). A procedure for the binding of sparse distributed representations (the "normalization procedure") was proposed by Kussul as one of the features of the Associative-Projective Neural Networks (Kussul, 1988, 1992; Kussul, Rachkovskij, & Baidyk, 1991). In this paper, we describe various versions of such a procedure and its possible neural-network implementations, and provide examples of its use for the encoding of complex structures. In section 2 we discuss the representational problems encountered in the Associative-Projective Neural Networks and an approach to their solution. In section 3 the requirements on the Context-Dependent Thinning procedure for binding and normalization of binary sparse codes are formulated. In section 4 several versions of the thinning procedure, along with their algorithmic and neural-network implementations, are described. Some generalizations and notations are given in section 5. In section 6 retrieval of individual constituent codes from the code of a composite item is considered. In section 7 the similarity characteristics of codes obtained by Context-Dependent Thinning procedures are examined. In section 8 we show examples of encoding various structures using Context-Dependent Thinning. Related work and a general discussion are presented in section 9, and conclusions are given in section 10.
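Kanerva's XOR binding mentioned above is simple enough to state in a few lines. This sketch (with arbitrary dimensionality) shows the two properties that matter for the present discussion: XOR binding is exactly invertible, but it produces a dense vector with about 50% 1s rather than the sparse codes this paper is concerned with.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 10_000
a = rng.integers(0, 2, N, dtype=np.uint8)  # dense binary code: ~50% ones
b = rng.integers(0, 2, N, dtype=np.uint8)

bound = a ^ b              # Binary Spatter Code style binding
recovered = bound ^ a      # XOR-ing with a again recovers b exactly
print(np.array_equal(recovered, b), bound.mean())  # True, ~0.5
```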
2. Representation of composite items in the APNN: the problems and the answer

2.1. Features of the APNN

The Associative-Projective Neural Networks (APNN) is the name of a neural-network architecture proposed by Kussul in 1983 for AI problems which require efficient manipulation of hierarchical data structures. Fragments of the architecture, implemented in software and hardware, were also used for solving pattern-recognition tasks (Kussul, 1992, 1993). The APNN features of interest here are as follows (Kussul, 1992; Kussul, Rachkovskij, & Baidyk, 1991a; Amosov et al., 1991):

- Items of any complexity (an elementary feature, a relation, a complex structure, etc.) are represented by stochastic distributed activity patterns over the neuron field (pool of units);
- The neurons are binary, and therefore the patterns of activity are binary vectors;
- Items of any complexity are represented over neural fields of the same high dimensionality N;
- The number M of active neurons in the representations of items of various complexity is approximately (statistically) the same and small compared to the field dimensionality N; however, M is large enough to maintain its own statistical stability;
- Items of various complexity levels are stored in different distributed auto-associative neural-network memories with the same number N of neurons.

Thus, items of any complexity are encoded by sparse distributed stochastic patterns of binary neurons in neural fields of the same dimensionality N. It is convenient to represent activity patterns in the neural fields as binary vectors, where 1s correspond to active neurons. Let us use a bold-face lowercase font for codevectors to distinguish them from the items they represent, denoted in italics. The number of 1s in x is denoted |x|. We seek to make |x| ≈ M for x of various complexity. The similarity of codevectors is determined by the number of 1s in their intersection, or overlap: |x ∧ y|, where ∧ is the elementwise conjunction of x and y. The probability of 1s in x (the density of 1s in x, or simply the vector density) is p(x) = |x|/N. Information encoding by stochastic binary vectors with a small number of 1s allows a high capacity of correlation-type neural-network memory (known as Willshaw memory or the Hopfield network) to be reached using the Hebbian learning rule (Willshaw, Buneman, & Longuet-Higgins, 1969; Palm, 1980; Lansner & Ekeberg, 1985; Frolov & Muraviev, 1987, 1988; Frolov, 1989; Amari, 1989; Tsodyks, 1989). The codevectors we are talking about may be exemplified by vectors with M = 100...1000 and N = 10,000...100,000. Though the maximal storage capacity is reached at M = log N (Willshaw, Buneman, & Longuet-Higgins, 1969; Palm, 1980), we use M ≈ √N to get a network with a moderate number N of neurons and N^2 connections at sufficient statistical stability of M (e.g. a standard deviation of M of less than 3%). Under this choice of the codevector density, p = M/N ≈ 1/√N, the information capacity stays high enough and the number of stored items can exceed the number of neurons in the network (Rachkovskij, 1990a, 1990b; Baidyk, Kussul, & Rachkovskij, 1990).
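The coding conventions just listed can be experimented with directly. The sketch below generates APNN-style stochastic sparse codevectors and measures similarity as the overlap |x ∧ y|; the parameter values follow the M ≈ √N example in the text but are otherwise arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 10_000, 100  # field size and active-neuron count, M ≈ sqrt(N)

def codevector() -> np.ndarray:
    """Stochastic sparse binary code: exactly M ones out of N."""
    x = np.zeros(N, dtype=bool)
    x[rng.choice(N, size=M, replace=False)] = True
    return x

x, y = codevector(), codevector()
print(int(x.sum()), x.mean())     # |x| = 100, density p = M/N = 0.01
print(int((x & y).sum()))         # overlap |x ∧ y|: about M^2/N = 1 on average
```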
Let us consider the problems arising in the APNN and other neural-network architectures with distributed representations when composite items are constructed. 2.2. Density of composite codevectors The number H of component items (constituents) comprising a composite item grows exponentially with the nesting level, that is, with going to the higher levels of the part-whole hierarchy. If S items of a level l constitute an item of the adjacent higher level (l+1), then for level (l+L) the number H becomes _H = S[L]_ . (2.1) The presence of several items comprising a composite item is encoded by the concurrent activation of their patterns, that is, by superposition of their codevectors. For binary vectors, we will use superposition by bitwise disjunction. Let us denote composite items by concatenation of symbols denoting their component items, e.g. abc. Corresponding composite codevectors (ai ∨ _bi_ ∨ _ci, i=1,...,N) will be denoted_ as a ∨ **b ∨** **c or simply abc.** Construction of composite items will be accompanied by fast growth of density p' and respective number M' of 1s in their codevectors. For H different superimposed codevectors of low density p: _p'H = 1-(1-p)[H]_ ≈ 1-e[-][pH], (2.2) _M'H_ ≈ _p'HN. (2.3)_ Equations 2.2 and 2.3 take into account the "absorption" of coincident 1s that prevents the exponential growth of their number versus the composition level L. However it is important that p'>>p (see Figure 1) and M' >> _M. Since the dimensionality N of codevectors representing items of various complexity is the_ same, the size of corresponding distributed auto-associative neural networks, where the codevectors are stored and recalled, is also the same. Therefore at M' >> _M_ ≈ √N (at the higher levels of hierarchy) their storage capacity in terms of the number of recallable codevectors will decrease dramatically. To maintain high storage capacity at each level, M' should not substantially exceed M. However, due to the requirement of statistical stability, the number of 1s in the code can not be reduced significantly. Besides, the operation of distributed auto-associative neural-network memory usually implies the same number of 1s in codevectors. Thus it is necessary to keep the number of 1s in the codevectors of complex items approximately equal to M. (However, some variation of M between distinct hierarchical levels may be tolerable and even desirable). ----- These provide one of the reasons why composite items should be represented not by all 1s of their component codevectors, but only by their fraction approximately equal to M (i.e. only by some M representatives of active neurons encoding the components). 2.3. Ghost patterns and false memories The well-known problem of ghost patterns or superposition catastrophe was mentioned in the Introduction. It consists in losing the information on the membership of component codevectors in particular composite codevector, when several composite codevectors are superimposed in their turn. This problem is due to the essential property of superposition operation. The contribution of each member to their superposition does not depend on the contributions of other members. For superposition by elementwise disjunction, representation of a in a∨b and a∨c is the same. The result of superposition of several base-level component codevectors contains only the information concerning participating components and no information about the combinations in which they meet. 
Therefore if common items are constituents of several composite items, the combination of the latter generally can not be inferred from their superposition codevector. For example, let a complex composite item consist of the base-level items a, b, c, d, e, f. Then how could one determine that it really consists of the composite items abd, bce, caf, if there may also be other items, such as abc, def, etc.? In the formulation of "false" or "spurious patterns", superposition of the composite patterns abc and def generates false patterns ("ghosts") abd, bce, caf, etc.

The problem of introducing "false assemblies" or "spurious memories" (unforeseen attractors) into a neural network (e.g. Kussul, 1980; Hopfield, Feinstein, & Palmer, 1983; Vedenov, 1987, 1988) has the same origin as the problem of ghosts. Training of an associative memory of matrix type is usually performed using some version of the Hebbian learning rule, implemented by superimposing in the weight matrix the outer products of the memorized codevectors. For binary connections, e.g.

W'_ij = W_ij ∨ x_i x_j, (2.4)

where x_i and x_j are the states of the i-th and the j-th neurons when the pattern x to be memorized is presented (i.e. the values of the corresponding bits of x), W_ij and W'_ij are the connection weights between the i-th and the j-th neurons before and after training, respectively, and ∨ stands for disjunction. When this learning rule is sequentially used to memorize several composite codevectors with partially coinciding components, false assemblies (attractors) may appear - that is, assemblies of composite codevectors that were never presented to the network. For example, when representations of the items abd, bce, caf are memorized, the false assembly abc (unforeseen attractor) is formed in the network (Figure 2A). Moreover, various two-item assemblies ab, ad, etc. are present, which also were not explicitly presented for storing.

The problem of introducing false assemblies can be avoided if a non-distributed associative memory is used, where the patterns are not superimposed when stored and each composite codevector is placed into a separate memory word. However, the problem of false patterns or superposition catastrophe still persists.
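The false-assembly effect of Figure 2A can be reproduced in a few lines. This sketch (ours, with toy parameters) stores abd, bce, caf by the binary Hebbian rule of equation 2.4 and then checks the connectivity of the never-presented combination abc:

```python
import numpy as np

rng = np.random.default_rng(2)
N, M = 1_000, 30                   # a small field keeps the matrix inspectable

def cv():
    v = np.zeros(N, dtype=np.uint8)
    v[rng.choice(N, size=M, replace=False)] = 1
    return v

a, b, c, d, e, f = (cv() for _ in range(6))

W = np.zeros((N, N), dtype=np.uint8)
for x in (a | b | d, b | c | e, c | a | f):
    W |= np.outer(x, x)            # W'_ij = W_ij v x_i x_j (equation 2.4)

def connectivity(x):
    """Fraction of realized connections among the active units of x."""
    idx = np.flatnonzero(x)
    sub = W[np.ix_(idx, idx)]
    return sub.sum() / sub.size

print(connectivity(a | b | d))     # a stored item: 1.0
print(connectivity(a | b | c))     # the "ghost" abc: also ~1.0, though never stored
```

All pairwise connections needed for abc (a-b, b-c, c-a) were created while storing the three genuine items, so the ghost assembly is as strongly interconnected as the stored ones.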
2.4. An idea of the thinning procedure

A systematic use of distributed representations provides the prerequisite to solve both the problem of codevector density growth and the superposition catastrophe. The idea of the solution consists in including into the representation of a composite item not the full sets of 1s encoding its component items, but only their subsets. If we choose the fraction of 1s from each component codevector so that the number of 1s in the codevector of a composite item is equal to M, then the density of 1s will be preserved in codevectors of various complexity. For example, if S=3 items of level l comprise an item of level l+1, then approximately M/3 of 1s should be preserved from each codevector of the l-th level. Then the codevector of level l+1 will have approximately M 1s. If two items of level l+1 comprise an item of level l+2, then approximately M/2 of 1s should be preserved from each codevector of level l+1. Thus the low number M of 1s in the codevectors of composite items of various complexity is maintained, and therefore the high storage capacity of the distributed auto-associative memories where these low-density codevectors are stored can be maintained as well (see also section 2.1).

Hence the component codevectors are represented in the codevector of the composite item in a reduced form - by a fraction of their 1s. The idea that the items of higher hierarchical levels ("floors") should contain their components in reduced, compressed, coarse form is well-accepted among those concerned with diverse aspects of Artificial Intelligence research. Reduced representation of component codevectors in the codevector of a composite item realized in the APNN may be relevant to the "coarsened models" of Amosov (1967), the "reduced descriptions" of Hinton (1990), and the "conceptual chunking" of Halford, Wilson, & Phillips (in press).

Reduced representation of component codevectors in the codevectors of composite items also allows a solution of the superposition catastrophe. If the subset of 1s included in the codevector of a composite item from each of the component codevectors depends on the composition of the component items, then different subsets of 1s from each component codevector will be found in the codevectors of different composite items. For example, non-identical subsets of 1s will be incorporated into the codevectors of the items abc and acd from a. Therefore the component codevectors will be bound together by the subsets of 1s delegated to the codevector of the composite item. This hinders the occurrence of false patterns and assemblies. For the example from the Introduction, when both ac and cb are present, we will get the following overall composite codevector: a_c ∨ c_a ∨ c_b ∨ b_c, where x_y stands for the subset of 1s in x that becomes incorporated into the composite codevector given y as the other component. Therefore if a_c ≠ a_b and b_c ≠ b_a, we do not observe the ghost pattern a_b ∨ b_a in the resultant codevector. For the example of Figure 2A, where false assemblies emerge, they do not emerge under reduced representation of items (Figure 2B). Now interassembly connections are formed between different subsets of active neurons which have a relatively small intersection. Therefore the connectivity of the assembly corresponding to the non-presented item abc is low.

That the codevector of a composite item contains subsets of 1s from the component codevectors preserves the information on the presence of the component items in the composite item. That the composition of each subset of 1s depends on the presence of the other component items preserves the information on the combinations in which the component items occurred. That the codevector of a composite item has approximately the same number of 1s as its component codevectors allows combinations of such composite codevectors to be used for the construction of still more complex codevectors of higher hierarchical levels. Thus an opportunity emerges to build up the codevectors of items of varied composition level containing the information not only on the presence of their components, but on the structure of their combinations as well. This provides the possibility to estimate the similarity of complex structures without unfolding them, simply as the overlap of their codevectors, which is considered by many authors to be a very important property for AI systems (e.g. Kussul, 1992; Hinton, 1990; Plate, 1995, 1997).

Originally the procedure reducing the sets of coding 1s of each item from the group which makes up a composite item was named "normalization" (Kussul, 1988; Kussul & Baidyk, 1990; Kussul, 1992). That name emphasized the property of maintaining the number of 1s in the codes of composite items equal to that of the component items.
However, in this paper we will call it "Context-Dependent Thinning" (CDT) after its action mechanism, which reduces the number of 1s taking into account the context of the other items from their group.

3. Requirements on the Context-Dependent Thinning procedures

Let us summarize the requirements on the CDT procedures and on the characteristics of the codevectors produced by them. The procedures should process sparse binary codevectors. An important case of input is superimposed component codevectors. The procedures should output the codevector of the composite item, in which the component codevectors are bound and the density of the output codevector is comparable to the density of the component codevectors. Let us call the resulting (output) codevector the "thinned" codevector. The requirements may be expressed as follows.

3.1. Determinism
Repeated application of the CDT procedures to the same input should produce the same output.

3.2. Variable number of inputs
The procedure should process one, two, or several codevectors. One important case of input is a vector in which several component codevectors are superimposed.

3.3. Sampling of inputs
Each component codevector of the input should be represented in the output codevector by a fraction of its 1s (or their reversible permutation).

3.4. Proportional sampling
The number of 1s representing the input component codevectors in the output codevector should be proportional to their density. If the number of 1s in a and b is the same, then the number of 1s from a and b in thinned ab should also be (approximately) the same.

3.5. Uniform low density
The CDT procedures should maintain an (approximately) uniform low density of output codevectors (small number M' of 1s) under a varied number of input codevectors and their degree of correlation.

3.6. Density control
The CDT procedures should be able to control the number M' of 1s in output codevectors within some range around M (the number of 1s in the component codevectors). For one important special case, M' = M.

3.7. Unstructured similarity
An output codevector of the CDT procedures should be similar to each component codevector at the input (or to its reversible permutation). Fulfillment of this requirement follows from fulfillment of the sampling of inputs requirement (3.3). The thinned codevector for ab is similar to a and b. If the densities of the component codevectors are the same, the magnitude of similarity is the same (as follows from the requirement of proportional sampling, 3.4).

3.8. Similarity of subsets
The reduced representations of a given component codevector should be similar to each other to a degree that varies directly with the similarity of the set of other codevectors with which it is composed. The representation of a in the thinned abc should be more similar to its representation in the thinned abd than in the thinned aef.

3.9. Structured similarity
If two sets (collections) of component items are similar, their thinned codevectors should be similar as well. This follows from the similarity of subsets requirement (3.8). If a and a' are similar, and b and b' are similar, then thinned ab should be similar to thinned a'b'. Or, thinned abc should be similar to thinned abd.

3.10. Binding
Representation of a given item in an output thinned codevector should be different for different sets (collections) of component items. The representation of a in thinned abc should be different from the representation of a in thinned abd.
Thus the representation of a in the thinned composite codevector contains information on the other components of the composite item.

4. Versions of the Context-Dependent Thinning procedures

Let us consider some versions of the CDT procedure, their properties, and their implementations.

4.1. Direct conjunctive thinning of two or more codevectors

Direct conjunctive thinning of binary x and y is implemented as their elementwise conjunction:

z = x ∧ y, (4.1)

where z is the thinned and bound result. The requirement of determinism (section 3.1) holds for the direct conjunctive thinning procedure. The requirement of variable number of inputs (3.2) is not met, since only two codevectors are thinned. Overlapping 1s of x and y go to z, therefore the sampling of inputs requirement (3.3) holds. Since an equal number of 1s from x and y enters z even if x and y are of different density, the requirement of proportional sampling (3.4) is not fulfilled in the general case. For stochastically independent vectors x and y the density of the resulting vector z is:

p(z) = p(x)p(y) < min(p(x), p(y)) < 1. (4.2)

Here min() selects the smallest of its arguments. Let us note that for correlated x and y the density of 1s in z depends on the degree of their correlation. Thus p(z) is maintained the same only for independent codevectors of constant density, and the requirement of uniform low density (3.5) is generally not met. Since p(z) for sparse vectors is substantially lower than p(x) and p(y), the requirement of density control (3.6) is not met and recursive construction of bound codevectors is not supported (see also Kanerva, 1998; Sjödin et al., 1998). The similarity and binding requirements (3.7-3.10) may be considered as partially satisfied for two codevectors (see also Table 1).

Table 1. Properties of various versions of thinning procedures. "Yes" means that the property is present, "No" means that the property is not present, "No-Yes" and "Yes-No" mean that the property is partially present. See text for details.

| Property | Direct conjunctive thinning (4.1) | Permutive thinning (4.2) | Additive (4.3) and subtractive (4.4) CDT |
|---|---|---|---|
| Determinism (3.1) | Yes | Yes | Yes |
| Variable number of inputs (3.2) | No-Yes | Yes | Yes |
| Sampling of inputs (3.3) | Yes | Yes | Yes |
| Proportional sampling (3.4) | No | Yes | Yes |
| Uniform low density (3.5) | No | No | Yes |
| Density control (3.6) | No | No | Yes |
| Unstructured similarity (3.7) | Yes-No | Yes | Yes |
| Similarity of subsets (3.8) | Yes-No | Yes | Yes |
| Structured similarity (3.9) | Yes-No | Yes | Yes |
| Binding (3.10) | Yes-No | Yes | Yes |

Though the operation of direct conjunctive thinning of two codevectors does not meet all the requirements on the CDT procedure, we have applied it for encoding of external information, in particular, for binding of the distributed binary codevectors of a feature item and its numerical value (Kussul & Baidyk, 1990; Rachkovskij & Fedoseyeva, 1990; Artykutsa et al., 1991; Kussul, Rachkovskij, & Baidyk, 1991a, 1991b). The density p of the codevectors of features and numerical values was chosen so as to provide a specified density p' of the resulting codevector (Table 2, K=2).

Table 2. The density p of K independent codevectors chosen to provide a specified density p' of codevectors produced by their conjunction.

| p' | K=2 | K=3 | K=4 | K=5 | K=6 | K=7 | K=8 | K=9 | K=10 | K=11 | K=12 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 0.001 | 0.032 | 0.100 | 0.178 | 0.251 | 0.316 | 0.373 | 0.422 | 0.464 | 0.501 | 0.534 | 0.562 |
| 0.010 | 0.100 | 0.215 | 0.316 | 0.398 | 0.464 | 0.518 | 0.562 | 0.599 | 0.631 | 0.658 | 0.681 |
| 0.015 | 0.122 | 0.247 | 0.350 | 0.432 | 0.497 | 0.549 | 0.592 | 0.627 | 0.657 | 0.683 | 0.705 |
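A sketch of this density choice follows (ours; note that for independent codevectors the entries of Table 2 are reproduced by p = p'^(1/K)): K codevectors of density p are bound by bitwise conjunction to give a result of the specified density p'.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 100_000
p_result = 0.001                 # specified density p' of the bound codevector
K = 3                            # number of codevectors to bind
p = p_result ** (1.0 / K)        # 0.100, as in Table 2 for p' = 0.001, K = 3

def cv(p):
    return (rng.random(N) < p).astype(np.uint8)

z = cv(p)
for _ in range(K - 1):
    z &= cv(p)                   # z = x_1 ∧ x_2 ∧ ... ∧ x_K
print(z.sum() / N)               # close to p_result
```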
To thin more than two codevectors, it is natural to generalize equation 4.1:

z = ∧_s x_s, (4.3)

where s = 1...S and S is the number of codevectors to be thinned. Though this operation allows binding of two or more codevectors, a single vector can not be thinned. The density of the resulting codevector z depends on the densities of the x_s and on their number S. Therefore, to meet the requirement of uniform low density (3.5), the densities of the x_s should be chosen depending on the number of thinned codevectors. Also, the requirement of density control (3.6) is not satisfied.

We applied this version of direct conjunctive thinning to encode positions of visual features on a two-dimensional retina. Three codevectors were bound (S=3): the codevector of a feature, the codevector of its X-coordinate, and the codevector of its Y-coordinate (unpublished work of 1991-1992 on recognition of handwritten digits, letters, and words in collaboration with WACOM Co., Japan). Also, this technique was used to encode words and word combinations for text processing (Rachkovskij, 1996). In so doing, the codevectors of the letters comprising words were bound (S>10). The density of the codevectors to be bound by thinning was chosen so as to provide a specified density of the resulting codevector (Table 2, K=3...12). Neural-network implementations of direct conjunctive thinning procedures are rather straightforward and will not be considered here.

4.2. Permutive conjunctive thinning

The codevectors to be bound by direct conjunctive thinning are not superimposed. Let us consider the case where S codevectors are superimposed by disjunction:

z = ∨_s x_s. (4.4)

Conjunction of a vector with itself produces the same vector: z ∧ z = z. So let us modify z by a permutation of all its elements and make the conjunction with the initial vector:

z' = z ∧ z~. (4.5)

Here, z~ is the permuted vector. In vector-matrix notation, it can be rewritten as:

z' = z ∧ Pz, (4.5a)

where P is an N x N permutation matrix (each row and each column of P has a single 1, and the rest of P is 0; multiplying a vector by a permutation matrix permutes the elements of the vector). Proper permutations are those producing a permuted vector that is independent of the initial vector, e.g. random permutations or shifts. Then the density of the result is

p(z') = p(z)p(z~) = p(z)p(Pz). (4.6)

Let us consider the composition of the resulting vector:

z' = z ∧ z~ = (x_1 ∨ ... ∨ x_S) ∧ z~
= (x_1 ∨ ... ∨ x_S) ∧ (x_1~ ∨ ... ∨ x_S~)
= x_1 ∧ (x_1~ ∨ ... ∨ x_S~) ∨ ... ∨ x_S ∧ (x_1~ ∨ ... ∨ x_S~)
= (x_1 ∧ x_1~) ∨ ... ∨ (x_1 ∧ x_S~) ∨ ... ∨ (x_S ∧ x_1~) ∨ ... ∨ (x_S ∧ x_S~). (4.7)

Thus the resulting codevector is the superposition of all possible pairwise conjunctions of the component codevectors: each pair includes a component codevector and a permuted component codevector.
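The following sketch (ours) implements permutive conjunctive thinning with the permutation realized as a cyclic shift; the printed counts illustrate equations 4.8 and 4.9, which appear just below:

```python
import numpy as np

rng = np.random.default_rng(4)
N, M, S = 100_000, 1_000, 3

def cv():
    v = np.zeros(N, dtype=np.uint8)
    v[rng.choice(N, size=M, replace=False)] = 1
    return v

xs = [cv() for _ in range(S)]
z = np.bitwise_or.reduce(xs)     # z = x_1 ∨ ... ∨ x_S (equation 4.4)
z1 = z & np.roll(z, 1)           # z' = z ∧ z~ (equation 4.5), permutation by shift
print(z.sum())                   # |z| ~ S*M = 3000
print(z1.sum())                  # |z'| ~ S^2 * M^2 / N = 90
```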
Because of the initial disjunction of the component codevectors, this procedure meets more of the requirements on the CDT procedures than direct conjunctive thinning. The requirement of variable number of inputs (3.2) is now fully satisfied. As follows from equation 4.7, each component codevector x_s is thinned by conjunction with one and the same stochastically independent vector z~. Therefore statistically the same fraction of 1s is left from each component x_s, and the requirements of sampling of inputs (3.3) and of proportional sampling (3.4) hold.

For S sparse codevectors of equal density p(x) << 1:

p(z) ≈ Sp(x), (4.8)
p(z') = p(z)p(z~) ≈ S^2 p^2(x). (4.9)

To satisfy the density requirements (3.5-3.6), p(z') = p(x) should hold for various S. This means that p(x) should be equal to 1/S^2. Therefore at fixed density p(x) the density requirements (3.5-3.6) are not satisfied for a variable number S of component items. The similarity and binding requirements (3.7-3.10) hold. In particular, the requirement of similarity of subsets (3.8) holds because the higher the number of identical items, the more identical conjunctions are superimposed in equation 4.7.

A neural-network implementation of permutive conjunctive thinning is shown in Figure 3. In neural-network terms, units are called "neurons", and their pools are called "neural fields". There are two input fields fin1 and fin2 and one output field fout, each consisting of N binary neurons. fin1 is connected to fout by a bundle of N direct projective (1-to-1) connections; each connection of this bundle connects neurons of the same number. fin2 is connected to fout by a bundle of N permutive projective connections; each connection of this bundle connects neurons of different numbers. "Synapses" of the neurons of the output field have weights of +1. The same pattern of superimposed component codevectors is activated in both input fields. Each neuron of the output field sums the excitations of its two inputs (synapses). The output is determined by comparison of the excitation level with the threshold θ = 1.5. Therefore each output neuron performs conjunction of its inputs. Thus the activity pattern of the output field corresponds to the bitwise conjunction of the pattern present in the input fields and its permutation. Obviously, a lot of different configurations of permutive connections are possible. Permutation by shift is particularly attractive, as it is simple and fast to implement in computer simulations.

4.3. Additive CDT procedure

Though for permutive conjunctive thinning the density of the resulting codevectors is closer to the density of each component codevector than for direct conjunction, it varies with the number and density of the component codevectors. Let us make a codevector z by the disjunction of S component codevectors x_s, as in equation 4.4. Since the density of the component codevectors is low and their number is small, the "absorption" of common 1s is low and, according to equations 4.8 and 4.9, p(z') is approximately S^2 p(x) times p(x). For example, if p(x) = 0.01 and S = 5, then p(z') ≈ (1/4)p(x).
Therefore, to make the density of the thinned codevector equal to the density of its component codevectors, let us superimpose an appropriate number K of independent vectors with the density p(z'):

〈z〉 = ∨_k (z ∧ z_k~) = z ∧ (∨_k z_k~). (4.10)

Here 〈z〉 is the thinned output vector, and z_k~ is a unique (independent stochastic) permutation of the elements of vector z, fixed for each k. In vector-matrix notation, we can write:

〈z〉 = ∨_k (z ∧ P_k z) = z ∧ ∨_k (P_k z). (4.10a)

The number K of vectors to be superimposed by disjunction can be determined as follows. If the density of the superposition of the permuted versions of z is made

p(∨_k (P_k z)) = 1/S, (4.11)

then after conjunction with z (equations 4.10, 4.10a) we will get the needed density of 〈z〉:

p(〈z〉) = p(z)/S ≈ Sp(x)/S = p(x). (4.12)

Taking into account the "absorption" of 1s in the disjunction of K permuted vectors z~, equation 4.11 can be rewritten as:

1/S = 1 - (1-pS)^K. (4.13)

Then

K = ln(1-1/S)/ln(1-pS). (4.14)

The dependence K(S) at different p is shown in Table 3.

Table 3. The number K of permutations of an input codevector that produces the proper density of the thinned output codevector in the additive version of Context-Dependent Thinning. (K should be rounded to the nearest integer.)

| Density p of component codevector | S=2 | S=3 | S=4 | S=5 | S=6 | S=7 |
|---|---|---|---|---|---|---|
| 0.001 | 346.2 | 135.0 | 71.8 | 44.5 | 30.3 | 21.9 |
| 0.005 | 69.0 | 26.8 | 14.2 | 8.8 | 6.0 | 4.3 |
| 0.010 | 34.3 | 13.3 | 7.0 | 4.4 | 2.9 | 2.1 |

4.3.1. Meeting the requirements on the CDT procedures

Since the configuration of each k-th permutation is fixed, the procedure of additive CDT is deterministic (3.1). The input vector z to be thinned is the superposition of component codevectors; the number of these codevectors may be variable, and therefore the requirement of variable number of inputs (3.2) holds. The output vector is obtained by conjunction of z (or its reversible permutation) with the independent vector ∨_k (P_k z). Therefore the 1s of all codevectors superimposed in z are equally represented in 〈z〉, and both the sampling of inputs and the proportional sampling requirements (3.3-3.4) hold. Density control of the output codevector for a variable number and density of component codevectors is realized by varying K (Table 3). Therefore the density requirements (3.5-3.6) hold.

Since the sampling of inputs and proportional sampling requirements (3.3-3.4) hold, the codevector 〈z〉 is similar to all the component codevectors x_s, and the requirement of unstructured similarity (3.7) holds. The more similar the components of one composite item are to those of another, the more similar are their superimposed codevectors z. Therefore the more similar are the vectors formed as disjunctions of K fixed permutations of z, and the more similar are the representations of each component codevector that remain after the conjunction (equation 4.10) with z. Thus the similarity of subsets requirement (3.8) holds. Characteristics of this similarity will be considered in more detail in section 7. Since different combinations of component codevectors produce different z, and therefore different disjunctions of K permutations of z, the representations of a given component codevector in the thinned codevector will be different for different combinations of component items, and the binding requirement (3.10) holds. The more similar are the representations of each component in the output vector, the more similar are the output codevectors (the requirement of structured similarity 3.9 holds).
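As a quick check (ours), the K values of Table 3 follow directly from equation 4.14:

```python
import numpy as np

# K = ln(1 - 1/S) / ln(1 - p*S), equation 4.14
for p in (0.001, 0.005, 0.010):
    print(p, [round(np.log(1 - 1/S) / np.log(1 - p*S), 1) for S in range(2, 8)])
# p = 0.001 -> [346.2, 135.0, 71.8, 44.5, 30.3, 21.9], the first row of Table 3
```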
4.3.2. An algorithmic implementation

As mentioned before, shift is an easily implementable permutation. Therefore an algorithmic implementation of this CDT procedure may be as in Figure 4A. Another version of this procedure does not require a preliminary calculation of K (Figure 4B): conjunctions of the initial and permuted vectors are superimposed until the number of 1s in the output vector becomes equal to M.

4.3.3. A neural-network implementation

A neural-network implementation of the first example of the additive CDT procedure (Figure 4A) is shown in Figure 5. To choose K depending on the density of z, the neural-network implementation should incorporate some structures not shown in the figure. They should determine the density of the initial pattern z and "activate" (turn on) K bundles of permutive connections from their total number Kmax. Alternatively, these structures should actuate the bundles of permutive connections one-by-one in a fixed order [1] until the density of the output vector in fout becomes M/N. Let us recall that bundles of shift permutive connections are used in the algorithmic implementations.

[1] As noted by Kanerva (personal communication), all Kmax bundles could be activated in parallel, if the weight of the k-th bundle is set to 2^(-k) and the common threshold of the output neurons is adjusted dynamically so that fout has the desired density of 1s.

4.4. Subtractive CDT procedure

Let us consider another version of the CDT procedure. Rather than masking z with the straight disjunction of permuted versions of z, as in additive thinning, let us mask it with the inverse of that disjunction:

〈z〉 = z ∧ ¬(∨_k z_k~) = z ∧ ¬∨_k (P_k z). (4.15)

If we choose K so as to make the number of 1s in 〈z〉 equal to M, then this procedure will satisfy the requirements of section 3. Therefore the density of the superimposed permuted versions of z before inversion should be 1-1/S (compare to equation 4.13). Thus the number K of permuted vectors to be superimposed in order to obtain the required density (taking into account the "absorption" of 1s) is determined from:

1-1/S = 1 - (1-pS)^K. (4.16)

Then, for pS << 1,

K ≈ lnS/(pS). (4.17)

Algorithmic implementations of this subtractive CDT procedure (Kussul, 1988; Kussul & Baidyk, 1990; Kussul, 1992) are analogous to those presented for the additive CDT procedure in section 4.3.2. A neural-network implementation is shown in Figure 6. Since the value of lnS/S is approximately the same at S = 2, 3, 4, 5 (Table 4), one can choose the K for a specified density of component codevectors p(x) as:

K ≈ 0.34/p(x). (4.18)

At such K and S, p(〈z〉) ≈ p(x). Therefore the number K of permutive connection bundles in Figure 6 can be fixed, and their sequential activation is not needed. So each neuron of fout may be considered as connected to an average of K randomly chosen neurons of fin1 by inhibitory connections. More precise values of K (obtained as the exact solution of equation 4.16) for different values of p are presented in Table 4.

Table 4. The function lnS/S and the number K of permutations of an input codevector that produces the proper density of the thinned output codevector in the subtractive version of the Context-Dependent Thinning procedure. (K should be rounded to the nearest integer.)

| p | S=2 | S=3 | S=4 | S=5 | S=6 | S=7 |
|---|---|---|---|---|---|---|
| lnS/S | 0.347 | 0.366 | 0.347 | 0.322 | 0.299 | 0.278 |
| 0.001 | 346.2 | 365.7 | 345.9 | 321.1 | 297.7 | 277.0 |
| 0.005 | 69.0 | 72.7 | 68.6 | 63.6 | 58.8 | 54.6 |
| 0.010 | 34.3 | 36.1 | 34.0 | 31.4 | 29.0 | 26.8 |
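Below is a sketch of both procedures (our reconstruction in the spirit of Figures 4-6, not the original code), using shift permutations. The additive version follows Figure 4B and accumulates shifted conjunctions until the output has about M 1s; the subtractive version masks z with the inverted disjunction of K shifted copies, where K is estimated from equation 4.16 (the crude estimate of S from |z|/M is our simplification).

```python
import numpy as np

def cdt_additive(z, M, max_bundles=10_000):
    """Additive CDT (Figure 4B style): OR conjunctions of z with its
    shifted copies until the thinned vector has at least M ones."""
    out = np.zeros_like(z)
    k = 0
    while out.sum() < M and k < max_bundles:
        k += 1
        out |= z & np.roll(z, k)          # one more "bundle" of shift connections
    return out

def cdt_subtractive(z, M):
    """Subtractive CDT (equation 4.15): mask z with the inverted
    disjunction of K shifted copies, K from equation 4.16."""
    p = z.mean()                          # density of the input vector (~ p*S)
    S = max(int(round(z.sum() / M)), 2)   # rough estimate of component count
    K = max(int(round(np.log(1 / S) / np.log(1 - p))), 1)
    mask = np.zeros_like(z)
    for k in range(1, K + 1):
        mask |= np.roll(z, k)
    return z & (1 - mask)                 # z AND NOT(mask)

# usage: bind three components and thin back towards density M/N
rng = np.random.default_rng(5)
N, M = 100_000, 1_000
def cv():
    v = np.zeros(N, dtype=np.uint8)
    v[rng.choice(N, size=M, replace=False)] = 1
    return v
a, b, c = cv(), cv(), cv()
z = a | b | c
print(cdt_additive(z, M).sum(), cdt_subtractive(z, M).sum())   # both near M
```

The same functions turn into the hetero-thinning of section 5 below if the rolled vector is taken from a second argument w rather than from z itself.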
This version of the CDT procedure was originally proposed under the name "normalization procedure" (Kussul, 1988; Kussul & Baidyk, 1990; Amosov et al., 1991). We have used it in the multilevel APNN for sequence processing (Rachkovskij, 1990b; Kussul & Rachkovskij, 1991). We have also used it for binding of sparse codes in perceptron-like classifiers (Kussul, Baidyk, Lukovich, & Rachkovskij, 1993) and in the one-level APNN applied to recognition of vowels (Rachkovskij & Fedoseyeva, 1990, 1991), textures (Artykutsa et al., 1991; Kussul, Rachkovskij, & Baidyk, 1991b), shapes (Kussul & Baidyk, 1990), handprinted characters (Lavrenyuk, 1995), and logical inference (Kasatkin & Kasatkina, 1991).

5. Procedures of auto-thinning, hetero-thinning, self-exclusive thinning, and notation

In sections 4.2-4.4 we considered the versions of the thinning procedures where a single vector (a superposition of component codevectors) was the input. The corresponding pattern of activity was present in both fields fin1 and fin2 (Figures 3, 5, 6), and therefore the input vector thinned itself. Let us call these procedures "auto-thinning" or "auto-CDT" and denote them as

[label]〈u〉. (5.1)

Here u is the codevector to be thinned (usually a superposition of component codevectors), which is present in the input fields fin1 and fin2 of Figures 3, 5, 6. [label]〈...〉 denotes a particular configuration of thinning (a particular realization of the bundles of permutive connections). Let us note that angle brackets are used by Plate to denote the normalization operation in HRRs (e.g. Plate, 1995; see also section 9.1.5). A lot of orthogonal configurations of permutive connections are possible, and differently labeled CDT procedures implement different thinnings. In the algorithmic implementations (Figure 4), different labels use different seeds. No label corresponds to some fixed configuration of thinning. Unless otherwise specified, it is assumed that the number K of bundles is chosen to maintain the preset density of the thinned vector 〈u〉, usually |〈u〉| ≈ M.

∨_k (P_k u) can be expressed as Ru thresholded at 1/2, where the matrix R is the disjunction (or it can also be the sum) of the K permutation matrices P_k. This, in turn, can be written as a function T(u), so that we get

〈u〉 = u ∧ T(u). (5.2)

It is possible to thin one codevector by another one if the pattern to be thinned is activated in fin1, and the pattern which thins is activated in fin2. Let us call such a procedure hetero-CDT, or hetero-thinning, or thinning u with w. We denote hetero-thinning as

[label]〈u〉_w. (5.3)

Here w is the pattern that does the thinning; it is activated in fin2 of Figures 3, 5, 6. u is the pattern which is thinned; it is activated in fin1. [label]〈...〉 is the configuration label of thinning. For auto-thinning, we may write 〈u〉 = 〈u〉_u.
For the additive hetero-thinning, equation 4.10 can be rewritten as

〈u〉_w = u ∧ (∨_k (P_k w)) = u ∧ T(w). (5.4)

For the subtractive hetero-thinning, equation 4.15 can be rewritten as

〈u〉_w = u ∧ ¬(∨_k (P_k w)) = u ∧ ¬T(w). (5.5)

Examples. As before, we denote the composite codevector u to be thinned by its component codevectors, e.g. u = a ∨ b ∨ c, or simply u = abc. Auto-thinning of the composite codevector u: 〈u〉_u = 〈a∨b∨c〉_(a∨b∨c) = 〈a∨b∨c〉 = 〈abc〉_abc = 〈abc〉. Hetero-thinning of the composite codevector u with codevector d: 〈u〉_d = 〈a∨b∨c〉_d = 〈abc〉_d.

For both the additive and the subtractive CDT procedures: 〈abc〉 = 〈a〉_abc ∨ 〈b〉_abc ∨ 〈c〉_abc. We can also write 〈abc〉 = (a ∧ T(abc)) ∨ (b ∧ T(abc)) ∨ (c ∧ T(abc)). An analogous expression can be written for a composite pattern with any other number of components. Let us note that K should be the same for thinning of the composite pattern as a whole or of its individual components. For the additive CDT procedure it is also true that:

〈abc〉 = 〈a〉_abc ∨ 〈b〉_abc ∨ 〈c〉_abc = 〈a〉_a ∨ 〈b〉_a ∨ 〈c〉_a ∨ 〈a〉_b ∨ 〈b〉_b ∨ 〈c〉_b ∨ 〈a〉_c ∨ 〈b〉_c ∨ 〈c〉_c.

For the subtractive CDT procedure we can write: 〈a〉_bcd = 〈〈〈a〉_b〉_c〉_d and 〈abc〉 = 〈〈〈a〉_a〉_b〉_c ∨ 〈〈〈b〉_a〉_b〉_c ∨ 〈〈〈c〉_a〉_b〉_c.

Let us also consider a modification of the auto-CDT procedures which will be used in section 7.2. If we eliminate the thinning of a component codevector with itself, we obtain "self-exclusive" auto-thinning. Let us denote it as 〈abc〉\abc:

〈abc〉\abc = 〈a〉_bc ∨ 〈b〉_ac ∨ 〈c〉_ab.

6. Retrieval of component codevectors

After thinning, the codevectors of the component items are present in the thinned codevector of a composite item in a reduced form. We must be able to retrieve the complete component codevectors. Since the requirement of unstructured similarity (3.7) holds, the thinned composite codevector is similar to its component codevectors. So if we have a full set (alphabet) of component codevectors of the preceding (lower) level of the compositional hierarchy, we can compare them with the thinned codevector. The degree of similarity is determined by the overlap of the codevectors. The alphabet items corresponding to the codevectors with maximum overlaps are the sought-for components.

The search for the most similar component codevectors can be performed by sequentially finding the overlaps of the codevector to be decoded with all codevectors of the component alphabet. An associative memory can be used to implement this operation in parallel. After retrieving the full-sized component codevectors of the lower hierarchical level, one can then retrieve their component codevectors of the still lower hierarchical level in an analogous way. For this purpose, the alphabet of the latter should be known as well. If the order of component retrieval is important, some auxiliary procedures can be used (Kussul, 1988; Amosov et al., 1991; Rachkovskij, 1990b; Kussul & Rachkovskij, 1991).

Example. Let us consider an alphabet of six component items a, b, c, d, e, f. They are encoded by stochastic fixed vectors of N = 100,000 bits with M ≈ 1000 bits set to 1. Let us obtain the thinned codevector 〈abc〉. The number of 1s in 〈abc〉 in our numerical example is |〈abc〉| = 1002. Let us find the overlap of each component codevector with the thinned codevector: |a ∧ 〈abc〉| = 341; |b ∧ 〈abc〉| = 350; |c ∧ 〈abc〉| = 334; |d ∧ 〈abc〉| = 12; |e ∧ 〈abc〉| = 7; |f ∧ 〈abc〉| = 16. So the representation of the component items a, b, c is substantially higher than the representation of the items d, e, f, which occurs only due to the stochastic overlap of independent binary codevectors. The numbers obtained are typical for the additive and the subtractive versions of thinning, as well as for their self-exclusive versions.
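The decoding loop of this example can be written directly (a sketch, ours; `thin` is the Figure 4B-style additive auto-CDT from the earlier sketch):

```python
import numpy as np

rng = np.random.default_rng(7)
N, M = 100_000, 1_000

def cv():
    v = np.zeros(N, dtype=np.uint8)
    v[rng.choice(N, size=M, replace=False)] = 1
    return v

def thin(z, M, max_bundles=10_000):
    out = np.zeros_like(z)
    k = 0
    while out.sum() < M and k < max_bundles:
        k += 1
        out |= z & np.roll(z, k)
    return out

alphabet = {name: cv() for name in "abcdef"}
t = thin(alphabet["a"] | alphabet["b"] | alphabet["c"], M)
overlaps = {name: int((x & t).sum()) for name, x in alphabet.items()}
print(sorted(overlaps.items(), key=lambda kv: -kv[1]))
# the components a, b, c score near M/3; d, e, f only near |t|*M/N
```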
7. Similarity preservation by the thinning procedures

In this section, let us consider the similarity of thinned composite codevectors, as well as the similarity of the thinned representations of component codevectors inside the thinned composite codevectors. These kinds of similarity are considered under different combinations of component items and different versions of the thinning procedures. Let us use the following Context-Dependent Thinning procedures:
- permutive conjunctive thinning, section 4.2 (Paired-M);
- additive auto-CDT, section 4.3 (CDTadd);
- additive self-exclusive auto-CDT, section 5 (CDTadd-sl);
- subtractive auto-CDT, section 4.4 (CDTsub);
- subtractive self-exclusive auto-CDT, section 5 (CDTsub-sl).

For these experiments, let us first obtain the composite codevectors which have 5 down to 0 component codevectors in common: abcde, abcdf, abcfg, abfgh, afghi, fghij. For the component codevectors, N = 100,000 bits with M ≈ 1000. Then, for each thinning procedure, let us thin the composite codevectors down to the density of their component codevectors. (For permutive conjunctive thinning, the density of the component codevectors was chosen to get approximately M 1s in the result.)

7.1. Similarity of thinned codevectors

Let us find the overlap of the thinned codevector 〈abcde〉 with 〈abcde〉, 〈abcdf〉, 〈abcfg〉, 〈abfgh〉, 〈afghi〉, and 〈fghij〉. Here 〈〉 is used to denote any thinning procedure. A normalized measure of the overlap of x with various y is determined as |x∧y|/|x|. The experimental results are presented in Figure 7A, where the normalized overlap of thinned composite codevectors is shown versus the normalized overlap of the corresponding unthinned composite codevectors. It can be seen that the overlap of the thinned codes for the various versions of the CDT procedure is approximately equal to the square of the overlap of the unthinned codes. For example, the similarity (overlap) of abcde and abfgh is approximately 0.4 (two common components of five total), and the overlap of their thinned codevectors is about 0.16.

7.2. Similarity of component codevector subsets included into thinned codevectors

Some experiments were conducted in order to investigate the similarity of subsets requirement (3.8). The similarity of the subsets of a component codevector incorporated into various thinned composite vectors was obtained as follows. First, the intersections of various thinned five-component composite codevectors with their component a were determined: u = a∧〈abcde〉, v = a∧〈abcdf〉, w = a∧〈abcfg〉, x = a∧〈abfgh〉, y = a∧〈afghi〉. Then, the normalized values of the overlap of the intersections were obtained as |u∧v|/|u|, |u∧w|/|u|, |u∧x|/|u|, |u∧y|/|u|. Figure 7B shows how the similarity (overlap) of the component codevector subsets incorporated into two thinned composite codevectors varies versus the similarity of the corresponding unthinned composite codevectors. It can be seen that these dependencies are different for the different thinning procedures: for CDTadd and CDTadd-sl they are close to linear, but for CDTsub and CDTsub-sl they are polynomial. Which is preferable depends on the application.

7.3. The influence of the depth of thinning

By the depth of thinning we understand the density value of a thinned composite codevector. Before, we considered it equal to the density of the component codevectors. Here, we vary the density of the thinned codevectors.
The experimental results presented in Figure 8 are useful for the estimation of the resulting similarity of thinned codevectors in applications. As in sections 7.1-7.2, composite codevectors of five components were used. Therefore approximately 5M 1s (actually, closer to 4.9M because of random overlaps) were in the input vector before thinning. We varied the number of 1s in the thinned codevectors from 4M to M/4. Only the additive and the subtractive CDT procedures were investigated.

The similarity of thinned codevectors is shown in Figure 8A. For a shallow thinning, where the resulting density is near the density of the input composite codevector, the degree of similarity of the resulting vectors is close to that of the input codevectors (the curve is close to linear). For a deep thinning, where the density of the thinned codevectors is much less than the density of the input codevectors, the similarity function behaves as a power function, transforming from linear through quadratic to cubic (for subtractive thinning). The similarity of the component subsets in the thinned codevector is shown in Figure 8B. For the additive CDT procedure, the similarity function is linear, and its angle reaches approximately 45° for "deep" thinning. For the subtractive CDT procedure, the function is similar to the additive one for "shallow" thinning, and becomes near-quadratic for "deep" thinning.

8. Representation of structured expressions

Let us consider the representation of various kinds of structured data by binary sparse codevectors of fixed dimensionality. In the examples below, the items of the base level are taken as atomic for a given expression; they may, in their turn, represent complex structured data.

8.1. Transformation of symbolic bracketed expressions into representations by codevectors

Performing the CDT procedure can be viewed as an analog of introducing brackets into symbolic descriptions. As mentioned in section 5, the CDT procedures with different thinning configurations are denoted by different labels at the opening thinning bracket: [1]〈 〉, [2]〈 〉, [3]〈 〉, [4]〈 〉, [5]〈 〉, etc. Therefore, in order to represent a complex symbolic structure by a distributed binary codevector, one should (see the sketch after this list):
- map each symbol of a base-level item to the corresponding binary sparse codevector of fixed dimensionality;
- replace the conventional brackets in the symbolic bracketed representation by "thinning" ones; each compositional level has its own label of thinning brackets, that is, its own thinning configuration;
- superimpose the codevectors inside the thinning brackets of the deepest nesting level by elementwise disjunction;
- perform the CDT procedure on the superimposed codevectors using the configuration of thinning corresponding to the particular "thinning" label;
- superimpose the resulting thinned vectors inside the thinning brackets of the next nesting level;
- perform the CDT procedure on the superimposed codevectors using the appropriate thinning configuration of that nesting level;
- repeat the two previous steps until the whole structure is encoded.
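A sketch of this transformation (ours; the seeded-shift thinning stands in for the labeled configurations of permutive connections, and each nesting depth simply seeds its own configuration) is given below. Ordering of items is ignored here; it is treated in the next subsection.

```python
import numpy as np

N, M, K = 100_000, 1_000, 35
rng = np.random.default_rng(8)

def cv():
    v = np.zeros(N, dtype=np.uint8)
    v[rng.choice(N, size=M, replace=False)] = 1
    return v

def thin(u, label):
    """Additive auto-CDT; each label seeds its own set of K shifts."""
    shifts = np.random.default_rng(label).integers(1, N, size=K)
    mask = np.zeros_like(u)
    for s in shifts:
        mask |= np.roll(u, s)
    return u & mask

def encode(expr, lexicon, level=1):
    """expr: a symbol, or a tuple standing for a bracketed group."""
    if isinstance(expr, str):
        return lexicon.setdefault(expr, cv())      # base-level codevector
    parts = [encode(e, lexicon, level + 1) for e in expr]
    return thin(np.bitwise_or.reduce(parts), label=level)

lexicon = {}
code = encode(("d", ("a", "n")), lexicon)          # a nested group like (d (a n))
print(code.sum())                                  # stays in the vicinity of M
```

With a fixed K, the output density is only approximately maintained across levels; choosing K per Table 3, or stopping at |out| ≈ M as in Figure 4B, gives tighter control.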
8.2. Representation of ordered items

For many propositions, the order of arguments is essential. To represent the order of items encoded by the codevectors, binding with appropriate roles is usually used. One approach is to use explicit binding of role codevectors (agent-object, antecedent-consequent, or just ordinal number) with the item (filler) codevector. This binding can be realized by an auto- or hetero-CDT procedure (Rachkovskij, 1990b). The item a which is #3 may be represented as 〈a ∨ n3〉 or 〈a〉_n3, where n3 is the codevector of the "third place" role.

Another approach is to use implicit binding by providing different locations for different positions of an item in a proposition. To preserve the fixed dimensionality of codevectors, it was proposed to encode different positions by specific shifts of the codevectors (Kussul & Baidyk, 1993). (Reversible permutations can also be used.) For our example, we have a shifted by the number n3 of 1-bit shifts corresponding to the "third place" of an item.

These and other techniques to represent the order of items have their pros and cons, and a specific technique should be chosen depending on the application, so we will not consider the details here. It is important that such techniques exist, and we will denote the codevector of item a at the n-th place simply by a_n. Let us note that generally the modification of an item codevector to encode its ordinal number should be different for different nesting levels. This is analogous to having its own thinning configuration at each level of nesting. Therefore a and b should be modified in the same manner in [1]〈...a_n...〉 and [1]〈...b_n...〉, but a should generally be modified differently in [2]〈...a_n...〉.

8.3. Examples

8.3.1. Role-filler structure

Representations of structures or propositions by the role-filler scheme are widely used (Smolensky, 1990; Pollack, 1990; Plate, 1991, 1995; Kanerva, 1996; Sperduti, 1994). Let us consider the relational instance

knows(Sam, loves(John, Mary)). (8.1)

Using the Holographic Reduced Representations of Plate, it can be represented as:

L1 = love + loveagt∗john + loveobj∗mary, (8.2)
L2 = know + knowagt∗sam + knowobj∗L1, (8.3)

where ∗ stands for the binding operation, and + denotes addition. In our representation:

L1 = [2]〈love ∨ [1]〈loveagt ∨ john〉 ∨ [1]〈loveobj ∨ mary〉〉, (8.4)
L2 = [4]〈know ∨ [3]〈knowagt ∨ sam〉 ∨ [3]〈knowobj ∨ L1〉〉. (8.5)

8.3.2. Predicate-arguments structure

Let us consider the representation of the relational instances loves(John, Mary) and loves(Tom, Wendy) by the predicate-arguments (or symbol-argument-argument) structure (Halford, Wilson, & Phillips, in press):

loves∗John∗Mary + loves∗Tom∗Wendy. (8.6)

Using our representation, we obtain:

[2]〈[1]〈loves_0 ∨ John_1 ∨ Mary_2〉 ∨ [1]〈loves_0 ∨ Tom_1 ∨ Wendy_2〉〉. (8.7)

Let us note that this example may be represented using the role-filler scheme of HRRs as

L1 = loves + lover∗Tom + loved∗Wendy, (8.8)
L2 = loves + lover∗John + loved∗Mary, (8.9)
L = L1 + L2. (8.10)

Under such a representation, the information about who loves whom is lost in L (Plate, 1995; Halford, Wilson, & Phillips, in press). In our representation, this information is preserved even using the role-filler scheme:

L1 = [2]〈loves ∨ [1]〈lover ∨ Tom〉 ∨ [1]〈loved ∨ Wendy〉〉, (8.11)
L2 = [2]〈loves ∨ [1]〈lover ∨ John〉 ∨ [1]〈loved ∨ Mary〉〉, (8.12)
L = 〈L1 ∨ L2〉. (8.13)

Another example of a relational instance from Halford, Wilson, & Phillips (in press):

cause(shout-at(John, Tom), hit(Tom, John)). (8.14)

Using our representation scheme, it may be represented as

[2]〈cause_0 ∨ [1]〈shout-at_0 ∨ John_1 ∨ Tom_2〉_1 ∨ [1]〈hit_0 ∨ Tom_1 ∨ John_2〉_2〉. (8.15)

8.3.3. Tree-like structure

An example of a bracketed binary tree adapted from Pollack (1990):

((d (a n)) (v (p (d n)))). (8.16)
If we do not take the order into account, but use only the information about the grouping of constituents, our representation may look as simple as:

[4]〈[3]〈d ∨ [2]〈a ∨ n〉〉 ∨ [3]〈v ∨ [2]〈p ∨ [1]〈d ∨ n〉〉〉〉. (8.17)

8.3.4. Labeled directed acyclic graph

Sperduti & Starita (1997) and Frasconi, Gori, & Sperduti (1997) provide examples of labeled directed acyclic graphs. Let us consider

F(a, f(y), f(y, F(a, b))). (8.18)

Using our representation, it may look as

[3]〈F_0 ∨ a_1 ∨ [2]〈f_0 ∨ y_1〉_2 ∨ [2]〈f_0 ∨ y_1 ∨ [1]〈F_0 ∨ a_1 ∨ b_2〉_2〉_3〉. (8.19)

9. Related work and discussion

The procedures of Context-Dependent Thinning allow the construction of binary sparse representations of complex data structures, including nested compositional structures or part-whole hierarchies. The basic principles of such representations and their use for data handling were proposed in the context of the Associative-Projective Neural Networks paradigm (Kussul, 1988, 1992; Kussul, Rachkovskij, & Baidyk, 1991a).

9.1. Comparison to other representation schemes

Let us compare our scheme for the representation of complex data structures using the CDT procedure (we will call it "APNN-CDT" below) with other schemes using distributed representations. The best known schemes are (L)RAAMs (Pollack, 1990; Blair, 1997; Sperduti, 1994), Tensor Product representations (Smolensky, 1990; Halford, Wilson, & Phillips, in press), Holographic Reduced Representations (HRRs) (Plate, 1991, 1995), and Binary Spatter Codes (BSCs) (Kanerva, 1994, 1996). For this comparison, we will use the framework of Plate (1997), who proposes to distinguish these schemes using the following features: the nature of the distributed representation; the choice of superposition; the choice of the binding operation; how the binding operation is used to represent predicate structure; and the use of other operations and techniques.

9.1.1. The nature of distributed representation

Vectors of random real-valued elements with the Gaussian distribution are used in HRRs. Dense binary random codes with the number of 1s equal to the number of 0s are used in BSCs. Vectors with real or binary elements (without specified distributions) are used in the other schemes. In the APNN-CDT scheme, binary vectors with a randomly distributed small number of 1s are used to encode base-level items.

9.1.2. The choice of superposition

The operation of superposition is used for the unstructured representation of an aggregate of codevectors. In BSCs, superposition is realized as a bitwise thresholded addition of codevectors. Schemes with non-binary elements, such as HRRs, use elementwise summation. For tensors, superposition is realized by adding up or ORing the corresponding elements. In the APNN-CDT scheme, elementwise OR is used.

9.1.3. The choice of binding operation

Most schemes use special operations for the binding of codevectors. The binding operations producing a bound vector that has the same dimension as the initial codevectors (or one of them, in (L)RAAMs) are convenient for the representation of recursive structures. The binding operation is performed "on the fly" by circular convolution (HRRs), elementwise multiplication (Gayler, 1998), or XOR (BSCs). In (L)RAAMs, binding is realized through multiplication of the input codevectors by the weight matrix of the hidden layer, formed by training a multilayer perceptron on the codevectors to be bound. The vector obtained by binding can be bound with another codevector in its turn.
In Tensor Models, binding of several codevectors is performed by their tensor product. The dimensionality of the resulting tensor grows with the number of bound codevectors.

In the APNN-CDT scheme, binding is performed by the Context-Dependent Thinning procedure. Unlike the other schemes, where the codevectors to be bound are not superimposed, here they can be superimposed by disjunction in the basic version of the CDT procedure. The superposition codevector z (as in equation 4.4) makes up the context codevector. The result of the CDT procedure may be considered as superimposed bindings of each component codevector with the context codevector. Or, it may be considered as superimposed paired bindings of all component codevectors with each other. (Note that in the "self-exclusive" CDT version (section 5) the codevector of each component is not bound to itself. In the hetero-CDT version, one codevector is bound to another codevector through thinning with the latter.)

According to Plate's framework, CDT as a binding procedure can be considered as a special kind of superposition (disjunction) of certain elements of the tensor product of z with itself (i.e. the N^2 scalar products z_i z_j). Actually, 〈z〉_i is the disjunction of certain z_i z_j = z_i ∧ z_j, where z_j is the j-th element of the permuted z (equation 4.10). CDT can also be considered as a hashing procedure: the subspace to which hashing is performed is defined by the 1s of z, and some 1s of z are mapped to that subspace.

Since the resulting bound codevector 〈z〉 is obtained in the CDT procedure by thinning the 1s of z (where the component codevectors are superimposed), 〈z〉 is similar to its component codevectors (unstructured similarity is preserved). Therefore, to retrieve the components bound in the thinned codevector, we only need to choose the most similar component codevectors from their alphabet. This can be done using an associative memory. None of the mentioned binding operations, except for the CDT, preserves unstructured similarity. Therefore, to extract some component codevector from the bound codevector, they require knowledge of the other component codevector(s). Then rebinding of the bound codevector with the inverses of the known component codevector(s) produces a noisy version of the sought component codevector. This operation is known as decoding or "unbinding". To eliminate the noise from the unbound codevector, a "clean-up" memory with the full alphabet of component codevectors is also required in those schemes. If some or all components of the bound codevector are not known, decoding in those schemes requires an exhaustive search (substitution, binding, and checking) through all combinations of codevectors from the alphabet. Then the obtained bound codevector most similar to the bound codevector to be decoded provides the information on its composition.

As in the other schemes, structured similarity is preserved by the CDT, i.e. bindings of similar patterns are similar to each other. However, the character of the similarity is different. In most of the other schemes the similarity of the bound codevectors is equal to the product of the similarities of the component codevectors (e.g. Plate, 1995). For example, the similarity of a∗b and a∗b' is equal to the similarity of b and b'. Therefore if b and b' are not similar at all, the bound vectors also will not be similar. The codevectors to be bound by the CDT procedure are initially superimposed component codevectors, so their initial similarity is the mean of the components' similarities.
Also, the thinning itself preserves approximately the square of the similarity of the input vectors. So, the similarity for dissimilar b and b' will be >0.25 instead of 0 as in the other schemes.

9.1.4. How the binding operation is used to represent predicate structure

In most of the schemes predicate structures are represented by role-filler bindings. Halford, Wilson, & Phillips (in press) use predicate-argument bindings. The APNN-CDT scheme allows such representations of predicate structures as role-filler bindings and predicate-argument bindings, and also offers a potential for other possible representations. Both ordered and unordered arguments can be represented.

9.1.5. The use of other operations and techniques

Normalization. After superposition of codevectors, some normalizing transformation is used in various schemes to bring the individual elements or the total strength of the resulting codevector within certain limits. In BSCs, it is the threshold operation that converts a non-binary codevector (the bitwise sum of component codevectors) to a binary one. In HRRs, it is the scaling of codevectors to unit length that facilitates their comparison. The CDT procedure performs a dual role: it not only binds the superimposed codevectors of the components, but also normalizes the density of the resulting codevector. It would be interesting to check to what extent the normalization operations in other schemes provide the effect of binding as well.

Clean-up memory. Associative memory is used in various representation schemes for the storage of component codevectors and their recall (clean-up after finding their approximate noisy versions using unbinding). After the CDT procedure, the resulting codevector is similar to its component codevectors; however, the latter are represented in the reduced form. Therefore it is natural to use associative memories in the APNN-CDT scheme to store and retrieve the codevectors of component items of various complexity levels. Since component codevectors of different complexity levels have approximately the same and small number of 1s, an associative memory based on assembly neural networks with a simple Hebbian learning rule allows efficient storage and retrieval of a large number of codevectors.

Chunking. The problem of chunking remains one of the least developed issues in the existing representation schemes. In HRRs and BSCs, chunks are normalized superpositions of stand-alone component codevectors and their bindings. In its turn, the codevector of a chunk can be used as one of the components for binding. Thus, chunking allows structures of arbitrary nesting or composition level to be built. Each chunk should be stored in a clean-up memory. When complex structures are decoded by unbinding, noisy versions of the chunk codevectors are obtained. They are used to retrieve pure versions from the clean-up memory, which can be decoded in their turn. In those schemes, the codevectors of chunks are not bound; therefore they can not be superimposed without the risk of structure loss, as was repeatedly mentioned in this paper. In the APNN-CDT scheme, any composite codevector after thinning represents a chunk. Since the component codevectors are bound in the chunk codevector, the latter can be operated with as a single whole (an entity) without confusion of components belonging to different items. When a compositional structure is constructed using HRRs or BSCs, the chunk codevector is usually the filler which becomes bound with some role codevector.
In this case, in distinction to the APNN-CDT scheme, the components a, b, c of the chunk become bound with the role rather than with each other:

role∗(a + b + c) = role∗a + role∗b + role∗c. (9.1)

Again, if the role is not unique, it can not be determined to which chunk the binding role∗a belongs. Also, the role codevector should be known for unbinding and the subsequent retrieval of the chunk. Thus in the representation schemes of HRRs and Binary Spatter Codes each of the component codevectors belonging to a chunk binds with (role) codevectors of other hierarchical levels not belonging to that chunk. Therefore such bindings may be considered "vertical". In the APNN-CDT scheme, a "horizontal" binding is essential: the codevectors of the chunk components are bound with each other. In the schemes of Plate, Kanerva, and Gayler, the vertical binding chain role_upper_level ∗ (role_lower_level ∗ filler) is indistinguishable from role_lower_level ∗ (role_upper_level ∗ filler), because their binding operations are associative and commutative. For the CDT procedure, in contrast, [2]〈[1]〈a ∨ b〉 ∨ c〉 ≠ [2]〈a ∨ [1]〈b ∨ c〉〉, and also 〈〈a ∨ b〉 ∨ c〉 ≠ 〈a ∨ 〈b ∨ c〉〉.

Gayler (1998) proposes to bind a chunk codevector with its permuted version. It resembles the version of the thinning procedure from section 4.2, but for real-valued codevectors. Different codevector permutations for different nesting levels allow the components of chunks from different levels to be distinguished, in a similar fashion to using different configurations of thinning connections in the CDT. However, since the result of binding in the scheme of Gayler and in the other considered schemes (with the exception of APNN-CDT) is not similar to the component codevectors, in those schemes decoding of a chunk codevector created by binding with a permutation of itself will generally require an exhaustive search through all combinations of component codevectors. This problem with the vertical binding schemes of Plate, Kanerva, and Gayler can be rectified by using a binding operation that, prior to a conventional binding operation, permutes its left and right arguments differently (as discussed on p. 84 in Plate (1994)).

The obvious problem of the Tensor Product representation is the growth of the dimensionality of the resulting pattern obtained by the binding of components. If it is not solved, the dimensionality will grow exponentially with the nesting depth. Halford, Wilson, & Phillips (in press) consider chunking as the means to reduce the rank of the tensor representation. To realize chunking, they propose to use the operations of convolution, concatenation, and superposition, as well as some special function that associates the outer product with a codevector of lower dimension. However, the first three operations do not rule out confusion of the grouping or ordering of arguments inside a chunk (i.e., different composite items may produce identical chunks). And the special function (and its inverse) requires a concrete definition. Probably it could be done using associative memory, e.g. of the sigma-pi type proposed by Plate (1998).

In (L)RAAMs, the chunks of different nesting levels are encoded in the same weight matrix of connections between the input layer and the hidden layer of a multilayer perceptron. It may be one of the reasons for poor generalization.
If additional multilayer perceptrons were introduced for each nesting level (with the input for each following perceptron provided by the hidden layer of the preceding one, similarly to Sperduti & Starita, 1997), generalization in those schemes would probably improve. In the APNN, chunks (thinned composite codevectors) of different nesting levels are memorized in different auto-associative neural networks. This allows easy similarity-based decoding of a chunk through its subchunks of the previous nesting level and decreases the memory load at each nesting level (see also Rachkovskij, accepted).

9.2. Sparse binary schemes

Indicating unknown areas where useful representation schemes for nested compositional structures may be found, Plate (1997) notes that known schemes handle sparse binary patterns poorly, because known binding and superposition operations change the density of sparse patterns. Of the work known to us, only Sjödin (1998) expresses the idea of "thinning", in an effort to avoid the low associative memory capacity for dense binary patterns. He defines the thinning operation as preserving the 1s corresponding to the maxima of some function defined over the binary vector. The values of that function can be determined at cyclic shifts of the codevector by the number of steps equal to the ordering number of 1s in that codevector. However, it is not clear from this description what the properties of the maxima are and, therefore, what the character of the resulting similarity is.

The CDT procedure considered in this paper allows the density of codevectors to be preserved while binding them. Coupled with the techniques for encoding pattern order, this procedure allows implementation of various representation schemes for complex structured data. Approximately the same low density of binary codevectors at different nesting levels permits the use of identical procedures for construction, recognition, comparison, and decoding of patterns at different hierarchical levels of the Associative-Projective Neural Network architecture (Kussul 1988, 1992; Kussul, Rachkovskij, & Baidyk, 1991). The CDT procedure preserves the similarity of encoded descriptions, allowing the similarity of complex structures to be determined by the overlap of their codevectors. Also, in the codevectors of complex structures formed using the CDT procedure, the representation of the component codevectors (a subset of their 1s) is reduced. Therefore, the APNN-CDT scheme can be considered another implementation of Hinton's (1990) reduced descriptions, Amosov's (1967) item coarsening, or the compression of Halford, Wilson, & Phillips (in press). Moreover, the CDT scheme is biologically relevant, since it uses sparse representations and allows a simple neural-network implementation.

10. Conclusion

The procedures of Context-Dependent Thinning described in this paper perform binding of items represented as sparse binary codevectors. They allow a variable number of superimposed patterns to be bound on the fly while preserving the density of the bound codevectors. The result of the CDT is of the same dimensionality as the component codevectors. Using the auto-CDT procedures as analogs of brackets in the bracketed symbolic representation of various complex data structures permits easy transformation of these representations into binary codevectors of fixed dimensionality with a small number of 1s. Unlike other binding procedures, binding by the auto-CDT preserves the similarity of the bound codevector with each of the component codevectors.
This makes it possible both to determine the similarity of complex items with each other by the overlap of their codevectors and to retrieve the codevectors of their components in full size. Such operations are efficiently implementable by distributed associative memories, which provide high storage capacity for codevectors with a small number of 1s. The APNN-CDT style representations have already been used by us in applications (earlier work is reviewed in Kussul, 1992, 1993; more recent developments are described in Lavrenyuk, 1995; Rachkovskij, 1996; Kussul & Kasatkina, 1999; Rachkovskij, accepted). We hope that the CDT procedures will find their application for distributed representation and manipulation of complex compositional data structures, contributing to the progress of connectionist symbol processing (Touretzky, 1995, 1990; Touretzky & Hinton, 1988; Hinton, 1990; Plate, 1995). Fast (parallel) evaluation of similarity, or finding of the most similar compositional items, allowed by such representations is extremely useful for the solution of a wide range of AI problems.

Acknowledgements: The authors are grateful to Pentti Kanerva and Tony Plate for their extensive and helpful comments, valuable suggestions, and continuous support. This work was funded in part by the International Science Foundation Grants U4M000 and U4M200.

**References**

Amari, S. (1989). Characteristics of sparsely encoded associative memory. Neural Networks, 2, 445-457.

Amosov, N. M. (1967). Modelling of thinking and the mind. New York: Spartan Books.

Amosov, N. M., Baidyk, T. N., Goltsev, A. D., Kasatkin, A. M., Kasatkina, L. M., Kussul, E. M., & Rachkovskij, D. A. (1991). Neurocomputers and intelligent robots. Kiev: Naukova dumka. (In Russian).

Artykutsa, S. Ya., Baidyk, T. N., Kussul, E. M., & Rachkovskij, D. A. (1991). Texture recognition using neurocomputer. (Preprint 91-8). Kiev, Ukraine: V. M. Glushkov Institute of Cybernetics. (In Russian).

Baidyk, T. N., Kussul, E. M., & Rachkovskij, D. A. (1990). Numerical-analytical method for neural network investigation. In Proceedings of The International Symposium on Neural Networks and Neural Computing - NEURONET'90 (pp. 217-222). Prague, Czechoslovakia.

Blair, A. D. (1997). Scaling-up RAAMs. (Technical Report CS-97-192). Brandeis University, Department of Computer Science.

Fedoseyeva, T. V. (1992). The problem of training a neural network to recognize word roots. In Neuron-like networks and neurocomputers (pp. 48-54). Kiev, Ukraine: V. M. Glushkov Institute of Cybernetics. (In Russian).

Feldman, J. A., & Ballard, D. H. (1982). Connectionist models and their properties. Cognitive Science, 6, 205-254.

Feldman, J. A. (1989). Neural Representation of Conceptual Knowledge. In L. Nadel, L. A. Cooper, P. Culicover, & R. M. Harnish (Eds.), Neural connections, mental computation (pp. 68-103). Cambridge, Massachusetts, London, England: A Bradford Book, The MIT Press.

Foldiak, P., & Young, M. P. (1995). Sparse Coding in the Primate Cortex. In M. A. Arbib (Ed.), Handbook of brain theory and neural networks (pp. 895-898). Cambridge, MA: MIT Press.

Frasconi, P., Gori, M., & Sperduti, A. (1997). A general framework for adaptive processing of data structures. Technical Report DSI-RT-15/97. Firenze, Italy: Universita degli Studi di Firenze, Dipartimento di Sistemi e Informatica.

Frolov, A. A., & Muraviev, I. P. (1987). Neural models of associative memory. Moscow: Nauka. (In Russian).

Frolov, A. A., & Muraviev, I. P. (1988).
Informational characteristics of neuronal and synaptic plasticity. Biophysics, 33, 708-715.

Frolov, A. A. (1989). Information properties of bilayer neuron nets with binary plastic synapses. Biophysics, 34, 868-876.

Gayler, R. W. (1998). Multiplicative binding, representation operators, and analogy. In K. Holyoak, D. Gentner, & B. Kokinov (Eds.), Advances in analogy research: Integration of theory and data from the cognitive, computational, and neural sciences (p. 405). Sofia, Bulgaria: New Bulgarian University. (Poster abstract. Full poster available at: http://cogprints.soton.ac.uk/abs/comp/199807020).

Halford, G. S., Wilson, W. H., & Phillips, S. (in press). Processing capacity defined by relational complexity: implications for comparative, developmental, and cognitive psychology. Behavioral and Brain Sciences.

Hebb, D. O. (1949). The organization of behavior. New York: Wiley.

Hinton, G. E. (1981). Implementing semantic networks in parallel hardware. In G. E. Hinton & J. A. Anderson (Eds.), Parallel models of associative memory (pp. 161-187). Hillside, NJ: Lawrence Erlbaum Associates.

Hinton, G. E. (1990). Mapping part-whole hierarchies into connectionist networks. Artificial Intelligence, 46, 47-76.

Hinton, G. E., McClelland, J. L., & Rumelhart, D. E. (1986). Distributed representations. In D. E. Rumelhart, J. L. McClelland, & the PDP research group (Eds.), Parallel distributed processing: Exploration in the microstructure of cognition 1: Foundations (pp. 77-109). Cambridge, MA: MIT Press.

Hopfield, J. J. (1982). Neural networks and physical systems with emergent collective computational abilities. Proceedings of the National Academy of Sciences, USA, 79, 2554-2558.

Hopfield, J. J., Feinstein, D. I., & Palmer, R. G. (1983). "Unlearning" has a stabilizing effect in collective memories. Nature, 304, 158-159.

Hummel, J. E., & Holyoak, K. J. (1997). Distributed representations of structure: A theory of analogical access and mapping. Psychological Review, 104, 427-466.

Kanerva, P. (1988). Sparse distributed memory. Cambridge, MA: MIT Press.

Kanerva, P. (1994). The Spatter Code for encoding concepts at many levels. In M. Marinaro and P. G. Morasso (Eds.), ICANN '94, Proceedings of the International Conference on Artificial Neural Networks (Sorrento, Italy), 1, pp. 226-229. London: Springer-Verlag.

Kanerva, P. (1996). Binary Spatter-Coding of Ordered K-tuples. In C. von der Malsburg, W. von Seelen, J. C. Vorbruggen, & B. Sendhoff (Eds.), Proceedings of the International Conference on Artificial Neural Networks - ICANN'96, Bochum, Germany. Lecture Notes in Computer Science, 1112, 869-873. Berlin: Springer.

Kanerva, P. (1998). Encoding structure in Boolean space. In L. Niklasson, M. Boden, and T. Ziemke (Eds.), ICANN 98: Perspectives in Neural Computing (Proceedings of the 8th International Conference on Artificial Neural Networks, Skoevde, Sweden), 1, pp. 387-392. London: Springer.

Kasatkin, A. M., & Kasatkina, L. M. (1991). A neural network expert system. In Neuron-like networks and neurocomputers (pp. 18-24). Kiev, Ukraine: V. M. Glushkov Institute of Cybernetics. (In Russian).

Kussul, E. M. (1980). Tools and techniques for development of neuron-like networks for robot control. Unpublished Dr. Sci. dissertation. Kiev, Ukrainian SSR: V. M. Glushkov Institute of Cybernetics. (In Russian).

Kussul, E. M. (1988). Elements of stochastic neuron-like network theory. In Internal Report "Kareta_UN" (pp. 10-95). Kiev, Ukraine: V. M. Glushkov Institute of Cybernetics.
(In Russian).

Kussul, E. M. (1992). Associative neuron-like structures. Kiev: Naukova Dumka. (In Russian).

Kussul, E. M. (1993). On some results and prospects of development of associative-projective neurocomputers. In Neuron-like networks and neurocomputers (pp. 4-11). Kiev, Ukraine: V. M. Glushkov Institute of Cybernetics. (In Russian).

Kussul, E. M., & Baidyk, T. N. (1990). Design of a neural-like network architecture for recognition of object shapes in images. Soviet Journal of Automation and Information Sciences, 23(5), 53-58.

Kussul, E. M., & Baidyk, T. N. (1993). On information encoding in associative-projective neural networks. (Preprint 93-3). Kiev, Ukraine: V. M. Glushkov Institute of Cybernetics. (In Russian).

Kussul, E. M., Baidyk, T. N., Lukovich, V. V., & Rachkovskij, D. A. (1993). Adaptive neural network classifier with multifloat input coding. In Proceedings of NeuroNimes'93, Nimes, France, Oct. 25-29, 1993. EC2-publishing.

Kussul, E. M., & Kasatkina, L. M. (1999). Neural network system for continuous handwritten words recognition. (Submitted).

Kussul, E. M., & Rachkovskij, D. A. (1991). Multilevel assembly neural architecture and processing of sequences. In A. V. Holden & V. I. Kryukov (Eds.), Neurocomputers and Attention: Vol. II. Connectionism and neurocomputers (pp. 577-590). Manchester and New York: Manchester University Press.

Kussul, E. M., Rachkovskij, D. A., & Baidyk, T. N. (1991a). Associative-Projective Neural Networks: architecture, implementation, applications. In Proceedings of the Fourth International Conference "Neural Networks & their Applications", Nimes, France, Nov. 4-8, 1991 (pp. 463-476).

Kussul, E. M., Rachkovskij, D. A., & Baidyk, T. N. (1991b). On image texture recognition by associative-projective neurocomputer. In C. H. Dagli, S. Kumara, & Y. C. Shin (Eds.), Proceedings of the ANNIE'91 conference "Intelligent engineering systems through artificial neural networks" (pp. 453-458). ASME Press.

Lansner, A., & Ekeberg, O. (1985). Reliability and speed of recall in an associative network. IEEE Trans. Pattern Analysis and Machine Intelligence, 7, 490-498.

Lavrenyuk, A. N. (1995). Application of neural networks for recognition of handwriting in drawings. In Neurocomputing: issues of theory and practice (pp. 24-31). Kiev, Ukraine: V. M. Glushkov Institute of Cybernetics. (In Russian).

Legendy, C. R. (1970). The brain and its information trapping device. In J. Rose (Ed.), Progress in cybernetics, vol. I. New York: Gordon and Breach.

Marr, D. (1969). A theory of cerebellar cortex. Journal of Physiology, 202, 437-470.

Milner, P. M. (1974). A model for visual shape recognition. Psychological Review, 81, 521-535.

Milner, P. M. (1996). Neural representations: some old problems revisited. Journal of Cognitive Neuroscience, 8, 69-77.

Palm, G. (1980). On associative memory. Biological Cybernetics, 36, 19-31.

Palm, G. (1993). The PAN system and the WINA project. In P. Spies (Ed.), Euro-Arch'93 (pp. 142-156). Springer-Verlag.

Palm, G., & Bonhoeffer, T. (1984). Parallel processing for associative and neuronal networks. Biological Cybernetics, 51, 201-204.

Plate, T. A. (1991). Holographic Reduced Representations: Convolution algebra for compositional distributed representations. In J. Mylopoulos & R. Reiter (Eds.), Proceedings of the 12th International Joint Conference on Artificial Intelligence (pp. 30-35). San Mateo, CA: Morgan Kaufmann.

Plate, T. A. (1994). Distributed representations and nested compositional structure.
PhD Thesis, Department of Computer Science, University of Toronto.

Plate, T. A. (1995). Holographic Reduced Representations. IEEE Transactions on Neural Networks, 6, 623-641.

Plate, T. (1997). A common framework for distributed representation schemes for compositional structure. In F. Maire, R. Hayward, & J. Diederich (Eds.), Connectionist systems for knowledge representation and deduction (pp. 15-34). Queensland University of Technology.

Plate, T. A. (to appear). Structured Operations with Vector Representations. Expert Systems: The International Journal of Knowledge Engineering and Neural Networks, Special Issue on Connectionist Symbol Processing.

Plate, T. (submitted). Randomly connected sigma-pi neurons can form associative memories.

Pollack, J. B. (1990). Recursive distributed representations. Artificial Intelligence, 46, 77-105.

Rachkovskij, D. A. (1990a). On numerical-analytical investigation of neural network characteristics. In Neuron-like networks and neurocomputers (pp. 13-23). Kiev, Ukraine: V. M. Glushkov Institute of Cybernetics. (In Russian).

Rachkovskij, D. A. (1990b). Development and investigation of multilevel assembly neural networks. Unpublished Ph.D. dissertation. Kiev, Ukrainian SSR: V. M. Glushkov Institute of Cybernetics. (In Russian).

Rachkovskij, D. A. (1996). Application of stochastic assembly neural networks in the problem of interesting text selection. In Neural network systems for information processing (pp. 52-64). Kiev, Ukraine: V. M. Glushkov Institute of Cybernetics. (In Russian).

Rachkovskij, D. A. (to appear). Representation and Processing of Structures with Binary Sparse Distributed Codes. IEEE Transactions on Knowledge and Data Engineering, Special Issue on Connectionist Models for Learning in Structured Domains.

Rachkovskij, D. A., & Fedoseyeva, T. V. (1990). On audio signals recognition by multilevel neural network. In Proceedings of The International Symposium on Neural Networks and Neural Computing - NEURONET'90 (pp. 281-283). Prague, Czechoslovakia.

Rachkovskij, D. A., & Fedoseyeva, T. V. (1991). Hardware and software neurocomputer system for recognition of acoustical signals. In Neuron-like networks and neurocomputers (pp. 62-68). Kiev, Ukraine: V. M. Glushkov Institute of Cybernetics. (In Russian).

Shastri, L., & Ajjanagadde, V. (1993). From simple associations to systematic reasoning: connectionist representation of rules, variables, and dynamic bindings using temporal synchrony. Behavioral and Brain Sciences, 16, 417-494.

Sjödin, G. (1998). The Sparchunk Code: a method to build higher-level structures in a sparsely encoded SDM. In Proceedings of IJCNN'98 (pp. 1410-1415). Piscataway, NJ: IEEE.

Sjödin, G., Kanerva, P., Levin, B., & Kristoferson, J. (1998). Holistic higher-level structure-forming algorithms. In Proceedings of the 1998 Real World Computing Symposium - RWC'98 (pp. 299-304).

Smolensky, P. (1990). Tensor product variable binding and the representation of symbolic structures in connectionist systems. Artificial Intelligence, 46, 159-216.

Sperduti, A. (1994). Labeling RAAM. Connection Science, 6, 429-459.

Sperduti, A., & Starita, A. (1997). Supervised neural networks for the classification of structures. IEEE Transactions on Neural Networks, 8, 714-735.

Tsodyks, M. V. (1989). Associative memory in neural networks with the Hebbian learning rule. Modern Physics Letters B, 3, 555-560.

Touretzky, D. S. (1990). BoltzCONS: Dynamic symbol structures in a connectionist network. Artificial Intelligence, 46, 5-46.

Touretzky, D.
S. (1995). Connectionist and symbolic representations. In M. A. Arbib (Ed.), Handbook of brain theory and neural networks (pp. 243-247). Cambridge, MA: MIT Press.

Touretzky, D. S., & Hinton, G. E. (1988). A distributed connectionist production system. Cognitive Science, 12, 423-466.

Vedenov, A. A. (1987). "Spurious memory" in model neural networks. (Preprint IAE-4395/1). Moscow: I. V. Kurchatov Institute of Atomic Energy.

Vedenov, A. A. (1988). Modeling of thinking elements. Moscow: Science. (In Russian).

von der Malsburg, C. (1981). The correlation theory of brain function. (Internal Report 81-2). Gottingen, Germany: Max-Planck-Institute for Biophysical Chemistry, Department of Neurobiology.

von der Malsburg, C. (1985). Nervous structures with dynamical links. Ber. Bunsenges. Phys. Chem., 89, 703-710.

von der Malsburg, C. (1986). Am I thinking assemblies? In G. Palm & A. Aertsen (Eds.), Proceedings of the 1984 Trieste Meeting on Brain Theory (pp. 161-176). Heidelberg: Springer-Verlag.

Willshaw, D. (1981). Holography, associative memory, and inductive generalization. In G. E. Hinton & J. A. Anderson (Eds.), Parallel models of associative memory (pp. 83-104). Hillside, NJ: Lawrence Erlbaum Associates.

Willshaw, D. J., Buneman, O. P., & Longuet-Higgins, H. C. (1969). Non-holographic associative memory. Nature, 222, 960-962.

-----

Figure captions

Figure 1. Growth of the density p' of 1s in the composite codevectors of higher hierarchical levels (see equations 2.1 and 2.2). Here each codevector of the higher level is formed by bit disjunction of S codevectors of the preceding level. Items at each level are uncorrelated. The codevectors of base-level items are independent, with the density of 1s equal to p. The number of hierarchical levels is L. (For any given number of base-level items, the total number of 1s in the composite codevectors is obviously limited by the number of 1s in the disjunction of the base-level codevectors).

Figure 2. Hatched circles represent patterns of active units encoding items; formed connections are plotted by arrowed lines. (A) Formation of a false assembly. When three assemblies (abd; bce; acf) are consecutively formed in a neural network by connecting all active units of the patterns encoding their items, the fourth assembly abc (grid hatch) is formed as well, though its pattern was not explicitly presented to the network. (B) Prevention of a false assembly. If each of the three assemblies is formed by connecting only subsets of the active units encoding the component items, then the connectivity of the false assembly is weak. xyz denotes the subset of units encoding item x when it is combined with items y and z. The pairwise intersections of the small circles represent the false assembly.

Figure 3. A neural-network implementation of permutive conjunctive thinning. The same N-dimensional binary pattern is activated in the input neural fields fin1 and fin2. It is a superposition of several component codevectors. fin1 is connected to fout by a bundle of direct projective connections. fin2 is connected to fout by a bundle of permutive connections. Conjunction of the superimposed component codevectors and their permutation is obtained in the output neural field fout, where the neural threshold is θ = 1.5.

Figure 4. (A), (B). Algorithmic implementations of the additive version of the Context-Dependent Thinning procedure. Parameter seed defines a configuration of shift permutations. For small K, checking that r is unique would be useful.

Figure 5.
A neural-network implementation of the additive version of the Context-Dependent Thinning procedure. There are four neural fields with the same number of neurons: two input fields fin1 and fin2, the output field fout, and the intermediate field fint. The neurons of fin1 and fout are connected by a bundle of direct projective connections (1-to-1). fint and fout are also connected in the same manner. The same binary pattern z (corresponding to the superimposed component codevectors) is in the input fields fin1 and fin2. The intermediate field fint is connected to the input field fin2 by K bundles of permutive projective connections. The number K of required bundles is estimated in Table 3. Only two bundles are shown here: one by solid lines and one by dotted lines. The threshold of fint neurons is 0.5. Therefore fint accumulates (by bit disjunction) various permutations of the pattern z in fin2. The threshold of fout is equal to 1.5. Hence this field performs conjunction of the pattern z from fin1 and the pattern of K permuted and superimposed z from fint. z, 〈z〉, w correspond to the notation of Figure 4.

Figure 6. A neural-network implementation of the subtractive Context-Dependent Thinning procedure. There are three neural fields with the same number of neurons: two input fields fin1 and fin2, as well as the output field fout. A copy of the input vector z is in both input fields. The neurons of fin1 and fout are connected by a bundle of direct projective connections (1-to-1). The neurons of fin2 and fout are connected by K bundles of independent permutive connections. (Only two bundles of permutive connections are shown here: one by solid lines and one by dotted lines). Unlike Figure 5, the synapses of the permutive connections are inhibitory (the weight is -1). The threshold of the output field neurons is 0.5. Therefore the neurons of z remaining active in fout are those for which none of the permutive connections coming from z are active. As follows from Table 4, K is approximately the same for the number S = 2,...,5 of component codevectors of a given density p.

Figure 7: (A) Overlap of thinned composite codevectors and (B) overlap of component subsets in thinned composite codevectors - versus the overlap of the corresponding unthinned composite codevectors, for various versions of the thinning procedures. CDTadd - the additive, CDTsub - the subtractive, CDTadd-sl - the self-exclusive additive, CDTsub-sl - the self-exclusive subtractive CDT procedure; Paired-M - permutive conjunctive thinning, where the densities of the component codevectors are chosen to obtain M 1s in the thinned codevector. For the component codevectors, N=100000, M≈1000. The number of component codevectors: 5. The results are averaged over 50 runs with different random codevectors.

Figure 8: (A) Overlap of thinned composite codevectors and (B) overlap of component subsets in thinned composite codevectors, for various "depths" of the additive (CDTadd) and subtractive (CDTsub) procedures of Context-Dependent Thinning. There are five components in the composite item. Therefore the input composite codevector includes approximately 5M 1s. The composite codevector is thinned to have from 4M to M/4 of 1s. The two curves for thinning depth M are consistent with the corresponding curves in Figure 7. For all component codevectors, N=100000, M≈1000. The results are averaged over 50 runs with different random codevectors.

-----

[Figure 1 (plot): density p' of 1s versus the number L of hierarchical levels (L = 0–10); see the Figure 1 caption above.]
[Figure 2 (diagrams): panels (A) formation of a false assembly and (B) prevention of a false assembly; see the Figure 2 caption above.]

[Figure 3 (diagram): neural fields fin1, fout, fin2 with an example N-dimensional binary pattern; output threshold θ = 1.5.]

Figure 4 (A). The input codevector z = x1 ∨ x2 ∨ ... ∨ xS; the output (thinned) codevector is 〈z〉.
  Calculate K = ln(1 − 1/S) / ln(1 − pS).
  Set the auxiliary vector w to 0.
  Seed the random-number generator: randomize(seed).
  for (k = 1, 2, ..., K)
    if ((r = rand()) ≠ 0)
      for (i = 1, 2, ..., N) w_i = w_i ∨ z_(i+r) modulo N
  for (i = 1, 2, ..., N) 〈z〉_i = z_i ∧ w_i

Figure 4 (B). The input codevector z = x1 ∨ x2 ∨ ... ∨ xS.
  Set the output (thinned) codevector 〈z〉 to 0.
  Seed the random-number generator: randomize(seed).
  while (|〈z〉| < M)
    if ((r = rand()) ≠ 0)
      for (i = 1, 2, ..., N) 〈z〉_i = 〈z〉_i ∨ (z_i ∧ z_(i+r) modulo N)

[Figure 5 (diagram): neural fields fin1, fout, fint, fin2 holding the patterns z, 〈z〉, w, z; thresholds θ = 1.5 (fout) and θ = 0.5 (fint).]

[Figure 6 (diagram): neural fields fin1, fout, fin2 holding the patterns z, 〈z〉, z.]

[Figure 7 (plots): panels (A) and (B), overlap of thinned composite codevectors and of component subsets versus the overlap of composite codevectors, for CDTadd, CDTsub, CDTadd-sl, CDTsub-sl, and Paired-M; see the Figure 7 caption above.]

[Figure 8 (plots): panels (A) and (B), overlap of thinned composite codevectors and of component subsets versus the overlap of composite codevectors; see the Figure 8 caption above.]

-----
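For readers who want to experiment with the additive CDT from Figure 4 (A), the following is a minimal executable sketch in Python with NumPy. It is our own illustration, not the original APNN implementation: the dimensionality N, component density M/N, and the use of np.roll for the cyclic-shift permutations are illustrative assumptions.

```python
import numpy as np

def additive_cdt(components, seed=0):
    """Additive Context-Dependent Thinning, a sketch of Figure 4 (A).

    components: list of sparse binary codevectors (1-D arrays of 0/1).
    Returns the thinned composite codevector <z>, which remains similar
    to (overlaps with) each component codevector.
    """
    z = np.zeros_like(components[0])
    for x in components:
        z = z | x                          # superposition by bitwise disjunction
    S = len(components)
    p = float(components[0].mean())        # density of 1s in a component codevector
    # Number of shift permutations from Figure 4 (A): K = ln(1 - 1/S) / ln(1 - pS).
    # It thins the composite back to roughly the density of a single component.
    K = int(np.ceil(np.log(1 - 1 / S) / np.log(1 - p * S)))
    rng = np.random.default_rng(seed)      # 'seed' fixes the permutation configuration
    w = np.zeros_like(z)
    for _ in range(K):
        r = int(rng.integers(1, z.size))   # non-zero cyclic shift
        w = w | np.roll(z, r)              # accumulate permuted copies of z
    return z & w                           # conjunction: the context-dependent thinning

# Tiny demonstration with assumed parameters: N = 10000, M ~ 100 ones per component.
N, M = 10_000, 100
rng = np.random.default_rng(42)
comps = []
for _ in range(3):
    v = np.zeros(N, dtype=np.uint8)
    v[rng.choice(N, M, replace=False)] = 1
    comps.append(v)
thinned = additive_cdt(comps, seed=1)
print("density of <z>:", thinned.mean())               # close to a single component's density
print("overlap with component 0:", int((thinned & comps[0]).sum()))
```

Running this shows the two properties the paper emphasizes: the density of the thinned codevector stays near that of a single component, while the result retains a substantial subset of each component's 1s.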
{ "disclaimer": "Notice: The following paper fields have been elided by the publisher: {'references', 'abstract'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1162/089976601300014592?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1162/089976601300014592, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "GREEN", "url": "http://cogprints.org/1240/1/cdt-1sp.pdf" }
2,001
[ "JournalArticle" ]
true
2001-02-01T00:00:00
[]
27,966
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Economics", "source": "s2-fos-model" }, { "category": "Computer Science", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/026e0df72f523f1b772e9295344b7ba9877e1098
[ "Computer Science" ]
0.854808
Incentives for Crypto-Collateralized Digital Assets
026e0df72f523f1b772e9295344b7ba9877e1098
Proceedings
[ { "authorId": "31520104", "name": "Philip N. Brown" } ]
{ "alternate_issns": [ "0377-2969" ], "alternate_names": null, "alternate_urls": null, "id": "efad5e01-e2cf-44e8-9cfa-c5d9634707fc", "issn": "2198-7432", "name": "Proceedings", "type": null, "url": "http://paspk.org/Proceedings-of-PAS-39" }
Digital currencies such as Bitcoin frequently suffer from high price volatility, limiting their utility as a means of purchasing power. Hence, a popular topic among cryptocurrency researchers is a digital currency design which inherits the decentralization of Bitcoin while somehow mitigating its violent price swings. One such system which attempts to establish a price-stable cryptocurrency is the BitShares market-pegged-asset protocol. In this paper, we present a simple mathematical model of the BitShares protocol, and analyze it theoretically and numerically for incentive effects. In particular, we investigate how the selection of two key design parameters function as incentive mechanisms to encourage token holders to commit their core BitShares tokens as collateral for the creation of new price-stabilized tokens. We show a pair of analytical results characterizing some simple facts regarding the interplay between these design parameters. Furthermore, we demonstrate numerically that in some settings, setting these design parameters is a complex, sensitive, and unintuitive task, prompting further work to more fully understand this design process.
# Incentives for Crypto-Collateralized Digital Assets †

**Philip N. Brown**

Department of Computer Science at the University of Colorado, Colorado Springs, CO 80918, USA; philip.brown@uccs.edu

† Presented at the 3rd annual Decentralized Conference, Athens, Greece, 30 October–1 November 2019.

Published: 21 October 2019

**Abstract:** Digital currencies such as Bitcoin frequently suffer from high price volatility, limiting their utility as a means of purchasing power. Hence, a popular topic among cryptocurrency researchers is a digital currency design which inherits the decentralization of Bitcoin while somehow mitigating its violent price swings. One such system which attempts to establish a price-stable cryptocurrency is the BitShares market-pegged-asset protocol. In this paper, we present a simple mathematical model of the BitShares protocol, and analyze it theoretically and numerically for incentive effects. In particular, we investigate how the selection of two key design parameters function as incentive mechanisms to encourage token holders to commit their core BitShares tokens as collateral for the creation of new price-stabilized tokens. We show a pair of analytical results characterizing some simple facts regarding the interplay between these design parameters. Furthermore, we demonstrate numerically that in some settings, setting these design parameters is a complex, sensitive, and unintuitive task, prompting further work to more fully understand this design process.

**Keywords:** blockchain; cryptocurrency; decision theory; incentive design

**1. Introduction**

Since the introduction of the Bitcoin cryptocurrency in 2008 [1], digital currencies based on blockchain technology have sprung into the public view in large part due to the dramatic price swings they experience [2]. For example, in the two years from 1 January 2017 to 1 January 2019, the US-Dollar-denominated price of the BitShares core token (BTS) gained approximately 7500%, then lost about 85%, then gained 1600%, and then lost another 95% [3]. It is widely believed that this extreme volatility hinders the consumer adoption of cryptocurrencies for ordinary payments, since the purchasing power of an ordinary currency is expected to change very slowly, if at all [4].

Various methods have been proposed in recent years to create digital currencies which maintain the decentralized and trustless nature of ordinary cryptocurrencies without suffering from the same price volatility [5–9]. To date, the longest-running such project is the BitShares market-pegged-asset protocol (MPA), which has been nearly continuously operational in some form since mid-2014 [10]. The MPA system functions in a largely decentralized manner by allowing core token (BTS) holders to lock away their BTS tokens as collateral in exchange for a loan of bitAsset tokens, which can then be sold on the open market. The protocol has several mechanisms to regulate token supply and collateral ratios for the purpose of maintaining the market price of the pegged assets at or near a price target. While largely successful in maintaining a price peg (and considerably more so than some early competitors), two major problems have plagued the BitShares MPA system in recent years:

• **Loose price-pegging.** From 2014 to November 2018, the US Dollar-pegged smartcoin BitUSD has maintained an average price *near* $1, but for extended periods has traded above $1.15, with occasional drops to around $0.90.
This is a considerably narrower range than typical freely-floating cryptocurrencies, but stands as an important area for improvement.

-----

• **Undercollateralization.** To insulate collateral holders from each others' risky behavior, the MPA system has an in-built safety mechanism known as "global settlement." In the event of undercollateralization, the global settlement mechanism immediately closes all collateral positions and establishes a fixed exchange rate between BTS and the bitAsset, effectively ceasing any form of price-pegging. This mechanism was triggered in late 2018 on several BitShares-platform smartcoins, including BitUSD (which at the time of writing has a market capitalization of approximately $10 million).

In this paper, we present a highly-simplified model of the BitShares price-pegging mechanism, and use it to perform an initial numerical study on some of the key design tradeoffs facing a smartcoin protocol designer. In particular, we study how the selection of two protocol incentive parameters impacts the likelihood of undercollateralization events for the price-pegged smartcoins. Our central contribution is to show that the incentive model in BitShares contains regions of high sensitivity; that is, token holders' decisions are discontinuous in the incentive parameters in such a way that a small change in parameter values can cause a large sudden jump in overall system risk levels.

**2. Model and Performance Metrics**

*2.1. Collateral Incentive Model*

Due to the complexity of modeling the full problem, in this paper we allow (with loss of generality) the collateralization of a smartcoin to be controlled by a single centralized entity; we term this entity the *agent*. This simplification allows us to closely examine some of the key incentive issues involved in collateral management without any of the complications arising from a multi-agent setting. Throughout, the reader may refer to Table 1 for a compact depiction of the agent's perspective of the protocol.

The agent is modeled as possessing an endowment of Q core tokens to be committed as collateral. The agent must decide what *value* of stable tokens is to be created using the endowment Q as collateral. In other words, the agent must select a *collateral ratio* R, where

⟨Value of stable tokens created⟩ = Q/R. (1)

The agent's choice of R is lower-bounded by the blockchain *maintenance collateral ratio* parameter M ≥ 1; that is,

R ≥ M. (2)

In the BitShares blockchain, a typical value of M at press time is 1.5 or 1.6. Once the collateral is committed and the stable tokens created, it is assumed that the agent trades the stable tokens on the open market for their equivalent value in core tokens, resulting in the following situation (in all of the following, the amounts are denominated in terms of their equivalent market value of core tokens):

1. Core token collateral held by blockchain: Q.
2. Stable-token debt: Q/R.
3. Core tokens held freely: Q/R.

We consider the effect of a sudden shock d > 0 on the underlying core token price; this shock is drawn from some distribution D.
Here, we assume for simplicity that the shock occurs, and then the agent's collateralized position is immediately closed and all profits or losses are realized immediately. We refer to a shock as "negative" if d < 1 (thus reducing the value of the collateral); otherwise we refer to it as "positive." According to the BitShares protocol, if the price shock causes the agent's collateral ratio to fall below 1, then all of the agent's collateral is taken to cover the debt. This scenario is known as *global settlement*.

-----

If the shock is negative but less severe, and the collateral ratio remains above 1 but falls below M, then the agent's debt is multiplied by a penalty factor known as the *maximum short-squeeze ratio* S ≥ 1; this is known as a *margin call*. Finally, if the shock is such that the agent's collateral ratio remains above M, no penalty is assessed and in our model the agent simply realizes the resulting loss or gain. Mathematically, this penalty-free scenario results in a shock-adjusted debt of Q/(Rd), so the agent's profit ratio P(R, d; M, S) is

P(R, d; M, S) = (Q/R − Q/(Rd)) / Q (3)
             = (1/R)(1 − 1/d). (4)

For moderate negative shocks, the agent must pay the additional S penalty for the margin call, resulting in a profit ratio of

P(R, d; M, S) = (Q/R − QS/(Rd)) / Q (5)
             = (1/R)(1 − S/d). (6)

Finally, for shocks that result in undercollateralization and global settlement, the agent must surrender the entire collateral amount to cover the debt, resulting in

P(R, d; M, S) = (Q/R − Q) / Q (7)
             = 1/R − 1. (8)

Thus, the full *ex post* profit ratio (that is, the profit given d) is

P(R, d; M, S) = { 1/R − 1          if d ∈ [0, S/R),
              { (1/R)(1 − S/d)    if d ∈ [S/R, M/R),      (9)
              { (1/R)(1 − 1/d)    if d ∈ [M/R, ∞).

When the dependence of P(R, d; M, S) on M and S is clear, we shall sometimes simply write P(R, d).

**Table 1.** Tabular depiction of the collateralization incentive model. In the first row, labeled *Start*, the agent holds Q core blockchain tokens, no debt, and no collateral. In stage 2, the agent has committed Q tokens as collateral and received a loan of Q/R price-stable tokens, which are then sold. In stage 3, the value of the core token has changed by a factor of d; since our accounting is being done in core tokens, the effect of this is that the agent's debt is modified by a factor of 1/d. In stage 4, the debt position is closed, resulting in either a profit or a loss for the agent depending on the M and S parameters.

| Stage | Debt to Blockchain | Locked Collateral | Freely-Held Tokens |
| --- | --- | --- | --- |
| Start | 0 | 0 | Q |
| Position Opened | Q/R | Q | Q/R |
| After Price Shock | Q/(Rd) | Q | Q/R |
| Position Closed | 0 | 0 | P(R, d; M, S); depends on d, see (9) |

-----

*2.2. Decision Model*

Let the price shock d be drawn from some distribution D with probability density function f: [0, ∞) → [0, ∞). Given this distribution, M, and S, the agent's goal is simple: select the collateral ratio R to maximize expected profit. Throughout, we use the notation P(R; M, S) to denote the agent's expected profit given distribution D.
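To make the model concrete, the following is a minimal Python sketch, our own illustration rather than part of the BitShares protocol, of the ex post profit ratio in (9) together with a Monte Carlo estimate of the expected profit P(R; M, S). The test values of M, S, and the lognormal shock parameters are assumptions chosen only for demonstration (the lognormal parameters match those used later in Section 3).

```python
import numpy as np

def profit_ratio(R: float, d: float, M: float, S: float) -> float:
    """Ex post profit ratio P(R, d; M, S) from Eq. (9)."""
    if d < S / R:        # global settlement: all collateral covers the debt, Eq. (8)
        return 1.0 / R - 1.0
    elif d < M / R:      # margin call: debt multiplied by the penalty S, Eq. (6)
        return (1.0 / R) * (1.0 - S / d)
    else:                # no penalty assessed, Eq. (4)
        return (1.0 / R) * (1.0 - 1.0 / d)

def expected_profit(R: float, M: float, S: float, shocks: np.ndarray) -> float:
    """Monte Carlo estimate of P(R; M, S) over sampled price shocks d ~ D."""
    return float(np.mean([profit_ratio(R, d, M, S) for d in shocks]))

# Illustrative parameters (assumed): M = 1.6, S = 1.01,
# lognormal shocks d = exp(s) with s ~ N(0.033, 0.2^2).
rng = np.random.default_rng(0)
shocks = np.exp(rng.normal(0.033, 0.2, size=200_000))
M, S = 1.6, 1.01
for R in (1.6, 1.8, 2.0, 2.5):
    print(f"R = {R:.2f} -> estimated expected profit {expected_profit(R, M, S, shocks):+.4f}")
```

In the model, the agent simply maximizes this expected profit over the admissible collateral ratios.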
That is, the agent's optimal collateral ratio is

R∗(M, S) ≜ arg max_{R ≥ M} E_{d∼D}[P(R, d; M, S)]. (10)

However, the system designer's goal in selecting incentive parameters M and S is somewhat more nuanced. On the one hand, the designer wishes to ensure that the agent selects a low-enough collateral ratio R so that a reasonable number of price-stable tokens are created. On the other hand, the designer wishes to ensure that the agent selects a *high*-enough R so that the price-stable token remains fully collateralized in the event of a significant market downturn (i.e., the agent's collateral is sufficient to repay the debt even when a very low value of d is realized).

In this paper, we focus simply on the effect of M and S on the probability of an undercollateralization event, and leave the study of the above nontrivial tradeoff for future work. That is, we seek to characterize the probability of an undercollateralization event *given that the agent is selecting the profit-maximizing* R∗(M, S). Let F: [0, ∞] → [0, 1] denote the cumulative distribution function of the price-shock distribution D. Then the probability of undercollateralization in the presence of a profit-maximizing agent, denoted p_u, is given by

p_u(M, S) ≜ F(S / R∗(M, S)). (11)

That is, undercollateralization occurs when d is realized *below* the threshold S/R, as in the first condition of (9); Equation (11) is simply the cdf evaluated at this point with R = R∗(M, S). This threshold is triggered since all collateral is exhausted in paying the price-stable token debt *after* the payment of the margin-call penalty S.

**3. Our Contributions**

The central goal of this paper is to characterize how the system designer's choice of M and S affects the emergent probability of undercollateralization p_u. As a first step, in this paper we perform a numerical study of the undercollateralization risk in the presence of a price disturbance that is distributed lognormally. That is, we consider a situation in which the price disturbance d is given by the formula

d = exp(s), (12)

where s is drawn from a normal distribution N(µ, σ²) with mean µ and variance σ². Under this distribution, if µ = 0 we have the convenient fact for all δ > 0 that

f(δ) = f(1/δ). (13)

That is, this distribution models symmetric *multiplicative* price shocks: when µ = 0, the price is equally likely to double as it is to halve. Mathematically, given µ and σ as above, the probability density function of this distribution is given by

f(x) = (1 / (σx√(2π))) exp(−(1/2)((log(x) − µ)/σ)²). (14)

-----

*3.1. The Interplay between MCR and MSSR*

First, we note that regardless of the distribution D from which price shocks are drawn, the system operator's choice of M affects agent profits *only* if S > 1. That is, if the operator selects S = 1, this effectively curtails their ability to have a nuanced effect on agent behavior.

**Proposition 1.** *If S = 1, then for any fixed R, the agent's expected profit P(R; M, S) is a constant function of M.*

**Proof.** Consider the *ex post* profit function given by (9), and let S = 1.
Then the function collapses to

P(R, d; M, 1) = { 1/R − 1          if d ∈ [0, 1/R),
              { (1/R)(1 − 1/d)    if d ∈ [1/R, M/R),
              { (1/R)(1 − 1/d)    if d ∈ [M/R, ∞)

              = { 1/R − 1          if d ∈ [0, 1/R),      (15)
              { (1/R)(1 − 1/d)    if d ∈ [1/R, ∞),

which is clearly a constant function of M. Since the *ex post* profit is constant in M, then so must the expected profit be as well.

Thus, it may be helpful to think of S as a "switch" which activates M. If S is not used (i.e., no penalty is charged for margin calls), then M loses its effectiveness to influence agent behavior. This suggests that the system operator should always maintain S > 1. From here, let us assume that S > 1. Our next result considers the effect of M on agent profits. Intuitively, one would expect that increasing M would strictly decrease the agent's expected profit. Proposition 1 demonstrates that this is not generally true (in particular, it fails when S = 1). However, our next result shows that for a wide range of price shock distributions, the agent's profit (for fixed R) is strictly decreasing in M.

**Proposition 2.** *Let S > 1, let R > M, and let the price shock distribution D be such that its pdf has f(M/R) > 0. Then the agent's expected profit is strictly decreasing in M:*

∂P(R; M, S)/∂M = (f(M/R)/(MR)) × (1 − S) < 0. (16)

**Proof.** Let f(t) be the pdf of distribution D. Then when the agent selects collateral ratio R, the expected profit ratio can be computed from (9) as

P(R; M, S) ≜ E_{d∼D}[P(R, d; M, S)] = 1/R − ∫₀^{S/R} f(t) dt − (S/R) ∫_{S/R}^{M/R} (f(t)/t) dt − (1/R) ∫_{M/R}^{∞} (f(t)/t) dt. (17)

The partial derivative of P(R; M, S) can be computed from (17) as

∂P(R; M, S)/∂M = −S f(M/R)/(MR) + f(M/R)/(MR), (18)

which completes the proof of Proposition 2.

*3.2. Optimal Agent Behavior as a Function of MCR*

The complexities of the system render a full theoretical analysis challenging; accordingly, we perform a simple numerical simulation study on the effect of the MCR parameter M on the risk of undercollateralization events. To demonstrate how this parameter affects p_u, we first plot the agent's expected profit ratio P(R; M, S) as a function of collateral ratio R for a fixed distribution D (with lognormal parameters µ = 0.033 and σ = 0.2, fixed S = 1.01, and varying M). See Figure 1 for a depiction of this.

**Remark 1.** *The simulations depicted in Figure 1 demonstrate that the agent's optimal choice of collateral ratio can be extremely sensitive to the system operator's choice of M. In particular, at a threshold of approximately M ≈ 1.53,* R∗(M, S) *is discontinuous in M, increasing suddenly from about 1.53 to about 1.85.*

**Figure 1.** Plots of the agent's expected profit ratio P(R; M, S) as a function of R, for fixed lognormal distribution parameters µ = 0.033 and σ = 0.2, S = 1.01, and various values of M.
Note that when M is low, e.g., M = 1.4, the agent's optimal decision (that is, the maximizer of the corresponding trace on the chart, marked approximately by the red discs) is R∗(M, S) = M. However, increasing M has the tendency to "pull down" the left-hand end of the trace. When M ≈ 1.5, this gives rise to a local maximum in the profit function away from R = M; around M ≈ 1.53, this local maximum becomes the global maximum and the agent's optimal collateral ratio "snaps" to the right, yielding R∗(M, S) > M.

*3.3. Undercollateralization Risk as a Function of MCR*

To understand how these agent decisions impact p_u(M, S), we then compute the optimal collateral ratio R∗(M, S) as a function of M for several values of S, and report these results in Figure 2. We find several features of Figure 2 worthy of note.

**Remark 2.** *The probability of an undercollateralization event, p_u, can be sharply (discontinuously) dependent on M, but the presence of this discontinuity depends on the value of S. Considering the right-hand plot in Figure 2, note that around M ≈ 1.53, there is a discontinuity in the p_u plot corresponding to S = 1.01. This indicates that p_u is an extremely sensitive function of the incentive parameters M and S in complex ways that are not* a priori *obvious and are deserving of further study.*

**Remark 3.** *The probability of undercollateralization, p_u, is not monotone in S, and the nature of its non-monotonicity is* not consistent over all values of M. *That is, consider M′ = 1.5; here, we have that p_u is locally increasing in S: p_u(M′, 1.01) > p_u(M′, 1.005). Alternatively, consider M′′ = 1.6; here, we have that p_u is locally* decreasing *in S: p_u(M′′, 1.01) < p_u(M′′, 1.005). This indicates that without careful analysis, it may be nearly impossible to determine how to select incentive parameters.*

**Remark 4.** *For low-enough S and very low M, the optimal collateral ratio appears to be R∗(M, S) = M. Consider the left-hand plot of Figure 2: here, for S ∈ {1.005, 1.01}, the agent's optimal value of R is the same. That is, in this regime, S has no effect on collateral ratios. However, in the right-hand plot of Figure 2, in that* same regime, it is evident that the probability of undercollateralization actually *is* a function of S, and that higher S leads to higher p_u. *That is, in this regime, this suggests that it is strictly better to set S = 1.*

**Figure 2.** Plots of system behavior as a function of M, for fixed lognormal distribution parameters µ = 0.033 and σ = 0.2 and various values of S. (**Left**) Agent's optimal collateral ratio R∗(M, S) with respect to M. Note that when S is low, the agent's optimal action is to set R = M, but that larger values of S render other, less-risky collateral ratios optimal. (**Right**) Probability of undercollateralization p_u(M, S) with respect to M. Several features are of note here: first, when S = 1.02, the probability of undercollateralization is extremely low, despite the fact that only a 2% penalty is assessed to the agent in the event of a margin call.
Second, when S ≤ 1.01, the probability of undercollateralization (p_u) is sharply dependent on the value of M, can be as high in these simulations as 3.5%, and for S = 1.01, p_u is discontinuous around M = 1.53. That is, the probability of undercollateralization is *extremely* sensitive to the operator's selection of M.

**4. Discussion and Future Work**

Our analytical results in Propositions 1 and 2 suggest that selecting S > 1 is a crucial element of an effective incentive design; any value of S > 1 ensures that M is an effective tool for influencing the behavior of agents. Taken together, Remarks 1–4 illustrate that even in this carefully simplified model of the BitShares incentive system, emergent behavior among agents can be extremely complex and unintuitive. If behavior is challenging in a simple model, we expect that it is likely to be more challenging as the model's complexity increases to match the real-world functions of the system. In particular, we recommend that careful attention be paid to the following key things:

• The behavior of token holders may be *extremely* sensitive to small changes in MCR. This may be exploited to increase liquidity (i.e., by decreasing MCR slightly to incentivize the creation of additional price-stable tokens), but it may be very difficult to predict its precise effects and can easily increase the overall risk in the system.

• The effects of MSSR on token holder behavior may be unintuitive, and essential aspects of their character may be highly dependent on MCR. In our simulations, we found that for low MCR, it is strictly better to set MSSR very close to 1 rather than at a moderate value such as 1.01, as noted in Remark 4. However, for a somewhat higher MCR, as we note in Remark 3, this situation is reversed and risks decrease with S.

*Future Work*

Clearly, the most important priority is to adapt the present model to a multiagent dynamic context which more accurately captures the complexities of the agents' decisions. Modeling this as a one-shot decision process has benefits in its simplicity, but clearly misses some important aspects of the real-world system. In particular, in the real BitShares system, agents have the ability to update their collateral ratios over time as the core token price evolves, and integrating this into our model will be crucial to generate high-fidelity predictions. Furthermore, there are intrinsic game-theoretic aspects to this system, due to the market dynamics and the fact that in the real system, not all agents are charged the margin-call penalty every time. Lastly, an important aspect that is missed in our analysis is that in the actual BitShares system, there is a much more nuanced relationship between the external market and the actions of the agents committing collateral to create price-stable tokens. That is, the market value of the price-stable tokens can actually fluctuate somewhat, and this fluctuation likely dramatically impacts the incentives faced by core token holders.

**Funding:** This research was funded by BitShares worker proposal 1.14.204.

**References**

1. Nakamoto, S. Bitcoin: A Peer-to-Peer Electronic Cash System. Available online: https://bitcoin.org/bitcoin.pdf (accessed on 30 June 2019).
2. Godsiff, P. Bitcoin: Bubble or Blockchain; Smart Innovation, Systems and Technologies; Springer: Cham, Switzerland, 2015; doi:10.1007/978-3-319-19728-9_16.
3. CoinMarketCap.
Available online: https://coinmarketcap.com/currencies/bitcoin/ (accessed on 30 June 2019).
4. Athey, S.; Parashkevov, I.; Sarukkai, V.; Xia, J. Bitcoin Pricing, Adoption, and Usage: Theory and Evidence; Stanford University Graduate School of Business Research Paper; Stanford University: Stanford, CA, USA, 2016.
5. Lee, J. Nu Whitepaper; 2014. Available online: https://nubits.com/assets/nu-whitepaper-23_sept_2014-en.pdf (accessed on 30 June 2019).
6. Chohan, U.W. Are Stable Coins Stable? Notes on the 21st Century (CBRi); preprint available at SSRN: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3326823 (accessed on 12 February 2019).
7. Libra White Paper. Available online: https://libra.org/en-US/wp-content/uploads/sites/23/2019/06/LibraWhitePaper_en_US.pdf (accessed on 30 June 2019).
8. The MakerDAO White Paper. Available online: https://makerdao.com/whitepaper/DaiDec17WP.pdf (accessed on 30 June 2019).
9. Klages-Mundt, A.; Minca, A. (In)Stability for the Blockchain: Deleveraging Spirals and Stablecoin Attacks. arXiv 2019, arXiv:1906.02152.
10. The BitShares Blockchain Foundation. The BitShares Blockchain. Available online: https://www.bitshares.foundation/papers/BitSharesBlockchain.pdf (accessed on 30 June 2019).

© 2019 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

-----
{ "disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.3390/proceedings2019028002?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.3390/proceedings2019028002, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "CCBY", "status": "HYBRID", "url": "https://www.mdpi.com/2504-3900/28/1/2/pdf?version=1571629186" }
2,019
[]
true
2019-10-21T00:00:00
[ { "paperId": "42e734c0a243f7d2cd837e8e8453251257b241f9", "title": "Are Stable Coins Stable?" }, { "paperId": "4e430b64bac6fdf86a947ee81b9150ed1451704b", "title": "(In)Stability for the Blockchain: Deleveraging Spirals and Stablecoin Attacks" }, { "paperId": "8a7c799b5734dfd961606ace218deab192e06ec5", "title": "Bitcoin Pricing, Adoption, and Usage: Theory and Evidence" }, { "paperId": "2359ad394ab9a17e112c249a78eb5c5d3e55669c", "title": "Bitcoin: Bubble or Blockchain" }, { "paperId": "433561f47f9416a6500c8350414fdd504acd2e5e", "title": "Bitcoin Proof of Stake: A Peer-to-Peer Electronic Cash System" }, { "paperId": "ecdd0f2d494ea181792ed0eb40900a5d2786f9c4", "title": "Bitcoin : A Peer-to-Peer Electronic Cash System" }, { "paperId": null, "title": "CoinMarketCap" } ]
8,121
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Economics", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Economics", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/026f542ead119d0d53a6c7a35496d207b8b0b053
[ "Computer Science", "Economics" ]
0.873319
DeFi Risk Transfer: Towards A Fully Decentralized Insurance Protocol
026f542ead119d0d53a6c7a35496d207b8b0b053
International Conference on Blockchain
[ { "authorId": "2038268968", "name": "Matthias Nadler" }, { "authorId": "2129392146", "name": "Felix Bekemeier" }, { "authorId": "2083839088", "name": "Fabian Schär" } ]
{ "alternate_issns": null, "alternate_names": [ "ICBC", "IEEE Int Conf Blockchain Cryptocurrency", "IEEE International Conference on Blockchain and Cryptocurrency", "Int Conf Blockchain" ], "alternate_urls": null, "id": "f1ab8d75-7f15-4bb4-ad88-e834ec6ed604", "issn": null, "name": "International Conference on Blockchain", "type": "conference", "url": null }
In this paper, we propose a fully decentralized and smart contract-based insurance protocol. We identify various issues in the Decentralized Finance (DeFi) insurance context and propose a solution to overcome these shortcomings. We introduce an economic model that allows for risk transfer without any external dependencies or centralized intermediaries. In particular, our proposal does not need any sort of subjective claim assessment, community voting or external data providers (oracles). Moreover, it solves the problem of over-insurance and proposes various ways to mitigate the capital inefficiencies usually seen with DeFi collateral. The work takes inspiration from peer-to-peer (P2P) insurance and collateralized debt obligations (CDO). We formally describe the protocol, assess its efficiency and key properties and present a reference implementation. Finally, we address limitations, extensions and ideas for further research.
# DeFi Risk Transfer: Towards A Fully Decentralized Insurance Protocol

### Matthias Nadler
_Center for Innovative Finance (CIF)_ _University of Basel_ Basel, Switzerland matthias.nadler@unibas.ch

### Felix Bekemeier
_Center for Innovative Finance (CIF)_ _University of Basel_ Basel, Switzerland felix.bekemeier@unibas.ch

### Fabian Schär
_Center for Innovative Finance (CIF)_ _University of Basel_ Basel, Switzerland f.schaer@unibas.ch

**_Abstract—In this paper, we propose a fully decentralized and smart contract-based insurance protocol. We identify various issues in the Decentralized Finance (DeFi) insurance context and propose a solution to overcome these shortcomings. We introduce an economic model that allows for risk transfer without any external dependencies or centralized intermediaries. In particular, our proposal does not need any sort of subjective claim assessment, community voting or external data providers (oracles). Moreover, it solves the problem of over-insurance and proposes various ways to mitigate the capital inefficiencies usually seen with DeFi collateral. The work takes inspiration from peer-to-peer (P2P) insurance and collateralized debt obligations (CDO). We formally describe the protocol, assess its efficiency and key properties and present a reference implementation. Finally, we address limitations, extensions and ideas for further research._**

**_Index Terms—Blockchain, DeFi, Decentralized Insurance, Risk Transfer, Smart Contracts_**

I. INTRODUCTION

Decentralized Finance (DeFi) refers to public blockchain-based financial infrastructure that uses smart contracts to replicate traditional financial services in a more open, interoperable, and transparent way [1]. These smart contract-based services are usually referred to as protocols. They provide basic building blocks, such as the opportunity to swap assets or allocate liquidity efficiently, and can be reused and combined in any way. While decentralized exchanges and lending markets are arguably among the most prominent protocols and get a lot of attention, there are other crucial building blocks that are required for a well-functioning financial infrastructure. One of these building blocks is the ability to transfer risks.

Consider the following general example: An economic agent has an investment opportunity that may result in a small loss or a large gain. Further assume that both outcomes have the same probability. The expected return would be positive, and a risk-neutral (or risk-seeking) agent would be willing to engage. However, if the same opportunity is instead presented to a risk-averse agent, they may decline and forgo a positive expected return due to the cost of uncertainty. If a financial market allows risk to be transferred, there is a simple solution: the risk-averse agent can approach an entity with a higher risk tolerance and offer them a premium in return for their willingness to bear the risk. They essentially share the positive expected return, and the risk would be borne by the entity with the higher risk tolerance.

Similarly, a blockchain-based financial infrastructure becomes more efficient if smart contract risks are transferable. Risk-averse investors could share some of their expected return as compensation for an insurance policy that covers the smart contract risks of the respective liquidity pool. DeFi users who are willing to bear additional risk could generate a higher yield.
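To make the introductory example concrete, the following minimal Python sketch uses illustrative numbers of our own choosing (they are not taken from the paper) to show how a premium can make both a risk-averse agent and a risk-tolerant insurer better off:

```python
import math

# Illustrative numbers (our own assumptions): an investment returns
# -10 (loss) or +30 (gain), each with probability 0.5.
p_loss, loss, gain = 0.5, -10.0, 30.0
premium = 6.0   # assumed market price for full cover of the downside
a = 0.05        # assumed coefficient of absolute risk aversion

def certainty_equivalent(outcomes, probs, a):
    """Certainty equivalent under CARA utility u(x) = -exp(-a*x)."""
    eu = sum(q * -math.exp(-a * x) for x, q in zip(outcomes, probs))
    return -math.log(-eu) / a

# Without risk transfer, the risk-averse agent bears both outcomes.
ce_bare = certainty_equivalent([loss, gain], [p_loss, 1 - p_loss], a)

# With risk transfer, the insurer reimburses the loss; the agent pays the premium.
insured = [loss + abs(loss) - premium, gain - premium]      # [-6.0, 24.0]
ce_insured = certainty_equivalent(insured, [p_loss, 1 - p_loss], a)

insurer_expected_profit = premium - p_loss * abs(loss)      # 6 - 5 = 1

print(f"CE without cover:  {ce_bare:.2f}")       # ~1.32
print(f"CE with cover:     {ce_insured:.2f}")    # ~3.84
print(f"insurer E[profit]: {insurer_expected_profit:.2f}")  # 1.00
```

The agent's certainty equivalent rises and the insurer earns a positive expected premium, which is exactly the mutually beneficial risk transfer the paper motivates.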
The existence of a market for risk transfer would be beneficial for everyone, as it allows all DeFi users to structure their portfolios in accordance with their individual risk preferences.

There already exists a relatively large number of smart contract-based insurance protocols, including but not limited to Nexus Mutual [2], Nsure [3], cozy.finance [4], Unslashed Finance [5] and Risk Harbor [6]. While some of these protocols offer innovative solutions and have provided valuable contributions to the DeFi protocol space, they are arguably not fully decentralized and face various challenges.

_First_, insurance requires that the insurer can credibly demonstrate its ability to cover potential losses at all times. Centralized insurance is based on a combination of reputation and regulation. Moreover, centralized insurance companies rely on active asset and risk management to strike a balance between liquidity and capital efficiency. DeFi, on the other hand, is built on a pseudonymous system with little to no legal recourse. It relies on transparency and (over-)collateralization. Consequently, many implementations face trade-offs between capital efficiency, security and special privileges that allow for manual interventions.

_Second_, DeFi insurance protocols usually struggle with claim assessment. Generally speaking, there are two options: (a) the insurance policy is parametric and relies on oracles, or (b) the outcome is decided through a vote by so-called claim assessors. Both approaches are quite subjective and can easily lead to false outcomes. The former introduces dependencies on external data providers and does not reflect true damages due to its parametric nature. The latter relies on a voting process among pseudonymous actors that can assume various roles within (and outside) the system. Moreover, truly decentralized voting will be subject to either sybil attacks [7] or whale dominance with potentially problematic incentives. There are good arguments why neither the oracle-based nor the claim assessor-based approach should be considered fully decentralized.

_Third_, most protocols cannot prevent over-insurance. DeFi users can buy cover for protocols to which they have no exposure. This can create problematic incentives and – depending on the jurisdiction – result in conflict with the law.

In this paper, we propose a novel DeFi insurance protocol that solves these issues. To the best of our knowledge, it is the first proposal for a fully decentralized insurance protocol with no external dependencies. As part of this research project, we have also built a basic reference implementation of the protocol. The implementation can be found in the appendix.

After this short introduction, we discuss related works from the DeFi, insurance and finance literature. In Section III we turn to the technical part, describe the protocol and perform a gas efficiency analysis. In Section IV we study external incentives for liquidity providers and derive the implicit cost of liquidity provision for various pools involving our protocol's tranche tokens. In Section V we discuss our results, potential extensions and limitations. Finally, we conclude in Section VI.

II. RELATED WORK

The motivation for a DeFi insurance protocol is closely linked to discussions on smart contract and DeFi risks, protocol failures and shock propagation. These issues have received an increasing amount of research attention and are an important part of the academic discourse on DeFi [8]–[12].
Our protocol can mitigate some of the consequences by allocating risk in a more efficient way. Moreover, market prices for risk premiums can serve as an indication of the perceived risk, similar to prediction markets. With regard to yield-generating lending protocols, different authors discuss the risks of illiquidity, dependencies and misaligned incentives [9], [13]–[15]. Moreover, there are various papers discussing oracle reliability and potential manipulation [16], [17]. Our proposal does not have any dependencies, allows the insurant to hedge against oracle exposure, and even works in situations where the insured protocols become illiquid.

Existing DeFi insurance protocols are mostly based on principles of mutual insurance, where users participate in the commercial success of the protocol. In theory, mutuals can have certain advantages for large risk pools [18], in the presence of transactional costs and governance issues [19], and in addressing problems of adverse selection [20]. However, due to the centralized economic value capture in most mutuals, problems potentially remain with respect to default risks [21]. In a DeFi context, mutual-based insurance protocols usually rely on centralized or vote-based claim assessment and may depend on know your customer (KYC) principles or introduce other forms of dependencies. Our protocol is fundamentally different from a mutual insurance: there is no centralized economic value capture, and the protocol does not accumulate reserves.

The general concept of our protocol is inspired by peer-to-peer (P2P) insurance and financial instruments with tranches, such as collateralized debt obligations (CDO). In a P2P insurance model, individuals pool their insurance premiums and use these funds to cover individual damages. P2P risk transfer is still at a very early stage of research, with seminal works including [22]–[27]. Several authors have started to formally explore the organizational structure, optimality and pitfalls of P2P insurance [28]–[30]. Our protocol is based on similar principles. In particular, we make use of different risk preferences and levels that allow individuals to pool their risks without the explicit need for an intermediary. However, there is an important difference between P2P insurance and our approach: P2P insurance usually covers individual risks. As such, P2P insurance is built on the general assumption that damages within the collective are uncorrelated and that premiums of the unaffected insurants can be used to compensate the ones that have suffered losses. Our protocol insures large-scale risks that will affect all insurance holders. Consequently, we need explicit roles in accordance with the individuals' risk preferences. This is achieved by creating tranches with different seniorities and security guarantees.

As such, our protocol incorporates some aspects of CDOs. CDOs have been discussed extensively in the subject-related literature [31]–[34]. They split cash flows among tranches with different seniority: the most senior tranches are honored first, and the most junior tranches bear the losses. In addition to traditional use cases, such as CDOs for bank refinancing, insurance risk also appears to be a suitable use case for CDOs [35]. Likewise, CDOs are used widely in various applications outside traditional financial markets; for example, they have been discussed as a means to support microcredits [36].
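The seniority mechanism just described can be made concrete with a minimal sketch. The following Python function is our own illustration (it is not part of the paper's reference implementation) of a two-tranche waterfall: the senior claim is paid first and the junior tranche absorbs losses:

```python
def waterfall(recovered: float, senior_claim: float, junior_claim: float):
    """Distribute recovered funds by seniority: senior first, junior absorbs losses."""
    senior_paid = min(recovered, senior_claim)
    junior_paid = min(recovered - senior_paid, junior_claim)
    return senior_paid, junior_paid

# Claims of 50 each; only 80 of the original 100 is recovered.
print(waterfall(80.0, 50.0, 50.0))   # (50.0, 30.0): junior bears the 20 loss
print(waterfall(40.0, 50.0, 50.0))   # (40.0, 0.0): senior takes a partial loss
```

The protocol's A- and B-tranches follow the same principle, with the added DeFi-specific phases described next.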
This combination of P2P insurance, seniority-based promises and DeFi specifics builds the foundation of our protocol and allows us to propose a fully decentralized DeFi insurance.

III. PROTOCOL

In this section, we present a decentralized risk-hedging protocol based on tranched insurance. First, we provide a quick overview and describe the core functionality of the protocol. Second, we take a more technical perspective and describe individual function calls and state transitions. Third, we discuss potential technical extensions and trade-offs. Fourth, we provide a short efficiency analysis and discussion of the protocol's computational costs (gas fees).

_A. Protocol Overview_

The general idea of our insurance protocol is to pool assets from two third-party protocols, and allow users to split the pool redemption rights into two tranches: A and B. If any of the third-party protocols suffer losses during the insurance period, those losses will be primarily borne by the B-tranche holders. A-tranche holders will only be negatively affected if 50% or more of the pooled funds are irrecoverable, or if
Similarly, potential losses can be computed and primarily allocated to B-tranche holders. If the liquidation of Cx or Cy fails, the protocol enters fallback _mode. This can happen if a third-party protocol suffers from a_ liquidity crunch or if an external contract changes the expected behavior. In fallback mode, users redeem their tranche tokens directly for their preferred mix of Cx and Cy tokens. The higher tranche seniority of A-tranches is ensured through a timelock-based redemption sequence. In a first step, A-tranche holders get to choose if they want to claim their share in Cx, _Cy or a mix of the two. After the timelock is over, B-tranche_ holders can claim what is left. _B. Technical Implementation_ A reference implementation of the insurance contract is available in the appendix and demonstrates how our protocol can be used to provide insurance for two yield-generating protocols that wrap the Maker DAO stablecoin Dai [37], denoted as C. The two yield-generating protocols are Aave version 2 [38] with aDai and Compound Finance [39] with cDai, denoted as Cx and Cy respectively. The reference implementation includes the full Solidity code for the Ethereum Virtual Machine-based (EVM) contract and can be used as a starting point for developers who want to create their own insurance contracts using a similar approach. In this subsection we provide an overview of the reference implementation’s technical specifications, including the functions, variables and states. We present this information in a chronological order, following the timeline presented in Figure 2. The states are referred to as: ReadyToAccept, ReadyToInvest, _MainCoverActive, ReadyToDivest, Liquid, FallbackOnlyA and_ _FallbackAll. Note that strictly speaking a smart contract cannot_ automatically transition from one state to another based on the passage of time; this is a fundamental limitation of smart contract technology. Any state change on the contract has to be initiated by a function call. Our implementation works around this by defining states as a set of successfully callable functions and reverting function calls, if they are outside the allowed time windows. Hence, the set may change based on time conditions. Before the first state, the initial parameters must be defined and contract deployed. The parameters include the addresses of the tokens involved in the contract, as well as the absolute values for the timestamps when state transitions occur. These forced state transitions are represented in Figures 1 and 2 as _S, T1, T2 and T3, where S < T1 < T2 < T3. Furthermore, the_ constructor deploys two ERC-20 token contracts for A- and _B-tranches, with the insurance contract as the sole, immutable_ owner. This means that only the insurance contract can mint and burn the tranche tokens. After deployment, the contract is in the ReadyToAccept state and the public function splitRisk() is available for anyone to call. The input parameter for the function is an amount of C tokens. The splitRisk() function then transfers this amount of C tokens from the caller to the insurance contract and issues a number of A- and B-tranche tokens equal to half that amount to the caller. For example, if the input is 100, the function will transfer 100 C tokens from the caller to the insurance contract and issue 50 tranche A tokens and 50 tranche B tokens to the caller. It is important to note that the act of calling the splitRisk() function does not provide the user with any form of insurance cover. 
In order to obtain insurance cover – or to assume more risk – the user must sell or trade a portion of their tranche A or tranche B tokens.

When time S is reached, the contract transitions to the ReadyToInvest state and users can no longer mint new tranche tokens. The invest() function is available during this state. It is tailored to the specific needs of the protocols that are part of the insurance contract, with the goal of splitting the deposited C tokens equally among the protocols. In the reference implementation, the function will send half of the available C to Aave and the other half to Compound in exchange for their respective yield-bearing tokens, Cx and Cy. After a successful invest() call, the insurance contract holds Cx and Cy of equal value and no longer holds C. Calling the invest function incurs a transaction fee, paid by the caller, while the benefits of the call are shared among all participants. To avoid the problem of a first mover disadvantage, to ensure that the call is executed in a timely fashion and to split the costs equally among all participants, the invest() function should compensate the caller for executing the transaction.[1] The unlikely case in which no successful invest() call is made before the forced state transition at T1 will be covered later in this subsection.

Fig. 1. State Transition Diagram: Represents state transitions and their respective function sets.

When a successful invest() call is made, the contract transitions to the MainCoverActive state and sets the variable isInvested = true. The contract is now exposed to the risks of the third-party protocols, and the main period of insurance cover for the A-tranches begins. In this state, no functions can be called on the contract. However, the A- and B-tranches remain transferable.

At time T1, the contract will transition from the MainCoverActive state to the ReadyToDivest state, where the divest() function can be invoked. It has a similar structure to the invest() function, but instead of depositing the underlying assets into the third-party protocols, divest() tries to withdraw the underlying assets, including any accumulated yield, from the protocols. A divest() call is considered successful if no errors occur while withdrawing the assets and if both Cx and Cy have been fully converted back to C. A successful divest() call immediately transitions the protocol to the Liquid state by setting inLiquidMode = true. In this state, the allocation of the redeemed assets to the A- and B-tranches is deterministic and can be calculated as part of the divest() call. Let us define C_S as the total initially invested amount, C_T1 as the total redeemed amount and i as the interest. We can then differentiate between three cases and determine the payouts for each case, as shown in Table I.

TABLE I: THE THREE POTENTIAL OUTCOMES FOR LIQUID MODE

| Case | Payoff A | Payoff B | Description |
| --- | --- | --- | --- |
| C_T1 ≥ C_S | C_T1 / 2 | C_T1 / 2 | Proceeds are split equally among all tranche token holders. Both tranches are treated equally. |
| C_S > C_T1 > C_S / 2 | C_S / 2 + i | C_T1 − (C_S / 2 + i) | A-tranche holders get fully compensated and receive the yield payment. B-tranche holders receive a proportion of their initial stake. |
| C_T1 ≤ C_S / 2 | C_T1 | 0 | Proceeds are used to partially compensate A-tranche holders. This can only occur if both yield-generating protocols suffer losses. |

The payout per A- and B-tranche token is stored on the contract and can be accessed using the variables cPayoutA and cPayoutB, respectively. During the liquid state, users can call the claim() function, which accepts an amount of A- and B-tranches as input. If the caller is in control of at least the specified amount of tranches, the contract will burn these tranches and transfer the payout to the caller.

[1] We did not include a compensation mechanism in the reference implementation. When implemented, it should cover at least the base fee of the transaction plus a fixed amount for the tip.
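The three liquid-mode cases of Table I reduce to a few lines of logic. The following Python sketch is our own illustration of the aggregate payoffs (i denotes the accumulated interest, as in Table I):

```python
def liquid_payouts(c_s: float, c_t1: float, i: float):
    """Aggregate payoffs for the A and B sides, following Table I.

    c_s: total initially invested amount; c_t1: total redeemed amount;
    i:   accumulated interest.
    """
    if c_t1 >= c_s:                 # no losses: split proceeds equally
        return c_t1 / 2, c_t1 / 2
    if c_t1 > c_s / 2:              # partial loss: A made whole plus yield
        return c_s / 2 + i, c_t1 - (c_s / 2 + i)
    return c_t1, 0.0                # severe loss: A only partially compensated

print(liquid_payouts(100.0, 104.0, 2.0))   # (52.0, 52.0)
print(liquid_payouts(100.0, 80.0, 2.0))    # (52.0, 28.0)
print(liquid_payouts(100.0, 40.0, 2.0))    # (40.0, 0.0)
```

Dividing these aggregate amounts by the number of outstanding A- and B-tranches yields the per-token values stored as cPayoutA and cPayoutB.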
For convenience, a claimAll() function is available and will internally call the claim() function with the caller's current balance of tranches.

If no successful divest() call is made during the ReadyToDivest state, a forced transition occurs at T2 and the protocol enters fallback mode, which starts in the state FallbackOnlyA. In fallback mode, the protocol has no knowledge about the value of its interest-bearing tokens relative to the initial investment. Therefore, instead of assigning a payout to the tranches, the tranche holders can choose which of the two interest-bearing tokens they would like to redeem. Based on the total amount of tranche tokens and the remaining interest-bearing tokens, the contract determines a fixed redeem-ratio for each of the two interest-bearing tokens. These ratios are stored on the contract as cxPayout and cyPayout and are defined as the total amount of the respective asset, divided by half of the total amount of tranches. For example, assume 50 A- and 50 B-tokens have been minted and the contract holds 20 Cx and 1500 Cy. A tranche can now be redeemed for 0.4 Cx or 30 Cy. Once all tranches are redeemed, there are no interest-bearing tokens left on the contract. A- and B-tranches can be redeemed for the same amount. However, during the FallbackOnlyA state, as the name suggests, only A-tranches can be redeemed for interest-bearing tokens, with the function claimA(). As an input for this function, the caller specifies how many of their A-tranches they want to redeem for Cx and how many for Cy. The contract then burns the tranches and transfers the assets according to the redeem-ratios. At time T3, if the contract is in fallback mode, the final transition happens to the FallbackAll state. This state is identical to FallbackOnlyA, with the only difference that B-tranches can now also be redeemed, via the claimB() function.

Fig. 2. Sequential actions in liquid mode (top, divest() successful) and fallback mode (bottom, divest() unsuccessful).

Finally, to ensure we never end up in a state where the assets cannot be recovered, we need to define a state transition from ReadyToInvest to Liquid if the invest() function was not successfully called. This transition happens after T1 if isInvested == false and allows the users to reclaim their initially invested funds.

_C. Extensions and Trade-Offs_

To obtain insurance cover, a protocol user must sell their B-tranches. A possible extension to the insurance contract would be to use intra-transaction composability and connect it to a decentralized exchange. This would allow users to sell their B-tranches in the same transaction as the splitRisk() function. However, note that any additions to the insurance contract will introduce additional risk. Keeping the contract as simple as possible and reducing dependencies to a minimum will help to manage this risk. We argue that most extensions which introduce new dependencies should be implemented at the user interface level in a separate contract.

Consider the following example: let us assume that we want to create a function to insure an amount of C tokens. We create a new contract with a function that uses a flash loan [15] for twice the amount and calls splitRisk(). In the same function, the B-tranches are sold to a decentralized exchange and the A-tranches transferred to the caller.
Finally, the flash loan is repaid, using the proceeds from the sale and the funds from the initial caller. The additional contract can be developed and deployed independently of the insurance contract. This separation offers more flexibility and introduces no additional risks for other users. The trade-off is that the transaction fees might be slightly higher, as external calls are more costly than internal ones.

_D. Transaction Costs_

Depositing funds into a protocol incurs a transaction fee, which is imposed by the blockchain network and expressed in units of computation – commonly called gas. This transaction fee can vary slightly based on circumstantial parameters, but it largely depends on the computational complexity of the transaction. Depositing funds into our reference implementation via the splitRisk() function costs around 83,000 gas. Depositing to Aave or Compound directly incurs a fee of 249,000 or 156,000 gas, respectively. While calling the invest() function is expensive (488,000 gas), this cost can be split among all users in the insurance contract. Similar to yield aggregation protocols [40], the insurance contract becomes more gas efficient the more users participate, and even for just a few users, we expect the minting of insured tokens to be cheaper than minting uninsured tokens.
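Using the gas figures above, a short back-of-the-envelope calculation (our own illustration) shows how quickly the shared invest() cost is amortized across users:

```python
# Gas figures from the paper; the per-user cost of an insured deposit is the
# splitRisk() call plus an equal share of the one-time invest() call.
SPLIT_RISK = 83_000
INVEST = 488_000
AAVE_DIRECT = 249_000       # uninsured deposit to Aave
COMPOUND_DIRECT = 156_000   # uninsured deposit to Compound

for n_users in (1, 2, 5, 10):
    per_user = SPLIT_RISK + INVEST / n_users
    print(f"{n_users:>2} users: {per_user:>9,.0f} gas per insured deposit")

# Already at 2 users (83,000 + 244,000 = 327,000 gas), an insured deposit is
# cheaper than exposure to both protocols via two separate uninsured deposits
# (249,000 + 156,000 = 405,000 gas).
```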
Divergence loss refers to the problem that liquidity providers lose value, if the liquidity redemption price ratio differs from the liquidity provision price ratio. Intuitively, this effect can be thought of as negative arbitrage. Divergence loss is zero if the two pool tokens maintain their initial price ratio and increases when the relative price is shifting in one direction. To assess the incentives for A- and B-tranche liquidity providers we have to understand divergence loss in the context of our tranche tokens. Let us assume a standard a _b = k setup,_ _·_ where a and b represent the initial amount of A and B tokens in the pool and k is a constant product, that determines all feasible combinations of a and b. Let us rearrange the equation and take the partial derivative w.r.t. a. The absolute value of the resulting slope can be reinterpreted as the relative price. _pAB =_ _[k]_ (1) _a[2]_ Trading activity may shift the token allocation to a[∗] and _b[∗], with a[∗]_ _b[∗]_ = k. Using (1) we obtain the new price ratio _·_ _p[∗]AB_ [. This allows us to express the post-trade quantities as a] function of the new price ratio p[∗]AB [.] _p[∗]_ _AB_ _AB_ _pAB_ _pAB_ _[−]_ _[p][∗]_ _[−]_ [1] _p[∗]AB_ _pAB_ [+ 1] � 2 _·_ �������� _D =_ �������� _._ (8) We can now use this equation to analyze two distinct outcomes and observe the effects on the pool and the liquidity providers. First, assume the cover is not needed. The contract enters Liquid state, and A- and B-tranches can be redeemed for equal amounts of C. We refer to this case as the standard _case. Second, assume one of the underlying yield-generating_ protocols suffers losses. These losses will be reflected in the price of tranche B and therefore have an effect on the liquidity pools that contain B. We refer to this as the benefit case. _pA_ _pC_ _pB_ _S_ invest() Interest _t_ _T1_ divest() � _k_ � , _b[∗]_ = _k · p[∗]AB_ (2) _p[∗]AB_ _a[∗]_ = We can now compute portfolio values Vp of a simple buy and hold strategy (3) with the outcome of liquidity provision (4). _VP (a, b) = p[∗]AB_ (3) _[·][ a][ +][ b]_ _VP (a[∗], b[∗]) = p[∗]AB_ (4) _[·][ a][∗]_ [+][ b][∗] Using (2) to substitute quantities in (3) and (4) we get � _k_ � + _pAB_ _VP (a, b) = p[∗]AB_ _[·]_ � _VP (a[∗], b[∗]) = 2 ·_ _k · p[∗]AB_ _[.]_ (6) _k · pAB_ _,_ (5) Divergence loss can be expressed as follows _VP (a[∗], b[∗]) −_ _VP (a, b)_ _D :=_ ���� _VP (a, b)_ (7) ���� Fig. 3. Relative price development of A and B shares between S and T1, compared to the price of the underlying redeemable asset C. _1) Standard Case: In the standard case A-tranches lose_ their cover value over time. Conversely, B-tranches become less risky and will eventually be redeemable for an equal amount of C as A-tranches. Hence, we know that p[∗]AB [= 1][.] Making use of substitution in (8), the expected divergence loss can be expressed as a function of the initial price ratio pAB . The greater the initial risk premium, the higher the divergence loss for liquidity provision in A/B-pools. Alternatively, a liquidity provider could decide to contribute to an A/C- or _B/C-pool. In T1, we know that pA = pB = pC_ (1 + i), _·_ where i is the accumulated interest. Hence, we know that _p[∗]AC_ [=][ p]BC[∗] [= 1 +][ i][. If we plug this value into (8), the] expected divergence loss, for any expected interest rate, can be expressed as a function of the initial price ratio pAB . Figure 3 shows the price relations of the three tokens. For A/Bpool liquidity provision considerations, interest rates can be neglected. 
However, for A/C- and B/C-pools, interest plays an important role. Note that B-tranche prices already have a positive time trend. As such, interest will further increase the price spread to C. Conversely, A-tranche prices have a negative time trend and interest will therefore decrease the spread. Consequently, any (positive) interest will create a situation where the divergence loss of B/C-pools is greater than the divergence loss of A/C-pools. This is shown in Figure 4. While the extent of the divergence loss depends on various factors, it is important to understand that the effect is relatively small. Moreover, there are ways to mitigate a trend-based From (7) we plug in (5) and (6). After rearranging we get ----- Fig. 4. Divergence Loss (in line with equation (8)) for a/c-and b/c-Pools with an expected interest of 5%. The two points marked in our graph represent an example for an initial price spread between A and B. The initial valuation of each a token starts at 1.02 c, and the valuation of each b token at 0.98 c. divergence loss. Alternative pool models, such as the constant _power sum invariant [42] can be used to design decentralized_ exchanges that are better suited for tokens with an inherent price trend. _2) Benefit Case: If any of the yield-generating protocols_ suffer a loss, A-tranche holders will be compensated at the expense of B-tranche holders. In extreme scenarios, where one of the yield-generating protocols loses its entire collateral, B-tranches become worthless. From (8) we know that limp[∗]AB _[→∞]_ _[D][ =][ −][1][. Hence,][ A/B][- and][ B/C][-pool liquidity]_ providers are at risk of losing their entire stake. While this constitutes an additional risk for providers of B-tranche liquidity, where they have to expose the B counterpart to an additional risk and effectively stake twice the amount, they receive trading fees in return. As such, the incentives depend on the specifics and the risks of the insured protocols as well as the relative trading volume. In extreme cases, where A/B and _B/C liquidity provision would be prohibitively risky, liquidity_ providers could instead contribute to A/C-pools. Liquid A/Cpools would be sufficient, in the sense that anyone who is interested in coverage could obtain it directly from the pool. This scenario will be further discussed in Section V. V. DISCUSSION In the introduction we argued that current smart contractbased insurance protocols face various challenges and limitations. We will start our discussion by revisiting these points and explain how our model addresses them. First, the vast majority of existing insurance protocols allows for over-insurance, where users can buy cover that exceeds their exposure. This can create problematic incentives and – depending on the jurisdiction – result in conflict with the law. Our model does not allow for over-insurance. The risk and capital are linked through our tranches and cannot be separated without the use of another protocol. Second, there are various challenges relating to claim assessment. All of the existing insurance protocols we have examined have some form of dependency on external factors during the claim assessment process. These dependencies can be introduced through parametric triggers, oracles, community voting or decisions by a predetermined expert council. All of these approaches can lead to undesirable outcomes. The incentives may not be aligned and create situations that can result in deviations from the true outcome. 
In our model, we do not rely on claim assessors, voting in a decentralized autonomous organization (DAO), expert councils, oracles or any trigger events. Instead, we use a deterministic distribution schedule of a common underlying (liquid mode) and a sequential choice model in accordance with the seniority of the tranches (fallback mode). Consequently, payouts are not conditional on any subjective decisions by an involved party or a third party.

Third, we argued that many DeFi insurance protocols suffer from capital inefficiencies, and there certainly is a trade-off between capital efficiency, security and special privileges. We found that most existing protocols tend to be conservative or cautious in their approach. The collateral is usually held in low-risk, non-interest-bearing assets. As a result, these protocols have at most 50% capital efficiency before leverage. Some protocols are capable of increasing their efficiency by covering multiple – ideally uncorrelated – risks with the same collateral; however, they still require the collateral to be in a low-risk, non-interest-bearing asset. In our model, it is possible to hold the collateral, i.e., the B-tranche, in an interest-bearing asset without any significant drawbacks on the security side, if the risks of the insured protocols are indeed uncorrelated. Moreover, our approach is quite flexible in the sense that further leverage, based on a larger number of underlying protocols, is feasible and could be implemented as an extension.

In addition to these three initial points, there is another advantage related to the risk premium that we came across in the course of our research. As shown in Section IV, both our cover and collateral (A- and B-tranches) are freely tradable. The risk premium is simply determined by the relative price between the two tranches. This allows us to create a market-based price-finding mechanism for a fair risk premium. The price can emerge naturally and does not depend on preset parameters or statically implemented risk spreads that may paralyze risk transfer activity.

In Section IV we show that there are greater incentives to provide liquidity for the A-tranches than for the B-tranches. Even in an extreme case, where the B liquidity would be very low to non-existent, one could still obtain B-tranches. To do so, a user calls the splitRisk() function to mint A- and B-tranches in equal amounts and then sells the A-tranches, for which the market can be assumed to be sufficiently liquid. Anyone interested in the insurance cover could simply buy A-tranches on the open market and would not have to interact
While it is theoretically possible to wrap tokens to give them an arbitrary underlying, this will have one of two consequences: either a dependency on external price sources has to be introduced, or the fallback case in our model would introduce an insurance against relative price movements of the assets and the underlying. The latter may be desirable in some cases, but it is not the default behaviour we want to achieve. Second, our protocol has a fixed time span. Consequently, insuring assets over a longer period of time requires regular actions from all involved parties. A new contract has to be deployed for each period and the assets need to be moved over. This problem is exacerbated by shorter insurance periods. Longer insurance periods on the other hand increase the time that claimants have to wait for their compensation in case of an incident and also increase the risk of both protocols failing during the same period. We believe this limitation could be mitigated with an extension to the protocol, which uses short insurance periods and rolls over any non-redeemed tranches to a new insurance period. However, an extension of this nature could significantly increase the complexity of the protocol and would require further research to determine the practicality and potential consequences. Third, in our model we specify minting and redeeming time windows for the tranches. Consequently, the total supply of _A- and B-tranches cannot change during the main insurance_ period. This can be an issue, especially if there is insufficient liquidity for the B-tranches, as discussed in Section IV or if the demand for cover changes significantly. Further research into this topic is necessary, but we believe that under certain circumstances, the minting window could be extended to allow the creation of new shares during the active insurance phase. One requirement for this would be a way to track the accrued interest on the insurance protocol and to increase the costs of the newly created tranches accordingly. Similar considerations can be made for the redeeming window. Early redemption of equal parts of A- and B-tranches should be possible without large changes to the model. Even early redemption of just _A-tranches is theoretically possible._ Finally, our model and the reference implementation use two protocols. This is not a strict limitation. In fact, it can be shown that the model works as described as long as the number of tranches is equal to the number of insured protocols. For example, an extension to three protocols is possible with the introduction of a third tranche, without any fundamental changes to the protocol. A more challenging extension is the addition of further protocols without any changes to the number of tranches. This extension would severely increase the complexity of fallback mode. Recall that A-tranche holders get to choose which of the remaining interest-bearing tokens they want to redeem. In a world where the number of tranches is equal to the number of protocols, this is unproblematic, since there will always be sufficient collateral of any type for A-tranche holders to choose from. In a model where the number of protocols is greater than the number of tranches, A-tranche holders might compete with each other and race to redeem the more valuable collateral. As such, models where the number of protocols is greater than the number of tranches can create a first mover advantage, where A-tranche holders are treated inconsistently. 
VI. CONCLUSION

In this paper, we propose a fully decentralized DeFi insurance model that does not rely on any external information sources, such as price feeds (oracles) or claim assessors. The general idea of our insurance protocol is to pool assets from two third-party protocols and allow users to split the pool redemption rights into two freely tradable tranche tokens: A and B. Any losses are first absorbed by the B-tranche holders. A-tranche holders will only be negatively affected if 50% or more of the pooled funds are irrecoverable, or if both protocols become temporarily illiquid and face (partial) losses. The market for A- and B-tranches determines the fair risk premium for the insurance.

Our approach has several advantages over other DeFi insurance solutions. In addition to being fully decentralized and trustless, it also prevents over-insurance, does not rely on any parametric triggers, and is highly capital-efficient. We provide a complete reference implementation of the insurance protocol in Solidity, with coverage for two popular lending market protocols. We believe that fully decentralized and trustless infrastructure is crucial and may create more transparent, open and resilient financial markets. Our contribution should be seen as a composable building block and a foundation for further research and development efforts.

ACKNOWLEDGMENT

The authors would like to thank Tobias Bitterli, Mitchell Goldberg, Emma Littlejohn, Katrin Schuler and Dario Thürkauf.

APPENDIX

The full Solidity source code for our reference implementation can be found in our GitHub repository: https://github.com/cifunibas/decentralized-insurance

REFERENCES

[1] F. Schär, "Decentralized finance: On blockchain- and smart contract-based financial markets," Federal Reserve Bank of St. Louis Review, Second Quarter 2021, pp. 153–174, 2021.
[2] H. Karp and R. Melbardis, "Nexus Mutual whitepaper: A peer-to-peer discretionary mutual on the Ethereum blockchain," 2017. Available: https://nexusmutual.io/assets/docs/nmx_white_paperv2_3.pdf
[3] Nsure.Network, "Nsure.Network – open insurance platform for open finance," 2020. Available: https://nsure.network/Nsure_WP_0.7.pdf
[4] Cozy.Finance, "Cozy Finance developer docs," 2020. Available: https://docs.cozy.finance/
[5] Unslashed.Finance, "Insurance for decentralized finance," 2021. Available: https://documentation.unslashed.finance/
[6] M. Resnick, R. Ben-Har, D. Patel, and A. Bipin, "Risk Harbor v2," Jan. 2022. Available: https://github.com/Risk-Harbor/RiskHarbor-Whitepaper/blob/main/Risk%20Harbor%20Core%20V2%20Whitepaper.pdf
[7] J. R. Douceur, "The sybil attack," in International Workshop on Peer-to-Peer Systems. Springer, 2002, pp. 251–260.
[8] N. Atzei, M. Bartoletti, and T. Cimoli, "A survey of attacks on Ethereum smart contracts (SoK)," in Principles of Security and Trust, ser. Lecture Notes in Computer Science, vol. 10204. Springer, Berlin, Heidelberg, 2017, pp. 164–186.
[9] L. Gudgeon, D. Perez, D. Harz, B. Livshits, and A. Gervais, "The decentralized financial crisis," in 2020 Crypto Valley Conference on Blockchain Technology (CVCBT), 2020. Available: https://arxiv.org/pdf/2002.08099
[10] D. Macrinici, C. Cartofeanu, and S. Gao, "Smart contract applications within blockchain technology: A systematic mapping study," Telematics and Informatics, vol. 35, no. 8, pp. 2337–2354, 2018.
[11] Z. Zheng, S. Xie, H.-N. Dai, W. Chen, X. Chen, J. Weng, and M. Imran, "An overview on smart contracts: Challenges, advances and platforms," Future Generation Computer Systems, vol. 105, pp. 475–491, 2020.
[12] L. Zhou, X. Xiong, J. Ernstberger, S. Chaliasos, Z. Wang, Y. Wang, K. Qin, R. Wattenhofer, D. Song, and A. Gervais, "SoK: Decentralized Finance (DeFi) attacks," 2022. Available: https://arxiv.org/pdf/2208.13035
[13] M. Bartoletti, J. H.-y. Chiang, and A. L. Lafuente, "SoK: Lending pools in decentralized finance," in Financial Cryptography and Data Security. FC 2021 International Workshops, vol. 12676. Springer, Berlin, Heidelberg, 2021, pp. 553–578.
[14] A. Lehar and C. A. Parlour, "Systemic fragility in decentralized markets," 2022. Available: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4164833
[15] K. Qin, L. Zhou, B. Livshits, and A. Gervais, "Attacking the DeFi ecosystem with flash loans for fun and profit," in Financial Cryptography and Data Security. FC 2021, ser. Lecture Notes in Computer Science, vol. 12674. Springer, Berlin, Heidelberg, 2021, pp. 3–32.
[16] G. Angeris and T. Chitra, "Improved price oracles: Constant function market makers," in Proceedings of the 2nd ACM Conference on Advances in Financial Technologies (AFT '20), New York, NY, USA, 2020, pp. 80–91. Available: https://doi.org/10.1145/3419614.3423251
[17] B. Liu, P. Szalachowski, and J. Zhou, "A first look into DeFi oracles," in 2021 IEEE International Conference on Decentralized Applications and Infrastructures (DAPPS), 2021, pp. 39–48.
[18] P. Albrecht and M. Huggenberger, "The fundamental theorem of mutual insurance," Insurance: Mathematics and Economics, vol. 75, pp. 180–188, 2017.
[19] C. Laux and A. Muermann, "Financing risk transfer under governance problems: Mutual versus stock insurers," Journal of Financial Intermediation, vol. 19, no. 3, pp. 333–354, 2010.
[20] J. A. Ligon and P. D. Thistle, "The formation of mutual insurers in markets with adverse selection," Journal of Business, vol. 78, no. 2, pp. 529–556, 2005.
[21] C. S. Tapiero, Y. Kahane, and L. Jacque, "Insurance premiums and default risk in mutual insurance," Scandinavian Actuarial Journal, vol. 1986, no. 2, pp. 82–97, 1986.
[22] M. Denuit, J. Dhaene, and C. Y. Robert, "Risk-sharing rules and their properties, with applications to peer-to-peer insurance," Journal of Risk and Insurance, vol. 89, no. 3, pp. 615–667, 2022.
[23] M. Denuit and C. Y. Robert, "Large-loss behavior of conditional mean risk sharing," ASTIN Bulletin, vol. 50, no. 3, pp. 1093–1122, 2020.
[24] R. Feng, M. Liu, and N. Zhang, "A unified theory of decentralized insurance," 2022. Available: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4013729
[25] R. Feng, C. Liu, and S. Taylor, "Peer-to-peer risk sharing with an application to flood risk pooling," Annals of Operations Research, 2022. Available: https://doi.org/10.1007/s10479-022-04841-x
[26] M. Denuit, "Size-biased transform and conditional mean risk sharing, with application to P2P insurance and tontines," ASTIN Bulletin, vol. 49, no. 3, pp. 591–617, 2019.
[27] M. Denuit, "Investing in your own and peers' risks: the simple analytics of P2P insurance," European Actuarial Journal, vol. 10, no. 2, pp. 335–359, 2020.
[28] A. Charpentier, L. Kouakou, M. Loewe, P. Ratz, and F. Vermet, "Collaborative insurance sustainability and network structure," 2021. Available: https://arxiv.org/pdf/2107.02764
[29] G. P. Clemente and P. Marano, "The broker model for peer-to-peer insurance: an analysis of its value," The Geneva Papers on Risk and Insurance – Issues and Practice, vol. 45, no. 3, pp. 457–481, 2020.
[30] S. Levantesi and G. Piscopo, "Mutual peer-to-peer insurance: The allocation of risk," Journal of Co-operative Organization and Management, vol. 10, no. 1, p. 100154, 2022.
[31] D. Duffie and N. Gârleanu, "Risk and valuation of collateralized debt obligations," Financial Analysts Journal, vol. 57, no. 1, pp. 41–59, 2001.
[32] J. Armstrong and J. Kiff, "Understanding the benefits and risks of synthetic collateralized debt obligations," Bank of Canada Financial System Review, pp. 53–61, 2005.
[33] D. J. Lucas, L. S. Goodman, and F. J. Fabozzi, Collateralized Debt Obligations: Structures and Analysis, 2nd ed., ser. Wiley Finance. Hoboken, NJ: John Wiley & Sons, 2006.
[34] C. Bluhm and C. Wagner, "Valuation and risk management of collateralized debt obligations and related securities," Annual Review of Financial Economics, vol. 3, no. 1, pp. 193–222, 2011.
[35] J. P. Forrester, "Insurance risk collateralized debt obligations," Journal of Structured Finance, vol. 14, no. 1, p. 28, 2008.
[36] H. N. Byström, "The microfinance collateralized debt obligation: A modern Robin Hood?" World Development, vol. 36, no. 11, pp. 2109–2126, 2008.
[37] MakerDAO, "The Dai stablecoin system whitepaper," Dec. 2017. Available: https://makerdao.com/whitepaper/DaiDec17WP.pdf
[38] Aave, "Aave protocol whitepaper v2.0," Dec. 2020. Available: https://github.com/aave/protocol-v2/blob/master/aave-v2-whitepaper.pdf
[39] R. Leshner and G. Hayes, "Compound: The money market protocol," Feb. 2019. Available: https://compound.finance/documents/Compound.Whitepaper.pdf
[40] S. Cousaert, J. Xu, and T. Matsui, "SoK: Yield aggregators in DeFi," in 2022 IEEE International Conference on Blockchain and Cryptocurrency (ICBC), 2022, pp. 1–14.
[41] V. Mohan, "Automated market makers and decentralized exchanges: a DeFi primer," Financial Innovation, vol. 8, no. 1, pp. 1–48, 2022.
[42] A. Niemerg, D. Robinson, and L. Livnev, "YieldSpace: An automated liquidity provider for fixed yield tokens," 2020. Available: https://yield.is/YieldSpace.pdf

-----
{ "disclaimer": "Notice: Paper or abstract available at https://arxiv.org/abs/2212.10308, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "GREEN", "url": "https://arxiv.org/pdf/2212.10308" }
2022
[ "JournalArticle", "Conference" ]
true
2022-12-20T00:00:00
[ { "paperId": "d52befb1085c0539dfe833523dc88eb71e9f2613", "title": "SoK: Decentralized Finance (DeFi) Attacks" }, { "paperId": "1ab554c549e8a81635c95a849651d319ba3df696", "title": "Risk‐sharing rules and their properties, with applications to peer‐to‐peer insurance" }, { "paperId": "1a9997d381a041fecddd92bcd7a3caed6130d2b3", "title": "Collaborative Insurance Sustainability and Network Structure" }, { "paperId": "15a789d417b0a1c9e8c5ed42eded026a85d51134", "title": "SoK: Yield Aggregators in DeFi" }, { "paperId": "d102780639e4d91e6549b23f621da00e961447bf", "title": "SoK: Lending Pools in Decentralized Finance" }, { "paperId": "ba90fb4781a57a14e45559b6607c106e2aea90a6", "title": "Peer-to-peer risk sharing with an application to flood risk pooling" }, { "paperId": "1669088e855d088bd5021a767492aef35e1ad40d", "title": "Automated market makers and decentralized exchanges: a DeFi primer" }, { "paperId": "b14b58dbce132a6d17c4dc346b68cf0673c00101", "title": "LARGE-LOSS BEHAVIOR OF CONDITIONAL MEAN RISK SHARING" }, { "paperId": "ff7482eaaa4cef5d6076a98c674019324654a194", "title": "The broker model for peer-to-peer insurance: an analysis of its value" }, { "paperId": "2ce49584a1fd0f34f37cf617d2027f1ca593ae1e", "title": "Investing in your own and peers’ risks: the simple analytics of P2P insurance" }, { "paperId": "cd19976ac5207bf02a73b332fc4aa3946e6812b5", "title": "A First Look into DeFi Oracles" }, { "paperId": "2c40956941634b92c16ccf3ba305abeeab8f8e55", "title": "Improved Price Oracles: Constant Function Market Makers" }, { "paperId": "47072c24806046a9c4827467d7047af8c6a07b62", "title": "Attacking the DeFi Ecosystem with Flash Loans for Fun and Profit" }, { "paperId": "082f7f6e1fda6358d47df5d26fe862ef6021a803", "title": "Decentralized Finance: On Blockchain- and Smart Contract-Based Financial Markets" }, { "paperId": "c876d76a22b23ae28fe6d9c5589f857768d543fc", "title": "The Decentralized Financial Crisis" }, { "paperId": "e0bc89f5776804bc2be27f1945f900d1ac8f1e7f", "title": "An Overview on Smart Contracts: Challenges, Advances and Platforms" }, { "paperId": "cf38c389e415412bd8aa1af0f3e8abbc7f97b47c", "title": "SIZE-BIASED TRANSFORM AND CONDITIONAL MEAN RISK SHARING, WITH APPLICATION TO P2P INSURANCE AND TONTINES" }, { "paperId": "f5a21fb87d88b4510dd4d42fbdc52a674a592ea6", "title": "Smart contract applications within blockchain technology: A systematic mapping study" }, { "paperId": "4f23887df244122da1d087c65121a85b57a54b13", "title": "The fundamental theorem of mutual insurance" }, { "paperId": "aec843c0f38aff6c7901391a75ec10114a3d60f8", "title": "A Survey of Attacks on Ethereum Smart Contracts (SoK)" }, { "paperId": "790959fa05601e03d5f22646c40feb30a7917414", "title": "Valuation and Risk Management of Collateralized Debt Obligations and Related Securities" }, { "paperId": "a905a4fdb5fa3da754eda0b7c2e7ff3da264ca98", "title": "Financing Risk Transfer under Governance Problems: Mutual versus Stock Insurers" }, { "paperId": "f13cd1e7c2958841de80e60abe7c23589a0b0292", "title": "The Microfinance Collateralized Debt Obligation: A Modern Robin Hood?" 
}, { "paperId": "28c587bbe661e4e8ea054d106d59efd7fa6d8f2c", "title": "Insurance Risk Collateralized Debt Obligations" }, { "paperId": "b717d4e45d99795f1feb912dcd6d78c15d46c539", "title": "The Formation of Mutual Insurers in Markets with Adverse Selection" }, { "paperId": "e2f09c5978b7413b9a92de6e2050b365814da235", "title": "Collateralized Debt Obligations: Structures and Analysis" }, { "paperId": "35516916cd8840566acc05d0226f711bee1b563b", "title": "The Sybil Attack" }, { "paperId": "d8a92d6adeff34a1acca31558390434cc7e2306b", "title": "Risk and Valuation of Collateralized Debt Obligations" }, { "paperId": "7f2746713421f4178d66a7c65277632d0b494a88", "title": "Insurance premiums and default risk in mutual insurance" }, { "paperId": "469f794c6094795371e4cd318f8a6d22766fc09f", "title": "Systemic Fragility in Decentralized Markets" }, { "paperId": "172bc103350c37288d55362f10b23cdf845c3389", "title": "A Unified Theory of Decentralized Insurance" }, { "paperId": "430c6c797ec9c05bd5f7edfe4158ab1590b52b17", "title": "Mutual peer-to-peer insurance: The allocation of risk" }, { "paperId": null, "title": "Risk harbor v2" }, { "paperId": null, "title": "Unslashed.Finance, “Insurance for decentralized finance,”" }, { "paperId": "7298a315da0f62f95012f0f4ed748c2a455ec5d7", "title": "YieldSpace: An Automated Liquidity Provider for Fixed Yield Tokens" }, { "paperId": null, "title": "Cozy finance developer docs" }, { "paperId": null, "title": "Aave protocol whitepaper v2.0" }, { "paperId": null, "title": "Nsure.network - open insurance platform for open finance" }, { "paperId": null, "title": "Nexus mutual whitepaper: A peer-to-peer discretionary mutual on the ethereum blockchain" }, { "paperId": "1fc2453a60f652893fb22517d5a44541746df67b", "title": "Understanding the Benefits and Risks of Synthetic Collateralized Debt Obligations" }, { "paperId": null, "title": "Risk harbor v 2 , ” 01 2022 . [ Online ] The sybil attack , ” in International workshop on peer - to - peer systems" }, { "paperId": null, "title": "Compound : The money market protocol , ” 02 2019 . [ Online ]" }, { "paperId": null, "title": "The dai stablecoin system whitepaper , ” 12 2017 . [ Online ] Aave , “ Aave protocol whitepaper v 2 . 0 , ” 12 2020 . [ Online ]" } ]
14,996
en
[ { "category": "Medicine", "source": "external" }, { "category": "Computer Science", "source": "external" }, { "category": "Medicine", "source": "s2-fos-model" }, { "category": "Computer Science", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/0272fb91bc0ca70d246268095be9d6138293babc
[ "Medicine", "Computer Science" ]
0.88378
MedCo: Enabling Secure and Privacy-Preserving Exploration of Distributed Clinical and Genomic Data
0272fb91bc0ca70d246268095be9d6138293babc
IEEE/ACM Transactions on Computational Biology & Bioinformatics
[ { "authorId": "2201389", "name": "J. Raisaro" }, { "authorId": "1398987134", "name": "J. Troncoso-Pastoriza" }, { "authorId": "122581714", "name": "Mickaël Misbach" }, { "authorId": "40381688", "name": "João Sá Sousa" }, { "authorId": "5332114", "name": "S. Pradervand" }, { "authorId": "6961679", "name": "E. Missiaglia" }, { "authorId": "1702906", "name": "O. Michielin" }, { "authorId": "144067653", "name": "B. Ford" }, { "authorId": "1757221", "name": "J. Hubaux" } ]
{ "alternate_issns": null, "alternate_names": [ "IEEE/ACM Trans Comput Biology Bioinform", "IEEE/ACM Trans Comput Biology Bioinform", "IEEE/ACM Transactions on Computational Biology and Bioinformatics" ], "alternate_urls": [ "http://www.computer.org/portal/web/tcbb/home" ], "id": "dc4a9aad-72db-4530-a183-eaa4bf1d4490", "issn": "1545-5963", "name": "IEEE/ACM Transactions on Computational Biology & Bioinformatics", "type": "journal", "url": "https://ieeexplore.ieee.org/servlet/opac?punumber=8857" }
The increasing number of health-data breaches is creating a complicated environment for medical-data sharing and, consequently, for medical progress. Therefore, the development of new solutions that can reassure clinical sites by enabling privacy-preserving sharing of sensitive medical data in compliance with stringent regulations (e.g., HIPAA, GDPR) is now more urgent than ever. In this work, we introduce MedCo, the first operational system that enables a group of clinical sites to federate and collectively protect their data in order to share them with external investigators without worrying about security and privacy concerns. MedCo uses (a) collective homomorphic encryption to provide trust decentralization and end-to-end confidentiality protection, and (b) obfuscation techniques to achieve formal notions of privacy, such as differential privacy. A critical feature of MedCo is that it is fully integrated within the i2b2 (Informatics for Integrating Biology and the Bedside) framework, currently used in more than 300 hospitals worldwide. Therefore, it is easily adoptable by clinical sites. We demonstrate MedCo's practicality by testing it on data from The Cancer Genome Atlas in a simulated network of three institutions. Its performance is comparable to the ones of SHRINE (networked i2b2), which, in contrast, does not provide any data protection guarantee.
## MedCo: Enabling Secure and Privacy-Preserving Exploration of Distributed Clinical and Genomic Data

#### Jean Louis Raisaro, Juan Ramón Troncoso-Pastoriza, Mickaël Misbach, João Sá Sousa, Sylvain Pradervand, Edoardo Missiaglia, Olivier Michielin, Bryan Ford and Jean-Pierre Hubaux

- J.L. Raisaro, J.R. Troncoso-Pastoriza, M. Misbach, J. Sá Sousa, B. Ford and J.-P. Hubaux are with the School of Computer and Communication Sciences, EPFL, Lausanne, Switzerland. E-mail: see https://people.epfl.ch/jean-pierre.hubaux
- S. Pradervand, E. Missiaglia and O. Michielin are with the Lausanne University Hospital, CHUV, Lausanne, Switzerland.
- S. Pradervand is with the Genomic Technologies Facility, University of Lausanne, UNIL, Lausanne, Switzerland, and with Vital-IT, Swiss Institute of Bioinformatics, Lausanne, Switzerland.

**Abstract—** The increasing number of health-data breaches is creating a complicated environment for medical-data sharing and, consequently, for medical progress. Therefore, the development of new solutions that can reassure clinical sites by enabling privacy-preserving sharing of sensitive medical data in compliance with stringent regulations (e.g., HIPAA, GDPR) is now more urgent than ever. In this work, we introduce MedCo, the first operational system that enables a group of clinical sites to federate and collectively protect their data in order to share them with external investigators without worrying about security and privacy concerns. MedCo uses (a) collective homomorphic encryption to provide trust decentralization and end-to-end confidentiality protection, and (b) obfuscation techniques to achieve formal notions of privacy, such as differential privacy. A critical feature of MedCo is that it is fully integrated within the i2b2 (Informatics for Integrating Biology and the Bedside) framework, currently used in more than 300 hospitals worldwide. Therefore, it is easily adoptable by clinical sites. We demonstrate MedCo's practicality by testing it on data from The Cancer Genome Atlas in a simulated network of three institutions. Its performance is comparable to the ones of SHRINE (networked i2b2), which, in contrast, does not provide any data protection guarantee.

**Index Terms—** Secure data-sharing, homomorphic encryption, differential privacy, i2b2, distributed data, decentralized trust, genomic privacy.

#### 1 INTRODUCTION

With the increasing digitalization of clinical and genomic information, data sharing is becoming the keystone for realizing the promise of personalized medicine. Several initiatives, such as the Patient-Centered Clinical Research Network (PCORNet) [1] in the USA, eTRIKS/TranSMART [2] in the EU, the Swiss Personalized Health Network (SPHN) [3] in Switzerland, and the Global Alliance for Genomics and Health (GA4GH) [4], are laying down the foundations for new biomedical research infrastructures aimed at interconnecting (so far) siloed repositories of clinical and genomic data. In this global ecosystem, the ability to provide strong privacy and security guarantees in order to comply with increasingly strict regulations (e.g., HIPAA [5] in the USA or the new GDPR [6] in the EU) is crucial, yet extremely challenging, to achieve.

Currently, there exist two main approaches for sharing medical data. The first is the centralized approach (see Figure 1(A)), typical of initiatives such as All of Us [7] and Genomics England [8]. With this approach, data from multiple institutions are brought together in a single and centralized repository that can be accessed by researchers willing to run analysis on a unified dataset.
The second is the decentralized approach (see Figure 1(B)), where the different institutions keep the data at their premises and form an interoperable peer-to-peer network accessible by researchers. PCORNet [1] and the Beacon Project of the GA4GH [9] are examples of this second approach.

Unfortunately, both approaches to sharing medical data have revealed intrinsic limitations that demonstrate why neither of the two has already been fully adopted by the healthcare sector. On the one hand, the centralized approach provides undeniable advantages in terms of availability and flexibility, although it introduces a single point of failure in the system by accumulating all the trust on a single entity (i.e., the data repository). Indeed, the security and confidentiality of all the data rely on the ability of the central repository to thwart both external (hackers) and internal (insiders) attacks. Furthermore, as the number of health-data breaches constantly increases [10], there is significant public pressure on clinical sites to ensure that the privacy and security of patients' data can be properly protected, notably when stored or processed by third parties. As a result, clinical sites are worried about adopting the centralized approach and outsourcing their data to a single central repository (e.g., the cloud), especially when the data to be shared are highly sensitive or identifying (e.g., genomic data). On the other hand, the fully decentralized approach solves the single-point-of-failure issue: clinical sites can individually enforce local control on their own data by monitoring and managing the different accesses. However, this decentralization imposes substantial costs on the clinical sites, as they have to maintain an interoperable network, often with very limited resources (both human and technical). For this reason, the fully decentralized approach is also likely to be unsustainable in the long run, especially for large-scale projects where multiple clinical sites are involved.

Fig. 1: Comparison of approaches for sharing medical data. (A) Centralized approach affected by the single-point-of-failure problem. (B) Decentralized approach affected by high maintenance costs (both technical and human). (C) Hybrid and secure approach enabled by MedCo, where clinical sites can securely outsource their data to the storage and processing unit (SPU) of their choice.

In this paper, to address the challenge of achieving privacy-preserving, secure and scalable data sharing, we introduce MedCo. MedCo is the first operational system that enables hundreds of clinical sites to share their clinical and genomic data through a hybrid or "somewhat" decentralized approach that overcomes the limitations of the approaches described above (see Figure 1).
Instead of concentrating the trust on a single central repository as in the centralized approach, MedCo distributes the trust among a set of different "storage and processing" units to which clinical sites can securely outsource the storage of their data. Together, the storage and processing units form a secure, federated and interoperable network that investigators can query for research purposes as if it were a single unified database. MedCo enables each clinical site to choose its preferred storage and processing unit in order to offload the maintenance and availability costs that affect the fully decentralized approach. Such a storage and processing unit can be hosted either by the clinical site itself, by a governmental institution, or by a private/public cloud provider with whom the clinical site establishes a data-use agreement. For example, a clinical site with enough resources can have its own storage and processing unit hosted at its premises, whereas a clinical site with limited resources could use a cloud provider of its choice. Potentially, each country could have a national storage and processing unit, e.g., administered by the government or a non-profit organization, to which all clinical sites within the same country can outsource their data. The different national storage and processing units could then federate to form an international, secure and distributed clinical research network.

A critical advantage of MedCo, with respect to state-of-the-art systems for sharing medical data, is its ability to provide strong security guarantees to clinical sites willing to safely outsource the storage of their data to potentially untrusted storage and processing units. Indeed, MedCo enables each site to encrypt its data with a shared key that is collectively generated by all the storage and processing units in the federation. As the encryption scheme used by MedCo is additively homomorphic, investigators can directly query and process the encrypted data stored at different storage and processing units without the need for decrypting them. This ensures end-to-end protection of the data in the Anytrust adversary model. Only authorized investigators can decrypt the result of a query/analysis, and none of the storage and processing units alone, even if compromised, can decrypt the data stored at its premises. In order to succeed and get access to the unencrypted data, an adversary would need to simultaneously compromise all the storage and processing units in the federation. Additionally, MedCo can also be configured to minimize the risk of re-identification stemming from the behavior of malicious or curious investigators that try to abuse the querying system; this is achieved by providing obfuscated results that satisfy formal and well-established notions of privacy, e.g., differential privacy.

In order to ease its adoption in operational research environments, we developed MedCo on top of existing and well-established open-source technologies for clinical data exploration, namely i2b2 [11] and SHRINE [12]. Currently, i2b2 is used at more than 300 clinical sites worldwide. We demonstrate the practicality of MedCo by testing it in a simulated federation of three clinical sites that outsource their oncology data (both clinical and genomic) to three different storage and processing units. We compare MedCo with a standard deployment based on i2b2 and SHRINE (which does not provide any data protection guarantee), and we show that MedCo's performance overhead is practical.
In light of its low overhead, we believe that MedCo can dramatically accelerate and automate IRB review processes for sharing sensitive (and identifying) medical data with external researchers. Review processes can take several weeks, if not months, to permit researchers to access the data, and these requests are often denied because the necessary privacy and security guarantees cannot be provided. As such, MedCo paves the way to new and unexplored use-cases where, for example, (i) researchers will be able to securely query massive amounts of distributed clinical and genetic data to obtain descriptive statistics indispensable for generating new hypotheses in clinical research studies, or (ii) clinicians will be able to find patients with similar (possibly identifying) characteristics to those of the patient under examination in order to take more informed decisions in terms of diagnosis and treatment.

In summary, in this paper we make the following contributions:

- We introduce MedCo, the first operational system enabling the sharing of sensitive clinical and genomic information in a privacy-preserving, secure and scalable way.
Compared to our work, PRINCESS can be more versatile in terms of allowed computations, but it presents a single point of failure (the central server), and it centralizes all trust in the enclave and in the attestation protocol provided by Intel. Furthermore, the memory restrictions of the enclave limit the scalability of the scheme, requiring compression and batching techniques to enable processing of large genomic data, for which MedCo scales much better. The other recent approach, SMCQL [16], is based on secure two-party computation; it introduces a framework for private data network queries on a federated database of mutually distrustful parties. SMCQL features a secure query executor that implements different types of queries (e.g., merge, join, distinct) on the distributed database by relying on garbled circuits and Oblivious RAM (ORAM) techniques. Whereas this work features truly decentralized trust, it does not scale well to scenarios with more than two sites that are typical in medical contexts with a high number of collaborating hospitals. #### 3 PRELIMINARIES In this section, we briefly introduce the main cryptographic concepts used throughout the paper. **3.1** **Deterministic Encryption** Deterministic encryption (DTE) [17] is a special type of encryption that preserves the equality property of the plaintexts that, as opposed to probabilistic encryption, makes ciphertexts indistinguishable and, a priori, unusable. Yet, DTE also leaks this property; for a given plaintext and key, DTE always produces the same ciphertext. More formally, for A, B ⊆ Z with |A| ≤|B|, a function f : A → B is equality-preserving if for all i, j A, f (i) = f (j) iff i = j. We ∈ say that an encryption scheme with plaintext and ciphertext spaces D and R, respectively, is deterministic if EDTE(K, ·) is an equality-preserving function from to for all K D R ∈K (where K is the key space). DTE-based schemes have several advantages and are mainly used in the context of encrypted database systems (e.g., CryptDB [18]) as they enable relational databases to perform equality searches on encrypted data in the same way as they would operate on the plaintext data. As a counterpart, they provide less security guarantees than probabilistic encryption schemes, as they are vulnerable to inference attacks due to the amount of information they leak. Hence, their application has to be carefully assessed. **3.2** **Homomorphic Encryption** Homomorphic encryption (HE) is a special type of encryption that supports computation on encrypted data. Homomorphic encryption is probabilistic and provides semantic security, meaning that no adversary without the secret key can compute any function of the plaintext from the ciphertext. In 2009, Gentry [19] introduced for the first time a special type of HE that enables arbitrary computations on ciphertexts, called fully homomorphic encryption (FHE). Despite its complete functionality, FHE is currently unpractical, as it introduces huge computational and storage overheads that make it unusable for real-world applications. For this reason, many variations of FHE have been proposed in the past few years, with the goal of improving efficiency by sacrificing some flexibility. 
Such cryptosystems are called practical homomorphic cryptosystems, and according to their functionality, they can be classified as additively homomorphic if they satisfy only the addition of ciphertexts, multiplicatively homomorphic if they satisfy only multiplication, or somewhat homomorphic if they support (a limited number of) additions and multiplications. In this paper, we use the additively homomorphic cryptosystem ElGamal on Elliptic Curves, due to its low ciphertext expansion and fast homomorphic operations ----- 3.2.1 ElGamal On Elliptic Curves The ElGamal cryptosystem on elliptic curves (ECElGamal) is an asymmetric, probabilistic and additivelyhomomorphic encryption scheme that achieves semantic security, i.e., ciphertext indistinguishability. It enables additions and multiplications by constants in the ciphertext domain. As every asymmetric cryptosystem, EC-ElGamal features three algorithms: - Key generation: Let E denote an elliptic curve over the prime field GF(p) and G its base point. Then, the secret key can be defined as an integer k ∈ GF(p), and the public key can be derived as K = kG. - Encryption: Let m be an integer and M = mG its mapping to the corresponding point on the curve E. Then, the encryption of M with the public key K is denoted as EK(M ) = (C1, C2) = (rG, M +rK), where r is a random nonce. - Decryption: Given the ciphertext EK(M ) = (C1, C2) and the secret key k, the decryption algorithm computes the original plaintext point as D(EK(M )) = −kC1 + C2 = M . The original plaintext m is obtained by inverting the mapping from the elliptic curve point M . Due to its additive homomorphism, EC-ElGamal enables combining the encryptions of any two messages in order to obtain an encrypted result that, when decrypted, equals the sum of these two messages. More formally, let M1 and M2 be any two messages, and α and β be two scalars; then, we have that αEK(M1) + βEK(M2) = EK(αM1 + βM2). #### 4 MEDCO ECOSYSTEM In this section, we introduce the ecosystem in which MedCo operates. We begin by describing the system and threat models. We then define the goals of MedCo with respect to privacy/security and functionality. ������������������� ��������� �������������������� ��������� �� ���� ���� ���� �� �� �� ��������������� **����������������** ��� ����������������������������� �� Fig. 2: MedCo’s system and threat models. **4.1** **System Model** We consider the system model depicted in Figure 2, where several clinical sites (Si) want to collaborate in order to share clinical and genomic data with investigators but do not want to rely on any central third party or authority for stor ing or managing their data. Moreover, because of the high costs (both technical and human) for maintaining a fully interoperable decentralized network and the increasing size of the data, clinical sites want to securely outsource the storage of their data to a preferred storage and processing unit (SPUj). Each site can have its own SPU, or multiple sites can share the same SPU. All SPUs are organized together in a peer-to-peer network and form a collective authority. SPUs are responsible for (i) securely storing the data of the clinical sites and (ii) securely processing a request of an authorized investigator that wants to explore clinical sites’ data for generating and validating new research hypotheses or for identifying cohorts of interest, by finding the patients that match specific inclusion/exclusion clinical and genetic criteria across the whole network. 
**4.2** **Threat Model** In this system model, we consider the following threats: - Storage and processing units: We assume storage and processing units to be honest-but-curious (HBC) parties. Indeed, SPUs can be compromised by internal or external adversaries that do not tamper with the data-sharing protocol but can try to infer sensitive information about the patients from the data stored at their premises and from the data being processed during the protocol itself. As a result, SPUs cannot be trusted by clinical sites and they do not trust each other, either. - Investigators: We assume investigators to be potentially malicious-but-covert (MBC) adversaries. Indeed, an investigator can try to legitimately use the system in order to infer sensitive information about the patients (without being discovered) by performing consecutive queries and exploiting the information leaked by the end-results. For example, a malicious investigator with some background information about a given individual can infer the presence of such individual into a sensitive cohort (e.g., patients who are HIV-positive) or even reconstruct a subset of her medical record. - Clinical sites: We assume clinical sites to be trusted parties. Finally, we assume that investigators cannot collude with SPUs, and that at least one SPU does not collude with the others. **4.3** **MedCo’s Goals** To meet end-users expectations and be compliant with regulations, MedCo has the following goals with respect to functionality and privacy/security features. 4.3.1 Functionality Goals The purpose of MedCo is to enable investigators to securely explore the clinical and genomic data stored at all SPUs by the various clinical sites in the network. Therefore, MedCo must provide the same functionalities as those provided by state-of-the-art distributed cohort explorers such as SHRINE [12]: - (F1) Cohort Exploration: An authorized investigator should be able to obtain the number of patients per clinical ----- site who satisfy a set of inclusion/exclusion clinical and genetic criteria, optionally grouped by age, gender or ethnicity. More formally, MedCo must support SQL queries such as SELECT COUNT(patients) FROM distributed_dataset WHERE criteria_i AND/OR criteria_j AND/OR ... GROUP BY criteria_k; - (F2) Cohort Selection: An authorized investigator should be able to obtain the pseudonyms of the patients who satisfy a set of inclusion/exclusion clinical and genetic criteria at each clinical site. More formally, MedCo must support SQL queries such as SELECT patients FROM distributed_dataset WHERE criteria_i AND/OR criteria_j AND/OR ...; 4.3.2 Security and Privacy Goals MedCo must always provide the following privacy/security features: - (SP1) Trust Decentralization: There should be no single point of failure in the system. - (SP2) End-to-end Data Protection: The confidentiality of the data stored at the SPUs must be protected at rest, in transit and during computation. The data are encrypted by the clinical site and the result of the query can be decrypted only by the investigator issuing the query. Depending on the access privileges of the investigator querying the system, MedCo should be able to also provide the following optional features (either one or both of them): - (SP3) Unlinkability: The investigator must not be able to trace a query response back to its original clinical site. 
- (SP4) Result Obfuscation: The query result is obfuscated in order to achieve formal privacy guarantees (e.g., differential privacy) and prevent re-identification. #### 5 MEDCO CORE ARCHITECTURE & PROTOCOLS In this section, we provide a detailed description of MedCo. We begin with a brief overview of the system architecture and core querying protocol. Then, we describe in detail the different steps of the system initialization and the data ingestion phases. Finally, we describe the steps of the secure querying protocol that enables an investigator to efficiently query the distributed encrypted data stored at the different storage and processing units. **5.1** **General Overview** The main purpose of MedCo, whose architecture is depicted in Figure 3, is to reassure clinical sites willing to share their clinical and genomic data with investigators, by enabling clinical sites to securely outsource the storage and processing of their data to a set of potentially untrusted storage and processing units. In order to achieve the privacy and security goals mentioned in Section 4 3 MedCo enables SPUs ������������ �������������� ���� ������ ��������� ������ ��� �������� **����** ������� ����� **�** ������� **�������** �������������������� �� ���� **�** **�** **�** �� �� �� � ���� **����������** ���� �� **����** �� ������� ������� ������� **����������������** �� �� ���������������� ���� �������������������������������������������������������������������������� Fig. 3: MedCo core architecture and secure query protocol comprising of: ETL process (steps A, B, C); Query generation (step 1); Query re-encryption (step 2); Local query processing (step 3); Local result obfuscation (step 4); Distributed results shuffling (step 5); Distributed results re-encryption (steps 6): Results decryption (step 7). to collectively generate an encryption key for an additivelyhomomorphic encryption system[1], used by clinical sites to encrypt their data before leaving the local trusted zone of the site. Through a set of secure distributed protocols, MedCo enables the SPUs (i) to switch the encryption of the data from probabilistic encryption to deterministic encryption in order to securely process equality-matching queries, and (ii) to re-encrypt the query result from an encryption with the collective public key to an encryption under the investigator’s public key, so that (only) the investigator can eventually decrypt the result. And, depending on the access privileges of the investigator issuing the query, MedCo can securely shuffle and/or obfuscate the query results in order to achieve unlinkability and/or differential privacy, respectively (see Section 4.3.2). **5.2** **System Initialization** During the initialization of MedCo, each storage and processing unit (SPUi) generates a pair of EC-ElGamal cryptographic keys (ki,Ki), where Ki = Gki, along with a secret si. Then, all SPUs combine their EC-ElGamal public keys in order to generate a single collective public key K = [�]i [K][i] that will be used by the different clinical sites to encrypt the data to be outsourced. **5.3** **Data Extraction Transformation and Loading** During the data-ingestion phase, i.e., extraction transformation and loading (ETL) phase, each clinical site extracts patient-level data from its private EHR system or clinical research data warehouse, and transforms the data in order to fit the “star-schema” data model [20] used by 1. 
For performance reasons, in this work we use EC-ElGamal, but any other additively homomorphic scheme can be used as well � ----- MedCo. The star schema data model is based on the Entity Attribute-Value (EAV) concept also used by widespread clinical research systems such as i2b2 [11], where clinical and genetic observations (or “facts”) about patients (e.g., diagnosis, medications, procedures, laboratory values and genetic variants) are stored in a narrow table called “fact” table. Observations are encoded by ontology concepts from an extensible set of medical terminologies, e.g., the International Classification of Disease (ICD) or the US National Drug Code (NDC). In this data model, four other “dimension” tables further describe the patients’ data and metadata. For example, the “patient dimension” table contains pseudonymized demographic information of the patients, and the “visit dimension” table stores information about the visit, such as its date and time and the type of provider. In such a data model, the information that clinical sites want to protect from potential honest-but-curious adversaries at the storage and processing units is represented by the mapping between the patients in the database and the set of their clinical and genomic observations stored in the “fact” table that are considered to be sensitive or identifying. In order to protect such mapping, each site separately performs the following three steps: **A. Generation of Dummy Patients: Each site generates a** set of dummy patients with plausible clinical observations specifically chosen so that the distribution of observations across patients in the “fact” table is as close as possible to the uniform distribution. We explain the rationale behind this step in detail in Section 6. To distinguish the real patients from the dummies, each site also generates a binary flag to be appended to the demographic information in the “patient dimension” table. Such flag is set to 1 for real patients and to 0 for dummy patients. **B. Data Encryption: In order to break the link between the** patients and their sensitive observations in the “fact” table, each site encrypts with the collective public key K the set of ontology concepts that encode these observations along with the patients’ binary flags. As EC-ElGamal is a probabilistic encryption scheme, each clinical site obtains a set of probabilistic ciphertexts that are totally indistinguishable from each other. **C. Data Loading and Re-Encryption: After encryption,** each site uploads the encrypted data to the selected storage and processing unit that immediately starts a Distributed Deterministic Re-Encryption (DDR) protocol (the details of this protocol are explained in Section 5.5) in which the encrypted concepts are sent across the network of SPUs so that their encryption is switched from probabilistic to deterministic. This re-encryption is necessary for enabling the secure processing of equality-matching queries (as those defined in Section 4.3) that otherwise would be impossible with probabilistic ciphertexts. Due to the presence of dummy patients, even if the deterministic nature of the ciphertexts leaks the equality of the underlying plaintexts, an honest-but-curious adversary is not able to perform a frequency attack to distinguish ontology concepts based on their frequency distribution. 
Dummy patients are indistinguishable from real patients, as long as the patients’ binary flags are probabilistically encrypted **5.4** **Secure Query Protocol** We assume each investigator that uses MedCo has a pair of EC-ElGamal cryptographic keys (kI, KI ) and, optionally, is assigned an initial differential privacy budget �I during the registration phase. The purpose of such a budget is to limit the number of queries an investigator with low privileges can run on the system, hence �I -differential privacy can be guaranteed. The proposed secure query protocol is illustrated in Figure 3 and comprises the following steps: **1. Query Generation: The secure query protocol starts with** an authenticated and authorized investigator who wants to obtain either the number of patients or the pseudonyms of the patients who match a set of inclusion/exclusion clinical and genetic criteria across the different clinical sites. In clinical research, this procedure is called “cohort selection”. For this purpose, the investigator builds a query by logically combining (i.e., through AND and OR operators) a set of “sensitive” and “non-sensitive” concepts from a common (i.e., shared across the different sites) ontology. The “sensitive” concepts in the query are encrypted with the collective public key K and the query is sent along with the investigator’s public key KI to one of the storage and processing units. **2. Query Re-Encryption: The SPU that receives the query** starts a Distributed Deterministic Re-Encryption (DDR) protocol (described in Section 5.5) in order to switch the encryption of the sensitive concepts in the query from probabilistic to deterministic. Once the DDR protocol is over, the initial SPU broadcasts the deterministic version of the query to the other SPUs in the network. **3. Local Query Processing: Each SPU locally processes the** query by filtering the patients (both dummy and real) in the “patient dimension” table whose observations in the “fact” table (both the unencrypted and the deterministically encrypted ones) match the concepts in the query. If the query requests the list of matching patients’ pseudonyms, each SPU returns the list of matching patients’ pseudonyms along with the probabilistically encrypted binary flags. If the query requests the number of matching patients, each SPU homomorphically adds the matching-patients’ dummy flags and returns the encrypted result EK(Ri) = EK([�]j∈φ [f][ j]i [) =][ �]j∈φ [E][K][(][f][ j]i [)][, where][ E][K][(][f][ j]i [)][ is the en-] crypted flag of the j-th patient in site Si and φ is the set of patients matching the query. In the homomorphic summation, the binary flags of the dummy patients have a null contribution (i.e., EK(0)), hence the encrypted final result corresponds to the actual number of real matching patients. **4. Result Obfuscation: This step is optional and depends** on (i) the type of query and (ii) the investigator’s privileges. In order to guarantee differential privacy, each SPU can obfuscate the encrypted patient counts computed during the previous step by homomorphically adding noise sampled from a Laplacian distribution. More specifically, let �q be the privacy budget allocated for a given query q and μ be the noise value drawn from a Laplacian distribution with mean 0 and scale [Δ]�q[f] [, where the sensitivity][ Δ][f][ is equal to 1,] due to Ri being a count. Then, the encrypted obfuscated query result is obtained as EK( R[ˆ]i) = EK(Ri + μ) = ----- EK(Ri)+EK(μ). 
We note that the query result is released to the investigator only if the investigator’s differential privacy budget is enough for such a query, i.e., if �I �q > 0. − **5. Result Shuffling: This step is also optional and depends,** as the previous step, on (i) the type of query and (ii) the investigator’s privileges. In order to break the link between the encrypted (potentially obfuscated) query results generated at the different SPUs and the corresponding clinical sites, the SPUs jointly run a Distributed Verifiable Shuffling (DVS) protocol (described in Section 5.5) on the set of encrypted patient counts. As a result, each SPU receives encrypted counts[2], that might have been generated by another SPU. **6. Result Re-Encryption: The query results securely com-** puted by each SPU are encrypted with the collective key K; to be decrypted by the investigator, each SPU runs a Distributed Key Switching (DKS) protocol (described in Section 5.5) that involves the other SPUs and switches the encryption of the query results from an encryption with K to an encryption with KI, the investigator’s public key. After this, the newly encrypted query results are sent back to the the SPU that initiated the protocol and then on to the investigator. **7. Result Decryption: As the query results are encrypted** with KI, the investigator can use the corresponding secret key kI to decrypt them and obtain the corresponding plaintext values. If the query results are the list of patients’ pseudonyms along with the patients’ binary flag, the investigator can simply rule out the dummy patients by discarding those who have the flag set to zero. **5.5** **Secure Sub-Protocols** The secure query protocol of MedCo is based on three secure and distributed sub-protocols re-adapted from [21]. In this section, we describe them in detail. - Distributed Deterministic Re-Encryption (DDR) Proto**col. The DDR protocol enables a set of SPUs to determinis-** tically re-encrypt data that are probabilistically encrypted under the collective key generated by all SPUs, without ever decrypting the data. The purpose of this protocol is to enable equality-matching queries on probabilistically encrypted data that otherwise would not be possible. More formally, let n be the number of SPUs in the network, EK (M ) = (C1, C2) = (rG, M + rK) be the encryption of a message M under the collective public key K. The DDR protocol comprises two rounds through all SPUs. In the first round, each SPUi sequentially uses its secret si and adds siG to C2. After this first round, the resulting ciphertext is ( C[˜]1,0, C[˜]2,0) = (rG, M + rK + [�]i[n]=1 [s][i][G][)][.] In the second round, each SPU partially and sequentially modifies this ciphertext. More specifically, when SPUi receives the modified ciphertext ( C[˜]1,i−1, C[˜]2,i−1) from SPUand Ci[˜]−21,i, it computes = si �C˜2,i− ( 1C −[˜]1,iC,˜C1[˜],i2−,i1)k, wherei�. At the end of theC[˜]1,i = siC[˜]1,i−1 second round, the deterministic re-encryption is obtained 2. The number of encrypted counts received by an SPU corresponds to the number of sites that have outsourced the storage of their data to that SPU by keeping only the second component of the resulting ciphertext DTs(M ) = C2,n = sM + [�]i[n]=1 [s][i][sG][, where] s = [�]i[n]=1 [s][i][ is the collective secret corresponding to the] product of each SPU’s secret. - Distributed Verifiable Shuffling (DVS) Protocol. 
The DVS protocol enables a set of SPUs to sequentially shuffle probabilistically encrypted data so that the outputs cannot be linked back to the original ciphertexts. More specifically, the DVS protocol uses the Neff shuffle [22]. It takes as input multiple sequences of EC-ElGamal pairs (C1,i,j, C2,i,j) forming a a × b matrix, and outputs a shuffled matrix of ( C[¯]1,i,j, C[¯]2,i,j) pairs such that for all 1 ≤ i ≤ a and 1 ≤ j ≤ b, ( C[¯]1,i,j, C[¯]2,i,j) = (C1,π(i),j + rπ[��](i),j[B, C][2][,π][(][i][)][,j][ +][ r]π[��](i),j[P] [)][, where][ r]i,j[��] [is a] re-randomization factor, π is a permutation and P is a public key. - Distributed Key Switching (DKS) Protocol. The DKS protocol enables a set of SPUs to convert a ciphertext generated with the collective public key K into a ciphertext of the same data generated under any known public key U, without ever decrypting them. The DKS protocol never makes use of decryption. Let EK (M ) = (C1, C2) = (rG, M + rK) be the encryption of a message M with the collective public key K. The DKS protocol starts with a modified ciphertext tuple ( C[˜]1,0, C[˜]2,0) = (0, C2). Then, each SPU partially and sequentially modifies this element by generating a fresh random nonce vi and computing ( C[˜]1,i, C[˜]2,i) where C[˜]1,i = C[˜]1,i−1 + viG and C[˜]2,i = C[˜]2,i−1 − kiC1 + viU . The resulting ciphertext corresponds to the message m encrypted under the public key U, ( C[˜]1,n, C[˜]2,n) = (vG, M + vU ) from the original ciphertext (C1, C2), where v = v1 + . . . + vn. #### 6 DUMMY-ADDITION STRATEGIES For cohort-exploration queries, the deterministic encryption of the ontology concepts applied during the ETL phase (see Section 5.3) avoids dictionary attacks by any subset of colluding HBC SPUs due to the distribution of the secrets si used in the DDR protocol. Nevertheless, a generationof-dummy-patients step is required prior to encryption in order to avoid leaking to the SPUs (i) the ontology concepts distribution and (ii) the query result. In this section, we analyze the optimal dummy-generation strategy to achieve this goal. We assume, without loss of generality, that each patient has a different set of observations; if there were equal patients in the database, fake ontology concepts could be added to make them different. The leakage to HBC SPUs can be estimated by calculating (i) the adversary’s equivocation (i.e., conditional entropy) on the ontology concepts of the “fact” table given their tagged versions, as an average measure, and (ii) the smallest anonymity set of the ontology concepts, as a worst-case measure. The higher the equivocation and the larger the anonymity set is, the lower the leakage is. For this exposition, we will focus only on the relation between patients and occurrences of sensitive ontology concepts, leaving aside the temporal dimension. This is a simplifying assumption, implying that (a) either there are no causality relations between concepts or the time ----- |Ontology code|Col2|a b c d e|Col4|Tagged|x y z r s|dummy flag| |---|---|---|---|---|---|---| |real patients|pid 1 pid 2 pid 3|1 1 1 1 0 0 1 1 1 1 1 0 1 1 1||pa pb pc pd pe|1 1 1 0 1 1 1 1 1 0 1 0 1 1 1 0 1 1 1 1 1 1 0 1 1|E(1) E(0) E(1) E(0) E(1)| |dummy patients|pid 4 pid 5|1 1 0 1 1 1 1 1 0 1||||| |||||||| #### M M [′] ←−� �−→ ←−� �−→ Fig. 4: Toy example. Ontology concepts mapping to real and added dummy patients with pseudo-identifiers pidi, and ontology concepts a, b, c, d, e. 
#### 6 DUMMY-ADDITION STRATEGIES

For cohort-exploration queries, the deterministic encryption of the ontology concepts applied during the ETL phase (see Section 5.3) avoids dictionary attacks by any subset of colluding HBC SPUs, due to the distribution of the secrets si used in the DDR protocol. Nevertheless, a generation-of-dummy-patients step is required prior to encryption in order to avoid leaking to the SPUs (i) the ontology-concepts distribution and (ii) the query result. In this section, we analyze the optimal dummy-generation strategy to achieve this goal. We assume, without loss of generality, that each patient has a different set of observations; if there were equal patients in the database, fake ontology concepts could be added to make them different. The leakage to HBC SPUs can be estimated by calculating (i) the adversary's equivocation (i.e., conditional entropy) on the ontology concepts of the "fact" table given their tagged versions, as an average measure, and (ii) the smallest anonymity set of the ontology concepts, as a worst-case measure. The higher the equivocation and the larger the anonymity set, the lower the leakage. For this exposition, we will focus only on the relation between patients and occurrences of sensitive ontology concepts, leaving aside the temporal dimension. This is a simplifying assumption, implying that (a) either there are no causality relations between concepts or the time dimension is encrypted or not available in the database, and that (b) the non-sensitive non-encrypted concepts are independent of the encrypted ones; if this is not the case, dependent concepts should be reclassified as sensitive and be encrypted.

Matrix M (cleartext, as produced by the clinical site):

| | a | b | c | d | e |
|---|---|---|---|---|---|
| pid 1 (real) | 1 | 1 | 1 | 1 | 0 |
| pid 2 (real) | 0 | 1 | 1 | 1 | 1 |
| pid 3 (real) | 1 | 0 | 1 | 1 | 1 |
| pid 4 (dummy) | 1 | 1 | 0 | 1 | 1 |
| pid 5 (dummy) | 1 | 1 | 1 | 0 | 1 |

Matrix M′ (tagged, as stored at the SPU):

| | x | y | z | r | s | dummy flag |
|---|---|---|---|---|---|---|
| pa | 1 | 1 | 1 | 0 | 1 | E(1) |
| pb | 1 | 1 | 1 | 1 | 0 | E(0) |
| pc | 1 | 0 | 1 | 1 | 1 | E(1) |
| pd | 0 | 1 | 1 | 1 | 1 | E(0) |
| pe | 1 | 1 | 0 | 1 | 1 | E(1) |

Fig. 4: Toy example. Ontology concepts mapping to real and added dummy patients with pseudo-identifiers pid_i, and ontology concepts a, b, c, d, e. pa, pb, pc, pd, pe are the randomly sorted version of the patient pseudo-identifiers, and x, y, z, r, s are the shuffled and deterministically re-encrypted version of the ontology concepts. The binary flag is a probabilistic encryption of 1 for real patients and 0 for dummies.

We will follow the toy example shown in Figure 4. This figure represents the (horizontally) folded version of the (vertical) "fact" table, therefore coding each patient as a row, each ontology concept as a column, and each observed (resp. unobserved) concept in a patient as a "1" (resp. "0") in the corresponding cell. More formally, let us define the association between ontology concepts and patients as the tuple of (i) a random binary matrix M, where each row is either a real or a dummy patient and each column represents one ontology concept, and (ii) two functions σp and σo, which map the patient pseudo-identifiers (pid_i in Fig. 4) to the rows (pa, pb, pc, pd, pe in Fig. 4), and the observed ontology concepts (a, b, c, d, e in Fig. 4) to the columns (x, y, z, r, s in Fig. 4), respectively. These maps represent the shuffling applied to patients before they are assigned their pseudo-identifiers, and the shuffling and deterministic re-encryption applied to ontology concepts before they are loaded into the SPU's database. In order to focus on the practical leakage of the deterministically encrypted database, let us assume that the deterministic re-encryption of the concepts and the probabilistic encryption of the patients' binary flags do not leak anything about their inputs (their trapdoors cannot be broken), even if they are based on computational guarantees. Therefore, the adversary (each of the SPUs) observes the realization of the row- and column-permuted matrix, an event we denote A ≡ [M̃ = M′], and her equivocation with respect to the original information given A can be expressed as

H(M, σo, σp | A) = H(M | σo, σp, A) + H(σo | σp, A) + H(σp | A)    (1)

and, step by step,

H(M, σo, σp | A) (a)= H(σo | σp, A) + H(σp | A) (b)≤ H(σo | A) + H(σp) (c)≤ H(σo) + H(σp).    (2)

Expression (1) can be divided into three terms: the first represents the entropy of M conditioned on the two permutations and the observed contents of the cells, which is fully deterministic, hence zero-entropy (step (a) in (2)); the second term is the entropy of the ontology-concepts permutation conditioned on the observation of the matrix cells and the patient permutation; and the third term is the entropy of the patient permutation conditioned on the observed matrix contents. We aim at maximizing these two terms.

The last term of the equivocation can be maximized by making the dummy patients indistinguishable from the real patients, i.e., drawn from the same distribution. Empirically, this means that all the patients, real or dummy, have the same type of distribution, and the contents of the rows are independent of the position of the dummy patients in the list. This also makes the two permutations independent of each other, even when conditioned on the contents of M′ (step (b) in (2)). In our toy example in Fig. 4, all the real patients' rows belong to the same type (weight 4); by generating two new dummy patients with the same weight, they become indistinguishable from real patients in our simplified example.
In order to maximize the entropy of the ontology-concepts mapping σo conditioned on A (step (c) in (2)), all the permutations have to be equiprobable for the given M′. This is achieved by flattening the joint distribution of the observed ontology concepts through the added dummies; the geometric interpretation of this flattening is that any column permutation can be cancelled out by a row permutation, such that it is not possible to univocally map any ontology concept to any column in M′. In our toy example, it can be seen that, due to the two added dummies, any fixed query yields the same number of patients independently of the permutation applied to the query terms, which gives complete indistinguishability between all the deterministically encrypted ontology concepts, even in light of the matrix M′. It must be noted that the unobserved concepts do not have to be added to the table, as the adversary does not have a priori knowledge of which is the subset of observed concepts, only its cardinality. Also, this strategy fully breaks the correlation between ontology concepts; for example, if the site added only one dummy patient with concepts a, b, e to the real patients in Figure 4, the individual appearance rate of the concepts would be flattened, but it would leak that there is a correlation between the concepts c and d, which could be identified in the encrypted matrix through an lp-optimization attack [23].

The last bound in (2) is the best that clinical sites can do with the dummy-patient addition strategy knowing the matrix of real patients; it maximizes the uncertainty of the attacker about the original ontology concepts, for any real distribution of patients and ontology concepts. The corresponding practical dummy-addition strategy can be described as follows (see the sketch below): real rows are grouped according to their weight (number of observations); if the whole set of observed ontology concepts has n elements, for each group of rows of weight k < n, dummy rows are added to complete all the k-combinations of n elements, producing C(n, k) rows (counting both real and dummies) per group.
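A sketch of this combinatorial strategy on the toy example of Fig. 4 (plain combinatorics, no cryptography; function names are illustrative):

```go
package main

import "fmt"

// combinations returns all k-element subsets of {0, ..., n-1},
// each encoded as a binary row of length n.
func combinations(n, k int) [][]int {
	var out [][]int
	row := make([]int, n)
	var rec func(start, left int)
	rec = func(start, left int) {
		if left == 0 {
			out = append(out, append([]int(nil), row...))
			return
		}
		for i := start; i <= n-left; i++ {
			row[i] = 1
			rec(i+1, left-1)
			row[i] = 0
		}
	}
	rec(0, k)
	return out
}

func main() {
	// Toy example of Fig. 4: n = 5 observed concepts, all real rows of weight 4.
	real := [][]int{
		{1, 1, 1, 1, 0}, {0, 1, 1, 1, 1}, {1, 0, 1, 1, 1},
	}
	seen := map[string]bool{}
	for _, r := range real {
		seen[fmt.Sprint(r)] = true
	}
	// Emit dummy rows until every weight-4 combination of the 5 concepts exists.
	for _, c := range combinations(5, 4) {
		if !seen[fmt.Sprint(c)] {
			fmt.Println("dummy row:", c) // prints the two missing combinations
		}
	}
}
```

Running this prints exactly the two dummy rows of Fig. 4 (pid 4 and pid 5), completing the C(5, 4) = 5 rows of weight 4.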
The drawback is that the equivocation is reduced, as the resulting joint distribution of the ontology concepts is only flat across blocks, but not inside each block. In the worst case in terms of leakage (fully correlated concepts within each block), the achievable adversary’s equivocation becomes

H(M, σo, σp | A) = H(σo | σp, A) + H(σp | A) ≤ H(σo,n/n′) + H(σp),

where σo,n/n′ denotes the permutations of the n/n′ blocks of n′ concepts each. This bound is achieved when the blocks are mutually independent, hence the best partitioning strategy consists in keeping correlated concepts inside the same block. If full independence between concepts can be assumed (n′ = 1), it can be seen that flattening the observations histogram leads to the same maximum attacker equivocation as the complete permutation strategy (Eq. (2)), but with a much lower number of added dummies. In order to further reduce this number, it is possible to set a minimum anonymity-set size m for the concepts and add dummies to water-fill the observation histogram (block-wise flat, instead of fully flat) until each concept has at least m − 1 other concepts featuring the same number of observations.

Finally, it must be noted that whenever a site’s database is updated, dummies can be regenerated (and encryptions re-randomized) when the ETL process (see Section 5.3) is run again for the whole updated database. The DDR protocol uses different fresh randomness so that the concepts from the updated database cannot be linked back to the concepts of the old one.

#### 7 PRIVACY & SECURITY ANALYSIS AND EXTENSIONS (MEDCO+)

The main privacy and security goals for MedCo are summarized in Section 4.3. In this section, we briefly discuss and analyze the fulfillment of these targets for MedCo, and we revisit possible extensions for more stringent requirements. Security in MedCo is based on the cryptographic guarantees provided by the underlying decentralized sub-protocols described in Section 5.5. All input sensitive data are either deterministically (ontology concepts) or probabilistically (patients’ binary flags) encrypted with collectively maintained keys, such that they cannot be decrypted without the cooperation of all sites, thus guaranteeing confidentiality and avoiding single points of failure (SP1 in Section 4.3). For the full step-by-step security analysis of the distributed sub-protocols, we refer the reader to [21]. Following this analysis, paired with the dummy strategy described in Section 6, it can be seen that MedCo covers the unlinkability requirement (SP3 in Section 4.3) for the query results, thanks to the DVS protocol, and it protects their confidentiality, as only the authorized investigator can decrypt the query results thanks to the DKS protocol (SP2 in Section 4.3). Furthermore, to avoid re-identification (or attribute-disclosure) attacks (SP4 in Section 4.3), MedCo also enables the application of differentially private noise to the results and, due to the proposed dummy strategy, it guarantees confidentiality of the data also against all the SPUs that participate in the system (SP2 in Section 4.3). There are two extensions that can be applied to MedCo in order to satisfy additional confidentiality and integrity requirements: guaranteeing unlinkability among investigators’ queries, and obtaining protection against (potentially) malicious SPUs.
**- Query confidentiality:** In the basic MedCo system presented in Section 5, HBC SPUs can link the ontology concepts used across different queries, as the deterministically encrypted values of the same concepts are the same for all the queries. In the case that query confidentiality is also a requirement (e.g., for investigators from pharmaceutical companies), it is possible to address it by probabilistically encrypting ontology concepts during the ETL phase and by deterministically re-encrypting the obtained ciphertexts with a fresh secret for each new query. Then, the effective encryption key is different for each fresh run of the DDR protocol, so it is not possible to link the query terms between different runs of the shuffling-DDR. When this modified system (which we denote MedCo+) is paired with the proposed dummy-addition strategy, the terms between queries are indistinguishable and unlinkable, at the cost of transferring and re-encrypting at runtime the encrypted database of each site.

**- Malicious SPUs:** MedCo’s threat model assumes HBC SPUs, a credible and plausible assumption based on the damage to its reputation that a SPU would suffer if it misbehaved in a collective data-sharing protocol. Nevertheless, it is possible to cope with malicious SPUs by using proof-generation protocols [21] that produce and publish zero-knowledge proofs for all the computations performed at the SPUs; the proofs can then be verified by any entity in order to assess that no SPU deviated from the correct behavior. This solution yields a hardened and resilient query protocol, but the cost of producing all proofs results in a typically unacceptable burden in common data-sharing applications, for which the basic proposed MedCo covers all fundamental privacy and security requirements and yields a very competitive performance, as shown in the next section.

#### 8 IMPLEMENTATION AND EVALUATION

We implemented and tested MedCo on a clinical oncology use-case by simulating a network of three clinical sites, each one outsourcing the storage of their data to a different SPU.

**8.1** **Implementation**

To ease its adoption at clinical sites, we implemented MedCo as three components that fully integrate within the i2b2 [11] framework and its networking system SHRINE [12]. i2b2 (Informatics for Integrating Biology and the Bedside) is the state-of-the-art clinical platform for enabling secondary use of electronic health records (EHR) [11]. It is currently used at more than 300 medical institutions, covering the data of more than 250 million patients. Its back-end consists of a set of server-side software modules implemented in Java, called “cells”, that are responsible for the business logic of the platform and are organized in a “hive”. The i2b2 data model is based on the “star schema” [20]. Queries are built in a dedicated JavaScript-based Web client by logically combining ontology concepts organized in a hierarchical tree-based structure. The three components of MedCo are:

- A new i2b2 server cell, called “MedCo cell”, developed in Java and Go. The MedCo cell is responsible for the execution of the secure query protocol and communicates with the other i2b2 cells through a REST API. We used the UnLynx library [21] to implement the DDR, DVS and DKS secure distributed sub-protocols.
- A new i2b2 Web-client plugin developed in JavaScript. The plugin is responsible for managing the cryptographic operations in the browser.
- A data importation tool, developed in Go, that is responsible for encrypting the sensitive ontology concepts and generating the dummy patients.

These components are publicly available at [24]. We note that MedCo is not limited to i2b2/SHRINE but can also be integrated on top of other state-of-the-art platforms for clinical and translational research, such as tranSMART [2], in order to make them secure and distributed.

**8.2** **Oncology Use-Case**

The lack of privacy and security guarantees of existing tools makes sharing sensitive oncological data outside the trusted boundaries of clinical sites extremely difficult, if not impossible. For this reason, we tested MedCo on genomic and clinical data from The Cancer Genome Atlas (TCGA) [25] by performing typical queries for oncogenomics. We report here two representative examples:

**- Query A:** Number of patients with skin cutaneous melanoma AND a mutation in the BRAF gene affecting the protein at position 600. About half of melanoma patients harbor a mutation in the BRAF gene at position 600 (V600E or V600K) and can be treated by the BRAF inhibitor vemurafenib [26]. The proportion of BRAF-mutated melanoma is therefore an important benchmark for a clinic or hospital.

**- Query B:** Number of patients with skin cutaneous melanoma AND a mutation in the BRAF gene AND a mutation in the (PTEN OR CDKN2A OR MAP2K1 OR MAP2K2) genes. This query is based on the fact that patients treated with vemurafenib develop resistance through mutations that activate the MAP kinase pathways [27]. When facing drug resistance, finding another patient with a similar mutation profile could bring invaluable information for clinical decisions.

We used genomic and clinical data of 8,000 cancer patients, 9 clinical attributes, and an average of 142 genetic mutations per patient (more than 1 million observations in total). We imported these data from the Mutation Annotation Format (MAF) into the i2b2 “star schema” data model. Each mutation is represented as a code comprising the concatenation of its chromosome, position, reference allele and tumor allele. Clinical attributes are encoded with the ICD-10 [28] and ICD-O [29] international terminologies.

**8.3** **Experimental Setup**

The initial testing environment comprises 3 servers interconnected by 10 Gbps links and featuring two Intel Xeon E5-2680 v3 CPUs @2.5 GHz that support 24 threads on 12 cores, and 256 GB RAM. Each server represents an SPU and hosts the i2b2/SHRINE Web client with the MedCo plugin, the i2b2 hive including the SHRINE components, the new MedCo cell, and the i2b2 database implemented in PostgreSQL. In order to test MedCo’s scalability, we increase the number of servers up to 9 (see setup S3 below). To set up our system and facilitate its deployment, we use Docker [30]. To evaluate MedCo’s performance, we consider five different experimental setups, with each measurement averaged over 10 independent runs, and show MedCo’s computational and storage overhead with respect to an unprotected i2b2/SHRINE deployment:

**S1. ETL runtime for increasing dataset size:** We analyze the amount of time needed to extract, transform and load the data (pre-processing), which includes the formatting, the initial probabilistic encryption, the deterministic re-encryption of sensitive ontology concepts, and the loading of the data in the i2b2 database.
**S2. Query runtime breakdown:** We run queries A and B (see Section 8.2) on a federation of 3 SPUs, each storing the full initial dataset (i.e., around 1 million observations on 8,000 patients at each SPU), and report the query-runtime breakdowns for each step of the secure query protocol.

**S3. Query runtime for increasing dataset size:** We run queries A and B (see Section 8.2) on a federation of 3 SPUs in order to study MedCo’s scalability with respect to increasing dataset sizes.

**S4. Query runtime overhead for increasing number of SPUs:** We run queries A and B (see Section 8.2) on a federation with an increasing number of SPUs, each storing the whole initial dataset.

**S5. Network traffic for varying query size:** We study the amount of inter-SPU network traffic for queries with an increasing number of ontology concepts.

**8.4** **Performance Results**

In the following, we report the performance results for the aforementioned use-cases and experimental setups. We show MedCo’s computational and storage overhead with respect to an unprotected i2b2/SHRINE deployment. As shown in Figure 5, the ETL phase (setup S1) is a costly operation in MedCo. We can distinguish two separate sub-phases: (i) the processing of the ontology (including the parsing, the encryption and the distributed deterministic re-encryption), which depends only linearly on the size of the ontology, itself usually constant, and (ii) the processing of patients’ observations, which depends linearly on the number of observations/patients but does not involve any costly encryption operation, hence it is much faster than the ontology processing. We note that the ETL phase is performed only once and can be significantly optimized through parallel computing. If new data need to be added after the first importation, there is no need to re-process the ontology again.

(Fig. 5: ETL time vs database size for experimental setup S1.)

Figure 6 provides query-runtime breakdowns for both query A and query B (setup S2). The times for query-parsing and encryption/decryption in the Web client, broadcasting the query across the different SPUs, and result obfuscation are all negligible, so we do not account for them. Unexpectedly, results show that the standard i2b2 query to the central “fact” table is the most expensive operation in MedCo, as it depends on the total number of observations in the database. In this case, each SPU stores approximately 1 million observations (both genomic and clinical) per affiliated clinical site (one site per SPU in our setting). This time is also linear in the number of ontology concepts used in the query (96 for query A and 281 for query B) and is inherent to the standard i2b2 database management for SQL queries to the “fact” table. The times for fetching the encrypted patients’ binary flags from the “patient dimension” table and the homomorphic aggregation (Step 3 in the query workflow) depend linearly on the number of patients satisfying the query criteria and can be extremely fast for rare ontology concepts or rare combinations of concepts. For example, for queries A and B, homomorphic aggregation takes around 30 and 8 milliseconds respectively, as only around 32 and 7 patients per site satisfy the query criteria.

(Fig. 6: Query-runtime breakdown for queries A (a) and B (b) in a network with three sites and three SPUs for experimental setup S2. The vertical black line signals the point where each node has to wait for the others before it can proceed.)
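To make Step 3's homomorphic aggregation tangible, the sketch below uses textbook exponential ElGamal over integers with deliberately toy, insecure parameters; it is only a didactic stand-in for MedCo's actual elliptic-curve ElGamal with a collectively maintained key, implemented through the UnLynx library [21]:

```python
import random

p = 2**64 - 59        # a prime; toy-sized, NOT a secure modulus
g = 3                 # toy base
x = random.randrange(2, p - 1)   # secret key (split among SPUs in MedCo)
h = pow(g, x, p)                 # public key

def enc(m):
    """Probabilistic encryption of a small integer m as (g^r, g^m * h^r)."""
    r = random.randrange(2, p - 1)
    return (pow(g, r, p), pow(g, m, p) * pow(h, r, p) % p)

def add(c1, c2):
    """Component-wise product of ciphertexts encrypts the sum m1 + m2."""
    return (c1[0] * c2[0] % p, c1[1] * c2[1] % p)

def dec(c, max_count=10**6):
    """Recover g^m = c2 / c1^x, then m by small brute-force discrete log."""
    gm = c[1] * pow(c[0], p - 1 - x, p) % p
    acc = 1
    for m in range(max_count + 1):
        if acc == gm:
            return m
        acc = acc * g % p
    raise ValueError("count larger than max_count")

# Each patient contributes an encrypted 0/1 match flag; the aggregate count
# is obtained without ever decrypting an individual flag.
flags = [enc(b) for b in (1, 0, 1, 1, 0)]
total = flags[0]
for c in flags[1:]:
    total = add(total, c)
assert dec(total) == 3
```

In the real system no single party holds the secret key, and the final count is switched to the investigator's key through the DKS protocol rather than decrypted locally.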
Differently, the deterministic re-encryption time is linear in the number of sensitive concepts in the query and in the number of SPUs in the network, as each probabilistically encrypted concept has to be sequentially modified by each SPU. Such a process takes less time for query A than for query B, as they respectively comprise 96 (95 mutations and 1 clinical attribute) and 281 (280 mutations and 1 clinical attribute) query attributes. The remaining secure distributed operations introduced by MedCo depend on the number of SPUs in the network, but they are negligible, as they involve only one ciphertext, i.e., the encrypted query result.

(Fig. 7: MedCo’s performance results for experimental setups S3–S5. (a) Query A runtime vs database size; (b) query B runtime vs database size; (c) queries A and B runtime vs number of SPUs; (d) network traffic vs query size.)

Figure 7 shows the performance results for setups S3–S5. The measurements are averaged out between SPUs. For setup S3 (Subfig. 7a and Subfig. 7b), in order to study MedCo’s ability to scale with increasing database sizes, we randomly sample patients from the original dataset of 8k patients and create smaller datasets of 1k, 2k and 4k patients per site. For setups S4 and S5 (Subfig. 7c and Subfig. 7d), we use the initial dataset (8k patients). Results show that MedCo is extremely efficient and performance-wise comparable to the insecure i2b2/SHRINE deployment. MedCo’s overhead only depends on the number of sensitive concepts in the query, the number of matching patients satisfying the research criteria and, marginally, on the number of SPUs in the network. As shown in Subfigure 7c, the number of SPUs affects only the time needed by the distributed protocols to deterministically re-encrypt the sensitive ontology concepts in the query and to re-encrypt the query end-result under the investigator’s key. In Subfigures 7a, 7b and 7c, we can also observe that MedCo+ has a relatively higher runtime cost as a counterpart for achieving query unlinkability, because all the observations in the “fact” table of each SPU have to be deterministically re-encrypted on the fly by the whole set of SPUs for each new query. This is confirmed by Subfigure 7d, where the network traffic is significant and almost constant for MedCo+, whereas for MedCo it is almost negligible and increases with the number of concepts in the query. We note, however, that the privacy enhancements brought by MedCo+ might be necessary only under specific circumstances (e.g., when an investigator from a pharmaceutical company is using the system). Finally, the storage overhead introduced by encryption affects only the “concept dimension” table that stores the ontology, and it is on the order of 4x, as MedCo’s deterministic re-encryption converts each ontology concept, represented by a 64-bit integer, into a 32-Byte ciphertext. Depending on the specific distribution of ontology codes across patients, a varying number of dummy patients must also be considered. In the tested oncology use-case, we assume independent codes and follow the dummy-addition strategy described in Section 6. As a result, we obtain an increase factor of 3.6x.
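As a back-of-envelope check of these blow-up factors (our own helper; the defaults are the figures reported above):

```python
def storage_blowup(code_bytes=8, ciphertext_bytes=32, dummy_row_factor=3.6):
    """Rough storage overhead of the encrypted deployment: the 'concept
    dimension' grows by the ciphertext/plaintext size ratio, and the patient
    rows grow by the dummy-addition factor measured in the oncology use-case."""
    return {"concept_dimension": ciphertext_bytes / code_bytes,  # 4.0x
            "patient_rows": dummy_row_factor}                    # 3.6x

print(storage_blowup())  # {'concept_dimension': 4.0, 'patient_rows': 3.6}
```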
#### 9 CONCLUSION

In this paper, we have presented MedCo, the first operational scalable system that enables secure sharing of sensitive medical data, which so far has been impossible due to the low security guarantees of existing operational systems. MedCo relies on secure distributed protocols and a new dummy-records addition strategy that enables different privacy/security vs. efficiency trade-offs. With its generic architecture, MedCo is easily deployable on top of existing health information systems such as i2b2 or tranSMART. Finally, results on a clinical oncology use-case have shown practical query-response times and good scalability with respect to the number of sites and amount of data. Therefore, we firmly believe that MedCo represents a concrete solution for fostering medical data sharing in a privacy-conscious and regulatory-compliant way.

#### REFERENCES

[1] J. V. Selby, A. C. Beal, and L. Frank, “The patient-centered outcomes research institute (PCORI) national priorities for research and initial research agenda,” JAMA, vol. 307, no. 15, pp. 1583–1584, 2012.
[2] B. D. Athey, M. Braxenthaler, M. Haas, and Y. Guo, “tranSMART: an open source and community-driven informatics and data sharing platform for clinical and translational research,” AMIA Summits on Translational Science Proceedings, vol. 2013, p. 6, 2013.
[3] Swiss Academies of Arts and Sciences, “Swiss Personalized Health Network,” http://www.samw.ch/en/Projects/SPHN.html, last accessed: June 11, 2018.
[4] The Global Alliance for Genomics and Health, “A federated ecosystem for sharing genomic, clinical data,” Science, vol. 352, no. 6291, pp. 1278–1280, 2016.
[5] U.S. Department of Health & Human Services, “The health insurance portability and accountability act (HIPAA),” https://www.hhs.gov/hipaa/index.html, last accessed: June 11, 2018.
[6] EU Parliament, “The EU General Data Protection Regulation (GDPR),” http://www.eugdpr.org/, last accessed: June 11, 2018.
[7] “All of Us research program,” https://allofus.nih.gov/, last accessed: June 11, 2018.
[8] “The 100,000 Genomes Project protocol v3, Genomics England,” https://www.genomicsengland.co.uk/, last accessed: June 11, 2018.
[9] The Global Alliance for Genomics and Health, “Beacon network,” https://beacon-network.org/, 2017, last accessed: June 11, 2018.
[10] U.S. Department of Health and Human Services, “Breach portal: Notice to the secretary of HHS breach of unsecured protected health information,” https://ocrportal.hhs.gov/ocr/breach/breach_report.jsf, last accessed: June 11, 2018.
[11] S. N. Murphy, G. Weber, M. Mendis, V. Gainer, H. C. Chueh, S. Churchill, and I. Kohane, “Serving the enterprise and beyond with informatics for integrating biology and the bedside (i2b2),” Journal of the American Medical Informatics Association, vol. 17, no. 2, pp. 124–130, 2010.
[12] G. M. Weber, S. N. Murphy, A. J. McMurry, D. MacFadden, D. J. Nigrin, S. Churchill, and I. S. Kohane, “The shared health research information network (SHRINE): a prototype federated query tool for clinical data repositories,” Journal of the American Medical Informatics Association, vol. 16, no. 5, pp. 624–630, 2009.
[13] The Global Alliance for Genomics and Health, “The Beacon project,” https://beacon-network.org/#/, last accessed: June 11, 2018.
[14] J. L. Raisaro, F. Tramèr, Z. Ji, D. Bu, Y. Zhao, K. Carey, D. Lloyd, H. Sofia, D. Baker, P. Flicek, S. Shringarpure, C. Bustamante, S. Wang, X. Jiang, L. Ohno-Machado, H. Tang, X. Wang, and J.-P. Hubaux, “Addressing Beacon re-identification attacks: quantification and mitigation of privacy risks,” Journal of the American Medical Informatics Association, no. 0, pp. 1–8, 2017.
[15] F. Chen, S. Wang, X. Jiang, S. Ding, Y. Lu, J. Kim, S. C. Sahinalp, C. Shimizu, J. C. Burns, V. J.
Wright et al., “Princess: Privacy-protecting rare disease international network collaboration via encryption through software guard extensions,” Bioinformatics, p. btw758, 2017.
[16] J. Bater, G. Elliott, C. Eggen, S. Goel, A. Kho, and J. Rogers, “SMCQL: Secure querying for federated databases,” Proc. VLDB Endow., vol. 10, no. 6, pp. 673–684, Feb. 2017. [Online]. Available: https://doi.org/10.14778/3055330.3055334
[17] M. Bellare, A. Boldyreva, and A. O’Neill, “Deterministic and efficiently searchable encryption,” Advances in Cryptology - CRYPTO 2007, pp. 535–552, 2007.
[18] R. A. Popa, C. Redfield, N. Zeldovich, and H. Balakrishnan, “CryptDB: protecting confidentiality with encrypted query processing,” in Proceedings of the Twenty-Third ACM Symposium on Operating Systems Principles. ACM, 2011, pp. 85–100.
[19] C. Gentry, “A fully homomorphic encryption scheme,” Ph.D. dissertation, Stanford University, 2009.
[20] P. M. Nadkarni and C. Brandt, “Data extraction and ad hoc query of an entity attribute value database,” Journal of the American Medical Informatics Association, vol. 5, no. 6, pp. 511–527, 1998.
[21] D. Froelicher, P. Egger, J. S. Sousa, J. L. Raisaro, Z. Huang, C. Mouchet, B. Ford, and J.-P. Hubaux, “UnLynx: A decentralized system for privacy-conscious data sharing,” in Proceedings on Privacy Enhancing Technologies, vol. 4, no. EPFL-CONF-229308, 2017, pp. 152–170.
[22] C. A. Neff, “Verifiable mixing (shuffling) of ElGamal pairs,” VHTi Technical Document, VoteHere, Inc, 2003.
[23] M. Naveed, S. Kamara, and C. V. Wright, “Inference attacks on property-preserving encrypted databases,” in Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, ser. CCS ’15. New York, NY, USA: ACM, 2015, pp. 644–655. [Online]. Available: http://doi.acm.org/10.1145/2810103.2813651
[24] LCA1, EPFL, “MedCo source code,” https://c4science.ch/w/medco/, last accessed: June 11, 2018.
[25] K. Tomczak, P. Czerwińska, and M. Wiznerowicz, “The Cancer Genome Atlas (TCGA): an immeasurable source of knowledge,” Contemporary Oncology, vol. 19, no. 1A, p. A68, 2015.
[26] P. A. Ascierto, J. M. Kirkwood, J.-J. Grob, E. Simeone, A. M. Grimaldi, M. Maio, G. Palmieri, A. Testori, F. M. Marincola, and N. Mozzillo, “The role of BRAF V600 mutation in melanoma,” Journal of Translational Medicine, vol. 10, no. 1, p. 85, 2012.
[27] H. Yang, D. Kircher, K. Kim, A. Grossmann, M. VanBrocklin, S. Holmen, and J. Robinson, “Activated MEK cooperates with CDKN2A and PTEN loss to promote the development and maintenance of melanoma,” Oncogene, vol. 36, no. 27, pp. 3842–3851, 2017.
[28] World Health Organization, The ICD-10 classification of mental and behavioural disorders: clinical descriptions and diagnostic guidelines. World Health Organization, 1992, vol. 1.
[29] A. G. Fritz, International classification of diseases for oncology: ICD-O. World Health Organization, 2000.
[30] D. Merkel, “Docker: lightweight Linux containers for consistent development and deployment,” Linux Journal, vol. 2014, no. 239, p. 2, 2014.

**Jean Louis Raisaro** earned his PhD in Computer and Communication Sciences in 2018 from EPFL, Lausanne, Switzerland. Prior to that he earned a MS and BS in Biomedical Informatics and Bioinformatics in 2012 and 2009 from the University of Pavia, Pavia, Italy. His main interests are the design and development of new efficient privacy-enhancing technologies for the protection of medical data, with a special focus on genetic data. He is an expert in applied cryptography, privacy and medical informatics.
**Juan Ramón Troncoso-Pastoriza** holds a Ph.D. in Telecom Engineering (2012); he is an elected member of the IEEE Information Forensics and Security TC and the IEEE Signal Processing Society Student Services Committee for the period 2017–2019, and associate editor of four journals on Information Security (EURASIP JIS, IET IFS, Elsevier DSP and Elsevier JVCI). His research interests include Secure Signal Processing, Applied Cryptography and Genomic Privacy, areas in which he has published numerous papers in top conferences and journals and holds several international granted patents.

**Mickaël Misbach** is pursuing a Master in Communication Systems at EPFL in Lausanne, Switzerland, with a specialization in Information Security. He is expected to graduate in September 2018. During the last years of his studies he worked on medical data privacy, being the main developer of the privacy-conscious cohort explorer MedCo.

**João Sá Sousa** is currently a Security/Privacy Software Engineer at EPFL under the direction of professor Jean-Pierre Hubaux. He has a MS and BS degree in Informatics Engineering from the University of Coimbra and did a 3-month internship at CMU-SV. His main interests include Wireless Security, Genomic Privacy, Cryptography, Android Development, Web Development and Business Management.

**Sylvain Pradervand** received a Ph.D. degree in molecular biology from the University of Lausanne in 1998. After a postdoc studying transcriptomics in heart disease models at the University of California San Diego, he turned his interests to bioinformatics. He is currently leading the bioinformatics team of the genomic technologies facility of the University of Lausanne and the bioinformatics team of the clinical research support platform of the Lausanne University Hospital.

**Edoardo Missiaglia** obtained his bachelor’s degree in biology (1994) from the University of Padova, his master’s degree in genetics (1998) from the University of Bologna, and his PhD in pathological oncology (2003) from the University of Verona. He worked at the ICRF (Cancer Research UK) (2001–2003) as a research assistant, and at the University of Verona (2003–05) and ICR (2005–2010) as a Post-Doc and bioinformatician. He worked as Project Manager at the SIB (2010–2014). He became the scientific director of the molecular pathology laboratory of the Institute of Pathology at CHUV in August 2014.

**Olivier Michielin** is associate professor at the University of Lausanne. He obtained a diploma of Physics in 1991 at the EPFL and an MD from the University of Lausanne in 1997. He pursued his PhD training under the supervision of Jean-Charles Cerottini (LICR) and Martin Karplus (Harvard and Strasbourg Universities). He was appointed Group Leader of the Swiss Institute of Bioinformatics in 2002 and became an Assistant Professor and Privat Docent at the Medical Faculty of Lausanne in 2004 and 2005, respectively. In parallel, he has trained as a medical oncologist and obtained his board certification in 2007 at the Multidisciplinary Oncology Center (CePO) of Lausanne, where he is currently in charge of the melanoma clinic.

**Bryan Ford** leads the Decentralized/Distributed Systems (DEDIS) research group at the Swiss Federal Institute of Technology in Lausanne (EPFL). Ford focuses broadly on building secure decentralized systems, touching on topics including private and anonymous communication, scalable decentralized systems, blockchain technology, Internet architecture, and operating systems. Ford earned his B.S.
at the University of Utah and his Ph.D. at MIT, then joined the faculty of Yale University, where his work received the Jay Lepreau Best Paper Award and grants from NSF, DARPA, and ONR, including the NSF CAREER award.

**Jean-Pierre Hubaux** is a full professor at EPFL. Through his research, he contributes to laying the foundations and developing the tools for protecting privacy in tomorrow’s hyper-connected world. He has pioneered the areas of privacy and security in mobile/wireless networks and in genomics. He is a Fellow of both IEEE (2008) and ACM (2010).
{ "disclaimer": "Notice: The following paper fields have been elided by the publisher: {'references'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1109/TCBB.2018.2854776?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1109/TCBB.2018.2854776, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "GREEN", "url": "https://infoscience.epfl.ch/record/256348/files/08410926.pdf" }
2019
[ "JournalArticle" ]
true
2019-07-01T00:00:00
[]
19,529
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Engineering", "source": "s2-fos-model" }, { "category": "Physics", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/0273efebece84154e5d64f42df67e2139df992b6
[ "Computer Science" ]
0.875587
A performance model for the communication in fast multipole methods on high-performance computing platforms
0273efebece84154e5d64f42df67e2139df992b6
The international journal of high performance computing applications
[ { "authorId": "3083117", "name": "H. Ibeid" }, { "authorId": "2274654", "name": "Rio Yokota" }, { "authorId": "145907031", "name": "D. Keyes" } ]
{ "alternate_issns": null, "alternate_names": [ "int j high perform comput appl", "International Journal of High Performance Computing Applications", "Int J High Perform Comput Appl" ], "alternate_urls": [ "https://journals.sagepub.com/home/hpc" ], "id": "8ce575db-79d9-4601-83df-e1e96f6b4e3b", "issn": "1094-3420", "name": "The international journal of high performance computing applications", "type": "journal", "url": "http://www.sagepub.com/journals/Journal201339/title" }
null
| | Article / Book Information |
|---|---|
| Title | A performance model for the communication in fast multipole methods on high-performance computing platforms |
| Author | Huda Ibeid, Rio Yokota, David Keyes |
| Journal/Book name | International Journal of High Performance Computing Applications, Vol. 30, No. 4, pp. 423–437 |
| Issue date | 2016, 3 |
| DOI | http://dx.doi.org/10.1177/1094342016634819 |
| Note | This file is the author (final) version. |

## A Performance Model for the Communication in Fast Multipole Methods on HPC Platforms

#### Huda Ibeid, Rio Yokota, and David Keyes

Division of Computer, Electrical and Mathematical Sciences and Engineering, King Abdullah University of Science and Technology, Saudi Arabia

**Abstract**

Exascale systems are predicted to have approximately one billion cores, assuming gigahertz cores. Limitations on affordable network topologies for distributed memory systems of such massive scale bring new challenges to the currently dominant parallel programming model. Currently, there are many efforts to evaluate the hardware and software bottlenecks of exascale designs. It is therefore of interest to model application performance and to understand what changes need to be made to ensure extrapolated scalability. The fast multipole method (FMM) was originally developed for accelerating N-body problems in astrophysics and molecular dynamics, but has recently been extended to a wider range of problems, including preconditioners for sparse linear solvers [31]. Its high arithmetic intensity combined with its linear complexity and asynchronous communication patterns make it a promising algorithm for exascale systems. In this paper, we discuss the challenges for FMM on current parallel computers and future exascale architectures, with a focus on inter-node communication. We focus on the communication part only; the efficiency of the computational kernels is beyond the scope of the present study, but see, e.g., [3]. We develop a performance model that considers the communication patterns of the FMM, and observe a good match between our model and the actual communication time on four HPC systems, when latency, bandwidth, network topology, and multi-core penalties are all taken into account. To our knowledge, this is the first formal characterization of inter-node communication in FMM that validates the model against actual measurements of communication time. The ultimate communication model is predictive in an absolute sense; however, on complex systems, this objective is often out of reach, or of a difficulty out of proportion to its benefit when there exists a simpler model that is inexpensive and sufficient to guide coding decisions leading to improved scaling. The current model provides such guidance.

### 1 Introduction

N-body problems arise in many areas of physics (e.g., astrophysics, molecular dynamics, acoustics, electrostatics). In these problems, the system is described by a set of N particles and the dynamics of the system arise from interactions that occur between every pair of particles. This requires O(N^2) computational complexity. For this reason, many efforts have been directed at producing fast N-body algorithms. More efficient algorithms for the particle interaction problem can be provided by a hierarchical approach using tree structures. In this approach, the computational domain is hierarchically subdivided, and the particles are clustered into a hierarchical tree structure.
An approximation of tunable accuracy is applied to far-field interactions, whereas near-field interactions are summed directly. When the far-field expansion is calculated against the particles directly, this approach is called a treecode [1]. When the far-field effect is translated to local expansions before summing their effect, it is called a fast multipole method (FMM) [15, 5]. These approaches bring the complexity down to O(N log N) and O(N) for treecode and FMM, respectively. FMM has been listed as one of the top ten algorithms of the twentieth century [8] due to its wide applicability and impact on scientific computing. It was originally developed for applications in electrostatics and astrophysics, but continues to find new areas of application such as aeroacoustics [29], fluid dynamics [14], magnetostatics [28], and electrodynamics [33]. Because of its linear complexity, FMM scales well with respect to the problem size, if implemented efficiently. For future computer systems the conservation of flops is less important than the conservation of distant loads and stores to supply the arguments for the flops. FMM stands out among hierarchical O(N) algorithms for its strong arithmetic intensity. Since the performance of a single-processor core has plateaued, future supercomputing performance will depend mainly on increases in system scale rather than improvements in single-processor performance. Processor counts are already in the millions for the top system. Modeling application performance at such scales is required to guide algorithmic choices and tunings on existing architectures and to evaluate contemplated architectures. Since the performance of the FMM has a large impact on a wide variety of applications across a wide range of disciplines, it is important to understand the challenges that FMM implementations face on architectures with increased parallelism, as well as to predict and locate bottlenecks that might cause performance degradation. On future architectures where computation becomes relatively cheap compared to data movement, we anticipate that inter-node communication will become the bottleneck. The priority of the present study is the communication model of FMM. To model the performance, we start with the baseline model, namely the (α, β) model for communication, where α is the latency and β is the inverse bandwidth. Then, some penalties are added to the baseline model based on machine constraints. These penalties include distance and reduced per-core bandwidth. Our performance model is related to universal communication features and can be applied regardless of local FMM implementation choices, core-scale machine characteristics that do not affect communication, and arithmetic workload associated with other aspects of the computation. Of course, the importance of communication as a bottleneck depends strongly on the cost of other tasks, but it is important to be able to evaluate communication costs as a component in an overall cost model. The Byte-count parameters in our model make it adaptable to any of the various FMM implementations, while the penalties in our model are tunable to various architectures. We validate our performance model on four different architectures: Shaheen (BG/P), Mira (BG/Q), Titan (Cray XK7), and Piz Dora (Cray XC40). The focus of this paper is on characterizing the FMM communication, not on introducing a new model.
For this purpose, we apply a performance model originally developed for and applied to multigrid methods, which have a different communication pattern. A new application of an existing tool emphasizes the versatility of the tool. Meanwhile, such a detailed analysis of the communication in FMM has not previously been reported, so there is particular relevance to the FMM community, and to the HPC community that exploits, or will exploit at exascale, FMM solvers. The paper is organized as follows. Section 2 gives an overview of related work. Section 3 summarizes some performance challenges that face FMM on parallel machines. These challenges include massive parallelism and degradation due to inter-node communication. In Section 4, an exposition of the fast multipole method sufficiently detailed to expose its communication properties is given. Section 5 describes our performance model. Experiments done to validate the performance models are provided in Section 6, and we conclude in Section 8.

### 2 Related work

Performance modeling and characterization for understanding and predicting the performance of scientific applications on HPC platforms have been targeted by many related projects. For example, Clement and Quinn developed a performance prediction methodology through symbolic analysis of source code [6]. Mendes and Reed focused on predicting the scalability of an application program executing on a given parallel system [24]. Mendes proposed a methodology to predict the performance scalability of data-parallel applications on multi-computers based on information collected at compile time [23]. The approach of combining computation and communication to obtain a general performance model is described by Snavely et al. [27]. DeRose and Reed concentrate on tool development for performance analysis [7]. Performance models for implicit CFD codes have been considered [17]. The efficiency of the spectral transform method on parallel computers has been evaluated by Foster [10]. Kerbyson et al. provide an analytical model for the application SAGE [19]. Performance models for AMG were developed by Gahvari et al. [11], who have also analysed the performance of AMG over a dragonfly network in [12]. Traditional evaluation of specific machines via benchmarking is presented by Worley [30]. Scaling FMM to higher and higher processor counts has been a popular topic [25, 18], while extensive study of single-node performance optimization, tuning, and analysis of FMM has also been of interest [4]. However, there has been little effort to model the inter-node communication of FMMs. Lashuk et al. derive the overall complexity of FMM on distributed-memory heterogeneous architectures [20], but do not validate the model against the actual performance. The present work is based on the communication
Furthermore, the development of an exascale computing capability will cause significant and dramatic changes in computing hardware architecture relative to current petascale computers. In this section we present some of the challenges faced by FMMs to achieve good parallel performance on future exascale systems. #### 3.1 Trends in Computer Hardware Computers consisting of nodes in the tens of thousands with cores per node in the hundreds have emerged as the most widely used high-performance computing platforms. These nodes communicate by sending messages through a network, which leads to lower scalability and less performance due to cores on a single node contenting for access to the interconnect. We discuss multicore and manycore issues in more detail when presenting our performance models that take this into account. #### 3.2 Communication Two types of costs in terms of time and energy are usually analyzed separately: computation (flops) and communication (Bytes). Communication involves moving data between levels of a memory hierarchy in case of sequential algorithms and exchanging data between processors over a network in the case of parallel algorithms. Therefore, without considering overlap, the running time of an algorithm is the sum of three terms: the number of flops times the time per flop, the number of words moved divided by the bandwidth (measured as words per unit time), and the number of messages times the latency. The last two terms determine the time consumed by communication. The time per flop is already an order of magnitude less than reciprocal bandwidth and latency and the gaps between computation and communication are growing exponentially with time. (See Table 2 under the machine descriptions in Section 6 below.) Communication performance models can guide development of algorithms to help reduce the communication. ### 4 Fast multipole method _N_ -body methods are most commonly used to simulate the interaction of particles in a potential field, which has the form Here, f (xi) represents a field value evaluated at a point xi which is generated by the influence of sources located at xj with weights qj. K(xi, xj) is the kernel that governs the interactions between evaluation and source particles. The direct approach to simulate the _N_ -body problem is relatively simple; it evaluates all pair-wise interactions among the particles. While this method is exact to within machine precision, the solution is (N [2]) in its computational complexity, which _O_ is prohibitively expensive for even modestly large data sets. However, its simplicity and ease of implementation make it an appropriate choice when simulating small particle sets (N < 1000) where high accuracy is desired [26]. For a larger number of particles, many faster algorithms have been invented, e.g., treecodes [1] and, the fast multipole method (FMM) [15]. The main idea behind these fast algorithms is to coarse grain the effect of sufficiently far particles as permitted by rigorous analysis. The most common way to achieve this approximation is to cluster the far particles into successively larger groups by constructing a tree. The treecode clusters the far particles and achieves (N log N ) complexity. The FMM further _O_ clusters the near particles in addition to the far particles to achieve (N ) complexity. _O_ In this section, we present an overview of fast algorithms that have been developed for the calculation of N -body problems. 
First, the spatial hierarchy and the fast approximate evaluation of these algorithms are discussed. Then, a description of the communication introduced by the domain partitioning scheme used in these algorithms is provided. The main focus is on the data flow of the FMM algorithm for which we develop the performance model. #### 4.1 FMM Overview This overview is intended to introduce some key ingredients of the FMM. The mathematics behind the _f_ (xi) = _N_ � _qjK(xi, xj)_ (1) _j=1_ 3 ----- (a) 2-D view (b) Tree view Figure 1: Hierarchical decomposition specific FMM kernels is well documented elsewhere and its detail conveniently decouples, given a simple interface to the communication model. For details of the mathematics we refer the reader to previous publications on FMM [2, 5]. **4.1.1** **Basic Component** Both treecodes [1] and the FMM [15] are based on two key ideas: the tree representation for the spatial hierarchy, and the fast approximate evaluation. The spatial hierarchy means that the computational domain is hierarchically decomposed into increasing levels of refinement, and then the near and far subdomains can be identified at each level. The three-dimensional spatial domain of the treecode and FMM is represented by octrees, where the space is recursively subdivided into eight cells until the finest level of refinement or “leaf level. Figure 1 illustrates such a hierarchical space decomposition for a two-dimensional domain (a), associated to a quad-tree structure (b). The original FMM [16] is based on a series expansion of the Laplace Green’s function (1/r) and therefore can be applied to the evaluation of related potentials and/or forces [13]. The approximation reduces the number of operations in exchange for accuracy. **4.1.2** **Flow of Calculation** Figure 2, shows the flow of FMM where the effect of the source particles, shown in red in the lower left corner, are calculated on the target particles, shown in blue in the lower right corner. The schematic is a 2-D representation of what is actually a 3-D octree structure. The calculation starts by transforming the mass/charge of the source particles to a multipole expansion (P2M). Then, the multipole expansion is translated to the center of larger cells (M2M). Then, the influence of multipoles on the particles is calculated in three steps. First, it translates the multipole expansion to a local expansion (M2L). Next, the center of expansion is translated to smaller cells (L2L). Finally, the effect of the local expansion in the far field is translated onto the target particles (L2P). All pairs interaction is used to calculate the effect of near field on target particles (P2P). #### 4.2 FMM Communication Scheme Partitioning of the FMM global tree structure and communication stencils is shown in Figure 3. The binary tree on the left side is a simplification of what is actually an octree in a 3-D FMM. Likewise, the schematics on the right are a 2-D representation of what is actually a 3-D grid structure. Each leaf of the global tree is a root of a local tree in a particular MPI process, where the global tree has Lglobal levels, and the local tree has Llocal levels. Each process stores only the local tree, and communicates the halo region at each level of the local and global tree as shown in the red hatched region in the four illustrations on the right. The blue, green, and black lines indicate global cell boundaries, process boundaries, local cell boundaries, respectively. 
The switch between local and global trees produces a change in the communication pattern, as revealed in the heat map in Figure 4, where the switch is between levels 3 and 4. ### 5 Modeling Performance Performance modeling is a key ingredient in high performance computing. It has a great importance in the design, development and optimization of applications, architectures and communication systems. It also plays a crucial role in understanding important performance bottlenecks of complex systems. For this reason, performance models are used to analyze, predict, and calibrate performance for systems of interest. The tree-based communication of FMM is in 4 ----- #### M2L multipole to local #### M2M multipole to multipole #### P2M M2L particle to multipole #### P2P source particles particle to particle #### L2L local to local #### L2P local to particle target particles ##### Lglobal Llocal Figure 2: Data-flow of FMM calculation. Data dependency is between red and blue points. . global cell boundaries process boundaries ##### Global M2L local cell boundaries Level : 0 Level : 1 Level : 2 Level : Lglobal-2 Global M2M Level : Lglobal-1 Many process in one global cell Level : Lglobal Many local cells in one process Level : Lglobal+1 ##### Local M2L Level : Lglobal+Llocal-3 Level : Lglobal+Llocal-2 Level : Lglobal+Llocal-1 Local P2P ##### rank 0 rank 1 Figure 3: Splitting of the local and global tree in FMM. |Col1|Level : Lglobal-2 Level : L-1|Col3|Col4|Col5|Col6|Col7|Col8|Col9|Col10|Col11|Global M2M| |---|---|---|---|---|---|---|---|---|---|---|---| ||Level : Lglobal-1 Ma||||||||||ny process in one global cel L l P2P| |Leve l|Level : Lglobal l : Lglobal+1 Level : Lglobal+Llocal- Level : Lglobal+L|3 local-2|Many local cells in one process Local M2 Loca||||||||| ||||||||||||| ||||||||||||| ||||||||||||| ||||||||||||| ||||||||||||| ||Level : Lglobal rank 0|+Llocal-1 rank 1|||||||||| ||||||||||||| ||||||||||||| ||||||||||||| ||||||||||||| ||||||||||||| creasingly important in HPC applications, both of FMM itself and, for instance, of hierarchically lowrank (or “rank-structured”) matrices, which are under active development in theory and software. The application of a model of demonstrated relevance to one application to an entirely different application makes a statement about the value and general applicability of the model. In this section we develop a performance model to understand the performance of the communication in FMM through a phase-byphase analysis based on four principal phases. We start with a baseline model that is a combina tion of the latency and inverse bandwidth. We subsequently refine this baseline model to reach a more realistic model that is able to cover the relevant system architecture properties, with the exception that overlapping communication with computation is not considered in this work. #### 5.1 FMM Communication Phases As shown in Figure 3, our FMM uses a separate tree structure for the local and global tree. In order to construct a performance model for the communication in 5 ----- (a) Level=7 (b) Level=6 (c) Level=5 (d) Level=4 (e) Level=3 (f) Level=2 Figure 4: Heat maps for level-by-level communication patterns for the M2L phase of an FMM with N=62,500 per process using 128 processes. Areas of black indicate zero messages between processes, the peak communication volume is represented in red. In this example, the switch between global and local trees is between Level 3 and Level 4. 
FMM, we estimate the amount of data that must be sent at each level of the hierarchy. Table 1 shows the number of cells that are sent, which correspond to the illustrations in Figure 3. Lglobal is the depth of the global tree, Llocal is the depth of the local tree. We define N as the global number of particles, and P as the number of processes (MPI ranks). The global tree is constructed so that each MPI process is a leaf node in the global tree. Therefore, the depth of the global tree only depends on the number of processes _P and not N_ . The depth of the global tree grows with log8 P, whereas the depth of the local tree grows with log8(N/P ). For the current calculations we are assuming a nearly uniform particle distribution (as in explicit solvent molecular dynamics) and therefore a full octree structure. **5.1.1** **Global M2L** In Table 1 we show the number of cells to send per level and the total amount of communication for all levels. There are four types of communication in our FMM, which correspond to the four stages shown with the red hatching in Figure 3. The first is the “Global M2L” communication, which sends 26 8 _×_ cells at each level, as shown at the top right of Figure 3. The green lines are the process boundaries and the blue lines are the cell boundaries, which means one FMM cell belongs to many processes in the global tree. In order to avoid redundant communication, we index each process that shares a global cell and perform a one-to-one communication between the processes with matching indices only. In order to further reduce the communication, we select one process for a group of eight cells to do the communication. There 6 ----- fore, the number of processes to communicate with (pi) is always 26 and the number of cells to send is always 8 for every process and for every level in the global tree. In other words, for the “Global M2L” communication the message size and number of sends is constant regardless of N and P, and only the number of hops between the processes will increase depending on the network topology. On torus networks, we map the MPI ranks to the torus and synchronize the direction of the 26 one-to-one communications. The communication per level is (1) and the number _O_ of levels in the global tree is (log P ), so the total _O_ communication complexity for this stage is (log P ) _O_ as shown in Table 1. **5.1.2** **Global M2M** The second type of communication is the “Global M2M”, which sends 7 cells at each level, as shown in Figure 3. We use a similar technique to the “Global M2L” case to avoid redundant communication by pairing the MPI ranks for the one-to-one communication when many processes share the same global cell. The number of processes to communicate with is always seven and the number of cells to send is always one for every process and for every level in the global tree. Similar to the “Global M2L” case, only the number of hops during the one-to-one communication will increase, and the rate depends on the network topology. The communication per level is (1) and the _O_ number of levels is (log P ), so the total communi_O_ cation is (log P ) for the “Global M2M” stage. _O_ **5.1.3** **Local M2L** The third type of communication is the “Local M2L”, which is shown in the red hatching in the second picture from the bottom on the right side of Figure 3. 
The process boundaries shown in green are coarser than the local cell boundaries shown in black, which means that one process contains many cells, in contrast to the previous two communication types. In a full octree structure, we know that all cells are nonempty so we simply need to send two layers of halo cells for the M2L calculation at each level, as shown in Figure 3. Therefore, the number of processes to Table 1: Amount of communication in FMM Cells to send / level Total comm. Global M2L 26 8 (log P ) _×_ _O_ Global M2M 7 (log P ) _O_ Local M2L (2[i] + 4)[3] 8[i] ((N/P )[2][/][3]) _−_ _O_ Local P2P (2[i] + 2)[3] 8[i] ((N/P )[2][/][3]) _−_ _O_ communicate with is always the 26 neighbors, and the number of cells to send depends on the level. At level i of the local tree, there are 2[i] cells in each direction. Two layers of halo cells on each size will create a volume of (2[i] +4)[3] cells, and subtracting the center volume 8[i] will give (2[i] + 4)[3] 8[i] as shown in Table 1. _−_ The leading term is (4[i]) since the 8[i] term cancels _O_ out. Since the number of levels in the local tree grow as log8(N/P ) the communication complexity for the “Local M2L” is (4[log][8][(][N/P][ )]) = ((N/P )[2][/][3]). This _O_ _O_ can also be understood as the surface to volume ratio of the bottom two illustrations in Figure 3. Since _N/P is constant for weak scaling and decreases for_ strong scaling, this part does not affect the asymptotic weak/strong scalability of the FMM. **5.1.4** **Local P2P** The fourth type of communication in the FMM is the “Local P2P”, which is shown in the bottom picture on the right side of Figure 3. This communication only happens at the bottom level of the local tree. Similar analysis to the “Local M2L” stage shows that (2[i] + 2)[3] 8[i] cells must be sent, as shown in Table 1. _−_ In this case, i is exactly log8(N/P ) and we obtain the same asymptotic amount of communication of ((N/P )[2][/][3]). Similar to the “Local M2L”, this part _O_ does not affect the asymptotic weak/strong scalability of the FMM. However, the content of the data is different from the previous three cases where the multipole expansion coefficients were being sent. In the P2P communication the coordinates and the charges of every particle that belongs to the cell must be sent. Therefore, the asymptotic constant of (N/P )[2][/][3] is _O_ typically much larger than that of the “Local M2L”, and this could be the dominant part of the communication time depending on the number of particles per leaf cell. #### 5.2 Baseline Model ((α, β) model) To model interprocess communication, we begin with the basic (α, β) model, where α represents communication latency, where β is the send time per-Byte (inverse bandwidth). Using the basic model, a message send cost can be represented as _Tα−β = α + nβ_ (2) where n is the number of Bytes in the message. This basic model describes the communication over an ideal architecture where the communication cost does not depend on processor locations or network traffic caused by many processors communicating at the same time [9]. For a more realistic architecture, 7 ----- a more detailed model is needed. For this reason, we add penalties to this basic model to take into account machine-specific performance issues. In particular, we consider communication distance, interconnection switching delay, limited bandwidth, and the effect of multiple cores on a single node contending for available resources. 
#### 5.3 Distance Penalty ((α, β, γ) Model)

Following [11], we refine the assumption that distance between processors in interconnected networks has no effect on communication time. To take into account the effect of distance, we refine the baseline model according to the number of extra hops a message travels:

T_{α-β-γ} = α + nβ + (h − h_m)γ,    (3)

where h is the number of hops a message travels, h_m is the smallest possible number of hops a message can travel in the network, and γ is the delay per extra hop. If there is no network contention and all messages travel with the minimum number of hops, this distance penalty should have no effect.

#### 5.4 Bandwidth Penalty on β

The peak hardware bandwidth is rarely achieved in message passing. Therefore, we multiply β by B_max/B to incorporate the ratio between the peak hardware per-node bandwidth B_max and the effective bandwidth from the benchmark B.

T_{β-penalty} = α + nβ(B_max/B) + (h − h_m)γ    (4)

#### 5.5 Multicore Penalty on α or γ

Increasing the number of cores per node increases the data traffic between nodes, and could potentially result in congestion. Furthermore, a larger number of cores per node introduces more noise caused by access to resources shared by multiple cores. To model these effects, we multiply α and/or γ by the number of active cores per node c. This model focuses on the worst-case behavior, where a machine’s aggregate bandwidth could be exceeded by all cores communicating simultaneously. The resulting models are

T_{α-penalty} = cα + nβ + (h − h_m)γ    (5)

T_{γ-penalty} = α + nβ + c(h − h_m)γ    (6)

### 6 Model Validation

#### 6.1 Machine Description

To validate our performance models, we benchmark our FMM code on four different architectures: Shaheen, Mira, Titan, and Piz Dora.

**Shaheen** is 16 racks of an IBM BlueGene/P. Each rack contains 1024 PowerPC 450 CPUs with 4 cores running at 850 MHz with 32 kB private L1 cache and 8 MB shared L3 cache. Each compute node has 2 GB RAM with 13.6 GB/s memory bandwidth. The nodes are connected by a 3-D torus network with 5.1 GB/s injection bandwidth per node.

**Mira** is 48 racks of an IBM BlueGene/Q. Each rack contains 1024 Power A2 CPUs with 16 + 1 cores running at 1.6 GHz with 16 kB private L1 cache and 32 MB shared L2 cache. Each compute node has 16 GB RAM with 42.6 GB/s memory bandwidth. The nodes are connected by a 5-D torus network with 20 GB/s injection bandwidth per node.

**Titan** is a Cray XK7 system with 18,688 compute nodes, each equipped with an AMD Opteron 6274 CPU and an NVIDIA Kepler K20X GPU. The CPU has 16 cores running at 2.2 GHz with 16 kB L1 cache, 2 × 4 MB L2 cache, and 8 × 2 MB L3 cache. The GPU has 15 × 64 cores running at 730 MHz with 64+48 kB L1 cache and 1.5 MB L2 cache. Each compute node has 32 GB of RAM with 51.2 GB/s memory bandwidth. The nodes are connected by a 3-D torus with 20 GB/s of injection bandwidth per node. We do not use any of the GPUs in the current study.

**Piz Dora** is a Cray XC40 with 1256 compute nodes, each with two 12-core Intel Haswell CPUs (Intel Xeon E5-2690 v3). Piz Dora has a total of 30,144 cores (24 cores per node). Out of the total, 1192 nodes feature 64 GB of RAM each, while the remaining 64 compute nodes have 128 GB of RAM each (fat nodes). The nodes are connected by a dragonfly network using the Aries interconnect, where the routers in each group are arranged as rows and columns of a rectangle, with all-to-all links across each row and column but not diagonally.
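Before turning to the measured machine parameters, Eqs. (2) through (6) can be collected into a single predictor. The sketch below is our own shorthand, not the authors' code; the flag names and defaults are assumptions made for illustration.

```python
# Sketch collecting Eqs. (2)-(6) into one function; flag and parameter names
# are our own shorthand, not taken from the paper's implementation.

def predict_send_time(n, alpha, beta, gamma, h, h_min,
                      b_max=None, b_eff=None, cores=1,
                      alpha_penalty=False, gamma_penalty=False):
    """Predicted time to send an n-Byte message over (h - h_min) extra hops."""
    a = alpha * (cores if alpha_penalty else 1)               # Eq. (5)
    b = beta * ((b_max / b_eff) if (b_max and b_eff) else 1)  # Eq. (4)
    g = gamma * (cores if gamma_penalty else 1)               # Eq. (6)
    return a + n * b + (h - h_min) * g                        # Eqs. (2)-(3)
```

With all flags off, h = h_min, and no bandwidth correction supplied, this reduces to the baseline (α, β) model of Eq. (2).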
In order to obtain the machine parameters, the `b_eff` benchmark in the HPC Challenge suite [22] was used to determine the parameters α and β. We report the best-case latency and bandwidth measurements. To find the parameter γ, we followed the same procedure as Gahvari et al. [11]. The machine parameters for Shaheen, Mira, Titan, and Piz Dora are shown in Table 2. Note that our β is defined as send time per Byte, whereas Gahvari et al. define their β as send time per element (8 Bytes).

Table 2: Machine parameters for latency α, inverse bandwidth β, and distance penalty γ, on Shaheen, Mira, Titan, and Piz Dora.

| | Shaheen | Mira | Titan | Piz Dora |
|---|---|---|---|---|
| α | 4.12 µs | 5.33 µs | 1.67 µs | 0.457 µs |
| β | 2.14 ns | 1.32 ns | 1.62 ns | 0.4054 ns |
| γ | 29.9 ns | 134 ns | 284 ns | 0.4838 µs |

#### 6.2 Experimental Setup

We ran the FMM code for 10 steps and measured the time spent on the communication for the “Global M2L” and “Local M2L” phases. The results are then divided by 10 to get the average time spent at each level. The “Global M2M” phase was negligible and the “Local P2P” phase only occurs at the bottom level and is irrelevant to the scalability of the FMM, so we do not consider these two phases in the current analysis. We used the Laplace kernel in three dimensions with a random distribution of particles in a cube. We use periodic boundary conditions so that there is no load imbalance at the edges of the domain. The number of MPI processes was varied between P = {128, 1024, 8192}, while the number of particles per process was kept constant at N/P = 62,500. On all machines we used the maximum number of cores on each node before increasing the number of nodes. Timings were measured with `gettimeofday()` after an `MPI_Barrier()` call. We used the default rank mapping to the nodes that the system provides.

Table 3 shows communication information and statistics when running the FMM on 128, 1024, and 8192 processes. “Level” is the level within the tree structure and goes from 0 to L_global + L_local − 1, where L_local = 4 for N/P = 62,500. Therefore, the bottom four levels in Table 3 (a), (b), and (c) belong to the local tree. The depth of the global tree L_global is 4, 5, and 6 for 128, 1024, and 8192 processes, respectively. “Cells” is the total number of cells at that level of the tree structure, which is simply 8^Level for a full octree. “Sends” is the number of processes to which a given process sends. As mentioned in Section 5.1, we have developed a communication scheme that limits the number of sends to 26 regardless of the problem size, number of processes, or the level. “Bytes” is the aggregate data size that is sent by a given process at each level of the tree. As shown in Table 1, the number of cells for the “Global M2L” communication is 26 × 8. For each cell we are sending 56 multipole expansion coefficients in single precision (4 Bytes). Therefore, the total number of Bytes for the “Global M2L” phase is 26 × 8 × 56 × 4 = 46592.

Table 3: Statistics of the M2L communication.
(a) 128 Processes

| Level | Cells | Sends | Bytes |
|---|---|---|---|
| 0 | 1 | 0 | 0 |
| 1 | 8 | 0 | 0 |
| 2 | 64 | 26 | 46592 |
| 3 | 512 | 26 | 46592 |
| 4 | 4096 | 26 | 46592 |
| 5 | 32768 | 26 | 100352 |
| 6 | 262144 | 26 | 272384 |
| 7 | 2097152 | 26 | 874496 |

(b) 1024 Processes

| Level | Cells | Sends | Bytes |
|---|---|---|---|
| 0 | 1 | 0 | 0 |
| 1 | 8 | 0 | 0 |
| 2 | 64 | 26 | 46592 |
| 3 | 512 | 26 | 46592 |
| 4 | 4096 | 26 | 46592 |
| 5 | 32768 | 26 | 46592 |
| 6 | 262144 | 26 | 100352 |
| 7 | 2097152 | 26 | 272384 |
| 8 | 16777216 | 26 | 874496 |

(c) 8192 Processes

| Level | Cells | Sends | Bytes |
|---|---|---|---|
| 0 | 1 | 0 | 0 |
| 1 | 8 | 0 | 0 |
| 2 | 64 | 26 | 46592 |
| 3 | 512 | 26 | 46592 |
| 4 | 4096 | 26 | 46592 |
| 5 | 32768 | 26 | 46592 |
| 6 | 262144 | 26 | 46592 |
| 7 | 2097152 | 26 | 100352 |
| 8 | 16777216 | 26 | 272384 |
| 9 | 134217728 | 26 | 874496 |

We can see from Table 1 that the amount of cells involved in the “Local M2L” communication can be calculated by (2^i + 4)^3 − 8^i, where i is the level in the local tree (not the “Level” shown in Table 3). For example, for level one in the local tree, the amount of cells will be (2^1 + 4)^3 − 8^1, which is equivalent to 26 × 8. This is why the “Bytes” is the same for the “Global M2L” and the first level of the “Local M2L” in Table 3.

#### 6.3 Model Validation

We compare the actual communication time for the M2L communication with our performance model on Shaheen, Mira, Titan, and Piz Dora. We compare against the same combination of models as in the multigrid study [11]. The combinations are:

1. Baseline model (α-β model)
2. With distance penalty (α-β-γ model)
3. With distance and bandwidth penalty (β penalty)
4. With distance and bandwidth penalty, plus multicore penalty on latency (α, β penalty)
5. With distance and bandwidth penalty, plus multicore penalty on distance (β, γ penalty)
6. With distance and bandwidth penalty, plus multicore penalty on latency and distance (α, β, γ penalty)
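Each prediction is obtained by plugging the per-level message sizes of Table 3 and the machine parameters of Table 2 into these model variants. As a concrete cross-check, the "Bytes" column of Table 3 follows directly from Table 1 and the 56 single-precision coefficients per cell; the short sketch below is ours, with an assumed back-of-envelope baseline estimate that ignores all penalties.

```python
# Cross-check (ours) of the "Bytes" column in Table 3: each cell carries
# 56 single-precision multipole coefficients, i.e. 56 * 4 = 224 Bytes.

BYTES_PER_CELL = 56 * 4

def local_m2l_bytes(i):                    # i = level in the local tree
    return ((2**i + 4)**3 - 8**i) * BYTES_PER_CELL

print(26 * 8 * BYTES_PER_CELL)             # Global M2L levels: 46592
print([local_m2l_bytes(i) for i in range(1, 5)])
# -> [46592, 100352, 272384, 874496], matching Table 3

# Rough baseline estimate for one Global M2L partner message on Shaheen
# (8 cells to one of the 26 partners; all penalties ignored here):
alpha, beta = 4.12e-6, 2.14e-9             # Table 2
n = 8 * BYTES_PER_CELL                     # 1792 Bytes per partner
print(alpha + n * beta)                    # about 8 microseconds
```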
The results on Shaheen are shown in Figure 5. The actual measured performance is shown as a black line, where an error bar is drawn according to the standard deviation in communication time among the different MPI ranks. By comparing the Bytes in Table 3 with the communication time in Figure 5, we see that the deepest four levels, which belong to the “Local M2L” phase, have a communication time that is proportional to the data size being sent. The main discrepancy in the models is caused by the β penalty, which accounts for the ratio between the theoretical injection bandwidth and the `b_eff` benchmark results. The actual communication time agrees well with the models with α, β, and γ penalties. For the shallow levels that belong to the “Global M2L” phase, the communication time increases as the level decreases/coarsens. Here, and in Figures 7, 8, and 9 to follow, the “Global M2L” levels are 3 in part (a), 3 and 4 in part (b), and 3, 4, and 5 in part (c). The reason for the increase can be understood by looking back at Figure 3, where the “Global M2L” is communicating with farther processes at coarser levels of the tree. Since we are mapping the geometric partitioning of the octree to the 3-D torus network of Shaheen, the proximity in the octree directly translates to proximity in the network. Therefore, even though the data size is constant for all levels in the “Global M2L” phase, the number of hops is larger, which accounts for switching delays and also network contention to some extent. This increases the communication time at coarser levels, and the models that incorporate γ are able to predict this behavior.

Figure 5: Performance model prediction and actual time for the M2L communication phase on Shaheen, for (a) 128, (b) 1024, and (c) 8192 processes.

In Figure 6, the M2L communication time on Shaheen is plotted against the MPI rank to show the load balance between the processes. Each color shows M2L communication at a different level of the tree structure, and the numbers in the legend represent the levels. The communication time of each level is stacked on top of each other, so that the total height of the area plot represents the total M2L communication time shown in Figure 5. The MPI ranks are sorted according to the total M2L communication time for better visibility of the small differences between processes. As can be seen from the figure, the load balance is quite good. The imbalance seems to come from the finest levels, which are 7, 8, and 9 for 128, 1024, and 8192 processes, respectively.

Figure 6: Load balance of the M2L communication phase on Shaheen, for (a) 128, (b) 1024, and (c) 8192 processes.

The M2L communication time on Mira is plotted along with the six model predictions in Figure 7. Similarly to the runs on Shaheen, the main difference in the model predictions is caused by the β penalty. We also see a discrepancy between the model predictions with and without the α penalty for the “Global M2L” phase (coarser levels). The multicore penalty is very small on the BlueGene/Q. This lack of multicore penalty has been observed in other applications, where the use of a hybrid OpenMP+MPI approach did not improve the performance over a flat MPI approach [21]. Contrary to the runs on Shaheen, the communication time has a nearly flat profile for the “Global M2L” phase. This is because the 5-D torus network minimizes the number of hops and network contention, so the degradation at coarse levels of the tree is minimal. Far nodes in the octree are not so far in the BlueGene/Q network topology.

Figure 8 shows the M2L communication time on Titan along with the six model predictions. Similarly to the previous two cases, the difference between the model predictions is mainly due to the correction for the inverse bandwidth. This difference between the theoretical injection bandwidth and the measured effective bandwidth seems to have the largest effect on all three architectures. What is different from the previous two cases is the large jump in the actual communication time for the “Global M2L” phase.
For example, for the 8192-process run, level 5 takes about 10 times more than level 6, even though the message size is 46592 Bytes for both cases. The γ term in the current performance models anticipates such behavior. The error bars in the actual timings are quite large, which indicates that there is a large load imbalance compared to the previous two systems. The concave-convex switch at level 5 in Figure 8(b) is not well predicted by the models, but the more refined models do pick it up at level 6 in Figure 8(c). Though a good match between the measurements and simple models is not realized for M2L at all granularities on Titan, performance trends are generally well predicted.

Figure 7: Performance model prediction and actual time for the M2L communication phase on Mira, for (a) 128, (b) 1024, and (c) 8192 processes.

Figure 8: Performance model prediction and actual time for the M2L communication phase on Titan, for (a) 128, (b) 1024, and (c) 8192 processes.

The M2L communication time on Piz Dora is plotted along with the six model predictions in Figure 9. In the case of 128 processes, the best-fitting model is the baseline model plus only the distance penalty. Increasing the number of processes increases the possibility of contention and makes the model with all penalties the best-fitting model. Similar to the runs on Titan, there is a large jump in the actual communication time for the “Global M2L” phase, with even worse load balancing suggested by the large error bars. The performance model is able to predict the poor performance at the coarse levels.

Figure 9: Performance model prediction and actual time for the M2L communication phase on Piz Dora, for (a) 128, (b) 1024, and (c) 8192 processes.

### 7 Conclusion

The goal of this work is to model the global communication of the FMM, to be able to anticipate challenges on future exascale machines. To improve model fidelity, we consider penalties based on machine constraints, including distance effects, reduced per-core bandwidth, and the number of cores per node. We observe a good match between the (α, β, γ) model with multicore penalties and the actual communication time. The discrepancy between the other models means that all components of the model (latency α, bandwidth β, hops γ, and the multicore penalty) must be taken into account when predicting the communication performance of FMM. In our benchmark tests, we compare the performance models with measurements for the M2L communication, since this is the dominant part of the FMM communication. Our observations are consistent with those of the studies by Gahvari et al. [11], where the performance of an algebraic multigrid method is analyzed using the same model.
The measurements fall within the bounds of the performance models, and match best with the model where latency, bandwidth, hops, and the multicore penalty are all taken into account. The present communication model is able to predict the performance on four HPC systems possessing different characteristics. To our knowledge, this is the first formal characterization of inter-node communication in FMM which validates the model against actual measurements of communication time. Furthermore, the FMM implementation considered in this paper has a provably best theoretical communication complexity among FMM algorithms [32], so demonstrations for other implementations may be less relevant in practice. Our current FMM code does not support asynchronous data transfer, so we are not able to provide a reference implementation for the performance model that includes asynchronous data transfers. The ultimate communication model is predictive in an absolute sense; however, on complex systems, this objective is often out of reach, or of a difficulty out of proportion to its benefit when there exists a simpler model that is inexpensive and sufficient to guide coding decisions leading to improved scaling. The current model provides such guidance.

Looking into the future, we will most likely be seeing more network topologies with larger diameter (more hops). Large-radix networks seem to be the current trend, but with the exponential increase in the node count, the increase of the network diameter is unavoidable. Our communication model with the distance penalty is able to capture the increase in communication time at the coarse levels of the FMM communication on Titan’s torus network. This should allow predicting the communication bottlenecks on future networks with larger diameter. The performance model herein is applicable to evolving heterogeneous systems, such as GPUs or Xeon Phis. This is because the accelerators and coprocessors affect the per-node computation but not the inter-node communication. Nor is the model affected by the on-node computational performance of FMM, as long as the accelerators and coprocessors are not using more than one MPI process, which is the optimal way to use the current generation of such hardware.

### Acknowledgements

We acknowledge system access and the generous assistance of the staffs at four facilities for the performance tests herein: the KAUST Supercomputing Laboratory; the Argonne Leadership Computing Facility at Argonne National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under contract DE-AC02-06CH11357; the Oak Ridge Leadership Computing Facility at Oak Ridge National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC05-00OR22725; and the Swiss National Supercomputing Centre (CSCS), under project ID g81.

### Author Biographies

_Huda Ibeid_ received her BSc degree in Computer Engineering from the University of Jordan and is currently a PhD candidate in Computer Science at the King Abdullah University of Science and Technology (KAUST). Her research interests include fast algorithms for particle-based simulations, fast algorithms on parallel computers and GPUs, design of parallel numerical algorithms, parallel programming models, and performance optimizations for heterogeneous GPU-based systems.

_Rio Yokota_ obtained his PhD from Keio University, Japan, in 2009 and worked as a postdoctoral researcher with Prof.
Lorena Barba at the University of Bristol and then Boston University. He has worked on the implementation of fast N-body algorithms on special-purpose machines such as MDGRAPE-3, then on GPUs after CUDA was released, and on vortex methods for fluid simulation. He joined the King Abdullah University of Science and Technology (KAUST) as a research scientist, where he continued to work on fast multipole methods. He is now at the Tokyo Institute of Technology as an Associate Professor.

_David Keyes_ is the director of the Extreme Computing Research Center at KAUST and an Adjunct Professor of Applied Mathematics at Columbia University. Keyes graduated in Aerospace and Mechanical Sciences from Princeton University and earned a doctorate in Applied Mathematics from Harvard University. He did postdoctoral work in the Computer Science Department of Yale University. He works at the algorithmic interface between parallel computing and the numerical analysis of partial differential equations, across a spectrum of aerodynamic, geophysical, and chemically reacting flows.

### References

[1] J. Barnes and P. Hut. A hierarchical O(N log N) force-calculation algorithm. Nature, 324:446–449, 1986.

[2] R. Beatson and L. Greengard. A short course on fast multipole methods. In Wavelets, Multilevel Methods and Elliptic PDEs, pages 1–37. Oxford Science Publications, 1997.

[3] A. Chandramowlishwaran, K. Madduri, and R. Vuduc. Diagnosis, tuning, and redesign for multicore performance: A case study of the fast multipole method. In SC ’10 Proceedings of the 2010 ACM/IEEE International Conference for High Performance Computing, Networking, Storage and Analysis, 2010.

[4] A. Chandramowlishwaran, S. Williams, L. Oliker, I. Lashuk, G. Biros, and R. Vuduc. Optimizing and tuning the fast multipole method for state-of-the-art multicore architectures. In Proceedings of the International Parallel Distributed Processing Symposium (IPDPS), pages 1–12, 2010.

[5] H. Cheng, L. Greengard, and V. Rokhlin. A fast adaptive multipole algorithm in three dimensions. Journal of Computational Physics, 155(2):468–498, 1999.

[6] M. J. Clement and M. J. Quinn. Symbolic performance prediction of scalable parallel programs. In Proceedings of the International Parallel Processing Symposium, pages 635–639, April 1995.

[7] L. DeRose and D. A. Reed. SvPablo: A multi-language, architecture-independent performance analysis system. In Proceedings of the International Conference on Parallel Processing, pages 311–318, August 1999.

[8] J. Dongarra and F. Sullivan. Guest editors' introduction: The top 10 algorithms. Computing in Science and Engineering, 2:22–23, 2000.

[9] I. Foster. Designing and Building Parallel Programs. Addison-Wesley, 1995.

[10] I. T. Foster and P. H. Worley. Parallel algorithms for the spectral transform method. SIAM Journal on Scientific and Statistical Computing, 18(3):806–837, 1997.

[11] H. Gahvari, A. H. Baker, M. Schulz, U. M. Yang, K. E. Jordan, and W. Gropp. Modeling the performance of an algebraic multigrid cycle on HPC platforms. In ICS ’11 Proceedings of the International Conference on Supercomputing, pages 172–181, 2011.

[12] H. Gahvari, W. Gropp, K. E. Jordan, M. Schulz, and U. M. Yang. Algebraic multigrid on a dragonfly network: First experience on a Cray XC30. In Proceedings of the 5th International Workshop on Performance Modeling, Benchmarking and Simulation of High Performance Computer Systems (PMBS14), November 2014.

[13] N. L. Gorn and D. V. Berkov.
Adaptation and performance of the fast multipole method for dipolar systems. Journal of Magnetism and Magnetic Materials, 272–276:698–700, 2004.

[14] L. Greengard, M. C. Kropinski, and A. Mayo. Integral equation methods for Stokes flow and isotropic elasticity in the plane. Journal of Computational Physics, 125:403–414, 1996.

[15] L. Greengard and V. Rokhlin. A fast algorithm for particle simulations. Journal of Computational Physics, 73(2):325–348, 1987.

[16] L. Greengard and V. Rokhlin. On the efficient implementation of the fast multipole algorithm. Research Report RR-602, Yale University, 1988.

[17] W. D. Gropp, D. K. Kaushik, D. E. Keyes, and B. F. Smith. Toward realistic performance bounds for implicit CFD codes. In Proceedings of Parallel CFD’99, pages 23–26, May 1999.

[18] P. Jetley, L. Wesolowski, F. Gioachin, L. V. Kale, and T. R. Quinn. Scaling hierarchical N-body simulations on GPU clusters. In SC ’10 Proceedings of the 2010 ACM/IEEE International Conference for High Performance Computing, Networking, Storage and Analysis, pages 1–11, 2010.

[19] D. Kerbyson, H. Alme, A. Hoisie, F. Petrini, A. Wasserman, and M. Gittings. Predictive performance and scalability modeling of a large-scale application. In Proceedings of the 2001 ACM/IEEE Conference on Supercomputing, pages 1–12, 2001.

[20] I. Lashuk, A. Chandramowlishwaran, H. Langston, T.-A. Nguyen, R. Sampath, A. Shringarpure, R. Vuduc, L. Ying, D. Zorin, and G. Biros. A massively parallel adaptive fast multipole method on heterogeneous architectures. In Proceedings of the Conference on High Performance Computing Networking, Storage and Analysis, pages 1–12, 2009.

[21] M. Lee, N. Malaya, and R. D. Moser. Petascale direct numerical simulation of turbulent channel flow on up to 768k cores. In Proceedings of the Conference on High Performance Computing Networking, Storage and Analysis, Denver, CO, USA, November 16–22, 2013.

[22] P. Luszczek and J. Dongarra. Introduction to the HPC Challenge Benchmark Suite. Technical Report ICL-UT-05-01, University of Tennessee, Knoxville, March 2005.

[23] C. L. Mendes. Performance Scalability Prediction on Multicomputers. PhD thesis, University of Illinois, Urbana-Champaign, May 1997.

[24] C. L. Mendes and D. A. Reed. Integrated compilation and scalability analysis for parallel systems. International Conference on Parallel Architectures and Compilation Techniques (PACT’98), pages 385–392, October 1998.

[25] J. M. Perez-Jorda and W. Yang. On the scaling of multipole methods for particle-particle interactions. Chemical Physics Letters, 282:71–78, 1998.

[26] W. T. Rankin. Efficient Parallel Implementations of Multipole Based N-body Algorithm. PhD thesis, Duke University, 1999.

[27] A. Snavely, N. Wolter, and L. Carrington. Modeling application performance by convolving machine signatures with application profiles. In Proceedings of the IEEE Workshop on Workload Characterization, pages 149–156, December 2001.

[28] B. Van de Wiele, F. Olyslager, and L. Dupre. Application of the fast multipole method for the evaluation of magnetostatic fields in micromagnetic computations. Journal of Computational Physics, 227:9913–9932, 2008.

[29] W. R. Wolf and S. K. Lele. Aeroacoustic integrals accelerated by fast multipole method. AIAA Journal, 49(7):1466–1477, 2011.

[30] P. H. Worley. Performance evaluation of the IBM SP and the Compaq AlphaServer SC. In Proceedings of the ACM International Conference on Supercomputing 2000, pages 235–244, 2000.

[31] R. Yokota, J.
Pestana, H. Ibeid, and D. E. Keyes. Fast multipole preconditioners for sparse matrices arising from elliptic equations. arXiv:1308.3339v2, 2014.

[32] R. Yokota, G. Turkiyyah, and D. Keyes. Communication complexity of the fast multipole method and its algebraic variants. Supercomputing Frontiers and Innovations, 1(1):63–84, 2014.

[33] J.-S. Zhao and W.-C. Chew. Three-dimensional multilevel fast multipole algorithm from static to electrodynamic. Microwave and Optical Technology Letters, 26(1):43–48, 2000.
{ "disclaimer": "Notice: The following paper fields have been elided by the publisher: {'abstract', 'references'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1177/1094342016634819?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1177/1094342016634819, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "GREEN", "url": "http://t2r2.star.titech.ac.jp/rrws/file/CTT100718789/ATD100000413/" }
2,016
[ "JournalArticle" ]
true
2016-11-01T00:00:00
[]
15,030
en
[ { "category": "Medicine", "source": "external" }, { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Medicine", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/0276e32e1440fee9e433f45192e8ccd7f31e3a56
[ "Medicine", "Computer Science" ]
0.892612
Platform for Efficient Switching between Multiple Devices in the Intensive Care Unit
0276e32e1440fee9e433f45192e8ccd7f31e3a56
Methods of Information in Medicine
[ { "authorId": "7239058", "name": "F. D. Backere" }, { "authorId": "48609947", "name": "Thomas Vanhove" }, { "authorId": "2255655569", "name": "E. Dejonghe" }, { "authorId": "2255655390", "name": "M. Feys" }, { "authorId": "2255684665", "name": "T. Herinckx" }, { "authorId": "2255618571", "name": "J. Vankelecom" }, { "authorId": "145320688", "name": "J. Decruyenaere" }, { "authorId": "2250414779", "name": "F. D. Turck" } ]
{ "alternate_issns": null, "alternate_names": [ "Method Inf Med" ], "alternate_urls": [ "http://www.schattauer.de/de/magazine/uebersicht/zeitschriften-a-z/methods.html" ], "id": "95f5bdab-1f05-4090-899f-3869a15b5707", "issn": "0026-1270", "name": "Methods of Information in Medicine", "type": "journal", "url": "https://methods.schattauer.de/en/contents/methods-open.html" }
null
# Platform for Efficient Switching between Multiple Devices in the Intensive Care Unit

Femke De Backere[1], Thomas Vanhove[1], Emanuel Dejonghe[1], Matthias Feys[1], Tim Herinckx[1], Jeroen Vankelecom[1], Johan Decruyenaere[2] and Filip De Turck[1]

1 Information Technology Department (INTEC), Ghent University - iMinds, Gaston Crommenlaan 8, bus 201, 9050 Ghent, Belgium
2 Department of Intensive Care, Ghent University Hospital, De Pintelaan 185, B-9000 Gent, Belgium

# SUMMARY

_Objectives:_ Handheld computers, such as tablets and smartphones, are becoming more and more accessible in the clinical care setting and in Intensive Care Units (ICUs). By making the most useful and appropriate data available on multiple devices and facilitating the switching between those devices, staff members can efficiently integrate them in their workflow, allowing for faster and more accurate decisions. This paper addresses the design of a platform for the efficient switching between multiple devices in the ICU. The key functionalities of the platform are the integration of the platform into the workflow of the medical staff and providing tailored and dynamic information at the point of care.

_Methods:_ The platform is designed based on a 3-tier architecture with a focus on extensibility, scalability and an optimal user experience. After identification to a device using Near Field Communication (NFC), the appropriate medical information will be shown on the selected device. The visualization of the data is adapted to the type of the device. A web-centric approach was used to enable extensibility and portability.

_Results:_ A prototype of the platform was thoroughly evaluated. The scalability, performance and user experience were evaluated. Performance tests show that the response time of the system scales linearly with the amount of data. Measurements with up to 20 devices have shown no performance loss due to the concurrent use of multiple devices.

_Conclusions:_ The platform provides a scalable and responsive solution to enable the efficient switching between multiple devices. Due to the web-centric approach, new devices can easily be integrated. The performance and scalability of the platform have been evaluated, and it was shown that the response time and scalability of the platform were within an acceptable range.

# MESH TERMS:

Decision Making, Computer-Assisted; Decision Support Systems, Clinical/organization & administration; Intensive Care Units; Information Systems; User-Computer Interface

# CORRESPONDENCE TO:

Femke De Backere
Department of Information Technology
Internet Based Communication Networks and Services (IBCN)
Ghent University - iMinds
Gaston Crommenlaan 8 (Bus 201), B-9050 Gent, Belgium
T: +32 9 33 14938
F: +32 9 33 14899
E: femke.debackere@intec.UGent.be

# 1 INTRODUCTION

Handheld computers, such as tablets and smartphones, are becoming more and more popular, even in the clinical care setting [1][2][3]. Moreover, with their increasing memory capabilities, processing power and connectivity, these devices can offer a portable platform for patient management in the Intensive Care Unit (ICU) [4]. Furthermore, in a computerized ICU, a computer is located next to every bed. Each department also has a unit PC, and physicians usually have a personal desktop and smartphone. Moreover, the number of devices in an ICU has been steadily increasing in recent years [5]. Therefore, there is a need for an efficient switching mechanism between the different devices used in the ICU, to ensure quality of care.
These devices have the capabilities and potential to be integrated within existing clinical decision support systems (CDSS). CDSS are computer-driven technology solutions, developed to provide support to physicians, nurses and patients using medical knowledge and patient-specific information. Thus, these systems will not replace the medical staff, but will merely give advice and guidance. This way, they are able to take all relevant data and information into account. By filtering the information in an intelligent manner and presenting it to the medical staff at the appropriate moment and in an intelligent way, these systems can improve health care [6]. CDSS can be used in every aspect of the care process, from preventive care and diagnosis to monitoring and follow-up. Studies have already shown that these systems improve the quality, safety and effectiveness of medical decisions, resulting in improved patient care, higher performance of the medical staff and more effective clinical services [7].

Nevertheless, the uptake of CDSS is rather low, and this is due to a number of factors [8]. First, one of the main problems in the use of CDSS is the integration of applications into the current workflow of the medical staff [9]. Kawamoto et al. [10] concluded that CDSS are more successful when integrated into the work process of the medical staff. This also means integration with existing information systems of the hospital [11]. Second, the devices used in the ICU are not optimally embedded within CDSS. Sharing information at the right time and place has a large influence on the use of these systems and on the performance of the medical staff; moreover, it is time-saving [12]. A third problem is representing all the relevant information of a specific patient. In the ICU, up to 200,000 parameters are collected for each patient on a daily basis [13][14]. These parameters mainly originate from examinations and from monitoring data. Visualizing this data in an optimal way and selecting only the most relevant information is a challenging task [15].

Due to these problems and the necessity to ensure continuity of healthcare services, improve patient quality of life and rationalize healthcare costs, new pervasive healthcare systems [16] are being explored [17][18]. The research on clinical decision support has evolved over 50 years [19], resulting in new approaches such as pervasive and ubiquitous healthcare [20][21]. The medical staff in a clinical setting has very diverse tasks and the work is highly fragmented [22][23]: on average, they do not spend more than 5 minutes on a specific task, and in many cases only spend 1.5 minutes on a specific activity before switching to another task. This means that personnel has to be highly adaptive and should be able to cope with an ever-changing environment and continually adjust their activities [24]. In an intensive care setting, there is a wide variety of systems that are integrated in the workflow of the doctors and the nurses. This means that during an activity they take the readings and/or measurements from these systems into account to assess the situation of the patient, or they have to interact with different (software) systems to obtain the correct information about a specific patient [25]. As the staff only has limited time to spend on certain tasks or activities, accessing the correct infrastructure and tools can create a big overhead for the staff. The study of Koch et al.
[26] indicated that using integrated displays, where all important information is contained in one screen, could be an advantage, if bidirectional communication between different devices is implemented. Also, event recognition and treatment efficiency can be improved when using a second display [27]. A better integration of the current infrastructure and handheld devices, such as tablets and smartphones, can improve this situation. Therefore, there is a need for a platform which is capable of switching between the different devices used in the ICU.

However, there are still obstacles concerning the efficient switching between devices that should be taken into account when developing such a platform. First, user friendliness should be of paramount importance. Introducing a new tool into such a complex setting as an ICU should improve the quality of care and the workflow of the medical staff; it should support the staff in their current activities. Second, the speed of the switching mechanism is also important: as doctors and nurses only spend 5 minutes on average on a task, they do not want to wait for a few minutes while transferring the data from one device to another. Finally, the switching mechanism should be carried out in such a manner that it suggests when the situation is right to switch to another device and automatically detects other devices in the vicinity. Keeping user friendliness in mind, users should get the suggestion to switch and should not be switched to another device automatically.

The purpose of this paper is twofold. On the one hand, the design and implementation of a platform is presented, enabling doctors and other members of the medical staff to switch between multiple devices. This platform is also capable of detecting which content is suitable to be displayed on which device, e.g., text can be shown on all devices, while high-resolution images are less suitable to be displayed on smartphones. On the other hand, the performance of the platform is evaluated, to give valuable insights into the scalability, responsiveness and user experience.

The remainder of the article is structured as follows. Section 2 details the objectives of our platform, whereas Section 3 is devoted to the methodological approach. Section 4 deals with the evaluation results. Finally, the main contributions are discussed and the main conclusions of this research are highlighted in Sections 5 and 6.

# 2 OBJECTIVES

The aim of this research is to design a platform that allows for the efficient switching between devices in an Intensive Care setting. This platform should offer the following features:

- Integration in the workflow and at the point of care: to optimize the care and minimize loss of time and costs, the visualization of the data by the platform should be integrated into the workflow of the doctors and the medical staff. Moreover, the visualization should also be possible at the point of care, while examining the patients, and not only in the office of the physician.
- Displaying the appropriate medical information on the device: Based on the role and the preferences of the end-user and the properties of the device, which are provided to the platform by means of a database, the platform is capable of selecting and visualizing the information in a user friendly manner. By enabling the medical staff to enter their personal preferences, we make sure that it possible to deviate from the settings made by the platform and we ensure a user-friendly experience. For example, a cardiologist should see information concerning the heart instead of seeing kidney data first. Besides these functional requirements, the platform should also fulfill the following nonfunctional requirements: - The platform should be generic and it should be possible to plug-in new devices at any moment. - As the number of devices in the ICU is increasing at a steady pace, the platform should be scalable and able to cope with a large number of clients. - The platform performance should be such that the loading times are in an acceptable range. The original research contribution of the paper is the design of a platform for the efficient switching between devices in an Intensive Care setting, taking into account the above four functional requirements and the three non-functional requirements. The design of the platform is outlined in the paper and obtained performance results are presented, together with a discussion section. The platform can also be used outside the intensive care setting, for instance in ambulatory settings. # 3 METHODS The platform offers an environment, in which the efficient switching between devices is facilitated. Section 3.1 details the general concept of the platform’s architecture, whereas Section 3.2 describes the platform components and their interactions, by focusing on the envisioned scenarios. Section 3.3 discusses the use of Near Field Communication as a localization and switching standard between the devices. In Section 3.4, the components of the platform and their interactions are described. Further implementation details, concerning the three-tiered architecture are given in Sections 3.4.1, 3.4.2 and 3.4.3. Section 3.4.4 handles the implementation details of the NFC communication. Finally, Section 3.5 details the security and confidentiality techniques used within the platform. ----- ## 3.1 GENERAL CONCEPT Figure 1 illustrates the general concept of the platform. As can be seen in this figure, data from a various range of sources is gathered in the Intensive Care Information System (ICIS). This involves data from clinical observations, prescription information, monitoring parameters, lab results as well as administrative data. Furthermore, information regarding the personal preferences from doctors is stored in the Staff Preferences database and general knowledge is kept in the Knowledge database, for example the capabilities of every type of device. The information from these three databases is used for filtering and selecting the requested information. Based on the capabilities of device and the preferences of the user, the information can be filtered in an additional step, if necessary, and is sent to the device. ## 3.2 SCENARIO From the general concept, as discussed in the previous section, the following scenario can be envisioned. 1) The doctor is on his way for his round in the ICU ward and decides that he already wants to check the last measurements of patient X. 
He takes his smartphone and gets a concise overview, in the form of a table, of the patient’s status.

2) As the doctor arrives at the ICU ward, he wants to visualize the measurements on the bedside desktop PC. Therefore, he swipes his personal tag on the reader attached to the device.

3) Immediately, the patient’s data that the doctor was viewing on the smartphone is shown on the bedside PC. As this screen has a larger size, the measurements are shown as graphs, where possible.

4) Meanwhile, the nurse at the unit desktop PC is entering additional information about patient X. Straightaway, all devices currently visualizing data about patient X will be updated, ensuring an up-to-date view on patient X.

5) After a while, the doctor moves to the bed of patient Y and visualizes this patient’s data on the bedside PC. With this action, the smartphone application, still visualizing the data of patient X, will refresh automatically and instantly show the data of patient Y.

The visualization of this scenario is shown in Figure 2. Next to the floor plan, some examples are given of which information is displayed on the screen of the involved devices. The scenario described in the previous paragraph details the general concept of the proposed platform. However, the platform makes it possible to switch between a wide range of different devices that could be used in the ICU: smartphones, tablets, desktop computers at the nurse’s station or in the doctor’s office, bedside PCs and smart TVs. In fact, all devices which are capable of visualizing web pages can be used with the platform. Different use cases for switching are:

- Switching between devices, from a device with a small screen to one with a bigger screen, because the medical staff wants to have a more detailed overview of certain variables. This can be done by means of a graph instead of a table, or a listing of the variables from the last hour.
- Switching from a screen that can be seen by visitors of patients, during visiting hours, to a more personal device. This way, patient confidentiality can be taken into account.
- Switching from a device residing next to the patient's bed to a more personal device, because the doctor is continuing his/her round.

## 3.3 NEAR FIELD COMMUNICATION

To implement the tags and readers, as mentioned in the previous section, we assume doctors and the medical staff will use Near Field Communication (NFC) [28]. NFC is a new set of standards that enables smartphones and other devices with similar capabilities to establish radio communication. This connection is set up by bringing the devices in close proximity to each other (usually only a few centimeters), or by touching each other. Not only is communication between two NFC-enabled devices possible, but also the communication between an NFC reader and an unpowered NFC chip, which is often called a tag. NFC is compatible with existing Radio Frequency Identification (RFID) structures, tags and smart cards [29][30]. There also is no technical barrier to using NFC, as the concept is straightforward. The user just has to bring the two devices in range to start communication. As the communication range is short, it is easy to distinguish multiple devices residing in each other’s neighborhood. This also means that there is little chance that there will be security issues: if no other device is in the vicinity, there will be no communication [31].
## 3.4 PLATFORM COMPONENTS AND INTERACTIONS

The platform is implemented using Java EE 6 (Java Enterprise Edition 6), which defines a standard for developing and implementing multi-tier applications, based on standardized modular components. The Java EE framework offers a complete set of services to these components, and details concerning middleware activities are handled automatically, without complex programming. A multi-tier, distributed application model is used by this platform. Based on functionality, the application is split up into different components, which can be installed on different machines, depending on the tier they belong to. Most Java EE enterprise applications can be split into 3 tiers:

- Entities are contained in the _Persistence Tier_. The Java Persistence API is used to implement entities and to persist them into a table in a relational database.
- Enterprise Java Beans (EJB) are defined in the _Business Tier_. These beans are responsible for adding logic to the application. The Java EE framework ensures that these EJBs offer scalability by means of resource pooling. There are two different types of EJBs. Tasks of clients are performed by session beans. Based on the requirements, session beans can be stateless, stateful or a singleton. Message-driven beans are used when the application has to process asynchronous messages. In the Business Tier, Web Services can be defined, which can call upon external services.
- Java Server Pages (JSP) and servlets are stored in the _Web Tier_. JSP and servlets can be used to visualize dynamic web content and make it possible to enforce a separation between the representation of data and the business logic.

The 3-tier architecture of the platform, as depicted in Figure 3, is based on a web-centric approach and addresses the functional needs mentioned in Section 2. These choices ensure that the platform is flexible and portable. Also, by choosing a web-centric approach, all devices with a web browser are able to plug into the system. The implementation code of the Proof of Concept is made publicly available on GitHub through the following URL: https://github.ugent.be/fddbacke/DeviceSwitching.git.

### 3.4.1 PERSISTENCE TIER

The Persistence Tier contains all the entities, representing the tables in the database. In fact, entities are Plain Old Java Objects (POJOs), extended with annotations that can indicate, for example, an ID or multiplicities, such as a many-to-many relationship. The most important entities in the platform are:

- _DeviceType_: information about the specifications of a device type, e.g., the resolution of the screen.
- _Staff_: knowledge about staff members’ preferences and limitations, concerning the data that a certain staff member can consult.
- _Variable_: identifier to determine which type of data is stored, for example, body temperature, blood pressure, heartbeats per minute.
- _Patient_: detailed information about the patient, such as name, episode number and unique national number.
  - _PatientVariable_: actual data about a specific _Variable_, linked to the _Patient_. For example: patient X has a body temperature of 37.2 degrees.

### 3.4.2 BUSINESS TIER

In the Business Tier, the Facade design pattern is applied [32]. This implies that all communication between the Web Tier and the Business Tier will pass through a specific bean, the _ManagementBean_. This bean provides only high-level business methods in order to have a safe and simple interface. Furthermore, the methods in the _ManagementBean_ can be split into two types. First, there are methods that are related to the state of the application for a current user. Second, there are also methods for retrieving and changing information related to the patients. Since this defines a clear distinction, the _ManagementBean_ is connected to two different internal components.
Furthermore, the methods in the ManagementBean can be split in two types. First, there are methods that are related to the state of the application for a current user. Second, there are also methods for retrieving and changing information related to the patients. Since this defines a clear distinction, the _ManagementBean is connected to two_ different internal components. The first component, the _StateManager, keeps track of the current run-time state of the_ application. This class is implemented as a Singleton. A Singleton session bean is instantiated only once for each application and will exists for the whole lifecycle of that application. The Singleton session bean was chosen since this bean has to keep the information about the global state of the application. Information about the current number of devices and the users logged in on these devices is not stored in the database, but is all stored inside the _StateManager._ Run-time state information is not stored in the database as this would result in an extra delay. The disadvantage of this decision is that information about which patient the user was viewing and which devices were in use by the user, will be lost when the server has to reboot. The second component, the DataBean, is used to communicate with the Persistence Tier. This way, the application can take care of the data related to patients and staff members. Because all global state-information is saved in the _StateManager, the_ _ManageBean can be_ implemented as a stateless session bean. This design choice makes the application more scalable, since there can be more instances of this implementation available at the same time. ----- 3.4.3 WEB TIER The Web Tier consists of both code running on the device (the client) and code running on the server. The Asynchronous Javascript and XML (AJAX) design principle is used to create a fast and responsive system [33]. However, Javascript Object Notation (JSON) was used instead of XML to enable easier processing in the client [34]. A static HTML page (HyperText Markup Language) is downloaded to the client and the content of this page is changed dynamically. Communication with the server happens in the background, without any user intervention. The server-side code consists of servlets running in the Java EE Web Container. These will handle the requests from the client and forward them to the Business Tier. Another function of the servlets is to convert the raw data from the Business Tier into a representation that the client can process. This conversion depends on the type of device. For example, a graph will be generated from the raw data for tablets and laptops, while smartphones will receive a textual representation. The data is presented to the user in the form of `Blocks. Each` `Block contains a specific` _Variable, for example blood pressure or body temperature. When a user adds new information_ to a `Block, this information is sent to the Business Tier and all devices displaying this` information will receive the updated content. Clients also poll the server at fixed intervals for new content. To lower the load on the Business Tier, a ChangeTracker object is introduced. The goal of this object is to keep track of which devices need to be updated with new content. When a client checks whether there is new content, the request will only be passed on to the Business Tier, if the _ChangeTracker_ indicates that new content is available. This reduces the load on the server. 
### 3.4.4 NFC COMMUNICATION IMPLEMENTATION

In order to establish a connection between the NFC infrastructure used for the platform and the platform itself, the IOTOPE[1] library is used. By developing a small client for the device equipped with an NFC reader, the platform can be notified when a certain device is in the vicinity of an NFC tag. Each NFC reader can decode tags. Whenever this situation occurs, the client will perform a post to the NFC login server, transferring all the necessary data. This is in fact standard IOTOPE functionality. This login server will send the request to the login server of our platform and thus establish the connection.

The NFC communication within this platform is implemented in such a way that there is support for two different setups. In the first setup, each user has a tag and a reader is connected to each device. In the second setup, each device has a tag and a user can log in by scanning the tag with his/her personal smartphone. The implementation of this small client is also available on https://github.ugent.be/fddbacke/DeviceSwitching.git.

1 https://github.com/alexvanboxel/iotope-node/downloads
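The tag-triggered switch just described can be summarized as follows. This is a hedged Python sketch, not the IOTOPE-based client itself; the tag identifiers, device ids and function names are invented for illustration.

```python
# Illustrative sketch (not the IOTOPE client) of the NFC-triggered switch:
# a reader client posts the scanned tag plus its own device id, and the
# platform re-displays the user's current patient on that device.

sessions = {"dr_jansen": "patient-X"}        # user -> patient currently viewed
tag_to_user = {"TAG-04A21F": "dr_jansen"}    # personal NFC tag -> staff member

def on_tag_scanned(tag_uid, device_id, show_on_device):
    user = tag_to_user.get(tag_uid)
    if user is None:
        return                               # unknown tag: ignore the scan
    patient = sessions.get(user)             # global state, cf. the StateManager
    if patient is not None:
        show_on_device(device_id, user, patient)

on_tag_scanned("TAG-04A21F", "bedside-pc-7",
               lambda dev, usr, pat: print(f"{dev}: showing {pat} for {usr}"))
# -> bedside-pc-7: showing patient-X for dr_jansen
```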
Reaction time of the NFC reader and the time to render the Document Object Model (DOM) in the client browser are excluded from the measurements. Response time will always be used as the benchmark. This is defined as the time between the selection of the patient and the moment when the DOM with the patient's information is updated in the client.

## 4.2 EVALUATION SETUP

All the tests were performed with the same server: a laptop with an Intel Core i5-2410M CPU, 6 GB RAM and an SSD, running Microsoft Windows 7. GlassFish version 3.1.2 was used as the Application Server. The client used in the tests, unless stated otherwise, is a laptop with an Intel Core i5-3210M running Ubuntu 12.04. Firefox 17.0 was used as the web browser. For the communication between the devices, a wireless 802.11 b/g router with a 100 Mbit/s Ethernet connection was used. Unless stated otherwise, the devices were connected to the WiFi network.

A Samsung Galaxy SII smartphone running Android 4.0 and a Samsung Galaxy Tab 10.1 tablet running Android 3.1 were used as mobile devices. The tests were performed using their standard web browsers.

## 4.3 EVALUATION RESULTS

The first evaluation analyzes the response time as a function of the number of blocks that are sent to the client. This test was performed with a wired connection between server and client. As can be seen in Figure 4, the relation between the response time and the number of blocks shown in the client is linear. Presenting the data as graphs takes more time than presenting the data in tabular form. The response time also increases faster when graphs are generated. This was expected, since the number of blocks equals the number of graphs to be created. The time for generating three blocks is almost equal for both representations, because the first three blocks never contain any graphs.

To gain more insight into the response time, its different parts were analyzed. Three parts were identified:

- DOM-modification time at the client side
- The communication delay between server and client; this delay consists of both the network delay and the JSON parsing at the client side
- The generation time at the server side, which can further be split into:
  - switching to and selecting the patient at the server side
  - business logic without database interaction
  - interaction with the database
  - generating HTML

The results of this analysis with the data represented as graphs or as tables can be found in Figure 5 and Figure 6, respectively. The average and standard deviation are shown in Table 1. It can be concluded that for the rendering of tables the most time-consuming parts are the DOM modification at the client side and the database interaction at the server side. For the rendering of graphs, the HTML generation at the server side takes a considerable amount of time. This is caused by the fact that the generation of the graphs is done as part of the generation of the HTML code. At the server side, the database interaction always takes a considerable part of the total response time.

The average response times were measured with different numbers of connected devices, each constantly sending requests to the server. With up to 20 devices connected, no statistically significant differences in response time were measured, implying that the system scales well. Finally, the response time was measured on different types of devices to evaluate the user experience.
In normal operation, the system shows a different representation of the data depending on the type of device. For these measurements, the same data was sent to each device to enable comparison of the performance of the different devices. This data consists of 25 blocks, each containing 100 values, represented both as graphs and as tables. The results are shown in Table 2. On all devices, the representation with graphs consistently takes longer. The smartphone also performs better than the tablet for both representations.

# 5 DISCUSSION

An accurate analysis of the evaluation results, obtained as described in the previous section, indicates that the platform performs as desired. The breakdown of the response time for generating tables, shown in Figure 6, reveals that most of the time goes to client DOM modification (57%) and database interaction (31%). For generating graphs, as shown in Figure 5, 65% of the response time is needed for HTML generation; client DOM modification and database interaction take 12% and 16% of the response time, respectively. Measurements with up to 20 devices have shown no performance loss due to the concurrent use of multiple devices, indicating that the application is scalable.

The ICU of Ghent University Hospital (http://www.icu.be/eng/) is one of the largest ICUs in Belgium, with 56 beds. The department consists of five different units (cardiac, burn unit, surgical, pediatric and internal), which are located at different sites. Each unit is in turn divided into several smaller units, and each ICU bed has a bedside PC. As the platform can be distributed in such a way that each small unit has its own server, it can be ensured that the platform keeps running smoothly. Moreover, in the experiments where 25 medical parameters were measured and 100 values per parameter were stored, the system was able to consistently respond in less than 1 second when the data was presented in tabular form.

The difference in response time between the different types of devices is less than a factor of 3, as can be observed in Table 2. This difference can be partly compensated by adapting the representation of the data to the device type, which guarantees a consistent user experience on all devices. Furthermore, the performance difference between the tablet and the smartphone used in this experimental set-up can most probably be explained by the newer Android version and the faster CPU of the smartphone.

When the response time is measured as a function of the number of data points, as shown in Figure 4 and Table 1, a linear relationship is observed, as expected. The HTML generation for graphs takes roughly 50 times as much time as for tables. The increase in response time as a function of the number of data points is smaller with graphs, since the number of graphs that need to be created stays the same.
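The percentages quoted above follow directly from the component times in Table 1. For the tabular representation, for example:

$$\frac{645.47}{11.3 + 645.47 + 84.2 + 26.03 + 10.9 + 348.37} = \frac{645.47}{1126.27} \approx 57\%, \qquad \frac{348.37}{1126.27} \approx 31\%,$$

and for graphs the HTML-generation share is $1435.7 / 2219.97 \approx 65\%$, with DOM modification at $276.4 / 2219.97 \approx 12\%$ and database interaction at $363.07 / 2219.97 \approx 16\%$.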
The main advantages of using the newer NFC technology instead of the better-known RFID technology are:

- the capability of bi-directional communication: unlike the single-mode communication of RFID, this allows for more flexibility, as the tags are able to communicate directly with each other;
- the ability to emulate contactless smart cards, which advances the interoperability of NFC, as there is no need for an NFC tag or RFID card and the information stored on the NFC device is used for communication; making such a system secure is, however, a great challenge [35];
- the possibility of establishing peer-to-peer connections, a mode in which data can be exchanged at link level [36, 37];
- the negligible connection establishment time: NFC connections are typically set up in less than 100 milliseconds, as the connection between two devices is created automatically [38].

The platform performance can be further improved by reducing the number of requests from the client to the server. This can be realized by changing the polling architecture (from client to server) to a push-based architecture (from server to client). In this way, when the server discovers a change to the data a viewer is displaying, it pushes these changes to the client.

The following technical considerations and challenges should be taken into account for the platform. First, as the platform is used within a clinical setting, privacy and security are very important. Therefore, we also plan to carefully consider the privacy and security requirements in the extended version of the platform. Second, JavaScript was chosen to create a fast and responsive system; the code is executed at the client side, which limits the processing power required in the back-end. However, this also means that the CPU of the end user's device is used, which can have an impact on the battery consumption of mobile clients. An adaptive approach to balancing components between client and server is currently being considered. Another limitation of JavaScript is that different layout engines will render the code in a different manner, which may result in inconsistencies in terms of functionality and interface. Proper front-end development tools and extensive automated software testing will allow circumventing these incompatibility concerns. Third, by using AJAX, network latency can impact the responsiveness of the platform. Lightweight alternatives are currently being studied.

The proposed platform can be integrated within existing CDSS that are already deployed in the intensive care unit, as previous research indicates that stand-alone CDSS cannot be executed on multiple computing platforms [39]. This can be done by generating the entities based on the relational databases of the ICU. As indicated in Figure 1, several databases are integrated within our platform. If these databases are replaced with those used in the CDSS and the queries are adjusted accordingly, the platform can be fully operational in the ICU, integrated with the other tools and systems. EHR (Electronic Health Record) applications are considered to be an important part of CDSS; hence, integrating the proposed platform with existing EHR applications can be realized in a similar way: generating the entities in the platform based on the database tables in the EHR application.
This approach was taken for the integration of the system with the EHR application in the Intensive Care department of Ghent University Hospital.

# 6 CONCLUSIONS

In this paper, a platform to access data through multiple devices is described. Based on the functional and non-functional requirements, a 3-tier architecture was designed and implemented. Due to the web-centric approach, the platform is portable, scalable and extensible. Extensive timing measurements were performed to investigate the response time, the scalability and the user experience of the designed platform. The results of these evaluations show that the response time of the platform scales linearly with the amount of data. The response time for generating tables is always less than the response time for generating graphs. The platform presented in this paper facilitates the use of multiple devices in an ICU setting, integrated in the workflow of the doctors and the medical staff at the point of care. Future research will focus on replacing the polling architecture with a push mechanism and on the implementation of several caching strategies.

# CONFLICT OF INTEREST

The authors declare that they have no conflict of interest.

# REFERENCES

[1] Lapinsky, S. Mobile computing in critical care. Journal of Critical Care 2007;22(1):41–44.
[2] Berger, E. The iPad: Gadget or medical godsend? Annals of Emergency Medicine 2010;56(1):A21–A22.
[3] Kubben, P. Neurosurgical apps for iPhone, iPod touch, iPad and Android. Surgical Neurology International 2010;1(1):89+.
[4] Lapinsky, P., Weshler, J., Mehta, S., Varkul, M., Hallett, D., Stewart, T. Handheld computers in critical care. Journal of Critical Care 2001;5:227+.
[5] Colpaert, K., Vanbelleghem, S., Danneels, C., Benoit, D., Steurbaut, K., Van Hoecke, S., et al. Has information technology finally been adopted in Flemish Intensive Care Units? BMC Medical Informatics and Decision Making 2010;10(62).
[6] Osheroff, J., Teich, J., Middleton, B., Steen, E., Wright, A., Detmer, D. A roadmap for National Action on Clinical Decision Support. JAMIA 2007;14(2):141–145.
[7] Garg, A., Adhikari, N., McDonald, H., Rosas-Arellano, P., Devereaux, P., Beyene, J., et al. Effects of Computerized Clinical Decision Support Systems on Practitioner Performance and Patient Outcomes. JAMA 2005;293(10):1223–1238.
[8] Moxey, A., Robertson, J., Newby, D., Hains, I., Williamson, M., Pearson, S. Computerized Clinical Decision Support for Prescribing: Provision does not guarantee Uptake. JAMIA 2010;17(1):25–33.
[9] Orwat, C., Graefe, A., Faulwasser, T. Towards Pervasive Computing in Health Care - A Literature Review. Social Science Research Network Working Paper Series 2008;8(26).
[10] Kawamoto, K., Lobach, D. Clinical Decision Support provided within Physician Order Entry Systems: A Systematic Review of Features Effective for changing Clinician Behavior. In: AMIA Annu Symp Proc. 2003, p. 361–365.
[11] Decruyenaere, J., De Turck, F., Vanhastel, S., Vandermeulen, F., Demeester, P., De Moor, G. On the design of a generic and scalable multilayer software architecture for data flow management in the Intensive Care Unit. Methods of Information in Medicine 2003;42(1):79–88.
[12] Kawamoto, K., Houlihan, C., Balas, A., Lobach, D. Improving Clinical Practice using Clinical Decision Support Systems: A Systematic Review of Trials to identify Features Critical to Success. BMJ (Clinical research ed) 2005;330(7494):765+.
[13] Van Hoecke, S., Decruyenaere, J., Danneels, C., Taveirne, K., Colpaert, K., Hoste, E., et al. Service oriented subscription management of medical decision data in the Intensive Care Unit. Methods of Information in Medicine 2008;47(4):364–380.
[14] Herasevich, V., Pickering, B., Dong, Y., Peters, S., Gajic, O. Informatics Infrastructure for Syndrome Surveillance, Decision Support, Reporting, and Modeling of Critical Illness. Mayo Clinic Proc 2010;85(3):247–254.
[15] Sittig, D., Wright, A., Osheroff, J., Middleton, B., Teich, J., Ash, J., et al. Grand Challenges in Clinical Decision Support. J Biomed Inform 2008;41(2):387–392.
[16] Bardram, J. Pervasive Healthcare as a Scientific Discipline. Methods of Information in Medicine 2008;47(3):178–185.
[17] Triantafyllidis, A., Koutkias, K., Chouvarda, I., Maglaveras, N. An Open and Reconfigurable Wireless Sensor Network for Pervasive Health Monitoring. Methods of Information in Medicine 2008;47(3):229–234.
[18] Blobel, B. Architectural Approach to eHealth for Enabling Paradigm Changes in Health. Methods of Information in Medicine 2010;49(2):123–134.
[19] Mitchell, J., Gerdin, U., Lindberg, D., Lovis, C., Martin-Sanchez, F., Miller, R., et al. 50 years of informatics research on decision support: What's next. Methods of Information in Medicine 2011;50(6):525.
[20] Peek, N., Swift, S. Intelligent data analysis for knowledge discovery, patient monitoring and quality assessment. Methods of Information in Medicine 2012;51(4):318.
[21] Surján, G., et al. How to use health informatics to manage the information overflow created by itself? Methods Inf Med 2013;52:97–98.
[22] Tentori, M., Favela, J. Activity-aware computing for healthcare. IEEE Pervasive Computing 2008;7(2):51–57.
[23] Tentori, M., Hayes, G.R., Reddy, M. Pervasive computing for hospital, chronic, and preventive care. Foundations and Trends in Human-Computer Interaction 2012;5(1):1–95.
[24] Bardram, J.E., Bossen, C. Mobility work: The spatial dimension of collaboration at a hospital. Computer Supported Cooperative Work (CSCW) 2005;14(2):131–160.
[25] Bardram, J.E. The trouble with login: on usability and computer security in ubiquitous computing. Personal and Ubiquitous Computing 2005;9(6):357–367.
[26] Koch, S.H., Weir, C., Westenskow, D., Gondan, M., Agutter, J., Haar, M., et al. Evaluation of the effect of information integration in displays for ICU nurses on situation awareness and task completion time: A prospective randomized controlled study. International Journal of Medical Informatics 2013;82(8):665–675.
[27] Effken, J.A., Loeb, R.G., Kang, Y., Lin, Z.C. Clinical information displays to improve ICU outcomes. International Journal of Medical Informatics 2008;77(11):765–777.
[28] Falke, O., Rukzio, E., Dietz, U., Holleis, P., Schmidt, A. Mobile Services for Near Field Communication. Tech. Rep. LMU-MI-2007-1; Vodafone Group Research and Development, Munich; Embedded Interaction Research Group, University of Munich; Computing Department, Lancaster University, UK; Fraunhofer IAIS, Sankt Augustin and b-it, University of Bonn; 2007.
[29] Ok, K., Coskun, V., Aydin, M.N., Ozdenizci, B. Current benefits and future directions of NFC services. In: Education and Management Technology (ICEMT), 2010 International Conference on. IEEE; 2010, p. 334–338.
[30] NFC Forum. Available: http://www.nfc-forum.org.
[31] Csapodi, M., Nagy, A. New applications for NFC devices. In: Mobile and Wireless Communications Summit, 2007. 16th IST. IEEE; 2007, p. 1–5.
[32] Gamma, E., Helm, R., Johnson, R., Vlissides, J. Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley; 1994.
[33] Paulson, L. Building Rich Web Applications with AJAX. Computer 2005;38(10):14–17.
[34] Wang, G. Improving Data Transmission in Web Applications via the Translation between XML and JSON. In: 2011 Third International Conference on Communications and Mobile Computing. IEEE; 2011, p. 182–185.
[35] Roland, M. Software card emulation in NFC-enabled mobile phones: great advantage or security nightmare? In: Fourth International Workshop on Security and Privacy in Spontaneous Interaction and Mobile Phone Use. 2012.
[36] Madlmayr, G., Langer, J., Kantner, C., Scharinger, J. NFC devices: Security and privacy. In: Availability, Reliability and Security, 2008. ARES 08. Third International Conference on. IEEE; 2008, p. 642–647.
[37] Ok, K., Coskun, V., Aydin, M.N., Ozdenizci, B. Current benefits and future directions of NFC services. In: Education and Management Technology (ICEMT), 2010 International Conference on. IEEE; 2010, p. 334–338.
[38] Want, R. Near Field Communication. IEEE Pervasive Computing 2011;10(3):4–7.
[39] Farion, K., Michalowski, W., Wilk, S., O'Sullivan, D., Rubin, S., Weiss, D. Clinical decision support system for point of care use. Methods Inf Med 2009;48:381–390.

# FIGURES

**FIGURE 1 – GENERAL CONCEPT OF THE PLATFORM. DATA FROM A WIDE RANGE OF SOURCES IS GATHERED IN THE INTENSIVE CARE INFORMATION SYSTEM (ICIS). PERSONAL PREFERENCES OF THE STAFF ARE STORED IN THE STAFF PREFERENCES DATABASE AND GENERAL KNOWLEDGE IS KEPT IN THE KNOWLEDGE DATABASE. INFORMATION FROM THESE DATA SOURCES IS USED TO SELECT AND FILTER THE DATA INTELLIGENTLY. BASED ON THE TYPE OF THE DEVICE AND THE PREFERENCES OF THE USER, THE INFORMATION IS VISUALIZED ON A SPECIFIC DEVICE.**

**FIGURE 2 – AN ILLUSTRATIVE SCENARIO TO SHOW THE PLATFORM: ON THE LEFT SIDE OF THE FIGURE A PART OF THE ICU FLOOR PLAN IS SHOWN; ON THE RIGHT SIDE AN EXAMPLE OF THE VISUALISATION ON EACH OF THE DEVICES USED IN THE SCENARIO IS DISPLAYED. 1) THE DOCTOR IS ON HIS WAY TO THE ICU WARD AND ALREADY GOES THROUGH THE PATIENT'S DATA ON HIS SMARTPHONE, BY MEANS OF A TABLE. 2) WHEN HE ARRIVES AT THE PATIENT'S BEDSIDE PC, HE USES HIS PERSONAL TAG TO IDENTIFY HIMSELF TO THE COMPUTER. 3) ON THIS SCREEN, HE SEES A MORE DETAILED OVERVIEW OF THE DATA SHAPED AS A GRAPH. 4) WHEN THE NURSE ENTERS NEW DATA OF THE PATIENT INTO THE SYSTEM, THE NEW INFORMATION IS IMMEDIATELY VISUALIZED ON THE SCREEN OF THE PC AND THE SMARTPHONE OF THE DOCTOR. 5) WHEN VISITING THE NEXT PATIENT, THE DOCTOR CHANGES PATIENTS IN THE APPLICATION AND ALL SCREENS ARE UPDATED AGAIN.**

**FIGURE 3 – HIGH-LEVEL OVERVIEW OF THE PLATFORM'S ARCHITECTURE. THE SERVLETS COMMUNICATE WITH THE BUSINESS TIER AND CONVERT RAW DATA INTO A SUITABLE REPRESENTATION FOR THE DEVICE. THE MANAGEMENTBEAN IS CONNECTED TO TWO INTERNAL COMPONENTS. THE STATEMANAGER KEEPS TRACK OF THE CURRENT RUN-TIME STATE AND THE DATABEAN IS USED TO COMMUNICATE WITH THE PERSISTENCE TIER.**

**FIGURE 4 – RESPONSE TIME AS A FUNCTION OF THE NUMBER OF BLOCKS WITH THE DATA REPRESENTED AS TABLES OR AS GRAPHS.**

**FIGURE 5 – ANALYSIS OF RESPONSE TIME WITH DATA REPRESENTED AS GRAPHS**

**FIGURE 6 – ANALYSIS OF RESPONSE TIME WITH DATA REPRESENTED AS TABLES**

# TABLES

**TABLE 1 – AVERAGE AND STANDARD DEVIATION IN MS FOR THE DIFFERENT PARTS OF THE RESPONSE TIME**

| | Graph, average [ms] | Graph, σ [ms] | Table, average [ms] | Table, σ [ms] |
|---|---|---|---|---|
| Select patient | 11.07 | 2.49 | 11.3 | 1.62 |
| Client DOM modification | 276.4 | 8.02 | 645.47 | 14.78 |
| Communication delay | 122.93 | 17.97 | 84.2 | 12.66 |
| HTML generation | 1435.7 | 174.88 | 26.03 | 14.30 |
| Business logic without DB communication | 10.8 | 10.08 | 10.9 | 12.47 |
| Database interaction | 363.07 | 176.08 | 348.37 | 170.41 |

**TABLE 2 – AVERAGE AND STANDARD DEVIATION IN MS CORRESPONDING TO THE RESPONSE TIME FOR THE DIFFERENT TYPES OF DEVICES**

| | Graph, average [ms] | Graph, σ [ms] | Table, average [ms] | Table, σ [ms] |
|---|---|---|---|---|
| Computer | 2219.97 | 305.99 | 1126.27 | 184.40 |
| Tablet | 3652.93 | 537.28 | 2798.07 | 304.80 |
| Smartphone | 3218.07 | 348.94 | 2449.13 | 289.17 |
{ "disclaimer": "Notice: The following paper fields have been elided by the publisher: {'abstract', 'references'}. Paper or abstract available at https://api.unpaywall.org/v2/10.3414/ME13-02-0021?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.3414/ME13-02-0021, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "other-oa", "status": "GREEN", "url": "https://biblio.ugent.be/publication/5968043/file/5968054.pdf" }
2014
[ "JournalArticle" ]
true
2014-06-06T00:00:00
[]
12,508
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Biology", "source": "s2-fos-model" }, { "category": "Computer Science", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/0277681acf245005906e17d6996ed498421c5b68
[ "Computer Science" ]
0.908015
Data-aware optimization of bioinformatics workflows in hybrid clouds
0277681acf245005906e17d6996ed498421c5b68
Journal of Big Data
[ { "authorId": "3451478", "name": "Athanassios M. Kintsakis" }, { "authorId": "71018930", "name": "Fotis Psomopoulos" }, { "authorId": "143619722", "name": "P. Mitkas" } ]
{ "alternate_issns": [ "2579-0048" ], "alternate_names": [ "J Big Data", "Journal on Big Data" ], "alternate_urls": [ "http://www.springer.com/computer/database+management+&+information+retrieval/journal/40537", "http://techscience.com/JBD/index.html", "https://journalofbigdata.springeropen.com", "https://journalofbigdata.springeropen.com/" ], "id": "d60da343-ab92-4310-b3d7-2c0860287a9d", "issn": "2196-1115", "name": "Journal of Big Data", "type": "journal", "url": "http://www.journalofbigdata.com/" }
Life Sciences have been established and widely accepted as a foremost Big Data discipline; as such they are a constant source of the most computationally challenging problems. In order to provide efficient solutions, the community is turning towards scalable approaches such as the utilization of cloud resources in addition to any existing local computational infrastructures. Although bioinformatics workflows are generally amenable to parallelization, the challenges involved are, however, not only computational, but also data intensive. In this paper we propose a data management methodology for achieving parallelism in bioinformatics workflows, while simultaneously minimizing data-interdependent file transfers. We combine our methodology with a novel two-stage scheduling approach capable of performing load estimation and balancing across and within heterogeneous distributed computational resources. Beyond an exhaustive experimentation regime to validate the scalability and speed-up of our approach, we compare it against a state-of-the-art high performance computing framework and showcase its time and cost advantages.
# Data-aware optimization of bioinformatics workflows in hybrid clouds

### Athanassios M. Kintsakis*, Fotis E. Psomopoulos and Pericles A. Mitkas

*Correspondence: akintsakis@issel.ee.auth.gr. Department of Electrical and Computer Engineering, Aristotle University of Thessaloniki, 54124 Thessaloniki, Greece

**Abstract** Life Sciences have been established and widely accepted as a foremost Big Data discipline; as such they are a constant source of the most computationally challenging problems. In order to provide efficient solutions, the community is turning towards scalable approaches such as the utilization of cloud resources in addition to any existing local computational infrastructures. Although bioinformatics workflows are generally amenable to parallelization, the challenges involved are, however, not only computational, but also data intensive. In this paper we propose a data management methodology for achieving parallelism in bioinformatics workflows, while simultaneously minimizing data-interdependent file transfers. We combine our methodology with a novel two-stage scheduling approach capable of performing load estimation and balancing across and within heterogeneous distributed computational resources. Beyond an exhaustive experimentation regime to validate the scalability and speed-up of our approach, we compare it against a state-of-the-art high performance computing framework and showcase its time and cost advantages.

**Keywords:** Cloud computing, Component-based workflows, Bioinformatics, Big data management, Hybrid cloud, Comparative genomics

**Introduction**

There is no doubt that Life Sciences have been firmly established as a Big Data science discipline, largely due to the high-throughput sequencers that are widely available and extensively utilized in research. However, when it comes to tools for analyzing and interpreting big bio-data, the research community has always been one step behind the actual acquisition and production methods. Although the amount of data currently available is considered vast, the existing methods and extensively used techniques can only hint at the knowledge that can potentially be extracted and consequently applied to addressing a plethora of key issues, ranging from personalized healthcare and drug design to sustainable agriculture, food production and nutrition, and environmental protection.

Researchers in genomics, medicine and other life sciences are using big data to tackle fundamental issues, but the actual data management and processing require more networking and computing power [14]. Big data is indeed one of today's hottest concepts, but it can be misleading. The name itself suggests mountains of data, but that's just the start. Overall, big data consists of three v's: volume of data, velocity of processing the data, and
The main issue with dealing with big data is the constantly increasing demands for both computational resources as well as storage facilities. This in turn, has led to the rise of large-scale high performance computing (HPC) models, such as cluster, grid and cloud computing. Cloud computing can be defined as a potentially high performance computing environment consisting of a number of virtual machines (VMs) with the ability to dynamically scale resources up and down according to the computational requirements. This computational paradigm has become a popular choice for researchers that require a flexible, pay-as-you-go approach to acquiring computational resources that can accompany their local computational infrastructure. The combination of public and privately owned clouds defines a hybrid cloud, i.e. an emerging form of a distributed computing environment. From this perspective, optimizing the execution of data-intensive bioinformatics workflows in hybrid clouds is an interesting problem. Generally speaking, a workflow can be described as the execution of a sequence of concurrent processing steps, or else computational processes, the order of which is determined by data interdependencies as well as the target outcome. In a data-intensive workflow, data and metadata, either temporary or persistent, are created and read at a high rate. Of course, a workflow can be both data and computationally intensive and the two are often found together in bioinformatics workflows. In such workflows, when scheduling tasks to distributed resources, the data transfers between tasks are not a negligible factor and may comprise a significant portion of the total execution time and cost. A high level of data transfers can quickly overwhelm the storage and network throughput of cloud environments, which is usually on the order of 10–20 MiB/s [6], while also saturating the bandwidth of local computational infrastructures and leading to starvation of resources to other users and processes. It is well known that a high level of parallelization can be achieved in a plethora of bioinformatics workflows by fragmenting the input of individual processes into chunks and processing them independently, thus achieving parallelism in an embarrassingly parallel way. This is the case in most evolutionary investigation, comparative genomics and NGS data analysis workflows. This fact can be largely taken advantage of in order to achieve parallelism by existing workflow management approaches emphasizing parallelization. The disadvantage of this approach however is that it creates significant data interdependencies, which in turn lead to data transfers that can severely degrade performance and increase overall costs. In this work, we investigate the problem of optimizing the parallel execution of dataintensive bioinformatics workflows in hybrid cloud environments. Our motivation is to achieve better time and cost efficiency than existing approaches by minimizing file transfers in highly parallelizable data-intensive bioinformatics workflows. 
The main contributions of this paper are twofold: (a) we propose a novel data management paradigm for achieving parallelism in bioinformatics workflows while simultaneously minimizing data-interdependency file transfers, and (b) based on our data management paradigm, we introduce a 2-stage scheduling approach balancing the trade-off between parallelization opportunities and minimizing file transfers when mapping the execution of bioinformatics workflows onto a set of heterogeneous distributed computational resources
In the past few years, there is a significant trend in integrating existing tools into unified platforms featuring an abundance of ready to use tools, with particular emphasis on ease of deployment and efficient use of resources of the cloud. A platform based approach is adopted by CloudMan [1], Mercury [38], CLoVR [3], Cloud BioLinux [22] and others [24, 32, 42, 44]. Most of these works are addressing the usability and user friendly aspect of executing bioinformatics workflows, while some of them also support the use of distributed computational resources. However, they largely ignore the underlying data characteristics of the workflow and do not perform any data-aware optimizations. ----- Existing domain agnostic workflow management systems including Taverna [48], Swift [49], Condor DAGMan [23], Pegasus [13], Kepler [26] and KNIME [5] are capable of also addressing bioinformatics workflows. A comprehensive review of the aspects of parallel workflow execution along with parallelization in scientific workflow managements systems can be found in [8]. Taverna, KNIME and Kepler mainly focus on usability by providing a graphical workflow building interface while offering limited to non-existent support, in their basic distribution, for use of distributed computational resources. On the other side, Swift, Condor DAGMan and Pegasus are mainly inclined over accomplishing parallelization on both local and distributed resources. Although largely successful in achieving parallelization, their scheduling policies are non data-aware and do not address minimizing file transfers between sites. Workflow management systems like Pegasus, Swift and Spark can utilize shared file systems like Hadoop and Google Cloud Storage. The existence of a high performance shared file system can be beneficial in a data intensive worfklow as data can be transferred directly between sites and not staged back and forth from the main site. However, the advantages of a shared file system can be outmatched by a data-aware scheduling policy which aims to minimize the necessity of file transfers to begin with. Furthermore, the existence of a shared file system is often prohibitive in hybrid clouds comprising of persistent local computational infrastructures and temporarily provisioned resources in the cloud. Beyond the significant user effort and expertise required in setting up a shared file system, one of the main technical reasons for this situation is that elevated user operating system privileges are required for this operation, which are not usually granted in local infrastructures. A Hadoop MapReduce [12] approach is capable of using data locality for efficient task scheduling. However, its advantages become apparent in a persistent environment where the file system is used for long term storage purposes. In the case of temporarily cloud provisioned virtual machines, the file system is not expected to exist either prior or following the execution of the workflow and consequently all input data are loaded at the beginning of the workflow. There is no guarantee that all the required data for a specific task will be placed in the same computational site and even if that were the case, no prior load balancing mechanism exists for assigning all the data required for each task to computational sites while taking into account the computational resources of the site and the computational burden of the task. 
Additionally, a MapReduce approach requires re-implementation of many existing bioinformatics tools which is not only impractical but also unable to keep up to date with the vanilla and standardized versions. Finally, it is important to note that none of the aforementioned related work clearly addresses the problem of applying a data-aware optimization methodology when executing data-intensive bioinformatics workflows in hybrid cloud environments. It is exactly this problem that we address in this work, by applying a data organization methodology coupled with a novel scheduling approach. **Methods** In this section we introduce the operating principles and the underlying characteristics of the data management and scheduling policy comprising our methodology. ----- **Data management policy** The fact that data parallelism can be achieved in bioinformatics workflows has largely been taken advantage of in order to accelerate workflow execution. Data parallelism involves fragmenting the input into chunks which are then processed independently. For certain tasks of bioinformatics workflows, such as sequence alignment and mapping of short reads which are also incidentally some of the most computationally expensive processes, this approach can allow for a very high degree of parallelism in multiprocessor architectures and distributed computing environments. However, prior to proceeding to the next step, data consistency requires that the output of the independently processed chunks be recombined. In a distributed computing environment, where the data is located on multiple sites, this approach creates significant data interdependency issues as data needs to be transferred from multiple sites in order to be recombined, allowing the analysis to proceed to the next step. The same problem is not evident in a multiprocessor architecture, as the data exists within the same physical machine. A sensible approach to satisfying data interdependencies with the purpose of minimizing, or even eliminating unnecessary file transfers would be to stage all fragments whose output must be recombined on the same site. Following that, the next step, responsible for processing the recombined output, can also be completed on the same site, and then the next step, that will operate on the output of the previous, also on the same site, further advancing this course until it is no longer viable. It is becoming apparent that this is a recursive process that takes into account the anticipated data dependencies of the analysis. In this way, segments of the original workflow are partitioned into workflow ensembles (workflows of similar structure but differing in their input data) that have no data interdependencies and can then be executed independently in an approach reminiscent of a bag-of-tasks. Undoubtedly, not all steps included in a workflow can be managed this way, but a certain number can, often also being the most computationally and data intensive. Instead of fragmenting the input of data parallelizable tasks into chunks arbitrarily, we propose fragmenting into chunks that can also sustain the data dependencies of a number of subsequent steps in the analysis. Future tasks operating on the same data can be grouped back-to-back into forming a pipeline. To accomplish the aforementioned, we model the data input space as comprising of Instances. An Instance (Inst) is a single data entry, the simplest form data can exist independently. An example of an Inst would be a single protein sequence in a .fasta file. 
Instances are then organized into organization units (OU), which are sets of instances that satisfy the data dependencies of one or more tasks. The definition of an OU is a set of Insts that can satisfy the data dependencies of a number of consecutive tasks, thus allowing the formation of an OU pipeline. However, before attempting to directly analyze the data involved, a key step is to preprocess the data instances in order to allow for a structured optimization of the downstream analysis process. A common occurrence in managing big data is the fact that their internal organization is dependent on its specific source. Our data organization model is applied through a preprocessing step that restructures the initial data organization into sets of Insts and OUs in a way reminiscent of a base transformation. The process involves identifying Insts in the input data, and grouping them together into OUs according to workflow data interdependencies. An identifier is constructed for ----- each Inst that also includes the OU it belongs to. The identifier is permanently attached to the respective data and therefore is preserved indefinitely. The initial integrity of the input data is guaranteed to be preserved during workflow execution, thus ensuring the accessibility to this information in later stages of the analysis and allowing for the recombination process. The identifier construction process is defined as follows. **Definition 1 Each** _OU is a set that initially contains a variable number (denoted by_ _n, k, l, ...) of instances Instj where_ j = [1, n]. The internal order of instances within an OU is preserved as the index assigned to each unique identifier Instj (i.e. the order 1 < i < n of the instances) is reflected directly upon the constructed identifier. The total number of _m_ _OUs themselves are grouped into a set of OUs and are each assigned unique identifi-_ ers OUi constructed in a semi-automated manner to better capture the semantic context of the defined _OUs. Finally, the instance identifier,_ _InstID consists of the concatenated_ OUi and Instj parameters, as shown below: OUs = {OU0, OU1, OU2, . . ., OUm} (1) OU0 = {Inst0, ..., Instn}, OU1 = {Inst0, ..., Instk } ... (2) OUn = {Inst0, ..., Instl} InstID = F (OUi, Instj) = OUi_Instj (3) At some point, some or all the pipelines may converge in what usually is a non parallelizable merging procedure. This usually happens at the end of the workflow, or in intermediate stages, before a new set of OU pipelines is formed and the analysis continues onward. **Scheduling policy** It is obvious that this data organization approach although highly capable of minimizing data transfers, it severely limits the opportunities for parallelization, as each OU pipeline is processed in its entirety in a single site. In very small analyses where the number of _OUs is less than the number of sites, obviously some sites will not be utilized, though_ this is a boundary case, unlikely to occur in real world analyses. In a distributed computing environment, comprised of multiprocessor architecture computational sites, ideally each _OU pipeline will be assigned to a single processor._ Given that today’s multiprocessor systems include a significant number of CPU cores, the number of _OU pipelines must significantly exceed, by a factor of at least 10, the_ number of sites in order to achieve adequate utilization. 
Unfortunately, even that would prove inadequate, as the computational load of OU pipelines may vary significantly, thus requiring an even higher number of them in order to perform proper load balancing. It is apparent that this strategy would be fruitful only in analyses where the computational load significantly exceeds the processing capabilities of the available sites, spanning execution times into days or weeks. In solely data-intensive workflows, with no computationally intensive component, under-utilization of multiprocessor systems may not ----- become apparent as storage and network throughput are the limiting factors. Otherwise, it will most likely severely impact performance. Evidently, a mechanism for achieving parallelism in the execution of an OU pipeline in a single site is required. Furthermore, in a heterogeneous environment of computational sites of varying processing power and OU pipelines of largely unequal computational loads, load balancing must be performed in order to map the OU pipelines into sites. To address these issues we propose a novel 2-stage scheduling approach which combines an external scheduler at stage 1 mapping the OU pipelines into sites and an internal to each site scheduler at stage 2 capable of achieving data and task parallelism when processing an OU pipeline. **_External scheduler_** The external scheduler is mainly concerned with performing load balancing of the OU pipelines across the set of computational resources. As both the OU pipelines and the computational sites are largely heterogeneous, the first step is performing an estimation regarding both the OU pipeline loads and the processing power of the sites. The second step, involves the utilization of the aforementioned estimations by the scheduling algorithm tasked with assigning the OU pipelines to the set of computational resources. In order to perform an estimation of the load of an OU pipeline, a rough estimation could be made based on the size of the OU input. A simple approach would be to use the disk file size in MB but that would most likely be misleading. A more accurate estimation could be derived by counting the number of instances, this approach too however is also inadequate as the complexity cannot be directly assessed in this way. In fact, the computational load can only be estimated by taking into account the type of information presented by the file, which is specific to its file type. For example, given a .fasta file containing protein sequences, the most accurate approach for estimating the complexity of a sequence alignment procedure would be to count the number of bases, rather than count the number of instances. Fortunately, the number of distinct file types found in the most common bioinformatics workflows is small, and therefore we have created functions for each file type that can perform an estimation of the computational load that corresponds to them. We already support formats of .fasta, .fastq and plain ASCII (such as tab-delimited sequence similarity files) among others. In order to better match the requirements of the data processing tasks to the available computational resources, the computational processing power of each site must also be assessed. This is accomplished by running a generic benchmark on each site which is actually a mini sample workflow that aims to estimate the performance of the site for similar workflows. 
The benchmarks we currently use are applicable to comparative genomics and pan-genome analysis approaches, and measure the multithreaded performance of the site, taking into account its number of CPU cores. We also use the generic tool UnixBench [41] to benchmark the sites when no similar sample workflow is available.

The problem can now be modeled as one of scheduling independent tasks of unequal load to processors of unequal computational power. As these tasks are independent, they can be approached as a bag of tasks. Scheduling bags of tasks has been extensively studied and many algorithms exist, derived from heuristic [46], list scheduling [20] or metaheuristic optimization approaches [30]. In this work we utilize one of the highest performing algorithms, the FPLT (fastest processor, largest task) algorithm. According to
Given the widely accepted assumption that the CPU cores of a given site have the same computational capabilities, a simple solution would be to launch a number of threads equal to the machine’s CPU count and divide the total number of input data, or else the instances, across them. This solution is in turn predicated on the assumption that the load assigned to a thread should directly correspond to the amount of data it has to process and as such is prone to variations. In our case however, as all required data exists within the same site, it is no longer desirable to distribute the data processing load among the threads in advance, as the data can be accessed by any thread at any time without any additional cost thus providing greater flexibility. Therefore, when considering the situation within a single sitel, our approach can be defined by the process of splitting the superset of all m _Insts of the OU pipeline into k_ subsets of fixed size n. The number of subsets is given when dividing m by n. Superset{Inst0, ..., Instm} = Subset1{Inst0, ..., Instn} ∪ ... ∪ Subsetk {Inst0, ..., Instn} (4) k = [m] (5) n Each given Subseti, is assigned to a thread responsible for completing the respective task. Initially the subsets are placed into a list in random order. Each thread attempts to process the next available subset and this continues recursively until all available subsets are exhausted. In order to synchronize this process and to ensure that no two threads process the same subset, a lock is established that monitors the list of subsets. Every time a thread attempts to obtain the next available subset it must first acquire the lock. If the lock is unavailable the thread is set to sleep in a waiting queue. If the lock is available, the thread acquires the requested subset and increases an internal counter that points to the next available subset. It then immediately releases the lock, an action that also wakes the first thread that may be present in the queue. The pseudocode describing the operation of the internal scheduler is presented in Algorithmic boxes 2 and 3. ----- As the probability of two threads completing the execution of a subset at exactly the same time is extremely low, the synchronization process has been proven experimentally to be very efficient, where most of the time there are no threads waiting on the queue. The average waiting time along with the time of acquiring and releasing the lock is usually minuscule. However, there is an important overhead that is associated with the initialization of the process that will complete the task. An accurate estimation of this overhead time is difficult to obtain as it is dependent on the actual processes being launched and the overall status of the operating system at any given time. We estimate this overhead to be around 300–1000 ms. A totalDelay parameter that indicates the estimated initialization delay involved in processing a given subset can be evaluated. This parameter can be constructed by multiplying the number k of subsets with the overhead parameter that reflects the average time wasted on synchronization and launching the respective processes, and dividing the result by the number of threads, as follows: overhead totalDelay = k ∗ (6) threadCount It becomes apparent that minimizing the _totalDelay time is equal to minimizing the_ number of subsets k. The minimum value of k is equal to the number of threads in which case the overhead penalty is suffered only once by each thread. 
However it is unwise ----- to set _k equal to the number of threads as the risk of unequally distributing the data_ between the threads far outweighs the delay penalty. We make the reasonable hypothesis that the execution times of chunks of fixed size n = 1 resemble a Log Normal distribution, which is typically encountered in processing times [4]. Our hypothesis was verified on an individual basis experimentally by running a BLAST procedure as presented in Fig. 1. BLAST is the most computationally intensive task of our use case study workflow presented in . Evidently, this does not apply to all tasks but is a reasonable hypothesis and a common observation in processing times. A Log Normal distribution appears approximately like a skewed to the right, positive values only, normal distribution. This particular distribution presented in Fig. 1 allows us to estimate that only 8.2 % of the processing times were twice as large as the average processing time. Moreover, less than 0.5 % of the processing times were larger than five times the average processing time. It can easily be asserted that from a given set size and below, it is highly unlikely for many of the slower processing times to appear within it. However, it must be noted that this already low probability is further reduced by the fact that this is a boundary situation, to be encountered by the end of the workflow where other threads have terminated. After experimentation we have established that an empirical rule to practically eliminate the chance is to set n equal to 0.01 % of the number m of instances. **Fig. 1 An experimental run presenting the execution times of subsets with a size of one, when our specific** membership function that involves BLAST alignment and phylogenetic profiling building (presented in "Use case study" section ) is run. It is apparent that the execution times follow a Log Normal distribution which is outlined by the red line ----- The delayTime % defined by Eq. 7 is the total time wasted as a percentage of the actual processing time. totalDelay delaytime % = m (7) n [∗] [avgProcessingTime][ ∗] [threads][ ∗] [100] Assuming that the average processing time, avgProcessingTime, of a single instance is at least two and a half times greater than the overhead time and the number of threads is at least eight, then by setting n at 0.01 % of m will lead to a delayTime % value equal to 0.05 % which is considered insignificant. We conclude that a value of _n approximating 0.01 % of_ _m is a reasonable compro-_ mise. In practice, other limitations to the size of the subset n may exist, that are related to the nature of the memberships functions involved and must be taken into account. For example, in processes using hash tables extensively or having significant memory requirements, a relatively high subset size would not be beneficial as there is risk for the hash tables to be overloaded resulting in poor performance and high RAM usage. It is evident that an accurate size n of the subsets cannot be easily calculated from a general formula as it may have specific constraints due to the actual processes involved. However, a general rule of thumb can be established of setting n around 0.01 % of m and is expected to work reasonably well for the majority of cases. It is however, classified as a parameter that can be optimized and thus its manipulation is encouraged on a use case basis. 
**Execution engine**

A number of requirements motivated us to implement a basic workflow execution engine, which was used in our experiments to validate our approach. These requirements are: the deployment on sites of containers that include all the necessary software and tools, graphical workflow description, secure connections over SSH tunneling and HTTPS, and no requirement for elevated user privileges when accessing sites. The execution environment comprises a number of computational sites with a UNIX-based operating system and a global, universally accessible cloud storage similar to Amazon S3, referred to as object storage. The object storage is used to download input data, upload final data and share data between sites. It is not used for storing intermediate data that temporarily exist within each site. We have implemented the proposed framework using Java 8 and Shell scripting in Ubuntu Linux 14.04.

The overall architecture is loosely based on a master/slave model, where a master node responsible for executing the external scheduler serves as the coordinator of actions from the beginning to the completion of a given workflow. The master node is supplied with basic information such as the description of the workflow and input data, the object storage and the computational sites. The workflow can be described as a directed acyclic graph (DAG) in the GraphML [7] language by specifying graph nodes corresponding to data and compute procedures and connecting them with edges as desired. To describe the workflow in a GUI environment, the user can use any of the freely distributed graph design software tools that support exporting to GraphML.

The only requirement for using a computational site is the existence of a standard user account and accessibility over the SSH protocol. Each site is initialized by establishing a secure SSH connection through which a Docker [28] container equipped with the software dependencies required to execute the workflow is fetched and deployed. Workflow execution on each site takes place within the container. The object storage access credentials are transferred to the containers and a local daemon is launched for receiving subsequent commands from the master. The daemon is responsible for initiating the internal scheduler and passing all received commands to it. Communication between the master and the daemons running within the Docker container on each site is encrypted and takes place over SSH tunneling. File transfers between sites and the object storage are also encrypted and take place over the HTTPS protocol.

**Use case study**

The selected case study utilized in validating our approach is from the field of comparative genomics, specifically the construction of the phylogenetic profiles of a set of genomes. Phylogenetic profiling is a bioinformatics technique in which the joint presence or joint absence of two traits across large numbers of genomes is used to infer a meaningful biological connection, such as the involvement of two different proteins in the same biological pathway [35, 37]. By definition, a phylogenetic profile of a genome is an array where each line corresponds to a single protein sequence belonging to the genome and records the presence or absence of the particular entity across a number of known genomes that participate in the study. The first step in building phylogenetic profiles involves the sequence alignment of the participating protein sequences of all genomes against themselves.
It is performed by the widely used NCBI BLAST tool [25] and the process is known as a BLAST all-vs-all procedure. Each protein is compared to all target sequences and two values are derived, the identity and the e-value. Identity refers to the extent to which two (nucleotide or amino acid) sequences have the same residues at the same positions in an alignment, and is often expressed as a percentage. E-value (or expectation value or expect value) represents the number of different alignments with scores equivalent to or better than a given threshold S that are expected to occur in a database search by chance. The lower the E-value, the more significant the score and the alignment. Running this process is extremely computationally demanding; its complexity is not straightforward to estimate [2], but can approach O(n²). For example, a simple sequence alignment between 0.5 million protein sequences can take up to a week on a single high-end personal computer. Even when employing high-performance infrastructures, such as a cluster, significant time as well as the expertise to both run and maintain a cluster-enabled BLAST variant are required. Furthermore, the output files consume considerable disk space, which for large analyses can easily exceed hundreds of GBs.

Based on the sequence alignment data, each phylogenetic profile requires the comparison and identification of all homologues across the different genomes in the study. The phylogenetic profiling procedure for each genome requires the sequence alignment data of all its proteins against the proteins of all other genomes. Its complexity is linear in the number of sequence alignment matches generated by BLAST. Different types of phylogenetic profiles exist, including binary, extended and best bi-directional, all three of which are constructed in our workflow procedure.

According to our data organization methodology, in this case proteins correspond to Insts and are grouped into OUs, which in this case are their respective genomes. Independent pipelines are formed for each OU, consisting firstly of the BLAST process involving the sequence alignment of the proteins of the OU against all other proteins of all OUs, and secondly of the three phylogenetic profile creation processes, which utilize the output of the first in order to create the binary, extended and best bi-directional phylogenetic profile of the genome corresponding to the OU. These pipelines are then scheduled according to the scheduling policy described earlier.

**Results and discussion**

A number of experiments have been performed in order to validate and evaluate our framework. This section is therefore divided into (a) the validation experiments, further discussed in the "Validation" subsection, where the methods outlined in the "Methods" section are validated, and (b) the comparison against Swift, a high performance framework, further discussed in the "Comparison against a high performance framework" subsection, where the advantages of our approach become apparent.

The computational resources used are presented in Table 1. Apart from the privately owned resources of our institution, the cloud resources consist of a number of virtual machines belonging to the European Grid Infrastructure (EGI) federated cloud and operated by project Okeanos [21] of GRNET (Greek Research and Technology Network).
Okeanos is based on the Synnefo (Greek for "cloud") open source cloud software, which uses Google Ganeti and other third-party open source software. Okeanos is the largest academic cloud in Greece, spanning more than 5400 active VMs and more than 500,000 spawned VMs. As the resources utilize different processors of unequal performance, their performance was compared to the processors of the cloud resources, which served as a baseline reference. As such, the number of CPUs of each site was translated to a number of baseline CPUs, so that a direct comparison can be performed. In this way, non-integer numbers appear in the number of baseline CPUs of each site. This combination of local, privately owned computational resources with cloud-based resources represents the typical use case we are addressing: individuals or research labs that wish to extend their computational infrastructure by adopting resources of one or multiple cloud vendors.

**Table 1 The pool of available computational resources along with their hardware type, number of threads and number of baseline processors are presented**

# Count | CPU type | # CPUs | # Baseline CPUs
1× | 2 × Intel Xeon E5 2660 @ 2.2 GHz | 24 | 21.7
1× | 2 × Intel Xeon E5 2660 @ 2.2 GHz | 12 | 16.7
1× | Intel i7 6700 @ 4.0 GHz | 8 | 15
1× | Intel i7 4790S @ 3.5 GHz | 8 | 11.3
10× | AMD Opteron 6172 @ 2.1 GHz | 8 | 8
Total: 14 | – | 132 | 144.7

All machines were equipped with more than 6 GB of RAM and were connected to the internet through a 100 MBps connection.

The input data used in our experiments consist of an extended plant pangenome of 64 plant genomes, including 39 cyanobacteria, for which the complete proteome was available. The total size was 268 MB and includes 619,465 protein sequences and 2.3 × 10⁸ base pairs. In order to accommodate our range of experiments, the data was divided into sub-datasets. It must be noted that, although the input data used may appear relatively small in file size, it can be very demanding to process, requiring weeks on a single personal computer. The particular challenge in this workflow is not the input size but the computational requirements in conjunction with the size of the output, as will become apparent in the following sections. The dataset consists of files downloaded from the online and publicly accessible databases of UniProt [10] and PLAZA [36] and can also be provided by our repositories upon request. The source code of the proposed framework along with the datasets utilized in this work can be found in our repository https://www.github.com/akintsakis/odysseus.

**Validation**

In order to experimentally validate the optimal subset size value, as outlined in the "Internal scheduler" section, and the overall scalability performance of our approach, a number of experiments were conducted utilizing the phylogenetic profiling use case workflow. All execution times reported below involve only the workflow runtime and do not include site initialization and code and initial data downloads, as these require a nearly constant time irrespective of both problem size and number of sites, and as such would distort the results and not allow for accurately measuring scaling performance. For reporting purposes, the total time for site initialization is approximately 3–5 min.
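The translation of heterogeneous CPUs into baseline CPUs can be illustrated as follows. The exact benchmark the authors used is not specified here, so the per-core score below is an assumed input in this sketch of ours:

```java
// Sketch of the baseline-CPU translation behind Table 1: a site's CPU
// count is scaled by how its cores compare to a reference cloud core,
// which is why non-integer baseline-CPU values appear.
final class BaselineCpus {
    static double toBaseline(int cpuCount, double coreScore, double referenceCoreScore) {
        return cpuCount * coreScore / referenceCoreScore;
    }

    public static void main(String[] args) {
        // e.g., an 8-core site whose cores score ~1.41x the reference core
        // would be counted as ~11.3 baseline CPUs, as in Table 1.
        System.out.printf("%.1f baseline CPUs%n", toBaseline(8, 1.41, 1.0));
    }
}
```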
**_Optimal subset value n_**

The phylogenetic profiling workflow was executed with an internal scheduler subset size value n of 0.0010, 0.0025, 0.0050, 0.0100, 0.0250, 0.0500 and 0.2500 % as a percentage of the total number of protein sequences, in three distinct datasets comprising 189,378, 264,088 and 368,949 protein sequences. All sites presented in Table 1, except for the first one, participated in this experiment.

The site execution times for each subset size for all three datasets are presented in boxplot form in Fig. 2. They verify the hypothesis presented in the "Internal scheduler" subsection: we observe that the fastest execution time is achieved when the subset size n is set close to our empirical estimation of 0.01 % of the total dataset size. It is apparent that smaller or larger values of n lead to increased execution times. In all three datasets analyzed we observe the same behavior and pattern of performance degradation when diverging from the optimal subset size.

Smaller values of n lead to substantially longer processing times, mainly due to the delay effect presented in Eq. 7. As n increases, the effect gradually attenuates and is diminished for values larger than 0.0050 % of the dataset. Larger subset sizes impact performance negatively, with the largest size tested, 0.2500 %, yielding the slowest execution time overall. This can be attributed to the fact that for larger subset sizes the load may not be optimally balanced, and some threads that were assigned disproportionately higher load might prolong the overall total execution time while other threads are idle. Additionally, large subset sizes can lead to reduced opportunities for parallelization, especially on smaller OUs that are broken into fewer chunks than the available threads on site, thus leaving some threads idle.

**Fig. 2 The execution times of all 13 participating sites in boxplot form are presented for the phylogenetic profiling workflow when executed with an internal scheduler subset size value n of 0.0010, 0.0025, 0.0050, 0.0100, 0.0250, 0.0500 and 0.2500 % as a percentage of the total number of protein sequences in three distinct datasets comprising 189,378, 264,088 and 368,949 protein sequences. The optimal value of n leading to the fastest execution times is 0.01 % of the input dataset**

The average memory usage of all execution sites for each subset size for all three datasets is presented in Fig. 3. It is apparent that both the subset size and the size of the dataset increase memory consumption. Between smaller subset sizes, differences in memory usage are insignificant and inconsistent, and thus difficult to measure. As we reach the larger subsets, the differences become more apparent. Due to the current workflow not being memory intensive, increases in memory usage are only minor. However, in a memory-demanding workflow these differences could be substantial.

**Fig. 3 Average memory utilization of all 13 participating sites of the phylogenetic profiling workflow when executed with an internal scheduler subset size value n of 0.0010, 0.0025, 0.0050, 0.0100, 0.0250, 0.0500 and 0.2500 % as a percentage of the total number of protein sequences in three distinct datasets comprising 189,378, 264,088 and 368,949 protein sequences. Both subset size and dataset size seem to increase memory consumption, though the differences are minimal due to the workflow not being memory demanding**
Although the size of the dataset to be analyzed cannot be tuned, the subset size can, and it should be taken into account in order to remain within the set memory limits. A subset size n value of 0.0100 % is again a satisfactory choice when it comes to keeping memory requirements on the low end. Although we have validated that an adequate and cost-effective approach is to set the value of n at 0.0100 % of the total size of the dataset, we must state that the optimal selection of n is also largely influenced by the type of workflow, and thus its manipulation is encouraged on a use case basis.

**_Execution time reduction scaling_**

In this experiment the performance scalability of our approach was evaluated. For the needs of this experiment, a subset of the original dataset was formed, consisting of only the 39 cyanobacteria. The purpose was to evaluate the speed-up and efficiency and compare them to the ideal case, in which a linear increase in the total available processing power would lead to an equal reduction in processing time. Speed-up S(p) and efficiency E(p) are fundamental metrics for measuring the performance of parallel applications and are defined in the literature as follows:

S(p) = T(1) / T(p) (8)

where T(1) is the execution time with one processor and T(p) is the execution time with p processors.

E(p) = T(1) / (p · T(p)) (9)

The above equations assume that p processors of equal computational power are used. As in our case we use resources of uneven computational performance, we translate their processing power into baseline processor units. Consequently, p can take continuous and not discrete values, corresponding to the increase in computational power as measured in baseline processors.

All sites presented in Table 1 participated in this experiment. The sites were sorted in ascending order according to their multithreaded performance, and the workflow was executed a number of times equal to the number of sites, increasing the number of participating sites one at a time. The execution times of all sites are presented in boxplot form for all workflow runs in Fig. 4. The X axis represents the total computational power score of the sites participating in the workflow and the Y axis, in logarithmic scale, represents the site execution time in seconds. The dashed magenta line is the ideal workflow execution time (corresponding to linear speed-up) and it intersects the mean values of all boxplots.

**Fig. 4 The workflow execution times of various configurations of participating sites are presented in boxplot form when the cyanobacteria phylogenetic profiling validation workflow is executed. Initially, only one site participates in the workflow and the execution time is the longest, as seen on the left of the figure. As new sites are gradually added the execution times decrease. The dashed magenta line represents the ideal reduction in time based on the increase in total computational resources**

As we can see, variations in site execution time for each workflow run are consistent, with no large deviations present. There are outliers in some workflow runs towards the lower side, where one site would terminate before others as there are no more OU pipelines to process. Despite being outlier values, however, they do not lie too far away in absolute terms. Execution times fall consistently as new sites are added and computational resources are increased.
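As a worked check of Eqs. 8 and 9, the following sketch of ours applies them to the first and last runs reported in Table 2 below, inferring T(1) from the 8-baseline-CPU run, which by definition has S(p) = 8:

```java
// Worked check of Eqs. 8 and 9 against Table 2.
final class Scalability {
    static double speedUp(double t1, double tp)              { return t1 / tp; }       // Eq. 8
    static double efficiency(double t1, double p, double tp) { return t1 / (p * tp); } // Eq. 9

    public static void main(String[] args) {
        double t1 = 8.0 * 35_601;   // implied single-baseline-CPU time in seconds
        double p  = 144.7;          // baseline CPUs of the largest configuration
        double tp = 2_029;          // its average execution time in seconds
        // Prints S(p) = 140.4, E(p) = 0.97, matching Table 2 within rounding.
        System.out.printf("S(p) = %.1f, E(p) = %.2f%n", speedUp(t1, tp), efficiency(t1, p, tp));
    }
}
```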
A closer inspection of the results can be found in Table 2, where the execution times and the average and makespan speed-up and efficiency are analyzed. As expected, the average speed-up is almost identical to the ideal case, where the speed-up is equal to p and efficiency approaches the optimal value of 1. This was to be expected, as our approach does not introduce any overhead and keeps file transfers to a minimum, almost as if all the processing took place on a single site. The minuscule variations observed can be attributed to random variations in the processing power of our sites and/or our benchmarking of the sites, and to random events controlled by the OS. It can be observed that when using a high number of CPUs the efficiency tends to marginally drop to 0.97. This is attributed to the fact that the data intensive part of the workflow is limited by disk throughput and cannot be accelerated by increasing the CPU count. Although the data intensive part is approximately 3–5 % of the total workflow execution time when excluding potential file transfers, using such a high number of CPUs for this workflow begins to approach the boundaries of Amdahl's law [19].

On average, the makespan efficiency is 0.954 for all runs. It can be presumed that the makespan speed-up and efficiency tend to reach lower values when a higher number of sites is involved. This is to be expected, as some sites terminate faster than others when the pool of OU pipelines is exhausted, and as such their resources are no longer utilized. This effect becomes apparent mostly when using a very high number of CPUs for the given workflow, resulting in a workflow completion time of less than 30 min. Although it is apparent in this experiment, we are confident it will not be an issue in real world cases, as using 14 sites for this workflow can be considered overkill and therefore slightly inefficient.

**Table 2 Speed-up and efficiency as scalability metrics of our approach**

#CPUs | Average time (s) | Min (s) | Max (s) | Std (s) | Average S(p) | Average E(p) | Makespan S(p) | Makespan E(p)
8.0 | 35,601 | 35,601 | 35,601 | 0.0 | 8.00 | 1.00 | 8.00 | 1.00
16.0 | 17,834 | 17,629 | 18,037 | 288.3 | 15.9 | 0.99 | 15.8 | 0.98
24.0 | 11,866 | 11,845 | 11,880 | 18.1 | 24.0 | 1.00 | 24.0 | 0.99
32.0 | 8,930 | 8,628 | 9,039 | 201.4 | 31.9 | 0.99 | 31.5 | 0.98
40.0 | 7,127 | 6,768 | 7,240 | 201.5 | 39.9 | 0.99 | 39.3 | 0.98
48.0 | 5,929 | 5,713 | 6,159 | 234.7 | 48.0 | 1.00 | 46.2 | 0.96
56.0 | 5,095 | 4,834 | 5,285 | 206.8 | 55.9 | 0.99 | 53.9 | 0.96
64.0 | 4,467 | 4,097 | 4,546 | 150.5 | 63.8 | 0.99 | 62.6 | 0.97
72.0 | 3,968 | 3,767 | 4,207 | 178.0 | 71.7 | 0.99 | 67.7 | 0.94
80.0 | 3,571 | 3,122 | 3,688 | 164.5 | 79.7 | 0.99 | 77.1 | 0.96
91.3 | 3,138 | 2,970 | 3,378 | 182.7 | 90.7 | 0.99 | 84.3 | 0.92
106.2 | 2,741 | 2,561 | 2,888 | 85.0 | 103.9 | 0.98 | 98.5 | 0.93
123.0 | 2,378 | 2,080 | 2,604 | 191.8 | 119.8 | 0.97 | 109.3 | 0.89
144.7 | 2,029 | 1,937 | 2,183 | 69.4 | 140.3 | 0.97 | 130.4 | 0.90

In general, the average speed-up and efficiency are the metrics of interest when evaluating the system's cost efficiency and energy savings, as our approach automatically shuts down and releases the resources of sites that have completed their work. The makespan speed-up corresponds to the actual completion time of the workflow, when all sites have terminated and the resulting data is available. Our approach attempts to optimize the makespan speed-up, but with no compromise in the average speed-up, i.e. the system's cost efficiency.
We can conclude from this experiment that the average speed-up is close to ideal and that the makespan speed-up is inferior to the ideal case by about 5 % on average, approaching 10 % when a high number of resources is used relative to the computational burden of the workflow.

**Comparison against a high performance framework**

To establish the advantages of our approach against existing approaches, we chose to execute our use case phylogenetic profiling workflow in Swift and perform a comparison. Swift [49] is an implicitly parallel programming language that allows the writing of scripts that distribute program execution across distributed computing resources [47], including clusters, clouds, grids, and supercomputers. Swift is one of the highest performing frameworks for executing bioinformatics workflows in a distributed computing environment. The reason we chose Swift is that it is a well established framework that emphasizes parallelization performance and is in use in a wide range of applications, including bioinformatics. Swift has also been integrated [27] into the popular bioinformatics platform Galaxy, in order to allow for the utilization of distributed resources. Although perfectly capable of achieving parallelization, Swift is unable to capture the underlying data characteristics of the bioinformatics workflows addressed in this work, thus leading to unnecessary file transfers that increase execution times and costs and may sometimes even become overwhelming to the point of causing job failures.

The testing environment included all sites presented in Table 1 except for the first one, as we were unable to set the system environment variables required by Swift, not having elevated-privilege access to it. In the absence of a pre-installed shared file system, the Swift filesystem was specified as local, where all data were staged from the site where Swift was executing. This is the default Swift option that is compatible with all execution environments and does not require a preset shared file system. The maximum number of jobs on each site was set equal to the site's number of CPUs. Three datasets were chosen as input to the phylogenetic profiling workflow, namely the total of 64 plant genomes and its subsets of 58 and 52 genomes. The datasets were chosen with the purpose of approximately doubling the execution time of each workflow run when compared to the previous one. Uptime, system load and network traffic, among others, were monitored on each site. In order to perform a cost analysis, we utilized parameters from the Google Cloud Compute Engine pricing model, according to which the cost to operate the computational resources is 0.232$ per hour per 8 baseline CPUs and the cost of network traffic is 0.12$ per GB, as per Google's internet egress worldwide cheapest zone policy.

The makespan execution time, total network traffic and costs of our approach against Swift when executing the phylogenetic profiling workflow for the three distinct datasets are presented in Table 3. The values presented are average values of 3 execution runs. As can be seen, for workflow runs 1 and 2 Swift is approximately 20 % slower in makespan, and 16 % slower in the case of workflow run 3. This is attributed mostly to the time lost waiting for the file transfers to take place in the case of Swift.
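The stated pricing parameters are enough to reproduce the cost rows of Table 3 below; this helper (a hypothetical sketch of ours, not the paper's code) encodes them directly:

```java
// Cost model from the text: $0.232/hour per 8 baseline CPUs for VMs,
// plus $0.12/GB for worst-case egress network traffic.
final class CostModel {
    static final double DOLLARS_PER_BASELINE_CPU_HOUR = 0.232 / 8.0;
    static final double DOLLARS_PER_GB_EGRESS = 0.12;

    static double provisioningCost(double baselineCpus, double uptimeHours) {
        return baselineCpus * uptimeHours * DOLLARS_PER_BASELINE_CPU_HOUR;
    }

    static double networkCost(double trafficGb) {
        return trafficGb * DOLLARS_PER_GB_EGRESS;
    }

    public static void main(String[] args) {
        // Worst-case network cost of Swift in workflow 1: 57.525 GB of traffic.
        System.out.printf("%.2f $%n", networkCost(57.525)); // prints 6.90 $, as in Table 3
    }
}
```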
It must be noted that we were unable to successfully execute workflow 3 until termination with Swift, due to network errors near the end of the workflow that we attribute to the very large number of required file transfers. Had the workflow reached termination, we expect Swift would have been about 17–18 % slower. As the particular use case workflow is primarily computationally intensive, an increase in the input size of the workflow increases the computational burden faster than the data intensive part; thus the performance gap is slightly smaller in the case of workflow 3. The total network traffic includes all inbound and outbound network traffic of all sites. It is apparent that it is significantly higher in Swift, which confirms that the increase in total execution time is due to file transfers.

Regarding the cost of provisioning the VMs, it was calculated by multiplying the uptime of each site with the per-processor baseline cost of operation. The external scheduler of our approach will release available resources when the pool of OU pipelines is exhausted, thus leading to cost savings that can range from 10 to 25 % when compared to keeping all resources active until the makespan time. By contrast, this feature is not supported by Swift, and as such in this case all sites are active until makespan time, leading to increased costs. The cost savings of our approach regarding the provisioning of VMs were higher than 40 % in all three workflows.

The cost of network transfers is difficult to interpret, as it is dependent on the locations and the providers of the computational resources. The cost presented here is a worst case estimate that would apply if all network traffic between sites were charged at the nominal rate. That is not always true; for example, if all sites were located within the same cloud facility of one vendor there would be no cost at all for file transfers. However, the transfers would still slow down the workflow, leading to increased uptime costs, unless the sites were connected via a high speed link like InfiniBand, often found in supercomputer configuration environments. In a hybrid cloud environment, which this work addresses, where computational sites belong to different cloud vendors and private infrastructures, the file transfer cost can be significant and may even approach the worst case scenario. In total, our approach is significantly more cost effective than Swift, which can be anywhere from 40 to 47 % to more than 120 % more expensive, depending on the pricing of network file transfers.

**Table 3 Execution time, network traffic and cost comparison of our approach against Swift**

Metric | Workflow 1 (1.16 × 10⁸ bases) | Workflow 2 (1.67 × 10⁸ bases) | Workflow 3 (2.3 × 10⁸ bases)
Makespan, Ours | 9,302 s | 18,278 s | 33,752 s
Makespan, Swift | 11,194 s | 21,933 s | 39,229 s
Makespan, Diff | +20.3 % | +19.9 % | +16.2 %
Network total traffic, Ours | 0.183 GB | 0.338 MB | 0.403 GB
Network total traffic, Swift | 57.525 GB | 88.944 GB | 140.982 GB
Cost of provisioning VMs, Ours | 8.86 $ | 17.6 $ | 32.63 $
Cost of provisioning VMs, Swift | 13.05 $ | 25.56 $ | 45.73 $
Cost of provisioning VMs, Diff | +47.2 % | +45.2 % | +40.0 %
Cost of network transfers, Ours | 0.02 $ | 0.04 $ | 0.05 $
Cost of network transfers, Swift | 6.90 $ | 10.67 $ | 16.91 $
Total cost, Ours | 8.88 $ | 17.64 $ | 32.68 $
Total cost, Swift | 19.95 $ | 36.23 $ | 62.64 $
Total cost, Diff | +124.6 % | +105.3 % | +91.6 %

To further analyze the behavior of our framework against Swift, in Fig. 5 we present the system load and network activity of all sites when executing the phylogenetic profiling workflow with the 64 genome input dataset for both our approach and Swift.
The Swift system load and network activity are denoted by the blue and red lines respectively, while the system load and network activity of our approach are denoted by the green and magenta lines respectively. Figure 6 plots each line separately for site 0, allowing for increased clarity. A system load value of 1 means that the site is fully utilized, while values higher than 1 mean that the site is overloaded. A network activity value of 1 corresponds to a utilization of 100 MBps. The network activity reported is both incoming and outgoing, so the maximum value it can reach is 2, which means 100 MBps of incoming and outgoing traffic simultaneously, though this is difficult to achieve due to network switch limitations.

**Fig. 5 System load and network activity of all sites when executing the phylogenetic profiling workflow with the 64 genome input dataset for both our approach and Swift**

**Fig. 6 System load and network activity of site 0 when executing the phylogenetic profiling workflow with the 64 genome input dataset for both our approach and Swift**

Regarding our approach, the network traffic magenta line is barely visible, marking only a few peaks that coincide with drops in system load as denoted by the green line. This is to be expected, as network traffic takes place while downloading the input data of the next OU pipeline and simultaneously uploading the output of the just-processed OU pipeline, during which the CPU is mostly inactive. It is apparent that the number of sections between the load drops is equal to the number of OU pipelines, 64 in this case. Other than that, the system load is consistently at a value of 1.

In the Swift execution case, load values are slightly higher than 1 on all sites except site 1, which has 12 instead of 8 CPUs. This can be attributed to the slightly increased computational burden of submitting the jobs remotely and transferring inputs and outputs to the main site. The internal scheduler of our approach, operating on each site, can be more efficient. Network traffic is constant and on the low end for the duration of the workflow, as data is transferred to and from the main site. However, near the end of the workflow, system load drops and network traffic increases dramatically, especially on site 0, which is the main site from which Swift operates and stages all file transfers to and from the other sites. As the computationally intensive part of most OU pipelines comes to an end, the data intensive part then requires a high number of file transfers that overloads the network and creates a bottleneck. This effect significantly slows down the makespan and is mostly responsible for the increased execution times of Swift and the costly file transfers. In large workflows where the data to be moved is hundreds of GBs, it can even lead to instability due to network errors.

**Conclusions and future work**

In this work, we presented a versatile framework for optimizing the parallel execution of data-intensive bioinformatics workflows in hybrid cloud environments. The advantage of our approach is that it achieves superior time and cost efficiency compared to existing solutions through the minimization of file transfers between sites. It accomplishes that through the combination of a data management methodology that organizes the workflow into pipelines with minimal data interdependencies, along with a scheduling policy for mapping their execution onto a set of heterogeneous distributed resources comprising a hybrid cloud.
Furthermore, we compared our methodology with Swift, a state of the art high performance framework, and achieved superior cost and time efficiency in our use case workflow. By minimizing file transfers, the total workflow execution time is reduced, directly decreasing costs based on the uptime of computational resources. Costs can also decrease indirectly, as file transfers can be costly, especially in hybrid clouds where resources are not located within the facility of a single cloud vendor. We are confident that our methodology can be applied to a wide range of bioinformatics workflows sharing similar characteristics with our use case study. We are currently working on expanding our use case basis by implementing workflows in the fields of metagenomics, comparative genomics, and haplotype analysis according to our methodology. Additionally, we are improving our load estimation functions so as to more accurately capture the computational load of a given pipeline through an evaluation of the initial input.

In the era of Big data, cost-efficient high performance computing is proving to be the only viable option for most scientific disciplines [14]. Bioinformatics is one of the most representative fields in this area, as the data explosion has overwhelmed current hardware capabilities. The rate at which new data is produced is expected to increase significantly faster than the advances, and the cost reductions, in hardware computational capabilities. Data-aware optimization can be a powerful weapon in our arsenal when it comes to utilizing the flood of data to advance science and to provide new insights.

**Authors' contributions**
AMK and FEP conceived and designed the study and drafted the manuscript. AMK implemented the platform as a software solution. PAM participated in the project design and revision of the manuscript. AMK and FEP analyzed and interpreted the results and coordinated the study. FEP edited the final version of the manuscript. All authors read and approved the final manuscript.

**Acknowledgements**
This work used the European Grid Infrastructure (EGI) through the National Grid Infrastructure NGI_GRNET - HellasGRID. We also thank Dr. Anagnostis Argiriou (INAB-CERTH) for access to their computational infrastructure.

**Competing interests**
The authors declare that they have no competing interests.

Received: 17 August 2016 Accepted: 11 October 2016

**References**
1. Afgan E, Baker D, Coraor N, Chapman B, Nekrutenko A, Taylor J. Galaxy CloudMan: delivering cloud compute clusters. BMC Bioinform. 2010;11(Suppl 12):S4.
2. Altschul SF, Gish W, Miller W, Myers EW, Lipman DJ. Basic local alignment search tool. J Mol Biol. 1990;215(3):403–10.
3. Angiuoli SV, Matalka M, Gussman A, Galens K, Vangala M, Riley DR, Arze C, White JR, White O, Fricke WF. CloVR: a virtual machine for automated and portable sequence analysis from the desktop using cloud computing. BMC Bioinform. 2011;12(1):356.
4. Baker KR, Trietsch D. Principles of sequencing and scheduling. Hoboken: Wiley; 2013.
5. Berthold MR, Cebron N, Dill F, Gabriel TR, Kötter T, Meinl T, Ohl P, Sieb C, Thiel K, Wiswedel B. KNIME: the Konstanz Information Miner. In: Data analysis, machine learning and applications. Berlin: Springer; 2008. p. 319–26.
6. Bocchi E, Mellia M, Sarni S. Cloud storage service benchmarking: methodologies and experimentations. In: Cloud networking (CloudNet), 2014 IEEE 3rd international conference on, IEEE; 2014. p. 395–400.
7. Brandes U, Eiglsperger M, Herman I, Himsolt M, Marshall MS.
GraphML progress report: structural layer proposal. In: Graph drawing. Berlin: Springer; 2001. p. 501–12.
8. Bux M, Leser U. Parallelization in scientific workflow management systems. 2013. arXiv preprint arXiv:1303.7195.
9. Chong Z, Ruan J, Wu CI. Rainbow: an integrated tool for efficient clustering and assembling RAD-seq reads. Bioinformatics. 2012;28(21):2732–7.
10. Consortium U, et al. The universal protein resource (UniProt). Nucleic Acids Res. 2008;36(suppl 1):D190–5.
11. De Oliveira D, Ocaña KA, Ogasawara E, Dias J, Gonçalves J, Baião F, Mattoso M. Performance evaluation of parallel strategies in public clouds: a study with phylogenomic workflows. Future Gener Comput Syst. 2013;29(7):1816–25.
12. Dean J, Ghemawat S. MapReduce: simplified data processing on large clusters. Commun ACM. 2008;51(1):107–13.
13. Deelman E, Singh G, Su MH, Blythe J, Gil Y, Kesselman C, Mehta G, Vahi K, Berriman GB, Good J, et al. Pegasus: a framework for mapping complex scientific workflows onto distributed systems. Sci Progr. 2005;13(3):219–37.
14. Duarte AM, Psomopoulos FE, Blanchet C, Bonvin AM, Corpas M, Franc A, Jimenez RC, de Lucas JM, Nyrönen T, Sipos G, et al. Future opportunities and trends for e-infrastructures and life sciences: going beyond the grid to enable life science data analysis. Front Genet. 2015;6.
15. Emeakaroha VC, Maurer M, Stern P, Łabaj PP, Brandic I, Kreil DP. Managing and optimizing bioinformatics workflows for data analysis in clouds. J Grid Comput. 2013;11(3):407–28.
16. Gentleman RC, Carey VJ, Bates DM, Bolstad B, Dettling M, Dudoit S, Ellis B, Gautier L, Ge Y, Gentry J, et al. Bioconductor: open software development for computational biology and bioinformatics. Genome Biol. 2004;5(10):R80.
17. Goecks J, Nekrutenko A, Taylor J, et al. Galaxy: a comprehensive approach for supporting accessible, reproducible, and transparent computational research in the life sciences. Genome Biol. 2010;11(8):R86.
18. Gurtowski J, Schatz MC, Langmead B. Genotyping in the cloud with Crossbow. Curr Prot Bioinform. 2012:15–3.
19. Hill MD, Marty MR. Amdahl's law in the multicore era. Computer. 2008;7:33–8.
20. Iosup A, Sonmez O, Anoep S, Epema D. The performance of bags-of-tasks in large-scale distributed systems. In: Proceedings of the 17th international symposium on high performance distributed computing. New York: ACM; 2008. p. 97–108.
21. Koukis V, Venetsanopoulos C, Koziris N. ~Okeanos: building a cloud, cluster by cluster. IEEE Internet Comput. 2013;3:67–71.
22. Krampis K, Booth T, Chapman B, Tiwari B, Bicak M, Field D, Nelson KE. Cloud BioLinux: pre-configured and on-demand bioinformatics computing for the genomics community. BMC Bioinform. 2012;13(1):42.
23. Litzkow MJ, Livny M, Mutka MW. Condor-a hunter of idle workstations. In: Distributed computing systems, 8th international conference on, IEEE; 1988. p. 104–11.
24. Liu B, Madduri RK, Sotomayor B, Chard K, Lacinski L, Dave UJ, Li J, Liu C, Foster IT. Cloud-based bioinformatics workflow platform for large-scale next-generation sequencing analyses. J Biomed Inform. 2014;49:119–33.
25. Lobo I. Basic local alignment search tool (BLAST). Nature Educ. 2008;1(1):215.
26. Ludäscher B, Altintas I, Berkley C, Higgins D, Jaeger E, Jones MB, Lee EA, Tao J, Zhao Y. Scientific workflow management and the Kepler system. Concurr Comput Pract Exp. 2006;18(10):1039–65.
27. Maheshwari K, Rodriguez A, Kelly D, Madduri R, Wozniak J, Wilde M, Foster I. Enabling multi-task computation on Galaxy-based gateways using Swift.
In: Cluster computing (CLUSTER), 2013 IEEE international conference on, IEEE; 2013. p. 1–3.
28. Merkel D. Docker: lightweight Linux containers for consistent development and deployment. Linux J. 2014;2014(239):2.
29. Minevich G, Park DS, Blankenberg D, Poole RJ, Hobert O. CloudMap: a cloud-based pipeline for analysis of mutant genome sequences. Genetics. 2012;192(4):1249–69.
30. Moschakis IA, Karatza HD. Multi-criteria scheduling of bag-of-tasks applications on heterogeneous interlinked clouds with simulated annealing. J Syst Soft. 2015;101:1–14.
31. Naccache SN, Federman S, Veeraraghavan N, Zaharia M, Lee D, Samayoa E, Bouquet J, Greninger AL, Luk KC, Enge B, et al. A cloud-compatible bioinformatics pipeline for ultrarapid pathogen identification from next-generation sequencing of clinical samples. Genome Res. 2014;24(7):1180–92.
32. Nagasaki H, Mochizuki T, Kodama Y, Saruhashi S, Morizaki S, Sugawara H, Ohyanagi H, Kurata N, Okubo K, Takagi T, et al. DDBJ read annotation pipeline: a cloud computing-based pipeline for high-throughput analysis of next-generation sequencing data. DNA Res. 2013;dst017.
33. Ocaña KA, De Oliveira D, Dias J, Ogasawara E, Mattoso M. Designing a parallel cloud based comparative genomics workflow to improve phylogenetic analyses. Future Gener Comput Syst. 2013;29(8):2205–19.
34. Oinn T, Addis M, Ferris J, Marvin D, Senger M, Greenwood M, Carver T, Glover K, Pocock MR, Wipat A, et al. Taverna: a tool for the composition and enactment of bioinformatics workflows. Bioinformatics. 2004;20(17):3045–54.
35. Pellegrini M, Marcotte EM, Thompson MJ, Eisenberg D, Yeates TO. Assigning protein functions by comparative genome analysis: protein phylogenetic profiles. Proc Natl Acad Sci. 1999;96(8):4285–8.
36. Proost S, Van Bel M, Sterck L, Billiau K, Van Parys T, Van de Peer Y, Vandepoele K. PLAZA: a comparative genomics resource to study gene and genome evolution in plants. Plant Cell. 2009;21(12):3718–31.
37. Psomopoulos FE, Mitkas PA, Ouzounis CA, Promponas VJ, et al. Detection of genomic idiosyncrasies using fuzzy phylogenetic profiles. PLoS One. 2013;8(1):e52854.
38. Reid JG, Carroll A, Veeraraghavan N, Dahdouli M, Sundquist A, English A, Bainbridge M, White S, Salerno W, Buhay C, et al. Launching genomics into the cloud: deployment of Mercury, a next generation sequence analysis pipeline. BMC Bioinform. 2014;15(1):30.
39. Rice P, Longden I, Bleasby A, et al. EMBOSS: the European Molecular Biology Open Software Suite. Trends Genet. 2000;16(6):276–7.
40. Schatz MC. CloudBurst: highly sensitive read mapping with MapReduce. Bioinformatics. 2009;25(11):1363–9.
41. Smith B, Grehan R, Yager T, Niemi D. Byte-unixbench: a Unix benchmark suite. 2011.
42. Sreedharan VT, Schultheiss SJ, Jean G, Kahles A, Bohnert R, Drewe P, Mudrakarta P, Görnitz N, Zeller G, Rätsch G. Oqtans: the RNA-seq workbench in the cloud for complete and reproducible quantitative transcriptome analysis. Bioinformatics. 2014:btt731.
43. Stajich JE, Block D, Boulez K, Brenner SE, Chervitz SA, Dagdigian C, Fuellen G, Gilbert JG, Korf I, Lapp H, et al. The BioPerl toolkit: Perl modules for the life sciences. Genome Res. 2002;12(10):1611–8.
44. Tang W, Wilkening J, Desai N, Gerlach W, Wilke A, Meyer F. A scalable data analysis platform for metagenomics. In: Big data, 2013 IEEE international conference on, IEEE; 2013. p. 21–6.
45. Wall DP, Kudtarkar P, Fusaro VA, Pivovarov R, Patil P, Tonellato PJ. Cloud computing for comparative genomics. BMC Bioinform. 2010;11(1):259.
46. Weng C, Lu X.
Heuristic scheduling for bag-of-tasks applications in combination with QoS in the computational grid. Future Gener Comput Syst. 2005;21(2):271–80.
47. Wilde M, Hategan M, Wozniak JM, Clifford B, Katz DS, Foster I. Swift: a language for distributed parallel scripting. Parallel Comput. 2011;37(9):633–52.
48. Wolstencroft K, Haines R, Fellows D, Williams A, Withers D, Owen S, Soiland-Reyes S, Dunlop I, Nenadic A, Fisher P, et al. The Taverna workflow suite: designing and executing workflows of web services on the desktop, web or in the cloud. Nucleic Acids Res. 2013:gkt328.
49. Zhao Y, Hategan M, Clifford B, Foster I, Von Laszewski G, Nefedova V, Raicu I, Stef-Praun T, Wilde M. Swift: fast, reliable, loosely coupled parallel computation. In: Services, 2007 IEEE Congress on, IEEE; 2007. p. 199–206.
{ "disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.1186/s40537-016-0055-2?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1186/s40537-016-0055-2, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "CCBY", "status": "GOLD", "url": "https://journalofbigdata.springeropen.com/counter/pdf/10.1186/s40537-016-0055-2" }
2,016
[ "JournalArticle" ]
true
2016-10-21T00:00:00
[ { "paperId": "1f18c130264b82562d1f5b93cdb7463f33d23028", "title": "Principles of Sequencing and Scheduling" }, { "paperId": "67b4c7080ffc94ccc495a37aa6dcbe8a34302b05", "title": "Future opportunities and trends for e-infrastructures and life sciences: going beyond the grid to enable life science data analysis" }, { "paperId": "ab15e4a5e569e000675883d0b2374c212f8b4420", "title": "Multi-criteria scheduling of Bag-of-Tasks applications on heterogeneous interlinked clouds with simulated annealing" }, { "paperId": "bd0cf8ceaa409b1bde3b4223b2e897da8c467f95", "title": "Cloud storage service benchmarking: Methodologies and experimentations" }, { "paperId": "d4d6bef7e3df07197a9882a89f140a394e3c6a5c", "title": "A cloud-compatible bioinformatics pipeline for ultrarapid pathogen identification from next-generation sequencing of clinical samples" }, { "paperId": "e0a622bd6e897e8597d82b379486e36133e0838d", "title": "Cloud-based bioinformatics workflow platform for large-scale next-generation sequencing analyses" }, { "paperId": "875d90d4f66b07f90687b27ab304e04a3f666fc2", "title": "Docker: lightweight Linux containers for consistent development and deployment" }, { "paperId": "c4d05bf0062d648800465d13b65bb0d81b5f7b7b", "title": "Launching genomics into the cloud: deployment of Mercury, a next generation sequence analysis pipeline" }, { "paperId": "d03293cdd2079ab23b881728f81aecb8d0a3c40b", "title": "Oqtans: the RNA-seq workbench in the cloud for complete and reproducible quantitative transcriptome analysis" }, { "paperId": "7f302d785d779b5ebf34b76bc4dfb9150e0d2cd2", "title": "A scalable data analysis platform for metagenomics" }, { "paperId": "98f769c1584c7fac8b89bd929a3856b6ecac7314", "title": "Designing a parallel cloud based comparative genomics workflow to improve phylogenetic analyses" }, { "paperId": "37a878ddf4893b768580a7cc3d50a0adac2043e4", "title": "Managing and Optimizing Bioinformatics Workflows for Data Analysis in Clouds" }, { "paperId": "735bbb3782fade94e053b46a179b82e53e06f063", "title": "Enabling multi-task computation on Galaxy-based gateways using swift" }, { "paperId": "8ba0dd6ce4fa74ccce43034cdec7de5a3871ad2c", "title": "Performance evaluation of parallel strategies in public clouds: A study with phylogenomic workflows" }, { "paperId": "07b62aa82b3cc693873e7cb4f0fa6501ad25ea3c", "title": "DDBJ Read Annotation Pipeline: A Cloud Computing-Based Pipeline for High-Throughput Analysis of Next-Generation Sequencing Data" }, { "paperId": "a0b789d1f3afe9cf6fadfaa8f120479181f0c4e5", "title": "The Taverna workflow suite: designing and executing workflows of Web Services on the desktop, web or in the cloud" }, { "paperId": "99c40f08918f70aab65ef62fe0f2531f9f33ca93", "title": "~okeanos: Building a Cloud, Cluster by Cluster" }, { "paperId": "f5c0924d91e8e8b9a6ea6cae8f73e604b39b407b", "title": "Parallelization in Scientific Workflow Management Systems" }, { "paperId": "b5eddfe356dc226e146df0c63dac55846b83610f", "title": "Detection of Genomic Idiosyncrasies Using Fuzzy Phylogenetic Profiles" }, { "paperId": "5a4923537633e6e3621203f3807d736d534a5fea", "title": "CloudMap: A Cloud-Based Pipeline for Analysis of Mutant Genome Sequences" }, { "paperId": "2d5376233101455b91772aeea1e2d84320588dcb", "title": "Rainbow: an integrated tool for efficient clustering and assembling RAD-seq reads" }, { "paperId": "0e231e08c29455a8cc1bb3a82f10a5b4bf308c39", "title": "Genotyping in the Cloud with Crossbow" }, { "paperId": "8f262d37490e1393123d2b3459f73dd06f27159a", "title": "Cloud BioLinux: pre-configured and 
on-demand bioinformatics computing for the genomics community" }, { "paperId": "9ca2a17fa3669323dbcb367e6c70ff9acb969b6d", "title": "Swift: A language for distributed parallel scripting" }, { "paperId": "7a199a0c579160a93157f3d6975ded8aeb295fd9", "title": "CloVR: A virtual machine for automated and portable sequence analysis from the desktop using cloud computing" }, { "paperId": "83cc2ae159eea5f4174a7974a631b7aa88d00a7c", "title": "Galaxy CloudMan: delivering cloud compute clusters" }, { "paperId": "ebd342793ee80999119cf163f5774d649fc0c928", "title": "Galaxy: a comprehensive approach for supporting accessible, reproducible, and transparent computational research in the life sciences" }, { "paperId": "252b1b24dae7d8ba553fe4eb9c659aabe53ccdba", "title": "Cloud computing for comparative genomics" }, { "paperId": "63f4c96346eed2fc43dfdc34c6e4f366bca127fe", "title": "PLAZA: A Comparative Genomics Resource to Study Gene and Genome Evolution in Plants[W]" }, { "paperId": "540a803d3d99f3b7814f37652c056718c701b5c5", "title": "CloudBurst: highly sensitive read mapping with MapReduce" }, { "paperId": "3a02beb0ffb573c31a6be82565f62ba7abe98bc7", "title": "Amdahl's Law in the Multicore Era" }, { "paperId": "ec46be6dd0fa93893a804396cfa62b4d6371339e", "title": "The performance of bags-of-tasks in large-scale distributed systems" }, { "paperId": "7fd2213b834a5e745deecc0ee375e5be05fd1b2f", "title": "Swift: Fast, Reliable, Loosely Coupled Parallel Computation" }, { "paperId": "0b4b0d518f6c1f814b6e91ac110495f2264d1d7e", "title": "Taverna: lessons in creating a workflow environment for the life sciences" }, { "paperId": "2022160699559f695d4007e6328cc01261d3beed", "title": "Pegasus: A framework for mapping complex scientific workflows onto distributed systems" }, { "paperId": "6f10e2b58bf58286f3f961cbda878370309d7c29", "title": "Heuristic scheduling for bag-of-tasks applications in combination with QoS in the computational grid" }, { "paperId": "7f1b4ef3351041755e40cab188748af613949f4e", "title": "Basic Local Alignment Search Tool (BLAST)" }, { "paperId": "3d9fbcf35f53bd84c75fd99daa6b2c69397b0a01", "title": "The Universal Protein Resource (UniProt)" }, { "paperId": "36789799e464aa7465125ff8e778939843a0e89b", "title": "Taverna: a tool for the composition and enactment of bioinformatics workflows" }, { "paperId": "fd495d6cf7c3169bc58550fdf32be6e16e2800f8", "title": "Bioconductor: open software development for computational biology and bioinformatics" }, { "paperId": "33780e4aba639a97f9fb7f7e773853f74dd494b7", "title": "The Bioperl toolkit: Perl modules for the life sciences." }, { "paperId": "ddf06cf0d375fb9404fe30c5f1d7858d74080e9c", "title": "EMBOSS: the European Molecular Biology Open Software Suite." }, { "paperId": "adea14ce6f45a439d3480bf2f4c030b77ea89102", "title": "Assigning protein functions by comparative genome analysis: protein phylogenetic profiles." }, { "paperId": "a03dc4d8aa1c2ebd5c3a3c1d87d531f52d63c120", "title": "Basic local alignment search tool." 
}, { "paperId": "ea6b2281bab9dd7efbc2ad6b95492a5263861cc9", "title": "Condor-a hunter of idle workstations" }, { "paperId": "5c632485f98035bd6c8c263af8193cc9ad1358da", "title": "The use of microprocessors as automobile on-board controlers" }, { "paperId": null, "title": "Byte-unixbench: a unix benchmark suite" }, { "paperId": "627be67feb084f1266cfc36e5aed3c3e7e6ce5f0", "title": "MapReduce: simplified data processing on large clusters" }, { "paperId": "f199791a6381fd8eacff37a00c8f7ab195bbdb64", "title": "KNIME: The Konstanz Information Miner" }, { "paperId": "06429adb228b2241f2854819b6c9963120da6a48", "title": "GraphML Progress Report ? Structural Layer Proposal" } ]
17,230
en
[ { "category": "Engineering", "source": "external" }, { "category": "Engineering", "source": "s2-fos-model" }, { "category": "Environmental Science", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/027bd79f0d6c728825aaac3fa41f6178e8b30145
[ "Engineering" ]
0.860423
Decentralized Blended Acquisition
027bd79f0d6c728825aaac3fa41f6178e8b30145
[ { "authorId": "145271273", "name": "G. Berkhout" } ]
{ "alternate_issns": null, "alternate_names": null, "alternate_urls": null, "id": null, "issn": null, "name": null, "type": null, "url": null }
null
### Decentralized Blended Acquisition

#### Guus Berkhout, Delft University of Technology

**SUMMARY**

The concept of blending and deblending is reviewed, making use of traditional and dispersed source arrays. The network concept of distributed blended acquisition is introduced. A million-trace robot system is proposed, illustrating that decentralization may bring about a revolution in the way we acquire seismic data in the future.

**INTRODUCTION**

In traditional seismic surveys, interference between shot records is minimized by choosing the temporal interval and/or the lateral distance between consecutive shots sufficiently large. However, in the concept of simultaneous shooting, shot records do overlap, allowing denser source sampling in an economically favorable way. Denser source sampling takes care of the desired property that each subsurface gridpoint is illuminated from a larger number of angles and will therefore improve the image quality in terms of signal-to-noise ratio and spatial resolution. In the seismic literature, an abundance of references on simultaneous shooting can already be found. Examples of recent publications are Beasley (2008), Berkhout (2008), Howe et al. (2008), Pecholcs et al. (2010), Berkhout et al. (2012), Beasley et al. (2012), Abma et al. (2012), Krupovnickas et al. (2012).

In blended acquisition, being a special version of simultaneous shooting, the 'simultaneous' source wavefield is incoherent (see Figure 1). The objective of blended acquisition is to maximize the emission of full-bandwidth, non-aliased, far-field signal energy within a pre-specified acquisition time. In traditional seismic surveys a single coherent source (array) is used for each shot record. This localized source unit must transmit the full temporal frequency band for a wide range of emission angles. Today's seismic vibrators and airgun arrays are designed such that they have a large bandwidth, ranging over many octaves. In practice, however, such source designs are a compromise from a systems engineering point of view. I propose that the individual source units in a blended array (1) are not chosen to be equal and (2) do not need to satisfy the wide-band requirements. Instead, they may be dedicated narrowband designs with superior emission properties around their central frequency. The ultimate criterion is that the combined incoherent source wavefield has the required temporal and angular spectral properties at each gridpoint in the subsurface. In addition, I propose that the traditional centralized concept in seismic acquisition is replaced by a decentralized network alternative.

**THEORETICAL CONSIDERATIONS**

Seismic data can be arranged in data matrix P. In the frequency domain P represents a frequency slice of the total data volume, and one element P_ij is one frequency component of the trace measured at detector position i generated by source j. In my notation P(z_d, z_s) means that the detector and source positions are situated at depth levels z_d and z_s respectively. If we choose for the moment z_s = z_d = z_0 (typical for land data), then the model of data matrix P can be written as (Berkhout, 1982):

P(z_0, z_0) = D(z_0) X(z_0, z_0) S⁺(z_0), (1)

where matrix X is the Earth's transfer operator that includes the interaction with the surface. In source matrix S⁺(z_0) each column represents a (directional) source. In detector matrix D each row represents a receiver (array). The response of each source column (S⃗_j⁺) is given by the corresponding column of the data matrix (P⃗_j).
Using expression 1, the result of one blended experiment can be formulated by (Berkhout, 2008): **P(z0, z0)[⃗]Γj** (z0) = D(z0)X(z0, z0)S[+](z0)[⃗]Γj (z0). (2a) Figure 1: Subdivision of simultaneous shooting methods, based on the degree of incoherency. Such an incoherent wavefield is physically generated by firing a multitude of sources, each source with its own code (such as temporal delay, nonlinear phase function, pseudo-random time series), together forming a blended source array. Unlike a traditional source array, a blended source array may cover a large spatial area, meaning that one blended source array illuminates subsurface gridpoints from many different angles. The Column vector _[⃗]Γj(z0) contains the blending information. This_ is illustrated in Figure 2: elements Γkj (z0) are complex-valued scalars, describing time delays or a more complex code, while the involved sources are indicated by the positions (k) of the scalars in column vector _[⃗]Γj_ (z0). Note that equation 2a is based on the linearity of seismic data in wavefields. This can be eas ----- � _Sk[�]�kj_ #### Decentralized Blended Acquisition **DESCRIPTION OF DEBLENDING ALGORITHM** _z0_ ### S[�]�j blending code ### � � � Sk[�]�kj _k_ _[x][�]s_ ### S[�]�j |Col1|Col2| |---|---| one unit of a blended source array includes classical field array _k_ ### S� �[�] j _k_ blended source array (downward radiating) Figure 2: One blended source array consists of a multitude of source units, each unit having its own code. ily seen if we rewrite this equation as follows: � _⃗Pk(z0, z0)Γkj_ (z0) = D(z0)X(z0, z0) � _⃗Sk[+][(][z][0][)Γ][kj][(][z][0][)][,]_ _k_ _k_ (2b) showing that the weighted sources of the blended source array generate a weighted set of shot records, the latter being referred to as a blended shot record. Equation 2b can be made specific for marine data by showing explicitly the ghost effect. If we allow the individual elements (k) of a blended source array to be at different depth levels (zk), then we may write: � _⃗Pk(z0, zk)Γkj(zk) = D(z0)X(z0, z0)_ � _⃗Sk[+][(][z][0][, z][k][)Γ][kj][(][z][k][)]_ _k_ _k_ (3a) where, assuming a surface reflectivity of -1, In deblending, blended measurements are given and unblended data need be computed (inversion process). In this closed-loop process, numerically simulated measurements - output of forward modeling according to equations 2a and 2b - are compared with the real measurements. By minimizing the difference between the two datasets the unblended samples (parameters) can be estimated. To explain this inversion process, let us minimize the following unconstrained least-squares criterion (zd and zs are omitted for notational convenience): 2 2 Δ⃗Pj ′ = _⃗Pj ′_ _._ (5a) ��� ��� ��� _[−]_ **[P][⃗][Γ][j]** ��� Bear in mind that in minimization equation 5a where _P[⃗]k[(][i][)]_ in 6 is approaching _P[⃗]k in 7 asymptotically._ In the first iteration (i = 1) ΔP[′] = P[′], meaning that the inversion process starts with pseudo-deblending. It is interesting to realize that Λ may be a scaled unity matrix or a diagonal matrix or a bandmatrix, depending on the properties of blending matrix Γ. During the presentation properties of the algorithm will be illustrated with examples. The computational diagram is shown in Figure 3. � **P[⃗]Γj =** _⃗PkΓkj_ (5b) _k_ represents the modeling output and vector _P[⃗]k equals the de-_ blended shot record for shot k. 
**DESCRIPTION OF THE DEBLENDING ALGORITHM**

In deblending, blended measurements are given and unblended data need to be computed (inversion process). In this closed-loop process, numerically simulated measurements — output of forward modeling according to equations (2a) and (2b) — are compared with the real measurements. By minimizing the difference between the two datasets, the unblended samples (parameters) can be estimated. To explain this inversion process, let us minimize the following unconstrained least-squares criterion ($z_d$ and $z_s$ are omitted for notational convenience):

$$\left\|\Delta\vec{P}_j^{\,\prime}\right\|^2 = \left\|\vec{P}_j^{\,\prime} - \mathbf{P}\,\vec{\Gamma}_j\right\|^2. \tag{5a}$$

Bear in mind that in minimization criterion (5a),

$$\mathbf{P}\,\vec{\Gamma}_j = \sum_k \vec{P}_k\,\Gamma_{kj} \tag{5b}$$

represents the modeling output, and vector $\vec{P}_k$ equals the deblended shot record for shot $k$.

The iterative solution of minimization problem (5a) is given by:

$$\vec{P}_k^{\,(i)} = \vec{P}_k^{\,(i-1)} + \left[\Delta\mathbf{P}^{\prime}\right]^{(i-1)}\Lambda\,\vec{\Gamma}_k^{H}, \tag{6}$$

where diagonal matrix $\Lambda$ contains the weights. The validity of the iterative, weighted, least-squares solution (6) can be quickly verified by substituting the expression of $\Delta\mathbf{P}^{\prime}$ in equation (6), leading to the well-known analytic equation:

$$\mathbf{P}^{\prime}\,\Lambda\,\vec{\Gamma}_k^{H} = \mathbf{P}\left(\Gamma\,\Lambda\,\vec{\Gamma}_k^{H}\right), \tag{7}$$

where $\vec{P}_k^{\,(i)}$ in (6) asymptotically approaches $\vec{P}_k$ in (7). In the first iteration ($i = 1$), $\Delta\mathbf{P}^{\prime} = \mathbf{P}^{\prime}$, meaning that the inversion process starts with pseudo-deblending. It is interesting to realize that $\Lambda$ may be a scaled unity matrix, a diagonal matrix or a band matrix, depending on the properties of blending matrix $\Gamma$. During the presentation, properties of the algorithm will be illustrated with examples. The computational diagram is shown in Figure 3.

Figure 3: Computational diagram of deblending in terms of inversion, showing the four principal algorithmic modules (estimation, selection, modeling and subtraction) in each iteration.
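Below is a compact sketch of iterative solution (6) for a single frequency slice, starting from pseudo-deblending. For simplicity it assumes a scaled-unity weight matrix $\Lambda$ (chosen here from the spectral norm of $\Gamma$ for stable convergence) and omits the estimation/selection constraints of Figure 3; all sizes are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of iterative deblending (equation 6) for one frequency
# slice, assuming Lambda = lam * I and no coherency constraints.
rng = np.random.default_rng(1)
n_det, n_src, n_blends = 32, 16, 4

P_true = rng.normal(size=(n_det, n_src))       # unknown unblended data
Gamma = rng.choice([0.0, 1.0], size=(n_src, n_blends), p=[0.75, 0.25])
P_blended = P_true @ Gamma                     # given blended measurements

P_est = np.zeros_like(P_true)
lam = 1.0 / np.linalg.norm(Gamma, 2) ** 2      # step size for convergence
for i in range(50):
    dP = P_blended - P_est @ Gamma             # residual, as in eq. (5a)
    P_est = P_est + lam * dP @ Gamma.conj().T  # update, as in eq. (6)
    # With P_est = 0, the first update is lam * P_blended @ Gamma^H,
    # i.e., a (weighted) pseudo-deblending start.

print("relative residual:",
      np.linalg.norm(P_blended - P_est @ Gamma) / np.linalg.norm(P_blended))
```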
**DISPERSED SOURCE ARRAYS**

For the design of blended source arrays, the individual sources at surface locations $k$ ($\vec{S}_k^{+}\Gamma_{kj}$, see equation (4)) need to be optimized by considering the properties of the composite incident wavefield at subsurface locations $i$ ($P_{ij}^{+}$). It means that the individual sources of a blended array may consist of narrowband sources with different central frequencies ('components'), as long as the sum of all arriving components ('composite result') satisfies the full bandwidth requirements.

According to the Nyquist criterion, the ideal source spacing should be smaller than half the smallest wavelength a source transmits. In the case of different source types, e.g., low-, mid- and high-frequency sources, it means that each type has its own optimum spacing. Note that this spacing is largest for the low-frequency sources and smallest for the high-frequency sources! I call this type of blended source configuration: Dispersed Source Array (DSA).

It is important to realize that a DSA acts like a modern audio surround system: the different loudspeaker units are decentralized, taking care of the different sub-bands within the total audio frequency range. This subdivision leads to entirely different loudspeaker designs for the low, mid and high frequencies (see Figure 4).

Figure 4: Application of the DSA concept in broadband high-performance audio systems. Note the significantly different designs for the different frequency bands. (Panels: 1. one broadband source; 2. different narrowband sources; 3. different distributed narrowband sources.)

The audio-seismic comparison highlights the fundamental difference of the DSA concept with systems such as Polychromatic Acquisition (CREWES consortium) and SeisMovie (Meunier et al., 2001), where broadband source units operate in a multi-monochromatic manner.

Inhomogeneous blending with DSAs has a number of attractive potential advantages: (1) the dedicated narrowband units of a blended array represent technically simple, no-compromise source units; (2) destructive interference within a source array is avoided, allowing angle-independent source wavelets; (3) each source type has its own spatial sampling interval, allowing multi-scale acquisition grids; (4) each source type has its own depth level, allowing ghost matching in the field (marine); (5) deblending DSA data is relatively simple: the first step (source decoding + bandpass filtering) is already very effective; (6) DSAs are more flexible to comply with the emerging strict regulation on sea-life protection (marine).

It is interesting to mention here that the advantages of multi-level depth sources were already demonstrated in an EAGE workshop on marine seismic in Cyprus (Cambois and Osnes, 2009). Recently, the variable-depth option was also proposed at the detector side, showing excellent results (Soubaras, 2010). Combining the two is the way to go.
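The per-band spacing rule stated above can be made explicit with a small sketch: each DSA source type gets its own Nyquist-derived spacing $\Delta x < \lambda_{\min}/2 = v/(2 f_{\max})$. The velocity and band edges below are illustrative assumptions only.

```python
# Minimal sketch of the per-band Nyquist spacing rule for a DSA:
# dx < lambda_min / 2 = v / (2 * f_max). Values are illustrative only.
V_NEAR_SURFACE = 1500.0  # m/s, assumed propagation velocity

bands = {                # source type -> maximum emitted frequency (Hz)
    "low-frequency source": 12.0,
    "mid-frequency source": 48.0,
    "high-frequency source": 96.0,
}

for source_type, f_max in bands.items():
    dx_max = V_NEAR_SURFACE / (2.0 * f_max)  # half the smallest wavelength
    print(f"{source_type}: spacing < {dx_max:.1f} m")
# Low-frequency units may thus be distributed far more sparsely than
# high-frequency units ('multi-scale shooting grids').
```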
**DECENTRALIZED BLENDED ACQUISITION**

Based on the blending method and the DSA concept, it is proposed to make another fundamental improvement in seismic data acquisition. This improvement is achieved by changing the system architecture. I propose to focus future acquisition developments on the major opportunities that are offered by the decentralized network architecture. By moving from a single complex, centralized system to a network of simple, decentralized subsystems, more information is collected with less complexity. Decentralization is the major change we have seen in many technological solutions during the last decade; particularly think of information, communication and computation systems in the IC sector. Central systems have been transformed into networks, increasing the capability and efficiency beyond expectation. Figure 5 visualizes two system architectures.

Figure 5: Two types of system architectures: (a) centralized network (N = 5); (b) decentralized network (N² = 25). Until today, seismic acquisition occurs with a centralized architecture (a).

Figure 5a shows schematically a conventional broadcast architecture, allowing $N$ one-way connections from the central source subsystem to the $N$ receiver subsystems. Hence, with this architecture the information received increases linearly with $N$. Figure 5b shows a decentralized network architecture, where every element functions both as a source and a receiver subsystem. Now there exist $N^2$ connections in the network, meaning that the information received increases quadratically with $N$ (see Figure 6).

Figure 6: The difference in information content (offsets and azimuths) between a centralized ($N$) and a decentralized ($N^2$) system.

If we look at the current seismic acquisition systems, then we may conclude that the industry makes use of the so-called broadcast architecture: one seismic source (array) sends its energy — via the Earth — to the $N$ seismic detectors. In the past decades we have seen that the number of detectors has been continuously increased to as much as 100,000, and further increases are in progress. This has increased the complexity of the acquisition system tremendously. Actually, current seismic systems are great technological achievements. I propose that the industry abandon the centralized acquisition concept: the linear relationship is not an attractive proposition. Instead, it is proposed to concentrate on the exciting opportunities that are offered by the network architecture. For example, if we use an acquisition network with a swarm of 100 simple source-detector subsystems, where each subsystem consists of a DSA robot dragging one short 100-detector cable, then the total number of traces per blended shot record equals one million (100 × 100²)! Figure 7 gives an artist impression of such a network.

Figure 7: Artist impression of a distributed seismic acquisition network. Each robot consists of an optimized narrowband source and a small detector array, e.g., with 100 receivers only. A swarm of one hundred of these robots configures a one-million-trace system.
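The trace-count claim can be verified with a one-line calculation: a swarm of $N$ robots, each carrying one source and an $M$-detector cable, yields $N \times (N \times M)$ traces per blended shot record, versus $N \times M$ for a single centralized source. A tiny sketch:

```python
# Checking the million-trace claim: N robots, each with one narrowband
# source and an M-detector cable. Every robot's source illuminates every
# robot's detectors, so one blended record carries N * (N * M) traces.
N_ROBOTS, M_DETECTORS = 100, 100

centralized = 1 * (N_ROBOTS * M_DETECTORS)          # one source, all cables
decentralized = N_ROBOTS * (N_ROBOTS * M_DETECTORS)

print(f"centralized:   {centralized:,} traces")      # 10,000
print(f"decentralized: {decentralized:,} traces")    # 1,000,000 = 100 x 100^2
```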
**CONCLUSIONS**

With a multitude of dedicated narrowband source units, referred to as Dispersed Source Arrays, the blended incident wavefield at a particular subsurface gridpoint contains broadband, multi-angle, multi-azimuth information. The theoretical spatial sampling requirements can be fulfilled by allowing low-frequency sources to be distributed more sparsely than high-frequency sources ('multi-scale shooting grids'). In the marine case, source depths can be optimized ('ghost matching').

It is also proposed to rethink the centralized acquisition concept, where information collection is linear in the number of detectors ($N$). Instead, a plea is made to concentrate future developments on the network architecture concept, showing a quadratic behavior in seismic information ($N^2$). By moving from a single complex, centralized system to a network of simple, decentralized subsystems, robotization becomes an attractive proposition: a one-million-channel system can be realized by a small number of simple source-detector robots.

**FINAL REMARK**

Berkhout and Blacquière (2012) conclude that the signal-to-background-noise ratio of a field-blended survey must be higher than that of a comparable traditional survey. This is because the power of the signal (total signal energy divided by the effective survey time) increases in blended acquisition, not only because the number of sources increases, but also due to the fact that the survey time may decrease. On the other hand, the power of the background noise is independent of whatever we do in the blending process. Hence, a shorter recording time not only favors economics, it also favors quality, particularly in areas with a high background-noise level. This conclusion emphasizes the enormous potential of blended acquisition for the industry. As a consequence, I expect that unblended seismic acquisition will become a technology of the past.

**ACKNOWLEDGMENT**

I would like to acknowledge the sponsors of the Delphi consortium at Delft University of Technology for the stimulating discussions on robotized blended acquisition, and I also want to thank them for their financial support.

-----

http://dx.doi.org/10.1190/segam2013-0845.1

**EDITED REFERENCES**

Note: This reference list is a copy-edited version of the reference list submitted by the author. Reference lists for the 2013 SEG Technical Program Expanded Abstracts have been copy edited so that references provided with the online metadata for each paper will achieve a high degree of linking to cited sources that appear on the Web.

**REFERENCES**

Abma, R., Q. Zhang, A. Arogunmati, and G. Beaudoin, 2012, An overview of BP's marine independent simultaneous source field trials: 82nd Annual International Meeting, SEG, Expanded Abstracts, http://dx.doi.org/10.1190/segam2012-1404.1.

Beasley, C. J., 2008, A new look at marine simultaneous sources: The Leading Edge, 27, 914–917, http://dx.doi.org/10.1190/1.2954033.

Beasley, C. J., B. Dragoset, and A. Salama, 2012, A 3D simultaneous source field test processed using alternating projections: A new active separation method: Geophysical Prospecting, 60, 591–601, http://dx.doi.org/10.1111/j.1365-2478.2011.01038.x.

Berkhout, A. J., 1982, Seismic migration, imaging of acoustic energy by wave field extrapolation. Part A: Theoretical aspects: Elsevier.

Berkhout, A. J., 2008, Changing the mindset in seismic data acquisition: The Leading Edge, 27, 924–938, http://dx.doi.org/10.1190/1.2954035.

Berkhout, A. J., and G. Blacquière, 2012, Utilizing dispersed source arrays in blended acquisition: 82nd Annual International Meeting, SEG, Expanded Abstracts, http://dx.doi.org/10.1190/segam2012-0302.1.

Berkhout, A. J., D. Verschuur, and G. Blacquière, 2012, Illumination properties and imaging promises of blended, multiple-scattering seismic data: A tutorial: Geophysical Prospecting, 60, 713–732, http://dx.doi.org/10.1111/j.1365-2478.2012.01081.x.

Doulgeris, P., K. Bube, G. Hampson, and G. Blacquière, 2012, Convergence analysis of a coherency-constrained inversion for the separation of blended data: Geophysical Prospecting, 60, 769–781, http://dx.doi.org/10.1111/j.1365-2478.2012.01088.x.

Howe, D., M. Foster, T. Allen, B. Taylor, and I. Jack, 2008, Independent simultaneous sweeping — A method to increase the productivity of land seismic crews: 78th Annual International Meeting, SEG, Expanded Abstracts, 2826–2830, http://dx.doi.org/10.1190/1.3063932.

Krupovnickas, T., K. Matson, C. Corcoran, and R. Pascual, 2012, Marine simultaneous source OBS survey suitability for 4D analysis: 82nd Annual International Meeting, SEG, Expanded Abstracts, http://dx.doi.org/10.1190/segam2012-0815.1.

Meunier, J., F. Huguet, and P. Meynier, 2001, Reservoir monitoring using permanent sources and vertical receiver antennae: The Céré-la-Ronde case study: The Leading Edge, 20, 622–629, http://dx.doi.org/10.1190/1.1439008.

Pecholcs, P. I., S. K. Lafon, T. Al-Ghamdi, H. Al-Shammery, P. G. Kelamis, S. X. Huo, O. Winter, J.-B. Kerboul, and T. Klein, 2010, Over 40,000 vibrator points per day with real-time quality control: Opportunities and challenges: 80th Annual International Meeting, SEG, Expanded Abstracts, 111–115, http://dx.doi.org/10.1190/1.3513041.

-----
{ "disclaimer": "Notice: The following paper fields have been elided by the publisher: {'abstract', 'references'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1190/SEGAM2013-0845.1?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1190/SEGAM2013-0845.1, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "GREEN", "url": "https://repository.tudelft.nl/file/File_86897a5b-1732-40b7-a9d6-171b42f50f7f" }
2,013
[ "Review" ]
true
2013-01-10T00:00:00
[]
5,618
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/0287942a57191b829f0a68658bb43646c7d5160d
[ "Computer Science" ]
0.834799
Enforcing Security and Assurance Properties in Cloud Environment
0287942a57191b829f0a68658bb43646c7d5160d
International Conference on Utility and Cloud Computing
[ { "authorId": "31559735", "name": "Aline Bousquet" }, { "authorId": "1885414", "name": "Jérémy Briffaut" }, { "authorId": "1803718", "name": "E. Caron" }, { "authorId": "145634347", "name": "E. M. Domínguez" }, { "authorId": "2064925777", "name": "Javier Franco" }, { "authorId": "2018068", "name": "Arnaud Lefray" }, { "authorId": "143875640", "name": "Ó. López" }, { "authorId": "2066752310", "name": "Saioa Ros" }, { "authorId": "1403816047", "name": "Jonathan Rouzaud-Cornabas" }, { "authorId": "2208354", "name": "C. Toinard" }, { "authorId": "3185042", "name": "Mikel Uriarte" } ]
{ "alternate_issns": null, "alternate_names": [ "IEEE/ACM Int Conf Util Cloud Comput", "Utility and Cloud Computing", "Int Conf Util Cloud Comput", "Util Cloud Comput", "UCC", "IEEE/ACM International Conference Utility and Cloud Computing" ], "alternate_urls": null, "id": "d03a5bfe-db75-4dc9-95f3-ae92b081f42c", "issn": null, "name": "International Conference on Utility and Cloud Computing", "type": "conference", "url": null }
Before deploying their infrastructure (resources, data, communications, ...) on a Cloud computing platform, companies want to be sure that it will be properly secured. At deployment time, the company provides a security policy describing its security requirements through a set of properties. Once its infrastructure is deployed, the company wants to be assured that this policy is applied and enforced. But describing and enforcing security properties and getting strong evidence of it is a complex task. To address this issue, in [1], we have proposed a language that can be used to express both security and assurance properties on distributed resources. Then, we have shown how these global properties can be cut into a set of properties to be enforced locally. In this paper, we show how these local properties can be used to automatically configure security mechanisms. Our language is context-based, which allows it to be easily adapted to any resource naming system, e.g., Linux and Android (with SELinux) or PostgreSQL. Moreover, by abstracting low-level functionalities (e.g., deny write to a file) through capabilities, our language remains independent from the security mechanisms. These capabilities can then be combined into security and assurance properties in order to provide high-level functionalities, such as confidentiality or integrity. Furthermore, we propose a global architecture that receives these properties and automatically configures the security and assurance mechanisms accordingly. Finally, we express the security and assurance policies of an industrial environment for a commercialized product and show how its security is enforced.
## Enforcing Security and Assurance Properties in Cloud Environment

### Aline Bousquet, Jérémy Briffaut, Eddy Caron, Eva María Dominguez, Javier Franco, Arnaud Lefray, Oscar López, Saioa Ros, Jonathan Rouzaud-Cornabas, Christian Toinard, et al.

To cite this version:

Aline Bousquet, Jérémy Briffaut, Eddy Caron, Eva María Dominguez, Javier Franco, et al.. Enforcing Security and Assurance Properties in Cloud Environment. 8th IEEE/ACM International Conference on Utility and Cloud Computing (UCC 2015), University of Cyprus, Dec 2015, Limassol, Cyprus. hal-01240557

HAL Id: hal-01240557
https://inria.hal.science/hal-01240557
Submitted on 9 Dec 2015

HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.

Distributed under a Creative Commons Attribution - NonCommercial - NoDerivatives 4.0 International License.

-----

# Enforcing Security and Assurance Properties in Cloud Environment

Aline Bousquet∗, Jérémy Briffaut∗, Eddy Caron†, Eva María Dominguez¶, Javier Franco‡, Arnaud Lefray∗†, Oscar López§, Saioa Ros§, Jonathan Rouzaud-Cornabas∥, Christian Toinard∗ and Mikel Uriarte§

∗INSA Centre Val de Loire, Univ. Orléans, LIFO EA 4022, Bourges, France
†University of Lyon - LIP, CNRS - ENS Lyon - Inria - UCB Lyon, France
‡Industry and Advanced Manufacturing department, Vicomtech-IK4, Spain
§Research and Development department, Nextel S.A., Spain
¶Transport - Technology and Development, IKUSI, Spain
∥University of Lyon, CNRS, Inria, INSA-Lyon, LIRIS, UMR5205, F-69621, France

Email: {aline.bousquet, jeremy.briffaut, christian.toinard}@insa-cvl.fr, {eddy.caron, arnaud.lefray}@ens-lyon.fr, jonathan.rouzaud-cornabas@inria.fr, jfranco@vicomtech.org, {sros, olopez, muriarte}@nextel.es, eva.dominguez@ikusi.com

**_Abstract—_** Before deploying their infrastructure (resources, data, communications, ...) on a Cloud computing platform, companies want to be sure that it will be properly secured. At deployment time, the company provides a security policy describing its security requirements through a set of properties. Once its infrastructure is deployed, the company wants to be assured that this policy is applied and enforced. But describing and enforcing security properties and getting strong evidence of it is a complex task.

To address this issue, in [1], we have proposed a language that can be used to express both security and assurance properties on distributed resources. Then, we have shown how these global properties can be cut into a set of properties to be enforced locally. In this paper, we show how these local properties can be used to automatically configure security mechanisms. Our language is context-based, which allows it to be easily adapted to any resource naming system, e.g., Linux and Android (with SELinux) or PostgreSQL.
Moreover, by abstracting low-level functionalities (e.g., deny write to a file) through capabilities, our language remains independent from the security mechanisms. These capabilities can then be combined into security and assurance properties in order to provide high-level functionalities, such as confidentiality or integrity. Furthermore, we propose a global architecture that receives these properties and automatically configures the security and assurance mechanisms accordingly. Finally, we express the security and assurance policies of an industrial environment for a commercialized product and show how its security is enforced.

**_Keywords—Security, Cloud, Assurance, Enforcement, Use-case_**

I. INTRODUCTION

In security, three main concepts commonly known as the CIA-triad (not to be confused with the US agency) have been widely used for decades: Confidentiality, Integrity and Availability. Both the Department of Defense guidelines (TCSEC/Orange Book) [2] edited in 1985 and the more recent Common Criteria (ISO/IEC 15408) international standard define security as an integration of availability, confidentiality and integrity.

In a survey [3] on Cloud adoption practices, the Cloud Security Alliance (CSA) indicates that 73% of the participating industries are concerned about the security of their data. Thus, while many companies are transitioning to Cloud computing, they are also worried about the security risk. But Cloud platforms lack reliable security [4]. Furthermore, Halpern et al. [5] state that security policies described in a natural language have quite ambiguous semantics. To answer these problems, we need to provide a way (a language) to let the Cloud tenants (e.g., companies) express their security requirements, i.e., through a security policy. This security policy must then be enforced on the Cloud platform, and assurance reports (i.e., proofs) of this enforcement must be given to the tenants.

A single security mechanism cannot protect a heterogeneous and multi-layer system such as a Cloud [6]. Consequently, it is a set of uncoupled (and already existing) mechanisms that will be used to enforce the security. However, even if the mechanisms are uncoupled, it is mandatory to carefully take into account their capabilities (i.e., what they are able to enforce) and to configure all of them at once to provide the wanted security. But each of these mechanisms also comes with its own configuration language.

In [1], we have defined a specification language for global security properties (i.e., properties that involve distributed resources). We have shown how these global properties can be automatically cut into a set of local properties. These local properties can be used to automatically configure security mechanisms. Moreover, our common independent language abstracting low-level capabilities can be used to provide proofs of security enforcement (i.e., assurance). As said previously, the tenants also require receiving a proof that their security is indeed enforced during the whole life cycle of the infrastructure. Accordingly, using our language, the tenants can express their security assurance requirements. Once the security and assurance policies have described a set of properties to enforce, an architecture is required to automatically configure distributed and heterogeneous mechanisms. Furthermore, this architecture must also send back assurance reports to the tenants.
To show the usability and capacities of our solution, we describe how our language has been used to define the security policy of a complete industrial application. Then, we show how our architecture is used to automatically enforce the policy and generate assurance reports of it.

This paper is organized as follows. In Section II, we present a set of existing security mechanisms that could be used to provide security in Clouds, and the related work around Cloud security and assurance. Section III describes the language and the architecture we use for security policy enforcement and assurance. Section IV details an industrial use-case and the whole process to secure it, and Section V concludes this paper.

II. RELATED WORK

The solution proposed in this paper aims to both enforce and assure security properties, such as confidentiality or integrity. Hence, this section first describes the works related to the definition of security properties and their enforcement. Then, we quickly present the security mechanisms we will use in Section IV. Finally, we present some existing solutions for assurance.

_A. Security Policy and Enforcement_

Because of the ever-increasing adoption of Cloud computing platforms, much research has been done to improve their security. As stated in [6], a security policy language is required to allow the tenants to express their security requirements. Indeed, it makes sense that the tenants define their security, as they are the ones that know their infrastructure and its security requirements best.

Some works related to security policy languages are specific to a programming language and require the modification of the application sources. Ponder2 [7] is a distributed object management system. The Ponder2 language can express security and management policies for distributed systems. It is declarative and object-oriented and can be used to declare different types of policies. Consequently, it can only be used on Java applications augmented with the Ponder2 solution. The same holds for the A4Cloud project described in [8] and its associated language, A-PPL. Furthermore, this solution focuses on privacy and accountability, but does not address other classes of security, such as isolation.

Works such as [9], [10] also stress the need of combining multiple security mechanisms to provide an end-to-end and cross-layer security. VESPA [10] is one such architecture for protecting cloud infrastructures using a policy-based management approach. However, this work is oriented toward the use of autonomic computing to create self-protection loops. Consequently, it lacks a language allowing the tenants to express their security. Nonetheless, a combination with our work could be a direction for future work. In [11], the authors present MEERKATS, a mission-oriented Cloud architecture dedicated to security. It is composed of several components that aim to address several types of attacks and seek to provide high flexibility in the use of the protection mechanisms. Nevertheless, MEERKATS lacks a simple way of expressing the security requirements of an infrastructure. In [12], the authors present a policy-based security framework. Their ASPF policy consists of an attribute map (that links system elements to their attributes) and a set of rules (indicating which actions are allowed). While the ASPF framework can enforce a security policy, only low-level security properties can be expressed, which makes the definition of the security policy complex.
Finally, several works such as [13] have been done around the use of XACML to define security policies. But they focus on a specific type of mechanism: access control. Moreover, XACML is a complex language [14] that requires the verification of the policy conformance regarding its syntax and its semantics. Furthermore, XACML does not express high-level security requirements such as integrity; rather, it expresses the policy directly using low-level capabilities. Accordingly, the size of the security policy is larger and thus the risk of making mistakes increases, as these policies are written by humans rather than generated. Nevertheless, it could be possible to automatically generate such a policy from our security policy language. Thus, such work could be used as a security mechanism by our solution. [15] presents a privacy-aware access control system, since privacy is an important concern for most users. However, the PRIME architecture is based on XACML and therefore presents the same limitations.

Each of these solutions either focuses on one kind of protection (mainly access control) or uses a low-level security policy (tedious to define). To the best of our knowledge, there is no current research on the configuration of existing security mechanisms through a common abstract language.

_B. Security Mechanisms_

Many different security mechanisms exist, providing a wide range of features. We present some of them that we use in our solution. SELinux [16] is a Linux Security Module (LSM) providing MAC (Mandatory Access Control). PAM [17] provides authentication management support for Linux. Iptables [18] is a standard Linux firewall. We use the tunnel functionality provided by OpenSSH [19] to secure the communications between the machines. Even if not used in this paper, cryptographic solutions (such as the one presented in [20] for the security of medical records, or even homomorphic encryption [21]) could be added to the list of the security mechanisms that our solution takes into account. Indeed, the solution we propose is mechanism-agnostic.

_C. Assurance_

Operational Security Assurance [22] provides the ground for confidence that deployed security mechanisms are running as expected. Some research has been done to evaluate security assurance. For instance, Common Criteria [23] evaluates security functionality and assurance by means of tests conducted by users. However, this process is static and time-consuming. Consequently, it cannot be directly applied for a continuous evaluation of security assurance. Furthermore, Common Criteria focuses on the implementation phase of the product rather than on the operation phase, when the product is used. An Assurance Profile [24] is a formalized document that defines a common set of security assurance measurement requirements for a service infrastructure and facilitates a future evaluation against these needs. This is the approach selected for the assurance framework development.

XCCDF (eXtensible Configuration Checklist Description Format) is a standard that can perform assurance checks. It belongs to SCAP [25], a set of specifications from NIST to standardize the format and the naming of information reporting concerning specific security configurations. XCCDF provides security checklists and benchmarks to support automated compliance testing over a set of target systems. OpenSCAP [26] is an auditing tool implementing SCAP and XCCDF.

III.
ARCHITECTURE AND LANGUAGE

As we have seen, many security mechanisms are efficient but are focused on a specific issue and/or type of protection. It is important to understand that we do not propose any new or more secure mechanism; rather, we consider the existing ones and automatically configure/coordinate them in order to enforce high-level security properties. In this section, we first present our functional architecture. Then, we present our language and how it is used to enforce a security policy. Eventually, we show how it is possible to automatically assess the correctness of the enforcement.

_A. Functional Architecture_

As depicted in Fig. 1, our solution consists of a 3-step cycle. First, the tenant specifies its security policy using the language detailed in Section III-B. Then, from this high-level policy specification, the policy is enforced by first selecting security mechanisms and then configuring them. At the end, the policy specification and the list of selected mechanisms are used to generate the assurance profile. The assurance part verifies whether or not the security properties (from the policy) are duly enforced. These assurance checks are sent as feedback to the specification step to notify the tenant if the enforcement is correct/incorrect, but also if the available mechanisms are sufficient (or not) to enforce the policy.

Fig. 1: Functional Architecture.

_B. Policy Specification_

During the policy specification step, a knowledgeable tenant (i.e., a security expert) expresses a set of security properties to enforce, e.g., confidentiality or integrity. In [1], we have defined the Cloud Security Property Language (CSPL) that allows the specification of security properties. In particular, we have shown how to automatically transform a property on a set of distributed objects (which we refer to as a global property) into a set of properties on local objects (referred to as local properties). In this paper, we focus on the enforcement of local properties with a given security mechanism and on verifying the correct enforcement of these properties. Therefore, in the following we consider only local properties.

CSPL is a context-based language. A context is a set of attributes where each attribute characterizes an entity or a set of entities. At the highest level, entities can be classified into 2 categories: subjects (i.e., the active resources such as users and processes) and objects (i.e., the passive resources such as files). For instance, the context configApp = (File="Configuration"):(Domain="App") identifies the configuration files (attribute File) of an application (attribute Domain). Therefore, it is possible to use the same set of contexts on different systems. A specific mapping file is required for each system to associate the resource names (e.g., the full path of a file, the user id, IP addresses, process names) with the context they belong to. For example, to associate the application's configuration with the corresponding files, the following line is added to the mapping file ("o" for "object"):

o /opt/dbhook/dbhook.conf configApp

Using contexts to identify resources (or sets of resources), CSPL makes it possible to define security properties, and by relying on contexts to address entities, the expression of security properties is independent of the resource naming of the target system. These properties are independent from any security mechanism; in fact, multiple mechanisms can realize the same security property.
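The mapping-file format shown above is simple enough to be parsed in a few lines. The following is a minimal sketch (not the authors' implementation) that reads such lines into a context lookup table; the helper names are illustrative.

```python
# Minimal sketch of parsing a context/resource mapping file of the form
# shown above, where the first field gives the entity kind:
# "o" object, "p" process, "u" user, "c" computer.
from collections import defaultdict

MAPPING = """\
o /opt/dbhook/dbhook.conf configApp
p /usr/sbin/sshd ServiceSSH
u tenant-admin AdminRoot
c 172.22.11.181 hostReverseProxy
"""

def parse_mapping(text):
    contexts = defaultdict(list)  # context name -> list of (kind, resource)
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        kind, resource, context = line.split(None, 2)
        contexts[context].append((kind, resource))
    return contexts

print(parse_mapping(MAPPING)["configApp"])  # [('o', '/opt/dbhook/dbhook.conf')]
```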
Our proposition is to select a security mechanism able to enforce a given property from a pool of available mechanisms. For instance, the property P1 expresses that the integrity of the context SCInt has to be guaranteed, with the exception of the SCAuth contexts that are allowed to go against this property, i.e., no one is allowed to modify it except the resources with the context SCAuth.

P1 := Integrity(Context SCInt, Context SCAuth);

Then, P1 must be instantiated. For example, to specify that the integrity of the application's configuration files with the context configApp should be protected, and that only the user with the context adminRoot = (Username="appAdmin"):(Role="StandardUser|appAdmin") is allowed to go against it, the following property is instantiated: Integrity(configApp, adminRoot).

From the tenant's point of view, the security properties are abstract, i.e., the tenant only considers the semantics of the properties, and not the underlying security mechanisms. However, the properties need to be precisely defined in order to be enforced. Thus, we introduce the concept of capabilities. A capability is an elementary function provided by one or several security mechanisms. For instance, C1 := deny_all_write_accesses(Context) is a capability that can be provided by access control mechanisms (e.g., Unix's DAC permissions or SELinux) but also by other security mechanisms. It can be used to enforce an integrity property. Consequently, the integrity property P1 can be defined as follows:

P1 := Integrity(Context SCInt, Context SCauth) {
  deny_all_write_accesses(SCInt);
  allow_write_access(SCInt, SCauth);
}

The context SCInt represents a (set of) object(s) to secure, while the context SCauth is the identity of a (set of) subject(s)
**Access(Context SC1, Context SC2):** Allow connections _•_ from SC2 to SC1 **Authentication(Context SC1, Context SC2, Context SC3):** _•_ Upon successful connection from SC1 on SC2, modify the context of SC1 to SC3 Finally, we enrich the set of security properties with the following assurance property : **Assurance(int secs): Run assurance checks at frequency** _•_ secs (i.e., every secs seconds) for every security property. _C. Policy enforcement_ The second step of our solution, the policy enforcement, must first automatically select a set of suitable security mechanisms enabling the enforcement of each security property. And then, it must automatically configure them accordingly. The component in charge of the policy enforcement step is call the Secure Element Extended (SE[E] ). In the mean time, the assurance property triggers the generation of an assurance profile based on the security properties and the selected mechanisms. Let’s first present the selection and enforcement of the security properties. Figure 2 presents the architecture of the _SE[E]_ and how it can enforce the security policy. The SE[E] takes as input the security properties and the contexts/resources mapping. The SE[E] also considers the security mechanisms that are available on the system and their capabilities. Then, it selects the right security mechanisms and enforces the properties by configuration. PolicyLoad1 Property2Projection Engine 4 Get suitable SEE MappingEngine ContextGet PluginsSELinuxPlugin MechanismsSecuritySELinux For each Select 3 plugins Capabilities SSH TunnelPlugin SSH capability Plugin SelectorPlugin 5Send suitablesplugins DirectoryShared variablePublish iptablesPlugin Configure SSM9 iptables 7 6 pluginSelect Directory variableGet Plugin PAM PAM Plugins Manager 8 Apply each capability OscapPlugin Oscap using selected plugin Fig. 2: Architecture of the SE[E] . First, the SE[E] loads the security policy i.e., the contexts, the properties, and the contexts/resources mapping (step 1 on Figure 2). Then, the SE[E] iterates over security properties of the policy. For each capability of each property (step 2), the SE[E] selects a plug-in (i.e., a security mechanism) that can apply it (steps 3 to 6). This is done by querying the Capabilities Directory that contains the association between the capabilities and the security mechanisms (steps 4 and 5). Once the Plugin Selector has the list of matching mechanisms, it selects one of them (best-effort algorithm, step 6). When a capability/plug-in mapping has been found for the property, it is sent to the Plugins Manager that controls the plug-ins (step 7). Then, the Plugins Manager contacts each plug-in that needs to perform some actions (step 8) and the plug-ins configure their associated security mechanism (step 9). The use of plug-ins offers a modular model: new mechanisms can be easily added by developing the associated plugin. Plug-ins implement a simple interface to communicate with the SE[E], but the way they interact with their mechanisms is up to the plug-in’s developer. For instance, if the SE[E] receives the integrity property **Integrity(configApp, adminRoot) defined in the previous section,** it can enforce it using several security mechanisms. If this property is enforced using SELinux, then the SE[E] generates a SELinux module that forbids any write operation to the files labeled configApp that does not come from a user labeled adminRoot. 
The SE[E] also provides secure communication capabilities, especially for the case of properties between multiple systems. Thus, the two sides (i.e., the selected mechanisms applying to the two contexts) of the communication must use compatible mechanisms to enforce a property. For instance, let us consider the property Confidentiality_Tunnel(hostClient, tunServer). The server allows the connection of the user through the defined port, and the client sets up the tunnel. The coordination is done by the SE[E] ’s communication capabilities. Now, let’s present the generation of assurance files for the |Shared Directory|variable Get| |---|---| |Plugin Selector|plugins 5Send suitables|Capabilities Directory| |---|---|---| |Col1|Col2|Col3|SEE|Col5|Col6|Col7| |---|---|---|---|---|---|---| |1 Load|Projection Engine||SEE Mapping Get Engine Context 4Get suitable plugins Capabilities 5Send suitablesDirectory plugins Publish Shared variable 6pS le ule gc int Directory vaG riae bt le 8Apply each capability|Plugins Plugin SELinux Plugin SSH Tunnel Plugin iptables Plugin PAM Plugin Oscap|9 Configure|Security Mechanisms| |Policy|Property 2 3 For each Select capabilityPlugin Plugin Selector 7 Plugins Manager|||||SELinux| ||For each capability|||||SSH| |||||||| |||||||iptables| ||7||||SSM|| |||||||| |||||||PAM| |||||||| |||||||Oscap| ||Plugins Manager|||||| ||||using selected plugin|||| ----- assurance framework presented in Section III-D. To validate the enforcement of the security policy, multiple files are generated and given as input to the Assurance step, namely the XCCDF and system-specific scripts. System-specific scripts are generated using a process similar to the property enforcement: each property definition includes an assurance specification, using capabilities. For instance, the assurance of the Integrity property is defined as follows: P1:=Integrity (Context SCInt, Context SCauth) { **assurance {** boolean c = true; for (SCUserTmp IN get_all_users()) { if (SCUserTmp.Id == SCauth.Id) { c &= check_write (SCInt, SCauth); } else { c &= (NOT check_write (SCInt, SCauth)); } } return c; }} As a result, the system-specific script will contain the implementation of the check_write assurance capability for the context SCInt and the authorized context SCAuth. This generated script is called a Based Measure (BM) as it is the lowest level of assurance measure. Therefore, the XCCDF is simply a list of Based Measures. The XCCF file and the related scripts are given as input to the assurance step. _D. Assurance_ In order to be able to evaluate continuously the security assurance for a service, it is necessary to implement a process composed of several steps: modeling, measuring, aggregation, evaluation and presentation of the security assurance reports. This process is supported by a set of software components that compose our assurance framework. Fig. 3 presents our assurance framework architecture. Fig. 3: Assurance Framework Architecture. _1) System Measurement Collection: As stated before, the_ assurance step receives an XCCDF file and the related systemspecific scripts called Based Measures. The SE[E] is responsible for launching the measurement collection process realized by the Assurance Collector Engine (ACE). This engine includes a BM Agent which executes several system-specific scripts. Script results are associated with some metadata including extra information to unequivocally identify their origins and contexts. 
Note that the ACE, based on OpenSCAP, is the only assurance module deployed in each virtual machine. Next, the Measurement Aggregator (MA) receives these measurements from each node, validates and classifies them according to their metadata, before storing them in the Assurance DB.

_2) Assurance Results Presentation:_ We have seen how to execute low-level assurance checks and collect their results. In the following, we present how to add semantics to the collected results, i.e., interpret them, and how to present all assurance checks to the tenant in a modular and concise manner. Our assurance model, defining the entity/file relations from low-level measures to high-level views, is presented in Fig. 4.

Fig. 4: Assurance Model.

First, we have not yet determined whether the collected values mean a correct or a faulty enforcement, i.e., we need to interpret them. Hence, we call Derived Measure (DM) the interpretation of a Based Measure. Depending on the number of security properties and the size of the system (i.e., the number of objects), it is possible to have a significantly large set of assurance checks (or Derived Measures), which can be an impediment to the tenant's verification task. Our solution is to hierarchically aggregate these measurements. Therefore, a set of Derived Measures is aggregated into an Operational Measurement Requirement (OMR) via an aggregation function. In particular, if all Derived Measures have successfully passed their checks, then the OMR is marked as successful. In other words, an OMR is a set of system assurance checks. The Operational Profile (OP) contains both the definition of OMRs (i.e., the list of Derived Measures) and the definition of Derived Measures.

Our next level of abstraction is to allow the tenant to specify several Security Assurance Views (SAVs), where an assurance view is an aggregation of Operational Measurement Requirements. The definition of Security Assurance Views is done in the Assurance Profile (AP) file.

In Fig. 3, the Assurance Modeling Tool takes as input the Assurance Profile and the Operational Profile to maintain a Security Assurance Model. Depending on the layer of the assurance model (e.g., SAV, OMR, or DM), the Assurance Assessment Engine is responsible for deciding whether the collected assurance values meet the expectations and for computing the aggregation results.
It is a classical 3-tier web architecture i.e., a HTTP frontend (tier-1), an application server (tier-2) and a database (tier-3). This architecture is deployed on top of an IaaS Cloud and is provided to end-users through a SaaS model. Moreover, one instance of the application server is launched for each client _i.e., for each airport._ Services provided by the architecture include the management of an operational data repository for each airport operator and passenger, the real time management of flight status updates, and the dynamic allocation and optimization of assigned resources according to data from air flight companies and airport operators. It is based on message exchange modules, on resource allocation and on billing management airport services to provide airlines with an operational platform based on Cloud computing technology. It also incorporates enhanced security solutions based on a network of secure element developed in the SEED4C project. The use-case is presented in Fig. 5. Four different kinds of machines or VMs are involved. First, the machine ctseed1 is the client machine. It is the device that is used within the airport to access the airport’s services. Secondly, the reverseproxy VM (i.e., tier-1) is a proxy used by the enduser to access the airport’s services. The musik VMs (both musik1 and musik2) belong to an airport (MAD[1] or EAS[2]) and are accessed by the end-user machine through the proxy (i.e., an instance of the application server, tier-2). The corresponding VM is selected based on the location of the end-user. Apart from their airport domain, these VMs are identical, so 1MAD: Madrid Airport code 2EAS: San Sebastian Airport code we only consider one of them in this use-case (the security policy would be duplicated). Each of these VMs runs a Musik application that accesses the database (running in the seed4c_mysql machine) i.e., tier-3. _B. Security policy_ Based on the use case description, a security policy is defined through the graphical tool Sam4C (see Fig 5). The next listing presents an excerpt from the security policies for the different VMs of the use-case: 1 // Policy for the Database VM 2 Isolation(DomainAODB); 3 4 Integrity(BinaryAODB); 5 Integrity(ConfigAODB, AdminRoot); 6 Integrity(KeyAODB, AdminRoot); 7 Integrity(LogAODB, ServiceAODB); 8 9 Confidentiality(FileAODB, ServiceAODB); 10 Confidentiality(KeyAODB, AdminRoot); 11 Confidentiality(ConfigAODB, AdminRoot); 12 Confidentiality(ConfigAODB, ServiceDB); 13 Confidentiality(LogAODB, AdminRoot); 14 Confidentiality(LogAODB,AdminOperator); 15 16 Authentication(HostReverseProxy, ServiceSSH, "SystemUser| CloudProvider|AdminRoot|AdminOperator|User"); 17 18 Access (MysqlPort|MysqlProxyPort|SSHPort|NTPPort, AnyIP); 19 20 Assurance(Freq); 21 22 // Policy for the ReverseProxy VM 23 Integrity(BinaryModuleWeb); 24 Integrity(BinaryWeb); 25 Integrity(ConfigWeb); 26 27 Confidentiality(ConfigWeb,AdminRoot); 28 Confidentiality_Tunnel(tunClient, tunServer); 29 30 Access (SSHPort|NTPPort, AnyIP); 31 32 Authentication(anyone, ServiceSSH, "SystemUser|CloudProvider |AdminRoot|AdminOperator|User" ); The first security property (line 2) of this listing sandboxes the whole application. Lines 4 to 7 forbid anyone to edit the application’s binary, but allow several write accesses to its files (configuration, keys, and logs). Lines 9 to 14 forbid read access to the application resources except for the application itself. 
Line 16 specifies the context evolution upon an SSH connection: a role is given to the authenticating user depending on his login data. Line 18 opens several ports for all incoming IP addresses. Line 20 defines the assurance tests to perform. The second part of the listing describes the policy for the reverse proxy. Lines 23 to 25 guarantee the integrity of the Web application. Line 27 requests the confidentiality of the configuration files. Line 28 specifies that the network communication between the proxy and the client should be kept confidential. Line 30 opens some ports. Finally, line 32 manages the contexts evolution upon SSH connections. The contexts used in this policy are associated to system resources. An extract from the association file is displayed in the next listing: 1 o /opt/dbhook(/.*)? FileAODB 2 o /opt/dbhook/dbhook.conf ConfigAODB 3 o /opt/dbhook/keys(/.*)? KeyAODB 4 o /opt/dbhook/log(/.*)? LogAODB 5 o /opt/dbhook/proxydaemon.sh BinaryAODB ----- Fig. 5: Usecase Description 6 o /etc/rc\.d/init\.d/dbhook BinaryAODB 7 o /opt/oscap/ssm/results/SSM-results-$date.xml SSMResultFile 8 o /opt/oscap/ssm/SSM-xccdf.xml SSMXccdfFile 9 10 p /usr/bin/mysqld_safe ServiceDB 11 p /usr/libexec/mysqld ServiceDB 12 p /usr/bin/mysql-proxy ServiceAODB 13 p /usr/sbin/sshd ServiceSSH 14 15 u cloudprovider CloudProvider 16 u tenant-admin AdminRoot 17 u tenant-operator AdminOperator 18 u user User 19 20 c 172.22.11.181 HostReverseProxy 21 c 172.22.11.178 HostServerBBDD 22 c 212.81.220.68 HostClient Lines 1 to 8 of the mapping file associate the contexts to files. Lines 10 to 13 are for the processes, lines 15 to 18 for the users, and lines 20 to 22 for the computers (IP addresses). _C. Security Enforcement_ The security policy is enforced by several security mechanisms. The SE[E] detects what are the available mechanisms and selects those that can enforce the properties. In this usecase, four mechanisms collaborate to enforce the whole policy. _1) SELinux:_ First security mechanism available is SELinux. It enforces properties from three groups: isolation, confidentiality, and integrity. To enforce them, the plug-in generates a SELinux module. Upon receiving an isolation property for a domain, the plug-in creates a SELinux module to isolate all elements of this domain from the rest of the system. Then, the plug-in will allow some interactions corresponding to confidentiality and integrity properties. To enforce the properties Isolation( DomainAODB), Integrity(ConfigAODB, AdminRoot), and Confidentiality( ConfigAODB, "ServiceDB|AdminRoot") from policy, the following module is generated (see next listing). Lines 2-5 define the domain and SELinux contexts, while lines 7-8 give authorization rules. Lines 12-13 associate SELinux contexts to resources. 1 $ cat Aodb.te 2 policy_module(Aodb,1.0.0) 3 see_create_service_domain(Aodb) 4 see_create_files_type(Aodb_conf_t) 5 see_create_files_type(Aodb_file_t) 6 7 see_files_type_read_write(Aodb_t,Aodb_conf_t) 8 see_files_type_read(idAodbAdmin_t,Aodb_conf_t) 9 [...] 10 11 $ cat Aodb.fc 12 /opt/dbhook/dbhook.conf gen_context(system_u:object_r: Aodb_conf_t,s0) 13 /usr/bin/mysql-proxy gen_context(system_u:object_r: Aodb_exec_t,s0) 14 [...] _2) PAM:_ The PAM plug-in enforces authentication properties. Indeed, such property specifies how contexts can evolve to have correct properties applied. Moreover, it controls the authentication rights and allows or denies a user authentication. 
_2) PAM:_ The PAM plug-in enforces authentication properties. Indeed, such a property specifies how contexts can evolve so that the correct properties are applied. Moreover, it controls the authentication rights and allows or denies a user authentication. Upon encountering the property Authentication(anyone, ServiceSSH, "SystemUser|CloudProvider|AdminRoot|AdminOperator|User"), the PAM plug-in adds a rule to the PAM configuration in order to detect a successful login:

session required pam_exec.so /etc/see/scripts/notifyLogin

When a successful authentication occurs, PAM executes the script notifyLogin (see next listing), which informs the SE[E] (through Ncat) of a connection and sends data such as the user name, the remote host, or the date.

1 $ cat notifyLogin
2 #!/bin/sh
3 [ "$PAM_TYPE" = "open_session" ] || exit 0
4 { echo "User: $PAM_USER"
5   echo "Ruser: $PAM_RUSER"
6   echo "Rhost: $PAM_RHOST"
7   echo "Service: $PAM_SERVICE"
8   echo "TTY: $PAM_TTY"
9   echo "Date: `date`"
10  echo "Server: `uname -a`"
11  echo "PID: $$"
12  echo "PPID: $PPID"
13 } | ncat -U --send-only /var/run/seePam
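The SE[E]-side counterpart of this script is not described in the paper; as a hypothetical illustration, a listener on the Unix socket /var/run/seePam could parse the "Key: value" lines sent by the PAM hook roughly as follows (binding to that path requires the appropriate privileges).

```python
# Hypothetical sketch of an SEE-side listener for the Unix socket that
# notifyLogin writes to; the real SEE implementation is not described in
# the paper. Parses the "Key: value" lines into an event dictionary.
import os
import socket

SOCK_PATH = "/var/run/seePam"  # path used by the notifyLogin script

def serve_once(path=SOCK_PATH):
    if os.path.exists(path):
        os.unlink(path)
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as srv:
        srv.bind(path)
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            data = conn.recv(4096).decode()
    # e.g., {'User': 'tenant-admin', 'Rhost': '212.81.220.68', ...}
    return dict(line.split(": ", 1) for line in data.splitlines()
                if ": " in line)
```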
_D. Assurance_

The Assurance Model used in the airport management use-case is based on the security policy and focuses on monitoring the effectiveness of the security mechanisms. The model checks that the deployed security mechanisms (e.g., SELinux, iptables, and the OpenSSH tunnel) are running as expected. It also checks that the security properties are fulfilled, in terms of data integrity, data confidentiality, and data availability.

For instance, the enforcement of the property Integrity(ConfigAODB, AdminRoot) (line 5 of the security policy in Section IV-B) can be checked using the script from Listing 1. This script is generated by the SE[E] during the enforcement step (Section IV-C), depending on the properties of the security policy.

1 $ cat BM_fileInt-1.1.sh
2 #!/bin/bash
3 RET=$XCCDF_RESULT_PASS
4 check_write(){ su -c "test -w '$1'" "$2"; return $?; }
5 FILES=[...] # list of files in integrity property
6 USERS=[...] # list of all users
7 OK_USERS=[...] # list of authorized users
8
9 for file in "${FILES[@]}" ; do
10 for user in "${USERS[@]}" ; do
11 check_write $file $user
12 WRITE_OK=$?
13
14 if [[ " ${OK_USERS[@]} " =~ " $user " ]] ; then
15 if [[ $WRITE_OK -ne "0" ]] ; then
16 RET=$XCCDF_RESULT_FAIL
17 echo "Unexpected access denial: $user->$file"
18 fi
19 else
20 if [[ $WRITE_OK -eq "0" ]] ; then
21 RET=$XCCDF_RESULT_FAIL
22 echo "Unauthorized access: $user->$file"
23 fi
24 fi
25 done
26 done
27 exit $RET

Listing 1: Script checking the integrity of a file

The script BM_fileInt-1.1.sh checks the integrity of a file by testing which users are allowed to write it. Line 4 defines a function that checks whether a given file can be written by a specific user. Lines 5 to 7 get the files and users involved in the property (not detailed here for lack of space). The script then loops over the files (line 9) and the users (line 10) and tries to open each file for writing (line 11). If the property and the test result do not match (lines 15 and 20), the return value is set to XCCDF_RESULT_FAIL (lines 16 and 21), so that the script exits with a failure. Otherwise, the script exits with the return value XCCDF_RESULT_PASS, indicating that the integrity property has been properly enforced.

As presented before, the assurance framework is steered by three files, namely the Assurance Profile (AP), the Operational Profile (OP), and the XCCDF file. The excerpt of the Assurance Profile presented in Listing 2 defines one Security Assurance View (SAV) with two Operational Measurement Requirements (OMRs), OMR_1 and OMR_3 (lines 10-11), needed for the evaluation of data integrity (lines 7-13).

1 [...]
2 <SecurityAssuranceView id="SAV_1">
3 <Statement>Security Functions effectiveness</Statement>
4 <SAVObject id="1_Data_Int">
5 <Description>Data Integrity</Description>
6 <MetricsAggregFunction>#%t</MetricsAggregFunction>
7 <Metric id="SF_Int_Active">
8 <Description>Availability of security functions affecting data integrity</Description>
9 <ReqAggregFunction>#t==##</ReqAggregFunction>
10 <ConcernedMeasurementReq>OMR_1</ConcernedMeasurementReq>
11 <ConcernedMeasurementReq>OMR_3</ConcernedMeasurementReq>
12 [...]
13 </Metric>
14 [...]
15 </SAVObject>
16 [...]
17 </SecurityAssuranceView>
18 [...]

Listing 2: AP file for the Airport Management use case.

The XCCDF file in Listing 3 defines the last step of the measurement chain. It specifies the assurance checks (with their related scripts, for example BM_fileInt-1.1.sh, line 14) that have to be executed to collect the base measures (here, BM-fileInt-1.1, lines 9 to 16) needed to evaluate the upper levels of the assurance model.

1 [...]
2 <Profile id="properties_IO">
3 <description>Properties Assurance</description>
4 <select idref="BM-fileInt-1.1" selected="true" />
5 <select idref="BM-fileConf-1.1" selected="true" />
6 <select idref="BM-netConf-1.1" selected="true" />
7 </Profile>
8 <Group id="properties_group">
9 <Rule id="BM-fileInt-1.1" selected="true">
10 <title>File Integrity</title>
11 <description>Check that file integrity is enforced</description>
12 <check system="http://open-scap.org/page/SCE">
13 <check-import import-name="stdout" />
14 <check-content-ref href="BM_fileInt-1.1.sh"/>
15 </check>
16 </Rule>
17 [...]
18 </Group>
19 [...]

Listing 3: XCCDF file for the Airport Management use case.

In order for the Assurance Profile and the XCCDF file to inter-operate, the Operational Profile (Listing 4) links the Operational Measurement Requirement OMR_3 of the Assurance Profile (lines 13 to 18) with the Base Measure BM-fileInt-1.1 of the XCCDF file (lines 3 to 9). It also specifies the machine from which to collect this data (line 7).

1 [...]
2 <DerivedMeasures>
3 <DerivedMeasure id="DM-fileInt-1.1-musik1">
4 <Description>Check that file integrity is effective</Description>
5 <InterpretFunction>"pass".equals($0)</InterpretFunction>
6 <ConcernedBaseMeasure>BM-fileInt-1.1</ConcernedBaseMeasure>
7 <ConcernedDevice>Musik1</ConcernedDevice>
8 <Periodicity>180000</Periodicity>
9 </DerivedMeasure>
10 [...]
11 </DerivedMeasures>
12 <MeasurementRequirements>
13 <MeasurementRequirement id="OMR_3">
14 <MRAggregFunction>#t==##</MRAggregFunction>
15 <DerivedMeasure>DM-fileInt-1.1-musik1</DerivedMeasure>
16 <DerivedMeasure>DM-fileInt-1.1-musik2</DerivedMeasure>
17 <DerivedMeasure>DM-fileInt-1.1-db</DerivedMeasure>
18 </MeasurementRequirement>
19 [...]
20 </MeasurementRequirements>
21 [...]

Listing 4: OP file for the Airport Management use case.

The Assurance Collector Engine executes the script BM_fileInt-1.1.sh (Listing 1) in order to check the enforcement of the integrity properties of the security policy (here, the property on line 5 of the policy). Both the Assurance Profile and the Operational Profile are imported into the Assurance Modeling Tool and derived into the Airport Management Assurance Model, displayed in Fig. 7 by the Assurance Visualization Tool. The model shows the Security Assurance Views defined in the Assurance Profile, in this case the Security Functions effectiveness view, with its corresponding measurement requirements fed by the assurance checks. The left panel allows navigating through the model structure and shows the assurance compliance using a colour code. The right panel shows the details of the selected model component; in this case it shows the base measures corresponding to the status of the SELinux (MAC) mechanism, but the results obtained from the integrity property verification can also be displayed.

Fig. 7: Airport Management Assurance Model Evaluation and Visualization.
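The aggregation functions appearing in these profiles have simple semantics: "pass".equals($0) turns a base measure's textual result into a boolean, and #t==## requires the number of true values to equal the total number of values. The following is a small illustrative sketch of our reading of these semantics, with our own function names:

```python
# Illustrative sketch of the measurement-chain semantics used above.
def interpret(base_measure_result):
    # InterpretFunction "pass".equals($0): the derived measure is true iff
    # the collected XCCDF result string is "pass".
    return base_measure_result.strip() == "pass"

def aggregate(derived_values):
    # MRAggregFunction "#t==##": the requirement holds iff every one of its
    # derived measures is true (count of true values == count of all values).
    return sum(derived_values) == len(derived_values)

# OMR_3 aggregates the file-integrity checks collected on three devices.
omr_3 = aggregate([interpret(r) for r in ("pass", "pass", "fail")])
print(omr_3)  # False: one device fails its integrity check
```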
_E. Results_

Table I presents some statistics concerning the security policy for this use-case.

| Item | Count |
|---|---|
| Templates | 8 |
| Properties (total) | 47 |
| Properties for the client node | 1 |
| Properties for the proxy VM | 7 |
| Properties for the application VM | 12 |
| Properties for the database VM | 27 |
| SSMs collaborating to enforce the security properties (SELinux, iptables, PAM, SSH, Oscap) | 5 |
| Assurance scripts for the properties | 8 |
| Assurance scripts for the SSMs | 4 |

TABLE I: Use-case Policy Statistics

As we can see, the policy for this use-case uses only 8 different property templates, since our high-level properties cover a wide range of security needs. The policy itself uses 50 contexts and 47 properties for the protection of the whole use-case, which is a very low number considering all the security functionalities covered. Moreover, this policy is entirely generated from a GUI, so the Cloud tenant does not have to write these contexts and properties himself. Besides, this policy manages both the enforcement and the assurance, so that the Cloud tenant has information about the status of the enforcement through a graphical dashboard.

V. CONCLUSION

In this paper, we have presented a solution to specify, enforce, and assure security properties in a Cloud environment. Our solution handles the enforcement by re-using existing security and assurance mechanisms, such as SELinux, iptables, PAM, SSH, or Oscap. It is composed of several elements: 1) a language that can express the security and assurance properties independently of the system, the naming of resources, and the available mechanisms, 2) an enforcement engine, the SE[E], that receives the properties and enforces them by configuring existing mechanisms, and 3) an assurance framework that models, measures, aggregates, evaluates, and presents the security assurance results. Our solution has shown its efficiency on a complete industrial use-case for airport system management: 1) the policy expressing the security requirements of the use-case has been defined, 2) the policy has been enforced using several mechanisms that collaborate to offer end-to-end protection (across the different machines), and 3) the assurance framework has confirmed the proper enforcement of the security policy.

In our future work, we will define generic policy templates that could be used to secure the system base, in addition to the policy on the tenant's software architecture. This added protection would improve the overall security of the system. Besides, we plan to extend the language so that the results generated by the assurance framework are sent back to the enforcement engine: this would allow the enforcement engine to update the configuration of the security mechanisms and adapt the protection in case something is not working as expected.

**_Acknowledgments_** This work was done thanks to the financial support of the Celtic+ project Seed4C (Eureka Celtic+ CPP2011/2-6).
{ "disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.1109/UCC.2015.45?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1109/UCC.2015.45, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "CCBYNCND", "status": "GREEN", "url": "https://hal.inria.fr/hal-01240557/file/UCC_2015.pdf" }
2015
[ "JournalArticle", "Conference" ]
true
2015-12-01T00:00:00
[ { "paperId": "58a56d344c96de6d2ae898d0ae7b4f8fc4b6ade8", "title": "Scalable and Secure Sharing of Personal Health Records in Cloud Computing Using Attribute-Based Encryption" }, { "paperId": "cff2f444a32c7318b182ea8fcc64799b43c693c3", "title": "SCALABLE AND SECURE SHARING OF PERSONAL HEALTH RECORDS IN CLOUD COMPUTING" }, { "paperId": "da8d7822c57b6fcaefc35bce2f6ac845f0641305", "title": "Development of protection profile and security target for Indonesia electronic ID card's (KTP-el) card reader based on common criteria V3.1:2012/SNI ISO/IEC 15408:2014" }, { "paperId": "1a798f7f0df116c2ac43769764594f24df3d97b4", "title": "An advanced security-aware Cloud architecture" }, { "paperId": "b578fe9ed0f71a0ea4d6acc4fabbe6cab91447fe", "title": "Accountability for cloud and other future Internet services" }, { "paperId": "b8f22f6fbf4f91607f842c37c0367826067830af", "title": "VESPA: multi-layered self-protection for cloud resources" }, { "paperId": "00ffdd19d85c066a1b9359992eb31168b1e12e1f", "title": "Policy and Context Management in Dynamically Provisioned Access Control Service for Virtualized Cloud Infrastructures" }, { "paperId": "115193759bcf8ffcb2cf8d8a29cabbf54228fbc3", "title": "On-Demand Security Architecture for Cloud Computing" }, { "paperId": "4a16120f3da0159922d3e066b8e61f1e4dfb1760", "title": "The MEERKATS Cloud Security Architecture" }, { "paperId": "c3f095c11102196896ed10ed27b9adb2e5bca68f", "title": "On-the-fly multiparty computation on the cloud via multikey fully homomorphic encryption" }, { "paperId": "0eda5846a8ab5ac5ec175331c98554de389b3449", "title": "Towards a discipline of mission-aware cloud computing" }, { "paperId": "18b1791690af2873c800c9be6ef1a51470cc15ab", "title": "A Policy Management Framework for Self-Protection of Pervasive Systems" }, { "paperId": "23c9b3ca9b7f1ca3226ba3cbfbc26e7cf7935880", "title": "An XACML-based privacy-centered access control system" }, { "paperId": "d49b3b10d0da631e8e0f39870bc001a8e76ee864", "title": "Proceedings of the first ACM workshop on Information security governance" }, { "paperId": "c1e0cf409bebefa4997c84fe2a3b401f52c06138", "title": "Ponder2: A Policy System for Autonomous Pervasive Environments" }, { "paperId": "d17f69b0061d54f5c8b114faabc969b770591497", "title": "Conformance Checking of Access Control Policies Specified in XACML" }, { "paperId": "3c154b28ba5110443894e2c26edb183a1504a802", "title": "Using first-order logic to reason about policies" }, { "paperId": null, "title": "“Cloud Adoption Practices and Priorities Survey Report,”" }, { "paperId": null, "title": "OpenSCAP Website" }, { "paperId": null, "title": "SCAP: Security Content Automation Protocol" }, { "paperId": null, "title": "Common Criteria for Information Technology Evaluation v3.1 (ISO/IEC 15408)" }, { "paperId": null, "title": "The SELinux Notebook -Third Edition" }, { "paperId": "0958d851a6b1ff1b4b5b862e3758d2ac10bf2779", "title": "Outlook: Cloudy with a Chance of Security Challenges and Improvements" }, { "paperId": "e67fb2446f914ef589835c540bcd60c5542a5c0e", "title": "The Secure Shell (SSH) Connection Protocol" }, { "paperId": "6d701f92a3b7cf34fe4cc770da9782a564aa9307", "title": "Trusted Computer System Evaluation Criteria ( Orange Book ) December" }, { "paperId": null, "title": "Iptables tutorial 1.2. 
2" }, { "paperId": "6adacd3f01ef16bd70b179132cdf8fa5ea6e6531", "title": "Unified Login with Pluggable Authentication Modules ( PAM )" }, { "paperId": "2e602d3dd1d9e950fcb73da5f999a812e019d3c2", "title": "2011 6th International Conference on Risks and Security of Internet and Systems (crisis) Operational Security Assurance Evaluation in Open Infrastructures" }, { "paperId": null, "title": "// Policy for the Database VM 2 Isolation( DomainAODB )" }, { "paperId": null, "title": "Policy for the ReverseProxy VM 23 Integrity( BinaryModuleWeb ); 24 Integrity( BinaryWeb ); 25 Integrity( ConfigWeb )" }, { "paperId": null, "title": "Access ( MysqlPort | MysqlProxyPort | SSHPort | NTPPort , AnyIP )" }, { "paperId": null, "title": "Assurance( Freq )" }, { "paperId": null, "title": "32 Authentication(anyone , ServiceSSH" }, { "paperId": null, "title": "ETSI TR 187 023: Security Assurance Profile for Secured Telecom Operations Statement of needs for security assurance measurement in operational telecom infrastructures" } ]
14,989
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/028821c2f74bce87b58d99cf63a204fe5cce94e9
[ "Computer Science" ]
0.879975
Optimal content placement for peer-to-peer video-on-demand systems
028821c2f74bce87b58d99cf63a204fe5cce94e9
2011 Proceedings IEEE INFOCOM
[ { "authorId": "143858231", "name": "Bo Tan" }, { "authorId": "1768703", "name": "L. Massoulié" } ]
{ "alternate_issns": null, "alternate_names": null, "alternate_urls": null, "id": null, "issn": null, "name": null, "type": null, "url": null }
null
# Optimal Content Placement for Peer-to-Peer Video-on-Demand Systems[1]

#### Bo (Rambo) Tan
Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA. Email: botan2@illinois.edu

#### Laurent Massoulié
Technicolor Paris Research Lab, Issy-les-Moulineaux Cedex 92648, France. Email: laurent.massoulie@technicolor.com

**Abstract—In this paper, we address the problem of content placement in peer-to-peer systems, with the objective of maximizing the utilization of peers' uplink bandwidth resources. We consider system performance under a many-user asymptotic. We distinguish two scenarios, namely "Distributed Server Networks" (DSN) for which requests are exogenous to the system, and "Pure P2P Networks" (PP2PN) for which requests emanate from the peers themselves. For both scenarios, we consider a loss network model of performance, and determine asymptotically optimal content placement strategies in the case of a limited content catalogue. We then turn to an alternative "large catalogue" scaling where the catalogue size scales with the peer population. Under this scaling, we establish that storage space per peer must necessarily grow unboundedly if bandwidth utilization is to be maximized. Relating the system performance to properties of a specific random graph model, we then identify a content placement strategy and a request acceptance policy which jointly maximize bandwidth utilization, provided storage space per peer grows unboundedly, although arbitrarily slowly, with system size.**

I. INTRODUCTION

The amount of multimedia traffic accessed via the Internet, already of the order of exabytes (10^18) per month, is expected to grow steadily in the coming years. A peer-to-peer (P2P) architecture, whereby peers contribute resources to support service of such traffic, holds the promise to support such growth more cheaply than by scaling up the size of data centers. More precisely, a large-scale P2P system based on resources of individual users can absorb part of the load that would otherwise need to be served by data centers.

In the present work we address specifically the Video-on-Demand (VoD) application, for which the critical resources at the peers are storage space and uplink bandwidth. Our objective is to ensure that the largest fraction of traffic is supported by the P2P system. More precisely, we look for content placement strategies that enable content downloaders to maximally use the peers' uplink bandwidth, and hence maximally offload the servers in the data centers. Such strategies must adjust to the distinct popularity of video contents, as a more popular content should be replicated more frequently.

Fig. 1: Two architectures of P2P VoD systems: (a) Distributed Server Network; (b) Pure Peer-to-Peer Network.

We consider the following mode of operation: Video requests are first submitted to the P2P system; if they are accepted, uplink bandwidth is used to serve them at the video streaming rate (potentially via parallel substreams from different peers). They are rejected if their acceptance would require disruption of an ongoing request service. Rejected requests are then handled by the data center.

[1] Part of the results developed in this paper have been the subject of a "brief announcement" in [12] and are shown in more detail in [13].
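This admission rule can be summarized by the following illustrative sketch (ours, not the paper's); `feasible_with_repacking` is a hypothetical stand-in for the packing test that Section III formalizes.

```python
# Illustrative sketch of the admission rule described above.
def handle_request(content, active, caches, U, feasible_with_repacking):
    # Tentatively add the new request to the vector of ongoing requests.
    tentative = dict(active)
    tentative[content] = tentative.get(content, 0) + 1
    if feasible_with_repacking(tentative, caches, U):
        return "accept"    # served by peers at the video streaming rate
    return "redirect"      # rejected from the P2P system; handled by the data center
```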
Alternative modes of operation could be envisioned (e.g., enqueueing of requests, service at rates distinct from the streaming rate, joint service by peers and data center, ...). However, the proposed model is appealing for the following reasons. It ensures zero waiting time for requests, which is desirable for the VoD application; analysis is facilitated, since the system can be modeled as a loss network [7], for which powerful theoretical results are available; and finally, as our results show, simple placement strategies ensure optimal operation in the present model.

In the P2P system we are considering, there are two kinds of peers: boxes and pure users. Their difference is that boxes do contribute resources (storage space and uplink bandwidth) to the system, while pure users do not. This paper focuses on the following two architectures (illustrated in Figure 1):

- Distributed Server Network (DSN): Requests to download contents come only from pure users, and can be regarded as external requests.
- Pure P2P Network (PP2PN): There are no pure users in the system, and boxes do generate content requests, which can be regarded as "internal".

The rest of the paper is organized as follows: We review related work in Section II and introduce our system model in Section III. For the Distributed Server Network scenario, the so-called "proportional-to-product" content placement strategy is introduced and shown to be optimal in a large system limit in Section IV, where extensive simulation results are also provided. For the Pure P2P Network scenario, a distinct placement strategy is introduced and proved optimal in Section V. These results apply for a catalogue of contents of limited size. An alternative model, in which the catalogue size grows with the user population, is introduced in Section VI, where it is shown that the "proportional-to-product" placement strategy remains optimal in the DSN scenario in this large catalogue setting, for a suitably modified request management technique.

II. RELATED WORK

The number and location of replicas of distinct content objects in a P2P system have a strong impact on such a system's performance. Indeed, together with the strategy for handling incoming requests, they determine whether such requests must either be delayed, or served from an alternative, more expensive source such as a remote data center. Requests which cannot start service at once can either be enqueued (we then speak of a waiting model) or redirected (we then speak of a loss model).

Previous investigations of content placement for P2P VoD systems were conducted by Suh et al. [11]. The problem tackled in [11] differs from our current perspective; in particular, no optimization of placement with respect to content popularity was attempted in this work. Performance analyses of both queueing and loss models are considered in [11]. Valancius et al. [17] considered content placement dependent on content popularity, based on a heuristic linear program, and validated this heuristic's performance in a loss model via simulations. Tewari and Kleinrock [14], [15] advocated tuning the number of replicas in proportion to the request rate of the corresponding content, based on a simple queueing formula, for a waiting model, and also from the standpoint of the load on network links. They further established via simulations that Least Recently Used (LRU) storage management policies at peers emulated rather well their proposed allocation. Wu et al.
[18] considered a loss model, and a specific time-slotted mode of operation whereby requests are submitted to randomly selected peers, each of which accommodates a randomly selected request. They showed that in this setup the optimal cache update strategy can be expressed as a dynamic program. Through experiments, they established that simple mechanisms such as LRU or Least Frequently Used (LFU) perform close to the optimal strategy they had previously characterized. Kangasharju et al. [6] addressed file replication in an environment where peers are intermittently available, with the aim of maximizing the probability of a requested file being present at an available peer. This differs from our present focus in that the bandwidth limitation of peers is not taken into account, while the emphasis is on their intermittent presence. They established optimality of content replication in proportion to the logarithm of its popularity, and identified simple heuristics approaching this. Boufkhad et al. [3] considered P2P VoD from yet another viewpoint, looking at the number of contents that can be simultaneously served by a collection of peers.

The content placement problem has also been addressed with other optimization objectives. For example, Almeida et al. [1] aim at minimizing the total delivery cost in the network, and Zhou et al. [19] target jointly maximizing the average encoding bit rate and the average number of content replicas, as well as minimizing the communication load imbalance of video servers. The cache dimensioning problem is considered in [9], where Laoutaris et al. optimized the storage capacity allocation for content distribution networks under a limited total cache storage budget, so as to reduce the average fetch distance for the requested contents, with consideration of load balancing and workload constraints on a given node. Our paper takes a different perspective, focusing on many-user asymptotics; our results show that a finite storage capacity per node is never a bottleneck (even in the "large catalogue model", where storage scales to infinity more slowly than the system size).

There are obvious similarities between our present objective and the above works. However, none of these identifies explicit content placement strategies at the level of the individual peers which lead to a minimal fraction of redirected (lost) requests in a setup with dynamic arrivals of requests. Finally, there is a rich literature on loss networks (see in particular Kelly [7]); however, our present concern of optimizing placement to minimize the amount of rejected traffic in a corresponding loss network appears new.

III. MODEL DESCRIPTION

We now introduce our mathematical model and related notations. Denote the set of all boxes by $\mathcal{B}$. Let $B = |\mathcal{B}|$ and index the boxes from 1 to $B$. Box $b$ has a local cache $J_b$ that can store up to $M$ contents, all boxes having the same storage space $M$. We further assume that each box can simultaneously serve $U$ concurrent requests, where $U$ is an integer, i.e., each box has an uplink bandwidth equal to $U$ times the video streaming rate. In particular, we assume identical streaming rates for all contents. The set of available contents is denoted by $\mathcal{C}$. Let $C = |\mathcal{C}|$ and index the contents from 1 to $C$. Thus, a given box $b$ is able to serve requests for content $c$ for all $c \in J_b$.
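To make the model concrete, here is an illustrative sketch (ours, not the paper's) that decides whether a given vector of concurrent requests can be packed onto the boxes. It implements the matching condition formalized below as condition (1), via a simple augmenting-path maximum flow.

```python
# Illustrative feasibility check for a request vector n (see condition (1)
# below): requests for content c may only be served by boxes caching c,
# and each box serves at most U concurrent requests.
from collections import deque

def feasible(n, caches, U):
    C, B = len(n), len(caches)
    # Nodes: 0 = source, 1..C = contents, C+1..C+B = boxes, last = sink.
    N = C + B + 2
    src, sink = 0, N - 1
    cap = [[0] * N for _ in range(N)]
    for c in range(C):
        cap[src][1 + c] = n[c]
    for b, cache in enumerate(caches):
        cap[1 + C + b][sink] = U
        for c in cache:
            cap[1 + c][1 + C + b] = n[c]
    flow = 0
    while True:
        # Breadth-first search for an augmenting path in the residual graph.
        parent = [-1] * N
        parent[src] = src
        q = deque([src])
        while q and parent[sink] == -1:
            u = q.popleft()
            for v in range(N):
                if parent[v] == -1 and cap[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[sink] == -1:
            break
        # Push the bottleneck amount along the path found.
        v, aug = sink, float("inf")
        while v != src:
            aug = min(aug, cap[parent[v]][v]); v = parent[v]
        v = sink
        while v != src:
            cap[parent[v]][v] -= aug; cap[v][parent[v]] += aug; v = parent[v]
        flow += aug
    return flow == sum(n)

# Toy example: 3 boxes with caches over contents {0,1,2}, U = 2 slots each.
print(feasible([2, 2, 1], caches=[{0, 1}, {1, 2}, {0, 2}], U=2))  # True
```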
In a Pure P2P Network, when box b has a request for a certain content c, which is coincidentally already in its cache, a “local service” is provided and no download service is needed, hence the service to this request consumes no bandwidth resource. The effect of local service on deriving an optimal content placement strategy will be discussed in detail in Section V. In a Distributed Server Network, however, local service will never occur since all the requests are external with respect to ----- the system resources[2]. For a new request that needs a download service, an attempt is made to serve this request by some box holding content c, while ensuring that previously accepted requests can themselves be assigned to adequate boxes, given the cache content and bandwidth resources of all boxes. This potentially involves “repacking” of requests, i.e., reallocation of all the bandwidth resources in the system (“box-serving-request” mapping) to accommodate this new download demand pattern. If such repacking can be found, then the request is accepted; otherwise, it is rejected from the P2P system. It will be useful in the sequel to characterize the concurrent numbers of requests that are amenable to such repacking. Let n = {nc}c∈C be the vector of numbers nc of requests per content c. Clearly, a matching of these requests to server boxes is feasible if and only if there exist nonnegative integers zcb (number of concurrent downloads of content c from box b) such that � zcb = nc, ∀ c ∈C; b:c∈Jb � zcb ≤ U, ∀ b ∈B. (1) c:c∈Jb A more compact characterization of feasibility follows by an application of Hall’s theorem [2] (detailed in Appendix B), giving that n is feasible if and only if: ∀S ⊆C, � nc ≤ U |{b ∈B : S ∩Jb ̸= ∅}| . (2) c∈S We now introduce statistical assumptions on request arrivals and durations. New requests for content c occur at the instants of a Poisson process with rate νc. We assume that the video streaming rate is normalized to 1, and is the same for all contents. We further assume that all videos have the same duration, again normalized at 1. Under these assumptions, the amount of work per time unit brought into the system by content c equals νc. With the above assumptions at hand, assuming fixed cache contents, the vector n of requests under service is a particular instance of a general stochastic process known as a loss network model. Loss networks were introduced to represent ongoing calls in telephone networks, and exhibit rich structure. In particular, the corresponding stochastic process is reversible, and admits a closed-form stationary distribution. For the Distributed Server Network model, the stationary distribution reads: π(n) = [1] � νc[n][c] (3) Z c∈C nc! [I][{][n][ is feasible][}][.] In words, the numbers of requests nc are independent Poisson random variables with parameter νc, conditioned on feasibility of the whole vector n. 2In fact the external users issuing requests could keep local copies of previously accessed content, and hence experience “local service” upon reaccessing the same content. But we do not need consider this as this happens outside the perimeter of our system. Our objective is then to determine content placement strategies so that in the corresponding loss network model, the fraction of rejected requests is minimal. The difficulty in doing this analysis resides in the fact that the normalizing constant Z is cumbersome to evaluate. Nevertheless, simplifications occur under large system asymptotics, which we will exploit in the next sections. 
We conclude this section by the following remark. For simplicity we assumed in the above description that a particular content is either fully replicated at a peer, or not present at all, and that a request is served from only one peer. It should however be noted that we can equally assume that contents are split into sub-units, which can be placed onto distinct peers, and downloaded from such distinct peers in parallel sub-streams in order to satisfy a request. This extension is detailed in Appendix F. IV. OPTIMAL CONTENT PLACEMENT IN DISTRIBUTED SERVER NETWORKS We first describe a simple adaptive cache update strategy driven by demand, and show why it converges to a “predetermined” content placement called “proportional-to-product” strategy. We then establish the optimality of this “proportionalto-product” placement in a large system asymptotic regime. _A. The Proportional-to-Product Placement Strategy_ A simple method to adaptively update the caches at boxes driven by demand is described as follows: **Demand-Driven Cache Update** Whenever a new request comes, with probability ǫB (ǫ is chosen such that ǫB 1), the server picks a box b uniformly at ≤ random, and attempts to push content c into this box’s cache. If c is already in there, do nothing; otherwise, remove a content selected uniformly at random from the cache. Since external demands for content c are according to a Poisson process with rate νc, we find that under the above simple strategy, content c is pushed at rate ǫνc into a particular box which is not caching content c. Recall that each box stores M distinct contents, and let j denote a candidate “cache state”, which is a size M subset of the full content set . For C convenience, let denote the collection of all such j. J With the above strategy, the caches at each box evolve independently according to a continuous-time Markov process. The rate at which cache state j is changed to j[′], where j[′] = j + c d for some contents d j, c / j, which { } \ { } ∈ ∈ we denote by q(j, j[′]), is easily seen to be q(j, j[′]) = ǫνc/M . Indeed, content d is evicted with probability 1/M, while content c is introduced at rate ǫνc. It is easy to verify that the distribution p( ) given by p(j) = [1] Z � νc, j ∈J, (4) c∈j ----- for some suitable normalizing constant Z, verifies the follwing equation: p(j)q(j, j[′]) = p(j[′])q(j[′], j), j, j[′] . (5) ∈J The latter relations, known as the local balance equations, readily imply that p( ) is a stationary distribution for the above Markov process; since the process is irreducible, this is the unique stationary distribution. Thus, we can conclude that under this cache update strategy, the random cache state at any box eventually follows this stationary distribution. This is what we refer to as the **“proportional-to-product” placement strategy, and it is the** one we advocate in the Distributed Server Network scenario. _Remark 1: The customized parameter ǫ should not be too_ large, otherwise the burden on the server will be increased due to use of “push”. Neither should it be too small, otherwise the Markov chain will converge too slowly to the steady state. ⋄ Under the cache update strategy, the distribution of cache contents needs time to converge to the steady state. However, if we have a priori information about content popularity, we can use a sampling strategy as an alternative way to directly generate proportional-to-product content placement in one go. 
One method works as follows: **Sampling-Based Preallocation** Select successively M contents at random in an i.i.d. fashion, according to the probability distribution {νˆc}, where νˆc = νc/ [�]c[′]∈C [ν][c][′][ is the normalized popularity. If there are] duplicate selections of some content, re-run the procedure. It is readily seen that this yields a sample with the desired distribution. An alternative sampling strategy which can be faster than the one described above when very popular items are present is given in the Appendix C. _B. A Loss Network Under Many-User Asymptotics_ We now consider the asymptotic regime called “many user– **fixed catalogue” scaling: The number of boxes B goes to** infinity. The system load, defined as �c∈C [ν][c] ρ ≜, (6) BU is assumed to remain fixed, which is achieved in the present section by assuming that the content collection is kept fixed, C while the individual rates {νc} scale linearly with B. We also assume that the normalized content popularities {νˆc} remain fixed as B increases. It thus holds that νc = ˆνcρBU for all c . Note that although boxes are pure resources rather than ∈C users, scaling of {νc} with B to infinity actually indicates a “many-user” scenario. To analyze the performance of our proposed proportionalto-product strategy, we require that the cache contents are sampled at random according to this strategy and are subsequently kept fixed. This can either reflect the situation where we use the previously introduced sampling strategy, or alternatively the situation where the cache update strategy has already made the distribution of cache states converge to the steady state, and occurs at a slower time scale than that at which new requests arise and complete. Note that, as B grows large, the right-hand side in the feasibility constraint (2) verifies, by the strong law of large numbers, |{b ∈B : S ∩Jb ̸= ∅}| ∼ B � mj. (7) j:j∩S̸=∅ Here, {mj} corresponds to a particular content placement strategy, under which each box holds a size M content set j with probability mj, and this happens independently over boxes. Specifically, mj = Z1 �c∈j [ν][ˆ][c][ (where][ Z][ is a nor-] malizing constant) corresponds to our proportional-to-product placement strategy. We now establish a sequence of loss networks indexed by a large parameter B. For the B[th] loss network, requests for content c (regarded as “calls of type c”) arrive at rate ∈C νc(B) = (ρU ˆνc) · B, each “virtual link” S ⊆C has a capacity WS(B) ≜ (U � mj) · B, (8) j:j∩S̸=∅ and c represents that virtual link is part of the “route” ∈S S which serves call of type c.[3] This particular setup has been identified as the “large capacity network scaling” in Kelly [7]. There, it is shown that the loss probabilities in the limiting regime where B can be characterized via the analysis →∞ of an associated variational problem. We now describe the corresponding results in [7] relevant to our present purpose. For the B[th] loss network, consider the problem of finding the mode of the stationary distribution (3), which corresponds to maximizing c∈C[(][n](cB) [log][ ν]c(B) − log n(cB) [!)][ over feasible][ n][(][B][)] [.] [�] (B) (B) (B) (B) Then, approximate log nc [!][ by][ n]c [log][ n]c − nc according to Stirling’s formula and replace the integer vector n[(][B][)] by a real-valued vector x[(][B][)] . This leads to the following optimization problem: **[OPT 1]** maxx[(][B][)] �(x(cB) [log][ ν]c(B) − x(cB) [log][ x](cB) + x(cB) [)] (9) c∈C s.t. ∀S ⊆C, � x(cB) ≤ WS(B) (10) c∈S over x[(][B][)] 0. 
[3] Note that this construction in fact admits a form of fixed routing, which is equivalently obtained from a dynamic routing model where each particular box is regarded as a link and calls of type $c$ can use any single-link route corresponding to a box holding content $c$. This equivalent transformation is based on the assumption that repacking is allowed (cf. Section 3.3 in [7]). We have already carried out this transformation when converting feasibility condition (1) into (2) in Section III.
As we shall see in the next two sections, increasing M **does improve performance if either local** services occur, as in the Pure P2P Network scenario (Section Two types of product terms (mapped to subsets ) appear K ⊆C on both sides: I. [�]c∈K [ν][ˆ][c][:][ |K|][ =][ M][ + 1][,][ K ∩] [S][ ̸][=][ ∅][.] II. ([�]c∈K [ν][ˆ][c][)][ ·][ ˆ][ν][c][′][:][ c][′][ ∈K ∩] [S,][ |K|][ =][ M] [.] To show whether inequality (17) hold, we only have to prove that given any, for each product term (related to a ) S ⊆C K which appears in one inequality corresponding to a certain, S its multiplicity on the left hand side is no more than that on the right hand side. 1. For a product term of Type I: - On the LHS: Since [�]c∈K [ν][ˆ][c][ =][ �]c∈G [ν][ˆ][c][ ·][ ˆ][ν][c][′][ for] some and c[′], where is a size M G ⊆C ∈S ∩K G content set, c[′], and = + c[′] . It is easy to ̸∈G K G { } see that we have different choice of c[′] in |S ∩K| a, so the multiplicity of this product term on the K LHS equals . |S ∩K| - On the RHS: When 2, for any c[′], |S ∩K| ≥ ∈K c[′] is a size M content set of which the intersect K\{ } with is not empty, hence the multiplicity equals S (= M +1). When = 1, the exception to |K| |S ∩K| the above case is that if c[′], then c[′] is ∈S ∩K K\{ } a size M content set which has no intersect with S and is actually impossible to appear in the second summation term (over all size M content sets s.t. G = ) in inequality (17). Thus, the multiplicity G ∩S ̸ ∅ equals 1 (= M ). |K| − From above, we can see that the multiplicity of the product term on the LHS is always no more than that on the RHS. ----- 2. For a product term of Type II: is actually already a size M content set s.t. = K G G ∩C ̸ . Therefore, it is easy to see that on both sides, the ∅ multiplicities of this product term are both 1. Now we can conclude that inequality (17) holds for all, S ⊆C and continue to check the complementary slackness. Given ρ 1, one simple solution to equation (15) reads: ≥ ∀S ⊆C, ¯yS[(][B][)] = log ρ · I{S=C}. (18) Besides, inequality (17) is tight for = (we even do not S C need to check this when ρ = 1). Therefore, complementary slackness is always satisfied with solution (18). So far we have proved that the KKT condition holds when ρ 1. When ρ < 1, we modify (14) by letting ≥ − � y¯S[(][B][)] S:c∈S = 1, c, (19) ∀ ∈C � Fig. 2: System loss rates under different traffic loads 10 contents and serve at most U = 4 concurrent requests. The duration of downloading each content is exponentially distributed with mean equal to 1 time unit. The parameter ǫ in the cache update algorithm is set as 1/B such that upon a request, one box will definitely be chosen for cache update. For every algorithm, we take the average over 10 independent repetitive experiments, each of which is observed for 10 time units. According to the sample path, the initial 1/5 of the whole period is regarded as a “warm-up” period and hence ignored in the calculation of final statistics.[4] Some implementation details are not captured by our theoretical model, but should be considered in simulations. Upon a request arrival, the most idle box (i.e., with the largest number of free connections) among all the boxes which hold the requested content is chosen to provide the service, for the purpose of load balancing. If none of them is idle, we use a heuristic repacking algorithm which iteratively reallocates the ongoing services among boxes, in order to handle as many requests as possible while still respects load balancing. 
One important parameter which trades off the repacking complexity and the performance is the maximum number of iterations t[max]r, which is set as “undefined” by default (i.e., the iterations will continue until the algorithm terminates; theoretically there are at most C iterations). Other details regarding the repacking algorithm can be found in Appendix D. We will see an interesting observation about t[max]r later. Figure 2 evaluates system loss rates under different traffic loads ρ. Our two algorithms SAMP and CU, which target the proportional-to-product placement, both match the theoretically optimum very well.[5] On the other hand, the UNIF algorithm, which does not utilize any information about content popularity, incurs a large loss even if the system is underloaded (ρ < 1). The gain of proportional-to-product placement over UNIF becomes less significant as the traffic 4We can get enough samples during each observation period of 10 time units (for example, when ρ = 1, B = 4000 and U = 4, the average arrivals would be 160000). It has also been checked that after the warm-up period, the distribution of cache states well approximates the proportional-to-product placement and is kept quite stably for the remaining observation period. 5In fact, around ρ = 1, they perform a little worse than the optimum. The reason is that ρ = 1 is the “critical traffic load” (a separation point between zero-loss and nonzero-loss ranges), under which the simulation results are easier to incur deviation from the theoretical value. exp � and hence there is an additional factor 1/ρ > 1 on the RHS of inequality (17). Since the old version of inequalities (17) is proved to hold, the new version automatically holds, but none of them is tight now. However, from (19) we have ¯yS(B) = 0,, which means complementary slackness is always ∀S ⊆C satisfied (similar to ρ = 1). Therefore, according to equation (13), it can be concluded that by using mj = [�]c∈j [ν][ˆ][c][/Z][ for all][ j][, we can achieve] A(cB) = min{1, 1/ρ} + O �B[−] [1]2 �, ∀c ∈C, so limB→∞ A(cB) = min{1, 1/ρ}. _D. Simulation Results_ In this subsection, we use extensive simulations to evaluate the performances of the two implementable schemes proposed in Subsection IV-A which follow the “proportional-to-product” placement strategy, namely the sampling-based preallocation scheme and the demand-driven cache update (labeled as **“SAMP” and “CU”, respectively).** We compare the results with the theoretical optimum (i.e., loss rate for each content equals (1 1/ρ)[+]; the curves − are labeled as “Optimal”) and a uniform placement strategy (labeled as “UNIF”) defined as the following: first, permute all the contents uniformly at random, resulting in a content sequence {ci}, for 1 ≤ i ≤ C; then, push the M contents indexed by subsequence {c(j mod C)[}]bM+1≤j≤(b+1)M [into] the cache of box b, for 1 b B. UNIF is also used to ≤ ≤ generate the initial content placement for CU so that the loss rate can be reduced during the warm-up period. If not further specified, the default parameter setting is as follows: The popularity of contents {νˆc} follows a zipf-like distribution (see e.g. [4]), i.e., (c0 + c)[−][α] νˆc = (20) �c[′]∈C[(][c][0][ +][ c][′][)][−][α][,] with a decaying factor α > 0 and the shift c0 ≥ 0. We use α = 0.8 and c0 = 0. The content catalogue size C = 500 and the number of boxes B = 4000. 
Each box can store M = ----- 80% SAMP 70% CU 60% UNIF Optimal 50% 40% 30% 20% 10% 0% 0 0.4 0.8 1.2 1.6 2 α |SAMP CU UNIF Optimal|Col2|Col3|Col4|Col5| |---|---|---|---|---| |||||| Fig. 3: System loss rates with different α (ρ = 1) Fig. 5: System loss rates with different number of boxes Fig. 4: Effect of repacking on the system loss rate load grows, which can be easily expected. In Figure 3, when the decaying factor α in the zipflike distribution increases, the distribution of placed contents generated by UNIF has a higher discrepancy from the real content popularity distribution, so UNIF performs worse. On the other hand, the two proportional-to-product strategies are insensitive to the change of content popularity, as we expected. Figure 4 shows the effect of repacking on the system loss rate. In sub-figure (a), we find that under SAMP, repacking is not necessary. In sub-figure (b) which shows the performances of CU, when ρ is low, one iteration of repacking is sufficient to make the performance close enough to the optimum; when ρ is high, repacking also becomes unnecessary. The main takeaway message from this figure is that we can execute a repacking procedure of very small complexity without sacrificing much performance. The reason is that when the server picks a box to serve a request, it already respects the rule of load balancing. We then explain why CU still needs one iteration of repacking to improve the performance when ρ is low. Note that during the cache update, it is possible that the box is currently uploading the “to-be-kicked-out” content to some users. If repacking is enabled, those ongoing services can be repacked to other boxes (see details in Appendix D), but if t[max]r = 0 (no repacking), they will be terminated and counted as losses. When ρ is high, however, boxes are more likely to be busy, which leads to the failure of repacking, so repacking Fig. 6: Loss rate of requests for each content (ρ = 1) makes no difference. Recall that the proportional-to-product placement is only optimal when the number of boxes B . Figures 5 and →∞ 6 then show the impact of a finite B. In Figure 5, as B decreases, the system loss rate of every algorithms increases (compared to the two proportional-to-product strategies, UNIF is less sensitive to B). In Figure 6, non-homogeneity in the individual loss rates of requests for each content also reflects a deviation from the theoretical result (when B, the →∞ loss rates of the requests for all the contents are proved to be identical). As expected, increasing the number of boxes (from 4000 to 8000) makes the system closer to the limiting scenario and the individual loss rates more homogeneous. Another observation is that as the popularity of a content decreases (in the figure, the contents are indexed in the descending order of their popularity), the individual loss rate increases. However, according to Figure 2, those less popular contents do not affect the system loss rate much even if they incur high loss, since their weights {νˆc} are also lower. In fact, if we choose a smaller content catalogue size C or a larger cache size M, simulations show the negative impact of a finite B will be reduced (the figures are omitted here). This tells us that if C scales with B rather than being fixed, the proof of optimality under the loss network framework in Subsection IV-B is no longer valid and M must be a bottleneck against the performance of the optimal algorithm. 
We will solve this problem by introducing a certain type of "large catalogue model" later in Section VI.

V. OPTIMAL CONTENT PLACEMENT IN PURE PEER-TO-PEER NETWORKS

In the Pure P2P Network scenario, when box $b$ has a request for content $c$ which is currently in its own cache, a "local service" is provided and no download bandwidth in the network is consumed. To simplify our analysis, each request for a specific content is assumed to originate from a box chosen uniformly at random (this in particular assumes identical tastes of all users). This means that the effective arrival rate of the requests for content $c$ that generate traffic load actually equals $\tilde\nu_c \triangleq \nu_c (1 - \tilde m_c)$, where $\tilde m_c$ is defined as the fraction of boxes that have cached content $c$.

Let $\rho_c \triangleq \rho \hat\nu_c$ denote the traffic load generated by requests for content $c$, and $\lambda_c$ the fraction of the system bandwidth resources used to serve requests for content $c$. Obviously, $\sum_{c \in \mathcal{C}} \lambda_c \le 1$. The traffic load absorbed by the P2P system, either via local services or via service from another box, is then upper-bounded by

$$\tilde\rho = \sum_{c \in \mathcal{C}} \Big( \rho_c \tilde m_c + \big[ \rho_c (1 - \tilde m_c) \big] \wedge \lambda_c \Big), \qquad (21)$$

where "$\wedge$" denotes the minimum operator. We will use this simple upper bound to identify an optimal placement strategy in the present Pure P2P Network scenario. To this end, we shall establish that our candidate placement strategy asymptotically achieves this performance bound, namely absorbs a load $\tilde\rho$ in the limit where $B$ tends to infinity.

To find the optimal strategy, we introduce a variable $x_c \triangleq [\rho_c (1 - \tilde m_c)] \wedge \lambda_c$ for all $c$. Note further that the fraction $\lambda_c$ is necessarily bounded from above by $\tilde m_c$, as only those boxes holding $c$ can devote their bandwidth to serving $c$. It is then easy to see that the quantity $\tilde\rho$ in (21) is no larger than the optimal value of the following linear programming problem:

**[OPT 2]**

$$\max_{\tilde m, \lambda, x} \sum_{c \in \mathcal{C}} (\rho_c \tilde m_c + x_c)$$

$$\text{s.t.} \quad \forall c \in \mathcal{C}, \quad 0 \le \tilde m_c \le 1, \quad 0 \le \lambda_c \le \tilde m_c;$$
$$\forall c \in \mathcal{C}, \quad 0 \le x_c \le \lambda_c, \quad x_c \le \rho_c (1 - \tilde m_c);$$
$$\sum_{c \in \mathcal{C}} \tilde m_c = M, \quad \sum_{c \in \mathcal{C}} \lambda_c \le 1.$$

The following theorem gives the structure of an optimal solution to OPT 2, and as a result suggests an optimal placement strategy.

_Theorem 2:_ Assume that $\{\hat\nu_c\}$ are ranked in descending order.
The following solution solves OPT 2: _Remark 2: The requests for the c[∗]_ most popular contents (“hot” contents and “warm” contents except content c[∗] + 1) incur zero loss, while the requests for the C c[∗] 1 least − − popular contents incur 100% loss. There is a partial loss in the requests for content c[∗] + 1 if [�]c[c]=[∗] M [m][˜] [c][ <][ 1][.] Note that the placement for “warm” contents looks like the “water-filling” solution in the problem of allocating transmission powers onto different OFDM channels to maximize the overall achievable channel capacity in the context of wireless communications [16]. ⋄ Under this placement strategy, the maximum upper bound on the absorbed traffic load reads c[∗] � c=M ρc 1 + ρc . � ρ˜ = c[∗] � ρc + (ρc∗+1 + 1) c=1 � 1 − - For 1 ≤ c ≤ M − 1, ˜mc = 1, λc = xc = 0. - For M ≤ c ≤ c[∗], ˜mc = λc = xc = ρc/(1 + ρc), where c[∗] satisfies that We then have the following corollary: _Corollary 1: Considering the large system limit B_, →∞ with fixed catalogue and associated normalized popularities { ˆνc} as considered in Subsection IV-B, the proposed “hotwarm-cold” placement strategy achieves an asymptotic fraction of absorbed load equal to the above upper bound ˜ρ, and is hence optimal in this sense. ⋄ _Proof: With the proposed placement strategy, hot (respec-_ tively, cold) contents never trigger accepted requests, since all incoming requests are handled by local service (respectively, rejected). For warm contents, because each box holds only one c[∗] � c=M ρc 1, but ≤ 1 + ρc c[∗]+1 � c=M ρc - 1. 1 + ρc ----- warm content, it can only handle requests for that particular warm content. As a result, the processes of ongoing requests for distinct warm contents evolve independently of one another. For a given warm content c, the corresponding number of ongoing requests behaves as a simple one-dimensional loss network with arrival rate νc(1 − m˜ c) and service capacity m˜ cBU . For c = M, . . ., c[∗], one has ˜mc = ρc/(1 + ρc) where ρc = νc/(BU ), so both the arrival rate and the capacity of the corresponding loss network equal ˜mcBU . The asymptotic acceptance probability as B then converges to 1 and →∞ the accepted load due to both local service and services from other boxes converges to ρc. For content c[∗] + 1 (if m˜ c∗+1 > 0), the corresponding loss network has arrival rate νc∗+1(1−m˜ c∗+1) and service capacity ˜mc∗+1BU . Then, in the limit B, the accepted load (due to both local services →∞ and services from other boxes) reads ρc∗+1 ˜mc∗+1 + ˜mc∗+1 (which is actually smaller than ρc[∗]+1). Summing the accepted loads of all contents yields the result. VI. LARGE CATALOGUE MODEL Keeping the many-user asymptotic, we now consider an alternative model of content catalogue, which we term the “large catalogue” scenario. The set of contents is divided C into a fixed number of “content classes”, indexed by i . ∈I In class i, all the contents have the same popularity (arrival rate) νi. The number of contents within class i is assumed to scale in proportion to the number of boxes B, i.e., class i contains αiB contents for some fixed scaling factor αi. We further define α ≜ [�]i [α][i][. With the above assumptions, the] system traffic load ρ in equation (6) reads ρ = [1] U � αiνi. (22) i∈I bottleneck? Is the proportional-to-product placement strategy still optimal under the large-catalogue scaling? _A. Necessity of Unbounded Storage_ We first establish that bounded storage will strictly constrain utilization of bandwidth resources. 
To this end we need the following lemma: _Lemma 1: Consider the system under large catalogue scal-_ ing, with fixed weights αi and cache size M per box. Define M [′] ≜ 2M/α . Then ⌈ ⌉ (i) More than half of the contents are replicated at most M [′] times, and (ii) For each of these contents, the loss probability is at least E(inf i νi, M [′]U ) > 0, where E(·, ·) is the Erlang function [7] defined as: −1 � C ν[n] � E(ν, C) ≜ [ν][C] � . C! n! n=1 ⋄ _Proof: We first prove part (i). Note that the total number_ of content replicas in the system equals BM . Thus, denoting by f the fraction of contents replicated at least M [′] + 1 times, it follows that fαB(M [′] + 1) BM, which in turn yields ≤ M M f ≤ α ( 2M/α + 1) 2M + α [<][ 1]2 [,] ⌈ ⌉ [≤] which implies statement (i). To prove part (ii), we establish the following general property for a loss network (equivalent to our original system) with call types j ∈J, corresponding arrival rates νj, and capacity (maximal number of competing calls) Cl on link ℓ for all ℓ . We use ℓ j to indicate that the route for calls of type ∈L ∈ j comprises link ℓ. Denoting the loss probability of calls of type j in such a loss network as pj, we then want to prove pj ≥ E(νj, Cj[′] [)][,] (23) where Cj[′] [≜] [min][ℓ][∈][j][ C][ℓ][, i.e., the capacity of the bottleneck] link on the route for calls of type j. Note that the RHS of the above inequality is actually the loss probability of a loss network with only calls of type j and capacity Cj[′] [. Fixing index][ j][, we define this loss network] as an auxiliary system and consider the following coupling construction which allows us to deduce inequality (23): Let Xk be the number of active calls of type k in the original system for all k, and let Xj[′] [denote the number of active calls of type] j in the auxiliary system. Initially, Xj(0) = Xj[′][(0)][. The non-] zero transition rates for the joint process ({Xk}k∈K, Xj[′] [)][ are] given by k ̸= j : Xk → Xk + 1 at rate νk � I{[�]k∋ℓ [X][k][<C][ℓ][}][,] ℓ∈j k ̸= j : Xk → Xk − 1 at rate Xk, (Xj, Xj[′] [)][ →] [(][X][j][ + 1][, X]j[′] [+ 1)] at rate νj[both], (Xj, Xj[′] [)][ →] [(][X][j][ + 1][, X]j[′][)] at rate νj[ori], (Xj, Xj[′] [)][ →] [(][X][j][, X]j[′] [+ 1)] at rate νj[aux], (Xj, Xj[′] [)][ →] [(][X][j][ −] [1][, X]j[′] [−] [1)] at rate Xj, + (Xj, Xj[′] [)][ →] [(][X][j][, X]j[′] [−] [1)] at rate �Xj[′] [−] [X][j]�, The primary motivation for this model is mathematical convenience: by limiting the number of popularity values we limit the “dimensionality” of the request distribution, even though we now allow for a growing number of contents. It can also be justified as an approximation, that would result from batching into a single class all contents with a comparable popularity. Such classes can also capture the movie type (e.g. thriller, comedy) and age (assuming popularity decreases with content age). We use ˆυi to denote the normalized popularity of content class i ∈I and it reads i∈I [υ][ˆ][i][ = 1][. It is reasonable to regard] [�] each ˆυi as fixed. ˆνi ≜ υˆi/(αiB) represents the normalized popularity of a specific content in class i, which decreases as the number of contents in this class αiB increases, since users now have more choices within each class. In practice, an online video provider company which uses the Distributed Server Network architecture adds both boxes and available movies of each type to attract more user traffic, under a constraint of a maximum tolerable traffic load ρ. 
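Since the Erlang function E(·, ·) just defined drives the bound in Lemma 1 (and reappears in the proof of Theorem 3), it may help to see how it is evaluated in practice. The sketch below uses the standard Erlang-B recursion, a well-known numerically stable method; the values plugged in at the end are made up purely for illustration.

```python
def erlang_b(nu: float, capacity: int) -> float:
    """Blocking probability E(nu, C) of an M/M/C/C loss system,
    computed with the standard stable recursion
    E(nu, 0) = 1,  E(nu, n) = nu*E(nu, n-1) / (n + nu*E(nu, n-1))."""
    e = 1.0
    for n in range(1, capacity + 1):
        e = nu * e / (n + nu * e)
    return e

# Illustration of Lemma 1's lower bound with made-up numbers:
# with min_i nu_i = 5 and M'U = 10, a content replicated at most
# M' times is rejected with probability at least
print(erlang_b(5.0, 10))   # ~0.018, strictly positive
```

In the terms of Corollary 2: with bounded M this value stays bounded away from zero, so at least half the contents keep a non-vanishing loss probability no matter how large B grows.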
B. Efficiency of Proportional-to-Product Placement

We consider the following "Modified Proportional-to-Product Placement": each of the M storage slots at a given box b contains a randomly chosen content. The probability of selecting one particular content c is νi/(ρBU) if it belongs to class i. In addition, we assume that the selections for all such MB storage slots are done independently of one another.

Remark 3: This content placement strategy can be viewed as a "balls-and-bins" experiment. All the MB cache slots in the system are regarded as balls, and all the |C| (= Σ_i αiB) contents are regarded as bins. We throw each of the MB balls at random among all the bins. Bin c (corresponding to content c which belongs to class i) will be chosen with probability νi/(ρBU). Alternatively, the resulting allocation can be viewed as a bipartite random graph connecting boxes to contents. ⋄
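A minimal sketch of this balls-and-bins placement follows, under a hypothetical two-class instance (every parameter value is an assumption chosen for illustration): it fills the MB slots independently with the stated probabilities, then checks that each content's replica count concentrates around νiM/(ρU) and that within-box duplicates are rare, in line with Lemma 2 below.

```python
import random
from collections import Counter

rng = random.Random(1)

# Hypothetical two-class instance (all values are assumptions).
B, M, U = 2000, 4, 1
alphas = [0.5, 2.0]                 # class i contributes alpha_i * B contents
nus = [3.0, 0.5]                    # per-content arrival rate in class i
rho = sum(a * n for a, n in zip(alphas, nus)) / U   # equation (22)

classes = [i for i, a in enumerate(alphas) for _ in range(int(a * B))]
slot_probs = [nus[i] / (rho * B * U) for i in classes]  # sums to 1 by (22)

replicas, boxes_with_duplicate = Counter(), 0
for _ in range(B):
    # One box: fill its M slots independently, proportional to slot_probs.
    cache = rng.choices(range(len(classes)), weights=slot_probs, k=M)
    boxes_with_duplicate += len(set(cache)) < M
    replicas.update(cache)

for i in range(len(alphas)):
    predicted = nus[i] * M / (rho * U)   # E[N_c] for a class-i content
    observed = [replicas[c] for c in range(len(classes)) if classes[c] == i][:5]
    print(f"class {i}: predicted {predicted:.2f} replicas, first counts {observed}")
print(f"boxes holding a duplicate: {boxes_with_duplicate} of {B}")
```

With M = 4 and min_i αiB = 1000, the condition M ≪ √((min_i αi)B) of Lemma 2 holds, so duplicates within a box should indeed be rare in this run.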
Note that this strategy differs from the "proportional-to-product" placement strategy proposed in Section IV, in that it allows for multiple copies of the same content at the same box. However, by the birthday paradox, we can prove the following lemma, which shows that up to a negligible fraction of boxes, the above content placement does coincide with the proportional-to-product strategy.

Lemma 2: Under the above content placement strategy, at a certain box, if M ≪ √((min_i αi)B), then

Pr(all the M cached contents are different) ≈ 1.   (24) ⋄

Proof: In the birthday paradox, if there are m people and n equally possible birthdays, the probability that all the m people have different birthdays is close to 1 whenever m ≪ √n. Here in our problem, at a certain box, the M cache slots are regarded as "people" and the |C| contents are regarded as "birthdays." Although the probability of picking one content is non-uniform, the probability of picking one content within a specific class is uniform. One can think of picking a content for a cache slot as a two-step process: with probability αiνi/Σ_j αjνj, a content in class i is chosen; then, conditioned on class i, a specific content is chosen uniformly at random among all the αiB contents in class i. Contents from different classes are obviously different. When M ≪ √(αiB), even if all the M cached contents are from class i, the probability that they are different is close to 1. Thus, M ≪ √((min_i αi)B) is sufficient for (24) to hold.

To prove that under this particular placement, inefficiency in bandwidth utilization vanishes as M → ∞, we shall in fact consider a slight modification of the "request repacking" strategy considered so far for determining which contents to accept:

Counter-Based Acceptance Rule
A parameter L > 0 is fixed. Each box b maintains at all times a counter Zb of associated requests. For any content c, the following procedure is used by the server whenever a request arrives: a random set of L distinct boxes, each of which holds a replica of content c, is selected. An attempt is made to associate the newly arrived request with all L boxes, but the request will be rejected if its acceptance would lead any of the corresponding box counters to exceed LU.

Remark 4: Note that in this acceptance rule, associating a request to a set of L boxes does not mean that the requested content will be downloaded from all these L boxes. In fact, as before, the download stream will only come from one of the L boxes, but here we do not specify which one is to be picked. It is readily seen that the above rule defines a loss network. Moreover, it is a stricter acceptance rule than the previously considered one. Indeed, it can be verified that when all ongoing requests have an associated set of L boxes whose counters are no larger than LU, there exist nonnegative integers Zcb such that Σ_{b: c∈Jb} Zcb = L·nc for all c ∈ C and Σ_{c: c∈Jb} Zcb ≤ LU for all b ∈ B; then feasibility condition (2) holds a fortiori. ⋄
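The rule above is simple to state in code. The sketch below is a hypothetical fragment (the data structures `holders` and `counters` are assumed helpers, not from the paper) showing only the accept/reject decision; which of the L boxes actually streams the content is deliberately left unspecified, as the preceding remark notes.

```python
import random

def try_accept(content, holders, counters, L, U, rng):
    """Counter-Based Acceptance Rule (sketch): sample L distinct boxes
    holding `content`; accept the request only if attaching it to all
    L of them keeps every box counter at or below L*U."""
    boxes = holders[content]
    if len(boxes) < L:
        return None                  # too few replicas to sample L distinct boxes
    chosen = rng.sample(boxes, L)
    if any(counters[b] + 1 > L * U for b in chosen):
        return None                  # rejection: some counter would exceed L*U
    for b in chosen:
        counters[b] += 1
    return chosen                    # remember the set, to decrement on completion

def complete(chosen, counters):
    # Release the association when the download finishes.
    for b in chosen:
        counters[b] -= 1
```

A caller would keep `holders` (content → list of boxes caching it) and `counters` (box → current counter Zb) in sync with the placement, and call `complete` with the set returned by `try_accept` once the service ends.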
We introduce an additional assumption, needed for technical reasons.

Assumption 1: A content which is too poorly replicated is never served. Specifically, a content must be replicated at least M^{3/4} times to be eligible for service. ⋄

Our main result in this context is the following theorem:

Theorem 3: Consider fixed M, αi, νi, and corresponding load ρ < 1. Then for a suitable choice of the parameter L, with high probability (with respect to the placement) as B → ∞, the loss network with the above "modified proportional-to-product placement" and "counter-based acceptance rule" admits a content rejection probability φ(M) for some function φ(M) decreasing to zero as M → ∞. ⋄

The interpretation of this theorem is as follows: the fraction of lost service opportunities, for an underloaded system (ρ < 1), vanishes as M increases. Thus, while Corollary 2 showed that M → ∞ is necessary for optimal performance, this theorem shows that it is also sufficient: there is no need for a minimal growth speed (e.g., M ≥ log B) to ensure that the loss rate becomes negligible. The proof is given in Appendix A.

VII. CONCLUSION

In peer-to-peer video-on-demand systems, the information of content popularity can be utilized to design optimal content placement strategies, which minimize the fraction of rejected requests in the system, or equivalently, maximize the utilization of peers' uplink bandwidth resources. We focused on P2P systems where the number of users is large. For the limited content catalogue size scenario, we proved the optimality of a proportional-to-product placement in the Distributed Server Network architecture, and proved the optimality of "Hot-Warm-Cold" placement in the Pure P2P Network architecture. For the large content catalogue scenario, we also established that proportional-to-product placement leads to optimal performance in the Distributed Server Network. Many interesting questions remain. To name only two, more general popularity distributions (e.g., Zipf) for the large catalogue scenario could be investigated; the efficiency of adaptive cache update rules such as the one discussed in Section IV-A, or classical alternatives such as LRU, in conjunction with a loss network operation, also deserves more detailed analysis.

REFERENCES
[1] J. M. Almeida, D. L. Eager, M. K. Vernon, and S. J. Wright. Minimizing delivery cost in scalable streaming content distribution systems. IEEE Transactions on Multimedia, 6(2):356–365, Apr. 2004.
[2] B. Bollobás. Modern Graph Theory. Springer, New York, 1998.
[3] Y. Boufkhad, F. Mathieu, F. de Montgolfier, D. Perino, and L. Viennot. Achievable catalog size in peer-to-peer video-on-demand systems. In Proc. of the Seventh International Workshop on Peer-to-Peer Systems (IPTPS), 2008.
[4] L. Breslau, P. Cao, L. Fan, G. Phillips, and S. Shenker. Web caching and Zipf-like distributions: Evidence and implications. In Proc. of IEEE INFOCOM, Mar. 1999.
[5] M. Draief and L. Massoulié. Epidemics and Rumours in Complex Networks. London Mathematical Society Lecture Note Series. Cambridge University Press, 2010.
[6] J. Kangasharju, K. W. Ross, and D. A. Turner. Optimizing file availability in peer-to-peer content distribution. In Proc. of IEEE INFOCOM, 2007.
[7] F. Kelly. Loss networks. The Annals of Applied Probability, 1(3):319–378, 1991.
[8] A. Klenke and L. Mattner. Stochastic ordering of classical discrete distributions. Advances in Applied Probability, 42(2):392–410, 2010.
[9] N. Laoutaris, V. Zissimopoulos, and I. Stavrakakis. On the optimization of storage capacity allocation for content distribution. Computer Networks, 47:409–428, 2003.
[10] M. Mitzenmacher and E. Upfal. Probability and Computing: Randomized Algorithms and Probabilistic Analysis. Cambridge University Press, 2005.
[11] K. Suh, C. Diot, J. Kurose, L. Massoulié, C. Neumann, D. Towsley, and M. Varvello. Push-to-peer video-on-demand system: Design and evaluation. IEEE Journal on Selected Areas in Communications, 25(9):1706–1716, 2007.
[12] B. R. Tan and L. Massoulié. Brief announcement: Adaptive content placement for peer-to-peer video-on-demand systems. In Proc. of the 29th Annual ACM SIGACT-SIGOPS Symposium on Principles of Distributed Computing (PODC), Jul. 2010.
[13] B. R. Tan and L. Massoulié. Optimal content placement for peer-to-peer video-on-demand systems. In Proc. of IEEE INFOCOM, Apr. 2011.
[14] S. Tewari and L. Kleinrock. On fairness, optimal download performance and proportional replication in peer-to-peer networks. In Proc. of IFIP Networking, 2005.
[15] S. Tewari and L. Kleinrock. Proportional replication in peer-to-peer networks. In Proc. of IEEE INFOCOM, 2006.
[16] D. Tse and P. Viswanath. Fundamentals of Wireless Communication. Cambridge University Press, 2005.
[17] V. Valancius, N. Laoutaris, L. Massoulié, C. Diot, and P. Rodriguez. Greening the Internet with nano data centers. In Proc. of the 5th International Conference on Emerging Networking Experiments and Technologies (CoNEXT), pages 37–48, 2009.
[18] J. Wu and B. Li. Keep cache replacement simple in peer-assisted VoD systems. In Proc. of IEEE INFOCOM Mini-Conference, 2009.
[19] X. Zhou and C.-Z. Xu. Efficient algorithms of video replication and placement on a cluster of streaming servers. Journal of Network and Computer Applications, 30(2):515–540, Apr. 2007.

APPENDIX

A. Proof of Theorem 3

The proof has five sequential stages:

1) The chance for a content to be "good"

Let Nc denote the number of replicas of a content c of class i. Then, Nc admits a binomial distribution with parameters (MB, νi/(ρBU)). We call content c a "good" content if |Nc − E[Nc]| < M^{2/3}, i.e.,

|Nc − νiM/(ρU)| < M^{2/3}.   (25)

As Nc = Σ_{i=1}^{MB} Zi, where the Zi ∼ Ber(p) (p ≜ νi/(ρBU)) are i.i.d., according to the Chernoff bound,

Pr(Nc ≥ M^{2/3} + νiM/(ρU)) ≤ e^{−MB·I(a)},   (26)

where a ≜ (M^{2/3} + νiM/(ρU))/(MB) and I(x) ≜ sup_θ {xθ − ln(E[e^{θZi}])} is the Cramér transform of the Bernoulli random variable Zi. Instead of directly deriving the RHS of inequality (26), which can be done but needs a lot of calculation (see Appendix G), we upper bound it using a much simpler approach here: for the same deviation, a classical upper bound on the Chernoff bound of a binomial random variable is provided by the Chernoff bound of a Poisson random variable which has the same mean (see e.g. [5]). Therefore, the RHS of inequality (26) can be upper bounded by

exp( −(νiM/(ρU)) · Î( 1 + ρU/(νi M^{1/3}) ) ),

where Î(x) is the Cramér transform of a unit-mean Poisson random variable, i.e., Î(x) = x log x − x + 1. By Taylor's expansion of Î(x) at x = 1, the exponent in the last expression is equivalent to

−(νiM/(ρU)) · ( (1/2)( ρU/(νi M^{1/3}) )² + o(M^{−2/3}) ) = −(ρU/(2νi)) M^{1/3} + o(M^{1/3}) = −Θ(M^{1/3}).

On the other hand, when M is large, −M^{2/3} + νiM/(ρU) ≥ 0 holds, hence we have

Pr(Nc ≤ −M^{2/3} + νiM/(ρU)) ≤ e^{−MB·Î(â)},   (27)

since the LHS equals Pr( Σ_{i=1}^{MB} Ẑi ≥ MB·â ), where (−Ẑi) ∼ Ber(p), â ≜ M^{−1/3}/B − p ∈ [−1, 0] when B is large, and it is easy to check that Î(â) = I(−â). Similarly as above, by upper bounding e^{−MB·I(−â)}, we can find that the exponent of the upper bound is also −Θ(M^{1/3}). Therefore,

Pr(content c is good) ≥ 1 − 2e^{−Θ(M^{1/3})}.   (28)

2) The number of "good contents" in each class

Denoting by Xi the number of good contents in class i, we want to use a corollary of the Azuma-Hoeffding inequality (see e.g. Section 12.5.1 in [10] or Corollary 6.4 in [5]) to upper bound the chance of its deviation from its mean. This corollary applies to a function f of independent variables ξ1, ..., ξn, and states that if the function changes by an amount no more than some constant c when only one component ξi has its value changed, then for all t > 0, Pr(|f(ξ) − E[f(ξ)]| ≥ t) ≤ 2e^{−2t²/(nc²)}.

Back to our problem, each independent variable ξj corresponds to the choice of a content to be placed in a particular memory slot at a particular box (we index a slot by j for 1 ≤ j ≤ MB), and f(ξ) corresponds to the number of good contents in class i based on the placement ξ, i.e., Xi = f(ξ). It is easy to see that in our case c = 1, hence we have Pr(|Xi − E[Xi]| ≥ t) ≤ 2e^{−2t²/(MB)} for all t > 0. Taking t = (MB)^{2/3} in the above inequality further yields

Pr( |Xi − E[Xi]| ≥ (MB)^{2/3} ) ≤ 2e^{−2(MB)^{1/3}}.

Thus, we have

Pr( Xi ≥ (1 − 2e^{−Θ(M^{1/3})}) · αiB − (MB)^{2/3} )
(a) ≥ Pr( Xi ≥ E[Xi] − (MB)^{2/3} )
≥ Pr( |Xi − E[Xi]| < (MB)^{2/3} )
≥ 1 − 2e^{−2(MB)^{1/3}},   (29)

where (a) holds since E[Xi] = Pr(content c is good) · αiB ≥ (1 − 2e^{−Θ(M^{1/3})}) · αiB. Note that in order for the lower bound on Xi shown in the above probability to be Θ(B), M ∼ o(B^{1/2}) is a sufficient condition.

3) The chance for a box to be "good"

We call a replica "good" if it is a replica of a good content, and use Ci to denote the number of good replicas of class i. We also call a box "good" if the number of good replicas of class i held by this box lies within αiνiM/(ρU) ± O(M^{2/3}). As we did for "good contents," we will also use the Chernoff bound to prove that a box is good with high probability. Let Ei represent the event that the number Xi of good contents within class i satisfies

Xi ≥ (1 − 2e^{−Θ(M^{1/3})}) αiB − (MB)^{2/3},   (30)

which has a probability of at least 1 − 2e^{−Ω((MB)^{1/3})}, according to inequality (29), when M ∼ o(B^{1/2}). Conditional on Ei, according to the lower bound in inequality (25) (i.e., the definition of "good contents") and inequality (30), we have

Ci ≥ ( (1 − 2e^{−Θ(M^{1/3})}) αiB − (MB)^{2/3} ) · ( νiM/(ρU) − M^{2/3} )
= MB · (αiνi/(ρU)) · ( 1 − O(M^{−1/3} + M^{2/3}B^{−1/3}) ).   (31)

On the other hand, from the upper bound in inequality (25) and the fact Xi ≤ αiB, we obtain that

Ci ≤ MB · (αiνi/(ρU)) · ( 1 + O(M^{−1/3}) ).   (32)

Conditional on Ei, to constitute a box, sample without replacement from the determined content replicas.
Denote the number of good replicas of class i stored in a particular box (say, box b) by ζi, which actually represents the number of good replicas in the M samples sampled without replacement from all the MB replicas, among which Ci are good ones (conditional on Ei). This means that, conditional on Ei, ζi follows a hypergeometric distribution H(MB, Ci, M ). It can be found that (see e.g. Theorem 1 in [8]) conditional on Ei, Hi ≤st ζi ≤st Gi. Here, “≤st” represents stochastic ordering, and from inequality (34), we further have Pr ζi − αiρUνiM ≥ O(M 2/3)� ����� ���� ≤ 2e[−][Θ(][M] [1][/][3][)] - Pr(Ei) + (1 − Pr (Ei)) = 1 − (1 − 2e[−][Θ(][M] [1][/][3][)]) Pr (Ei) 1 (1 2e[−][Θ(][M] [1][/][3][)])(1 2e[−][Ω((][MB][)][1][/][3][)]) ≤ − − − = 2e[−][Θ(][M] [1][/][3][)] 2e[−][Ω((][MB][)][1][/][3][)]. (35) − � Gi ∼ Bin M, [α][i][ν][i] ρU � Hi ∼ Bin M, [α][i][ν][i] ρU �1 + O(M [−][1][/][3])�[�], �1 O(M [−][1][/][3] + M [2][/][3]B[−][1][/][3])�[�], − where the second parameters of the distributions of Gi and Hi are determined according to inequalities (32) and (31) respectively. We will see why we need these two “binomial bounds” on ζi. By definition, Pr(box b is not good) = Pr �� i∈I αiνiM ζi − ρU ����� O(M 2/3)�[�] ≥ ���� ≤ � Pr ζi − αiρUνiM i∈I ����� where for all i, ∈I O(M 2/3)�, (33) ≥ ���� Putting inequality (35) back to inequality (33) immediately results in Pr(box b is good) 1 2 e[−][Θ(][M] [1][/][3][)]. (36) ≥ − |I| _4) The number of “good boxes”_ We use a similar approach as in Stage 2 to bound the number of good boxes, say Y, which can be represented as a function g(ξ) where ξ = (ξ1, ξ2, · · ·, ξMB) is the same content placement vector defined in Stage 2. Still, g(ξ) changes by an amount no more than 1 when only one component ξi has its value changed, then for all t > 0, Pr( Y E[Y ] t) | − | ≥ ≤ 2e[−][2][t][2][/][(][MB][)], and taking t = (MB)[2][/][3] further yields � Pr Y E[Y ] (MB)[2][/][3][�] 2e[−][2(][MB][)][1][/][3]. | − | ≥ ≤ Similarly as we obtain inequality (29), we finally come to � � Pr Y B 1 2 e[−][Θ(][M] [1][/][3][)][��] 1 2e[−][2(][MB][)][1][/][3]. ≥ − |I| ≥ − (37) _5) The performance of a loss network_ Finally, consider the performance of the loss network defined by the “Counter-Based Acceptance Rule.” We introduce an auxiliary system to establish an upper bound on the rejection rate. In the auxiliary system, upon arrival of a request for content c, L different requests are mapped to L distinct boxes holding a replica of c, but here they are accepted or rejected individually rather than jointly. Letting Zb (respectively, Zb[′][) denote the number of requests associated] to box b in the original (respectively, auxiliary) system, one readily sees that Zb ≤ Zb[′] [at all times and all boxes and for] each box b, the process Zb[′] [evolves as a one-dimensional loss] network. We now want to upper bound the overall arrival rate of requests to a good box: _(a) Non-good contents_ Assume that upon a request arrival, we indeed pick L content replicas, rather than L distinct boxes holding the requested content (as specified in the acceptance rule). This entails that, if two replicas of this content are present at one box, then this box can be picked twice. However, since a vanishing fraction of boxes will have more than one replicas of the same content when M ≪ �(mini αi)B (as proved in Lemma 2), we can strengthen the definition of a “good” box to ensure that, on top of the previous properties, a good box should hold M distinct replicas. 
It is easy to see that the Pr ζi − αiρUνiM ≥ O(M 2/3)� ����� ���� = Pr ζi − αiρUνiM ≥ O(M 2/3), Ei� ����� ���� + Pr ζi − αiρUνiM ≥ O(M 2/3), Eic� ����� ���� ≤ Pr ζi − αiρUνiM ≥ O(M 2/3) Ei� - Pr (Ei) ����� ���� ���� + Pr (Ei[c][)][ .] (34) By definition of stochastic ordering, Pr ζi − αiρUνiM ≥ O(M 2/3) Ei ����� ���� ���� � � ≤ Pr Gi ≥ [α][i]ρU[ν][i][M] + O(M [2][/][3]) � � � + Pr Hi ≤ [α][i]ρU[ν][i][M] − O(M [2][/][3]) (a) 2e[−][Θ(][M] [1][/][3][)], ≤ where (a) can be obtained using a similar Chernoff bounding approach as for Nc in Stage 1 of this proof. Thus, continuing ----- fraction of good boxes will still be of the same order as with the original weaker definition. With these modified definitions, consider one non-good content c of class i cached at a good box. Its unique replica will be picked with probability L/Nc when the sampling of L replicas among the Nc existing ones is performed. Thus, since we ignore requests for all content c with Nc ≤ M [3][/][4] (according to Assumption 1), the request rate will be at most νiLM [−][3][/][4]. Besides, there are at most O(M [2][/][3]) non-good content replicas held by one good box. The reason is as follows: By definition, a good box holds at least � � αiνiM O(M [2][/][3])� = M O(M [2][/][3]) (38) − − ρU i∈I good content replicas among all classes, so the remaining slots, being occupied by non-good content replicas, are at most O(M [2][/][3]). Therefore, the overall arrival rate of requests for non-good contents to a good box is upper bounded by νnon-good = O(M [2][/][3] - LM [−][3][/][4]) = O(LM [−][1][/][12]). (39) _(b) Good contents_ The rate generated by a good content c of class i is νiL/Nc. Now, by definition of a good content, one has: Nc ≥ [ν]ρU[i][M] [(1][ −] [O][(][M][ −][1][/][3][))][.] network, say E(λ, C), as a certain conditional probability of S Poi(λ), i.e., ∼ E(λ, C) = Pr(S = C S C) = [Pr(][S][ =][ C][)] | ≤ Pr(S C) [.] ≤ Using the Chernoff bound, we have Pr(S C) e[−][λI][(][C/λ][)], ≥ ≤ where I(x) = x log x x + 1, hence − Pr(S C) e[−][λI][(][C/λ][)] ≥ E(λ, C) ≤ 1 − Pr(S ≥ C) [≤] 1 − e[−][λI][(][C/λ][)][ .] This entails that the rate of requests for this content is upper bounded by ρLU M [(1 +][ O][(][M][ −][1][/][3][))][.] By definition of a “good box,” there are at most αiνiM/ρU + O(M [2][/][3]) good content replicas of class i cached in this good box. Therefore, the overall arrival rate of requests for good contents to a good box is upper bounded by νgood = � i∈I � ρLU � M [(1 +][ O][(][M][ −][1][/][3][))] � αiνiM + O(M [2][/][3])� × ρU = (ρLU )(1 + O(M [−][1][/][3])). (40) To conclude, for any good box b, the process Zb[′] [evolves] as a one-dimensional loss network with arrival rate no larger than ν = νnon-good + νgood = ρLU + O(LM [−][1][/][12]), by combining the two results in (39) and (40). Next, we are going to upper bound the loss probability of Zb[′][. Since][ ν][ is an upper bound on the arrival rate, the] probability that Zb[′] [=][ LU][ is upper bounded by][ E][(][ρLU][ +] O(LM [−][1][/][12]), LU ). One can actually further upper bound this Erlang function by e[−][Θ(][L][)]. To see this, let us first rewrite the loss probability (Erlang function) of a general 1-D loss Back to the Erlang function in our problem, I(C/λ) = I((ρ + O(M [−][1][/][12]))[−][1]), hence, Pr(Zb[′] [=][ LU] [)][ ≤] [E][(][ρLU][ +][ O][(][LM][ −][1][/][12][)][, LU] [)][ ≤] [e][−][Θ(][L][)][,] (41) where the second inequality holds under the assumption that ρ < 1 (otherwise, the exponent will become 0 or +Θ(L)). 
The number of good replicas in good boxes is, due to inequality (37) and equation (38), at least MB(1 O(M [−][1][/][3])), − with a high probability (at least 1 2e[−][2(][MB][)][1][/][3]). On the other − hand, the total number of replicas of good contents is at most MB, which is the total number of replicas (or available cache slots). Now pick some small ǫ (0, 1/3) and let X[˜] denote the ∈ number of good contents which have at least M [2][/][3+][ǫ] replicas outside good boxes. Then necessarily, with a probability of at least 1 2e[−][2(][MB][)][1][/][3], − XM˜ [2][/][3+][ǫ] MB MB(1 O(M [−][1][/][3])) = O(BM [2][/][3]), ≤ − − i.e., X[˜] O(BM [−][ǫ]). According to inequality (29), the total ≤ number of good contents is Θ(B) (specifically, very close to = αB) with a probability of at least 1 2 e[−][2(][MB][)][1][/][3], |C| − |I| hence we can conclude that, with high probability, for a fraction of at least 1 O(M [−][ǫ]) of good contents, each of them − has at least a fraction 1 O(M [−][1][/][3+][ǫ]) of its replicas stored − in good boxes (since a good content has νi ρU [M][ ±][ O][(][M][ 2][/][3][)] replicas in total by definition). We further use [˜] to represent C the set of such contents. Recall that Ac was defined in Subsection IV-B as the steadystate probability of accepting a request for content c in the original system. For all c, ∈ C[˜] Ac ≥ Pr(all the L sampled replicas are in good boxes) × Pr(Zb < LU, ∀b s.t. box b is sampled) (a) L �1 O(M [−][1][/][3+][ǫ])� ≥ − × Pr(Zb[′] [< LU,][ ∀][b s.t.][ box][ b][ is sampled][)][.] (b) L �1 O(M [−][1][/][3+][ǫ])� �1 Le[−][Θ(][L][)][�] . ≥ − - − (42) Here, (b) is obtained according to inequality (41). The argument why (a) holds is as follows: We have Nc ≈ νiM/(ρU ) replicas (assuming that content c is of class i), among which Nc[′] [=][ N][c][(1][ −] [O][(][M][ −][1][/][3+][ǫ][))][ are in good boxes. Then, the] ----- probability that L samples fall in the good boxes can be written explicitly as Nc[′][(][N][ ′]c [−] [1)][ · · ·][ (][N][ ′]c [−] [L][ + 1)] Nc(Nc − 1) · · · (Nc − L + 1) [,] which can be approximated as the first part on the RHS we write above, under the assumption that L M . The second ≪ part is due to the fact that Zb[′] [≤] [Z][b][ for all box][ b][.] It should be recalled that within this stage of proof, finally coming to inequality (42) actually needs everything to be conditional on the following events: - The number of good boxes is Θ(B); - The number of good contents is Θ(B); - A box caches M distinct replicas, and as B, M →∞ and M ≪ �(mini αi)B, all of them p have high probabilities. Additionally, [˜] as B, M . C →C →∞ Therefore, further letting L but keeping L M [1][/][3][−][ǫ], →∞ ≪ we will find that the RHS of inequality (42) is approximated as 1 O(LM [−][1][/][3+][ǫ]) Le[−][Θ(][L][)] 1, − − ≈ and then conclude that the requests for almost all the contents will have near-zero loss. _B. Proof of Equivalence between Feasibility Conditions (1)_ _and (2)_ _1) Sufficiency of Condition (2): We use Hall’s theorem to_ prove the sufficiency. **[Hall’s theorem] Suppose J = {J1, J2, · · · } is a collection of** sets (not necessarily countable). A SDR (“System of Distinct Representatives”) for J is defined as X = {x1, x2, · · · }, where xi ∈ Ji. Then, there exists a SDR (not necessarily unique) iff. meets the following condition: J , � A . (43) ∀T ⊆J |T | ≤| | A∈T ⋄ In our P2P VoD system, denote the content set as = C {c1, c2, · · ·, cN }. 
Given the ongoing download services of each content {ni}i[N]=1[, we get a “distinguishable content set”] C¯ = {c[(1)]1 [, c]1[(2)][,][ · · ·][, c]1[(][n][1][)]; c[(1)]2 [, c]2[(2)][,][ · · ·][, c]1[(][n][2][)]; · · · ; c[(1)]N [, c]N[(2)][,][ · · ·][, c]N[(][n][N] [)]}, where c[(]i[k][)] represents the k-th download service of content i for 1 ≤ k ≤ ni, and has its “potential connection set” Ji[(][k][)] = {lb[(][j][)] : 1 ≤ j ≤ U, ci ∈ b, b ∈B}, i.e., the set of all the connections of those boxes which have content ci. A collection of the “potential connection sets” for all {c[(]i[k][)]} is then J = {J1[(1)][, J]1[(2)][,][ · · ·][, J]1[(][n][1][)]; · · · ; JN[(1)][, J]N[(2)][,][ · · ·][, J]N[(][n][N] [)]}, and a SDR for is S X = {x[(1)]1 [, x]1[(2)][,][ · · ·][, x]1[(][n][1][)]; · · · ; x[(1)]N [, x]N[(2)][,][ · · ·][, x][(]N[n][N] [)]}, s.t. x[(]i[k][)] ∈ Ji[(][k][)], which means each c[(]i[k][)] is affiliated with a distinct connection (i.e., a feasible solution in our model). Now we want to prove the existence of such a SDR, i.e., to prove equation (43). For, there is a one-to-one ∀T ⊆J mapping between and a [¯] . Further, this [¯] can be T S ⊆ C[¯] S mapped to a where S ⊆C S = {ci : ∃1 ≤ k ≤ ni, s.t. c[(]i[k][)] ∈ S}[¯], i.e., is the set of all contents considered in [¯] without S S considering multiple services of each content. Then,, ∀T ⊆J RHS = | � Ji[(][k][)]| = � U Ji[(][k][)]∈T b:∃ci∈S s.t. ci∈b and Therefore, if = U |{b ∈B : S ∩Jb ̸= ∅}| LHS = |T | = |S| ≤[¯] � ni. ci∈S ∀S ⊆C, � ni ≤= U |{b ∈B : S ∩Jb ̸= ∅}| ci∈S holds, then equation (43) holds. The sufficiency is proved. _2) Necessity of Condition (2): For any_, S ⊆C � nc = � � Zcb = � � Zcb c∈S c∈S b:c∈Jb b: ∃c∈S c∈S∩Jb s.t. c∈Jb (a) ≤ � U = U |{b ∈B : S ∩Jb ̸= ∅}|, b: ∃c∈S s.t. c∈Jb where the inequality (a) is due to the second constraint in condition (1). Hence, the necessity is proved. _C. Approximation to Proportional-to-Product Placement Us-_ _ing Bernoulli Sampling_ An alternative sampling strategy to get the proportional-toproduct placement is as follows: To push contents to box b (1 b B), the server will ≤ ≤ 1. Generate C independent Bernoulli random variables Xc ∼ Ber(pc) for all c ∈C, where pc = βνˆc/(1 + βνˆc), νˆc is the normalized version of νc, and β is a customized constant parameter. 2. If [�]c∈C [X][c][ =][ M][ (which means a valid cluster of size] M is generated), push content c to box b if Xc = 1; Otherwise, go back to Step 1. We now analyze why this scheme works: after generating a valid size-M subset, the probability that this subset is a certain ----- subset Gj equals Pr(Xc = 1, ∀c ∈Gj ; Xc = 0, ∀c ̸∈Gj| � Xc = M ) c∈C �c∈Gj [p][c][ ·][ �]c̸∈Gj [(1][ −] [p][c][)] = Pr([�]c∈C [X][c][ =][ M] [)] = � pc � �c∈C [p][c] c∈Gj 1 − pc Pr([�]c∈C [X][c][ =][ M] [)] � = � νˆc/Z, c∈Gj where Z = Pr([�]c∈C [X][c][ =][ M] [)][/][(][β][M][ �]c∈C [p][c][)][, which actu-] ally equals the normalizing factor for [�]c∈Gj [ν][ˆ][c][.] We then consider the computational complexity of this approximation algorithm. Assuming that {νˆc} is sorted in the descending order, we have C � (1 − pc) c=M+1 Pr(� Xc = M ) ≥ c∈C M � pc · c=1 = �Cc�=1Mc[(1 +]=1 [β][ β][ν][ˆ][c][ν][ˆ][c][)] ≜ P [∗]. So the computational complexity is upper bounded by O(BC/P [∗]). Note that the constant parameter β can be adjusted to get a higher Pr([�]c∈C [X][c][ =][ M] [)][ in order to reduce] computational complexity. To achieve this, we can just choose a β which maximizes its lower bound P [∗], so ∂ log P [∗] = [M] ∂β β [−] C � c=1 νˆc = 0. (44) 1 + βνˆc _D. 
Detailed Implementation in the Simulations_ _1) A Heuristic Repacking Algorithm: We first describe the_ concept of “repacking.” When the cache size M = 1, all the bandwidth resources at a certain box belongs to the content the box caches. When M 2, however, this is not the case: ≥ all the contents cached in one box are actually competitors for the bandwidth resources at that box. Let’s consider a simple example in which B = 2, M = 2 and U = 1: Box 1 which caches content 1 and 2 is serving a download of content 2, while box 2 which caches content 2 and 3 is idle. When a request for content 1 comes, the only potential candidate to serve it is box 1, but since the only connection is already occupied by a download of content 2, the request for content 1 has to be rejected. However, if this ongoing download can be “forwarded” to the idle box 2, the new request can be satisfied without breaking the old one. We call this type of forwarding “repacking.” In the the feasibility condition (1) and its equivalent form (2), we actually allow perfect repacking to identify a feasible {nc}. In a real system, perfect repacking needs to enumerate all the possible serving patterns and choose the best one based on some criterion, which is usually computationally infeasible. We then propose a heuristic repacking algorithm which is not so complex but can achieve similar functionality and improve performances, although imperfect. Several variables need to be defined before we describe the algorithm: - nc: the system-wide ongoing downloads of content c, which does not count the downloads from the server. - Bc[k][: The set of boxes which have content][ c][ (“potential] candidate boxes”) and k free connections, for 0 k U . ≤ ≤ - Dc: number of boxes which has content c. Dc = �Uk=0 [|B]c[k][|][.] - ub: a U -dimensional vector, of which the i-th component represents the content box b is using its i-th connection to upload (a value 0 represents a free connection). - co: the “orphan content” which is affiliated with a new request or an ongoing download but has not been assigned with any box. - Co: the set of contents which has once been chosen as orphan contents. - tR: the number of repacking already done. Note that when choosing a box to serve a request, load balancing is already considered, which to some extent reduces the chance of necessary repacking in later operations. However, repacking is still needed for an incoming request for content c as soon as ∪k>0Bc[k] [=][ ∅][.] **Repacking Algorithm** After getting a request for content c while ∪k>0Bc[k] [=][ ∅][, the] server 1. Initialize co := c, Co := {c}, and tR := 0. 2. Let C[¯] = {c[′] : nc′/Dc′ > nco/Dco and c[′] ̸∈Co}, i.e., a set of contents which haven’t become orphans during The server can use any numerical methods (e.g., Newton’s method) to seek a root of equation (44). In fact, this lower bound P [∗] on Pr([�]c∈C [X][c][ =][ M] [)][ is not tight, since it is just] the largest item in the sum expression. When the popularity is close to uniformness (e.g., in a zipf-like distribution, α is small), this largest item is no longer dominant, so the lower bound P [∗] is quite untight, which means we actually overestimate the computation complexity by only evaluating its upper bound. However, this will not affect the real gain we obtain after choosing the optimal β according to equation (44). Recall that we also proposed a simple sampling strategy in Section IV-A. 
It is easy to see that when some contents are much more popular than the others (e.g., zipf-like α is large), the probability that duplicates appear in one size-M sample is high, hence largely increases the number of resampling. Thus, it would be faster if we choose the Bernoulli sampling. However, when the popularity is quite uniform, the simple sampling works very well. An extreme case is that under the uniform popularity distribution, M−1 � i=1 � 1 − [i] C Pr a valid size-M subset = { } �MC � - M ! = C[M] � , which shows that when C is large, you can get a valid sample almost every time. ----- this repacking process and of which the utilization factor (may be larger than 1) is larger than that of the current orphan content co. If C[¯]o = ∅, regard co as a loss and TERMINATE. 3. Choose c[∗] = arg maxc′∈C¯{nc′/Dc′}. Uniformly pick one (box, connection) pair from {(b, i) : b ∈Bc[0][, c][∗] [is the][ i][-th component of][ u][b][}][.] 4. Use the chosen box b and its i-th connection to continue uploading the remaining part of content co. At the same time, c[∗] which was served using that connection becomes a new orphan, i.e., co := c[∗]. Update ub and {nc}. Set tR := tR + 1. 5. If ∪k>0Bc[k]o [̸][=][ ∅][, i.e., there exists a free connection to] serve the new co, then use the load-balancing-based box selection rule to select a box to continue uploading the remaining part of co. The repacking process is perfect (no remaining orphan) and TERMINATE. Otherwise, - If tR = t[max]R, a customized algorithm parameter (0 ≤ t[max]R ≤ C), regard co as a loss and TERMINATE. - Otherwise, set Co := Co + {co}, and go to Step 2. _2) A Practical Issue in Cache Update: When a box b is_ chosen for cache update (and it does not hold the content c corresponding to the request), it might still be uploading content c[′] which is to be replaced. This fact is not captured by the Markov chain model. In practice, those ongoing services must be terminated. Since we have introduced the repacking scheme, they become “orphans” ready for repacking. We implement the procedure as follows: 1. Rank these orphans by their remaining service time in the ascending order, i.e., the original download which is sooner to be completed is given higher priority. 2. Do repacking one by one until one orphan fails to be repacked. Note that here the repacking algorithm starts from Step 5, since there may already be some boxes with both content c and free connections. _E. Proof of Theorem 2_ The Lagrangian of OPT 2 is L( ˜m, λ, x; u, v, y, z, w, η, γ) The KKT condition includes the feasible set defined in OPT 2 and the following: ∂L ∂xc = 1 − yc − zc = 0, ∀c; ∂L ∂m˜ c = ρc − uc + vc − ρczc − γ = 0, ∀c; ∂L ∂λc = −vc + yc − η + wc = 0, ∀c; uc( ˜mc − 1) = 0, uc ≥ 0, ∀c; vc(λc − m˜ c) = 0, vc ≥ 0, ∀c; yc(xc − λc) = 0, yc ≥ 0, ∀c; zc(xc − ρc + ρc ˜mc) = 0, zc ≥ 0, ∀c; wcλc = 0, wc ≥ 0, ∀c. We then put the solution stated in the theorem into KKT condition to check whether the condition is satisfied. The analysis is as follows: - For 1 ≤ c ≤ M − 1, since ˜mc = 1 and λc = xc = 0, we obtain that vc = 0, yc + zc = 1, ρc(1 − zc) = uc + γ, and yc = η − wc. Letting wc = 0, we further have: uc = ρcη−γ, yc = η, zc = 1−η. To keep uc, yc, zc ≥ 0, we must have η ∈ [0, 1] and γ ≤ ρcη, for 1 ≤ c ≤ M −1. Thus, since {ρc} are also ranked in the descending order, we have γ ≤ ρM −1η. (45) - For M ≤ c ≤ c[∗], since ˜mc = λc = xc = ρc/(1 + ρc), we obtain that uc = wc = 0, yc + zc = 1, ρc(1 − zc) = γ − vc, yc = η + vc. 
We further have: vc = [γ][ −] [ρ][c][η] ρc + 1 [, y][c][ =][ η]ρc[ +] + 1[ γ] [, z][c][ = 1][ −] ρ[η]c[ +] + 1[ γ] [.] To keep vc, yc, zc ≥ 0, we must have ρcη ≤ γ ≤ ρc + 1 η, for M c c[∗]. Thus, − ≤ ≤ ρM η ≤ γ ≤ ρc∗ + 1 − η. (46) - For c = c[∗] + 1, when mc = 0, it degenerates to the next case. When ˜mc > 0, since ˜mc = λc = xc < ρc(1 − m˜ c), we obtain that uc = wc = zc = 0, yc = 1, ρc + vc = γ, η + vc = 1. We further have γ = ρc∗+1 + 1 − η. (47) - For c[∗] +2 ≤ c ≤ C, since ˜mc = λc = xc = 0, we obtain that uc = zc = 0, yc = 1, vc = γ−ρc, wc = η+vc−1 = η + γ − ρc − 1. To keep vc, wc ≥ 0, and due to the fact that η ∈ [0, 1], we must have γ ≥ ρc, for c[∗] +2 ≤ c ≤ C. Thus, γ ≥ ρc[∗]+2. (48) For inequalities (45), (46), (48) and equation (47) to hold simultaneously, we can choose a η which satisfies ρc∗+1 + 1 ρM −1 + 1 [≤] [η][ ≤] [ρ][c]ρ[∗]M[+1] + 1[ + 1] [,] = � c∈C � ρc ˜mc + xc − uc( ˜mc − 1) − vc(λc − m˜ c) � −yc(xc − λc) − zc(xc − ρc + ρc ˜mc) + wcλc γ − . η − �� λc − 1 c∈C � �� m˜ c − M c∈C � which also satisfies η [0, 1]. Therefore, the theorem is ∈ proved. ----- m˜ cIt should be mentioned that when∗+1 = 0, the case “c = c[∗] + 1” can be combined with[�]c[c]=[∗] M [m][˜] [c][ = 1][, i.e.,] the next case “c[∗] + 2 c C”, hence equation (47) does not ≤ ≤ exist while inequality (48) is changed to γ ≥ ρc[∗]+1. Then, we can just choose a η which satisfies 0 η ≤ ≤ [ρ][c][∗][+1][ + 1] ρM + 1 [.] _F. Storage of Segments and Parallel Substreaming_ then within one stream, some substreams may complete earlier than the others. Therefore, the above equality needs to be added as a constraint (and used to come up with the following result), i.e., the bandwidth for the K substreams should be reserved until the whole streaming is completed. Then, in the proof of the optimality of “proportional-toproduct” placement for DSN, every expression keeps the same, except that the feasibility constraint (10) is changed to � x(cB) ≤ � mjBUK, (51) c:θ∈c j:j∩S̸=∅ We have mentioned before that compared to the “storage of complete contents and downloads by single streaming” setting, a more widely used mechanism in practice is that each box stores one specific segment of a video content and a download (streaming) comprises parallel substreaming from different boxes. To model this mechanism, we have the following simplifying assumptions: Each content is divided into K segments with equal length which are independently stored. Each box can store up to M segments (actually it does not matter if we keep the original storage space of each box, i.e., M complete contents, which now can hold MK segments, since the storage space is a customized parameter) and these M segments do not necessarily belong to M distinct contents. The bandwidth of each box is kept as U, so now each box can accommodate UK parallel substreaming, each with download rate 1/K (the average service duration is still kept as 1 because each segment is 1/K of the original content length). The definition of “traffic load” ρ is then the same as in equation (6). A request for a content will be divided into subrequests submitted to the boxes holding those corresponding segments of this content, generating K parallel substreaming flows in total (one box can serve more than one substreaming service for this request if it caches more than one distinct segments of this content). Let θ represent a segment and θ c indicate that θ is a ∈ segment of content c. 
Recall that we use nc to denote the number of concurrent downloads (now called “streams”) of content c in the network. We further use nθ to denote the number of substreams corresponding to segment θ. Now the original feasibility constraint (1) becomes � zθb = nθ, ∀ θ ∈ Θ; b: θ∈Jb � zθb ≤ UK, ∀ b ∈B, (49) θ: θ∈Jb where Θ represents the whole set of segments and zθb denotes the the number of concurrent substreams downloading segment θ from box b. It is easy to see that the equivalent version which can be proved by Hall’s theorem becomes: ∀S ⊆ Θ, � nθ ≤ KU |{b ∈B : S ∩Jb ̸= ∅}|, (50) θ∈S where with a little abuse of notation, is used to denote a S subset of Θ, instead of as before. C Since we have assumed that video duration and video streaming rate are all the same, one naturally has nθ = nc for all θ c. If we let randomness exist in the service duration, ∈ Θ, � ∀S ⊆ θ∈Θ and the “proportional-to-product” placement {mj} is now with respect to each segment, i.e., mj = [�]θ∈j [ν][ˆ][θ][/Z][ for] all j Θ s.t. j = M, where Z is the normalizing ⊆ | | constant and ˆνθ = ˆνc if θ ∈ c. With an observation that �θ∈Θ [ν][ˆ][θ][ =][ K][ �]c∈C [ν][ˆ][c][ =][ K][, we can still come to an] inequality same with inequality (17), except that c and are C replaced by θ and Θ respectively. All the succeeding steps are exactly the same in the proof of optimality. _G. Another Approach to Bound the Chance of “Good Con-_ _tents” in Proving Theorem 3_ At the first stage of proving Theorem 3, we mentioned that we can also directly derive the Chernoff bound on the RHS of inequality (26) to get the result. The derivation is given below: Recall that I(x) = supθ{xθ − ln(E[e[θZ][i]])} is the Cram´er transform of the Bernoulli random variable Zi. It is easy to check that I(x) = � � x � � 1−x � a ln p + (1 − x) ln 1−p if x ∈ [0, 1] + else ∞ Also recall that a ≜ �M [2][/][3] + [ν]ρU[i][M] � /MB = M [−][1][/][3]/B + p, where p ≜ ρBUνi [. Since we are considering a large][ B][,][ a][ ∈] [[0][,][ 1]] holds. Thus, denoting ¯p = 1 p for brevity, the exponent of − RHS of inequality (26) reads MB I(a) − � 1 � = (pMB + M [2][/][3]) ln 1 + − pM [1][/][3]B � 1 � (¯pMB M [2][/][3]) ln 1 − − - − pM¯ [1][/][3]B pMB = + − [pMB][ +][ M][ 2][/][3] pM [1][/][3]B 2(pM [1][/][3]B)[2] pMB¯ + [pMB][¯] [ −] [M][ 2][/][3] + pM¯ [1][/][3]B 2(¯pM [1][/][3]B)[2][ +][ o][(][M][ 1][/][3][)] = − [M][ 1][/][3] 2B = − [M][ 1][/][3] 2 � ρU 1 + νi B(1 − ρU[ν][i] [)] � 1 p [+ 1]p¯ � + o(M [1][/][3]) � + o(M [1][/][3]) = Θ �M [1][/][3][�] . (52) − With similar steps as above, we can show the exponent exponent of the RHS of inequality (27) is also Θ �M [1][/][3][�]. − Therefore, inequality (28) is proved. -----
{ "disclaimer": "Notice: The following paper fields have been elided by the publisher: {'abstract', 'references'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1109/INFCOM.2011.5935250?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1109/INFCOM.2011.5935250, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "GREEN", "url": "https://arxiv.org/pdf/1004.4709" }
2,010
[ "JournalArticle" ]
true
2010-04-27T00:00:00
[]
29,301
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Medicine", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Engineering", "source": "s2-fos-model" }, { "category": "Environmental Science", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/028a57e04bdebfd6e0de0b6b95c1070f6a0b1fd8
[ "Computer Science", "Medicine" ]
0.882433
Neural Network Ensembles for Sensor-Based Human Activity Recognition Within Smart Environments
028a57e04bdebfd6e0de0b6b95c1070f6a0b1fd8
Italian National Conference on Sensors
[ { "authorId": "79805155", "name": "Naomi Irvine" }, { "authorId": "51071379", "name": "C. Nugent" }, { "authorId": "2108431596", "name": "Shuai Zhang" }, { "authorId": "2155654884", "name": "Hui Wang" }, { "authorId": "38218996", "name": "Wing W. Y. Ng" } ]
{ "alternate_issns": null, "alternate_names": [ "SENSORS", "IEEE Sens", "Ital National Conf Sens", "IEEE Sensors", "Sensors" ], "alternate_urls": [ "http://nbn-resolving.de/urn/resolver.pl?urn=urn:nbn:ch:bel-142001", "http://www.mdpi.com/journal/sensors", "https://www.mdpi.com/journal/sensors" ], "id": "3dbf084c-ef47-4b74-9919-047b40704538", "issn": "1424-8220", "name": "Italian National Conference on Sensors", "type": "conference", "url": "http://www.e-helvetica.nb.admin.ch/directAccess?callnumber=bel-142001" }
In this paper, we focus on data-driven approaches to human activity recognition (HAR). Data-driven approaches rely on good quality data during training, however, a shortage of high quality, large-scale, and accurately annotated HAR datasets exists for recognizing activities of daily living (ADLs) within smart environments. The contributions of this paper involve improving the quality of an openly available HAR dataset for the purpose of data-driven HAR and proposing a new ensemble of neural networks as a data-driven HAR classifier. Specifically, we propose a homogeneous ensemble neural network approach for the purpose of recognizing activities of daily living within a smart home setting. Four base models were generated and integrated using a support function fusion method which involved computing an output decision score for each base classifier. The contribution of this work also involved exploring several approaches to resolving conflicts between the base models. Experimental results demonstrated that distributing data at a class level greatly reduces the number of conflicts that occur between the base models, leading to an increased performance prior to the application of conflict resolution techniques. Overall, the best HAR performance of 80.39% was achieved through distributing data at a class level in conjunction with a conflict resolution approach, which involved calculating the difference between the highest and second highest predictions per conflicting model and awarding the final decision to the model with the highest differential value.
# sensors

_Article_

## Neural Network Ensembles for Sensor-Based Human Activity Recognition Within Smart Environments

**Naomi Irvine** **[1,]***, **Chris Nugent** **[1]**, **Shuai Zhang** **[1]**, **Hui Wang** **[1]** **and Wing W. Y. NG** **[2]**

1 School of Computing, Ulster University, Co. Antrim, Northern Ireland BT37 0QB, UK; cd.nugent@ulster.ac.uk (C.N.); s.zhang@ulster.ac.uk (S.Z.); h.wang@ulster.ac.uk (H.W.)
2 School of Computer Science and Engineering, South China University of Technology, Guangzhou 510640, China; wingng@ieee.org
***** Correspondence: irvine-n2@ulster.ac.uk

Received: 13 November 2019; Accepted: 21 December 2019; Published: 30 December 2019

**Abstract: In this paper, we focus on data-driven approaches to human activity recognition (HAR).** Data-driven approaches rely on good quality data during training, however, a shortage of high quality, large-scale, and accurately annotated HAR datasets exists for recognizing activities of daily living (ADLs) within smart environments. The contributions of this paper involve improving the quality of an openly available HAR dataset for the purpose of data-driven HAR and proposing a new ensemble of neural networks as a data-driven HAR classifier. Specifically, we propose a homogeneous ensemble neural network approach for the purpose of recognizing activities of daily living within a smart home setting. Four base models were generated and integrated using a support function fusion method which involved computing an output decision score for each base classifier. The contribution of this work also involved exploring several approaches to resolving conflicts between the base models. Experimental results demonstrated that distributing data at a class level greatly reduces the number of conflicts that occur between the base models, leading to an increased performance prior to the application of conflict resolution techniques. Overall, the best HAR performance of 80.39% was achieved through distributing data at a class level in conjunction with a conflict resolution approach, which involved calculating the difference between the highest and second highest predictions per conflicting model and awarding the final decision to the model with the highest differential value.

**Keywords: human activity recognition; neural networks; ensemble neural networks; model conflict** resolution; smart environments

**1. Introduction**

Human Activity Recognition (HAR) is a challenging and dynamic research field that has been attracting significant interest in recent years [1], as human activities are intricate and highly diverse. Particularly, sensor-based approaches to HAR have become prevalent in pervasive computing, largely due to advancements with sensing technologies and wireless sensor networks. HAR is a fundamental component in an extensive range of application areas, including connected health, pervasive computing, surveillance systems, human computer interaction (HCI), and ambient assisted living (AAL) in smart home settings. Other notable interest domains include human/object detection and recognition based on object analysis and processing, for example, tracking and detection [2,3], computer engineering [4], physical sciences [5], health-related issues [6], natural sciences, and industrial academic areas [7].
Notably, the progression of AAL technologies is becoming vital, due to the continuously increasing cost of healthcare provision, the aging population, and the need to support "aging in place". In this domain, several dedicated smart home projects have been aimed at AAL for the elderly and disabled, for example CASAS [8], Gator Tech [9], MavHome [10], DOMUS [11], and Aware Home [12]. These environments all employ a large number of sensors that capture activity data via a range of sensor modalities. They possess the common aim of supporting smart home inhabitants in carrying out activities of daily living (ADLs) and providing them with non-intrusive, AAL environments to promote their independence and quality of life. ADL monitoring in smart environments is an important aspect to consider for assessing the health status of inhabitants, therefore the automatic detection of these activities is a significant motivation for conducting HAR research [13]. Various sensors are available for the purpose of image object capturing and processing, including binary sensors, digital cameras, and depth data in image analysis fields [14,15].

Sensor-based approaches to HAR can be deemed generally within two categories: data-driven or knowledge-driven. Data-driven approaches make use of datasets to learn activity models through applying machine learning and data mining techniques [16], whereas knowledge-driven approaches build activity models through exploiting rich prior knowledge in the domain of interest [17]. This work focuses on data-driven approaches to HAR and addresses the current challenges of their application to openly available datasets. Within the context of this work, the availability of openly available datasets prompted focus on data-driven approaches, whilst an awareness of the difficulties in accessing domain knowledge averted attention away from knowledge-driven approaches. Nevertheless, data quality is a substantial consideration, as data-driven approaches depend on good training data; however, in the realms of HAR, a shortage of high quality, large-scale, and accurately annotated HAR datasets exists for recognizing ADLs within smart environments [18]. This work avails of low-quality data and emphasizes that good practice concerning data preparation can help improve HAR performance. In relation to this, it has been observed that many machine learning algorithms rely on large amounts of data during the training phase to achieve the desired generalization capabilities [18].

Ensemble learners have been explored widely, due to their ability to improve machine learning performance [19], with the main motivation being the desire to improve generalization capabilities [20]. By combining a set of imperfect models, the acknowledged limitations of individual learners can be more efficiently managed, in that the errors recognized in each component can be minimized as an ensemble, through the implementation of effective combination approaches [20]. In this paper, contributions include improving the quality of an openly available HAR dataset for the purpose of data-driven HAR, since it has been observed that data quality is a substantial consideration for data-driven approaches to HAR, as well as proposing a new ensemble of neural networks as a data-driven HAR classifier.
Furthermore, various approaches to resolving conflicts that occur between base models in ensemble classifiers are investigated, and the effects of various data distributions that form the complement class per model are analyzed, as each model in the ensemble contains unique classes. It has been observed that the various data distributions to generate the complement class per model greatly impact the number of conflicts arising between the base models, thus demonstrating that the effective generation of these classes is an important consideration. The importance of adhering to good data preparation practices is also highlighted, as restructuring and balancing the data has supported and notably improved HAR performance.

The remainder of the paper is structured as follows. Section 2 provides an overview of HAR and describes ensemble approaches to activity classification. Following this, Section 3 describes the dataset used in this study and issues identified with the data. Section 4 provides the methods and materials implemented. Results are then presented and discussed in Section 5, followed by conclusions and future work in Section 6.

**2. Related Works**

This Section presents relevant background information and related works. Section 2.1 provides information relating to HAR within smart home settings, Section 2.2 describes neural networks with regards to their recent use for HAR tasks, and Section 2.3 describes ensemble learners with particular consideration to ensemble generation and integration techniques.

_2.1. Human Activity Recognition (HAR)_

HAR is concerned with the ability to recognize and interpret human activities automatically through the deployment of sensors and the processing of the data they generate [21]. Various approaches to recognizing activities within smart environments have been explored, including the extensive use of wearable devices [22,23] and video-based approaches [24], which is largely due to the increased accessibility of these technologies. Nevertheless, these approaches have associated limitations to consider, including concerns with ethics, comfort, privacy invasion, and obtrusiveness. For example, it has been reported that many elderly inhabitants in AAL scenarios are often reluctant and unwilling to continuously adopt the use of body-worn sensors, in addition to expressing reluctance to the installation of video-based monitoring [25]. Consequently, in an attempt to address the identified concerns and prevent user acceptance issues, binary sensors deployed in the surrounding environment are becoming increasingly promising for long-term activity monitoring in the ubiquitous computing domain, as these devices eliminate the privacy concerns identified with other approaches to HAR, whilst also being non-invasive to smart home inhabitants [16]. Binary sensors have been used in a recent HAR study conducted by [26] to recognize nine ADLs, such as cleaning, cooking, and sleeping, performed by four smart home inhabitants. The sensors deployed included motion detectors integrated within, or attached to, smart appliances. These also incorporated ON/OFF states for cleaning appliances, e.g., a vacuum, ceiling lights, cooking heaters, TV and PC, as well as OPEN/CLOSE states for kitchen appliances such as the fridge. The chosen classifier was a Random Forest model which achieved 68% accuracy; however, the researchers suggested this figure could be increased by applying more effective methods.
In addition to this, in [27], binary sensors were deployed within a home monitoring environment to recognize four basic activity classes, namely relaxing, preparing a meal, eating, and transitioning from bed to toilet. A Deep Convolutional Neural Network (DCNN) was proposed for activity classification, where the binary sensor data generated by four door sensors and 31 passive infrared (PIR) motion sensors were converted into representative activity images. The images generated were used to train the DCNN model, which obtained an accuracy of 99.36% in recognizing the four ADLs observed in the study. Although this approach performed significantly well, a greater number of activity classes could have been explored. Another study conducted by [28] explored the potential of ADL recognition using neural networks within a smart home setting. Experiments involved the design and implementation of recurrent (RNN) and convolutional (CNN) neural networks to recognize activities, e.g., cooking, bathing, and sleeping. Data acquired through the deployment of binary sensors consisting of pressure sensors, reed switches, float sensors, and PIR motion sensors was used to train the various neural network classifiers, with results showing that the RNN and CNN models significantly outperformed other common classifiers during comparisons, achieving 89.8% and 88.2% accuracies, respectively.

HAR requires a feature extraction stage where a set of features are chosen as inputs to a classification model in order to represent the activities being detected. Various state-of-the-art features have been determined for HAR, however, these vary depending on the sensors used to capture activity data. For example, in the realms of wearable technologies that produce accelerometry data, extracting the maximum, minimum, and range features are beneficial in differentiating between activities that comprise movements of varying ranges [29]. Additionally, calculating the signal magnitude area (SMA) of an accelerometry signal has proven advantageous in differentiating between static and dynamic activities [30]. Alternatively, considering the vision-based HAR domain, visual objects can be represented, for example, using local descriptors [31] or calculating the centroids from the contour of depth silhouettes [32]. In this domain, features are commonly extracted with a template-based approach, for example, through human silhouette representations, or a model-based approach, i.e., where the body is defined by a skeleton-based outline with joint points used as feature representation [33].
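To make the accelerometry features discussed above concrete, the following minimal sketch (Python; the variable names and window shape are assumptions for illustration, as the cited studies do not publish code) computes the maximum, minimum, range, and SMA of one window of tri-axial accelerometry data:

```python
import numpy as np

def accel_features(window: np.ndarray) -> dict:
    """Compute simple statistical features for one window of
    tri-axial accelerometry data, shaped (samples, 3)."""
    features = {}
    for axis, name in enumerate(("x", "y", "z")):
        signal = window[:, axis]
        features[f"max_{name}"] = signal.max()
        features[f"min_{name}"] = signal.min()
        features[f"range_{name}"] = signal.max() - signal.min()
    # Signal magnitude area: mean of the summed absolute values across
    # the three axes, useful for separating static from dynamic activities.
    features["sma"] = np.abs(window).sum(axis=1).mean()
    return features

# Example: a 3-second window sampled at 50 Hz (synthetic data).
rng = np.random.default_rng(0)
print(accel_features(rng.normal(size=(150, 3))))
```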
_2.2. Neural Networks_

Neural Networks (NNs) are discriminative models that have been attracting attention recently and are becoming a popular classifier for activity recognition tasks [34]. The Multilayer Perceptron (MLP) is a notable type of feed-forward NN often used for activity recognition tasks [35–38]. They are capable of modelling complex, non-linear relationships and provide an alternative approach to pattern recognition, which is valuable for application in the HAR domain [35]. NNs require high computational capacities, which had restricted their use previously; however, due to recent advancements in technology, more complex architectures are being explored with potential to offer better performance and support [39]. In [37], various approaches to recognizing 11 common ADLs were explored, including the use of a single hidden layer NN, a deep NN architecture, and a fuzzy rule-based approach. The shallow NN performed best with an accuracy of 97.72%, followed by the deep NN approach with 96.59%, with the researchers stating the potential of deep NNs had not been shown during the study, whilst also stating that this could be due to insufficient amounts of training data. In addition to this, in a study conducted by [40], an efficiency investigation was carried out which compared HAR performance using shallow to deep NN approaches. The shallow NN outperformed the convolutional neural network (CNN) on the evaluated HAR datasets, with the shallow NN achieving 99.2% on the WARD data and 96.7% on the UCI_DB data [41], in comparison to 97.7% and 94.2% with the CNN model, respectively. Conclusions of this study stated that the optimal choice for HAR tasks is the use of shallow NNs with two or three layers, rather than the implementation of more complex architectures, particularly if the dataset contains a small number of training samples.

_2.3. Ensemble Learners_

A technique often used to improve classification performance is to combine multiple models together, i.e., to create an ensemble method, rather than relying on the performance of a single model [29]. Ensemble learning involves two key considerations: ensemble generation and ensemble integration [42]. The generation phase includes generating the base models and determining the size of the ensemble. If the models created are achieved using a consistent induction algorithm, it is known as a homogeneous approach, whereas a heterogeneous approach involves creating models using various different algorithms [43]. In [44], a heterogeneous ensemble approach was implemented to recognize various activities within the CASAS smart home testbeds. The ensemble included four base classifiers, which included a Hidden Markov Model (HMM), a NN, a Support Vector Machine (SVM), and Conditional Random Fields (CRF). The results were promising and revealed performance improvements over the use of a single classification model. Further to this, [45] implemented an ensemble classification approach to activity recognition using several heterogeneous base classifiers. The five common base classifiers included an SVM, Decision Tree (DT), kNN, NN, and Naïve Bayes. Results demonstrated that the ensemble approach, combined through majority voting, performed extremely well in classifying twelve activities. As for homogeneous approaches, [46] proposed an ensemble of random forest learners with the aim of generating a more accurate, stable classifier to recognize activities from the PAMAP physical activity dataset. Activity recognition performance was very high, and the generalization capability of the produced classifier had improved significantly. In [47], multiple HMM base models were combined using a decision templates method to recognize activities collected by a smartphone-embedded triaxial accelerometer. Their approach addressed the interclass similarity and intraclass variability HAR challenges, with results showing the ensemble generated performed significantly well with data representing six activity classes and collected by 30 participants. In addition to this, [48–51] proposed homogeneous ensemble approaches for HAR. An observation has been made that less research effort exists on heterogeneous ensembles due to more difficulties arising in controlling interactions between the various learning processes [43]. More recently, researchers have been exploring ensemble learners on the basis of deep learning approaches.
For example, [52–54] proposed ensemble deep learning techniques for HAR, which revealed positive results and robustness. Nevertheless, NNs, and more specifically, deep learning techniques, require a large number of training samples to enhance their performance [55].

2.3.1. Ensemble Generation

During ensemble generation, data partitioning is a commonly considered approach aimed towards diversifying the input data of the base models, so that the subspaces of inputs become complementary [56]. Boosting and Bagging are two common data partitioning ensemble methods used to combine multiple classification models that have been trained on different subsets of the training data [29]. Boosting involves the combination of multiple base classifiers to generate a strong committee classifier that may provide significantly enhanced performance in comparison to the base classifiers, achieved through reweighting the misclassified data samples and therefore boosting their performance [29]. SMOTEBoost and RUSBoost are adaptations of the known AdaBoost approach, where random undersampling or SMOTE is applied to the base classifiers' training data, along with the reweighting phase in accordance with the AdaBoost algorithm, as demonstrated in a study conducted by [57]. Both SMOTEBoost and RUSBoost inject a great degree of arbitrariness through generating or removing instances, resulting in improved robustness to noise [57]. Bagging, on the other hand, averages the outputs produced by each base model, where each model is trained on different training sets consisting of data generated through sampling with replacement [42]. Examples of well-known bagging-based approaches include OverBagging, UnderBagging, and SMOTEBagging. Particularly, SMOTEBagging has been recommended for handling multi-class imbalanced data problems where the instances within each bag are significantly diverse [58]. In a recent study [59], two bagging-based hybrid methods were proposed to deal with imbalanced datasets, namely, ADASYNBagging and RSYNBagging. The ADASYNBagging approach uses the bagging algorithm in conjunction with the ADASYN-based oversampling method, whereas the RSYNBagging approach uses the ADASYN-based oversampling method as well as random undersampling alongside the bagging algorithm. The performances of the proposed hybrid approaches were compared against UnderBagging and SMOTEBagging techniques and evaluated on twelve datasets, with promising experimental results obtained. The benefits of the proposed hybrid approaches were demonstrated, as they outperformed the benchmark methods on eight of the twelve datasets evaluated. Another approach considered during ensemble generation is to manipulate the inputs of the base classifiers at a feature level, for example, training the base models on various different subsets of features [56].
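As a concrete illustration of the bagging idea described above, the sketch below (Python with scikit-learn ≥ 1.2; the synthetic data is a stand-in, as the cited studies used their own datasets and implementations) trains each base estimator on a bootstrap resample and averages the outputs:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for a HAR feature matrix (31 features, 3 classes).
X, y = make_classification(n_samples=500, n_features=31, n_classes=3,
                           n_informative=10, random_state=0)

# Bagging: each of the 10 MLPs sees a bootstrap sample drawn with
# replacement; predictions are combined by averaging probabilities.
bag = BaggingClassifier(
    estimator=MLPClassifier(hidden_layer_sizes=(32,), max_iter=300),
    n_estimators=10, bootstrap=True, random_state=0,
)
bag.fit(X, y)
print(bag.score(X, y))
```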
2.3.2. Ensemble Integration

The ensemble integration phase determines how the predictions produced by the base models should be integrated together to increase performance by obtaining a single outcome [42]. Multiple fusion strategies exist and can be considered at a class label level, a trainable level, or a support level, according to [55]. The class label fusion technique involves each of the base classifiers voting for a certain class, then the final output is decided upon through either a majority voting or weighted majority voting strategy. Majority voting decides on the final output prediction based on the class that has been chosen most often or unanimously by the base classifiers, whereas weighted majority voting assigns weights to each model, often based on their performances, where the classifier with the highest output after weight assignments wins the overall prediction [55]. In a study conducted by [60], majority voting was implemented to decide upon the final outputs of an ensemble approach based on AdaBoost. Three weak learners were used, namely a Decision Tree, Logistic Regression, and Linear Discriminant Analysis (LDA). In addition to AdaBoost, Bagging and Stacking methods were also explored, with the best performance produced by the Bagging approach. Another study [46] used weighted majority voting with an ensemble of Random Forest classifiers. Each classifier was assigned different weights per activity, with the final outcome attained through combining the classification outcomes from each base model via the weighted votes.

Fusion techniques at a trainable level consider the chosen fusion weights during the learning process and implement optimization strategies to increase classification performance whilst also reducing computation cost [55]. These include weighted summations of hypotheses, where higher weights are assigned to those with lower error rates, and the Dempster–Shafer theory to handle uncertainty in the decision-making process. In [61], the outputs of various SVM classifiers, trained on different input feature subsets, were subsequently combined using the Dempster–Shafer fusion rule. The four-step process included creating decision templates for all training instances, calculating the proximity between decision templates and classifier outputs, computing the belief degrees for each output class, and finally, applying the Dempster rule to combine the degrees of belief derived from each base classifier.

Finally, support function fusion involves computing an output decision score for each base classifier, which is derived from the estimated likelihood of a class [56]. This estimation can be computed as an a posteriori probability attained through probabilistic models, using fuzzy membership functions, or through combining NN outputs according to their performance. In [62], five classifiers were combined using an average of probabilities fusion method to recognize six activities. This method used the average of the probability distributions for each base classifier to make a final decision, achieving the best HAR performance in comparison to a majority voting approach that was also implemented during this study. Another study [63] implemented a support function fusion where a Naïve Bayesian fusion method was compared to a majority voting approach to fuse several HMM base models. The Naïve Bayesian approach involves calculating the posterior probability of the HMM outputs, which achieved the best activity recognition performance during the study.
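To illustrate the difference between class-label fusion and support-function fusion, the following sketch (Python; the probability values are hypothetical and not from any of the cited studies) contrasts majority voting over predicted labels with averaging the base classifiers' probability outputs:

```python
import numpy as np
from collections import Counter

# Hypothetical probability outputs of three base classifiers
# for one instance over four classes.
outputs = np.array([
    [0.10, 0.60, 0.20, 0.10],
    [0.05, 0.35, 0.40, 0.20],
    [0.15, 0.30, 0.40, 0.15],
])

# Class-label fusion: each classifier votes for its top class,
# and the most frequent vote wins.
votes = outputs.argmax(axis=1)                 # -> [1, 2, 2]
majority = Counter(votes).most_common(1)[0][0]  # -> class 2

# Support-function fusion (average of probabilities): the final
# decision maximizes the mean class likelihood across classifiers.
average = outputs.mean(axis=0).argmax()         # -> class 1

print(f"majority voting: class {majority}; average of probabilities: class {average}")
```

In this example the two rules disagree, which illustrates why the choice of combination rule matters for ensemble integration.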
After reviewing the literature, we focus on ensemble learning for HAR in this work, due to their perceived benefits, rather than relying on the performance of a single model. Particularly, an ensemble of NNs is explored, although, due to a lack of high quality data in ADL datasets and a low quantity of data, it was decided to employ lighter weight models rather than exploring deeper architectures. The literature has shown that shallow NNs have previously achieved similar performance to deep NN architectures for HAR tasks, with provided recommendations to use shallow architectures particularly in cases where a small number of training samples are available. As outlined, there has been less effort made with heterogeneous ensembles in the research community due to difficulties existing in controlling interactions between the various learning processes; consequently, this work focuses on a homogeneous approach to generating an ensemble of NNs. As stated in [64], one of the crucial problems to consider with ensemble learning is the combination rule employed to determine a final class decision amongst the base models. In this work, a support function fusion method is used to integrate base models, and various approaches to effectively resolve conflicts that occur between the base models are investigated to determine a final output decision.

In summary, this section contained detailed relevant background information and related works. The recent potential of NNs for HAR and pattern recognition problems was presented, which demonstrated that shallow architectures are preferred in scenarios where the dataset contains a small number of training samples. Ensemble approaches to activity recognition were also discussed because of their recent performance in the HAR domain. As mentioned, this work focuses on data-driven approaches to HAR, thus the importance of data quality in relation to those is considered in Section 3, along with a description of the dataset used to conduct experiments.

**3. Dataset for Data-Driven HAR**

An overview of the HAR data is presented in this section with an emphasis on the quality of data acquired. The UCAmI Cup challenge is also described, as the dataset used in this study was derived from this competition. Section 3.1 outlines details of the original dataset, Section 3.1.1 highlights the problems identified, and Section 3.2 details the restructured dataset created as a result of the encountered problems and to demonstrate more realistic capabilities of binary datasets for HAR in smart environments.

Data collection is becoming a critical concern among the countless challenges in machine learning, largely due to limited amounts of training data being available to researchers in their respective fields and the quality of the data being collected [65]. In the realms of machine learning, it is known that the majority of effort and time is consumed through preparing the data, which involves data collection, cleansing, interpretation, and feature engineering [65]. Data quality is an imperative consideration in applications of data-driven approaches to HAR, as the performance of models is largely dependent on the quality of training data. Noise can be introduced during data collection by the participants and/or sensors, which adversely affects the performance of data-driven techniques [66]. Common issues include missing or erroneous values and mislabeled data [67]. Data cleansing is known as the process of removing inconsistencies or errors, such as outliers and/or noise, from a collection of data [68]. According to [66], addressing the presence of outliers and noise is vital as their existence can substantially influence experimental results produced by data-driven approaches. Nevertheless, an unclear border is often present between normal and abnormal data, where a considerably large "gray area" may exist [69].
In supervised learning, noise can transpire at an attribute or class level. In [70], an effort was made to evaluate the impact of noise on classification performance on 17 datasets, generated within various domains. Each dataset was manually introduced to various levels of noise to investigate how it affected model performance. Their findings demonstrated that as noise levels increase, performance decreases, and that attribute noise is generally less harmful than class noise. Furthermore, [71] compared and evaluated how well several classifiers performed with noisy, poor quality data. Conclusions stated that robustness to noisy data and classification performance varied significantly amongst the algorithms observed, with the Random Forest and kNN models proving most resilient to noise.

The data used in this study was generated for the 1st UCAmI Cup challenge, where participants were invited to use their tools and techniques to analyze a HAR dataset with the aim of achieving the highest accuracy on the unseen test set. In [72], the challenge organizers describe the UCAmI Cup dataset comprehensively. Knowledge-driven rule-based approaches outperformed the data-driven approaches to the activity recognition problem, with many of the participants reporting issues and limitations found within the data [73–76]. The approach implemented by [73] involved a domain knowledge-based solution inspired by a Finite State Machine, achieving 81.3% accuracy. In [74], a hybrid model was proposed using a hidden Markov chain and logic model. The researchers combined their knowledge-driven and probabilistic models using a weighted averaging method; however, they reported that they had expected a better result than 45.0% accuracy on the test set. In addition to this, [75] used a Naïve Bayes approach with emphasis on location-aware, event-driven activity recognition. The applied method interpreted events as soon as they became available in real-time, omitting the need of an explicit segmentation phase, and generated activity estimations using an activity prediction model. Reported results show mean accuracies of around 68%, with the researchers stating that given the high number of activity classes, the outcome achieved was reasonable. Another approach implemented in [76] used various common machine learning algorithms, including a Decision Tree, Nearest Neighbour, Support Vector Machine, and three ensemble approaches including Random Forest, Boosting, and Bagging. The researchers reported a training set accuracy of 92.1%; however, their approach achieved 60.1% on the provided test data, which demonstrated poor generalization. Their suggested cause for the low outcome was the high imbalance of classes in the training set, and they stated that the training algorithm required more labelled training data to perform better.

_3.1. UCAmI Cup Dataset_

The HAR dataset was collected over 10 days by researchers in the UJAmI Smart Lab [72]. The UJAmI Smart Lab is divided into five regions: an entrance, a workplace, a living room, a bedroom with an integrated bathroom, and a kitchen, which measures approximately 25 square meters combined, as presented in Figure 1. The dataset was captured by a single male inhabitant completing morning, afternoon, and evening routines, representing 246 occurrences of 24 activity classes, as presented in Table 1.
The training set consisted of 7 days of labelled data, with the remaining 3 days of data being provided as an unlabeled test set.

**Figure 1.** Location of Binary Sensors in the UJAmI Smart Lab [72].

**Table 1.** Activity Classes in the UCAmI Cup Dataset [72], where M, A, and E indicate the Morning, Afternoon, and Evening routines, respectively.

| ID | Name | Instances | Routine | ID | Name | Instances | Routine |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Act01 | Take Medication | 52 | A, E | Act13 | Leave Smart Lab | 33 | M, A |
| Act02 | Prepare Breakfast | 63 | M | Act14 | Visitor to Smart Lab | 7 | M, A |
| Act03 | Prepare lunch | 118 | A | Act15 | Put waste in the bin | 75 | A, E |
| Act04 | Prepare Dinner | 76 | E | Act16 | Wash hands | 22 | M |
| Act05 | Breakfast | 78 | M | Act17 | Brush teeth | 132 | M, A, E |
| Act06 | Lunch | 101 | A | Act18 | Use the toilet | 44 | M, A, E |
| Act07 | Dinner | 86 | E | Act19 | Wash dishes | 13 | A, E |
| Act08 | Eat a snack | 12 | A | Act20 | Put washing in machine | 20 | M, A |
| Act09 | Watch TV | 70 | A, E | Act21 | Work at the table | 20 | M |
| Act10 | Enter Smart Lab | 21 | A, E | Act22 | Dressing | 86 | M, A, E |
| Act11 | Play a videogame | 28 | M, E | Act23 | Go to bed | 30 | E |
| Act12 | Relax on the sofa | 85 | M, A, E | Act24 | Wake up | 32 | M |

A set of 30 binary sensors consisting of magnetic contact switches, PIR motion detectors, and pressure sensors were deployed in the UJAmI Smart Lab to capture human interactions within the environment, as presented in Figure 1. The two changeable states of the magnetic contact switches were open/close, which were attached to, or integrated within, doors and objects, such as the medication box. The motion detectors generated and recorded movement/no movement states to identify whether an inhabitant had moved in or out of the 7-meter sensing range. Finally, the pressure sensors deployed generated either pressure/no pressure states and were present in the bed and the sofa to detect any interactions. A comprehensive description of each binary sensor is presented in Table 2.
**Table 2.** Description of binary sensors [72].

| ID | Object | Type | States |
| --- | --- | --- | --- |
| SM1 | Kitchen area | Motion | Movement/No movement |
| SM3 | Bathroom area | Motion | Movement/No movement |
| SM4 | Bedroom area | Motion | Movement/No movement |
| SM5 | Sofa area | Motion | Movement/No movement |
| M01 | Door | Contact | Open/Close |
| TV0 | TV | Contact | Open/Close |
| D01 | Refrigerator | Contact | Open/Close |
| D02 | Microwave | Contact | Open/Close |
| D03 | Wardrobe | Contact | Open/Close |
| D04 | Cups cupboard | Contact | Open/Close |
| D05 | Dishwasher | Contact | Open/Close |
| D07 | WC | Contact | Open/Close |
| D08 | Closet | Contact | Open/Close |
| D09 | Washing machine | Contact | Open/Close |
| D10 | Pantry | Contact | Open/Close |
| C01 | Medication box | Contact | Open/Close |
| C02 | Fruit platter | Contact | Open/Close |
| C03 | Cutlery | Contact | Open/Close |
| C04 | Pots | Contact | Open/Close |
| C05 | Water bottle | Contact | Open/Close |
| C07 | XBOX Remote | Contact | Present/Not present |
| C08 | Trash | Contact | Open/Close |
| C09 | Tap | Contact | Open/Close |
| C10 | Tank | Contact | Open/Close |
| C12 | Laundry basket | Contact | Present/Not present |
| C13 | Pyjamas drawer | Contact | Open/Close |
| C14 | Bed | Pressure | Pressure/No pressure |
| C15 | Kitchen faucet | Contact | Open/Close |
| H01 | Kettle | Contact | Open/Close |
| S09 | Sofa | Pressure | Pressure/No pressure |

3.1.1. Data Challenges

A number of issues were identified with the original binary dataset that hindered the performance of recognizing ADLs in a smart environment setting. These included:

- Number of classes. The number of classes in the original dataset was very high given the low number of instances per activity and low amount of data overall. As discussed previously, data-driven approaches rely on large amounts of good quality data. Furthermore, certain classes were too closely related to one another to recognize with binary data alone. For example, the following activities relied on one door sensor: entering the smart lab, leaving the smart lab, and having a visitor to the smart lab. Binary sensors are limited in inferring activities in that they provide information at an abstract level [77]; therefore, Act08 eating a snack was difficult to distinguish compared to Act02 prepare breakfast, Act03 prepare lunch, and Act04 prepare dinner, as these activities all used similar sensors. Thus, in order to capture activities at a finer level, the presentation and interpretation of binary data often requires further knowledge of the environment [78]. This issue was discussed by a UCAmI Cup participant in [75], where conclusions had stated that their achieved activity recognition performance was reasonable given the large number of activity classes present in the dataset.
- Imbalanced dataset. The distribution of instances per class in the original dataset was highly diverse, which may have caused minority classes to be overlooked by the classification model. For example, Act19 wash dishes was represented by 13 instances of data, whereas other activities such as Act17 brush teeth had more than 100 instances. Furthermore, the distribution of instances per class in the provided training and test sets was highly varied. For example, Act09 was very under-represented in the training set, yet the test set included a large number of Act09 instances. Noteworthy, Act09 also produced very similar sensor characteristics to Act12, which was problematic in the initial experiments, as the training set included large amounts of Act12 data.
This issue was discussed in [74], where researchers stated that their approach also found difficulty in classifying Act12, due to the poor representation of this activity in the training set, and suggested that the data should be better distributed to improve HAR performance.

- Quantity of data. As previously stated, data-driven approaches require lots of data during the training phase to learn activity models and to ensure these models can generalize well to new data. NNs require lots of data to learn complex activity models [79], though the original dataset was relatively small. Thus, more labelled training data could have improved initial experiments. In [76], UCAmI Cup participants suggested the cause for their low HAR performance was the high imbalance of classes in the training set and stated that the training algorithm required more labelled training data to perform better.
- Missing sensors. Act21 work at table had no binary sensor located near the table to distinguish this activity, as presented in Figure 1. This issue caused confusion as the sensor firing for Act21 in the labelled training set was seen to be a motion sensor located in the bedroom, which is irrelevant to Act21 and therefore seen as erroneous. In addition to missing sensors, there were also missing values from sensors that were expected to fire during certain activities. As previously stated, some researchers participating in the UCAmI Cup challenge reported that they found missing values or mislabeling of some activities within the training set. In [73] this issue was discussed, where participants stated that during one instance of Act10 enter the smart lab, the only binary sensor that is expected to fire (M01) does not change states.
- Interclass similarity. This is a common HAR challenge that occurs when certain activities generate similar sensor characteristics, though they are physically different [80]. Table 3 shows the activities that produced similar sensor characteristics, resulting in difficulties arising in discriminating between these activities during classification.

**Table 3.** Activities producing similar sensor characteristics within the UCAmI Cup data.

| Activity Group | Activity Name | Common Sensors |
| --- | --- | --- |
| Act10, Act13, Act14 | Enter Smart Lab, Leave Smart Lab, and Visitor to Smart Lab | M01 Door |
| Act23, Act24 | Go to Bed and Wake Up | C14 Bed |
| Act09, Act12 | Watch TV and Relax on Sofa | S09 Pressure Sofa, SM5 Sofa Motion |
| Act02, Act03, Act04, Act08 | Prepare Breakfast, Prepare Lunch, Prepare Dinner, Prepare Snack | SM1 Kitchen Motion, D10 Pantry, C03 Cutlery |

As a result of the various problems identified with the dataset, it was decided to restructure the data to reveal the potential of using binary sensors alone within smart environments.

_3.2. Restructured Dataset_

First, the provided training and test sets were combined to better represent activity classes within the training data. Figure 2 shows the distribution of the combined 10 days of 24 activity classes for all the available data in the UCAmI Cup. As can be viewed in Figure 2, certain classes were very under-represented, with a third of all activity classes containing less than 30 instances.
These classes were removed, as they would be under-represented in the training phase and therefore would not generalize well to unseen data. Consequently, 8.82% of instances were removed, which comprised the following classes: Act08, Act11, Act16, and Act19–Act21. An opportunity to combine certain similar activity classes was also identified so that the data could be used effectively. For example, Act10, Act13, and Act14 were combined to produce ActN1 door, as they all make use of a single door sensor, and Act09 and Act12 were combined to produce ActN2 watch TV on sofa, as they mainly consisted of the inhabitant sitting on the sofa. Furthermore, Act02 and Act05, Act03 and Act06, and finally Act04 and Act07 were combined to produce ActN3 breakfast, ActN4 lunch, and ActN5 dinner, respectively, as these sets of activities were similar. Table 4 presents the restructured dataset.

**Figure 2.** Distribution of the 24 UCAmI Cup activity classes with threshold shown.

**Table 4.** Activity classes in the restructured dataset.

| ID | Name | Instances | Routine | ID | Name | Instances | Routine |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Act01 | Take Medication | 52 | A, E | Act24 | Wake up | 32 | M |
| Act15 | Put waste in the bin | 75 | A, E | ActN1 | Door | 61 | M, A, E |
| Act17 | Brush teeth | 132 | M, A, E | ActN2 | Watch TV on sofa | 155 | M, A, E |
| Act18 | Use the toilet | 44 | M, A, E | ActN3 | Breakfast | 141 | M |
| Act22 | Dressing | 86 | M, A, E | ActN4 | Lunch | 219 | A |
| Act23 | Go to bed | 30 | E | ActN5 | Dinner | 162 | E |
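The restructuring described above amounts to a filter-and-merge pass over the activity labels. A minimal sketch of that logic is shown below (Python; the per-window label series is an assumed data structure, as the authors' exact code is not published):

```python
import pandas as pd

# Classes with fewer than 30 instances are dropped outright.
DROPPED = {"Act08", "Act11", "Act16", "Act19", "Act20", "Act21"}

# Similar classes are merged into new composite classes.
MERGED = {
    "Act10": "ActN1", "Act13": "ActN1", "Act14": "ActN1",  # door
    "Act09": "ActN2", "Act12": "ActN2",                     # watch TV on sofa
    "Act02": "ActN3", "Act05": "ActN3",                     # breakfast
    "Act03": "ActN4", "Act06": "ActN4",                     # lunch
    "Act04": "ActN5", "Act07": "ActN5",                     # dinner
}

def restructure(labels: pd.Series) -> pd.Series:
    """Drop under-represented classes and merge similar ones."""
    kept = labels[~labels.isin(DROPPED)]
    return kept.replace(MERGED)

# Example with a few hypothetical window labels.
labels = pd.Series(["Act01", "Act08", "Act10", "Act12", "Act05"])
print(restructure(labels).tolist())  # ['Act01', 'ActN1', 'ActN2', 'ActN3']
```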
**4. Proposed HAR Classification Model**

The materials and methods implemented are described within this section. The data pre-processing phase is explained, including data segmentation and feature extraction, which are two fundamental aspects of the activity recognition process [80]. Following this, the ensemble approach and conflict resolution techniques are presented.

_4.1. Data Pre-Processing_

Since the data restructuring process involved combining the provided train and test sets to produce a set of data that better represents activity classes in the training data, it was subsequently required to extract a new test set. Thus, 15% of the data was randomly selected and removed to generate an unseen test set. The raw data files containing data streams produced by binary sensors include a timestamp, the sensor ID, the sensor state, and the inhabitant name, as presented in Figure 3.

**Figure 3.** Excerpt from a raw binary data file.

The raw data was segmented into 30-second non-overlapping time windows to identify the segments of data that are likely to contain information regarding activities. Time-based windowing involves dividing the entire dataset equally into time segments that include a fixed quantity of data per window [29]. It is a common approach for segmenting data streams collected through environmental sensors; however, no clear consensus exists for choosing the optimal window size for ADL recognition [81], therefore a 30-second window size was chosen, as this was the regulation adhered to in the UCAmI Cup challenge. A total of 31 features were included, which consisted of one feature per binary sensor and an additional time routine feature representing whether the activity had occurred in the morning, afternoon, or evening, to help distinguish between the similar activities previously outlined. For example, as Act23 go to bed and Act24 wake up use the same pressure sensor located in the bed, the inclusion of a time routine feature can help distinguish these activities due to the human nature of habitually waking up in the morning and going to bed in the evening.
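The segmentation and feature extraction steps described above can be sketched as follows (Python; the event layout mirrors Figure 3, but the column names and the routine-encoding scheme are assumptions for illustration, not the authors' published code):

```python
import pandas as pd

def window_features(events: pd.DataFrame, sensor_ids: list,
                    window: str = "30s") -> pd.DataFrame:
    """Segment a stream of binary sensor events into 30-second
    non-overlapping windows and build one 31-dimensional feature
    vector per window: one feature per sensor plus a time routine."""
    events = events.set_index("timestamp")
    rows = []
    for start, chunk in events.groupby(pd.Grouper(freq=window)):
        feats = {s: 0 for s in sensor_ids}
        for sensor in chunk["sensor_id"]:
            feats[sensor] = 1            # sensor fired within this window
        hour = start.hour                # assumed routine encoding:
        feats["routine"] = 0 if hour < 12 else 1 if hour < 18 else 2
        rows.append(feats)
    return pd.DataFrame(rows)

# Example with two hypothetical events.
events = pd.DataFrame({
    "timestamp": pd.to_datetime(["2017-11-09 08:00:05",
                                 "2017-11-09 08:00:20"]),
    "sensor_id": ["C14", "SM4"],
})
print(window_features(events, sensor_ids=["C14", "SM4", "M01"]))
```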
_4.2. Ensemble Approach_

Ensemble methods for classification have been explored recently, due to their potential to improve robustness, performance, and generalization capabilities in comparison to single model approaches [40]. Our approach consists of four MLPs as base classifiers to generate a homogeneous ensemble method. A model is created per time routine, Morning, Afternoon, and Evening, as some activities uniquely occur within specific routines. Additionally, a Mixed model is created to consider activities that occur arbitrarily throughout the day. Figure 4 presents the four base classifiers, where n indicates the number of classes per model. M, A, and E represent the Morning, Afternoon, and Evening models, respectively, and finally MI represents the Mixed model.

**Figure 4.** Four base classifiers presented per time routine, where n indicates the number of classes per model. M, A, and E represent the Morning, Afternoon, and Evening models, respectively, and finally MI represents the Mixed model.

Definitions:

Input:

$X = \{\vec{x}_1, \vec{x}_2, \ldots, \vec{x}_N\} \in B^{N \times d}$, where $N$ is the number of instances and $d$ is the number of features, $d = 31$; $\vec{x}_i = [x_i^1, x_i^2, \ldots, x_i^d]$, where $x_i^j \in [0, 1]$.

Output:

$Y = [y_1, y_2, \ldots, y_N] \in [1, \ldots, 12]$.

Base Models:

Models $M_1$, $M_2$, $M_3$, and $M_4$ represent the Morning, Afternoon, Evening, and Mixed base models, respectively, in the proposed ensemble approach.
Given the instance $\vec{x}_i$, the output of base model $M_j$ is given by

$f_i^j = f\left(\varphi^j(\vec{x}_i)\right)$,

where index $j = [1, \ldots, 4]$, $\varphi^j(\vec{x}_i)$ is the input to the activation function of base model $M_j$, and $f^j$ is the output of each base model $M_j$.

For simplicity, the output can be represented as $f_i^j = \left(p_1^j, \ldots, p_{m_j}^j\right)$, where $m_j$ represents the number of outputs from base model $M_j$.

The predicted class $\hat{k}_i^j \in [1, \ldots, 12]$ from base model $M_j$ is the class represented by the output with the maximum $p$ value, $p_i^{j,1} = \max\left(p_1^j, \ldots, p_{m_j}^j\right)$. The second largest value in the output vector is notated as $p_i^{j,2}$. These $p$ values will be used for later conflict resolution in Algorithms 2–5.

Base Model Compositions:

The universal set $C$ represents the set of all classes of activities; $C^j$ represents the activity classes represented by the time domain of each base model $M_j$. $\bar{C}^j$ is the complement class for base model $M_j$, and it combines the activity classes not in $C^j$, denoted below:

$\bar{C}^j = \left\{k \in C : k \notin C^j\right\}$

Example: the Morning base model $M_1$ contains activities from classes

$C^1 = [\text{Act24}, \text{ActN3}]$,

$\bar{C}^1 = \{\text{ActN4}, \text{Act23}, \text{ActN5}, \text{Act01}, \text{Act15}, \text{Act17}, \text{Act18}, \text{Act22}, \text{ActN1}, \text{ActN2}\}$.

There are $m_1 = 3$ classes, where all but one class, the complement, are in $C^1$.

The morning model contains two main activity classes, namely Act24 wake up and ActN3 breakfast, as these activities occur in a typical morning routine. ActN4 lunch is the only main class within the afternoon model, as lunch usually occurs in the afternoon. The evening model contains two main classes, namely Act23 go to bed and ActN5 dinner, as these activities habitually occur in an evening routine. Finally, the mixed model contains seven main activity classes that do not regularly occur within a specific time routine. For example, Act15 put waste in the bin and Act22 dressing are activities commonly performed at any time during the day. The activity class outputs per model are presented in Table 5.
**Table 5.** Activity class outputs per model.

| #output | Model ID | Name | Activity Classes |
| --- | --- | --- | --- |
| $m_1 = 3$ | $M_1$ | Morning | $C^1$ = [Act24, ActN3] (2 classes); $\bar{C}^1$ = [ActN4, Act23, ActN5, Act01, Act15, Act17, Act18, Act22, ActN1, ActN2] (1 class) |
| $m_2 = 2$ | $M_2$ | Afternoon | $C^2$ = [ActN4] (1 class); $\bar{C}^2$ = [Act24, ActN3, Act23, ActN5, Act01, Act15, Act17, Act18, Act22, ActN1, ActN2] (1 class) |
| $m_3 = 3$ | $M_3$ | Evening | $C^3$ = [Act23, ActN5] (2 classes); $\bar{C}^3$ = [Act24, ActN3, ActN4, Act01, Act15, Act17, Act18, Act22, ActN1, ActN2] (1 class) |
| $m_4 = 8$ | $M_4$ | Mixed | $C^4$ = [Act01, Act15, Act17, Act18, Act22, ActN1, ActN2] (7 classes); $\bar{C}^4$ = [Act24, ActN3, ActN4, Act23, ActN5] (1 class) |

A framework for the implemented homogeneous ensemble approach is presented in Figure 5, where the conflict resolution approaches are compared. Each base model is presented with an input feature vector consisting of data produced by 30 binary sensors and an additional time routine feature, resulting in a total of 31 input features. Each of the base models produces output predictions derived from the estimated likelihood of each class, which are subsequently combined through the support function fusion [56] during the ensemble integration phase.

**Figure 5.** Framework for the homogeneous ensemble approach. $M_1$, $M_2$, and $M_3$ represent the Morning, Afternoon, and Evening models, respectively, and $M_4$ represents the Mixed model.

Due to each model having no overlapping classes, each needs to be trained with a complement class, which consists of representative activity samples from each of the main classes contained within the remaining models. The aim of this is that each model will be able to identify whether or not new activity instances belong to that model, thus when a model receives an unseen input of an activity class existing within its complement, it should recognize that the activity does not exist as a main class in the model and should, therefore, eliminate itself from the decision process. For example, if the morning model is presented with an activity instance contained in the $\bar{C}^1$ class, e.g., ActN4, as presented in Table 5, it should recognize that ActN4 belongs to the complement class and should therefore exclude itself from the decision-making process. A sketch of this decision process is given below. To analyze the effects on model conflicts of various data distributions that construct the complement classes per model, we explore two approaches towards generating these classes. Section 4.2.1 explains the generation of the complement class data at a model level, where activity instances are distributed evenly between the remaining models, and Section 4.2.2 explains the generation of the complement class data at a class level, where activity instances are distributed evenly between the remaining classes.
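As an illustration of the ensemble structure described above, the following sketch (Python with scikit-learn; the model names, complement label, and hyperparameters are assumptions, not the authors' published code) trains the four MLP base models, lets each model eliminate itself when it predicts its complement class, and fuses the remaining output scores:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

COMPLEMENT = "complement"  # assumed label for each model's complement class

def train_base_models(datasets):
    """datasets: {'morning': (X, y), 'afternoon': (X, y), ...}, where y
    contains the model's main classes plus the COMPLEMENT label."""
    models = {}
    for name, (X, y) in datasets.items():
        clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500,
                            random_state=0)
        models[name] = clf.fit(X, y)
    return models

def ensemble_predict(models, x):
    """Support function fusion: every model that does not predict its
    complement class proposes its top class with its output score."""
    proposals = []
    for name, clf in models.items():
        probs = clf.predict_proba(x.reshape(1, -1))[0]
        top = np.argmax(probs)
        label = clf.classes_[top]
        if label != COMPLEMENT:          # model stays in the decision
            proposals.append((name, label, probs[top]))
    # More than one proposal means a conflict, to be resolved by
    # Algorithms 2-5 (Section 4.3); the default here is Algorithm 2:
    # the highest output score wins.
    return max(proposals, key=lambda p: p[2])[1] if proposals else None
```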
4.2.1. Complement Class Generation at a Model Level

Distributing instances at a model level involves balancing the complement class data equally between the remaining models. The first step in the process is to calculate how many instances this class should contain, in total. Per model, this is calculated as the average number of main class instances. This total is then divided by the number of remaining models to achieve an equal distribution of instances per model. Following this, the class distributions are calculated by dividing the number of instances per model by the number of main classes within each model. Table 6 presents the distribution of instances at a model level.

**Table 6.** Model level-distribution of instances for complement class compositions.

| Model | Complement Distribution (No. of Instances) | Class Distribution (No. of Instances) |
| --- | --- | --- |
| complement class $\bar{C}^1$ of $M_1$ | Afternoon (24), Evening (24), Mixed (25) | ActN4 (24); Act23 (12), ActN5 (12); Act01 (03), Act15 (03), Act17 (03), Act18 (04), Act22 (04), ActN1 (04), ActN2 (04) |
| complement class $\bar{C}^2$ of $M_2$ | Morning (62), Evening (62), Mixed (62) | Act24 (31), ActN3 (31); Act23 (31), ActN5 (31); Act01 (09), Act15 (09), Act17 (09), Act18 (08), Act22 (09), ActN1 (09), ActN2 (09) |
| complement class $\bar{C}^3$ of $M_3$ | Morning (27), Afternoon (27), Mixed (27) | Act24 (13), ActN3 (14); ActN4 (27); Act01 (04), Act15 (04), Act17 (04), Act18 (03), Act22 (04), ActN1 (04), ActN2 (04) |
| complement class $\bar{C}^4$ of $M_4$ | Morning (24), Afternoon (24), Evening (25) | Act24 (12), ActN3 (12); ActN4 (24); Act23 (12), ActN5 (12) |

4.2.2. Complement Class Generation at a Class Level

Distributing instances at a class level involves balancing the complement class data equally between the remaining classes within the models. As with the previous approach, the first step involves calculating the average number of main class instances per model to attain the total instances for each complement class. Following this, the previously calculated total is divided by the number of remaining classes across the remaining models to achieve an equal distribution of instances per class. Finally, all instances per class were multiplied by 2 to better represent each class. For example, to generate the $M_1$ complement class, the average number of main class instances was calculated first, resulting in 74. Subsequently, to achieve an equal distribution of instances per class within the complement, 74 was divided by the 10 remaining classes, resulting in 7.4 instances required per class. Finally, to better represent each class during training, this number was multiplied by 2, resulting in 14.8 (15) instances per class. Table 7 presents the distribution of instances at a class level.

**Table 7.** Class level-distribution of instances for complement class compositions.

| Model | Complement Distribution (No. of Instances) | Class Distribution (No. of Instances) |
| --- | --- | --- |
| complement class $\bar{C}^1$ of $M_1$ | Afternoon (15), Evening (30), Mixed (105) | ActN4 (15); Act23 (15), ActN5 (15); Act01 (15), Act15 (15), Act17 (15), Act18 (15), Act22 (15), ActN1 (15), ActN2 (15) |
| complement class $\bar{C}^2$ of $M_2$ | Morning (68), Evening (68), Mixed (238) | Act24 (34), ActN3 (34); Act23 (34), ActN5 (34); Act01 (34), Act15 (34), Act17 (34), Act18 (34), Act22 (34), ActN1 (34), ActN2 (34) |
| complement class $\bar{C}^3$ of $M_3$ | Morning (32), Afternoon (16), Mixed (112) | Act24 (16), ActN3 (16); ActN4 (16); Act01 (16), Act15 (16), Act17 (16), Act18 (16), Act22 (16), ActN1 (16), ActN2 (16) |
| complement class $\bar{C}^4$ of $M_4$ | Morning (58), Afternoon (29), Evening (58) | Act24 (29), ActN3 (29); ActN4 (29); Act23 (29), ActN5 (29) |
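The two distribution rules can be expressed compactly. The sketch below (Python; the class counts follow Table 5, while the rounding details are an assumption where the text does not spell them out) computes the number of complement instances drawn per remaining class under each rule:

```python
def model_level(avg_main, remaining_models):
    """remaining_models: {model_name: n_main_classes}. The complement
    total is split evenly per model, then per class within each model."""
    per_model = avg_main / len(remaining_models)
    return {m: per_model / n for m, n in remaining_models.items()}

def class_level(avg_main, remaining_models):
    """The complement total is split evenly per remaining class,
    then doubled to better represent each class during training."""
    n_classes = sum(remaining_models.values())
    return {m: 2 * avg_main / n_classes for m in remaining_models}

# Morning model M1: the average of its main class instance counts is ~74;
# the remaining models hold 1 (Afternoon), 2 (Evening), 7 (Mixed) classes.
remaining = {"afternoon": 1, "evening": 2, "mixed": 7}
print(model_level(74, remaining))  # ~24.7, ~12.3, ~3.5 instances per class
print(class_level(74, remaining))  # 74 / 10 * 2 = 14.8, i.e., ~15 per class
```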
_4.3. Model Conflict Resolution_

As mentioned, support function fusion [56] is explored through combining the output predictions produced by each MLP base model during the ensemble integration phase. The combined predictions are subsequently analyzed to determine whether a single model has chosen the final output, i.e., all models except one have chosen the complement class. If this is not the case, and more than one model has chosen a main class output, a conflict has occurred between these models during the decision-making process, as seen in Algorithm 1. We investigate several approaches to model conflict resolution to determine the final output class per instance.

**Algorithm 1. Process of finding conflicts between models**
1: for each instance x_i ∈ B^(1×d)
2: if ∃ j and ∃ jj such that k̂_i^j ∈ C^j, k̂_i^jj ∈ C^jj, and j ≠ jj
3: then use the conflict resolution approaches in Algorithms 2/3/4/5, as there are at least two conflicting cases

The first method of resolving conflicts, presented in Algorithm 2, is simply to award the final decision to the model with the highest output prediction. This approach has previously been established as a soft-level combiner [82], as it makes use of the output predictions given by the classifiers as the posterior probabilities of each output class. A limitation of this method, however, is that it provides limited confidence in the output prediction. For example, consider that the two largest output values of one base model are 0.56 and 0.54, respectively. If the final class decision is awarded according to the highest output value in this case, there is less confidence in the quality of the classification, which implies a less secure output prediction. To overcome this, another technique, presented in Algorithm 3, is proposed to calculate the difference between the highest and second highest predictions per conflicting model, where the final decision is subsequently given to the model with the highest differential value, i.e., the model with the strongest class prediction.

Following this, the impact of a weighting technique based on the number of classes per model is investigated in Algorithm 4, as each base model contains a different number of unique classes. This approach considers the output predictions from each conflicting base classifier and the number of classes the base models are trained on, i.e., the output predictions from each base model are multiplied by the number of classes within that base model. For example, if a conflict occurs between model M2 and model M4, which contain two and eight classes, respectively, the two-class problem may be less complex than the eight-class problem, and therefore a lower weighting is specified for M2.

Finally, we explore the potential of another weighted method in Algorithm 5, which builds upon the previous approach. Weightings are implemented on the basis of the number of classes as well as the training performance per model, i.e., the output predictions from each conflicting base classifier are multiplied by the number of classes in that model and the training performance achieved. According to [83], a base classifier that outperforms other base classifiers in an ensemble approach should be given higher confidence when deciding upon the final output prediction, as the training performance measure is indicative of the classifier's effectiveness in predicting the correct output class. The training performance measure in Algorithm 5 is the classification accuracy obtained by each conflicting model.

Repeated notations: for model j and instance i, the largest value in the output vector is denoted p_i^(j,1), and the second largest value is denoted p_i^(j,2).

**Algorithm 2. Conflict resolution approach 1**
Input: x_i, base models M_r, M_s. Output: class y_i
1: if p_i^(r,1) > p_i^(s,1)
2: then y_i = k̂_i^r
3: else y_i = k̂_i^s

**Algorithm 3. Conflict resolution approach 2**
Input: x_i, base models M_r, M_s. Output: class y_i
1: if (p_i^(r,1) − p_i^(r,2)) > (p_i^(s,1) − p_i^(s,2))
2: then y_i = k̂_i^r
3: else y_i = k̂_i^s

**Algorithm 4. Conflict resolution approach 3**
Input: x_i, base models M_r, M_s. Output: class y_i
1: if p_i^(r,1) × m_r > p_i^(s,1) × m_s
2: then y_i = k̂_i^r
3: else y_i = k̂_i^s

**Algorithm 5. Conflict resolution approach 4**
Input: x_i, base models M_r, M_s. Output: class y_i
1: Acc_train^r represents the training performance for base model M_r
2: Acc_train^s represents the training performance for base model M_s
3: if p_i^(r,1) × m_r × Acc_train^r > p_i^(s,1) × m_s × Acc_train^s
4: then y_i = k̂_i^r
5: else y_i = k̂_i^s
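Algorithms 2 through 5 differ only in the scalar score each conflicting model is given before the scores are compared. The following compact Python sketch expresses all four rules; it is our illustration under assumed data structures, not the published code, and the probability and accuracy values in the example are hypothetical.

```python
# Each conflicting model is summarized by: p1 (largest output), p2 (second
# largest output), m (number of classes), acc (training accuracy), and the
# main class it predicted for the instance.

def score(model, rule):
    p1, p2, m, acc = model["p1"], model["p2"], model["m"], model["acc"]
    if rule == 2:   # Algorithm 2: highest output prediction wins
        return p1
    if rule == 3:   # Algorithm 3: largest margin p1 - p2 wins
        return p1 - p2
    if rule == 4:   # Algorithm 4: output weighted by class count
        return p1 * m
    if rule == 5:   # Algorithm 5: also weighted by training accuracy
        return p1 * m * acc
    raise ValueError("rule must be 2, 3, 4 or 5")

def resolve(conflicting, rule):
    """Return the top class of the conflicting model with the best score."""
    winner = max(conflicting, key=lambda mdl: score(mdl, rule))
    return winner["top_class"]

# Hypothetical example: M2 (2 classes) and M4 (8 classes) conflict.
m2 = {"p1": 0.56, "p2": 0.44, "m": 2, "acc": 0.90, "top_class": "ActN4"}
m4 = {"p1": 0.48, "p2": 0.20, "m": 8, "acc": 0.85, "top_class": "Act15"}
print(resolve([m2, m4], rule=2))   # ActN4 (0.56 > 0.48)
print(resolve([m2, m4], rule=3))   # Act15 (margin 0.28 > 0.12)
```

Note how the two rules can disagree on the same instance: the highest-output rule favors M2, while the margin rule favors M4 because its prediction is better separated from its runner-up.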
**5. Results and Discussion**

The results show that the class level distribution technique, described in Section 4.2.2, greatly reduces the number of conflicts that occur between the various base models in comparison to the model level distribution technique, as shown in Table 8. This is due to improved representations of activities within the complement classes per model during the training phase of the base classifiers. For example, with the class level distribution technique, activity instances were distributed evenly between classes, therefore evenly representing each activity within the complement class. Contrarily, the model level distribution technique involved balancing the complement class data equally between the remaining models, which meant the class distributions within these models were imbalanced. For example, with the model level distribution technique, the C̄^1 complement class contained 24 instances of ActN4 and only 3 instances of Act17, whereas with the class level distribution technique, the C̄^1 complement class contained 15 instances each of ActN4 and Act17. Consequently, with the implementation of the latter distribution technique, the base classifiers are stronger at deciding when an unseen instance belongs to their complement class, eliminating themselves from the decision-making process and therefore reducing the number of conflicts that occur.

**Table 8.** Number of conflicts.

| Approach | Fold 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | Avg. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Complement class, model level approach | 76 | 57 | 69 | 52 | 49 | 35 | 60 | 45 | 62 | 56 | 56.1 |
| Complement class, class level approach | 21 | 37 | 11 | 13 | 13 | 42 | 29 | 39 | 11 | 17 | 23.3 |

Classification performance from each of the two data distribution techniques was analyzed before and after the conflict resolution approaches were applied, as presented in Figure 6. Considering the complement class generation at a model level, the preliminary performance accuracy of 60.28% is much less than that of the complement class generation at a class level, which achieves a preliminary accuracy of 72.12%. This is due to fewer model conflicts occurring in the latter approach, which shows the base models were stronger during the decision-making process. As for the final accuracies produced after the conflict resolution techniques had been applied, the class level approach outperformed the model level approach in all four cases. Overall, the best HAR performance of 80.39% was achieved using complement data generated at a class level in conjunction with the conflict resolution approach presented in Algorithm 3, i.e., resolving conflicts through calculating the difference between the highest and second highest predictions per conflicting model, where the final decision is given to the model with the highest differential value.

**Figure 6.** Human Activity Recognition (HAR) performance per conflict resolution approach.

Table 9 presents an analysis of incorrectly classified instances with regards to the first data distribution approach, where complement class data was generated at a model level, as discussed previously in Section 4.2.1, whereas Table 10 presents an analysis of incorrectly classified instances with regards to the second data distribution approach, where complement class data was generated at a class level, as discussed previously in Section 4.2.2. The "incorrect" instances reported describe those that were incorrectly classified by the target model; for example, there may not have been any conflicting models, yet the incorrect class was chosen by the base classifier. The number of incorrectly classified instances is important to consider when analyzing the effectiveness of each conflict resolution approach, as these cases would permanently be incorrect, regardless of the application of conflict resolution techniques.

The "right but incorrect" cases are those that were correctly classified by the target base model, although they were not chosen during the final decision-making process after applying the conflict resolution approaches. These cases are considered when evaluating the most effective approach of the four explored, as they could have resulted in a correct classification given the application of an effective conflict resolution technique.
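Before turning to the tables, note that both error categories can be tallied mechanically from per-instance records. The sketch below is our reconstruction under an assumed record layout (the keys `true`, `final`, and `candidates` are our names), not the authors' evaluation code.

```python
def tally(records):
    """Count 'incorrect' vs. 'right but incorrect' final errors.

    records: list of dicts with keys 'true' (ground-truth label),
    'final' (ensemble decision), and 'candidates' (top class of every
    model involved in the decision for that instance)."""
    incorrect = right_but_incorrect = 0
    for r in records:
        if r["final"] == r["true"]:
            continue                       # correctly resolved
        if r["true"] in r["candidates"]:
            right_but_incorrect += 1       # a right model lost the decision
        else:
            incorrect += 1                 # no model in play was right
    return incorrect, right_but_incorrect

# Tiny illustrative fold with three decisions:
fold = [
    {"true": "Act24", "final": "Act24", "candidates": ["Act24", "ActN4"]},
    {"true": "Act23", "final": "ActN4", "candidates": ["ActN4", "Act23"]},
    {"true": "Act15", "final": "Act17", "candidates": ["Act17", "ActN3"]},
]
print(tally(fold))   # (1, 1): one incorrect, one right-but-incorrect
```

Only the "right but incorrect" count can be driven down by a better resolution rule, which is why it is the discriminating quantity in the analysis that follows.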
**Table 9.** Ensemble approach 1—analysis of incorrect instances.

| Approach | Case | Fold 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | Avg. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Algorithm 2 | Incorrect | 22 | 22 | 21 | 29 | 29 | 20 | 30 | 22 | 20 | 22 | 23.7 |
| Algorithm 2 | Right but incorrect | 17 | 18 | 21 | 12 | 17 | 16 | 9 | 14 | 20 | 20 | 16.4 |
| Algorithm 3 | Incorrect | 23 | 22 | 21 | 29 | 29 | 22 | 29 | 22 | 20 | 24 | 24.1 |
| Algorithm 3 | Right but incorrect | 10 | 14 | 10 | 9 | 12 | 12 | 9 | 12 | 14 | 11 | 11.3 |
| Algorithm 4 | Incorrect | 22 | 23 | 21 | 29 | 29 | 22 | 29 | 22 | 20 | 22 | 23.9 |
| Algorithm 4 | Right but incorrect | 31 | 22 | 13 | 23 | 11 | 15 | 23 | 18 | 10 | 21 | 18.7 |
| Algorithm 5 | Incorrect | 22 | 22 | 21 | 29 | 29 | 22 | 29 | 22 | 20 | 22 | 23.8 |
| Algorithm 5 | Right but incorrect | 14 | 10 | 13 | 7 | 13 | 15 | 9 | 17 | 14 | 11 | 12.3 |
**Table 10.** Ensemble approach 2—analysis of incorrect instances.

| Approach | Case | Fold 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | Avg. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Algorithm 2 | Incorrect | 33 | 26 | 35 | 33 | 25 | 32 | 27 | 28 | 40 | 26 | 30.5 |
| Algorithm 2 | Right but incorrect | 6 | 9 | 2 | 6 | 4 | 11 | 8 | 10 | 2 | 8 | 6.6 |
| Algorithm 3 | Incorrect | 33 | 26 | 35 | 33 | 25 | 31 | 27 | 28 | 40 | 26 | 30.4 |
| Algorithm 3 | Right but incorrect | 5 | 7 | 3 | 2 | 6 | 7 | 6 | 5 | 0 | 6 | 4.7 |
| Algorithm 4 | Incorrect | 33 | 26 | 35 | 33 | 25 | 31 | 27 | 28 | 40 | 25 | 30.3 |
| Algorithm 4 | Right but incorrect | 8 | 21 | 4 | 3 | 6 | 6 | 11 | 7 | 1 | 5 | 7.2 |
| Algorithm 5 | Incorrect | 33 | 26 | 34 | 33 | 25 | 31 | 27 | 28 | 40 | 25 | 30.2 |
| Algorithm 5 | Right but incorrect | 8 | 8 | 5 | 2 | 6 | 5 | 6 | 7 | 1 | 5 | 5.3 |

The conflict resolution approach presented in Algorithm 3 was the most effective when applied to both data distributions, as it produced the lowest number of "right but incorrect" instances (on average 11.3 and 4.7, respectively), closely followed by the approach in Algorithm 5. A lower number of "right but incorrect" cases helps to determine which conflict resolution approach is most effective in deciding which base model should be awarded the final class decision. For example, consider the conflict resolution technique in Algorithm 3 with ensemble approach 2, as presented in Table 10. There were 23.3 conflicts occurring on average (refer to Table 8). Upon analysis of the incorrectly classified instances, 30.4 on average were incorrectly classified, whereas 4.7 on average could have been correctly classified, though an incorrect base model won the final decision after applying conflict resolution. This means that, as a result of applying Algorithm 3, an average of 18.6 conflicting cases were correctly resolved, improving the final HAR performance. As shown in Figure 6, the best HAR performance of 80.39% was achieved using complement data generated at a class level in conjunction with the conflict resolution approach presented in Algorithm 3.

Given the non-parametric nature of the neural networks, two non-parametric benchmark classifiers were chosen to evaluate the proposed ensemble approach, namely Support Vector Machine (SVM) and Nearest Neighbour (kNN) classifiers. The multiclass SVM classifier was an error-correcting output codes (ECOC) model required for multiclass learning, consisting of multiple binary learners. Figure 7 presents the performance of our ensemble approach in comparison to the chosen non-parametric benchmark classifiers. The kNN model achieved an accuracy of 70.95%, whereas the SVM model achieved 76.54%, thus demonstrating that the proposed ensemble approach (80.39%) outperformed both benchmark classifiers.

**Figure 7.** HAR performance of the proposed ensemble Neural Network (NN) approach compared to Nearest Neighbour (kNN) and Support Vector Machine (SVM) classifiers.
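For readers who want to reproduce a comparison of this kind, the sketch below sets up the two benchmarks with scikit-learn under a 10-fold protocol. It is a generic illustration, not the authors' setup: the exact ECOC configuration, SVM kernel, and value of k for kNN are not specified in the paper, so scikit-learn defaults are assumed (the ECOC model is approximated with `OutputCodeClassifier`), and `X`/`y` must be prepared from the 31-feature representation described above.

```python
from sklearn.model_selection import cross_val_score
from sklearn.multiclass import OutputCodeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

def benchmark(X, y, folds=10):
    """X: (n_samples, 31) feature matrix; y: activity labels."""
    models = {
        # k is not stated in the paper; scikit-learn's default is assumed.
        "kNN": KNeighborsClassifier(),
        # ECOC multiclass SVM built from multiple binary SVC learners.
        "ECOC SVM": OutputCodeClassifier(SVC(), random_state=0),
    }
    for name, clf in models.items():
        scores = cross_val_score(clf, X, y, cv=folds)
        print(f"{name}: mean accuracy = {scores.mean():.2%}")
```

Calling `benchmark(X, y)` prints a mean 10-fold accuracy per baseline, directly comparable to the ensemble figure reported in Figure 7.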
**6. Conclusions**

In this work, we focused on data-driven approaches to HAR and addressed the current challenges of their application to openly available datasets. We proposed an ensemble approach to recognize ADLs within a smart environment setting, with particular emphasis on exploring various approaches to resolving conflicts that occur between base models in ensemble classifiers and analyzing the effects of the various data distributions that generate the complement class per base model. It was observed that distributing data at a class level greatly reduces the number of conflicts that occur between the base models, leading to increased preliminary performance before the application of conflict resolution techniques. It was also found that the best method of resolving conflicts, in comparison to the other approaches explored, is to award the final decision to the model with the highest differential value between the highest and second highest predictions per conflicting model.

We evaluated our proposed HAR classification model, the ensemble NN method, by comparing the achieved HAR performance with two non-parametric benchmark classifiers. The ensemble NN method outperformed both benchmark models, demonstrating the effectiveness of the proposed ensemble approach.

This work is limited in that feature selection techniques were not applied to determine an optimal subset of input features. According to [84], feature selection is an increasingly significant consideration in machine learning, with the primary aim of its application being to reduce the dimensionality of large, multi-dimensional datasets. Thus, future work would involve the application of feature selection techniques to determine the optimal subset of features required for the classification problem. Additionally, this work is limited in that the proposed approach was evaluated on one HAR dataset; therefore, future work would involve evaluating the methods on another dataset so that the results are not subjective to only the current dataset.

**Author Contributions:** Conceptualization, N.I., C.N., S.Z., H.W., and W.W.Y.N.; methodology, N.I., C.N., S.Z., and W.W.Y.N.; software, N.I.; validation, N.I., C.N., S.Z., and W.W.Y.N.; formal analysis, N.I., C.N., S.Z., and W.W.Y.N.; investigation, N.I.; resources, N.I., C.N., S.Z., H.W., and W.W.Y.N.; data curation, N.I.; writing-original draft preparation, N.I.; writing-review and editing, N.I., C.N., S.Z., and W.W.Y.N.; visualization, N.I., C.N., S.Z., and W.W.Y.N.; supervision, C.N., S.Z., H.W., and W.W.Y.N.; project administration, N.I., C.N., S.Z., H.W., and W.W.Y.N.; funding acquisition. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was supported through a Northern Ireland Department for the Economy (DfE) PhD scholarship. The APC was funded through the DfE PhD scholarship.

**Conflicts of Interest:** The authors declare no conflict of interest.

**References**

1. Ranasinghe, S.; al Machot, F.; Mayr, H.C. A review on applications of activity recognition systems with regard to performance and evaluation. Int. J. Distrib. Sens. Netw. 2016, 12, 1550147716665520.
2. Zhao, W.; Yan, L.; Zhang, Y. Geometric-constrained multi-view image matching method based on semi-global optimization. Geo-Spat. Inf. Sci. 2018, 21, 115–126.
3. Abdallah, Z.S.; Gaber, M.M.; Srinivasan, B.; Krishnaswamy, S. Adaptive mobile activity recognition system with evolving data streams. Neurocomputing 2015, 150, 304–317.
4. Bakli, M.S.; Sakr, M.A.; Soliman, T.H.A. A spatiotemporal algebra in Hadoop for moving objects. Geo-Spat. Inf. Sci. 2018, 21, 102–114.
5. Awad, M.M. Forest mapping: A comparison between hyperspectral and multispectral images and technologies. J. For. Res. 2018, 29, 1395–1405.
6. Jalal, A.; Quaid, M.A.K.; Hasan, A.S. Wearable sensor-based human behavior understanding and recognition in daily life for smart environments. In Proceedings of the 2018 International Conference on Frontiers of Information Technology (FIT), Islamabad, Pakistan, 17–19 December 2018; pp. 105–110.
7. Lee, Y.; Choi, T.J.; Ahn, C.W. Multi-objective evolutionary approach to select security solutions. CAAI Trans.
Intell. Technol. 2017, 2, 64–67.
8. Cook, D.J.; Crandall, A.S.; Thomas, B.L.; Krishnan, N.C. CASAS: A Smart Home in a Box. Computer 2013, 46, 62–69.
9. Helal, S.; Chen, C. The Gator Tech Smart House: Enabling Technologies and Lessons Learned. In Proceedings of the 3rd International Convention on Rehabilitation Engineering & Assistive Technology, Singapore, 22–26 April 2009.
10. Cook, D.J.; Youngblood, M.; Heierman, E.O.; Gopalratnam, K.; Rao, S.; Litvin, A.; Khawaja, F. MavHome: An Agent-Based Smart Home. In Proceedings of the First IEEE International Conference on Pervasive Computing and Communications (PerCom 2003), Fort Worth, TX, USA, 26 March 2003; pp. 521–524.
11. The DOMUS Laboratory. Available online: http://domuslab.fr (accessed on 8 November 2019).
12. The Aware Home. Available online: http://awarehome.imtc.gatech.edu (accessed on 8 November 2019).
13. Krishnan, N.C.; Cook, D.J. Activity Recognition on Streaming Sensor Data. Pervasive Mob. Comput. 2014, 10, 138–154.
14. Kamal, S.; Jalal, A.; Kim, D. Depth images-based human detection, tracking and activity recognition using spatiotemporal features and modified HMM. J. Electr. Eng. Technol. 2016, 11, 1857–1862.
15. Buys, K.; Cagniart, C.; Baksheev, A.; de Laet, T.; de Schutter, J.; Pantofaru, C. An adaptable system for RGB-D based human body detection and pose estimation. J. Vis. Commun. Image Represent. 2014, 25, 39–52.
16. Chen, L.; Hoey, J.; Nugent, C.D.; Cook, D.J.; Yu, Z. Sensor-Based Activity Recognition. IEEE Trans. Syst. Man Cybern. Part C 2012, 42, 790–808.
17. Azkune, G.; Almeida, A.; López-de-Ipiña, D.; Chen, L. Extending Knowledge-Driven Activity Models through Data-Driven Learning Techniques. Expert Syst. Appl. 2015, 42, 3115–3128.
18. Cleland, I.; Donnelly, M.P.; Nugent, C.D.; Hallberg, J.; Espinilla, M. Collection of a Diverse, Naturalistic and Annotated Dataset for Wearable Activity Recognition. In Proceedings of the 2018 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops), Athens, Greece, 19–23 March 2018.
19. Akhand, M.A.H.; Murase, K. Neural Networks Ensembles: Existing Methods and New Techniques; LAP LAMBERT Academic Publishing: Khulna, Bangladesh, 2010.
20. Sharkey, A.J.C. Combining Artificial Neural Nets; Springer: London, UK, 1999.
21. Aggarwal, J.K.; Xia, L.; Ann, O.C.; Theng, L.B. Human activity recognition: A review. In Proceedings of the 2014 IEEE International Conference on Control System, Computing and Engineering (ICCSCE 2014), Batu Ferringhi, Malaysia, 28–30 November 2014; pp. 389–393.
22. Hegde, N.; Bries, M.; Swibas, T.; Melanson, E.; Sazonov, E. Automatic Recognition of Activities of Daily Living utilizing Insole-Based and Wrist-Worn Wearable Sensors. IEEE J. Biomed. Health Inform. 2017, 22, 979–988.
23. Liu, J.; Sohn, J.; Kim, S. Classification of Daily Activities for the Elderly Using Wearable Sensors. J. Healthc. Eng. 2017, 2017, 8934816.
24. Pirsiavash, H.; Ramanan, D. Detecting Activities of Daily Living in First-Person Camera Views. In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 16–21 June 2012; pp. 2847–2854.
25. Roy, N.; Misra, A.; Cook, D. Ambient and Smartphone Sensor Assisted ADL Recognition in Multi-Inhabitant Smart Environments. J. Ambient Intell. Humaniz. Comput. 2016, 7, 1–19.
26. Moriya, K.; Nakagawa, E.; Fujimoto, M.; Suwa, H.; Arakawa, Y.; Kimura, A.; Miki, S.; Yasumoto, K. Daily Living Activity Recognition with ECHONET Lite Appliances and Motion Sensors. In Proceedings of the 2017 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops), Kona, HI, USA, 13–17 March 2017.
27. Gochoo, M.; Tan, T.; Huang, S. DCNN-Based Elderly Activity Recognition Using Binary Sensors. In Proceedings of the 2017 International Conference on Electrical and Computing Technologies and Applications (ICECTA), Ras Al Khaimah, United Arab Emirates, 21–23 November 2017.
28. Singh, D.; Merdivan, E.; Hanke, S. Convolutional and Recurrent Neural Networks for Activity Recognition in Smart Environment. In Towards Integrative Machine Learning and Knowledge Extraction; Springer: Cham, Switzerland, 2017; pp. 194–209.
29. Cook, D.J.; Krishnan, N.C. Activity Learning: Discovering, Recognizing and Predicting Human Behavior from Sensor Data, 1st ed.; Wiley: Hoboken, NJ, USA, 2015.
30. Mannini, A.; Rosenberger, M.; Haskell, W.L.; Sabatini, A.M.; Intille, S.S. Activity Recognition in Youth Using Single Accelerometer Placed at Wrist or Ankle. Med. Sci. Sports Exerc. 2017, 49, 801–812.
31. Huang, Q.; Yang, J.; Qiao, Y. Person re-identification across multi-camera system based on local descriptors. In Proceedings of the 2012 Sixth International Conference on Distributed Smart Cameras (ICDSC), Hong Kong, China, 30 October–2 November 2012; pp. 1–6.
32. Farooq, A.; Jalal, A.; Kamal, S. Dense RGB-D map-based human tracking and activity recognition using skin joints features and self-organizing map. KSII Trans. Internet Inf. Syst. 2015, 9, 1856–1869.
33. Kamal, S.; Jalal, A. A Hybrid Feature Extraction Approach for Human Detection, Tracking and Activity Recognition Using Depth Sensors. Arab. J. Sci. Eng. 2016, 41, 1043–1051.
34. Böttcher, S.; Scholl, P.M.; van Laerhoven, K. Detecting Transitions in Manual Tasks from Wearables: An Unsupervised Labeling Approach. Informatics 2018, 5, 16.
35. Trost, S.G.; Wong, W.-K.; Pfeiffer, K.A.; Zheng, Y. Artificial Neural Networks to Predict Activity Type and Energy Expenditure in Youth. Med. Sci. Sports Exerc. 2012, 44, 1801–1809.
36. Lara, O.D.; Labrador, M.A. A Survey on Human Activity Recognition using Wearable Sensors. IEEE Commun. Surv. Tutor. 2013, 15, 1192–1209.
37. Synnott, J.; Nugent, C.; Zhang, S.; Calzada, A.; Cleland, I.; Espinilla, M.; Quero, J.M.; Lundstrom, J. Environment Simulation for the Promotion of the Open Data Initiative. In Proceedings of the 2016 IEEE International Conference on Smart Computing (SMARTCOMP), St. Louis, MO, USA, 18–20 May 2016.
38. Oniga, S.; József, S. Optimal Recognition Method of Human Activities Using Artificial Neural Networks. Meas. Sci. Rev. 2015, 15, 323–327.
39. Greengard, S. GPUs Reshape Computing. Commun. ACM 2016, 59, 14–16.
40. Suto, J.; Oniga, S. Efficiency investigation from shallow to deep neural network techniques in human activity recognition. Cogn. Syst. Res. 2019, 54, 37–49.
41. Anguita, D.; Ghio, A.; Oneto, L.; Parra, X.; Reyes-Ortiz, J.L. A Public Domain Dataset for Human Activity Recognition Using Smartphones. Eur. Symp. Artif. Neural Netw. 2013, 437–442.
42. Rooney, N.; Patterson, D.; Nugent, C. Ensemble Learning for Regression. In Encyclopedia of Data Warehousing and Mining; Information Science Reference: New York, NY, USA, 2010; Volume 2, pp. 777–782.
43. Mendes-Moreira, J.; Soares, C.; Jorge, A.M.; de Sousa, J.F. Ensemble Approaches for Regression: A Survey. ACM Comput. Surv. 2012, 45, 10.
44. Fatima, I.; Fahim, M.; Lee, Y.-K.; Lee, S. Classifier ensemble optimization for human activity recognition in smart homes. In Proceedings of the 7th International Conference on Ubiquitous Information Management and Communication, Kota Kinabalu, Malaysia, 17–19 January 2013; pp. 1–7.
45. Ni, Q.; Zhang, L.; Li, L. A Heterogeneous Ensemble Approach for Activity Recognition with Integration of Change Point-Based Data Segmentation. Appl. Sci. 2018, 8, 1695.
46. Feng, Z.; Mo, L.; Li, M. A Random Forest-based ensemble method for activity recognition. In Proceedings of the 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Milan, Italy, 25–29 August 2015; pp. 5074–5077.
47. Kim, Y.J.; Kang, B.N.; Kim, D. Hidden Markov Model Ensemble for Activity Recognition Using Tri-Axis Accelerometer. In Proceedings of the 2015 IEEE International Conference on Systems, Man, and Cybernetics, Kowloon, China, 9–12 October 2015; pp. 3036–3041.
48. Sagha, H.; Bayati, H.; Millán, J.D.R.; Chavarriaga, R. On-line anomaly detection and resilience in classifier ensembles. Pattern Recognit. Lett. 2013, 34, 1916–1927.
49. A Genetic Algorithm-based Classifier Ensemble Optimization for Activity Recognition in Smart Homes. KSII Trans. Internet Inf. Syst. 2013, 7, 2853–2873.
50. Min, J.-K.; Cho, S.-B. Activity recognition based on wearable sensors using selection/fusion hybrid ensemble. In Proceedings of the 2011 IEEE International Conference on Systems, Man, and Cybernetics, Anchorage, AK, USA, 9–12 October 2011; pp. 1319–1324.
51. Diep, N.N.; Pham, C.; Phuong, T.M. Motion Primitive Forests for Human Activity Recognition Using Wearable Sensors. In Pacific Rim International Conference on Artificial Intelligence; Springer: Cham, Switzerland, 2016; pp. 340–353.
52. Ijjina, E.P.; Mohan, C.K. Hybrid deep neural network model for human action recognition. Appl. Soft Comput. 2016, 46, 936–952.
53. Hwang, I.; Park, H.M.; Chang, J.H. Ensemble of deep neural networks using acoustic environment classification for statistical model-based voice activity detection. Comput. Speech Lang. 2016, 38, 1–12.
54. Guan, Y.; Ploetz, T. Ensembles of Deep LSTM Learners for Activity Recognition using Wearables. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 2017, 1, 11.
55. Nweke, H.F.; Teh, Y.W.; Mujtaba, G.; Al-garadi, M.A. Data fusion and multiple classifier systems for human activity detection and health monitoring. Inf. Fusion 2019, 46, 147–170.
56. Woźniak, M.; Graña, M.; Corchado, E. A survey of multiple classifier systems as hybrid systems. Inf. Fusion 2014, 16, 3–17.
57. Díez-Pastor, J.F.; Rodríguez, J.J.; García-Osorio, C.; Kuncheva, L.I. Random Balance: Ensembles of variable priors classifiers for imbalanced data. Knowl.-Based Syst. 2015, 85, 96–111.
58. Feng, W.; Huang, W.; Ren, J. Class imbalance ensemble learning based on the margin theory. Appl. Sci. 2018, 8, 815.
59. Ahmed, S.; Mahbub, A.; Rayhan, F.; Jani, R.; Shatabda, S.; Farid, D.M. Hybrid Methods for Class Imbalance Learning Employing Bagging with Sampling Techniques. In Proceedings of the 2017 2nd International Conference on Computational Systems and Information Technology for Sustainable Solution (CSITSS), Bangalore, India, 21–23 December 2017; pp. 1–5.
60. Farooq, M.; Sazonov, E. Detection of chewing from piezoelectric film sensor signals using ensemble classifiers. In Proceedings of the 2016 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Orlando, FL, USA, 16–20 August 2016; pp. 4929–4932.
61. Mohammadi, E.; Wu, Q.M.J.; Saif, M. Human activity recognition using an ensemble of support vector machines. In Proceedings of the 2016 International Conference on High Performance Computing & Simulation (HPCS), Innsbruck, Austria, 18–22 July 2016; pp. 549–554.
62. Bayat, A.; Pomplun, M.; Tran, D.A. A study on human activity recognition using accelerometer data from smartphones. Procedia Comput. Sci. 2014, 34, 450–457.
63. Zappi, P.; Stiefmeier, T.; Farella, E.; Roggen, D.; Benini, L.; Tröster, G. Activity recognition from on-body sensors by classifier fusion: Sensor scalability and robustness. In Proceedings of the 2007 3rd International Conference on Intelligent Sensors, Sensor Networks and Information, Melbourne, QLD, Australia, 3–6 December 2007; pp. 281–286.
64. Krawczyk, B.; Woźniak, M. Untrained weighted classifier combination with embedded ensemble pruning. Neurocomputing 2016, 196, 14–22.
65. Roh, Y.; Heo, G.; Whang, S.E. A Survey on Data Collection for Machine Learning: A Big Data-AI Integration Perspective. IEEE Trans. Knowl. Data Eng. 2019, 1, 1–19.
66. Gorunescu, F. Data Mining: Concepts, Models and Techniques, 1st ed.; Springer: Berlin, Germany, 2011.
67. Kantardzic, M. Data Mining: Concepts, Models, Methods and Algorithms, 2nd ed.; Wiley-IEEE Press: Hoboken, NJ, USA, 2002.
68. Maimon, O.; Rokach, L. Data Mining and Knowledge Discovery Handbook, 1st ed.; Springer: Boston, MA, USA, 2005.
69. Han, J.; Kamber, M.; Pei, J. Data Mining: Concepts and Techniques, 3rd ed.; Elsevier Inc.: Waltham, MA, USA, 2012.
70. Zhu, X.; Wu, X. Class Noise vs. Attribute Noise: A Quantitative Study of Their Impacts. Artif. Intell. Rev. 2004, 22, 177–210.
71. Folleco, A.A.; Khoshgoftaar, T.M.; van Hulse, J.; Napolitano, A. Identifying Learners Robust to Low Quality Data. Informatica 2009, 33, 245–259.
72. Espinilla, M.; Medina, J.; Nugent, C. UCAmI Cup. Analyzing the UJA Human Activity Recognition Dataset of Activities of Daily Living. MDPI Proc. UCAmI 2018, 2, 1267.
73. Karvonen, N.; Kleyko, D. A Domain Knowledge-Based Solution for Human Activity Recognition: The UJA Dataset Analysis. MDPI Proc. UCAmI 2018, 2, 1261.
74. Lago, P.; Inoue, S. A Hybrid Model Using Hidden Markov Chain and Logic Model for Daily Living Activity Recognition. MDPI Proc. UCAmI 2018, 2, 1266.
75. Seco, F.; Jiménez, A.R. Event-Driven Real-Time Location-Aware Activity Recognition in AAL Scenarios. MDPI Proc. UCAmI 2018, 2, 1240.
76. Cerón, J.D.; López, D.M.; Eskofier, B.M. Human Activity Recognition Using Binary Sensors, BLE Beacons, an Intelligent Floor and Acceleration Data: A Machine Learning Approach. MDPI Proc. UCAmI 2018, 2, 1265.
77. Ding, D.; Cooper, R.A.; Pasquina, P.F.; Fici-Pasquina, L. Sensor technology for smart homes. Maturitas 2011, 69, 131–136.
78. Amiribesheli, M.; Benmansour, A.; Bouchachia, A. A review of smart homes in healthcare. J. Ambient Intell. Humaniz. Comput. 2015, 6, 495–517.
79. Jain, N.; Srivastava, V. Data Mining Techniques: A Survey Paper. Int. J. Res. Eng. Technol. 2013, 2, 116–119.
80. Bulling, A.; Blanke, U.; Schiele, B. A Tutorial on Human Activity Recognition using Body-Worn Inertial Sensors. ACM Comput. Surv. 2014, 46, 33.
81. Banos, O.; Galvez, J.-M.; Damas, M.; Pomares, H.; Rojas, I. Window Size Impact in Human Activity Recognition. Sensors 2014, 14, 6474–6499.
82. Mohandes, M.; Deriche, M.; Aliyu, S.O. Classifiers Combination Techniques: A Comprehensive Review. IEEE Access 2018, 6, 19626–19639.
83. Yijing, L.; Haixiang, G.; Xiao, L.; Yanan, L.; Jinling, L. Adapted ensemble classification algorithm based on multiple classifier system and feature selection for classifying multi-class imbalanced data. Knowl.-Based Syst. 2016, 94, 88–104.
84. Suto, J.; Oniga, S.; Sitar, P.P. Comparison of wrapper and filter feature selection algorithms on human activity recognition. In Proceedings of the 2016 6th International Conference on Computers Communications and Control (ICCCC), Oradea, Romania, 10–14 May 2016; pp. 124–129.

© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
{ "disclaimer": "Notice: Paper or abstract available at https://pmc.ncbi.nlm.nih.gov/articles/PMC6982871, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "CCBY", "status": "GOLD", "url": "https://www.mdpi.com/1424-8220/20/1/216/pdf" }
2019
[ "JournalArticle" ]
true
2019-12-30T00:00:00
[ { "paperId": "8772b78e1ff9a3b6db46492ae58af3108a0d5b47", "title": "Efficiency investigation from shallow to deep neural network techniques in human activity recognition" }, { "paperId": "800cd71393a568128e3797b832f3005a664e662f", "title": "Data fusion and multiple classifier systems for human activity detection and health monitoring: Review and open research directions" }, { "paperId": "db049db27febe25c8af0007ef98ea26cba24bd66", "title": "Wearable Sensor-Based Human Behavior Understanding and Recognition in Daily Life for Smart Environments" }, { "paperId": "3a83d8595e6727269c876fcebd23ee9ddd524b76", "title": "A Survey on Data Collection for Machine Learning: A Big Data - AI Integration Perspective" }, { "paperId": "badee62939c4c87f8ae4bb93012418f634195a99", "title": "Event-Driven Real-Time Location-Aware Activity Recognition in AAL Scenarios" }, { "paperId": "aa42bd924240622f176eff4fe0ce4990c2c55163", "title": "UCAmI Cup. Analyzing the UJA Human Activity Recognition Dataset of Activities of Daily Living" }, { "paperId": "09b563b54a53efe8b564621332df4a4b8f22190f", "title": "A Hybrid Model Using Hidden Markov Chain and Logic Model for Daily Living Activity Recognition" }, { "paperId": "558bce500a47dd94d46c9494b52423d7d4c8b835", "title": "A Domain Knowledge-Based Solution for Human Activity Recognition: The UJA Dataset Analysis" }, { "paperId": "81d3ea8a18a23207a000a9ef9eb31d02ee6f8a4a", "title": "Human Activity Recognition Using Binary Sensors, BLE Beacons, an Intelligent Floor and Acceleration Data: A Machine Learning Approach" }, { "paperId": "745364feecfc3429484a51f9d121b8c8c4225f18", "title": "A Heterogeneous Ensemble Approach for Activity Recognition with Integration of Change Point-Based Data Segmentation" }, { "paperId": "8b085c3215389c4107850eac223c3e3ad253901d", "title": "Automatic Recognition of Activities of Daily Living Utilizing Insole-Based and Wrist-Worn Wearable Sensors" }, { "paperId": "e29d22c32f5c89a5c425c916a1157009aad52521", "title": "Class imbalance ensemble learning based on the margin theory" }, { "paperId": "dcb71a8f3f7e3caa905ab9568973043f3a385aee", "title": "Detecting Transitions in Manual Tasks from Wearables: An Unsupervised Labeling Approach" }, { "paperId": "c6fd6422039178a0abaec8156f64e9b61a54ecfe", "title": "Collection of a Diverse, Realistic and Annotated Dataset for Wearable Activity Recognition" }, { "paperId": "9b120751fd878fbf19040a43d04281cc1c17d0f4", "title": "Geometric-constrained multi-view image matching method based on semi-global optimization" }, { "paperId": "a91545462f2f425161f93f41389e9a1036fbda51", "title": "A spatiotemporal algebra in Hadoop for moving objects" }, { "paperId": "ede39f75804f815d9b7cd2b98d751ebd388f6c18", "title": "Hybrid Methods for Class Imbalance Learning Employing Bagging with Sampling Techniques" }, { "paperId": "f45854a32044148793280fe08590746096b625d3", "title": "Classification of Daily Activities for the Elderly Using Wearable Sensors" }, { "paperId": "9ecca1856f7858ecc58de632352c30061a074e88", "title": "Forest mapping: a comparison between hyperspectral and multispectral images and technologies" }, { "paperId": "45296e2b4e2bf98eb60953091d0906619a42fd72", "title": "DCNN-based elderly activity recognition using binary sensors" }, { "paperId": "5a954a4817848c5b0a921b199a53131fb9ab07cd", "title": "Multi-objective evolutionary approach to select security solutions" }, { "paperId": "04f79497dfb0ee31f0376e465749bf176a8d0ccf", "title": "Activity Recognition in Youth Using Single Accelerometer Placed at Wrist or Ankle" }, { 
"paperId": "4b8446ea306d7d3f8218a8850158b0ae084a988e", "title": "Ensembles of Deep LSTM Learners for Activity Recognition using Wearables" }, { "paperId": "b65f5acd18f62cbbd0d43f53b195e7321e32685e", "title": "Daily living activity recognition with ECHONET Lite appliances and motion sensors" }, { "paperId": "9ef98f87a6b1bd7e6c006f056f321f9d291d4c45", "title": "Depth Images-based Human Detection, Tracking and Activity Recognition Using Spatiotemporal Features and Modified HMM" }, { "paperId": "012037e6bfef86f140f56eb97e453fcb1ff5cca1", "title": "Hybrid deep neural network model for human action recognition" }, { "paperId": "339410e5d7e41b3477472f7e27dc46ce950939cb", "title": "GPUs reshape computing" }, { "paperId": "86100cac4647f6cb190af5373b8463b177df0e7c", "title": "Motion Primitive Forests for Human Activity Recognition Using Wearable Sensors" }, { "paperId": "de21627c01e8324aaddd6a3a303be7e8fdcb43da", "title": "Detection of chewing from piezoelectric film sensor signals using ensemble classifiers" }, { "paperId": "f559c8f003f2f336e6cca50fffaf724c4bfc7ea7", "title": "A review on applications of activity recognition systems with regard to performance and evaluation" }, { "paperId": "3bd9e2b35052f43ac25d9c27a30e388d9a4a56f3", "title": "Untrained weighted classifier combination with embedded ensemble pruning" }, { "paperId": "fd59460de812c8b250a5756e54c5152b77dd74df", "title": "Ensemble of deep neural networks using acoustic environment classification for statistical model-based voice activity detection" }, { "paperId": "e0ab926cd48a47a8c7b16e27583421141f71f6df", "title": "Human activity recognition using an ensemble of support vector machines" }, { "paperId": "327e601151bb52868e2e353a5f163aa849dd59c8", "title": "Environment Simulation for the Promotion of the Open Data Initiative" }, { "paperId": "5cb4199d013f7ee92bae3cfe65a2170ea5b067ec", "title": "Comparison of wrapper and filter feature selection algorithms on human activity recognition" }, { "paperId": "24d633be7c2068a12e78a78865d4a5b19555cb1d", "title": "Ambient and smartphone sensor assisted ADL recognition in multi-inhabitant smart environments" }, { "paperId": "76a368170db29ef70d34794a91eb18b0bc9cc130", "title": "Optimal Recognition Method of Human Activities Using Artificial Neural Networks" }, { "paperId": "126a413f410146470217456b3becbbd114623c60", "title": "A Hybrid Feature Extraction Approach for Human Detection, Tracking and Activity Recognition Using Depth Sensors" }, { "paperId": "c6cd47443f9e8d523387594935d56827c50e9e26", "title": "A Random Forest-based ensemble method for activity recognition" }, { "paperId": "fd5ec6492b4b25db921ae665f730bbfce27d869c", "title": "Hidden Markov Model Ensemble for Activity Recognition Using Tri-Axis Accelerometer" }, { "paperId": "9d8429a413cd1c4ec8cf1024b5fbec79de91ec57", "title": "Random Balance: Ensembles of variable priors classifiers for imbalanced data" }, { "paperId": "5984388a90b1a675b837703c67880c9a7e28b8bd", "title": "Dense RGB-D Map-Based Human Tracking and Activity Recognition using Skin Joints Features and Self-Organizing Map" }, { "paperId": "5591b632e78bd12791872f32a1a013ab7ded4195", "title": "Extending knowledge-driven activity models through data-driven learning techniques" }, { "paperId": "3fce056a3bed699a27f902116468fce921975d6c", "title": "A review of smart homes in healthcare" }, { "paperId": "99e601284696121e47f160cfa8464faaef609a28", "title": "Adaptive mobile activity recognition system with evolving data streams" }, { "paperId": 
"cb264f6ab37ac664389344d88bc2b8451005485d", "title": "Activity Learning: Discovering, Recognizing, and Predicting Human Behavior from Sensor Data" }, { "paperId": "e3a48e6b95cd0511a6c2c31ce98a8c86e61f25b5", "title": "Human activity recognition: A review" }, { "paperId": "615a72aadb94a994f9d49548c0dffbeed1631ba2", "title": "Window Size Impact in Human Activity Recognition" }, { "paperId": "8acdf4a1e88b443b917d80e97850af7bbb1e3534", "title": "A survey of multiple classifier systems as hybrid systems" }, { "paperId": "fc79c74063d5183cfbc0994be7b2dcab26bed937", "title": "Activity recognition on streaming sensor data" }, { "paperId": "aea9e985c1d20bd591ca4bceae41d662f06fa174", "title": "A Genetic Algorithm-based Classifier Ensemble Optimization for Activity Recognition in Smart Homes" }, { "paperId": "fefa1bf3fca6c1ce08ecbd63e4604815c11038a0", "title": "DATA MINING TECHNIQUES: A SURVEY PAPER" }, { "paperId": "27c1d65b998cd3aeee052c364c704ff6018bc24d", "title": "On-line anomaly detection and resilience in classifier ensembles" }, { "paperId": "0cf5e93a4fd936672d9763ee577224fcbfa06980", "title": "CASAS: A Smart Home in a Box" }, { "paperId": "8d3041129b500b90521c7d768996fc2de11b0e47", "title": "A Survey on Human Activity Recognition using Wearable Sensors" }, { "paperId": "9cf59101b32f7550312f676de7588df3abf6f2a9", "title": "Classifier ensemble optimization for human activity recognition in smart homes" }, { "paperId": "9e605d72ab39f7a37a13e912556ee034c0d06133", "title": "Sensor-Based Activity Recognition" }, { "paperId": "e30a47911cfb98934dc640eb669b9351827bd37b", "title": "Ensemble approaches for regression: A survey" }, { "paperId": "a19a5a604e07c525f9a8969c05f1c9ed5b497053", "title": "Person re-identification across multi-camera system based on local descriptors" }, { "paperId": "fb5c28ec125db520e7f3ef9a0e4be7831d9316f6", "title": "Artificial neural networks to predict activity type and energy expenditure in youth." }, { "paperId": "9e81caf9dd31b893ebbee3970c312619b7eac7bf", "title": "Detecting activities of daily living in first-person camera views" }, { "paperId": "f4e0ab55f1571a31263a511a92aec8691fc60408", "title": "Activity recognition based on wearable sensors using selection/fusion hybrid ensemble" }, { "paperId": "3adf3a23dd16e6489973d23f6877af615102233e", "title": "Data Mining - Concepts, Models and Techniques" }, { "paperId": "230ef581d3361ac90da495983002dfb50cfcd660", "title": "Sensor technology for smart homes." }, { "paperId": "36e9538f1998b5922ebbcc72695251bbfc324ee2", "title": "Neural Networks Ensembles: Existing Methods and New Techniques" }, { "paperId": "a826892385d54211968a44df6ee8dcb6bc96e963", "title": "The Gator Tech Smart House: enabling technologies and lessons learned" }, { "paperId": "c2bee99b8834e17af7778ef51ccd84cb161deda9", "title": "Identifying learners robust to low quality data" }, { "paperId": "1b4ef3e5a783a18de6439f345af20831dcdcf29e", "title": "Activity recognition from on-body sensors by classifier fusion: sensor scalability and robustness" }, { "paperId": "dfca81b6743e88edb352e601c95af1502dea3077", "title": "Data Mining: Concepts, Models, Methods, and Algorithms" }, { "paperId": "c63a8640e5d426b1c8b0ca2ea45c20c265b3f2ad", "title": "Class Noise vs. 
Attribute Noise: A Quantitative Study" }, { "paperId": "4b61ae6fc24e533e73b41cd502f94d74e2cd4436", "title": "MavHome: an agent-based smart home" }, { "paperId": "6b0b93396792958c5e58da1933b46a4f565e82c6", "title": "Data Mining And Knowledge Discovery Handbook" }, { "paperId": "756d8af94094d276a2fe6c8e297a3373e3c0b5f9", "title": "Classifiers Combination Techniques: A Comprehensive Review" }, { "paperId": "c6b4e80e9aa37f9f1d1da145e58ae0b57dbaeef7", "title": "Adapted ensemble classification algorithm based on multiple classifier system and feature selection for classifying multi-class imbalanced data" }, { "paperId": "cb206d1aed84388d3a4b83470957819bb6f1cf75", "title": "Adapted ensemble classification algorithm based on multiple classifier system and feature selection for classifying multi-class imbalanced data" }, { "paperId": "13c21287d85fb93d22d041be777e9adadd18abef", "title": "Convolutional and Recurrent Neural Networks for Activity Recognition in Smart Environment" }, { "paperId": "9baa45693e37d6f14a9a0efae7c5f552808b36cb", "title": "The 11th International Conference on Mobile Systems and Pervasive Computing (MobiSPC-2014) A Study on Human Activity Recognition Using Accelerometer Data from Smartphones" }, { "paperId": "b9eb00ee1656f40ae3bbfd8631bda30c1dd9206d", "title": "A tutorial on human activity recognition using body-worn inertial sensors" }, { "paperId": "9f97bec3ba3071b41609f8f4590b8635116d7e9f", "title": "An adaptable system for RGB-D based human body detection and pose estimation" }, { "paperId": "83de43bc849ad3d9579ccf540e6fe566ef90a58e", "title": "A Public Domain Dataset for Human Activity Recognition using Smartphones" }, { "paperId": "de8a9421577b8d22953b84d0574984eab0f33182", "title": "Ensemble Learning for Regression" }, { "paperId": "982b955c900b04e9da64e3b39422690c13d6b94f", "title": "Data Mining - Concepts and Techniques" }, { "paperId": "d5355193db69c1ec9e03cf09e6c5045612d0b273", "title": "M: Cluster Analysis" }, { "paperId": "20202f93d49e782395a6c3bd829678e3fd352924", "title": "Combining Artificial Neural Nets" }, { "paperId": null, "title": "This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license" }, { "paperId": null, "title": "The DOMUS Laboratory The Aware Home" } ]
25913
en
[ { "category": "Engineering", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/028b33b36dcefb6ae6139e06cefa758df632ffeb
[ "Engineering" ]
0.923838
Is Security Realistic in Cloud Computing?
028b33b36dcefb6ae6139e06cefa758df632ffeb
Journal of International Technology and Information Management
[ { "authorId": "153309689", "name": "S. Srinivasan" }, { "authorId": "2119102915", "name": "Jesse H. Jones" } ]
{ "alternate_issns": [ "1941-6679" ], "alternate_names": [ "J Int Technol Inf Manag" ], "alternate_urls": null, "id": "cb34de5f-6970-41cc-b2fe-c801d63d384d", "issn": "1543-5962", "name": "Journal of International Technology and Information Management", "type": "journal", "url": null }
INTRODUCTION Cloud computing today is benefiting from the technological advancements in communication, storage and computing. The basic idea in cloud computing is to take advantage of economies of scale if IT services could be provided on demand with a decentralized infrastructure. This idea is a natural evolution from the IT time-share model of the 1960s and 1970s. Today, technology has advanced significantly and many more organizations have computing demands that are elastic in nature. Organizations large and small require reliable computing resources in order to succeed in business. Large businesses deal with complex systems where as Small and Medium sized Enterprises (SMEs) need access to affordable computing resources. Based on these aspects we can summarize some of the rationale for today's cloud computing needs as follows: * acquiring and managing the IT resources requires specialized skills, * maintaining a reliable IT infrastructure is expensive, * rapid technology advancements make it difficult to keep current the IT expertise, * internet has opened up many opportunities for individuals as well as small businesses, * number of entities requiring computing resources has grown exponentially, * SMEs' demand for computing resources varies significantly over time, * providing data security is a complex undertaking. In the above paragraph we have identified some of the major reasons as to why cloud computing would be advantageous to use. When a significant part of the business depends on a type of service that the business does not fully control, the question arises as to how the business can meet its obligations to its customers. As highlighted above, IT services are essential to the success of the business but it would be cost prohibitive for the business to manage an IT center with the required expertise and fluctuating demand on resources for processing and storage. Thus, a business using cloud computing must understand the security challenges that it would be responsible for and how cloud computing could help in this regard. We address the security challenges by first noting the differences in the types of cloud computing that a business might be using. In order to address the security challenges associated with cloud computing, we need to understand first the meaning of cloud computing. The primary reason for this is that the term 'cloud computing' is used as a catch-all for a wide ranging array of services. After a careful analysis of numerous sources in the literature we have arrived at the following working definition of 'cloud computing' based primarily on the National Institute of Standards and Technology definition: Cloud computing consists of both the infrastructure and services that facilitate reliable on-demand access to resources that can be allocated and released quickly by the user without provider intervention using the pay-as-you-go model (NIST, 2011). It is worth noting in this context that Mell and Grance further amplified on this general definition in their NIST report that is now widely accepted as one of the important definitions of cloud computing (Mell, 2011). Today's cloud computing has three basic types: Software as a Service (SaaS), Platform as a Service (PaaS) and Infrastructure as a Service (IaaS). In the simplest of terms 'cloud computing' has come to embody SaaS. Similar to the IT time-share model mentioned earlier, SaaS provides both the server hardware and software to an organization without any of the complications of managing an IT system. 
The simplest example of SaaS service would be email for an organization. The cloud provider benefits from the economies of scale in managing a large infrastructure because of their strength in that area and is able to provide the necessary computing resources to the user, majority of who are SMEs, at an affordable cost. SaaS leaves the full control of the computing system with the provider. …
# Journal of International Technology and Information Management

[Volume 22](https://scholarworks.lib.csusb.edu/jitim/vol22), [Issue 4](https://scholarworks.lib.csusb.edu/jitim/vol22/iss4), [Article 3](https://scholarworks.lib.csusb.edu/jitim/vol22/iss4/3). 11-4-2013.

Recommended Citation: Srinivasan, S. (2013). "Is Security Realistic in Cloud Computing?," Journal of International Technology and Information Management: Vol. 22: Iss. 4, Article 3. DOI: https://doi.org/10.58729/1941-6679.1020

# Is Security Realistic in Cloud Computing?

**S. Srinivasan**
**Jesse H. Jones School of Business**
**Texas Southern University**
**USA**

**ABSTRACT**

_Cloud computing is rapidly emerging as an attractive IT option for businesses. As a concept, cloud computing is well received because of the benefits it offers, but many users are not clear about the scope of security in cloud computing. Many surveys point out that security in the cloud remains the top concern for many businesses in their decision making, in spite of the cost advantages cloud computing offers. In order to identify the security concerns, we analyzed over 50 research articles and industry white papers published over the past five years. In this paper we focus on the question "Is security realistic in cloud computing?" In presenting the justification that it is possible to expect adequate security features in the cloud, we address several related issues. In this context we first briefly describe the three types of cloud services: SaaS, PaaS and IaaS. Then we focus on the security aspects that businesses must pay attention to in order to succeed. Next, we consider the importance of trust in the service providers and how they could build customer trust in their services. This discussion leads to service reliability in the cloud and what the cloud providers have learned from cloud outages in order to build trust. Also, we highlight how the security features offered in the cloud support compliance requirements. We conclude the paper with some relevant information on the legal aspects related to cloud computing._

**INTRODUCTION**

Cloud computing today is benefiting from the technological advancements in communication, storage and computing. The basic idea in cloud computing is to take advantage of economies of scale by providing IT services on demand over a decentralized infrastructure. This idea is a natural evolution from the IT time-share model of the 1960s and 1970s.
Today, technology has advanced significantly and many more organizations have computing demands that are elastic in nature. Organizations large and small require reliable computing resources in order to succeed in business. Large businesses deal with complex systems whereas Small and Medium sized Enterprises (SMEs) need access to affordable computing resources. Based on these aspects we can summarize some of the rationale for today's cloud computing needs as follows:

- acquiring and managing IT resources requires specialized skills,
- maintaining a reliable IT infrastructure is expensive,
- rapid technology advancements make it difficult to keep IT expertise current,
- the internet has opened up many opportunities for individuals as well as small businesses,
- the number of entities requiring computing resources has grown exponentially,
- SMEs' demand for computing resources varies significantly over time,
- providing data security is a complex undertaking.

In the above paragraph we have identified some of the major reasons why cloud computing would be advantageous to use. When a significant part of the business depends on a type of service that the business does not fully control, the question arises as to how the business can meet its obligations to its customers. As highlighted above, IT services are essential to the success of the business, but it would be cost prohibitive for the business to manage an IT center with the required expertise and a fluctuating demand on resources for processing and storage. Thus, a business using cloud computing must understand the security challenges that it would be responsible for and how cloud computing could help in this regard. We address the security challenges by first noting the differences in the types of cloud computing that a business might be using.

In order to address the security challenges associated with cloud computing, we need to understand first the meaning of cloud computing. The primary reason for this is that the term 'cloud computing' is used as a catch-all for a wide-ranging array of services. After a careful analysis of numerous sources in the literature we have arrived at the following working definition of 'cloud computing' based primarily on the National Institute of Standards and Technology definition: _Cloud computing consists of both the infrastructure and services that facilitate reliable on-demand access to resources that can be allocated and released quickly by the user without provider intervention using the pay-as-you-go model (NIST, 2011)._ It is worth noting in this context that Mell and Grance further amplified this general definition in their NIST report, which is now widely accepted as one of the important definitions of cloud computing (Mell, 2011).

Today's cloud computing has three basic types: Software as a Service (SaaS), Platform as a Service (PaaS) and Infrastructure as a Service (IaaS). In the simplest of terms, 'cloud computing' has come to embody SaaS. Similar to the IT time-share model mentioned earlier, SaaS provides both the server hardware and software to an organization without any of the complications of managing an IT system. The simplest example of a SaaS service would be email for an organization. The cloud provider benefits from the economies of scale in managing a large infrastructure because of their strength in that area and is able to provide the necessary computing resources to the users, the majority of whom are SMEs, at an affordable cost.
SaaS leaves the full control of the computing system with the provider. Some of the major commercial SaaS providers are Amazon, Google, Microsoft and SalesForce.

PaaS provides the customer a platform, such as the Windows operating system, with the necessary server capacity to run the applications for the customer. PaaS is used mainly by developers who need to test their applications under a variety of conditions. The PaaS cloud service provider manages the system for its upkeep and provisioning of tools such as .NET and Java, whereas the customer is responsible for the selection of applications that run on the platform of their choice using the available tools. Thus, the customer is responsible for the security challenges associated with the applications that they run. For example, a customer running a SQL Server database on the platform should be aware of the vulnerabilities of the database system. Hence, the customer should have the expertise to manage such applications on the platform used under this pay-as-you-go model. The benefit to the customer is that if their hardware needs change or if they require a Linux/UNIX platform for some other applications, then provisioning them takes only a few days as opposed to the few weeks needed to make a new system operational. Major PaaS cloud service providers are Google App Engine and Windows Azure.

IaaS provides the customer the same features as PaaS but the customer is fully responsible for the control of the leased infrastructure. IaaS may be viewed as the computing system of the customer that is not owned by them. Unlike PaaS, IaaS requires the organization to have the necessary people with extensive computing expertise. The IaaS customer would be responsible for all security aspects of the system that they use except physical security, which would be handled by the cloud provider. Amazon and IBM are examples of IaaS providers. Combining the information presented so far about these three types of cloud services with additional cloud service providers, we have Table 1, which provides a quick snapshot of the available resources.

**Table 1: Summary of cloud service providers.**

| Provider | Type of service | Product name |
|---|---|---|
| Amazon | SaaS | AWS |
| | PaaS | Elastic Beanstalk |
| | IaaS | EC2, S3 |
| Google | SaaS | Gmail, GoogleDocs |
| | PaaS | App Engine |
| Microsoft | PaaS | Azure |
| Salesforce.com | SaaS | Sales Cloud |
| | PaaS | Force.com |
| Rackspace | PaaS | Rackspace Cloud |
| | IaaS | Rackspace Cloud |
| IBM | SaaS | CloudBurst |
| | IaaS | Blue Cloud |
| EMC | IaaS | Atmos |
| Apple | SaaS | iCloud |
| AT & T | SaaS | Synaptic Hosting |
| VMware | IaaS | vCloud Director |

It is worth noting that these three types of services are gaining ground. According to the Ponemon Institute/CA Technologies 2011 study, among cloud service providers SaaS accounts for 55 percent, PaaS accounts for 11 percent and IaaS accounts for 34 percent. Besides these three service types, a potential user must also consider the four different cloud deployment models for meeting their computing needs. The four cloud deployment models are the public cloud, private cloud, hybrid cloud, and community cloud. The most common cloud deployment model is the public cloud. In the public cloud the customer shares the resources with other customers. On the other hand, in a private cloud the resources are dedicated to the organization, which provides greater security because the computing resources are not shared with other customers. A private cloud is affordable only for large organizations.
A natural evolution from the public cloud and private cloud service models is the hybrid cloud, which uses both proprietary computing resources and/or private cloud resources that the organization manages directly and the public cloud for some of the computing requirements, especially the ones with varying demands on resources (Bhattacharjee, 2009). Two of the major hybrid cloud providers currently are VMware and HP. Another important statistic to note is that 65 percent of cloud service customers use public cloud services while 18 percent each use private cloud and hybrid cloud services.

These three types of cloud services aim to meet the customer requirements at different levels of engagement in managing the computing hardware and software. This has a direct correlation to the size of the organization in choosing the type of cloud service. For this reason we can broadly classify cloud computing users as belonging to either the public cloud or the private cloud. Small and medium sized businesses typically use the public cloud and large organizations use the private cloud. All the cloud service providers mentioned earlier provide both public and private cloud services. In the private cloud, a large organization which has a data center to manage is able to use large amounts of storage and computing power dedicated to just their organization. The private cloud facilitates the large organization handling demand elasticity similar to the public cloud provider.

The community cloud is used by organizations with a common focus such as health care, automotive and financial services. The community cloud represents a vertical market in which the organizations stand to benefit by having a dedicated server that addresses the specialized needs of that sector. For example, in the media industry companies are looking for ways to simplify content production at low cost. This requires collaboration among a large group of people. A community cloud facilitates the location of the necessary computing resources for content production and editing. By using a community cloud dedicated to the media industry this need is met. The Windows Azure platform is used as a public cloud for this community cloud architecture.

Having provided a brief overview of the three basic cloud types and the four deployment models, let us next review the security aspects in the cloud as discussed in several research articles and industry white papers. One of the main reasons for the cloud to provide cost efficiencies is its ability to leverage the economies of scale in hardware and the ability to offer Virtual Machines (VMs) on a single piece of hardware for multiple clients. Moreover, cloud providers enable visibility to the customer on the location of their VM in the cloud. How this feature is exploited by attackers to launch side-channel attacks on the cloud is the major contribution of Ristenpart, Tromer, Shacham and Savage (2009). In their oft-cited paper “Hey, You, Get Off of My Cloud,” these UC San Diego and MIT researchers highlight the security concerns of many businesses.
They point out the data leakage aspect in a public cloud (Ristenpart, 2009). In a multi-tenant environment on a physical infrastructure, which is very common in a public cloud, such attacks are capable of extracting encryption keys. Thus, one of the most heavily relied upon defenses to secure data storage in the cloud becomes vulnerable.

Armbrust et al. discuss in their paper the top 10 obstacles to cloud adoption. These UC Berkeley researchers show the current status of cloud services and how the technology needs to improve further to address customer security concerns. This paper points out how, in spite of advancements in interoperability among different platforms, the storage APIs tend to be proprietary. This basically locks in a cloud customer and prevents switching to another cloud service provider easily (Armbrust, 2010). Providing very high reliability of service in the cloud requires extensive infrastructure deployment with plenty of redundancy built in. Major service providers like Amazon, Google, Microsoft and Salesforce have the ability to assure very high availability of their services. All these services have experienced some well publicized outages, which cause concern for businesses in their desire to switch to the cloud.

The significance of cloud security is the focus of one of the four parts of the book Cloud Computing by Antonopoulos and Gillam. In this edited book the authors have included several chapters on cloud security (Antonopoulos, 2010). In particular, the work of Durbano, Rustvold, Saylor and Studarus focuses on the significance of standards in enabling cloud security. Their work points out the gaps in ISO 27002 security controls (Durbano, 2010). Chen, Paxson and Katz answer the question of 'What is new about cloud computing security?' Their analysis shows that many of the cloud security issues are not really new, except that they hinge upon multi-tenancy trust considerations and the auditability of service providers' ability to back up their claims with data on security aspects (Chen, 2010).

One of the challenges for any new technology is the availability of global standards. Cloud computing is evolving rapidly but there are not many commonly accepted standards yet. ISO 27001, NIST and the Cloud Security Alliance are all working toward providing guidelines for the cloud industry. One of the Cloud Security Alliance guidelines involves the Top 9 Cloud Computing Threats in 2013. Some of these threats relate to data breaches in the cloud, data loss due to data leakage, insecure APIs and the abuse of cloud services (Cloud Security Alliance, 2013). We already pointed out one such abuse from the work of Ristenpart et al. involving side-channel attacks.

Next we look at the literature review article of Yang and Tate, in which they classify 205 articles that appeared in cloud computing research (Yang, 2012). They started this line of research in 2009 when they reviewed 54 articles. Since then the field has grown significantly and they included several of the articles that we are examining in this brief review. Similar to Yang and Tate's work, Idziorek and Tannian surveyed research articles in the area of public cloud computing and focused on cloud computing security. This article points out several reasons for the impediments still facing cloud computing adoption (Idziorek, 2012). Likewise, Modi et al. surveyed the issues affecting cloud computing adoption and their vulnerabilities. This paper identifies some solutions to strengthen security and privacy in the cloud (Modi, 2013).
Related to this work is the technical book by Trivedi and Pasley on Cloud Computing Security. As developers of cloud security solutions with a major technology company, these authors identify several security solutions based on cloud architecture, design and the way the customers deploy their cloud based solutions (Trivedi, 2012). Continuing this line of research on cloud computing security, Zissis and Lekkas propose the creation of a trusted third party focused on cloud security. The authors point out that this arrangement would create a security mesh for all cloud users that will lead to a trusted environment (Zissis, 2012).

Many businesses use cloud computing for data storage. This feature provides the business a cost effective solution to store as much data as necessary and at the same time provides the related data backup, recovery and business continuity benefits. However, it also introduces the risk of not having full control over the data storage, as it is physically outside the control of the business. This has led to several risks for businesses. To address this concern Wang et al. propose a flexible distributed method. In their approach they propose a method that achieves storage correctness and supports dynamic operations such as data update and delete (Wang, 2009).

John Viega, from a major security service firm, analyzed the security aspects of the three major cloud services – SaaS, PaaS and IaaS. His analysis shows that in the case of SaaS the main concern for the customer relates to the service provider's ability to protect the infrastructure from attack and ensure the non-leakage of data in the multi-tenant environment. In PaaS, even though the developers who subscribe to this service will be able to develop their own security solutions, they are still dependent on the service provider's way of protecting the service below their application level for intrusion prevention. For IaaS, the major concern is the way the virtual machines are configured. A related concern with the IaaS service is the reliability of the service provider (Viega, 2009).

Mark Ryan has a special focus in his paper on privacy concerns related to the cloud because his paper addresses an area of interest for many academic researchers. The goal of Ryan's paper is the privacy aspects related to the two major conference management systems in use – EDAS and EasyChair. The paper highlights the many benefits of conference management systems in the cloud and also highlights some concerns, such as the leakage of reviewer information, the cumulative success records of many researchers related to their submissions for a variety of conferences over a long period of time, and the aggregated reviewing profiles of the reviewers. These data could be accidentally or maliciously disclosed by systems administrators on these cloud systems, where they are privy to large volumes of data. Even though this is a very small segment of the cloud service industry, this paper's focus is on the potential privacy concerns for data stored in the cloud (Ryan, 2011).

The next set of papers that we examine relates to cloud computing risks and how they are addressed. Gartner Research identifies seven cloud computing risks that are quite common. These are presented in the context of a potential cloud customer evaluating a cloud service.
Some of these concerns relate to how the service provider handles privileged access to system resources, their regulatory compliance activities related to the physical security of the system and third party audits such as the SAS 70 Type II audit report, where they store the data, and how they segregate data belonging to different customers so that they do not co-mingle (Brodkin, 2008). In summarizing the cloud security concerns of many European partners, Daniele Catteddu in his lengthy report points out that the two major benefits of cloud services, namely the economies of scale and the operational flexibility, are 'both a friend and foe.' The main thrust of this report is that the cloud customer needs an assurance that the service providers are following sound security practices to mitigate the risks faced by the customer and the provider (Catteddu, 2010).

Similar to the above report, the World Privacy Forum developed a report on the privacy implications of data stored in the cloud. This report especially focuses on the many legal aspects of compliance based on laws such as HIPAA (Health Insurance Portability and Accountability Act), GLBA (Gramm-Leach-Bliley Act), ECPA (Electronic Communications Privacy Act), and the Fair Credit Reporting Act. The report notes that the information stored by an individual or a business with a cloud service provider may have less protection than when the same information is held by the information creator. Moreover, governments find it easy to obtain a lot of information from a centralized source such as a cloud provider (Gellman, 2009). The main contribution of this report is in raising the awareness of cloud customers relative to privacy issues in the cloud.

One of the indicators of a mature model is the availability of enough case law to understand how courts interpret the technological aspects. Using this metric, cloud computing is not yet mature enough to have a solid body of case law. To understand the ambiguities of how to interpret the implementation models in the cloud we cite two instances. In the first case, Cartoon Network sued CSC Holdings (parent company of Cablevision) for copyright infringement in 2009. In this case, Cablevision provided customers the ability to store the recordings of their choice in the cloud and access the same using their authentication credentials. Cartoon Network contended that Cablevision was violating their copyright by sharing their content with others using the cloud storage. The court ruled that Cablevision was simply providing their authenticated customers a storage facility in the cloud and not illegally sharing any copyrighted material. In the second case, Arista Records sued Usenet.com in 2009 for violating their copyright on their musical content by enabling the unauthorized redistribution of their copyrighted content through the bulletin board system in the cloud managed by Usenet. The model used by Usenet enabled the cloud users to share their stored content with others, unlike the Cablevision case. Since the cloud was used in this case for data sharing, the court ruled that it was a copyright violation by Usenet (Wittow, 2011).

The analysis so far was focused on the US experience with the use of cloud computing. However, cloud computing is a global phenomenon. In the United Kingdom the government's cloud computing initiative is known as G-cloud. In a brief article focused on the G-cloud and NIST definitions of cloud computing, author Craig-Wood develops a comprehensive picture of all aspects of the cloud (Craig-Wood, 2010).
We have developed Figure 2 based on this view. This figure effectively summarizes our prior discussion of the three cloud types, the four cloud deployment models and some of the major advantages of cloud services. Just as cloud computing is used in the UK, the Australian experience relative to the legal requirements of the cloud provider is described in the white paper by Vincent and Crooks. The details presented in this white paper relate to privacy laws, the location of data in the cloud, how foreign governments might get access to this data, security breaches and service availability (Vincent, 2013). These are all important security considerations for a cloud service consumer to consider prior to making a commitment to use the cloud.

**Figure 2: Cloud Types-Deployment Models-Features.** [Figure omitted: the diagram relates the cloud features (on-demand self-service, pay-as-you-go, pooled resources, broad network access) to the four deployment models (public, private, hybrid, community cloud) and the three service types (SaaS, PaaS, IaaS).]

We review next the German experience with respect to cloud computing based on adherence to privacy laws. This topic is discussed by Doelitzscher, Reich and Sulistio in their Cloud Security Project, with particular emphasis on Small and Medium-sized Enterprises (SMEs). They introduce a six layer security model that involves risk analysis and encryption (Doelitzscher, 2010). We conclude this security review of the existing cloud computing literature with a brief outline of Bhensook and Senivongse's assessment of security requirements compliance by service providers. This paper makes an extensive analysis of the Cloud Security Alliance's recommendations by using the Goal Question Metric (GQM) for security requirements compliance. The weighted scoring model that they develop is then tested using Amazon Web Services' (AWS) compliance. The results show that in most cases AWS is compliant with the various metrics being measured (Bhensook, 2012).

## NEW PARADIGM

Cloud computing is a significant shift in the way IT services are managed. Organizations large and small have managed IT services over the years with varying levels of investment. Today, with advancements in communication technology, many new options have opened up for existing businesses, and new entrepreneurs want to use more of the capabilities of IT. These two aspects have spawned the significant growth of cloud computing, which gives the customer the ability to benefit from the pay-as-you-go model. Cloud computing has enabled the service providers to benefit from the economies of scale. This change in service rendering is necessitated by the fact that today's workforce is increasingly mobile and consequently the need for access to remote resources is greater. Moreover, demand fluctuations for IT services are a reality. Businesses cannot afford to provision IT services to meet peak demands, which occur infrequently. Cloud computing provides an ideal solution to meet these needs without incurring significant cost in service provisioning. The investments necessary to have a reliable IT service kept many prospective entrepreneurs from creating online ventures. On the web, businesses large and small look alike.
Cloud computing is providing entrepreneurs the opportunity to try their ideas out, with IT services no longer holding them back as a barrier to entry. The major beneficiaries of cloud computing are small and medium sized businesses, as this new concept provides them an opportunity to try out high-end services with no up-front cost, allowing them to use the pay-as-you-go model. Large enterprises also stand to benefit from cloud computing, although in a different way. Large enterprises manage data centers, and the IT paradigm shift referred to earlier means more in the context of accessing data from the data centers. In this context private clouds have been introduced, where the benefits of storage management and elasticity in demand for computing services are the key drivers. Moreover, the cloud technology also offers a high level of reliability and availability of systems without significant capital outlay.

Often, the benefits of cloud computing are realized by taking a hybrid approach. The hybrid approach gives large organizations the ability to manage their IT centers and at the same time expand their computing capacity without large capital investment by utilizing the cloud resources. This is especially useful for meeting seasonal peak demands with hybrid clouds. Organizations with seasonal high demands that could benefit from hybrid clouds are in the entertainment industry around holidays, sports networks with on-demand service and tax service providers.

In assessing cloud computing's appeal we should consider the usage levels of organizational servers. The server utilization level gives a good metric to see if the investment cost in hardware is worth it. The U.S. federal government started looking at the server utilization in its data centers several years ago and found that the utilization level was low. According to a 2010 report by the Computer Sciences Corporation (CSC), a global technology services company, among all data centers in use the server utilization rate is between 6 percent and 20 percent. The data from this report is shown in Figure 2.1. Even Google's server utilization rate is around 40 percent. One reason for the low utilization is the lack of virtualization and the need to use dedicated servers for multiple operating systems as well as the separation of sensitive applications. Cloud computing is a natural fit to address the low utilization aspect because of its high levels of virtualization. With multiple users sharing the computing resources, cloud computing has a very high level of server utilization (Hayes, 2008).

**Figure 2.1: Server Usage Statistics.**

Cloud computing architecture enables businesses to meet demand elasticity in computing resources. Business organizations have great difficulty in dealing with demand elasticity for cost considerations. A useful model to compare here is how networks manage elasticity in bandwidth demand. For cost reasons network bandwidth provisioning uses the Committed Information Rate (CIR) model. Likewise, cloud computing provides a similar feature in meeting demand elasticity in both storage and computing power. Without the ability to meet demand elasticity, businesses may end up with an underprovisioned service. In that case customers would abandon such services. Jeff Bezos, CEO of Amazon, highlights the success of meeting extreme demand for computing power within a very short period of time. In this case a nascent web services company, Animoto, grew so rapidly that its server needs grew from 50 to 3,500 servers over a three-day period. Amazon was able to accommodate such a high demand easily. This is a good illustration of high demand elasticity.
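To make the economics of such demand elasticity concrete, the short Python sketch below compares fixed peak provisioning with pay-as-you-go billing for a bursty workload shaped like the Animoto episode. The demand curve and both per-server-hour rates are hypothetical illustration values, not figures taken from this paper.

```python
# Compare fixed peak provisioning with pay-as-you-go billing for a bursty
# workload. The demand curve and the per-server-hour rates are hypothetical.
hourly_demand = [50] * 48 + [3500] * 24 + [50] * 96   # servers needed, per hour
owned_rate = 0.12    # assumed cost per server-hour of owned capacity
cloud_rate = 0.34    # assumed (higher) cost per server-hour in the cloud

# Fixed provisioning must pay for peak capacity during every hour.
fixed_cost = max(hourly_demand) * owned_rate * len(hourly_demand)

# Pay-as-you-go is billed only for the server-hours actually consumed.
cloud_cost = cloud_rate * sum(hourly_demand)

print(f"fixed peak provisioning: ${fixed_cost:,.2f}")   # $70,560.00
print(f"pay-as-you-go billing:   ${cloud_cost:,.2f}")   # $31,008.00
```

Even at a higher hourly rate, the metered model wins here because the owned capacity sits idle outside the short burst; the conclusion reverses for steady workloads, which is why the hybrid approach described above is attractive.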
In the traditional model, the end user had control over the creation, maintenance and deletion of a document. In the cloud environment, the end user is spared the trouble of maintaining the computing system and reaps the benefits of the application software. This is a positive aspect of cloud storage. However, it is not entirely clear to the end user that when a document is deleted it is going to be inaccessible from the storage system. There have been instances where the document lingered on in the storage system of the cloud provider. These types of issues are unique to cloud computing and thus are a departure from the standard expectations of a computer system. Thus, we note that a shift in approach is needed in order to have control over online information.

Many large organizations are considering cloud based services as a cost-effective way to plan for disaster recovery. In this case the organization has its own computing resources that it controls and plans to use the cloud services for disaster recovery purposes. In the traditional model for disaster recovery the organization would use a warm or cold site as its backup facility, which is a recurring expenditure for the company. The cloud model eliminates this recurring expenditure; instead, the organization pays for the services it uses when needed. The main cloud service type being considered for disaster recovery is IaaS.

Data backup is another service area in which cloud computing is gaining ground. In the traditional model companies perform an incremental backup daily and a full backup weekly. The backed up data gets stored off site and handled by companies such as Iron Mountain. In the cloud computing model a large organization would use the cloud services for backup and recovery, which by its very design is at an offsite location far away from the company location. The organization could architect the backup process in such a way that it is an automated full backup running continuously. The promise of these two services in the cloud has brought Microsoft and Iron Mountain together to offer data backup and recovery. Using the Windows Azure platform, Microsoft performs data backup and Iron Mountain manages the stored data for the customer based on their expertise in this field. The customer pays for this service based on the amount of storage used and the retention period for backup data. This service has the added benefit of built-in offsite storage, which is essential for disaster recovery and backup because the cloud provider is remotely located relative to the customer.

An essential component of efficient data backup is data de-duplication, also known as 'intelligent compression.' The de-duplication method allows for storing only one copy of the data and providing a pointer from all future occurrences of the same data. Data de-duplication can be performed at the file or block level. The latter is more efficient than the former. In typical email backups many users may have the same file as an attachment, so the same file is backed up multiple times. Using the de-duplication approach only one copy is saved and all other references point to the same copy. This is typical file level de-duplication. Most often de-duplication is more efficient at the block level. In this approach each block of data is hashed using MD5 or SHA-1 and the index is stored. Future hashes of blocks producing the same index are treated as duplicates and not stored. There are sophisticated methods available to detect hash collisions, which are rare (Armbrust, 2010).
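As an illustration of the block-level approach just described, the following minimal Python sketch stores each unique block once under its hash index. The 4 KB block size is an assumed value, and real systems would also add a byte-wise comparison to guard against the rare hash collision mentioned above.

```python
import hashlib

BLOCK_SIZE = 4096   # assumed block size; production systems vary

store = {}      # hash index -> the single stored copy of a block
pointers = []   # one reference per incoming block, duplicates included

def backup(data: bytes) -> None:
    """Split data into blocks and store each unique block exactly once."""
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        index = hashlib.sha1(block).hexdigest()   # MD5 works the same way
        if index not in store:
            store[index] = block        # first occurrence: keep the block
        pointers.append(index)          # later occurrences only add a pointer

backup(b"A" * 8192 + b"B" * 4096)   # three blocks, two of them identical
backup(b"A" * 4096)                 # a duplicate block from a second backup
print(len(store), "unique blocks for", len(pointers), "block references")
# -> 2 unique blocks for 4 block references
```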
## SECURITY CHALLENGES

Managing security is a major challenge for both large and small organizations. Transitioning to cloud computing increases the challenges faced by these companies manyfold because of the several unknowns. One important aspect of security is physical security. An organization that owns the computing resources knows how to provide physical security. An organization that uses cloud computing does not know where the physical resources are located, so providing that form of security is transferred to the service provider. This raises the important question of liability in the event of a security compromise. Many organizations do not take this aspect into consideration in their security planning. Ultimately it would be the organization that would be held accountable by its customers for any security loss due to a failure in physical security. Major providers such as Amazon, Google, SalesForce and Microsoft provide the necessary physical security guarantees.

Security and trust are closely related. An important contributor to trust is transparency. One way to achieve customer trust in cloud environments is to share the security practices as they relate to physical security, backup and recovery, compliance, incident handling and logs of security attacks. With the help of non-disclosure agreements cloud providers can share details of logs on incident handling and security attacks.

Information security's core requirements are confidentiality, integrity and availability. We add a fourth piece – access control – to this security scenario. In this section we will explore the various security challenges facing a customer using cloud computing. First and foremost, the cloud customer must understand the levels of security the provider is guaranteeing. This includes system up time, system upkeep with respect to software updates and patches, and the sharing of a variety of logs that will enable the cloud customer to meet certain compliance requirements such as HIPAA and SOX. Since the cloud provider manages the servers on which the customer applications run, access control to the applications is an important part of security as well. Access control in cloud environments differs based on the type of service – SaaS, PaaS, IaaS. Today's cloud environment is dominated by SaaS and in this case the cloud provider should provide the necessary authentication mechanisms to grant access. The leading SaaS service is CRM, estimated to be about US$ 3.8 billion worldwide in 2011. A Goldman-Sachs survey in 2010 points out that small and medium sized businesses would consider a cloud service 70 percent of the time and would prefer a cloud solution, if available, 58 percent of the time. This high level of confidence in SaaS cloud services shows the maturity of this particular market.

The success of cloud computing as a viable alternative to individual businesses managing their own IT centers is commendable. From an operational perspective this is an efficient model. However, from a security perspective there are several major concerns to overcome. As mentioned above, the physical security of systems is one concern. A more serious concern is access control – physical access to hardware, privileged access to the operating environment, and access to application software.
In Figure 3.1 we combine the access control feature with the type of protection that a service provider could guarantee the customer for the security of their data in this design. The goal of this proposed architecture for the cloud, relative to the cloud customer, is to provide a way for them to see how their data and their interaction with the cloud are both secure. The authentication layer inside the cloud provides a way for the cloud customer to control who gets access to their virtual machine and data. Knowing who the privileged users are from the service provider perspective will enable the customer to monitor the access log, which they should be able to get in an automated manner from the service provider. The firewall inside the cloud gives the cloud customer the level of assurance that they are accustomed to in their own systems. The customer would be able to have the firewall configured by the service provider the way they want, thus giving an added level of protection to their data. By using encrypted storage in the cloud and holding the encryption key themselves, the customer will have added security protection in case of data leakage because of storage comingling.

**Figure 3.1: Cloud Computing Security Infrastructure.** [Figure omitted: customers reach a server through an authentication layer and a firewall, with data kept in encrypted storage.]

From a security perspective cloud computing should be viewed as a potential single point of failure. Cloud providers have high redundancy in their architecture, yet relying solely on one cloud provider may raise reliability issues. Moreover, when many customers still rank cloud security as their main concern in their switch to cloud computing, the providers have to offer many trust building measures. One such measure is system reliability. This has suffered some setbacks because of some well publicized instances of cloud outages. Some of these major outages are the April 2011 Amazon AWS failure (Amazon, 2011), the October 2009 DDoS attack on Bitbucket that utilized Amazon EC2, and the November 2007 Rackspace failure due to a power supply disruption. Since reliability could still be an issue for some customers, one possible solution when using cloud services is to split the types of cloud services among multiple service providers. A common approach could be to use one cloud provider for operations and another provider for storage solutions.

Besides the access control concerns with cloud storage and the violations cited in this regard, it is a major decision for a customer to have a data backup outside of the cloud service provider. The reason for this is that when a cloud service provider has to comply with a law enforcement demand for the removal of a storage device for investigative purposes, many other customers might also be affected. This is not a hypothetical scenario. In March 2009 a data center in Dallas owned by a cloud service provider was raided by law enforcement, and many other customers were affected because their data was unavailable for access for an extended period of time.

Security, interoperability and portability are considered the major barriers to cloud computing adoption. In many instances the cloud service provider does not think it is their responsibility to protect customer data. In the 2011 Ponemon Institute/CA Technologies cloud security study a surprising finding is reported (Ponemon, 2011).
It points out that a majority of the cloud service providers do not think that cloud security is their responsibility, and even more disturbing is that these providers do not think that their services protect and secure customers' sensitive data. Cloud service customers should be aware of how secure their sensitive data are and how to protect the same.

The cloud computing model requires virtualization in a big way. In virtual environments data comingling is a reality. We identified in Figure 3.1 how a customer could work with the service provider in establishing a level of security to prevent the comingling of data on a central server. Since cloud service providers operate in a multi-tenant environment in their public cloud, an unintended consequence of comingling data from multiple customers on the same physical device is data leakage. The author recommends the architecture specified in Figure 3.2 as a preferred solution for the cloud provider's infrastructure vis-à-vis the customer's need for access logs. The figure shows the need for the virtualization of servers and the isolation of the virtual servers. Also, the customer should be aware of the privileged users at the cloud provider who will have access to their virtual servers, as well as having the ability to get data from the user access logs for their respective virtual servers in an automated manner. Ristenpart et al. have highlighted the data leakage problems associated with comingling data on the same physical device (Ristenpart, 2009). Data leakage could lead to privacy violations. When cloud providers do not see that it is their responsibility to protect customer data, as highlighted by the Ponemon Institute study, it puts an extra burden on cloud customers to use encryption for all their data. We expect more and more customers are going to be sensitive to the potential for data leakage from the cloud, which is not a problem if they have their own computing resources.

Security does not come from encrypted storage alone. The customer must hold the encryption key in their system at their place of business and not place it in the cloud, even if it is in their virtual server. Ristenpart et al. have shown how an attacker could place their virtual server on the same physical server as the customer and perform side-channel attacks to recover the encryption key.

**Figure 3.2: Cloud Provider Infrastructure.** [Figure omitted: the provider hosts isolated virtual servers VS1, ..., VSn for its customers, each with a corresponding access log AL1, ..., ALn that is visible to the provider's privileged users and available, in an automated manner, to the respective customer.]
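A minimal sketch of the practice recommended above, i.e., encrypting on-premises and keeping the key out of the cloud so the provider only ever sees ciphertext. It assumes the third-party Python package `cryptography` is installed, and `upload_to_cloud()` is a hypothetical placeholder for a provider's storage API.

```python
from cryptography.fernet import Fernet  # pip install cryptography

key = Fernet.generate_key()   # generated and kept at the place of business;
cipher = Fernet(key)          # never uploaded, not even to the customer's VM

def upload_to_cloud(name: str, blob: bytes) -> None:
    """Hypothetical placeholder for the provider's storage API."""
    print(f"uploading {len(blob)} ciphertext bytes as {name!r}")

record = b"customer ledger, Q3"
token = cipher.encrypt(record)        # encrypt locally, before upload
upload_to_cloud("ledger.enc", token)  # the cloud stores ciphertext only

# Retrieval reverses the steps locally: download, then cipher.decrypt(token).
assert cipher.decrypt(token) == record
```

Because the key never leaves the business, leakage of the stored blob through comingling or a side-channel attack exposes only ciphertext.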
In order to provide data security, both customers and cloud service providers must realize that security has to become information-centric, i.e., security moves with the data. This way any operational decision by the cloud service provider will protect the customer data. Another important aspect of handling the cloud security challenge is to trust the processes and infrastructure. To this end:

1. Businesses should know from the cloud provider how the following are handled:
   - Physical security
   - Number of privileged users with access to hardware
   - Log data for system access, up time, incident handling, attacks on the system
   - Data backup and recovery
   - Compliance with regulations
2. Security and trust are interrelated.
3. An important contributor to trust is transparency.

The significance of compliance such as a SAS 70 Type II audit builds trust for the customer. Providing data security involves not only having adequate measures but also compliance with relevant laws. In order to verify compliance an organization would require log data related to access control and incident handling. The cloud customer should be able to get such data from the provider. This is not standard practice with the cloud providers. System up time is another issue that should be looked into by the cloud customer. The system up time could be verified only with log data. In Table 3.1 we highlight the compliance aspects of Amazon and Google, two of the major cloud service providers.

**Table 3.1: Summary of security features by Amazon and Google.**

| Provider | Security Features | Compliance Support |
|---|---|---|
| Amazon | ISO 27001 certified; SAS 70 Type II Audit certified; PCI DSS Level 1 certified; FISMA certified; physically secure data centers distributed around the world; user control for data encryption; data backup; customer-aware storage region; does not allow third-party cloud providers; processes in place to prevent unauthorized insider access to customer data; SLA guarantee of 99.95% up time; supports multifactor authentication; supports host-based firewall | SOX compliant; GLBA compliant; HIPAA compliant |
| Google | SAS 70 Type II Audit certified; multi-layered physical security; hardware customization and maintenance; secure storage handling; privacy protection for customer data; data centers located around the world; redundancy built into storage management; ease of movement for customer data into and out of Google Apps; provides additional levels of access control designed by customers | Google does not explicitly certify any of their products as meeting standard compliance requirements such as HIPAA, SOX, GLBA, FISMA, etc. Instead, it provides features that customers can enable in order to be compliant on their own using Google Apps. |

Trust aspects include knowing the reliability of the cloud service. To this end the cloud service provider should be able to provide the customer their infrastructure capability with respect to system uptime. It is an integral part of knowing the provider's ability to assure security. Table 3.2 gives the amount of downtime allowed when cloud providers claim a certain level of uptime, usually denoted by a set of 9s. For example, four 9s means 99.99 percent up time. Highly available computing is expensive. The addition of one 9 to the guaranteed uptime nearly doubles the cost. Given these limitations, the cloud provider uptime claims should be backed by data.

**Table 3.2: System downtime chart based on up time claim by cloud provider.**

| System Availability | Maximum downtime per year |
|---|---|
| 99.999% | 00:05:15 |
| 99.99% | 00:52:35 |
| 99.9% | 08:45:56 |
| 99% | 87:39:29 |

Source: Stratus.com
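The downtime budgets in Table 3.2 follow directly from the availability percentage; the small sketch below reproduces them, using a mean year length that matches the table's figures.

```python
# Reproduce the Table 3.2 downtime budgets from an availability claim.
SECONDS_PER_YEAR = 365.2422 * 24 * 3600   # mean year length matching the table

def max_downtime(availability_percent: float) -> str:
    """Allowed downtime per year, formatted as HH:MM:SS."""
    down_seconds = SECONDS_PER_YEAR * (1 - availability_percent / 100)
    hours, rem = divmod(int(down_seconds), 3600)
    minutes, seconds = divmod(rem, 60)
    return f"{hours:02d}:{minutes:02d}:{seconds:02d}"

for availability in (99.999, 99.99, 99.9, 99.0):
    print(f"{availability}% -> {max_downtime(availability)}")
# 99.999% -> 00:05:15, 99.99% -> 00:52:35, 99.9% -> 08:45:56, 99.0% -> 87:39:29
```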
Standards play an important role in security. In the case of cloud computing, the standards are still evolving. The groups that are making contributions to cloud standards are the Distributed Management Task Force, the Object Management Group, the National Institute of Standards and Technology, the European Telecommunications Standards Institute and the Storage Networking Industry Association. These standards help address the issues raised earlier so that customers can be familiar with the level of security that they are getting from the cloud provider.

## FUTURE OF CLOUD COMPUTING

Cloud computing has many benefits to offer. Foremost among them are meeting the elasticity of demand and the ability to use the 'pay-as-you-go' model. Cloud computing is often referred to as "converting capital expenses to operating expenses." Cloud computing provides risk transference for the customer in that the customer need not worry about the cost involved in overprovisioning or underprovisioning resources. It is important to note that a business that consistently underprovisions resources will lose customers permanently.

Cloud computing has become the preferred storage solution to deal with Big Data. We are seeing exponential growth in data due to several social media applications. A welcome contribution to cloud computing has been added by Google through its MapReduce architecture. MapReduce provides a simple way to process large volumes of data quickly. The open source implementation of the MapReduce method is Hadoop by Apache. Many organizations use the Hadoop Distributed File System (HDFS) to process large volumes of data in a reliable way. Without the availability of cloud services many businesses would not be able to afford the high cost of infrastructure needed to use Hadoop.
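To illustrate the MapReduce idea mentioned above, here is the canonical word-count example written as a toy, single-process Python sketch; a real Hadoop job would distribute the map, shuffle and reduce phases across a cluster, with HDFS holding the input and output.

```python
from collections import defaultdict
from itertools import chain

documents = ["the cloud scales", "the cloud stores big data", "big data"]

def map_phase(doc: str):
    """Map: emit a (word, 1) pair for every word in one input record."""
    return [(word, 1) for word in doc.split()]

# Shuffle: group emitted values by key (the framework does this in Hadoop).
groups = defaultdict(list)
for word, count in chain.from_iterable(map_phase(d) for d in documents):
    groups[word].append(count)

# Reduce: combine each key's grouped values into the final result.
word_counts = {word: sum(counts) for word, counts in groups.items()}
print(word_counts)   # {'the': 2, 'cloud': 2, 'scales': 1, 'stores': 1, ...}
```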
Handling security in the cloud is a complex process and many researchers are working on incremental aspects of providing security to a variety of aspects of cloud processing. Hamlen et al. (Hamlen, 2010) discuss one such security solution for the cloud. In particular, they discuss ways to efficiently store data in remote locations and query encrypted data. This approach of storing data in the cloud after encrypting it provides a level of security for sensitive data. Some of the cost advantages that the cloud provides will be lost if we were to add the cost of encryption. If we store data in the cloud in encrypted form then we need to develop processes to query encrypted data. Hamlen et al. use HDFS for virtualization and apply Hadoop processes for secure storage.

Another area where cloud computing could play a major role in the future is backup storage. Many businesses do not consider backup as highly critical, and with cloud services providing a viable low cost alternative, it is expected to catch on in the future. Companies like Carbonite and CrashPlan have simplified the process of backup. Recovery goes hand-in-hand with backup, and customers are responsible for the policies regarding periodic recovery testing. Given the cloud infrastructure capability this should be a simple solution, as the customer would be able to have access to the necessary hardware and software to test the recovery aspects on a pay-as-you-go model.

As cloud computing emerges as an affordable service for many businesses it is inevitable that several legal issues will arise as to the ownership of data, liability for the protection of data and the safeguarding of privacy. There is not a large body of case law in this regard. We cite two recent court cases where challenges were mounted because of the use of a cloud service. In the first case, Cartoon Network challenged that Cablevision's Remote Storage DVR service violated the copyright laws. The Cablevision service offered the customer the ability to record shows and store them in the cloud in their personal storage area for later playback. Since the access to this storage was based on the customer's access control mechanism of userID/password, the court ruled that Cablevision was not violating the copyright laws (Cartoon Network, 2008). According to a study by Harvard University's Lerner and others, the impact of this ruling was significant, as it resulted in investments of over a billion dollars in this area of cloud service. However, there were conflicting rulings in Europe (unfavorable in France, favorable in Germany) on similar aspects of storage in the cloud by individual users (Borek, 2012).

Cloud service technology has the ability to allow each user of the service to have their own storage space, as described in the Cablevision case above. The cloud technology is also capable of storing a single copy of an artifact and making it available only to authorized users. MP3Tunes, a new cloud-based music service, created a locker in which a customer could place a song and access it later on demand with appropriate authentication. As part of the cloud technology's capability, MP3Tunes held only one copy of a song in the locker even when new users attempted to store the same song. Capitol Records, one of the largest music companies, sued MP3Tunes for offering music distribution without a proper license of copyrighted material. The Manhattan District Court in the U.S. ruled that MP3Tunes did not violate the owner's copyright for the song (Capitol Records, 2011), as it was leveraging one aspect of the technology known as de-duplication. The users accessing the song had all stored that song in the cloud. These two cases point to the challenges faced by the cloud service industry as the service matures.

## SUMMARY

Cloud computing is efficient and cost effective. It relieves many businesses of the burden of maintaining a technical system while they still reap the benefits of its availability. In our analysis so far we were able to identify the many security challenges posed by the cloud service. The solutions identified in various places in the above discussion show that we can answer in the affirmative the question raised by the title of this paper concerning cloud security. Cloud computing has opened up the opportunity for many entrepreneurs to focus on their core business and count on cloud services to provide the necessary computing power. Many businesses experience demand elasticity, and cloud computing is a natural fit in providing cost effective service in this area. Security aspects did not get much attention for a few years, as many customers were focusing on the availability of service. As more and more businesses start using this service the question of security is coming to the forefront. We have addressed some of the security concerns with cloud computing. One area that is bound to receive greater attention is compliance with government laws in each jurisdiction in which the cloud operates and with global industry requirements, such as the one by the Payment Card Industry, as many businesses depend on cloud computing.

**REFERENCES**

Amazon. (2011). AWS Security Center. http://aws.amazon.com/security/ Accessed 11/15/13.

Antonopoulos, N., & Gillam, L. (2010). Cloud Computing: Principles, Systems and Applications. Springer: London, UK.

Armbrust, M., Fox, A., Griffith, R., Joseph, A. D., Katz, R., Konwinski, A., Lee, G., Patterson, D., Rabkin, A., Stoica, I., & Zaharia, M. (2010). A view of cloud computing. _Communications of ACM_, 53(4), 50-58.

Bhattacharjee, R. (2009). An Analysis of the Cloud Computing Platforms. _MIT MS Thesis_, Cambridge, MA.
Bhensook, N., & Senivongse, T. (2012). An Assessment of Security Requirements Compliance of Cloud Providers. _Proceedings of the IEEE 4th International Conference on Cloud Computing Technology and Science_, 520-525.

Borek, C., Christensen, L., Hess, P., Lerner, J., & Rafert, G. (2012). Lost in the Clouds: The Impact of Copyright Scope on Investment in Cloud Computing Ventures. http://www.intertic.org/Conference/Lerner.pdf Accessed 11/22/13.

Brodkin, J. (2008). Gartner: Seven Cloud Computing Risks. http://www.idi.ntnu.no/emner/tdt60/papers/Cloud_Computing_Security_Risk.pdf Accessed 11/15/13.

Capitol Records vs MP3Tunes. (2011). Manhattan U.S. District Court, 1:07-cv-09931.

Cartoon Network vs CSC Holdings. (2008). US 2nd Circuit Court Ruling, 536 F.3d 121.

Catteddu, D. (2010). Cloud Computing: Benefits, Risks and Recommendations for Information Security. _Web Applications Security_, edited by Serrao, C., Diaz, V. A., & Cerullo, F., Springer: Berlin, Germany.

Chen, Y., Paxson, V., & Katz, R. H. (2010). What is new about cloud computing security. _UC Berkeley Technical Report_ EECS-2010-5.

Cloud Security Alliance. (2013). http://www.cloudsecurityalliance.org Accessed 11/15/13.

Craig-Wood, K. (2010). Definition of Cloud Computing incorporating NIST and G-Cloud views. http://www.katescomment.com/definition-of-cloud-computing-nist-g-cloud/

Doelitzscher, F., Reich, C., & Sulistio, A. (2010). _Proceedings of the 10th IEEE International Conference on Computer and Information Technology_, 930-935.

Durbano, J. P., Rustvold, D., Saylor, G., & Studarus, J. (2010). Securing the cloud. _Cloud Computing_, edited by Antonopoulos, N. & Gillam, L., 289-302, Springer: London, UK.

Gellman, R. (2009). Privacy in the clouds: risks to privacy and confidentiality from cloud computing. World Privacy Forum.

Hamlen, K., Kantarcioglu, M., Khan, L., & Bhavani, T. (2010). Security Issues for Cloud Computing. _International Journal of Information Security and Privacy_, 4(2), 39-51.

Hayes, B. (2008). Cloud Computing. _Communications of ACM_, 51(7), 9-11.

Idziorek, J., & Tannian, M. (2012). Security analysis of public cloud computing. _International Journal of Communication Networks and Distributed Systems_, 9(1&2), 4-20.

Mell, P., & Grance, T. (2011). The NIST Definition of Cloud Computing. NIST, Gaithersburg, MD.

Modi, C., Patel, D., Borisaniya, B., Patel, A., & Rajarajan, M. (2013). A survey on security issues and solutions at different layers of cloud computing. _Journal of Supercomputing_, 63(2), 563-592.

NIST SP 800-145. (2011). The NIST Definition of cloud computing. http://csrc.nist.gov/publications/nistpubs/800-145/SP800-145.pdf Accessed 11/15/13.

Ponemon. (2011). Security of cloud computing providers study. Ponemon Institute, May.

Ristenpart, T., Tromer, E., Shacham, H., & Savage, S. (2009). Hey, You, Get Off of My Cloud: Exploring Information Leakage in Third-Party Compute Clouds. _Proceedings of the 16th ACM Conference on Computer and Communications Security_, Chicago, IL.

Ryan, M. D. (2011). Cloud Computing Privacy Concerns on our Doorstep. _Communications of ACM_, 54(1), 36-38.
Trivedi, K., & Pasley, K. (2012). Cloud Computing Security. Cisco Press: Indianapolis, IN.

Viega, J. (2009). Cloud computing and the common man. _IEEE Computer_, 42(8), 106-108.

Vincent, M., & Crooks, K. (2013). Cloud Computing: What legal commitments can you expect from your provider? _White Paper_, SheltonIP, Sydney, Australia, 1-21. http://i.haymarket.net.au/Assets/cloudcover2013.pdf Accessed 11/15/13.

Wang, C., Wang, Q., Ren, K., & Lou, W. (2009, July). Ensuring data storage security in cloud computing. _17th International Workshop on Quality of Service_, 1-9.

Wittow, M. H. (2011). Cloud Computing: Recent Cases and Anticipating New Types of Claims. _The Computer and Internet Lawyer_, 28(1), 1-8.

Yang, H., & Tate, M. (2012). A descriptive literature review and classification of cloud computing research. _Communications of AIS_, 31(2), 35-60.

Zissis, D., & Lekkas, D. (2012). Addressing cloud computing security issues. _Future Generation Computer Systems_, 28(3), 583-592.
{ "disclaimer": "Notice: The following paper fields have been elided by the publisher: {'references'}. Paper or abstract available at https://api.unpaywall.org/v2/10.58729/1941-6679.1020?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.58729/1941-6679.1020, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "BRONZE", "url": "https://scholarworks.lib.csusb.edu/cgi/viewcontent.cgi?article=1020&context=jitim" }
2013
[]
true
2013-04-01T00:00:00
[]
13,652
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Engineering", "source": "external" }, { "category": "Mathematics", "source": "external" }, { "category": "Medicine", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Engineering", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/028f829fc1d47a64dd4ca5db564f8b50deba9ff5
[ "Computer Science", "Engineering", "Mathematics", "Medicine" ]
0.862817
An Optimality Summary: Secret Key Agreement with Physical Unclonable Functions
028f829fc1d47a64dd4ca5db564f8b50deba9ff5
IACR Cryptology ePrint Archive
[ { "authorId": "10417153", "name": "O. Günlü" }, { "authorId": "2307454", "name": "Rafael F. Schaefer" } ]
{ "alternate_issns": null, "alternate_names": [ "IACR Cryptol eprint Arch" ], "alternate_urls": null, "id": "166fd2b5-a928-4a98-a449-3b90935cc101", "issn": null, "name": "IACR Cryptology ePrint Archive", "type": "journal", "url": "http://eprint.iacr.org/" }
We address security and privacy problems for digital devices and biometrics from an information-theoretic optimality perspective to conduct authentication, message encryption/decryption, identification or secure and private computations by using a secret key. A physical unclonable function (PUF) provides local security to digital devices and this review gives the most relevant summary for information theorists, coding theorists, and signal processing community members who are interested in optimal PUF constructions. Low-complexity signal processing methods are applied to simplify information-theoretic analyses. The best trade-offs between the privacy-leakage, secret-key, and storage rates are discussed. Proposed optimal constructions that jointly design the vector quantizer and error-correction code parameters are listed. These constructions include modern and algebraic codes such as polar codes and convolutional codes, both of which can achieve small block-error probabilities at short block lengths, corresponding to a small number of PUF circuits. Open problems in the PUF literature from signal processing, information theory, coding theory, and hardware complexity perspectives and their combinations are listed to stimulate further advancements in the research on local privacy and security.
# entropy _Review_

## An Optimality Summary: Secret Key Agreement with Physical Unclonable Functions

**Onur Günlü *** **and Rafael F. Schaefer**

Information Theory and Applications Chair, Technische Universität Berlin, 10623 Berlin, Germany; rafael.schaefer@tu-berlin.de

***** Correspondence: guenlue@tu-berlin.de; Tel.: +49-30-314-26632

**Abstract:** We address security and privacy problems for digital devices and biometrics from an information-theoretic optimality perspective to conduct authentication, message encryption/decryption, identification or secure and private computations by using a secret key. A physical unclonable function (PUF) provides local security to digital devices and this review gives the most relevant summary for information theorists, coding theorists, and signal processing community members who are interested in optimal PUF constructions. Low-complexity signal processing methods are applied to simplify information-theoretic analyses. The best trade-offs between the privacy-leakage, secret-key, and storage rates are discussed. Proposed optimal constructions that jointly design the vector quantizer and error-correction code parameters are listed. These constructions include modern and algebraic codes such as polar codes and convolutional codes, both of which can achieve small block-error probabilities at short block lengths, corresponding to a small number of PUF circuits. Open problems in the PUF literature from signal processing, information theory, coding theory, and hardware complexity perspectives and their combinations are listed to stimulate further advancements in the research on local privacy and security.

**Citation:** Günlü, O.; Schaefer, R.F. An Optimality Summary: Key Agreement with Physical Unclonable Functions. Entropy 2021, 23, 16. [https://doi.org/10.3390/e23010016](https://doi.org/10.3390/e23010016)

Received: 4 November 2020; Accepted: 16 December 2020; Published: 24 December 2020

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license ([https://creativecommons.org/licenses/by/4.0/](https://creativecommons.org/licenses/by/4.0/)).

**Keywords:** physical unclonable functions (PUFs); private authentication; secret key generation; information theoretic privacy; code constructions for security

**1. Motivations**

Fundamental advances in cryptography were made in secret during the 20th century. One exception was Claude E. Shannon's paper "Communication Theory of Secrecy Systems" [1]. Until 1967, the literature on security was not extensive, but a book [2] with a historical review of cryptography changed this trend [3]. Since then, the amount of sensitive data to be protected against attackers has increased significantly. Continuous improvements in security are needed and every improvement creates new possibilities for attacks [4].
Recent hardware-intrinsic security systems, biometric secrecy systems, 5th generation of cellular mobile communication networks (5G) and beyond, as well as internet of things (IoT) networks, have numerous noticeable characteristics that differentiate them from existing mechanisms. These include large numbers of low-complexity terminals with light or no infrastructure, stringent constraints on latency, and primary applications of inference, data gathering, and control. Such characteristics make it difficult to achieve a sufficient level of secrecy and privacy. Traditional cryptographic protocols, requiring certificate management or key distribution, might not be able to handle various applications supported by such technologies and might not be able to assure the privacy of personal information in the data collected. Similarly, low-complexity terminals might not have the necessary processing power to handle such protocols, or latency constraints might not permit the processing time required for cryptographic operations. Similarly, traditional methods that store a secret key in a secure nonvolatile memory (NVM) can be illustrated to be not secure because of possible invasive attacks on the hardware. Thus, secrecy and privacy for information systems are issues that need to be rethought in the context of recent networks, digital circuits, and database storage.

Information-theoretic security is an emerging approach to provide secrecy and privacy, for example, for wireless communication systems and networks by exploiting the unique characteristics of the wireless communication channel. Information-theoretic security methods such as physical layer security (PLS) use signal processing, advanced coding, and communication techniques to secure wireless communications at the physical layer. There are two key advantages of PLS. Firstly, it enables the use of resources available at the physical layer such as multiple measurements, channel training mechanisms, power, and rate control, which cannot be utilized by the upper layers of the protocol stack. Secondly, it is based on an information-theoretic foundation for secrecy and privacy that does not make assumptions on the computational capabilities of adversaries, unlike cryptographic primitives.

By considering the security and privacy requirements of recent digital systems and the potential benefits from information-theoretic security and privacy methods, it can be seen that information-theoretic methods can complement or even replace conventional cryptographic protocols for wireless networks, databases, and user authentication and identification. Since information-theoretic methods do not generally require pre-shared secret keys, they might considerably simplify the key management in complicated networks. Thus, these methods might be able to fulfill the stringent hardware area constraints of digital devices and delay constraints in 5G/6G applications, or to avoid unnecessary computations, increasing the battery life of low-power devices. Information-theoretic methods offer "built-in" secrecy and privacy, generally independent of the network infrastructure, providing better scalability with respect to an increase in the network or data size.

A promising local solution to information-theoretic security and privacy problems is a physical unclonable function (PUF) [5]. PUFs generate "fingerprints" for physical devices by using their intrinsic and unclonable properties.
For instance, consider ring oscillators (ROs) with a logic circuit of multiple inverters serially connected with a feedback of the output of the last inverter into the input of the first inverter, as depicted in Figure 1. RO outputs are oscillation frequencies $1/\hat{x}$, where $\hat{x}$ is the oscillation period, that are unique and uncontrollable since the difference between different RO outputs is caused by submicron random manufacturing variations that cannot be controlled. One can use RO outputs as a source of randomness, called a PUF circuit, to extract secret keys that are unique to the digital device that embodies these ROs. The complete method that puts out a unique secret key by using RO outputs is called an RO PUF. Similarly, binary static random access memory (SRAM) outputs are utilized as a source of randomness to implement SRAM PUFs in almost all digital devices because most digital devices have embedded SRAMs used for data storage. The logic circuit of an SRAM is depicted in Figure 2 and the logically stable states of an SRAM cell are $(Q, \bar{Q}) = (1, 0)$ and $(0, 1)$. During the power-up, the state is undefined if the manufacturer did not fix it. The undefined power-up state of an SRAM cell converges to one of the stable states due to random and uncontrollable mismatch of the inverter parameters, fixed when the SRAM cell is manufactured [6]. There is also random noise in the cell that affects the cell at every power-up. Since the physical mismatch of the cross-coupled inverters is due to manufacturing variations, an SRAM cell output during power-up is a PUF output that is a response with one challenge, where the challenge is the address of the SRAM cell [6].

**Figure 1.** Ring oscillator (RO) logic circuit.

**Figure 2.** Static random access memory (SRAM) logic circuit.

PUFs resemble biometric features of human beings. In this review, we will list state-of-the-art methods that bridge the gap between the practical secrecy systems that use PUFs and the information-theoretic security limits by

- Modeling real PUF outputs to solve security problems with valid assumptions;
- Analyzing methods that make information-theoretic analysis tractable, for example, by transforming PUF symbols so that the transform-domain outputs are almost independent and identically distributed (i.i.d.), and that result in smaller hardware area than benchmark designs in the literature;
- Stating the information-theoretic limits for realistic PUF output models and providing optimal and practical (i.e., low-complexity and finite-length) code constructions that achieve these limits;
- Illustrating best-in-class nested codes for realistic PUF output models.

In short, we start with real PUF outputs to obtain mathematically-tractable models of their behavior and then list optimal code constructions for these models. Since we discuss methods developed from the fundamentals of signal processing and information theory, any further improvements in this topic are likely to follow the listed steps in this review.

_Organization and Main Insights_

In Section 2, we provide a definition of a PUF, list its existing and potential applications, and analyze the most promising PUF types. The PUF output models and design challenges faced when manufacturing reliable, low-complexity, and secure PUFs are listed in Section 3.
The main security challenge in designing PUFs, i.e., output correlations, is tackled in Section 4 mainly by using a transform coding method, which can provably protect PUFs against various machine learning attacks. The reliability and secrecy performance (e.g., the number of authenticated users) metrics used for PUF designs are defined and jointly optimized in Section 5. PUF security and complexity performance evaluations for the defined transform coding method are given in Section 6. Performance results for error-correction codes used in combination with previous code constructions for key extraction with PUFs are shown in Section 7 in order to illustrate that previous key extraction methods are strictly suboptimal. We next define the information-theoretic metrics and the ultimate key-leakage-storage rate regions for the key agreement with PUFs problem, as well as comparing available code constructions for the key agreement problem, in Section 8. Optimal code constructions for key extraction with PUFs are implemented in Section 9 by using nested polar codes, which are used in 5G networks in the control channel, to illustrate significant gains from using optimal code constructions. In Section 10, we provide a list of open PUF problems that might be interesting for information theorists, coding theorists, and signal processing researchers in addition to the PUF community.

**2. PUF Basics**

We give a brief review of the literature on PUFs and discuss the problems with previous PUF designs that can be tackled by using signal processing and coding-theoretic methods.

A PUF is defined as an unclonable function embodied in a device. In the literature, there are alternative expansions of the term PUF such as "physically unclonable function", suggesting that it is a function that is only physically unclonable. Such PUFs may provide a weaker security guarantee since they allow their functions to be digitally cloned. For any practical application of a PUF, we need the property of unclonability both physically and digitally. We therefore consider a function as a PUF only when the function is a physical function, i.e., it is in a device, and it is not possible to clone it physically and digitally.

Physical identifiers such as PUFs are heuristically defined to be complex challenge-response mappings that depend on the random variations in a physical object. Secret sequences are derived from this complex mapping, which can be used as a secret key. One important feature of PUFs is that the secret sequence generated is not required to be stored and it can be regenerated on demand. This property makes PUFs cheaper (no requirement for a memory for secret storage) and safer (the secret sequence is regenerated only on demand) alternatives to other secret generation and storage techniques such as storing the secret in an NVM [5]. There is an immense number of PUF types, which makes it practically impossible to give a single definition of PUFs that covers all types. We provide the following definition of PUFs that includes all PUF types of interest for this review.

**Definition 1** ([5])**.**
_We define a PUF as a challenge-response mapping embodied by a device such that it is fast and easy for the device to put out the PUF response and hard for an attacker, who does not have access to the PUF circuits, to determine the PUF output to a randomly chosen input, given that a set of challenge-response (or input-output) pairs is accessible to him._

The terms used in Definition 1, i.e., fast, easy, and hard, are relative terms that should be quantified for each PUF application separately. There are physical functions, called physical one-way functions (POWFs), in the literature that are closely related to PUFs. Such functions are obtained by applying the cryptographic method of "one-way functions", which refers to easy-to-evaluate and (on average) difficult-to-invert functions [7], to physical systems. As the first example of POWFs, the pattern of the speckle obtained from waves that propagate through a disordered medium is a one-way function of both the physical randomness in the medium and the angle of the beam used to generate the optical waves [8].

Similar to POWFs, biometric identifiers such as the iris, retina, and fingerprints are closely related to PUFs. Most of the assumptions made for biometric identifiers are satisfied also by PUFs, so we can apply almost all of the results in the literature for biometric identifiers to PUFs. However, it is common practice to assume that PUFs can resist invasive (physical) attacks, which are considered to be the most powerful attacks used to obtain information about a secret in a system, unlike biometric identifiers that are constantly available for attacks. The reason for this assumption is that invasive attacks permanently destroy the fragile PUF outputs [5]. This assumption will be the basis for the PUF system models used throughout this review. We therefore assume that the attacker does not observe a sequence that is correlated with the PUF outputs, unlike biometric identifiers, since physical attacks applied to obtain such a sequence permanently change the PUF outputs.

_2.1. Applications of PUFs_

A PUF can be seen as a source of random sequences hidden from an attacker who does not have access to the PUF outputs. Therefore, any application that takes a secret sequence as input can theoretically use PUFs. We list some scenarios where PUFs fit well practically:

- Security of information in wireless networks with an eavesdropper, i.e., a passive attacker, is a PLS problem. Consider Wyner's wiretap channel (WTC) model introduced in [9]. This model is the most common PLS model, which is a channel coding problem, unlike the secret key agreement problem we consider below that is a source coding problem. A randomized encoder helps the transmitter in keeping the message secret by confusing the eavesdropper. Therefore, at the WTC transmitter, PUFs can be used as the local randomness source when a message should be sent securely through the wiretap channel.
- Consider a 5G/6G mobile device that uses a set of SRAM outputs, which are available in mobile devices, as PUF circuits to extract secret keys so that the messages to be sent are encrypted with these secret keys before sending the data over the wireless channel. Thus, the receiver (e.g., a base station) that previously obtained the secret keys (sent by mobile devices, e.g., via public key cryptography) can decrypt the data, while an eavesdropper who only overhears the data broadcast over the wireless channel cannot easily learn the message sent.
- The controller area network (CAN) bus standard used in modern vehicles is illustrated in [10] to be susceptible to denial-of-service attacks, which shows that safety-critical inputs of the internal vehicle network such as brakes and throttle can be controlled by an attacker. One countermeasure is to encrypt the transmitted CAN frames by using block ciphers with secret keys generated from PUF outputs used as inputs.
- IoT devices such as wearable or e-health devices may carry sensitive data and use a PUF to store secret keys in such a way that only a device to which the secret keys are accessible can command the IoT devices. One common example of such applications is when PUFs are used to authenticate wireless body sensor network devices [11].
- Cloud storage requires security to protect users' sensitive data. However, securing the cloud is expensive and the users do not necessarily trust the cloud service providers. A PUF in a universal serial bus (USB) token, i.e., Saturnus, has been trademarked to encrypt user data before uploading the data to the cloud, decrypted locally by reconstructing the same secret from the same PUF.
- System developers want to mutually authenticate a field programmable gate array (FPGA) chip and the intellectual property (IP) components in the chip, and IP developers want to protect the IP. In [12], a protocol is described to achieve these goals with a small hardware area that uses one symmetric cipher and one PUF.

Other applications of PUFs include providing non-repudiation (i.e., undeniable transmission or reception of data), proof of execution on a specific processor, and remote integrated circuit (IC) enabling. Every application of PUFs has different assumptions about the PUF properties, computational complexity, and the specific system models. Therefore, there are different constraints and system parameters for each application. We focus mainly on the application where a secret key is generated from a PUF for user or device authentication with privacy and secrecy guarantees, and low complexity.

_2.2. Main PUF Types_

We review four PUF types, i.e., silicon, arbiter, RO, and SRAM PUFs. We consider mainly the last two PUF types for algorithm and code designs due to their common use in practice and because signal processing techniques can tackle the problems arising in designing these PUFs. For a review of other PUF types that are mostly considered in the hardware design and computer science literature, and various classifications of PUFs, see, for example, [4,13,14]. The four PUF types considered below can be shown to satisfy the assumption that invasive attacks permanently change PUF outputs, since the digital circuit outputs used as the source of randomness in these PUF types change permanently under invasive attacks due to their dependence on nano-scale alterations in the hardware.

_2.3. Silicon and Arbiter PUFs_

Common complementary metal-oxide-semiconductor (CMOS) manufacturing processes are used to build silicon PUFs, where the response of the PUF depends on the circuit delays, which vary across integrated circuits (ICs) [5]. Due to the high sensitivity of the circuit delays to environmental changes (e.g., ambient temperature and power supply voltage), arbiter PUFs are proposed in [15], for which an arbiter (i.e., a simple transparent data latch) is added to the silicon PUFs so that the delay comparison result is a single bit.
The difference of the path delays is mapped to, for example, the bit 0 if the first path is faster, and the bit 1 otherwise. The difference between the delays can be small, causing meta-stable outputs. Since the output of the mapper is generally pre-assigned to the bit 0, the signals that are incoming are required to satisfy a setup time ($t_{\text{setup}}$), required by the latch to change the output to the bit 1, resulting in a bias in the arbiter PUF outputs. Symmetrically implementable latches (e.g., set-reset latches) should be used to overcome this problem, which is difficult because FPGA routing does not allow the user to enforce symmetry in the hardware implementation. We discuss below that PUFs without symmetry requirements, for example, RO PUFs, provide better results.

_2.4. RO PUFs_

The RO logic circuit is depicted in Figure 1, where an odd number of inverters are connected serially with feedback. The first logic gate in Figure 1 is a NAND gate, giving the same logic output as an inverter gate when the ENABLE signal is 1 (ON), to enable/disable the RO circuit. The manufacturing-dependent and uncontrollable component in an RO is the total propagation delay of an input signal to flow through the RO, determining the oscillation frequency $1/\hat{x}$ of an RO that is used as the source of randomness. A self-sustained oscillation is possible when the ring that oscillates at the oscillation frequency $1/\hat{x}$ of the RO provides a phase shift of $2\pi$ with a voltage gain of 1.

Consider an RO with $m \geq 3$ inverters. Each inverter should provide a phase shift of $\pi/m$ with an additional phase shift of $\pi$ due to the feedback. Therefore, the signal should flow through the RO twice to provide the necessary phase shift [16]. Suppose a propagation delay of $\tau_d$ for each inverter, so the oscillation frequency of an RO is $1/\hat{x} = 1/(2m\tau_d)$. We remark that since RO outputs are generally measured by using 32-bit counters, it is realistic to assume that a measured RO output $1/\hat{x}$ is a realization of a continuous distribution that can be modeled by using the histogram of a family of RO outputs with the same circuit design, as assumed below.

The propagation delay $\tau_d$ is affected by nonlinearities in the digital circuit. Furthermore, there are deterministic and additional random noise sources [16]. Such effects should be eliminated to have a reliable RO output. Rather than improving the standard RO designs, which would impose the condition that manufacturers should change their RO designs, the first proposal to fix the reliability problem was to make hard bit decisions by comparing RO pairs [17], as illustrated in Figure 3.

**Figure 3.** The first and most common RO physical unclonable function (PUF) design [17].

In Figure 3, the multiplexers are challenged by a bit sequence of length at most $\lceil \log_2 N \rceil$ so that an RO pair out of $N$ ROs is selected. The counters count the number of times a rising edge is observed for each RO during a fixed time. A logic bit decision is made by comparing the counter values, which can be bijectively mapped to the oscillation frequencies. For instance, when the upper RO has a greater counter value, then the bit 0 is generated; otherwise, the bit 1. Given that ROs are identically laid out in the hardware, the differences in the oscillation frequencies are determined mainly by uncontrollable manufacturing variations.
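To make the comparison-based extraction concrete, the following minimal Python simulation sketches the disjoint-pair method of Figure 3; the array size, Gaussian process-variation spread, and measurement-noise level are illustrative assumptions, not values from [17].

```python
import numpy as np

rng = np.random.default_rng(1)

N = 16                           # number of ROs in the array (assumed)
MU_F, SIGMA_PROC = 100e6, 1e6    # nominal frequency and process spread (assumed)
SIGMA_NOISE = 50e3               # per-measurement noise (assumed)

# Manufacturing variation: fixed per device, drawn once.
f_nominal = rng.normal(MU_F, SIGMA_PROC, size=N)

def measure(freqs):
    """One noisy frequency measurement of every RO."""
    return freqs + rng.normal(0.0, SIGMA_NOISE, size=freqs.size)

def extract_bits(freqs):
    """Compare disjoint RO pairs (1,2), (3,4), ...: bit 0 if the first
    RO of the pair is faster, bit 1 otherwise, as in Figure 3."""
    pairs = freqs.reshape(-1, 2)
    return (pairs[:, 0] < pairs[:, 1]).astype(np.uint8)

x_enroll = extract_bits(measure(f_nominal))    # enrollment bits
y_reconst = extract_bits(measure(f_nominal))   # later, noisy reconstruction
print("enrolled bits     :", x_enroll)
print("reconstructed bits:", y_reconst)
print("bit errors        :", int(np.sum(x_enroll ^ y_reconst)))
```

Pairs whose nominal frequencies happen to be close produce the unreliable bits that motivate the masking and transform-coding methods discussed next.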
Furthermore, it is not necessary to have a symmetric layout when hard-macro hardware designs are used for different ROs, unlike arbiter PUFs. The key extraction method illustrated in Figure 3 gives an output of $\binom{N}{2}$ bits, which are correlated due to overlapping RO comparisons. This causes a security threat and makes the RO PUF vulnerable to various attacks, including machine learning attacks. Thus, nonoverlapping pairs of ROs are used in [17] to extract each bit. However, there are systematic variations in the neighboring ROs due to the surrounding logic, which also should be eliminated to extract sequences with full entropy. Furthermore, ambient temperature and supply voltage variations are the most important effects that reduce the reliability of RO PUF outputs. A scheme called 1-out-of-k masking is proposed as a countermeasure to these effects, which compares the RO pairs that have the maximum difference between their oscillation frequencies for a wide range of temperatures and voltages to extract bits [17]. The bits extracted by such a comparison are more reliable than the bits extracted by using previous methods. The main disadvantages of this scheme are that it is inefficient due to unused RO pairs, and only a single bit is extracted from the (semi-)continuous RO outputs. We review transform-coding based RO PUF methods below that significantly improve on these methods without changing the standard RO hardware designs.

_2.5. SRAM PUFs_

There are multiple memory-based PUFs such as SRAM, Flip-flop, DRAM, and Butterfly PUFs. Their common feature is to possess a small number of challenge-response pairs with respect to their sizes. As the most promising memory-based PUF type that is already used in the industry, we consider SRAM PUFs that use the uncontrollable settling state of bi-stable circuits [18]. In the standard SRAM design, there are four transistors used to form the logic of two cross-coupled inverters, as depicted in Figure 2, and two other transistors to access the inverters. The power-up state, i.e., $(Q, \bar{Q}) = (1, 0)$ or $(0, 1)$, of an SRAM cell provides one secret bit. Concatenating many such bits allows to generate a secret key from SRAM PUFs on demand. We provide an open problem about SRAM PUFs in Section 10.

**3. Correlated, Biased, and Noisy PUF Outputs**

PUF circuit outputs are biased (nonuniform), correlated (dependent), and noisy (erroneous). We review a transform-coding algorithm that extracts an almost i.i.d. uniform bit sequence from each PUF, so a helper-data generation algorithm can correct the bit errors in the sequence generated from noisy PUF outputs. Using this transform-coding algorithm, we also obtain memoryless PUF measurement-channel models, so standard information-theoretic tools, which cannot be easily applied to correlated sequences, can be used.

**Remark 1.** _The bias in the PUF circuit outputs is considered in the PUF literature to be a big threat against the security of the key generated from PUFs since the bias allows to apply, for example, machine learning attacks. However, it is illustrated in [19] (Figure 6) that the output bias does not change the information-theoretic rate regions significantly, illustrating that there exist code constructions that do not require PUF outputs to be uniformly distributed._

We consider two scenarios, where a secret key is either generated from PUF outputs (i.e., the generated-secret [GS] model) or bound to PUF outputs (the chosen-secret [CS] model).
An example of GS methods is code-offset fuzzy extractors (COFE) [20], and an example of CS methods is the fuzzy-commitment scheme (FCS) [21]. We first analyze a method that significantly improves privacy, reliability, hardware cost, and secrecy performance by transforming the PUF outputs into a frequency domain, which are later used in the FCS. We remark that the information-theoretic analysis of the CS model follows directly from the analysis of the GS model [22], so one can use either model for comparisons.

PUF output correlations might cause information leakage about the PUF outputs (i.e., privacy leakage) and about the secret key (i.e., secrecy leakage) [22,23]. Furthermore, channel codes are required to satisfy the constraint on the reliability due to output noise. The transform coding method proposed in [24] adjusts the PUF output noise to satisfy the reliability constraint in addition to reducing the PUF output correlations.

_3.1. PUF Output Model_

Consider a (semi-)continuous output physical function such as an RO output as a source with real-valued outputs $\hat{x}$. Since in a two-dimensional (2D) array the maximum distance between RO hardware logic circuits is less than in a one-dimensional array, decreasing the variations in the RO outputs caused by surrounding hardware logic circuits [25], we consider a 2D RO array of size $l = r \times c$ that can be represented as a vector random variable $\widetilde{X}^l$. Each device embodies a single 2D RO array that has the same circuit design and we have $\widetilde{X}^l \sim f_{\widetilde{X}^l}$, where $f_{\widetilde{X}^l}$ is a probability density function. Mutually independent and additive Gaussian noise denoted as $\widetilde{Z}^l$ disturbs the RO outputs, i.e., we have noisy RO outputs $\widetilde{Y}^l = \widetilde{X}^l + \widetilde{Z}^l$. Since $\widetilde{X}^l$ and $\widetilde{Y}^l$ are dependent, a secret key can be agreed by using these outputs [26,27].

**Remark 2.** _PUF outputs are noisy, as discussed above in this section. However, the first PUF outputs are used by, for example, a manufacturer to generate or embed a secret key, which is called the enrollment procedure. Since a manufacturer can measure multiple noisy outputs of the same RO to estimate the noiseless RO output, we can consider that the PUF outputs measured during enrollment are noiseless. However, during the reconstruction step, for example, an IoT device observes a noisy RO output, which can be the case because the IoT device cannot measure the RO outputs multiple times due to delay and complexity constraints. Therefore, we consider a key-agreement model where the first measurement sequence (during enrollment) is noiseless and the second measurement sequence (during reconstruction) is noisy; see also Section 8. Extensions to key agreement models with two noisy sequences, where the noise components can be correlated, are discussed in [23,28,29]._

We extract i.i.d. symbols from $\widetilde{X}^l$ and $\widetilde{Y}^l$ such that the information-theoretic tools used in [30] for the FCS can be applied. An algorithm is proposed in [24] to obtain almost i.i.d., uniformly-distributed binary vectors $X^n$ and $Y^n$ from $\widetilde{X}^l$ and $\widetilde{Y}^l$, respectively. For such $X^n$ and $Y^n$, we can define a binary error vector as $E^n = X^n \oplus Y^n$, where $\oplus$ is the modulo-2 sum. We then obtain the random sequence $E^n \sim \text{Bernoulli}^n(p)$, so the channel $P_{Y|X} \sim \text{BSC}(p)$ is a binary symmetric channel (BSC) with crossover probability $p$.
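As a quick illustration of this error model, the Python sketch below generates the error vector $E^n$ and estimates the crossover probability from the two measurement sequences; the sequence length and $p$ are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

n, p = 10_000, 0.0097            # illustrative length and crossover probability
x = rng.integers(0, 2, size=n, dtype=np.uint8)   # enrollment bits X^n
e = (rng.random(n) < p).astype(np.uint8)         # E^n ~ Bernoulli^n(p)
y = x ^ e                                        # BSC(p) output Y^n = X^n XOR E^n

p_hat = np.mean(x ^ y)           # empirical crossover probability
print(f"estimated p = {p_hat:.4f} (true p = {p})")
```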
We discuss a transform-coding method below, which further provides reliability guarantees for each bit generated. The FCS can reconstruct a secret key from dependent random variables with zero secrecy leakage [21]. For the FCS, depicted in Figure 4, an encoder Enc(·) maps a secret key $S$, which is uniformly distributed in the set $\{1, 2, \ldots, |\mathcal{S}|\}$, into a codeword $C^n$ with binary symbols that are later added to the PUF-output sequence $X^n$ in modulo-2 during enrollment. The output is called helper data $W$, sent to a database via a noiseless, public and authenticated communication link. The sum of $W$ and $Y^n$ in modulo-2 is $R^n = W \oplus Y^n = C^n \oplus E^n$, mapped to a secret key estimate $\hat{S}$ during reconstruction by the decoder Dec(·).

We next give information-theoretic rate regions for the FCS; see [31] for information-theoretic notation and basics.

**Figure 4.** The fuzzy commitment scheme (FCS).

**Definition 2.** _The FCS can achieve a secret-key vs. privacy-leakage rate pair $(R_s, R_\ell)$ with zero secrecy leakage (i.e., perfect secrecy) if, given any $\delta > 0$, there is some $n \geq 1$, and an encoder and decoder pair for which we have $R_s = \frac{\log |\mathcal{S}|}{n}$ and_

$\Pr[\hat{S} \neq S] = P_B \leq \delta$  (reliability) (1)

$H(S) \geq n(R_s - \delta)$  (key uniformity) (2)

$I(S; W) = 0$  (perfect secrecy) (3)

$I(X^n; W) \leq n(R_\ell + \delta)$  (privacy) (4)

_where (3) suggests that $S$ and $W$ are independent and (4) suggests that the rate of dependency between $X^n$ and $W$ is bounded. The achievable secret-key vs. privacy-leakage rate, or key-leakage, region $\mathcal{R}_{\text{FCS}}$ for the FCS is the union of all achievable pairs._

**Theorem 1** ([30])**.** _The key-leakage region $\mathcal{R}_{\text{FCS}}$ for the FCS with perfect secrecy, uniformly-distributed $X$ and $Y$, and a channel $P_{Y|X} \sim \text{BSC}(p)$ is_

$\mathcal{R}_{\text{FCS}} = \{(R_s, R_\ell) : 0 \leq R_s \leq 1 - H_b(p),\; R_\ell \geq 1 - R_s\}$ (5)

_where $H_b(p) = -p \log p - (1 - p) \log(1 - p)$ is defined as the binary entropy function._

The region $\mathcal{R}$ of all achievable (secret-key, privacy-leakage) rate pairs for the CS model with a negligible secrecy-leakage rate is [22]

$\mathcal{R} = \bigcup_{P_{U|X}} \{(R_s, R_\ell) : 0 \leq R_s \leq I(Y; U),\; R_\ell \geq I(X; U) - I(Y; U)\}$ (6)

such that $U - X - Y$ forms a Markov chain and it suffices to have $|\mathcal{U}| \leq |\mathcal{X}| + 1$. The auxiliary random variable $U$ represents a distorted version of $X$ through a channel $P_{U|X}$. The FCS is optimal only at the point $(R_s^*, R_\ell^*) = (1 - H_b(p), H_b(p))$ [30], corresponding to the maximum secret-key rate.

**4. Transformation Steps**

Transform coding methods decrease RO output correlations for ROs that are in the same 2D array by using, for example, a linear transformation. We discuss a transform-coding algorithm proposed in [32] as an extension of [24] to provide reliability guarantees to each generated bit. Joint optimization of the error-correction code and quantizer in order to maximize the reliability and secrecy are the main steps. The output of these post-processing steps is a bit sequence $X^n$ (or its noisy version $Y^n$) utilized in the FCS.

It suffices to discuss only the enrollment steps, depicted in Figure 5, since the same steps are used also for reconstruction. $\widetilde{X}^l$ are correlated RO outputs, where the cause of correlations is, for example, the surrounding logic in the hardware. A transform $T_{r \times c}(\cdot)$ of size $r \times c$ transforms RO outputs to decrease output correlations.
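To make the FCS flow of Figure 4 above concrete before the transform details, here is a minimal Python sketch; a toy 3x repetition code stands in for Enc(·)/Dec(·), and the lengths and crossover probability are illustrative assumptions rather than the designs analyzed below.

```python
import numpy as np

rng = np.random.default_rng(3)

K, REP = 4, 3                    # 4 secret bits, 3x repetition code (toy assumption)
n = K * REP
p = 0.05                         # BSC crossover probability (illustrative)

def enc(s):                      # Enc: secret bits -> codeword C^n
    return np.repeat(s, REP)

def dec(r):                      # Dec: majority vote per repetition block
    return (r.reshape(K, REP).sum(axis=1) > REP // 2).astype(np.uint8)

s = rng.integers(0, 2, size=K, dtype=np.uint8)   # secret key S
x = rng.integers(0, 2, size=n, dtype=np.uint8)   # PUF enrollment bits X^n
w = enc(s) ^ x                                   # helper data W = C^n XOR X^n

e = (rng.random(n) < p).astype(np.uint8)         # channel errors E^n
y = x ^ e                                        # reconstruction bits Y^n
r = w ^ y                                        # R^n = W XOR Y^n = C^n XOR E^n
s_hat = dec(r)

print("S    :", s)
print("S_hat:", s_hat, " match:", bool(np.all(s == s_hat)))
```

Note that $W \oplus Y^n$ cancels $X^n$, so the decoder only faces the error pattern $E^n$; the transform pipeline discussed next is what supplies nearly i.i.d., uniform $X^n$ and $Y^n$ in practice.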
We model each output $T$ in the transform domain, i.e., each transform coefficient, calculated by transforming the RO outputs given in the dataset [33]. Model selection by using the Bayesian information criterion (BIC) [34] and the corrected Akaike's information criterion (AICc) [35] suggests a Gaussian distribution as a good fit for the discrete Haar transform (DHT), discrete Walsh–Hadamard transform (DWHT), DCT, and Karhunen–Loève transform (KLT).

**Figure 5.** Transformation steps [24]: the RO outputs $\hat{X}^l$ are transformed by $T_{r \times c}$, histogram-equalized, and quantized with a Gray mapping to produce $X^n$.

In Figure 5, the histogram equalization changes the probability density of the $i$-th coefficient $T_i$ into a standard normal distribution so that quantizers are the same for all transform coefficients, decreasing the storage. The obtained coefficients $\widetilde{T}_i$ are independent when the transform coefficients $T_i$ are jointly Gaussian and the transform $T_{r \times c}(\cdot)$ decorrelates the RO outputs perfectly. For such a case, scalar quantizers do not introduce any performance loss. Bit extraction methods and scalar quantizers are given below for the FCS with the independence assumption, which can be combined with a correlation-thresholding approach in practice.

**5. Joint Quantizer and Error-Correction Code Design**

The steps in Figure 5 are applied to obtain a uniform binary sequence $X^n$. We utilize a quantizer $\Delta(\cdot)$ that assigns quantization-interval values of $k = 1, 2, \cdots, 2^{K_i}$, where $K_i$ represents the number of bits obtained from the $i$-th coefficient. We have

$\Delta(\hat{t}_i) = k$  if  $b_{k-1} < \hat{t}_i \leq b_k$ (7)

where $b_k = \Phi^{-1}\!\left(\frac{k}{2^{K_i}}\right)$ and $\Phi^{-1}(\cdot)$ is the quantile function of the standard Gaussian distribution. A length-$K_i$ bit sequence represents the output $k$. Since the noise has zero mean, we use a Gray mapping to determine the sequences assigned to each $k$, so neighboring sequences differ only in one bit.

_Quantizers with a Given Maximum Number of Errors_

We discuss a conservative approach that supposes the bits assigned to a quantized transform coefficient either all flip or are all correct. Let the correctness probability $P_c$ of a coefficient be the probability that all bits assigned to a transform coefficient are correct, used to choose the number of bits extracted from a coefficient in such a way that one can design a channel encoder with a bounded minimum distance decoder (BMDD) to satisfy the reliability constraint $P_B \leq 10^{-9}$, a common value for the block-error probability of PUFs that use CMOS circuits [17]. Let $Q(\cdot)$ be the Q-function, $f_{\widetilde{T}}$ the probability density of the standard Gaussian distribution, and $\sigma_{\hat{n}}^2$ the noise variance. The correctness probability can be calculated as

$P_c(K) = \sum_{k=0}^{2^K - 1} \int_{b_k}^{b_{k+1}} \left[ Q\!\left(\frac{b_k - \hat{t}}{\sigma_{\hat{n}}}\right) - Q\!\left(\frac{b_{k+1} - \hat{t}}{\sigma_{\hat{n}}}\right) \right] f_{\widetilde{T}}(\hat{t})\, d\hat{t}$ (8)

where $K$ is the length of the bit sequence assigned to a quantizer with quantization boundaries $b_k$ from (7) for an equalized Gaussian transform coefficient $\widetilde{T}$. In (8), we calculate the probability that the additive noise will not change the quantization interval assigned to the transform coefficient, i.e., all bits associated with the transform coefficient stay the same after adding noise.
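The boundaries in (7) and the integral in (8) are straightforward to evaluate numerically; below is a sketch assuming NumPy/SciPy are available, with an illustrative noise standard deviation after equalization.

```python
import numpy as np
from scipy import integrate
from scipy.stats import norm

def boundaries(K):
    """Quantization boundaries b_k = Phi^{-1}(k / 2^K) for k = 0..2^K."""
    return norm.ppf(np.arange(2**K + 1) / 2**K)

def p_correct(K, sigma_n):
    """Correctness probability P_c(K) from (8): probability that additive
    noise does not move an equalized coefficient out of its interval."""
    b = boundaries(K)
    total = 0.0
    for k in range(2**K):
        # Q(z) is the Gaussian tail, i.e., norm.sf(z).
        integrand = lambda t: (norm.sf((b[k] - t) / sigma_n)
                               - norm.sf((b[k + 1] - t) / sigma_n)) * norm.pdf(t)
        val, _ = integrate.quad(integrand, b[k], b[k + 1])
        total += val
    return total

sigma_n = 0.1   # illustrative noise standard deviation (assumption)
for K in range(1, 5):
    print(f"K = {K}: P_c = {p_correct(K, sigma_n):.6f}")
```

As expected, $P_c(K)$ drops as $K$ grows, since more, narrower intervals are easier for the noise to cross; this is exactly the trade-off the joint design below exploits.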
Assume that all errors in up to $C_{\max}$ coefficients can be corrected by a channel decoder, that the correctness probability $P_{c,i}(K)$ of the $i$-th coefficient $\widetilde{T}_i$ is greater than or equal to $P_c(C_{\max})$, and that errors occur independently. We first find the minimum correctness probability that satisfies $P_B \leq 10^{-9}$, denoted as $P_c(C_{\max})$, by solving

$\sum_{c = C_{\max}+1}^{l} \binom{l}{c} (1 - P_c(C_{\max}))^c\, P_c(C_{\max})^{l-c} \leq 10^{-9}$ (9)

which allows to find the maximum bit-sequence length $K_i$ for the $i$-th transform coefficient such that $P_{c,i}(K) \geq P_c(C_{\max})$. The first transform coefficient, i.e., the DC coefficient $\widetilde{T}_1$, can in general be estimated by an attacker, which is the first reason why it is not used for key extraction. As the second reason, temperature and voltage changes affect RO outputs highly linearly, which affects the DC coefficient the most [36]. Thus, we fix $K_1 = 0$, so the total number of extracted bits can be calculated as

$n(C_{\max}) = \sum_{i=2}^{l} K_i.$ (10)

We first sort the $K_i$ values in descending order such that $K_i' \geq K_{i+1}'$ for all $i = 1, 2, \ldots, l - 2$. Thus, up to

$e(C_{\max}) = \sum_{i=1}^{C_{\max}} K_i'$ (11)

bit errors must be corrected for the worst case scenario. Using a BMDD, a block code with minimum distance $d_{\min} \geq 2e(C_{\max}) + 1$ can satisfy this requirement [37].

The advanced encryption standard (AES) requires a seed of, e.g., a secret key with length 128 bits. If the FCS is applied to PUFs to extract such a secret key for the AES, the block code designed should have a code length $\leq n(C_{\max})$ bits, code dimension $\geq 128$ bits, and minimum distance $d_{\min} \geq 2e(C_{\max}) + 1$, given a $C_{\max}$. Such an optimization problem is generally hard to solve but, using an exhaustive search over different $C_{\max}$ values and over different algebraic codes, one can show the existence of a channel code that satisfies all constraints. Considering codes with low-complexity implementations is preferred for, e.g., IoT applications. We remark that the correctness probability might be significantly greater than $P_c(C_{\max})$, that the probability that less than $K_i$ bits are actually in error when the $i$-th coefficient is erroneous is high, and that the bit errors do not necessarily happen in the coefficients from which the maximum-length bit sequences are obtained. Therefore, we next illustrate that even though $e(C_{\max})$ errors cannot be corrected, the constraint $P_B \leq 10^{-9}$ is satisfied.

**6. PUF Performance Evaluations**

Represent the RO outputs $\widetilde{X}^l$ as a vector random variable with the autocovariance matrix $\mathbf{C}_{\widetilde{X}\widetilde{X}}$ and consider $8 \times 8$ and $16 \times 16$ RO arrays, whose autocovariance matrix is estimated by using the RO outputs in [33]. Using the dataset, we next compare the performance of the DWHT, DCT, KLT, and DHT in terms of their security, decorrelation efficiency, uniqueness, and complexity.

_6.1. Decorrelation Efficiency_

Consider the autocovariance matrix $\mathbf{C}_{TT}$ of the transform coefficients, so that the decorrelation efficiency $\eta_c$, used as a decorrelation performance metric, of a fixed transform is [38]

$\eta_c = 1 - \dfrac{\sum_{a=1}^{l} \sum_{b=1}^{l} |\mathbf{C}_{TT}(a, b)|\, \mathbb{1}\{a \neq b\}}{\sum_{a=1}^{l} \sum_{b=1}^{l} |\mathbf{C}_{\widetilde{X}\widetilde{X}}(a, b)|\, \mathbb{1}\{a \neq b\}}$ (12)

where $\mathbb{1}\{\cdot\}$ is the indicator function. The KLT has a decorrelation efficiency of 1, i.e., optimal [38]. Average $\eta_c$ values of the remaining transforms are given in Table 1 and they have good (i.e., high) and similar decorrelation-efficiency performance.
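A short sketch of the decorrelation-efficiency computation in (12), assuming NumPy/SciPy; the random surrogate covariance stands in for the RO measurements of [33], which are not reproduced here, and the DCT is used as an example transform.

```python
import numpy as np
from scipy.fft import dct

def decorrelation_efficiency(C_xx, C_tt):
    """eta_c from (12): 1 - (sum of off-diagonal |C_TT|) /
    (sum of off-diagonal |C_XX|)."""
    off = ~np.eye(C_xx.shape[0], dtype=bool)
    return 1.0 - np.abs(C_tt)[off].sum() / np.abs(C_xx)[off].sum()

rng = np.random.default_rng(5)
l = 64                                   # e.g., an 8 x 8 RO array, flattened

# Surrogate correlated "RO output" covariance (placeholder for [33]).
A = rng.normal(size=(l, l))
C_xx = A @ A.T / l                       # a valid covariance matrix

# Orthonormal DCT-II matrix as the example transform T.
T = dct(np.eye(l), norm="ortho", axis=0)
C_tt = T @ C_xx @ T.T                    # covariance after transforming

print(f"eta_c = {decorrelation_efficiency(C_xx, C_tt):.4f}")
```

On real RO covariances, this quantity is what Table 1 reports; on the random surrogate above the value is only illustrative.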
The DHT and DCT have the highest efficiency for $8 \times 8$ RO arrays, while for $16 \times 16$ RO arrays the DWHT is the best transform. Table 1 suggests that an array-size increase improves $\eta_c$.

**Table 1.** Average decorrelation-efficiency results for RO outputs.

|                        | DWHT   | DCT    | DHT    |
|------------------------|--------|--------|--------|
| $\eta_c$ for 16 × 16   | 0.9988 | 0.9987 | 0.9986 |
| $\eta_c$ for 8 × 8     | 0.9977 | 0.9978 | 0.9978 |

_6.2. Complexity of Transforms_

Computational complexities for $r \times c = 8 \times 8$ and $16 \times 16$ RO arrays are considered, which are powers of 2 so that there are fast algorithms to implement the DWHT, DCT, and DHT. The KLT has a computational complexity of $O(n^3)$ for $r = c = n$, while the DWHT and DCT have $O(n^2 \log_2 n)$, and the DHT has $O(n^2)$ [39]. Efficient implementations of the DWHT that do not require multiplications exist [32], which can be applied also to the transforms proposed in [40]. The DWHT is therefore a good candidate for implementing RO PUFs for IoT applications. For instance, a hardware implementation of the 2D DWHT in an evaluation board of Xilinx ZC706 with a Zynq-7000 XC7Z045 system-on-chip is illustrated in [32] to require approximately 11% smaller hardware area and 64% less processing time than the benchmark RO PUF hardware implementation in [41].

_6.3. Security and Uniqueness_

The extracted bit sequence is required to be uniformly distributed to use the rate region $\mathcal{R}_{\text{FCS}}$ in (5). The randomness measure called uniqueness is the average fractional Hamming distance between bit sequences generated from different RO PUFs. All transforms have similar uniqueness results with a mean Hamming distance of 0.500 and a Hamming-distance variance of $7 \times 10^{-4}$. These results are close to optimal uniqueness results, expected because of equipartitioned quantization intervals and high decorrelation efficiencies, and are better than previous uniqueness results with mean values of 0.462 [17] and 0.473 [33].

The national institute of standards and technology (NIST) has randomness tests to check if an extracted binary sequence can be differentiated from a uniformly-random binary sequence [42]. The bit sequences with the DWHT pass most of the applicable tests, considered to be an acceptable result [42]. The KLT performs the best because of its optimal decorrelation performance.

**7. Error-Correction Codes for PUFs with Transform Coding**

Suppose that bit sequences extracted by using the transform-coding method are i.i.d. and uniformly distributed, so perfect secrecy is satisfied. We assume that the signal processing steps mentioned above perform well, so we can conduct standard information- and coding-theoretic analysis. We provide a list of codes designed for the transform-coding algorithm by using the reliability metric considered above.

Select a channel code for the quantizer designed above for a fixed maximum number of errors for a secret key of size 128 bits. The correctness probabilities for the coefficients with the smallest and highest probabilities are depicted in Figure 6. The transform coefficients that represent the low-frequency coefficients are the most reliable; these are at the upper-left corner of the 2D transform-coefficient array, with indices such as 1, 17, 2, 18, 3, 19. These coefficients thus have the highest signal-to-noise ratios (SNRs).
Conversely, the least reliable coefficients are observed to be coefficients that represent intermediate frequencies, indicating that one can define a metric called SNR-packing efficiency, defined similarly as the energy-packing efficiency, and show that it follows a more complicated scan order than the classic zig-zag scan order used for the energy-packing efficiency.

**Figure 6.** Transform coefficients' correctness probabilities (curves for coefficients 1, 2, 17, 31, 128, and 150, plotted against $K$).

Fix $C_{\max}$, defined above, and calculate $P_c(C_{\max})$ via (9), $n(C_{\max})$ via (10), and $e(C_{\max})$ via (11). If $C_{\max} \leq 10$, $P_c(C_{\max})$ is large and $P_{c,i}(K = 1) \leq P_c(C_{\max})$ for all $i = 2, \ldots, l$. In addition, if $11 \leq C_{\max} \leq 15$, then $n(C_{\max}) \leq 128$ bits. Furthermore, if $C_{\max}$ increases, $P_c(C_{\max})$ decreases, so the maximum of the number $K_{\max}(C_{\max}) = K_1'(C_{\max})$ of bits extracted among all used coefficients increases, increasing the hardware complexity. Thus, consider only the cases where $C_{\max} \leq 20$. Table 2 shows $P_c(C_{\max})$, $n(C_{\max})$, and $e(C_{\max})$ for a range of $C_{\max}$ values used for channel-code selection.

**Table 2.** Code-design constraints.

| $C_{\max}$ | 20     | 19     | 18     | 17     | 16     |
|------------|--------|--------|--------|--------|--------|
| $P_c$      | 0.9844 | 0.9860 | 0.9875 | 0.9889 | 0.9902 |
| $K_{\max}$ | 3      | 3      | 3      | 3      | 3      |
| $n$        | 259    | 255    | 250    | 224    | 144    |
| $e$        | 25     | 23     | 21     | 20     | 18     |

Consider Reed–Solomon (RS) and binary (extended) Bose–Chaudhuri–Hocquenghem (BCH) codes, whose minimum distance $d_{\min}$ is high. There is no BCH or RS code with parameters satisfying any of the $(n(C_{\max}), e(C_{\max}))$ pairs in Table 2 such that its dimension is $\geq 128$ bits. However, the analysis leading to Table 2 is conservative. Thus, we next find a BCH code whose parameters are as close as possible to an $(n(C_{\max}), e(C_{\max}))$ pair in Table 2. Consider the binary BCH code that can correct all error patterns with up to $e_{\text{BCH}} = 18$ errors with a block length of 255 and a code dimension of 131 bits.
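Whether such a code meets $P_B \leq 10^{-9}$ under a BMDD amounts to computing the probability of more than $e_{\text{BCH}} = 18$ independent but non-identically distributed coefficient errors. Below is a sketch using a convolution-based Poisson-binomial computation (one alternative to the DFT characteristic-function method applied in the text); the per-coefficient correctness probabilities here are illustrative placeholders, not the values estimated from the RO dataset.

```python
import numpy as np

def poisson_binomial_pmf(p_err):
    """PMF of the number of errors among independent, non-identical
    Bernoulli(p_i) error events, built by sequential convolution."""
    pmf = np.array([1.0])
    for p in p_err:
        pmf = np.convolve(pmf, [1.0 - p, p])
    return pmf

# Illustrative per-coefficient error probabilities 1 - P_{c,i} (placeholders).
rng = np.random.default_rng(11)
n_used = 255
p_err = rng.uniform(0.002, 0.02, size=n_used)

e_bch = 18
pmf = poisson_binomial_pmf(p_err)
P_B = pmf[e_bch + 1:].sum()      # probability of more than e_bch errors
print(f"P_B = {P_B:.3e}")
```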
This code achieves a (secret-key, privacyleakage) rate pair of (Rs, Rℓ) = ( [131]255 [, 1][−] 255[131] [)][ ≈] [(][0.514, 0.486][)][ bits/source-bit, which is] significantly better than previous results. We next consider the region of all achievable rate pairs for the CS model and the FCS for a BSC PY|X with crossover probability pb = 1 − _l−11_ [∑]i[l]=2 _[P][c][,][i][(][K][i][ =][ 1][)]_ _[≈]_ [0.0097, i.e., probability of being in error averaged over all used] coefficients with the above defined quantizer. The (secret-key, privacy-leakage) rate pair of the BCH code, regions of all rate pairs achievable by the FCS and CS model, the maximum secret-key rate point, and a finite-length bound [44] for the block length of n = 255 bits and _PB =_ 10[−][9] are depicted in Figure 7 for comparisons. 1 0.8 0.6 0.4 0.2 0 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 |Col1|Col2|Col3|Col4|Col5|Col6|Col7|Col8|Col9|Col10|Col11|Col12| |---|---|---|---|---|---|---|---|---|---|---|---| ||||||||||||| ||||||||||||| |||Propos|ed Code||||||||| |||Fuzzy CS Mo|Commit del|ment|||||||| |||(R∗, R∗ l s Finite-l|) ength B|ound|||||||| _Rl_ **Figure 7.** The operation point of the considered Bose–Chaudhuri–Hocquenghem (BCH) code _C(255, 131, 37), the maximum secret-key rate point (R[∗]ℓ_ [,][ R][s][∗][ )][, regions of achievable rate pairs][ according] to (5) and (6), and a finite-length bound for BSC(0.0097), n = 255 bits, and PB = 10[−][9]. Denote the maximum secret-key rate as Rs[∗] _[≈]_ [0.922 bits/source-bit and the corre-] sponding minimum privacy-leakage rate as R[∗] _ℓ_ _[≈]_ [0.079 bits/source-bit. The gap between] (R[∗]ℓ [,][ R]s[∗][)][ at which the FCS is optimal and the rate tuple achieved by the BCH code can be] explained by the short block length and small block-error probability. However, the finitelength bound given in [44] (Theorem 52) suggests that the FCS can achieve the rate tuple ----- _Entropy 2021, 23, 16_ 15 of 23 (Rs, Rℓ) = (0.691, 0.309) bits/source-bit, shown in Figure 7. Better channel code designs and decoders (possibly with higher hardware implementation complexity) can improve the performance, but they might not be feasible for IoT applications. Figure 7 shows that there are other code constructions (that are not standard error-correcting codes) that can achieve smaller privacy-leakage and storage rates for a fixed secret-key rate, illustrated below. **8. Code Constructions for PUFs** Consider the two-terminal key agreement problem, where the identifier outputs during enrollment are noiseless. We mention two optimal linear code constructions from [45] that are based on distributed lossy source coding (or Wyner–Ziv [WZ] coding) [46]. The random linear code construction achieves the GS and CS models’ key-leakage-storage regions and the nested polar code construction jointly designs vector quantization (during enrollment) and error correction (during reconstruction) codes. Designed nested polar codes improve on existing code designs in terms of privacy-leakage and storage rates, and one code achieves a rate tuple that existing methods cannot achieve. Several practical code constructions for key agreement with identifiers have been proposed in the literature. For instance, the COFE and the FCS both require a standard errorcorrection code to satisfy the constraints of, respectively, the key generation (GS model) and key embedding (CS model) problems, as discussed above. Similarly, a polar code construction is proposed for the GS model in [47]. 
These constructions are sub-optimal in terms of storage and privacy-leakage rates. A Golay code is used as a vector quantizer (VQ) in [22] in combination with distributed lossless source codes (or Slepian–Wolf [SW] codes) [48] to increase the ratio of key vs. storage rates (or key vs. leakage rates). Thus, we next consider VQ by using WZ coding to decrease storage rates.

The WZ-coding construction turns out to be optimal, which is not coincidental. For instance, the bounds on the storage rate of the GS model and on the WZ rate (storage rate) have the same mutual information terms optimized over the same conditional probability distribution. This similarity suggests an equivalence that is closely related to the concept of formula duality. In fact, the optimal random code construction, encoding, and decoding operations are identical for both problems. One therefore can call the GS model and WZ problem functionally equivalent. Such a strong connection suggests that there might exist constructive methods that are optimal for both problems for all channels, which is closely related to the operational duality concept.

Consider the GS model, where a secret key is generated from a physical or biometric source, depicted in Figure 8(a). The encoder Enc(·) observes during enrollment the noiseless i.i.d. sequence $X^n \sim P_X$ to generate public helper data $W$ and a secret key $S$, i.e., $(S, W) = \text{Enc}(X^n)$. The decoder Dec(·) observes during reconstruction the helper data $W$ and a noisy measurement $Y^n$ of $X^n$ through a memoryless channel $P_{Y|X}$ to estimate the secret key, i.e., $\hat{S} = \text{Dec}(Y^n, W)$. Similarly, the CS model is shown in Figure 8(b), where a secret key $S$ independent of $(X^n, Y^n)$ is chosen and embedded into the helper data, i.e., $W = \text{Enc}(X^n, S)$. The alphabets $\mathcal{S}$, $\mathcal{W}$, $\mathcal{X}$, and $\mathcal{Y}$ are finite sets, which can be achieved if, for example, the transform-coding algorithm discussed above is applied.

**Figure 8.** The (a) generated-secret (GS) and (b) chosen-secret (CS) models.

**Definition 3.** _For GS and CS models, a key-leakage-storage tuple $(R_s, R_\ell, R_w)$ is achievable if, given any $\delta > 0$, there is an encoder, a decoder, and some $n \geq 1$ such that $R_s = \frac{\log |\mathcal{S}|}{n}$ and_

$\Pr[\hat{S} \neq S] = P_B \leq \delta$  (reliability) (14)

$I(W; S) \leq n\delta$  (weak secrecy) (15)

$I(X^n; W) \leq n(R_\ell + \delta)$  (privacy) (16)

$H(S) \geq n(R_s - \delta)$  (uniformity) (17)

$\log |\mathcal{W}| \leq n(R_w + \delta)$  (storage) (18)

_are satisfied. The key-leakage-storage regions $\mathcal{R}_{\text{gs}}$ for the GS model and $\mathcal{R}_{\text{cs}}$ for the CS model are the closures of the sets of achievable tuples for these models._

**Theorem 2** ([22])**.** _The key-leakage-storage regions $\mathcal{R}_{\text{gs}}$ and $\mathcal{R}_{\text{cs}}$ for the GS and CS models, respectively, are_

$\mathcal{R}_{\text{gs}} = \bigcup_{P_{U|X}} \{(R_s, R_\ell, R_w) : 0 \leq R_s \leq I(Y; U),\; R_\ell \geq I(X; U) - I(Y; U),\; R_w \geq I(X; U) - I(Y; U)\}$, and

$\mathcal{R}_{\text{cs}} = \bigcup_{P_{U|X}} \{(R_s, R_\ell, R_w) : 0 \leq R_s \leq I(Y; U),\; R_\ell \geq I(X; U) - I(Y; U),\; R_w \geq I(X; U)\}$

_where $U - X - Y$ form a Markov chain. $\mathcal{R}_{\text{gs}}$ and $\mathcal{R}_{\text{cs}}$ are convex sets and $|\mathcal{U}| \leq |\mathcal{X}| + 1$ suffices for both rate regions._

**Remark 3.**
**Remark 3.** Improvement of the weak secrecy to strong secrecy, where (15) is replaced with I(W; S) ≤ δ, is possible by using multiple identifier output blocks as described in [49], e.g., by using multiple PUFs in the same device.

Assume, as above, that X^n ∼ Bernoulli^n(1/2) and the channel PY|X ∼ BSC(pA) for pA ∈ [0, 0.5]. Define the star-operation as q ∗ pA = q(1 − pA) + (1 − q)pA. The key-leakage-storage region of this GS model is

Rgs,bin = ⋃_{q∈[0,0.5]} { (Rs, Rℓ, Rw) : 0 ≤ Rs ≤ 1 − Hb(q ∗ pA), Rℓ ≥ Hb(q ∗ pA) − Hb(q), Rw ≥ Hb(q ∗ pA) − Hb(q) }.    (19)
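For intuition, the boundary of Rgs,bin can be traced numerically. The following short Python sketch (added for illustration) evaluates the corner points of (19) over a grid of distortion levels q:

```python
import numpy as np

def hb(p):
    """Binary entropy in bits, with hb(0) = hb(1) = 0."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def star(q, pa):
    """Star-operation q * pA = q(1 - pA) + (1 - q)pA."""
    return q * (1 - pa) + (1 - q) * pa

pa = 0.15                      # example BSC crossover probability
qs = np.linspace(0.0, 0.5, 6)  # quantization (distortion) levels

# For each q, the corner point of (19): the maximum key rate and the
# matching minimum privacy-leakage / storage rates.
for q in qs:
    rs_max = 1 - hb(star(q, pa))
    rl_min = hb(star(q, pa)) - hb(q)
    print(f"q={q:.2f}: Rs<={rs_max:.3f}, Rl,Rw>={rl_min:.3f}")
```

For q = 0 this reproduces the single point achieved by the FCS and COFE, i.e., Rs ≤ 1 − Hb(pA) and Rℓ ≥ Hb(pA) ≈ 0.610 for pA = 0.15.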
_Comparisons Between Code Constructions for PUFs_

We consider the three best code constructions proposed for the GS and CS models, namely the COFE and the polar code construction in [47] for the GS model and the FCS for the CS model, in order to compare them with the WZ-coding constructions. The FCS and COFE achieve only a single point on the boundary of the key-leakage rate region, i.e., Rs* = I(X; Y) and Rℓ* = H(X|Y).

Adding a VQ step, one can improve these two methods. During enrollment, rather than X^n, its quantized version Xq^n can be used for this purpose, which can be asymptotically represented as summing the original helper data and another independent random variable J^n ∼ Bernoulli^n(q), i.e., W = X^n ⊕ C^n ⊕ J^n is the (new) helper data. The modified FCS and COFE can achieve the key-leakage region when a union of all achieved rate tuples is taken over all q ∈ [0, 0.5]. Nevertheless, the helper data of the modified FCS and COFE have length n bits, i.e., the storage rate is 1 bit/source-bit, which is suboptimal.

The storage rate of 1 bit/source-bit is decreased by using the polar code construction proposed in [47]. Nevertheless, this construction cannot achieve the key-leakage-storage region. In addition, [47] assumes that a "private" key shared between the encoder and decoder is available, which is not realistic because hardware protection against invasive attacks would be needed to keep such a private key. If such hardware protection were feasible, there would be no need for an on-demand key reconstruction and storage method like a PUF. The previous methods cannot, therefore, achieve the key-leakage-storage region for a BSC, unlike the distributed lossy source coding constructions proposed in [45]. To compare such WZ-coding constructions, we use the ratio of key vs. storage rates as the metric, which determines the design procedures to control the storage and privacy leakage.

**9. Optimal Nested Polar Code Constructions**

The first channel codes with asymptotic information-theoretic optimality and low decoding complexity are polar codes [50], whose finite-length performance is good when a list decoder is used. Nesting two codes is simple with polar codes due to their simple matrix representation; therefore, one can use them for distributed lossy source coding [51]. The channel polarization phenomenon, i.e., converting a channel into polarized binary channels by using a polar transform, is the core of polar codes. The polar transform takes a sequence U^n with unfrozen and frozen bits as input and converts it into a codeword that also has length n. The decoder then observes a noisy codeword in addition to the fixed frozen bits of U^n in order to estimate the bit sequence U^n. A polar code with block length n and frozen bit sequence G^|F| at indices F is denoted as C(n, F, G^|F|). We next use nested polar codes that are proposed for WZ coding in [51].

_9.1. The GS Model Polar Code Construction_

Consider two nested polar codes C(n, F, V̄) and C1(n, F1, V) such that F = F1 ∪ Fw and V̄ = [V, W], where W is of length m2 and V is of length m1. Suppose m1 and m2 satisfy

m1/n = Hb(q) − δ    (20)
(m1 + m2)/n = Hb(q ∗ pA) + δ    (21)

for a δ > 0 and some distortion q ∈ [0, 0.5]. The two polar codes C(n, F, V̄) and C1(n, F1, V) are nested since the index set F1 refers to frozen channels with values V that are common to both polar codes, and the code C has further frozen channels with values W at indices Fw. Since the rate of C1 is greater than the capacity of the lossy source coding problem for an average distortion q, it functions as a VQ with distortion q. Furthermore, since the rate of C is less than the channel capacity of the BSC(q ∗ pA), it functions as an error-correcting code. We want to calculate the values W during enrollment, stored as the public helper data, such that (V, W, Y^n) can be used during reconstruction to estimate the key S with length n − m1 − m2, as depicted in Figure 9. We assign the all-zero vector to V so as not to increase storage, which does not affect the average distortion E[q] between Xq^n and X^n defined below; see [51] (Lemma 10) for a proof.

**Figure 9.** Second WZ-coding construction for the GS model. [Block diagram: during enrollment, X^n ∼ PX is decoded by the polar decoder of C1 into U^n, from which the helper data W and the key S are extracted, and the polar transform of U^n yields Xq^n; during reconstruction, Y^n (i.e., X^n observed through a BSC(pA), or equivalently Xq^n through a BSC(q ∗ pA)) and the frozen bits (V, W) are decoded by the polar decoder of C into Û^n, from which Ŝ is extracted.]

During enrollment, the PUF outputs X^n ∼ Bernoulli^n(1/2) are observed by a polar decoder of C1 and considered as noisy measurements of a sequence Xq^n measured through a BSC(q), i.e., X^n is quantized into Xq^n by a polar decoder of C1. The polar decoder puts out the sequence U^n, and the bit values W at its indices Fw are publicly stored as the helper data. Furthermore, the bit values at indices j ∈ {1, 2, . . ., n} \ F are assigned as the secret key S. We remark that the polar transform of U^n is the sequence Xq^n, i.e., the quantized (or distorted) version of X^n. Consider the error sequence Eq^n = X^n ⊕ Xq^n, which models the distortion between Xq^n and X^n. The error sequence is shown in [51] (Lemma 11) to resemble a sequence that is distributed according to Bernoulli^n(q) when n tends to ∞.

During reconstruction, a polar decoder of C then observes Y^n, a noisy version of X^n measured through a BSC(pA). The frozen bits V̄ = [V, W] of C are available to the polar decoder in order to estimate U^n, from which the secret key estimate Ŝ can be obtained by taking the bit values at indices j ∈ {1, 2, . . ., n} \ F.
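To make the rate allocation in (20) and (21) concrete, the following Python sketch computes the asymptotic lengths m1 and m2 for given parameters. This is only an illustration of the asymptotic allocation; the finite-length designs in Section 9.2 choose different values to meet the reliability target.

```python
from math import log2, ceil

def hb(p: float) -> float:
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

def star(q: float, pa: float) -> float:
    """Star-operation q * pA."""
    return q * (1 - pa) + (1 - q) * pa

def rate_allocation(n: int, q: float, pa: float, delta: float = 0.0):
    """Asymptotic lengths m1, m2 from (20)-(21) and the resulting key length."""
    m1 = ceil(n * (hb(q) - delta))
    m2 = ceil(n * (hb(star(q, pa)) + delta)) - m1
    return m1, m2, n - m1 - m2

# Example parameters (illustrative only; finite-length designs differ):
print(rate_allocation(n=1024, q=0.0456, pa=0.15))
```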
Next, a design procedure to implement practical nested polar codes that satisfy these properties is summarized. Nested polar codes C ⊆ C1 must be constructed jointly such that the index sets F and F1 result in codes that satisfy the security and reliability constraints simultaneously. Suppose the block length n, key length n − m1 − m2, target block-error probability PB = Pr[S ≠ Ŝ], and BSC crossover probability pA are given, which depend on the PUF application considered. Then we have the following design procedure [45], a code-level sketch of which is given after this list:

- Design a polar code C with rate (n − m1 − m2)/n, corresponding to fixing its indices F that determine the frozen bits. This step is a conventional error-correcting code design task.
- Find the maximum BSC crossover probability pc for which the code C achieves the target block-error probability PB, which can be obtained by evaluating the performance of C for a BSC over a range of crossover probabilities. Using the inverse of the star-operation pc = E[q] ∗ pA, the target distortion averaged over a large number of realizations of X^n that should be achieved by C1 is E[q] = (pc − pA)/(1 − 2pA). This step can be applied via Monte-Carlo simulations.
- Find an index set F1, representing the frozen set of C1, such that F1 ⊂ F and the target distortion E[q] is achieved with a minimal amount of helper data. This step can be applied by starting with F1′ = F and then computing the resulting average distortion E[q′] obtained from Monte-Carlo simulations. If E[q′] is greater than E[q], we remove elements from F1′ according to the polarized bit channel reliabilities. This step is repeated until the resulting average distortion E[q′] is less than the target (or desired) distortion E[q].

An additional degree of freedom is provided by varying the distortion level in the design procedure above, making the design procedure suitable for numerous applications. Using this degree of freedom, PUFs with different BSC crossover probabilities pA can be supported by using the same nested polar codes with different distortion levels. Similarly, different PUF applications with different target block-error probabilities PB can also be supported by using the same nested codes with different distortion levels.
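The Python skeleton below summarizes the control flow of the three steps. The two Monte-Carlo estimators are hypothetical placeholders that must be backed by an actual polar SCL encoder/decoder simulation, so this is a sketch of the procedure rather than an implementation of [45]; the choice of which index to unfreeze is one plausible reading of the reliability-based removal rule.

```python
# Skeleton of the three-step nested polar code design procedure.
# estimate_bler and estimate_distortion are placeholder stubs.

def estimate_bler(frozen_f: frozenset, p: float) -> float:
    """Placeholder: Monte-Carlo block-error rate of C over a BSC(p)."""
    raise NotImplementedError("plug in a polar SCL decoder simulation")

def estimate_distortion(f1: frozenset) -> float:
    """Placeholder: Monte-Carlo average distortion E[q'] of the VQ C1."""
    raise NotImplementedError("plug in a polar decoder-based quantizer")

def design_nested_codes(frozen_f: frozenset, reliability: dict,
                        pa: float, pb_target: float):
    # Step 2: sweep the BSC crossover probability upward to find the
    # largest pc for which C still meets the target block-error rate.
    pc, p = pa, pa
    while p < 0.5 and estimate_bler(frozen_f, p) <= pb_target:
        pc, p = p, p + 0.001

    # Inverse star-operation: target average distortion for C1.
    eq_target = (pc - pa) / (1 - 2 * pa)

    # Step 3: start from F1' = F and unfreeze indices (moving them into
    # the helper-data set Fw) until E[q'] drops below the target.
    f1 = set(frozen_f)
    while estimate_distortion(frozenset(f1)) > eq_target:
        f1.remove(max(f1, key=reliability.get))  # one plausible removal rule
    return pc, eq_target, frozenset(f1)
```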
_9.2. Designed GS Model Nested Polar Codes_

We design nested polar codes to generate a secret key S of length log |S| = n − m1 − m2 = 128 bits, as used in the AES. Furthermore, the common target block-error probability for PUFs used in an FPGA is PB = 10^−6, and the common BSC PY|X crossover probability for SRAM and RO PUFs is pA = 0.15 [6,36]. We consider these PUF applications and parameters to design nested polar codes that improve on previously proposed codes.

_Code 1:_ Suppose a block length of n = 1024 bits and a fixed list size of 8 for the polar successive cancellation list (SCL) decoders are used for the nested codes. First, the code C with rate 128/1024 is designed to determine pc, which is defined in the design procedure steps above and obtained by using the SCL decoder. We obtain the crossover probability value pc = 0.1819, corresponding to a target distortion of E[q] = 0.0456. This target distortion is obtained with a minimal helper data W length of m2 = 650 bits.

_Code 2:_ Suppose a block length of n = 2048 bits. Applying the design procedure steps given above, we obtain for Code 2 the value pc = 0.2682, resulting in a target distortion of E[q] = 0.1689. This target distortion is obtained with a minimal helper data W length of m2 = 611 bits.

For these nested polar code designs, the error probability PB is considered as the average error probability over a large number of input realizations, corresponding to a large number of PUF circuits that have the same circuit design. This result can be improved by satisfying the target error probability for each input realization, which can be implemented by using the maximum distortion rather than E[q] in the design procedure discussed above. A block-error probability of at most 10^−6 can be guaranteed for 99.99% of all realizations of the input X^n by including an additional 32 bits of helper data W for Code 1 and an additional 33 bits for Code 2. The numbers of additional bits are small because the distortion q has a small variance for the block lengths considered. For the code comparisons below, we depict the sizes of helper data needed to guarantee the target block-error probability of PB = 10^−6 for 99.99% of all PUF realizations.

_9.3. Comparisons of Codes_

The boundary points of Rgs,bin for pA = 0.15 are projected onto the storage-key (Rw, Rs) plane and depicted in Figure 10. The point (Rs*, Rw*), defined in Section 3.1, is also depicted. Furthermore, we use the random coding union bound from [44] (Theorem 16) to obtain the rate pairs that can be achieved by using the FCS or COFE. These points are shown in Figure 10 in addition to the rate tuples achieved by the previous SW-coding based polar code design from [47], and Codes 1 and 2 discussed above.

**Figure 10.** Storage-key rates for the GS model with pA = 0.15. The (Rw*, Rs*) point is the best possible point achieved by SW-coding constructions, which lies on the dashed line representing Rw + Rs = H(X). The block-error probability satisfies PB ≤ 10^−6 and the key length is 128 bits for all code points. [Plot of Rs versus Rw; legend: Rgs,bin boundary, (Rw*, Rs*), Code 1 (n = 1024), Code 2 (n = 2048), previous polar code [47] (n = 1024), best code in [6] (n = 1408), FCS/COFE achievable points for n = 1024 and n = 2048.]
The COFE and FCS result in a storage rate of 1 bit/source-bit, which is strictly suboptimal. The previous SW-coding based polar code construction in [47] achieves a rate tuple such that Rs + Rw = 1 bit/source-bit, as expected because it is an SW-coding construction that corresponds to a syndrome coding method in the binary case. The previous SW-coding based polar code construction improves on the rate tuples achieved by the COFE and FCS in terms of the ratio of key vs. storage rates. Code 1 achieves the key-leakage-storage tuple of (0.125, 0.666, 0.666) bits/source-bit and Code 2 of (0.063, 0.315, 0.315) bits/source-bit, which significantly improve on all previous code constructions without any private-key assumption. Thus, the results for Codes 1 and 2 also suggest that for these parameters increasing the block length increases the Rs/Rw ratio, which is 0.188 for Code 1 and 0.199 for Code 2. Furthermore, the privacy-leakage and storage rate tuple achieved by Code 2 cannot be achieved by using previous constructions without applying the time-sharing method, because Code 2 achieves a privacy-leakage (and storage) rate of 0.315 bits/source-bit, which is less than the minimal privacy-leakage (and storage) rates Rℓ* = Rw* = Hb(pA) ≈ 0.610 bits/source-bit that can be achieved by previous code constructions.

To find an upper bound on the ratio of key vs. storage rates for the maximum secret-key rate point, we apply the sphere packing bound from [52] (Equation (5.8.19)) for the channel pA = 0.15 and code parameters n = 1024 and PB = 10^−6. The sphere packing bound shows that the rate of C, as depicted in Figure 9, must satisfy RC ≤ 0.273 bits/source-bit. Suppose the key rate is fixed to its maximum value Rs = RC and the storage rate is fixed to its minimum value Rw = 1 − RC, so we have the ratio Rs/Rw ≤ 0.375. Similarly, for n = 2048 we obtain the ratio Rs/Rw ≤ 0.437. These two finite-length results, which are valid for WZ-coding constructions with nested codes, indicate that the ratio of key vs. storage rates achieved by Codes 1 and 2 can be further increased. Using different nested polar codes that improve the minimum-distance properties, as in [53], or using nested algebraic codes for which design methods are available in the literature, as in [54], one can reduce the gaps to the finite-length bounds calculated for nested code constructions. We remark again that such optimality-seeking approaches, for example, based on information-theoretic security, provide the right insights into the best solutions for the digital era's security and privacy problems.
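The figures of merit quoted in this section are straightforward to recompute. The short Python sketch below (illustrative only) checks the Rs/Rw ratios of Codes 1 and 2, the minimal leakage/storage rate Hb(pA) of previous constructions, and the sphere-packing-based ratio bound for n = 1024:

```python
from math import log2

def hb(p: float) -> float:
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

# Key vs. storage ratios of the designed codes (tuples from the text).
codes = {"Code 1": (0.125, 0.666), "Code 2": (0.063, 0.315)}
for name, (rs, rw) in codes.items():
    # ~0.188 and 0.200 (the text reports 0.199 from unrounded values).
    print(f"{name}: Rs/Rw = {rs / rw:.3f}")

# Minimal leakage/storage rate of previous constructions for pA = 0.15.
print(f"Hb(0.15) = {hb(0.15):.3f} bits/source-bit")  # ~0.610

# Sphere-packing-based ratio bound for n = 1024 with RC = 0.273.
rc = 0.273
print(f"n=1024: Rs/Rw <= {rc / (1 - rc):.3f}")  # ~0.375
```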
**10. Discussions and Open Problems**

- We want to use low-complexity scalar quantizers after transformation without extra secrecy leakage; however, the decorrelation efficiency metric does not fully represent the dependency between transform coefficients. What is the right metric to use for choosing the transform used in combination with scalar quantizers? Is the mutual information between transform coefficients an appropriate metric for this purpose? The choice of the transform should also depend on a reliability metric, such as the SNR-packing efficiency, so that the transform, quantizers, and error-correction codes can be designed jointly. What is the right reliability metric for this purpose?
- It is shown in [36] that the ambient temperature and supply voltage affect the RO outputs deterministically rather than adding extra random noise, which was assumed in the RO PUF literature. What are the right output models for common PUF types, i.e., what are the deterministic and random components, and how are they related?
- SRAM PUFs are already used in products. In the literature, there is no extensive analysis of the output correlations between different SRAMs in the same device, possibly because SRAM outputs are binary and it is difficult to model the correlation between binary symbols. However, SRAM outputs are modeled in [6] as binary-quantized sums of independent Gaussian random variables. Is it possible to determine or approximate the correlations between the Gaussian random variables of different SRAMs? If yes, this might be useful for an attacker to obtain information about the secret sequence generated from the SRAM PUF output, which causes extra secrecy leakage.
- The transform-coding approach discussed above provides reliability guarantees for RO arrays with random outputs, which considers an average over all ROs manufactured. The worst-case scenario is when the transform coefficient value is on the quantization boundary, for which the secret-key capacity is 0 bits. If one replaces the average reliability metric used above by a lower bound on the reliability of each RO, i.e., a worst-case scenario metric, how would this change the rate of the error-correction code used? For a fixed code, what should be the optimal bound on the reliability of each RO to maximize the yield, i.e., the percentage of ROs among all manufactured ROs for which the worst-case reliability guarantee is satisfied?
- Are the WZ problem and the GS model operationally equivalent?
- The linear block-code constructions discussed above are for uniformly distributed PUF outputs. Can one construct other (random) linear block codes that are asymptotically optimal for nonuniform PUF outputs? Is it necessary to use an extension of the COFE for this purpose?
- Consider the nested polar code design procedure given above. Construction of a code for n ≤ 512 is not possible with the procedure discussed above because q ∗ pA increases with increasing q for q ∈ [0, 0.5]. Is it possible to construct a nested polar code for n = 512 by improving the decoder and the code design procedure?

**Author Contributions:** O.G. conceived the study, designed, and conducted the experiments; O.G. and R.F.S. contributed to the writing of the paper and analyzed the data, and a combined effort of O.G.
and R.F.S. improved the algorithms discussed. All authors have read and agreed to the published version of the manuscript.

**Funding:** O.G. and R.F.S. are supported by the German Federal Ministry of Education and Research (BMBF) within the national initiative for "Post Shannon Communication (NewCom)" under the Grant 16KIS1004. We acknowledge support by the German Research Foundation (DFG) and the Open Access Publication (OAP) Fund of TU Berlin.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Conflicts of Interest:** The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results. The authors declare no conflict of interest.

**References**

1. Shannon, C.E. Communication theory of secrecy systems. Bell Syst. Tech. J. 1949, 28, 656–715.
2. Kahn, D. The Codebreakers: The Story of Secret Writing; Macmillan Publishers: New York, NY, USA, 1967.
3. Schneier, B. Applied Cryptography: Protocols, Algorithms, and Source Code in C, 2nd ed.; John Wiley & Sons: Hoboken, NJ, USA, 1996.
4. Böhm, C.; Hofer, M. Physical Unclonable Functions in Theory and Practice; Springer: New York, NY, USA, 2012.
5. Gassend, B.; Clarke, D.; Dijk, M.V.; Devadas, S. Silicon physical random functions. In Proceedings of the 9th ACM Conference on Computer and Communications Security, Washington, DC, USA, 18–22 November 2002; pp. 148–160.
6. Maes, R.; Tuyls, P.; Verbauwhede, I. A Soft Decision Helper Data Algorithm for SRAM PUFs. In Proceedings of the 2009 IEEE International Symposium on Information Theory, Seoul, Korea, 28 June–3 July 2009; pp. 2101–2105.
7. Goldreich, O. Modern Cryptography, Probabilistic Proofs and Pseudorandomness; Springer: Berlin/Heidelberg, Germany, 1998; Volume 17.
8. Pappu, R. Physical One-Way Functions. Ph.D. Thesis, Massachusetts Institute of Technology, Cambridge, MA, USA, 2001.
9. Wyner, A.D. The wire-tap channel. Bell Syst. Tech. J. 1975, 54, 1355–1387.
10. Palanca, A.; Evenchick, E.; Maggi, F.; Zanero, S. A stealth, selective, link-layer denial-of-service attack against automotive networks. In Detection of Intrusions and Malware, and Vulnerability Assessment; Springer: Cham, Switzerland, 2017; pp. 185–206.
11. Lee, Y.S.; Lee, H.J.; Alasaarela, E. Mutual authentication in wireless body sensor networks (WBSN) based on Physical Unclonable Function (PUF). In Proceedings of the 2013 9th International Wireless Communications and Mobile Computing Conference (IWCMC), Sardinia, Italy, 1–5 July 2013; pp. 1314–1318.
12. Simpson, E.; Schaumont, P. Offline hardware/software authentication for reconfigurable platforms. In International Workshop on Cryptographic Hardware and Embedded Systems; Springer: Berlin/Heidelberg, Germany, 2006; pp. 311–323.
13. Herder, C.; Yu, M.; Koushanfar, F.; Devadas, S. Physical Unclonable Functions and Applications: A Tutorial. Proc. IEEE 2014, 102, 1126–1141.
14. Huth, C.; Guillaume, R.; Strohm, T.; Duplys, P.; Samuel, I.A.; Güneysu, T. Information reconciliation schemes in physical-layer security: A survey. Comput. Netw. 2016, 109, 84–104.
15. Lim, D.; Lee, J.W.; Gassend, B.; Suh, G.E.; Dijk, M.V.; Devadas, S. Extracting Secret Keys From Integrated Circuits. IEEE Trans. Very Large Scale Integr. Syst. 2005, 13, 1200–1205.
16. Mandal, M.K.; Sarkar, B.C. Ring oscillators: Characteristics and Applications. Indian J. Pure Appl. Phys. 2010, 48, 136–145.
17. Suh, G.E.; Devadas, S. Physical Unclonable Functions for Device Authentication and Secret Key Generation. In Proceedings of the 2007 44th ACM/IEEE Design Automation Conference, San Diego, CA, USA, 4–8 June 2007; pp. 9–14.
18. Guajardo, J.; Kumar, S.S.; Schrijen, G.J.; Tuyls, P. FPGA Intrinsic PUFs and Their Use for IP Protection. In International Workshop on Cryptographic Hardware and Embedded Systems; Springer: Berlin/Heidelberg, Germany, 2007; pp. 63–80.
19. Günlü, O.; Kramer, G.; Skorski, M. Privacy and secrecy with multiple measurements of physical and biometric identifiers. In Proceedings of the 2015 IEEE Conference on Communications and Network Security (CNS), Florence, Italy, 28–30 September 2015; pp. 89–94.
20. Dodis, Y.; Ostrovsky, R.; Reyzin, L.; Smith, A. Fuzzy extractors: How to generate strong keys from biometrics and other noisy data. SIAM J. Comput. 2008, 38, 97–139.
21. Juels, A.; Wattenberg, M. A fuzzy commitment scheme. In Proceedings of the 6th ACM Conference on Computer and Communications Security, Singapore, 1–4 November 1999; pp. 28–36.
22. Ignatenko, T.; Willems, F.M.J. Biometric systems: Privacy and secrecy aspects. IEEE Trans. Inf. Forensics Secur. 2009, 4, 956–973.
23. Günlü, O.; Kramer, G. Privacy, Secrecy, and Storage With Multiple Noisy Measurements of Identifiers. IEEE Trans. Inf. Forensics Secur. 2018, 13, 2872–2883.
24. Günlü, O.; İşcan, O. DCT Based Ring Oscillator Physical Unclonable Functions. In Proceedings of the 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Florence, Italy, 4–9 May 2014; pp. 8198–8201.
25. Maiti, A.; Schaumont, P. Improved ring oscillator PUF: An FPGA-friendly secure primitive. J. Cryptol. 2011, 24, 375–397.
26. Ahlswede, R.; Csiszár, I. Common Randomness in Information Theory and Cryptography—Part I: Secret Sharing. IEEE Trans. Inf. Theory 1993, 39, 1121–1132.
27. Maurer, U.M. Secret Key Agreement by Public Discussion from Common Information. IEEE Trans. Inf. Theory 1993, 39, 733–742.
28. Günlü, O.; Schaefer, R.F.; Poor, H.V. Biometric and physical identifiers with correlated noise for controllable private authentication. arXiv 2020, arXiv:2001.00847.
29. Günlü, O.; Schaefer, R.F.; Kramer, G. Private authentication with physical identifiers through broadcast channel measurements. In Proceedings of the 2019 IEEE Information Theory Workshop (ITW), Visby, Sweden, 25–28 August 2019; pp. 1–5.
30. Ignatenko, T.; Willems, F.M. Information leakage in fuzzy commitment schemes. IEEE Trans. Inf. Forensics Secur. 2010, 5, 2337–2348.
31. Cover, T.M.; Thomas, J.A. Elements of Information Theory; John Wiley & Sons: Hoboken, NJ, USA, 2012.
32. Günlü, O.; Kernetzky, T.; İşcan, O.; Sidorenko, V.; Kramer, G.; Schaefer, R.F. Secure and Reliable Key Agreement with Physical Unclonable Functions. Entropy 2018, 20, 340.
33. Maiti, A.; Casarona, J.; McHale, L.; Schaumont, P. A Large Scale Characterization of RO-PUF. In Proceedings of the 2010 IEEE International Symposium on Hardware-Oriented Security and Trust (HOST), Anaheim, CA, USA, 13–14 June 2010; pp. 94–99.
34. Schwarz, G. Estimating the dimension of a model. Ann. Stat. 1978, 6, 461–464.
35. Sugiura, N. Further analysis of the data by Akaike's information criterion and the finite corrections. Commun. Stat. Theory Methods 1978, 7, 13–26.
36. Günlü, O.; İşcan, O.; Kramer, G. Reliable secret key generation from physical unclonable functions under varying environmental conditions. In Proceedings of the 2015 IEEE International Workshop on Information Forensics and Security (WIFS), Rome, Italy, 16–19 November 2015; pp. 1–6.
37. Lin, S.; Costello, D.J. Error Control Coding; Prentice-Hall: Englewood Cliffs, NJ, USA, 2004.
38. Ohm, J.R. Multimedia Signal Coding and Transmission; Springer: Berlin/Heidelberg, Germany, 2015.
39. Wang, R. Introduction to Orthogonal Transforms: With Applications in Data Processing and Analysis; Cambridge University Press: Cambridge, UK, 2012.
40. Günlü, O.; Schaefer, R.F. Low-Complexity and Reliable Transforms for Physical Unclonable Functions. In Proceedings of the 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Barcelona, Spain, 4–8 May 2020; pp. 2807–2811.
41. Maes, R.; Herrewege, A.V.; Verbauwhede, I. PUFKY: A fully functional PUF-based cryptographic key generator. In Cryptographic Hardware and Embedded Systems; Springer: Berlin/Heidelberg, Germany, 2012; pp. 302–319.
42. Rukhin, A.; Soto, J.; Nechvatal, J.; Smid, M.; Barker, E. A Statistical Test Suite for Random and Pseudorandom Number Generators for Cryptographic Applications; Technical Report; The National Institute of Standards and Technology: Gaithersburg, MD, USA, 2001.
43. Hong, Y. On Computing the Distribution Function for the Sum of Independent and Nonidentical Random Indicators; Technical Report; Department of Statistics, Virginia Tech.: Blacksburg, VA, USA, 2011.
44. Polyanskiy, Y.; Poor, H.V.; Verdú, S. Channel Coding Rate in the Finite Blocklength Regime. IEEE Trans. Inf. Theory 2010, 56, 2307–2359.
45. Günlü, O.; İşcan, O.; Sidorenko, V.; Kramer, G. Code Constructions for Physical Unclonable Functions and Biometric Secrecy Systems. IEEE Trans. Inf. Forensics Secur. 2019, 14, 2848–2858.
46. Wyner, A.D.; Ziv, J. The rate-distortion function for source coding with side information at the decoder. IEEE Trans. Inf. Theory 1976, 22, 1–10.
47. Chen, B.; Ignatenko, T.; Willems, F.M.; Maes, R.; van der Sluis, E.; Selimis, G. A Robust SRAM-PUF Key Generation Scheme Based on Polar Codes. In Proceedings of the GLOBECOM 2017—2017 IEEE Global Communications Conference, Singapore, 4–8 December 2017; pp. 1–6.
48. Slepian, D.; Wolf, J. Noiseless coding of correlated information sources. IEEE Trans. Inf. Theory 1973, 19, 471–480.
49. Maurer, U.; Wolf, S. Information-theoretic key agreement: From weak to strong secrecy for free. In Proceedings of the International Conference on the Theory and Applications of Cryptographic Techniques, Bruges, Belgium, 14–18 May 2000; pp. 351–368.
50. Arikan, E. Channel Polarization: A Method for Constructing Capacity-Achieving Codes for Symmetric Binary-Input Memoryless Channels. IEEE Trans. Inf. Theory 2009, 55, 3051–3073.
51. Korada, S.B.; Urbanke, R.L. Polar Codes are Optimal for Lossy Source Coding. IEEE Trans. Inf. Theory 2010, 56, 1751–1768.
52. Gallager, R.G. Low-Density Parity-Check Codes; M.I.T. Press: Cambridge, MA, USA, 1963.
53. Günlü, O.; Trifonov, P.; Kim, M.; Schaefer, R.F.; Sidorenko, V. Randomized Nested Polar Subcode Constructions for Privacy, Secrecy, and Storage. arXiv 2020, arXiv:2004.12091.
54. Jerkovits, T.; Günlü, O.; Sidorenko, V.; Kramer, G. Nested Tailbiting Convolutional Codes for Secrecy, Privacy, and Storage. arXiv 2020, arXiv:2004.13095.
{ "disclaimer": "Notice: Paper or abstract available at https://arxiv.org/abs/2012.08924, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "CCBY", "status": "GOLD", "url": "https://www.mdpi.com/1099-4300/23/1/16/pdf?version=1611228187" }
2020
[ "Review", "JournalArticle" ]
true
2020-11-05T00:00:00
[ { "paperId": "eb5d870bd20abde2f7a39e87bf4be306ced452cc", "title": "Nested Tailbiting Convolutional Codes for Secrecy, Privacy, and Storage" }, { "paperId": "6f136d593b88ac58351ee7f92a52b06a3c37b0ae", "title": "Randomized Nested Polar Subcode Constructions for Privacy, Secrecy, and Storage" }, { "paperId": "230498fca7fcd3ce688f45e61e7f705453d6a374", "title": "Low-Complexity and Reliable Transforms for Physical Unclonable Functions" }, { "paperId": "9431185026ea605543f9db8160db28e6b609f7c9", "title": "Differential privacy for eye tracking with temporal correlations" }, { "paperId": "8ccc6a9057e90f0f64c8fa0627dd24ed211b0c60", "title": "Biometric and Physical Identifiers with Correlated Noise for Controllable Private Authentication" }, { "paperId": "3a0331c3071c0f76eeed27845533612068a9b5f8", "title": "Private Authentication with Physical Identifiers Through Broadcast Channel Measurements" }, { "paperId": "be80f1e19aaf3c1dcc9c070a7bb728dc165be903", "title": "Secure and Reliable Key Agreement with Physical Unclonable Functions †" }, { "paperId": "75847bb8a5932607b8b360b28bf748d7bead8bb8", "title": "Code Constructions for Physical Unclonable Functions and Biometric Secrecy Systems" }, { "paperId": "38415e0dc8401a79882c28bf6c23d32eec3231b0", "title": "A Stealth, Selective, Link-Layer Denial-of-Service Attack Against Automotive Networks" }, { "paperId": "219b07d9eba79624c8cf5f782ce96e4812197c5e", "title": "A Robust SRAM-PUF Key Generation Scheme Based on Polar Codes" }, { "paperId": "e665bd5eb45d04e2c3e3f255926f14c4cba96a86", "title": "Information reconciliation schemes in physical-layer security: A survey" }, { "paperId": "6e8637f4fb7a7ea9f2ef733e26fe04b9903c6c1c", "title": "Privacy, Secrecy, and Storage With Multiple Noisy Measurements of Identifiers" }, { "paperId": "8436e46720eb5df67b20900e78479083402bbe1c", "title": "Privacy and secrecy with multiple measurements of physical and biometric identifiers" }, { "paperId": "0f8717c1c19827dd54cb196b9aee54d876005a2c", "title": "Reliable secret key generation from physical unclonable functions under varying environmental conditions" }, { "paperId": "a1f389c111d8bbeae315b05eba2487bbbf16ceb2", "title": "Multimedia Signal Coding and Transmission" }, { "paperId": "a1da194a3f03d7d7216b03ee5ff408d6e070f4b3", "title": "Physical Unclonable Functions and Applications: A Tutorial" }, { "paperId": "4e58acf47d72b4adca677e81b2ee1fe8bb50e3f0", "title": "DCT based ring oscillator Physical Unclonable Functions" }, { "paperId": "3ac8bc8affff2e004898d2528ba16d6488557774", "title": "Mutual authentication in wireless body sensor networks (WBSN) based on Physical Unclonable Function (PUF)" }, { "paperId": "c5dca6736779ab2e7a94735533c4a9c138568914", "title": "Physical Unclonable Functions in Theory and Practice" }, { "paperId": "03c857ecdb86724133b2a473ce5d99f34a4e83c1", "title": "PUFKY: A Fully Functional PUF-Based Cryptographic Key Generator" }, { "paperId": "058e4b42459c2703a2c915036242d12b939c22b9", "title": "Introduction to Orthogonal Transforms: With Applications in Data Processing and Analysis" }, { "paperId": "df5cac0129326f07ff1a64cb1a1646acd500d174", "title": "Improved Ring Oscillator PUF: An FPGA-friendly Secure Primitive" }, { "paperId": "03c4cc2526ce04c7f0532e6d3ec5556c03957c51", "title": "A large scale characterization of RO-PUF" }, { "paperId": "14db5217b7da79c4365df56ae91a75344cd0fa2c", "title": "Information Leakage in Fuzzy Commitment Schemes" }, { "paperId": "9d4e83db13ea001165d8968a3e869e8716c78377", "title": "Channel Coding Rate in the Finite 
Blocklength Regime" }, { "paperId": "c1f27dd01a563a82646e295824ab449511c69034", "title": "Ring oscillators: Characteristics and applications" }, { "paperId": "e49181d1f312f8a0a9689005c0a3321116e9a1ec", "title": "Biometric Systems: Privacy and Secrecy Aspects" }, { "paperId": "d66b3ea2f3538675e037f863a1e56968f3ed5ca9", "title": "A soft decision helper data algorithm for SRAM PUFs" }, { "paperId": "3418666b85e8d0c58c364900c7beedada91fbbe4", "title": "Polar codes are optimal for lossy source coding" }, { "paperId": "62f9b3c57d73092f2125e9be5568dd1aa0a06d18", "title": "Channel Polarization: A Method for Constructing Capacity-Achieving Codes for Symmetric Binary-Input Memoryless Channels" }, { "paperId": "bacf21343fa323ca220cc70e1ec06e782ecbe2e2", "title": "FPGA Intrinsic PUFs and Their Use for IP Protection" }, { "paperId": "852839104a55cbedffda3a6b8650c108ae2da4dd", "title": "Physical Unclonable Functions for Device Authentication and Secret Key Generation" }, { "paperId": "b934b5a983562d396ff6a2e722045ddc0c5ec80b", "title": "Offline Hardware/Software Authentication for Reconfigurable Platforms" }, { "paperId": "fe195a79c748c2f883e4eebef4402dc6c612399b", "title": "Extracting secret keys from integrated circuits" }, { "paperId": "7dbdb4209626fd92d2436a058663206216036e68", "title": "Elements of Information Theory" }, { "paperId": "f9000e8e1e1851a3b95dcc3c3b02cd3b2788ab11", "title": "Error Control Coding" }, { "paperId": "5f00233479dd7d08475f708c911c5b3999a78d8f", "title": "Fuzzy Extractors: How to Generate Strong Keys from Biometrics and Other Noisy Data" }, { "paperId": "5e310160e95d9f7d92369725b27da453d3e5365b", "title": "Physical One-Way Functions" }, { "paperId": "d3f002dd0d6f56bc3bdf9493f734f1abe5a999d0", "title": "A Statistical Test Suite for Random and Pseudorandom Number Generators for Cryptographic Applications | NIST" }, { "paperId": "961fc0f6c68303cde94a28fdb24e6eaf3ef040b4", "title": "A Statistical Test Suite for Random and Pseudorandom Number Generators for Cryptographic Applications" }, { "paperId": "209ec57396225554522738b10ad05cbf2d274fe7", "title": "Information-Theoretic Key Agreement: From Weak to Strong Secrecy for Free" }, { "paperId": "3cc379e3eb3db1126f5cf6f0055be911159ffe5d", "title": "A fuzzy commitment scheme" }, { "paperId": "6e7ae7f8d422ae8b7ef25ddadfb39d1abf46a85f", "title": "Modern Cryptography, Probabilistic Proofs and Pseudorandomness" }, { "paperId": "8b55961e7be0ac5deb5f0c304ab0c1de1a827621", "title": "Applied Cryptography: Protocols, Algorithms, and Source Code in C" }, { "paperId": "5553970601d94c090eb2d3431b653106bf271c4c", "title": "Common randomness in information theory and cryptography - I: Secret sharing" }, { "paperId": "68f8fafd79445110b91c0f24480fbd8da160c31c", "title": "Secret key agreement by public discussion from common information" }, { "paperId": "37e44d1de8003d8394d158ec6afd1ff0e87e595b", "title": "Estimating the Dimension of a Model" }, { "paperId": "c02bae0c9e68c4417d9d6226790c9a1366991494", "title": "The wire-tap channel" }, { "paperId": "71c969f0a7204259cdd394283ab02d0fc34e5fc2", "title": "Noiseless coding of correlated information sources" }, { "paperId": "86ba7660ed50b2be9ccd801070680152c5646b55", "title": "The codebreakers : the story of secret writing" }, { "paperId": "e073a7c5a6418d96fc16d8337a6056a457e75c1e", "title": "Communication theory of secrecy systems" }, { "paperId": "b1304465c267596576a151c91ac948d225a28c6b", "title": "Key Agreement with Physical Unclonable Functions and Biometric Identifiers" }, { "paperId": null, "title": 
"On computing the distribution function for the sum of independent and nonidentical random indicators" }, { "paperId": "c80460c741362038b3e366313414b2edcaa015a4", "title": "Physical one-way functions" }, { "paperId": "d389f72307f2748c7a5169ebc15872818b139c16", "title": "A STATISTICAL TEST SUITE FOR RANDOM AND PSEUDORANDOM NUMBER GENERATORS FOR CRYPTOGRAPHIC APPLICATIONS" }, { "paperId": "ce133305355a373843656145dfcd6a4563d75520", "title": "Applied cryptography - protocols, algorithms, and source code in C, 2nd Edition" }, { "paperId": "0b3995e33daf8fea7a772585ff7dac925af7fae9", "title": "Further analysts of the data by akaike' s information criterion and the finite corrections" }, { "paperId": "d2607df2b01d1123cae2716b876776510c41f836", "title": "The rate-distortion function for source coding with side information at the decoder" }, { "paperId": "206f827fad201506c315d40c1469b41a45141893", "title": "Low-density parity-check codes" }, { "paperId": "d8548b8fce3861732afdcc83213a8a3cbf27b83c", "title": "Silicon Physical Random Functions (cid:3)" } ]
24444
en
[ { "category": "Medicine", "source": "external" }, { "category": "Medicine", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/029062eca1fb9a279c20694127fcf1c8313966ff
[ "Medicine" ]
0.809361
Exome/Genome-Wide Testing in Newborn Screening: A Proportionate Path Forward
029062eca1fb9a279c20694127fcf1c8313966ff
Frontiers in Genetics
[ { "authorId": "4855327", "name": "V. Rahimzadeh" }, { "authorId": "2121443177", "name": "J. Friedman" }, { "authorId": "51893879", "name": "G. de Wert" }, { "authorId": "3148841", "name": "B. Knoppers" } ]
{ "alternate_issns": null, "alternate_names": [ "Front Genet" ], "alternate_urls": [ "http://journal.frontiersin.org/journal/genetics", "https://www.frontiersin.org/journals/genetics" ], "id": "9ec189b3-db0f-41d8-9c82-21ef5fa9b87e", "issn": "1664-8021", "name": "Frontiers in Genetics", "type": "journal", "url": "http://www.frontiersin.org/genetics/" }
Population-based newborn screening (NBS) is among the most effective public health programs ever launched, improving health outcomes for newborns who screen positive worldwide through early detection and clinical intervention for genetic disorders discovered in the earliest hours of life. Key to the success of newborn screening programs has been near universal accessibility and participation. Interest has been building to expand newborn screening programs to also include many rare genetic diseases that can now be identified by exome or genome sequencing (ES/GS). Significant declines in sequencing costs as well as improvements to sequencing technologies have enabled researchers to elucidate novel gene-disease associations that motivate possible expansion of newborn screening programs. In this paper we consider recommendations from professional genetic societies in Europe and North America in light of scientific advances in ES/GS and our current understanding of the limitations of ES/GS approaches in the NBS context. We invoke the principle of proportionality—that benefits clearly outweigh associated risks—and the human right to benefit from science to argue that rigorous evidence is still needed for ES/GS that demonstrates clinical utility, accurate genomic variant interpretation, cost effectiveness and universal accessibility of testing and necessary follow-up care and treatment. Confirmatory or second-tier testing using ES/GS may be appropriate as an adjunct to conventional newborn screening in some circumstances. Such cases could serve as important testbeds from which to gather data on relevant programmatic barriers and facilitators to wider ES/GS implementation.
Edited by: Laura V. Milko, University of North Carolina at Chapel Hill, United States

Reviewed by: Milan Macek, Charles University, Czechia; Jonathan Berg, University of North Carolina at Chapel Hill, United States

*Correspondence: Vasiliki Rahimzadeh [vrahim@stanford.edu](mailto:vrahim@stanford.edu)

Specialty section: This article was submitted to Human and Medical Genomics, a section of the journal Frontiers in Genetics

Received: 29 January 2022; Accepted: 27 May 2022; Published: 04 July 2022

Citation: Rahimzadeh V, Friedman JM, de Wert G and Knoppers BM (2022) Exome/Genome-Wide Testing in Newborn Screening: A Proportionate Path Forward. Front. Genet. 13:865400. [doi: 10.3389/fgene.2022.865400](https://doi.org/10.3389/fgene.2022.865400)

# Exome/Genome-Wide Testing in Newborn Screening: A Proportionate Path Forward

Vasiliki Rahimzadeh [1]*, Jan M. Friedman [2], Guido de Wert [3] and Bartha M. Knoppers [4]

1Stanford Center for Biomedical Ethics, Stanford University, Stanford, CA, United States, 2Department of Medical Genetics, University of British Columbia, Vancouver, BC, Canada, [3]Department of Health, Ethics and Society, Maastricht University, Maastricht, Netherlands, [4]Centre of Genomics and Policy, McGill University, Montreal, QC, Canada

### Population-based newborn screening (NBS) is among the most effective public health programs ever launched, improving health outcomes for newborns who screen positive worldwide through early detection and clinical intervention for genetic disorders discovered in the earliest hours of life. Key to the success of newborn screening programs has been near universal accessibility and participation. Interest has been building to expand newborn screening programs to also include many rare genetic diseases that can now be identified by exome or genome sequencing (ES/GS). Significant declines in sequencing costs as well as improvements to sequencing technologies have enabled researchers to elucidate novel gene-disease associations that motivate possible expansion of newborn screening programs. In this paper we consider recommendations from professional genetic societies in Europe and North America in light of scientific advances in ES/GS and our current understanding of the limitations of ES/GS approaches in the NBS context. We invoke the principle of proportionality—that benefits clearly outweigh associated risks—and the human right to benefit from science to argue that rigorous evidence is still needed for ES/GS that demonstrates clinical utility, accurate genomic variant interpretation, cost effectiveness and universal accessibility of testing and necessary follow-up care and treatment. Confirmatory or second-tier testing using ES/GS may be appropriate as an adjunct to conventional newborn screening in some circumstances. Such cases could serve as important testbeds from which to gather data on relevant programmatic barriers and facilitators to wider ES/GS implementation.

Keywords: exome sequencing, genome sequencing, newborn screening, population health genomics, access, public health ethics

## INTRODUCTION

Population-based newborn screening (NBS) is among the most effective public health programs ever launched (Tonniges, 2000; Sahai and Marsden, 2009; Berry, 2015).
Updated national estimates in the United States suggest that nearly 12,900 newborns screened positive between 2015 and 2017 for childhood-onset disorders that previously led to severe morbidity or mortality and were listed on the Recommended Uniform Screening Panel (RUSP) (Sontag et al., 2020). Key to the success of NBS programs has been their affordability and near universal access and participation. Pre-symptomatic treatment of newborns who screen positive for some of these conditions is much more cost-effective and less burdensome on healthcare systems than treating the conditions once they become symptomatic (Carroll and Downs, 2006). Preventing the development of symptomatic disease is a particularly important consideration with respect to genetic diseases that can be detected by ES/GS analysis because most do not have specific treatments that can prevent disease onset or progression.

Since early validation studies of mass screening tests for metabolic disorders in the 1960s (McCandless and Wright, 2020), NBS methods as well as their formal adoption and oversight have evolved considerably. Interest has been building to expand NBS programs to also include more rare genetic diseases that can be identified using ES/GS approaches (Holm et al., 2018; Genomics England and the UK National Screening Committee, 2021; Gold et al., 2022; Lu et al., 2022). Improvements to genome sequencing technologies that enable researchers to elucidate novel gene-disease associations and to diagnose conditions undiscoverable using traditional biochemical or other biomarker testing, and the wide availability and declining costs of genomic testing, are among the reasons ES/GS might be advantageous as a first-tier clinical test for diagnosing genetic diseases.

At the outset, it is important to distinguish NBS meant to identify pre-symptomatic infants with rare but potentially devastating conditions, e.g., phenylketonuria (PKU), severe combined immunodeficiency disease, or congenital heart defects, from screening for risk stratification meant to guide lifestyle modification or surveillance protocols routinely offered to adults. Current universal NBS protocols fall into the first category; ES/GS of newborn infants for most genetic diseases would fall into the second category. This is true whether one considers all known genetic diseases or only a subset in which non-specific interventions may be able to reduce the risk or age of symptomatic onset.

Using ES/GS as a tool in NBS may also inappropriately conflate the recognition of a disease-associated genetic variant with diagnosis of the disease. Diagnosing a genetic disease requires a physician to interpret an ES/GS result in the context of an individual's complete clinical picture–the medical history, family history, physical exam, and other laboratory and imaging studies–in light of what is known about the range of clinical manifestations, inheritance pattern, penetrance, and variability of the disease. Complete clinical assessment is the only confirmatory "test" available for most genetic diseases. If universal NBS relied on sequencing the entire genome, the exome, or specific regions of the exome, then a complete clinical assessment for the genetic disease indicated would be necessary to confirm the molecular "diagnosis" in every case.
Population-based NBS of any kind should only be offered as part of a comprehensive public health program that includes clinical follow-up, therapeutic interventions, quality assurance, governance and oversight, and public and professional education (Friedman et al., 2017), in addition to the confirmatory complete clinical assessment and genetic counselling (if the condition found is a genetic disease). If ES/GS is being considered as a replacement for current NBS, evidence that the ES/GS methods are superior to the existing methods is necessary. Adoption of sequencing-based NBS without consideration of the unique ethical, legal and social issues it raises (Eichinger et al., 2021; Woerner et al., 2021) risks widening disparities in availability and access to standard NBS, particularly in under-resourced settings.

In this paper, we review recommendations from professional bodies regarding integration of genomic sequencing methods in public NBS programs in Europe (Howard et al., 2015) and North America, where the authors are based. We limit our discussion to the relevant ethical, legal and social issues associated with universal ES/GS as a population screening tool for newborns, acknowledging, as others do (Johnston et al., 2018), that different professional obligations and standards exist in clinical screening, diagnostic, and direct-to-consumer contexts. Our analysis focuses on applications of universal genomic sequencing of the genome, exome, or a portion of the exome that includes a large number of disease-associated genes, which we refer to as "ES/GS," rather than on targeted sequencing of one or a few genes for confirmatory testing of conditions identified by conventional NBS (Bhattacharjee et al., 2015).

Indeed, there are compelling advantages supporting genomic sequencing methods applied in the NBS context. Genomic sequencing has been shown to detect previously fatal diseases in affected newborns, as well as to provide information to patients and families about genetic predisposition risks for later-onset diseases (Holm et al., 2018) and to inform preventative clinical action. Scholars have also argued that biological family members may receive ancillary benefits from the recognition of disease-associated variants in an infant, by enabling prenatal diagnosis or specialized care for future pregnancies, earlier diagnosis or prevention of disease in relatives, or the empowerment provided by better knowledge (Ceyhan-Birsoy et al., 2019; Biesecker et al., 2021). However, the "gap between what sequencing results can reveal and the kinds of information most people need to improve their health, combined with widely publicized hopes for the revolutionary power of genomics, creates the very real risk that patients, research participants, health care professionals, policy-makers, and others may have unrealistic expectations of what sequencing can achieve and little appreciation for its downsides" (Johnston et al., 2018). Public opinion research suggests that family preferences vary considerably regarding whether and how to return genomic sequencing results (Lipstein et al., 2010; Fernandez et al., 2014; Botkin et al., 2015; Joseph et al., 2016; Pereira et al., 2021), to say nothing of the current shortages of the genetic counsellors and genetic specialist physicians needed, or of the enhancements to genomic literacy and education for health professionals and the general public that would be required, should ES/GS become routine in NBS (Lewis et al., 2016). Key policy questions also remain unresolved.
These include: What rights and protections apply for genomic and related health data involving newborns when they become adults? How will public health agencies ensure that appropriate infrastructures for sequencing, variant interpretation, diagnostic confirmation, treatment or non-medical interventions, genetic counselling, clinical follow-up, and program governance and quality assurance are in place and accessible to all infants, even those in under-resourced settings? And would explicit informed consent to ES/GS-based NBS need to be obtained from the parents and, if so, should it include permission for others (researchers, family members, police, etc.) to access stored newborn sequencing data in the future?

We assess these questions by evaluating the proposed benefits and foreseeable risks of implementing ES/GS in NBS. In our analysis, we apply the principle of proportionality (that the benefits of sequencing should clearly outweigh the associated risks) and consider the human right to benefit from science, especially that of the asymptomatic, at-risk newborn to be found. We conclude that routine universal ES/GS implementation is not justified at the present time, even if the analysis is restricted to a subset of disease-associated genes. Stronger evidence is needed to establish the clinical utility of ES/GS, accurate genomic variant interpretation, and cost effectiveness for newborn screening, as well as policies ensuring universal access and equitable resourcing not only for the testing but also for comprehensive diagnostic confirmation, treatment, genetic counselling, and clinical follow-up of affected patients. Moreover, this evidence should demonstrate the population health benefits of universal exome/genome-wide screening of newborns and not simply that the anticipated harms of incorporating ES/GS are minimal. Prioritizing expanded access over expanded testing is likely to lead to a more equitable distribution of the public health benefits of newborn screening programs.

## PRINCIPLE OF PROPORTIONALITY

The principle of proportionality suggests that an intervention may be ethically permissible if its anticipated benefits on balance justify exposure to the associated harms, and it is hence a helpful framework with which to assess ES/GS-based screening (Sénécal et al., 2018). The principle is rooted in the moral and legal theory of punishment. 17th-century constitutional law theorists, for example, invoked the principle to judge the statutory fairness between the restrictions imposed to implement a corrective measure and the severity of the act(s) the measure purports to mitigate (Walen, 2021). In research, the proportionality principle underpins the decisions institutional/ethics review boards make regarding the relative risks and benefits of a study to prospective participants and is codified in national human subjects research regulations (OHRP, 2017; Canadian Institutes of Health Research, 2018) and international biomedical research norms (Council for International Organizations of Medical Sciences (CIOMS) in collaboration with the World Health Organization, 2016; WMA, 2022). It has more recently been applied to guide privacy protections when sharing genomic and related health data (Wright et al., 2016).
And last, but not least, some more recent versions of the normative framework for screening add the principle of proportionality as a central, over-arching screening criterion: "The overall benefits of screening should outweigh the harm" (Andermann et al., 2008; Health Council of the Netherlands, 2008). The appeal of the proportionality principle to the NBS debate is astutely summarized by Kalkman and Dondorp in their position against screening newborns for non-treatable conditions: "the dividing line in the debate is ... whether such screening should be regarded as catering to a parental 'right to know,' or as a public health service that should be subject to standards of evidence and proportionality" (Kalkman and Dondorp, 2022).

## The Benefits of Accurate and Timely Diagnosis

Precision methods to detect disease-causing genetic variants have greatly improved (Dondorp and de Wert, 2013). ES/GS could identify infants with rare genetic diseases not currently recognized using standard NBS. In theory, newborns who screen positive by ES/GS have the potential to benefit from: early diagnosis; disease onset prevention using available approaches; opportunities for genetic counselling for their families; eligibility for participation in clinical trials or other research studies; and avoidance of long and difficult diagnostic odysseys. ES/GS should not, in our view, replace standard methods for any disease screening unless the former has been shown to have better sensitivity and specificity for the disease. For conditions that are not included in current NBS programs, development and uniform adoption of an approach will be needed to select the conditions for which ES/GS are expected to provide tangible benefit to the newborn. An exome- or genome-wide analysis that generates more harms than benefits, or for which the harms and benefits have not been established, is ethically unjustifiable; a more targeted analysis is to be preferred (see, for example, Milko et al., 2019). But agreement on a uniform approach for selecting conditions detectable only using ES/GS is proving elusive for NBS programs worldwide (Jansen et al., 2017). Assuming agreement on the approach were achieved, the question would become whether every disease gene that we look for using ES/GS must meet the same criteria required to add conditions to the Recommended Uniform Screening Panel (RUSP). The benefit-harm calculus is further complicated by the type of disorder being screened. One significant challenge facing public health decision-makers and clinicians alike is determining when to add conditions to the RUSP that are identifiable only through ES/GS methods. For diseases for which standard screening is superior, ES/GS may be considered as an add-on to current first-tier screening programs. Findings from a comparison study, for example, showed that traditional NBS using tandem mass spectrometry had greater sensitivity and specificity than ES for the diseases that are currently being screened, but that ES was useful for confirmatory testing (Adhikari et al., 2020).

## Screening for Late-Onset Conditions

Debates abound in the literature regarding the ethics of testing children for conditions likely to present later in life or which may be clinically relevant for parents or other biological family members in the immediate term. The presumption of clinical benefit to the parents and family members, however, has been challenged (Buchbinder and Timmermans, 2011; Ross and Clayton, 2019).
Screening parents themselves using ES/GS for previously unrecognized conditions would not only be more clinically effective but, most importantly, would avoid instrumentalizing the child for parental benefit. We furthermore object to predictive testing for later-onset disorders, taking into account both the harm principle and the principle of respect for the child's future right to informational self-determination, a specification of the child's proposed right to an open future (Davis, 1997). Professional guidelines are consistent with these principles, advocating that publicly funded, universal NBS should be limited to diseases that can be diagnosed in the newborn period and which can be effectively treated or prevented during childhood (de Wert et al., 2021; Miller et al., 2021). As others have argued, "Providing additional genomic information beyond the most actionable conditions, while potentially of interest to many parents, may increase the complexity of informed consent and thereby serve to distract from the primary health benefits" (Roman et al., 2020). Broadening the scope of NBS beyond its primary aim of detecting rare disorders in asymptomatic children has the potential to adversely impact the universal delivery of NBS, to say nothing of the impacts on public trust and widespread support for NBS.

## Testing Capability and Challenges in Genomic Variant Interpretation

Standard clinical analyses of ES/GS data do not reliably identify some kinds of disease-causing genetic variants, including short tandem repeat expansions, mobile element insertions, and complex or small structural variants. Knowing that ES/GS-based NBS has been done may preclude or delay appropriate genetic testing for symptomatic genetic disease in an older child or adult. Interpretation of NBS results requires extensive knowledge of benign, as well as disease-causing, variants for every gene tested. The sensitivity and specificity of ES/GS for most rare genetic diseases are unknown and likely to remain so because sample sizes are small and studies difficult to power sufficiently. In addition, the penetrance and phenotypic spectrum associated with pathogenic variants for most genetic disease loci are unknown. Thus, it is difficult or impossible to know whether an asymptomatic baby with a "molecular diagnosis" of a rare genetic disease will ever develop the disease or, in the event the child does develop the disease, when it will occur or how severe it will be. Moreover, genetic disease diagnosis is Bayesian. That is, the probability of finding a pathogenic variant is small in a healthy newborn with no family history of the genetic disease. Since there is no primary indication for NBS, the a priori risk that an infant will develop any particular genetic disease is extremely small. This makes "positive" results more likely to be false positives and less likely to be true positives, even if the analytical validity of the test is very high. Our inability at the present time to interpret the pathogenicity of most genomic variants is perhaps the strongest reason against adopting ES/GS in population-based NBS, despite improvements to clinical annotation of variants (Amendola et al., 2020) and broader accessibility to relevant databases at the point of care (Rehm et al., 2015). The problems of interpretation also exacerbate the effects of false positives/negatives on families and the healthcare system that are likely to result if variants of hundreds or thousands of potential disease genes are analyzed (Adhikari et al., 2020).
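To make the preceding base-rate argument concrete, consider a worked Bayes calculation. The figures below (prevalence, sensitivity, specificity) are illustrative assumptions chosen for this sketch, not values reported in this paper or the studies it cites. The positive predictive value (PPV) of a screening test is

\[ \mathrm{PPV} = \frac{\mathrm{sensitivity} \times \mathrm{prevalence}}{\mathrm{sensitivity} \times \mathrm{prevalence} + (1 - \mathrm{specificity}) \times (1 - \mathrm{prevalence})}. \]

Assuming a disease prevalence of 1 in 50,000 births (0.00002), a sensitivity of 0.99, and a specificity of 0.999,

\[ \mathrm{PPV} = \frac{0.99 \times 0.00002}{0.99 \times 0.00002 + 0.001 \times 0.99998} \approx \frac{0.0000198}{0.0010198} \approx 0.019. \]

Under these assumed figures, fewer than 2 in 100 positive screens would reflect true disease; the remaining 98 or so would be false positives, even with near-perfect analytical performance. This is the quantitative core of the argument above.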
The confidence of variant classification and clinical interpretation of genetic results will determine their predictive value. In line with the ethical principle of proportionality, proponents of ES/GS-based NBS will need to specify thresholds for which genes and/or variants should be disclosed in a screening context, based on a better understanding of the anticipated benefits and harms associated with those decisions. The general issue remains that ES/GS is currently used as a diagnostic test, i.e., to confirm a clinical diagnosis of suspected genetic disease. However, in NBS, ES/GS would be used as a screening test to identify children who are at high risk of a genetic disease implied by the "molecular diagnosis." If ES/GS were indeed used as a screening test, confirmatory testing to manage the inevitable false positives would have to be available. The distinction between the ES/GS result, regardless of its American College of Medical Genetics and Genomics (ACMG) classification, and the actual diagnosis of a disease in the child would have to be explicit, generally accepted, and universally understood to avoid stigmatization, discrimination, and loss of insurance coverage, among other social issues. Interpretation of ES/GS variants requires comparisons to allele frequencies in both diagnosed and healthy populations and has direct implications for justice and health equity. This is because ES/GS interpretation is dependent on genetic ancestry. The variant interpretation upon which positive predictive values for ES/GS are measured has been established almost exclusively from individuals of European descent (Popejoy and Fullerton, 2016; Peterson et al., 2019). Given such underrepresentation of diverse ancestries, clinical interpretation of ES/GS results could be less reliable for newborns of non-European ancestry. Without adequate representation in datasets from individuals with diverse genetic ancestry, some newborns will benefit more from ES/GS than others. Clinical variant interpretation using resources such as ClinVar (Wain et al., 2018) and gnomAD (Gudmundsson et al., 2021) is therefore growing in importance, given that they provide clinical assertions about genomic variants and associations with disease across genetically diverse populations. In general, problems of underrepresentation have prompted the development of new tools to monitor trends and identify gaps in genomic databases (Wang et al., 2022). Indeed, the global catalog of clinically actionable variants is expected to grow as reference data sets become larger, better curated, and more representative of world populations.

## Re-Analysis and Obligations to Update Variant Interpretation

It is anticipated that routine re-analysis of "negative" screens might increase the diagnostic rate by 3%–5% per year and identify variants of concern in children who later present with clinical features suggestive of a genetic disease (Wenger et al., 2017; Costain et al., 2018). To capture these clinical benefits, NBS programs would need to systematically update screens and store ES/GS datasets in the health record to ensure results reflect up-to-date classification of genomic variants, and to take into account attendant costs and privacy risks. The treating physician may no longer be following the family, and follow-up with a new provider may be difficult and expensive. If a variant of uncertain significance were reclassified but not reported to the family based on clinical course, would NBS programs be subject to legal action if a child later manifests the disease (Clayton et al., 2021)?
The expenditures and risks of storing all children's genomic data long-term to enable such systematic re-analysis may also exceed those of re-sequencing only those children for whom it is clinically indicated in the future (Veenstra et al., 2021).

## Stigma, Psychological Impacts and Medicalization

Recent studies investigating the psychosocial impacts of expanding ES/GS in the newborn context have yielded mixed results. In a randomized trial of NBS with and without ES, researchers found that both clinicians and parents valued information gleaned from standard-of-care NBS more than from exome sequencing, but for different reasons (Pereira et al., 2019). Parents expressed that knowing in advance how to prepare for a child with special needs was a benefit of sequencing, but worried about the psychosocial distress brought on by variants of unknown significance and the potential for discrimination, among other things (Pereira et al., 2019). The potential for social stigma and medicalization of children with a molecular diagnosis who are pre-symptomatic (or destined never to exhibit the disease because it is non-penetrant) is also a concern. This scenario would be particularly concerning if enhanced surveillance or prophylactic treatments impinge on the child's quality of life or expose them to interventions with adverse effects.

## Genomic Data Privacy and Protection

Key policy questions persist with respect to what rights and protections should apply to genomic and related health data collected at birth when newborns reach adulthood. The moral justification for mandatory NBS rests on the premise that finding the asymptomatic, at-risk child is in the child's best interests (United Nations Convention on the Rights of the Child, 1989). Child welfare considerations and "the opportunity to intervene and dramatically alter a child's life course and expectancy" have been regarded as sufficient to preempt any claims of parental autonomy (Goldenberg and Sharp, 2012). It is unlikely, however, that the huge volumes of data generated from ES/GS followed by untargeted whole exome/genome analysis will meet the criteria needed to justify overruling parental decision-making authority. Yet samples taken from dried blood spots collected and stored using Guthrie cards are rich data sources needed to advance population health research. While most samples are de-identified or pseudonymized according to applicable laws/regulations when used for research, the generation of ES/GS data as part of NBS introduces novel ethical, legal and social challenges for data protection, agency, and consent for the future adult (Khoury et al., 2003; Lewis, 2014). Genomic data are highly identifying and may implicate not only the individual tested but also their biological relatives. Concerns regarding loss of privacy and misuse of genomic data have emerged as key themes in the empirical literature on expansion of sequencing in NBS, and were found to be especially acute among participants of color (Joseph et al., 2016; Tsosie et al., 2021). It is unclear whether the benefits of storing children's genomic data in a centralized research data repository outweigh the privacy and security risks, particularly if children are not given the opportunity to consent themselves. Re-consenting minors when they become adults to the continued use of their data collected at birth is supported in theory but logistically challenging to implement in practice (Knoppers et al., 2016; Rothwell et al., 2017; Nordfalk and Ekstrøm, 2019).
Legislation passed in the United States in 2014, for example, requires that researchers seek broad consent for the use of the child's dried blood spots for research beyond NBS (Newborn Screening Saves Lives Reauthorization Act, 2014). However, this law preceded revisions to the United States Common Rule, which now exempts research using de-identified data, thus removing a layer of specific consent (Lewis and Goldenberg, 2015; Rothwell et al., 2017). Empirical studies involving parents of both healthy and affected newborns suggest NBS programs should err on the side of greater transparency in terms of when, how, and for what purposes their child's samples and data will be used (Downie et al., 2021). Policy makers would need to determine whether, or how, permissions for future use of ES/GS data for research will be incorporated into screening, and it remains unknown what effect this will have on public willingness to sustain state-sponsored NBS programs that adopt ES/GS.

## ES/GS and the Wilson and Jungner Criteria

Disagreement regarding which disorders are screened for has largely (though not entirely) been avoided in some jurisdictions through standardization (Advisory Committee on Heritable Disorders in Newborns and Children, 2018), and concerted efforts are ongoing to harmonize screening lists internationally (Vittozzi et al., 2010; Franková et al., 2021). Wilson and Jungner anticipated such discrepancies and, in 1968, developed criteria that outlined practical principles for screening services (Box 1) (Wilson and Jungner, 1968). While there have been recent calls to update the criteria to better align with technological advances in testing methods (King et al., 2021) and to apply more nuanced decision analysis approaches (Prosser et al., 2012), the Wilson and Jungner criteria remain the generally accepted guidelines. The threat to NBS participation should be a top concern if conditions are added to mandatory screening that challenge the Wilson-Jungner criteria or do not reflect how healthcare is accessed or paid for in a particular jurisdiction. Universal ES/GS with untargeted analysis in the NBS context poses several direct challenges to these criteria. First, while there are many accepted treatments for conditions commonly screened for, most rare genetic diseases that are detectable by ES/GS do not have proven therapies. Second, establishing a clinical diagnosis in an asymptomatic infant with a "molecular diagnosis" of a rare variant is resource-intensive, requiring specialized clinical assessment and variant interpretation, additional testing, and counseling services (Appelbaum et al., 2020). Newborn screening by any method should be accessible to every infant (Friedman et al., 2017; de Wert et al., 2021). To meet this universality target, healthcare centers must be equipped with appropriate sequencing infrastructure. Both human and material resources will therefore be needed in addition to those already allocated for existing NBS programs. At present, ES is available as a diagnostic tool primarily from certain clinical laboratories and through direct-to-consumer genetic testing services. A comparison of community report cards published by the National Organization for Rare Disorders (National Organization for Rare Disorders Newborn Screening State Report Card, 2021) demonstrates that many NBS programs already face various resource limitations and that vast differences exist in screening availability across U.S. states (Roman et al., 2020).
Disparities in NBS access and quality could be seen to violate the parens patriae doctrine, which holds that it is the duty of the State and its courts to protect the interests of persons in situations of vulnerability, for example, children. NBS programs organized by the State are an extension of this duty (Knoppers, 1992), and this is among the reasons many jurisdictions adopt implied consent to NBS. ES/GS-based NBS may well be different: if explicit consent is required, extant research suggests families are more likely to refuse consent, thus inadvertently denying their child the benefits of current NBS (Bombard et al., 2014; Joseph et al., 2016; Friedman et al., 2017; Genetti et al., 2019). Moreover, the right of everyone to benefit from science and its applications is protected under Article 27 of the United Nations Declaration of Human Rights. While the Declaration is not a legally binding agreement, 193 countries have ratified at least one of the nine core international treaties which codify its commitments to basic rights and freedoms. Article 24 of the Convention on the Rights of the Child further obligates signatories to implement interventions that reduce infant and child mortality, to provide effective health care, and to combat childhood disease, among other legally binding responsibilities. Taken together, international conventions have been powerful tools for motivating the development and sustainability of public health programs (Reinbold, 2019), including NBS. Applying a human rights frame to the current debate favors expanding access to established NBS methods that have been shown to be clinically effective, and which enable more children to directly benefit from proven methods. Ensuring universal access to high-quality NBS irrespective of birthplace, gender and income, however, continues to be a global challenge (Krotoski et al., 2009; Borrajo, 2021). Third, most genetic conditions diagnosed through ES/GS in early childhood have unknown natural histories or are unrecognizable during early childhood because the diseases are so rare and have only been described in a small number of patients. Fourth, ES/GS is widely misunderstood among patients and clinicians alike, challenging overall public acceptance as a testing method. Issues of particular concern include data privacy, family decision-making when faced with an uncertain result, and possible insurance discrimination (Pereira et al., 2019; Wojcik et al., 2021). Fifth, recent analyses of global NBS coverage indicate that cost remains a barrier to even standard NBS access in low- and middle-income countries (Therrell et al., 2015, 2020; Howson et al., 2018; Therrell and Padilla, 2018). Since ES/GS cannot replace all current NBS by other methods, sequencing, computing, and storage costs for genomic data would be incurred in addition to current laboratory costs, as would measures to mitigate real privacy and security risks. Studies further show that clinical demand for medical geneticists and genetic counsellors far exceeds available services (O'Daniel, 2010; Boothe et al., 2021). Ultimately, however, NBS alone cannot reasonably be expected to universally improve health outcomes without addressing systemic health disparities, underlying social determinants of health (Melzer, 2022), and barriers to healthcare access (Goldstein et al., 2020) experienced predominantly by marginalized racial/ethnic groups (Sohn and Timmermans, 2019).
## CONCLUSION

Owing to the public health importance of universal access to NBS, applying ES/GS as screening tools in the newborn context is, as yet, clinically and pragmatically unsubstantiated. Ongoing translational research and technological advances will emerge in the coming years which are sure to improve our understanding of the opportunities and limitations of ES/GS in detecting and preventing early disease. Considering this evolving evidence, policy makers ought to be persuaded by a burden of proof that clearly demonstrates superior public health benefits of ES/GS beyond those achievable through traditional NBS methods. Attempts to concentrate efforts only on justifying the minimalness of any anticipated harms associated with ES/GS in NBS risk sidelining the real ethical, legal and social issues which have thus far tempered the promises of precision medicine in general. Our position thus exposes a central tension in the debate between providing universal access to traditional NBS and respecting parents' decision-making about much more extensive screening that they may perceive to be in the child's best interests but that many adults may not opt for themselves. All screening programs expose individuals to potential harms that must be balanced against the benefits anticipated. This is not unique to genome-wide sequencing-based screening programs and is true even if only a selected "slice" of genes represented in the exome data were analyzed. The reality that some infants will screen positive and never experience symptoms does not justify excluding possible ES/GS for NBS. Rather, the balance of benefits and harms must be quantified and considered in any policy decision regarding screening programs to ensure aggregate benefits outweigh foreseeable aggregate harms. Indeed, NBS programs must expand to provide all newborns access to screening that is of proven value, meets established criteria for proportionality (e.g., Wilson-Jungner), and has been shown to yield greater and more equitably distributed public health gains.

## DATA AVAILABILITY STATEMENT

The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author.

## AUTHOR CONTRIBUTIONS

All authors conceived of and contributed to the ideas represented in this paper. Author VR drafted the initial and revised manuscripts following peer review. Authors JF, GdW, and BK contributed to both editorial and substantive revisions to earlier drafts of the manuscript and during peer review. All authors approved the final version of the manuscript.

## FUNDING

VR received funding for this work from the NIH Division of Loan Repayment as well as the Stanford Training Program in ELSI Research (T32HG008953). BK is supported by the Canada Research Chair in Law and Medicine.

## REFERENCES

Adhikari, A. N., Gallagher, R. C., Wang, Y., Currier, R. J., Amatuni, G., Bassaganyas, L., et al. (2020). The Role of Exome Sequencing in Newborn Screening for Inborn Errors of Metabolism. Nat. Med. 26, 1392–1397. doi:10.1038/s41591-020-0966-5
Advisory Committee on Heritable Disorders in Newborns and Children (2018). Recommended Uniform Screening Panel.
Amendola, L. M., Muenzen, K., Biesecker, L. G., Bowling, K. M., Cooper, G. M., Dorschner, M. O., et al. (2020). Variant Classification Concordance Using the ACMG-AMP Variant Interpretation Guidelines across Nine Genomic Implementation Research Studies. Am. J. Hum. Genet. 107, 932–941. doi:10.1016/j.ajhg.2020.09.011
Andermann, A., Blancquaert, I., Beauchamp, S., and Déry, V. (2008). Revisiting Wilson and Jungner in the Genomic Age: a Review of Screening Criteria over the Past 40 Years. Bull. World Health Organ 86, 317–319. doi:10.2471/blt.07.050112
Appelbaum, P. S., Parens, E., Berger, S. M., Chung, W. K., and Burke, W. (2020). Is There a Duty to Reinterpret Genetic Data? The Ethical Dimensions. Genet. Med. 22, 633–639. doi:10.1038/s41436-019-0679-7
Berry, S. A. (2015). Newborn Screening. Clin. Perinatology 42, 441–453. doi:10.1016/j.clp.2015.03.002
Bhattacharjee, A., Sokolsky, T., Wyman, S. K., Reese, M. G., Puffenberger, E., Strauss, K., et al. (2015). Development of DNA Confirmatory and High-Risk Diagnostic Testing for Newborns Using Targeted Next-Generation DNA Sequencing. Genet. Med. 17, 337–347. doi:10.1038/gim.2014.117
Biesecker, L. G., Green, E. D., Manolio, T., Solomon, B. D., and Curtis, D. (2021). Should All Babies Have Their Genome Sequenced at Birth? BMJ, n2679. doi:10.1136/bmj.n2679
Bombard, Y., Miller, F. A., Hayeems, R. Z., Barg, C., Cressman, C., Carroll, J. C., et al. (2014). Public Views on Participating in Newborn Screening Using Genome Sequencing. Eur. J. Hum. Genet. 22, 1248–1254. doi:10.1038/ejhg.2014.22
Boothe, E., Greenberg, S., Delaney, C. L., and Cohen, S. A. (2021). Genetic Counseling Service Delivery Models: A Study of Genetic Counselors' Interests, Needs, and Barriers to Implementation. J. Genet. Couns. 30, 283–292. doi:10.1002/jgc4.1319
Borrajo, G. J. C. (2021). Newborn Screening in Latin America: A Brief Overview of the State of the Art. Am. J. Med. Genet. C Semin. Med. Genet. 187, 322. doi:10.1002/ajmg.c.31899
Botkin, J. R., Belmont, J. W., Berg, J. S., Berkman, B. E., Bombard, Y., Holm, I. A., et al. (2015). Points to Consider: Ethical, Legal, and Psychosocial Implications of Genetic Testing in Children and Adolescents. Am. J. Hum. Genet. 97, 6–21. doi:10.1016/j.ajhg.2015.05.022
Buchbinder, M., and Timmermans, S. (2011). Newborn Screening and Maternal Diagnosis: Rethinking Family Benefit. Soc. Sci. Med. 73, 1014–1018. doi:10.1016/j.socscimed.2011.06.062
Canadian Institutes of Health Research (2018). Canadian Institutes of Health Research, Natural Sciences and Engineering Research Council of Canada, and Social Sciences and Humanities Research Council, Tri-Council Policy Statement: Ethical Conduct for Research Involving Humans.
Carroll, A. E., and Downs, S. M. (2006). Comprehensive Cost-Utility Analysis of Newborn Screening Strategies. Pediatrics 117, S287–S295. doi:10.1542/peds.2005-2633H
Ceyhan-Birsoy, O., Murry, J. B., Machini, K., Lebo, M. S., Yu, T. W., Fayer, S., et al. (2019). Interpretation of Genomic Sequencing Results in Healthy and Ill Newborns: Results from the BabySeq Project. Am. J. Hum. Genet. 104, 76–93. doi:10.1016/j.ajhg.2018.11.016
Clayton, E. W., Appelbaum, P. S., Chung, W. K., Marchant, G. E., Roberts, J. L., and Evans, B. J. (2021). Does the Law Require Reinterpretation and Return of Revised Genomic Results? Genet. Med. 23, 833–836. doi:10.1038/s41436-020-01065-x
Costain, G., Jobling, R., Walker, S., Reuter, M. S., Snell, M., Bowdin, S., et al. (2018). Periodic Reanalysis of Whole-Genome Sequencing Data Enhances the Diagnostic Advantage over Standard Clinical Genetic Testing. Eur. J. Hum. Genet. 26, 740–744. doi:10.1038/s41431-018-0114-6
Council for International Organizations of Medical Sciences (CIOMS) in collaboration with the World Health Organization (2016). International Ethical Guidelines for Health-Related Research Involving Humans. Geneva, Switzerland: World Health Organization.
Davis, D. S. (1997). Genetic Dilemmas and the Child's Right to an Open Future. Hastings Cent. Rep. 27, 7. doi:10.2307/3527620
de Wert, G., Dondorp, W., Clarke, A., Dequeker, E. M. C., Cordier, C., et al. (2021). Opportunistic Genomic Screening. Recommendations of the European Society of Human Genetics. Eur. J. Hum. Genet. 29, 365–377. doi:10.1038/s41431-020-00758-w
Dondorp, W. J., and de Wert, G. M. W. R. (2013). The 'thousand-Dollar Genome': an Ethical Exploration. Eur. J. Hum. Genet. 21, S6–S26. doi:10.1038/ejhg.2013.73
Downie, L., Halliday, J., Lewis, S., and Amor, D. J. (2021). Principles of Genomic Newborn Screening Programs. JAMA Netw. Open 4, e2114336. doi:10.1001/jamanetworkopen.2021.14336
Eichinger, J., Elger, B. S., Koné, I., Filges, I., Shaw, D., Zimmermann, B., et al. (2021). The Full Spectrum of Ethical Issues in Pediatric Genome-wide Sequencing: a Systematic Qualitative Review. BMC Pediatr. 21, 387. doi:10.1186/s12887-021-02830-w
Fernandez, C. V., Bouffet, E., Malkin, D., Jabado, N., O'Connell, C., Avard, D., et al. (2014). Attitudes of Parents toward the Return of Targeted and Incidental Genomic Research Findings in Children. Genet. Med. 16, 633–640. doi:10.1038/gim.2013.201
Franková, V., Driscoll, R. O., Jansen, M. E., Loeber, J. G., Kožich, V., Bonham, J., et al. (2021). Regulatory Landscape of Providing Information on Newborn Screening to Parents across Europe. Eur. J. Hum. Genet. 29, 67–78. doi:10.1038/s41431-020-00716-6
Friedman, J. M., Cornel, M. C., Goldenberg, A. J., Lister, K. J., Sénécal, K., Vears, D. F., et al. (2017). Genomic Newborn Screening: Public Health Policy Considerations and Recommendations. BMC Med. Genomics 10, 9. doi:10.1186/s12920-017-0247-4
Genetti, C. A., Schwartz, T. S., Robinson, J. O., VanNoy, G. E., Petersen, D., Pereira, S., et al. (2019). Parental Interest in Genomic Sequencing of Newborns: Enrollment Experience from the BabySeq Project. Genet. Med. 21, 622–630. doi:10.1038/s41436-018-0105-6
Genomics England and the UK National Screening Committee (2021). Implications of Whole Genome Sequencing for Newborn Screening: A Public Dialogue.
Gold, N. B., Harrison, S. M., Rowe, J. H., Gold, J., Furutani, E., Biffi, A., et al. (2022). Low Frequency of Treatable Pediatric Disease Alleles in gnomAD: An Opportunity for Future Genomic Screening of Newborns. Hum. Genet. Genomics Adv. 3, 100059. doi:10.1016/j.xhgg.2021.100059
Goldenberg, A. J., and Sharp, R. R. (2012). The Ethical Hazards and Programmatic Challenges of Genomic Newborn Screening. JAMA 307, 461. doi:10.1001/jama.2012.68
Goldstein, N. D., Palumbo, A. J., Bellamy, S. L., Purtle, J., and Locke, R. (2020). State and Local Government Expenditures and Infant Mortality in the United States. Pediatrics 146, e20201134. doi:10.1542/peds.2020-1134
Gudmundsson, S., Singer-Berk, M., Watts, N. A., Phu, W., Goodrich, J. K., Solomonson, M., et al. (2021). Variant Interpretation Using Population Databases: Lessons from gnomAD. Hum. Mutat. doi:10.1002/humu.24309
Health Council of the Netherlands (2008). Screening: Between Hope and Hype.
Holm, I. A., Agrawal, P. B., Ceyhan-Birsoy, O., Christensen, K. D., Fayer, S., et al. (2018). The BabySeq Project: Implementing Genomic Sequencing in Newborns. BMC Pediatr. 18, 225. doi:10.1186/s12887-018-1200-1
Howard, H. C., Knoppers, B. M., Cornel, M. C., Wright Clayton, E., Sénécal, K., et al. (2015). Whole-genome Sequencing in Newborn Screening? A Statement on the Continued Importance of Targeted Approaches in Newborn Screening Programmes. Eur. J. Hum. Genet. 23, 1593–1600. doi:10.1038/ejhg.2014.289
Howson, C. P., Cedergren, B., Giugliani, R., Huhtinen, P., Padilla, C. D., Palubiak, C. S., et al. (2018). Universal Newborn Screening: A Roadmap for Action. Mol. Genet. Metabolism 124, 177–183. doi:10.1016/j.ymgme.2018.04.009
Jansen, M. E., Metternick-Jones, S. C., and Lister, K. J. (2017). International Differences in the Evaluation of Conditions for Newborn Bloodspot Screening: a Review of Scientific Literature and Policy Documents. Eur. J. Hum. Genet. 25, 10–16. doi:10.1038/ejhg.2016.126
Johnston, J., Lantos, J. D., Goldenberg, A., Chen, F., Parens, E., Koenig, B. A., et al. (2018). Sequencing Newborns: A Call for Nuanced Use of Genomic Technologies. Hastings Cent. Rep. 48, S2–S6. doi:10.1002/hast.874
Joseph, G., Chen, F., Harris-Wai, J., Puck, J. M., Young, C., and Koenig, B. A. (2016). Parental Views on Expanded Newborn Screening Using Whole-Genome Sequencing. Pediatrics 137, S36–S46. doi:10.1542/peds.2015-3731H
Kalkman, S., and Dondorp, W. (2022). The Case for Screening in Early Life for 'non-Treatable' Disorders: Ethics, Evidence and Proportionality. A Report from the Health Council of the Netherlands. Eur. J. Hum. Genet. doi:10.1038/s41431-022-01055-4
Khoury, M. J., McCabe, L. L., and McCabe, E. R. B. (2003). Population Screening in the Age of Genomic Medicine. N. Engl. J. Med. 348, 50–58. doi:10.1056/NEJMra013182
King, J. R., Notarangelo, L. D., and Hammarström, L. (2021). An Appraisal of the Wilson & Jungner Criteria in the Context of Genomic-Based Newborn Screening for Inborn Errors of Immunity. J. Allergy Clin. Immunol. 147, 428–438. doi:10.1016/j.jaci.2020.12.633
Knoppers, B. M., Sénécal, K., Boisjoli, J., Borry, P., Cornel, M. C., Fernandez, C. V., et al. (2016). Recontacting Pediatric Research Participants for Consent when They Reach the Age of Majority. IRB 38, 1–9.
Knoppers, B. M. (1992). Canadian Child Health Law: Health Rights and Risks of Children. Toronto, Ont., Canada: Thompson Educational Publishing, Inc.
Krotoski, D., Namaste, S., Raouf, R. K., El Nekhely, I., Hindi-Alexander, M., Engelson, G., et al. (2009). Conference Report: Second Conference of the Middle East and North Africa Newborn Screening Initiative: Partnerships for Sustainable Newborn Screening Infrastructure and Research Opportunities. Genet. Med. 11, 663–668. doi:10.1097/GIM.0b013e3181ab2277
Lewis, M. A., Paquin, R. S., Roche, M. I., Furberg, R. D., Rini, C., Berg, J. S., et al. (2016). Supporting Parental Decisions about Genomic Sequencing for Newborn Screening: The NC NEXUS Decision Aid. Pediatrics 137, S16–S23. doi:10.1542/peds.2015-3731E
Lewis, M. H., and Goldenberg, A. J. (2015). Return of Results from Research Using Newborn Screening Dried Blood Samples. J. Law. Med. Ethics 43, 559–568. doi:10.1111/jlme.12299
Lewis, M. H. (2014). Newborn Screening Controversy. JAMA Pediatr. 168, 199. doi:10.1001/jamapediatrics.2013.4980
Lipstein, E. A., Nabi, E., Perrin, J. M., Luff, D., Browning, M. F., and Kuhlthau, K. A. (2010). Parents' Decision-Making in Newborn Screening: Opinions, Choices, and Information Needs. Pediatrics 126, 696–704. doi:10.1542/peds.2010-0217
Lu, C. Y., McMahon, P. M., and Wu, A. C. (2022). Modeling Genomic Screening in Newborns. JAMA Pediatr. 176, 344. doi:10.1001/jamapediatrics.2021.5798
McCandless, S. E., and Wright, E. J. (2020). Mandatory Newborn Screening in the United States: History, Current Status, and Existential Challenges. Birth Defects Res. 112, 350–366. doi:10.1002/bdr2.1653
Melzer, S. M. (2022). Addressing Social Determinants of Health in Pediatric Health Systems: Balancing Mission and Financial Sustainability. Curr. Opin. Pediatr. 34, 8–13. doi:10.1097/MOP.0000000000001083
Milko, L. V., O'Daniel, J. M., DeCristo, D. M., Crowley, S. B., Foreman, A. K. M., Wallace, K. E., et al. (2019). An Age-Based Framework for Evaluating Genome-Scale Sequencing Results in Newborn Screening. J. Pediatr. 209, 68–76. doi:10.1016/j.jpeds.2018.12.027
Miller, D. T., Lee, K., Gordon, A. S., Amendola, L. M., Adelman, K., Bale, S. J., et al. (2021). Recommendations for Reporting of Secondary Findings in Clinical Exome and Genome Sequencing, 2021 Update: a Policy Statement of the American College of Medical Genetics and Genomics (ACMG). Genet. Med. 23, 1391–1398. doi:10.1038/s41436-021-01171-4
National Organization for Rare Disorders (2021). Newborn Screening State Report Card. Available at: https://rarediseases.org/policy-issues/newborn-screening/.
Newborn Screening Saves Lives Reauthorization Act (2014). Newborn Screening Saves Lives Reauthorization Act of 2014, 42 U.S.C. §§ 300b-8 to 300b-17.
Nordfalk, F., and Ekstrøm, C. T. (2019). Newborn Dried Blood Spot Samples in Denmark: the Hidden Figures of Secondary Use and Research Participation. Eur. J. Hum. Genet. 27, 203–210. doi:10.1038/s41431-018-0276-2
O'Daniel, J. M. (2010). The Prospect of Genome-Guided Preventive Medicine: A Need and Opportunity for Genetic Counselors. J. Genet. Couns. 19, 315–327. doi:10.1007/s10897-010-9302-4
OHRP (2017). United States Common Rule. Rockville, MD: OHRP.
Pereira, S., Robinson, J. O., Gutierrez, A. M., Petersen, D. K., Hsu, R. L., Lee, C. H., et al. (2019). Perceived Benefits, Risks, and Utility of Newborn Genomic Sequencing in the BabySeq Project. Pediatrics 143, S6–S13. doi:10.1542/peds.2018-1099C
Pereira, S., Smith, H. S., Frankel, L. A., Christensen, K. D., Islam, R., Robinson, J. O., et al. (2021). Psychosocial Effect of Newborn Genomic Sequencing on Families in the BabySeq Project. JAMA Pediatr. 175, 1132–1141. doi:10.1001/jamapediatrics.2021.2829
Peterson, R. E., Kuchenbaecker, K., Walters, R. K., Chen, C.-Y., Popejoy, A. B., Periyasamy, S., et al. (2019). Genome-wide Association Studies in Ancestrally Diverse Populations: Opportunities, Methods, Pitfalls, and Recommendations. Cell 179, 589–603. doi:10.1016/j.cell.2019.08.051
Popejoy, A. B., and Fullerton, S. M. (2016). Genomics Is Failing on Diversity. Nature 538, 161–164. doi:10.1038/538161a
Prosser, L. A., Grosse, S. D., Kemper, A. R., Tarini, B. A., and Perrin, J. M. (2012). Decision Analysis, Economic Evaluation, and Newborn Screening: Challenges and Opportunities. Genet. Med. 14, 703–712. doi:10.1038/gim.2012.24
Rehm, H. L., Berg, J. S., Brooks, L. D., Bustamante, C. D., Evans, J. P., Landrum, M. J., et al. (2015). ClinGen - the Clinical Genome Resource. N. Engl. J. Med. 372, 2235–2242. doi:10.1056/NEJMsr1406261
Reinbold, G. W. (2019). Effects of the Convention on the Rights of the Child on Child Mortality and Vaccination Rates: a Synthetic Control Analysis. BMC Int. Health Hum. Rights 19, 24. doi:10.1186/s12914-019-0211-9
Roman, T. S., Crowley, S. B., Roche, M. I., Foreman, A. K. M., O'Daniel, J. M., Seifert, B. A., et al. (2020). Genomic Sequencing for Newborn Screening: Results of the NC NEXUS Project. Am. J. Hum. Genet. 107, 596–611. doi:10.1016/j.ajhg.2020.08.001
Ross, L. F., and Clayton, E. W. (2019). Ethical Issues in Newborn Sequencing Research: The Case Study of BabySeq. Pediatrics 144, e20191031. doi:10.1542/peds.2019-1031
Rothwell, E., Goldenberg, A., Johnson, E., Riches, N., Tarini, B., and Botkin, J. R. (2017). An Assessment of a Shortened Consent Form for the Storage and Research Use of Residual Newborn Screening Blood Spots. J. Empir. Res. Hum. Res. Ethics 12, 335–342. doi:10.1177/1556264617736199
Sahai, I., and Marsden, D. (2009). Newborn Screening. Crit. Rev. Clin. Laboratory Sci. 46, 55–82. doi:10.1080/10408360802485305
Sénécal, K., Unim, B., and Knoppers, B. M. (2018). Newborn Screening Programs: Next Generation Ethical and Social Issues. Obm Genet. 2, 1. doi:10.21926/obm.genet.1803027
Sohn, H., and Timmermans, S. (2019). Inequities in Newborn Screening: Race and the Role of Medicaid. SSM - Popul. Health 9, 100496. doi:10.1016/j.ssmph.2019.100496
Sontag, M. K., Yusuf, C., Grosse, S. D., Edelman, S., Miller, J. I., McKasson, S., et al. (2020). Infants with Congenital Disorders Identified through Newborn Screening - United States, 2015-2017. MMWR Morb. Mortal. Wkly. Rep. 69, 1265–1268. doi:10.15585/mmwr.mm6936a6
Therrell, B. L., Lloyd-Puryear, M. A., Ohene-Frempong, K., Ware, R. E., Padilla, C. D., et al. (2020). Empowering Newborn Screening Programs in African Countries through Establishment of an International Collaborative Effort. J. Community Genet. 11, 253–268. doi:10.1007/s12687-020-00463-7
Therrell, B. L., Padilla, C. D., Loeber, J. G., Kneisser, I., Saadallah, A., Borrajo, G. J. C., et al. (2015). Current Status of Newborn Screening Worldwide: 2015. Seminars Perinatology 39, 171–187. doi:10.1053/j.semperi.2015.03.002
Therrell, B. L., and Padilla, C. D. (2018). Newborn Screening in the Developing Countries. Curr. Opin. Pediatr. 30, 734–739. doi:10.1097/MOP.0000000000000683
Tonniges, T. F. (2000). Serving the Family from Birth to the Medical Home. Newborn Screening: a Blueprint for the Future - a Call for a National Agenda on State Newborn Screening Programs. Pediatrics 106, 389–422.
Tsosie, K. S., Yracheta, J. M., Kolopenuk, J. A., and Geary, J. (2021). We Have "Gifted" Enough: Indigenous Genomic Data Sovereignty in Precision Medicine. Am. J. Bioeth. 21, 72–75. doi:10.1080/15265161.2021.1891347
United Nations Convention on the Rights of the Child (1989). Available at: https://www.ohchr.org/en/professionalinterest/pages/crc.aspx (Accessed January 24, 2022).
Veenstra, D. L., Rowe, J. W., Pagán, J. A., Shelton Brown, H., Schneider, J. E., Gupta, A., et al. (2021). Reimbursement for Genetic Variant Reinterpretation: Five Questions Payers Should Ask. Am. J. Manag. Care 27, e336–e338. doi:10.37765/ajmc.2021.88763
Vittozzi, L., Hoffmann, G. F., Cornel, M., and Loeber, G. (2010). Evaluation of Population Newborn Screening Practices for Rare Disorders in Member States of the European Union. Orphanet J. Rare Dis. 5, P26. doi:10.1186/1750-1172-5-S1-P26
Wain, K. E., Palen, E., Savatt, J. M., Shuman, D., Finucane, B., Seeley, A., et al. (2018). The Value of Genomic Variant ClinVar Submissions from Clinical Providers: Beyond the Addition of Novel Variants. Hum. Mutat. 39, 1660–1667. doi:10.1002/humu.23607
Walen, A. (2021). "Retributive Justice," in The Stanford Encyclopedia of Philosophy. Editor E. N. Zalta (Stanford, CA: Metaphysics Research Lab, Stanford University). Available at: https://plato.stanford.edu/archives/sum2021/entries/justice-retributive/ (Accessed April 13, 2022).
Wang, T., Antonacci-Fulton, L., Howe, K., Lawson, H. A., Lucas, J. K., Phillippy, A. M., et al. (2022). The Human Pangenome Project: a Global Resource to Map Genomic Diversity. Nature 604, 437–446. doi:10.1038/s41586-022-04601-8
Wenger, A. M., Guturu, H., Bernstein, J. A., and Bejerano, G. (2017). Systematic Reanalysis of Clinical Exome Data Yields Additional Diagnoses: Implications for Providers. Genet. Med. 19, 209–214. doi:10.1038/gim.2016.88
Wilson, J. M. G., and Jungner, G. (1968). Principles and Practice of Screening for Disease. Geneva: World Health Organization.
WMA (2022). WMA Declaration of Helsinki – Ethical Principles for Medical Research Involving Human Subjects. Available at: https://www.wma.net/policies-post/wma-declaration-of-helsinki-ethical-principles-for-medical-research-involving-human-subjects/ (Accessed February 11, 2022).
Woerner, A. C., Gallagher, R. C., Vockley, J., and Adhikari, A. N. (2021). The Use of Whole Genome and Exome Sequencing for Newborn Screening: Challenges and Opportunities for Population Health. Front. Pediatr. 9, 663752. doi:10.3389/fped.2021.663752
Wojcik, M. H., Zhang, T., Ceyhan-Birsoy, O., Genetti, C. A., Lebo, M. S., Yu, T. W., et al. (2021). Discordant Results between Conventional Newborn Screening and Genomic Sequencing in the BabySeq Project. Genet. Med. 23, 1372–1375. doi:10.1038/s41436-021-01146-5
Wright, C. F., Hurles, M. E., and Firth, H. V. (2016). Principle of Proportionality in Genomic Data Sharing. Nat. Rev. Genet. 17, 1–2. doi:10.1038/nrg.2015.5
Conflict of Interest: The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's Note: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Copyright © 2022 Rahimzadeh, Friedman, de Wert and Knoppers. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
{ "disclaimer": "Notice: Paper or abstract available at https://pmc.ncbi.nlm.nih.gov/articles/PMC9289115, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "CCBY", "status": "GOLD", "url": "https://www.frontiersin.org/articles/10.3389/fgene.2022.865400/pdf" }
2022
[ "JournalArticle" ]
true
2022-07-04T00:00:00
[ { "paperId": "772d3c795067541b768fdf08ca01051cc73af257", "title": "The Human Pangenome Project: a global resource to map genomic diversity" }, { "paperId": "e2d1dfab7d7dd990b8e059fd312ce516c2f40dff", "title": "The case for screening in early life for ‘non-treatable’ disorders: ethics, evidence and proportionality. A report from the Health Council of the Netherlands" }, { "paperId": "d123d7c5f8e189e4d047fa2ee205d97ce16413cd", "title": "Modeling Genomic Screening in Newborns." }, { "paperId": "6526ccfdde1c76c184f55ed095ab6180d7ab76d3", "title": "Addressing social determinants of health in pediatric health systems: balancing mission and financial sustainability" }, { "paperId": "7f4ae21e4a02f14bc6eda2d91d00b044f2f164d6", "title": "Should all babies have their genome sequenced at birth?" }, { "paperId": "87c472f54dba27c1385d25e8bf1e2b96e3a7f316", "title": "Reimbursement for genetic variant reinterpretation: five questions payers should ask." }, { "paperId": "f29fc1a2c8edf1e44bc9c0338e8c636f05b7635d", "title": "The full spectrum of ethical issues in pediatric genome-wide sequencing: a systematic qualitative review" }, { "paperId": "698057e0ce4fdacc40eb3f28b50670b2ceafaea2", "title": "Low frequency of treatable pediatric disease alleles in gnomAD: An opportunity for future genomic screening of newborns" }, { "paperId": "1d7b5af69837570297f15632e8adab36141582a4", "title": "Psychosocial Effect of Newborn Genomic Sequencing on Families in the BabySeq Project: A Randomized Clinical Trial." }, { "paperId": "d5db5e591db09e3250b1a8c06d739d15d7d1b47e", "title": "Variant interpretation using population databases: Lessons from gnomAD" }, { "paperId": "5d2428be0a021099776be0070a6b5c3abb144c90", "title": "The Use of Whole Genome and Exome Sequencing for Newborn Screening: Challenges and Opportunities for Population Health" }, { "paperId": "a44e312cf7dda561903ca26899ecc9ae9cfe17ed", "title": "Principles of Genomic Newborn Screening Programs" }, { "paperId": "144846532e1ae756e587caefd6a8609f69508106", "title": "Recommendations for reporting of secondary findings in clinical exome and genome sequencing, 2021 update: a policy statement of the American College of Medical Genetics and Genomics (ACMG)" }, { "paperId": "e6ed7ac24641e609a1bb11f739b7414267c396b8", "title": "We Have “Gifted” Enough: Indigenous Genomic Data Sovereignty in Precision Medicine" }, { "paperId": "a2a5cfed759b38ffcae2ccba118bb563ac8936c0", "title": "Discordant results between conventional newborn screening and genomic sequencing in the BabySeq Project" }, { "paperId": "f72fb1ce300d461ab6126b64b5fc52f9e8b633e0", "title": "Newborn screening in Latin America: A brief overview of the state of the art" }, { "paperId": "3bbc5725937253b30dba9a6892fa155f50ad280f", "title": "An appraisal of the Wilson & Jungner criteria in the context of genomic-based newborn screening for inborn errors of immunity." }, { "paperId": "af59a050c51c33a63ad9f07d2a00e904da3891f6", "title": "Does the law require reinterpretation and return of revised genomic results?" }, { "paperId": "efc3b5c2715c9eae64ea2733a89f5abd2cb341ae", "title": "Opportunistic genomic screening. Recommendations of the European Society of Human Genetics" }, { "paperId": "4979ae1a00710252a48ad0271b05b74018dd8eae", "title": "Variant Classification Concordance using the ACMG-AMP Variant Interpretation Guidelines across Nine Genomic Implementation Research Studies." 
}, { "paperId": "3c121084a7f9403f9b380027767b560f27e70b78", "title": "Regulatory landscape of providing information on newborn screening to parents across Europe" }, { "paperId": "c5bbd964a693f32fb00d8d8a88d81dc7fb9efd3a", "title": "State and Local Government Expenditures and Infant Mortality in the United States" }, { "paperId": "07f2275dece3e247c2008b4699f425be2cfc148c", "title": "Infants with Congenital Disorders Identified Through Newborn Screening — United States, 2015–2017" }, { "paperId": "bba46eff638095df0a5088e0f1548f640f7bbbcf", "title": "Genetic counseling service delivery models: A study of genetic counselors’ interests, needs, and barriers to implementation" }, { "paperId": "6c85f4505fcc165a29503a418238a2383711675e", "title": "The role of exome sequencing in newborn screening for inborn errors of metabolism" }, { "paperId": "61421ebce87423363bdee3d7881a97c0a65c1619", "title": "Empowering newborn screening programs in African countries through establishment of an international collaborative effort" }, { "paperId": "b46ad4399ff6b93b2c9135409cbbe12419e812d7", "title": "Mandatory newborn screening in the United States: History, current status, and existential challenges" }, { "paperId": "f6e36e2b2d57e300ecb6c3e9fc0f8808801d74a4", "title": "Genomic Sequencing for Newborn Screening: Results of the NC NEXUS Project." }, { "paperId": "eab019a98b0807c8757f40ce4ff0e6ea9478b5f2", "title": "Ethical Issues in Newborn Sequencing Research: The Case Study of BabySeq" }, { "paperId": "3af43fabfaa7fe89f0c8ffe666b5fd0ba104dd27", "title": "Is there a duty to reinterpret genetic data? The ethical dimensions" }, { "paperId": "0b6c17207b908152de37e94077b8a2a1efad2c0f", "title": "Inequities in newborn screening: Race and the role of medicaid☆" }, { "paperId": "6530c6954e593d8e8f6f528ce8059ced57f39655", "title": "Genome-wide Association Studies in Ancestrally Diverse Populations: Opportunities, Methods, Pitfalls, and Recommendations" }, { "paperId": "4779a28455293dfbcf983e4dad3dd1feb7b4ad6b", "title": "Effects of the Convention on the Rights of the Child on child mortality and vaccination rates: a synthetic control analysis" }, { "paperId": "f3f4ed9df39872e034aaf7226d43b9814bb1b736", "title": "An Age-Based Framework for Evaluating Genome-Scale Sequencing Results in Newborn Screening." }, { "paperId": "5164cf3f0956b8fa72bb00c71605edc817462c78", "title": "Council for International Organizations of Medical Sciences" }, { "paperId": "3293396095ab63c52fb3e91eea6bd4ec64ed01fe", "title": "Perceived Benefits, Risks, and Utility of Newborn Genomic Sequencing in the BabySeq Project" }, { "paperId": "53551c819dcf126bd471a63ef26bf1cb42070131", "title": "Interpretation of Genomic Sequencing Results in Healthy and Ill Newborns: Results from the BabySeq Project." 
}, { "paperId": "255287819382eb783e86ae2d2d18ea32d61259f5", "title": "Newborn screening in the developing countries" }, { "paperId": "35cd91892a4caccdd33d7714bf390113dc751906", "title": "The value of genomic variant ClinVar submissions from clinical providers: Beyond the addition of novel variants" }, { "paperId": "c7682b0ae406134e1e2b0c0010a1c90a3c444568", "title": "Newborn dried blood spot samples in Denmark: the hidden figures of secondary use and research participation" }, { "paperId": "9ebf5eac882d850473e87c71aa1f4973ac4ee0c2", "title": "Newborn Screening Programs: Next Generation Ethical and Social Issues" }, { "paperId": "6c3f8ac4bc5b9225d66746c5e442d70f9d4711fe", "title": "The BabySeq project: implementing genomic sequencing in newborns" }, { "paperId": "2189d1feadbc865a02562ffe901a68fb7ed1435f", "title": "Parental Interest in Genomic Sequencing of Newborns: Enrollment Experience from the BabySeq Project" }, { "paperId": "2a310c8dd600ab712124380f53c96cf458bb103b", "title": "Sequencing Newborns: A Call for Nuanced Use of Genomic Technologies." }, { "paperId": "a31050670c2a827d7d9b0592bb8211b14d74d2b1", "title": "Universal newborn screening: A roadmap for action." }, { "paperId": "7c5973d9817bc9d086f61c8058e8296c8fc975c1", "title": "Periodic reanalysis of whole-genome sequencing data enhances the diagnostic advantage over standard clinical genetic testing" }, { "paperId": "216d4b7ce528a0a4c8ac2e4fe372732f01311e8c", "title": "An Assessment of a Shortened Consent Form for the Storage and Research Use of Residual Newborn Screening Blood Spots" }, { "paperId": "281f1fdfdba052a33cbba45d336539d77bcd5e7f", "title": "Genomic newborn screening: public health policy considerations and recommendations" }, { "paperId": "09d16bb20223d9d49e0c7ef0942144bfa537e227", "title": "International differences in the evaluation of conditions for newborn bloodspot screening: a review of scientific literature and policy documents" }, { "paperId": "94c627d2d8b631fabb20cb816ab50095deb6952c", "title": "Recontacting Pediatric Research Participants for Consent When They Reach the Age of Majority." }, { "paperId": "f0f44716e502fbe14fad5d0b71533ab6d1abb3e6", "title": "Genomics is failing on diversity" }, { "paperId": "77fff587e39e6e8a9487e649498fe275ad3c7b5f", "title": "Systematic reanalysis of clinical exome data yields additional diagnoses: implications for providers" }, { "paperId": "0356b8549b733372b5d4fecc0d22a9f26d986c8b", "title": "Parental Views on Expanded Newborn Screening Using Whole-Genome Sequencing" }, { "paperId": "faef92d7d702868cb174cd9faaabe5d096b7b6f6", "title": "Supporting Parental Decisions About Genomic Sequencing for Newborn Screening: The NC NEXUS Decision Aid" }, { "paperId": "dccb4c6f9f0b382a750e9678a952219c5398e38f", "title": "Principle of proportionality in genomic data sharing" }, { "paperId": "27d27c729555f50bccf997be5a58a6e96d62c877", "title": "Return of Results from Research Using Newborn Screening Dried Blood Samples" }, { "paperId": "59abff3409f7691d6b0536ebcc57dda24ff777a8", "title": "Points to Consider: Ethical, Legal, and Psychosocial Implications of Genetic Testing in Children and Adolescents." }, { "paperId": "30fc5297c5d0c28ee282b7b9ed52f4c5f3924720", "title": "ClinGen--the Clinical Genome Resource." }, { "paperId": "6ba430a944df5ec5b280bb4881272ee2dd0eb133", "title": "Current status of newborn screening worldwide: 2015." }, { "paperId": "3769aa115f75024483d9f696f48ca9d40ee55483", "title": "Whole-genome sequencing in newborn screening? 
A statement on the continued importance of targeted approaches in newborn screening programmes" }, { "paperId": "bc3bf1606f8c34014d584ca7bb2a273d544b8151", "title": "Development of DNA Confirmatory and High-Risk Diagnostic Testing for Newborns Using Targeted Next-Generation DNA Sequencing" }, { "paperId": "e6675636454dd6580191061a524b4c9be43801de", "title": "Newborn screening." }, { "paperId": "7e416fe71f417d634d422cedb6e3c4b39262acaa", "title": "Newborn screening controversy: past, present, and future." }, { "paperId": "04865ed09a45bb846dcae8f7507efafe301d1630", "title": "Public views on participating in newborn screening using genome sequencing" }, { "paperId": "796238ac5178f93c1581b8b8280279b0eb30768b", "title": "Attitudes of parents toward the return of targeted and incidental genomic research findings in children" }, { "paperId": "540d930637ff7249d34730a74f1e00e3eb1fd1f4", "title": "The ‘thousand-dollar genome’: an ethical exploration" }, { "paperId": "40f343a8f6d8a5ca4a3eea214b328cc030bee1e7", "title": "Tri-Council Policy Statement: Ethical Conduct for Research Involving Humans" }, { "paperId": "5ced25089ba767b9a4182727a39449b43556c4fd", "title": "Decision analysis, economic evaluation, and newborn screening: challenges and opportunities" }, { "paperId": "132f7daf6f0778d1967b76c310c463db3f6ba6f1", "title": "The ethical hazards and programmatic challenges of genomic newborn screening." }, { "paperId": "07d1a2f016f413fb786f7c476a50fb385e55614d", "title": "Newborn screening and maternal diagnosis: rethinking family benefit." }, { "paperId": "896a00302ecef2c77aed9c15544e7b7b250fbb79", "title": "Evaluation of population newborn screening practices for rare disorders in member states of the European Union" }, { "paperId": "ed1ff66f9edf97ec06238434a0d24f49a4414077", "title": "Parents' Decision-Making in Newborn Screening: Opinions, Choices, and Information Needs" }, { "paperId": "d3a49b64933b8e4af4cd49ba6da3d60ecea49111", "title": "The Prospect of Genome-guided Preventive Medicine: A Need and Opportunity for Genetic Counselors" }, { "paperId": "9f5bf739ac7a0e0ca14c942dacef3f45b67eb61b", "title": "Conference report: Second conference of the Middle East and North Africa newborn screening initiative: Partnerships for sustainable newborn screening infrastructure and research opportunities" }, { "paperId": "876061a13749f98be99b5260dd2f49a662d1414b", "title": "Newborn Screening" }, { "paperId": "bbfa1aa85bbac3b427f45e218220b02fd7c26172", "title": "Revisiting Wilson and Jungner in the genomic age: a review of screening criteria over the past 40 years." }, { "paperId": "50d806f6f054d860326ef0584f0b54b0c517f69b", "title": "Comprehensive Cost-Utility Analysis of Newborn Screening Strategies" }, { "paperId": "1df1172e6bbb5f2923ca149280dadcc14560e8e3", "title": "Population screening in the age of genomic medicine." }, { "paperId": "a7e5a72374ca7224bd90e1f5e8259cab21d3dc81", "title": "Serving the family from birth to the medical home. Newborn screening: a blueprint for the future - a call for a national agenda on state newborn screening programs" }, { "paperId": "242187731d65c888430df594d75436db90366b0d", "title": "Genetic dilemmas and the child's right to an open future." 
}, { "paperId": "5d301630d781b16e082fcbed9e9bd00003274beb", "title": "The United Nations convention on the rights of the child" }, { "paperId": "d01508c5496069303c28363a295d7058fef1c162", "title": "On retributive justice" }, { "paperId": null, "title": "WMA - the World Medical Association-WMA Declaration of Helsinki – Ethical Principles for Medical Research Involving Human Subjects" }, { "paperId": null, "title": "United States Common Rule" }, { "paperId": null, "title": "Screening: Between Hope and Hype" }, { "paperId": "d932f6af927d5e62d11e189b826ab56c05605f8a", "title": "Serving the family from birth to the medical home. A report from the Newborn Screening Task Force convened in Washington DC, May 10-11, 1999." }, { "paperId": null, "title": "Canadian Child Health Law: Health Rights and Risks of Children" }, { "paperId": "9978d23a1ab3194e555f405c605b557625a1e9ac", "title": "Principles and practice of screening for disease" }, { "paperId": null, "title": "National Organization for Rare Disorders Newborn screening State report card (2021)" } ]
18,071
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Mathematics", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/0291665270a93c509ecde8b2c50642b3adcfe176
[ "Computer Science", "Mathematics" ]
0.839306
Automated abstraction by incremental refinement in interpolant-based model checking
0291665270a93c509ecde8b2c50642b3adcfe176
IEEE/ACM International Conference on Computer-Aided Design
[ { "authorId": "1743867", "name": "G. Cabodi" }, { "authorId": "1730731", "name": "P. Camurati" }, { "authorId": "3251152", "name": "M. Murciano" } ]
{ "alternate_issns": null, "alternate_names": [ "IEEE/ACM Int Conf Comput Des" ], "alternate_urls": [ "http://ieeexplore.ieee.org/xpl/mostRecentIssue.jsp?punumber=345" ], "id": "b87983f2-c272-4ada-8161-52fb86e00bfb", "issn": "1063-6757", "name": "IEEE/ACM International Conference on Computer-Aided Design", "type": null, "url": "http://ieeexplore.ieee.org/xpl/mostRecentIssue.jsp?punumber=212" }
null
# Automated Abstraction by Incremental Refinement in Interpolant-Based Model Checking

## G. Cabodi, P. Camurati, M. Murciano

Dipartimento di Automatica ed Informatica, Politecnico di Torino, Torino, Italy
Email: {gianpiero.cabodi, paolo.camurati, marco.murciano}@polito.it

**_Abstract—This paper addresses the field of Unbounded Model Checking (UMC) based on SAT engines, where Craig interpolants have recently gained wide acceptance as an automated abstraction technique. We start from the observation that interpolants can be quite effective on large verification instances. As they operate on SAT-generated refutation proofs, interpolants are very good at automatically abstracting facts that are not significant for proofs. In this work, we push forward the new idea of generating abstractions without resorting to SAT proofs, and of accepting (rejecting) abstractions whenever they (do not) fulfill given adequacy constraints. We propose an integrated approach smoothly combining the capabilities of interpolation with abstraction and over-approximation techniques that do not directly derive from SAT refutation proofs. The driving idea of this combination is to incrementally generate, by refinement, an abstract (over-approximate) image, built up from equivalences, implications, ternary and localization abstraction, and then (eventually) from SAT refutation proofs. Experimental results, derived from the verification of hard problems, show the robustness of our approach._**

I. INTRODUCTION

Symbolic model checking [1] is a method for verifying temporal properties of finite state systems, which relies on a symbolic representation of sets, typically through Binary Decision Diagrams (BDDs) [2]. Symbolic representations of state sets have been by far the key factor in the success of BDDs in symbolic model checking [1]: given the canonicity of BDDs and efficient implementations of quantification, image and pre-image operators can produce Boolean functions representing state sets. Unfortunately, scalability problems practically limit BDD usage to circuits with tens to a few hundred latches.

By contrast, bounded model checking (BMC) [3] can falsify temporal properties using Boolean satisfiability (SAT). BMC is efficient at producing counter-examples, and it has been shown to be more robust and scalable than BDD-based symbolic model checking. As a matter of fact, SAT tools are the core technology used to attack larger problems. AND-Inverter Graphs (AIGs), or similar (non-canonical) circuit-based representations [4], are used to model circuit unrollings. Circuits are either converted to CNF clauses, to be processed by SAT solvers, or directly manipulated by circuit-based solvers.

However, BMC is not complete, as it only guarantees the correctness of a property up to a given bound. Therefore, specific techniques are required in order to support Unbounded Model Checking (UMC) in SAT-based environments. In more detail, the ability to check reachability fix-points is the main difference between BMC and UMC. All UMC approaches basically rely on one or more methods able to detect that the forward, backward, or mixed analysis they perform is complete.

_A. Related Works_

Our work follows UMC approaches based on SAT rather than BDDs. Inductive proofs are at the base of most such approaches [5], [6], [7], [8], all following the seminal work of Sheeran et al. [9].
Fix-point checks are proved inductively, whereas completeness is based on uniqueness constraints expressing loop-free paths between states. Unfortunately, the longest loop-free path can be exponentially longer than the diameter of the reachable state space; thus most of the research in this field has concentrated on finding tight sets of inductive invariants, i.e., over-approximations of reachable states, quite often sufficient for inductive proofs.

In order to avoid the exponential depth, symbolic representations of state sets are an alternative approach. But they are generally difficult to manipulate within non-BDD-based frameworks, as both CNF and circuit-based representations can lead to memory explosion. Williams et al. [10] first adopted Boolean Expression Diagrams (BEDs) for the removal of quantifiers. Abdulla et al. [11] adopted Reduced Boolean Circuits (RBCs), i.e., a variant of BEDs, to represent formulas on which they performed existential quantifier elimination through substitution, scope reduction, etc. McMillan [12], later followed by Kang and Park [13], proposed quantifier elimination through the enumeration of SAT solutions (all-solutions SAT). Ganai et al. [14] extended the previous approaches by using "circuit co-factoring": the authors adopted a circuit to represent state sets, and they used circuit-based co-factoring to capture a large set of states in every SAT enumeration step. All the above methods potentially converge faster than [9], although they share the common problem of possibly exponential state set representations.

Abstraction techniques represent an orthogonal direction to tackle complexity, as they seek and remove those parts of a circuit/system that are not relevant for the proof. Within this general path of research, our work follows the ideas first introduced by McMillan in [15], who used Craig interpolants for Unbounded Model Checking. Craig interpolants exploit the ability of modern SAT solvers to generate proofs of unsatisfiability. Over-approximations of the reachable states are computed, starting from refutation proofs of (unsatisfied) BMC-like runs. The approach can be viewed as an iterative refinement of proof-based abstractions to narrow down a proof to relevant facts, and its convergence is bound by the state graph diameter. The most interesting features of Craig interpolants are their completeness and the automated abstraction mechanism, while the drawbacks are (again) mainly related to the potential size blow-up of SAT-based interpolant circuits, and to the convergence of over-approximate reachability. New applications and improvements over the base method [15] were proposed in [16], [17], [18], [19], in order to push forward the applicability and scalability of the technique.

This paper is a direct follow-up of [18], as we push forward our previous idea to compute abstractions at the image level, generating interpolant-like over-approximations without resorting to SAT refutation proofs.

_B. Motivations_

Our experience [20], [21] shows that SAT-based Craig interpolants, combined with preliminary computations of inductive invariants, can prove a broad range of verification instances. A careful analysis of the unproved problems led us to the following observations:

- Craig interpolants tend to produce highly redundant circuit representations of state sets, derived from SAT-generated proofs (often of unpredictable size). Circuits can be compacted by means of logic synthesis optimizers, which are limited in terms of scalability and thus work poorly with big interpolants.
- Over-approximation can drastically reduce the forward diameter (as long state transition paths can be bypassed by over-approximation). But it can also trigger state space explorations within unreachable areas, with direct consequences in terms of both traversal depth and reachable state set size.
- Craig interpolants combine the power of forward and backward reachability, by over-approximating forward states within the region left "free" by backward reachability. Whenever the backward unrolling representation is complex, and the "free" region shrinks, interpolant representations tend to explode.

Due to the previously mentioned problems, we seek alternative ways to generate state set over-approximations, whose circuit size can be better kept under control, and we focus our efforts on the generation of tighter over-approximations.

_C. Contributions_

In this paper, we propose a set of abstraction techniques not directly derived from SAT refutation proofs, although controlled by SAT-based "adequacy" checks. We incrementally compute sets of atomically simple over-approximations, with increasing complexity and computational effort. Our approach is a follow-up of our previous experience [18], where we already explored the combination of interpolants and localization abstraction. With the current work we go much beyond that idea, as we introduce an automated abstraction procedure based on incremental learning and simplification steps. We learn small atomic constraints such as equivalences and implications between state variables. We simplify circuits representing state sets and combinational unrollings by exploiting equivalences and implications, plus ternary and localization abstractions. Compared to [19], our method works on abstractions at the image level, whereas they seek abstraction/refinement steps to be used for entire interpolant-based traversals. We adopt:

- cube-based over-approximations, by detecting state variables temporarily stuck at constant values;
- equivalence classes and implications among state variables, able to provide simple over-approximations, at the base of powerful circuit optimizations;
- abstractions based on ternary logic models.

Starting from the above considerations, our approach proposes two main novel contributions:

1) We adopt state set over-approximation techniques that are novel for interpolant-based Model Checking.
2) We propose an integrated approach for image computation, incrementally combining and tuning the above over-approximation techniques within a unified SAT-based (complete) Unbounded Model Checking approach.

Our experimental results concentrate on proving correct properties, and they show that the proposed methods improve the original one by making it faster, more robust, and more scalable. We show experiments where, in some cases, we are able to complete difficult instances not achievable with previous techniques.

_D. Outline_

Section II introduces background notions on notation, BMC and UMC, and SAT-based Craig interpolant Model Checking. Sections III and IV present our main contributions. Section V discusses the experiments we performed. Section VI concludes with some summarizing remarks.

II. BACKGROUND

_A. Model and Notation_

We address systems modeled by labeled state transition structures, and represented implicitly by Boolean formulas.
The state space and the (free) inputs are defined by indexed sets of Boolean variables $V = \{v_1, \ldots, v_n\}$ and $W = \{w_1, \ldots, w_n\}$, respectively. States correspond to valuations of the variables in $V$, whereas transition labels correspond to valuations of the variables in $W$. We indicate next states with the primed variable set $V' = \{v'_1, \ldots, v'_n\}$. Whenever we explicitly need time-frame variables, we use $V^i = \{v^i_1, \ldots, v^i_n\}$ and $W^i = \{w^i_1, \ldots, w^i_n\}$ for variable instances at the $i$-th time frame. We also adopt the short notation $V^{i..j}$ for $V^i, V^{i+1}, \ldots, V^j$ (defined if $i \le j$; otherwise we conventionally define it as a void variable set), and similarly for $W$.

A set of states is expressed by a state predicate $S(V)$ (or $S(V')$ for the next state space). $I(V)$ is the initial state predicate. We use $P(V)$ to denote an invariant property, and $F(V) = \neg P(V)$ for its complement (as it is often used as the target for bug search). With abuse of notation, in the rest of this paper we make no distinction between the characteristic function of a set and the set itself.

$T(V, W, V')$ is the transition relation, which we assume given by a circuit graph, with state variables mapped to latches. Present and next state variables correspond to latch outputs and inputs, respectively. The input of the $i$-th latch is fed by a combinational circuit, described by the Boolean function $\delta_i(V, W)$. Hence, the transition relation can be expressed as

$$T(V, W, V') = \bigwedge_i t_i(V, W, v'_i) = \bigwedge_i \left( v'_i = \delta_i(V, W) \right)$$

A state path of length $k$ is a sequence of states $\sigma_0, \ldots, \sigma_k$ such that $T(\sigma_i, \nu_i, \sigma_{i+1})$ is true, given some input $\nu_i$, for all $0 \le i < k$.

A state set $S'$ is reachable from state set $S$ in $k$ steps if there exists a path of length $k$ in the labeled state transition structure connecting a state belonging to $S$ to another one belonging to $S'$; equivalently, the following formula is satisfiable:

$$S(V^0) \wedge \bigwedge_{i=0}^{k-1} T(V^i, W^i, V^{i+1}) \wedge S'(V^k)$$

The image operator IMG($T$, $From$) computes the set of states $To$ reachable in one step from the states in $From$:

$$To(V') = \mathrm{IMG}(T(V, W, V'), From(V)) = \exists_{V,W} \left( From(V) \wedge T(V, W, V') \right)$$

An over-approximate image (or pre-image) is any state set including the exact image:

$$To^+ = \mathrm{IMG}^+(T, From) \supseteq \mathrm{IMG}(T, From)$$

Pre-image is dual, with the only difference that existential quantification of functionally computed state variables can be operated by composition:

$$To(V) = \mathrm{PREIMG}(T(V, W, V'), From(V')) = \exists_{W,V'} \left( From(V') \wedge T(V, W, V') \right) = \exists_W \, From(\delta(V, W))$$

_B. Bounded Model Checking_

SAT-based Bounded Model Checking (BMC) [3] considers only $k$-bounded reachability, as expressed by the propositional formula

$$\mathrm{BMC}^k_0 = I(V^0) \wedge \bigwedge_{i=0}^{k-1} T(V^i, W^i, V^{i+1}) \wedge F(V^k)$$

A bounded proof is thus translated into a SAT problem: $\mathrm{BMC}^k_0$ is satisfiable iff there is a counter-example (a path from $I$ to $F$) of length $k$.
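To make the bounded-reachability formula concrete, the following minimal Python sketch (not from the paper; the two-latch transition system is invented for illustration) checks the $\mathrm{BMC}^k_0$ condition by explicit enumeration instead of a SAT solver: is there an input sequence driving the system from $I$ to $F$ in exactly $k$ steps?

```python
# Toy model (invented for illustration): two latches v = (v1, v2), one free input w.
# delta implements the next-state functions, as in T(V, W, V') = AND_i (v_i' = delta_i(V, W)).
def delta(v, w):
    v1, v2 = v
    return (v1 != w,           # v1' = v1 XOR w
            v1 and not v2)     # v2' = v1 AND NOT v2

I0 = {(False, False)}                    # initial states I(V)
F = lambda v: v == (True, True)          # target F(V) = NOT P(V)

def bmc(k):
    """BMC_0^k by explicit enumeration: can some input sequence drive I to F in k steps?"""
    frontier = set(I0)
    for _ in range(k):
        frontier = {delta(v, w) for v in frontier for w in (False, True)}
    return any(F(v) for v in frontier)

print([k for k in range(6) if bmc(k)])   # bounds at which F is hit: [2, 3, 4, 5]
```

A real BMC engine would instead unroll the circuit symbolically and hand the resulting formula to a SAT solver, but the satisfiability question being asked is exactly the one above.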
In the case of circuits, existential quantification is conveniently applied to intermediate sets of state variables:

$$\mathrm{BMC}^k_0 = I(V^0) \wedge \exists_{V^{1..k}} \left( \bigwedge_{i=0}^{k-1} T(V^i, W^i, V^{i+1}) \wedge F(V^k) \right) = I(V^0) \wedge Cone_k(V^0, W^{0..k-1})$$

where $Cone_k$ represents a combinational single-output circuit unrolling, formally defined by exploiting quantification through functional composition:

$$Cone_k = Cone_k(V^0, W^{0..k-1}) = \exists_{V^{1..k}} \left( \bigwedge_{i=0}^{k-1} \left( V^{i+1} = \delta(V^i, W^i) \right) \wedge F(V^k) \right) = F(\delta(\ldots \delta(V^0, W^0) \ldots, W^{k-1}))$$

The main advantage of using $Cone_k$ (a combinational circuit) instead of transition relation instances ($\bigwedge_i T^i$) is that several circuit-based simplifications, besides Cone-Of-Influence (COI) reductions, are possible on $Cone_k$, from constant propagations to combinational optimizations, before moving to CNF- or circuit-based SAT.

_C. State Sets and Fix-Points in Unbounded Model Checking_

Although reachability is usually formulated in terms of the image and/or pre-image operators, we will here express backward reachability using $Cone_k$, as previously defined. SAT-based model checking approaches generally keep explicit representations of circuit unrollings instead of (exact) state set representations, due to the inherent complexity of quantification operators.

The set of states (backward) reachable from $F$ in (exactly) $k$ steps can be obtained by primary input quantification over a circuit unrolling:

$$BckR_k(V) = \exists_{W^{0..k-1}} \, Cone_k(V, W^{0..k-1})$$

The overall set of backward reachable states is the union of all reachable states up to depth $k$:

$$BckR_{0..k}(V) = \bigvee_{i=0}^{k} BckR_i(V) = \bigvee_{i=0}^{k} \exists_{W^{0..i-1}} \, Cone_i(V, W^{0..i-1}) = \exists_{W^{0..k-1}} \bigvee_{i=0}^{k} Cone_i(V, W^{0..i-1})$$

where distributivity of existential quantification over union has been applied. We introduce the short notation $Cone_{0..k}$ for $\bigvee_{i=0}^{k} Cone_i$, so backward reachable states are defined as

$$BckR_{0..k}(V) = \exists_{W^{0..k-1}} \, Cone_{0..k}(V, W^{0..k-1})$$

A backward reachability fix-point could be checked by a SAT run on the Boolean formula

$$BckR_{k+1}(V) \wedge \neg BckR_{0..k}(V)$$

or on a simpler one, with quantified state sets on the second term only:

$$Cone_{k+1}(V, W^{0..k}) \wedge \neg BckR_{0..k}(V)$$

Unfortunately, both of the above formulations are difficult to manipulate in many practical cases, due to the complexity of SAT-based quantification.

_D. Craig Interpolants in Model Checking_

Given two inconsistent formulas $A$ and $B$ ($A \wedge B = 0$), an interpolant $C$ is a formula such that:

1) it is implied by $A$;
2) it is inconsistent with $B$, i.e., $C \wedge B$ is unsatisfiable;
3) it is expressed over the common variables of $A$ and $B$.

A Craig interpolant $C = \mathrm{ITP}(A, B)$ is an AND/OR circuit that can be computed in linear time from the refutation proof of $A \wedge B$. Albeit the computation is linear, the refutation proof itself can be exponentially larger than $A$ and $B$.
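For small formulas, the three defining conditions of an interpolant can be checked by brute force. The sketch below is illustrative only: formulas are represented as Python predicates over assignment dictionaries, and the example formulas are invented.

```python
from itertools import product

def assignments(variables):
    """All Boolean assignments over the given variable names."""
    for bits in product([False, True], repeat=len(variables)):
        yield dict(zip(variables, bits))

def is_interpolant(A, B, C, vars_A, vars_B, vars_C):
    """Exhaustively check the three Craig conditions (feasible only for tiny formulas)."""
    # 3) C ranges only over the common variables of A and B.
    if not set(vars_C) <= set(vars_A) & set(vars_B):
        return False
    all_vars = sorted(set(vars_A) | set(vars_B))
    for env in assignments(all_vars):
        if A(env) and not C(env):   # 1) A implies C
            return False
        if C(env) and B(env):       # 2) C AND B is unsatisfiable
            return False
    return True

# Tiny invented example: A = x AND y, B = NOT x AND z; they share only x.
A = lambda e: e["x"] and e["y"]
B = lambda e: (not e["x"]) and e["z"]
C = lambda e: e["x"]               # candidate interpolant over the shared variable
print(is_interpolant(A, B, C, ["x", "y"], ["x", "z"], ["x"]))  # True
```

In practice, of course, interpolants are extracted from a SAT solver's refutation proof rather than checked this way; the sketch only pins down what is being extracted.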
A $k$-adequate over-approximate image is an $\mathrm{IMG}^+(T, From)$ that does not intersect any state on paths of length $k$ to $F$. Using the $Cone_{0..k}$ circuit unrolling, a $k$-adequate over-approximate image $\mathrm{IMG}^+_{Adq}(T, From, Cone_{0..k})$ is defined as follows (indexes of input variable sets have been shifted up in $Cone_k$, from $W^{0..k-1}$ to $W^{1..k}$, in order to use the $W^0$ variables in the forward $T$ instance):

$\mathrm{IMG}^+_{Adq}$ is undefined iff

$$From(V) \wedge T(V, W^0, V') \wedge Cone_{0..k}(V', W^{1..k}) \neq 0$$

Otherwise, it is computed by interpolation:

$$\mathrm{IMG}^+_{Adq}(T, S, Cone_{0..k}) = \mathrm{ITP}\left( S(V) \wedge T(V, W^0, V'),\; Cone_{0..k}(V', W^{1..k}) \right)$$

An image is called adequate if it is $k$-adequate for any $k$, i.e., no path of any length can lead from a state within the image to states in $F$. Since the model is finite, a $k$-adequate image is adequate if $k \ge d$, where $d$ is the diameter of the state transition graph.

McMillan [15] proposed an effective, fully SAT-based Unbounded Model Checking algorithm exploiting interpolants, as sketched in Figure 1.

    INTERPOLANTMC (I, T, F)
      k = 0
      do
        Cone_{0..k} = CIRCUITUNROLL(F, δ, k)
        res = FINITERUN(I, T, Cone_{0..k})
        k = k + 1
      while (res = undecided)

    FINITERUN (I, T, Cone)
      if (SAT(I ∧ T ∧ Cone))
        return (reachable)
      R = I
      while (true)
        To = IMG+_Adq(T, R, Cone)
        if (To = undefined)
          return (undecided)
        if (To ⇒ R)
          return (unreachable)
        R = R ∨ To

Fig. 1. Interpolant-based verification.
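As a hedged illustration of the control flow of Figure 1, here is a toy executable transliteration over explicit state sets. It is not the paper's SAT-based procedure: the "cone" is computed directly as the set of states reaching $F$ within $k$ steps, and, purely as a stand-in for a real Craig interpolant, the over-approximate image is taken to be the exact image (trivially an over-approximation, and adequate whenever it misses the cone).

```python
# Toy transliteration of Fig. 1 over explicit state sets (illustration only).
# T: dict mapping each state to its set of successors; I, F: sets of states.
def img(T, R):
    return {t for s in R for t in T[s]}

def cone_upto_k(T, F, k):
    """States that can reach F along some path of length <= k (BckR_{0..k})."""
    cone, frontier = set(F), set(F)
    for _ in range(k):
        frontier = {s for s in T if T[s] & frontier}
        cone |= frontier
    return cone

def finite_run(T, I, F, cone):
    if I & cone:                 # SAT(I ∧ T ∧ Cone): F reachable within this bound
        return "reachable"
    R = set(I)
    while True:
        to = img(T, R)           # exact image as a stand-in for ITP-based IMG+_Adq
        if to & cone:            # IMG+_Adq undefined: the approximation hits the cone
            return "undecided"
        if to <= R:              # To implies R: over-approximate fix-point
            return "unreachable"
        R |= to

def interpolant_mc(T, I, F):
    k = 0
    while True:
        res = finite_run(T, I, F, cone_upto_k(T, F, k))
        if res != "undecided":
            return res
        k += 1

# Invented 4-state examples: a ring where F is reachable, and a split graph where it is not.
print(interpolant_mc({0: {1}, 1: {2}, 2: {3}, 3: {0}}, {0}, {3}))          # reachable
print(interpolant_mc({0: {1}, 1: {0}, 2: {3}, 3: {2}}, {0}, {3}))          # unreachable
```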
Given Cone0..k _T o[+](V_ _[′]) ∧_ _Cone0..k(V_ _[′], W_ [1][..k]) = 0 Given the above observations, we look for an overapproximate image, computed as a Boolean conjunction of several atomic over-approximations _T o[+](V_ _[′]) =_ [�]i _[to][i][+][(][V][ ′][)]_ characterized by the property that each toi[+] factor includes the exact image, but only the [�]i _[to][i][+][ product is][ k][−][adequate]_ (whereas each toi[+] is not required to be k−adequate) _∀i, T_ (V, W0, V _[′]) ∧_ _From(V )_ _⇒_ _toi[+](V_ _[′])_ �i _[to][i][+][(][V][ ′][)][ ∧]_ _[Cone][0][..k][(][V][ ′][W][ 1][..k][)]_ = 0 Hence, we look for a product of toi[+] factors, where each one is an over-approximate image in itself, and the overall product is k−adequate. We choose to possibly find several (small) toi[+], instead of a single T o[+], as we adopt an iterative algorithm to select among multiple (small) candidate over-approximations. Whereas it might be computationally infeasible to enumerate all functions of the (V’) variables as candidate over-approximations, we explore given classes of atomic functions. ----- The second criterion we adopt is the incremental refinement of an initially coarse T o[+] abstraction: _• We consider toi[+]_ candidates, selected by incremental complexity. We group them by iterating through classes of functions, so that we first capture the simplest/easiest ones, before moving to more complex candidates _• Whenever we find an over-approximation, we possibly_ use it to simplify the data structure for the next steps (e.g. the transition relation and the backward cone used for adequacy checks) _• We_ stop the iterative process whenever the overapproximation is adequate. As the selection we operate is not complete, we possibly end up computing a Craig interpolant (on the simplified backward cone and forward transition relation) from a SAT refutation proof. Figure 2 shows the skeleton of our iterative image overapproximation algorithm. IMG[+]Adq [(][T] [,][ From][,][ Cone][)] _To[+]_ = 1 Class = constClass **do** Candclass = GETCANDIDATES (Class) Abstrclass = { } **foreach cj (V’) ∈** CandClass **if ACCEPT (ci, T**, From, Cone) Abstrclass = Abstrclass ∪{cj _}_ _To[+]class_ [=][ OVER][A][PPR][S][ET][ (Abstr][class][)] _To[+]_ = To[+] _∧_ _To[+]class_ **if ¬SAT (To[+]** _∧_ _Cone)_ **return (To[+])** SIMPLIFY (Cone, Abstrclass) SIMPLIFY (T, Abstrclass) Class = NEXTCLASS() **while (Class ̸= emptyClass)** **return (To[+]** _∧_ ITP (From∧ _T_, Cone)) Fig. 2. Over-approximate k-adequate image computation. Let Class represent the choice for the type of overapproximations to be considered: we start by constClass, the equivalence class of _constant_ _variables,_ then NEXTCLASS() returns, at each new iteration, equivalences, implications, ternary abstraction and localization abstraction. For each class, we iterate through the cj candidates, in order to gather the accepted ones. A candidate is accepted if it is a valid over-approximation, i.e. adequacy conditions are kept after applying the over-approximation. Once all accepted candidates for a given class have been selected, we generate the class over-approximate image (T o[+]class [=] OVERAPPRSET(Abstrclass)). T o[+]class [is a new refinement for] (and it is and-ed to) the overall image T o[+]. If the refined _T o[+]_ is adequate (¬SAT (T o[+] _∧_ _Cone)), it is returned as_ a result, otherwise we exploit the set of atomic abstractions (Abstrclass) to simplify Cone and T, and we move to the next class. 
When we have looped through all classes, without returning an adequate T o[+] set, this means that the over-approximation is still too coarse, and we end up calling SAT-based interpolation (which is possibly easier due to all previous simplifications on _T and Cone)._ All of the above mentioned classes will be discussed in section IV, where, for each one, we motivate it, we discuss the abstraction obtained, and how to efficiently compute _k−adequacy and to simplify Cone and T_ . IV. ABSTRACTION CLASSES AND ADEQUACY CHECKS Abstraction classes are now individually discussed, as well as the related acceptance criteria, set over-approximation and simplification strategies. _A. Constant variables_ We first look for variables equivalent to constant values, i.e. variables that are implied to constant values in the next state. More formally, let variable x[′]j _[∈]_ _[X]_ _[′][ be the generic next]_ state variable, we consider two possible over-approximation literal candidates: ¬x[′]j [and][ x]j[′] [. Therefore, the candidate class] returned by GETCANDIDATES(constClass) is the set of all (direct and complemented) literals: GETCANDIDATES(constClass) = {x[′]1[,][ ¬][x][′]1[, . . ., x][′]n[,][ ¬][x][′]n[}] We efficiently accept/reject individual candidates by an incremental SAT procedure that iterates through all literals, and detects the implied ones. An accepted literal lj is such that: _S ∧_ _T ⇒_ _lj_ The next step is the explicit computation of T o[+]constClass (function OVERAPPRSET), as the conjunction of all implied literals. Given a cube, the simplification task (performed by the function SIMPLIFY(T, AbstrconstClass)) is straightforward. We just need to replace variables by the corresponding constant values, and this action generally reduces the overall AIG node count for Cone. SIMPLIFY(T, AbstrconstClass)) is easier, as we simply need to remove the T components corresponding to constant values. It is worth noticing that constant state variables typically appear at the initial steps of forward traversals and possibly throughout the traversal of externally constrained (by external assumptions) systems. Keeping exact values for implied variables is a way to tighten over-approximation vs. other abstraction techniques. Craig interpolants as well as localization abstraction could, for instance, abstract away implied variables (whenever they are not relevant for the proof). But this might result in a looser over-approximation, and possibly trigger visits of unreachable states. By explicitly expressing T o[+]constClass[, we] also remove implied variables from further computations (as in other abstraction techniques), but the T o[+]constClass [factor] in the overall over-approximate image will constrain the corresponding present state variables at their exact value, at the next forward traversal iteration. ----- _B. Equivalences and Implications_ After detecting constant state variables, we look for equivalences (modulo complementation) between couples of next state variables. Let literals li and lj be two literals to be considered for equivalence, the corresponding candidate equivalence is eqij = _li ⇔_ _lj._ The set of candidate equivalences is obviously quadratic in the number of variables, as well as the iteration for acceptance checks. Again we can exploit efficient solutions based on equivalence classes and incremental SAT, inspired by the ones proposed in [7], [8]. _T o[+]eqClass_ [(function][ OVER][A][PPR][S][ET][) is straightforwardly] computed as the conjunction of all proved equivalences. 
Function simplification given a set of equivalences is performed by means of variable merging (for each equivalence class we substitute each variable literal with a class representative literal). Once equivalences have been detected and used to simplify _Cone and T_, we consider variable implications, again, modulo complementation. Implications are accepted similarly to equivalences, but their computation cannot rely on equivalence classes any more. Furthermore, implications cannot be used for direct function simplification. We just need to explicitly represent them (T o[+]Impl[) in order to tighten the candidate][ T o][+][.] We also explicitly add the T o[+]Impl [constrain to][ Cone][ as] an additional redundant factor, for space search restriction purposes. _C. Ternary abstraction_ Three-Valued Logic is at the base of several synthesis and verification approaches, following the general idea to encode an additional logic value, representing either the unknown, or the {0, 1} set. Our approach directly follows an idea proposed by Bres et al. [22], following works by Malik [23] and Shiple et al. [24]. Other related works can be found in the field of equivalence checking [25], where ternary logic is used in order to handle circuit initialization sequences. The cited works are inspired from Scott’s three-valued logic, built upon the usual two-valued Boolean logic by adding a third value, that denotes the undefined or unknown value. _⊥_ Bres et al. [22] adopt a two bit encoding (sometimes called dual rail) for ternary constants ((0, 1) for false, (1, 0) for _true, (0, 0) for ⊥). A Boolean function f is represented by_ two bit functions f0 and f1, such that f0 (resp. f1) is the characteristic function of the set for which f evaluates to 0 (resp. 1). f⊥ = ¬f0 ∧¬f1 is the set for which the function is undefined (the don’t care set). The function is completely defined if f1 ∨ _f0 = 1, and f⊥_ = 0. Let us introduce the notation f [3] = (f0, f1) for such a dual rail encoding. Boolean operators for over the ternary encoding are defined by the following rules: _¬f_ [3] = _¬(f0, f1)_ = (f1, f0) _f_ [3] _∨_ _g[3]_ = (f0, f1) ∨ (g0, g1) = (f0 ∧ _g0, f1 ∨_ _g1)_ _f_ [3] _∧_ _g[3]_ = (f0, f1) ∧ (g0, g1) = (f0 ∨ _g0, f1 ∧_ _g1)_ Let si be a Binary variable, and σi the corresponding ternary one. Abstraction of variable σi in f [3](σ1, . . ., σi, . . ., σn) was done in [22], by setting variable σi to the unknown value: _f_ [3](σ1, . . ., ⊥, . . ., σn) In the dual rail encoding, the σi variable is assigned the _⊥_ = (0, 0) ternary constant, whereas all other σj variables are encoded by the symbolic ternary value (sj, ¬sj ) (using the corresponding binary variable). Given the above assignments, function f [3] has now a possibly non-void don’t care set f⊥. And a binary overapproximation of f can be obtained by ¬f0 = f1 ∨ _f⊥_ _⊇_ _f_ . _T ernaryAbs (f_ (s1, . . ., si, . . ., sn), si) = _¬f0((s1, ¬s1), . . ., (0, 0), . . ., (sn, ¬sn))_ Informally, we have replaced the characteristic function of the set ”on which f is true” (the onset of f ), by a superset ”on which f is certainly not false”. We apply ternary abstraction to our over-approximate image computation, by looping through all primary input and state variables, and iteratively selecting them as possible ternary abstraction candidates. Ternary abstraction of the generic state/input variable si could violate an adequacy condition. 
This is the reason why it is accepted just if it still guarantees adequacy: _From(V )_ _∧_ _ternaryAbs(T (V, W_ [0], V _[′]), si)_ _∧_ _ternaryAbs(Conek(V_ _[′], W_ [1][..k]), si) = 0̸ The si variable can be one of the V _[′]_ state variables, or the W [0][..k] input variables. An accepted ternary abstraction does not directly produce any over-approximation of the image (T o[+]) (unless all W [0] variables are quantified). Hence, OVERAPPRSETternaryClass generally returns no abstraction (T o[+]ternaryClass [= 1][. On the other hand, ternary abstraction] is directly applied to simplify T and/or Cone. _D. Localization abstraction_ Localization abstraction is our last attempt to produce an over-approximation. A candidate abstraction corresponds to letting a state variable be free at the forward/backward boundary, i.e. simply re-labeling the chosen vi[′] [variable in] _Cone(V_ _[′], W_ [1][..k]) by a fresh new variable. The abstraction process is inspired by our previous work [18], and is managed analogously to ternary abstraction. V. EXPERIMENTAL RESULTS We implemented our algorithms on top of the PdTrav tool, a state-of-the-art verification framework which won two of the sub-categories at the 2007 Model Checking competition [20]. We compared results of our tool with and without the proposed methodology. Our experiments ran on a Dual-Core Pentium IV 3 GHz Workstation with 3 GByte of main memory, running Debian Linux. We performed extensive tests, by specifically addressing proofs of correctness. For each verification instance, we used a 900 seconds time limit. ----- 900 800 700 600 500 400 300 200 100 0 0 100 200 300 400 500 600 700 800 900 Original time [sec] problems, as shown by the results on the large industrial benchmarks. VI. CONCLUSIONS This paper addresses improvements to Interpolant-based model checking by means of an integrated approach exploiting over-approximation techniques that are novel for this field. We describe an integrated approach for image computation, incrementally combining and tuning different techniques within a unified SAT-based (complete) Unbounded Model Checking approach. The method we propose adopts the general skeleton of interpolant-based model checking procedure, and exploits preliminary SAT-based exploration of candidate atomic abstractions, within a global effort to tighten over-approximations and keep state set sizes under control. Experimental results, specifically oriented to hard verification problems, show the robustness of our approach implemented on a state-of-the-art verification framework. REFERENCES Fig. 3. Verification time on circuits coming from [20], [21]. We present results on circuits derived from the Model Checking competition [20], [21], and a few standard benchmarks from the VIS distribution [26], with particular emphasis on true hard-to-solve instances. The reason for reporting only true properties verification data is the ability of BMC in being the most effective technique for checking falsifications (i.e., false properties), as showed by the competition results. The scattered plot in Figure 3 shows verification times on 152 benchmarks from the Model Checking competition. It compares a standard interpolant-based UMC technique (time on the X-axis) against the same strategy improved as suggested in this paper (time on the Y-axis). Figure 3 clearly shows a set of “easy” benchmarks, i.e., the ones which are solved in few seconds/minutes for both the techniques. 
The plot also highlights a set of problems that were not solved within the 900 seconds time limit with the standard interpolant computation, while they are solved with the proposed optimization. The overall results clearly show the robustness of the proposed approach. Table I reports more detailed data on a few selected hard-tosolve verification instances. The meaning of columns follows: Model is instance name (industrial designs names have intentionally been hidden), # PI, # FF and # NODES represent the number of primary inputs, memory elements and AIG nodes of the circuit respectively; finally, Std ITP, Method A and Method B provide the verification time in seconds, with different strategies. Ternary abstraction was disabled in column Method A, while it was enabled in column Method B. The data clearly show that most of the problems could already be completed without resorting to ternary abstraction (see column Method A), whereas the latter plays an important role in a few cases. An inner look at those experiments showed that ternary abstraction requires a time overhead for acceptance checks, which is not worth the advantage gained with the abstraction, in several cases. Though we still need to further enhance the self-tuning capability of our approach, it is already able to attack large [1] J. R. Burch, E. M. Clarke, D. E. Long, K. L. McMillan, and D. L. Dill, “Symbolic Model Checking for Sequential Circuit Verification,” IEEE _Trans. on Computer-Aided Design, vol. 13, no. 4, pp. 401–424, Apr._ 1994. [2] R. E. Bryant, “Graph–Based Algorithms for Boolean Function Manipulation,” IEEE Trans. on Computers, vol. C–35, no. 8, pp. 677–691, Aug. 1986. [3] A. Biere, A. Cimatti, E. M. Clarke, M. Fujita, and Y. Zhu, “Symbolic Model Checking using SAT procedures instead of BDDs,” in Proc. _36th Design Automat. Conf._ New Orleans, Louisiana: IEEE Computer Society, Jun. 1999, pp. 317–320. [4] P. Bjesse and A. Boralv, “DAG-Aware Circuit Compression For Formal Verification,” in Proc. Int’l Conf. on Computer-Aided Design. San Jose, California: IEEE Computer Society, Nov. 2004. [5] P. Bjesse and K. Claessen, “SAT–Based Verification without State Space Traversal,” in Proc. Formal Methods in Computer-Aided Design, ser. LNCS, vol. 1954. Austin, TX, USA: Springer, 2000. [6] M. L. Case, A. Mishchenko, and R. K. Brayton, “Inductively Finding a Reachable State Space Over-Approximation,” in Proc. Int’l Workshop _on Logic Synthesis, Lake Tahoe, California, May 2006._ [7] F. Lu and K. T. Cheng, “IChecker: An Efficient Checker for Inductive Invariants,” in High-Level Design Validation and Test Workshop, 2006, pp. 176–180. [8] G. Cabodi, S. Nocco, and S. Quer, “Boosting the Role of Inductive Invariants in Model Checking,” in Proc. Design Automation & Test in _Europe Conf._ Nice, France: IEEE Computer Society, Apr. 2007. [9] M. Sheeran, S. Singh, and G. St˚almarck, “Checking Safety Properties Using Induction and SAT Solver,” in Proc. Formal Methods in _Computer-Aided Design, ser. LNCS, W. A. Hunt and S. D. Johnson,_ Eds., vol. 1954. Austin, Texas, USA: Springer, Nov. 2000, pp. 108– 125. [10] P. F. Williams, A. Biere, E. M. Clarke, and A. Gupta, “Combining Decision Diagrams and SAT Procedures for Efficient Symbolic Model Checking,” in Proc. Computer Aided Verification, ser. LNCS, E. A. Emerson and A. P. Sistla, Eds., vol. 2102. Chicago, Illinois: SpringerVerlag, Jul. 2000, pp. 124–138. [11] P. A. Abdulla, P. Bjesse, and N. 
Een, “Symbolic Reachability Analysis based on SAT-Solvers,” in Tools and Algorithms for the Construction _and Analysis of Systems, M. I. S. Susanne Graf, Ed., vol. 1785._ Berlin, Germany: Springer-Verlag, Apr. 2000, pp. 411–425. [12] K. L. McMillan, “Applying SAT Methods in Unbounded Symbolic Model Checking,” in Proc. Computer Aided Verification, ser. LNCS, E. Brinksma and K. G. Larsen, Eds., vol. 2404. Copenhagen, Denmark: Springer, 2002, pp. 250–264. [13] H. J. Kang and L. C. Park, “SAT-based unbounded symbolic model checking,” in Proc. 40th Design Automat. Conf. Anaheim, CA: IEEE Computer Society, 2003, pp. 840–843. ----- TABLE I VERIFICATION DATA ON HARD-TO-SOLVE EXPERIMENTS (− MEANS OVERFLOW, BOLD FONTS ARE USED FOR best RESULTS) Model #PI #FF #NODES Std ITP Method A Method B intel 005.blif 165 170 1776 (-) 26.88 **22.15** intel 006.blif 345 350 3265 (-) **518.82** (-) intel 020.blif 349 354 5735 (-) 757.51 **646.22** intel 021.blif 360 365 5882 (-) **410.02** (-) intel 024.blif 352 357 5710 (-) (-) **437.55** intel 026.blif 486 492 6263 (-) 215.04 **211.10** intel 029.blif 559 564 8816 (-) (-) **347.52** sfeistel-inv.blif 68 296 6837 304.44 **111.51** 555.18 blackjack-inv.blif 5 103 3979 (-) 96.51 **35.58** 31 2 batch 1.blif 24 122 1506 (-) 23.12 **23.00** soap-inv.blif 11 140 3605 (-) 199.15 **172.48** intel 049.blif 136 77 1305 254.29 (-) **193.83** nusmvguidancep9.blif 84 86 1902 359.59 **249.91** 357.99 pdtvisns3p09.blif 21 101 3770 386.14 **315.51** 747.10 pdtvisvsa16a29.blif 32 172 7016 663.12 753.26 **306.39** visprodcellp22.blif 30 63 2771 147.09 216.99 **116.42** cmu.periodic.N.blif 32 34 1555 230.59 75.84 **35.12** nusmv.guidance[∧]2.C.blif 84 86 1920 251.45 **134.33** 220.22 nusmv.guidance[∧]6.C.blif 84 86 1901 347.76 185.84 **149.77** nusmv.guidance[∧]7.C.blif 84 86 2001 **91.26** 140.41 134.88 nusmv.guidance[∧]8.C.blif 84 86 1919 424.77 678.26 **392.52** nusmv.reactor[∧]6.C.blif 74 76 1396 (-) (-) **475.04** vis.coherence[∧]2.E.blif 6 29 1216 111.35 **60.01** 82.17 vis.coherence[∧]3.E.blif 6 29 1214 649.73 60.57 **49.18** industrial1.blif 120 76 1089 (-) **93.43** 289.62 industrial2.blif 119 79 1103 (-) **96.90** 224.40 industrial3.blif 119 78 1100 (-) **255.72** 261.15 industrial4.blif 138 97 2172 (-) 422.33 **326.60** industrial5.blif 113 459 7666 (-) 112.32 **108.67** industrial6.blif 52 187 3600 126.08 **59.31** 75.16 [14] M. K. Ganai, A. Gupta, and P. Ashar, “Efficient SAT-based Unbounded Symbolic Model Checking Using Circuit Cofactoring,” in Proc. Int’l _Conf. on Computer-Aided Design. San Jose, California: IEEE Computer_ Society, Nov. 2004. [15] K. L. McMillan, “Interpolation and SAT-Based Model Checking,” in _Proc. Computer Aided Verification, ser. LNCS, W. A. H. Jr. and_ F. Somenzi, Eds., vol. 2725. Boulder, CO, USA: Springer, 2003, pp. 1–13. [16] K. L. McMillan and R. Jhala, “Interpolation and SAT-Based Model Checking,” in Proc. Computer Aided Verification, ser. LNCS, T. Ball and R. B. Jones, Eds., vol. 3725. Edimburgh, Scotlan, UK: Springer, 2005, pp. 39–51. [17] J. Marques-Silva, “Improvements to the implementation of Interpolant– Based Model Checking,” in Proc. Computer Aided Verification, ser. LNCS, D. Borrione and W. Paul, Eds., vol. 3725. Edimburgh, Scotlan, UK: Springer, 2005, pp. 367–370. [18] G. Cabodi, S. Nocco, M. Murciano, and S. Quer, “Stepping Forward with Interpolants in Unbounded Model Checking,” in Proc. Int’l Conf. _on Computer-Aided Design._ San Jose, California: ACM Press, Nov. 2006. [19] B. Li and F. 
Somenzi, “Efficient Abstraction Refinement in Interpolation-Based Unbounded Model Checking,” in Tools and Algo_rithms for the Construction and Analysis of Systems, vol. 3920, 2006,_ pp. 227–241. [20] A. Biere and T. Jussila, “The Model Checking Competition Web Page, http://fmv.jku.at/hwmcc07/organizers.html,” 2007. [21] ——, “The Model Checking Competition Web Page, http://fmv.jku.at/hwmcc08/organizers.html,” 2008. [22] A. B. Y. Bres, G. Berry and E. M. Sentovich, “State Abstraction Techniques for the Verification of Synchronous Circuits,” in dcc02: _Designing Correct Circuits 2002, Grenoble, France, Apr. 2002._ [23] S. Malik, “Analysis of Cyclic Combinational Circuits,” vol. 13, no. 7, 1994, pp. 950–956. [24] G. B. T. R. Shiple and H. Touati, “Constructive Analysis of Cyclic e e Circuits,” in IDTC’96: International Design and Testing Conference, Paris, France, 1996. [25] Z. Khasidashvili and Z. Hanna, “Sat-based methods for sequential hardware equivalence verification without synchronization,” in BMC’03: _First International Workshop on Bounded Model Checking, Boulder,_ Colorado, Jul. 2003, pp. 593–607. [26] R. K. B. et al., “VIS,” in Proc. Formal Methods in Computer-Aided _Design, ser. LNCS, M. Srivas and A. Camilleri, Eds., vol. 1166._ Palo Alto, California: Springer, Nov. 1996, pp. 248–256. -----
{ "disclaimer": "Notice: The following paper fields have been elided by the publisher: {'abstract', 'references'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1109/ICCAD.2008.4681563?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1109/ICCAD.2008.4681563, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "GREEN", "url": "http://www.cecs.uci.edu/%7Epapers/iccad08/PDFs/Papers/02B.2.pdf" }
2,008
[ "JournalArticle", "Conference" ]
true
2008-11-10T00:00:00
[]
12,619
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Engineering", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/0291fb253d7567a106da907b2fe867cec86be541
[ "Computer Science" ]
0.872326
A directory service for configuring high-performance distributed computations
0291fb253d7567a106da907b2fe867cec86be541
Proceedings. The Sixth IEEE International Symposium on High Performance Distributed Computing (Cat. No.97TB100183)
[ { "authorId": "143767485", "name": "Steven M. Fitzgerald" }, { "authorId": "1698701", "name": "Ian T Foster" }, { "authorId": "8682509", "name": "C. Kesselman" }, { "authorId": "1745570", "name": "G. Laszewski" }, { "authorId": "2581430", "name": "Warren Smith" }, { "authorId": "1720669", "name": "S. Tuecke" } ]
{ "alternate_issns": null, "alternate_names": null, "alternate_urls": null, "id": null, "issn": null, "name": null, "type": null, "url": null }
null
###### A Directory Service for Configuring High-Performance Distributed Computations

Steven Fitzgerald,¹ Ian Foster,² Carl Kesselman,¹ Gregor von Laszewski,² Warren Smith,² Steven Tuecke²

¹ Information Sciences Institute, University of Southern California, Marina del Rey, CA 90292
² Mathematics and Computer Science, Argonne National Laboratory, Argonne, IL 60439
http://www.globus.org/

**Abstract**

_High-performance execution in distributed computing environments often requires careful selection and configuration not only of computers, networks, and other resources but also of the protocols and algorithms used by applications. Selection and configuration in turn require access to accurate, up-to-date information on the structure and state of available resources. Unfortunately, no standard mechanism exists for organizing or accessing such information. Consequently, different tools and applications adopt ad hoc mechanisms, or they compromise their portability and performance by using default configurations. We propose a Metacomputing Directory Service that provides efficient and scalable access to diverse, dynamic, and distributed information about resource structure and state. We define an extensible data model to represent required information and present a scalable, high-performance, distributed implementation. The data representation and application programming interface are adopted from the Lightweight Directory Access Protocol; the data model and implementation are new. We use the Globus distributed computing toolkit to illustrate how this directory service enables the development of more flexible and efficient distributed computing services and applications._

###### 1 Introduction

High-performance distributed computing often requires careful selection and configuration of computers, networks, application protocols, and algorithms. These requirements do not arise in traditional distributed computing, where configuration problems can typically be avoided by the use of standard default protocols, interfaces, and so on. The situation is also quite different in traditional high-performance computing, where systems are usually homogeneous and hence can be configured manually. But in high-performance distributed computing, neither defaults nor manual configuration is acceptable. Defaults often do not result in acceptable performance, and manual configuration requires low-level knowledge of remote systems that an average programmer does not possess. We need an _information-rich_ approach to configuration in which decisions are made (whether at compile-time, link-time, or run-time [19]) based upon information about the structure and state of the system on which a program is to run.

An example from the I-WAY networking experiment illustrates some of the difficulties associated with the configuration of high-performance distributed systems. The I-WAY was composed of massively parallel computers, workstations, archival storage systems, and visualization devices [6]. These resources were interconnected by both the Internet and a dedicated 155 Mb/sec IP-over-ATM network. In this environment, applications might run on a single or multiple parallel computers, of the same or different types. An optimal communication configuration for a particular situation might use vendor-optimized communication protocols within a computer but TCP/IP between computers over an ATM network (if available). A significant amount of information must be available to select such configurations, for example:

- What are the network interfaces (i.e., IP addresses) for the ATM network and the Internet?
- What is the raw bandwidth of the ATM network and the Internet, and which [...]?
An _illustrate how this directory service enables the development_ optimal communication configuration for a particular situa- `of more flexible and efficient distributed computing services` tion might use vendor-optimized communication protocols _and applications._ within a computer but TCP/IP between computers over an **ATM** network (if available). A significant amount of infor- mation must be available to select such configurations, for ###### 1 Introduction example: High-performance distributed computing often requires **_0_** What are the network interfaces (i.e., IP addresses) for careful selection and configuration of computers, networks, the ATM network and Internet? application protocols, and algorithms. These requirements do not arise in traditional distributed computing, where con- What is the raw bandwidth of the **ATM** network and figuration problems can typically be avoided by the use of the Internet, and which . ###### I ----- ###### DISCLAIMER This report was prepared as an account of work sponsored by an agency of the United States Government. Neither the United States Government nor any agency thereof, nor any of their employees, make any warranty, express or implied, or assumes any legal liabili- ty or responsibility for the accuracy, completeness, or usefulness of any information, appa- ratus, product, or process disclosed, or represents tbat its use would not infringe privately owned rights. Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise does not necessarily **comtitute or** imply its endorsement, recommendation, or favoring by the United States Government or any agency thereof. The views and opinions of authors expressed herein do not necessar- ily state or reflect those of the United States Government or any agency thereof. ----- ##### Portions of this document may be iiiegiile ###### in electronic image products. Images are produced from the best available original document. ----- **_0_** Is the ATM network currently available? **_0_** a demonstration of the use of the information provided by MDS to guide resource and communication config- **_0_** Between which pairs of nodes can we use vendor pro- uration within a distributed computing toolkit. tocols to access fast internal networks? The rest of this article is organized as follows. In Sec- **_0_** Between which pairs of nodes must we use TCP/IP? tion 2, we explain the requirements that a distributed com- puting information infrastructure must satisfy, and we pro- Additional information is required if we use a resource lo- pose MDS in response to these requirements. We then de- cation service to select an "optimal" set of resources from scribe the representation (Section 3), the data model (Sec- among the machines available on the I-WAY at a given time. tion 4), and the implementation (Section **_5) of MDS. In_** In our experience, such configuration decisions are not Section 6, we demonstrate how MDS information is used difficult if the right information is available. Until now, within Globus. We conclude in Section 7 with suggestions however, this information has not been easily available, and for future research efforts. this lack of access has hindered application optimization. 
Furthermore, making this information available in a useful fashion is a nontrivial problem: the information required to configure high-performance distributed systems is diverse in scope, dynamic in value, distributed across the network, and detailed in nature.

In this article, we propose an approach to the design of high-performance distributed systems that addresses this need for efficient and scalable access to diverse, dynamic, and distributed information about the structure and state of resources. The core of this approach is the definition and implementation of a Metacomputing Directory Service (MDS) that provides a uniform interface to diverse information sources. We show how a simple data representation and application programming interface (API) based on the Lightweight Directory Access Protocol (LDAP) meet requirements for uniformity, extensibility, and distributed maintenance. We introduce a data model suitable for distributed computing applications and show how this model is able to represent computers and networks of interest. We also present novel implementation techniques for this service that address the unique requirements of high-performance applications. Finally, we use examples from the Globus distributed computing toolkit [9] to show how MDS data can be used to guide configuration decisions in realistic settings. We expect these techniques to be equally useful in other systems that support computing in distributed environments, such as Legion [12], NEOS [5], NetSolve [4], Condor [16], Nimrod [1], PRM [18], AppLeS [2], and heterogeneous implementations of MPI [13].

The principal contributions of this article are:

- a new architecture for high-performance distributed computing systems, based upon an information service called the Metacomputing Directory Service;
- a design for this directory service, addressing issues of data representation, data model, and implementation;
- a data model able to represent the network structures commonly used by distributed computing systems, including various types of supercomputers; and
- a demonstration of the use of the information provided by MDS to guide resource and communication configuration within a distributed computing toolkit.

The rest of this article is organized as follows. In Section 2, we explain the requirements that a distributed computing information infrastructure must satisfy, and we propose MDS in response to these requirements. We then describe the representation (Section 3), the data model (Section 4), and the implementation (Section 5) of MDS. In Section 6, we demonstrate how MDS information is used within Globus. We conclude in Section 7 with suggestions for future research efforts.

## 2 Designing a Metacomputing Directory Service

The problem of organizing and providing access to information is a familiar one in computer science, and there are many potential approaches to the problem, ranging from database systems to the Simple Network Management Protocol (SNMP). The appropriate solution depends on the ways in which the information is produced, maintained, accessed, and used.

### 2.1 Requirements

Following are the requirements that shaped our design of an information infrastructure for distributed computing applications. Some of these requirements can be expressed in quantitative terms (e.g., scalability, performance); others are more subjective (e.g., expressiveness, deployability).

**Performance.** The applications of interest to us frequently operate on a large scale (e.g., hundreds of processors) and have demanding performance requirements. Hence, an information infrastructure must permit rapid access to frequently used configuration information. It is not acceptable to contact a server for every item: caching is required.

**Scalability and cost.** The infrastructure must scale to large numbers of components and permit concurrent access by many entities. At the same time, its organization must permit easy discovery of information. The human and resource costs (CPU cycles, disk space, network bandwidth) of creating and maintaining information must also be low, both at individual sites and in total.
**Uniformity.** Our goal is to simplify the development of tools and applications that use data to guide configuration decisions. We require a uniform data model as well as an application programming interface (API) for common operations on the data represented via that model. One aspect of this uniformity is a standard representation for data about common resources, such as processors and networks.

**Expressiveness.** We require a data model rich enough to represent relevant structure within distributed computing systems. A particular challenge is representing characteristics that span organizations, for example network bandwidth between sites.

**Extensibility.** Any data model that we define will be incomplete. Hence, the ability to incorporate additional information is important. For example, an application can use this facility to record specific information about its behavior (observed bandwidth, memory requirements) for use in subsequent runs.

**Multiple information sources.** The information that we require may be generated by many different sources. Consequently, an information infrastructure must integrate information from multiple sources.

**Dynamic data.** Some of the data required by applications is highly dynamic: for example, network availability or load. An information infrastructure must be able to make this data available in a timely fashion.

**Flexible access.** We require the ability to both read and update data contained within the information infrastructure. Some form of search capability is also required, to assist in locating stored data.
**Security.** It is important to control who is allowed to update configuration data. Some sites will also want to control access.

**Deployability.** An information infrastructure is useful only if it is broadly deployed. In the current case, we require techniques that can be installed and maintained easily at many sites.

**Decentralized maintenance.** It must be possible to delegate the task of creating and maintaining information about resources to the sites at which resources are located. This delegation is important for both scalability and security reasons.

### 2.2 Approaches

It is instructive to review, with respect to these requirements, the various (incomplete) approaches to information infrastructure that have been used by distributed computing systems.

Operating system commands such as uname and sysinfo can provide important information about a particular machine but do not support remote access. SNMP [21] and the Network Information Service (NIS) both permit remote access but are defined within the context of the IP protocol suite, which can add significant overhead to a high-performance computing environment. Furthermore, SNMP does not define an API, thus preventing its use as a component within other software architectures.

High-performance computing systems such as PVM [11], p4 [3], and MPICH [13] provide rapid access to configuration data by placing this data (e.g., machine names, network interfaces) into files maintained by the programmer, called "hostfiles." However, lack of support for remote access means that hostfiles must be replicated at each host, complicating maintenance and dynamic update.

The Domain Name Service (DNS) provides a highly distributed, scalable service for resolving Internet addresses to values (e.g., IP addresses) but is not, in general, extensible. Furthermore, its update strategies are designed to support values that change relatively rarely.

The X.500 standard [14, 20] defines a directory service that can be used to provide extensible distributed directory services within a wide area environment. A directory service is a service that provides read-optimized access to general data about entities, such as people, corporations, and computers. X.500 provides a framework that could, in principle, be used to organize the information that is of interest to us. However, it is complex and requires ISO protocols and the heavyweight ASN.1 encodings of data. For these and other reasons, it is not widely used.

The Lightweight Directory Access Protocol [24] is a streamlined version of the X.500 directory service. It removes the requirement for an ISO protocol stack, defining a standard wire protocol based on the IP protocol suite. It also simplifies the data encoding and command set of X.500 and defines a standard API for directory access [15]. LDAP is seeing wide-scale deployment as the directory service of choice for the World Wide Web. Disadvantages include its only moderate performance (see Section 5), limited access to external data sources, and rigid approach to distributing data across servers.

Reviewing these various systems, we see that each is in some way incomplete, failing to address the types of information needed to build high-performance distributed computing systems, being too slow, or not defining an API to enable uniform access to the service. For these reasons, we have defined our own metacomputing information infrastructure that integrates existing systems while providing a uniform and extensible data model, support for multiple information service providers, and a uniform API.

### 2.3 A Metacomputing Directory Service

Our analysis of requirements and existing systems leads us to define what we call the Metacomputing Directory Service (MDS).
This system consists of three distinct components:

1. **Representation and data access:** The directory structure, data representation, and API defined by LDAP.
2. **Data model:** A data model that is able to encode the types of resources found in high-performance distributed computing systems.
3. **Implementation:** A set of implementation strategies designed to meet requirements for performance, multiple data sources, and scalability.

We provide more details on each of these components in the following sections.

Figure 1 illustrates the structure of MDS and its role in a high-performance distributed computing system. An application running in a distributed computing environment can access information about system structure and state through a uniform API. This information is obtained through the MDS client library, which may access a variety of services and data sources when servicing a query.

_Figure 1. Overview of the architecture of the Metacomputing Directory Service_

## 3 Representation

The MDS design adopts the data representations and API defined by the LDAP directory service. This choice is driven by several considerations. Not only is the LDAP data representation extensible and flexible, but LDAP is beginning to play a significant role in Web-based systems. Hence, we can expect wide deployment of LDAP information services, familiarity with LDAP data formats and programming, and the existence of LDAP directories with useful information. Note that the use of LDAP representations and API does not constrain us to use standard LDAP implementations. As we explain in Section 5, the requirements of high-performance distributed computing applications require alternative implementation techniques. However, LDAP provides an attractive interface on which we can base our implementation. LDAP also provides a mechanism to restrict the types of operations that can be performed on data, which helps to address our security requirements.

In the rest of this section, we talk about the "MDS representation," although this representation comes directly from LDAP (which in turn "borrows" its representation from X.500). In this representation, related information is organized into well-defined collections, called entries. MDS contains many entries, each representing an instance of some type of object, such as an organization, person, network, or computer. Information about an entry is represented by one or more attributes, each consisting of a name and a corresponding value. The attributes that are associated with a particular entry are determined by the type of object the entry represents. This type information is encoded in MDS by associating an object class with each entry. We now describe how entries are named and then how attributes are associated with objects.

### 3.1 Naming MDS Entries

Each MDS entry is identified by a unique name, called its _distinguished name_. To simplify the process of locating an MDS entry, entries are organized to form a hierarchical, tree-structured name space called a directory information tree (DIT). The distinguished name for an entry is constructed by specifying the entries on the path from the DIT root to the entry being named.

Each component of the path that forms the distinguished name must identify a specific DIT entry. To enable this, we require that, for any DIT entry, the children of that entry must have at least one attribute, specified a priori, whose value distinguishes it from its siblings. (The X.500 representation actually allows more than one attribute to be used to disambiguate names.) Any entry can then be uniquely named by the list of attribute names and values that identify its ancestors up to the root of the DIT. For example, consider the following MDS distinguished name:

< hn=dark.mcs.anl.gov, ou=MCS, o=Argonne National Laboratory, o=Globus, c=US >

The components of the distinguished name are listed in _little endian_ order, with the component corresponding to the root of the DIT listed last. Within a distinguished name, abbreviated attribute names are typically used. Thus, in this example, the names of the distinguishing attributes are: host name (HN), organizational unit (OU), organization (O), and country (C). Thus, a country entry is at the root of the DIT, while host entries are located beneath the organizational unit level of the DIT (see Figure 2). In addition to the conventional set of country and organizational entries (US, ANL, USC, etc.), we incorporate an entry for a pseudo-organization named "Globus," so that the distinguished names that we define do not clash with those defined for other purposes.
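To make the little-endian naming convention concrete, the short sketch below models a distinguished name as an ordered list of (attribute, value) pairs and renders it in the textual form used above. This is a stand-alone illustration written for this discussion, not the actual MDS or LDAP client API; the entry data comes from the example in the text.

```python
# Minimal sketch: building and rendering an MDS-style distinguished name.
# Components are listed little-endian: the entry itself first, DIT root last.
dn_components = [
    ("hn", "dark.mcs.anl.gov"),            # host name
    ("ou", "MCS"),                          # organizational unit
    ("o", "Argonne National Laboratory"),   # organization
    ("o", "Globus"),                        # pseudo-organization
    ("c", "US"),                            # country (DIT root)
]

def render_dn(components):
    """Render a distinguished name in the textual form used by MDS."""
    return ", ".join(f"{attr}={value}" for attr, value in components)

def parent_dn(components):
    """The parent entry's DN is obtained by dropping the leading component."""
    return components[1:]

print(render_dn(dn_components))
# -> hn=dark.mcs.anl.gov, ou=MCS, o=Argonne National Laboratory, o=Globus, c=US
print(render_dn(parent_dn(dn_components)))
# -> ou=MCS, o=Argonne National Laboratory, o=Globus, c=US
```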
### 3.2 Object Classes

Each DIT entry has a user-defined type, called its _object class_. (LDAP defines a set of standard object class definitions, which can be extended for a particular site.) The object class of an entry defines which attributes are associated with that entry and what type of values those attributes may contain. For example, Figure 3 shows the definition of the object classes GlobusHost and GlobusResource, and Figure 4 shows the values associated with a particular host. The object class definition consists of three parts: a parent class, a list of required attributes, and a list of optional attributes.

The SUBCLASS section of the object class definition enables a simple inheritance mechanism, allowing an object class to be defined in terms of an extension of an existing object class. The MUST CONTAIN and MAY CONTAIN sections specify the required and optional attributes found in an entry of this object class. Following each attribute name is the type of the attribute value. While the set of attribute types is extensible, a core set has been defined, including case-insensitive strings (cis) and distinguished names (dn).

In Figure 3, GlobusHost inherits from the object class GlobusResource. This means that a GlobusHost entry (i.e., an entry of type GlobusHost) contains all of the attributes required by the GlobusResource class, as well as the attributes defined within its own MUST CONTAIN section. In Figure 4, the administrator attribute is inherited from GlobusResource. A GlobusHost entry may also optionally contain the attributes from both its parent's and its own MAY CONTAIN section.

Notice that the administrator attribute in Figure 4 contains a distinguished name. This distinguished name acts as a pointer, linking the host entry to the person entry representing the administrator. One must be careful not to confuse this link, which is part of an entry, with the relationships represented by the DIT, which are not entry attributes. The DIT should be thought of as a separate structure used to organize an arbitrary collection of entries and, in particular, to enable the distribution of these entries over multiple physical sites. Using distinguished names as attribute values enables one to construct more complex relationships than the trees found in the DIT. The ability to define more complex structures is essential for our purposes, since many distributed computing structures are most naturally represented as graphs.
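The inheritance rule just described (an entry must carry the MUST CONTAIN attributes of its object class and of all ancestor classes) can be sketched as a small validator. The checking logic below is ours, written only for illustration; the class and attribute names are taken from Figures 3 and 4.

```python
# Sketch: object classes with MUST CONTAIN / MAY CONTAIN inheritance.
OBJECT_CLASSES = {
    "GlobusTop":      {"parent": None, "must": set(), "may": set()},
    "GlobusResource": {"parent": "GlobusTop",
                       "must": {"administrator"},
                       "may":  {"manager", "provider", "technician",
                                "description", "documentation"}},
    "GlobusHost":     {"parent": "GlobusResource",
                       "must": {"hostName", "type", "vendor", "model",
                                "OStype", "OSversion"},
                       "may":  {"networkNode", "totalMemory", "totalSwap",
                                "dataCache", "instructionCache"}},
}

def required_attributes(object_class):
    """Collect MUST CONTAIN attributes along the SUBCLASS OF chain."""
    required = set()
    while object_class is not None:
        required |= OBJECT_CLASSES[object_class]["must"]
        object_class = OBJECT_CLASSES[object_class]["parent"]
    return required

entry = {  # the host entry of Figure 4, as a plain dict
    "hostName": "dark.mcs.anl.gov", "type": "sparc", "vendor": "Sun",
    "model": "SPARCstation-10", "OStype": "sunos", "OSversion": "5.5.1",
    "administrator": "cn=John Smith, ou=MCS, o=Argonne National Laboratory,"
                     " o=Globus, c=US",  # a dn-valued attribute: a pointer
}

missing = required_attributes("GlobusHost") - set(entry)
print("valid GlobusHost entry" if not missing else f"missing: {missing}")
```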
## 4 Data Model

To use the MDS representation for a particular purpose, we must define a data model in which information of interest can be maintained. This data model must specify both a DIT hierarchy and the object classes used to define each type of entry.

In its upper levels, the DIT used by MDS (see Figure 2) is typical for LDAP directory structures, looking similar to the organization used for multinational corporations. The root node is of object class country, under which we place first the _organization_ entry representing Globus and then the _organization_ and _organizational unit_ (i.e., division or department) entries. Entries representing people and computers are placed under the appropriate organizational units.

_Figure 2. A subset of the DIT defined by MDS, showing the organizational nodes for Globus, ANL, and USC; the organizational units ISI and MCS; and a number of people, hosts, and networks_

The representation of computers and networks is central to the effective use of MDS, and so we focus on this issue in this section.

### 4.1 Representing Networks and Computers

We adopt the framework for representing networks introduced in RFC 1609 [17] as the starting point for the representation used in MDS. However, the RFC 1609 framework provides a network-centric view in which computers are accessible only via the networks to which they are connected. We require a representation of networks and computers that allows us to answer questions such as:

- Are computers A and B on the same network?
- What is the latency between computers C and D?
- What protocols are available between computers E and F?

In answering these questions, we often require access to information about networks, but questions are posed most often from the perspective of the computational resource. That is, they are computer-centric questions. Our data model reflects this perspective.

A high-level view of the DIT structure used in MDS is shown in Figure 2. As indicated in this figure, both people and hosts are immediate children of the organizations in which they are located. For example, the distinguished name

< hn=dark.mcs.anl.gov, ou=MCS, o=Argonne National Laboratory, o=Globus, c=US >

identifies a computer administered by the Mathematics and Computer Science (MCS) Division at Argonne National Laboratory.

Communication networks are also explicitly represented in the DIT as children of an organization. For example, the distinguished name

< nn=mcs-lan, ou=MCS, o=Argonne National Laboratory, o=Globus, c=US >

represents the local area network managed by MCS. This distinguished name identifies an instance of a GlobusNetwork object. The attribute values of a GlobusNetwork object provide information about the _physical_ network link, such as the link protocol (e.g., ATM or Ethernet), network topology (e.g., bus or ring type), and physical media (e.g., copper or fiber). As we shall soon see, logical information, such as the network protocol being used, is not specified in the GlobusNetwork object but is associated with a GlobusNetworkImage object. Networks that span organizations can be represented by placing the GlobusNetwork object higher in the DIT.

Networks and hosts are related to one another via GlobusNetworkInterface objects: hosts contain network interfaces, and network interfaces are attached to networks. A network interface object represents the physical characteristics of a network interface (such as interface speed) and the hardware network address (e.g., the 48-bit Ethernet address in the case of Ethernet). Network interfaces appear under hosts in the DIT, while a network interface is associated with a network via an attribute whose value is a distinguished name pointing to a GlobusNetwork object. A reverse link exists from the GlobusNetwork object back to the interface.

To illustrate the relationship between GlobusHost, GlobusNetwork, and GlobusNetworkInterface objects, we consider the configuration shown in Figure 5. This configuration consists of an IBM SP parallel computer and two workstations, all associated with MCS. The SP has two networks: an internal high-speed switch and an Ethernet; the workstations are connected only to an Ethernet. Although the SP Ethernet and the workstation Ethernet are connected via a router, we choose to represent them as a single network.

_Figure 5. A configuration comprising two networks and N+2 computers_
An alternative, higher-fidelity MDS representation would capture the fact that there are two interconnected Ethernet networks.

```
GlobusHost OBJECT CLASS
  SUBCLASS OF GlobusResource
  MUST CONTAIN {
    hostName  :: cis,
    type      :: cis,
    vendor    :: cis,
    model     :: cis,
    OStype    :: cis,
    OSversion :: cis
  }
  MAY CONTAIN {
    networkNode      :: dn,
    totalMemory      :: cis,
    totalSwap        :: cis,
    dataCache        :: cis,
    instructionCache :: cis
  }

GlobusResource OBJECT CLASS
  SUBCLASS OF GlobusTop
  MUST CONTAIN {
    administrator :: dn
  }
  MAY CONTAIN {
    manager       :: dn,
    provider      :: dn,
    technician    :: dn,
    description   :: cis,
    documentation :: cis
  }
```

_Figure 3. Simplified versions of the MDS object classes GlobusHost and GlobusResource_

```
dn: hn=dark.mcs.anl.gov, ou=MCS, o=Argonne National Laboratory,
    o=Globus, c=US
objectclass: GlobusHost
objectclass: GlobusResource
administrator: cn=John Smith, ou=MCS, o=Argonne National Laboratory,
    o=Globus, c=US
hostName: dark.mcs.anl.gov
type: sparc
vendor: Sun
model: SPARCstation-10
OStype: sunos
OSversion: 5.5.1
```

_Figure 4. Sample data representation for an MDS computer_

The MDS representation for Figure 5 is shown in Figure 6. Each host and network in the configuration appears in the DIT directly under the entry representing MCS at Argonne National Laboratory. Note that individual SP nodes are children of MCS. This somewhat unexpected representation is a consequence of the SP architecture: each node is a fully featured workstation, potentially allowing login. Thus, the MDS representation captures the dual nature of the SP as a parallel computer (via the switch network object) and as a collection of workstations.

_Figure 6. The MDS representation of the configuration depicted in Figure 5, showing host (HN), network (NN), and network interface (NIN) objects. The dashed lines correspond to "pointers" represented by distinguished name attributes_

As discussed above, the GlobusNetworkInterface objects are located in the DIT under the GlobusHost objects. Note that a GlobusHost can have more than one network interface entry below it. Each entry corresponds to a different physical network connection. In the case of an SP, each node has at least two network interfaces: one to the high-speed switch and one to an Ethernet. Finally, we see that distinguished names are used to complete the representation, linking the network interface and network objects together.
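Because interfaces carry dn-valued pointers to networks, computer-centric questions such as "are hosts A and B on the same network?" reduce to following those pointers. The sketch below does this over an in-memory stand-in for the directory; the entry names are invented for illustration, and the traversal helpers are ours rather than an MDS API.

```python
# Sketch: answering "are two hosts on the same network?" by following
# dn-valued pointers from host -> network interface -> network.
# The directory is modeled as a dict keyed by (shortened) distinguished name.
directory = {
    "hn=spnode1.mcs.anl.gov": {"interfaces": ["nin=spnode1-eth", "nin=spnode1-sw"]},
    "hn=dark.mcs.anl.gov":    {"interfaces": ["nin=dark-eth"]},
    "nin=spnode1-eth": {"network": "nn=mcs-lan"},
    "nin=spnode1-sw":  {"network": "nn=sp-switch"},
    "nin=dark-eth":    {"network": "nn=mcs-lan"},
    "nn=mcs-lan":   {"linkProtocol": "Ethernet"},
    "nn=sp-switch": {"linkProtocol": "SP-switch"},
}

def networks_of(host_dn):
    """All networks reachable through the host's interface entries."""
    return {directory[nin]["network"]
            for nin in directory[host_dn]["interfaces"]}

def common_networks(host_a, host_b):
    return networks_of(host_a) & networks_of(host_b)

print(common_networks("hn=spnode1.mcs.anl.gov", "hn=dark.mcs.anl.gov"))
# -> {'nn=mcs-lan'}: the two hosts share the MCS Ethernet.
```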
### 4.2 Logical Views and Images

At this point, we have described the representation of a physical network: essentially link-level aspects of the network and characteristics of network interface cards and the hosts they plug into. However, a physical network may support several "logical" views, and we may need to associate additional information with these logical views. For example, a single network might be accessible via several different protocol stacks: IP, Novell IPX, or vendor-provided libraries such as MPI. Associated with each of these protocols can be distinct network interface and performance information. Additionally, a "partition" might be created containing a subset of available computers; scheduling information can be associated with this object.

The RFC 1609 framework introduces the valuable concept of _images_ as a mechanism for representing multiple logical views of the same physical network. We apply the same concept in our data model. Where physical networks are represented by GlobusHost, GlobusNetwork, and GlobusNetworkInterface object classes, network images are represented by GlobusHostImage, GlobusNetworkImage, and GlobusNetworkInterfaceImage object classes. Each image object class contains new information associated with the logical view, as well as a distinguished name pointing to its relevant physical object. In addition, a physical object has distinguished name pointers to all of the images that refer to it.

For example, one may use both IP and IPX protocols over a single Ethernet interface card. We would represent this in MDS by creating two GlobusNetworkInterfaceImage objects. One image object would represent the IP network and contain the IP address of the interface, as well as a pointer back to the object class representing the Ethernet card. The second image object would contain the IPX address, as well as a distinguished name pointing back to the same entry for the Ethernet card. The GlobusNetworkInterface object would include the distinguished names of both interface images.

The structure of network images parallels that of the corresponding physical networks, with the exception that not all network interfaces attached to a host need appear in an image. To see why, consider the case of the IBM SP. One might construct a network image to represent the "parallel computer" view of the machine in which IBM's proprietary message-passing library is used for communication. Since this protocol cannot be used over the Ethernet, this image of the network will not contain images representing the Ethernet card. Note that we can also produce a network image of the SP representing the use of IP protocols. This view may include images of both the switch and Ethernet network interfaces.
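As a concrete rendering of the IP/IPX example above, the sketch below builds two interface-image entries that point back to one physical interface entry, with reverse links from the physical entry to both images. The entry names, addresses, and field names are hypothetical, chosen only to illustrate the pointer structure.

```python
# Sketch: one physical Ethernet interface viewed through two protocol images.
physical = {
    "dn": "nin=dark-eth0",
    "class": "GlobusNetworkInterface",
    "images": [],            # reverse pointers, filled in below
}
ip_image = {
    "dn": "niin=dark-eth0-ip",
    "class": "GlobusNetworkInterfaceImage",
    "address": "140.221.9.23",            # hypothetical IP address
    "physicalInterface": physical["dn"],  # dn pointer to the Ethernet card
}
ipx_image = {
    "dn": "niin=dark-eth0-ipx",
    "class": "GlobusNetworkInterfaceImage",
    "address": "00000001:0060973B1EFD",   # hypothetical IPX address
    "physicalInterface": physical["dn"],  # same physical entry
}
# The physical object keeps distinguished names of all images referring to it.
physical["images"] = [ip_image["dn"], ipx_image["dn"]]

for image in (ip_image, ipx_image):
    print(f'{image["dn"]} -> {image["physicalInterface"]} ({image["address"]})')
```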
To see how MDS information can be used, let us stricting all attributes to the same information provider revisit the questions posed in Section **1 with respect to the** complicates the design of the MDS data-model. For use of multiple computers on the I-WAY example, the IP address associated with a network in- terface image can be provided by a system call, while **_What are the nenvork intelfaces (i.e., IP addresses)_** the network bandwidth available through that interface **_for the ATM network and Internet?_** A host’s IP ad- is provided by a service such as the Network Weather dress on the ATM network can be found by look- Service ( N W S ) [23]. ing for a `GlobusBetaorkInterface that is point-` ing to a `GlobusBetwork with a link protocol at-` **0** **Clienffserver architecture.** The LDAP implementa- tribute value of ATM. From the interface, we find the tion requires at least one round-trip network commu- `GlobusNetworkInterf aceImage representing an IP` nication for each LDAP access. Frequent MDS ac- network, and the IP address will be stored as an attribute cesses thus becomes prohibitively expensive. We need in this object. a mechanism by which MDS data can be cached locally for a timely response. **_What is the raw bandwidth of the ATM network and_** **_the Internet, and which is higher? Is the ATM network_** **0** **Scope of Data. The LDAP implementation assumes** **_currently available?_** The raw bandwidth of the ATM that any piece of information may be used from any network will be stored in the I-WAY GlobusNetwork point in the network (within the constraints of access object. Information about the availability of the ATM control). However, a more efficient implementation of network can also be maintained in this object. attribute update can be obtained if one can limit the locations from which attribute values can be accessed. **_Between which pairs of_** **_nodes can we use vendorproto-_** The introduction of scope helps to determine which **_cols to access fast internal networks? Between which_** information must be propagated to which information **_pairs of_** **_nodes must we use_** `TCP/IP? Two nodes can` providers, and when information can be safely cached. communicate using a vendor protocol if they both point to GlobusHostImage objects that belong to the same Note that these drawbacks all relate to the LDAP im- `GlobusNetworkImage object.` plementation, not its API. Indeed, we can adopt the LDAP API for MDS without modification. Furthermore, for those Note that the definition of the MDS representation, API, and DIT subtrees that contain information that is not adversely data model means that this information can be obtained via affected by the above limitations, we can pass the API calls a single mechanism, regardless of the computers on which straight through to an existing LDAP implementation. In an application actually runs. general, however, MDS needs a specialized implementation of the LDAP API to meet the requirements for high perfor- ###### 5 Implementation mance and multiple information providers. The most basic difference between our MDS implemen- We have discussed how information is represented in tation and standard LDAP implementations is that we allow MDS, and we have shown how this information can be used information providers to be specified on a per attribute ba- to answer questions about system configuration. We now sis. 
## 5 Implementation

We have discussed how information is represented in MDS, and we have shown how this information can be used to answer questions about system configuration. We now turn our attention to the MDS implementation. Since our data model has been defined completely within the LDAP framework, we could in principle adopt the standard LDAP implementation. This implementation uses a TCP-based wire protocol and a distributed collection of servers, where each server is responsible for all the entries located within a complete subtree of the DIT. While this approach is suitable for a loosely coupled, distributed environment, it has three significant drawbacks in a high-performance environment:

**Single information provider.** The LDAP implementation assumes that all information within a DIT subtree is provided by a single information provider. (While some LDAP servers allow alternative "backend" mechanisms for storing entries, the same backend must be used for all entries in the DIT subtree.) However, restricting all attributes to the same information provider complicates the design of the MDS data model. For example, the IP address associated with a network interface image can be provided by a system call, while the network bandwidth available through that interface is provided by a service such as the Network Weather Service (NWS) [23].

**Client/server architecture.** The LDAP implementation requires at least one round-trip network communication for each LDAP access. Frequent MDS accesses thus become prohibitively expensive. We need a mechanism by which MDS data can be cached locally for a timely response.

**Scope of data.** The LDAP implementation assumes that any piece of information may be used from any point in the network (within the constraints of access control). However, a more efficient implementation of attribute update can be obtained if one can limit the locations from which attribute values can be accessed. The introduction of scope helps to determine which information must be propagated to which information providers, and when information can be safely cached.

Note that these drawbacks all relate to the LDAP implementation, not its API. Indeed, we can adopt the LDAP API for MDS without modification. Furthermore, for those DIT subtrees that contain information that is not adversely affected by the above limitations, we can pass the API calls straight through to an existing LDAP implementation. In general, however, MDS needs a specialized implementation of the LDAP API to meet the requirements for high performance and multiple information providers.

The most basic difference between our MDS implementation and standard LDAP implementations is that we allow information providers to be specified on a per-attribute basis. Referring to the above example, we can provide the IP address of an interface via SNMP, the current available bandwidth via NWS, and the name of the machine into which the interface card is connected. Additionally, these providers can store information into MDS on a periodic basis, thus allowing refreshing of dynamic information. The specification of which protocol to use for each entry attribute is stored in an object class metadata entry. Metadata entries are stored in MDS and accessed via the LDAP protocol.

In addition to specifying the access protocol for an attribute, the MDS object class metadata also contains a time-to-live (TTL) for attribute values and the update scope of the attribute. The TTL data is used to enable caching; a TTL of 0 indicates that the attribute value cannot be cached, while a TTL of -1 indicates that the data is constant. Positive TTL values determine the amount of time that the attribute value is allowed to be provided out of the cache before refreshing.

The update scope of an attribute limits the readers of an updated attribute value. Our initial implementation considers three update scopes: process, computation, and global. Process scope attributes are accessible only within the same process as the writer, whereas computation scope attributes can be accessed by any process within a single computation, and global scope attributes can be accessed from any node or process on a network.
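The TTL semantics described above (0 means never cache, -1 means constant, positive values give seconds of validity) fit in a few lines. The cache below is our own illustrative sketch, not MDS code; the attribute values it fetches are placeholders.

```python
import time

# Sketch: per-attribute caching driven by MDS-style time-to-live metadata.
# TTL of 0 means "never cache", -1 means "constant", and positive values
# give the number of seconds a cached value may be served before refreshing.
class AttributeCache:
    def __init__(self):
        self._cache = {}  # (dn, attribute) -> (value, fetch_time)

    def get(self, dn, attribute, ttl, fetch):
        """Return a cached value if still valid, else call fetch()."""
        key = (dn, attribute)
        if ttl != 0 and key in self._cache:
            value, fetched_at = self._cache[key]
            if ttl == -1 or time.time() - fetched_at < ttl:
                return value
        value = fetch()                 # contact the information provider
        self._cache[key] = (value, time.time())
        return value

cache = AttributeCache()
dn = "nin=dark-eth0-ip"
# A constant attribute (ttl = -1) is fetched once and then served forever:
print(cache.get(dn, "ipAddress", -1, fetch=lambda: "140.221.9.23"))
# A highly dynamic attribute (ttl = 0) bypasses the cache on every access:
print(cache.get(dn, "currentBandwidth", 0, fetch=lambda: "87 Mb/s"))
```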
## 6 MDS Applications in Globus

We review briefly some of the ways in which MDS information can be used in high-performance distributed computing. We focus on applications within Globus, an infrastructure toolkit providing a suite of low-level mechanisms designed to be used to implement a range of higher-level services [9]. These mechanisms include communication, authentication, resource location, resource allocation, process management, and (in the form of MDS) information infrastructure.

The Globus toolkit is designed with the configuration problem in mind. It attempts to provide, for each of its components, interfaces that allow higher-level services to manage how low-level mechanisms are applied. As an example, we consider the problem referred to earlier of selecting network interfaces and communication protocols when executing communication code within a heterogeneous network. The Globus communication module (a library called Nexus [10]) allows a user to specify an application's communication operations by using a single notation, regardless of the target platform: either the Nexus API or some library or language layered on top of that API. At run-time, the Nexus implementation configures a communication structure for the application, selecting for each communication link (a Nexus construct) the communication method that is to be used for communications over that link [7]. In making this selection for a particular pair of processors, Nexus first uses MDS information to determine which low-level mechanisms are available between the processors. Then, it selects from among these mechanisms, currently on the basis of built-in rules (e.g., "ATM is better than Internet"); rules based on dynamic information ("use ATM if current load is low") or programmer-specified preferences ("always use Internet because I believe it is more reliable") can also be supported in principle. The result is that application source code can run unchanged in many different environments, selecting appropriate mechanisms in each case.

These method-selection mechanisms were used in the I-WAY testbed to permit applications to run on diverse heterogeneous virtual machines. For example, on a virtual machine connecting IBM SP and SGI Challenge computers with both ATM and Internet networks, Nexus used three different protocols (IBM proprietary MPL on the SP, shared memory on the Challenge, and TCP/IP or AAL5 between computers) and selected either ATM or Internet network interfaces, depending on network status [8].

Another application for MDS information that we are investigating is resource location [22]. A "resource broker" is basically a process that supports specialized searches against MDS information. Rather than incorporate these search capabilities in MDS servers, we plan to construct resource brokers that construct and maintain the necessary indexes, querying MDS periodically to obtain up-to-date information.
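Returning to the Nexus method-selection example, a rule-driven choice of communication method is easy to picture in code. The sketch below ranks the methods available between two processors using a built-in preference order plus one dynamic rule; the method names and ranking are illustrative assumptions, not the actual Nexus implementation.

```python
# Sketch: Nexus-style selection of a communication method for a link.
# Built-in preference: vendor protocols beat ATM, which beats the Internet.
PREFERENCE = {"MPL": 0, "shared-memory": 0, "AAL5/ATM": 1, "TCP/IP": 2}

def select_method(available, atm_load=None):
    """Pick the most preferred available method; optionally avoid busy ATM."""
    candidates = sorted(available, key=lambda m: PREFERENCE[m])
    best = candidates[0]
    # Dynamic rule: fall back from ATM when its current load is high.
    if best == "AAL5/ATM" and atm_load is not None and atm_load > 0.8:
        remaining = [m for m in candidates if m != "AAL5/ATM"]
        if remaining:
            best = remaining[0]
    return best

# Within an SP, the vendor protocol wins; between machines, ATM or TCP/IP:
print(select_method({"MPL", "TCP/IP"}))                     # -> MPL
print(select_method({"AAL5/ATM", "TCP/IP"}, atm_load=0.9))  # -> TCP/IP
```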
## 7 Summary

We have argued that the complex, heterogeneous, and dynamic nature of high-performance distributed computing systems requires an _information-rich_ approach to system configuration. In this approach, tools and applications do not rely on defaults or programmer-supplied knowledge to make configuration choices. Instead, they base choices on information obtained from external sources.

With the goal of enabling information-rich configuration, we have designed and implemented a Metacomputing Directory Service. MDS is designed to provide uniform, efficient, and scalable access to dynamic, distributed, and diverse information about the structure and state of resources. MDS defines a representation (based on that of LDAP), a data model (capable of representing various parallel computers and networks), and an implementation (which uses caching and other strategies to meet performance requirements). Experiments conducted with the Globus toolkit (particularly in the context of the I-WAY) show that MDS information can be used to good effect in practical situations.

We are currently deploying MDS in our GUSTO distributed computing testbed and are extending additional Globus components to use MDS information for configuration purposes. Other directions for immediate investigation include expanding the set of information sources supported, evaluating performance issues in applications, and developing optimized implementations for common operations. In the longer term, we are interested in more sophisticated applications (e.g., source routing, resource scheduling) and in the recording and use of application-generated performance metrics.

## Acknowledgments

We gratefully acknowledge the contributions made by Craig Lee, Steve Schwab, and Paul Stelling to the design and implementation of Globus components. This work was supported by the Defense Advanced Research Projects Agency under contract N66001-96-C-8523 and by the Mathematical, Information, and Computational Sciences Division subprogram of the Office of Computational and Technology Research, U.S. Department of Energy, under Contract W-31-109-Eng-38.

## References

[1] D. Abramson, R. Sosic, J. Giddy, and B. Hall. Nimrod: A tool for performing parameterised simulations using distributed workstations. In _Proc. 4th IEEE Symp. on High Performance Distributed Computing_. IEEE Computer Society Press, 1995.
[2] F. Berman, R. Wolski, S. Figueira, J. Schopf, and G. Shao. Application-level scheduling on distributed heterogeneous networks. In _Proceedings of Supercomputing '96_. ACM Press, 1996.
[3] R. Butler and E. Lusk. Monitors, messages, and clusters: The p4 parallel programming system. _Parallel Computing_, 20:547-564, April 1994.
[4] H. Casanova and J. Dongarra. NetSolve: A network server for solving computational science problems. Technical Report CS-95-313, University of Tennessee, Nov. 1995.
[5] J. Czyzyk, M. P. Mesnier, and J. J. More. The Network-Enabled Optimization System (NEOS) Server. Preprint MCS-P615-0996, Argonne National Laboratory, Argonne, Illinois, 1996.
[6] T. DeFanti, I. Foster, M. Papka, R. Stevens, and T. Kuhfuss. Overview of the I-WAY: Wide area visual supercomputing. _International Journal of Supercomputer Applications_, 10(2):123-130, 1996.
[7] I. Foster, J. Geisler, C. Kesselman, and S. Tuecke. Managing multiple communication methods in high-performance networked computing systems. _Journal of Parallel and Distributed Computing_, 40:35-48, 1997.
[8] I. Foster, J. Geisler, W. Nickless, W. Smith, and S. Tuecke. Software infrastructure for the I-WAY high-performance distributed computing experiment. In _Proc. 5th IEEE Symp. on High Performance Distributed Computing_, pages 562-571. IEEE Computer Society Press, 1996.
[9] I. Foster and C. Kesselman. Globus: A metacomputing infrastructure toolkit. _International Journal of Supercomputer Applications_, 1997. To appear.
[10] I. Foster, C. Kesselman, and S. Tuecke. The Nexus approach to integrating multithreading and communication. _Journal of Parallel and Distributed Computing_, 37:70-82, 1996.
[11] A. Geist, A. Beguelin, J. Dongarra, W. Jiang, R. Manchek, and V. Sunderam. _PVM: Parallel Virtual Machine - A User's Guide and Tutorial for Network Parallel Computing_. MIT Press, 1994.
[12] A. Grimshaw, J. Weissman, E. West, and E. Loyot, Jr. Metasystems: An approach combining parallel processing and heterogeneous distributed computing systems. _Journal of Parallel and Distributed Computing_, 21(3):257-270, 1994.
[13] W. Gropp, E. Lusk, N. Doss, and A. Skjellum. A high-performance, portable implementation of the MPI message passing interface standard. _Parallel Computing_, 22:789-828, 1996.
[14] S. Heker, J. Reynolds, and C. Weider. Technical overview of directory services using the X.500 protocol. RFC 1309, FYI 14, March 1992.
[15] T. Howes and M. Smith. The LDAP application program interface. RFC 1823, August 1995.
[16] M. Litzkow, M. Livny, and M. Mutka. Condor - a hunter of idle workstations. In _Proc. 8th Intl. Conf. on Distributed Computing Systems_, pages 104-111, 1988.
[17] G. Mansfield, T. Johannsen, and M. Knopper. Charting networks in the X.500 directory. RFC 1609, March 1994. (Experimental).
[18] B. C. Neumann and S. Rao. The Prospero resource manager: A scalable framework for processor allocation in distributed systems. _Concurrency: Practice & Experience_, 6(4):339-355, 1994.
[19] D. Reed, C. Elford, T. Madhyastha, E. Smirni, and S. Lamm. The next frontier: Interactive and closed loop performance steering. In _Proceedings of the 1996 ICPP Workshop on Challenges for Parallel Processing_, pages 20-31, Aug. 1996.
[20] J. Reynolds and C. Weider. Executive introduction to directory services using the X.500 protocol. RFC 1308, FYI 13, March 1992.
[21] M. Rose. _The Simple Book_. Prentice Hall, 1994.
[22] G. von Laszewski. _A Parallel Data Assimilation System and Its Implications on a Metacomputing Environment_. PhD thesis, Syracuse University, Dec. 1996.
[23] R. Wolski. Dynamically forecasting network performance using the Network Weather Service. Technical Report TR-CS96-494, U.C. San Diego, October 1996.
[24] W. Yeong, T. Howes, and S. Kille. Lightweight Directory Access Protocol. RFC 1777, March 1995. Draft Standard.
{ "disclaimer": "Notice: The following paper fields have been elided by the publisher: {'references', 'abstract'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1109/HPDC.1997.626445?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1109/HPDC.1997.626445, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "other-oa", "status": "GREEN", "url": "https://digital.library.unt.edu/ark:/67531/metadc691395/m2/1/high_res_d/508115.pdf" }
1,997
[ "JournalArticle" ]
true
1997-08-05T00:00:00
[]
11,504
en
[ { "category": "Business", "source": "s2-fos-model" }, { "category": "Computer Science", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/02920271ab40ff82676325adcb340f8a60a63eb2
[]
0.822347
Improving Operational Efficiency through Quality 4.0 Tool: Blockchain Implementation and Subsequent Market Reaction
02920271ab40ff82676325adcb340f8a60a63eb2
Kvalita Inovácia Prosperita
[ { "authorId": "2180117178", "name": "Vladimíra Gimerská" }, { "authorId": "37871549", "name": "M. Šoltés" }, { "authorId": "119015195", "name": "Rajmund Mirdala" } ]
{ "alternate_issns": null, "alternate_names": [ "Quality, Innovation, Prosperity", "Qual Innov Prosper", "Kval Inovácia Prosper" ], "alternate_urls": [ "https://www.qip-journal.eu/index.php/QIP/about" ], "id": "85d3209d-e1d9-4246-a5fa-843669e87d00", "issn": "1335-1745", "name": "Kvalita Inovácia Prosperita", "type": "journal", "url": "http://www.qip-journal.eu/" }
Purpose: This article aims to observe and measure how modern and innovative blockchain technology improves data quality and transparency and thus affects the stock prices of publicly traded companies after announcing its implementation in their operations. Additionally, the objective is to compare the results with a control group of non-adopters. Methodology/Approach: We selected 30 public companies across various sectors, obtained daily stock price data, identified peer companies, and employed an event study approach to examine the statistical impact of blockchain adoption announcements. Findings: A significant negative reaction (-0.4%) was observed in stock prices the day following a blockchain adoption announcement, but overall, the market response was unsystematic, indicating no consistent reaction in stock prices post-announcement. Research Limitation/Implication: The event study approach assumes that markets are always efficient. This methodology has some limitations because we live in a world that is not perfect, and stock prices do not necessarily fully reflect all available information. Originality/Value of paper: Blockchain implementation is a current and intriguing subject that has attracted limited scholarly research. Each new study contributes valuable insights to the understanding of how this innovative technology impacts corporate operations. Furthermore, this research endeavours to draw comparisons between companies that have announced their adoption of blockchain and their non-adopter counterparts.
# Improving Operational Efficiency through Quality 4.0 Tool: Blockchain Implementation and Subsequent Market Reaction

## DOI: 10.12776/QIP.V27I2.1877

Vladimíra Gimerská, Michal Šoltés, Rajmund Mirdala

Received: 2023-06-21 Accepted: 2023-06-28 Published: 2023-07-31

## ABSTRACT

**Purpose:** This article aims to observe and measure how modern and innovative blockchain technology improves data quality and transparency and thus affects the stock prices of publicly traded companies after they announce its implementation in their operations. Additionally, the objective is to compare the results with a control group of non-adopters.

**Methodology/Approach:** We selected 30 public companies across various sectors, obtained daily stock price data, identified peer companies, and employed an event study approach to examine the statistical impact of blockchain adoption announcements.

**Findings:** A significant negative reaction (-0.4%) was observed in stock prices the day following a blockchain adoption announcement, but overall, the market response was unsystematic, indicating no consistent reaction in stock prices post-announcement.

**Research Limitation/Implication:** The event study approach assumes that markets are always efficient. This methodology has some limitations because we live in a world that is not perfect, and stock prices do not necessarily fully reflect all available information.

**Originality/Value of paper:** Blockchain implementation is a current and intriguing subject that has attracted limited scholarly research. Each new study contributes valuable insights to the understanding of how this innovative technology impacts corporate operations. Furthermore, this research endeavours to draw comparisons between companies that have announced their adoption of blockchain and their non-adopter counterparts.

**Category:** Conceptual paper

**Keywords:** quality 4.0; blockchain; event studies; digitalisation

## 1 INTRODUCTION

According to Global Data Management Research, organisations need to improve their data quality. The report shows that failing to improve data can cause increased costs, unreliable analytics, and a negative impact on customer trust, experience and company reputation, which lead to slow digital transformation (Reno, 2022).

The American Society for Quality defines Quality 4.0 as the term which references "the future of quality and organisational excellence within the context of Industry 4.0" (American Society for Quality, n.d.). It is confirmed that quality management and Industry 4.0 directly influence performance (Nguyen et al., 2021). Technologies 4.0 such as the Internet of Things, Artificial Intelligence or Blockchain are utilised to improve the quality of products and services for the customer and at the same time increase value for shareholders. It is unquestionable that using Technologies 4.0 as part of Quality 4.0 "provides numerous benefits to quality management, including increased speed and transparency, increased adaptability to new situations and continual improvement across businesses plus increased awareness, skills and intelligence" (Mtotywa, 2022). It also enables early error detection and reduces downtime through anticipatory maintenance planning (Mtotywa, 2022).

Blockchain technology, as one of the Quality 4.0 tools, has substantially advanced since its inception, and companies across multiple industries have widely adopted it.
While most of the attention surrounding blockchain relates to its use in cryptocurrency, recent literature and applications show its vast potential for various applications in many industries, especially within the finance sector and the supply chain. It is an innovative technology that brings significant optimisation and automatisation when implemented in a company's various operations. Blockchain as a quality tool can help a company perform better, as it helps in gaining operational excellence and, as a result, fosters process innovation. "Moreover, new forms of collaboration and traceability, such as blockchain, are very important in this period, especially when factors affecting competitiveness can vary" (Santos et al., 2021). On the other hand, its adoption is complex and expensive, so exploring existing use cases is important for companies to help them in their strategic decision-making process on whether to invest in this technology or not.

This paper focuses on observing and measuring how this Quality 4.0 tool affects the stock prices of publicly traded companies that announced its implementation in their operations. We conducted an event study analysis on 30 selected publicly traded companies from various areas and sectors which announced blockchain adoption, examining how this announcement as an event impacted price development. We use SPSS software and the market model to test the abnormal returns and their significance over 41 days: 20 days prior to and 20 days after the announcement. Additionally, through the platform Infront Analytics, we searched for peer companies for each analysed firm from our sample to compare the development during the event window. The objective is to determine whether and to what extent the market reacts to such announcements about blockchain implementations.

## 2 LITERATURE REVIEW

It is worth analysing blockchain as a technology in the context of consequent market reactions after a new technology is announced. Such new technological changes could be e-commerce platforms (Subramani and Walden, 2001; Dehning et al., 2004), mobile apps (Boyd, Kannan and Slotegraaf, 2019) or ERP systems (Hendricks, Singhal and Stratman, 2006; Ranganathan and Brown, 2006).

A study conducted by Chen et al. (2022) shares similarities with our objectives but focuses exclusively on China and Chinese businesses. The researchers examined two categories of firms - those in high-tech industries and those outside - intending to embrace blockchain technology in the future. In total, 302 companies listed on the Shanghai and Shenzhen Stock Exchanges between 2016 and 2020 were chosen. The analysis was conducted over 41 and 11 trading days over two timeframes. The findings revealed that high-tech firms' blockchain announcements gained greater interest from investors, eliciting more significant stock price reactions, as investors deemed these companies more trustworthy (Chen et al., 2022).

There is evidence that blockchain can potentially reduce costs. In the aerospace industry, companies like Honeywell, Moog and Air New Zealand reported up to 30% savings by using blockchain to create secure digital marketplaces for 3D-printed aircraft parts (Tampi, 2020).
In the IT sector, a positive relationship between technological initiatives and financial performance was observed (e.g., Bose and Man Leung, 2019; Bradley et al., 2018), where the emphasis was also placed on operational efficiency improvements, revenue generation and firm value (Bose and Man Leung, 2019; Melville, Kraemer and Gurbaxani, 2004). Additionally, blockchain has the potential to promote innovation in business models, leading to cost reduction and providing new sources of revenue (Lacity, 2018).

Although studies on blockchain application announcements exist, companies' returns are often compared with Bitcoin returns (Cheng et al., 2019; Cahill et al., 2020). Only some consider the market value that can be created by implementing blockchain. In such cases, an event study methodology is usually used to assess the short-term value investors assign to recently revealed IT initiatives based on future cash flow anticipation (Boyd, Kannan and Slotegraaf, 2019).

The closest study to ours was published by Klockner, Schmidt and Wagner (2022), where 175 blockchain announcements from 100 companies were analysed. The study was well diversified across 11 industries and 15 countries, and data were additionally tested for robustness. Here, a positive market reaction was identified for announcements in the context of operations and supply chain management. Furthermore, this sample confirmed a significant average abnormal return of 0.30% on the announcement day. However, when an external IT provider is used to implement blockchain, a significantly less positive reaction is observed. Klockner's research (2022) also provides a comprehensive summary of recent research on blockchain and its influence on cost-efficient processes. The studies include the following use cases: effect on supply chain and traceability, enhancement of data and knowledge sharing between supply chain participants, and security and acceleration of inter-organisational payments and order processing (Klockner, Schmidt and Wagner, 2022).

An investigation of blockchain-related announcements was carried out by Cahill and colleagues in 2020 on a sample of 713 companies between 2016 and 2018, exploring the relationship between Bitcoin development and blockchain announcements. An average abnormal return of 5.3% was observed on announcement days, and smaller companies experienced greater abnormal returns than larger ones. Furthermore, lower returns occurred with non-speculative announcements than with speculative ones (Cahill et al., 2020).

Cheng et al. (2019) also explored the connection between 79 publicly traded companies' initial 8-K filings on blockchain activities and investors' reactions. They classified the activities detailed in these disclosures as either existing or speculative ("existing" were firms with a well-defined strategy for blockchain implementation, and "speculative" firms outlining ambiguous plans for blockchain). Their research showed that speculative information had 7.5% positive abnormal returns while existing disclosures experienced almost zero abnormal returns. These favourable responses are undone within a month, suggesting investor overreaction to speculative disclosures (Cheng et al., 2019).

Another event study looks at financial corporations that use blockchain and how their stocks performed during the COVID-19 pandemic.
The common parameter is that high-tech companies, whether they are members of blockchain consortiums or have some technological advantage, have better positive stock development results, avoiding potential losses during pandemic-related announcements (Paul, Adhikari and Bose, 2022).

Liu et al. (2022) examined market reactions to blockchain announcements, focusing on a sample of 143 announcements. The researchers employed event study methodology and multivariate regression to analyse market responses and determine factors affecting these changes. They found a positive market reaction on announcement days and noted that strategic-level announcements elicited a stronger positive response from the market (Liu et al., 2022).

## 3 METHODOLOGY AND DATA

The event study methodology is gaining popularity in business and marketing disciplines as a way to measure the impact of significant events at the firm level. This technique can be used to assess the effect of some important event or corporate announcement on a company's financial performance, profitability, and market valuation over a defined event window, ranging from a few days to a few years. The methodology is flexible and can be adapted to measure different events, making it useful for researchers in various fields (Ullah et al., 2021).

Within our study, we aim to answer the following research question (RQ), and thus we create the null hypothesis (H0):

RQ: Is there any reaction in stock prices after the company officially announces the application of blockchain technology in its operations?

H0: There is no reaction in stock prices after the company's announcement regarding blockchain implementation.

Within the null hypothesis, we will test abnormal returns of companies that announced blockchain adoption and compare them to a peer group of similar companies that did not announce any blockchain application in the time around the event window. The null hypothesis will be confirmed when abnormal returns are equal to zero, and we will reject the null hypothesis when abnormal returns are not equal to zero. We will also analyse whether the announcement of blockchain adoption had a positive or negative impact on the stock price.

In order to test the hypothesis, we first gathered two main types of data: announcements of blockchain adoption, which are publicly available, and stock prices. Then, based on Infront Analytics (2023), we created a control group of similar companies that had not publicly communicated any blockchain adoption in that time period. When the same company from the analysed group appeared as a peer to some other company (mostly in the case of car producers), we took the second or the third listed international company as the peer (Infront Analytics, 2023).

We chose thirty globally active corporations from various industries and obtained daily stock close prices for the last ten years from the Yahoo.com platform. In addition, we chose the MSCI World Index to compare prices with general market performance. Because certain companies and indices representing benchmarks are traded in different countries, the problem of non-trading days arose, a common issue in event studies. To solve this, we follow the methodology mentioned by Campbell, Cowan and Salotti (2010), which completely omits non-trading days from the analysis.

Simultaneously, during the phase of choosing the companies for our analysis, we searched for specific announcements regarding real blockchain implementation projects.
We did not consider any press releases about merely exploring the technology, only the real adoption of blockchain in the company's operations. These announcements were set as the event days (t0) in our event study approach. In almost all cases, t0 falls between 2016 and 2020, except for a few early adopters who have worked on adoption since 2015, for instance IBM and Microsoft. If the announcement was made on a non-trading day, we took the first following trading day as the event day (t0).

Tab. 1 summarises our sample and the corresponding announcement days and sources. The selected corporations come from the Automotive, Finance, Food & Beverages, Supply Chain and IT sectors. For each of the selected companies, we found a peer company, and this group was also tested within our study (hereafter the "blockchain group" and the "control group").

_Table 1 – Companies which Announced Blockchain Adoption and Their Peers_

| | Company | Official Announcement | Peer company (Infront Analytics, 2023) |
|---|---|---|---|
| 1 | Walmart | 19 Oct 2016 | Pan Pacific Int. Holding |
| 2 | Anheuser-Busch | 14 March 2018 | Boston Beer Company |
| 3 | Allianz | 07 Nov 2017 | Unipol |
| 4 | AT&T | 26 Sept 2018 | Verizon Communications |
| 5 | SAP | 16 May 2017 | Oracle Corp |
| 6 | Mercedes-Benz | 28 June 2017 | Stellantis |
| 7 | Volkswagen | 22 Apr 2019 | Kia |
| 8 | BMW | 13 Feb 2019 | Honda |
| 9 | Porsche | 22 Feb 2018 | Renault |
| 10 | Microsoft | 09 Nov 2015 | Adobe |
| 11 | IBM | 17 Dec 2015 | HP |
| 12 | Foxconn | 06 March 2017 | Pegatron |
| 13 | Nestle | 22 Aug 2017 | Danone |
| 14 | Carrefour | 06 March 2018 | Tesco PLC |
| 15 | MasterCard | 21 Oct 2016 | Visa |
| 16 | Honeywell International | 17 Dec 2018 | General Electric Company |
| 17 | JPMorgan Chase & Co. | 03 March 2016 | Bank of America |
| 18 | Tyson Foods | 22 Aug 2017 | Hormel Foods |
| 19 | Wells Fargo | 24 Oct 2016 | Regions Financial Corp. |
| 20 | Coca-Cola | 05 Nov 2019 | Keurig Dr. Pepper |
| 21 | FedEx | 14 May 2018 | Deutsche Post |
| 22 | Cisco Systems | 11 July 2017 | Ciena Corp |
| 23 | HSBC Bank | 3 Oct 2017 | Credit Agricole |
| 24 | Deutsche Bank | 16 Sept 2019 | Commerzbank |
| 25 | UBS Bank | 11 Dec 2017 | BNP Paribas |
| 26 | Maersk | 16 Jan 2018 | Hapag Lloyd |
| 27 | Northern Trust | 22 Feb 2017 | Key Corp |
| 28 | Tata Motors | 16 Dec 2020 | Ashok Leyland |
| 29 | Morgan Stanley | 28 Nov 2018 | State Street Corp |
| 30 | Deutsche Telekom | 24 June 2019 | Telefonica DE |

Notes: Sources of the blockchain announcements: Coindesk.com, Accenture, LedgerInsights.com, SAP.com, group-media.mercedes-benz.com, Volkswagen.com, Porsche.com, Microsoft.com, Hyperledger.com, Reuters.com, carrefour.com, Computerworld.com, prnewswire.com, bnnbloomberg.ca, Yahoo.com.
In our analysis, we decided to explore an event window of 41 days in total (20 days prior to and 20 days after the event day). As the estimation window, we take 200 days, starting 250 days before the announcement and ending 51 days before the event (Fig. 1).

_Figure 1 – Event Timeline – Estimation Period and Event Period_ (the 200-day estimation period precedes the 41-day event period around t0)

Furthermore, we calculate the actual and expected returns of each company from the blockchain and control groups using the market model described by the formulas below:

$$R_{i,t} = \ln\left(\frac{P_{i,t}}{P_{i,t-1}}\right) \qquad (1)$$

$$E(R_{i,t}) = \alpha_i + \beta_i \cdot R_{M,t} \qquad (2)$$

$$AR_{i,t} = R_{i,t} - E(R_{i,t}) \qquad (3)$$

$$CAR_{(t_1,t_2)} = \sum_{t=t_1}^{t_2} AAR_t \qquad (4)$$

Firstly, we calculate daily returns as natural logarithms of the ratio of consecutive close prices $P_{i,t}$ (formula 1). The expected return of company $i$ on day $t$ is represented by $E(R_{i,t})$ (formula 2), where $R_{M,t}$ is the return of the MSCI World, our benchmark index, at time $t$, and the coefficients $\alpha_i$ and $\beta_i$ are estimated over the estimation window. The company's abnormal return $AR_{i,t}$ (formula 3) is calculated as the difference between the actual and expected returns. In the next step, the cumulative abnormal return is calculated as the sum of abnormal returns during the event window between $t_1$ and $t_2$: we obtain the final CAR (formula 4) over all companies as the sum of the average abnormal returns $AAR_t$ for each day in the event window.
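For concreteness, formulas (1) to (4) can be sketched in Python; this continues the hypothetical `prices` frame and index position `i0` from the earlier sketch, and the column names are again placeholders:

```python
import numpy as np

# Formula (1): daily log returns of the firm and the benchmark.
firm  = np.log(prices["WMT"] / prices["WMT"].shift(1))
bench = np.log(prices["MSCI_WORLD"] / prices["MSCI_WORLD"].shift(1))

# Market model (2): estimate alpha_i and beta_i by OLS on the
# 200-day estimation window (np.polyfit returns [slope, intercept]).
est = slice(i0 - 250, i0 - 50)
beta_i, alpha_i = np.polyfit(bench.iloc[est], firm.iloc[est], 1)

# Formulas (2) and (3) on the 41-day event window.
win      = slice(i0 - 20, i0 + 21)
expected = alpha_i + beta_i * bench.iloc[win]
ar       = firm.iloc[win] - expected          # abnormal returns AR_{i,t}

# Formula (4): with one AR row per company in ar_matrix, average each
# event day across companies, then cumulate over the window:
# aar = ar_matrix.mean(axis=0); car = aar.cumsum()
```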
## 4 RESULTS

Using the methodology described earlier, we created charts showing the abnormal returns and cumulative abnormal returns of the 30 observed blockchain companies and the 30 companies belonging to the control group (Fig. 2 and Fig. 3). Statistical tests were then conducted using SPSS software. The following two tables display our results for the blockchain group, indicating that returns on the event day t0 were slightly negative. Negative performance can also be observed for three days after the announcement. The control group, by contrast, performed positively on t0, but with a small magnitude of 0.00172; for the subsequent four days, its results turned negative, ranging from -0.00073 to -0.00318. Before the announcement, only six out of 20 trading days recorded negative results; after the announcement, nine out of 20 trading days ended with negative returns. As Tab. 3 presents, after the announcement there was only one statistically significant day: the day immediately following the announcement. The negative average abnormal return on day t+1 (-0.4%) can be interpreted as some fear or insecurity among investors about the adoption of a new technology into a company's operations. According to the data, another three days showed statistical significance: t-13, t-12 and t-11. There the abnormal returns were positive, which can be interpreted as the result of some insider information reaching the market before the announcement. We also tested statistical significance for the control group. Only day t-5 was significant at the 5% level (average ARt-5 was +0.00622); at the 10% level, further days showed significance, among them t-13 (average ARt-13 was +0.00479) and t-10 (average ARt-10 was -0.00536). In this event study analysis, we considered blockchain announcements to be the only events affecting the stock price; other factors that may impact the stock price were not considered.

_Table 2 – One-Sample Statistics – Blockchain Group_

| Day | N | Mean | Std. Deviation | Std. Error Mean | Day | N | Mean | Std. Deviation | Std. Error Mean |
|---|---|---|---|---|---|---|---|---|---|
| t-20 | 30 | 0.0010 | 0.0159 | 0.0029 | t+1 | 30 | -0.0036 | 0.0094 | 0.0017 |
| t-19 | 30 | 0.0037 | 0.0194 | 0.0035 | t+2 | 30 | -0.0018 | 0.0071 | 0.0013 |
| t-18 | 30 | 0.0014 | 0.0104 | 0.0019 | t+3 | 30 | -0.0021 | 0.0180 | 0.0033 |
| t-17 | 30 | -0.0005 | 0.0086 | 0.0016 | t+4 | 30 | 0.0008 | 0.0058 | 0.0011 |
| t-16 | 30 | 0.0030 | 0.0104 | 0.0019 | t+5 | 30 | 0.0013 | 0.0131 | 0.0024 |
| t-15 | 30 | -0.0016 | 0.0070 | 0.0013 | t+6 | 30 | 0.0032 | 0.0155 | 0.0028 |
| t-14 | 30 | -0.0002 | 0.0104 | 0.0019 | t+7 | 30 | -0.0009 | 0.0129 | 0.0023 |
| t-13 | 30 | 0.0075 | 0.0145 | 0.0026 | t+8 | 30 | -0.0018 | 0.0100 | 0.0018 |
| t-12 | 30 | 0.0044 | 0.0093 | 0.0017 | t+9 | 30 | 0.0009 | 0.0093 | 0.0017 |
| t-11 | 30 | 0.0070 | 0.0188 | 0.0034 | t+10 | 30 | 0.0020 | 0.0089 | 0.0016 |
| t-10 | 30 | 0.0009 | 0.0125 | 0.0023 | t+11 | 30 | -0.0015 | 0.0101 | 0.0018 |
| t-9 | 30 | -0.0017 | 0.0111 | 0.0020 | t+12 | 30 | 0.0022 | 0.0142 | 0.0026 |
| t-8 | 30 | 0.0007 | 0.0087 | 0.0016 | t+13 | 30 | 0.0030 | 0.0153 | 0.0028 |
| t-7 | 30 | -0.0027 | 0.0121 | 0.0022 | t+14 | 30 | -0.0002 | 0.0124 | 0.0023 |
| t-6 | 30 | 0.0019 | 0.0112 | 0.0020 | t+15 | 30 | 0.0029 | 0.0105 | 0.0019 |
| t-5 | 30 | -0.0008 | 0.0109 | 0.0020 | t+16 | 30 | -0.0014 | 0.0102 | 0.0019 |
| t-4 | 30 | 0.0011 | 0.0137 | 0.0025 | t+17 | 30 | 0.0018 | 0.0246 | 0.0045 |
| t-3 | 30 | -0.0027 | 0.0136 | 0.0025 | t+18 | 30 | 0.0041 | 0.0187 | 0.0034 |
| t-2 | 30 | 0.0000 | 0.0112 | 0.0020 | t+19 | 30 | 0.0012 | 0.0102 | 0.0019 |
| t-1 | 30 | 0.0014 | 0.0091 | 0.0017 | t+20 | 30 | -0.0013 | 0.0158 | 0.0029 |
| t0 | 30 | -0.0013 | 0.0091 | 0.0017 | | | | | |

Notes: N – Number of observations; Std. Deviation – Standard Deviation; Std. Error Mean – Standard Error of the Mean.
_Table 3 – One-Sample T-Test – Blockchain Group_

| Day | t | Sig. (2-tailed) | Mean Difference | Lower | Upper |
|---|---|---|---|---|---|
| t-20 | 0.355 | 0.725 | 0.00103 | -0.00492 | 0.00698 |
| t-19 | 1.032 | 0.311 | 0.00366 | -0.00360 | 0.01092 |
| t-18 | 0.754 | 0.457 | 0.00144 | -0.00246 | 0.00533 |
| t-17 | -0.316 | 0.754 | -0.00049 | -0.00370 | 0.00271 |
| t-16 | 1.572 | 0.127 | 0.00299 | -0.00090 | 0.00689 |
| t-15 | -1.26 | 0.218 | -0.00161 | -0.00423 | 0.00101 |
| t-14 | -0.116 | 0.909 | -0.00022 | -0.00411 | 0.00367 |
| t-13 | 2.843 | 0.008a | 0.00753 | 0.00211 | 0.01295 |
| t-12 | 2.607 | 0.014a | 0.00443 | 0.00096 | 0.00791 |
| t-11 | 2.054 | 0.049a | 0.00705 | 0.00003 | 0.01406 |
| t-10 | 0.413 | 0.683 | 0.00094 | -0.00373 | 0.00561 |
| t-9 | -0.858 | 0.398 | -0.00173 | -0.00586 | 0.00240 |
| t-8 | 0.445 | 0.659 | 0.00071 | -0.00254 | 0.00396 |
| t-7 | -1.22 | 0.234 | -0.00268 | -0.00720 | 0.00183 |
| t-6 | 0.916 | 0.367 | 0.00187 | -0.00231 | 0.00606 |
| t-5 | -0.420 | 0.678 | -0.00084 | -0.00493 | 0.00325 |
| t-4 | 0.423 | 0.675 | 0.00105 | -0.00404 | 0.00615 |
| t-3 | -1.10 | 0.280 | -0.00274 | -0.00782 | 0.00235 |
| t-2 | 0.011 | 0.991 | 0.00002 | -0.00415 | 0.00420 |
| t-1 | 0.827 | 0.415 | 0.00137 | -0.00202 | 0.00476 |
| t0 | -0.781 | 0.441 | -0.00130 | -0.00472 | 0.00211 |
| t+1 | -2.081 | 0.046a | -0.00355 | -0.00705 | -0.00006 |
| t+2 | -1.400 | 0.172 | -0.00180 | -0.00444 | 0.00083 |
| t+3 | -0.636 | 0.530 | -0.00209 | -0.00881 | 0.00463 |
| t+4 | 0.723 | 0.476 | 0.00076 | -0.00140 | 0.00293 |
| t+5 | 0.552 | 0.585 | 0.00132 | -0.00356 | 0.00619 |
| t+6 | 1.140 | 0.263 | 0.00322 | -0.00255 | 0.00899 |
| t+7 | -0.392 | 0.698 | -0.00092 | -0.00573 | 0.00388 |
| t+8 | -0.980 | 0.335 | -0.00179 | -0.00552 | 0.00194 |
| t+9 | 0.554 | 0.584 | 0.00094 | -0.00252 | 0.00439 |
| t+10 | 1.246 | 0.223 | 0.00203 | -0.00130 | 0.00536 |
| t+11 | -0.814 | 0.422 | -0.00150 | -0.00528 | 0.00227 |
| t+12 | 0.848 | 0.403 | 0.00219 | -0.00310 | 0.00749 |
| t+13 | 1.071 | 0.293 | 0.00299 | -0.00272 | 0.00870 |
| t+14 | -0.097 | 0.923 | -0.00022 | -0.00486 | 0.00442 |
| t+15 | 1.507 | 0.143 | 0.00289 | -0.00103 | 0.00680 |
| t+16 | -0.743 | 0.464 | -0.00138 | -0.00519 | 0.00243 |
| t+17 | 0.410 | 0.685 | 0.00184 | -0.00735 | 0.01103 |
| t+18 | 1.186 | 0.245 | 0.00406 | -0.00294 | 0.01106 |
| t+19 | 0.641 | 0.527 | 0.00120 | -0.00263 | 0.00502 |
| t+20 | -0.463 | 0.647 | -0.00133 | -0.00722 | 0.00455 |

Notes: Test value – 0; a, b indicate the 5 and 10 percent significance levels; t – value of the t-statistic; Sig. (2-tailed) – two-tailed significance; Lower/Upper – bounds of the 95% confidence interval of the difference.

The following figures show the abnormal returns of both analysed groups of companies and their cumulative abnormal returns. As seen in Fig. 2, the day of the announcement brought negative abnormal returns; according to our data, this occurred for 18 out of 30 companies, and the trend continued for the next three days. The biggest loss was suffered by Volkswagen (abnormal return ARt0 of -2.56%, with an actual return Rt0 of -1.92% on that day). The largest abnormal declines on day t+1 were recorded by Deutsche Bank (ARt+1 of -2.69%) and Morgan Stanley (ARt+1 of -2.15%). For the selected sample of blockchain companies and the control group, the data in the selected period showed a similar direction of stock price movements; the only difference was the magnitude of the abnormal returns.

In Fig. 3, we can track the development of the cumulative abnormal returns. CAR was more or less positive during the event window, which mirrors the general market development between 2015 and 2019, where we observed an increasing trend. Additionally, the blockchain-adopting companies outperformed their peer companies between days t-16 and t0, which we interpret as positive investor expectations about the coming announcements, since insider information is common practice on the market. However, these results show that investors do not yet assign an important role to this technology, probably because they cannot estimate its long-term impact; they therefore approach this information rather cautiously.

_Figure 2 – Abnormal Returns_

_Figure 3 – Cumulative Abnormal Returns_

To check the robustness of our data, we also calculated the cumulative abnormal returns of each company and tested different event windows to find out whether some periods were statistically significant. The observed intervals were <-20,+20>; <-15,+15>; <-10,+10>; <-5,+5>; <-2,+2>; <-1,+1>; <-5,+10>; <-5,+15>; <-5,+20>. Within the blockchain group, five out of nine intervals were slightly negative on average (Tab. 4), while the control group's results were slightly positive on average. Shorter periods around the event day, mostly between t-10 and t+10, were negative; longer intervals extending more than ten days before and after the event showed positive abnormal returns. As presented in Tab. 5, no interval was statistically significant. We repeated the same procedure with the control group: after performing the statistical tests in SPSS, no event window was statistically significant there either.
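The window-level tests reported in Tab. 4 and Tab. 5 below were produced in SPSS; the same one-sample t-test against a zero mean can be reproduced in Python, sketched here on stand-in random data (the real input would be the per-company CARs from the earlier sketch):

```python
import numpy as np
from scipy import stats

# Hypothetical input: ar_matrix holds abnormal returns with one row per
# company and one column per event-window day (t-20 ... t+20).
rng = np.random.default_rng(0)
ar_matrix = rng.normal(0.0, 0.01, size=(30, 41))   # stand-in data only

# CAR of each company over <-5,+5>: day t-20 is column 0, so t-5..t+5
# are columns 15..25 of the 41-day window.
car = ar_matrix[:, 15:26].sum(axis=1)

t_stat, p_two = stats.ttest_1samp(car, popmean=0.0)
print(f"t = {t_stat:.3f}, two-tailed p = {p_two:.3f}")
# Reject H0 (zero mean CAR) at the 5% level when p < 0.05.
```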
_Table 4 – One-Sample Statistics – Various Event Windows – Blockchain Group_

| Window | N | Mean | Std. Deviation | Std. Error Mean |
|---|---|---|---|---|
| <-20,+20> | 30 | 0.031321 | 0.118237 | 0.021587 |
| <-15,+15> | 30 | 0.018313 | 0.073109 | 0.013348 |
| <-10,+10> | 30 | -0.005217 | 0.049348 | 0.009010 |
| <-5,+5> | 30 | -0.007796 | 0.035402 | 0.006464 |
| <-2,+2> | 30 | -0.005267 | 0.018841 | 0.003440 |
| <-1,+1> | 30 | -0.003487 | 0.014663 | 0.002677 |
| <-5,+10> | 30 | -0.004325 | 0.037792 | 0.006900 |
| <-5,+15> | 30 | 0.002025 | 0.054756 | 0.009997 |
| <-5,+20> | 30 | 0.006407 | 0.077646 | 0.014176 |

Notes: N – Number of observations; Std. Deviation – Standard Deviation; Std. Error Mean – Standard Error of the Mean.

_Table 5 – One-Sample T-Test – Various Event Windows – Blockchain Group_

| Window | T | df | Sig. (2-tailed) | Mean Difference | Lower | Upper |
|---|---|---|---|---|---|---|
| <-20,+20> | 1.451 | 29 | 0.158 | 0.031321 | -0.012829 | 0.075472 |
| <-15,+15> | 1.372 | 29 | 0.181 | 0.018313 | -0.008987 | 0.045612 |
| <-10,+10> | -0.579 | 29 | 0.567 | -0.005217 | -0.023644 | 0.013210 |
| <-5,+5> | -1.206 | 29 | 0.238 | -0.007796 | -0.021015 | 0.005423 |
| <-2,+2> | -1.531 | 29 | 0.137 | -0.005267 | -0.012302 | 0.001768 |
| <-1,+1> | -1.303 | 29 | 0.203 | -0.003487 | -0.008962 | 0.001988 |
| <-5,+10> | -0.627 | 29 | 0.536 | -0.004325 | -0.018436 | 0.009787 |
| <-5,+15> | 0.203 | 29 | 0.841 | 0.002025 | -0.018421 | 0.022472 |
| <-5,+20> | 0.452 | 29 | 0.655 | 0.006407 | -0.022587 | 0.035400 |

Notes: Test value – 0; T – value of the t-statistic; Sig. (2-tailed) – two-tailed significance; Lower/Upper – bounds of the 95% confidence interval of the difference.

## 5 CONCLUSIONS

The popularity of blockchain as one of the Quality 4.0 instruments has grown rapidly in the business and academic communities. It has the potential to enhance data transparency and quality, which optimises company operations and thus increases value for shareholders. This paper aimed to analyse the impact of blockchain announcements on selected international companies using an event study approach. Our objective was to answer the research question set at the beginning of our analysis.

RQ: Is there any reaction in stock prices after the company officially announces the application of blockchain technology in its operations?

H0: There has been no reaction in stock prices after the company's announcement regarding blockchain implementation.

Using the event study approach and SPSS software, we analysed our data sample consisting of two groups of sixty companies in total. Within the blockchain group, three days before the event were statistically significant with positive results (t-13, t-12 and t-11),
which can be interpreted as insider information or signals about the planned announcements spreading on the market. After the event, however, only the first day after the announcement (t+1) was statistically significant, with a negative reaction (-0.4%). This can be connected to investors' caution towards a new technology such as blockchain, where current knowledge is probably not yet sufficient and the technology needs to be explored further. According to our data, there was no systematic market reaction after the announcement, and the significance of day t+1 appears random rather than systematic. Thus we do not reject the null hypothesis, and we can summarise that there has been no reaction in stock prices after a company's announcement regarding blockchain implementation.

The importance and maturity of blockchain will rise in the coming years, and thus every additional study on this topic will be important to extend the pool of knowledge. Once the technology is properly established in the market, evaluating its long-term impact on companies will be interesting. We therefore encourage researchers to analyse the results on a long-term basis in the future, to conclude whether blockchain positively influences companies' operations and whether the technology is worth the investment of such considerable financial resources.

## REFERENCES

American Society for Quality, n.d. _Quality Glossary Definition: Quality 4.0._ [online] Available at: <https://asq.org/quality-resources/quality-40> [Accessed 04 April 2023].

Bose, I. and Man Leung, A.C., 2019. Adoption of identity theft countermeasures and its short- and long-term impact on firm value. _MIS Quarterly_, [e-journal] 43(1), pp.313-327. DOI: 10.25300/misq/2019/14192.

Boyd, D.E., Kannan, P.K. and Slotegraaf, R.J., 2019. Branded apps and their impact on firm value: A design perspective. _Journal of Marketing Research_, [e-journal] 56(1), pp.76-88. DOI: 10.1177/0022243718820588.

Bradley, R.V. and Esper, T.L., 2018. The joint use of RFID and EDI: Implications for hospital performance. _Production and Operations Management_, [e-journal] 27(11), pp.2071-2090. DOI: 10.1111/poms.12955.

Cahill, D., Baur, D.G., Liu, Z. and Yang, J.W., 2020. I am a blockchain too: How does the market respond to companies' interest in blockchain? _Journal of Banking and Finance_, [e-journal] 113, 105740. DOI: 10.1016/j.jbankfin.2020.105740.

Campbell, C.J., Cowan, A.R. and Salotti, V., 2010. Multi-country event-study methods. _Journal of Banking & Finance_, [e-journal] 34(12), pp.3078-3090. DOI: 10.1016/j.jbankfin.2010.07.016.

Chen, K., Lai, T.L., Liu, Q. and Wang, C., 2022. Beyond the blockchain announcement: Signaling credibility and market reaction. _International Review of Financial Analysis_, [e-journal] 82, 102209. DOI: 10.1016/j.irfa.2022.102209.

Cheng, S.F., De Franco, G., Jiang, H. and Lin, P., 2019. Riding the blockchain mania: Public firms' speculative 8-K disclosures. _Management Science_, [e-journal] 65(12), pp.5901-5913. DOI: 10.1287/mnsc.2019.3357.

Dehning, B., Richardson, V.J., Urbaczewski, A. and Wells, J.D., 2004. Reexamining the value relevance of e-commerce initiatives. _Journal of Management Information Systems_, [e-journal] 21(1), pp.55-82. DOI: 10.1080/07421222.2004.11045788.

Hendricks, K.B., Singhal, V.R. and Stratman, J.K., 2006.
The impact of enterprise systems on corporate performance: A study of ERP, SCM, and CRM system implementations. _Journal of Operations Management_, [e-journal] 25(1), pp.65-82. DOI: 10.1016/j.jom.2006.02.002.

Infront Analytics, 2023. _Find. Compare. Evaluate._ [online] Available at: <https://www.infrontanalytics.com/> [Accessed 13 March 2023].

Klockner, M., Schmidt, C.G. and Wagner, S.M., 2022. When blockchain creates shareholder value: Empirical evidence from international firm announcements. _Production and Operations Management_, [e-journal] 31(1), pp.46-64. DOI: 10.1111/poms.13609.

Lacity, M.C., 2018. Addressing key challenges to making enterprise blockchain applications a reality. _MIS Quarterly Executive_, 17(3), pp.201-222.

Liu, W., Wang, J., Jia, F. and Choi, T., 2022. Blockchain announcements and stock value: A technology management perspective. _International Journal of Operations & Production Management_, [e-journal] 42(5), pp.713-742. DOI: 10.1108/ijopm-08-2021-0534.

Melville, N., Kraemer, K. and Gurbaxani, V., 2004. Information technology and organisational performance: An integrative model of IT business value. _MIS Quarterly_, [e-journal] 28(2), pp.283-322. DOI: 10.2307/25148636.

Mtotywa, M., 2022. Developing a Quality 4.0 maturity index for improved business operational efficiency and performance. _Quality Innovation Prosperity_, [e-journal] 26(2), pp.101-127. DOI: 10.12776/QIP.V26I2.1718.

Nguyen, N., Nguyen, Ch., Nguyen, H. and Nguyen, V., 2021. The impact of quality management on business performance of manufacturing firms: The moderated effect of Industry 4.0. _Quality Innovation Prosperity_, 25(3), pp.120-135. DOI: 10.12776/QIP.V25I3.1623.

Paul, S., Adhikari, A. and Bose, I., 2022. White knight in dark days? Supply chain finance firms, blockchain, and the COVID-19 pandemic. _Information & Management_, [e-journal] 59(6), 103661. DOI: 10.1016/j.im.2022.103661.

Ranganathan, C. and Brown, C.V., 2006. ERP investments and the market value of firms: Toward an understanding of influential ERP project variables. _Information Systems Research_, [e-journal] 17(2), pp.145-161. DOI: 10.1287/isre.1060.0084.

Reno, G., 2022. _12 Things You Can Do to Improve Data Quality._ [online] Naperville: FirstEigen. Available at: <https://firsteigen.com/blog/12-things-you-can-do-to-improve-data-quality/> [Accessed 10 March 2023].

Santos, G., Sá, J.C., Félix, M.J., Barreto, L., Carvalho, F., Doiro, M., Zgodavová, K. and Stefanović, M., 2021. New needed quality management skills for quality managers 4.0. _Sustainability_, [e-journal] 13(11), 6149. DOI: 10.3390/su13116149.

Subramani, M. and Walden, E., 2001. _The impact of e-commerce announcements on the market value of firms._ Available at SSRN: <https://ssrn.com/abstract=269668> or <http://dx.doi.org/10.2139/ssrn.269668> [Accessed 10 March 2023].

Tampi, T., 2020. VeriTX to use blockchain to transform aerospace 3D printing supply chain. _3Dprint.com_, [online] 02 November. Available at: <https://3dprint.com/274744/veritx-to-use-blockchain-to-transform-aerospace-3d-printing-supply-chain/> [Accessed 10 March 2023].

Ullah, S., Zaefarian, G., Ahmed, R. and Kimani, D., 2021. How to apply the event study methodology in Stata: An overview and a step-by-step guide for authors. _Industrial Marketing Management_, [e-journal] 99, pp.A1-A12. DOI: 10.1016/j.indmarman.2021.02.004.

## ABOUT AUTHORS
**Vladimíra Gimerská** (ORCID 0000-0001-7108-7201) (V.G.) – PhD student, Technical University of Košice, Faculty of Economics, Slovak Republic, e-mail: gimerska.vladimira@gmail.com.

**Michal Šoltés** (ORCID 0000-0002-1421-7177) (M.S.) – Assoc. Prof., Dean of the Faculty of Economics, Technical University of Košice, Slovak Republic, e-mail: michal.soltes@tuke.sk.

**Rajmund Mirdala** (ORCID 0000-0002-9949-3049) (R.M.) – Prof., Department of Economics, Faculty of Economics, Technical University of Košice, Slovak Republic, e-mail: rajmund.mirdala@tuke.sk.

## AUTHOR CONTRIBUTIONS

Conceptualisation, V.G.; Methodology, V.G.; Formal analysis, V.G.; Investigation, V.G.; Original draft preparation, V.G.; Review and editing, M.S. and R.M.; Visualisation, V.G.; Supervision, M.S. and R.M.

## CONFLICTS OF INTEREST

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

© 2023 by the authors. Submitted for possible open access publication under the terms and conditions of the Creative Commons Attribution (CC-BY) license (http://creativecommons.org/licenses/by/4.0/).
{ "disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.12776/qip.v27i2.1877?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.12776/qip.v27i2.1877, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "CCBY", "status": "GOLD", "url": "https://www.qip-journal.eu/index.php/QIP/article/download/1877/1369" }
2,023
[ "JournalArticle" ]
true
2023-07-31T00:00:00
[ { "paperId": "7edfb72b0a06d49eca54f19b92f0fad67868a4a1", "title": "Developing a Quality 4.0 Maturity Index for Improved Business Operational Efficiency and Performance" }, { "paperId": "16c463725c5093fd4e7f3048a86f2a96f6f7b419", "title": "Beyond the blockchain announcement: Signaling credibility and market reaction" }, { "paperId": "c277fd4a221725eee919eb655af30b88e9b4fd2e", "title": "White knight in dark days? Supply chain finance firms, blockchain, and the COVID-19 pandemic" }, { "paperId": "eec034c0d306b6bcd33d83f0188cc9c2571521c2", "title": "Blockchain announcements and stock value: a technology management perspective" }, { "paperId": "7af62dcc5c954d63c2673b8cc40929619a84dc04", "title": "When Blockchain Creates Shareholder Value: Empirical Evidence from International Firm Announcements" }, { "paperId": "2bf57fbdc071ec75415192bad3fc329771fcaad0", "title": "How to apply the event study methodology in STATA: An overview and a step-by-step guide for authors" }, { "paperId": "03454a5e070413c3974a2bffdab16a8a085dffac", "title": "Riding the Blockchain Mania: Public Firms’ Speculative 8-K Disclosures" }, { "paperId": "512566eecd9189e6d76828affee72c8de20dcdf9", "title": "Branded Apps and Their Impact on Firm Value: A Design Perspective" }, { "paperId": "1fb9bec1ae8850c35549dc11345f07ccd32e3fbb", "title": "Adoption of Identity Theft Countermeasures and its Short- and Long-Term Impact on Firm Value" }, { "paperId": "1bf7de009e2dc8d8c5e077a77ebba1b44eef9476", "title": "The Joint Use of RFID and EDI: Implications for Hospital Performance" }, { "paperId": "135d16b93ee0b407446740d7651c36393182a0c2", "title": "I Am a Blockchain Too..." }, { "paperId": "2e29693babcc1f7734b58f7de0f89fc56fa2ed1a", "title": "Multi-Country Event Study Methods" }, { "paperId": "41eb2c8e05a7566e4d16f76255f202eb260efe44", "title": "ERP Investments and the Market Value of Firms: Toward an Understanding of Influential ERP Project Variables" }, { "paperId": "ceb847ccef950d8f53c9a3f2cb1a10669dafa46e", "title": "Reexamining the Value Relevance of E-Commerce Initiatives" }, { "paperId": "fa02855ba4688a077b22607f420f5be7fd4f9139", "title": "Review: Information Technology and Organizational Performance: An Integrative Model of IT Business Value" }, { "paperId": "8282feec96a60372129f8915c788e7d2abb8eda5", "title": "The Impact of E-Commerce Announcements on the Market Value of Firms" }, { "paperId": null, "title": "2022. 12 Things You Can Do to Improve Data Quality. [online] Napperville: FirstEigen" }, { "paperId": "28019620225469236f764fa91f326582eff57c10", "title": "The Impact of Quality Management on Business Performance of Manufacturing Firms: The Moderated Effect of Industry 4" }, { "paperId": "1773a51be5142a516a7622a5f8672bd7310949fc", "title": "Addressing Key Challenges to Making Enterprise Blockchain Applications a Reality" }, { "paperId": "3732d243e534fed43de5065d39bfcc4a8d6de3bd", "title": "The impact of enterprise systems on corporate performance: A study of ERP, SCM, and CRM system implementations" }, { "paperId": null, "title": "12 Things You Can Do to Improve Data Quality. [online] Napperville: FirstEigen. Available at: <firsteigen.com/blog/12-things-you-cando-to-improve-data-quality/> [Accessed" }, { "paperId": null, "title": "VeriTX to use blockchain to transform aerospace 3D printing supply chain. 3Dprint.com" }, { "paperId": null, "title": "There has been no reaction in stock prices after the company’s announcement regarding blockchain implementation" } ]
13,152
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Mathematics", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/0293ee9829d6cd3d9a774e7486b068205ca1179e
[ "Computer Science", "Mathematics" ]
0.866338
Attack on the Edon-K Key Encapsulation Mechanism
0293ee9829d6cd3d9a774e7486b068205ca1179e
International Symposium on Information Theory
[ { "authorId": "40323363", "name": "Matthieu Lequesne" }, { "authorId": "1764813", "name": "J. Tillich" } ]
{ "alternate_issns": null, "alternate_names": [ "International Symposium on Information Technology", "Int Symp Inf Theory", "Int Symp Inf Technol", "ISIT" ], "alternate_urls": null, "id": "234ccdc0-f58f-4f94-b86a-428d11a0c5ad", "issn": null, "name": "International Symposium on Information Theory", "type": "conference", "url": "http://www.wikicfp.com/cfp/program?id=1719" }
The key encapsulation mechanism $\text{EDON}-\mathcal{K}$ was proposed in response to the call for post-quantum cryptography standardization issued by the National Institute of Standards and Technologies (NIST). This scheme is inspired by the McEliece scheme but uses another family of codes defined over $\mathbb{F}_{2^{128}}$ instead of $\mathbb{F}_{2}$ and is not based on the Hamming metric. It allows significantly shorter public keys than the McEliece scheme. In this paper, we give a polynomial time algorithm that recovers the encapsulated secret. This attack makes the scheme insecure for the intended use. We obtain this result by observing that recovering the error in the McEliece scheme corresponding to $\text{EDON}-\mathcal{K}$ can be viewed as a decoding problem for the rank-metric. We show that the code used in $\text{EDON}-\mathcal{K}$ is in fact a super-code of a Low Rank Parity Check (LRPC) code of very small rank (1 or 2). A suitable parity-check matrix for the super-code of such low rank can be easily derived from for the public key. We then use this parity-check matrix in a decoding algorithm that was devised for LRPC codes to recover the error. Finally we explain how we decapsulate the secret once we have found the error.
## Attack on the EDON-K Key Encapsulation Mechanism

##### Matthieu Lequesne
Sorbonne Université, UPMC Univ Paris 06; Inria, Team SECRET, 2 rue Simone Iff, CS 42112, 75589 Paris Cedex 12, France. Email: matthieu.lequesne@inria.fr

##### Jean-Pierre Tillich
Inria, Team SECRET, 2 rue Simone Iff, CS 42112, 75589 Paris Cedex 12, France. Email: jean-pierre.tillich@inria.fr

Abstract—The key encapsulation mechanism EDON-K was proposed in response to the call for post-quantum cryptography standardization issued by the National Institute of Standards and Technology (NIST). This scheme is inspired by the McEliece scheme but uses another family of codes defined over $\mathbb{F}_{2^{128}}$ instead of $\mathbb{F}_2$ and is not based on the Hamming metric. It allows significantly shorter public keys than the McEliece scheme. In this paper, we give a polynomial time algorithm that recovers the encapsulated secret. This attack makes the scheme insecure for the intended use. We obtain this result by observing that recovering the error in the McEliece scheme corresponding to EDON-K can be viewed as a decoding problem for the rank metric. We show that the code used in EDON-K is in fact a super-code of a Low Rank Parity Check (LRPC) code of very small rank (1 or 2). A suitable parity-check matrix for the super-code of such low rank can be easily derived from the public key. We then use this parity-check matrix in a decoding algorithm that was devised for LRPC codes to recover the error. Finally we explain how we decapsulate the secret once we have found the error.

I. INTRODUCTION

The syndrome decoding problem is a fundamental problem in complexity theory, since the original paper of Berlekamp, McEliece and van Tilborg [BMvT78] proving its NP-completeness for the Hamming distance. The same year, McEliece proposed a public-key cryptosystem based on this problem [McE78] and instantiated it with binary Goppa codes. This scheme was for a long time considered inferior to RSA due to its large key size. However, this situation changed drastically when it became apparent in [Sho94] that RSA and actually all the other public-key cryptosystems used in practice could be attacked in polynomial time by a quantum computer. There are now small prototypes of such computers that lead one to think that they will become a reality in the future, and in 2016 the National Institute of Standards and Technology (NIST) announced a call for standardization of cryptosystems that would be safe against an adversary equipped with a quantum computer. Four families of cryptosystems are often mentioned as potential candidates: cryptosystems based on error correcting codes, lattices, hash functions and multivariate quadratic equations [BBD09]. All of these are based on mathematical problems that are expected to remain hard even in the presence of a quantum computer.

The key encapsulation mechanism EDON-K [GG17] was proposed by Gligoroski and Gjøsteen in response to the call issued by the NIST. This scheme is inspired by the McEliece scheme but uses another family of codes defined over $\mathbb{F}_{2^{128}}$ instead of $\mathbb{F}_2$. This choice leads to very short keys for a code-based scheme. The metric used for the decoding is not properly defined and the security relies on an ad-hoc problem named the finite field vector subset ratio problem, supposedly hard on average. In this paper, we show that the metric used for EDON-K is in fact equivalent to the well-known rank metric. This metric was first introduced in 1951 as an "arithmetic distance" between matrices over a field $\mathbb{F}_q$ [Hua51].
The notion of rank distance and rank codes over matrices was defined in 1978 by Delsarte [Del78]. He introduced a code family, named maximum rank distance (MRD) codes, that attains the analogue of the MDS (maximum distance separable) bound for the rank metric. Gabidulin suggested in [Gab85] to consider a subfamily of such codes that are linear over an extension field $\mathbb{F}_{q^m}$. This provides a vectorial representation of these codes and allows them to be represented in a much more compact way, which is the main reason why rank-metric-based McEliece schemes achieve significantly smaller key sizes. Moreover, this vectorial representation allows one to view the known families of MRD codes as rank metric analogues of Reed-Solomon codes and to obtain an efficient decoding algorithm for them [Gab85]. There are also rank metric analogues of other families of codes. For instance, the Low Rank Parity-Check (LRPC) codes introduced in [GMRZ13] can be considered as analogues of Low Density Parity-Check (LDPC) codes. Just like their binary cousins, they enjoy an efficient decoding algorithm that is based on a low rank parity-check matrix of such a code. Here, we prove that the code used in EDON-K is actually a super-code of an LRPC code of rank 2. What is more, this LRPC code is itself a subspace of codimension 1 of another LRPC code of rank 1. It turns out that parity-check matrices of rank 2 for the first super-code and rank 1 for the second one can easily be derived from the public key. In both cases, this allows us to decode the ciphertext without the secret key. This gives a way to recover the encapsulated secret and breaks the EDON-K system completely.

The paper is organized as follows. First, we recall some basic definitions and properties of the rank metric and LRPC codes in Section II. In Section III we present the EDON-K scheme. Then we explain the general idea of our attack in Section IV. In Section V, we detail how we reconstruct a parity-check matrix of the code and in Section VI how we decode the ciphertext. In Section VII, we explain how we derive the encapsulated secret from the error. Finally, in Section VIII we discuss the cost of this attack and its consequences.

II. RANK METRIC CODES

A. Notation

In the following document, $q$ denotes a power of a prime number. In the case of EDON-K, we will have $q = 2$. $\mathbb{F}_q$ denotes the finite field with $q$ elements and, for any positive integer $m$, $\mathbb{F}_{q^m}$ denotes the finite field with $q^m$ elements. We will sometimes view $\mathbb{F}_{q^m}$ as an $m$-dimensional vector space over $\mathbb{F}_q$. We use bold lowercase and capital letters to denote vectors and matrices respectively. We denote $\langle x_1, \ldots, x_k \rangle_K$ the $K$-vector space generated by the elements $\{x_1, \ldots, x_k\}$.

B. Definitions

Definition 1 (Rank metric over $\mathbb{F}_{q^m}^n$). Let $x = (x_1, \ldots, x_n) \in \mathbb{F}_{q^m}^n$ and $(\beta_1, \ldots, \beta_m)$ be a basis of $\mathbb{F}_{q^m}$ viewed as an $m$-dimensional vector space over $\mathbb{F}_q$. Each coordinate $x_j \in \mathbb{F}_{q^m}$ is associated to a vector of $\mathbb{F}_q^m$ in this basis: $x_j = \sum_{i=1}^{m} m_{i,j} \beta_i$. The $m \times n$ matrix associated to $x$ is given by $M(x) := (m_{i,j})_{1 \le i \le m,\, 1 \le j \le n}$. The rank weight $\mathrm{wt}(x)$ of $x$ is defined as $\mathrm{wt}(x) := \mathrm{Rank}\, M(x)$. The associated distance $d(x, y)$ between elements $x$ and $y$ of $\mathbb{F}_{q^m}^n$ is defined by $d(x, y) := \mathrm{wt}(x - y)$.

Definition 2 (Support of a word). Let $x = (x_1, \ldots, x_n) \in \mathbb{F}_{q^m}^n$. The support of $x$, denoted $\mathrm{Supp}(x)$, is the $\mathbb{F}_q$-subspace of $\mathbb{F}_{q^m}$ generated by the coordinates of $x$: $\mathrm{Supp}(x) := \langle x_1, \ldots, x_n \rangle_{\mathbb{F}_q}$. We have $\dim(\mathrm{Supp}(x)) = \mathrm{wt}(x)$.
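For $q = 2$, Definitions 1 and 2 translate directly into a few lines of code. The sketch below is ours, not from the paper: each coordinate of $x$ is encoded as an $m$-bit integer holding its coefficients over the basis $(\beta_1, \ldots, \beta_m)$, so the coordinates are exactly the columns of $M(x)$, and the rank weight is the $\mathbb{F}_2$-rank of that column set:

```python
def rank_weight(coords: list[int]) -> int:
    """F_2-rank of the span of the given bitmask column vectors,
    i.e. wt(x) = Rank M(x), which also equals dim Supp(x)."""
    pivots = {}                          # leading-bit position -> pivot vector
    for v in coords:
        while v:
            lead = v.bit_length() - 1    # highest set bit of v
            if lead in pivots:
                v ^= pivots[lead]        # eliminate that bit and continue
            else:
                pivots[lead] = v         # v becomes a new pivot
                break
    return len(pivots)

# Example over F_{2^3}: 0b011 XOR 0b101 = 0b110, so x = (3, 5, 6) spans a
# 2-dimensional F_2-subspace and has rank weight 2.
assert rank_weight([0b011, 0b101, 0b110]) == 2
```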
Definition 3 ($\mathbb{F}_{q^m}$-linear code). An $\mathbb{F}_{q^m}$-linear code $\mathcal{C}$ of dimension $k$ and length $n$ is a subspace of dimension $k$ of $\mathbb{F}_{q^m}^n$. $\mathcal{C}$ can be represented in two equivalent ways: by a generator matrix $G \in \mathbb{F}_{q^m}^{k \times n}$ such that $\mathcal{C} = \{xG \mid x \in \mathbb{F}_{q^m}^k\}$, and by a parity-check matrix $H \in \mathbb{F}_{q^m}^{(n-k) \times n}$ such that $\mathcal{C} = \{x \in \mathbb{F}_{q^m}^n \mid Hx^\intercal = 0_{n-k}\}$.

The decoding problem in the rank metric can be described as follows.

Problem 1 (Decoding problem for the rank metric). Let $\mathcal{C}$ be an $\mathbb{F}_{q^m}$-linear code of dimension $k$ and length $n$. Given $y = c + e$ where $c \in \mathcal{C}$ and $e \in \mathbb{F}_{q^m}^n$ is of rank weight $\le r$, find $c$ and $e$.

C. LRPC codes

Definition 4 (LRPC code). A Low Rank Parity Check (LRPC) code of rank $d$, length $n$ and dimension $k$ over $\mathbb{F}_{q^m}$ is a code that admits a parity-check matrix $H = (h_{i,j}) \in \mathbb{F}_{q^m}^{(n-k) \times n}$ such that the vector space of $\mathbb{F}_{q^m}$ generated by its coefficients $h_{i,j}$ has dimension at most $d$.

LRPC codes can be viewed as analogues of LDPC codes for the rank metric. In particular, they enjoy an efficient decoding algorithm based on their low rank parity-check matrix. Roughly speaking, Algorithm 1 of [GMRZ13] decodes up to $r$ errors when $rd \le n - k$ in polynomial time (see [GMRZ13, Theorem 1]). It uses in a crucial way the notion of the linear span of a product of subspaces of $\mathbb{F}_{q^m}$.

Definition 5. Let $U$ and $V$ be two $\mathbb{F}_q$-subspaces of $\mathbb{F}_{q^m}$. We denote by $U \cdot V$ the linear span of the product of $U$ and $V$: $U \cdot V := \langle uv : u \in U, v \in V \rangle_{\mathbb{F}_q}$.

III. THE EDON-K KEM

EDON-K [GG17] is a key encapsulation mechanism proposed by Gligoroski and Gjøsteen for the NIST post-quantum cryptography call. Here we describe the key generation, encapsulation and decapsulation, omitting some details that are not relevant for the attack. We refer to [GG17] for the full description.

A. Parameters and notations

The parameters for EDON-K are given in Table I. In this paper we often refer to the parameters of edonk128ref, the reference version proposed for 128 security bits.

TABLE I – PARAMETERS PROPOSED FOR EDON-K

| Name | m | N | K | R | ν | L |
|---|---|---|---|---|---|---|
| edonk128ref | 128 | 144 | 16 | 40 | 8 | 6 |
| edonk128K16N80nu8L6 | 128 | 80 | 16 | 40 | 8 | 6 |
| edonk128K08N72nu8L8 | 128 | 72 | 8 | 40 | 8 | 8 |
| edonk128K32N96nu4L4 | 128 | 96 | 32 | 40 | 4 | 4 |
| edonk128K16N80nu4L6 | 128 | 80 | 16 | 40 | 4 | 6 |
| edonk192ref | 192 | 112 | 16 | 40 | 8 | 8 |
| edonk192K48N144nu4L4 | 192 | 144 | 48 | 40 | 4 | 4 |
| edonk192K32N128nu4L6 | 192 | 128 | 32 | 40 | 4 | 6 |
| edonk192K16N112nu4L8 | 192 | 112 | 16 | 40 | 4 | 8 |

The scheme makes use of a hash function $\mathcal{H}(\cdot)$ corresponding to standard SHA-2 functions (SHA-256 or SHA-384 depending on the parameters). We will denote $\mathcal{H}^i(\cdot) := \mathcal{H}(\cdots \mathcal{H}(\cdot))$, the $i$-fold iteration of $\mathcal{H}$.

Given a binary matrix $P = (p_{i,j})$ and two non-zero elements $a \neq b$ of $\mathbb{F}_{2^m}$, $P_{a,b} = (\tilde{p}_{i,j})$ denotes the matrix of the same size with coefficients in $\mathbb{F}_{2^m}$ where $\tilde{p}_{i,j} = a$ if $p_{i,j} = 0$ and $\tilde{p}_{i,j} = b$ if $p_{i,j} = 1$.
Encapsulation Given the PublicKey and the public parameters. $ K - m ← F2[m][.] - ˜e ∈ F[L]2[m][ generated as follows:] $ – (˜e0, ˜e1) ← F2m; – for 1 ≤ i ≤ [L]2 [−] [1][,][ (˜][e][2][i][,][ ˜][e][2][i][+1][) =][ H][ (˜][e][2][i][−][2][||][e][˜][2][i][−][1][)][.] - Ve := Support(˜e). $ N - e ←Ve [.] - c := mGpub + e. - (s0, s1) := H (˜eL−2||e˜L−1). - SharedSecret := H (s0||s1||H (c)). - h := H (s1||so||H (c)). - Ciphertext := (c, h). - Return (Ciphertext, SharedSecret). D. Decapsulation Given Ciphertext, SecretKey and the public parameters. - Recover e by decoding the c using the private matrix H[′] := HPa,b⊺. - Deduce Ve the vector space spaned by the coefficients of the vector e. - For all (λ, ν) ∈Ve × Ve, for 1 ≤ i ≤ [L]2 [−] [1][:] – (s[′]0[, s][′]1[) :=][ H][i][ (][λ][||][µ][||H][ (][c][))][;] – if H (s[′]1[||][s][′]0[||][c][) =][ h][:] Return SharedSecret := H (s[′]0[||][s][′]1[||H][ (][c][))][.] IV. OUTLINE OF THE ATTACK ON EDON-K Our attack is based on three observations - The ciphertext is a vector c such that c = mGpub + e. (3) This error e is of low rank, since its rank is at most L. - This code Cpub generated by Gpub is a subcode of an LRPC code, namely the code C[′] with parity-check matrix H[′] := HPa,b⊺. This code is indeed an LRPC code of rank 2 since all the entries of H[′] belong to ⟨a, b⟩F2. We have Cpub ⊂C[′] (4) since GpubH[′][⊺] = GPc,d⊺(HPa,b⊺)[⊺] = GPc,d⊺Pa,bH⊺ = GH[⊺] (from (1)) = 0K×R (from (2)). This equation also appears as Corollary 1 of [GG17, p.19]. We have given its proof here for the convenience of the reader. Let K [′] = N − R be the dimension of C[′]. - If we recover a parity-check matrix of rank 2 for C[′] we will be able to recover mGpub and e from c. Indeed, mGpub ∈C[′] and we can decode C[′] using a variation of Algorithm 1 of [GMRZ13] and the knowledge of the parity-check matrix, provided wt(e) ≤ L < (N − K [′])/2 = R/2 is verified, which is the case for the parameters of EDON-K. Hence we will proceed in three steps: 1) constructing and solving a linear system of equations to find a parity-check matrix for the code C[′] (detailed in Section V); 2) decoding the ciphertext using a slight variation of Algorithm 1 of [GMRZ13] (see Section VI); 3) recovering the secret from the error vector (explained in Section VII). V. RECONSTRUCTING THE PARITY-CHECK MATRIX A. Compressed public key In order to reduce the public key size, the designers of EDON-K chose to represent the public key in a compressed form. They took advantage of the fact that all the coefficients of Gpub live in the vector space Vg,c,d := ⟨cg˜1, . . ., cg˜ν, dg˜1, . . ., dg˜ν⟩F2 of dimension 2ν. Hence, the compressed public key consists in two parts: first the basis g˜c,d := (cg˜1, . . ., cg˜ν, dg˜1, . . ., dg˜ν) ∈ F[2]2[m][ν] [ of the vector-] space Vg,c,d, then the entries of the matrix Gpub such that each entry is represented by its coefficients in the basis ˜gc,d. For example, if an entry x of Gpub is equal to c [�]i[ν]=1 [γ][i][g][˜][i][ +] d i=1 [δ][i][g][˜][i][ with][ γ][i][, δ][i] ∈ F2, x will be represented by [�][ν] (γ1, . . ., γν, δ1, . . ., δν) ∈ F[2]2[ν][. There is another subtlety in] the compression that we will not mention here. ----- B. Finding a basis The attacker does not have access to the value of a and b but can deduce the value of ab[−][1] = cd[−][1] = (cg˜1)(dg˜1)[−][1] from g˜c,d as mentioned in paragraph 7.2.2 of the documentation of EDON-K [GG17]. Let us bring in α := ab[−][1]. We notice that H” := b[−][1]H[′] is also a parity-check matrix of the LRPC code C[′]. 
This matrix has all its coefficients in $\langle 1, \alpha \rangle_{\mathbb{F}_2}$. We use this information to reconstruct such a parity-check matrix of the code $\mathcal{C}'$ by solving a linear system, similarly to what is done in [GRS16, Section IV B]. This system is derived from the following facts: (i) $G_{pub} H''^\intercal = 0_{K \times R}$; (ii) the entries of $H''$ belong to $\langle 1, \alpha \rangle_{\mathbb{F}_2}$. In other words, the possible rows $x = (x_1, \ldots, x_N)$ of $H''$ are the solutions of the following system:

$$\begin{cases} G_{pub}\, x^\intercal = 0_K \\ x_i \in \langle 1, \alpha \rangle_{\mathbb{F}_2} \text{ for all } i \in \{1, \ldots, N\} \end{cases} \qquad (5)$$

This system is obviously linear over $\mathbb{F}_2$ and the solution set is an $\mathbb{F}_2$-linear subspace. A basis of this subspace can then be used as rows for $H''$. We now show that solving this system can be done by solving a linear system over $\mathbb{F}_2$.

C. Recovering $H''$ by solving a linear system over $\mathbb{F}_2$, and an affine system in a more general case

Actually, in this section we will consider a more general version of (5). Given a system

$$A x^\intercal = b^\intercal \qquad (6)$$

where $A = (a_{ij})_{1 \le i \le r,\, 1 \le j \le N}$ is a given matrix in $\mathbb{F}_{2^m}^{r \times N}$ and $b$ is a given vector in $\mathbb{F}_{2^m}^{r}$, and given $V$ a subspace of dimension $t$ of $\mathbb{F}_{2^m}$ (viewed as a vector space over $\mathbb{F}_2$ of dimension $m$), how do we find the affine set of solutions $x = (x_i)_{1 \le i \le N} \in V^N$ of the system? We can rewrite the system (6) as

$$\begin{cases} a_{11}x_1 + \cdots + a_{1N}x_N = b_1 \\ \quad \vdots \\ a_{r1}x_1 + \cdots + a_{rN}x_N = b_r \end{cases} \qquad (7)$$

We introduce a basis $\{v_1, \ldots, v_t\}$ of $V$ and express each unknown $x_j$ in this basis in terms of $t$ other unknowns $x_{j1}, \ldots, x_{jt} \in \mathbb{F}_2$:

$$x_j = \sum_{i=1}^{t} x_{ji} v_i.$$

In other words, the system (6) is equivalent to

$$\begin{cases} \sum_{j=1}^{N} \sum_{i=1}^{t} a_{1j} v_i\, x_{ji} = b_1 \\ \quad \vdots \\ \sum_{j=1}^{N} \sum_{i=1}^{t} a_{rj} v_i\, x_{ji} = b_r \end{cases} \qquad (8)$$

Let $\{\beta_1, \ldots, \beta_m\}$ be an $\mathbb{F}_2$-basis of $\mathbb{F}_{2^m}$; we introduce for $1 \le \ell \le m$ the projection $\pi_\ell$ from $\mathbb{F}_{2^m}$ to $\mathbb{F}_2$ defined by:

$$\pi_\ell : a = \sum_{j=1}^{m} a_j \beta_j \in \mathbb{F}_{2^m} \longmapsto a_\ell \in \mathbb{F}_2. \qquad (9)$$

The $r$ equations of system (8), defined over $\mathbb{F}_{2^m}$, lead to $rm$ affine equations over $\mathbb{F}_2$ by applying $\pi_\ell$ for $\ell \in \{1, \ldots, m\}$:

$$\begin{cases} \sum_{j=1}^{N} \sum_{i=1}^{t} \pi_\ell(a_{1j} v_i)\, x_{ji} = \pi_\ell(b_1) \\ \quad \vdots \\ \sum_{j=1}^{N} \sum_{i=1}^{t} \pi_\ell(a_{rj} v_i)\, x_{ji} = \pi_\ell(b_r) \end{cases} \qquad (10)$$

We can solve this affine system in $\mathbb{F}_2$ to recover the solutions of (6). The system has $rm$ binary equations and $tN$ unknowns, hence a complexity of $O(rmt^2N^2)$. If we apply this technique to (5), where $t = 2$ and $r = K$, we obtain a basis of the solution space in time $O(KmN^2)$.
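A toy version of this linearisation can be sketched in Python (our illustration, not the attack's Sage implementation): field elements are $m$-bit integers, multiplication is carry-less multiplication modulo a chosen irreducible polynomial, and with the standard bit basis $\pi_\ell$ is simply bit extraction. The modulus and sizes below are illustrative ($m = 8$, the AES field), not the EDON-K parameters:

```python
# Toy linearisation of A x^T = b^T with each x_j in V = <v_1, ..., v_t>_{F_2}.
M, MODULUS = 8, 0x11B      # x^8 + x^4 + x^3 + x + 1, irreducible over F_2

def gf_mul(a: int, b: int) -> int:
    """Multiplication in F_{2^M}: carry-less product reduced mod MODULUS."""
    res = 0
    while b:
        if b & 1:
            res ^= a
        a <<= 1
        if a >> M:         # degree overflow: reduce by the modulus
            a ^= MODULUS
        b >>= 1
    return res

def binary_system(A, b, V):
    """Expand the field system (6) into the rm x tN binary system (10).

    A: r x N matrix of field elements, b: length-r vector, V: the basis
    (v_1, ..., v_t).  Returns (coefficient_bits, rhs_bit) rows; unknown
    (j, i) is x_{ji}, the coefficient of v_i in x_j.
    """
    r, N, t = len(A), len(A[0]), len(V)
    rows = []
    for k in range(r):                     # one field equation ...
        for ell in range(M):               # ... gives M binary equations
            coeffs = [(gf_mul(A[k][j], V[i]) >> ell) & 1
                      for j in range(N) for i in range(t)]
            rows.append((coeffs, (b[k] >> ell) & 1))
    return rows
# Gaussian elimination over F_2 on these rows then yields the solution space.
```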
One could think that this is the origin of the attack, and decide to express the public key in its uncompressed form to fix the attack. As a consequence, the public key would be of size K × N × m bits instead of K × N × ν bits in the compressed form. In practice the public key for edonk128ref would be 16 times longer (around 288 kbits). This inflation of the key size could be avoided by sending out a random basis of the space Vg,c,d. However, this is not enough. There is an even more direct way to proceed, without the value of α. Instead of looking for a matrix H[(3)] with entries liyng in ⟨1, α⟩F2, we can use the following result. Proposition 2. There exists a full rank (R − 1) × N binary matrix H[(4)] that satisfies GpubH[(4)][⊺] = 0K×(R−1). Proof. Let T be a binary full-rank matrix (R − 1) × R matrix that has rows of even Hamming weight. For instance we can choose 1 1 0 - · · 0 0 1 1 0 ...  T =   .    ... ... ... ... ...    0 - · · 0 1 1 We observe now that TH has all its entries in {0, a + b}. This follows directly from the fact that if we sum an even number of elements in {a, b} we either get 0 (if the number of a’s is even, and therefore also the number of b’s) or a+b (if the number of a’s is odd). From this, it follows immediately that 1 H[(4)] := a + b [TH] ----- satisties the property. First, it is clear that this is a binary matrix and we also have 1 GpubH[(4)][⊺] = a + b [G][pub][H][⊺][T][⊺] = 0K×(R−1). Obtaining such a matrix H[(4)] is straightforward. We just have to use the algorithm given in Section V to recover a basis of dimension R − 1 of binary vectors x satisfying Gpubx[⊺] = 0K. We then use this matrix H[(4)] to compute the syndrome s = H[(4)]c[⊺]. Since H[(4)]c[⊺] = H[(4)]e[⊺] we directly obtain with very high probability that Support(e) = Support(H[(4)]c[⊺]). This reveals the support of the error and from there we can go directly to the last step of the attack to reconstruct the shared secret. C. Security of the scheme Considering the attack that we described, there is a way to recover the secret of the edonk128ref scheme from a public key without the private key in polynomial time. In practice, the attack implemented with Sage on a personal computer recovers the secret in less than a minute, so the scheme is far from achieving the 128-bits security claimed in [GG17]. Hence this scheme is insecure for the intended use. Moreover, the cost of this attack is polynomial in terms of the parameters, so there is no proper way to increase the parameters to achieve the intended security level while keeping a reasonably small key size. REFERENCES [BBD09] Daniel J. Bernstein, Johannes Buchmann, and Erik Dahmen, editors. Post-Quantum Cryptography. Springer-Verlag, 2009. [BMvT78] Elwyn Berlekamp, Robert McEliece, and Henk van Tilborg. On the inherent intractability of certain coding problems. IEEE Trans. Inform. Theory, 24(3):384–386, May 1978. [Del78] Philippe Delsarte. Bilinear forms over a finite field, with applications to coding theory. J. Comb. Theory, Ser. A, 25(3):226– 241, 1978. [Gab85] Ernest Mukhamedovich Gabidulin. Theory of codes with maximum rank distance. Problemy Peredachi Informatsii, 21(1):3–16, 1985. [GG17] Danilo Gligoroski and Kristian Gjøsteen. Edon-k. first round submission to the NIST post-quantum cryptography call, November 2017. [GMRZ13] Philippe Gaborit, Gaétan Murat, Olivier Ruatta, and Gilles Zémor. Low rank parity check codes and their application to cryptography. 
In Proceedings of the Workshop on Coding and Cryptography WCC'2013, Bergen, Norway, 2013. Available on www.selmer.uib.no/WCC2013/pdfs/Gaborit.pdf.
[GRS16] Philippe Gaborit, Olivier Ruatta, and Julien Schrek. On the complexity of the rank syndrome decoding problem. IEEE Trans. Information Theory, 62(2):1006–1019, 2016.
[Hua51] Loo-Keng Hua. A theorem on matrices over a sfield and its applications. J. Chinese Math. Soc., 1(2):109–163, 1951.
[McE78] Robert J. McEliece. A Public-Key System Based on Algebraic Coding Theory, pages 114–116. Jet Propulsion Lab, 1978. DSN Progress Report 44.
[Sho94] Peter W. Shor. Algorithms for quantum computation: Discrete logarithms and factoring. In S. Goldwasser, editor, FOCS, pages 124–134, 1994.
{ "disclaimer": "Notice: Paper or abstract available at https://arxiv.org/abs/1802.06157, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "GREEN", "url": "https://arxiv.org/pdf/1802.06157" }
2,018
[ "JournalArticle" ]
true
2018-02-16T00:00:00
[ { "paperId": "d791016d78b1054ce6c756a55ac78909ede25fdb", "title": "Low Rank Parity Check codes and their application to cryptography" }, { "paperId": "684cfa5805f671ec1a24dc0b8e529259241693e4", "title": "On the Complexity of the Rank Syndrome Decoding Problem" }, { "paperId": "99291ce0b97a31c786560241fea62604332afbf5", "title": "Post-quantum cryptography" }, { "paperId": "2273d9829cdf7fc9d3be3cbecb961c7a6e4a34ea", "title": "Algorithms for quantum computation: discrete logarithms and factoring" }, { "paperId": "5ea62d87fedfe0374fca3d5852c820e49d98ed7e", "title": "Bilinear Forms over a Finite Field, with Applications to Coding Theory" }, { "paperId": "5e29000d24d5ded11e7a32216a91bdadaa9877f1", "title": "On the inherent intractability of certain coding problems (Corresp.)" }, { "paperId": null, "title": "Attack on the Edon-K Key Encapsulation Mechanism" }, { "paperId": null, "title": "Edon-k. first round sub-mission to the NIST post-quantum cryptography call" }, { "paperId": null, "title": "Theory of codes with maximum rank distance" }, { "paperId": null, "title": "A Public-Key System Based on Algebraic Coding Theory, pages 114–116" }, { "paperId": "50508720c488f87598f832874906ea510d42f7da", "title": "A THEOREM ON MATRICES OVER A SFIELD AND ITS APPLICATIONS" }, { "paperId": null, "title": "constructing and solving a linear system of equations to find a parity-check matrix for the code C ′" }, { "paperId": null, "title": "decoding the ciphertext using a slight variation of Algorithm 1 of [GMRZ13] (see Section VI)" } ]
8,887
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/0294a22ca84a83792d5deedf651d2ddfca7d3a79
[ "Computer Science" ]
0.88685
Optimized information discovery using self-adapting indices over Distributed Hash Tables
0294a22ca84a83792d5deedf651d2ddfca7d3a79
IEEE International Performance, Computing, and Communications Conference
[ { "authorId": "153347544", "name": "F. Memon" }, { "authorId": "1770669", "name": "Daniel Tiebler" }, { "authorId": "145046960", "name": "Frank Dürr" }, { "authorId": "1700118", "name": "K. Rothermel" } ]
{ "alternate_issns": null, "alternate_names": [ "International Phoenix Conference on Computers and Communications", "IPCCC", "Int Phoenix Conf Comput Commun", "International Performance, Computing, and Communications Conference", "Int Perform Comput Commun Conf", "IEEE Int Perform Comput Commun Conf" ], "alternate_urls": null, "id": "8f125553-3fd5-4370-a175-f7179db29048", "issn": null, "name": "IEEE International Performance, Computing, and Communications Conference", "type": "conference", "url": "http://www.ipccc.org/" }
null
# Optimized Information Discovery using Self-adapting Indices over Distributed Hash Tables

## Faraz Memon, Daniel Tiebler, Frank Dürr, Kurt Rothermel

IPVS – Distributed Systems Department, Universität Stuttgart
Universitätsstraße 38, 70569 Stuttgart, Germany
Email: {faraz.memon, tiebledl, frank.duerr, kurt.rothermel}@ipvs.uni-stuttgart.de

## Abstract

_Distributed Hash Table (DHT)-based peer-to-peer information discovery systems have emerged as highly scalable systems for information storage and discovery in massively distributed networks. Originally, DHTs supported only point queries. However, recently they have been extended to support more complex queries, such as multi-attribute range (MAR) queries. Generally, the support for MAR queries over DHTs has been provided either by creating an individual index for each data attribute or by creating a single index using the combination of all data attributes. In contrast to these approaches, we propose to create and modify indices using the attribute combinations that dynamically appear in MAR queries in the system._

_In this paper, we present an adaptive information discovery system that adapts the set of indices according to the dynamic set of MAR queries in the system. The main contribution of this paper is a four-phase index adaptation process. Our evaluations show that the adaptive information discovery system continuously optimizes the overall system performance for MAR queries. Moreover, compared to a non-adaptive system, our system achieves several orders of magnitude improved performance._

## 1. Introduction

During the past decade, DHTs have led the way for distributed, scalable and fault-tolerant information discovery systems. DHTs have been extended from their original form, where they supported only point queries, to meet the modern application demand of supporting multi-attribute range (MAR) queries. Queries such as "find all computers with RAM from 2 to 6 GB and CPU speed from 1.0 to 4.0 GHz" or "find all restaurants open from 10 to 11 PM and with seating capacity for 8 to 10 people" are typical examples of MAR queries.

DHTs have been extended using three different indexing approaches to provide the support for MAR queries. The first approach maps the value ranges of individual data attributes to a network of peers [4, 5, 19, 21]. A MAR query is resolved by dividing the query into multiple single-attribute range queries and then by joining the results at the query initiator. The second approach indexes the combination of all data attributes [6, 10, 18]. Data attributes that are not included in a MAR query are considered to be wild-cards in this approach. The third type of approach, employed by our Optimized Information Discovery (OID) system [16], indexes several attribute combinations, with each combination different from the others. A MAR query is resolved by selecting the most efficient index for performing the query.

Although the third type of indexing approach outperforms the other two approaches in terms of individual query efficiency [16], the overall system performance still depends on the attribute combinations used for defining each index. The efficiency of the overall system increases with an increasing number of queries being able to find a closely matching index, in terms of the used attribute combination.
In [15], we presented a tool that assists the designer of a distributed application in defining a useful set of indices for the third type of DHT indexing approach. Given a limit for the maximum number of indices and a representative set of MAR queries (workload), our tool recommends a set of indices that produces close-to-optimal system performance for the workload within the given limit. The index recommendation tool is an offline tool, i.e., it is assumed that the workload provided to the tool is somehow collected from an already existing DHT-based information discovery system. Further, it is assumed that the recommended set of indices is manually installed over the DHT by the designer of the distributed application. In order to carry out such an installation, the information discovery system would have to be taken offline, which is highly undesirable for large-scale peer-to-peer (P2P) systems.

In this paper, we relax these assumptions to present an adaptive OID system. The adaptive OID system performs the task of index recommendation and index installation online, eliminating the need for manually updating the set of indices. The main contribution of this paper is the index adaptation process. The index adaptation process in a DHT, including online index recommendation and index installation, is carried out in four phases. During the first phase, a workload of MAR queries is collected from several peers in the network using uniform random sampling. The second phase involves execution of the index recommendation tool to determine an optimal set of indices for the collected workload. During the third phase, the cost and the benefit of installing the recommended set of indices are calculated. If it is beneficial to install the recommended set of indices, the installation is carried out during the fourth phase.

The rest of the paper is organized as follows: in Section 2 we give an overview of the related work, the architecture of the adaptive OID system is discussed in Section 3, in Section 4 we describe the index adaptation process in detail, evaluation results are presented in Section 5, and finally we conclude the paper with an overview of our future work in Section 6.

## 2. Related Work

A number of adaptive P2P information discovery systems have been proposed in the past. In this section, we discuss some of them in relation to our system.

### 2.1. Unstructured P2P Systems

Several unstructured P2P information discovery systems have been suggested that improve the efficiency of future queries based on the past query workload in the system [3, 13, 14, 17]. The major difference between these systems and structured P2P systems such as ours is that each peer in these systems tries to optimize the performance of queries individually by modifying its local data index. This does not necessarily lead to the optimization of overall system performance. Moreover, given a query, these systems perform only a best-effort search in the network, i.e., not all matching data objects are always retrieved.

### 2.2. Structured P2P Systems

In order to improve the search efficiency of queries in structured P2P information discovery systems, several DHT extensions have been proposed [7, 8, 20]. Deng et al. [7] introduce learning-aware blind search for range queries in DHTs. Each peer in their system stores information about previously retrieved results from each link of the DHT using a local index structure.
Queries are forwarded to regions of the DHT that had previously returned the highest number of results. Unlike our system, their system performs only best-effort search, since each peer tries to optimize the query performance individually.

Skobeltsyn et al. [20] present a system that stores the results of frequently issued queries at certain peers in the DHT. The choice of queries whose results are cached is based on the dynamic workload of queries in the system. A query is resolved first by looking up the results in the local cache. If no results are found, the peer tries to find a neighboring cache with results. If still no results are found, the query is sent to all peers using broadcast. In our system, queries are never resolved using broadcast, since it is highly unscalable to resolve queries in such a manner. Instead, we optimize indices for efficient query processing.

The HiPPIS system [8] indexes the data in a DHT using hierarchical indices. Each peer in the system logs each query that it issues. If the granularity of a queried attribute changes locally at a peer, e.g., more queries contain the "city" attribute instead of the "state" attribute, the peer checks if the index has to be adapted accordingly. The peer performs the adaptation check by asking every peer in the system for the query statistics on the attribute using flooding. If adaptation is needed, the peer locks all the peers in the system by flooding a lock message. During this period, queries are also answered using flooding. Finally, the adaptation message is sent to all peers in the system using flooding as well. Unlike the HiPPIS system, our system has a flooding-free, scalable index adaptation process.

## 3. System Architecture

The adaptive OID system has a layered architecture (see Fig. 1(a)). The top layer consists of distributed applications that require support for MAR queries. The bottom layer is the DHT layer that provides the service for looking up a key, broadcasting a message, and aggregating a value. The middle layer, known as the OID framework layer, consists of four major components: data index space, data placement controller, query engine, and adaptation engine. The data index space of the OID framework layer consists of several indices. The data placement controller uses these indices to route each data object to the peer responsible for hosting it. The query engine is responsible for distributed query resolution, while the adaptation engine participates in the index adaptation process.

Each data index in the OID framework layer is a Hilbert Space-filling curve (SFC) [11]. Due to the locality-preserving properties of the Hilbert SFC, data objects that are close in a multi-dimensional attribute space tend to be mapped to sets of neighboring peers in a DHT. This enables efficient processing of MAR queries. A Hilbert SFC is defined as:

**Definition 1** A continuous function h : (a_1, a_2, …, a_d) ↦ x ∈ ℕ, where (a_1, a_2, …, a_d) is a point in a d-dimensional Euclidean space and ℕ is the set of natural numbers.
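For illustration, Definition 1 can be instantiated at a fixed resolution (the curve order k introduced below) with the standard rotate-and-reflect construction. The following minimal d = 2 sketch computes the position of a grid point along a k-th order curve; note that the zone numbering depends on the curve's orientation, so it need not reproduce the exact identifiers shown in Fig. 1(c).

```python
def hilbert_index(k: int, x: int, y: int) -> int:
    """Definition 1 for d = 2: position of grid point (x, y) on a
    k-th order Hilbert curve over a 2^k x 2^k grid (iterative
    rotate-and-reflect construction)."""
    n = 1 << k
    d = 0
    s = n >> 1                       # half the side of the current sub-square
    while s > 0:
        rx = 1 if x & s else 0
        ry = 1 if y & s else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:                  # rotate/reflect into canonical orientation
            if rx == 1:
                x, y = n - 1 - x, n - 1 - y
            x, y = y, x
        s >>= 1
    return d

# A 3rd-order curve visits all 2^(3*2) = 64 zones exactly once:
zones = {hilbert_index(3, x, y) for x in range(8) for y in range(8)}
assert zones == set(range(64))
```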
We define following three types of peer roles to carry out the index adaptation process: **Adaptation Peer – An adaptation peer is a peer that period-** ically initiates the index adaptation process. The length of a period is set by the designer of the distributed application. In order to avoid conflicting index updates, there can only be a single adaptation peer at a time in the network. We assume that the location of the adaptation peer is pre-selected by the designer of the distributed application. This could be done by deciding that the peer that is the successors of a certain key would be the adaptation peer in the network. If a new peer joins at the location of the adaptation peer, the state of the adaptation peer is transferred to it, making it the new adaptation peer. Moreover, if the adaptation peer fails during the first three phases of the adaptation process, the process is restarted by the new adaptation peer. We assume a correctly functioning DHT where any peer that fails or leaves the network is automatically replaced. **Monitoring Peer – Each peer in our system is a monitoring** peer. Monitoring peers are involved in the local collection of the query workload, i.e., each monitoring peer logs each query that it resolves. This log is emptied when a new set of indices is installed in the data index space of the peer. **Sampling Peer – A sampling peer is a peer that is involved** in distributed workload collection discussed in the next section. Any peer can take the role of a sampling peer. ### 4.1. Distributed Workload Collection Distributed Hash Table (a) 1024 5 6 9 10 512 1.0 1.5 2.0 2.5 3.0 (c) |0|Col2|1|Col4|14|Col6|15|Col8| |---|---|---|---|---|---|---|---| ||||||||| |3||2||13||12|| |4||7||8|||11| ||||||||| |5||||9|||| **Figure 1. System Architecture** A Hilbert SFC divides a d-dimensional euclidean space into 2[k][·][d] cubes, called zones. A line then passes through all zones defining an order among them. The result is a k[th] order SFC, where k, known as the approximation level, defines the granularity of the space sub-division. Figure 1(b)(c) show a 2[nd] and a 3[rd] order Hilbert SFC respectively. A data object in our system is indexed using each SFC defined in the data index space of the OID framework layer. If the SFC shown in Fig. 1(c) is one such index, then a data object defined as (CPU Speed = 2.7 GHz, Mem Size = 1792 MB) would receive an identifier 12 from this index. After a data object receives an identifier from each SFCbased index, a copy of the data object is routed to the DHT peers responsible for the object identifiers. For a detailed description of the data indexing process, see [16]. A MAR query is resolved in two steps. First, the query is mapped to each SFC defined in the data index space. For example, a MAR query defined as “(CPU Speed >= 1.3 _GHz)_ (CPU Speed <= 2.3 GHz) (Mem Size >= _∧_ _∧_ 640 MB) (Mem Size <= 2304 MB)” can be mapped _∧_ to 11 zones on the SFC shown in Fig. 1(c). In the second step, the query is routed to the peers responsible for the zone identifiers obtained using the least expensive index [16]. ## 4. Index Adaptation The goal of the index adaptation process is to update the set of indices in the OID framework layer of each peer according to the dynamic workload of MAR queries in the system. In order to achieve this goal, we introduce a fourphase index adaptation process that is periodically executed in the system. 
## 4. Index Adaptation

The goal of the index adaptation process is to update the set of indices in the OID framework layer of each peer according to the dynamic workload of MAR queries in the system. In order to achieve this goal, we introduce a four-phase index adaptation process that is periodically executed in the system. The four phases of the index adaptation process are: distributed workload collection, index recommendation, adaptation decision, and index installation.

We define the following three types of peer roles to carry out the index adaptation process:

**Adaptation Peer** – An adaptation peer is a peer that periodically initiates the index adaptation process. The length of a period is set by the designer of the distributed application. In order to avoid conflicting index updates, there can only be a single adaptation peer at a time in the network. We assume that the location of the adaptation peer is pre-selected by the designer of the distributed application. This could be done by deciding that the peer that is the successor of a certain key would be the adaptation peer in the network. If a new peer joins at the location of the adaptation peer, the state of the adaptation peer is transferred to it, making it the new adaptation peer. Moreover, if the adaptation peer fails during the first three phases of the adaptation process, the process is restarted by the new adaptation peer. We assume a correctly functioning DHT where any peer that fails or leaves the network is automatically replaced.

**Monitoring Peer** – Each peer in our system is a monitoring peer. Monitoring peers are involved in the local collection of the query workload, i.e., each monitoring peer logs each query that it resolves. This log is emptied when a new set of indices is installed in the data index space of the peer.

**Sampling Peer** – A sampling peer is a peer that is involved in the distributed workload collection discussed in the next section. Any peer can take the role of a sampling peer.

### 4.1. Distributed Workload Collection

Ideally, if the complete set of past queries were collected from all peers in the network, an optimal set of indices could be obtained. However, collecting queries from all peers is neither efficient nor scalable. Therefore, the goal of distributed workload collection is to collect a subset of the complete set of queries by sampling some random peers. The idea is to sample a sufficiently large subset of peers at different locations in the network to get an approximation of the complete set of queries.

The adaptation peer could directly collect a workload of MAR queries by randomly sampling some monitoring peers in the network. However, in this case, the adaptation peer would have to issue a large number of sampling requests and handle a large number of sampling responses, making the sampling process unscalable. Therefore, in order to limit the fanout of the adaptation peer and to make the sampling process scalable, we use a two-level sampling process. The adaptation peer initiates the first level of the sampling process by generating β random keys from the identifier space of the DHT, i.e., [0, 2^m), where m is the number of identifier bits. A DHT lookup is then performed for each random key in order to identify the peer responsible for it. Here, we assume a basic DHT lookup functionality that, given a key, returns the identity of the peer responsible for the key. Once the identity of a random peer is learned, a sampling request with parameter γ is sent to it, where γ indicates the number of peers to be sampled at the second level of the sampling process.

Upon receiving a sampling request from the adaptation peer, a peer assumes the role of a sampling peer. The sampling peer then forwards the sampling request to γ random monitoring peers in the same manner as the adaptation peer. After receiving a sampling request from a sampling peer, a monitoring peer responds with the local query workload. A sampling peer accumulates all the workloads received from γ random monitoring peers into a single workload. Since the same query could have been resolved by several monitoring peers, it could appear multiple times in the accumulated workload. Therefore, duplicates are eliminated during the accumulation process. Note that the same query issued twice is not considered as a duplicate query, since each query has a globally unique identifier. Finally, the accumulated workload, including the workload of the sampling peer, is sent to the adaptation peer, where the accumulation process is repeated.

In order to detect the failures of the monitoring or the sampling peers, the process of distributed workload collection includes timeouts at each level. At the level of a sampling peer, if a response is not received from a monitoring peer before the timeout, the sampling request is re-issued, assuming that the faulty monitoring peer has been replaced by the DHT. Similarly, at the level of the adaptation peer, if a response is not received from a sampling peer before the timeout, the sampling request is re-issued.

The distributed workload collection phase requires O((β · γ) · (log₂N + 2)) messages in the worst case to collect a workload of MAR queries. N is the total number of peers in the network, and log₂N is the maximum number of messages required for a DHT lookup. Two additional messages are needed to send a sampling request to a peer and receive a sampling response from it.
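A compact, in-process simulation of this two-level sampling follows. DHT lookups of random keys are approximated by uniform random choices over the peer identifiers, and the timeout/re-issue handling of the real protocol is omitted.

```python
import random

def collect_workload(peers, beta, gamma):
    """Two-level workload sampling (Sec. 4.1), simulated in-process.
    `peers` maps a peer id to its local query log, a list of
    (query_id, query) pairs."""
    ids = list(peers)
    workload = {}                               # query_id -> query (dedup)
    for _ in range(beta):                       # level 1: beta sampling peers
        sampler = random.choice(ids)
        monitored = {random.choice(ids) for _ in range(gamma)}
        for peer in monitored | {sampler}:      # sampler's own log is included
            for qid, query in peers[peer]:
                workload[qid] = query           # duplicates collapse on the id
    return list(workload.values())

# Example: 1000 peers, each having logged a few queries; beta = 33 and
# gamma = 2 sample roughly 10% of the network, as in Table 2 below.
peers = {p: [((p, i), f"query-{p}-{i}") for i in range(3)] for p in range(1000)}
sampled = collect_workload(peers, beta=33, gamma=2)
```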
### 4.2. Index Recommendation

Once a workload of MAR queries has been collected at the adaptation peer, the next step in the adaptation process is to search for an optimal set of indices for the collected workload. For this purpose, we utilize the index recommendation tool previously introduced by us. Given a workload of MAR queries and a limit o for the maximum number of indices, the index recommendation tool recommends a close-to-optimal set of indices Ir for the given workload. For a detailed description of the index recommendation tool and the index recommendation algorithms, see [15].

**Figure 2. Adaptation Decision** (a timeline t_{i−j}, …, t_{i−3}, t_{i−2}, t_{i−1}, t_i, t_{i+1}, t_{i+2}, t_{i+3}, …, t_{i+j}, with the past window T_{i,i−j} carrying the workload W_{i−j}, and the future window T_{i,i+j}.)

### 4.3. Adaptation Decision

After obtaining a recommended set of indices from the index recommendation tool, a naïve approach would be to directly install this set of indices in the network. However, it is possible that the cost of installing the recommended set of indices outweighs the benefit of installing it. Therefore, the goal of the adaptation decision phase is to determine whether the installation of the recommended set of indices is beneficial or not. This is done by comparing the estimated cost of the workload over the current set of indices with the estimated cost of the workload over the recommended set of indices. The installation cost of the recommended set of indices is also taken into account.

Let t_i mark the current periodic execution of the index adaptation process, Ic be the current set of indices, and Ir be the recommended set of indices. Then, we define the following quantities in our system (see Fig. 2):

**T_{i,i−j}** – Time interval between t_i and t_{i−j} for all j ∈ ℕ⁺, where t_{i−j} marks the index adaptation process during which Ic was installed. Note that this time interval is dynamic, since a new set of indices is not installed during each periodic execution of the index adaptation process.

**W_{i−j}** – Complete set of MAR queries during the time interval T_{i,i−j}.

**SW_{i−j}** – Sampled workload, from the complete set of MAR queries during the time interval T_{i,i−j}.

**cost_in** – Estimated cost of installing the recommended set of indices Ir.

The adaptation peer considers the installation of the recommended set Ir beneficial if the following condition holds:

cost(SW_{i−j}, Ic) > cost(SW_{i−j}, Ir) + cost_in   (1)

i.e., if the cost of the sampled workload SW_{i−j} over the current set of indices Ic is greater than the cost of the same workload over the recommended set of indices Ir, plus the installation cost of the recommended set of indices. The assumption behind Condition 1 is that the complete set of MAR queries W_{i−j} would be repeated for a similar interval of time in the future, i.e., for T_{i,i+j}. This is the most general assumption for predicting the cost of future queries. If Condition 1 is satisfied, the next phase of index adaptation is carried out. Otherwise, the index adaptation process is halted until the next periodic execution.
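As a sketch, the decision itself reduces to a direct comparison; the cost and installation-cost functions passed in are made precise by Equations (2)–(4) below.

```python
def should_install(SW, I_c, I_r, cost, cost_in):
    """Condition (1): install I_r only if the sampled workload SW is
    cheaper on the recommended indices even after paying the
    installation cost. `cost(W, I)` and `cost_in(I_c, I_r)` are
    callbacks following Eqs. (2)-(4) below."""
    return cost(SW, I_c) > cost(SW, I_r) + cost_in(I_c, I_r)
```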
However, an estimate for these parameters could be obtained by installing a reliable broadcast/aggregation tree in the network. The root of this tree will be the adaptation peer in our system. Such a broadcast/aggregation tree could be installed and maintained using the approaches discussed in [9] and [12]. ### 4.4. Index Installation Once it is determined that installing a recommended set of indices is beneficial, the adaptation peer initiates the index installation phase. The goal of the index installation phase is to broadcast the new set of indices Ir, and to reindex the data on each peer accordingly. A na¨ıve way of carrying out the index installation phase is to broadcast the recommended set of indices using the DHT broadcast/aggregation tree, and let each peer re-index the data according to the new set of indices. However, queries issued in the system during the re-indexing of the data may not be able to recall the matching data objects completely. This could happen in cases where, e.g., a query issued using the new set of indices searches for matching data objects at a peer where the data has not been placed yet using the new set of indices. In order to avoid this shortcoming, we introduce a 3-step index installation phase. During the first step, a broadcast message containing the new set of indices Ir is sent by the adaptation peer to each peer in the system. For this purpose, the DHT broadcast/aggregation tree is used. Upon receiving the broadcast message, each peer begins to re-index its data. Note that the old set of indices Ic and the corresponding data is not yet removed from the system. Hence, the queries that are issued during this step continue to be resolved using Ic. A data object in the OID system is indexed using o number of indices, i.e., |Ic| = o. This means that there are o copies of the same data object in the system. Therefore, it has to be made sure that each copy of the data object is not re-indexed using each index in Ir. For example, if Ic and Ir are as shown in Fig. 3, a data object indexed using Ic would be located at four locations in the system. Now if the same data object is re-indexed at each location using every index in Ir, it would be sent four times to each new location in the network. To avoid this, the data is re-indexed as follows. Data re-indexing at a peer starts with the comparison of the installed set of indices Ic with the new set of indices Ir (see Fig. 3). First, the common elements in both sets are ignored. Next, a mapping is defined from each element in _Ic to each corresponding element in Ir. The data objects_ that had been previously indexed using an element of Ic are now re-indexed only using the corresponding element in Ir. For example, in Fig. 3, the data objects that had been _|Q|_ � _cost(Q, I) =_ _cost(qi, SFCj) such that_ _i=1_ _cost(qi, SFCj) < cost(qi, SFCk) where_ _j, k : 1_ (j, k) _I_ and j = k _∀_ _≤_ _≤|_ _|_ _̸_ (2) i.e., the cost of a set of queries Q over a set of indices I is a sum of the cost of each query in Q over the least expensive index in I. In order to determine the least expensive index for a query, the network cost of the query over each index needs to be calculated. Due to highly dynamic nature of P2P systems, this cost cannot be accurately anticipated. However, if the cost of routing a message in the network is known, the maximum cost for resolving a query can be calculated. Let z be the total number of zones a query maps to, on an SFC-based index. 
In order to resolve this query, the peer responsible for each zone has to be queried. If a basic query routing strategy is considered, where first a lookup is performed to determine the peer responsible for each zone, then the maximum cost of a query q on an index SFC is calculated as:

cost(q, SFC) = z · (log₂N + 2) [messages]   (3)

where N is the total number of peers in the network and log₂N is the maximum number of messages needed for looking up a peer responsible for a zone. Two additional messages are needed to send a query request to a peer and receive a query response from it.

In order to check if Condition 1 holds, the cost of installing the recommended set of indices, cost_in, has to be calculated. Similar to the cost calculation above, only the maximum cost of installation can be calculated. Let λ be the total number of unique data objects in the system; then the maximum cost for installing the recommended set of indices Ir is calculated as:

cost_in = 3 · (N − 1) + (|Ir| − |Ic ∩ Ir|) · λ · (log₂N + 2) [messages]   (4)

where 3 · (N − 1) is the cost of broadcasting the recommended set of indices Ir, (|Ir| − |Ic ∩ Ir|) is the total number of new indices, and λ · (log₂N + 2) is the cost of re-indexing the data. The reason for the broadcast cost being almost three times the network size is discussed in the next section.

Note that Equations 3 and 4 require the knowledge of global parameters such as N and λ, which are generally not known to a peer in a DHT network. However, an estimate for these parameters could be obtained by installing a reliable broadcast/aggregation tree in the network. The root of this tree will be the adaptation peer in our system. Such a broadcast/aggregation tree could be installed and maintained using the approaches discussed in [9] and [12].
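Equations (2)–(4) translate directly into code. The helper `zones_on(q, sfc)`, which returns how many zones query q maps to on index sfc, is an assumed callback (e.g., the `query_zones` sketch from Section 3).

```python
from math import log2

def query_cost(zones, N):
    """Eq. (3): worst-case messages to resolve a query that maps to
    `zones` zones on one index, in an N-peer DHT."""
    return zones * (log2(N) + 2)

def workload_cost(Q, I, zones_on, N):
    """Eq. (2): each query is charged its cheapest index."""
    return sum(min(query_cost(zones_on(q, sfc), N) for sfc in I) for q in Q)

def install_cost(I_c, I_r, N, num_objects):
    """Eq. (4): broadcast of I_r plus re-indexing one copy of every
    data object for each genuinely new index."""
    new = len(set(I_r) - set(I_c))     # |Ir| - |Ic intersect Ir|
    return 3 * (N - 1) + new * num_objects * (log2(N) + 2)
```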
We simulated our system using the PeerSim [1] simulator. The simulations were performed on an AMD Opteron machine with 4 GB of RAM. Considering resource discovery in grid computing as an example scenario, we represent the data objects in our simulations as resource specifications. Each resource specification consists of attributes shown in Table 1. The value for |SFC 1|SFC 4|Col3|SFC 7|SFC 9| |---|---|---|---|---| |||||| |SFC 7|SFC 6||SFC 8|SFC 1| |Attribute|Value Domain|Definition| |---|---|---| |CPU Speed|1.0 – 4.0|CPU clock speed in gigahertz| |Busy CPU|0 – 100|Percentage of CPU(s) in use| |Mem Size|1.0 – 8.0|Total Memory size in gigabytes| |Mem Used|0 – 100|Percentage of Memory in use| |HDD Size|100.0 – 3000.0|Total HDD size in gigabytes| |DL Bandwidth|0.5 – 100|Bandwidth of down link in mbits/sec| **Table 1. Attribute List** each attribute in a resource specification is randomly generated from the value domain of the attribute. Unlike the database management systems where benchmark workloads are made available by the TPC [2], no such workload of MAR queries is readily available for P2P systems. Hence, we generate the workloads using the attributes in Table 1 for simulating different scenarios of our system. For each point on the graphs displayed in this section, the corresponding experiment is repeated 10 times with different workloads, and an average value is plotted. ### 5.1. Varying Number of Attributes In this section, we present the results from the performance evaluation of our system using a workload of queries with varying number of attributes. We show that an adaptive OID system is essential for continuous optimization of overall system performance for MAR queries. Table 2 shows the parameter values used for this simulation. **Parameter** **Value** **Definition** _N_ 1000 Total number of peers in the DHT _n_ 1600 Total number of queries in the workload _o_ 3 Maximum number of indices _λ_ 5000 Total number of data objects _β_ 33 First level sampling parameter _γ_ 2 Second level sampling parameter **Table 2. Simulation Parameters** The workload is generated in a manner that the start of the workload contains queries with 4 attributes followed by queries with 3, 2, and 4 attributes again. To simulate a slow change in the workload over time, the attributes in queries are varied slowly, i.e., the change from queries with 4 attributes to queries with 3 attributes and so on, is not sudden. Each attribute in a query is randomly selected from the list shown in Table 1. Similarly, the range for an attribute in a query is randomly selected from the domain of the attribute. The values for parameters β and γ are set so that almost 10% of peers in the network are sampled. We simulate the adaptive OID system, the non-adaptive system, and a system with only a single adaptation (partially adaptive system), by executing the generated workload from random peers in the DHT over a period of time. The non-adaptive system is a system with only a single data index over all 6 attributes shown in Table 1. For the partially |Parameter|Value|Definition| |---|---|---| |N|1000|Total number of peers in the DHT| |n|1600|Total number of queries in the workload| |o|3|Maximum number of indices| |λ|5000|Total number of data objects| |β|33|First level sampling parameter| |γ|2|Second level sampling parameter| ----- 10[9] 10[8] 10[7] 10[6] 10[5] 10[4] 10[3] 10[2] 0 250 500 750 1000 1250 1500 Simulation Time 0 250 500 750 1000 1250 1500 Simulation Time 10[8] 10[7] 10[6] 10[5] 10[4] 10[3] **Figure 4. 
## 5. System Evaluation

In this section, we present the results from the performance evaluation of the adaptive OID system. We simulated our system using the PeerSim [1] simulator. The simulations were performed on an AMD Opteron machine with 4 GB of RAM. Considering resource discovery in grid computing as an example scenario, we represent the data objects in our simulations as resource specifications. Each resource specification consists of the attributes shown in Table 1. The value for each attribute in a resource specification is randomly generated from the value domain of the attribute.

**Table 1. Attribute List**

| Attribute | Value Domain | Definition |
|---|---|---|
| CPU Speed | 1.0 – 4.0 | CPU clock speed in gigahertz |
| Busy CPU | 0 – 100 | Percentage of CPU(s) in use |
| Mem Size | 1.0 – 8.0 | Total memory size in gigabytes |
| Mem Used | 0 – 100 | Percentage of memory in use |
| HDD Size | 100.0 – 3000.0 | Total HDD size in gigabytes |
| DL Bandwidth | 0.5 – 100 | Bandwidth of down link in mbits/sec |

Unlike database management systems, where benchmark workloads are made available by the TPC [2], no such workload of MAR queries is readily available for P2P systems. Hence, we generate the workloads using the attributes in Table 1 for simulating different scenarios of our system. For each point on the graphs displayed in this section, the corresponding experiment is repeated 10 times with different workloads, and an average value is plotted.
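A workload generator following this description is sketched below; the attribute names and domains are taken from Table 1, and the uniform range sampling is our reading of "randomly selected from the domain".

```python
import random

# Attribute value domains from Table 1.
DOMAINS = {
    "CPU Speed": (1.0, 4.0),     "Busy CPU": (0, 100),
    "Mem Size": (1.0, 8.0),      "Mem Used": (0, 100),
    "HDD Size": (100.0, 3000.0), "DL Bandwidth": (0.5, 100),
}

def random_query(num_attrs):
    """One MAR query: `num_attrs` randomly chosen attributes, each
    with a random sub-range drawn uniformly from its domain."""
    query = {}
    for attr in random.sample(list(DOMAINS), num_attrs):
        lo, hi = DOMAINS[attr]
        a, b = sorted(random.uniform(lo, hi) for _ in range(2))
        query[attr] = (a, b)
    return query

# n = 1600 queries, e.g. the 4-attribute phase of the workload in Sec. 5.1.
workload = [random_query(4) for _ in range(1600)]
```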
### 5.1. Varying Number of Attributes

In this section, we present the results from the performance evaluation of our system using a workload of queries with a varying number of attributes. We show that an adaptive OID system is essential for the continuous optimization of overall system performance for MAR queries. Table 2 shows the parameter values used for this simulation.

**Table 2. Simulation Parameters**

| Parameter | Value | Definition |
|---|---|---|
| N | 1000 | Total number of peers in the DHT |
| n | 1600 | Total number of queries in the workload |
| o | 3 | Maximum number of indices |
| λ | 5000 | Total number of data objects |
| β | 33 | First level sampling parameter |
| γ | 2 | Second level sampling parameter |

The workload is generated in a manner that the start of the workload contains queries with 4 attributes, followed by queries with 3, 2, and 4 attributes again. To simulate a slow change in the workload over time, the attributes in queries are varied slowly, i.e., the change from queries with 4 attributes to queries with 3 attributes, and so on, is not sudden. Each attribute in a query is randomly selected from the list shown in Table 1. Similarly, the range for an attribute in a query is randomly selected from the domain of the attribute. The values for parameters β and γ are set so that almost 10% of the peers in the network are sampled.

We simulate the adaptive OID system, the non-adaptive system, and a system with only a single adaptation (partially adaptive system) by executing the generated workload from random peers in the DHT over a period of time. The non-adaptive system is a system with only a single data index over all 6 attributes shown in Table 1. For the partially adaptive system, the adaptation takes place after 10 simulation time units. Moreover, for the adaptive OID system, the index adaptation process is scheduled to run after every 10 simulation time units. A single simulation time unit is long enough to allow the execution of a single query.

**Figure 4. Varying Number of Attributes** (average number of messages, log scale, over simulation time 0–1500 for the three systems.)

For every 5 simulation time units, we plot the average number of messages in all three systems during that 5-time-unit window (see Fig. 4). The number of messages represents all the messages in the system, including messages for the index adaptation process. For the adaptive OID system, the peaks in the number of messages (see Fig. 4) mark the points where index installation takes place. The higher the peak, the larger the number of indices that are exchanged.

Similar to the non-adaptive system, the adaptive OID system and the partially adaptive system start with one index over all attributes. However, the first adaptation happens very soon in both systems, and 2 additional indices are installed (see Fig. 4). This improves the performance of MAR queries in both systems, because the queries are able to find less expensive indices for resolution. Since the first adaptation is based on a very small workload, the second adaptation follows soon in the adaptive OID system. The system continues to adapt itself over time according to the workload of queries. After each adaptation, the performance of MAR queries improves, as the average number of messages in the system is reduced.

Figure 4 shows that the partially adaptive system produces 99.2% fewer messages than the non-adaptive system. Moreover, the adaptive OID system produces 83.6% fewer messages than the partially adaptive system. Therefore, the adaptive OID system is several orders of magnitude better than the non-adaptive system. Figure 4 also shows that, in order to optimize the overall system performance for MAR queries, a system with continuous adaptations is essential.

The performance of the non-adaptive system worsens with a decreasing number of attributes in queries (see Fig. 4). This happens because, with a decreasing number of query attributes, more attributes have to be considered as wild-cards on a single large index. The performance of the system gets better towards the end of the simulation because the number of attributes in queries increases from 2 to 4 attributes.

In order to further analyze the impact of the number of attributes in queries, we perform another simulation where the number of attributes in the workload is kept constant at 3 attributes. Other simulation parameters have the same values as in Table 2.

**Figure 5. Fixed Number of Attributes** (average number of messages, log scale, over simulation time 0–1500 for the three systems.)

Figure 5 shows the performance of all three systems with respect to the average number of messages in a 5-time-unit window. Figure 5 shows that the adaptive OID system quickly adapts its indices to the changing workload of queries. Major adaptations come close to the start of the simulation. After that, even though some small adaptations happen in the system, the performance of the system remains roughly constant. This happens because the indices adapted during the start of the simulation remain beneficial for the complete simulation. The performance of the non-adaptive system remains almost constant, and several orders of magnitude worse than the adaptive system, throughout the simulation. With a constant number of attributes in queries, the performance of the partially adaptive system comes close to the performance of the adaptive system (see Fig. 5). However, the adaptive system still produces 3.1% fewer messages compared to the partially adaptive system. This difference in the number of messages grows larger over time. Therefore, in a long-running system, the adaptive system would perform significantly better than a partially adaptive system.

### 5.2. Varying Number of Indices

In this section, we present the performance evaluation of the adaptive OID system by showing the impact of a varying number of indices on the system. We perform 3 different simulations using the same workload as in the first simulation discussed in Sec. 5.1. The maximum number of indices o is varied from 3 to 5 across these simulations. Other simulation parameters have the same values as in Table 2. For each simulation, we plot the average number of messages in a 10-time-unit window.

**Figure 6. Varying Number of Indices** (average number of messages, log scale, over simulation time 0–1500 for o = 3, 4, 5.)

Generally, the larger the set of indices, the better the performance of the system after an adaptation (see Fig. 6), because with an increasing number of indices, more queries find an optimal index for resolution. Since more queries are optimized, the overall system performance also improves slightly with an increasing number of indices, e.g., the system with 4 indices produces 2.3% fewer messages than the system with 3 indices. Similarly, the system with 5 indices produces 1% fewer messages than the system with 4 indices.

### 5.3. Varying Number of Data Objects

In this section, we present the performance evaluation of the adaptive OID system by showing the impact of a varying number of data objects on the system. We perform 6 different simulations using the same workload as in the first simulation discussed in Sec. 5.1. The total number of data objects in the system λ is doubled across the simulations, starting from 5000 and going up to 160,000. Other simulation parameters have the same values as in Table 2. For each simulation, we plot the average adaptation window size, defined as the average number of simulation time units needed for an adaptation to happen in the system.

**Figure 7. Varying Number of Data Objects** (average adaptation window size for λ = 5k, 10k, 20k, 40k, 80k, 160k data objects.)

Figure 7 shows the performance of the adaptive OID system with respect to the average adaptation window size. The larger the number of data objects in the system, the longer it takes for an adaptation to happen. The reason is that, with an increasing number of data objects, the index installation cost also increases. Hence, a larger and more diverse workload is needed for the adaptation to be beneficial.

### 5.4. Distributed Workload Collection

In this section, we discuss the results from the performance evaluation of the distributed workload collection (see Sec. 4.1) phase of the index adaptation process. We perform 16 simulations using the same workload as in the first simulation discussed in Sec. 5.1. For a fixed DHT network size, we vary the values of β and γ across 4 simulations, such that the total number of peers sampled in the network varies between 6% and 12% (in steps of 2%) of the total network size. This simulation scenario is repeated for varying DHT network sizes of N = 10², 10³, 10⁴, 10⁵. Other simulation parameters have the same values as in Table 2.
During each simulation, after a distributed workload collection phase ends, we measure the cost deviation metric, defined as:

( |cost(W, Ir^SW) − cost(W, Ir^W)| / cost(W, Ir^W) ) · 100

where W is the complete set of MAR queries from all peers, Ir^SW is the recommended set of indices obtained using the sampled workload SW, and Ir^W is the recommended set of indices obtained using the complete set of queries W. The cost deviation indicates how good the recommended set of indices is (in percent), if it is obtained using the sampled workload, compared to the recommended set of indices obtained using the global workload. The lower the cost deviation, the better the performance of the system, because the indices are more optimized for future queries. For each simulation, the average cost deviation is plotted in Fig. 8. For the network size of 10², the calculation of the number of peers to sample using β and γ was rounded off to the same value (7% of the network size) in the case of 6% and 8% sampled peers.

**Figure 8. Distributed Workload Collection** (average cost deviation for 6%–12% sampled peers and network sizes N = 10², 10³, 10⁴, 10⁵.)

Figure 8 shows that, for a fixed network size, the larger the number of sampled peers, the smaller the cost deviation. This happens because, with an increasing number of sampled peers, a better approximation of the complete set of queries is acquired. Hence, the recommended set of indices obtained using the sampled workload is more similar to the recommended set of indices obtained using the complete set of queries. Figure 8 also portrays that, with increasing network size, sampling a smaller percentage of peers in the network is sufficient for having a low cost deviation.

## 6. Conclusion and Future Work

In this paper, we presented the design and evaluation of the adaptive OID system. The adaptive OID system optimizes the overall system performance for MAR queries by dynamically adapting the set of indices in a DHT. The set of indices is adapted using a four-phase index adaptation process. During the first phase, a workload of MAR queries is collected from the DHT network using uniform random sampling of peers. This workload is then used in the second phase for obtaining a new set of indices using the index recommendation tool [15]. During the third phase, the cost and the benefit of installing a new set of indices are estimated. If it is beneficial to install the new set of indices, the installation is carried out during the fourth phase of the index adaptation process.

Our evaluations show that the adaptive OID system continuously adapts the set of indices in the system according to the dynamic workload of MAR queries. The adaptations are most useful when there is a variety of different queries in the system. Nonetheless, the adaptive OID system shows several orders of magnitude improved performance compared to a non-adaptive system.

Currently, the complete log of MAR queries is retrieved from a peer during the distributed workload collection phase. In the future, we plan to change this phase so that it is possible to retrieve the query log until a specified point in time in the past. This would limit the amount of network information flow during the sampling process, making the distributed workload collection phase more scalable.

## References

[1] PeerSim: A P2P Simulator. http://peersim.sourceforge.net/.
[2] Transaction Processing Performance Council. http://www.tpc.org/.
[3] W. Acosta and S. Chandra.
Exploiting the Properties of Query Workload and File Name Distributions to Improve P2P Synopsis-based Searches. In Proc. of Intl. Conf. on Computer Communications. IEEE, 2008.
[4] A. Andrzejak and Z. Xu. Scalable, Efficient Range Queries for Grid Information Services. In Proc. of Intl. Conf. on P2P Computing. IEEE, 2002.
[5] M. Cai, M. Frank, J. Chen, and P. Szekely. MAAN: A Multi-Attribute Addressable Network for Grid Information Services. In Proc. of Intl. Workshop on Grid Computing. IEEE, 2003.
[6] Y. Chawathe, S. Ramabhadran, S. Ratnasamy, A. LaMarca, S. Shenker, and J. Hellerstein. A Case Study in Building Layered DHT Applications. In Proc. of Conf. on Applications, Technologies, Architectures, and Protocols for Computer Communications. ACM, 2005.
[7] Z. Deng, D. Feng, K. Zhou, Z. Shi, and C. Luo. Range Query Using Learning-Aware RPS in DHT-Based Peer-to-Peer Networks. In Proc. of Intl. Symp. on Cluster Computing and the Grid. IEEE, 2009.
[8] K. Doka, D. Tsoumakos, and N. Koziris. HiPPIS: An Online P2P System for Efficient Lookups on d-dimensional Hierarchies. In Proc. of Workshop on Web Information and Data Management. ACM, 2008.
[9] S. El-Ansary, L. O. Alima, P. Brand, and S. Haridi. Efficient Broadcast in Structured P2P Networks. In Peer-to-Peer Systems II. Springer, 2003.
[10] P. Ganesan, B. Yang, and H. Garcia-Molina. One Torus to Rule Them All: Multi-dimensional Queries in P2P Systems. In Proc. of Intl. Workshop on the Web and Databases. ACM, 2004.
[11] D. Hilbert. Über die stetige Abbildung einer Linie auf ein Flächenstück. In Mathematische Annalen, 1891.
[12] K. Huang and D. Zhang. DHT-based Lightweight Broadcast Algorithms in Large-scale Computing Infrastructures. Future Gener. Comput. Syst., 2010.
[13] V. Kalogeraki, D. Gunopulos, and D. Zeinalipour-Yazti. A Local Search Mechanism for Peer-to-Peer Networks. In Proc. of Conf. on Information and Knowledge Management. ACM, 2002.
[14] G. Koloniari, Y. Petrakis, E. Pitoura, and T. Tsotsos. Query Workload-aware Overlay Construction using Histograms. In Proc. of Intl. Conf. on Information and Knowledge Management. ACM, 2005.
[15] F. Memon, F. Dürr, and K. Rothermel. Index Recommendation Tool for Optimized Information Discovery Over Distributed Hash Tables. In Proc. of Intl. Conf. on Local Computer Networks. IEEE, 2010.
[16] F. Memon, D. Tiebler, F. Dürr, K. Rothermel, M. Tomsu, and P. Domschitz. OID: Optimized Information Discovery using Space Filling Curves in P2P Overlay Networks. In Proc. of Intl. Conference on Parallel and Distributed Systems. IEEE, 2008.
[17] L. T. Nguyen, W. G. Yee, and O. Frieder. Query Workload Driven Summarization for P2P Query Routing. In Proc. of Intl. Conf. on Peer-to-Peer Computing. IEEE, 2008.
[18] C. Schmidt and M. Parashar. Flexible Information Discovery in Decentralized Distributed Systems. In Proc. of Intl. Symp. on High Performance Distributed Computing. IEEE, 2003.
[19] Y. Shu, B. C. Ooi, K.-L. Tan, and A. Zhou. Supporting Multi-dimensional Range Queries in Peer-to-Peer Systems. In Proc. of Intl. Conf. on P2P Computing. IEEE, 2005.
[20] G. Skobeltsyn and K. Aberer. Distributed Cache Table: Efficient Query-driven Processing of Multi-term Queries in P2P Networks. In Proc. of Intl. Workshop on Information Retrieval in P2P Networks. ACM, 2006.
[21] P. Triantafillou and T. Pitoura. Towards a Unifying Framework for Complex Query Processing over Structured Peer-to-Peer Data Networks. In Proc. of Intl.
Workshop on Databases, Information Systems and P2P Computing. Springer, 2003.
{ "disclaimer": "Notice: The following paper fields have been elided by the publisher: {'references', 'abstract'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1109/PCCC.2010.5682330?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1109/PCCC.2010.5682330, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "GREEN", "url": "https://www2.informatik.uni-stuttgart.de/bibliothek/ftp/ncstrl.ustuttgart_fi/INPROC-2010-116/INPROC-2010-116.pdf" }
2,010
[ "JournalArticle", "Conference" ]
true
2010-12-01T00:00:00
[]
11,311
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/02987c79896dfedd7d494deaead3e78eac241f65
[ "Computer Science" ]
0.89571
Unlinkable Collaborative Learning Transactions: Privacy-Awareness in Decentralized Approaches
02987c79896dfedd7d494deaead3e78eac241f65
IEEE Access
[ { "authorId": "30682012", "name": "S. Rahmadika" }, { "authorId": "1708489", "name": "K. Rhee" } ]
{ "alternate_issns": null, "alternate_names": null, "alternate_urls": [ "http://ieeexplore.ieee.org/servlet/opac?punumber=6287639" ], "id": "2633f5b2-c15c-49fe-80f5-07523e770c26", "issn": "2169-3536", "name": "IEEE Access", "type": "journal", "url": "http://www.ieee.org/publications_standards/publications/ieee_access.html" }
Smart contracts (SCs) and collaborative learning (CL) are disclosed publicly, in which most transactions and activities that occur by the parties can be bared in real-time. Both are strengthened in a decentralized manner. CL allows numerous clients to collectively build deep learning models privately by aggregating the gradient values from clients’ devices, yet it lacks the incentive mechanism for the contributing clients. On the other hand, the merits of SCs can be a plausible solution as an incentive mechanism in the CL system because self-executing contracts with immutable data records are resistant to failure. The clients can claim the rewards by stating their contribution arbitrarily in the SCs and tendering a proof transaction function. Nevertheless, directly adopting SCs in the CL system could breach clients’ privacy because the transactions are exposed openly. The observer can infer the properties of the clients’ resources. Therefore, we designed schemes that can overcome observers’ ability to link clients’ information with their associated devices during training. In essence, our schemes are unbiased. We also provide a secure incentive mechanism for the parties in the CL system by obscuring the information values. Finally, the numerical results indicate that the proposed schemes satisfy the design goals.
Received December 30, 2020, accepted April 19, 2021, date of publication April 28, 2021, date of current version May 6, 2021.

_Digital Object Identifier 10.1109/ACCESS.2021.3076205_

# Unlinkable Collaborative Learning Transactions: Privacy-Awareness in Decentralized Approaches

SANDI RAHMADIKA 1 AND KYUNG-HYUNE RHEE 2, (Member, IEEE)

1Department of Information Security, Graduate School, Pukyong National University, Busan 48513, South Korea
2Department of IT Convergence and Application Engineering, Pukyong National University, Busan 48513, South Korea

Corresponding author: Kyung-Hyune Rhee (khrhee@pknu.ac.kr)

This work was supported in part by the National Research Foundation of Korea (NRF) grant funded by the Korea Government (MSIT) under Grant NRF-2018R1D1A1B07048944, and in part by the Ministry of Science and ICT (MSIT), South Korea, under the Information Technology Research Center (ITRC) support program, supervised by the Institute of Information and Communications Technology Planning and Evaluation (IITP), under Grant IITP-2020-0-01797.

**ABSTRACT Smart contracts (SCs) and collaborative learning (CL) are disclosed publicly, in which most** transactions and activities that occur by the parties can be bared in real-time. Both are strengthened in a decentralized manner. CL allows numerous clients to collectively build deep learning models privately by aggregating the gradient values from clients' devices, yet it lacks the incentive mechanism for the contributing clients. On the other hand, the merits of SCs can be a plausible solution as an incentive mechanism in the CL system because self-executing contracts with immutable data records are resistant to failure. The clients can claim the rewards by stating their contribution arbitrarily in the SCs and tendering a proof transaction function. Nevertheless, directly adopting SCs in the CL system could breach clients' privacy because the transactions are exposed openly. The observer can infer the properties of the clients' resources. Therefore, we designed schemes that can overcome observers' ability to link clients' information with their associated devices during training. In essence, our schemes are unbiased. We also provide a secure incentive mechanism for the parties in the CL system by obscuring the information values. Finally, the numerical results indicate that the proposed schemes satisfy the design goals.

**INDEX TERMS Blockchain, collaborative learning, decentralized approach, smart contracts, unlinkability.**

**I. INTRODUCTION**

Applications of internet-based information systems that operate in a dispersed manner have been extensively researched by academia, developers, and industry. The foremost objective of the decentralized approach is to address the communication bottleneck issues and memory usage of the conventional centralized system [1]. The paradigm has also shifted from centralized systems toward dispersed ones in various major domains, such as financial applications, medical records, digital rights, and intellectual property, among others. Blockchain technology, through the Bitcoin cryptocurrency [2], and federated learning [3] are the most prominent practical adoptions of decentralized approaches. The appearance of blockchain technology in 2008, which presented a thoroughly peer-to-peer version of electronic cash called Bitcoin, was the trigger for the further development of decentralization-based systems.
Blockchain 1.0 is the first generation [4], with simple ledgers that record transactions, followed by generation 2.0 with smart contract (SC) features, pioneered by the Ethereum platform [5]. Blockchain 3.0 is the latest version of the decentralized generation, combining more features that support cloud nodes, open-chain access, and incentives for self-evolution. Regardless of the blockchain merits, such as tamper-resistance, paperless processing, immutability, and an append-only data structure, blockchain suffers from privacy issues [6] because the process is transparent and disclosed publicly. Even though there is a private version of the blockchain, the validators of consensus are semi-trusted parties that can jeopardize the sustainability of the system. Hence, a blockchain-based service alone, without several additional protocols, is likely inappropriate for application in systems that hold numerous types of sensitive information.

Comparable to the Bitcoin blockchain, federated learning (FL) also relies on the decentralized approach in building a deep learning model collectively from multiple devices. In contrast to conventional machine learning, where the clients process the training model centrally, FL allows the clients to build the artificial intelligence (AI) model by sending updated gradient values to the aggregation server without revealing the dataset [7]. In this sense, private data remain confidential (FL preserves privacy for clients by design). Nevertheless, FL-based schemes, such as collaborative learning, lack a proper incentive mechanism that can motivate clients to improve AI models. Several applications do not even provide a reward for clients. Blockchain with SC features can be a solution to tackle the incentive mechanism issues. However, directly adopting SCs threatens the clients' privacy because transactions are transparent and openly available in the network. Accordingly, SCs in CL will be a serious consideration if implemented in a system with profoundly confidential data. Specifically, complementary protocols need to be implemented.

Privacy-awareness in the smart contract blockchain and in collaborative learning appears as part of a flaw that must be overcome. For instance, in collaborative learning, where the primary objective is to provide clients' privacy, Melis et al. [8] surprisingly argued that an observer could infer the presence of exact data points of clients' datasets under certain assumptions. On the other hand, SCs make transactions visible to the public. The stored data can be accessed at any time, and the value of data managed by individuals is noticeable. Transparency is one of the concrete features of the SC blockchain. However, this feature is not desirable for various cases, especially in cross-silo FL with highly confidential data such as medical records, biometric data, employee data, sexual orientation [9], philosophical beliefs, and so forth. For these reasons, the relationship between the data used in training and its owner needs to be obscured.

To support an unlinkable incentive mechanism in cross-silo FL, SCs combined with supplementary protocols can be a credible solution to address privacy and linkability issues for decentralized applications. For example, a well-known decentralized cryptocurrency called Monero (stock symbol XMR) provides several features to obfuscate the information of every transaction. The core technology of Monero (XMR) is based on the CryptoNote algorithm [10], with elliptic curve parameters, ring signatures, and stealth addresses as the principal protocols.
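For concreteness, the stealth-address mechanism of CryptoNote derives a fresh one-time output key for every payment, so that only the holder of the view key can link an output to its recipient. The following toy sketch uses a multiplicative group of integers modulo a prime in place of Monero's elliptic-curve group; the modulus, generator, and hash are illustrative stand-ins only and provide no real security.

```python
import hashlib
import secrets

# Toy group standing in for Monero's elliptic-curve group.
q = 2**127 - 1                        # a Mersenne prime (illustrative)
g = 3                                 # assumed generator

def Hs(x: int) -> int:
    """Scalar hash H_s (SHA-256 stand-in for CryptoNote's hash)."""
    return int.from_bytes(hashlib.sha256(str(x).encode()).digest(), "big") % (q - 1)

# Recipient's long-term keys: view pair (a, A) and spend pair (b, B).
a, b = secrets.randbelow(q - 1), secrets.randbelow(q - 1)
A, B = pow(g, a, q), pow(g, b, q)

# Sender derives a fresh one-time ("stealth") output key for this payment.
r = secrets.randbelow(q - 1)          # per-transaction secret
R = pow(g, r, q)                      # published alongside the transaction
P_out = pow(g, Hs(pow(A, r, q)), q) * B % q

# Recipient scans with the view key a: since R^a = A^r, only the intended
# recipient recognizes the output; outsiders cannot link P_out to (A, B).
assert P_out == pow(g, Hs(pow(R, a, q)), q) * B % q
x = (Hs(pow(R, a, q)) + b) % (q - 1)  # one-time spend key, requires b
assert pow(g, x, q) == P_out
```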
The core technology of Monero (XMR) is based on the CryptoNote algorithm [10], with elliptic curve parameters, ring signatures, and stealth addresses as the principal protocols. However, its application has been limited to the financial sector. Therefore, an unlinkable and secure incentive scheme in CL can be created by referring to the CryptoNote protocol as the supplementary values in the arbitrary functions of Ethereum SCs. To evaluate the objective of this research, we implemented a prototype of the federated learning scheme utilizing a convolutional neural network installed on devices fully controlled by clients. Our system is built by referring to FL's principles. Clients who have contributed by using their resources can tender reward-claim transactions through SCs, which will then be verified by the validators. If the claim is verified successfully, the incentives are propagated through the Ethereum SC platform. Every transaction that occurs is obscured using several protocols (elaborated in Section IV) so that the observer has no knowledge of the transaction values. In summary, this research provides the following main contributions: (i) We present the cross-silo FL framework by referring to the FL principles (developed by the Google AI team) as a case study for an incentive scheme based on blockchain SCs. (ii) We introduce used-model-only services with the obscuring-transactions feature. This use case is intended for clients who only want to use the model without expecting incentives from model providers, e.g., due to an insufficient amount of the dataset. (iii) We introduce unlinkable rewarding and training activities without revealing the information values of transactions. It is part of the extended version of used-model-only services. The road map of this paper is organized as follows. Section II investigates the existing decentralized models and techniques that leverage the FL-based approach and blockchain technology. In Section III, we provide the essential information related to the CL models, the conventional incentive mechanism, and the concerns and benefits addressed by this research. The model and operations of our proposed schemes are presented in Section IV. In Section V, we outline the fundamental points of this study. The opportunities and challenges are described in Section VI. Finally, we conclude this paper in Section VII.

**FIGURE 1. General overview of the collaborative learning model.**

**II. RELATED WORK**
The growing interest in adopting blockchain-based incentive mechanisms in various fields has produced several prominent schemes. In this section, we focus on federated, collaborative, and decentralized learning terms from prior works. The general model of the federated learning approach is depicted in Figure 1. A joint optimization approach that combines the party's reputation and contract theory as the incentive mechanism in the FL system was introduced in [11]. The party's reputation is generated by calculating a multi-weight subjective logic model to motivate users to always participate in training. Correspondingly, a reliable and accountable FL that relies on the blockchain is outlined in [12]. The proof-of-concept is applied, and it is associated with practical methods for the secure aggregation of local model updates in FL. A system called DeepChain was proposed in [13] to preserve privacy for clients during training in FL. The incentive is designed to rely on a smart contract.
The system forces the parties to behave honestly with a designed punishment policy. In line with this, an auditable FL with trust and a blockchain-based incentive, namely FLChain, has been discussed in [14], which also proposed a protocol to reduce the time cost of blockchain queries. In short, the previously mentioned related works combine blockchain with an FL system for different objectives. Blockchain is primarily utilized to propagate incentives to the parties. Various methods to preserve privacy have also been discussed. However, the linkability concerns within the system, especially in smart contracts, have not been thoroughly reviewed in previous works. The uncertainty of leaking personal data during training needs to be taken into account. Thus, we raise this topic to be discussed in this research. Another comparative approach with different purposes is described in [15]. Instead of using blockchain as an incentive scheme, Sharma et al. (2020) proposed a distributed computing defense framework utilizing blockchain merits in the sustainable society fields. Recently, efforts to utilize blockchain and FL were elaborated in [16] and [17]. In fog and crowd computing environments, several troublesome problems, such as network congestion, overhead, and communication delays, are addressed; however, the discussion of the trade-off between the privacy and efficiency of the proposed schemes is limited. In terms of security from a federated learning perspective, the research in [18] outlines that malicious users might perform poisoning attacks against the updated models targeting specific devices in the FL network. With the same intentions, the authors in [19] presented Sybil-based poisoning attacks in FL and introduced a novel defense to address these problems. Recent studies have broadly explored poisoning attacks in FL, such as those presented in [7] and [20]. Methods addressing targeted model poisoning using a simple improvement of some FL protocols can be feasible solutions. However, concerns about the probability of data leakage during training are still significant. Moreover, the incentive mechanism through SCs, whose transparent mode and open communication between parties cause the linkability concerns, should be dealt with in the first place. In this paper, we use the term collaborative learning (see Figure 2) for a scheme in which multiple computing devices conjointly build a deep learning model over shared memory, whereby multiple machines with specific computational capabilities accomplish tasks independently. The smart contract linkability we study is based on the private Ethereum platform. Our objective is not only to preserve users' privacy, but also to drop the linkage between the flow of information in the network and the clients' identity.

**FIGURE 2. The general form of collaborative learning structures (it consists of 7 minimum operations (1-7 OPx)).**

**III. PROBLEM STATEMENT**
This section provides an overview of the conventional training model with a cloud-trained mode (centralized logging) that is heavily adopted in machine learning fields. We also outline existing incentive schemes based on a centralized approach in several applications. Finally, the merits of combining these technologies are highlighted concisely along with the requirements of this research.
_A. CONCENTRATED LEARNING WITH A CLOUD-TRAINED STRATEGY (CENTRALIZED LOGGING)_
A traditional server-in-the-loop architecture with a centralized logging approach [21] represents an advanced stage of extracting information from a given quantity of data. This type of architecture allows the synapse server to gather log data from multiple log files (on devices), to be later sent to a specific address on the network. As a result, legitimate parties can carry out several transactions and activities, such as troubleshooting, malicious-behavior detection, and analyzing the behavior of the learning process. Conventional training with a cloud-trained approach is shown in Figure 3. Because log data increase significantly over time, manual maintenance of the logs by operators and developers (e.g., using a matching protocol) becomes complicated. Regardless of these matters, conventional training with a cloud-trained approach is straightforward to employ in the real world. This method does not burden the clients' machines because the training process is conducted in the cloud, and the overhead on the devices can be automatically resolved. In this sense, everything is accomplished through cloud services. However, the security trade-off cannot be neglected when directly adopting this approach without thorough consideration. Users unknowingly compromise their privacy by sending their valuable data to cloud services [22]. The cloud environment is a semi-trusted party [23]: the server might be a malicious party that utilizes users' information to make a profit. Moreover, it becomes a serious concern when a malicious server publicly reveals users' sensitive data. Hence, a credible alternative to regular training with a cloud-trained approach is necessary.

**FIGURE 3. Conventional training with a cloud-trained approach.**

Improvements to the conventional training model have been extensively researched in recent years. The upgraded version is called on-device inference with cloud-based training [24]. The training is still conducted in the cloud, but users can create a type of data bundle that is used for training within a defined time frame. In this respect, the users are no longer required to send their data gradually to the model providers. This scheme reduces the burden on the device even better than the previous approach. The users also do not have to be online during the process, making the process more agile and faster. These merits are essential in systems that adopt a dispersed approach, because the resources and memory are limited and likely cannot be regularly updated. Nevertheless, users' privacy remains a concern in this scheme: users' valuable data are still sent to the cloud services in order to be able to use the model, and the model providers could be malicious parties that expose users' data. Therefore, federated learning is popular for eliminating these concerns because training is carried out privately within devices.

_B. CONVENTIONAL INCENTIVE SCHEMES_
Practically every incentive scheme follows a centralized model that relies on a middleman for each transaction. Incentive mechanisms over the Internet, whatever the underlying system, share a related objective: to motivate the parties to behave honestly. A centralized incentive scheme inherently suffers from bottleneck and single-point-of-failure (SPoF) issues that can jeopardize the entire root of the system. A trust-based incentive scheme with the big data field as a use case is presented in [25].
The authorized mobile users are assigned to allocate the tasks of big data with a reverse auction game model. A score function determines the highest scorer to be the winner. However, candidate selection based on trust is concentrated (vulnerable to bottleneck and SPoF issues). Similarly, the authors in [26] designed an incentive mechanism for opportunistic cloud computing services to address the free-riding problem, where users are selfish and unwilling to share their resources in the network. The incentive scheme is based on game theory using the Nash equilibrium principle. However, avoiding the involvement of a middleman is likely to be challenging in a real implementation. Since the public advent of blockchain technology, the paradigm of propagating incentives has deliberately changed into a decentralized form. Bitcoin (BTC), Ethereum (ETH), and Monero (XMR) are prominent decentralized cryptocurrencies built on blockchain technology. A middleman is no longer required to manage the transactions. As a result, the overall costs of transactions can be reduced (removing intermediaries' fees). Incentive mechanisms based on blockchain technology provide irrevocable and tamper-resistant activities that enable the parties to monitor the process effectively. Nevertheless, the transparency feature of the blockchain is not desired in several cases. The research papers in [27]–[30] presented truthful schemes based on blockchain technology to provide a secure incentive mechanism for different use cases. Even though implementations based on blockchain are secure by design, and despite the addition of several protocols, these systems still suffer from linkability issues that might have a significant effect on sensitive data. Therefore, a supplementary protocol is needed to eliminate the linkage between information data and the identity of the user.

_C. COLLABORATIVE LEARNING WITH DECENTRALIZED INCENTIVE MECHANISM_
The Google AI team in 2017 presented distributed machine learning techniques that enable users to improve a machine learning model privately without exposing their dataset. The system is called FL, conceived as a strategy to improve communication efficiency (its architecture is explained in detail in [31]). It is implemented on a smart device mobile keyboard that can predict the most likely next words or phrases. Since then, further research has been gradually carried out concentrating on specific features, such as incentive mechanisms for the parties involved. Existing federated learning schemes lack incentive mechanisms, and the free-rider problem is still likely to be the greatest concern in such a system. Besides being able to motivate users, incentives can also force users to behave honestly. In line with this, blockchain merits can be a plausible solution to be adopted within federated learning. A recent study on FL and blockchain-based incentive mechanisms was presented in [32]. These technologies are implemented in the edge network field involving internet-of-things devices (network edge). A similar objective was also presented in [33]. Overall, key design aspects and incentive mechanisms are achieved. However, as in most previous studies, the offered systems still suffer from traceability, linkability, and transparency issues. They are likely not desirable to be implemented straightforwardly for users that hold a large amount of sensitive information.
Determining the type of blockchain adopted in the private cross-silo federated learning system is paramount, because newly emerged platforms, despite the unique features they offer, may be unsuitable and likely vulnerable to viewing by unauthorized observers. The effectiveness of verifying transactions, including the process of distributing incentives, is also a vital detail that must be thoroughly considered. BTC with proof-of-work (PoW) as a consensus mechanism could be a barrier in federated learning, which requires rapid incentive procedures. The difficulty level in BTC always increases over time, as can be seen in Figure 4. Directly adjusting the difficulty level (either slowing it down or speeding it up) can affect security [34].

**FIGURE 4. Statistic of BTC's difficulties over the years [35].**

_D. REQUIREMENTS_
Linkability-awareness in collaborative learning and smart contract transactions becomes essential when these approaches are linked to valuable user data. The fundamental objective is to provide a tamper-proof incentive mechanism in collaborative learning by relying on smart contracts, which can resolve disputes between parties by design. However, it is not advisable to directly adopt the transparent process; additional protocols are required to break the linkage of information during transactions. More precisely, our design must meet the following requirements:
(i) (RQ-1) Sustainable private learning activities. A collaborative learning scheme should be able to provide sustainable private learning activities for the parties. Our system preserves users' privacy by design because the training is conducted confidentially without revealing the data. We propose plausible techniques that allow users with sensitive data to carry out training without suffering linkability issues.
(ii) (RQ-2) Compatible decentralized incentive mechanism. The blockchain-based incentive provides tangible benefits for enterprises. Incentive schemes based on smart contracts should be able to provide a compatible incentive for the parties within the collaborative learning system. The system must also satisfy the fairness of revenues associated with users' resources.
(iii) (RQ-3) Unlinkable and untraceable transactions. Accuracy, high speed, and a high trust quotient are among the advantages offered by blockchain smart contracts. Nevertheless, in cases of private distributed learning with sensitive data, the transparent process is not preferable. Therefore, the system should be able to provide unlinkable and untraceable transactions. The observer can still see the information in the blockchain network, since the SC is visible publicly, but the observer has zero knowledge about the transactions that occurred.
(iv) (RQ-4) Comprehensiveness. The designed system must be able to ensure the completeness of the entire transaction process, starting from learning activities up to the propagation of incentives.

**IV. MODELING AND OPERATIONS**
First, we provide information about the federated learning model as the backbone use case in this research. The concept development model is adopted by referring to FL principles. The protocol for group signatures is detailed in this section. We also design the used-model-only feature and the secure decentralized rewarding schemes as plausible approaches to preserve users' privacy and provide unlinkable transactions while using the services offered.
_A. COLLABORATIVE LEARNING FRAMEWORKS_
In this research, collaborative learning frameworks are signified by a number of clients Cx1, Cx2, ..., Cxn ∈ Cxi,j in different groups G1, G2, ..., Gn ∈ Gi,j that conduct training on a global model ψglb(x) with a private dataset δn. For ease of presentation, we set five groups of clients G1–G5 with 20 devices in each group, G(1−5) − Cx(01−20) (a total of 100 clients). Each client possesses the same dataset with comparable capability settings on all devices (in a real-world implementation, the datasets and device capabilities are distinct from one another). Every group with predetermined settings jointly conducts training and sends the resulting gradient values ψup^group1, ψup^group2, ..., ψup^group5 to the leader of each group. Leaders are sorted based on the number of transactions recorded by the system. In round 1, r1, the leaders perform aggregation in order to obtain the updated gradient values from the clients within the group. The final aggregation value is determined by the aggregation server Svr^Ag, which also has the role of a model provider. Figure 5 illustrates the update filtering of each group ψup^group1, ..., ψup^group5 to detect suspicious updates from malicious clients. The server Svr^Ag makes a prediction about which malicious clients have tendered one of the gradients. Eventually, the server compares the gradient values and flags the outliers that are significantly distant from the rest. However, the filtering operation is out of the scope of our research because we assume that the clients behave honestly. For more details about the filtering process, we recommend that readers refer to [20] and [36].

**FIGURE 5. Groups of collaborative learning with an aggregation server.**

The clients with their associated datasets can be signified in (1) as follows:

$$\sum_{n=0}^{Cx_n} r_1 \to \psi_{glb}(x) = \{Cx_1(\delta_1), \ldots, Cx_n(\delta_n)\} \tag{1}$$

$$T_{r_1} \le T_{max}^{Cx_n}; \quad \forall\, G_{(1-5)}{-}Cx_{(01-20)} \to \psi_{glb}(x) \tag{2}$$

$$MAE(\psi_{glb}(x)) = \frac{1}{n}\sum_{i=1}^{n} \left| y_i - f(x_i) \right| \tag{3}$$

The clients are bounded by a maximum training time Tmax^rn for each round. The training time is set by the aggregation server Svr^Ag, which also has a role as the global model ψglb(x) provider. The maximum training time is adjustable and can be determined by the model provider as needed (denoted in (2)). Within Tmax^rn, the clients collectively transfer the updated gradients to the group leader. The updated gradient groups ψup^group1, ..., ψup^group5 are derived from the aggregation value of every participant in the group. Specifically, there are five gradient values in total from the groups, which will be filtered by the aggregation server. Eventually, the final aggregated group update ψup^final is calculated by the server by excluding the updates identified as coming from suspicious clients. The accuracy is calculated by the mean absolute error (MAE), as shown in (3) [37], where f(xi) is the prediction value of model ψglb(x), and yi is the actual value of the records. In short, the lower the MAE value of model ψglb(x), the higher its accuracy.
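To make the aggregation pipeline concrete, the following minimal sketch (our illustration, not the authors' released code) averages the clients' updates within each group, averages the leaders' values at the server, and evaluates accuracy with the MAE of (3); leader election and outlier filtering are omitted.

```python
import numpy as np

def aggregate_group(client_updates):
    """Leader-side averaging of the updates submitted by one group's clients."""
    return np.mean(np.stack(client_updates), axis=0)

def aggregate_global(group_updates):
    """Server-side (Svr^Ag) aggregation of the per-group leader values."""
    return np.mean(np.stack(group_updates), axis=0)

def mean_absolute_error(y_true, y_pred):
    """MAE as in Eq. (3): lower values indicate a more accurate model."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return np.mean(np.abs(y_true - y_pred))

# Toy round: 5 groups x 20 clients, each submitting a flat update vector.
rng = np.random.default_rng(0)
groups = [[rng.normal(size=8) for _ in range(20)] for _ in range(5)]
group_updates = [aggregate_group(g) for g in groups]  # psi_up^group1..5
final_update = aggregate_global(group_updates)        # psi_up^final
print(mean_absolute_error([3.0, -0.5, 2.0], [2.5, 0.0, 2.1]))  # -> 0.3666...
```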
_B. GROUP SIGNATURES OF CLIENTS AND TERMINOLOGY_
The main objective of group signatures in the collaborative learning transaction is to hide the real identity of the parties involved in a transaction. To perform group signatures in a transaction, the members do not require a group manager or middleman. The identity of the signer is disguised by design because the transaction is signed on behalf of the group. The use of the ring signature in collaborative learning transactions is inspired by CryptoNote v2.0 [38]. Untraceability in a transaction can be accomplished by implementing a ring signature, which is also a core piece of technology behind the CryptoNote protocol. Once a group signature of clients has been created, every member of the group is free to use the signature by combining it with his/her private key to disguise the signer's identity. The signer chooses the number of signatures to be included in the transaction in order to disguise the signer's information. The signatures can be chosen freely as long as they are part of the group signatures of clients: Rsgn ∈ Rmb ⩾ 1, where Rsgn is the group signature chosen by the client, and Rmb is the ring members, i.e., the total number of signatures available in the group. The parent keys for each party are derived from a trapdoor permutation function, such as the Rivest–Shamir–Adleman (RSA) cryptosystem, the Rabin cryptosystem, or elliptic curve cryptography (ECC).

**TABLE 1. Summary of notations used.**

Suppose the aggregation server/model provider (Svr^Ag), client 1 Cx1, client 2 Cx2, client n Cxn, and the reward manager Scx are in the same group of the collaborative learning system. Each party has a pair of parent keys (Pubn, Privn). The public key is computed by the encryption yn = gn(xn), where gn is an extended trapdoor permutation function and gn(xn) is defined by fi(xi) = xi² mod ni over {0, 1}^b. A summary of the notations used in this paper is given in Table 1. Eventually, the signature key for every member in our collaborative learning scheme can be defined as follows:
(i) Aggregation server (Svr^Ag) → hash(Pubvr, Privvr) → Pubvr = yvr = gvr(xvr)
(ii) Client 1 (Cx1) → hash(Pubx1, Privx1) → Pubx1 = yx1 = gx1(xx1)
(iii) Client 2 (Cx2) → hash(Pubx2, Privx2) → Pubx2 = yx2 = gx2(xx2)
(iv) Client n (Cxn) → hash(Pubxn, Privxn) → Pubxn = yxn = gxn(xxn)
(v) Reward manager (Scx) → hash(Pubsc, Privsc) → Pubsc = ysc = gsc(xsc)

Since the clients use the signature on behalf of the group, the observer cannot infer any information within the transaction. The signature can be used for any transaction without procuring permission from every member of the group. For instance, clients Cxi,j leverage the keys in (4) to sign a message hash(msg, yvr, yx1, yxn, ..., ysc). After the group signatures of clients are generated, the parties can admit new members to the group by using the members' public keys, Update_Rsgn → Get_New_PubKey ⊕ Rsgn. The parties can also exclude a member's public key straightforwardly from the ring members, Del_Rsgn → Del(Get_PubKeyn) ∈ Rsgn.

$$R_{sgn} \to y_{cx1} \oplus y_{cx2} \oplus y_{cx3} \oplus \cdots \oplus y_{cxn};\quad \{g_{cx1}(x_{cx1}) \oplus g_{cx2}(x_{cx2}) \oplus \cdots \oplus g_{cxn}(x_{cxn})\};\quad \sum_{i=1}^{R_{tot}} g_i(x_i) = g_{cxn}(x_{cxn}) \oplus g_{vr}(x_{vr}) \oplus g_{sc}(x_{sc}) \tag{4}$$
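The ring bookkeeping itself (Update_Rsgn, Del_Rsgn) reduces to managing the set of members' public keys. A minimal sketch follows (our illustration; it uses RSA keypairs from the third-party `cryptography` package to stand in for the trapdoor permutations named above, while the actual ring-signature construction is left to the CryptoNote-style protocol of [38]).

```python
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

def new_parent_keypair():
    """Parent keys (Pub_n, Priv_n) from an RSA trapdoor permutation."""
    priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    return priv, priv.public_key()

def pub_bytes(pub):
    return pub.public_bytes(
        serialization.Encoding.PEM,
        serialization.PublicFormat.SubjectPublicKeyInfo)

# R_sgn: the ring is the set of members' public keys.
parties = {name: new_parent_keypair()
           for name in ["Svr_Ag", "Cx1", "Cx2", "Scx"]}
ring = {name: pub_bytes(pub) for name, (_, pub) in parties.items()}

# Update_Rsgn: admit a new member by its public key alone.
_, cx3_pub = new_parent_keypair()
ring["Cx3"] = pub_bytes(cx3_pub)

# Del_Rsgn: exclude a member without anyone's approval.
del ring["Cx2"]
assert len(ring) >= 1          # R_sgn ∈ R_mb ⩾ 1
```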
Enhancing privacy in CL activities entails terminology that should not be confused with similar entities in a different environment. The CL's base signature algorithm refers to the elliptic curve discrete logarithm problem. A secret_ec-key for each party is a number secα ∈ [1, l − 1], where l is the prime order of the base point in ECC. A public_ec-key is defined as a point pubα = secα · G, with G as a generator. A one-time keypair in the CL's transaction is a set of secret and public_ec-keys. Intuitively, each participant possesses a pair of secret_user_keys (secα, secβ) from a couple of different secret_ec-keys. There is also a pair of tracking_keys (secα, pubβ), which is derived from a secret and a public_ec-key (pubβ = secβ · G and secα ≠ secβ). In conclusion, a pair of public_user_keys (pubα, pubβ) is obtained from the associated private keys (secα, secβ). The public key of the client is enforced as a one-time destination key, with a corresponding one-time private key needed to use the funds. We elaborate on this point in detail in Sections IV-C and IV-D. The structure of the transaction generally persists comparably to the Bitcoin and Ethereum fields. Every participant in CL requests a global model by collecting several independent transaction outputs and signing the transaction with the corresponding secret keys.

_C. USED-MODEL-ONLY TRANSACTIONS_
In Section IV-B, we presented the group signatures of clients and the terminology used in the CL system. Clients may desire to use the deep learning model provided by the aggregation server without acquiring cryptocurrencies as a reward. Algorithm 1 presents the sequence of collaborative learning with the distribution of incentives through the blockchain network. A transaction request Txδ1_req consists of global model info, dynamic rules, and a request statement, Txδ1_req = ψglb(i,j) || rdc_info || "req.", and this transaction is signed using Cx1's private key combined with the group signatures of clients generated in advance, as can be seen in (6). The value of yx1 = gx1(xx1) within the group is calculated using Cx1's private key. Simultaneously, the ring equation of the total group generated in yx1 can be solved using Cxi's private keys xi = g^−1(yi). The client expresses the desired global model in the form of a type and version. In another case, the client may not satisfy the dynamic rules in the sense of the minimum number of datasets set by the model provider, so rewards are not given. Even though no reward is distributed, clients still require their identity to be hidden during training for several purposes. The CL system must disguise communication between the clients and the aggregation server. We call this case used-model-only transactions (UMO-Tx). In the case of UMO-Tx, earning incentives is not the ultimate goal for clients. The CL framework (Section IV-A) is designed to be private and requires an authentication process to join the system. We assume that the clients are legitimate parties who have passed through the official authentication process; in short, the authentication method is beyond the scope of our research. The initial process is identical in that the client makes a transaction request to the model provider by using the group signatures of clients in the transaction. The provider will send the global model if the client's transaction is labeled "true". To use the desired global model ψglb(x), the client Cxn ∈ Cxi,j is required to make a transaction request Txδ1_req addressed to the model provider Svr^Ag. For instance, client 1 Cx1 applies the UMO-Tx mode for a particular purpose.
Cx1 then conceives a group signature of clients by selecting a number of members from the total available. As an illustration, Cx1 takes 25 public keys of clients in sequence, combined with the public key of the aggregation server Svr^Ag and the smart contract manager Scx (available: Rsgn = 100 clients). In this case, the ring signature can be constructed as follows: Rsgn → gvr(xvr) ⊕ gsc(xsc) ⊕ gcx1(xcx1) ⊕ ... ⊕ gcx25(xcx25). The final group of signatures used by the client in the Txδ1_req transaction is signified in (5). Client Cx1 is free to choose the number of members as long as Rsgn ∈ Rmb ⩾ 1.

$$R_{sgn} \to \sum_{i=1}^{R_{sgn}} \Big( \underbrace{g_{cxn}(x_{cxn})}_{\text{Clients}} \oplus \underbrace{g_{vr}(x_{vr})}_{\text{Agg. Server}} \oplus \underbrace{g_{sc}(x_{sc})}_{\text{SC Manager}} \Big) \in R_{mb} \ge 1 \tag{5}$$

$$Tx_{\delta1\_req} = \underbrace{\big\{ \psi_{glb(i,j)} \,\|\, rdc\_info \,\|\, \text{``req.''},\;\; \text{sign with } R_{sgn} \in R_{mb} \ge 1 \,\|\, priv_{x1} \big\}}_{\text{Combining function}} \tag{6}$$

**Algorithm 1** The global model ψglb(x) is provided by the aggregation server Svr^Ag. The model is gradually trained by a set number of clients, locally and privately, in their respective groups Cx(i,j) in G(i,j)

1: procedure MODEL PROVIDER Svr^Ag PERFORMS:
2: Svr^Ag publishes several global models ψglb1, ψglb2, ..., ψglbn
3: Svr^Ag conceives a group signature of clients (e.g., 25 members)
4: Estimates Cxn ∈ Cx(i,j) in groups Gn ∈ G(i,j) *roughly mapping available devices
5: Publishes rdc_info → ∀ψglbn *minimum requirements and rewarding info
6: Sets Tmax^rn → ∀ψglbn
7: 20 devices for each of the five groups → G(1−5) − Cx(01−20)
8: **for group signatures of clients do**
9: Parent private keys of the parties → (Pubn, Privn)
10: The signature for one party is calculated → yn = gn(xn)
11: Ex: Aggregation server (Svr^Ag) → hash(Pubvr, Privvr) → Pubvr = yvr = gvr(xvr)
12: *(pair of parent keys from trapdoor permutation functions)
13: Ex: One group signature → Rsgn → ycx1 ⊕ ycx2 ⊕ ycx3 ⊕ ... ⊕ ycxn
14: (every member of the group is free to use the signature by combining it with his private key
15: to disguise the signer's identity)
16: **end for**
17: **for used-model-only transaction (UMO-Tx) do**
18: (For example, Cx1 as requester, and Svr^Ag as model provider and aggregation server)
19: Cx1 determines the desired global model ψglb(i,j)
20: Cx1 generates a group signature Rsgn ∈ Rmb ⩾ 1
21: Submits Txδ1_req = ψglb(i,j) || rdc_info || "req." *signs it with Rsgn
22: Svr^Ag checks the client's transaction Txδ1_req
23: **end for**
24: **for rewarding mechanism (performed by Svr^Ag) do**
25: (Cx1 believes his updated gradient value ψup^δ1 meets the requirements to be incentivized)
26: Cx1 tenders a new transaction Txδ1_ETH
27: Svr^Ag confirms the client's transaction with the respective updated gradient value ψup^δ1
28: Svr^Ag unpacks Cx1's public keys (Pubα1, Pubβ1)
29: Svr^Ag generates a random r ∈ [1, l − 1] and computes a one-time destination key OTDcx1
30: OTDcx1 is sent over the blockchain network *Cx1 checks every passing tx using Privα1, Privβ1
31: Cx1 can recover the corresponding one-time private key OPKcx1 *one-time private key for spending the reward
32: **end for**
33: (the process is carried out repeatedly until it reaches the maximum round determined by the provider)
34: (each model may require different dynamic rules, which might be distinct)
35: end procedure

Concurrently, the dynamic rules contain information about system and device requirements, the network, and the conditions for reward provision.
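As a small illustration of lines 17–21 of Algorithm 1, the request in (6) can be assembled and hashed as follows. This is our sketch: the field encodings and names are assumptions, and the ring-signing step is only indicated.

```python
import hashlib
import json

def build_tx_req(model_type: str, model_version: str, rdc_info: dict) -> bytes:
    """Assemble Tx_d1_req = psi_glb(i,j) || rdc_info || "req." as in Eq. (6)."""
    model_info = f"{model_type}:{model_version}".encode()
    rules = json.dumps(rdc_info, sort_keys=True).encode()
    return model_info + b"||" + rules + b"||req."

tx_req = build_tx_req("convnet", "1.2",
                      {"min_dataset": 1000, "max_train_minutes": 300})
k = hashlib.sha256(tx_req).digest()   # k keys E_k in the combining function
# sign_with_ring(tx_req, ring_pubkeys, priv_x1) would follow, per Eqs. (5)-(7).
```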
The transaction is computed by the system using the combining function Ck,v, where k is a hash value of Txδ1_req and v is a random glue value picked by the client. This computation assumes a random oracle for the cryptographic hash function, while the client uses k as the key for Ek, as signified in (7):

$$C_{k,v}(y_{x1}, y_{x2}, \ldots, y_{xn}) = E_k\big(y_{xn} \oplus E_k(y_{x(n-1)} \oplus \cdots \oplus E_k(y_{x1} \oplus v)\cdots)\big) \equiv v \tag{7}$$

Clients are also able to add a new signature, Update_Rsgn, or remove a member's signature, Del_Rsgn, as needed without having to obtain approval from the model provider. A transaction request is addressed to the model provider directly through a secure channel that is resistant to eavesdropping and tampering. The model provider unpacks the transaction Txδ1_req sent by Cx1 and verifies whether the signature comes from the group signatures of clients. The integrity of the transaction is also verified promptly by the provider. Finally, the desired global model will be sent to Cx1 by the provider if Txδ1_req meets the condition Rsgn ∈ Rmb ≡ v AND "Txδ1_req" = True, as described in (8):

$$Tx_{\delta1\_req} = \begin{cases} sgn_{i,j} \in R_{mb} \equiv v \;\text{AND}\; Tx_{\delta1\_req} = \text{True} & \to \text{``Approve''}\\ sgn_{i,j} \notin R_{mb} \not\equiv v \;\text{OR}\; Tx_{\delta1\_req} = \text{False} & \to \text{``Decline''} \end{cases} \tag{8}$$

Conclusively, in the event of UMO-Tx, clients can still take advantage of the models provided by the system without having to reveal their identities to the public. Obscuring a transaction request can be achieved because the transaction is signed on behalf of the group. The initial stage of a transaction request is always the same for each client; the difference between them is that the transaction is signed by the private key of each requester combined with the group signatures generated beforehand. From the client's perspective, receiving incentives is not the primary purpose; this could be because the client does not meet the dynamic rules set by the model provider, or due to other limitations. Nevertheless, the clients' identities remain anonymous, and transactions can be carried out securely.
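To expose the mechanics of (7) and of the ≡ v test in (8), the toy sketch below replaces Ek with a SHA-256-derived XOR pad — deliberately insecure and purely illustrative — and shows how a signer solves their own slot so that the ring equation closes to the glue value v. The final trapdoor inversion xs = gs^−1(ys) of the real scheme [38] is omitted.

```python
import hashlib
import secrets

B = 32  # block size in bytes

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def E(k: bytes, m: bytes) -> bytes:
    """Toy invertible E_k: XOR with a key-derived pad (NOT secure;
    a real instantiation uses a proper symmetric cipher)."""
    return xor(m, hashlib.sha256(k + b"pad").digest())

E_inv = E  # XOR-ing the same pad twice undoes E

def C(k: bytes, ys: list, v: bytes) -> bytes:
    """C_{k,v}(y_1..y_n) = E_k(y_n ^ E_k(y_{n-1} ^ ... E_k(y_1 ^ v)))."""
    t = v
    for y in ys:
        t = E(k, xor(y, t))
    return t

k = hashlib.sha256(b"Tx_d1_req").digest()        # k = H(Tx_d1_req)
v = secrets.token_bytes(B)                       # random glue value
n, s = 5, 2                                      # ring size, signer's slot
ys = [secrets.token_bytes(B) for _ in range(n)]  # y_i = g_i(x_i), i != s

t = v                                   # forward: value entering slot s
for i in range(s):
    t = E(k, xor(ys[i], t))
u = v                                   # backward: value leaving slot s
for i in range(n - 1, s, -1):
    u = xor(E_inv(k, u), ys[i])
ys[s] = xor(E_inv(k, u), t)             # solve the signer's slot
assert C(k, ys, v) == v                 # the ring closes: C_{k,v} == v
```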
_D. SECURE DECENTRALIZED REWARDING SCHEMES_
In Section IV-C, we elaborated the used-model-only transaction scheme (UMO-Tx), where the client does not intend to receive incentives but still wants to remain anonymous while using the global model. To receive incentives from the model provider, the client has to meet all the dynamic rule requirements set by the model provider. One of the requirements that should be met by clients is a minimum amount of dataset knowledge δ1 that corresponds to the updated gradient values ψup from training. When all conditions are satisfied and the provider has verified the transaction, incentives can be given to clients securely. This section presents an unlinkable and untraceable incentive mechanism utilizing the CryptoNote protocol and blockchain technology through smart contract features. For ease of understanding, we consider that client Cx1 possesses a sufficient amount of dataset δ1 to be incentivized. The client also meets the dynamic rules in terms of device and network requirements. First, the client deploys a transaction request addressed to the model provider. The client generates a group of signatures by choosing a number of members' public keys, Σ_{i=1}^{Rtot} gi(xi) = gcxn(xcxn) ⊕ gvr(xvr) ⊕ gsc(xsc). The client states the desired global model in the transaction, along with the dynamic rules information. Cx1's transaction request is then signed with the group signatures combined with Cx1's private key. The model provider checks every incoming transaction and unpacks the transaction request from Cx1. The provider will send the global model to Cx1 only if the transaction satisfies the condition signified in (8). The conditions applied by the provider can vary significantly for each global model.

1) DEPLOYING TRANSACTION Txδ1_ETH VIA SCs
In order to be incentivized, Cx1 is required to submit a new transaction through a private Ethereum SC, which is denoted as Txδ1_ETH. This transaction represents the global model type and version used, ψglb(info), the gradient value of the training results, ψup^δ1, and the dataset's knowledge, δ1knowledge. The primary difference in the transaction Txδ1_ETH is that client Cx1 inserts a pair of public keys (Pubα1, Pubβ1). Public key Pubα1 is generated from Cx1's private key Privα1 combined with a base point/generator Gα as follows: Pubα1 → Privα1 · Gα. Meanwhile, public key Pubβ1 is derived from another of Cx1's private keys, Privβ1, with its own base point/generator: Pubβ1 → Privβ1 · Gβ, where Privα1 ≠ Privβ1 AND Gα ≠ Gβ, as given in (9):

$$Tx_{\delta1\_ETH} = \begin{cases} \psi_{glb(info)} \,\|\, \psi_{up}^{\delta1} \,\|\, \delta_1\text{knowledge} \\ Cx_1\text{'s PubKeys} \to Pub_{\alpha1}, Pub_{\beta1} \\ \{\text{sign with } R_{sgn} \in R_{mb} \ge 1 \,\|\, priv_{x1}\} \end{cases} \tag{9}$$

where Privα1 ≠ Privβ1 AND Gα ≠ Gβ (applied to all clients with their respective G).

The pair of public keys (Pubα1, Pubβ1) attached to an Ethereum SC transaction Txδ1_ETH has respective purposes. The first public key of Cx1 (Pubα1) is used together with the model provider's random data r, where R = r · G, as part of the Diffie–Hellman key exchange concept in a transaction. In this sense, the sender and receiver each use half of the information, which can be decrypted using the recipient's secret key. On the other hand, the second public key (Pubβ1) is employed as a tracking key. Client Cx1 will search for a transaction in the blockchain network sent by the provider by checking against Pubβ1, so that the client can recognize that the transaction is intended for him. Transactions sent by the provider contain funds or cryptocurrency that can be used by the legitimate client only.

2) ONE-TIME DESTINATION KEY AND ONE-TIME PRIVATE KEY
Before a specified amount of cryptocurrency is transferred through the blockchain network, the provider first checks the transaction Txδ1_ETH claimed by the client. If all conditions are met (labeled as "True"), then the pair of public keys of Cx1 is unpacked. The provider then chooses a random scalar r ∈ [1, l − 1] and computes a one-time destination key OTDcx1 addressed to client Cx1, as shown in (10):

$$OTD_{cx1} = H_s(r \cdot Pub_{\alpha1}) \cdot G + Pub_{\beta1}, \quad \text{where } r \text{ is } Svr^{Ag}\text{'s random data and } R = r \cdot G \tag{10}$$

$$OPK_{cx1} = H_s(Priv_{\alpha1} \cdot R) + Priv_{\beta1} \tag{11}$$

When the transaction carrying OTDcx1 is sent over the blockchain network, client Cx1 checks every passing transaction using the private keys Privα1 and Privβ1. Cx1 can recover the corresponding one-time private key to use the funds/cryptocurrency because only Cx1 has knowledge of Privα1 and Privβ1. Client Cx1's one-time private key is signified in (11). Correctness follows from the Diffie–Hellman relation Hs(Privα1 · R) = Hs(Privα1 · r · G) = Hs(r · Pubα1), so that OPKcx1 · G = OTDcx1. In the original CryptoNote protocol, the one-time private key is also used as part of a ring signature to disguise the signer's identity. Eventually, a key image can prevent double-spending intentions from malicious clients.
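Equations (10) and (11) can be exercised end-to-end with plain elliptic curve arithmetic. The sketch below is our instantiation over secp256k1 (chosen only for compactness; CryptoNote itself uses Ed25519, and for simplicity a single base point G is used, whereas (9) allows distinct generators), with Hs realized as SHA-256 reduced modulo the group order. It is illustrative, not constant-time, and not production code.

```python
import hashlib
import secrets

# secp256k1 domain parameters
P = 2**256 - 2**32 - 977
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def ec_add(p, q):
    if p is None: return q
    if q is None: return p
    (x1, y1), (x2, y2) = p, q
    if x1 == x2 and (y1 + y2) % P == 0:
        return None                                   # point at infinity
    if p == q:
        m = (3 * x1 * x1) * pow(2 * y1, P - 2, P) % P  # tangent slope
    else:
        m = (y2 - y1) * pow(x2 - x1, P - 2, P) % P     # chord slope
    x3 = (m * m - x1 - x2) % P
    return (x3, (m * (x1 - x3) - y1) % P)

def ec_mul(k, p):
    r = None                                           # double-and-add
    while k:
        if k & 1:
            r = ec_add(r, p)
        p = ec_add(p, p)
        k >>= 1
    return r

def Hs(point):                                         # hash-to-scalar H_s
    x, y = point
    d = hashlib.sha256(x.to_bytes(32, "big") + y.to_bytes(32, "big")).digest()
    return int.from_bytes(d, "big") % N

# Cx1's long-term pairs: (Priv_a1, Pub_a1) for the DH half, (Priv_b1, Pub_b1) as tracking pair
priv_a = secrets.randbelow(N - 1) + 1; pub_a = ec_mul(priv_a, G)
priv_b = secrets.randbelow(N - 1) + 1; pub_b = ec_mul(priv_b, G)

# Provider side, Eq. (10): random r, R = rG, OTD = Hs(r*Pub_a)*G + Pub_b
r = secrets.randbelow(N - 1) + 1
R = ec_mul(r, G)
otd = ec_add(ec_mul(Hs(ec_mul(r, pub_a)), G), pub_b)

# Client side, Eq. (11): OPK = Hs(Priv_a*R) + Priv_b recovers the destination
opk = (Hs(ec_mul(priv_a, R)) + priv_b) % N
assert ec_mul(opk, G) == otd       # only Cx1 can derive the spending key
```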
_E. RESEARCH DESIGN LIMITATIONS AND ASSUMPTIONS_
1) A HARD FORK (RADICAL CHANGE) REQUIREMENT
The Ethereum platform provides a decentralized ecosystem (see Figure 6) for developers to create products using the Ethereum Virtual Machine (EVM). The EVM is powerful and embedded within each full blockchain node by design. The smart contract byte-codes are executed through an EVM. Interacting with the EVM via smart contracts is likely to be more costly than with traditional servers; nevertheless, numerous use cases favor the EVM over conventional servers. However, our proposed schemes cannot be employed entirely in the Ethereum network because they require a hard fork (radical change) to be applied to the entire network. A hard fork in Ethereum requires a radical change to the network protocol, which can alter the entire course of transactions. Therefore, we provide several performance estimates for transactions through the EVM, which are detailed in Section V.

**FIGURE 6. Ethereum VM on blockchain.**

2) SYNCHRONOUS DISTRIBUTED LEARNING AND COMPUTING
Theoretically, collaborative learning requires a massive number of devices with various capabilities to generate an aggregation value. Likewise, the datasets used in local training are distinct from one another. In terms of simulation, it is not straightforward to achieve all of these requirements. For these reasons, we set up simultaneous computation with multiple computing devices over shared memory using the same dataset, derived from [39]. Regardless of the outlined limitations and assumptions, the critical points of the simulation output were successfully collected.

**V. PERFORMANCE RESULTS, COMPARISONS, AND LESSONS LEARNED**
_A. EXPERIMENTAL SETUP_
As discussed in the previous sections, we propose that the model provider also take the role of an aggregation server, which calculates the most recent gradient values obtained from multiple groups. The model provider constructs an AI model using a convolutional neural network (ConvNet) to analyze visual imagery from the private datasets of several clients. The deep learning model used was not our primary focus in this research. Thus, we implement a straightforward ConvNet model with a convolution layer and a rectified linear unit (ReLU) as the activation function, plus a pooling layer (added after the convolutional layer). Finally, in the classification part of the ConvNet, several core components are adopted, such as flatten, fully connected, and softmax functions. Inside the first layer of the ConvNet, we set the input to be 1 channel, the output to be 25 channels, the kernel size to be 5, and the stride to be 1. The fully connected layer has a fixed-size input feature map of 4 × 4 with 50 channels combined. Meanwhile, the ConvNet training loader's batch size is arranged to be 50 data samples, with 1,000 samples for testing (out of 60,000 training samples [39]) that have been size-normalized by default.
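The paper does not name the deep learning framework; a PyTorch rendering of the ConvNet just described might look as follows. The second convolution (25 → 50 channels) is our assumption, chosen so that two conv-and-pool stages reduce a 28 × 28 input to the stated 4 × 4 × 50 feature map.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvNet(nn.Module):
    """Sketch of the local model: conv (1->25, 5x5, stride 1) per the text,
    an assumed conv (25->50), pooling, and a fully connected classifier."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 25, kernel_size=5, stride=1)   # 28 -> 24
        self.conv2 = nn.Conv2d(25, 50, kernel_size=5, stride=1)  # 12 -> 8
        self.fc = nn.Linear(4 * 4 * 50, num_classes)             # 4x4x50 in

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)   # 24 -> 12
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)   # 8  -> 4
        x = torch.flatten(x, 1)
        return F.log_softmax(self.fc(x), dim=1)

model = ConvNet()
print(model(torch.zeros(50, 1, 28, 28)).shape)   # batch of 50 -> (50, 10)
```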
Finally, collaborative learning and blockchain performance tests were carried out on an Intel(R) Core(TM) i7-7700 CPU @ 3.60 GHz with 16.0 GB of RAM. For the blockchain network, we use the Ethereum smart contract through the Ganache (Truffle Suite) platform to run inquiries, execute commands, and inspect the transactions' state, with all required dependencies installed. The clients' and provider's account addresses are obtained from Ganache, as are the parties' private keys. The blockchain network is implemented on the remote procedure call (RPC) server http://127.0.0.1:7545 with automining mode. The gas price was set to 20,000,000,000 Wei, and the gas limit was set to 6,721,975 by default. Ganache provides 100 Ether per account by default; with this amount of Ether, clients are able to conduct various transactions on Ethereum. The parties' identities are managed by a crypto wallet called MetaMask, which is an extension for accessing Ethereum and a gateway to blockchain applications.
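For reproducibility, the Ganache session described above can be driven programmatically; the following is a minimal web3.py sketch (v6 naming; our illustration, not the authors' tooling).

```python
from web3 import Web3

# Local Ganache RPC endpoint used in the experiments
w3 = Web3(Web3.HTTPProvider("http://127.0.0.1:7545"))
assert w3.is_connected()

accounts = w3.eth.accounts            # unlocked accounts, 100 ETH each
print(w3.eth.gas_price)               # 20000000000 wei by default

# Bare value transfer (a deployed SC call would go through its ABI instead)
tx_hash = w3.eth.send_transaction({
    "from": accounts[1],                        # e.g., client Cx1
    "to": accounts[0],                          # e.g., the SC manager
    "value": w3.to_wei(0.001, "ether"),         # a reward-sized amount
})
receipt = w3.eth.wait_for_transaction_receipt(tx_hash)
print(receipt.gasUsed)   # 21000 for a bare transfer; SC calls give Table 2's figures
```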
_B. UNLINKABLE COLLABORATIVE LEARNING PERFORMANCE_
Collaborative learning activities in this research are performed in a synchronous distributed learning and computing environment, where multiple computing devices share the same memory and computing resources. This can be described as a single machine that executes some commands simultaneously [40], where the execution tasks are divided separately. Therefore, training activities can be completed nearly at the same time, in a parallel form. Detailed information on the training methods can be found in Section IV-A, and the environmental setup is presented in Section V-A. Collaborative learning activities are carried out within each group using the same model and training data. In real-world implementations, the dataset is closed to the public. To use the global model, each client first makes a Txδ1_req transaction addressed to the provider. The leader of each group collects the updated gradient values from all clients within the group and eventually calculates the group's updated gradient values ψup^group. The final aggregated gradient values ψup^final are computed by the provider, which later redistributes the newly updated model back to the clients. The unlinkable collaborative training activity is performed for one round. Figure 7(a) illustrates the log loss for each group during training activities, while Figure 7(b) shows the length of time taken to complete the training. Comprehensively, training activities improve with increasing cycles, as is the nature of training a single deep learning model. However, we find that the training activities in Group 1 are slightly better than those in the other groups in terms of log loss (1.1785, 1.4285, 1.5585, 1.6485, and 1.7585, respectively).

**FIGURE 7. (a) Visualization of the log loss for each group in CL; (b) the length of time taken to complete the training in one round.**

Correspondingly, the time required to complete the collaborative learning activities also varies for each group, even though the command is executed simultaneously over shared memory. Based on the performance results collected, Group 1 completed training moderately fast (272.92 minutes) compared to Group 2, which in turn completed the task a little faster than Group 3 (276.62 and 280.91 minutes, respectively). The longest times were experienced by Group 4 and Group 5, which required nearly 295 minutes to complete the task. This sort of phenomenon can occur because we placed Group 1, Group 2, and the other groups in an order that allows them to take priority in execution on the computer. However, the results might be significantly different in a real-world implementation with different datasets, types of networks, and computing device capabilities.

The average training accuracy for all groups is illustrated in Figure 8(a), where the maximum size of the training data is 10,000 images. The green line separates the higher half from the lower half of the training sample. Figure 8(b) depicts the distribution points of the average loss of the collaborative learning activities, which are derived from the calculation results of all groups; the red horizontal line is the middle number in the sorted log loss derived from training. Figure 8(c) shows the heatmap of the average loss per cycle. The average log loss value is considerably high from the first cycle up to the 40th percentile, ranging from 2.0115 to 2.7365. Nevertheless, the accuracy increased gradually as the number of cycles increased, reaching 91.776% on average over the clients' combined total. The log loss among the groups is not enormously distinct because the environmental setup is comparable from one group to another. For better data analysis of the performance results, a variety of network types and device capabilities are required (as future endeavors).

**FIGURE 8. (a) Accuracy (out of 10k training data); (b) Distribution points of the average loss of collaborative learning; (c) Heatmap of average loss per cycle.**

_C. GROUP SIGNATURES OF CLIENTS AND DECENTRALIZED REWARDING PERFORMANCE_
A secure decentralized reward scheme is constructed by deploying Txδ1_ETH through the Ethereum smart contract. The client must meet all the requirements set by the model provider. The client also publishes a pair of public keys within the transaction. The client's public key (Pubα1) is used together with the provider's random data r, where R = r · G, and the public key (Pubβ1) serves as a tracking key in the blockchain network. We describe this scheme in detail in Section IV-D. When the model provider confirms the Txδ1_ETH transaction, the incentive is given to the client by unpacking the client's public keys (Pubα1, Pubβ1). A fair incentive system can motivate clients to behave honestly without worrying about receiving incentives appropriate to their contribution. Furthermore, incentives are propagated securely with unlinkable and untraceable transactions. To estimate the amount of gas and Ether used in a transaction, we recorded 10 different transactions from 10 consecutive clients. Transactions between clients are differentiated by the amount of arbitrary value input in the smart contract. The input in transaction Txδ1_ETH comprises information about the global model used, the training result (gradient values), information about the dataset used, and a pair of public keys from the client, signed with the group signatures of clients selected as desired. Therefore, for ease of presentation, we set Client 1 to input the smallest arbitrary values in the smart contract, followed by Client 2 with more input than Client 1. In particular, Client 10 is the client with the longest arbitrary input.

**TABLE 2. Summary of the cumulative gas used in the transaction.**

A summary of the cumulative gas used in the transactions is displayed in Table 2. We record the gas used by the clients and by the smart contract manager, which is also part of the model provider's role in distributing the reward to the clients. The cumulative gas from the clients is used in the transaction Txδ1_ETH, while the gas of the smart contract manager is used to distribute the reward. The lowest gas consumption in a transaction occurred in the first transaction by Client 1, which is 96,388 units.
Meanwhile, the highest gas consumption occurred in the 10th transaction, amounting to 106,921 units, with an average overall transaction rate of 100,960 units and the gas limits automatically adjusted by the system. The various arbitrary inputs, with different sizes for each client, cause this difference; therefore, the amount of gas used varied. On the smart contract manager's side, the difference is not significant because there is no notable difference in distributing rewards to clients. In terms of the group signature of clients directly affecting the gas fee, Ether is used because the clients are required to state the information of the signature used in transaction Txδ1_ETH. The more combinations of public keys used in the transaction, the more resilient the transaction will be against the observer's efforts to find out its contents. However, the use of a large number of signatures raises a trade-off in terms of execution time, which becomes longer, and vice versa. There is considerable research seeking better ways of using a ring signature scheme, such as proposed in [41], [42], and [43].

**TABLE 3. Benchmark of the spent Ether in transaction (Tx).**

The amount of Ether spent deploying a transaction is summarized in Table 3. Every client and provider in the CL possesses 100.00 Ether by default. The Ether can be used to deploy a transaction in the Ethereum network, so the amount of Ether issued and earned can be inspected through the Ganache interface. The lowest Ether consumption occurred in the first transaction, which spent 4.43 × 10^−3 ETH. The last transaction, with the greatest input, spent 5.67 × 10^−3 ETH. The amount of Ether spent on each transaction is associated with the amount of gas consumed. Meanwhile, the Ether on the smart contract manager's side is shown to be larger because the results have been accumulated with the rewards for each client (a combination of rewards and transaction costs). The amount of the reward can be set freely by the model provider (from 0.001 ETH to 0.1 ETH). Therefore, we focus only on transactions made by clients. Eventually, all transaction information is shown in Figure 9.

**FIGURE 9. (a) Comparison of gas units between clients and smart contract manager; (b) The amount of Ether spent in a transaction.**

_D. COMPARATIVE ANALYSIS_
With the emergence of federated learning, which can overcome privacy issues by design, and blockchain technology, which is immutable, the merger of these two technologies has begun to be studied by researchers and industries. Research on the use of blockchain in federated learning regularly influences methods of distributing incentives in which, thanks to blockchain technology, third parties are not required to be involved in the transaction. By design, a blockchain-based incentive mechanism is suitable for use in the federated learning system because the data structure in a decentralized ledger is append-only, such that the data records cannot be altered or deleted. Furthermore, blockchain relies upon protected cryptography to secure data records (in chronological order with a timestamp). We analyze our research results against various prior studies, with multiple platforms and objectives, summarized in Table 4. Previous studies were selected in terms of combining collaborative learning technology and blockchain as the backbone of the incentive scheme in the decentralized learning approach.
Other advantages of blockchain, such as the transparency and traceability of transactions, are not desirable in a distributed learning system that processes sensitive data. Many studies have shown that decentralized training allows for data leakage by observers under certain assumptions. Observers under the marked assumptions presented in [49] can deploy white-box and black-box attacks by enrolling an engineered term that is capable of producing good performance (the primary task) but is also capable of leaking clients' training data (the malicious task). Moreover, an implementation of blockchain technology that is transparent and traceable exacerbates the possibility of data leakage, allowing the observer to associate each transaction with an openly viewable data owner. Therefore, a secure, unlinkable, and untraceable incentive distribution mechanism is necessary for a collaborative learning system, as part of our objective in this research.

**TABLE 4. Performance benchmark with several of the existing approaches on unlinkable collaborative learning and decentralized rewarding in CL.**

Table 4 presents a comparison with several studies utilizing blockchain-based incentives in decentralized learning systems. The types of platforms and methods used vary, but we focus on three key points, namely the additional privacy, unlinkable transactions, and untraceable transactions provided by the authors. The research in [44] proposed a similar approach and platform to ours, but the transactions are still traceable and can be linked by the observer. Likewise, the schemes proposed in [13], [17], and [33] also preserve additional privacy to protect the client's identity and to make sure the transaction is carried out securely; yet, the authors do not focus on the linkable and traceable transaction concerns in decentralized learning with blockchain-based incentives. Meanwhile, the research papers in [11], [14], and [45]–[48] proposed similar approaches, but there is no information about additional privacy in decentralized learning, nor about unlinkable and untraceable transactions, which can jeopardize the clients' identity and data. Our proposed approach successfully satisfies the three key points. First, we designed additional privacy to protect the identity of the clients. Second, we designed unlinkable transactions that provide a sense of security for the client. Finally, we designed transactions that cannot be tracked by observers. In terms of training accuracy, our results are no better than those of previous studies because the decentralized learning and algorithms used are not our primary focus in this research; we concentrate on designing transactions that are secure, unlinkable, and untraceable by observers.

**VI. CHALLENGES AND CONSIDERATIONS**
We divide this section into two points: the considerations of the aggregation server's roles, and future directions of collaborative learning with the blockchain-based incentive mechanism for a variety of similar applications.

_A. AGGREGATION SERVERS CHALLENGES AND CONSIDERATIONS_
Protecting users' privacy is the fundamental premise of our proposed schemes. Aggregation servers collect the updated gradient values derived from multiple users. The server then calculates the aggregation values as representative of the updated model. The aggregation servers also play a role in determining the amount of cryptocurrency for the contributing users.
In this sense, a large number of transactions might burden the servers, which can reduce system effectiveness, because the server is still in a centralized form affected by bottleneck issues. Moreover, aggregation servers are assumed to be semi-trusted parties [50]. The whole transaction process is in a decentralized form, except for the aggregation process of the updated gradient values. Hence, as a potential direction, we put forward the role of the aggregation server to be empowered by a blockchain approach that does not rely on a single third party to manage a transaction.

_B. CHALLENGES AND CONSIDERATIONS IN COLLABORATIVE LEARNING WITH BLOCKCHAIN-BASED INCENTIVE_
The objective of consolidating collaborative learning and blockchain-based incentives is to enable multiple users at different geographical locations to improve an AI model with a reasonable incentive for the contributing users. Fundamental schemes of collaborative learning with smart contracts can be directly implemented for general data that are deemed insensitive to the users. Our proposed schemes empower users to manage training activities confidentially, even for private data. However, the following challenges should be considered. (i) The availability of end-users. Even though a commensurate incentive scheme has been designed to
Finally, the overall results indicate that our schemes satisfy the design goals. Apart from the merits of the given scheme, the role of the centralized aggregation server in computing the gradient values remains a long-term concern. The aggregation server is likely to suffer from bottleneck issues and become a single point of failure (SPoF), which is inherent to the centralized approach. Therefore, in the near future, we emphasize the replacement of centralized aggregation servers with distributed computing parties based on blockchain technology.

**REFERENCES**

[1] A. Koloskova, S. Stich, and M. Jaggi, ''Decentralized stochastic optimization and gossip algorithms with compressed communication,'' in Proc. Int. Conf. Mach. Learn., 2019, pp. 3478–3487.
[2] S. Nakamoto, ''Bitcoin: A peer-to-peer electronic cash system,'' Manubot, Madison, WI, USA, Tech. Rep., 2019.
[3] J. Konečný, H. B. McMahan, F. X. Yu, P. Richtárik, A. T. Suresh, and D. Bacon, ''Federated learning: Strategies for improving communication efficiency,'' 2016, arXiv:1610.05492. [Online]. Available: http://arxiv.org/abs/1610.05492
[4] P. Garcia, ''Biometrics on the blockchain,'' Biometric Technol. Today, vol. 2018, no. 5, pp. 5–7, May 2018.
[5] V. Buterin, ''Ethereum white paper,'' GitHub Repository, vol. 1, pp. 22–23, 2013.
[6] B. K. Mohanta, D. Jena, S. S. Panda, and S. Sobhanayak, ''Blockchain technology: A survey on applications and security privacy challenges,'' Internet Things, vol. 8, Dec. 2019, Art. no. 100107.
[7] A. N. Bhagoji, S. Chakraborty, P. Mittal, and S. Calo, ''Analyzing federated learning through an adversarial lens,'' in Proc. Int. Conf. Mach. Learn., 2019, pp. 634–643.
[8] L. Melis, C. Song, E. De Cristofaro, and V. Shmatikov, ''Exploiting unintended feature leakage in collaborative learning,'' in Proc. IEEE Symp. Secur. Privacy (SP), May 2019, pp. 691–706.
[9] M. D. Mansh, A. Nguyen, and K. A. Katz, ''Improving dermatologic care for sexual and gender minority patients through routine sexual orientation and gender identity data collection,'' JAMA Dermatol., vol. 155, no. 2, pp. 145–146, 2019.
[10] A. Mackenzie, S. Noether, and M. C. Team, ''Improving obfuscation in the cryptonote protocol,'' Monero Res. Lab, Tech. Rep., 2015.
[11] J. Kang, Z. Xiong, D. Niyato, S. Xie, and J. Zhang, ''Incentive mechanism for reliable federated learning: A joint optimization approach to combining reputation and contract theory,'' IEEE Internet Things J., vol. 6, no. 6, pp. 10700–10714, Dec. 2019.
[12] S. Awan, F. Li, B. Luo, and M. Liu, ''Poster: A reliable and accountable privacy-preserving federated learning framework using the blockchain,'' in Proc. Conf. Comput. Commun. Secur. (ACM SIGSAC), Nov. 2019, pp. 2561–2563.
[13] J. Weng, J. Weng, J. Zhang, M. Li, Y. Zhang, and W. Luo, ''DeepChain: Auditable and privacy-preserving deep learning with blockchain-based incentive,'' IEEE Trans. Dependable Secure Comput., early access, [Nov. 8, 2019, doi: 10.1109/TDSC.2019.2952332.](http://dx.doi.org/10.1109/TDSC.2019.2952332)
[14] X. Bao, C. Su, Y. Xiong, W. Huang, and Y. Hu, ''FLChain: A blockchain for auditable federated learning with trust and incentive,'' in Proc. 5th Int. Conf. Big Data Comput. Commun. (BIGCOM), Aug. 2019, pp. 151–159.
[15] P. K. Sharma, J. H. Park, and K. Cho, ''Blockchain and federated learning-based distributed computing defence framework for sustainable society,'' Sustain. Cities Soc., vol. 59, Aug. 2020, Art. no. 102220.
[16] Y. Qu, L. Gao, T. H. Luan, Y. Xiang, S. Yu, B. Li, and G.
Zheng, ‘‘Decentralized privacy using blockchain-enabled federated learning in fog computing,’’ IEEE Internet Things J., vol. 7, no. 6, pp. 5171–5183, Jun. 2020. [17] Z. Li, J. Liu, J. Hao, H. Wang, and M. Xian, ‘‘CrowdSFL: A secure crowd computing framework based on blockchain and federated learning,’’ _Electronics, vol. 9, no. 5, p. 773, May 2020._ [18] M. Shayan, C. Fung, C. J. M. Yoon, and I. Beschastnikh, ‘‘Biscotti: A ledger for private and secure peer-to-peer machine learning,’’ 2018, _arXiv:1811.09904. [Online]. Available: http://arxiv.org/abs/1811.09904_ [19] C. Fung, C. J. M. Yoon, and I. Beschastnikh, ‘‘Mitigating sybils in federated learning poisoning,’’ 2018, arXiv:1808.04866. [Online]. Available: http://arxiv.org/abs/1808.04866 [20] E. Bagdasaryan, A. Veit, Y. Hua, D. Estrin, and V. Shmatikov, ‘‘How to backdoor federated learning,’’ in Proc. Int. Conf. Artif. Intell. Statist., 2020, pp. 2938–2948. [21] S. Savazzi, M. Nicoli, and V. Rampa, ‘‘Federated learning with cooperating devices: A consensus approach for massive IoT networks,’’ IEEE Internet _Things J., vol. 7, no. 5, pp. 4641–4654, May 2020._ [22] S. A. Osia, A. S. Shamsabadi, A. Taheri, K. Katevas, H. R. Rabiee, N. D. Lane, and H. Haddadi, ‘‘Privacy-preserving deep inference for rich user data on the cloud,’’ 2017, arXiv:1710.01727. [Online]. Available: http://arxiv.org/abs/1710.01727 [23] D. Sangeetha and V. Vaidehi, ‘‘A secure cloud based personal health record framework for a multi owner environment,’’ Ann. Telecommun., vol. 72, nos. 1–2, pp. 95–104, Feb. 2017. [24] S. Dhar, J. Guo, J. Liu, S. Tripathi, U. Kurup, and M. Shah, ‘‘On-device machine learning: An algorithms and learning theory perspective,’’ 2019, _arXiv:1911.00623. [Online]. Available: http://arxiv.org/abs/1911.00623_ ----- [25] Q. Xu, Z. Su, S. Yu, and Y. Wang, ‘‘Trust based incentive scheme to allocate big data tasks with mobile social cloud,’’ IEEE Trans. Big Data, early [access, Oct. 23, 2017, doi: 10.1109/TBDATA.2017.2764925.](http://dx.doi.org/10.1109/TBDATA.2017.2764925) [26] E. Kuada and H. Olesen, ‘‘Incentive mechanisms for opportunistic cloud computing services,’’ in Proc. 8th IEEE Int. Conf. Collaborative Comput., _Netw., Appl. Worksharing, Oct. 2012, pp. 127–136._ [27] S. Zou, J. Xi, S. Wang, Y. Lu, and G. Xu, ‘‘Reportcoin: A novel blockchainbased incentive anonymous reporting system,’’ IEEE Access, vol. 7, pp. 65544–65559, 2019. [28] Y. Wang, Z. Su, and N. Zhang, ‘‘BSIS: Blockchain-based secure incentive scheme for energy delivery in vehicular energy network,’’ IEEE Trans. Ind. _Informat., vol. 15, no. 6, pp. 3620–3631, Jun. 2019._ [29] B. Jia, T. Zhou, W. Li, Z. Liu, and J. Zhang, ‘‘A blockchain-based location privacy protection incentive mechanism in crowd sensing networks,’’ _Sensors, vol. 18, no. 11, p. 3894, Nov. 2018._ [30] L. Li, J. Liu, L. Cheng, S. Qiu, W. Wang, X. Zhang, and Z. Zhang, ‘‘CreditCoin: A privacy-preserving blockchain-based incentive announcement network for communications of smart vehicles,’’ IEEE Trans. Intell. _Transp. Syst., vol. 19, no. 7, pp. 2204–2220, Jul. 2018._ [31] K. Bonawitz, H. Eichner, W. Grieskamp, D. Huba, A. Ingerman, V. Ivanov, C. Kiddon, J. Konečný, S. Mazzocchi, H. B. McMahan, T. Van Overveldt, D. Petrou, D. Ramage, and J. Roselander, ‘‘Towards federated learning at scale: System design,’’ 2019, arXiv:1902.01046. [Online]. Available: http://arxiv.org/abs/1902.01046 [32] L. U. Khan, S. R. Pandey, N. H. Tran, W. Saad, Z. Han, M. N. H. Nguyen, and C. S. 
Hong, ‘‘Federated learning for edge networks: Resource optimization and incentive mechanism,’’ 2019, arXiv:1911.05642. [Online]. Available: http://arxiv.org/abs/1911.05642 [33] Y. Qu, S. R. Pokhrel, S. Garg, L. Gao, and Y. Xiang, ‘‘A blockchained federated learning framework for cognitive computing in industry 4.0 networks,’’ IEEE Trans. Ind. Informat., vol. 17, no. 4, pp. 2964–2973, Apr. 2021. [34] S. Rahmadika, S. Noh, K. Lee, B. J. Kweka, and K.-H. Rhee, ‘‘The dilemma of parameterizing propagation time in blockchain P2P network,’’ J. Inf. Process. Syst., vol. 16, no. 3, pp. 699–717, 2020. [35] B. Developers. Difficulty in Bitcoin (BTC). Accessed: Aug. 5, 2020. [Online]. Available: https://btc.com/stats/diff [36] K. Sharad, G. Karame, and G. A. Marson, ‘‘System for secure federated learning,’’ US Patent App. 16 296 380, Sep. 10, 2020. [37] Y. Lu, X. Huang, Y. Dai, S. Maharjan, and Y. Zhang, ‘‘Blockchain and federated learning for privacy-preserved data sharing in industrial IoT,’’ _IEEE Trans. Ind. Informat., vol. 16, no. 6, pp. 4177–4186, Jun. 2020._ [38] S. Noether, S. Noether, and A. Mackenzie, ‘‘A note on chain reactions in traceability in cryptonote 2.0,’’ Res. Bull. MRL-0001. Monero Res. Lab, vol. 1, pp. 1–8, Sep. 2014. [39] L. Deng, ‘‘The MNIST database of handwritten digit images for machine learning research [best of the Web],’’ IEEE Signal Process. Mag., vol. 29, no. 6, pp. 141–142, Nov. 2012. [40] B. Barney, ‘‘Introduction to parallel computing,’’ Lawrence Livermore Nat. _Lab., vol. 6, no. 13, p. 10, Dec. 2010._ [41] T. H. Yuen, J. K. Liu, M. H. Au, W. Susilo, and J. Zhou, ‘‘Efficient linkable and/or threshold ring signature without random oracles,’’ Comput. _J., vol. 56, no. 4, pp. 407–421, Apr. 2013._ [42] F. Zhang and K. Kim, ‘‘ID-based blind signature and ring signature from pairings,’’ in Proc. Int. Conf. Theory Appl. Cryptol. Inf. Secur. Berlin, Germany: Springer, 2002, pp. 533–547. [43] C. A. Melchor, P.-L. Cayrel, P. Gaborit, and F. Laguillaumie, ‘‘A new efficient threshold ring signature scheme based on coding theory,’’ IEEE _Trans. Inf. Theory, vol. 57, no. 7, pp. 4833–4842, Jul. 2011._ [44] S. Rahmadika and K.-H. Rhee, ‘‘Reliable collaborative learning with commensurate incentive schemes,’’ in Proc. IEEE Int. Conf. Blockchain _(Blockchain), Nov. 2020, pp. 496–502._ [45] Y. Liu, Z. Ai, S. Sun, S. Zhang, Z. Liu, and H. Yu, ‘‘Fedcoin: A peerto-peer payment system for federated learning,’’ in Federated Learning. Cham, Switzerland: Springer, 2020, pp. 125–138. [46] S. Fan, H. Zhang, Y. Zeng, and W. Cai, ‘‘Hybrid blockchain-based resource trading system for federated learning in edge computing,’’ IEEE Internet _Things J., vol. 8, no. 4, pp. 2252–2264, Feb. 2021._ [47] L. U. Khan, S. R. Pandey, N. H. Tran, W. Saad, Z. Han, M. N. H. Nguyen, and C. S. Hong, ‘‘Federated learning for edge networks: Resource optimization and incentive mechanism,’’ IEEE Commun. Mag., vol. 58, no. 10, pp. 88–93, Oct. 2020. [48] S. Rahmadika and K.-H. Rhee, ‘‘Merging collaborative learning and blockchain: Privacy in context,’’ in Proc. Korea Inf. Process. Soc. _Conf. Seoul, South Korea: Korea Information Processing Society, 2020,_ pp. 228–230. [49] C. Song, T. Ristenpart, and V. Shmatikov, ‘‘Machine learning models that remember too much,’’ in Proc. ACM SIGSAC Conf. Comput. Commun. _Secur., Oct. 2017, pp. 587–601._ [50] S. Truex, N. Baracaldo, A. Anwar, T. Steinke, H. Ludwig, R. Zhang, and Y. Zhou, ‘‘A hybrid approach to privacy-preserving federated learning,’’ in _Proc. 12th ACM Workshop Artif. 
Intell. Secur. (AISec), 2019, pp. 1–11._ [51] T. Li, A. K. Sahu, A. Talwalkar, and V. Smith, ‘‘Federated learning: Challenges, methods, and future directions,’’ IEEE Signal Process. Mag., vol. 37, no. 3, pp. 50–60, May 2020. [52] J. Mills, J. Hu, and G. Min, ‘‘Communication-efficient federated learning for wireless edge intelligence in IoT,’’ IEEE Internet Things J., vol. 7, no. 7, pp. 5986–5994, Jul. 2020. [53] Z. Zheng, S. Xie, H.-N. Dai, W. Chen, X. Chen, J. Weng, and M. Imran, ‘‘An overview on smart contracts: Challenges, advances and platforms,’’ _Future Gener. Comput. Syst., vol. 105, pp. 475–491, Apr. 2020._ SANDI RAHMADIKA received the dual master’s degree in engineering from Institut Teknologi Bandung (ITB), Indonesia, and Pukyong National University (PKNU), South Korea, in 2016, where he is currently pursuing the Ph.D. degree with the Laboratory of Information Security and Internet Applications (LISIA). His research interests include applied cryptography, privacy preservation in the decentralized systems, and AI with blockchain integration. KYUNG-HYUNE RHEE (Member, IEEE) received the M.S. and Ph.D. degrees from the Korea Advanced Institute of Science and Technology (KAIST), South Korea, in 1985 and 1992, respectively. He worked as a Senior Researcher with the Electronic and Telecommunications Research Institute (ETRI), South Korea, from 1985 to 1993. He also worked as a Visiting Scholar with The University of Adelaide, The University of Tokyo, and the University of California, Irvine. He has served as the Chairman of the Division of Information and Communication Technology, Colombo Plan Staff College for Technician Education in Manila, Philippines. He is currently a Professor with the Department of IT Convergence and Application Engineering, Pukyong National University, South Korea. His research interests include security and evaluation of blockchain technology, key management and its applications, and AI-enabled security evaluation of cryptographic algorithms. -----
{ "disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.1109/ACCESS.2021.3076205?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1109/ACCESS.2021.3076205, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "CCBY", "status": "GOLD", "url": "https://ieeexplore.ieee.org/ielx7/6287639/9312710/09417207.pdf" }
2021
[ "JournalArticle" ]
true
null
[ { "paperId": "a208331a249f082c6b19076754dad9cb129e12aa", "title": "A Blockchained Federated Learning Framework for Cognitive Computing in Industry 4.0 Networks" }, { "paperId": "f6d4d82357206a36ec3423314bae65913896bc1c", "title": "Hybrid Blockchain-Based Resource Trading System for Federated Learning in Edge Computing" }, { "paperId": "e027677ee8e1c12b2da1a662309ad53ce2a994f8", "title": "Reliable Collaborative Learning with Commensurate Incentive Schemes" }, { "paperId": "ba0bdd244e57f941be9d190e5a8d46785d7e2e1a", "title": "Blockchain and federated learning-based distributed computing defence framework for sustainable society" }, { "paperId": "4f9477b4e894e05b6570e4ad7da82420085eb8c0", "title": "Communication-Efficient Federated Learning for Wireless Edge Intelligence in IoT" }, { "paperId": "2a367204cc142f4a810ee5bdcc8c55dd4d2fc40e", "title": "The Dilemma of Parameterizing Propagation Time in Blockchain P2P Network" }, { "paperId": "d792ce75ae10d0534cada7fb9c8d6ef316e35a9f", "title": "Blockchain and Federated Learning for Privacy-Preserved Data Sharing in Industrial IoT" }, { "paperId": "dd62be03be2b747c9f09672a056d14a05c7c3e54", "title": "CrowdSFL: A Secure Crowd Computing Framework Based on Blockchain and Federated Learning" }, { "paperId": "d2da19b2ae5af19df02c27862f403883a71c82f5", "title": "Decentralized Privacy Using Blockchain-Enabled Federated Learning in Fog Computing" }, { "paperId": "2865e822f43ca92125fd2a19e5f15b946539419c", "title": "FedCoin: A Peer-to-Peer Payment System for Federated Learning" }, { "paperId": "0d7e26c623068f7119878f12f5ee1a49b20b9c9d", "title": "Federated Learning With Cooperating Devices: A Consensus Approach for Massive IoT Networks" }, { "paperId": "e0bc89f5776804bc2be27f1945f900d1ac8f1e7f", "title": "An Overview on Smart Contracts: Challenges, Advances and Platforms" }, { "paperId": "75c91d2cf7e926de17bb0c9f501d4183ddf22dc6", "title": "Blockchain technology: A survey on applications and security privacy Challenges" }, { "paperId": "07f229cc6e5b80fc8ffbbb3e4db85142466f55a9", "title": "DeepChain: Auditable and Privacy-Preserving Deep Learning with Blockchain-Based Incentive" }, { "paperId": "2a3d09bbdfe21418ce75d6973f71028fa9192b89", "title": "Federated Learning for Edge Networks: Resource Optimization and Incentive Mechanism" }, { "paperId": "503d39b221099bed1f583585e94e0087bf7656f3", "title": "Poster: A Reliable and Accountable Privacy-Preserving Federated Learning Framework using the Blockchain" }, { "paperId": "6d8db61577a021981ccf58d0c5ced167674b389c", "title": "On-Device Machine Learning: An Algorithms and Learning Theory Perspective" }, { "paperId": "9251ad8137f9aea4596aeccc656f351cfeced551", "title": "Incentive Mechanism for Reliable Federated Learning: A Joint Optimization Approach to Combining Reputation and Contract Theory" }, { "paperId": "49bdeb07b045dd77f0bfe2b44436608770235a23", "title": "Federated Learning: Challenges, Methods, and Future Directions" }, { "paperId": "f7c20aef687e3157596e9d56c10c43ee9cb2ef38", "title": "FLChain: A Blockchain for Auditable Federated Learning with Trust and Incentive" }, { "paperId": "5643004ceb08739e657977a4bc105d50f5bfe29b", "title": "Reportcoin: A Novel Blockchain-Based Incentive Anonymous Reporting System" }, { "paperId": "b59b37f21261e008ecb3ad8f0658ee790b42ea7e", "title": "BSIS: Blockchain-Based Secure Incentive Scheme for Energy Delivery in Vehicular Energy Network" }, { "paperId": "79cf9462a583e1889781868cbf8c31e43b36dd2f", "title": "Towards Federated Learning at Scale: System Design" }, { 
"paperId": "44b3b3bb40a9055eccdf86ea1702f6ae8b38934c", "title": "Decentralized Stochastic Optimization and Gossip Algorithms with Compressed Communication" }, { "paperId": "827e8d3891dc6e6c0cc868e9161c867c3fa8868f", "title": "Improving Dermatologic Care for Sexual and Gender Minority Patients Through Routine Sexual Orientation and Gender Identity Data Collection." }, { "paperId": "67498fdf77fd036a09a4593c37b012d6cf34f3f6", "title": "A Hybrid Approach to Privacy-Preserving Federated Learning" }, { "paperId": "6c66108edb9af0533309055e7b2ecb8922db03d8", "title": "Analyzing Federated Learning through an Adversarial Lens" }, { "paperId": "1df492149bce34a88c60fa19e01c25c77e3733af", "title": "Biscotti: A Ledger for Private and Secure Peer-to-Peer Machine Learning" }, { "paperId": "a5e0e841f79ea97ca32a10c302b9d14f26a8272a", "title": "A Blockchain-Based Location Privacy Protection Incentive Mechanism in Crowd Sensing Networks" }, { "paperId": "333420606f059a7d5574a6fb9e35591346d3f957", "title": "Mitigating Sybils in Federated Learning Poisoning" }, { "paperId": "14d8b4fdb0262c30ae9afe20ea8e7227b115c63e", "title": "How To Backdoor Federated Learning" }, { "paperId": "30e0ffeb519a4df2d4a2067e899c5fb5c5e85e70", "title": "Exploiting Unintended Feature Leakage in Collaborative Learning" }, { "paperId": "ff48135f2681a2f5ed596a8e050080a50402c467", "title": "Biometrics on the blockchain" }, { "paperId": "19dc966f0fb70abffaea682b81d3554c15551fda", "title": "CreditCoin: A Privacy-Preserving Blockchain-Based Incentive Announcement Network for Communications of Smart Vehicles" }, { "paperId": "a56f5aa03b1058158a71432b79d2e595bcac4408", "title": "Trust Based Incentive Scheme to Allocate Big Data Tasks with Mobile Social Cloud" }, { "paperId": "6cefb70f4668ee6c0bf0c18ea36fd49dd60e8365", "title": "Privacy-Preserving Deep Inference for Rich User Data on The Cloud" }, { "paperId": "18cfd4b9e35fb12fbebedb0fdc3f7811090372bf", "title": "Machine Learning Models that Remember Too Much" }, { "paperId": "7a26bd8a284b027a3783f113782cdc7b9eb1504d", "title": "Introduction to Parallel Computing" }, { "paperId": "7fcb90f68529cbfab49f471b54719ded7528d0ef", "title": "Federated Learning: Strategies for Improving Communication Efficiency" }, { "paperId": "f913c3a47106b51ca6566dcc37b2e52c0d549dc9", "title": "A secure cloud based Personal Health Record framework for a multi owner environment" }, { "paperId": "97ed4d9379ba1008bb1a0dacdc76f9df9f4079be", "title": "Efficient Linkable and/or Threshold Ring Signature Without Random Oracles" }, { "paperId": "46f74231b9afeb0c290d6d550043c55045284e5f", "title": "The MNIST Database of Handwritten Digit Images for Machine Learning Research [Best of the Web]" }, { "paperId": "63f68e7c6a41c89972c9dcb7a598586f2dfc1f06", "title": "Incentive mechanisms for Opportunistic Cloud Computing Services" }, { "paperId": "8997d7151509eb0534dcc654b159921c3d8dc39c", "title": "A New Efficient Threshold Ring Signature Scheme Based on Coding Theory" }, { "paperId": "5475bd4f5a58f6bf171243e72e0eccb8eb3d0506", "title": "ID-Based Blind Signature and Ring Signature from Pairings" }, { "paperId": null, "title": "‘‘Merging collaborative learning and blockchain: Privacy in context" }, { "paperId": null, "title": "Difficulty in Bitcoin (BTC)" }, { "paperId": "5917a3dfa83f8aff6a10539da236b15fe06956da", "title": "MRL-0004 Improving Obfuscation in the CryptoNote Protocol" }, { "paperId": null, "title": "‘‘A note on chain reactions in traceability in cryptonote 2.0,’’" }, { "paperId": null, "title": "Ethereum white paper" }, 
{ "paperId": "4e9ec92a90c5d571d2f1d496f8df01f0a8f38596", "title": "Bitcoin: A Peer-to-Peer Electronic Cash System" }, { "paperId": null, "title": "South Korea, in 2016, where he is currently pursuing the Ph.D. degree with the Laboratory of Information Security and Internet Applications (LISIA)" }, { "paperId": null, "title": "ii) System heterogeneity" }, { "paperId": null, "title": "Unlinkable CL Transactions: Privacy-Awareness in Decentralized Approaches" }, { "paperId": null, "title": "Accessed" }, { "paperId": null, "title": "iii) We introduce unlinkable rewarding and training activities without revealing the information values of transactions" } ]
20245
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Engineering", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/029a9046f114af95c0ec83e13e144704b7f77868
[ "Computer Science" ]
0.885171
A survey on parallel and distributed multi-agent systems for high performance computing simulations
029a9046f114af95c0ec83e13e144704b7f77868
Computer Science Review
[ { "authorId": "4747047", "name": "A. Rousset" }, { "authorId": "2501805", "name": "B. Herrmann" }, { "authorId": "1910857", "name": "Christophe Lang" }, { "authorId": "2005060", "name": "L. Philippe" } ]
{ "alternate_issns": null, "alternate_names": [ "Comput Sci Rev" ], "alternate_urls": null, "id": "3aa92b7f-af7a-4ebd-8925-1152710bfbc7", "issn": "1574-0137", "name": "Computer Science Review", "type": "journal", "url": "http://www.elsevier.com/wps/find/journaldescription.cws_home/710138/description#description" }
null
# A survey on parallel and distributed Multi-Agent Systems

### Alban ROUSSET∗1, Bénédicte HERRMANN†1, Christophe LANG‡1, and Laurent PHILIPPE§1

1Femto-ST Institute – University of Franche-Comté, 16 Route de Gray, 25030 Besançon cedex, France

**Abstract**

Simulation has become an indispensable tool for researchers to explore systems without having recourse to real experiments. Depending on the characteristics of the modeled system, the methods used to represent the system may vary. Multi-agent systems are, thus, often used to model and simulate complex systems. Whatever the modeling type used, increasing the size and the precision of the model increases the amount of computation, requiring the use of parallel systems when it becomes too large. In this paper, we focus on parallel platforms that support multi-agent simulations. Our contribution is a survey on existing platforms and their evaluation in the context of high performance computing. We present a qualitative analysis, mainly based on platform properties, then a performance comparison using the same agent model implemented on each platform.

## 1 Introduction

In the field of simulation, we often seek to exceed limits, that is to say to analyse larger and more precise models to be closer to the reality of a problem. Increasing the size of a model has, however, a direct impact on the amount of needed computing resources, and centralised systems are often no longer sufficient to run these simulations. The use of parallel resources allows us to overcome the resource limits of centralised systems and also to increase the size of the simulated models.

∗alban.rousset@femto-st.fr
†bherrman@femto-st.fr
‡clang@femto-st.fr
§lphilipp@femto-st.fr

There are several ways to model a system. For example, the time behavior of a large number of physical systems is based on differential equations. In this case the discretization of a model allows its representation as a linear system. It is then possible to use existing parallel libraries to take advantage of many computing nodes and run large simulations. On the other hand, it is not always possible to model a time-dependent system with differential equations. This is for instance the case of complex systems. A complex system is defined in [25] as "A system that can be analyzed into many components having relatively many relations among them, so that the behavior of each component depends on the behavior of others".
Thus the complexity of the dependencies between the phenomena that drive the entities' behavior makes it difficult to define a global law that models the entire system. For this reason multi-agent systems are often used to model complex systems, because they rely on an algorithmic description of agents that interact and simulate the expected behavior. From the viewpoint of increasing the size of simulations, multi-agent systems are constrained by the same rules as other modelling techniques, but there is less support for parallel execution of the models. In this article, we focus on multi-agent platforms that provide parallel distributed programming environments for multi-agent systems. Recently, the interest in parallel multi-agent platforms has increased. This is because parallel platforms offer more resources to run larger agent simulations and thus allow results or behaviors to be obtained that were not possible with a smaller number of agents (e.g., the simulation of individual motions in a city/urban mobility).

The contribution of this article is a survey on parallel distributed multi-agent platforms. This survey is based on an extensive bibliographical work done to identify the existing platforms, a qualitative analysis of these platforms in terms of ease of development, distribution management or proposed agent model, and a performance evaluation based on a representative model run on an HPC cluster.

The article is organised as follows. First, we give the context of multi-agent systems (MAS) in general and parallel distributed multi-agent systems (PDMAS) in particular. We then introduce the different multi-agent platforms found in our bibliographical research. In the third section, we describe the method used to classify platforms and we describe the model implemented in each platform to evaluate its performance. In the fourth section, we present the qualitative comparison of the different PDMAS followed by the benchmark based on the implemented model. We finish the paper with conclusion and future work.

## 2 Related works

The concept of agent has been studied extensively for several years and in different domains. It is not only used in robotics and other fields of artificial intelligence, but also in fields such as psychology [6] or biology [23]. One of the first definitions of the agent concept is due to Ferber [13]: "An agent is a real or virtual autonomous entity, operating in an environment, able to perceive and act on it, which can communicate with other agents, which exhibits an independent behavior, which can be seen as the consequence of its knowledge, its interactions with other agents and the goals it needs to achieve". A multi-agent system, or MAS, is a platform that provides the necessary support to run simulations based on several autonomous agents. These platforms implement functions that provide services such as agent life cycle management, communication between agents, agent perception or environment management. Among well known platforms we can cite Repast Simphony [21], Mason [19], NetLogo [28] and Gama [1]. These platforms however do not natively implement support for running models in parallel, and it is necessary to develop a wrapper from scratch in order to distribute or parallelize a simulation. There exist several papers that survey these multi-agent platforms [29, 5, 4, 16]. Some platforms like RepastHPC [10], D-Mason [12], Pandora [2], Flame [8] or JADE [3] provide native support for parallel execution of models.
This support usually includes the collaboration between executions on several physical nodes, the distribution of agents between nodes, and so on. During our analysis of the literature, we did not find any survey about parallel multi-agent platforms except the paper written by Coakley et al. [8]. That comparison is based on qualitative criteria such as the implementation language, but the paper does not provide any performance comparison of the studied platforms.

After an extensive bibliographical work, we identified 10 implementations or projects of parallel multi-agent platforms. For each platform we tried to download the source or executable code and we tried to compile it and test it with the provided examples and templates. Some of the platforms could not be included in our study because there is no available source code or downloadable executable (MACE3J [15], JAMES [17], SWAGES [24]), or because only a demonstration version is available (PDES-MAS [22, 27]), or because there is a real lack of documentation (Ecolab [26]). It was thus not possible to build a new model in these platforms and thus to assess their parallel characteristics and performance. These platforms have been subjected to a qualitative analysis which is not included in this paper. For the 5 remaining platforms, on which we were able to implement our model, we can consider that they truly offer functioning parallel multi-agent support. We succinctly present each of these platforms in the following.

**D-Mason (Distributed Mason) [12] is developed by the University of Salerno.** D-Mason is the distributed version of the Mason multi-agent platform. The authors chose to develop a distributed version of Mason to provide a solution that does not require users to rewrite their already developed simulations, and also to overcome the limitations on the maximum number of agents. D-Mason uses ActiveMQ JMS as a base to implement communications. D-Mason uses the Java language to implement the agent model.

**Flame [8] is developed by the University of Sheffield.** Flame was designed to allow a wide range of agent models. Flame provides specifications in the form of a formal framework that can be used by developers to create models and tools. Flame allows parallelization using MPI. Implementing a Flame simulation is based on the definition of X-Machines [9], which are defined as finite state automata with memory. In addition, agents can send and receive messages at the input and the output of each state.

**Jade [3] is developed by the Telecom Italia laboratory.** The aims of Jade are to simplify the implementation of distributed multi-agent models across a FIPA-compliant [3] middleware and to provide a set of tools that support the debugging and the deployment phases. The platform can be distributed across multiple computers and its configuration can be controlled from a remote GUI. Agents are implemented in Java while the communications rely on the RMI library.

**Pandora [2] is developed by the Supercomputing center of Barcelona.** It is explicitly programmed to allow the execution of scalable multi-agent simulations. According to the literature, Pandora is able to treat thousands of agents with complex actions. Pandora also provides support for a geographic information system (GIS) in order to run simulations where spatial coordinates are used. Pandora uses the C++ language to define and to implement the agent models. For the communications, Pandora automatically generates MPI code from the Pandora library.
**RepastHPC [10] is developed by the Argonne National Laboratory, USA.** It is part of a series of multi-agent simulation platforms: RepastJ and Repast Simphony. RepastHPC is specially designed for high performance environments. RepastHPC uses the same concepts as the core of Repast Simphony, that is to say it also uses the concept of projections (grid, network), but this concept is adapted to parallel environments. The C++ language is used to implement an agent simulation, but the ReLogo language, a derivative of the NetLogo language, can also be used. For the communications, the RepastHPC platform relies on MPI using the Boost library [11].

From these descriptions we can note that some platforms have already been designed to target high performance computing systems such as clusters, whereas others are more focused on distribution on less coupled nodes such as a network of workstations.

## 3 Survey methodology

In this section we explain the methodology used for this survey. As already stated, we started with a bibliographical search (using keywords on search engines and following links cited in the studied articles). This bibliographical search allowed us to establish a first list of existing platforms. By testing the available platforms we established a second list of functioning platforms. To our knowledge this list is complete and there is no other available and functional platform that provides support for parallel distributed MAS. Note that we only concentrate on distributed platforms and that the list excludes shared-memory parallel platforms and many-core (such as GPU or Intel Xeon Phi) platforms. We then defined different criteria to compare and analyse each platform. We finished by implementing a reference model on each platform and executing it in order to compare the platform performance. These evaluation steps are detailed in the following.

This survey mainly focuses on the implementation, more precisely the development, of models and their execution efficiency. To classify the platforms we defined two sets of criteria: first, implementation and execution based criteria and, second, criteria about classical properties of parallel systems. We briefly explain what each criterion covers.

For the implementation and execution criteria, all platforms have their own constraints that impact the ease of model implementation. The chosen criteria are:

1. Programming language,
2. Agent representation,
3. Simulation type, time-driven or event-driven,
4. Reproducibility: do several executions of a simulation give the same results?

For the classical properties of parallel systems, we focus on:

1. Scalability of the platform, in terms of agents and nodes,
2. Load balancing, agent distribution,
3. Multithreaded execution, to take benefit of multicore processors,
4. Communication library.

To further compare the platforms, we have defined a reference agent model that we implemented on each platform. The reference model is based on three important behaviors for each agent: the agent perception, the communications between agents and/or with the environment, and agent mobility. The reference model simulates each of these behaviors.
It represents the limited perception of the agent on the environment. Each agent is composed of 3 sub-behaviors : 1. The walk behavior allows agents to move in a random direction on the environment. This behavior is used to test the mobility and the perception of the agents. As the agents walk through their environment to discover other agents and other parts of the environment, interactions and communications with the environment are also tested with this behavior. 2. The interact behavior allows agents to interact and send messages to other agents in their perception fields. This behavior intends to simulate communications between agents and to evaluate the communication support of the platforms. 3. The compute behavior allows agents to compute a "Fast Fourier Transform (FFT)" [14] in order to represent a workload. This behavior intends to simulate the load generated by the execution of the agent inner algorithms. The global agent behavior consists in performing each of this three behaviors at each time step. The reference model has several parameters that determine the agent behavior and also the global model properties. For instance, the model allows to vary the workload using different sizes of input for the FFT calculus. It is also possible to generate more or less communications between agents by setting the number of contacted agents in the interact behavior or to assess the agent mobility by setting the agent speed in the walk behavior. 6 ----- ## 4 Qualitative analysis In this section we expose two levels of comparisons between the studied platforms: first a qualitative comparison using the previously presented criteria and second a performance comparison using the reference model. Table 1 gives a synthetic representation of the comparison for the implementation and execution criteria. Most platforms use classical languages such as C-C++ or Java to define agents, except the Flame platform which uses the XMML language. The XMML language is an extension of the XML language designed to define X-Machines. Note that the RepastHPC platform implements, in addition to the C++ programming language, the widespread Logo agent language. The Repast-Logo or R-Logo is the Repast implementation of Logo for C++. It allows to simplify the simulation implementation at the price of a lower power of expression compared to C++. RepastHPC D-Mason Flame Pandora Jade Prog. lang. C++/R-Logo Java XMML/C C/C++ Java Agent repre- Object Object X-Machine Object Object sent. Simu. type event-driven time-driven time-driven time-driven time-driven ReproductibilityYes Yes No Yes No Table 1: Comparison of implementation and execution properties Agents are usually defined as objects with methods representing behaviors. An agent container gathers all the agents. This container is cut and distributed in the case of parallel execution. The agent implementation is different for the Flame platform that does not use the object concept to define a agent but rather uses automatas called X-Machines. In a X-Machine, a behavior is represented by a state in the automata and the order of execution between behaviors are represented by transitions. This difference changes the programming logic of a model but induces no limitation compared with other platforms because agents are in fact encoded in C language. For the simulation type, event or time driven, all platforms use the timedriven approach except RepastHPC which is based on the event-driven approach. 
RepastHPC however allows to fix a periodicity to each scheduled event, so that we can reproduce the behavior of time-driven simulations. Finally all platforms allow agents to communicate. This communication can be performed either internally with agents that are on the same node, or externally, with agents that are on different nodes. The D-Mason and Pandora platforms propose remote method invocations to communicate with other agents while the other platforms use messages to communicate between agents. Table 2 summarises the criteria of the platforms about classical properties of parallel systems. Globally we can note that all studied platforms meet the demands for the development of parallel simulations. Note that we did 7 |Col1|RepastHPC|D-Mason|Flame|Pandora|Jade| |---|---|---|---|---|---| |Prog. lang.|C++/R-Logo|Java|XMML/C|C/C++|Java| |Agent repre- sent.|Object|Object|X-Machine|Object|Object| |Simu. type|event-driven|time-driven|time-driven|time-driven|time-driven| |Reproductibil|ityYes|Yes|No|Yes|No| ----- not find any information on the scalability property of the Pandora and Jade platforms, so they are marked as Not Available (NA) for this property. To efficiency exploit the power of several nodes the computing load must be balanced among them. There is different ways to balance the computing load . The load can be balanced at the beginning of the simulation (Static) or adapted during the execution (Dynamic). A dynamic load balancing is usually better as it provides a better adaptation in case of load variation during the model execution, but it can also be subject to instability. Most platforms use dynamic load balancing except the Jade and Flame platforms. In [20] the authors propose a way to use dynamic load balancing with the Flame platform. RepastHPC D-Mason Flame Pandora Jade Scalability 1028 36 nodes [8] 432 proc. [8] NA NA proc. [18] Load Balancing Dynamic Dynamic Static [8] Dynamic Static [3] Multithread exec Yes [8] Yes [12, 8] No [8] Yes Yes Com. library MPI [11, 10] JMS [12] MPI [18] MPI [2] RMI Table 2: Comparison classical properties of parallel systems Note that only Flame does not support multi-threaded executions. The platform however relays on the MPI messaging library. As most MPI libraries provide optimised implementations of message passing functions when the communicating processes are on the same node, using processes located on the same node instead of threads does not lead to large overhead. In the implementation of a multi-agent system this probably leads to equivalent performance as the simplification of synchronisation issues may compensate the cost of using communication functions. Last, the communication support for most platforms is MPI. This is not surprising for platforms targeting HPC systems as this library is mainly used on these computers. Note that the D-Mason platform relays on the JMS communication service despite it is not the most scalable solution for distributed environments. An MPI version of D-MASON is in development. Finally, the Jade platform is based on the java Remote Method Invocation (RMI) library which is not very adapted to parallel applications as it is based on synchronous calls. During the model implementation we also noted that the Jade platform seems to be more oriented for equipment monitoring and cannot be run on HPC computers due to internal limitations. Jade is thus not included in the rest of the comparisons. 
## 5 Performance evaluation For the performance evaluation we have implemented the reference model defined in section 3 on the four functional platforms: RepastHPC, D-MASON, 8 |Col1|RepastHPC|D-Mason|Flame|Pandora|Jade| |---|---|---|---|---|---| |Scalability|1028 proc. [18]|36 nodes [8]|432 proc. [8]|NA|NA| |Load Balancing|Dynamic|Dynamic|Static [8]|Dynamic|Static [3]| |Multithread exec|Yes [8]|Yes [12, 8]|No [8]|Yes|Yes| |Com. library|MPI [11, 10]|JMS [12]|MPI [18]|MPI [2]|RMI| ----- Flame, Pandora. During this model implementation, we did not encounter noticeable difficulties expect with the RepastHPC platform for which we have not been able to implement external communications, communications between agents running on different nodes. RepastHPC does not have the native mechanisms to make it whereas it is possible to implement it on the other platforms. RepastHPC actually offers the possibility to interact with an agent on an other node but not to report the modifications. Although we have been able to run the four platforms, D-Mason, Flame, Pandora, RepastHPC, on a standard workstation, only two of them (RpastHPC, Flame) have successfully run on our HPC system. The D-Mason platform uses a graphical interface that cannot be disconnected. We are thus not able to run D-MASON on our cluster, only accessible through its batch manager. The Pandora simulations have deadlock problems even if we use examples provided with the platform. For these reasons the presented results only consider the Flame and RepastHPC platforms. We have realised several executions in order to exhibit the platform behaviors concerning scalability (Figures 2 and 3) and workload (Figure 4). To assess scalability we vary the number of nodes used to execute the simulations while we fix the number of agents. We then compute the obtained speedup. For workload we fix the number of nodes to 8 and we vary the number of agents in the simulation. Each execution is realised several times to assess the standard variation and the presented results are the mean of the different execution durations. Due to a low variation in the simulation runtime, the number of executions for a result is set to 10. 60 40 20 0 0 50 100 number of cores **Legend** Ideal speedUp Max speedUp Min speedUp Figure 2: Scalability of FLAME simulations using 10 000 agents, FFT 100 and 200 cycles About the HPC experimental settings, we have run the reference model on a 764 cores cluster using the SGE batch system. Each node of the cluster is a bi-processors, with Xeon E5 (8*2 cores) processors running at 2.6 Ghz 9 ----- Figure 3: Scalability of RepastHPC simulations using 10 000 agents, FFT 100 and 200 cycles frequency and with 32 Go of memory. The nodes are connected through a non blocking DDR infinyBand network organised in a fat tree. The system is shared with other users but the batch system garanties that the processes are run without sharing their cores. Execution results for scalability for a model with 10 000 agents are given on Figure 2 and 3, with the ideal speedup reference. Note that the reference time used to compute the speedup is based on a two core run of the simulations. This is due to RepastHPC which cannot run on just one core so that its reference time must be based on two core runs. The speedup is therefore limited to half the number of nodes. We can note that both platforms scale well up to 32 cores but the performance does not progress so well after, becoming 2/3 of the theoretical speedup for 128 cores. 
In addition on Figure 3 we can see that RepastHPC results are above the theorical speedup for simulations with less than 50 cores. As we suspected that these better results come from cache optimizations in the system, we did more tests to confirm this hypothesis. The realized tests increase the number of agents and the load on each agent to saturate the cache and force memory accesses. As the results for these new tests are under the theorical speedup the hypothesis is validated. Figures 4 represents the workload behavior of the two platforms. The inner load of agents (FFT) is here set to 100. The figure shows that RepastHPC really better reacts to load increasing than Flame. The same behavior has also been noted for a load of 10 (for 20 000 agents the ratio is 0.92). On the opposite for a load of 1000 the difference is less noticeable (for 20 000 agents the ratio is 0.81). Obviously the used model does not use all the power of Flame as it is limited in term of inter-agent communications. The question to answer is: is it due to the use of the concept of X-Machines or synchro 10 ----- Figure 4: Workload behavior for simulation using 8 cores nisation mechanisms in the underlying parallelism? Another possible reason that could justify this difference is the cost of the synchronisations provided by Flame when using remote agents and that is not managed in RepastHPC. ## 6 Conclusion In this article we have presented a comparison of different parallel multi-agent platforms. This comparison is performed at two levels, first at a qualitative level using criteria on the provided support, and second at a quantitative level, using a reference agent model implementation. The qualitative comparison shows the properties of all the studied platforms. The quantitative part shows an equivalent scalability for both platforms but better performance results for the RepasHPC platform. When implementing our reference model we have noticed that the synchronisation support of the platforms does not provide the same level of service: the RepastHPC platform does not provide communication support for remote agents while Flame do it. This support seems to be a key point in the platform performance. For this reason, in our future work, we intend to better examine the efficiency of synchronisation mechanisms in parallel platforms. For example how are the synchronizations made during an execution and is there a way to improve synchronization mechanisms in parallel multi-agent systems? ## Acknowledgment Computations have been performed on the supercomputer facilities of the Mésocentre de calcul de Franche-Comté. 11 ----- ## References [1] Edouard Amouroux, Thanh-Quang Chu, Alain Boucher, and Alexis Drogoul. Gama: an environment for implementing and running spatially explicit multi-agent simulations. In Agent computing and multi_agent systems, pages 359–371. Springer, 2009._ [2] Elaini S Angelotti, Edson E Scalabrin, and Bráulio C Ávila. Pandora: a multi-agent system using paraconsistent logic. In Computational Intel_ligence and Multimedia Applications, 2001. ICCIMA 2001., pages 352–_ 356. IEEE, 2001. [3] Fabio Bellifemine, Agostino Poggi, and Giovanni Rimassa. Jade–a fipa-compliant agent framework. In Proceedings of PAAM, volume 99, page 33. London, 1999. [4] Matthew Berryman. Review of software platforms for agent based models. Technical report, DTIC Document, 2008. 
[5] Rafael H Bordini, Lars Braubach, Mehdi Dastani, Amal El Fallah-Seghrouchni, Jorge J Gomez-Sanz, Joao Leite, Gregory MP O'Hare, Alexander Pokahr, and Alessandro Ricci. A survey of programming languages and platforms for multi-agent systems. Informatica (Slovenia), 30(1):33–44, 2006.

[6] Gregory Carslaw. Agent based modelling in social psychology. PhD thesis, University of Birmingham, 2013.

[7] Radovan Červenka, Ivan Trenčanský, Monique Calisti, and Dominic Greenwood. Aml: Agent modeling language toward industry-grade agent-based modeling. In Agent-Oriented Software Engineering V, pages 31–46. Springer, 2005.

[8] Simon Coakley, Marian Gheorghe, Mike Holcombe, Shawn Chin, David Worth, and Chris Greenough. Exploitation of hpc in the flame agent-based simulation framework. In Proceedings of the 2012 IEEE 14th Int. Conf. on HPC and Communication & 2012 IEEE 9th Int. Conf. on Embedded Software and Systems, HPCC '12, pages 538–545, Washington, DC, USA, 2012. IEEE Computer Society.

[9] Simon Coakley, Rod Smallwood, and Mike Holcombe. Using x-machines as a formal basis for describing agents in agent-based modelling. SIMULATION SERIES, 38(2):33, 2006.

[10] Nicholson Collier and Michael North. Repast HPC: A platform for large-scale agent-based modeling. Wiley, 2011.

[11] Nick Collier. Repast hpc manual, 2010.

[12] Gennaro Cordasco, Rosario Chiara, Ada Mancuso, Dario Mazzeo, Vittorio Scarano, and Carmine Spagnuolo. A Framework for Distributing Agent-Based Simulations. In Euro-Par 2011: Parallel Processing Workshops, volume 7155 of Lecture Notes in Computer Science, pages 460–470, 2011.

[13] Jacques Ferber and Jean-François Perrot. Les systèmes multi-agents: vers une intelligence collective. InterEditions Paris, 1995.

[14] Matteo Frigo and Steven G Johnson. The design and implementation of fftw3. Proceedings of the IEEE, 93(2):216–231, 2005.

[15] Les Gasser and Kelvin Kakugawa. Mace3j: fast flexible distributed simulation of large, large-grain multi-agent systems. In Proceedings of the first inter. joint Conf. on Autonomous agents and multiagent systems: part 2, pages 745–752. ACM, 2002.

[16] Brian Heath, Raymond Hill, and Frank Ciarallo. A survey of agent-based modeling practices (january 1998 to july 2008). JASSS, 12(4):9, 2009.

[17] Jan Himmelspach and Adelinde M. Uhrmacher. Plug'n simulate. In Proceedings of the 40th Annual Simulation Symposium, ANSS '07, pages 137–143, Washington, DC, USA, 2007. IEEE Computer Society.

[18] Mike Holcombe, Simon Coakley, and Rod Smallwood. A general framework for agent-based modelling of complex systems. In Proceedings of the 2006 European Conf. on Complex Systems, 2006.

[19] Sean Luke, Claudio Cioffi-Revilla, Liviu Panait, and Keith Sullivan. MASON: A New Multi-Agent Simulation Toolkit. Simulation, 81(7):517–527, July 2005.

[20] Claudio Márquez, Eduardo César, and Joan Sorribes. A load balancing schema for agent-based spmd applications. In International Conf. on Parallel and Distributed Processing Techniques and Applications (PDPTA), Accepted, 2013.

[21] Michael J North, Nicholson T Collier, Jonathan Ozik, Eric R Tatara, Charles M Macal, Mark Bragen, and Pam Sydelko. Complex adaptive systems modeling with repast simphony. Complex adaptive systems modeling, 1(1):1–26, 2013.

[22] Ton Oguara, G Theodoropoulos, B Logan, M Lees, and C Dan. Pdes-mas: A unifying framework for the distributed simulation of multi-agent systems. School of computer science research - University of Birmingham, 6, 2007.

[23] Vincent Rodin, Abdessalam Benzinou, Anne Guillaud, Pascal Ballet, Fabrice Harrouet, Jacques Tisseau, and Jean Le Bihan. An immune oriented multi-agent system for biological image processing. Pattern Recognition, 37(4):631–645, 2004.

[24] M Scheutz, P Schermerhorn, R Connaughton, and A Dingler. Swages - an extendable distributed experimentation system for large-scale agent-based alife simulations. Proceedings of Artificial Life X, pages 412–419, 2006.

[25] Herbert A Simon. The architecture of complexity. Springer, 1991.

[26] Russell K Standish and Richard Leow. Ecolab: Agent based modeling for c++ programmers. arXiv preprint cs/0401026, 2004.

[27] Vinoth Suryanarayanan, Georgios Theodoropoulos, and Michael Lees. Pdes-mas: Distributed simulation of multi-agent systems. Procedia Comp. Sc., 18:671–681, 2013.

[28] Seth Tisue and Uri Wilensky. Netlogo: Design and implementation of a multi-agent modeling environment. In Proceedings of Agent, volume 2004, pages 7–9, 2004.

[29] Robert Tobias and Carole Hofmann. Evaluation of free java-libraries for social-scientific agent based simulation. JASS, 7(1), 2004.

-----
{ "disclaimer": "Notice: The following paper fields have been elided by the publisher: {'abstract', 'references'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1016/j.cosrev.2016.08.001?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1016/j.cosrev.2016.08.001, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "BRONZE", "url": "https://link.springer.com/content/pdf/10.1007/978-3-319-14325-5_32.pdf" }
2014
[ "JournalArticle", "Review" ]
true
2014-08-25T00:00:00
[]
7828
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Engineering", "source": "s2-fos-model" }, { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Environmental Science", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/029e0f6017a14e5059d2f130da4c6b7992414533
[ "Computer Science" ]
0.884818
An Efficient Privacy-Preserving Authentication Scheme for Energy Internet-Based Vehicle-to-Grid Communication
029e0f6017a14e5059d2f130da4c6b7992414533
IEEE Transactions on Smart Grid
[ { "authorId": "1791276", "name": "P. Gope" }, { "authorId": "48440849", "name": "B. Sikdar" } ]
{ "alternate_issns": null, "alternate_names": [ "IEEE Trans Smart Grid" ], "alternate_urls": null, "id": "1c2f3998-b5ca-48ca-9991-94b71c71ecb7", "issn": "1949-3053", "name": "IEEE Transactions on Smart Grid", "type": "journal", "url": "http://ieeexplore.ieee.org/servlet/opac?punumber=5165411" }
The energy Internet (EI) represents a new electric grid infrastructure that uses computing and communication to transform legacy power grids into systems that support open innovation. EI provides bidirectional communication for analysis and improvement of energy usage between service providers and customers. To ensure a secure, reliable, and efficient operation, the EI should be protected from cyber attacks. Thus, secure and efficient key establishment is an important issue for this Internet-based smart grid environment. In this paper, we propose an efficient privacy-preserving authentication scheme for EI-based vehicle-to-grid communication using lightweight cryptographic primitives such as one-way non-collision hash functions. In our proposed scheme, a customer can securely access services provided by the service provider using a symmetric key established between them. Detailed security and performance analysis of our proposed scheme are presented to show that it is resilient against many security attacks, cost effective in computation and communication, and provides an efficient solution for the EI.
# An Efficient Privacy-preserving Authentication Scheme for Energy Internet-based Vehicle-to-Grid Communication

### Prosanta Gope, Member, IEEE and Biplab Sikdar, Senior Member, IEEE

Abstract—The Energy Internet (EI) represents a new electric grid infrastructure that uses computing and communication to transform legacy power grids into systems that support open innovation. EI provides bidirectional communication for analysis and improvement of energy usage between service providers and customers. To ensure a secure, reliable and efficient operation, the EI should be protected from cyber attacks. Thus, secure and efficient key establishment is an important issue for this Internet-based smart grid environment. In this paper, we propose an efficient privacy-preserving authentication scheme for EI-based Vehicle-to-Grid Communication using lightweight cryptographic primitives such as one-way non-collision hash functions. In our proposed scheme, a customer can securely access services provided by the service provider using a symmetric key established between them. Detailed security and performance analysis of our proposed scheme are presented to show that it is resilient against many security attacks, cost effective in computation and communication, and provides an efficient solution for the EI.

Index Terms—Energy internet, mutual authentication, advanced metering infrastructure, smart grids.

I. INTRODUCTION

Energy, science and economy can mutually reinforce each other through new synergies and bring about greater efficiencies. From the perspective of sustainable development of society, the exploitation and utilization of renewable energy and the replacement of traditional fossil fuels are important directions for reforming the energy landscape.
P. Gope is with the Department of Computer Science, University of Sheffield, United Kingdom (E-mail: prosana.nitdgp@gmail.com). B. Sikdar is with the National University of Singapore (E-mail: bsikdar@nus.edu.sg).

However, the traditional grid structure makes it difficult to meet the requirements associated with integrating renewables and distributed generation, and incorporate other mechanisms to improve energy efficiency. In order to address these issues, the concept of Energy Internet has been proposed that seeks to integrate Information and Communications Technologies (ICT), cyber-physical systems and power system technologies to develop the next generation of smart grids [1], [31]. Analogous to the conventional Internet, the idea of EI has been introduced to allow energy to be shared similar to information sharing in the Internet [30]. The fundamental idea behind the EI is to combine economics, information and energy using the power grid as the backbone network to provide an open and egalitarian framework for exchanging energy and associated information. The EI is designed to facilitate the seamless integration of diverse energy sources with the grid, and facilitate the interaction between various elements of the power grid to achieve increases in energy efficiency [2]. All aspects of a power grid such as generation, transmission, distribution, service provider, operations, markets, and customers will benefit from secure and efficient communication on decisions about energy and information flow [25-26]. Finally, compared with smart grids, EI further integrates other energy networks such as gas for improved energy operations.

Vehicle to Grid (V2G) technology broadly consists of systems that facilitate the bi-directional flow of electrical energy between vehicles and the electrical grid. Electrical energy may flow from the grid to the vehicle to charge the battery and it may also flow in the reverse direction when the grid requires energy (e.g., to provide peaking power). With bi-directional chargers, electric vehicles (EVs) can become participants in the V2G eco-system, and such vehicles are energy assets for the smart grid. EVs need to charge and draw power from the grid when the State of Charge (SOC) of their batteries becomes low. The V2G property of EVs would also allow EVs to deliver power back to the grid, and the concept of EI in V2G networks can be used to allow energy to be transported from vehicles to a location where it is used to perform useful work. One of the key benefits of EI in V2G environments is that it allows individuals (e.g., EV owners, households, etc.) to trade energy without the need to build their own transmission and distribution networks [31]. With EI-based V2G, the unstable and intermittent energy generated by renewable energy sources (mainly solar and wind energy sources) can be used by EVs to provide two benefits. First, it provides a way to address the large energy demand of EVs through renewable energy sources, thus reducing the potential adverse impact of EVs on power grids. Second, it prevents renewable energy from being wasted when it is generated during low demand periods of traditional (non-EV) loads. This allows more efficient use of energy and can hence facilitate the wider adoption of renewable energy. EI-based V2G systems also have other applications, including power dispatch between cities, power transfer from renewable energy sources to end users, etc.
In addition to the routing of energy between various entities, the exchange of information is an important aspect of the EI and EI-based V2G systems. A number of protocols have been proposed for information exchange in EI based systems. The ISO/IEC/IEEE 18880 standard defines communication protocols and architectures for the EI. It defines the data exchange protocols and the network architecture for integrating various components and participants in the grid, data storage, and application services. ISO/IEC/IEEE 18880 uses wide area communications using TCP/IP, and existing non-TCP/IP networks can connect through multi-protocol gateways. ISO/IEC/IEEE 18881 and ISO/IEC/IEEE 18883 have been developed to address network management and network security issues that are neglected in ISO/IEC/IEEE 18880. In contrast, V2G communications typically just focus on the communication between the vehicles, charging stations, and the grid. In V2G, Electric Vehicle Supply Equipment (EVSE) (i.e., EV chargers), such as those in charging stations, can be shared by many customers. Therefore, a temporal association between the EV and the EVSE has to be initiated for the charging and billing process when the charging cable is inserted. In this regard, some EVs and EVSEs use the IEC 15118 standard for communication. Similarly, EVSEs use the Open Charge Point Protocol (OCPP) for communication between the EVSE and the energy management systems. While EI-based V2G has many benefits (as mentioned above), cyber-security of the components and data are big concerns [28-29]. The vehicles themselves face increasingly complex attacks that target not only the vehicle's operation but also its privacy. Threats to privacy include the exposure of the vehicle user's real identity, the vehicle's driving path, location, and disclosure of other private information. Thus, there are significant challenges to the design of security mechanisms for V2G environments and these are further complicated by the topological structure, autonomy, and fast rate of transformations due to vehicular movement. A number of organizations are working on the development of security solutions for the EI [2-3].

A. Related Work

Secure communication is one of the most important requirements for the EI environment in order to guarantee secure exchange of data at all times. For secure and efficient data exchange between the components, protocols with high security and performance are required. To address this issue, many researchers have proposed several mutual authentication and key establishment schemes suitable for the Advanced Metering Infrastructure (AMI) with various security considerations and goals. Mohammadali et al. [4] proposed two ECC-based identity-based key establishment protocols.
The protocols reduce the computational overhead at the smart meter side of the AMI, and they are resilient against replay and desynchronization attacks. However, they are vulnerable to man-in-the-middle, impersonation, and false data injection attacks, and they incur high computational cost during key establishment. Nicanfar et al. [5] introduced two key exchange protocols that are based on the use of a symmetric-key algorithm and ECC. The protocols provide security and scalability for key exchange in smart grids. However, they are vulnerable to false data injection attacks. Moreover, both the protocols incur large computational costs which makes them unsuitable for resource limited devices in smart grids. Wu and Zhou [6] presented an authentication and key distribution scheme by combining symmetric key and public key cryptographic systems, and the authors claim that their scheme can eliminate man-in-the-middle and replay attacks. Subsequently, Xia and Wang showed that [6] cannot ensure security against man-in-the-middle attacks and they also proposed a new data aggregation scheme [7]. However, Park et al. reported that the scheme presented in [7] is insecure against impersonation attacks [8]. Besides, it cannot address the customer's privacy requirements. Tsai et al. combined an identity-based signature scheme and an identity-based encryption scheme [9] for key distribution in smart grids. Odelu et al. investigated the protocol presented in [9] and demonstrated that it cannot guarantee the security of the session key and the strong credentials privacy of the smart meter [10]. They also introduced a new scheme with a claim that it can reduce computation overheads. However, Chen et al. proved that the scheme presented in [10] is vulnerable to several attacks, and it has large computational and communication costs. Moreover, our analysis shows that the scheme in [10] is weak against man-in-the-middle attacks, which may lead to DoS attacks at the server end. In this context, an attacker (say Eve) can capture the initial message (Msg1 in [10]), alter the message, and then send the altered message Msg1^e to the service provider (SP). The SP can only decide about the validity of a request message (Msg1^e) after completing the whole process, i.e., after receiving the response message (Msg3 in [10]). Consequently, each request is stored in a buffer, where several intensive pairings first need to be computed, followed by submission of the response. This buffer needs to be kept open until a response from the smart meter (SM) is received. As a consequence, the memory can easily overflow if a large number of invalid requests are sent, since the invalid requests cannot be distinguished due to the late detection of the forged messages.
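The buffer-exhaustion argument above can be made concrete with a toy simulation; the sketch below is purely illustrative (the request counts and the `early_check` flag are our own, not taken from [10] or from the proposed scheme): a verifier that can detect a forgery only after the full round must hold per-request state, while per-message key-hash verification drops forgeries immediately and keeps none.

```python
def flood(n_forged: int, early_check: bool) -> int:
    """Return how many forged requests remain buffered at the server."""
    pending = []  # open slots awaiting the final response message
    for req in range(n_forged):
        if early_check:
            # Cheap per-message key-hash check (as in the proposed
            # scheme): the forged request fails at once, no state kept.
            continue
        # Late detection (as argued for [10]): expensive pairing work
        # is queued and the slot stays open for a Msg3 that a forger
        # will never send.
        pending.append(req)
    return len(pending)

print(flood(100_000, early_check=False))  # 100000 slots held open
print(flood(100_000, early_check=True))   # 0
```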
In [11], the authors have considered the physical security of the smart meter and they proposed an authentication scheme by using the concept of physical unclonable functions (PUFs). In addition to [4-11], some recent studies on privacy issues in V2G communications have appeared in the literature [12-15,21-25]. In these schemes, privacy of the car owner is considered as an important concern. However, in these schemes an EV needs to perform several computationally inefficient cryptographic primitives such as group signature, sign-encryption, etc. Besides, most of these schemes cannot ensure the location privacy of the EV user, which is essential for securely monitoring the status of the EV and efficiently providing services to the EV user. Table I compares related work to our approach with respect to the primitive used, ability for mobility support, and location privacy.

Table I
COMPARATIVE ANALYSIS OF THE RELATED SCHEMES

|Scheme|Primitive Used|Mobility Support|Location Privacy Support|
|---|---|---|---|
|[4]|ECC, Bilinear Pairing|No|No|
|[5]|ECC, Bilinear Pairing|No|No|
|[6]|Bilinear Pairing|No|No|
|[7]|AES-CBC, Hash function|No|No|
|[8]|Bilinear Pairing, Hash function|No|No|
|[9]|Bilinear Pairing, Hash function|No|No|
|[10]|Bilinear Pairing, Hash function|No|No|
|[11]|PUF, Hash function|No|No|
|[12-15]|Public-key sign-encryption|Yes|Most of them cannot|
|[21-25]|Public-key sign-encryption|Yes|Most of them cannot|
|IEC15118|ECDSA|Yes|No|
|OCPP|ECDSA|Yes|No|
|Proposed Scheme|Hash function|Yes|Yes|

B. Our Contribution

In this paper, we first introduce a new model for EI-based V2G communication. Subsequently, we propose a lightweight authentication and key establishment scheme for EI-based V2G communication. The major contributions of this paper can be summarized as follows:

- A new model for EI-based V2G communication, which allows an EV user to seamlessly charge or discharge the battery of his/her vehicle from the charging stations located in different geographical locations. However, the charging/discharging rate may vary based on the location of the charging station.
- An efficient privacy-preserving authentication protocol, which provides several key security properties including Authentication Key Exchange (AKE) security, privacy of the user, protection against eavesdropping or interception attacks, protection against man-in-the-middle attacks, and location privacy, which are all requirements for secure EI-based V2G communication [32-33]. There are some existing schemes which can ensure most of the security requirements for EI-based V2G communication. However, they use computationally expensive public-key cryptographic primitives. On the contrary, the proposed scheme is based on lightweight cryptographic primitives such as the one-way hash function and the exclusive-OR operation, which creates significantly less computational overhead on the resource limited user's device (as shown in Table V).
- Most of the existing schemes, including the existing underlying communication protocols such as IEC15118 and OCPP for V2G communications, are vulnerable to some of the well-known security attacks such as man-in-the-middle attacks, impersonation attacks, etc. Therefore, we provide a rigorous formal security analysis of our proposed scheme using the BR93-model [18] to show that it is secure against such attacks.
- A comparative study of the proposed scheme with closely related existing schemes. It is shown that the proposed scheme is secure and computationally efficient, and requires significantly lower overhead for establishing a session key between a user's device and the charging station, as compared to the related existing schemes.

The remainder of this paper is organized as follows. In Section II, we present our system and adversary model. In Section III we introduce the proposed scheme. The formal security analysis and performance analysis of the proposed scheme are presented in Section IV and Section V, respectively. Finally, conclusions are drawn in Section VI. The symbols and cryptographic functions of the proposed scheme are defined in Table II.

Table II
SYMBOLS AND CRYPTOGRAPHIC FUNCTIONS

|Symbol|Definition|
|---|---|
|IDCS|Identity of charging station|
|PIDi|Pseudo identity of Useri|
|ki|Secret key of Useri|
|pswi|Password of Useri|
|βi|Thumbprint of Useri|
|SK|Session key (Useri-CSj)|
|Kcu|Shared secret key between CSj and USP|
|LAIx|Location area identifier of entity x|
|h(·)|One-way hash function|
|⊕|Exclusive-OR operation|
|\|\||Concatenation operation|
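To make the primitives of Table II concrete, the short Python sketch below fixes one possible instantiation. It is illustrative only: the paper leaves h(·) abstract, so SHA-256 and the helper names (`h`, `xor_bytes`, `nonce`) are our assumptions, and the later snippets in this article reuse these helpers.

```python
import hashlib
import secrets

DIGEST_LEN = 32  # bytes; SHA-256 stands in for the abstract h(.)

def h(*parts: bytes) -> bytes:
    """One-way hash h(.) applied to the concatenation (||) of parts."""
    d = hashlib.sha256()
    for p in parts:
        d.update(p)
    return d.digest()

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Exclusive-OR (⊕) of two equal-length byte strings."""
    assert len(a) == len(b), "operands of ⊕ must have equal length"
    return bytes(x ^ y for x, y in zip(a, b))

def nonce(n: int = DIGEST_LEN) -> bytes:
    """Fresh random value, used here for nonces (Nu, Nc) and identities."""
    return secrets.token_bytes(n)
```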
II. SYSTEM AND ADVERSARY MODEL

A. System Model for EI-based V2G Communication

Figure 1. System model for the proposed scheme.

Fig. 1 shows the system model for an EI-based V2G environment, which consists of three major components: a set of EV users each with a mobile device (MD) connected to the Internet, a set of charging stations (CSs), and a utility service provider (USP). The USP consists of two components: the power generation, distribution, and management center (PGDMC) and the data center (DC). Each user is required to register their EV with the USP. Then, the USP maintains all the user information in its data center. In this network model, the USP is an organization that is responsible for procuring electricity from various vendors. The USP also supplies electricity to charging stations in different locations. These charging stations may be owned by several private companies. A user may charge/discharge the batteries of his/her EV from/to any of the CSs. However, the charging/discharging rate may vary based on the location of the CS. For example, the charging/discharging rate of the CSs located at commercial area networks (CANs) may be higher than others. On the other hand, the charging/discharging rate of public area networks (PANs) may be lower than residential area networks (RANs). We assume that a secure channel is available between an EV user and the USP during the initial registration. Subsequently, each user with a mobile device communicates with the CS through the Internet. A CS may communicate with the USP through the public Internet or private networks. In this model, two types of flows, i.e., energy flows (shown by dotted lines) and data flows (shown by solid lines), have been considered. All the entities (user, CS and USP) need to authenticate themselves before sharing any information. Because of the public network based communication used in the system environment, there is a possibility of various attacks, such as replay, man-in-the-middle, and impersonation attacks. In our scheme, users use biometrics (e.g., fingerprints) in addition to a password for two-factor authentication.

B. Adversary Model

During user registration, a user and the USP interact through a secure channel. On the other hand, during the execution of the proposed authenticated key agreement scheme, all parties communicate through an insecure public channel. In this context, we consider the Dolev-Yao threat model (DY model) [29], where an adversary may eavesdrop, modify, or delete the messages exchanged during transmission. Now, due to the usage of public networks and wireless communication in this EI-based V2G environment, there is a possibility of several attacks, such as impersonation, man-in-the-middle, replay attacks, etc. The user's privacy is another important issue in this environment. Also, an adversary can impersonate as a legitimate user and try to obtain services. Similarly, a charging station may impersonate as others and ask for higher charges from a user. Hence, there is a need for an authenticated key agreement scheme by which the legitimacy of the entities can be verified, and also both the user and the CS can establish a session key.

III. PROPOSED SCHEME

In this section, we present our proposed lightweight authentication protocol for EI-based V2G communication, where a user (Useri) who has a mobile device MDi with Internet connectivity requests charging of his/her EV's battery from a charging station CSj.
In this regard, both Useri and CSj need to authenticate each other with the help of the USP. After successful mutual authentication between Useri and CSj, both entities will establish a session key SK for their secure communication. Our proposed scheme consists of the following two phases: user registration and authentication.

A. User Registration Phase

Each user first needs to register with the USP. The registration process consists of the following steps:

Step R1: Useri sends the registration request along with its identity IDi to the USP through the secure (out-of-band) channel.

Step R2: Upon receiving the request, the USP creates an account and inserts a new row in its database. It then randomly generates a unique pseudo identity PIDi, a secret key ki, and also generates a set of shadow identities SID = {sid1, sid2, · · ·, sidn}, which are later used in case of loss of synchronization between the USP and Useri. Next, the USP composes a message with {PIDi, ki, SID} and sends it to Useri through the secure channel. Finally, the USP stores {PIDi, ki, SID} in its database for further interaction with Useri.

Step R3: Upon receiving {PIDi, ki, SID} from the USP, the user inputs his/her biometrics (e.g., thumbprint) βi and password pswi and computes ki* = ki ⊕ h(βi||pswi). Finally, Useri stores {PIDi, ki*, SID} in his/her mobile device for further communication with the USP.
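A minimal sketch of the registration phase, reusing the helpers introduced after Table II; the record layout, the number of shadow identities, and all function names here are our own illustration rather than part of the scheme's specification.

```python
from dataclasses import dataclass

@dataclass
class UspRecord:
    pid: bytes          # current pseudo identity PIDi
    k: bytes            # long-term secret key ki
    shadow_ids: set     # unused shadow identities SID

def usp_register(database: dict, n_shadow: int = 8):
    """Step R2: the USP issues PIDi, ki and the shadow-identity set."""
    pid, k = nonce(), nonce()
    sid = {nonce() for _ in range(n_shadow)}
    database[pid] = UspRecord(pid, k, set(sid))
    return pid, k, sid

def device_store(pid: bytes, k: bytes, sid: set,
                 beta: bytes, psw: bytes) -> dict:
    """Step R3: the device keeps only ki* = ki ⊕ h(βi||pswi), so the
    long-term key is recoverable only with both authentication factors."""
    return {"PID": pid, "k_star": xor_bytes(k, h(beta, psw)),
            "SID": set(sid)}
```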
B. Authentication Phase

To accomplish communication security, Useri has to go through an authentication process each time before obtaining services from charging station CSj. The authentication phase of the proposed scheme comprises the following steps:

Figure 2. Steps and computations in the key agreement phase of the proposed scheme.

Step 1: Useri inputs his/her thumbprint βi and password pswi into his/her mobile device MDi. The mobile device then computes αi = h(βi) and ∂i′ = h(αi||pswi), and validates the user's legitimacy. If the user's validation is successful, then the device calculates ki = ki* ⊕ h(βi||pswi). After that, the user generates a nonce Nu and finds his/her location area identity, LAIu, using the MD's location service. Next, Useri computes EL = LAIu ⊕ h(ki||Nu), a key-hash response V1 = h(PIDi||Nu||ki||EL), and subsequently composes a message MA1: {PIDi, Nu, EL, V1} and sends it to charging station CSj.

Step 2: Upon arrival of message MA1, charging station CSj generates a nonce Nc and computes V2 = h(IDcs||Nc||Kcu||LAIcs), where LAIcs denotes the location area identifier of charging station CSj. Next, CSj composes a message MA2: {MA1, IDcs, Nc, LAIcs, V2} and sends it to the USP.

Step 3: Upon arrival of message MA2, the USP first locates PIDi in its database and then computes and validates the key-hash responses V1 and V2. Next, the USP decodes LAIu from EL and then compares and validates LAIu with LAIcs. If the validation is successful, the USP generates a key SK and a new pseudo identity PIDi_new. It then computes PIDi_new* = PIDi_new ⊕ h(PIDi||ki), SKu = h(IDu||ki||Nu) ⊕ SK, SKcs = h(IDcs||Kcu||Nc) ⊕ SK, V3 = h(SKcs||Kcu||Nc), and V4 = h(SKu||ki||PIDi_new*). Next, the USP composes a message MA3: {(PIDi_new*, SKu, V4)||(SKcs, V3)} and sends MA3 to charging station CSj.

Step 4: Upon arrival of the response message MA3 from the USP, the charging station first computes and validates the key-hash response V3. If the validation is successful, CSj decodes the session key SK = h(IDcs||Kcu||Nc) ⊕ SKcs and composes a new message MA4: {(PIDi_new*, SKu, V4)} and then sends it to Useri.

Step 5: Upon arrival of message MA4, Useri first verifies the key-hash response V4. If the validation is successful, Useri computes and decodes the session key SK = h(IDu||ki||Nu) ⊕ SKu and the new pseudo identity PIDi_new = PIDi_new* ⊕ h(PIDi||ki) for the next round.

The entities involved in the protocol will stop the execution of the scheme if any of the above verification steps is unsuccessful. For dealing with the loss of synchronization problem, instead of the pseudo identity PIDi, Useri needs to select one of the unused shadow identities sidx from SID = {sid1, sid2, · · ·, sidn} and send it in message MA1. On receiving this message and after successfully validating the user, the USP generates a new pseudo identity and securely sends it in message MA3 by using the secret key ki. At the end of the authentication process, both Useri and the USP delete the used shadow identity sidx from their storage. Also, in the proposed scheme, Useri can only use at most t shadow identities, where t < n − 1. After that, the user needs to request for reloading. In this context, the user sends a "ReLoad" message to the USP. On receiving that message, the USP generates a new set of shadow identities and then securely sends it in message MA3 by using the secret key ki. Details of this phase are depicted in Fig. 2.

Remark 1: In our proposed scheme, if a user needs to charge or discharge his/her vehicle multiple times in a day, then he/she needs to go through the authentication process each time, even if the same EV is used. Besides, since one of the goals of the proposed scheme is to achieve location privacy, we do not keep any footprint of the CSs. Therefore, even if the EV uses the same CS multiple times, it needs to execute the proposed anonymous authentication process. Since our proposed scheme is based on lightweight cryptographic primitives such as hash functions, it has a lower computational cost (execution times are shown in Table IV and Table V). Besides, from Table V we can see that the communication cost of the proposed scheme is significantly less than the other schemes. On the other hand, in our proposed scheme, we allow a user to have a single account for multiple EVs, which will avoid any increase in the credential storage requirement.

Remark 2: Now, we consider the scenario where two users Useri and Userj share a vehicle. In such cases, during registration the USP will generate two sets of security credentials {PIDi, ki, SIDi} and {PIDj, kj, SIDj} under the same account and send them to Useri and Userj, respectively. After receiving their credentials, both the users securely store them in their respective mobile devices (as shown in Step R3). Now, when Useri uses the vehicle then he/she needs to use {PIDi, ki, SIDi} to get through the authentication process. Similarly, when user Userj uses the vehicle then he/she is required to use {PIDj, kj, SIDj} in order to authenticate with the USP. In this way, the proposed scheme can support the scenario where a vehicle is shared among multiple users. However, in this context, the storage complexity at the USP will increase linearly with the number of shared users.
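The user and USP legs of one authentication round can be sketched as follows, again reusing the earlier helpers and registration records. This is a condensed illustration under our own assumptions: the CS leg (V2/V3 over Kcu) and error handling are elided, LAIs are assumed padded to the digest length, and every name is ours rather than normative.

```python
def user_build_ma1(cred: dict, beta: bytes, psw: bytes, lai_u: bytes):
    """Step 1: recover ki from ki*, hide the location in EL, and bind
    everything into the key-hash response V1."""
    k = xor_bytes(cred["k_star"], h(beta, psw))
    n_u = nonce()
    el = xor_bytes(lai_u, h(k, n_u))
    v1 = h(cred["PID"], n_u, k, el)
    return {"PID": cred["PID"], "Nu": n_u, "EL": el, "V1": v1}, k, n_u

def usp_handle_ma1(database: dict, ma1: dict, lai_cs: bytes, id_u: bytes):
    """Step 3, user leg only: check V1 and the location consistency
    (shown here as equality of LAIu and LAIcs), then issue SK and the
    next pseudo identity, both masked with user-only values."""
    rec = database.get(ma1["PID"])
    if rec is None:
        return None  # unknown or already-used pseudo identity
    if ma1["V1"] != h(ma1["PID"], ma1["Nu"], rec.k, ma1["EL"]):
        return None  # forged request: rejected early, no state kept
    lai_u = xor_bytes(ma1["EL"], h(rec.k, ma1["Nu"]))
    if lai_u != lai_cs:
        return None  # possible forged location identifier
    sk, pid_new = nonce(), nonce()
    pid_new_star = xor_bytes(pid_new, h(ma1["PID"], rec.k))
    sk_u = xor_bytes(h(id_u, rec.k, ma1["Nu"]), sk)
    v4 = h(sk_u, rec.k, pid_new_star)
    database[pid_new] = UspRecord(pid_new, rec.k, rec.shadow_ids)
    del database[ma1["PID"]]  # a pseudo identity is never reused
    return {"PID_new*": pid_new_star, "SKu": sk_u, "V4": v4}, sk

def user_handle_ma4(cred: dict, k: bytes, n_u: bytes,
                    id_u: bytes, ma4: dict):
    """Step 5: verify V4, then unmask SK and the next pseudo identity."""
    if ma4["V4"] != h(ma4["SKu"], k, ma4["PID_new*"]):
        return None
    sk = xor_bytes(ma4["SKu"], h(id_u, k, n_u))
    cred["PID"] = xor_bytes(ma4["PID_new*"], h(cred["PID"], k))
    return sk
```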
IV. FORMAL SECURITY ANALYSIS

This section presents the formal security proof of the proposed scheme. We first demonstrate that our proposed scheme is secure.

A. Definitions and Assumptions

Bellare and Rogaway introduced a theoretical security proof for an authentication and key exchange protocol for a symmetric two-party case, which we refer to as the BR93-Model [18]. During the authentication process only the USP can authenticate a user, and a CS needs to forward the authentication request of the user to the USP. Thus, we assume that the communication between the CS and USP is secure, so that the USP and the CS can be regarded as a single participant and we call it the service agent (SA).

1) Complexity Assumptions: The security of our proposed scheme is based on the secure one-way hash function, which can be regarded as a pseudorandom function [19]. Therefore, we first introduce the security definitions of pseudorandom functions and show their game environments that will be used for the security proofs of the proposed scheme.

Definition 1: Let f be a polynomial-time computable function and AdvH = |Pr[H^f = 1] − Pr[H^f′ = 1]| denote the advantage of an algorithm H, controlled by a probabilistic polynomial-time adversary A, in distinguishing f from another function f′. We say that f is a (n, q, ε)-secure pseudorandom function if there is no feasible algorithm H that can distinguish f from f′ with advantage AdvH ≥ ε, while making at most q oracle queries to f or a truly random function f′ and running at most n times, by playing the following game:

Initialization: A challenger C interacting with A picks a random bit b ∈ {0, 1} to determine the function fb, where f0 is a pseudorandom function and f1 is a truly random function.

Training Phase: A issues q queries, x1, · · ·, xq to C, where xi ∈ {0, 1}* are binary strings of arbitrary length. The challenger responds to these queries by sending fb(xi) to A for i = 1, · · ·, q, where fb(xi) ∈ {0, 1}^l and l is a fixed positive integer.

Guess: A outputs b′ ∈ {0, 1} as a guess of b. A wins this game if b′ = b. We define the advantage of A winning the game as Adv_{f0,A} = |Pr[b′ = b] − 1/2|.

According to the pseudorandom function assumption, no probabilistic polynomial-time adversary can win the above game with non-negligible advantage.
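The game of Definition 1 can be exercised with a toy harness; here HMAC-SHA256 under a hidden random key plays the pseudorandom function f0 and a lazily sampled table plays the truly random f1. All names are illustrative, and a distinguisher is just a callable that issues oracle queries and returns a guess.

```python
import hmac, hashlib, secrets

def prf_game(distinguisher, q: int = 64) -> bool:
    """One run of the distinguishing game: the challenger picks b,
    answers up to q queries with f_b, and returns True iff the
    distinguisher's guess b' equals b."""
    b = secrets.randbits(1)
    key = secrets.token_bytes(32)
    table = {}  # lazily sampled truly random function f1

    def oracle(x: bytes) -> bytes:
        if b == 0:  # pseudorandom function f0
            return hmac.new(key, x, hashlib.sha256).digest()
        if x not in table:  # truly random function f1
            table[x] = secrets.token_bytes(32)
        return table[x]

    return distinguisher(oracle, q) == b

# A distinguisher with no structural insight wins with probability ~1/2,
# i.e., advantage ~0, which is what (n, q, ε)-security demands.
trivial = lambda oracle, q: 0
wins = sum(prf_game(trivial) for _ in range(1000))
print(f"trivial distinguisher win rate: {wins/1000:.3f}")
```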
2) Security Model and Notations:

Protocol Participants: Π^s_{A,B} denotes the oracle which plays the role of A to interact with B in session s, and Π^t_{B,A} denotes the oracle which plays the role of B to interact with A in session t, where A, B ∈ I, s, t ∈ N, I is the set of identities of the players such as a user and the service agent who participate in the protocol, and N is the set of positive integers.

Protocols: The proposed authentication scheme uses a three-party authentication and key exchange scheme. However, the protocol can be reduced to a de facto two-party setting protocol. Therefore, we define a two-party authentication and key exchange protocol as follows.

Definition 2: A two-party authentication and key exchange protocol P is formally specified by an efficiently computable function Π on the following inputs:
k: The length of the security parameter used in the protocol.
A: The identity of the initiator of P, where A ∈ I.
B: The identity of the intended partner of P, where B ∈ I.
x: The secret information, where x ∈ {0, 1}*.
K: The conversation in P so far.
r: The random coin flips of the sender or initiator, where r ∈ {0, 1}+.

The output of Π(k, A, B, x, K, r) = (m, δ, α) is defined as follows:
m: The next message to be sent, where m ∈ {0, 1}* ∪ {∗}, and {∗} specifies that the initiator sends no message.
δ: The decision, where δ ∈ {A, R, ∗}, and A, R, and ∗ denote accept, reject, and no decision, respectively.
α: The private output, where α ∈ {0, 1}* ∪ {∗} and {∗} denotes that the initiator does not have any private output.

3) Adversary Model: An adversary A is a probabilistic polynomial-time Turing machine during the execution of protocol P. A can control the channel between A and B by eavesdropping on the messages sent by A and B, modifying these messages, and compromising the session secrets shared between A and B. These behaviors can be modeled by the following queries.

Execute(Π^s_{A,B}, Π^t_{B,A}): This query models all kinds of passive attacks, where a passive adversary can intercept all the data exchanged between Π^s_{A,B} and Π^t_{B,A} in a session of P.
Send(Π^s_{A,B}, m): This query models active attacks, where an adversary sends a message m to Π^s_{A,B} and obtains a response message according to the proposed scheme.
Reveal(Π^s_{A,B}): This query models the exposure of session keys (known session key attacks) in a particular session s.
Corrupt(Π^s_{A,B}): This query models the revelation of long-term secret keys.
Test(Π^s_{A,B}): When Π^s_{A,B} has accepted and shared a session key, adversary A can make this query and try to distinguish a real session key from a random string.

4) Security Definitions: Before defining the notion of mutual authentication security, we first briefly review the definition of a matching conversation.

Definition 3 (Matching Conversations): An authenticated key exchange protocol P is a message-driven protocol and the goal of P is to achieve a matching conversation. We first define a protocol session of a party A as (A, B, s, role), where B is the identity of A's partner, s is the session identifier, and role can be either initiator or responder. A protocol P with two protocol sessions between a party A and a party B has sessions of the form (A, B, s, initiator) and (A, B, t, responder), respectively. Two sessions are said to be a matching conversation involving A and B if their session identifiers are identical and the initiator and responder parties are A and B. If a protocol P consists of more than two sessions and each pair of sessions in sequence is a matching conversation, then P is said to be a protocol of matching conversations. We define mutual authentication based on the definition of matching conversation as follows. P is a mutual authentication protocol if for any polynomial time adversary A: (1) matching conversation implies acceptance and (2) acceptance implies matching conversation. The first condition says that if the sessions of two parties consist of a matching conversation, then the parties accept the authentication of each other. The second condition says that if each party accepts the authentication with the other party in a conversation, then the probability that there is no matching conversation between them is negligible.
Formally, mutual authentication (MA) security is defined as:

Definition 4: An authentication protocol P is MA-Secure (i.e., P satisfies MA-Security) if:
(1) Matching conversation implies acceptance: If oracles Π^s_{A,B} and Π^t_{B,A} have matching conversations, then both oracles accept the authentication of each other, AND
(2) Acceptance implies matching conversations: The probability of the event NoMatching^A(k) is negligible, where k is a security parameter and NoMatching^A(k) is the event that there exist i, j, A, and B such that Π^i_{A,B} is accepted but there is no oracle Π^j_{B,A} which is engaged in a matching conversation.

The event NoMatching^A(k) can also be denoted as Succ_P^MA(A), which is the probability that a polynomial-time adversary A can successfully impersonate one of the two interactive entities who want to authenticate each other in P.

Authentication Key Exchange (AKE) Security: In an execution of an MA-Secure authentication protocol P, a polynomial-time adversary A interacts with two fresh oracles: Π^s_{A,B} and its partner Π^t_{B,A}. At the end of the execution, A issues a Test query to one of the two fresh oracles. Then the real session key or a random string is returned to A according to the value of a random bit b. Finally, A outputs a bit b′ and terminates the game. The AKE-Advantage, Adv_P^AKE(A), is defined as |Pr[b = b′] − 1/2|. We give a formal definition of AKE-Security below:

Definition 5: A protocol P is AKE-Secure if it satisfies the following properties:
(1) At the beginning the adversary engages in the execution of P with Π^s_{A,B} and its partner Π^t_{B,A}. Then both oracles can accept and share the same session key with each other.
(2) P is MA-Secure.
(3) For every probabilistic polynomial-time adversary A, Adv_P^AKE(A) is negligible.

When a Test query is issued before finishing the execution of the protocol, the game is played as per the above definition if the session key is generated by any one of the two fresh parties. Otherwise, the Test query will be rejected.

B. Formal Security Analysis of the Proposed Scheme

The proposed scheme is based on hash functions, which can be considered as secure pseudorandom functions [19]. In this section, we show that the proposed scheme is provably secure based on the pseudorandom function assumption. As mentioned earlier, even though our proposed scheme is based on a three-party authentication and key exchange protocol, it can be reduced to a two-party authentication and key exchange protocol.

Lemma 1: If h is a (n0, q0, ε0)-secure pseudorandom function family with negligible ε0, then the proposed authentication scheme is MA-Secure.

Proof: Assume that there is a polynomial-time adversary A who can break the MA-Security of the proposed protocol P with non-negligible probability Succ_P^MA(A). We construct a polynomial-time algorithm F using A to show that F can break the pseudorandom function with non-negligible advantage, thus providing a contradiction. Also, Succ_P^MA(A) = Pr[SuccUser] + Pr[SuccSA] − Pr[SuccUser, SuccSA] ≤ Pr[SuccUser] + Pr[SuccSA], where SuccUser and SuccSA are the events that A successfully impersonates as a legitimate user and SA, respectively, to pass authentication. Therefore, we split the proof into two cases, one for SA impersonation and the other for user impersonation.

Case 1 (SA Impersonation): Assume that A can impersonate as a SA with probability ε′.
If A wants to be successfully authenticated by a user (say Ui) using Π^s_{User,SA} controlled by F, A must correctly send V4 = h(SKu||ki||PIDi_new*). In the following game, F will exploit the ability of A to break the pseudorandom function assumption with ε′ ≤ 4ε0 + 2^−k, where k is the security parameter. F plays the game in Definition 1 with challenger C as follows.

Initialization: Let the long-term secret key ki be k bits long. C picks a random bit b ∈ {0, 1} and sets up a secure one-way hash function hb, where h0 = hki is a pseudorandom function and h1 is a random function. If F simulates the game by using h1 to interact with A, we call this game a random experiment. On the other hand, if F uses h0 to simulate the game, we call this game a real experiment. The goal of F is to correctly guess if hb = h0 or hb = h1 (i.e., b = 0 or b = 1).

Training: F simulates Π^s_{User,SA} and Π^t_{SA,User} to interact with A by answering the following queries:
- Execute(Π^s_{User,SA}, Π^t_{SA,User}): F uses hb given by C as hki in the protocol. F also randomly generates kh and PIDi_new and then computes PIDi_new* = h(PIDi||ki) ⊕ PIDi_new, SKu = h(IDu||ki||Nu) ⊕ SK, and V4 = h(SKu||ki||PIDi_new*). Subsequently, F simulates Π^s_{User,SA} and Π^t_{SA,User} with the help of hb, PIDi_new*, SKu, and V4.
- Send(Π^s_{User,SA}, m): Π^s_{User,SA} sends the request message m = {PIDi, Nu, V1} of the protocol. Π^s_{User,SA} first validates V1 by querying hb, and then finds PIDi in its database and checks the correctness of V1 by querying hb.
- Send(Π^t_{SA,User}, m): If m = {PIDi, Nu, V1}, then Π^t_{SA,User} computes PIDi_new* = h(PIDi||ki) ⊕ PIDi_new, SKu = h(IDu||ki||Nu) ⊕ SK, and V4 = h(SKu||ki||PIDi_new*). Π^t_{SA,User} then responds by sending {PIDi_new*, SKu, V4} to A.

Challenge: First, A queries Send(Π^s_{User,SA}, m) to trigger the protocol. Π^s_{User,SA} then sends m = {PIDi, Nu, V1} to A. Then A generates the authentication response parameter V4 with success probability Pr[SuccSA] = ε′. Thus, A queries Send(Π^t_{SA,User}, {PIDi_new*, SKu, V4}). After receiving this query, F issues the query x* = (SKu||PIDi_new*) to hb and obtains the output V4* = hb(x*) = h(SKu||ki||PIDi_new*).

Guess: Finally, F outputs a guess bit b′ ∈ {0, 1}. If V4* = V4 then F outputs 0; otherwise, F outputs a random bit 0 or 1.

The analysis of the probability that F can successfully distinguish the given hb (i.e., b = b′) can be divided into two cases: under a real experiment (i.e., b = 0), and under a random experiment (i.e., b = 1). In the case of a real experiment, A can successfully send the correct authentication information to win the game with probability ε′. Hence, F will output b′ = 0 with probability ε′ when A sends correct authentication information under a real experiment. However, if A sends wrong information, F can only make a random guess for b, and thus F will output b′ = 0 with probability (1 − ε′)/2. Thus, when b = 0, Pr[b = b′|b = 0] = ε′ + (1 − ε′)/2. In the case of random experiments, A can only send the correct authentication information by random guessing and the probability of a correct guess is 2^−k. Thus, when b = 1, F outputs b′ = 1 with probability (1 − 2^−k)/2 (i.e., Pr[b = b′|b = 1] = (1 − 2^−k)/2).
Combining the two cases, we have Pr[b = b′] = Pr[b = b′, b = 0] + Pr[b = b′, b = 1] = (ε′ + (1 − ε′)/2)·1/2 + ((1 − 2^−k)/2)·1/2 = 1/2 + ε′/4 − 2^−(k+2). Thus we have ε0 ≥ |Pr[b = b′] − 1/2| = ε′/4 − 2^−(k+2) ⇒ ε′ ≤ 4ε0 + 2^−k.

Case 2 (User Impersonation): Suppose that A can impersonate as a user with probability ε′′. If A wants to be accepted by Π^t_{SA,User}, then A has to send out the correct authentication information. Thus F plays the same game as in Case 1 with C.

Initialization: C selects a hash function hb according to a random bit b ∈ {0, 1} for answering the queries from F, where h0 = hki is a pseudorandom function and h1 is a random function.

Training: F first selects the required Nu and PIDi in the protocol. F then simulates Π^s_{User,SA} and Π^t_{SA,User} by answering Execute(Π^s_{User,SA}, Π^t_{SA,User}) and Send(Π^s_{User,SA}, m). The simulations of these oracles are similar to those in Case 1.

Guess: F outputs a guess b′ ∈ {0, 1} according to PIDi and V1. If PIDi and V1 are valid, then F outputs 0, implying hb = hki; otherwise it outputs a random bit 0 or 1.

The probability that A successfully sends out the correct PIDi and V1 is ε′′ in the real experiment and 2^−k in the random experiment. Following the analysis of Case 1, we have Pr[b = b′] = 1/2 + ε′′/4 − 2^−(k+2) ⇒ ε′′ ≤ 4ε0 + 2^−k. Combining Case 1 and Case 2, Succ_P^MA(A) ≤ Pr[SuccSA] + Pr[SuccUser] = ε′ + ε′′ ≤ 8ε0 + 2^−(k−1). From the above, ε0 is non-negligible, which contradicts the assertion in the lemma's statement that ε0 is negligible. Thus we can conclude that the proposed authentication scheme is MA-Secure.

Lemma 2: If h is a (n0, q0, ε0)-secure pseudorandom function family with negligible ε0, then the proposed scheme is AKE-Secure.

Proof: In Lemma 1 we have proved that the proposed protocol P is MA-Secure. Now, consider an adversary A who can break the AKE-Security of P with non-negligible Adv_P^AKE(A) = ε. We construct a simulator F using the ability of A to break the pseudorandom function assumption [20]. F plays the following game, as given in Definition 1, with a challenger C.

Initialization: C picks a random bit b ∈ {0, 1} and sets up a secure hash function hb for answering the queries from F, where h0 = hki is a pseudorandom function and h1 is a random function.

Training: F selects the required Nu and PIDi in the protocol. F then simulates Π^s_{User,SA} and Π^t_{SA,User} by answering Execute(Π^s_{User,SA}, Π^t_{SA,User}) and Send(Π^s_{User,SA}, m), respectively. The simulations of these oracles are similar to those in the proof of Lemma 1.
- Test(Π^s_{User,SA}): If kh of Π^s_{User,SA} is generated, then F randomly chooses c ∈ {0, 1}, and returns the real session key kh if c = 0 or a random string if c = 1. Otherwise, F returns ⊥, denoting meaninglessness.
- Test(Π^t_{SA,User}): The simulation is the same as the one above.

Challenge: After querying Execute(Π^s_{User,SA}, Π^t_{SA,User}), A sends a Test query to F.

Guess: After querying Test(Π^s_{User,SA}) or Test(Π^t_{SA,User}), A outputs a bit b′ = 0 if it thinks that the responding string is the real session key; otherwise, it outputs b′ = 1. Finally, F outputs b′ = 0 if b′ = b; otherwise F outputs b′ = 1.

The analysis of the probability of the event b = b′ is similar to that in the proof of Lemma 1. A can win the game by successfully guessing b = b′ with probability (ε + 1/2) under a real experiment (i.e., b = 0). Also, A can only guess if b = b′ with probability 1/2 under a random experiment (i.e., b = 1). If A successfully guesses b = b′, then F will output b′ = 1.
Therefore, the probability of b = b′ and b = 0 is (ε + 1/2)·1/2, and the probability of b = b′ and b = 1 is 1/4. Thus we have Pr[b = b′] = Pr[b = b′, b = 0] + Pr[b = b′, b = 1] = (ε + 1/2)·1/2 + 1/4 = 1/2 + ε/2 ⇒ ε0 ≥ Pr[b = b′] − 1/2 = ε/2.

From the above, ε0 is non-negligible, and thus a contradiction occurs. Therefore, Adv_P^AKE(A) is negligible for each polynomial-time adversary A and P is AKE-Secure.

C. Informal Security Analysis

Figure 3. Attack Tree.

So far, we have formally proved that the proposed scheme can ensure AKE-security, which is imperative to achieve security against impersonation attacks or replay attacks, session key security, etc. In this subsection we use the attack tree shown in Fig. 3 to show how the proposed scheme ensures some of the important security properties which are necessary for EI-based V2G communications.

1) Protection Against Impersonation or Forgery Attacks: In the proposed scheme, if an adversary tries to impersonate as a legitimate user Useri, then he/she needs to send a valid authentication request MA1: {PIDi, Nu, EL, V1}. However, the adversary cannot provide the thumbprint βi and password pswi. Therefore, he/she cannot use the mobile device and compute ki = ki* ⊕ h(βi||pswi), EL = LAIu ⊕ h(ki||Nu), and a valid key-hash response V1 = h(PIDi||Nu||ki||EL), which are essential to authenticate with the USP. On the other hand, if the adversary tries to impersonate as a legitimate service provider, then he/she must know the secret keys Kcu and ki. Without knowing the secrets Kcu and ki, the adversary cannot generate valid key-hash responses V3 = h(SKcs||Kcu||Nc) and V4 = h(SKu||ki||PIDi_new*). In our EI-based V2G communications model, charging/discharging rates vary based on the location. A charging station CSj may try to cheat Useri by providing a false location identity LAIcs to the USP and demand an inaccurate amount from the user. The proposed scheme will be able to detect such forgery attempts in the following way: the USP decodes LAIu from EL and then compares and validates LAIu with LAIcs. Only if the validation is successful will the USP proceed with the execution of the further steps. Otherwise, the USP will terminate the execution of the protocol and take necessary action against the CS. Similarly, a user may intentionally provide a forged LAIu in order to pay a lower amount for charging or ask for a higher amount for discharging. The USP will similarly be able to detect such attempts. Next, we consider a scenario where the user's mobile device is lost or stolen. The adversary may try to use this device to impersonate as a legitimate user. However, in our proposed scheme we have considered multi-factor security and the adversary cannot provide the valid thumbprint βi and password pswi. Hence, he/she will not be able to proceed with further execution of the protocol. In this way, we can ensure security against impersonation and forgery attacks.
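The stolen-device argument rests on the local two-factor check of Step 1. The paper does not spell out which verifier the device stores, so the sketch below assumes (our assumption) that the device keeps ∂i′ = h(h(βi)||pswi) from registration; with a wrong thumbprint or password the gate fails, and even a bypassed gate yields a wrong ki, so every key-hash response built from it is rejected by the USP.

```python
def device_enroll_verifier(beta: bytes, psw: bytes) -> bytes:
    """Assumed enrollment step: store ∂i' = h(αi||pswi), with αi = h(βi)."""
    return h(h(beta), psw)

def device_unlock(cred: dict, verifier: bytes,
                  beta: bytes, psw: bytes):
    """Step 1 local gate: recover ki only if both factors are correct."""
    if h(h(beta), psw) != verifier:
        return None  # wrong thumbprint or password: device refuses
    return xor_bytes(cred["k_star"], h(beta, psw))  # correct ki

# Even if the gate were bypassed, wrong factors unmask a wrong key:
# k_bad = ki* ⊕ h(β'||psw') ≠ ki, so a V1 built from k_bad fails at
# the USP and the impersonation attempt is rejected.
```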
2) Privacy of the User: In the proposed scheme, the user needs to use a valid pseudo identity PIDi for each session, which cannot be used twice. Therefore, no one except the service provider can recognize the activity of the user. Besides, in case of loss of synchronization, the user needs to use one of the unused shadow identities sidj from SID = {sid1, · · ·, sidn}. After that, the user deletes sidj from its memory. Therefore, changing the pseudonym in each session ensures identity untraceability. This approach of the proposed scheme is quite useful for achieving privacy against eavesdropper (PAE).

3) Protection Against Eavesdropping or Interception Attacks: In the proposed scheme, an adversary cannot reuse the message MA1: {PIDi, Nu, EL, V1} since PIDi changes in each session. The adversary cannot reuse message MA2 since a new random number Nc is used in each session. Similarly, an adversary also cannot resend the messages MA3 and MA4 since the key-hash response messages V3 and V4 change in each session and they are generated based on the challenges Nu and Nc, respectively. In this way, we ensure security against replay attacks.
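Both properties above come down to one-time identifiers. The following sketch of the USP-side lookup (our own illustration on top of the earlier `UspRecord`) shows that a pseudo identity or shadow identity resolves exactly once, so a replayed MA1 no longer finds a record, while an out-of-sync user can still resynchronize by spending an unused shadow identity.

```python
def usp_lookup_once(database: dict, pid_or_sid: bytes):
    """Resolve a fresh PIDi, or consume an unused shadow identity.
    A replayed identifier fails: the PID was rotated after first use
    and the shadow identity was deleted by both sides."""
    rec = database.get(pid_or_sid)
    if rec is not None:
        return rec                        # current PIDi: normal case
    for rec in database.values():         # linear scan, illustration only
        if pid_or_sid in rec.shadow_ids:
            rec.shadow_ids.discard(pid_or_sid)  # one-time use
            return rec
    return None                           # unknown or replayed identifier

def user_resync(cred: dict) -> bytes:
    """Loss-of-synchronization fallback: send an unused sidx in MA1;
    the user deletes it locally after the run (Section III)."""
    return cred["SID"].pop()
```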
4) Protection Against Compromised User's Device: Next, we consider a scenario when an attacker hijacks the car with the user's device and forces the legitimate user to input his/her password and thumbprint and then change the password and the thumbprint. After that, the adversary may try to ask for charging/discharging services from the USP. In order to address this issue, the legitimate user needs to inform the USP of such an incident as soon as possible. After that, the USP will block the user's account. In addition, the USP can also place a limit on the weekly or monthly charging/discharging amount for a user. In this way, we can address the scenario of compromised user devices.

5) Protection Against Physical Attacks: In the proposed scheme we assume that all the devices (such as the user's mobile device, EV, and EVSE) are tamper proof. Therefore, if an adversary attempts to perform any physical attacks, they can be resisted by the hardware. In addition, in order to deal with physical attacks, devices with embedded physical unclonable functions (PUFs) [11] can also be used. Any attempt to tamper with the PUF changes the behavior of the device and renders the PUF useless, thereby making it possible to detect any tampering attempts.

V. PERFORMANCE EVALUATION

This section evaluates and compares the performance of the proposed scheme with respect to other authentication schemes for smart grids. We first consider several imperative security properties such as forward secrecy, session key security, etc. for analyzing the performance of our proposed authentication scheme on the security front with respect to other schemes ([4], [5], [6], [7], [9]). Table III shows that the schemes presented in [4], [5], [6], [7], [9], and [10] fail to guarantee all the imperative security properties. Although Odelu et al.'s scheme can provide various security features, it is not robust against DoS attacks (as discussed in Section I). In contrast, the proposed scheme can ensure all the important security features (as shown in Table III). For instance, in our proposed scheme, the USP can quickly make a decision against an invalid authentication request, which helps our scheme to be resilient against DoS attacks.

Table III
PERFORMANCE COMPARISON BASED ON SECURITY FEATURES

|Scheme|SP1|SP2|SP3|SP4|SP5|SP6|
|---|---|---|---|---|---|---|
|Mohammadali et al. [4]|Yes|Yes|No|No|Yes|No|
|Nicanfar et al. [5]|No|No|No|No|No|No|
|Wu et al. [6]|No|No|No|No|No|No|
|Xia et al. [7]|No|No|Yes|No|No|No|
|Tsai et al. [9]|Yes|Yes|Yes|Yes|No|No|
|Odelu et al. [10]|Yes|Yes|Yes|Yes|Yes|No|
|Proposed Scheme|Yes|Yes|Yes|Yes|Yes|Yes|

SP1: Privacy of customer; SP2: Privacy against eavesdropper; SP3: Resilience against man-in-the-middle attacks; SP4: Forward secrecy; SP5: Session key security; SP6: Resilience against DoS attacks.
Next, we evaluate the performance of the proposed scheme in terms of the computation and communication costs. In this regard, we first conduct simulations of the cryptographic operations used by all the schemes on an Ubuntu 12.04 virtual machine with an Intel Core i5-4300 dual-core 2.60 GHz CPU (operating as the USP/CS). To simulate a customer's mobile device, we use an HTC One smartphone with an ARM Cortex-A9 MPCore processor operating at 890 MHz. We use the JPBC library Pbc-05.14 [16] and the JCE library [17] for evaluating the computation times of different cryptographic operations used in the proposed scheme and [4], [5], [6], [7], [9], and [10].

Table IV
EXECUTION TIME OF VARIOUS CRYPTOGRAPHIC OPERATIONS

|Operation|User's Device (HTC One Smartphone)|USP/CS (Intel Core i5-4300 Machine)|
|---|---|---|
|Tmp|5.12 ms|2.6 ms|
|Tm|21.86 ms|14.5 ms|
|Tb|8.67 ms|3.78 ms|
|Tcertgen|55.946 ms|-|
|Tcertver|-|17.237 ms|
|Th|0.0186 ms|0.011 ms|
|Te|7.235 ms|2.338 ms|
|Ts|0.0584 ms|0.041 ms|

Table V
PERFORMANCE COMPARISON BASED ON COMPUTATION COST (IN MS) AND COMMUNICATION COST

|Scheme|User's Device|USP/CS|Communication Cost|
|---|---|---|---|
|Mohammadali et al. [4]|2Tmp+Tm+Tcertgen+3Th ≈ 88.15|3Tmp+Tm+Tcertver+4Th ≈ 57.87|2340 bits|
|Nicanfar et al. [5]|3Tmp+Tm+Tcertgen+Th ≈ 93.24|4Tmp+Tm+Tcertver+4Th+Ts ≈ 63.77|2176 bits|
|Wu and Zhou [6]|2Tmp+Tm+Tcertgen+Th+Ts ≈ 92.38|3Tmp+Tm+Tcertver+3Th+Ts ≈ 57.88|4064 bits|
|Xia and Wang [7]|Ts+4Th ≈ 0.13|Ts+4Th ≈ 0.085|3296 bits|
|Tsai and Lo [9]|4Tmp+Te+5Th ≈ 27.85|3Tmp+Te+2Tb+5Th ≈ 23.22|6880 bits|
|Odelu et al. [10]|3Tmp+Te+6Th ≈ 22.74|2Tmp+Te+2Tb+6Th ≈ 15.32|2912 bits|
|Proposed Scheme|6Th ≈ 0.15|8Th ≈ 0.88|1802 bits|

Tmp: time required for a point multiplication operation; Tm: time required for a multiplication operation; Te: time required for a modular exponentiation operation; Ts: time required for a symmetric encryption/decryption; Tb: time required for a bilinear pairing; Th: time required for a hash operation; Tcertgen/ver: time required for a certificate generation/verification operation.

From Table V we can see that the performance of the proposed scheme in terms of computation and communication costs is better than the others. Next, if we consider the existing standards such as IEC 15118 and the OCPP protocol for V2G communications, then we find that, like [4], [5], and [6], their authentication and key-establishment schemes are based on the computationally expensive ECDSA crypto-system, where each signature generation takes 23.81 ms (at the user's device) and each signature verification (at the USP) takes 17.56 ms. Besides, according to [32] and [33], these protocols also suffer from several security issues (such as being insecure against man-in-the-middle attacks, network impersonation attacks, DoS attacks, etc.) and challenges. These protocols also expose some important information such as customer name, vehicle identification number, charging location, and charging schedule, which affects the customer's privacy. Here, we argue that our lightweight authentication and key establishment scheme can easily be used by these underlying communication protocols (such as IEC 15118 and OCPP) so that they can address all the underlying security issues and ensure an enhanced security level along with a higher degree of efficiency.

Next, in order to comprehensively evaluate the practicality of the proposed scheme, we consider the scalability of the proposed scheme when deployed by organizations that own charging stations. Since companies with large numbers of charging stations do not exist yet, we use traditional gasoline refueling companies to obtain representative numbers. In the USA, the biggest service providers are Shell (13727 stations), Chevron (6075 stations) and Exxon (5800 stations) [34]. Current battery charging technologies for EVs may be classified as either slow (energy flow rates of 2-6 kW) or rapid charging (up to 150 kW) [35]. We consider the EV models Nissan LEAF (2018), Tesla Model S 100D and Mitsubishi Outlander PHEV (2018) that come with battery capacities of 40 kWh, 100 kWh and 13.8 kWh, respectively. Assuming a fast charging station with an energy flow rate of 50 kW, the empty-to-full charging time for these vehicles is 1 hour, 2 hours and 40 minutes, respectively. While a 150 kW rapid charger takes 1 hour to charge the Tesla Model S 100D battery, the Nissan and Mitsubishi models do not support this technology. Thus we use one hour as a representative time for charging current EVs in charging stations. The number of charging points in CSs varies. For traditional (petrol) filling stations, even in larger stations, studies indicate the average number is 18 (in Florida, [36]), i.e., 18 vehicles can fill up at the same time. We use 18 as the number of charging points in a CS and thus, 18 authentication requests are generated from a CS every hour. Now, based on Table V, the communication cost for the proposed protocol is 1802 bits = 226 bytes.
VI. CONCLUSION

Secure and efficient key exchange is critical for ensuring secure data exchange in the Energy Internet. Aiming at the problem of safe communication between EV users, the USP, and CSs, this paper proposed an efficient privacy-preserving authentication scheme for EI-based Vehicle-to-Grid communication. In this regard, only lightweight cryptographic primitives such as one-way, collision-resistant hash functions have been considered. We quantified the performance of our scheme using theoretical analysis and simulation tools. Our scheme is resilient against many security attacks, efficient in computation and communication, and compares favorably with existing related schemes.

ACKNOWLEDGMENT

This research was supported by the National Research Foundation, Prime Minister's Office, Singapore under its Corporate Laboratory@University Scheme, National University of Singapore, and Singapore Telecommunications Ltd. This research was supported in part by Singapore Ministry of Education Academic Research Fund Tier 1 (R-263-000-C13112). The authors would like to thank all five reviewers for their insightful comments and valuable suggestions.

REFERENCES

[1] J. Rifkin, "The third industrial revolution," Engineering and Technology, vol. 3, no. 7, pp. 26-27, 2008.
[2] Z. Y. Dong, "Towards an intelligent future energy grid," The University of Sydney, New South Wales, 2016.
[3] V. C. Gungor et al., "A Survey on Smart Grid Potential Applications and Communication Requirements," IEEE Transactions on Industrial Informatics, vol. 9, no. 1, pp. 28-42, 2013.
[4] A. Mohammadali, M. Sayad Haghighi, M. H. Tadayon, and A. Mohammadi Nodooshan, "A Novel Identity-Based Key Establishment Method for Advanced Metering Infrastructure in Smart Grid," IEEE Transactions on Smart Grid, pp. 1-1, 2016.
[5] H. Nicanfar and V. C. M. Leung, "Multilayer Consensus ECC-Based Password Authenticated Key-Exchange (MCEPAK) Protocol for Smart Grid System," IEEE Transactions on Smart Grid, vol. 4, no. 1, pp. 253-264, 2013.
[6] D. Wu and C. Zhou, "Fault-tolerant and scalable key management for smart grid," IEEE Trans. Smart Grid, vol. 2, no. 2, pp. 371-378, Jun. 2011.
[7] J. Xia and Y. Wang, "Secure key distribution for the smart grid," IEEE Trans. Smart Grid, vol. 3, no. 3, pp. 1437-1443, Aug. 2012.
[8] J. H. Park, M. Kim, and D. Kwon, "Security weakness in the smart grid key distribution proposed by Xia and Wang," IEEE Trans. Smart Grid, vol. 4, no. 3, pp. 1613-1614, Sep. 2013.
[9] J.-L. Tsai and N.-W. Lo, "Secure Anonymous Key Distribution Scheme for Smart Grid," IEEE Transactions on Smart Grid, vol. 7, no. 2, pp. 906-914, 2016.
[10] V. Odelu, A. K. Das, M. Wazid, and M. Conti, "Provably Secure Authenticated Key Agreement Scheme for Smart Grid," IEEE Transactions on Smart Grid, 2016, DOI: 10.1109/TSG.2016.2602282.
[11] P. Gope and B. Sikdar, "Privacy-Aware Authenticated Key Agreement Scheme for Secure Smart Grid Communication," IEEE Transactions on Smart Grid, DOI: 10.1109/TSG.2018.2844403, 2018.
[12] M. Mustafa, N. Zhang, and Z. Fan, "Smart electric vehicle charging: Security analysis," in Proc. IEEE PES ISGT, Washington, DC, USA, Feb. 2013, pp. 1-6.
[13] H. Guo, Y. Wu, and M. Ma, "UBAPV2G: A unique batch authentication protocol for vehicle-to-grid communications," IEEE Trans. Smart Grid, vol. 2, no. 4, pp. 707-714, Nov. 2011.
[14] H. Liu, H. Ning, and L. Yang, "Role-dependent privacy preservation for secure V2G networks in the smart grid," IEEE Trans. Inf. Forensics Security, vol. 9, no. 2, pp. 208-220, Feb. 2014.
[15] Z. Yang, S. Yu, and C. Liu, "P²: Privacy-preserving communication and precise reward architecture for V2G networks in smart grid," IEEE Trans. Smart Grid, vol. 2, no. 4, pp. 697-706, Dec. 2011.
[16] PBC library. Tech. rep. http://crypto.standford.edu/pbc/ (accessed on 16 April 2017).
[17] Oracle Technology Network. Java Cryptography Architecture (JCA). [Online].
[18] M. Bellare and P. Rogaway, "Entity Authentication and Key Distribution," Advances in Cryptology - Crypto 1993, D. Stinson, Ed., pp. 110-125, Springer-Verlag, 1993.
[19] B. Schneier, Applied Cryptography, 2nd ed., pp. 197-211, John Wiley & Sons, New York, 1996.
[20] A. Menezes and S. Vanstone, Handbook of Applied Cryptography. Boca Raton, FL, USA: CRC Press, 1996.
[21] H. Liu, H. Ning, Y. Zhang, and L.-T. Yang, "Aggregated-Proofs Based Privacy-Preserving Authentication for V2G Networks in the Smart Grid," IEEE Trans. Smart Grid, vol. 3, no. 4, pp. 1722-1733, 2012.
[22] A. Abdallah and X. Shen, "Lightweight Authentication and Privacy-Preserving Scheme for V2G Connections," IEEE Transactions on Vehicular Technology, vol. 66, no. 3, pp. 2615-2629, 2017.
[23] N. Saxena and B.-J. Choi, "Authentication Scheme for Flexible Charging and Discharging of Mobile Vehicles in the V2G Networks," IEEE Transactions on Information Forensics and Security, vol. 11, no. 11, pp. 1438-1452, 2017.
[24] D. He, S. Chan, and M. Guizani, "A Privacy-friendly and efficient secure communication framework for V2G networks," IET Communications, vol. 12, no. 3, pp. 304-309, 2018.
[25] Y. Zhang, S. Gjessing, H. Liu, H. Ning, L.-T. Yang, and M. Guizani, "Securing vehicle-to-grid communications in the smart grid," IEEE Wireless Communications, vol. 20, no. 6, pp. 66-73, 2018.
[26] P. Gope and B. Sikdar, "An Efficient Privacy-Friendly Hop-by-Hop Data Aggregation Scheme for Smart Grids," IEEE Systems Journal, DOI: 10.1109/JSYST.2019.2899986, 2019.
[27] P. Gope and B. Sikdar, "Lightweight and Privacy-Friendly Spatial Data Aggregation for Secure Power Supply and Demand Management in Smart-Grids," IEEE Transactions on Information Forensics & Security, DOI: 10.1109/TIFS.2018.2881730, 2018.
[28] A.-S. Sani et al., "Cyber security framework for Internet of Things-based Energy Internet," Future Generation Computer Systems, doi.org/10.1016/j.future.2018.01.029, 2018.
[29] A. Jindal, N. Kumar, and M. Singh, "Internet of energy-based demand response management scheme for smart homes and PHEVs using SVM," Future Generation Computer Systems, doi.org/10.1016/j.future.2018.04.003, 2018.
[30] J. Shen, T. Zhou, F. Wei, X. Sun, and Y. Xiang, "Privacy-Preserving and Lightweight Key Agreement Protocol for V2G in the Social Internet of Things," IEEE Internet of Things Journal, vol. 4, no. 4, pp. 2526-2536, 2017.
[31] K. Zhou, S. Yang, and Z. Shao, "Energy Internet: The business perspective," Applied Energy, vol. 178, no. 15, pp. 212-222, 2016.
[32] S. Lee et al., "Study on Analysis of Security Vulnerabilities and Countermeasures in ISO/IEC 15118 Based Electric Vehicle Charging Technology," International Conference on IT Convergence and Security (ICITCS), DOI: 10.1109/ICITCS.2014.7021815, 2014.
[33] C. Alcaraz, J. Lopez, and S. Wolthusen, "OCPP Protocol: Security Threats and Challenges," IEEE Trans. Smart Grid, vol. 8, no. 5, pp. 2452-2459, 2017.
[34] https://247wallst.com/retail/2017/04/21/10-retailers-that-controlamericas-gasoline-sales
[35] Y. Ligen, H. Vrubel, and H. Girault, "Mobility from Renewable Electricity: Infrastructure Comparison for Battery and Hydrogen Fuel Cell Vehicles," World Electric Vehicle Journal, vol. 9, no. 1, 2018.
[36] Florida Department of Transportation, Trip Generation Characteristics of Large Gas Stations/Convenience Stores and Student Apartments, https://fdotwww.blob.core.windows.net/sitefinity/docs/defaultsource/content/planning/systems/programs/sm/tripgen/trip-generationof-convenience-stores.pdf.

Prosanta Gope (M'18) received the PhD degree in computer science and information engineering from National Cheng Kung University (NCKU), Tainan, Taiwan, in 2015. He is currently working as a Lecturer in the Department of Computer Science (Cyber Security) at the University of Sheffield, UK. Prior to this, Dr. Gope was a Research Fellow in the Department of Computer Science at the National University of Singapore (NUS). His research interests include lightweight authentication, authenticated encryption, access control systems, security in mobile communication and cloud computing, lightweight security solutions for the smart grid, and hardware security of IoT devices. He has authored over 50 peer-reviewed articles in several reputable international journals and conferences, and has four filed patents. He received the Distinguished Ph.D. Scholar Award in 2014 from National Cheng Kung University, Tainan, Taiwan. He currently serves as an Associate Editor of the IEEE INTERNET OF THINGS JOURNAL, IEEE SENSORS JOURNAL, SECURITY AND COMMUNICATION NETWORKS, and the MOBILE INFORMATION SYSTEMS JOURNAL.

Biplab Sikdar (S'98-M'02-SM'09) received the Ph.D. degree in electrical engineering from Rensselaer Polytechnic Institute, Troy, NY, USA, in 2001. He was on the faculty of Rensselaer Polytechnic Institute from 2001 to 2013, first as an Assistant and then as an Associate Professor. He is currently an Associate Professor with the Department of Electrical and Computer Engineering, National University of Singapore, Singapore. His research interests include computer networks and security for IoT and cyber-physical systems. Dr. Sikdar is a member of Eta Kappa Nu and Tau Beta Pi. He served as an Associate Editor for the IEEE Transactions on Communications from 2007 to 2012. He currently serves as an Associate Editor for the IEEE Transactions on Mobile Computing.
{ "disclaimer": "Notice: The following paper fields have been elided by the publisher: {'references'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1109/TSG.2019.2908698?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1109/TSG.2019.2908698, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "mit", "status": "GREEN", "url": "http://eprints.whiterose.ac.uk/156053/1/Final%20TSG.pdf" }
2,019
[ "JournalArticle" ]
true
2019-04-02T00:00:00
[]
19,266
en
[ { "category": "Engineering", "source": "s2-fos-model" }, { "category": "Environmental Science", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/02a06d4d6f9118145f72870c7e1a548707a44f04
[]
0.863393
Improved Virtual Synchronous Generator Principle for Better Economic Dispatch and Stability in Grid-Connected Microgrids with Low Noise
02a06d4d6f9118145f72870c7e1a548707a44f04
Energies
[ { "authorId": "2145892508", "name": "Shruti Singh" }, { "authorId": "32235486", "name": "D. Gao" } ]
{ "alternate_issns": null, "alternate_names": null, "alternate_urls": [ "http://nbn-resolving.de/urn/resolver.pl?urn=urn:nbn:ch:bel-155563", "https://www.mdpi.com/journal/energies", "http://www.mdpi.com/journal/energies" ], "id": "1cd505d9-195d-4f99-b91c-169e872644d4", "issn": "1996-1073", "name": "Energies", "type": "journal", "url": "http://www.e-helvetica.nb.admin.ch/directAccess?callnumber=bel-155563" }
The proper operation of microgrids depends on Economic Dispatch. It satisfies all requirements while lowering the microgrids’ overall operating and generation costs. Since distributed generators constitute a large portion of microgrids, seamless communication between generators is essential. While guaranteeing a reliable microgrid operation, this should be achieved with as few losses as possible. The distributed generator technology introduces noise into the system by design. To find the best economic dispatch strategy, noise was considered in this research as a limitation in grid-connected microgrids. The microgrid’s performance was improved, and the proposed technique also showed increased resilience. A virtual synchronous generator (VSG) control approach is proposed with a noiseless consensus-based algorithm to improve the power quality of microgrids. Voltage and frequency regulation modules are the foundation of the VSG paradigm. The synchronous generator’s second-order equation (hidden-pole configuration) was also used to represent the voltage of the stator and rotor motion. This study compared changes in power, frequency, and voltage for the microgrid by utilizing the described control approach using MATLAB. According to the findings, this method aids in controlling load and noise variations and offers distributed generators an efficient control strategy.
# energies

_Article_

## Improved Virtual Synchronous Generator Principle for Better Economic Dispatch and Stability in Grid-Connected Microgrids with Low Noise

**Shruti Singh and David Wenzhong Gao ***

Department of Electrical and Computer Engineering, University of Denver, Denver, CO 80210, USA; shruti.singh@du.edu

***** Correspondence: wenzhong.gao@du.edu**

**Abstract: The proper operation of microgrids depends on Economic Dispatch. It satisfies all requirements while lowering the microgrids' overall operating and generation costs. Since distributed generators constitute a large portion of microgrids, seamless communication between generators is essential. While guaranteeing a reliable microgrid operation, this should be achieved with as few losses as possible. The distributed generator technology introduces noise into the system by design. To find the best economic dispatch strategy, noise was considered in this research as a limitation in grid-connected microgrids. The microgrid's performance was improved, and the proposed technique also showed increased resilience. A virtual synchronous generator (VSG) control approach is proposed with a noiseless consensus-based algorithm to improve the power quality of microgrids. Voltage and frequency regulation modules are the foundation of the VSG paradigm. The synchronous generator's second-order equation (hidden-pole configuration) was also used to represent the voltage of the stator and rotor motion. This study compared changes in power, frequency, and voltage for the microgrid by utilizing the described control approach using MATLAB. According to the findings, this method aids in controlling load and noise variations and offers distributed generators an efficient control strategy.**

**Citation: Singh, S.; Gao, D.W.** Improved Virtual Synchronous Generator Principle for Better Economic Dispatch and Stability in Grid-Connected Microgrids with Low Noise. Energies 2023, 16, 4670. [https://doi.org/10.3390/en16124670](https://doi.org/10.3390/en16124670)

Academic Editors: Favuzza Salvatore and Jaser Sa'Ed

Received: 15 May 2023; Revised: 5 June 2023; Accepted: 11 June 2023; Published: 12 June 2023

**Copyright:** © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons [Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).](https://creativecommons.org/licenses/by/4.0/)

**Keywords: microgrids; virtual synchronous generator; consensus-based algorithm; economic dispatch; power systems; distributed generators**

**1. Introduction**

Economic Dispatch is an optimization problem used to reduce a system's costs. It is a significant issue in the world of power systems. All of the system's constraints are considered when determining the system's minimum cost. Economic dispatch problems have been solved using a variety of techniques, most frequently a quadratic convex cost function [1,2]. The Lagrangian relaxation approach and quadratic programming were applied in [3] and [4], respectively. The equal incremental cost condition was taken into consideration when studying consensus-based algorithms [5-8]. Economic dispatch and demand-side management issues were resolved to reduce the overall costs [9-15]. Particle Swarm Optimization was used in [16] for effective demand response in islanded microgrids.
Refs. [17,18] used the Dragonfly algorithm and the Cuckoo Search Algorithm, respectively, to solve demand response in economic dispatch problems. Ref. [19] introduced an improved Genetic Algorithm for optimal dispatch. The system/microgrid was assumed to be noiseless for the sake of the traditional economic dispatch problem. In reality, noise from both system components and the environment is present. This impacts the effectiveness and resilience of the microgrids and restricts their stability as well. To stabilize microgrids and improve their performance and resilience, noise must be incorporated into the consensus-based economic dispatch problem.

Noise was taken into account in several analyses [20-22]. These studies created a power-sharing strategy in microgrids that was parameter-independent and a noiseless algorithm for better voltage and frequency synchronization. However, there has not been much research in this particular area, which this study explored using the mentioned strategy. This approach was presented by [23] for isolated microgrids; however, grid-connected microgrids were not considered. This study outlines the grid-connected microgrids' noiseless economic dispatch problem. Additionally, this method does not require a central controller, making the system cheaper and more cost-effective. Because a distributed strategy was used, a central controller was not needed, which minimized the communication complexity [24-29]. Ref. [30] proposed a multi-agent consensus control-based economic dispatch algorithm allowing the microgrid to switch from isolated to grid-connected modes more reliably.

A key role is played by inverters in the interaction between the distribution network and the microgrid [31]. Conventionally, the droop control technique [32] is employed, although it is extremely vulnerable to changes in load. An improved droop control approach has been suggested in many publications. Some of these approaches are dependent on the inverter's output voltage; the drawback is that the droop coefficient causes the frequency to be too unstable. In DC microgrids, a discrete consensus-based adaptive droop control technique has also been put forth [33,34]. In several articles, the P/Q control method has been applied, while the U/f control approach has primarily been employed in island mode. It is possible to create two sets of control systems using the P/Q and U/f control methods in conjunction, with switching control components between the two [35]. However, because the two strategies have a complex structure, this system is challenging to create.

This paper introduces the novel concept of economic dispatch with noise effects on a grid-connected microgrid's performance. A consensus-based algorithm along with a virtual synchronous generator strategy was used to reduce the fluctuations in voltage and frequency of the microgrids due to various noise levels. Two economic dispatch algorithms, i.e., the Lagrange formulation and the particle swarm optimization technique, were compared to analyze their effect on the grid-connected microgrid's overall performance.

This paper is divided into multiple parts. The economic dispatch problem and the PSO algorithm are defined in Section 2. Section 3 introduces the microgrid structure. The distributed noise-resilient economic dispatch approach is presented in Section 4 [23]. The VSG model and control approach are introduced in Section 5.
The results and the discussion are explained in Section 6, and the conclusions are stated in Section 7.

**2. Economic Dispatch Formulation**

_2.1. Lagrange Formulation_

The economic dispatch issue for a microgrid that is connected to a grid is defined using the Lagrangian method. The objective function of the microgrid is first established; this function is the most prevalent formulation used to address economic dispatch issues. Taking into account all the generation units, the total generation cost of a microgrid system can be expressed using the following equation [36]:

$$\sum_{i=1}^{n} P_i C_i = \sum_{i=1}^{n} \left( x_i C_i^2 + y_i C_i + z_i \right) \tag{1a}$$

where $P_i C_i$ is the cost of generator $i$, $x_i$, $y_i$, $z_i$ are the cost coefficients, and $C_i$ is the total power output of generator $i$.

To solve the economic dispatch issue, we aimed at lowering the microgrid's generation costs. Equation (1a) becomes:

$$\min \sum_{i=1}^{n} P_i C_i = \min \sum_{i=1}^{n} \left( x_i C_i^2 + y_i C_i + z_i \right) \tag{1b}$$

Additionally, the generators' total electric output can be defined as [36]:

$$\sum_{i=1}^{n} C_i = C_D + C_{loss}, \quad \text{for } C_i^{min} < C_i < C_i^{max} \tag{2}$$

where $C_D$ = total load, $C_{loss}$ = losses during transmission, $C_i^{min}$ = minimum generation limit of generator $i$, and $C_i^{max}$ = maximum generation limit of generator $i$.

In formulating the Lagrangian function, the above three equations together become [36]:

$$L(C_1, C_2, \ldots, C_n) = \sum_{i=1}^{n} P_i C_i + \lambda \left( C_D + C_{loss} - \sum_{i=1}^{n} C_i \right) + \sum_{i=1}^{n} u_x \left( C_i - C_i^{max} \right) + \sum_{i=1}^{n} u_y \left( C_i^{min} - C_i \right) \tag{3}$$

where $\lambda$, $u_x$, $u_y$ are Lagrange multipliers.

Calculating each generator's incremental cost ($IC_1, IC_2, \ldots, IC_n$) is necessary to find a solution to the aforementioned economic dispatch problem. These incremental costs should be equal across the generators to determine the microgrid's minimal cost, i.e., $IC_1 = IC_2 = \ldots = IC_n$, where $n$ is the number of generation units. This problem's most popular solution was used here [36]:

$$\lambda_i = \frac{\partial P_i C_i}{\partial C_i} = 2 x_i C_i + y_i = \lambda^*, \quad \text{for } C_i^{min} < C_i < C_i^{max}$$
$$\lambda_i = \frac{\partial P_i C_i}{\partial C_i} = 2 x_i C_i + y_i < \lambda^*, \quad \text{for } C_i = C_i^{max}$$
$$\lambda_i = \frac{\partial P_i C_i}{\partial C_i} = 2 x_i C_i + y_i > \lambda^*, \quad \text{for } C_i = C_i^{min} \tag{4}$$

where $\lambda_i$ = incremental cost and $\lambda^*$ = optimal incremental cost.

To determine an economic dispatch schedule for the microgrid, the economic dispatch problem must take into consideration the generation limits of each unit. The problem is then rather simple to resolve while accounting for all the generator constraints. However, when addressing the economic dispatch issue for microgrids, most problems have additional limitations that must be taken into account. The above equations serve as the fundamental problem formulation for any issues relating to economic dispatch.

_2.2. Particle Swarm Optimization (PSO) Algorithm_

Particle Swarm Optimization is a computational method inspired by the movement of bird flocks and other organisms/particles, introduced by Kennedy, Eberhart, and Shi [16]. It is a population-based optimization tool in which particles change position by taking into account their velocity, their own experience, and the experience of their neighboring particles. The position and velocity of particle $j$ in $N$-dimensional space are represented as $a_j = (a_{j1}, a_{j2}, \ldots, a_{jN})$ and $b_j = (b_{j1}, b_{j2}, \ldots, b_{jN})$. The best position found by this particle can be represented as $Abest_j = (a_{j1}^A, a_{j2}^A, \ldots, a_{jN}^A)$, and the best position found by the neighboring particles can be represented as $Bbest = (a_1^B, a_2^B, \ldots, a_N^B)$. The new modified velocity and position can be formulated as:

$$b_{jN}^{k+1} = \xi\, b_{jN}^{k} + m_1 r_1 \left( Abest_{jN} - a_{jN}^{k} \right) + m_2 r_2 \left( Bbest_N - a_{jN}^{k} \right), \qquad a_{jN}^{k+1} = a_{jN}^{k} + b_{jN}^{k+1} \tag{4a}$$

where $k$ = number of iterations, $\xi$ = inertia weight factor, $m_1$, $m_2$ = acceleration constants, and $r_1$, $r_2$ = random numbers within the range [0, 1].

The inertia weight factor and the acceleration constants affect the performance significantly. The weight factor provides the momentum required for particles to move around in $N$-dimensional space. The acceleration constants indicate the weight of the stochastic acceleration terms that pull all particles towards the $Abest_j$ and $Bbest$ positions. This algorithm is applied iteratively until the optimal dispatch solutions converge. The best incremental cost determined using this method was then sent to the agents to accept or modify the output power of the generators, so as to minimize the effect of noise on the fluctuations of the system parameters.
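To make the update rule in Equation (4a) concrete, the minimal Python sketch below applies it to the generation cost of Equation (1b), with a quadratic penalty enforcing the power balance of Equation (2). The unit data anticipate Table 1 in Section 3; the assumed demand, penalty weight, swarm size, iteration count, and PSO constants (ξ, m1, m2) are illustrative choices of ours, not values reported in the paper.

```python
import random

# Generator data anticipating Table 1: (Cmin, Cmax, x, y, z)
GENS = [(4, 18, 0.070, 2.15, 56), (8, 40, 0.080, 1.15, 50),
        (5, 25, 0.070, 3.30, 41), (5, 40, 0.056, 3.40, 36)]
DEMAND = 80.0        # assumed CD + Closs in kW (illustrative)
PENALTY = 1_000.0    # assumed weight enforcing the power balance, Eq. (2)

def cost(c):
    """Equation (1b) plus a quadratic penalty for Equation (2)."""
    fuel = sum(x * ci**2 + y * ci + z for ci, (_, _, x, y, z) in zip(c, GENS))
    return fuel + PENALTY * (sum(c) - DEMAND) ** 2

def clip(c):
    """Keep every unit inside its [Cmin, Cmax] limits."""
    return [min(max(ci, lo), hi) for ci, (lo, hi, *_) in zip(c, GENS)]

xi, m1, m2 = 0.7, 1.5, 1.5    # inertia and acceleration constants (assumed)
swarm = [clip([random.uniform(lo, hi) for lo, hi, *_ in GENS]) for _ in range(30)]
vel = [[0.0] * len(GENS) for _ in swarm]
pbest = [p[:] for p in swarm]
gbest = min(pbest, key=cost)

for _ in range(200):           # Equation (4a), applied iteratively
    for j, pos in enumerate(swarm):
        for d in range(len(GENS)):
            r1, r2 = random.random(), random.random()
            vel[j][d] = (xi * vel[j][d]
                         + m1 * r1 * (pbest[j][d] - pos[d])
                         + m2 * r2 * (gbest[d] - pos[d]))
            pos[d] += vel[j][d]
        swarm[j] = clip(pos)
        if cost(swarm[j]) < cost(pbest[j]):
            pbest[j] = swarm[j][:]
    gbest = min(pbest, key=cost)

print("dispatch (kW):", [round(c, 2) for c in gbest],
      "cost:", round(cost(gbest), 2))
```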
**3. Microgrid Structure**

Four generator units constituted the microgrid that was the subject of this paper's investigation. It featured two coal-based generator units, a wind generator, and a solar/photovoltaic (PV) generator, and was connected to the grid. The quadratic Equation (1a) expresses the cost function of the units. Closs was estimated to make up 7% of the total load. Table 1 below provides the cost-coefficient values, the minimum power generation limits, and the maximum power generation limits for all the units [36].

**Table 1. List of parameters for generators [36].**

| Unit | Cmin (kW) | Cmax (kW) | x | y | z |
|------|-----------|-----------|-------|------|----|
| 1 | 4 | 18 | 0.070 | 2.15 | 56 |
| 2 | 8 | 40 | 0.080 | 1.15 | 50 |
| 3 | 5 | 25 | 0.070 | 3.3 | 41 |
| 4 | 5 | 40 | 0.056 | 3.4 | 36 |
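As a quick numerical illustration, the equal incremental cost condition of Equation (4) can be solved for the units of Table 1 by bisecting on λ: each unit produces C_i = (λ − y_i)/(2x_i), clipped to its limits, and λ is adjusted until total generation meets the demand. The demand value and the λ bracket below are illustrative assumptions, not values from the paper.

```python
# Equal incremental cost dispatch (Equation (4)) for the Table 1 units,
# solved by bisection on the incremental cost lambda.
GENS = [(4, 18, 0.070, 2.15), (8, 40, 0.080, 1.15),
        (5, 25, 0.070, 3.30), (5, 40, 0.056, 3.40)]  # (Cmin, Cmax, x, y)
DEMAND = 80.0                 # assumed CD + Closs in kW (illustrative)

def outputs(lam):
    # C_i = (lambda - y_i) / (2 x_i), clipped to the unit's limits
    return [min(max((lam - y) / (2 * x), lo), hi) for lo, hi, x, y in GENS]

lo, hi = 0.0, 50.0            # assumed bracket for lambda ($/kWh)
for _ in range(60):           # bisection: total generation vs. demand
    lam = (lo + hi) / 2
    if sum(outputs(lam)) < DEMAND:
        lo = lam
    else:
        hi = lam

c = outputs(lam)
print(f"lambda* = {lam:.3f} $/kWh, dispatch = {[round(ci, 2) for ci in c]}")
```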
**4. Economic Dispatch with a Consensus-Based Approach for Noiseless Communication [23]**

The strategy described in [23] is explained in this section. A microgrid's communication network was developed. Each generator unit had a corresponding agent that gathered data from its unit. All the agents in this communication system could share data and communicate with each other [34]. We had four agents in total, one for each of the four generation units of our grid-connected microgrid. The information received, collected, and processed by an agent was also exchanged with the other agents; this exchange helped establish the current status of each unit. To reduce the overall cost of the microgrid system, the information received from the agent(s) was used to modify the output power. Noise from the components, the surroundings, and electric/magnetic interference was taken into account in this analysis. The method covers both the noise accumulated in the communication between the units and that arising in the communication between the units and the agents; it was modeled as Gaussian noise [16]. The communication links between the agents are denoted c12, c21, c23, c32, c34, c43, c13, c31, and so on. Each agent determined the incremental cost of its unit before exchanging it with the others.

Based on the data, the set point of the output power was determined and supplied to the appropriate generation units. To address the economic dispatch issue, the units modify their power generation so as to have equal incremental costs, which reduces the cost of the microgrid. According to Refs. [23,36]:

$$Z[k+1] = Z[k] + \mu[k] \left( P\, Z[k] + W\, N[k] \right), \qquad P = -H' N H, \qquad W = H' N, \qquad H = H_2 - H_1 \tag{5}$$

where $Z[k]$ = incremental cost of a unit at the $k$-th iteration, $Z[k+1]$ = incremental cost of a unit at the $(k+1)$-th iteration, $\mu[k]$ = recursive step size, $N$ = $r \times r$ diagonal matrix with the link control gains as its diagonal elements, $H_1$ and $H_2$ = $r \times n$ matrices whose rows are elementary vectors, and $N[k]$ = communication link noise.

$$H_1 = \begin{bmatrix} 0&1&0&0\\ 1&0&0&0\\ 0&0&1&0\\ 0&1&0&0\\ 0&0&0&1\\ 0&0&1&0 \end{bmatrix}, \qquad H_2 = \begin{bmatrix} 1&0&0&0\\ 0&1&0&0\\ 0&1&0&0\\ 0&0&1&0\\ 0&0&1&0\\ 0&0&0&1 \end{bmatrix}, \qquad H = H_2 - H_1 = \begin{bmatrix} 1&-1&0&0\\ -1&1&0&0\\ 0&1&-1&0\\ 0&-1&1&0\\ 0&0&1&-1\\ 0&0&-1&1 \end{bmatrix}$$

N (small noise) = diag[0.2 0.2 0.2 0.2 0.2 0.2], N (medium noise) = diag[0.5 0.5 0.5 0.5 0.5 0.5], N (large noise) = diag[0.8 0.8 0.8 0.8 0.8 0.8]. Similarly, P and W can be determined from (5).

To lessen the effects of noise, we averaged the incremental costs of the units. This produced a microgrid that was more durable, stable, and free of (or, with less) communication noise [23]:

$$Z_{avg}[k+1] = \frac{1}{k+1} \sum_{j=1}^{k+1} Z[j] = \frac{1}{k+1} \sum_{j=1}^{k} Z[j] + \frac{1}{k+1} Z[k+1] = Z_{avg}[k] - \frac{1}{k+1} Z_{avg}[k] + \frac{1}{k+1} Z[k+1] \tag{6}$$

The noiseless economic dispatch with the consensus-based strategy, using (5) and (6), is [23,36]:

$$Z[k+1] = Z[k] + \mu[k] \left( P\, Z[k] + W\, N[k] \right), \qquad Z_{avg}[k+1] = Z_{avg}[k] + \frac{1}{k+1} \left( Z[k+1] - Z_{avg}[k] \right) \tag{7}$$

where $Z_{avg}[k+1]$ are the desired set points for the unit incremental costs. This method is iterative: an estimate is created using the step size and is then averaged in the subsequent stages to limit the effect of noise. For each step size, the consensus problem was iteratively solved. The flowchart for the consensus-based economic dispatch algorithm is shown in Figure 1.

**Figure 1.** Algorithm flowchart.
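A minimal numerical sketch of Equations (5)-(7) follows: the incremental-cost vector Z is driven toward consensus through the noisy links, while the running average Z_avg filters out the injected Gaussian link noise. The matrices follow the text; the initial incremental costs, the constant step size (the paper uses a recursive step size µ[k]), and the noise level are illustrative assumptions.

```python
import numpy as np

# Link matrices as given in the text: H = H2 - H1, P = -H'NH, W = H'N.
H1 = np.array([[0,1,0,0],[1,0,0,0],[0,0,1,0],
               [0,1,0,0],[0,0,0,1],[0,0,1,0]], dtype=float)
H2 = np.array([[1,0,0,0],[0,1,0,0],[0,1,0,0],
               [0,0,1,0],[0,0,1,0],[0,0,0,1]], dtype=float)
H = H2 - H1
N = np.diag([0.5] * 6)                   # medium-noise link gains from the text

P = -H.T @ N @ H                         # consensus dynamics matrix
W = H.T @ N                              # noise injection matrix

rng = np.random.default_rng(0)
Z = np.array([8.0, 3.0, 6.0, 5.0])       # assumed initial incremental costs
Z_avg = Z.copy()

for k in range(1, 501):
    mu = 0.2                             # assumed constant step size
    noise = rng.normal(0.0, 0.5, size=6)         # Gaussian link noise
    Z = Z + mu * (P @ Z + W @ noise)             # Equation (5)
    Z_avg = Z_avg + (Z - Z_avg) / (k + 1)        # Equations (6) and (7)

print("smoothed incremental costs:", np.round(Z_avg, 3))
```

With these assumptions, the smoothed values settle close to the average of the initial incremental costs, which is the consensus point the noiseless dynamics would reach.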
**5. Virtual Synchronous Generator (VSG)**

The VSG control system [37] is a comprehensive system that combines several modules to enable an efficient and effective electricity management. It is based on the VSG strategy, which is responsible for simulating the system's performance and determining its optimal power output, and contains the Frequency Regulation Module, which adjusts the frequency of the output power to match that of the grid; the Voltage Regulation Module, which regulates the voltage of the output power; the grid-connected mode of the Control Module, which ensures the output power is synchronized with the grid; the SPWM Modulation Module, which adjusts the output current amplitude; and the Sampling Calculation Module, which calculates the output power by sampling the input signal. All of these modules work together to provide a reliable and secure electricity management system, as shown in Figure 2 below.

When exposed to disturbances and load variations, power electronic inverters have poor system stability [37]. The traditional SG rotation has a significant output inductance and moment of inertia. Therefore, the microgrid's power supply can be compared to the prime mover by reproducing the exterior characteristics of the microgrid into an SG. The microgrid inverter's inverter and filter modules provide the electric energy produced by the distributed sources to the load, while the energy storage system stores the residual electric energy.
**Figure 2.** VSG control strategy block diagram.

Ref. [38] presented the SG second-order equation modeling, which includes the following equations.

Stator voltage equation:

$$\vec{U}_{refabc} = \vec{E} - \Delta \vec{U} \tag{8}$$

where $\vec{U}_{refabc}$ = three-phase reference voltage, $\vec{E}$ = electromotive force, and $\Delta \vec{U}$ = voltage drop caused by the virtual synchronous impedance.

The output current $I_0$ of the inverter is equal to the synchronous generator stator current; $r_a$ and $X_d$, respectively, are the armature resistance and the synchronous reactance. To obtain $\Delta U$, a vector multiplication is used for $(r_a + jX_d)$ and $I_0$. The module in Figure 3 provides a corresponding control signal in line with $U_{refabc}$ after $E$ is corrected for deviation.

**Figure 3.** Stator voltage model.
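Per phase, the virtual-impedance drop in Equation (8) is a single complex (phasor) product. A minimal sketch follows; ra and the inductance behind Xd match Table 2 in Section 6, while the EMF phasor, current phasor, and 50 Hz frequency are assumed here for illustration.

```python
import cmath
import math

# Equation (8): Urefabc = E - dU, with dU = (ra + j*Xd) * I0 per phase.
ra, Ld, f = 0.05, 0.05, 50.0              # ohm, henry (Table 2), assumed Hz
Xd = 2 * math.pi * f * Ld                 # synchronous reactance in ohm

E = cmath.rect(311.0, 0.0)                # assumed EMF phasor (peak volts)
I0 = cmath.rect(10.0, math.radians(-20))  # assumed inverter output current

dU = (ra + 1j * Xd) * I0                  # virtual synchronous impedance drop
Uref = E - dU                             # per-phase reference voltage phasor

print(f"|dU| = {abs(dU):.1f} V, |Uref| = {abs(Uref):.1f} V "
      f"at {math.degrees(cmath.phase(Uref)):.1f} deg")
```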
The rotor motion model promotes system stability, as shown in Figure 4. When Pm and Pe do not match, the model compensates for the mismatch through J and D; dθ is the correction angle. For the rotor motion model [39]:

$$\Delta\omega = \frac{1}{J} \int \left( \frac{x_m - x_e}{\omega} - D\, \Delta\omega \right) dt \tag{9}$$

$$\omega = \Delta\omega + \omega_r \tag{10}$$

where $\Delta\omega$ = angular velocity difference, $x_m$ = mechanical power, $x_e$ = electromagnetic power, $J$ = moment of inertia, $D$ = damping coefficient, $\omega$ = angular velocity, and $\omega_r$ = rated angular velocity.

**Figure 4.** Rotor motion model.
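A minimal forward-Euler integration of the rotor model in Equations (9) and (10) illustrates how J and D shape the frequency response. J matches Table 2 in Section 6; the damping value, the power mismatch, and the time grid are assumptions of ours.

```python
import math

# Forward-Euler integration of Equations (9) and (10):
#   d(dw)/dt = ((xm - xe)/w - D*dw) / J,    w = dw + wr
J, D = 0.15, 5.0                 # kg*m^2 (Table 2) and assumed damping
wr = 2 * math.pi * 50.0          # rated angular velocity (50 Hz grid assumed)
xm, xe = 10_000.0, 8_000.0       # assumed mechanical/electromagnetic power (W)

dw, dt = 0.0, 1e-3               # initial speed deviation, assumed time step
for _ in range(2000):            # simulate 2 s
    w = dw + wr                              # Equation (10)
    dw += dt * ((xm - xe) / w - D * dw) / J  # Equation (9)

print(f"steady-state frequency offset: {dw / (2 * math.pi):.3f} Hz")
```

With these values the deviation settles where the accelerating term balances the damping term, i.e., at roughly (xm - xe)/(wr * D) rad/s.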
The frequency module in Figure 5 includes the grid-connected sinusoidal wave SS, the system frequency fV, the reference active power Pref, the reference frequency fref, and the grid-side frequency fg. The frequency regulation module chooses its reference value based on the fg range once the grid-connected signal SS has been sent by Judger2. The reference value is chosen as fg if it falls within the typical range and as fref if it does not. fref is used as the reference value while the system is in islanded mode.

**Figure 5.** Frequency regulation module.

The frequency deviation Δf is provided to Judger1. Depending on the interval in which the frequency difference is situated, Judger1 passes it on to the regulator in the next stage. The secondary frequency regulation is simulated by PI1, and the frequency module regulates the primary frequency per the coefficient kp. It also regulates and switches to secondary frequency regulation, if necessary. The Synchronous Generator thus maintains system frequency stability, both primary and secondary.

Qref and Q0 are the inputs to the virtual voltage regulation module. Their difference is multiplied by the voltage-reactive coefficient kU to obtain the electromotive force for reactive power adjustment, ΔE1. To determine ΔE2, which is the terminal voltage electromotive force, the differential value between the effective capacitor voltage Uc in the filter module and the reference voltage Uref is translated into amplitude. When the synchronous generator is operating in no-load mode, Eref is the reference electromotive force, whereas dE is the corrected electromotive force when in grid-connected mode, as shown in Figure 6. The U-Q relationship is as follows:

$$U - U_{SGref} = k_{SGU} \left( Q_{SGref} - Q \right) \tag{11}$$

where $k_{SGU}$ = SG voltage-reactive coefficient, $U_{SGref}$ = reference value of the voltage, and $Q_{SGref}$ = reference value of the reactive power.

**Figure 6.** Voltage regulation module, where Qref = reference reactive power and Q0 = control output reactive power.
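The reference selection performed by the frequency regulation module, together with the U-Q droop of Equation (11), reduces to a few lines of logic. In the sketch below, the tolerance band, droop gain, and measured values are illustrative assumptions, not parameters taken from the paper.

```python
# Reference-frequency selection (Figure 5) and the U-Q droop of Equation (11).
F_REF, F_BAND = 50.0, 0.5         # Hz; assumed acceptable grid-frequency band

def pick_reference(f_grid, grid_connected):
    """Choose f_g when grid-connected and within band, else f_ref."""
    if grid_connected and abs(f_grid - F_REF) <= F_BAND:
        return f_grid
    return F_REF                   # islanded mode or out-of-range grid frequency

def voltage_setpoint(U_sgref, k_sgu, Q_sgref, Q):
    """Equation (11): U = U_SGref + k_SGU * (Q_SGref - Q)."""
    return U_sgref + k_sgu * (Q_sgref - Q)

print(pick_reference(49.8, grid_connected=True))   # -> 49.8 (within band)
print(pick_reference(48.0, grid_connected=True))   # -> 50.0 (falls back)
print(voltage_setpoint(U_sgref=230.0, k_sgu=0.8, Q_sgref=8.0, Q=6.5))  # 231.2
```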
Figure 7 depicts the control module in grid-connected mode. The grid-connected module completes pre-synchronization when the sinusoidal wave SS switches from the '0' position to the '1' position. The PI3 regulator and Judger4 receive the difference between φg and φV (the voltage phase angles). The rotor motion model receives the value provided by PI3 as dθ. Judger4 chooses the next input value based on the interval in which the difference is found. The PI2 regulator and Judger3 receive the difference between |Ug| and Uamp. The virtual voltage regulator module receives the value from PI2 as dE. Similar to Judger4, Judger3 chooses the following input value based on the interval in which the difference is placed. The frequency is handled in the same way, determined by the difference between the two sides. When all three Judgers are set to 1 and the switch signal changes from the '0' position to the '1' position, the pre-synchronization phase is said to be complete.

**Figure 7.** Grid control module.
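The pre-synchronization described above amounts to three simultaneous band checks, one per Judger. A minimal sketch follows, with all thresholds assumed for illustration:

```python
# Pre-synchronization logic of the grid control module (Figure 7): the switch
# closes only when all three Judgers agree that the phase, amplitude, and
# frequency differences are inside their bands. All thresholds are assumed.
PHASE_TOL = 0.05      # rad, Judger4 band for phi_g - phi_V
AMP_TOL = 2.0         # V,   Judger3 band for |Ug| - U_amp
FREQ_TOL = 0.1        # Hz,  band for the frequency difference

def presync_complete(dphi, damp, dfreq):
    judger4 = abs(dphi) <= PHASE_TOL
    judger3 = abs(damp) <= AMP_TOL
    judger_f = abs(dfreq) <= FREQ_TOL
    return judger4 and judger3 and judger_f   # switch signal: '0' -> '1'

print(presync_complete(0.02, 1.1, 0.04))   # True: switch may close
print(presync_complete(0.20, 1.1, 0.04))   # False: keep adjusting via PI3
```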
The system specifications for the LCL filter and the three-phase bridge inverter are provided in Table 2.

**Table 2. List of the components.**

| Components | Values |
|---|---|
| L1 | 6 mH |
| L2 | 1.5 mH |
| C | 6 µF |
| J | 0.15 kg·m² |
| Kp, kU | 800 kW/Hz, 0.8 Hz/kVar |
| PWM freq | 25 kHz |
| P at constant load | 10 kW |
| Q at constant load | 8 kVar |
| ra | 0.05 ohm |
| Xd | 0.05 H |
| P variable | 5 kW |
| Q variable | 3 kVar |

**6. Results and Discussion**

Four different scenarios were used to examine the grid-connected microgrid. The microgrid was initially evaluated when there was no noise in the system. The Lagrange method described in the preceding section and the PSO algorithm were evaluated to see how well they operated in the absence of noise. In the second scenario, the system was subjected to noise with a variance of 0.2, and the performance was tracked. The noise variance was raised to 0.5 in the third test, and it was set to 0.8 in the final condition examined. MATLAB was used to examine how well the microgrid performed under various noise circumstances with and without the VSG control approach. Figure 8 shows the network diagram of the algorithm used. The incremental costs from each generator were shared with the agents. These agents shared data and decided whether the incremental cost was optimal. If this was not the case, the information was passed to the generator to adjust its output power until the optimal incremental cost criterion was met. Once an optimal economic dispatch solution was found, the total output power was sent to the VSG, which was then used to meet the load demand or sent to the grid to fulfill any power deficit. The use of the consensus-based algorithm and of the VSG strategy helped reduce the noise effects and stabilize the microgrid.
**Figure 8.** Network figure.

With the introduction of various noise levels, the power output of the four generating units was observed, and we tried to model the ideal dispatch schedule in all instances. To demonstrate the system's stability and the incremental cost for various noise levels, a comparison was evaluated. Figures 9 and 10 show the fluctuating power output of the four generating units during a 60 s period, without and with the VSG, respectively, using the Lagrange formulation. It took about 15 s without the VSG to stabilize the power output of all generators, whereas with the VSG it took about 12 s. For low (0.2) and medium (0.5) noise variances, the system required about 20 s and 25 s, respectively, to establish a consistent power output; Figures 11 and 12 show this observation. For a 0.8 noise level, shown in Figure 13, the microgrid required about 45 s to reach a stable power output. It was observed that the economic dispatch solution was less stable and less reliable at higher noise levels without the VSG strategy. It is clear from the graphs that the system required a few seconds to reach the desired constant value. The system required less time to reach the intended optimal power value when connected with the VSG, as shown in Figure 14.
For low, medium, and high noise variances, the graph shows a 5-10 s improvement for each of the four units when the Lagrange method was used with the VSG control strategy.

**Figure 9.** Generator output power in kW without the VSG in the absence of noise.

**Figure 10.** Generator output power in kW with the VSG in the absence of noise.

**Figure 11.** Generator output power in kW with a 0.2 noise variance and without the VSG.

**Figure 12.** Generator output power in kW with a 0.5 noise variance and without the VSG.

**Figure 13.** Generator output power in kW with a 0.8 noise variance and without the VSG.
Figure 15 compares the output power of all generating units for a 0.8 noise level using the Lagrange method and the PSO algorithm with the VSG control strategy. The resulting graph shows that the PSO algorithm performed better than the Lagrange method: with the PSO algorithm, convergence occurred faster. For Unit 1, the PSO achieved stability 3 s earlier, at the 27 s mark. For Unit 2, the PSO algorithm performed better by 10 s, as indicated by the red legend. For Units 3 and 4, the PSO algorithm performed slightly better than the Lagrange method. The processing time for the PSO algorithm was 15.648 s, whereas the Lagrange method required 22.343 s. Overall, it can be concluded that the PSO algorithm solved the economic dispatch problem much more quickly and efficiently than the Lagrange method.

[Plot omitted: output power traces (kW) of Units 1–4 under the Lagrange method and the PSO algorithm over 0–60 s.]

**Figure 15. Comparison of the generator units' output power in kW with a 0.8 noise level using the Lagrange and the PSO algorithm with the VSG.**
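For reference, the sketch below shows the basic structure of a PSO solver for a four-unit economic dispatch with quadratic fuel costs and a power-balance penalty. All cost coefficients, unit limits, the 80 kW demand, and the swarm parameters are illustrative placeholders, not the test-system data used in this paper.

```python
# Minimal PSO for economic dispatch: minimize the sum of quadratic fuel costs
# subject to a total-demand constraint handled via a penalty term.
import random

a = [0.012, 0.010, 0.014, 0.008]       # $/kW^2 (made-up coefficients)
b = [5.1, 5.4, 4.9, 5.6]               # $/kW
pmin, pmax, demand = 5.0, 30.0, 80.0   # unit limits and total load (kW)

def cost(P):
    fuel = sum(ai * p * p + bi * p for ai, bi, p in zip(a, b, P))
    return fuel + 1e3 * abs(sum(P) - demand)   # penalize power imbalance

rnd = random.Random(3)
swarm = [[rnd.uniform(pmin, pmax) for _ in range(4)] for _ in range(30)]
vel = [[0.0] * 4 for _ in range(30)]
pbest = [x[:] for x in swarm]
gbest = min(swarm, key=cost)[:]

for _ in range(300):
    for i, x in enumerate(swarm):
        for d in range(4):
            vel[i][d] = (0.7 * vel[i][d]
                         + 1.5 * rnd.random() * (pbest[i][d] - x[d])
                         + 1.5 * rnd.random() * (gbest[d] - x[d]))
            x[d] = min(max(x[d] + vel[i][d], pmin), pmax)
        if cost(x) < cost(pbest[i]):
            pbest[i] = x[:]
            if cost(x) < cost(gbest):
                gbest = x[:]

print("dispatch (kW):", [round(p, 2) for p in gbest],
      "cost ($/h):", round(cost(gbest), 2))
```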
The consensus-based approach aided in setting the incremental cost of each generator unit more quickly when there was noise. However, it is observed in Figures 9–15 that the consensus-based approach required more time as the noise variance increased. The graphs in Figures 16–19 show that the average incremental cost ($/kWh) was approximately 5.91.

In Figures 16–19, the incremental costs for all generator units were compared under various noise situations using the Lagrange method. As can be observed from the graphs, it was difficult to stabilize the microgrid when it was connected to the grid, due to a larger noise variance (pink legend). For low to medium noise levels, it functioned well. Under the no-noise condition, the optimal incremental cost was reached in 45 s, as shown in Figure 16. In Figure 17, we can see that it took about 50 s to reach the optimal incremental cost for a 0.2 noise level. In Figure 18, it took 55 s to reach the optimal value for a 0.5 noise level, whereas it took more than 60 s to reach the optimal value for a 0.8 noise level, as shown in Figure 19.

[Plots omitted: incremental cost traces ($/kWh) of Units 1–4.]

**Figure 16. Incremental Cost (IC) of the generating units compared in the absence of noise with the VSG, using the Lagrange method.**

**Figure 17. Incremental Cost (IC) of the generating units compared in the presence of a 0.2 noise variance with the VSG, using the Lagrange method.**

**Figure 18. Incremental Cost (IC) of the generating units compared in the presence of a 0.5 noise variance with the VSG, using the Lagrange method.**

**Figure 19. Incremental Cost (IC) of the generating units compared in the presence of a 0.8 noise variance with the VSG, using the Lagrange method.**

In Figures 20 and 21, it can be seen that with the VSG strategy, the generator units in the presence of higher noise levels stabilized more quickly. On average, 0.45 s was required for the system to stabilize with a high noise level of 0.8. In the absence of the VSG, it took more than 0.9 s for the frequency to stabilize, as shown in Figure 20.

[Plots omitted: frequency traces (Hz) of Units 1–4 over 0–0.7 s.]

**Figure 20. Comparison of the generator units' frequency changes with a 0.8 noise level without the VSG, using the Lagrange method.**

**Figure 21. Comparison of the generator units in the presence of a 0.8 noise level with the VSG in terms of frequency change, using the Lagrange method.**
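The faster frequency recovery with the VSG can be understood from the swing-equation emulation at the heart of the VSG concept. The following sketch integrates a simplified per-unit frequency-deviation model, $2H\,\Delta\dot\omega = P_{ref} - P_e - D\,\Delta\omega$, after a load step; the inertia and damping values are illustrative only and are not taken from this paper's model.

```python
# First-order per-unit frequency-deviation model of a VSG after a load step:
# larger virtual inertia H slows the excursion, larger damping D limits it.
f0, dt = 60.0, 1e-3          # nominal frequency (Hz) and time step (s)
Pref, Pe = 1.0, 1.2          # per-unit set point vs. electrical load after step

for H, D in [(2.0, 8.0), (4.0, 12.0)]:     # two illustrative VSG tunings
    dw, t, nadir = 0.0, 0.0, f0
    while t < 1.0:
        dw += dt * (Pref - Pe - D * dw) / (2.0 * H)   # swing-equation update
        t += dt
        nadir = min(nadir, f0 * (1.0 + dw))
    print(f"H={H}, D={D}: frequency nadir ~ {nadir:.3f} Hz "
          f"(steady-state offset {(Pref - Pe) / D:+.4f} pu)")
```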
In Figures 22 and 23, it is observed that with a load change, the system was more stable and reached its maximum limit faster with the VSG strategy. The system oscillated more and had more THD without the VSG, as observed in Figure 22. The microgrid stabilized in 0.35 s with a noise variance of 0.8 when both the Lagrange method and the VSG were in operation. This can be seen in Figure 23.

[Plots omitted: maximum power traces (kW) of the generating units over 0–0.7 s.]

**Figure 22. Comparison of the generating units' maximum power with a 0.8 noise variance and a load change without the VSG, using the Lagrange method.**

**Figure 23. Comparison of the generating units' maximum power with a 0.8 noise variance and a load change with the VSG, using the Lagrange method.**

Table 3 compares the Lagrange method and the PSO algorithm performances when used with the VSG control strategy in relation to the incremental cost. For all noise variances, the PSO algorithm performed better and provided stability more quickly. In the absence of noise, the PSO algorithm required 27.45 s to reach the optimal incremental cost, whereas the Lagrange method required about 38.21 s. With a 0.2 noise variance, the PSO algorithm required 10 s less than the Lagrange method to reach the optimal incremental cost. For a medium noise variance of 0.5, the PSO algorithm was faster by about 13 s, and for a high noise variance of 0.8, it was faster by about 39 s.
It is observed in Table 3 that the PSO algorithm performed better for all levels of noise variance and required much less time to stabilize the system than the Lagrange method.

Table 4 compares the Lagrange method and the PSO algorithm performances when used with the VSG control strategy in relation to frequency and maximum power. For all noise variances, the PSO algorithm performed better and provided stability more quickly. For the 0.8 noise condition, the PSO algorithm required 0.2 s to reach frequency stability, whereas the Lagrange method required about 0.45 s. Similarly, for the maximum power, the PSO algorithm required half the time to reach stability, as seen in Table 4.

**Table 3. Comparison of the time to reach the average optimal incremental cost by the two examined methods.**

| Noise Variance | Lagrange Method | PSO Algorithm |
|---|---|---|
| No noise | 38.21 s | 27.45 s |
| 0.2 variance | 48 s | 38.20 s |
| 0.5 variance | 52.57 s | 40.19 s |
| 0.8 variance | 90 s | 51.85 s |

**Table 4. Comparison of the time to reach the optimal levels of the shown parameters for a 0.8 noise variance.**

| Method/Algorithm | Frequency (Hz) | Max. Power (kW) |
|---|---|---|
| Lagrange | 0.45 s | 0.30 s |
| PSO | 0.20 s | 0.15 s |

**7. Conclusions**

For islanded microgrids, the suggested consensus-based approach for economic dispatch performs well [23]. This algorithm was utilized in this study to examine how the microgrid operated in grid-connected mode. The VSG strategy was also introduced to enhance the system's stability. The microgrid performance of the Lagrange method and the PSO algorithm was compared, with and without the use of the VSG strategy. It is concluded that with the inclusion of the VSG control strategy, the system could reach stabilization much faster in the presence of different levels of noise and load changes, as described in the Results section. This was observed for both the Lagrange method and the PSO algorithm. The consensus-based economic dispatch algorithm worked efficiently in conjunction with the VSG control strategy. It can also be concluded from the results obtained that the PSO algorithm performed better in stabilizing the frequency, output power, and load changes in the microgrid. The optimal incremental cost was also achieved faster with the PSO algorithm. The results clearly showed that a consensus-based economic dispatch solution with the VSG strategy yielded a better stabilization in microgrids in the presence of low, medium, and high noise variances. Future research should be carried out to assess the performance of different algorithms on the noise effect in both grid-connected and islanded microgrids. Reactive power compensation can also be included in future studies for a better overall performance of microgrids.

**Author Contributions: Conceptualization, S.S.; Methodology, D.W.G.; Software, S.S.; Validation, S.S. and D.W.G.; Investigation, S.S. and D.W.G.; Resources, D.W.G.; Writing—original draft, S.S.; Writing—review & editing, D.W.G. All authors have read and agreed to the published version of the manuscript.**
**Funding: This research received no external funding.**

**Data Availability Statement: Data is unavailable due to privacy or ethical restrictions.**

**Acknowledgments: The noiseless consensus-based economic dispatch algorithm was created in [23], and its performance for islanded microgrids was examined. To the best of the authors' knowledge, no other researchers have examined the impact of this method on grid-connected microgrids.**

**Conflicts of Interest: The authors declare no conflict of interest.**

**References**

1. Liu, D.; Cai, Y. Taguchi method for solving the economic dispatch problem with nonsmooth cost functions. IEEE Trans. Power Syst. 2005, 20, 2006–2014. [CrossRef](https://doi.org/10.1109/TPWRS.2005.857939)
2. Park, J.B.; Jeong, Y.W.; Shin, J.R.; Lee, K. An improved particle swarm optimization for nonconvex economic dispatch problems. IEEE Trans. Power Syst. 2010, 25, 156–166. [CrossRef](https://doi.org/10.1109/TPWRS.2009.2030293)
3. Guo, T.; Henwood, M.; Van Ooijen, M. An algorithm for combined heat and power economic dispatch. IEEE Trans. Power Syst. 1996, 11, 1778–1784. [CrossRef](https://doi.org/10.1109/59.544642)
4. Fan, J.Y.; Zhang, L. Real-time economic dispatch with line flow and emission constraints using quadratic programming. IEEE Trans. Power Syst. 1998, 13, 320–325. [CrossRef](https://doi.org/10.1109/59.667345)
5. Olfati-Saber, R.; Murray, R.M. Consensus problems in networks of agents with switching topology and time-delays. IEEE Trans. Autom. Control 2004, 49, 1520–1533. [CrossRef](https://doi.org/10.1109/TAC.2004.834113)
6. Ren, W.; Beard, R.W.; Atkins, E.M. Information consensus in multivehicle cooperative control. IEEE Control Syst. Mag. 2007, 27, 71–82.
7. Jadbabaie, A.; Lin, J.; Morse, A.S. Coordination of groups of mobile autonomous agents using nearest neighbor rules. IEEE Trans. Autom. Control 2005, 50, 169–182.
8. Moreau, L. Stability of multiagent systems with time-dependent communication links. IEEE Trans. Autom. Control 2005, 50, 169–182. [CrossRef](https://doi.org/10.1109/TAC.2004.841888)
9. Ma, Y.; Zhang, W.; Liu, W.; Yang, Q. Fully distributed social welfare optimization with line flow constraint consideration. IEEE Trans. Ind. Informat. 2015, 11, 1532–1540. [CrossRef](https://doi.org/10.1109/TII.2015.2475703)
10. Rahbari-Asr, N.; Ojha, U.; Zhang, Z.; Chow, M.-Y. Incremental welfare consensus algorithm for cooperative distributed generation/demand response in smart grid. IEEE Trans. Smart Grid 2014, 5, 2836–2845. [CrossRef](https://doi.org/10.1109/TSG.2014.2346511)
11. Xu, Y.; Li, Z. Distributed optimal resource management based on the consensus algorithm in a microgrid. IEEE Trans. Ind. Electron. 2015, 62, 2584–2592. [CrossRef](https://doi.org/10.1109/TIE.2014.2356171)
12. Xu, Y.; Yang, Z.; Gu, W.; Li, M.; Deng, Z. Robust real-time distributed optimal control based energy management in a smart grid. IEEE Trans. Smart Grid 2017, 8, 1568–1579. [CrossRef](https://doi.org/10.1109/TSG.2015.2491923)
13. Zheng, W.; Wu, W.; Zhang, B.; Lin, C. Distributed optimal residential demand response considering operational constraints of unbalanced distribution networks. IET Gener. Transm. Distrib. 2018, 12, 1970–1979. [CrossRef](https://doi.org/10.1049/iet-gtd.2017.1366)
14. Guo, F.; Wen, C.; Li, Z. Distributed optimal energy scheduling based on a novel PD pricing strategy in smart grid. IET Gener. Transm. Distrib. 2017, 11, 2075–2084. [CrossRef](https://doi.org/10.1049/iet-gtd.2016.1722)
15. Rahbari-Asr, N.; Zhang, Y.; Chow, M.-Y. Consensus-based distributed scheduling for cooperative operation of distributed energy resources and storage devices in smart grids. IET Gener. Transm. Distrib. 2016, 10, 1268–1277. [CrossRef](https://doi.org/10.1049/iet-gtd.2015.0159)
16. Jordehi, A.R.; Javadi, M.S.; Catalao, J.P.S. Dynamic Economic Load Dispatch in Isolated Microgrids with Particle Swarm Optimisation considering Demand Response. In Proceedings of the 55th International Universities Power Engineering Conference (UPEC), Turin, Italy, 1–4 September 2020; pp. 1–4.
17. Imtiaz, B.; Cui, Y.; Zafar, I. Economic Dispatch of Microgrid Incorporating Demand Response Using Dragonfly Algorithm. In Proceedings of the 2021 IEEE International Conference on Advances in Electrical Engineering and Computer Applications (AEECA), Dalian, China, 27–28 August 2021; pp. 59–68.
18. Singh, J.; Poddar, S.; Ramalingam, S.P.; Shanmugam, P.K.; Kalam, A. Investigation on Dynamic Economic Dispatch Problem of Microgrid Using Cuckoo Search Algorithm—Grid Connected and Island Mode. In Proceedings of the 2019 IEEE Region 10 Conference (TENCON), Kochi, India, 17–20 October 2019; pp. 1886–1891.
19. Luo, N.; Liu, J.; Zhang, P. Optimal Dispatching of Active Distribution Network based on Improved Genetic Algorithm. In Proceedings of the 2022 44th International Conference on Frontiers Technology of Information and Computer (ICFTIC), Qingdao, China, 2–4 December 2022; pp. 551–554.
20. Abhinav, S.; Schizas, I.D.; Lewis, F.L.; Davoudi, A. Distributed noise-resilient networked synchrony of active distribution systems. IEEE Trans. Smart Grid 2018, 9, 836–846. [CrossRef](https://doi.org/10.1109/TSG.2016.2569602)
21. Abhinav, S.; Schizas, I.D.; Ferrese, F.; Davoudi, A. Optimization based AC microgrid synchronization. IEEE Trans. Ind. Informat. 2017, 13, 2339–2349. [CrossRef](https://doi.org/10.1109/TII.2017.2702623)
22. Dehkordi, N.M.; Baghaee, H.R.; Sadati, N.; Guerrero, J.M. Distributed noise-resilient secondary voltage and frequency control for islanded microgrids. IEEE Trans. Smart Grid 2018, 10, 3780–3790. [CrossRef](https://doi.org/10.1109/TSG.2018.2834951)
23. Chen, F.; Chen, M.; Zhao, X.; Guerrero, J.M.; Wang, L.Y. Distributed noise-resilient economic dispatch strategy for islanded microgrids. IET Gener. Transm. Distrib. 2019, 13, 3029–3039. [CrossRef](https://doi.org/10.1049/iet-gtd.2018.5740)
24. Yazdanian, M.; Mehrizi-Sani, A. Distributed control techniques in microgrids. IEEE Trans. Smart Grid 2014, 5, 2901–2909. [CrossRef](https://doi.org/10.1109/TSG.2014.2337838)
25. Molzahn, D.K.; Dorfler, F.; Sandberg, H.; Low, S.H.; Chakrabarti, S.; Baldick, R.; Lavaei, J. A survey of distributed optimization and control algorithms for electric power systems. IEEE Trans. Smart Grid 2017, 8, 2941–2962. [CrossRef](https://doi.org/10.1109/TSG.2017.2720471)
26. Han, Y.; Zhang, K.; Hong, L.; Coelho, E.A.A.; Guerrero, J.M. MAS-based distributed coordinated control and optimization in microgrid and microgrid clusters: A comprehensive review. IEEE Trans. Power Electron. 2018, 33, 6488–6508. [CrossRef](https://doi.org/10.1109/TPEL.2017.2761438)
27. Xu, T.; Wu, W.; Sun, H.; Wang, L. Fully distributed multi-area dynamic economic dispatch method with second-order convergence for active distribution networks. IET Gener. Transm. Distrib. 2017, 11, 3955–3965. [CrossRef](https://doi.org/10.1049/iet-gtd.2016.1945)
28. Kouveliotis-Lysikatos, I.; Hatziargyriou, N. Fully distributed economic dispatch of distributed generators in active distribution networks considering losses. IET Gener. Transm. Distrib. 2017, 11, 627–636. [CrossRef](https://doi.org/10.1049/iet-gtd.2016.0616)
29. Zheng, W.; Wu, W.; Zhang, B.; Li, Z.; Liu, Y. Fully distributed multi-area economic dispatch method for active distribution networks. IET Gener. Transm. Distrib. 2015, 9, 1341–1351. [CrossRef](https://doi.org/10.1049/iet-gtd.2014.0904)
30. Tu, Y.; Su, J.H.; Du, Y.; Yang, X.Z.; Xu, H.D. Analysis of microgrid inverter paralleling system based on virtual oscillator. Electr. Power Autom. Equip. 2017, 37, 24–30.
31. Cheng, Q.M.; Gao, J.; Cheng, Y.M. An inverter control method suitable for islanding operation. Power Syst. Technol. 2018, 42, 203–209.
32. Xu, Y.Q.; Ma, H.J. Inverter parallel operation technology based on improved droop control. Power Syst. Prot. Control 2015, 43, 103–107.
33. Lü, Z.Y.; Wu, Z.J.; Dou, X.B. Adaptive discrete droop control of isolated DC microgrid based on discrete consistency. Proc. CSEE 2015, 35, 4397–4407.
34. Chen, W.; Li, T. Distributed Economic Dispatch for Energy Internet based on Multiagent Consensus Control. IEEE Trans. Autom. Control 2021, 66, 137–152. [CrossRef](https://doi.org/10.1109/TAC.2020.2979749)
35. Meng, J.; Shi, X.; Wang, Y.; Fu, C. A virtual synchronous generator control strategy for distributed generation. In Proceedings of the 2014 China International Conference on Electricity Distribution (CICED), Shenzhen, China, 23–26 September 2014; pp. 495–498.
36. Singh, S.; Gao, D.W. Noiseless Consensus based Algorithm for Economic Dispatch problem in Grid-connected Microgrids to enhance Stability among Distributed Generators. In Proceedings of the 2019 North American Power Symposium (NAPS), Wichita, KS, USA, 13–15 October 2019; pp. 1–5.
37. Li, H.; Gu, R.-Z. Research on Grid-connected Control and Simulation of Microgrid Inverter based on VSG. In Proceedings of the 2018 China International Conference on Electricity Distribution (CICED), Tianjin, China, 17–19 September 2018.
38. Mišković, M.; Mirošević, M.; Milković, M. Analysis of synchronous generator angular stability depending on the choice of the excitation system. Angew. Chem. 2009, 39, 4555. [CrossRef](https://doi.org/10.37798/2009584308)
39. Wang, K.; Qi, C.; Huang, X.; Li, G. Large disturbance stability evaluation of interconnected multi-inverter power grids with VSG model. J. Eng. 2017, 2017, 2483–2488. [CrossRef](https://doi.org/10.1049/joe.2017.0775)

**Disclaimer/Publisher's Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.**
{ "disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.3390/en16124670?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.3390/en16124670, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "CCBY", "status": "CLOSED", "url": "https://www.mdpi.com/1996-1073/16/12/4670/pdf?version=1686621916" }
2,023
[ "JournalArticle" ]
true
2023-06-12T00:00:00
[ { "paperId": "efad9615a2ae16f704f4f14e0522e5f6b0541c48", "title": "Distributed Noise-Resilient Secondary Voltage and Frequency Control for Islanded Microgrids" }, { "paperId": "829bf44d870635e6708918fd193fead00215afef", "title": "Distributed noise‐resilient economic dispatch strategy for islanded microgrids" }, { "paperId": "8c9ac526021d13fda27220c98e2894a27ba446a6", "title": "Distributed Economic Dispatch for Energy Internet Based on Multiagent Consensus Control" }, { "paperId": "05651c5dd4ead4996aab47c1e54341ff23fafdc7", "title": "MAS-Based Distributed Coordinated Control and Optimization in Microgrid and Microgrid Clusters: A Comprehensive Overview" }, { "paperId": "3282ffbca9a87b4a313992ad9fa99a7c3895d1b4", "title": "Distributed Noise-Resilient Networked Synchrony of Active Distribution Systems" }, { "paperId": "4d935d24433b237ee3ba44eea34ecc45522f7d5b", "title": "Distributed optimal residential demand response considering operational constraints of unbalanced distribution networks" }, { "paperId": "baf0d2448403e98c6bcd89ca9db5cb483eb1a9c7", "title": "Large Distribution Stability Evaluation of Interconnected Multi-Inverter Power Grids with Virtual Synchronous Generator Model" }, { "paperId": "6685d35225152d56a6d234e6f4e7159da89a3709", "title": "A Survey of Distributed Optimization and Control Algorithms for Electric Power Systems" }, { "paperId": "e6e567ab99fe573c72a8494055db287c88caf61c", "title": "Robust Real-Time Distributed Optimal Control Based Energy Management in a Smart Grid" }, { "paperId": "8862e5e5788f0e04b4655888ac3bb0d33bde3591", "title": "Fully distributed multi-area dynamic economic dispatch method with second-order convergence for active distribution networks" }, { "paperId": "87c0587853c4f8ed61557b49cf7569e579869f26", "title": "Optimization-Based AC Microgrid Synchronization" }, { "paperId": "98c73bb37f51ce3f972e11c1d35e010807c41d90", "title": "Distributed optimal energy scheduling based on a novel PD pricing strategy in smart grid" }, { "paperId": "313a19291882db4fc3a2bee4d851bc28363cbb15", "title": "Fully distributed economic dispatch of distributed generators in active distribution networks considering losses" }, { "paperId": "bf715411d4145dced796edd2962b33a8f78ad56f", "title": "Consensus-based distributed scheduling for cooperative operation of distributed energy resources and storage devices in smart grids" }, { "paperId": "fbf3a61ce06e5e8669c213025a8487c637cb810f", "title": "Fully Distributed Social Welfare Optimization With Line Flow Constraint Consideration" }, { "paperId": "ff236c180d66d616cc4b9f7d24c6df50297e52c4", "title": "Fully distributed multi-area economic dispatch method for active distribution networks" }, { "paperId": "1260805dc54fb86c893f94df3298622e1f1c2c39", "title": "Distributed Optimal Resource Management Based on the Consensus Algorithm in a Microgrid" }, { "paperId": "e7d8f5aa3794745dd8a68af2064cbd1bf9de0cad", "title": "Incremental Welfare Consensus Algorithm for Cooperative Distributed Generation/Demand Response in Smart Grid" }, { "paperId": "1d84be92cbeab857d079fb639dee39da93bac911", "title": "Distributed Control Techniques in Microgrids" }, { "paperId": "d1210000a5d35f96dd1df161ca2968aaa7c14193", "title": "An Improved Particle Swarm Optimization for Nonconvex Economic Dispatch Problems" }, { "paperId": "e274acb96305dd122815565afb9e3edf747557cf", "title": "ANALYSIS OF SYNCHRONOUS GENERATOR ANGULAR STABILITY DEPENDING ON THE CHOICE OF THE EXCITATION SYSTEM" }, { "paperId": "c92917c75a596fd27351cb70a5694b61f622863f", "title": "Information 
consensus in multivehicle cooperative control" }, { "paperId": "dea24d1daa97f262e723a62a866fd40d619248f5", "title": "Taguchi method for solving the economic dispatch problem with nonsmooth cost functions" }, { "paperId": "7fe0ef2ddacd193101dc5ba3df97b0241a5e8fc6", "title": "Stability of multiagent systems with time-dependent communication links" }, { "paperId": "9839ed2281ba4b589bf88c7e4acc48c9fa6fb933", "title": "Consensus problems in networks of agents with switching topology and time-delays" }, { "paperId": "20be6a4e06295792977d2d6e4c9eb9a8405226e9", "title": "Coordination of groups of mobile autonomous agents using nearest neighbor rules" }, { "paperId": "4219380acc45645484a5b6ad10d0932a4df82d65", "title": "Real-time economic dispatch with line flow and emission constraints using quadratic programming" }, { "paperId": "fdd93d5fa6cdf685901bec57aa7f9af1e046d811", "title": "An algorithm for combined heat and power economic dispatch" }, { "paperId": null, "title": "An inverter control method suitable for islanding operation" }, { "paperId": null, "title": "Analysis of microgrid inverter paralleling system based on virtual oscillator" }, { "paperId": null, "title": "Inverter parallel operation technology based on improved droop control" }, { "paperId": null, "title": "Adaptive discrete droop control of isolated DC microgrid based on discrete consistency" } ]
19,814
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/02a1e357408df84ddf78c6d1270caaa9b4f1013e
[ "Computer Science" ]
0.800405
A CIPHERTEXT-POLICY ATTRIBUTE-BASED SEARCHABLE ENCRYPTION SCHEME IN NON-INTERACTIVE MODEL
02a1e357408df84ddf78c6d1270caaa9b4f1013e
Journal of Computer Science and Cybernetics
[ { "authorId": "2004098326", "name": "Van Anh Trinh" }, { "authorId": "2241026", "name": "V. Trinh" } ]
{ "alternate_issns": null, "alternate_names": [ "J Comput Sci Cybern" ], "alternate_urls": null, "id": "819d1840-7bd3-4c53-b5cc-5b9fb861c981", "issn": "1813-9663", "name": "Journal of Computer Science and Cybernetics", "type": "journal", "url": null }
We address the problem of searching on encrypted data with expressive searching predicate and multi-writer/multi-reader, a cryptographic primitive which has many concrete application scenarios such as cloud computing, email gateway application and so on. In this paper, we propose a public-key encryption with keyword search scheme relied on the ciphertext-policy attribute-based encryption scheme. In our system, we consider the model where a user can generate trapdoors by himself/herself, we thus can remove the Trusted Trapdoor Generator which can save the resource and communication overhead. We also investigate the problem of combination of a public key encryption used to encrypt data and a public-key encryption with keyword search used to encrypt keywords, which can save the storage of the whole system
_Journal of Computer Science and Cybernetics, V.35, N.3 (2019), 233–249_ DOI 10.15625/1813-9663/35/3/13667

# A CIPHERTEXT-POLICY ATTRIBUTE-BASED SEARCHABLE ENCRYPTION SCHEME IN NON-INTERACTIVE MODEL

VAN ANH TRINH[1], VIET CUONG TRINH[2][,][∗]

1Thanh Hoa University of Culture, Sports and Tourism
2Hong Duc University, Thanh Hoa, Viet Nam
[∗Trinhvietcuong@hdu.edu.vn](mailto:Trinhvietcuong@hdu.edu.vn)

© 2019 Vietnam Academy of Science & Technology

**Abstract. We address the problem of searching on encrypted data with an expressive searching predicate and multi-writer/multi-reader, a cryptographic primitive which has many concrete application scenarios such as cloud computing, email gateway applications and so on. In this paper, we propose a public-key encryption with keyword search scheme relying on a ciphertext-policy attribute-based encryption scheme. In our system, we consider the model where a user can generate trapdoors by himself/herself; we thus can remove the Trusted Trapdoor Generator, which saves resources and communication overhead. We also investigate the problem of combining a public-key encryption used to encrypt data and a public-key encryption with keyword search used to encrypt keywords, which can save the storage of the whole system.**

**Keywords. Attribute-based Encryption; Searchable Encryption; Searching on Encrypted Data.**

**1. INTRODUCTION**

Searching on encrypted data is an important task, which is applicable to many practical contexts such as cloud computing or email gateway applications. In the context of cloud computing, the user's data is first encrypted and then outsourced to the cloud server. When the user would like to find some specific data, he/she needs to ask for help from the cloud server; the user, however, doesn't want the cloud server to learn his/her original data. In the email gateway application, when anyone would like to securely send an email to Alice, he/she encrypts the content of the email under Alice's public key before sending. On the other hand, Alice would like to set a priority order for receiving her emails. To this aim, Alice gives the email gateway the ability to check the priority order of incoming emails and then send her the emails in the order she wants. However, Alice also doesn't want the email gateway to know the content of such emails.

Searchable Encryption (SE) was introduced in [2, 16] to deal with such aforementioned problems. In a nutshell, in a system which supports SE, we append encrypted keywords to the corresponding encrypted data. The user then relies on his/her secret key in the SE scheme and chosen keywords to generate a trapdoor for the cloud server (or the email gateway) to perform the search. The trapdoor is generated in such a way that the cloud server (or the email gateway) using this trapdoor can perform the search successfully, but doesn't get any information about the original data in the resulting ciphertexts. On the other hand, since keywords are encrypted, unauthorized users (called outsiders) as well as the cloud server (called an insider) ideally also don't learn any information about the keywords in each ciphertext.

We can categorize SE into two types:

1. SE in the private-key setting [16], where there is only one writer (the data owner who encrypts the data as well as the corresponding keywords) and one/multiple readers (users who would like to search and then should be able to decrypt the resulting ciphertexts). This type of SE has obviously limited applications in practice.
For example, it cannot apply to the context of sending email above, since anyone should have the capacity to encrypt the content of emails sent to Alice.

2. SE in the public-key setting [2], where there are multiple writers and one/multiple readers.

A full searchable encryption system in practice includes two components: the first is a Public Key Encryption (PKE) scheme used to encrypt data; the second is a Public-Key Encryption with keyword Search (PEKS) scheme used to encrypt keywords. Such a full system is called a PKE-PEKS scheme. In a PKE-PEKS scheme, a full ciphertext, including both the encrypted keywords and the encrypted data, should be of the form $\mathrm{PKE}_{\mathrm{Alice}_{pk}}(\mathrm{data}) \,\|\, \mathrm{PEKS}_{\mathrm{Alice}'_{pk}}(\mathrm{keywords})$.

There are two cases: PKE and PEKS are independent, which means Alice's public-key/secret-key in PKE is different from the ones in PEKS; and otherwise, where Alice's public-key/secret-key could be the same in both PKE and PEKS. Obviously, such a full system becomes more efficient in the latter case. However, in this case we have to carefully consider the security of the full system [10], since the adversary is now more powerful than the one in the former case. When PKE and PEKS are independent, we often only care about the PEKS scheme and omit the PKE scheme for simplicity.

In some schemes [6, 11, 14], Alice cannot generate the trapdoor by herself; she needs to contact a Trusted Trapdoor Generator (TTG), which obviously increases the communication overhead of the user; moreover, the Trusted Trapdoor Generator should always be online. We call such schemes interactive schemes.

In summary, there are several important properties one should take into account when evaluating a system which supports searching on encrypted data:

• Efficiency: Performance of the encryption/decryption/searching algorithms, key size/ciphertext size, whether PKE and PEKS are independent or not, interactive or non-interactive, etc.;
• Expressive searching predicate: Whether or not the PEKS scheme supports conjunctive keywords or even boolean formulas of keywords for searching. Obviously, this property is more desirable than simple equality keyword search in practice;
• Trapdoor security: The cloud server with a trapdoor in hand knows nothing about the keywords in the ciphertext and trapdoor, even when the trapdoor "matches" the ciphertext. We note that this property is very hard to achieve in the public-key setting; to the best of our knowledge there is only one scheme [6] that can _partially_ achieve this property;
• Keyword security: Ideally, unauthorized users and the cloud server cannot derive any information about the keywords in the ciphertext.

**1.1. Related work**

Over the past decade, substantial progress has been made on the problem of searching on encrypted data [1, 2, 3, 6, 8, 9, 11, 14, 16, 17, 18, 19], to name a few. These papers use different techniques and consider different situations for searching on encrypted data. SE in the private-key setting, which supports only single-writer/single-reader, was first introduced in [16]. Continuing this line of research, the authors in [3] investigated searchable encryption with conjunctive keyword searches and boolean queries. The authors in [17, 18] went further to investigate searchable encryption schemes in the single-writer/multi-reader setting with partial trapdoor security. A partially non-interactive construction, which can reduce the communication overhead, was also investigated in [17].
SE in the public-key setting was first introduced by Boneh et al. [2], but their schemes only support multi-writer/single-reader and equality queries. In [1, 19], the authors extended this to support multi-writer/multi-reader, but their schemes still only support equality queries. Expressive searching predicates and multi-writer/multi-reader were investigated in [6, 8, 11, 12, 14], where the authors manage to transform a key-policy/ciphertext-policy attribute-based encryption scheme into a PEKS scheme; these schemes are thus called key-policy/ciphertext-policy attribute-based searchable encryption schemes. The authors in [6] went further to consider partial trapdoor security, in the sense that they split a keyword into two parts: the keyword name and the keyword value, where one keyword name can have many keyword values. They then showed that in their scheme, the cloud server with the trapdoor in hand can only know keyword names but nothing about keyword values in the ciphertext. This interesting property is useful in some specific practical contexts. However, the downside of this technique is that the searching time is only acceptable if the keyword names are included in the ciphertext; this leads to the fact that anyone can also know the keyword names in the ciphertext. On the other hand, the combination of PKE and PEKS was investigated in [10], where [10] also investigated the non-interactive property; however, this scheme does not support expressive searching predicates.

**1.2. Our contribution and organization of the paper**

In this paper, we propose a PKE-PEKS scheme supporting both expressive searching predicates and multi-writer/multi-reader. Our scheme is built from the CP-ABE scheme in [13]; we thus name our scheme a CP-ABSE scheme for short. Our scheme has the following properties:

• Our scheme is a combination of an existing PKE scheme (which is exactly the CP-ABE in [13]) and a newly proposed PEKS scheme. In our scheme, a user has only one pair of public key/secret key for both PKE and PEKS, and moreover the user can use the CP-ABE setting to encrypt/decrypt data;
• Our scheme is non-interactive: a user can generate the trapdoor by himself/herself; we thus can remove the Trusted Trapdoor Generator from our system. On the other hand, since the trapdoor is generated from the user's secret key, the user is able to decrypt all resulting ciphertexts, which can save time and communication overhead in the system;
• Efficiency: Since our CP-ABSE scheme is built from the CP-ABE scheme in [13], our scheme naturally inherits the efficiency and properties of this CP-ABE scheme, such as constant-size user secret keys, optimized ciphertext size, multi-authority support and fast decryption. Note that the CP-ABE scheme in [13] is still one of the most efficient CP-ABE schemes to date;
• We also note that our scheme does not achieve trapdoor security. We emphasize that this property is very hard to achieve in the public-key setting; to the best of our knowledge there is only one scheme [6] that can _partially_ achieve this property.

In Section 5, we give a detailed comparison among our scheme and several schemes which also support both expressive searching predicates and multi-writer/multi-reader.

The paper includes 6 sections. Section 2 presents the definition and security model of a CP-ABSE scheme. In Section 3, we present the construction of our CP-ABSE scheme, and we prove that it is secure in the following section. The comparison and discussions are given in Section 5.
Finally, the conclusion is in Section 6.

**2. PRELIMINARIES**

In this section, we first give the system workflow and the threat model of our system, then we present the definition and security model of our CP-ABSE scheme.

**2.1. Ciphertext policy attribute based searchable encryption**

**2.1.1. System workflow and threat model**

Our CP-ABSE scheme is a combination of a traditional CP-ABE scheme and a PEKS scheme supporting expressive searching predicates. In our scheme, there are four entities: data owner, user, cloud server and Private Key Generator (PKG). More precisely:

1. PKG: Plays the role of the PKG in a traditional CP-ABE scheme; generates secret keys for users.
2. Data owner: Encrypts data as well as the corresponding keywords, and uploads them to a public cloud.
3. User: Relies on his/her secret key to generate a trapdoor, sends this trapdoor to the cloud server and gets back the resulting ciphertexts. Finally, decrypts the resulting ciphertexts to recover the data.
4. Cloud server: Receives a trapdoor from a user, performs the search based on the trapdoor and sends the resulting ciphertexts back to the user.

**Threat model.** Similar to the threat model in recent schemes [6, 8, 9, 11, 12, 14], in our system there are two goals an adversary would like to achieve: getting information about encrypted data and getting information about encrypted keywords.

**2.1.2. System algorithms**

Formally, our CP-ABSE scheme includes the seven following probabilistic algorithms (a type-level sketch of this interface is given below).

**Setup**$(1^\nu, \mathcal{B})$: The inputs of this algorithm are the security parameter $\nu$ and the description $\mathcal{B}$ of the attribute universe; the outputs are the master key MSK and the public parameters param of the system.

**Extract**$(u, \mathcal{B}(u), \mathrm{MSK}, \mathrm{param})$: The inputs of this algorithm are the attribute set $\mathcal{B}(u)$ of user $u$, param and MSK; the output is the user's secret key $d_u$.

**Encrypt**$(M, \mathbb{A}, \mathrm{param})$: The inputs of this algorithm are param, a message $M$ and an access policy $\mathbb{A}$ over the universe of attributes; the output is the ciphertext $ct$ along with a description of the access policy $\mathbb{A}$.

**Decrypt**$(ct, d_u, \mathrm{param})$: The inputs of this algorithm are param, the ciphertext $ct$ and the secret key $d_u$ of user $u$; the output is the message $M$ if and only if $\mathcal{B}(u)$ satisfies $\mathbb{A}$. Otherwise, the output is $\perp$.

**Trapdoor**$(d_u, W_i, \mathrm{param})$: The inputs of this algorithm are param, the secret key $d_u$ of user $u$ and a set of keywords $W_i$ the user would like to search; the output is the trapdoor $tds$.

**EncryptKW**$(\mathcal{KF}, \mathbb{A}', \mathrm{param})$: The inputs of this algorithm are param, an access policy $\mathbb{A}'$ over the universe of attributes and an access policy $\mathcal{KF}$ over the universe of keywords; the output is the ciphertext $ct'$ along with a description of the access policy $\mathbb{A}'$.

**Search**$(tds, ct', \mathrm{param})$: The inputs of this algorithm are param, a trapdoor $tds$ and a ciphertext $ct'$; the output is 1 if the keyword set $W_i$ embedded in $tds$ matches the access structure $\mathcal{KF}$ in $ct'$ and $\mathcal{B}(u)$ satisfies $\mathbb{A}'$. Otherwise, the output is 0.

We note that the full ciphertext should be the couple $(ct, ct')$. In addition, in order for the user to be able to decrypt the resulting ciphertext, we choose $\mathbb{A}'$ in $ct'$ such that if $\mathcal{B}(u)$ satisfies $\mathbb{A}'$ then $\mathcal{B}(u)$ satisfies $\mathbb{A}$ in $ct$.
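To make the seven-algorithm interface concrete, the following Python sketch writes it down as type signatures only. It is a hypothetical skeleton for orientation: the class names (`Params`, `MasterKey`, etc.) are ours, not the paper's, and the bodies are intentionally left out.

```python
# Hypothetical type-level skeleton of the CP-ABSE interface (no crypto inside).
from dataclasses import dataclass
from typing import FrozenSet, List, Optional

@dataclass
class Params: ...          # public parameters `param`
@dataclass
class MasterKey: ...       # master secret key MSK
@dataclass
class SecretKey: ...       # user key d_u (d_u0 and lambda kept private)
@dataclass
class Ciphertext: ...      # ct  = Encrypt(M, A)
@dataclass
class KwCiphertext: ...    # ct' = EncryptKW(KF, A')
@dataclass
class Trapdoor: ...        # tds (only {tds_{0,j}} and W~_i stay private)

class CPABSE:
    def setup(self, nu: int, universe: FrozenSet[str]) -> "tuple[Params, MasterKey]": ...
    def extract(self, u: str, attrs: FrozenSet[str], msk: MasterKey, pp: Params) -> SecretKey: ...
    def encrypt(self, msg: bytes, policy: str, pp: Params) -> Ciphertext: ...
    def decrypt(self, ct: Ciphertext, sk: SecretKey, pp: Params) -> Optional[bytes]: ...
    def trapdoor(self, sk: SecretKey, combos: List[str], pp: Params) -> Trapdoor: ...
    def encrypt_kw(self, kf: List[str], policy: str, pp: Params) -> KwCiphertext: ...
    def search(self, tds: Trapdoor, ctk: KwCiphertext, pp: Params) -> bool: ...
```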
**2.1.3. Security model**

**Selective semantic security.** The selective semantic security game is similar to the one in [13], except that the adversary can ask an additional corruption trapdoor query. Due to space limitations, we refer the reader to [13] for details.

**Insider security.** Assume that $\mathcal{A}$ is the attacker and $\mathcal{C}$ is the challenger. The insider security game is defined as follows.

**Setup**$(1^\nu, \mathcal{B})$. At the beginning of the game, the adversary $\mathcal{A}$ provides a target access policy $\mathbb{A}^*$ over the universe of attributes, and two equal target access policies $\mathcal{KF}^*_0, \mathcal{KF}^*_1$ over the universe of keywords which she intends to attack, where "equal access policy" means that if $\mathcal{KF}^*_0$ and $\mathcal{KF}^*_1$ are described in DNF form, they have the same number of clauses. $\mathcal{C}$ runs the **Setup**$(1^\nu, \mathcal{B})$ algorithm to obtain param and MSK. She then gives param to $\mathcal{A}$.

**Query phase 1.** $\mathcal{A}$ chooses a set of attributes $\mathcal{B}(u)$ as well as a set of keywords $W_i$ and asks a corruption trapdoor query corresponding to these sets. The challenger computes and returns the corresponding $tds$ to the adversary.

**Challenge.** $\mathcal{C}$ chooses $b \xleftarrow{\$} \{0,1\}$ and runs **EncryptKW**$(\mathbb{A}^*, \mathcal{KF}^*_b, \mathrm{param})$ to generate $ct'^*$. Finally, $\mathcal{C}$ outputs $ct'^*$.

**Query phase 2.** The same as phase 1.

**Guess.** $\mathcal{A}$ finally outputs $b' \in \{0,1\}$ as its guess for $b$.

$\mathcal{A}$ wins the game if $b' = b$, and if $\mathcal{A}$ never asks a query on $(\mathcal{B}(u), W_i)$ such that both $\mathcal{B}(u)$ satisfies $\mathbb{A}^*$ and $W_i$ satisfies either $\mathcal{KF}^*_0$ or $\mathcal{KF}^*_1$. The advantage of $\mathcal{A}$ in winning the game is defined as $\mathbf{Adv}^{IS}_{\mathcal{A}} = \Pr[b = b'] - \frac{1}{2}$.

**Definition 1.** A ciphertext-policy attribute-based searchable encryption scheme achieves insider security if all polynomial-time adversaries have at most a negligible advantage in the above game.

**Outsider security.** The outsider security game is similar to the insider game; the difference is that the adversary can ask corrupted secret key queries instead of corrupted trapdoor queries.

Due to space limitations, we refer the reader to [13] for the definitions of Access Structures, LSSS Matrices, Bilinear Maps, $(P, Q, f)$-GDDHE Assumptions and so on.

**3. CIPHERTEXT POLICY ATTRIBUTE BASED SEARCHABLE ENCRYPTION**

In this paper, we rely on the CP-ABE scheme in [13] to build our CP-ABSE scheme. Concretely, our CP-ABSE scheme is a combination of the CP-ABE scheme in [13] and a new PEKS scheme, where the latter scheme is also built from the former scheme. A user in our scheme, therefore, can use the same public key and secret key for both the CP-ABE scheme and the PEKS scheme.

In our scheme, the user relies on his/her secret key and a set of chosen keywords $W = (w_1, \dots, w_t)$ to generate the trapdoor. More precisely, from a set of chosen keywords $W$, the user has to indicate exactly which combinations of keywords he/she would like to search. Consider the example in [6]: $W = (w_1, w_2, w_3)$, where $w_1$ = "Diabetes", $w_2$ = "Age: 30" and $w_3$ = "Weight: 150–200"; the user has to indicate the set of combinations of keywords he/she would like to search, $W_i = (w_1\|w_2,\ w_1\|w_3)$. The advantage of this point is that we can save searching time and communication overhead, since the cloud server only needs to find and then send back the ciphertexts the user really wants. In other schemes [5, 6, 11, 14], the user doesn't indicate exactly which combinations of keywords he/she would like to search; the cloud server thus searches over all possible combinations of keywords.

**3.1. Detailed construction**

Our scheme is described as follows.

**Setup**$(\nu, \mathcal{B})$: Denote by $N = |\mathcal{B}|$ the maximal number of attributes in the system and by $(p, \mathbb{G}, \mathbb{G}_T, e(\cdot,\cdot))$ a bilinear group system.
The algorithm first picks a random generator $g \in \mathbb{G}$ and random scalars $a, \alpha, \lambda \in \mathbb{Z}_p$, and computes $g^a, g^\alpha, g^\lambda$. The algorithm continues to generate $2N$ group elements in $\mathbb{G}$ associated with the $N$ attributes in the system: $h_1, \dots, h_N, \tilde h_1, \dots, \tilde h_N$. Let $H, \tilde H$ be hash functions such that $H : \{0,1\}^* \to \mathbb{G}$ and $\tilde H : \mathbb{G}_T \times \{0,1\}^* \to \mathbb{Z}_p$. Suppose that the keyword universe in the system is $W = (w_1, w_2, w_3, \dots)$, where each $w_i \in \{0,1\}^*$. In our system the set $W$ is unbounded; we can add any new keyword into the system at any time. For simplicity, we omit $W$ in the global parameters. Finally, we set the master secret key and global parameters as $\mathrm{MSK} = (g^\alpha, \lambda)$ and $\mathrm{param} = (g, g^a, g^\lambda, e(g,g)^\alpha, h_1, \dots, h_N, \tilde h_1, \dots, \tilde h_N, H, \tilde H)$.

**Extract**$(u, \mathcal{B}(u), \mathrm{MSK}, \mathrm{param})$: Assume $\mathcal{B}(u)$ is the attribute set of user $u$. The algorithm chooses $s_u \xleftarrow{\$} \mathbb{Z}_p$, then computes user $u$'s secret key as $d_u = (d_{u0}, d'_{u0}, \{d_{ui}\}_{i\in\mathcal{B}(u)}, \lambda)$, where
$$d_{u0} = g^\alpha \cdot g^{a\cdot s_u}, \quad d'_{u0} = g^{s_u}, \quad \{d_{ui} = h_i^{s_u}\}_{i\in\mathcal{B}(u)}.$$
User $u$ then keeps $d_{u0}$ and $\lambda$ secret and publishes the rest of his/her secret key to the public domain. That means the secret key of the user is of constant size.

**Encrypt**$(M, \mathbb{A}, \mathrm{param})$: The inputs are a message $M$, an access policy $\mathbb{A}$, as well as param. Assume that $\mathbb{A}$ is a boolean formula $\beta$ and that the size of $\beta$ is $|\beta|$. At first, the encryptor describes $\beta$ in DNF form as $\beta = (\beta_1 \vee \dots \vee \beta_m)$, where each $\beta_i$, $i = 1, \dots, m$, is a set of attributes.

The encryptor chooses a scalar $s \xleftarrow{\$} \mathbb{Z}_p$, then computes $C, C_0$ as
$$C = M \cdot e(g,g)^{\alpha\cdot s}, \quad C_0 = g^s.$$
Next, the encryptor compares $m$ and $|\beta|$; if $m \le |\beta|$, he/she computes
$$C_1 = \Big(g^a \prod_{i\in\beta_1} h_i\Big)^s, \;\dots,\; C_m = \Big(g^a \prod_{i\in\beta_m} h_i\Big)^s.$$
Otherwise, the encryptor constructs an LSSS matrix $M$ representing the original boolean formula $\beta$, and a map function $\rho$ such that $(M, \rho) \in (\mathbb{Z}_p^{\ell\times n}, \mathcal{F}([\ell] \to [N]))$. She then chooses a random vector $\vec v = (s, y_2, \dots, y_n) \in \mathbb{Z}_p^n$. For $i = 1, \dots, \ell$ she computes $\lambda_i = \vec v \cdot M_i$, where $M_i$ is the vector corresponding to the $i$-th row of $M$. She then computes
$$C_i = g^{a\lambda_i}\, h_{\rho(i)}^{-s}, \quad i = 1, \dots, \ell.$$
Eventually, the output is either $ct = (C, C_0, \dots, C_m)$ along with a description of $\beta$, or $ct = (C, C_0, \dots, C_\ell)$ along with a description of $(M, \rho)$.

**Decrypt**$(ct, d_u, \mathrm{param})$: The decryptor $u$ first parses $ct$ and checks the number of elements in $ct$. If it is equal to $m + 1$, the decryptor parses $ct$ as $(C_0, C_1, \dots, C_m)$, then finds $j$ such that $\beta_j \subset \mathcal{B}(u)$, and computes
$$\frac{e\big(C_0,\ d_{u0}\prod_{i\in\beta_j} d_{ui}\big)}{e\big(d'_{u0},\ C_j\big)} = \frac{e\big(g^s,\ g^\alpha (g^a \prod_{i\in\beta_j} h_i)^{s_u}\big)}{e\big(g^{s_u},\ (g^a \prod_{i\in\beta_j} h_i)^{s}\big)} = e(g,g)^{\alpha\cdot s} = K.$$
Finally, she computes $M = C \cdot K^{-1}$.

Otherwise, she defines the set $I \subset \{1, 2, \dots, \ell\}$ such that $I = \{i : \rho(i) \in \mathcal{B}(u)\}$. Let $\{\omega_i \in \mathbb{Z}_p\}_{i\in I}$ be a set of constants such that if $\{\lambda_i\}$ are valid shares of any secret $s$ according to $M$, then $\sum_{i\in I}\omega_i\lambda_i = s$. Note that from the relation $\sum_{i\in I}\omega_i M_i = (1, 0, \dots, 0)$, where $M_i$ is the $i$-th row of the matrix $M$, she can determine these constants. She parses $ct$ as $(C_0, C_1, \dots, C_\ell)$ and computes
$$e\Big(\prod_{i\in I} C_i^{-\omega_i},\ d'_{u0}\Big) \cdot e\Big(C_0,\ d_{u0}\prod_{i\in I} d_{u\rho(i)}^{-\omega_i}\Big) = K.$$
She then computes $M = C \cdot K^{-1}$.
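As a quick sanity check of the DNF-branch decryption equation, one can work in a toy model where every group element $g^x$ is represented by its known exponent $x \bmod p$: multiplication in $\mathbb{G}$ becomes addition of exponents, and the pairing $e(g^x, g^y)$ becomes the product $xy \bmod p$. The prime and attribute indices below are placeholders; this model has no security and only verifies the algebra.

```python
# Toy "known-discrete-log" check of K = e(C0, d_u0 * prod d_ui) / e(d'_u0, C_j):
# exponents stand in for group elements, so pairings turn into products mod p.
import random

p = (1 << 61) - 1                      # toy prime, not a pairing group order
rnd = random.Random(7)
a, alpha, s_u, s = (rnd.randrange(1, p) for _ in range(4))
h = {i: rnd.randrange(1, p) for i in (1, 2, 3)}   # exponents of h_1..h_3
beta_j = {1, 3}                        # one DNF clause, contained in B(u)

# key logs:  d_u0 = alpha + a*s_u,  d'_u0 = s_u,  d_ui = h_i * s_u
# ciphertext logs:  C0 = s,  C_j = (a + sum_{i in beta_j} h_i) * s
K = (s * (alpha + a * s_u + sum(h[i] * s_u for i in beta_j))   # e(C0, ...)
     - s_u * (a + sum(h[i] for i in beta_j)) * s) % p          # / e(d'_u0, C_j)

assert K == alpha * s % p   # K = e(g,g)^{alpha*s}, hence M = C * K^{-1}
print("decryption equation verified")
```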
**Trapdoor**$(d_u, W_i = (\tilde w_{i_1}, \dots, \tilde w_{i_k}), \mathrm{param})$: Suppose that each $\tilde w_{i_j} \in \{0,1\}^*$, $j \in [k]$, is a concatenation of a set of keywords, for example "Diabetes$\|$Age: 30".

The user randomly chooses scalars $r_1, \dots, r_k \in \mathbb{Z}_p$ and computes the trapdoor
$$tds = \Big(\{tds_{0,j},\ tds_{1,j},\ \{tds_{2,j,\ell}\}_{\ell\in\mathcal{B}(u)}\}_{j\in[k]},\ tds_0,\ \{tds_i\}_{i\in\mathcal{B}(u)},\ \tilde W_i\Big) = \Big(\big\{g^\alpha g^{a s_u} g^{a r_j}\big(g^a H(\tilde w_{i_j})\big)^\lambda,\ g^{r_j},\ \{\tilde h_\ell^{r_j}\}_{\ell\in\mathcal{B}(u)}\big\}_{j\in[k]},\ g^{s_u},\ \{h_i^{s_u}\}_{i\in\mathcal{B}(u)},\ \tilde W_i\Big),$$
where $\tilde W_i$ is a short description of $W_i$. The user then sends $(\{tds_{0,j}\}_{j\in[k]}, \tilde W_i)$ to the cloud server, and publishes the rest of $tds$ to the public domain. That means the trapdoor size is linear in the number of combinations of keywords the user would like to search.

**EncryptKW**$(\mathcal{KF}, \mathbb{A}', \mathrm{param})$: Assume that the access policy $\mathbb{A}' = \beta = (\beta_1 \vee \dots \vee \beta_m)$ and $\mathcal{KF} = (kf_1 \vee \dots \vee kf_{m'})$, where each $\beta_i$ is a set of attributes and each $kf_i$ is a concatenation of a set of keywords. Note that $\beta_i \neq \beta_j$ and $kf_{i'} \neq kf_{j'}$ for all $i, j \in [m]$ and $i', j' \in [m']$.

The encryptor picks a scalar $s \xleftarrow{\$} \mathbb{Z}_p$, then computes
$$C_0 = g^s, \quad C_1 = \Big(g^a \prod_{i\in\beta_1} h_i\Big)^s, \;\dots,\; C_m = \Big(g^a \prod_{i\in\beta_m} h_i\Big)^s,$$
$$\tilde C_1 = \Big(g^a \prod_{i\in\beta_1} \tilde h_i\Big)^s, \;\dots,\; \tilde C_m = \Big(g^a \prod_{i\in\beta_m} \tilde h_i\Big)^s.$$
Next, he/she computes
$$X_i = e(g,g)^{\alpha\cdot s} \cdot e\big(g,\, g^a H(kf_i)\big)^{\lambda\cdot s}, \quad i = 1, \dots, m',$$
then computes
$$K_1 = \tilde H(X_1, kf_1), \;\dots,\; K_{m'} = \tilde H(X_{m'}, kf_{m'}).$$
Eventually, the encryptor outputs
$$ct' = (C_0, \dots, C_m, \tilde C_1, \dots, \tilde C_m, K_1, \dots, K_{m'})$$
along with a description of $\beta$.

**Search**$(tds, ct', \mathrm{param})$: The cloud server first finds $\ell \in [m]$ such that $\beta_\ell \subset \mathcal{B}(u)$, then computes, for $j = 1, \dots, k$,
$$X_j = \frac{e\big(C_0,\ tds_{0,j}\prod_{i\in\beta_\ell} tds_i \cdot tds_{2,j,i}\big)}{e(tds_0, C_\ell)\cdot e(tds_{1,j}, \tilde C_\ell)} = \frac{e\big(g^s,\ g^\alpha g^{a s_u} g^{a r_j} g^{a\lambda} H(\tilde w_{i_j})^{\lambda}\prod_{i\in\beta_\ell} h_i^{s_u}\tilde h_i^{r_j}\big)}{e\big(g^{s_u},\, (g^a\prod_{i\in\beta_\ell} h_i)^s\big)\cdot e\big(g^{r_j},\, (g^a\prod_{i\in\beta_\ell} \tilde h_i)^s\big)} = e(g,g)^{\alpha\cdot s}\cdot e\big(g,\, g^a H(\tilde w_{i_j})\big)^{\lambda\cdot s},$$
$$Y_j = \tilde H(X_j, \tilde w_{i_j}).$$
If there exists a pair $(i, j)$, $i \in [m']$, $j \in [k]$, such that $K_i = Y_j$, then the cloud server outputs "yes". Otherwise, the cloud server outputs "no". Note that the cloud server doesn't need to compute all pairs $(X_j, Y_j)$, $j = 1, \dots, k$; as soon as he/she finds a pair $(i, j)$, $i \in [m']$, $j \in [k]$, such that $K_i = Y_j$, he/she outputs "yes" and stops.

**Correctness:** It is easy to verify that if there exists at least one pair $\tilde w_{i_j} \in W_i$ and $kf_t \in \mathcal{KF}$ such that $\tilde w_{i_j} = kf_t$, then
$$X_t = e(g,g)^{\alpha\cdot s}\cdot e\big(g,\, g^a H(kf_t)\big)^{\lambda\cdot s} = e(g,g)^{\alpha\cdot s}\cdot e\big(g,\, g^a H(\tilde w_{i_j})\big)^{\lambda\cdot s} = X_j,$$
which means $K_t = \tilde H(X_t, kf_t) = \tilde H(X_j, \tilde w_{i_j}) = Y_j$.

**Remark 1.** As in [13], all sets $\beta_i$, $i = 1, \dots, m$, must be disjoint subsets to resist the simple attack. This leads to the fact that attributes in the system cannot be reused in the access formula. To deal with this problem, they allow each attribute to have $k_{max}$ copies of itself, as in [4, 7]. Note that the user's secret key is still of constant size.
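Before moving to the security analysis, the same known-discrete-log toy model used above can confirm the Search correctness end to end for one clause and one keyword combination. Everything below (the prime, the hash modeling, the attribute indices, the keyword string) is an illustrative placeholder; the point is only that the exponent algebra of Trapdoor, EncryptKW and Search cancels as claimed.

```python
# Toy check that Search recovers X_j = e(g,g)^{alpha*s} * e(g, g^a H(w~))^{lambda*s}
# when the trapdoor keyword equals kf_1; group elements are modeled by exponents.
import hashlib, random

p = (1 << 61) - 1
H  = lambda w: int.from_bytes(hashlib.sha256(b"H" + w.encode()).digest(), "big") % p
Ht = lambda X, w: int.from_bytes(hashlib.sha256(f"{X}|{w}".encode()).digest(), "big") % p

rnd = random.Random(1)
a, alpha, lam, s_u, r, s = (rnd.randrange(1, p) for _ in range(6))
h  = {1: rnd.randrange(1, p), 2: rnd.randrange(1, p)}   # logs of h_i
ht = {1: rnd.randrange(1, p), 2: rnd.randrange(1, p)}   # logs of h~_i
clause = {1, 2}                                         # beta_1, inside B(u)
w = "Diabetes||Age: 30"                                 # w~ = kf_1

# Trapdoor logs (j = 1): tds_{0,1}, tds_{1,1}, tds_{2,1,i}, tds_0, tds_i
tds0j = (alpha + a * s_u + a * r + (a + H(w)) * lam) % p
tds1j, tds0 = r, s_u
tds2 = {i: ht[i] * r % p for i in clause}
tdsi = {i: h[i] * s_u % p for i in clause}

# EncryptKW logs for the same clause and keyword kf_1 = w
C0, Cl, Ctl = s, (a + sum(h.values())) * s % p, (a + sum(ht.values())) * s % p
K1 = Ht((alpha * s + (a + H(w)) * lam * s) % p, w)

# Search: each pairing becomes a product of exponents; division -> subtraction
num = C0 * (tds0j + sum(tdsi[i] + tds2[i] for i in clause)) % p
den = (tds0 * Cl + tds1j * Ctl) % p
Xj = (num - den) % p
assert Ht(Xj, w) == K1
print("search match verified")
```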
**Remark 1.** As in [13], all sets $\beta_i$, $i = 1, \dots, m$, must be disjoint subsets to resist the simple attack. This leads to the fact that attributes in the system cannot be reused in the access formula. To deal with this problem, they allow each attribute to have $k_{max}$ copies of itself, as in [4, 7]. Note that the user's secret key is still of constant size.

**4. SECURITY**

In this section, we show that our scheme is secure in the model defined in Subsection 2.1.3. We first refer the reader to the modified BDHE assumption defined in [13], and then we define a new modified BDHE assumption. We finally prove that our scheme achieves selective semantic security under the new modified BDHE assumption, and achieves insider and outsider security under the modified BDHE assumption.

**Definition 2. (New Modified-BDHE problem)** Let $(p, \mathbb{G}, \mathbb{G}_T, e)$ be a bilinear group system; pick $a, t, s, q, \theta, r_1, \dots, r_\theta \xleftarrow{\$} \mathbb{Z}_p$ and a generator $g \in \mathbb{G}$. Given
$$\vec{Y} = \big(g, g^s, g^a, \dots, g^{a^q}, g^{a^{q+2}}, \dots, g^{a^{2q}}, g^{s(at+a)}, g^{at}, \dots, g^{a^q t}, g^{a^{q+2} t}, \dots, g^{a^{2q} t}, g^{a^{q+1}} g^{a r_1}, \dots, g^{a^{q+1}} g^{a r_\theta}, g^{r_1}, \dots, g^{r_\theta}\big),$$
it is hard to distinguish between $T = e(g,g)^{a^{q+1} s} \in \mathbb{G}_T$ and $T \xleftarrow{\$} \mathbb{G}_T$. Assume that $\mathcal{A}$ is an adversary that outputs $b \in \{0,1\}$; it has advantage $\epsilon$ in solving the new Modified-BDHE problem in $\mathbb{G}$ if
$$\Big|\Pr\big[\mathcal{A}(\vec{Y}, T = e(g,g)^{a^{q+1}s}) = 0\big] - \Pr\big[\mathcal{A}(\vec{Y}, T = R) = 0\big]\Big| \ge \epsilon.$$

**Definition 3.** The new Modified-BDHE assumption holds if no polynomial-time adversary has a non-negligible advantage in solving the new Modified-BDHE problem.

Intuitively, to compute $e(g,g)^{a^{q+1}s}$ one should know one of the values $g^{a^{q+1}}$ or $g^{a^{q+1}t}$ or $e(g,g)^{s a r_i}$, $i \in [\theta]$, but these elements are not provided in $\vec{Y}$.

**4.1. Selective semantic security**

**Theorem 1.** Let $\beta^*$ be the challenge access policy; from $\beta^*$ we construct the corresponding challenge LSSS matrix $L'$ of size $\ell' \times n'$ and map function $\rho'$. We next describe $\beta^* = \beta_1^* \vee \dots \vee \beta_m^*$, where $\beta_i^*$, $i = 1, \dots, m$, are disjoint sets, and then construct the corresponding challenge LSSS matrix $L^*$ of size $\ell^* \times n^*$ and map function $\rho^*$. If those LSSS matrices satisfy $\ell', n', \ell^*, n^* \le q$, and if $\theta \ge k^* \cdot q^*$, where $k^*$ and $q^*$ are the maximum number of combinations of keywords in a trapdoor and the maximum number of trapdoor queries corresponding to $\beta^*$ the adversary can make, respectively, then our scheme is selectively semantically secure under the new Modified-BDHE assumption.

Compared to the proof in [13], here the simulator needs to answer additional corruption trapdoor queries. To answer this kind of query, the simulator uses the elements $g^{a^{q+1}} g^{a r_1}, \dots, g^{a^{q+1}} g^{a r_\theta}, g^{r_1}, \dots, g^{r_\theta}$. Note that these elements only appear in the new Modified-BDHE assumption, not in the Modified-BDHE assumption. The rest of the proof of this theorem is similar to the one in [13].

**4.2. Keyword security**

**4.2.1. Insider security**

**Theorem 2.** Assume that $\beta^* = \beta_1^* \vee \dots \vee \beta_m^*$ is the challenge access policy, and from $\beta^*$ construct a corresponding challenge LSSS matrix $L^*$ of size $\ell^* \times n^*$ and map function $\rho^*$. If this LSSS matrix satisfies $\ell^*, n^* \le q$, our scheme achieves insider security under the Modified-BDHE assumption in the random oracle model.

*Proof.* In this proof we show that a simulator $\mathcal{S}$ who attacks the Modified-BDHE assumption can simulate an adversary $\mathcal{A}$ who attacks our scheme in the insider security game as defined in Subsection 2.1.3. As a result, if $\mathcal{A}$ wins with non-negligible advantage then $\mathcal{S}$ can also win with non-negligible advantage. More precisely: at the setup phase, $\mathcal{S}$ is given an instance of the Modified-BDHE assumption, and then receives the challenge access policy $\beta^* = \beta_1^* \vee \dots \vee \beta_m^*$ as well as $KF_0^* = (kf_{0,1}^*, \dots, kf_{0,m'}^*)$ and $KF_1^* = (kf_{1,1}^*, \dots, kf_{1,m'}^*)$ from $\mathcal{A}$. Note that $\beta_i^*$, $i = 1, \dots, m$, are disjoint sets.
From the challenge access policy $\beta^* = \beta_1^* \vee \dots \vee \beta_m^*$, the simulator builds an LSSS matrix $(M^*_{\ell^* \times n^*}, \rho^*)$ such that both $\ell^*, n^* \le q$. To program the parameters of the system, the simulator picks $\alpha' \xleftarrow{\$} \mathbb{Z}_p$ and implicitly sets $\alpha = \alpha' + a^{q+1}$, then computes $e(g,g)^\alpha = e(g^a, g^{a^q})\, e(g,g)^{\alpha'}$. The simulator finds sets of rows of the matrix $M^*$, $I_1, \dots, I_m$, where $\{\rho(i), i \in I_j\} = \beta_j^*$ (note that $I_j$, $j = 1, \dots, m$, are disjoint sets since the $\beta_j^*$ are disjoint sets). Now $\beta^*$ can be rewritten as $(\wedge \rho(i))_{i \in I_1} \vee (\wedge \rho(i))_{i \in I_2} \vee \dots \vee (\wedge \rho(i))_{i \in I_m}$.

To program $h_1, \dots, h_N, \tilde{h}_1, \dots, \tilde{h}_N$, the simulator implicitly defines the vector $\vec{y} = (t, ta, ta^2, \dots, ta^{n^*-1})^\perp \in \mathbb{Z}_p^{n^*}$. Let $\vec{\Lambda} = (\lambda_1, \dots, \lambda_{\ell^*}) = M^* \cdot \vec{y}$ be the vector of shares; for $j = 1, \dots, \ell^*$, $\lambda_j = \sum_{i \in [n^*]} M^*_{j,i}\, t a^{i-1}$. He/she next finds $\{\omega_i\}_{1 \le i \le \ell^*}$ such that for all $j = 1, \dots, m$, $\sum_{i \in I_j} \omega_i \cdot \lambda_i = t$. Note that we can find $\{\omega_i\}_{1 \le i \le \ell^*}$ since, by the property of the LSSS matrix, there exist $\{\omega_i\}_{1 \le i \le \ell^*}$ such that for all $j = 1, \dots, m$, $\sum_{i \in I_j} \omega_i \cdot M^*_i = (1, 0, \dots, 0)$.

For each $h_j, \tilde{h}_j$, $1 \le j \le N$, for which there exists an index $i \in [\ell^*]$ such that $j = \rho^*(i)$ (note that the function $\rho^*$ is injective), the simulator chooses $z_j, \tilde{z}_j \xleftarrow{\$} \mathbb{Z}_p$ and computes
$$h_j = g^{z_j} \cdot g^{\omega_i \sum_{k \in [n^*]} M^*_{i,k} t a^k} = g^{z_j} \cdot g^{a \omega_i \lambda_i}; \qquad \tilde{h}_j = g^{\tilde{z}_j} \cdot g^{\omega_i \sum_{k \in [n^*]} M^*_{i,k} t a^k} = g^{\tilde{z}_j} \cdot g^{a \omega_i \lambda_i}.$$
Note that the simulator knows the matrix $M^*$ and $g^{t a^k}$, where $k \in [n^*]$, from the instance of the Modified-BDHE assumption. Otherwise, the simulator chooses $z_j, \tilde{z}_j \xleftarrow{\$} \mathbb{Z}_p$ and computes $h_j = g^{z_j}$, $\tilde{h}_j = g^{\tilde{z}_j}$. We note that $\{h_j, \tilde{h}_j\}_{j=1,\dots,N}$ are randomly distributed, due to the random choice of $z_j, \tilde{z}_j$.

To program $g^\lambda$, the simulator implicitly sets $\lambda = -a^q$ and computes $g^\lambda = (g^{a^q})^{-1}$. The simulator also chooses hash functions $H, \tilde{H}$, which in this proof are modeled as random oracles. At the end of this phase, the simulator gives the following $param$ to $\mathcal{A}$:
$$(g, g^a, g^\lambda, e(g,g)^\alpha, h_1, \dots, h_N, \tilde{h}_1, \dots, \tilde{h}_N, H, \tilde{H}).$$

**Query phase 1.** In this phase, the simulator needs to answer five types of query:

1. The hash query.
2. The corrupted trapdoor query $(\mathcal{B}_u, W_i = (\tilde{w}_{i_1}, \dots, \tilde{w}_{i_k}))$, where $W_i$ doesn't "satisfy" $KF_0^*$ or $KF_1^*$, meaning there doesn't exist any triple $(i_j, b, b')$ such that $\tilde{w}_{i_j} = kf^*_{b,b'}$.
3. The corrupted trapdoor query $(\mathcal{B}_u, W_i)$, where $W_i$ "satisfies" $KF_0^*$ or $KF_1^*$, but $\mathcal{B}_u$ doesn't "satisfy" $\beta^*$.
4. The partially corrupted trapdoor query $(\mathcal{B}_u, W_i)$, where $W_i$ "satisfies" $KF_0^*$ or $KF_1^*$ and $\mathcal{B}_u$ "satisfies" $\beta^*$. Note that the user only keeps $(\{tds_{0,j}\}_{j \in [k]}, \tilde{W}_i)$ secret and publishes the rest of $tds$ to the public domain. That means $\mathcal{A}$ can know $(\{tds_{1,j}, \{tds_{2,j,\ell}\}_{\ell \in \mathcal{B}_u}\}_{j \in [k]}, tds_0, \{tds_i\}_{i \in \mathcal{B}_u})$ for any $(\mathcal{B}_u, W_i = (\tilde{w}_{i_1}, \dots, \tilde{w}_{i_k}))$.
5. The partially corrupted secret key query $\mathcal{B}_u$, for any $\mathcal{B}_u$. The reason is that the user only keeps $d_{u0}$ secret.

- Regarding the hash query: the simulator creates two lists $L, \tilde{L}$; at the beginning $L, \tilde{L}$ are empty. For each hash query corresponding to a $\tilde{w}_i$ which doesn't satisfy $KF_0^*$ or $KF_1^*$, the simulator first checks whether $\tilde{w}_i$ has been queried before.
If not, he/she chooses $y_i \xleftarrow{\$} \mathbb{Z}_p$, adds the triple $(\tilde{w}_i, g^{y_i}, y_i) \in (\{0,1\}^*, \mathbb{G}, \mathbb{Z}_p)$ into $L$, and returns $g^{y_i}$ to $\mathcal{A}$. Otherwise, he/she simply finds the triple $(\tilde{w}_i, g^{y_i}, y_i)$ and returns $g^{y_i}$ to $\mathcal{A}$. In the case that $\tilde{w}_i$ "satisfies" $KF_0^*$ or $KF_1^*$, the simulator first checks whether $\tilde{w}_i$ has been queried before. If not, he/she chooses $y_i \xleftarrow{\$} \mathbb{Z}_p$, adds the triple $(\tilde{w}_i, g^{-a} \cdot g^{y_i}, y_i)$ into $L$, and returns $g^{-a} \cdot g^{y_i}$ to $\mathcal{A}$. Otherwise, he/she simply finds the triple $(\tilde{w}_i, g^{-a} \cdot g^{y_i}, y_i)$ and returns $g^{-a} \cdot g^{y_i}$ to $\mathcal{A}$. For each hash query corresponding to $(K_i, kf_j)$, where $K_i \in \mathbb{G}_T$ and $kf_j \in \{0,1\}^*$, the simulator first checks whether $(K_i, kf_j)$ has been queried before. If not, he/she chooses $y_{i_j} \xleftarrow{\$} \mathbb{Z}_p$ and adds the triple $(K_i, kf_j, y_{i_j})$ into $\tilde{L}$. Otherwise, he/she simply finds the triple $(K_i, kf_j, y_{i_j})$ and returns $y_{i_j}$ to $\mathcal{A}$.

- Regarding the second type of query: $\mathcal{A}$ first sends $W_i = (\tilde{w}_{i_1}, \dots, \tilde{w}_{i_k})$ and $\mathcal{B}(u)$ to the simulator, with the requirement that $W_i$ doesn't satisfy $KF_0^*$ or $KF_1^*$. To program each $tds_{0,j}$, $j \in [k]$, the simulator first checks whether $\tilde{w}_{i_j}$, $j \in [k]$, has been queried before. If not, he/she chooses $y_{i_j} \xleftarrow{\$} \mathbb{Z}_p$ and adds the triple $(\tilde{w}_{i_j}, g^{y_{i_j}}, y_{i_j})$ into $L$. In both ways, the simulator knows $y_{i_j}$ from $L$, and $H(\tilde{w}_{i_j}) = g^{y_{i_j}}$ since $W_i$ doesn't satisfy $KF_0^*$ or $KF_1^*$. Next, the simulator first chooses $s_u, r_j \xleftarrow{\$} \mathbb{Z}_p$, then computes
$$tds_{0,j} = g^{\alpha'} g^{a s_u} g^{a r_j} (g^\lambda)^{y_{i_j}} = g^\alpha g^{a s_u} g^{a r_j} (g^a H(\tilde{w}_{i_j}))^\lambda.$$
Note that $g^{\alpha'} = g^{\alpha'} \cdot g^{a^{q+1}} \cdot g^{-a^{q+1}} = g^\alpha \cdot g^{a\lambda}$, since $g^\lambda = g^{-a^q}$. Since the simulator knows $s_u$ and $r_j$, $j \in [k]$, he/she can easily compute the rest of the trapdoor for any set $\mathcal{B}(u)$. Finally, the simulator returns $tds$ to $\mathcal{A}$.

- Regarding the third type of query: $\mathcal{A}$ first sends $W_i = (\tilde{w}_{i_1}, \dots, \tilde{w}_{i_k})$ and $\mathcal{B}(u)$ to the simulator, with the requirement that $\mathcal{B}(u)$ doesn't satisfy $\beta^*$ and $W_i$ "satisfies" $KF_0^*$ or $KF_1^*$. The simulator first finds a vector $\vec{x} = (x_1, \dots, x_{n^*}) \in \mathbb{Z}_p^{n^*}$ such that $x_1 = -1$ and, for all $i$ where $\rho^*(i) \in \mathcal{B}(u)$, the product $\langle \vec{x} \cdot M^*_i \rangle = 0$. The simulator continues to pick $\zeta \xleftarrow{\$} \mathbb{Z}_p$ and implicitly defines the value $s_u$ as
$$s_u = \zeta + x_1 a^q + x_2 a^{q-1} + \dots + x_{n^*} a^{q-n^*+1}.$$
The simulator computes
$$d_{u0} = g^{\alpha'} g^{a\zeta} \prod_{i=2,\dots,n^*} (g^{a^{q+2-i}})^{x_i} = g^\alpha \cdot g^{a \cdot s_u}; \qquad d'_{u0} = g^\zeta \prod_{i=1,\dots,n^*} (g^{a^{q+1-i}})^{x_i} = g^{s_u}.$$
(The $i = 1$ factor $g^{a^{q+2}}$ cancels against $g^{a^{q+1}} \cdot g^a$ in $g^\alpha g^{a\zeta}$ because $x_1 = -1$, which is why the product for $d_{u0}$ starts at $i = 2$.) For $j \in \mathcal{B}(u)$ such that there is no $i \in [\ell^*]$ satisfying $\rho^*(i) = j$, the simulator knows the value $z_j$ and computes $h_j^{s_u} = (g^{s_u})^{z_j}$. For $j \in \mathcal{B}(u)$ such that there is an index $i \in [\ell^*]$ satisfying $\rho^*(i) = j$, the simulator computes
$$h_j^{s_u} = (g^{s_u})^{z_j} \cdot g^{(\zeta + x_1 a^q + \dots + x_{n^*} a^{q-n^*+1})\, \omega_i \sum_{k \in [n^*]} M^*_{i,k} t a^k}.$$
Note that the product $\langle \vec{x} \cdot M^*_i \rangle = 0$, thus the simulator doesn't need to know the unknown term of the form $g^{a^{q+1} t}$ to compute $h_j^{s_u}$; all other terms he/she knows from the assumption. The simulator simply sets $g^{s_u}$ and $\{h_j^{s_u}\}_{j \in \mathcal{B}_u}$ as $tds_0$ and $\{tds_i\}_{i \in \mathcal{B}_u}$, respectively. To program $\{tds_{0,j}, tds_{1,j}, \{tds_{2,j,\ell}\}_{\ell \in \mathcal{B}_u}\}_{j \in [k]}$, the simulator considers two cases:
1. $\tilde{w}_{i_j}$ "satisfies" $KF_0^*$ or $KF_1^*$, that is, there exists at least one triple $(i_j, b, b')$ such that $\tilde{w}_{i_j} = kf^*_{b,b'}$. The simulator checks whether $\tilde{w}_{i_j}$ has been queried before. If not, he/she chooses $y_{i_j} \xleftarrow{\$} \mathbb{Z}_p$ and adds the triple $(\tilde{w}_{i_j}, g^{-a} g^{y_{i_j}}, y_{i_j})$ into $L$. In both ways, the simulator knows $y_{i_j}$ from $L$, and $H(\tilde{w}_{i_j}) = g^{-a} g^{y_{i_j}}$. Next, the simulator chooses $r_j \xleftarrow{\$} \mathbb{Z}_p$, then computes
$$tds_{0,j} = g^\alpha g^{a s_u} g^{a r_j} (g^{-a^q})^{y_{i_j}} = g^\alpha g^{a s_u} g^{a r_j} (g^a H(\tilde{w}_{i_j}))^\lambda.$$
Note that $\lambda = -a^q$. With $r_j$ known, the simulator can easily compute $tds_{1,j}$ and $\{tds_{2,j,\ell}\}_{\ell \in \mathcal{B}_u}$.

2. $\tilde{w}_{i_j}$ doesn't "satisfy" $KF_0^*$ or $KF_1^*$. The simulator checks whether $\tilde{w}_{i_j}$ has been queried before. If not, he/she chooses $y_{i_j} \xleftarrow{\$} \mathbb{Z}_p$ and adds the triple $(\tilde{w}_{i_j}, g^{y_{i_j}}, y_{i_j})$ into $L$. In both ways, the simulator knows $y_{i_j}$ from $L$, and $H(\tilde{w}_{i_j}) = g^{y_{i_j}}$. Next, the simulator picks $\zeta_j \xleftarrow{\$} \mathbb{Z}_p$ and implicitly defines the value $r_j$ as
$$r_j = \zeta_j + x_1 a^q + x_2 a^{q-1} + \dots + x_{n^*} a^{q-n^*+1},$$
then similarly computes $g^\alpha g^{a r_j}, g^{r_j}, \{\tilde{h}_\ell^{r_j}\}_{\ell \in \mathcal{B}_u}$ as above (note that $r_j$ now plays the role of $s_u$). He/she then sets $g^{-r_j}, \{\tilde{h}_\ell^{-r_j}\}_{\ell \in \mathcal{B}_u}$ as $tds_{1,j}, \{tds_{2,j,\ell}\}_{\ell \in \mathcal{B}_u}$, respectively, and computes
$$tds_{0,j} = g^\alpha g^{a s_u} g^{-\alpha} g^{-a r_j} g^{\alpha'} (g^{-a^q})^{y_{i_j}} = g^\alpha g^{a s_u} g^{-a r_j} (g^a H(\tilde{w}_{i_j}))^\lambda.$$
Note that $g^\alpha = g^{\alpha'} g^{a^{q+1}}$ and $(g^a H(\tilde{w}_{i_j}))^\lambda = g^{-a^{q+1}} (g^{-a^q})^{y_{i_j}}$.

Finally, the simulator returns $tds$ to $\mathcal{A}$.

- Regarding the fourth and fifth types of query: this is straightforward, since the unknown value $g^\alpha$ only appears in $tds_{0,j}$ and $d_{u0}$; therefore the simulator can simply choose $s_u, \{r_j\}_{j \in [k]} \xleftarrow{\$} \mathbb{Z}_p$ and then compute $tds$, or choose $s_u \xleftarrow{\$} \mathbb{Z}_p$ and then compute $d_u$.

**Challenge:** The simulator picks a random bit $b$, computes $C_0^* = g^s$ and $(C_1^*, \dots, C_m^*) = I$, $(\tilde{C}_1^*, \dots, \tilde{C}_m^*) = J$, where
$$I = \big(g^{s(a+at)} g^{\sum_{i \in I_1} s z_{\rho^*(i)}}, \dots, g^{s(a+at)} g^{\sum_{i \in I_m} s z_{\rho^*(i)}}\big) = \Big(\big(g^a \prod_{i \in \beta_1^*} h_i\big)^s, \dots, \big(g^a \prod_{i \in \beta_m^*} h_i\big)^s\Big),$$
$$J = \big(g^{s(a+at)} g^{\sum_{i \in I_1} s \tilde{z}_{\rho^*(i)}}, \dots, g^{s(a+at)} g^{\sum_{i \in I_m} s \tilde{z}_{\rho^*(i)}}\big) = \Big(\big(g^a \prod_{i \in \beta_1^*} \tilde{h}_i\big)^s, \dots, \big(g^a \prod_{i \in \beta_m^*} \tilde{h}_i\big)^s\Big).$$
To compute $\{K_i^*\}_{i \in [m']}$, the simulator first checks whether $kf^*_{b,i}$ has been queried before. If not, he/she chooses $y_i \xleftarrow{\$} \mathbb{Z}_p$ and adds the triple $(kf^*_{b,i}, g^{-a} g^{y_i}, y_i)$ into $L$. Otherwise, he/she finds $(kf^*_{b,i}, g^{-a} g^{y_i}, y_i)$ in $L$. In both ways, the simulator knows $y_i$ from $L$, and $H(kf^*_{b,i}) = g^{-a} g^{y_i}$. He/she computes
$$X_i^* = T \cdot e(g^s, g^{\alpha'}) \cdot e(g^s, (g^\lambda)^{y_i}) = T \cdot e(g^s, g^{\alpha'}) \cdot e(g^\lambda, g^a H(kf^*_{b,i}))^s,$$
then computes $K_i^* = \tilde{H}(X_i^*, kf^*_{b,i})$. Finally, he/she outputs
$$ct'^* = (C_0^*, \{C_i^*\}_{i \in [m]}, \{\tilde{C}_i^*\}_{i \in [m]}, \{K_i^*\}_{i \in [m']}).$$
Note that if $T = e(g,g)^{a^{q+1} s}$ then $ct'^*$ is in valid form.

**Query phase 2.** Similar to phase 1.

**Guess:** $\mathcal{A}$ gives his guess $b'$ to $\mathcal{S}$; $\mathcal{S}$ outputs its guess $0$, corresponding to $T = e(g,g)^{a^{q+1}s}$, if $b' = b$; otherwise, $\mathcal{S}$ outputs its guess $1$, corresponding to $T$ being a random element.
When $T = e(g,g)^{a^{q+1}s}$, $\mathcal{S}$ gives a perfect simulation, so
$$\Pr\big[\mathcal{S}(\vec{Y}, T = e(g,g)^{a^{q+1}s}) = 0\big] = \frac{1}{2} + Adv_\mathcal{A}^{IS}.$$
When $T$ is a random element, $\{K_i^*\}_{i \in [m']}$ are completely hidden from $\mathcal{A}$, so $\Pr[\mathcal{S}(\vec{Y}, T = R) = 0] = \frac{1}{2}$. As a result, $\mathcal{S}$ is able to play the Modified-BDHE game with non-negligible advantage (equal to $Adv_\mathcal{A}^{IS}$), i.e., $\mathcal{S}$ is able to break the security of the Modified-BDHE assumption.

**Outsider Security.** Outsider security is similar to the case of insider security; due to space limitations we omit it here.

**5. COMPARISON**

Regarding PEKS schemes supporting both an expressive searching predicate and multi-writer/multi-reader, to our knowledge [6, 8, 9, 11, 14] are the best schemes to date. Among these, the authors in [9] proposed a PEKS scheme with constant-size ciphertext; however, their scheme only supports a limited AND-gates access policy. The authors in [6, 8, 11, 14] managed to transform existing KP-ABE or CP-ABE schemes into PEKS schemes; these schemes enjoy some interesting properties, such as fast keyword search, outsourced decryption, or partial trapdoor security. Other properties, such as fine-grained access policy, public key size, and ciphertext size, are similar to those of our scheme, but our scheme has a constant-size secret key while theirs do not; moreover, our model is different from their model. We give in Figure 1 the comparison between our model and the model in [6, 8, 11, 14].

We can easily see from Figure 1 that there is no TTG in our model; the user relies solely on his/her secret key to generate a trapdoor. In contrast, in their model, the TTG takes charge of generating trapdoors and therefore should always be online. Compared to their model, our model has the two following advantages:

- There is no TTG in our model; we therefore save the system resources and the communication overhead between user and TTG.
- In our model, the user uses his/her secret key to generate trapdoors, and the cloud server relies on such trapdoors to search for the corresponding ciphertexts; the user is therefore able to decrypt all the resulting ciphertexts. In contrast, in their model, the TTG takes charge of generating trapdoors and the user's secret key is not involved in this process; this leads to the fact that the resulting ciphertexts may contain ciphertexts which the user cannot decrypt. We argue that our model is more useful in practice, since it wastes time and communication overhead if the user receives ciphertexts which he/she cannot decrypt.

*Figure 1.* Our model on the left and their model on the right. (In our model, the data owner encrypts documents and the corresponding keywords for the cloud server, and the user sends a trapdoor directly to the server and receives the resulting ciphertexts; in their model, the user must first request a trapdoor from a Trusted Trapdoor Generator before querying the cloud server.)

We also note that the scheme in [15] deals well with the problem of trapdoor security; moreover, this scheme is very efficient in terms of both communication and computation. However, this scheme is of a different type than our scheme, since our scheme supports expressive searching while the scheme in [15] supports equality queries. Consider the example in [6]: $W = (w_1, w_2, w_3)$, where $w_1 =$ "Diabetes", $w_2 =$ "Age:30" and $w_3 =$ "Weight:150-200"; a user in our scheme can search for ciphertexts which have as keyword either "Diabetes" or "Weight:150-200".
Meanwhile, the user in the scheme in [15] must specify whether the keyword search is "Diabetes" or "Weight:150-200", and then receives only the ciphertexts corresponding to that keyword search. For example, if the keyword search is "Diabetes", the user only receives the ciphertexts corresponding to "Diabetes". This is similar to the difference between traditional public key encryption and attribute-based encryption.

**6. CONCLUSION**

In this paper, we propose a CP-ABSE scheme which supports both an expressive searching predicate and multi-writer/multi-reader. To our knowledge, our scheme has some interesting properties, such as a constant-size secret key, and it works in the non-interactive model. Our scheme becomes very efficient when the number of combinations of keywords for which a user would like to search is small. Our scheme is therefore very suitable for the large class of practical applications into which the aforementioned case falls.

**ACKNOWLEDGMENTS**

This research is funded by Vietnam National Foundation for Science and Technology Development (NAFOSTED) under grant number 102.01-2018.301.

**REFERENCES**

[1] M. R. Asghar, G. Russello, B. Crispo, and M. Ion, "Supporting complex queries and access policies for multi-user encrypted databases," in Proceedings of the 2013 ACM Workshop on Cloud Computing Security (CCSW '13), Berlin, Germany, November 4, 2013, pp. 77-88. DOI: 10.1145/2517488.2517492.

[2] D. Boneh, G. Di Crescenzo, R. Ostrovsky, and G. Persiano, "Public key encryption with keyword search," in Advances in Cryptology - EUROCRYPT 2004, Interlaken, Switzerland, May 2-6, 2004, pp. 506-522.

[3] D. Cash, S. Jarecki, C. S. Jutla, H. Krawczyk, M.-C. Rosu, and M. Steiner, "Highly-scalable searchable symmetric encryption with support for Boolean queries," in Advances in Cryptology - CRYPTO 2013, Part I, Santa Barbara, CA, USA, August 18-22, 2013.

[4] S. Canard and V. C. Trinh, "Constant-size ciphertext attribute-based encryption from multi-channel broadcast encryption," in Information Systems Security, 12th International Conference (ICISS 2016), Jaipur, India, December 16-20, 2016.

[5] H. Cui, R. Deng, J. Liu, and Y. Li, "Attribute-based encryption with expressive and authorized keyword search," in Information Security and Privacy, 22nd Australasian Conference (ACISP 2017), Part I, Auckland, New Zealand, July 3-5, 2017, pp. 106-126.

[6] H. Cui, Z. Wan, R. Deng, G. Wang, and Y. Li, "Efficient and expressive keyword search over encrypted data in the cloud," IEEE Transactions on Dependable and Secure Computing, vol. 15, no. 3, pp. 409-422, 2018. DOI: 10.1109/TDSC.2016.2599883.

[7] S. Hohenberger and B. Waters, "Attribute-based encryption with fast decryption," in Public-Key Cryptography - PKC 2013, Nara, Japan, February 26 - March 1, 2013, pp. 162-179.

[8] J. Ning, Z. Cao, X. Dong, K. Liang, H. Ma, and L. Wei, "Auditable σ-time outsourced attribute-based encryption for access control in cloud computing," IEEE Transactions on Information Forensics and Security, vol. 13, no. 1, pp. 94-105, 2018.
DOI: 10.1109/TIFS.2017.2738601.

[9] J. Han, Y. Yang, J. K. Liu, J. Li, K. Liang, and J. Shen, "Expressive attribute-based keyword search with constant-size ciphertext," Soft Computing, vol. 22, no. 15, pp. 5163-5177, August 2018.

[10] A. Kiayias, O. Oksuz, A. Russell, Q. Tang, and B. Wang, "Efficient encrypted keyword search for multi-user data sharing," in Computer Security - ESORICS 2016, Part I, Heraklion, Greece, September 26-30, 2016, pp. 173-195.

[11] J. Lai, X. Zhou, R. H. Deng, Y. Li, and K. Chen, "Expressive search on encrypted data," in Proceedings of the 8th ACM SIGSAC Symposium on Information, Computer and Communications Security (ASIA CCS '13), Hangzhou, China, May 8-10, 2013, pp. 243-251.

[12] Z. Lv, C. Hong, M. Zhang, and D. Feng, "Expressive and secure searchable encryption in the public key setting," in Information Security, 17th International Conference (ISC 2014), Hong Kong, China, October 12-14, 2014, pp. 364-376.

[13] Q. M. Malluhi, A. Shikfa, and V. C. Trinh, "A ciphertext-policy attribute-based encryption scheme with optimized ciphertext size and fast decryption," in Proceedings of the 2017 ACM Asia Conference on Computer and Communications Security (ASIA CCS '17), Abu Dhabi, United Arab Emirates, April 2-6, 2017, pp. 230-240.

[14] R. Meng, Y. Zhou, J. Ning, K. Liang, J. Han, and W. Susilo, "An efficient key-policy attribute-based searchable encryption in prime-order groups," in Provable Security, 11th International Conference (ProvSec 2017), Xi'an, China, October 23-25, 2017, pp. 39-56.

[15] Q. Huang and H. Li, "An efficient public-key searchable encryption scheme secure against inside keyword guessing attacks," Information Sciences, vol. 403-404, pp. 1-14, September 2017. DOI: 10.1016/j.ins.2017.03.038.

[16] D. X. Song, D. Wagner, and A. Perrig, "Practical techniques for searches on encrypted data," in Proceedings of the 2000 IEEE Symposium on Security and Privacy, Berkeley, CA, USA, May 14-17, 2000, pp. 44-55. DOI: 10.1109/SECPRI.2000.848445.

[17] S. Sun, J. K. Liu, A. Sakzad, R. Steinfeld, and T. H. Yuen, "An efficient non-interactive multi-client searchable encryption with support for Boolean queries," in Computer Security - ESORICS 2016, Part I, Heraklion, Greece, September 26-30, 2016, pp. 154-172.

[18] Y. Wang, J. Wang, S. Sun, J. Liu, W. Susilo, and X. Chen, "Towards multi-user searchable encryption supporting Boolean query and fast decryption," in Provable Security, 11th International Conference (ProvSec 2017), Xi'an, China, October 23-25, 2017, pp. 24-38.

[19] Y. Yang, H. Lu, and J. Weng, "Multi-user private keyword search for cloud computing," in 2011 IEEE Third International Conference on Cloud Computing Technology and Science, Athens, Greece, November 29 - December 1, 2011, pp. 264-271. DOI: 10.1109/CloudCom.2011.43.

Received on March 05, 2019. Revised on April 18, 2019.
{ "disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.15625/1813-9663/0/0/13667?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.15625/1813-9663/0/0/13667, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "GOLD", "url": "https://vjs.ac.vn/index.php/jcc/article/download/13667/383095" }
2019
[]
true
2019-08-06T00:00:00
[ { "paperId": "31f7bbf30a0692e87c0c7874a5b38a1d916dad92", "title": "An Efficient Key-Policy Attribute-Based Searchable Encryption in Prime-Order Groups" }, { "paperId": "243e6f59ebf485056ab0587909b1327e8f42663f", "title": "Towards Multi-user Searchable Encryption Supporting Boolean Query and Fast Decryption" }, { "paperId": "274a85d51bff300d0c75a6c8ac8a85f6ea2b79de", "title": "An efficient public-key searchable encryption scheme secure against inside keyword guessing attacks" }, { "paperId": "121858abcbf359cb9e6049c520bd48bb5f3cdf0c", "title": "Expressive attribute-based keyword search with constant-size ciphertext" }, { "paperId": "b4a28bf0a9dc659c260b5d2acc37dc3afa752e65", "title": "Attribute-Based Encryption with Expressive and Authorized Keyword Search" }, { "paperId": "1395e5687a9877b955e338ea54cc4867cb3b7fa6", "title": "A Ciphertext-Policy Attribute-based Encryption Scheme With Optimized Ciphertext Size And Fast Decryption" }, { "paperId": "f684fa1eed729ffd06c6e970844c3b9f7265a43f", "title": "Constant-Size Ciphertext Attribute-Based Encryption from Multi-channel Broadcast Encryption" }, { "paperId": "84e84bed92a5761e30f899720eca680f1a45352a", "title": "An Efficient Non-interactive Multi-client Searchable Encryption with Support for Boolean Queries" }, { "paperId": "d279baa32365b2f8b7a6b259d0ec208c0bceff0e", "title": "Efficient and Expressive Keyword Search Over Encrypted Data in Cloud" }, { "paperId": "254ccff069b29181b1696cdbfe06fb0886a996e1", "title": "Expressive and Secure Searchable Encryption in the Public Key Setting" }, { "paperId": "8cc288bb4fa70839bc33893b30575cbe22f995b0", "title": "Supporting complex queries and access policies for multi-user encrypted databases" }, { "paperId": "effd352328e57bc7b68e0802b7c91be386b86614", "title": "Highly-Scalable Searchable Symmetric Encryption with Support for Boolean Queries" }, { "paperId": "e1156302e833962e9de91d8bbb9401bcdf31e7dd", "title": "Expressive search on encrypted data" }, { "paperId": "fae557d8ba82575d1f7b8689080f13ff27df3f73", "title": "Attribute-Based Encryption with Fast Decryption" }, { "paperId": "590bcd5e8c4d566b1f71128e0f62af63eaf215cd", "title": "Multi-User Private Keyword Search for Cloud Computing" }, { "paperId": "afaa1636e44246ebf8f3c27ad32fca9a8ce5f0a1", "title": "Public Key Encryption with Keyword Search" }, { "paperId": "95216caad0c07d9fae76416676ffa9184a88b674", "title": "Practical techniques for searches on encrypted data" }, { "paperId": null, "title": "Auditable σ- Time Outsourced Attribute-Based Encryption for Access Control in Cloud Computing" }, { "paperId": "210463e45c96847bd12cac858446e413877b161d", "title": "Edinburgh Research Explorer Efficient Encrypted Keyword Search for Multi-user Data Sharing" } ]
17,602
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Engineering", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/02a29c5897b1fa71149d802ad7f082ae833fee5c
[ "Computer Science" ]
0.861416
ART: sub-logarithmic decentralized range query processing with probabilistic guarantees
02a29c5897b1fa71149d802ad7f082ae833fee5c
Distributed and parallel databases
[ { "authorId": "1718248", "name": "S. Sioutas" }, { "authorId": "1732298", "name": "P. Triantafillou" }, { "authorId": "1837376", "name": "G. Papaloukopoulos" }, { "authorId": "1710324", "name": "E. Sakkopoulos" }, { "authorId": "1702182", "name": "K. Tsichlas" }, { "authorId": "1796253", "name": "Y. Manolopoulos" } ]
{ "alternate_issns": null, "alternate_names": [ "Distributed and Parallel Databases", "Distrib parallel database", "Distrib Parallel Database" ], "alternate_urls": null, "id": "ceac6326-95ad-4e90-a522-b7ab052ea5a4", "issn": "0926-8782", "name": "Distributed and parallel databases", "type": "journal", "url": "https://link.springer.com/journal/10619" }
We focus on range query processing on large-scale, typically distributed infrastructures, such as clouds of thousands of nodes of shared-datacenters, of p2p distributed overlays, etc. In such distributed environments, efficient range query processing is the key for managing the distributed data sets per se, and for monitoring the infrastructure’s resources. We wish to develop an architecture that can support range queries in such large-scale decentralized environments and can scale in terms of the number of nodes as well as in terms of the data items stored. Of course, in the last few years there have been a number of solutions (mostly from researchers in the p2p domain) for designing such large-scale systems. However, these are inadequate for our purposes, since at the envisaged scales the classic logarithmic complexity (for point queries) is still too expensive while for range queries it is even more disappointing. In this paper we go one step further and achieve a sub-logarithmic complexity. We contribute the ART (Autonomous Range Tree) structure, which outperforms the most popular decentralized structures, including Chord (and some of its successors), BATON (and its successor) and Skip-Graphs. We contribute theoretical analysis, backed up by detailed experimental results, showing that the communication cost of query and update operations is $O(\log_{b}^{2} \log N)$ hops, where the base b is a double-exponentially power of two and N is the total number of nodes. Moreover, ART is a fully dynamic and fault-tolerant structure, which supports the join/leave node operations in O(loglogN) expected w.h.p. number of hops. Our experimental performance studies include a detailed performance comparison which showcases the improved performance, scalability, and robustness of ART.
## ART: Sub-Logarithmic Decentralized Range Query Processing with Probabilistic Guarantees

S. Sioutas¹, P. Triantafillou², G. Papaloukopoulos², E. Sakkopoulos², K. Tsichlas³, and Y. Manolopoulos³

1 Ionian University, Department of Informatics, sioutas@ionio.gr
2 CTI and Dept. of Computer Engineering & Informatics, University of Patras, (peter, papaloukg, sakkopul)@ceid.upatras.gr
3 Aristotle University of Thessaloniki, Department of Informatics, (tsichlas, manolopo)@csd.auth.gr

Abstract. We focus on range query processing on large-scale, typically distributed infrastructures, such as clouds of thousands of nodes of shared datacenters, of p2p distributed overlays, etc. In such distributed environments, efficient range query processing is the key to managing the distributed data sets per se, and to monitoring the infrastructure's resources. We wish to develop an architecture that can support range queries in such large-scale decentralized environments and can scale in terms of the number of nodes as well as in terms of the data items stored. Of course, in the last few years there have been a number of solutions (mostly from researchers in the p2p domain) for designing such large-scale systems. However, these are inadequate for our purposes, since at the envisaged scales the classic logarithmic complexity (for point queries) is still too expensive, while for range queries it is even more disappointing. In this paper⁴ we go one step further and achieve a sub-logarithmic complexity. We contribute the ART⁵ structure, which outperforms the most popular decentralized structures, including Chord (and some of its successors), BATON (and its successor) and Skip Graphs. We contribute theoretical analysis, backed up by detailed experimental results, showing that the communication cost of query and update operations is $O(\log_b^2 \log N)$ hops, where the base $b$ is a doubly-exponential power of two and $N$ is the total number of nodes. Moreover, ART is a fully dynamic and fault-tolerant structure, which supports the join/leave node operations in $O(\log \log N)$ expected w.h.p. number of hops. Our experimental performance studies include a detailed performance comparison which showcases the improved performance, scalability, and robustness of ART.

Keywords: Distributed Data Structures, P2P Data Management.

⁴ A limited and preliminary version of this work has been presented as a brief announcement at the Twenty-Ninth Annual ACM SIGACT-SIGOPS Symposium on Principles of Distributed Computing, Zurich, Switzerland, July 25-28, 2010 [28].
⁵ Autonomous Range Tree.

### 1 Introduction and Motivation

Decentralized range query processing is a notoriously difficult problem to solve efficiently and scalably in decentralized network infrastructures. It has been studied extensively in the last years, particularly in the realm of p2p, which is increasingly used for content delivery among users. There are many more real-life applications in which the problem also materializes. Consider the (popular nowadays) cloud infrastructures for content delivery. Monitoring of thousands of nodes, where thousands of different applications from different organizations execute, is an apparent requirement.
This monitoring process often requires support for range queries over this decentralized infrastructure: consider range queries that are issued in order to identify which of the cloud nodes are under-utilized (i.e., utilization < threshold), in order to assign to them more data and tasks and better exploit all available resources, increasing the revenues of the cloud infrastructure; or to identify overloaded nodes (load > threshold), in order to avoid bottlenecks in the cloud, which hurt overall performance and revenues. Each node in the cloud maintains a tuple with attributes utilization, OS, load, NodeId, etc. Collectively, these make up a relation, CloudNodes, and we wish to execute queries such as:

SELECT NodeId FROM CloudNodes WHERE low < utilization < high

or point and range queries, e.g.

SELECT NodeId FROM CloudNodes WHERE low < utilization < high AND OS = 'UNIX'

An acceptable solution for processing range queries in such large-scale decentralized environments must scale in terms of the number of nodes as well as in terms of the number of data items stored. The available solutions for architecting such large-scale systems are inadequate for our purposes, since at the envisaged scales (trillions of data items at millions of nodes) the classic logarithmic complexity (for point queries) offered by these solutions is still too expensive. And for range queries, it is even more disappointing. Further, all available solutions incur large overheads with respect to other critical operations, such as join/leave of nodes and insertion/deletion of items. Our aim with this work is to provide a solution that is comprehensive and outperforms related work with respect to all major operations, such as lookup, join/leave, insert/delete, and with respect to the routing state that must be maintained in order to support these operations. Specifically, we aim at achieving a sub-logarithmic complexity for all the above operations!

Peer-to-peer (P2P) systems have become very popular, in both academia and industry. They are widely used for sharing resources like music files. Searching for a given ID is a crucial operation in P2P systems, and there has been considerable recent work on devising effective distributed search (a.k.a. lookup) techniques. The proposed structures include a ring as in Chord [15], a multi-dimensional grid as in CAN [22], a multiple list as in Skip Graph [2, 10], or a tree as in PHT [24], BATON [13] and BATON* [14]. Most search structures (including all the ones just mentioned, except for BATON* and PHT) bound the search cost by a base-2 logarithm of the search space: for a system with $N$ peer nodes, the search cost is bounded by $O(\log N)$. Relative to tree-based indexes, a disadvantage of PHTs (Prefix Hash Trees) is that their complexity is expressed in terms of the logarithm of the domain size $D$, rather than the size of the data set $N$, and depends on the distribution over $D$-bit keys. BATON* is a multi-way search tree, which reduces the search cost to $O(\log_m N)$, where $m$ is the tree fanout. The penalty paid is a larger update cost, but no worse than linear in $m$. One of the distributed indexes with high fanout is the P-Tree [5], where each peer maintains a B$^+$-tree leaf and a path of virtual index nodes from the root to that specific leaf. Search is very effective, but updates are expensive, possibly requiring substantial synchronization effort. BATON* extends BATON by allowing a fanout of $m > 2$. Thus, the search cost becomes $O(\log_m N)$, as expected.
Moreover, the cost of updating routing tables is only $O(m \log_m N)$, as compared to $O(\log_2 N)$ in BATON, an improvement that is better than linear in $m$. Furthermore, BATON* has better fault-tolerance properties than BATON, and supports load balancing more efficiently. In fact, the system's fault tolerance, measured as the number of nodes that must fail before the network is partitioned, increases linearly with $m$. Similarly, the expected cost of load balancing decreases linearly with $m$.

Our Results: In this paper we present the ART structure, which outperforms the most popular decentralized structures, including Chord (and some of its successors), BATON and BATON*, and Skip Graphs. ART is an exponential-tree structure, which remains unchanged w.h.p., and organizes a number of fully-dynamic buckets of peers. We provide and analyze all relevant algorithms for accessing ART. We contribute theoretical analysis, backed up by detailed experimental results, showing that the communication cost of query and update operations is $O(\log_b^2 \log N)$ hops, where the base $b$ is a doubly-exponential power of two. Moreover, ART is a fully dynamic and fault-tolerant structure, which supports the join/leave node operations in $O(\log \log N)$ expected w.h.p. number of hops. Since ART is a tree-based system, our experimental performance studies include our development of BATON* (the best current tree-based system), and a detailed performance comparison which showcases the improved performance, scalability, and robustness of ART.

In Section 2 we present the key previous work more thoroughly. Section 3 describes the ART structure and analyzes its basic functionalities. Section 4 presents a thorough experimental evaluation; Section 5 presents some interesting heuristics and thresholds, whereas Section 6 concludes the paper.

### 2 Previous Work

Existing structured P2P systems can be classified into three categories: distributed hash table (DHT) based systems, skip-list based systems, and tree-based systems. There are several P2P DHT architectures, like Chord [15], CAN [22], Pastry [23], Tapestry [31], Kademlia [20] and Kelips [9]. Unfortunately, these systems cannot easily support range queries, since DHTs destroy data ordering. This means that they cannot support common queries such as "find all research papers published from 2004 to 2008". To support range queries, inefficient DHT variants have been proposed (for details see [8], [25], [1], [29]).

Skip-list based systems, such as Skip Graph [2, 10] and SkipNet [12], are based on the skip-list structure. To provide decentralization they use randomized techniques to create and maintain the structure. Moreover, they can support both exact-match queries and range queries by partitioning data into ranges of values. However, they cannot guarantee data locality (which hurts efficient range query processing) and load balancing in the system.

Tree-based systems also carry their own disadvantages. P-Grid [5] utilizes a binary prefix tree. It cannot guarantee the bound on search steps, since it cannot control the tree height. An arbitrary multi-way tree was proposed in [19], where each node maintains links to its parent, children, siblings and neighbors. It suffers from the same problem. P-Tree [5] utilizes a B$^+$-tree on top of the CHORD overlay network: peers are organized as a CHORD ring, each peer maintaining a data leaf and a left-most path from the root to that B$^+$-tree node.
This results in significant overhead in building and maintaining the consistency of the B$^+$-tree. In particular, a tree has to be built for each joining node, and, periodically, peers have to exchange their stored B$^+$-trees for consistency checking. BATON [13] utilizes a binary balanced tree and, as a consequence, it can control the tree height and avoid the problem of P-Grid. Nevertheless, similarly to other P2P systems, BATON's search cost is bounded by $O(\log_2 N)$. BATON* [14] is an overlay multi-way tree based on B-trees, with better searching performance. The penalty paid is a marginally larger update cost.

Systems like MAAN [4], Mercury [3] and DIM [18] support multi-attribute queries in a multi-dimensional space. BATON* can also effectively support queries over multiple attributes. In addition to supporting the use of multiple attributes in a single index, BATON* further introduces the notion of attribute classification, based on the importance of the attribute for querying, and the notion of attribute groups. In particular, BATON* relies on the construction of multiple independent indexes for groups of one or more attributes. For further details about the suggested techniques for partitioning attributes into such groups, see [14].

| P2P architecture | Lookup / Insert / Delete key | Maximum size of routing table | Join / Depart peer |
|---|---|---|---|
| CHORD | $O(\log N)$ | $O(\log N)$ | $O(\log N)$ w.h.p. |
| H-F-Chord(a) | $O(\log N / \log \log N)$ | $O(\log N)$ | $O(\log N)$ |
| LPRS-Chord | $O(\log N)$ | $O(\log N)$ | $O(\log N)$ |
| Skip Graphs | $O(\log N)$ | $O(1)$ | $O(\log N)$ amortized |
| BATON | $O(\log N)$ | $O(\log N)$ | $O(\log N)$ w.h.p. |
| BATON* | $O(\log_m N)$ | $O(m \log_m N)$ | $O(m \log_m N)$ |
| ART-tree | $O(\log_b^2 \log N)$ | $O(N^{1/4} / \log^c N)$ | $O(\log \log N)$ expected w.h.p. |

Table 1. Performance comparison between ART, Chord, BATON and Skip Graphs.

For comparison purposes, Table 1 presents a qualitative evaluation with respect to elementary operations between ART, Skip-Graphs, Chord and its newest variations (F-Chord(α) [26], LPRS-Chord [30]), BATON [13] and its newest variation BATON* [14]. It is noted that $c$ is a big positive constant.
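As a rough numeric illustration of the lookup column of Table 1, the following sketch compares growth rates only; constants and lower-order terms are ignored, and the values of $b$ and $m$ are example parameters we picked, so the printed numbers are not measured hop counts.

```python
# Growth-rate comparison of the asymptotic lookup costs from Table 1.
import math

for N in (10**4, 10**6, 10**8):
    chord = math.log2(N)                          # O(log N)
    baton_star = math.log(N, 16)                  # O(log_m N), m = 16
    art = math.log(math.log2(N), 16) ** 2         # O(log_b^2 log N), b = 16
    print(f"N={N:>9}:  Chord ~{chord:5.1f}   BATON* ~{baton_star:5.1f}   ART ~{art:4.2f}")
```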
Furthermore, each peer of the left-most spine is equipped with a table named Collection Index (CI), which stores pointers to the collections of peers presented at the same level (see pointers directed to collections of last level). Peers having the same father belong to the same collection. For example, in Figure 1, peers 8, 9, 10, and 11 constitute a certain collection. Lookup Algorithm Assume we are located at peer s (we mean the peer labeled by integer number s) and seek a key k. First, we find the range where k belongs in. Let say k [(j 1) ln n, j ln n 1]. The latter means that we have to search ∈ − − for peer j. The first step of our algorithm is to find the LRT level where the desired peer j is located. For this purpose, we exploit a nice arithmetic property of LRT. This property says that for each peer x located at the left-most spine of level i, the following formula holds: label(x) = label(father(x)) + b[2][i][−][2] (1) For example, peer 4 is located at level 2, thus 4 = father(4) + 2 or peer 8 is located at level 3, thus 8 = father(8)+4 or peer 24 (not depicted in the Figure 1) is located at level 4, thus 24 = father(24) + 16. The last equation is true since father(24) = 8. Thus, for each level i (in the next subsection we will prove that 0 i ≤ ≤ log log N ), we compute the label x of its left most peer by applying Equation (1). Then, we compare the label j with the computed label x. If j x, we continue ≥ by applying Equation (1), otherwise we stop the loop process with current value i. The latter means that peer j is located at the i-th level. So, first we follow the i-th pointer of the LSI table located at peer s so as to reach the leftmost peer x of level i. Then, we compute the collection in which the peer j belongs. Since the number of collections at level i equals the number of peers located at level (i 1), we divide the distance between j and x by the factor t(i 1). Let m (in − − � j−x+1 � particular m = t(i−1) ) be the result of this division. The latter means that we have to follow the (m + 1)-th pointer of the CI table so as to reach the desired collection. Since the collection indicated by the CI[m+1] pointer is organized in the same way at the next nesting level, we continue this process recursively. Analysis The degree of the peers at level i > 0 is d(i) = t(i), where t(i) indicates the number of peers at level i. It is defined that d(0)=b and t(0)=1. It is apparent that t(i) = t(i 1)d(i 1), and, thus, by putting together the − − various components, we can solve the recurrence and obtain d(i) = t(i) = b[2][i][−][1] for i 1. This double exponentially increasing fanout guarantees the following ≥ lemma: Lemma 1: The height (or the number of levels) of LRT is O(log logb N ) in the worst case. The size of the LSI table equals the number of levels of LRT. Moreover, the maximum size of the CI table appears at last level. It is apparent from the building of the LRT structure that at last level h, t(h) = O(N ). It holds that t(h) = b[2][h][−][1], thus b[2][h][−][1] = O(N ) or h−1 = O(loglogbN ) or h = O(loglogbN )+1. Since the number of collections at level h equals the number of peers located ----- Autonomous Range Tree 7 at level (h 1) we take t(h 1) = b[2][h][−][2] = b[2][(][O][(][loglogb N] [)+1)][−][2] or b[2][O][(][loglogb N] [)][−][1] = − − � b[2][O][(][loglogbN] [)][2][−][1] = b[2][O][(][loglogb N] [)][�][1][/][2] and the lemma 2 follows: √ Lemma 2: The maximum size of the CI and LSI tables is O( N ) and O(log log N ) in worst-case respectively. 
We need now to determine what will be the maximum number of nesting trees that can occur for N peers. Observe that the maximum number of peers with the same direct ancestor is d(h 1). Would it be possible for a second level − tree to have the same (or bigger) depth than the outermost one? This would imply that [�][h]j=0[−][1] [t][(][j][)][ < d][(][h][ −] [1).] As otherwise we would be able to fit all the d(h 1) peers within the first h 1 − − levels. But we need to remember that d(i) = t(i), thus d(h − 1) + j=0 [d][(][j][)][ <] [�][h][−][2] d(h 1). − This would imply that the number of peers in the first h 2 levels is negative, − clearly impossible. Thus, the second level tree will have depth strictly lower than the depth of the outermost tree. The innermost (let say j[th]) level of nesting (recursion) is characterized by having a tree in which no more than b nodes share the same direct ancestor, where b is a double-exponentially power of two (e.g. 2,4,16,...). In this case b = N [1][/b][j] and the lemma 3 follows: Lemma 3: The maximum number of possible nestings in LRT structure is O(logb log N ) in the worst case. At each peer we pay an extra processing cost by repeating the equation (1) O(log log N ) times at most in order to locate the desired LSI pointer. Then, we need O(1) hops for locating the left-most peer x of the desirable level. We must note here that the processing overhead compared to communication overhead is negligible, thus we can ignore the O(log log N ) processing factor at each peer. Finally we need O(1) hops for locating the desirable collection of peers via the CI[m+1] pointer. Since, the collection indicated by the CI[m+1] pointer is organized in the same way at a next nesting level, we continue the above process recursively. According to lemma 2 the maximum number of nesting levels is O(logb log N ), and the theorem follows: Theorem 1: Exact-match queries in the LRT structure require O(logb log N ) hops or lookup messages in the worst case. 3.2 Building ART Structure We define as cluster peer a bucket of Θ(polylog N [′]) ordered peers, where N [′] is the number of cluster peers. At initialization step we choose as cluster peers the 1st peer, the (ln n +1)-th peer, the (2 ln n + 1)-th peer and so on. This means that each cluster peer with label i[′] (where 1 i[′] N [′]) stores ordered peers with sorted keys belonging in ≤ ≤ the range [(i[′] 1) ln[2] n, . . ., i[′] ln[2] n 1], where N [′] = n/ ln[2] n is the number of − − cluster peers. ART stores cluster peers only, each of which is structured as an independent decentralized architecture. The backbone-structure of ART is exactly the same with LRT (see Figure 2). Moreover, instead of the Left-most Spine Index (LSI), which reduces the robustness of the whole system, we introduce the Random ----- 8 S. Sioutas et al. **RSI�** **Cluster_Peer 1�** **1�** **RSI�** **RSI�** **2-level LRT�** **2�** **3�** **Decentralized Architecture of�** **Peer_Node�1�,Peer_Node �2�,......,Peer_Node �lnn�** **RSI�** **RSI�** **RSI�** **RSI�** **4�.�** **5�.�** **6�** **7�.�** **Cluster_Peer i�** **8�** **9�** **10�** **11�** **12�** **13�** **14�.�** **15�** **i�** **8�** **12�** **9�** **10�** **13�** **14�** **i�** **11�** **15�** **Decentralized Architecture of�** **Peer_Node�(i-1)lnn+1�Peer_Node�(i-1)lnn+2�** **,......,Peer_Node�ilnn�** Fig. 2. The ART structure for b=2 Spine Index (RSI) routing table, which stores pointers to randomly chosen (and not to left-most) cluster peers (see in Figure 2 the pointers starting from peer 3). 
In addition, instead of using fat CI tables, we access the appropriate collection of cluster peers by using a 2-level LRT structure. The 2-level LRT is an LRT structure over log[2][c] Z buckets each of which organizes logZ[2][c] Z [collections in a] LRT manner, where Z is the number of collections at current level and c is a big positive constant (see Figure 3) Load Balancing We model the join/leave of peers inside a cluster peer as the combinatorial game of bins and balls presented in [16] and the lemma 4 follows: Lemma 4: Given a µ( ) random sequence of join/leave peer operations, the load of each cluster peer never becomes zero and never exceeds Θ(polylog N [′]) size in expected w.h.p. case. Routing Overhead ART stores cluster peers, each of which is structured as an independent decentralized architecture (be it BATON*, Chord, Skip-Graph, e.t.c.) (see Figure 2). Here, we will try to avoid the existence of CI routing tables, √ since these tables may become very large (O( N )) in the worst case as well as the occurrence of local hot spots in the left-most spine results in a less robust decentralized infrastructure. Thus, instead of the Left-most Spine Index (LSI), we introduce the Random Spine Index (RSI) routing table. The latter table stores pointers to the cluster peers of a random spine (for example, in Figure 2 the randomly chosen cluster peers 1, 2, 6 and 10 are pointed to by the RSI table of cluster peer 3). Furthermore, instead of CI tables, we can access the appropriate ----- Autonomous Range Tree 9 **2nd level�** **LRT structure of�Buckets�** **Bucket�1�** **Bucket�polylogz�** **1st level�** **in LRT manner�** **in LRT manner�** **C�1�** **C�z/polylogz�** **C�(�z+1-z/polylogz)�** **C�z�** **C�i� denotes the i-th collection�** Fig. 3. The 2-level LRT structure collection of cluster peers by using the 2-level LRT structure discussed above (see Figure 3). Since the larger number of collections is Z = O(N [1][/][2]) (it appears in the last level), the overhead of routing information is dominated by the second � Z level structures in each of which we have an O( log[2][c] Z [) =][ O][(][N][ 1][/][4][/][ log][c][ N] [)] routing overhead. Thus, Theorem 2 follows: Theorem 2: The overhead of routing information in ART is O(N [1][/][4]/ log[c] N ) in the worst case. Remark 1: If we use a k-level LRT structure, the routing information overhead becomes O(N [1][/][2][k] / log[c] N ) in the worst case. Lookup Algorithms Let us explain the lookup operations in ART. For example, in Figure 4 suppose we are located at cluster peer 3 and we are looking for two keys, which are located at cluster peers 19 and 119 respectively. The first step of our algorithm is to find the levels of the ART where the desired cluster peers (e.g. 19 and 119) are located. In our example, the fourth and fifth levels are the desired levels. By following the RSI[4] and RSI[5] pointers we reach the cluster peers 10 and 87 respectively. Now, we are starting from peers 10 and 87 to lookup the peers 19 and 119 respectively in the 2-level LRT structures of the collections in respective levels. 
Generally speaking, since the maximum number of nesting levels is O(logb log N ) and at each nesting level i we have to apply the standard LRT structure in N [1][/][2][i] collections, the whole searching process requires T1(N ) hops or lookup messages to locate the target cluster peer, where: T1(N ) = logb log N logb log N � logb log(N [1][/][2][i]) = logb( � log(N [1][/][2][i] )) (2) i=0 i=0 where logb log N � log(N [1][/][2][i] ) < (log N )[log][b][ log][ N] i=0 from which we get: T1(N ) < logb((log N )[log][b][ log][ N] ) = O(log[2]b [log][ N] [)] ----- 10 S. Sioutas et al. Then, we have to locate the target peer by searching the respective decentralized structure, requiring T2(N ) hops. Since each of the known decentralized architectures requires a logarithmic number of hops, the total process requires T (N ) = T1(N ) + T2(N ) = O(log[2]b [log][ N] [) hops or lookup messages and the theo-] rem follows. Theorem 3: Exact-match queries in the ART structure require O(log[2]b [log][ N] [)] hops or lookup messages. Having located the target peer for key kℓ and exploiting the order of keys on each node, range queries of the form [kℓ, kr] require an O(log[2]b [log][ N][ +][ |][A][|][)] complexity, where A is the number of node-peers between the peers responsible | | for kℓ, kr respectively. The theorem follows. Theorem 4: Range queries of the form [kℓ, kr] in the ART structure require an O(log[2]b [log][ N][ +][ |][A][|][) complexity, where][ |][A][|][ is the answer size.] **RSI�** **1�** **RSI�** **RSI�** **2�** **3�.�** **4�.�** **RSI�** **5�.�** **RSI�** **6�** **RSI�** **7�.�** **RSI�** **2-level LRT�** 8� 9� 10� 11� 12� 13� 14�.� 15� 16� 19� 20� 23� 24�.� 39�.� 72� **87�** **88�.�** 103�.� 104�.� **119�.�** **2-level LRT�** Fig. 4. An example of Lookup Steps via RSI[ ] tables and 2-level LRT structures Query Processing, Data Insertion and Data Deletion, Peer Join and Peer Departure In the following we briefly present the basic routines for query processing, data insertion and data deletion, peer join and peer departure. The Range Search(s, kℓ, kr) routine (Algorithm 1) gets as input the peer s in which the query is initiated and the respective range of keys [kℓ, kr] and returns as output the id of the cluster peer S, which contains peer s as well as the cluster peer W in which the key kℓ belongs. Then, it calls the basic ART Lookup(T, S, idS, W, idW ) routine, in order to locate the target peer responsible for key kℓ, and then, exploiting the order of keys on each peer performs ----- Autonomous Range Tree 11 Algorithm 1 Range Search(s,kℓ,kr,A) 1: Input: s, kℓ, kr (we are at peer s and we are looking for keys in range [kℓ, kr]) 2: Output: idW (the identifier of cluster-peer W, which stores kℓ key), A (the answer) 3: BEGIN 4: We compute idS:the identifier of Cluster peer S, which contains peer s; 5: We compute idW :let j be the identifier of target Cluster peer W, which stores kℓ key; 6: Let T the basic ART structure of cluster-peers; 7: W=ART Lookup(T, S, idS, W, idW ); {call of the basic routine} 8: A=Linear Scan of all Cluster peers located in and right to W until we find a key > kr; 9: END a right linear scan till it finds a key > kr. The ART Lookup(T, S, idS, W, idW ) routine (Algorithm 2) gets as input the cluster peer S (with identifier idS) in which the query is initiated and returns as output the id (idW ) of the cluster peer W in which the key kℓ belongs. T denotes the ART-tree structure. 
Moreover, Algorithm 2 requires O(log_b^2 log N) hops, according to the first part (T_1(N)) of Theorem 3. Obviously, the same complexity holds for insert/delete key operations (see Algorithms 3 and 4), since we have to locate the target peer into which the key must be inserted or from which it must be deleted.

For join (depart) peer operations (for details see Algorithm 5), we need O(log_b^2 log N) + T_join(N) (respectively, O(log_b^2 log N) + T_depart(N)) lookup messages, where T_join(N) (T_depart(N)) is the number of hops required by the respective decentralized structure for a peer join (peer departure). In the peer-join algorithm we assumed that the new peer is accompanied by a key, and this key designates the exact position in which the new peer must be inserted. If an empty peer u makes a join request at a particular peer v (which we call the entrance peer), then there is no need to get to a cluster peer different from the one to which u belongs. Similarly, the algorithm for the departure of a peer u assumes that the request for the departure of peer u can be made from any peer in the ART structure. This may not be desirable, and in many applications it is assumed that the choice for the departure of peer u can be made only from this peer. Of course, in this way the algorithm for peer departure is simplified, since there is no need to traverse the ART structure but only the cluster peer to which u belongs. In order to bound the size of each cluster peer, we assume that the probability of picking an entrance peer is equal among all existing peers, and that the probability of a peer departing is equal among all existing peers in the ART. Since the size of each cluster peer is bounded by polylog N in the expected case w.h.p., the following theorem is established:

Theorem 5: A peer join/departure can be carried out in O(log log N) hops or lookup messages.

Node Failure, Fault Tolerance, Network Restructuring and Load Balancing

Since we have modeled the join/leave of peers inside a cluster peer as the combinatorial game of bins and balls presented in [16], each cluster peer of an ART structure (according to Lemma 4) never exceeds a polylogarithmic number of peers and never becomes empty in the expected case with high probability. The latter means that the skeleton ART structure of cluster peers remains unchanged in the expected case with high probability, and within each cluster peer the algorithms for peer failure, network restructuring and load balancing follow the polylogarithmic-sized decentralized architecture we use.
Algorithm 2 ART Lookup(T, S, idS, W, idW)
1: Input: We are at cluster peer S with identifier idS
2: Output: We are looking for the cluster peer W with identifier idW
3: BEGIN
4: If (S is responsible for kℓ)
5:   Return S;
6: Else
7:   If j = 1 then i = 0;
8:   Else if j ∈ {2, 3, ..., b + 1} then i = 1;
9:   Else
10:    x = b + 2;
11:    For (i = 2; i < c1 log log_b N; ++i)
12:      x = father(x) + b^{2^{i-2}};
13:      If j < x then break();
14:  Follow the RSI^i pointer of cluster peer S;
15:  Let X be the corresponding cluster peer;
16:  Search for W in the 2-level LRT structure starting from X;
17:  Let Y be the first cluster peer of the corresponding collection;
18:  Let T′ be the ART structure of the above collection at the next level of nesting, with root the cluster peer Y;
19:  S = Y;
20:  ART Lookup(T′, S, idS, W, idW); {recursive call of the basic routine}
21: Return W;
22: END

Algorithm 3 ART insert(T, s, k)
1: Input: We are at peer s and we want to insert the key k
2: Output: The peer w in which k must be inserted
3: BEGIN
4: Compute idS: the identifier of cluster peer S, which contains peer s;
5: Compute idW: let j be the identifier of the target cluster peer W, which stores the key k;
6: ART Lookup(T, S, idS, W, idW);
7: Let W be the target cluster peer;
8: Search W for the peer w containing k;
9: If k does not exist in w, then insert k into it;
10: END

Algorithm 4 ART delete(T, s, k)
1: Input: We are at peer s and we want to delete the key k
2: Output: The peer w from which k must be deleted
3: BEGIN
4: Compute idS: the identifier of cluster peer S, which contains peer s;
5: Compute idW: let j be the identifier of the target cluster peer W, which stores the key k;
6: ART Lookup(T, S, idS, W, idW);
7: Let W be the target cluster peer;
8: Search W for the peer w containing k;
9: If k exists in w, then delete it;
10: END

Algorithm 5 ART join/leave peer(T, s, w)
1: Input: We are at peer s and we want to insert/delete the peer w
2: Output: The cluster peer W into/from which the peer w must be inserted/deleted
3: BEGIN
4: Compute idS: the identifier of cluster peer S, which contains peer s;
5: Compute idW: let j be the identifier of the target cluster peer W, which contains peer w;
6: ART Lookup(T, S, idS, W, idW); {call of the basic routine}
7: Let W be the target cluster peer;
8: Insert/delete w into/from W;
9: END

Multi-attribute Queries

As in [14], we divide the whole range of attributes into several sections: each section is used to index an attribute (if it appears frequently in queries) or a group of attributes (if these attributes rarely appear in queries). Since ART can only support queries over one-dimensional data, if we index a group of attributes we have to convert their values into one-dimensional values (by choosing a Hilbert space-filling curve or other similar methods). For example, if we have a system with 12 attributes a1, a2, ..., a12, in which only the 4 attributes a1 to a4 are frequently queried (i.e. they appear in 90% of all queries), we can build 4 separate indexes for them. The remaining attributes can be divided equally into two groups to index, four attributes in each group. This way, the number of replications can be significantly reduced from 12 down to 6; a sketch of this mapping follows.
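As a concrete illustration of the grouping idea, the following Java sketch interleaves the bits of two grouped attribute values into a single one-dimensional key. Bit interleaving yields a Z-order (Morton) curve, which we use here only for brevity; the paper suggests a Hilbert space-filling curve or similar methods, and the names below are ours.

```java
/** Sketch of the attribute-grouping idea; Z-order interleaving stands in for a
 *  Hilbert curve purely for brevity (the paper suggests Hilbert or similar). */
final class MultiAttributeMapper {
    /** Interleave the low 16 bits of two grouped attribute values into one
     *  32-bit one-dimensional key (Z-order / Morton code). */
    static int zOrder16(int a, int b) {
        int key = 0;
        for (int i = 0; i < 16; i++) {
            key |= (a >>> i & 1) << (2 * i);     // bits of a on even positions
            key |= (b >>> i & 1) << (2 * i + 1); // bits of b on odd positions
        }
        return key;
    }

    public static void main(String[] args) {
        // A grouped pair (a5, a6) maps to one key indexed by a single ART instance.
        System.out.println(Integer.toBinaryString(zOrder16(0b1010, 0b0110)));
    }
}
```

A range query over one attribute of a group then translates into a set of one-dimensional ranges on the curve, which is the usual price paid for collapsing a group into a single index.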
### 4 Evaluation

For evaluation purposes we used the distributed Java D-P2P-Sim simulator presented in [27]. The D-P2P-Sim simulator is extremely efficient, delivering ≈100,000 cluster peers on a single computer system, using a 32-bit JVM 1.6 and 1.5 GB RAM, with full D-P2P-Sim GUI support. When a 64-bit JVM 1.6 and 5 GB RAM are utilized, the D-P2P-Sim simulator delivers >500,000 cluster peers with full GUI support on a single computer system. When the D-P2P-Sim simulator runs in a distributed environment of multiple networked computer systems, it delivers a multiple of the former population of cluster peers with only 10% overhead.

Our experimental performance studies include a detailed performance comparison with BATON*, one of the state-of-the-art decentralized architectures. In particular, we implemented each cluster peer as a BATON* [14], the best known decentralized tree architecture. We tested the network with different numbers of peers, ranging up to 500,000. A number of data items equal to the network size multiplied by 2,000, drawn from the universe [1..1,000,000,000], are inserted into the network in batches. The synthetic data (numbers) from this universe were produced by the following distributions: beta, uniform and power-law. For each test, 1,000 exact-match queries and 1,000 range queries are executed, and the average costs of the operations are taken. Searched ranges are created randomly by taking the whole range of values divided by the total number of peers, multiplied by α, where α ∈ [1..10]. Note that in all experiments the default value of parameter b is 4. The source code of the whole evaluation process is publicly available at http://code.google.com/p/d-p2p-sim/.
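For concreteness, the following Java sketch reproduces the range-query generation just described (uniform distribution only; the beta and power-law generators, and the batched insertions, are omitted). The class and names are ours, not part of D-P2P-Sim.

```java
import java.util.Random;

/** Sketch of the synthetic range-query workload described above. */
final class WorkloadSketch {
    static final long UNIVERSE_MAX = 1_000_000_000L;

    /** Draw one random range: width = (universe / peers) * alpha, alpha in [1..10]. */
    static long[] randomRange(long numPeers, Random rnd) {
        long alpha = 1 + rnd.nextInt(10);
        long width = UNIVERSE_MAX / numPeers * alpha;
        long kl = 1 + (long) (rnd.nextDouble() * (UNIVERSE_MAX - width));
        return new long[] { kl, kl + width };   // the queried range [kl, kr]
    }

    public static void main(String[] args) {
        Random rnd = new Random(42);
        long[] q = randomRange(500_000, rnd);
        System.out.printf("range query: [%d, %d]%n", q[0], q[1]);
    }
}
```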
4.1 Single- and Multi-attribute Query Performance

Fig. 5. Cost of exact-match query (left) and cost of range query (right).

As proved previously, the whole query performance of ART is O(log_b^2 log N′), where the N′ cluster peers structure their internal peers according to the BATON* architecture. For the normal, beta and uniform distributions each cluster peer contains 0.75 log^2 N peers on average, and for power-law distributions each cluster peer contains 2.5 log^2 N peers on average. Thus, in the former case the average number of cluster peers is N′ = N/(0.75 log^2 N), whereas in the latter case the number of cluster peers becomes N′ = N/(2.5 log^2 N) on average. In all cases, ART outperforms BATON* by a wide margin. As depicted in Figure 5 (left), our method is almost 2 times faster, and as a consequence we have a 50% improvement. The results are analogous with respect to the cost of range queries, as depicted in Figure 5 (right).

Figure 6 (left) depicts the cost of updating routing tables. Since each cluster peer structures O(N/polylog N) (and not O(N)) peers according to the BATON* architecture, the results are as expected. We remark that BATON* requires m log_m N hops, whereas m log_m (polylog N) hops are required by ART. In particular, as depicted in Figure 6 (left), our method updates the routing tables 3 or 4 times faster.

Figure 6 (right) depicts the insertion cost in the multi-attribute case, where we have 6 separate indexes. BATON* requires 6 log N hops and ART requires 6 log_b^2 log(N/polylog N) + 6 log(polylog N) hops. We observe that the insertion cost of ART is the lowest for any distribution. Again, our method is almost 2 times faster. Finally, the results are analogous for multi-attribute exact-match and range queries, respectively (see Figure 7 (left) and Figure 7 (right)).

Fig. 6. Cost of updating routing tables (left) and cost of insertion (right).

Fig. 7. Cost of multi-attribute exact-match (left) and range queries (right).

4.2 Load Balancing

ART not only reduces the search cost but also achieves better load balancing. To verify this claim, we test the network with a variety of distributions and evaluate the cost of load balancing. For simplicity, in our system we assume that the query distribution follows the data distribution. As a result, the workload of a peer is determined only by the amount of data stored at that peer. In BATON*, when a peer joins the network, it is assigned a default upper and lower load limit by its parent. If the number of data items stored at the peer exceeds the upper bound, it is considered an overloaded peer, and vice versa. If a peer is overloaded and cannot find a lightly loaded leaf peer, it is likely that all other peers have the same workload; thus, it automatically increases the boundaries of its storage capability (a sketch of this check follows). In ART the overlay of cluster peers remains unaffected in the expected case with high probability when peers join or leave the network. Thus, load balancing is restricted to the inside of a cluster peer (which is a new BATON* structure), and as a result ART needs no more than 4 lookup messages (instead of the 1,000 messages needed by BATON* in the case of 500,000 nodes). For details see Figure 8 (left).

Fig. 8. Cost of load balancing (left) and search cost in case of massive failure (right).
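The load-limit mechanics described above can be summarized in a few lines; the following Java sketch is our own illustration of the upper/lower bound check and the boundary widening, not code from BATON* or D-P2P-Sim.

```java
/** Sketch of the BATON*-style load bounds described above (names ours). */
final class LoadBounds {
    private long lower, upper;   // limits assigned by the parent on join

    LoadBounds(long lower, long upper) { this.lower = lower; this.upper = upper; }

    boolean overloaded(long storedItems)  { return storedItems > upper; }
    boolean underloaded(long storedItems) { return storedItems < lower; }

    /** If every peer is similarly loaded, each peer simply widens its bounds. */
    void widen(double factor) {
        lower = (long) (lower / factor);
        upper = (long) (upper * factor);
    }
}
```

In ART this logic only ever runs inside one cluster peer, which is why the message count stays a small constant while BATON* may have to rebalance across the whole overlay.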
4.3 Fault Tolerance

To evaluate the system's fault tolerance in the case of massive failure, we initialized the system with 10,000 peers. In the sequel, we let peers randomly fail step by step without recovering. At each step, we check whether the network is partitioned or not. With massive peer failures, we face a massive destruction of the links connected to the failed peers. Since the search process has to bypass these peers, the search query has to be forwarded back and forth several times to find a way to the destination, and as a result the search cost is expected to increase substantially. Since the backbone of the ART structure remains unaffected w.h.p., meaning that there is always a peer to play the role of cluster representative, the search cost is restricted to the inside of a cluster peer (which is a BATON* structure), and as a result ART needs no more than 32 lookup messages (instead of the 180 messages needed by BATON* in the case of 6,000 nodes). Figure 8 (right) illustrates this effect.

### 5 Trade-offs and Heuristics

If each collection of cluster peers is organized individually as a BATON* structure (not the whole level of collections), then we can climb up the ART structure until we reach the nearest common ancestor of the cluster peer we are located in and the cluster peer we are searching for. Then a downwards traversal is initiated to reach this cluster peer. Since each collection of the i-th level is organized according to BATON*, we can decide in O(log_m n^{1/2^i}) hops which child we must follow for further searching. As a result, the total time becomes O(log_m n) and no improvement has been achieved.

In our solution, if we parameterize the size of the buckets (depicted in Figure 3) from O(log^{2c} N) to O(log^{2f(N)} N), where f(N) is a function of the network size, then we get an interesting trade-off between the routing data overhead and the number of hops per operation. In particular, if Z is the number of collections at the current level, then each bucket contains O(Z/log^{2f(N)} N) collections. Thus, the first LRT layer organizes O(log^{2f(N)} N) bucket representatives and each second LRT layer organizes O(Z/log^{2f(N)} N) collections. In this case, the routing overhead is dominated by the second-layer LRTs and becomes O(N^{1/4}/log^{f(N)} N). To achieve an optimal routing data overhead we would like the following: O(N^{1/4}/log^{f(N)} N) = O(1) ⇔ f(N) = O(log N). In this case the first LRT layer contains O(log^{2f(N)} N), i.e. O(log^{2 log N} N), bucket representative nodes. Therefore, a lookup operation in the first layer requires O(log log(log^{2 log N} N)), i.e. ω(log log N), hops. Each of the second-layer LRTs contains O(Z/log^{2f(N)} N) collection representative nodes, where Z is the number of collections at the current level. Therefore, the number of hops required by a lookup operation in the second layer is O(log log N). So, the total time becomes ω(log log N) and the sub-logarithmic complexity is not guaranteed. As a result, if we want an optimal routing overhead, we cannot guarantee sub-logarithmic complexity. If we relax the routing overhead to be of polynomial size, then we can achieve this. In our solution the routing data overhead, O(N^{1/4}/log^c N), is a polynomial function. However, in reality, even for an extremely large number of peers, N = 1,000,000,000, the routing data overhead is 6 for c = 1, which is less than the fanout of BATON* (m = 10) that we used to run our experiments. The latter demonstrates the significance of our result.
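As a quick sanity check of that constant (our arithmetic, assuming base-2 logarithms):

```latex
% For N = 10^9 and c = 1:
N^{1/4} = (10^9)^{1/4} = 10^{2.25} \approx 177.8, \qquad
\log_2 N = 9 \log_2 10 \approx 29.9, \qquad
\frac{N^{1/4}}{\log_2 N} \approx \frac{177.8}{29.9} \approx 5.9 \approx 6.
```

This matches the claimed overhead of 6 routing entries per peer at a billion-node scale.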
### 6 Conclusions

We presented a new efficient decentralized infrastructure for range query processing with probabilistic guarantees, the ART structure. Theoretical analysis showed that the communication cost of query, update and join/leave node operations scales sub-logarithmically in the expected case w.h.p. An experimental performance comparison with BATON*, the state-of-the-art decentralized structure, showed the improved performance, scalability and efficiency of our new method. Finally, we believe that ART will enable general-purpose decentralized trees to support a wider class of queries, and thus broaden the horizon of their applicability.

### References

1. Andrzejak, A. and Xu, Z.: "Scalable, Efficient Range Queries for Grid Information Services", Proceedings 2nd International Conference on Peer-to-Peer Computing (P2P), pp. 33-40, Linkoping, Sweden, 2002.
2. Aspnes, J. and Shah, G.: "Skip Graphs", Proceedings 14th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pp. 384-393, Baltimore, MD, 2003.
3. Bharambe, A.R., Agrawal, M. and Seshan, S.: "Mercury: Supporting Scalable Multi-attribute Range Queries", Proceedings ACM Conference on Applications, Technologies, Architectures, and Protocols for Computer Communication (SIGCOMM), pp. 353-366, Portland, OR, 2004.
4. Cai, M., Frank, M., Chen, J. and Szekely, P.: "MAAN: a Multi-attribute Addressable Network for Grid Information Services", Proceedings 4th International Workshop on Grid Computing (GRID), pp. 184-191, Phoenix, AZ, 2003.
5. Crainiceanu, A., Linga, P., Gehrke, J. and Shanmugasundaram, J.: "Querying Peer-to-peer Networks Using P-Trees", Proceedings 7th International Workshop on Web and Databases (WebDB), pp. 25-30, Paris, France, 2004.
6. Carzaniga, A., Di Nitto, E., Rosenblum, D. and Wolf, A.L.: "Issues in Supporting Event-based Architectural Styles", Proceedings 3rd International Software Architecture Workshop, 1998.
7. Carzaniga, A., Rosenblum, D.S. and Wolf, A.L.: "Design and Evaluation of a Wide-area Event Notification Service", ACM Transactions on Computer Systems, Vol. 19, No. 3, pp. 332-383, 2001.
8. Gupta, A., Agrawal, D. and El Abbadi, A.: "Approximate Range Selection Queries in Peer-to-peer Systems", Proceedings 1st Biennial Conference on Innovative Data Systems Research (CIDR), Asilomar, CA, 2003.
9. Gupta, I., Birman, K., Linga, P., Demers, A. and van Renesse, R.: "Kelips: Building an Efficient and Stable P2P DHT through Increased Memory and Background Overhead", Proceedings 2nd International Workshop on Peer-to-Peer Systems (IPTPS), Berkeley, CA, 2003.
10. Goodrich, M.T., Nelson, M.J. and Sun, J.Z.: "The Rainbow Skip Graph: a Fault-Tolerant Constant-Degree Distributed Data Structure", Proceedings 17th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pp. 384-393, Miami, FL, 2006.
11. Gao, J. and Steenkiste, P.: "An Adaptive Protocol for Efficient Support of Range Queries in DHT-based Systems", Proceedings 12th IEEE International Conference on Network Protocols (ICNP), pp. 239-250, Berlin, Germany, 2004.
12. Harvey, N.J.A., Jones, M.B., Saroiu, S., Theimer, M. and Wolman, A.: "SkipNet: a Scalable Overlay Network with Practical Locality Properties", Proceedings USENIX Symposium on Internet Technologies and Systems, Seattle, WA, 2003.
13. Jagadish, H.V., Ooi, B.C. and Vu, Q.H.: "BATON: a Balanced Tree Structure for Peer-to-peer Networks", Proceedings 31st International Conference on Very Large Data Bases (VLDB), pp. 661-672, Trondheim, Norway, 2005.
14. Jagadish, H.V., Ooi, B.C., Tan, K.L., Vu, Q.H. and Zhang, R.: "Speeding up Search in P2P Networks with a Multi-way Tree Structure", Proceedings ACM International Conference on Management of Data (SIGMOD), pp. 1-12, Chicago, IL, 2006.
15. Karger, D., Kaashoek, F., Stoica, I., Morris, R. and Balakrishnan, H.: "Chord: a Scalable Peer-to-peer Lookup Service for Internet Applications", Proceedings ACM Conference on Applications, Technologies, Architectures, and Protocols for Computer Communication (SIGCOMM), pp. 149-160, San Diego, CA, 2001.
16. Kaporis, A., Makris, Ch., Sioutas, S., Tsakalidis, A., Tsichlas, K. and Zaroliagis, Ch.: "Improved Bounds for Finger Search on a RAM", Proceedings 11th Annual European Symposium on Algorithms (ESA), pp. 325-336, Budapest, Hungary, 2003.
17. Kaporis, A., Makris, Ch., Sioutas, S., Tsakalidis, A., Tsichlas, K. and Zaroliagis, Ch.: "Dynamic Interpolation Search Revisited", Proceedings 33rd International Colloquium on Automata, Languages and Programming (ICALP), Part I, pp. 382-394, Venice, Italy, 2006.
18. Li, X., Kim, Y.J., Govindan, R. and Hong, W.: "Multi-dimensional Range Queries in Sensor Networks", Proceedings 1st International Conference on Embedded Networked Sensor Systems (SenSys), pp. 63-75, Los Angeles, CA, 2003.
19. Liau, C.Y., Ng, W.S., Shu, Y., Tan, K.L. and Bressan, S.: "Efficient Range Queries and Fast Lookup Services for Scalable P2P Networks", Proceedings 2nd International Workshop on Databases, Information Systems, and Peer-to-Peer Computing (DBISP2P), pp. 93-106, Toronto, Canada, 2004.
20. Maymounkov, P. and Mazieres, D.: "Kademlia: a Peer-to-peer Information System Based on the XOR Metric", Proceedings 1st International Workshop on Peer-to-Peer Systems (IPTPS), pp. 53-65, Cambridge, MA, 2002.
21. Naicken, S., Livingston, B., Basu, A., Rodhetbhai, S., Wakeman, I. and Chalmers, D.: "The State of Peer-to-peer Simulators and Simulations", SIGCOMM Computer Communication Review, Vol. 37, No. 2, pp. 95-98, 2007.
22. Ratnasamy, S., Francis, P., Handley, M., Karp, R. and Shenker, S.: "A Scalable Content-addressable Network", Proceedings ACM Conference on Applications, Technologies, Architectures, and Protocols for Computer Communication (SIGCOMM), pp. 161-172, San Diego, CA, 2001.
23. Rowstron, A. and Druschel, P.: "Pastry: a Scalable, Decentralized Object Location and Routing for Large-scale Peer-to-peer Systems", Proceedings IFIP/ACM International Conference on Distributed Systems Platforms (MIDDLEWARE), pp. 329-350, Heidelberg, Germany, 2001.
24. Ramabhadran, S., Ratnasamy, S., Hellerstein, J.M. and Shenker, S.: "Prefix Hash Tree" (brief announcement), Proceedings 23rd Annual ACM Symposium on Principles of Distributed Computing (PODC), p. 368, Newfoundland, Canada, 2004.
25. Sahin, O.D., Gupta, A., Agrawal, D. and El Abbadi, A.: "A Peer-to-peer Framework for Caching Range Queries", Proceedings 20th International Conference on Data Engineering (ICDE), pp. 165-176, Boston, MA, 2004.
26. Stoica, I., Morris, R., Liben-Nowell, D., Karger, D.R., Kaashoek, M.F., Dabek, F. and Balakrishnan, H.: "Chord: a Scalable Peer-to-peer Lookup Protocol for Internet Applications", IEEE/ACM Transactions on Networking, Vol. 11, No. 1, pp. 17-32, 2003.
27. Sioutas, S., Papaloukopoulos, G., Sakkopoulos, E., Tsichlas, K. and Manolopoulos, Y.: "A Novel Distributed P2P Simulator Architecture: D-P2P-Sim", Proceedings ACM CIKM, pp. 2069-2070, 2009.
28. Sioutas, S., Papaloukopoulos, G., Sakkopoulos, E., Tsichlas, K. and Manolopoulos, Y.: "Brief Announcement: ART: Sub-logarithmic Decentralized Range Query Processing with Probabilistic Guarantees", Proceedings ACM PODC, pp. 118-119, 2010.
29. Triantafillou, P. and Pitoura, T.: "Towards a Unifying Framework for Complex Query Processing over Structured Peer-to-Peer Data Networks", Proceedings VLDB 2003 Workshop on Databases, Information Systems, and Peer-to-Peer Computing, 2003.
30. Zhang, H., Goel, A. and Govindan, R.: "Incrementally Improving Lookup Latency in Distributed Hash Table Systems", Proceedings ACM SIGMETRICS, pp. 114-125, San Diego, CA, 2003.
31. Zhao, B.Y., Huang, L., Stribling, J., Rhea, S.C., Joseph, A.D. and Kubiatowicz, J.D.: "Tapestry: a Resilient Global-scale Overlay for Service Deployment", IEEE Journal on Selected Areas in Communications, Vol. 22, No. 1, pp. 41-53, 2004.
{ "disclaimer": "Notice: Paper or abstract available at https://arxiv.org/abs/1201.2766, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "GREEN", "url": "https://arxiv.org/pdf/1201.2766" }
2,010
[ "JournalArticle", "Book" ]
true
2010-07-25T00:00:00
[ { "paperId": "5d8384c21ab3a1107feec9c1d5748131d89bbda4", "title": "ART: sub-logarithmic decentralized range query processing with probabilistic guarantees" }, { "paperId": "1a675bf983f18f6153d72deca0c06fb15ccda100", "title": "A novel distributed P2P simulator architecture: D-P2P-sim" }, { "paperId": "23991c517cc13bcce1dd714830578a6e03646f33", "title": "Join queries in P2P DHT Systems" }, { "paperId": "5f05bdecda952e862bab0231c8566b0104adf739", "title": "The state of peer-to-peer simulators and simulations" }, { "paperId": "8683b86a7a640c26b0c68edbdadf1de45447a74e", "title": "Dynamic Interpolation Search Revisited" }, { "paperId": "aa80c6ceb4ed86306c6c985c242dc6bac2fee60b", "title": "Speeding up search in peer-to-peer networks with a multi-way tree structure" }, { "paperId": "5831f1722e9513c1b79e52b0d15cf8d35c11364d", "title": "The rainbow skip graph: a fault-tolerant constant-degree distributed data structure" }, { "paperId": "d257d97ac43c5cd1e2a7643bfaf039cb25b34e01", "title": "BATON: A Balanced Tree Structure for Peer-to-Peer Networks" }, { "paperId": "8b76104fa372c664cf5210d4dae1c59066b6b81f", "title": "An adaptive protocol for efficient support of range queries in DHT-based systems" }, { "paperId": "4e29d549fb237c33f3035b46bdbd471da993f125", "title": "Mercury: supporting scalable multi-attribute range queries" }, { "paperId": "d83253377c0ddc9e4aaa1181a825d15e98a5a3ce", "title": "Efficient Range Queries and Fast Lookup Services for Scalable P2P Networks" }, { "paperId": "27521785235e785f9b48d8224e075b9037b4f1af", "title": "Brief announcement: prefix hash tree" }, { "paperId": "f2c4786567ebb91169753e71c015898e14936e48", "title": "Querying peer-to-peer networks using P-trees" }, { "paperId": "971ca0fce11064aa3808c0e78b6bff22c1b6e0f4", "title": "A peer-to-peer framework for caching range queries" }, { "paperId": "b851141730342a9613adc38e92a38fd1cb42f623", "title": "Tapestry: a resilient global-scale overlay for service deployment" }, { "paperId": "c81c79b64f2b185a330132586a6fcf8c45f8a9a6", "title": "MAAN: A Multi-Attribute Addressable Network for Grid Information Services" }, { "paperId": "0c3e07c5ef9a4abb64ac152e64efaca63b0fd788", "title": "Multi-dimensional range queries in sensor networks" }, { "paperId": "1a7a9fd1a17db14ce591cc633ae3caef50a7abcd", "title": "Improved Bounds for Finger Search on a RAM" }, { "paperId": "91ca8c26f621a3fc04ed45f68ade474d43b6e3fd", "title": "Towards a Unifying Framework for Complex Query Processing over Structured Peer-to-Peer Data Networks" }, { "paperId": "1571ee0f19ff9931490c9e6720f45345a268f7cf", "title": "Incrementally improving lookup latency in distributed hash table systems" }, { "paperId": "684d6cc62cee0e5ef9dc988691848104e4a5c21e", "title": "Kelips: Building an Efficient and Stable P2P DHT through Increased Memory and Background Overhead" }, { "paperId": "2ebe1ffec53e63c2799cba961503f0a6abafccd3", "title": "Skip graphs" }, { "paperId": "dbd62aed6beede2d157640fec3d1ec24302c0360", "title": "Scalable, efficient range queries for grid information services" }, { "paperId": "eb51cb223fb17995085af86ac70f765077720504", "title": "Kademlia: A Peer-to-Peer Information System Based on the XOR Metric" }, { "paperId": "cf025469b2d7e4b37c7f2d2bf0d46c6776f48fd4", "title": "Pastry: Scalable, Decentralized Object Location, and Routing for Large-Scale Peer-to-Peer Systems" }, { "paperId": "680ba806b8a651e8cb2e2d64a9d6bc2325a8eea1", "title": "A scalable content-addressable network" }, { "paperId": "f03db79dc2922af3ec712592c8a3f69182ec5d65", "title": "Chord: A 
scalable peer-to-peer lookup service for internet applications" }, { "paperId": "30e1983f9b8227760452c714ee6ad62a913d6236", "title": "Design and evaluation of a wide-area event notification service" }, { "paperId": "a65586b3b7c7dbf5f620444ecc8962c563b7d7f7", "title": "Issues in supporting event-based architectural styles" }, { "paperId": "8dbcb2ffbd99b7255eaa32abc615cb0b5275469b", "title": "A scalable peer-to-peer lookup protocol for Internet applications" }, { "paperId": "44e4c963e379448ab4a04b172b9ec659ed08096b", "title": "A Review of Dimension Reduction Techniques" }, { "paperId": "f0574a77de9b6be9b3bd6d75dd4f21e2ce1113cb", "title": "P2p Networking And Applications" }, { "paperId": "dc9ced0da655ff64a60ba03bf31d0eebf7e52a03", "title": "Querying the Internet with PIER" }, { "paperId": "b678e1f35d01bd3cf2c9aa02b359f1f177547b4d", "title": "Approximate Range Selection Queries in Peer-to-Peer Systems" }, { "paperId": "8d60ae4c2df409e4ab1d7e518b39e4d91dc6c6a7", "title": "Proceedings of Usits '03: 4th Usenix Symposium on Internet Technologies and Systems Symphony: Distributed Hashing in a Small World" }, { "paperId": null, "title": "Proceedings of the 2nd International Workshop on Peerto-Peer Systems (IPTPS '03)" }, { "paperId": null, "title": "prefix hash tree”, Proceedings of the twenty-third annual ACM symposium on Autonomous Range Tree 19 Principles of distributed computing table of contents (Brief announcement), PP.368368" } ]
14,699
en
[ { "category": "Environmental Science", "source": "s2-fos-model" }, { "category": "Economics", "source": "s2-fos-model" }, { "category": "Business", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/02a3f37b0735b61801d2b28babacf7aec3ff83b6
[]
0.862854
Blockchain Technology as a Catalyst for Sustainable Development: Exploring Economic, Social, and Environmental Synergies
02a3f37b0735b61801d2b28babacf7aec3ff83b6
Academic Journal of Interdisciplinary Studies
[ { "authorId": "2151832242", "name": "Marsela Thanasi Boçe" }, { "authorId": "2290624379", "name": "Julian Hoxha" } ]
{ "alternate_issns": [ "2281-3993" ], "alternate_names": [ "Acad J Interdiscip Stud" ], "alternate_urls": [ "https://www.mcser.org/journal/index.php/ajis/article/view/1450" ], "id": "6c2c7db3-50d7-4c44-9697-2b6a1224d38b", "issn": "2281-4612", "name": "Academic Journal of Interdisciplinary Studies", "type": "journal", "url": "https://www.mcser.org/journal/index.php/ajis" }
This paper explores the transformative potential of blockchain technology (BT) as a catalyst for sustainable development, addressing the tri-fold aspects of environmental, economic, and social sustainability. Through a comprehensive review and theoretical framework, it delves into how BT can significantly contribute to achieving the United Nations Sustainable Development Goals (SDGs). The study highlights BT's role in enhancing transparency, ensuring product traceability, and promoting resource efficiency, thereby facilitating a more equitable economic growth and environmental stewardship. By examining various applications of BT across industries including supply chain management, renewable energy, and conservation efforts, the paper illustrates BT's capability to reduce carbon emissions, improve resource allocation, and support sustainable business practices. Furthermore, it identifies challenges such as scalability, energy consumption, and regulatory hurdles, proposing strategic recommendations for overcoming these obstacles. The research emphasizes the need for collaborative efforts among stakeholders, including policymakers, practitioners, and researchers, to leverage BT effectively for sustainable development. It contributes to both theoretical understanding and practical implementation of blockchain as a powerful enabler for sustainability, offering insights for future research and policy-making in this evolving domain.   Received: 1 December 2023 / Accepted: 19 February 2024 / Published: 5 March 2024
**_ISSN 2281-3993_** **_www.richtmann.org_** **_March 2024_**

**Research Article**

© 2024 Marsela Thanasi Boçe and Julian Hoxha. This is an open access article licensed under the Creative Commons Attribution-NonCommercial 4.0 International License (https://creativecommons.org/licenses/by-nc/4.0/)

Received: 1 December 2023 / Accepted: 19 February 2024 / Published: 5 March 2024

# Blockchain Technology as a Catalyst for Sustainable Development: Exploring Economic, Social, and Environmental Synergies

**Marsela Thanasi Boçe[1]** **Julian Hoxha[2]**

_1 College of Business Administration, American University of the Middle East, Kuwait_
_2 College of Engineering, American University of the Middle East, Kuwait_

**DOI: https://doi.org/10.36941/ajis-2024-0041**

**_Abstract_**

_This paper explores the transformative potential of blockchain technology (BT) as a catalyst for sustainable development, addressing the tri-fold aspects of environmental, economic, and social sustainability. Through a comprehensive review and theoretical framework, it delves into how BT can significantly contribute to achieving the United Nations Sustainable Development Goals (SDGs). The study highlights BT's role in enhancing transparency, ensuring product traceability, and promoting resource efficiency, thereby facilitating a more equitable economic growth and environmental stewardship. By examining various applications of BT across industries including supply chain management, renewable energy, and conservation efforts, the paper illustrates BT's capability to reduce carbon emissions, improve resource allocation, and support sustainable business practices. Furthermore, it identifies challenges such as scalability, energy consumption, and regulatory hurdles, proposing strategic recommendations for overcoming these obstacles. The research emphasizes the need for collaborative efforts among stakeholders, including policymakers, practitioners, and researchers, to leverage BT effectively for sustainable development. It contributes to both theoretical understanding and practical implementation of blockchain as a powerful enabler for sustainability, offering insights for future research and policy-making in this evolving domain._

**_Keywords: Blockchain technology (BT), Environmental, Economic, Social, Sustainability_**

**1.** **Introduction**

Sustainable development, a pressing global challenge, has gained significant attention from researchers and policymakers (Liu et al., 2011). In the business context, sustainability is increasingly crucial for survival due to rising regulatory pressures and evolving production practices (Hahn and Figge, 2018). Sustainability, as defined by Clark (2007), involves responsible behavior towards the environment, society, and future generations. Corporate sustainability is viewed as meeting the needs of various stakeholders, shareholders, employees, customers, regulatory bodies, and society at large, without compromising the ability of future stakeholders to meet their own needs (Mani et al., 2016). Sustainable development efforts currently focus on balancing environmental, social, and economic objectives to meet present needs without compromising future generations. Key goals include tackling climate change, promoting clean energy, ensuring economic growth, and fostering social inclusion. However, challenges remain significant.
Environmentally, the world faces urgent issues like climate change, loss of biodiversity, and pollution (Luna et al., 2024; Dehshiri and Amiri, 2024; Arshad et al., 2023). Efforts to reduce carbon emissions and increase renewable energy adoption are progressing, but not at the pace required to meet international climate goals. Economically, sustainable development aims to reduce poverty and inequality. While there has been progress, global economic disparities persist, exacerbated by factors like insufficient infrastructure in developing countries and the economic impacts of the COVID-19 pandemic. Socially, challenges include ensuring equal access to quality education, healthcare, and employment opportunities. There is also a need to strengthen social safety nets and promote gender equality.

The modern world's need for sustainability is a complex subject that necessitates a thorough, critical examination. It includes comprehending how problems like pollution, resource depletion, and climate change are interrelated. This entails realizing how individuals' actions affect the environment and how crucial it is to lessen this harmful influence (Yang et al., 2023). From an economic perspective, sustainability can result in long-term economic gains, despite the misconception that it impedes economic growth. This includes the emergence of new markets and employment prospects, especially in renewable energy. Furthermore, social aspects of sustainability must be considered, including fair labor practices, equitable resource access, and the needs of future generations. Analyzing the impact of our actions and activities on the community and, more widely, on the world, now and in the future, is crucial (Shayan et al., 2022). Reaching sustainability necessitates a substantial shift in both personal conduct and societal expectations. This is a difficult task that requires knowledge of, and control over, the sociological and psychological elements that shape human behavior. In addition, technological innovation has a dual function: it both causes sustainability (environmental) issues and provides answers to them. To comprehend the possibilities and constraints of these technical solutions, critical analysis is required (Sarfraz et al., 2023; Dehshiri and Amiri, 2024).

In an era where sustainability is not merely an option but a necessity, BT emerges as a pivotal tool in harmonizing the trinity of social, economic, and environmental dimensions of sustainability (Carter and Easton, 2011). At the juncture of the Fourth Industrial Revolution, blockchain stands out as a transformative force, capable of fostering a synergy between technological advancement and sustainable development. Blockchain is a decentralized ledger that offers an immutable, secure, and distributed database that can facilitate the verifiable exchange of information and assets, reducing the dependency on intermediaries and enhancing peer-to-peer transactions (Yli-Huumo et al., 2016; Galen et al., 2018), and it can significantly boost the efficacy of sustainable development endeavors by promoting transparency and trust (Horner and Ryan, 2019). This article seeks to inform and guide stakeholders, from policymakers to practitioners, on the effective deployment of blockchain solutions for economic, social, and environmental sustainability. The article's theoretical contributions lie in the creation of a conceptual framework to elucidate the intricate relationship between blockchain and sustainability dimensions.
Practically, it aims to unearth best practices, discern obstacles to implementation, and recommend pathways to leverage blockchain for a more sustainable future. The upcoming sections will delve into the theoretical underpinnings of blockchain in sustainability, outline the challenges and limitations of incorporating blockchain for sustainable solutions, present a comprehensive framework that explicates the relationship between blockchain and environmental, economic, and social sustainability, and offer conclusions with strategic recommendations for future research and policymaking.

**2.** **Sustainability Definition**

Scholars have put forth numerous definitions of sustainability encompassing various aspects. Prior to 1990, literature focused on three dimensions of sustainability and sustainable development: social justice, environmental preservation, and economic prosperity (Purvis et al., 2019). Thus, sustainability can be conceptualized as a triple bottom line comprising social responsibility, environmental stewardship, and economic viability (Kapferer and Denizeau, 2017). Subsequently, Gladwin et al. (1995), through content analysis of multiple definitions, identified additional critical elements of sustainable development, such as inclusiveness, prudence, connectivity, security, and equity. The most widely accepted definition of sustainability in scholarly works is "meeting the needs of the present without compromising the ability of future generations to meet their own needs" (Brundtland Report, 1987, p. 8).

Sustainability in the modern world is a complex issue, requiring a deep understanding of the links between environmental concerns like pollution, resource depletion, and climate change, and the importance of mitigating our environmental impact. Economically, it offers long-term benefits and opportunities, particularly in renewable energy, challenging the notion that it hinders growth. Socially, it involves ensuring equitable resource distribution, fair labor, and considering future generations, requiring a global perspective on the impact of our actions. Technologically, it poses both challenges and solutions, necessitating critical analysis of innovation's potential and limitations (Sarfraz et al., 2023). In essence, sustainability is about harmonizing environmental, economic, and social interests to ensure the well-being of current and future generations, facilitated by technological innovation, cultural shifts, effective policy, and a balanced global-local approach (Hariram, 2023).

_2.1_ _Fundamentals of blockchain: a comprehensive analysis of its functionality and benefits to sustainability_

The term "blockchain" initially surfaced on the internet in 2008 and has exerted a substantial impact on public institutions, private enterprises, and emerging businesses. BT is primarily employed as an innovative approach for facilitating transactions between two entities. It functions as a decentralized and secure ledger, enabling direct trades between two anonymous individuals without the requirement of a trusted intermediary. This technology introduces a new operating framework for enterprises and institutions. According to Palacio (2018), it can serve as a viable tool for tackling worldwide difficulties and facilitating the realization of the United Nations' sustainable development goals (SDGs) across all countries.

_2.1.1_ _The salient sustainable characteristics of BT_

Several salient facets of BT provide advantages in terms of the advancement of sustainability.

- Transparency: Transparency refers to the quality or state of being open, honest, and accountable. The ledger system of the blockchain provides a transparent account of transactions. This function is of utmost importance in the surveillance of sustainable sourcing practices of products, ensuring compliance with environmental rules, and mitigating the likelihood of unethical operations within the supply chain.
- Authenticity: The ability to track the trajectory of a product from its initial source contributes to the verification of sustainable practices' authenticity. This holds particular importance in industries such as agriculture, where it is important to closely observe the movement of products from the manufacturer to the end consumer (Prashar et al., 2020;
_2.1.1_ _The salient sustainable characteristics of BT_ Several salient facets of BT provide advantages in terms of the advancement of sustainability. - Transparency: The concept of transparency refers to the quality or state of being open, honest, and accountable. The ledger system of the blockchain provides a transparent account of transactions. The utilization of this function is of utmost importance in the surveillance of sustainable sourcing practices of products, ensuring compliance with environmental rules, and mitigating the likelihood of unethical operations within the supply chain. - Authenticity: The ability to track the trajectory of a product from its initial source contributes to the verification of sustainable practices' authenticity. This holds particular importance in industries such as agriculture, where it is important to closely observe the movement of products from the manufacturer to the end consumer (Prashar, et al., 2020; ----- **_ISSN 2281-3993_** **_www.richtmann.org_** **_March 2024_** Bhusal, 2021). - Decentralization: Through the utilization of a decentralized network, BT mitigates the necessity for intermediaries or centralized governing bodies. This might potentially lead to the development of more equitable and efficient systems, hence reducing carbon emissions and improving resource allocation (Risso et al., 2023). - The cryptographic nature of blockchain ensures the guarantee of data integrity and security. Ensuring the protection of sensitive environmental data and maintaining the accuracy and reliability of sustainability records is imperative. - Smart contracts are agreements capable of executing themselves, as they have the agreement's conditions explicitly encoded into their code. One potential strategy to streamline operations and reduce administrative burden is the use of automated systems for monitoring and implementing sustainability commitments and regulations (Dal Mas et al., 2020). - The implementation of BT can greatly reduce the need for excessive paperwork and manual processing involved in monitoring and verifying sustainable practices. This technological advancement can enhance operational efficiency and mitigate environmental consequences. - Through the implementation of BT, resource management may be significantly enhanced by facilitating accurate monitoring and projection capabilities. This, in turn, fosters waste reduction and promotes the efficient utilization of resources (Parmentola et al., 2022). - The provision of a shared and dependable platform for the exchange of information can foster cooperation among many stakeholders involved in sustainability endeavors, including corporations, governmental bodies, and individuals (Schulz et al., 2020). Figure 1 shows the connection between the sustainability-related features of BT with sustainable practices. Through these attributes, BT can significantly influence several sustainability-related concerns, such as the conservation of resources, preservation of the environment, and promotion of sustainable business practices. **Figure 1: Sustainability-related features of BT** **Source: Authors’ own work** _2.1.2_ _Historical context: blockchain in sustainability-driven projects_ The historical backdrop of using BT into sustainability-focused initiatives can be traced back to its inception within the financial industry, specifically with the advent of Bitcoin in 2009. 
However, the recognition of BT's promise for sustainability occurred a few years later, when other businesses and organizations commenced exploring its wider range of uses. The implementation of BT in supply chain management has emerged as an early and noteworthy application within the realm of sustainability. Organizations have come to recognize that the transparency and traceability attributes of BT may effectively guarantee ethical sourcing and production procedures. One example of blockchain use in the food industry is IBM's Food Trust, which was developed in partnership with Walmart. This platform utilizes BT to monitor the entire supply chain of food goods, tracing their path from the farm to the retail shop. By doing so, it aims to enhance food safety measures and minimize wastage. Another illustration may be found in Everledger, a company that uses BT to track the provenance and life cycle of various items. Its primary emphasis is on diamonds, with the aim of guaranteeing their ethical sourcing and absence of involvement in conflicts (Everledger, 2023).

BT has gained traction within the energy sector, mostly to advance renewable energy initiatives and enhance the efficiency of energy distribution systems. Power Ledger and WePower are examples of companies that have utilized BT to facilitate peer-to-peer energy trading. This innovative approach enables customers to engage in the buying and selling of surplus renewable energy, thereby fostering the adoption of sustainable energy resources (Ahmad, 2023). BT has also been utilized in the realm of environmental conservation. One example of a platform that aims to modernize the energy sector is VAKT. This platform is designed to establish a safe and unchangeable digital environment specifically for the processing of physical post-trade activities in the energy industry. This solution reduces administrative documentation and facilitates the establishment of an effective and environmentally conscious energy trading system (Thoughtworks, 2023). An additional illustration may be found in the Aerial platform, which leverages BT to establish a heightened level of transparency and accountability in the monitoring of carbon emissions and credits (Aerial, 2023).

The concept of "green bonds" and sustainable investments has also benefited from the utilization of BT in the realm of sustainable finance. Platforms such as BanQu contribute to the establishment of economic prospects for underprivileged communities by offering a digital identity and financial record, which are essential for gaining access to financial services and engaging in sustainable economic endeavors. Organizations such as Plastic Bank employ BT to address the issue of plastic waste in marine environments, by providing incentives for recycling activities in developing nations. Individuals have the capacity to gather plastic waste and subsequently trade it for digital tokens, thus facilitating the establishment of a circular economy and mitigating the adverse effects of environmental pollution (Böckel et al., 2021). To summarize, the historical backdrop of blockchain in the realm of sustainability is characterized by its evolution from a singularly financial instrument to a multifaceted technological solution that tackles diverse aspects of sustainability.
The impetus for this change can be attributed to the increasing recognition of environmental concerns and the imperative for transparent and efficient solutions across various industries. Organizations worldwide have been actively engaging in the exploration and implementation of BT to address sustainability objectives, thereby showcasing its adaptability and capacity for fostering favorable ecological outcomes.

_2.2_ _Review of the relevant literature_

This section synthesizes the existing blockchain technology (BT) focused literature and elaborates on the key themes listed in Table 1 from the literature review. Studies included in this section were identified using the Scopus database, using the following combination of keywords: (TITLE ("Blockchain technology") AND TITLE ("Sustainability")). This approach is similar to the approach employed by existing review articles on various topics (Dwivedi et al., 2023). The research reviewed for this article is categorized into the following major themes: BT research in the environmental, economic, and social domains, and its applications.

**Table 1: Sustainability theme-based categorization of BT articles**

| Theme | Sub-theme | Description | References |
|---|---|---|---|
| Environment | Renewable Energy | Research related to renewable energy. | Alhasan and Hamdan, 2023; Cui, 2023; Ahmad, 2023; Zuo, 2022; Bai et al., 2022; Thakur et al., 2021; Howson, 2020; Brilliantova and Thurner, 2019; Teufel et al., 2019 |
| Environment | Climate Change Mitigation | Studies on mitigating climate change. | Arshad et al., 2023; Khan et al., 2023; Truby et al., 2022; Wang, 2022; Fu et al., 2021; Olivier et al., 2017 |
| Environment | Environmental Conservation | Research focused on environmental conservation. | Luna et al., 2024; Dehshiri and Amiri, 2024; Arshad et al., 2023; Tyagi, 2023; Yang et al., 2023; Naqash et al., 2023; Rodríguez, 2023; Waqar, 2023; Singh et al., 2023; Sipthorpe et al., 2022; Jiang and Zheng, 2021; Park and Li, 2021; Kim et al., 2021; Richardson, 2020; Allena, 2020; Schulz et al., 2020 |
| Environment | Sustainable Agriculture | Studies pertaining to the role of BT in developing sustainable agriculture. | Dehshiri and Amiri, 2024; Gazzola et al., 2023; Bosona and Gebresenbet, 2023; Dal Mas et al., 2023; Yontar, 2023; Pandey, 2022; Bhusal, 2021; Prashar et al., 2020; Mirabelli and Solina, 2020 |
| Economic | Sustainable supply chain management | Research related to BT integration to enhance supply chain management, lower costs, and achieve a competitive edge. | Dehshiri and Amiri, 2024; Jabbar and Bjørn, 2018; Bettín-Díaz et al., 2018; Kshetri, 2018; Tian, 2016 |
| Economic | Promoting fair trade | The role of BT in enhancing transparency within fair trade practices. | Cozzio, 2023; Tafuro et al., 2023; Balzarova et al., 2022; Francisco and Swanson, 2018 |
| Economic | Employment and Income Distribution | The influence of BT on employment and income distribution. | Tartan, 2023; Shabaltina et al., 2021; Novak, 2019 |
| Social | Equality and social inclusion | Research that discusses equality and social inclusion, such as financial inclusion, secure identity verification, transparent government, education and credential verification, and decentralized markets. | Gazzola et al., 2023; Chaudhuri et al., 2023; Fallah Shayan et al., 2022; Thanasi-Boçe et al., 2022; Al-Issa et al., 2022; Böckel et al., 2021; Khanfar et al., 2021; Venkatesh et al., 2020; Dal Mas et al., 2020; Konashevych, 2020; Chen et al., 2020; Martin et al., 2011; Carter and Rogers, 2008 |
| Social | Business ethics and effective corporate governance | Research related to the relation between BT and corporate ethics and performance. | Ronaghi and Mosakhani, 2023; Rai et al., 2021; Tang, 2018; Mani et al., 2016; Lashley, 2016; Krechovská and Prochazkova, 2014; Krishna et al., 2011; Aguilera et al., 2009; Carter and Rogers, 2008 |
| Social | Enhancing quality of life: sustainable production and consumption; product authentication and traceability | Research focused on improving the quality of life through the impact of BT on the environment and on satisfying consumers' needs for products and services. | Gazzola et al., 2023; Vishwakarma, 2023; Chaudhuri et al., 2023; Sikder, 2023; Al-Issa et al., 2022; Khanfar et al., 2021; Lu et al., 2020; Rai et al., 2021; Mani et al., 2016 |

**Source: Authors' own creation**
_2.3_ _Blockchain and environmental sustainability_

Environmental sustainability, as outlined in the United Nations Sustainable Development Goals (SDGs), refers to the responsible interaction with the environment to avoid depletion or degradation of natural resources and allow for long-term environmental quality (UN, 2023). In a time when environmental issues are becoming ever more complicated and interconnected, environmental sustainability is not merely a requirement but a worldwide obligation. In this effort, the SDGs of the United Nations function as a guiding framework, delineating the complex aspects of sustainability. These goals underscore the urgent need to address critical issues such as climate action (Goal 13) to combat climate change, protect oceans and marine life (Goal 14), and sustain terrestrial ecosystems (Goal 15). They emphasize the necessity of clean water and sanitation (Goal 6), affordable and clean energy (Goal 7), and responsible consumption and production (Goal 12) to ensure efficient use of resources and waste reduction. Additionally, these goals highlight the importance of global partnerships (Goal 17) for effective implementation and policy coherence, incorporating aspects of international cooperation and technology transfer to achieve environmental sustainability.
Achieving environmental sustainability demands innovative strategies, with emerging technologies such as blockchain offering novel avenues for progress. Policymakers can make more informed decisions by comprehending the effects of market interventions and weighing their options, mindful of potential disruptions. The insights gained can guide political institutions in shaping the necessary political and legal frameworks to foster the creation of effective green blockchain applications (Arshad et al., 2023). As the exploration of blockchain's potential in promoting sustainability progresses, it becomes essential to examine its influence on specific domains that are important for the environmental agenda.

_2.3.1_ _Supply chain management_

Supply chains may be made much more transparent with the use of BT (Kouhizadeh et al., 2021; Risso et al., 2023). Every transaction or movement of products may be tracked on a tamper-proof ledger by utilizing BT (Yap et al., 2023). As a result, parties involved in the supply chain can identify inefficiencies and sources of waste and pollution by tracking the origin, path, and present state of products (Kim et al., 2021). By utilizing BT to document every stage of a product's supply chain, it becomes feasible to verify the purchase of items from practices that promote biodiversity conservation (Park and Li, 2021; Tyagi, 2023; Dehshiri and Amiri, 2024).
_2.3.2_ _Water resource management_

The implementation of BT has the potential to safeguard the integrity and transparency of water usage and quality data, similar to its application in managing supply chain information and energy data (Rodríguez, 2023). Every water usage event, such as extraction, purification, and distribution, can be documented on a blockchain (Naqash et al., 2023). Initiatives such as IBM's blockchain-based water management system facilitate the establishment of transparent systems for administering water rights and consumption data, thereby promoting fair and sustainable practices in water distribution and utilization. This aligns with SDG 6 (Clean Water and Sanitation) by promoting the sustainable management of water resources.

_2.3.3_ _Energy efficiency_

BT is anticipated to transform the energy industry by assisting in the management of the complexities of a decentralized energy system and by enhancing energy operations along the entire value chain (Brilliantova and Thurner, 2019). BT enables the verification, automation, and security of energy transfers without intermediaries (Teufel et al., 2019). BT can be used to create decentralized energy grids and systems that reward energy-saving behaviors. For example, smart contracts on a blockchain can automatically compensate individuals or organizations that reduce their energy consumption (a toy illustration of this reward logic appears after Section 2.3.11). This supports SDG 7 by promoting energy efficiency and SDG 13 by reducing energy-related emissions.

_2.3.4_ _Renewable energy certificates (RECs)_

In the realm of renewable energy, blockchain can play a pivotal role in issuing, tracking, and verifying RECs (Zuo, 2022; Cui, 2023). By guaranteeing each certificate's authenticity and distinctiveness, such a system eliminates fraud and double counting. This is in line with SDG 13 and SDG 7, since it encourages the use of renewable energy sources.

As blockchain redefines the paradigms of transparency and traceability in supply chains, it opens new avenues for mitigating GHG emissions and empowering renewable energy markets. Through its ability to verify the authenticity of RECs and carbon credits, blockchain enables a more robust and trustworthy system for environmental accounting (Sipthorpe et al., 2022). Its role in incentivizing energy efficiency further illustrates its utility in the pursuit of a low-carbon economy.

_2.3.5_ _Carbon credits_

BT facilitates the establishment of a reliable and readily transparent infrastructure for the issuance, exchange, and retirement of carbon credits. This approach is designed to prevent the duplication of carbon credits and to verify that each credit corresponds to a genuine decrease in emissions (Richardson, 2020). This supports the achievement of SDG 13 by enabling a mechanism for compensating greenhouse gas emissions, and it serves as an incentive for enterprises and nations to allocate resources towards emission-reduction programs (Sipthorpe et al., 2022).
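The sketch below illustrates, under invented names, the registry logic that Sections 2.3.4 and 2.3.5 attribute to blockchain-based REC and carbon-credit systems: each credit carries a unique serial number, has exactly one current owner, and can be retired only once, which is precisely what rules out fraud and double counting. A production system would implement these rules as an on-chain smart contract; the Python class only mirrors them.

```python
# Hypothetical registry mirroring on-chain REC / carbon-credit rules.
import uuid

class CreditRegistry:
    def __init__(self):
        self._owner = {}      # serial -> current owner
        self._retired = set() # serials that can never be reused

    def issue(self, issuer_verified: bool, owner: str) -> str:
        """Mint one credit for a verified emission reduction or MWh of clean energy."""
        if not issuer_verified:
            raise PermissionError("only verified issuers may mint credits")
        serial = uuid.uuid4().hex  # unique, non-duplicable identifier
        self._owner[serial] = owner
        return serial

    def transfer(self, serial: str, sender: str, recipient: str) -> None:
        if serial in self._retired:
            raise ValueError("credit already retired; cannot be resold")
        if self._owner.get(serial) != sender:
            raise PermissionError("sender does not own this credit")
        self._owner[serial] = recipient

    def retire(self, serial: str, owner: str) -> None:
        """Permanently retire a credit to claim the offset; one time only."""
        if serial in self._retired:
            raise ValueError("double counting attempt: already retired")
        if self._owner.get(serial) != owner:
            raise PermissionError("only the owner may retire a credit")
        self._retired.add(serial)

registry = CreditRegistry()
credit = registry.issue(issuer_verified=True, owner="wind-farm-A")
registry.transfer(credit, "wind-farm-A", "company-B")
registry.retire(credit, "company-B")
# registry.retire(credit, "company-B")  # would raise: double counting attempt
```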
_2.3.6_ _Green building and sustainable construction_

Similarly to monitoring RECs and ensuring energy efficiency, the utilization of BT within the construction industry facilitates the monitoring and assessment of the sustainability of building materials and construction methodologies (Waqar, 2023; Singh et al., 2023). Information about the sourcing, production, and transportation of materials can be stored on a blockchain, allowing stakeholders to track the origin and environmental impact of construction materials. This transparency guarantees compliance with specific environmental criteria and facilitates the adoption of sustainable construction methods (Jiang and Zheng, 2021). Sustainable materials are key to reducing the environmental footprint of buildings (Yang et al., 2023). This relates to SDG 11 (Sustainable Cities and Communities), focusing on sustainable construction practices.

_2.3.7_ _Sustainable agriculture and food systems_

The application of BT in enhancing supply chain transparency is highly relevant in the context of sustainable agriculture and food systems, since it enables the seamless monitoring of products from their origin to the final consumer (Mirabelli and Solina, 2020; Bosona and Gebresenbet, 2023; Dal Mas et al., 2023). The entire process, spanning from production to retail, is meticulously documented, thereby guaranteeing the implementation of food safety protocols and sustainable practices. It has the capacity to fundamentally transform the processes of food production, distribution, and consumption (Yontar, 2023; Pandey, 2022). This directly corresponds to the objectives of establishing resilient agricultural practices and guaranteeing sustainable food production systems, addressing SDG 2 and SDG 12 (Dehshiri and Amiri, 2024). AgriDigital and Provenance are two notable initiatives that provide blockchain-based solutions for the traceability of agricultural products. This approach promotes greater transparency and provides valuable support for the adoption of sustainable farming methods. The global importance of food safety and quality has been highlighted by recent high-profile incidents, raising public interest in food traceability (Prashar et al., 2020). Because blockchain can track food through all stages of its life cycle, the World Health Organization encourages a cooperative approach among governments, producers, and consumers to ensure safety through information sharing in complex food networks. Profit-driven businesses often use information systems to track food, enhancing safety and potentially increasing profits. Gazzola et al. (2023) investigated how BT can contribute to this area, particularly in building positive consumer relationships.

The utilization of BT holds promise for establishing a more secure, environmentally conscious, and dependable agricultural food system in the forthcoming years (Bosona and Gebresenbet, 2023). Despite being in its nascent phases and facing obstacles including high implementation costs, privacy concerns, security issues, scalability limitations, and performance challenges, the integration of this technology has the potential to bring about a substantial transformation in the agricultural sector (Bhusal, 2021).

_2.3.8_ _Environmental policy compliance and governance_

Blockchain can monitor compliance with environmental policies and regulations, akin to verifying carbon credits and managing RECs.
It ensures a decentralized and unalterable ledger of emissions data and of adherence to environmental regulations. Governments can manage environmental subsidies and penalties with BT, ensuring policy implementation is transparent and efficient (Luna et al., 2024; Allena, 2020; Schulz et al., 2020). The promotion of open and accountable governance contributes to the achievement of SDG 13 and SDG 16 (Peace, Justice, and Strong Institutions).

_2.3.9_ _Public participation and awareness in environmental conservation_

The utilization of BT has the potential to improve public involvement in environmental initiatives through tokenization, enabling anyone to engage in conservation programs through investment or direct participation, much as blockchain encourages participation in renewable energy markets (Alhasan and Hamdan, 2023; Bai et al., 2022; Thakur et al., 2021; Howson, 2020). Platforms like Earth Token facilitate individuals' ability to invest in environmental assets, hence fostering public engagement in initiatives aimed at environmental preservation. This supports various SDGs by fostering inclusive participation in sustainable development (SDG 17, Partnerships for the Goals).

_2.3.10_ _Sustainable transportation and electric vehicles (EVs)_

The utilization of BT can facilitate the effective management and optimization of electric vehicle (EV) charging stations, as well as their easy integration into smart power grids (Fu et al., 2021; Wang, 2022; Khan et al., 2023). Blockchain is being investigated by initiatives such as MOBI (Mobility Open Blockchain Initiative) to determine the identity, history, and utilization of vehicles; this research may promote the adoption of more environmentally friendly transportation methods. This aligns with SDG 11 and SDG 13 by promoting sustainable urban transportation systems and contributing to climate action.

_2.3.11_ _Innovative climate-conscious projects_

As the world deals with the effects of climate change, which include loss of species, extreme weather, and an imbalance in the environment, countries must focus on sustainable long-term economic growth. Even though the world's economy grew by an average of 3.4% per year from 2012 to 2018, rising greenhouse gas (GHG) emissions cast a shadow over this progress, largely because of energy- and resource-intensive practices (Olivier et al., 2017). The Intergovernmental Panel on Climate Change (IPCC) reports and the growing support for green projects in the public and political spheres, which led to important agreements like the Paris Agreement of 2015, show how urgent it is to act against climate change. Growing public awareness of the effects of climate change means that climate markets need to open globally, attract investment, and develop new ways to become more resilient (Truby et al., 2022; Arshad et al., 2023).

Several projects utilizing BT have been developed to address the challenge of climate change. These include various platforms designed for monitoring and mitigating emissions, projects that provide incentives for adopting sustainable behaviors, and structures that facilitate climate finance and investment in projects aimed at promoting sustainability. These projects can contribute to multiple SDGs, including SDG 13, SDG 11, and SDG 15, by promoting actions that mitigate climate change and its impacts.
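As a small illustration of the incentive mechanisms discussed in Sections 2.3.3 and 2.3.9, the sketch below emulates a smart contract that automatically rewards participants whose metered consumption falls below an agreed baseline. The reward rate, participant names, and data feed are assumptions made for the example; a real deployment would consume signed smart-meter or oracle data.

```python
# Toy emulation of an energy-saving reward contract (assumed parameters).
from dataclasses import dataclass, field

TOKENS_PER_KWH_SAVED = 2  # assumed reward rate, tokens per kWh saved

@dataclass
class EnergySavingContract:
    baselines: dict                       # participant -> baseline kWh for the period
    balances: dict = field(default_factory=dict)  # participant -> token balance

    def settle_period(self, participant: str, metered_kwh: float) -> int:
        """Credit reward tokens proportional to verified savings."""
        baseline = self.baselines.get(participant)
        if baseline is None:
            raise KeyError(f"{participant} is not enrolled")
        saved = max(0.0, baseline - metered_kwh)   # no penalty for overuse here
        reward = int(saved * TOKENS_PER_KWH_SAVED)
        self.balances[participant] = self.balances.get(participant, 0) + reward
        return reward

contract = EnergySavingContract(baselines={"household-7": 300.0})
print(contract.settle_period("household-7", metered_kwh=260.0))  # 80 tokens
print(contract.balances)                                         # {'household-7': 80}
```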
Each of the previously discussed topics is connected to the broader theme related to a specific SDG and is interconnected with the previously identified areas: monitoring carbon footprints with blockchain; enhancing renewable energy markets; waste management and recycling; and climate change. This methodology offers a systematic framework for comprehending the interplay and contributions of different aspects of sustainability and BT to these broad concepts. Table 2 below aligns each theme with the broader topic where its impact is most significant.

**Table 2. Environmental sustainability-related themes**

|Broader Topics|Sustainability-related Themes|
|---|---|
|Monitoring Carbon Footprints with Blockchain|- Supply Chain Management - Environmental Policy Compliance and Governance|
|Enhancing Renewable Energy Markets|- Renewable Energy Certificates (RECs) - Energy Efficiency - Sustainable Transportation and Electric Vehicles (EVs)|
|Waste Management and Recycling|- Sustainable Agriculture and Food Systems - Green Building and Sustainable Construction|
|Climate Change|- Public Participation and Awareness - Innovative Climate-Conscious Projects|

Supply Chain Management and Environmental Policy Compliance and Governance are key in monitoring carbon footprints, as they involve tracking emissions and ensuring regulatory adherence. RECs, Energy Efficiency, and Sustainable Transportation/EVs enhance renewable energy markets by promoting the use of clean energy and efficient energy practices. Sustainable Agriculture and Food Systems and Green Building and Sustainable Construction contribute to waste management and recycling through sustainable practices and resource efficiency. Public Participation and Awareness and Innovative Climate-Conscious Projects are crucial for addressing climate change, as they involve engaging the public and implementing novel solutions to climate challenges.

Table 3 provides a clear overview of how BT is being applied in various sectors to address environmental challenges and sustainability goals. Each application demonstrates the potential of blockchain to contribute to a more sustainable and environmentally conscious world.

**Table 3. Application of blockchain for environmental sustainability**
|Broader Topic|Specific Application|Description|
|---|---|---|
|Monitoring Carbon Footprints with Blockchain|- Smart contracts for emission tracking|Companies use blockchain-based smart contracts to automatically track and report emissions, integrating sensors and IoT devices.|
||- Decentralized carbon emission trading platforms|Blockchain enables the creation of platforms for transparent and efficient trading of carbon credits, as seen with IBM and Energy Blockchain Labs.|
|Enhancing Renewable Energy Markets|- Blockchain in microgrids|Projects like Brooklyn Microgrid use blockchain to create local energy networks for buying and selling renewable energy.|
||- Tokenization of renewable energy assets|Blockchain is used for tokenizing renewable energy assets, facilitating investment in renewable energy projects (e.g., WePower).|
|Waste Management and Recycling|- Recycling incentivization programs|Blockchain-based platforms like Plastic Bank offer tokens for collecting and recycling materials, incentivizing proper recycling practices.|
||- Supply chain transparency for recycling|Blockchain is used to track the lifecycle of products, ensuring responsible recycling (e.g., Arianee project for luxury goods).|
|Climate Change|- Climate finance and investment platforms|Blockchain platforms facilitate investments in climate change mitigation projects, such as the Poseidon Foundation supporting forest conservation efforts.|
||- Tracking and verifying climate data|Projects like the Open Earth Foundation use blockchain for transparent and tamper-proof climate data, aiding in climate policy and modeling.|

_2.4_ _Blockchain in economic sustainability_

Economic sustainability, as per the United Nations' SDGs, refers to the practice of managing resources and developing economically in a way that ensures long-term economic health without harming the environment or compromising the ability of future generations to meet their own needs. It involves a balanced approach that integrates economic growth with social inclusion and environmental protection (UN, 2023). The SDGs encompass various facets of economic sustainability through distinct objectives. Goal 8 advocates for sustained, inclusive economic growth and decent work, emphasizing productivity, innovation, entrepreneurship, and the separation of economic growth from environmental harm. Goal 9 targets the development of resilient infrastructure, sustainable industrialization, and innovation as key drivers of economic progress. Goal 10 is focused on reducing inequalities both within and among countries, promoting equitable distribution of wealth and resources as a cornerstone of sustainable economic development.
Lastly, Goal 12 aims to establish sustainable patterns of consumption and production, prioritizing efficient resource use, waste reduction, and the minimization of the environmental impact of economic activities.

_2.4.1_ _Promoting fair trade_

Economic sustainability can be bolstered by enhancing transparency within fair trade practices. Fair trade relies on consumer willingness to pay premiums based on the assurance of superior quality and ethical practices within supply chains. Ecolabels must therefore demonstrate attributes such as traceability, accountability, and ecological sustainability to build trust. Blockchain technology (BT) is posited as an improvement over traditional marketing strategies, with Balzarova et al. (2022) noting its net positive impact on food supply chains. Francisco and Swanson (2018) argue that BT could significantly improve transparency and traceability in agricultural supply chains. This technology facilitates consumer trust, providing self-verification mechanisms that can potentially replace reliance on ecolabels (Cozzio, 2023). In assessing the adoption of blockchain in fair trade, Balzarova et al. (2022) utilized the Technology Readiness Index (TRI), unveiling several themes, including the conditional benefits of BT, the duality of transparency outcomes, consumer behavior factors, and implementation barriers, highlighting the practical challenges of BT adoption. The path from Fairtrade certification to blockchain adoption begins with the firm's expertise and willingness to implement BT (Holmberg et al., 2022). Moreover, public-private partnerships (PPPs) often face transparency and accountability issues, impacting trust and collaboration. Tafuro et al. (2023) examined blockchain's potential to address these issues in PPPs, suggesting that despite its complexities, blockchain can enhance the efficacy of PPPs and contribute to sustainable development goals.

_2.4.2_ _Sustainable Supply Chain Management_

Blockchain technology (BT) has emerged as a transformative force in supply chain management, offering unparalleled benefits in terms of transparency, security, and operational efficiency, which are pivotal for sustainable economic growth. This technology's core advantage lies in its ability to create a decentralized, immutable ledger that enables real-time tracking of products from their point of production to delivery (Sarfraz et al., 2023; Waqar et al., 2023; Yontar et al., 2023; Risso et al., 2023; Khanfar et al., 2021; Kouhizadeh et al., 2021).
This capability is instrumental in reducing losses associated with counterfeit and gray market trading, while also promoting environmentally friendly production methodologies (Carter and Easton, 2011; Yap et al., 2023). Moreover, blockchain facilitates the automation of sustainability agreements through smart contracts (Dal Mas et al., 2020), ensuring compliance with eco-friendly criteria and fostering the circular economy by meticulously documenting products' lifecycles for future reuse (Yap et al., 2023).

Despite the promising potential of blockchain in revolutionizing supply chains, its integration into existing systems is fraught with challenges. These include the need for substantial technology infrastructure development, regulatory support, and overcoming the general reluctance towards adopting new technologies. Specific hurdles such as user-friendliness, the proprietary nature of blockchain solutions, and the seamless integration of virtual and physical tracking mechanisms are significant (Dehshiri and Amiri, 2024; Jabbar and Bjørn, 2018; Kshetri, 2018; Tian, 2016). The reconciliation of blockchain's virtual capabilities with the physical tracking of items poses a complex problem that research has yet to fully address, often focusing more on the virtual benefits than on practical physical applications.

In addition to revolutionizing supply chain management, blockchain significantly enhances business efficiency by simplifying processes and eliminating intermediaries, thus reducing costs and saving time. For instance, in the energy sector, blockchain enables efficient peer-to-peer energy trading, facilitating the use of renewable energy sources and contributing to a more sustainable energy supply. The technology also allows for the tokenization of physical assets, such as real estate, making these markets more liquid and efficient by enabling assets to be traded on blockchain platforms (Carter and Easton, 2011).

The deployment of blockchain technology in supply chain management not only ensures greater transparency, security, and cost efficiency but also facilitates the real-time tracking of the entire production and delivery process. This significantly mitigates the risks associated with counterfeit and gray market trading, thereby endorsing sustainable business models (Thanasi-Boçe et al., 2022). Blockchain's ability to enhance product quality, prevent counterfeits, and achieve stakeholder transparency is pivotal for monitoring resource utilization and promoting sustainable manufacturing practices. Furthermore, smart contracts play a critical role in ensuring suppliers adhere to environmental standards, thereby incentivizing responsible production (Yap et al., 2023); a simplified sketch of this escrow mechanism appears at the end of Section 2.4. As noted above, these integration hurdles remain largely unresolved, and most research has focused on blockchain's benefits for companies, with less emphasis on consumer information.
Bettín-Díaz and colleagues (2018) suggested a methodology for developing traceable supply chains with consumer considerations, but detailed strategies for conveying information to customers remain limited.

Addressing the challenges of blockchain integration within supply chains necessitates the development of a robust technological infrastructure and sustainable methodologies, bolstered by collaborative investments and regulatory frameworks among supply chain partners (Dehshiri and Amiri, 2024). Ease of use and the capacity to integrate blockchain within existing infrastructures are essential for promoting user adoption and overcoming resistance towards new technologies (Jabbar and Bjørn, 2018).

_2.4.3_ _Impact on Employment and Income Distribution_

The influence of BT on employment and income distribution is a complex and evolving matter. BT has the potential to generate novel employment prospects, notably in domains such as software development, consulting, and auditing. Conversely, other authors argue that the use of BT may result in workforce reduction within specific industries, given its capacity to automate functions and procedures presently executed by human labor (Novak, 2019). BT can influence employment and income distribution through several avenues, with both advantageous and detrimental outcomes, as shown in Table 4.

**Table 4. The impact of blockchain on employment and income distribution**

|Impact|Positive or Negative (+/-)|Explanation|
|---|---|---|
|Job creation|(+)|BT could provide jobs in blockchain development, cybersecurity, data analysis, and smart contract audits. The growing acceptance of BT is creating job opportunities for skilled workers in connected fields.|
|Streamlining of processes|(+)|BT can simplify and streamline manual processes, reducing middlemen and administrative staff. This technique may replace jobs in some locations, but it can save companies money and improve operations.|
|Decentralization|(+)|BT enables decentralized platforms and apps, boosting the gig economy. This phenomenon has pros and disadvantages. Peer-to-peer networks allow people to make money by completing activities, sharing resources, and selling services. This could improve income distribution.|
|Financial inclusion|(+)|Banks can use BT to provide financial services to those who don't have access to them. This allows people to work and participate in the formal economy, empowering them.|
|Income inequality|(-)|Blockchain wealth concentration among early adopters and significant enterprises may aggravate economic inequality. Individuals with significant computer power and resources may have an advantage.|
|Reskilling and education|(+)|The growing demand for BT skills spurs investment in educational and training programs, giving people the chance to learn new skills and boost their income (Fleener, 2022).|
|Regulatory uncertainty|(-)|Regulatory ambiguity in the blockchain business can hinder investment and job growth. Businesses are hesitant to expand in uncertain legal environments.|

Additionally, BT has a positive impact on reducing fraud and corruption. Blockchain's immutable ledger ensures that records cannot be altered after the fact, which can significantly reduce fraud and corruption, particularly in public sectors such as land registries and government contracts.
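To make the compliance-enforcing smart contracts discussed in Section 2.4.2 concrete, the following simplified Python sketch models an escrow that releases payment to a supplier only when an audited sustainability metric meets the agreed threshold. All names, amounts, and the CO2 metric are illustrative assumptions, not a reference implementation.

```python
# Simplified escrow for a sustainability agreement (illustrative only).
class SustainabilityEscrow:
    def __init__(self, buyer: str, supplier: str, amount: int, max_co2_per_unit: float):
        self.buyer, self.supplier = buyer, supplier
        self.amount = amount                      # funds locked at contract creation
        self.max_co2_per_unit = max_co2_per_unit  # agreed environmental threshold
        self.settled = False

    def settle(self, audited_co2_per_unit: float) -> str:
        """Release or refund escrowed funds based on the audited metric."""
        if self.settled:
            raise RuntimeError("escrow already settled")
        self.settled = True
        if audited_co2_per_unit <= self.max_co2_per_unit:
            return f"release {self.amount} to {self.supplier}"
        return f"refund {self.amount} to {self.buyer}"

deal = SustainabilityEscrow("retailer-A", "factory-B", amount=10_000, max_co2_per_unit=1.5)
print(deal.settle(audited_co2_per_unit=1.2))  # release 10000 to factory-B
```

On a blockchain, the audited figure would arrive through a trusted oracle and the settlement would execute automatically, removing the need for either party to trust the other to pay.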
_2.5_ _Blockchain for social sustainability_

BT significantly affects social sustainability, addressing systemic issues across various key areas to foster a more equitable, transparent, and sustainable society. This transformative technology underpins efforts aligned with the United Nations' Sustainable Development Goals (SDGs), enhancing quality of life and promoting social inclusion (UN, 2023; Mani et al., 2016). The key areas of social impact where blockchain demonstrates its substantial contributions are discussed below. In each of these areas, blockchain technology not only addresses pressing social and environmental challenges but also promotes a more equitable distribution of resources and opportunities, underscoring its profound impact on fostering a sustainable and inclusive global society (Lu et al., 2020; Rai et al., 2021).

_2.5.1_ _Promoting equality and social inclusion_

Blockchain technology has the potential to promote equality and social inclusion through several key mechanisms. By providing a decentralized and transparent ledger system, it offers innovative solutions to systemic problems that hinder equality and social inclusion. The discussion below focuses on the ways blockchain can contribute to these goals.

_2.5.2_ _Financial inclusion_

Blockchain facilitates access to financial services for unbanked and underbanked populations, who are often excluded from traditional banking systems. By enabling peer-to-peer transactions without the need for intermediaries, blockchain can lower transaction costs and make financial services more accessible to everyone, regardless of their geographic location or socioeconomic status. This increased access to financial services can help reduce poverty and boost economic participation among marginalized communities.
Blockchain plays a crucial role in bridging the financial gap for unbanked and underprivileged populations. Platforms like BanQu provide digital identities and secure, accessible financial transactions, offering disenfranchised individuals, including refugees, access to banking and financial services, thereby facilitating economic participation and empowerment (Fallah Shayan et al., 2022).

_2.5.3_ _Secure identity verification_

Blockchain can provide secure and immutable digital identities, offering a solution for individuals without official documents or those whose records have been lost due to conflicts or disasters. A blockchain-based identity system can enable these individuals to access essential services such as healthcare, education, and banking, thereby promoting social inclusion.

_2.5.4_ _Transparent and fair governance_

Blockchain secures electoral systems and enhances civic engagement through transparent and trustworthy voting mechanisms. By ensuring that votes are tamper-proof and accurately recorded, blockchain can foster a more inclusive and fair political process, giving marginalized groups a stronger voice and promoting political equality. Projects like Voatz show the potential of blockchain to increase participation in democratic processes, strengthen democratic governance, and encourage citizen involvement in decision-making processes (Carter and Rogers, 2008).

_2.5.5_ _Supply chain transparency_

Blockchain can track the provenance of products from origin to consumer, ensuring fair trade and ethical practices (Balzarova et al., 2022). This transparency can empower consumers to make informed choices that support social and economic fairness, benefiting small producers and workers in developing countries by ensuring they receive a fair share of profits.

_2.5.6_ _Education and credential verification_

Blockchain can securely store and verify academic credentials, enabling individuals from disadvantaged backgrounds to prove their qualifications and skills easily. This can improve access to job opportunities and higher education, breaking down barriers to social mobility and promoting equality.

_2.5.7_ _Education and employment_

Blockchain enhances the mobility and employability of individuals by authenticating academic credentials and professional achievements. Through initiatives like the MIT Media Lab's Digital Certificates Project, blockchain ensures the integrity and verifiability of educational records, streamlining employment processes and supporting lifelong learning (Thanasi-Boçe et al., 2022).

_2.5.8_ _Decentralized markets_

By facilitating decentralized marketplaces, blockchain allows small businesses and entrepreneurs from marginalized communities to participate in the global economy directly. This can level the playing field, reducing the dominance of large corporations and empowering individuals and small enterprises (Dal Mas et al., 2020).

_2.5.9_ _Protection of property rights_

In countries where land and property rights are not adequately documented and enforced, blockchain technology has the potential to offer a record of ownership that is both secure and unchangeable.
This has the potential to safeguard the rights of vulnerable communities against encroachment and disputes, thus fostering economic stability and social inclusion simultaneously (Konashevych, 2020; Chen et al., 2020).

_2.5.10_ _Enhancing quality of life_

Blockchain technology can also enhance the quality of life through several key mechanisms, discussed below.

_2.5.11_ _Environmental stewardship and circular economy_

Blockchain incentivizes recycling and responsible waste management through projects like Plastic Bank, addressing environmental challenges and promoting the principles of the circular economy. By tokenizing waste and facilitating the trade of recyclable materials, blockchain contributes to sustainable practices and environmental conservation.

_2.5.12_ _Establishing consumer trust and transparency through authenticity and traceability_

BT is increasingly vital for brands in various sectors to ensure product authenticity and traceability, which is fundamental in establishing consumer trust and transparency. This decentralized and transparent technology enables brands to authenticate their products and track their supply chain journey, meeting the growing consumer demand for ethical and environmentally responsible production (Thanasi-Boçe et al., 2022), especially in the growing e-commerce sector (Sikder, 2023).

Key applications of blockchain for product authenticity and traceability, illustrated with examples, include:

**_Supply Chain Management:_** Blockchain is used to document and track every stage of a product's progression, from raw material acquisition to final delivery (Venkatesh et al., 2020). For instance, IBM Food Trust is a blockchain-based platform employed by major retailers like Walmart to trace the origin of food products, such as leafy greens. Consumers can access this information by scanning a QR code on the packaging, ensuring the product's legitimacy and source.

**_Food and Agriculture:_** Farmers can document crop-related data on blockchain, enhancing food safety and integrity. An example is the BeefLedger project in Australia, which uses blockchain to trace the source of beef products. Consumers can scan a QR code on meat packaging to access details about the cattle's breed, origin, diet, and processing methods, thereby verifying food provenance and quality from product labels.

**_Luxury Goods:_** Luxury brands are leveraging blockchain to provide digital certificates of authenticity. LVMH, the conglomerate that owns luxury brands like Louis Vuitton, has launched the AURA blockchain platform. It allows consumers to verify the authenticity and origin of luxury items, such as designer bags and watches. Each product comes with an NFC chip linked to the blockchain, providing a digital certificate of authenticity and detailed information about the product's history. This shift towards sustainable luxury aligns with contemporary values, setting new standards in the luxury industry for technological innovation and ethical practices (Al-Issa et al., 2022).

**_Pharmaceuticals:_** In the pharmaceutical industry, blockchain is used to monitor medication lifecycles and reduce counterfeit products. Chronicled is a blockchain platform employed to combat counterfeit drugs. It enables tracking the production and distribution of pharmaceuticals, ensuring that medications are genuine. Patients can authenticate prescriptions using QR codes.
**_Copyright and Digital Content:_** Creators can establish ownership of their work through blockchain timestamps. For example, Verisart is a blockchain-based platform used by artists and creators to certify the authenticity of digital art and collectibles. It provides a timestamped certificate of authenticity on the blockchain, making it easy for buyers to verify the originality of digital content.

**_Automotive Industry:_** Car manufacturers are exploring blockchain applications to maintain comprehensive records of vehicle maintenance, accidents, and ownership changes. BMW, for instance, aims to create a tamper-proof history of used cars, recording maintenance, accident history, and ownership transfers on the blockchain. Buyers can access this information to make informed decisions when purchasing a pre-owned vehicle.

In summary, BT offers a reliable and immutable method for maintaining a verifiable record of a product's journey, enhancing its credibility and traceability across various sectors (a minimal sketch of the underlying verification pattern follows Section 2.5.15). The Italian coffee roaster Lavazza's successful implementation of blockchain for product tracking demonstrates the importance of collaborative supply chain efforts and innovation in adapting to socioeconomic trends (Gazzola et al., 2023).

_2.5.13_ _Sustainable production and consumption_

Sustainable consumption refers to the use of products and services that satisfy basic needs and improve quality of life while minimizing the impact on the environment, so that future generations can also fulfill their needs. Chaudhuri et al.'s (2023) study highlights the importance of customer education and engagement, along with cultivating local partnerships, as essential behavioral strategies for enhancing social sustainability and mitigating risks in the context of BT. The utilization of BT has a profound influence on the promotion of sustainable production and consumption (Böckel et al., 2021) through the augmentation of transparency, traceability, and accountability within supply chains across diverse sectors (Khanfar et al., 2021). BT enables consumers to achieve supply chain transparency by providing them with the ability to trace the origins of items, thereby certifying their validity and ensuring ethical sourcing practices. This openness fosters the adoption of sustainable practices, including the responsible management of resources and the establishment of fair labor conditions. The immutability of BT presents a formidable obstacle to the proliferation of counterfeit goods in markets, thereby guaranteeing that consumers are provided with authentic, secure, and ecologically sustainable products.

_2.5.14_ _Renewable energy adoption_

Facilitating peer-to-peer energy trading, blockchain projects such as the Brooklyn Microgrid enable consumers to trade locally produced renewable energy. This democratization of energy production promotes environmental sustainability and incentivizes the shift towards renewable energy sources, contributing to reduced carbon footprints and supporting clean energy goals.

_2.5.15_ _Humanitarian aid and disaster relief_

In the realm of humanitarian aid, blockchain improves the efficiency and transparency of aid distribution. The World Food Programme's Building Blocks project exemplifies how blockchain delivers food assistance directly to beneficiaries, minimizing transaction costs and fraud risks, thus ensuring that aid reaches those in need more effectively (Martin et al., 2011).
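The credential and product-authenticity examples above share one verification pattern, sketched below in minimal Python: an issuer anchors the hash of a document on a public ledger, and anyone can later recompute the hash (for example, from data encoded in a QR code) and compare it with the anchored value. The dictionary standing in for the ledger, and all names, are assumptions for illustration; real systems add digital signatures, revocation, and privacy protections.

```python
# Minimal hash-anchoring sketch for credential / product verification.
import hashlib
import json

ledger: dict = {}  # anchored_hash -> issuer (stand-in for an on-chain registry)

def anchor(certificate: dict, issuer: str) -> str:
    """Issuer publishes the hash of a certificate to the ledger."""
    digest = hashlib.sha256(
        json.dumps(certificate, sort_keys=True).encode()
    ).hexdigest()
    ledger[digest] = issuer
    return digest

def verify(presented_certificate: dict) -> str | None:
    """Return the issuer if the presented document matches an anchored hash."""
    digest = hashlib.sha256(
        json.dumps(presented_certificate, sort_keys=True).encode()
    ).hexdigest()
    return ledger.get(digest)

cert = {"holder": "A. Student", "degree": "BSc", "year": 2023}
anchor(cert, issuer="University-X")
print(verify(cert))                       # University-X
print(verify({**cert, "degree": "PhD"}))  # None: a tampered copy fails
```

Note that only the hash is published, so the certificate's contents need not be disclosed on the public ledger; this is one reason the pattern suits both diplomas and proprietary supply chain records.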
_2.5.16_ _Business ethics and effective corporate governance_

Business ethics and effective corporate governance are instrumental in achieving social sustainability (Ronaghi and Mosakhani, 2023). Business ethics, fundamental to workplace social interactions, has a significant impact on an organization's social dimension (Lashley, 2016). Blockchain introduces unparalleled transparency in global supply chains, enabling verification of ethical sourcing and adherence to fair labor practices. By documenting the journey of goods from their origin, initiatives like Everledger and Fairfood International leverage blockchain to combat the trade in conflict minerals and ensure products are produced under ethical conditions, empowering consumers with information to make responsible choices (Tang, 2018). Krishna et al. (2011) identified a positive relationship between business ethics and corporate performance. Other authors have examined corporate governance, which influences organizational social behaviors through stakeholder monitoring and power structures (Schultz et al., 2020; Aguilera et al., 2009), and the interaction between sustainability concepts and corporate governance concerning corporate performance (Krechovská and Prochazkova, 2014).

_2.6_ _Challenges of BT implementation and proposed solutions in sustainability programs_

The scholarly literature establishes that the incorporation of blockchain technology (BT) into sustainability initiatives is not without its challenges. These obstacles are extensively examined in the works of Mulligan et al. (2023) and Khanfar et al. (2021), among others, and warrant careful consideration in the discourse on the advancement of BT within sustainability programs:

- Scalability and security issues: Scalability is a critical technical challenge for blockchain, as increased transaction volumes can reduce system performance and efficiency. Security is also a concern; despite inherent protections, blockchain is susceptible to cyber threats like the 51% attack, and new vulnerabilities may emerge as the technology evolves.

- Energy consumption: Blockchain networks, especially those using proof-of-work (PoW) mechanisms, require significant computational resources, leading to high energy consumption, as seen in Bitcoin mining. This poses environmental concerns, particularly regarding sustainability objectives aimed at reducing carbon emissions.

- Regulatory and ethical considerations: The decentralized and transnational nature of blockchain creates complex regulatory challenges. Compliance with varying international regulations on digital currencies, data protection, and cross-border transactions is crucial. Ethical issues, such as data privacy and the potential for blockchain to enable illicit activities, also need to be addressed.

- Resistance to change and adoption hurdles: Resistance to adopting new technologies like blockchain is common, often due to limited understanding or concerns about potential impacts. Integrating blockchain into existing systems can be complex and costly, posing significant challenges for organizations, especially smaller ones with limited resources.

- Economic implications: BT's impact on employment and income distribution is complex. While it can offer new job opportunities, financial inclusivity, and skill development, it also has the potential to disrupt traditional employment structures and exacerbate income inequalities.
Addressing these issues requires regulatory frameworks, educational initiatives, and industrial adjustments to ensure equitable benefits distribution.

The successful and sustainable adoption of BT requires overcoming these challenges through the development of more energy-efficient consensus mechanisms, robust security measures, effective regulatory management, and stakeholder education about the technology's benefits and implementation. Table 5 presents comprehensive strategies for addressing the multifaceted challenges of integrating blockchain technology into sustainability initiatives. These solutions span technical innovations, regulatory adjustments, ethical considerations, educational efforts, and economic policies, highlighting the need for collaborative efforts among all stakeholders to harness blockchain technology's full potential for sustainable development.

**Table 5. Strategies for integrating blockchain technology into sustainability initiatives**

|Challenge Category|Solution|Description|
|---|---|---|
|Scalability and security|Layered architecture and off-chain solutions|Investigating off-chain solutions and implementing a layered blockchain architecture can improve scalability by offloading transaction processing, reducing congestion and increasing efficiency while maintaining security.|
||Sophisticated consensus mechanisms|Implementing streamlined consensus mechanisms like PoS or DPoS to reduce computational and energy demands, addressing scalability and security concerns simultaneously.|
|Energy utilization|Transition to energy-efficient consensus mechanisms|Moving from PoW to more energy-efficient mechanisms like PoS to significantly reduce blockchain's energy consumption and advance sustainability goals.|
||Implementation of renewable energy sources|Promoting the use of renewable energy sources in blockchain operations through regulations and incentives to minimize environmental impact.|
|Ethical and regulatory considerations|Frameworks for international regulations|Developing harmonized international frameworks to manage the decentralized nature of blockchain, ensuring privacy, data protection, and prevention of illegal activities.|
||Establishing ethical standards and guidelines|Setting up ethical guidelines and standards for blockchain applications to uphold user privacy and contribute positively to societal goals.|
|Resistance to change and adoption|Capacity building and education|Providing educational initiatives and resources on blockchain to demystify the technology and reduce opposition, emphasizing the importance of training for developers, users, and stakeholders.|
||Collaborations and pilot projects|Implementing pilot projects and fostering collaborations between industries, governments, NGOs, and blockchain developers to demonstrate the practical benefits and feasibility of blockchain, reducing integration complexity and costs.|
|Economic consequences|Inclusive economic policies|Developing policies that support financial inclusivity and SMEs to mitigate adverse effects on employment and income distribution, ensuring equitable benefits from blockchain technology.|
||Programs for skill development and job transition|Investing in skill development and retraining programs to prepare the workforce for new opportunities in the blockchain sector, positively impacting employment and income distribution.|
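For intuition about the consensus-mechanism rows of Table 5, the toy sketch below contrasts the two styles named there: proof of work spends computation searching for a hash below a difficulty target, while proof of stake selects a validator with a single stake-weighted draw, which is why the transition sharply reduces energy demands. Difficulty, stakes, and names are illustrative only.

```python
# Toy contrast between PoW search and PoS selection (for intuition only).
import hashlib
import random

def pow_mine(block_data: str, difficulty_prefix: str = "000") -> int:
    """Brute-force search: expected work grows exponentially with difficulty."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(difficulty_prefix):
            return nonce  # thousands to trillions of hashes, depending on target
        nonce += 1

def pos_select(stakes: dict, seed: int) -> str:
    """Stake-weighted draw: one cheap computation, no hashing race."""
    rng = random.Random(seed)
    validators, weights = zip(*stakes.items())
    return rng.choices(validators, weights=weights, k=1)[0]

print(pow_mine("block-1"))  # small difficulty so the demo terminates quickly
print(pos_select({"alice": 60, "bob": 30, "carol": 10}, seed=42))
```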
_2.7_ _Framework for BT impact on environmental, economic, and social domains_

This framework outlines the multifaceted impact of blockchain technology across environmental, economic, and social domains. For the environment, blockchain aids in monitoring carbon footprints and enhancing renewable energy markets, among other benefits. Economically, it supports fair trade, improves supply chain management, and promotes financial inclusion, thereby driving efficiency. Socially, blockchain ensures product authenticity and traceability, bolsters business ethics, and contributes to sustainable production practices. The framework also identifies challenges such as scalability and regulatory compliance, while highlighting blockchain's sustainable features like transparency and decentralization.

**Figure 2. BT impact framework**

**Source: Authors’ own work**

**3.** **The Future Outlook of Sustainability-Blockchain Integration**

The trajectory of blockchain integration in sustainability shows promise across technological advancements, policy considerations, and economic impacts. Innovations in BT are expected to address issues like energy consumption and scalability, such as transitioning from Proof of Work (PoW) to more sustainable mechanisms like Proof of Stake (PoS) or Proof of Authority (PoA), reducing blockchain's ecological impact. Blockchain's application across various industries could enhance supply chain transparency, optimize renewable energy distribution, and facilitate resource management, with smart contracts and decentralized applications enforcing sustainability standards.

Governments play a critical role in shaping the future of blockchain for sustainability, requiring legal frameworks that balance innovation with potential risks, including data privacy and financial stability concerns. Policymaking might incentivize blockchain adoption in sustainable practices, such as subsidies for companies using blockchain for sustainable supply chain tracing or legislative support for blockchain-based renewable energy solutions (Mulligan et al., 2023). International collaboration is essential, given the global nature of both blockchain and sustainability challenges. Harmonizing regulatory approaches can improve cross-border blockchain initiatives, contributing to global sustainability goals.

The economic impact of blockchain in various sectors could be significant, enabling new markets and opportunities, especially in sustainable goods and services. Blockchain's potential to disrupt market structures, reduce intermediaries, lower transaction costs, and democratize market access could redistribute economic power. It may also influence investment patterns, as its ability to verify and track sustainable practices could attract more investments into sustainable projects, impacting capital flows in global markets.

In conclusion, blockchain's convergence with sustainability is a dynamic area with the potential for significant advancements in technology, policy, and economics. Despite challenges, the ongoing development of blockchain, supportive government policies, and its growing economic influence could advance global sustainability efforts.

**4.** **Conclusion and Future Research Directions**

This article offers an in-depth analysis of blockchain technology's potential to significantly contribute to sustainable development across environmental, economic, and social dimensions.
It meticulously explores how BT can act as a transformative tool, addressing global sustainability challenges by enhancing transparency, promoting resource efficiency, and facilitating equitable economic growth. One of the key topics discussed in this paper is how BT can serve as a powerful enabler for achieving sustainability goals by ensuring the traceability and authenticity of products, which is crucial for environmental stewardship and social justice. Through various applications, such as in supply chain management, renewable energy sectors, and conservation efforts, BT has the potential to reduce carbon emissions, improve resource allocation, and support sustainable business practices. Furthermore, BT fosters a unique synergy among environmental, economic, and social dimensions by enabling transparent, secure, and efficient operations across various sectors. Its immutable ledger ensures the traceability of products, promoting environmental sustainability through the verification of ethical sourcing and waste reduction practices. Economically, blockchain reduces operational costs by streamlining transactions and eliminating intermediaries, while also providing financial inclusion for underserved populations through decentralized financial services. Socially, the technology enhances transparency and trust among consumers, businesses, and communities, supporting fair labor practices and equitable resource distribution. This multifaceted impact not only encourages responsible consumption and production but also empowers individuals and communities by democratizing access to resources and services. By addressing these pillars simultaneously, BT creates a holistic approach to sustainable development, aligning with global efforts to achieve the UN-SDGs. Its application across industries represents a transformative shift towards a more sustainable, equitable, and interconnected world, showcasing the potential of technology to resolve complex global challenges harmoniously. The paper also highlights the importance of addressing the challenges and limitations associated with the implementation of BT for sustainable solutions, including concerns over scalability, energy consumption, and regulatory barriers. The authors emphasize the need for strategic recommendations and a comprehensive approach that balances the opportunities and obstacles of using BT in sustainability efforts. In conclusion, this research underscores the transformative potential of BT in promoting sustainable development. It calls for a collaborative effort among stakeholders, including policymakers, practitioners, and researchers, to leverage BT effectively. By addressing the identified challenges and harnessing the capabilities of BT, there is a significant opportunity to advance towards a more sustainable, equitable, and environmentally friendly future. The findings of this paper contribute to both theoretical understanding and practical implementation of blockchain technology as a catalyst for sustainable development, offering guidance and insights for future research and policy-making in this evolving domain. However, to fully harness the sustainability potential of blockchain, further technological advancements are necessary to address and minimize its environmental impact. 
Future investigations should direct their attention toward several crucial domains:

- Explore and advance blockchain technologies characterized by reduced energy consumption to address environmental issues related to energy efficiency.
- Establish regulatory frameworks for the effective governance of BT, ensuring its deployment is in line with global sustainability objectives. These frameworks encompass the development of comprehensive rules and regulatory guidelines.
- Investigate scaling solutions that can effectively manage higher transaction volumes while maintaining optimal levels of energy efficiency and security.
- Investigate the cross-sector applications of BT, with a specific focus on its potential contributions to sustainable practices in non-traditional industries such as agriculture, healthcare, and public governance.
- Evaluate the wider socio-economic implications of BT in facilitating sustainability, specifically examining its influence on market dynamics and global trade.
- Finally, the utilization of BT shows great potential in promoting sustainability objectives. However, it is crucial to exercise prudent oversight in its implementation to guarantee a favorable contribution to the overall ecosystem. Ongoing advancements in innovation, research, and policy formulation will play a crucial role in effectively leveraging this technology to achieve a more sustainable future.

**References**

Aerial (2023). Available at: https://aerial.ai. Accessed: November 16, 2023.
Aguilera, R. C., Ortiz, M. P., Ortiz, J. P., & Banda, A. A. (2021). Internet of things expert system for smart cities using the blockchain technology. Fractals, 29(01), 2150036.
Ahmad, M. (2023). Top 10: Energy companies using blockchain technology. Energy. Available at: https://energydigital.com/top10/top-10-energy-companies-using-blockchain-technology. Accessed: November 16, 2023.
Alhasan, H., and Hamdan, A. (2023). Blockchain technology and environmental sustainability. In: El Khoury, R., Nasrallah, N. (eds) Emerging Trends and Innovation in Business and Finance. Contributions to Management Science. Springer, Singapore.
Al-Issa, N., Thanasi-Boçe, M., & Ali, O. (2022). Boosting luxury sustainability through blockchain technology. In Blockchain Technologies in the Textile and Fashion Industry (pp. 17-46). Singapore: Springer Nature Singapore.
Allena, M. (2020). Blockchain technology for environmental compliance: Towards a 'choral' approach. Environmental Law Review, 50(4).
Arshad, A., Shahzad, F., Rehman, I. U., and Sergi, B. S. (2023). A systematic literature review of blockchain technology and environmental sustainability: Status quo and future research. International Review of Economics & Finance.
Bai, Y., Hu, Q., Seo, S.-H., Kang, K., and Lee, J. J. (2022). Public participation consortium blockchain for smart city governance. IEEE Internet of Things Journal, 9(3), 2094-2108.
Balzarova, M., Dyer, C., and Falta, M. (2022). Perceptions of blockchain readiness for fairtrade programmes. Technological Forecasting and Social Change, 185, 122086.
Bhusal, C. S. (2021). Blockchain technology in agriculture: A case study of blockchain start-up companies. International Journal of Computer Science and Information Technology, 13(5).
Böckel, A., Nuzum, A. K., and Weissbrod, I. (2021). Blockchain for the circular economy: Analysis of the research-practice gap.
{ "disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.36941/ajis-2024-0041?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.36941/ajis-2024-0041, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "CCBYNC", "status": "HYBRID", "url": "https://www.richtmann.org/journal/index.php/ajis/article/download/13698/13257" }
2,024
[ "JournalArticle", "Review" ]
true
2024-03-05T00:00:00
[ { "paperId": "83c5079a34d7b9819803f07bb7671ccab4635de1", "title": "A blockchain-based approach to the challenges of EU’s environmental policy compliance in aquaculture: From traceability to fraud prevention" }, { "paperId": "6b257a7ff1605e0c2f34cac31fabc120c9d1564d", "title": "Blockchain-based applications and energy effective electric vehicle charging - A systematic literature review, challenges, comparative analysis and opportunities" }, { "paperId": "5775ed1a34ba38c1449241d0e751c5d7021dbcf4", "title": "How can blockchain be integrated into renewable energy? --A bibliometric-based analysis" }, { "paperId": "144d7b8ab7ca3b083fcb5775e64e9324779bf2c9", "title": "Blockchain empowerment in construction supply chains: Enhancing efficiency and sustainability for an infrastructure development" }, { "paperId": "86e9a896350472b9c046dae02100324214c1b218", "title": "Blockchain for sustainability: A systematic literature review for policy impact" }, { "paperId": "59aad4b562030c0f33f8886f81b29ba7a33a4463", "title": "Blockchain technology in building environmental sustainability: A systematic literature review and future perspectives" }, { "paperId": "a74bfadd545e542a89da2203df07d74cc5f99da0", "title": "A Blockchain Based Framework for Efficient Water Management and Leakage Detection in Urban Areas" }, { "paperId": "6495ccdaf44d49a1912e2725f145f0dd6fb16c2f", "title": "Sustainable supply chain, digital transformation, and blockchain technology adoption in the tourism sector" }, { "paperId": "c215e6685bbe18c59fbe4d6bbde4fb71efc69325", "title": "The role of blockchain technology in the sustainability of supply chain management: Grey based dematel implementation" }, { "paperId": "a0dc8d1c8c3f2e54340930553fe63106bc6a7172", "title": "Evaluation of blockchain implementation solutions in the sustainable supply chain: A novel hybrid decision approach based on Z-numbers" }, { "paperId": "8ac15c9704ac8879438456140a3e8386baf7ef6e", "title": "A Conceptual Study on the Role of Blockchain in Sustainable Development of Public–Private Partnership" }, { "paperId": "4a234597889335155fa1541f82314593a91b417c", "title": "Blockchain applicability in the management of urban water supply and sanitation systems in Spain." 
}, { "paperId": "cea332126c771539cd44d5ef676785b62e566450", "title": "A systematic literature review of blockchain technology and environmental sustainability: Status quo and future research" }, { "paperId": "2cc63bbcf11aa7b82860df88a98183e4c168c089", "title": "Toward an integration of blockchain technology in the food supply chain" }, { "paperId": "c4654c462e613b0dfadad5046817037e86bb0e47", "title": "The Role of Blockchain Technology in Promoting Traceability Systems in Agri-Food Production and Supply Chains" }, { "paperId": "58ea95274cd9ef4274722c791487cd528f16c1fb", "title": "Using the Transparency of Supply Chain Powered by Blockchain to Improve Sustainability Relationships with Stakeholders in the Food Sector: The Case Study of Lavazza" }, { "paperId": "893c512a0f62537c56eeb2d6fe7ffec8d6137d25", "title": "A global blockchain-based agro-food value chain to facilitate trade and sustainable blocks of healthy lives and food for all" }, { "paperId": "2891a2bbb6d32b9e360017b2b5e68c25eb8afde9", "title": "Blockchain technology for distributed generation: A review of current development, challenges and future prospect" }, { "paperId": "4fb6300f10dc4ffd4ad4130f3a7eb36f044b6ce5", "title": "Blockchain-Based Welfare Distribution Model for Digital Inclusivity" }, { "paperId": "9b8f14a8c0a64ca6539325c788e4db2b43412167", "title": "Present and future perspectives of blockchain in supply chain management: a review of reviews and research agenda" }, { "paperId": "7f774056ca253c59d7366e0efde9e395c9ed0813", "title": "Investigating the barriers to the adoption of blockchain technology in sustainable construction projects" }, { "paperId": "6757dfb563758c98263828dabf2e32fb5c17e91f", "title": "Blockchain technologies for sustainability in the agrifood sector: A literature review of academic research and business perspectives" }, { "paperId": "b4a6d4a81fa9c5dcc5793d75027329ec6eaac4da", "title": "Perceptions of Blockchain Readiness for Fairtrade Programmes" }, { "paperId": "427295959a43638eb0dd814e6efaa94ed5de979e", "title": "Blockchain solutions for carbon markets are nearing maturity" }, { "paperId": "c2e710b26cf2883f0bbbdacf476176722d6bf6b5", "title": "Blockchain, climate damage, and death: Policy interventions to reduce the carbon emissions, mortality, and net-zero implications of non-fungible tokens and Bitcoin" }, { "paperId": "f50aeab4fda4629387efce3411f00191a3e932cc", "title": "Blockchain technology in food supply chains: Review and bibliometric analysis" }, { "paperId": "515602084236ba2a2e28226f77a01547ca47ff00", "title": "Blockchain Technologies: A Study of the Future of Education" }, { "paperId": "9c2f3c2b979f9f6d7535414ee627af5e5d24dd41", "title": "Public Participation Consortium Blockchain for Smart City Governance" }, { "paperId": "12fb72aa41e95d7a5ec04a5c6fdb795871983b33", "title": "Sustainable Development Goals (SDGs) as a Framework for Corporate Social Responsibility (CSR)" }, { "paperId": "2ce03b0d6a0ce6efc4b612f1a4a22f9f486b71c3", "title": "Predicting Ethereum prices with machine learning based on Blockchain information" }, { "paperId": "39c01c7bd740c59c923b125536532dbdd755261f", "title": "Blockchain Technology in Agriculture: A Case Study of Blockchain Start-Up Companies" }, { "paperId": "359f561a61da65defadffc4807a89f0858c39897", "title": "How blockchain renovate the electric vehicle charging services in the urban area? A case study of Shanghai, China" }, { "paperId": "6502e646b5be0db1f0b265a39052a0aa2071e4de", "title": "Is blockchain able to enhance environmental sustainability? 
A systematic review and research agenda from the perspective of Sustainable Development Goals (SDGs)" }, { "paperId": "9787abbfc6ee3aa67cd9f801364229058ffe798c", "title": "Applications of Blockchain Technology in Sustainable Manufacturing and Supply Chain Management: A Systematic Review" }, { "paperId": "ad5f8c9523df448e3af8633defc89f48f0db3cb7", "title": "Coupling mechanism of green building industry innovation ecosystem based on blockchain smart city" }, { "paperId": "f4ca231b52e3be27e9097d9f0e71e15aa39046e1", "title": "Smart water conservation through a machine learning and blockchain-enabled decentralized edge computing network" }, { "paperId": "227c7263c0a65589708ecc429b1e198931271841", "title": "The Effect of Blockchain Technology on Supply Chain Sustainability Performances" }, { "paperId": "979a7afae01dd40825f0c7aecaf6ec2391d22cad", "title": "Blockchain technology and the sustainable supply chain: Theoretically exploring adoption barriers" }, { "paperId": "4a6361c924191e919b322bb7f7cc88d364d40ad0", "title": "Blockchain for the Circular Economy: Analysis of the Research-Practice Gap" }, { "paperId": "9e4d761cd4c68647ec93960cd2c83fe2a96ca4aa", "title": "INTERNET OF THINGS EXPERT SYSTEM FOR SMART CITIES USING THE BLOCKCHAIN TECHNOLOGY" }, { "paperId": "60ec11beba5f3f63409affe028a46256e22acb68", "title": "Review on blockchain technology and its application to the simple analysis of intellectual property protection" }, { "paperId": "573ea4dca564627225b4ecdcd70c1a1c155d15cb", "title": "Exploring the Governance and Implementation of Sustainable Development Initiatives through Blockchain Technology" }, { "paperId": "41aa55e38496067febbbfcb8437687b2cb6f581b", "title": "Smart contracts to enable sustainable business models. A case study" }, { "paperId": "599a846955e74dedb410e5dab835e3773580612e", "title": "System architecture for blockchain based transparency of supply chain social sustainability" }, { "paperId": "1c18e533121100985c53fb79420f09f2df2857ab", "title": "Carbon Trading with Blockchain" }, { "paperId": "7278575118b28164a20d6250d5be296b778dcbf7", "title": "Building trust and equity in marine conservation and fisheries supply chain management with blockchain" }, { "paperId": "b45a3e6a7afd4ac786ffafb01b6d197e6b59ad10", "title": "Blockchain-Based Traceability and Visibility for Agricultural Products: A Decentralized Way of Ensuring Food Safety in India" }, { "paperId": "1b90d2a8c1affe6b4c957b20d92d0f2654df44b1", "title": "Constraints and Benefits of the Blockchain Use for Real Estate and Property Rights" }, { "paperId": "bbdf3c6c4a479501f317d34663a03b3eab8aa2d6", "title": "Blockchain energy: Blockchain in future energy systems" }, { "paperId": "9da438a348b177a62dcc15d302697b366517ca76", "title": "Blockchain-Empowered E-commerce: Redefining Trust, Security, and Efficiency in Digital Marketplaces in the Context of Bangladesh." }, { "paperId": "113971f6ca186f0bc7fb3d0672a7082f5f1a3dc8", "title": "Blockchain Technology for Environmental Compliance: Towards A 'Choral' Approach" }, { "paperId": "0bdcb2de1d622781b95d9583ccab80191c80e707", "title": "Are millennials really more sensitive to sustainable luxury? 
A cross-generational international comparison of sustainability consciousness when buying luxury" }, { "paperId": "ee3679d4566368a7a0fe083b658bbed33dd1bee6", "title": "Blockchain and the future of energy" }, { "paperId": "8bf68ed110b8f372ae662808429dabef8ea600b7", "title": "Blockchain and Sustainability: A Systematic Mapping Study" }, { "paperId": "c1465188cca1a449d2ba6245c95f7876d7936441", "title": "Blockchain: A technological tool for sustainable development or a massive energy consumption network?" }, { "paperId": "ab6d3d6dd694d1210006ed5d1163dcf0d402cbf3", "title": "Three pillars of sustainability: in search of conceptual origins" }, { "paperId": "f06fdd3bdd9ab615f72e73a8b7cb61562fc01b4b", "title": "Why Architecture Does Not Matter: On the Fallacy of Sustainability Balanced Scorecards" }, { "paperId": "677d276996ba7a84b9078e9c413cbb1d8820a15e", "title": "1 Blockchain's roles in meeting key supply chain management objectives" }, { "paperId": "c35147e0337416d10011aac7348b589732f02adf", "title": "The Implications of Blockchain for Income Inequality" }, { "paperId": "430169c00bccf06de97beca2572886871cfdacdb", "title": "Supply chain social sustainability for developing nations: Evidence from India" }, { "paperId": "24cdeb7d7421012c2fdd362b8e2816c105b7071f", "title": "An agri-food supply chain traceability system for China based on RFID & blockchain technology" }, { "paperId": "6af2fa42a1673631b420dfe3bd4185519b8de03c", "title": "Sustainable supply chain management: Evolution and future directions" }, { "paperId": "c7f1a16a3e8f6ed10d704eb2ce7428c174d3cfe2", "title": "A framework of sustainable supply chain management: moving toward new theory" }, { "paperId": "a6245960b75630a0a24c7a4816551605ac3d3ab3", "title": "Sustainability Science: A room of its own" }, { "paperId": null, "title": "Combating Luxury Counterfeiting Through Blockchain Technology" }, { "paperId": "89356474123db591e5593d87f58b055c3c0c8be3", "title": "Tokenizing Renewable Energy Certificates (RECs)—A Blockchain Approach for REC Issuance and Trading" }, { "paperId": "3b3b4567dabcaf3ce3c9c55d9318f35343b9ae84", "title": "A novel electric vehicle charging chain design based on blockchain technology" }, { "paperId": "accdb4b200bf1460fda9903eaac964a2af953a0f", "title": "Generating Employment with Equipped and Trained Workforce Using Blockchain Technology" }, { "paperId": "4aa925df769289815bd0dcfe2e25572705fd4e11", "title": "Blockchain and agricultural supply chains traceability: research trends and future challenges" }, { "paperId": "5c5e99f1438535bc5668eafbb3a0671e13c3f988", "title": "TRENDS IN GLOBAL CO2 AND TOTAL GREENHOUSE GAS EMISSIONS" }, { "paperId": null, "title": "Infrastructural Grind”" }, { "paperId": null, "title": "Liberating wage slaves: Towards sustainable employment practices" }, { "paperId": "df308927c925496bc1708bc709d09f97ce90b78a", "title": "on Intelligent Manufacturing and Automation , 2013 Sustainability and its Integration into Corporate Governance Focusing on Corporate Performance Management and Reporting" }, { "paperId": "28cc60bcfbded0c1583b9e8c12ee72e02be51113", "title": "Business Ethics: A Sustainability Approach" }, { "paperId": null, "title": "VAKT_Building the world's first enterprise-level blockchain platform" }, { "paperId": null, "title": "Aerial" } ]
22,128
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Engineering", "source": "s2-fos-model" }, { "category": "Environmental Science", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/02aa857d4991ffc0fe8e1992c1f6e5b1b94d39b0
[ "Computer Science" ]
0.840041
LSec: Lightweight Security Protocol for Distributed Wireless Sensor Network
02aa857d4991ffc0fe8e1992c1f6e5b1b94d39b0
IFIP International Conference on Personal Wireless Communications
[ { "authorId": "1794236", "name": "R. Shaikh" }, { "authorId": "31273100", "name": "Sungyoung Lee" }, { "authorId": "2109254546", "name": "Mohammad A. U. Khan" }, { "authorId": "48481767", "name": "Y. Song" } ]
{ "alternate_issns": null, "alternate_names": [ "PWC", "IFIP Int Conf Pers Wirel Commun" ], "alternate_urls": null, "id": "06efd4cf-6136-4948-b84a-68bde61dbe66", "issn": null, "name": "IFIP International Conference on Personal Wireless Communications", "type": "conference", "url": null }
null
## LSec: Lightweight Security Protocol for Distributed Wireless Sensor Network*

Riaz Ahmed Shaikh, Sungyoung Lee, Mohammad A.U. Khan, and Young Jae Song

Department of Computer Engineering, Kyung Hee University, Sochen-ri, Giheung-eup, Yongin-si, Gyeonggi-do, 449-701, South Korea {riaz, sylee, khan}@oslab.khu.ac.kr, yjsong@khu.ac.kr

* This work is financially supported by the Ministry of Education and Human Resources Development (MOE), the Ministry of Commerce, Industry and Energy (MOCIE) and the Ministry of Labor (MOLAB) through the fostering project of the Lab of Excellency. The corresponding author of this paper is Prof. Sungyoung Lee.

**Abstract.** Constraint-specific wireless sensor networks need energy-efficient and secure communication mechanisms. In this paper we propose the Lightweight Security protocol (LSec), which fulfils both requirements. LSec provides authentication and authorization of sensor nodes with a simple secure key exchange scheme. It also provides confidentiality of data and a protection mechanism against intrusions and anomalies. LSec is memory efficient, requiring 72 bytes of storage for keys, and introduces only 74.125 bytes of transmission and reception cost per connection.

#### 1 Introduction

Wireless sensor networks consist of a large number of small sensor nodes deployed in the observed environment. Sensor nodes have small memory (8K of total memory and disk space) and limited computation power (8-bit, 4 MHz CPU) [1]. They usually communicate with a powerful base station that connects the sensor nodes with external networks. The limited energy at sensor nodes hinders the implementation of complex security schemes. There are two major factors of energy consumption: (1) transmission and reception of data, and (2) processing of query requests. Wireless networks are relatively more vulnerable to security attacks than wired networks due to the broadcast nature of communication [1]. Consequently, any security mechanism for sensor networks must keep communication overhead low and consume little computation power. Under these constraints it is impractical to use the traditional security algorithms and mechanisms meant for powerful workstations.

Sensor networks are vulnerable to a variety of security threats such as DoS, eavesdropping, message replay, message modification, and malicious code. In order to secure sensor networks against these attacks, we need to implement message confidentiality, authentication, message integrity, intrusion detection, and other security mechanisms. Encrypting communication between sensor nodes can partially solve the problem, but it requires a robust key exchange and distribution scheme. In general, there are three types of key management schemes [2,3]: trusted server schemes, self-enforcing schemes, and key pre-distribution schemes. Trusted server schemes rely on a trusted base station, which is responsible for establishing the key agreement between two communicating nodes, as described in [4]. They use symmetric key cryptography for data encryption. The main advantages of this approach are that it is memory efficient (each node only needs to store a single secret key) and that it is resilient to node capture. The drawback is that it is energy expensive: it introduces extra routing overhead because each node needs to communicate with the base station several times [3]. Self-enforcing schemes use public key cryptography for communication between sensor nodes.
This scheme is perfectly resilient against node capture and is fully scalable and memory efficient. The problem with traditional public key cryptography schemes such as DSA [5] or RSA [6], however, is that they require complex and intensive computations that sensor nodes with limited computation power cannot perform. Some researchers [7,8] use elliptic curve cryptography as an alternative to traditional public key systems, but it is still not a perfect fit for sensor networks. The third type is the key pre-distribution scheme based on symmetric key cryptography, in which a limited number of keys is stored on each sensor node prior to deployment. This scheme is easy to implement and does not introduce any additional routing overhead for key exchange; the degree of resilience against node capture depends on the pre-distribution scheme [3]. Quite recently, several security solutions have been proposed specifically for wireless sensor networks [9,10,11,12,13], but each suffers from various limitations, such as higher memory and power consumption, which are discussed in Section 4.

Keeping all these factors in mind, we propose a lightweight security protocol (LSec) for wireless sensor networks. LSec combines the features of trusted server and self-enforcing security schemes. Our main contribution is the design and implementation of LSec, which provides:

- Authentication and authorization of sensor nodes.
- A simple secure key exchange scheme.
- A secure defense mechanism against anomalies and intrusions.
- Confidentiality of data.
- Usage of both symmetric and asymmetric schemes.

The rest of the paper is organized as follows. Section 2 describes the details of LSec. Section 3 presents the simulation results and evaluation of LSec. Section 4 compares LSec with other security solutions, and Section 5 concludes with future directions.

#### 2 Light Weight Security Protocol (LSec)

The basic objective of LSec is to provide a lightweight security solution for wireless sensor networks in which all nodes can communicate with each other. LSec supports both static and mobile environments, which may contain single or multiple Base Stations (BS). The basic system architecture is shown in Figure 1. LSec uses both symmetric and asymmetric schemes to provide secure communication in wireless sensor networks.

**Fig. 1. LSec System Architecture** (figure: sensor nodes and cluster heads connected to a Base Station that hosts the KMM, TGM, and AzM modules)

The Key Management Module (KMM) stores in the database each node's public key and the secret key it shares with the BS. The Token Generator Module (TGM) generates tokens for requesters; a token is later used by the other communicating party to authenticate the requester node. The Authorization Module (AzM) checks whether a particular node is allowed to communicate with another node or group. Lightweight mobile agents are installed only on cluster heads; they send alert messages to an intrusion detection system (IDS), which is responsible for detecting any anomaly or intrusion in the network. The basic assumptions and rules of LSec are given below.

**2.1 Assumptions**

1. The Base Station (BS) is the trusted party and will never be compromised. Compromising the Base Station can render the entire sensor network useless, and it is the only point from which sensor nodes can communicate with external networks.
2. Only the Base Station (BS) knows the public keys (Pk) of all the sensor nodes in the network.
Communicating nodes learn each other's public keys at the time of connection establishment.

**2.2 Rules**

- The asymmetric scheme is used only for sharing an ephemeral secret key between communicating nodes.
- For every session, a new random secret key is used.
- Data is encrypted using symmetric schemes, because these are considered to execute three to four times faster than asymmetric schemes [14].

**2.3 LSec Packet Format**

The LSec packet format is shown in Table 1. LSec currently uses seven types of packets: 'Request', 'Response', 'Init', 'Ack', 'Data', 'UpdateGroupKey', and 'Alert'. The seven packets are distinguished by the 'type' field in the LSec packet. The IDsrc field contains the id of the sending node, and the final encrypted portion contains information that depends on the type of packet, as shown in Table 1.

**Table 1. LSec packet types**

| Type | IDsrc | Encrypted Portion |
|---|---|---|
| Request | Any (sensor node) | EK_A-BS(Intended-IDdest, N) |
| Response | BS | EK_A-BS(R-type, Intended-IDdest, N, Pk, token or R) |
| Init | Any (sensor node) | EK_B+(N, Pk, token) |
| Ack | Any (sensor node) | EK_A+(N, sk) |
| Data | Any (sensor node) | EK_sk(data) |
| UpdateGroupKey | Any CH sensor node | EK_G(GroupID, new key), MAC |
| Alert | Any CH sensor node | EK_CH-BS(Alert-type), MAC |

Here EK_A-BS = encrypt with the secret key shared between node A and the BS; EK_A+ and EK_B+ = encrypt with the public key of node A or node B; EK_sk = encrypt with the shared session secret key; EK_G = encrypt with the group key; EK_CH-BS = encrypt with the secret key shared between the cluster head and the BS; R-type = response type (positive or negative); R = reason for a negative acknowledgement; Intended-IDdest = ID of the intended destination; Pk = public key; IDsrc = ID of the source node; N = nonce (unique random number); MAC = message authentication code; CH = cluster head.

The distribution of bits among the fields (shown in Table 2) introduces some upper limits. The source address is 2 bytes, which means LSec only works in environments with at most 2^16 sensor nodes. The nonce (unique random number) field is 3 bytes, so LSec can allow a maximum of 2^24 connections at a time. The public and private keys are exactly 128 bits long and the secret key is exactly 64 bits long. Because packets have a fixed size, only stream cipher encryption algorithms may be used. The MAC is 64 bits.

**Table 2. Distribution of bits to different fields of LSec**

| Field | Size | Field | Size |
|---|---|---|---|
| Type | 4 bits | Public and private key | 128 bits |
| IDsrc, IDdest | 16 bits | Secret key | 64 bits |
| Nonce (N) | 23 bits | token | 4 bytes |
| R-type | 1 bit | data | 30 bytes |

**2.4 Procedure**

LSec works in three phases: an authentication and authorization phase, a key distribution phase, and a data transmission phase. Authentication and authorization are performed during the exchange of the "Request" and "Response" packets, using the symmetric scheme. The key distribution phase involves sharing a random secret key in a secure manner using the asymmetric scheme; in this phase the "Init" and "Ack" packets are exchanged. The data transmission phase involves transmitting data packets in encrypted form.

Suppose node A wants to communicate with node B. It first sends a Request packet to the Base Station to receive a token and the public key of node B. The Request packet is encrypted with the secret key shared between node A and the BS. The BS first checks in the database, via the AzM, whether node A has the right to establish a connection with node B. If so, it generates the token that node B will later use to authenticate node A. That token is encrypted with the secret key shared between node B and the BS, so node A is not able to decrypt it. The BS sends back a Response packet containing the token, the public key of node B, and the nonce (unique random number) that was in the Request packet. The nonce assures node A that the packet came from the genuine BS. When node A gets a positive response from the BS, it sends the Init packet to node B, containing the nonce, its own public key, and the token generated by the BS. The whole Init packet is encrypted with the public key of node B. When node B receives the Init packet it first checks the token; if the token is correct, node B generates the secret session key and sends it back to node A in encrypted form. When node A receives the Ack packet, it deletes the public key of node B from its memory and sends data to node B using the new session secret key. When data transmission completes, both nodes delete the session key. For group communication, each node uses the group secret key for secure data transmission; the cluster head updates this key at periodic intervals.
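To make the three-phase message flow concrete, below is a minimal Python sketch of the exchange just described. It is an illustration, not the authors' implementation: the `encrypt`/`decrypt` helpers are hypothetical stand-ins that merely mark which key protects which payload (a real deployment would use an actual stream cipher for the symmetric operations, true public-key encryption for the Init/Ack packets, and the 64-bit MACs required by Table 1), and the BS id of 0 is an assumption.

```python
import os
from dataclasses import dataclass

def encrypt(key: bytes, payload: bytes) -> bytes:
    # Placeholder XOR "cipher": only indicates which key protects a payload.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(payload))

decrypt = encrypt  # XOR is its own inverse

@dataclass
class Packet:
    ptype: str   # 4-bit 'type' field: Request/Response/Init/Ack/Data/...
    id_src: int  # 16-bit source id
    enc: bytes   # encrypted portion, per Table 1

# Pre-deployment keys (Assumption 2): the BS shares a 64-bit secret with each
# node and knows every node's 128-bit public key. BS id 0 is an assumption.
shared_with_bs = {1: os.urandom(8), 2: os.urandom(8)}
public_key = {1: os.urandom(16), 2: os.urandom(16)}

# Phase 1 (authentication/authorization): A (id 1) asks the BS about B (id 2).
nonce = os.urandom(3)  # 3-byte nonce N
request = Packet('Request', 1, encrypt(shared_with_bs[1], bytes([2]) + nonce))

# The BS authorizes via the AzM, has the TGM mint a 4-byte token for B, and
# replies; A can read the response, but the token stays under B's key.
token_for_b = encrypt(shared_with_bs[2], os.urandom(4))
response = Packet('Response', 0,
                  encrypt(shared_with_bs[1], nonce + public_key[2] + token_for_b))

# Phase 2 (key distribution): Init under B's public key, Ack carries fresh sk.
init = Packet('Init', 1, encrypt(public_key[2], nonce + public_key[1] + token_for_b))
session_key = os.urandom(8)  # new random 64-bit key per session
ack = Packet('Ack', 2, encrypt(public_key[1], nonce + session_key))

# Phase 3 (data transmission): 30-byte payloads under the ephemeral session key.
data = Packet('Data', 1, encrypt(session_key, b'30-byte application payload...'))
assert decrypt(session_key, data.enc) == b'30-byte application payload...'
```

Note how the token minted by the TGM is opaque to A because it is encrypted under the key B shares with the BS; this is what lets B authenticate A even though A and B have no prior shared secret.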
#### 3 Simulation and Performance Analysis

We tested the LSec protocol on the Sensor Network Simulator and Emulator (SENSE) [15]. In each sensor node we introduce a middleware layer between the application layer and the network layer, as shown in Figure 2.

**Fig 2. Sensor Node Architecture** (figure: application layer, LSec middleware, network layer, link layer, and physical layer, together with battery/power, mobility, and channel components)

This middleware uses LSec to enforce security in the sensor network. At the application layer we use a constant bit rate (CBR) component that generates constant traffic between two communicating sensor nodes during the simulation. For the demonstration and performance evaluation of LSec, CBR is run with and without LSec. We randomly deploy 100 sensor nodes plus one Base Station (BS) in a 1000 by 1000 terrain. The basic simulation parameters are described in Table 3.

**Table 3. Simulation Parameters**

| Parameter | Value |
|---|---|
| Terrain | 1000x1000 |
| Total number of nodes | 101 (including BS) |
| Initial battery of each sensor node | 1x10^6 J |
| Power consumption for transmission | 1.6 W |
| Power consumption for reception | 1.2 W |
| Idle power consumption | 1.15 W |
| Carrier sense threshold | 3.652e-10 W |
| Receive power threshold | 1.559e-11 W |
| Frequency | 9.14e8 |
| Transmitting & receiving antenna gain | 1.0 |

**3.1 Performance Analysis of Communication Overhead**

In our simulation scenario, the application sends data packets of 30 bytes at a periodic interval. The overall communication overhead of LSec for one-to-one communication decreases as the number of transferred data packets increases, as shown in Figure 3. The communication overhead (CO, in %) is calculated as

CO(%) = (Nc × 74.125) / (Σ_{i=1}^{n} N_i^P × 30) × 100,   (1)

where Nc is the total number of connections and N_i^P is the number of packets transferred by node i. Nc is multiplied by 74.125 bytes because, for every connection, LSec exchanges four control packets (Request, Response, Init, and Ack) during the authentication, authorization and key exchange phases, whose cumulative size is 74.125 bytes; the size of each data packet is 30 bytes.

**Fig. 3. Communication Overhead (%) of LSec** (figure: overhead falling as the number of 30-byte data packets per connection grows from 10 to 100)
**3.2 Performance Analysis of Power Computation**

Power computation depends primarily on the kind of symmetric and asymmetric schemes used. Assume that the computation power required by the symmetric encryption and decryption scheme is CSE and CSD, respectively, and that of the asymmetric encryption and decryption scheme is CAE and CAD, respectively. Then the total power consumption required by a single node during the first two phases is

Power Computation = (CSE + CSD) + (CAE + CAD).   (2)

The computation power required by a single node during the data transmission phase is calculated as

Power Computation = (TNSP × CSE) + (TNRP × CSD),   (3)

where TNSP is the total number of sent data packets and TNRP is the total number of received data packets.

**3.3 Performance Analysis of Memory Consumption**

Every sensor node needs to store only six keys: three permanent and three ephemeral. The permanent keys are the node's own public key, its private key, and the secret key it shares with the BS. The ephemeral keys are the group key, the public key of the other node, and the session secret key. Only 72 bytes are needed to store these keys; the details are given in Table 4. This approach makes the sensor network memory efficient.

**Table 4. Storage Requirement of Keys**

| S/No | Key | Size (bytes) |
|---|---|---|
| | Permanent keys | |
| 1 | Public key of node | 16 |
| 2 | Private key of node | 16 |
| 3 | Shared secret key between node & BS | 8 |
| | Ephemeral keys | |
| 4 | Group key | 8 |
| 5 | Public key of other node | 16 |
| 6 | Session key | 8 |
| | Total storage required | 72 |

**3.4 Performance Analysis of Energy Consumption**

The main source of energy consumption at a sensor node is its transmission and reception cost. We used SENSE, which consumes energy in four different modes: TRANSMIT, RECEIVE, IDLE, and SLEEP; the energy consumption rate of each mode is given in Table 3. For each connection, LSec exchanges four control packets (Request, Response, Init, and Ack) of cumulative size 74.125 bytes, required for the authentication, authorization and key exchange mechanism. That is an acceptable tradeoff between energy and security. The simulation result for energy consumption is shown in Figure 4.

**Fig 4. Energy Consumptions** (figure: remaining energy per node after the simulation, with and without LSec, for nodes 1 to 99 starting from an initial energy of 1x10^6 J; the two curves differ only marginally)

**3.5 Resilience Against Node Compromise**

A single compromised node will not expose the whole communication in the network; only the communication links established with the compromised node are exposed. Suppose Ncn is the set of nodes that establish connections and Ncp is the set of compromised nodes. Then Ncn ∩ Ncp gives the set of nodes that are both compromised and connected. The maximum number of links is exposed only if all compromised nodes are connected to uncompromised nodes, while the minimum number is exposed only if compromised nodes are connected to each other:

Max = |Ncn ∩ Ncp|,   (4)

Min = |Ncn ∩ Ncp| / 2 if |Ncn ∩ Ncp| is even, and (|Ncn ∩ Ncp| + 1) / 2 if it is odd.   (5)

If we assume that the sensor network consists of 1000 nodes with 500 connections established between pairs of nodes, the minimum and maximum numbers of links that can be compromised are shown in Figure 5.

**Fig. 5. Percentage of Compromised Links** (figure: Min and Max percentage of compromised links for N = 1000 and 500 connections, as the number of compromised nodes grows from 50 to 500)
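A small Python check of the bounds in Equations (4) and (5); this is a sketch of the counting argument, not code from the paper:

```python
def exposed_links_bounds(num_connected_compromised: int) -> tuple[int, int]:
    """Bounds on exposed links for k = |Ncn ∩ Ncp| compromised, connected nodes.

    Maximum (Eq. 4): every compromised node pairs with an uncompromised one,
    so each of the k nodes exposes a distinct link.
    Minimum (Eq. 5): compromised nodes pair with each other, so two of them
    share one exposed link, giving ceil(k / 2) links in total.
    """
    k = num_connected_compromised
    return (k + 1) // 2, k  # (minimum, maximum)

for k in (4, 5, 100):
    print(k, exposed_links_bounds(k))
# 4 -> (2, 4); 5 -> (3, 5); 100 -> (50, 100)
```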
#### 4 Comparison of LSec with Other Security Solutions

A comparison of all the schemes discussed above with LSec is given in Table 5. We compare them from the perspective of memory requirements, transmission cost, and some other basic security parameters such as authentication, authorization, and confidentiality. Data integrity is generally handled at the link layer with the help of hashing schemes such as MD5 or SHA1, or with CRC schemes, and availability is normally handled at the physical layer. LSec lies between the network and application layers, which is why it does not provide explicit data integrity or availability support.

**Table 5. Comparison of LSec with other security solutions**

| | SPINS | TinySec | LiSP | LSec |
|---|---|---|---|---|
| Memory requirement (number of stored keys) | 3 | Depends on KMS¹ | ≥ 8 | 6 |
| Transmission cost during key exchange (bytes) | -- | Depends on KMS | 12.6 × TNN² | 74.125 × TNC³ |
| Transmission cost during data transmission | 20% | 10% | > 20% | 8.33% |
| Public key cryptography support | No | No | No | Yes |
| Symmetric key cryptography support | Yes | Yes | Yes | Yes |
| Intrusion detection mechanism | No | No | Yes | Yes |
| Authentication support | Yes | Yes | Yes | Yes |
| Authorization support | No | No | Yes | Yes |
| Data integrity support | Yes | Yes | Yes | No |
| Confidentiality support | Yes | Yes | Yes | Yes |
| Availability support | No | No | Yes | No |

¹ KMS: Key Management Scheme. ² TNN: Total Number of Nodes. ³ TNC: Total Number of Connections.

#### 5 Conclusion and Future Directions

We proposed the Lightweight Security protocol (LSec) for wireless sensor networks, which provides authentication and authorization of sensor nodes, a simple secure key exchange scheme, and confidentiality of data. LSec is highly scalable and memory efficient: it uses 6 keys, which take only 72 bytes of memory storage, and it introduces 74.125 bytes of transmission and reception cost per connection. It also has the advantage of a simple, secure defense mechanism against compromised nodes. In future work, we will try to solve the issue that the neighboring nodes of the base station suffer from higher communication overhead, since they forward Request and Response packets during the authentication and authorization phase.

#### References

1. C. Karlof and D. Wagner, "Secure Routing in Wireless Sensor Networks: Attacks and Countermeasures", proc. of the First IEEE International Workshop on Sensor Network Protocols and Applications (WSNA'03), May 2003, pp. 113-127
2. Wenliang Du, Jing Deng, Y. S. Han, Shigang Chen, P. K. Varshney, "A key management scheme for wireless sensor networks using deployment knowledge", proc. of INFOCOM 2004, Mar 2004
3. Lydia Ray, "Active Security Mechanisms for Wireless Sensor Networks and Energy Optimization for Passive Security Routing", PhD Dissertation, Dept. of Computer Science, Louisiana State University, Aug 2005
4. J. Kohl and B. Clifford Neuman, "The Kerberos Network Authentication Service (v5)", RFC 1510, Sep 1993
5. W. Diffie and M. E. Hellman, "New Directions in Cryptography", IEEE Transactions on Information Theory, vol. 22, Nov 1976, pp. 644-654
6. R. L. Rivest, A. Shamir, L. M. Adleman, "A Method for Obtaining Digital Signatures and Public Key Cryptosystems", Communications of the ACM, vol. 21(2), 1978, pp. 120-126
7. Erik-Oliver Blaß and Martina Zitterbart, "Towards Acceptable Public-Key Encryption in Sensor Networks", proc. of the 2nd International Workshop on Ubiquitous Computing, ACM SIGMIS, May 2005
8. John Paul Walters, Zhengqiang Liang, Weisong Shi, and Vipin Chaudhary, "Wireless Sensor Network Security: A Survey", Technical Report MIST-TR-2005-007, July 2005
9. A. Perrig, R. Szewczyk, V. Wen, D. Culler and J. D. Tygar, "SPINS: Security Protocols for Sensor Networks", proc. of the 7th Annual International Conference on Mobile Computing and Networking, Rome, Italy, Aug 2001, pp. 188-189
10. Chris Karlof, Naveen Sastry, and David Wagner, "TinySec: A Link Layer Security Architecture for Wireless Sensor Networks", proc. of the 2nd International Conference on Embedded Networked Sensor Systems, Baltimore, MD, USA, Nov 2004, pp. 162-175
11. K. Jones, A. Wadaa, S. Olariu, L. Wilson, and M. Eltoweissy, "Towards a New Paradigm for Securing Wireless Sensor Networks", proc. of the 2003 Workshop on New Security Paradigms, Ascona, Switzerland, Aug 2003, pp. 115-121
12. Taejoon Park and Kang G. Shin, "LiSP: A Lightweight Security Protocol for Wireless Sensor Networks", ACM Transactions on Embedded Computing Systems, vol. 3(3), Aug 2004, pp. 634-660
13. Sencun Zhu, Sanjeev Setia, and Sushil Jajodia, "LEAP: Efficient Security Mechanisms for Large-Scale Distributed Sensor Networks", proc. of the 10th ACM Conference on Computer and Communications Security, Washington, USA, 2003, pp. 62-72
14. Elaine Shi and Adrian Perrig, "Designing Secure Sensor Networks", IEEE Wireless Communications, Dec 2004, pp. 38-43
15. Sensor Network Simulator and Emulator (SENSE), http://www.cs.rpi.edu/~cheng3/sense/
{ "disclaimer": "Notice: The following paper fields have been elided by the publisher: {'abstract', 'references'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1007/11872153_32?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1007/11872153_32, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "BRONZE", "url": "https://link.springer.com/content/pdf/10.1007/11872153_32.pdf" }
2,006
[ "JournalArticle" ]
true
2006-09-20T00:00:00
[]
6,634
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Business", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Business", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/02acccd3a4dbea265de8c043807c2dbb4115130c
[ "Computer Science", "Business" ]
0.884255
Phishing Scam Detection on Ethereum: Towards Financial Security for Blockchain Ecosystem
02acccd3a4dbea265de8c043807c2dbb4115130c
International Joint Conference on Artificial Intelligence
[ { "authorId": "47482568", "name": "Weili Chen" }, { "authorId": "1591110971", "name": "Xiongfeng Guo" }, { "authorId": "2285217145", "name": "Zhiguang Chen" }, { "authorId": "2341167621", "name": "Zibin Zheng" }, { "authorId": "2258301089", "name": "Yutong Lu" } ]
{ "alternate_issns": null, "alternate_names": [ "Int Jt Conf Artif Intell", "IJCAI" ], "alternate_urls": null, "id": "67f7f831-711a-43c8-8785-1e09005359b5", "issn": null, "name": "International Joint Conference on Artificial Intelligence", "type": "conference", "url": "http://www.ijcai.org/" }
In recent years, blockchain technology has created a new cryptocurrency world and has attracted a lot of attention. It also is rampant with various scams. For example, phishing scams have grabbed a lot of money and has become an important threat to users' financial security in the blockchain ecosystem. To help deal with this issue, this paper proposes a systematic approach to detect phishing accounts based on blockchain transactions and take Ethereum as an example to verify its effectiveness. Specifically, we propose a graph-based cascade feature extraction method based on transaction records and a lightGBM-based Dual-sampling Ensemble algorithm to build the identification model. Extensive experiments show that the proposed algorithm can effectively identify phishing scams.
# Phishing Scam Detection on Ethereum: Towards Financial Security for Blockchain Ecosystem

## Weili Chen [1,2], Xiongfeng Guo [1,3], Zhiguang Chen [1,2], Zibin Zheng [1,3] and Yutong Lu [1,2]

1 School of Data and Computer Science, Sun Yat-sen University
2 National Supercomputer Center in Guangzhou, Sun Yat-sen University
3 National Engineering Research Center of Digital Life, Sun Yat-sen University

chenwli28@mail.sysu.edu.cn, guoxf6@mail2.sysu.edu.cn, zhiguang.chen@nscc-gz.cn, zhzibin@mail.sysu.edu.cn, yutong.lu@nscc-gz.cn

## Abstract

In recent years, blockchain technology has created a new cryptocurrency world and has attracted a lot of attention. It is also rampant with various scams. For example, phishing scams have grabbed a lot of money and have become an important threat to users' financial security in the blockchain ecosystem. To help deal with this issue, this paper proposes a systematic approach to detect phishing accounts based on blockchain transactions, and takes Ethereum as an example to verify its effectiveness. Specifically, we propose a graph-based cascade feature extraction method based on transaction records and a lightGBM-based Dual-sampling Ensemble algorithm to build the identification model. Extensive experiments show that the proposed algorithm can effectively identify phishing scams.

## 1 Introduction

The birth of Bitcoin has brought a whole new world of cryptocurrency. According to coinmarketcap.com, there are now over 5,000 cryptocurrencies (or tokens) with a market capitalization larger than $200 billion (see [Chen et al., 2020] for a detailed analysis of the token market). The key technology behind these cryptocurrencies is blockchain technology. Generally speaking, a blockchain can be described as a distributed and trusted database maintained by a peer-to-peer network through a special consensus mechanism [Zheng et al., 2018]. A blockchain usually implements a cryptocurrency (or a virtual currency), which can be exchanged for other cryptocurrencies or fiat money through exchanges. The financial nature of cryptocurrency makes it the target of many scams.

Financial security is an important foundation for the healthy development of blockchain technology. The proliferation of scams in the ecosystem hinders users' acceptance and use of blockchain technology and, further, the progress of the technology itself. Thus, the identification of these scams has become an urgent and critical problem in the blockchain ecosystem and has attracted great attention from researchers [Bartoletti et al., 2020; Chen et al., 2018]. The phishing scam is a new type of cybercrime that arose along with the rise of online business [Liu and Ye, 2001] and has now been found in the blockchain ecosystem. According to a report by Chainalysis, more than 50% of all cybercrime revenue has been generated by phishing scams since 2017 [1]. A widely known example is the phishing scam on the Bee Token ICO [2], in which the phisher eventually gathered about $1 million from the investors in only 25 hours. These examples show that detecting and preventing phishing scams is an urgent problem in the blockchain ecosystem.

Traditional phishing scams typically involve setting up a fake official website and luring users into logging in to obtain private information, such as passwords. Thus, the main task of traditional phishing scam detection methods is to identify fake websites through various techniques, so that users can get an early warning before logging in.

[1] https://blog.chainalysis.com/the-rise-of-cybercrime-on-ethereum/
[2] https://theripplecryptocurrency.com/bee-token-scam/
However, phishing scams in the blockchain era have many new characteristics. First of all, instead of private information, cryptocurrencies become the phishing targets. Phishers use a variety of methods to lure ordinary users into transferring money to a designated account (such as in the case of the Bee Token ICO scam). Second, the ill-gotten cryptocurrencies have to be cashed out through exchanges, i.e., converted into fiat money through transactions. Third, the transaction records of a public blockchain are publicly accessible, which provides a new data source for phishing detection. Based on these new characteristics and the fact that phishing scams are rampant in the blockchain ecosystem, we propose to build phishing scam detection methods based on blockchain transactions and AI. These methods can be incorporated into users' cryptocurrency wallets (i.e., tools for managing accounts and transactions in the blockchain ecosystem) as a function that alerts users to potential risks when interacting with unfamiliar accounts.

Figure 1 shows the proposed framework and uses Ethereum as an example to demonstrate the effectiveness of our approach. Specifically, we first downloaded the Ethereum ledger using the Ethereum client Parity and crawled etherscan.io to get all the phishing accounts. Then, based on common sense and data analysis, we propose several filtering rules to alleviate the class imbalance problem. On this basis, we construct the transaction graph and propose a graph-based cascade feature extraction method.

Figure 1: The framework.

Next, a Dual-sampling Ensemble framework is proposed to identify suspect accounts. Finally, we verify the validity of the model by comparing it with other methods, evaluate the performance of the model under different parameters, and discuss the effectiveness of these features. In summary, we make the following major contributions.

(1) We propose a systematic approach to detect phishing scams in the blockchain ecosystem, and take Ethereum as an example to verify its effectiveness. The approach has good performance, which indicates that our method can be embedded into users' cryptocurrency wallets to provide users with a financial risk warning function. To accelerate the research in this field and promote the healthy development of blockchain technology, all relevant data and code will be released after the paper is published.

(2) We propose a graph-based cascade feature extraction method, which can conveniently extract rich transaction structure information and form a feature set with a good classification effect. Besides, it is very scalable and hard to evade according to the "six-degree separation" theorem.

(3) We propose a new model integration algorithm, namely the Dual-sampling Ensemble algorithm, which can be used for classification problems with a high level of class imbalance. The evaluation results show the effectiveness of the algorithm.

## 2 Background and Related Work

Blockchain technology is a key supporting technology for cryptocurrencies such as Bitcoin (https://bitcoin.org/bitcoin.pdf). A blockchain can be seen as a common ledger maintained between peers that do not need to trust each other [Zheng et al., 2017]. The ledger records the amount of cryptocurrency users hold and the history of transfer transactions between them. The user is represented in the system as a public-private key pair.
Public keys, often called addresses, are like accounts in a banking system that record the cryptocurrency they hold. (In this paper, we use the terms address and account interchangeably.) In blockchain systems, transactions are messages sent from one account (the initiator's address) to another (the receiver's address) [Chen et al., 2018]. Typically, the initiator transfers a certain amount of cryptocurrency to the recipient. Transactions that occur over a period are packaged into blocks by peers and linked to the previous block through cryptography. Each block has a corresponding height (denoted as blockNumber in this paper), increasing by 1 from 0. The block height can be viewed as the time when the transaction took place. In the Bitcoin system, blocks are created roughly every ten minutes.

Ethereum is known as the second-generation blockchain technology because it provides full support for smart contracts [Wood, 2014]. A smart contract on a blockchain can be viewed as a piece of code that automatically executes and cannot be terminated when a given condition is met. Ethereum is now the largest platform for blockchain smart contracts and one of the main targets of various cyber attacks in the blockchain ecosystem. The cryptocurrency maintained by Ethereum is called ether.

In recent years, with the development of blockchain technology, financial security in the blockchain ecosystem has received extensive attention, and the identification of various fraudulent behaviors has become a research hotspot. In the Bitcoin ecosystem, [Vasek and Moore, 2015] presents the first empirical analysis of Bitcoin-based scams. The authors identify 192 scams and point out that at least 13,000 distinct victims lost more than $11 million. [Vasek and Moore, 2018] analyzes the supply and demand for Bitcoin-based Ponzi schemes, while [Bartoletti et al., 2018] establish an address identification model for Ponzi schemes in the Bitcoin ecosystem. Besides, [Chen et al., 2019a] show that there was market manipulation in the Bitcoin exchange Mt. Gox. In the Ethereum ecosystem, on the one hand, people are concerned with the identification of various scams, for example, smart Ponzi schemes [Bartoletti et al., 2020; Chen et al., 2018]. On the other hand, since most smart contracts control certain digital assets, ensuring that there are no vulnerabilities in the smart contracts is an important part of Ethereum's financial security [Kalra et al., 2018].

Phishing detection has been extensively studied in the past decades and many methods have been proposed [Khonji et al., 2013; Abdelhamid et al., 2014; Zouina and Outtaj, 2017]. However, there is seldom research on phishing fraud identification considering the characteristics of blockchain. [Andryukhin, 2019] classifies the main types and schemes of phishing attacks on blockchain projects and suggests methods of protection against phishing attacks from the blockchain project side's perspective. Unlike them, we target the entire blockchain ecosystem and provide users with an early warning against phishing scams.

## 3 Proposed Method

Identifying phishing accounts in the blockchain system faces two challenges: 1) we only have transaction records and know little about account functions and holder information; and 2) the number of phishing addresses is very small while the number of other addresses is huge, so identifying such a small group of accounts in the huge account set is like looking for a needle in a haystack.
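To make the data model above concrete, the following minimal sketch shows one way a raw transfer record of the kind used in this paper could be represented; the field names are illustrative assumptions, not the authors' actual schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Transfer:
    """One value-transfer record from the ledger (illustrative field names)."""
    sender: str        # initiator's address
    receiver: str      # receiver's address
    amount: float      # amount of ether transferred
    block_number: int  # height of the containing block; doubles as a timestamp

# A single transfer observed at block height 4,000,000.
tx = Transfer("0xaaa...", "0xbbb...", amount=1.5, block_number=4_000_000)
```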
(The details of the data are described in Section 4.) To meet the challenges, the proposed method includes two parts: the cascade feature extraction method and the lightGBM-based Dual-sampling Ensemble algorithm.

### 3.1 Cascade Feature Extraction Method

Since transaction records are the only information we can use, and they give the accounts a natural graphical structure, we first construct a transaction graph (TG) based on these transaction records in order to extract effective features. Specifically, TG = (V, E), where V is a set of nodes (all the addresses in the dataset) and E = {(vi, vj) | vi, vj ∈ V} is a set of ordered edges. Each edge indicates that an address vi transfers a certain amount of ether to another address vj. Each edge has two attributes, blockNumber and amount, representing the time when this edge emerges and the amount of the transaction. Please note that there may be multiple edges between two nodes in TG, depending on the number of transactions between the two related accounts. (We use account, address, and node interchangeably in the following.)

Next, we introduce the proposed feature extraction method. Graph-based features have proven to be very effective in many identification problems [Chatzakou et al., 2017; Ramalingam and Chinnaiah, 2018]. Thus, we propose a TG-based cascade feature extraction method for phishing account identification. The idea is as follows. Treating a transaction between accounts as a friend relationship, to judge the category of an account we can use not only the information of the account itself, but also the information of its friends, even the information of its friends' friends, and so on. To explain more clearly, we first define several keywords related to a node.

- Node data: Node data is the transaction history of that node. Each transaction contains information about the time, direction, and amount of the transaction. The transaction time is denoted as blockNumber, which is an increasing integer. A transaction has two directions: out and in. The out-transactions of an account transfer ether from the account to other accounts, and the in-transactions of an account receive ether from other accounts.
- Node features: Node features are all kinds of information extracted from node data. In this paper, we extract information through various statistical methods.
- N-order friend: A node's 1-order friend is a node directly connected to the node (i.e., there are transactions between them). A node's n-order friend is a node connected to the node through at least n−1 intervening nodes.
- N-order features: The 0-order features of a node are the node features of that node. The n-order features are extracted in cascade from the n-order friends.

To explain how to achieve cascade feature extraction, we show the procedure of 2-order feature extraction in Figure 2. Suppose we need to compute the 2-order features of node A, which has 1-order friends B, C and 2-order friends D, E, F, G, H. In the figure, each undirected edge represents one or more transactions (regardless of the directions) between two nodes, and the counterparties of the 2-order friends are not shown. The procedure is divided into three stages. In the first stage, we compute a statistic (i.e., the grey rectangle) for each 2-order friend by using its node data (i.e., the transaction history). The second stage calculates a statistic for each 1-order friend by using the statistics computed in the first stage (not the node data of the 1-order friend). Similarly, in the last stage, we again calculate a statistic whose input comes from the second stage. This approach is very scalable.
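As an illustration of the transaction graph and the three-stage cascade just described, the sketch below builds TG with networkx and computes one possible 2-order feature for a node. The choice of statistics (max at stage one, mean at stage two, max at stage three) and the `transfers` list reusing the Transfer records from the earlier sketch are assumptions for demonstration only.

```python
import networkx as nx

# Build the transaction graph: parallel edges keep one entry per transaction,
# each carrying the blockNumber and amount attributes defined above. Direction
# is ignored here, matching the basic (undirected) procedure in the text.
TG = nx.MultiGraph()
for tx in transfers:  # `transfers` holds Transfer records as sketched earlier
    TG.add_edge(tx.sender, tx.receiver,
                blockNumber=tx.block_number, amount=tx.amount)

def amount_max(g: nx.MultiGraph, node: str) -> float:
    """Stage-1 statistic: maximum transaction amount in a node's history."""
    return max((d["amount"] for _, _, d in g.edges(node, data=True)), default=0.0)

def two_order_feature(g: nx.MultiGraph, node: str) -> float:
    """Cascade: 2-order friends -> 1-order friends -> the node itself."""
    per_friend = []
    for f1 in g.neighbors(node):                           # 1-order friends
        stats = [amount_max(g, f2)                         # stage 1
                 for f2 in g.neighbors(f1) if f2 != node]  # 2-order friends
        if stats:
            per_friend.append(sum(stats) / len(stats))     # stage 2: mean
    return max(per_friend, default=0.0)                    # stage 3: max
```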
Figure 2: Example of the 2-order feature extraction procedure.

In fact, by increasing the order and using different statistical methods at different stages, we can extract rich information about how a node interacts with the entire network. It should be noted that the approach described here does not take into account the direction of the transactions. But for phishing accounts, in-transactions and out-transactions differ significantly in meaning. Therefore, in this paper, we extract features for the two directions separately.

**Node Features**

The node features are statistics of a node's data. There are two types of data: transaction amount and transaction time (i.e., blockNumber). In order to distinguish the nature of a transaction, statistics are computed in different directions (i.e., over out-transactions or in-transactions). For convenience, we name these features as direction_type_method. For example, the feature in_block_std of a node indicates the standard deviation (i.e., the method std) of the transaction times (i.e., the data type block) of all in-transactions (i.e., the transaction direction in). For the transaction time, we compute only the transaction time span (denoted as ptp) and its standard deviation (denoted as std). For the transaction amount, we calculate the sum, the maximum, the minimum, the mean and the standard deviation (i.e., std). In addition, there are statistics unrelated to the transaction amount: count, unique, and unique_ratio. They represent the number of transactions (i.e., count), the number of counterparties (i.e., unique), and the ratio of the two (i.e., unique/count). By doing so, we obtain 19 features (i.e., 2 × 1 × (2 + 5 + 2) + 1).

**N-order Features**

For simplicity, in this study we extract only 1-order network features. As mentioned, the direction of a transaction is important in identifying phishing scams. Thus, considering the transaction direction, the 1-order friends of a node can be divided into from friends and to friends. In simple terms, when there is a transfer transaction from node A to node B, we call node B a from friend of node A and node A a to friend of node B. Specifically, the 1-order network features are named as friend_direction_statistic2_statistic1. For example, the from_in_mean_max feature is calculated as follows: we first compute the maximum (i.e., max) of the in-transaction amounts for each from friend. Then, we compute the mean of all statistics from the previous stage. Similarly, to compute to_out_std_sum, we first compute the sum of all the out-transaction amounts for each to friend. Then, we compute the standard deviation (i.e., std) of all statistics from the previous stage. By doing so, we can obtain 200 features (i.e., 2 × 2 × 2 × 5 × 5). Please note that we did not take time into account in the 1-order network feature extraction.
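A hedged pandas sketch of the 19 node features under the direction_type_method naming follows; the column names of `txs` and the placement of unique_ratio (computed once over both directions so that the 2 × 1 × (2 + 5 + 2) + 1 count works out) are assumptions.

```python
import pandas as pd

def node_features(txs: pd.DataFrame, node: str) -> dict:
    """0-order features of `node`; txs has columns sender, receiver, amount, blockNumber."""
    feats = {}
    for direction, sub in (("in", txs[txs.receiver == node]),
                           ("out", txs[txs.sender == node])):
        # Transaction-time statistics: span (ptp) and standard deviation.
        feats[f"{direction}_block_ptp"] = sub.blockNumber.max() - sub.blockNumber.min()
        feats[f"{direction}_block_std"] = sub.blockNumber.std()
        # Transaction-amount statistics: sum, max, min, mean, std.
        for method in ("sum", "max", "min", "mean", "std"):
            feats[f"{direction}_amount_{method}"] = getattr(sub.amount, method)()
        # Counterparty statistics: number of transactions and of counterparties.
        counterparty = sub.sender if direction == "in" else sub.receiver
        feats[f"{direction}_count"] = len(sub)
        feats[f"{direction}_unique"] = counterparty.nunique()
    total = feats["in_count"] + feats["out_count"]
    feats["unique_ratio"] = (feats["in_unique"] + feats["out_unique"]) / max(total, 1)
    return feats
```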
### 3.2 Dual-sampling Ensemble Method

Identifying phishing scams is essentially establishing a classification model of addresses. But phishing account identification faces a class imbalance problem. To build a useful suspect identification model, we propose a Dual-sampling Ensemble method, an identification framework that integrates many base models trained on sampled examples and features.

**Base Model**

The base models play a central role in the identification framework. Many mature classification algorithms can be used as base models, such as logistic regression (LR), support vector machine (SVM), and decision tree (DT). Among these models, the gradient boosting decision tree (GBDT) has obtained good results on many problems. There are several different variants of GBDT, including XGBoost [Chen and Guestrin, 2016] and lightGBM [Ke et al., 2017], which are widely used and generally accepted. In the phishing detection problem, we found that lightGBM is more efficient, thus we select it as our base model.

Given the supervised training set $X = \{(x_i, y_i), i = 1, 2, \cdots, n\}$, lightGBM integrates $K$ regression trees $f(x) = \frac{1}{K}\sum_{i=1}^{K} h_i(x)$ to approximate a certain function $f^{*}(x)$ that minimizes the expected value of a specific loss function $L(y, f(x))$. In each iteration of GBDT, assume that the strong learner obtained by the previous iteration is $h_{t-1}(x)$ and the loss function is $L(f(x), h_{t-1}(x))$; the aim of the current iteration is then to find a weak learner, modeled as a CART regression tree and denoted $h_t(x)$, that minimizes $L(f(x), h_{t-1}(x) + h_t(x))$. Suppose in iteration $t$ the negative gradient for sample $i$ is represented as $r_{ti} = -\frac{\partial L(y_i, h_{t-1}(x_i))}{\partial h_{t-1}(x_i)}$. By using the log-likelihood loss $L(y, h(x)) = \log(1 + \exp(-y h(x)))$, where $y \in \{-1, 1\}$, we can simplify the negative gradient of each sample as $r_{ti} = \frac{y_i}{1 + \exp(y_i h_{t-1}(x_i))}$, where $i = 1, 2, \cdots, m$.

Guided by these gradients, lightGBM removes small-gradient samples from the training set so that the model pays more attention to the samples that cause a large loss. This technique is called Gradient-based One-Side Sampling (GOSS) [Ke et al., 2017]. When constructing the CART regression tree, lightGBM bundles mutually exclusive features so that the number of features can be greatly reduced.

**Dual-sampling Ensemble**

Inspired by EasyEnsemble [Liu et al., 2008], we propose a Dual-sampling Ensemble algorithm to solve the class imbalance problem in phishing scam identification. The pseudocode is shown in Algorithm 1.

**Algorithm 1 The Dual-sampling Ensemble algorithm**

Input: the minority class example set P; the majority class example set N, with |P| ≪ |N|; the number of base models k; the feature sample ratio r; the number of features d; and the best parameters for the base model.
Output: the integration result.
1: Let i ← 0;
2: while i < k do
3:   i ← i + 1;
4:   Randomly sample a subset N_i from N, with |N_i| = ⌊|N|/k⌋;
5:   Learn a base model h_i using P ∪ N_i with only ⌊d × r⌋ randomly sampled features; the parameters are sampled around the best parameters;
6: end while
7: return H(x) = (1/k) Σ_{i=1}^{k} h_i(x)

The idea behind the Dual-sampling Ensemble is simple. Similar to EasyEnsemble [Liu et al., 2008], we reduce the class imbalance by sampling the majority example set (i.e., the negative examples). The difference is that we also sample the features of the examples in the training set, since we can obtain a large number of features using the cascade feature extraction method. This dual sampling allows the base models to have better heterogeneity.
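The following is a minimal sketch of Algorithm 1 with lightGBM base models; parameter sampling around the best parameters is omitted, and the hyperparameters shown are placeholders rather than the authors' tuned values.

```python
import numpy as np
import lightgbm as lgb

def dual_sampling_ensemble(X_pos, X_neg, k=1600, r=0.7):
    """Train k base models on balanced example subsets and sampled features."""
    rng = np.random.default_rng(0)
    d = X_pos.shape[1]
    params = {"objective": "binary", "verbosity": -1}  # placeholder parameters
    models = []
    for _ in range(k):
        neg = rng.choice(len(X_neg), size=max(len(X_neg) // k, 1), replace=False)
        feats = rng.choice(d, size=int(d * r), replace=False)   # feature sampling
        X = np.vstack([X_pos[:, feats], X_neg[neg][:, feats]])  # P union N_i
        y = np.concatenate([np.ones(len(X_pos)), np.zeros(len(neg))])
        models.append((feats, lgb.train(params, lgb.Dataset(X, y))))
    return models

def ensemble_predict(models, X):
    """H(x): the average of the k base-model scores."""
    return np.mean([m.predict(X[:, f]) for f, m in models], axis=0)
```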
## 4 Data Collection and Preparation

### 4.1 Data Collection

We launched an Ethereum client, Parity (www.parity.io/ethereum/), on our server to download the ledger of Ethereum. By using Parity, we obtained all the Ethereum blocks before January 3, 2019 (to be exact, from block height 0 to block height 7,000,000). By analyzing the transactions obtained, we got 43,783,194 accounts, among which 1,564,580 accounts are controlled by smart contracts. One of the most important tasks in establishing a phishing scam identification model is to find enough phishing account examples. Fortunately, etherscan.io provides several tags for Ethereum addresses, and by crawling the website we obtained all the addresses labeled with Phishing (etherscan.io/accounts/label/phish-hack). These addresses were used in verified phishing scams. In this way, we obtained 1,683 phishing addresses. We call these phishing addresses positive examples and the rest negative examples.

### 4.2 Data Cleaning

After getting all the data, we found that the classes were very imbalanced. The class imbalance ratio, i.e., the ratio of the size of the majority class (negative examples) to the minority class (positive examples), exceeds 26,000. Given that some addresses are clearly not phishing addresses, we recommend that some obvious negative examples (i.e., non-phishing addresses) be eliminated before model training in order to build a more effective model. To this end, we 1) filter out transaction records involving a smart contract address, 2) eliminate addresses with fewer than 10 or more than 1,000 transaction records, and 3) ignore all transactions that appear before block height 2 million.

The above cleaning methods are based on the following considerations. First of all, smart contracts often have complex logic and are not convenient for phishing scams. Furthermore, smart contracts account for very little of the phishing addresses (i.e., 2.6%), and they usually relate to tokens. Thus, in this preliminary study, for the sake of simplicity, we leave out smart contracts. Second, we want to learn the behavioral characteristics of phishing accounts through transaction records, and too few records are not good for learning. Besides, too many records indicate that the account may be a wallet or another type of account. In fact, there are many addresses (i.e., >70%) with more than 1,000 transaction records, and only one of them is labeled with phishing. Finally, by analyzing the initial activity time of the phishing addresses, we find that all phishing addresses became active after 2016-08-02. This may be because, in the early days of Ethereum, phishing scams were relatively few, and even fewer were recorded. Therefore, we propose to build the model based on records after block height 2 million (i.e., 2016-08-02). These filtering rules allow the model to focus on learning the characteristics of phishing scams.
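The three cleaning rules can be sketched as follows in pandas, assuming a `txs` frame as in the earlier sketches and a known set of contract addresses; the variable names are illustrative.

```python
import pandas as pd

def clean(txs: pd.DataFrame, contracts: set) -> pd.DataFrame:
    """Apply the three filtering rules of Section 4.2 (illustrative sketch)."""
    # Rule 1: drop records involving a smart contract address.
    txs = txs[~txs.sender.isin(contracts) & ~txs.receiver.isin(contracts)]
    # Rule 3: ignore everything before block height 2 million (2016-08-02).
    txs = txs[txs.blockNumber >= 2_000_000]
    # Rule 2: keep addresses with between 10 and 1,000 transaction records.
    counts = pd.concat([txs.sender, txs.receiver]).value_counts()
    keep = set(counts[(counts >= 10) & (counts <= 1000)].index)
    return txs[txs.sender.isin(keep) & txs.receiver.isin(keep)]
```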
## 5 Experiment Result and Analysis

### 5.1 Experiment Settings

We downloaded all of Ethereum's transaction data from its inception to January 3, 2019 (i.e., from block height 0 to block height 7,000,000). By applying the filter rules in Section 4.2, we ended up with 7,795,044 transaction records. There are 534,820 addresses, 323 of which are phishing addresses. The following experiments are based on this data set. In order to reflect the effectiveness of the model more accurately and avoid the contingency caused by the partitioning of train and test sets, this paper adopts the evaluation method of k-fold cross-validation. Specifically, we set the parameter k = 5. To accurately evaluate the model, we select four metrics commonly used in classification problems: precision, recall, F1, and AUC.

### 5.2 Method Comparison

In order to verify that our proposed model is more suitable for this problem, we compared the single-model lightGBM, Support Vector Machine (SVM), and decision tree (DT) with their Dual-sampling Ensemble (DE+) variants. SVM and DT are considered efficient in many classification problems with class imbalance [Chen et al., 2019b]; thus, we chose them as the baselines for our model. To compare the performance of these methods, we set the feature sampling rate to 70% and the number of base models to 1600 (i.e., balance ensemble). Table 1 shows the results.

Table 1: The performance comparison.

| Method | Precision | Recall | F1 | AUC |
|---|---|---|---|---|
| SVM | 0.0000 | 0.0002 | 0.0000 | 0.4817 |
| DT | 0.0552 | 0.0810 | 0.0657 | 0.5630 |
| lightGBM | 0.0535 | 0.0745 | 0.0623 | 0.5364 |
| DESVM | 0.2222 | 0.0076 | 0.0146 | 0.5046 |
| DEDT | 0.7295 | 0.7167 | 0.7230 | 0.7183 |
| **DElightGBM** | **0.8196** | **0.8050** | **0.8122** | **0.8097** |

As can be seen, among the single models, SVM performs poorly; lightGBM and DT have a certain level of performance, but they are obviously of no practical value. On the contrary, after adopting the ensemble strategy, the performance of each model is significantly improved, especially for lightGBM and DT (i.e., DElightGBM and DEDT). This result shows that the ensemble method is a good choice when facing class imbalance. It is worth noting that the proposed model (i.e., DElightGBM) performs well on all metrics (i.e., all larger than 0.8). This means the proposed model can be deployed in a real wallet for real-time warnings.

### 5.3 Example Sampling Effect Analysis

Evaluating the impact of example sampling on the model is essentially selecting the number of base models. Table 2 shows the four evaluation metrics of the DElightGBM framework with different numbers of base models. (We set the feature sampling rate to 70%, and the parameters of each model are randomly selected around the optimal parameters.)

Table 2: The effect of example sampling (with lightGBM).

| #models | Precision | Recall | F1 | AUC |
|---|---|---|---|---|
| 1 | 0.0789 | 0.0991 | 0.0879 | 0.5490 |
| 100 | 0.7583 | 0.3993 | 0.5232 | 0.6947 |
| 800 | **0.9288** | 0.7368 | **0.8217** | **0.8274** |
| 1000 | 0.8260 | 0.7585 | 0.7908 | 0.8206 |
| 1600 | 0.8196 | **0.8050** | 0.8050 | 0.8097 |

It can be seen that with the increase in the number of base models, all the metrics obtain different degrees of improvement. When the number of base models reaches 800 (i.e., half balance ensemble), three metrics (i.e., precision, F1 and AUC) reach their maximum. However, the recall keeps going up, and it reaches its maximum when the number of base models is 1600 (i.e., balance ensemble). This result indicates that the level of class imbalance is a very important factor affecting the performance of the base models. From the experimental results, the half balance ensemble seems to be a good choice. To make the model more practical, however, we would prefer to find all potential phishing scams (i.e., higher recall) at the expense of precision. Therefore, we propose the use of the balance ensemble for phishing scam detection.
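For reference, the evaluation protocol above (5-fold cross-validation over precision, recall, F1 and AUC) might be wired up as in the following sketch; the train/predict callables are placeholders for any of the compared models, and the threshold is an assumption.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import precision_score, recall_score, f1_score, roc_auc_score

def cross_validate(train_fn, predict_fn, X, y, k=5, threshold=0.5):
    """k-fold CV returning the four metrics averaged over folds."""
    scores = {"precision": [], "recall": [], "f1": [], "auc": []}
    skf = StratifiedKFold(n_splits=k, shuffle=True, random_state=0)
    for train_idx, test_idx in skf.split(X, y):
        model = train_fn(X[train_idx], y[train_idx])
        prob = predict_fn(model, X[test_idx])        # suspect scores in [0, 1]
        pred = (prob >= threshold).astype(int)
        scores["precision"].append(precision_score(y[test_idx], pred, zero_division=0))
        scores["recall"].append(recall_score(y[test_idx], pred))
        scores["f1"].append(f1_score(y[test_idx], pred))
        scores["auc"].append(roc_auc_score(y[test_idx], prob))
    return {m: float(np.mean(v)) for m, v in scores.items()}
```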
### 5.4 Feature Sampling Evaluation

Next, we analyze the effect of feature sampling by setting different sampling ratios. To eliminate the effect of the number of base models, it is uniformly set to 1600. Table 3 shows the evaluation results. In general, the feature sampling method has a certain influence on the final results; however, as compared with example sampling, its influence is far less significant. From the perspective of the most preferred metric, recall, 0.8 is the best feature sampling ratio. Compared to using all the features (i.e., ratio = 1.0), recall improves by 4.24%.

Table 3: The effect of feature sampling.

| Ratio | Precision | Recall | F1 | AUC |
|---|---|---|---|---|
| 0.6 | 0.8228 | 0.7832 | 0.8025 | 0.8018 |
| 0.7 | 0.8149 | 0.8205 | 0.8177 | 0.8127 |
| **0.8** | **0.8258** | **0.8390** | **0.8324** | **0.8282** |
| 0.9 | 0.8055 | 0.7955 | 0.8005 | 0.7957 |
| 1.0 | 0.8282 | 0.8049 | 0.8164 | 0.8096 |

These results reveal a noteworthy phenomenon. It is not necessarily correct that the more features the model has, the better the performance. On the contrary, in the case where we can obtain a large number of features, a certain degree of feature sampling is conducive to obtaining a better model. This may be because feature sampling makes different base models view the object from different angles, so as to obtain better identification.

### 5.5 Feature Analysis

Since we adopted the method of cascade feature extraction, a large number of features were obtained. Figure 3 shows the top 15 important features in the model.

Figure 3: The top 15 important features.

Next, we analyze why some of these features are important.

- in_block_std is the standard deviation of the blockNumber of all in-transactions of a node. This feature reflects the intensity of in-transactions at a certain address. If there is a large number of in-transactions in a short period, the blockNumber of these transactions will be very close to each other, and thus the constructed in_block_std will be very small. This feature is much more important than the others, and its meaning is easy to understand. For a phishing address, a natural phenomenon is that the number of in-transactions increases suddenly within a period after the phishing begins. However, once the phishing scam is revealed, in-transactions become rare, or even nonexistent. This means that in-transactions are concentrated in a small period for a phishing address, and the feature captures this characteristic very well.
- to_out_sum_median is a typical 1-order network feature. It reflects the overall situation (i.e., sum) of all the to friends' out-transactions. This feature is not as intuitive as the previous one and requires some explanation to understand its value. First of all, we can think of the median amount of the out-transactions of an address as an indicator of its financial strength. This is not difficult to understand, because a large median means that at least half of the address's out-transaction amounts are large, indicating that its financial strength is stronger. Second, for phishing addresses, the to friends are the victims of the phishing scam. Thus, for phishing scams, this feature can be seen as an indication of the overall financial strength of all its victims.
- from_in_sum_min is also a 1-order network feature. Different from the previous feature, this feature reflects the in-transactions of the node's from friends. It is relatively easy to understand why the feature is important. For phishing scams, money laundering is an important step before cashing out. Therefore, the from friends of the phishing address, which are usually the intermediate addresses used for money laundering, must exhibit behavioral characteristics different from normal addresses. This type of feature captures the difference effectively.

The above analysis of the top three features shows that our feature engineering achieves good results, fully mining the characteristics of the node itself and the different neighbors of the node.
## 6 Conclusion and Future Work

In blockchain ecosystems, various scams are rampant, which seriously threatens the financial security of the users involved. To help deal with this issue, in this study we propose a systematic approach to detect phishing scams in the Ethereum ecosystem. First of all, by using the Parity client and crawling etherscan.io, we collected all transactions of the Ethereum blockchain and the labeled phishing addresses. Then, using these data, we constructed a transaction graph and proposed a graph-based cascade feature extraction method, which helps us extract many useful features. Next, based on the extracted features and lightGBM, we proposed a Dual-sampling Ensemble model to detect phishing suspects. Finally, we evaluated the model from many angles, and the results indicate the effectiveness of our model. In the future, we are going to extend this study to other cybercrimes and set up a blockchain scam detection website to provide the phishing scam identification service in the form of an API. Besides, to accelerate the research in this field, all relevant data and code will be released after the paper is published.

## Acknowledgments

The work described in this paper was supported by the National Key R&D Program of China (2018YFB0204303), the National Natural Science Foundation of China (61722214), the Natural Science Foundation of Guangdong (2018B030312002, 2019B020214006), the China Postdoctoral Science Foundation (Grant no. 2019TQ0372, 2019M660223), the Program for Guangdong Introducing Innovative and Entrepreneurial Teams under Grant No. 2016ZT06D211, and the Pearl River S&T Nova Program of Guangzhou under Grant No. 201906010008. Zhiguang Chen and Zibin Zheng are the corresponding authors.

## References

[Abdelhamid et al., 2014] Neda Abdelhamid, Aladdin Ayesh, and Fadi Thabtah. Phishing detection based associative classification data mining. Expert Systems with Applications, 41(13):5948–5959, 2014.

[Andryukhin, 2019] A. A. Andryukhin. Phishing attacks and preventions in blockchain based projects. In Proceedings of the International Conference on Engineering Technologies and Computer Science (EnT), pages 15–19. IEEE, 2019.

[Bartoletti et al., 2018] Massimo Bartoletti, Barbara Pes, and Sergio Serusi. Data mining for detecting bitcoin ponzi schemes. In Proceedings of the Crypto Valley Conference on Blockchain Technology, pages 75–84. IEEE, 2018.

[Bartoletti et al., 2020] Massimo Bartoletti, Salvatore Carta, Tiziana Cimoli, and Roberto Saia. Dissecting ponzi schemes on ethereum: Identification, analysis, and impact. Future Generation Computer Systems, 102:259–277, 2020.

[Chatzakou et al., 2017] Despoina Chatzakou, Nicolas Kourtellis, Jeremy Blackburn, Emiliano De Cristofaro, Gianluca Stringhini, and Athena Vakali. Mean birds: Detecting aggression and bullying on twitter. In Proceedings of the ACM on Web Science Conference, pages 13–22. ACM, 2017.

[Chen and Guestrin, 2016] Tianqi Chen and Carlos Guestrin. XGBoost: A scalable tree boosting system. In Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 785–794. ACM, 2016.

[Chen et al., 2018] Weili Chen, Zibin Zheng, Jiahui Cui, Edith Ngai, Peilin Zheng, and Yuren Zhou. Detecting ponzi schemes on ethereum: Towards healthier blockchain technology. In Proceedings of the World Wide Web Conference (WWW 2018), pages 1409–1418. International World Wide Web Conferences Steering Committee, 2018.
[Chen et al., 2019a] Weili Chen, Jun Wu, Zibin Zheng, Chuan Chen, and Yuren Zhou. Market manipulation of bitcoin: Evidence from mining the Mt. Gox transaction network. In IEEE INFOCOM 2019 - IEEE Conference on Computer Communications, pages 964–972. IEEE, 2019.

[Chen et al., 2019b] Weili Chen, Zibin Zheng, Edith C-H Ngai, Peilin Zheng, and Yuren Zhou. Exploiting blockchain data to detect smart ponzi schemes on ethereum. IEEE Access, 7:37575–37586, 2019.

[Chen et al., 2020] Weili Chen, Tuo Zhang, Zhiguang Chen, Zibin Zheng, and Yutong Lu. Traveling the token world: A graph analysis of ethereum erc20 token ecosystem. In Proceedings of the World Wide Web Conference (WWW 2020), pages 1409–1418. International World Wide Web Conferences Steering Committee, 2020.

[Kalra et al., 2018] Sukrit Kalra, Seep Goel, Mohan Dhawan, and Subodh Sharma. Zeus: Analyzing safety of smart contracts. In Proceedings of the Network and Distributed System Security Symposium, 2018.

[Ke et al., 2017] Guolin Ke, Qi Meng, Thomas Finley, Taifeng Wang, Wei Chen, Weidong Ma, Qiwei Ye, and Tie-Yan Liu. LightGBM: A highly efficient gradient boosting decision tree. In Proceedings of the International Conference on Advances in Neural Information Processing Systems, pages 3146–3154, 2017.

[Khonji et al., 2013] Mahmoud Khonji, Youssef Iraqi, and Andrew Jones. Phishing detection: a literature survey. IEEE Communications Surveys & Tutorials, 15(4):2091–2121, 2013.

[Liu and Ye, 2001] Jiming Liu and Yiming Ye. Introduction to e-commerce agents: Marketplace solutions, security issues, and supply and demand. In E-Commerce Agents, pages 1–6. Springer, 2001.

[Liu et al., 2008] Xu-Ying Liu, Jianxin Wu, and Zhi-Hua Zhou. Exploratory undersampling for class-imbalance learning. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 39(2):539–550, 2008.

[Ramalingam and Chinnaiah, 2018] Devakunchari Ramalingam and Valliyammai Chinnaiah. Fake profile detection techniques in large-scale online social networks: A comprehensive review. Computers & Electrical Engineering, 65:165–177, 2018.

[Vasek and Moore, 2015] Marie Vasek and Tyler Moore. There's no free lunch, even using bitcoin: Tracking the popularity and profits of virtual currency scams. In Proceedings of the International Conference on Financial Cryptography and Data Security, pages 44–61. Springer, 2015.

[Vasek and Moore, 2018] Marie Vasek and Tyler Moore. Analyzing the bitcoin ponzi scheme ecosystem. In Proceedings of the International Conference on Financial Cryptography and Data Security, pages 101–112. Springer, 2018.

[Wood, 2014] Gavin Wood. Ethereum: A secure decentralised generalised transaction ledger. Ethereum Yellow Paper, 2014.

[Zheng et al., 2017] Zibin Zheng, Shaoan Xie, Hongning Dai, Xiangping Chen, and Huaimin Wang. An overview of blockchain technology: Architecture, consensus, and future trends. In Proceedings of the IEEE International Congress on Big Data, pages 557–564. IEEE, 2017.

[Zheng et al., 2018] Zibin Zheng, Shaoan Xie, Hongning Dai, Xiangping Chen, and Huaimin Wang. Blockchain challenges and opportunities: A survey. International Journal of Web and Grid Services, 14:352–375, 2018.

[Zouina and Outtaj, 2017] Mouad Zouina and Benaceur Outtaj. A novel lightweight url phishing detection system using svm and similarity index. Human-centric Computing and Information Sciences, 7(1):17, 2017.
{ "disclaimer": "Notice: The following paper fields have been elided by the publisher: {'references'}. Paper or abstract available at https://api.unpaywall.org/v2/10.24963/ijcai.2020/621?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.24963/ijcai.2020/621, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "BRONZE", "url": "https://www.ijcai.org/proceedings/2020/0621.pdf" }
2,020
[ "JournalArticle", "Conference" ]
true
2020-07-01T00:00:00
[]
9,630
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Engineering", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/02af1e17cb68c7cace6f3d38c2e767e7b5fb1e66
[ "Computer Science" ]
0.863542
Survey on Three Components of Mobile Cloud Computing: Offloading, Distribution and Privacy
02af1e17cb68c7cace6f3d38c2e767e7b5fb1e66
[ { "authorId": "9491665", "name": "Anirudh Paranjothi" }, { "authorId": "2109251680", "name": "M. Khan" }, { "authorId": "1776310", "name": "Mais Nijim" } ]
{ "alternate_issns": null, "alternate_names": null, "alternate_urls": null, "id": null, "issn": null, "name": null, "type": null, "url": null }
Mobile Cloud Computing (MCC) brings rich computational resources to mobile users, network operators, and cloud computing providers. It can be represented in many ways, and the ultimate goal of MCC is to enable the execution of rich mobile applications with a rich user experience. Mobility is one of the main characteristics of the MCC environment, where users can continue their work regardless of movement. This literature review paper presents a state-of-the-art survey of MCC. Also, we provide the communication architecture of MCC and a taxonomy of the mobile cloud that specifically concentrates on offloading, mobile distribution computing, and privacy. Through an extensive literature review, we found that MCC is a technologically beneficial and expedient paradigm for virtual environments in terms of virtual servers in a distributed environment, multi-tenant architecture, and data storage in a cloud. We further identified the drawbacks in offloading, mobile distribution computing, and privacy of MCC, and how this technology can be used in an effective way.
**Journal of Computer and Communications, 2017, 5, 1-31** http://www.scirp.org/journal/jcc ISSN Online: 2327-5227 ISSN Print: 2327-5219

# Survey on Three Components of Mobile Cloud Computing: Offloading, Distribution and Privacy

### Anirudh Paranjothi, Mohammad S. Khan, Mais Nijim

Department of Electrical Engineering and Computer Science, Texas A&M University, Kingsville, TX, USA

How to cite this paper: Paranjothi, A., Khan, M.S. and Nijim, M. (2017) Survey on Three Components of Mobile Cloud Computing: Offloading, Distribution and Privacy. Journal of Computer and Communications, 5, 1-31. https://doi.org/10.4236/jcc.2017.56001

Received: January 18, 2017; Accepted: April 3, 2017; Published: April 6, 2017

Copyright © 2017 by authors and Scientific Research Publishing Inc. This work is licensed under the Creative Commons Attribution International License (CC BY 4.0). http://creativecommons.org/licenses/by/4.0/

## Abstract

Mobile Cloud Computing (MCC) brings rich computational resources to mobile users, network operators, and cloud computing providers. It can be represented in many ways, and the ultimate goal of MCC is to enable the execution of rich mobile applications with a rich user experience. Mobility is one of the main characteristics of the MCC environment, where users can continue their work regardless of movement. This literature review paper presents a state-of-the-art survey of MCC. Also, we provide the communication architecture of MCC and a taxonomy of the mobile cloud that specifically concentrates on offloading, mobile distribution computing, and privacy. Through an extensive literature review, we found that MCC is a technologically beneficial and expedient paradigm for virtual environments in terms of virtual servers in a distributed environment, multi-tenant architecture, and data storage in a cloud. We further identified the drawbacks in offloading, mobile distribution computing, and privacy of MCC, and how this technology can be used in an effective way.

## Keywords

Cloud Computing, Mobile Cloud Computing, Offloading, Distribution and Privacy

Open Access

## 1. Introduction

Smartphones are becoming popular, and their users are increasing rapidly every year. Features of smartphones include a touch screen interface, Wi-Fi, high-speed processors, GPS, etc. The popularity of smartphones allows developers to develop mobile applications in various domains like sports, games, finance, education, etc. [1]. Still, these devices suffer from issues like limited storage space, limited bandwidth, and limited energy due to the development of complex mobile applications. To solve these issues, cloud computing techniques were introduced. Some of the popular cloud service providers are Amazon, Windows, Google, etc. Heavy Reading [2] and ABI Research [3] suggested that the revenue of the MCC market would reach $68 billion in 2017. MCC is the combination of cloud computing, mobile computing and wireless networks. The advantages of MCC over cloud computing are:

1) Flexibility: Due to flexibility, users can access their data using their devices from any part of the world, provided the user has proper internet connectivity.

2) Data availability: Data availability allows the user to access their data at any time. It also provides the facility of multiple users accessing the same data simultaneously.
3) Multiple platforms: MCC also provides support for multiple platforms. It allows users to access their data in the cloud irrespective of the platform.

Current cloud computing provides the following facilities to the user: 1) Executing operations on the cloud, 2) Large storage capacity, 3) Backup, 4) Traffic counting, 5) Ability to choose datacenters. Cloud providers mainly concentrate on areas like throughput, memory, availability of servers, storage, etc. Also, they provide three basic services for MCC:

1) Platform services: Platform as a Service (PaaS) provides hardware and software for the user to create, modify, and run their applications. The main advantage of PaaS is that it allows users to execute and complete their tasks without owning the appropriate software or hardware.

2) Application services: Application services are also known as Software as a Service (SaaS). SaaS provides software applications to the user whenever they need them over the internet. It gained popularity in the software market due to software on demand. The main advantages of using SaaS are: 1) Cost savings, 2) Efficiency. It also eliminates the issue of individual user licenses and thereby reduces the expense of an organization.

3) Context-rich services: Mobile applications are becoming popular and provide context-aware services to their users. To support this, MCC providers are providing context-rich services to the users. These include congestion detection, discovering parking spaces, etc.

Papers [4] [5] [6] [7] have not discussed in detail the various techniques involved in MCC. This paper gives an overall idea about offloading, mobile distribution, and privacy in the cloud. Further, this paper gives information about various factors affecting MCC and the future of the cloud computing environment. The rest of the paper is organized as follows: Section 2 discusses the current mobile cloud architecture and programming models; Section 3 discusses offloading mobile applications to the cloud; Section 4 focuses on mobile distribution computing and the cloud; Section 5 discusses privacy in the cloud and user authentication. Finally, we present the conclusion and future work in Section 6.

## 2. Current Mobile Cloud Architecture and Programming Models

Mobile cloud architecture: The current mobile cloud computing architecture includes the following components: 1) Regional Data Center (RDC), 2) Wireless core, 3) Base stations. It is represented in Figure 1. The MCC architecture allows users to offload their operations to the cloud [8]; examples include the Global Positioning System (GPS), multiplayer games, etc. [9]. However, it is most suitable for heterogeneous environments [10].

Figure 1. Mobile cloud architecture.

RDC: It is used to house computer systems and their associated elements, like storage and telecommunication systems. An RDC consists of various security devices, power supplies, environment controls, etc. Cloud data centers are distributed in different locations around the world [11].

Wireless core network: Routing telephone calls across the PSTN is the main function of the wireless core network. Also, it provides various services to users who are connected in a network.

Programming models: Existing programming models in MCC are: 1) Clone Cloud, 2) MAUI, 3) Odessa, 4) Orleans, 5) RESTful [9]. These programming models are briefly discussed below. Table 1 illustrates a comparison of the programming models based on blocking state, cloud state and remote execution unit.

Table 1. Programming model comparison.

| Models | Blocking | Cloud state | Remote exec. unit |
|---|---|---|---|
| Clone Cloud | Yes | Full thread | Thread |
| MAUI | Yes | Partial | Method |
| Odessa | Yes | Partial | App task |
| Orleans | No | Partial | Grains |
| RESTful | No | No | Cloud task |
1) Clone Cloud: Clone Cloud allows its users to have their own copy of their cloud. By providing this facility, users have full control over their clouds. Clone Cloud consists of a solver, a profiler and an analyzer. The solver in Clone Cloud is responsible for offloading the data to the cloud, based on the dynamic profiler and static analyzer.

2) MAUI: This programming model is based on the Microsoft .NET framework. The profiler in the MAUI framework makes remotability decisions. Resource-demanding processes can be accessed with the help of Remote Procedure Calls (RPC). This model is platform and language independent.

3) Odessa: It is a parallel processing framework where developers have to arrange their applications in the form of a data flow graph. In the graph, vertices are called stages and edges are called connectors. In Odessa, connectors give information about the data dependency between stages. This programming model is mostly suitable for media applications. Existing applications cannot be accessed in the Odessa framework.

4) Orleans: It is a reliable framework for establishing scalable, elastic applications on the cloud. Orleans consists of grains, which use asynchronous messages for communication. An application developer in Orleans mainly concentrates on logic, since the framework provides scalability, reliability and availability during runtime. It is one of the promising programming models in the MCC environment.

5) RESTful: This programming model was developed because media processing applications often require components for gesture recognition, face recognition, etc. In this model, appropriate functions can be invoked whenever they are needed, using the HTTP or HTTPS protocol.

## 3. Offloading Mobile Applications in Cloud

In recent days, cloud computing research has moved towards how to make offloading decisions rather than concentrating on making offloading feasible. Analytical models help in making these decisions [12] [13]. Offloading and parallelism are the two main factors that impact system performance. In this section, we illustrate the existing frameworks suitable for offloading in the mobile application environment.

Offloading: Transferring computations to servers available on the cloud is called offloading [8]. Offloading decisions can be made in two ways: 1) manually by the developers [14], 2) automatically using tools [15].

### 3.1. Odessa Framework

Odessa is a lightweight framework designed for mobile applications [17]. Odessa makes offloading more flexible. A mobile application has three main requirements: 1) Crisp response, 2) Continuous data processing, 3) Compute-intensive algorithms. This framework provides three major contributions: 1) Odessa contributes to offloading and parallelism decisions. 2) Odessa offers a lightweight design for mobile interactive perception applications. 3) It works well across a variety of execution environments. The authors used three different applications to measure their system performance. The applications are described below in detail.

**Interactive Perception Applications**

Face Recognition: The face recognition application is represented in Figure 2(a). A face detector and a classifier are the main components involved in it. The face detector detects faces using the OpenCV [18] Haar classifier, and the face classifier classifies the faces detected by the face detector using a dedicated algorithm.
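The detection stage of this pipeline can be sketched with OpenCV's stock Haar cascade, as below; the cascade file is the frontal-face model shipped with OpenCV, the classification stage is omitted, and the image path is a placeholder.

```python
import cv2

# Stage 1 of the face recognition pipeline: detect faces with a Haar cascade.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

frame = cv2.imread("frame.jpg")                      # placeholder input image
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# Each detection is an (x, y, w, h) box that the downstream classifier labels.
for (x, y, w, h) in faces:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
```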
Object and Pose Recognition: The object and pose recognition application is represented in Figure 2(b). Four main components are involved in it. First, the image goes through a downscaler to extract SIFT features [20]. Second, the extracted SIFT features are compared with previously constructed 3D models. Object features are clustered by position to isolate different occurrences. The RANdom SAmple Consensus (RANSAC) algorithm identifies each occurrence in the image with an estimated 6D posture.

Recognition of Gesture: The gesture recognition application is represented in Figure 2(c). Face detection and motion extraction are the major components involved in it. It extracts SIFT features to encode optical flow. These features are then filtered by position to be compared with previously generated histograms. The histograms are used as input for a machine that identifies the control gestures.

Figure 2. (a) Face recognition; (b) Object recognition; (c) Gesture recognition.

### 3.2. Sprout

Sprout is a distributed system used for stream processing [21]. It is used to create and execute parallel processing applications [17]. The main goal of Sprout is to support the processing of high-rate streaming data. Two important features of Sprout are: 1) Automated data transfer, 2) Parallelism support. Also, Sprout provides mechanisms for adjusting applications dynamically at run time, changing the degree of parallelism, and migrating processing stages between machines.

### 3.3. Odessa Design

Odessa uses the concepts of offloading, pipelining and data parallelism to improve its performance and accuracy. Odessa has three main goals: 1) To satisfy the needs of mobile applications, Odessa should accomplish low makespan and high throughput. 2) It should account for input complexity changes, device capability and network conditions. 3) It should have low communication and computation overhead.

### 3.4. ThinkAir Framework

ThinkAir is one of the simplest frameworks in Mobile Cloud Computing (MCC) [22], represented in Figure 3. It allows developers to migrate their software to the cloud. Smartphone virtualization and method-level computation offloading are the two main concepts adopted by ThinkAir. Offloading in ThinkAir removes the restrictions caused by CloneCloud during the process of offloading [22]. It also provides on-demand resource allocation for the efficient performance of an application. Parallelism is attained by dynamically creating and destroying virtual machines in the cloud.

Figure 3. ThinkAir framework.
### 3.5. ThinkAir Design

The ThinkAir framework is designed based on the following parameters: 1) Mobile broadband connectivity and speeds are increasing continuously, 2) The capabilities of smartphones are increasing, 3) Cloud computing is becoming more popular and provides resources to users at low cost. The design goals of ThinkAir are:

1) Adaptation: The ThinkAir framework easily adapts to the environment, and it also avoids interfering with correctly executing software.

2) Ease of Use: The ThinkAir framework provides a simple interface for developers to avoid the issue of misusing the framework [22], and it increases competition among developers.

3) Performance improvement: The ThinkAir framework improves the performance and efficiency of mobile devices by binding smartphones to the cloud.

4) Dynamic scaling: ThinkAir provides the feature of calculating computational power dynamically at the server side. It also provides parallel execution to improve performance.

This framework has three major components: 1) Execution environment, 2) Application servers, 3) Profilers.

### 3.6. Compilation and Execution

The compilation and execution part of ThinkAir deals with three major areas: 1) Programmer API, 2) Compiler, 3) Execution controller.

1) Programmer API: ThinkAir provides a library with the compiler so that the developer can access the execution environment indirectly. A method considered for offloading is annotated with @Remote. The ThinkAir code generator takes a source file as input and generates the necessary remoteable methods with utility functions, while the execution controller is used for method invocation and detects whether a given method is suitable for offloading or not.

2) Compiler: The ThinkAir compiler consists of two parts: 1) the Remoteable Code Generator, 2) the Native Development Kit (NDK). The remoteable code generator is used for annotated code translation, and the Native Development Kit (NDK) is used for native code support in the cloud.

3) Execution Controller: The execution controller executes remoteable methods and makes offloading decisions. The offloading decision depends on the data collected during past executions, the current environment, and the user's policies. There are four such policies combining execution time, energy and cost: 1) Execution time, 2) Energy, 3) Execution time and energy, 4) Execution time, energy, and cost.

4) Execution Flow: The execution controller starts with the profiler to provide data for future invocations, and it decides whether an invocation is suitable for offloading or not. If it is suitable for offloading, it can be migrated to the cloud using the Java reflection technique. If it is not suitable for offloading, or if the connection fails, then execution falls back to the local execution environment (i.e., the smartphone), discarding the data collected by the profiler.
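As a rough illustration of how such an execution controller might weigh profiler history against a user policy, consider the sketch below; the method names, measurements and decision rule are all invented, and a real controller would also factor in monetary cost and current network conditions.

```python
from statistics import mean

# Profiler history per method: past local vs. remote execution times (seconds)
# and energy use (joules). The numbers are invented for illustration.
history = {"detectFaces": {"local_t": [2.1, 2.3], "remote_t": [0.6, 0.7],
                           "local_e": [8.0, 8.4], "remote_e": [1.9, 2.2]}}

def should_offload(method: str, policy: str = "time_energy") -> bool:
    """Compare mean past local and remote runs under the chosen policy."""
    h = history[method]
    faster = mean(h["remote_t"]) < mean(h["local_t"])
    cheaper = mean(h["remote_e"]) < mean(h["local_e"])
    if policy == "time":
        return faster
    if policy == "energy":
        return cheaper
    return faster and cheaper  # combined policies; cost terms omitted here

if should_offload("detectFaces", policy="time"):
    pass  # migrate the invocation to the cloud (via reflection in ThinkAir)
```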
## 4. Mobile Distribution Computing and Cloud

Mobile distribution computing provides access to widely distributed resources. Distributed computing has the advantages of scalability, fault tolerance, and load balancing. In many situations, processing tasks need to be distributed. However, in distributed computing there is a chance of communication failure, because a node could fail at any time. This section gives information about the various distribution techniques used in the cloud.

### 4.1. Clone 2 Clone (C2C)

Clone 2 Clone (C2C) [23] provides a distributed peer-to-peer platform for smartphones. Performance measurements of C2C in private and public clouds show that it is possible to implement C2C in a distributed environment with 3 times less cellular traffic. In addition, it also saves 99%, 80% and 30% of the battery, respectively.

### 4.2. C2C: Architecture Design

The C2C platform needs a mechanism to enable peer-to-peer networking and to notify clones about the presence of others. In C2C, CloneDS (i.e., Clone Directory Services) maps users to clones and clones to IPs. To establish a connection in the C2C platform, a clone is requested with a public IP and a key pair; the key pair consists of a public key and a private key. The C2C architecture is represented in Figure 4 and consists of five basic steps: 1) DS register, 2) DS lookup, 3) C2C connect, 4) User lookup, 5) User clone connection.

DS register: The clone ID, public key, IP address and device ID are sent from the clone to CloneDS.

DS lookup: Clone A obtains a list signed by CloneDS. The list contains the details of the signed clones with their IP addresses and public keys.

C2C connect: A peer-to-peer connection is established from clone A to the other clones.

User lookup: User A can always get her clone's IP through a CloneDS lookup.

User clone connection: It establishes a connection with the user through the public IP.

Figure 4. C2C architecture and networking.

### 4.3. C2C and Security

In C2C, communication between the user and the clone is secured by a shared symmetric key. This architecture provides trust to the users through CloneDS, but some malicious cloud providers could abuse this opportunity by connecting users to harmful clones.

### 4.4. CloneDoc Framework

CloneDoc and SPORC [24] add more complexity to the system, but the main advantage of using such a system is that it improves battery performance by reducing battery usage. CloneDoc receives operations from users' smartphones and keeps the devices updated. The clone maintains two states: 1) a pending queue, 2) a committed queue [25]. The clone in C2C delivers operations received from the device to the server and sends the results back to the appropriate devices. To handle these tasks, it maintains a queue, which should be managed in such a way that the delay is minimal. CloneDoc contains a protocol to solve this problem, commonly known as the clone-user consistency protocol. In CloneDoc, the clone is also responsible for detecting server malfunctions, which involves checking encryption and decryption operations, sequence numbers, etc.
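The pending/committed queues and the sequence-number check described above might look like the following toy model; the operation format, commit rule and error handling are invented purely to illustrate the clone-user consistency idea.

```python
from collections import deque

class CloneState:
    """Toy model of a clone's two queues in CloneDoc (illustrative only)."""
    def __init__(self):
        self.pending = deque()   # operations forwarded to the server, unordered
        self.committed = []      # server-ordered operations, by sequence number

    def submit(self, op: str) -> str:
        """Receive an operation from the device and queue it for the server."""
        self.pending.append(op)
        return op                # a real clone would transmit it to the server

    def on_server_commit(self, op: str, seq: int) -> None:
        """Apply a server-ordered operation, checking its sequence number."""
        if seq != len(self.committed):   # gap or replay: possible malfunction
            raise RuntimeError(f"bad sequence {seq}, expected {len(self.committed)}")
        if self.pending and self.pending[0] == op:
            self.pending.popleft()       # our own operation came back ordered
        self.committed.append(op)
```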
### 4.5. Code in the Air
Code in the Air (CITA) [26] is a system that simplifies the rapid development of tasking applications. It can be used by both expert and non-expert users: non-expert users specify their tasks easily on the phone, while expert users specify tasks by writing server-side scripts. Current approaches have two major problems: 1) poor abstraction and 2) poor programming support. CITA helps developers as well as end users.

### 4.6. CITA Architecture
The CITA architecture, shown in Figure 5, has three components: 1) the tasking framework, 2) the activity layer, and 3) push communication.
Tasking framework: it allows developers to write and compile scripts, where compilation can be performed on the server side. CITA also provides a JavaScript interface that lets developers manipulate different devices from a single program. The backend of CITA handles device coordination and efficient execution of code across devices.
Activity layer: the activity layer in CITA provides an extensive abstraction for high-level activities and supports energy-efficient recognition of an activity.
Push communication: it improves on the energy and load shortcomings of existing systems.

Figure 5. Code in the Air architecture.

### 4.7. CITA Activity Layer
CITA contains an activity layer whose main purpose is to express conditions.

**4.7.1. Place Hierarchies**
CITA uses a built-in location hierarchy that identifies three types of locations: 1) room level, 2) floor level, and 3) building level. These levels have different implementations:
Room-level hierarchy: used to match a named location when the Wi-Fi signal strength is good.
Floor-level hierarchy: used when Wi-Fi signals overlap.
Building-level hierarchy: used to refer to buildings or to a large bounding box on a map.
CITA provides two activity detectors, enterPlace and leavePlace, which fire when a user enters or leaves a location, respectively [26].

**4.7.2. Activity Composition**
CITA allows developers and users to create high-level activities using logical predicates [26]. This is one of CITA's advantages because it provides reusability: developers and end users can reuse activity modules created by other developers to write their own activities. CITA supports the AND, OR, NOT, WITHIN, FOR, and NEXT primitives; a minimal sketch of this style of composition is given below.
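To illustrate what predicate-based activity composition looks like, the sketch below combines two base detectors with AND/NOT-style combinators. This is not CITA's actual API (CITA composes activities in its own scripting layer, including the time-based WITHIN/FOR/NEXT primitives omitted here); the Context record and the detector names are assumptions made for illustration.

```java
import java.util.function.Predicate;

// Minimal sketch of composing a high-level activity from logical primitives,
// in the spirit of CITA's AND / OR / NOT composition.
class ActivityComposition {
    record Context(String place, boolean moving) {}

    static Predicate<Context> enterPlace(String name) { return c -> c.place().equals(name); }
    static Predicate<Context> driving() { return Context::moving; }

    public static void main(String[] args) {
        // "Arrived at the office while not driving", built from reusable modules.
        Predicate<Context> atOfficeOnFoot = enterPlace("office").and(driving().negate());
        System.out.println(atOfficeOnFoot.test(new Context("office", false))); // true
    }
}
```

The reusability claimed in the text corresponds here to the fact that enterPlace and driving can be recombined freely into new composite activities.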
### 4.8. Dial to Deliver Push Service
CITA provides an asynchronous message delivery service from the CITA server to mobile devices [26], but it has three major problems: 1) the amount of information that can be delivered is very small, 2) TCP connections on mobile devices time out due to long waits, and 3) current push notifications are limited to notifications of specific types. CITA therefore uses the standard telephone service. It keeps the registered users' phone numbers; to wake a device, the CITA server initiates a voice call, and at the other end the CITA client verifies the calling number. If the number matches, the client wakes up. The main disadvantage of this service is the increased load on the network.

### 4.9. WhereStore
WhereStore provides location-based storage for mobile devices. It uses filtered replication to distribute location history among mobile devices [27], and it reduces energy consumption by exchanging data in the cloud. Location-specific applications are what differentiate mobile phones from computers [28]. The following applications benefit from the WhereStore framework: 1) web applications, 2) media players, and 3) live traffic and sensing applications.

### 4.10. WhereStore Background
WhereStore is designed using two techniques: 1) location prediction and 2) a replication system [27].

**4.10.1. Replication**
The replication system uses collections as its main abstraction; collections are used to keep data synchronized between peers. A collection in a replication system holds user data and metadata, represented as separate items. In a filtered system, a filter identifies the exact subset of items stored at a replica. Consider, for example, separating JPEG images by geographical region: this can be done by identifying the images according to the geotags attached to them. A filtered replication system has two major goals: 1) each clone stores exactly the items matching its filter, and 2) item versions should be consistent across clones.

**4.10.2. Predicting Location**
GPS in mobile devices enables location-based services for the user, which in turn make it possible to track the user [29] [30]. The main idea of location prediction in WhereStore is to predict the user's future location by matching the user's past location history against the present location [31].

### 4.11. WhereStore System Framework
WhereStore provides dynamic data storage for its users based on two parameters: 1) the user's past locations and 2) the user's present location. The conceptual view of WhereStore is shown in Figure 6. WhereStore gives its user complete control: data are grouped according to the geographical regions where the user is likely to stay, which ensures availability of data at the user's current location. Compared with other frameworks, WhereStore provides a completely transparent mechanism for data placement.

Figure 6. WhereStore conceptual view.

WhereStore sits on top of the replication system and the location service. The replication layer consists of various collections, and each application in WhereStore has separate collections. The replication layer creates clones (i.e., replicas) for the mobile devices and the cloud. Each clone has its own storage capacity and filter, where the storage capacity specifies the maximum number of bytes. WhereStore's semantics are the same as a cache: the cloud is accessed only when an item is not available locally. The filter in each clone is adjusted according to the user's current location.

**4.11.1. Types of Data**
WhereStore acts on groups, regions, and items. Each item is identified by its key and has an associated priority, and data can be divided into several items. Groups are sets of items, and regions describe different geographical areas. WhereStore has a separate interface to create applications and maintain regions and groups: regions are created according to geographical areas, and each region is associated with multiple groups.

**4.11.2. Filters**
Filters in WhereStore specify the items stored at a given location. There is a set of filters, one for each possible future location; future locations are identified from the current location using the location prediction system. Let (l1, l2, l3, ..., ln) be the possible future locations, each with probability pi. WhereStore creates a new filter fi for each possible future location. When the device's location changes, the newly computed and updated set of filters is passed to the replication system. Each replica has its own probability pi and maximum storage capacity. Items are ranked by occurrence: if a particular item appears in more filters, its rank is higher.

**4.11.3. Cloud Synchronization**
Data exchange is performed on the replication platform through synchronization established between the cloud and the smartphones. This gives a smartphone the advantage of holding only the items that exactly match its filter. Smartphones suffer from limited storage capacity, so to use the space properly, a smartphone's filters are evaluated in the cloud: for each filter, the cloud computes the set of matching items and assigns each item a rank, and the storage allotted to each item is calculated according to its rank. The toy function below illustrates this kind of rank computation.
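As a concrete (and deliberately simplified) illustration of the ranking rule just described, the function below scores each item by the filters it matches, weighting each match by the predicted probability p_i of the filter's location. The record types and the exact weighting are our assumptions; WhereStore's actual ranking is described only qualitatively in the text.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy WhereStore-style ranking: an item's rank grows with the number of
// (predicted-location) filters it matches, each weighted by that location's
// predicted probability p_i.
class FilterRanking {
    record Item(String key, String region) {}
    record Filter(String region, double probability) {}

    static Map<String, Double> rank(List<Item> items, List<Filter> filters) {
        Map<String, Double> ranks = new HashMap<>();
        for (Item it : items)
            for (Filter f : filters)
                if (it.region().equals(f.region()))
                    ranks.merge(it.key(), f.probability(), Double::sum);
        return ranks; // items appearing in more (and likelier) filters rank higher
    }
}
```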
### 4.12. Implementation of WhereStore
The WhereStore implementation is based on a client-server architecture: the cloud acts as the server and the user's mobile device acts as the client. The client has two major components, 1) location and 2) replication. The location component supplies information about future smartphone locations, and the replication component holds information about the local cache and its copy in the cloud. The architecture of WhereStore is shown in Figure 7. Whenever an application interacts with WhereStore, a configuration file must be provided; the configuration file specifies the replication at a particular location. Filters are updated based on the input given by the configuration file, and the updated filters are then used by the replication system.

Figure 7. Architecture of WhereStore.

**4.12.1. Mobile Phone Data Access**
WhereStore uses existing data stores for accessing data, since numerous existing applications store data in their own way. WhereStore uses Cimbiosys as its replication system [32]. It relies on a callback mechanism for accessing data, invoked whenever needed. Cimbiosys determines the data to be broadcast during the synchronization process, and WhereStore is responsible for creating metadata whenever a new item is added.

**4.12.2. Synchronization of Cache**
In WhereStore, Cimbiosys synchronization exchanges messages using a technique called pull-style exchange, which works in one direction: the target clone establishes synchronization with the source clone by sending a request message. Once the connection is established, the source clone checks whether any of its items are not yet held by the target clone; if such items exist, the source clone returns them to the target clone. The current Cimbiosys model can be extended in two ways: 1) modifying the sync request from the mobile device to the cloud, and 2) modifying the filter based on Cimbiosys.

**4.12.3. Location Prediction**
WhereStore uses location prediction to estimate a user's possible future locations, achieved using the StarTrack framework [33]. In StarTrack, the user's location is captured periodically by the smartphone and forwarded to the cloud, where the StarTrack servers reside. The StarTrack server converts the locations received from smartphones into tracks and provides an API (Application Programming Interface) to perform operations on tracks. To identify where the user will be in the future, StarTrack uses a structure called the place transition graph, built from the tracks generated by the framework. It also records the places frequently visited by the user; these FrequentPlaces are created from latitude-longitude pairs along the tracks. The place transition graph is constructed over the FrequentPlaces and is usually represented as an adjacency matrix. First, every element of the matrix is initialized to zero; then the value of each entry is set by counting trips between the corresponding start and end points; finally, each row is normalized into probabilities by summing the trip frequencies in the row and dividing each frequency by the row sum. The snippet below carries out exactly this row normalization.
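The following helper performs the normalization step described above: trip counts between FrequentPlaces become transition probabilities by dividing each row entry by the row sum. The matrix encoding is the only assumption here; the arithmetic follows the text directly.

```java
// Row-normalizing a place-transition matrix: entry (i, j) counts trips from
// place i to place j, and dividing each row by its sum turns counts into
// transition probabilities.
class PlaceTransitionGraph {
    static double[][] normalizeRows(int[][] tripCounts) {
        int n = tripCounts.length;
        double[][] p = new double[n][n];
        for (int i = 0; i < n; i++) {
            long rowSum = 0;
            for (int j = 0; j < n; j++) rowSum += tripCounts[i][j];
            for (int j = 0; j < n; j++)
                p[i][j] = rowSum == 0 ? 0.0 : (double) tripCounts[i][j] / rowSum;
        }
        return p;
    }

    public static void main(String[] args) {
        double[][] p = normalizeRows(new int[][]{{0, 3, 1}, {2, 0, 2}, {0, 0, 0}});
        System.out.println(p[0][1]); // 0.75: three of the four trips from place 0 end at place 1
    }
}
```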
### 4.13. Virtual Machine Synchronization (VMsync)
The utility of mobile devices increases if the user can switch from one device to another; for example, the user should be able to continue an operation on a second mobile device exactly where it was left on the first, without delay. VMsync [34] is used to synchronize virtual machines (VMs), with the synchronization taking place among mobile devices. In device switching, a VM encapsulates the computation state and data of a complete operating system and its associated applications, so the application state is synchronized along with the devices. System-level VMs are used nowadays to provide improved security and manageability. VMsync is used to reduce the switching delay and keep the VM image consistent: it transfers the changes made in the active VM to the other mobile devices. The most important component of the VMsync architecture is the daemon, whose purpose is to monitor the memory and filesystem of the VM. Whenever changes are made, the daemon reports them to the server; the server, located in the cloud, sends the changes on to the other devices.

### 4.14. Preliminary Design of VMsync
The main function of VMsync is to manage VM images across mobile devices while reducing the time needed to switch between them. It uses a method called Switch Penalty to perform device migration; the disadvantage of this method is its high data transfer cost. The VMsync architecture, shown in Figure 8, contains multiple hosts and provides the user with: 1) virtualization support, 2) a resource-rich server, and 3) synchronization between devices.

Figure 8. VMsync architecture.

Initially, VMsync has only one active device, which updates the server about memory and filesystem changes periodically; this can be done only while the device is active and is known as a checkpoint. VMs other than the active VM are known as standby VMs; these are updated periodically with the help of the synchronization server, but during the update the device must be connected to the network, and a synchronization daemon monitors this process. VMs should be designed to balance data transfer against computational overhead. Modern mobile operating systems such as Windows, Android, and iOS support hardware manufactured by various companies; in the future, this could enable adaptation to hardware changes at runtime. A skeleton of the checkpointing daemon is sketched below.
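The sketch below captures the daemon's role as described above: record which VM blocks were dirtied and periodically checkpoint only those deltas to the synchronization server, which forwards them to the standby VMs. All identifiers, the block-level granularity, and the stubbed transport are assumptions; VMsync's actual daemon monitors the VM's memory and filesystem directly.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Skeleton of a VMsync-style daemon on the active device: dirty blocks are
// tracked as the VM runs, and only the deltas are shipped at each checkpoint.
class VMSyncDaemon {
    private final Map<Long, byte[]> dirtyBlocks = new ConcurrentHashMap<>();
    private final ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();

    void onBlockWrite(long blockId, byte[] data) { dirtyBlocks.put(blockId, data); }

    void start(long periodSec) {
        timer.scheduleAtFixedRate(this::checkpoint, periodSec, periodSec, TimeUnit.SECONDS);
    }

    private void checkpoint() {
        Map<Long, byte[]> delta = new ConcurrentHashMap<>(dirtyBlocks);
        dirtyBlocks.keySet().removeAll(delta.keySet());
        sendToSyncServer(delta); // stub: only the changed blocks cross the network
    }

    private void sendToSyncServer(Map<Long, byte[]> delta) { /* network transfer */ }
}
```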
### 4.15. Wireless Mesh Networks (WMN)
Wireless mesh networks (WMNs) provide low-cost, next-generation wireless networking with high-speed Internet access, and they support a wide range of mobile applications. Wireless mesh networking combined with mobile cloud computing (WM-MCC) is considered one of the best solutions for large-scale big data applications [35]. In a wireless mesh network, a mobile client connects to a base transceiver station (BTS) and accesses the mesh network via a mesh router; the mesh routers are connected to each other and communicate with the cloud through the Internet. The cloud service platform in wireless mesh networks provides data query services.

## 5. Privacy in Cloud
Privacy is a major component of MCC. The user needs to understand the standards and procedures provided by the cloud provider to protect their data from threats. The number of businesses and individuals moving their data and computation to the cloud keeps increasing, and although cloud computing provides numerous benefits, security remains one of the major challenges when data and computation are handled by untrusted third parties [36]. The following sections describe different security approaches used in the cloud to protect user data.

### 5.1. Secure Outsourcing of Collective Sensing and Analytic Applications to the Cloud (p-Cloud)
Under p-Cloud, two main approaches are proposed to provide security: i) Streamforce and ii) CloudMine.
Streamforce: an access control system for sharing data over malicious, untrusted clouds. It is designed with three goals:
1. to support the specification and enforcement of fine-grained access control policies;
2. to keep the access control policies enforced once the data is outsourced to the cloud;
3. to remain efficient when handling the most expensive computations.
CloudMine: an on-demand, cloud-based service with which different data owners achieve secure analysis over their collective data. It supports three essential functions: 1) sum, 2) set union and intersection, and 3) scalar product. CloudMine attains three security promises:
1. it provides data confidentiality against colluding, semi-honest data owners and semi-honest clouds;
2. the outputs of the joint computation are protected against semi-honest clouds;
3. data owners can accurately detect whether the cloud has been lazy.
A simple flavor of such privacy-preserving aggregation is sketched below.
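To give a flavor of how a sum can be computed without revealing individual inputs, the sketch below uses additive blinding: each owner submits its value plus a random blind, the cloud aggregates only blinded shares, and the blinds cancel when jointly removed from the total. This is a generic textbook construction chosen for illustration, not CloudMine's actual protocol, and it omits CloudMine's laziness detection entirely.

```java
import java.security.SecureRandom;

// Privacy-preserving sum via additive blinding (modulo 2^32): the cloud sees
// only blinded shares, yet the blinds cancel in the final aggregate.
class BlindedSum {
    static final long MOD = 1L << 32;

    public static void main(String[] args) {
        SecureRandom rng = new SecureRandom();
        long[] values = {17, 25, 4};          // private inputs of three data owners
        long cloudTotal = 0, blindTotal = 0;
        for (long v : values) {
            long blind = Integer.toUnsignedLong(rng.nextInt());
            cloudTotal = (cloudTotal + v + blind) % MOD; // what the cloud aggregates
            blindTotal = (blindTotal + blind) % MOD;     // what the owners jointly remove
        }
        System.out.println((cloudTotal - blindTotal + MOD) % MOD); // 46
    }
}
```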
### 5.2. System Model for the p-Cloud Approach
Figure 9 depicts the system model for deploying collective applications on untrusted clouds. It includes two kinds of entities: 1) clients and 2) the cloud. On the cloud, a collective task consisting of joint data and computation from different clients is performed; the attacker comprises the untrusted cloud and colluding clients. Three levels of untrustworthiness are considered:
1. curious but honest;
2. curious and lazy;
3. fully malicious.
The curious-and-lazy model allows attackers to deviate while carrying out the outsourced tasks: the cloud attempts to learn sensitive information and, rather than actively corrupting the computation, tries to do as little work as possible while charging the clients for the full task. This model is justified by the economic incentive to overcharge clients without being detected. Three security properties relate to this framework:
1. the outsourced data should be protected (input privacy);
2. the outputs should be protected (output privacy);
3. integrity, which comprises a) correctness, b) completeness, and c) freshness.
Streamforce achieves input and output privacy as well as correctness in the curious-but-honest model; CloudMine achieves similar properties in the curious-and-lazy adversary model.

Figure 9. Collaborative applications on the untrusted cloud.

### 5.3. Streamforce Approach
Streamforce uses a fine-grained access control system to share data on untrusted clouds, implementing fine-grained access control for collective applications outsourced to an untrusted cloud. Clients play two roles: a) data users and b) data owners. A user holds an access policy P, and an owner holds private data x with an associated attribute set I. When the data attributes satisfy the policy, i.e., P(I) = True, an authorized client can access x. First, the owner sends c = f(x; I) to the cloud using an encoding function f. Then the cloud transforms the encoded data as t = π(c). Finally, the client evaluates a function g(t). In this setting, input privacy means that the cloud cannot learn x from c; output privacy and correctness imply that the access control policy is enforced, i.e., unauthorized access is not permitted: g(t) = x ⟺ P(I) = True.

Figure 10 illustrates the design space for access control enforcement in a cloud setting along three dimensions: a) fine-grainedness, b) cloud trustworthiness, and c) cloud/client work ratio. A trusted cloud achieves the best fine-grainedness, supports an extensive range of policies, and achieves the best work ratio. Streamforce is specifically designed for stream data, with three goals: 1) it supports the specification and enforcement of fine-grained access control policies; 2) the policies remain enforced when the data is outsourced to the cloud; 3) the system is efficient when handling the most expensive computations.

Figure 10. State of the art in outsourced access control.

Streamforce's security rests on three main encryption schemes: a) deterministic encryption εd, b) proxy attribute-based encryption εp, and c) sliding-window encryption εw.

Deterministic encryption (DET): a private-key scheme εd = (Gen, Enc, Dec) that is semantically secure when encrypting distinct plaintexts and satisfies εd.Enc(m) = εd.Enc(m′) ⟺ m = m′.

Proxy attribute-based encryption (PABE): extends the idea of key-policy attribute-based encryption (KP-ABE), εp = (Gen, KeyGen, Enc, Trans, Dec) [37]. Specifically, Gen(·) generates a master key MK, and KeyGen(MK, P) generates a transformation key TK and a decryption key SK for a predicate P. Enc(m, A) encrypts m under the attribute set A; Trans(TK, CT) partially decrypts the ciphertext, which is then fully decrypted by Dec(SK, CT′). Transformation and decryption succeed if P(A) = True. Streamforce uses the construction provided in [38].

Sliding-window encryption (WE): permits a client to decrypt only the aggregate over a window of ciphertexts, not the individual ciphertexts, εw = (Gen, Enc, Dec). Let p(M, ws)[i] and s(M, ws)[i] denote the product and sum of the i-th sliding window of size ws over a sequence M. Gen(k) creates the public parameters and the private keys. Enc(M = (m0, m1, ..., mn−1), W) encrypts M using a set of window sizes W, producing CT = (c0, c1, c2, ...). Dec(ws, CT, SKws) decrypts CT for window size ws using the private key SKws; the outcome is s(M, ws)[i] for all i, i.e., the sums over the sliding windows.

Secure query operators: encryption protects data confidentiality against the cloud and unauthorized client access, but exposing encryption details directly to system entities is not an ideal abstraction for access control. Instead, Streamforce models and enforces access control policies by means of a set of secure query operators: 1) secure Map, 2) Filter, 3) Join, and 4) Aggregate. A toy demonstration of the deterministic property that equality-based operators rely on is given below.
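The deterministic property εd.Enc(m) = εd.Enc(m′) ⟺ m = m′ is what lets an untrusted party test equality without decrypting. The toy below demonstrates just that property using single-block AES in ECB mode; this is an illustration of determinism only, under our own choice of primitive, and is not a production DET construction (real systems such as CryptDB use carefully designed deterministic schemes).

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import java.util.Arrays;

// Toy demonstration: under a fixed key, equal plaintexts yield equal
// ciphertexts, so equality can be tested on ciphertexts alone.
class DeterministicEquality {
    public static void main(String[] args) throws Exception {
        SecretKey key = KeyGenerator.getInstance("AES").generateKey();
        Cipher enc = Cipher.getInstance("AES/ECB/NoPadding");
        enc.init(Cipher.ENCRYPT_MODE, key);

        byte[] m1 = Arrays.copyOf("AAPL".getBytes(), 16); // zero-pad to one AES block
        byte[] m2 = Arrays.copyOf("AAPL".getBytes(), 16);
        byte[] c1 = enc.doFinal(m1);
        byte[] c2 = enc.doFinal(m2);

        System.out.println(Arrays.equals(c1, c2)); // true: Enc(m) == Enc(m') <=> m == m'
    }
}
```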
Evaluation: a prototype of Streamforce was implemented over Esper (an open-source stream processing engine capable of handling millions of data items per second), using a benchmark dataset resembling stock market data and containing one million tuples belonging to 100 streams. Throughput and latency were examined through experiments conducted on Amazon EC2 with six policies (T1-T6). The throughputs of the various policies on a single cloud server are shown in Figure 11; the maximum throughput, about 250 tuples/sec, is obtained for a simple policy using the Map operator, which compares poorly with Esper's performance on plaintext data. Experiments with multiple cloud servers demonstrated scalability in both latency and throughput. When more cloud servers are added, the workload is shared in two ways: a) simple, based on streams, and b) balanced, based on computation load.

Figure 11. Throughput on a single server.

### 5.4. Practical Confidentiality-Preserving Big Data Analysis
Cloud computing supports big data analysis [39] via data flow languages such as Pig Latin [40]. It is of great value to keep sensitive data in the cloud only in encrypted form while still performing meaningful data analysis. Crypsis is a runtime system for Pig Latin that allows such scripts to be executed efficiently using cloud resources without exposing the input data in plaintext form. Crypsis broadens the scope of encryption-enabled big data analysis from two perspectives:
i) Extended program perspective: Crypsis identifies multiple opportunities to operate in encrypted mode by analyzing entire data flow programs.
ii) Extended system perspective: instead of giving up and forcing users to run entire data flow programs on their own machines, Crypsis can still use cloud resources by considering the option of performing small computations on the user's side.
Crypsis makes three main contributions to confidentiality-preserving big data analysis:
1) an architecture for executing Pig Latin scripts without sacrificing confidentiality;
2) a novel field-sensitive program analysis and transformation for Pig Latin scripts that can distinguish operations by their effects;
3) a fundamental evaluation of the solution on realistic Pig Latin scripts obtained from the open-source Apache Pig project.

### 5.5. Background: Pig Latin
Apache Pig is a data analysis platform. It incorporates the Pig runtime framework for the high-level data flow language Pig Latin [40]. Pig lets data experts query big data without the complication of writing MapReduce programs, and no fixed schema is required. These properties of Pig Latin, together with its wide adoption, led to it being chosen as the data flow language for Crypsis.

Data types and statements: Pig Latin includes simple types (e.g., int, long) and complex types (e.g., bag, tuple, map). A field can be a data item such as a tuple, bag, or map. Pig Latin statements work with relations, where a relation is simply a bag of tuples.

Expressions and operators: relations are established by loading an input file or by applying relational operators (e.g., JOIN, GROUP BY, FOREACH) to other relations. Operators in Pig Latin also include casts, arithmetic operators (e.g., +, −, /, *), comparisons, and the LOAD and STORE operators.

Functions: Pig Latin incorporates built-in functions (e.g., ABS, COS, AVG) and allows users to define their own user-defined functions (UDFs) if needed; the standard shape of such a UDF is sketched below.
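For readers unfamiliar with Pig, the snippet below shows the standard shape of a Java UDF (it mirrors the canonical example from the Apache Pig documentation). Crypsis leans on this same extension point: its operations on encrypted data are packaged as pre-defined UDFs, as described in the next section.

```java
import java.io.IOException;
import org.apache.pig.EvalFunc;
import org.apache.pig.data.Tuple;

// A minimal Pig Latin user-defined function: a Java class extending EvalFunc,
// invocable from a script after it has been registered.
public class Upper extends EvalFunc<String> {
    @Override
    public String exec(Tuple input) throws IOException {
        if (input == null || input.size() == 0 || input.get(0) == null) return null;
        return ((String) input.get(0)).toUpperCase();
    }
}
```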
### 5.6. Architecture and System Overview
Crypsis assumes an adversary capable of fully manipulating the cloud infrastructure: the adversary can see the encrypted data and the Pig Latin scripts that operate on it, and can control both the computation software and the cloud infrastructure itself. Crypsis ensures confidentiality in the presence of such an adversary. Figure 12 illustrates the architecture of the Crypsis prototype; its workflow proceeds as follows.

Figure 12. Architecture of Crypsis.

1. Program transformation: the client submits a source Pig Latin script that operates on unencrypted data. Crypsis analyzes it to determine the encryption schemes with which the input data must be encrypted. The operators in the source script are replaced with calls to Crypsis UDFs that implement the corresponding operations on encrypted data, and constants are replaced by their encrypted values, producing a target script that can be executed on encrypted data.

2. Determining encryption techniques missing in the cloud: an encryption service maintains an input data encryption schema that tracks which parts of the input data have already been encrypted and stored in the cloud. Based on this schema and the encryption schemes recommended in the previous step, the encryption service determines which encryptions are still missing in the cloud.

3. Encryption and sending data to the cloud: Crypsis uses several encryption schemes enabled by diverse cryptosystems. 1) Randomized encryption (RAN) is the primary scheme; it supports no operators but is the most secure. One way to implement RAN is to use Blowfish [41] to encrypt integer values, exploiting its small 64-bit block size, and AES [42] to encrypt everything else. 2) Deterministic encryption (DET) permits equality comparisons over encrypted data. DET can be built from AES and Blowfish as permutation block ciphers for values of 128 bits and 64 bits, respectively, padding smaller values to the expected block size; for values larger than 128 bits, the approach used in CryptDB [43] applies. 3) The order-preserving encryption (OPE) scheme permits order comparisons, using the order-preserving symmetric encryption implementation from CryptDB. 4) The Paillier cryptosystem implements additive homomorphic encryption (AHE), which allows additions over encrypted data, and the ElGamal [44] cryptosystem implements multiplicative homomorphic encryption (MHE).

4. Execution: once all the required encrypted data are loaded in the cloud, the execution handler requests that the job be started.

5. Crypsis UDFs: Crypsis imposes no changes on the Pig Latin service itself. Instead, operations on encrypted data are handled by a set of pre-defined UDFs stored in the cloud storage alongside the encrypted data.

6. Re-encryption: while the target script executes, intermediate data may be generated whose encryption scheme depends on the previous operation performed on it. Crypsis handles this through re-encryption of intermediate data: the intermediate data are sent to the user, where they can be decrypted without risk, re-encrypted under the required scheme, and sent back to the cloud. Once re-encryption is finished, execution of the target script continues.

7. Results: the results are returned to the user when the job finishes.

The additive-homomorphic property that step 3's AHE relies on is illustrated next.
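To make the AHE building block concrete, here is a minimal, self-contained Paillier sketch: multiplying two ciphertexts modulo n² yields an encryption of the sum of the plaintexts. This follows the standard textbook Paillier construction (with g = n + 1); the 256-bit primes are far too small for real security and are used only to keep the demo fast.

```java
import java.math.BigInteger;
import java.security.SecureRandom;

// Minimal Paillier demo of additive homomorphism: Enc(a) * Enc(b) mod n^2
// decrypts to a + b. Illustration only; key sizes are deliberately tiny.
class PaillierDemo {
    public static void main(String[] args) {
        SecureRandom rng = new SecureRandom();
        BigInteger p = BigInteger.probablePrime(256, rng);
        BigInteger q = BigInteger.probablePrime(256, rng);
        BigInteger n = p.multiply(q), n2 = n.multiply(n);
        BigInteger lam = lcm(p.subtract(BigInteger.ONE), q.subtract(BigInteger.ONE));
        BigInteger g = n.add(BigInteger.ONE);      // standard choice g = n + 1
        BigInteger mu = lam.modInverse(n);

        BigInteger c1 = encrypt(BigInteger.valueOf(20), n, n2, g, rng);
        BigInteger c2 = encrypt(BigInteger.valueOf(22), n, n2, g, rng);
        BigInteger cSum = c1.multiply(c2).mod(n2); // homomorphic addition
        System.out.println(decrypt(cSum, n, n2, lam, mu)); // 42
    }

    static BigInteger lcm(BigInteger a, BigInteger b) { return a.divide(a.gcd(b)).multiply(b); }

    static BigInteger encrypt(BigInteger m, BigInteger n, BigInteger n2, BigInteger g, SecureRandom rng) {
        BigInteger r;
        do { r = new BigInteger(n.bitLength(), rng).mod(n); }
        while (r.signum() == 0 || !r.gcd(n).equals(BigInteger.ONE));
        return g.modPow(m, n2).multiply(r.modPow(n, n2)).mod(n2);
    }

    static BigInteger decrypt(BigInteger c, BigInteger n, BigInteger n2, BigInteger lam, BigInteger mu) {
        BigInteger u = c.modPow(lam, n2).subtract(BigInteger.ONE).divide(n); // L(u) = (u - 1) / n
        return u.multiply(mu).mod(n);
    }
}
```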
### 5.7. Program Analysis and Transformation
The analysis and transformation performed by Crypsis on Pig Latin scripts is illustrated by Listings 1 and 2. The source script has two input files, input1 and input2, where input1 has two fields and input2 has one. Line 3 of the script filters out all rows whose value is less than or equal to 10. The subsequent lines (lines 4 and 5) sum the second field of input1, grouped by the first field. Line 6 computes a per-group sum using input2, and line 7 stores the result in the output file. Figure 13 outlines the different steps and the intermediate data structures of the program transformation.

Listing 1. Source Pig Latin script S1.
Listing 2. Transformed Pig Latin script.
Figure 13. Program transformation in Crypsis.

Input script analysis: Crypsis first checks the user-submitted source (Pig Latin) script for syntax errors and generates a directed acyclic data-flow graph (DAG) representation of it, with relations as vertices and the data flows between relations as edges. It also generates two additional data structures: 1) the MET (Map of Expression Trees) and 2) the SAF (Set of Annotated Fields) [40]. The expressions of the source script are stored in the MET, keyed by the vertices of the data-flow graph, while the SAF contains one entry, an annotated field (AF), for each field of each relation.

Encryption analysis: the program transformation component identifies the encryption scheme required for each field by inspecting the MET. Each operator in the script is associated with an encryption technique, and some relational operators in Pig Latin require specific encryption schemes.

Script transformation: once the encryption scheme required for each field is known, Crypsis decides which encrypted file to load. If a field is not available under a valid encryption technique, a re-encryption operation is initiated, calling the encryption service to convert the required fields to the specific scheme. The transformed Pig Latin script is shown in Listing 2. A highly simplified flavor of the encryption analysis step is sketched below.
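The sketch below conveys, in highly simplified form, what the encryption analysis does: walk the operators applied to each field and accumulate the encryption schemes they require, using the operator-to-scheme mapping from the text (equality maps to DET, ordering to OPE, addition to AHE, multiplication to MHE). Representing the MET as flat (field, operator) pairs is our simplification; Crypsis's real analysis works over full expression trees.

```java
import java.util.EnumSet;
import java.util.HashMap;
import java.util.Map;

// Toy field-sensitive encryption analysis: each field accumulates the set of
// schemes demanded by the operators applied to it (RAN is the secure default).
class SchemeInference {
    enum Scheme { RAN, DET, OPE, AHE, MHE }

    static final Map<String, Scheme> OP_TO_SCHEME = Map.of(
            "==", Scheme.DET, ">", Scheme.OPE, "<", Scheme.OPE,
            "+", Scheme.AHE, "*", Scheme.MHE);

    // (field, operator) pairs stand in for the MET expression trees.
    static Map<String, EnumSet<Scheme>> analyze(String[][] fieldOps) {
        Map<String, EnumSet<Scheme>> needed = new HashMap<>();
        for (String[] fo : fieldOps)
            needed.computeIfAbsent(fo[0], k -> EnumSet.of(Scheme.RAN))
                  .add(OP_TO_SCHEME.getOrDefault(fo[1], Scheme.RAN));
        return needed;
    }

    public static void main(String[] args) {
        // A field compared with ">" needs OPE; a field that is summed needs AHE.
        System.out.println(analyze(new String[][]{{"x1", ">"}, {"x2", "+"}}));
    }
}
```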
### 5.8. Evaluation of Big Data Analysis
Micro-benchmarks: a micro-benchmark compares unencrypted and encrypted data in terms of data size and execution time; the results are shown in Table 2. The evaluation was performed on a single machine with two 32-bit CPUs and 3 GB of RAM. One problem encountered while running the benchmark concerned Pig Latin scripts that project the values of map fields using chararray constants as keys (map#'key').

Table 2. Size of data and latency of addition and multiplication operations over plaintext and encrypted data. † is the number of operations performed, in multiples of 1000; NE denotes no encryption (plaintext data).

| Ops† | Size (KB), NE | Size (KB), AHE | Size (KB), MHE | Add (ms), NE | Add (ms), AHE | Multiply (ms), NE | Multiply (ms), MHE |
|---|---|---|---|---|---|---|---|
| 2 | 269 | 12,071 | 12,153 | 32 | 477 | 32 | 2267 |
| 4 | 538 | 24,142 | 24,306 | 63 | 895 | 62 | 4118 |
| 6 | 807 | 36,212 | 36,459 | 92 | 1314 | 90 | 5978 |
| 8 | 1076 | 48,283 | 48,611 | 121 | 1730 | 118 | 7818 |
| 10 | 1345 | 60,354 | 60,764 | 150 | 2147 | 147 | 9658 |

PigMix: the Apache PigMix2 [45] benchmark was run to measure the performance of Crypsis. PigMix2 is a set of 17 Pig Latin scripts that test the latency and scalability of the Pig runtime; the experiment was performed on Amazon EC2 [46].

### 5.9. Information Leakage
Information leakage [47] describes how privacy is compromised in the mobile environment. Two types of attacks are possible: 1) external attacks and 2) internal attacks. Both are used to extract user information, and both can be performed without the victim ever using the attacker's devices.

### 5.10. Background of Mobile Analytic Services
The background of mobile analytic services concerns developers, users, applications, networks, and so on. This section covers the app ecosystem and mobile analytics.

**5.10.1. App Ecosystem**
Application developers use ad networks to increase the profit from their applications; a recent study shows that 52.1% of the top applications available in the Android Market embed at least one ad network. The app ecosystem, shown in Figure 14, illustrates the flow of information among its participants. Mobile applications embed an analytics library whose main function is to collect user-related attributes and send them to servers maintained by analytics companies. The information is processed and supplied to ad networks such as Flurry and Google Ads to serve appropriate ads to the user.

Figure 14. App ecosystem.

**5.10.2. Mobile Analytics and Tracking**
Mobile analytics measure the performance of applications based on prior knowledge about users, applications, and so on. The Flurry dashboard, shown in Figure 15, reports the various interests of a user.

Figure 15. Flurry analytics.

### 5.11. User Profile Extraction
User profile extraction is used to extract various pieces of information about users, with different services collecting distinct attributes (e.g., name, age). To extract information about a user, the first step is to act on the user's behalf. For Google, the next step is to extract the user profile as presented by Google; for Flurry, targeted information must be sent to the analytics application, which in turn exposes the user profile.

**5.11.1. Device ID Spoofing**
Getting access to the device ID: the device ID of an Android user can be obtained in two ways: 1) by grabbing a message sent to a third party, or 2) by capturing the identifier of a target device.
Device ID spoofing: Android users can easily be identified by combining the device ID with device information, which can be obtained using the methods above. Once the device information is available, spoofing is performed by changing the values of the identifying parameters listed in Table 3.

**5.11.2. Extracting User Information**
Google: one advantage of Android is that it allows users to manage their application preferences; this facility exposes user information held on the Google server, so anyone acting as the user can access the profile.
Flurry: unlike Google, Flurry does not let its users access their own information. The profile extraction for Flurry, shown in Figure 16, works by spoofing the target device ID: after identifying the ID, the attacker makes Flurry generate a report message (appIDx), through which all the user information becomes accessible. Another method of extracting a user profile is to capture the audience report (Pt) at time t. Flurry also provides an additional feature to segment users by age, name, group, and so on.

Figure 16. Privacy leakage attack scenario.
### 5.12. Deceiving User Profiles
The second attack targets the analytics results: it attacks the analytics service so that inappropriate ads are delivered to the user. This is done by identifying the target device and corrupting the user information by supplying irrelevant usage reports, which reduces the benefits to the ad companies.

Table 3. Android path identifiers.

| Parameter | File path in the Android file system |
|---|---|
| Android ID | /data/data/com.android.providers.settings/databases/settings.db |
| ro.build.id, ro.build.version.release, ro.product.brand, ro.product.name, ro.product.device, ro.product.model | /system/build.prop |

**Attacking Technique**
The attack proceeds in two ways, 1) validating user information and 2) implementing the ad-influence attack, both of which use the following steps.
Training: attackers create new user profiles and train them according to different categories. As a result, the ad providers (Google and Flurry) update the user profiles according to the reports they receive from the various categories. The response times differ: Google takes 6 hours to update a user profile, while Flurry takes one week.
Collecting ads: the HTTP protocol is used to deliver ads to users of Google or Flurry. Attackers can run tcpdump to extract ads from the TCP stream, but this is possible only for Google; for Flurry, redirection methods are used to obtain the ads.

### 5.13. User Authentication in MCC
In MCC, user authentication is used to validate the user's identity. Authentication protects the user against privacy and security issues [48] and prevents unauthorized access. Security must be focused on three major components of MCC: the cloud, the wireless communication, and the mobile device. An efficient algorithm has the lowest possible computing, memory, and storage overheads. The purpose of an authentication algorithm is to reduce the security threats to mobile devices; threats commonly encountered include denial of service, loss of the device, and device malfunction [48]. Compared with cloud computing, authentication in MCC differs in the following respects: 1) resource limitations, 2) sensors, 3) high mobility, and 4) network heterogeneity.

## 6. Conclusion and Future Work
In this paper, we have reviewed and explained in detail offloading, mobile distribution, and privacy in MCC. To implement next-generation wireless networking at low cost, we also described Wireless Mesh Networking with MCC (WM-MCC). After reviewing these aspects, we find that MCC can provide efficient data storage and processing, but the factors limiting MCC are computation power, bandwidth, security, and energy. We also find that the use of encryption in offloading and remote execution leads to performance degradation. New research and development programs are required to make offloading decisions more feasible and to improve security in the mobile cloud. Furthermore, users want to migrate their data from smartphones to the cloud, but this migration poses technical issues. Hence, a concerted effort from academia and industry is needed to address these shortcomings.
## References
[1] Zhao, W., Sun, Y. and Dai, L. (2010) Improving Computer Basis Teaching through Mobile Communication and Cloud Computing Technology. Proceedings of the 3rd International Conference on Advanced Computer Theory and Engineering (ICACTE'10).
[2] Heavy Reading Real World Research (2013) The Mobile Cloud Market Outlook to 2017.
[3] ABI (2009) Mobile Cloud Computing Subscribers to Total Nearly One Billion by 2014. Tech. Rep., ABI Research.
[4] Rahimi, M.R., Ren, J., Liu, C.H., Vasilakos, A.V. and Venkatasubramanian, N. Mobile Cloud Computing: A Survey, State of Art and Future Directions.
[5] Fernando, N., Loke, S.W. and Rahayu, W. (2013) Mobile Cloud Computing: A Survey. Future Generation Computer Systems, 29, 84-106.
[6] Fernando, N., Loke, S.W. and Rahayu, W. (2013) Mobile Cloud Computing: A Survey. Future Generation Computer Systems, 29, 84-106.
[7] Wilbaut, C., Hanafi, S. and Salhi, S. (2008) A Survey of Effective Heuristics and Their Application to a Variety of Knapsack Problems. IMA Journal of Management Mathematics, 19, 227-244. https://doi.org/10.1093/imaman/dpn004
[8] Kchaou, H., Kechaou, Z. and Alimi, A.M. (2015) Towards an Offloading Framework Based on Big Data Analytics in Mobile Cloud Computing Environments. Procedia Computer Science, 53, 292-297. https://doi.org/10.1016/j.procs.2015.07.306
[9] Bahl, P., et al. (2012) Advancing the State of Mobile Cloud Computing. Proceedings of the 3rd ACM Workshop on Mobile Cloud Computing and Services, Low Wood Bay, 25 June 2012, 21-28. https://doi.org/10.1145/2307849.2307856
[10] Verbelen, T., et al. (2012) Cloudlets: Bringing the Cloud to the Mobile User. Proceedings of the 3rd ACM Workshop on Mobile Cloud Computing and Services, ACM. https://doi.org/10.1145/2307849.2307858
[11] Chen, M., Jin, H., Wen, Y. and Leung, V.C.M. (2013) Enabling Technologies for Future Data Center Networking: A Primer. IEEE Network, 27, 8-15. https://doi.org/10.1109/MNET.2013.6574659
[12] Kumar, K. and Lu, Y.-H. (2010) Cloud Computing for Mobile Users: Can Offloading Computation Save Energy? IEEE Computer, 43, 51-56. https://doi.org/10.1109/MC.2010.98
[13] Yang, K., Ou, S. and Chen, H.-H. (2008) On Effective Offloading Services for Resource-Constrained Mobile Devices Running Heavier Mobile Internet Applications. IEEE Communications Magazine, 46, 56-63. https://doi.org/10.1109/MCOM.2008.4427231
[14] Cuervo, E., Balasubramanian, A., Cho, D., Wolman, A., Saroiu, S., Chandra, R. and Bahl, P. (2010) MAUI: Making Smartphones Last Longer with Code Offload. ACM MobiSys'10, 49-62. https://doi.org/10.1145/1814433.1814441
[15] Chun, B.-G., Ihm, S., Maniatis, P., Naik, M. and Patti, A. (2011) CloneCloud: Elastic Execution between Mobile Device and Cloud. ACM EuroSys'11, 301-314. https://doi.org/10.1145/1966445.1966473
[16] Huang, B.-K., et al. (2015) A Cloud-Based Offloading Service for Computation-Intensive Mobile Applications. IEEE 21st International Conference on Embedded and Real-Time Computing Systems and Applications (RTCSA), 19-21 August 2015. https://doi.org/10.1109/rtcsa.2015.17
[17] Ra, M.R., Sheth, A., Mummert, L., Pillai, P., Wetherall, D. and Govindan, R. (2011) Odessa: Enabling Interactive Perception Applications on Mobile Devices. Proceedings of the 9th International Conference on Mobile Systems, Applications, and Services, Bethesda, 28 June-1 July 2011, 43-56. https://doi.org/10.1145/1999995.2000000
[18] Bradski, G. and Kaehler, A. (2008) Learning OpenCV: Computer Vision with the OpenCV Library. O'Reilly Media.
[19] Romea, A.C., Berenson, D., Srinivasa, S. and Ferguson, D. (2009) Object Recognition and Full Pose Registration from a Single Image for Robotic Manipulation. IEEE International Conference on Robotics and Automation.
[20] Lowe, D. (2004) Distinctive Image Features from Scale-Invariant Keypoints. International Journal of Computer Vision (IJCV), 60, 91-110. https://doi.org/10.1023/B:VISI.0000029664.99615.94
[21] Pillai, P.S., Mummert, L.B., Schlosser, S.W., Sukthankar, R. and Helfrich, C.J. (2009) SLIPstream: Scalable Low-Latency Interactive Perception on Streaming Data. ACM International Workshop on Network and Operating System Support for Digital Audio and Video.
[22] Kosta, S., Aucinas, A., Hui, P., Mortier, R. and Zhang, X. (2012) ThinkAir: Dynamic Resource Allocation and Parallel Execution in the Cloud for Mobile Code Offloading. INFOCOM 2012 Proceedings, IEEE, 945-953.
[23] Kosta, S., Perta, V.C., Stefa, J., Hui, P. and Mei, A. (2013) Clone2Clone (C2C): Peer-to-Peer Networking of Smartphones on the Cloud. 5th USENIX Workshop on Hot Topics in Cloud Computing (HotCloud13).
[24] Feldman, A.J., Zeller, W.P., Freedman, M.J. and Felten, E.W. SPORC: Group Collaboration Using Untrusted Cloud Resources. Proc. OSDI'10.
[25] Ellis, C.A. and Gibbs, S.J. (1989) Concurrency Control in Groupware Systems. SIGMOD Rec., 18, 399-407. https://doi.org/10.1145/66926.66963
[26] Ravindranath, L., Thiagarajan, A., Balakrishnan, H. and Madden, S. (2012) Code in the Air: Simplifying Sensing and Coordination Tasks on Smartphones. Proceedings of the Twelfth Workshop on Mobile Computing Systems & Applications, p. 4, ACM.
[27] Stuedi, P., Mohomed, I. and Terry, D. (2010) WhereStore: Location-Based Data Storage for Mobile Devices Interacting with the Cloud. Proceedings of the 1st ACM Workshop on Mobile Cloud Computing & Services: Social Networks and Beyond, p. 1, ACM.
[28] Trestian, I., Ranjan, S., Kuzmanovic, A. and Nucci, A. (2009) Measuring Serendipity: Connecting People, Locations and Interests in a Mobile 3G Network. Proceedings of the 9th ACM SIGCOMM Conference on Internet Measurement, New York, 267-279. https://doi.org/10.1145/1644893.1644926
[29] Ananthanarayanan, G., Haridasan, M., Mohomed, I., Terry, D. and Thekkath, C.A. (2009) StarTrack: A Framework for Enabling Track-Based Applications. Proceedings of the 7th International Conference on Mobile Systems, Applications, and Services, New York, 207-220. https://doi.org/10.1145/1555816.1555838
[30] Mun, M., Reddy, S., Shilton, K., Yau, N., Burke, J., Estrin, D., Hansen, M., Howard, E., West, R. and Boda, P. (2009) PEIR, the Personal Environmental Impact Report, as a Platform for Participatory Sensing Systems Research. MobiSys'09: Proceedings of the 7th International Conference on Mobile Systems, Applications, and Services, New York, 55-68. https://doi.org/10.1145/1555816.1555823
[31] Krumm, J. and Horvitz, E. (2007) Predestination: Where Do You Want to Go Today? Computer, 40, 105-107.
[32] Ramasubramanian, V., Rodeheffer, T.L., Terry, D.B., Walraed-Sullivan, M., Wobber, T., Marshall, C.C. and Vahdat, A. (2009) Cimbiosys: A Platform for Content-Based Partial Replication. NSDI'09: Proceedings of the 6th USENIX Symposium on Networked Systems Design and Implementation, Berkeley, 261-276.
[33] Ananthanarayanan, G., Haridasan, M., Mohomed, I., Terry, D. and Thekkath, C.A. StarTrack: A Framework for Enabling Track-Based Applications. MobiSys'09: Proceedings of the 7th International Conference on Mobile Systems, Applications, and Services, New York, 207-220. https://doi.org/10.1145/1555816.1555838
[34] Bickford, J. and Cáceres, R. (2013) Towards Synchronization of Live Virtual Machines among Mobile Devices. Proceedings of the 14th Workshop on Mobile Computing Systems and Applications, p. 13, ACM. https://doi.org/10.1145/2444776.2444794
[35] Lin, H., et al. (2015) A Trustworthy Access Control Model for Mobile Cloud Computing Based on Reputation and Mechanism Design. Ad Hoc Networks, 35, 51-64. https://doi.org/10.1016/j.adhoc.2015.07.007
[36] Dinh, T.T.A. and Datta, A. (2013) Towards Secure Outsourcing of Collaborative Sensing and Analytic Applications to the Cloud: The pCloud Approach. Proceedings of the First International Workshop on Middleware for Cloud-Enabled Sensing, p. 2, ACM. https://doi.org/10.1145/2541603.2541606
[37] Goyal, V., Pandey, O., Sahai, A. and Waters, B. (2006) Attribute-Based Encryption for Fine-Grained Access Control of Encrypted Data. CCS'06.
[38] Green, M., Hohenberger, S. and Waters, B. (2011) Outsourcing the Decryption of ABE Ciphertexts. USENIX Security.
[39] Stephen, J.J., Savvides, S., Seidel, R. and Eugster, P. (2014) Practical Confidentiality Preserving Big Data Analysis. Proceedings of the 6th USENIX Conference on Hot Topics in Cloud Computing, USENIX Association, 10.
[40] Olston, C., Reed, B., Srivastava, U., Kumar, R. and Tomkins, A. (2008) Pig Latin: A Not-So-Foreign Language for Data Processing. SIGMOD.
[41] Schneier, B. (1994) Description of a New Variable-Length Key, 64-Bit Block Cipher (Blowfish). Fast Software Encryption, Springer-Verlag, 191-204.
[42] Daemen, J. and Rijmen, V. (2002) The Design of Rijndael: AES - The Advanced Encryption Standard. Springer-Verlag, Berlin, Heidelberg, New York.
[43] Popa, R.A., Redfield, C.M.S., Zeldovich, N. and Balakrishnan, H. (2011) CryptDB: Protecting Confidentiality with Encrypted Query Processing. SOSP.
[44] ElGamal, T. (1985) A Public-Key Cryptosystem and a Signature Scheme Based on Discrete Logarithms. IEEE Transactions on Information Theory, 31, 4.
[45] Ouaknine, K., Carey, M. and Kirkpatrick, S. (2015) The PigMix Benchmark on Pig, MapReduce, and HPCC Systems. IEEE International Congress on Big Data (BigData Congress), 27 June-2 July 2015. https://doi.org/10.1109/bigdatacongress.2015.99
[46] Marathe, A., et al. (2016) Exploiting Redundancy and Application Scalability for Cost-Effective, Time-Constrained Execution of HPC Applications on Amazon EC2. IEEE Transactions on Parallel and Distributed Systems, 27, 2574-2588. https://doi.org/10.1109/TPDS.2015.2508457
[47] Chen, T., Ullah, I., Kaafar, M.A. and Boreli, R. (2014) Information Leakage through Mobile Analytics Services. Proceedings of the 15th Workshop on Mobile Computing Systems and Applications, p. 15, ACM. https://doi.org/10.1145/2565585.2565593
[48] Alizadeh, M., et al. (2015) Authentication in Mobile Cloud Computing: A Survey. Journal of Network and Computer Applications.
{ "disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.4236/JCC.2017.56001?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.4236/JCC.2017.56001, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "GOLD", "url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=75210" }
2017
[ "Review" ]
true
2017-04-06T00:00:00
[ { "paperId": "9a674d82cd4f47477e3f1aa7b89a3d9f6e3894b8", "title": "Learning OpenCV 3: Computer Vision in C++ with the OpenCV Library" }, { "paperId": "8a19fc3871bbb1841864950f25c03771b298666a", "title": "Exploiting Redundancy and Application Scalability for Cost-Effective, Time-Constrained Execution of HPC Applications on Amazon EC2" }, { "paperId": "d4e8e9d36d594b39b37b42c66841ef98f0e3925d", "title": "Authentication in mobile cloud computing: A survey" }, { "paperId": "6a3d2d1595aa0e5e90cfed08cd2c7ee9cb346743", "title": "A trustworthy access control model for mobile cloud computing based on reputation and mechanism design" }, { "paperId": "cc504af1af05c7ac1954962a037ddcc7f70bbd13", "title": "A Cloud-Based Offloading Service for Computation-Intensive Mobile Applications" }, { "paperId": "35316dbc451751b8bd61a9ebbda6889b90b02140", "title": "The PigMix Benchmark on Pig, MapReduce, and HPCC Systems" }, { "paperId": "7c8a4abc7624802783bde0688969fcdf373d01e7", "title": "Practical Confidentiality Preserving Big Data Analysis" }, { "paperId": "83eb1b3d8b3fe674c12ed0b09fa4a803ca16046b", "title": "Information leakage through mobile analytics services" }, { "paperId": "449e836eb90eff2d055e916a410ea97f19855e6d", "title": "Towards secure outsourcing of collaborative sensing and analytic applications to the cloud - the pCloud approach" }, { "paperId": "6cf7236dc94e53f31a5f265b13257382c23d9253", "title": "Mobile Cloud Computing: A Survey, State of Art and Future Directions" }, { "paperId": "a3b871c78aacb36c0336dc20274931b0d65701a0", "title": "Enabling technologies for future data center networking: a primer" }, { "paperId": "bea33fc20db18434b12269048c4af173075b3252", "title": "Towards synchronization of live virtual machines among mobile devices" }, { "paperId": "22d41328530154267c54d436d2d59d33fd9bacdc", "title": "Cloudlets: bringing the cloud to the mobile user" }, { "paperId": "f2befa373403def2a367edbe60afeaa03d1c7e61", "title": "Advancing the state of mobile cloud computing" }, { "paperId": "faff5dac93f324923ccb2c5ebeb1844de1643bd9", "title": "ThinkAir: Dynamic resource allocation and parallel execution in the cloud for mobile code offloading" }, { "paperId": "72d0ee37ff57ae7977d451b7e2e0bcbe522bbf44", "title": "Code in the air: simplifying sensing and coordination tasks on smartphones" }, { "paperId": "acc104820d211a25b568f8677b4ad29a8667c7cd", "title": "CryptDB: protecting confidentiality with encrypted query processing" }, { "paperId": "ccde4f28eac0501c7fa075d06ab3d0f01fbd09af", "title": "Outsourcing the Decryption of ABE Ciphertexts" }, { "paperId": "eb27a6067d7a0416d060434e307e8cf34c11a39f", "title": "Odessa: enabling interactive perception applications on mobile devices" }, { "paperId": "731304583d2e40bf6ee030e3cd81f767713f999a", "title": "CloneCloud: elastic execution between mobile device and cloud" }, { "paperId": "4be2c5d63e570300f30d96d12949f826c5c9c136", "title": "SPORC: Group Collaboration using Untrusted Cloud Resources" }, { "paperId": "5fe125dcbff9c3023fe66eab137f5fabadd61a87", "title": "Improving computer basis teaching through mobile communication and cloud computing technology" }, { "paperId": "0d366f3522bcf503b9f0fea8a5d009ba3ecddf39", "title": "WhereStore: location-based data storage for mobile devices interacting with the cloud" }, { "paperId": "0f7dd6bcaf4c2a64b3a51b7351929dab23584ea8", "title": "MAUI: making smartphones last longer with code offload" }, { "paperId": "93a96e0f68038fbbe70f80952632e8f0770af56e", "title": "Cloud Computing for Mobile Users: Can Offloading 
Computation Save Energy?" }, { "paperId": "018bff5b5b6d38ed6cd96f43adbb04a95f34695c", "title": "Measuring serendipity: connecting people, locations and interests in a mobile 3G network" }, { "paperId": "1802d0f19c7e324a784c5f334463607dddf7efb2", "title": "StarTrack: a framework for enabling track-based applications" }, { "paperId": "8e383be7e8a37c93435952797731f07940eac6dc", "title": "PEIR, the personal environmental impact report, as a platform for participatory sensing systems research" }, { "paperId": "3741c9aea5e0eba7658826052a30d0e82de67078", "title": "SLIPstream: scalable low-latency interactive perception on streaming data" }, { "paperId": "8ee174f53bf7fa3e2e43eaf60553601f7e9f8022", "title": "Object recognition and full pose registration from a single image for robotic manipulation" }, { "paperId": "ee07f7bea5c3c6aa44a7257dee8b63b574ce12c9", "title": "A Platform for Content-based Partial Replication" }, { "paperId": "3ac939c4ada416858e1c1bff9e6f18aaa98323f3", "title": "Pig latin: a not-so-foreign language for data processing" }, { "paperId": "d25ca9e72b3ede77cf08529f2bfe44ab7b07933f", "title": "Predestination: Where Do You Want to Go Today?" }, { "paperId": "30b4b7503a7004646ba4a75b6f0aaf4f626f0854", "title": "A survey of effective heuristics and their application to a variety of knapsack problems" }, { "paperId": "422876e542daadefe3371091a65c5671185796e2", "title": "Attribute-based encryption for fine-grained access control of encrypted data" }, { "paperId": "8c04f169203f9e55056a6f7f956695babe622a38", "title": "Distinctive Image Features from Scale-Invariant Keypoints" }, { "paperId": "92a6103f2c3b84e4ad87857b86b8566b59623442", "title": "The Design of Rijndael: AES - The Advanced Encryption Standard" }, { "paperId": "d1776dfb8f66cb40cadcac9bb66760ec9b7b3920", "title": "Description of a New Variable-Length Key, 64-bit Block Cipher (Blowfish)" }, { "paperId": "517b1a3da76f773df920ce59ce8bf6306de2c951", "title": "Towards an Offloading Framework based on Big Data Analytics in Mobile Cloud Computing Environments" }, { "paperId": "ec13c3e7119191802e6f5783d297fe7a5a05293e", "title": "Mobile cloud computing: A survey" }, { "paperId": "c0a568ca43a0a24c47d77934a3e228c168c825fb", "title": "Clone2Clone (C2C): Peer-to-Peer Networking of Smartphones on the Cloud" }, { "paperId": "5099369c8a4096c5c997d4ae6354515ed1d7ab30", "title": "On effective offloading services for resource-constrained mobile devices running heavier mobile Internet applications" }, { "paperId": "a1cd437a924849d19e0713f042e45e79dc8b95a1", "title": "A public key cyryptosystem and signature scheme based on discrete logarithms" } ]
18,297
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Engineering", "source": "external" }, { "category": "Mathematics", "source": "external" }, { "category": "Engineering", "source": "s2-fos-model" }, { "category": "Computer Science", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/02af7c7509c70c91d28cd2273002aff5e479c6ee
[ "Computer Science", "Engineering", "Mathematics" ]
0.832977
MAESTRO-X: Distributed Orchestration of Rotary-Wing UAV-Relay Swarms
02af7c7509c70c91d28cd2273002aff5e479c6ee
IEEE Transactions on Cognitive Communications and Networking
[ { "authorId": "9078700", "name": "Bharath Keshavamurthy" }, { "authorId": "153659184", "name": "M. Bliss" }, { "authorId": "2709642", "name": "Nicolò Michelusi" } ]
{ "alternate_issns": null, "alternate_names": [ "IEEE Trans Cogn Commun Netw" ], "alternate_urls": [ "https://ieeexplore.ieee.org/xpl/RecentIssue.jsp?punumber=6687307" ], "id": "65e58b80-9699-4da6-bd60-929b57b8533d", "issn": "2332-7731", "name": "IEEE Transactions on Cognitive Communications and Networking", "type": "journal", "url": "http://ieeexplore.ieee.org/servlet/opac?punumber=6687307" }
This work details a scalable framework to orchestrate a swarm of rotary-wing UAVs serving as cellular relays to facilitate beyond line-of-sight connectivity and traffic offloading for ground users. First, a Multiscale Adaptive Energy-conscious Scheduling and TRajectory Optimization (MAESTRO) framework is developed for a single UAV. Aiming to minimize the time-averaged latency to serve user requests, subject to an average UAV power constraint, it is shown that the optimization problem can be cast as a semi-Markov decision process, and exhibits a multiscale structure: outer actions on radial wait velocities and terminal service positions minimize the long-term delay-power trade-off, optimized via value iteration; given these outer actions, inner actions on angular wait velocities and service trajectories minimize a short-term delay-energy cost; finally, rate adaptation is embedded along the trajectory to leverage air-to-ground channel propagation conditions. A novel hierarchical competitive swarm optimization scheme is developed in the inner optimization, to devise high-resolution trajectories via iterative pair-wise updates. Next, MAESTRO is eXtended to UAV swarms (MAESTRO-X) via scalable policy replication, enabled by a decentralized command-and-control network augmented with: (1) spread maximization to proactively position UAVs to serve future requests; (2) consensus-driven conflict resolution to orchestrate scheduling decisions based on delay-energy costs including queuing dynamics; (3) adaptive frequency reuse to improve spectrum utilization across the network; and (4) a piggybacking mechanism allowing UAVs to serve multiple ground users simultaneously. Numerical evaluations show that, for user requests of 10 Mbits, generated according to a Poisson arrival process with rate 0.2 req/min/UAV, single-agent MAESTRO offers $3.8\times $ faster service than a high-altitude platform and 29% faster than a static UAV deployment; moreover, for a swarm of 3 UAV-relays, MAESTRO-X delivers data payloads $4.7\times $ faster than a successive convex approximation scheme; and remarkably, a single UAV optimized via MAESTRO outclasses 3 UAVs optimized via a deep-Q network by 38%.
# MAESTRO-X: Distributed Orchestration of Rotary-Wing UAV-Relay Swarms ### Bharath Keshavamurthy[∗], Matthew A. Bliss[†], and Nicolò Michelusi[∗] **Abstract** This work details a scalable framework to orchestrate a swarm of rotary-wing UAVs serving as cellular relays to facilitate beyond line-of-sight connectivity and traffic offloading for ground users. First, a Multiscale Adaptive Energy-conscious Scheduling and TRajectory Optimization (MAESTRO) framework is developed for a single UAV. Aiming to minimize the time-averaged latency to serve user requests, subject to an average UAV power constraint, it is shown that the optimization problem can be cast as a semi-Markov decision process, and exhibits a multiscale structure: outer actions on radial wait velocities and terminal service positions minimize the long-term delay-power trade-off, optimized via value iteration; given these outer actions, inner actions on angular wait velocities and service trajectories minimize a short-term delay-energy cost; finally, rate adaptation is embedded along the trajectory to leverage air-to-ground channel propagation conditions. A novel hierarchical competitive swarm optimization scheme is developed in the inner optimization, to devise high-resolution trajectories via iterative pair-wise updates. Next, MAESTRO is eXtended to UAV swarms (MAESTRO-X) via scalable policy replication, enabled by a decentralized command-and-control network augmented with: (1) spread maximization to proactively position UAVs to serve future requests; (2) consensus-driven conflict resolution to orchestrate scheduling decisions based on delay-energy costs including queuing dynamics; (3) adaptive frequency reuse to improve spectrum utilization across the network; and (4) a piggybacking mechanism allowing UAVs to serve multiple ground users simultaneously. Numerical evaluations show that, for user requests of 10 Mbits, generated according to a Poisson arrival process with rate 0.2 req/min/UAV, single-agent MAESTRO offers 3.8× faster service than a high-altitude platform and 29% faster than a static UAV deployment; moreover, for a swarm of 3 UAV-relays, MAESTRO-X delivers data payloads 4.7× faster than a successive convex approximation scheme; and remarkably, a single UAV optimized via MAESTRO outclasses 3 UAVs optimized via a deep-Q network by 38%. **Index Terms** UAV-Relays, Trajectory optimization, SMDPs, Hierarchical CSO A preliminary version of this work was presented at Asilomar 2022 [1]. Source code is available on GitHub [2] (https://github.com/bharathkeshavamurthy/MAESTRO-X.git). Part of this work has been supported by NSF under grants CNS-1642982 and CNS-2129015. ∗Electrical, Computer and Energy Engineering, Arizona State University, Tempe, AZ. †Electrical and Computer Engineering, Purdue University, West Lafayette, IN. ----- I. INTRODUCTION Enterprises across various industrial sectors have stepped up the adoption of Unmanned Aerial Vehicles (UAVs) to gather data, survey infrastructure, monitor operations, and automate logistics [3], [4]. UAVs can also be leveraged to enhance troop deployments in military scenarios [5], aid emergency response during a natural disaster [6], and facilitate data harvesting in precision agriculture [7]. Inevitably, this has fostered varied academic research and industrial R&D on UAV-augmented beyond line-of-sight connectivity and traffic offloading in cellular networks, whose coverage can be enhanced by the mobility and maneuverability of UAVs [8], [9]. 
Yet, the pervasive potential of UAV-assisted wireless networks presents a plethora of challenges in real-world deployments [9]: limited on-board energy of aerial platforms, Quality-of-Service (QoS) requirements, air-to-ground channels, and computational feasibility challenges of UAV trajectory design. Several works have tackled some of these challenges by employing tools from optimization and artificial intelligence—however, numerous problems remain unsolved: failure to capture uncertain system dynamics vis-à-vis random traffic arrivals [10]–[14]; restrictions on UAV path and velocity characteristics [11], [15]; inefficient centralized swarm deployments [16]–[18]; computationally expensive joint multi-agent formulations offering limited scalability [19]–[22]; and failure to account for link layer effects on the QoS of the network [23], [24]. In this paper, considering these drawbacks in the state-of-the-art, we study the decentralized orchestration of multiple power-constrained rotary-wing UAVs supplementing a terrestrial base station by relaying data traffic dynamically generated by ground users. Incorporating waiting state optimization, computationally feasible trajectory design, throughput-maximizing rate adaptation to Air-to-Ground (A2G) propagation conditions, queue management, frequency reuse to enhance spectrum utilization, multi-user service, and multi-UAV consensus-driven scheduling, we develop a scalable framework to efficiently automate the operations of distributed UAV-relay deployments. Ergo, specializing to single UAV-relay settings, we first propose MAESTRO, a Multiscale Adaptive Energy-conscious Scheduling and TRajectory Optimization framework to control the idle and service phase operations of the UAV. Seeking to minimize the average communication delay subject to an average UAV mobility power constraint, we show that the problem can be cast as a Semi-Markov Decision Process (SMDP) with a multiscale structure: outer decisions on radial velocities and terminal service positions influence the long-term delay-power cost; consequently, given these outer actions, inner actions on angular wait velocities and service trajectories minimize a short-term delay-energy cost.
-----
|Paper|Adaptive control|Channel model|Frequency reuse|Multi-user service|UAV mobility|UAV velocity|UAV deployment|Multi-UAV scheduling|Overall formulation|Link-layer scheduling|Queueing|
|---|---|---|---|---|---|---|---|---|---|---|---|
|MAESTRO-X|Yes|A2G|Yes|Yes|Dynamic|Variable|Distributed|Decoupled|Model-based|Yes|Yes|
|[10]|No|FSPL|No|No|Dynamic|Variable|Single|-|Model-based|Yes|No|
|[16]|No|A2G|Yes|Yes|Dynamic|Variable|Centralized|Joint|Model-based|Yes|No|
|[19]|No|A2G|No|Yes|Restricted|Fixed|Distributed|Joint|Model-free|No|No|
|[11]|No|FSPL|No|No|Dynamic|Fixed|Single|-|Model-based|Yes|No|
|[12]|No|FSPL|No|No|Dynamic|Variable|Single|-|Model-based|Yes|No|
|[20]|No|FSPL|No|Yes|Restricted|Fixed|Distributed|Joint|Model-free|Yes|No|
|[13]|No|A2G|No|No|Static|-|Single|-|Model-based|No|No|
|[23]|No|FSPL|No|No|Static|-|Distributed|Joint|Model-based|Yes|No|
|[24]|Yes|FSPL|No|No|Static|-|Distributed|Joint|Model-based|No|No|
|[17]|No|FSPL|No|No|Dynamic|Fixed|Centralized|Joint|Model-based|Yes|No|
|[18]|No|A2G|No|No|Static|-|Centralized|Joint|Model-based|No|No|
|[27]|No|A2G|No|No|Restricted|Fixed|Distributed|Decoupled|Model-free|No|No|
|[21]|Yes|FSPL|No|No|Static|-|Distributed|Joint|Model-free|No|Yes|
|[22]|Yes|A2G|No|No|Static|-|Distributed|Joint|Model-free|No|No|
|[14]|No|A2G|No|No|Dynamic|Variable|Single|-|Model-based|Yes|No|
|[28]|Yes|FSPL|No|No|Dynamic|Variable|Single|-|Model-free|No|No|

TABLE I: A comparison of the features of our framework with those of relevant schemes in the literature.

We develop a value iteration algorithm [25] exploiting this multiscale structure to optimize outer actions, and a hierarchical variant of Competitive Swarm Optimization (CSO) [26], decoupled from value iteration, to optimize high-resolution trajectories embedding a novel throughput-maximizing rate adaptation scheme for A2G channels. Next, we extend MAESTRO to a swarm of UAV-relays (MAESTRO-X) via a scalable replication strategy, enabled by a decentralized command-and-control network and augmented with: spread maximization to proactively position the UAVs to serve future service requests; consensus-driven conflict resolution to orchestrate ground user scheduling decisions based on delay-energy costs, including queuing dynamics; frequency reuse to enhance spectrum utilization; and piggybacking to enable each UAV to serve multiple users simultaneously.

**Related Work:** Table I summarizes our approach (MAESTRO-X) and contrasts it with relevant works in the state-of-the-art. First, we observe non-adaptive schemes, e.g., [10], [17], [18], designed for applications where ground users possess local storage or aggregation capabilities allowing for deterministic traffic; however, practical deployments involve dynamically generated requests and randomly located ground users. Accommodating these uncertainties calls for the design of adaptive UAV orchestration frameworks. Yet, existing works do so only for single UAV-relay deployments [28] or consider static placement of UAVs (i.e., no trajectory design) [21], [22], [24]. In contrast, we design adaptive trajectory and scheduling strategies for distributed multi-UAV swarms that accommodate dynamic and uncertain traffic generated by ground users. Next, works employing Free Space Pathloss (FSPL) channel models, e.g., [10]–[12], [20], fail to account for the A2G channel characteristics in UAV-assisted wireless networks. Existing works that model A2G channels fail to leverage small- and large-scale A2G conditions via rate adaptation. A notable exception is [14], which differs from our rate adaptation scheme in two ways: 1) we select the rate to maximize throughput (vs. [14], which aims to satisfy an outage constraint), and 2) we use a probabilistic line-of-sight (LoS) and Non-LoS (NLoS) model. Furthermore, most works surveyed neither consider spectrum reuse (with the exception of [16]) nor permit simultaneous multi-user service (with the exceptions of [16], [19], [20])—however, the works that do incorporate these crucial features [16], [19], [20] fail to consider adaptation to dynamically generated requests from randomly located users, as done in our work. A common approach for trajectory design is Successive Convex Approximation (SCA) [10], [14]. SCA typically relies on the FSPL channel model to devise convex relaxations of the objective and constraints. Exceptions include [14] and [16], which apply SCA approaches under A2G channels. In [14], a logistic approximation of the achievable rate is used under outage constraints; in [16], only large-scale fading is considered. However, when coupling trajectory design with our throughput-maximizing rate adaptation scheme, closed-form rate expressions with first-order convex approximations are impractical. To tackle this challenge, we propose a CSO [26] approach for UAV trajectory design.
Unlike SCA, CSO does not rely on the problem structure of FSPL models to work effectively, and can thus accommodate realistic A2G propagation conditions. Particle Swarm Optimization (PSO) [29], a swarm-based optimization method in which particle updates are driven by the global and individual best positions, has been used to optimize static UAV placement [30], [31], or restricted UAV trajectories (e.g., moving along a circle [15], or with fixed speed [11]). Removing these restrictions calls for the more efficient update strategy of CSO, which exhibits superior performance on several benchmarks [26]: it involves pair-wise particle competitions, wherein winners advance to the next iteration and the losers learn from the winners. Moreover, we scale CSO to higher-dimensional trajectory design by embedding it within a Hierarchical wrapper (HCSO), which iteratively optimizes trajectories of increasing resolution, without imposing unreasonable restrictions on UAV mobility. Next, shifting our attention to swarm orchestration frameworks, several approaches consider centralized multi-UAV deployments [16]–[18] in which an aggregation center coordinates the UAV-relaying operations; or either joint multi-relay solutions [16], [23], [24] or model-free formulations constituting combined state and action spaces [19]–[22]. An exception is [27], which considers a model-free setup with decentralized UAV deployments and decoupled scheduling. However, [27] does not consider adaptation to randomly-generated data traffic, as we do in our work; rather, a sense-and-send protocol is devised, wherein tasks are always ready to be sensed.
-----
Centralized swarm deployments often need additional capital and operational expenditure, and joint multi-UAV designs lead to large solution spaces resulting in prohibitive convergence times. Mindful of such considerations, we present an orchestration framework suitable for distributed UAV deployments by replicating our single-agent policy across the swarm and augmenting it with spread maximization and consensus-driven link-layer prescient conflict resolution over a command-and-control network. This eliminates the need for a centralized aggregation center, mitigates the computational overhead encountered by joint multi-relay models, and facilitates the seamless incorporation of queuing dynamics into scheduling decisions. Also, as shown in our numerical evaluations, our framework can be scaled to networks with ≥ 10 UAVs, while state-of-the-art approaches [10], [16], [19] become prohibitively expensive for networks with 5 UAVs. Additionally, although model-free control schemes [19]–[22], [27], [28] consider unknown system dynamics when solving for the optimal trajectory and/or scheduling solution, they fail to efficiently exploit the problem structure, resulting in large policy convergence times. In contrast, we use a model-based approach, by casting the problem as an SMDP, which captures the temporal irregularities seen in the state transitions of UAV-augmented wireless networks.

**Contributions:** We develop a novel framework for the scalable orchestration of UAV-relay swarms.
To the best of our knowledge, no other work simultaneously incorporates the practical features of 1) dynamic traffic from randomly located ground users; 2) efficient exploitation of A2G channel conditions via a throughput-maximizing rate adaptation scheme; 3) easy scalability to large UAV swarms via policy replication, coupled with multi-agent coordination mechanisms over a distributed command-and-control network; and 4) waiting state optimization to position idle UAVs for potential new requests. In a nutshell, the contributions of this paper are:

• MAESTRO: For a single UAV, we construct an adaptive scheduling and trajectory design framework to minimize the communication latencies in serving dynamic transmission requests generated by randomly located ground users, subject to an average UAV power constraint. We show that the problem can be solved as a Semi-Markov Decision Process (SMDP). A multiscale decomposition facilitates efficient computation of rate adaptation, scheduling and trajectory solutions, and energy-conscious orchestration of the UAV during idle periods.

• HCSO: To enable computationally tractable design of high-resolution UAV trajectories under A2G propagation conditions, we propose Hierarchical CSO (HCSO), a variant of CSO wherein iterative pair-wise cost comparisons devise trajectories of increasingly higher resolution.

• MAESTRO-X: Coupled with decentralized command-and-control operations over a distributed mesh network, we augment the single-UAV trained policy with multi-UAV mechanisms to orchestrate waiting phase operations (spread maximization), coordinate scheduling decisions incorporating queuing dynamics (consensus-driven conflict resolution), enable simultaneous multi-user service (piggybacking), and enhance spectrum utilization (frequency reuse).

-----

Fig. 1: (a) A terrestrial BS aided by UAVs serving as relays for a diverse set of GNs: traffic offloading for cellular UEs, and coverage extensions for livestock monitors and soil sensors; (b) rate-adapted throughputs (see Table II for the numerical parameters) along the GN→BS link (direct), GN→UAV link (decode), UAV→BS link (forward), and GN→UAV→BS link (decode-and-forward, with the UAV relay stationed above the BS or the GN).

The rest of the paper is organized as follows: Sec. II introduces the system model; Sec. III elucidates the design of MAESTRO; Sec. IV describes the main algorithms; Sec. V details policy replication and multi-UAV mechanisms to manage distributed swarms (MAESTRO-X); Sec. VI chronicles our numerical evaluations; and finally, Sec. VII lists our concluding remarks.

II. SYSTEM MODEL

Consider the deployment scenario depicted in Fig. 1a: a swarm of $N_U$ rotary-wing Unmanned Aerial Vehicles (UAVs) operate as cellular relays to supplement a terrestrial Base Station (BS) by relaying data traffic dynamically generated by Ground Nodes (GNs). The BS is located at the center of the circular cell (of radius $a$), at height $H_B$. The UAVs operate at a fixed height $H_U$. The GNs are distributed uniformly at random throughout the cell, with density $\lambda_G$ [GNs/unit area]. Multi-user communication is enabled via OFDMA over a spectrum of bandwidth $W$, discretized into $N_C$ orthogonal data channels (possibly, obtained by grouping multiple subcarriers together), each with bandwidth $B \triangleq W/N_C$.
We assume the system operates in the uplink, i.e., traffic requests generated by the GNs are transmitted to the BS, either directly or by using one UAV as a relay. It can be extended to both uplink/downlink via a state variable differentiating between the two.
-----
**Communication Model:** Each GN generates uplink transmission requests of $L$ bits, according to a Poisson process with rate $\lambda_{R|G}$ [requests/GN/unit time]. Coupled with the random deployment of GNs, uplink requests arrive in time according to a Poisson process with rate $\Lambda \triangleq \lambda_G \cdot \lambda_{R|G}\,\pi a^2$ [requests/unit time] over the circular cell. Since a new request is uniformly distributed in the cell area, the position $(r, \theta)$ of the source GN—expressed in polar coordinates with respect to the BS—has angular coordinate $\theta$ uniform in $[0, 2\pi)$, and radial coordinate with probability density function given by $f_R(r) = \frac{2r}{a^2}\,\mathbb{I}(r \le a)$, where $\mathbb{I}(\cdot)$ is the indicator function. A fully-connected mesh network overlaying the BS and UAVs enables command-and-control using the band-edges of the allocated spectrum as control channels. Since control packets constitute short frames relative to the large GN-generated data payloads (communicated over data channels), the control operation latencies are neglected. To request uplink transmission to the BS, a GN sends a service request with its location; the BS broadcasts this need-for-service to the UAV swarm. Next, a consensus-driven conflict resolution process occurs among the BS and all UAVs (Sec. V), based on assessed delay-energy costs for this request, culminating in a scheduling decision. If direct-BS transmission is chosen, the BS chooses an available data channel, or queues the request until one becomes available (see Sec. V). The BS then instructs the GN to begin direct transmission over the data channel. Otherwise, if UAV relay $i$ is selected, the new GN request is served via a Decode-and-Forward (D&F) strategy on an available data channel (or queued until one becomes available), as detailed in Sec. V. While executing the D&F protocol, the UAV moves along a pre-designed energy-conscious trajectory, i.e., a sequence of way-points and velocities (see Sec. IV). In Sec. V, we also discuss a frequency reuse mechanism to improve spectrum utilization efficiency, and a piggybacking mechanism allowing the scheduled UAV to serve multiple requests simultaneously. As evident from this communication model, the GN→BS, GN→UAV, and UAV→BS links must be characterized, as detailed next.
**A2G Channel Model:** For a generic link, we denote the flat-fading channel coefficient as $h \triangleq \sqrt{\beta}\,g$, where $\beta$ captures the large-scale channel variations, and $g$ with $\mathbb{E}[|g|^2] = 1$ denotes the small-scale fading component. We model the large-scale component as $\beta = \beta_{\mathrm{LoS}}(d) \triangleq \beta_0 d^{-\alpha}$ for LoS and $\beta = \beta_{\mathrm{NLoS}}(d) \triangleq \kappa \beta_0 d^{-\tilde{\alpha}}$ for NLoS links, where $\beta_0$ is the pathloss at a reference distance of 1 m, $2 \le \alpha \le \tilde{\alpha}$ are the LoS and NLoS pathloss exponents, $\kappa \in (0, 1]$ captures the additional NLoS attenuation, and $d$ denotes the Tx-Rx Euclidean distance [10]. Following [32], we use a probabilistic LoS model, with LoS probability $P_{\mathrm{LoS}}(\phi) = \left[1 + z_1 \exp\{-z_2(\phi - z_1)\}\right]^{-1}$, where $\phi \in (0^{\circ}, 90^{\circ}]$ is the Tx-Rx elevation angle, and $z_1, z_2$ are parameters specific to the propagation environment (e.g., urban, suburban, rural) [32].
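To make this concrete, the following is a minimal Python sketch (ours, not taken from the paper's released code [2]) of the probabilistic LoS large-scale model just described; the numeric constants are illustrative placeholders rather than the paper's Table II values.

```python
import math

# Illustrative placeholder constants (NOT the paper's Table II values).
Z1, Z2 = 9.61, 0.16            # sigmoid LoS-probability parameters (urban-like) [32]
BETA_0 = 1e-4                  # pathloss at the 1 m reference distance
ALPHA, ALPHA_TILDE = 2.0, 2.8  # LoS / NLoS pathloss exponents (2 <= alpha <= alpha~)
KAPPA = 0.2                    # extra NLoS attenuation, kappa in (0, 1]

def p_los(phi_deg: float) -> float:
    """LoS probability P_LoS(phi) = [1 + z1 exp(-z2 (phi - z1))]^-1, phi in degrees."""
    return 1.0 / (1.0 + Z1 * math.exp(-Z2 * (phi_deg - Z1)))

def beta_los(d: float) -> float:
    """Large-scale LoS gain beta_LoS(d) = beta0 * d^-alpha."""
    return BETA_0 * d ** (-ALPHA)

def beta_nlos(d: float) -> float:
    """Large-scale NLoS gain beta_NLoS(d) = kappa * beta0 * d^-alpha~."""
    return KAPPA * BETA_0 * d ** (-ALPHA_TILDE)

if __name__ == "__main__":
    h_u, r_gu = 200.0, 500.0                # UAV height / projected GN-UAV distance [m]
    d = math.hypot(r_gu, h_u)               # Euclidean Tx-Rx distance
    phi = math.degrees(math.asin(h_u / d))  # elevation angle
    print(f"phi = {phi:.1f} deg, P_LoS = {p_los(phi):.3f}")
    print(f"beta_LoS = {beta_los(d):.3e}, beta_NLoS = {beta_nlos(d):.3e}")
```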
The distribution of the small-scale fading component $g$ also depends on the LoS or NLoS link state [33]: for LoS links, as in [14], we model $g$ as Rician fading with a $\phi$-dependent K-factor $K(\phi) = k_1 \exp\{k_2 \phi\}$, where $k_1, k_2$ are specific to the propagation environment; for NLoS links, we model $g$ as Rayleigh fading (Rician with $K = 0$) [33]. Given $h$, the link capacity is $C(h) = B \log_2\left(1 + \frac{|h|^2 P}{N_0 B}\right)$, where $P$ is the transmission power, $N_0$ is the noise power spectral density at the receiver, and $B$ is the channel bandwidth. We assume that other sources of signal degradation, such as the Doppler effect, are well-compensated at the receiver (for example, see the approaches in [34]). Since the large-scale fading components typically vary slowly relative to the acquisition rate of Channel State Information (CSI), we assume that the current large-scale parameters $(\beta, K)$ are known at the transmitter's side throughout the communication process, using CSI feedback over the control channel. Conversely, small-scale fading conditions vary at a much faster timescale and cannot be tracked at the transmitter. Hence, given $(\beta, K)$ and a transmission rate $\Upsilon$ [bits/second], we define the outage probability as $P_{\mathrm{out}}(\Upsilon, \beta, K) \triangleq \mathbb{P}(C(\sqrt{\beta}\,g) < \Upsilon \mid \beta, K) = \mathbb{P}(|g|^2 < u(\Upsilon, \beta))$, where $u(\Upsilon, \beta) \triangleq \frac{N_0 B}{\beta P}\left(2^{\Upsilon/B} - 1\right)$. The expected throughput is then $R(\Upsilon, \beta, K) = \Upsilon \cdot (1 - P_{\mathrm{out}}(\Upsilon, \beta, K))$, assuming that the small-scale fading is averaged out across time. The rate $\Upsilon$ is then selected to maximize the expected throughput (as opposed to the approach in [14], which imposes an outage probability constraint) as $\Upsilon^*(\beta, K) \triangleq \arg\max_{\Upsilon \ge 0} R(\Upsilon, \beta, K)$, solved in Proposition 1.

**Proposition 1.** Given the large-scale parameters $(\beta, K)$ and $\gamma \triangleq \frac{N_0 B}{\beta P}$, the optimal throughput-maximizing rate is $\Upsilon^*(\beta, K) = B \log_2\left(1 + \frac{Z^*}{2}\right)$, where $Z^*$ is the unique solution in $(0, \infty)$ of

$$h'(Z) \triangleq \frac{1}{(2 + Z)\ln\left(1 + \frac{Z}{2}\right)} - \frac{\gamma(K + 1)e^{-K}}{2} \cdot \frac{\exp\left\{-\frac{\gamma(K+1)Z}{2}\right\} I_0\left(\sqrt{2\gamma K(K+1)Z}\right)}{Q_1\left(\sqrt{2K}, \sqrt{\gamma(K+1)Z}\right)} = 0, \quad (1)$$

where $I_0(x)$ is the modified Bessel function of the first kind of order 0, and $Q_1(\cdot, \cdot)$ is the standard Marcum Q-function [14]. $Z^*$ is solvable via the bisection method. The expected throughput is

$$R^*(\beta, K) \triangleq \max_{\Upsilon \ge 0} R(\Upsilon, \beta, K) = \Upsilon^*(\beta, K) \cdot Q_1\left(\sqrt{2K}, \sqrt{2(K+1)\,u(\Upsilon^*(\beta, K), \beta)}\right). \quad (2)$$

*Proof.* See Appendix A.

When $K = 0$ (Rayleigh fading for NLoS), $Q_1$ specializes to $Q_1(0, \sqrt{2u(\Upsilon, \beta)}) = \exp\{-u(\Upsilon, \beta)\}$, while the condition $h'(Z) = 0$ becomes $\left(1 + \frac{Z}{2}\right)\ln\left(1 + \frac{Z}{2}\right) = \frac{1}{\gamma}$. Finally, with the LoS and NLoS conditions averaged out in the temporal and spatial dimensions, the average link throughput is

$$\bar{R}(d, \phi) \triangleq P_{\mathrm{LoS}}(\phi) \cdot R^*(\beta_{\mathrm{LoS}}(d), K(\phi)) + (1 - P_{\mathrm{LoS}}(\phi)) \cdot R^*(\beta_{\mathrm{NLoS}}(d), 0). \quad (3)$$

This expression is then specialized to the three distinct communication links by expressing the transmission powers, the environment-specific parameters $(z_1, z_2, k_1, k_2)$, the large-scale parameters $(\beta, K)$, and the LoS/NLoS probabilities based on the spatial configuration, i.e., $d$ and $\phi$. For the GN→BS link, we let $\bar{R}_{GB}(r)$ be the throughput with the GN in position $(r, \theta)$, computed by setting the GN-BS distance as $d = \sqrt{H_B^2 + r^2}$ and the elevation angle as $\phi = \sin^{-1}(H_B/d)$ in (3). Similarly, for the GN→UAV link, we let $\bar{R}_{GU}(r_{GU})$ be the throughput when the GN-UAV distance (projected onto the x−y plane) is $r_{GU}$, computed by setting the GN-UAV Euclidean distance as $d = \sqrt{r_{GU}^2 + H_U^2}$ and the elevation angle as $\phi = \sin^{-1}(H_U/d)$ in (3). Finally, for the UAV→BS link, we let $\bar{R}_{UB}(r_{UB})$ be the throughput when the x−y projected UAV-BS distance is $r_{UB}$, computed by setting the UAV-BS Euclidean distance as $d = \sqrt{r_{UB}^2 + (H_U - H_B)^2}$ and the elevation angle as $\phi = \sin^{-1}((H_U - H_B)/d)$ in (3).
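The rate selection of Proposition 1 is easy to check numerically. The sketch below (an illustration under assumed parameter values, not the authors' implementation) maximizes $R(\Upsilon, \beta, K) = \Upsilon(1 - P_{\mathrm{out}})$ directly on a rate grid instead of bisecting on (1), evaluating the Rician outage through the noncentral chi-square identity $\mathbb{P}(|g|^2 < u) = F_{\chi'^2}(2(K+1)u;\,\mathrm{df}=2,\,\mathrm{nc}=2K)$, which is equivalent to the Marcum Q-form in (2).

```python
import numpy as np
from scipy.stats import ncx2

def expected_throughput(rate_bps, beta, k_factor, P=1.0, N0=1e-20, B=1e6):
    """R(Y, beta, K) = Y * (1 - P_out), with the Rician |g|^2 outage computed
    via a noncentral chi-square CDF: P(|g|^2 < u) = F_ncx2(2(K+1)u; df=2, nc=2K)."""
    u = (N0 * B / (beta * P)) * (2.0 ** (rate_bps / B) - 1.0)
    p_out = ncx2.cdf(2.0 * (k_factor + 1.0) * u, df=2, nc=2.0 * k_factor)
    return rate_bps * (1.0 - p_out)

def optimal_rate(beta, k_factor, B=1e6, n_grid=4000):
    """Grid search for Y* = argmax_Y R(Y, beta, K), in lieu of bisection on (1)."""
    rates = np.linspace(1.0, 30.0 * B, n_grid)            # candidate rates [bits/s]
    r = np.array([expected_throughput(y, beta, k_factor, B=B) for y in rates])
    i = int(np.argmax(r))
    return rates[i], r[i]                                  # (Y*, R*)

if __name__ == "__main__":
    y_star, r_star = optimal_rate(beta=1e-9, k_factor=5.0)  # placeholder channel
    print(f"Y* = {y_star/1e6:.2f} Mbit/s, R* = {r_star/1e6:.2f} Mbit/s")
```

For $K = 0$ the same routine reduces to the Rayleigh case, since the noncentral chi-square with zero noncentrality collapses to the $1 - e^{-u}$ outage noted above.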
As shown in Fig. 1b, the poor QoS experienced by GNs farther away from the BS, caused by deterioration in LoS probabilities with distance, motivates the need for UAV-relays to improve coverage throughout the cell.

**UAV Mobility Power Model:** For a rotary-wing UAV, since its communication power needs (≈ 10 W) are dwarfed by its mobility power requirements (≈ 1000 W), we model the on-board energy expenditure as a function of the horizontal flying velocity $V$ [10], i.e.,

$$P_{\mathrm{mob}}(V) = P_1\left(1 + \frac{3V^2}{U_{\mathrm{tip}}^2}\right) + P_2\left(\sqrt{1 + \frac{V^4}{4v_0^4}} - \frac{V^2}{2v_0^2}\right)^{0.5} + P_3 V^3, \quad 0 \le V \le V_{\max}, \quad (4)$$

where $P_i$ are the scaling constants, $U_{\mathrm{tip}}$ is the rotor blade tip velocity, $v_0$ is the mean rotor induced velocity while hovering, and $V_{\max}$ is the maximum UAV flying speed [10]. We let $P_{\max} \triangleq \max_{0 \le V \le V_{\max}} P_{\mathrm{mob}}(V)$ and $P_{\min} \triangleq \min_{0 \le V \le V_{\max}} P_{\mathrm{mob}}(V)$ be the maximum and minimum power consumption of the UAV, respectively. From [10], hovering requires $P_{\mathrm{mob}}(0) = 1371$ W, while flying at 22 m/s only consumes $P_{\min} = 936$ W. This suggests that the mobility of the UAVs can be exploited to reduce power consumption, while simultaneously improving coverage across the cell. Our goal is to define an energy-conscious adaptive service scheduling and trajectory optimization scheme to minimize the time-averaged communication delay experienced by GNs in the cell, subject to an average per-UAV mobility power constraint, studied next.

-----

Fig. 2: The single-agent specialization of our generalized deployment depicted in Fig. 1a.

III. MAESTRO: A SEMI-MARKOV DECISION PROCESS FORMULATION

We now specialize the system model to a single UAV relay (illustrated in Fig. 2) via an SMDP formulation. The effective traffic rate experienced by a single UAV is $\Lambda' \triangleq \Lambda/N_U$ [requests/unit time/UAV], assumed in this section in place of the overall rate $\Lambda$. Let $q_U(t) = (r_U(t), \theta_U(t))$ be the polar coordinate of the UAV at time $t$, projected onto the x−y plane, where $r_U(t) \in \mathbb{R}_+$ and $\theta_U(t) \in [0, 2\pi)$ denote the UAV's radius and angle with respect to the BS. The system operates with the following phases. In the waiting phase, no GN requests are being served by the UAV, which moves according to a waiting policy. When a new GN request originates in position $(r, \theta)$, the system transitions to the request scheduling phase, where it is determined whether the GN should transmit its data payload directly to the BS, or relay it through the UAV. In case of direct transmission, the system immediately re-enters the waiting phase, as the UAV remains free to serve other requests; else, the system enters the UAV relay phase, in which the data payload is relayed through the UAV using the D&F protocol; upon completion, the system re-enters the waiting phase. In this section, we conservatively assume that: 1) when the UAV is serving a request, it is unable to serve other incoming requests, which are thus directly served by the BS; and 2) data channels are always available at the BS to serve incoming requests. We defer to Sec. V for the description of a piggybacking mechanism to simultaneously serve multiple transmission requests, and of a queuing mechanism when data channels are unavailable.
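Before formalizing the SMDP, a small numerical sketch of the power curve (4) follows (ours; the constants are placeholders chosen only to roughly reproduce the hover (≈ 1371 W) and minimum (≈ 936 W near 22 m/s) figures quoted above, not values taken from the paper's Table II).

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Placeholder model constants -- substitute the paper's Table II / [10] values;
# these merely reproduce P_mob(0) ~ 1371 W and P_min ~ 936 W at ~22 m/s.
P1, P2, P3 = 580.65, 790.67, 0.0073   # blade-profile, induced, parasite terms [W]
U_TIP, V0, V_MAX = 200.0, 7.2, 55.0   # rotor tip speed, hover induced speed, max speed

def p_mob(v: float) -> float:
    """Rotary-wing mobility power (4): blade-profile + induced + parasite terms."""
    profile = P1 * (1.0 + 3.0 * v**2 / U_TIP**2)
    induced = P2 * np.sqrt(np.sqrt(1.0 + v**4 / (4.0 * V0**4)) - v**2 / (2.0 * V0**2))
    parasite = P3 * v**3
    return profile + induced + parasite

if __name__ == "__main__":
    res = minimize_scalar(p_mob, bounds=(0.0, V_MAX), method="bounded")
    print(f"hover:   P_mob(0.0)  = {p_mob(0.0):8.1f} W")
    print(f"minimum: P_mob({res.x:4.1f}) = {res.fun:8.1f} W")
```

The minimizer of this curve is the power-minimizing speed $v_{P_{\min}}$ exploited later by the waiting-state inner optimization.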
**Communication Delay and UAV Energy Consumption:** Here, we formulate the average communication delay and UAV energy consumption under a given policy $\mu$ that defines the request scheduling, communication strategy, and UAV trajectory (formally defined later). We define a decision interval as the time duration spanning the start of a waiting phase, the subsequent request scheduling phase when a GN request is received, until the system re-enters the waiting phase after scheduling a direct transmission to the BS, or following the UAV relay phase.
-----
Consider the $u$th such decision interval of duration $\Delta_u$, split into the time $\Delta_u^{(w)}$ to wait for a new request, and the time $\Delta_u^{(s)}$ to serve it, either through the BS (scheduling decision $\xi_u = 0$) or through the UAV ($\xi_u = 1$). Then, $\Delta_u = \Delta_u^{(w)} + \xi_u \Delta_u^{(s)}$, since the UAV enters the waiting phase immediately (and the decision interval terminates) in case of direct-BS transmission. Let $N_u \ge 0$ be the number of additional requests received during the UAV relay phase of the $u$th decision period: since these are served directly by the BS, we denote their delays as $\Delta_{u,i}^{(bs)}$, $i \in \{1, 2, \ldots, N_u\}$. Let $E_u$ be the UAV mobility energy expended during the $u$th decision interval, and let $M_t$ be the total number of decision intervals completed up to time $t$. We define the expected long-term average communication delay per request ($\bar{D}_\mu$) and average UAV power ($\bar{P}_\mu$), under $\mu$, as

$$\bar{D}_\mu \triangleq \lim_{t \to \infty} \mathbb{E}_\mu\left[\frac{\frac{1}{M_t}\sum_{u=1}^{M_t}\left(\Delta_u^{(s)} + \xi_u \sum_{i=1}^{N_u} \Delta_{u,i}^{(bs)}\right)}{\frac{1}{M_t}\sum_{u=1}^{M_t}(1 + \xi_u N_u)}\right], \quad \bar{P}_\mu \triangleq \lim_{t \to \infty} \mathbb{E}_\mu\left[\frac{\frac{1}{M_t}\sum_{u=1}^{M_t} E_u}{\frac{1}{M_t}\sum_{u=1}^{M_t} \Delta_u}\right]. \quad (5)$$

Note that $\bar{D}_\mu$ in (5) captures the delays of all requests, i.e., those relayed through the UAV ($\xi_u = 1$), those transmitted directly to the BS ($\xi_u = 0$), as well as the $N_u$ additional requests served directly by the BS during the UAV relay phase. Thus, the objective is to solve

$$\bar{D}^* = \min_\mu \bar{D}_\mu, \quad \text{s.t.} \quad \bar{P}_\mu \le P_{\mathrm{avg}}, \quad (6)$$

where $P_{\mathrm{avg}} \in (P_{\min}, P_{\max})$ is the average power constraint, and the optimal policy is denoted as $\mu^*$. To simplify, let $\bar{\mathbb{E}}_\mu[C_u] \triangleq \lim_{t \to \infty} \mathbb{E}_\mu\left[\frac{1}{M_t}\sum_{u=1}^{M_t} C_u\right]$ be a shorthand notation for the long-term average cost $C_u$ per decision interval. Let $\bar{E}_\mu \triangleq \bar{\mathbb{E}}_\mu[E_u]$ be the average UAV energy expenditure, $\bar{T}_\mu \triangleq \bar{\mathbb{E}}_\mu[\Delta_u]$ be the average interval duration, $\bar{N}_\mu \triangleq \bar{\mathbb{E}}_\mu[1 + \xi_u N_u]$ be the average number of requests served, $\bar{W}_\mu^{(s)} \triangleq \bar{\mathbb{E}}_\mu[\Delta_u^{(s)}]$ be the average delay of requests for which a scheduling decision is made, and $\bar{W}_\mu^{(bs)} \triangleq \bar{\mathbb{E}}_\mu[\xi_u \sum_{i=1}^{N_u} \Delta_{u,i}^{(bs)}]$ be the average delay of requests served directly by the BS during the UAV relay phase, per decision interval. Using Little's Law [35], we can then express $\bar{P}_\mu = \bar{E}_\mu / \bar{T}_\mu$ and $\bar{D}_\mu = (\bar{W}_\mu^{(s)} + \bar{W}_\mu^{(bs)}) / \bar{N}_\mu$, hence the optimization problem can be recast as

$$\bar{D}^* = \min_\mu \frac{\bar{W}_\mu^{(s)} + \bar{W}_\mu^{(bs)}}{\bar{N}_\mu} \quad \text{s.t.} \quad \bar{\mathcal{E}}_\mu \triangleq \bar{E}_\mu - P_{\mathrm{avg}} \bar{T}_\mu \le 0, \quad (7)$$

where $\bar{\mathcal{E}}_\mu = \bar{\mathbb{E}}_\mu[E_u - P_{\mathrm{avg}} \Delta_u]$ is the excess energy cost. Note the inherent complexity to solve (7): as the policy varies, the delay metric changes both the numerator and denominator of the objective function, precluding a direct application of dynamic programming tools.

**Alternative Problem Formulation:** To address this challenge, we now devise a surrogate
To this end, let us define_ a "baseline" policy µBS as the one such that all requests are served by the BS and the UAV flies around at minimum power Pmin (this policy is feasible). Since the delay to serve a request from a GN in position (r, θ) by direct transmission to the BS is _R¯GBL_ (r) [, the expected delay] under policy µBS is obtained by computing the expectation with respect to the radial coordinate, _D¯_ _BS≜_ ´ a0 _R¯GBL_ (r) _[f]R[(][r][)d][r][. Clearly, optimization of the policy yields][ ¯][D][∗][≤][D][¯]_ _BS[. Under any policy]_ _µ (including µ[∗]) better than µBS (i.e., such that_ _D[¯]_ _µ_ _DBS), the following bounds hold._ _≤_ [¯] **_Proposition 2. Let µ be such that_** _D[¯]_ _µ ≤_ _D[¯]_ _BS. Then, it holds that_ _W¯_ _µ[(][s][)]_ _≤_ _D[¯]_ _µ ≤_ _W[¯]_ _µ[(][s][)]_ 1 + Λ[′][ ¯]DBS _≤_ _D[¯]_ _BS._ (8) 1 + Λ[′][ ¯]Wµ[(][s][)] _Proof. See Appendix B._ Noticing that both the lower and upper bounds of _D[¯]_ _µ are increasing functions of_ _W[¯]_ _µ[(][s][)][, in our]_ subsequent analyses we will focus on the alternative optimization problem minµ _W¯_ _µ[(][s][)]_ s.t. _E[¯]µ ≤_ 0. (9) In Sec. VI (see Table III), we show that this alternative formulation leads to a near-optimal solution with respect to the original optimization (6). To solve (9), we define the Lagrangian _Mt_ � _u=1_ � � � ∆u[(][s][)] [+][ ν][(][E][u][−][P][avg][∆][u][)] _g(ν) = minµ_ _W¯_ _µ[(][s][)]_ + νE[¯]µ = minµ _tlim→∞_ [E][µ] � 1 _Mt_ _,_ (10) where ν is the dual variable, optimized by solving maxν≥0 g(ν). We now demonstrate that for a given ν 0, (10) can be cast as a Semi-Markov Decision Process (SMDP) and solved with _≥_ dynamic programming tools. Next, we discuss the SMDP states, actions, transitions, and policy. **States: The state is defined by the UAV position qU**, an element of the set QUAV≜R+×[0, 2π) (polar coordinates), and the position qG of the GN originating traffic, taking values from the set QGN≜[0, a]×[0, 2π). The state space is then S=Swait ∪Scomm, where Swait=QUAV is the set of waiting states and Scomm=QUAV×QGN is the set of communication states. Crucial to the definition of the SMDP is how the system is sampled in time to define Markovian dynamics in the evolution of the sampled states: accordingly, we define the actions available in each state **s** and the transition probabilities, along with the time duration T (s; a), the UAV energy usage _∈S_ _E(s; a), and the request service delay ∆(s; a) metrics accrued in state s under action a._ ----- **Waiting states’ actions, transitions, and metrics: In waiting state s=qU** _∈Swait at time t, i.e., the_ UAV is in position qU (t)=qU =(rU _, θU_ ) with no active requests, then the UAV moves with radial and angular velocity components (vr, θc), over an arbitrarily small duration ∆0≪ Λ[1][′] [. Thus, the] � � � � waiting-state action space is Await(rU )≜ (vr, θc)∈R[2][��] _vr[2][+][r]U[2]_ _[·][θ]c[2][≤][V][max]_, where vU = _vr[2][+][r]U[2]_ _[θ]c[2]_ � is the velocity expressed using polar coordinates. Upon choosing action a=(vr, θc)∈Await(rU ), the communication delay is ∆(s; a)=0, since there is no ongoing communication; the duration of a waiting state is T (s; a)=∆0, and the UAV’s energy use is E(s; a)=∆0Pmob (vU ) to move at velocity vU . The new state is then sampled at time t+∆0, with the UAV moved to the new position qU (t+∆0)≈(rU _, θU_ )+(vr, θc)∆0. With probability e[−][Λ][′][∆][0], no new request is received in the time interval [t, t+∆0], so that the new state is a waiting state. 
Otherwise, a new request is received from a GN in position (r, θ) (communication state). The transition probabilities from the waiting state sn=qU _∈Swait under action an=(vr, θc)∈Await(rU_ ) are thus P(sn+1 = qU + an∆0|sn, an) = e[−][Λ][′][∆][0], (11) P(sn+1 = (qU + an∆0, q[′]G[)][ with][ q][′]G _[∈F |][s][n][,][ a][n][) =][ A]πa[(][F][2][ ·][)]_ [ (1][ −] _[e][−][Λ][′][∆][0][)][,][ ∀F ⊆Q][GN][,]_ where A( ) is the area of region, since requests are uniformly distributed in the cell. _F_ _F_ **Communication states’ actions, transitions, and metrics: Upon reaching a communication** state sn=(qU _, qG)∈Scomm at time t, the system must serve a GN request at position qG=(r, θ)._ The BS first determines the scheduling decision ξ 0, 1 . If ξ=0, denoted as the action a=BS, _∈{_ _}_ the GN transmits directly to the BS; the next state is the waiting state sn+1=qU, sampled immediately after, resulting in the energy-time metrics E(sn; a)=T (sn; a)=0, and service delay metric ∆(sn; a)= _R¯GBL_ (r) [(time required to transmit the payload with throughput][ ¯][R]GB[(][r][)][ between] the GN and the BS). Instead, if ξ=1, the UAV uses the D&F protocol, while following a trajectory starting from its current position qU and ending in position q[′]U [. We denote this action] as a=(qU _→q[′]U_ [)][. In the][ decode][ phase of D&F (of duration][ t][p][), the GN transmits its data payload] to the UAV; in the forward phase (of duration ∆−tp), the UAV relays it to the BS. Assuming a _move-and-transmit strategy [10], the trajectory (qU_ _→q[′]U_ [) and the durations (][t][p][ and][ ∆][−][t][p][) must] satisfy the data payload constraints (C.1), i.e., the entire payload of L bits is first transmitted to the UAV with throughput _R[¯]GU_ (rGU (η)), and then relayed to the BS with throughput _R[¯]UB(rUB(η)),_ where rGU (η) and rUB(η) are the GN-UAV and UAV-BS distances (projected onto the x−y plane) at time η along the trajectory, respectively, so that the total communication delay is ∆. ----- ∆ For this action, the cost metrics are ∆(sn; a)=T (sn; a)=∆ and E(sn; a)= ´0 _[P][mob][ (][v][U]_ [(][η][)) d][η][.] Upon completing D&F at time t+∆, the UAV enters the waiting state (sn+1=q[′]U [). The set of] feasible UAV trajectories from qU to q[′]U [, to serve a GN at position][ q][G][ is] � _QqG�qU →_ **q[′]U** � ≜ **pU : [0, ∆] �→** R+ × [0, 2π) s.t. (12) ˆ tp ˆ ∆ _R¯GU_ (rGU (η))dη ≥ _L,_ _R¯UB(rUB(η))dη ≥_ _L,_ (C.1) 0 _tp_ � _vU_ (η) ≤ _Vmax, pU_ (0) = qU _, pU_ (∆) = q[′]U _[,][ ∃][∆]_ _[≥]_ [0][,][ ∃] [0][ ≤] _[t][p]_ _[≤]_ [∆] _,_ (C.2) where vU (η) is the UAV speed, C.1 reflects the data payload constraints, and C.2 the maximum speed and trajectory constraints. Then, the action space in state (qU _, qG)∈Scomm when ξ=1 is_ the set QqG(qU )≜ _∪q[′]U_ _[∈Q][UAV][ Q][q][G]�qU_ _→q[′]U_ � of feasible trajectories starting in qU that serve the GN at qG via the D&F protocol. The overall action space of this communication state is then _Acomm(qU_ _, qG)≜{BS}∪{QqG(qU_ )}, including the scheduling decision ξ ∈{0, 1}. **Policy µ: For waiting states qU** _∈Swait, the policy µ(qU_ )∈Await(rU ) selects a velocity (vr, θc) from the respective action space. Likewise, for communication states (qU _, qG)∈Scomm, the_ policy selects the scheduling decision ξ 0, 1 and if ξ=1, the trajectory followed in the D&F _∈{_ _}_ protocol, i.e., µ(qU _, qG)∈QqG(qU_ ). 
With a stationary policy µ defined, the Lagrangian metric _L[(]µ[ν][)][≜][W][¯]_ [ (]µ[s][)][+][ν][ ¯][E]µ [in (10) is reformulated using Little’s Law [35] and is written as] 1 = _πcomm_ � ˆ Πµ(s)ℓν(s; µ(s))ds, (13) _S_ _L[(]µ[ν][)]_ = lim _N_ _→∞_ [E][µ] � 1 �N _−1_ _N_ _n=0_ _[ℓ][ν][(][s][n][;][ µ][(][s][n][))]_ 1 �N _−1_ _N_ _n=0_ [I][(][s][n][ ∈S][comm][)] where Πµ(s) is the steady-state probability density function of being in state s under policy µ, _πcomm=_ ´Scomm[Π][µ][(][s][)d][s][ is the steady-state probability that the UAV is in the communication phase,] and ℓν(s; a)≜∆(s; a)+ν�E(s; a)−PavgT (s; a)� is the Lagrangian metric in state s under action a. In (13), [�]n[N]=0[−][1] _[ℓ][ν][(][s][n][;][ µ][(][s][n][))][ is the total Lagrangian cost accrued during the first][ N][ SMDP stages,]_ and [�]n[N]=0[−][1] [I][(][s][n][∈S][comm][)][ is the number of communication states encountered; since a new decision] interval initiates after a communication state, this equals the number of decision intervals (Mt in (10)). Taking the limit N _→∞, L[(]µ[ν][)]_ is the expected Lagrangian cost per decision interval, as expressed in (10). The right-hand side expression in (13) follows because the SMDP reaches the � steady-state when N _→∞. Specializing, ℓν(rU_ _, θU_ ; vr, θc)=ν(Pmob( _vr[2][+][r]U[2]_ _[θ]c[2][)][−][P][avg][)∆][0]_ [for the] waiting states, ℓν(rU _, θU_ _, r, θ; BS)=_ _R¯GBL_ (r) [for direct-BS transmission in communication states,] ∆ and ℓν(rU _, θU_ _, r, θ; pU_ )=(1−νPavg)∆+ν ´0 _[P][mob][ (][V][ (][η][)) d][η][ for a communication relayed through]_ ----- the UAV. The next proposition shows that the steady-state probability πcomm is independent of the policy µ, i.e., it is not affected by the optimization over µ. **_Proposition 3. We have πcomm=1 −_** (2−e[−][Λ][′][∆][0])[−][1]. _Proof. See Appendix C._ This result permits rewriting (10) as an average cost-per-stage problem 1 _g(ν) =_ min _πcomm_ _µ_ ˆ Πµ(s)ℓν(s; µ(s))ds, (14) _S_ solvable through standard dynamic programming approaches (upon discretization of the state and action spaces), followed by the dual maximization maxν≥0g(ν). **Two-stage policy decomposition: Since GN transmission requests are uniformly distributed in** the circular cell, the UAV radius is a sufficient statistic in decision-making for a waiting state (rU _, θU_ ), expressed as rU _∈Swait ≜_ [0, a]. Likewise, for a communication state (rU _, θU_ _, r, θ), only_ the UAV radius, GN request radius, and the angle ψ [0, 2π) between them suffice to characterize _∈_ the state. Thus, communication states can be compactly represented as (rU _, r, ψ=θ−θU_ )∈Scomm ≜ [0, a][2] [0, 2π). Hence, the policy affects the SMDP state transitions (and its steady-state) only _×_ through the UAV radial velocity vr in the waiting states, the scheduling decision (direct-BS or UAV relay) and UAV trajectory’s end radius position ˆrU in communication states. Instead, the angular velocity θc in the waiting states and the UAV trajectory to reach the target end radius ˆrU in the communication states only affect the instantaneous Lagrangian ℓν, but not state dynamics. 
With this observation, let O(rU )≜vr∈[−Vmax, Vmax] define the radial velocity policy of waiting states rU _∈Swait, specifying the radial velocity component of waiting action (vr, θc)∈Await(rU_ ); let U (rU _, r, ψ)≜(ξ, ˆrU_ ) define the scheduling and next radius position policy of communication states (rU _, r, ψ)∈Scomm: either direct-BS with ˆrU = rU (ξ = 0), or any trajectory starting from_ radius rU and ending at radius ˆrU when relaying through the UAV (ξ = 1). Accordingly, O and _U are the SMDP’s outer decisions and are the only actions affecting the steady-state distribution,_ denoted as ΠO,U under the outer policy (O, U ); thus, (14) can be restated as 1 _g(ν) =_ min _πcomm_ _O,U_ � [ˆ] ˆ � ΠO,U (s)ℓ[∗]ν[(][s][;][ O][(][s][))d][s][ +] ΠO,U (s)ℓ[∗]ν[(][s][;][ U] [(][s][))d][s] _,_ (15) _Swait_ _Scomm_ where ℓ[∗]ν [is the Lagrangian metric optimized with respect to the][ inner decision][ components not] specified by O and U . In particular, for a waiting state rU, under the radial velocity action _O(rU_ )=vr, the inner optimization is performed with respect to the angular velocity θc, ----- � _ℓ[∗]ν[(][r][U]_ [;][ v][r][) = min]θc _ν (Pmob(V ) −_ _Pavg) ∆0 s.t. V =_ _vr[2]_ [+][ r]U[2] _[θ]c[2]_ (16) _[≤]_ _[V][max][.]_ Since ν≥0, the optimizer θc[∗] [is the angular velocity minimizing the UAV power consumption: due] to the quasi-convex structure of Pmob(v) [10], θc[∗][=0][ if][ |][v][r][|≥][v][P]min[≜] [arg min][V] _[P][mob][(][V][ )][ (in fact,]_ � any angular movement would undesirably increase power consumption), and _vr[2][+][r]U[2]_ [(][θ]c[∗][)][2][=][v][P]min otherwise (i.e., enough angular movement to yield the power minimizing speed). For communi cation states, under direct-BS transmission, ℓ[∗]ν[(][s][; 0][, r][U] [) =][ L/R][GB][(][r][)][; on the other hand, when] relaying through the UAV, ℓ[∗]ν [is obtained by optimizing the trajectory][ p][U] [followed by the UAV,] starting at radius rU and terminating at radius ˆrU (with final angular position _φ[ˆ] optimized),_ ∆ ˆ _ℓ[∗]ν[(][s][; 1][,][ ˆ][r][U]_ [)= min] (1−νPavg)∆+ν _Pmob(vU_ (η))dη s.t. C.1, C.2. (17) ∆,pU _,tp,φ[ˆ]_ 0 where C.1-C.2 are the data payload, maximum UAV speed and trajectory constraints (see (12)). In other words, the inner decision on trajectory minimizes the instantaneous delay-energy trade-off, among all feasible trajectories terminating at the target radius ˆrU . Defining α≜ (1+ν(2νPPmaxmax−Pavg)) _[∈]_ [0, 1] to regulate the trade-off between service delay and UAV energy, (17) can be rewritten as ∆ _ℓ[∗]ν[(][s][; 1][,][ ˆ][r][U]_ [)] ˆ 1+ν(2Pmax−Pavg) [= min]∆,pU _,tp[(1][ −]_ [2][α][)∆+][α] 0 _Pmob(V (η))_ dη s.t. C.1, C.2, (18) _Pmax_ This reformulation is the focus of our HCSO trajectory design algorithm, detailed in Sec. IV. Alg. 1 optimizes the outer policy and computes the average cost-per-stage metric g(ν), along with the average excess energy-per-stage metric for a given ν, by solving problem (15) via value iteration [25]. Alg. 2 solves the dual maximization maxν≥0g(ν) via projected sub-gradient ascent[1] [36]. Specifically, in Alg. 1, lines 2 and 3 compute the inner Lagrangian cost metric optimized with respect to the inner actions—along with the excess energy cost metric—for all states and outer actions; line 6 computes the value iteration update for waiting states: upon moving to the new radial position rU +vr∆0, no request is received, w.p. 
e[−][Λ][′][∆][0], hence moving to a waiting state (with future value VW,i(rU +vr∆0)); otherwise, the system moves to a communication state, with future value VC,i(rU +vr∆0) (averaged with respect to the request position); line 12 computes the value iteration update for communication states, transitioning to a waiting state w.p. 1; the corresponding optimal outer actions are saved in lines 7 and 13; line 16 averages the value of communication states with respect to the random request position; lines 8, 14, and 17 similarly [1The source code for these algorithms is available on GitHub [2].](https://github.com/bharathkeshavamurthy/MAESTRO-X.git) ----- **Algorithm 1 (O[∗], U** _[∗], g(ν),_ _E[¯], V·,[next]0_ _, E·[next],0_ [) = VITER(][ν, V][·][,][0][,][ E][·][,][0][)] 1: Initialization: i=0; stop criterion δ. 2: Inner optimization in waiting states: ∀rU _∈Swait, ∀vr∈[−Vmax, Vmax], calculate ℓ[∗]ν_ [(][r][U] [;][ v][r][)][ as in (16), with minimizer][ θ]c[∗][; compute] � excess energy cost ϵ[∗](rU ; vr)=Pmob( _vr[2] + rU[2]_ [(][θ][c][∗][)][2][)∆][0][ −] _[P][avg][∆][0][.]_ 3: Inner **optimization** **in** **communication** **states:** _∀s∈Scomm, ∀rˆU_ _∈[0, a],_ calculate _ℓ[∗]ν_ [(][s][; 1][,][ ˆ][r][U] [)] via Alg. 3 with _α_ = _νPmax/(1+ν(2Pmax−Pavg)), with minimizer p[∗]U_ [(trajectory); compute excess energy cost][ ϵ][∗][(][s][; ˆ][r][U] [)=][E][(][s][;][ p]U[∗] [)][ −] _[P][avg][T]_ [(][s][;][ p][∗]U [)][.] 4: repeat 5: **for each rU** _∈[0, a] do_ _▷_ Outer optimization in waiting states 6: _VW,i+1(rU_ )←vr _∈[−Vminmax,Vmax]�ℓ[∗]ν_ [(][r][U] [;][ v][r][)+][e][−][Λ][′][∆][0] _[V][W,i][(][r][U]_ [+][v][r][∆][0][)+(1][−][e][−][Λ][′][∆][0] [)][V][C,i][(][r][U] [+][v][r][∆][0][)]�, 7: _Oi+1(rU_ ) ← _vr[∗][, where][ v]r[∗]_ [is the][ arg min][.] 8: _EW,i+1(rU_ )←ϵ[∗](rU ; vr[∗][)+][e][−][Λ][′][∆][0] _[E][W,i][(][r][U]_ [+][v]r[∗][∆][0][)+(1][−][e][−][Λ][′][∆][0] [)][E][C,i][(][r][U] [+][v]r[∗][∆][0][)][.] 9: **end for** 10: **for each rU** _∈[0, a] do_ _▷_ Outer optimization in communication states 11: **for each r∈[0, a], ψ∈[0, 2π) (s = (rU** _, r, ψ)) do_ _▷_ Outer optimization in communication states 12: _Vˆ (s)←_ min � _RGBL_ (r) [+][V][W,i][(][r][U] [)], _rˆUmin ∈[0,a][ℓ]ν[∗]_ [(][s][; ˆ][r][U] [)+][V][W,i][(ˆ][r][U] [)] � _▷_ Value function given GN position � �� � � �� � _ξ=0_ _ξ=1_ 13: _Ui+1(s) ←_ (ξ[∗], ˆrU[∗] [)][, where][ (][ξ][∗][,][ ˆ][r]U[∗] [)][ is the][ arg min][ (][r][ˆ]U[∗] [=][ r][U][ if][ ξ][∗] [= 0][).] 14: _Eˆ(s)←ξ[∗]_ _· ϵ[∗](s; ˆrU[∗]_ [)+][E][W,i][(ˆ][r]U[∗] [)][.] _▷_ Total excess cost given GN pos., optimized over scheduling/trajectory 15: **end for** 16: _VC,i+1(rU_ )← ´ 20 _π_ 21π ´ a0 _a2r[2][ ˆ][V][ (][r][U]_ _[, r, ψ][)d][r][d][ψ][′]_ _▷_ Value function in comm states, averaged over GN position 17: _EC,i+1(rU_ )← ´ 20 _π_ 21π ´ a0 _a2r[2][ ˆ][E][(][r][U]_ _[, r, ψ][)d][r][d][ψ][′]_ _▷_ Excess energy cost in comm states, averaged over GN position 18: **end for** 19: _∀rU ∈_ [0, a] and X ∈{W, C}, calculate δX[(][V][ )](rU )=VX,i+1(rU )−VX,i(rU ) and δX[(][E][)][(][r][U] [)=][E][X,i][+1][(][r][U] [)][−E][X,i][(][r][U] [)][;][ i][←][i][+1][.] 20: until maxrU,X δX[V] [(][r][U] [)][−] [min][r]U _[,X][ δ]X[V]_ [(][r][U] [)][<δ][ and][ max][r]U _[,X][ δ]X[E]_ [(][r][U] [)][−] [min][r]U _[,X][ δ]X[E]_ [(][r][U] [)][<δ][.] _▷_ Termination condition 21: return g(ν)≈δW[(][V][ )][(0)][/π][comm][,][ ¯][E≈][δ]W[(][E][)][(0)][.] _▷_ dual cost and average excess energy cost 22: _V·[next],0_ (·)=V·,i(·)−VW,i(0), E·[next],0 (·)=E·,i(·)−EW,i(0). 
_▷_ Relative values (next VITER initialization) 23: _O[∗](·)=Oi(·), U_ _[∗](·)=Ui(·)._ _▷_ Optimal waiting and communication policies update the total excess energy cost, needed to compute the projected dual sub-gradient ascent in Alg. 2. In practice, the integrals in lines 16 and 17, and the continuous state/action spaces are discretized (see MAESTRO-X [2]), leading to an overall complexity of each value iteration update (lines 5-18) of order O(KR _·(KV +KR[2]_ _[·][K][A][))][, where][ K][R][ is the number of discretized radii]_ levels (rU and r values), KA is the number of angular levels (ψ and ψ[′]), and KV is the number of discretized radial velocities (vr). Upon convergence (typically, value iteration converges within (log(1/δ)) iterations to achieve a target accuracy δ [25, Sec. V]), line 21 estimates the values _O_ of the average cost-per-stage and excess energy-per-stage metrics. In Alg. 2, line 1 initializes the dual variable and a sequence of step-sizes used for projected sub-gradient ascent; line 3 calls value iteration (Alg. 1) using the current dual variable ν, and outputs the optimal outer policy and the average cost-, excess energy- per-stage metrics; line 5 monitors convergence in terms of primal feasibility and complementary slackness conditions; line 4 updates the value of the dual variable in the direction of its sub-gradient and projects its value to the non-negative range to ensure dual feasibility; note that Alg. 1 outputs also the _relative values metrics V and_ : these are used to initialize the total cost and excess energy _E_ metrics in the next call to Alg. 1, and help speed up convergence. We are left with the trajectory design (line 3 of Alg. 1), carried out using Hierarchical CSO in the next section. ----- **Algorithm 2 Projected Sub-gradient Ascent (PSGA)** 1: Initialization: k = 0; dual variable ν≥0; step-size {ρk= _k[ρ]+1[0]_ _[, k][≥][0][}][;][ V][·][,][0][(][·][)=][E][·][,][0][(][·][)][ ≡]_ [0][.] 2: repeat 3: (O[∗], U _[∗], g,_ _E[¯], V·,0, E·,0) ←_ VITER(ν, V·,0, E·,0) via Alg. 1. 4: Update ν ← max �ν+ρkE[¯], 0�; k←k+1. _▷_ Dual variable update 5: until _E[¯]<ϵP F ; ν|E|[¯]_ _<ϵCS_ _▷_ Check KKT optimality conditions 6: return: optimal outer policy (O[∗], U _[∗])._ **Algorithm 3 HCSO Algorithm** 1: Randomly initialize N particles (p, v)1:N : pi is a sequence of way-points, vi a sequence of UAV speeds. 2: while M ≤ _Mmax do_ 3: Obtain M -segment trajectory: (p[∗], v[∗])=CSO(p1:N _, v1:N_ _, N, M_ ) (see [26]). _▷_ CSO call 4: Increase M _←2M_ ; interpolate to form reference trajectory: (˜p, ˜v)=interp(p[∗], v[∗], M ). _▷_ Increase resolution via interpolation 5: Reduce swarm size N _←N_ _−Nred._ 6: **for n=1, 2, . . ., N do** _▷_ Generate N particles randomly 7: New way-point particle pn with mth way-point xm = ˜xm+(χm, ζm) and xM = ˆrU _∥xxMM−−11∥2_ [.] _▷_ Way-point perturbation 8: New velocity particle vn with mth velocity vm = [˜vm+κm][[][V][low][,V][max][]]. _▷_ Velocity perturbation 9: **end for** 10: end while IV. TRAJECTORY DESIGN VIA HIERARCHICAL COMPETITIVE SWARM OPTIMIZATION In this section, we design the UAV trajectory during the D&F protocol. To solve (18), we propose a CSO scheme [26] defining a meta-heuristic UAV trajectory. First, as done also with SCA approaches [10], [16], [37], we simplify the continuous UAV trajectory into a finite sequence of way-points connected by straight lines at constant velocity. 
However, a direct application of CSO to high-resolution trajectory design suffers from poor convergence due to exponentially large solution spaces [38]. We address this weakness by proposing a Hierarchical variant of CSO (HCSO), wherein a sequence of problems is solved: initially, CSO produces a low-resolution trajectory; the optimized trajectory is then interpolated to create a higher-resolution one, which is then further optimized with CSO. The process repeats until a target resolution is achieved. Let $\mathbf{x}_0 = (r_U, 0)$ be the initial UAV position and $\mathbf{x}_G \triangleq (r \cos \psi, r \sin \psi)$ be the request position (in this section, expressed as Cartesian coordinates), corresponding to the communication state $\mathbf{s} = (r_U, r, \psi) \in \mathcal{S}_{\mathrm{comm}}$. Given a target end radius position $\hat{r}_U$ (the outer action), we encode the UAV trajectory as a sequence of $M$ way-points $\mathbf{x}_m = (x_m, y_m)$, $m = 1, \ldots, M$, ending at $\mathbf{x}_M$ at radius $\hat{r}_U$, and velocities $v_m \in [V_{\mathrm{low}}, V_{\max}]$ used to traverse each straight trajectory segment $\Psi_m \triangleq \mathbf{x}_m - \mathbf{x}_{m-1}$. The first and second $M/2$ segments correspond to the two phases of the D&F protocol. Here, the minimum velocity $V_{\mathrm{low}} \ll V_{\max}$ ensures well-defined segment durations; the sequences of way-points $\mathbf{p} \triangleq [\mathbf{x}_1, \ldots, \mathbf{x}_M]$ and velocities $\mathbf{v} \triangleq [v_1, \ldots, v_M]$ are the optimization variables. Since the number of bits communicated (C.1) during each trajectory segment, coupled with our throughput-maximizing rate adaptation scheme, cannot be computed in closed-form, we approximate them numerically. Specifically, between subsequent way-points $\mathbf{x}_{m-1}$ and $\mathbf{x}_m$
In particular, _t[ˆ]P,i equals the remaining_ payload max{hi(p, v), 0}, divided by the corresponding throughput at the terminal position (R[¯]GU for the decode phase and _R[¯]UB for the forward phase). Hence, (P.0) becomes minfˆ(p, v)._ **p,v** To solve this problem, we employ the HCSO algorithm, outlined in Alg. 3 and discussed next. We initialize N way-point particles p1:N ≜p1, . . ., pN and N UAV velocity particles v1:N ≜ **v1, . . ., vN (line 1). The core of the algorithm is CSO (line 3), detailed in [26]: essentially,** during the kth iteration within CSO, the N particles are randomly grouped into _N_ 2 [pairwise] competitions. For both members of a pair, _f[ˆ](p, v) is calculated; the winner of the competition_ is passed onto the (k+1)th iteration, while the loser is modified by learning from the winner, 2We let _∥xx∥2_ [=(1][,][ 0)][ for a point in the origin,][ x][=(0][,][ 0)][.] ----- Fig. 3: An illustration outlining the sequence of operations under MAESTRO-X that occur at each UAV. as detailed by the update equations in [26]; after repeating these pair-wise competitions, the CSO algorithm outputs a winning trajectory (p[∗], v[∗]). However, a direct application of CSO alone suffers from a complexity-accuracy dilemma: high-resolution trajectories are slow to converge, while low-resolution ones give rise to poor solutions that fail to capture fine-grained variations in the trajectory way-points and velocities. To overcome this limitation, we embed CSO within a hierarchical wrapper: starting from a low-resolution trajectory optimized via CSO, after each CSO iteration (line 3), the resulting trajectory is interpolated to form a reference higher-resolution trajectory of M 2M way-points (line 4). The new population size is then _←_ reduced, N _←N_ _−Nred, to lower the computational burden of CSO (line 5), and a new set of N_ particles is generated randomly. To preserve the quality of the previous lower-resolution trajectory solution, the mth way-point of each new particle is generated by injecting zero-mean Gaussian noise χm, ζm∼N �0, σm,X[2] � (line 7) around the reference trajectory; similarly, the UAV velocity is generated by injecting Gaussian noise κm∼N (0, σV[2] [)][ (line 8), followed by projection onto the] feasible set ([·][[][V][low][,V][max][]]). Here, the way-point variance σm,X[2] [=][ ς][(][∥][x][˜][m][+1][−][x][˜][m][∥][2][+][∥][x][˜][m][−][1][−][x][˜][m][∥][2][)][,] with scaling factor ς>0, is determined by the spread between neighboring reference trajectory way-points. This choice accounts for the empirical observation that in areas with clustered UAV way-points, the objective function _f[ˆ](p, v) is sensitive to large variations. The speed variance_ _σV[2]_ [=][ ε][(][V][max][−][V][low][)][2][, with scaling factor][ ε>][0][, reflects the observation that the UAV velocities] exhibit faster convergence with CSO than the trajectory way-points and less sensitivity to random initialization. These steps in Alg. 3 continue until the desired trajectory resolution is reached. V. MAESTRO-X: AN EXTENSION TO UAV SWARMS In this section, we extend MAESTRO to swarms of NU UAV-relays. This eXtension, termed MAESTRO-X, augments the multiscale optimal policy obtained via SMDP value iteration. Depicting an example scenario of serving data traffic generated by an aggregation of soil sensors in precision agriculture, Fig. 3 illustrates its control flow. MAESTRO-X is enabled by replicating the optimal single-agent policy of the SMDP in Sec. 
III across the swarm and ----- employing additional enhancements including spread maximization, consensus-driven conflict _resolution with queuing dynamics, piggybacking, and frequency reuse. These mechanisms[3]_ are implemented using a fully-connected distributed mesh network overlaid on the BS and UAVs, that enables periodic exchanges of command-and-control messages, as depicted in Fig. 3. **Spread Maximization: Note that the inner action of MAESTRO’s optimal waiting policy is** symmetric in relation to clockwise and counter-clockwise angular UAV movements. For multiple UAVs, we leverage this symmetry to proactively position idle UAVs for potential new relay requests. Specifically, each UAV in the waiting state moves either clockwise or counter-clockwise (with angular velocity given by (16)), so as to maximize its angular distance from the nearest UAV in the waiting state, in an attempt to spread out and more readily serve future requests. To this end, UAV i parses the state flag as 0 and GPS event fields in its control frame (see Fig. 3). By monitoring the control frames received from other UAVs, it constructs a local peer list of other waiting state UAVs, and determines its closest peer (in the angular dimension) _L_ _j[∗]= arg minj∈L |θi−θj|, where θj is the current angular coordinate of UAV j. UAV i then executes_ the angular motion away from UAV j[∗], until new control frames (containing updated positions) are received from its peers (at the end of the synchronized reporting period) or upon receiving a new GN transmission request, at which time it transitions to the communication state. **Consensus-driven Conflict Resolution: In our single-UAV formulation (Sec. III), the scheduling** action was determined by comparing the Lagrangian costs of direct-BS transmission to that of relayed UAV service. To extend scheduling decisions to UAV swarms—including queueing dynamics, as well as simultaneous multi-user service via piggybacking at the UAVs and frequency reuse (both described later in this section)—the augmented scheduling decision must now 1) resolve conflicts among the BS and UAVs as to whom should serve a new GN request; 2) facilitate a consensus on the best node to serve the GN; 3) account for queueing delays experienced at each potential server node while waiting for data channels to become available. Similarly to the single-UAV setting, this augmentation is driven by a cost-of-service metric computed at the BS and at each UAV. The new metric consists of several modifications to the original delay-energy cost trade-off computed in the single-UAV setting. For new requests served directly by the BS, the new metric equals the original delay metric, plus an estimate of the time needed for a data channel to become available (and considers the frequency reuse mechanism to be described). 3Due to space constraints, we keep our discussions on these multi-agent mechanisms brief. For more details on their [implementation, please refer to our source code on GitHub [2].](https://github.com/bharathkeshavamurthy/MAESTRO-X.git) ----- This time can be estimated based on the time needed to complete the requests currently served at the BS, and the time needed to complete those already queued. Thus, for a new GN request at (r, θ), the augmented cost metric associated with direct-BS transmission is _R¯GBL_ (r) [+][t]BS[, where] the first term accounts for the transmission time, whereas tBS is the additional waiting time. 
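Before completing the conflict-resolution metric for the UAV side, the spread-maximization rule just described admits a very small rendering in code: a waiting UAV locates its angularly closest waiting peer from the control-frame peer list and moves along the arc that increases their separation. The angle bookkeeping and names below are illustrative and are not taken from the released implementation.

```python
import numpy as np

def angular_distance(a, b):
    """Shorter arc between two angles (radians)."""
    d = abs(a - b) % (2 * np.pi)
    return min(d, 2 * np.pi - d)

def spread_direction(theta_i, peer_thetas):
    """+1: counter-clockwise, -1: clockwise, away from the nearest peer."""
    th_star = min(peer_thetas, key=lambda th: angular_distance(theta_i, th))
    # if the nearest waiting peer lies counter-clockwise, move clockwise
    return -1.0 if (th_star - theta_i) % (2 * np.pi) < np.pi else +1.0
```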
Meanwhile, for new requests served by UAV i at radius rU _|i, GN request radius r, and angle_ between them ψU _|i, i.e., state si = (rU_ _|i, r, ψU_ _|i), with target end radius ˆrU_ _|i, the augmented cost_ metric is given by _ℓ[˜][∗]ν[(][s][i][; 1][,][ ˆ][r][U]_ _[|][i][)+][t][U][|][i][. The first term,][ ˜][ℓ][∗]ν[(][s][i][; 1][,][ ˆ][r][U]_ _[|][i][)][, is the Lagrangian cost metric,]_ modified to account for the piggybacking mechanism (described later in this section), wherein the UAV follows a collated trajectory to handle the new request while serving previous requests; the second term, tU|i, is an estimate of the time needed for a data channel to become available (and considering the frequency reuse mechanism). Upon calculating these cost-of-service metrics for the BS and the UAVs, the network arrives at a consensus on the best node to serve the new request, i.e., if _R¯GBL_ (r) [+][t]BS[≤][ℓ][˜][∗]ν[(][s][i][; 1][,][ ˆ][r][U] _[|][i][)+][t][U][|][i][,][∀][i][∈{][1][,][ 2][, . . ., N][U]_ _[}][, then the BS serves the request;]_ otherwise, the request is relayed through the UAV i[∗]= arg mini∈{1,2,...,NU _}_ _ℓ[˜][∗]ν[(][s][i][; 1][,][ ˆ][r][U]_ _[|][i][) +][ t][U][|][i][.]_ **Frequency Reuse: To improve the spectrum utilization efficiency, we propose a frequency reuse** mechanism, allowing multiple serving nodes (the BS and UAVs) to share the same data channel simultaneously when serving their respective GN requests. When direct-BS transmission is used to serve a new GN request, a single data channel assignment occurs at the start of direct transmission. When the new request is instead served using a D&F UAV relay, two distinct data channel assignments occur: one each for the decode and forward phases of the UAV. In essence, reuse of an occupied data channel is permitted on the condition that the received SNRs of nodes sharing the data channel degrade no more than an acceptable pre-specified threshold permits. Moreover, to make operations of the frequency reuse mechanism more amenable to our problem, which includes UAVs following time-varying trajectories, we equivalently describe this SNR degradation threshold by instead using a minimum distance threshold dth. The frequency reuse mechanism proceeds in the same way, regardless of whether the data channel assignment under consideration is for a GN using direct-BS transmission, a GN sending its data to a UAV (decode phase), or a UAV relaying its data payload to the BS (forward phase). To formalize, let k∈{1, 2, . . ., NC} be the data channel under consideration for reuse; let node _i be the new transmitter (either a GN beginning its uplink transmission or a UAV beginning_ its forward phase) determining whether reuse of data channel k is possible; let node j be the intended receiver of the transmission originating from node i; let (k) be the set of active _T_ ----- transmitters already using data channel k to serve their requests, i.e., a GN transmitting to a BS or UAV, or a UAV transmitting to the BS during its forward phase; let (k) be the set of active _R_ receivers already using data channel k, i.e., a UAV receiving an uplink transmission from a GN during the decode phase, the BS receiving an uplink transmission directly from a GN, or the BS receiving the data payload from a UAV during the forward phase. 
For data channel k to be deemed acceptable for reuse, the following two conditions must both be met: (FR.1) _dℓ,j ≥_ _dth, ∀ℓ_ _∈T (k),_ (20) (FR.2) _di,ℓ_ _≥_ _dth, ∀ℓ_ _∈R(k),_ (21) where di′,j′ is the Euclidean distance between any transmitter i[′] and receiver j[′]. From the above equations, (FR.1) ensures that the distances between the intended receiver and all currently active transmitters are above the minimum distance threshold dth, at all times during the execution of the UAVs’ trajectories. Likewise, (FR.2) ensures that distances between the new transmitter and all currently active receivers are above the minimum distance threshold dth. Effectively, satisfying conditions (FR.1) and (FR.2) simultaneously ensure that no received SNR experiences a degradation beyond a pre-specified limit, and hence data channel k is acceptable for reuse. Next, given its re-usability, the wait time for a channel to become available is estimated by modeling queuing dynamics, choosing the channel with the smallest wait time for service. Also, note that, once a channel is chosen with reuse, since the throughput experienced by the UAV during service degrades due to the added interference from other transmitters using the same channel, the UAV might not be able to complete its decode or forward phases using the optimal trajectory: the UAV then flies along the circumference of a circle (rmin>0) around the phase-specific final way-point with its power-minimizing velocity (22 m/s) to complete the phase; additionally, we evaluate the service in this case using the same time and energy penalties discussed in Sec. IV. **Piggybacking: To facilitate simultaneous multi-user service at the UAVs, we incorporate a** piggybacking mechanism (in the cost-of-service computation of the consensus-driven conflict resolution process), wherein a UAV follows a collated trajectory to accommodate new GN uplink requests while serving previous requests. Recalling from the description of conflict resolution, for a new request served through UAV i, we consider the state si = �rU _|i, r, ψU_ _|i�, with target_ end radius ˆrU _|i, and modified Lagrangian cost metric_ _ℓ[˜][∗]ν[(][s][i][; 1][,][ ˆ][r][U]_ _[|][i][)][. If UAV][ i][ is currently not]_ serving any other request, this modified cost metric simplifies to _ℓ[˜][∗]ν[(][s][i][; 1][,][ ˆ][r][U]_ _[|][i][)=][ℓ][∗]ν[(][s][i][; 1][,][ ˆ][r][U]_ _[|][i][)][,]_ ----- |Notation|Description|Simulation Value|Col4|Notation|Description|Simulation Value| |---|---|---|---|---|---|---| |NG|Number of GNs|30||a|Cell radius|1 km| |L|Data payload|10 Mbits||W|System BW|20 MHz| |NC|Number of data channels|4||B|Data channel BW|5 MHz| |κ|NLoS attenuation constant|0.2|||SNR referenced at 1 m|40 dB| |(α, α˜)|LoS/NLoS pathloss exponents|(2,2.8)|||UAV mobility power consumption|Eq. (4), params. of [10]| |(k1, k2)|Rician K-factor parameters [14]|(1,0.05)||(z1, z2)|LoS probability parameters [39]|(9.61,0.16)| |HU / HB|UAV / BS antenna height|200 m / 80 m||Vmax|Max. UAV speed|55 m/s| ||Control frame reporting period|10 ms|||SINR degradation threshold|5 dB| TABLE II: The system simulation parameters (unless otherwise stated). i.e., the original Lagrangian cost metric computed for the single UAV. On the other hand, if the UAV is currently serving other requests, the UAV computes the cost metric to serve the new request by piggybacking it, i.e., serving it simultaneously with its current requests on a different data channel. 
In this case, the modified cost metric becomes _ℓ[˜][∗]ν[(][s][i][; 1][,][ ˆ][r][U]_ _[|][i][)=][ℓ]ν[(pg)](si; 1, ˆrU_ _|i), where_ _ℓ[(pg)]ν_ (si; 1, ˆrU _|i) is defined to encapsulate modifications to the cost-of-service metric corresponding_ to the amount of data payload of the new request that has been either decoded or forwarded (or both) during the execution of the current trajectory (serving the UAV’s previous requests). Note that the energy expended by the UAV serving its current trajectory while piggybacking the new request is not considered in the cost computed for this new request, since the energy cost has already been accounted for in the execution of the current trajectory; instead, we consider only the delays experienced by the piggybacked GN during its associated cost computation. VI. SIMULATION SETUP AND EVALUATIONS Unless otherwise stated, we use the parameter values in Table II. To solve (15) via Algorithms 1–3, we discretize the SMDP state and action spaces (with 25 equally-spaced radii levels and 25 radial velocity waiting actions) and apply linearly-interpolated value iteration (see implementation details documented in [2]). Furthermore, we chose ∆0 = 1s. Validation of surrogate optimization problem (9): First, we justify the efficacy of our alternative optimization framework that replaces the original metric _D[¯]_ _µ with the lower bound_ _W[¯]_ _µ[(][s][)][. As]_ depicted in Table III, we observe that the optimized value _W[¯]_ _µ[(][s][∗][)]_ [of the alternative formulation] (9) is practically identical to the expected delay metric _D[¯]_ _µ∗_ of the original formulation (6), across various data payload sizes (L) and data traffic arrival rates (Λ[′]). Hence, replacing _D[¯]_ _µ_ with its lower bound _W[¯]_ _µ[(][s][)]_ as the optimization metric leads to near-optimal solutions. Notably, the surrogate optimization problem (9) is amenable to dynamic programming tools such as value iteration (see Alg. 1) and enables our proposed two-scale policy decomposition that drastically reduces the size of the action space in our SMDP formulation. These tools would not be directly applicable to the original formulation (6) that uses _D[¯]_ _µ as the optimization objective._ ----- |Payload: L|Arrival rate: Λ′|Lower bound: W¯ µ(s ∗)|Expected Delay: D¯ µ∗|Direct-to-BS: D¯ BS| |---|---|---|---|---| |1 Mbits|1 req/min/UAV|1.15 s|1.15 s|31.64s| |10 Mbits|0.2 req/min/UAV|16.41 s|16.41 s|316.38 s| |100 Mbits|0.033 req/min/UAV|82.17 s|82.17 s|3163.81 s| TABLE III: Pavg=1 kW: A comparison between the lower bound _W[¯]_ _µ[(][s][∗][)]_ [of][ ¯][D][µ][∗] [(Prop. 2) and direct-BS (][D][ ¯] _[BS][).]_ (a) Optimal Wait Policy. (b) Optimal D&F trajectory. Fig. 4: L=10 Mbits, Pavg=1.2 kW, Λ[′]=0.2 req/min/UAV: Optimal waiting policy (a) and optimized D&F trajectory during a communication phase (terminating above the BS) (b). The arrows and associated numerical values represent the direction of motion and the flying speed in m/s. MAESTRO policy: We now study illustrative examples of the optimal policy (Fig. 4). We note that, during the waiting phase (Fig. 4a), the UAV moves towards a radius of 94 m; upon _≈_ reaching it, it flies at power-minimizing speed (22.5 m/s) along a circle: this allows the UAV to be well-positioned for future requests (not too close to the BS, and not too far away from it), and at the same time to minimize its power consumption. Next, Fig. 
4b depicts the optimal trajectory obtained via HCSO (Algorithm 3), for a certain configuration of GN request positions, initial and target final UAV radii (evident from the figure). Intuitively, during the decode phase, the UAV flies towards the GN to improve the pathloss conditions; for the same reason, it moves towards the BS during the forward phase. Additionally, Fig 4b depicts two different trajectory choices for the GNs at [193, 594] m (GN-0 and GN-1, specular to each other), one corresponding _±_ to minimum service delay and the other corresponding to minimum service energy: here, in addition to observing the angular symmetry in our formulation (see Sec. III), we notice that, under the minimum delay trajectory, the UAV flies faster, to improve pathloss quicker and reduce ----- the transmission delay; in contrast, it flies slower under the minimum energy trajectory, to save energy. The delay-energy trade-off in trajectory design is regulated via α, as described by (18). MAESTRO-X delay-power trade-off: We compare the delay-power trade-off of MAESTRO-X with adaptations of state-of-the-art algorithms to our setup, namely: the CIRCLE heuristic [20]; a CVXPY implementation of the Successive Convex Approximation scheme (SCA) [10]; a CVXPY implementation of the Constrained SCA scheme with Alternating Direction Method of Multipliers (CSCA-ADMM) [16], and a TensorFlow implementation of the Double Deep-Q Networks framework (DDQN) [19]. Note that all these frameworks are optimized under their original channel and communication models detailed in the corresponding references (see Table I for a list of their features), while we evaluate their performance under more realistic models of dynamic traffic arrivals and A2G channels. In addition, we consider the following custom heuristics: BS-only, in which GNs transmit directly to the BS without using UAVs; HAP-only in which GNs transmit directly to a High Altitude Platform (HAP, height=2 km); and Static, in which the UAVs statically hover at fixed locations. We also compute a Lower Bound to the delay as follows: for a GN at radius level r, it is the minimum between the delay incurred with direct-BS transmission (with throughput _R[¯]GB(r)), and a D&F scheme in which the UAV is on_ top of the GN during the decode phase (with throughput _R[¯]GU_ (0)), and on top of the BS during the forward phase (with throughput _R[¯]UB(0)). Note that this lower bound is not attainable, since_ it neglects the mobility of the UAV. We average the results over 1000 requests. In Fig. 5a, we plot the delay-power trade-off under low congestion (Λ[′]=0.2 req/min/UAV). Remarkably, MAESTRO-X allows to regulate the delay-power trade-off, whereas the other schemes do not. Across such trade-off, it outperforms all other schemes. Specifically, exploiting the mobility and maneuverability of the UAVs via optimized trajectories demonstrate lower service delays compared to static UAV deployments: for instance, a single UAV optimized via MAESTRO under 1 kW power constraint delivers the data payload 29% faster than a static UAV, while using 27% less power. Notably, under the same power consumption as the competitors, a single UAV optimized with MAESTRO achieves 38% lower delay than 3 UAV relays under DDQN [19], and 13 faster service times than the CIRCLE heuristic with 3 UAVs [20]. Adding _×_ UAVs significantly improves the performance of MAESTRO-X: with 3 UAVs MAESTRO-X delivers the payloads 4.7 faster than SCA [10] and 8.6 faster than CSCA-ADMM [16]. 
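As a concrete reading of the Lower Bound defined above, a minimal sketch follows; the average-throughput functions R̄_GB, R̄_GU, R̄_UB are assumed given (e.g., evaluated via (3) and Prop. 1), and the function name is ours, introduced only for illustration.

```python
def delay_lower_bound(r, L, R_gb, R_gu, R_ub):
    """Unattainable delay lower bound for a GN at radius r (sketch)."""
    direct = L / R_gb(r)                    # single hop: GN -> BS
    relay = L / R_gu(0.0) + L / R_ub(0.0)   # ideal D&F: hover over GN, then BS
    return min(direct, relay)
```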
The _×_ _×_ gains start to saturate with 2-3 UAVs. In fact, MAESTRO-X approaches the theoretical lower bound to the delay, for large power consumption values: with more power available, UAVs ----- (a) Delay-Power Trade-off. (b) Delay chart. Fig. 5: L=10 Mbits, Λ[′]=0.2 req/min/UAV: Delay-power trade-off (a) and delay charts (b) for MAESTRO-X, stateof-the-art algorithms, and custom heuristics. In (b), MAESTRO-X is evaluated under Pavg =1 kW. leverage their mobility to improve pathloss conditions; thanks to spread maximization, multiple UAVs are more likely to be in the vicinity of a request and readily serve it. In Fig. 5b, we show the contributions of the communication and queue wait times to the overall delay experienced by the GNs, with MAESTRO-X evaluated under a power constraint of 1 kW (less than any other scheme, see Fig. 5a). We note that the BS-only deployment suffers severely due to large communication delays of GNs at the cell edge, causing the queue to become backlogged. The performance is drastically improved by deploying HAPs (HAP-only), thanks to their higher elevation and improved LoS conditions. Yet, the delay performance offered by a HAP-only deployment is poorer than a non-terrestrial deployment involving UAVs: 2.7 _×_ slower than a static UAV and 3.8 slower than a UAV optimized with MAESTRO. Across all _×_ UAV-assisted implementations, increasing the number of UAVs in the swarm not only lowers the communication delay but also the queue wait times since more GNs can be served simultaneously. Remarkably, MAESTRO-X demonstrates negligible queue wait times even with a single UAV: in this low-traffic regime, requests are served quicker than the rate at which they are generated, thereby bypassing the need for piggybacking and frequency reuse. To analyze the impact of these mechanisms, in Fig. 6a and Fig. 6b, we study a high congestion regime (Λ[′]=20 req/min/UAV). The results depicted in Fig. 6a are qualitatively similar to the low congestion case with some key differences: for all the competitor schemes, we note a performance ----- (a) Delay-Power Trade-off. (b) Delay chart. Fig. 6: L=10 Mbits, Λ[′]=20 req/min/UAV: Delay-power trade-off (a) and delay charts (b) for MAESTRO-X, stateof-the-art algorithms, and custom heuristics. In (b), MAESTRO-X is evaluated under Pavg =1 kW. degradation, due to the large wait times (Fig. 6b); a similar performance degradation is noted for MAESTRO-X with a single UAV. However, remarkably, MAESTRO-X with 2-3 UAVs appears to be unaffected by the higher arrival rate, as also demonstrated by the small queue time. This is attributed to frequency reuse allowing more efficient spectrum use, and to piggybacking allowing simultaneous service of multiple requests by each UAV. MAESTRO-X, impact of number of channels for large swarms: In Fig. 7, we study the impact of the number of channels (each of 5 MHz) on the average service delay offered by a MAESTRO X deployment of 10 UAV-relays, in the high congestion regime. Note that the competitors become computationally intractable with more than 5-6 UAVs, whereas the policy replication mechanism of MAESTRO-X offers scalability to large UAV swarms (see Fig. 8). The delay quickly improves by increasing the number of channels, and saturates after 5 channels at 2s delay (consistent with Fig. 6a). This is a remarkable result: for instance, with 4 channels (service delay of 4 s), if no frequency reuse was allowed, the network could at most service _≈_ 4[data channels] 15[req/min/data channel]=60 req/min. 
The ability to serve a much larger rate _×_ of Λ = 200 req/min is attributed to the frequency reuse mechanism. Policy convergence time: Finally, in Fig. 8, we benchmark MAESTRO-X against SCA from [10] (single-agent, model-based), CSCA-ADMM from [16] (model-based), and DDQN from [19] (model-free), in terms of their policy convergence times, when varying the number of UAVs ----- Fig. 7: 10 UAVs, L=10 Mbits, Pavg=1 kW, Λ=200 req/min: Average service delay (communication time + queue wait time) vs the number of channels NC. Fig. 8: Policy convergence time (in hours) for MAESTRO-X and the relevant state-of-the-art. _NU_ . All implementations are in Python, and are executed on a compute node with 2× 64-core AMD EPYC Milan 7763 CPUs, 16 64 GB DDR4 memory, and 4 NVIDIA A100 GPUs with _×_ _×_ 40 GB VRAM each. Remarkably, the convergence time of MAESTRO-X is irrespective of the number of UAVs, whereas it grows quickly for CSCA-ADMM and DDQN. This is due to the policy replication mechanism used by MAESTRO-X: the policy is computed for a single-agent, and then replicated across the swarm, coupled with the supplementary UAV-swarm mechanisms developed in Sec. V. On the other hand, the convergence times of CSCA-ADMM and DDQN grow quickly with the number of UAVs, and become prohibitive when scaled to more than 5 and 6 UAVs, respectively: in fact, it grows linearly for CSCA-ADMM, due to a joint multi UAV construction involved in its CVXPY-SCS implementation, and exponentially for DDQN, due to combined multi-agent state and action space construction. Remarkably, MAESTRO X yields a faster convergence time even for a single UAV, thanks to its ability to leverage the multiscale structure of the decision process to achieve a more efficient implementation, in addition to Tensor-ized executions exploiting SIMD processing in CUDA-capable GPUs, and distributed workers and thread-pool concurrency in Python (TensorFlow). These benefits in policy convergence coupled with the superior delay-power performance illustrated in Figs. 5 and 6, present MAESTRO-X as an appealing solution for both small and large UAV swarms. VII. CONCLUSION In this paper, we propose the MAESTRO-X framework for the decentralized orchestration of rotary-wing UAV-relay swarms in cellular networks, augmenting the coverage and service ----- capabilities of a terrestrial BS. First, we specialize our system to single-UAV deployments and design the optimal scheduling and trajectory optimization policy under an SMDP formulation. Next, we extend to distributed multi-UAV deployments by employing multi-agent coordination mechanisms, and then replicate this augmented single-UAV policy across the swarm. Numerical evaluations demonstrate that MAESTRO-X delivers significant gains over BS- and HAP-only deployments; furthermore, it exhibits superior performance over static UAV deployments, deep Q-learning schemes, and successive convex approximation strategies. APPENDIX A: PROOF OF PROP. 1 Since 2(K+1) _g_ has a non-central χ[2] distribution with 2 degrees of freedom and a non_|_ _|[2]_ _√_ � centrality parameter 2K, we find that Pout(Υ, β, K)=1−Q1( 2K, 2(K+1)u(Υ, β)), where _√_ � _Q1(·, ·) is the standard Marcum Q-function [14]. Hence, R(Υ, β, K) = Υ·Q1(_ 2K, 2(K + 1)u(Υ, β)). We now maximize it over Υ≥0. Let Z≜2γ[−][1]u (Υ, β) and γ≜ _[N]βP[0][B]_ [, hence][ Υ=][B][ log][2] �1+ _[Z]2_ � ≜f (Z). 
It follows that Υ[∗]=f (Z _[∗]), where Z_ _[∗]_ maximizes over Z 0 the function _≥_ _√_ _h(Z) ≜_ ln R(f (Z), β, K) = ln f (Z) + ln Q1( � 2K, _γ(K + 1)Z)._ (22) � _√_ � Note that Q1 _a,_ _bZ_ is log-concave in Z≥0 for a, b>0 (see [40]), and second derivative of ln f (Z) satisfies (ln f (Z))[′′]= _[f]_ _[′′][(][Z][)]_ _f_ (Z) (f (Z))[2][ ≤][0][,][ ∀][Z][≥][0][, so that][ h][(][Z][)][ is concave in][ Z][≥][0][. Since] _[−]_ [(][f] _[′][(][Z][))][2]_ limZ→0+ h(Z)=−∞ and limZ→∞ _h(Z)=−∞, there exists a unique Z_ _[∗]∈(0, ∞) (hence Υ[∗]=f (Z_ _[∗]))_ such that h[′](Z _[∗])=0, solvable with the bisection method, with h[′](Z) given by_ _√_ _∂Q1(_ 2K, b)/∂b��b=[√]γ(K+1)Z _,_ _√_ � _Q1(_ 2K, _γ(K + 1)Z)_ _h[′](Z) =_ _[f][ ′][(][Z][)]_ _f_ (Z) [+] � _γ(K + 1)_ _√_ 2 _Z_ yielding (1) after solving for f _[′]_ and the partial derivative of Q1. APPENDIX B: PROOF OF PROP. 2 Let _W[¯]_ _µ≜W[¯]_ _µ[(][s][)][+ ¯][W][ (]µ[bs][)]. If ξu=1, then additional requests received during the UAV relay phase_ are served directly by the BS, with delay _R¯GBL_ (r) [for a GN in position][ (][r, θ][)][. Thus, the expected] average communication delay to serve these additional requests is E[∆[(]u,i[bs][)][]= ¯][D][BS][, yielding] _W¯_ _µ= ¯Wµ[(][s][)][+ ¯][D]BS[( ¯][N]µ[−][1)][ and][ ¯][D]µ[=]_ _WN¯¯µµ_ [=] _W¯N¯µ[(]µ[s][)]_ [+] �1− _N¯[1]µ_ � _D¯_ _BS. Let µ be any policy (including_ the optimal one) that satisfies _D[¯]_ _µ≤D[¯]_ _BS: under such policy, since_ _N[¯]µ≥1, the expression above_ implies that _W[¯]_ _µ[(][s][)]_ _µ[≤][D][¯]_ _BS[. Moreover, since][ E][[][N]u[|][∆]u[(][s][)][]=∆]u[(][s][)][Λ][′][ and][ ξ]u[≤][1][, it follows that]_ _[≤][D][¯]_ _N¯µ≤1+Λ[′]W¯_ _µ[(][s][)]_ with equality if the UAV always serves requests. This implies (8). ----- APPENDIX C: PROOF OF PROP. 3 Let πwait=1−πcomm be the SMDP steady-state probability of the UAV being in the waiting state. Since the probability of remaining in the waiting state (no request is received) in one SMDP step is pww=e[−][Λ][′][∆][0] and that of moving from a communication state to a waiting state is _pcw=1, πcomm and πwait are solutions of the stationary equation πwait = πwaitpww + πcommpcw =_ _e[−][Λ][′][∆][0]πwait + πcomm. Solving it with πwait+πcomm=1 yields the expression of πcomm in Prop. 3._ REFERENCES [1] B. Keshavamurthy and N. Michelusi, “Multiscale Adaptive Scheduling and Path-Planning for Power-Constrained [UAV-Relays via SMDPs,” 2022. [Online]. Available: https://arxiv.org/abs/2209.07655](https://arxiv.org/abs/2209.07655) [2] B. Keshavamurthy, “MAESTRO-X: Multiscale Adaptive Energy-conscious Scheduling and TRajectory Optimization [(eXtended),” April 2022. [Online]. Available: https://github.com/bharathkeshavamurthy/MAESTRO-X.git](https://github.com/bharathkeshavamurthy/MAESTRO-X.git) [3] A. Fotouhi, H. Qiang et al., “Survey on UAV Cellular Communications: Practical Aspects, Standardization Advancements, Regulation, and Security Challenges,” IEEE Communications Surveys Tutorials, pp. 1–1, 2019. [4] M. Mozaffari, W. Saad et al., “A Tutorial on UAVs for Wireless Networks: Applications, Challenges, and Open Problems,” _IEEE Communications Surveys Tutorials, pp. 1–1, 2019._ [5] B. Keshavamurthy and N. Michelusi, “Learning-Based Spectrum Sensing and Access in Cognitive Radios via Approximate POMDPs,” IEEE Transactions on Cognitive Communications and Networking, vol. 8, no. 2, pp. 514–528, 2022. [6] D. Alvear, “Inside Verizon | Up to Speed | Drones, robots and the power of 5G,” July 2021. [7] S. 
Guillette, “Verizon News Archives | Robots, drones and sensors are changing the way we farm,” March 2019. [8] Y. Zeng, R. Zhang et al., “Wireless communications with unmanned aerial vehicles: opportunities and challenges,” IEEE _Communications Magazine, vol. 54, no. 5, pp. 36–42, May 2016._ [9] Q. Wu, L. Liu et al., “Fundamental Trade-offs in Communication and Trajectory Design for UAV-Enabled Wireless Network,” IEEE Wireless Communications, vol. 26, pp. 36–44, 02 2019. [10] Y. Zeng, J. Xu et al., “Energy Minimization for Wireless Communication With Rotary-Wing UAV,” IEEE Transactions _on Wireless Communications, vol. 18, no. 4, pp. 2329–2345, April 2019._ [11] M. A. Abd-Elmagid and H. S. Dhillon, “Average Peak Age-of-Information Minimization in UAV-Assisted IoT Networks,” _IEEE Transactions on Vehicular Technology, vol. 68, no. 2, pp. 2003–2008, 2019._ [12] X. Hu, K.-K. Wong et al., “UAV-Assisted Relaying and Edge Computing: Scheduling and Trajectory Optimization,” IEEE _Transactions on Wireless Communications, vol. 18, no. 10, pp. 4738–4752, 2019._ [13] J. Chen and D. Gesbert, “Optimal positioning of flying relays for wireless networks: A LOS map approach,” in 2017 IEEE _International Conference on Communications (ICC), May 2017, pp. 1–6._ [14] C. You and R. Zhang, “3D Trajectory Optimization in Rician Fading for UAV-Enabled Data Harvesting,” IEEE Transactions _on Wireless Communications, vol. 18, no. 6, pp. 3192–3207, 2019._ [15] R. K. Patra and P. Muthuchidambaranathan, “Optimisation of Spectrum and Energy Efficiency in UAV-Enabled Mobile Relaying Using Bisection and PSO Method,” in 3rd Int. Conference for Convergence in Technology (I2CT), 2018, pp. 1–7. [16] Q. Hu, Y. Cai et al., “Low-Complexity Joint Resource Allocation and Trajectory Design for UAV-Aided Relay Networks With the Segmented Ray-Tracing Channel Model,” IEEE Transactions on Wireless Communications, vol. 19, no. 9, 2020. [17] Q. Wu, Y. Zeng et al., “Joint Trajectory and Communication Design for Multi-UAV Enabled Wireless Networks,” IEEE _Transactions on Wireless Communications, vol. 17, no. 3, pp. 2109–2121, 2018._ ----- [18] M. Mozaffari, W. Saad et al., “Efficient Deployment of Multiple Unmanned Aerial Vehicles for Optimal Wireless Coverage,” _IEEE Communications Letters, vol. 20, no. 8, pp. 1647–1650, 2016._ [19] H. Bayerlein, M. Theile et al., “Multi-UAV Path Planning for Wireless Data Harvesting With Deep Reinforcement Learning,” IEEE Open Journal of the Communications Society, vol. 2, p. 1171–1187, 2021. [20] L. Wang, K. Wang et al., “Multi-Agent Deep Reinforcement Learning-Based Trajectory Planning for Multi-UAV Assisted Mobile Edge Computing,” IEEE Transactions on Cognitive Communications and Networking, vol. 7, no. 1, 2021. [21] A. M. Koushik, F. Hu et al., “Deep Q-Learning-Based Node Positioning for Throughput-Optimal Communications in Dynamic UAV Swarm Network,” IEEE Transactions on Cognitive Communications and Networking, vol. 5, no. 3, 2019. [22] Q. Zhang, M. Mozaffari et al., “Machine Learning for Predictive On-Demand Deployment of UAVs for Wireless Communications,” in 2018 IEEE Global Communications Conference (GLOBECOM), 2018, pp. 1–6. [23] A. E. A. A. Abdulla, Z. M. Fadlullah et al., “Toward Fair Maximization of Energy Efficiency in Multiple UAS-Aided Networks: A Game-Theoretic Methodology,” IEEE Transactions on Wireless Communications, vol. 14, no. 1, Jan 2015. [24] Y. Li and L. Cai, “UAV-Assisted Dynamic Coverage in a Heterogeneous Cellular System,” IEEE Network, vol. 31, no. 4, pp. 56–61, July 2017. 
[25] D. P. Bertsekas, Dynamic Programming and Optimal Control, Vol. II, 4th ed. Athena Scientific, 2007. [26] R. Cheng and Y. Jin, “A Competitive Swarm Optimizer for Large Scale Optimization,” IEEE Transactions on Cybernetics, vol. 45, no. 2, pp. 191–204, 2015. [27] J. Hu, H. Zhang et al., “Reinforcement Learning for Decentralized Trajectory Design in Cellular UAV Networks With Sense-and-Send Protocol,” IEEE Internet of Things Journal, vol. 6, no. 4, pp. 6177–6189, 2019. [28] C. Zhou, H. He et al., “Deep RL-based Trajectory Planning for AoI Minimization in UAV-assisted IoT,” in 2019 11th _International Conference on Wireless Communications and Signal Processing (WCSP), 2019, pp. 1–6._ [29] M. Clerc, Particle Swarm Optimization, ser. ISTE. London ; Newport Beach: ISTE, 2006. [30] H. Shakhatreh, A. Khreishah et al., “Efficient 3D placement of a UAV using particle swarm optimization,” in 2017 8th _International Conference on Information and Communication Systems (ICICS), 2017, pp. 258–263._ [31] Z. Yuheng, Z. Liyan et al., “3-D Deployment Optimization of UAVs Based on Particle Swarm Algorithm,” in 2019 IEEE _19th International Conference on Communication Technology (ICCT), 2019, pp. 954–957._ [32] A. Al-Hourani, S. Kandeepan et al., “Optimal LAP Altitude for Maximum Coverage,” IEEE Wireless Communications _Letters, vol. 3, no. 6, pp. 569–572, 2014._ [33] D. Tse and P. Viswanath, Fundamentals of Wireless Communication. Cambridge: Cambridge University Press, 2005. [34] R. Essaadali and A. Kouki, “A new simple Unmanned Aerial Vehicle doppler effect RF reducing technique,” in MILCOM _2016 - 2016 IEEE Military Communications Conference, 2016, pp. 1179–1183._ [35] J. D. Little and S. C. Graves, “Little’s Law,” in Building Intuition. Springer, 2008, pp. 81–100. [36] S. Boyd, L. Xiao et al., “Subgradient methods,” Lecture notes, Stanford University, Autumn Quarter, pp. 2004–2005, 2003. [37] Y. Zeng and R. Zhang, “Energy-Efficient UAV Communication With Trajectory Optimization,” IEEE Transactions on _Wireless Communications, vol. 16, no. 6, pp. 3747–3760, June 2017._ [38] P. Yang, K. Tang et al., “Turning High-Dimensional Optimization Into Computationally Expensive Optimization,” IEEE _Transactions on Evolutionary Computation, vol. 22, no. 1, pp. 143–156, 2018._ [39] A. Al-Hourani, S. Kandeepan et al., “Optimal LAP Altitude for Maximum Coverage,” IEEE Wireless Communications _Letters, vol. 3, no. 6, pp. 569–572, 2014._ [40] Y. Sun and S. Zhou, “Tight bounds of the generalized marcum q-function based on log-concavity,” in IEEE GLOBECOM _2008 - 2008 IEEE Global Telecommunications Conference, 2008, pp. 1–5._ -----
{ "disclaimer": "Notice: Paper or abstract available at https://arxiv.org/abs/2007.01228, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "GREEN", "url": "http://arxiv.org/pdf/2007.01228" }
2020
[ "JournalArticle" ]
true
2020-07-02T00:00:00
[ { "paperId": "b74155a6cfe3cc8303b6776cd5cff973f9fe3407", "title": "Multiscale Adaptive Scheduling and Path-Planning for Power-Constrained UAV-Relays via SMDPs" }, { "paperId": "b56b3f96468f3ffe3f7142f92e4c01d7ab6c64f7", "title": "Learning-Based Spectrum Sensing and Access in Cognitive Radios via Approximate POMDPs" }, { "paperId": "364ba551cdcc1f9d52ffb462c738fb2ee4152f78", "title": "Multi-UAV Path Planning for Wireless Data Harvesting With Deep Reinforcement Learning" }, { "paperId": "383a2b6362b8fe6c8a922d193383f88fdf87bb44", "title": "Multi-Agent Deep Reinforcement Learning-Based Trajectory Planning for Multi-UAV Assisted Mobile Edge Computing" }, { "paperId": "0bc6770fcb8446ef4532c6f557274b528eaaef0e", "title": "Low-Complexity Joint Resource Allocation and Trajectory Design for UAV-Aided Relay Networks With the Segmented Ray-Tracing Channel Model" }, { "paperId": "074d5b17ab7bab1e94181cd0bfcad29b4f7f7b10", "title": "Deep RL-based Trajectory Planning for AoI Minimization in UAV-assisted IoT" }, { "paperId": "1a9332b0dbcabd7a5535b59a0f3011fd318c2117", "title": "3-D Deployment Optimization of UAVs Based on Particle Swarm Algorithm" }, { "paperId": "da4dc1f8bc2987d47f532b8c7298e4098d0b4f74", "title": "Deep ${Q}$ -Learning-Based Node Positioning for Throughput-Optimal Communications in Dynamic UAV Swarm Network" }, { "paperId": "50bed0baf83b76345a346f4152dad0310127186c", "title": "3D Trajectory Optimization in Rician Fading for UAV-Enabled Data Harvesting" }, { "paperId": "20c41464a232cb13204a26dfacd6e8ff2e87c446", "title": "UAV-Assisted Relaying and Edge Computing: Scheduling and Trajectory Optimization" }, { "paperId": "013a051d2aa72e7595ec63d99f9b294196658430", "title": "Reinforcement Learning for Decentralized Trajectory Design in Cellular UAV Networks With Sense-and-Send Protocol" }, { "paperId": "925f022fbaa8fa7e955beb1c1d825fd9e376d40f", "title": "Survey on UAV Cellular Communications: Practical Aspects, Standardization Advancements, Regulation, and Security Challenges" }, { "paperId": "5fd570b617cd19b121fdf9da1367e5233d0aa29e", "title": "Fundamental Trade-offs in Communication and Trajectory Design for UAV-Enabled Wireless Network" }, { "paperId": "88e31c9264896dcd38fa08a36b870cf6f1752de4", "title": "Machine Learning for Predictive On-Demand Deployment of Uavs for Wireless Communications" }, { "paperId": "31af2e6f860e400711ca3237ff3ff97e2287823d", "title": "Average Peak Age-of-Information Minimization in UAV-Assisted IoT Networks" }, { "paperId": "38019e745cf8d7dc0274882f5d8e485acdf8f001", "title": "Energy Minimization for Wireless Communication With Rotary-Wing UAV" }, { "paperId": "5c351a8a822916e6581747fc17edff162a1a7ab0", "title": "Optimisation of Spectrum and Energy Efficiency in UAV-Enabled Mobile Relaying Using Bisection and PSO Method" }, { "paperId": "0f5a0fbd07155cbf81ae9a5e76a1bc78da10a376", "title": "A Tutorial on UAVs for Wireless Networks: Applications, Challenges, and Open Problems" }, { "paperId": "7660dc6e755e85fbb0b162491dd615b5245d08d5", "title": "UAV-Assisted Dynamic Coverage in a Heterogeneous Cellular System" }, { "paperId": "ee01b802099fbc958a2b572671eff970faf28d57", "title": "Optimal positioning of flying relays for wireless networks: A LOS map approach" }, { "paperId": "3b30b5b5cc861be94387905ad013ac9f4c427a98", "title": "Joint Trajectory and Communication Design for Multi-UAV Enabled Wireless Networks" }, { "paperId": "d349eb54d9aa0b5ba7008334e46c2d526a1f86a6", "title": "Efficient 3D placement of a UAV using particle swarm optimization" }, { "paperId": 
"a3ad8702915541a170a0f4ec7cd7203f8ad75d27", "title": "A new simple Unmanned Aerial Vehicle doppler effect RF reducing technique" }, { "paperId": "774c755a1c2bda4e312547c64fe1531c1f3ce3b4", "title": "Energy-Efficient UAV Communication With Trajectory Optimization" }, { "paperId": "adf581691aef09a0da06dae8bc87248f33ca348a", "title": "Efficient Deployment of Multiple Unmanned Aerial Vehicles for Optimal Wireless Coverage" }, { "paperId": "3df9de1483091be50d2a686da3d43f39e706db31", "title": "Turning High-Dimensional Optimization Into Computationally Expensive Optimization" }, { "paperId": "e022e57ec1877de91d9f129d31bbae4053d7983e", "title": "Wireless communications with unmanned aerial vehicles: opportunities and challenges" }, { "paperId": "9cfad16564dca92ce99ee0afa061f886d638c698", "title": "A Competitive Swarm Optimizer for Large Scale Optimization" }, { "paperId": "65753ff3e5ab97d0d6fd85c826cc0251c1eefae9", "title": "Optimal LAP Altitude for Maximum Coverage" }, { "paperId": "81a49c1a7421e07afe51fc262a8ea6a6e28307a6", "title": "Tight Bounds of the Generalized Marcum Q-Function Based on Log-Concavity" }, { "paperId": "fda8c1e5080a45613d9c74718ccc2b5ab86d5c00", "title": "Fundamentals of Wireless Communication" }, { "paperId": "b0fa6699b932bf18c404f84bddc4b9fe8ab4a864", "title": "Particle swarm optimization" }, { "paperId": null, "title": "Inside Verizon | Up to Speed | Drones, robots and the power of 5G" }, { "paperId": null, "title": "Robots, drones and sensors are changing the way we farm." }, { "paperId": "0769fdea3345e3c06459a95e39c9ea19ce70845f", "title": "Toward Fair Maximization of Energy Efficiency in Multiple UAS-Aided Networks: A Game-Theoretic Methodology" }, { "paperId": "1dad542838a53b6fed74376db5381e60f95a3b22", "title": "Little's Law" }, { "paperId": "bf184b2650c6261dbacad92d569779f4a840b23a", "title": "Building intuition insights from basic operations management models and principles" }, { "paperId": null, "title": "Dynamic Programming and Optimal Control, Vol. II, 4th ed" }, { "paperId": "25ea0939ecb7504ebb884539e3c4070e00f9b9da", "title": "Subgradient Methods" }, { "paperId": null, "title": "“ Drones , robots and the power of 5 G . ” Verizon News Center . Jul . 2021 . [ Online ] “ Robots , drones and sensors are changing the way we farm . ” Verizon News Center . Mar . 2019 . [ Online ]" } ]
30200
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Mathematics", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Mathematics", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/02b1964d705661c5a16e0d87b152d14da1372dd8
[ "Computer Science", "Mathematics" ]
0.83962
Optimal Black-Box Secret Sharing over Arbitrary Abelian Groups
02b1964d705661c5a16e0d87b152d14da1372dd8
Annual International Cryptology Conference
[ { "authorId": "145100790", "name": "R. Cramer" }, { "authorId": "2266310", "name": "S. Fehr" } ]
{ "alternate_issns": null, "alternate_names": [ "Int Cryptol Conf", "Annu Int Cryptol Conf", "CRYPTO", "International Cryptology Conference" ], "alternate_urls": null, "id": "212b6868-c374-4ba2-ad32-19fde8004623", "issn": null, "name": "Annual International Cryptology Conference", "type": "conference", "url": "http://www.iacr.org/" }
A black-box secret sharing scheme for the threshold access structure Tt,n is one which works over any finite Abelian group G. Briefly, such a scheme differs from an ordinary linear secret sharing scheme (over, say, a given finite field) in that distribution matrix and reconstruction vectors are defined over Z and are designed independently of the group G from which the secret and the shares are sampled. This means that perfect completeness and perfect privacy are guaranteed regardless of which group G is chosen. We define the black-box secret sharing problem as the problem of devising, for an arbitrary given Tt,n, a scheme with minimal expansion factor, i.e., where the length of the full vector of shares divided by the number of players n is minimal.Such schemes are relevant for instance in the context of distributed cryptosystems based on groups with secret or hard to compute group order. A recent example is secure general multi-party computation over black-box rings.In 1994 Desmedt and Frankel have proposed an elegant approach to the black-box secret sharing problem based in part on polynomial interpolation over cyclotomic number fields. For arbitrary given Tt,n with O < t < n - 1, the expansion factor of their scheme is O(n). This is the best previous general approach to the problem.Using certain low degree integral extensions of Z over which there exist pairs of sufficiently large Vandermonde matrices with co-prime determinants, we construct, for arbitrary given Tt,n with O < t < n - 1, a black-box secret sharing scheme with expansion factor O(log n), which we show is minimal.
# Optimal Black-Box Secret Sharing over Arbitrary Abelian Groups Ronald Cramer and Serge Fehr BRICS[⋆], Department of Computer Science, Aarhus University, Denmark _{cramer,fehr}@brics.dk_ **Abstract. A black-box secret sharing scheme for the threshold access** structure Tt,n is one which works over any finite Abelian group G. Briefly, such a scheme differs from an ordinary linear secret sharing scheme (over, say, a given finite field) in that distribution matrix and reconstruction vectors are defined over Z and are designed independently of the group _G from which the secret and the shares are sampled. This means that_ perfect completeness and perfect privacy are guaranteed regardless of which group G is chosen. We define the black-box secret sharing problem as the problem of devising, for an arbitrary given Tt,n, a scheme with minimal expansion factor, i.e., where the length of the full vector of shares divided by the number of players n is minimal. Such schemes are relevant for instance in the context of distributed cryptosystems based on groups with secret or hard to compute group order. A recent example is secure general multi-party computation over black-box rings. In 1994 Desmedt and Frankel have proposed an elegant approach to the black-box secret sharing problem based in part on polynomial interpolation over cyclotomic number fields. For arbitrary given Tt,n with 0 < t < n − 1, the expansion factor of their scheme is O(n). This is the best previous general approach to the problem. Using certain low degree integral extensions of Z over which there exist pairs of sufficiently large Vandermonde matrices with co-prime determinants, we construct, for arbitrary given Tt,n with 0 < t < n − 1, a black-box secret sharing scheme with expansion factor O(log n), which we show is minimal. ## 1 Introduction A black-box secret sharing scheme for the threshold access structure Tt,n is one which works over any finite Abelian group G. Briefly, such a scheme differs from an ordinary linear secret sharing scheme (over, say, a given finite field; see e.g. [5,24,6,3,2,20,19,1,16,8]) in that distribution matrix and reconstruction vectors are defined over Z and are designed independently of the group G from which the secret and the shares may be sampled. In other words, the dealer computes the shares for the n players as Z-linear combinations of the secret group element _⋆_ Basic Research in Computer Science (www.brics.dk), funded by the Danish National Research Foundation. ----- of his interest and secret randomizing group elements, and reconstruction of the secret from the shares held by a large enough set of players is by taking Zlinear combinations over those shares. Note that each player may receive one or more group elements as his share in the secret. Perfect completeness and perfect privacy are guaranteed regardless of which group G is chosen. Here, perfect completeness means that the secret is uniquely determined by the joint shares of at least t +1 players, and perfect privacy means that the joint shares of at most t players contain no Shannon information at all about the secret of interest. Note that these schemes are homomorphic in the sense that the sum of share vectors is a share vector for the sum of the corresponding secrets. We define the black-box secret sharing problem as the problem of devising, for an arbitrary given Tt,n, a scheme with minimal expansion factor, i.e., where the length of the full vector of shares divided by the number of players n is minimized[1]. 
Note the case t = n 1 is easily solved by “additive n-out-of-n _−_ sharing,” which has expansion factor 1. The cases t = 0, n have no meaning for secret sharing. For the rest of this discussion we assume 0 < t < n 1. _−_ The idea of black-box secret sharing was first considered by Desmedt and Frankel [11] in the context of distributed cryptosystems based on groups with secret order. Shamir’s polynomial based secret sharing scheme over finite fields [24] cannot immediately be adapted to the setting of black-box secret sharing. Later, Desmedt and Frankel [12] showed a black-box secret sharing scheme that elegantly circumvents problems with polynomial interpolation over the integers by passing to an integral extension ring of Z over which a sufficiently large invert_ible Vandermonde matrix exists. Their scheme is then constructed using the fact_ that (sufficiently many copies of) an arbitrary Abelian group can be viewed as a module over such an extension ring. For a given commutative ring R with 1, the largest integer l such that there exists an invertible l _l Vandermonde matrix with entries in R is called the_ _×_ _Lenstra constant l(R) of the ring R. Equivalently, l(R) is the maximal size of a_ subset E of R that is “exceptional” in that for all α, α[′] _∈_ _E, α ̸= α[′], it holds_ that α _α[′]_ is a unit of R. _−_ Given an integral extension ring R of degree m over Z, they construct a black-box secret sharing scheme with expansion factor m for a threshold access structure on at most l(R) 1 players. For any prime p, Lenstra’s constant for _−_ the ring of integers of the pth cyclotomic number field is p [2]. Given an arbitrary 1 That minimal expansion is at most polynomial in n, even when appropriately gener alizing the concept to encompass non-Abelian groups as well, is verified by combination of the technique of Benaloh-Leichter [2] with the classical result of complexity theory that all monotone threshold functions are representable by poly-size monotone Boolean formulas. See also [10]. 2 It is not hard to find an exceptional set of size p in this ring. To see that the maximal size of such a set is p, let K be a number field of degree m, and let ZK denote its ring of algebraic integers. For an arbitrary non-trivial ideal I of ZK, it is easy to see that l(ZK ) ≤|ZK _/I| (≤_ 2[m]). In the case where K is the pth cyclotomic number field, the integer prime p totally ramifies. Hence l(ZK ) ≤|ZK _/P_ _| = p, where P is_ the unique prime ideal of ZK lying above p. ----- _Tt,n and choosing R as the ring of integers of the pth cyclotomic number field,_ where p is the smallest prime greater than n, they construct a black-box secret sharing scheme for Tt,n with expansion factor between n and 2n. This is the best previous general approach to the problem. Further progress on the blackbox secret sharing problem via the approach of [12] depends on the problem of finding for each n an extension whose degree is substantially smaller than n and whose Lenstra constant is greater than n. To the best of our knowledge, this is an open problem of algebraic number theory (see also [12] and the references therein). Except for some quite special cases, namely when t is constant or when t (resp. n _t) is small compared to n [14,4] or the constant factor gain from [15],_ _−_ no substantial improvement on the general black-box secret sharing problem has been reported since. The crucial difference with our approach to the black-box secret sharing problem is that we avoid dependence on Lenstra’s constant altogether. 
Namely, first, we observe that a sufficient condition for black-box secret sharing is the existence (over an extension of Z) of a pair of sufficiently large Vandermonde matrices with co-prime determinants. And, second, we show how to construct low _degree integral extensions of Z satisfying this condition. For arbitrary given Tt,n,_ this leads to a black-box secret sharing scheme with expansion factor O(log n). Using a result of Karchmer and Wigderson [20], we show that this is minimal. There are several applications of black-box secret sharing. For instance, the result of [12] is exploited in [13] to obtain an efficient and secure solution for sharing any function out of a certain abstract class of functions, including RSA. The interest in application of the result of [12] to practical distributed RSAbased protocols seems to have decreased somewhat due to recent developments, see for instance [25] and the references therein. However, apart from the fact that optimal black-box secret sharing is perhaps interesting in its own right, we note that in [9] our black-box secret sharing scheme is applied in protocols for secure general multi-party computation over black-box rings. Also, optimal black-box secret sharing may very well be relevant to new distributed cryptographic schemes for instance based on class groups. This paper is organized as follows. In Section 2 we give a formalization of the notion of black-box secret sharing, and show a natural correspondence between such schemes and our notion of integer span programs (ISPs). This generalizes the well-known correspondence between monotone span programs over finite fields [20] and linear secret sharing schemes over finite fields. In Section 3 we show lower bounds on the size of ISPs computing threshold access structures. Our main result is presented in Section 4, where we construct an ISP with minimal size for an arbitrary given threshold access structure. This leads to an optimal black-box secret sharing scheme for an arbitrary given threshold access structure. At the end, we point out further combinatorial properties of our scheme that are useful when exhibiting efficient simulators as required in the security proofs of threshold crypto-systems such as threshold RSA. ----- ## 2 Black-Box Secret Sharing **2.1** **Definitions** **Definition 1. A monotone access structure on** 1, . . ., n _is a non-empty col-_ _{_ _}_ _lection Γ of sets A_ 1, . . ., n _such that_ _Γ and such that for all A_ _Γ_ _⊂{_ _}_ _∅̸∈_ _∈_ _and for all sets B with A_ _B_ 1, . . ., n _it holds that B_ _Γ_ _._ _⊂_ _⊂{_ _}_ _∈_ **Definition 2. Let t and n be integers with 0 < t < n. The threshold access** structure Tt,n is the collection of sets A ⊂{1, . . ., n} with |A| > t [3]. Let Γ be a monotone access structure on {1, . . ., n}. Let M ∈ Z[d,e] be an integer matrix, and let ψ : 1, . . ., d 1, . . ., n be a surjective function. _{_ _} →{_ _}_ We say that the jth row (j = 1 . . . d) of M is labelled by ψ(j) or that “ψ(j) owns the jth row.” For A ⊂{1, . . ., n}, MA denotes the restriction of M to the rows jointly owned by A. Write dA for the number of rows in MA. Similarly, for **x ∈** Z[d], xA ∈ Z[d][A] denotes the restriction of x to the coordinates jointly owned by A. For each A ∈ _Γ_, let λ(A) ∈ Z[d][A] be an integer (column-) vector. We call this the reconstruction vector for A. Collect all these vectors in a set . _R_ **Definition 3. Let Γ be a monotone access structure on** 1, . . ., n _, and let_ = _{_ _}_ _B_ (M, ψ, ) be as defined above. 
_is called an integer Γ_ -scheme. Its expansion _R_ _B_ rate is defined as d/n, where d is the number of rows of M _._ Let G be a finite Abelian group. We use additive notation for its group operation, and use 0G to denote its neutral element. The group G is of course a Z-module (see e.g. [23]), by defining the map Z × G → _G, (µ, g) �→_ _µ · g, where_ 0 · g = 0G, µ · g = g + . . . + g (µ times) for µ > 0 and µ · g = −((−µ) · g) for _µ < 0_ [4]. We also write µg or gµ instead of µ _g. Note that it is well-defined how_ _·_ an integer matrix acts on a vector of group elements. **Definition 4. Let Γ be a monotone access structure on** 1, . . ., n _and let_ = _{_ _}_ _B_ (M, ψ, ) be an integer Γ _-scheme. Then_ _is a black-box secret sharing scheme_ _R_ _B_ _for Γ if the following holds. Let G be an arbitrary finite Abelian group G, and let_ _A_ 1, . . ., n _be an arbitrary non-empty set. For arbitrarily distributed s_ _G,_ _⊂{_ _}_ _∈_ _let g = (g1, . . ., ge)[T]_ _∈_ _G[e]_ _be drawn uniformly at random, subject to g1 = s._ _Define s = M_ **g. Then:** **– (Completeness) If A ∈** _Γ_ _, then s[T]A_ _[·][ λ][(][A][) =][ s][ with probability 1, where]_ **_λ(A)_** _is the reconstruction vector for A._ _∈R_ **– (Privacy) If A ̸∈** _Γ_ _, then sA contains no Shannon information on s._ 3 Note that some authors define Tt,n as consisting of all sets of size at least t. Our definition is consistent with a convention in the multi-party computation literature. 4 If the group operation in G is efficient, multiplication by an integer can also be efficiently implemented using standard “double-and-add.” ----- Note that these schemes[5] are homomorphic in the sense that the sum s + s[′] of two share vectors s and s[′], is a share vector for the sum s + s[′] of their corresponding secrets s and s[′]. **2.2** **Monotone Span Programs over Rings** In this section we provide quite natural necessary and sufficient conditions under which an integer Γ -scheme is a black-box secret sharing scheme for Γ . To this end, we introduce the notion of monotone span programs over rings. This is a certain variation of monotone span programs over finite fields, introduced by Karchmer and Wigderson [20]. These are well-known to have a natural one-toone correspondence with linear secret sharing schemes over finite fields (see e.g. [19,1]). Monotone span programs over Z (ISPs) will turn out to have a similar correspondence with black-box secret sharing schemes. We also show an efficient conversion of a monotone span program over an integral extension ring of Z to an ISP. As an aside, monotone span programs over rings are the basis for multi-party computation over black-box rings, as studied in [9]. In particular, the techniques of [8] for secure multiplication and VSS apply to this flavor of monotone span program as well. Throughout this paper, R denotes a (not necessarily finite) commutative ring with 1. Let Γ be a monotone access structure on 1, . . ., n, and let M _R[d,e]_ _{_ _}_ _∈_ be a matrix whose d rows are labelled by a surjective function ψ : 1, . . ., d _{_ _} →_ 1, . . ., n . _{_ _}_ **Definition 5. ε = (1, 0, . . ., 0)[T]** _R[e]_ _is called the target vector. Furthermore,_ _∈_ _M = (R, M, ψ, ε) is called a monotone span program (over the ring R). If R = Z,_ _it is called an integer span program, or ISP, for short. 
**2.2 Monotone Span Programs over Rings**

In this section we provide quite natural necessary and sufficient conditions under which an integer $\Gamma$-scheme is a black-box secret sharing scheme for $\Gamma$. To this end, we introduce the notion of monotone span programs over rings. This is a certain variation of monotone span programs over finite fields, introduced by Karchmer and Wigderson [20]. These are well known to have a natural one-to-one correspondence with linear secret sharing schemes over finite fields (see e.g. [19,1]). Monotone span programs over $\mathbb{Z}$ (ISPs) will turn out to have a similar correspondence with black-box secret sharing schemes. We also show an efficient conversion of a monotone span program over an integral extension ring of $\mathbb{Z}$ to an ISP. As an aside, monotone span programs over rings are the basis for multi-party computation over black-box rings, as studied in [9]. In particular, the techniques of [8] for secure multiplication and VSS apply to this flavor of monotone span program as well.

Throughout this paper, $R$ denotes a (not necessarily finite) commutative ring with $1$. Let $\Gamma$ be a monotone access structure on $\{1,\dots,n\}$, and let $M \in R^{d \times e}$ be a matrix whose $d$ rows are labelled by a surjective function $\psi: \{1,\dots,d\} \to \{1,\dots,n\}$.

**Definition 5.** $\varepsilon = (1, 0, \dots, 0)^T \in R^e$ is called the target vector. Furthermore, $\mathcal{M} = (R, M, \psi, \varepsilon)$ is called a monotone span program (over the ring $R$). If $R = \mathbb{Z}$, it is called an integer span program, or ISP for short. We define $\mathrm{size}(\mathcal{M}) = d$, where $d$ is the number of rows of $M$.

For $N \in R^{a \times b}$, $\mathrm{im}\,N$ denotes its column space, i.e., the space of all vectors $N\mathbf{x} \in R^a$, where $\mathbf{x}$ ranges over $R^b$, and $\ker N$ denotes its null-space, i.e., the space of all vectors $\mathbf{x} \in R^b$ with $N\mathbf{x} = \mathbf{0} \in R^a$.

**Definition 6.** As above, let $\Gamma$ be a monotone access structure and let $\mathcal{M} = (R, M, \psi, \varepsilon)$ be a monotone span program over $R$. Then $\mathcal{M}$ is a monotone span program for $\Gamma$ if for all $A \subset \{1,\dots,n\}$ the following holds.

- If $A \in \Gamma$, then $\varepsilon \in \mathrm{im}\,M_A^T$.
- If $A \notin \Gamma$, then there exists $\boldsymbol{\kappa} = (\kappa_1,\dots,\kappa_e)^T \in \ker M_A$ with $\kappa_1 = 1$.

We also say that $\mathcal{M}$ computes $\Gamma$.

If $R$ is a field, our definition is equivalent to the computational model of monotone span programs over fields [20]. Indeed, this model is characterized by the condition that $A \in \Gamma$ if and only if $\varepsilon \in \mathrm{im}\,M_A^T$. The equivalence follows from the remark below.

*Remark 1.* By basic linear algebra, if $R$ is a field, then $\varepsilon \notin \mathrm{im}\,M_A^T$ implies that there exists $\boldsymbol{\kappa} \in \ker M_A$ with $\kappa_1 = 1$. If $R$ is not a field this does not necessarily hold. (Footnote 6: Consider for example the integer matrix $M = (2\;\; 0)$.) The implication in the other direction trivially holds regardless of $R$.

Using (generally inefficient) representations of monotone access structures as monotone Boolean formulas and using induction in a similar style as in e.g. [2], it is straightforward to verify that for all $\Gamma$ and for all $R$, there is a monotone span program over $R$ that computes $\Gamma$.

**Definition 7.** For any $\Gamma$ and for any $R$, $\mathrm{msp}_R(\Gamma)$ denotes the minimal size of a monotone span program over $R$ computing $\Gamma$. If $R = \mathbb{Z}$, we write $\mathrm{isp}(\Gamma)$.

Define a non-degenerate monotone span program as one for which the rows of $M$ span the target vector. As opposed to the case of fields, a non-degenerate monotone span program over a ring need not compute any monotone access structure. This is of no concern here, though. The following proposition characterizes black-box secret sharing schemes in terms of ISPs.

**Proposition 1.** Let $\Gamma$ be a monotone access structure on $\{1,\dots,n\}$, and let $\mathcal{B} = (M, \psi, \mathcal{R})$ be an integer $\Gamma$-scheme. Then $\mathcal{B}$ is a black-box secret sharing scheme for $\Gamma$ if and only if $\mathcal{M} = (\mathbb{Z}, M, \psi, \varepsilon)$ is an ISP for $\Gamma$ and, for all $A \in \Gamma$, its reconstruction vector $\boldsymbol{\lambda}(A) \in \mathcal{R}$ satisfies $M_A^T \boldsymbol{\lambda}(A) = \varepsilon$.

*Proof.* The argument that the stated ISP is sufficient for black-box secret sharing is quite similar to the well-known case of linear secret sharing over finite fields. The other direction of the implication follows in essence from Lemma 1 below. We include full details for convenience.

Consider the ISP from the statement of the proposition, together with the assumption on the reconstruction vectors. Consider an arbitrary set $A \subset \{1,\dots,n\}$ and an arbitrary finite Abelian group $G$. Define $\mathbf{s} = M\mathbf{g}$ for arbitrary $\mathbf{g} = (s, g_2, \dots, g_e)^T \in G^e$. Suppose $A \in \Gamma$, and let $\boldsymbol{\lambda}(A) \in \mathcal{R}$ be its reconstruction vector. It follows that $\mathbf{s}_A^T \boldsymbol{\lambda}(A) = (M_A \mathbf{g})^T \boldsymbol{\lambda}(A) = \mathbf{g}^T (M_A^T \boldsymbol{\lambda}(A)) = \mathbf{g}^T \varepsilon = s$. Thus the completeness condition from Definition 4 is satisfied. If $A \notin \Gamma$, then there exists $\boldsymbol{\kappa} \in \mathbb{Z}^e$ with $M_A \boldsymbol{\kappa} = \mathbf{0} \in \mathbb{Z}^{d_A}$ and $\kappa_1 = 1$, by Definition 6.
For arbitrary $s' \in G$, define $\mathbf{s}' = M(\mathbf{g} + (s' - s)\boldsymbol{\kappa}) \in G^d$. The secret defined by $\mathbf{s}'$ equals $s'$, while on the other hand $\mathbf{s}'_A = \mathbf{s}_A$. This implies perfect privacy: the assignment $\mathbf{g}' = \mathbf{g} + (s' - s)\boldsymbol{\kappa}$ provides a bijection between the set of possible vectors of "random coins" consistent with $\mathbf{s}_A$ and $s$, and the set of those consistent with $\mathbf{s}_A$ and $s'$. Therefore, the privacy condition from Definition 4 is also satisfied.

In the other direction of the proposition, we start with a black-box secret sharing scheme for $\Gamma$ according to Definition 4. Consider an arbitrary set $A \subset \{1,\dots,n\}$. Suppose $A \in \Gamma$, and let $\boldsymbol{\lambda}(A) \in \mathcal{R}$ be its reconstruction vector. For an arbitrary prime $p$, set $G = \mathbb{Z}_p$. By the completeness condition from Definition 4, it follows that $(1, 0, \dots, 0)^T \equiv (M_A I_e)^T \boldsymbol{\lambda}(A) \equiv M_A^T \boldsymbol{\lambda}(A) \bmod p$, where $I_e \in \mathbb{Z}_p^{e \times e}$ is the identity matrix. This holds for all primes $p$. Hence, $M_A^T \boldsymbol{\lambda}(A) = (1, 0, \dots, 0)^T = \varepsilon$. Therefore, the condition on the sets $A \in \Gamma$ in Definition 6 and the condition on the reconstruction vectors from the statement of the proposition are satisfied.

To conclude the proof we show that the privacy condition from Definition 4 implies the condition on the sets $A \notin \Gamma$ from Definition 6. The following formulation is equivalent. Let $\mathbf{y} \in \mathbb{Z}^{d_A}$ denote the left-most column of $M_A$, and let $N_A \in \mathbb{Z}^{d_A \times (e-1)}$ denote the remaining $e - 1$ columns. Then it is to be shown that the linear system of equations $N_A \mathbf{x} = \mathbf{y}$ is solvable over $\mathbb{Z}$. By Lemma 1 below, it is sufficient to show that this holds modulo $m$, for all $m \in \mathbb{Z}$, $m \neq 0$. With notation as in Definition 4 and considering $G = \mathbb{Z}_m$, it follows from the privacy condition that there exists $\mathbf{g}' \in \mathbb{Z}_m^e$ such that $g_1' \equiv s - 1$ and $\mathbf{s}_A \equiv M_A \mathbf{g}'$. Setting $\boldsymbol{\kappa} \equiv \mathbf{g} - \mathbf{g}' \in \mathbb{Z}_m^e$, we have $M_A \boldsymbol{\kappa} \equiv \mathbf{0}$ with $\kappa_1 \equiv 1$. In other words, $N_A \mathbf{x} = \mathbf{y}$ is solvable over $\mathbb{Z}_m$ for all integers $m \neq 0$. $\square$

We note that [21] also gives a characterization. Although there are some similarities in the technical analysis, the conditions stated there are still in terms of the black-box secret sharing scheme, rather than by providing simple algebraic conditions on the matrix $M$ as we do. Therefore, we feel that our approach based on integer span programs is perhaps more useful and insightful, especially since monotone span programs over finite fields have long been known to be equivalent to linear secret sharing schemes over finite fields.

**Lemma 1.** Let $N \in \mathbb{Z}^{a \times b}$ and $\mathbf{y} \in \mathbb{Z}^a$. Then the linear system of equations $N\mathbf{x} = \mathbf{y}$ is solvable over $\mathbb{Z}$ if and only if it is solvable over $\mathbb{Z}_m$ for all integers $m \neq 0$.

*Proof.* The forward direction of the proposition is trivial. In the other direction, consider the $\mathbb{Z}$-module $H$ generated by the columns of $N$. By basic theory of $\mathbb{Z}$-modules (see e.g. [23]), there exists a $\mathbb{Z}$-basis $B = (\mathbf{b}_1, \dots, \mathbf{b}_a)$ of $\mathbb{Z}^a$, and non-zero integers $a_1, \dots, a_l$ such that $B_H = (a_1\mathbf{b}_1, \dots, a_l\mathbf{b}_l)$ is a $\mathbb{Z}$-basis of $H$. Let $L$ denote the $\mathbb{Z}$-module with basis $B_L = (\mathbf{b}_1, \dots, \mathbf{b}_l)$. Note that $H \subset L$.

Let $p$ be an arbitrary prime, and let $\overline{(\cdot)}$ denote reduction modulo $p$. Since the determinant of $B$ is $\pm 1$, $\overline{B}$ (resp. $\overline{B_L}$) provides a basis for the vector space $\mathbb{F}_p^a$ (resp. the vector space $\overline{L}$). Note that $\overline{B_L} \subset \overline{B}$. It follows from the assumptions that $\overline{\mathbf{y}} \in \overline{H} \subset \overline{L}$.
Let $(y_1, \dots, y_a) \in \mathbb{Z}^a$ denote the coordinates of $\mathbf{y}$ with respect to $B$. Since the latter observation holds for all primes $p$, it follows that $y_{l+1} = \dots = y_a = 0$. Hence, $\mathbf{y} \in L$. Now set $\hat{m} = \prod_{i=1}^{l} a_i$. By the assumptions, there exists $\mathbf{c}_{\hat{m}} \in \mathbb{Z}^a$ such that $\mathbf{y} + \hat{m} \cdot \mathbf{c}_{\hat{m}} \in H$. Therefore, $\hat{m} \cdot \mathbf{c}_{\hat{m}} \in L$, and by the definition of $L$, $\mathbf{c}_{\hat{m}} \in L$. By the choice of $\hat{m}$, it follows that $\hat{m} \cdot \mathbf{c}_{\hat{m}} \in H$. We conclude that $\mathbf{y} \in H$, as desired. $\square$

*Remark 2.* Let $\mathcal{M} = (R, M, \psi, \varepsilon)$ compute $\Gamma$. If $R$ is a field or a principal ideal domain (such as $\mathbb{Z}$), then we may assume without loss of generality that $e \leq d$, i.e., there are at most as many columns in $M$ as there are rows. This is easily shown using elementary linear algebra, and using the basic properties of modules over principal ideal domains (see e.g. [23] and the proof of Lemma 1). Briefly, since $\mathcal{M}$ is non-degenerate, the last statement in Remark 1 implies that the space generated by the 2nd up to the $e$th column of $M$ does not contain even a non-zero multiple of the first column. Without changing the access structure that is computed, we can always replace the 2nd up to the $e$th column of $M$ by any set of vectors that generates the same space. If $R$ is a field or a principal ideal domain, this space has a basis of cardinality at most $d - 1$.

*Remark 3.* We may now identify a black-box secret sharing scheme for $\Gamma$ with an ISP $\mathcal{M} = (\mathbb{Z}, M, \psi, \varepsilon)$ for $\Gamma$. A reconstruction vector for $A \in \Gamma$ is simply any vector $\boldsymbol{\lambda}(A) \in \mathbb{Z}^{d_A}$ such that $M_A^T \boldsymbol{\lambda}(A) = \varepsilon$. Note that the expansion rate of the corresponding black-box secret sharing scheme is equal to $\mathrm{size}(\mathcal{M})/n$. By Remark 2 it uses at most $\mathrm{size}(\mathcal{M})$ random group elements.

We now state some lemmas that are useful in the sequel.

**Definition 8.** The dual $\Gamma^*$ of a monotone access structure $\Gamma$ on $\{1,\dots,n\}$ is the collection of sets $A \subset \{1,\dots,n\}$ such that $A^c \notin \Gamma$.

Note that $\Gamma^*$ is a monotone access structure on $\{1,\dots,n\}$, that $(\Gamma^*)^* = \Gamma$, and that $(T_{t,n})^* = T_{n-t-1,n}$. The lemma below generalizes a similar property shown in [20] for the case of fields.

**Lemma 2.** $\mathrm{msp}_R(\Gamma) = \mathrm{msp}_R(\Gamma^*)$, for all $R$ and $\Gamma$.

*Proof.* Let $\mathcal{M} = (R, M, \psi, \varepsilon)$ be a monotone span program for $\Gamma$. Select an arbitrary generating set of vectors $\mathbf{b}_1, \dots, \mathbf{b}_l$ for $\ker M^T$, and choose $\boldsymbol{\lambda}$ with $M^T \boldsymbol{\lambda} = \varepsilon$. Let $M^*$ be the matrix defined by the $l + 1$ columns $(\boldsymbol{\lambda}, \mathbf{b}_1, \mathbf{b}_2, \dots, \mathbf{b}_l)$, and use $\psi$ to label $M^*$ as well. Define $\mathcal{M}^* = (R, M^*, \psi, \varepsilon^*)$, where $\varepsilon^* = (1, 0, \dots, 0)^T \in R^{l+1}$. Note that $\mathrm{size}(\mathcal{M}^*) = \mathrm{size}(\mathcal{M})$. We claim that $\mathcal{M}^*$ computes $\Gamma^*$. This is easy to verify.

If $A^c \notin \Gamma$, then by Definition 6 there exists $\boldsymbol{\kappa} \in R^e$ such that $M_{A^c}\boldsymbol{\kappa} = \mathbf{0}$ and $\kappa_1 = 1$. Define $\boldsymbol{\lambda}^* = M_A \boldsymbol{\kappa}$. Then $(M^*)_A^T \boldsymbol{\lambda}^* = ((M^*)^T \cdot M)\boldsymbol{\kappa} = \varepsilon^*$. On the other hand, if $A^c \in \Gamma$, then there exists $\hat{\boldsymbol{\lambda}} \in R^d$ such that $M^T \hat{\boldsymbol{\lambda}} = \varepsilon$ and $\hat{\boldsymbol{\lambda}}_A = \mathbf{0}$. By definition of $M^*$, there exists $\boldsymbol{\kappa} \in R^{l+1}$ such that $M^* \boldsymbol{\kappa} = \hat{\boldsymbol{\lambda}}$ and $\kappa_1 = 1$. Hence, $M_A^* \boldsymbol{\kappa} = \hat{\boldsymbol{\lambda}}_A = \mathbf{0}$ and $\kappa_1 = 1$. This concludes the proof. $\square$

The lemma below holds in a more general setting, but we tailor it to ours.
**Lemma 3.** Let $f(X) \in \mathbb{Z}[X]$ be a monic, irreducible polynomial. Write $m = \deg(f)$. Consider the ring $R = \mathbb{Z}[X]/(f(X))$. Suppose $\mathcal{M} = (R, M, \psi, \varepsilon)$ is a monotone span program over $R$ for a monotone access structure $\Gamma$. Then there exists an ISP $\hat{\mathcal{M}} = (\mathbb{Z}, \hat{M}, \hat{\psi}, \hat{\varepsilon})$ for $\Gamma$ with $\mathrm{size}(\hat{\mathcal{M}}) = m \cdot \mathrm{size}(\mathcal{M})$.

*Proof.* The proof is based on a standard algebraic technique for representing a linear map defined over an extension ring in terms of a linear map defined over the ground ring. This technique is also used in [20] for monotone span programs over extension fields. Since our definition of monotone span programs over rings differs slightly from the definitions in [20], we explain it in detail.

Note that $R$ is a commutative ring with $1$ and that it has no zero divisors, but that it is not a field. Fix $w \in R$ such that $f(w) = 0$ (such as $w = \overline{X}$, the class of $X$ modulo $f(X)$). Then for each $x \in R$, there exists a unique coordinate vector $\vec{x} = (x_0, \dots, x_{m-1})^T \in \mathbb{Z}^m$ such that $x = x_0 \cdot 1 + x_1 \cdot w + \dots + x_{m-1} \cdot w^{m-1}$. In other words, $W = \{1, w, \dots, w^{m-1}\}$ is a basis for $R$ when viewed as a $\mathbb{Z}$-module. For each $x \in R$ there exists a matrix in $\mathbb{Z}^{m \times m}$, denoted $[x]$, such that, for all $y \in R$, $[x]\vec{y} = \overrightarrow{xy}$ (the coordinate vector of $xy \in R$). The columns of $[x]$ are simply the coordinate vectors of $x, x \cdot w, \dots, x \cdot w^{m-1}$. If $x \in \mathbb{Z}$, then $[x]$ is a diagonal matrix with $x$'s on its main diagonal. Furthermore, for all $x, y \in R$, we have the identities $[x + y] = [x] + [y]$ and $[xy] = [x][y]$.

Consider the monotone span program $\mathcal{M} = (R, M, \psi, \varepsilon)$ from the statement of the lemma. As before, write $d$ (resp. $e$) for the number of rows (resp. columns) of $M$. We define the ISP $\hat{\mathcal{M}} = (\mathbb{Z}, \hat{M}, \hat{\psi}, \hat{\varepsilon})$ as follows. Construct $\hat{M} \in \mathbb{Z}^{md \times me}$ from $M$ by replacing each entry $x \in R$ of $M$ by the matrix $[x]$. The labeling $\psi$ is extended to $\hat{\psi}$ in the obvious way, i.e., if a player owns a certain row in $M$, then that same player owns the $m$ rows that it is substituted with in $\hat{M}$. The target vector $\hat{\varepsilon}$ is defined by $\hat{\varepsilon} = (1, 0, \dots, 0)^T \in \mathbb{Z}^{me}$.

We verify that $\hat{\mathcal{M}}$ is an ISP for $\Gamma$. First, consider a set $A \in \Gamma$. By definition, there exists a vector $\boldsymbol{\lambda} = (\lambda_1, \dots, \lambda_{d_A})^T \in R^{d_A}$ such that $M_A^T \boldsymbol{\lambda} = \varepsilon$. Using the identities stated above and carrying out matrix multiplication "block-wise," it follows that $\hat{M}_A^T ([\lambda_1], \dots, [\lambda_{d_A}])^T = ([1], [0], \dots, [0])^T$. Define $\hat{\boldsymbol{\lambda}} \in \mathbb{Z}^{md_A}$ as the first column of the matrix $([\lambda_1], \dots, [\lambda_{d_A}])^T$. Then $\hat{M}_A^T \hat{\boldsymbol{\lambda}} = \hat{\varepsilon}$. Now consider a set $A \notin \Gamma$. By definition, there exists $\boldsymbol{\kappa} = (\kappa_1, \kappa_2, \dots, \kappa_e)^T \in R^e$ such that $\kappa_1 = 1$ and $M_A \boldsymbol{\kappa} = \mathbf{0} \in R^{d_A}$. Using similar reasoning as above, it follows that $\hat{M}_A ([\kappa_1]^T, \dots, [\kappa_e]^T)^T = ([0]^T, \dots, [0]^T)^T$. Define $\hat{\boldsymbol{\kappa}} \in \mathbb{Z}^{me}$ as the first column of the matrix derived from $\boldsymbol{\kappa}$ in the above equation. Then the first $m$ entries of $\hat{\boldsymbol{\kappa}}$ are $1, 0, \dots, 0$ (since $\kappa_1 = 1$) and $\hat{M}_A \hat{\boldsymbol{\kappa}} = \mathbf{0} \in \mathbb{Z}^{md_A}$. This proves the lemma.

As an aside, it follows directly from the analysis above that we may delete the 2nd up to the $m$th leftmost columns of $\hat{M}$ and the corresponding coordinates of $\hat{\varepsilon}$ without changing the access structure computed. Hence, $1 + m(e - 1)$ columns suffice, rather than $me$. $\square$
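The multiplication matrices $[x]$ and the block expansion of $M$ are easy to make concrete. The following sketch (ours, for illustration; the function names are not from the paper) represents elements of $R = \mathbb{Z}[X]/(f(X))$ as integer coefficient vectors, builds $[x]$ column by column as in the proof, and expands a matrix over $R$ into $\hat{M}$ over $\mathbb{Z}$:

```python
import numpy as np

def poly_mulmod(a, b, f):
    """Product of a and b in R = Z[X]/(f(X)); coefficient lists are
    lowest-degree first, and f is the full list of a monic polynomial."""
    m = len(f) - 1
    prod = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            prod[i + j] += ai * bj
    # reduce using X^m = -(f_0 + f_1 X + ... + f_{m-1} X^{m-1})
    for k in range(len(prod) - 1, m - 1, -1):
        c, prod[k] = prod[k], 0
        for j in range(m):
            prod[k - m + j] -= c * f[j]
    return (prod + [0] * m)[:m]

def mul_matrix(x, f):
    """[x]: column j holds the coordinates of x * w^j (proof of Lemma 3)."""
    m = len(f) - 1
    cols = [poly_mulmod(x, [0] * j + [1], f) for j in range(m)]
    return [[cols[j][i] for j in range(m)] for i in range(m)]

def expand(M, f):
    """M-hat over Z: replace each entry x of M (an element of R) by [x]."""
    m = len(f) - 1
    B = [[mul_matrix(x, f) for x in row] for row in M]
    return [[B[r][c][i][j] for c in range(len(M[0])) for j in range(m)]
            for r in range(len(M)) for i in range(m)]

# R = Z[X]/(X^2 - X - 1); check the identity [x][y] = [xy] for x = w, y = 1 + w
f = [-1, -1, 1]
x, y = [0, 1], [1, 1]
lhs = np.array(mul_matrix(x, f)) @ np.array(mul_matrix(y, f))
assert (lhs == np.array(mul_matrix(poly_mulmod(x, y, f), f))).all()
```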
## 3 Lower Bounds for the Threshold Case

**Proposition 2.** For all integers $t, n$ with $0 < t < n - 1$, $\mathrm{isp}(T_{t,n}) = \Omega(n \cdot \log n)$. Hence, the expansion factor of a black-box secret sharing scheme for $T_{t,n}$ with $0 < t < n - 1$ is $\Omega(\log n)$. (Footnote 7: Note that $\mathrm{isp}(T_{n-1,n}) = n$: the case $t = n - 1$ is solved by simple additive "$n$-out-of-$n$" secret sharing.)

Proposition 2 follows quite directly from the bound shown in Theorem 1 for binary monotone span programs, as proved in [20]. Before we give the details of the proof of Proposition 2, we include a proof of their bound for convenience. Note that we have made the constants for their asymptotic bound explicit.

Throughout this section, $K$ denotes a field. Let $\mathcal{M} = (K, M, \psi, \varepsilon)$ be a non-degenerate monotone span program. The access structure of $\mathcal{M}$, denoted $\Gamma(\mathcal{M})$, is the collection of sets $A$ such that $\varepsilon \in \mathrm{im}\,M_A^T$. Note that by Remark 1 this is consistent with our Definition 6. We write $\mathrm{msp}_2(\Gamma)$ instead of $\mathrm{msp}_{\mathbb{F}_2}(\Gamma)$.

**Proposition 3 ([20]).** $\mathrm{msp}_2(T_{1,n}) \geq n \cdot \log n$.

*Proof.* Consider a monotone span program $\mathcal{M} = (\mathbb{F}_2, M, \psi, \varepsilon)$ such that $\Gamma(\mathcal{M}) = T_{1,n}$. Define $e$ as the number of columns of $M$, $d$ as its number of rows, and $d_i$ as the number of rows of $M_i$ for $i = 1,\dots,n$, where we write $M_i$ instead of $M_{\{i\}}$ and $d_i$ instead of $d_{\{i\}}$. Without loss of generality, assume that the rows of each $M_i$ are linearly independent over $\mathbb{F}_2$. Let $H_1$ collect the vectors in $\mathbb{F}_2^e$ with first coordinate equal to $1$. Since $\{i\} \notin T_{1,n}$, Remark 1 implies that $\ker M_i \cap H_1 \neq \emptyset$. By the assumption on $M_i$, $|\ker M_i \cap H_1| = 2^{e-1-d_i}$ for $i = 1,\dots,n$. On the other hand, $\{i, j\} \in T_{1,n}$. Hence, by Remark 1, we have $\ker M_i \cap \ker M_j \cap H_1 = \emptyset$ for all $i, j$ with $1 \leq i < j \leq n$. By counting and normalizing, $2^{-d_1} + \dots + 2^{-d_n} \leq 1$. By the Log Sum Inequality (see e.g. [7]), $d = d_1 + \dots + d_n \geq n \log n$. $\square$

**Theorem 1 ([20]).** $n \cdot (\lfloor \log n \rfloor + 1) \geq \mathrm{msp}_2(T_{t,n}) \geq n \cdot \log \frac{n+3}{2}$, for all $t, n$ with $0 < t < n - 1$.

*Proof.* The upper bound, which is not needed for our purposes, follows by considering an appropriate Vandermonde matrix over the field $\mathbb{F}_{2^u}$, where $u = \lfloor \log n \rfloor + 1$. This is turned into a binary monotone span program for $T_{t,n}$ using a similar conversion technique as in Lemma 3.

For the lower bound, note that we may assume that $t \geq (n-1)/2$, since $\mathrm{msp}_2(T_{t,n}) = \mathrm{msp}_2(T_{n-t-1,n})$ by Lemma 2. We have the following estimates:

$$\mathrm{msp}_2(T_{t,n}) \;\geq\; \frac{n}{t+2} \cdot \mathrm{msp}_2(T_{t,t+2}) \;=\; \frac{n}{t+2} \cdot \mathrm{msp}_2(T_{1,t+2}) \;\geq\; \frac{n}{t+2} \cdot (t+2) \cdot \log(t+2) \;\geq\; n \cdot \log\frac{n+3}{2}.$$

The first inequality is argued as follows. Consider an arbitrary monotone span program $\mathcal{M} = (\mathbb{F}_2, M, \psi, \varepsilon)$ for $T_{t,n}$. Assume without loss of generality that the number of rows in $M_i$ is at most the number of rows in $M_{i+1}$, $i = 1, \dots, n-1$. The first $t+2$ blocks $M_1, \dots, M_{t+2}$ clearly form a monotone span program for $T_{t,t+2}$. Hence, the total number of rows in these blocks is at least $\mathrm{msp}_2(T_{t,t+2})$. Each other block $M_j$ with $j > t+2$ has at least as many rows as any of the first $t+2$ blocks. Therefore, $M_j$ has at least $\mathrm{msp}_2(T_{t,t+2})/(t+2)$ rows. Summing up over all $i$ according to the observations above gives the first inequality.

The equality is implied by Lemma 2, the second-to-last inequality follows from Proposition 3, and the last one from $t \geq (n-1)/2$. $\square$

For the proof of Proposition 2, let an ISP for $T_{t,n}$ be given, and consider the ISP matrix, but with all entries reduced modulo 2. By our ISP definition and by arguing the cases $A \notin T_{t,n}$ using Remark 1, it follows that a binary monotone span program for $T_{t,n}$ is obtained in this way. The argument is concluded by applying Theorem 1. (Footnote 8: See [21,22] for lower bounds on the randomness required in black-box secret sharing schemes.) The statement about black-box secret sharing follows from Proposition 1.
Note that our lower bound on black-box secret sharing can also be appreciated without reference to Proposition 1, by essentially the same argument as above. Namely, setting $G = \mathbb{Z}_2$ in Definition 4, we clearly obtain a (binary) linear secret sharing scheme. This is well known to be equivalent to a binary monotone span program, as mentioned before. Hence, we can directly apply the bound from Theorem 1.

## 4 Optimal Black-Box Threshold Secret Sharing

**Theorem 2.** For all integers $t, n$ with $0 < t < n - 1$, $\mathrm{isp}(T_{t,n}) = \Theta(n \cdot \log n)$. Hence, there exists a black-box secret sharing scheme for $T_{t,n}$ with expansion factor $O(\log n)$, which is minimal.

*Proof.* By Proposition 1 it is sufficient to focus on the claim about the ISPs. The lower bound follows from Proposition 2. For the upper bound, we consider rings of the form $R = \mathbb{Z}[X]/(f(X))$, where $f(X) \in \mathbb{Z}[X]$ is a monic, irreducible polynomial. Write $m = \deg(f)$, the degree of $R$ over $\mathbb{Z}$. On account of Lemma 3, it is sufficient to exhibit a ring $R$ together with a monotone span program $\mathcal{M}$ over $R$ for $T_{t,n}$ such that $m = O(\log n)$ and $\mathrm{size}(\mathcal{M}) = O(n)$.

The proof is organized as follows. We first identify a certain technical property of a ring $R$ that facilitates the construction of a monotone span program over $R$ for $T_{t,n}$ with size $O(n)$. We finalize the proof by constructing a ring $R$ that enjoys this technical property and that has degree $O(\log n)$ over $\mathbb{Z}$.

For $x_1, \dots, x_n \in R$, define

$$\Delta(x_1, \dots, x_n) \;=\; \prod_{i=1}^{n} x_i \cdot \prod_{1 \leq j < i \leq n} (x_i - x_j).$$

Assume, for the moment, that there exist $\alpha_1, \dots, \alpha_n \in R$ and $r_0, r_1 \in R$ such that

$$r_0 \cdot \Delta(1, \dots, n)^2 + r_1 \cdot \Delta(\alpha_1, \dots, \alpha_n)^2 = 1.$$

This assumption implies the existence of a monotone span program over $R$ for $T_{t,n}$ with size $2n$, as we now show. Define $\Delta_0 = \Delta(1, \dots, n) \in \mathbb{Z}$ and $\Delta_1 = \Delta(\alpha_1, \dots, \alpha_n) \in R$.
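Before continuing the proof, a quick concrete aside (our sketch, not from the paper): $\Delta_0 = \Delta(1,\dots,n)$ is a plain integer whose prime factors are exactly the primes up to $n$, which is the property exploited below.

```python
from math import prod
from sympy import factorint

def Delta(xs):
    """Delta(x_1, ..., x_n) = (prod_i x_i) * prod_{j<i} (x_i - x_j)."""
    n = len(xs)
    return prod(xs) * prod(xs[i] - xs[j] for i in range(n) for j in range(i))

d0 = Delta(list(range(1, 7)))   # Delta_0 for n = 6
print(d0, factorint(d0))        # 24883200 {2: 12, 3: 5, 5: 2} -- only primes <= 6
```

Over $R$, the same formula applied to the $\alpha_i$ yields $\Delta_1$, an element of $R$ rather than an integer; the proof now resumes.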
The_ _∈_ sets A ̸∈ _Tt,n clearly satisfy Definition 6, and this time the rows owned by sets_ _A ∈_ _Tt,n span the target vector: they span in particular all vectors of the form_ (r · ∆[2]0 [+][ s][ ·][ ∆]1[2][,][ 0][, . . .,][ 0), with][ r, s][ ∈] _[R][. By setting][ r][ =][ r][0]_ [and][ s][ =][ r][1][, these] include the target vector ε. To conclude, we exhibit a ring R with degree O(log n) over the integers and _α1, . . ., αn, r0, r1 ∈_ _R with r0 · ∆[2]0_ [+][ r][1] _[·][ ∆]1[2]_ [= 1, where][ ∆][0] [=][ ∆][(1][, . . ., n][) and] _∆1 = ∆(α1, . . ., αn)._ These conditions are reformulated as follows. Let Πn denote the set of integer primes p with 2 _p_ _n and define_ _≤_ _≤_ _Qn =_ � _p∈Πn_ _p ∈_ Z. Then we are looking for a ring R with degree O(log n) over the integers and _α1, . . ., αn ∈_ _R such that_ _∆1 ∈_ (R/(Qn))[∗], i.e., the residue-class of ∆1 in the ring R/(Qn) is a unit. Indeed, if ∆1 ∈ (R/(Qn))[∗], then ∆1 ∈ (R/(Q[k]n[))][∗] [as well, for any positive] integer k. To verify this by induction, suppose that ∆1 · v = 1 + w · Q[i]n [for some] _v, w ∈_ _R and i ≥_ 1: then ∆1 · (v − _vw · Q[i]n[) = 1][ −]_ _[w][2][ ·][ Q][2]n[i]_ [and 2][i][ ≥] _[i][ + 1. As]_ a consequence, ∆1 ∈ (R/(∆[2]0[))][∗][. Namely, as an integer,][ ∆][2]0 [factors completely] over the primes p ∈ _Πn. Then choose k∗_ large enough such that ∆[2]0 [divides][ Q][k]n[∗] [,] 2 and apply the previous observation. It follows that ∆1 0[))][∗] [as well, or] _[∈]_ [(][R/][(][∆][2] equivalently, there exist r0, r1 ∈ _R such that r0 · ∆[2]0_ [+][ r][1] _[·][ ∆]1[2]_ [= 1.] 9 A similar property was first noticed and exploited in [17,18] and later in [25]. ----- Set m = ⌊log n⌋ + 1. Let f[ˆ](X) ∈ Z[X] be any monic, irreducible polynomial of degree m such that for all p ∈ _Πn, f[ˆ]p(X) (the polynomial f[ˆ](X) with its_ coefficients reduced modulo p) is irreducible in Fp[X]. One way of constructing such a polynomial is as follows. For all p ∈ _Πn, select_ a monic, irreducible polynomial f[ˆ]p(X) ∈ Fp[X] of degree m. By the theory of finite fields, this is always possible. Applying the Chinese Remainder Theorem to each of the coefficients separately, select an arbitrary lift to a monic polynomial _fˆ(X) ∈_ Z[X] of degree m such that ˆf (X) ≡ _fˆp(X) mod p. Note that the monic_ polynomial f[ˆ](X) is irreducible in Z[X]: if not, reduction modulo p with p ∈ _Πn,_ gives a non-trivial factorization of f[ˆ]p(X) in Fp[X]. Set R = Z[X]/( f[ˆ](X)). By definition of f[ˆ](X), it follows that R/(p) is a finite field, for all p ∈ _Πn. Indeed, for all p ∈_ _Πn,_ _R/(p) ≃_ Z[X]/(p, _f[ˆ](X)) ≃_ Fp[X]/( f[ˆ]p(X)) ≃ Fpm. Note that all ideals (p) of R with p ∈ _Πn are distinct and maximal. It follows,_ using the Chinese Remainder Theorem for general rings, that _R/(Qn) ≃_ � _p∈Πn_ Fpm. For all p ∈ _Πn we have |F[∗]p[m][|][ =][ p][m]_ _[−][1][ ≥]_ [2][m] _[−][1][ ≥]_ _[n][. Therefore, for each][ p][ ∈]_ _[Π][n][,]_ distinct non-zero _β1[(][p][)][, . . ., β]n[(][p][)]_ _∈_ Fpm can be selected. Finally, select arbitrary α1, . . ., αn ∈ _R such that, for i = 1 . . . n,_ _R/(Qn) ∋_ _αi ←→_ (βi[(][p][)])p∈Πn ∈ � _p∈Πn_ Fp[m], where the correspondence is via the (implicit) isomorphism. By construction, for all i, j with 1 ≤ _i, j ≤_ _n and i ̸= j, it holds that αi ∈_ (R/(Qn))[∗] and _αi −_ _αj ∈_ (R/(Qn))[∗]. Hence, ∆1 ∈ (R/(Qn))[∗], as desired. _⊓⊔_ **Corollary 1. For all integers t, n with 0 < t < n** 1, there exists an ISP of size _−_ _n · (⌊log n⌋_ + 2) for Tt,n. _Proof. Let R, α1, . . 
**Corollary 1.** For all integers $t, n$ with $0 < t < n - 1$, there exists an ISP of size $n \cdot (\lfloor \log n \rfloor + 2)$ for $T_{t,n}$.

*Proof.* Let $R, \alpha_1, \dots, \alpha_n, r_0, r_1, N_0, N_1$ be as constructed in the proof of Theorem 2. Apply the construction from the proof of Lemma 3 to $N_1$, and take into account the final remark of that proof. This gives an ISP matrix $\hat{N}_1$ with $n \cdot (\lfloor \log n \rfloor + 1)$ rows and $1 + t(\lfloor \log n \rfloor + 1)$ columns. Clearly, the sets $A \notin T_{t,n}$ satisfy Definition 6. For the sets $A \in T_{t,n}$, the rows owned by $A$ span $\delta_1 \cdot \hat{\varepsilon}$, where $\delta_1 \in \mathbb{Z}$ is the first coordinate of $r_1 \cdot \Delta_1^2$.

The ISP matrix $N_0$ has the properties stated in the proof of Theorem 2 also over $\mathbb{Z}$. Hence, the sets $A \notin T_{t,n}$ satisfy Definition 6 over $\mathbb{Z}$. For the sets $A \in T_{t,n}$, the rows owned by them clearly span $(\delta_0, 0, \dots, 0) \in \mathbb{Z}^{t+1}$, where $\delta_0 \in \mathbb{Z}$ is the first coordinate of $r_0 \cdot \Delta_0^2$. Since $\delta_0 + \delta_1 = 1$, this leads directly to an ISP for $T_{t,n}$, where the ISP matrix has $n \cdot (\lfloor \log n \rfloor + 2)$ rows and $t(\lfloor \log n \rfloor + 2) + 1$ columns. $\square$

## 5 Concluding Remarks

**5.1 A Note on Simulateability**

The ISPs $\hat{\mathcal{M}} = (\mathbb{Z}, \hat{M}, \hat{\psi}, \hat{\varepsilon})$ constructed in the proofs of Theorem 2 and Corollary 1 satisfy the following additional properties, which are helpful when proving the security of certain threshold cryptosystems. Let the share vector $\mathbf{s} = \hat{M}\mathbf{g}$ be computed according to the corresponding black-box secret sharing scheme; then the following holds.

1. The entries of $\mathbf{s}_A$ are independent random group elements for any subset $A$ of $\{1,\dots,n\}$ with $|A| \leq t$.
2. Every player $i$ can compute a reconstruction share $\mathbf{s}_i'$ by taking $\mathbb{Z}$-linear combinations (of course independent of the group) of the entries of his original share $\mathbf{s}_i$, such that any $t + 1$ reconstruction shares $\mathbf{s}_i'$ still allow to reconstruct the secret $s$, and such that any $t$ original shares $\mathbf{s}_i$ together with $s$ allow to compute the complete reconstruction share vector $\mathbf{s}'$ (by taking $\mathbb{Z}$-linear combinations).

The former property is inherited from the two Vandermonde matrices upon which the construction of $\hat{M}$ is based, and the latter holds for $\mathbf{s}'$ defined as $\mathbf{s}' = \hat{M}'\mathbf{g}$, where the ISP $\hat{\mathcal{M}}' = (\mathbb{Z}, \hat{M}', \hat{\psi}, \hat{\varepsilon})$ is constructed from the matrices $\Delta_0 N_0$ and $\Delta_1 N_1$ in a way similar to that in which $\hat{\mathcal{M}}$ is constructed from $N_0$ and $N_1$ in the proof of Theorem 2.

Assuming that the group operation is efficiently computable and that (almost) random group elements can be sampled efficiently, these properties allow the players of a set $A$ with $|A| \leq t$ to efficiently simulate their joint view $\mathbf{s}_A$ of the distribution phase, by sampling (almost) random elements from the group, and to efficiently simulate their view of the corresponding reconstruction phase by computing $\mathbf{s}'$ from $\mathbf{s}_A$ and the secret $s$. When proving the security of a direct application of our black-box secret sharing scheme to distributed RSA, for instance, these properties enable an efficient simulator for the adversary's view of the distributed decryption or signing process (see also [12,25]).

**5.2 Implementation**

We stress that in this paper we are primarily interested in the asymptotically optimal result from Theorem 2. Several choices in its proof have been made to simplify the mathematical exposition, while suppressing computational aspects. There are a number of possible practical implementations of black-box secret sharing based on our result. We do not optimize its performance here, but merely indicate below that straightforward implementations run in time polynomial in $n$.
Note that the scheme consumes $O(n \log n)$ random coins (group elements) and that the expansion factor is $O(\log n)$ in any case, i.e., each player receives $O(\log n)$ group elements as his share in a secret group element. For an implementation, it is important to limit the necessary computational resources for dealer and players.

One implementation is based on the well-known fact that for any finite Abelian group $G$, $G^m$ can be viewed as a module over the ring $R$ (see also [12]). The multiplication of an element of $R$ by an element of $G^m$ can be performed having only black-box access to the group operation of $G$. This way, the monotone span program over $R$ acts directly on vectors of elements of $G^m$. This leads in a straightforward fashion to an attractive implementation of black-box secret sharing where the actual ISP it is based upon can be left implicit. See for instance [12] for the computational details of this general procedure, taking into account the remarks below.

By the constructive method from the proof of Theorem 2, we may assume without loss of generality that the coefficients of the polynomial $f(X)$ have bit length smaller than $\log Q_n \leq \log(n!) = O(n \log n)$ bits. Recall that its degree $m$ is $\lfloor \log n \rfloor + 1$. For given threshold parameters $t, n$, it can be fixed once and for all. One simple possible choice for the $\alpha_i$'s is to identify them with distinct, non-zero integer polynomials of degree at most $\lfloor \log n \rfloor$, such that each of the coefficients is either 0 or 1. For instance, $\alpha_i$ can point to $i$ by basing it on the bit representation of $i$. $\Delta_0^2$ is simply represented by an integer with bit length $O(n^2 \cdot \log n)$. The value $\Delta_1^2$ is the product of $O(n^2)$ elements of $R$, each of which has integer coordinates $-1$, $0$ or $1$. The values $r_0$ and $r_1$ can be obtained by computing the inverse $u$ of $\Delta_1^2$ in $R/(\Delta_0^2)$, for instance by solving a linear system of equations over $\mathbb{Z}_{\Delta_0^2}$, and by computing $u \cdot \Delta_1^2 \in R$. The reconstruction vectors are computed from $r_0$, $r_1$ and obvious "interpolation coefficients" obtained from the $\alpha_i$'s.
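As a small illustration of that choice of $\alpha_i$ (our sketch, not the paper's code), the binary digits of $i$ yield distinct non-zero 0/1 coordinate vectors of degree at most $\lfloor \log n \rfloor$:

```python
def alpha(i, m):
    """Coordinates of alpha_i in the basis 1, w, ..., w^(m-1):
    the binary digits of i, lowest bit first."""
    assert 1 <= i < 2 ** m
    return [(i >> j) & 1 for j in range(m)]

n, m = 6, 3  # m = floor(log2(n)) + 1
print([alpha(i, m) for i in range(1, n + 1)])
# [[1,0,0], [0,1,0], [1,1,0], [0,0,1], [1,0,1], [0,1,1]]
```

Note that any two distinct 0/1 vectors differ by $\pm 1$ in some coordinate, so each $\alpha_i$ and each difference $\alpha_i - \alpha_j$ is non-zero modulo every prime $p$, hence a unit in $\mathbb{F}_{p^m}$, as required.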
Fehr, Y. Ishai, and E. Kushilevitz. Efficient multi-party computation over rings. Manuscript, February 2002. 10. Y. Di Crescenzo, and Y. Frankel. Existence of Multiplicative Secret Sharing Schemes with Polynomial Share Expansion. In: Proc. SODA ’99, ACM Press, pp. 895–896, 1999. 11. Y. Desmedt and Y. Frankel. Theshold cryptosystem. In: Proc. CRYPTO ’89, Springer LNCS, vol. 435, pp. 307–315, 1990. 12. Y. Desmedt and Y. Frankel. Homomorphic zero-knowledge threshold schemes over any finite Abelian group. In: SIAM Journal on Discrete Mathematics, 7(4), pp. 667–679, 1994. 13. Y. Desmedt, A. De Santis, Y. Frankel, and M. Yung. How to share a function securely. In: Proc. STOC ’94, ACM Press, pp. 22–33, 1994. 14. Y. Desmedt, G. Di Crescenzo, and M. Burmester. Multiplicative non-abelian sharing schemes and their application to threshold cryptography. In: Proc. ASIACRYPT ’94, Springer LNCS, vol. 917, pp. 21–31, 1995. 15. Y. Desmedt, B. King, W. Kishimoto, and K. Kurosawa. A comment on the effi ciency of secret sharing scheme over any finite Abelian group. In: Proc. ACISP ’98, Springer LNCS, vol. 1438, pp. 391–402, 1998. 16. M. van Dijk. Secret key sharing and secret key generation. Ph. D. Thesis, Eind hoven University of Technology, 1997. 17. Y. Frankel, P. Gemmell, P. MacKenzie, and M. Yung. Optimal resilience proactive public-key cryptosystems. In: Proc. FOCS ’97, IEEE Press, pp. 384–393, 1997. 18. Y. Frankel, P. Gemmell, P. MacKenzie, and M. Yung. Proactive RSA. In: Proc. CRYPTO ’97, Springer LNCS, vol. 1294, pp. 440–454, 1997. 19. A. G´al. Combinatorial methods in boolean function complexity. Ph.D.-thesis, Uni versity of Chicago, 1995. 20. M. Karchmer and A. Wigderson. On span programs. In: Proc. Structures in Com plexity Theory ’93, IEEE Computer Society Press, pp. 102–111, 1993. 21. B. King. Some results in linear secret sharing. Ph.D.-thesis, University of Wisconsin-Milwaukee, 2001. 22. B. King. Randomness required for linear threshold sharing schemes defined over any finite abelian group. In: Proc. ACISP ’01, Springer LNCS, vol. 2119, pp. 376– 391, 2001. 23. S. Lang. Algebra. Addison-Wesley Publishing Co., 2nd edition, 1984. 24. A. Shamir. How to share a secret. In: Communications of the ACM, (22) pp. 612– 613, 1979. 25. V. Shoup. Practical threshold signatures. In: Proc. EUROCRYPT ’00, Springer LNCS, vol. 1807, pp. 207–220, 2000. -----
{ "disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.1007/3-540-45708-9_18?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1007/3-540-45708-9_18, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "CCBYNCND", "status": "CLOSED", "url": "https://tidsskrift.dk/brics/article/download/21726/19160" }
2002
[ "JournalArticle", "Conference" ]
true
2002-02-05T00:00:00
[ { "paperId": "7dbdb4209626fd92d2436a058663206216036e68", "title": "Elements of Information Theory" }, { "paperId": "38647ae73600a9cae911c84d7f2463321b333a89", "title": "Efficient Multi-party Computation over Rings" }, { "paperId": "b7056d0ddc9f4d8e2da0eb2cbbb51f39df883324", "title": "Equational Axioms for Probabilistic Bisimilarity (Preliminary Report)" }, { "paperId": "aac0a5b51b82c5643b78ed95bc7004409a71be90", "title": "Composing Strand Spaces" }, { "paperId": "9da45d0ac15cd4d5877cc70eae4667f7594413fb", "title": "A Formalization of Linkage Analysis" }, { "paperId": "30687a58847d9defc1fdfb478af9ff8163dfe851", "title": "A Simple Correctness Proof of the Direct-Style Transformation" }, { "paperId": "2559ce9a3716cfff210db80265ffc57f0f475f89", "title": "Thesis" }, { "paperId": "cfd0b5fa49cc7cbc8fcb8b58643fe63c400393dc", "title": "Temporal Logic with Cyclic Counting and the Degree of Aperiodicity of Finite Automata" }, { "paperId": "75373008ac7c703a3ce1f815c2c9eb9aa0f9c456", "title": "A Simple CPS Transformation of Control-Flow Information" }, { "paperId": "2a423840a199b5999e623e19551b9c3af11ae822", "title": "Extracting Witnesses from Proofs of Knowledge in the Random Oracle Model" }, { "paperId": "b3e0be5d8981b5e02d07e8c4dde919b835e21e29", "title": "Randomness Required for Linear Threshold Sharing Schemes Defined over Any Finite Abelian Group" }, { "paperId": "970e8206335b0ca91bc25d7e079f14f38c33e3a4", "title": "Syntactic Theories in Practice" }, { "paperId": "d728243147560dd11960bc30f80a14fe6533a508", "title": "Practical Threshold Signatures" }, { "paperId": "64856147da1c26464f183f13be9137d60e23a729", "title": "General Secure Multi-party Computation from any Linear Secret-Sharing Scheme" }, { "paperId": "9232f19cc6419363d0e39752139de518a4bc406e", "title": "Syntactic accidents in program analysis: on the impact of the CPS transformation" }, { "paperId": "29806fa1bb52189c8df2ce98f4e403ebfbe55be9", "title": "A Comment on the Efficiency of Secret Sharing Scheme over Any Finite Abelian Group" }, { "paperId": "1723d3a80ee241c0e04f3870f4373a2de0798ac0", "title": "Optimal-resilience proactive public-key cryptosystems" }, { "paperId": "483f767ed12bc31b97dcf59ed0c019303f195c6d", "title": "Proactive RSA" }, { "paperId": "8fb9ab9929ce1a991b5b2ed8dd29f71b567da3e4", "title": "Efficient Multiplicative Sharing Schemes" }, { "paperId": "da417d59441865b90d0763933e226db09eebc5fe", "title": "Multiplicative Non-abelian Sharing Schemes and their Application to Threshold Cryptography" }, { "paperId": "f5451dd09f0b153ce05a06178a03f97dc1e30562", "title": "Perfect Homomorphic Zero-Knowledge Threshold Schemes over any Finite Abelian Group" }, { "paperId": "9b05ece21561b6d240292df8cc1b7366233ed98e", "title": "How to share a function securely" }, { "paperId": "87b5014904410775f09aa7512376b804541598ed", "title": "On span programs" }, { "paperId": "12d4e2345bb0cb0e6e559d6da9d5458c01adc7c8", "title": "A Construction of Practical Secret Sharing Schemes using Linear Block Codes" }, { "paperId": "6d96340b98082a0da318355361977ab5801694a0", "title": "Some Ideal Secret Sharing Schemes" }, { "paperId": "12113d0c06262e149df06516b77b8257c8b1a492", "title": "Generalized Secret Sharing and Monotone Functions" }, { "paperId": "bdccfcb9e908460052b799d0ccf6a21e92225e35", "title": "Threshold Cryptosystems" }, { "paperId": "88abb2cda4f2a57499a717966ac4fbe9a993027a", "title": "How to share a secret" }, { "paperId": "32d21ccc21a807627fcb21ea829d1acdab23be12", "title": "Safeguarding cryptographic keys" }, { "paperId": null, "title": 
"Recent BRICS Report Series Publications RS-02-8 Ronald Cramer and Serge Fehr. Optimal Black-Box Secret Sharing over Arbitrary Abelian Groups" }, { "paperId": null, "title": "RS-02-1 Claus Brabrand, Anders Møller, and Michael I. Schwartzbach. The <bigwig> Project" }, { "paperId": "d454a3aedc5b19b4aee6fa68ccca9f240ef4e776", "title": "Some results in linear secret sharing" }, { "paperId": "795633d34a83c571571725610664067c1f144ab8", "title": "Existence of multiplicative secret sharing schemes with polynomial share expansion" }, { "paperId": "8ae4c5492ca8525f0d438c42600d8f8cfd4b80a5", "title": "Secret key sharing and secret key generation" }, { "paperId": null, "title": "Proc. CRYPTO '97" }, { "paperId": "8c77cdf0ae09e931e67037aad106b671d28345a3", "title": "Secure schemes for secret sharing and key distribution" }, { "paperId": null, "title": "In: Proc" }, { "paperId": null, "title": "Algebra" }, { "paperId": "b6418463d62358a421f936c7a709d48f1b857ac4", "title": "The Faculty of the Division of the Physical Sciences in Candidacy for the Degree of Doctor of Philosophy Department of Computer Science" } ]
16161
en
[ { "category": "Physics", "source": "external" }, { "category": "Biology", "source": "external" }, { "category": "Mathematics", "source": "external" }, { "category": "Medicine", "source": "external" }, { "category": "Mathematics", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/02b2d773a7a2249e864f271b4964ef7bbd942499
[ "Physics", "Biology", "Mathematics", "Medicine" ]
0.809525
Emergence of Oscillations in a Mixed-Mechanism Phosphorylation System
02b2d773a7a2249e864f271b4964ef7bbd942499
Bulletin of Mathematical Biology
[ { "authorId": "2285231", "name": "C. Conradi" }, { "authorId": "3291335", "name": "Maya Mincheva" }, { "authorId": "4686418", "name": "Anne Shiu" } ]
{ "alternate_issns": null, "alternate_names": [ "Bull Math Biology" ], "alternate_urls": null, "id": "8a47ce78-e793-4315-8bf8-b09466ebf633", "issn": "0092-8240", "name": "Bulletin of Mathematical Biology", "type": "journal", "url": "http://link.springer.com/journal/11538" }
This work investigates the emergence of oscillations in one of the simplest cellular signaling networks exhibiting oscillations, namely the dual-site phosphorylation and dephosphorylation network (futile cycle), in which the mechanism for phosphorylation is processive, while the one for dephosphorylation is distributive (or vice versa). The fact that this network yields oscillations was shown recently by Suwanmajo and Krishnan. Our results, which significantly extend their analyses, are as follows. First, in the three-dimensional space of total amounts, the border between systems with a stable versus unstable steady state is a surface defined by the vanishing of a single Hurwitz determinant. Second, this surface consists generically of simple Hopf bifurcations. Next, simulations suggest that when the steady state is unstable, oscillations are the norm. Finally, the emergence of oscillations via a Hopf bifurcation is enabled by the catalytic and association constants of the distributive part of the mechanism; if these rate constants satisfy two inequalities, then the system generically admits a Hopf bifurcation. Our proofs are enabled by the Routh–Hurwitz criterion, a Hopf bifurcation criterion due to Yang, and a monomial parametrization of steady states.
# Emergence of oscillations in a mixed-mechanism phosphorylation system

#### Carsten Conradi (HTW Berlin), Maya Mincheva (Northern Illinois University), and Anne Shiu (Texas A&M University)

#### January 28, 2019

Abstract

This work investigates the emergence of oscillations in one of the simplest cellular signaling networks exhibiting oscillations, namely, the dual-site phosphorylation and dephosphorylation network (futile cycle), in which the mechanism for phosphorylation is processive while the one for dephosphorylation is distributive (or vice-versa). The fact that this network yields oscillations was shown recently by Suwanmajo and Krishnan. Our results, which significantly extend their analyses, are as follows. First, in the three-dimensional space of total amounts, the border between systems with a stable versus unstable steady state is a surface defined by the vanishing of a single Hurwitz determinant. Second, this surface consists generically of simple Hopf bifurcations. Next, simulations suggest that when the steady state is unstable, oscillations are the norm. Finally, the emergence of oscillations via a Hopf bifurcation is enabled by the catalytic and association constants of the distributive part of the mechanism: if these rate constants satisfy two inequalities, then the system generically admits a Hopf bifurcation. Our proofs are enabled by the Routh-Hurwitz criterion, a Hopf-bifurcation criterion due to Yang, and a monomial parametrization of steady states.

Keywords: multisite phosphorylation, monomial parametrization, oscillation, Hopf bifurcation, Routh-Hurwitz criterion

### 1 Introduction

Oscillations have been observed experimentally in signaling networks formed by phosphorylation and dephosphorylation [20, 21], which suggests that these networks are involved in timekeeping and synchronization. Indeed, multisite phosphorylation is the main mechanism for establishing the 24-hour period in eukaryotic circadian clocks [30, 42]. Our motivating question, therefore, is: How do oscillations arise in phosphorylation networks?

We tackle this question for the network that, according to Suwanmajo and Krishnan, "could be the simplest enzymatic modification scheme that can intrinsically exhibit oscillation" [39, §3.1].
This network, in (1), is the mixed-mechanism (partially processive, partially distributive) dual-site phosphorylation network (or mixed-mechanism network for short). Examples of networks that include both processive and distributive elements include the "processive model" of Aoki et al. [1, Table S2] and a model of ERK regulation via enzymes MEK and MKP3 [37, Fig. 2]. In the mixed-mechanism network, $S_i$ denotes a substrate with $i$ phosphate groups attached, and $K$ and $P$ are, respectively, a kinase and a phosphatase enzyme:

$$S_0 + K \;\underset{k_2}{\overset{k_1}{\rightleftarrows}}\; S_0K \;\overset{k_3}{\longrightarrow}\; S_1K \;\overset{k_4}{\longrightarrow}\; S_2 + K$$

$$S_2 + P \;\underset{k_6}{\overset{k_5}{\rightleftarrows}}\; S_2P \;\overset{k_7}{\longrightarrow}\; S_1 + P \;\underset{k_9}{\overset{k_8}{\rightleftarrows}}\; S_1P \;\overset{k_{10}}{\longrightarrow}\; S_0 + P \;. \tag{1}$$

When the kinase phosphorylates – that is, adds phosphate groups to – a substrate in the mixed-mechanism network (via the reactions labeled by $k_1$ to $k_4$), the kinase and substrate do not dissociate before both phosphate groups are added. Accordingly, the mechanism for phosphorylation is processive. In contrast, when the phosphatase dephosphorylates – i.e., removes phosphate groups from – a substrate (via reactions $k_5$ to $k_{10}$), this mechanism is distributive: the phosphatase and substrate dissociate each time a phosphate group is removed. Accordingly, network (1) is said to have a mixed mechanism. (Footnote 1: Network (1) is symmetric to the mixed-mechanism network in which phosphorylation is distributive (instead of processive) and dephosphorylation is processive (instead of distributive), so our results apply equally well to that network (cf. [39, networks 21–22]).)

The dynamical systems arising from the mixed-mechanism network live in a 9-dimensional space but, due to three conservation laws, are essentially 6-dimensional. Specifically, the total amounts of kinase, phosphatase, and substrate – denoted by $K_{tot}$, $P_{tot}$, and $S_{tot}$, respectively – are conserved. For each choice of three such total amounts and each choice of positive rate constants $k_i$, there is a unique positive steady state [39]. One focus of our work is determining when such a steady state undergoes a Hopf bifurcation leading to oscillations (with any of the $k_i$'s or total amounts as bifurcation parameter).

#### 1.1 Summary of main results

How do oscillations of the mixed-mechanism network emerge, and how robust are they? These questions are the motivation for our work. Let us describe Suwanmajo and Krishnan's progress in this direction. They first found rate constants $k_i$ and total amounts, displayed in Table 1, that yield oscillations [39, Supplementary Information].

| $k_1$ | $k_2$ | $k_3$ | $k_4$ | $k_5$ | $k_6$ | $k_7$ | $k_8$ | $k_9$ | $k_{10}$ | $K_{tot}$ | $P_{tot}$ | $S_{tot}$ |
|------|------|------|------|------|------|------|------|------|------|------|------|------|
| 1 | 1 | 1 | 1 | 100 | 1 | 0.9 | 3 | 1 | 100 | 17.5 | 5 | 40 |

Table 1: Rate constants (left) and total amounts (right), from [39, Supplementary Information], which lead to oscillations in the mixed-mechanism network (1).

Next, they examined whether oscillations persist as $K_{tot}$ varies. What they found, summarized in Figure 1, is that oscillations persist when $K_{tot}$ is in the (approximate) interval $(13.03, 29.23)$, and oscillations arise as the unique steady state undergoes a Hopf bifurcation.
This yields our answer to Question 1.1(1): For every ODE system arising from the mixed-mechanism network (1), a (two-dimensional) surface in the three-dimensional space of total amounts defines the border between steady states that are stable and those that are unstable. (Our result even applies to many systems for which the ki’s are not those in Table 1; see Proposition 4.1.) We can now translate Question 1.1(2) as follows: does the surface mentioned above consist of Hopf bifurcations? We prove, using a Hopf-bifurcation criterion stated in terms of Hurwitz determinants, due to Yang [43], that the answer, at least generically, is “yes”: When the unique steady state of the mixed-mechanism network (1) switches from being stable to unstable, then, generically, it undergoes a Hopf bifurcation. For general one-parameter ODE systems, there are two types of local bifurcations: saddle nodes (which require a zero eigenvalue of the Jacobian matrix) and Hopf bifurcations (which require a pair of pure imaginary eigenvalues of the Jacobian) [16]. We show that a saddle node bifurcation can not occur for any parameter values (see the proof of Proposition 4.1). Therefore, only Hopf bifurcations are possible for the mixed-mechanism system. A second question we aim to answer is the following: 3 |locally stable Ho|unstable opf Ho|locally stable opf| |---|---|---| ||unstable|| |||| ----- Question 1.2. Consider the mixed-mechanism network (1). What conditions on the ki’s guarantee a Hopf-bifurcation for some (positive) values of the total concentrations? As an answer to Question 1.2, we prove that the catalytic constants (k7 and k10) and association constants (k5 and k8) of the distributive part of the mechanism enable oscillations to emerge via a Hopf bifurcation. Specifically, under the simplifying assumption that all dissociation (backward-reaction) constants are equal (k2 = k6 = k9), if the rate constants satisfy two inequalities – lower bounds on k10 and k5/k8 – then the system generically admits a Hopf bifurcation (Proposition 4.3 and Theorem 4.5). (As a comparison, for the fully distributive dual-site network described in Section 1.2 below, the catalytic constants alone enable bistability [5].) Finally, we encode the relevant inequalities in a procedure to generate many parameter values for which we expect oscillations (Procedure 5.1). #### 1.2 Connection to related work Our work joins a growing number of works that harness steady-state parametrizations. Such results include criteria for when such parametrizations exist [26, 40] and methods for using them to determine whether a network is multistationary [25, 29, 32, 34]. Going further, steady-state parametrizations can also be used to find a witness to multistationarity or even the precise parameter regions that yield multistationarity [4, 5]. In this work, we use a steady-state parametrization in a novel way: to study oscillations via Hopf bifurcations. (Our approach is similar in spirit to using Clarke’s convex parameters together with a Hopfbifurcation criterion [9, 11, 14, 18]). As mentioned earlier, there has been much interest in the dynamics of phosphorylation systems [7]. The mixed-mechanism network (1) fits into the related literature as follows. The mixed network is a dual-site network situated between two extremes: the fully processive dual-site network – in which the phosphorylation and dephosphorylation mechanisms are both processive – and the fully distributive dual-site network. 
**Question 1.2.** Consider the mixed-mechanism network (1). What conditions on the $k_i$'s guarantee a Hopf bifurcation for some (positive) values of the total concentrations?

As an answer to Question 1.2, we prove that the catalytic constants ($k_7$ and $k_{10}$) and association constants ($k_5$ and $k_8$) of the distributive part of the mechanism enable oscillations to emerge via a Hopf bifurcation. Specifically, under the simplifying assumption that all dissociation (backward-reaction) constants are equal ($k_2 = k_6 = k_9$), if the rate constants satisfy two inequalities – lower bounds on $k_{10}$ and $k_5/k_8$ – then the system generically admits a Hopf bifurcation (Proposition 4.3 and Theorem 4.5). (As a comparison, for the fully distributive dual-site network described in Section 1.2 below, the catalytic constants alone enable bistability [5].) Finally, we encode the relevant inequalities in a procedure to generate many parameter values for which we expect oscillations (Procedure 5.1).

#### 1.2 Connection to related work

Our work joins a growing number of works that harness steady-state parametrizations. Such results include criteria for when such parametrizations exist [26, 40] and methods for using them to determine whether a network is multistationary [25, 29, 32, 34]. Going further, steady-state parametrizations can also be used to find a witness to multistationarity or even the precise parameter regions that yield multistationarity [4, 5]. In this work, we use a steady-state parametrization in a novel way: to study oscillations via Hopf bifurcations. (Our approach is similar in spirit to using Clarke's convex parameters together with a Hopf-bifurcation criterion [9, 11, 14, 18].)

As mentioned earlier, there has been much interest in the dynamics of phosphorylation systems [7]. The mixed-mechanism network (1) fits into the related literature as follows. The mixed network is a dual-site network situated between two extremes: the fully processive dual-site network – in which the phosphorylation and dephosphorylation mechanisms are both processive – and the fully distributive dual-site network. One might therefore expect the dynamics of the mixed-mechanism network to straddle those of the two networks. This is indeed the case. As summarized in Table 2, and reviewed in [7], fully processive networks are globally convergent to a unique steady state [6, 10, 35], while mixed-mechanism networks admit oscillations but not bistability [39], and fully distributive networks admit bistability [19] (and the question of oscillations is open [7]).

| Dual-site network | Oscillations? | Bistability? | Global convergence? |
|---|---|---|---|
| Fully processive | No | No | Yes |
| Mixed-mechanism | Yes | No | No |
| Fully distributive | (Open) | Yes | No |

Table 2: Dual-site phosphorylation networks and their properties: whether they admit oscillations or bistability, and whether all trajectories converge to a unique steady state.

Finally, we revisit Suwanmajo and Krishnan's claim mentioned earlier that the mixed-mechanism network is among the simplest enzymatic mechanisms with oscillations. In support of this claim, Tung proved that the simpler system obtained from the mixed-mechanism network by taking its (two-dimensional) Michaelis-Menten approximation is not oscillatory [41]. Moreover, Rao showed that this approximation is globally convergent to a unique steady state [36]. The validity of the Michaelis-Menten approximation for phosphorylation systems has been called into question [38], and what we know about the mixed-mechanism system concurs: this system is oscillatory, but its Michaelis-Menten approximation is not.

The outline of our work is as follows. Section 2 provides background on multisite phosphorylation, steady states, and Hopf bifurcations. Section 3 gives a monomial parametrization of the steady states of the mixed-mechanism network. In Section 4, we prove our main results (described above). We use these results in Section 5 to give a procedure for generating rate constants admitting Hopf bifurcations. In Section 6, we present simulations that suggest that oscillations are the norm in the unstable-steady-state regime. Finally, we end with a Discussion in Section 7.

### 2 Background

In this section, we introduce the ODEs arising from the mixed-mechanism network, and recall two criteria: the Routh-Hurwitz criterion for steady-state stability and Yang's criterion for Hopf bifurcations.

#### 2.1 Differential equations of the mixed-mechanism network

For the mixed-mechanism network (1), we let $x_1, x_2, \dots, x_9$ denote the species concentrations in the order given in Table 3. The dynamical system (arising from mass-action kinetics) defined by the mixed-mechanism network (1) is given by the following ODEs:

$$\begin{aligned}
\dot{x}_1 &= -k_1 x_1 x_2 + k_2 x_3 + k_{10} x_9 \\
\dot{x}_2 &= -k_1 x_1 x_2 + k_2 x_3 + k_4 x_4 \\
\dot{x}_3 &= k_1 x_1 x_2 - (k_2 + k_3) x_3 \\
\dot{x}_4 &= k_3 x_3 - k_4 x_4 \\
\dot{x}_5 &= k_4 x_4 - k_5 x_5 x_6 + k_6 x_7 \\
\dot{x}_6 &= -k_5 x_5 x_6 - k_8 x_8 x_6 + (k_6 + k_7) x_7 + (k_9 + k_{10}) x_9 \\
\dot{x}_7 &= k_5 x_5 x_6 - (k_6 + k_7) x_7 \\
\dot{x}_8 &= k_7 x_7 - k_8 x_6 x_8 + k_9 x_9 \\
\dot{x}_9 &= k_8 x_6 x_8 - (k_9 + k_{10}) x_9 .
\end{aligned} \tag{2}$$

| $x_1$ | $x_2$ | $x_3$ | $x_4$ | $x_5$ | $x_6$ | $x_7$ | $x_8$ | $x_9$ |
|---|---|---|---|---|---|---|---|---|
| $S_0$ | $K$ | $S_0K$ | $S_1K$ | $S_2$ | $P$ | $S_2P$ | $S_1$ | $S_1P$ |

Table 3: Assignment of variables to species for the mixed-mechanism network (1).

The conservation laws arise from the fact that the total amounts of free and bound enzyme or substrate remain constant. That is, as the dynamical system (2) progresses, the following three conservation values, denoted by $K_{tot}, P_{tot}, S_{tot} \in \mathbb{R}_{>0}$, remain constant:

$$\begin{aligned}
K_{tot} &= x_2 + x_3 + x_4, \\
P_{tot} &= x_6 + x_7 + x_9, \\
S_{tot} &= x_1 + x_3 + x_4 + x_5 + x_7 + x_8 + x_9 .
\end{aligned} \tag{3}$$
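A quick symbolic check (our sketch, using SymPy) confirms that the three linear combinations in (3) are conserved along (2), i.e., their time derivatives vanish identically:

```python
import sympy as sp

x1, x2, x3, x4, x5, x6, x7, x8, x9 = sp.symbols('x1:10', positive=True)
k1, k2, k3, k4, k5, k6, k7, k8, k9, k10 = sp.symbols('k1:11', positive=True)

xdot = [
    -k1*x1*x2 + k2*x3 + k10*x9,
    -k1*x1*x2 + k2*x3 + k4*x4,
     k1*x1*x2 - (k2 + k3)*x3,
     k3*x3 - k4*x4,
     k4*x4 - k5*x5*x6 + k6*x7,
    -k5*x5*x6 - k8*x8*x6 + (k6 + k7)*x7 + (k9 + k10)*x9,
     k5*x5*x6 - (k6 + k7)*x7,
     k7*x7 - k8*x6*x8 + k9*x9,
     k8*x6*x8 - (k9 + k10)*x9,
]

# d/dt of Ktot, Ptot, Stot as sums of the xdot's (0-based indices)
conservation = [(1, 2, 3), (5, 6, 8), (0, 2, 3, 4, 6, 7, 8)]
for idx in conservation:
    assert sp.expand(sum(xdot[i] for i in idx)) == 0
print("all three conservation laws of (3) verified")
```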
Also, a trajectory $x(t)$ beginning in $\mathbb{R}^9_{\geq 0}$ remains in $\mathbb{R}^9_{\geq 0}$ for all positive time $t$, so it remains in a stoichiometric compatibility class, which we denote as follows:

$$\mathcal{P} = \{x \in \mathbb{R}^9_{\geq 0} \mid \text{the conservation equations (3) hold}\} . \tag{4}$$

#### 2.2 Stability of steady states and the Routh-Hurwitz criterion

The dynamical system (2) arising from the mixed-mechanism network is an example of a reaction kinetics system. That is, the system of ODEs takes the following form:

$$\frac{dx}{dt} = \Gamma \cdot R(x) =: g(x), \tag{5}$$

where $\Gamma$ and $R$ are as follows. Letting $s$ denote the number of species and $r$ the number of reactions, $\Gamma$ is an $s \times r$ matrix whose $k$-th column is the reaction vector of the $k$-th reaction, i.e., it encodes the net change in each species that results when that reaction takes place. Also, $R : \mathbb{R}^s_{\geq 0} \to \mathbb{R}^r_{\geq 0}$ encodes the reaction rates of the $r$ reactions as functions of the $s$ species concentrations.

A steady state (respectively, positive steady state) of a reaction kinetics system is a nonnegative concentration vector $x^* \in \mathbb{R}^s_{\geq 0}$ (respectively, $x^* \in \mathbb{R}^s_{>0}$) at which the ODEs (5) vanish: $g(x^*) = 0$. Letting $S := \mathrm{im}(\Gamma)$ denote the stoichiometric subspace, a steady state $x^*$ is nondegenerate if $\mathrm{Im}\,(dg(x^*)|_S) = S$, where $dg(x^*)$ denotes the Jacobian matrix of $g$ at $x^*$. A nondegenerate steady state is locally asymptotically stable if each of the $\sigma := \dim(S)$ nonzero eigenvalues of $dg(x^*)$ has negative real part. Hence, a steady state is locally stable if and only if the characteristic polynomial of the Jacobian evaluated at the steady state has $\sigma$ roots with negative real part (the remaining roots will be 0).

To check whether a polynomial has only roots with negative real parts, we appeal to the Routh-Hurwitz criterion below [13].

**Definition 2.1.** The $i$-th Hurwitz matrix of a univariate polynomial $p(\lambda) = a_0\lambda^n + a_1\lambda^{n-1} + \dots + a_n$ is the following $i \times i$ matrix:

$$H_i = \begin{pmatrix}
a_1 & a_0 & 0 & 0 & 0 & \cdots & 0 \\
a_3 & a_2 & a_1 & a_0 & 0 & \cdots & 0 \\
\vdots & \vdots & \vdots & \vdots & \vdots & & \vdots \\
a_{2i-1} & a_{2i-2} & a_{2i-3} & a_{2i-4} & a_{2i-5} & \cdots & a_i
\end{pmatrix},$$

in which the $(k, l)$-th entry is $a_{2k-l}$ as long as $0 \leq 2k - l \leq n$, and 0 otherwise.

**Proposition 2.2 (Routh-Hurwitz criterion).** A polynomial $p(\lambda) = a_0\lambda^n + a_1\lambda^{n-1} + \dots + a_n$ with $a_0 > 0$ has all roots with negative real part if and only if all $n$ of its Hurwitz matrices have positive determinant: $\det H_i > 0$ for all $i = 1, \dots, n$.
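Definition 2.1 translates directly into code. The following SymPy sketch (ours; helper names are not from the paper) builds the Hurwitz matrices and applies Proposition 2.2 to a cubic with known stable roots:

```python
import sympy as sp

def hurwitz_matrix(coeffs, i):
    """i-th Hurwitz matrix of a_0*l^n + a_1*l^(n-1) + ... + a_n,
    with coeffs = [a_0, ..., a_n]; entry (k, l) is a_{2k-l} (1-indexed)."""
    n = len(coeffs) - 1
    a = lambda idx: coeffs[idx] if 0 <= idx <= n else 0
    return sp.Matrix(i, i, lambda r, c: a(2 * (r + 1) - (c + 1)))

# p(l) = l^3 + 2 l^2 + 3 l + 1: all Hurwitz determinants are positive,
# so all roots have negative real part (Proposition 2.2)
coeffs = [1, 2, 3, 1]
print([hurwitz_matrix(coeffs, i).det() for i in range(1, 4)])  # [2, 5, 5]
```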
#### 2.3 Hopf bifurcations and a criterion due to Yang

A simple Hopf bifurcation is a bifurcation in which a single complex-conjugate pair of eigenvalues of the Jacobian matrix crosses the imaginary axis, while all other eigenvalues remain with negative real parts. Such a bifurcation, if it is supercritical, generates nearby oscillations or periodic orbits [27]. To detect simple Hopf bifurcations, we will use a criterion of Yang that characterizes Hopf bifurcations in terms of Hurwitz-matrix determinants (Proposition 2.3).

Setup for Yang's criterion. We consider an ODE system parametrized by µ ∈ R: ẋ = g_µ(x), where x ∈ R^n, and g_µ(x) varies smoothly in µ and x. Assume that x0 ∈ R^n is a steady state of the system defined by µ0, that is, g_{µ0}(x0) = 0. Assume, furthermore, that we have a smooth curve of steady states:

µ ↦ x(µ)   (6)

(that is, g_µ(x(µ)) = 0 for all µ) and that x(µ0) = x0. Denote the characteristic polynomial of the Jacobian matrix of g_µ, evaluated at x(µ), as follows:

p_µ(λ) := det (λI − Jac g_µ)|_{x=x(µ)} = λ^n + a1(µ) λ^{n−1} + · · · + an(µ),

and, for i = 1, . . ., n, let Hi(µ) denote the i-th Hurwitz matrix of p_µ(λ).

Proposition 2.3 (Yang's criterion [43]). Assume the above setup. Then, there is a simple Hopf bifurcation at x0 with respect to µ if and only if the following hold:

(i) an(µ0) > 0,
(ii) det H1(µ0) > 0, det H2(µ0) > 0, . . ., det H_{n−2}(µ0) > 0, and
(iii) det H_{n−1}(µ0) = 0 and d(det H_{n−1}(µ))/dµ |_{µ=µ0} ≠ 0.

Remark 2.4. Liu [27] gave an earlier version of Yang's Hopf-bifurcation criterion (Proposition 2.3), using a variant of the Hurwitz matrices that differs from ours.
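As a quick illustration of how conditions (i)-(iii) can be checked in practice, consider the toy family p_µ(λ) = λ^3 + λ^2 + λ + µ (this example is ours, not the paper's): at µ0 = 1 the polynomial factors as (λ + 1)(λ^2 + 1), so a conjugate pair sits on the imaginary axis.

```python
# Illustrative check of Yang's criterion (Proposition 2.3) on the toy
# family p_mu(x) = x**3 + x**2 + x + mu (our example, not the paper's).
import sympy as sp

def hurwitz_matrix(coeffs, i):   # as in the sketch after Proposition 2.2
    n = len(coeffs) - 1
    return sp.Matrix(i, i, lambda r, c: coeffs[2 * (r + 1) - (c + 1)]
                     if 0 <= 2 * (r + 1) - (c + 1) <= n else 0)

mu = sp.symbols('mu')
coeffs = [1, 1, 1, mu]                    # [a0, a1, a2, a3]
detH2 = hurwitz_matrix(coeffs, 2).det()   # = 1 - mu
mu0 = sp.solve(sp.Eq(detH2, 0), mu)[0]    # mu0 = 1

print(coeffs[-1].subs(mu, mu0) > 0)             # (i): a_n(mu0) > 0
print(hurwitz_matrix(coeffs, 1).det() > 0)      # (ii): det H1(mu0) > 0
print(sp.diff(detH2, mu).subs(mu, mu0) != 0)    # (iii): transversality
```

All three conditions hold, so by Proposition 2.3 this family undergoes a simple Hopf bifurcation at µ0 = 1.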
### 3 Steady states of the mixed-mechanism network

In this section, we recall that the mixed-mechanism network admits a unique steady state in each compatibility class (Proposition 3.1), and prove that the set of steady states admits a monomial parametrization (Theorem 3.2). We then use this parametrization to analyze the space of compatibility classes (Proposition 3.6).

#### 3.1 Uniqueness of steady states

Suwanmajo and Krishnan proved that, for every choice of positive rate constants and positive total amounts, the mixed-mechanism network does not admit multiple positive steady states [39, §A.2]. Additionally, there are no boundary steady states in any compatibility class P, as in (4), and P is compact. Hence, via a standard application of the Brouwer fixed-point theorem (e.g., [33, Remark 3.9]), there is always a unique steady state:

Proposition 3.1 (Uniqueness of steady states). For any choice of positive rate constants ki and positive total amounts Ktot, Ptot, and Stot, the dynamical system (2) arising from the mixed-mechanism network has a unique steady state in P, and it is a positive steady state.

Proposition 3.1 precludes the existence of multiple positive steady states, and hence the existence of a saddle-node bifurcation. Thus, a Hopf bifurcation is the only other one-parameter bifurcation which may occur. Indeed, we will show that a Hopf bifurcation exists for some parameter values in Section 4.

Also, Proposition 3.1 proves part of a conjecture that we posed [6]. The other half of the conjecture, however, posited that mixed-mechanism systems, like fully processive systems [6, 10], are globally convergent to the unique steady state. Suwanmajo and Krishnan demonstrated that this is false: the system can exhibit oscillatory behavior [39]! This capacity for oscillations is the focus of this work, and our analysis will harness a monomial parametrization of the steady states. We turn to this topic now.

#### 3.2 A monomial parametrization of the steady states

The steady states of the mixed-mechanism network can be parametrized by monomials (and thus the network is said to have "toric steady states" [33]):

Theorem 3.2 (Parametrization of the steady states). For every choice of rate constants ki > 0, the set of positive steady states of the mixed-mechanism system (2) is three-dimensional and is the image of the following map χ = χ_{k1,...,k10}:

χ : R^3_{+} → R^9_{+}   (7)
(x1, x2, x6) ↦ (x1, x2, . . ., x9),

given by

x3 := k1/(k2 + k3) · x1x2,   x4 := k1k3/((k2 + k3)k4) · x1x2,   x5 := k1k3(k6 + k7)/((k2 + k3)k5k7) · x1x2/x6,
x7 := k1k3/((k2 + k3)k7) · x1x2,   x8 := k1k3(k9 + k10)/((k2 + k3)k8k10) · x1x2/x6,   x9 := k1k3/((k2 + k3)k10) · x1x2.

Proof. It is straightforward to check that the image of χ is contained in the set of steady states: after substituting χ(x1, x2, x6), the right-hand side of the mixed-mechanism network ODEs (2) vanishes. Conversely, let x* = (x1, x2, . . ., x9) be a positive steady state. The right-hand side of the ODEs (2) vanishes at x*, so, in the following order, we use ẋ3 = 0 to solve for x3 in terms of x1 and x2, use ẋ4 = 0 to solve for x4 via x3 which was already obtained, use ẋ1 = 0 to obtain x9, use ẋ9 = 0 to obtain x8, use ẋ8 = 0 to obtain x7, and finally use ẋ7 = 0 to obtain x5. This yields precisely the parametrization (7), so x* is in the image of χ.

Remark 3.3. The parametrization (7) appeared earlier in [7].

Remark 3.4. That we could achieve a steady-state parametrization was expected, due to Thomson and Gunawardena's rational parametrization theorem for multisite systems [40].

Remark 3.5. In the parametrization χ in Theorem 3.2, we divide by x6, so χ is technically not a monomial map. However, χ can be made monomial: we introduce y := x1/x6, so that the parametrization accepts as input (y, x2, x6), and then x1 is replaced by yx6.

#### 3.3 A parametrization of the compatibility classes

Every compatibility class P of the mixed-mechanism network, by definition (4), is uniquely determined by a choice of total amounts (Ktot, Ptot, Stot) ∈ R^3_{>0}. Thus, we identify the set of compatibility classes with {(Ktot, Ptot, Stot)} = R^3_{>0}. We parametrize this set below (Proposition 3.6).

Let φ : R^9_{>0} → R^3_{>0} denote the map sending a vector of concentrations to the corresponding total amounts (Ktot, Ptot, Stot), as in (3):

φ(x) := (x2 + x3 + x4, x6 + x7 + x9, x1 + x3 + x4 + x5 + x7 + x8 + x9) .   (8)

Each compatibility class P contains a unique positive steady state (Proposition 3.1), and the positive steady states are parametrized by χ from Theorem 3.2, so the space of compatibility classes is parametrized as follows:

Proposition 3.6 (Parametrization of the compatibility classes). Identify every compatibility class P of the mixed-mechanism network (1) with the corresponding total amounts (Ktot, Ptot, Stot) ∈ R^3_{>0}. Then, for every choice of positive rate constants ki, the following is a bijection that sends a vector (x1, x2, x6) ∈ R^3_{>0} to the compatibility class in which the unique steady state is χ(x1, x2, x6):

φ ◦ χ : R^3_{>0} → R^3_{>0} = {(Ktot, Ptot, Stot)},

where φ is as in (8) and χ is the steady-state parametrization (7). The map φ ◦ χ is given by

(x1, x2, x6) ↦ ( x2 + (k1/(k2 + k3))(1 + k3/k4) x1x2 ,
  x6 + (k1k3/(k2 + k3))(1/k7 + 1/k10) x1x2 ,
  x1 + (k1k3/(k2 + k3)) [ 1/k3 + 1/k4 + 1/k7 + 1/k10 + (1/x6)( (k6 + k7)/(k5k7) + (k9 + k10)/(k10k8) ) ] x1x2 ),

which becomes, when the rate constants are those in Table 1, the following:

(x1, x2, x6) ↦ ( x1x2 + x2, x6 + (1009/1800) x1x2, x1 + (2809/1800) x1x2 + (161/900) x1x2/x6 ) .   (9)

Example 3.7. Consider the mixed-mechanism system with rate constants from Table 1. To compute the unique steady state x* in the compatibility class given by (Ktot, Ptot, Stot) = (17.5, 5, 40), we use Proposition 3.6. Namely, we know that φ ◦ χ(x1*, x2*, x6*) = (17.5, 5, 40), so we solve (using, e.g., Mathematica [22]) for the unique positive solution:

(x1*, x2*, x6*) ≈ (1.0134, 8.6916, 0.0624) .

We obtain the remaining coordinates of x* using the parametrization χ in (7):

x* = χ(x1*, x2*, x6*)   (10)
≈ (1.0134, 8.6916, 4.4041, 4.4041, 1.4893, 0.0624, 4.8935, 23.7512, 0.0440) .
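Example 3.7's computation is easy to reproduce numerically. The sketch below is ours (the authors solved the system in Mathematica): it inverts the specialized map (9) with scipy and recovers the values reported in (10); the remaining coordinates then follow by substituting into χ with the Table 1 rate constants.

```python
# Illustrative reproduction of Example 3.7 (not from the paper): solve
# (phi o chi)(x1, x2, x6) = (17.5, 5, 40) using the specialized map (9).
from scipy.optimize import fsolve

def phi_chi_minus_totals(v):
    x1, x2, x6 = v
    return (x1 * x2 + x2 - 17.5,
            x6 + (1009 / 1800) * x1 * x2 - 5.0,
            x1 + (2809 / 1800) * x1 * x2 + (161 / 900) * x1 * x2 / x6 - 40.0)

x1, x2, x6 = fsolve(phi_chi_minus_totals, x0=(1.0, 9.0, 0.1))
print(x1, x2, x6)   # approx 1.0134, 8.6916, 0.0624, matching (10)
```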
#### 3.4 Steady states and Hopf bifurcations

Our analysis of oscillations in the mixed-mechanism system is based on Hopf bifurcations. Hopf-bifurcation diagrams are displayed in Figure 2, where the total amounts are the bifurcation parameters (cf. Figure 1, which is with respect to Ktot). Figure 2 suggests that, in the 3-dimensional space of total amounts, there is a surface of Hopf bifurcations. Indeed, we will see in the next section that this is the case (see Theorem 4.5 and Figure 3).

(a) Bif. parameter Ktot. (b) Bif. parameter Ptot. (c) Bif. parameter Stot.

Figure 2: Numerical continuation of the unique positive steady state, in (10), when (Ktot, Ptot, Stot) = (17.5, 5, 40): (a) For Ptot = 5, 8 and Stot = 40, we observe (supercritical) Hopf bifurcations at Ktot ≈ 13.0296, 29.2251 (Ptot = 5) and Ktot ≈ 18.5758 (Ptot = 8). (b) For Ktot = 17.5 and Stot = 40, we observe (supercritical) Hopf bifurcations at Ptot ≈ 4.6310 and Ptot ≈ 7.5479. (c) For Ktot = 17.5 and Ptot = 5, we observe (supercritical) Hopf bifurcations at Stot ≈ 21.8213 and Stot ≈ 43.5944. All figures in this work were made using Matcont [8].

### 4 Hopf bifurcations in the mixed-mechanism system

We saw in the previous section that the mixed-mechanism network yields a unique positive steady state in each compatibility class. Now we show that the compatibility classes with a stable steady state are separated from those with an unstable steady state by a single surface H (Proposition 4.1 and Theorem 4.2), and, under stronger hypotheses, crossing the surface H generically corresponds to undergoing a Hopf bifurcation (Theorem 4.5). (Recall that generically means that the exceptional set has zero measure. So, we will show that the subset of the surface corresponding to non-Hopf points has dimension at most 1.)

To simplify computations, we assume that dissociation (backward-reaction) constants are equal: k2 = k6 = k9. In chemistry, the forward reaction is usually more thermodynamically favorable than the backward reaction. Therefore, the rate constant of a forward reaction is much larger than the rate constant of the backward reaction [2]. We choose small values for the dissociation rate constants in Section 5, similar to what was done in [12].

Proposition 4.1. Consider the dynamical system (2) arising from the mixed-mechanism network and any positive rate constants for which k2 = k6 = k9. Then:

1. Every compatibility class P contains a unique (positive) steady state x*.
2. Exactly one of the following holds:
(a) The unique steady state x* in each compatibility class P is locally asymptotically stable.
(b) In the space of total amounts {(Ktot, Ptot, Stot)} = R^3_{>0}, which we identify with the space of compatibility classes P, a surface H defines the border between those P whose unique steady state x* is locally asymptotically stable and those P for which x* is unstable.

Proof. Item 1 follows from Proposition 3.1. For item 2, let J denote the Jacobian matrix of the mixed-mechanism system (2), with equal dissociation constants k2 = k6 = k9 =: kb, evaluated at the parametrized steady state χ(x1, x2, x6), from (7). The characteristic polynomial of J is:

p(λ) := det(λI − J) = λ^3 (λ^6 + b1 λ^5 + b2 λ^4 + · · · + b6),

where the coefficients bi (displayed below) are rational functions in x1, x2, x6 and the ki's. To streamline reading, we only give the complete numerators of b6 and b1. The full coefficients can be found in the Mathematica file mixed coeffs charpoly kb.nb (this file and the others mentioned below are in the Supporting Information; see Appendix A).
numerator(b6) = k1^2 k3^2 k4 (k10 + k7)(k10k5k7 + k5k7kb + k10k8(k7 + kb)) x1x2^2   (11)
+ k1k10k3k4k7 (k3 + kb)(k10k5k7 + k5k7kb + k10k8(k7 + kb)) x2x6
+ k10^2 k4k5k7^2 k8 (k3 + kb)^2 x6^2 + k1k10^2 (k3 + k4) k5k7^2 k8 (k3 + kb) x1x6^2
+ k1k10k5k7 (k10k4k7 + k3k4k7 + k10k3(k4 + k7)) k8 (k3 + kb) x2x6^2

numerator(b5) = k1^2 k3^2 k4 (k10 + k7)(k10 + kb)(k7 + kb) x1x2^2
+ k1k10k3k4k7 (k10 + kb)(k3 + kb)(k7 + kb) x2x6 + . . .

numerator(b4) = k1k3k4 (k10 + k7)(k10 + kb)(k3 + kb)(k7 + kb) x1x2 + . . .

numerator(b3) = . . . + k1^2 k3 ( k10^2 (k7 + kb) + k7kb (k3 + k4 + k7 + kb) + k10 ( (k7 + kb)^2 + k3(2k7 + kb) + k4(2k7 + kb) ) ) x1^2 x2 + . . .

numerator(b2) = . . . + k1^2 k3 (k7kb + k10(2k7 + kb)) x1^2 x2 + . . .

numerator(b1) = k1k3 (k7kb + k10(2k7 + kb)) x1x2 + k10k7 (k3 + kb)(k10 + k3 + k4 + k7 + 3kb) x6
+ k1k10k7 (k3 + kb) x1x6 + k1k10k7 (k3 + kb) x2x6 + k10k7 (k5 + k8)(k3 + kb) x6^2

And for the denominators:

denominator(b6) = k10 (kb + k3) k7
denominator(bi) = k10 (kb + k3) k7 x6, for i = 2, 3, 4, 5 .

Since x1, x2, x6 and the ki's are positive, it follows that b1, b2, . . ., b6 > 0 (in the aforementioned Mathematica file, we checked that the above numerators are sums of only positive monomials). Recall that, due to the 3 conservation laws (3), the Jacobian matrix has rank 6, not 9. Accordingly, the relevant Hurwitz matrix, namely, for p(λ)/λ^3, is as follows:

  [ b1   1    0    0    0    0  ]
  [ b3   b2   b1   1    0    0  ]
  [ b5   b4   b3   b2   b1   1  ]
  [ 0    b6   b5   b4   b3   b2 ]
  [ 0    0    0    b6   b5   b4 ]
  [ 0    0    0    0    0    b6 ]

Consider the Hurwitz determinants. First, det H1 = b1 > 0. The next 3 Hurwitz determinants are also positive:

numerator(det H2) = k1^3 k3^2 (k7kb + k10(2k7 + kb))^2 x1^3 x2^2
+ k1^3 k10k3k7 (k3 + kb)(k7kb + k10(2k7 + kb)) x1^3 x2x6 + . . .

numerator(det H3) = k1^5 k3^3 (k10k5k7 + k5k7kb + k10k8(k7 + kb))(k7kb + k10(2k7 + kb))^2 x1^5 x2^3 x6 + . . .

numerator(det H4) = k1^7 k3^4 (k10k5k7 + k5k7kb + k10k8(k7 + kb))(k7kb + k10(2k7 + kb))^2
· ( k5k7 (k3 + k4 + k7) kb + k10^2 k8 (k7 + kb) + k10 (k3 + k4 + k7)(k5k7 + k8(k7 + kb)) ) x1^7 x2^4 x6^2 + . . .

where the denominators, which are positive, are, respectively:

denominator(det H2) = k10^2 k7^2 (kb + k3)^2 x6^2
denominator(det H3) = k10^3 k7^3 (kb + k3)^3 x6^3
denominator(det H4) = k10^4 k7^4 (kb + k3)^4 x6^4

(We display only the leading terms of the numerator polynomials; the complete polynomials, together with an algorithmic verification of positivity, are in mixed Hi.nb.) The final Hurwitz determinant is det H6 = (b6)(det H5), and we saw that b6 > 0. So, by the Routh-Hurwitz criterion (Proposition 2.2), the steady state χ(x1, x2, x6) is locally stable if and only if det H5 > 0.
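The identity det H6 = b6 · det H5 used in the last step holds for purely structural reasons: the last row of H6 is (0, 0, 0, 0, 0, b6), so expanding the determinant along it leaves the 5 × 5 leading minor, which is exactly H5. This can also be confirmed symbolically; the sketch below is our illustration and is independent of the paper's Mathematica files.

```python
# Illustrative symbolic check (ours, not from the paper's files): for
# p(lambda)/lambda**3 = lambda**6 + b1*lambda**5 + ... + b6, verify the
# identity det H6 = b6 * det H5 used above.
import sympy as sp

def hurwitz_matrix(coeffs, i):   # as in the sketch after Proposition 2.2
    n = len(coeffs) - 1
    return sp.Matrix(i, i, lambda r, c: coeffs[2 * (r + 1) - (c + 1)]
                     if 0 <= 2 * (r + 1) - (c + 1) <= n else 0)

b1, b2, b3, b4, b5, b6 = sp.symbols('b1:7')
coeffs = [1, b1, b2, b3, b4, b5, b6]
print(sp.expand(hurwitz_matrix(coeffs, 6).det()
                - b6 * hurwitz_matrix(coeffs, 5).det()) == 0)   # True
```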
Hence, the surface H that delineates the boundary between compatibility classes with stable steady states vs. those with unstable steady states is defined by det H5 ◦ (φ ◦ χ)^{−1} = 0, where φ ◦ χ is the parametrization of compatibility classes from Proposition 3.6.

If H intersects the positive orthant R^3_{>0}, then case (b) of the proposition holds. Otherwise, if H ∩ R^3_{>0} = ∅, then we claim that we are in case (a). To show this, we need to verify that det H5(x1, x2, x6) > 0 for some (x1, x2, x6) ∈ R^3_{>0}. The denominator of det H5(x1, x2, x6) is strictly positive:

denominator(det H5) = k10^5 k7^5 (k3 + kb)^5 x6^5 .

So we need only show that the numerator of det H5(x1, x2, x6) is strictly positive for some (x1, x2, x6) ∈ R^3_{>0}.

To this end, we view this numerator as a polynomial in x1 (so the coefficients are rational functions of x2, x6, and the ki's):

numerator(det H5) = x1^9 x2^4 [ k10k7x6 (k3 + kb) / (k3 (k10(2k7 + kb) + k7kb)) + x2
+ k8x6 ( α01 (k5/k8) + α10 )
+ k8^2 x6^2 ( α02 (k5/k8)^2 + α11 (k5/k8) + α20 )
+ k8^3 x6^3 ( α03 (k5/k8)^3 + α12 (k5/k8)^2 + α21 (k5/k8) + α30 ) ]
+ lower degree terms in x1,   (12)

where the coefficients αij are sums of (many) positive monomials and are given in the file mixed analysis H5N x1 LT.nb. Therefore (for fixed x2 and x6), when x1 is sufficiently large, the expression (12) is positive, as desired.

The proof of Proposition 4.1 focused on the surface H defined by the equation det H5 ◦ (φ ◦ χ)^{−1} = 0. This surface sometimes meets the positive orthant R^3_{>0}, and indeed we show that this is the case when certain relationships hold among the rate constants.

Theorem 4.2. Consider the dynamical system (2) arising from the mixed-mechanism network. Assume the positive rate constants satisfy k2 = k6 = k9 and the following inequality:

k10k3k4 − (k3 + k4)(k3 + k7)(k4 + k7) > 0 .   (13)

If k5/k8 is sufficiently large, then there is a compatibility class P whose unique steady state x* is unstable.

Proof. Assume that the rate constants satisfy k2 = k6 = k9 =: kb and (13). By the proof of Proposition 4.1, a steady state χ(x1, x2, x6) of the mixed-mechanism system (2) is locally stable if and only if det H5(x1, x2, x6) > 0. We also saw in that proof that the denominator of det H5(x1, x2, x6) is strictly positive for all (x1, x2, x6) ∈ R^3_{>0}. So, by Proposition 2.2, it suffices to show that if k5/k8 is sufficiently large, then there exists (x1*, x2*, x6*) ∈ R^3_{>0} such that the numerator of det H5(x1*, x2*, x6*) is strictly negative: this would show that the steady state x* := χ(x1*, x2*, x6*) is unstable.

To this end, view the numerator of det H5 as a polynomial in x2 with coefficients in x1, x6, and the ki's. It is a degree-9 polynomial in x2 of the following form (see the file mixed analysis H5N x2 LT.nb):

numerator(det H5) = k1^9 [ (α0 x6^3 + α1 x6^2 + α2 x6 + α3) x1^5 + ( k10k7 (k3 + kb) x6 / (k3 (k10(2k7 + kb) + k7kb)) ) x1^4 ] x2^9 + lower degree terms,   (14)

where α0, . . ., α3 are rational functions in kb, k3, k4, k5, k7, k8, k10. These functions αi are given in mixed analysis H5N x2 LT.nb.

We now analyze α0, which has the following form (see mixed analysis H5N x2 LT.nb):

α0 = k8^3 ( β0 (k5/k8)^3 + β1 (k5/k8)^2 + β2 (k5/k8) + β3 ),   (15)

where each coefficient βi is a rational function in kb, k3, k4, k7, k10 (and hence does not depend on k1, k5, or k8).
In particular, β0 is the following polynomial:

β0 = − k1^9 k3^5 k7^3 ( k10k3k4 − (k3 + k4)(k3 + k7)(k4 + k7) ) (k10 + kb)^3 ( k7kb + k10(2k7 + kb) )^2 .

It follows that β0 < 0 when inequality (13) holds. Thus, when (13) holds, then, by equation (15), the inequality α0 < 0 holds for k5/k8 sufficiently large. In this case, the cubic polynomial in x6 appearing in (14), and hence also the coefficient of x2^9 in the numerator of det H5, will be negative for x6 sufficiently large. Hence, if we choose x1 := 1 (or any positive value) and x6 and x2 sufficiently large, then the numerator of det H5 will be negative.

In the remainder of this section, we focus on the question of whether the surface H consists of (at least generically) Hopf bifurcations. If so, this would imply that whenever a steady state of the mixed-mechanism network switches from stable to unstable, we expect it to undergo a Hopf bifurcation leading to oscillations. We begin our analyses of Hopf bifurcations by giving a criterion for such bifurcations.

Proposition 4.3. Consider the dynamical system (2) arising from the mixed-mechanism network and any positive rate constants with k2 = k6 = k9 and k10k3k4 − (k3 + k4)(k3 + k7)(k4 + k7) > 0. Then there exists (x1*, x2*, x6*) ∈ R^3_{>0} such that det H5(x1*, x2*, x6*) = 0 (in other words, φ ◦ χ(x1*, x2*, x6*) is on H). Moreover, for such a vector (x1*, x2*, x6*), the system undergoes a Hopf bifurcation with respect to x2 at the steady state χ(x1*, x2*, x6*) if and only if the following inequality holds:

d( numerator(det H5)|_{x1=x1*, x6=x6*} )/dx2 |_{x2=x2*} ≠ 0 .   (16)

Proof. Fix positive rate constants for which k2 = k6 = k9 and k10k3k4 − (k3 + k4)(k3 + k7)(k4 + k7) > 0. By the proofs of Proposition 4.1 and Theorem 4.2, the function det H5 : R^3_{>0} → R takes both positive and negative values. So, as det H5 is continuous, det H5(x1*, x2*, x6*) = 0 for some (x1*, x2*, x6*) ∈ R^3_{>0} (by the intermediate-value theorem).

Assume det H5(x1*, x2*, x6*) = 0. To see whether the steady state χ(x1*, x2*, x6*) is a Hopf bifurcation with respect to the parameter µ = x2, where the curve of steady states is x(µ) = χ(x1*, µ, x6*) and µ0 = x2*, we use Proposition 2.3 (Yang's criterion). Parts (i) and (ii) of that criterion hold for any steady state χ(x1*, x2*, x6*), because b6 = b6(x1*, x2*, x6*) > 0, by (11), and also det Hi = det Hi(x1*, x2*, x6*) > 0 for i = 1, 2, 3, 4 (from the proof of Proposition 4.1). Recall from the proof of Proposition 4.1 that the denominator of det H5 is strictly positive and does not depend on x2; thus, we can focus on the numerator of det H5. So, by Proposition 2.3, χ(x1*, x2*, x6*) is a Hopf bifurcation with respect to x2 if and only if (16) holds.
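Numerically, condition (16) amounts to checking that x2* is a simple root of the restricted numerator. A minimal sketch (ours; the coefficient vector below is a placeholder, since the true restricted numerator comes from the Supporting Information files):

```python
# Illustrative sketch (not from the paper): test whether each positive
# real root of a univariate polynomial is simple, as required by (16).
# `c` is a placeholder; the real coefficients would come from restricting
# numerator(det H5) to x1 = x1*, x6 = x6*.
import numpy as np

c = [-1.0, 2.0, 5.0]          # placeholder: -x2**2 + 2*x2 + 5
p = np.poly1d(c)
dp = p.deriv()
for r in p.roots:
    if abs(r.imag) < 1e-9 and r.real > 0:
        simple = abs(dp(r.real)) > 1e-9
        print(r.real, 'simple root' if simple else 'multiple root')
```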
Remark 4.4. Given rate constants ki as in Proposition 4.3 for which there is a Hopf bifurcation, we can perturb slightly the rate constants involved in (13) (while maintaining the equality k2 = k6 = k9) and preserve the existence of a Hopf bifurcation. Indeed, this assertion follows from Proposition 4.3 (inequality (16) is maintained under small perturbations of the xi's), the fact that simple roots of a polynomial depend continuously – in fact, infinitely differentiably – on the coefficients [28], and the fact that the inequality (13) defines a (relatively) open set in the parameter space of the ki's.

Under the hypotheses of Proposition 4.3, we expect that inequality (16) holds generically on H. We will confirm this when the rate constants are those in Table 1 (Theorem 4.5).

The proof of Theorem 4.5 makes use of discriminants, which we now review. Consider a degree-n, univariate polynomial f = cn x^n + c_{n−1} x^{n−1} + · · · + c0 with coefficients ci ∈ C. A multiple root of f is some x* ∈ C for which (x − x*)^2 divides f, or equivalently f(x*) = f′(x*) = 0. It is well known that f has a multiple root in C if and only if a certain multivariate polynomial in the ci's, the discriminant, vanishes [15]. For instance, the discriminant of the quadratic polynomial ax^2 + bx + c is the familiar expression b^2 − 4ac.
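Discriminants are available in any computer-algebra system; for instance (our illustration, not the paper's code):

```python
# Illustrative sketch (not from the paper): discriminants in sympy.
import sympy as sp

x, a, b, c = sp.symbols('x a b c')
print(sp.discriminant(a * x**2 + b * x + c, x))    # b**2 - 4*a*c
print(sp.discriminant((x - 1)**2 * (x + 2), x))    # 0: a multiple root
print(sp.discriminant(x**3 - x, x))                # 4: all roots simple
```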
Theorem 4.5 (Hopf bifurcations of the mixed-mechanism network). Consider the dynamical system (2) arising from the mixed-mechanism network and rate constants in Table 1. Let H denote the surface, from Proposition 4.1, that defines the border between those P whose unique steady state x* is locally stable and those P for which x* is unstable. Then H consists generically of compatibility classes P whose unique steady state x* undergoes a simple Hopf bifurcation (with x2 as bifurcation parameter).

Proof. It is straightforward to check that the rate constants in Table 1 satisfy the inequality (13). Therefore, the surface H as in Proposition 4.1.2(b) exists, and is defined by det H5 = 0, where H5 is the Hurwitz matrix (specialized to the rate constants in Table 1) as in the proof of Proposition 4.1.

To prove that H consists generically of Hopf bifurcations, we use Proposition 4.3. That result states that χ(x1*, x2*, x6*) is a Hopf bifurcation with respect to x2 if and only if (x1*, x2*, x6*) ∈ H′ \ S, where

H′ := V_{>0}(det H5) := { (x1, x2, x6) ∈ R^3_{>0} | det H5(x1, x2, x6) = 0 }, and
S := { (x1*, x2*, x6*) ∈ H′ | d( det H5|_{x1=x1*, x6=x6*} )/dx2 |_{x2=x2*} = 0 } ⊆ H′ .

We have that H = φ ◦ χ(H′), and that the following subset of H consists of compatibility classes whose unique steady state undergoes a simple Hopf bifurcation with x2 as bifurcation parameter: φ ◦ χ(H′ \ S). So, it suffices to show that dim(S) < dim(H′). Note that dim(H′) ≥ 2, so we will show that dim(S) ≤ 1.

To this end, note that if (x1*, x2*, x6*) ∈ S, then x2* is a multiple root of the univariate polynomial numerator(det H5)|_{x1=x1*, x6=x6*} (this also uses the fact that the denominator of det H5, which is 188956800000000000000 x6^5, does not depend on x2). Thus, any (x1*, x2*, x6*) ∈ S satisfies D(x1*, x6*) = 0, where D is the discriminant of det H5 and H5 is viewed as a univariate polynomial in the variable x2. So, we have the map:

S → { (x1, x6) ∈ R^2 | D(x1, x6) = 0 } =: D
(x1, x2, x6) ↦ (x1, x6) .

The preimage of any point of this map has size at most 4 (because numerator(det H5)|_{x1=x1*, x6=x6*} has degree 9, so it has at most 4 multiple roots). Thus, to achieve our desired inequality (namely, dim(S) ≤ 1), we need only prove the following claim: dim(D) ≤ 1 or, equivalently, the bivariate polynomial D is not the zero polynomial. It suffices to show that D(1, 1) is nonzero, which in turn would follow if we can show that the univariate, degree-9 polynomial numerator(det H5)|_{x1=1, x6=1} does not have a multiple root over C. Indeed, using Mathematica, we see that the numerator of det H5(1, x2, 1) has 9 (distinct) complex roots:

−131.425, −102.999, −78.022, −66.423, −39.194, −3.946 ± 0.734i, −3.677, 268.606 .

Thus, D is a nonzero polynomial, and this completes the proof.

(a) Stot = 40. (b) Ptot = 5. (c) Ktot ≈ 13.0296.

Figure 3: Slices of the Hopf-bifurcation surface H, from Theorem 4.5. Specifically, displayed are the intersections of H with the hyperplanes defined by (a) Stot = 40, (b) Ptot = 5, and (c) Ktot ≈ 13.0296. Each such curve was obtained numerically, using Matcont [8], by a two-parameter continuation of the Hopf bifurcation arising from Ktot ≈ 13.0296, Ptot = 5, and Stot = 40. Each point of the curves in (a)–(c) corresponds to a Hopf bifurcation with respect to either of the two varying total concentrations. Points "inside" H correspond to unstable steady states and thus the potential for oscillations.

In Figure 3, we show some slices of the Hopf-bifurcation surface H (where the rate constants are from Table 1). Accordingly, this figure extends the one-dimensional Figure 1.

The bifurcations analyzed in Proposition 4.3 and Theorem 4.5 are with respect to the bifurcation parameter x2, the steady-state value of the kinase K. It is natural to ask whether we also obtain a bifurcation with respect to a more biologically meaningful parameter, such as a rate constant or a total amount. We now explain how to perform such an analysis. To use a total amount (here we use Ptot) as a bifurcation parameter (perturbing this parameter corresponds to perturbing the compatibility class), consider the following maps:

{(Ktot, Ptot, Stot)} = R^3_{>0}  ←(φ ◦ χ)−  R^3_{>0}  −(h5 := det H5)→  R

Recall that (φ ◦ χ) : R^3_{>0} → R^3_{>0} is a bijection. Let g := h5 ◦ (φ ◦ χ)^{−1} : R^3_{>0} → R. Also, let p := (φ ◦ χ)_2 = x6 + (1009/1800) x1x2 denote the second coordinate function of φ ◦ χ from (9) (here we assume the rate constants from Table 1). We are interested in checking whether ∂g/∂Ptot is (generically) nonzero whenever g = 0. Accordingly, we use the chain rule:

∂g/∂Ptot = (1/(∂p/∂x1)) ∂h5/∂x1 + (1/(∂p/∂x2)) ∂h5/∂x2 + (1/(∂p/∂x6)) ∂h5/∂x6
= (1800/(1009 x2)) ∂h5/∂x1 + (1800/(1009 x1)) ∂h5/∂x2 + ∂h5/∂x6 .   (17)

For specific values of x1, x2, x6, it is straightforward to check whether the sum (17) is nonzero. More generally, we expect this sum to be generically nonzero; that is, we expect that the surface H consists generically of Hopf bifurcations with respect to the total amount Ptot.
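The sum (17) is straightforward to evaluate with a computer-algebra system. In the sketch below (ours, not the paper's code), h5 is a stand-in expression for det H5 (the genuine expression is built from the Supporting Information files), while p is exactly the second coordinate of (9):

```python
# Illustrative sketch of evaluating (17); `h5` is a placeholder for
# det H5 (the real expression comes from the files in Appendix A).
import sympy as sp

x1, x2, x6 = sp.symbols('x1 x2 x6', positive=True)
p = x6 + sp.Rational(1009, 1800) * x1 * x2   # second coordinate of (9)
h5 = x1 * x2**2 - x6                         # placeholder expression

dg_dPtot = sum(sp.diff(h5, v) / sp.diff(p, v) for v in (x1, x2, x6))
print(sp.simplify(dg_dPtot))   # then test for vanishing on {h5 = 0}
```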
### 5 Generating rate constants admitting oscillations

The proof of Theorem 4.2 yields a recipe for generating rate constants for the mixed-mechanism network at which we expect oscillations arising from a Hopf bifurcation. Specifically, we choose rate constants ki for which the equalities k2 = k6 = k9 hold, the inequality (13) holds, and α0 < 0 (as in (15)), and then pick x2 and x6 large enough so that det H5 is negative but close to 0. We summarize these choices in the following procedure.

Procedure 5.1 (Generating rate constants likely to admit oscillations).

Input: The following functions (provided as a text file in the Supporting Information; see Appendix A):
(i) α0 as in (15),
(ii) the numerator of det H5,
(iii) q := α0 x6^3 + α1 x6^2 + α2 x6 + α3 as in (14), and
(iv) φ ◦ χ given in Proposition 3.6.

Output: Rate constants and total amounts for which det H5 is negative and close to 0.

Steps:
1. Choose positive values for kb := k2 = k6 = k9, x1, k1, k3, k4, k7, and k8.
2. Choose a positive value for k10 for which k10 > (k3 + k4)(k3 + k7)(k4 + k7)/(k3k4).
3. Choose the remaining rate constant k5 such that α0 < 0.
4. Choose x6 so that q < 0.
5. Choose x2 so that the numerator of det H5 is negative but close to 0.
6. Return the ki's and (Ktot, Ptot, Stot) := φ ◦ χ(x1, x2, x6), where φ ◦ χ is evaluated at the ki's (and x1, x2, x6) chosen in the previous steps.

Remark 5.2. Using the output of Procedure 5.1, one can attempt to exhibit and analyze oscillations or Hopf bifurcations using software, e.g., Matcont [8]. See Figure 4.

Example 5.3. We follow Procedure 5.1 as follows (to verify our computations, see the file mixed generate rc.nb):

Step 1. We pick kb = 0.143738, k1 = 0.575284, k3 = 3.89096, k4 = 5.05386, k7 = 9.25029, k8 = 0.621813, and x1 = 5.82148.

Step 2. The inequality for this step evaluates to k10 > 85.5048, so we choose k10 = 90.

Step 3. Evaluating α0 at the chosen ki's, we obtain the following inequality:

−8.896 × 10^17 k5^3 + 1.49735 × 10^20 k5^2 + 4.79701 × 10^20 k5 + 2.42695 × 10^20 < 0,

which we find, using Mathematica, is feasible for k5 > 171.471. So, we pick k5 = 172.

Step 4. By evaluating q at the values chosen above, we obtain the following inequality:

−1.41683 × 10^22 x6^3 − 3.5508 × 10^25 x6^2 − 1.80374 × 10^25 x6 + 2.15078 × 10^24 < 0 .

This inequality holds when x6 > 0.0996797, so we choose x6 = 0.1.

Step 5. By evaluating the numerator of det H5, we obtain the following inequality:

−5.42893 × 10^25 x2^9 − 4.20944 × 10^29 x2^8 − 5.05393 × 10^31 x2^7 − 6.67609 × 10^32 x2^6 + 4.66164 × 10^33 x2^5 + 3.97617 × 10^34 x2^4 + 1.01289 × 10^35 x2^3 + 1.19894 × 10^35 x2^2 + 6.7831 × 10^34 x2 + 1.4718 × 10^34 < 0 .

This inequality is feasible, as computed in Mathematica, for x2 > 9.0382; we pick x2 = 10.

Step 6. We have determined the following rate constants:

k1 = 0.575284   k2 = 0.143738   k3 = 3.89096   k4 = 5.05386   k5 = 172
k6 = 0.143738   k7 = 9.25029    k8 = 0.621813  k9 = 0.143738  k10 = 90

We obtain the following steady state, using (7):

(x1, x2, . . ., x9) = χ(x1, x2, x6)   (18)
= (5.82148, 10, 8.30052, 6.39056, 1.90691, 0.1, 3.49146, 520.229, 0.358855) .

Using this steady state, we obtain the total amounts, using (8):

(Ktot, Ptot, Stot) = φ(x1, x2, . . ., x9) = (24.6911, 3.95031, 546.499) .   (19)

The resulting bifurcation analysis is shown in Figure 4.
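The numeric thresholds reported in Steps 2-4 can be reproduced directly from the displayed inequalities. The following sketch is ours (the paper's own realization of Procedure 5.1 is the notebook mixed generate rc.nb); it recovers 85.5048, 171.471, and 0.0996797:

```python
# Illustrative check of Steps 2-4 of Example 5.3 (not from the paper's
# notebook): recover the thresholds for k10, k5, and x6.
import numpy as np

k3, k4, k7 = 3.89096, 5.05386, 9.25029
print((k3 + k4) * (k3 + k7) * (k4 + k7) / (k3 * k4))        # ~85.5048

def largest_real_root(coeffs):   # coefficients, highest degree first
    return max(r.real for r in np.roots(coeffs) if abs(r.imag) < 1e-6)

print(largest_real_root([-8.896e17, 1.49735e20,
                         4.79701e20, 2.42695e20]))          # ~171.471 (k5)
print(largest_real_root([-1.41683e22, -3.5508e25,
                         -1.80374e25, 2.15078e24]))         # ~0.0996797 (x6)
```

Since each cubic has a negative leading coefficient, it is negative to the right of its largest real root, which is exactly the feasibility condition used in Steps 3 and 4.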
(a) Bif. parameter Ktot. (b) Bif. parameter Ptot. (c) Bif. parameter Stot.

Figure 4: Numerical continuation of the steady state (18), when the total amounts are as in (19): (a) (Supercritical) Hopf bifurcations are at Ktot ≈ 24.0623 and 107.5635. (b) (Supercritical) Hopf bifurcations are at Ptot ≈ 4.1022 and Ptot ≈ 2.3275. Matcont reported a branch point, the leftmost red circle, at Ptot ≈ −8.5427 × 10^{−13}, i.e., for Ptot ≈ 0 and thus outside the domain of interest. (c) A (supercritical) Hopf bifurcation is at Stot ≈ 288.4384.

### 6 Dynamics: simulations and conjectures

Are oscillations the norm when the mixed-mechanism system has an unstable steady state? We conjecture that this is the case.

Conjecture 6.1. Consider the mixed-mechanism network, and any choice of rate constants and total amounts. If the unique steady state in P is unstable, then P contains a stable periodic orbit.

(a) x5 vs. t. (b) x5 vs. x2. (c) Increasing Ktot.

Figure 5: Numerical verification of oscillations in the mixed-mechanism system with rate constants as in Table 1. For (a) and (b), we used (Ktot, Ptot, Stot) = (14, 5, 40) and initial values as in (10). Here the solution converges to a periodic orbit. For (c), we used (Ptot, Stot) = (8, 40) and three values for Ktot (namely, 100, 1000, and 10000), and again initial values as in (10), except that x5 = 1.1. Again the solutions seem to converge to a periodic orbit, and moreover this periodic orbit appears not to depend on the value of Ktot. See Conjecture 6.2.

Some simulations are shown in Figure 5. In (a) and (b) of that figure, we see solutions converging to a periodic orbit; this system arises from total amounts similar to those that Suwanmajo and Krishnan found to support oscillations. In contrast, in Figure 5(c), we see oscillations, when (Ptot, Stot) = (8, 40), for three large values for Ktot: 100, 1000, and 10000. Oscillations persist across these values, which yields a much larger range for Ktot than Suwanmajo and Krishnan's results would suggest. Moreover, the value of Ktot appears not to affect the resulting periodic orbit (when projected to x5, the concentration of the doubly phosphorylated substrate S2). Could this be a biological design mechanism for robust timekeeping (for instance, in circadian clocks)? Mathematically, we conjecture that oscillations indeed persist for arbitrarily large Ktot, and that the periodic orbit in x5 indeed does not depend on Ktot.
Both of these problems are still unresolved, and the second problem in particular is very difficult. Nevertheless, here we made progress on characterizing some of the geography of parameter space for the mixed-mechanism phosphorylation network. Indeed, we found that a single surface defines the boundary between stable and unstable steady states, and this surface consists generically of Hopf bifurcations. Hence, when a steady state switches from stable to unstable, then we expect it to undergo a Hopf bifurcation leading to oscillations. Additionally, we gave a procedure for generating many parameter values leading to oscillations. We now discuss the significance of our work. At a glance, it might seem that our results are specific to network (1) and rate constants related to those in Table 1. However, the approach is general: for other rate constants (e.g., estimated from data) or other networks (e.g., a version of the ERK network from [37] also has oscillations and a unique steady state), one could apply the same techniques. Therefore, the potential impact is broad. Going forward, we hope that the novel techniques we used – specifically, using a steadystate parametrization together with a Hopf-bifurcation criterion – will contribute to solving other problems. For instance, we expect that such tools could help solve an important open problem in this area [7], namely, the question of whether oscillations or Hopf bifurcations arise from the fully distributive phosphorylation network. #### Acknowledgements AS was partially supported by the NSF (DMS-1312473/1513364 and DMS-1752672) and the Simons Foundation (#521874). AS thanks Alan Rendall and Jonathan Tyler for helpful discussions. CC was partially supported by the Deutsche Forschungsgemeinschaft DFG (DFG-284057449). The authors two referees for their helpful suggestions. 20 ----- ### A Files in the Supporting Information [The following files can be found at http://www.math.tamu.edu/~annejls/mixed.html:](http://www.math.tamu.edu/~annejls/mixed.html) Text files: - mixed H5N kb.txt . . . contains H5N, the numerator of det H5 under the assumption k2 = k6 = k9 = kb - mixed W.txt . . . contains a matrix W that defines (3) - mixed xt.txt . . . contains xt, the parameterization (7) - mixed Jx.txt . . . contains Jx, the Jacobian evaluated at the parameterization (7) Mathematica Notebooks: - mixed analysis H5N x1 LT.nb: Functionality: This file can be used to obtain numerator(det H5) as in (12), in particular to examine the coefficients α01, α10, . . . Input: the file mixed H5N kb.txt - mixed analysis H5N x2 LT.nb: Functionality: This file can be used to obtain numerator(det H5) as in (14), in particular to examine the coefficients α0, . . ., α3 and β0, . . ., β3. Input: the file mixed H5N kb.txt - mixed coeffs charpoly.nb: Functionality: This file can be used to obtain the characteristic polynomial of the Jacobian of the system (2). It contains the Mathematica commands to establish bi > 0. Input: the file mixed Jx.txt - mixed Hi.nb: Functionality: This file can be used to obtain the determinants of the Hurwitz matrices H2, . . ., H5. It contains the Mathematica commands to establish det Hi > 0, for i = 2, 3, 4 and that det H5 is of mixed sign. Input: the file mixed Jx.txt - mixed generate rc.nb: Functionality: This file contains a realization of Procedure 5.1. Input: the files mixed H5N kb.txt, mixed W.txt, mixed xt.txt, mixed Jx.txt. 
### References

[1] Kazuhiro Aoki, Masashi Yamada, Katsuyuki Kunida, Shuhei Yasuda, and Michiyuki Matsuda, Processive phosphorylation of ERK MAP kinase in mammalian cells, P. Natl. Acad. Sci. USA 108 (2011), no. 31, 12675–12680.

[2] Peter Atkins, Julio De Paula, and James Keeler, Atkins' physical chemistry, Oxford University Press, 2018.

[3] EG Bure and Ye N Rozenvasser, On investigations of autooscillating system sensitivity, Avtomat. i Telemekh (1974), no. 7, 9–17.

[4] Carsten Conradi, Elisenda Feliu, Maya Mincheva, and Carsten Wiuf, Identifying parameter regions for multistationarity, PLoS Comput. Biol. 13 (2017), no. 10, e1005751.

[5] Carsten Conradi and Maya Mincheva, Catalytic constants enable the emergence of bistability in dual phosphorylation, J. R. Soc. Interface 11 (2014), no. 95.

[6] Carsten Conradi and Anne Shiu, A global convergence result for processive multisite phosphorylation systems, B. Math. Biol. 77 (2015), no. 1, 126–155. MR 3303108

[7] ———, Dynamics of post-translational modification systems: recent progress and future challenges, Biophys. J. 114 (2018), no. 3, 507–515.

[8] Annick Dhooge, Willy Govaerts, and Yuri A. Kuznetsov, MATCONT: A MATLAB package for numerical bifurcation analysis of ODEs, ACM Trans. Math. Softw. 29 (2003), no. 2, 141–164.

[9] Mirela Domijan and Markus Kirkilionis, Bistability and oscillations in chemical reaction networks, J. Math. Biol. 59 (2009), no. 4, 467–501.

[10] Mitchell Eithun and Anne Shiu, An all-encompassing global convergence result for processive multisite phosphorylation systems, Math. Biosci. 291 (2017), 1–9.

[11] Hassan Errami, Markus Eiswirth, Dima Grigoriev, Werner M. Seiler, Thomas Sturm, and Andreas Weber, Detection of Hopf bifurcations in chemical reaction networks using convex coordinates, J. Comput. Phys. 291 (2015), 279–302.

[12] James E Ferrell and Sang Hoon Ha, Ultrasensitivity part II: multisite phosphorylation, stoichiometric inhibitors, and positive feedback, Trends Biochem. Sci. 39 (2014), no. 11, 556–569.

[13] Feliks R. Gantmacher, Matrix theory, Chelsea, New York 21 (1959).

[14] Karin Gatermann, Markus Eiswirth, and Anke Sensse, Toric ideals and graph theory to analyze Hopf bifurcations in mass action systems, J. Symbolic Comput. 40 (2005), no. 6, 1361–1382.

[15] I.M. Gelfand, M.M. Kapranov, and A.V. Zelevinsky, Discriminants, resultants and multidimensional determinants, Birkhäuser, 1994.

[16] John Guckenheimer and Philip Holmes, Nonlinear oscillations, dynamical systems, and bifurcations of vector fields, vol. 42, Springer Science & Business Media, 2013.

[17] Jeremy Gunawardena, Multisite protein phosphorylation makes a good threshold but can be a poor switch, P. Natl. Acad. Sci. USA 102 (2005), no. 41, 14617–14622.

[18] Otto Hadač, František Muzika, Vladislav Nevoral, Michal Přibyl, and Igor Schreiber, Minimal oscillating subnetwork in the Huang-Ferrell model of the MAPK cascade, PLOS ONE 12 (2017), no. 6, 1–25.

[19] Juliette Hell and Alan D. Rendall, A proof of bistability for the dual futile cycle, Nonlinear Anal.-Real 24 (2015), 175–189.

[20] Zoe Hilioti, Walid Sabbagh, Saurabh Paliwal, Adriel Bergmann, Marcus D Goncalves, Lee Bardwell, and Andre Levchenko, Oscillatory phosphorylation of yeast Fus3 MAP kinase controls periodic gene expression and morphogenesis, Curr. Biol. 18 (2008), no. 21, 1700–1706.
[21] Huizhong Hu, Alexey Goltsov, James L Bown, Andrew H Sims, Simon P Langdon, David J Harrison, and Dana Faratian, Feedforward and feedback regulation of the MAPK and PI3K oscillatory circuit in breast cancer, Cell. Signal. 25 (2013), no. 1, 26–32.

[22] Wolfram Research, Inc., Mathematica, Version 11.3, Champaign, IL, 2018.

[23] Brian Ingalls, Maya Mincheva, and Marc R. Roussel, Parametric sensitivity analysis of oscillatory delay systems with an application to gene regulation, B. Math. Biol. 79 (2017), no. 7, 1539–1563.

[24] Brian P Ingalls, Autonomously oscillating biochemical systems: parametric sensitivity of extrema and period, Systems Biol. 1 (2004), no. 1, 62–70.

[25] Matthew D. Johnston, Translated chemical reaction networks, B. Math. Biol. 76 (2014), no. 6, 1081–1116.

[26] Matthew D. Johnston, Stefan Müller, and Casian Pantea, A deficiency-based approach to parametrizing positive equilibria of biochemical reaction systems, Preprint, arXiv:1805.09295 (2018).

[27] Wei Min Liu, Criterion of Hopf bifurcations without using eigenvalues, J. Math. Anal. Appl. 182 (1994), no. 1, 250–256. MR 1265895

[28] German Lozada-Cruz, The simple application of the implicit function theorem, Boletin de la Asociación Matemática Venezolana XIX (2012), no. 1.

[29] Stefan Müller, Elisenda Feliu, Georg Regensburger, Carsten Conradi, Anne Shiu, and Alicia Dickenstein, Sign conditions for injectivity of generalized polynomial maps with applications to chemical reaction networks and real algebraic geometry, Found. Comput. Math. 16 (2016), no. 1, 69–97.

[30] Koji L. Ode and Hiroki R. Ueda, Design principles of phosphorylation-dependent timekeeping in eukaryotic circadian clocks, Cold Spring Harbor Perspectives in Biology (2017).

[31] Parag Patwardhan and W. Todd Miller, Processive phosphorylation: Mechanism and biological importance, Cell. Signal. 19 (2007), no. 11, 2218–2226.

[32] Mercedes Pérez Millán and Alicia Dickenstein, The structure of MESSI biological systems, SIAM J. Appl. Dyn. Syst. 17 (2018), no. 2, 1650–1682.

[33] Mercedes Pérez Millán, Alicia Dickenstein, Anne Shiu, and Carsten Conradi, Chemical reaction systems with toric steady states, B. Math. Biol. 74 (2012), no. 5, 1027–1065.

[34] Mercedes Pérez Millán and Adrián G. Turjanski, MAPK's networks and their capacity for multistationarity due to toric steady states, Math. Biosci. 262 (2015), 125–137.

[35] Shodhan Rao, Global stability of a class of futile cycles, J. Math. Biol. 74 (2017), 709–726.

[36] ———, Stability analysis of the Michaelis–Menten approximation of a mixed mechanism of a phosphorylation system, Math. Biosci. 301 (2018), 159–166.

[37] Boris Y. Rubinstein, Henry H. Mattingly, Alexander M. Berezhkovskii, and Stanislav Y. Shvartsman, Long-term dynamics of multisite phosphorylation, Mol. Biol. Cell 27 (2016), no. 14, 2331–2340.

[38] Carlos Salazar and Thomas Höfer, Multisite protein phosphorylation – from molecular mechanisms to kinetic models, FEBS Journal 276 (2009), no. 12, 3177–3198.

[39] Thapanar Suwanmajo and J. Krishnan, Mixed mechanisms of multi-site phosphorylation, J. R. Soc. Interface 12 (2015), no. 107.

[40] Matthew Thomson and Jeremy Gunawardena, The rational parameterisation theorem for multisite post-translational modification systems, J. Theoret. Biol. 261 (2009), no. 4, 626–636.

[41] Hwai-Ray Tung, Precluding oscillations in Michaelis-Menten approximations of dual-site phosphorylation systems, Preprint, arXiv:1712.03594 (2017).

[42] David M. Virshup and Daniel B.
Forger, Keeping the beat in the rising heat, Cell 137 (2009), no. 4, 602–604.

[43] Xiaojing Yang, Generalized form of Hurwitz-Routh criterion and Hopf bifurcation of higher order, Appl. Math. Lett. 15 (2002), no. 5, 615–621.
{ "disclaimer": "Notice: Paper or abstract available at https://arxiv.org/abs/1809.02886, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "GREEN", "url": "http://arxiv.org/pdf/1809.02886" }
2,018
[ "JournalArticle" ]
true
2018-09-08T00:00:00
[ { "paperId": "2df819c0538469793a8389679b4f4bcf6f7145bf", "title": "Design Principles of Phosphorylation-Dependent Timekeeping in Eukaryotic Circadian Clocks." }, { "paperId": "41a39996bcfe17f463f2d887eebbdec691bf3646", "title": "A Deficiency-Based Approach to Parametrizing Positive Equilibria of Biochemical Reaction Systems" }, { "paperId": "2af19e1686fc089448c4d76d70ebaa80c4847e8c", "title": "Stability analysis of the Michaelis-Menten approximation of a mixed mechanism of a phosphorylation system." }, { "paperId": "11019f340361bdc0d84b5e772042182fe9c51caa", "title": "Precluding oscillations in Michaelis-Menten approximations of dual-site phosphorylation systems." }, { "paperId": "9174452c4ee3659b87f47fcd650cf86cae211321", "title": "BioVis Explorer: A visual guide for biological data visualization techniques" }, { "paperId": "65caa10cd354f4e1a15c47b66a4486528305c8e2", "title": "Minimal oscillating subnetwork in the Huang-Ferrell model of the MAPK cascade" }, { "paperId": "213a212e2dcf3eb06c7b306925aaebab89b13d50", "title": "Parametric Sensitivity Analysis of Oscillatory Delay Systems with an Application to Gene Regulation" }, { "paperId": "4a509bbee9455029501b3228ef435485bc1413a6", "title": "Parametric Sensitivity Analysis of Oscillatory Delay Systems with an Application to Gene Regulation" }, { "paperId": "1ce02a8f82b4239ffb1f6b607cc09121ea58b4ad", "title": "Dynamics of Posttranslational Modification Systems: Recent Progress and Future Directions." }, { "paperId": "75555f0a3d2b070de86aae488553f9629e2a1890", "title": "The Structure of MESSI Biological Systems" }, { "paperId": "0245441bee85be067fe74346abfb55e0333bb123", "title": "An all-encompassing global convergence result for processive multisite phosphorylation systems." }, { "paperId": "8e43cb607788a94fd1e18c995f2f27878a883178", "title": "Identifying parameter regions for multistationarity" }, { "paperId": "7f7b8e55714cadfeff8ab27fb425668357428e7e", "title": "Long-term dynamics of multisite phosphorylation" }, { "paperId": "5b64c964040a01b678909ec13840fe2de42199e9", "title": "Global stability of a class of futile cycles" }, { "paperId": "e9d3210e8ab533a6f93ac1315ebb06c229b0a190", "title": "Detection of Hopf bifurcations in chemical reaction networks using convex coordinates" }, { "paperId": "ac7c5862a54c47dc685a6f06487cb0063818ea9d", "title": "Mixed mechanisms of multi-site phosphorylation" }, { "paperId": "0ce9439f2a73ffbc0158e962fbb327307d1848ba", "title": "Ultrasensitivity part II: multisite phosphorylation, stoichiometric inhibitors, and positive feedback." }, { "paperId": "1f9a8259c0d2baa5b5ccb1a8a9c33070927bed37", "title": "Catalytic constants enable the emergence of bistability in dual phosphorylation" }, { "paperId": "8911294959240cadf32a79549a594345be3fdcd9", "title": "A Global Convergence Result for Processive Multisite Phosphorylation Systems" }, { "paperId": "2d77ee9289092ad82423fd5563fd085b21bcbb6b", "title": "A proof of bistability for the dual futile cycle" }, { "paperId": "cc23f1466e05093ffc272cb286b6d5220828c28b", "title": "MAPK's networks and their capacity for multistationarity due to toric steady states." 
}, { "paperId": "32aace90f177a3e07cf614cf078333bbfd1f80cc", "title": "Sign Conditions for Injectivity of Generalized Polynomial Maps with Applications to Chemical Reaction Networks and Real Algebraic Geometry" }, { "paperId": "68add4e13463db5bf6c05998a6e2a1fccc1c9267", "title": "Translated Chemical Reaction Networks" }, { "paperId": "2860502531c3d2e496c3a7b0e4264d3557a866d3", "title": "Processive phosphorylation of ERK MAP kinase in mammalian cells" }, { "paperId": "92f8857732fe572154eba60a6bcb3d6fd10ed515", "title": "Chemical Reaction Systems with Toric Steady States" }, { "paperId": "2a364c9811520cb77ca9f6512142e751bf8c2fc9", "title": "The rational parameterization theorem for multisite post-translational modification systems." }, { "paperId": "7a85cdb668d5c22d3dad7715597a448bb398f262", "title": "Bistability and oscillations in chemical reaction networks" }, { "paperId": "1da168c6ee12b4f250a658a79407032ba21da779", "title": "Multisite protein phosphorylation – from molecular mechanisms to kinetic models" }, { "paperId": "0ad56a3a5d4955095f84f45ba1e77d01ff540b21", "title": "Keeping the Beat in the Rising Heat" }, { "paperId": "3d4284a2375aa9e9112c481a026dc046fa28d9af", "title": "Oscillatory Phosphorylation of Yeast Fus3 MAP Kinase Controls Periodic Gene Expression and Morphogenesis" }, { "paperId": "76f61eb3f5a3b52a6757bad7a6a2cec27db814eb", "title": "Processive phosphorylation: mechanism and biological importance." }, { "paperId": "235b1fa2aa347949d653680d7753b388de74c21c", "title": "Toric ideals and graph theory to analyze Hopf bifurcations in mass action systems" }, { "paperId": "ca6641651b3653d60280eb88776f7154b06964cb", "title": "Multisite protein phosphorylation makes a good threshold but can be a poor switch." }, { "paperId": "d5066250f3c780d964083719ca7579ffada45556", "title": "Autonomously oscillating biochemical systems: parametric sensitivity of extrema and period." }, { "paperId": "8b6f7d0b305d6cba163ef568cdc361ada56e6b7a", "title": "MATCONT: A MATLAB package for numerical bifurcation analysis of ODEs" }, { "paperId": "b307ed62d5999744414a3b13f2a98b55a4acbcf3", "title": "Generalized form of Hurwitz-Routh criterion and Hopf bifurcation of higher order" }, { "paperId": "9592c5f840a913b2ddb658fedddf591483259807", "title": "Discriminants, Resultants, and Multidimensional Determinants" }, { "paperId": "b1018b0062afae20c467e86cb752d8aacfcc8686", "title": "Criterion of Hopf Bifurcations without Using Eigenvalues" }, { "paperId": "2ce5b9b35b5941749cb614e72a69e7ee0de9184c", "title": "NONLINEAR OSCILLATIONS, DYNAMICAL SYSTEMS, AND BIFURCATIONS OF VECTOR FIELDS (Applied Mathematical Sciences, 42)" }, { "paperId": "70e3da6c426ca384f78f77474cbbf00a436038a2", "title": "Nonlinear Oscillations, Dynamical Systems, and Bifurcations of Vector Fields" }, { "paperId": null, "title": "Mathematica, Version 11.3" }, { "paperId": null, "title": "A (2016) Sign conditions for injectivity" }, { "paperId": "d9bfc83ce4955238d7725f56599b70b3db04e402", "title": "Feedforward and feedback regulation of the MAPK and PI3K oscillatory circuit in breast cancer." 
}, { "paperId": "2913d69df8537296d587a1831d8debb5258dcaba", "title": "A simple application of implicit function theorem" }, { "paperId": null, "title": "Boletin de la Asociatión Matemática Venezolana Millán MP" }, { "paperId": "985d6e713796a9fd3526d2ed47d764d67ed445f3", "title": "Matrix Theory" }, { "paperId": "bb51541c410466e28955ec9e92768e36d28c61f9", "title": "Introduction to the Implicit Function Theorem" }, { "paperId": null, "title": "Atkins' Physical Chemistry" }, { "paperId": null, "title": "Rozenvasser, On investigations of autooscillating system sensitivity, Avtomat" }, { "paperId": null, "title": "Gantmacher, Matrix theory, Chelsea" }, { "paperId": null, "title": "Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations" }, { "paperId": null, "title": "Return the k i 's and (K tot , P tot , S tot ) := φ • χ(x 1 , x 2 , x 6 ), where φ • χ is evaluated at the k i 's (and x 1" }, { "paperId": null, "title": "Using the output of Procedure 5.1, one can attempt to exhibit and analyze oscillations or Hopf bifurcations using software" }, { "paperId": null, "title": "< 0 , which we find, using Mathematica, is feasible for k 5 > 171.471. So, we pick k 5 = 172" } ]
20,839
en
[ { "category": "Medicine", "source": "external" }, { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Geography", "source": "s2-fos-model" }, { "category": "Environmental Science", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/02b398677c3144cbb12459c81f06a8c4b350e510
[ "Medicine", "Computer Science" ]
0.884237
Open data products-A framework for creating valuable analysis ready data
02b398677c3144cbb12459c81f06a8c4b350e510
Journal of Geographical Systems
[ { "authorId": "1403960481", "name": "Daniel Arribas-Bel" }, { "authorId": "2116121340", "name": "Mark A. Green" }, { "authorId": "145192149", "name": "Francisco Rowe" }, { "authorId": "39908645", "name": "A. Singleton" } ]
{ "alternate_issns": [ "1435-5930" ], "alternate_names": [ "J Geogr Syst" ], "alternate_urls": [ "https://link.springer.com/journal/10109", "http://www.springer.com/economics/regional+science/journal/10109" ], "id": "7fac555d-41ef-410c-9c84-ed584a9cff7f", "issn": "1026-7050", "name": "Journal of Geographical Systems", "type": "journal", "url": "https://www.springer.com/economics/regional+science/journal/10109" }
This paper develops the notion of “open data product”. We define an open data product as the open result of the processes through which a variety of data (open and not) are turned into accessible information through a service, infrastructure, analytics or a combination of all of them, where each step of development is designed to promote open principles. Open data products are born out of a (data) need and add value beyond simply publishing existing datasets. We argue that the process of adding value should adhere to the principles of open (geographic) data science, ensuring openness, transparency and reproducibility. We also contend that outreach, in the form of active communication and dissemination through dashboards, software and publication are key to engage end-users and ensure societal impact. Open data products have major benefits. First, they enable insights from highly sensitive, controlled and/or secure data which may not be accessible otherwise. Second, they can expand the use of commercial and administrative data for the public good leveraging on their high temporal frequency and geographic granularity. We also contend that there is a compelling need for open data products as we experience the current data revolution. New, emerging data sources are unprecedented in temporal frequency and geographical resolution, but they are large, unstructured, fragmented and often hard to access due to privacy and confidentiality concerns. By transforming raw (open or “closed”) data into ready to use open data products, new dimensions of human geographical processes can be captured and analysed, as we illustrate with existing examples. We conclude by arguing that several parallels exist between the role that open source software played in enabling research on spatial analysis in the 90 s and early 2000s, and the opportunities that open data products offer to unlock the potential of new forms of (geo-)data.
**ORIGINAL ARTICLE**

## Open data products‑A framework for creating valuable analysis ready data

**Dani Arribas‑Bel[1] · Mark Green[1] · Francisco Rowe[1] · Alex Singleton[1]**

Received: 17 October 2019 / Accepted: 29 June 2021 / Published online: 20 October 2021
© The Author(s) 2021

**Abstract** This paper develops the notion of “open data product”. We define an open data product as the open result of the processes through which a variety of data (open and not) are turned into accessible information through a service, infrastructure, analytics or a combination of all of them, where each step of development is designed to promote open principles. Open data products are born out of a (data) need and add value beyond simply publishing existing datasets. We argue that the process of adding value should adhere to the principles of open (geographic) data science, ensuring openness, transparency and reproducibility. We also contend that outreach, in the form of active communication and dissemination through dashboards, software and publication, is key to engage end-users and ensure societal impact. Open data products have major benefits. First, they enable insights from highly sensitive, controlled and/or secure data which may not be accessible otherwise. Second, they can expand the use of commercial and administrative data for the public good leveraging on their high temporal frequency and geographic granularity. We also contend that there is a compelling need for open data products as we experience the current data revolution. New, emerging data sources are unprecedented in temporal frequency and geographical resolution, but they are large, unstructured, fragmented and often hard to access due to privacy and confidentiality concerns. By transforming raw (open or “closed”) data into ready to use open data products, new dimensions of human geographical processes can be captured and analysed, as we illustrate with existing examples. We conclude by arguing that several parallels exist between the role that open source software played in enabling research on spatial analysis in the 90s and early 2000s, and the opportunities that open data products offer to unlock the potential of new forms of (geo-)data.

**Keywords** Geographic data science · Open data · Open source

- Dani Arribas‑Bel D.Arribas-Bel@liverpool.ac.uk

1 Geographic Data Science Lab, Department of Geography and Planning, University of Liverpool, Roxby Building, 74, Bedford St S., Liverpool L69 7ZT, UK

**JEL Classification** C55 · C63 · C80

### 1 Introduction

In the current era of digital transformation, data are a central pillar of the global economy and society. We have passed the point at which more data are being collected than can be physically stored (Lyman and Varian 2003; Gantz et al. 2007; Hilbert and López 2011).[1] In addition to traditional forms of data, such as social surveys and censuses, major technological innovations have enabled an explosion in the generation, collection and use of new forms of data (Timmins et al. 2018). Networked sensors embedded in electronic devices, such as mobile phones, satellites, vehicles, smart energy meters, computers, GPS trackers and industrial machines, can now sense, create and store data on locations, transactions, operations and people. Social media, web search engines and online shopping platforms have also spurred this data revolution by recording and storing users’ activity and personal information.
Data are created as a by-product of interaction with these technological systems. While they are often not designed for research purposes, they can bring value for answering research questions (Timmins et al. 2018). The world's technological capacity to store, communicate and share information has significantly expanded. In 2018, companies worldwide were estimated to have generated and stored an excess of 33 zettabytes,[2] seven exabytes of new data (Cisco 2018). Networked sensor technology in the financial services, manufacturing, healthcare and media and entertainment industries was estimated to account for 48 percent of global data generation in 2018 (Cisco 2018). In July 2019, 66 percent (over 5 billion people) of the world's population were estimated to use mobile phones, 56 percent (over 4.3 billion) to be internet users, and 46 percent (over 3.4 billion) to comprise active social media users, whose penetration is growing at over 7 percent per year (Hootsuite and We Are Social 2019).

Despite the growing volume and speed of data collection and storage, only a small share are actually used. In 2019, a global study found that most organisations analysed less than half of the data they collected (Splunk 2019). In 2018, a similar global survey estimated that 96% of all generated data in the engineering and construction industry goes unused (Snyder et al. 2018). In 2011, only a small share of scientists from a survey of 1700 leading scientists reported regularly using and analysing large data sets (Science Staff 2011): just 12 percent reported data sets exceeding 100 gigabytes, with a smaller share still reporting data sets exceeding 1 terabyte (Science Staff 2011).

The low utilisation rate of data may be reflective of barriers to access, as well as an inability to process such vast quantities of information efficiently. Two key challenges involve privacy and confidentiality concerns, as well as the unstructured nature of data production and storage (Hanson et al. 2011; Manyika et al. 2015). Privacy and confidentiality concerns restrict access to data collected by companies and government agencies. The frequency, detail and geographical granularity of data being generated are unprecedented, and therefore ensuring their privacy, confidentiality and integrity is critical. While legislation has been slow in responding to the changing landscape of digital data, it is now evolving in this direction. Major changes to ensure data protection and privacy were made in the EU General Data Protection Regulation (GDPR), which came into effect in 2018.

[1] Lyman and Varian (2003), for instance, estimated that 5 exabytes of new data generated through electronic channels, such as telephone, radio, television and the Internet, were stored globally in 2002, but that more than three times that amount (i.e. 18 exabytes) were produced and not stored. Gantz et al. (2007) estimated that the amount of digital data created and replicated (255 exabytes) exceeded the storage capacity available (246 exabytes) in 2007. Hilbert and López (2011) estimated that global general-purpose computing capacity (a measure of the ability to generate and process data) grew at an annual rate of 58 percent, while global storage capacity grew at an annual rate of 23 percent between 1986 and 2007.

[2] One zettabyte is equivalent to 10^21 bytes. To visualise this, it would take one billion 1 TB external hard drives to store a zettabyte of data.
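As a quick worked check of the visualisation in footnote 2 (our own arithmetic, not part of the original text), dividing one zettabyte by the capacity of a single 1 TB drive gives

\[ \frac{10^{21}\ \text{bytes}}{10^{12}\ \text{bytes per 1 TB drive}} = 10^{9}\ \text{drives}, \]

that is, one billion drives, as stated.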
Innovative institutional arrangements, such as data collaboratives (Verhulst, Young and Srinivasan 2017; Klievink et al. 2018) or services (e.g. the Consumer Data Research Centre in the UK), have developed data sharing protocols and secure environments to facilitate access to commercial and administrative data for research purposes.

New forms of data are often highly unstructured and messy. They are produced in multiple formats, including videos, images and text, and are stored in various organisational structures. Data are often not random samples of populations and are collected for specific administrative, business or operational purposes, and not necessarily for research (Hand 2018; Meng 2018; Timmins et al. 2018). In their original form, new forms of data are thus not readily usable, limiting their applications. Significant data engineering is required, involving the use and design of specialised methods, software and expert knowledge, and linkage to other data sources (Hand 2018). To our knowledge, no formal analytical framework has been developed to chart the critical data engineering processes needed to develop purposely-built data products.

In this paper, we propose and develop the idea of Open Data Products (ODPs) as a framework to transform raw data into Analysis Ready Data (Giuliani et al. 2017; Dwyer et al. 2018), and identify what we contend are the key features of this framework. We define an ODP as the final data outcome resulting from adding value to raw, highly complex, unstructured and difficult-to-access data to address a well-defined problem, and making the generated data output openly available. Thus, three fundamental components characterise an ODP: its insightful utility, value added and open availability.

We argue that an open data product has two major benefits. First, it enables developing insights from scattered, and/or highly sensitive, and/or controlled, and/or secure data which may be difficult to gather and use, or may not be accessible otherwise. Second, it expands the use of commercial and administrative data for the public good, leveraging their high temporal frequency and geographic granularity. We also contend that there is a compelling need for open data products as we experience the current data revolution. New, emerging data sources are unprecedented in temporal frequency and geographical resolution, but they are large, unstructured, fragmented and often expensive to assemble and possibly hard to access due to privacy and confidentiality concerns. By transforming raw (open or "closed") data into valuable open data products, new dimensions of human geographical processes can be captured and analysed. Ultimately, ODPs may provide valuable guidance to develop appropriate policy interventions.

The paper is structured as follows: the next section defines and develops the idea of ODPs, detailing the core elements of developing them. We outline a framework that covers the initial conception of an ODP and moves through to developing and disseminating a product. Following this, we discuss some of the challenges involved in the process of developing ODPs. The fourth section introduces some case studies of ODP exemplars which highlight the potential offered by our framework. Finally, we conclude the paper by discussing the future potential of ODPs.

### 2 Defining open data products

Defining Open Data Products (ODPs) is challenging since their remit is wide and incorporates several, diverse aspects.
In some ways, they share several characteristics with traditional open data, as described in Kitchin (2014) or Janssen et al. (2012). To the extent that ODPs result in open data, they also share most of their main benefits for society (Molloy 2011). It might be intuitive to assume that making Open Data[3] available that were previously not accessible would constitute an ODP. However, building ODPs is a broader project encapsulating frameworks for product development, such as data, delivery channels, transparent processes, etc. Indeed, ODPs adhere to standard principles of product development (e.g. Bhuiyan 2011), such as end-user feedback or the prioritisation of goals, more than almost any other academic output. In this context, we define ODPs as:

The open result of transparent processes through which a variety of data (open and not) are turned into accessible information through a service, infrastructure, analytics or a combination of all of them, where each step of development follows open principles.

We argue that the key difference of an ODP over purely Open Data is the value added, which widens accessibility and use of data that would otherwise be expensive or inaccessible. Components of an ODP might include sophisticated data analysis to transform input data, digital infrastructure to host generated datasets, and dashboards, interactive web mapping sites or academic papers documenting the process. They almost always merge together data and algorithms, but this is not necessarily a requisite. While we adhere to general open principles, we recognise that not all steps of the process can (or even need to) be fully open. We also argue a need for hybrid approaches that allow for closed data to be incorporated and opened up through the creation of ODPs (Singleton and Longley 2019). Such approaches are necessary for widening access to information derived from sensitive data. The resulting product should be released as open data; ideally too, the majority of the process that results in an ODP should be open, and although it might not be possible to release every component of an ODP, those related to infrastructure, such as the computer code, platforms and algorithms required to generate output data, should be made available and transparent (Peng 2011; Singleton et al. 2016). Akin to the argument in open-source software, this is not only so that third parties can re-run every step of the process before using the data, but also to build a reproducible environment of trust that contributes to user adoption of the product's outputs (Brunsdon and Comber 2020).

[3] Open Data have numerous definitions but commonly refer to data that are released into the public domain without restrictive licenses that prevent their reuse or inclusion in derivative products. Such data are differentiated from free data (e.g. the Twitter/Facebook APIs), which may be restricted in terms of access limits, but also, importantly, in the purposes to which the data can be applied or used.

Although ODPs can take many forms and shapes, and hence differ greatly from each other, we think providing a few examples can be useful to ground an abstract term in more practical settings. We will use two case studies that together embody well, if differently, both the ethos and the building blocks of ODPs: geodemographic classifications and data generated around the COVID-19 pandemic. Below we introduce each, and we will return to different aspects in the next section.

Geodemographic classifications are created with the aim of describing the most salient characteristics of people and the areas where they live (Webber and Burrows 2018). There are various classifications spanning different countries and substantive uses across both the public and private sector (Singleton and Spielman 2014). Geodemographic classifications combine diverse sources of publicly and privately available data to generate insights about the behaviour of existing or prospective customers, service users and citizens. Technically, a geodemographic classification collates and combines disparate sources of data through a computational data reduction technique called cluster analysis, which groups areas into a set of representative clusters describing salient patterns based on their similarity across a wide range of descriptive attributes.
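To make the cluster-analysis step concrete, the sketch below builds a toy classification in Python with scikit-learn's KMeans. It is a minimal illustration under our own assumptions: the attribute names, values and number of clusters are hypothetical, and none of it is taken from the actual classifications cited above.

```python
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical area-level attributes; real classifications combine many more
# variables drawn from census, administrative and commercial sources.
areas = pd.DataFrame({
    "area_id": ["A01", "A02", "A03", "A04", "A05", "A06"],
    "pct_under_25": [38, 12, 15, 41, 10, 36],
    "pct_renting": [65, 20, 25, 70, 18, 60],
    "median_income": [21000, 48000, 45000, 19000, 52000, 23000],
})

# Standardise attributes so no single variable dominates the distance metric.
X = StandardScaler().fit_transform(
    areas[["pct_under_25", "pct_renting", "median_income"]]
)

# Group areas into k representative clusters by attribute similarity.
areas["cluster"] = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(areas[["area_id", "cluster"]])
```

In a real classification, the resulting clusters would then be profiled and labelled (e.g. "young renters"), and the area-to-cluster lookup released openly as the ODP.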
Our second set of illustrations relates to the recent COVID-19 pandemic. The need to respond rapidly and efficiently to the spread of the virus, to save lives and sustain the economy, created intense demand for actionable data and information to feed into responsive decision making. Despite the global scope of the pandemic, many of the data generation, collection and processing systems originally in place were national at most, and in many cases regional or local. To bridge the gap between the available data and the insight required, several researchers and organisations launched efforts to develop open data products. These included, for example, consolidated databases (e.g. Riffe et al. 2021) as well as ODPs derived from advanced analysis (e.g. Paez et al. 2020).

### 3 The building blocks of open data products

In this section, we outline the key components of our proposed framework for developing ODPs. First, ODPs are born out of a need or problem that requires insight and will inform many design choices. Once the need is clearly delineated, the ODP process adds value to existing data in ways that help meet the original need. Adding value usually takes two forms: potentially complex transformations, fusion and abstraction of the data, in what we call Open (Geographic) Data Science; and outreach activities to ensure the original need is addressed with the maximum impact. Throughout these explanations, we illustrate key points with the geodemographics and COVID-19 case studies introduced above.

**3.1 Identifying a problem in need of insight**

Inception of an ODP begins with the identification of a concept or idea to address some problem that requires insight. Developing meaningful products often requires thinking less about 'what' a product might be, and more about 'who' might use it and what they would want to know. As such, identifying end users, understanding opportunities for satisfying their needs and mapping such opportunities to what is possible with the available data, skills and resources can help to focus ODPs and maximise their relevance. We would like to highlight that this stage is usually followed in the research process (i.e. thinking about the "research question"), but that is not always the case in processes that result in open data. In fact, several open datasets are explicitly released as a side effect of the data existing for other purposes, and their release does not always have a clear end goal. While this has sometimes spurred innovation (e.g. smartphone apps built on transit data made available as open data), we want to stress that ODPs are most useful when designed for a purpose and to further a goal.
This process can be independent; however, where possible, co-designing products can be an effective approach. Co-design (or co-production) is the involvement of external partners within the research process to help create user-led and user-focused products (Ostrom 1996). It is not clear exactly what activities might be considered co-design (Filipe et al. 2017); however, this process does not necessarily have to be onerous. Building trust through collaborations can help to ensure relevant and impactful products (Klievink et al. 2018). Data or knowledge exchange can facilitate partnerships, as well as opening up new ODPs that would otherwise often not have been made available. Developing partnerships (termed data collaboratives) is relevant here; these are cross-sector initiatives for sharing or developing new data products that add value to the work undertaken by each actor in the collaboration (Klievink et al. 2018). The principles of co-design are not limited to the identification of the product. They apply to each part of our framework, and understanding end user needs is core to designing a successful product.

Perhaps the clearest example of the importance of a problem needing insight can be found in the recent pandemic. Understanding the uneven impact of the pandemic on society requires information about how different demographic groups from a wide variety of geographic contexts are affected. However, very few readily available datasets exist to understand the dynamics of the pandemic as it unfolds across different countries and different age groups. To fill this gap, Riffe et al. (2021) introduce a global demographic database of COVID-19 cases and deaths, COVerAGE-DB, enabling cross-country comparisons of experiences of the pandemic.

**3.2 Adding value**

The development of ODPs is not merely about making raw data available, as data driven innovation is more than opening up availability and use of data (Klievink et al. 2018). A key tenet of ODPs is to process, analyse and build on the original data, resulting in analysis ready data[4] (see, for example, the collection introduced by Zhu 2019). This enhances the value of the information and the opportunities for insight. The added value of ODPs can be achieved through numerous strategies, although these should ideally be linked to the first step of the framework to maximise their utility. Development of new ODPs that extend the uses of existing data creates value through producing new information. Data analysis can extract useful information or process data to create a new resource that demonstrates clear value added. Sources that cannot be made available in their raw form (often due to disclosure control or commercial sensitivity) can be made openly available through processing and manipulation into new ODPs with data owner permission.

Improving the usability of data can help increase access, particularly where data acquisition is costly, hidden or publicly unavailable. This can be even more salient when data are already available, but utilising or processing them requires advanced quantitative skills to derive information; bridging potential skills and knowledge gaps can open up existing data to a much wider audience (Klievink et al. 2018). This is pertinent for lay populations who, if ODPs are combined with interactive visualisations and resources, can engage with complex data in ways that might otherwise be unavailable to them. In such cases, value is added through focusing on the needs of the end users.

Matching or linking records can bring added value to existing databases or resources. Data linkage is the process of merging two or more independent resources or databases together based upon matching on a set of shared identifiers (Harron et al. 2017). Given the inherent costs of producing resources or collecting new data to investigate a research question, linking two or more existing sources together that could not answer the question by themselves, but possess all of the necessary information between them, may provide a more efficient solution (Harron et al. 2017). Even where data linkage is not the priority, ODPs should be set up to allow future linkage to other potential resources.
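As a minimal sketch of the record-linkage idea, the snippet below merges two hypothetical resources on a shared area identifier. The sources, fields and simple exact-match join are our assumptions; real linkage frequently requires probabilistic matching and careful disclosure control (Harron et al. 2017).

```python
import pandas as pd

# Two hypothetical, independently collected resources sharing an identifier.
health = pd.DataFrame({
    "area_id": ["A01", "A02", "A03"],
    "admissions_per_1000": [12.4, 8.1, 9.7],
})
environment = pd.DataFrame({
    "area_id": ["A01", "A02", "A04"],
    "green_space_pct": [22.0, 41.5, 35.2],
})

# Exact-match join on the shared identifier; the outer join keeps unmatched
# records visible so linkage coverage can be audited before analysis.
linked = health.merge(environment, on="area_id", how="outer", indicator=True)
print(linked)
```

Neither source alone could relate admissions to green space; linked, they can, which is the efficiency argument made above.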
By generating analysis ready data, ODPs bridge the gap between useful but inaccessible data and user needs. In doing so, they unlock potential research findings that derive from analysis relying on them, and can feed into decision making that encourages more evidence-based policy making. Geodemographics and other composite indices are an excellent example of adding value to existing datasets. These approaches manage to leverage information from multiple data sources, deriving summary measures of the latent information (Green et al. 2018; Vickers and Rees 2007) while preserving the confidentiality of the original data as required.

[4] The term "Analysis Ready Data" finds its origin in the remote sensing literature. We use it in this context because we believe the challenges and benefits of processing data before they are made available to end-users extend well beyond satellite imagery.

**3.3 Open (geographic) data science**

Various new forms of data are available in a "half-cooked" state (Spielman 2017). They are not available in a form that would be useful or accessible for interested stakeholders. For instance, sources such as open transport data are available through convoluted processes (e.g. APIs) that non-technical audiences are not able to easily access. Others, such as satellite imagery or air quality data, can be downloaded easily, but their size, complexity and unstructured nature preclude wider use. Yet others, such as purchase records from retailers, exist but have restricted access. Given the accidental nature of many of these data sources (Arribas-Bel 2014), few undergo thorough quality assurance and assessments for bias, completeness and statistical representativeness. This is an important feature which differentiates new forms of data from traditional census and survey-based sources, for which reliable infrastructure and frameworks for analysis, publication and dissemination exist.

The "unfinished" nature of new forms of data is a key feature of Data Science as a discipline. The explosion in the amount, variety and potential uses of new data has created the need for an interdisciplinary field that combines elements from areas such as statistics, computer science and information visualisation (Donoho 2017). Several new forms of data are inherently spatial, so there have been calls to establish closer links between these disciplines and Geography through GISc (Singleton and Arribas-Bel 2019), computational (Arribas-Bel and Reades 2018) and quantitative Geography (Arribas-Bel 2018).[5] This stage of the analysis has become increasingly sophisticated, with greater use of advanced algorithms and complex pipelines that transform data in useful ways.
As an illustration, Stubbings et al. (2019) developed a green space index by combining street-level imagery, state-of-the-art deep learning techniques and hierarchical modelling. Dismissing this component of every data project as merely "data cleaning" involves several risks. It diminishes the credit awarded to a step that can crucially influence the final results, which compels researchers to relegate this key task to short and vague descriptions that obscure the steps undertaken, with clear implications for the openness, transparency and reproducibility of their research (Brunsdon 2016).

We consider it vital that the (Geographic) Data Science process embedded in the generation of ODPs be as open and transparent as possible (Brunsdon and Comber 2020). Three main reasons underpin this requirement. First, as for open-source software (Raymond 1999), an open approach fosters collaboration, pools resources and avoids duplicated effort. Second, an open approach involves an explicit recognition of the limitations of the datasets generated. Third, an open approach represents a clear message to users about the commitment to honesty and transparency by the ODP creator. This is an important element. The code, packages and platforms used to create an ODP will usually be accessed only by a small fraction of its users. However, the fact that they can be checked contributes to building user trust, and ultimately to amplifying the use and impact of an ODP by attracting a larger user base.

[5] We argue that, in this context, the term "Geographic Data Science" is more appropriate to capture the set of practices that we want to refer to. For more details on the motivation, reasoning and justification, in particular on how this term relates to more established ones such as GIScience or Geocomputation, we refer the reader to Arribas-Bel and Reades (2018) and Singleton and Arribas-Bel (2019).

The open approach that we recommend to maximise the impact of ODPs operates at three layers of the (Geographic) Data Science process: analysis, methods and infrastructure. Figure 1 shows an overview of what we term the Geographic Data Science stack.

**Fig. 1 The geographic data science stack**

The top layer involves the specification of the steps taken to transform the original input data into a final ODP, which we term 'analysis'. In this context, the growing usage of computer code in research allows for the full documentation and evaluation of how products are developed (Brunsdon 2016). An open approach requires that the code generating the final dataset from the initial one(s) is available in both machine and human readable form. An increasingly popular format to meet this requirement within scientific communities is the computational notebook, such as Jupyter notebooks (Rule et al. 2019) or Rmarkdown notebooks (Casado-Díaz et al. 2017; Koster and Rowe 2019). In cases where commercial interest and copyright law prevent code sharing, so-called pseudo code with enough detail to reproduce the steps can be an acceptable compromise. Code released in the analysis stage should be specifically tailored to the development of the ODP. A good illustration of this approach is the Open SIMD project to expand on the Scottish Index of Multiple Deprivation (https://github.com/TheDataLabScotland/openSIMD).
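To illustrate what the 'analysis' layer can look like in practice, the sketch below shows a minimal, fully documented pipeline script of the kind that might be released alongside an ODP. The input schema, transformation and metadata fields are our own assumptions for illustration; they are not taken from the Open SIMD code.

```python
import json
import pandas as pd

# Step 1: load the raw input (hypothetical schema; a real script would read
# the published source files and record their versions).
raw = pd.DataFrame({
    "area_id": ["A01", "A02", "A03"],
    "income_rank": [3, 1, 2],
    "health_rank": [2, 3, 1],
})

# Step 2: transform -- here a simple combined rank; a real ODP would chain
# many such steps, each documented so third parties can audit them.
raw["combined_rank"] = (
    raw[["income_rank", "health_rank"]].mean(axis=1).rank().astype(int)
)

# Step 3: export the open output with minimal machine-readable provenance.
raw.to_csv("odp_output.csv", index=False)
with open("odp_output_metadata.json", "w") as fh:
    json.dump(
        {
            "title": "Toy combined deprivation rank",
            "inputs": ["hypothetical raw table"],
            "steps": ["load", "combine ranks", "export"],
            "licence": "CC-BY-4.0",
        },
        fh,
        indent=2,
    )
```

Releasing such a script, ideally as a computational notebook, is what makes the analysis layer auditable in both machine and human readable form.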
The second layer involves methods. More generalisable code implementing a technique that could be applied in different contexts is relegated to this level. In this case, an open approach requires methods to be packaged as an open-source software library and released following standard software engineering practices (e.g. version control and continuous integration; Wolf, Oshan & Rey 2019). This division between analysis-specific code in notebooks and more modular code in packages avoids duplication of effort and increases the clarity with which the analysis is presented. Both R (CRAN) and Python (Conda-forge) are good examples of community approaches to supporting packages; similarly, projects such as scikit-learn (Pedregosa et al. 2011) or the Tidyverse federation of packages (Wickham et al. 2019) are good illustrations of open source packages.

The third layer comprises infrastructure. The growing complexity of modern software stacks and analysis pipelines requires not only open access to the analysis and methods used, but also that the infrastructure on which the development of ODPs has been based be transparently detailed. In this context, ODPs can borrow from several advances in software development to make the data available. A prominent example is containerisation, the technology underpinning projects like Docker or Singularity, which allows the computational environment required to reproduce a set of commands to be isolated. The gds_env project (Arribas-Bel 2019) provides an illustration for the case of GDS.

Full reproducibility may not always be possible or even desirable. For example, sensitive input data may not be amenable to sharing due to disclosure risks. We argue that as much of the process, from start to finish, as possible should be made available, especially when there are few barriers against it. The purpose of an ODP is to design products that add value to existing data by opening up opportunities within data that are messy or unable to be openly shared.

A good example of the value of open geographic data science can be found in the geodemographics literature. Many of the original classifications were created by the private sector, where full disclosure of the underlying methods and data inputs is not always possible given associated commercial sensitivity or intellectual property. Such an approach has drawn criticism as being "black box" (Singleton and Longley 2009). Arguably, this poses an acute issue for applications in the public sector, especially where life outcomes are at stake (Longley 2005). Responding to these concerns, there has been movement towards creating geodemographics that are more open to scrutiny. Under the umbrella of Open Geodemographics, several classifications that are fully reproducible have been created in countries such as the UK (Vickers and Rees 2007; Gale et al. 2016; Martin et al. 2018) and the US (Spielman and Singleton 2015). In these instances, code and data are disseminated openly, and these academic outputs also have associated journal articles in the peer reviewed literature. Such an approach was made possible through all of the data integral to these classifications being disseminated with open licences enabling reuse and redistribution.[6] More recent research also discusses alternative reproducible methods that might be applicable when data are sourced with wider and more restrictive licensing arrangements where full reproducibility is not possible (Singleton and Longley 2019).

[6] For example, the 2011 ONS Output Area Classification has a formal page on the Office for National Statistics website: https://www.ons.gov.uk/methodology/geography/geographicalproducts/areaclassifications/2011areaclassifications.
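Short of full containerisation, a lightweight complement for the 'infrastructure' layer is to publish a machine-readable record of the computational environment together with the ODP. The sketch below is one minimal way of doing this in Python; the package list and manifest fields are our assumptions, and container images of the kind gds_env provides remain the more robust option described above.

```python
import json
import platform
import sys
from importlib.metadata import PackageNotFoundError, version

# Packages whose versions materially affect the ODP outputs (hypothetical list).
PACKAGES = ["pandas", "numpy", "scikit-learn"]

manifest = {
    "python": sys.version.split()[0],
    "os": platform.platform(),
    "packages": {},
}
for pkg in PACKAGES:
    try:
        manifest["packages"][pkg] = version(pkg)
    except PackageNotFoundError:
        manifest["packages"][pkg] = "not installed"

# Publish alongside the data so users can rebuild a comparable environment.
with open("odp_environment.json", "w") as fh:
    json.dump(manifest, fh, indent=2)
```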
**3.4 Outreach**

The mantra 'build it and they will come' should not guide ODP development. Successful dissemination, circulation and impact should not rely on chance. Outreach activities and resources are required to encourage end users to engage with a product. These activities should be designed to guide end users in the use of the ODP. A full review of the various forms of outreach activity is beyond the scope of this paper; we focus on two main dissemination channels. It is important to recognise that several of these practices closely relate to, and take inspiration from, a variety of literatures, including those of participatory GIS (Dunn 2007) and citizen science (e.g. Haklay 2013).

A first key channel is user-focused events. These serve the purpose of refining and promoting a product. They can involve small, focused events, such as workshops with stakeholders or lay community groups, and larger public promotion campaigns. Online presence and social media can play an important role in achieving wider coverage if supported with resources and materials. Project-specific social media accounts and online presences are increasingly common. For example, the European Commission devotes an entire website to different aspects of their Global Human Settlement open data product.[7] Partnerships can also assist in the outreach process, especially when ODPs are designed to address a particular problem. For example, the "Access to Healthy Assets and Hazards" project (AHAH, Green et al. 2018) partnered with Public Health England (PHE) to make some of the data available through PHE's Public Health Profiles resource. Co-designing an ODP requires engagement and co-development of project ideas with end users at every step so that the impact of the ODP is maximised. Singleton and Longley (2019) co-developed a bespoke workplace classification in close collaboration with the Greater London Authority (GLA). The ODP is now available openly through the Consumer Data Research Centre's data repository,[8] and the GLA is using it for internal operations.

A second major channel involves the use of open-source platforms, software and resources. The integration of these assets is key to ensuring interaction and engagement of end users with ODPs, and a key principle is to enable end users with non-technical skills to interact with ODPs. Data stores comprise a useful example for making ODPs and associated meta-data available. Publishing all technical details, analytical code and documentation is important so that users can evaluate how ODPs were created and refine the project pipeline (see Paez et al. 2020, for an example of extensively documented data processes). Open-source platforms can help with this process, for example, CKAN for publishing open data, or GitHub for sharing code. Complementing these platforms should be the use of interactive resources that improve the accessibility and usability of ODPs.

[7] https://ghsl.jrc.ec.europa.eu/datasets.php
[8] http://data.cdrc.ac.uk/dataset/london-workplace-zone-classification
Examples of this approach include AHAH or the classification developed by Rowe et al. (2018) to analyse the trajectory of socio-economic inequality at the neighbourhood level in the UK. These resources comprise an interactive web mapping tool that has been used by the general public and policy makers to point and click to their local areas and engage with the resource, as well as allowing more technical users to download and analyse the information.

Journals have also emerged as a key mechanism for the explicit dissemination of ODPs. Innovative examples include Data in Brief,[9] Scientific Data[10] or REGION,[11] which publish papers explicitly focused on ODPs, rather than on research of which an ODP is a side product. In doing so, they seek to promote the creation, sharing and reuse of scientific data. Papers are peer reviewed and published under an open license. This form of publication is useful as it provides essential context, describing how ODPs have been generated, assessing their limitations and identifying potential purposes for the reuse of generated ODPs (e.g. Rowe et al. 2017), all elements hard to cover in a traditional research paper. Journals such as REGION have also started publishing computational notebooks, a key aim being their added value in communicating and disseminating ODPs (Koster and Rowe 2019). Notebooks offer interactivity, with the potential to engage policy, discipline-specific or local knowledge experts in data analysis and exploration (Rowe et al. 2020). This in turn can enable the identification of new relevant patterns or uses that may not have been reported or explicitly discussed in the original publication. These novel forms of publication provide an incentive for researchers to generate ODPs.

Outreach does not mark the end of developing ODPs. It is a continual and circular process that should incorporate constant evaluation and refinement of a product. Ideally, as data are updated, new relevant sources become available and feedback from end users is gathered, these should be incorporated to refine ODPs. Outreach should therefore be designed to maximise this refinement process, facilitating feedback from relevant users.

Examples of outreach to stakeholders and users can be found in geodemographics. Spielman and Singleton (2015) and Patias et al. (2019) produced open classifications for the US and UK, respectively. Through further interaction, engagement and outreach, the location intelligence company Carto[12] has integrated them into their portfolio of data offerings. For the initial release of the US classification, only a description of the group level (ten clusters) was included, but Carto developed new labels for the 55-cluster type level, making these available within the public domain, alongside integration into their mapping platform,[13] used by industry and government. Thanks to this effort, the original classifications are openly accessible via their API and can be viewed within an interactive map, improving their ease of access, engagement and dissemination.

[9] https://www.journals.elsevier.com/data-in-brief/
[10] https://www.nature.com/sdata/
[11] https://openjournals.wu-wien.ac.at/ojs/index.php/region/
[12] https://carto.com
[13] The Carto blog describing the work can be found here: https://carto.com/blog/demographic-clusters-segmentation-data-observatory/
ODP development and outreach have also been instrumental in supporting responses to the COVID-19 pandemic. For example, the Local Data Spaces project in the UK saw researchers working with Local Government practitioners to co-produce data insights using data held in secure and centralised researcher data environments (Leech et al. 2021). The aim was to help Local Authorities access these data directly, or to undertake research on their behalf, allowing them to gain insights from data they did not have access to (including timely COVID-19 data deposited by the Office for National Statistics (ONS) and not available elsewhere). Through continual, repeated meetings with the team, researchers were able to co-design how Local Authorities wanted ODPs shared. Short computational notebooks were one solution, embedding descriptive data analyses as 'conversation starters' to show what data insights could be produced and to help Local Authorities see the 'art of the possible' (rather than sharing analysis ready data initially). For example, through sharing notebooks mapping asymptomatic COVID-19 test site accessibility in Liverpool, Liverpool City Council asked where to locate new sites, and the team were then able to focus on generating optimised locations to improve access (Green 2021). The added value of using notebooks meant that any analysis run for one Local Authority could be replicated for any other local area, resulting in all Local Authorities benefitting from insights during the co-production process.
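As a hedged sketch of the kind of accessibility analysis such notebooks embedded, the snippet below finds the nearest test site to each neighbourhood centroid with a k-d tree. The coordinates are synthetic and the straight-line distances a simplification; the published work (Green 2021) relied on more realistic access measures.

```python
import numpy as np
from scipy.spatial import cKDTree

# Synthetic projected coordinates (metres) for neighbourhood centroids and
# test sites; a real notebook would load these from geodata files.
neighbourhoods = np.array([[0, 0], [1200, 300], [2500, 2100], [400, 1800]], dtype=float)
test_sites = np.array([[300, 200], [2300, 2000]], dtype=float)

# Query a k-d tree for the nearest site to every neighbourhood centroid.
distances, nearest = cKDTree(test_sites).query(neighbourhoods)

for i, (d, s) in enumerate(zip(distances, nearest)):
    print(f"neighbourhood {i}: nearest site {s} at {d:.0f} m")
```

Rerunning the same notebook with another authority's sites and centroids is what made the analysis portable across Local Authorities.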
### 4 Challenges

Open Data are a good example of a Public Good, being both "non-rivalrous" and "non-excludable". Open Data are, however, not free. There are direct costs associated with their collection, extraction, preparation and release, alongside indirect costs such as the loss of potential income that might be realised through alternative licensing models (Singleton et al. 2016; Johnson et al. 2017). Moreover, their consumption does not necessarily contribute to their production. For example, we might use OpenStreetMap data and services, but never commit any new geographic features or corrections to this open map system. Although some costs might be argued as being written off over time, others remain in perpetuity, such as the cost of data hosting or download bandwidth. Such issues associated with Open Data are generally amplified when data are productised, given the additional human resource burden required in their creation, and the generation of the necessary meta data or reporting associated with their release, such as extensive technical briefings or the preparation of linked academic publications. As with Open Data, the "value" of an ODP is not realised directly (as it is free at the point of use), and balancing production costs is only likely to occur if the accounting envelops indirect benefits. For example, within some sectors where funding may be limited, an ODP might replace limited or no insight, potentially returning various economic or social benefits. Where funding is less constrained, ODPs may add value vis-a-vis commercial offerings if the insights generated are unique or complementary (Johnson et al. 2017). Capturing such value in both instances is, however, complex and lies somewhat outside the scope of this paper. Nonetheless, given the costs of Open Data and the additional burdens of ODPs, strategic planning and thought need to accompany the creation of ODPs. We would argue that some strategies adopted by the Open Source Software community might be applicable within the context of an ODP. These might include the sponsorship of ODPs by organisations who benefit from their availability, or the integration of ODPs into commercial software-as-a-service platforms (e.g. via an API). More specifically, organisations developing ODPs might also supply these within a 'freemium' model, where enhanced versions of ODPs are provided as commercial offerings.

The creation of ODPs shares similarities with the ways in which open source software is produced. It has been argued that major contributions to many open source software packages are in fact mostly the result of contributions from a limited set of developers (Krishnamurthy 2005). In a similar vein, many ODPs are created by individuals or very focused teams. As with open software with a narrow set of contributors, this creates a challenge for how ODPs can be maintained and updated over time. Low diversity in teams developing Open Source Software (OSS) has also been suggested to hinder creativity and productivity (Giuri et al. 2010), which we would argue is equally applicable to ODPs. Given these issues with OSS, one way in which projects can be sustained is through code sharing platforms, such as GitHub or Bitbucket, where new developers can find out about software, make contributions or fork developments (Peng 2011). We argue that such platforms are equally useful for sharing the code and data associated with the development of ODPs. However, they are not designed specifically for this purpose and, in essence, features are repurposed from the software developer community. The size of data that can be shared within such platforms is often limited, and where more extensive storage is required, this becomes an increasing cost burden. Although explicit data sharing platforms have emerged (e.g. figshare.com, zenodo.org, datadryad.org, dataverse.harvard.edu), these tend to focus on dissemination or archiving rather than development. Such platforms are useful for the promotion of ODPs, but are limited in functionality to support the process of remixing or updating (Singleton et al. 2016). We would argue that there is space for new platforms with features better tailored to the needs of ODP development, which, much like GitHub or Bitbucket, might reward users through public profiles detailing their contributions to different ODPs. The extent to which any community of ODP developers might be formalised and developed akin to those established within OSS will be challenging, given the positioning of this emergent area (Harris et al. 2014; Arribas-Bel 2018; Arribas-Bel and Reades 2018; Singleton and Arribas-Bel 2019). Such issues are accentuated within our current university curricula.
Within the Quantitative Social Sciences and Statistics, the focus tends to favour theory and applications of statistical models. Although the processes of software development are considered within Computer Science, these focus on applications rather than the use of code in the development of ODPs. Moreover, the recent rapid growth of Data Science has so far emphasised visualisation and new modelling techniques from the canon of machine learning and artificial intelligence. We argue that there is clearly a role for better embedding ODP development within curricula bearing components of Data Science.

Finally, for those involved in the production of knowledge through research, historically there has been limited value ascribed to the considerable extra effort required to package and document outputs from research as ODPs (Singleton et al. 2016). Within systems where impact is valued or measured, we argue that this might support engagement in the development of ODPs, given their utility as a route to stakeholder engagement.

### 5 Conclusion

This paper introduces the concept of the Open Data Product as a construct that lowers barriers for a wider audience of stakeholders to access and benefit from the (geo-)data revolution. The value in framing the challenge of making sense of new forms of data through ODPs resides in its comprehensive approach. We focus neither exclusively on technical issues, such as the current big data discourse, nor solely on governance and outreach, such as more traditional open data notions. Instead, ODPs recognise that turning disparate, unstructured and often sensitive data sources into useful and accessible information for a wider audience of stakeholders requires a combination of computational, statistical and social efforts. In doing so, we contribute to the Open Data literature by providing a framework that expands the notion of how Open Data can be generated and what can constitute the basis for generating open datasets, as well as how to ensure their final usability and reliability.

Although not fully developed in this paper, we see a clear parallel between ODPs and the role that open-source software played in democratising access to cutting edge methods and computational power in the 1990s and 2000s. Three decades ago, a series of technological advances, such as the advent of personal computing and the rapid increase of computational power (e.g. Moore's Law), provided fertile ground for experimentation in the domain of spatial analysis. Initially, however, this field of experimentation was hampered by a landscape dominated by proprietary software that was restrictive to access. Besides the obvious monetary cost, commercial software restricted access to methodological innovations as it used to be oriented to profitable market areas. In this context, OSS contributed significantly to unlocking much of the potential of new computers and helped spur an era of new research that would not have been possible otherwise.[14]

We see data, rather than computation, as the defining feature of the present technological context. To make the most of new forms of data, we need more than "just" OSS; hence the proposal for ODPs in this paper. However, we would also like to stress the relevance and crucial role that OSS has to play in a world where "raw data" are so distant from "analysis ready data". As highlighted above, ODPs can only succeed through a transparent process that can build trust among end-users.
Without the ability that currently only OSS provides to access cutting-edge techniques, and to do so in a transparent way, it is difficult to imagine successful ODPs.

Rather than definitive, our hope is for this paper to be provocative. The current data landscape is in transition, and it is very likely that several innovations are still on the not-so-distant horizon. Hence, the notion of the ODP will necessarily be an evolving one that adapts to changing conditions to remain useful and valuable. At any rate, we envision the need for novel approaches and mindsets such as those described in this paper only to increase in the coming years. There is much that the spatial analysis community has to contribute to exploiting the data deluge that is rapidly changing every aspect of society. New ways to communicate and deliver our collective advances in data intelligence and expertise to maximise societal impact are needed. We hope the ideas presented in this paper partially shape the agenda and, more generally, contribute to a wider conversation about our role in shaping this new world in the making.

[14] For a practical illustration of this statement, the reader is advised to examine the number of published papers that actively cite open-source software projects such as GeoDa (Anselin, Syabri & Kho 2006), R's spdep (Bivand et al. 2011) or PySAL (Rey & Anselin 2010).

**Open Access** This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

### References

Anselin L, Syabri I, Kho Y (2006) GeoDa: an introduction to spatial data analysis. Geogr Anal 38(1):5–22
Arribas-Bel D (2014) Accidental, open and everywhere: emerging data sources for the understanding of cities. Appl Geogr 49:45–53
Arribas-Bel D (2018) Statistics, modelling, and data science. In: Ash J, Kitchin R, Leszczynski A (eds) Digital geographies. Sage, London
Arribas-Bel D, Reades J (2018) Geography and computers: past, present, and future. Geogr Compass 12(10):751
Arribas-Bel D (2019) A containerised platform for Geographic Data Science. https://github.com/darribas/gds_env
Bhuiyan N (2011) A framework for successful new product development. J Indus Eng Manag 4(4):746–770
Bivand R, Anselin L, Berke O, Bernat A, Carvalho M, Chun Y, Lewin-Koh N (2011) spdep: spatial dependence: weighting schemes, statistics and models
Brunsdon C (2016) Quantitative methods I: reproducible research and quantitative geography. Prog Hum Geogr 40(5):687–696
Brunsdon C, Comber A (2020) Opening practice: supporting reproducibility and critical spatial data science. J Geogr Syst. https://doi.org/10.1007/s10109-020-00334-2
Casado-Díaz JM, Martínez-Bernabéu L, Rowe F (2017) An evolutionary approach to the delimitation of labour market areas: an empirical application for Chile. Spat Econ Anal 12(4):379–403
Cisco V (2018) Cisco visual networking index: forecast and trends, 2017–2022. White Paper 1(1)
Donoho D (2017) 50 years of data science. J Comput Graph Stat 26(4):745–766
Dunn CE (2007) Participatory GIS—a people's GIS? Prog Hum Geogr 31(5):616–637
Dwyer JL, Roy DP, Sauer B, Jenkerson CB, Zhang HK, Lymburner L (2018) Analysis ready data: enabling analysis of the Landsat archive. Remote Sens 10(9):1363
Filipe A, Renedo A, Marston C (2017) The co-production of what? Knowledge, values, and social relations in health care. PLoS Biol 15(5):e2001403
Gale CG, Singleton AD, Bates AG, Longley P (2016) Creating the 2011 area classification for output areas (2011 OAC). J Spat Inf Sci. https://doi.org/10.5311/JOSIS.2016.12.232
Gantz JF et al (2007) The expanding digital universe: a forecast of worldwide information growth through 2010. International Data Corporation (IDC)
Available at: https://​datar​eport​](https://datareportal.com/reports/digital-2019-global-digital-overview) [al.​com/​repor​ts/​digit​al-​2019-​global-​digit​al-​overv​iew.](https://datareportal.com/reports/digital-2019-global-digital-overview) Janssen M, Charalabidis Y, Zuiderwijk A (2012) Benefits, adoption barriers and myths of open data and open government. Inf Syst Manag 29(4):258–268 Johnson PA, Sieber R, Scassa T, Stephens M, Robinson P (2017) The cost(s) of geospatial open data. Transactions in GIS 21(3):434–445 Kitchin R (2014) The data revolution: big data, open data, data infrastructures and their consequences. Sage Klievink B, van der Voort H, Veeneman W (2018) Creating value through data collaboratives. Informa[tion Polity 23(4):379–397. https://​doi.​org/​10.​3233/​ip-​180070](https://doi.org/10.3233/ip-180070) Koster S, Rowe F (2019) Fueling Research Transparency: Computational Notebooks and the Discussion Section. REGION 6(3):1–2 Krishnamurthy S (2005) ‘Cave or community? An empirical examination of 100 mature open source projects’, First Monday. Leech S, Green MA, Macdonald J, Gibin M (2021) Using local-level data to investigate Covid-19 inequalities in England. [https://​www.​adruk.​org/​news-​publi​catio​ns/​news-​blogs/​using-​local-​level-​data-​](https://www.adruk.org/news-publications/news-blogs/using-local-level-data-to-investigate-covid-19-inequalities-in-england-404/) [to-​inves​tigate-​covid-​19-​inequ​aliti​es-​in-​engla​nd-​404/](https://www.adruk.org/news-publications/news-blogs/using-local-level-data-to-investigate-covid-19-inequalities-in-england-404/) Longley P (2005) Geographical Information Systems: a renaissance of geodemographics for public service delivery. Prog Hum Geogr 29(1):57–63 [Lyman P and Hal R. Varian (2003) "How Much Information" 2003. Retrieved from http://​groups.​ischo​ol.​](http://groups.ischool.berkeley.edu/archive/how-much-info-2003/) [berke​ley.​edu/​archi​ve/​how-​much-​info-​2003/ on 03/04/2020.](http://groups.ischool.berkeley.edu/archive/how-much-info-2003/) Manyika J. et al (2015) Interoperability Integrating multiple IoT systems enables 40 percent of potential [value. San Francisco, USA: McKinsey Global Institute. Available at: www.​mckin​sey.​com/​mgi.](http://www.mckinsey.com/mgi) Martin D, Gale C, Cockings S, Harfoot A (2018) Origin-destination geodemographics for analysis of [travel to work flows. Comput Environ Urban Syst 67:68–79. https://​doi.​org/​10.​1016/j.​compe​nvurb​](https://doi.org/10.1016/j.compenvurbsys.2017.09.002) [sys.​2017.​09.​002](https://doi.org/10.1016/j.compenvurbsys.2017.09.002) Meng XL (2018) Statistical paradises and paradoxes in big data (I): Law of large populations, big data paradox, and the 2016 US presidential election. Ann Appl Stat 12(2):685–726 Molloy JC (2011) The open knowledge foundation: open data means better science. PLoS Biol 9(12):e1001195 Ostrom E (1996) Crossing the great divide: coproduction, synergy, and development. World Dev 24(6):1073–1087 Paez A, Lopez FA, Menezes T, Cavalcanti R, Pitta MGDR (2020) A spatio-temporal analysis of the environmental correlates of COVID-19 incidence in Spain. Geogr Anal 53(3):397–421 Patias N, Rowe F, Cavazzi S (2019) A scalable analytical framework for spatio-temporal analysis of neighborhood change: a sequence analysis approach. The annual international conference on geographic information science. 
Springer, Cham, pp 223–241
Pedregosa F, Varoquaux G, Gramfort A, Michel V, Thirion B, Grisel O, Vanderplas J (2011) Scikit-learn: machine learning in Python. J Mach Learn Res 12:2825–2830
Peng RD (2011) Reproducible research in computational science. Science 334(6060):1226–1227
Raymond E (1999) The cathedral and the bazaar. Knowl Technol Policy 12(3):23–49
Rey SJ, Anselin L (2010) PySAL: a Python library of spatial analytical methods. In: Handbook of applied spatial analysis. Springer
Riffe T, Acosta E (2021) Data resource profile: COVerAGE-DB: a global demographic database of COVID-19 cases and deaths. Int J Epidemiol 50(2):390–390f. https://doi.org/10.1093/ije/dyab027
Rowe F, Casado-Díaz JM, Martínez-Bernabéu L (2017) Functional labour market areas for Chile. REGION 4(3):7–9. https://doi.org/10.18335/region.v4i3.199
Rowe F, Patias N, Arribas-Bel D (2018) Policy brief: neighbourhood change and trajectories of inequality in Britain, 1971–2011. Policy brief prepared for the UK2070 Commission, pp 1–6
Rowe F, Maier G, Arribas-Bel D, Rey S (2020) The potential of notebooks for scientific publication, reproducibility and dissemination. REGION 7(3):E1–E5. https://doi.org/10.18335/region.v7i3.357
Rule A, Birmingham A, Zuniga C, Altintas I, Huang S-C, Knight R, Moshiri N, Nguyen MH, Rosenthal SR, Perez F, Rose PW (2019) Ten simple rules for writing and sharing computational analyses in Jupyter Notebooks. PLoS Comput Biol 15(7):e1007007
Science Staff (2011) Challenges and opportunities. Science 331(6018):692–693. https://doi.org/10.1126/science.331.6018.692
Singleton A, Arribas-Bel D (2019) Geographic data science. Geogr Anal. https://doi.org/10.1111/gean.12194
Singleton AD, Longley PA (2009) Geodemographics, visualisation, and social networks in applied geography. Appl Geogr 29(3):289–298. https://doi.org/10.1016/j.apgeog.2008.10.006
Singleton AD, Longley PA (2019) Data infrastructure requirements for new geodemographic classifications: the example of London's workplace zones. Appl Geogr 109:102038. https://doi.org/10.1016/j.apgeog.2019.102038
Singleton AD, Spielman SE (2014) The past, present, and future of geodemographic research in the United States and United Kingdom. Prof Geogr 66(4):558–567
Singleton AD, Spielman S, Brunsdon C (2016) Establishing a framework for open geographic information science. Int J Geogr Inf Sci 30(8):1507–1521. https://doi.org/10.1080/13658816.2015.1137579
Snyder J, Menard A, Spare N (2018) Big data = big questions for the engineering and construction industry. White Paper. FMI, Raleigh, US
Spielman S (2017) Keynote address, CARTO spatial data science conference, Brooklyn
Spielman SE, Singleton A (2015) Studying neighborhoods using uncertain data from the American community survey: a contextual approach. Ann Assoc Am Geogr 105(5):1003–1025. https://doi.org/10.1080/00045608.2015.1052335
Splunk (2019) The state of dark data.
Report. Splunk Inc. San Francisco, California, U.S. Stubbings P, Peskett J, Rowe F, Arribas-Bel D (2019) A hierarchical urban forest index using street-level imagery and deep learning. Remote Sensing 11(12):1395 Timmins K, Green MA, Radley D, Morris M, Pearce J (2018) How has big data contributed to obesity research? a review of the literature. Int J Obes 42:1951–1962 Verhulst S, Young A and Srinivasan P (2017) An Introduction to Data Collaboratives. New York, USA: [GovLab. Available at: http://​datac​ollab​orati​ves.​org/​static/​files/​data-​colla​borat​ives-​intro.​pdf.](http://datacollaboratives.org/static/files/data-collaboratives-intro.pdf) Vickers D, Rees P (2007) ‘Creating the UK National Statistics 2001 output area classification.’ J R Stat [Soc Ser a: Stat Soc 170(2):379–403. https://​doi.​org/​10.​1111/j.​1467-​985X.​2007.​00466.x](https://doi.org/10.1111/j.1467-985X.2007.00466.x) Webber R, Burrows R (2018) The predictive postcode: the geodemographic classification of british society. SAGE, London Wickham H, Averick M, Bryan J, Chang W, McGowan L, François R, Kuhn M (2019) Welcome to the Tidyverse. J Open Sour Softw 4(43):1686 Wolf LJ, Rey SJ, Oshan TM. (2019) Open code is not enough: towards a replicable future for geographic [data science http://​ljwolf.​org/​post/​openc​ode/](http://ljwolf.org/post/opencode/) Zhu Z (2019) Science of landsat analysis ready data. Remote Sens 11:2166 **Publisher’s Note Springer Nature remains neutral with regard to jurisdictional claims in published** maps and institutional affiliations. -----
{ "disclaimer": "Notice: Paper or abstract available at https://pmc.ncbi.nlm.nih.gov/articles/PMC8528182, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "CCBY", "status": "HYBRID", "url": "https://link.springer.com/content/pdf/10.1007/s10109-021-00363-5.pdf" }
2,021
[ "JournalArticle" ]
true
2021-10-01T00:00:00
[ { "paperId": "8409cf2149de062d22c6c79c2680752e347a8031", "title": "Thinking spatially to communicate and evaluate the roll-out of ‘mass’ testing in Liverpool, 2020" }, { "paperId": "4d17e9ce4d485d3dd862cb2337847836c1f13157", "title": "Data Resource Profile: COVerAGE-DB: a global demographic database of COVID-19 cases and deaths" }, { "paperId": "3c02e4244178c3bd56434d7c5cd5d238405c64c1", "title": "The Potential of Notebooks for Scientific Publication, Reproducibility and Dissemination" }, { "paperId": "82620503cacf8ff6f8f3490e7bdf7508f1ab2021", "title": "Opening practice: supporting reproducibility and critical spatial data science" }, { "paperId": "03bb6ed47a7a639ce54ba96d83d0b877a348270f", "title": "A Spatio‐Temporal Analysis of the Environmental Correlates of COVID‐19 Incidence in Spain" }, { "paperId": "3f133afba45eecff1318de4fe6da5efdbb2114e9", "title": "Fueling Research Transparency: Computational Notebooks and the Discussion Section" }, { "paperId": "6492197d6460d54b5358ea1be9de60ed7ec2f800", "title": "Welcome to the Tidyverse" }, { "paperId": "123550fbc9f5420a2a06522e7dc3aac7d54d1b4e", "title": "Open Code is not enough: Towards a replicable future for geographic data science" }, { "paperId": "a6b13c667df9a58c1776b64b9e029aa90eb1459b", "title": "Science of Landsat Analysis Ready Data" }, { "paperId": "0abc1fcb955825792c717045b45ccd4ba9f80670", "title": "Data infrastructure requirements for new geodemographic classifications: The example of London's workplace zones" }, { "paperId": "55f80d39d1903438b2a72767ff755fe8cee98b97", "title": "Ten simple rules for writing and sharing computational analyses in Jupyter Notebooks" }, { "paperId": "6810ce73c832620d125a9ffa3af332e0accc91b8", "title": "A Hierarchical Urban Forest Index Using Street-Level Imagery and Deep Learning" }, { "paperId": "6d71c3bf52123c2fd6b11551050865311c31853c", "title": "A Scalable Analytical Framework for Spatio-Temporal Analysis of Neighborhood Change: A Sequence Analysis Approach" }, { "paperId": "160cd84c09e1f5658349a847a3c36ec039c85ca4", "title": "Policy Brief: Neighbourhood Change and Trajectories of Inequality in Britain, 1971-2011" }, { "paperId": "b29e9d93c58b0d61a111fa2af1023d842623249c", "title": "Creating value through data collaboratives" }, { "paperId": "e4a5d87a508710aabf82988878dbaef852259d98", "title": "Developing an openly accessible multi‐dimensional small area index of ‘Access to Healthy Assets and Hazards’ for Great Britain, 2016" }, { "paperId": "0cd67d7f1f66162dc48c40372c1fce9fb3d7f0ec", "title": "Geography and computers: Past, present, and future" }, { "paperId": "fdb5a2bf7f5f565cf98df71f00c02d4076c64a15", "title": "Analysis Ready Data: Enabling Analysis of the Landsat Archive" }, { "paperId": "2c0d246fb2a28eb0e54582215a0bb3a0606d3057", "title": "How has big data contributed to obesity research? 
A review of the literature" }, { "paperId": "2aa8348a499fc3d6adceb90ac54d6e13f4c757b6", "title": "Statistical challenges of administrative and transaction data" }, { "paperId": "8b58f608261132a950a5e83ec00aa3b3836ccab7", "title": "Statistical paradises and paradoxes in big data (I): Law of large populations, big data paradox, and the 2016 US presidential election" }, { "paperId": "ac9284630b27668ad6d592a6d928ab48f90139d2", "title": "The Predictive Postcode: The Geodemographic Classification of British Society" }, { "paperId": "258709301bff9401bf448ce83c47fd439ad4961f", "title": "Building an Earth Observations Data Cube: lessons learned from the Swiss Data Cube (SDC) on generating Analysis Ready Data (ARD)" }, { "paperId": "f56425ec56586dcfd2694ab83643e9e76f314e91", "title": "50 Years of Data Science" }, { "paperId": "4a6d46962d3f58d278cfb46d3ddebbb30bf275f5", "title": "Geographic Data Science" }, { "paperId": "367a9ee47615907d6787489eeb615147a7261779", "title": "Functional Labour Market Areas for Chile" }, { "paperId": "e80f918d9b6f219c84045b4a81d573aec857760e", "title": "Challenges in administrative data linkage for research" }, { "paperId": "75789f541bd5693889b5ba06166404fabfff80ea", "title": "The Cost(s) of Geospatial Open Data" }, { "paperId": "b968e8dac7355748d80ccea547ac5a0dea9a8f69", "title": "The co-production of what? Knowledge, values, and social relations in health care" }, { "paperId": "dd14b3c6a3cf0873c37c6692aa8e9f7d9a4676f2", "title": "An evolutionary approach to the delimitation of labour market areas: an empirical application for Chile" }, { "paperId": "75acbc5aeb26906953fd43fd450743c40e8dd4fc", "title": "Establishing a framework for Open Geographic Information science" }, { "paperId": "0d14e684edbf2a876d8d3ef24861ac59106ebd38", "title": "Creating the 2011 area classification for output areas (2011 OAC)" }, { "paperId": "b16c6550de331b694d1d5030a2318b4ff17e57e0", "title": "Spatial Dependence: Weighting Schemes, Statistics and Models" }, { "paperId": "69f4cf3191ba82a27d272a79f04fdc8f718c3078", "title": "Studying Neighborhoods Using Uncertain Data from the American Community Survey: A Contextual Approach" }, { "paperId": "92b8a8124872dfa576fbe9ea44d2a8ab723fe477", "title": "The Data Revolution: Big Data, Open Data, Data Infrastructures and Their Consequences" }, { "paperId": "3f558e813321e48930f0b211c4c81c37f1ee75b0", "title": "Geographers Count: A Report on Quantitative Methods in Geography" }, { "paperId": "171d0bc1598858bb728dd9373f97ca5b363b5edc", "title": "The Past, Present and Future of Geodemographic Research in the United States and United Kingdom" }, { "paperId": "27e3eb01ec182959ec1bd579b90c5a8eec3075ad", "title": "Challenges and Opportunities" }, { "paperId": "4b606805da01c61e4422fd90fe33877a6d71951c", "title": "Benefits, Adoption Barriers and Myths of Open Data and Open Government" }, { "paperId": "823b50be0da9b84c4945f947ecbacc83f2c2cc25", "title": "A framework for successful new product development" }, { "paperId": "e4625641d1072542315c14083befb43cd7096aba", "title": "Reproducible Research in Computational Science" }, { "paperId": "f41f3ad8c286a8dc509b2fa42b5febe81bcec9d8", "title": "The Open Knowledge Foundation: Open Data Means Better Science" }, { "paperId": "64d2fe89ba7f105cb96796742f4dcbf4e56bd2ca", "title": "The World’s Technological Capacity to Store, Communicate, and Compute Information" }, { "paperId": "9e7a7c605264f886bee1e0b53909f825ed6e20e6", "title": "Making Data Maximally Available" }, { "paperId": "ad4fd2c149f220a62441576af92a8a669fe81246", 
"title": "Scikit-learn: Machine Learning in Python" }, { "paperId": "1b49b613a4a3518b44803a417de2c85dfec2900f", "title": "Geodemographics, visualisation, and social networks in applied geography" }, { "paperId": "ba612664f5ae2a8f6c02fd369f259f216c55be23", "title": "Participatory GIS — a people's GIS?" }, { "paperId": "8e93610ebb1774904db423dd2513042cfb0bf290", "title": "Creating the UK National Statistics 2001 output area classification" }, { "paperId": "df48f0c4131672b95187bc2a0089682263a5937c", "title": "Keynote address" }, { "paperId": "49f4803a0dbdc0bf4199593c3e40e2ab8db38f9e", "title": "Geographical Information Systems: a renaissance of geodemographics for public service delivery" }, { "paperId": "2fc0516f700b490b7e13db0f0d73d05afa5e346c", "title": "Cave or Community? An Empirical Examination of 100 Mature Open Source Projects" }, { "paperId": "5eaf9f92564b1ba8060c5d1021f8cfd70ae6b36b", "title": "The cathedral and the bazaar" }, { "paperId": "8bd9fc0acc518db3d1efefe7fef37c3817bbbda5", "title": "Crossing the great divide: Coproduction, synergy, and development" }, { "paperId": null, "title": "Using local-level data to investigate Covid-19 inequalities in England" }, { "paperId": null, "title": "Digital 2019 Global Digital Overview" }, { "paperId": "6276081391d85136b2e9140ba03c82ae0f48e135", "title": "Statistics, Modelling, and Data Science" }, { "paperId": null, "title": "Open Sour Softw" }, { "paperId": null, "title": "The state of dark data" }, { "paperId": "2f1f46b45be18b117660682e84e93b31654c4c2a", "title": "Origin-destination geodemographics for analysis of travel to work flows" }, { "paperId": null, "title": "Harfoot A (2018) Origin-destination geodemographics for analysis" }, { "paperId": null, "title": "Big Data = Big Questions for the Engineering and Construction Industry" }, { "paperId": null, "title": "Quantitative methods I: reproducible research and quantitative geography" }, { "paperId": "390715d7f73579d0fe508e494bea2ef5929c00b4", "title": "University of Birmingham Accidental, open and everywhere: Emerging data sources for the understanding of cities" }, { "paperId": null, "title": "Interoperability Integrating multiple IoT systems enables 40 percent of potential value" }, { "paperId": "43789305e5d2212da05f9c16b148e84aae5614b2", "title": "Citizen Science and Volunteered Geographic Information: Overview and Typology of Participation" }, { "paperId": "6ac3d1790c7be421f47be53f337c7eda0f57224a", "title": "PySAL: A Python Library of Spatial Analytical Methods" }, { "paperId": null, "title": "The expanding digital universe: a forecast of worldwide information growth through" }, { "paperId": "d8f16b07147e2cbedd8227ea5a8ef2cd7f8ae413", "title": "GeoDa: An Introduction to Spatial Data Analysis" }, { "paperId": "0f7cb51fef6ff49ce293534e58935acf1cf5df73", "title": "Skills, Division of Labor and Performance in Collective Inventions. Evidence from the Open Source Software" }, { "paperId": null, "title": "How Much Information" }, { "paperId": null, "title": "Policy Brief prepared for UK2070 Commission, pp" }, { "paperId": null, "title": "Open data products‑A framework for creating valuable analysis…" } ]
15,175
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Engineering", "source": "s2-fos-model" }, { "category": "Computer Science", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/02b64b590630a0ebcbf2f75e045c1d933e3f2667
[ "Computer Science" ]
0.865984
A Data-Driven Distributed Adaptive Control Approach for Nonlinear Multi-Agent Systems
02b64b590630a0ebcbf2f75e045c1d933e3f2667
IEEE Access
[ { "authorId": "102908639", "name": "Xian Yu" }, { "authorId": "1682560", "name": "S. Jin" }, { "authorId": "121866036", "name": "Genfeng Liu" }, { "authorId": "2029346528", "name": "Ting Lei" }, { "authorId": "2109922523", "name": "Ye Ren" } ]
{ "alternate_issns": null, "alternate_names": null, "alternate_urls": [ "http://ieeexplore.ieee.org/servlet/opac?punumber=6287639" ], "id": "2633f5b2-c15c-49fe-80f5-07523e770c26", "issn": "2169-3536", "name": "IEEE Access", "type": "journal", "url": "http://www.ieee.org/publications_standards/publications/ieee_access.html" }
In this paper, the distributed leader-follower consensus tracking problem is investigated for unknown nonlinear non-affine discrete-time multi-agent systems. Via a dynamic linearization method applied both to the agent system and to the local ideal distributed controller, a distributed adaptive control scheme is proposed using the Newton-type optimization method. The proposed approach is data-driven, since only local measurement information exchanged among neighboring agents is utilized in the control system design. The consensus tracking stability of the proposed approach is rigorously guaranteed for both fixed and switching communication topologies. Simulations are conducted to verify the effectiveness of the proposed approach.
Received October 31, 2020, accepted November 11, 2020, date of publication November 17, 2020, date of current version November 30, 2020. _Digital Object Identifier 10.1109/ACCESS.2020.3038629_

# A Data-Driven Distributed Adaptive Control Approach for Nonlinear Multi-Agent Systems

XIAN YU 1, SHANGTAI JIN 1, GENFENG LIU 1, TING LEI 2, AND YE REN 3
1School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing 100044, China
2College of Electrical and Information Engineering, Zhengzhou University of Light Industry, Zhengzhou 450002, China
3School of Electrical and Control Engineering, North China University of Technology, Beijing 100144, China

Corresponding author: Xian Yu (yuxian@bjtu.edu.cn)

This work was supported in part by the National Natural Science Foundation of China (NSFC) under Grant 61433002 and Grant 61833001. The associate editor coordinating the review of this manuscript and approving it for publication was Fei Chen.

**ABSTRACT** In this paper, the distributed leader-follower consensus tracking problem is investigated for unknown nonlinear non-affine discrete-time multi-agent systems. Via a dynamic linearization method applied both to the agent system and to the local ideal distributed controller, a distributed adaptive control scheme is proposed using the Newton-type optimization method. The proposed approach is data-driven, since only local measurement information exchanged among neighboring agents is utilized in the control system design. The consensus tracking stability of the proposed approach is rigorously guaranteed for both fixed and switching communication topologies. Simulations are conducted to verify the effectiveness of the proposed approach.

**INDEX TERMS** Dynamic linearization, data-driven control, adaptive control, multi-agent systems, consensus tracking.

**I. INTRODUCTION**

The past two decades have witnessed a burgeoning research direction in the automatic control of interconnected systems [1], [2], the well-known multi-agent systems (MASs). Cooperative control of MASs aims to exploit local interactive control protocols among networked agents to achieve a global objective that would be difficult for a single agent to accomplish. Owing to its powerful potential applications [3]–[5], considerable attention has been attracted to different cooperative tasks, such as consensus, formation, coverage control, flocking, and containment control. Among these topics, consensus control is an important and fundamental problem; remarkable results have been obtained from different perspectives, and readers are referred to [6]–[8] and the references therein.

Following the pioneering works [9], [10] on consensus, many scholars have investigated a variety of consensus problems. For instance, in [11]–[13], the distributed consensus of linear continuous-time and discrete-time homogeneous systems was discussed, and some works, such as [14], [15], extended these results to heterogeneous network systems. Since almost all physical dynamics of controlled systems in practice are inherently nonlinear, the aforementioned control schemes cannot be directly applied to nonlinear systems. Recently, adaptive control schemes have been developed for nonlinear MASs [6], [7]. However, these works are usually based on the availability of the dynamic models or structural information of the controlled MAS.
This means that first principles or an identification method is required for these distributed control schemes, which suffer from the problems of unmodeled dynamics and model/controller reduction [16].

For the control of unknown systems, data-driven control methodologies are an active research topic, among which model-free adaptive control (MFAC), proposed by Hou in [17] and further developed in [18], is valuable. MFAC has been extended from the original single-input single-output systems [19], [20] and multi-input multi-output systems [21], [22] to nonlinear MASs [23]–[25]. Dynamic-linearization-based control methods have also been successfully applied in many practical settings, such as servo motor systems [26] and exoskeleton robotic systems [27]; a detailed account can be found in [16]. However, several challenging issues remain open for data-driven consensus control of unknown nonlinear discrete-time MASs. One issue is how to design a distributed consensus controller structure through a systematic approach: local controller structures, such as the distributed proportional and proportional-integral control laws in [7], [28], are usually fixed a priori by experience, which makes it difficult to judge their appropriateness or effectiveness in applications. Another issue is how to design a distributed control-gain updating algorithm when only local measurements are available: the control gains in existing distributed control schemes are usually calibrated heuristically and chosen as fixed values when the physical dynamics of the controlled MAS are unknown. The dynamic-linearization-based control methods motivate us to explore a novel data-driven distributed consensus tracking approach that addresses these issues. Compared with existing distributed control methods, the main contributions of this paper are as follows.

- Provide a systematic way of directly designing the distributed controller structure for unknown MASs on the consensus tracking task; the designed distributed controller is independent of the controlled MAS.
- Propose a data-driven distributed adaptive control approach, in which both the local control law and the control-gain updating algorithm are designed using only the local information exchanged among neighboring agents.
- Establish the consensus tracking stability properties of the proposed approach under fixed and switching topologies.

The rest of this paper is organized as follows. Section II formulates the consensus tracking problem. Section III presents the main results, including the designed control law, the distributed control-gain updating algorithm, the adaptive control approach, and its convergence properties. Section IV conducts some simulations. Section V provides concluding remarks. Throughout this paper, $\|\cdot\|$ denotes any generic vector or matrix norm.

**II. PROBLEM FORMULATION**

The communication topology of a leader-follower system, including the leader, is represented by the graph $\mathcal{G}$, where the topology among agents is fixed and directional. The leader's command is accessible only to a subset of the follower agents, with unidirectional paths from the leader to those followers. Each follower agent exchanges local measurement information only with its neighboring follower agents under a directed graph. It is assumed that the topology among the follower agents is a fixed strongly connected graph and that at least one follower agent communicates with the leader.
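To make these graph conventions concrete, here is a minimal NumPy sketch (the variable names are ours, and the specific weights correspond to the fixed topology of Example 1 in Section IV as we read it) encoding the follower adjacency weights $a_{q,p}$, the leader-access flags $d_q$, and a reachability-based check of strong connectivity:

```python
import numpy as np

# A[q, p] = a_{q,p}: 1 if follower q receives information from follower p.
A = np.array([[0, 1, 1, 0],
              [0, 0, 0, 1],
              [1, 0, 0, 0],
              [0, 1, 1, 0]], dtype=float)

# d[q] = d_q: 1 if follower q receives the leader's command directly.
d = np.array([1.0, 0.0, 0.0, 1.0])

def strongly_connected(adj: np.ndarray) -> bool:
    """A digraph is strongly connected iff (I + A)^n is entrywise positive."""
    n = adj.shape[0]
    power = np.linalg.matrix_power((adj > 0).astype(float) + np.eye(n), n)
    return bool((power > 0).all())

assert strongly_connected(A), "follower graph must be strongly connected"
assert d.sum() >= 1, "at least one follower must hear the leader"
```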
We consider a set of $N$ heterogeneous nonlinear non-affine discrete-time follower agents, where the physical model of follower agent $q$, $q = 1, 2, \ldots, N$, is described by

$$y_q(k+1) = f_q\big(y_q(k), \ldots, y_q(k-n_y),\, u_q(k), \ldots, u_q(k-n_u)\big), \quad (1)$$

where $y_q(k) \in \mathbb{R}$ and $u_q(k) \in \mathbb{R}$ are the system output and control input of agent $q$ at time instant $k$, respectively, with $k = 1, 2, \ldots$; $n_y \in \mathbb{Z}_+$ and $n_u \in \mathbb{Z}_+$ are the unknown orders of the system outputs and control inputs of agent $q$; and $f_q(\cdot): \mathbb{R}^{n_y+n_u+2} \mapsto \mathbb{R}$ is an unknown nonlinear function.

Following [19], the agent system (1) can be transformed into the following equivalent dynamic linearization data model:

$$\delta y_q(k+1) = \psi_q(k)\,\delta u_q(k), \quad (2)$$

where $\delta y_q(k+1) = y_q(k+1) - y_q(k)$ and $\delta u_q(k) = u_q(k) - u_q(k-1)$; the unknown time-varying parameter $\psi_q(k)$ is called the pseudo partial derivative (PPD) of the agent system (1), satisfying $|\psi_q(k)| \le b_p$, where $b_p > 0$ is a known constant.

_Assumption 1:_ The sign of $\psi_q(k)$ remains invariant, satisfying $\psi_q(k) > 0$ or $\psi_q(k) < 0$ for all $k = 1, 2, \ldots$. Without loss of generality, we consider $0 < \psi_q(k) \le b_p$ in this paper.

_Remark 1:_ The condition $0 < \psi_q(k) \le b_p$ in Assumption 1 implies that the control direction is known and positive. This condition is reasonable, since many practical systems, such as autonomous underwater vehicles, unmanned aerial vehicles, and mobile robots, have this property. Note that the time-varying parameter $\psi_q(k)$ is a concept only in the mathematical sense, and its existence is rigorously guaranteed by the theorem in [19], whereas the time-invariant parameters usually introduced in traditional adaptive control represent physical variables of the controlled plant. It can be seen from the theorem in [19] that $\psi_q(k)$ is time-varying even if the controlled plant (1) is linear time-invariant, since $\psi_q(k)$ is related only to the system outputs and control inputs up to time $k$. Moreover, all possible properties of the controlled system (1), such as nonlinearity and time-varying parameters or structures, are absorbed into $\psi_q(k)$; this may make its analytical characterization complicated, but its numerical behavior is simple and easily estimated. This implies that $\psi_q(k)$ is capable of managing properties that are difficult to handle in the framework of traditional adaptive control. More detail on this parameter can be found in [29].

Equation (2) is only a data model that is equivalent to (1) in the mathematical sense; it has no physical meaning. The equivalent transformation is achieved using the compact form dynamic linearization method, and the detailed derivations can be found in [19] and [29]. The data model (2) is related only to the system outputs and control inputs of the controlled plant; it includes, neither explicitly nor implicitly, the parametric or structural information of the plant's physical dynamics. In addition, the data model (2) serves only the purpose of control system design and is not suitable for other objectives, such as monitoring and diagnosis.
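As a toy numerical illustration of the data model (2) (the plant and all names here are ours, not the paper's), one can simulate a nonlinear non-affine map, record its input-output data, and inspect the realized ratio $\delta y(k+1)/\delta u(k)$, which plays the role of the PPD whenever $\delta u(k) \ne 0$:

```python
import numpy as np

def plant(y_prev, y, u):
    # An illustrative nonlinear non-affine map in the spirit of (1).
    return y / (1.0 + y * y) + u + 0.2 * np.sin(u * y_prev)

rng = np.random.default_rng(0)
T = 200
u = rng.uniform(-1.0, 1.0, size=T)
y = np.zeros(T + 1)
for k in range(1, T):
    y[k + 1] = plant(y[k - 1], y[k], u[k])

du = u[1:] - u[:-1]          # delta u(k),   k = 1..T-1
dy = y[2:] - y[1:-1]         # delta y(k+1), k = 1..T-1
mask = np.abs(du) > 0.05     # skip near-zero input increments
psi = dy[mask] / du[mask]    # realized PPD samples; [19] gives the conditions
                             # under which a bounded psi is guaranteed to exist
print(f"realized PPD samples in [{psi.min():.2f}, {psi.max():.2f}]")
```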
The consensus tracking problem of the leader-follower MAS described by (1) under the graph $\mathcal{G}$ is summarized as follows. The leader's command at time instant $k$ is denoted by $y_d(k)$. The global control objective is to develop a data-driven distributed adaptive control approach that drives the system output $y_q(k)$ to $y_d(k)$ as the time instant $k$ tends to infinity, that is,

$$\lim_{k\to\infty} e_q(k) = \lim_{k\to\infty}\big(y_d(k) - y_q(k)\big) = 0,$$

even though only the local information among neighboring agents is used and the leader's command is accessible only to a subset of the follower agents.

To describe the local measurement information of agent $q$ with its neighbors, we define the following distributed tracking error $\xi_q(k)$ under $\mathcal{G}$:

$$\xi_q(k) = \sum_{p \in N_q} a_{q,p}\big(y_p(k) - y_q(k)\big) + d_q\big(y_d(k) - y_q(k)\big), \quad (3)$$

where $N_q$ denotes the set of neighbors of agent $q$; $a_{q,p} = 1$ if agent $q$ can receive information from its neighboring agent $p$ and $a_{q,p} = 0$ otherwise (in particular, $a_{q,q} = 0$); and $d_q = 1$ if agent $q$ receives the leader's command $y_d(k)$, otherwise $d_q = 0$.

The first issue in developing the distributed control approach is the structural design of a distributed control law: since the physical model of the MAS described by (1) is completely unknown, so far there has been no systematic way to determine the distributed controller structure. The second issue concerns the design of the distributed control-gain updating algorithm using only local measurement information. The last important issue is the stability of the resulting data-driven distributed control approach. In the next section, these issues are discussed in detail.

**III. MAIN RESULTS**

_A. DISTRIBUTED CONTROL LAW_

This subsection considers the design of the distributed controller structure using only known local information. Assume that there exists, in theory, an ideal distributed consensus controller that makes the system output of agent $q$ equal $y_d(k+1)$ one step ahead. The ideal distributed controller can be written in the following mathematical form:

$$u_q(k) = C_q\big(\xi_q(k+1), \ldots, \xi_q(k-n_e+2),\, u_q(k-1), \ldots, u_q(k-n_c)\big), \quad (4)$$

where $C_q(\cdot): \mathbb{R}^{n_e+n_c} \mapsto \mathbb{R}$ is an unknown nonlinear function, and $n_e \in \mathbb{Z}_+$ and $n_c \in \mathbb{Z}_+$ are the unknown orders of the ideal distributed controller (4) with respect to the distributed tracking errors and control inputs, respectively. The assumption underlying (4) is that it is reachable; detailed discussions are given in [30]. In practice, the controller (4) is difficult to derive, so the key task is to transform it into a practical distributed controller while keeping it equivalent to (4) in the input-output data sense. To achieve this, the following two assumptions are required.

_Assumption 2:_ The partial derivative of $C_q(\cdot)$ with respect to the distributed tracking error $\xi_q(k+1)$ is continuous.

_Assumption 3:_ $C_q(\cdot)$ satisfies the generalized Lipschitz condition; that is, if $|\delta\xi_q(k+1)| \ne 0$, then there exists an unknown constant $\beta > 0$ such that

$$|\delta u_q(k)| \le \beta\,|\delta\xi_q(k+1)|, \quad (5)$$

where $\delta\xi_q(k+1) = \xi_q(k+1) - \xi_q(k)$.

Assumption 2 is common, since many controllers, such as the distributed proportional controller [31] and the distributed adaptive controller [32], satisfy this condition. Assumption 3 requires the ideal distributed controller (4) to be stable [16].
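The local measurement (3) is the only cross-agent quantity the scheme needs. A minimal NumPy sketch (function and variable names are ours) computes it for all agents at once:

```python
import numpy as np

def distributed_errors(A, d, y, y_d):
    """Distributed tracking errors xi_q(k) of (3), all agents at once:
    xi_q = sum_p a_{q,p} (y_p - y_q) + d_q (y_d - y_q)."""
    return A @ y - A.sum(axis=1) * y + d * (y_d - y)

A = np.array([[0, 1, 1, 0],
              [0, 0, 0, 1],
              [1, 0, 0, 0],
              [0, 1, 1, 0]], dtype=float)
d = np.array([1.0, 0.0, 0.0, 1.0])
print(distributed_errors(A, d, y=np.array([0.0, 1.0, 2.0, 3.0]), y_d=6.0))
```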
_Lemma 1:_ Let the controller (4) satisfy Assumptions 2 and 3. If $|\delta\xi_q(k+1)| \ne 0$, then there exists an unknown controller parameter $\theta_q(k)$ such that (4) can be transformed into the following equivalent distributed control law using the compact form dynamic linearization (CFDL) method:

$$\delta u_q(k) = \theta_q(k)\,\delta\xi_q(k+1), \quad (6)$$

where $|\theta_q(k)| \le b_t$, and $b_t > 0$ is an unknown constant.

_Proof:_ Lemma 1 can be proved by applying the differential mean value theorem under Assumptions 2 and 3. The derivation is similar to the results in [30], [33] and is therefore omitted.

For simplicity, we label the obtained distributed control law (6) the CFDL controller (CFDLc).

_Remark 2:_ The CFDLc (6), with a time-varying linearization structure, is equivalent to the ideal distributed controller (4), which implies two points. First, the structural complexity of (6) does not increase even when the agent system (1) is highly nonlinear. Second, the CFDLc (6) can be regarded as a candidate consensus controller for unknown nonlinear MASs described by (1), since (6) can drive $e_q(k+1) = 0$ in one step. In other words, the issue of designing a distributed controller structure is resolved through the CFDL method, whereas existing distributed controller structures are usually given in an ad hoc way.

_Remark 3:_ Lemma 1 indicates that the CFDLc (6) is independent of the agent system (1), and $\theta_q(k)$ can be obtained using only local information via data-analytic approaches when the dynamic model of agent $q$ is unavailable. It could also be obtained via model-based optimization, by substituting (6) into the agent system (1), when the dynamic model is known. This paper considers only the problem of obtaining $\theta_q(k)$ from the local information communicated to agent $q$.

The CFDLc (6) cannot be implemented in practice because of the noncausal term $\xi_q(k+1)$ in (6). Similar to [33], the following practical CFDLc is obtained:

$$\delta u_q(k) = -\theta_q(k)\,\xi_q(k), \quad (7)$$

which means that $u_q(k)$ can be computed directly from the measured $\xi_q(k)$ at the current time instant $k$. Note that (7) is not an approximation of (6), but a direct derivation from the observation that (6) can drive $e_q(k+1) = 0$.

_B. DISTRIBUTED CONTROL GAIN UPDATING ALGORITHM_

This subsection considers the second issue, tuning $\theta_q(k)$ in the CFDLc (7) using only the local information communicated to agent $q$, via the data model (2). We first consider the following control objective function:

$$J_q = \frac{1}{2}\sum_{p \in N_q} a_{q,p}\big(y_p(k+1) - y_q(k+1)\big)^2 + \frac{1}{2}\,d_q\big(y_d(k+1) - y_q(k+1)\big)^2 + \frac{1}{2}\,\lambda\,\delta u_q^2(k), \quad (8)$$

where $\lambda > 0$ is a weight factor used as a penalty on $\delta u_q(k)$. In order to obtain the optimal control gain $\theta_q(k)$ under the control objective function (8), the relationship between $y_q(k+1)$ and $u_q(k)$ for agent $q$ is required. To this end, the data model (2) is applied and rewritten as

$$y_q(k+1) = y_q(k) + \psi_q(k)\,\delta u_q(k). \quad (9)$$

Substituting the CFDLc (7) and the data model (9) into the control objective function (8), we obtain

$$J_q = \frac{1}{2}\sum_{p \in N_q} a_{q,p}\big(y_p(k+1) - y_q(k) + \psi_q(k)\theta_q(k)\xi_q(k)\big)^2 + \frac{1}{2}\,d_q\big(y_d(k+1) - y_q(k) + \psi_q(k)\theta_q(k)\xi_q(k)\big)^2 + \frac{1}{2}\,\lambda\big(\theta_q(k)\xi_q(k)\big)^2. \quad (10)$$

Equation (10) shows that the control objective function (8) has been transformed into an identification function of $\theta_q(k)$. The tuning of $\theta_q(k)$ is then achieved by applying the following Newton-type optimization method:

$$\theta_q(k+1) = \theta_q(k) - \gamma\left(\frac{\partial^2 J_q}{\partial\theta_q^2(k)}\right)^{-1}\frac{\partial J_q}{\partial\theta_q(k)} = \theta_q(k) - \gamma\,\frac{\psi_q(k)\,\xi_q(k+1) + \lambda\,\theta_q(k)\,\xi_q(k)}{\big(\lambda + \psi_q^2(k)\big)\,\xi_q(k)}, \quad (11)$$

with the resetting mechanism

$$\theta_q(k+1) = -b_t \ \text{ if } \ \theta_q(k+1) < -b_t, \qquad \theta_q(k+1) = 0 \ \text{ if } \ \theta_q(k+1) > 0, \quad (12)$$

where $\gamma \in (0, 1]$ is the step size of $\theta_q(k)$. Note that the control gain $\theta_q(k)$ need not be updated when the distributed tracking error $\xi_q(k) = 0$, since $y_d(k) = y_q(k)$ in that case; that is, agent $q$ already achieves perfect tracking.
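Written out in code, the update (11) with the resetting mechanism (12) is a one-liner. The sketch below uses our own parameter defaults, and its inputs are deliberately generic: in the realizable algorithm introduced next, $\psi_q(k)$ and the noncausal $\xi_q(k+1)$ are replaced by estimates.

```python
def update_gain(theta, psi, xi_now, xi_next, gamma=0.5, lam=1.0, b_t=0.2):
    """Newton-type update (11) followed by the resetting mechanism (12).
    psi and xi_next stand for whatever (estimated) values are available."""
    if xi_now == 0.0:            # perfect local tracking: keep the gain
        return theta
    theta = theta - gamma * (psi * xi_next + lam * theta * xi_now) / (
        (lam + psi ** 2) * xi_now)
    if theta < -b_t:             # (12): project back into [-b_t, 0]
        return -b_t
    return min(theta, 0.0)
```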
However, the control-gain updating algorithm (11) is not realizable, since $\psi_q(k)$ is unknown and $\xi_q(k+1)$ is noncausal. For simplicity, we adopt the updating algorithm given in [23] to estimate $\psi_q(k)$:

$$\hat\psi_q(k) = \hat\psi_q(k-1) + \frac{\eta\,\epsilon_q(k)\,\delta u_q(k-1)}{\mu + \delta u_q^2(k-1)}, \quad (13)$$

with the resetting mechanism

$$\hat\psi_q(k) = \hat\psi_q(k-1) \ \text{ if } \ \hat\psi_q(k) < \sigma \ \text{ or } \ \hat\psi_q(k) > b_p, \quad (14)$$

where $\epsilon_q(k) = \delta y_q(k) - \hat\psi_q(k-1)\,\delta u_q(k-1)$, $\mu > 0$ is a weight factor, $\eta \in (0, 1]$ is the step size of $\hat\psi_q(k)$, and $\sigma$ is a tiny positive constant. Based on the estimates given by (13) and (14), the noncausal term $\xi_q(k+1)$ is estimated as

$$\hat\xi_q(k+1) = \sum_{p \in N_q} a_{q,p}\big(\hat y_p(k+1) - \hat y_q(k+1)\big) + d_q\big(y_d(k+1) - \hat y_q(k+1)\big), \quad (15)$$

where

$$\hat y_p(k+1) = y_p(k) + \hat\psi_p(k)\,\delta u_p(k), \quad (16)$$
$$\hat y_q(k+1) = y_q(k) + \hat\psi_q(k)\,\delta u_q(k). \quad (17)$$

_C. SUMMARIZED DISTRIBUTED ADAPTIVE CONTROL APPROACH_

The CFDLc (7), the two updating algorithms (11) and (13) with their resetting mechanisms (12) and (14), and the distributed tracking error estimation (15) together formulate the distributed consensus tracking approach. The detailed steps are as follows.

_Step 1:_ Set $k = 1$; initialize the local measurement data and $\hat\psi_q(1)$ satisfying $\sigma \le \hat\psi_q(1) \le b_p$, and randomly set $\theta_q(1)$ satisfying $-b_t \le \theta_q(1) < 0$.

_Step 2:_ Compute the control input

$$u_q(k) = u_q(k-1) - \theta_q(k)\,\xi_q(k), \quad (18)$$

rewritten from the CFDLc (7); apply it to agent $q$, and collect $y_q(k+1)$ and $u_q(k)$.

_Step 3:_ Update the PPD estimate $\hat\psi_q(k)$ using (13) with the resetting mechanism (14).

_Step 4:_ Compute $\hat\xi_q(k+1)$ by (15) with (16) and (17).

_Step 5:_ Update the control gain

$$\theta_q(k+1) = \theta_q(k) - \gamma\,\frac{\hat\psi_q(k)\,\hat\xi_q(k+1) + \lambda\,\theta_q(k)\,\xi_q(k)}{\big(\lambda + \hat\psi_q^2(k)\big)\,\xi_q(k)}, \quad (19)$$

with the resetting mechanism (12).

_Step 6:_ Set $k = k + 1$ and go back to Step 2.

For convenience, we label the proposed approach CFDL-based distributed adaptive control (CFDL-DAC).
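The whole of Steps 1–6 fits in a short simulation loop. The sketch below is our own minimal end-to-end reading of the algorithm, not the authors' code: the topology and the four follower models are those of Section IV as we read them, the tuning values are illustrative, the index bookkeeping of (13) is simplified to use the freshest increment pair, and the plants act purely as black boxes generating input-output data.

```python
import numpy as np

def plant(q, ym1, y, um1, u):
    # The four benchmark followers of Section IV (eq. (35), as we read it);
    # arguments are y(k-1), y(k), u(k-1), u(k).
    if q == 0: return ym1 * y / (1 + ym1**2 + y**2) + 3 * u
    if q == 1: return y / (1 + y**4) + u**3
    if q == 2: return (ym1 * y * um1 + u) / (1 + ym1**2 + y**2) + u**3
    return y * u / (1 + y**6) + 2 * u

A = np.array([[0, 1, 1, 0], [0, 0, 0, 1],
              [1, 0, 0, 0], [0, 1, 1, 0]], dtype=float)   # a_{q,p}
d = np.array([1.0, 0.0, 0.0, 1.0])                        # d_q
row = A.sum(axis=1)
N, T, yd = 4, 120, 6.0
eta, mu, gamma, lam = 1.0, 1.0, 0.5, 1.0                  # (13) and (19)
bp, bt, sigma = 1.0, 0.2, 1e-4                            # bounds on psi, theta

y = np.zeros((T + 2, N))
u = np.zeros((T + 1, N))
psi = np.full(N, 0.5)            # Step 1: sigma <= psi_hat(1) <= bp
theta = np.full(N, -0.1)         # Step 1: -bt <= theta(1) < 0

def xi_of(yk):                   # distributed tracking error (3)
    return A @ yk - row * yk + d * (yd - yk)

for k in range(1, T + 1):
    xi = xi_of(y[k])
    u[k] = u[k - 1] - theta * xi                          # Step 2: (18)
    for q in range(N):
        y[k + 1, q] = plant(q, y[k - 1, q], y[k, q], u[k - 1, q], u[k, q])

    du = u[k] - u[k - 1]
    eps = (y[k + 1] - y[k]) - psi * du                    # Step 3: (13)+(14)
    cand = psi + eta * eps * du / (mu + du * du)
    psi = np.where((cand < sigma) | (cand > bp), psi, cand)

    y_hat = y[k] + psi * du                               # Step 4: (16)-(17)
    xi_hat = xi_of(y_hat)                                 # ... and (15)

    ok = np.abs(xi) > 1e-9                                # Step 5: (19)+(12)
    theta[ok] -= gamma * (psi[ok] * xi_hat[ok] + lam * theta[ok] * xi[ok]) \
                 / ((lam + psi[ok] ** 2) * xi[ok])
    theta = np.clip(theta, -bt, 0.0)

print("outputs at k = T+1:", np.round(y[T + 1], 3), "(command:", yd, ")")
```

Whether the outputs settle near the command within the horizon depends on the tuning; the paper's Example 1 reports that consensus is essentially reached after roughly 60 instants with comparable bounds.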
For simplicity, we consider the updating algorithm given in [23] to estimate ψq(k): _ψˆ_ _q(k) = ˆψq(k −_ 1) + _[ηϵ][q][(][k][)][δ][u][q][(][k][ −]_ [1)] (13) _µ + δu[2](k −_ 1) _[,]_ with the resetting mechanism _ψˆ_ _q(k) = ˆψq(k −_ 1) if _ψ[ˆ]_ _q(k) < σ or_ _ψ[ˆ]_ _q(k) > bp, (14)_ where _ϵq(k) = δyq(k) −_ _ψ[ˆ]_ _q(k −_ 1)δuq(k − 1), ----- the results in [34], [35] are obtained for continues-time multiagent systems with unknown control directions, and this paper considers discrete-time multi-agent systems where the control directions are known. Therefore, the two obstacles are required to be tackled before utilizing the proposed approach. _D. CONVERGENCE ANALYSES_ The lemma following [36] is applied to facilitate the convergence analyses. _Lemma 2: HHH_ (k) ∈ R[N] [×][N] is an irreducible stochastic matrix with positive diagonal entries and is the set of all _H_ possible HHH (k). The multiplication of Q matrixes satisfies ∥HHH (Q)HHH (Q − 1) · · · _HHH_ (1)∥≤ _ι,_ (20) where {HHH (r)|r = 1, 2, · · ·, Q, Q ∈ Z+} is the subset of the Q matrixes arbitrarily selected from, and 0 < ι < 1. _H_ _Theorem 1: Let the MAS described by (1) under the com-_ munication graph satisfying Assumptions 1–3, be con_G_ trolled by the proposed CFDL-DAC. The leader’s command _yd_ (k) is assumed to be time-invariant, namely, yd (k) ≡ _c, c_ is a constant. Then eq(k) converges to zero as k →∞ for all _q = 1, 2, · · ·, N_, if the following condition is satisfied: 1 _bt <_ _bp�_ _q=max1,···,N_ �Np=1 _[a][q][,][p][ +][ d][q]�_ _._ (21) _Proof:We let_ Substituting (26) into (24) yields the following closed-loop error dynamics: _eee(k + 1) = eee(k) + ���(k)���(k)ξξξ_ (k). (27) Then based on equation (25), it has _eee(k + 1) = eee(k) + ���(k)���(k)(LLL + DDD)eee(k)_ = �III + ���(k)���(k)(LLL + DDD)�eee(k). (28) From Assumption 1, we have that 0 < ψq(k) ≤ _bp._ Besides, III + _���(k)���(k)(LLL +_ _DDD) must be an irreducible matrix_ since the communication graph is assume to be strongly _G_ connected. Based on the resetting mechanism (12), for the matrix III +���(k)���(k)(LLL +DDD), if the condition (21) is satisfied, then there is at least one row sum of the matrix strictly less than one, which means that it is an irreducible stochastic matrix with positive diagonal entries. With equation (28), we can conclude that _eee(k + 1) = GGG(k, 1)eee(1),_ (29) where _GGG(k, 1) =_ _k_ � _j=1_ � � _III + ���(k −_ _j + 1)���(k −_ _j + 1)(LLL + DDD)_ _._ _yyy(k) = [y1(k), y2(k), · · ·, yN_ (k)][T] ∈ R[N] _,_ _uuu(k) = [u1(k), u2(k), · · ·, uN_ (k)][T] ∈ R[N] _,_ _eee(k) = [e1(k), e2(k), · · ·, eN_ (k)][T] ∈ R[N] _,_ _ξξξ_ (k) = [ξ1(k), ξ2(k), · · ·, ξN (k)][T] ∈ R[N] _,_ and rewrite equations (2) and (3) respectively as the following form based on yd (k) ≡ _c:_ _eq(k + 1) = eq(k) −_ _ψq(k)δuq(k),_ (22) _ξq(k) =_ � _aq,p�eq(k) −_ _ep(k)�_ + dqeq(k), (23) _p∈Nq_ Then equations (22) and (23) can be respectively expressed by the following vector forms: _eee(k + 1) = eee(k) −_ _���(k)δuuu(k),_ (24) _ξξξ_ (k) = (LLL + DDD)eee(k), (25) where _���(k) = diag(ψ1(k), ψ2(k), · · ·, ψN_ (k)) ∈ R[N] [×][N] _,_ _δuuu(k) = uuu(k) −_ _uuu(k −_ 1), _DDD = diag(d1, d2, · · ·, dN_ ) ∈ R[N] [×][N] _,_ and LLL ∈ R[N] [×][N] is the Laplacian matrix of the follower agents under the communication graph . _G_ Similarly, we rewrite (7) as the following vector form: _δuuu(k) = −���(k)ξξξ_ (k), (26) where ���(k) = diag(θ1(k), θ2(k), · · ·, θN (k)) ∈ R[N] [×][N] . 
Taking norms on both sides of (29) yields

$$\|e(k+1)\| \le \|G(k,1)\|\,\|e(1)\|. \quad (30)$$

Grouping the factors of $G(k,1)$ in (30) into blocks of $Q$ matrices and applying Lemma 2, we have

$$\|e(k+1)\| \le \iota^{\lfloor k/Q \rfloor}\,\|e(1)\|, \quad (31)$$

where $\lfloor\cdot\rfloor$ denotes the largest integer not exceeding the real number $k/Q$. It follows that $\lim_{k\to\infty}\|e(k+1)\| = 0$; that is, the tracking error $e_q(k)$ converges to zero as $k \to \infty$ for all $q = 1, 2, \ldots, N$. This completes the proof.
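The contraction mechanism behind (28)–(31) is easy to probe numerically. In the sketch below (our own check; the Laplacian and access matrix are those of Example 1 as we read them), admissible PPDs and gains are drawn at random at every step and the error is propagated through the resulting matrix products:

```python
import numpy as np

L = np.array([[ 2, -1, -1,  0],
              [ 0,  1,  0, -1],
              [-1,  0,  1,  0],
              [ 0, -1, -1,  2]], dtype=float)   # follower Laplacian (Example 1)
D = np.diag([1.0, 0.0, 0.0, 1.0])
bp, bt = 1.0, 0.2                               # 0 < psi <= bp, -bt <= theta < 0

rng = np.random.default_rng(1)
e = rng.normal(size=4)                          # arbitrary initial errors
for _ in range(200):
    Psi = np.diag(rng.uniform(0.05, bp, size=4))
    Theta = np.diag(rng.uniform(-bt, -0.01, size=4))
    e = (np.eye(4) + Psi @ Theta @ (L + D)) @ e
print("error norm after 200 products:", np.linalg.norm(e))
```

With $b_t = 0.2 < 1/3$, every factor has nonnegative entries, a positive diagonal, and row sums at most one, so the products shrink $e$ toward zero, mirroring Lemma 2.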
In future work the case of time-varying leader’s command will be investigated for generalizing the proposed approach given in this paper. The conditions (21) and (32) given in Theorem 1 and Corollary 1 seem from their mathematical forms that they can be ensured by simply choosing −bt ≥ _θq(k) < 0._ However, it should be noted that θq(k) is designed to achieve its automatic tuning using only the local information among neighboring agents. This is different from the most existing distributed control schemes where the control gains are usually calibrated heuristically and chosen as fixed values if the physical dynamics of a controlled plant are unknown. As presented by the control gain updating algorithm (11), this automation helps search and approximate the optimal value of the control gain in the sense of minimizing the control objective function (8). Moreover, bt is purposed to be chosen a value as large as possible under the conditions of (21) and (32), so that θq(k) can be searched in a larger space in order to better approximate the optimal value. and DDD = diag(1, 0, 0, 1). It is seen that the communication topology is strongly connected. The bound of θq(k) is set as bt = 0.2, and the **FIGURE 1. Fixed communication topology.** **IV. SIMULATION RESULTS** To illustrate the effectiveness of the proposed CFDL-DAC, three examples are simulated in this paper. The three examples consider the same nonlinear heterogeneous discrete-time MAS, where the first two examples are conducted under the fixed and switching communication topologies, respectively, with time-invariant leader’s command, and the third example is proceeded with time-varying leader’s command. The nonlinear heterogeneous discrete-time MAS consists of four follower agents, where the follower agent models are governed by  _y1(k −_ 1)y1(k) _y1(k + 1) =_ 1 + y[2]1[(][k][ −] [1)][ +][ y][2]1[(][k][)][ +][ 3][u][1][(][k][)][,] _y2(k)_ _y2(k + 1) =_ 2[(][k][)][,] 1 + y[4]2[(][k][)][ +][ u][3]  (35) _y3(k + 1) =_ _[y][3][(][k][ −]_ [1)][y][3][(][k][)][u][3][(][k][ −] [1)][ +][ u][3][(][k][)] 1 + y[2]3[(][k][ −] [1)][ +][ y][2]3[(][k][)] +u[3]3[(][k][)][,] _y4(k + 1) =_ _[y][4][(][k][)][u][4][(][k][)]_ + 2u4(k),  1 + y[6]4[(][k][)] and the initial system outputs of the four follower agents are set as y1(1) = y2(1) = y3(1) = y4(1) = 0. Furthermore, we would like to point out that the dynamic models of the simulated MAS are only for generating the input-output data, and are not involved in the control system design. _A. EXAMPLE 1: FIXED COMMUNICATION TOPOLOGY_ Fig. 1 shows the communication topology, where the leader is described by vertex 0, and only agents 1 and 4 receive the leader’s command. The communication among neighboring agents is depicted by solid arrows. We use 0 and 1 as weights for the information communicated between two adjacent follower agents, therefore the Laplacian matrix among the follower agents is given by 2 1 1 0 − − 0 1 0 1 − 1 0 1 0 − 0 1 1 2 − −   _,_ _LLL_ =   ----- **FIGURE 2. Tracking performance (Example 1).** **FIGURE 3. Tracking error (Example 1).** **FIGURE 4. Control gains (Example 1).** bound of ψq(k) is given as bp = 1. Thus it can be obtained: 1 {0.2, 0.15} < 1 × �q=max1,···,4 �4p=1 _[a][q][,][p][ +][ d][q]�_ 1 (36) = 1 3 × [≈] [0][.][3][,] which indicates that the convergence condition (21) of Theorem 1 is satisfied. We set the leader’s command as yd (k) = 6, and the simulation is executed with 120 time instants. The simulation results are shown in Figs. 2–4. Fig. 
Fig. 2 and Fig. 3 show the tracking performance and the tracking errors of the four follower agents, respectively, and Fig. 4 shows the evolving values of their control gains. The system outputs initially deviate considerably from the leader's command, but all four tracking errors decrease gradually, and consensus tracking is essentially achieved after 60 time instants. Furthermore, Fig. 4 shows that the proposed CFDL-DAC keeps tuning and updating the control gains of the four followers automatically, searching for the optimal values until consensus tracking is achieved.

_B. EXAMPLE 2: SWITCHING COMMUNICATION TOPOLOGIES_

**FIGURE 5. Switching communication topologies.**
**FIGURE 6. Tracking performance (Example 2).**
**FIGURE 7. Tracking error (Example 2).**
**FIGURE 8. Control gain (Example 2).**

In this subsection we show that the proposed CFDL-DAC also works well under switching communication topologies. The communication topology switches randomly among the three graphs of the set $\mathcal{G}_t = \{\mathcal{G}_1, \mathcal{G}_2, \mathcal{G}_3\}$ shown in Fig. 5, each of which is strongly connected. The bounds of $\theta_q(k)$ and $\psi_q(k)$ are set as $b_t = 0.12$ and $b_p = 2$, respectively, so that

$$b_t = 0.12 < \frac{1}{b_p \max_{\substack{q=1,\ldots,4 \\ m=1,\ldots,3}}\big(\sum_{p=1}^{4} a_{q,p}(m) + d_q(m)\big)} = \frac{1}{2 \times 3} \approx 0.17, \quad (37)$$

which indicates that the convergence condition (32) of Corollary 1 is satisfied. With the leader's command $y_d(k) = 4$, the simulation results are shown in Figs. 6–8. Consensus tracking is achieved, and the tracking errors of all followers converge to zero after 120 time instants, which verifies the result of Corollary 1. The automatic tuning of the control gains, shown in Fig. 8, again underpins the ability of the proposed CFDL-DAC to track the leader's command, even under switching communication topologies.

_C. EXAMPLE 3: TIME-VARYING LEADER'S COMMAND_

**FIGURE 9. Tracking performance for fixed topology (Example 3).**
**FIGURE 10. Tracking error for fixed topology (Example 3).**
**FIGURE 11. Tracking performance for switching topologies (Example 3).**
**FIGURE 12. Tracking error for switching topologies (Example 3).**

To further demonstrate the effectiveness of the proposed approach, we consider the time-varying leader's command

$$y_d(k) = 3 + 2\sin(0.9k\pi/260) + \cos(k\pi/240), \quad (38)$$

and run the simulation for 700 time instants, under both the fixed graph of Fig. 1 and the switching topologies of Fig. 5. The results, presented in Figs. 9–12, show that the outputs of the four followers rapidly approach a neighborhood of the leader's command from a large initial deviation. Although the tracking errors do not converge to zero, they reduce to a small bound.
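Swapping a time-varying command into the scheme only changes the $y_d(k)$ used in (3); the profile (38), as we read it, in code:

```python
import numpy as np

def y_d(k):
    # Time-varying leader's command (38).
    return 3.0 + 2.0 * np.sin(0.9 * k * np.pi / 260.0) + np.cos(k * np.pi / 240.0)

k = np.arange(1, 701)
print(f"y_d over 700 steps stays within [{y_d(k).min():.2f}, {y_d(k).max():.2f}]")
```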
**V. CONCLUSION**

This paper investigated a distributed leader-follower consensus tracking approach for a class of unknown nonlinear non-affine discrete-time MASs. A data-driven distributed adaptive control scheme was designed using only the local measurements exchanged among neighboring agents, via the dynamic linearization method applied both to the controlled MAS and to the ideal distributed controller. The stability of the proposed distributed adaptive control approach was rigorously guaranteed under both fixed and switching communication topologies. In the future, investigating a more general distributed adaptive control scheme and analyzing its stability properties under a time-varying leader's command are interesting topics.

**REFERENCES**

[1] F. L. Lewis, H. Zhang, K. Hengster-Movric, and A. Das, Cooperative Control of Multi-Agent Systems: Optimal and Adaptive Design Approaches. Berlin, Germany: Springer-Verlag, 2013.
[2] W. Ren and R. Beard, Distributed Consensus in Multi-Vehicle Cooperative Control: Theory and Applications. London, U.K.: Springer-Verlag, 2008.
[3] Y.-J. Zhou, G.-P. Jiang, F.-Y. Xu, and Q.-Y. Chen, "Distributed finite time consensus of second-order multi-agent systems via pinning control (August 2018)," IEEE Access, vol. 6, pp. 45617–45624, 2018.
[4] J.-Q. Liang, X.-H. Bu, Q.-F. Wang, and H. He, "Iterative learning consensus tracking control for nonlinear multi-agent systems with randomly varying iteration lengths," IEEE Access, vol. 7, pp. 158612–158622, 2019.
[5] Z. Gao, L. Wang, and S. Qiao, "Cucker-Smale flocking control of leader-follower multiagent systems with intermittent communication," IEEE Access, vol. 7, pp. 172676–172682, 2019.
[6] M. Khalili, X. Zhang, Y. Cao, M. M. Polycarpou, and T. Parisini, "Distributed fault-tolerant control of multiagent systems: An adaptive learning approach," IEEE Trans. Neural Netw. Learn. Syst., vol. 31, no. 2, pp. 420–432, Feb. 2020, doi: 10.1109/TNNLS.2019.2904277.
[7] F. Chen and W. Ren, "On the control of multi-agent systems: A survey," Found. Trends Syst. Control, vol. 6, no. 4, pp. 339–499, 2019.
[8] A. Dorri, S. S. Kanhere, and R. Jurdak, "Multi-agent systems: A survey," IEEE Access, vol. 6, pp. 28573–28593, 2018.
[9] A. Jadbabaie, J. Lin, and A. S. Morse, "Coordination of groups of mobile autonomous agents using nearest neighbor rules," IEEE Trans. Autom. Control, vol. 48, no. 6, pp. 988–1001, Jun. 2003.
[10] R. Olfati-Saber and R. M. Murray, "Consensus problems in networks of agents with switching topology and time-delays," IEEE Trans. Autom. Control, vol. 49, no. 9, pp. 1520–1533, Sep. 2004.
[11] L. Zhao, J. Yu, C. Lin, and H. Yu, "Distributed adaptive fixed-time consensus tracking for second-order multi-agent systems using modified terminal sliding mode," Appl. Math. Comput., vol. 312, pp. 23–35, Nov. 2017.
[12] K. You and L. Xie, "Network topology and communication data rate for consensusability of discrete-time multi-agent systems," IEEE Trans. Autom. Control, vol. 56, no. 10, pp. 2262–2275, Oct. 2011.
[13] W. Ren and R. W. Beard, "Consensus seeking in multiagent systems under dynamically changing interaction topologies," IEEE Trans. Autom. Control, vol. 50, no. 5, pp. 655–661, May 2005.
[14] M. di Bernardo, A. Salvi, and S. Santini, "Distributed consensus strategy for platooning of vehicles in the presence of time-varying heterogeneous communication delays," IEEE Trans. Intell. Transp. Syst., vol. 16, no. 1, pp. 102–112, Feb. 2015.
[15] Y. Zheng and L. Wang, "Consensus of heterogeneous multi-agent systems without velocity measurements," Int. J. Control, vol. 85, no. 7, pp. 906–914, Jul. 2012.
[16] Z. Hou, R. Chi, and H. Gao, "An overview of dynamic-linearization-based data-driven control and applications," IEEE Trans. Ind. Electron., vol. 64, no. 5, pp. 4076–4090, May 2017.
[17] Z. Hou, "Parameter identification, adaptive control and model-free learning adaptive control for nonlinear systems," Ph.D. dissertation, School Automat., Northeastern Univ., Shenyang, China, 1994.
[18] Z. Hou and S. Xiong, "On model-free adaptive control and its stability analysis," IEEE Trans. Autom. Control, vol. 64, no. 11, pp. 4555–4569, Nov. 2019, doi: 10.1109/TAC.2019.2894586.
[19] Z. Hou and S. Jin, "A novel data-driven control approach for a class of discrete-time nonlinear systems," IEEE Trans. Control Syst. Technol., vol. 19, no. 6, pp. 1549–1558, Nov. 2011.
[20] D. Xu, B. Jiang, and P. Shi, "Adaptive observer based data-driven control for nonlinear discrete-time processes," IEEE Trans. Autom. Sci. Eng., vol. 11, no. 4, pp. 1037–1045, Oct. 2014.
[21] Z. Hou and S. Jin, "Data-driven model-free adaptive control for a class of MIMO nonlinear discrete-time systems," IEEE Trans. Neural Netw., vol. 22, no. 12, pp. 2173–2188, Dec. 2011.
[22] D. Xu, B. Jiang, and P. Shi, "A novel model-free adaptive control design for multivariable industrial processes," IEEE Trans. Ind. Electron., vol. 61, no. 11, pp. 6391–6398, Nov. 2014.
[23] X. Bu, Z. Hou, and H. Zhang, "Data-driven multiagent systems consensus tracking using model free adaptive control," IEEE Trans. Neural Netw. Learn. Syst., vol. 29, no. 5, pp. 1514–1524, May 2018.
[24] X. Bu, Q. Yu, Z. Hou, and W. Qian, "Model free adaptive iterative learning consensus tracking control for a class of nonlinear multiagent systems," IEEE Trans. Syst., Man, Cybern. Syst., vol. 49, no. 4, pp. 677–686, Apr. 2019.
[25] R. Chi, Y. Hui, B. Huang, and Z. Hou, "Adjacent-agent dynamic linearization-based iterative learning formation control," IEEE Trans. Cybern., vol. 50, no. 10, pp. 4358–4369, Oct. 2020, doi: 10.1109/tcyb.2019.2899654.
[26] Z.-H. Pang, G.-P. Liu, D. Zhou, and D. Sun, "Data-based predictive control for networked nonlinear systems with network-induced delay and packet dropout," IEEE Trans. Ind. Electron., vol. 63, no. 2, pp. 1249–1257, Feb. 2016.
[27] X. Wang, X. Li, J. Wang, X. Fang, and X. Zhu, "Data-driven model-free adaptive sliding mode control for the multi degree-of-freedom robotic exoskeleton," Inf. Sci., vol. 327, pp. 246–257, Jan. 2016.
[28] D. Meng and Y. Jia, "Iterative learning approaches to design finite-time consensus protocols for multi-agent systems," Syst. Control Lett., vol. 61, no. 1, pp. 187–194, Jan. 2012.
[29] Z. Hou and S. Jin, Model Free Adaptive Control: Theory and Applications. New York, NY, USA: CRC Press, 2013.
[30] Y. Zhu and Z. Hou, "Data-driven MFAC for a class of discrete-time nonlinear systems with RBFNN," IEEE Trans. Neural Netw. Learn. Syst., vol. 25, no. 5, pp. 1013–1020, May 2014.
[31] T. Ma, F. L. Lewis, and Y. Song, "Exponential synchronization of nonlinear multi-agent systems with time delays and impulsive disturbances," Int. J. Robust Nonlinear Control, vol. 26, no. 8, pp. 1615–1631, May 2016.
[32] Z. Li, W. Ren, X. Liu, and M. Fu, "Consensus of multi-agent systems with general linear and Lipschitz nonlinear dynamics using distributed adaptive protocols," IEEE Trans. Autom. Control, vol. 58, no. 7, pp. 1786–1791, Jul. 2013.
[33] Z. Hou and Y. Zhu, "Controller-dynamic-linearization-based model free adaptive control for discrete-time nonlinear systems," IEEE Trans. Ind. Informat., vol. 9, no. 4, pp. 2301–2309, Nov. 2013.
[34] G. Wang, "Distributed control of higher-order nonlinear multi-agent systems with unknown non-identical control directions under general directed graphs," Automatica, vol. 110, Dec. 2019, Art. no. 108559.
[35] G. Wang, C. Wang, and X. Cai, "Consensus control of output-constrained multiagent systems with unknown control directions under a directed graph," Int. J. Robust Nonlinear Control, vol. 30, no. 5, pp. 1802–1818, Mar. 2020.
[36] S. Yang, J.-X. Xu, and X. Li, "Iterative learning control with input sharing for multi-agent consensus tracking," Syst. Control Lett., vol. 94, pp. 97–106, Aug. 2016.

XIAN YU received the B.S. degree in mechanical engineering and automation and the M.S. degree in mechatronic engineering from Guangxi University, Nanning, China, in 2012 and 2015, respectively. He is currently pursuing the Ph.D. degree with the Advanced Control Systems Laboratory, Beijing Jiaotong University, Beijing, China. From 2019 to 2020, he was a Visiting Researcher of the KIOS Research and Innovation Center of Excellence, University of Cyprus, Nicosia, Cyprus. His current research interests include data-driven control, multi-agent systems, adaptive control, model-free adaptive control, and iterative learning control.

SHANGTAI JIN received the bachelor's, master's, and Ph.D. degrees from Beijing Jiaotong University, Beijing, China, in 1999, 2004, and 2009, respectively. He is currently an Associate Professor with Beijing Jiaotong University. His current research interests include model-free adaptive control, data-driven control, learning control, and intelligent transportation systems.

GENFENG LIU received the bachelor's degree in electrical engineering and automation and the master's degree in control science and engineering from Henan Polytechnic University, Jiaozuo, China, in 2012 and 2015, respectively. He is currently pursuing the Ph.D. degree in control science and engineering with the Advanced Control Systems Laboratory, Beijing Jiaotong University, Beijing, China. His research interests include data-driven control, iterative learning control, model-free adaptive control, adaptive control, fault-tolerant control, multiagent systems, and train control systems.

TING LEI received the bachelor's degree from Zhengzhou University, Zhengzhou, China, in 2012. He is currently pursuing the Ph.D. degree with the Advanced Control Systems Laboratory, Beijing Jiaotong University, Beijing, China. He is also working with the College of Electrical and Information Engineering, Zhengzhou University of Light Industry, Zhengzhou. His current research interests include urban transportation systems, data-driven control, and optimization and control of large-scale networks.

YE REN received the bachelor's degree from Beijing Jiaotong University, Beijing, China, in 2013. He is currently pursuing the Ph.D. degree with the Advanced Control Systems Laboratory, Beijing Jiaotong University. He is also working with the School of Electrical and Control Engineering, North China University of Technology, Beijing. His current research interests include optimization and control of large-scale networks, data-driven control, and multiagent systems control.
{ "disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.1109/ACCESS.2020.3038629?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1109/ACCESS.2020.3038629, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "CCBYNCND", "status": "GOLD", "url": "https://ieeexplore.ieee.org/ielx7/6287639/8948470/09261580.pdf" }
2,020
[ "JournalArticle" ]
true
null
[ { "paperId": "dcd12a6d27fc79cad08ed283c38dfa03384fea2e", "title": "Adjacent-Agent Dynamic Linearization-Based Iterative Learning Formation Control" }, { "paperId": "695d0a78140696a410b0f9ecd6655e35e12e4e54", "title": "Distributed Fault-Tolerant Control of Multiagent Systems: An Adaptive Learning Approach" }, { "paperId": "285e8df98886ebfa3d13bba9e22207d5808ba32d", "title": "Consensus control of output‐constrained multiagent systems with unknown control directions under a directed graph" }, { "paperId": "6b675f32e60383ee98177384f0666b5e1217b30c", "title": "Distributed control of higher-order nonlinear multi-agent systems with unknown non-identical control directions under general directed graphs" }, { "paperId": "32dffdf55ac7d340aff4eccd491d5a89d4f58e62", "title": "Iterative Learning Consensus Tracking Control for Nonlinear Multi-Agent Systems With Randomly Varying Iteration Lengths" }, { "paperId": "e2a7b6f5e817a70a1a1645ba1a964626d3c6002e", "title": "On the Control of Multi-Agent Systems: A Survey" }, { "paperId": "1de86a998a07bdd339dabd51e6703605cb6df466", "title": "Model Free Adaptive Iterative Learning Consensus Tracking Control for a Class of Nonlinear Multiagent Systems" }, { "paperId": "b9073df6b915c0ebf92277b71cd2e3c805c29316", "title": "On Model-Free Adaptive Control and Its Stability Analysis" }, { "paperId": "ca872af455a646e22bf3d016d0c90f751140c56e", "title": "Data-Driven Multiagent Systems Consensus Tracking Using Model Free Adaptive Control" }, { "paperId": "04e348d70aab71b6b346e45e632136380ac5a88f", "title": "Distributed adaptive fixed-time consensus tracking for second-order multi-agent systems using modified terminal sliding mode" }, { "paperId": "0fe543ca1755ed50e2e110e8827ba68b6f18d68e", "title": "An Overview of Dynamic-Linearization-Based Data-Driven Control and Applications" }, { "paperId": "3d9a14f21b8969a5bebf90394b11fb5043625e4b", "title": "Iterative learning control with input sharing for multi-agent consensus tracking" }, { "paperId": "f650e7d22aacd1c08e9bb61d4d457be31c2859ce", "title": "Exponential synchronization of nonlinear multi‐agent systems with time delays and impulsive disturbances" }, { "paperId": "51c3b8d8d42b85bb3bd3c2707d21223f81758903", "title": "Data-Based Predictive Control for Networked Nonlinear Systems With Network-Induced Delay and Packet Dropout" }, { "paperId": "40a02aaacc1efe0b9a8bc937fa07fae3856cae72", "title": "Data-driven model-free adaptive sliding mode control for the multi degree-of-freedom robotic exoskeleton" }, { "paperId": "e058a1cf36df18c2e3fbe82224b6fc2c0d9df6ba", "title": "Distributed Consensus Strategy for Platooning of Vehicles in the Presence of Time-Varying Heterogeneous Communication Delays" }, { "paperId": "8d5d83d8b56c18d1290e908cb61b57425d2530fa", "title": "Data-Driven MFAC for a Class of Discrete-Time Nonlinear Systems With RBFNN" }, { "paperId": "df00836a72b09da059055713691d44267570c3ba", "title": "A Novel Model-Free Adaptive Control Design for Multivariable Industrial Processes" }, { "paperId": "0851aa9a4173c9bf11c87e865a084e43c9f2d3f2", "title": "Cooperative Control of Multi-Agent Systems: Optimal and Adaptive Design Approaches" }, { "paperId": "67c0b6026e506ade0aa999b752cdff83cd7b6a22", "title": "Model Free Adaptive Control: Theory and Applications" }, { "paperId": "cbb4898de5545a509442128f118fc067e7982283", "title": "Controller-Dynamic-Linearization-Based Model Free Adaptive Control for Discrete-Time Nonlinear Systems" }, { "paperId": "d3426bc252fa5aa29f7d098754e8fd337c686f1f", "title": "Consensus of 
heterogeneous multi-agent systems without velocity measurements" }, { "paperId": "f693c9a254c841cd7eb0edf243e9f8d3af450660", "title": "Data-Driven Model-Free Adaptive Control for a Class of MIMO Nonlinear Discrete-Time Systems" }, { "paperId": "2590f90b0245059e828b9c2598625720e559bed9", "title": "Consensus of Multi-Agent Systems With General Linear and Lipschitz Nonlinear Dynamics Using Distributed Adaptive Protocols" }, { "paperId": "0122e7e784aae390dcc230f23bc79e8d19073092", "title": "Network Topology and Communication Data Rate for Consensusability of Discrete-Time Multi-Agent Systems" }, { "paperId": "bbadd4370cb353cd31a260668824f27a41c9af56", "title": "Distributed Consensus in Multi-vehicle Cooperative Control - Theory and Applications" }, { "paperId": "ee6ff99457245822545a32c436f359cac84ccb68", "title": "Consensus seeking in multiagent systems under dynamically changing interaction topologies" }, { "paperId": "9839ed2281ba4b589bf88c7e4acc48c9fa6fb933", "title": "Consensus problems in networks of agents with switching topology and time-delays" }, { "paperId": "20be6a4e06295792977d2d6e4c9eb9a8405226e9", "title": "Coordination of groups of mobile autonomous agents using nearest neighbor rules" }, { "paperId": "a6ad73fa48ec6ef753a1572cc076479fdca06b01", "title": "Cucker-Smale Flocking Control of Leader-Follower Multiagent Systems With Intermittent Communication" }, { "paperId": "993fc07af995c9900a2682103842c58053290ae7", "title": "Distributed Finite Time Consensus of Second-Order Multi-Agent Systems via Pinning Control (August 2018)" }, { "paperId": "737f439d1b90b018a36130f48c084e87b8bb413b", "title": "Adaptive Observer Based Data-Driven Control for Nonlinear Discrete-Time Processes" }, { "paperId": "8cfdf6b9d95e8a4a418da0cb9afc9c6739c27a53", "title": "Iterative learning approaches to design finite-time consensus protocols for multi-agent systems" }, { "paperId": "f0fb49cebb954213b767452757d75c748887fe74", "title": "A Novel Data-Driven Control Approach for a Class of Discrete-Time Nonlinear Systems" }, { "paperId": "f16a0dcbf96f5a6b41ce1855c68e9f67a1514a10", "title": "Multi-Agent Systems: A Survey" }, { "paperId": null, "title": "‘‘Parameter identication, adaptive control and model-free learning adaptive control for nonlinear systems,’’" } ]
13,851
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/02b768bce3ec87af9443f2bb1d890ce09f1ca916
[ "Computer Science" ]
0.855051
Semi-Decentralized Federated Learning with Collaborative Relaying
02b768bce3ec87af9443f2bb1d890ce09f1ca916
International Symposium on Information Theory
[ { "authorId": "31525269", "name": "M. Yemini" }, { "authorId": "8352011", "name": "R. Saha" }, { "authorId": "120677219", "name": "Emre Ozfatura" }, { "authorId": "1727814", "name": "Deniz Gündüz" }, { "authorId": "1746299", "name": "A. Goldsmith" } ]
{ "alternate_issns": null, "alternate_names": [ "International Symposium on Information Technology", "Int Symp Inf Theory", "Int Symp Inf Technol", "ISIT" ], "alternate_urls": null, "id": "234ccdc0-f58f-4f94-b86a-428d11a0c5ad", "issn": null, "name": "International Symposium on Information Theory", "type": "conference", "url": "http://www.wikicfp.com/cfp/program?id=1719" }
We present a semi-decentralized federated learning algorithm wherein clients collaborate by relaying their neighbors’ local updates to a central parameter server (PS). At every communication round to the PS, each client computes a local consensus of the updates from its neighboring clients and eventually transmits a weighted average of its own update and those of its neighbors to the PS. We appropriately optimize these averaging weights to ensure that the global update at the PS is unbiased and to reduce the variance of the global update at the PS, consequently improving the rate of convergence. Numerical simulations substantiate our theoretical claims and demonstrate settings with intermittent connectivity between the clients and the PS, where our proposed algorithm shows an improved convergence rate and accuracy in comparison with the federated averaging algorithm.
# Semi-Decentralized Federated Learning with Collaborative Relaying

### Michal Yemini (Princeton University), Rajarshi Saha (Stanford University), Emre Ozfatura (Imperial College London), Deniz Gündüz (Imperial College London), Andrea J. Goldsmith (Princeton University)

**_Abstract—We present a semi-decentralized federated learning algorithm wherein clients collaborate by relaying their neighbors' local updates to a central parameter server (PS). At every communication round to the PS, each client computes a local consensus of the updates from its neighboring clients and eventually transmits a weighted average of its own update and those of its neighbors to the PS. We appropriately optimize these averaging weights to ensure that the global update at the PS is unbiased and to reduce the variance of the global update at the PS, consequently improving the rate of convergence. Numerical simulations substantiate our theoretical claims and demonstrate settings with intermittent connectivity between the clients and the PS, where our proposed algorithm shows an improved convergence rate and accuracy in comparison with the federated averaging algorithm._**

I. INTRODUCTION

Federated learning (FL) algorithms iteratively optimize a common objective function to learn a shared model over data samples that are localized over multiple distributed clients [1]. FL approaches aim to reduce the required communication overhead and improve clients' privacy by training local models on private datasets at the clients and forwarding them periodically to a centralized parameter server (PS). In practical FL setups, some clients are stragglers and cannot send their updates regularly, either because (i) they cannot finish their computation within a prescribed deadline, or (ii) due to communication limitations [2], they suffer from intermittent connectivity to the PS since their wireless channel is temporarily blocked [3]–[8]. Stragglers deteriorate the convergence of FL as the computed local updates become stale. This can even result in bias in the final model in the case of persistent stragglers. On the other hand, communication stragglers (type (ii)) are inherently different from computation-limited stragglers (type (i)), since the former can be mitigated by relaying the updates to the PS via neighboring clients.

Communication quality at the wireless edge as a key design principle is considered in the federated edge learning (FEEL) framework [9], which takes into account the wireless channel characteristics from the clients to the PS to optimize the convergence and final model performance at the PS. So far the FEEL paradigm has mainly focused on direct communication from the clients to the PS, and has aimed at improving performance by resource allocation across clients [9]–[18]; these approaches have ignored possible cooperation between clients in the case of intermittent communication blockages.

M. Yemini, R. Saha, and A.J. Goldsmith are partially supported by the AFOSR award #002484665 and a Huawei Intelligent Spectrum grant. E. Ozfatura and D. Gündüz received funding from the European Research Council (ERC) through Starting Grant BEACON (no. 677854) and the UK EPSRC (grant no. EP/T023600/1) under the CHIST-ERA program.
Motivated by our prior works [19]–[21], where client cooperation is used to improve the connectivity to the cloud and to reduce the latency and scheduling overhead, this work proposes a new FEEL paradigm in which the clients cooperate to mitigate the detrimental effects of communication stragglers. In our proposed method, clients share their local updates with neighbors so that each client sends to the PS a weighted average of its current update and those of its neighbors. Using this approach, the PS receives new updates from disconnected clients, which would otherwise become stale and be discarded. We demonstrate the success of our relaying scheme through both theoretical analysis and numerical simulations.

_Related Works: Decentralized collaborative learning frameworks have been introduced as an alternative to centralized FL, in which the PS is removed to mitigate a potential communication bottleneck and a single point of failure [22]–[33]. In decentralized learning, each client shares its local model with its neighbors through device-to-device (D2D) communications, and model aggregation is executed at each client in parallel. This aggregation strategy is determined at each client according to the network topology, i.e., the connection pattern between the clients. An alternative approach to both centralized and decentralized schemes is hierarchical FL (HFL) [21], [34]–[36], where multiple PSs are employed for the aggregation to prevent a communication bottleneck. In HFL, clients are divided into clusters and a PS is assigned to each cluster to perform local aggregation. The aggregated models at the clusters are later aggregated at the main PS in a subsequent step to obtain the global model. HFL has significant advantages over centralized and decentralized schemes, particularly when the communication takes place over wireless channels, since it allows spatial reuse of available resources [21]. Nonetheless, HFL requires employing multiple PSs, which may not be practical in certain scenarios. Instead, the idea of hierarchical collaborative learning can be redesigned to combine hierarchical and decentralized learning, which is referred to as semi-decentralized FL, where the local consensus follows decentralized learning with D2D communications, whereas the global consensus is orchestrated by the PS [37], [38]. One of the major challenges in FL that is not considered in [37], [38] is partial client connectivity [39], [40]. Unequal client participation due to intermittent connectivity exacerbates the impact of data heterogeneity [41]–[44], and increases the generalization gap._

Most existing works on FL assume error-free rate-limited orthogonal communication links, with an underlying communication protocol that takes care of wireless imperfections. However, this separation between the communication medium and the learning protocol can be suboptimal [9]. An alternative approach treats the communication of the model updates to the PS as an uplink communication problem and jointly optimizes the learning algorithm and the communication scheme [9]. Within this framework is an original and promising approach known as over-the-air computation (OAC) [14]–[16], which exploits the superposition property of wireless signals to convey the sum of the model updates that are transmitted by each client in an uncoded fashion.
In addition to bandwidth efficiency, the OAC framework provides a certain level of anonymity to clients due to its superposition nature; hence, it can enhance the privacy of the participating clients [17], [18]. We emphasize that in OAC, the PS receives the aggregate model, and it is not possible to disentangle the individual model updates. Therefore, any strategy that utilizes a PS-side aggregation mechanism with individual model updates to address unequal client participation is not compatible with the OAC framework. One of the major advantages of our proposed scheme is that it mitigates the drawbacks of unequal client participation without requiring the identity of transmitting clients or their individual updates at the PS. Therefore, our solution is compatible with OAC.

Client connectivity is a particularly significant challenge in FEEL, where the clients and the PS communicate over unreliable wireless channels. Due to their different physical environments and distances to the PS, clients can have different connectivity to each other and to the PS. This problem has recently been addressed in [10]–[13], [45]–[48] by considering customized client selection mechanisms to balance the participation of the clients and the latency of the model aggregation in order to speed up the learning process. We adopt a different approach to this problem: instead of designing a client selection mechanism or optimizing resource allocation to balance client participation, we introduce a relaying mechanism that takes into account the nature of individual clients' connectivity to the PS and ensures that, in case of poor connectivity, their local updates are conveyed to the PS with the help of their neighboring clients.

_Paper Organization: Sec. II presents the FL system model and the proposed FL collaborative relaying scheme. Sec. III presents conditions for the unbiasedness of our proposed scheme and an analysis of the convergence rate. Sec. IV optimizes the convergence rate of our proposed scheme, while Sec. V presents numerical results that validate our theoretical analysis and highlight the performance improvement in terms of training accuracy. Finally, Sec. VI concludes this paper._

**Remark. Due to space limitations, all proofs are omitted, and can be found in an online extended version of this paper [49].**

Fig. 1: System model with intermittent uplink communication between clients and PS (dotted lines) and reliable communication between neighboring clients (solid lines).

II. SYSTEM MODEL FOR COLLABORATIVE RELAYING

Consider $n$ clients communicating periodically with a PS that trains a global model $\mathbf{x} \in \mathbb{R}^d$. Let $L(\mathbf{x}, \zeta)$ be the loss evaluated for a model $\mathbf{x}$ at data point $\zeta$. Denote the local loss at client $i$ by $f_i : \mathbb{R}^d \times \mathcal{Z}_i \to \mathbb{R}$, where $f_i(\mathbf{x}; \mathcal{Z}_i) = \frac{1}{|\mathcal{Z}_i|} \sum_{\zeta \in \mathcal{Z}_i} L(\mathbf{x}; \zeta)$. Here, $\mathcal{Z}_i$ is the local dataset of client $i$. The goal of the PS is to solve the following empirical risk minimization (ERM) problem:[1]

$$\mathbf{x}^* = \arg\min_{\mathbf{x} \in \mathbb{R}^d} f(\mathbf{x}) \triangleq \arg\min_{\mathbf{x} \in \mathbb{R}^d} \frac{1}{n} \sum_{i=1}^{n} f_i(\mathbf{x}; \mathcal{Z}_i).$$

_A. FL with Local SGD at Clients_

Denote the local gradient as $\nabla f_i(\mathbf{x}) \triangleq \nabla_{\mathbf{x}} f(\mathbf{x}; \mathcal{Z}_i)$, and let $g_i(\mathbf{x})$ be an unbiased estimate of it. In the $r$-th round of FL, the PS broadcasts the global model $\mathbf{x}^{(r)}$ to the clients. For a local averaging period of $T$, each client performs $T$ iterations of local training, after which the local models are sent to the PS for aggregation. For local iteration $k \in [0 : T-1]$ of the $r$-th round, client $i$ applies the local update rule:

$$\mathbf{x}_i^{(r,k+1)} = \mathbf{x}_i^{(r,k)} - \eta_r\, g_i\big(\mathbf{x}_i^{(r,k)}\big), \qquad (1)$$

where $\eta_r$ is the learning rate for round $r$ and $\mathbf{x}_i^{(r,0)} = \mathbf{x}^{(r)}$.

_B. Communication Model_

**Communication between clients and PS.** We consider a setting where the uplink connections between the clients and the PS are intermittent. As shown in Fig. 1, we model the connectivity of client $i$ to the PS at round $r$ by the Bernoulli random variable $\tau_i(r) \sim \mathrm{Bern}(p_i)$, where $\tau_i(r) = 1$ indicates the presence of an uplink communication opportunity, whereas $\tau_i(r) = 0$ indicates a blocked uplink. For simplicity of exposition, we consider the uplink channel to be either completely blocked or perfectly available without any noise, and the downlink from the PS to the clients does not suffer from intermittent dropouts.

**Remark 1.** _The connectivity probabilities $\{p_i\}_{i \in [n]}$ can be easily estimated using pilot signals. Moreover, clients can share their $p_i$ with each other using local links in a pre-training phase. On the other hand, we do not assume that the instantaneous connectivity information, i.e., $\tau_i(r)$, $i \in [n]$, is available to any of the clients._

[1] For simplicity, we assume $|\mathcal{Z}_i| = |\mathcal{Z}_j|$ for all $i, j \in [n]$. Our method can be extended to the setting of imbalanced local dataset sizes as well.
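To make the update rule (1) and the intermittent-uplink model concrete, here is a minimal NumPy sketch (our illustration, not code from the paper; `grad_fn` is a hypothetical stand-in for the stochastic gradient oracle $g_i$):

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(x_global, grad_fn, eta_r, T):
    """Run T local SGD steps from the broadcast model (eq. (1)) and return
    the local update Delta x_i^{r+1} = x_i^{(r,T)} - x^{(r)}."""
    x = x_global.copy()
    for _ in range(T):
        x = x - eta_r * grad_fn(x)  # grad_fn plays the role of g_i(.)
    return x - x_global

def uplink_mask(p):
    """One round of intermittent connectivity: tau_i ~ Bern(p_i), i in [n]."""
    return (rng.random(len(p)) < np.asarray(p)).astype(float)
```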
For local iteration k [0 : ] of the r[th] _∈_ _T_ round, client i applies the local update rule: � � **x[(]i[r,k][+1)]** = x[(]i[r,k][)] _−_ _ηrgi_ **x[(]i[r,k][)]** _,_ (1) where ηr is the learning rate for round r and x[(]i[r,][0)] = x[(][r][)]. _B. Communication Model_ **Communication between clients and PS. We consider a** setting where the uplink connections between the clients and the PS are intermittent. As shown in Fig. 1, we model the connectivity of client i to the PS at round r by the Bernoulli random variable τi(r) ∼ Bern(pi), where τi = 1 indicates the presence of an uplink communication opportunity, whereas _τi(r) = 0 indicates a blocked uplink. For simplicity of expo-_ sition, we consider the uplink channel to be either completely blocked or perfectly available without any noise, and the downlink from PS to clients does not suffer from intermittent dropouts. **Remark 1. The connectivity probabilities {pi}i∈[n] can be** _easily estimated using pilot signals. Moreover, clients can_ _share their pi with each other using local links in a pre-_ _training phase. On the other hand, we do not assume that_ 1For simplicity, we assume |Zi| = |Zj _| for all i, j ∈_ [n]. Our method can be extended to the setting of imbalanced local dataset sizes as well. ----- **Algorithm 1 COLREL-CLIENT: Collaborative Relaying** **Input: Round index r, Step-size ηr, Local avg. period T,** Neighborhood of client i Ni, αij for every j ∈Ni ∪{i}. **Output: ∆x�[r]i** [+1]. 1: Receive x[(][r][)] from PS. 2: Set x[(]i[r,][0)] = x[(][r][)]. 3: for k ← 0 to T − 1 do Compute (stochastic) gradient gi(x[(]i[r,k][)]t). � � **xi[(][r,k][+1)]** = x[(]i[r,k][)] _−_ _ηrgi_ **x[(]i[r,k][)]** . 4: end for 5: Set ∆x[r]i [+1] = x[(]i[r,][T][ )] _−_ **x[(][r][)].** 6: Send ∆xi to every j ∈Ni. 7: Receive ∆xj from every j ∈Ni. 8: Compute ∆x�[r]i [+1] = [�]j∈Ni∪{i} _[α][ij][ ·][ ∆][x]j[r][+1]._ 9: Transmit (relay) ∆x�[r]i [+1] to the PS. _the instantaneous connectivity information, i.e., τi(r), r ∈_ [n] _is available to any of the clients._ **Communication between clients. The connectivity be-** tween clients is modeled by an undirected graph G = (V, E) where V = [n] and (i, j) _E_ client i can communicate _∈_ _⇐⇒_ with client j. Let Ni ≜ _{j ∈_ _V_ : {i, j} ∈ _E}. We do_ not assume that the graph G is connected. Instead, it can be composed of multiple connected subgraphs. _C. Collaborative Relaying of Local Updates_ Let ∆x[(]i[r][+1)] denote client i’s update at the end of T _[th]_ local iteration of round r, i.e., ∆x[(]j[r][+1)] = x[(]j[r,][T][ )] _−_ **x[(][r][)]. We** assume that client i’s update ∆x[(]j[r][+1)] is readily available to its neighbors. Then client i computes a weighted average of its own update and those of its neighbors in Ni, i.e., **Algorithm 2 COLREL-PS: PS Aggregation** **Input: Number of rounds R, a set of clients [n].** **Output: Global model x[(][R][)].** 1: Set x[(0)] = 0 2: for k ← 0 to T − 1 do Broadcast x[(][r][)] to all clients. Set τi(r + 1) = 1 or 0 depending on connectivity. Update x[(][r][+1)] = x[(][r][)] + _n[1]_ �i∈[n] _[τ][i][(][r][ + 1)∆][x][�]i[r][+1]_ 3: end for III. CONVERGENCE ANALYSIS _A. Unbiasedness of COLREL FL_ In our collaborative relaying scheme, the local update of a particular client i can be transmitted to the PS by itself, or by one or more of its neighbors j ∈Ni. Since the PS may be blind to the identities of the clients, the clients collaborate among themselves to ensure that this redundancy is mitigated. This is done by appropriately choosing the weights αij. 
III. CONVERGENCE ANALYSIS

_A. Unbiasedness of COLREL FL_

In our collaborative relaying scheme, the local update of a particular client $i$ can be transmitted to the PS by itself, or by one or more of its neighbors $j \in \mathcal{N}_i$. Since the PS may be blind to the identities of the clients, the clients collaborate among themselves to ensure that this redundancy is mitigated. This is done by appropriately choosing the weights $\alpha_{ij}$. In particular, Lemma 1 gives a sufficient condition on the values of $\{\alpha_{ij}\}_{i,j \in [n]}$ that ensures that the aggregated global update at the PS is an unbiased estimate of $\frac{1}{n} \sum_{i \in [n]} \Delta\mathbf{x}_i^{(r)}$, the true aggregate in the case of perfect channel connectivity.

**Lemma 1.** _Let $w = 1/n$ and $\{\alpha_{ij}\}_{i,j \in [n]}$ be such that_

$$\mathbb{E}\Big[ \sum_{j \in \mathcal{N}_i \cup \{i\}} \tau_j(r+1)\, \alpha_{ji} \Big] = p_i \alpha_{ii} + \sum_{j \in \mathcal{N}_i} p_j \alpha_{ji} = 1. \qquad (3)$$

_Then, for every $i \in [n]$,_

$$w \cdot \mathbb{E}\Big[ \sum_{j \in \mathcal{N}_i \cup \{i\}} \tau_j(r+1)\, \alpha_{ji}\, \Delta\mathbf{x}_i^{r+1} \;\Big|\; \Delta\mathbf{x}_i^{r+1} \Big] = \frac{1}{n}\, \Delta\mathbf{x}_i^{r+1}.$$

_B. Expected Suboptimality Gap_

Next, Thm. 1 presents the convergence rate of COLREL as a function of $\{\alpha_{ij}\}$, under the following assumptions.

**Assumption 1.** _For any $i$, the local loss $f_i$ is $L$-smooth w.r.t. $\mathbf{x}$, i.e., for any $\mathbf{x}, \mathbf{y} \in \mathbb{R}^d$, $\|\nabla f_i(\mathbf{x}) - \nabla f_i(\mathbf{y})\|_2 \leq L \|\mathbf{x} - \mathbf{y}\|_2$._

**Assumption 2.** _The stochastic gradients $g_i(\mathbf{x})$ are unbiased and have bounded variance, i.e., for all $i \in [n]$: 1) $\mathbb{E}[g_i(\mathbf{x})] = \nabla f_i(\mathbf{x})$, and 2) $\mathbb{E}\|g_i(\mathbf{x}) - \nabla f_i(\mathbf{x})\|_2^2 \leq \sigma^2$ for some finite $\sigma^2$._

**Assumption 3.** _For any $i$, the loss $f_i$ is $\mu$-strongly convex, i.e., for any $\mathbf{x}, \mathbf{y} \in \mathbb{R}^d$, $(\nabla f_i(\mathbf{x}) - \nabla f_i(\mathbf{y}))^\top (\mathbf{x} - \mathbf{y}) \geq \mu \|\mathbf{x} - \mathbf{y}\|_2^2$._
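Condition (3) is straightforward to verify numerically; below is a small helper (our sketch, storing $\alpha_{ji}$ as `A[j, i]` to match the convention used above):

```python
import numpy as np

def satisfies_condition_3(A, p, neighbors, tol=1e-9):
    """Check Lemma 1's condition (3): for every i,
    p_i * alpha_ii + sum_{j in N_i} p_j * alpha_ji == 1,
    which makes the PS aggregate an unbiased estimate of the ideal average."""
    n = len(p)
    for i in range(n):
        total = p[i] * A[i, i] + sum(p[j] * A[j, i] for j in neighbors[i])
        if abs(total - 1.0) > tol:
            return False
    return True
```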
1 and 2, with ηr =_ _r[4]T[µ] +1[−][1]_ _[, satisfies for every]_ _r ≥_ _r0(p, A),_ E∥x[(][r][+1)] _−_ _x[⋆]∥[2]_ _≤_ [(][r][0][T][ + 1)] (r + 1)[2][ ∥][x][(0)][ −] _[x][⋆][∥][2][ +][ C][1]k[(][p] + 1[,][ A][)][T]_ _T_ _T_ ( 1)[2] _T −_ _T_ +C2 _k_ + 1 [+][ C][3][(][p][,][ A][)] (k + 1)[2][,] _T_ _T_ _where B(p, A) =_ [2]n[L][2][2][ S][(][p][,][ A][)][,][ C][1][(][p][,][ A][) =][ 4]µ[2][2][ ·][ 2]n[σ][2][2][ S][(][p][,][ A][)][,] � � _C2 =_ _µ[4][2][2][ ·][L][2][ σ]n[2]_ _[e][,][ C][3][(][p][,][ A][) =][ 4]µ[4][4][ ·]_ _L[2]σ[2]e +_ [2][L]n[2][σ][2] [2][e] _S(p, A)_ _,_ _and r0(p, A) = max_ � _Lµ_ _[,][ 4]_ � _B(µp[2],A)_ + 1� _,_ _T[1]_ _[,]_ _µ4[2]nT_ �. As a consequence of Thm. 1, it follows that, 2 E ���x(r+1) − _x⋆���_ = O ���x(0) − _x⋆��2_ + _[S][(][p][,][ A][)]_ _r[2]_ _r_ � _._ (5) (ℓ) Here, **A[�]** _i_ is given by (ℓ) � **A�** _i_ = arg min _pj(1 −_ _pj)αji[2]_ _j∈Ni∪{i}_ � � + 2 _pj(1 −_ _pj)αjiαjl[(][ℓ][−][1)],_ _l∈[n],l≠_ _i_ _j∈Nil_ � s.t.: _pjαji = 1,_ _αji ≥_ 0 _∀j ∈_ [n]. (8) _j∈Ni∪{i}_ Let Lji = {l : l ∈ [n], l ̸= i, j ∈Nil}, that is, the set of all clients that have j as a mutual neighbor with i, and let βji = �l∈Lji _[α]jl[(][ℓ][−][1)]. Let p(i) = maxk∈Ni∪{i} pk. Using Lagrange_ multipliers we solve (8) for j ∈Ni ∪{i} as follows:  �−βji + 2(1λ−ipj ) �+ if pj ∈ (0, 1), p(i) < 1, _α�ji[(][ℓ][)]_ [=]  0�k∈[n] [1][{][p]k1[=1][,k][∈N]i _[∪{][i][}}]_ otherwise[if][ p][j][ = 1][,]. (9) Here, λi satisfies [�]j∈Ni∪{i} _[p][j]_ �−βji + 2(1λ−ipj ) �+ = 1, and (·)[+] ≜ max{·, 0}. We can find λi using the bisection method. The complete algorithm is detailed in Alg. 3. Its overall _complexity is O(L_ (n[2] + K)), where K is the number of _·_ iterations used in the bisection method for optimizing λi. **Remark 2. The optimization (6) only requires client i to know** _the weight values for its neighbors of distance 2. Thus, we can_ _exploit the communication links between clients, and optimize_ (6) distributively. We present the distributed algorithm in [49]. V. NUMERICAL SIMULATIONS We consider training a ResNet-20 model for image classification on CIFAR-10 dataset over 10 clients; each executes 8 local training steps of local-SGD. All plots have been averaged over 5 different realizations. We used a learning rate of 0.1 for SGD, a coefficient of 1e − 4 for ℓ2-regularization to prevent overfitting, and a batch-size of 64. In Figs. 2 and 3, the dataset is distributed across the clients in an IID fashion. As benchmarks, we consider Federated Therefore, the convergence rate can be improved by minimizing the term S(p, A) subject to the unbiasedness condition in Lemma 1. Minimizing S(p, A) can also reduce r0(p, A). IV. OPTIMIZING THE RELAYING WEIGHTS We choose the relay weight matrix A to minimize the upper bound on the expected distance to optimality as given by Thm. 1. Thus, we solve the following optimization problem: arg min **A** _[S][(][p][,][ A][)][,]_ � s.t.: _pjαji = 1,_ _αji ≥_ 0 _∀i, j ∈_ [n]. (6) _j:j∈Ni_ The function S(p, A) is convex with respect to (w.r.t.) A for **p** [0, 1][n]. It can be shown that the domain of (6) is separable _∈_ w.r.t. Ai, the i[th] column of A, and we can use the GaussSeidel method [50, Prop. 2.7.1] to iteratively solve (6). At every iteration ℓ, we compute the estimate A[ℓ] as ----- Fig. 3: Different connectivity across clients with a ring topology. Fig. 4: Non-IID data + global momentum. _Averaging (FedAvg) - No Dropout, in which all clients are able_ to successfully transmit their local updates to the PS at every communication round. 
We also consider FedAvg - Dropout, in which the PS is unaware of the identity of the clients, and simply assumes that the update for any client unable to successfully transmit is zero. These benchmarks serve as natural upper and lower bounds to the performance of the proposed algorithm. In Fig. 2, we have a homogeneous connectivity setup with equal probability pi = 0.2 that client i ∈ [n] successfully transmits its local updates to the PS. Furthermore, we assume a fully-connected topology (FCT) where each clients is connected to all the other clients in the system. COLREL achieves a performance on par with FedAvg - No Dropout. We also consider a non-blind strategy, FedAvg - Dropout (Non_Blind) where the PS is aware of the identity of the clients, and_ knows exactly which clients have been successful in sending their local updates to the PS. This is common in point-to-point learning settings. In this case the PS simply ignores the clients that have been unable to send their updates, and averages the successful updates by dividing the global aggregate at the PS by the number of successful transmissions. In Fig. 3 (and also in Fig. 4), we consider every client has a different probability of successful transmission to the PS according to p = [0.1, 0.2, 0.3, 0.1, 0.1, 0.5, 0.8, 0.1, 0.2, 0.9]. We have deliberately chosen some clients to have a very low connectivity, some others moderate, and others very high. We consider a ring topology where client i is connected to clients (i 1) mod n and (i + 1) mod n. For this setting, we _−_ distinguish the cases with and without optimized weights. The weights are optimized in order to minimize the term S(p, A), which consequently minimizes the variance of the iterates, subject to ensuring that the updates are unbiased according to Alg. 3. Note that explicitly optimizing the consensus weights that the clients use for their neighbors was not essential in Fig. 2 because the initial weights of Alg. 3 are optimal for a FCT with homogeneous connectivity to the PS, i.e., pi = p∀i ∈ [n]. Finally, in Fig. 4, we consider the setting in which the training data is distributed across the clients in a non-IID fashion. To emulate non-IID-ness, we consider the sort-and_partition approach in which the training data is initially sorted_ based on labels, and then divided into blocks and distributed among clients in a skewed fashion so that each client has data from only a few classes. For the ring topology in this plot, we have considered each client to be connected to 4 of its nearest neighbors. We also use global momentum at the PS to update the global model. Remarkably, FedAvg (even with non-blind averaging) fails to converge in this setting. This is because in the absence of collaboration, clients that have important training samples that are critical for training a good model with high accuracy, may have a low probability of successful transmission and thus are rarely able to convey their updates to the PS. Therefore, when these clients are unable to convey their updates to the PS, the resulting test accuracy of the global model is 10%, as good as a random classifier for 10 classes. _∼_ Collaborative relaying ensures that the information from these critical datapoints are also conveyed to the PS even when the data owner does not have connectivity to the PS. VI. CONCLUSIONS Our goal in this paper is to mitigate the detrimental effect of clients’ intermittent connectivity on the training accuracy of FL systems. 
For this purpose, we proposed a collaborative relaying strategy, which exploits the connections between clients to relay potentially missing model updates to the PS due to blocked clients. Our algorithm allows the PS to receive an unbiased estimate of the model update, which would not be possible without relaying. We optimized the consensus weights at each client to improve the rate of convergence. Our proposed approach can be implemented even when the PS is blind to the identities of clients which successfully communicate with it at each round. Numerical results showed the improvement in training accuracy and convergence time that our approach provides under various settings, including IID and non-IID data distributions, different communication graph topologies, as well as blind and non-blind PSs. REFERENCES [1] B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. y Arcas, “Communication-Efficient Learning of Deep Networks from Decentralized Data,” in Proceedings of the 20th International Conference on _Artificial Intelligence and Statistics, vol. 54, Apr 2017, pp. 1273–1282._ [2] M. Chen, D. G¨und¨uz, K. Huang, W. Saad, M. Bennis, A. V. Feljan, and H. V. Poor, “Distributed learning in wireless networks: Recent progress and future challenges,” arxiv:2104.02151, 2021. ----- [3] M. R. Akdeniz, Y. Liu, M. K. Samimi, S. Sun, S. Rangan, T. S. Rappaport, and E. Erkip, “Millimeter wave channel modeling and cellular capacity evaluation,” IEEE J. Sel. Areas Commun., vol. 32, no. 6, pp. 1164–1179, June 2014. [4] M. Gapeyenko, A. Samuylov, M. Gerasimenko, D. Moltchanov, S. Singh, M. R. Akdeniz, E. Aryafar, N. Himayat, S. Andreev, and Y. Koucheryavy, “On the temporal effects of mobile blockers in urban millimeter-wave cellular scenarios,” IEEE Trans. Veh. Technol., vol. 66, no. 11, pp. 10 124–10 138, Nov 2017. [5] Y. Yan and Y. Mostofi, “Co-optimization of communication and motion planning of a robotic operation under resource constraints and in fading environments,” IEEE Trans. Wireless Commun., vol. 12, no. 4, pp. 1562– 1572, April 2013. [6] M. M. Zavlanos, M. B. Egerstedt, and G. J. Pappas, “Graph-theoretic connectivity control of mobile robot networks,” Proc. IEEE, vol. 99, no. 9, pp. 1525–1540, Sep. 2011. [7] N. Michael, M. M. Zavlanos, V. Kumar, and G. J. Pappas, “Maintaining connectivity in mobile robot networks,” in Experimental Robotics, 2009. [8] S. Gil, S. Kumar, D. Katabi, and D. Rus, “Adaptive communication in multi-robot systems using directionality of signal strength,” Int. J. Rob. _Res., vol. 34, no. 7, pp. 946–968, 2015._ [9] D. G¨und¨uz, D. B. Kurka, M. Jankowski, M. M. Amiri, E. Ozfatura, and S. Sreekumar, “Communicate to learn at the edge,” IEEE Comm. _Magazine, vol. 58, no. 12, pp. 14–19, 2020._ [10] M. E. Ozfatura, J. Zhao, and D. G¨und¨uz, “Fast federated edge learning with overlapped communication and computation and channel-aware fair client scheduling,” in IEEE International Workshop on Signal Processing _Advances in Wireless Communications (SPAWC), 2021, pp. 311–315._ [11] D. Liu, G. Zhu, J. Zhang, and K. Huang, “Data-importance aware user scheduling for communication-efficient edge machine learning,” IEEE _Trans. Cogn. Commun. Netw., vol. 7, no. 1, pp. 265–278, 2021._ [12] W. Xia, T. Q. S. Quek, K. Guo, W. Wen, H. H. Yang, and H. Zhu, “Multi-armed bandit based client scheduling for federated learning,” _IEEE Trans. Wireless Commun., pp. 1–1, 2020._ [13] H. H. Yang, A. Arafa, T. Q. S. Quek, and H. 
Vincent Poor, “Agebased scheduling policy for federated learning in mobile edge networks,” in Proc. - ICASSP IEEE Int. Conf. Acoust. Speech Signal Process. _(ICASSP), 2020, pp. 8743–8747._ [14] M. M. Amiri and D. G¨und¨uz, “Federated learning over wireless fading channels,” IEEE Trans. Wireless Comms., vol. 19, no. 5, pp. 3546–3557, 2020. [15] E. Ozfatura, S. Rini, and D. G¨und¨uz, “Decentralized sgd with over-theair computation,” in GLOBECOM 2020 - 2020 IEEE Global Communi_cations Conference, 2020, pp. 1–6._ [16] G. Zhu, Y. Wang, and K. Huang, “Broadband analog aggregation for low-latency federated edge learning (extended version),” arxiv:1812.11494, 2019. [17] B. Hasircioglu and D. Gunduz, “Private wireless federated learning with anonymous over-the-air computation,” arxiv:2011.08579, 2021. [18] M. Seif, W.-T. Chang, and R. Tandon, “Privacy amplification for federated learning via user sampling and wireless aggregation,” arxiv:2103.01953, 2021. [19] M. Yemini, S. Gil, and A. J. Goldsmith, “Exploiting local and cloud sensor fusion in intermittently connected sensor networks,” in 2020 IEEE _Global Communications Conference (Globecom), December 2020._ [20] ——, “Cloud-cluster architecture for detection in intermittently connected sensor networks,” arXiv:2110.01119, 2021. [21] M. S. H. Abad, E. Ozfatura, D. G¨und¨uz, and O. Ercetin, “Hierarchical federated learning across heterogeneous cellular networks,” in Proc. _ICASSP IEEE Int. Conf. Acoust. Speech Signal Process. (ICASSP), 2020,_ pp. 8866–8870. [22] X. Lian, C. Zhang, H. Zhang, C.-J. Hsieh, W. Zhang, and J. Liu, “Can decentralized algorithms outperform centralized algorithms? a case study for decentralized parallel stochastic gradient descent,” in NIPS, Dec. 2017. [23] H. Tang, X. Lian, M. Yan, C. Zhang, and J. Liu, “d[2]: Decentralized training over decentralized data,” in 35th Int. Conf. Mach. Learn., vol. 80. PMLR, Jul 2018, pp. 4848–4856. [24] Z. Jiang, A. Balu, C. Hegde, and S. Sarkar, “Collaborative deep learning in fixed topology networks,” in NIPS, Dec. 2017. [25] K. Yuan, Q. Ling, and W. Yin, “On the convergence of decentralized gradient descent,” SIAM Journal on Optimization, 2016. [26] M. Kamp, L. Adilova, J. Sicking, F. H¨uger, P. Schlicht, T. Wirtz, and S. Wrobel, “Efficient decentralized deep learning by dynamic model averaging,” in Conf. Mach. Learn. Knowl. Discovery in Databases, 2019, pp. 393–409. [27] J. Zeng and W. Yin, “On nonconvex decentralized gradient descent,” _IEEE Trans. Signal Process., vol. 66, no. 11, pp. 2834–2848, June 2018._ [28] L. Kong, T. Lin, A. Koloskova, M. Jaggi, and S. U. Stich, “Consensus control for decentralized deep learning,” arXiv:2102.04828, 2021. [29] T. Vogels, L. He, A. Koloskova, S. P. Karimireddy, T. Lin, S. U. Stich, and M. Jaggi, “Relaysum for decentralized deep learning on heterogeneous data,” in Thirty-Fifth Conference on Neural Information _Processing Systems, 2021._ [30] R. Saha, S. Rini, M. Rao, and A. J. Goldsmith, “Decentralized optimization over noisy, rate-constrained networks: Achieving consensus by communicating differences,” IEEE Journal on Selected Areas in _Communications, vol. 40, no. 2, pp. 449–467, 2022._ [31] A. Koloskova, N. Loizou, S. Boreiri, M. Jaggi, and S. Stich, “A unified theory of decentralized SGD with changing topology and local updates,” in 37th Int. Conf. Mach. Learn., vol. 119. PMLR, Jul 2020, pp. 5381– 5393. [32] J. Wang, A. K. Sahu, Z. Yang, G. Joshi, and S. 
Kar, “MATCHA: speeding up decentralized SGD via matching decomposition sampling,” arxiv:1905.09435, 2019. [33] M. Assran, N. Loizou, N. Ballas, and M. Rabbat, “Stochastic gradient push for distributed deep learning,” in 36th Int. Conf. Mach. Learn. PMLR, Jun 2019, pp. 344–353. [34] L. Liu, J. Zhang, S. Song, and K. B. Letaief, “Client-edge-cloud hierarchical federated learning,” in IEEE Int. Conf. Commun. (ICC), 2020, pp. 1–6. [35] W. Y. B. Lim, J. S. Ng, Z. Xiong, J. Jin, Y. Zhang, D. Niyato, C. Leung, and C. Miao, “Decentralized edge intelligence: A dynamic resource allocation framework for hierarchical federated learning,” IEEE Trans. _Parallel Distrib. Syst., vol. 33, no. 3, pp. 536–550, 2022._ [36] T. Castiglia, A. Das, and S. Patterson, “Multi-level local {sgd}: Distributed {sgd} for heterogeneous hierarchical networks,” in International _Conference on Learning Representations, 2021._ [37] F. P.-C. Lin, S. Hosseinalipour, S. S. Azam, C. G. Brinton, and N. Michelusi, “Semi-decentralized federated learning with cooperative d2d local model aggregations,” IEEE Journal on Selected Areas in _Communications, vol. 39, no. 12, pp. 3851–3869, 2021._ [38] Anonymous, “Hybrid local SGD for federated learning with heterogeneous communications,” in Submitted to The Tenth International _Conference on Learning Representations, 2022, under review. [Online]._ [Available: https://openreview.net/forum?id=H0oaWl6THa](https://openreview.net/forum?id=H0oaWl6THa) [39] H. Yang, M. Fang, and J. Liu, “Achieving linear speedup with partial worker participation in non-IID federated learning,” in International _Conference on Learning Representations, 2021._ [40] X. Gu, K. Huang, J. Zhang, and L. Huang, “Fast federated learning in the presence of arbitrary device unavailability,” arxiv:2106.04159, 2021. [41] T. H. Hsu, H. Qi, and M. Brown, “Measuring the effects of nonidentical data distribution for federated visual classification,” CoRR, vol. abs/1909.06335, 2019. [42] T.-M. H. Hsu, H. Qi, and M. Brown, “Federated visual classification with real-world data distribution,” CoRR, vol. abs/2003.08082, 2020. [43] K. Hsieh, A. Phanishayee, O. Mutlu, and P. Gibbons, “The non-IID data quagmire of decentralized machine learning,” in 37th Int. Conf. Mach. _Learn., vol. 119._ PMLR, Jul 2020, pp. 4387–4398. [44] Y. Zhao, M. Li, L. Lai, N. Suda, D. Civin, and V. Chandra, “Federated learning with non-iid data,” CoRR, vol. abs/1806.00582, 2018. [45] E. Ozfatura, D. Gunduz, and H. V. Poor, “Collaborative learning over wireless networks: An introductory overview,” arxiv:2112.05559, 2021. [46] H. H. Yang, Z. Liu, T. Q. S. Quek, and H. V. Poor, “Scheduling policies for federated learning in wireless networks,” IEEE Trans. Commun., vol. 68, no. 1, pp. 317–333, 2020. [47] W. Shi, S. Zhou, and Z. Niu, “Device scheduling with fast convergence for wireless federated learning,” in IEEE Int. Conf. Commun. (ICC), 2020, pp. 1–6. [48] M. M. Amiri, D. G¨und¨uz, S. R. Kulkarni, and H. V. Poor, “Convergence of update aware device scheduling for federated learning at the wireless edge,” IEEE Trans. Wireless Comm., vol. 20, no. 6, pp. 3643–3658, 2021. [49] M. Yemini, R. Saha, E. Ozfatura, D. G¨und¨uz, and A. J. Goldsmith, “Robust federated learning with connectivity failures: A semi-decentralized framework with collaborative relaying,” arXiv:2202.11850, 2022. [50] D. Bertsekas, Nonlinear Programming. Athena Scientific, 1999. -----
{ "disclaimer": "Notice: Paper or abstract available at https://arxiv.org/abs/2205.10998, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "GREEN", "url": "https://arxiv.org/pdf/2205.10998" }
2,022
[ "JournalArticle" ]
true
2022-05-23T00:00:00
[ { "paperId": "f221c036aa17270b57ea254b1e7ceb3a21b36479", "title": "Decentralized Edge Intelligence: A Dynamic Resource Allocation Framework for Hierarchical Federated Learning" }, { "paperId": "7609583b23d37f38b675271b099dbe91340df286", "title": "Robust Federated Learning with Connectivity Failures: A Semi-Decentralized Framework with Collaborative Relaying" }, { "paperId": "4e55c40ddf5584aa8bdfb44732d335d63da70d87", "title": "Collaborative Learning over Wireless Networks: An Introductory Overview" }, { "paperId": "aa6c2814ff94ca098d90f188f95126b5b06ebb69", "title": "Nonlinear Programming" }, { "paperId": "b9b9f79b9644404c38c280b611812db39b2df9a4", "title": "RelaySum for Decentralized Deep Learning on Heterogeneous Data" }, { "paperId": "7fb49222bc676580c03e2e9175449818c7006ef7", "title": "Cloud-Cluster Architecture for Detection in Intermittently Connected Sensor Networks" }, { "paperId": "afe686ee1a7521bc6ef16f48f38b9420f65c9aa1", "title": "Fast Federated Edge Learning with Overlapped Communication and Computation and Channel-Aware Fair Client Scheduling" }, { "paperId": "263762edd4e8d5866e4bfb71015d1e8d03941f0d", "title": "Fast Federated Learning in the Presence of Arbitrary Device Unavailability" }, { "paperId": "654eaff6b4d8890b7b54dabcfa63822c90da42d5", "title": "Distributed Learning in Wireless Networks: Recent Progress and Future Challenges" }, { "paperId": "16a7dcb0332e02c502174a6210e8cfcde9f8762a", "title": "Semi-Decentralized Federated Learning With Cooperative D2D Local Model Aggregations" }, { "paperId": "19cc2c685f9683743750615df93d4cb1cf3eee15", "title": "Privacy Amplification for Federated Learning via User Sampling and Wireless Aggregation" }, { "paperId": "842a4e524f97a644e9995b9966da33b119f8bf52", "title": "Consensus Control for Decentralized Deep Learning" }, { "paperId": "433000baf18bb4403681fde5740bccd1fa2034a9", "title": "Achieving Linear Speedup with Partial Worker Participation in Non-IID Federated Learning" }, { "paperId": "7e4a83495c15fc3c4ff98f74ed3b6b03db393aac", "title": "Private Wireless Federated Learning with Anonymous Over-the-Air Computation" }, { "paperId": "eafd4efd5a5f90a22d714f13d63ec36120059533", "title": "Decentralized Optimization Over Noisy, Rate-Constrained Networks: Achieving Consensus by Communicating Differences" }, { "paperId": "6b8f1c25e47efd019210086d20318488e00c8897", "title": "Communicate to Learn at the Edge" }, { "paperId": "b97d07c6dc6132dbcbddf203d4a3f2d2a803f414", "title": "Multi-Level Local SGD for Heterogeneous Hierarchical Networks" }, { "paperId": "7843a2e12c0aa7f642806652358fa2a8fb058daf", "title": "Multi-Armed Bandit-Based Client Scheduling for Federated Learning" }, { "paperId": "552ec63759f9eccd7780edb0e755b6bcc3e54a3e", "title": "Exploiting Local and Cloud Sensor Fusion in Intermittently Connected Sensor Networks" }, { "paperId": "c933fed82e7b5cbf7230f0f970b69590b40f86a1", "title": "A Unified Theory of Decentralized SGD with Changing Topology and Local Updates" }, { "paperId": "e740a2b706fcae34850fd0e56619a2df7ee4dce7", "title": "Federated Visual Classification with Real-World Data Distribution" }, { "paperId": "45776d8fc6e9fccf87d923b9dcfede962910ee8c", "title": "Decentralized SGD with Over-the-Air Computation" }, { "paperId": "b93f5a3f8194baa3f3333775a9553694c58e0256", "title": "Convergence of Update Aware Device Scheduling for Federated Learning at the Wireless Edge" }, { "paperId": "fba1082dac7c90bfe97e7c3e85a55499ba982c14", "title": "Device Scheduling with Fast Convergence for Wireless Federated Learning" }, { 
"paperId": "2f3b15e68b793d7e5d834ce3356d2cf033ea7660", "title": "Age-Based Scheduling Policy for Federated Learning in Mobile Edge Networks" }, { "paperId": "cb4f814bc755ee7d4083a529cc1c84c2965547b4", "title": "Data-Importance Aware User Scheduling for Communication-Efficient Edge Machine Learning" }, { "paperId": "206261db1196e4e391ca42077f6fca6b3ece34d0", "title": "The Non-IID Data Quagmire of Decentralized Machine Learning" }, { "paperId": "46d8c9e2dc9c12615eb5f6813d18f967d61c7e0d", "title": "Measuring the Effects of Non-Identical Data Distribution for Federated Visual Classification" }, { "paperId": "bcb2d1c9cdc321d192925cc97c563470b30b8251", "title": "Hierarchical Federated Learning ACROSS Heterogeneous Cellular Networks" }, { "paperId": "efc088a6df785ffc96fb7ff3a65c9b549ac54832", "title": "Scheduling Policies for Federated Learning in Wireless Networks" }, { "paperId": "da2bd3d3d82ba6828dd90cb6777432099d1a1e02", "title": "Federated Learning Over Wireless Fading Channels" }, { "paperId": "72055aff17a462bcccb3250becaf62c9911cdd8b", "title": "MATCHA: Speeding Up Decentralized SGD via Matching Decomposition Sampling" }, { "paperId": "afb1acd9cb0caa50b9b9170e3cd63fa4a6f65478", "title": "Client-Edge-Cloud Hierarchical Federated Learning" }, { "paperId": "43572a7cc087e388f7f312a0f2e17915682ff27c", "title": "Broadband Analog Aggregation for Low-Latency Federated Edge Learning" }, { "paperId": "b42c38975f3d3f8bfe4b0c1a6e576c3e297cec38", "title": "Stochastic Gradient Push for Distributed Deep Learning" }, { "paperId": "2231677181ab63a5ab6f98aa417eed91d831f896", "title": "Efficient Decentralized Deep Learning by Dynamic Model Averaging" }, { "paperId": "5cfc112c932e38df95a0ba35009688735d1a386b", "title": "Federated Learning with Non-IID Data" }, { "paperId": "42f7bd35df5a280ccd47115a30901afab9f0776b", "title": "D2: Decentralized Training over Decentralized Data" }, { "paperId": "4d886c571c0849fda73a2d24e944d59fd37bcf9c", "title": "Collaborative Deep Learning in Fixed Topology Networks" }, { "paperId": "3f1ab8b484f7881a68c8562ff908390742e4ba90", "title": "Can Decentralized Algorithms Outperform Centralized Algorithms? 
A Case Study for Decentralized Parallel Stochastic Gradient Descent" }, { "paperId": "f7415d74aa51f0b665c2fc5da9ba76305dd90e19", "title": "On the Temporal Effects of Mobile Blockers in Urban Millimeter-Wave Cellular Scenarios" }, { "paperId": "de43968e11cb1c5f3a09e5d9b3cc3a271c1b0379", "title": "On Nonconvex Decentralized Gradient Descent" }, { "paperId": "d1dbf643447405984eeef098b1b320dee0b3b8a7", "title": "Communication-Efficient Learning of Deep Networks from Decentralized Data" }, { "paperId": "fce4412e5be888229b939fc80766092549395c0f", "title": "Adaptive communication in multi-robot systems using directionality of signal strength" }, { "paperId": "69a6cab0dd64f91548ce9b7feb24cdf92a0d0fe2", "title": "Millimeter Wave Channel Modeling and Cellular Capacity Evaluation" }, { "paperId": "9267430a1a7be8040f1afb09a2f2929a2e7d5489", "title": "On the Convergence of Decentralized Gradient Descent" }, { "paperId": "7586392a28c345a923a209b2a6fcfcc251f1c27a", "title": "Co-Optimization of Communication and Motion Planning of a Robotic Operation under Resource Constraints and in Fading Environments" }, { "paperId": "ffb03ba337560ccf1204e57e1d1bb831c55e21bf", "title": "Graph-theoretic connectivity control of mobile robot networks" }, { "paperId": "0aa7cb075978ed8c24f3e2a8ddc3ccb14df3e9a5", "title": "Hybrid Local SGD for Federated Learning with Heterogeneous Communications" }, { "paperId": "524d14d172db722a02e772f83e5f81023bf032bb", "title": "Maintaining Connectivity in Mobile Robot Networks" } ]
11,187
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/02b7726c2e069342adbf3dd51abbeeb68dd32bda
[ "Computer Science" ]
0.868341
Anomaly Detection Through Unsupervised Federated Learning
02b7726c2e069342adbf3dd51abbeeb68dd32bda
International Conference on Mobile Ad-hoc and Sensor Networks
[ { "authorId": "1726077791", "name": "Mirko Nardi" }, { "authorId": "2619829", "name": "L. Valerio" }, { "authorId": "2174176466", "name": "A. Passarella" } ]
{ "alternate_issns": null, "alternate_names": [ "MSN", "Mobile Ad-hoc and Sensor Networks", "Int Conf Mob Ad-hoc Sens Netw", "Mob Ad-hoc Sens Netw" ], "alternate_urls": null, "id": "72a6d50c-86ae-47c7-9a0e-54e5746aacee", "issn": null, "name": "International Conference on Mobile Ad-hoc and Sensor Networks", "type": "conference", "url": null }
Federated learning (FL) is proving to be one of the most promising paradigms for leveraging distributed resources, enabling a set of clients to collaboratively train a machine learning model while keeping the data decentralized. The explosive growth of interest in the topic has led to rapid advancements in several core aspects like communication efficiency, handling non-IID data, privacy, and security capabilities. However, the majority of FL works only deal with supervised tasks, assuming that clients' training sets are labeled. To leverage the enormous unlabeled data on distributed edge devices, in this paper, we aim to extend the FL paradigm to unsupervised tasks by addressing the problem of anomaly detection (AD) in decentralized settings. In particular, we propose a novel method in which, through a preprocessing phase, clients are grouped into communities, each having similar majority (i.e., inlier) patterns. Subsequently, each community of clients trains the same anomaly detection model (i.e., autoencoders) in a federated fashion. The resulting model is then shared and used to detect anomalies within the clients of the same community that joined the corresponding federated process. Experiments show that our method is robust, and it can detect communities consistent with the ideal partitioning in which groups of clients having the same inlier patterns are known. Furthermore, the performance is significantly better than those in which clients train models exclusively on local data and comparable with federated models of ideal communities' partition.
# Anomaly Detection through Unsupervised Federated Learning ### Mirko Nardi _Scuola Normale Superiore_ Pisa, Italy mirko.nardi@sns.it ### Lorenzo Valerio _IIT-CNR_ Pisa, Italy lorenzo.valerio@iit.cnr.it ### Andrea Passarella _IIT-CNR_ Pisa, Italy andrea.passarella@iit.cnr.it

**_Abstract—Federated learning (FL) is proving to be one of the most promising paradigms for leveraging distributed resources, enabling a set of clients to collaboratively train a machine learning model while keeping the data decentralized. The explosive growth of interest in the topic has led to rapid advancements in several core aspects like communication efficiency, handling non-IID data, privacy, and security capabilities. However, the majority of FL works only deal with supervised tasks, assuming that clients' training sets are labeled. To leverage the enormous unlabeled data on distributed edge devices, in this paper, we aim to extend the FL paradigm to unsupervised tasks by addressing the problem of anomaly detection in decentralized settings. In particular, we propose a novel method in which, through a preprocessing phase, clients are grouped into communities, each having similar majority (i.e., inlier) patterns. Subsequently, each community of clients trains the same anomaly detection model (i.e., autoencoders) in a federated fashion. The resulting model is then shared and used to detect anomalies within the clients of the same community that joined the corresponding federated process. Experiments show that our method is robust, and it can detect communities consistent with the ideal partitioning in which groups of clients having the same inlier patterns are known. Furthermore, the performance is significantly better than those in which clients train models exclusively on local data and comparable with federated models of ideal communities' partition._**

**_Index Terms—federated learning, unsupervised, anomaly detection_**

I. INTRODUCTION

Distributed/decentralized ML executed at the edge represents one of the most promising approaches capable of addressing the issues that afflict centralized solutions. In this regard, the Federated Learning (FL) [1] paradigm has proved to be an effective and promising approach to face the hard challenges triggered by these distributed settings. It essentially aims to collaboratively train an ML model while keeping the data decentralized, through the exchange of model parameter updates (instead of raw data), which, in its vanilla version, are iteratively aggregated and shared by a central coordinating node.

Given its effectiveness, in recent years plenty of subsequent research works have been released, focusing on different core aspects: improving communication efficiency, increasing model performance in combination with non-IID data, extending privacy and security capabilities, and addressing client hardware variability. Nevertheless, FL applications and implementations for mobile edge devices are still largely designed for supervised learning tasks, as a spontaneous consequence of the paradigm's original development purpose [2].

This work has been partly funded under the H2020 MARVEL (grant 957337), HumaneAI-Net (grant 952026), SoBigData++ (grant 871042) and CHIST-ERA SAI (grant CHIST-ERA-19-XAI-010, by MUR, FWF, EPSRC, NCN, ETAg, BNSF).
Thus, one of the least treated aspects is the extension of FL to other ML paradigms like unsupervised learning, reinforcement learning, active learning, and online learning [3]. This paper specifically aims to apply FL to unsupervised tasks for mobile edge devices. Unsupervised learning (as well as semi-supervised and self-supervised learning) has recently been considered one of the next great frontiers for AI [4]. Unlabeled data far surpasses labeled data in real-world applications; hence its integration with federated contexts is mandatory to fully unleash the potential of this approach.

In this paper, we consider nodes that have to learn a common ML model (e.g., a classifier). We assume that sets of these nodes "see" similar data patterns. However, as we assume that data are not labeled, nodes need to automatically group themselves into those sets in order to perform FL across members of the same set. As a specific application case, we consider anomaly detection. Specifically, our methodology consists of a preprocessing phase in which each node of the system detects a membership group (cluster or community) such that each member shares similar majority (i.e., inlier) patterns. In fact, to ensure the effectiveness of an anomaly detection task, a federated model must be trained on data coming from the same distribution. Once the nodes are grouped in communities, a federated learning process is spawned for each of them: nodes of the same group use their local data to collaboratively train an autoencoder to recognize their majority pattern (i.e., the inlier class). Autoencoders are particularly suitable for this purpose, since typical FL protocols involve using a neural network-based model. However, the methodology is orthogonal to the specific model trained via FL. Once the federated process is finished, each client gets a much more accurate global model than it would have obtained using only its local data, as long as it has joined the proper community.

The proposed methodology is particularly suited for mobile environments for several reasons. First, it allows nodes not to exchange local data, thus addressing privacy and network resource limitations. Second, it supports heterogeneous settings where the federation is not under the control of a single entity (like in a datacenter), but where nodes join the federation "freely". Third, it is tailored to using tiny ML models on individual nodes, which is mandatory for realistically implementing decentralized model training on mobile devices.

This work can subsequently be framed in a more general context of anomaly detection in which normal data belong to multiple classes (in contrast to the typical AD task involving only a single inlier class). For instance, the methodology proposed, whose output is a set of models each specialized in identifying a single normal pattern, can be further extended with ensemble-based methods to efficiently tackle the multi-class anomaly detection problem, as shown in [5].

The remainder of the paper is organised as follows. In Section II, an overview of the problem and the related works is given. In Section III, we formulate the problem and list the preliminaries, and in Section IV we describe our method in detail. In Section V we discuss the results of the experiments, and in Section VI we draw the conclusions.

II. RELATED WORKS

Federated Learning is a distributed learning framework particularly amenable to optimizing computing power and data management on edge devices.
It is now widely considered a modern and more effective evolution of the more traditional distributed paradigms [6]–[10], in which models are trained on large but 'flat' datasets within a fully controlled environment in terms of resource availability and data management. FL relaxes many of the traditional constraints and, since its introduction [1], several lines of research have contributed to fast advances [3]; additionally, from the application perspective, many specific use-case solutions have already been deployed by major service providers [2], [11], [12]. Due to space reasons, in the rest of the section we provide an overview of unsupervised approaches to FL, which are the closest area with respect to the focus of this paper.

_A. Unsupervised Federated Learning_

Very few works combining federated learning and unsupervised approaches have been released, each of them dealing with limited scenarios and settings. Reference [13] is the first to introduce unsupervised representation learning in a federated setting, but it simply combines the two concepts without addressing the typical issues of distributed settings, particularly for mobile environments (e.g., dealing with non-IID data, scaling the number of devices, different application domains). Reference [14] makes progress on the same problem by adding and facing two relevant challenges: (i) inconsistency of representation spaces, due to the non-IID data assumption, i.e., clients generate local models focused on different categories; (ii) misalignment of representations, given by the absence of unified information among clients. Reference [15] introduced an unsupervised federated learning (FL) approach for speech enhancement and separation with non-IID data across multiple clients. An interesting aspect of this work is that a small portion of supervised data is exploited to boost the main unsupervised task through a combination of updates from clients with supervised and unsupervised data. In [16], the authors present a first effort towards a collaborative system of autoencoders for distributed anomaly detection. However, the data collected by the edge devices are used to train the models in the cloud, which violates an essential FL feature; locally, the models are used for inference only. A more recent work [17] in a similar direction proposes a federated learning (FL)-based anomaly detection approach for intrusion identification and classification in IoT networks using decentralized on-device data. Here the authors run federated training rounds on Gated Recurrent Unit (GRU) models and keep the data intact on local IoT devices by sharing only the learned weights with the central server of the FL. However, dealing with a classification task still assumes the availability of labeled data.

III. PROBLEM FORMULATION AND PRELIMINARIES

We consider a distributed learning system with a set of clients M and a set of data distributions C, such that |C| ≤ |M|. With data distribution, we refer to a set of identically distributed data representing a specific pattern (e.g., observations of phenomena belonging to the same class of events, in the case of a classification task). We assume that every client receives a portion d ∈ (0%, 50%) of its samples from a single distribution Cout ∈ C, and the remaining (100 − d)% from Cin ∈ C, such that Cin ̸= Cout. Thereby, the two sample partitions within each client form the outlier and inlier classes, respectively. This split represents a basic assumption when dealing with AD tasks [18]. d ∈ [5%, 15%] is generally a realistic value [19], and is thus adopted in the majority of related works.
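As an illustration of this assumption, the following sketch (ours, not from the paper's code) builds one client's unlabeled training set from a labeled pool by mixing a (100 − d)% inlier majority drawn from Cin with a d% outlier minority drawn from Cout:

```python
# Minimal sketch of the per-client data assumption: a (100 - d)% inlier
# majority from class c_in plus a d% outlier minority from class c_out.
# Sampling details are illustrative; the paper only fixes the proportions.
import numpy as np

def make_client_dataset(x, y, c_in, c_out, n, d=0.10, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    n_out = int(n * d)          # outlier samples drawn from c_out
    n_in = n - n_out            # inlier samples drawn from c_in
    idx_in = rng.choice(np.flatnonzero(y == c_in), n_in, replace=False)
    idx_out = rng.choice(np.flatnonzero(y == c_out), n_out, replace=False)
    idx = np.concatenate([idx_in, idx_out])
    rng.shuffle(idx)
    return x[idx]               # the client keeps no labels locally
```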
Note that this scenario corresponds to assuming locally skewed data, i.e., each node "sees" a prevalence of data of a single class (its inlier class) and a minority of data from one of the other classes. This is also quite realistic in practice in AD tasks.

The challenge addressed in the paper is the following. In the case of supervised learning, data belonging to each class are labeled, so each node knows which other nodes "see" the same majority class, and forming FL groups is therefore straightforward. In unsupervised cases, each node can detect its majority class from local data, but has no direct information about which other nodes see the same majority class. Therefore, the main objective of our methodology is to identify an effective algorithm for nodes to form consistent groups (i.e., groups that see the same majority class), to then run a standard FL process across nodes of the same group. Note that, as will be clear from the detailed description in Section IV, at the end of the first step of our methodology clients are partitioned into k disjoint groups S1, . . ., Sk. In the ideal case, each group corresponds to the (unknown to the clients) set of nodes seeing the same inlier class Cin, and therefore in the ideal case k = |C|.

IV. PROPOSED METHODOLOGY

As anticipated in Section I, our methodology consists of two logical steps. In the first step, we group clients that "see" the same inlier class via a fully autonomous and unsupervised process. In the second step, we run a standard FL process among clients belonging to the same group. We present the two steps in the following sections.

_A. Step I: group identification_

The aim of this phase is to make the clients join a group (i.e., cluster) having the same (or similar) majority class Cin. To achieve this, we first train a "classical" AD model (e.g., OC-SVM) on every client, using only its local data, so that each of them is able to compute a preliminary split of its data into inliers and outliers. Thereafter, every pair of clients performs the following steps: (i) they exchange their respective models, and (ii) they use the partner's model to split their local data into "normal" and "anomalous" data through an inference step. In other words, for every pair of nodes (mi, mj), node mi uses node mj's local model to classify its own local data, and vice versa. If the classification agreement is high enough, it means that node mj's model has been trained on the same inlier class as node mi's, and therefore mi and mj should be in the same group. Note that it is not necessary to use a very complex local model at this step. Although the local model of a client only enables an approximate preliminary inliers/outliers split, it suffices to detect clients sharing the same majority class of data, as long as the patterns of data in those classes are sufficiently different (as is the case in typical AD tasks).

Given a client mi, from its perspective this phase is detailed in Algorithm 1. Specifically, on the local dataset of the i-th client, i.e., xi, an inference step is computed using its own locally trained model (line 4) and all the models of the other clients (line 9). yj,i is the binary output vector given by the AD model of the j-th client on the data of the i-th client. Thus, inj,i is the portion of inliers in the vector yj,i. The boolean bj,i indicates whether the i-th client flags the j-th client as a candidate for the association. The output of the process is the group Gi of candidate clients with inlier classes similar to mi's.
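The following minimal sketch illustrates this pairwise test with scikit-learn's OneClassSVM (the concrete model adopted later in Section V-B); Algorithm 1 below formalizes the full client-side procedure, and all helper names here are ours:

```python
# Illustrative sketch of the pairwise association test of Step I.
# Helper names are ours; model exchange/serialization is abstracted away.
import numpy as np
from sklearn.svm import OneClassSVM

def train_local_model(x_local, nu=0.1):
    # nu is set to the assumed contamination d = 10% (see Section V-B).
    return OneClassSVM(kernel="rbf", nu=nu).fit(x_local)

def inlier_fraction(model, x):
    # OneClassSVM.predict returns +1 for inliers and -1 for outliers.
    return np.mean(model.predict(x) == 1)

def wants_association(own_model, partner_model, x_local, q=0.08):
    # Client-side test (Algorithm 1, line 11): flag the partner if its
    # model yields an inlier fraction on our data within +/- q of ours.
    in_own = inlier_fraction(own_model, x_local)
    in_partner = inlier_fraction(partner_model, x_local)
    return abs(in_partner - in_own) <= q
```

Two clients become associated only if this test succeeds in both directions, i.e., each flags the other on its own local data.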
At the end of Algorithm 1, each client has a local view of which other clients should belong to its group. However, different clients in the same group may have different local views (i.e., even if mj is in Gi, Gj may not be identical to Gi). In order to obtain an overall view of the groups, shared by all nodes, we adopt the following method. Since the association of two clients is reciprocal (line 14), an undirected graph can be built from all the resulting groups of candidates of each client. A link between two nodes means that those two nodes mutually "think" they are in the same group. Finally, a community detection algorithm is run on this graph to detect which groups of nodes should be considered part of the same set and thus undergo a standard FL step. In other words, we assume that the communities found at the end of this step are the groups of clients with the same inlier class (a sketch of this grouping step is given at the end of this section).

**Algorithm 1 Client mi local training and association**

**Input: AD model Modi, contamination d, association threshold q, set of other clients M**
**Output: Group Gi of candidate clients similar to mi**
1: procedure LOCALAD(Modi, d, q, M)
2:   Gi ← ∅
3:   Modi = Modi.fit(xi, d)
4:   yi,i = Modi.predict(xi)
5:   ini,i = inlierPercCount(yi,i)
6:   send(Modi, M)
7:   **for all mj in M do**
8:     Modj = receive(mj)
9:     yj,i = Modj.predict(xi)
10:    inj,i = inlierPercCount(yj,i)
11:    bj,i = (ini,i − q ≤ inj,i ≤ ini,i + q)
12:    send(bj,i, mj)
13:    bi,j = receive(mj)
14:    **if bj,i AND bi,j then**
15:      Gi ← Gi ∪ {mj}
16:    **end if**
17:  **end for**
18:  **return Gi**
19: end procedure

_B. Step II: federated outlier detection_

The result of the first phase is a set of k groups (or communities) G1, . . ., Gk; for each of them, a FL instance is started using autoencoders as models. Autoencoders are suitable for this purpose for two main reasons: (i) they naturally fit into the FL framework, being NN-based; (ii) they can be effectively used in AD tasks. In fact, they essentially learn a compressed representation of the unlabeled data used for training, performing a nonlinear dimensionality reduction. Once trained, the reconstruction error of a given sample can be used to classify it using a threshold. We use the vanilla version of Federated Averaging (FedAvg) [1], a FL protocol based on averaging the local stochastic gradient descent updates to compute the global model. At the end of each federation process, the trained autoencoder is shared among the clients of the same group.

Note that the community detection step requires either a central entity that runs the algorithm once for all nodes, or that the graph is shared among all nodes, each running the same community detection algorithm individually. Even in the former case, our methodology does not require that nodes share local data with any central controller, and can thus address situations where centralized learning is unfeasible or impractical (e.g., due to data ownership reasons).
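To make the grouping step concrete, the sketch below builds the mutual-association graph and extracts communities with networkx. The paper does not prescribe a particular community detection algorithm, so greedy modularity maximization here is our assumption:

```python
# Illustrative sketch of the global grouping step. mutual_pairs contains
# every pair (i, j) for which both b_{j,i} and b_{i,j} are true.
# Greedy modularity is an assumption; the paper leaves the algorithm open.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def detect_communities(num_clients, mutual_pairs):
    g = nx.Graph()
    g.add_nodes_from(range(num_clients))
    g.add_edges_from(mutual_pairs)     # edge = reciprocal association
    # Each resulting community then spawns its own FedAvg instance
    # (Step II); isolated nodes fall back to purely local training.
    return [sorted(c) for c in greedy_modularity_communities(g)]
```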
V. EXPERIMENTS

In this section, we describe the numerical simulations used to assess the performance of the proposed methodology. The baseline is given by the local model scheme, in which every client trains its model using only local data. We show a further comparison with an ideal partitioning scheme in which the groups of clients having the same inlier patterns are known. This corresponds to a supervised FL algorithm, where all data are labeled by a central entity. Our code is based on well-established, standard frameworks: Tensorflow, Scikit-Learn, PyOD and Flower. For the sake of reproducibility, the code is available at https://github.com/mirqr/FedAD

_A. Datasets and setup_

We test our methodology on the MNIST [20] and the fashion-MNIST [21] datasets, using the original 60000-10000 train-test splits. Since both have ten classes, we have |C| = 10 data distributions.

Locally, given a portion of outliers d, the train set of every client has d percent of its samples from a single distribution Cout ∈ C, and the remaining (100 − d) percent from Cin ∈ C, such that Cin ̸= Cout. With a view to a collaborative anomaly detection task, we ensure that all the datasets owned by the clients are numerically balanced and disjoint.

The set of clients M that composes an experimental setup is configured as follows: we define a parameter p as the number of clients within the same data distribution (i.e., class), meaning that the train samples of a class Cin of the original dataset (e.g., MNIST) are evenly and randomly spread to form the inliers of p clients. Accordingly, the portion of outliers for each client within the same group, characterized by the same Cin, is given by samples of a class different from Cin. We ensure that the outlier classes C \ Cin are equally represented within the group, meaning that for each client of the group the minority class "circulates" through the set C \ Cin. As an example, using all the available data distributions of the dataset (i.e., 10 classes) and setting p = 9, the training data distribution among the clients of the group Cin = 0 is shown in Fig. 1. The same applies to every group, i.e., an experimental system configuration ends up with |M| = |C| · p clients. Consequently, the ideal partitioning we aim to find through the community detection phase is composed of k = |C| = 10 groups with p clients each.

Note that, without loss of generality, to obtain a balanced distribution of the outlier classes among the clients of a group, it is convenient to set p = (|C| − 1)n, n ∈ N. Additionally, since each configuration run exploits all the samples of the dataset involved, a higher value of p leads to smaller local datasets for the clients.

_B. Models_

In the first phase, every client detects the partners having the same inlier class. As explained in Section IV, a client tests the others' trained models on its local data and selects as partners those whose models produce an inliers/outliers ratio similar to its own. We select the "association" threshold q in the interval [0.01, 0.10], i.e., q represents the maximum percentage difference between the data classified as normal by the local model and those considered normal when using the partner's model. In other words, the local client considers another client as a partner if the model of the latter produces a fraction of normal data on the local dataset equal to the fraction produced by the local node, ±q. In particular, we found that the value q = 0.08 turns out to work well in every experiment.
We choose the model of the first phase with the following requirements: (i) it must be easy to set up and fast to train; (ii) it must be light to store and to transmit; (iii) it must provide a preliminary outlier detection good enough to let the clients group correctly for the next phase. There is no single model generally suitable for this purpose; it strongly depends on the type of data used, especially for AD tasks [22]. Moreover, the abovementioned requirements force us to discard any NN-based AD model. Thus, we have identified OC-SVM [23] as a good choice for our cases. It requires essentially two parameters to be set: the kernel and the parameter ν ∈ (0, 1], which is an upper bound on the fraction of training errors and a lower bound on the fraction of support vectors. The fine-tuning of ν on contaminated data can be challenging without any assumptions on the distribution of the outliers. However, since in our tests we assume to know (only) the contamination value d = 10% for every dataset, we can set ν = 0.1. Moreover, we use the RBF kernel.

For the second phase, we use a fully connected autoencoder, a NN-based model that naturally fits into a federated learning framework, with a three-layer topology (64-32-64), ReLU activations on the hidden layers, and Sigmoid activation on the output layer. Thirty-two neurons for the middle layer is a reasonable value to avoid an excessively tight information bottleneck. We empirically observed that using more layers/neurons does not significantly improve the effectiveness, due to the tendency of the neural network to overfit on this specific dataset.
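A minimal Keras sketch of this autoencoder and of its reconstruction-error anomaly score is given below, together with a naive sample-weighted averaging step in the spirit of FedAvg [1]. The actual experiments federate the model with the Flower framework; the input dimension of 784 (flattened MNIST images) and all helper names are our assumptions:

```python
# Minimal sketch of the 64-32-64 autoencoder of Section V-B, its
# reconstruction-error score, and a naive FedAvg-style weight average.
# The paper's code uses Flower for federation; this is illustrative only.
import numpy as np
import tensorflow as tf

def build_autoencoder(input_dim=784):  # 784 = flattened 28x28 image
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(input_dim,)),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(32, activation="relu"),   # bottleneck
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(input_dim, activation="sigmoid"),
    ])

def anomaly_scores(model, x):
    # Per-sample reconstruction error (MSE); higher = more anomalous.
    recon = model.predict(x, verbose=0)
    return np.mean((x - recon) ** 2, axis=1)

def fedavg(weight_sets, num_samples):
    # Sample-size weighted average of the clients' get_weights() lists,
    # mirroring one aggregation round of vanilla FedAvg.
    total = sum(num_samples)
    return [
        sum(w[i] * n / total for w, n in zip(weight_sets, num_samples))
        for i in range(len(weight_sets[0]))
    ]
```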
_C. Group detection and anomaly detection performance_

For both the MNIST and the fashion-MNIST datasets, we run four tests varying the value of p ∈ {9, 18, 27, 36}. In all the tests we use the contamination parameter d = 10% and we take into account all the available classes, i.e., |C| = 10. Let mCi,j be the j-th client with majority class Ci; we define ICi as the ideal set of clients having the same majority class Ci, e.g., I0 = {m0,0, . . ., m0,p−1}.

Fig. 1: Histograms of training data distribution for the group Cin = 0 (i.e., 0 is the common inlier class) with p = 9.

In Table I, we show the results of the community detection phase for the MNIST dataset: we find nine communities, and in most cases they match the ideal groups of clients. The major exception is given by G4, which in all four cases is given by the union of I4 and I9, meaning that the clients having 4 and 9 as inlier class join the same community. This is a consequence of the OC-SVM model's inability to distinguish the two digits, and it represents a typical behaviour when dealing with image classification on MNIST. A similar result occurs for G5 when p = 36 (Table Id), in which the union of I5 and I8 is detected as a single community. In this case, recalling that a higher value of p leads to smaller local datasets for the clients, it is reasonable that for p = 36 the local models do not have enough samples and are no longer able to distinguish the two digits. We can observe the anticipation of this behaviour when p = 27 in Table Ic, in which the client m8,18 mistakenly joins I5.

TABLE I: Community detection for MNIST

| Community ID | (a) p = 9 | (b) p = 18 | (c) p = 27 | (d) p = 36 |
|---|---|---|---|---|
| G0 | I0 | I0 | I0 | I0 |
| G1 | I1 | I1 | I1 | I1 |
| G2 | I2 | I2 | I2 | I2 |
| G3 | I3 | I3 | I3 | I3 |
| G4 | I4 ∪ I9 | I4 ∪ I9 | I4 ∪ I9 | I4 ∪ I9 |
| G5 | I5 | I5 | I5 ∪ m8,18 | I5 ∪ I8 |
| G6 | I6 | I6 | I6 | I6 |
| G7 | I7 | I7 | I7 | I7 |
| G8 | I8 | I8 | I8 \ m8,18 | — |

Similar considerations apply to the fashion-MNIST case (Table II). Here the ideal groups of clients I1 and I3 are detected as a single community in all four tests. The same applies to the groups I0, I2, I4, I6, excluding the case p = 9 (Table IIa), in which I0 is correctly isolated. This result is expected, as fashion-MNIST is notably harder than MNIST.

TABLE II: Community detection for fashion-MNIST

(a) p = 9

| Community ID | Members |
|---|---|
| G0 | I0 |
| G1 | I1 ∪ I3 |
| G2 | I2 ∪ I4 ∪ I6 |
| G3 | I5 |
| G4 | I9 |
| G5 | I7 |
| G6 | I8 |

(b) p = 18

| Community ID | Members |
|---|---|
| G0 | I0 ∪ I2 ∪ I4 ∪ I6 |
| G1 | I1 ∪ I3 |
| G2 | I5 \ m5,6 |
| G3 | I9 |
| G4 | I7 |
| G5 | I8 |
| G6 | m5,6 |

(c) p = 27

| Community ID | Members |
|---|---|
| G0 | I0 ∪ I2 ∪ I4 ∪ I6 |
| G1 | I1 ∪ I3 |
| G2 | I5 |
| G3 | I9 |
| G4 | I7 |
| G5 | I8 |

(d) p = 36

| Community ID | Members |
|---|---|
| G0 | I0 ∪ I2 ∪ I4 ∪ I6 |
| G1 | I1 ∪ I3 |
| G2 | I5 |
| G3 | I9 |
| G4 | I7 |
| G5 | I8 |

_D. Experimental results: federated outlier detection_

We compare our methodology with two baselines: (i) local, where clients only train on local data; (ii) ideal, in which a client mCi,j uses the model trained through federated learning on the set of clients ICi, i.e., the set of clients sharing the same majority class. The test samples for each client are randomly sampled from the MNIST/fashion-MNIST test set, following the same inlier/outlier classes and ratio of the corresponding client. In Tables III and IV we show the test AUC score on MNIST and fashion-MNIST, varying the value of p; for each row we compute the average AUC score of p · |C| clients.

Our methodology performs almost as well as the upper bound baseline, represented by the ideal federations of clients. Nevertheless, the results are consistent with the partitioning we obtain in the first step with the community detection, which, especially for MNIST, identifies the right groups of clients in most of the cases. In the fashion-MNIST case, there are more exceptions to this behaviour. For instance, clients with different inlier classes all join a common group, as shown in Table II (e.g., G1). This affects the average AUC scores, which appear slightly lower than the ideal upper bound (as opposed to the nearly identical MNIST scores), but are still satisfactory. More detailed results are shown in Tables V and VI, in which we only consider the detected communities that do not match the ideal cases.
In these tables, each row corresponds to the average test AUC score for a fixed p and all the clients having majority class CIN.

TABLE III: Test AUC on MNIST. For each p, mean ± std are computed over p · |C| clients.

| p | Local | Community (ours) | Ideal |
|---|---|---|---|
| 9 | 0.773 ± 0.205 | 0.836 ± 0.18 | 0.839 ± 0.185 |
| 18 | 0.769 ± 0.207 | 0.835 ± 0.18 | 0.836 ± 0.181 |
| 27 | 0.77 ± 0.208 | 0.836 ± 0.18 | 0.84 ± 0.181 |
| 36 | 0.766 ± 0.207 | 0.819 ± 0.191 | 0.838 ± 0.182 |

TABLE IV: Test AUC on fashion-MNIST. For each p, mean ± std are computed over p · |C| clients.

| p | Local | Community (ours) | Ideal |
|---|---|---|---|
| 9 | 0.714 ± 0.166 | 0.761 ± 0.161 | 0.772 ± 0.155 |
| 18 | 0.71 ± 0.173 | 0.747 ± 0.166 | 0.769 ± 0.155 |
| 27 | 0.706 ± 0.165 | 0.75 ± 0.162 | 0.765 ± 0.154 |
| 36 | 0.707 ± 0.166 | 0.749 ± 0.161 | 0.765 ± 0.151 |

The difference between the community (ours) and the ideal case is that in the former the clients of CIN are trained through the corresponding detected federation G that contains ICIN (Tables I and II), while in the latter they are trained through the ideal federation ICIN.

As regards the MNIST case, we always obtain a community G4 = I4 ∪ I9 and, for p = 36, we have the additional community G5 = I5 ∪ I8. We ignore the one-client mismatch in the p = 27 case (Table Ic), as we verified that its influence is negligible. In Table V we observe that the clients with majority class CIN = 4 still perform well with our methodology, with an average increase of 6% in the AUC score over the local case and an average decrease of 2% from the ideal case. CIN = 9 scores end up approximately in the middle of the two bounds, noting, however, that the local case already reaches a good score of 0.83 for any p. CIN = 5 is the only case that performs noticeably worse than the ideal case, with a decrease of 9% in the AUC score. However, also in this case there is a noticeable improvement over using the local models only.

For the fashion-MNIST case (Table VI), the scores are predictably lower than in the previous case: the gaps between the two bounds are generally tighter, but in every test the scores of our methodology still fall in the middle. Clients of CIN = 1 almost reach the ideal result, although the difference with the local one is minimal, while clients with CIN = 3 have on average a ∼4% increase/decrease with respect to the lower/upper baseline. Clients of CIN = 2 and CIN = 4 have an average AUC score very close (+1%) to the lower baseline for p > 9; this is precisely the value beyond which their federation is the union of four sets, i.e., I0 ∪ I2 ∪ I4 ∪ I6, thus totalling four different majority classes. On the other hand, the remaining clients of this big federation, CIN = 0 and CIN = 6, are still able to reach a ∼7% increase over the local case and be very close to the ideal case.

TABLE V: Test AUC ± std on MNIST

| p | CIN | Local | Community (ours) | Ideal |
|---|---|---|---|---|
| 9 | 4 | 0.749 ± 0.245 | 0.833 ± 0.197 | 0.833 ± 0.232 |
| 9 | 9 | 0.823 ± 0.184 | 0.86 ± 0.159 | 0.881 ± 0.138 |
| 18 | 4 | 0.774 ± 0.2 | 0.819 ± 0.204 | 0.855 ± 0.19 |
| 18 | 9 | 0.828 ± 0.176 | 0.872 ± 0.149 | 0.881 ± 0.139 |
| 27 | 4 | 0.762 ± 0.214 | 0.823 ± 0.208 | 0.84 ± 0.205 |
| 27 | 9 | 0.836 ± 0.158 | 0.862 ± 0.161 | 0.882 ± 0.132 |
| 36 | 4 | 0.76 ± 0.215 | 0.799 ± 0.213 | 0.84 ± 0.201 |
| 36 | 9 | 0.838 ± 0.156 | 0.862 ± 0.157 | 0.881 ± 0.13 |
| 36 | 5 | 0.708 ± 0.194 | 0.718 ± 0.188 | 0.807 ± 0.177 |
| 36 | 8 | 0.677 ± 0.196 | 0.696 ± 0.195 | 0.719 ± 0.219 |

TABLE VI: Test AUC ± std on fashion-MNIST

| p | CIN | Local | Community (ours) | Ideal |
|---|---|---|---|---|
| 9 | 1 | 0.911 ± 0.051 | 0.94 ± 0.028 | 0.946 ± 0.025 |
| 9 | 3 | 0.741 ± 0.127 | 0.788 ± 0.139 | 0.83 ± 0.094 |
| 9 | 2 | 0.663 ± 0.146 | 0.686 ± 0.154 | 0.719 ± 0.125 |
| 9 | 4 | 0.714 ± 0.128 | 0.762 ± 0.13 | 0.782 ± 0.117 |
| 9 | 6 | 0.642 ± 0.142 | 0.675 ± 0.144 | 0.698 ± 0.137 |
| 18 | 1 | 0.913 ± 0.04 | 0.935 ± 0.036 | 0.944 ± 0.026 |
| 18 | 3 | 0.751 ± 0.107 | 0.792 ± 0.14 | 0.831 ± 0.082 |
| 18 | 0 | 0.683 ± 0.125 | 0.742 ± 0.124 | 0.775 ± 0.089 |
| 18 | 2 | 0.665 ± 0.153 | 0.667 ± 0.16 | 0.711 ± 0.126 |
| 18 | 4 | 0.713 ± 0.142 | 0.724 ± 0.134 | 0.775 ± 0.115 |
| 18 | 6 | 0.626 ± 0.142 | 0.68 ± 0.137 | 0.704 ± 0.133 |
| 27 | 1 | 0.907 ± 0.04 | 0.937 ± 0.033 | 0.944 ± 0.024 |
| 27 | 3 | 0.74 ± 0.099 | 0.77 ± 0.164 | 0.813 ± 0.088 |
| 27 | 0 | 0.688 ± 0.109 | 0.743 ± 0.101 | 0.773 ± 0.08 |
| 27 | 2 | 0.674 ± 0.136 | 0.692 ± 0.145 | 0.763 ± 0.109 |
| 27 | 4 | 0.71 ± 0.13 | 0.725 ± 0.117 | 0.777 ± 0.107 |
| 27 | 6 | 0.63 ± 0.125 | 0.705 ± 0.126 | 0.714 ± 0.133 |
| 36 | 1 | 0.907 ± 0.041 | 0.936 ± 0.035 | 0.943 ± 0.024 |
| 36 | 3 | 0.73 ± 0.118 | 0.762 ± 0.16 | 0.803 ± 0.093 |
| 36 | 0 | 0.68 ± 0.113 | 0.754 ± 0.095 | 0.772 ± 0.078 |
| 36 | 2 | 0.675 ± 0.127 | 0.694 ± 0.144 | 0.733 ± 0.123 |
| 36 | 4 | 0.714 ± 0.132 | 0.743 ± 0.119 | 0.783 ± 0.107 |
| 36 | 6 | 0.639 ± 0.131 | 0.698 ± 0.125 | 0.717 ± 0.127 |
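For concreteness, the per-client test AUC reported in Tables III–VI can be computed from the reconstruction-error scores as sketched below; labels mark whether a test sample comes from Cout and are used for evaluation only, never for training (helper names are ours):

```python
# Illustrative sketch of the per-client evaluation: AUC of the
# autoencoder's reconstruction-error score against test-only labels.
import numpy as np
from sklearn.metrics import roc_auc_score

def client_test_auc(model, x_test, is_outlier):
    # is_outlier: 1 for samples drawn from C_out, 0 for C_in.
    recon = model.predict(x_test, verbose=0)
    scores = np.mean((x_test - recon) ** 2, axis=1)  # reconstruction MSE
    return roc_auc_score(is_outlier, scores)
```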
VI. CONCLUSIONS AND FUTURE WORK

In this paper we propose a new methodology for federated learning in unsupervised settings, particularly amenable to dynamic mobile environments without central coordination. We specifically focus on anomaly detection tasks to define the details and test the methodology. The methodology is composed of two sequential steps: in the first step we detect the communities of clients having similar majority patterns (i.e., inlier class); this is achieved by having the clients perform a preliminary inlier/outlier split of their local data through the training of an AD model. Two clients join the same community when both agree on the inliers/outliers proportion after exchanging their respective models and computing an inference step on their local data. Then, each of the resulting communities collaboratively trains a NN-based anomaly detection model through the federated learning framework.

We tested our methodology on the MNIST and fashion-MNIST datasets; in most cases, the communities found match the ideal groups of clients, which are used as an upper bound baseline in the experimental part. When the ideal groups
In both cases, the results show clear advantage over the models locally trained (i.e., the lower baseline), while the performance is comparable with the federated models of ideal communities’ partition, even for detected communities in which different majority classes are merged. This indicates that, even though we may not always be able to group clients as in the ideal (supervised) case, still the accuracy of the resulting model is close to optimal, and significantly better than using local models trained only on local data. Future directions can involve several aspects of the proposed solution. Firstly, the optimization of the community detection phase, i.e., the all-to-all exchange of the local models may be suboptimal for high numbers of clients. Moreover, another possible improvement is the selection of the specific algorithms used to train local and federated models. For example, the “flat” fully connected autoencoder we use for the federated training may be too simple; as an example, when dealing with images, convolutional autoencoders may be introduced. Finally, we aim to frame this solution in a more general context of anomaly detection in which normal data belong to multiple classes, in contrast to the typical AD task that only involves a single inlier class. ACKNOWLEDGMENT This work has been partly funded under the H2020 MARVEL (grant 957337), HumaneAI-Net (grant 952026), SoBigData++ (grant 871042) and CHIST-ERA SAI (grant CHIST-ERA-19XAI-010, by MUR, FWF, EPSRC, NCN, ETAg, BNSF). REFERENCES [1] B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. y. Arcas, “Communication-Efficient Learning of Deep Networks from Decentralized Data,” in _Proceedings_ _of_ _the_ _20th_ _International_ _Conference on Artificial Intelligence and Statistics, ser. Proceedings_ of Machine Learning Research, A. Singh and J. Zhu, Eds., vol. 54. PMLR, 20–22 Apr 2017, pp. 1273–1282. [Online]. Available: [https://proceedings.mlr.press/v54/mcmahan17a.html](https://proceedings.mlr.press/v54/mcmahan17a.html) [2] T. Yang, G. Andrew, H. Eichner, H. Sun, W. Li, N. Kong, D. Ramage, and F. Beaufays, “Applied federated learning: Improving google keyboard query suggestions,” arXiv preprint arXiv:1812.02903, 2018. [3] P. Kairouz, H. B. McMahan, B. Avent, A. Bellet, M. Bennis, A. N. Bhagoji, K. Bonawitz, Z. Charles, G. Cormode, R. Cummings, R. G. L. D’Oliveira, H. Eichner, S. E. Rouayheb, D. Evans, J. Gardner, Z. Garrett, A. Gascon, B. Ghazi, P. B. Gibbons, M. Gruteser, Z. Harchaoui, C. He,´ L. He, Z. Huo, B. Hutchinson, J. Hsu, M. Jaggi, T. Javidi, G. Joshi, M. Khodak, J. Konecny, A. Korolova, F. Koushanfar, S. Koyejo,´ T. Lepoint, Y. Liu, P. Mittal, M. Mohri, R. Nock, A. Ozg[¨] ur, R. Pagh,¨ H. Qi, D. Ramage, R. Raskar, M. Raykova, D. Song, W. Song, S. U. Stich, Z. Sun, A. T. Suresh, F. Tramer, P. Vepakomma, J. Wang,` L. Xiong, Z. Xu, Q. Yang, F. X. Yu, H. Yu, and S. Zhao, “Advances and open problems in federated learning,” Foundations and Trends® _in Machine Learning, vol. 14, no. 1–2, pp. 1–210, 2021. [Online]._ [Available: http://dx.doi.org/10.1561/2200000083](http://dx.doi.org/10.1561/2200000083) [4] Y. LeCun, “The next ai revolution will not be supervised,” 2018. [5] M. Nardi, L. Valerio, and A. Passarella, “Centralised vs decentralised anomaly detection: when local and imbalanced data are beneficial,” in _Proceedings of the Third International Workshop on Learning with_ _Imbalanced Domains: Theory and Applications, ser. Proceedings of_ Machine Learning Research, N. Moniz, P. Branco, L. Torgo, N. Japkowicz, M. 
Woźniak, and S. Wang, Eds., vol. 154. PMLR, 17 Sep 2021, pp. 7–20. [Online]. Available: https://proceedings.mlr.press/v154/nardi21a.html
[6] K. Ota, M. S. Dao, V. Mezaris, and F. G. B. D. Natale, "Deep learning for mobile multimedia: A survey," vol. 13, no. 3, pp. 1–22. [Online]. Available: https://dl.acm.org/doi/10.1145/3092831
[7] K. Chahal, M. S. Grover, and K. Dey, "A hitchhiker's guide on distributed training of deep neural networks." [Online]. Available: http://arxiv.org/abs/1810.11787
[8] T. Ben-Nun and T. Hoefler, "Demystifying parallel and distributed deep learning: An in-depth concurrency analysis." [Online]. Available: http://arxiv.org/abs/1802.09941
[9] J. Verbraeken, M. Wolting, J. Katzy, J. Kloppenburg, T. Verbelen, and J. S. Rellermeyer, "A survey on distributed machine learning," vol. 53, no. 2, pp. 30:1–30:33. [Online]. Available: https://doi.org/10.1145/3377454
[10] T. Tuor, S. Wang, K. K. Leung, and K. Chan, "Distributed machine learning in coalition environments: Overview of techniques," in 2018 21st International Conference on Information Fusion (FUSION), pp. 814–821.
[11] K. Bonawitz, H. Eichner, W. Grieskamp, D. Huba, A. Ingerman, V. Ivanov, C. Kiddon, J. Konečný, S. Mazzocchi, H. B. McMahan et al., "Towards federated learning at scale: System design," arXiv preprint arXiv:1902.01046, 2019.
[12] WeBank AI Group, "Federated learning white paper v1." [Online]. Available: https://aisp-1251170195.cos.ap-hongkong.myqcloud.com/fedweb/1552917186945.pdf
[13] B. van Berlo, A. Saeed, and T. Özçelebi, "Towards federated unsupervised representation learning," in Proceedings of the Third ACM International Workshop on Edge Systems, Analytics and Networking, 2020, pp. 31–36.
[14] F. Zhang, K. Kuang, Z. You, T. Shen, J. Xiao, Y. Zhang, C. Wu, Y. Zhuang, and X. Li, "Federated unsupervised representation learning," arXiv preprint arXiv:2010.08982, 2020.
[15] E. Tzinis, J. Casebeer, Z. Wang, and P. Smaragdis, "Separate but together: Unsupervised federated learning for speech enhancement from non-IID data," arXiv preprint arXiv:2105.04727, 2021.
[16] T. Luo and S. G. Nagarajan, "Distributed anomaly detection using autoencoder neural networks in WSN for IoT," Tech. Rep., 2018.
[17] V. Mothukuri, P. Khare, R. M. Parizi, S. Pouriyeh, A. Dehghantanha, and G. Srivastava, "Federated-learning-based anomaly detection for IoT security attacks," IEEE Internet of Things Journal, vol. 9, no. 4, pp. 2545–2554, 2021.
[18] V. Chandola, A. Banerjee, and V. Kumar, "Anomaly detection: A survey," ACM Computing Surveys, vol. 41, pp. 1–58, 7 2009. [Online]. Available: https://dl.acm.org/doi/10.1145/1541880.1541882
[19] C. C. Aggarwal, Outlier Analysis. Springer International Publishing, 2017.
[20] Y. LeCun, C. Cortes, and C. Burges, "MNIST handwritten digit database," 2010.
[21] H. Xiao, K. Rasul, and R. Vollgraf, "Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms," 2017.
[22] S. Han, X. Hu, H. Huang, M. Jiang, and Y. Zhao, "ADBench: Anomaly detection benchmark," arXiv preprint arXiv:2206.09426, 2022.
[23] B. Schölkopf, R. C. Williamson, A. Smola, J.
Shawe-Taylor, and J. Platt, "Support vector method for novelty detection," Advances in Neural Information Processing Systems, vol. 12, 1999.
{ "disclaimer": "Notice: Paper or abstract available at https://arxiv.org/abs/2209.04184, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "GREEN", "url": "https://arxiv.org/pdf/2209.04184" }
2,022
[ "JournalArticle", "Conference" ]
true
2022-09-09T00:00:00
[ { "paperId": "0bff4af924788d9779041513b6894385eac51ffd", "title": "ADBench: Anomaly Detection Benchmark" }, { "paperId": "048cb4d1b7712b7499a9d7db6d24caaab5ddd9ce", "title": "Separate But Together: Unsupervised Federated Learning for Speech Enhancement from Non-IID Data" }, { "paperId": "795308ca0a281865b42b612045e5074076a82a75", "title": "Federated-Learning-Based Anomaly Detection for IoT Security Attacks" }, { "paperId": "bf2ca8386bfc6a4c65a91f4628da7c49f931e9f2", "title": "Federated unsupervised representation learning" }, { "paperId": "2baca0b49c71388abe13e7c59bda9414bab24497", "title": "Towards federated unsupervised representation learning" }, { "paperId": "f9a855ae59579d16dca6a5133cd8daddd3305582", "title": "A Survey on Distributed Machine Learning" }, { "paperId": "07912741c6c96e6ad5b2c2d6c6c3b2de5c8a271b", "title": "Advances and Open Problems in Federated Learning" }, { "paperId": "2a3d09bbdfe21418ce75d6973f71028fa9192b89", "title": "Federated Learning for Edge Networks: Resource Optimization and Incentive Mechanism" }, { "paperId": "afa778ba0ba6333e25671cfb691a4bdda13b2868", "title": "Federated Learning With Differential Privacy: Algorithms and Performance Analysis" }, { "paperId": "79cf9462a583e1889781868cbf8c31e43b36dd2f", "title": "Towards Federated Learning at Scale: System Design" }, { "paperId": "b97047c4dc75cbe8d6fc5cb3dd5a81d36458892d", "title": "APPLIED FEDERATED LEARNING: IMPROVING GOOGLE KEYBOARD QUERY SUGGESTIONS" }, { "paperId": "7143230a68aecbce640e53b6cde171699a1e4270", "title": "A Hitchhiker's Guide On Distributed Training of Deep Neural Networks" }, { "paperId": "b7ba8f3fa0c587695ed2f87b92e2b5410284413e", "title": "Distributed Machine Learning in Coalition Environments: Overview of Techniques" }, { "paperId": "5cfc112c932e38df95a0ba35009688735d1a386b", "title": "Federated Learning with Non-IID Data" }, { "paperId": "7389d201825f3581649adb15e0246367d1ed9d97", "title": "Distributed Anomaly Detection Using Autoencoder Neural Networks in WSN for IoT" }, { "paperId": "d8c09661b1bebfb690f0566167c87d64c5628d73", "title": "Demystifying Parallel and Distributed Deep Learning" }, { "paperId": "f9c602cc436a9ea2f9e7db48c77d924e09ce3c32", "title": "Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms" }, { "paperId": "80ff2c726bdb4efdaac712bfc8712cfd4bb939ad", "title": "Deep Learning for Mobile Multimedia" }, { "paperId": "7fcb90f68529cbfab49f471b54719ded7528d0ef", "title": "Federated Learning: Strategies for Improving Communication Efficiency" }, { "paperId": "d1dbf643447405984eeef098b1b320dee0b3b8a7", "title": "Communication-Efficient Learning of Deep Networks from Decentralized Data" }, { "paperId": "1bc042ec7a58ca8040ee08178433752f2c16f25e", "title": "Outlier Analysis" }, { "paperId": "71d1ac92ad36b62a04f32ed75a10ad3259a7218d", "title": "Anomaly detection: A survey" }, { "paperId": "bf206bad6a74d27b40c8ea77ee54e98e492fb7f9", "title": "Support Vector Method for Novelty Detection" }, { "paperId": "0575cd39742118cb04c9df4e262fb5d22af48af8", "title": "Centralised vs decentralised anomaly detection: when local and imbalanced data are beneficial" }, { "paperId": null, "title": "The next ai revolution will not be supervised" }, { "paperId": null, "title": "query suggestions,” arXiv preprint arXiv:1812.02903" }, { "paperId": null, "title": "Mnist handwritten digit database" } ]
12,470